* [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks
@ 2018-02-22  7:08 Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 01/17] lockdep: Demagic the return value of BFS Boqun Feng
                   ` (16 more replies)
  0 siblings, 17 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Hi Ingo and Peter,

This is V5 for recursive read lock support in lockdep. The changes since
V4 are quite trivial:

*	Fix some wording issues in patch #16 as pointed out by Randy
	Dunlap

I added the explanation of the reasoning in patch #16 in V4, which
should help in understanding this whole series. This patchset is based
on v4.16-rc2.

Changes since V3:

*	Reduce the unnecessary cost in structure lock_list as suggested
	by Peter Zijlstra

*	Add documentation for explanation of the reasoning in recursive
	read lock deadlock detection.

*	Add comments before bfs() describing the greedy search we use in
	BFS for strong dependency paths.

*	Add myself as a reviewer for LOCKING PRIMITIVES so that if there
	are any performance regressions, confusing log output or false
	positives, people know who to blame^Wask for help.

V1: https://marc.info/?l=linux-kernel&m=150393341825453
V2: https://marc.info/?l=linux-kernel&m=150468649417950
V3: https://marc.info/?l=linux-kernel&m=150637795424969
V4: https://marc.info/?l=linux-kernel&m=151550860121565


As Peter pointed out:

	https://marc.info/?l=linux-kernel&m=150349072023540

Lockdep currently has only limited support for recursive read locks, so
a deadlock case like the following cannot be detected:

	read_lock(A);
				lock(B);
	lock(B);
				write_lock(A);
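
For concreteness, here is a minimal sketch of the case above using
kernel locking primitives; the lock and function names (deadlock_a,
deadlock_b, thread_1, thread_2) are made up for illustration and are
not part of this series:

	#include <linux/spinlock.h>

	static DEFINE_RWLOCK(deadlock_a);
	static DEFINE_SPINLOCK(deadlock_b);

	static void thread_1(void)
	{
		read_lock(&deadlock_a);
		spin_lock(&deadlock_b);		/* waits for thread_2 */
		spin_unlock(&deadlock_b);
		read_unlock(&deadlock_a);
	}

	static void thread_2(void)
	{
		spin_lock(&deadlock_b);
		write_lock(&deadlock_a);	/* waits for thread_1's reader */
		write_unlock(&deadlock_a);
		spin_unlock(&deadlock_b);
	}

Today lockdep performs the dependency checks for recursive read locks
but doesn't store read-triggered dependencies, so the read_lock(A) ->
lock(B) edge above is never recorded and the cycle is missed.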

I got some inspiration from Gautham R Shenoy:

	https://lwn.net/Articles/332801/

and came up with this series.

The basic idea is:

*	Add recursive read locks into the graph

*	Classify dependencies into -(RR)->, -(NR)->, -(RN)-> and
	-(NN)->, where R stands for recursive read lock and N stands for
	other locks (i.e. non-recursive read locks and write locks).

*	Define strong dependency paths as dependency paths in which no
	-(*R)-> dependency is immediately followed by a -(R*)->
	dependency (see the sketch after this list).

*	Extend __bfs() to only traverse strong dependency paths.

*	If __bfs() finds a strong dependency cycle, then a deadlock is
	reported.
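
Below is a minimal standalone sketch (illustration only, not the kernel
implementation; the enum and function names are made up) of the
adjacency rule that makes a dependency path "strong":

	#include <stdbool.h>

	enum dep_kind { DEP_NN, DEP_NR, DEP_RN, DEP_RR };

	/* does the dependency end in a recursive read, i.e. is it -(*R)-> ? */
	static bool ends_in_r(enum dep_kind d)
	{
		return d == DEP_NR || d == DEP_RR;
	}

	/* does the dependency start with a recursive read, i.e. is it -(R*)-> ? */
	static bool starts_with_r(enum dep_kind d)
	{
		return d == DEP_RN || d == DEP_RR;
	}

	/*
	 * Two adjacent dependencies keep a path strong iff they are not
	 * -(*R)-> immediately followed by -(R*)->.
	 */
	static bool strong_link(enum dep_kind prev, enum dep_kind next)
	{
		return !(ends_in_r(prev) && starts_with_r(next));
	}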

The whole series consists of 17 patches:

1.	Do a clean up on the return value of __bfs() and its friends.

2.	Make __bfs() able to visit every dependency until a match is
	found. The old version of __bfs() could only visit each lock
	class once, and this is insufficient if we are going to add
	recursive read locks into the dependency graph.

3.	Redefine LOCK*_STATE*: LOCK*_STATE_RR now stands for recursive
	read locks only, and LOCK*_STATE stands for write locks and
	non-recursive read locks.

4-5	Extend __bfs() to be able to traverse the strong dependency
	paths after recursive read locks are added into the graph.

6-8	Adjust check_redundant(), check_noncircular() and
	check_irq_usage() to take recursive read locks into consideration.

9.	Finally add recursive read locks into the dependency graph.

10-11	Adjust lock chain cache key generation to take recursive read
	locks into consideration, and provide a test case.

12-13	Add more test cases.

14.	Revert commit d82fed752942 ("locking/lockdep/selftests: Fix
	mixed read-write ABBA tests").

15.	Reduce the extra size cost of lock_list to zero

16.	Add documentation for recursive read lock deadlock detection
	reasoning

17.	Add myself as a LOCKING PRIMITIVES reviewer.

This series passed all the lockdep selftest cases (including those I
introduced).

Tests and comments are welcome!

Regards,
Boqun


* [RFC tip/locking/lockdep v5 01/17] lockdep: Demagic the return value of BFS
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 02/17] lockdep: Make __bfs() visit every dependency until a match Boqun Feng
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

__bfs() could return four magic numbers:

	1: search succeeded, but no match was found.
	0: search succeeded, and a match was found.
	-1: search failed because the circular queue (cq) is full.
	-2: search failed because an invalid node was found.

This patch cleans things up by using an enum type for the return value
of __bfs() and its friends; this improves code readability and could
further help if we want to extend the BFS.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 136 ++++++++++++++++++++++++++++-------------------
 1 file changed, 80 insertions(+), 56 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 89b5f83f1969..9b2e318bcc81 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -984,21 +984,52 @@ static inline int get_lock_depth(struct lock_list *child)
 	}
 	return depth;
 }
+/*
+ * Return values of a bfs search:
+ *
+ * BFS_E* indicates an error
+ * BFS_R* indicates a result(match or not)
+ *
+ * BFS_EINVALIDNODE: Find an invalid node in the graph.
+ *
+ * BFS_EQUEUEFULL: The queue is full while doing the bfs.
+ *
+ * BFS_RMATCH: Find the matched node in the graph, and put that node into
+ *             *@target_entry.
+ *
+ * BFS_RNOMATCH: Haven't found the matched node and keep *@target_entry
+ *              _unchanged_.
+ */
+enum bfs_result {
+	BFS_EINVALIDNODE = -2,
+	BFS_EQUEUEFULL = -1,
+	BFS_RMATCH = 0,
+	BFS_RNOMATCH = 1,
+};
 
-static int __bfs(struct lock_list *source_entry,
-		 void *data,
-		 int (*match)(struct lock_list *entry, void *data),
-		 struct lock_list **target_entry,
-		 int forward)
+/*
+ * bfs_result < 0 means error
+ */
+
+static inline bool bfs_error(enum bfs_result res)
+{
+	return res < 0;
+}
+
+static enum bfs_result __bfs(struct lock_list *source_entry,
+			     void *data,
+			     int (*match)(struct lock_list *entry, void *data),
+			     struct lock_list **target_entry,
+			     int forward)
 {
 	struct lock_list *entry;
 	struct list_head *head;
 	struct circular_queue *cq = &lock_cq;
-	int ret = 1;
+	enum bfs_result ret = BFS_RNOMATCH;
 
 	if (match(source_entry, data)) {
 		*target_entry = source_entry;
-		ret = 0;
+		ret = BFS_RMATCH;
 		goto exit;
 	}
 
@@ -1019,7 +1050,7 @@ static int __bfs(struct lock_list *source_entry,
 		__cq_dequeue(cq, (unsigned long *)&lock);
 
 		if (!lock->class) {
-			ret = -2;
+			ret = BFS_EINVALIDNODE;
 			goto exit;
 		}
 
@@ -1036,12 +1067,12 @@ static int __bfs(struct lock_list *source_entry,
 				mark_lock_accessed(entry, lock);
 				if (match(entry, data)) {
 					*target_entry = entry;
-					ret = 0;
+					ret = BFS_RMATCH;
 					goto exit;
 				}
 
 				if (__cq_enqueue(cq, (unsigned long)entry)) {
-					ret = -1;
+					ret = BFS_EQUEUEFULL;
 					goto exit;
 				}
 				cq_depth = __cq_get_elem_count(cq);
@@ -1054,19 +1085,21 @@ static int __bfs(struct lock_list *source_entry,
 	return ret;
 }
 
-static inline int __bfs_forwards(struct lock_list *src_entry,
-			void *data,
-			int (*match)(struct lock_list *entry, void *data),
-			struct lock_list **target_entry)
+static inline enum bfs_result
+__bfs_forwards(struct lock_list *src_entry,
+	       void *data,
+	       int (*match)(struct lock_list *entry, void *data),
+	       struct lock_list **target_entry)
 {
 	return __bfs(src_entry, data, match, target_entry, 1);
 
 }
 
-static inline int __bfs_backwards(struct lock_list *src_entry,
-			void *data,
-			int (*match)(struct lock_list *entry, void *data),
-			struct lock_list **target_entry)
+static inline enum bfs_result
+__bfs_backwards(struct lock_list *src_entry,
+		void *data,
+		int (*match)(struct lock_list *entry, void *data),
+		struct lock_list **target_entry)
 {
 	return __bfs(src_entry, data, match, target_entry, 0);
 
@@ -1299,13 +1332,13 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 
 /*
  * Prove that the dependency graph starting at <entry> can not
- * lead to <target>. Print an error and return 0 if it does.
+ * lead to <target>. Print an error and return BFS_RMATCH if it does.
  */
-static noinline int
+static noinline enum bfs_result
 check_noncircular(struct lock_list *root, struct lock_class *target,
-		struct lock_list **target_entry)
+		  struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_cyclic_checks);
 
@@ -1314,11 +1347,11 @@ check_noncircular(struct lock_list *root, struct lock_class *target,
 	return result;
 }
 
-static noinline int
+static noinline enum bfs_result
 check_redundant(struct lock_list *root, struct lock_class *target,
 		struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_redundant_checks);
 
@@ -1347,15 +1380,12 @@ static inline int usage_match(struct lock_list *entry, void *bit)
  *
  * Return 0 if such a node exists in the subgraph, and put that node
  * into *@target_entry.
- *
- * Return 1 otherwise and keep *@target_entry unchanged.
- * Return <0 on error.
  */
-static int
+static enum bfs_result
 find_usage_forwards(struct lock_list *root, enum lock_usage_bit bit,
 			struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_find_usage_forwards_checks);
 
@@ -1367,18 +1397,12 @@ find_usage_forwards(struct lock_list *root, enum lock_usage_bit bit,
 /*
  * Find a node in the backwards-direction dependency sub-graph starting
  * at @root->class that matches @bit.
- *
- * Return 0 if such a node exists in the subgraph, and put that node
- * into *@target_entry.
- *
- * Return 1 otherwise and keep *@target_entry unchanged.
- * Return <0 on error.
  */
-static int
+static enum bfs_result
 find_usage_backwards(struct lock_list *root, enum lock_usage_bit bit,
 			struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_find_usage_backwards_checks);
 
@@ -1586,18 +1610,18 @@ check_usage(struct task_struct *curr, struct held_lock *prev,
 
 	this.class = hlock_class(prev);
 	ret = find_usage_backwards(&this, bit_backwards, &target_entry);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
+	if (ret == BFS_RNOMATCH)
+		return 1;
 
 	that.parent = NULL;
 	that.class = hlock_class(next);
 	ret = find_usage_forwards(&that, bit_forwards, &target_entry1);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
+	if (ret == BFS_RNOMATCH)
+		return 1;
 
 	return print_bad_irq_dependency(curr, &this, &that,
 			target_entry, target_entry1,
@@ -1834,10 +1858,10 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	       struct held_lock *next, int distance, struct stack_trace *trace,
 	       int (*save)(struct stack_trace *trace))
 {
-	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list *entry;
+	enum bfs_result ret;
 	struct lock_list this;
-	int ret;
+	struct lock_list *uninitialized_var(target_entry);
 
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
@@ -1851,7 +1875,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	this.class = hlock_class(next);
 	this.parent = NULL;
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
-	if (unlikely(!ret)) {
+	if (unlikely(ret == BFS_RMATCH)) {
 		if (!trace->entries) {
 			/*
 			 * If @save fails here, the printing might trigger
@@ -1862,7 +1886,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 		}
 		return print_circular_bug(&this, target_entry, next, prev, trace);
 	}
-	else if (unlikely(ret < 0))
+	else if (unlikely(bfs_error(ret)))
 		return print_bfs_bug(ret);
 
 	if (!check_prev_add_irq(curr, prev, next))
@@ -1900,11 +1924,11 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	this.class = hlock_class(prev);
 	this.parent = NULL;
 	ret = check_redundant(&this, hlock_class(next), &target_entry);
-	if (!ret) {
+	if (ret == BFS_RMATCH) {
 		debug_atomic_inc(nr_redundant);
 		return 2;
 	}
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
 
 
@@ -2633,16 +2657,16 @@ static int
 check_usage_forwards(struct task_struct *curr, struct held_lock *this,
 		     enum lock_usage_bit bit, const char *irqclass)
 {
-	int ret;
+	enum bfs_result ret;
 	struct lock_list root;
 	struct lock_list *uninitialized_var(target_entry);
 
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_forwards(&root, bit, &target_entry);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
+	if (ret == BFS_RNOMATCH)
 		return ret;
 
 	return print_irq_inversion_bug(curr, &root, target_entry,
@@ -2657,17 +2681,17 @@ static int
 check_usage_backwards(struct task_struct *curr, struct held_lock *this,
 		      enum lock_usage_bit bit, const char *irqclass)
 {
-	int ret;
+	enum bfs_result ret;
 	struct lock_list root;
 	struct lock_list *uninitialized_var(target_entry);
 
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_backwards(&root, bit, &target_entry);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
+	if (ret == BFS_RNOMATCH)
+		return 1;
 
 	return print_irq_inversion_bug(curr, &root, target_entry,
 					this, 0, irqclass);
-- 
2.16.1


* [RFC tip/locking/lockdep v5 02/17] lockdep: Make __bfs() visit every dependency until a match
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 01/17] lockdep: Demagic the return value of BFS Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 03/17] lockdep: Redefine LOCK_*_STATE* bits Boqun Feng
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Currently, __bfs() will do a breadth-first search in the dependency
graph and visit each lock class in the graph exactly once, so for
example, in the following graph:

	A ---------> B
	|            ^
	|            |
	+----------> C

a __bfs() call starting at A will visit B through dependency A -> B and
visit C through dependency A -> C, and that's it; IOW, __bfs() will not
visit dependency C -> B.

This is OK for now, as we only have strong dependencies in the
dependency graph, so whenever there is a traverse path from A to B in
__bfs(), it means A has a strong dependency to B (IOW, B depends on A
strongly). So there is no need to visit all dependencies in the graph.

However, as we are going to add recursive-read locks into the dependency
graph, not all paths will mean strong dependencies afterwards. In the
same example above, dependency A -> B may be a weak dependency while the
traverse A -> C -> B is a strong dependency path. With the old way of
__bfs() (i.e. visiting every lock class exactly once), we would miss the
strong dependency path, which would result in failing to find a
deadlock. To cure this for the future, we need to find a way for
__bfs() to visit each dependency, rather than each class, exactly once
in the search until we find a match.

The solution is simple:

We used to mark lock_class::lockdep_dependency_gen_id to indicate that a
class has been visited in __bfs(); now we change the semantics a little
bit: we now mark lock_class::lockdep_dependency_gen_id to indicate that
_all the dependencies_ in its locks_{after,before} have been visited in
the __bfs() (note we only take one direction in a __bfs() search). In
this way, every dependency is guaranteed to be visited until we find a
match.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 61 +++++++++++++++++++++++++++---------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 9b2e318bcc81..c80f8276ceaa 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -948,24 +948,20 @@ static inline unsigned int  __cq_get_elem_count(struct circular_queue *cq)
 	return (cq->rear - cq->front) & CQ_MASK;
 }
 
-static inline void mark_lock_accessed(struct lock_list *lock,
-					struct lock_list *parent)
+static inline void mark_lock_list_accessed(struct lock_class *class)
 {
-	unsigned long nr;
+	class->dep_gen_id = lockdep_dependency_gen_id;
+}
 
-	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+static inline void visit_lock_entry(struct lock_list *lock,
+				    struct lock_list *parent)
+{
 	lock->parent = parent;
-	lock->class->dep_gen_id = lockdep_dependency_gen_id;
 }
 
-static inline unsigned long lock_accessed(struct lock_list *lock)
+static inline unsigned long lock_list_accessed(struct lock_class *class)
 {
-	unsigned long nr;
-
-	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
-	return lock->class->dep_gen_id == lockdep_dependency_gen_id;
+	return class->dep_gen_id == lockdep_dependency_gen_id;
 }
 
 static inline struct lock_list *get_lock_parent(struct lock_list *child)
@@ -1054,6 +1050,18 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 			goto exit;
 		}
 
+		/*
+		 * If we have visited all the dependencies from this @lock to
+		 * others (IOW, if we have visited all lock_list entries in
+		 * @lock->class->locks_{after,before}), we skip; otherwise go
+		 * and visit all the dependencies in the list and mark this
+		 * list accessed.
+		 */
+		if (lock_list_accessed(lock->class))
+			continue;
+		else
+			mark_lock_list_accessed(lock->class);
+
 		if (forward)
 			head = &lock->class->locks_after;
 		else
@@ -1062,23 +1070,22 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
 		list_for_each_entry_rcu(entry, head, entry) {
-			if (!lock_accessed(entry)) {
-				unsigned int cq_depth;
-				mark_lock_accessed(entry, lock);
-				if (match(entry, data)) {
-					*target_entry = entry;
-					ret = BFS_RMATCH;
-					goto exit;
-				}
+			unsigned int cq_depth;
 
-				if (__cq_enqueue(cq, (unsigned long)entry)) {
-					ret = BFS_EQUEUEFULL;
-					goto exit;
-				}
-				cq_depth = __cq_get_elem_count(cq);
-				if (max_bfs_queue_depth < cq_depth)
-					max_bfs_queue_depth = cq_depth;
+			visit_lock_entry(entry, lock);
+			if (match(entry, data)) {
+				*target_entry = entry;
+				ret = BFS_RMATCH;
+				goto exit;
+			}
+
+			if (__cq_enqueue(cq, (unsigned long)entry)) {
+				ret = BFS_EQUEUEFULL;
+				goto exit;
 			}
+			cq_depth = __cq_get_elem_count(cq);
+			if (max_bfs_queue_depth < cq_depth)
+				max_bfs_queue_depth = cq_depth;
 		}
 	}
 exit:
-- 
2.16.1


* [RFC tip/locking/lockdep v5 03/17] lockdep: Redefine LOCK_*_STATE* bits
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 01/17] lockdep: Demagic the return value of BFS Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 02/17] lockdep: Make __bfs() visit every dependency until a match Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep Boqun Feng
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng,
	Jonathan Corbet, open list:DOCUMENTATION

There are three types of lock acquisitions: write, non-recursive read
and recursive read. Among these, write locks and non-recursive read
locks have no difference from the viewpoint of deadlock detection,
because a write acquisition of the corresponding lock on an independent
CPU or task makes a non-recursive read lock act as a write lock in the
sense of deadlock. So we can treat them as the same type (named
"non-recursive lock") in lockdep.

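As an illustration (this sketch is not part of the patch and the names
are made up): with a fair reader-writer lock such as an rwsem, a pending
writer blocks new readers, so the non-recursive reader in t1() below can
take part in an ABBA deadlock exactly like a writer would:

	#include <linux/rwsem.h>
	#include <linux/mutex.h>

	static DECLARE_RWSEM(rwsem_a);
	static DEFINE_MUTEX(mutex_b);

	static void t0(void)
	{
		down_read(&rwsem_a);
		mutex_lock(&mutex_b);	/* waits for t1 */
		mutex_unlock(&mutex_b);
		up_read(&rwsem_a);
	}

	static void t1(void)
	{
		mutex_lock(&mutex_b);
		down_read(&rwsem_a);	/* queues behind t2's pending writer */
		up_read(&rwsem_a);
		mutex_unlock(&mutex_b);
	}

	static void t2(void)
	{
		down_write(&rwsem_a);	/* waits for t0's reader */
		up_write(&rwsem_a);
	}

t0 holds A for read and waits for B, t1 holds B and waits for A for
read but sits behind t2's pending down_write(), which in turn waits for
t0: the three tasks deadlock even though t0 and t1 only ever read A.
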
As in the irq lock inversion detection (safe->unsafe deadlock
detection), we used to distinguish write locks from read locks
(non-recursive and recursive ones). Such a classification can be
improved, as a non-recursive read lock behaves the same as a write lock,
so this patch redefines the meanings of LOCK_{USED_IN,ENABLED}_STATE*.

old:
	LOCK_* : stands for write lock
	LOCK_*_READ: stands for read lock (non-recursive and recursive)
new:
	LOCK_* : stands for non-recursive (write lock and non-recursive
	read lock)
	LOCK_*_RR: stands for recursive read lock

Such a change is needed for a future improvement on recursive read
related irq inversion deadlock detection.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 Documentation/locking/lockdep-design.txt |  6 +++---
 kernel/locking/lockdep.c                 | 28 ++++++++++++++--------------
 kernel/locking/lockdep_internals.h       | 16 ++++++++--------
 kernel/locking/lockdep_proc.c            | 12 ++++++------
 4 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/Documentation/locking/lockdep-design.txt b/Documentation/locking/lockdep-design.txt
index 9de1c158d44c..382bc25589c2 100644
--- a/Documentation/locking/lockdep-design.txt
+++ b/Documentation/locking/lockdep-design.txt
@@ -30,9 +30,9 @@ State
 The validator tracks lock-class usage history into 4n + 1 separate state bits:
 
 - 'ever held in STATE context'
-- 'ever held as readlock in STATE context'
+- 'ever held as recursive readlock in STATE context'
 - 'ever held with STATE enabled'
-- 'ever held as readlock with STATE enabled'
+- 'ever held as recursive readlock with STATE enabled'
 
 Where STATE can be either one of (kernel/locking/lockdep_states.h)
  - hardirq
@@ -51,7 +51,7 @@ locking error messages, inside curlies. A contrived example:
     (&sio_locks[i].lock){-.-...}, at: [<c02867fd>] mutex_lock+0x21/0x24
 
 
-The bit position indicates STATE, STATE-read, for each of the states listed
+The bit position indicates STATE, STATE-RR, for each of the states listed
 above, and the character displayed in each indicates:
 
    '.'  acquired while irqs disabled and not in irq context
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index c80f8276ceaa..5e6bf8d6954d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -448,10 +448,10 @@ DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);
  */
 
 #define __USAGE(__STATE)						\
-	[LOCK_USED_IN_##__STATE] = "IN-"__stringify(__STATE)"-W",	\
-	[LOCK_ENABLED_##__STATE] = __stringify(__STATE)"-ON-W",		\
-	[LOCK_USED_IN_##__STATE##_READ] = "IN-"__stringify(__STATE)"-R",\
-	[LOCK_ENABLED_##__STATE##_READ] = __stringify(__STATE)"-ON-R",
+	[LOCK_USED_IN_##__STATE] = "IN-"__stringify(__STATE),		\
+	[LOCK_ENABLED_##__STATE] = __stringify(__STATE)"-ON",		\
+	[LOCK_USED_IN_##__STATE##_RR] = "IN-"__stringify(__STATE)"-RR",	\
+	[LOCK_ENABLED_##__STATE##_RR] = __stringify(__STATE)"-ON-RR",
 
 static const char *usage_str[] =
 {
@@ -492,7 +492,7 @@ void get_usage_chars(struct lock_class *class, char usage[LOCK_USAGE_CHARS])
 
 #define LOCKDEP_STATE(__STATE) 						\
 	usage[i++] = get_usage_char(class, LOCK_USED_IN_##__STATE);	\
-	usage[i++] = get_usage_char(class, LOCK_USED_IN_##__STATE##_READ);
+	usage[i++] = get_usage_char(class, LOCK_USED_IN_##__STATE##_RR);
 #include "lockdep_states.h"
 #undef LOCKDEP_STATE
 
@@ -1645,7 +1645,7 @@ static const char *state_names[] = {
 
 static const char *state_rnames[] = {
 #define LOCKDEP_STATE(__STATE) \
-	__stringify(__STATE)"-READ",
+	__stringify(__STATE)"-RR",
 #include "lockdep_states.h"
 #undef LOCKDEP_STATE
 };
@@ -3039,14 +3039,14 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
 	 * mark the lock as used in these contexts:
 	 */
 	if (!hlock->trylock) {
-		if (hlock->read) {
+		if (hlock->read == 2) {
 			if (curr->hardirq_context)
 				if (!mark_lock(curr, hlock,
-						LOCK_USED_IN_HARDIRQ_READ))
+						LOCK_USED_IN_HARDIRQ_RR))
 					return 0;
 			if (curr->softirq_context)
 				if (!mark_lock(curr, hlock,
-						LOCK_USED_IN_SOFTIRQ_READ))
+						LOCK_USED_IN_SOFTIRQ_RR))
 					return 0;
 		} else {
 			if (curr->hardirq_context)
@@ -3058,13 +3058,13 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
 		}
 	}
 	if (!hlock->hardirqs_off) {
-		if (hlock->read) {
+		if (hlock->read == 2) {
 			if (!mark_lock(curr, hlock,
-					LOCK_ENABLED_HARDIRQ_READ))
+					LOCK_ENABLED_HARDIRQ_RR))
 				return 0;
 			if (curr->softirqs_enabled)
 				if (!mark_lock(curr, hlock,
-						LOCK_ENABLED_SOFTIRQ_READ))
+						LOCK_ENABLED_SOFTIRQ_RR))
 					return 0;
 		} else {
 			if (!mark_lock(curr, hlock,
@@ -3170,9 +3170,9 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	switch (new_bit) {
 #define LOCKDEP_STATE(__STATE)			\
 	case LOCK_USED_IN_##__STATE:		\
-	case LOCK_USED_IN_##__STATE##_READ:	\
+	case LOCK_USED_IN_##__STATE##_RR:	\
 	case LOCK_ENABLED_##__STATE:		\
-	case LOCK_ENABLED_##__STATE##_READ:
+	case LOCK_ENABLED_##__STATE##_RR:
 #include "lockdep_states.h"
 #undef LOCKDEP_STATE
 		ret = mark_lock_irq(curr, this, new_bit);
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index d459d624ba2a..93002d337936 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -13,9 +13,9 @@
 enum lock_usage_bit {
 #define LOCKDEP_STATE(__STATE)		\
 	LOCK_USED_IN_##__STATE,		\
-	LOCK_USED_IN_##__STATE##_READ,	\
+	LOCK_USED_IN_##__STATE##_RR,	\
 	LOCK_ENABLED_##__STATE,		\
-	LOCK_ENABLED_##__STATE##_READ,
+	LOCK_ENABLED_##__STATE##_RR,
 #include "lockdep_states.h"
 #undef LOCKDEP_STATE
 	LOCK_USED,
@@ -30,9 +30,9 @@ enum lock_usage_bit {
 enum {
 #define LOCKDEP_STATE(__STATE)						\
 	__LOCKF(USED_IN_##__STATE)					\
-	__LOCKF(USED_IN_##__STATE##_READ)				\
+	__LOCKF(USED_IN_##__STATE##_RR)				\
 	__LOCKF(ENABLED_##__STATE)					\
-	__LOCKF(ENABLED_##__STATE##_READ)
+	__LOCKF(ENABLED_##__STATE##_RR)
 #include "lockdep_states.h"
 #undef LOCKDEP_STATE
 	__LOCKF(USED)
@@ -41,10 +41,10 @@ enum {
 #define LOCKF_ENABLED_IRQ (LOCKF_ENABLED_HARDIRQ | LOCKF_ENABLED_SOFTIRQ)
 #define LOCKF_USED_IN_IRQ (LOCKF_USED_IN_HARDIRQ | LOCKF_USED_IN_SOFTIRQ)
 
-#define LOCKF_ENABLED_IRQ_READ \
-		(LOCKF_ENABLED_HARDIRQ_READ | LOCKF_ENABLED_SOFTIRQ_READ)
-#define LOCKF_USED_IN_IRQ_READ \
-		(LOCKF_USED_IN_HARDIRQ_READ | LOCKF_USED_IN_SOFTIRQ_READ)
+#define LOCKF_ENABLED_IRQ_RR \
+		(LOCKF_ENABLED_HARDIRQ_RR | LOCKF_ENABLED_SOFTIRQ_RR)
+#define LOCKF_USED_IN_IRQ_RR \
+		(LOCKF_USED_IN_HARDIRQ_RR | LOCKF_USED_IN_SOFTIRQ_RR)
 
 /*
  * CONFIG_LOCKDEP_SMALL is defined for sparc. Sparc requires .text,
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index ad69bbc9bd28..630a6bc6e24c 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -252,17 +252,17 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 			nr_hardirq_safe++;
 		if (class->usage_mask & LOCKF_ENABLED_HARDIRQ)
 			nr_hardirq_unsafe++;
-		if (class->usage_mask & LOCKF_USED_IN_IRQ_READ)
+		if (class->usage_mask & LOCKF_USED_IN_IRQ_RR)
 			nr_irq_read_safe++;
-		if (class->usage_mask & LOCKF_ENABLED_IRQ_READ)
+		if (class->usage_mask & LOCKF_ENABLED_IRQ_RR)
 			nr_irq_read_unsafe++;
-		if (class->usage_mask & LOCKF_USED_IN_SOFTIRQ_READ)
+		if (class->usage_mask & LOCKF_USED_IN_SOFTIRQ_RR)
 			nr_softirq_read_safe++;
-		if (class->usage_mask & LOCKF_ENABLED_SOFTIRQ_READ)
+		if (class->usage_mask & LOCKF_ENABLED_SOFTIRQ_RR)
 			nr_softirq_read_unsafe++;
-		if (class->usage_mask & LOCKF_USED_IN_HARDIRQ_READ)
+		if (class->usage_mask & LOCKF_USED_IN_HARDIRQ_RR)
 			nr_hardirq_read_safe++;
-		if (class->usage_mask & LOCKF_ENABLED_HARDIRQ_READ)
+		if (class->usage_mask & LOCKF_ENABLED_HARDIRQ_RR)
 			nr_hardirq_read_unsafe++;
 
 #ifdef CONFIG_PROVE_LOCKING
-- 
2.16.1


* [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (2 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 03/17] lockdep: Redefine LOCK_*_STATE* bits Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-23 11:55   ` Peter Zijlstra
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies Boqun Feng
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

To add recursive read locks into the dependency graph, we need to store
the types of dependencies for the BFS later. There are four kinds of
dependencies:

*	Non-recursive -> Non-recursive dependencies(NN)
	e.g. write_lock(prev) -> write_lock(next), we can also write
	this as "prev --(NN)--> next".

*	Recursive -> Non-recursive dependencies(RN)
	e.g. read_lock(prev) -> write_lock(next), we can also write this
	as "prev --(RN)--> next".

*	Non-recursive -> recursive dependencies(NR)
	e.g. write_lock(prev) -> read_lock(next), we can also write this
	as "prev --(NR)--> next".

*	Recursive -> recursive dependencies(RR)
	e.g. read_lock(prev) -> read_lock(next), we can also write this
	as "prev --(RR)--> next".

Given a pair of locks, all four kinds of dependencies could exist
between them, so we use 4 bits, one for the presence of each kind
(stored in lock_list::dep). Helper functions and macros are also
introduced to convert a pair of locks into a ::dep bit and to maintain
the addition of different kinds of dependencies.
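
For illustration, here is a standalone (userspace) sketch of the mapping
the new helpers implement, where a ->read value of 2 means a recursive
read acquisition; this mirrors the patch's __calc_dep_bit() but is not
the kernel code itself:

	#include <assert.h>

	#define DEP_NN_BIT 0
	#define DEP_RN_BIT 1
	#define DEP_NR_BIT 2
	#define DEP_RR_BIT 3

	static unsigned int calc_dep_bit(int prev_read, int next_read)
	{
		if (prev_read == 2 && next_read != 2)
			return DEP_RN_BIT;
		if (prev_read != 2 && next_read == 2)
			return DEP_NR_BIT;
		if (prev_read == 2 && next_read == 2)
			return DEP_RR_BIT;
		return DEP_NN_BIT;
	}

	int main(void)
	{
		assert(calc_dep_bit(0, 0) == DEP_NN_BIT); /* write -> write: NN */
		assert(calc_dep_bit(2, 0) == DEP_RN_BIT); /* read_lock -> write: RN */
		assert(calc_dep_bit(0, 2) == DEP_NR_BIT); /* write -> read_lock: NR */
		assert(calc_dep_bit(2, 2) == DEP_RR_BIT); /* read_lock -> read_lock: RR */
		return 0;
	}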

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/lockdep.h  |  2 ++
 kernel/locking/lockdep.c | 48 +++++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 6fc77d4dbdcd..ab1e5a7d8864 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -187,6 +187,8 @@ struct lock_list {
 	struct lock_class		*class;
 	struct stack_trace		trace;
 	int				distance;
+	/* bitmap of different dependencies from head to this */
+	u16				dep;
 
 	/*
 	 * The parent field is used to implement breadth-first search, and the
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5e6bf8d6954d..acd25bfc336d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -859,7 +859,7 @@ static struct lock_list *alloc_list_entry(void)
  * Add a new dependency to the head of the list:
  */
 static int add_lock_to_list(struct lock_class *this, struct list_head *head,
-			    unsigned long ip, int distance,
+			    unsigned long ip, int distance, unsigned int dep,
 			    struct stack_trace *trace)
 {
 	struct lock_list *entry;
@@ -872,6 +872,7 @@ static int add_lock_to_list(struct lock_class *this, struct list_head *head,
 		return 0;
 
 	entry->class = this;
+	entry->dep = dep;
 	entry->distance = distance;
 	entry->trace = *trace;
 	/*
@@ -1012,6 +1013,33 @@ static inline bool bfs_error(enum bfs_result res)
 	return res < 0;
 }
 
+#define DEP_NN_BIT 0
+#define DEP_RN_BIT 1
+#define DEP_NR_BIT 2
+#define DEP_RR_BIT 3
+
+#define DEP_NN_MASK (1U << (DEP_NN_BIT))
+#define DEP_RN_MASK (1U << (DEP_RN_BIT))
+#define DEP_NR_MASK (1U << (DEP_NR_BIT))
+#define DEP_RR_MASK (1U << (DEP_RR_BIT))
+
+static inline unsigned int __calc_dep_bit(int prev, int next)
+{
+	if (prev == 2 && next != 2)
+		return DEP_RN_BIT;
+	if (prev != 2 && next == 2)
+		return DEP_NR_BIT;
+	if (prev == 2 && next == 2)
+		return DEP_RR_BIT;
+	else
+		return DEP_NN_BIT;
+}
+
+static inline unsigned int calc_dep(int prev, int next)
+{
+	return 1U << __calc_dep_bit(prev, next);
+}
+
 static enum bfs_result __bfs(struct lock_list *source_entry,
 			     void *data,
 			     int (*match)(struct lock_list *entry, void *data),
@@ -1921,6 +1949,16 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 		if (entry->class == hlock_class(next)) {
 			if (distance == 1)
 				entry->distance = 1;
+			entry->dep |= calc_dep(prev->read, next->read);
+		}
+	}
+
+	/* Also, update the reverse dependency in @next's ->locks_before list */
+	list_for_each_entry(entry, &hlock_class(next)->locks_before, entry) {
+		if (entry->class == hlock_class(prev)) {
+			if (distance == 1)
+				entry->distance = 1;
+			entry->dep |= calc_dep(next->read, prev->read);
 			return 1;
 		}
 	}
@@ -1948,14 +1986,18 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 */
 	ret = add_lock_to_list(hlock_class(next),
 			       &hlock_class(prev)->locks_after,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance,
+			       calc_dep(prev->read, next->read),
+			       trace);
 
 	if (!ret)
 		return 0;
 
 	ret = add_lock_to_list(hlock_class(prev),
 			       &hlock_class(next)->locks_before,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance,
+			       calc_dep(next->read, prev->read),
+			       trace);
 	if (!ret)
 		return 0;
 
-- 
2.16.1


* [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (3 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22 14:26   ` Peter Zijlstra
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular() Boqun Feng
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Now we have four kinds of dependencies in the dependency graph, and not
all the paths carry strong dependencies, for example:

	Given lock A, B, C, if we have:

	CPU1			CPU2
	=============		==============
	write_lock(A);		read_lock(B);
	read_lock(B);		write_lock(C);

	then we have dependencies A--(NR)-->B and B--(RN)-->C (NR and
	RN indicate the dependency kind). A actually doesn't have a
	strong dependency to C (IOW, C doesn't depend on A). To see
	this, let's say we have a third CPU3 doing:

	CPU3:
	=============
	write_lock(C);
	write_lock(A);

	, this is not a deadlock. However, if we change the read_lock()
	on CPU2 to a write_lock(), then it is a deadlock.

	So A --(NR)--> B --(RN)--> C is not a strong dependency path but
	A --(NR)--> B --(NN)-->C is a strong dependency path.

We can generalize this as follows: if a path of dependencies doesn't
have two adjacent dependencies of the form (*R)--L-->(R*), where L is
some lock, it is a strong dependency path; otherwise it's not.

Now our mission is to make __bfs() traverse only the strong dependency
paths, which is simple: we record whether we have a -(*R)-> dependency
at the current tail of the path in lock_list::is_rr, and whenever we
pick a dependency in the traversal, we 1) make sure we don't pick a
-(R*)-> dependency if our current tail is -(*R)-> and 2) greedily pick a
-(*N)-> dependency whenever possible.

With this extension to __bfs(), we now need to initialize the root of
__bfs() properly (with a correct ->is_rr). To do so, we introduce some
helper functions, which also clean up the __bfs() root initialization
code a little bit.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/lockdep.h  |   2 +
 kernel/locking/lockdep.c | 116 ++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 101 insertions(+), 17 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index ab1e5a7d8864..a1f91f8680bd 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -189,6 +189,8 @@ struct lock_list {
 	int				distance;
 	/* bitmap of different dependencies from head to this */
 	u16				dep;
+	/* used by BFS to record whether this is picked as a recursive read */
+	u16				is_rr;
 
 	/*
 	 * The parent field is used to implement breadth-first search, and the
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index acd25bfc336d..07bcfaac6fe2 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1040,6 +1040,89 @@ static inline unsigned int calc_dep(int prev, int next)
 	return 1U << __calc_dep_bit(prev, next);
 }
 
+/*
+ * return -1 if no proper dependency could be picked
+ * return 0 if a -(*N)-> dependency could be picked
+ * return 1 if only a -(*R)-> dependency could be picked
+ *
+ * N: non-recursive lock
+ * R: recursive read lock
+ */
+static inline int pick_dep(u16 is_rr, u16 cap_dep)
+{
+	if (is_rr) { /* could only pick -(N*)-> */
+		if (cap_dep & DEP_NN_MASK)
+			return 0;
+		else if (cap_dep & DEP_NR_MASK)
+			return 1;
+		else
+			return -1;
+	} else {
+		if (cap_dep & DEP_NN_MASK || cap_dep & DEP_RN_MASK)
+			return 0;
+		else
+			return 1;
+	}
+}
+
+/*
+ * Initialize a lock_list entry @lock belonging to @class as the root for a BFS
+ * search.
+ */
+static inline void __bfs_init_root(struct lock_list *lock,
+				   struct lock_class *class)
+{
+	lock->class = class;
+	lock->parent = NULL;
+	lock->is_rr = 0;
+}
+
+/*
+ * Initialize a lock_list entry @lock based on a lock acquisition @hlock as the
+ * root for a BFS search.
+ */
+static inline void bfs_init_root(struct lock_list *lock,
+				 struct held_lock *hlock)
+{
+	__bfs_init_root(lock, hlock_class(hlock));
+	lock->is_rr = (hlock->read == 2);
+}
+
+/*
+ * Breadth-First Search to find a strong path in the dependency graph.
+ *
+ * @source_entry: the source of the path we are searching for.
+ * @data: data used for the second parameter of @match function
+ * @match: match function for the search
+ * @target_entry: pointer to the target of a matched path
+ * @forward: direction of path, the lockdep dependency forward or backward
+ *
+ * We may have multiple edges(considering different kinds of dependencies, e.g.
+ * -(NR)-> and -(RN)->) between two nodes in the dependency graph, which may
+ * undermine the normal BFS algorithm, however, we are lucky because: in the
+ * search, for each pair of adjacent nodes, we can pick the edge greedily:
+ *
+ * 	Say we have nodes L0, L1 and L2, and we already pick edge from L0 to
+ * 	L1, and we are going to pick the edge from L1 to L2, because we are
+ * 	picking edges to *strong* path, that means if we pick -(*R)-> for L0 to
+ * 	L1 (i.e. we pick L0 -(*R)-> L1), we can not pick any L1 -(R*)-> L2.
+ *
+ * 	And if we pick L0 -(NR)-> L1, and we have edge 1) L1-(NR)->L2 and 2)
+ * 	L1-(NN)->L2, we can greedily pick edge 2) for the path searching,
+ * 	because a) if ...->L0-(NR)->L1-(NR)->L2->... could cause a deadlock,
+ * 	then so does ...->L0-(NR)->L1-(NN)->L2->... and b) picking 2) means we
+ * 	can pick any kind of edge for L2 to the next node, so we can search
+ * 	more deadlock cases than picking 1).
+ *
+ * So we have two rules of picking edges in BFS:
+ *
+ * "strong": if the previous edge we pick is -(*R)->, we must pick -(N*)->,
+ *           otherwise, we can pick any kind.
+ * "greedy": if we can pick -(*R)-> or -(*N)-> (if both of them satisfy the
+ *           "strong" rule), we always pick -(*N)-> ones.
+ *
+ * And that's how pick_dep() is implemented.
+ */
 static enum bfs_result __bfs(struct lock_list *source_entry,
 			     void *data,
 			     int (*match)(struct lock_list *entry, void *data),
@@ -1050,6 +1133,7 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 	struct list_head *head;
 	struct circular_queue *cq = &lock_cq;
 	enum bfs_result ret = BFS_RNOMATCH;
+	int is_rr, next_is_rr;
 
 	if (match(source_entry, data)) {
 		*target_entry = source_entry;
@@ -1095,11 +1179,18 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 		else
 			head = &lock->class->locks_before;
 
+		is_rr = lock->is_rr;
+
 		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
 		list_for_each_entry_rcu(entry, head, entry) {
 			unsigned int cq_depth;
 
+			next_is_rr = pick_dep(is_rr, entry->dep);
+			if (next_is_rr < 0)
+				continue;
+			entry->is_rr = next_is_rr;
+
 			visit_lock_entry(entry, lock);
 			if (match(entry, data)) {
 				*target_entry = entry;
@@ -1326,8 +1417,7 @@ unsigned long lockdep_count_forward_deps(struct lock_class *class)
 	unsigned long ret, flags;
 	struct lock_list this;
 
-	this.parent = NULL;
-	this.class = class;
+	__bfs_init_root(&this, class);
 
 	local_irq_save(flags);
 	arch_spin_lock(&lockdep_lock);
@@ -1353,8 +1443,7 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 	unsigned long ret, flags;
 	struct lock_list this;
 
-	this.parent = NULL;
-	this.class = class;
+	__bfs_init_root(&this, class);
 
 	local_irq_save(flags);
 	arch_spin_lock(&lockdep_lock);
@@ -1641,17 +1730,14 @@ check_usage(struct task_struct *curr, struct held_lock *prev,
 	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list *uninitialized_var(target_entry1);
 
-	this.parent = NULL;
-
-	this.class = hlock_class(prev);
+	bfs_init_root(&this, prev);
 	ret = find_usage_backwards(&this, bit_backwards, &target_entry);
 	if (bfs_error(ret))
 		return print_bfs_bug(ret);
 	if (ret == BFS_RNOMATCH)
 		return 1;
 
-	that.parent = NULL;
-	that.class = hlock_class(next);
+	bfs_init_root(&that, next);
 	ret = find_usage_forwards(&that, bit_forwards, &target_entry1);
 	if (bfs_error(ret))
 		return print_bfs_bug(ret);
@@ -1907,8 +1993,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 * We are using global variables to control the recursion, to
 	 * keep the stackframe size of the recursive functions low:
 	 */
-	this.class = hlock_class(next);
-	this.parent = NULL;
+	bfs_init_root(&this, next);
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
 	if (unlikely(ret == BFS_RMATCH)) {
 		if (!trace->entries) {
@@ -1966,8 +2051,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	/*
 	 * Is the <prev> -> <next> link redundant?
 	 */
-	this.class = hlock_class(prev);
-	this.parent = NULL;
+	bfs_init_root(&this, prev);
 	ret = check_redundant(&this, hlock_class(next), &target_entry);
 	if (ret == BFS_RMATCH) {
 		debug_atomic_inc(nr_redundant);
@@ -2710,8 +2794,7 @@ check_usage_forwards(struct task_struct *curr, struct held_lock *this,
 	struct lock_list root;
 	struct lock_list *uninitialized_var(target_entry);
 
-	root.parent = NULL;
-	root.class = hlock_class(this);
+	bfs_init_root(&root, this);
 	ret = find_usage_forwards(&root, bit, &target_entry);
 	if (bfs_error(ret))
 		return print_bfs_bug(ret);
@@ -2734,8 +2817,7 @@ check_usage_backwards(struct task_struct *curr, struct held_lock *this,
 	struct lock_list root;
 	struct lock_list *uninitialized_var(target_entry);
 
-	root.parent = NULL;
-	root.class = hlock_class(this);
+	bfs_init_root(&root, this);
 	ret = find_usage_backwards(&root, bit, &target_entry);
 	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-- 
2.16.1


* [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular()
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (4 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22 14:54   ` Peter Zijlstra
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change Boqun Feng
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Currently, lockdep has only limited support for deadlock detection with
recursive read locks.

The basic idea of the detection is:

Since we have made __bfs() traverse only the strong dependency paths,
we report a circular deadlock if we find a cycle formed by a strong
dependency path.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 07bcfaac6fe2..e1be088a34c4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1343,6 +1343,14 @@ static inline int class_equal(struct lock_list *entry, void *data)
 	return entry->class == data;
 }
 
+static inline int hlock_conflict(struct lock_list *entry, void *data)
+{
+	struct held_lock *hlock = (struct held_lock *)data;
+
+	return hlock_class(hlock) == entry->class &&
+	       (hlock->read != 2 || !entry->is_rr);
+}
+
 static noinline int print_circular_bug(struct lock_list *this,
 				struct lock_list *target,
 				struct held_lock *check_src,
@@ -1455,18 +1463,18 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 }
 
 /*
- * Prove that the dependency graph starting at <entry> can not
+ * Prove that the dependency graph starting at <root> can not
  * lead to <target>. Print an error and return BFS_RMATCH if it does.
  */
 static noinline enum bfs_result
-check_noncircular(struct lock_list *root, struct lock_class *target,
+check_noncircular(struct lock_list *root, struct held_lock *target,
 		  struct lock_list **target_entry)
 {
 	enum bfs_result result;
 
 	debug_atomic_inc(nr_cyclic_checks);
 
-	result = __bfs_forwards(root, target, class_equal, target_entry);
+	result = __bfs_forwards(root, target, hlock_conflict, target_entry);
 
 	return result;
 }
@@ -1994,7 +2002,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 * keep the stackframe size of the recursive functions low:
 	 */
 	bfs_init_root(&this, next);
-	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
+	ret = check_noncircular(&this, prev, &target_entry);
 	if (unlikely(ret == BFS_RMATCH)) {
 		if (!trace->entries) {
 			/*
-- 
2.16.1


* [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (5 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular() Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22 17:29   ` Peter Zijlstra
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection Boqun Feng
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

As we have four kinds of dependencies now, check_redundant() should only
report redundant if we have a dependency path which is equal to or
_stronger_ than the current dependency. For example, if in
check_prev_add() we have:

	prev->read == 2 && next->read != 2

, we should only report redundant if we find a path like:

	prev--(RN)-->....--(*N)-->next

and if we have:

	prev->read == 2 && next->read == 2

, we could report redundant if we find a path like:

	prev--(RN)-->....--(*N)-->next

or

	prev--(RN)-->....--(*R)-->next

To do so, we need to pass the recursive-read status of @next into
check_redundant(). This patch changes the parameter of check_redundant()
and the match function to achieve this.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e1be088a34c4..0b0ad3db78b4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1338,9 +1338,12 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
 	return 0;
 }
 
-static inline int class_equal(struct lock_list *entry, void *data)
+static inline int hlock_equal(struct lock_list *entry, void *data)
 {
-	return entry->class == data;
+	struct held_lock *hlock = (struct held_lock *)data;
+
+	return hlock_class(hlock) == entry->class &&
+	       (hlock->read == 2 || !entry->is_rr);
 }
 
 static inline int hlock_conflict(struct lock_list *entry, void *data)
@@ -1480,14 +1483,14 @@ check_noncircular(struct lock_list *root, struct held_lock *target,
 }
 
 static noinline enum bfs_result
-check_redundant(struct lock_list *root, struct lock_class *target,
+check_redundant(struct lock_list *root, struct held_lock *target,
 		struct lock_list **target_entry)
 {
 	enum bfs_result result;
 
 	debug_atomic_inc(nr_redundant_checks);
 
-	result = __bfs_forwards(root, target, class_equal, target_entry);
+	result = __bfs_forwards(root, target, hlock_equal, target_entry);
 
 	return result;
 }
@@ -2060,7 +2063,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 * Is the <prev> -> <next> link redundant?
 	 */
 	bfs_init_root(&this, prev);
-	ret = check_redundant(&this, hlock_class(next), &target_entry);
+	ret = check_redundant(&this, next, &target_entry);
 	if (ret == BFS_RMATCH) {
 		debug_atomic_inc(nr_redundant);
 		return 2;
-- 
2.16.1


* [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (6 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22 17:41   ` Peter Zijlstra
  2018-02-22 17:46   ` Peter Zijlstra
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 09/17] lockdep: Add recursive read locks into dependency graph Boqun Feng
                   ` (8 subsequent siblings)
  16 siblings, 2 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

There are four cases for recursive read lock related deadlocks:

(--(X..Y)--> means a strong dependency path that starts with a --(X*)-->
dependency and ends with a --(*Y)--> dependency.)

1.	An irq-safe lock L1 has a dependency --(*..*)--> to an
	irq-unsafe lock L2.

2.	An irq-read-safe lock L1 has a dependency --(N..*)--> to an
	irq-unsafe lock L2.

3.	An irq-safe lock L1 has a dependency --(*..N)--> to an
	irq-read-unsafe lock L2.

4.	An irq-read-safe lock L1 has a dependency --(N..N)--> to an
	irq-read-unsafe lock L2.

The current check_usage() only checks 1) and 2), so this patch adds
checks for 3) and 4) and makes sure when find_usage_{back,for}wards find
an irq-read-{,un}safe lock, the traverse path should ends at a
dependency --(*N)-->. Note when we search backwards, --(*N)--> indicates
a real dependency --(N*)-->.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0b0ad3db78b4..bd3eef664f9d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1504,7 +1504,14 @@ check_redundant(struct lock_list *root, struct held_lock *target,
 
 static inline int usage_match(struct lock_list *entry, void *bit)
 {
-	return entry->class->usage_mask & (1 << (enum lock_usage_bit)bit);
+	enum lock_usage_bit ub = (enum lock_usage_bit)bit;
+
+
+	if (ub & 1)
+		return entry->class->usage_mask & (1 << ub) &&
+		       !entry->is_rr;
+	else
+		return entry->class->usage_mask & (1 << ub);
 }
 
 
@@ -1815,6 +1822,10 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
 			   exclusive_bit(bit), state_name(bit)))
 		return 0;
 
+	if (!check_usage(curr, prev, next, bit,
+			   exclusive_bit(bit) + 1, state_name(bit)))
+		return 0;
+
 	bit++; /* _READ */
 
 	/*
@@ -1827,6 +1838,10 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
 			   exclusive_bit(bit), state_name(bit)))
 		return 0;
 
+	if (!check_usage(curr, prev, next, bit,
+			   exclusive_bit(bit) + 1, state_name(bit)))
+		return 0;
+
 	return 1;
 }
 
-- 
2.16.1


* [RFC tip/locking/lockdep v5 09/17] lockdep: Add recursive read locks into dependency graph
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (7 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 10/17] lockdep/selftest: Add a R-L/L-W test case specific to chain cache behavior Boqun Feng
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Since we have all the fundamentals to handle recursive read locks, we
now add them into the dependency graph.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index bd3eef664f9d..254f90bade54 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2038,16 +2038,6 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	if (!check_prev_add_irq(curr, prev, next))
 		return 0;
 
-	/*
-	 * For recursive read-locks we do all the dependency checks,
-	 * but we dont store read-triggered dependencies (only
-	 * write-triggered dependencies). This ensures that only the
-	 * write-side dependencies matter, and that if for example a
-	 * write-lock never takes any other locks, then the reads are
-	 * equivalent to a NOP.
-	 */
-	if (next->read == 2 || prev->read == 2)
-		return 1;
 	/*
 	 * Is the <prev> -> <next> dependency already present?
 	 *
@@ -2151,11 +2141,7 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 		int distance = curr->lockdep_depth - depth + 1;
 		hlock = curr->held_locks + depth - 1;
 
-		/*
-		 * Only non-recursive-read entries get new dependencies
-		 * added:
-		 */
-		if (hlock->read != 2 && hlock->check) {
+		if (hlock->check) {
 			int ret = check_prev_add(curr, hlock, next, distance, &trace, save_trace);
 			if (!ret)
 				return 0;
-- 
2.16.1


* [RFC tip/locking/lockdep v5 10/17] lockdep/selftest: Add a R-L/L-W test case specific to chain cache behavior
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (8 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 09/17] lockdep: Add recursive read locks into dependency graph Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 11/17] lockdep: Take read/write status in consideration when generate chainkey Boqun Feng
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

As our chain cache doesn't differentiate read and write locks, even
though we can detect a read-lock/lock-write deadlock in
check_noncircular(), we can still be fooled if a read-lock/lock-read
case (which is not a deadlock) comes first.

So introduce this test case to test specific to the chain cache behavior
on detecting recursive read lock related deadlocks.
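
A quick legend for the selftest primitives used in the test below (assuming
the existing macro definitions in lib/locking-selftest.c):

	L(x)/U(x)   - raw_spin_lock()/raw_spin_unlock() on spinlock x
	RL(x)/RU(x) - read_lock()/read_unlock() on rwlock x
	WL(x)/WU(x) - write_lock()/write_unlock() on rwlock x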

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 lib/locking-selftest.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index b5c1293ce147..700f9aa19db6 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -395,6 +395,49 @@ static void rwsem_ABBA1(void)
 	MU(Y1); // should fail
 }
 
+/*
+ * read_lock(A)
+ * spin_lock(B)
+ *		spin_lock(B)
+ *		write_lock(A)
+ *
+ * This test case is aimed at poking whether the chain cache prevents us from
+ * detecting a read-lock/lock-write deadlock: if the chain cache doesn't
+ * differentiate read/write locks, the following case may happen
+ *
+ * 	{ read_lock(A)->lock(B) dependency exists }
+ *
+ * 	P0:
+ * 	lock(B);
+ * 	read_lock(A);
+ *
+ *	{ Not a deadlock, B -> A is added in the chain cache }
+ *
+ *	P1:
+ *	lock(B);
+ *	write_lock(A);
+ *
+ *	{ B->A found in chain cache, not reported as a deadlock }
+ *
+ */
+static void rlock_chaincache_ABBA1(void)
+{
+	RL(X1);
+	L(Y1);
+	U(Y1);
+	RU(X1);
+
+	L(Y1);
+	RL(X1);
+	RU(X1);
+	U(Y1);
+
+	L(Y1);
+	WL(X1);
+	WU(X1);
+	U(Y1); // should fail
+}
+
 /*
  * read_lock(A)
  * spin_lock(B)
@@ -2055,6 +2098,10 @@ void locking_selftest(void)
 	pr_cont("             |");
 	dotest(rwsem_ABBA3, FAILURE, LOCKTYPE_RWSEM);
 
+	print_testname("chain cached mixed R-L/L-W ABBA");
+	pr_cont("             |");
+	dotest(rlock_chaincache_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
+
 	printk("  --------------------------------------------------------------------------\n");
 
 	/*
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [RFC tip/locking/lockdep v5 11/17] lockdep: Take read/write status in consideration when generate chainkey
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (9 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 10/17] lockdep/selftest: Add a R-L/L-W test case specific to chain cache behavior Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 12/17] lockdep/selftest: Unleash irq_read_recursion2 and add more Boqun Feng
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Currently, the chainkey of a lock chain is a hash sum over the class_idx of
all the held locks; the read/write status is not taken into consideration
while generating the chainkey. This can result in a problem if we have:

	P1()
	{
		read_lock(B);
		lock(A);
	}

	P2()
	{
		lock(A);
		read_lock(B);
	}

	P3()
	{
		lock(A);
		write_lock(B);
	}

, and P1(), P2(), P3() run one by one. When running P2(), lockdep determines
that the lock chain A -> B is not a deadlock and adds it to the chain cache;
then when running P3(), even though it is a deadlock, we can miss it because
of the chain cache hit. This can be confirmed by the selftest "chain cached
mixed R-L/L-W ABBA".

To resolve this, we introduce the concept of "hlock_id" and use it to
generate the chainkey: an hlock_id is the tuple (hlock->class_idx,
hlock->read), which fits in a u16. With this, the chainkeys differ if the
lock sequences have the same locks but different read/write status.

Besides, since we use "hlock_id"s to generate chainkeys, the chain_hlocks
array now stores "hlock_id"s rather than lock_class indexes.
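
As a rough illustration (hypothetical numbers; assuming MAX_LOCKDEP_KEYS_BITS
is 13, so that hlock_id() below packs the id as class_idx | (read << 13)),
suppose class_idx(A) == 10 and class_idx(B) == 11:

	P2: held locks {A: read 0, B: read 2} -> hlock_ids {10, 11 | (2 << 13)} = {10, 16395}
	P3: held locks {A: read 0, B: read 0} -> hlock_ids {10, 11 | (0 << 13)} = {10, 11}

The two B entries (and hence the resulting chainkeys) differ, so P3 no longer
hits the chain cached by P2 and its deadlock gets validated and reported.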

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 56 +++++++++++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 254f90bade54..1b981dc4c061 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -307,6 +307,21 @@ static struct hlist_head classhash_table[CLASSHASH_SIZE];
 
 static struct hlist_head chainhash_table[CHAINHASH_SIZE];
 
+/*
+ * The "hlock_id" of a held lock, as stored in the chain_hlocks array.
+ */
+static inline u16 hlock_id(struct held_lock *hlock)
+{
+	BUILD_BUG_ON(MAX_LOCKDEP_KEYS_BITS + 2 > 16);
+
+	return (hlock->class_idx | (hlock->read << MAX_LOCKDEP_KEYS_BITS));
+}
+
+static inline unsigned int chain_hlock_class_idx(u16 hlock_id)
+{
+	return hlock_id & MAX_LOCKDEP_KEYS;
+}
+
 /*
  * The hash key of the lock dependency chains is a hash itself too:
  * it's a hash of all locks taken up to that lock, including that lock.
@@ -2191,7 +2206,10 @@ static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
 
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
 {
-	return lock_classes + chain_hlocks[chain->base + i];
+	u16 chain_hlock = chain_hlocks[chain->base + i];
+	unsigned int class_idx = chain_hlock_class_idx(chain_hlock);
+
+	return lock_classes + class_idx - 1;
 }
 
 /*
@@ -2217,12 +2235,12 @@ static inline int get_first_held_lock(struct task_struct *curr,
 /*
  * Returns the next chain_key iteration
  */
-static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
+static u64 print_chain_key_iteration(u16 hlock_id, u64 chain_key)
 {
-	u64 new_chain_key = iterate_chain_key(chain_key, class_idx);
+	u64 new_chain_key = iterate_chain_key(chain_key, hlock_id);
 
-	printk(" class_idx:%d -> chain_key:%016Lx",
-		class_idx,
+	printk(" hlock_id:%d -> chain_key:%016Lx",
+		(unsigned int)hlock_id,
 		(unsigned long long)new_chain_key);
 	return new_chain_key;
 }
@@ -2238,12 +2256,12 @@ print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_ne
 	printk("depth: %u\n", depth + 1);
 	for (i = get_first_held_lock(curr, hlock_next); i < depth; i++) {
 		hlock = curr->held_locks + i;
-		chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
+		chain_key = print_chain_key_iteration(hlock_id(hlock), chain_key);
 
 		print_lock(hlock);
 	}
 
-	print_chain_key_iteration(hlock_next->class_idx, chain_key);
+	print_chain_key_iteration(hlock_id(hlock_next), chain_key);
 	print_lock(hlock_next);
 }
 
@@ -2251,14 +2269,14 @@ static void print_chain_keys_chain(struct lock_chain *chain)
 {
 	int i;
 	u64 chain_key = 0;
-	int class_id;
+	u16 hlock_id;
 
 	printk("depth: %u\n", chain->depth);
 	for (i = 0; i < chain->depth; i++) {
-		class_id = chain_hlocks[chain->base + i];
-		chain_key = print_chain_key_iteration(class_id + 1, chain_key);
+		hlock_id = chain_hlocks[chain->base + i];
+		chain_key = print_chain_key_iteration(hlock_id, chain_key);
 
-		print_lock_name(lock_classes + class_id);
+		print_lock_name(lock_classes + chain_hlock_class_idx(hlock_id) - 1);
 		printk("\n");
 	}
 }
@@ -2307,7 +2325,7 @@ static int check_no_collision(struct task_struct *curr,
 	}
 
 	for (j = 0; j < chain->depth - 1; j++, i++) {
-		id = curr->held_locks[i].class_idx - 1;
+		id = hlock_id(&curr->held_locks[i]);
 
 		if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) {
 			print_collision(curr, hlock, chain);
@@ -2364,8 +2382,8 @@ static inline int add_chain_cache_classes(unsigned int prev,
 	if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
 		chain->base = nr_chain_hlocks;
 		nr_chain_hlocks += chain->depth;
-		chain_hlocks[chain->base] = prev - 1;
-		chain_hlocks[chain->base + 1] = next -1;
+		chain_hlocks[chain->base] = prev;
+		chain_hlocks[chain->base + 1] = next;
 	}
 #ifdef CONFIG_DEBUG_LOCKDEP
 	/*
@@ -2399,7 +2417,6 @@ static inline int add_chain_cache(struct task_struct *curr,
 				  struct held_lock *hlock,
 				  u64 chain_key)
 {
-	struct lock_class *class = hlock_class(hlock);
 	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 	int i, j;
@@ -2438,10 +2455,9 @@ static inline int add_chain_cache(struct task_struct *curr,
 	if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
 		chain->base = nr_chain_hlocks;
 		for (j = 0; j < chain->depth - 1; j++, i++) {
-			int lock_id = curr->held_locks[i].class_idx - 1;
-			chain_hlocks[chain->base + j] = lock_id;
+			chain_hlocks[chain->base + j] = hlock_id(&curr->held_locks[i]);
 		}
-		chain_hlocks[chain->base + j] = class - lock_classes;
+		chain_hlocks[chain->base + j] = hlock_id(hlock);
 	}
 
 	if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS)
@@ -2639,7 +2655,7 @@ static void check_chain_key(struct task_struct *curr)
 		if (prev_hlock && (prev_hlock->irq_context !=
 							hlock->irq_context))
 			chain_key = 0;
-		chain_key = iterate_chain_key(chain_key, hlock->class_idx);
+		chain_key = iterate_chain_key(chain_key, hlock_id(hlock));
 		prev_hlock = hlock;
 	}
 	if (chain_key != curr->curr_chain_key) {
@@ -3590,7 +3606,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		chain_key = 0;
 		chain_head = 1;
 	}
-	chain_key = iterate_chain_key(chain_key, class_idx);
+	chain_key = iterate_chain_key(chain_key, hlock_id(hlock));
 
 	if (nest_lock && !__lock_is_held(nest_lock, -1))
 		return print_lock_nested_lock_not_held(curr, hlock, ip);
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [RFC tip/locking/lockdep v5 12/17] lockdep/selftest: Unleash irq_read_recursion2 and add more
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (10 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 11/17] lockdep: Take read/write status in consideration when generate chainkey Boqun Feng
@ 2018-02-22  7:08 ` Boqun Feng
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 13/17] lockdep/selftest: Add more recursive read related test cases Boqun Feng
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Now that we can handle recursive read related irq inversion deadlocks
correctly, uncomment irq_read_recursion2 and add more testcases.
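
The mechanics of the conversion below, as a sketch (assuming the usual
selftest template headers, which redefine LOCK()/UNLOCK() per lock type):

	#include "locking-selftest-rlock.h"	/* LOCK(x)/UNLOCK(x) -> RL(x)/RU(x) */
	GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard_rlock)

	#include "locking-selftest-wlock.h"	/* LOCK(x)/UNLOCK(x) -> WL(x)/WU(x) */
	GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard_wlock)

Each E1/E2/E3 body is thus instantiated once with read locks and once with
write locks, which is why each group of 12 testcases becomes 24.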

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 lib/locking-selftest.c | 59 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 12 deletions(-)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 700f9aa19db6..c2f06b423da8 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -1052,20 +1052,28 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_wlock)
 #define E3()				\
 					\
 	IRQ_ENTER();			\
-	RL(A);				\
+	LOCK(A);			\
 	L(B);				\
 	U(B);				\
-	RU(A);				\
+	UNLOCK(A);			\
 	IRQ_EXIT();
 
 /*
- * Generate 12 testcases:
+ * Generate 24 testcases:
  */
 #include "locking-selftest-hardirq.h"
-GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard)
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard_rlock)
+
+#include "locking-selftest-wlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard_wlock)
 
 #include "locking-selftest-softirq.h"
-GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft)
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft_rlock)
+
+#include "locking-selftest-wlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft_wlock)
 
 #undef E1
 #undef E2
@@ -1079,8 +1087,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft)
 					\
 	IRQ_DISABLE();			\
 	L(B);				\
-	WL(A);				\
-	WU(A);				\
+	LOCK(A);			\
+	UNLOCK(A);			\
 	U(B);				\
 	IRQ_ENABLE();
 
@@ -1097,13 +1105,21 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft)
 	IRQ_EXIT();
 
 /*
- * Generate 12 testcases:
+ * Generate 24 testcases:
  */
 #include "locking-selftest-hardirq.h"
-// GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_hard)
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_hard_rlock)
+
+#include "locking-selftest-wlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_hard_wlock)
 
 #include "locking-selftest-softirq.h"
-// GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_soft)
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_soft_rlock)
+
+#include "locking-selftest-wlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_soft_wlock)
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define I_SPINLOCK(x)	lockdep_reset_lock(&lock_##x.dep_map)
@@ -1256,6 +1272,25 @@ static inline void print_testname(const char *testname)
 	dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK);	\
 	pr_cont("\n");
 
+#define DO_TESTCASE_2RW(desc, name, nr)				\
+	print_testname(desc"/"#nr);				\
+	pr_cont("      |");					\
+	dotest(name##_wlock_##nr, FAILURE, LOCKTYPE_RWLOCK);	\
+	dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK);	\
+	pr_cont("\n");
+
+#define DO_TESTCASE_2x2RW(desc, name, nr)			\
+	DO_TESTCASE_2RW("hard-"desc, name##_hard, nr)		\
+	DO_TESTCASE_2RW("soft-"desc, name##_soft, nr)		\
+
+#define DO_TESTCASE_6x2x2RW(desc, name)				\
+	DO_TESTCASE_2x2RW(desc, name, 123);			\
+	DO_TESTCASE_2x2RW(desc, name, 132);			\
+	DO_TESTCASE_2x2RW(desc, name, 213);			\
+	DO_TESTCASE_2x2RW(desc, name, 231);			\
+	DO_TESTCASE_2x2RW(desc, name, 312);			\
+	DO_TESTCASE_2x2RW(desc, name, 321);
+
 #define DO_TESTCASE_6(desc, name)				\
 	print_testname(desc);					\
 	dotest(name##_spin, FAILURE, LOCKTYPE_SPIN);		\
@@ -2114,8 +2149,8 @@ void locking_selftest(void)
 	DO_TESTCASE_6x6("safe-A + unsafe-B #2", irqsafe4);
 	DO_TESTCASE_6x6RW("irq lock-inversion", irq_inversion);
 
-	DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion);
-//	DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2);
+	DO_TESTCASE_6x2x2RW("irq read-recursion", irq_read_recursion);
+	DO_TESTCASE_6x2x2RW("irq read-recursion #2", irq_read_recursion2);
 
 	ww_tests();
 
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [RFC tip/locking/lockdep v5 13/17] lockdep/selftest: Add more recursive read related test cases
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (11 preceding siblings ...)
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 12/17] lockdep/selftest: Unleash irq_read_recursion2 and add more Boqun Feng
@ 2018-02-22  7:09 ` Boqun Feng
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 14/17] Revert "locking/lockdep/selftests: Fix mixed read-write ABBA tests" Boqun Feng
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:09 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

Add these four test cases:

1.	X --(NR)--> Y --(NR)--> Z --(NR)--> X is deadlock.

2.	X --(NN)--> Y --(RR)--> Z --(NR)--> X is deadlock.

3.	X --(NN)--> Y --(RR)--> Z --(RN)--> X is not deadlock.

4.	X --(NR)--> Y --(RR)--> Z --(NN)--> X is not deadlock.

These selftests are valuable for developing support for recursive read
related deadlock detection; a sketch of case 2 as a concrete multi-CPU
scenario follows.
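
For instance, case 2 corresponds to a scenario like the following (a sketch;
X, Y and Z are rwlocks, and read_lock() here is the recursive reader):

	CPU 1			CPU 2			CPU 3
	=============		=============		=============
	write_lock(X);
				read_lock(Y);
							write_lock(Z);
	write_lock(Y); // blocked by CPU 2's read holder
				read_lock(Z); // blocked, Z is write-held by CPU 3
							read_lock(X); // blocked, X is write-held by CPU 1

No CPU can make progress, so this must be reported as a deadlock.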

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 lib/locking-selftest.c | 161 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 161 insertions(+)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index c2f06b423da8..6b7a28d84fc4 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -1033,6 +1033,133 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_wlock)
 #undef E2
 #undef E3
 
+/*
+ * write-read / write-read / write-read deadlock even if read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	RL(Y1);				\
+	RU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	WL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	WU(Y1);
+
+#define E3()				\
+					\
+	WL(Z1);				\
+	RL(X1);				\
+	RU(X1);				\
+	WU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1R2_W2R3_W3R1)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * write-write / read-read / write-read deadlock even if read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	WL(Y1);				\
+	WU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	RL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	RU(Y1);
+
+#define E3()				\
+					\
+	WL(Z1);				\
+	RL(X1);				\
+	RU(X1);				\
+	WU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1W2_R2R3_W3R1)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * write-write / read-read / read-write is not deadlock when read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	WL(Y1);				\
+	WU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	RL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	RU(Y1);
+
+#define E3()				\
+					\
+	RL(Z1);				\
+	WL(X1);				\
+	WU(X1);				\
+	RU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1W2_R2R3_R3W1)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * write-read / read-read / write-write is not deadlock when read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	RL(Y1);				\
+	RU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	RL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	RU(Y1);
+
+#define E3()				\
+					\
+	WL(Z1);				\
+	WL(X1);				\
+	WU(X1);				\
+	WU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1R2_R2R3_W3W1)
+
+#undef E1
+#undef E2
+#undef E3
 /*
  * read-lock / write-lock recursion that is actually safe.
  */
@@ -1258,6 +1385,19 @@ static inline void print_testname(const char *testname)
 	dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK);		\
 	pr_cont("\n");
 
+#define DO_TESTCASE_1RR(desc, name, nr)				\
+	print_testname(desc"/"#nr);				\
+	pr_cont("             |");				\
+	dotest(name##_##nr, SUCCESS, LOCKTYPE_RWLOCK);		\
+	pr_cont("\n");
+
+#define DO_TESTCASE_1RRB(desc, name, nr)			\
+	print_testname(desc"/"#nr);				\
+	pr_cont("             |");				\
+	dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK);		\
+	pr_cont("\n");
+
+
 #define DO_TESTCASE_3(desc, name, nr)				\
 	print_testname(desc"/"#nr);				\
 	dotest(name##_spin_##nr, FAILURE, LOCKTYPE_SPIN);	\
@@ -1367,6 +1507,22 @@ static inline void print_testname(const char *testname)
 	DO_TESTCASE_2IB(desc, name, 312);			\
 	DO_TESTCASE_2IB(desc, name, 321);
 
+#define DO_TESTCASE_6x1RR(desc, name)				\
+	DO_TESTCASE_1RR(desc, name, 123);			\
+	DO_TESTCASE_1RR(desc, name, 132);			\
+	DO_TESTCASE_1RR(desc, name, 213);			\
+	DO_TESTCASE_1RR(desc, name, 231);			\
+	DO_TESTCASE_1RR(desc, name, 312);			\
+	DO_TESTCASE_1RR(desc, name, 321);
+
+#define DO_TESTCASE_6x1RRB(desc, name)				\
+	DO_TESTCASE_1RRB(desc, name, 123);			\
+	DO_TESTCASE_1RRB(desc, name, 132);			\
+	DO_TESTCASE_1RRB(desc, name, 213);			\
+	DO_TESTCASE_1RRB(desc, name, 231);			\
+	DO_TESTCASE_1RRB(desc, name, 312);			\
+	DO_TESTCASE_1RRB(desc, name, 321);
+
 #define DO_TESTCASE_6x6(desc, name)				\
 	DO_TESTCASE_6I(desc, name, 123);			\
 	DO_TESTCASE_6I(desc, name, 132);			\
@@ -2137,6 +2293,11 @@ void locking_selftest(void)
 	pr_cont("             |");
 	dotest(rlock_chaincache_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
 
+	DO_TESTCASE_6x1RRB("rlock W1R2/W2R3/W3R1", W1R2_W2R3_W3R1);
+	DO_TESTCASE_6x1RRB("rlock W1W2/R2R3/W3R1", W1W2_R2R3_W3R1);
+	DO_TESTCASE_6x1RR("rlock W1W2/R2R3/R3W1", W1W2_R2R3_R3W1);
+	DO_TESTCASE_6x1RR("rlock W1R2/R2R3/W3W1", W1R2_R2R3_W3W1);
+
 	printk("  --------------------------------------------------------------------------\n");
 
 	/*
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [RFC tip/locking/lockdep v5 14/17] Revert "locking/lockdep/selftests: Fix mixed read-write ABBA tests"
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (12 preceding siblings ...)
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 13/17] lockdep/selftest: Add more recursive read related test cases Boqun Feng
@ 2018-02-22  7:09 ` Boqun Feng
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 15/17] lockdep: Reduce the size of lock_list Boqun Feng
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:09 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

This reverts commit d82fed75294229abc9d757f08a4817febae6c4f4.

Since we can now handle mixed read-write deadlock detection correctly, the
selftests are detected as expected, and there is no need for this workaround
anymore.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 lib/locking-selftest.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 6b7a28d84fc4..79270288fa28 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -2266,14 +2266,6 @@ void locking_selftest(void)
 	print_testname("mixed read-lock/lock-write ABBA");
 	pr_cont("             |");
 	dotest(rlock_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
-#ifdef CONFIG_PROVE_LOCKING
-	/*
-	 * Lockdep does indeed fail here, but there's nothing we can do about
-	 * that now.  Don't kill lockdep for it.
-	 */
-	unexpected_testcase_failures--;
-#endif
-
 	pr_cont("             |");
 	dotest(rwsem_ABBA1, FAILURE, LOCKTYPE_RWSEM);
 
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [RFC tip/locking/lockdep v5 15/17] lockdep: Reduce the size of lock_list
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (13 preceding siblings ...)
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 14/17] Revert "locking/lockdep/selftests: Fix mixed read-write ABBA tests" Boqun Feng
@ 2018-02-22  7:09 ` Boqun Feng
  2018-02-23 11:38   ` Peter Zijlstra
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 16/17] lockdep: Documentation for recursive read lock detection reasoning Boqun Feng
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 17/17] MAINTAINERS: Add myself as a LOCKING PRIMITIVES reviewer Boqun Feng
  16 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:09 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng

We actually only need 4 bits for lock_list::dep and 1 bit for
lock_list::is_rr. Besides, lock_list::distance should always be no greater
than MAX_LOCKDEP_DEPTH (which is 48 right now), so a u16 will fit. This patch
therefore reduces the sizes of those fields to save space in the lock_list
structure; as a result we can eliminate the size increment introduced by
recursive read lock detection and keep lock_list the same size as before.
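
A rough sketch of the space accounting (the exact numbers depend on the
architecture and on padding against the neighbouring fields):

	before this series:  int distance;                      /* 4 bytes */
	with this series:    int distance; u16 dep; u16 is_rr;  /* 8 bytes */
	after this patch:    u16 distance; u8 dep; bool is_rr;  /* 4 bytes */

which is how lock_list ends up with the same size as before the series.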

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/lockdep.h  |  6 +++---
 kernel/locking/lockdep.c | 11 ++++++-----
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index a1f91f8680bd..3fce8dbf5091 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -186,11 +186,11 @@ struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
 	struct stack_trace		trace;
-	int				distance;
+	u16				distance;
 	/* bitmap of different dependencies from head to this */
-	u16				dep;
+	u8				dep;
 	/* used by BFS to record whether this is picked as a recursive read */
-	u16				is_rr;
+	bool				is_rr;
 
 	/*
 	 * The parent field is used to implement breadth-first search, and the
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1b981dc4c061..e8b83b36669c 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -874,7 +874,7 @@ static struct lock_list *alloc_list_entry(void)
  * Add a new dependency to the head of the list:
  */
 static int add_lock_to_list(struct lock_class *this, struct list_head *head,
-			    unsigned long ip, int distance, unsigned int dep,
+			    unsigned long ip, u16 distance, unsigned int dep,
 			    struct stack_trace *trace)
 {
 	struct lock_list *entry;
@@ -1063,7 +1063,7 @@ static inline unsigned int calc_dep(int prev, int next)
  * N: non-recursive lock
  * R: recursive read lock
  */
-static inline int pick_dep(u16 is_rr, u16 cap_dep)
+static inline int pick_dep(bool is_rr, u8 cap_dep)
 {
 	if (is_rr) { /* could only pick -(N*)-> */
 		if (cap_dep & DEP_NN_MASK)
@@ -1148,7 +1148,8 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 	struct list_head *head;
 	struct circular_queue *cq = &lock_cq;
 	enum bfs_result ret = BFS_RNOMATCH;
-	int is_rr, next_is_rr;
+	bool is_rr;
+	int next_is_rr;
 
 	if (match(source_entry, data)) {
 		*target_entry = source_entry;
@@ -1204,7 +1205,7 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 			next_is_rr = pick_dep(is_rr, entry->dep);
 			if (next_is_rr < 0)
 				continue;
-			entry->is_rr = next_is_rr;
+			entry->is_rr = !!next_is_rr;
 
 			visit_lock_entry(entry, lock);
 			if (match(entry, data)) {
@@ -2153,7 +2154,7 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 		goto out_bug;
 
 	for (;;) {
-		int distance = curr->lockdep_depth - depth + 1;
+		u16 distance = curr->lockdep_depth - depth + 1;
 		hlock = curr->held_locks + depth - 1;
 
 		if (hlock->check) {
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [RFC tip/locking/lockdep v5 16/17] lockdep: Documentation for recursive read lock detection reasoning
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (14 preceding siblings ...)
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 15/17] lockdep: Reduce the size of lock_list Boqun Feng
@ 2018-02-22  7:09 ` Boqun Feng
  2018-02-24 22:53   ` Andrea Parri
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 17/17] MAINTAINERS: Add myself as a LOCKING PRIMITIVES reviewer Boqun Feng
  16 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng,
	Randy Dunlap, Jonathan Corbet, open list:DOCUMENTATION

Now that we support recursive read lock deadlock detection, add the related
explanation to Documentation/locking/lockdep-design.txt:

*	Definition of recursive read locks, non-recursive locks, strong
	dependency paths and the -(**)-> notation.

*	Lockdep's assumption.

*	Informal proof of recursive read lock deadlock detection.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
---
 Documentation/locking/lockdep-design.txt | 170 +++++++++++++++++++++++++++++++
 1 file changed, 170 insertions(+)

diff --git a/Documentation/locking/lockdep-design.txt b/Documentation/locking/lockdep-design.txt
index 382bc25589c2..fd8a8d97ce58 100644
--- a/Documentation/locking/lockdep-design.txt
+++ b/Documentation/locking/lockdep-design.txt
@@ -284,3 +284,173 @@ Run the command and save the output, then compare against the output from
 a later run of this command to identify the leakers.  This same output
 can also help you find situations where runtime lock initialization has
 been omitted.
+
+Recursive Read Deadlock Detection:
+----------------------------------
+Lockdep now is equipped with deadlock detection for recursive read locks.
+
+Recursive read locks, as their name indicates, are read locks that can be
+acquired recursively. Unlike non-recursive read locks, recursive read locks
+only get blocked by current write lock *holders*, not by write lock
+*waiters*, for example:
+
+	TASK A:			TASK B:
+
+	read_lock(X);
+
+				write_lock(X);
+
+	read_lock(X);
+
+is not a deadlock for recursive read locks: while task B is waiting for
+the lock X, the second read_lock() doesn't need to wait because it's a
+recursive read lock.
+
+Note that a lock can be a write lock (exclusive lock), a non-recursive read lock
+(non-recursive shared lock) or a recursive read lock (recursive shared lock),
+depending on the API used to acquire it (more specifically, the value of the
+'read' parameter for lock_acquire(...)). In other words, a single lock instance
+has three types of acquisition depending on the acquisition functions:
+exclusive, non-recursive read, and recursive read.
+
+That said, recursive read locks can introduce deadlocks too, as in the
+following example:
+
+	TASK A:			TASK B:
+
+	read_lock(X);
+				read_lock(Y);
+	write_lock(Y);
+				write_lock(X);
+
+, neither task can acquire its write lock because the corresponding read
+lock is held by the other task.
+
+Lockdep can detect recursive read lock related deadlocks. The dependencies (edges)
+in the lockdep graph are classified into four categories:
+
+1) -(NN)->: non-recursive to non-recursive dependency. Non-recursive locks include
+            non-recursive read locks, write locks and exclusive locks (e.g. spinlock_t);
+            they are treated equally in deadlock detection. "X -(NN)-> Y" means
+            X -> Y and both X and Y are non-recursive locks.
+
+2) -(RN)->: recursive to non-recursive dependency. Recursive locks mean recursive read
+            locks. "X -(RN)-> Y" means X -> Y and X is a recursive read lock and
+            Y is a non-recursive lock.
+
+3) -(NR)->: non-recursive to recursive dependency. "X -(NR)-> Y" means X -> Y and X is
+            a non-recursive lock and Y is a recursive lock.
+
+4) -(RR)->: recursive to recursive dependency. "X -(RR)-> Y" means X -> Y and both X
+            and Y are recursive locks.
+
+Note that given two locks, they may have multiple dependencies between them, for example:
+
+	TASK A:
+
+	read_lock(X);
+	write_lock(Y);
+	...
+
+	TASK B:
+
+	write_lock(X);
+	write_lock(Y);
+
+, we have both X -(RN)-> Y and X -(NN)-> Y in the dependency graph.
+
+And obviously a non-recursive lock can block the corresponding recursive lock,
+and vice versa. Besides, a non-recursive lock may block another non-recursive
+lock of the same instance (e.g. a write lock may block a corresponding
+non-recursive read lock and vice versa).
+
+We use -(*N)-> for an edge that is either -(RN)-> or -(NN)->, and similarly for -(N*)->,
+-(*R)-> and -(R*)->.
+
+A "path" is a series of conjunct dependency edges in the graph. And we define a
+"strong" path, which indicates the strong dependency throughout each dependency
+in the path, as the path that doesn't have two conjunct edges(dependencies) as
+-(*R)-> and -(R*)->. IOW, a "strong" path is a path from a lock walking to another
+through the lock dependencies, and if X -> Y -> Z in the path(where X, Y, Z are
+locks), if the walk from X to Y is through a -(NR)-> or -(RR)-> dependency, the
+walk from Y to Z must not be through a -(RN)-> or -(RR)-> dependency, otherwise
+it's not a strong path.
+
+We now show that if a strong path forms a circle, then we have a potential deadlock.
+By "forms a circle", we mean that for a set of locks A0, A1 ... An, there is a path
+from A0 to An:
+
+	A0 -> A1 -> ... -> An
+
+and there is also a dependency An -> A0. As the circle is formed by a strong path,
+there are no two conjunct dependency edges of the form -(*R)-> and -(R*)->.
+
+
+To understand the actual proof, let's look into lockdep's assumption:
+
+For each lockdep dependency A -> B, there may exist a case where someone is
+trying to acquire B with A held, and the existence of such a case is
+independent of the existence of such cases for other lockdep dependencies.
+
+For example if we have two functions func1 and func2:
+
+	void func1(...) {
+		lock(A);
+		lock(B);
+		unlock(A);
+		unlock(B);
+
+		lock(C);
+		lock(A);
+		unlock(A);
+		unlock(C);
+	}
+
+	void func2(...) {
+		lock(B);
+		lock(C);
+		unlock(C);
+		unlock(B);
+	}
+
+lockdep will generate dependencies: A->B, B->C and C->A, and assume that:
+
+	there may exist a case where someone is trying to acquire B with A held,
+	there may exist a case where someone is trying to acquire C with B held,
+	and there may exist a case where someone is trying to acquire A with C held,
+
+and those three cases exist *independently*, meaning they can happen at the
+same time (which requires func1() to be called simultaneously by two CPUs or
+tasks, which may be impossible due to other constraints in real life).
+
+
+With this assumption, we can prove that if a strong dependency path forms a circle,
+then it indicates a deadlock as far as lockdep is concerned.
+
+For a strong dependency circle like:
+
+	A0 -> A1 -> ... -> An
+	^                  |
+	|                  |
+	+------------------+
+, lockdep assumes that
+
+	there may exist a case where someone is trying to acquire A1 with A0 held
+	there may exist a case where someone is trying to acquire A2 with A1 held
+	...
+	there may exist a case where someone is trying to acquire An with An-1 held
+	there may exist a case where someone is trying to acquire A0 with An held
+
+, and because it's a *strong* dependency circle, for every Ai (0<=i<=n), Ai is either
+held as a non-recursive lock or someone is trying to acquire Ai as a non-recursive lock,
+which gives:
+
+*	If Ai is held as a non-recursive lock, then trying to acquire it,
+	whether as a recursive lock or not, will get blocked.
+
+*	If Ai is held as a recursive lock, then there must be someone trying to acquire
+	it as a non-recursive lock, and because recursive locks block non-recursive locks,
+	it (the "someone") will get blocked.
+
+So all the holders of A0, A1 ... An are blocked, with A0, A1 ... An held by each
+other; no one can progress, therefore it is a deadlock.
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [RFC tip/locking/lockdep v5 17/17] MAINTAINERS: Add myself as a LOCKING PRIMITIVES reviewer
  2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
                   ` (15 preceding siblings ...)
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 16/17] lockdep: Documentation for recursive read lock detection reasoning Boqun Feng
@ 2018-02-22  7:09 ` Boqun Feng
  16 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22  7:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Andrea Parri, Boqun Feng,
	David S. Miller, Greg Kroah-Hartman, Mauro Carvalho Chehab,
	Linus Walleij, Randy Dunlap

Recursive read lock detection work touches most core part of lockdep, so
add myself as a dedicated reviewer to help people find me if any of my
code introduces problems or misunderstandings, also if they need my help
on parsing logs related to recursive read locks.

Besides, I'd like to provide any help for lock related code.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9a7f76eadae9..197dc9c5162f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8239,6 +8239,7 @@ F:	Documentation/admin-guide/LSM/LoadPin.rst
 LOCKING PRIMITIVES
 M:	Peter Zijlstra <peterz@infradead.org>
 M:	Ingo Molnar <mingo@redhat.com>
+R:	Boqun Feng <boqun.feng@gmail.com>
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking/core
 S:	Maintained
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies Boqun Feng
@ 2018-02-22 14:26   ` Peter Zijlstra
  2018-02-22 15:12     ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 14:26 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:08:52PM +0800, Boqun Feng wrote:
> Now we have four kinds of dependencies in the dependency graph, and not
> all the pathes carry strong dependencies, for example:
> 
> 	Given lock A, B, C, if we have:
> 
> 	CPU1			CPU2
> 	=============		==============
> 	write_lock(A);		read_lock(B);
> 	read_lock(B);		write_lock(C);
> 
> 	then we have dependencies A--(NR)-->B, and B--(RN)-->C, (NR and
> 	RN are to indicate the dependency kind), A actually doesn't have
> 	strong dependency to C(IOW, C doesn't depend on A), to see this,
                              ^ missing space

You're fairly consistent with not putting spaces before opening braces
in text, please don't do this, this is not a C function call. Also
double check all your braces are matched, IIRC there's at least one that
isn't closed in the previous patches.

> 	let's say we have a third CPU3 doing:
> 
> 	CPU3:
> 	=============
> 	write_lock(C);
> 	write_lock(A);
> 
> 	, this is not a deadlock. However if we change the read_lock()
> 	on CPU2 to a write_lock(), it's a deadlock then.
> 
> 	So A --(NR)--> B --(RN)--> C is not a strong dependency path but
> 	A --(NR)--> B --(NN)-->C is a strong dependency path.

I'm not really satisfied with the above reasoning. I don't disagree, but
if possible it would be nice to have something a little more solid.


> We can generalize this as: If a path of dependencies doesn't have two
> adjacent dependencies as (*R)--L-->(R*), where L is some lock, it is a
> strong dependency path, otherwise it's not.
> 
> Now our mission is to make __bfs() traverse only the strong dependency
> paths, which is simple: we record whether we have -(*R)-> at the current
> tail of the path in lock_list::is_rr, and whenever we pick a dependency
> in the traverse, we 1) make sure we don't pick a -(R*)-> dependency if
> our current tail is -(*R)-> and 2) greedily pick a -(*N)-> as hard as
> possible.
> 
> With this extension for __bfs(), we now need to initialize the root of
> __bfs() properly(with a correct ->is_rr), to do so, we introduce some
                  ^ another missing space

Also, I don't like is_rr as a name, have_xr might be better.

Only if we combine *R with R* do we have RR.

> +/*
> + * return -1 if no proper dependency could be picked
> + * return 0 if a -(*N)-> dependency could be picked
> + * return 1 if only a -(*R)-> dependency could be picked
> + *
> + * N: non-recursive lock
> + * R: recursive read lock
> + */
> +static inline int pick_dep(u16 is_rr, u16 cap_dep)
> +{
> +	if (is_rr) { /* could only pick -(N*)-> */
> +		if (cap_dep & DEP_NN_MASK)
> +			return 0;
> +		else if (cap_dep & DEP_NR_MASK)
> +			return 1;
> +		else
> +			return -1;
> +	} else {
> +		if (cap_dep & DEP_NN_MASK || cap_dep & DEP_RN_MASK)

	if (cap_dep & (DEP_NN_MASK | DEP_RN_MASK))

> +			return 0;
> +		else
> +			return 1;
> +	}
> +}

However, I would suggest:

static inline bool is_xr(u16 dep)
{
	return !!(dep & (DEP_NR_MASK | DEP_RR_MASK));
}

static inline bool is_rx(u16 dep)
{
	return !!(dep & (DEP_RN_MASK | DEP_RR_MASK));
}


> @@ -1095,11 +1179,18 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
>  		else
>  			head = &lock->class->locks_before;
>  
> +		is_rr = lock->is_rr;
> +
>  		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
>  
>  		list_for_each_entry_rcu(entry, head, entry) {
>  			unsigned int cq_depth;
>  
> +			next_is_rr = pick_dep(is_rr, entry->dep);
> +			if (next_is_rr < 0)
> +				continue;
> +			entry->is_rr = next_is_rr;

		/* Skip *R -> R* relations */
		if (have_xr && is_rx(entry->dep))
			continue;

		entry->have_xr = is_xr(entry->dep);

Which to me is a much simpler construct, hmm?

> +
>  			visit_lock_entry(entry, lock);
>  			if (match(entry, data)) {
>  				*target_entry = entry;

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular()
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular() Boqun Feng
@ 2018-02-22 14:54   ` Peter Zijlstra
  2018-02-22 15:16     ` Peter Zijlstra
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 14:54 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:08:53PM +0800, Boqun Feng wrote:

> +static inline int hlock_conflict(struct lock_list *entry, void *data)
> +{
> +	struct held_lock *hlock = (struct held_lock *)data;
> +
> +	return hlock_class(hlock) == entry->class &&
> +	       (hlock->read != 2 || !entry->is_rr);
> +}

Bah, brain hurts.

So before we add prev -> this, relation, we check if there's a this ->
prev relation already in the graph -- if so that would be a problem.

The above function has @data == @prev (__bfs_forward starts at @this,
looking for @prev), and the above patch augments the 'class_equal' test
with @prev not having read==2 or @entry not having xr;

This is because.... (insert brain hurt)

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 14:26   ` Peter Zijlstra
@ 2018-02-22 15:12     ` Boqun Feng
  2018-02-22 15:30       ` Peter Zijlstra
  2018-02-22 16:08       ` Peter Zijlstra
  0 siblings, 2 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22 15:12 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 5419 bytes --]

On Thu, Feb 22, 2018 at 03:26:14PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:08:52PM +0800, Boqun Feng wrote:
> > Now we have four kinds of dependencies in the dependency graph, and not
> > all the pathes carry strong dependencies, for example:
> > 
> > 	Given lock A, B, C, if we have:
> > 
> > 	CPU1			CPU2
> > 	=============		==============
> > 	write_lock(A);		read_lock(B);
> > 	read_lock(B);		write_lock(C);
> > 
> > 	then we have dependencies A--(NR)-->B, and B--(RN)-->C, (NR and
> > 	RN are to indicate the dependency kind), A actually doesn't have
> > 	strong dependency to C(IOW, C doesn't depend on A), to see this,
>                               ^ missing space
> 
> You're fairly consistent with not putting spaces before opening braces
> in text, please don't do this, this is not a C function call. Also

Got it.

> double check all your braces are matched, IIRC there's at least one that
> isn't closed in the previous patches.
> 

Sure, will do.

> > 	let's say we have a third CPU3 doing:
> > 
> > 	CPU3:
> > 	=============
> > 	write_lock(C);
> > 	write_lock(A);
> > 
> > 	, this is not a deadlock. However if we change the read_lock()
> > 	on CPU2 to a write_lock(), it's a deadlock then.
> > 
> > 	So A --(NR)--> B --(RN)--> C is not a strong dependency path but
> > 	A --(NR)--> B --(NN)-->C is a strong dependency path.
> 
> I'm not really satisfied with the above reasoning. I don't disagree, but
> if possible it would be nice to have something a little more solid.
> 

What do you mean by "solid"? You mean "A --(NR)--> B --(NN)-->C" is too
abstract, and want something like the below instead:

	CPU 1			CPU 2			CPU 3
	===========		============		============
	write_lock(A);
				write_lock(B);
	read_lock(B); // blocked			write_lock(C);
				write_lock(C); // blocked
							write_lock(A); // blocked

?


> 
> > We can generalize this as: If a path of dependencies doesn't have two
> > adjacent dependencies as (*R)--L-->(R*), where L is some lock, it is a
> > strong dependency path, otherwise it's not.
> > 
> > Now our mission is to make __bfs() traverse only the strong dependency
> > paths, which is simple: we record whether we have -(*R)-> at the current
> > tail of the path in lock_list::is_rr, and whenever we pick a dependency
> > in the traverse, we 1) make sure we don't pick a -(R*)-> dependency if
> > our current tail is -(*R)-> and 2) greedily pick a -(*N)-> as hard as
> > possible.
> > 
> > With this extension for __bfs(), we now need to initialize the root of
> > __bfs() properly(with a correct ->is_rr), to do so, we introduce some
>                   ^ another missing space
> 
> Also, I don't like is_rr as a name, have_xr might be better.
> 

Maybe "pick_xr", which means we pick a *R in the BFS search path for the
previous entry, then we can not pick R* for the next entry.

> Only if we combine *R with R* do we have RR.
> 
> > +/*
> > + * return -1 if no proper dependency could be picked
> > + * return 0 if a -(*N)-> dependency could be picked
> > + * return 1 if only a -(*R)-> dependency could be picked
> > + *
> > + * N: non-recursive lock
> > + * R: recursive read lock
> > + */
> > +static inline int pick_dep(u16 is_rr, u16 cap_dep)
> > +{
> > +	if (is_rr) { /* could only pick -(N*)-> */
> > +		if (cap_dep & DEP_NN_MASK)
> > +			return 0;
> > +		else if (cap_dep & DEP_NR_MASK)
> > +			return 1;
> > +		else
> > +			return -1;
> > +	} else {
> > +		if (cap_dep & DEP_NN_MASK || cap_dep & DEP_RN_MASK)
> 
> 	if (cap_dep & (DEP_NN_MASK | DEP_RN_MASK))

Good point!


> 
> > +			return 0;
> > +		else
> > +			return 1;
> > +	}
> > +}
> 
> However, I would suggest:
> 
> static inline bool is_xr(u16 dep)
> {
> 	return !!(dep & (DEP_NR_MASK | DEP_RR_MASK));
> }
> 
> static inline bool is_rx(u16 dep)
> {
> 	return !!(dep & (DEP_RN_MASK | DEP_RR_MASK));
> }
> 
> 
> > @@ -1095,11 +1179,18 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
> >  		else
> >  			head = &lock->class->locks_before;
> >  
> > +		is_rr = lock->is_rr;
> > +
> >  		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
> >  
> >  		list_for_each_entry_rcu(entry, head, entry) {
> >  			unsigned int cq_depth;
> >  
> > +			next_is_rr = pick_dep(is_rr, entry->dep);
> > +			if (next_is_rr < 0)
> > +				continue;
> > +			entry->is_rr = next_is_rr;
> 
> 		/* Skip *R -> R* relations */
> 		if (have_xr && is_rx(entry->dep))
> 			continue;

I don't think this works, if we pick a *R for previous entry, and for
current entry, we have RR, NN and NR, your approach will skip the
current entry, but actually we can pick NN or NR (of course, in __bfs(),
we can greedily pick NN, because if NR causes a deadlock, so does NN).

Note that lock_list::dep can be treated as a set/bitmap for different
kinds of dependencies between the same pair of locks.

Some related explanation of pick_dep() is put in the comment of
__bfs(), which is my bad ;-( I will reorder the code to have an
overall explanation for the algorithm, and put all related functions
after that.

Regards,
Boqun

> 
> 		entry->have_xr = is_xr(entry->dep);
> 
> Which to me is a much simpler construct, hmm?
> 
> > +
> >  			visit_lock_entry(entry, lock);
> >  			if (match(entry, data)) {
> >  				*target_entry = entry;
> 
> 
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular()
  2018-02-22 14:54   ` Peter Zijlstra
@ 2018-02-22 15:16     ` Peter Zijlstra
  2018-02-22 15:44       ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 15:16 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:54:34PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:08:53PM +0800, Boqun Feng wrote:
> 
> > +static inline int hlock_conflict(struct lock_list *entry, void *data)
> > +{
> > +	struct held_lock *hlock = (struct held_lock *)data;
> > +
> > +	return hlock_class(hlock) == entry->class &&
> > +	       (hlock->read != 2 || !entry->is_rr);
> > +}
> 
> Bah, brain hurts.
> 
> So before we add prev -> this, relation, we check if there's a this ->
> prev relation already in the graph -- if so that would be a problem.
> 
> The above function has @data == @prev (__bfs_forward starts at @this,
> looking for @prev), and the above patch augments the 'class_equal' test
> with @prev not having read==2 or @entry not having xr;
> 
> This is because.... (insert brain hurt)

(hlock->read != 2 || !entry->have_xr) := !(hlock->read == 2 && entry->have_xr)

hlock->read == 2 := prev->read == 2
entry->have_xr means the last fwd link has read==2.

Together this gives that:

  @prev (Rx) ---> X ---> @entry (xR)

does not form a cycle, because:

  @enrty (xR) -> @prev (Rx)

is not strong and can be ignored.

Did I get that right? If so, the Changelog needs serious help and code
does too.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 15:12     ` Boqun Feng
@ 2018-02-22 15:30       ` Peter Zijlstra
  2018-02-22 15:51         ` Peter Zijlstra
  2018-02-22 16:08       ` Peter Zijlstra
  1 sibling, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 15:30 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 11:12:10PM +0800, Boqun Feng wrote:
> On Thu, Feb 22, 2018 at 03:26:14PM +0100, Peter Zijlstra wrote:

> > However, I would suggest:
> > 
> > static inline bool is_xr(u16 dep)
> > {
> > 	return !!(dep & (DEP_NR_MASK | DEP_RR_MASK));
> > }
> > 
> > static inline bool is_rx(u16 dep)
> > {
> > 	return !!(dep & (DEP_RN_MASK | DEP_RR_MASK));
> > }
> > 
> > 
> > > @@ -1095,11 +1179,18 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
> > >  		else
> > >  			head = &lock->class->locks_before;
> > >  
> > > +		is_rr = lock->is_rr;
> > > +
> > >  		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
> > >  
> > >  		list_for_each_entry_rcu(entry, head, entry) {
> > >  			unsigned int cq_depth;
> > >  
> > > +			next_is_rr = pick_dep(is_rr, entry->dep);
> > > +			if (next_is_rr < 0)
> > > +				continue;
> > > +			entry->is_rr = next_is_rr;
> > 
> > 		/* Skip *R -> R* relations */
> > 		if (have_xr && is_rx(entry->dep))
> > 			continue;
> 
> I don't think this works, if we pick a *R for previous entry, and for
> current entry, we have RR, NN and NR, your approach will skip the
> current entry, but actually we can pick NN or NR (of course, in __bfs(),
> we can greedily pick NN, because if NR causes a deadlock, so does NN).

I don't get it, afaict my suggestion is identical.

You skip condition: pick_dep() < 0, evaluates to:

	is_rr && (!NN_MASK && !NR_MASK) :=
	is_rr && (RN_MASK | RR_MASK)

Which is exactly what I have.

If that is satisfied, you set entry->is_rr to pick_dep(), which is
harder to evaluate, but is something like:

	is_rr && NR_MASK || !(NN_MASK | RN_MASK) :=
	is_rr && NR_MASK || (NR_MASK | RR_MASK) :=
	(NR_MASK | RR_MASK)

(because is_rr && RR_MASK will have been skipped)

> > 
> > 		entry->have_xr = is_xr(entry->dep);
> > 
> > Which to me is a much simpler construct, hmm?

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular()
  2018-02-22 15:16     ` Peter Zijlstra
@ 2018-02-22 15:44       ` Boqun Feng
  0 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-22 15:44 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 1593 bytes --]

On Thu, Feb 22, 2018 at 04:16:26PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:54:34PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 03:08:53PM +0800, Boqun Feng wrote:
> > 
> > > +static inline int hlock_conflict(struct lock_list *entry, void *data)
> > > +{
> > > +	struct held_lock *hlock = (struct held_lock *)data;
> > > +
> > > +	return hlock_class(hlock) == entry->class &&
> > > +	       (hlock->read != 2 || !entry->is_rr);
> > > +}
> > 
> > Bah, brain hurts.
> > 
> > So before we add prev -> this, relation, we check if there's a this ->
> > prev relation already in the graph -- if so that would be a problem.
> > 
> > The above function has @data == @prev (__bfs_forward starts at @this,
> > looking for @prev), and the above patch augments the 'class_equal' test
> > with @prev not having read==2 or @entry not having xr;
> > 
> > This is because.... (insert brain hurt)
> 
> (hlock->read != 2 || !entry->have_xr) := !(hlock->read == 2 && entry->have_xr)
> 
> hlock->read == 2 := prev->read == 2
> entry->have_xr means the last fwd link has read==2.
> 
> Together this gives that:
> 
>   @prev (Rx) ---> X ---> @entry (xR)
> 
> does not form a cycle, because:
> 
>   @enrty (xR) -> @prev (Rx)
> 
> is not strong and can be ignored.
> 
> Did I get that right? If so, the Changelog needs serious help and code

Yep.. I was about to reply to you with something similar.. I will add a
comment for this function and other "brain-hurting" functions too.

Sorry for the headache ;-(

Regards,
Boqun

> does too.



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 15:30       ` Peter Zijlstra
@ 2018-02-22 15:51         ` Peter Zijlstra
  2018-02-22 16:31           ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 15:51 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 04:30:34PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 11:12:10PM +0800, Boqun Feng wrote:
> > On Thu, Feb 22, 2018 at 03:26:14PM +0100, Peter Zijlstra wrote:
> 
> > > However, I would suggest:
> > > 
> > > static inline bool is_xr(u16 dep)
> > > {
> > > 	return !!(dep & (DEP_NR_MASK | DEP_RR_MASK));
> > > }
> > > 
> > > static inline bool is_rx(u16 dep)
> > > {
> > > 	return !!(dep & (DEP_RN_MASK | DEP_RR_MASK));
> > > }
> > > 
> > > 		/* Skip *R -> R* relations */
> > > 		if (have_xr && is_rx(entry->dep))
> > > 			continue;
> > 
> > I don't think this works, if we pick a *R for previous entry, and for
> > current entry, we have RR, NN and NR, your approach will skip the
> > current entry, but actually we can pick NN or NR (of course, in __bfs(),
> > we can greedily pick NN, because if NR causes a deadlock, so does NN).
> 
> I don't get it, afaict my suggestion is identical.
> 
> You skip condition: pick_dep() < 0, evaluates to:
> 
> 	is_rr && (!NN_MASK && !NR_MASK) :=
> 	is_rr && (RN_MASK | RR_MASK)
> 
> Which is exactly what I have.

Ooh, I think I see what I did wrong, would something like:

	if (have_xr && !is_nx(entry->dep))

work? That's a lot harder to argue about though, still much better than
that tri-state pick thing.

> If that is satisfied, you set entry->is_rr to pick_dep(), which is
> harder to evaluate, but is something like:
> 
> 	is_rr && NR_MASK || !(NN_MASK | RN_MASK) :=
> 	is_rr && NR_MASK || (NR_MASK | RR_MASK) :=
> 	(NR_MASK | RR_MASK)
> 
> (because is_rr && RR_MASK will have been skipped)
> 
> > > 
> > > 		entry->have_xr = is_xr(entry->dep);

This one I think is still correct though.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 15:12     ` Boqun Feng
  2018-02-22 15:30       ` Peter Zijlstra
@ 2018-02-22 16:08       ` Peter Zijlstra
  2018-02-22 16:34         ` Boqun Feng
  1 sibling, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 16:08 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 11:12:10PM +0800, Boqun Feng wrote:
> On Thu, Feb 22, 2018 at 03:26:14PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 03:08:52PM +0800, Boqun Feng wrote:
> > > Now we have four kinds of dependencies in the dependency graph, and not
> > > all the pathes carry strong dependencies, for example:
> > > 
> > > 	Given lock A, B, C, if we have:
> > > 
> > > 	CPU1			CPU2
> > > 	=============		==============
> > > 	write_lock(A);		read_lock(B);
> > > 	read_lock(B);		write_lock(C);
> > > 
> > > 	then we have dependencies A--(NR)-->B, and B--(RN)-->C, (NR and
> > > 	RN are to indicate the dependency kind), A actually doesn't have
> > > 	strong dependency to C(IOW, C doesn't depend on A), to see this,
> > > 	let's say we have a third CPU3 doing:
> > > 
> > > 	CPU3:
> > > 	=============
> > > 	write_lock(C);
> > > 	write_lock(A);
> > > 
> > > 	, this is not a deadlock. However if we change the read_lock()
> > > 	on CPU2 to a write_lock(), it's a deadlock then.
> > > 
> > > 	So A --(NR)--> B --(RN)--> C is not a strong dependency path but
> > > 	A --(NR)--> B --(NN)-->C is a strong dependency path.
> > 
> > I'm not really satisfied with the above reasoning. I don't disagree, but
> > if possible it would be nice to have something a little more solid.
> > 
> 
> What do you mean by "solid"? You mean "A --(NR)--> B --(NN)-->C" is too
> abstract, and want something like the below instead:

The above description mostly leaves it as an exercise to the reader to
'prove' that ignoring *R -> R* is both safe and complete, while that is the
main argument.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 15:51         ` Peter Zijlstra
@ 2018-02-22 16:31           ` Boqun Feng
  2018-02-23  5:02             ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22 16:31 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 2272 bytes --]

On Thu, Feb 22, 2018 at 04:51:43PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 04:30:34PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 11:12:10PM +0800, Boqun Feng wrote:
> > > On Thu, Feb 22, 2018 at 03:26:14PM +0100, Peter Zijlstra wrote:
> > 
> > > > However, I would suggest:
> > > > 
> > > > static inline bool is_xr(u16 dep)
> > > > {
> > > > 	return !!(dep & (DEP_NR_MASK | DEP_RR_MASK));
> > > > }
> > > > 
> > > > static inline bool is_rx(u16 dep)
> > > > {
> > > > 	return !!(dep & (DEP_RN_MASK | DEP_RR_MASK));
> > > > }
> > > > 
> > > > 		/* Skip *R -> R* relations */
> > > > 		if (have_xr && is_rx(entry->dep))
> > > > 			continue;
> > > 
> > > I don't think this works, if we pick a *R for previous entry, and for
> > > current entry, we have RR, NN and NR, your approach will skip the
> > > current entry, but actually we can pick NN or NR (of course, in __bfs(),
> > > we can greedily pick NN, because if NR causes a deadlock, so does NN).
> > 
> > I don't get it, afaict my suggestion is identical.
> > 
> > You skip condition: pick_dep() < 0, evaluates to:
> > 
> > 	is_rr && (!NN_MASK && !NR_MASK) :=
> > 	is_rr && (RN_MASK | RR_MASK)
> > 
> > Which is exactly what I have.
> 
> Ooh, I think I see what I did wrong, would something like:
> 
> 	if (have_xr && !is_nx(entry->dep))
> 
> work? That's a lot harder to argue about though, still much better than

I think it works, although I would prefer the name "has_nx" for the function.

> that tri-state pick thing.
> 

Agree.

> > If that is satisfied, you set entry->is_rr to pick_dep(), which his
> > harder to evaluate, but is something like:
> > 
> > 	is_rr && NR_MASK || !(NN_MASK | RN_MASK) :=

If is_rr is true and NN_MASK is true, pick_dep() will return 0, however,
your expression will return NR_MASK.

	entry->have_xr = !(has_nn(entry->dep) || (!is_rr && has_rn(entry->dep)));
	              := !has_nn(entry->dep) && (is_rr || !has_rn(entry->dep))

Regards,
Boqun

> > 	is_rr && NR_MASK || (NR_MASK | RR_MASK) :=
> > 	(NR_MASK | RR_MASK)
> > 
> > (because is_rr && RR_MASK will have been skipped)
> > 
> > > > 
> > > > 		entry->have_xr = is_xr(entry->dep);
> 
> This one I think is still correct though.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 16:34         ` Boqun Feng
@ 2018-02-22 16:32           ` Peter Zijlstra
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 16:32 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Fri, Feb 23, 2018 at 12:34:26AM +0800, Boqun Feng wrote:
> OK, so I have some 'proof' in patch #16. I could move that proof into the
> commit log, or merge that patch with this one?

Hehe, I'm not yet that far with reading patches.. I'll get to it,
eventually ;-)

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 16:08       ` Peter Zijlstra
@ 2018-02-22 16:34         ` Boqun Feng
  2018-02-22 16:32           ` Peter Zijlstra
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-22 16:34 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 1912 bytes --]

On Thu, Feb 22, 2018 at 05:08:11PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 11:12:10PM +0800, Boqun Feng wrote:
> > On Thu, Feb 22, 2018 at 03:26:14PM +0100, Peter Zijlstra wrote:
> > > On Thu, Feb 22, 2018 at 03:08:52PM +0800, Boqun Feng wrote:
> > > > Now we have four kinds of dependencies in the dependency graph, and not
> > > > all the pathes carry strong dependencies, for example:
> > > > 
> > > > 	Given lock A, B, C, if we have:
> > > > 
> > > > 	CPU1			CPU2
> > > > 	=============		==============
> > > > 	write_lock(A);		read_lock(B);
> > > > 	read_lock(B);		write_lock(C);
> > > > 
> > > > 	then we have dependencies A--(NR)-->B, and B--(RN)-->C, (NR and
> > > > 	RN are to indicate the dependency kind), A actually doesn't have
> > > > 	strong dependency to C(IOW, C doesn't depend on A), to see this,
> > > > 	let's say we have a third CPU3 doing:
> > > > 
> > > > 	CPU3:
> > > > 	=============
> > > > 	write_lock(C);
> > > > 	write_lock(A);
> > > > 
> > > > 	, this is not a deadlock. However if we change the read_lock()
> > > > 	on CPU2 to a write_lock(), it's a deadlock then.
> > > > 
> > > > 	So A --(NR)--> B --(RN)--> C is not a strong dependency path but
> > > > 	A --(NR)--> B --(NN)-->C is a strong dependency path.
> > > 
> > > I'm not really satisfied with the above reasoning. I don't disagree, but
> > > if possible it would be nice to have something a little more solid.
> > > 
> > 
> > What do you mean by "solid"? You mean "A --(NR)--> B --(NN)-->C" is too
> > abstract, and want something like the below instead:
> 
> The above description mostly leaves it as an exercise to the reader to
> 'prove' that ignoring *R -> R* is both safe and complete, while that is the
> main argument.
> 

OK, so I have some 'proof' in patch #16. I could move that proof into the
commit log, or merge that patch with this one?

Regards,
Boqun

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change Boqun Feng
@ 2018-02-22 17:29   ` Peter Zijlstra
  2018-03-16  8:20     ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 17:29 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:08:54PM +0800, Boqun Feng wrote:
> As we have four kinds of dependencies now, check_redundant() should only
> report redundant if we have a dependency path which is equal or
> _stronger_ than the current dependency. For example if in
> check_prev_add() we have:
> 
> 	prev->read == 2 && next->read != 2
> 
> , we should only report redundant if we find a path like:
> 
> 	prev--(RN)-->....--(*N)-->next
> 
> and if we have:
> 
> 	prev->read == 2 && next->read == 2
> 
> , we could report redundant if we find a path like:
> 
> 	prev--(RN)-->....--(*N)-->next
> 
> or
> 
> 	prev--(RN)-->....--(*R)-->next
> 
> To do so, we need to pass the recursive-read status of @next into
> check_redundant().

Very hard to read that.

> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  kernel/locking/lockdep.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index e1be088a34c4..0b0ad3db78b4 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -1338,9 +1338,12 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
>  	return 0;
>  }
>  
> -static inline int class_equal(struct lock_list *entry, void *data)
> +static inline int hlock_equal(struct lock_list *entry, void *data)
>  {
> -	return entry->class == data;
> +	struct held_lock *hlock = (struct held_lock *)data;
> +
> +	return hlock_class(hlock) == entry->class &&
> +	       (hlock->read == 2 || !entry->is_rr);
>  }

So I guess @data = @next, and we're checking if @prev -> @next already
exists.

Since we only care about forward dependencies, @next->read==2 means *R
and if so, any existing link is equal or stronger. If @next->read!=2, it
means *N and we must regard *R as weaker and not match.

OK, that seems to be fine, but again, that function _really_ could do
with a comment.
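
(For illustration, a commented version along the lines of the above
might read as follows; the body is the hunk from the patch, only the
comments are new:)

	/*
	 * @data is @next: match iff the existing dependency
	 * @prev -> @entry->class already covers @prev -> @next.
	 *
	 * If @next is a recursive reader (*R), any existing dependency
	 * is equal or stronger; if @next is *N, an existing dependency
	 * that is only *R is weaker and must not match.
	 */
	static inline int hlock_equal(struct lock_list *entry, void *data)
	{
		struct held_lock *hlock = (struct held_lock *)data;

		return hlock_class(hlock) == entry->class &&	/* same class... */
		       (hlock->read == 2 ||	/* ... and @next is *R, or */
			!entry->is_rr);		/* the found path ends in *N */
	}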

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection Boqun Feng
@ 2018-02-22 17:41   ` Peter Zijlstra
  2018-02-22 17:46   ` Peter Zijlstra
  1 sibling, 0 replies; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 17:41 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:08:55PM +0800, Boqun Feng wrote:
> There are four cases for recursive read lock related deadlocks:
> 
> (--(X..Y)--> means a strong dependency path starts with a --(X*)-->
> dependency and ends with a --(*Y)-- dependency.)
> 
> 1.	An irq-safe lock L1 has a dependency --(*..*)--> to an
> 	irq-unsafe lock L2.
> 
> 2.	An irq-read-safe lock L1 has a dependency --(N..*)--> to an
> 	irq-unsafe lock L2.
> 
> 3.	An irq-safe lock L1 has a dependency --(*..N)--> to an
> 	irq-read-unsafe lock L2.
> 
> 4.	An irq-read-safe lock L1 has a dependency --(N..N)--> to an
> 	irq-read-unsafe lock L2.
> 
> The current check_usage() only checks 1) and 2), so this patch adds
> checks for 3) and 4) and makes sure when find_usage_{back,for}wards find
> an irq-read-{,un}safe lock, the traversal path should end at a
> dependency --(*N)-->. Note when we search backwards, --(*N)--> indicates
> a real dependency --(N*)-->.
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  kernel/locking/lockdep.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 0b0ad3db78b4..bd3eef664f9d 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -1504,7 +1504,14 @@ check_redundant(struct lock_list *root, struct held_lock *target,
>  
>  static inline int usage_match(struct lock_list *entry, void *bit)
>  {
> -	return entry->class->usage_mask & (1 << (enum lock_usage_bit)bit);
> +	enum lock_usage_bit ub = (enum lock_usage_bit)bit;
> +
> +
> +	if (ub & 1)
> +		return entry->class->usage_mask & (1 << ub) &&
> +		       !entry->is_rr;
> +	else
> +		return entry->class->usage_mask & (1 << ub);
>  }

The whole is_rr/have_xr thing and backwards hurts my brain. That really
wants more than a little 'Note'.

Also, the above is unreadable, something like:

	unsigned long usage_mask = entry->class->usage_mask;
	enum lock_usage_bit ub = (enum lock_usage_bit)bit;
	unsigned long mask = 1ULL << ub;

	if (ub & 1) /* __STATE_RR */
		return !entry->have_xr && (usage_mask & mask);

	return !!(usage_mask & mask);

maybe. Also, perhaps we should make __bfs(.match) have a bool return
value.
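
(A sketch of the same check with a bool-returning match function, as
suggested; behaviour is meant to be unchanged from the rewrite above,
and have_xr is the proposed name for the series' is_rr field:)

	static inline bool usage_match(struct lock_list *entry, void *bit)
	{
		enum lock_usage_bit ub = (enum lock_usage_bit)bit;
		unsigned long usage_mask = entry->class->usage_mask;
		unsigned long mask = 1UL << ub;

		if (ub & 1) /* __STATE_RR */
			return !entry->have_xr && (usage_mask & mask);

		return !!(usage_mask & mask);
	}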

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection Boqun Feng
  2018-02-22 17:41   ` Peter Zijlstra
@ 2018-02-22 17:46   ` Peter Zijlstra
  2018-02-23  8:21     ` Boqun Feng
  1 sibling, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-22 17:46 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:08:55PM +0800, Boqun Feng wrote:
> There are four cases for recursive read lock related deadlocks:
> 
> (--(X..Y)--> means a strong dependency path starts with a --(X*)-->
> dependency and ends with a --(*Y)-- dependency.)
> 
> 1.	An irq-safe lock L1 has a dependency --(*..*)--> to an
> 	irq-unsafe lock L2.
> 
> 2.	An irq-read-safe lock L1 has a dependency --(N..*)--> to an
> 	irq-unsafe lock L2.
> 
> 3.	An irq-safe lock L1 has a dependency --(*..N)--> to an
> 	irq-read-unsafe lock L2.
> 
> 4.	An irq-read-safe lock L1 has a dependency --(N..N)--> to an
> 	irq-read-unsafe lock L2.
> 
> The current check_usage() only checks 1) and 2), so this patch adds
> checks for 3) and 4) and makes sure when find_usage_{back,for}wards find
> an irq-read-{,un}safe lock, the traversal path should end at a
> dependency --(*N)-->. Note when we search backwards, --(*N)--> indicates
> a real dependency --(N*)-->.

This adds 4 __bfs() searches for every new link.

Can't we make the existing traversals smarter?

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-22 16:31           ` Boqun Feng
@ 2018-02-23  5:02             ` Boqun Feng
  2018-02-23 11:15               ` Peter Zijlstra
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-23  5:02 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 2832 bytes --]

On Fri, Feb 23, 2018 at 12:31:20AM +0800, Boqun Feng wrote:
> On Thu, Feb 22, 2018 at 04:51:43PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 04:30:34PM +0100, Peter Zijlstra wrote:
> > > On Thu, Feb 22, 2018 at 11:12:10PM +0800, Boqun Feng wrote:
> > > > On Thu, Feb 22, 2018 at 03:26:14PM +0100, Peter Zijlstra wrote:
> > > 
> > > > > However, I would suggest:
> > > > > 
> > > > > static inline bool is_xr(u16 dep)
> > > > > {
> > > > > 	return !!(dep & (DEP_NR_MASK | DEP_RR_MASK));
> > > > > }
> > > > > 
> > > > > static inline bool is_rx(u16 dep)
> > > > > {
> > > > > 	return !!(dep & (DEP_RN_MASK | DEP_RR_MASK));
> > > > > }
> > > > > 
> > > > > 		/* Skip *R -> R* relations */
> > > > > 		if (have_xr && is_rx(entry->dep))
> > > > > 			continue;
> > > > 
> > > > I don't think this works, if we pick a *R for previous entry, and for
> > > > current entry, we have RR, NN and NR, your approach will skip the
> > > > current entry, but actually we can pick NN or NR (of course, in __bfs(),
> > > > we can greedily pick NN, because if NR causes a deadlock, so does NN).
> > > 
> > > I don't get it, afaict my suggestion is identical.
> > > 
> > > You skip condition: pick_dep() < 0, evaluates to:
> > > 
> > > 	is_rr && (!NN_MASK && !NR_MASK) :=
> > > 	is_rr && (RN_MASK | RR_MASK)
> > > 
> > > Which is exactly what I have.
> > 
> > Ooh, I think I see what I did wrong, would something like:
> > 
> > 	if (have_xr && !is_nx(entry-dep))
> > 
> > work? That's a lot harder to argue about though, still much better than
> 
> I think it works, although I would prefer the name "has_nx" for the function.
> 
> > that tri-state pick thing.
> > 
> 
> Agree.
> 
> > > If that is satisfied, you set entry->is_rr to pick_dep(), which his
> > > harder to evaluate, but is something like:
> > > 
> > > 	is_rr && NR_MASK || !(NN_MASK | RN_MASK) :=
> 
> If is_rr is true and NN_MASK is true, pick_dep() will return 0, however,
> your expression will return NR_MASK.
> 

It was too late last night, I was meant to say, the correct
entry->have_xr should be as follow:

> 	entry->have_xr = !(has_nn(entry->dep) || (!is_rr && has_rn(entry->dep)));
> 	              := !has_nn(entry->dep) && (is_rr || !has_rn(entry->dep))
> 

so it seems that we have to introduce is_{nn,rn,nx}() (sketched just
below). I'm not sure introducing three one-off helpers is a good
direction to go; one benefit of using pick_dep() is that we can keep the
whole logic in one function.
Thoughts?
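
For concreteness, the three one-off helpers in question would just be
(a sketch, using the DEP_*_MASK bits from patch #4):

	static inline bool has_nn(u16 dep)
	{
		return !!(dep & DEP_NN_MASK);
	}

	static inline bool has_rn(u16 dep)
	{
		return !!(dep & DEP_RN_MASK);
	}

	static inline bool has_nx(u16 dep)
	{
		return !!(dep & (DEP_NN_MASK | DEP_NR_MASK));
	}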

Regards,
Boqun

> Regards,
> Boqun
> 
> > > 	is_rr && NR_MASK || (NR_MASK | RR_MASK) :=
> > > 	(NR_MASK | RR_MASK)
> > > 
> > > (because is_rr && RR_MASK will have been skipped)
> > > 
> > > > > 
> > > > > 		entry->have_xr = is_xr(entry->dep);
> > 
> > This one I think is still correct though.



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection
  2018-02-22 17:46   ` Peter Zijlstra
@ 2018-02-23  8:21     ` Boqun Feng
  2018-02-23  8:58       ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-23  8:21 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 1391 bytes --]

On Thu, Feb 22, 2018 at 06:46:54PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:08:55PM +0800, Boqun Feng wrote:
> > There are four cases for recursive read lock realted deadlocks:
> > 
> > (--(X..Y)--> means a strong dependency path starts with a --(X*)-->
> > dependency and ends with a --(*Y)-- dependency.)
> > 
> > 1.	An irq-safe lock L1 has a dependency --(*..*)--> to an
> > 	irq-unsafe lock L2.
> > 
> > 2.	An irq-read-safe lock L1 has a dependency --(N..*)--> to an
> > 	irq-unsafe lock L2.
> > 
> > 3.	An irq-safe lock L1 has a dependency --(*..N)--> to an
> > 	irq-read-unsafe lock L2.
> > 
> > 4.	An irq-read-safe lock L1 has a dependency --(N..N)--> to an
> > 	irq-read-unsafe lock L2.
> > 
> > The current check_usage() only checks 1) and 2), so this patch adds
> > checks for 3) and 4) and makes sure when find_usage_{back,for}wards find
> > an irq-read-{,un}safe lock, the traverse path should ends at a
> > dependency --(*N)-->. Note when we search backwards, --(*N)--> indicates
> > a real dependency --(N*)-->.
> 
> This adds 4 __bfs() searches for every new link.
> 
> Can't we make the existing traversals smarter?

Haven't really thought this one through, I will try. But as you said, we
only need to do more searches for _new_ links, so I think it's in the slow
path; would the performance matter that much?

Regards,
Boqun

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection
  2018-02-23  8:21     ` Boqun Feng
@ 2018-02-23  8:58       ` Boqun Feng
  2018-02-23 11:36         ` Peter Zijlstra
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-23  8:58 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 2345 bytes --]

On Fri, Feb 23, 2018 at 04:21:34PM +0800, Boqun Feng wrote:
> On Thu, Feb 22, 2018 at 06:46:54PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 03:08:55PM +0800, Boqun Feng wrote:
> > > There are four cases for recursive read lock realted deadlocks:
> > > 
> > > (--(X..Y)--> means a strong dependency path starts with a --(X*)-->
> > > dependency and ends with a --(*Y)-- dependency.)
> > > 
> > > 1.	An irq-safe lock L1 has a dependency --(*..*)--> to an
> > > 	irq-unsafe lock L2.
> > > 
> > > 2.	An irq-read-safe lock L1 has a dependency --(N..*)--> to an
> > > 	irq-unsafe lock L2.
> > > 
> > > 3.	An irq-safe lock L1 has a dependency --(*..N)--> to an
> > > 	irq-read-unsafe lock L2.
> > > 
> > > 4.	An irq-read-safe lock L1 has a dependency --(N..N)--> to an
> > > 	irq-read-unsafe lock L2.
> > > 
> > > The current check_usage() only checks 1) and 2), so this patch adds
> > > checks for 3) and 4) and makes sure when find_usage_{back,for}wards find
> > > an irq-read-{,un}safe lock, the traverse path should ends at a
> > > dependency --(*N)-->. Note when we search backwards, --(*N)--> indicates
> > > a real dependency --(N*)-->.
> > 
> > This adds 4 __bfs() searches for every new link.
> > 
> > Can't we make the existing traversals smarter?
> 
> Haven't really thought this one through, I will try. But as you said, we

Hmm... thinking again, maybe I can combine case 1 with 3, and case 2 with
4, because each of them could share the same find_usage_backwards(), and
find_usage_forwards() uses a usage_match_forwards() as follows for the
match function:

	static inline int usage_match_forwards(struct lock_list *entry, void *bit)
	{
		enum lock_usage_bit ub = (enum lock_usage_bit)bit;
		unsigned long mask;
		unsigned long read_mask;

		/* mask out the read bit */
		ub &= ~1;

		mask = 1ULL << ub;
		read_mask = 1ULL << (ub + 1);

		return (entry->class->usage_mask & mask) ||  // *-> L2 and L2 is an irq-unsafe lock
		       ((entry->class->usage_mask & read_mask) && !entry->is_rr); // N-> L2 and L2 is an irq-read-unsafe lock
	}

Got a bus to catch, I can explain this later, if you need ;-)

Regards,
Boqun

> only need to do more searches for _new_ links, so I think it's the slow
> path, would the performance matter that much?
> 
> Regards,
> Boqun



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies
  2018-02-23  5:02             ` Boqun Feng
@ 2018-02-23 11:15               ` Peter Zijlstra
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-23 11:15 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Fri, Feb 23, 2018 at 01:02:09PM +0800, Boqun Feng wrote:
> > 	entry->have_xr = !(has_nn(entry->dep) || (!is_rr && has_rn(entry->dep)));
> > 	              := !has_nn(entry->dep) && (is_rr || !has_rn(entry->dep))
> > 
> 
> so it seems that we have to introduce is_{nn,rn,nx}(), I'm not sure
> introducing three one-off helpers is a good direction to go. One benefit
> of using pick_dep() is that we can keep the whole logic in one function.
> Thoughts?

Urgh, I see...

Damn this is confusing, I'm sure there's something simple we're missing.
Let me go stare at the earlier patches again.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection
  2018-02-23  8:58       ` Boqun Feng
@ 2018-02-23 11:36         ` Peter Zijlstra
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-23 11:36 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Fri, Feb 23, 2018 at 04:58:12PM +0800, Boqun Feng wrote:
> Hmm... thinking again, maybe I can combine case 1 with 3, and case 2 with
> 4, because each of them could share the same find_usage_backwards(), and
> find_usage_forwards() uses a usage_match_forwards() as follows for the
> match function:
> 
> 	static inline int usage_match_forwards(struct lock_list *entry, void *bit)
> 	{
> 		enum lock_usage_bit ub = (enum lock_usage_bit)bit;
> 		unsigned long mask;
> 		unsigned long read_mask;
> 
> 		/* mask out the read bit */
> 		ub &= ~1;
> 
> 		mask = 1ULL << ub;
> 		read_mask = 1ULL << (ub + 1);
> 
> 		return (entry->class->usage_mask & mask) ||  // *-> L2 and L2 is an irq-unsafe lock
> 		       ((entry->class->usage_mask & read_mask) && !entry->is_rr); // N-> L2 and L2 is an irq-read-unsafe lock
> 	}
> 
> Got a bus to catch, I can explain this later, if you need ;-)

Right, that's about what I was thinking of. Clearly that needs a wee
comment but it's much better.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 15/17] lockdep: Reduce the size of lock_list
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 15/17] lockdep: Reduce the size of lock_list Boqun Feng
@ 2018-02-23 11:38   ` Peter Zijlstra
  2018-02-23 12:40     ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-23 11:38 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:09:02PM +0800, Boqun Feng wrote:
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index a1f91f8680bd..3fce8dbf5091 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -186,11 +186,11 @@ struct lock_list {
>  	struct list_head		entry;
>  	struct lock_class		*class;
>  	struct stack_trace		trace;
> -	int				distance;
> +	u16				distance;
>  	/* bitmap of different dependencies from head to this */
> -	u16				dep;
> +	u8				dep;
>  	/* used by BFS to record whether this is picked as a recursive read */
> -	u16				is_rr;
> +	bool				is_rr;

Don't use bool, use u8 if you want a single byte. sizeof(_Bool) is part
of the architecture ABI and can be up to 4 bytes (well really anything,
but I've not seen it bigger than 4 bytes for any sane implementation).
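
(Sketch of the fields touched by the hunk above with that change
applied; the three fields then pack into 4 bytes:)

	u16				distance;
	/* bitmap of different dependencies from head to this */
	u8				dep;
	/* used by BFS to record whether this is picked as a recursive read */
	u8				is_rr;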

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep Boqun Feng
@ 2018-02-23 11:55   ` Peter Zijlstra
  2018-02-23 12:37     ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-23 11:55 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Thu, Feb 22, 2018 at 03:08:51PM +0800, Boqun Feng wrote:
> @@ -1012,6 +1013,33 @@ static inline bool bfs_error(enum bfs_result res)
>  	return res < 0;
>  }
>  
> +#define DEP_NN_BIT 0
> +#define DEP_RN_BIT 1
> +#define DEP_NR_BIT 2
> +#define DEP_RR_BIT 3
> +
> +#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> +#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> +#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> +#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> +
> +static inline unsigned int __calc_dep_bit(int prev, int next)
> +{
> +	if (prev == 2 && next != 2)
> +		return DEP_RN_BIT;
> +	if (prev != 2 && next == 2)
> +		return DEP_NR_BIT;
> +	if (prev == 2 && next == 2)
> +		return DEP_RR_BIT;
> +	else
> +		return DEP_NN_BIT;
> +}
> +
> +static inline unsigned int calc_dep(int prev, int next)
> +{
> +	return 1U << __calc_dep_bit(prev, next);
> +}
> +
>  static enum bfs_result __bfs(struct lock_list *source_entry,
>  			     void *data,
>  			     int (*match)(struct lock_list *entry, void *data),
> @@ -1921,6 +1949,16 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
>  		if (entry->class == hlock_class(next)) {
>  			if (distance == 1)
>  				entry->distance = 1;
> +			entry->dep |= calc_dep(prev->read, next->read);
> +		}
> +	}
> +
> +	/* Also, update the reverse dependency in @next's ->locks_before list */
> +	list_for_each_entry(entry, &hlock_class(next)->locks_before, entry) {
> +		if (entry->class == hlock_class(prev)) {
> +			if (distance == 1)
> +				entry->distance = 1;
> +			entry->dep |= calc_dep(next->read, prev->read);
>  			return 1;
>  		}
>  	}

I think it all becomes simpler if you use only 2 bits. Such that:

  bit0 is the prev R (0) or N (1) value,
  bit1 is the next R (0) or N (1) value.

I think this should work because we don't care about the empty set
(currently 0000) and all the complexity in patch 5 is because we can
have R bits set when there are also N bits. The consequence of that is
that we cannot replace ! with ~ (which is what I kept doing).

But with only 2 bits, we only track the strongest relation in the set,
which is exactly what we appear to need.


The above then becomes something like:

static inline u8 __calc_dep(struct held_lock *lock)
{
	return lock->read != 2;
}

static inline u8
calc_dep(struct held_lock *prev, struct held_lock *next)
{
	return (__calc_dep(prev) << 0) | (__calc_dep(next) << 1);
}


	entry->dep |= calc_dep(prev, next);



Then the stuff from 5 can be:

static inline bool is_rx(u8 dep)
{
	return !(dep & 1);
}

static inline bool is_xr(u8 dep)
{
	return !(dep & 2);
}


	if (have_xr && is_rx(entry->dep))
		continue;

	entry->have_xr = is_xr(entry->dep);


Or did I mess that up somewhere?

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-23 11:55   ` Peter Zijlstra
@ 2018-02-23 12:37     ` Boqun Feng
  2018-02-24  5:32       ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-23 12:37 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 3832 bytes --]

On Fri, Feb 23, 2018 at 12:55:20PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:08:51PM +0800, Boqun Feng wrote:
> > @@ -1012,6 +1013,33 @@ static inline bool bfs_error(enum bfs_result res)
> >  	return res < 0;
> >  }
> >  
> > +#define DEP_NN_BIT 0
> > +#define DEP_RN_BIT 1
> > +#define DEP_NR_BIT 2
> > +#define DEP_RR_BIT 3
> > +
> > +#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> > +#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> > +#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> > +#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> > +
> > +static inline unsigned int __calc_dep_bit(int prev, int next)
> > +{
> > +	if (prev == 2 && next != 2)
> > +		return DEP_RN_BIT;
> > +	if (prev != 2 && next == 2)
> > +		return DEP_NR_BIT;
> > +	if (prev == 2 && next == 2)
> > +		return DEP_RR_BIT;
> > +	else
> > +		return DEP_NN_BIT;
> > +}
> > +
> > +static inline unsigned int calc_dep(int prev, int next)
> > +{
> > +	return 1U << __calc_dep_bit(prev, next);
> > +}
> > +
> >  static enum bfs_result __bfs(struct lock_list *source_entry,
> >  			     void *data,
> >  			     int (*match)(struct lock_list *entry, void *data),
> > @@ -1921,6 +1949,16 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
> >  		if (entry->class == hlock_class(next)) {
> >  			if (distance == 1)
> >  				entry->distance = 1;
> > +			entry->dep |= calc_dep(prev->read, next->read);
> > +		}
> > +	}
> > +
> > +	/* Also, update the reverse dependency in @next's ->locks_before list */
> > +	list_for_each_entry(entry, &hlock_class(next)->locks_before, entry) {
> > +		if (entry->class == hlock_class(prev)) {
> > +			if (distance == 1)
> > +				entry->distance = 1;
> > +			entry->dep |= calc_dep(next->read, prev->read);
> >  			return 1;
> >  		}
> >  	}
> 
> I think it all becomes simpler if you use only 2 bits. Such that:
> 
>   bit0 is the prev R (0) or N (1) value,
>   bit1 is the next R (0) or N (1) value.
> 
> I think this should work because we don't care about the empty set
> (currently 0000) and all the complexity in patch 5 is because we can
> have R bits set when there are also N bits. The consequence of that is
> that we cannot replace ! with ~ (which is what I kept doing).
> 
> But with only 2 bits, we only track the strongest relation in the set,
> which is exactly what we appear to need.
> 

But if we only have RN and NR, both bits will be set, so we cannot check
whether we have NN or not. Consider we have:

	A -(RR)-> B
	B -(NR)-> C and B -(RN)-> C
	C -(RN)-> A

this is not a deadlock case, but with the "two bits" approach, we cannot
distinguish it from:

	A -(RR)-> B
	B -(NN)-> C
	C -(RN)-> A

, which is a deadlock.
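
For example (a sketch), the second graph can arise from:

	TASK1:
		read_lock(A);
		read_lock(B);

	TASK2:
		write_lock(B);
		write_lock(C);

	TASK3:
		read_lock(C);
		write_lock(A);

where TASK1 waits for B (write-held by TASK2), TASK2 waits for C
(read-held by TASK3), and TASK3 waits for A (read-held by TASK1).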

But maybe "three bits" (NR, RN and NN bits) approach works, that is if
->dep is 0, we indicates this is only RR, and is_rx() becomes:

	static inline bool is_rx(u8 dep)
	{
		return !(dep & (NR_MASK | NN_MASK));
	}

and is_xr() becomes:

	static inline bool is_xr(u8 dep)
	{
		return !(dep & (RN_MASK | NN_MASK));
	}

, with this I think your simplification with have_xr works, thanks!

Regards,
Boqun

> 
> The above then becomes something like:
> 
> static inline u8 __calc_dep(struct held_lock *lock)
> {
> 	return lock->read != 2;
> }
> 
> static inline u8
> calc_dep(struct held_lock *prev, struct held_lock *next)
> {
> 	return (__calc_dep(prev) << 0) | (__calc_dep(next) << 1);
> }
> 
> 
> 	entry->dep |= calc_dep(prev, next);
> 
> 
> 
> Then the stuff from 5 can be:
> 
> static inline bool is_rx(u8 dep)
> {
> 	return !(dep & 1);
> }
> 
> static inline bool is_xr(u8 dep)
> {
> 	return !(dep & 2);
> }
> 
> 
> 	if (have_xr && is_rx(entry->dep))
> 		continue;
> 
> 	entry->have_xr = is_xr(entry->dep);
> 
> 
> Or did I mess that up somewhere?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 15/17] lockdep: Reduce the size of lock_list
  2018-02-23 11:38   ` Peter Zijlstra
@ 2018-02-23 12:40     ` Boqun Feng
  0 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-23 12:40 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 993 bytes --]

On Fri, Feb 23, 2018 at 12:38:43PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:09:02PM +0800, Boqun Feng wrote:
> > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > index a1f91f8680bd..3fce8dbf5091 100644
> > --- a/include/linux/lockdep.h
> > +++ b/include/linux/lockdep.h
> > @@ -186,11 +186,11 @@ struct lock_list {
> >  	struct list_head		entry;
> >  	struct lock_class		*class;
> >  	struct stack_trace		trace;
> > -	int				distance;
> > +	u16				distance;
> >  	/* bitmap of different dependencies from head to this */
> > -	u16				dep;
> > +	u8				dep;
> >  	/* used by BFS to record whether this is picked as a recursive read */
> > -	u16				is_rr;
> > +	bool				is_rr;
> 
> Don't use bool, use u8 if you want a single byte. sizeof(_Bool) is part
> of the architecture ABI and can be up to 4 bytes (well really anything,
> but I've not seen it bigger than 4 bytes for any sane implementation).

Got it, thanks!

Regards,
Boqun

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-23 12:37     ` Boqun Feng
@ 2018-02-24  5:32       ` Boqun Feng
  2018-02-24  6:30         ` Boqun Feng
  2018-02-24  7:31         ` Boqun Feng
  0 siblings, 2 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-24  5:32 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 5705 bytes --]

On Fri, Feb 23, 2018 at 08:37:32PM +0800, Boqun Feng wrote:
> On Fri, Feb 23, 2018 at 12:55:20PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 03:08:51PM +0800, Boqun Feng wrote:
> > > @@ -1012,6 +1013,33 @@ static inline bool bfs_error(enum bfs_result res)
> > >  	return res < 0;
> > >  }
> > >  
> > > +#define DEP_NN_BIT 0
> > > +#define DEP_RN_BIT 1
> > > +#define DEP_NR_BIT 2
> > > +#define DEP_RR_BIT 3
> > > +
> > > +#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> > > +#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> > > +#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> > > +#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> > > +
> > > +static inline unsigned int __calc_dep_bit(int prev, int next)
> > > +{
> > > +	if (prev == 2 && next != 2)
> > > +		return DEP_RN_BIT;
> > > +	if (prev != 2 && next == 2)
> > > +		return DEP_NR_BIT;
> > > +	if (prev == 2 && next == 2)
> > > +		return DEP_RR_BIT;
> > > +	else
> > > +		return DEP_NN_BIT;
> > > +}
> > > +
> > > +static inline unsigned int calc_dep(int prev, int next)
> > > +{
> > > +	return 1U << __calc_dep_bit(prev, next);
> > > +}
> > > +
> > >  static enum bfs_result __bfs(struct lock_list *source_entry,
> > >  			     void *data,
> > >  			     int (*match)(struct lock_list *entry, void *data),
> > > @@ -1921,6 +1949,16 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
> > >  		if (entry->class == hlock_class(next)) {
> > >  			if (distance == 1)
> > >  				entry->distance = 1;
> > > +			entry->dep |= calc_dep(prev->read, next->read);
> > > +		}
> > > +	}
> > > +
> > > +	/* Also, update the reverse dependency in @next's ->locks_before list */
> > > +	list_for_each_entry(entry, &hlock_class(next)->locks_before, entry) {
> > > +		if (entry->class == hlock_class(prev)) {
> > > +			if (distance == 1)
> > > +				entry->distance = 1;
> > > +			entry->dep |= calc_dep(next->read, prev->read);
> > >  			return 1;
> > >  		}
> > >  	}
> > 
> > I think it all becomes simpler if you use only 2 bits. Such that:
> > 
> >   bit0 is the prev R (0) or N (1) value,
> >   bit1 is the next R (0) or N (1) value.
> > 
> > I think this should work because we don't care about the empty set
> > (currently 0000) and all the complexity in patch 5 is because we can
> > have R bits set when there's also N bits. The concequence of that is
> > that we cannot replace ! with ~ (which is what I kept doing).
> > 
> > But with only 2 bits, we only track the strongest relation in the set,
> > which is exactly what we appear to need.
> > 
> 
> But if we only have RN and NR, both bits will be set, we can not check
> whether we have NN or not. Consider we have:
> 
> 	A -(RR)-> B
> 	B -(NR)-> C and B -(RN)-> C
> 	C -(RN)-> A
> 
> this is not a deadlock case, but with "two bits" approach, we can not
> differ this with:
> 
> 	A -(RR)-> B
> 	B -(NN)-> C
> 	C -(RN)-> A
> 
> , which is a deadlock.
> 
> But maybe "three bits" (NR, RN and NN bits) approach works, that is if
> ->dep is 0, we indicates this is only RR, and is_rx() becomes:
> 
> 	static inline bool is_rx(u8 dep)
> 	{
> 		return !(dep & (NR_MASK | NN_MASK));
> 	}
> 
> and is_xr() becomes:
> 
> 	static inline bool is_xr(u8 dep)
> 	{
> 		return !(dep & (RN_MASK | NN_MASK));
> 	}
> 
> , with this I think your simplification with have_xr works, thanks!
> 

Ah! I see. Actually your very first approach works, except the
definitions of is_rx() and is_xr() are wrong. In that approach, you
define
	
	static inline bool is_rx(u8 dep)
	{
		return !!(dep & (DEP_RR_MASK | DEP_RN_MASK));
	}

, which means "whether we have an R* dependency?". But in fact, what we
need to check is "whether we _only_ have R* dependencies?". If so, and
have_xr is true, that means we could only have a -(*R)-> A -(R*)-> if we
pick the next dependency, and that means we should skip. So my new
definition above works, and I think we'd better name it only_rx() to
avoid confusion? Ditto for is_xr().

I also reordered the bit numbers for each kind of dependency, so that we
have a simple __calc_dep_bit(); see the following:

	/*
	 * DEP_*_BIT in lock_list::dep
	 *
	 * For dependency @prev -> @next:
	 *
	 *   RR: both @prev and @next are recursive read locks, i.e. ->read == 2.
	 *   RN: @prev is recursive and @next is non-recursive.
	 *   NR: @prev is non-recursive and @next is recursive.
	 *   NN: both @prev and @next are non-recursive.
	 * 
	 * Note that we define the value of DEP_*_BITs so that:
	 * 	bit0 is prev->read != 2
	 * 	bit1 is next->read != 2
	 */
	#define DEP_RR_BIT 0
	#define DEP_RN_BIT 1
	#define DEP_NR_BIT 2
	#define DEP_NN_BIT 3

	#define DEP_RR_MASK (1U << (DEP_RR_BIT))
	#define DEP_RN_MASK (1U << (DEP_RN_BIT))
	#define DEP_NR_MASK (1U << (DEP_NR_BIT))
	#define DEP_NN_MASK (1U << (DEP_NN_BIT))

	static inline unsigned int
	__calc_dep_bit(struct held_lock *prev, struct held_lock *next)
	{
		return (prev->read != 2) + ((next->read != 2) << 1);
	}

	static inline u8 calc_dep(struct held_lock *prev, struct held_lock *next)
	{
		return 1U << __calc_dep_bit(prev, next);
	}

 	static inline bool only_rx(u8 dep)
 	{
 		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
 	}

 	static inline bool only_xr(u8 dep)
 	{
 		return !(dep & (DEP_RN_MASK | DEP_NN_MASK));
 	}

Note that we actually don't need DEP_RR_BIT, but I leave it there for
implementation simplicity. With this, your check and set below works.

Thoughts?

Regards,
Boqun

> > 
> > 
> > 	if (have_xr && is_rx(entry->dep))
> > 		continue;
> > 
> > 	entry->have_xr = is_xr(entry->dep);
> > 
> > 
> > Or did I mess that up somewhere?



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-24  5:32       ` Boqun Feng
@ 2018-02-24  6:30         ` Boqun Feng
  2018-02-24  8:38           ` Peter Zijlstra
  2018-02-24  7:31         ` Boqun Feng
  1 sibling, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-24  6:30 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 6697 bytes --]

On Sat, Feb 24, 2018 at 01:32:50PM +0800, Boqun Feng wrote:
> On Fri, Feb 23, 2018 at 08:37:32PM +0800, Boqun Feng wrote:
> > On Fri, Feb 23, 2018 at 12:55:20PM +0100, Peter Zijlstra wrote:
> > > On Thu, Feb 22, 2018 at 03:08:51PM +0800, Boqun Feng wrote:
> > > > @@ -1012,6 +1013,33 @@ static inline bool bfs_error(enum bfs_result res)
> > > >  	return res < 0;
> > > >  }
> > > >  
> > > > +#define DEP_NN_BIT 0
> > > > +#define DEP_RN_BIT 1
> > > > +#define DEP_NR_BIT 2
> > > > +#define DEP_RR_BIT 3
> > > > +
> > > > +#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> > > > +#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> > > > +#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> > > > +#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> > > > +
> > > > +static inline unsigned int __calc_dep_bit(int prev, int next)
> > > > +{
> > > > +	if (prev == 2 && next != 2)
> > > > +		return DEP_RN_BIT;
> > > > +	if (prev != 2 && next == 2)
> > > > +		return DEP_NR_BIT;
> > > > +	if (prev == 2 && next == 2)
> > > > +		return DEP_RR_BIT;
> > > > +	else
> > > > +		return DEP_NN_BIT;
> > > > +}
> > > > +
> > > > +static inline unsigned int calc_dep(int prev, int next)
> > > > +{
> > > > +	return 1U << __calc_dep_bit(prev, next);
> > > > +}
> > > > +
> > > >  static enum bfs_result __bfs(struct lock_list *source_entry,
> > > >  			     void *data,
> > > >  			     int (*match)(struct lock_list *entry, void *data),
> > > > @@ -1921,6 +1949,16 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
> > > >  		if (entry->class == hlock_class(next)) {
> > > >  			if (distance == 1)
> > > >  				entry->distance = 1;
> > > > +			entry->dep |= calc_dep(prev->read, next->read);
> > > > +		}
> > > > +	}
> > > > +
> > > > +	/* Also, update the reverse dependency in @next's ->locks_before list */
> > > > +	list_for_each_entry(entry, &hlock_class(next)->locks_before, entry) {
> > > > +		if (entry->class == hlock_class(prev)) {
> > > > +			if (distance == 1)
> > > > +				entry->distance = 1;
> > > > +			entry->dep |= calc_dep(next->read, prev->read);
> > > >  			return 1;
> > > >  		}
> > > >  	}
> > > 
> > > I think it all becomes simpler if you use only 2 bits. Such that:
> > > 
> > >   bit0 is the prev R (0) or N (1) value,
> > >   bit1 is the next R (0) or N (1) value.
> > > 
> > > I think this should work because we don't care about the empty set
> > > (currently 0000) and all the complexity in patch 5 is because we can
> > > have R bits set when there's also N bits. The concequence of that is
> > > that we cannot replace ! with ~ (which is what I kept doing).
> > > 
> > > But with only 2 bits, we only track the strongest relation in the set,
> > > which is exactly what we appear to need.
> > > 
> > 
> > But if we only have RN and NR, both bits will be set, we can not check
> > whether we have NN or not. Consider we have:
> > 
> > 	A -(RR)-> B
> > 	B -(NR)-> C and B -(RN)-> C
> > 	C -(RN)-> A
> > 
> > this is not a deadlock case, but with "two bits" approach, we can not
> > differ this with:
> > 
> > 	A -(RR)-> B
> > 	B -(NN)-> C
> > 	C -(RN)-> A
> > 
> > , which is a deadlock.
> > 
> > But maybe "three bits" (NR, RN and NN bits) approach works, that is if
> > ->dep is 0, we indicates this is only RR, and is_rx() becomes:
> > 
> > 	static inline bool is_rx(u8 dep)
> > 	{
> > 		return !(dep & (NR_MASK | NN_MASK));
> > 	}
> > 
> > and is_xr() becomes:
> > 
> > 	static inline bool is_xr(u8 dep)
> > 	{
> > 		return !(dep & (RN_MASK | NN_MASK));
> > 	}
> > 
> > , with this I think your simplification with have_xr works, thanks!
> > 
> 
> Ah! I see. Actually your very first approach works, except the
> definitions of is_rx() and is_xr() are wrong. In that approach, you
> define
> 	
> 	static inline bool is_rx(u8 dep)
> 	{
> 		return !!(dep & (DEP_RR_MASK | DEP_RN_MASK));
> 	}
> 
> , which means "whether we have a R* dependency?". But in fact, what we
> need to check is "whether we _only_ have R* dependencies?", if so and
> have_xr is true, that means we could only have a -(*R)-> A -(R*)-> if we
> pick the next dependency, and that means we should skip. So my new
> definition above works, and I think we better name it as only_rx() to
> avoid confusion? Ditto for is_xr().
> 
> I also reorder bit number for each kind of dependency, so that we have a
> simple __calc_dep_bit(), see the following:
> 
> 	/*
> 	 * DEP_*_BIT in lock_list::dep
> 	 *
> 	 * For dependency @prev -> @next:
> 	 *
> 	 *   RR: both @prev and @next are recursive read locks, i.e. ->read == 2.
> 	 *   RN: @prev is recursive and @next is non-recursive.
> 	 *   NR: @prev is a not recursive and @next is recursive.
> 	 *   NN: both @prev and @next are non-recursive.
> 	 * 
> 	 * Note that we define the value of DEP_*_BITs so that:
> 	 * 	bit0 is prev->read != 2
> 	 * 	bit1 is next->read != 2
> 	 */
> 	#define DEP_RR_BIT 0
> 	#define DEP_RN_BIT 1
> 	#define DEP_NR_BIT 2
> 	#define DEP_NN_BIT 3
> 
> 	#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> 	#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> 	#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> 	#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> 
> 	static inline unsigned int
> 	__calc_dep_bit(struct held_lock *prev, struct held_lock *next)
> 	{
> 		return (prev->read != 2) + ((next->read != 2) << 1)
> 	}
> 
> 	static inline u8 calc_dep(struct held_lock *prev, struct held_lock *next)
> 	{
> 		return 1U << __calc_dep_bit(prev, next);
> 	}
> 
>  	static inline bool only_rx(u8 dep)
>  	{
>  		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
>  	}
> 
>  	static inline bool only_xr(u8 dep)
>  	{
>  		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
>  	}
> 
> Note that we actually don't need DEP_RR_BIT, but I leave it there for
> implementation simplicity. With this, your check and set below works.
> 
> Thoughts?
> 
> Regards,
> Boqun
> 
> > > 
> > > 
> > > 	if (have_xr && is_rx(entry->dep))
> > > 		continue;
> > > 
> > > 	entry->have_xr = is_xr(entry->dep);
> > > 

Hmm... I think this part also needs a tweak:

	/* if -> prev is *R and we only have R* for prev -> this, skip */
	if (have_xr && only_rx(entry->dep))
		continue;
	
	/*
	 * we pick a *R for prev -> this only if:
	 *     prev -> this dependencies are all *R 
	 * or
	 *     -> prev is *R, and we don't have NN for prev -> this
	 */
	entry->have_xr = only_xr(entry->dep) || (have_xr && !is_nn(entry->dep));

otherwise, we will wrongly set entry->have_xr to false if have_xr is
true and we have RN for prev -> this.
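
(Sketch of the helper assumed above, with DEP_NN_MASK as defined earlier
in this thread:)

	static inline bool is_nn(u8 dep)
	{
		return !!(dep & DEP_NN_MASK);
	}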

Regards,
Boqun

> > > 
> > > Or did I mess that up somewhere?
> 
> 



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-24  5:32       ` Boqun Feng
  2018-02-24  6:30         ` Boqun Feng
@ 2018-02-24  7:31         ` Boqun Feng
  1 sibling, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-24  7:31 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 1438 bytes --]

On Sat, Feb 24, 2018 at 01:32:50PM +0800, Boqun Feng wrote:
[...]
> 
> I also reorder bit number for each kind of dependency, so that we have a
> simple __calc_dep_bit(), see the following:
> 
> 	/*
> 	 * DEP_*_BIT in lock_list::dep
> 	 *
> 	 * For dependency @prev -> @next:
> 	 *
> 	 *   RR: both @prev and @next are recursive read locks, i.e. ->read == 2.
> 	 *   RN: @prev is recursive and @next is non-recursive.
> 	 *   NR: @prev is non-recursive and @next is recursive.
> 	 *   NN: both @prev and @next are non-recursive.
> 	 * 
> 	 * Note that we define the value of DEP_*_BITs so that:
> 	 * 	bit0 is prev->read != 2
> 	 * 	bit1 is next->read != 2
> 	 */
> 	#define DEP_RR_BIT 0
> 	#define DEP_RN_BIT 1
> 	#define DEP_NR_BIT 2

Oops, to make the following __calc_dep_bit() work, we should have:

 	#define DEP_NR_BIT 1
 	#define DEP_RN_BIT 2

instead.
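
With these values, __calc_dep_bit() maps the four cases as (a quick
check, for reference):

	prev R (read == 2), next R:  0 + (0 << 1) == 0 == DEP_RR_BIT
	prev N,             next R:  1 + (0 << 1) == 1 == DEP_NR_BIT
	prev R,             next N:  0 + (1 << 1) == 2 == DEP_RN_BIT
	prev N,             next N:  1 + (1 << 1) == 3 == DEP_NN_BIT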

Regards,
Boqun

> 	#define DEP_NN_BIT 3
> 
> 	#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> 	#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> 	#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> 	#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> 
> 	static inline unsigned int
> 	__calc_dep_bit(struct held_lock *prev, struct held_lock *next)
> 	{
> 		return (prev->read != 2) + ((next->read != 2) << 1)
> 	}
> 
> 	static inline u8 calc_dep(struct held_lock *prev, struct held_lock *next)
> 	{
> 		return 1U << __calc_dep_bit(prev, next);
> 	}
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-24  6:30         ` Boqun Feng
@ 2018-02-24  8:38           ` Peter Zijlstra
  2018-02-24  9:00             ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-24  8:38 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Sat, Feb 24, 2018 at 02:30:05PM +0800, Boqun Feng wrote:
> On Sat, Feb 24, 2018 at 01:32:50PM +0800, Boqun Feng wrote:

> > 	/*
> > 	 * DEP_*_BIT in lock_list::dep
> > 	 *
> > 	 * For dependency @prev -> @next:
> > 	 *
> > 	 *   RR: both @prev and @next are recursive read locks, i.e. ->read == 2.
> > 	 *   RN: @prev is recursive and @next is non-recursive.
> > 	 *   NR: @prev is a not recursive and @next is recursive.
> > 	 *   NN: both @prev and @next are non-recursive.
> > 	 * 
> > 	 * Note that we define the value of DEP_*_BITs so that:
> > 	 * 	bit0 is prev->read != 2
> > 	 * 	bit1 is next->read != 2
> > 	 */
> > 	#define DEP_RR_BIT 0
> > 	#define DEP_RN_BIT 1
> > 	#define DEP_NR_BIT 2
> > 	#define DEP_NN_BIT 3
> > 
> > 	#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> > 	#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> > 	#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> > 	#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> > 
> > 	static inline unsigned int
> > 	__calc_dep_bit(struct held_lock *prev, struct held_lock *next)
> > 	{
> > 		return (prev->read != 2) + ((next->read != 2) << 1)
> > 	}
> > 
> > 	static inline u8 calc_dep(struct held_lock *prev, struct held_lock *next)
> > 	{
> > 		return 1U << __calc_dep_bit(prev, next);
> > 	}
> > 
> >  	static inline bool only_rx(u8 dep)
> >  	{
> >  		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
> >  	}
> > 
> >  	static inline bool only_xr(u8 dep)
> >  	{
> >  		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
> >  	}
> > 

> > > > 	if (have_xr && is_rx(entry->dep))
> > > > 		continue;
> > > > 
> > > > 	entry->have_xr = is_xr(entry->dep);
> > > > 
> 
> Hmm.. I think this part also needs some tweak:
> 
> 	/* if -> prev is *R and we only have R* for prev -> this, skip */
> 	if (have_xr && only_rx(entry->dep))
> 		continue;
> 	
> 	/*
> 	 * we pick a *R for prev -> this only if:
> 	 *     prev -> this dependencies are all *R 
> 	 * or
> 	 *     -> prev is *R, and we don't have NN for prev -> this
> 	 */
> 	entry->have_xr = only_xr(entry->dep) || (have_xr && !is_nn(entry->dep));
> 
> otherwise, we will wrongly set entry->have_xr to false if have_xr is
> true and we have RN for prev -> this.

OK, so it's Saturday morning and such, but what? Why should we set
have_xr true when we have RN? Note that if we only had RN we'd already
have bailed on the continue due to only_rx().

So can you elaborate a bit?

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-24  8:38           ` Peter Zijlstra
@ 2018-02-24  9:00             ` Boqun Feng
  2018-02-24  9:26               ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-24  9:00 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 3251 bytes --]

On Sat, Feb 24, 2018 at 09:38:07AM +0100, Peter Zijlstra wrote:
> On Sat, Feb 24, 2018 at 02:30:05PM +0800, Boqun Feng wrote:
> > On Sat, Feb 24, 2018 at 01:32:50PM +0800, Boqun Feng wrote:
> 
> > > 	/*
> > > 	 * DEP_*_BIT in lock_list::dep
> > > 	 *
> > > 	 * For dependency @prev -> @next:
> > > 	 *
> > > 	 *   RR: both @prev and @next are recursive read locks, i.e. ->read == 2.
> > > 	 *   RN: @prev is recursive and @next is non-recursive.
> > > 	 *   NR: @prev is a not recursive and @next is recursive.
> > > 	 *   NN: both @prev and @next are non-recursive.
> > > 	 * 
> > > 	 * Note that we define the value of DEP_*_BITs so that:
> > > 	 * 	bit0 is prev->read != 2
> > > 	 * 	bit1 is next->read != 2
> > > 	 */
> > > 	#define DEP_RR_BIT 0
> > > 	#define DEP_RN_BIT 1
> > > 	#define DEP_NR_BIT 2
> > > 	#define DEP_NN_BIT 3
> > > 
> > > 	#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> > > 	#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> > > 	#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> > > 	#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> > > 
> > > 	static inline unsigned int
> > > 	__calc_dep_bit(struct held_lock *prev, struct held_lock *next)
> > > 	{
> > > 		return (prev->read != 2) + ((next->read != 2) << 1)
> > > 	}
> > > 
> > > 	static inline u8 calc_dep(struct held_lock *prev, struct held_lock *next)
> > > 	{
> > > 		return 1U << __calc_dep_bit(prev, next);
> > > 	}
> > > 
> > >  	static inline bool only_rx(u8 dep)
> > >  	{
> > >  		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
> > >  	}
> > > 
> > >  	static inline bool only_xr(u8 dep)
> > >  	{
> > >  		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
> > >  	}
> > > 
> 
> > > > > 	if (have_xr && is_rx(entry->dep))
> > > > > 		continue;
> > > > > 
> > > > > 	entry->have_xr = is_xr(entry->dep);
> > > > > 
> > 
> > Hmm.. I think this part also needs some tweak:
> > 
> > 	/* if -> prev is *R, and we only have R* for prev -> this, * skip*/
> > 	if (have_xr && only_rx(entry->dep))
> > 		continue;
> > 	
> > 	/*
> > 	 * we pick a *R for prev -> this only if:
> > 	 *     prev -> this dependencies are all *R 
> > 	 * or
> > 	 *     -> prev is *R, and we don't have NN for prev -> this
> > 	 */
> > 	entry->have_xr = only_xr(entry->dep) || (have_xr && !is_nn(entry->dep));
> > 
> > otherwise, we will wrongly set entry->have_xr to false if have_xr is
> > true and we have RN for prev -> this.
> 
> OK, so it's Saturday morning and such, but what? Why should we set
> have_xr true when we have RN? Note that if we only had RN we'd already
> have bailed on the continue due to only_rx().
> 

But what if we have RN and NR? only_rx() will return false, but because
have_xr is true, we cannot pick RN, so entry->have_xr should be set to
true (because we have to pick NR); however, only_xr() is false because we
have RN, so if we set entry->have_xr to only_xr(), we set it to false.

This is for case like:

	TASK1:
		read_lock(A);
		read_lock(B);
	
	TASK2:
		write_lock(B);
		read_lock(C);
	
	TASK3:
		read_lock(B);
		write_lock(C);

	TASK4:
		read_lock(C);
		write_lock(A);

, which is not a deadlock.

Am I missing something subtle?
		

Regards,
Boqun

> So can you elaborate a bit?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-24  9:00             ` Boqun Feng
@ 2018-02-24  9:26               ` Boqun Feng
  2018-02-26  9:00                 ` Peter Zijlstra
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-24  9:26 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 4923 bytes --]

On Sat, Feb 24, 2018 at 05:00:19PM +0800, Boqun Feng wrote:
> On Sat, Feb 24, 2018 at 09:38:07AM +0100, Peter Zijlstra wrote:
> > On Sat, Feb 24, 2018 at 02:30:05PM +0800, Boqun Feng wrote:
> > > On Sat, Feb 24, 2018 at 01:32:50PM +0800, Boqun Feng wrote:
> > 
> > > > 	/*
> > > > 	 * DEP_*_BIT in lock_list::dep
> > > > 	 *
> > > > 	 * For dependency @prev -> @next:
> > > > 	 *
> > > > 	 *   RR: both @prev and @next are recursive read locks, i.e. ->read == 2.
> > > > 	 *   RN: @prev is recursive and @next is non-recursive.
> > > > 	 *   NR: @prev is a not recursive and @next is recursive.
> > > > 	 *   NN: both @prev and @next are non-recursive.
> > > > 	 * 
> > > > 	 * Note that we define the value of DEP_*_BITs so that:
> > > > 	 * 	bit0 is prev->read != 2
> > > > 	 * 	bit1 is next->read != 2
> > > > 	 */
> > > > 	#define DEP_RR_BIT 0
> > > > 	#define DEP_RN_BIT 1
> > > > 	#define DEP_NR_BIT 2
> > > > 	#define DEP_NN_BIT 3
> > > > 
> > > > 	#define DEP_RR_MASK (1U << (DEP_RR_BIT))
> > > > 	#define DEP_RN_MASK (1U << (DEP_RN_BIT))
> > > > 	#define DEP_NR_MASK (1U << (DEP_NR_BIT))
> > > > 	#define DEP_NN_MASK (1U << (DEP_NN_BIT))
> > > > 
> > > > 	static inline unsigned int
> > > > 	__calc_dep_bit(struct held_lock *prev, struct held_lock *next)
> > > > 	{
> > > > 		return (prev->read != 2) + ((next->read != 2) << 1);
> > > > 	}
> > > > 
> > > > 	static inline u8 calc_dep(struct held_lock *prev, struct held_lock *next)
> > > > 	{
> > > > 		return 1U << __calc_dep_bit(prev, next);
> > > > 	}
> > > > 
> > > >  	static inline bool only_rx(u8 dep)
> > > >  	{
> > > >  		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
> > > >  	}
> > > > 
> > > >  	static inline bool only_xr(u8 dep)
> > > >  	{
> > > >  		return !(dep & (DEP_RN_MASK | DEP_NN_MASK));
> > > >  	}
> > > > 
> > 
> > > > > > 	if (have_xr && is_rx(entry->dep))
> > > > > > 		continue;
> > > > > > 
> > > > > > 	entry->have_xr = is_xr(entry->dep);
> > > > > > 
> > > 
> > > Hmm.. I think this part also needs some tweak:
> > > 
> > > 	/* If -> prev is *R, and we only have R* for prev -> this, skip */
> > > 	if (have_xr && only_rx(entry->dep))
> > > 		continue;
> > > 	
> > > 	/*
> > > 	 * we pick a *R for prev -> this only if:
> > > 	 *     prev -> this dependencies are all *R 
> > > 	 * or
> > > 	 *     -> prev is *R, and we don't have NN for prev -> this
> > > 	 */
> > > 	entry->have_xr = only_xr(entry->dep) || (have_xr && !is_nn(entry->dep));
> > > 
> > > otherwise, we will wrongly set entry->have_xr to false if have_xr is
> > > true and we have RN for prev -> this.
> > 
> > OK, so it's Saturday morning and such, but what? Why should we set
> > have_xr true when we have RN? Note that if we only had RN we'd already
> > have bailed on the continue due to only_rx().
> > 
> 
> But what if we have both RN and NR? only_rx() will return false, but
> because have_xr is true, we cannot pick RN, so entry->have_xr should be
> set to true (since we have to pick NR). However, only_xr() is false
> because we also have RN, so if we set entry->have_xr to only_xr(), we
> set it to false.
> 
> This is for case like:
> 
> 	TASK1:
> 		read_lock(A);
> 		read_lock(B);
> 	
> 	TASK2:
> 		write_lock(B);
> 		read_lock(C);
> 	
> 	TASK3:
> 		read_lock(B);
> 		write_lock(C);
> 
> 	TASK4:
> 		read_lock(C);
> 		write_lock(A);
> 
> , which is not a deadlock.
> 

After TASK 1,2,3 have executed, we have A -(RR)-> B, B -(RN/NR)-> C, and
when TASK4 executed, we will try to add C -(RN)-> A into the graph.
Before that we need to check whether we have a A -> ... -(*N)-> C path
in the graph already, so we search from A (@prev is C and @this is A):

*	we set A->have_xr to false, because the dependency we are adding
	is a RN.

*	we find A -(RR)-> B, and since have_xr (= A->have_xr) is false,
	we can pick this dependency, and since for A -> B, we only have
	RR, so we set B->have_xr to true.

*	we then find B -(RN/NR)-> C, and since have_xr (= B->have_xr) is
	true, we will pick it only if only_rx(C->dep) returns false,
	otherwise we skip. Because we have RN and NR for B -> C,
	therefore we won't skip B -> C.

*	Now we try to set C->have_xr, if we set it to only_xr(C->dep),
	we will set it to false, right? Because B -> C has RN.

*	Since we now find an entry equal to @prev, we go into the
	hlock_conflict() logic and for expression
		
		hlock->read != 2 || !entry->have_xr 
	
	@hlock is the C in TASK4, so hlock->read == 2, and @entry is the
	C whose ->have_xr we just set, so entry->have_xr is false.
	Therefore hlock_conflict() returns true. And that indicates we
	find a deadlock in the search. But the above scenario cannot
	introduce a deadlock.
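
To make the arithmetic concrete, here is a small userspace sketch
(illustration only: the mask values and helper bodies below are my own
stand-ins, not the actual lockdep code) that replays the B -> C step
with both the plain only_xr() assignment and the tweaked one; it prints
"naive have_xr = 0, tweaked have_xr = 1":

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative mask values; one bit per dependency type. */
	#define DEP_RR_MASK 0x1	/* prev recursive,     next recursive */
	#define DEP_RN_MASK 0x2	/* prev recursive,     next non-recursive */
	#define DEP_NR_MASK 0x4	/* prev non-recursive, next recursive */
	#define DEP_NN_MASK 0x8	/* prev non-recursive, next non-recursive */

	/* All deps for this edge are R* (prev side recursive). */
	static bool only_rx(unsigned int dep)
	{
		return !(dep & (DEP_NR_MASK | DEP_NN_MASK));
	}

	/* All deps for this edge are *R (next side recursive). */
	static bool only_xr(unsigned int dep)
	{
		return !(dep & (DEP_RN_MASK | DEP_NN_MASK));
	}

	static bool is_nn(unsigned int dep)
	{
		return dep & DEP_NN_MASK;
	}

	int main(void)
	{
		unsigned int dep = DEP_RN_MASK | DEP_NR_MASK;	/* B -> C */
		bool have_xr = true;	/* we reached B via A -(RR)-> B */

		/* Not R*-only, so the edge is not skipped. */
		if (have_xr && only_rx(dep))
			return 0;

		/* Plain assignment: loses the fact that only NR is usable. */
		bool naive = only_xr(dep);
		/* Tweaked assignment from above: keeps have_xr set. */
		bool tweaked = only_xr(dep) || (have_xr && !is_nn(dep));

		printf("naive have_xr = %d, tweaked have_xr = %d\n",
		       naive, tweaked);
		return 0;
	}

With the tweak, C keeps have_xr == true and hlock_conflict() does not
report the scenario above as a deadlock.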

Could this help, or am I missing something?

Regards,
Boqun

> Am I missing something subtle?
> 		
> 
> Regards,
> Boqun
> 
> > So can you elaborate a bit?



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 16/17] lockdep: Documention for recursive read lock detection reasoning
  2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 16/17] lockdep: Documention for recursive read lock detection reasoning Boqun Feng
@ 2018-02-24 22:53   ` Andrea Parri
  2018-02-27  2:32     ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Andrea Parri @ 2018-02-24 22:53 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Randy Dunlap,
	Jonathan Corbet, open list:DOCUMENTATION

On Thu, Feb 22, 2018 at 03:09:03PM +0800, Boqun Feng wrote:
> Now that we support recursive read lock deadlock detection, add the
> related explanation in Documentation/locking/lockdep-design.txt:
> 
> *	Definition of recursive read locks, non-recursive locks, strong
> 	dependency path and notions of -(**)->.
> 
> *	Lockdep's assumption.
> 
> *	Informal proof of recursive read lock deadlock detection.

Once again... much appreciated!!, thanks for sharing.


> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Cc: Randy Dunlap <rdunlap@infradead.org>
> ---
>  Documentation/locking/lockdep-design.txt | 170 +++++++++++++++++++++++++++++++
>  1 file changed, 170 insertions(+)
> 
> diff --git a/Documentation/locking/lockdep-design.txt b/Documentation/locking/lockdep-design.txt
> index 382bc25589c2..fd8a8d97ce58 100644
> --- a/Documentation/locking/lockdep-design.txt
> +++ b/Documentation/locking/lockdep-design.txt
> @@ -284,3 +284,173 @@ Run the command and save the output, then compare against the output from
>  a later run of this command to identify the leakers.  This same output
>  can also help you find situations where runtime lock initialization has
>  been omitted.
> +
> +Recursive Read Deadlock Detection:
> +----------------------------------
> +Lockdep now is equipped with deadlock detection for recursive read locks.
> +
> +Recursive read locks, as their name indicates, are locks that can be
> +acquired recursively. Unlike non-recursive read locks, recursive read locks
> +only get blocked by current write lock *holders*, not by write lock
> +*waiters*, for example:
> +
> +	TASK A:			TASK B:
> +
> +	read_lock(X);
> +
> +				write_lock(X);
> +
> +	read_lock(X);
> +
> +is not a deadlock for recursive read locks, as while the task B is waiting for
> +the lock X, the second read_lock() doesn't need to wait because it's a recursive
> +read lock.
> +
> +Note that a lock can be a write lock(exclusive lock), a non-recursive read lock
> +(non-recursive shared lock) or a recursive read lock(recursive shared lock),
> +depending on the API used to acquire it(more specifically, the value of the
> +'read' parameter for lock_acquire(...)). In other words, a single lock instance
> +has three types of acquisition depending on the acquisition functions:
> +exclusive, non-recursive read, and recursive read.
> +
> +That said, recursive read locks could introduce deadlocks too, considering the
> +following:
> +
> +	TASK A:			TASK B:
> +
> +	read_lock(X);
> +				read_lock(Y);
> +	write_lock(Y);
> +				write_lock(X);
> +
> +, neither task could get the write locks because the corresponding read locks
> +are held by each other.
> +
> +Lockdep could detect recursive read lock related deadlocks. The dependencies(edges)
> +in the lockdep graph are classified into four categories:
> +
> +1) -(NN)->: non-recursive to non-recursive dependency, non-recursive locks include
> +            non-recursive read locks, write locks and exclusive locks(e.g. spinlock_t).
> +	    They are treated equally in deadlock detection. "X -(NN)-> Y" means
> +            X -> Y and both X and Y are non-recursive locks.
> +
> +2) -(RN)->: recursive to non-recursive dependency, recursive locks means recursive read
> +	    locks. "X -(RN)-> Y" means X -> Y and X is recursive read lock and
> +            Y is non-recursive lock.
> +
> +3) -(NR)->: non-recursive to recursive dependency, "X -(NR)-> Y" means X -> Y and X is
> +            non-recursive lock and Y is recursive lock.
> +
> +4) -(RR)->: recursive to recursive dependency, "X -(RR)-> Y" means X -> Y and both X
> +            and Y are recursive locks.
> +
> +Note that given two locks, they may have multiple dependencies between them, for example:
> +
> +	TASK A:
> +
> +	read_lock(X);
> +	write_lock(Y);
> +	...
> +
> +	TASK B:
> +
> +	write_lock(X);
> +	write_lock(Y);
> +
> +, we have both X -(RN)-> Y and X -(NN)-> Y in the dependency graph.
> +
> +And obviously a non-recursive lock can block the corresponding recursive lock,
> +and vice versa. Besides a non-recursive lock may block the other non-recursive
> +lock of the same instance(e.g. a write lock may block a corresponding
> +non-recursive read lock and vice versa).
> +
> +We use -(*N)-> for an edge that is either -(RN)-> or -(NN)->; similarly for
> +-(N*)->, -(*R)-> and -(R*)->.
> +
> +A "path" is a series of conjunct dependency edges in the graph. And we define a
> +"strong" path, which indicates the strong dependency throughout each dependency
> +in the path, as the path that doesn't have two conjunct edges(dependencies) as
> +-(*R)-> and -(R*)->. IOW, a "strong" path is a path from a lock walking to another
> +through the lock dependencies, and if X -> Y -> Z in the path(where X, Y, Z are
> +locks), if the walk from X to Y is through a -(NR)-> or -(RR)-> dependency, the
> +walk from Y to Z must not be through a -(RN)-> or -(RR)-> dependency, otherwise
> +it's not a strong path.
> +
> +We now prove that if a strong path forms a circle, then we have a potential deadlock.
> +By "forms a circle", it means for a set of locks A0,A1...An, there is a path from
> +A0 to An:
> +
> +	A0 -> A1 -> ... -> An
> +
> +and there is also a dependency An -> A0. As the circle is formed by a strong path,
> +no two adjacent dependency edges in it are of the forms -(*R)-> and -(R*)->.
> +
> +
> +To understand the actual proof, let's look into lockdep's assumption:
> +
> +For each lockdep dependency A -> B, there may exist a case where someone is
> +trying to acquire B with A held, and the existence of such a case is
> +independent of the existence of the cases for other lockdep dependencies.
> +
> +For example if we have two functions func1 and func2:
> +
> +	void func1(...) {
> +		lock(A);
> +		lock(B);
> +		unlock(A);
> +		unlock(B);
> +
> +		lock(C);
> +		lock(A);
> +		unlock(A);
> +		unlock(C);
> +	}
> +
> +	void func2(...) {
> +		lock(B);
> +		lock(C);
> +		unlock(C);
> +		unlock(B);
> +	}
> +
> +lockdep will generate dependencies: A->B, B->C and C->A, and assume that:
> +
> +	there may exist a case where someone is trying to acquire B with A held,
> +	there may exist a case where someone is trying to acquire C with B held,
> +	and there may exist a case where someone is trying to acquire A with C held,
> +
> +and those three cases exist *independently*, meaning they can happen at the
> +same time (which requires func1() to be called simultaneously by two CPUs or
> +tasks, something that may be impossible due to other constraints in real life).
> +
> +
> +With this assumption, we can prove that if a strong dependency path forms a circle,
> +then it indicates a deadlock as far as lockdep is concerned.

As mentioned in a private communication, I would recommend the adoption of
a "more impersonal" style.  Here, for instance, the expression:

  "indicates a deadlock as far as lockdep is concerned"

would be replaced with:

  "indicates a deadlock as (informally) defined in Sect. ?.?";

similarly (from the proof):

  "For a strong dependency [...], lockdep assumes that [...]"

would be replaced with:

  "For a strong dependency [...], from assumption/notation ?.?,
   we have/we can conclude [...]".

This could mean that additional text/content would need to be added to the
present doc./.txt; OTOH, this could help to make this doc. self-contained/
more accessible to "non-lockdep-experts".

  Andrea


> +
> +For a strong dependency circle like:
> +
> +	A0 -> A1 -> ... -> An
> +	^                  |
> +	|                  |
> +	+------------------+
> +, lockdep assumes that
> +
> +	there may exist a case where someone is trying to acquire A1 with A0 held
> +	there may exist a case where someone is trying to acquire A2 with A1 held
> +	...
> +	there may exist a case where someone is trying to acquire An with An-1 held
> +	there may exist a case where someone is trying to acquire A0 with An held
> +
> +, and because it's a *strong* dependency circle, for every Ai (0<=i<=n), Ai is either
> +held as a non-recursive lock or someone is trying to acquire Ai as a non-recursive lock,
> +which gives:
> +
> +*	If Ai is held as a non-recursive lock, then trying to acquire it,
> +	whether as a recursive lock or not, will get blocked.
> +
> +*	If Ai is held as a recursive lock, then there must be someone trying to acquire
> +	it as a non-recursive lock, and because a held recursive lock blocks non-recursive
> +	acquisitions, it (the "someone") will get blocked.
> +
> +So all the holders of A0, A1...An are blocked with A0, A1...An held by one another:
> +no one can make progress, therefore a deadlock.
> -- 
> 2.16.1
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-24  9:26               ` Boqun Feng
@ 2018-02-26  9:00                 ` Peter Zijlstra
  2018-02-26 10:15                   ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2018-02-26  9:00 UTC (permalink / raw)
  To: Boqun Feng; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

On Sat, Feb 24, 2018 at 05:26:52PM +0800, Boqun Feng wrote:
> > This is for case like:
> > 
> > 	TASK1:
> > 		read_lock(A);
> > 		read_lock(B);
> > 	
> > 	TASK2:
> > 		write_lock(B);
> > 		read_lock(C);
> > 	
> > 	TASK3:
> > 		read_lock(B);
> > 		write_lock(C);
> > 
> > 	TASK4:
> > 		read_lock(C);
> > 		write_lock(A);
> > 
> > , which is not a deadlock.
> > 
> 
> After TASK 1,2,3 have executed, we have A -(RR)-> B, B -(RN/NR)-> C, and
> when TASK4 executed, we will try to add C -(RN)-> A into the graph.
> Before that we need to check whether we have a A -> ... -(*N)-> C path
> in the graph already, so we search from A (@prev is C and @this is A):
> 
> *	we set A->have_xr to false, because the dependency we are adding
> 	is a RN.
> 
> *	we find A -(RR)-> B, and since have_xr (= A->have_xr) is false,
> 	we can pick this dependency, and since for A -> B, we only have
> 	RR, so we set B->have_xr to true.
> 
> *	we then find B -(RN/NR)-> C, and since have_xr (= B->have_xr) is
> 	true, we will pick it only if only_rx(C->dep) returns false,
> 	otherwise we skip. Because we have RN and NR for B -> C,
> 	therefore we won't skip B -> C.
> 
> *	Now we try to set C->have_xr, if we set it to only_xr(C->dep),
> 	we will set it to false, right? Because B -> C has RN.
> 
> *	Since we now find an entry equal to @prev, we go into the
> 	hlock_conflict() logic and for expression
> 		
> 		hlock->read != 2 || !entry->have_xr 
> 	
> 	@hlock is the C in TASK4, so hlock->read == 2, and @entry is the
> 	C whose ->have_xr we just set, so entry->have_xr is false.
> 	Therefore hlock_conflict() returns true. And that indicates we
> 	find a deadlock in the search. But the above scenario cannot
> 	introduce a deadlock.
> 
> Could this help, or am I missing something?

Yes, although it took me forever to grasp because I have snot for brains
atm :-(

Would something like:


	dep = entry->dep;

	/* Mask out all *R -> R* relations. */
	if (have_xr)
		dep &= ~(RR_MASK | RN_MASK);

	/* If nothing left, we're done. */
	if (!dep)
		continue;

	/* If there are (only) *R left, set that for the next step. */
	entry->have_xr = !(dep & (RN_MASK | NN_MASK));


Work? I think that reads fairly simple.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-26  9:00                 ` Peter Zijlstra
@ 2018-02-26 10:15                   ` Boqun Feng
  2018-02-26 10:20                     ` Boqun Feng
  0 siblings, 1 reply; 53+ messages in thread
From: Boqun Feng @ 2018-02-26 10:15 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 2782 bytes --]

On Mon, Feb 26, 2018 at 10:00:50AM +0100, Peter Zijlstra wrote:
> On Sat, Feb 24, 2018 at 05:26:52PM +0800, Boqun Feng wrote:
> > > This is for case like:
> > > 
> > > 	TASK1:
> > > 		read_lock(A);
> > > 		read_lock(B);
> > > 	
> > > 	TASK2:
> > > 		write_lock(B);
> > > 		read_lock(C);
> > > 	
> > > 	TASK3:
> > > 		read_lock(B);
> > > 		write_lock(C);
> > > 
> > > 	TASK4:
> > > 		read_lock(C);
> > > 		write_lock(A);
> > > 
> > > , which is not a deadlock.
> > > 
> > 
> > After TASK 1,2,3 have executed, we have A -(RR)-> B, B -(RN/NR)-> C, and
> > when TASK4 executed, we will try to add C -(RN)-> A into the graph.
> > Before that we need to check whether we have a A -> ... -(*N)-> C path
> > in the graph already, so we search from A (@prev is C and @this is A):
> > 
> > *	we set A->have_xr to false, because the dependency we are adding
> > 	is a RN.
> > 
> > *	we find A -(RR)-> B, and since have_xr (= A->have_xr) is false,
> > 	we can pick this dependency, and since for A -> B, we only have
> > 	RR, so we set B->have_xr to true.
> > 
> > *	we then find B -(RN/NR)-> C, and since have_xr (= B->have_xr) is
> > 	true, we will pick it only if only_rx(C->dep) returns false,
> > 	otherwise we skip. Because we have RN and NR for B -> C,
> > 	therefore we won't skip B -> C.
> > 
> > *	Now we try to set C->have_xr, if we set it to only_xr(C->dep),
> > 	we will set it to false, right? Because B -> C has RN.
> > 
> > *	Since we now find an entry equal to @prev, we go into the
> > 	hlock_conflict() logic and for expression
> > 		
> > 		hlock->read != 2 || !entry->have_xr 
> > 	
> > 	@hlock is the C in TASK4, so hlock->read == 2, and @entry is the
> > 	C whose ->have_xr we just set, so entry->have_xr is false.
> > 	Therefore hlock_conflict() returns true. And that indicates we
> > 	find a deadlock in the search. But the above scenario cannot
> > 	introduce a deadlock.
> > 
> > Could this help, or am I missing something?
> 
> Yes, although it took me forever to grasp because I have snot for brains
> atm :-(
> 

Take care!

> Would something like:
> 
> 
> 	dep = entry->dep;
> 
> 	/* Mask out all *R -> R* relations. */
> 	if (have_xr)
> 		dep &= ~(RR_MASK | RN_MASK);
> 
> 	/* If nothing left, we're done. */
> 	if (!dep)
> 		continue;
> 
> 	/* If there are (only) *R left, set that for the next step. */
> 	entry->have_xr = !(dep & (RN_MASK | NN_MASK));
> 
> 
> Work? I think that reads fairly simple.

I think that works, will apply this simplification to see whether the
self tests agree.
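
Tracing the B -> C step from the earlier example through this version,
as a quick sketch with made-up mask values (not the real encoding):

	#include <assert.h>
	#include <stdbool.h>

	/* Illustrative mask values only. */
	#define RR_MASK 0x1
	#define RN_MASK 0x2
	#define NR_MASK 0x4
	#define NN_MASK 0x8

	int main(void)
	{
		unsigned int dep = RN_MASK | NR_MASK;	/* B -> C */
		bool have_xr = true;	/* reached B via A -(RR)-> B */

		/* Mask out all *R -> R* relations. */
		if (have_xr)
			dep &= ~(RR_MASK | RN_MASK);

		/* NR is left, so the edge is not skipped... */
		assert(dep);

		/* ...and what is left is *R only, so have_xr stays set. */
		assert(!(dep & (RN_MASK | NN_MASK)));

		return 0;
	}

, which gives the same result as the longer form, so that scenario is
still not reported as a deadlock.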

Btw, given the comments in the code, I think it's better to name
"have_xr" as "only_xr"? I have a feeling that my comment-less code
somehow misled you to that name ;-(

Regards,
Boqun


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep
  2018-02-26 10:15                   ` Boqun Feng
@ 2018-02-26 10:20                     ` Boqun Feng
  0 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-26 10:20 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 3104 bytes --]

On Mon, Feb 26, 2018 at 06:15:24PM +0800, Boqun Feng wrote:
> On Mon, Feb 26, 2018 at 10:00:50AM +0100, Peter Zijlstra wrote:
> > On Sat, Feb 24, 2018 at 05:26:52PM +0800, Boqun Feng wrote:
> > > > This is for case like:
> > > > 
> > > > 	TASK1:
> > > > 		read_lock(A);
> > > > 		read_lock(B);
> > > > 	
> > > > 	TASK2:
> > > > 		write_lock(B);
> > > > 		read_lock(C);
> > > > 	
> > > > 	TASK3:
> > > > 		read_lock(B);
> > > > 		write_lock(C);
> > > > 
> > > > 	TASK4:
> > > > 		read_lock(C);
> > > > 		write_lock(A);
> > > > 
> > > > , which is not a deadlock.
> > > > 
> > > 
> > > After TASK 1,2,3 have executed, we have A -(RR)-> B, B -(RN/NR)-> C, and
> > > when TASK4 executed, we will try to add C -(RN)-> A into the graph.
> > > Before that we need to check whether we have a A -> ... -(*N)-> C path
> > > in the graph already, so we search from A (@prev is C and @this is A):
> > > 
> > > *	we set A->have_xr to false, because the dependency we are adding
> > > 	is a RN.
> > > 
> > > *	we find A -(RR)-> B, and since have_xr (= A->have_xr) is false,
> > > 	we can pick this dependency, and since for A -> B, we only have
> > > 	RR, so we set B->have_xr to true.
> > > 
> > > *	we then find B -(RN/NR)-> C, and since have_xr (= B->have_xr) is
> > > 	true, we will pick it only if only_rx(C->dep) returns false,
> > > 	otherwise we skip. Because we have RN and NR for B -> C,
> > > 	therefore we won't skip B -> C.
> > > 
> > > *	Now we try to set C->have_xr, if we set it to only_xr(C->dep),
> > > 	we will set it to false, right? Because B -> C has RN.
> > > 
> > > *	Since we now find an entry equal to @prev, we go into the
> > > 	hlock_conflict() logic and for expression
> > > 		
> > > 		hlock->read != 2 || !entry->have_xr 
> > > 	
> > > 	@hlock is the C in TASK4, so hlock->read == 2, and @entry is the
> > > 	C whose ->have_xr we just set, so entry->have_xr is false.
> > > 	Therefore hlock_conflict() returns true. And that indicates we
> > > 	find a deadlock in the search. But the above scenario cannot
> > > 	introduce a deadlock.
> > > 
> > > Could this help, or am I missing something?
> > 
> > Yes, although it took me forever to grasp because I have snot for brains
> > atm :-(
> > 
> 
> Take care!
> 
> > Would something like:
> > 
> > 
> > 	dep = entry->dep;
> > 
> > 	/* Mask out all *R -> R* relations. */
> > 	if (have_xr)
> > 		dep &= ~(RR_MASK | RN_MASK);
> > 
> > 	/* If nothing left, we're done. */
> > 	if (!dep)
> > 		continue;
> > 
> > 	/* If there are (only) *R left, set that for the next step. */
> > 	entry->have_xr = !(dep & (RN_MASK | NN_MASK));
> > 
> > 
> > Work? I think that reads fairly simple.
> 
> I think that works, will apply this simplification to see whether the
> self tests agree.
> 

And the self tests agree this works ;-) Thank you!

Regards,
Boqun

> Btw, given the comments in the code, I think it's better to name
> "have_xr" as "only_xr"? I have a feeling that my comment-less code
> somehow misled you to that name ;-(
> 
> Regards,
> Boqun
> 



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 16/17] lockdep: Documention for recursive read lock detection reasoning
  2018-02-24 22:53   ` Andrea Parri
@ 2018-02-27  2:32     ` Boqun Feng
  0 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-02-27  2:32 UTC (permalink / raw)
  To: Andrea Parri
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Randy Dunlap,
	Jonathan Corbet, open list:DOCUMENTATION

[-- Attachment #1: Type: text/plain, Size: 9410 bytes --]

On Sat, Feb 24, 2018 at 11:53:20PM +0100, Andrea Parri wrote:
> On Thu, Feb 22, 2018 at 03:09:03PM +0800, Boqun Feng wrote:
> > Now that we support recursive read lock deadlock detection, add the
> > related explanation in Documentation/locking/lockdep-design.txt:
> > 
> > *	Definition of recursive read locks, non-recursive locks, strong
> > 	dependency path and notions of -(**)->.
> > 
> > *	Lockdep's assumption.
> > 
> > *	Informal proof of recursive read lock deadlock detection.
> 
> Once again... much appreciated!!, thanks for sharing.
> 

np ;-)

> 
> > 
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Randy Dunlap <rdunlap@infradead.org>
> > ---
> >  Documentation/locking/lockdep-design.txt | 170 +++++++++++++++++++++++++++++++
> >  1 file changed, 170 insertions(+)
> > 
> > diff --git a/Documentation/locking/lockdep-design.txt b/Documentation/locking/lockdep-design.txt
> > index 382bc25589c2..fd8a8d97ce58 100644
> > --- a/Documentation/locking/lockdep-design.txt
> > +++ b/Documentation/locking/lockdep-design.txt
> > @@ -284,3 +284,173 @@ Run the command and save the output, then compare against the output from
> >  a later run of this command to identify the leakers.  This same output
> >  can also help you find situations where runtime lock initialization has
> >  been omitted.
> > +
> > +Recursive Read Deadlock Detection:
> > +----------------------------------
> > +Lockdep now is equipped with deadlock detection for recursive read locks.
> > +
> > +Recursive read locks, as their name indicates, are locks that can be
> > +acquired recursively. Unlike non-recursive read locks, recursive read locks
> > +only get blocked by current write lock *holders*, not by write lock
> > +*waiters*, for example:
> > +
> > +	TASK A:			TASK B:
> > +
> > +	read_lock(X);
> > +
> > +				write_lock(X);
> > +
> > +	read_lock(X);
> > +
> > +is not a deadlock for recursive read locks, as while the task B is waiting for
> > +the lock X, the second read_lock() doesn't need to wait because it's a recursive
> > +read lock.
> > +
> > +Note that a lock can be a write lock(exclusive lock), a non-recursive read lock
> > +(non-recursive shared lock) or a recursive read lock(recursive shared lock),
> > +depending on the API used to acquire it(more specifically, the value of the
> > +'read' parameter for lock_acquire(...)). In other words, a single lock instance
> > +has three types of acquisition depending on the acquisition functions:
> > +exclusive, non-recursive read, and recursive read.
> > +
> > +That said, recursive read locks could introduce deadlocks too, considering the
> > +following:
> > +
> > +	TASK A:			TASK B:
> > +
> > +	read_lock(X);
> > +				read_lock(Y);
> > +	write_lock(Y);
> > +				write_lock(X);
> > +
> > +, neither task could get the write locks because the corresponding read locks
> > +are held by each other.
> > +
> > +Lockdep could detect recursive read lock related deadlocks. The dependencies(edges)
> > +in the lockdep graph are classified into four categories:
> > +
> > +1) -(NN)->: non-recursive to non-recursive dependency, non-recursive locks include
> > +            non-recursive read locks, write locks and exclusive locks(e.g. spinlock_t).
> > +	    They are treated equally in deadlock detection. "X -(NN)-> Y" means
> > +            X -> Y and both X and Y are non-recursive locks.
> > +
> > +2) -(RN)->: recursive to non-recursive dependency, recursive locks means recursive read
> > +	    locks. "X -(RN)-> Y" means X -> Y and X is recursive read lock and
> > +            Y is non-recursive lock.
> > +
> > +3) -(NR)->: non-recursive to recursive dependency, "X -(NR)-> Y" means X -> Y and X is
> > +            non-recursive lock and Y is recursive lock.
> > +
> > +4) -(RR)->: recursive to recursive dependency, "X -(RR)-> Y" means X -> Y and both X
> > +            and Y are recursive locks.
> > +
> > +Note that given two locks, they may have multiple dependencies between them, for example:
> > +
> > +	TASK A:
> > +
> > +	read_lock(X);
> > +	write_lock(Y);
> > +	...
> > +
> > +	TASK B:
> > +
> > +	write_lock(X);
> > +	write_lock(Y);
> > +
> > +, we have both X -(RN)-> Y and X -(NN)-> Y in the dependency graph.
> > +
> > +And obviously a non-recursive lock can block the corresponding recursive lock,
> > +and vice versa. Besides a non-recursive lock may block the other non-recursive
> > +lock of the same instance(e.g. a write lock may block a corresponding
> > +non-recursive read lock and vice versa).
> > +
> > +We use -(*N)-> for an edge that is either -(RN)-> or -(NN)->; similarly for
> > +-(N*)->, -(*R)-> and -(R*)->.
> > +
> > +A "path" is a series of conjunct dependency edges in the graph. And we define a
> > +"strong" path, which indicates the strong dependency throughout each dependency
> > +in the path, as the path that doesn't have two conjunct edges(dependencies) as
> > +-(*R)-> and -(R*)->. IOW, a "strong" path is a path from a lock walking to another
> > +through the lock dependencies, and if X -> Y -> Z in the path(where X, Y, Z are
> > +locks), if the walk from X to Y is through a -(NR)-> or -(RR)-> dependency, the
> > +walk from Y to Z must not be through a -(RN)-> or -(RR)-> dependency, otherwise
> > +it's not a strong path.
> > +
> > +We now prove that if a strong path forms a circle, then we have a potential deadlock.
> > +By "forms a circle", it means for a set of locks A0,A1...An, there is a path from
> > +A0 to An:
> > +
> > +	A0 -> A1 -> ... -> An
> > +
> > +and there is also a dependency An -> A0. As the circle is formed by a strong path,
> > +no two adjacent dependency edges in it are of the forms -(*R)-> and -(R*)->.
> > +
> > +
> > +To understand the actual proof, let's look into lockdep's assumption:
> > +
> > +For each lockdep dependency A -> B, there may exist a case where someone is
> > +trying to acquire B with A held, and the existence of such a case is
> > +independent of the existence of the cases for other lockdep dependencies.
> > +
> > +For example if we have two functions func1 and func2:
> > +
> > +	void func1(...) {
> > +		lock(A);
> > +		lock(B);
> > +		unlock(A);
> > +		unlock(B);
> > +
> > +		lock(C);
> > +		lock(A);
> > +		unlock(A);
> > +		unlock(C);
> > +	}
> > +
> > +	void func2(...) {
> > +		lock(B);
> > +		lock(C);
> > +		unlock(C);
> > +		unlock(B);
> > +	}
> > +
> > +lockdep will generate dependencies: A->B, B->C and C->A, and assume that:
> > +
> > +	there may exist a case where someone is trying to acquire B with A held,
> > +	there may exist a case where someone is trying to acquire C with B held,
> > +	and there may exist a case where someone is trying to acquire A with C held,
> > +
> > +and those three cases exist *independently*, meaning they can happen at the
> > +same time (which requires func1() to be called simultaneously by two CPUs or
> > +tasks, something that may be impossible due to other constraints in real life).
> > +
> > +
> > +With this assumption, we can prove that if a strong dependency path forms a circle,
> > +then it indicates a deadlock as far as lockdep is concerned.
> 
> As mentioned in a private communication, I would recommend the adoption of
> a "more impersonal" style.  Here, for instance, the expression:
> 
>   "indicates a deadlock as far as lockdep is concerned"
> 
> would be replaced with:
> 
>   "indicates a deadlock as (informally) defined in Sect. ?.?";
> 
> similarly (from the proof):
> 
>   "For a strong dependency [...], lockdep assumes that [...]"
> 
> would be replaced with:
> 
>   "For a strong dependency [...], from assumption/notation ?.?,
>    we have/we can conclude [...]".
> 
> This could mean that additional text/content would need to be added to the
> present doc./.txt; OTOH, this could help to make this doc. self-contained/
> more accessible to "non-lockdep-experts".
> 

Yeah. I will do that in next spin. Thanks!

Regards,
Boqun

>   Andrea
> 
> 
> > +
> > +For a strong dependency circle like:
> > +
> > +	A0 -> A1 -> ... -> An
> > +	^                  |
> > +	|                  |
> > +	+------------------+
> > +, lockdep assumes that
> > +
> > +	there may exist a case where someone is trying to acquire A1 with A0 held
> > +	there may exist a case where someone is trying to acquire A2 with A1 held
> > +	...
> > +	there may exist a case where someone is trying to acquire An with An-1 held
> > +	there may exist a case where someone is trying to acquire A0 with An held
> > +
> > +, and because it's a *strong* dependency circle, for every Ai (0<=i<=n), Ai is either
> > +held as a non-recursive lock or someone is trying to acquire Ai as a non-recursive lock,
> > +which gives:
> > +
> > +*	If Ai is held as a non-recursive lock, then trying to acquire it,
> > +	whether as a recursive lock or not, will get blocked.
> > +
> > +*	If Ai is held as a recursive lock, then there must be someone trying to acquire
> > +	it as a non-recursive lock, and because a held recursive lock blocks non-recursive
> > +	acquisitions, it (the "someone") will get blocked.
> > +
> > +So all the holders of A0, A1...An are blocked with A0, A1...An held by one another:
> > +no one can make progress, therefore a deadlock.
> > -- 
> > 2.16.1
> > 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change
  2018-02-22 17:29   ` Peter Zijlstra
@ 2018-03-16  8:20     ` Boqun Feng
  0 siblings, 0 replies; 53+ messages in thread
From: Boqun Feng @ 2018-03-16  8:20 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Andrea Parri

[-- Attachment #1: Type: text/plain, Size: 3479 bytes --]

On Thu, Feb 22, 2018 at 06:29:06PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:08:54PM +0800, Boqun Feng wrote:
> > As we have four kinds of dependencies now, check_redundant() should only
> > report redundant if we have a dependency path which is equal or
> > _stronger_ than the current dependency. For example if in
> > check_prev_add() we have:
> > 
> > 	prev->read == 2 && next->read != 2
> > 
> > , we should only report redundant if we find a path like:
> > 
> > 	prev--(RN)-->....--(*N)-->next
> > 
> > and if we have:
> > 
> > 	prev->read == 2 && next->read == 2
> > 
> > , we could report redundant if we find a path like:
> > 
> > 	prev--(RN)-->....--(*N)-->next
> > 
> > or
> > 
> > 	prev--(RN)-->....--(*R)-->next
> > 
> > To do so, we need to pass the recursive-read status of @next into
> > check_redundant().
> 
> Very hard to read that.
> 

Right... and I found a bug related to this, let me explain below.

> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> >  kernel/locking/lockdep.c | 13 ++++++++-----
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> > 
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index e1be088a34c4..0b0ad3db78b4 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -1338,9 +1338,12 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
> >  	return 0;
> >  }
> >  
> > -static inline int class_equal(struct lock_list *entry, void *data)
> > +static inline int hlock_equal(struct lock_list *entry, void *data)
> >  {
> > -	return entry->class == data;
> > +	struct held_lock *hlock = (struct held_lock *)data;
> > +
> > +	return hlock_class(hlock) == entry->class &&
> > +	       (hlock->read == 2 || !entry->is_rr);
> >  }
> 
> So I guess @data = @next, and we're checking if @prev -> @next already
> exists.
> 
> Since we only care about forward dependencies, @next->read==2 means *R
> and if so, any existing link is equal or stronger. If @next->read!=2, it
> means *N and we must regard *R as weaker and not match.
> 

Yep, the idea is: if we find a path @prev -> .. -> @next that is RN, and
@prev -> @next is RR, then we treat RR as weaker and redundant.
Basically, if we find a strong path that could replace the about-to-add
dependency (i.e. the path is stronger than or equal to the dependency),
we report it as redundant (a match).

But I missed something here: as you can see, both the start and the end
of the path are important. Say we find a strong path that is RN, but the
about-to-add dependency is NR; we cannot report it as redundant, because
the path cannot replace the dependency.

To make sure we find a path whose start point is stronger than @prev, we
need a trick: we should initialize the ->only_xr (or ->have_xr) of the
root (start point) of __bfs() to @prev->read != 2. Therefore, if @prev
is N, __bfs() will only pick N* for the first dependency; otherwise,
__bfs() can pick either N* or R* for the first dependency.

I use a similar setup before check_noncircular(), which sets the initial
->only_xr to @next->read == 2, because we need @prev -> @next ->
<path> to be strong. But I should use the opposite setup for
check_redundant(), as I described above.
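
Roughly, the two root setups would look like the sketch below (a
userspace illustration only; the function names are placeholders, not
the actual patch):

	#include <assert.h>
	#include <stdbool.h>

	/* ->read == 2 means a recursive read acquisition in this thread. */

	/*
	 * Root ->only_xr when searching from @prev for check_redundant():
	 * if @prev is N, the first step of the found path must be N*.
	 */
	static bool redundant_root_only_xr(int prev_read)
	{
		return prev_read != 2;
	}

	/*
	 * Root ->only_xr when searching from @next for check_noncircular():
	 * if @next is R, @prev -> @next is *R, so the next step must not be R*.
	 */
	static bool noncircular_root_only_xr(int next_read)
	{
		return next_read == 2;
	}

	int main(void)
	{
		/* Adding @prev -(NR)-> @next: prev->read == 0, next->read == 2. */
		assert(redundant_root_only_xr(0));	/* path must start with N* */
		assert(noncircular_root_only_xr(2));	/* path must not continue with R* */

		/* Adding @prev -(RN)-> @next: prev->read == 2, next->read == 0. */
		assert(!redundant_root_only_xr(2));	/* path may start with N* or R* */
		assert(!noncircular_root_only_xr(0));
		return 0;
	}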

Anyway, I will fix this and provide (hopefully) enough comments for
those tricks.

Regards,
Boqun

> OK, that seems to be fine, but again, that function _really_ could do
> with a comment.
> 
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2018-03-16  8:16 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-22  7:08 [RFC tip/locking/lockdep v5 00/17] lockdep: Support deadlock detection for recursive read locks Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 01/17] lockdep: Demagic the return value of BFS Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 02/17] lockdep: Make __bfs() visit every dependency until a match Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 03/17] lockdep: Redefine LOCK_*_STATE* bits Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 04/17] lockdep: Introduce lock_list::dep Boqun Feng
2018-02-23 11:55   ` Peter Zijlstra
2018-02-23 12:37     ` Boqun Feng
2018-02-24  5:32       ` Boqun Feng
2018-02-24  6:30         ` Boqun Feng
2018-02-24  8:38           ` Peter Zijlstra
2018-02-24  9:00             ` Boqun Feng
2018-02-24  9:26               ` Boqun Feng
2018-02-26  9:00                 ` Peter Zijlstra
2018-02-26 10:15                   ` Boqun Feng
2018-02-26 10:20                     ` Boqun Feng
2018-02-24  7:31         ` Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 05/17] lockdep: Extend __bfs() to work with multiple kinds of dependencies Boqun Feng
2018-02-22 14:26   ` Peter Zijlstra
2018-02-22 15:12     ` Boqun Feng
2018-02-22 15:30       ` Peter Zijlstra
2018-02-22 15:51         ` Peter Zijlstra
2018-02-22 16:31           ` Boqun Feng
2018-02-23  5:02             ` Boqun Feng
2018-02-23 11:15               ` Peter Zijlstra
2018-02-22 16:08       ` Peter Zijlstra
2018-02-22 16:34         ` Boqun Feng
2018-02-22 16:32           ` Peter Zijlstra
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 06/17] lockdep: Support deadlock detection for recursive read in check_noncircular() Boqun Feng
2018-02-22 14:54   ` Peter Zijlstra
2018-02-22 15:16     ` Peter Zijlstra
2018-02-22 15:44       ` Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change Boqun Feng
2018-02-22 17:29   ` Peter Zijlstra
2018-03-16  8:20     ` Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 08/17] lockdep: Fix recursive read lock related safe->unsafe detection Boqun Feng
2018-02-22 17:41   ` Peter Zijlstra
2018-02-22 17:46   ` Peter Zijlstra
2018-02-23  8:21     ` Boqun Feng
2018-02-23  8:58       ` Boqun Feng
2018-02-23 11:36         ` Peter Zijlstra
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 09/17] lockdep: Add recursive read locks into dependency graph Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 10/17] lockdep/selftest: Add a R-L/L-W test case specific to chain cache behavior Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 11/17] lockdep: Take read/write status in consideration when generate chainkey Boqun Feng
2018-02-22  7:08 ` [RFC tip/locking/lockdep v5 12/17] lockdep/selftest: Unleash irq_read_recursion2 and add more Boqun Feng
2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 13/17] lockdep/selftest: Add more recursive read related test cases Boqun Feng
2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 14/17] Revert "locking/lockdep/selftests: Fix mixed read-write ABBA tests" Boqun Feng
2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 15/17] lockdep: Reduce the size of lock_list Boqun Feng
2018-02-23 11:38   ` Peter Zijlstra
2018-02-23 12:40     ` Boqun Feng
2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 16/17] lockdep: Documention for recursive read lock detection reasoning Boqun Feng
2018-02-24 22:53   ` Andrea Parri
2018-02-27  2:32     ` Boqun Feng
2018-02-22  7:09 ` [RFC tip/locking/lockdep v5 17/17] MAINTAINERS: Add myself as a LOCKING PRIMITIVES reviewer Boqun Feng

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).