* [RFC tip 0/5] lockdep: Support deadlock detection for recursive read locks
From: Boqun Feng @ 2017-08-28 14:56 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Peter Zijlstra, Gautham R Shenoy, Byungchul Park,
	Boqun Feng

Hi Ingo and Peter,

As Peter pointed out:

	https://marc.info/?l=linux-kernel&m=150349072023540

Lockdep currently has only limited support for recursive read locks; a
deadlock case like the following cannot be detected:

	read_lock(A);
				lock(B);
	lock(B);
				write_lock(A);
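
To make that concrete, here is a minimal user-space sketch of the same
scenario (my own illustration: POSIX rwlocks and a mutex stand in for
the kernel's rwlock_t and spinlock, and the sleeps merely force the
interleaving shown above). Running it is expected to hang:

	/* build with: gcc -pthread demo.c */
	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_rwlock_t A = PTHREAD_RWLOCK_INITIALIZER;
	static pthread_mutex_t  B = PTHREAD_MUTEX_INITIALIZER;

	static void *left(void *unused)
	{
		pthread_rwlock_rdlock(&A);      /* read_lock(A) */
		sleep(1);                       /* let the other side take B */
		pthread_mutex_lock(&B);         /* lock(B): blocks forever */
		pthread_mutex_unlock(&B);
		pthread_rwlock_unlock(&A);
		return NULL;
	}

	static void *right(void *unused)
	{
		pthread_mutex_lock(&B);         /* lock(B) */
		sleep(1);                       /* let the other side take A */
		pthread_rwlock_wrlock(&A);      /* write_lock(A): blocks behind the reader */
		pthread_rwlock_unlock(&A);
		pthread_mutex_unlock(&B);
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, left, NULL);
		pthread_create(&t2, NULL, right, NULL);
		pthread_join(t1, NULL);         /* never returns: AB-BA deadlock */
		pthread_join(t2, NULL);
		return 0;
	}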

I got some inspiration from Gautham R Shenoy:

	https://lwn.net/Articles/332801/

and came up with this series.

The basic idea is:

*	Add recursive read locks into the graph

*	Classify dependencies into R->R, N->R, R->N and N->N, where R
	stands for a recursive read lock and N stands for any other lock.

*	Extend __bfs() to traverse all kinds of dependencies, and allow
	the read/write status to change during the traversal (i.e. with
	dependencies N(A)->R(B) and N(B)->R(C), BFS can go from A to B
	and then to C).

*	But don't allow a lock B to be used as a transfer station if B
	only has *->R dependencies to the previous lock and R->*
	dependencies to the next lock (see the sketch below). This is
	because if a BFS traversal used such a B as a transfer station,
	the following would exist:

		CPU0		CPU1		CPU2		CPU3
		lock(X);
		lock(Y);   	lock(Y);
				rlock(B);	rlock(B);
						lock(P);   	lock(P);
								lock(Q);
	
	The lock dependency chain breaks between CPU1 and CPU2, so there is no deadlock.
		
In this way, we can reflect the real dependencies while taking
recursive read locks into consideration.
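
As an illustration of that last rule, here is a stand-alone user-space
sketch (my own simplified model of the check, not the kernel
implementation; the DEP_* names mirror the ones patch #4 introduces):

	#include <stdio.h>

	/* Dependency kinds between two locks; R = recursive read, N = others. */
	#define DEP_NN (1U << 0)        /* N(prev) -> N(next) */
	#define DEP_RN (1U << 1)        /* R(prev) -> N(next) */
	#define DEP_NR (1U << 2)        /* N(prev) -> R(next) */
	#define DEP_RR (1U << 3)        /* R(prev) -> R(next) */

	/*
	 * May the BFS pass through lock B?  @in is the mask of dependencies
	 * from the previous lock to B, @out the mask from B to the next
	 * lock.  B is unusable iff every way in is *->R and every way out
	 * is R->*: then nothing serializes on B and the chain breaks there.
	 */
	static int usable_as_transfer(unsigned int in, unsigned int out)
	{
		int in_only_r  = !(in  & (DEP_NN | DEP_RN));  /* only *->R in */
		int out_only_r = !(out & (DEP_NN | DEP_NR));  /* only R->* out */

		return !(in_only_r && out_only_r);
	}

	int main(void)
	{
		/* The CPU0..CPU3 case above: Y->B is N->R, B->P is R->N. */
		printf("%d\n", usable_as_transfer(DEP_NR, DEP_RN));            /* 0 */

		/* If write_lock(B) -> lock(P) also existed somewhere (N->N): */
		printf("%d\n", usable_as_transfer(DEP_NR, DEP_RN | DEP_NN));   /* 1 */
		return 0;
	}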

This is really an RFC, as I'm not 100% sure I've covered all the cases
related to recursive read locks, but I did add two sets of self
testcases, and they pass ;-)

This series consists of 5 patches:

Patch #1 introduces a new test case to test chain cache behavior on the
recursive read deadlock detection.

Patch #2 introduces more complex cases for recursive read deadlock
detection.

Patch #3 does a little bit clean-up on the return value of __bfs() and
its friends.

Patch #4 adds recursive locks into dependency graph and extends BFS to
allow deadlock detection for recursive read locks.

Patch #5 fixes the problem that lock chains and chainkeys don't treat
read/write locks differently, which could miss the chance to detect a
deadlock because of a lock chain cache hit.

I plan to write more tests and experiment with this in the coming
weeks; I'm sending it out now for suggestions and comments.

Reviews and tests are welcome!

Regards,
Boqun


* [RFC tip 1/5] lockdep/selftest: Add a R-L/L-W test case specific to chain cache behavior
From: Boqun Feng @ 2017-08-28 14:56 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Peter Zijlstra, Gautham R Shenoy, Byungchul Park,
	Boqun Feng

As our chain cache doesn't differentiate read/write locks, even though
we can detect a read-lock/lock-write deadlock in check_noncircular(),
we can still be fooled if a read-lock/lock-read case (which is not a
deadlock) comes first.

So introduce a test case targeting specifically the chain cache
behavior when detecting recursive read lock related deadlocks.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 lib/locking-selftest.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 3c7151a6cd98..747a5379aeee 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -394,6 +394,49 @@ static void rwsem_ABBA1(void)
 	MU(Y1); // should fail
 }
 
+/*
+ * read_lock(A)
+ * spin_lock(B)
+ *		spin_lock(B)
+ *		write_lock(A)
+ *
+ * This test case is aimed at poking whether the chain cache prevents us from
+ * detecting a read-lock/lock-write deadlock: if the chain cache doesn't
+ * differentiate read/write locks, the following case may happen
+ *
+ * 	{ read_lock(A)->lock(B) dependency exists }
+ *
+ * 	P0:
+ * 	lock(B);
+ * 	read_lock(A);
+ *
+ *	{ Not a deadlock, B -> A is added in the chain cache }
+ *
+ *	P1:
+ *	lock(B);
+ *	write_lock(A);
+ *
+ *	{ B->A found in chain cache, not reported as a deadlock }
+ *
+ */
+static void rlock_chaincache_ABBA1(void)
+{
+	RL(X1);
+	L(Y1);
+	U(Y1);
+	RU(X1);
+
+	L(Y1);
+	RL(X1);
+	RU(X1);
+	U(Y1);
+
+	L(Y1);
+	WL(X1);
+	WU(X1);
+	U(Y1); // should fail
+}
+
 /*
  * read_lock(A)
  * spin_lock(B)
@@ -2046,6 +2089,10 @@ void locking_selftest(void)
 	pr_cont("             |");
 	dotest(rwsem_ABBA3, FAILURE, LOCKTYPE_RWSEM);
 
+	print_testname("chain cached mixed R-L/L-W ABBA");
+	pr_cont("             |");
+	dotest(rlock_chaincache_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
+
 	printk("  --------------------------------------------------------------------------\n");
 
 	/*
-- 
2.14.1


* [RFC tip 2/5] lockdep/selftest: Add more recursive read related test cases
From: Boqun Feng @ 2017-08-28 14:56 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Peter Zijlstra, Gautham R Shenoy, Byungchul Park,
	Boqun Feng

Add these four test cases:

1.
	CPU1		CPU2		CPU3
	=============	=============	=============
	write_lock(X);
			write_lock(Y);
					write_lock(Z);
	read_lock(Y);
			read_lock(Z);
					read_lock(X);

	deadlock.

2.
	CPU1		CPU2		CPU3
	=============	=============	=============
	write_lock(X);
			read_lock(Y);
					write_lock(Z);
	write_lock(Y);
			read_lock(Z);
					read_lock(X);

	deadlock.

3.
	CPU1		CPU2		CPU3
	=============	=============	=============
	write_lock(X);
			read_lock(Y);
					read_lock(Z);
	write_lock(Y);
			read_lock(Z);
					write_lock(X);

	no deadlock.

4.
	CPU1		CPU2		CPU3
	=============	=============	=============
	write_lock(X);
			read_lock(Y);
					write_lock(Z);
	read_lock(Y);
			read_lock(Z);
					write_lock(X);

	no deadlock.

These self testcases are valuable for developing support for recursive
read related deadlock detection. Note that in cases 3 and 4 the
would-be cycle must pass through a lock (Z in case 3, Y in case 4)
whose incoming and outgoing dependencies all take that lock as a
recursive read, so the dependency chain breaks there and no deadlock
exists.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 lib/locking-selftest.c | 161 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 161 insertions(+)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 747a5379aeee..9c04b304008a 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -1070,6 +1070,133 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft)
 #undef E2
 #undef E3
 
+/*
+ * write-read / write-read / write-read deadlock even if read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	RL(Y1);				\
+	RU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	WL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	WU(Y1);
+
+#define E3()				\
+					\
+	WL(Z1);				\
+	RL(X1);				\
+	RU(X1);				\
+	WU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1R2_W2R3_W3R1)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * write-write / read-read / write-read deadlock even if read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	WL(Y1);				\
+	WU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	RL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	RU(Y1);
+
+#define E3()				\
+					\
+	WL(Z1);				\
+	RL(X1);				\
+	RU(X1);				\
+	WU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1W2_R2R3_W3R1)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * write-write / read-read / read-write is not deadlock when read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	WL(Y1);				\
+	WU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	RL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	RU(Y1);
+
+#define E3()				\
+					\
+	RL(Z1);				\
+	WL(X1);				\
+	WU(X1);				\
+	RU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1W2_R2R3_R3W1)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * write-read / read-read / write-write is not deadlock when read is recursive
+ */
+
+#define E1()				\
+					\
+	WL(X1);				\
+	RL(Y1);				\
+	RU(Y1);				\
+	WU(X1);
+
+#define E2()				\
+					\
+	RL(Y1);				\
+	RL(Z1);				\
+	RU(Z1);				\
+	RU(Y1);
+
+#define E3()				\
+					\
+	WL(Z1);				\
+	WL(X1);				\
+	WU(X1);				\
+	WU(Z1);
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_3_EVENTS(W1R2_R2R3_W3W1)
+
+#undef E1
+#undef E2
+#undef E3
 /*
  * read-lock / write-lock recursion that is unsafe.
  */
@@ -1241,6 +1368,19 @@ static inline void print_testname(const char *testname)
 	dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK);		\
 	pr_cont("\n");
 
+#define DO_TESTCASE_1RR(desc, name, nr)				\
+	print_testname(desc"/"#nr);				\
+	pr_cont("             |");				\
+	dotest(name##_##nr, SUCCESS, LOCKTYPE_RWLOCK);		\
+	pr_cont("\n");
+
+#define DO_TESTCASE_1RRB(desc, name, nr)			\
+	print_testname(desc"/"#nr);				\
+	pr_cont("             |");				\
+	dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK);		\
+	pr_cont("\n");
+
+
 #define DO_TESTCASE_3(desc, name, nr)				\
 	print_testname(desc"/"#nr);				\
 	dotest(name##_spin_##nr, FAILURE, LOCKTYPE_SPIN);	\
@@ -1331,6 +1471,22 @@ static inline void print_testname(const char *testname)
 	DO_TESTCASE_2IB(desc, name, 312);			\
 	DO_TESTCASE_2IB(desc, name, 321);
 
+#define DO_TESTCASE_6x1RR(desc, name)				\
+	DO_TESTCASE_1RR(desc, name, 123);			\
+	DO_TESTCASE_1RR(desc, name, 132);			\
+	DO_TESTCASE_1RR(desc, name, 213);			\
+	DO_TESTCASE_1RR(desc, name, 231);			\
+	DO_TESTCASE_1RR(desc, name, 312);			\
+	DO_TESTCASE_1RR(desc, name, 321);
+
+#define DO_TESTCASE_6x1RRB(desc, name)				\
+	DO_TESTCASE_1RRB(desc, name, 123);			\
+	DO_TESTCASE_1RRB(desc, name, 132);			\
+	DO_TESTCASE_1RRB(desc, name, 213);			\
+	DO_TESTCASE_1RRB(desc, name, 231);			\
+	DO_TESTCASE_1RRB(desc, name, 312);			\
+	DO_TESTCASE_1RRB(desc, name, 321);
+
 #define DO_TESTCASE_6x6(desc, name)				\
 	DO_TESTCASE_6I(desc, name, 123);			\
 	DO_TESTCASE_6I(desc, name, 132);			\
@@ -2093,6 +2249,11 @@ void locking_selftest(void)
 	pr_cont("             |");
 	dotest(rlock_chaincache_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
 
+	DO_TESTCASE_6x1RRB("rlock W1R2/W2R3/W3R1", W1R2_W2R3_W3R1);
+	DO_TESTCASE_6x1RRB("rlock W1W2/R2R3/W3R1", W1W2_R2R3_W3R1);
+	DO_TESTCASE_6x1RR("rlock W1W2/R2R3/R3W1", W1W2_R2R3_R3W1);
+	DO_TESTCASE_6x1RR("rlock W1R2/R2R3/W3W1", W1R2_R2R3_W3W1);
+
 	printk("  --------------------------------------------------------------------------\n");
 
 	/*
-- 
2.14.1


* [RFC tip 3/5] lockdep: Demagic the return value of BFS
From: Boqun Feng @ 2017-08-28 14:56 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Peter Zijlstra, Gautham R Shenoy, Byungchul Park,
	Boqun Feng

__bfs() could return four magic numbers:

	1: search succeeds, but no matching node is found.
	0: search succeeds, and a matching node is found.
	-1: search fails because the circular queue is full.
	-2: search fails because an invalid node is found.

This patch cleans things up by introducing an enum type for the return
value of __bfs() and its friends. This improves the readability of the
code, and further, will help if we want to extend the BFS.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 134 ++++++++++++++++++++++++++++-------------------
 1 file changed, 79 insertions(+), 55 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index f73ca595b81e..535f6ec1a393 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -996,21 +996,52 @@ static inline int get_lock_depth(struct lock_list *child)
 	}
 	return depth;
 }
+/*
+ * Return values of a bfs search:
+ *
+ * BFS_E* indicates an error
+ * BFS_R* indicates a result (match or not)
+ *
+ * BFS_EINVALIDNODE: Find an invalid node in the graph.
+ *
+ * BFS_EQUEUEFULL: The queue is full while doing the bfs.
+ *
+ * BFS_RMATCH: Find the matched node in the graph, and put that node into
+ *            *@target_entry.
+ *
+ * BFS_RNOMATCH: Haven't found the matched node and keep *@target_entry
+ *              _unchanged_.
+ */
+enum bfs_result {
+	BFS_EINVALIDNODE = -2,
+	BFS_EQUEUEFULL = -1,
+	BFS_RMATCH = 0,
+	BFS_RNOMATCH = 1,
+};
+
+/*
+ * bfs_result < 0 means error
+ */
+
+static inline bool bfs_error(enum bfs_result res)
+{
+	return res < 0;
+}
 
-static int __bfs(struct lock_list *source_entry,
-		 void *data,
-		 int (*match)(struct lock_list *entry, void *data),
-		 struct lock_list **target_entry,
-		 int forward)
+static enum bfs_result __bfs(struct lock_list *source_entry,
+			     void *data,
+			     int (*match)(struct lock_list *entry, void *data),
+			     struct lock_list **target_entry,
+			     int forward)
 {
 	struct lock_list *entry;
 	struct list_head *head;
 	struct circular_queue *cq = &lock_cq;
-	int ret = 1;
+	enum bfs_result ret = BFS_RNOMATCH;
 
 	if (match(source_entry, data)) {
 		*target_entry = source_entry;
-		ret = 0;
+		ret = BFS_RMATCH;
 		goto exit;
 	}
 
@@ -1031,7 +1062,7 @@ static int __bfs(struct lock_list *source_entry,
 		__cq_dequeue(cq, (unsigned long *)&lock);
 
 		if (!lock->class) {
-			ret = -2;
+			ret = BFS_EINVALIDNODE;
 			goto exit;
 		}
 
@@ -1048,12 +1079,12 @@ static int __bfs(struct lock_list *source_entry,
 				mark_lock_accessed(entry, lock);
 				if (match(entry, data)) {
 					*target_entry = entry;
-					ret = 0;
+					ret = BFS_RMATCH;
 					goto exit;
 				}
 
 				if (__cq_enqueue(cq, (unsigned long)entry)) {
-					ret = -1;
+					ret = BFS_EQUEUEFULL;
 					goto exit;
 				}
 				cq_depth = __cq_get_elem_count(cq);
@@ -1066,19 +1097,21 @@ static int __bfs(struct lock_list *source_entry,
 	return ret;
 }
 
-static inline int __bfs_forwards(struct lock_list *src_entry,
-			void *data,
-			int (*match)(struct lock_list *entry, void *data),
-			struct lock_list **target_entry)
+static inline enum bfs_result
+__bfs_forwards(struct lock_list *src_entry,
+	       void *data,
+	       int (*match)(struct lock_list *entry, void *data),
+	       struct lock_list **target_entry)
 {
 	return __bfs(src_entry, data, match, target_entry, 1);
 
 }
 
-static inline int __bfs_backwards(struct lock_list *src_entry,
-			void *data,
-			int (*match)(struct lock_list *entry, void *data),
-			struct lock_list **target_entry)
+static inline enum bfs_result
+__bfs_backwards(struct lock_list *src_entry,
+		void *data,
+		int (*match)(struct lock_list *entry, void *data),
+		struct lock_list **target_entry)
 {
 	return __bfs(src_entry, data, match, target_entry, 0);
 
@@ -1335,13 +1368,13 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 
 /*
  * Prove that the dependency graph starting at <entry> can not
- * lead to <target>. Print an error and return 0 if it does.
+ * lead to <target>. Print an error and return BFS_RMATCH if it does.
  */
-static noinline int
+static noinline enum bfs_result
 check_noncircular(struct lock_list *root, struct lock_class *target,
-		struct lock_list **target_entry)
+		  struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_cyclic_checks);
 
@@ -1350,11 +1383,11 @@ check_noncircular(struct lock_list *root, struct lock_class *target,
 	return result;
 }
 
-static noinline int
+static noinline enum bfs_result
 check_redundant(struct lock_list *root, struct lock_class *target,
 		struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_redundant_checks);
 
@@ -1383,15 +1416,12 @@ static inline int usage_match(struct lock_list *entry, void *bit)
  *
  * Return 0 if such a node exists in the subgraph, and put that node
  * into *@target_entry.
- *
- * Return 1 otherwise and keep *@target_entry unchanged.
- * Return <0 on error.
  */
-static int
+static enum bfs_result
 find_usage_forwards(struct lock_list *root, enum lock_usage_bit bit,
 			struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_find_usage_forwards_checks);
 
@@ -1403,18 +1433,12 @@ find_usage_forwards(struct lock_list *root, enum lock_usage_bit bit,
 /*
  * Find a node in the backwards-direction dependency sub-graph starting
  * at @root->class that matches @bit.
- *
- * Return 0 if such a node exists in the subgraph, and put that node
- * into *@target_entry.
- *
- * Return 1 otherwise and keep *@target_entry unchanged.
- * Return <0 on error.
  */
-static int
+static enum bfs_result
 find_usage_backwards(struct lock_list *root, enum lock_usage_bit bit,
 			struct lock_list **target_entry)
 {
-	int result;
+	enum bfs_result result;
 
 	debug_atomic_inc(nr_find_usage_backwards_checks);
 
@@ -1622,18 +1646,18 @@ check_usage(struct task_struct *curr, struct held_lock *prev,
 
 	this.class = hlock_class(prev);
 	ret = find_usage_backwards(&this, bit_backwards, &target_entry);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
+	if (ret == BFS_RNOMATCH)
+		return 1;
 
 	that.parent = NULL;
 	that.class = hlock_class(next);
 	ret = find_usage_forwards(&that, bit_forwards, &target_entry1);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
+	if (ret == BFS_RNOMATCH)
+		return 1;
 
 	return print_bad_irq_dependency(curr, &this, &that,
 			target_entry, target_entry1,
@@ -1874,7 +1898,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	       int (*save)(struct stack_trace *trace))
 {
 	struct lock_list *entry;
-	int ret;
+	enum bfs_result ret;
 	struct lock_list this;
 	struct lock_list *uninitialized_var(target_entry);
 
@@ -1890,9 +1914,9 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	this.class = hlock_class(next);
 	this.parent = NULL;
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
-	if (unlikely(!ret))
+	if (unlikely(ret == BFS_RMATCH))
 		return print_circular_bug(&this, target_entry, next, prev, trace);
-	else if (unlikely(ret < 0))
+	else if (unlikely(bfs_error(ret)))
 		return print_bfs_bug(ret);
 
 	if (!check_prev_add_irq(curr, prev, next))
@@ -1930,11 +1954,11 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	this.class = hlock_class(prev);
 	this.parent = NULL;
 	ret = check_redundant(&this, hlock_class(next), &target_entry);
-	if (!ret) {
+	if (ret == BFS_RMATCH) {
 		debug_atomic_inc(nr_redundant);
 		return 2;
 	}
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
 
 
@@ -2685,16 +2709,16 @@ static int
 check_usage_forwards(struct task_struct *curr, struct held_lock *this,
 		     enum lock_usage_bit bit, const char *irqclass)
 {
-	int ret;
+	enum bfs_result ret;
 	struct lock_list root;
 	struct lock_list *uninitialized_var(target_entry);
 
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_forwards(&root, bit, &target_entry);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
+	if (ret == BFS_RNOMATCH)
-		return ret;
+		return 1;
 
 	return print_irq_inversion_bug(curr, &root, target_entry,
@@ -2709,17 +2733,17 @@ static int
 check_usage_backwards(struct task_struct *curr, struct held_lock *this,
 		      enum lock_usage_bit bit, const char *irqclass)
 {
-	int ret;
+	enum bfs_result ret;
 	struct lock_list root;
 	struct lock_list *uninitialized_var(target_entry);
 
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_backwards(&root, bit, &target_entry);
-	if (ret < 0)
+	if (bfs_error(ret))
 		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
+	if (ret == BFS_RNOMATCH)
+		return 1;
 
 	return print_irq_inversion_bug(curr, &root, target_entry,
 					this, 0, irqclass);
-- 
2.14.1


* [RFC tip 4/5] lockdep: Support deadlock detection for recursive read locks in check_noncircular()
From: Boqun Feng @ 2017-08-28 14:56 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Peter Zijlstra, Gautham R Shenoy, Byungchul Park,
	Boqun Feng

Currently, lockdep only has limited support for deadlock detection
involving recursive read locks.

The basic idea of the detection is:

First, we add recursive read locks into the graph, so now we have four
types of dependency: 1) non-recursive to recursive (NR), 2)
non-recursive to non-recursive (NN), 3) recursive to non-recursive (RN)
and 4) recursive to recursive (RR).

Further, in __bfs() we now allow the lock read/write status to switch
during the traversal; in other words, if __bfs() went from A to B
through write_lock(A) --> read_lock(B), we allow __bfs() to go from B
to C through write_lock(B) --> read_lock(C).

However, we limit the traversal so that it reflects the real
dependencies, and the rule is simple: the BFS traversal can go from A
through B to C iff we can find dependencies A --> B and B --> C where B
is not a recursive read lock in at least one of the two dependencies
(A --> B and B --> C). In other words, a lock cannot be the transfer
station if it only has *->R dependencies with previous locks and R->*
dependencies with following locks.

If we can still find a cycle under this rule, a deadlock is reported.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/lockdep.h  |  4 +++
 kernel/locking/lockdep.c | 93 +++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 81 insertions(+), 16 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 78bb7133abed..bf75338879f5 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -192,6 +192,10 @@ struct lock_list {
 	struct lock_class		*class;
 	struct stack_trace		trace;
 	int				distance;
+	/* bitmap of different dependencies from head to this */
+	u16				dep;
+	/* used by BFS to record whether this is picked as a recursive read */
+	u16				is_rr;
 
 	/*
 	 * The parent field is used to implement breadth-first search, and the
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 535f6ec1a393..245324c59706 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -871,7 +871,7 @@ static struct lock_list *alloc_list_entry(void)
  * Add a new dependency to the head of the list:
  */
 static int add_lock_to_list(struct lock_class *this, struct list_head *head,
-			    unsigned long ip, int distance,
+			    unsigned long ip, int distance, unsigned int dep,
 			    struct stack_trace *trace)
 {
 	struct lock_list *entry;
@@ -884,6 +884,7 @@ static int add_lock_to_list(struct lock_class *this, struct list_head *head,
 		return 0;
 
 	entry->class = this;
+	entry->dep = dep;
 	entry->distance = distance;
 	entry->trace = *trace;
 	/*
@@ -1028,6 +1029,58 @@ static inline bool bfs_error(enum bfs_result res)
 	return res < 0;
 }
 
+#define DEP_NN_BIT 0
+#define DEP_RN_BIT 1
+#define DEP_NR_BIT 2
+#define DEP_RR_BIT 3
+
+#define DEP_NN_MASK (1U << (DEP_NN_BIT))
+#define DEP_RN_MASK (1U << (DEP_RN_BIT))
+#define DEP_NR_MASK (1U << (DEP_NR_BIT))
+#define DEP_RR_MASK (1U << (DEP_RR_BIT))
+
+static inline unsigned int __calc_dep_bit(int prev, int next)
+{
+	if (prev == 2 && next != 2)
+		return DEP_RN_BIT;
+	if (prev != 2 && next == 2)
+		return DEP_NR_BIT;
+	if (prev == 2 && next == 2)
+		return DEP_RR_BIT;
+	else
+		return DEP_NN_BIT;
+}
+
+static inline unsigned int calc_dep(int prev, int next)
+{
+	return 1U << __calc_dep_bit(prev, next);
+}
+
+/*
+ * return -1 if no proper dependency could be picked
+ * return 0 if a * --> N dependency could be picked
+ * return 1 if only a * --> R dependency could be picked
+ *
+ * N: non-recursive lock
+ * R: recursive read lock
+ */
+static inline int pick_dep(u16 is_rr, u16 cap_dep)
+{
+	if (is_rr) { /* could only pick N --> */
+		if (cap_dep & DEP_NN_MASK)
+			return 0;
+		else if (cap_dep & DEP_NR_MASK)
+			return 1;
+		else
+			return -1;
+	} else {
+		if (cap_dep & DEP_NN_MASK || cap_dep & DEP_RN_MASK)
+			return 0;
+		else
+			return 1;
+	}
+}
+
 static enum bfs_result __bfs(struct lock_list *source_entry,
 			     void *data,
 			     int (*match)(struct lock_list *entry, void *data),
@@ -1038,6 +1091,7 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 	struct list_head *head;
 	struct circular_queue *cq = &lock_cq;
 	enum bfs_result ret = BFS_RNOMATCH;
+	int is_rr, next_is_rr;
 
 	if (match(source_entry, data)) {
 		*target_entry = source_entry;
@@ -1071,11 +1125,20 @@ static enum bfs_result __bfs(struct lock_list *source_entry,
 		else
 			head = &lock->class->locks_before;
 
+		is_rr = lock->is_rr;
+
 		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
 		list_for_each_entry_rcu(entry, head, entry) {
 			if (!lock_accessed(entry)) {
 				unsigned int cq_depth;
+
+				next_is_rr = pick_dep(is_rr, entry->dep);
+
+				if (next_is_rr < 0)
+					continue;
+
+				entry->is_rr = next_is_rr;
 				mark_lock_accessed(entry, lock);
 				if (match(entry, data)) {
 					*target_entry = entry;
@@ -1367,7 +1430,7 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 }
 
 /*
- * Prove that the dependency graph starting at <entry> can not
+ * Prove that the dependency graph starting at <root> can not
  * lead to <target>. Print an error and return BFS_RMATCH if it does.
  */
 static noinline enum bfs_result
@@ -1913,8 +1976,10 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 */
 	this.class = hlock_class(next);
 	this.parent = NULL;
+	this.is_rr = (next->read == 2);
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
-	if (unlikely(ret == BFS_RMATCH))
+	if (unlikely(ret == BFS_RMATCH) &&
+	    (prev->read != 2 || !target_entry->is_rr))
 		return print_circular_bug(&this, target_entry, next, prev, trace);
 	else if (unlikely(bfs_error(ret)))
 		return print_bfs_bug(ret);
@@ -1922,16 +1987,6 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	if (!check_prev_add_irq(curr, prev, next))
 		return 0;
 
-	/*
-	 * For recursive read-locks we do all the dependency checks,
-	 * but we dont store read-triggered dependencies (only
-	 * write-triggered dependencies). This ensures that only the
-	 * write-side dependencies matter, and that if for example a
-	 * write-lock never takes any other locks, then the reads are
-	 * equivalent to a NOP.
-	 */
-	if (next->read == 2 || prev->read == 2)
-		return 1;
 	/*
 	 * Is the <prev> -> <next> dependency already present?
 	 *
@@ -1944,6 +1999,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 		if (entry->class == hlock_class(next)) {
 			if (distance == 1)
 				entry->distance = 1;
+			entry->dep |= calc_dep(prev->read, next->read);
 			return 1;
 		}
 	}
@@ -1953,6 +2009,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 */
 	this.class = hlock_class(prev);
 	this.parent = NULL;
+	this.is_rr = (prev->read == 2);
 	ret = check_redundant(&this, hlock_class(next), &target_entry);
 	if (ret == BFS_RMATCH) {
 		debug_atomic_inc(nr_redundant);
@@ -1971,14 +2028,18 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 */
 	ret = add_lock_to_list(hlock_class(next),
 			       &hlock_class(prev)->locks_after,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance,
+			       calc_dep(prev->read, next->read),
+			       trace);
 
 	if (!ret)
 		return 0;
 
 	ret = add_lock_to_list(hlock_class(prev),
 			       &hlock_class(next)->locks_before,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance,
+			       calc_dep(next->read, prev->read),
+			       trace);
 	if (!ret)
 		return 0;
 
@@ -2040,7 +2101,7 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 			 * Only non-recursive-read entries get new dependencies
 			 * added:
 			 */
-			if (hlock->read != 2 && hlock->check) {
+			if (hlock->check) {
 				int ret = check_prev_add(curr, hlock, next,
 							 distance, &trace, save);
 				if (!ret)
-- 
2.14.1


* [RFC tip 5/5] lockdep: Take read/write status in consideration when generate chainkey
From: Boqun Feng @ 2017-08-28 14:56 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Peter Zijlstra, Gautham R Shenoy, Byungchul Park,
	Boqun Feng

Currently, the chainkey of a lock chain is a hash of the class_idx of
all the held locks; the read/write status is not taken into
consideration while generating the chainkey. This can result in a
problem if we have:

	P1()
	{
		read_lock(B);
		lock(A);
	}

	P2()
	{
		lock(A);
		read_lock(B);
	}

	P3()
	{
		lock(A);
		write_lock(B);
	}

If P1(), P2() and P3() run one by one, then when P2() runs, the lock
chain A -> B is detected as not being a deadlock and is added to the
chain cache; after that, when P3() runs, even though it is a deadlock,
we can miss it because of the chain cache hit. This can be confirmed by
the self testcase "chain cached mixed R-L/L-W".

To resolve this, we use an "hlock_id" to generate the chainkey; the
hlock_id is a tuple (hlock->class_idx, hlock->read), which fits in a
u16. With this, the chainkeys differ if the lock sequences have the
same locks but different read/write status.

Besides, since we use "hlock_id" to generate chainkeys, chain_hlocks
now stores "hlock_id"s rather than lock_class indexes.
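
To illustrate, here is a stand-alone sketch (my own, with a toy mixing
function standing in for iterate_chain_key() and hypothetical class
indexes): under the old scheme the chains of P2() and P3() hash to the
same chainkey, while under the new scheme they differ:

	#include <stdio.h>
	#include <stdint.h>

	#define KEYS_BITS 13    /* stand-in for MAX_LOCKDEP_KEYS_BITS */

	/* Toy stand-in for iterate_chain_key(); the real one mixes via jhash. */
	static uint64_t mix(uint64_t key, uint16_t id)
	{
		return (key ^ id) * 0x9E3779B97F4A7C15ULL;
	}

	/* hlock->read: 0 == write, 1 == read, 2 == recursive read */
	static uint16_t hlock_id(uint16_t class_idx, uint16_t read)
	{
		return class_idx | (uint16_t)(read << KEYS_BITS);
	}

	int main(void)
	{
		uint16_t A = 1, B = 2;  /* hypothetical class indexes */

		/* Old scheme, class_idx only: P2 and P3 collide. */
		uint64_t p2_old = mix(mix(0, A), B);
		uint64_t p3_old = mix(mix(0, A), B);

		/* New scheme: lock(A); read_lock(B) vs lock(A); write_lock(B). */
		uint64_t p2_new = mix(mix(0, hlock_id(A, 0)), hlock_id(B, 2));
		uint64_t p3_new = mix(mix(0, hlock_id(A, 0)), hlock_id(B, 0));

		printf("old collide: %d, new collide: %d\n",
		       p2_old == p3_old, p2_new == p3_new);   /* 1, 0 */
		return 0;
	}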

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 60 ++++++++++++++++++++++++++++++------------------
 1 file changed, 38 insertions(+), 22 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 245324c59706..c7c430e46b4b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -311,6 +311,21 @@ static struct hlist_head classhash_table[CLASSHASH_SIZE];
 
 static struct hlist_head chainhash_table[CHAINHASH_SIZE];
 
+/*
+ * The id stored in chain_hlocks: encodes class_idx and read/write state.
+ */
+static inline u16 hlock_id(struct held_lock *hlock)
+{
+	BUILD_BUG_ON(MAX_LOCKDEP_KEYS_BITS + 2 > 16);
+
+	return (hlock->class_idx | (hlock->read << MAX_LOCKDEP_KEYS_BITS));
+}
+
+static inline unsigned int chain_hlock_class_idx(u16 hlock_id)
+{
+	return hlock_id & MAX_LOCKDEP_KEYS;
+}
+
 /*
  * The hash key of the lock dependency chains is a hash itself too:
  * it's a hash of all locks taken up to that lock, including that lock.
@@ -2159,7 +2174,10 @@ static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
 
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
 {
-	return lock_classes + chain_hlocks[chain->base + i];
+	u16 chain_hlock = chain_hlocks[chain->base + i];
+	unsigned int class_idx = chain_hlock_class_idx(chain_hlock);
+
+	return lock_classes + class_idx - 1;
 }
 
 /*
@@ -2185,12 +2203,12 @@ static inline int get_first_held_lock(struct task_struct *curr,
 /*
  * Returns the next chain_key iteration
  */
-static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
+static u64 print_chain_key_iteration(u16 hlock_id, u64 chain_key)
 {
-	u64 new_chain_key = iterate_chain_key(chain_key, class_idx);
+	u64 new_chain_key = iterate_chain_key(chain_key, hlock_id);
 
-	printk(" class_idx:%d -> chain_key:%016Lx",
-		class_idx,
+	printk(" hlock_id:%d -> chain_key:%016Lx",
+		(unsigned int)hlock_id,
 		(unsigned long long)new_chain_key);
 	return new_chain_key;
 }
@@ -2206,12 +2224,12 @@ print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_ne
 	printk("depth: %u\n", depth + 1);
 	for (i = get_first_held_lock(curr, hlock_next); i < depth; i++) {
 		hlock = curr->held_locks + i;
-		chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
+		chain_key = print_chain_key_iteration(hlock_id(hlock), chain_key);
 
 		print_lock(hlock);
 	}
 
-	print_chain_key_iteration(hlock_next->class_idx, chain_key);
+	print_chain_key_iteration(hlock_id(hlock_next), chain_key);
 	print_lock(hlock_next);
 }
 
@@ -2219,14 +2237,14 @@ static void print_chain_keys_chain(struct lock_chain *chain)
 {
 	int i;
 	u64 chain_key = 0;
-	int class_id;
+	u16 hlock_id;
 
 	printk("depth: %u\n", chain->depth);
 	for (i = 0; i < chain->depth; i++) {
-		class_id = chain_hlocks[chain->base + i];
-		chain_key = print_chain_key_iteration(class_id + 1, chain_key);
+		hlock_id = chain_hlocks[chain->base + i];
+		chain_key = print_chain_key_iteration(hlock_id, chain_key);
 
-		print_lock_name(lock_classes + class_id);
+		print_lock_name(lock_classes + chain_hlock_class_idx(hlock_id) - 1);
 		printk("\n");
 	}
 }
@@ -2275,7 +2293,7 @@ static int check_no_collision(struct task_struct *curr,
 	}
 
 	for (j = 0; j < chain->depth - 1; j++, i++) {
-		id = curr->held_locks[i].class_idx - 1;
+		id = hlock_id(&curr->held_locks[i]);
 
 		if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) {
 			print_collision(curr, hlock, chain);
@@ -2332,8 +2350,8 @@ static inline int add_chain_cache_classes(unsigned int prev,
 	if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
 		chain->base = nr_chain_hlocks;
 		nr_chain_hlocks += chain->depth;
-		chain_hlocks[chain->base] = prev - 1;
-		chain_hlocks[chain->base + 1] = next -1;
+		chain_hlocks[chain->base] = prev;
+		chain_hlocks[chain->base + 1] = next;
 	}
 #ifdef CONFIG_DEBUG_LOCKDEP
 	/*
@@ -2367,7 +2385,6 @@ static inline int add_chain_cache(struct task_struct *curr,
 				  struct held_lock *hlock,
 				  u64 chain_key)
 {
-	struct lock_class *class = hlock_class(hlock);
 	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 	int i, j;
@@ -2406,10 +2423,9 @@ static inline int add_chain_cache(struct task_struct *curr,
 	if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
 		chain->base = nr_chain_hlocks;
 		for (j = 0; j < chain->depth - 1; j++, i++) {
-			int lock_id = curr->held_locks[i].class_idx - 1;
-			chain_hlocks[chain->base + j] = lock_id;
+			chain_hlocks[chain->base + j] = hlock_id(&curr->held_locks[i]);
 		}
-		chain_hlocks[chain->base + j] = class - lock_classes;
+		chain_hlocks[chain->base + j] = hlock_id(hlock);
 	}
 
 	if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS)
@@ -2607,7 +2623,7 @@ static void check_chain_key(struct task_struct *curr)
 		if (prev_hlock && (prev_hlock->irq_context !=
 							hlock->irq_context))
 			chain_key = 0;
-		chain_key = iterate_chain_key(chain_key, hlock->class_idx);
+		chain_key = iterate_chain_key(chain_key, hlock_id(hlock));
 		prev_hlock = hlock;
 	}
 	if (chain_key != curr->curr_chain_key) {
@@ -3575,7 +3591,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		chain_key = 0;
 		chain_head = 1;
 	}
-	chain_key = iterate_chain_key(chain_key, class_idx);
+	chain_key = iterate_chain_key(chain_key, hlock_id(hlock));
 
 	if (nest_lock && !__lock_is_held(nest_lock, -1))
 		return print_lock_nested_lock_not_held(curr, hlock, ip);
@@ -5088,9 +5104,9 @@ static int commit_xhlock(struct cross_lock *xlock, struct hist_lock *xhlock)
 	unsigned int xid, pid;
 	u64 chain_key;
 
-	xid = xlock_class(xlock) - lock_classes;
+	xid = xlock_class(xlock) - lock_classes + 1;
 	chain_key = iterate_chain_key((u64)0, xid);
-	pid = xhlock_class(xhlock) - lock_classes;
+	pid = xhlock_class(xhlock) - lock_classes + 1;
 	chain_key = iterate_chain_key(chain_key, pid);
 
 	if (lookup_chain_cache(chain_key))
-- 
2.14.1
