* [PATCH 00/27] locking/lockdep: Add support for dynamic keys
@ 2018-11-28 23:42 Bart Van Assche
  2018-11-28 23:42 ` [PATCH 01/27] lockdep tests: Display compiler warning and error messages Bart Van Assche
                   ` (27 more replies)
  0 siblings, 28 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:42 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Hi Ingo and Peter,

A known shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. Since all objects initialized from the
same call site then share a single key, this key sharing can cause false
positive deadlock reports; a minimal illustration follows the list below. This
patch series adds support for dynamic keys in the lockdep code. I'm not
claiming that this patch series is perfect, but the code survives nontrivial
tests, so I think it is worth a look. Two unrelated changes included in this
series are:
- Improve the lockdep tests.
- Complain if no name has been assigned to a lock object.
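
To make the key-sharing problem concrete, here is a minimal sketch of the
kind of false positive this series is about. It is not taken from any of the
patches below; the foo_dev structure and its functions are made up purely for
illustration:

#include <linux/mutex.h>
#include <linux/slab.h>

struct foo_dev {
	struct mutex lock;	/* all foo_dev instances share one lock class */
};

static struct foo_dev *foo_alloc(void)
{
	struct foo_dev *d = kzalloc(sizeof(*d), GFP_KERNEL);

	if (d)
		mutex_init(&d->lock);	/* one static key per mutex_init() call site */
	return d;
}

static void foo_link(struct foo_dev *parent, struct foo_dev *child)
{
	mutex_lock(&parent->lock);
	/*
	 * Two different mutex instances, but the same (static) lock class,
	 * so lockdep reports "possible recursive locking detected" here
	 * even though no real deadlock can occur.
	 */
	mutex_lock(&child->lock);
	mutex_unlock(&child->lock);
	mutex_unlock(&parent->lock);
}

The dynamic keys introduced near the end of this series, and used for
workqueues, make it possible to give dynamically allocated objects their own
lock class instead of forcing all of them to share a single static key.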

Thanks,

Bart.

Bart Van Assche (27):
  lockdep tests: Display compiler warning and error messages
  lockdep tests: Fix shellcheck warnings
  lockdep tests: Improve testing accuracy
  lockdep tests: Run lockdep tests a second time under Valgrind
  liblockdep: Rename "trywlock" into "trywrlock"
  liblockdep: Add dummy print_irqtrace_events() implementation
  lockdep tests: Test the lockdep_reset_lock() implementation
  locking/lockdep: Declare local symbols static
  locking/lockdep: Inline __lockdep_init_map()
  locking/lockdep: Introduce lock_class_cache_is_registered()
  timekeeping: Assign a name to tk_core.seq.dep_map
  net/core: Assign a name to devnet_rename_seq.dep_map
  locking/lockdep: Complain if a lock object has no name
  locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement
  locking/lockdep: Make concurrent lockdep_reset_lock() calls safe
  locking/lockdep: Stop using RCU primitives to access all_lock_classes
  locking/lockdep: Make zap_class() remove all matching lock order
    entries
  locking/lockdep: Reorder struct lock_class members
  locking/lockdep: Retain the class key and name while freeing a lock
    class
  locking/lockdep: Free lock classes that are no longer in use
  locking/lockdep: Rename lock_list.entry into
    lock_list.lock_order_entry
  locking/lockdep: Reuse list entries that are no longer in use
  locking/lockdep: Check data structure consistency
  locking/lockdep: Introduce __lockdep_free_key_range()
  locking/lockdep: Add support for dynamic keys
  kernel/workqueue: Use dynamic lockdep keys for workqueues
  lockdep tests: Test dynamic key registration

 include/linux/lockdep.h                       |  45 +-
 include/linux/workqueue.h                     |  28 +-
 kernel/locking/lockdep.c                      | 640 +++++++++++++++---
 kernel/locking/lockdep_proc.c                 |   2 +-
 kernel/time/timekeeping.c                     |   4 +-
 kernel/workqueue.c                            |  60 +-
 net/core/dev.c                                |   2 +-
 tools/lib/lockdep/include/liblockdep/common.h |   3 +
 tools/lib/lockdep/include/liblockdep/mutex.h  |  12 +-
 tools/lib/lockdep/include/liblockdep/rwlock.h |   6 +-
 tools/lib/lockdep/lockdep.c                   |   5 +
 tools/lib/lockdep/run_tests.sh                |  38 +-
 tools/lib/lockdep/tests/AA.sh                 |   2 +
 tools/lib/lockdep/tests/ABA.sh                |   2 +
 tools/lib/lockdep/tests/ABBA.c                |  12 +
 tools/lib/lockdep/tests/ABBA.sh               |   2 +
 tools/lib/lockdep/tests/ABBA_2threads.sh      |   2 +
 tools/lib/lockdep/tests/ABBCCA.c              |   4 +
 tools/lib/lockdep/tests/ABBCCA.sh             |   2 +
 tools/lib/lockdep/tests/ABBCCDDA.c            |   5 +
 tools/lib/lockdep/tests/ABBCCDDA.sh           |   2 +
 tools/lib/lockdep/tests/ABCABC.c              |   4 +
 tools/lib/lockdep/tests/ABCABC.sh             |   2 +
 tools/lib/lockdep/tests/ABCDBCDA.c            |   5 +
 tools/lib/lockdep/tests/ABCDBCDA.sh           |   2 +
 tools/lib/lockdep/tests/ABCDBDDA.c            |   5 +
 tools/lib/lockdep/tests/ABCDBDDA.sh           |   2 +
 tools/lib/lockdep/tests/WW.sh                 |   2 +
 tools/lib/lockdep/tests/unlock_balance.c      |   2 +
 tools/lib/lockdep/tests/unlock_balance.sh     |   2 +
 30 files changed, 726 insertions(+), 178 deletions(-)
 create mode 100755 tools/lib/lockdep/tests/AA.sh
 create mode 100755 tools/lib/lockdep/tests/ABA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA_2threads.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCDDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCABC.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBCDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBDDA.sh
 create mode 100755 tools/lib/lockdep/tests/WW.sh
 create mode 100755 tools/lib/lockdep/tests/unlock_balance.sh

-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 01/27] lockdep tests: Display compiler warning and error messages
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
@ 2018-11-28 23:42 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 02/27] lockdep tests: Fix shellcheck warnings Bart Van Assche
                   ` (26 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:42 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

If building liblockdep fails, report that failure and mark the run as
failed. Show the compiler warning and error messages that are generated
while building a test and only run a test if its compilation succeeded.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 2e570a188f16..eef3fe4a24fe 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -1,13 +1,16 @@
 #! /bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
-make &> /dev/null
+if ! make >/dev/null; then
+    echo "Building liblockdep failed."
+    echo "FAILED!"
+fi
 
 for i in `ls tests/*.c`; do
 	testname=$(basename "$i" .c)
-	gcc -o tests/$testname -pthread $i liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &> /dev/null
 	echo -ne "$testname... "
-	if [ $(timeout 1 ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
+		[ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
@@ -19,9 +22,9 @@ done
 
 for i in `ls tests/*.c`; do
 	testname=$(basename "$i" .c)
-	gcc -o tests/$testname -pthread -Iinclude $i &> /dev/null
 	echo -ne "(PRELOAD) $testname... "
-	if [ $(timeout 1 ./lockdep ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+		[ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 02/27] lockdep tests: Fix shellcheck warnings
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
  2018-11-28 23:42 ` [PATCH 01/27] lockdep tests: Display compiler warning and error messages Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 03/27] lockdep tests: Improve testing accuracy Bart Van Assche
                   ` (25 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Use find instead of ls to avoid splitting filenames that contain spaces.
Use rm -f instead of if ... then rm ...; fi. This patch addresses all
shellcheck complaints about the run_tests.sh shell script.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index eef3fe4a24fe..e4f41318aa79 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -6,7 +6,7 @@ if ! make >/dev/null; then
     echo "FAILED!"
 fi
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "$testname... "
 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
@@ -15,12 +15,10 @@ for i in `ls tests/*.c`; do
 	else
 		echo "FAILED!"
 	fi
-	if [ -f "tests/$testname" ]; then
-		rm tests/$testname
-	fi
+	rm -f "tests/$testname"
 done
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "(PRELOAD) $testname... "
 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
@@ -29,7 +27,5 @@ for i in `ls tests/*.c`; do
 	else
 		echo "FAILED!"
 	fi
-	if [ -f "tests/$testname" ]; then
-		rm tests/$testname
-	fi
+	rm -f "tests/$testname"
 done
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 03/27] lockdep tests: Improve testing accuracy
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
  2018-11-28 23:42 ` [PATCH 01/27] lockdep tests: Display compiler warning and error messages Bart Van Assche
  2018-11-28 23:43 ` [PATCH 02/27] lockdep tests: Fix shellcheck warnings Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 04/27] lockdep tests: Run lockdep tests a second time under Valgrind Bart Van Assche
                   ` (24 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Instead of only checking whether a test produced any output, check that
output against the warning the test is expected to trigger. This prevents
stray output, e.g. debug messages, from causing the message "PASSED!" to
be reported for failed tests.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh            | 5 +++--
 tools/lib/lockdep/tests/AA.sh             | 2 ++
 tools/lib/lockdep/tests/ABA.sh            | 2 ++
 tools/lib/lockdep/tests/ABBA.sh           | 2 ++
 tools/lib/lockdep/tests/ABBA_2threads.sh  | 2 ++
 tools/lib/lockdep/tests/ABBCCA.sh         | 2 ++
 tools/lib/lockdep/tests/ABBCCDDA.sh       | 2 ++
 tools/lib/lockdep/tests/ABCABC.sh         | 2 ++
 tools/lib/lockdep/tests/ABCDBCDA.sh       | 2 ++
 tools/lib/lockdep/tests/ABCDBDDA.sh       | 2 ++
 tools/lib/lockdep/tests/WW.sh             | 2 ++
 tools/lib/lockdep/tests/unlock_balance.sh | 2 ++
 12 files changed, 25 insertions(+), 2 deletions(-)
 create mode 100755 tools/lib/lockdep/tests/AA.sh
 create mode 100755 tools/lib/lockdep/tests/ABA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA_2threads.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCDDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCABC.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBCDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBDDA.sh
 create mode 100755 tools/lib/lockdep/tests/WW.sh
 create mode 100755 tools/lib/lockdep/tests/unlock_balance.sh

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index e4f41318aa79..38b8c9034b8e 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -10,7 +10,7 @@ find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "$testname... "
 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
-		[ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+		timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
@@ -22,7 +22,8 @@ find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "(PRELOAD) $testname... "
 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
-		[ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+		timeout 1 ./lockdep "tests/$testname" 2>&1 |
+		"tests/${testname}.sh"; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
diff --git a/tools/lib/lockdep/tests/AA.sh b/tools/lib/lockdep/tests/AA.sh
new file mode 100755
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/AA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABA.sh b/tools/lib/lockdep/tests/ABA.sh
new file mode 100755
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABBA.sh b/tools/lib/lockdep/tests/ABBA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBA_2threads.sh b/tools/lib/lockdep/tests/ABBA_2threads.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA_2threads.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCA.sh b/tools/lib/lockdep/tests/ABBCCA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.sh b/tools/lib/lockdep/tests/ABBCCDDA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCDDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCABC.sh b/tools/lib/lockdep/tests/ABCABC.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCABC.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.sh b/tools/lib/lockdep/tests/ABCDBCDA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCDBCDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.sh b/tools/lib/lockdep/tests/ABCDBDDA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCDBDDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/WW.sh b/tools/lib/lockdep/tests/WW.sh
new file mode 100755
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/WW.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/unlock_balance.sh b/tools/lib/lockdep/tests/unlock_balance.sh
new file mode 100755
index 000000000000..c6e3952303fe
--- /dev/null
+++ b/tools/lib/lockdep/tests/unlock_balance.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: bad unlock balance detected'
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 04/27] lockdep tests: Run lockdep tests a second time under Valgrind
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (2 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 03/27] lockdep tests: Improve testing accuracy Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 05/27] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
                   ` (23 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Running the tests a second time under Valgrind improves test coverage and
also detects invalid and uninitialised memory accesses in liblockdep.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 38b8c9034b8e..f1b5027925db 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -30,3 +30,17 @@ find tests -name '*.c' | sort | while read -r i; do
 	fi
 	rm -f "tests/$testname"
 done
+
+find tests -name '*.c' | sort | while read -r i; do
+	testname=$(basename "$i" .c)
+	echo -ne "(PRELOAD + Valgrind) $testname... "
+	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+		{ timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
+		"tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+		! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
+		echo "PASSED!"
+	else
+		echo "FAILED!"
+	fi
+	rm -f "tests/$testname"
+done
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 05/27] liblockdep: Rename "trywlock" into "trywrlock"
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (3 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 04/27] lockdep tests: Run lockdep tests a second time under Valgrind Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 06/27] liblockdep: Add dummy print_irqtrace_events() implementation Bart Van Assche
                   ` (22 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche, Sasha Levin

Fix a typo in the function and macro names such that the following
compiler warning is no longer reported while compiling the lockdep unit
tests:

include/liblockdep/rwlock.h: In function 'liblockdep_pthread_rwlock_trywlock':
include/liblockdep/rwlock.h:66:9: warning: implicit declaration of function 'pthread_rwlock_trywlock'; did you mean 'pthread_rwlock_trywrlock'? [-Wimplicit-function-declaration]
  return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
         ^~~~~~~~~~~~~~~~~~~~~~~
         pthread_rwlock_trywrlock

Fixes: 5a52c9b480e0 ("liblockdep: Add public headers for pthread_rwlock_t implementation")
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/include/liblockdep/rwlock.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/rwlock.h b/tools/lib/lockdep/include/liblockdep/rwlock.h
index a96c3bf0fef1..365762e3a1ea 100644
--- a/tools/lib/lockdep/include/liblockdep/rwlock.h
+++ b/tools/lib/lockdep/include/liblockdep/rwlock.h
@@ -60,10 +60,10 @@ static inline int liblockdep_pthread_rwlock_tryrdlock(liblockdep_pthread_rwlock_
 	return pthread_rwlock_tryrdlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
-static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock)
+static inline int liblockdep_pthread_rwlock_trywrlock(liblockdep_pthread_rwlock_t *lock)
 {
 	lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_);
-	return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
+	return pthread_rwlock_trywrlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
 static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
@@ -79,7 +79,7 @@ static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
 #define pthread_rwlock_unlock		liblockdep_pthread_rwlock_unlock
 #define pthread_rwlock_wrlock		liblockdep_pthread_rwlock_wrlock
 #define pthread_rwlock_tryrdlock	liblockdep_pthread_rwlock_tryrdlock
-#define pthread_rwlock_trywlock		liblockdep_pthread_rwlock_trywlock
+#define pthread_rwlock_trywrlock	liblockdep_pthread_rwlock_trywrlock
 #define pthread_rwlock_destroy		liblockdep_rwlock_destroy
 
 #endif
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 06/27] liblockdep: Add dummy print_irqtrace_events() implementation
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (4 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 05/27] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 07/27] lockdep tests: Test the lockdep_reset_lock() implementation Bart Van Assche
                   ` (21 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Add a dummy print_irqtrace_events() implementation such that linking
against liblockdep no longer fails due to a missing definition of that
function.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/lockdep.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/lib/lockdep/lockdep.c b/tools/lib/lockdep/lockdep.c
index 6002fcf2f9bc..348a9d0fb766 100644
--- a/tools/lib/lockdep/lockdep.c
+++ b/tools/lib/lockdep/lockdep.c
@@ -15,6 +15,11 @@ u32 prandom_u32(void)
 	abort();
 }
 
+void print_irqtrace_events(struct task_struct *curr)
+{
+	abort();
+}
+
 static struct new_utsname *init_utsname(void)
 {
 	static struct new_utsname n = (struct new_utsname) {
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 07/27] lockdep tests: Test the lockdep_reset_lock() implementation
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (5 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 06/27] liblockdep: Add dummy print_irqtrace_events() implementation Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 08/27] locking/lockdep: Declare local symbols static Bart Van Assche
                   ` (20 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Call lockdep_reset_lock() from liblockdep_pthread_mutex_destroy() and add
pthread_mutex_destroy() calls to the tests such that the
lockdep_reset_lock() implementation gets tested.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/include/liblockdep/common.h | 1 +
 tools/lib/lockdep/include/liblockdep/mutex.h  | 1 +
 tools/lib/lockdep/tests/ABBA.c                | 3 +++
 tools/lib/lockdep/tests/ABBCCA.c              | 4 ++++
 tools/lib/lockdep/tests/ABBCCDDA.c            | 5 +++++
 tools/lib/lockdep/tests/ABCABC.c              | 4 ++++
 tools/lib/lockdep/tests/ABCDBCDA.c            | 5 +++++
 tools/lib/lockdep/tests/ABCDBDDA.c            | 5 +++++
 tools/lib/lockdep/tests/unlock_balance.c      | 2 ++
 9 files changed, 30 insertions(+)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index 8862da80995a..d640a9761f09 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -44,6 +44,7 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 			struct lockdep_map *nest_lock, unsigned long ip);
 void lock_release(struct lockdep_map *lock, int nested,
 			unsigned long ip);
+void lockdep_reset_lock(struct lockdep_map *lock);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index a80ac39f966e..2073d4e1f2f0 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -54,6 +54,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
 
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
+	lockdep_reset_lock(&lock->dep_map);
 	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 1460afd33d71..623313f54720 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -11,4 +11,7 @@ void main(void)
 
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(b, a);
+
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCA.c b/tools/lib/lockdep/tests/ABBCCA.c
index a54c1b2af118..48446129d496 100644
--- a/tools/lib/lockdep/tests/ABBCCA.c
+++ b/tools/lib/lockdep/tests/ABBCCA.c
@@ -13,4 +13,8 @@ void main(void)
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(c, a);
+
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.c b/tools/lib/lockdep/tests/ABBCCDDA.c
index aa5d194e8869..3570bf7b3804 100644
--- a/tools/lib/lockdep/tests/ABBCCDDA.c
+++ b/tools/lib/lockdep/tests/ABBCCDDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCABC.c b/tools/lib/lockdep/tests/ABCABC.c
index b54a08e60416..a1c4659894cd 100644
--- a/tools/lib/lockdep/tests/ABCABC.c
+++ b/tools/lib/lockdep/tests/ABCABC.c
@@ -13,4 +13,8 @@ void main(void)
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(c, a);
 	LOCK_UNLOCK_2(b, c);
+
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.c b/tools/lib/lockdep/tests/ABCDBCDA.c
index a56742250d86..335af1c90ab5 100644
--- a/tools/lib/lockdep/tests/ABCDBCDA.c
+++ b/tools/lib/lockdep/tests/ABCDBCDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.c b/tools/lib/lockdep/tests/ABCDBDDA.c
index 238a3353f3c3..3c5972863049 100644
--- a/tools/lib/lockdep/tests/ABCDBDDA.c
+++ b/tools/lib/lockdep/tests/ABCDBDDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(b, d);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/unlock_balance.c b/tools/lib/lockdep/tests/unlock_balance.c
index 34cf32f689de..dba25064b50a 100644
--- a/tools/lib/lockdep/tests/unlock_balance.c
+++ b/tools/lib/lockdep/tests/unlock_balance.c
@@ -10,4 +10,6 @@ void main(void)
 	pthread_mutex_lock(&a);
 	pthread_mutex_unlock(&a);
 	pthread_mutex_unlock(&a);
+
+	pthread_mutex_destroy(&a);
 }
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 08/27] locking/lockdep: Declare local symbols static
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (6 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 07/27] lockdep tests: Test the lockdep_reset_lock() implementation Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 09/27] locking/lockdep: Inline __lockdep_init_map() Bart Van Assche
                   ` (19 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Declare the lock_classes array static such that sparse no longer
complains about a missing declaration for it.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1efada2dd9dd..46b67b7467b4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -138,7 +138,7 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
  * get freed - this significantly simplifies the debugging code.
  */
 unsigned long nr_lock_classes;
-struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+static struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
 
 static inline struct lock_class *hlock_class(struct held_lock *hlock)
 {
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 09/27] locking/lockdep: Inline __lockdep_init_map()
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (7 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 08/27] locking/lockdep: Declare local symbols static Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 10/27] locking/lockdep: Introduce lock_class_cache_is_registered() Bart Van Assche
                   ` (18 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Since the function __lockdep_init_map() only has one caller, inline it
into its caller. This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 46b67b7467b4..58205b6ac5ed 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3088,7 +3088,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 /*
  * Initialize a lock instance's lock-class mapping info:
  */
-static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
+void lockdep_init_map(struct lockdep_map *lock, const char *name,
 		      struct lock_class_key *key, int subclass)
 {
 	int i;
@@ -3144,12 +3144,6 @@ static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
 		raw_local_irq_restore(flags);
 	}
 }
-
-void lockdep_init_map(struct lockdep_map *lock, const char *name,
-		      struct lock_class_key *key, int subclass)
-{
-	__lockdep_init_map(lock, name, key, subclass);
-}
 EXPORT_SYMBOL_GPL(lockdep_init_map);
 
 struct lock_class_key __lockdep_no_validate__;
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 10/27] locking/lockdep: Introduce lock_class_cache_is_registered()
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (8 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 09/27] locking/lockdep: Inline __lockdep_init_map() Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 11/27] timekeeping: Assign a name to tk_core.seq.dep_map Bart Van Assche
                   ` (17 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

This patch does not change any functionality but makes the
lockdep_reset_lock() function easier to read.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 50 ++++++++++++++++++++++++----------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 58205b6ac5ed..8177a8de1e1d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4198,13 +4198,33 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	 */
 }
 
-void lockdep_reset_lock(struct lockdep_map *lock)
+/*
+ * Check whether any element of the @lock->class_cache[] array refers to a
+ * registered lock class. The caller must hold either the graph lock or the
+ * RCU read lock.
+ */
+static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 {
 	struct lock_class *class;
 	struct hlist_head *head;
-	unsigned long flags;
 	int i, j;
-	int locked;
+
+	for (i = 0; i < CLASSHASH_SIZE; i++) {
+		head = classhash_table + i;
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
+			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
+				if (lock->class_cache[j] == class)
+					return true;
+		}
+	}
+	return false;
+}
+
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+	struct lock_class *class;
+	unsigned long flags;
+	int j, locked;
 
 	raw_local_irq_save(flags);
 
@@ -4224,24 +4244,14 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	 * be gone.
 	 */
 	locked = graph_lock();
-	for (i = 0; i < CLASSHASH_SIZE; i++) {
-		head = classhash_table + i;
-		hlist_for_each_entry_rcu(class, head, hash_entry) {
-			int match = 0;
-
-			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
-				match |= class == lock->class_cache[j];
-
-			if (unlikely(match)) {
-				if (debug_locks_off_graph_unlock()) {
-					/*
-					 * We all just reset everything, how did it match?
-					 */
-					WARN_ON(1);
-				}
-				goto out_restore;
-			}
+	if (unlikely(lock_class_cache_is_registered(lock))) {
+		if (debug_locks_off_graph_unlock()) {
+			/*
+			 * We all just reset everything, how did it match?
+			 */
+			WARN_ON(1);
 		}
+		goto out_restore;
 	}
 	if (locked)
 		graph_unlock();
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 11/27] timekeeping: Assign a name to tk_core.seq.dep_map
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (9 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 10/27] locking/lockdep: Introduce lock_class_cache_is_registered() Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-12-05 10:03   ` [tip:timers/core] timekeeping: Use proper seqcount initializer tip-bot for Bart Van Assche
  2018-11-28 23:43 ` [PATCH 12/27] net/core: Assign a name to devnet_rename_seq.dep_map Bart Van Assche
                   ` (16 subsequent siblings)
  27 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche,
	Thomas Gleixner

Initialize tk_core.seq with SEQCNT_ZERO() such that its dep_map gets a
name. This makes lockdep reports that refer to tk_core.seq more
informative.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/time/timekeeping.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 2d110c948805..6c9493495538 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -50,7 +50,9 @@ enum timekeeping_adv_mode {
 static struct {
 	seqcount_t		seq;
 	struct timekeeper	timekeeper;
-} tk_core ____cacheline_aligned;
+} tk_core ____cacheline_aligned = {
+	.seq = SEQCNT_ZERO(tk_core.seq),
+};
 
 static DEFINE_RAW_SPINLOCK(timekeeper_lock);
 static struct timekeeper shadow_timekeeper;
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 12/27] net/core: Assign a name to devnet_rename_seq.dep_map
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (10 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 11/27] timekeeping: Assign a name to tk_core.seq.dep_map Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-29  0:45   ` David Miller
  2018-11-28 23:43 ` [PATCH 13/27] locking/lockdep: Complain if a lock object has no name Bart Van Assche
                   ` (15 subsequent siblings)
  27 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche,
	David S . Miller

Initialize devnet_rename_seq with SEQCNT_ZERO() such that its dep_map
gets a name. This makes lockdep reports about devnet_rename_seq more
informative.

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 net/core/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index ddc551f24ba2..8c109a1624ba 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -194,7 +194,7 @@ static DEFINE_SPINLOCK(napi_hash_lock);
 static unsigned int napi_gen_id = NR_CPUS;
 static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8);
 
-static seqcount_t devnet_rename_seq;
+static seqcount_t devnet_rename_seq = SEQCNT_ZERO(devnet_rename_seq);
 
 static inline void dev_base_seq_inc(struct net *net)
 {
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 13/27] locking/lockdep: Complain if a lock object has no name
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (11 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 12/27] net/core: Assign a name to devnet_rename_seq.dep_map Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 14/27] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement Bart Van Assche
                   ` (14 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Lockdep reports about lock objects that do not have a name are hard to
interpret. Hence complain if no name has been assigned to a lock object.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 8177a8de1e1d..41fd3b279220 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -743,6 +743,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	struct hlist_head *hash_head;
 	struct lock_class *class;
 
+	WARN_ON_ONCE(!lock->name);
 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
 	class = look_up_lock_class(lock, subclass);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 14/27] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (12 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 13/27] locking/lockdep: Complain if a lock object has no name Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 15/27] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe Bart Van Assche
                   ` (13 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Initializing a list entry just before it is passed to list_add_tail_rcu()
is not necessary because list_add_tail_rcu() overwrites the next and prev
pointers anyway. Hence remove the INIT_LIST_HEAD() statement.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 41fd3b279220..83ffcc7f1ced 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -790,7 +790,6 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	class->key = key;
 	class->name = lock->name;
 	class->subclass = subclass;
-	INIT_LIST_HEAD(&class->lock_entry);
 	INIT_LIST_HEAD(&class->locks_before);
 	INIT_LIST_HEAD(&class->locks_after);
 	class->name_version = count_matching_names(class);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 15/27] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (13 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 14/27] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 16/27] locking/lockdep: Stop using RCU primitives to access all_lock_classes Bart Van Assche
                   ` (12 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Since zap_class() removes items from the all_lock_classes list and the
classhash_table, protect all zap_class() calls with the graph lock.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 83ffcc7f1ced..ab6abe52e974 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4227,6 +4227,7 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	int j, locked;
 
 	raw_local_irq_save(flags);
+	locked = graph_lock();
 
 	/*
 	 * Remove all classes this lock might have:
@@ -4243,7 +4244,6 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	 * Debug check: in the end all mapped classes should
 	 * be gone.
 	 */
-	locked = graph_lock();
 	if (unlikely(lock_class_cache_is_registered(lock))) {
 		if (debug_locks_off_graph_unlock()) {
 			/*
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 16/27] locking/lockdep: Stop using RCU primitives to access all_lock_classes
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (14 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 15/27] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 17/27] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
                   ` (11 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Due to the previous patch, all code that accesses the 'all_lock_classes'
list holds the graph lock. Hence use regular list primitives instead of
their RCU variants to access this list.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ab6abe52e974..96fc8e92c2a6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -626,7 +626,8 @@ static int static_obj(void *obj)
 
 /*
  * To make lock name printouts unique, we calculate a unique
- * class->name_version generation counter:
+ * class->name_version generation counter. The caller must hold the graph
+ * lock.
  */
 static int count_matching_names(struct lock_class *new_class)
 {
@@ -636,7 +637,7 @@ static int count_matching_names(struct lock_class *new_class)
 	if (!new_class->name)
 		return 0;
 
-	list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
+	list_for_each_entry(class, &all_lock_classes, lock_entry) {
 		if (new_class->key - new_class->subclass == class->key)
 			return class->name_version;
 		if (class->name && !strcmp(class->name, new_class->name))
@@ -801,7 +802,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	/*
 	 * Add it to the global list of classes:
 	 */
-	list_add_tail_rcu(&class->lock_entry, &all_lock_classes);
+	list_add_tail(&class->lock_entry, &all_lock_classes);
 
 	if (verbose(class)) {
 		graph_unlock();
@@ -4120,6 +4121,9 @@ void lockdep_reset(void)
 	raw_local_irq_restore(flags);
 }
 
+/*
+ * Remove all references to a lock class. The caller must hold the graph lock.
+ */
 static void zap_class(struct lock_class *class)
 {
 	int i;
@@ -4136,7 +4140,7 @@ static void zap_class(struct lock_class *class)
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
 	hlist_del_rcu(&class->hash_entry);
-	list_del_rcu(&class->lock_entry);
+	list_del(&class->lock_entry);
 
 	RCU_INIT_POINTER(class->key, NULL);
 	RCU_INIT_POINTER(class->name, NULL);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 17/27] locking/lockdep: Make zap_class() remove all matching lock order entries
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (15 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 16/27] locking/lockdep: Stop using RCU primitives to access all_lock_classes Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 18/27] locking/lockdep: Reorder struct lock_class members Bart Van Assche
                   ` (10 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Make sure that all entries that refer to a class are removed from the
list_entries[] array when a kernel module is unloaded.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |  1 +
 kernel/locking/lockdep.c | 17 +++++++++++------
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1fd82ff99c65..6d0f8d1c2bee 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -180,6 +180,7 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
 struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
+	struct lock_class		*links_to;
 	struct stack_trace		trace;
 	int				distance;
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 96fc8e92c2a6..fc10302d34fd 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -857,7 +857,8 @@ static struct lock_list *alloc_list_entry(void)
 /*
  * Add a new dependency to the head of the list:
  */
-static int add_lock_to_list(struct lock_class *this, struct list_head *head,
+static int add_lock_to_list(struct lock_class *this,
+			    struct lock_class *links_to, struct list_head *head,
 			    unsigned long ip, int distance,
 			    struct stack_trace *trace)
 {
@@ -870,7 +871,9 @@ static int add_lock_to_list(struct lock_class *this, struct list_head *head,
 	if (!entry)
 		return 0;
 
+	WARN_ON_ONCE(this == links_to);
 	entry->class = this;
+	entry->links_to = links_to;
 	entry->distance = distance;
 	entry->trace = *trace;
 	/*
@@ -1916,14 +1919,14 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 * Ok, all validations passed, add the new lock
 	 * to the previous lock's dependency list:
 	 */
-	ret = add_lock_to_list(hlock_class(next),
+	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
 			       &hlock_class(prev)->locks_after,
 			       next->acquire_ip, distance, trace);
 
 	if (!ret)
 		return 0;
 
-	ret = add_lock_to_list(hlock_class(prev),
+	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
 			       &hlock_class(next)->locks_before,
 			       next->acquire_ip, distance, trace);
 	if (!ret)
@@ -4126,15 +4129,17 @@ void lockdep_reset(void)
  */
 static void zap_class(struct lock_class *class)
 {
+	struct lock_list *entry;
 	int i;
 
 	/*
 	 * Remove all dependencies this lock is
 	 * involved in:
 	 */
-	for (i = 0; i < nr_list_entries; i++) {
-		if (list_entries[i].class == class)
-			list_del_rcu(&list_entries[i].entry);
+	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+		if (entry->class != class && entry->links_to != class)
+			continue;
+		list_del_rcu(&entry->entry);
 	}
 	/*
 	 * Unhash the class and remove it from the all_lock_classes list:
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 18/27] locking/lockdep: Reorder struct lock_class members
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (16 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 17/27] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 19/27] locking/lockdep: Retain the class key and name while freeing a lock class Bart Van Assche
                   ` (9 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

This patch does not change any functionality but makes the patch that
frees lock classes that are no longer in use easier to read.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 6d0f8d1c2bee..9421f028c26c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -76,6 +76,13 @@ struct lock_class {
 	 */
 	struct list_head		lock_entry;
 
+	/*
+	 * These fields represent a directed graph of lock dependencies,
+	 * to every node we attach a list of "forward" and a list of
+	 * "backward" graph nodes.
+	 */
+	struct list_head		locks_after, locks_before;
+
 	struct lockdep_subclass_key	*key;
 	unsigned int			subclass;
 	unsigned int			dep_gen_id;
@@ -86,13 +93,6 @@ struct lock_class {
 	unsigned long			usage_mask;
 	struct stack_trace		usage_traces[XXX_LOCK_USAGE_STATES];
 
-	/*
-	 * These fields represent a directed graph of lock dependencies,
-	 * to every node we attach a list of "forward" and a list of
-	 * "backward" graph nodes.
-	 */
-	struct list_head		locks_after, locks_before;
-
 	/*
 	 * Generation counter, when doing certain classes of graph walking,
 	 * to ensure that we check one node only once:
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 19/27] locking/lockdep: Retain the class key and name while freeing a lock class
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (17 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 18/27] locking/lockdep: Reorder struct lock_class members Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 20/27] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
                   ` (8 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

The next patch in this series uses the class name in code that
detects lock class use-after-free. Hence retain the class name for
lock classes that are being freed.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index fc10302d34fd..4610f3c4f3db 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4145,10 +4145,8 @@ static void zap_class(struct lock_class *class)
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
 	hlist_del_rcu(&class->hash_entry);
+	class->hash_entry.pprev = NULL;
 	list_del(&class->lock_entry);
-
-	RCU_INIT_POINTER(class->key, NULL);
-	RCU_INIT_POINTER(class->name, NULL);
 }
 
 static inline int within(const void *addr, void *start, unsigned long size)
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


* [PATCH 20/27] locking/lockdep: Free lock classes that are no longer in use
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (18 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 19/27] locking/lockdep: Retain the class key and name while freeing a lock class Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-29 10:37   ` Peter Zijlstra
  2018-11-28 23:43 ` [PATCH 21/27] locking/lockdep: Rename lock_list.entry into lock_list.lock_order_entry Bart Van Assche
                   ` (7 subsequent siblings)
  27 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Instead of leaving lock classes that are no longer in use in the
lock_classes[] array, make the corresponding array entries available for
reuse. Maintain a linked list of free lock classes with list head
'free_lock_classes'. Initialize that list from inside register_lock_class()
instead of from inside lockdep_init() because register_lock_class() can
be called before lockdep_init() has run. Only add freed lock classes to
the free_lock_classes list after an RCU grace period, to avoid reusing a
lock_classes[] element while an RCU reader is still accessing it. A
minimal sketch of that last idea follows below.
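
As a side note, here is a minimal sketch of the "reuse only after a grace
period" idea. It is not taken from this patch: the function names and the
intermediate list are hypothetical, and it assumes a context in which
sleeping (synchronize_rcu()) is allowed; the patch itself defers the reuse
in its own way:

static LIST_HEAD(zapped_lock_classes);	/* hypothetical intermediate list */

/* Move a class off all_lock_classes without making it reusable yet. */
static void park_zapped_class(struct lock_class *class)
{
	list_move_tail(&class->lock_entry, &zapped_lock_classes);
}

/* After a grace period no RCU reader can still see the parked classes. */
static void reclaim_zapped_classes(void)
{
	synchronize_rcu();
	list_splice_tail_init(&zapped_lock_classes, &free_lock_classes);
}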

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |   9 +-
 kernel/locking/lockdep.c | 233 +++++++++++++++++++++++++++++++--------
 2 files changed, 195 insertions(+), 47 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 9421f028c26c..02a1469c46e1 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -63,7 +63,8 @@ extern struct lock_class_key __lockdep_no_validate__;
 #define LOCKSTAT_POINTS		4
 
 /*
- * The lock-class itself:
+ * The lock-class itself. The order of the structure members matters.
+ * reinit_class() zeroes the key member and all subsequent members.
  */
 struct lock_class {
 	/*
@@ -72,7 +73,9 @@ struct lock_class {
 	struct hlist_node		hash_entry;
 
 	/*
-	 * global list of all lock-classes:
+	 * Entry in all_lock_classes when in use. Entry in free_lock_classes
+	 * when not in use. Instances that are being freed are briefly on
+	 * neither list.
 	 */
 	struct list_head		lock_entry;
 
@@ -106,7 +109,7 @@ struct lock_class {
 	unsigned long			contention_point[LOCKSTAT_POINTS];
 	unsigned long			contending_point[LOCKSTAT_POINTS];
 #endif
-};
+} __no_randomize_layout;
 
 #ifdef CONFIG_LOCK_STAT
 struct lock_time {
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4610f3c4f3db..53d8daa8d0dc 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -134,11 +134,14 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
 /*
  * All data structures here are protected by the global debug_lock.
  *
- * Mutex key structs only get allocated, once during bootup, and never
- * get freed - this significantly simplifies the debugging code.
+ * nr_lock_classes is the number of elements of lock_classes[] that is in use.
+ * free_lock_classes points at the first free element. These elements are
+ * linked together by the lock_entry member in struct lock_class.
  */
 unsigned long nr_lock_classes;
 static struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+static LIST_HEAD(free_lock_classes);
+static bool initialization_happened;
 
 static inline struct lock_class *hlock_class(struct held_lock *hlock)
 {
@@ -274,9 +277,8 @@ static inline void lock_release_holdtime(struct held_lock *hlock)
 #endif
 
 /*
- * We keep a global list of all lock classes. The list only grows,
- * never shrinks. The list is only accessed with the lockdep
- * spinlock lock held.
+ * We keep a global list of all lock classes. The list is only accessed with
+ * the lockdep spinlock lock held.
  */
 LIST_HEAD(all_lock_classes);
 
@@ -732,6 +734,17 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	return true;
 }
 
+static void init_lists(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		list_add_tail(&lock_classes[i].lock_entry, &free_lock_classes);
+		INIT_LIST_HEAD(&lock_classes[i].locks_after);
+		INIT_LIST_HEAD(&lock_classes[i].locks_before);
+	}
+}
+
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
@@ -748,8 +761,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
 	class = look_up_lock_class(lock, subclass);
-	if (likely(class))
+	if (likely(class)) {
+		WARN_ON_ONCE(!class->hash_entry.pprev);
 		goto out_set_class_cache;
+	}
 
 	if (!lock->key) {
 		if (!assign_lock_key(lock))
@@ -773,11 +788,14 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 			goto out_unlock_set;
 	}
 
-	/*
-	 * Allocate a new key from the static array, and add it to
-	 * the hash:
-	 */
-	if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
+	/* Allocate a new lock class and add it to the hash. */
+	if (unlikely(!initialization_happened)) {
+		initialization_happened = true;
+		init_lists();
+	}
+	class = list_first_entry_or_null(&free_lock_classes, typeof(*class),
+					 lock_entry);
+	if (!class) {
 		if (!debug_locks_off_graph_unlock()) {
 			return NULL;
 		}
@@ -786,13 +804,14 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 		dump_stack();
 		return NULL;
 	}
-	class = lock_classes + nr_lock_classes++;
+	list_del(&class->lock_entry);
+	nr_lock_classes++;
 	debug_atomic_inc(nr_unused_locks);
 	class->key = key;
 	class->name = lock->name;
 	class->subclass = subclass;
-	INIT_LIST_HEAD(&class->locks_before);
-	INIT_LIST_HEAD(&class->locks_after);
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
 	class->name_version = count_matching_names(class);
 	/*
 	 * We use RCU's safe list-add method to make
@@ -1843,6 +1862,13 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	struct lock_list this;
 	int ret;
 
+	if (WARN_ON_ONCE(!hlock_class(prev)->hash_entry.pprev) ||
+	    WARN_ONCE(!hlock_class(next)->hash_entry.pprev,
+		      KERN_INFO "Detected use-after-free of lock class %s\n",
+		      hlock_class(next)->name)) {
+		return 2;
+	}
+
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
 	 * create a circular dependency in the graph. (We do this by
@@ -2234,17 +2260,14 @@ static inline int add_chain_cache(struct task_struct *curr,
 }
 
 /*
- * Look up a dependency chain.
+ * Look up a dependency chain. Must be called with either the graph lock or
+ * the RCU read lock held.
  */
 static inline struct lock_chain *lookup_chain_cache(u64 chain_key)
 {
 	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 
-	/*
-	 * We can walk it lock-free, because entries only get added
-	 * to the hash:
-	 */
 	hlist_for_each_entry_rcu(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 			debug_atomic_inc(chain_lookup_hits);
@@ -3225,6 +3248,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		class = register_lock_class(lock, subclass, 0);
 		if (!class)
 			return 0;
+		WARN_ON_ONCE(!class->hash_entry.pprev);
+	} else {
+		WARN_ON_ONCE(!class->hash_entry.pprev);
 	}
 
 	debug_class_ops_inc(class);
@@ -3336,6 +3362,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	if (nest_lock && !__lock_is_held(nest_lock, -1))
 		return print_lock_nested_lock_not_held(curr, hlock, ip);
 
+	WARN_ON_ONCE(depth && !hlock_class(hlock - 1)->hash_entry.pprev);
+	WARN_ON_ONCE(!hlock_class(hlock)->hash_entry.pprev);
+
 	if (!validate_chain(curr, lock, hlock, chain_head, chain_key))
 		return 0;
 
@@ -4124,11 +4153,87 @@ void lockdep_reset(void)
 	raw_local_irq_restore(flags);
 }
 
+/* Must be called with the graph lock held. */
+static void remove_class_from_lock_chain(struct lock_chain *chain,
+					 struct lock_class *class)
+{
+	u64 chain_key;
+	int i;
+
+	for (i = chain->base; i < chain->base + chain->depth; i++) {
+		if (chain_hlocks[i] != class - lock_classes)
+			continue;
+		if (--chain->depth == 0)
+			break;
+		memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
+			(chain->base + chain->depth - i) *
+			sizeof(chain_hlocks[0]));
+		/*
+		 * Each lock class occurs at most once in a
+		 * lock chain so once we found a match we can
+		 * break out of this loop.
+		 */
+		break;
+	}
+	/*
+	 * Note: calling hlist_del_rcu() from inside a
+	 * hlist_for_each_entry_rcu() loop is safe.
+	 */
+	if (chain->depth == 0) {
+		/* To do: decrease chain count. See also inc_chains(). */
+		hlist_del_rcu(&chain->entry);
+		return;
+	}
+	chain_key = 0;
+	for (i = chain->base; i < chain->base + chain->depth; i++)
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+	if (chain->chain_key == chain_key)
+		return;
+	hlist_del_rcu(&chain->entry);
+	chain->chain_key = chain_key;
+	hlist_add_head_rcu(&chain->entry, chainhashentry(chain_key));
+}
+
+/* Must be called with the graph lock held. */
+static void remove_class_from_lock_chains(struct lock_class *class)
+{
+	struct lock_chain *chain;
+	struct hlist_head *head;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
+		head = chainhash_table + i;
+		hlist_for_each_entry_rcu(chain, head, entry) {
+			remove_class_from_lock_chain(chain, class);
+		}
+	}
+}
+
+/* Must be called with the graph lock held. */
+static void check_free_class(struct list_head *zapped_classes,
+			     struct lock_class *class)
+{
+	/*
+	 * If the list_del_rcu(&entry->entry) call made both lock order lists
+	 * empty, remove @class from the all_lock_classes list and add it to
+	 * the zapped_classes list.
+	 */
+	if (class->hash_entry.pprev &&
+	    list_empty(&class->locks_after) &&
+	    list_empty(&class->locks_before)) {
+		list_move_tail(&class->lock_entry, zapped_classes);
+		hlist_del_rcu(&class->hash_entry);
+		class->hash_entry.pprev = NULL;
+	}
+}
+
 /*
  * Remove all references to a lock class. The caller must hold the graph lock.
  */
-static void zap_class(struct lock_class *class)
+static void zap_class(struct list_head *zapped_classes,
+		      struct lock_class *class)
 {
+	struct lock_class *links_to;
 	struct lock_list *entry;
 	int i;
 
@@ -4139,14 +4244,33 @@ static void zap_class(struct lock_class *class)
 	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
 		if (entry->class != class && entry->links_to != class)
 			continue;
+		links_to = entry->links_to;
+		WARN_ON_ONCE(entry->class == links_to);
 		list_del_rcu(&entry->entry);
+		entry->class = NULL;
+		entry->links_to = NULL;
+		check_free_class(zapped_classes, class);
 	}
-	/*
-	 * Unhash the class and remove it from the all_lock_classes list:
-	 */
-	hlist_del_rcu(&class->hash_entry);
-	class->hash_entry.pprev = NULL;
-	list_del(&class->lock_entry);
+	check_free_class(zapped_classes, class);
+	WARN_ONCE(class->hash_entry.pprev,
+		  KERN_INFO "%s() failed for class %s\n", __func__,
+		  class->name);
+
+	remove_class_from_lock_chains(class);
+}
+
+static void reinit_class(struct lock_class *class)
+{
+	void *const p = class;
+	const unsigned int offset = offsetof(struct lock_class, key);
+
+	WARN_ON_ONCE(!class->lock_entry.next);
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
+	memset(p + offset, 0, sizeof(*class) - offset);
+	WARN_ON_ONCE(!class->lock_entry.next);
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
 }
 
 static inline int within(const void *addr, void *start, unsigned long size)
@@ -4154,6 +4278,34 @@ static inline int within(const void *addr, void *start, unsigned long size)
 	return addr >= start && addr < start + size;
 }
 
+static void free_zapped_classes(struct list_head *zapped_classes)
+{
+	struct lock_class *class;
+	unsigned long flags;
+	int locked;
+
+	if (list_empty(zapped_classes))
+		return;
+
+	/*
+	 * Wait until look_up_lock_class() has finished accessing the
+	 * list_entries[] elements we are about to free. sync_sched() is
+	 * sufficient because look_up_lock_class() is called with IRQs off.
+	 */
+	synchronize_sched();
+
+	raw_local_irq_save(flags);
+	locked = graph_lock();
+	list_for_each_entry(class, zapped_classes, lock_entry) {
+		reinit_class(class);
+		nr_lock_classes--;
+	}
+	list_splice(zapped_classes, &free_lock_classes);
+	if (locked)
+		graph_unlock();
+	raw_local_irq_restore(flags);
+}
+
 /*
  * Used in module.c to remove lock classes from memory that is going to be
  * freed; and possibly re-used by other modules.
@@ -4166,6 +4318,7 @@ void lockdep_free_key_range(void *start, unsigned long size)
 {
 	struct lock_class *class;
 	struct hlist_head *head;
+	LIST_HEAD(zapped_classes);
 	unsigned long flags;
 	int i;
 	int locked;
@@ -4179,10 +4332,11 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
 		hlist_for_each_entry_rcu(class, head, hash_entry) {
-			if (within(class->key, start, size))
-				zap_class(class);
-			else if (within(class->name, start, size))
-				zap_class(class);
+			if (!class->hash_entry.pprev ||
+			    (!within(class->key, start, size) &&
+			     !within(class->name, start, size)))
+				continue;
+			zap_class(&zapped_classes, class);
 		}
 	}
 
@@ -4190,19 +4344,7 @@ void lockdep_free_key_range(void *start, unsigned long size)
 		graph_unlock();
 	raw_local_irq_restore(flags);
 
-	/*
-	 * Wait for any possible iterators from look_up_lock_class() to pass
-	 * before continuing to free the memory they refer to.
-	 *
-	 * sync_sched() is sufficient because the read-side is IRQ disable.
-	 */
-	synchronize_sched();
-
-	/*
-	 * XXX at this point we could return the resources to the pool;
-	 * instead we leak them. We would need to change to bitmap allocators
-	 * instead of the linear allocators we have now.
-	 */
+	free_zapped_classes(&zapped_classes);
 }
 
 /*
@@ -4230,6 +4372,7 @@ static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 void lockdep_reset_lock(struct lockdep_map *lock)
 {
 	struct lock_class *class;
+	LIST_HEAD(zapped_classes);
 	unsigned long flags;
 	int j, locked;
 
@@ -4245,7 +4388,7 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 		 */
 		class = look_up_lock_class(lock, j);
 		if (class)
-			zap_class(class);
+			zap_class(&zapped_classes, class);
 	}
 	/*
 	 * Debug check: in the end all mapped classes should
@@ -4265,6 +4408,8 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 
 out_restore:
 	raw_local_irq_restore(flags);
+
+	free_zapped_classes(&zapped_classes);
 }
 
 void __init lockdep_init(void)
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 21/27] locking/lockdep: Rename lock_list.entry into lock_list.lock_order_entry
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (19 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 20/27] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use Bart Van Assche
                   ` (6 subsequent siblings)
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

The next patch in this series will add a new list entry member to
struct lock_list. Rename the existing "entry" member to keep the
lockdep source code readable.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h       | 3 ++-
 kernel/locking/lockdep.c      | 9 +++++----
 kernel/locking/lockdep_proc.c | 2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 02a1469c46e1..43327a1dd488 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -181,7 +181,8 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
  * We only grow the list, never remove from it:
  */
 struct lock_list {
-	struct list_head		entry;
+	/* Entry in locks_after or locks_before. */
+	struct list_head		lock_order_entry;
 	struct lock_class		*class;
 	struct lock_class		*links_to;
 	struct stack_trace		trace;
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 53d8daa8d0dc..038377d67410 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -900,7 +900,7 @@ static int add_lock_to_list(struct lock_class *this,
 	 * iteration is under RCU-sched; see look_up_lock_class() and
 	 * lockdep_free_key_range().
 	 */
-	list_add_tail_rcu(&entry->entry, head);
+	list_add_tail_rcu(&entry->lock_order_entry, head);
 
 	return 1;
 }
@@ -1051,7 +1051,7 @@ static int __bfs(struct lock_list *source_entry,
 
 		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
-		list_for_each_entry_rcu(entry, head, entry) {
+		list_for_each_entry_rcu(entry, head, lock_order_entry) {
 			if (!lock_accessed(entry)) {
 				unsigned int cq_depth;
 				mark_lock_accessed(entry, lock);
@@ -1916,7 +1916,8 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 *  chains - the second one will be new, but L1 already has
 	 *  L2 added to its dependency list, due to the first chain.)
 	 */
-	list_for_each_entry(entry, &hlock_class(prev)->locks_after, entry) {
+	list_for_each_entry(entry, &hlock_class(prev)->locks_after,
+			    lock_order_entry) {
 		if (entry->class == hlock_class(next)) {
 			if (distance == 1)
 				entry->distance = 1;
@@ -4246,7 +4247,7 @@ static void zap_class(struct list_head *zapped_classes,
 			continue;
 		links_to = entry->links_to;
 		WARN_ON_ONCE(entry->class == links_to);
-		list_del_rcu(&entry->entry);
+		list_del_rcu(&entry->lock_order_entry);
 		entry->class = NULL;
 		entry->links_to = NULL;
 		check_free_class(zapped_classes, class);
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 3d31f9b0059e..17460b412927 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -82,7 +82,7 @@ static int l_show(struct seq_file *m, void *v)
 	print_name(m, class);
 	seq_puts(m, "\n");
 
-	list_for_each_entry(entry, &class->locks_after, entry) {
+	list_for_each_entry(entry, &class->locks_after, lock_order_entry) {
 		if (entry->distance == 1) {
 			seq_printf(m, " -> [%p] ", entry->class->key);
 			print_name(m, entry->class);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (20 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 21/27] locking/lockdep: Rename lock_list.entry into lock_list.lock_order_entry Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-29 10:49   ` Peter Zijlstra
  2018-11-28 23:43 ` [PATCH 23/27] locking/lockdep: Check data structure consistency Bart Van Assche
                   ` (5 subsequent siblings)
  27 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Instead of abandoning elements of list_entries[] that are no longer in
use, make alloc_list_entry() reuse array elements that have been freed.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |  5 +++++
 kernel/locking/lockdep.c | 23 ++++++++++++++++-------
 2 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 43327a1dd488..01e55fca7c2c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -183,6 +183,11 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
 struct lock_list {
 	/* Entry in locks_after or locks_before. */
 	struct list_head		lock_order_entry;
+	/*
+	 * Entry in all_list_entries when in use and entry in
+	 * free_list_entries when not in use.
+	 */
+	struct list_head		alloc_entry;
 	struct lock_class		*class;
 	struct lock_class		*links_to;
 	struct stack_trace		trace;
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 038377d67410..288a2f6fd0ef 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -130,6 +130,8 @@ static inline int debug_locks_off_graph_unlock(void)
 
 unsigned long nr_list_entries;
 static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
+static LIST_HEAD(all_list_entries);
+static LIST_HEAD(free_list_entries);
 
 /*
  * All data structures here are protected by the global debug_lock.
@@ -743,6 +745,9 @@ static void init_lists(void)
 		INIT_LIST_HEAD(&lock_classes[i].locks_after);
 		INIT_LIST_HEAD(&lock_classes[i].locks_before);
 	}
+
+	for (i = 0; i < ARRAY_SIZE(list_entries); i++)
+		list_add_tail(&list_entries[i].alloc_entry, &free_list_entries);
 }
 
 /*
@@ -862,7 +867,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
  */
 static struct lock_list *alloc_list_entry(void)
 {
-	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
+	struct lock_list *e = list_first_entry_or_null(&free_list_entries,
+						       typeof(*e), alloc_entry);
+
+	if (!e) {
 		if (!debug_locks_off_graph_unlock())
 			return NULL;
 
@@ -870,7 +878,8 @@ static struct lock_list *alloc_list_entry(void)
 		dump_stack();
 		return NULL;
 	}
-	return list_entries + nr_list_entries++;
+	list_move_tail(&e->alloc_entry, &all_list_entries);
+	return e;
 }
 
 /*
@@ -975,7 +984,7 @@ static inline void mark_lock_accessed(struct lock_list *lock,
 	unsigned long nr;
 
 	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
 	lock->parent = parent;
 	lock->class->dep_gen_id = lockdep_dependency_gen_id;
 }
@@ -985,7 +994,7 @@ static inline unsigned long lock_accessed(struct lock_list *lock)
 	unsigned long nr;
 
 	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
 	return lock->class->dep_gen_id == lockdep_dependency_gen_id;
 }
 
@@ -4235,19 +4244,19 @@ static void zap_class(struct list_head *zapped_classes,
 		      struct lock_class *class)
 {
 	struct lock_class *links_to;
-	struct lock_list *entry;
-	int i;
+	struct lock_list *entry, *tmp;
 
 	/*
 	 * Remove all dependencies this lock is
 	 * involved in:
 	 */
-	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
 		if (entry->class != class && entry->links_to != class)
 			continue;
 		links_to = entry->links_to;
 		WARN_ON_ONCE(entry->class == links_to);
 		list_del_rcu(&entry->lock_order_entry);
+		list_move(&entry->alloc_entry, &free_list_entries);
 		entry->class = NULL;
 		entry->links_to = NULL;
 		check_free_class(zapped_classes, class);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 23/27] locking/lockdep: Check data structure consistency
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (21 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-29 12:30   ` Peter Zijlstra
  2018-11-28 23:43 ` [PATCH 24/27] locking/lockdep: Introduce __lockdep_free_key_range() Bart Van Assche
                   ` (4 subsequent siblings)
  27 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Debugging lockdep data structure inconsistencies is challenging. Add
code, disabled by default behind the check_data_structure_consistency
flag, that verifies the consistency of the lockdep data structures at
runtime.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 142 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 288a2f6fd0ef..141bb0662ff5 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -72,6 +72,8 @@ module_param(lock_stat, int, 0644);
 #define lock_stat 0
 #endif
 
+static bool check_data_structure_consistency;
+
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
  *               class/list/hash allocators.
@@ -736,6 +738,135 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	return true;
 }
 
+/* Check whether element @e occurs in list @h */
+static bool in_list(struct list_head *e, struct list_head *h)
+{
+	struct list_head *f;
+
+	list_for_each(f, h)
+		if (e == f)
+			return true;
+
+	return false;
+}
+
+/*
+ * Check whether entry @e occurs in any of the locks_after or locks_before
+ * lists.
+ */
+static bool in_any_class_list(struct list_head *e)
+{
+	struct lock_class *class;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		class = &lock_classes[i];
+		if (in_list(e, &class->locks_after) ||
+		    in_list(e, &class->locks_before))
+			return true;
+	}
+	return false;
+}
+
+static bool class_lock_list_valid(struct lock_class *c, struct list_head *h)
+{
+	struct lock_list *e;
+
+	list_for_each_entry(e, h, lock_order_entry) {
+		if (e->links_to != c) {
+			printk(KERN_INFO "class %s: mismatch for lock entry %ld; class %s <> %s",
+			       c->name ? : "(?)", e - list_entries,
+			       e->links_to && e->links_to->name ?
+			       e->links_to->name : "(?)",
+			       e->class && e->class->name ? e->class->name :
+			       "(?)");
+			return false;
+		}
+	}
+	return true;
+}
+
+static u16 chain_hlocks[];
+
+static bool check_lock_chain_key(struct lock_chain *chain)
+{
+	u64 chain_key = 0;
+	int i;
+
+	for (i = chain->base; i < chain->base + chain->depth; i++)
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+	/*
+	 * The 'unsigned long long' casts avoid that a compiler warning
+	 * is reported when building tools/lib/lockdep.
+	 */
+	if (chain->chain_key != chain_key)
+		printk(KERN_INFO "chain %lld: key %#llx <> %#llx\n",
+		       (unsigned long long)(chain - lock_chains),
+		       (unsigned long long)chain->chain_key,
+		       (unsigned long long)chain_key);
+	return chain->chain_key == chain_key;
+}
+
+static bool check_data_structures(void)
+{
+	struct lock_class *class;
+	struct lock_chain *chain;
+	struct hlist_head *head;
+	struct lock_list *e;
+	int i;
+
+	/*
+	 * Check whether all list entries that are in use occur in a class
+	 * lock list.
+	 */
+	list_for_each_entry(e, &all_list_entries, alloc_entry) {
+		if (!in_any_class_list(&e->lock_order_entry)) {
+			printk(KERN_INFO "list entry %ld is not in any class list; class %s <> %s\n",
+			       e - list_entries,
+			       e->class->name ? : "(?)",
+			       e->links_to->name ? : "(?)");
+			return false;
+		}
+	}
+
+	/*
+	 * Check whether all list entries that are not in use do not occur in
+	 * a class lock list.
+	 */
+	list_for_each_entry(e, &free_list_entries, alloc_entry) {
+		if (in_any_class_list(&e->lock_order_entry)) {
+			printk(KERN_INFO "list entry %ld occurs in a class list; class %s <> %s\n",
+			       e - list_entries,
+			       e->class && e->class->name ? e->class->name :
+			       "(?)",
+			       e->links_to && e->links_to->name ?
+			       e->links_to->name : "(?)");
+			return false;
+		}
+	}
+
+	/* Check whether all classes have valid lock lists. */
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		class = &lock_classes[i];
+		if (!class->locks_before.next)
+			continue;
+		if (!class_lock_list_valid(class, &class->locks_before))
+			return false;
+		if (!class_lock_list_valid(class, &class->locks_after))
+			return false;
+	}
+
+	/* Check the chain_key of all lock chains. */
+	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
+		head = chainhash_table + i;
+		hlist_for_each_entry_rcu(chain, head, entry)
+			if (!check_lock_chain_key(chain))
+				return false;
+	}
+
+	return true;
+}
+
 static void init_lists(void)
 {
 	int i;
@@ -4294,6 +4425,14 @@ static void free_zapped_classes(struct list_head *zapped_classes)
 	unsigned long flags;
 	int locked;
 
+	raw_local_irq_save(flags);
+	locked = graph_lock();
+	if (check_data_structure_consistency)
+		WARN_ON_ONCE(!check_data_structures());
+	if (locked)
+		graph_unlock();
+	raw_local_irq_restore(flags);
+
 	if (list_empty(zapped_classes))
 		return;
 
@@ -4314,6 +4453,9 @@ static void free_zapped_classes(struct list_head *zapped_classes)
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
+
+	if (check_data_structure_consistency)
+		WARN_ON_ONCE(!check_data_structures());
 }
 
 /*
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 24/27] locking/lockdep: Introduce __lockdep_free_key_range()
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (22 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 23/27] locking/lockdep: Check data structure consistency Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-29 10:00   ` Peter Zijlstra
  2018-11-28 23:43 ` [PATCH 25/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (3 subsequent siblings)
  27 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

This patch does not change any functionality but makes the next patch
in this series easier to read.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 141bb0662ff5..0e273731d028 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4459,18 +4459,16 @@ static void free_zapped_classes(struct list_head *zapped_classes)
 }
 
 /*
- * Used in module.c to remove lock classes from memory that is going to be
- * freed; and possibly re-used by other modules.
- *
- * We will have had one sync_sched() before getting here, so we're guaranteed
- * nobody will look up these exact classes -- they're properly dead but still
- * allocated.
+ * Remove all lock classes from the class hash table and from the
+ * all_lock_classes list whose key or name is in the address range
+ * [start, start + size). Move these lock classes to the
+ * @zapped_classes list.
  */
-void lockdep_free_key_range(void *start, unsigned long size)
+static void __lockdep_free_key_range(struct list_head *zapped_classes,
+				     void *start, unsigned long size)
 {
 	struct lock_class *class;
 	struct hlist_head *head;
-	LIST_HEAD(zapped_classes);
 	unsigned long flags;
 	int i;
 	int locked;
@@ -4478,9 +4476,8 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	raw_local_irq_save(flags);
 	locked = graph_lock();
 
-	/*
-	 * Unhash all classes that were created by this module:
-	 */
+	INIT_LIST_HEAD(zapped_classes);
+
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
 		hlist_for_each_entry_rcu(class, head, hash_entry) {
@@ -4488,14 +4485,28 @@ void lockdep_free_key_range(void *start, unsigned long size)
 			    (!within(class->key, start, size) &&
 			     !within(class->name, start, size)))
 				continue;
-			zap_class(&zapped_classes, class);
+			zap_class(zapped_classes, class);
 		}
 	}
 
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
+}
+
+/*
+ * Used in module.c to remove lock classes from memory that is going to be
+ * freed; and possibly re-used by other modules.
+ *
+ * We will have had one sync_sched() before getting here, so we're guaranteed
+ * nobody will look up these exact classes -- they're properly dead but still
+ * allocated.
+ */
+void lockdep_free_key_range(void *start, unsigned long size)
+{
+	LIST_HEAD(zapped_classes);
 
+	__lockdep_free_key_range(&zapped_classes, start, size);
 	free_zapped_classes(&zapped_classes);
 }
 
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 25/27] locking/lockdep: Add support for dynamic keys
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (23 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 24/27] locking/lockdep: Introduce __lockdep_free_key_range() Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-29 10:10   ` Peter Zijlstra
  2018-11-29 12:04   ` Peter Zijlstra
  2018-11-28 23:43 ` [PATCH 26/27] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
                   ` (2 subsequent siblings)
  27 siblings, 2 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

A shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. That forces certain lock objects
to share lock keys. Since lock dependency analysis groups lock objects
per key, sharing lock keys can cause false positive lockdep reports.
Make it possible to avoid such false positive reports by allowing lock
keys to be allocated dynamically. Require that dynamically allocated
lock keys are registered before use by calling lockdep_register_key().
Complain about attempts to register the same lock key pointer twice
without calling lockdep_unregister_key() between successive
registration calls.
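
A hypothetical usage example (not part of this series) of the new
interface for a structure that wants a per-instance lock class. struct
foo, foo_create() and foo_destroy() are made up for illustration, and
which grace-period primitive has to separate lockdep_unregister_key()
from freeing the key memory depends on the reader side:

	struct foo {
		spinlock_t lock;
		struct lock_class_key key;	/* lives as long as the object */
	};

	static struct foo *foo_create(void)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

		if (!f)
			return NULL;
		lockdep_register_key(&f->key);		/* register before first use */
		spin_lock_init(&f->lock);
		lockdep_set_class(&f->lock, &f->key);	/* per-instance lock class */
		return f;
	}

	static void foo_destroy(struct foo *f)
	{
		lockdep_unregister_key(&f->key);
		synchronize_rcu();	/* key memory must outlive a grace period */
		kfree(f);
	}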

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |  13 ++++-
 kernel/locking/lockdep.c | 121 ++++++++++++++++++++++++++++++++++++---
 2 files changed, 123 insertions(+), 11 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 01e55fca7c2c..bd6bfad66382 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -46,15 +46,19 @@ extern int lock_stat;
 #define NR_LOCKDEP_CACHING_CLASSES	2
 
 /*
- * Lock-classes are keyed via unique addresses, by embedding the
- * lockclass-key into the kernel (or module) .data section. (For
- * static locks we use the lock address itself as the key.)
+ * A lockdep key is associated with each lock object. For static locks we use
+ * the lock address itself as the key. Dynamically allocated lock objects can
+ * have a statically or dynamically allocated key. Dynamically allocated lock
+ * keys must be registered before being used and must be unregistered before
+ * the key memory is freed.
  */
 struct lockdep_subclass_key {
 	char __one_byte;
 } __attribute__ ((__packed__));
 
+/* hash_entry is used to keep track of dynamically allocated keys. */
 struct lock_class_key {
+	struct hlist_node		hash_entry;
 	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
 };
 
@@ -280,6 +284,9 @@ extern asmlinkage void lockdep_sys_exit(void);
 extern void lockdep_off(void);
 extern void lockdep_on(void);
 
+extern void lockdep_register_key(struct lock_class_key *key);
+extern void lockdep_unregister_key(struct lock_class_key *key);
+
 /*
  * These methods are used by specific locking variants (spinlocks,
  * rwlocks, mutexes and rwsems) to pass init/acquire/release events
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0e273731d028..213153b10951 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -142,6 +142,9 @@ static LIST_HEAD(free_list_entries);
  * free_lock_classes points at the first free element. These elements are
  * linked together by the lock_entry member in struct lock_class.
  */
+#define KEYHASH_BITS		(MAX_LOCKDEP_KEYS_BITS - 1)
+#define KEYHASH_SIZE		(1UL << KEYHASH_BITS)
+static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
 unsigned long nr_lock_classes;
 static struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
 static LIST_HEAD(free_lock_classes);
@@ -602,7 +605,7 @@ static int very_verbose(struct lock_class *class)
  * Is this the address of a static object:
  */
 #ifdef __KERNEL__
-static int static_obj(void *obj)
+static int static_obj(const void *obj)
 {
 	unsigned long start = (unsigned long) &_stext,
 		      end   = (unsigned long) &_end,
@@ -881,6 +884,70 @@ static void init_lists(void)
 		list_add_tail(&list_entries[i].alloc_entry, &free_list_entries);
 }
 
+static inline struct hlist_head *keyhashentry(const struct lock_class_key *key)
+{
+	unsigned long hash = hash_long((uintptr_t)key, KEYHASH_BITS);
+
+	return lock_keys_hash + hash;
+}
+
+/*
+ * Register a dynamically allocated key.
+ */
+void lockdep_register_key(struct lock_class_key *key)
+{
+	struct hlist_head *hash_head;
+	struct lock_class_key *k;
+	unsigned long flags;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return;
+	hash_head = keyhashentry(key);
+	raw_local_irq_save(flags);
+	if (!graph_lock())
+		goto restore_irqs;
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (WARN_ON_ONCE(k == key))
+			goto out_unlock;
+	}
+	hlist_add_head_rcu(&key->hash_entry, hash_head);
+out_unlock:
+	graph_unlock();
+restore_irqs:
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lockdep_register_key);
+
+/*
+ * Check whether a key has been registered as a dynamic key. Must not be called
+ * from interrupt context.
+ */
+static bool is_dynamic_key(const struct lock_class_key *key)
+{
+	struct hlist_head *hash_head;
+	struct lock_class_key *k;
+	unsigned long flags;
+	bool found = false;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return false;
+	hash_head = keyhashentry(key);
+	raw_local_irq_save(flags);
+	if (!graph_lock())
+		goto restore_irqs;
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (k == key) {
+			found = true;
+			break;
+		}
+	}
+	graph_unlock();
+restore_irqs:
+	raw_local_irq_restore(flags);
+
+	return found;
+}
+
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
@@ -905,7 +972,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	if (!lock->key) {
 		if (!assign_lock_key(lock))
 			return NULL;
-	} else if (!static_obj(lock->key)) {
+	} else if (!static_obj(lock->key) && !is_dynamic_key(lock->key)) {
 		return NULL;
 	}
 
@@ -3284,13 +3351,13 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,
 	if (DEBUG_LOCKS_WARN_ON(!key))
 		return;
 	/*
-	 * Sanity check, the lock-class key must be persistent:
+	 * Sanity check, the lock-class key must either have been allocated
+	 * statically or must have been registered as a dynamic key.
 	 */
-	if (!static_obj(key)) {
-		printk("BUG: key %px not in .data!\n", key);
-		/*
-		 * What it says above ^^^^^, I suggest you read it.
-		 */
+	if (!static_obj(key) && !is_dynamic_key(key)) {
+		if (debug_locks)
+			printk(KERN_INFO "BUG: key %px has not been registered!\n",
+			       key);
 		DEBUG_LOCKS_WARN_ON(1);
 		return;
 	}
@@ -4510,6 +4577,44 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	free_zapped_classes(&zapped_classes);
 }
 
+
+/*
+ * Unregister a dynamically allocated key. Must not be called from interrupt
+ * context. The caller must ensure that freeing @key only happens after an RCU
+ * grace period.
+ */
+void lockdep_unregister_key(struct lock_class_key *key)
+{
+	struct list_head zapped_classes;
+	struct hlist_head *hash_head = keyhashentry(key);
+	struct lock_class_key *k;
+	unsigned long flags;
+	bool found = false;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return;
+
+	__lockdep_free_key_range(&zapped_classes, key, 1);
+
+	raw_local_irq_save(flags);
+	if (!graph_lock())
+		goto restore_irqs;
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (k == key) {
+			hlist_del_rcu(&k->hash_entry);
+			found = true;
+			break;
+		}
+	}
+	WARN_ON_ONCE(!found);
+	graph_unlock();
+restore_irqs:
+	raw_local_irq_restore(flags);
+
+	free_zapped_classes(&zapped_classes);
+}
+EXPORT_SYMBOL_GPL(lockdep_unregister_key);
+
 /*
  * Check whether any element of the @lock->class_cache[] array refers to a
  * registered lock class. The caller must hold either the graph lock or the
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 26/27] kernel/workqueue: Use dynamic lockdep keys for workqueues
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (24 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 25/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-28 23:43 ` [PATCH 27/27] lockdep tests: Test dynamic key registration Bart Van Assche
  2018-11-29 12:31 ` [PATCH 00/27] locking/lockdep: Add support for dynamic keys Peter Zijlstra
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche, Will Deacon

Commit 87915adc3f0a ("workqueue: re-add lockdep dependencies for flushing")
improved deadlock checking in the workqueue implementation. Unfortunately
that patch also introduced a few false positive lockdep complaints. This
patch suppresses these false positives by allocating the workqueue lockdep
keys dynamically. An example of a false positive lockdep complaint
suppressed by this patch can be found below. The root cause of that
complaint is that the direct I/O code can call alloc_workqueue() from
inside a work item created by another alloc_workqueue() call and that both
workqueues share the same lockdep key. Allocating the workqueue lockdep
keys dynamically guarantees that a unique lockdep key is associated with
each workqueue, which prevents this complaint from being triggered.

======================================================
WARNING: possible circular locking dependency detected
4.19.0-dbg+ #1 Not tainted
------------------------------------------------------
fio/4129 is trying to acquire lock:
00000000a01cfe1a ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: flush_workqueue+0xd0/0x970

but task is already holding lock:
00000000a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&sb->s_type->i_mutex_key#14){+.+.}:
       down_write+0x3d/0x80
       __generic_file_fsync+0x77/0xf0
       ext4_sync_file+0x3c9/0x780
       vfs_fsync_range+0x66/0x100
       dio_complete+0x2f5/0x360
       dio_aio_complete_work+0x1c/0x20
       process_one_work+0x481/0x9f0
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #1 ((work_completion)(&dio->complete_work)){+.+.}:
       process_one_work+0x447/0x9f0
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #0 ((wq_completion)"dio/%s"sb->s_id){+.+.}:
       lock_acquire+0xc5/0x200
       flush_workqueue+0xf3/0x970
       drain_workqueue+0xec/0x220
       destroy_workqueue+0x23/0x350
       sb_init_dio_done_wq+0x6a/0x80
       do_blockdev_direct_IO+0x1f33/0x4be0
       __blockdev_direct_IO+0x79/0x86
       ext4_direct_IO+0x5df/0xbb0
       generic_file_direct_write+0x119/0x220
       __generic_file_write_iter+0x131/0x2d0
       ext4_file_write_iter+0x3fa/0x710
       aio_write+0x235/0x330
       io_submit_one+0x510/0xeb0
       __x64_sys_io_submit+0x122/0x340
       do_syscall_64+0x71/0x220
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  (wq_completion)"dio/%s"sb->s_id --> (work_completion)(&dio->complete_work) --> &sb->s_type->i_mutex_key#14

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sb->s_type->i_mutex_key#14);
                               lock((work_completion)(&dio->complete_work));
                               lock(&sb->s_type->i_mutex_key#14);
  lock((wq_completion)"dio/%s"sb->s_id);

 *** DEADLOCK ***

1 lock held by fio/4129:
 #0: 00000000a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

stack backtrace:
CPU: 3 PID: 4129 Comm: fio Not tainted 4.19.0-dbg+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
Call Trace:
 dump_stack+0x86/0xc5
 print_circular_bug.isra.32+0x20a/0x218
 __lock_acquire+0x1c68/0x1cf0
 lock_acquire+0xc5/0x200
 flush_workqueue+0xf3/0x970
 drain_workqueue+0xec/0x220
 destroy_workqueue+0x23/0x350
 sb_init_dio_done_wq+0x6a/0x80
 do_blockdev_direct_IO+0x1f33/0x4be0
 __blockdev_direct_IO+0x79/0x86
 ext4_direct_IO+0x5df/0xbb0
 generic_file_direct_write+0x119/0x220
 __generic_file_write_iter+0x131/0x2d0
 ext4_file_write_iter+0x3fa/0x710
 aio_write+0x235/0x330
 io_submit_one+0x510/0xeb0
 __x64_sys_io_submit+0x122/0x340
 do_syscall_64+0x71/0x220
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
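
The pattern can be reduced to something like the following hypothetical
sketch (make_wq() and nested_work_fn() are made up; this is not the
direct I/O code):

	/* Both workqueues come from the same call site, so before this
	 * patch they shared one static lock_class_key and hence one
	 * lockdep class. */
	static struct workqueue_struct *make_wq(const char *name)
	{
		return alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, name);
	}

	/* Runs as a work item on a workqueue created by make_wq(). */
	static void nested_work_fn(struct work_struct *work)
	{
		struct workqueue_struct *wq = make_wq("nested");

		/* Flushing or destroying a workqueue of the same class from
		 * inside one of its work items looks like a self-deadlock
		 * to lockdep. With per-workqueue dynamic keys the two
		 * workqueues get distinct classes and the report goes away. */
		if (wq)
			destroy_workqueue(wq);
	}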

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/workqueue.h | 28 +++---------------
 kernel/workqueue.c        | 60 +++++++++++++++++++++++++++++++++------
 2 files changed, 55 insertions(+), 33 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 60d673e15632..d9a1a480e920 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -390,43 +390,23 @@ extern struct workqueue_struct *system_freezable_wq;
 extern struct workqueue_struct *system_power_efficient_wq;
 extern struct workqueue_struct *system_freezable_power_efficient_wq;
 
-extern struct workqueue_struct *
-__alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
-	struct lock_class_key *key, const char *lock_name, ...) __printf(1, 6);
-
 /**
  * alloc_workqueue - allocate a workqueue
  * @fmt: printf format for the name of the workqueue
  * @flags: WQ_* flags
  * @max_active: max in-flight work items, 0 for default
- * @args...: args for @fmt
+ * remaining args: args for @fmt
  *
  * Allocate a workqueue with the specified parameters.  For detailed
  * information on WQ_* flags, please refer to
  * Documentation/core-api/workqueue.rst.
  *
- * The __lock_name macro dance is to guarantee that single lock_class_key
- * doesn't end up with different namesm, which isn't allowed by lockdep.
- *
  * RETURNS:
  * Pointer to the allocated workqueue on success, %NULL on failure.
  */
-#ifdef CONFIG_LOCKDEP
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-({									\
-	static struct lock_class_key __key;				\
-	const char *__lock_name;					\
-									\
-	__lock_name = "(wq_completion)"#fmt#args;			\
-									\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      &__key, __lock_name, ##args);		\
-})
-#else
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      NULL, NULL, ##args)
-#endif
+struct workqueue_struct *alloc_workqueue(const char *fmt,
+					 unsigned int flags,
+					 int max_active, ...);
 
 /**
  * alloc_ordered_workqueue - allocate an ordered workqueue
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0280deac392e..82e155f764b7 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -259,6 +259,8 @@ struct workqueue_struct {
 	struct wq_device	*wq_dev;	/* I: for sysfs interface */
 #endif
 #ifdef CONFIG_LOCKDEP
+	char			*lock_name;
+	struct lock_class_key	key;
 	struct lockdep_map	lockdep_map;
 #endif
 	char			name[WQ_NAME_LEN]; /* I: workqueue name */
@@ -3314,11 +3316,50 @@ static int init_worker_pool(struct worker_pool *pool)
 	return 0;
 }
 
+#ifdef CONFIG_LOCKDEP
+static void wq_init_lockdep(struct workqueue_struct *wq)
+{
+	char *lock_name;
+
+	lockdep_register_key(&wq->key);
+	lock_name = kasprintf(GFP_KERNEL, "%s%s", "(wq_completion)", wq->name);
+	if (!lock_name)
+		lock_name = wq->name;
+	lockdep_init_map(&wq->lockdep_map, lock_name, &wq->key, 0);
+}
+
+static void wq_unregister_lockdep(struct workqueue_struct *wq)
+{
+	lockdep_reset_lock(&wq->lockdep_map);
+	lockdep_unregister_key(&wq->key);
+}
+
+static void wq_free_lockdep(struct workqueue_struct *wq)
+{
+	if (wq->lock_name != wq->name)
+		kfree(wq->lock_name);
+}
+#else
+static void wq_init_lockdep(struct workqueue_struct *wq)
+{
+}
+
+static void wq_unregister_lockdep(struct workqueue_struct *wq)
+{
+}
+
+static void wq_free_lockdep(struct workqueue_struct *wq)
+{
+}
+#endif
+
 static void rcu_free_wq(struct rcu_head *rcu)
 {
 	struct workqueue_struct *wq =
 		container_of(rcu, struct workqueue_struct, rcu);
 
+	wq_free_lockdep(wq);
+
 	if (!(wq->flags & WQ_UNBOUND))
 		free_percpu(wq->cpu_pwqs);
 	else
@@ -3509,8 +3550,10 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
 	 * If we're the last pwq going away, @wq is already dead and no one
 	 * is gonna access it anymore.  Schedule RCU free.
 	 */
-	if (is_last)
+	if (is_last) {
+		wq_unregister_lockdep(wq);
 		call_rcu_sched(&wq->rcu, rcu_free_wq);
+	}
 }
 
 /**
@@ -4044,11 +4087,9 @@ static int init_rescuer(struct workqueue_struct *wq)
 	return 0;
 }
 
-struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
-					       unsigned int flags,
-					       int max_active,
-					       struct lock_class_key *key,
-					       const char *lock_name, ...)
+struct workqueue_struct *alloc_workqueue(const char *fmt,
+					 unsigned int flags,
+					 int max_active, ...)
 {
 	size_t tbl_size = 0;
 	va_list args;
@@ -4083,7 +4124,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 			goto err_free_wq;
 	}
 
-	va_start(args, lock_name);
+	va_start(args, max_active);
 	vsnprintf(wq->name, sizeof(wq->name), fmt, args);
 	va_end(args);
 
@@ -4100,7 +4141,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	INIT_LIST_HEAD(&wq->flusher_overflow);
 	INIT_LIST_HEAD(&wq->maydays);
 
-	lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
+	wq_init_lockdep(wq);
 	INIT_LIST_HEAD(&wq->list);
 
 	if (alloc_and_link_pwqs(wq) < 0)
@@ -4138,7 +4179,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	destroy_workqueue(wq);
 	return NULL;
 }
-EXPORT_SYMBOL_GPL(__alloc_workqueue_key);
+EXPORT_SYMBOL_GPL(alloc_workqueue);
 
 /**
  * destroy_workqueue - safely terminate a workqueue
@@ -4191,6 +4232,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
 		kthread_stop(wq->rescuer->task);
 
 	if (!(wq->flags & WQ_UNBOUND)) {
+		wq_unregister_lockdep(wq);
 		/*
 		 * The base ref is never dropped on per-cpu pwqs.  Directly
 		 * schedule RCU free.
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 27/27] lockdep tests: Test dynamic key registration
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (25 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 26/27] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
@ 2018-11-28 23:43 ` Bart Van Assche
  2018-11-29 12:31 ` [PATCH 00/27] locking/lockdep: Add support for dynamic keys Peter Zijlstra
  27 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-28 23:43 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, johannes.berg, linux-kernel, Bart Van Assche

Make sure that the lockdep_register_key() and lockdep_unregister_key()
code is tested when running the lockdep tests.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/include/liblockdep/common.h |  2 ++
 tools/lib/lockdep/include/liblockdep/mutex.h  | 11 ++++++-----
 tools/lib/lockdep/tests/ABBA.c                |  9 +++++++++
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index d640a9761f09..a81d91d4fc78 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -45,6 +45,8 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 void lock_release(struct lockdep_map *lock, int nested,
 			unsigned long ip);
 void lockdep_reset_lock(struct lockdep_map *lock);
+void lockdep_register_key(struct lock_class_key *key);
+void lockdep_unregister_key(struct lock_class_key *key);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index 2073d4e1f2f0..783dd0df06f9 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -7,6 +7,7 @@
 
 struct liblockdep_pthread_mutex {
 	pthread_mutex_t mutex;
+	struct lock_class_key key;
 	struct lockdep_map dep_map;
 };
 
@@ -27,11 +28,10 @@ static inline int __mutex_init(liblockdep_pthread_mutex_t *lock,
 	return pthread_mutex_init(&lock->mutex, __mutexattr);
 }
 
-#define liblockdep_pthread_mutex_init(mutex, mutexattr)		\
-({								\
-	static struct lock_class_key __key;			\
-								\
-	__mutex_init((mutex), #mutex, &__key, (mutexattr));	\
+#define liblockdep_pthread_mutex_init(mutex, mutexattr)			\
+({									\
+	lockdep_register_key(&(mutex)->key);				\
+	__mutex_init((mutex), #mutex, &(mutex)->key, (mutexattr));	\
 })
 
 static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock)
@@ -55,6 +55,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
 	lockdep_reset_lock(&lock->dep_map);
+	lockdep_unregister_key(&lock->key);
 	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 623313f54720..543789bc3e37 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -14,4 +14,13 @@ void main(void)
 
 	pthread_mutex_destroy(&b);
 	pthread_mutex_destroy(&a);
+
+	pthread_mutex_init(&a, NULL);
+	pthread_mutex_init(&b, NULL);
+
+	LOCK_UNLOCK_2(a, b);
+	LOCK_UNLOCK_2(b, a);
+
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH 12/27] net/core: Assign a name to devnet_rename_seq.dep_map
  2018-11-28 23:43 ` [PATCH 12/27] net/core: Assign a name to devnet_rename_seq.dep_map Bart Van Assche
@ 2018-11-29  0:45   ` David Miller
  0 siblings, 0 replies; 50+ messages in thread
From: David Miller @ 2018-11-29  0:45 UTC (permalink / raw)
  To: bvanassche; +Cc: mingo, peterz, tj, johannes.berg, linux-kernel

From: Bart Van Assche <bvanassche@acm.org>
Date: Wed, 28 Nov 2018 15:43:10 -0800

> This patch makes lockdep reports about devnet_rename_seq more informative.
> 
> Cc: David S. Miller <davem@davemloft.net>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 24/27] locking/lockdep: Introduce __lockdep_free_key_range()
  2018-11-28 23:43 ` [PATCH 24/27] locking/lockdep: Introduce __lockdep_free_key_range() Bart Van Assche
@ 2018-11-29 10:00   ` Peter Zijlstra
  0 siblings, 0 replies; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 10:00 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Wed, Nov 28, 2018 at 03:43:22PM -0800, Bart Van Assche wrote:
> This patch does not change any functionality but makes the next patch
> in this series easier to read.

Ooh, I completely forgot about commit:

  35a9393c95b3 ("lockdep: Fix the module unload key range freeing logic")

I was still thinking that all was broken... yes, I think I see where
you're going.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 25/27] locking/lockdep: Add support for dynamic keys
  2018-11-28 23:43 ` [PATCH 25/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
@ 2018-11-29 10:10   ` Peter Zijlstra
  2018-12-03 17:07     ` Bart Van Assche
  2018-11-29 12:04   ` Peter Zijlstra
  1 sibling, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 10:10 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Wed, Nov 28, 2018 at 03:43:23PM -0800, Bart Van Assche wrote:
> +/* hash_entry is used to keep track of dynamically allocated keys. */
>  struct lock_class_key {
> +	struct hlist_node		hash_entry;
>  	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
>  };

One consideration, and maybe we should have a BUILD_BUG for it, is
that this object should be no larger than the smallest lock primitive.

That typically is raw_spinlock_t, which normally is 4 bytes, but with
lockdep on that at least also includes struct lockdep_map.

So what we want is:

	sizeof(lock_class_key) <= sizeof(raw_spinlock_t)

Otherwise, two consecutive spinlocks could end up with key overlap in
their subclass range.

Now, I think that is still valid after this patch, but it is something
that gave me pause.
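
A compile-time check along those lines could be as simple as the
following sketch; it would have to live inside a function where both
types are visible, e.g. lockdep_init():

	/* Keep the subclass key ranges of adjacent spinlocks from overlapping. */
	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));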

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 20/27] locking/lockdep: Free lock classes that are no longer in use
  2018-11-28 23:43 ` [PATCH 20/27] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
@ 2018-11-29 10:37   ` Peter Zijlstra
  0 siblings, 0 replies; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 10:37 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Wed, Nov 28, 2018 at 03:43:18PM -0800, Bart Van Assche wrote:
> +/* Must be called with the graph lock held. */
> +static void remove_class_from_lock_chain(struct lock_chain *chain,
> +					 struct lock_class *class)
> +{
> +	u64 chain_key;
> +	int i;
> +
> +	for (i = chain->base; i < chain->base + chain->depth; i++) {
> +		if (chain_hlocks[i] != class - lock_classes)
> +			continue;
> +		if (--chain->depth == 0)
> +			break;
> +		memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
> +			(chain->base + chain->depth - i) *
> +			sizeof(chain_hlocks[0]));
> +		/*
> +		 * Each lock class occurs at most once in a
> +		 * lock chain so once we found a match we can
> +		 * break out of this loop.
> +		 */
> +		break;
> +	}
> +	/*
> +	 * Note: calling hlist_del_rcu() from inside a
> +	 * hlist_for_each_entry_rcu() loop is safe.
> +	 */
> +	if (chain->depth == 0) {
> +		/* To do: decrease chain count. See also inc_chains(). */
> +		hlist_del_rcu(&chain->entry);
> +		return;
> +	}
> +	chain_key = 0;
> +	for (i = chain->base; i < chain->base + chain->depth; i++)
> +		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
> +	if (chain->chain_key == chain_key)
> +		return;
> +	hlist_del_rcu(&chain->entry);
> +	chain->chain_key = chain_key;
> +	hlist_add_head_rcu(&chain->entry, chainhashentry(chain_key));
> +}
> +
> +/* Must be called with the graph lock held. */
> +static void remove_class_from_lock_chains(struct lock_class *class)
> +{
> +	struct lock_chain *chain;
> +	struct hlist_head *head;
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
> +		head = chainhash_table + i;
> +		hlist_for_each_entry_rcu(chain, head, entry) {
> +			remove_class_from_lock_chain(chain, class);
> +		}
> +	}
> +}

*shudder*, I suppose that is the reason I never went there.

I suppose that if you don't do this too often it doesn't matter that it
is horribly expensive.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-11-28 23:43 ` [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use Bart Van Assche
@ 2018-11-29 10:49   ` Peter Zijlstra
  2018-11-29 12:01     ` Peter Zijlstra
  0 siblings, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 10:49 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> Instead of abandoning elements of list_entries[] that are no longer in
> use, make alloc_list_entry() reuse array elements that have been freed.

> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index 43327a1dd488..01e55fca7c2c 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -183,6 +183,11 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
>  struct lock_list {
>  	/* Entry in locks_after or locks_before. */
>  	struct list_head		lock_order_entry;
> +	/*
> +	 * Entry in all_list_entries when in use and entry in
> +	 * free_list_entries when not in use.
> +	 */
> +	struct list_head		alloc_entry;
>  	struct lock_class		*class;
>  	struct lock_class		*links_to;
>  	struct stack_trace		trace;

> +static LIST_HEAD(all_list_entries);
> +static LIST_HEAD(free_list_entries);
>  

> @@ -862,7 +867,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
>   */
>  static struct lock_list *alloc_list_entry(void)
>  {
> -	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
> +	struct lock_list *e = list_first_entry_or_null(&free_list_entries,
> +						       typeof(*e), alloc_entry);
> +
> +	if (!e) {
>  		if (!debug_locks_off_graph_unlock())
>  			return NULL;
>  
> @@ -870,7 +878,8 @@ static struct lock_list *alloc_list_entry(void)
>  		dump_stack();
>  		return NULL;
>  	}
> -	return list_entries + nr_list_entries++;
> +	list_move_tail(&e->alloc_entry, &all_list_entries);
> +	return e;
>  }

> @@ -4235,19 +4244,19 @@ static void zap_class(struct list_head *zapped_classes,
>  		      struct lock_class *class)
>  {
>  	struct lock_class *links_to;
> +	struct lock_list *entry, *tmp;
>  
>  	/*
>  	 * Remove all dependencies this lock is
>  	 * involved in:
>  	 */
> +	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
>  		if (entry->class != class && entry->links_to != class)
>  			continue;
>  		links_to = entry->links_to;
>  		WARN_ON_ONCE(entry->class == links_to);
>  		list_del_rcu(&entry->lock_order_entry);
> +		list_move(&entry->alloc_entry, &free_list_entries);
>  		entry->class = NULL;
>  		entry->links_to = NULL;
>  		check_free_class(zapped_classes, class);

Hurm.. I'm confused here.

The reason you cannot re-use lock_order_entry for the free list is
because list_del_rcu(), right? But if so, then what ensures the
list_entry is not re-used before its grace period?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-11-29 10:49   ` Peter Zijlstra
@ 2018-11-29 12:01     ` Peter Zijlstra
  2018-11-29 16:48       ` Bart Van Assche
  0 siblings, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 12:01 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Thu, Nov 29, 2018 at 11:49:02AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> > Instead of abandoning elements of list_entries[] that are no longer in
> > use, make alloc_list_entry() reuse array elements that have been freed.
> 
> > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > index 43327a1dd488..01e55fca7c2c 100644
> > --- a/include/linux/lockdep.h
> > +++ b/include/linux/lockdep.h
> > @@ -183,6 +183,11 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
> >  struct lock_list {
> >  	/* Entry in locks_after or locks_before. */
> >  	struct list_head		lock_order_entry;
> > +	/*
> > +	 * Entry in all_list_entries when in use and entry in
> > +	 * free_list_entries when not in use.
> > +	 */
> > +	struct list_head		alloc_entry;
> >  	struct lock_class		*class;
> >  	struct lock_class		*links_to;
> >  	struct stack_trace		trace;
> 
> > +static LIST_HEAD(all_list_entries);
> > +static LIST_HEAD(free_list_entries);
> >  
> 
> > @@ -862,7 +867,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
> >   */
> >  static struct lock_list *alloc_list_entry(void)
> >  {
> > -	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
> > +	struct lock_list *e = list_first_entry_or_null(&free_list_entries,
> > +						       typeof(*e), alloc_entry);
> > +
> > +	if (!e) {
> >  		if (!debug_locks_off_graph_unlock())
> >  			return NULL;
> >  
> > @@ -870,7 +878,8 @@ static struct lock_list *alloc_list_entry(void)
> >  		dump_stack();
> >  		return NULL;
> >  	}
> > -	return list_entries + nr_list_entries++;
> > +	list_move_tail(&e->alloc_entry, &all_list_entries);
> > +	return e;
> >  }
> 
> > @@ -4235,19 +4244,19 @@ static void zap_class(struct list_head *zapped_classes,
> >  		      struct lock_class *class)
> >  {
> >  	struct lock_class *links_to;
> > +	struct lock_list *entry, *tmp;
> >  
> >  	/*
> >  	 * Remove all dependencies this lock is
> >  	 * involved in:
> >  	 */
> > +	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
> >  		if (entry->class != class && entry->links_to != class)
> >  			continue;
> >  		links_to = entry->links_to;
> >  		WARN_ON_ONCE(entry->class == links_to);
> >  		list_del_rcu(&entry->lock_order_entry);
> > +		list_move(&entry->alloc_entry, &free_list_entries);
> >  		entry->class = NULL;
> >  		entry->links_to = NULL;
> >  		check_free_class(zapped_classes, class);
> 
> Hurm.. I'm confused here.
> 
> The reason you cannot re-use lock_order_entry for the free list is
> because list_del_rcu(), right? But if so, then what ensures the
> list_entry is not re-used before its grace period?

Also, if you have to grow lock_list by 16 bytes just to be able to free
it, a bitmap allocator is much cheaper, space wise.

Some people seem to really care about the static image size, and
lockdep's .data section does matter to them.
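
For illustration, here is a rough userspace sketch of the kind of bitmap
allocator being suggested: a static array of entries plus one bit per
entry, instead of two list_heads inside every element. All names are
invented for the sketch; in the kernel this would run under the graph
lock and could use the existing bitmap helpers instead of the open-coded
bit fiddling:

	#include <stdio.h>

	#define MAX_ENTRIES 64

	struct entry { int payload; };

	static struct entry entries[MAX_ENTRIES];
	static unsigned long long in_use;	/* bit i set => entries[i] allocated */

	static struct entry *entry_alloc(void)
	{
		for (int i = 0; i < MAX_ENTRIES; i++) {
			if (!(in_use & (1ULL << i))) {
				in_use |= 1ULL << i;
				return &entries[i];
			}
		}
		return NULL;			/* table exhausted */
	}

	static void entry_free(struct entry *e)
	{
		in_use &= ~(1ULL << (e - entries));
	}

	int main(void)
	{
		struct entry *a = entry_alloc();	/* entries[0] */
		struct entry *b = entry_alloc();	/* entries[1] */

		entry_free(a);
		printf("b = entries[%td], recycled slot = entries[%td]\n",
		       b - entries, entry_alloc() - entries);
		return 0;
	}

The space argument is the whole point: one bit per entry instead of the
16 bytes of list_head that the patch adds to each lock_list.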

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 25/27] locking/lockdep: Add support for dynamic keys
  2018-11-28 23:43 ` [PATCH 25/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
  2018-11-29 10:10   ` Peter Zijlstra
@ 2018-11-29 12:04   ` Peter Zijlstra
  2018-11-29 16:59     ` Bart Van Assche
  1 sibling, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 12:04 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Wed, Nov 28, 2018 at 03:43:23PM -0800, Bart Van Assche wrote:
> A shortcoming of the current lockdep implementation is that it requires
> lock keys to be allocated statically. That forces certain lock objects
> to share lock keys. Since lock dependency analysis groups lock objects
> per key sharing lock keys can cause false positive lockdep reports.
> Make it possible to avoid such false positive reports by allowing lock
> keys to be allocated dynamically. Require that dynamically allocated
> lock keys are registered before use by calling lockdep_register_key().
> Complain about attempts to register the same lock key pointer twice
> without calling lockdep_unregister_key() between successive
> registration calls.

>  struct lock_class_key {
> +	struct hlist_node		hash_entry;
>  	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
>  };

That hash_entry is purely for that double-register warning, right? I
wonder if we can do that differently, by always doing
register_lock_class(), and checking that state.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 23/27] locking/lockdep: Check data structure consistency
  2018-11-28 23:43 ` [PATCH 23/27] locking/lockdep: Check data structure consistency Bart Van Assche
@ 2018-11-29 12:30   ` Peter Zijlstra
  2018-11-29 16:50     ` Bart Van Assche
  0 siblings, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 12:30 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Wed, Nov 28, 2018 at 03:43:21PM -0800, Bart Van Assche wrote:

> +static bool in_list(struct list_head *e, struct list_head *h)
> +{
> +	struct list_head *f;
> +
> +	list_for_each(f, h)
> +		if (e == f)
> +			return true;

Coding style wants { } around any multi-line block, even if C doesn't
strictly require it.

> +
> +	return false;
> +}
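
For illustration, the braced form being asked for is just the quoted
helper with the extra braces; nothing else changes:

	static bool in_list(struct list_head *e, struct list_head *h)
	{
		struct list_head *f;

		list_for_each(f, h) {
			if (e == f)
				return true;
		}

		return false;
	}

The same treatment applies to the other multi-line loop bodies flagged
below.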

> +static bool check_lock_chain_key(struct lock_chain *chain)
> +{
> +	u64 chain_key = 0;
> +	int i;
> +
> +	for (i = chain->base; i < chain->base + chain->depth; i++)
> +		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
> +	/*
> +	 * The 'unsigned long long' casts avoid that a compiler warning
> +	 * is reported when building tools/lib/lockdep.
> +	 */
> +	if (chain->chain_key != chain_key)
> +		printk(KERN_INFO "chain %lld: key %#llx <> %#llx\n",
> +		       (unsigned long long)(chain - lock_chains),
> +		       (unsigned long long)chain->chain_key,
> +		       (unsigned long long)chain_key);

Idem on the { }

> +	return chain->chain_key == chain_key;
> +}
> +
> +static bool check_data_structures(void)
> +{
> +	struct lock_class *class;
> +	struct lock_chain *chain;
> +	struct hlist_head *head;
> +	struct lock_list *e;
> +	int i;
> +
> +	/*
> +	 * Check whether all list entries that are in use occur in a class
> +	 * lock list.
> +	 */
> +	list_for_each_entry(e, &all_list_entries, alloc_entry) {
> +		if (!in_any_class_list(&e->lock_order_entry)) {
> +			printk(KERN_INFO "list entry %ld is not in any class list; class %s <> %s\n",
> +			       e - list_entries,
> +			       e->class->name ? : "(?)",
> +			       e->links_to->name ? : "(?)");
> +			return false;
> +		}
> +	}
> +
> +	/*
> +	 * Check whether all list entries that are not in use do not occur in
> +	 * a class lock list.
> +	 */
> +	list_for_each_entry(e, &free_list_entries, alloc_entry) {
> +		if (in_any_class_list(&e->lock_order_entry)) {
> +			printk(KERN_INFO "list entry %ld occurs in a class list; class %s <> %s\n",
> +			       e - list_entries,
> +			       e->class && e->class->name ? e->class->name :
> +			       "(?)",
> +			       e->links_to && e->links_to->name ?
> +			       e->links_to->name : "(?)");
> +			return false;
> +		}
> +	}
> +
> +	/* Check whether all classes have valid lock lists. */
> +	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
> +		class = &lock_classes[i];
> +		if (!class->locks_before.next)
> +			continue;
> +		if (!class_lock_list_valid(class, &class->locks_before))
> +			return false;
> +		if (!class_lock_list_valid(class, &class->locks_after))
> +			return false;
> +	}
> +
> +	/* Check the chain_key of all lock chains. */
> +	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
> +		head = chainhash_table + i;
> +		hlist_for_each_entry_rcu(chain, head, entry)
> +			if (!check_lock_chain_key(chain))
> +				return false;

And again.

> +	}
> +
> +	return true;
> +}

IIRC there were a few other sites in the series, please check them all.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 00/27] locking/lockdep: Add support for dynamic keys
  2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (26 preceding siblings ...)
  2018-11-28 23:43 ` [PATCH 27/27] lockdep tests: Test dynamic key registration Bart Van Assche
@ 2018-11-29 12:31 ` Peter Zijlstra
  27 siblings, 0 replies; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 12:31 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Wed, Nov 28, 2018 at 03:42:58PM -0800, Bart Van Assche wrote:
> Hi Ingo and Peter,
> 
> A known shortcoming of the current lockdep implementation is that it requires
> lock keys to be allocated statically and that this key sharing can cause false
> positive deadlock reports. This patch series adds support for dynamic keys in
> the lockdep code. I'm not claiming that this patch series is perfect. However,
> the code in this patch series survives nontrivial tests so I think it's worth
> a look. Two unrelated changes in this patch series are:
> - Improve the lockdep tests.
> - Complain if no name has been assigned to a lock object.

Mostly this looks very good, and it improves the lives of people who like
modules too.

Thanks for doing all that.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-11-29 12:01     ` Peter Zijlstra
@ 2018-11-29 16:48       ` Bart Van Assche
  2018-12-01 20:24         ` Peter Zijlstra
  0 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-29 16:48 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, johannes.berg, linux-kernel

On Thu, 2018-11-29 at 13:01 +0100, Peter Zijlstra wrote:
> On Thu, Nov 29, 2018 at 11:49:02AM +0100, Peter Zijlstra wrote:
> > On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> > >  	/*
> > >  	 * Remove all dependencies this lock is
> > >  	 * involved in:
> > >  	 */
> > > +	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
> > >  		if (entry->class != class && entry->links_to != class)
> > >  			continue;
> > >  		links_to = entry->links_to;
> > >  		WARN_ON_ONCE(entry->class == links_to);
> > >  		list_del_rcu(&entry->lock_order_entry);
> > > +		list_move(&entry->alloc_entry, &free_list_entries);
> > >  		entry->class = NULL;
> > >  		entry->links_to = NULL;
> > >  		check_free_class(zapped_classes, class);
> > 
> > Hurm.. I'm confused here.
> > 
> > The reason you cannot re-use lock_order_entry for the free list is
> > because list_del_rcu(), right? But if so, then what ensures the
> > list_entry is not re-used before its grace period?
> 
> Also, if you have to grow lock_list by 16 bytes just to be able to free
> it, a bitmap allocator is much cheaper, space wise.
> 
> Some people seem to really care about the static image size, and
> lockdep's .data section does matter to them.

How about addressing this by moving removed list entries to a "zapped_entries"
list and only moving list entries from the zapped_entries list to the
free_list_entries list after an RCU grace period? I'm not sure that it is
possible to implement that approach without introducing a new list_head in
struct lock_list.

Thanks,

Bart.



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 23/27] locking/lockdep: Check data structure consistency
  2018-11-29 12:30   ` Peter Zijlstra
@ 2018-11-29 16:50     ` Bart Van Assche
  2018-11-29 16:59       ` Peter Zijlstra
  0 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-11-29 16:50 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, johannes.berg, linux-kernel

On Thu, 2018-11-29 at 13:30 +0100, Peter Zijlstra wrote:
> IIRC there were a few other sites in the series, please check them all.

OK, I will add braces around multi-line statement blocks. You may want to
know that checkpatch didn't complain about missing braces.

Bart.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 25/27] locking/lockdep: Add support for dynamic keys
  2018-11-29 12:04   ` Peter Zijlstra
@ 2018-11-29 16:59     ` Bart Van Assche
  0 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-11-29 16:59 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, johannes.berg, linux-kernel

On Thu, 2018-11-29 at 13:04 +0100, Peter Zijlstra wrote:
> On Wed, Nov 28, 2018 at 03:43:23PM -0800, Bart Van Assche wrote:
> > A shortcoming of the current lockdep implementation is that it requires
> > lock keys to be allocated statically. That forces certain lock objects
> > to share lock keys. Since lock dependency analysis groups lock objects
> > per key sharing lock keys can cause false positive lockdep reports.
> > Make it possible to avoid such false positive reports by allowing lock
> > keys to be allocated dynamically. Require that dynamically allocated
> > lock keys are registered before use by calling lockdep_register_key().
> > Complain about attempts to register the same lock key pointer twice
> > without calling lockdep_unregister_key() between successive
> > registration calls.
> >  struct lock_class_key {
> > +	struct hlist_node		hash_entry;
> >  	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
> >  };
> 
> That hash_entry is purely for that double-register warning, right? I
> wonder if we can do that differently, by always doing
> register_lock_class(), and checking that state.

Hi Peter,

The hash_entry serves two purposes. One purpose is to verify whether the
lockdep_register_key() and lockdep_unregister_key() functions are used
correctly. A second purpose is to prevent lockdep_init_map() from complaining
when it encounters a dynamically allocated key. I'm not sure how always
doing register_lock_class() would help.
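
For reference, a minimal sketch of how a dynamically allocated key would
be used, assuming the interface lands with the names and the key-pointer
arguments described in the changelog; the foo structure and its
constructor/destructor are invented for the sketch:

	#include <linux/lockdep.h>
	#include <linux/mutex.h>
	#include <linux/slab.h>

	struct foo {
		struct mutex		lock;
		struct lock_class_key	key;	/* must live as long as 'lock' */
	};

	static struct foo *foo_create(void)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

		if (!f)
			return NULL;
		lockdep_register_key(&f->key);		/* before first use of the key */
		mutex_init(&f->lock);
		lockdep_set_class(&f->lock, &f->key);	/* per-instance lock class */
		return f;
	}

	static void foo_destroy(struct foo *f)
	{
		mutex_destroy(&f->lock);
		lockdep_unregister_key(&f->key);	/* before the key memory goes away */
		kfree(f);
	}

Each foo instance then gets its own lock class, which is what avoids the
false positives caused by key sharing.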

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 23/27] locking/lockdep: Check data structure consistency
  2018-11-29 16:50     ` Bart Van Assche
@ 2018-11-29 16:59       ` Peter Zijlstra
  0 siblings, 0 replies; 50+ messages in thread
From: Peter Zijlstra @ 2018-11-29 16:59 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Thu, Nov 29, 2018 at 08:50:02AM -0800, Bart Van Assche wrote:
> On Thu, 2018-11-29 at 13:30 +0100, Peter Zijlstra wrote:
> > IIRC there were a few other sites in the series, please check them all.
> 
> OK, I will add braces around multi-line statement blocks. You may want to
> know that checkpatch didn't complain about missing braces.

Yeah, checkpatch is far from perfect. I think this recently got
documented somewhere though. Let's see if I can find that.

  https://lkml.kernel.org/r/20181107171010.421878737@linutronix.de

Not sure what the status of all that is, but there you go.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-11-29 16:48       ` Bart Van Assche
@ 2018-12-01 20:24         ` Peter Zijlstra
  2018-12-03 16:40           ` Bart Van Assche
  0 siblings, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-12-01 20:24 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Thu, Nov 29, 2018 at 08:48:50AM -0800, Bart Van Assche wrote:
> On Thu, 2018-11-29 at 13:01 +0100, Peter Zijlstra wrote:
> > On Thu, Nov 29, 2018 at 11:49:02AM +0100, Peter Zijlstra wrote:
> > > On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> > > >  	/*
> > > >  	 * Remove all dependencies this lock is
> > > >  	 * involved in:
> > > >  	 */
> > > > +	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
> > > >  		if (entry->class != class && entry->links_to != class)
> > > >  			continue;
> > > >  		links_to = entry->links_to;
> > > >  		WARN_ON_ONCE(entry->class == links_to);
> > > >  		list_del_rcu(&entry->lock_order_entry);
> > > > +		list_move(&entry->alloc_entry, &free_list_entries);
> > > >  		entry->class = NULL;
> > > >  		entry->links_to = NULL;
> > > >  		check_free_class(zapped_classes, class);
> > > 
> > > Hurm.. I'm confused here.
> > > 
> > > The reason you cannot re-use lock_order_entry for the free list is
> > > because list_del_rcu(), right? But if so, then what ensures the
> > > list_entry is not re-used before its grace period?
> > 
> > Also, if you have to grow lock_list by 16 bytes just to be able to free
> > it, a bitmap allocator is much cheaper, space wise.
> > 
> > Some people seem to really care about the static image size, and
> > lockdep's .data section does matter to them.
> 
> How about addressing this by moving removed list entries to a "zapped_entries"
> list and only moving list entries from the zapped_entries list to the
> free_list_entries list after an RCU grace period? I'm not sure that it is
> possible to implement that approach without introducing a new list_head in
> struct lock_list.

I think we can do this with a free bitmap and an array of 2 pending
bitmaps and an index. Add newly freed entries to the pending bitmap
indicated by the current index, when complete flip the index -- such
that further new bits go to the other pending bitmap -- and call_rcu().

Then, on the call_rcu() callback, ie. after a GP has happened, OR our
pending bitmap into the free bitmap, and when the other pending bitmap
isn't empty, flip the index again and start it all again.

This ensures there is at least one full GP between setting a bit and it
landing in the free mask.
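
To make the scheme concrete, here is a small userspace model of it, as I
read the description above. The grace period is simulated by invoking
rcu_callback() by hand, a cb_scheduled flag stands in for the single
outstanding call_rcu(), and all names are invented; in the kernel all of
this would run under the graph lock:

	#include <stdbool.h>
	#include <stdio.h>

	typedef unsigned long long bitmap_t;	/* one bit per lock_list entry */

	static bitmap_t free_bits;		/* safe to hand out again */
	static bitmap_t pending[2];		/* freed, but not yet past a GP */
	static int idx;				/* which pending[] collects new bits */
	static bool cb_scheduled;		/* at most one callback in flight */

	static void schedule_gp_callback(void)	/* stands in for call_rcu() */
	{
		cb_scheduled = true;
	}

	/* A zap_class()-like path frees entry 'bit'. */
	static void zap_entry(int bit)
	{
		pending[idx] |= 1ULL << bit;
		if (!cb_scheduled) {
			idx ^= 1;	/* further new bits go to the other bitmap */
			schedule_gp_callback();
		}
	}

	/* Runs after a grace period, i.e. the call_rcu() callback. */
	static void rcu_callback(void)
	{
		cb_scheduled = false;
		free_bits |= pending[idx ^ 1];	/* these bits waited a full GP */
		pending[idx ^ 1] = 0;
		if (pending[idx]) {		/* more arrived meanwhile: go again */
			idx ^= 1;
			schedule_gp_callback();
		}
		printf("free=%#llx pending={%#llx,%#llx}\n",
		       free_bits, pending[0], pending[1]);
	}

	int main(void)
	{
		zap_entry(3);		/* flips the index and schedules a callback */
		zap_entry(5);		/* lands in the other pending bitmap */
		rcu_callback();		/* GP #1: bit 3 becomes free, re-arms for bit 5 */
		rcu_callback();		/* GP #2: bit 5 becomes free */
		return 0;
	}

The second pending bitmap is what lets new bits keep arriving while a
callback is in flight without any bit ever reaching the free mask before
its own grace period has passed.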


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-12-01 20:24         ` Peter Zijlstra
@ 2018-12-03 16:40           ` Bart Van Assche
  2018-12-03 17:32             ` Peter Zijlstra
  0 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-12-03 16:40 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, johannes.berg, linux-kernel

On Sat, 2018-12-01 at 21:24 +0100, Peter Zijlstra wrote:
> On Thu, Nov 29, 2018 at 08:48:50AM -0800, Bart Van Assche wrote:
> > On Thu, 2018-11-29 at 13:01 +0100, Peter Zijlstra wrote:
> > > On Thu, Nov 29, 2018 at 11:49:02AM +0100, Peter Zijlstra wrote:
> > > > On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> > > > >  	/*
> > > > >  	 * Remove all dependencies this lock is
> > > > >  	 * involved in:
> > > > >  	 */
> > > > > +	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
> > > > >  		if (entry->class != class && entry->links_to != class)
> > > > >  			continue;
> > > > >  		links_to = entry->links_to;
> > > > >  		WARN_ON_ONCE(entry->class == links_to);
> > > > >  		list_del_rcu(&entry->lock_order_entry);
> > > > > +		list_move(&entry->alloc_entry, &free_list_entries);
> > > > >  		entry->class = NULL;
> > > > >  		entry->links_to = NULL;
> > > > >  		check_free_class(zapped_classes, class);
> > > > 
> > > > Hurm.. I'm confused here.
> > > > 
> > > > The reason you cannot re-use lock_order_entry for the free list is
> > > > because list_del_rcu(), right? But if so, then what ensures the
> > > > list_entry is not re-used before its grace period?
> > > 
> > > Also, if you have to grow lock_list by 16 bytes just to be able to free
> > > it, a bitmap allocator is much cheaper, space wise.
> > > 
> > > Some people seem to really care about the static image size, and
> > > lockdep's .data section does matter to them.
> > 
> > How about addressing this by moving removed list entries to a "zapped_entries"
> > list and only moving list entries from the zapped_entries list to the
> > free_list_entries list after an RCU grace period? I'm not sure that it is
> > possible to implement that approach without introducing a new list_head in
> > struct lock_list.
> 
> I think we can do this with a free bitmap and an array of 2 pending
> bitmaps and an index. Add newly freed entries to the pending bitmap
> indicated by the current index, when complete flip the index -- such
> that further new bits go to the other pending bitmap -- and call_rcu().
> 
> Then, on the call_rcu() callback, ie. after a GP has happened, OR our
> pending bitmap into the free bitmap, and when the other pending bitmap
> isn't empty, flip the index again and start it all again.
> 
> This ensures there is at least one full GP between setting a bit and it
> landing in the free mask.

Hi Peter,

How about the following alternative which requires only two bitmaps instead
of three:
- Maintain two bitmaps, one for the free entries and one for the entries
  that are being freed.
- Protect all accesses to both bitmaps with the graph lock.
- zap_class() sets a bit in the "being freed" bitmap for the entries that
  should be freed after a GP.
- Instead of making free_zapped_classes() wait for a grace period by calling
  synchronize_sched(), use call_rcu() and do the freeing work from inside the
  RCU callback.
- From inside the RCU callback, set a bit in the "free" bitmap for all entries
  that have a bit set in the "being freed" bitmap and clears the "being freed"
  bitmap.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 25/27] locking/lockdep: Add support for dynamic keys
  2018-11-29 10:10   ` Peter Zijlstra
@ 2018-12-03 17:07     ` Bart Van Assche
  2018-12-03 17:31       ` Peter Zijlstra
  0 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-12-03 17:07 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, johannes.berg, linux-kernel

On Thu, 2018-11-29 at 11:10 +0100, Peter Zijlstra wrote:
> On Wed, Nov 28, 2018 at 03:43:23PM -0800, Bart Van Assche wrote:
> > +/* hash_entry is used to keep track of dynamically allocated keys. */
> >  struct lock_class_key {
> > +	struct hlist_node		hash_entry;
> >  	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
> >  };
> 
> One consideration, and maybe we should have a BUILD_BUG for that, is
> that this object should be no larger than the smallest lock primitive.
> 
> That typically is raw_spinlock_t, which normally is 4 bytes, but with
> lockdep on that at least also includes struct lockdep_map.
> 
> So what we want is:
> 
> 	sizeof(lock_class_key) <= sizeof(raw_spinlock_t)
> 
> Otherwise, two consecutive spinlocks could end up with key overlap in
> their subclass range.
> 
> Now, I think that is still valid after this patch, but it is something
> that gave me pause.

How about adding this as an additional patch before patch 25/27?

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 9a7cca6dc3d4..ce05b9b419f4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -725,6 +725,15 @@ static bool assign_lock_key(struct lockdep_map *lock)
 {
 	unsigned long can_addr, addr = (unsigned long)lock;
 
+	/*
+	 * lockdep_free_key_range() assumes that struct lock_class_key
+	 * objects do not overlap. Since we use the address of lock
+	 * objects as class key for static objects, check whether the
+	 * size of lock_class_key objects does not exceed the size of
+	 * the smallest lock object.
+	 */
+	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
+
 	if (__is_kernel_percpu_address(addr, &can_addr))
 		lock->key = (void *)can_addr;
 	else if (__is_module_percpu_address(addr, &can_addr))

Thanks,

Bart.

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH 25/27] locking/lockdep: Add support for dynamic keys
  2018-12-03 17:07     ` Bart Van Assche
@ 2018-12-03 17:31       ` Peter Zijlstra
  0 siblings, 0 replies; 50+ messages in thread
From: Peter Zijlstra @ 2018-12-03 17:31 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Mon, Dec 03, 2018 at 09:07:00AM -0800, Bart Van Assche wrote:
> How about adding this as an additional patch before patch 25/27?

Excellent, thanks!

> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 9a7cca6dc3d4..ce05b9b419f4 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -725,6 +725,15 @@ static bool assign_lock_key(struct lockdep_map *lock)
>  {
>  	unsigned long can_addr, addr = (unsigned long)lock;
>  
> +	/*
> +	 * lockdep_free_key_range() assumes that struct lock_class_key
> +	 * objects do not overlap. Since we use the address of lock
> +	 * objects as class key for static objects, check whether the
> +	 * size of lock_class_key objects does not exceed the size of
> +	 * the smallest lock object.
> +	 */
> +	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
> +
>  	if (__is_kernel_percpu_address(addr, &can_addr))
>  		lock->key = (void *)can_addr;
>  	else if (__is_module_percpu_address(addr, &can_addr))
> 
> Thanks,
> 
> Bart.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-12-03 16:40           ` Bart Van Assche
@ 2018-12-03 17:32             ` Peter Zijlstra
  2018-12-03 18:16               ` Bart Van Assche
  0 siblings, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-12-03 17:32 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Mon, Dec 03, 2018 at 08:40:48AM -0800, Bart Van Assche wrote:

> > I think we can do this with a free bitmap and an array of 2 pending
> > bitmaps and an index. Add newly freed entries to the pending bitmap
> > indicated by the current index, when complete flip the index -- such
> > that further new bits go to the other pending bitmap -- and call_rcu().
> > 
> > Then, on the call_rcu() callback, ie. after a GP has happened, OR our
> > pending bitmap into the free bitmap, and when the other pending bitmap
> > isn't empty, flip the index again and start it all again.
> > 
> > This ensures there is at least one full GP between setting a bit and it
> > landing in the free mask.
> 
> Hi Peter,
> 
> How about the following alternative which requires only two bitmaps instead
> of three:
> - Maintain two bitmaps, one for the free entries and one for the entries
>   that are being freed.
> - Protect all accesses to both bitmaps with the graph lock.
> - zap_class() sets a bit in the "being freed" bitmap for the entries that
>   should be freed after a GP.
> - Instead of making free_zapped_classes() wait for a grace period by calling
>   synchronize_sched(), use call_rcu() and do the freeing work from inside the
>   RCU callback.
> - From inside the RCU callback, set a bit in the "free" bitmap for all entries
>   that have a bit set in the "being freed" bitmap and clears the "being freed"
>   bitmap.
> 

What happens when another unreg happens while the call_rcu() thing is
still pending?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-12-03 17:32             ` Peter Zijlstra
@ 2018-12-03 18:16               ` Bart Van Assche
  2018-12-04  8:14                 ` Peter Zijlstra
  0 siblings, 1 reply; 50+ messages in thread
From: Bart Van Assche @ 2018-12-03 18:16 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, johannes.berg, linux-kernel

On Mon, 2018-12-03 at 18:32 +0100, Peter Zijlstra wrote:
> On Mon, Dec 03, 2018 at 08:40:48AM -0800, Bart Van Assche wrote:
> 
> > > I think we can do this with a free bitmap and an array of 2 pending
> > > bitmaps and an index. Add newly freed entries to the pending bitmap
> > > indicated by the current index, when complete flip the index -- such
> > > that further new bits go to the other pending bitmap -- and call_rcu().
> > > 
> > > Then, on the call_rcu() callback, ie. after a GP has happened, OR our
> > > pending bitmap into the free bitmap, and when the other pending bitmap
> > > isn't empty, flip the index again and start it all again.
> > > 
> > > This ensures there is at least one full GP between setting a bit and it
> > > landing in the free mask.
> > 
> > Hi Peter,
> > 
> > How about the following alternative which requires only two bitmaps instead
> > of three:
> > - Maintain two bitmaps, one for the free entries and one for the entries
> >   that are being freed.
> > - Protect all accesses to both bitmaps with the graph lock.
> > - zap_class() sets a bit in the "being freed" bitmap for the entries that
> >   should be freed after a GP.
> > - Instead of making free_zapped_classes() wait for a grace period by calling
> >   synchronize_sched(), use call_rcu() and do the freeing work from inside the
> >   RCU callback.
> > - From inside the RCU callback, set a bit in the "free" bitmap for all entries
> >   that have a bit set in the "being freed" bitmap and clears the "being freed"
> >   bitmap.
> 
> What happens when another unreg happens while the call_rcu() thing is
> still pending?

A new flag will have to keep track of whether or not an RCU callback has
already been scheduled via call_rcu() but not yet executed to avoid double
RCU call complaints. In other code a possible alternative would be to
allocate the RCU head data structure dynamically. However, I don't think
that alternative is appropriate inside the lockdep code - I don't want to
introduce a circular dependency between the lockdep code and the memory
allocator.

Bart.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-12-03 18:16               ` Bart Van Assche
@ 2018-12-04  8:14                 ` Peter Zijlstra
  2018-12-04 16:08                   ` Bart Van Assche
  0 siblings, 1 reply; 50+ messages in thread
From: Peter Zijlstra @ 2018-12-04  8:14 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, johannes.berg, linux-kernel

On Mon, Dec 03, 2018 at 10:16:59AM -0800, Bart Van Assche wrote:
> On Mon, 2018-12-03 at 18:32 +0100, Peter Zijlstra wrote:
> > On Mon, Dec 03, 2018 at 08:40:48AM -0800, Bart Van Assche wrote:
> > 
> > > > I think we can do this with a free bitmap and an array of 2 pending
> > > > bitmaps and an index. Add newly freed entries to the pending bitmap
> > > > indicated by the current index, when complete flip the index -- such
> > > > that further new bits go to the other pending bitmap -- and call_rcu().
> > > > 
> > > > Then, on the call_rcu() callback, ie. after a GP has happened, OR our
> > > > pending bitmap into the free bitmap, and when the other pending bitmap
> > > > isn't empty, flip the index again and start it all again.
> > > > 
> > > > This ensures there is at least one full GP between setting a bit and it
> > > > landing in the free mask.
> > > 
> > > Hi Peter,
> > > 
> > > How about the following alternative which requires only two bitmaps instead
> > > of three:
> > > - Maintain two bitmaps, one for the free entries and one for the entries
> > >   that are being freed.
> > > - Protect all accesses to both bitmaps with the graph lock.
> > > - zap_class() sets a bit in the "being freed" bitmap for the entries that
> > >   should be freed after a GP.
> > > - Instead of making free_zapped_classes() wait for a grace period by calling
> > >   synchronize_sched(), use call_rcu() and do the freeing work from inside the
> > >   RCU callback.
> > > - From inside the RCU callback, set a bit in the "free" bitmap for all entries
> > >   that have a bit set in the "being freed" bitmap and clears the "being freed"
> > >   bitmap.
> > 
> > What happens when another unreg happens while the call_rcu() thing is
> > still pending?
> 
> A new flag will have to keep track of whether or not an RCU callback has
> already been scheduled via call_rcu() but not yet executed to avoid double
> RCU call complaints.

That's not the only problem there. You either then have to synchronously
wait for that flag / call_rcu() to complete, or, if you modify the bitmap,
ensure it re-queues itself for another GP before committing, which is
starvation prone.

> In other code a possible alternative would be to
> allocate the RCU head data structure dynamically. However, I don't think
> that alternative is appropriate inside the lockdep code - I don't want to
> introduce a circular dependency between the lockdep code and the memory
> allocator.

Yes, that's a trainwreck waiting to happen ;-)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use
  2018-12-04  8:14                 ` Peter Zijlstra
@ 2018-12-04 16:08                   ` Bart Van Assche
  0 siblings, 0 replies; 50+ messages in thread
From: Bart Van Assche @ 2018-12-04 16:08 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, johannes.berg, linux-kernel

On Tue, 2018-12-04 at 09:14 +0100, Peter Zijlstra wrote:
> On Mon, Dec 03, 2018 at 10:16:59AM -0800, Bart Van Assche wrote:
> > On Mon, 2018-12-03 at 18:32 +0100, Peter Zijlstra wrote:
> > > On Mon, Dec 03, 2018 at 08:40:48AM -0800, Bart Van Assche wrote:
> > > > How about the following alternative which requires only two bitmaps instead
> > > > of three:
> > > > - Maintain two bitmaps, one for the free entries and one for the entries
> > > >   that are being freed.
> > > > - Protect all accesses to both bitmaps with the graph lock.
> > > > - zap_class() sets a bit in the "being freed" bitmap for the entries that
> > > >   should be freed after a GP.
> > > > - Instead of making free_zapped_classes() wait for a grace period by calling
> > > >   synchronize_sched(), use call_rcu() and do the freeing work from inside the
> > > >   RCU callback.
> > > > - From inside the RCU callback, set a bit in the "free" bitmap for all entries
> > > >   that have a bit set in the "being freed" bitmap and clears the "being freed"
> > > >   bitmap.
> > > 
> > > What happens when another unreg happens while the rcu_call thing is
> > > still pending?
> > 
> > A new flag will have to keep track of whether or not an RCU callback has
> > already been scheduled via rcu_call() but not yet executed to avoid double
> > RCU call complaints.
>
> That's not the only problem there. You either then have to synchronously
> wait for that flag / rcu_call to complete, or, if you modify the bitmap,
> ensure it re-queues itself for another GP before committing, which is
> starvation prone.

Can you have a look at free_zapped_classes() and schedule_free_zapped_classes()
in v2 of this patch series? In v2 the call_rcu() invocation, the manipulation
of the boolean and the processing of the bitmaps are all protected by the
graph lock to avoid the issues that you described. See also
* [PATCH v2 17/24] locking/lockdep: Free lock classes that are no longer in use
  (https://lore.kernel.org/lkml/20181204002833.55452-18-bvanassche@acm.org/).
* [PATCH v2 18/24] locking/lockdep: Reuse list entries that are no longer in use
  (https://lore.kernel.org/lkml/20181204002833.55452-19-bvanassche@acm.org/).

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [tip:timers/core] timekeeping: Use proper seqcount initializer
  2018-11-28 23:43 ` [PATCH 11/27] timekeeping: Assign a name to tk_core.seq.dep_map Bart Van Assche
@ 2018-12-05 10:03   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 50+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-05 10:03 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, tglx, bvanassche, hpa, mingo

Commit-ID:  ce10a5b3954f2514af726beb78ed8d7350c5e41c
Gitweb:     https://git.kernel.org/tip/ce10a5b3954f2514af726beb78ed8d7350c5e41c
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Wed, 28 Nov 2018 15:43:09 -0800
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 5 Dec 2018 11:00:09 +0100

timekeeping: Use proper seqcount initializer

tk_core.seq is initialized open-coded, but that fails to initialize the
lockdep map when lockdep is enabled. Lockdep splats involving tk_core.seq
consequently lack a name and are hard to read.

Use the proper initializer which takes care of the lockdep map
initialization.

[ tglx: Massaged changelog ]

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: peterz@infradead.org
Cc: tj@kernel.org
Cc: johannes.berg@intel.com
Link: https://lkml.kernel.org/r/20181128234325.110011-12-bvanassche@acm.org

---
 kernel/time/timekeeping.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index cd02bd38cf2d..c801e25875a3 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -45,7 +45,9 @@ enum timekeeping_adv_mode {
 static struct {
 	seqcount_t		seq;
 	struct timekeeper	timekeeper;
-} tk_core ____cacheline_aligned;
+} tk_core ____cacheline_aligned = {
+	.seq = SEQCNT_ZERO(tk_core.seq),
+};
 
 static DEFINE_RAW_SPINLOCK(timekeeper_lock);
 static struct timekeeper shadow_timekeeper;

^ permalink raw reply related	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2018-12-05 10:04 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-28 23:42 [PATCH 00/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
2018-11-28 23:42 ` [PATCH 01/27] lockdep tests: Display compiler warning and error messages Bart Van Assche
2018-11-28 23:43 ` [PATCH 02/27] lockdep tests: Fix shellcheck warnings Bart Van Assche
2018-11-28 23:43 ` [PATCH 03/27] lockdep tests: Improve testing accuracy Bart Van Assche
2018-11-28 23:43 ` [PATCH 04/27] lockdep tests: Run lockdep tests a second time under Valgrind Bart Van Assche
2018-11-28 23:43 ` [PATCH 05/27] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
2018-11-28 23:43 ` [PATCH 06/27] liblockdep: Add dummy print_irqtrace_events() implementation Bart Van Assche
2018-11-28 23:43 ` [PATCH 07/27] lockdep tests: Test the lockdep_reset_lock() implementation Bart Van Assche
2018-11-28 23:43 ` [PATCH 08/27] locking/lockdep: Declare local symbols static Bart Van Assche
2018-11-28 23:43 ` [PATCH 09/27] locking/lockdep: Inline __lockdep_init_map() Bart Van Assche
2018-11-28 23:43 ` [PATCH 10/27] locking/lockdep: Introduce lock_class_cache_is_registered() Bart Van Assche
2018-11-28 23:43 ` [PATCH 11/27] timekeeping: Assign a name to tk_core.seq.dep_map Bart Van Assche
2018-12-05 10:03   ` [tip:timers/core] timekeeping: Use proper seqcount initializer tip-bot for Bart Van Assche
2018-11-28 23:43 ` [PATCH 12/27] net/core: Assign a name to devnet_rename_seq.dep_map Bart Van Assche
2018-11-29  0:45   ` David Miller
2018-11-28 23:43 ` [PATCH 13/27] locking/lockdep: Complain if a lock object has no name Bart Van Assche
2018-11-28 23:43 ` [PATCH 14/27] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement Bart Van Assche
2018-11-28 23:43 ` [PATCH 15/27] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe Bart Van Assche
2018-11-28 23:43 ` [PATCH 16/27] locking/lockdep: Stop using RCU primitives to access all_lock_classes Bart Van Assche
2018-11-28 23:43 ` [PATCH 17/27] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
2018-11-28 23:43 ` [PATCH 18/27] locking/lockdep: Reorder struct lock_class members Bart Van Assche
2018-11-28 23:43 ` [PATCH 19/27] locking/lockdep: Retain the class key and name while freeing a lock class Bart Van Assche
2018-11-28 23:43 ` [PATCH 20/27] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
2018-11-29 10:37   ` Peter Zijlstra
2018-11-28 23:43 ` [PATCH 21/27] locking/lockdep: Rename lock_list.entry into lock_list.lock_order_entry Bart Van Assche
2018-11-28 23:43 ` [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use Bart Van Assche
2018-11-29 10:49   ` Peter Zijlstra
2018-11-29 12:01     ` Peter Zijlstra
2018-11-29 16:48       ` Bart Van Assche
2018-12-01 20:24         ` Peter Zijlstra
2018-12-03 16:40           ` Bart Van Assche
2018-12-03 17:32             ` Peter Zijlstra
2018-12-03 18:16               ` Bart Van Assche
2018-12-04  8:14                 ` Peter Zijlstra
2018-12-04 16:08                   ` Bart Van Assche
2018-11-28 23:43 ` [PATCH 23/27] locking/lockdep: Check data structure consistency Bart Van Assche
2018-11-29 12:30   ` Peter Zijlstra
2018-11-29 16:50     ` Bart Van Assche
2018-11-29 16:59       ` Peter Zijlstra
2018-11-28 23:43 ` [PATCH 24/27] locking/lockdep: Introduce __lockdep_free_key_range() Bart Van Assche
2018-11-29 10:00   ` Peter Zijlstra
2018-11-28 23:43 ` [PATCH 25/27] locking/lockdep: Add support for dynamic keys Bart Van Assche
2018-11-29 10:10   ` Peter Zijlstra
2018-12-03 17:07     ` Bart Van Assche
2018-12-03 17:31       ` Peter Zijlstra
2018-11-29 12:04   ` Peter Zijlstra
2018-11-29 16:59     ` Bart Van Assche
2018-11-28 23:43 ` [PATCH 26/27] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
2018-11-28 23:43 ` [PATCH 27/27] lockdep tests: Test dynamic key registration Bart Van Assche
2018-11-29 12:31 ` [PATCH 00/27] locking/lockdep: Add support for dynamic keys Peter Zijlstra
