* [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys
@ 2018-12-07  1:11 Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 01/24] lockdep tests: Display compiler warning and error messages Bart Van Assche
                   ` (24 more replies)
  0 siblings, 25 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo; +Cc: peterz, tj, longman, johannes.berg, linux-kernel, Bart Van Assche

Hi Ingo and Peter,

A known shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. This forces certain unrelated
synchronization objects to share keys, and that key sharing can cause
false-positive deadlock reports. This patch series adds support for dynamic
keys in the lockdep code and eliminates a class of false-positive reports
from the workqueue implementation.
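
To illustrate the intended usage model, here is a minimal sketch of how a
dynamically allocated object could carry its own lock class key. It assumes
the lockdep_register_key() / lockdep_unregister_key() interface added by the
"Add support for dynamic keys" patch; struct foo and its helpers are made up
for the example:

#include <linux/lockdep.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {
	spinlock_t		lock;
	struct lock_class_key	key;	/* one key per instance, not static */
};

static struct foo *foo_alloc(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	/* Register the dynamically allocated key with lockdep ... */
	lockdep_register_key(&f->key);
	spin_lock_init(&f->lock);
	/* ... and make this instance's lock use it as its class key. */
	lockdep_set_class(&f->lock, &f->key);
	return f;
}

static void foo_free(struct foo *f)
{
	/* Unregister the key before the memory backing it disappears. */
	lockdep_unregister_key(&f->key);
	kfree(f);
}

With a single static key every instance of struct foo would end up in the
same lock class; with a per-instance key lockdep can tell the instances
apart, which is what removes the false positives from the workqueue code.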

The changes compared to v2 are:
- Made sure that all schedule_free_zapped_classes() calls are protected
  with the graph lock.
- When removing a lock class, only recalculate lock chains that have been
  modified.
- Combine a list_del() and list_add_tail() call into a list_move_tail()
  call in register_lock_class().
- Use an RCU read lock instead of the graph lock inside is_dynamic_key().

The changes compared to v1 are:
- Addressed Peter's review comments: removed the list_head that I had added
  to struct lock_list, replaced all_list_entries and free_list_entries with
  two bitmaps, used call_rcu() to free lockdep objects, and added a
  BUILD_BUG_ON() that compares the sizes of struct lock_class_key and
  raw_spinlock_t.
- Addressed the "unknown symbol" errors reported by the build bot by adding a
  few #ifdef / #endif directives. Addressed the 32-bit warnings by using %d
  instead of %ld for array indices and by casting the array indices to
  unsigned int.
- Removed several WARN_ON_ONCE(!class->hash_entry.pprev) statements since
  these duplicate the code in check_data_structures().
- Left out the patch that causes lockdep to complain if no name has been
  assigned to a lock object. That patch makes the build bot complain about
  certain lock objects, but I have not yet had the time to identify those
  lock objects.
  
Bart.

Bart Van Assche (24):
  lockdep tests: Display compiler warning and error messages
  lockdep tests: Fix shellcheck warnings
  lockdep tests: Improve testing accuracy
  lockdep tests: Run lockdep tests a second time under Valgrind
  liblockdep: Rename "trywlock" into "trywrlock"
  liblockdep: Add dummy print_irqtrace_events() implementation
  lockdep tests: Test the lockdep_reset_lock() implementation
  locking/lockdep: Declare local symbols static
  locking/lockdep: Inline __lockdep_init_map()
  locking/lockdep: Introduce lock_class_cache_is_registered()
  locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement
  locking/lockdep: Make concurrent lockdep_reset_lock() calls safe
  locking/lockdep: Stop using RCU primitives to access all_lock_classes
  locking/lockdep: Make zap_class() remove all matching lock order
    entries
  locking/lockdep: Reorder struct lock_class members
  locking/lockdep: Retain the class key and name while freeing a lock
    class
  locking/lockdep: Free lock classes that are no longer in use
  locking/lockdep: Reuse list entries that are no longer in use
  locking/lockdep: Check data structure consistency
  locking/lockdep: Introduce __lockdep_free_key_range()
  locking/lockdep: Verify whether lock objects are small enough to be
    used as class keys
  locking/lockdep: Add support for dynamic keys
  kernel/workqueue: Use dynamic lockdep keys for workqueues
  lockdep tests: Test dynamic key registration

 include/linux/lockdep.h                       |  37 +-
 include/linux/workqueue.h                     |  28 +-
 kernel/locking/lockdep.c                      | 647 +++++++++++++++---
 kernel/workqueue.c                            |  60 +-
 tools/lib/lockdep/include/liblockdep/common.h |   3 +
 tools/lib/lockdep/include/liblockdep/mutex.h  |  12 +-
 tools/lib/lockdep/include/liblockdep/rwlock.h |   6 +-
 tools/lib/lockdep/lockdep.c                   |   5 +
 tools/lib/lockdep/run_tests.sh                |  39 +-
 tools/lib/lockdep/tests/AA.sh                 |   2 +
 tools/lib/lockdep/tests/ABA.sh                |   2 +
 tools/lib/lockdep/tests/ABBA.c                |  12 +
 tools/lib/lockdep/tests/ABBA.sh               |   2 +
 tools/lib/lockdep/tests/ABBA_2threads.sh      |   2 +
 tools/lib/lockdep/tests/ABBCCA.c              |   4 +
 tools/lib/lockdep/tests/ABBCCA.sh             |   2 +
 tools/lib/lockdep/tests/ABBCCDDA.c            |   5 +
 tools/lib/lockdep/tests/ABBCCDDA.sh           |   2 +
 tools/lib/lockdep/tests/ABCABC.c              |   4 +
 tools/lib/lockdep/tests/ABCABC.sh             |   2 +
 tools/lib/lockdep/tests/ABCDBCDA.c            |   5 +
 tools/lib/lockdep/tests/ABCDBCDA.sh           |   2 +
 tools/lib/lockdep/tests/ABCDBDDA.c            |   5 +
 tools/lib/lockdep/tests/ABCDBDDA.sh           |   2 +
 tools/lib/lockdep/tests/WW.sh                 |   2 +
 tools/lib/lockdep/tests/unlock_balance.c      |   2 +
 tools/lib/lockdep/tests/unlock_balance.sh     |   2 +
 27 files changed, 731 insertions(+), 165 deletions(-)
 create mode 100755 tools/lib/lockdep/tests/AA.sh
 create mode 100755 tools/lib/lockdep/tests/ABA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA_2threads.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCDDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCABC.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBCDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBDDA.sh
 create mode 100755 tools/lib/lockdep/tests/WW.sh
 create mode 100755 tools/lib/lockdep/tests/unlock_balance.sh

-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 01/24] lockdep tests: Display compiler warning and error messages
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:22   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 02/24] lockdep tests: Fix shellcheck warnings Bart Van Assche
                   ` (23 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

If compilation of liblockdep fails, display an error message and exit
immediately. Display the compiler warning and error messages that are
generated while building a test, and only run a test if it compiled
successfully.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 2e570a188f16..9f31f84e7fac 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -1,13 +1,17 @@
 #! /bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
-make &> /dev/null
+if ! make >/dev/null; then
+    echo "Building liblockdep failed."
+    echo "FAILED!"
+    exit 1
+fi
 
 for i in `ls tests/*.c`; do
 	testname=$(basename "$i" .c)
-	gcc -o tests/$testname -pthread $i liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &> /dev/null
 	echo -ne "$testname... "
-	if [ $(timeout 1 ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
+		[ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
@@ -19,9 +23,9 @@ done
 
 for i in `ls tests/*.c`; do
 	testname=$(basename "$i" .c)
-	gcc -o tests/$testname -pthread -Iinclude $i &> /dev/null
 	echo -ne "(PRELOAD) $testname... "
-	if [ $(timeout 1 ./lockdep ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+		[ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 02/24] lockdep tests: Fix shellcheck warnings
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 01/24] lockdep tests: Display compiler warning and error messages Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:23   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 03/24] lockdep tests: Improve testing accuracy Bart Van Assche
                   ` (22 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Use find instead of ls to avoid splitting filenames that contain spaces.
Use rm -f instead of if ... then rm ...; fi. This patch addresses all
shellcheck complaints about the run_tests.sh shell script.
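
The difference matters as soon as a file name contains whitespace. A quick
hypothetical demonstration (the demo/ directory and its file are made up):

	$ mkdir demo && touch 'demo/a b.c'
	$ for i in `ls demo/*.c`; do echo "<$i>"; done
	<demo/a>
	<b.c>
	$ find demo -name '*.c' | while read -r i; do echo "<$i>"; done
	<demo/a b.c>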

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 9f31f84e7fac..253719ee6377 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -7,7 +7,7 @@ if ! make >/dev/null; then
     exit 1
 fi
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "$testname... "
 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
@@ -16,12 +16,10 @@ for i in `ls tests/*.c`; do
 	else
 		echo "FAILED!"
 	fi
-	if [ -f "tests/$testname" ]; then
-		rm tests/$testname
-	fi
+	rm -f "tests/$testname"
 done
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "(PRELOAD) $testname... "
 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
@@ -30,7 +28,5 @@ for i in `ls tests/*.c`; do
 	else
 		echo "FAILED!"
 	fi
-	if [ -f "tests/$testname" ]; then
-		rm tests/$testname
-	fi
+	rm -f "tests/$testname"
 done
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 03/24] lockdep tests: Improve testing accuracy
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 01/24] lockdep tests: Display compiler warning and error messages Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 02/24] lockdep tests: Fix shellcheck warnings Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:23   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 04/24] lockdep tests: Run lockdep tests a second time under Valgrind Bart Van Assche
                   ` (21 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Instead of checking whether the tests produced any output, check the
output itself. This prevents e.g. stray debug output from causing the
message "PASSED!" to be reported for a failed test.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh            | 5 +++--
 tools/lib/lockdep/tests/AA.sh             | 2 ++
 tools/lib/lockdep/tests/ABA.sh            | 2 ++
 tools/lib/lockdep/tests/ABBA.sh           | 2 ++
 tools/lib/lockdep/tests/ABBA_2threads.sh  | 2 ++
 tools/lib/lockdep/tests/ABBCCA.sh         | 2 ++
 tools/lib/lockdep/tests/ABBCCDDA.sh       | 2 ++
 tools/lib/lockdep/tests/ABCABC.sh         | 2 ++
 tools/lib/lockdep/tests/ABCDBCDA.sh       | 2 ++
 tools/lib/lockdep/tests/ABCDBDDA.sh       | 2 ++
 tools/lib/lockdep/tests/WW.sh             | 2 ++
 tools/lib/lockdep/tests/unlock_balance.sh | 2 ++
 12 files changed, 25 insertions(+), 2 deletions(-)
 create mode 100755 tools/lib/lockdep/tests/AA.sh
 create mode 100755 tools/lib/lockdep/tests/ABA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBA_2threads.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCA.sh
 create mode 100755 tools/lib/lockdep/tests/ABBCCDDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCABC.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBCDA.sh
 create mode 100755 tools/lib/lockdep/tests/ABCDBDDA.sh
 create mode 100755 tools/lib/lockdep/tests/WW.sh
 create mode 100755 tools/lib/lockdep/tests/unlock_balance.sh

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 253719ee6377..bc36178329a8 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "$testname... "
 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
-		[ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+		timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
@@ -23,7 +23,8 @@ find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "(PRELOAD) $testname... "
 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
-		[ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+		timeout 1 ./lockdep "tests/$testname" 2>&1 |
+		"tests/${testname}.sh"; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
diff --git a/tools/lib/lockdep/tests/AA.sh b/tools/lib/lockdep/tests/AA.sh
new file mode 100755
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/AA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABA.sh b/tools/lib/lockdep/tests/ABA.sh
new file mode 100755
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABBA.sh b/tools/lib/lockdep/tests/ABBA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBA_2threads.sh b/tools/lib/lockdep/tests/ABBA_2threads.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA_2threads.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCA.sh b/tools/lib/lockdep/tests/ABBCCA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.sh b/tools/lib/lockdep/tests/ABBCCDDA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCDDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCABC.sh b/tools/lib/lockdep/tests/ABCABC.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCABC.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.sh b/tools/lib/lockdep/tests/ABCDBCDA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCDBCDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.sh b/tools/lib/lockdep/tests/ABCDBDDA.sh
new file mode 100755
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCDBDDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/WW.sh b/tools/lib/lockdep/tests/WW.sh
new file mode 100755
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/WW.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/unlock_balance.sh b/tools/lib/lockdep/tests/unlock_balance.sh
new file mode 100755
index 000000000000..c6e3952303fe
--- /dev/null
+++ b/tools/lib/lockdep/tests/unlock_balance.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: bad unlock balance detected'
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 04/24] lockdep tests: Run lockdep tests a second time under Valgrind
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (2 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 03/24] lockdep tests: Improve testing accuracy Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:24   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
                   ` (20 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

This improves test coverage.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/run_tests.sh | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index bc36178329a8..c8fbd0306960 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -31,3 +31,17 @@ find tests -name '*.c' | sort | while read -r i; do
 	fi
 	rm -f "tests/$testname"
 done
+
+find tests -name '*.c' | sort | while read -r i; do
+	testname=$(basename "$i" .c)
+	echo -ne "(PRELOAD + Valgrind) $testname... "
+	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+		{ timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
+		"tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+		! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
+		echo "PASSED!"
+	else
+		echo "FAILED!"
+	fi
+	rm -f "tests/$testname"
+done
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock"
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (3 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 04/24] lockdep tests: Run lockdep tests a second time under Valgrind Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:24   ` [tip:locking/core] tools/lib/lockdep: " tip-bot for Bart Van Assche
  2018-12-11 17:19   ` [PATCH v3 05/24] liblockdep: " Sasha Levin
  2018-12-07  1:11 ` [PATCH v3 06/24] liblockdep: Add dummy print_irqtrace_events() implementation Bart Van Assche
                   ` (19 subsequent siblings)
  24 siblings, 2 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Sasha Levin, Johannes Berg

This patch fixes the following compiler warning that is reported while
compiling the lockdep unit tests:

include/liblockdep/rwlock.h: In function 'liblockdep_pthread_rwlock_trywlock':
include/liblockdep/rwlock.h:66:9: warning: implicit declaration of function 'pthread_rwlock_trywlock'; did you mean 'pthread_rwlock_trywrlock'? [-Wimplicit-function-declaration]
  return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
         ^~~~~~~~~~~~~~~~~~~~~~~
         pthread_rwlock_trywrlock

Fixes: 5a52c9b480e0 ("liblockdep: Add public headers for pthread_rwlock_t implementation")
Cc: Sasha Levin <sashal@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/include/liblockdep/rwlock.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/rwlock.h b/tools/lib/lockdep/include/liblockdep/rwlock.h
index a96c3bf0fef1..365762e3a1ea 100644
--- a/tools/lib/lockdep/include/liblockdep/rwlock.h
+++ b/tools/lib/lockdep/include/liblockdep/rwlock.h
@@ -60,10 +60,10 @@ static inline int liblockdep_pthread_rwlock_tryrdlock(liblockdep_pthread_rwlock_
 	return pthread_rwlock_tryrdlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
-static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock)
+static inline int liblockdep_pthread_rwlock_trywrlock(liblockdep_pthread_rwlock_t *lock)
 {
 	lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_);
-	return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
+	return pthread_rwlock_trywrlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
 static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
@@ -79,7 +79,7 @@ static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
 #define pthread_rwlock_unlock		liblockdep_pthread_rwlock_unlock
 #define pthread_rwlock_wrlock		liblockdep_pthread_rwlock_wrlock
 #define pthread_rwlock_tryrdlock	liblockdep_pthread_rwlock_tryrdlock
-#define pthread_rwlock_trywlock		liblockdep_pthread_rwlock_trywlock
+#define pthread_rwlock_trywrlock	liblockdep_pthread_rwlock_trywrlock
 #define pthread_rwlock_destroy		liblockdep_rwlock_destroy
 
 #endif
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 06/24] liblockdep: Add dummy print_irqtrace_events() implementation
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (4 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:25   ` [tip:locking/core] tools/lib/lockdep: " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 07/24] lockdep tests: Test the lockdep_reset_lock() implementation Bart Van Assche
                   ` (18 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

This patch prevents linking against liblockdep from failing due to a
missing print_irqtrace_events() definition.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/lockdep.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/lib/lockdep/lockdep.c b/tools/lib/lockdep/lockdep.c
index 6002fcf2f9bc..348a9d0fb766 100644
--- a/tools/lib/lockdep/lockdep.c
+++ b/tools/lib/lockdep/lockdep.c
@@ -15,6 +15,11 @@ u32 prandom_u32(void)
 	abort();
 }
 
+void print_irqtrace_events(struct task_struct *curr)
+{
+	abort();
+}
+
 static struct new_utsname *init_utsname(void)
 {
 	static struct new_utsname n = (struct new_utsname) {
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 07/24] lockdep tests: Test the lockdep_reset_lock() implementation
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (5 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 06/24] liblockdep: Add dummy print_irqtrace_events() implementation Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:26   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 08/24] locking/lockdep: Declare local symbols static Bart Van Assche
                   ` (17 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

This patch makes sure that the lockdep_reset_lock() function gets
tested.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/include/liblockdep/common.h | 1 +
 tools/lib/lockdep/include/liblockdep/mutex.h  | 1 +
 tools/lib/lockdep/tests/ABBA.c                | 3 +++
 tools/lib/lockdep/tests/ABBCCA.c              | 4 ++++
 tools/lib/lockdep/tests/ABBCCDDA.c            | 5 +++++
 tools/lib/lockdep/tests/ABCABC.c              | 4 ++++
 tools/lib/lockdep/tests/ABCDBCDA.c            | 5 +++++
 tools/lib/lockdep/tests/ABCDBDDA.c            | 5 +++++
 tools/lib/lockdep/tests/unlock_balance.c      | 2 ++
 9 files changed, 30 insertions(+)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index 8862da80995a..d640a9761f09 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -44,6 +44,7 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 			struct lockdep_map *nest_lock, unsigned long ip);
 void lock_release(struct lockdep_map *lock, int nested,
 			unsigned long ip);
+void lockdep_reset_lock(struct lockdep_map *lock);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index a80ac39f966e..2073d4e1f2f0 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -54,6 +54,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
 
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
+	lockdep_reset_lock(&lock->dep_map);
 	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 1460afd33d71..623313f54720 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -11,4 +11,7 @@ void main(void)
 
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(b, a);
+
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCA.c b/tools/lib/lockdep/tests/ABBCCA.c
index a54c1b2af118..48446129d496 100644
--- a/tools/lib/lockdep/tests/ABBCCA.c
+++ b/tools/lib/lockdep/tests/ABBCCA.c
@@ -13,4 +13,8 @@ void main(void)
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(c, a);
+
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.c b/tools/lib/lockdep/tests/ABBCCDDA.c
index aa5d194e8869..3570bf7b3804 100644
--- a/tools/lib/lockdep/tests/ABBCCDDA.c
+++ b/tools/lib/lockdep/tests/ABBCCDDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCABC.c b/tools/lib/lockdep/tests/ABCABC.c
index b54a08e60416..a1c4659894cd 100644
--- a/tools/lib/lockdep/tests/ABCABC.c
+++ b/tools/lib/lockdep/tests/ABCABC.c
@@ -13,4 +13,8 @@ void main(void)
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(c, a);
 	LOCK_UNLOCK_2(b, c);
+
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.c b/tools/lib/lockdep/tests/ABCDBCDA.c
index a56742250d86..335af1c90ab5 100644
--- a/tools/lib/lockdep/tests/ABCDBCDA.c
+++ b/tools/lib/lockdep/tests/ABCDBCDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.c b/tools/lib/lockdep/tests/ABCDBDDA.c
index 238a3353f3c3..3c5972863049 100644
--- a/tools/lib/lockdep/tests/ABCDBDDA.c
+++ b/tools/lib/lockdep/tests/ABCDBDDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(b, d);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/unlock_balance.c b/tools/lib/lockdep/tests/unlock_balance.c
index 34cf32f689de..dba25064b50a 100644
--- a/tools/lib/lockdep/tests/unlock_balance.c
+++ b/tools/lib/lockdep/tests/unlock_balance.c
@@ -10,4 +10,6 @@ void main(void)
 	pthread_mutex_lock(&a);
 	pthread_mutex_unlock(&a);
 	pthread_mutex_unlock(&a);
+
+	pthread_mutex_destroy(&a);
 }
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 08/24] locking/lockdep: Declare local symbols static
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (6 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 07/24] lockdep tests: Test the lockdep_reset_lock() implementation Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:26   ` [tip:locking/core] " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 09/24] locking/lockdep: Inline __lockdep_init_map() Bart Van Assche
                   ` (16 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

This patch prevents sparse from complaining about a missing declaration
for the lock_classes array when building with CONFIG_DEBUG_LOCKDEP=n.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1efada2dd9dd..7434a00b2b2f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -138,6 +138,9 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
  * get freed - this significantly simplifies the debugging code.
  */
 unsigned long nr_lock_classes;
+#ifndef CONFIG_DEBUG_LOCKDEP
+static
+#endif
 struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
 
 static inline struct lock_class *hlock_class(struct held_lock *hlock)
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 09/24] locking/lockdep: Inline __lockdep_init_map()
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (7 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 08/24] locking/lockdep: Declare local symbols static Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:27   ` [tip:locking/core] " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 10/24] locking/lockdep: Introduce lock_class_cache_is_registered() Bart Van Assche
                   ` (15 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Since the function __lockdep_init_map() only has one caller, inline it
into its caller. This patch does not change any functionality.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7434a00b2b2f..b5c8fcb6c070 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3091,7 +3091,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 /*
  * Initialize a lock instance's lock-class mapping info:
  */
-static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
+void lockdep_init_map(struct lockdep_map *lock, const char *name,
 		      struct lock_class_key *key, int subclass)
 {
 	int i;
@@ -3147,12 +3147,6 @@ static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
 		raw_local_irq_restore(flags);
 	}
 }
-
-void lockdep_init_map(struct lockdep_map *lock, const char *name,
-		      struct lock_class_key *key, int subclass)
-{
-	__lockdep_init_map(lock, name, key, subclass);
-}
 EXPORT_SYMBOL_GPL(lockdep_init_map);
 
 struct lock_class_key __lockdep_no_validate__;
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 10/24] locking/lockdep: Introduce lock_class_cache_is_registered()
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (8 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 09/24] locking/lockdep: Inline __lockdep_init_map() Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:27   ` [tip:locking/core] " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 11/24] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement Bart Van Assche
                   ` (14 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

This patch does not change any functionality but makes the
lockdep_reset_lock() function easier to read.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 50 ++++++++++++++++++++++++----------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b5c8fcb6c070..81388d028ac7 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4201,13 +4201,33 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	 */
 }
 
-void lockdep_reset_lock(struct lockdep_map *lock)
+/*
+ * Check whether any element of the @lock->class_cache[] array refers to a
+ * registered lock class. The caller must hold either the graph lock or the
+ * RCU read lock.
+ */
+static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 {
 	struct lock_class *class;
 	struct hlist_head *head;
-	unsigned long flags;
 	int i, j;
-	int locked;
+
+	for (i = 0; i < CLASSHASH_SIZE; i++) {
+		head = classhash_table + i;
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
+			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
+				if (lock->class_cache[j] == class)
+					return true;
+		}
+	}
+	return false;
+}
+
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+	struct lock_class *class;
+	unsigned long flags;
+	int j, locked;
 
 	raw_local_irq_save(flags);
 
@@ -4227,24 +4247,14 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	 * be gone.
 	 */
 	locked = graph_lock();
-	for (i = 0; i < CLASSHASH_SIZE; i++) {
-		head = classhash_table + i;
-		hlist_for_each_entry_rcu(class, head, hash_entry) {
-			int match = 0;
-
-			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
-				match |= class == lock->class_cache[j];
-
-			if (unlikely(match)) {
-				if (debug_locks_off_graph_unlock()) {
-					/*
-					 * We all just reset everything, how did it match?
-					 */
-					WARN_ON(1);
-				}
-				goto out_restore;
-			}
+	if (unlikely(lock_class_cache_is_registered(lock))) {
+		if (debug_locks_off_graph_unlock()) {
+			/*
+			 * We all just reset everything, how did it match?
+			 */
+			WARN_ON(1);
 		}
+		goto out_restore;
 	}
 	if (locked)
 		graph_unlock();
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 11/24] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (9 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 10/24] locking/lockdep: Introduce lock_class_cache_is_registered() Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:28   ` [tip:locking/core] " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 12/24] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe Bart Van Assche
                   ` (13 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Initializing a list entry just before it is passed to list_add_tail_rcu()
is not necessary because list_add_tail_rcu() overwrites the next and prev
pointers anyway. Hence remove the INIT_LIST_HEAD() statement.
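
For reference, list_add_tail_rcu() boils down to roughly the following
(simplified from include/linux/rculist.h, with the debug checks left out).
The new entry's next and prev pointers are written unconditionally, so
initializing them beforehand has no effect:

static inline void __list_add_rcu(struct list_head *new,
				  struct list_head *prev,
				  struct list_head *next)
{
	new->next = next;	/* overwrites whatever was there */
	new->prev = prev;	/* ditto */
	rcu_assign_pointer(list_next_rcu(prev), new);
	next->prev = new;
}

static inline void list_add_tail_rcu(struct list_head *new,
				     struct list_head *head)
{
	__list_add_rcu(new, head->prev, head);
}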

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 81388d028ac7..346b5a1fd062 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -792,7 +792,6 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	class->key = key;
 	class->name = lock->name;
 	class->subclass = subclass;
-	INIT_LIST_HEAD(&class->lock_entry);
 	INIT_LIST_HEAD(&class->locks_before);
 	INIT_LIST_HEAD(&class->locks_after);
 	class->name_version = count_matching_names(class);
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 12/24] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (10 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 11/24] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:29   ` [tip:locking/core] " tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 13/24] locking/lockdep: Stop using RCU primitives to access all_lock_classes Bart Van Assche
                   ` (12 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Since zap_class() removes items from the all_lock_classes list and the
classhash_table, protect all zap_class() calls against concurrent
data structure modifications with the graph lock.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 346b5a1fd062..737d2dd3ea56 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4122,6 +4122,9 @@ void lockdep_reset(void)
 	raw_local_irq_restore(flags);
 }
 
+/*
+ * Remove all references to a lock class. The caller must hold the graph lock.
+ */
 static void zap_class(struct lock_class *class)
 {
 	int i;
@@ -4229,6 +4232,7 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	int j, locked;
 
 	raw_local_irq_save(flags);
+	locked = graph_lock();
 
 	/*
 	 * Remove all classes this lock might have:
@@ -4245,7 +4249,6 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	 * Debug check: in the end all mapped classes should
 	 * be gone.
 	 */
-	locked = graph_lock();
 	if (unlikely(lock_class_cache_is_registered(lock))) {
 		if (debug_locks_off_graph_unlock()) {
 			/*
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 13/24] locking/lockdep: Stop using RCU primitives to access all_lock_classes
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (11 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 12/24] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-11 15:29   ` [tip:locking/core] locking/lockdep: Stop using RCU primitives to access 'all_lock_classes' tip-bot for Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 14/24] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
                   ` (11 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Due to the previous patch, all code that accesses the 'all_lock_classes'
list holds the graph lock. Hence use regular list primitives instead of
their RCU variants to access this list.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 737d2dd3ea56..5c837a537273 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -629,7 +629,8 @@ static int static_obj(void *obj)
 
 /*
  * To make lock name printouts unique, we calculate a unique
- * class->name_version generation counter:
+ * class->name_version generation counter. The caller must hold the graph
+ * lock.
  */
 static int count_matching_names(struct lock_class *new_class)
 {
@@ -639,7 +640,7 @@ static int count_matching_names(struct lock_class *new_class)
 	if (!new_class->name)
 		return 0;
 
-	list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
+	list_for_each_entry(class, &all_lock_classes, lock_entry) {
 		if (new_class->key - new_class->subclass == class->key)
 			return class->name_version;
 		if (class->name && !strcmp(class->name, new_class->name))
@@ -803,7 +804,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	/*
 	 * Add it to the global list of classes:
 	 */
-	list_add_tail_rcu(&class->lock_entry, &all_lock_classes);
+	list_add_tail(&class->lock_entry, &all_lock_classes);
 
 	if (verbose(class)) {
 		graph_unlock();
@@ -4141,7 +4142,7 @@ static void zap_class(struct lock_class *class)
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
 	hlist_del_rcu(&class->hash_entry);
-	list_del_rcu(&class->lock_entry);
+	list_del(&class->lock_entry);
 
 	RCU_INIT_POINTER(class->key, NULL);
 	RCU_INIT_POINTER(class->name, NULL);
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 14/24] locking/lockdep: Make zap_class() remove all matching lock order entries
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (12 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 13/24] locking/lockdep: Stop using RCU primitives to access all_lock_classes Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 15/24] locking/lockdep: Reorder struct lock_class members Bart Van Assche
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Make sure that all lock order entries that refer to a class are removed
from the list_entries[] array when a kernel module is unloaded.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |  1 +
 kernel/locking/lockdep.c | 17 +++++++++++------
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1fd82ff99c65..6d0f8d1c2bee 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -180,6 +180,7 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
 struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
+	struct lock_class		*links_to;
 	struct stack_trace		trace;
 	int				distance;
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5c837a537273..ecd92969674c 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -859,7 +859,8 @@ static struct lock_list *alloc_list_entry(void)
 /*
  * Add a new dependency to the head of the list:
  */
-static int add_lock_to_list(struct lock_class *this, struct list_head *head,
+static int add_lock_to_list(struct lock_class *this,
+			    struct lock_class *links_to, struct list_head *head,
 			    unsigned long ip, int distance,
 			    struct stack_trace *trace)
 {
@@ -872,7 +873,9 @@ static int add_lock_to_list(struct lock_class *this, struct list_head *head,
 	if (!entry)
 		return 0;
 
+	WARN_ON_ONCE(this == links_to);
 	entry->class = this;
+	entry->links_to = links_to;
 	entry->distance = distance;
 	entry->trace = *trace;
 	/*
@@ -1918,14 +1921,14 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 * Ok, all validations passed, add the new lock
 	 * to the previous lock's dependency list:
 	 */
-	ret = add_lock_to_list(hlock_class(next),
+	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
 			       &hlock_class(prev)->locks_after,
 			       next->acquire_ip, distance, trace);
 
 	if (!ret)
 		return 0;
 
-	ret = add_lock_to_list(hlock_class(prev),
+	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
 			       &hlock_class(next)->locks_before,
 			       next->acquire_ip, distance, trace);
 	if (!ret)
@@ -4128,15 +4131,17 @@ void lockdep_reset(void)
  */
 static void zap_class(struct lock_class *class)
 {
+	struct lock_list *entry;
 	int i;
 
 	/*
 	 * Remove all dependencies this lock is
 	 * involved in:
 	 */
-	for (i = 0; i < nr_list_entries; i++) {
-		if (list_entries[i].class == class)
-			list_del_rcu(&list_entries[i].entry);
+	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+		if (entry->class != class && entry->links_to != class)
+			continue;
+		list_del_rcu(&entry->entry);
 	}
 	/*
 	 * Unhash the class and remove it from the all_lock_classes list:
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 15/24] locking/lockdep: Reorder struct lock_class members
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (13 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 14/24] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class Bart Van Assche
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

This patch does not change any functionality, but it makes the later patch
"locking/lockdep: Free lock classes that are no longer in use" easier to
read.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 6d0f8d1c2bee..9421f028c26c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -76,6 +76,13 @@ struct lock_class {
 	 */
 	struct list_head		lock_entry;
 
+	/*
+	 * These fields represent a directed graph of lock dependencies,
+	 * to every node we attach a list of "forward" and a list of
+	 * "backward" graph nodes.
+	 */
+	struct list_head		locks_after, locks_before;
+
 	struct lockdep_subclass_key	*key;
 	unsigned int			subclass;
 	unsigned int			dep_gen_id;
@@ -86,13 +93,6 @@ struct lock_class {
 	unsigned long			usage_mask;
 	struct stack_trace		usage_traces[XXX_LOCK_USAGE_STATES];
 
-	/*
-	 * These fields represent a directed graph of lock dependencies,
-	 * to every node we attach a list of "forward" and a list of
-	 * "backward" graph nodes.
-	 */
-	struct list_head		locks_after, locks_before;
-
 	/*
 	 * Generation counter, when doing certain classes of graph walking,
 	 * to ensure that we check one node only once:
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (14 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 15/24] locking/lockdep: Reorder struct lock_class members Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07 10:21   ` Peter Zijlstra
  2018-12-07  1:11 ` [PATCH v3 17/24] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
                   ` (8 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

The next patch in this series uses the class name in code that
detects calls to lock_acquire() while a class key is being freed. Hence
retain the class name for lock classes that are being freed.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ecd92969674c..92bdb187987f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4147,10 +4147,8 @@ static void zap_class(struct lock_class *class)
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
 	hlist_del_rcu(&class->hash_entry);
+	class->hash_entry.pprev = NULL;
 	list_del(&class->lock_entry);
-
-	RCU_INIT_POINTER(class->key, NULL);
-	RCU_INIT_POINTER(class->name, NULL);
 }
 
 static inline int within(const void *addr, void *start, unsigned long size)
-- 
2.20.0.rc2.403.gdbc3b29805-goog



* [PATCH v3 17/24] locking/lockdep: Free lock classes that are no longer in use
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (15 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07 12:14   ` Peter Zijlstra
  2018-12-07  1:11 ` [PATCH v3 18/24] locking/lockdep: Reuse list entries " Bart Van Assche
                   ` (7 subsequent siblings)
  24 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Instead of leaving lock classes that are no longer in use in the
lock_classes array, reuse those array entries. Maintain a linked list of
free lock classes with list head 'free_lock_classes'. Initialize that list
from inside register_lock_class() instead of from inside lockdep_init()
because register_lock_class() can be called before lockdep_init() has been
called. Only add freed lock classes to the free_lock_classes list after a
grace period to avoid reusing a lock_classes[] element while an RCU reader
may still be accessing it.
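
The grace-period handling can be pictured with the sketch below. This is a
simplification rather than a verbatim copy of the patch: zapped classes are
parked on zapped_classes and only spliced onto free_lock_classes from an
RCU callback, i.e. once every RCU reader that might still see them has
finished.

/* Minimal sketch; the locking details of the real code are omitted. */
static void free_zapped_classes(struct rcu_head *head)
{
	/* A grace period has elapsed: the zapped classes may be reused. */
	list_splice_init(&zapped_classes, &free_lock_classes);
	rcu_callback_scheduled = false;
}

/* Must be called with the graph lock held (see the v3 change log). */
static void schedule_free_zapped_classes(void)
{
	if (rcu_callback_scheduled)
		return;
	rcu_callback_scheduled = true;
	call_rcu(&free_zapped_classes_rcu_head, free_zapped_classes);
}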

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |   9 +-
 kernel/locking/lockdep.c | 242 ++++++++++++++++++++++++++++++++-------
 2 files changed, 208 insertions(+), 43 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 9421f028c26c..72b4d5bb409f 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -63,7 +63,8 @@ extern struct lock_class_key __lockdep_no_validate__;
 #define LOCKSTAT_POINTS		4
 
 /*
- * The lock-class itself:
+ * The lock-class itself. The order of the structure members matters.
+ * reinit_class() zeroes the key member and all subsequent members.
  */
 struct lock_class {
 	/*
@@ -72,7 +73,9 @@ struct lock_class {
 	struct hlist_node		hash_entry;
 
 	/*
-	 * global list of all lock-classes:
+	 * Entry in all_lock_classes when in use. Entry in free_lock_classes
+	 * when not in use. Instances that are being freed are on the
+	 * zapped_classes list.
 	 */
 	struct list_head		lock_entry;
 
@@ -106,7 +109,7 @@ struct lock_class {
 	unsigned long			contention_point[LOCKSTAT_POINTS];
 	unsigned long			contending_point[LOCKSTAT_POINTS];
 #endif
-};
+} __no_randomize_layout;
 
 #ifdef CONFIG_LOCK_STAT
 struct lock_time {
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 92bdb187987f..ce8abd268727 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -134,8 +134,8 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
 /*
  * All data structures here are protected by the global debug_lock.
  *
- * Mutex key structs only get allocated, once during bootup, and never
- * get freed - this significantly simplifies the debugging code.
+ * nr_lock_classes is the number of elements of lock_classes[] that is
+ * in use.
  */
 unsigned long nr_lock_classes;
 #ifndef CONFIG_DEBUG_LOCKDEP
@@ -277,11 +277,18 @@ static inline void lock_release_holdtime(struct held_lock *hlock)
 #endif
 
 /*
- * We keep a global list of all lock classes. The list only grows,
- * never shrinks. The list is only accessed with the lockdep
- * spinlock lock held.
+ * We keep a global list of all lock classes. The list is only accessed with
+ * the lockdep spinlock lock held. The zapped_classes list contains lock
+ * classes that are about to be freed but that may still be accessed by an RCU
+ * reader. free_lock_classes is a list with free elements. These elements are
+ * linked together by the lock_entry member in struct lock_class.
  */
 LIST_HEAD(all_lock_classes);
+static LIST_HEAD(zapped_classes);
+static LIST_HEAD(free_lock_classes);
+static bool initialization_happened;
+static bool rcu_callback_scheduled;
+static struct rcu_head free_zapped_classes_rcu_head;
 
 /*
  * The lockdep classes are in a hash-table as well, for fast lookup:
@@ -735,6 +742,21 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	return true;
 }
 
+/*
+ * Initialize the lock_classes[] array elements and also the free_lock_classes
+ * list.
+ */
+static void init_data_structures(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		list_add_tail(&lock_classes[i].lock_entry, &free_lock_classes);
+		INIT_LIST_HEAD(&lock_classes[i].locks_after);
+		INIT_LIST_HEAD(&lock_classes[i].locks_before);
+	}
+}
+
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
@@ -775,11 +797,14 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 			goto out_unlock_set;
 	}
 
-	/*
-	 * Allocate a new key from the static array, and add it to
-	 * the hash:
-	 */
-	if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
+	/* Allocate a new lock class and add it to the hash. */
+	if (unlikely(!initialization_happened)) {
+		initialization_happened = true;
+		init_data_structures();
+	}
+	class = list_first_entry_or_null(&free_lock_classes, typeof(*class),
+					 lock_entry);
+	if (!class) {
 		if (!debug_locks_off_graph_unlock()) {
 			return NULL;
 		}
@@ -788,13 +813,13 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 		dump_stack();
 		return NULL;
 	}
-	class = lock_classes + nr_lock_classes++;
+	nr_lock_classes++;
 	debug_atomic_inc(nr_unused_locks);
 	class->key = key;
 	class->name = lock->name;
 	class->subclass = subclass;
-	INIT_LIST_HEAD(&class->locks_before);
-	INIT_LIST_HEAD(&class->locks_after);
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
 	class->name_version = count_matching_names(class);
 	/*
 	 * We use RCU's safe list-add method to make
@@ -804,7 +829,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	/*
 	 * Add it to the global list of classes:
 	 */
-	list_add_tail(&class->lock_entry, &all_lock_classes);
+	list_move_tail(&class->lock_entry, &all_lock_classes);
 
 	if (verbose(class)) {
 		graph_unlock();
@@ -1845,6 +1870,13 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	struct lock_list this;
 	int ret;
 
+	if (WARN_ON_ONCE(!hlock_class(prev)->hash_entry.pprev) ||
+	    WARN_ONCE(!hlock_class(next)->hash_entry.pprev,
+		      KERN_INFO "Detected use-after-free of lock class %s\n",
+		      hlock_class(next)->name)) {
+		return 2;
+	}
+
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
 	 * create a circular dependency in the graph. (We do this by
@@ -2236,17 +2268,14 @@ static inline int add_chain_cache(struct task_struct *curr,
 }
 
 /*
- * Look up a dependency chain.
+ * Look up a dependency chain. Must be called with either the graph lock or
+ * the RCU read lock held.
  */
 static inline struct lock_chain *lookup_chain_cache(u64 chain_key)
 {
 	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 
-	/*
-	 * We can walk it lock-free, because entries only get added
-	 * to the hash:
-	 */
 	hlist_for_each_entry_rcu(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 			debug_atomic_inc(chain_lookup_hits);
@@ -3338,6 +3367,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	if (nest_lock && !__lock_is_held(nest_lock, -1))
 		return print_lock_nested_lock_not_held(curr, hlock, ip);
 
+	WARN_ON_ONCE(depth && !hlock_class(hlock - 1)->hash_entry.pprev);
+	WARN_ON_ONCE(!hlock_class(hlock)->hash_entry.pprev);
+
 	if (!validate_chain(curr, lock, hlock, chain_head, chain_key))
 		return 0;
 
@@ -4126,11 +4158,93 @@ void lockdep_reset(void)
 	raw_local_irq_restore(flags);
 }
 
+/* Must be called with the graph lock held. */
+static void remove_class_from_lock_chain(struct lock_chain *chain,
+					 struct lock_class *class)
+{
+	u64 chain_key;
+	int i;
+
+#ifdef CONFIG_PROVE_LOCKING
+	for (i = chain->base; i < chain->base + chain->depth; i++) {
+		if (chain_hlocks[i] != class - lock_classes)
+			continue;
+		if (--chain->depth > 0)
+			memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
+				(chain->base + chain->depth - i) *
+				sizeof(chain_hlocks[0]));
+		/*
+		 * Each lock class occurs at most once in a
+		 * lock chain so once we found a match we can
+		 * break out of this loop.
+		 */
+		goto recalc;
+	}
+	/* Since the chain has not been modified, return. */
+	return;
+
+recalc:
+	/*
+	 * Note: calling hlist_del_rcu() from inside a
+	 * hlist_for_each_entry_rcu() loop is safe.
+	 */
+	if (chain->depth == 0) {
+		/* To do: decrease chain count. See also inc_chains(). */
+		hlist_del_rcu(&chain->entry);
+		return;
+	}
+	chain_key = 0;
+	for (i = chain->base; i < chain->base + chain->depth; i++)
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+	if (chain->chain_key == chain_key)
+		return;
+	hlist_del_rcu(&chain->entry);
+	chain->chain_key = chain_key;
+	hlist_add_head_rcu(&chain->entry, chainhashentry(chain_key));
+#endif
+}
+
+/* Must be called with the graph lock held. */
+static void remove_class_from_lock_chains(struct lock_class *class)
+{
+	struct lock_chain *chain;
+	struct hlist_head *head;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
+		head = chainhash_table + i;
+		hlist_for_each_entry_rcu(chain, head, entry) {
+			remove_class_from_lock_chain(chain, class);
+		}
+	}
+}
+
+/*
+ * Move a class to the zapped_classes list if it is no longer in use. Must be
+ * called with the graph lock held.
+ */
+static void check_free_class(struct lock_class *class)
+{
+	/*
+	 * If the list_del_rcu(&entry->entry) call made both lock order lists
+	 * empty, remove @class from the all_lock_classes list and add it to
+	 * the zapped_classes list.
+	 */
+	if (class->hash_entry.pprev &&
+	    list_empty(&class->locks_after) &&
+	    list_empty(&class->locks_before)) {
+		list_move_tail(&class->lock_entry, &zapped_classes);
+		hlist_del_rcu(&class->hash_entry);
+		class->hash_entry.pprev = NULL;
+	}
+}
+
 /*
  * Remove all references to a lock class. The caller must hold the graph lock.
  */
 static void zap_class(struct lock_class *class)
 {
+	struct lock_class *links_to;
 	struct lock_list *entry;
 	int i;
 
@@ -4141,14 +4255,30 @@ static void zap_class(struct lock_class *class)
 	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
 		if (entry->class != class && entry->links_to != class)
 			continue;
+		links_to = entry->links_to;
+		WARN_ON_ONCE(entry->class == links_to);
 		list_del_rcu(&entry->entry);
 	}
-	/*
-	 * Unhash the class and remove it from the all_lock_classes list:
-	 */
-	hlist_del_rcu(&class->hash_entry);
-	class->hash_entry.pprev = NULL;
-	list_del(&class->lock_entry);
+	check_free_class(class);
+	WARN_ONCE(class->hash_entry.pprev,
+		  KERN_INFO "%s() failed for class %s\n", __func__,
+		  class->name);
+
+	remove_class_from_lock_chains(class);
+}
+
+static void reinit_class(struct lock_class *class)
+{
+	void *const p = class;
+	const unsigned int offset = offsetof(struct lock_class, key);
+
+	WARN_ON_ONCE(!class->lock_entry.next);
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
+	memset(p + offset, 0, sizeof(*class) - offset);
+	WARN_ON_ONCE(!class->lock_entry.next);
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
 }
 
 static inline int within(const void *addr, void *start, unsigned long size)
@@ -4156,6 +4286,38 @@ static inline int within(const void *addr, void *start, unsigned long size)
 	return addr >= start && addr < start + size;
 }
 
+/*
+ * Free all lock classes that are on the zapped_classes list. Called as an
+ * RCU callback function.
+ */
+static void free_zapped_classes(struct callback_head *ch)
+{
+	struct lock_class *class;
+	unsigned long flags;
+	int locked;
+
+	raw_local_irq_save(flags);
+	locked = graph_lock();
+	rcu_callback_scheduled = false;
+	list_for_each_entry(class, &zapped_classes, lock_entry) {
+		reinit_class(class);
+		nr_lock_classes--;
+	}
+	list_splice_init(&zapped_classes, &free_lock_classes);
+	if (locked)
+		graph_unlock();
+	raw_local_irq_restore(flags);
+}
+
+/* Must be called with the graph lock held. */
+static void schedule_free_zapped_classes(void)
+{
+	if (rcu_callback_scheduled)
+		return;
+	rcu_callback_scheduled = true;
+	call_rcu(&free_zapped_classes_rcu_head, free_zapped_classes);
+}
+
 /*
  * Used in module.c to remove lock classes from memory that is going to be
  * freed; and possibly re-used by other modules.
@@ -4181,29 +4343,26 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
 		hlist_for_each_entry_rcu(class, head, hash_entry) {
-			if (within(class->key, start, size))
-				zap_class(class);
-			else if (within(class->name, start, size))
-				zap_class(class);
+			if (!class->hash_entry.pprev ||
+			    (!within(class->key, start, size) &&
+			     !within(class->name, start, size)))
+				continue;
+			zap_class(class);
 		}
 	}
 
+	schedule_free_zapped_classes();
+
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
 
 	/*
-	 * Wait for any possible iterators from look_up_lock_class() to pass
-	 * before continuing to free the memory they refer to.
-	 *
-	 * sync_sched() is sufficient because the read-side is IRQ disable.
-	 */
-	synchronize_sched();
-
-	/*
-	 * XXX at this point we could return the resources to the pool;
-	 * instead we leak them. We would need to change to bitmap allocators
-	 * instead of the linear allocators we have now.
+	 * Do not wait for concurrent look_up_lock_class() calls. If any such
+	 * concurrent call would return a pointer to one of the lock classes
+	 * freed by this function then that means that there is a race in the
+	 * code that calls look_up_lock_class(), namely concurrently accessing
+	 * and freeing a synchronization object.
 	 */
 }
 
@@ -4249,6 +4408,9 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 		if (class)
 			zap_class(class);
 	}
+
+	schedule_free_zapped_classes();
+
 	/*
 	 * Debug check: in the end all mapped classes should
 	 * be gone.
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v3 18/24] locking/lockdep: Reuse list entries that are no longer in use
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (16 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 17/24] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 19/24] locking/lockdep: Check data structure consistency Bart Van Assche
                   ` (6 subsequent siblings)
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Instead of abandoning elements of list_entries[] that are no longer in
use, make alloc_list_entry() reuse array elements that have been freed.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ce8abd268727..6f338f852f7e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -130,6 +130,8 @@ static inline int debug_locks_off_graph_unlock(void)
 
 unsigned long nr_list_entries;
 static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
+static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
+static DECLARE_BITMAP(list_entries_being_freed, MAX_LOCKDEP_ENTRIES);
 
 /*
  * All data structures here are protected by the global debug_lock.
@@ -870,7 +872,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
  */
 static struct lock_list *alloc_list_entry(void)
 {
-	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
+	int idx = find_first_zero_bit(list_entries_in_use,
+				      ARRAY_SIZE(list_entries));
+
+	if (idx >= ARRAY_SIZE(list_entries)) {
 		if (!debug_locks_off_graph_unlock())
 			return NULL;
 
@@ -878,7 +883,8 @@ static struct lock_list *alloc_list_entry(void)
 		dump_stack();
 		return NULL;
 	}
-	return list_entries + nr_list_entries++;
+	__set_bit(idx, list_entries_in_use);
+	return list_entries + idx;
 }
 
 /*
@@ -983,7 +989,7 @@ static inline void mark_lock_accessed(struct lock_list *lock,
 	unsigned long nr;
 
 	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
 	lock->parent = parent;
 	lock->class->dep_gen_id = lockdep_dependency_gen_id;
 }
@@ -993,7 +999,7 @@ static inline unsigned long lock_accessed(struct lock_list *lock)
 	unsigned long nr;
 
 	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
 	return lock->class->dep_gen_id == lockdep_dependency_gen_id;
 }
 
@@ -4252,9 +4258,12 @@ static void zap_class(struct lock_class *class)
 	 * Remove all dependencies this lock is
 	 * involved in:
 	 */
-	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+	for_each_set_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
+		entry = list_entries + i;
 		if (entry->class != class && entry->links_to != class)
 			continue;
+		if (__test_and_set_bit(i, list_entries_being_freed))
+			continue;
 		links_to = entry->links_to;
 		WARN_ON_ONCE(entry->class == links_to);
 		list_del_rcu(&entry->entry);
@@ -4287,8 +4296,9 @@ static inline int within(const void *addr, void *start, unsigned long size)
 }
 
 /*
- * Free all lock classes that are on the zapped_classes list. Called as an
- * RCU callback function.
+ * Free all lock classes that are on the zapped_classes list and also all list
+ * entries that have been marked as being freed. Called as an RCU callback
+ * function.
  */
 static void free_zapped_classes(struct callback_head *ch)
 {
@@ -4304,6 +4314,9 @@ static void free_zapped_classes(struct callback_head *ch)
 		nr_lock_classes--;
 	}
 	list_splice_init(&zapped_classes, &free_lock_classes);
+	bitmap_andnot(list_entries_in_use, list_entries_in_use,
+		      list_entries_being_freed, ARRAY_SIZE(list_entries));
+	bitmap_clear(list_entries_being_freed, 0, ARRAY_SIZE(list_entries));
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v3 19/24] locking/lockdep: Check data structure consistency
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (17 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 18/24] locking/lockdep: Reuse list entries " Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 20/24] locking/lockdep: Introduce __lockdep_free_key_range() Bart Van Assche
                   ` (5 subsequent siblings)
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Debugging lockdep data structure inconsistencies is challenging. Add
code, disabled by default, that verifies data structure consistency at
runtime.
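
The checks are guarded by a boolean that defaults to false, so enabling
them is a one-line change to lockdep.c (or a write from a debugger), e.g.:

	static bool check_data_structure_consistency = true;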

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 146 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 6f338f852f7e..78f14c151407 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -72,6 +72,8 @@ module_param(lock_stat, int, 0644);
 #define lock_stat 0
 #endif
 
+static bool check_data_structure_consistency;
+
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
  *               class/list/hash allocators.
@@ -744,6 +746,148 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	return true;
 }
 
+/* Check whether element @e occurs in list @h */
+static bool in_list(struct list_head *e, struct list_head *h)
+{
+	struct list_head *f;
+
+	list_for_each(f, h) {
+		if (e == f)
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Check whether entry @e occurs in any of the locks_after or locks_before
+ * lists.
+ */
+static bool in_any_class_list(struct list_head *e)
+{
+	struct lock_class *class;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		class = &lock_classes[i];
+		if (in_list(e, &class->locks_after) ||
+		    in_list(e, &class->locks_before))
+			return true;
+	}
+	return false;
+}
+
+static bool class_lock_list_valid(struct lock_class *c, struct list_head *h)
+{
+	struct lock_list *e;
+
+	list_for_each_entry(e, h, entry) {
+		if (e->links_to != c) {
+			printk(KERN_INFO "class %s: mismatch for lock entry %ld; class %s <> %s",
+			       c->name ? : "(?)",
+			       (unsigned long)(e - list_entries),
+			       e->links_to && e->links_to->name ?
+			       e->links_to->name : "(?)",
+			       e->class && e->class->name ? e->class->name :
+			       "(?)");
+			return false;
+		}
+	}
+	return true;
+}
+
+static u16 chain_hlocks[];
+
+static bool check_lock_chain_key(struct lock_chain *chain)
+{
+#ifdef CONFIG_PROVE_LOCKING
+	u64 chain_key = 0;
+	int i;
+
+	for (i = chain->base; i < chain->base + chain->depth; i++)
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+	/*
+	 * The 'unsigned long long' casts avoid that a compiler warning
+	 * is reported when building tools/lib/lockdep.
+	 */
+	if (chain->chain_key != chain_key)
+		printk(KERN_INFO "chain %lld: key %#llx <> %#llx\n",
+		       (unsigned long long)(chain - lock_chains),
+		       (unsigned long long)chain->chain_key,
+		       (unsigned long long)chain_key);
+	return chain->chain_key == chain_key;
+#else
+	return true;
+#endif
+}
+
+static bool check_data_structures(void)
+{
+	struct lock_class *class;
+	struct lock_chain *chain;
+	struct hlist_head *head;
+	struct lock_list *e;
+	int i;
+
+	/*
+	 * Check whether all list entries that are in use occur in a class
+	 * lock list.
+	 */
+	for_each_set_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
+		if (test_bit(i, list_entries_being_freed))
+			continue;
+		e = list_entries + i;
+		if (!in_any_class_list(&e->entry)) {
+			printk(KERN_INFO "list entry %d is not in any class list; class %s <> %s\n",
+			       (unsigned int)(e - list_entries),
+			       e->class->name ? : "(?)",
+			       e->links_to->name ? : "(?)");
+			return false;
+		}
+	}
+
+	/*
+	 * Check whether all list entries that are not in use do not occur in
+	 * a class lock list.
+	 */
+	for_each_clear_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
+		e = list_entries + i;
+		if (WARN_ON_ONCE(test_bit(i, list_entries_being_freed)))
+			return false;
+		if (in_any_class_list(&e->entry)) {
+			printk(KERN_INFO "list entry %d occurs in a class list; class %s <> %s\n",
+			       (unsigned int)(e - list_entries),
+			       e->class && e->class->name ? e->class->name :
+			       "(?)",
+			       e->links_to && e->links_to->name ?
+			       e->links_to->name : "(?)");
+			return false;
+		}
+	}
+
+	/* Check whether all classes have valid lock lists. */
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		class = &lock_classes[i];
+		if (!class->locks_before.next)
+			continue;
+		if (!class_lock_list_valid(class, &class->locks_before))
+			return false;
+		if (!class_lock_list_valid(class, &class->locks_after))
+			return false;
+	}
+
+	/* Check the chain_key of all lock chains. */
+	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
+		head = chainhash_table + i;
+		hlist_for_each_entry_rcu(chain, head, entry) {
+			if (!check_lock_chain_key(chain))
+				return false;
+		}
+	}
+
+	return true;
+}
+
 /*
  * Initialize the lock_classes[] array elements and also the free_lock_classes
  * list.
@@ -4309,6 +4453,8 @@ static void free_zapped_classes(struct callback_head *ch)
 	raw_local_irq_save(flags);
 	locked = graph_lock();
 	rcu_callback_scheduled = false;
+	if (check_data_structure_consistency)
+		WARN_ON_ONCE(!check_data_structures());
 	list_for_each_entry(class, &zapped_classes, lock_entry) {
 		reinit_class(class);
 		nr_lock_classes--;
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v3 20/24] locking/lockdep: Introduce __lockdep_free_key_range()
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (18 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 19/24] locking/lockdep: Check data structure consistency Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 21/24] locking/lockdep: Verify whether lock objects are small enough to be used as class keys Bart Van Assche
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

This patch does not change any functionality but makes the next patch
in this series easier to read.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 78f14c151407..8c69516b1283 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4478,27 +4478,17 @@ static void schedule_free_zapped_classes(void)
 }
 
 /*
- * Used in module.c to remove lock classes from memory that is going to be
- * freed; and possibly re-used by other modules.
- *
- * We will have had one sync_sched() before getting here, so we're guaranteed
- * nobody will look up these exact classes -- they're properly dead but still
- * allocated.
+ * Remove all lock classes from the class hash table and from the
+ * all_lock_classes list whose key or name is in the address range [start,
+ * start + size). Move these lock classes to the zapped_classes list. Must
+ * be called with the graph lock held.
  */
-void lockdep_free_key_range(void *start, unsigned long size)
+static void __lockdep_free_key_range(void *start, unsigned long size)
 {
 	struct lock_class *class;
 	struct hlist_head *head;
-	unsigned long flags;
 	int i;
-	int locked;
-
-	raw_local_irq_save(flags);
-	locked = graph_lock();
 
-	/*
-	 * Unhash all classes that were created by this module:
-	 */
 	for (i = 0; i < CLASSHASH_SIZE; i++) {
 		head = classhash_table + i;
 		hlist_for_each_entry_rcu(class, head, hash_entry) {
@@ -4511,7 +4501,24 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	}
 
 	schedule_free_zapped_classes();
+}
 
+/*
+ * Used in module.c to remove lock classes from memory that is going to be
+ * freed; and possibly re-used by other modules.
+ *
+ * We will have had one sync_sched() before getting here, so we're guaranteed
+ * nobody will look up these exact classes -- they're properly dead but still
+ * allocated.
+ */
+void lockdep_free_key_range(void *start, unsigned long size)
+{
+	unsigned long flags;
+	int locked;
+
+	raw_local_irq_save(flags);
+	locked = graph_lock();
+	__lockdep_free_key_range(start, size);
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v3 21/24] locking/lockdep: Verify whether lock objects are small enough to be used as class keys
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (19 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 20/24] locking/lockdep: Introduce __lockdep_free_key_range() Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 22/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (3 subsequent siblings)
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 8c69516b1283..a47357d3dca7 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -727,6 +727,15 @@ static bool assign_lock_key(struct lockdep_map *lock)
 {
 	unsigned long can_addr, addr = (unsigned long)lock;
 
+	/*
+	 * lockdep_free_key_range() assumes that struct lock_class_key
+	 * objects do not overlap. Since we use the address of lock
+	 * objects as class key for static objects, check whether the
+	 * size of lock_class_key objects does not exceed the size of
+	 * the smallest lock object.
+	 */
+	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
+
 	if (__is_kernel_percpu_address(addr, &can_addr))
 		lock->key = (void *)can_addr;
 	else if (__is_module_percpu_address(addr, &can_addr))
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v3 22/24] locking/lockdep: Add support for dynamic keys
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (20 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 21/24] locking/lockdep: Verify whether lock objects are small enough to be used as class keys Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 23/24] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
                   ` (2 subsequent siblings)
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

A shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. That forces certain lock objects
to share lock keys. Since lock dependency analysis groups lock objects
per key, sharing lock keys can cause false positive lockdep reports.
Make it possible to avoid such false positive reports by allowing lock
keys to be allocated dynamically. Require that dynamically allocated
lock keys are registered before use by calling lockdep_register_key().
Complain about attempts to register the same lock key pointer twice
without calling lockdep_unregister_key() between successive
registration calls.

The purpose of the new lock_keys_hash[] data structure that keeps
track of all dynamic keys is twofold:
- Verify whether the lockdep_register_key() and lockdep_unregister_key()
  functions are used correctly.
- Prevent lockdep_init_map() from complaining when it encounters a
  dynamically allocated key.
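
For a dynamically allocated object that embeds its own key, usage would
look roughly as follows (struct foo and its helpers are made up for
illustration; only lockdep_register_key(), lockdep_unregister_key() and
lockdep_set_class() are real interfaces):

	struct foo {
		spinlock_t lock;
		struct lock_class_key key;	/* dynamically allocated key */
		struct rcu_head rcu;
	};

	struct foo *foo_alloc(void)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

		if (!f)
			return NULL;
		lockdep_register_key(&f->key);
		spin_lock_init(&f->lock);
		lockdep_set_class(&f->lock, &f->key);
		return f;
	}

	void foo_free(struct foo *f)
	{
		lockdep_unregister_key(&f->key);
		/* The key memory may only be freed after an RCU grace period. */
		kfree_rcu(f, rcu);
	}

Patch 23 in this series applies the same pattern to workqueues.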

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |  13 ++++-
 kernel/locking/lockdep.c | 115 ++++++++++++++++++++++++++++++++++++---
 2 files changed, 117 insertions(+), 11 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 72b4d5bb409f..52853baa591d 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -46,15 +46,19 @@ extern int lock_stat;
 #define NR_LOCKDEP_CACHING_CLASSES	2
 
 /*
- * Lock-classes are keyed via unique addresses, by embedding the
- * lockclass-key into the kernel (or module) .data section. (For
- * static locks we use the lock address itself as the key.)
+ * A lockdep key is associated with each lock object. For static locks we use
+ * the lock address itself as the key. Dynamically allocated lock objects can
+ * have a statically or dynamically allocated key. Dynamically allocated lock
+ * keys must be registered before being used and must be unregistered before
+ * the key memory is freed.
  */
 struct lockdep_subclass_key {
 	char __one_byte;
 } __attribute__ ((__packed__));
 
+/* hash_entry is used to keep track of dynamically allocated keys. */
 struct lock_class_key {
+	struct hlist_node		hash_entry;
 	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
 };
 
@@ -274,6 +278,9 @@ extern asmlinkage void lockdep_sys_exit(void);
 extern void lockdep_off(void);
 extern void lockdep_on(void);
 
+extern void lockdep_register_key(struct lock_class_key *key);
+extern void lockdep_unregister_key(struct lock_class_key *key);
+
 /*
  * These methods are used by specific locking variants (spinlocks,
  * rwlocks, mutexes and rwsems) to pass init/acquire/release events
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a47357d3dca7..a84c9ef13afb 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -141,6 +141,9 @@ static DECLARE_BITMAP(list_entries_being_freed, MAX_LOCKDEP_ENTRIES);
  * nr_lock_classes is the number of elements of lock_classes[] that is
  * in use.
  */
+#define KEYHASH_BITS		(MAX_LOCKDEP_KEYS_BITS - 1)
+#define KEYHASH_SIZE		(1UL << KEYHASH_BITS)
+static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
 unsigned long nr_lock_classes;
 #ifndef CONFIG_DEBUG_LOCKDEP
 static
@@ -610,7 +613,7 @@ static int very_verbose(struct lock_class *class)
  * Is this the address of a static object:
  */
 #ifdef __KERNEL__
-static int static_obj(void *obj)
+static int static_obj(const void *obj)
 {
 	unsigned long start = (unsigned long) &_stext,
 		      end   = (unsigned long) &_end,
@@ -912,6 +915,68 @@ static void init_data_structures(void)
 	}
 }
 
+static inline struct hlist_head *keyhashentry(const struct lock_class_key *key)
+{
+	unsigned long hash = hash_long((uintptr_t)key, KEYHASH_BITS);
+
+	return lock_keys_hash + hash;
+}
+
+/*
+ * Register a dynamically allocated key.
+ */
+void lockdep_register_key(struct lock_class_key *key)
+{
+	struct hlist_head *hash_head;
+	struct lock_class_key *k;
+	unsigned long flags;
+	int locked;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return;
+	hash_head = keyhashentry(key);
+
+	raw_local_irq_save(flags);
+	locked = graph_lock();
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (WARN_ON_ONCE(k == key))
+			goto out_unlock;
+	}
+	hlist_add_head_rcu(&key->hash_entry, hash_head);
+out_unlock:
+	if (locked)
+		graph_unlock();
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lockdep_register_key);
+
+/*
+ * Check whether a key has been registered as a dynamic key. Must not be called
+ * from interrupt context.
+ */
+static bool is_dynamic_key(const struct lock_class_key *key)
+{
+	struct hlist_head *hash_head;
+	struct lock_class_key *k;
+	bool found = false;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return false;
+
+	hash_head = keyhashentry(key);
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (k == key) {
+			found = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return found;
+}
+
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
@@ -933,7 +998,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	if (!lock->key) {
 		if (!assign_lock_key(lock))
 			return NULL;
-	} else if (!static_obj(lock->key)) {
+	} else if (!static_obj(lock->key) && !is_dynamic_key(lock->key)) {
 		return NULL;
 	}
 
@@ -3310,13 +3375,13 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,
 	if (DEBUG_LOCKS_WARN_ON(!key))
 		return;
 	/*
-	 * Sanity check, the lock-class key must be persistent:
+	 * Sanity check, the lock-class key must either have been allocated
+	 * statically or must have been registered as a dynamic key.
 	 */
-	if (!static_obj(key)) {
-		printk("BUG: key %px not in .data!\n", key);
-		/*
-		 * What it says above ^^^^^, I suggest you read it.
-		 */
+	if (!static_obj(key) && !is_dynamic_key(key)) {
+		if (debug_locks)
+			printk(KERN_INFO "BUG: key %px has not been registered!\n",
+			       key);
 		DEBUG_LOCKS_WARN_ON(1);
 		return;
 	}
@@ -4541,6 +4606,40 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	 */
 }
 
+
+/*
+ * Unregister a dynamically allocated key. Must not be called from interrupt
+ * context. The caller must ensure that freeing @key only happens after an RCU
+ * grace period.
+ */
+void lockdep_unregister_key(struct lock_class_key *key)
+{
+	struct hlist_head *hash_head = keyhashentry(key);
+	struct lock_class_key *k;
+	unsigned long flags;
+	bool found = false;
+	int locked;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return;
+
+	raw_local_irq_save(flags);
+	locked = graph_lock();
+	__lockdep_free_key_range(key, 1);
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (k == key) {
+			hlist_del_rcu(&k->hash_entry);
+			found = true;
+			break;
+		}
+	}
+	WARN_ON_ONCE(!found);
+	if (locked)
+		graph_unlock();
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lockdep_unregister_key);
+
 /*
  * Check whether any element of the @lock->class_cache[] array refers to a
  * registered lock class. The caller must hold either the graph lock or the
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v3 23/24] kernel/workqueue: Use dynamic lockdep keys for workqueues
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (21 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 22/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07  1:11 ` [PATCH v3 24/24] lockdep tests: Test dynamic key registration Bart Van Assche
  2018-12-07 12:23 ` [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Peter Zijlstra
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Will Deacon

Commit 87915adc3f0a ("workqueue: re-add lockdep dependencies for flushing")
improved deadlock checking in the workqueue implementation. Unfortunately
that patch also introduced a few false positive lockdep complaints. This
patch suppresses these false positives by allocating the workqueue mutex
lockdep key dynamically. An example of a false positive lockdep complaint
suppressed by this patch can be found below. The root cause of that
complaint is that the direct I/O code can call alloc_workqueue() from
inside a work item created by another alloc_workqueue() call and that both
workqueues share the same lockdep key. This patch prevents that complaint
from being triggered by guaranteeing that a unique lockdep key is
associated with each workqueue mutex.

======================================================
WARNING: possible circular locking dependency detected
4.19.0-dbg+ #1 Not tainted
------------------------------------------------------
fio/4129 is trying to acquire lock:
00000000a01cfe1a ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: flush_workqueue+0xd0/0x970

but task is already holding lock:
00000000a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&sb->s_type->i_mutex_key#14){+.+.}:
       down_write+0x3d/0x80
       __generic_file_fsync+0x77/0xf0
       ext4_sync_file+0x3c9/0x780
       vfs_fsync_range+0x66/0x100
       dio_complete+0x2f5/0x360
       dio_aio_complete_work+0x1c/0x20
       process_one_work+0x481/0x9f0
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #1 ((work_completion)(&dio->complete_work)){+.+.}:
       process_one_work+0x447/0x9f0
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #0 ((wq_completion)"dio/%s"sb->s_id){+.+.}:
       lock_acquire+0xc5/0x200
       flush_workqueue+0xf3/0x970
       drain_workqueue+0xec/0x220
       destroy_workqueue+0x23/0x350
       sb_init_dio_done_wq+0x6a/0x80
       do_blockdev_direct_IO+0x1f33/0x4be0
       __blockdev_direct_IO+0x79/0x86
       ext4_direct_IO+0x5df/0xbb0
       generic_file_direct_write+0x119/0x220
       __generic_file_write_iter+0x131/0x2d0
       ext4_file_write_iter+0x3fa/0x710
       aio_write+0x235/0x330
       io_submit_one+0x510/0xeb0
       __x64_sys_io_submit+0x122/0x340
       do_syscall_64+0x71/0x220
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  (wq_completion)"dio/%s"sb->s_id --> (work_completion)(&dio->complete_work) --> &sb->s_type->i_mutex_key#14

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sb->s_type->i_mutex_key#14);
                               lock((work_completion)(&dio->complete_work));
                               lock(&sb->s_type->i_mutex_key#14);
  lock((wq_completion)"dio/%s"sb->s_id);

 *** DEADLOCK ***

1 lock held by fio/4129:
 #0: 00000000a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

stack backtrace:
CPU: 3 PID: 4129 Comm: fio Not tainted 4.19.0-dbg+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
Call Trace:
 dump_stack+0x86/0xc5
 print_circular_bug.isra.32+0x20a/0x218
 __lock_acquire+0x1c68/0x1cf0
 lock_acquire+0xc5/0x200
 flush_workqueue+0xf3/0x970
 drain_workqueue+0xec/0x220
 destroy_workqueue+0x23/0x350
 sb_init_dio_done_wq+0x6a/0x80
 do_blockdev_direct_IO+0x1f33/0x4be0
 __blockdev_direct_IO+0x79/0x86
 ext4_direct_IO+0x5df/0xbb0
 generic_file_direct_write+0x119/0x220
 __generic_file_write_iter+0x131/0x2d0
 ext4_file_write_iter+0x3fa/0x710
 aio_write+0x235/0x330
 io_submit_one+0x510/0xeb0
 __x64_sys_io_submit+0x122/0x340
 do_syscall_64+0x71/0x220
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/workqueue.h | 28 +++---------------
 kernel/workqueue.c        | 60 +++++++++++++++++++++++++++++++++------
 2 files changed, 55 insertions(+), 33 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 60d673e15632..d9a1a480e920 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -390,43 +390,23 @@ extern struct workqueue_struct *system_freezable_wq;
 extern struct workqueue_struct *system_power_efficient_wq;
 extern struct workqueue_struct *system_freezable_power_efficient_wq;
 
-extern struct workqueue_struct *
-__alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
-	struct lock_class_key *key, const char *lock_name, ...) __printf(1, 6);
-
 /**
  * alloc_workqueue - allocate a workqueue
  * @fmt: printf format for the name of the workqueue
  * @flags: WQ_* flags
  * @max_active: max in-flight work items, 0 for default
- * @args...: args for @fmt
+ * remaining args: args for @fmt
  *
  * Allocate a workqueue with the specified parameters.  For detailed
  * information on WQ_* flags, please refer to
  * Documentation/core-api/workqueue.rst.
  *
- * The __lock_name macro dance is to guarantee that single lock_class_key
- * doesn't end up with different namesm, which isn't allowed by lockdep.
- *
  * RETURNS:
  * Pointer to the allocated workqueue on success, %NULL on failure.
  */
-#ifdef CONFIG_LOCKDEP
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-({									\
-	static struct lock_class_key __key;				\
-	const char *__lock_name;					\
-									\
-	__lock_name = "(wq_completion)"#fmt#args;			\
-									\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      &__key, __lock_name, ##args);		\
-})
-#else
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      NULL, NULL, ##args)
-#endif
+struct workqueue_struct *alloc_workqueue(const char *fmt,
+					 unsigned int flags,
+					 int max_active, ...);
 
 /**
  * alloc_ordered_workqueue - allocate an ordered workqueue
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0280deac392e..82e155f764b7 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -259,6 +259,8 @@ struct workqueue_struct {
 	struct wq_device	*wq_dev;	/* I: for sysfs interface */
 #endif
 #ifdef CONFIG_LOCKDEP
+	char			*lock_name;
+	struct lock_class_key	key;
 	struct lockdep_map	lockdep_map;
 #endif
 	char			name[WQ_NAME_LEN]; /* I: workqueue name */
@@ -3314,11 +3316,50 @@ static int init_worker_pool(struct worker_pool *pool)
 	return 0;
 }
 
+#ifdef CONFIG_LOCKDEP
+static void wq_init_lockdep(struct workqueue_struct *wq)
+{
+	char *lock_name;
+
+	lockdep_register_key(&wq->key);
+	lock_name = kasprintf(GFP_KERNEL, "%s%s", "(wq_completion)", wq->name);
+	if (!lock_name)
+		lock_name = wq->name;
+	lockdep_init_map(&wq->lockdep_map, lock_name, &wq->key, 0);
+}
+
+static void wq_unregister_lockdep(struct workqueue_struct *wq)
+{
+	lockdep_reset_lock(&wq->lockdep_map);
+	lockdep_unregister_key(&wq->key);
+}
+
+static void wq_free_lockdep(struct workqueue_struct *wq)
+{
+	if (wq->lock_name != wq->name)
+		kfree(wq->lock_name);
+}
+#else
+static void wq_init_lockdep(struct workqueue_struct *wq)
+{
+}
+
+static void wq_unregister_lockdep(struct workqueue_struct *wq)
+{
+}
+
+static void wq_free_lockdep(struct workqueue_struct *wq)
+{
+}
+#endif
+
 static void rcu_free_wq(struct rcu_head *rcu)
 {
 	struct workqueue_struct *wq =
 		container_of(rcu, struct workqueue_struct, rcu);
 
+	wq_free_lockdep(wq);
+
 	if (!(wq->flags & WQ_UNBOUND))
 		free_percpu(wq->cpu_pwqs);
 	else
@@ -3509,8 +3550,10 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
 	 * If we're the last pwq going away, @wq is already dead and no one
 	 * is gonna access it anymore.  Schedule RCU free.
 	 */
-	if (is_last)
+	if (is_last) {
+		wq_unregister_lockdep(wq);
 		call_rcu_sched(&wq->rcu, rcu_free_wq);
+	}
 }
 
 /**
@@ -4044,11 +4087,9 @@ static int init_rescuer(struct workqueue_struct *wq)
 	return 0;
 }
 
-struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
-					       unsigned int flags,
-					       int max_active,
-					       struct lock_class_key *key,
-					       const char *lock_name, ...)
+struct workqueue_struct *alloc_workqueue(const char *fmt,
+					 unsigned int flags,
+					 int max_active, ...)
 {
 	size_t tbl_size = 0;
 	va_list args;
@@ -4083,7 +4124,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 			goto err_free_wq;
 	}
 
-	va_start(args, lock_name);
+	va_start(args, max_active);
 	vsnprintf(wq->name, sizeof(wq->name), fmt, args);
 	va_end(args);
 
@@ -4100,7 +4141,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	INIT_LIST_HEAD(&wq->flusher_overflow);
 	INIT_LIST_HEAD(&wq->maydays);
 
-	lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
+	wq_init_lockdep(wq);
 	INIT_LIST_HEAD(&wq->list);
 
 	if (alloc_and_link_pwqs(wq) < 0)
@@ -4138,7 +4179,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	destroy_workqueue(wq);
 	return NULL;
 }
-EXPORT_SYMBOL_GPL(__alloc_workqueue_key);
+EXPORT_SYMBOL_GPL(alloc_workqueue);
 
 /**
  * destroy_workqueue - safely terminate a workqueue
@@ -4191,6 +4232,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
 		kthread_stop(wq->rescuer->task);
 
 	if (!(wq->flags & WQ_UNBOUND)) {
+		wq_unregister_lockdep(wq);
 		/*
 		 * The base ref is never dropped on per-cpu pwqs.  Directly
 		 * schedule RCU free.
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v3 24/24] lockdep tests: Test dynamic key registration
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (22 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 23/24] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
@ 2018-12-07  1:11 ` Bart Van Assche
  2018-12-07 12:23 ` [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Peter Zijlstra
  24 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07  1:11 UTC (permalink / raw)
  To: mingo
  Cc: peterz, tj, longman, johannes.berg, linux-kernel,
	Bart Van Assche, Johannes Berg

Make sure that the lockdep_register_key() and lockdep_unregister_key()
code is tested when running the lockdep tests.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/include/liblockdep/common.h |  2 ++
 tools/lib/lockdep/include/liblockdep/mutex.h  | 11 ++++++-----
 tools/lib/lockdep/tests/ABBA.c                |  9 +++++++++
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index d640a9761f09..a81d91d4fc78 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -45,6 +45,8 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 void lock_release(struct lockdep_map *lock, int nested,
 			unsigned long ip);
 void lockdep_reset_lock(struct lockdep_map *lock);
+void lockdep_register_key(struct lock_class_key *key);
+void lockdep_unregister_key(struct lock_class_key *key);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index 2073d4e1f2f0..783dd0df06f9 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -7,6 +7,7 @@
 
 struct liblockdep_pthread_mutex {
 	pthread_mutex_t mutex;
+	struct lock_class_key key;
 	struct lockdep_map dep_map;
 };
 
@@ -27,11 +28,10 @@ static inline int __mutex_init(liblockdep_pthread_mutex_t *lock,
 	return pthread_mutex_init(&lock->mutex, __mutexattr);
 }
 
-#define liblockdep_pthread_mutex_init(mutex, mutexattr)		\
-({								\
-	static struct lock_class_key __key;			\
-								\
-	__mutex_init((mutex), #mutex, &__key, (mutexattr));	\
+#define liblockdep_pthread_mutex_init(mutex, mutexattr)			\
+({									\
+	lockdep_register_key(&(mutex)->key);				\
+	__mutex_init((mutex), #mutex, &(mutex)->key, (mutexattr));	\
 })
 
 static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock)
@@ -55,6 +55,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
 	lockdep_reset_lock(&lock->dep_map);
+	lockdep_unregister_key(&lock->key);
 	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 623313f54720..543789bc3e37 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -14,4 +14,13 @@ void main(void)
 
 	pthread_mutex_destroy(&b);
 	pthread_mutex_destroy(&a);
+
+	pthread_mutex_init(&a, NULL);
+	pthread_mutex_init(&b, NULL);
+
+	LOCK_UNLOCK_2(a, b);
+	LOCK_UNLOCK_2(b, a);
+
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
-- 
2.20.0.rc2.403.gdbc3b29805-goog


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* Re: [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class
  2018-12-07  1:11 ` [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class Bart Van Assche
@ 2018-12-07 10:21   ` Peter Zijlstra
  2018-12-07 14:50     ` Waiman Long
  0 siblings, 1 reply; 49+ messages in thread
From: Peter Zijlstra @ 2018-12-07 10:21 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Johannes Berg

On Thu, Dec 06, 2018 at 05:11:40PM -0800, Bart Van Assche wrote:
> The next patch in this series uses the class name in code that
> detects calls to lock_acquire() while a class key is being freed. Hence
> retain the class name for lock classes that are being freed.

From reading the discussion on v2, you meant to say: 'uses the class
name pointer', right? You're not actually ever going to dereference it,
since that would be a UaF.

> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Waiman Long <longman@redhat.com>
> Cc: Johannes Berg <johannes@sipsolutions.net>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  kernel/locking/lockdep.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index ecd92969674c..92bdb187987f 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -4147,10 +4147,8 @@ static void zap_class(struct lock_class *class)
>  	 * Unhash the class and remove it from the all_lock_classes list:
>  	 */
>  	hlist_del_rcu(&class->hash_entry);
> +	class->hash_entry.pprev = NULL;
>  	list_del(&class->lock_entry);
> -
> -	RCU_INIT_POINTER(class->key, NULL);
> -	RCU_INIT_POINTER(class->name, NULL);
>  }
>  
>  static inline int within(const void *addr, void *start, unsigned long size)
> -- 
> 2.20.0.rc2.403.gdbc3b29805-goog
> 

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v3 17/24] locking/lockdep: Free lock classes that are no longer in use
  2018-12-07  1:11 ` [PATCH v3 17/24] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
@ 2018-12-07 12:14   ` Peter Zijlstra
  2018-12-07 16:27     ` Bart Van Assche
  0 siblings, 1 reply; 49+ messages in thread
From: Peter Zijlstra @ 2018-12-07 12:14 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Johannes Berg

On Thu, Dec 06, 2018 at 05:11:41PM -0800, Bart Van Assche wrote:
> +	if (WARN_ON_ONCE(!hlock_class(prev)->hash_entry.pprev) ||
> +	    WARN_ONCE(!hlock_class(next)->hash_entry.pprev,
> +		      KERN_INFO "Detected use-after-free of lock class %s\n",
> +		      hlock_class(next)->name)) {
> +		return 2;
> +	}

Ah, this is that UaF on ->name, but it only happens when there's already
been a UaF, so that's fine I suppose. Still, a note on that in the earlier
changelog would've been nice.

> +/* Must be called with the graph lock held. */
> +static void remove_class_from_lock_chain(struct lock_chain *chain,
> +					 struct lock_class *class)
> +{
> +	u64 chain_key;
> +	int i;
> +
> +#ifdef CONFIG_PROVE_LOCKING
> +	for (i = chain->base; i < chain->base + chain->depth; i++) {
> +		if (chain_hlocks[i] != class - lock_classes)
> +			continue;
> +		if (--chain->depth > 0)
 {
> +			memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
> +				(chain->base + chain->depth - i) *
> +				sizeof(chain_hlocks[0]));
 }

Also, a comment noting that we 'leak' chain_hlocks[] entries would be
appropriate here.

If Waiman cares, it is possible to reclaim them by extending the above
memmove() to cover the _entire_ tail of the array and then going around
and fixing up all the chain->base 'pointers' that are in the moved part.
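
For illustration, roughly (an untested sketch against the bookkeeping that
already exists in lockdep.c, i.e. nr_chain_hlocks, lock_chains[] and
nr_lock_chains; declarations omitted):

	/*
	 * Close the hole in chain_hlocks[] instead of leaking it, then
	 * fix up every chain that lived above the hole.
	 */
	memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
		(nr_chain_hlocks - i - 1) * sizeof(chain_hlocks[0]));
	nr_chain_hlocks--;
	for (j = 0; j < nr_lock_chains; j++)
		if (lock_chains[j].base > i)
			lock_chains[j].base--;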

> +		/*
> +		 * Each lock class occurs at most once in a
> +		 * lock chain so once we found a match we can
> +		 * break out of this loop.
> +		 */
> +		goto recalc;
> +	}
> +	/* Since the chain has not been modified, return. */
> +	return;
> +
> +recalc:
> +	/*
> +	 * Note: calling hlist_del_rcu() from inside a
> +	 * hlist_for_each_entry_rcu() loop is safe.
> +	 */
> +	if (chain->depth == 0) {
> +		/* To do: decrease chain count. See also inc_chains(). */
> +		hlist_del_rcu(&chain->entry);
> +		return;
> +	}
> +	chain_key = 0;
> +	for (i = chain->base; i < chain->base + chain->depth; i++)
> +		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
> +	if (chain->chain_key == chain_key)
> +		return;
> +	hlist_del_rcu(&chain->entry);
> +	chain->chain_key = chain_key;
> +	hlist_add_head_rcu(&chain->entry, chainhashentry(chain_key));

I think this is broken; while hlist_del_rcu() and hlist_add_head_rcu()
are individually safe against concurrent hlist_for_each_entry_rcu(),
they cannot be used consecutively like this.

The thing is, hlist_del_rcu() preserves the ->next pointer such that
a concurrent hlist_for_each_entry_rcu() that happens to be at that entry
will be able to continue.

But your hlist_add_head_rcu() will overwrite that, so the concurrent
iterator, which started on one list, will continue on another.

There must be an RCU-GP between del_rcu and add_rcu such that the above
situation cannot happen.
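
That is, something along these lines (an untested sketch; chains_to_rehash
is a made-up bitmap, the other names exist in lockdep.c):

	/* Hypothetical bitmap of chains whose re-hash must wait for a GP. */
	static DECLARE_BITMAP(chains_to_rehash, MAX_LOCKDEP_CHAINS);

	/* At zap time, with the graph lock held: unhash only. */
	hlist_del_rcu(&chain->entry);
	chain->chain_key = chain_key;
	__set_bit(chain - lock_chains, chains_to_rehash);

	/* From the RCU callback, after the grace period has elapsed: */
	for_each_set_bit(i, chains_to_rehash, ARRAY_SIZE(lock_chains)) {
		chain = lock_chains + i;
		hlist_add_head_rcu(&chain->entry,
				   chainhashentry(chain->chain_key));
	}
	bitmap_clear(chains_to_rehash, 0, ARRAY_SIZE(lock_chains));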


> +/*
> + * Free all lock classes that are on the zapped_classes list. Called as an
> + * RCU callback function.
> + */
> +static void free_zapped_classes(struct callback_head *ch)
> +{
> +	struct lock_class *class;
> +	unsigned long flags;
> +	int locked;
> +
> +	raw_local_irq_save(flags);
> +	locked = graph_lock();

If this fails, you must not touch any of the state. You don't hold the
lock after all.

> +	rcu_callback_scheduled = false;
> +	list_for_each_entry(class, &zapped_classes, lock_entry) {
> +		reinit_class(class);
> +		nr_lock_classes--;
> +	}
> +	list_splice_init(&zapped_classes, &free_lock_classes);
> +	if (locked)
> +		graph_unlock();
> +	raw_local_irq_restore(flags);
> +}
> +
> +/* Must be called with the graph lock held. */
> +static void schedule_free_zapped_classes(void)
> +{
> +	if (rcu_callback_scheduled)
> +		return;
> +	rcu_callback_scheduled = true;
> +	call_rcu(&free_zapped_classes_rcu_head, free_zapped_classes);
> +}

But I fear this is fundamentally broken too, in exactly the manner I
tried to explain earlier.

(sorry, wide(r) terminal required)

It starts out good:

	CPU0			CPU1				CPU2

	rcu_read_lock()
	// use class
				zap_class()
				schedule_free_zapped_classes();
				  call_rcu();

	rcu_read_lock()

				/* since none of the rcu_read_lock()
				 * sections we started out with are
				 * still active; this is where the
				 * callback _can_ happen */

But then, another user and another removal come in:

	rcu_read_lock();

	// use class						zap_class()
								  list_add(&entry, &zapped_classes)
								schedule_free_zapped_classes()
								  /* no-op, still pending */

				free_zapped_classes()
				  list_splice(&zapped_classes, &free_lock_classes)

				  *whoops* we just splice'd a class
				  onto the free list that is still in
				  use !!


	rcu_read_unlock()


There's two possible solutions:

 - the second schedule_free_zapped_classes() must somehow ensure the
   call_rcu() callback re-queues itself and waits for yet another GP, or

 - you must add classes zapped after the first invocation to a second
   list (a sketch of this option follows below).
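
The second option could look roughly like this (an untested sketch that
reuses the names from the patch; zapped_classes becomes a two-element
array and zap_list is new; the reinit_class() loop is omitted):

	static struct list_head zapped_classes[2] = {
		LIST_HEAD_INIT(zapped_classes[0]),
		LIST_HEAD_INIT(zapped_classes[1]),
	};
	static int zap_list;	/* list that zap_class() currently fills */

	/* check_free_class(), with the graph lock held: */
	list_move_tail(&class->lock_entry, &zapped_classes[zap_list]);

	/* schedule_free_zapped_classes(), with the graph lock held: */
	if (rcu_callback_scheduled)
		return;
	rcu_callback_scheduled = true;
	zap_list ^= 1;	/* classes zapped from now on wait for the next GP */
	call_rcu(&free_zapped_classes_rcu_head, free_zapped_classes);

	/* free_zapped_classes(): only splice the list closed above. */
	list_splice_init(&zapped_classes[zap_list ^ 1], &free_lock_classes);
	rcu_callback_scheduled = false;
	if (!list_empty(&zapped_classes[zap_list]))
		schedule_free_zapped_classes();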



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys
  2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (23 preceding siblings ...)
  2018-12-07  1:11 ` [PATCH v3 24/24] lockdep tests: Test dynamic key registration Bart Van Assche
@ 2018-12-07 12:23 ` Peter Zijlstra
  2018-12-07 16:35   ` Bart Van Assche
  24 siblings, 1 reply; 49+ messages in thread
From: Peter Zijlstra @ 2018-12-07 12:23 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, longman, johannes.berg, linux-kernel


Bart,

I took the first 15 patches for now to shrink the series.

I think the rest wants a little more work.


* Re: [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class
  2018-12-07 10:21   ` Peter Zijlstra
@ 2018-12-07 14:50     ` Waiman Long
  2018-12-07 16:23       ` Bart Van Assche
  0 siblings, 1 reply; 49+ messages in thread
From: Waiman Long @ 2018-12-07 14:50 UTC (permalink / raw)
  To: Peter Zijlstra, Bart Van Assche
  Cc: mingo, tj, johannes.berg, linux-kernel, Johannes Berg

On 12/07/2018 05:21 AM, Peter Zijlstra wrote:
> On Thu, Dec 06, 2018 at 05:11:40PM -0800, Bart Van Assche wrote:
>> The next patch in this series uses the class name in code that
>> detects calls to lock_acquire() while a class key is being freed. Hence
>> retain the class name for lock classes that are being freed.
> From reading the discussion with v2; you meant to say: 'uses the class
> name pointer', right? You're not actually ever going to dereference it,
> since that would be a UaF.
>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Waiman Long <longman@redhat.com>
>> Cc: Johannes Berg <johannes@sipsolutions.net>
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>> ---
>>  kernel/locking/lockdep.c | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
>> index ecd92969674c..92bdb187987f 100644
>> --- a/kernel/locking/lockdep.c
>> +++ b/kernel/locking/lockdep.c
>> @@ -4147,10 +4147,8 @@ static void zap_class(struct lock_class *class)
>>  	 * Unhash the class and remove it from the all_lock_classes list:
>>  	 */
>>  	hlist_del_rcu(&class->hash_entry);
>> +	class->hash_entry.pprev = NULL;
>>  	list_del(&class->lock_entry);
>> -
>> -	RCU_INIT_POINTER(class->key, NULL);
>> -	RCU_INIT_POINTER(class->name, NULL);
>>  }
>>  
>>  static inline int within(const void *addr, void *start, unsigned long size)
>> -- 
>> 2.20.0.rc2.403.gdbc3b29805-goog
>>
I still prefer keeping the clearing statements. Leaving the key behind
should be OK as it is just a value to be compared against. The name is a
different story as it is a pointer that will be used to reference the
bytes it points to unless you save the name in a safe storage area and
change the pointer to point there.

Cheers,
Longman



* Re: [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class
  2018-12-07 14:50     ` Waiman Long
@ 2018-12-07 16:23       ` Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07 16:23 UTC (permalink / raw)
  To: Waiman Long, Peter Zijlstra
  Cc: mingo, tj, johannes.berg, linux-kernel, Johannes Berg

On Fri, 2018-12-07 at 09:50 -0500, Waiman Long wrote:
> On 12/07/2018 05:21 AM, Peter Zijlstra wrote:
> > On Thu, Dec 06, 2018 at 05:11:40PM -0800, Bart Van Assche wrote:
> > > The next patch in this series uses the class name in code that
> > > detects calls to lock_acquire() while a class key is being freed. Hence
> > > retain the class name for lock classes that are being freed.
> > 
> > From reading the discussion with v2; you meant to say: 'uses the class
> > name pointer', right? You're not actually ever going to dereference it,
> > since that would be a UaF.
> > 
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Cc: Waiman Long <longman@redhat.com>
> > > Cc: Johannes Berg <johannes@sipsolutions.net>
> > > Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> > > ---
> > >  kernel/locking/lockdep.c | 4 +---
> > >  1 file changed, 1 insertion(+), 3 deletions(-)
> > > 
> > > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > > index ecd92969674c..92bdb187987f 100644
> > > --- a/kernel/locking/lockdep.c
> > > +++ b/kernel/locking/lockdep.c
> > > @@ -4147,10 +4147,8 @@ static void zap_class(struct lock_class *class)
> > >  	 * Unhash the class and remove it from the all_lock_classes list:
> > >  	 */
> > >  	hlist_del_rcu(&class->hash_entry);
> > > +	class->hash_entry.pprev = NULL;
> > >  	list_del(&class->lock_entry);
> > > -
> > > -	RCU_INIT_POINTER(class->key, NULL);
> > > -	RCU_INIT_POINTER(class->name, NULL);
> > >  }
> > >  
> > >  static inline int within(const void *addr, void *start, unsigned long size)
> > > -- 
> > > 2.20.0.rc2.403.gdbc3b29805-goog
> > > 
> 
> I still prefer keeping the clearing statements. Leaving the key behind
> should be OK as it is just a value to be compared against. The name is a
> different story as it is a pointer that will be used to reference the
> bytes it points to unless you save the name in a safe storage area and
> change the pointer to point there.

I will drop this patch and update later patches accordingly.

Bart.


* Re: [PATCH v3 17/24] locking/lockdep: Free lock classes that are no longer in use
  2018-12-07 12:14   ` Peter Zijlstra
@ 2018-12-07 16:27     ` Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07 16:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Johannes Berg

On Fri, 2018-12-07 at 13:14 +0100, Peter Zijlstra wrote:
> On Thu, Dec 06, 2018 at 05:11:41PM -0800, Bart Van Assche wrote:
> > +	if (WARN_ON_ONCE(!hlock_class(prev)->hash_entry.pprev) ||
> > +	    WARN_ONCE(!hlock_class(next)->hash_entry.pprev,
> > +		      KERN_INFO "Detected use-after-free of lock class %s\n",
> > +		      hlock_class(next)->name)) {
> > +		return 2;
> > +	}
> 
> Ah, this is that UaF on ->name, but it only happens when there's already
> been a UaF, so that's fine I suppose. Still a note on that earlier
> Changelog would've been nice I suppose.

How about reporting the class pointer only as is done elsewhere in the
lockdep code?

> > +/* Must be called with the graph lock held. */
> > +static void remove_class_from_lock_chain(struct lock_chain *chain,
> > +					 struct lock_class *class)
> > +{
> > +	u64 chain_key;
> > +	int i;
> > +
> > +#ifdef CONFIG_PROVE_LOCKING
> > +	for (i = chain->base; i < chain->base + chain->depth; i++) {
> > +		if (chain_hlocks[i] != class - lock_classes)
> > +			continue;
> > +		if (--chain->depth > 0)
> 
>  {
> > +			memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
> > +				(chain->base + chain->depth - i) *
> > +				sizeof(chain_hlocks[0]));
> 
>  }
> 
> Also, I suppose a comment that notes we 'leak' chain_hlocks[]
> entries would be appropriate here.

OK, I will add such a comment.

> If Waiman cares, it is possible to reclaim them by extending the above
> memmove() to cover the _entire_ tail of the array and then going around
> and fixing up all the chain->base 'pointers' that are in the moved part.

Since that change is outside the scope of what I want to achieve, I will
leave this to Waiman.

Bart.


* Re: [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys
  2018-12-07 12:23 ` [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Peter Zijlstra
@ 2018-12-07 16:35   ` Bart Van Assche
  2018-12-08 10:26     ` Peter Zijlstra
  0 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-07 16:35 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, longman, johannes.berg, linux-kernel

On Fri, 2018-12-07 at 13:23 +0100, Peter Zijlstra wrote:
> I took the first 15 patches for now to shrink the series.

Thanks Peter. Since I would like to rebase the remaining patches on top of
what you have already queued: is what you have queued available in a public
git repository? I had a quick look at
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/ but couldn't
find my patches in any of the branches that have been updated in the last 24
hours.

Bart.


* Re: [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys
  2018-12-07 16:35   ` Bart Van Assche
@ 2018-12-08 10:26     ` Peter Zijlstra
  0 siblings, 0 replies; 49+ messages in thread
From: Peter Zijlstra @ 2018-12-08 10:26 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, longman, johannes.berg, linux-kernel

On Fri, Dec 07, 2018 at 08:35:52AM -0800, Bart Van Assche wrote:
> On Fri, 2018-12-07 at 13:23 +0100, Peter Zijlstra wrote:
> > I took the first 15 patches for now to shrink the series.
> 
> Thanks Peter. Since I would like to rebase the remaining patches on top of
> what you have already queued: is what you have queued available in a public
> git repository? I had a quick look at
> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/ but couldn't
> find my patches in any of the branches that have been updated in the last 24
> hours.

I host my quilt tree here:

  git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git

It is a throw-away auto generated git tree. Your patches would be in the
locking/core branch.

If all goes well, they should also show up in tip/locking/core
'soonish'.


* [tip:locking/core] tools/lib/lockdep/tests: Display compiler warning and error messages
  2018-12-07  1:11 ` [PATCH v3 01/24] lockdep tests: Display compiler warning and error messages Bart Van Assche
@ 2018-12-11 15:22   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:22 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: bvanassche, sasha.levin, linux-kernel, johannes, tglx, mingo,
	torvalds, peterz, longman, hpa

Commit-ID:  da087b2229618f78ecea5c203fed8ba2245de636
Gitweb:     https://git.kernel.org/tip/da087b2229618f78ecea5c203fed8ba2245de636
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:25 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:47 +0100

tools/lib/lockdep/tests: Display compiler warning and error messages

If compilation of liblockdep fails, display an error message and exit
immediately. Display compiler warning and error messages that are
generated while building a test. Only run a test if compilation of it
succeeded.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-2-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/lib/lockdep/run_tests.sh | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 2e570a188f16..9f31f84e7fac 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -1,13 +1,17 @@
 #! /bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
-make &> /dev/null
+if ! make >/dev/null; then
+    echo "Building liblockdep failed."
+    echo "FAILED!"
+    exit 1
+fi
 
 for i in `ls tests/*.c`; do
 	testname=$(basename "$i" .c)
-	gcc -o tests/$testname -pthread $i liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &> /dev/null
 	echo -ne "$testname... "
-	if [ $(timeout 1 ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
+		[ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
@@ -19,9 +23,9 @@ done
 
 for i in `ls tests/*.c`; do
 	testname=$(basename "$i" .c)
-	gcc -o tests/$testname -pthread -Iinclude $i &> /dev/null
 	echo -ne "(PRELOAD) $testname... "
-	if [ $(timeout 1 ./lockdep ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+		[ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"


* [tip:locking/core] tools/lib/lockdep/tests: Fix shellcheck warnings
  2018-12-07  1:11 ` [PATCH v3 02/24] lockdep tests: Fix shellcheck warnings Bart Van Assche
@ 2018-12-11 15:23   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:23 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, johannes, longman, sasha.levin, peterz, torvalds,
	linux-kernel, mingo, hpa, bvanassche

Commit-ID:  7e9798871a9186cb831cf693d7ff58085384ccbd
Gitweb:     https://git.kernel.org/tip/7e9798871a9186cb831cf693d7ff58085384ccbd
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:26 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:48 +0100

tools/lib/lockdep/tests: Fix shellcheck warnings

Use find instead of ls to avoid splitting filenames that contain spaces.
Use rm -f instead of if ... then rm ...; fi. This patch addresses all
shellcheck complaints about the run_tests.sh shell script.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-3-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/lib/lockdep/run_tests.sh | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 9f31f84e7fac..253719ee6377 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -7,7 +7,7 @@ if ! make >/dev/null; then
     exit 1
 fi
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "$testname... "
 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
@@ -16,12 +16,10 @@ for i in `ls tests/*.c`; do
 	else
 		echo "FAILED!"
 	fi
-	if [ -f "tests/$testname" ]; then
-		rm tests/$testname
-	fi
+	rm -f "tests/$testname"
 done
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "(PRELOAD) $testname... "
 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
@@ -30,7 +28,5 @@ for i in `ls tests/*.c`; do
 	else
 		echo "FAILED!"
 	fi
-	if [ -f "tests/$testname" ]; then
-		rm tests/$testname
-	fi
+	rm -f "tests/$testname"
 done


* [tip:locking/core] tools/lib/lockdep/tests: Improve testing accuracy
  2018-12-07  1:11 ` [PATCH v3 03/24] lockdep tests: Improve testing accuracy Bart Van Assche
@ 2018-12-11 15:23   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:23 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, sasha.levin, tglx, longman, peterz, linux-kernel,
	bvanassche, torvalds, johannes, hpa

Commit-ID:  5ecb8e94b494af0df8de4ca9b9ef88d87b30a9c1
Gitweb:     https://git.kernel.org/tip/5ecb8e94b494af0df8de4ca9b9ef88d87b30a9c1
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:27 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:49 +0100

tools/lib/lockdep/tests: Improve testing accuracy

Instead of checking whether the tests produced any output, check the
output itself. This patch avoids that e.g. debug output causes the
message "PASSED!" to be reported for failed tests.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-4-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/lib/lockdep/run_tests.sh            | 5 +++--
 tools/lib/lockdep/tests/AA.sh             | 2 ++
 tools/lib/lockdep/tests/ABA.sh            | 2 ++
 tools/lib/lockdep/tests/ABBA.sh           | 2 ++
 tools/lib/lockdep/tests/ABBA_2threads.sh  | 2 ++
 tools/lib/lockdep/tests/ABBCCA.sh         | 2 ++
 tools/lib/lockdep/tests/ABBCCDDA.sh       | 2 ++
 tools/lib/lockdep/tests/ABCABC.sh         | 2 ++
 tools/lib/lockdep/tests/ABCDBCDA.sh       | 2 ++
 tools/lib/lockdep/tests/ABCDBDDA.sh       | 2 ++
 tools/lib/lockdep/tests/WW.sh             | 2 ++
 tools/lib/lockdep/tests/unlock_balance.sh | 2 ++
 12 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 253719ee6377..bc36178329a8 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "$testname... "
 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
-		[ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+		timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
@@ -23,7 +23,8 @@ find tests -name '*.c' | sort | while read -r i; do
 	testname=$(basename "$i" .c)
 	echo -ne "(PRELOAD) $testname... "
 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
-		[ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+		timeout 1 ./lockdep "tests/$testname" 2>&1 |
+		"tests/${testname}.sh"; then
 		echo "PASSED!"
 	else
 		echo "FAILED!"
diff --git a/tools/lib/lockdep/tests/AA.sh b/tools/lib/lockdep/tests/AA.sh
new file mode 100644
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/AA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABA.sh b/tools/lib/lockdep/tests/ABA.sh
new file mode 100644
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABBA.sh b/tools/lib/lockdep/tests/ABBA.sh
new file mode 100644
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBA_2threads.sh b/tools/lib/lockdep/tests/ABBA_2threads.sh
new file mode 100644
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA_2threads.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCA.sh b/tools/lib/lockdep/tests/ABBCCA.sh
new file mode 100644
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.sh b/tools/lib/lockdep/tests/ABBCCDDA.sh
new file mode 100644
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCDDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCABC.sh b/tools/lib/lockdep/tests/ABCABC.sh
new file mode 100644
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCABC.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.sh b/tools/lib/lockdep/tests/ABCDBCDA.sh
new file mode 100644
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCDBCDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.sh b/tools/lib/lockdep/tests/ABCDBDDA.sh
new file mode 100644
index 000000000000..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCDBDDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/WW.sh b/tools/lib/lockdep/tests/WW.sh
new file mode 100644
index 000000000000..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/WW.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/unlock_balance.sh b/tools/lib/lockdep/tests/unlock_balance.sh
new file mode 100644
index 000000000000..c6e3952303fe
--- /dev/null
+++ b/tools/lib/lockdep/tests/unlock_balance.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: bad unlock balance detected'


* [tip:locking/core] tools/lib/lockdep/tests: Run lockdep tests a second time under Valgrind
  2018-12-07  1:11 ` [PATCH v3 04/24] lockdep tests: Run lockdep tests a second time under Valgrind Bart Van Assche
@ 2018-12-11 15:24   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:24 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, johannes, sasha.levin, torvalds, hpa, longman,
	linux-kernel, mingo, peterz, bvanassche

Commit-ID:  2b28a8609ec9891e37607ae20688b4ab34f2778c
Gitweb:     https://git.kernel.org/tip/2b28a8609ec9891e37607ae20688b4ab34f2778c
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:28 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:49 +0100

tools/lib/lockdep/tests: Run lockdep tests a second time under Valgrind

This improves test coverage.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-5-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/lib/lockdep/run_tests.sh | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index bc36178329a8..c8fbd0306960 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -31,3 +31,17 @@ find tests -name '*.c' | sort | while read -r i; do
 	fi
 	rm -f "tests/$testname"
 done
+
+find tests -name '*.c' | sort | while read -r i; do
+	testname=$(basename "$i" .c)
+	echo -ne "(PRELOAD + Valgrind) $testname... "
+	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+		{ timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
+		"tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+		! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
+		echo "PASSED!"
+	else
+		echo "FAILED!"
+	fi
+	rm -f "tests/$testname"
+done


* [tip:locking/core] tools/lib/lockdep: Rename "trywlock" into "trywrlock"
  2018-12-07  1:11 ` [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
@ 2018-12-11 15:24   ` tip-bot for Bart Van Assche
  2018-12-11 17:19   ` [PATCH v3 05/24] liblockdep: " Sasha Levin
  1 sibling, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:24 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, peterz, sashal, tglx, longman, torvalds, linux-kernel,
	sasha.levin, johannes, bvanassche, hpa

Commit-ID:  7f3c7952d111ac93573fb86f4d5aeff527a07fcc
Gitweb:     https://git.kernel.org/tip/7f3c7952d111ac93573fb86f4d5aeff527a07fcc
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:29 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:50 +0100

tools/lib/lockdep: Rename "trywlock" into "trywrlock"

This patch avoids that the following compiler warning is reported while
compiling the lockdep unit tests:

include/liblockdep/rwlock.h: In function 'liblockdep_pthread_rwlock_trywlock':
include/liblockdep/rwlock.h:66:9: warning: implicit declaration of function 'pthread_rwlock_trywlock'; did you mean 'pthread_rwlock_trywrlock'? [-Wimplicit-function-declaration]
  return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
         ^~~~~~~~~~~~~~~~~~~~~~~
         pthread_rwlock_trywrlock

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Fixes: 5a52c9b480e0 ("liblockdep: Add public headers for pthread_rwlock_t implementation")
Link: https://lkml.kernel.org/r/20181207011148.251812-6-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/lib/lockdep/include/liblockdep/rwlock.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/rwlock.h b/tools/lib/lockdep/include/liblockdep/rwlock.h
index a96c3bf0fef1..365762e3a1ea 100644
--- a/tools/lib/lockdep/include/liblockdep/rwlock.h
+++ b/tools/lib/lockdep/include/liblockdep/rwlock.h
@@ -60,10 +60,10 @@ static inline int liblockdep_pthread_rwlock_tryrdlock(liblockdep_pthread_rwlock_
 	return pthread_rwlock_tryrdlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
-static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock)
+static inline int liblockdep_pthread_rwlock_trywrlock(liblockdep_pthread_rwlock_t *lock)
 {
 	lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_);
-	return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
+	return pthread_rwlock_trywrlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
 static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
@@ -79,7 +79,7 @@ static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
 #define pthread_rwlock_unlock		liblockdep_pthread_rwlock_unlock
 #define pthread_rwlock_wrlock		liblockdep_pthread_rwlock_wrlock
 #define pthread_rwlock_tryrdlock	liblockdep_pthread_rwlock_tryrdlock
-#define pthread_rwlock_trywlock		liblockdep_pthread_rwlock_trywlock
+#define pthread_rwlock_trywrlock	liblockdep_pthread_rwlock_trywrlock
 #define pthread_rwlock_destroy		liblockdep_rwlock_destroy
 
 #endif


* [tip:locking/core] tools/lib/lockdep: Add dummy print_irqtrace_events() implementation
  2018-12-07  1:11 ` [PATCH v3 06/24] liblockdep: Add dummy print_irqtrace_events() implementation Bart Van Assche
@ 2018-12-11 15:25   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: longman, peterz, linux-kernel, mingo, tglx, hpa, sasha.levin,
	bvanassche, torvalds, johannes

Commit-ID:  ac862d9b2fd084b50ee7a332a35d8d8d3228ce09
Gitweb:     https://git.kernel.org/tip/ac862d9b2fd084b50ee7a332a35d8d8d3228ce09
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:30 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:50 +0100

tools/lib/lockdep: Add dummy print_irqtrace_events() implementation

This patch avoids that linking against liblockdep fails due to no
print_irqtrace_events() definition being available.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-7-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/lib/lockdep/lockdep.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/lib/lockdep/lockdep.c b/tools/lib/lockdep/lockdep.c
index 6002fcf2f9bc..348a9d0fb766 100644
--- a/tools/lib/lockdep/lockdep.c
+++ b/tools/lib/lockdep/lockdep.c
@@ -15,6 +15,11 @@ u32 prandom_u32(void)
 	abort();
 }
 
+void print_irqtrace_events(struct task_struct *curr)
+{
+	abort();
+}
+
 static struct new_utsname *init_utsname(void)
 {
 	static struct new_utsname n = (struct new_utsname) {


* [tip:locking/core] tools/lib/lockdep/tests: Test the lockdep_reset_lock() implementation
  2018-12-07  1:11 ` [PATCH v3 07/24] lockdep tests: Test the lockdep_reset_lock() implementation Bart Van Assche
@ 2018-12-11 15:26   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: johannes, sasha.levin, tglx, bvanassche, mingo, longman, hpa,
	linux-kernel, peterz, torvalds

Commit-ID:  886adbed7ac19352315e9f1dd880360c7544d25c
Gitweb:     https://git.kernel.org/tip/886adbed7ac19352315e9f1dd880360c7544d25c
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:31 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:51 +0100

tools/lib/lockdep/tests: Test the lockdep_reset_lock() implementation

This patch makes sure that the lockdep_reset_lock() function gets
tested.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-8-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/lib/lockdep/include/liblockdep/common.h | 1 +
 tools/lib/lockdep/include/liblockdep/mutex.h  | 1 +
 tools/lib/lockdep/tests/ABBA.c                | 3 +++
 tools/lib/lockdep/tests/ABBCCA.c              | 4 ++++
 tools/lib/lockdep/tests/ABBCCDDA.c            | 5 +++++
 tools/lib/lockdep/tests/ABCABC.c              | 4 ++++
 tools/lib/lockdep/tests/ABCDBCDA.c            | 5 +++++
 tools/lib/lockdep/tests/ABCDBDDA.c            | 5 +++++
 tools/lib/lockdep/tests/unlock_balance.c      | 2 ++
 9 files changed, 30 insertions(+)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index 8862da80995a..d640a9761f09 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -44,6 +44,7 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 			struct lockdep_map *nest_lock, unsigned long ip);
 void lock_release(struct lockdep_map *lock, int nested,
 			unsigned long ip);
+void lockdep_reset_lock(struct lockdep_map *lock);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index a80ac39f966e..2073d4e1f2f0 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -54,6 +54,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
 
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
+	lockdep_reset_lock(&lock->dep_map);
 	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 1460afd33d71..623313f54720 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -11,4 +11,7 @@ void main(void)
 
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(b, a);
+
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCA.c b/tools/lib/lockdep/tests/ABBCCA.c
index a54c1b2af118..48446129d496 100644
--- a/tools/lib/lockdep/tests/ABBCCA.c
+++ b/tools/lib/lockdep/tests/ABBCCA.c
@@ -13,4 +13,8 @@ void main(void)
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(c, a);
+
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.c b/tools/lib/lockdep/tests/ABBCCDDA.c
index aa5d194e8869..3570bf7b3804 100644
--- a/tools/lib/lockdep/tests/ABBCCDDA.c
+++ b/tools/lib/lockdep/tests/ABBCCDDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCABC.c b/tools/lib/lockdep/tests/ABCABC.c
index b54a08e60416..a1c4659894cd 100644
--- a/tools/lib/lockdep/tests/ABCABC.c
+++ b/tools/lib/lockdep/tests/ABCABC.c
@@ -13,4 +13,8 @@ void main(void)
 	LOCK_UNLOCK_2(a, b);
 	LOCK_UNLOCK_2(c, a);
 	LOCK_UNLOCK_2(b, c);
+
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.c b/tools/lib/lockdep/tests/ABCDBCDA.c
index a56742250d86..335af1c90ab5 100644
--- a/tools/lib/lockdep/tests/ABCDBCDA.c
+++ b/tools/lib/lockdep/tests/ABCDBCDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(b, c);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.c b/tools/lib/lockdep/tests/ABCDBDDA.c
index 238a3353f3c3..3c5972863049 100644
--- a/tools/lib/lockdep/tests/ABCDBDDA.c
+++ b/tools/lib/lockdep/tests/ABCDBDDA.c
@@ -15,4 +15,9 @@ void main(void)
 	LOCK_UNLOCK_2(c, d);
 	LOCK_UNLOCK_2(b, d);
 	LOCK_UNLOCK_2(d, a);
+
+	pthread_mutex_destroy(&d);
+	pthread_mutex_destroy(&c);
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/unlock_balance.c b/tools/lib/lockdep/tests/unlock_balance.c
index 34cf32f689de..dba25064b50a 100644
--- a/tools/lib/lockdep/tests/unlock_balance.c
+++ b/tools/lib/lockdep/tests/unlock_balance.c
@@ -10,4 +10,6 @@ void main(void)
 	pthread_mutex_lock(&a);
 	pthread_mutex_unlock(&a);
 	pthread_mutex_unlock(&a);
+
+	pthread_mutex_destroy(&a);
 }


* [tip:locking/core] locking/lockdep: Declare local symbols static
  2018-12-07  1:11 ` [PATCH v3 08/24] locking/lockdep: Declare local symbols static Bart Van Assche
@ 2018-12-11 15:26   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, sasha.levin, bvanassche, johannes, peterz, hpa, torvalds,
	longman, linux-kernel, mingo

Commit-ID:  1431a5d2cfa18d7006d9b0e7ab4548d9bb19ce55
Gitweb:     https://git.kernel.org/tip/1431a5d2cfa18d7006d9b0e7ab4548d9bb19ce55
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:32 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:51 +0100

locking/lockdep: Declare local symbols static

This patch avoids that sparse complains about a missing declaration for
the lock_classes array when building with CONFIG_DEBUG_LOCKDEP=n.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-9-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1efada2dd9dd..7434a00b2b2f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -138,6 +138,9 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
  * get freed - this significantly simplifies the debugging code.
  */
 unsigned long nr_lock_classes;
+#ifndef CONFIG_DEBUG_LOCKDEP
+static
+#endif
 struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
 
 static inline struct lock_class *hlock_class(struct held_lock *hlock)


* [tip:locking/core] locking/lockdep: Inline __lockdep_init_map()
  2018-12-07  1:11 ` [PATCH v3 09/24] locking/lockdep: Inline __lockdep_init_map() Bart Van Assche
@ 2018-12-11 15:27   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:27 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, linux-kernel, tglx, mingo, sasha.levin, johannes,
	torvalds, bvanassche, longman, hpa

Commit-ID:  d35568bdb6ce4be3f885f8f189bbde5adc7e0160
Gitweb:     https://git.kernel.org/tip/d35568bdb6ce4be3f885f8f189bbde5adc7e0160
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:33 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:51 +0100

locking/lockdep: Inline __lockdep_init_map()

Since the function __lockdep_init_map() only has one caller, inline it
into its caller. This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-10-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7434a00b2b2f..b5c8fcb6c070 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3091,7 +3091,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 /*
  * Initialize a lock instance's lock-class mapping info:
  */
-static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
+void lockdep_init_map(struct lockdep_map *lock, const char *name,
 		      struct lock_class_key *key, int subclass)
 {
 	int i;
@@ -3147,12 +3147,6 @@ static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
 		raw_local_irq_restore(flags);
 	}
 }
-
-void lockdep_init_map(struct lockdep_map *lock, const char *name,
-		      struct lock_class_key *key, int subclass)
-{
-	__lockdep_init_map(lock, name, key, subclass);
-}
 EXPORT_SYMBOL_GPL(lockdep_init_map);
 
 struct lock_class_key __lockdep_no_validate__;


* [tip:locking/core] locking/lockdep: Introduce lock_class_cache_is_registered()
  2018-12-07  1:11 ` [PATCH v3 10/24] locking/lockdep: Introduce lock_class_cache_is_registered() Bart Van Assche
@ 2018-12-11 15:27   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:27 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: johannes, tglx, hpa, longman, peterz, mingo, torvalds,
	sasha.levin, bvanassche, linux-kernel

Commit-ID:  2904d9fa45d3ce7153f1e10d78c570ecf7f19c35
Gitweb:     https://git.kernel.org/tip/2904d9fa45d3ce7153f1e10d78c570ecf7f19c35
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:34 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:53 +0100

locking/lockdep: Introduce lock_class_cache_is_registered()

This patch does not change any functionality but makes the
lockdep_reset_lock() function easier to read.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-11-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 50 +++++++++++++++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b5c8fcb6c070..81388d028ac7 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4201,13 +4201,33 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	 */
 }
 
-void lockdep_reset_lock(struct lockdep_map *lock)
+/*
+ * Check whether any element of the @lock->class_cache[] array refers to a
+ * registered lock class. The caller must hold either the graph lock or the
+ * RCU read lock.
+ */
+static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 {
 	struct lock_class *class;
 	struct hlist_head *head;
-	unsigned long flags;
 	int i, j;
-	int locked;
+
+	for (i = 0; i < CLASSHASH_SIZE; i++) {
+		head = classhash_table + i;
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
+			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
+				if (lock->class_cache[j] == class)
+					return true;
+		}
+	}
+	return false;
+}
+
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+	struct lock_class *class;
+	unsigned long flags;
+	int j, locked;
 
 	raw_local_irq_save(flags);
 
@@ -4227,24 +4247,14 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	 * be gone.
 	 */
 	locked = graph_lock();
-	for (i = 0; i < CLASSHASH_SIZE; i++) {
-		head = classhash_table + i;
-		hlist_for_each_entry_rcu(class, head, hash_entry) {
-			int match = 0;
-
-			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
-				match |= class == lock->class_cache[j];
-
-			if (unlikely(match)) {
-				if (debug_locks_off_graph_unlock()) {
-					/*
-					 * We all just reset everything, how did it match?
-					 */
-					WARN_ON(1);
-				}
-				goto out_restore;
-			}
+	if (unlikely(lock_class_cache_is_registered(lock))) {
+		if (debug_locks_off_graph_unlock()) {
+			/*
+			 * We all just reset everything, how did it match?
+			 */
+			WARN_ON(1);
 		}
+		goto out_restore;
 	}
 	if (locked)
 		graph_unlock();


* [tip:locking/core] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement
  2018-12-07  1:11 ` [PATCH v3 11/24] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement Bart Van Assche
@ 2018-12-11 15:28   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:28 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: longman, bvanassche, torvalds, hpa, mingo, johannes,
	linux-kernel, tglx, sasha.levin, peterz

Commit-ID:  a66b6922dc6a5ece60ea9326153da3b062977a4d
Gitweb:     https://git.kernel.org/tip/a66b6922dc6a5ece60ea9326153da3b062977a4d
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:35 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:54 +0100

locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement

Initializing a list entry just before it is passed to list_add_tail_rcu()
is not necessary because list_add_tail_rcu() overwrites the next and prev
pointers anyway. Hence remove the INIT_LIST_HEAD() statement.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-12-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 81388d028ac7..346b5a1fd062 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -792,7 +792,6 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	class->key = key;
 	class->name = lock->name;
 	class->subclass = subclass;
-	INIT_LIST_HEAD(&class->lock_entry);
 	INIT_LIST_HEAD(&class->locks_before);
 	INIT_LIST_HEAD(&class->locks_after);
 	class->name_version = count_matching_names(class);


* [tip:locking/core] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe
  2018-12-07  1:11 ` [PATCH v3 12/24] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe Bart Van Assche
@ 2018-12-11 15:29   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:29 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, torvalds, linux-kernel, longman, johannes, mingo,
	bvanassche, hpa, peterz, sasha.levin

Commit-ID:  786fa29e9cb6810e21ab0d9c41a81d81d54d1d1b
Gitweb:     https://git.kernel.org/tip/786fa29e9cb6810e21ab0d9c41a81d81d54d1d1b
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:36 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:55 +0100

locking/lockdep: Make concurrent lockdep_reset_lock() calls safe

Since zap_class() removes items from the all_lock_classes list and the
classhash_table, protect all zap_class() calls against concurrent
data structure modifications with the graph lock.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-13-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 346b5a1fd062..737d2dd3ea56 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4122,6 +4122,9 @@ void lockdep_reset(void)
 	raw_local_irq_restore(flags);
 }
 
+/*
+ * Remove all references to a lock class. The caller must hold the graph lock.
+ */
 static void zap_class(struct lock_class *class)
 {
 	int i;
@@ -4229,6 +4232,7 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	int j, locked;
 
 	raw_local_irq_save(flags);
+	locked = graph_lock();
 
 	/*
 	 * Remove all classes this lock might have:
@@ -4245,7 +4249,6 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	 * Debug check: in the end all mapped classes should
 	 * be gone.
 	 */
-	locked = graph_lock();
 	if (unlikely(lock_class_cache_is_registered(lock))) {
 		if (debug_locks_off_graph_unlock()) {
 			/*


* [tip:locking/core] locking/lockdep: Stop using RCU primitives to access 'all_lock_classes'
  2018-12-07  1:11 ` [PATCH v3 13/24] locking/lockdep: Stop using RCU primitives to access all_lock_classes Bart Van Assche
@ 2018-12-11 15:29   ` tip-bot for Bart Van Assche
  0 siblings, 0 replies; 49+ messages in thread
From: tip-bot for Bart Van Assche @ 2018-12-11 15:29 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: johannes, sasha.levin, longman, mingo, tglx, hpa, bvanassche,
	linux-kernel, peterz, torvalds

Commit-ID:  fe27b0de8dfcdf8482558ce5d25e697fe74d851e
Gitweb:     https://git.kernel.org/tip/fe27b0de8dfcdf8482558ce5d25e697fe74d851e
Author:     Bart Van Assche <bvanassche@acm.org>
AuthorDate: Thu, 6 Dec 2018 17:11:37 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 11 Dec 2018 14:54:56 +0100

locking/lockdep: Stop using RCU primitives to access 'all_lock_classes'

Due to the previous patch all code that accesses the 'all_lock_classes'
list holds the graph lock. Hence use regular list primitives instead of
their RCU variants to access this list.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-14-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 737d2dd3ea56..5c837a537273 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -629,7 +629,8 @@ static int static_obj(void *obj)
 
 /*
  * To make lock name printouts unique, we calculate a unique
- * class->name_version generation counter:
+ * class->name_version generation counter. The caller must hold the graph
+ * lock.
  */
 static int count_matching_names(struct lock_class *new_class)
 {
@@ -639,7 +640,7 @@ static int count_matching_names(struct lock_class *new_class)
 	if (!new_class->name)
 		return 0;
 
-	list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
+	list_for_each_entry(class, &all_lock_classes, lock_entry) {
 		if (new_class->key - new_class->subclass == class->key)
 			return class->name_version;
 		if (class->name && !strcmp(class->name, new_class->name))
@@ -803,7 +804,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	/*
 	 * Add it to the global list of classes:
 	 */
-	list_add_tail_rcu(&class->lock_entry, &all_lock_classes);
+	list_add_tail(&class->lock_entry, &all_lock_classes);
 
 	if (verbose(class)) {
 		graph_unlock();
@@ -4141,7 +4142,7 @@ static void zap_class(struct lock_class *class)
 	 * Unhash the class and remove it from the all_lock_classes list:
 	 */
 	hlist_del_rcu(&class->hash_entry);
-	list_del_rcu(&class->lock_entry);
+	list_del(&class->lock_entry);
 
 	RCU_INIT_POINTER(class->key, NULL);
 	RCU_INIT_POINTER(class->name, NULL);


* Re: [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock"
  2018-12-07  1:11 ` [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
  2018-12-11 15:24   ` [tip:locking/core] tools/lib/lockdep: " tip-bot for Bart Van Assche
@ 2018-12-11 17:19   ` Sasha Levin
  2018-12-11 17:38     ` Bart Van Assche
  1 sibling, 1 reply; 49+ messages in thread
From: Sasha Levin @ 2018-12-11 17:19 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: mingo, peterz, tj, longman, johannes.berg, linux-kernel, Johannes Berg

On Thu, Dec 06, 2018 at 05:11:29PM -0800, Bart Van Assche wrote:
>This patch avoids that the following compiler warning is reported while
>compiling the lockdep unit tests:
>
>include/liblockdep/rwlock.h: In function 'liblockdep_pthread_rwlock_trywlock':
>include/liblockdep/rwlock.h:66:9: warning: implicit declaration of function 'pthread_rwlock_trywlock'; did you mean 'pthread_rwlock_trywrlock'? [-Wimplicit-function-declaration]
>  return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
>         ^~~~~~~~~~~~~~~~~~~~~~~
>         pthread_rwlock_trywrlock

Hm, I'm not sure why you're seeing that error (I'm not). We define
pthread_rwlock_trywlock() in:

	include/liblockdep/rwlock.h:static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock)

All the tests also seem to pass for me. Did you see failures on your
end?

--
Thanks,
Sasha


* Re: [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock"
  2018-12-11 17:19   ` [PATCH v3 05/24] liblockdep: " Sasha Levin
@ 2018-12-11 17:38     ` Bart Van Assche
  2018-12-13 19:41       ` Sasha Levin
  0 siblings, 1 reply; 49+ messages in thread
From: Bart Van Assche @ 2018-12-11 17:38 UTC (permalink / raw)
  To: Sasha Levin
  Cc: mingo, peterz, tj, longman, johannes.berg, linux-kernel, Johannes Berg

On Tue, 2018-12-11 at 12:19 -0500, Sasha Levin wrote:
> On Thu, Dec 06, 2018 at 05:11:29PM -0800, Bart Van Assche wrote:
> > This patch avoids that the following compiler warning is reported while
> > compiling the lockdep unit tests:
> > 
> > include/liblockdep/rwlock.h: In function 'liblockdep_pthread_rwlock_trywlock':
> > include/liblockdep/rwlock.h:66:9: warning: implicit declaration of function 'pthread_rwlock_trywlock'; did you mean 'pthread_rwlock_trywrlock'? [-Wimplicit-function-declaration]
> >  return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
> >         ^~~~~~~~~~~~~~~~~~~~~~~
> >         pthread_rwlock_trywrlock
> 
> Hm, I'm not sure why you're seeing that error (I'm not). We define
> pthread_rwlock_trywlock() in:
> 
> 	include/liblockdep/rwlock.h:static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock)
> 
> All the tests also seem to pass for me. Did you see failures on your
> end?

What I saw is that due to the typo that is fixed by this patch that linking
of the tests failed and that the error message that was reported because
starting the tests failed made the tests pass due to the following line in
run_tests.sh:

if [ $(timeout 1 ./lockdep ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then

Bart.


* Re: [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock"
  2018-12-11 17:38     ` Bart Van Assche
@ 2018-12-13 19:41       ` Sasha Levin
  0 siblings, 0 replies; 49+ messages in thread
From: Sasha Levin @ 2018-12-13 19:41 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: mingo, peterz, tj, longman, johannes.berg, linux-kernel, Johannes Berg

On Tue, Dec 11, 2018 at 09:38:31AM -0800, Bart Van Assche wrote:
>On Tue, 2018-12-11 at 12:19 -0500, Sasha Levin wrote:
>> On Thu, Dec 06, 2018 at 05:11:29PM -0800, Bart Van Assche wrote:
>> > This patch avoids that the following compiler warning is reported while
>> > compiling the lockdep unit tests:
>> >
>> > include/liblockdep/rwlock.h: In function 'liblockdep_pthread_rwlock_trywlock':
>> > include/liblockdep/rwlock.h:66:9: warning: implicit declaration of function 'pthread_rwlock_trywlock'; did you mean 'pthread_rwlock_trywrlock'? [-Wimplicit-function-declaration]
>> >  return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
>> >         ^~~~~~~~~~~~~~~~~~~~~~~
>> >         pthread_rwlock_trywrlock
>>
>> Hm, I'm not sure why you're seeing that error (I'm not). We define
>> pthread_rwlock_trywlock() in:
>>
>> 	include/liblockdep/rwlock.h:static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock)
>>
>> All the tests also seem to pass for me. Did you see failures on your
>> end?
>
>What I saw is that the typo fixed by this patch made linking of the tests
>fail. Starting the tests then failed, and the resulting error message made
>the tests appear to pass because of the following line in run_tests.sh:
>
>if [ $(timeout 1 ./lockdep ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then

Ah, I see. Thank you!

--
Thanks,
Sasha

^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread, other threads:[~2018-12-13 19:41 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-07  1:11 [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 01/24] lockdep tests: Display compiler warning and error messages Bart Van Assche
2018-12-11 15:22   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 02/24] lockdep tests: Fix shellcheck warnings Bart Van Assche
2018-12-11 15:23   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 03/24] lockdep tests: Improve testing accuracy Bart Van Assche
2018-12-11 15:23   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 04/24] lockdep tests: Run lockdep tests a second time under Valgrind Bart Van Assche
2018-12-11 15:24   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 05/24] liblockdep: Rename "trywlock" into "trywrlock" Bart Van Assche
2018-12-11 15:24   ` [tip:locking/core] tools/lib/lockdep: " tip-bot for Bart Van Assche
2018-12-11 17:19   ` [PATCH v3 05/24] liblockdep: " Sasha Levin
2018-12-11 17:38     ` Bart Van Assche
2018-12-13 19:41       ` Sasha Levin
2018-12-07  1:11 ` [PATCH v3 06/24] liblockdep: Add dummy print_irqtrace_events() implementation Bart Van Assche
2018-12-11 15:25   ` [tip:locking/core] tools/lib/lockdep: " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 07/24] lockdep tests: Test the lockdep_reset_lock() implementation Bart Van Assche
2018-12-11 15:26   ` [tip:locking/core] tools/lib/lockdep/tests: " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 08/24] locking/lockdep: Declare local symbols static Bart Van Assche
2018-12-11 15:26   ` [tip:locking/core] " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 09/24] locking/lockdep: Inline __lockdep_init_map() Bart Van Assche
2018-12-11 15:27   ` [tip:locking/core] " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 10/24] locking/lockdep: Introduce lock_class_cache_is_registered() Bart Van Assche
2018-12-11 15:27   ` [tip:locking/core] " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 11/24] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement Bart Van Assche
2018-12-11 15:28   ` [tip:locking/core] " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 12/24] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe Bart Van Assche
2018-12-11 15:29   ` [tip:locking/core] " tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 13/24] locking/lockdep: Stop using RCU primitives to access all_lock_classes Bart Van Assche
2018-12-11 15:29   ` [tip:locking/core] locking/lockdep: Stop using RCU primitives to access 'all_lock_classes' tip-bot for Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 14/24] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 15/24] locking/lockdep: Reorder struct lock_class members Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 16/24] locking/lockdep: Retain the class key and name while freeing a lock class Bart Van Assche
2018-12-07 10:21   ` Peter Zijlstra
2018-12-07 14:50     ` Waiman Long
2018-12-07 16:23       ` Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 17/24] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
2018-12-07 12:14   ` Peter Zijlstra
2018-12-07 16:27     ` Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 18/24] locking/lockdep: Reuse list entries " Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 19/24] locking/lockdep: Check data structure consistency Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 20/24] locking/lockdep: Introduce __lockdep_free_key_range() Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 21/24] locking/lockdep: Verify whether lock objects are small enough to be used as class keys Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 22/24] locking/lockdep: Add support for dynamic keys Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 23/24] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
2018-12-07  1:11 ` [PATCH v3 24/24] lockdep tests: Test dynamic key registration Bart Van Assche
2018-12-07 12:23 ` [PATCH v3 00/24] locking/lockdep: Add support for dynamic keys Peter Zijlstra
2018-12-07 16:35   ` Bart Van Assche
2018-12-08 10:26     ` Peter Zijlstra
