* [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
@ 2019-01-09 21:01 Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 01/16] locking/lockdep: Fix reported required memory size Bart Van Assche
                   ` (16 more replies)
  0 siblings, 17 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz; +Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche

Hi Peter and Ingo,

A known shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. This forces certain unrelated
synchronization objects to share keys, and that key sharing can cause false
positive deadlock reports. This patch series adds support for dynamic keys in
the lockdep code and eliminates a class of false positive reports from the
workqueue implementation.
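
For reference, here is a minimal sketch of how a lock user is expected to use
dynamically registered keys once this series has been applied. The
lockdep_register_key() / lockdep_unregister_key() names follow the "Add
support for dynamic keys" patch later in this series; the foo_dev structure
and its helpers are purely illustrative.

#include <linux/lockdep.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo_dev {                                /* hypothetical object */
        spinlock_t              lock;
        struct lock_class_key   lockdep_key;    /* one key per instance */
};

static struct foo_dev *foo_create(void)
{
        struct foo_dev *fd = kzalloc(sizeof(*fd), GFP_KERNEL);

        if (!fd)
                return NULL;
        /* Give this instance its own lock class. */
        lockdep_register_key(&fd->lockdep_key);
        spin_lock_init(&fd->lock);
        lockdep_set_class(&fd->lock, &fd->lockdep_key);
        return fd;
}

static void foo_destroy(struct foo_dev *fd)
{
        /* The lock must no longer be reachable at this point. */
        lockdep_unregister_key(&fd->lockdep_key);
        kfree(fd);
}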

The changes compared to v5 are:
- Modified zap_class() such that it doesn't try to free a list entry that
  is already being freed.
- Added a patch that fixes an existing bug in add_chain_cache().
- Further improved the code that reports the size needed for the lockdep
  data structures.
- Rebased and retested this patch series on top of kernel v5.0-rc1.

The changes compared to v4 are:
- Introduced the function lockdep_set_selftest_task() to fix a build failure
  for CONFIG_LOCKDEP=n.
- Fixed a use-after-free issue in is_dynamic_key() by adding the following
  code in that function: if (!debug_locks) return true;
- Changed if (WARN_ON_ONCE(!pf)) into if (!pf) to prevent the new lockdep
  implementation from triggering more kernel warnings than the current
  implementation. This keeps the build output clean when running regression
  tests.
- Added a synchronize_rcu() call at the end of lockdep_unregister_key() to
  avoid a use-after-free.

The changes compared to v3 are:
- Reworked the code that frees objects that are no longer used such that a
  grace period is now guaranteed to elapse between the last use and freeing.
- The lockdep self tests pass again.
- Avoided list corruption due to the patch that removes all matching lock
  order entries. Note: the change that prevents this corruption is removed
  again by a later patch; it is only needed to keep the series bisectable.
- Rebased this patch series on top of the tip/locking/core branch.

The changes compared to v2 are:
- Made sure that all schedule_free_zapped_classes() calls are protected
  with the graph lock.
- When removing a lock class, only recalculate lock chains that have been
  modified.
- Combine a list_del() and list_add_tail() call into a list_move_tail()
  call in register_lock_class().
- Use an RCU read lock instead of the graph lock inside is_dynamic_key().

The changes compared to v1 are:
- Addressed Peter's review comments: removed the list_head that I had added
  to struct lock_list, replaced all_list_entries and free_list_entries with
  two bitmaps, switched to call_rcu() for freeing lockdep objects, and added
  a BUILD_BUG_ON() that compares the sizes of struct lock_class_key and
  raw_spinlock_t.
- Addressed the "unknown symbol" errors reported by the build bot by adding a
  few #ifdef / #endif directives. Addressed the 32-bit warnings by using %d
  instead of %ld for array indices and by casting the array indices to
  unsigned int.
- Removed several WARN_ON_ONCE(!class->hash_entry.pprev) statements since
  these duplicate the code in check_data_structures().
- Left out the patch that makes lockdep complain if no name has been
  assigned to a lock object. That patch causes the build bot to complain
  about certain lock objects, but I have not yet had time to identify those
  lock objects.
  
Bart.

Bart Van Assche (16):
  locking/lockdep: Fix reported required memory size
  locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to
    the cache
  locking/lockdep: Make zap_class() remove all matching lock order
    entries
  locking/lockdep: Reorder struct lock_class members
  locking/lockdep: Initialize the locks_before and locks_after lists
    earlier
  locking/lockdep: Split lockdep_free_key_range() and
    lockdep_reset_lock()
  locking/lockdep: Make it easy to detect whether or not inside a
    selftest
  locking/lockdep: Free lock classes that are no longer in use
  locking/lockdep: Reuse list entries that are no longer in use
  locking/lockdep: Introduce lockdep_next_lockchain() and
    lock_chain_count()
  locking/lockdep: Reuse lock chains that have been freed
  locking/lockdep: Check data structure consistency
  locking/lockdep: Verify whether lock objects are small enough to be
    used as class keys
  locking/lockdep: Add support for dynamic keys
  kernel/workqueue: Use dynamic lockdep keys for workqueues
  lockdep tests: Test dynamic key registration

 include/linux/lockdep.h                       |  42 +-
 include/linux/workqueue.h                     |  28 +-
 kernel/locking/lockdep.c                      | 940 +++++++++++++++---
 kernel/locking/lockdep_internals.h            |   3 +-
 kernel/locking/lockdep_proc.c                 |  12 +-
 kernel/workqueue.c                            |  60 +-
 lib/locking-selftest.c                        |   2 +
 tools/lib/lockdep/include/liblockdep/common.h |   2 +
 tools/lib/lockdep/include/liblockdep/mutex.h  |  11 +-
 tools/lib/lockdep/tests/ABBA.c                |   9 +
 10 files changed, 923 insertions(+), 186 deletions(-)

-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 01/16] locking/lockdep: Fix reported required memory size
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 02/16] locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache Bart Van Assche
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Change the sizeof(array element type) * (array size) expressions into
sizeof(array). This fixes the size computations for the classhash_table[]
and chainhash_table[] arrays: commit a63f38cc4ccf ("locking/lockdep:
Convert hash tables to hlists") changed the type of the elements of
those arrays from struct list_head into struct hlist_head, which the old
expressions did not take into account.

Lock chains are only tracked with CONFIG_PROVE_LOCKING=y. Do not report
the memory required for the lock chain array if CONFIG_PROVE_LOCKING=n.
See also commit ca58abcb4a6d ("lockdep: sanitise CONFIG_PROVE_LOCKING").

Include the size of the chain_hlocks[] array.
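
To illustrate why the sizeof(array) form is more robust, consider the
following standalone sketch (not part of the patch; the array size below is
an arbitrary illustrative value):

#include <linux/kernel.h>
#include <linux/list.h>

#define CLASSHASH_SIZE  4096                    /* illustrative value only */
static struct hlist_head classhash_table[CLASSHASH_SIZE];

static void report_hash_table_size(void)
{
        /* Becomes stale if the element type ever changes again: */
        size_t by_elem_type = sizeof(struct list_head) * CLASSHASH_SIZE;
        /* Always tracks the definition of the array itself: */
        size_t by_array     = sizeof(classhash_table);

        pr_info("by_elem_type=%zu by_array=%zu\n", by_elem_type, by_array);
}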

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 95932333a48b..cb3fa7042886 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4278,20 +4278,21 @@ void __init lockdep_init(void)
 	printk("... MAX_LOCKDEP_CHAINS:      %lu\n", MAX_LOCKDEP_CHAINS);
 	printk("... CHAINHASH_SIZE:          %lu\n", CHAINHASH_SIZE);
 
-	printk(" memory used by lock dependency info: %lu kB\n",
-		(sizeof(struct lock_class) * MAX_LOCKDEP_KEYS +
-		sizeof(struct list_head) * CLASSHASH_SIZE +
-		sizeof(struct lock_list) * MAX_LOCKDEP_ENTRIES +
-		sizeof(struct lock_chain) * MAX_LOCKDEP_CHAINS +
-		sizeof(struct list_head) * CHAINHASH_SIZE
+	printk(" memory used by lock dependency info: %zu kB\n",
+	       (sizeof(list_entries) +
+		sizeof(lock_classes) +
+		sizeof(classhash_table) +
+		sizeof(chainhash_table)
 #ifdef CONFIG_PROVE_LOCKING
-		+ sizeof(struct circular_queue)
+		+ sizeof(lock_cq)
+		+ sizeof(lock_chains)
+		+ sizeof(chain_hlocks)
 #endif
 		) / 1024
 		);
 
-	printk(" per task-struct memory footprint: %lu bytes\n",
-		sizeof(struct held_lock) * MAX_LOCK_DEPTH);
+	printk(" per task-struct memory footprint: %zu bytes\n",
+	       sizeof(((struct task_struct *)NULL)->held_locks));
 }
 
 static void
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 02/16] locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 01/16] locking/lockdep: Fix reported required memory size Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 03/16] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Make sure that add_chain_cache() returns 0 and does not modify the
chain hash if nr_chain_hlocks == MAX_LOCKDEP_CHAIN_HLOCKS before this
function is called.
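
For clarity, the resulting logic in add_chain_cache() roughly becomes the
following (a condensed paraphrase of the function after this patch, not the
literal diff):

        if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
                chain->base = nr_chain_hlocks;
                /* ... copy the held lock class indices into chain_hlocks[] ... */
                nr_chain_hlocks += chain->depth;
        } else {
                /* No room left: disable lockdep instead of caching a bogus chain. */
                if (!debug_locks_off_graph_unlock())
                        return 0;
                print_lockdep_off("BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!");
                dump_stack();
                return 0;
        }
        hlist_add_head_rcu(&chain->entry, hash_head);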

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index cb3fa7042886..7a7c2d7b01c2 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2206,16 +2206,8 @@ static inline int add_chain_cache(struct task_struct *curr,
 			chain_hlocks[chain->base + j] = lock_id;
 		}
 		chain_hlocks[chain->base + j] = class - lock_classes;
-	}
-
-	if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS)
 		nr_chain_hlocks += chain->depth;
-
-#ifdef CONFIG_DEBUG_LOCKDEP
-	/*
-	 * Important for check_no_collision().
-	 */
-	if (unlikely(nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)) {
+	} else {
 		if (!debug_locks_off_graph_unlock())
 			return 0;
 
@@ -2223,7 +2215,6 @@ static inline int add_chain_cache(struct task_struct *curr,
 		dump_stack();
 		return 0;
 	}
-#endif
 
 	hlist_add_head_rcu(&chain->entry, hash_head);
 	debug_atomic_inc(chain_lookup_misses);
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 03/16] locking/lockdep: Make zap_class() remove all matching lock order entries
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 01/16] locking/lockdep: Fix reported required memory size Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 02/16] locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 04/16] locking/lockdep: Reorder struct lock_class members Bart Van Assche
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Make sure that all lock order entries that refer to a class are removed
from the list_entries[] array when a kernel module is unloaded.
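
To see why both link directions have to be checked, consider how a single
dependency P -> N is stored (a sketch using the member names from this patch;
the helper below is hypothetical and only illustrates the matching rule used
by the new zap_class() loop):

/*
 * Recording P -> N creates two lock_list entries:
 *   - one on P->locks_after  with .class = N and .links_to = P
 *   - one on N->locks_before with .class = P and .links_to = N
 * Zapping class P must therefore drop every entry that refers to P through
 * either member, not only through .class.
 */
static bool entry_refers_to_class(const struct lock_list *e,
                                  const struct lock_class *c)
{
        return e->class == c || e->links_to == c;
}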

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |  1 +
 kernel/locking/lockdep.c | 19 +++++++++++++------
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index c5335df2372f..71caa1118f4c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -178,6 +178,7 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
 struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
+	struct lock_class		*links_to;
 	struct stack_trace		trace;
 	int				distance;
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7a7c2d7b01c2..e52ce8745cba 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -859,7 +859,8 @@ static struct lock_list *alloc_list_entry(void)
 /*
  * Add a new dependency to the head of the list:
  */
-static int add_lock_to_list(struct lock_class *this, struct list_head *head,
+static int add_lock_to_list(struct lock_class *this,
+			    struct lock_class *links_to, struct list_head *head,
 			    unsigned long ip, int distance,
 			    struct stack_trace *trace)
 {
@@ -873,6 +874,7 @@ static int add_lock_to_list(struct lock_class *this, struct list_head *head,
 		return 0;
 
 	entry->class = this;
+	entry->links_to = links_to;
 	entry->distance = distance;
 	entry->trace = *trace;
 	/*
@@ -1918,14 +1920,14 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 * Ok, all validations passed, add the new lock
 	 * to the previous lock's dependency list:
 	 */
-	ret = add_lock_to_list(hlock_class(next),
+	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
 			       &hlock_class(prev)->locks_after,
 			       next->acquire_ip, distance, trace);
 
 	if (!ret)
 		return 0;
 
-	ret = add_lock_to_list(hlock_class(prev),
+	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
 			       &hlock_class(next)->locks_before,
 			       next->acquire_ip, distance, trace);
 	if (!ret)
@@ -4119,15 +4121,20 @@ void lockdep_reset(void)
  */
 static void zap_class(struct lock_class *class)
 {
+	struct lock_list *entry;
 	int i;
 
 	/*
 	 * Remove all dependencies this lock is
 	 * involved in:
 	 */
-	for (i = 0; i < nr_list_entries; i++) {
-		if (list_entries[i].class == class)
-			list_del_rcu(&list_entries[i].entry);
+	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+		if (entry->class != class && entry->links_to != class)
+			continue;
+		list_del_rcu(&entry->entry);
+		/* Clear .class and .links_to to avoid double removal. */
+		WRITE_ONCE(entry->class, NULL);
+		WRITE_ONCE(entry->links_to, NULL);
 	}
 	/*
 	 * Unhash the class and remove it from the all_lock_classes list:
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 04/16] locking/lockdep: Reorder struct lock_class members
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (2 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 03/16] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 05/16] locking/lockdep: Initialize the locks_before and locks_after lists earlier Bart Van Assche
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

This patch does not change any functionality but makes the patch that
frees lock classes that are no longer in use easier to read.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 71caa1118f4c..b5e6bfe0ae4a 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -76,6 +76,13 @@ struct lock_class {
 	 */
 	struct list_head		lock_entry;
 
+	/*
+	 * These fields represent a directed graph of lock dependencies,
+	 * to every node we attach a list of "forward" and a list of
+	 * "backward" graph nodes.
+	 */
+	struct list_head		locks_after, locks_before;
+
 	struct lockdep_subclass_key	*key;
 	unsigned int			subclass;
 	unsigned int			dep_gen_id;
@@ -86,13 +93,6 @@ struct lock_class {
 	unsigned long			usage_mask;
 	struct stack_trace		usage_traces[XXX_LOCK_USAGE_STATES];
 
-	/*
-	 * These fields represent a directed graph of lock dependencies,
-	 * to every node we attach a list of "forward" and a list of
-	 * "backward" graph nodes.
-	 */
-	struct list_head		locks_after, locks_before;
-
 	/*
 	 * Generation counter, when doing certain classes of graph walking,
 	 * to ensure that we check one node only once:
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 05/16] locking/lockdep: Initialize the locks_before and locks_after lists earlier
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (3 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 04/16] locking/lockdep: Reorder struct lock_class members Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 06/16] locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock() Bart Van Assche
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

This patch does not change any functionality. A later patch will reuse
lock classes that have been freed. In combination with that patch, this
patch will have the effect of initializing lock class order lists once
instead of every time a lock class structure is reinitialized.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e52ce8745cba..5ca5904ad489 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -735,6 +735,25 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	return true;
 }
 
+/*
+ * Initialize the lock_classes[] array elements.
+ */
+static void init_data_structures_once(void)
+{
+	static bool initialization_happened;
+	int i;
+
+	if (likely(initialization_happened))
+		return;
+
+	initialization_happened = true;
+
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		INIT_LIST_HEAD(&lock_classes[i].locks_after);
+		INIT_LIST_HEAD(&lock_classes[i].locks_before);
+	}
+}
+
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
@@ -775,6 +794,8 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 			goto out_unlock_set;
 	}
 
+	init_data_structures_once();
+
 	/*
 	 * Allocate a new key from the static array, and add it to
 	 * the hash:
@@ -793,8 +814,8 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	class->key = key;
 	class->name = lock->name;
 	class->subclass = subclass;
-	INIT_LIST_HEAD(&class->locks_before);
-	INIT_LIST_HEAD(&class->locks_after);
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
 	class->name_version = count_matching_names(class);
 	/*
 	 * We use RCU's safe list-add method to make
@@ -4167,6 +4188,8 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	int i;
 	int locked;
 
+	init_data_structures_once();
+
 	raw_local_irq_save(flags);
 	locked = graph_lock();
 
@@ -4230,6 +4253,8 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	unsigned long flags;
 	int j, locked;
 
+	init_data_structures_once();
+
 	raw_local_irq_save(flags);
 	locked = graph_lock();
 
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 06/16] locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock()
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (4 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 05/16] locking/lockdep: Initialize the locks_before and locks_after lists earlier Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 07/16] locking/lockdep: Make it easy to detect whether or not inside a selftest Bart Van Assche
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

This patch does not change the behavior of these functions but makes the
patch that frees unused lock classes easier to read.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 74 +++++++++++++++++++++-------------------
 1 file changed, 38 insertions(+), 36 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5ca5904ad489..52b280480a08 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4172,6 +4172,26 @@ static inline int within(const void *addr, void *start, unsigned long size)
 	return addr >= start && addr < start + size;
 }
 
+static void __lockdep_free_key_range(void *start, unsigned long size)
+{
+	struct lock_class *class;
+	struct hlist_head *head;
+	int i;
+
+	/*
+	 * Unhash all classes that were created by this module:
+	 */
+	for (i = 0; i < CLASSHASH_SIZE; i++) {
+		head = classhash_table + i;
+		hlist_for_each_entry_rcu(class, head, hash_entry) {
+			if (!within(class->key, start, size) &&
+			    !within(class->name, start, size))
+				continue;
+			zap_class(class);
+		}
+	}
+}
+
 /*
  * Used in module.c to remove lock classes from memory that is going to be
  * freed; and possibly re-used by other modules.
@@ -4182,30 +4202,14 @@ static inline int within(const void *addr, void *start, unsigned long size)
  */
 void lockdep_free_key_range(void *start, unsigned long size)
 {
-	struct lock_class *class;
-	struct hlist_head *head;
 	unsigned long flags;
-	int i;
 	int locked;
 
 	init_data_structures_once();
 
 	raw_local_irq_save(flags);
 	locked = graph_lock();
-
-	/*
-	 * Unhash all classes that were created by this module:
-	 */
-	for (i = 0; i < CLASSHASH_SIZE; i++) {
-		head = classhash_table + i;
-		hlist_for_each_entry_rcu(class, head, hash_entry) {
-			if (within(class->key, start, size))
-				zap_class(class);
-			else if (within(class->name, start, size))
-				zap_class(class);
-		}
-	}
-
+	__lockdep_free_key_range(start, size);
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
@@ -4247,16 +4251,11 @@ static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 	return false;
 }
 
-void lockdep_reset_lock(struct lockdep_map *lock)
+/* The caller must hold the graph lock. Does not sleep. */
+static void __lockdep_reset_lock(struct lockdep_map *lock)
 {
 	struct lock_class *class;
-	unsigned long flags;
-	int j, locked;
-
-	init_data_structures_once();
-
-	raw_local_irq_save(flags);
-	locked = graph_lock();
+	int j;
 
 	/*
 	 * Remove all classes this lock might have:
@@ -4273,19 +4272,22 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	 * Debug check: in the end all mapped classes should
 	 * be gone.
 	 */
-	if (unlikely(lock_class_cache_is_registered(lock))) {
-		if (debug_locks_off_graph_unlock()) {
-			/*
-			 * We all just reset everything, how did it match?
-			 */
-			WARN_ON(1);
-		}
-		goto out_restore;
-	}
+	if (WARN_ON_ONCE(lock_class_cache_is_registered(lock)))
+		debug_locks_off();
+}
+
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+	unsigned long flags;
+	int locked;
+
+	init_data_structures_once();
+
+	raw_local_irq_save(flags);
+	locked = graph_lock();
+	__lockdep_reset_lock(lock);
 	if (locked)
 		graph_unlock();
-
-out_restore:
 	raw_local_irq_restore(flags);
 }
 
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 07/16] locking/lockdep: Make it easy to detect whether or not inside a selftest
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (5 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 06/16] locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock() Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 08/16] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

The patch that frees unused lock classes will modify the behavior of
lockdep_free_key_range() and lockdep_reset_lock() depending on whether
or not these functions are called from the context of the lockdep
selftests. Hence make it easy to detect whether or not lockdep code
is called from the context of a lockdep selftest.
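
The detection then reduces to a task pointer comparison. A condensed sketch
of how the marker set by lockdep_set_selftest_task() is meant to be consumed
(the inside_selftest() helper is only introduced by a later patch in this
series):

static struct task_struct *lockdep_selftest_task_struct;

void lockdep_set_selftest_task(struct task_struct *task)
{
        lockdep_selftest_task_struct = task;
}

static bool inside_selftest(void)
{
        return current == lockdep_selftest_task_struct;
}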

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  | 5 +++++
 kernel/locking/lockdep.c | 6 ++++++
 lib/locking-selftest.c   | 2 ++
 3 files changed, 13 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index b5e6bfe0ae4a..66eee1ba0f2a 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -265,6 +265,7 @@ extern void lockdep_reset(void);
 extern void lockdep_reset_lock(struct lockdep_map *lock);
 extern void lockdep_free_key_range(void *start, unsigned long size);
 extern asmlinkage void lockdep_sys_exit(void);
+extern void lockdep_set_selftest_task(struct task_struct *task);
 
 extern void lockdep_off(void);
 extern void lockdep_on(void);
@@ -395,6 +396,10 @@ static inline void lockdep_on(void)
 {
 }
 
+static inline void lockdep_set_selftest_task(struct task_struct *task)
+{
+}
+
 # define lock_acquire(l, s, t, r, c, n, i)	do { } while (0)
 # define lock_release(l, n, i)			do { } while (0)
 # define lock_downgrade(l, i)			do { } while (0)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 52b280480a08..1e82ca4982b3 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -81,6 +81,7 @@ module_param(lock_stat, int, 0644);
  * code to recurse back into the lockdep code...
  */
 static arch_spinlock_t lockdep_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
+static struct task_struct *lockdep_selftest_task_struct;
 
 static int graph_lock(void)
 {
@@ -331,6 +332,11 @@ void lockdep_on(void)
 }
 EXPORT_SYMBOL(lockdep_on);
 
+void lockdep_set_selftest_task(struct task_struct *task)
+{
+	lockdep_selftest_task_struct = task;
+}
+
 /*
  * Debugging switches:
  */
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 1e1bbf171eca..a1705545e6ac 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -1989,6 +1989,7 @@ void locking_selftest(void)
 
 	init_shared_classes();
 	debug_locks_silent = !debug_locks_verbose;
+	lockdep_set_selftest_task(current);
 
 	DO_TESTCASE_6R("A-A deadlock", AA);
 	DO_TESTCASE_6R("A-B-B-A deadlock", ABBA);
@@ -2097,5 +2098,6 @@ void locking_selftest(void)
 		printk("---------------------------------\n");
 		debug_locks = 1;
 	}
+	lockdep_set_selftest_task(NULL);
 	debug_locks_silent = 0;
 }
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 08/16] locking/lockdep: Free lock classes that are no longer in use
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (6 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 07/16] locking/lockdep: Make it easy to detect whether or not inside a selftest Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 09/16] locking/lockdep: Reuse list entries " Bart Van Assche
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Instead of leaving lock classes that are no longer in use in the
lock_classes array, reuse entries from that array that are no longer
in use. Maintain a linked list of free lock classes with list head
'free_lock_classes'. Initialize that list from inside register_lock_class()
instead of from inside lockdep_init() because register_lock_class() can
be called before lockdep_init() has been called. Only add freed lock
classes to the free_lock_classes list after a grace period in order to
ensure that a lock_classes[] element is not reused while an RCU reader
is still accessing it. Since the lockdep selftests run in a context where
sleeping is not allowed and since the selftests require that lock
resetting/zapping works with debug_locks == 0, make the behavior of
lockdep_free_key_range() and lockdep_reset_lock() depend on whether
or not these functions are called from the context of the lockdep selftests.
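
The freeing scheme is an instance of the usual RCU deferred-reclamation
pattern. A minimal, generic sketch of that pattern (not the lockdep code
itself; release_items() is a hypothetical helper):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rcupdate.h>

struct deferred_free {
        struct list_head zapped;        /* items waiting for a grace period */
        struct rcu_head  rcu_head;
        bool             scheduled;
};

/* Hypothetical: actually reclaim the items on @head. */
static void release_items(struct list_head *head);

static void deferred_free_cb(struct rcu_head *rh)
{
        struct deferred_free *df = container_of(rh, typeof(*df), rcu_head);

        /* A grace period has elapsed: no RCU reader can still see these. */
        release_items(&df->zapped);
        df->scheduled = false;
}

static void schedule_deferred_free(struct deferred_free *df)
{
        if (df->scheduled)
                return;
        df->scheduled = true;
        call_rcu(&df->rcu_head, deferred_free_cb);
}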

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |   9 +-
 kernel/locking/lockdep.c | 434 +++++++++++++++++++++++++++++++++------
 2 files changed, 381 insertions(+), 62 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 66eee1ba0f2a..619ec3f26cdc 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -63,7 +63,8 @@ extern struct lock_class_key __lockdep_no_validate__;
 #define LOCKSTAT_POINTS		4
 
 /*
- * The lock-class itself:
+ * The lock-class itself. The order of the structure members matters.
+ * reinit_class() zeroes the key member and all subsequent members.
  */
 struct lock_class {
 	/*
@@ -72,7 +73,9 @@ struct lock_class {
 	struct hlist_node		hash_entry;
 
 	/*
-	 * global list of all lock-classes:
+	 * Entry in all_lock_classes when in use. Entry in free_lock_classes
+	 * when not in use. Instances that are being freed are on one of the
+	 * zapped_classes lists.
 	 */
 	struct list_head		lock_entry;
 
@@ -104,7 +107,7 @@ struct lock_class {
 	unsigned long			contention_point[LOCKSTAT_POINTS];
 	unsigned long			contending_point[LOCKSTAT_POINTS];
 #endif
-};
+} __no_randomize_layout;
 
 #ifdef CONFIG_LOCK_STAT
 struct lock_time {
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1e82ca4982b3..5b142f699503 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -50,6 +50,7 @@
 #include <linux/random.h>
 #include <linux/jhash.h>
 #include <linux/nmi.h>
+#include <linux/rcupdate.h>
 
 #include <asm/sections.h>
 
@@ -135,8 +136,8 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
 /*
  * All data structures here are protected by the global debug_lock.
  *
- * Mutex key structs only get allocated, once during bootup, and never
- * get freed - this significantly simplifies the debugging code.
+ * nr_lock_classes is the number of elements of lock_classes[] that is
+ * in use.
  */
 unsigned long nr_lock_classes;
 #ifndef CONFIG_DEBUG_LOCKDEP
@@ -278,11 +279,25 @@ static inline void lock_release_holdtime(struct held_lock *hlock)
 #endif
 
 /*
- * We keep a global list of all lock classes. The list only grows,
- * never shrinks. The list is only accessed with the lockdep
- * spinlock lock held.
+ * We keep a global list of all lock classes. The list is only accessed with
+ * the lockdep spinlock lock held. free_lock_classes is a list with free
+ * elements. These elements are linked together by the lock_entry member in
+ * struct lock_class.
  */
 LIST_HEAD(all_lock_classes);
+static LIST_HEAD(free_lock_classes);
+/*
+ * A data structure for delayed freeing of data structures that may be
+ * accessed by RCU readers at the time these were freed. The size of the array
+ * is a compromise between minimizing the amount of memory used by this array
+ * and minimizing the number of wait_event() calls by get_pending_free_lock().
+ */
+static struct pending_free {
+	struct list_head zapped_classes;
+	struct rcu_head	 rcu_head;
+	bool		 scheduled;
+} pending_free[2];
+static DECLARE_WAIT_QUEUE_HEAD(rcu_cb);
 
 /*
  * The lockdep classes are in a hash-table as well, for fast lookup:
@@ -742,11 +757,13 @@ static bool assign_lock_key(struct lockdep_map *lock)
 }
 
 /*
- * Initialize the lock_classes[] array elements.
+ * Initialize the lock_classes[] array elements, the free_lock_classes list
+ * and also the pending_free[] array.
  */
 static void init_data_structures_once(void)
 {
 	static bool initialization_happened;
+	struct pending_free *pf;
 	int i;
 
 	if (likely(initialization_happened))
@@ -754,7 +771,14 @@ static void init_data_structures_once(void)
 
 	initialization_happened = true;
 
+	for (i = 0, pf = pending_free; i < ARRAY_SIZE(pending_free);
+	     i++, pf++) {
+		INIT_LIST_HEAD(&pf->zapped_classes);
+		init_rcu_head(&pf->rcu_head);
+	}
+
 	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		list_add_tail(&lock_classes[i].lock_entry, &free_lock_classes);
 		INIT_LIST_HEAD(&lock_classes[i].locks_after);
 		INIT_LIST_HEAD(&lock_classes[i].locks_before);
 	}
@@ -802,11 +826,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 
 	init_data_structures_once();
 
-	/*
-	 * Allocate a new key from the static array, and add it to
-	 * the hash:
-	 */
-	if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
+	/* Allocate a new lock class and add it to the hash. */
+	class = list_first_entry_or_null(&free_lock_classes, typeof(*class),
+					 lock_entry);
+	if (!class) {
 		if (!debug_locks_off_graph_unlock()) {
 			return NULL;
 		}
@@ -815,7 +838,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 		dump_stack();
 		return NULL;
 	}
-	class = lock_classes + nr_lock_classes++;
+	nr_lock_classes++;
 	debug_atomic_inc(nr_unused_locks);
 	class->key = key;
 	class->name = lock->name;
@@ -829,9 +852,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	 */
 	hlist_add_head_rcu(&class->hash_entry, hash_head);
 	/*
-	 * Add it to the global list of classes:
+	 * Remove the class from the free list and add it to the global list
+	 * of classes.
 	 */
-	list_add_tail(&class->lock_entry, &all_lock_classes);
+	list_move_tail(&class->lock_entry, &all_lock_classes);
 
 	if (verbose(class)) {
 		graph_unlock();
@@ -1871,6 +1895,24 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	struct lock_list this;
 	int ret;
 
+	if (!hlock_class(prev)->key || !hlock_class(next)->key) {
+		/*
+		 * The warning statements below may trigger a use-after-free
+		 * of the class name. It is better to trigger a use-after free
+		 * and to have the class name most of the time instead of not
+		 * having the class name available.
+		 */
+		WARN_ONCE(!debug_locks_silent && !hlock_class(prev)->key,
+			  "Detected use-after-free of lock class %px/%s\n",
+			  hlock_class(prev),
+			  hlock_class(prev)->name);
+		WARN_ONCE(!debug_locks_silent && !hlock_class(next)->key,
+			  "Detected use-after-free of lock class %px/%s\n",
+			  hlock_class(next),
+			  hlock_class(next)->name);
+		return 2;
+	}
+
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
 	 * create a circular dependency in the graph. (We do this by
@@ -2253,19 +2295,16 @@ static inline int add_chain_cache(struct task_struct *curr,
 }
 
 /*
- * Look up a dependency chain.
+ * Look up a dependency chain. Must be called with either the graph lock or
+ * the RCU read lock held.
  */
 static inline struct lock_chain *lookup_chain_cache(u64 chain_key)
 {
 	struct hlist_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 
-	/*
-	 * We can walk it lock-free, because entries only get added
-	 * to the hash:
-	 */
 	hlist_for_each_entry_rcu(chain, hash_head, entry) {
-		if (chain->chain_key == chain_key) {
+		if (READ_ONCE(chain->chain_key) == chain_key) {
 			debug_atomic_inc(chain_lookup_hits);
 			return chain;
 		}
@@ -3355,6 +3394,11 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	if (nest_lock && !__lock_is_held(nest_lock, -1))
 		return print_lock_nested_lock_not_held(curr, hlock, ip);
 
+	if (!debug_locks_silent) {
+		WARN_ON_ONCE(depth && !hlock_class(hlock - 1)->key);
+		WARN_ON_ONCE(!hlock_class(hlock)->key);
+	}
+
 	if (!validate_chain(curr, lock, hlock, chain_head, chain_key))
 		return 0;
 
@@ -4143,14 +4187,92 @@ void lockdep_reset(void)
 	raw_local_irq_restore(flags);
 }
 
+/* Remove a class from a lock chain. Must be called with the graph lock held. */
+static void remove_class_from_lock_chain(struct lock_chain *chain,
+					 struct lock_class *class)
+{
+#ifdef CONFIG_PROVE_LOCKING
+	struct lock_chain *new_chain;
+	u64 chain_key;
+	int i;
+
+	for (i = chain->base; i < chain->base + chain->depth; i++) {
+		if (chain_hlocks[i] != class - lock_classes)
+			continue;
+		/* The code below leaks one chain_hlock[] entry. */
+		if (--chain->depth > 0)
+			memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
+				(chain->base + chain->depth - i) *
+				sizeof(chain_hlocks[0]));
+		/*
+		 * Each lock class occurs at most once in a lock chain so once
+		 * we found a match we can break out of this loop.
+		 */
+		goto recalc;
+	}
+	/* Since the chain has not been modified, return. */
+	return;
+
+recalc:
+	chain_key = 0;
+	for (i = chain->base; i < chain->base + chain->depth; i++)
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+	if (chain->depth && chain->chain_key == chain_key)
+		return;
+	/* Overwrite the chain key for concurrent RCU readers. */
+	WRITE_ONCE(chain->chain_key, chain_key);
+	/*
+	 * Note: calling hlist_del_rcu() from inside a
+	 * hlist_for_each_entry_rcu() loop is safe.
+	 */
+	hlist_del_rcu(&chain->entry);
+	if (chain->depth == 0)
+		return;
+	/*
+	 * If the modified lock chain matches an existing lock chain, drop
+	 * the modified lock chain.
+	 */
+	if (lookup_chain_cache(chain_key))
+		return;
+	if (WARN_ON_ONCE(nr_lock_chains >= MAX_LOCKDEP_CHAINS)) {
+		debug_locks_off();
+		return;
+	}
+	/*
+	 * Leak *chain because it is not safe to reinsert it before an RCU
+	 * grace period has expired.
+	 */
+	new_chain = lock_chains + nr_lock_chains++;
+	*new_chain = *chain;
+	hlist_add_head_rcu(&new_chain->entry, chainhashentry(chain_key));
+#endif
+}
+
+/* Must be called with the graph lock held. */
+static void remove_class_from_lock_chains(struct lock_class *class)
+{
+	struct lock_chain *chain;
+	struct hlist_head *head;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
+		head = chainhash_table + i;
+		hlist_for_each_entry_rcu(chain, head, entry) {
+			remove_class_from_lock_chain(chain, class);
+		}
+	}
+}
+
 /*
  * Remove all references to a lock class. The caller must hold the graph lock.
  */
-static void zap_class(struct lock_class *class)
+static void zap_class(struct pending_free *pf, struct lock_class *class)
 {
 	struct lock_list *entry;
 	int i;
 
+	WARN_ON_ONCE(!class->key);
+
 	/*
 	 * Remove all dependencies this lock is
 	 * involved in:
@@ -4163,14 +4285,33 @@ static void zap_class(struct lock_class *class)
 		WRITE_ONCE(entry->class, NULL);
 		WRITE_ONCE(entry->links_to, NULL);
 	}
-	/*
-	 * Unhash the class and remove it from the all_lock_classes list:
-	 */
-	hlist_del_rcu(&class->hash_entry);
-	list_del(&class->lock_entry);
+	if (list_empty(&class->locks_after) &&
+	    list_empty(&class->locks_before)) {
+		list_move_tail(&class->lock_entry, &pf->zapped_classes);
+		hlist_del_rcu(&class->hash_entry);
+		WRITE_ONCE(class->key, NULL);
+		WRITE_ONCE(class->name, NULL);
+		nr_lock_classes--;
+	} else {
+		WARN_ONCE(true, "%s() failed for class %s\n", __func__,
+			  class->name);
+	}
 
-	RCU_INIT_POINTER(class->key, NULL);
-	RCU_INIT_POINTER(class->name, NULL);
+	remove_class_from_lock_chains(class);
+}
+
+static void reinit_class(struct lock_class *class)
+{
+	void *const p = class;
+	const unsigned int offset = offsetof(struct lock_class, key);
+
+	WARN_ON_ONCE(!class->lock_entry.next);
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
+	memset(p + offset, 0, sizeof(*class) - offset);
+	WARN_ON_ONCE(!class->lock_entry.next);
+	WARN_ON_ONCE(!list_empty(&class->locks_after));
+	WARN_ON_ONCE(!list_empty(&class->locks_before));
 }
 
 static inline int within(const void *addr, void *start, unsigned long size)
@@ -4178,7 +4319,118 @@ static inline int within(const void *addr, void *start, unsigned long size)
 	return addr >= start && addr < start + size;
 }
 
-static void __lockdep_free_key_range(void *start, unsigned long size)
+static bool inside_selftest(void)
+{
+	return current == lockdep_selftest_task_struct;
+}
+
+/*
+ * Free all lock classes that are on the pf->zapped_classes list. May be called
+ * from RCU callback context.
+ */
+static void free_zapped_classes(struct rcu_head *ch)
+{
+	struct pending_free *pf = container_of(ch, typeof(*pf), rcu_head);
+	struct lock_class *class;
+	unsigned long flags;
+
+	raw_local_irq_save(flags);
+	if (!graph_lock())
+		goto restore_irqs;
+	pf->scheduled = false;
+	list_for_each_entry(class, &pf->zapped_classes, lock_entry) {
+		reinit_class(class);
+	}
+	list_splice_init(&pf->zapped_classes, &free_lock_classes);
+	graph_unlock();
+restore_irqs:
+	raw_local_irq_restore(flags);
+
+	wake_up(&rcu_cb);
+}
+
+/* Schedule an RCU callback. Must be called with the graph lock held. */
+static void schedule_free_zapped_classes(struct pending_free *pf)
+{
+	WARN_ON_ONCE(inside_selftest());
+	pf->scheduled = true;
+	call_rcu(&pf->rcu_head, free_zapped_classes);
+}
+
+/*
+ * Find an element in the pending_free[] array for which no RCU callback is
+ * pending.
+ */
+static struct pending_free *get_pending_free(void)
+{
+	struct pending_free *pf;
+	int i;
+
+	for (i = 0, pf = pending_free; i < ARRAY_SIZE(pending_free);
+	     i++, pf++)
+		if (!pf->scheduled)
+			return pf;
+
+	return NULL;
+}
+
+/*
+ * Find an element in the pending_free[] array for which no RCU callback is
+ * pending and obtain the graph lock. May sleep.
+ */
+static struct pending_free *get_pending_free_lock(unsigned long *flags)
+{
+	struct pending_free *pf;
+
+	WARN_ON_ONCE(inside_selftest());
+
+	while (true) {
+		raw_local_irq_save(*flags);
+		if (!graph_lock()) {
+			raw_local_irq_restore(*flags);
+			return NULL;
+		}
+		pf = get_pending_free();
+		if (pf)
+			break;
+		graph_unlock();
+		raw_local_irq_restore(*flags);
+
+		wait_event(rcu_cb, get_pending_free() != NULL);
+	}
+
+	return pf;
+}
+
+/*
+ * Find an element in the pending_free[] array for which no RCU callback is
+ * pending and obtain the graph lock. Ignores debug_locks. Does not sleep.
+ */
+static struct pending_free *get_pending_free_lock_imm(unsigned long *flags)
+{
+	struct pending_free *pf;
+
+	WARN_ON_ONCE(!inside_selftest());
+
+	raw_local_irq_save(*flags);
+	arch_spin_lock(&lockdep_lock);
+	pf = get_pending_free();
+	if (!pf) {
+		arch_spin_unlock(&lockdep_lock);
+		raw_local_irq_restore(*flags);
+	}
+
+	return pf;
+}
+
+/*
+ * Remove all lock classes from the class hash table and from the
+ * all_lock_classes list whose key or name is in the address range [start,
+ * start + size). Move these lock classes to the zapped_classes list. Must
+ * be called with the graph lock held.
+ */
+static void __lockdep_free_key_range(struct pending_free *pf, void *start,
+				     unsigned long size)
 {
 	struct lock_class *class;
 	struct hlist_head *head;
@@ -4193,7 +4445,7 @@ static void __lockdep_free_key_range(void *start, unsigned long size)
 			if (!within(class->key, start, size) &&
 			    !within(class->name, start, size))
 				continue;
-			zap_class(class);
+			zap_class(pf, class);
 		}
 	}
 }
@@ -4206,40 +4458,68 @@ static void __lockdep_free_key_range(void *start, unsigned long size)
  * nobody will look up these exact classes -- they're properly dead but still
  * allocated.
  */
-void lockdep_free_key_range(void *start, unsigned long size)
+static void lockdep_free_key_range_reg(void *start, unsigned long size)
 {
+	struct pending_free *pf;
 	unsigned long flags;
-	int locked;
+
+	might_sleep();
 
 	init_data_structures_once();
 
-	raw_local_irq_save(flags);
-	locked = graph_lock();
-	__lockdep_free_key_range(start, size);
-	if (locked)
-		graph_unlock();
+	pf = get_pending_free_lock(&flags);
+	if (!pf)
+		return;
+	__lockdep_free_key_range(pf, start, size);
+	schedule_free_zapped_classes(pf);
+	graph_unlock();
 	raw_local_irq_restore(flags);
 
 	/*
-	 * Wait for any possible iterators from look_up_lock_class() to pass
-	 * before continuing to free the memory they refer to.
-	 *
-	 * sync_sched() is sufficient because the read-side is IRQ disable.
+	 * Do not wait for concurrent look_up_lock_class() calls. If any such
+	 * concurrent call would return a pointer to one of the lock classes
+	 * freed by this function then that means that there is a race in the
+	 * code that calls look_up_lock_class(), namely concurrently accessing
+	 * and freeing a synchronization object.
 	 */
-	synchronize_rcu();
+}
 
-	/*
-	 * XXX at this point we could return the resources to the pool;
-	 * instead we leak them. We would need to change to bitmap allocators
-	 * instead of the linear allocators we have now.
-	 */
+/*
+ * Free all lockdep keys in the range [start, start+size). Does not sleep.
+ * Ignores debug_locks. Must only be used by the lockdep selftests.
+ */
+static void lockdep_free_key_range_imm(void *start, unsigned long size)
+{
+	struct pending_free *pf;
+	unsigned long flags;
+
+	init_data_structures_once();
+
+	pf = get_pending_free_lock_imm(&flags);
+	if (!pf)
+		return;
+	__lockdep_free_key_range(pf, start, size);
+	arch_spin_unlock(&lockdep_lock);
+	raw_local_irq_restore(flags);
+
+	free_zapped_classes(&pf->rcu_head);
+}
+
+void lockdep_free_key_range(void *start, unsigned long size)
+{
+	init_data_structures_once();
+
+	if (inside_selftest())
+		lockdep_free_key_range_imm(start, size);
+	else
+		lockdep_free_key_range_reg(start, size);
 }
 
 /*
  * Check whether any element of the @lock->class_cache[] array refers to a
  * registered lock class. The caller must hold either the graph lock or the
  * RCU read lock.
- */
+  */
 static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 {
 	struct lock_class *class;
@@ -4258,7 +4538,8 @@ static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 }
 
 /* The caller must hold the graph lock. Does not sleep. */
-static void __lockdep_reset_lock(struct lockdep_map *lock)
+static void __lockdep_reset_lock(struct pending_free *pf,
+				 struct lockdep_map *lock)
 {
 	struct lock_class *class;
 	int j;
@@ -4272,7 +4553,7 @@ static void __lockdep_reset_lock(struct lockdep_map *lock)
 		 */
 		class = look_up_lock_class(lock, j);
 		if (class)
-			zap_class(class);
+			zap_class(pf, class);
 	}
 	/*
 	 * Debug check: in the end all mapped classes should
@@ -4282,21 +4563,55 @@ static void __lockdep_reset_lock(struct lockdep_map *lock)
 		debug_locks_off();
 }
 
-void lockdep_reset_lock(struct lockdep_map *lock)
+/*
+ * Reset a lock if debug_locks == 1. Free released data structures from RCU
+ * context.
+ */
+static void lockdep_reset_lock_reg(struct lockdep_map *lock)
 {
+	struct pending_free *pf;
 	unsigned long flags;
-	int locked;
 
-	init_data_structures_once();
+	might_sleep();
 
-	raw_local_irq_save(flags);
-	locked = graph_lock();
-	__lockdep_reset_lock(lock);
-	if (locked)
-		graph_unlock();
+	pf = get_pending_free_lock(&flags);
+	if (!pf)
+		return;
+	__lockdep_reset_lock(pf, lock);
+	schedule_free_zapped_classes(pf);
+	graph_unlock();
 	raw_local_irq_restore(flags);
 }
 
+/*
+ * Reset a lock. Does not sleep. Ignores debug_locks. Must only be used by the
+ * lockdep selftests.
+ */
+static void lockdep_reset_lock_imm(struct lockdep_map *lock)
+{
+	struct pending_free *pf;
+	unsigned long flags;
+
+	pf = get_pending_free_lock_imm(&flags);
+	if (!pf)
+		return;
+	__lockdep_reset_lock(pf, lock);
+	arch_spin_unlock(&lockdep_lock);
+	raw_local_irq_restore(flags);
+
+	free_zapped_classes(&pf->rcu_head);
+}
+
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+	init_data_structures_once();
+
+	if (inside_selftest())
+		lockdep_reset_lock_imm(lock);
+	else
+		lockdep_reset_lock_reg(lock);
+}
+
 void __init lockdep_init(void)
 {
 	printk("Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar\n");
@@ -4313,7 +4628,8 @@ void __init lockdep_init(void)
 	       (sizeof(list_entries) +
 		sizeof(lock_classes) +
 		sizeof(classhash_table) +
-		sizeof(chainhash_table)
+		sizeof(chainhash_table) +
+		sizeof(pending_free)
 #ifdef CONFIG_PROVE_LOCKING
 		+ sizeof(lock_cq)
 		+ sizeof(lock_chains)
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 09/16] locking/lockdep: Reuse list entries that are no longer in use
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (7 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 08/16] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 10/16] locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count() Bart Van Assche
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Instead of abandoning elements of list_entries[] that are no longer in
use, make alloc_list_entry() reuse array elements that have been freed.
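
The list_entries[] allocator thereby changes from a bump allocator into a
bitmap allocator. A minimal, generic sketch of that allocation pattern (not
the exact lockdep code; NR_SLOTS and the helpers are illustrative):

#include <linux/bitmap.h>
#include <linux/bitops.h>

#define NR_SLOTS        16384                   /* illustrative value only */
static DECLARE_BITMAP(slots_in_use, NR_SLOTS);

static int alloc_slot(void)
{
        int idx = find_first_zero_bit(slots_in_use, NR_SLOTS);

        if (idx >= NR_SLOTS)
                return -1;                      /* pool exhausted */
        __set_bit(idx, slots_in_use);
        return idx;
}

static void free_slot(int idx)
{
        __clear_bit(idx, slots_in_use);
}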

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 48 +++++++++++++++++++++++++++++++---------
 1 file changed, 38 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5b142f699503..5e8a3a17bb94 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -45,6 +45,7 @@
 #include <linux/hash.h>
 #include <linux/ftrace.h>
 #include <linux/stringify.h>
+#include <linux/bitmap.h>
 #include <linux/bitops.h>
 #include <linux/gfp.h>
 #include <linux/random.h>
@@ -132,6 +133,7 @@ static inline int debug_locks_off_graph_unlock(void)
 
 unsigned long nr_list_entries;
 static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
+static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
 
 /*
  * All data structures here are protected by the global debug_lock.
@@ -296,6 +298,7 @@ static struct pending_free {
 	struct list_head zapped_classes;
 	struct rcu_head	 rcu_head;
 	bool		 scheduled;
+	DECLARE_BITMAP(list_entries_being_freed, MAX_LOCKDEP_ENTRIES);
 } pending_free[2];
 static DECLARE_WAIT_QUEUE_HEAD(rcu_cb);
 
@@ -756,6 +759,19 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	return true;
 }
 
+static bool list_entry_being_freed(int list_entry_idx)
+{
+	struct pending_free *pf;
+	int i;
+
+	for (i = 0, pf = pending_free; i < ARRAY_SIZE(pending_free);
+	     i++, pf++)
+		if (test_bit(list_entry_idx, pf->list_entries_being_freed))
+			return true;
+
+	return false;
+}
+
 /*
  * Initialize the lock_classes[] array elements, the free_lock_classes list
  * and also the pending_free[] array.
@@ -896,7 +912,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
  */
 static struct lock_list *alloc_list_entry(void)
 {
-	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
+	int idx = find_first_zero_bit(list_entries_in_use,
+				      ARRAY_SIZE(list_entries));
+
+	if (idx >= ARRAY_SIZE(list_entries)) {
 		if (!debug_locks_off_graph_unlock())
 			return NULL;
 
@@ -904,7 +923,9 @@ static struct lock_list *alloc_list_entry(void)
 		dump_stack();
 		return NULL;
 	}
-	return list_entries + nr_list_entries++;
+	nr_list_entries++;
+	__set_bit(idx, list_entries_in_use);
+	return list_entries + idx;
 }
 
 /*
@@ -1008,7 +1029,7 @@ static inline void mark_lock_accessed(struct lock_list *lock,
 	unsigned long nr;
 
 	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
 	lock->parent = parent;
 	lock->class->dep_gen_id = lockdep_dependency_gen_id;
 }
@@ -1018,7 +1039,7 @@ static inline unsigned long lock_accessed(struct lock_list *lock)
 	unsigned long nr;
 
 	nr = lock - list_entries;
-	WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
 	return lock->class->dep_gen_id == lockdep_dependency_gen_id;
 }
 
@@ -4277,13 +4298,15 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 	 * Remove all dependencies this lock is
 	 * involved in:
 	 */
-	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+	for_each_set_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
+		entry = list_entries + i;
 		if (entry->class != class && entry->links_to != class)
 			continue;
+		if (list_entry_being_freed(i))
+			continue;
+		set_bit(i, pf->list_entries_being_freed);
+		nr_list_entries--;
 		list_del_rcu(&entry->entry);
-		/* Clear .class and .links_to to avoid double removal. */
-		WRITE_ONCE(entry->class, NULL);
-		WRITE_ONCE(entry->links_to, NULL);
 	}
 	if (list_empty(&class->locks_after) &&
 	    list_empty(&class->locks_before)) {
@@ -4325,8 +4348,9 @@ static bool inside_selftest(void)
 }
 
 /*
- * Free all lock classes that are on the pf->zapped_classes list. May be called
- * from RCU callback context.
+ * Free all lock classes that are on the pf->zapped_classes list and also all
+ * list entries that have been marked as being freed. May be called from RCU
+ * callback context.
  */
 static void free_zapped_classes(struct rcu_head *ch)
 {
@@ -4342,6 +4366,9 @@ static void free_zapped_classes(struct rcu_head *ch)
 		reinit_class(class);
 	}
 	list_splice_init(&pf->zapped_classes, &free_lock_classes);
+	bitmap_andnot(list_entries_in_use, list_entries_in_use,
+		      pf->list_entries_being_freed, ARRAY_SIZE(list_entries));
+	bitmap_clear(pf->list_entries_being_freed, 0, ARRAY_SIZE(list_entries));
 	graph_unlock();
 restore_irqs:
 	raw_local_irq_restore(flags);
@@ -4626,6 +4653,7 @@ void __init lockdep_init(void)
 
 	printk(" memory used by lock dependency info: %zu kB\n",
 	       (sizeof(list_entries) +
+		sizeof(list_entries_in_use) +
 		sizeof(lock_classes) +
 		sizeof(classhash_table) +
 		sizeof(chainhash_table) +
-- 
2.20.1.97.g81188d93c3-goog


* [PATCH v6 10/16] locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count()
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (8 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 09/16] locking/lockdep: Reuse list entries " Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:01 ` [PATCH v6 11/16] locking/lockdep: Reuse lock chains that have been freed Bart Van Assche
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

This patch does not change any functionality but makes the next patch in
this series easier to read.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c           | 16 +++++++++++++++-
 kernel/locking/lockdep_internals.h |  3 ++-
 kernel/locking/lockdep_proc.c      | 12 ++++++------
 3 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5e8a3a17bb94..ef3809e21fa9 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2110,7 +2110,7 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 	return 0;
 }
 
-unsigned long nr_lock_chains;
+static unsigned long nr_lock_chains;
 struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
 int nr_chain_hlocks;
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
@@ -2244,6 +2244,20 @@ static int check_no_collision(struct task_struct *curr,
 	return 1;
 }
 
+/*
+ * Given an index that is >= -1, return the index of the next lock chain.
+ * Return -2 if there is no next lock chain.
+ */
+long lockdep_next_lockchain(long i)
+{
+	return i + 1 < nr_lock_chains ? i + 1 : -2;
+}
+
+unsigned long lock_chain_count(void)
+{
+	return nr_lock_chains;
+}
+
 /*
  * Adds a dependency chain into chain hashtable. And must be called with
  * graph_lock held.
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 88c847a41c8a..ba8a4ac7bd04 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -96,7 +96,8 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i);
 
 extern unsigned long nr_lock_classes;
 extern unsigned long nr_list_entries;
-extern unsigned long nr_lock_chains;
+long lockdep_next_lockchain(long i);
+unsigned long lock_chain_count(void);
 extern int nr_chain_hlocks;
 extern unsigned long nr_stack_trace_entries;
 
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 3d31f9b0059e..9c49ec645d8b 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -104,18 +104,18 @@ static const struct seq_operations lockdep_ops = {
 #ifdef CONFIG_PROVE_LOCKING
 static void *lc_start(struct seq_file *m, loff_t *pos)
 {
+	if (*pos < 0)
+		return NULL;
+
 	if (*pos == 0)
 		return SEQ_START_TOKEN;
 
-	if (*pos - 1 < nr_lock_chains)
-		return lock_chains + (*pos - 1);
-
-	return NULL;
+	return lock_chains + (*pos - 1);
 }
 
 static void *lc_next(struct seq_file *m, void *v, loff_t *pos)
 {
-	(*pos)++;
+	*pos = lockdep_next_lockchain(*pos - 1) + 1;
 	return lc_start(m, pos);
 }
 
@@ -268,7 +268,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 
 #ifdef CONFIG_PROVE_LOCKING
 	seq_printf(m, " dependency chains:             %11lu [max: %lu]\n",
-			nr_lock_chains, MAX_LOCKDEP_CHAINS);
+			lock_chain_count(), MAX_LOCKDEP_CHAINS);
 	seq_printf(m, " dependency chain hlocks:       %11d [max: %lu]\n",
 			nr_chain_hlocks, MAX_LOCKDEP_CHAIN_HLOCKS);
 #endif
-- 
2.20.1.97.g81188d93c3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v6 11/16] locking/lockdep: Reuse lock chains that have been freed
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (9 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 10/16] locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count() Bart Van Assche
@ 2019-01-09 21:01 ` Bart Van Assche
  2019-01-09 21:02 ` [PATCH v6 12/16] locking/lockdep: Check data structure consistency Bart Van Assche
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:01 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

A previous patch introduced a lock chain leak. Fix that leak by reusing
lock chains that have been freed.
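
In outline, the reuse scheme works as follows (a condensed sketch of the diff
below; the graph lock is assumed to be held, and the release_zapped_lock_chains()
helper name is made up here, since the patch does this inline in
free_zapped_classes()):

/* Reuse the first chain slot whose in-use bit is clear. */
static struct lock_chain *alloc_lock_chain(void)
{
	int idx = find_first_zero_bit(lock_chains_in_use,
				      ARRAY_SIZE(lock_chains));

	if (unlikely(idx >= ARRAY_SIZE(lock_chains)))
		return NULL;
	__set_bit(idx, lock_chains_in_use);
	return lock_chains + idx;
}

/*
 * Zapping a chain only sets its bit in pf->lock_chains_being_freed. Only
 * after the RCU grace period does the callback clear the in-use bits,
 * which makes those slots allocatable again.
 */
static void release_zapped_lock_chains(struct pending_free *pf)
{
	bitmap_andnot(lock_chains_in_use, lock_chains_in_use,
		      pf->lock_chains_being_freed, ARRAY_SIZE(lock_chains));
	bitmap_clear(pf->lock_chains_being_freed, 0, ARRAY_SIZE(lock_chains));
}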

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 62 ++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 28 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ef3809e21fa9..a8ea03bfc944 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -299,6 +299,7 @@ static struct pending_free {
 	struct rcu_head	 rcu_head;
 	bool		 scheduled;
 	DECLARE_BITMAP(list_entries_being_freed, MAX_LOCKDEP_ENTRIES);
+	DECLARE_BITMAP(lock_chains_being_freed, MAX_LOCKDEP_CHAINS);
 } pending_free[2];
 static DECLARE_WAIT_QUEUE_HEAD(rcu_cb);
 
@@ -2110,8 +2111,8 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 	return 0;
 }
 
-static unsigned long nr_lock_chains;
 struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
+static DECLARE_BITMAP(lock_chains_in_use, MAX_LOCKDEP_CHAINS);
 int nr_chain_hlocks;
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
 
@@ -2250,12 +2251,25 @@ static int check_no_collision(struct task_struct *curr,
  */
 long lockdep_next_lockchain(long i)
 {
-	return i + 1 < nr_lock_chains ? i + 1 : -2;
+	i = find_next_bit(lock_chains_in_use, ARRAY_SIZE(lock_chains), i + 1);
+	return i < ARRAY_SIZE(lock_chains) ? i : -2;
 }
 
 unsigned long lock_chain_count(void)
 {
-	return nr_lock_chains;
+	return bitmap_weight(lock_chains_in_use, ARRAY_SIZE(lock_chains));
+}
+
+/* Must be called with the graph lock held. */
+static struct lock_chain *alloc_lock_chain(void)
+{
+	int idx = find_first_zero_bit(lock_chains_in_use,
+				      ARRAY_SIZE(lock_chains));
+
+	if (unlikely(idx >= ARRAY_SIZE(lock_chains)))
+		return NULL;
+	__set_bit(idx, lock_chains_in_use);
+	return lock_chains + idx;
 }
 
 /*
@@ -2274,20 +2288,8 @@ static inline int add_chain_cache(struct task_struct *curr,
 	struct lock_chain *chain;
 	int i, j;
 
-	/*
-	 * Allocate a new chain entry from the static array, and add
-	 * it to the hash:
-	 */
-
-	/*
-	 * We might need to take the graph lock, ensure we've got IRQs
-	 * disabled to make this an IRQ-safe lock.. for recursion reasons
-	 * lockdep won't complain about its own locking errors.
-	 */
-	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
-		return 0;
-
-	if (unlikely(nr_lock_chains >= MAX_LOCKDEP_CHAINS)) {
+	chain = alloc_lock_chain();
+	if (!chain) {
 		if (!debug_locks_off_graph_unlock())
 			return 0;
 
@@ -2295,7 +2297,6 @@ static inline int add_chain_cache(struct task_struct *curr,
 		dump_stack();
 		return 0;
 	}
-	chain = lock_chains + nr_lock_chains++;
 	chain->chain_key = chain_key;
 	chain->irq_context = hlock->irq_context;
 	i = get_first_held_lock(curr, hlock);
@@ -4223,7 +4224,8 @@ void lockdep_reset(void)
 }
 
 /* Remove a class from a lock chain. Must be called with the graph lock held. */
-static void remove_class_from_lock_chain(struct lock_chain *chain,
+static void remove_class_from_lock_chain(struct pending_free *pf,
+					 struct lock_chain *chain,
 					 struct lock_class *class)
 {
 #ifdef CONFIG_PROVE_LOCKING
@@ -4261,6 +4263,7 @@ static void remove_class_from_lock_chain(struct lock_chain *chain,
 	 * hlist_for_each_entry_rcu() loop is safe.
 	 */
 	hlist_del_rcu(&chain->entry);
+	__set_bit(chain - lock_chains, pf->lock_chains_being_freed);
 	if (chain->depth == 0)
 		return;
 	/*
@@ -4269,22 +4272,19 @@ static void remove_class_from_lock_chain(struct lock_chain *chain,
 	 */
 	if (lookup_chain_cache(chain_key))
 		return;
-	if (WARN_ON_ONCE(nr_lock_chains >= MAX_LOCKDEP_CHAINS)) {
+	new_chain = alloc_lock_chain();
+	if (WARN_ON_ONCE(!new_chain)) {
 		debug_locks_off();
 		return;
 	}
-	/*
-	 * Leak *chain because it is not safe to reinsert it before an RCU
-	 * grace period has expired.
-	 */
-	new_chain = lock_chains + nr_lock_chains++;
 	*new_chain = *chain;
 	hlist_add_head_rcu(&new_chain->entry, chainhashentry(chain_key));
 #endif
 }
 
 /* Must be called with the graph lock held. */
-static void remove_class_from_lock_chains(struct lock_class *class)
+static void remove_class_from_lock_chains(struct pending_free *pf,
+					  struct lock_class *class)
 {
 	struct lock_chain *chain;
 	struct hlist_head *head;
@@ -4293,7 +4293,7 @@ static void remove_class_from_lock_chains(struct lock_class *class)
 	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
 		head = chainhash_table + i;
 		hlist_for_each_entry_rcu(chain, head, entry) {
-			remove_class_from_lock_chain(chain, class);
+			remove_class_from_lock_chain(pf, chain, class);
 		}
 	}
 }
@@ -4334,7 +4334,7 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 			  class->name);
 	}
 
-	remove_class_from_lock_chains(class);
+	remove_class_from_lock_chains(pf, class);
 }
 
 static void reinit_class(struct lock_class *class)
@@ -4383,6 +4383,11 @@ static void free_zapped_classes(struct rcu_head *ch)
 	bitmap_andnot(list_entries_in_use, list_entries_in_use,
 		      pf->list_entries_being_freed, ARRAY_SIZE(list_entries));
 	bitmap_clear(pf->list_entries_being_freed, 0, ARRAY_SIZE(list_entries));
+#ifdef CONFIG_PROVE_LOCKING
+	bitmap_andnot(lock_chains_in_use, lock_chains_in_use,
+		      pf->lock_chains_being_freed, ARRAY_SIZE(lock_chains));
+	bitmap_clear(pf->lock_chains_being_freed, 0, ARRAY_SIZE(lock_chains));
+#endif
 	graph_unlock();
 restore_irqs:
 	raw_local_irq_restore(flags);
@@ -4675,6 +4680,7 @@ void __init lockdep_init(void)
 #ifdef CONFIG_PROVE_LOCKING
 		+ sizeof(lock_cq)
 		+ sizeof(lock_chains)
+		+ sizeof(lock_chains_in_use)
 		+ sizeof(chain_hlocks)
 #endif
 		) / 1024
-- 
2.20.1.97.g81188d93c3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v6 12/16] locking/lockdep: Check data structure consistency
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (10 preceding siblings ...)
  2019-01-09 21:01 ` [PATCH v6 11/16] locking/lockdep: Reuse lock chains that have been freed Bart Van Assche
@ 2019-01-09 21:02 ` Bart Van Assche
  2019-01-09 21:02 ` [PATCH v6 13/16] locking/lockdep: Verify whether lock objects are small enough to be used as class keys Bart Van Assche
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:02 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Debugging lockdep data structure inconsistencies is challenging. Add
code that verifies data structure consistency at runtime. That code is
disabled by default because it is very CPU intensive.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 170 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 170 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a8ea03bfc944..acf61dbb8b30 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -74,6 +74,8 @@ module_param(lock_stat, int, 0644);
 #define lock_stat 0
 #endif
 
+static bool check_data_structure_consistency;
+
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
  *               class/list/hash allocators.
@@ -760,6 +762,81 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	return true;
 }
 
+/* Check whether element @e occurs in list @h */
+static bool in_list(struct list_head *e, struct list_head *h)
+{
+	struct list_head *f;
+
+	list_for_each(f, h) {
+		if (e == f)
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Check whether entry @e occurs in any of the locks_after or locks_before
+ * lists.
+ */
+static bool in_any_class_list(struct list_head *e)
+{
+	struct lock_class *class;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		class = &lock_classes[i];
+		if (in_list(e, &class->locks_after) ||
+		    in_list(e, &class->locks_before))
+			return true;
+	}
+	return false;
+}
+
+static bool class_lock_list_valid(struct lock_class *c, struct list_head *h)
+{
+	struct lock_list *e;
+
+	list_for_each_entry(e, h, entry) {
+		if (e->links_to != c) {
+			printk(KERN_INFO "class %s: mismatch for lock entry %ld; class %s <> %s",
+			       c->name ? : "(?)",
+			       (unsigned long)(e - list_entries),
+			       e->links_to && e->links_to->name ?
+			       e->links_to->name : "(?)",
+			       e->class && e->class->name ? e->class->name :
+			       "(?)");
+			return false;
+		}
+	}
+	return true;
+}
+
+static u16 chain_hlocks[];
+
+static bool check_lock_chain_key(struct lock_chain *chain)
+{
+#ifdef CONFIG_PROVE_LOCKING
+	u64 chain_key = 0;
+	int i;
+
+	for (i = chain->base; i < chain->base + chain->depth; i++)
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+	/*
+	 * The 'unsigned long long' casts avoid that a compiler warning
+	 * is reported when building tools/lib/lockdep.
+	 */
+	if (chain->chain_key != chain_key)
+		printk(KERN_INFO "chain %lld: key %#llx <> %#llx\n",
+		       (unsigned long long)(chain - lock_chains),
+		       (unsigned long long)chain->chain_key,
+		       (unsigned long long)chain_key);
+	return chain->chain_key == chain_key;
+#else
+	return true;
+#endif
+}
+
 static bool list_entry_being_freed(int list_entry_idx)
 {
 	struct pending_free *pf;
@@ -773,6 +850,97 @@ static bool list_entry_being_freed(int list_entry_idx)
 	return false;
 }
 
+static bool in_any_zapped_class_list(struct lock_class *class)
+{
+	struct pending_free *pf;
+	int i;
+
+	for (i = 0, pf = pending_free; i < ARRAY_SIZE(pending_free);
+	     i++, pf++)
+		if (in_list(&class->lock_entry, &pf->zapped_classes))
+			return true;
+
+	return false;
+}
+
+static bool check_data_structures(void)
+{
+	struct lock_class *class;
+	struct lock_chain *chain;
+	struct hlist_head *head;
+	struct lock_list *e;
+	int i;
+
+	/* Check whether all classes occur in a lock list. */
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		class = &lock_classes[i];
+		if (!in_list(&class->lock_entry, &all_lock_classes) &&
+		    !in_list(&class->lock_entry, &free_lock_classes) &&
+		    !in_any_zapped_class_list(class)) {
+			printk(KERN_INFO "class %px/%s is not in any class list\n",
+			       class, class->name ? : "(?)");
+			return false;
+		}
+	}
+
+	/* Check whether all classes have valid lock lists. */
+	for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+		class = &lock_classes[i];
+		if (!class_lock_list_valid(class, &class->locks_before))
+			return false;
+		if (!class_lock_list_valid(class, &class->locks_after))
+			return false;
+	}
+
+	/* Check the chain_key of all lock chains. */
+	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
+		head = chainhash_table + i;
+		hlist_for_each_entry_rcu(chain, head, entry) {
+			if (!check_lock_chain_key(chain))
+				return false;
+		}
+	}
+
+	/*
+	 * Check whether all list entries that are in use occur in a class
+	 * lock list.
+	 */
+	for_each_set_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
+		if (list_entry_being_freed(i))
+			continue;
+		e = list_entries + i;
+		if (!in_any_class_list(&e->entry)) {
+			printk(KERN_INFO "list entry %d is not in any class list; class %s <> %s\n",
+			       (unsigned int)(e - list_entries),
+			       e->class->name ? : "(?)",
+			       e->links_to->name ? : "(?)");
+			return false;
+		}
+	}
+
+	/*
+	 * Check whether all list entries that are not in use do not occur in
+	 * a class lock list.
+	 */
+	for_each_clear_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
+		if (WARN_ON_ONCE(list_entry_being_freed(i)))
+			return false;
+		e = list_entries + i;
+		if (in_any_class_list(&e->entry)) {
+			printk(KERN_INFO "list entry %d occurs in a class list; class %s <> %s\n",
+			       (unsigned int)(e - list_entries),
+			       e->class && e->class->name ? e->class->name :
+			       "(?)",
+			       e->links_to && e->links_to->name ?
+			       e->links_to->name : "(?)");
+			return false;
+		}
+	}
+
+	return true;
+}
+
 /*
  * Initialize the lock_classes[] array elements, the free_lock_classes list
  * and also the pending_free[] array.
@@ -4376,6 +4544,8 @@ static void free_zapped_classes(struct rcu_head *ch)
 	if (!graph_lock())
 		goto restore_irqs;
 	pf->scheduled = false;
+	if (check_data_structure_consistency)
+		WARN_ON_ONCE(!check_data_structures());
 	list_for_each_entry(class, &pf->zapped_classes, lock_entry) {
 		reinit_class(class);
 	}
-- 
2.20.1.97.g81188d93c3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v6 13/16] locking/lockdep: Verify whether lock objects are small enough to be used as class keys
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (11 preceding siblings ...)
  2019-01-09 21:02 ` [PATCH v6 12/16] locking/lockdep: Check data structure consistency Bart Van Assche
@ 2019-01-09 21:02 ` Bart Van Assche
  2019-01-09 21:02 ` [PATCH v6 14/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:02 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/locking/lockdep.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index acf61dbb8b30..72cff86829e6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -743,6 +743,17 @@ static bool assign_lock_key(struct lockdep_map *lock)
 {
 	unsigned long can_addr, addr = (unsigned long)lock;
 
+#ifdef __KERNEL__
+	/*
+	 * lockdep_free_key_range() assumes that struct lock_class_key
+	 * objects do not overlap. Since we use the address of lock
+	 * objects as class key for static objects, check whether the
+	 * size of lock_class_key objects does not exceed the size of
+	 * the smallest lock object.
+	 */
+	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
+#endif
+
 	if (__is_kernel_percpu_address(addr, &can_addr))
 		lock->key = (void *)can_addr;
 	else if (__is_module_percpu_address(addr, &can_addr))
-- 
2.20.1.97.g81188d93c3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v6 14/16] locking/lockdep: Add support for dynamic keys
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (12 preceding siblings ...)
  2019-01-09 21:02 ` [PATCH v6 13/16] locking/lockdep: Verify whether lock objects are small enough to be used as class keys Bart Van Assche
@ 2019-01-09 21:02 ` Bart Van Assche
  2019-01-09 21:02 ` [PATCH v6 15/16] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:02 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

A shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. That forces certain lock objects
to share lock keys. Since lock dependency analysis groups lock objects
per key, sharing lock keys can cause false positive lockdep reports.
Make it possible to avoid such false positive reports by allowing lock
keys to be allocated dynamically. Require that dynamically allocated
lock keys are registered before use by calling lockdep_register_key().
Complain about attempts to register the same lock key pointer twice
without calling lockdep_unregister_key() between successive
registration calls.

The purpose of the new lock_keys_hash[] data structure that keeps
track of all dynamic keys is twofold:
- Verify whether the lockdep_register_key() and lockdep_unregister_key()
  functions are used correctly.
- Prevent lockdep_init_map() from complaining when it encounters a
  dynamically allocated key.
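
For reference, a minimal usage sketch of the new interface follows. The
struct and function names below are made up for illustration; only
lockdep_register_key(), lockdep_unregister_key() and __mutex_init() are
real interfaces:

#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct my_object {
	struct lock_class_key	key;	/* one key per object instance */
	struct mutex		lock;
};

static struct my_object *my_object_create(void)
{
	struct my_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return NULL;
	/* Register the key before any lock uses it. */
	lockdep_register_key(&obj->key);
	__mutex_init(&obj->lock, "my_object->lock", &obj->key);
	return obj;
}

static void my_object_destroy(struct my_object *obj)
{
	mutex_destroy(&obj->lock);
	/*
	 * Unregister before freeing the memory that holds the key;
	 * lockdep_unregister_key() waits for an RCU grace period.
	 */
	lockdep_unregister_key(&obj->key);
	kfree(obj);
}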

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/lockdep.h  |  13 ++++-
 kernel/locking/lockdep.c | 123 ++++++++++++++++++++++++++++++++++++---
 2 files changed, 125 insertions(+), 11 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 619ec3f26cdc..3fd4172e9d1e 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -46,15 +46,19 @@ extern int lock_stat;
 #define NR_LOCKDEP_CACHING_CLASSES	2
 
 /*
- * Lock-classes are keyed via unique addresses, by embedding the
- * lockclass-key into the kernel (or module) .data section. (For
- * static locks we use the lock address itself as the key.)
+ * A lockdep key is associated with each lock object. For static locks we use
+ * the lock address itself as the key. Dynamically allocated lock objects can
+ * have a statically or dynamically allocated key. Dynamically allocated lock
+ * keys must be registered before being used and must be unregistered before
+ * the key memory is freed.
  */
 struct lockdep_subclass_key {
 	char __one_byte;
 } __attribute__ ((__packed__));
 
+/* hash_entry is used to keep track of dynamically allocated keys. */
 struct lock_class_key {
+	struct hlist_node		hash_entry;
 	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
 };
 
@@ -273,6 +277,9 @@ extern void lockdep_set_selftest_task(struct task_struct *task);
 extern void lockdep_off(void);
 extern void lockdep_on(void);
 
+extern void lockdep_register_key(struct lock_class_key *key);
+extern void lockdep_unregister_key(struct lock_class_key *key);
+
 /*
  * These methods are used by specific locking variants (spinlocks,
  * rwlocks, mutexes and rwsems) to pass init/acquire/release events
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 72cff86829e6..a570be564be8 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -143,6 +143,9 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
  * nr_lock_classes is the number of elements of lock_classes[] that is
  * in use.
  */
+#define KEYHASH_BITS		(MAX_LOCKDEP_KEYS_BITS - 1)
+#define KEYHASH_SIZE		(1UL << KEYHASH_BITS)
+static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
 unsigned long nr_lock_classes;
 #ifndef CONFIG_DEBUG_LOCKDEP
 static
@@ -626,7 +629,7 @@ static int very_verbose(struct lock_class *class)
  * Is this the address of a static object:
  */
 #ifdef __KERNEL__
-static int static_obj(void *obj)
+static int static_obj(const void *obj)
 {
 	unsigned long start = (unsigned long) &_stext,
 		      end   = (unsigned long) &_end,
@@ -980,6 +983,71 @@ static void init_data_structures_once(void)
 	}
 }
 
+static inline struct hlist_head *keyhashentry(const struct lock_class_key *key)
+{
+	unsigned long hash = hash_long((uintptr_t)key, KEYHASH_BITS);
+
+	return lock_keys_hash + hash;
+}
+
+/* Register a dynamically allocated key. */
+void lockdep_register_key(struct lock_class_key *key)
+{
+	struct hlist_head *hash_head;
+	struct lock_class_key *k;
+	unsigned long flags;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return;
+	hash_head = keyhashentry(key);
+
+	raw_local_irq_save(flags);
+	if (!graph_lock())
+		goto restore_irqs;
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (WARN_ON_ONCE(k == key))
+			goto out_unlock;
+	}
+	hlist_add_head_rcu(&key->hash_entry, hash_head);
+out_unlock:
+	graph_unlock();
+restore_irqs:
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lockdep_register_key);
+
+/* Check whether a key has been registered as a dynamic key. */
+static bool is_dynamic_key(const struct lock_class_key *key)
+{
+	struct hlist_head *hash_head;
+	struct lock_class_key *k;
+	bool found = false;
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return false;
+
+	/*
+	 * If lock debugging is disabled lock_keys_hash[] may contain
+	 * pointers to memory that has already been freed. Avoid triggering
+	 * a use-after-free in that case by returning early.
+	 */
+	if (!debug_locks)
+		return true;
+
+	hash_head = keyhashentry(key);
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (k == key) {
+			found = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return found;
+}
+
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
@@ -1001,7 +1069,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	if (!lock->key) {
 		if (!assign_lock_key(lock))
 			return NULL;
-	} else if (!static_obj(lock->key)) {
+	} else if (!static_obj(lock->key) && !is_dynamic_key(lock->key)) {
 		return NULL;
 	}
 
@@ -3393,13 +3461,13 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,
 	if (DEBUG_LOCKS_WARN_ON(!key))
 		return;
 	/*
-	 * Sanity check, the lock-class key must be persistent:
+	 * Sanity check, the lock-class key must either have been allocated
+	 * statically or must have been registered as a dynamic key.
 	 */
-	if (!static_obj(key)) {
-		printk("BUG: key %px not in .data!\n", key);
-		/*
-		 * What it says above ^^^^^, I suggest you read it.
-		 */
+	if (!static_obj(key) && !is_dynamic_key(key)) {
+		if (debug_locks)
+			printk(KERN_ERR "BUG: key %px has not been registered!\n",
+			       key);
 		DEBUG_LOCKS_WARN_ON(1);
 		return;
 	}
@@ -4839,6 +4907,45 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 		lockdep_reset_lock_reg(lock);
 }
 
+/*
+ * Unregister a dynamically allocated key. Must not be called from interrupt
+ * context. The caller must ensure that freeing @key only happens after an RCU
+ * grace period.
+ */
+void lockdep_unregister_key(struct lock_class_key *key)
+{
+	struct hlist_head *hash_head = keyhashentry(key);
+	struct lock_class_key *k;
+	struct pending_free *pf;
+	unsigned long flags;
+	bool found = false;
+
+	might_sleep();
+
+	if (WARN_ON_ONCE(static_obj(key)))
+		return;
+
+	pf = get_pending_free_lock(&flags);
+	if (!pf)
+		return;
+	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+		if (k == key) {
+			hlist_del_rcu(&k->hash_entry);
+			found = true;
+			break;
+		}
+	}
+	WARN_ON_ONCE(!found);
+	__lockdep_free_key_range(pf, key, 1);
+	schedule_free_zapped_classes(pf);
+	graph_unlock();
+	raw_local_irq_restore(flags);
+
+	/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
+	synchronize_rcu();
+}
+EXPORT_SYMBOL_GPL(lockdep_unregister_key);
+
 void __init lockdep_init(void)
 {
 	printk("Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar\n");
-- 
2.20.1.97.g81188d93c3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v6 15/16] kernel/workqueue: Use dynamic lockdep keys for workqueues
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (13 preceding siblings ...)
  2019-01-09 21:02 ` [PATCH v6 14/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
@ 2019-01-09 21:02 ` Bart Van Assche
  2019-01-09 21:02 ` [PATCH v6 16/16] lockdep tests: Test dynamic key registration Bart Van Assche
  2019-01-11 12:48 ` [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Peter Zijlstra
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:02 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Will Deacon

Commit 87915adc3f0a ("workqueue: re-add lockdep dependencies for flushing")
improved deadlock checking in the workqueue implementation. Unfortunately
that patch also introduced a few false positive lockdep complaints. This
patch suppresses these false positives by allocating the workqueue mutex
lockdep key dynamically. An example of a false positive lockdep complaint
suppressed by this patch can be found below. The root cause of the
lockdep complaint shown below is that the direct I/O code can call
alloc_workqueue() from inside a work item created by another
alloc_workqueue() call and that both workqueues share the same lockdep
key. This patch prevents that lockdep complaint from being triggered by
allocating the workqueue lockdep keys dynamically. In other words, this
patch guarantees that a unique lockdep key is associated with each
workqueue mutex.

======================================================
WARNING: possible circular locking dependency detected
4.19.0-dbg+ #1 Not tainted
------------------------------------------------------
fio/4129 is trying to acquire lock:
00000000a01cfe1a ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: flush_workqueue+0xd0/0x970

but task is already holding lock:
00000000a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&sb->s_type->i_mutex_key#14){+.+.}:
       down_write+0x3d/0x80
       __generic_file_fsync+0x77/0xf0
       ext4_sync_file+0x3c9/0x780
       vfs_fsync_range+0x66/0x100
       dio_complete+0x2f5/0x360
       dio_aio_complete_work+0x1c/0x20
       process_one_work+0x481/0x9f0
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #1 ((work_completion)(&dio->complete_work)){+.+.}:
       process_one_work+0x447/0x9f0
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #0 ((wq_completion)"dio/%s"sb->s_id){+.+.}:
       lock_acquire+0xc5/0x200
       flush_workqueue+0xf3/0x970
       drain_workqueue+0xec/0x220
       destroy_workqueue+0x23/0x350
       sb_init_dio_done_wq+0x6a/0x80
       do_blockdev_direct_IO+0x1f33/0x4be0
       __blockdev_direct_IO+0x79/0x86
       ext4_direct_IO+0x5df/0xbb0
       generic_file_direct_write+0x119/0x220
       __generic_file_write_iter+0x131/0x2d0
       ext4_file_write_iter+0x3fa/0x710
       aio_write+0x235/0x330
       io_submit_one+0x510/0xeb0
       __x64_sys_io_submit+0x122/0x340
       do_syscall_64+0x71/0x220
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  (wq_completion)"dio/%s"sb->s_id --> (work_completion)(&dio->complete_work) --> &sb->s_type->i_mutex_key#14

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sb->s_type->i_mutex_key#14);
                               lock((work_completion)(&dio->complete_work));
                               lock(&sb->s_type->i_mutex_key#14);
  lock((wq_completion)"dio/%s"sb->s_id);

 *** DEADLOCK ***

1 lock held by fio/4129:
 #0: 00000000a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

stack backtrace:
CPU: 3 PID: 4129 Comm: fio Not tainted 4.19.0-dbg+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
Call Trace:
 dump_stack+0x86/0xc5
 print_circular_bug.isra.32+0x20a/0x218
 __lock_acquire+0x1c68/0x1cf0
 lock_acquire+0xc5/0x200
 flush_workqueue+0xf3/0x970
 drain_workqueue+0xec/0x220
 destroy_workqueue+0x23/0x350
 sb_init_dio_done_wq+0x6a/0x80
 do_blockdev_direct_IO+0x1f33/0x4be0
 __blockdev_direct_IO+0x79/0x86
 ext4_direct_IO+0x5df/0xbb0
 generic_file_direct_write+0x119/0x220
 __generic_file_write_iter+0x131/0x2d0
 ext4_file_write_iter+0x3fa/0x710
 aio_write+0x235/0x330
 io_submit_one+0x510/0xeb0
 __x64_sys_io_submit+0x122/0x340
 do_syscall_64+0x71/0x220
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
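
Condensed, the change boils down to the following contrast (an illustrative
sketch only; the "after" part omits the lock name allocation, see the diff
below for the real code):

/* Before: one static key per alloc_workqueue() call site. */
#define alloc_workqueue(fmt, flags, max_active, args...)		\
({									\
	static struct lock_class_key __key;				\
	__alloc_workqueue_key((fmt), (flags), (max_active), &__key,	\
			      "(wq_completion)"#fmt#args, ##args);	\
})

/* After: one dynamically registered key per workqueue instance. */
static void wq_init_lockdep(struct workqueue_struct *wq)
{
	lockdep_register_key(&wq->key);
	lockdep_init_map(&wq->lockdep_map, wq->lock_name, &wq->key, 0);
}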

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/workqueue.h | 28 +++---------------
 kernel/workqueue.c        | 60 +++++++++++++++++++++++++++++++++------
 2 files changed, 55 insertions(+), 33 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 60d673e15632..d9a1a480e920 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -390,43 +390,23 @@ extern struct workqueue_struct *system_freezable_wq;
 extern struct workqueue_struct *system_power_efficient_wq;
 extern struct workqueue_struct *system_freezable_power_efficient_wq;
 
-extern struct workqueue_struct *
-__alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
-	struct lock_class_key *key, const char *lock_name, ...) __printf(1, 6);
-
 /**
  * alloc_workqueue - allocate a workqueue
  * @fmt: printf format for the name of the workqueue
  * @flags: WQ_* flags
  * @max_active: max in-flight work items, 0 for default
- * @args...: args for @fmt
+ * remaining args: args for @fmt
  *
  * Allocate a workqueue with the specified parameters.  For detailed
  * information on WQ_* flags, please refer to
  * Documentation/core-api/workqueue.rst.
  *
- * The __lock_name macro dance is to guarantee that single lock_class_key
- * doesn't end up with different namesm, which isn't allowed by lockdep.
- *
  * RETURNS:
  * Pointer to the allocated workqueue on success, %NULL on failure.
  */
-#ifdef CONFIG_LOCKDEP
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-({									\
-	static struct lock_class_key __key;				\
-	const char *__lock_name;					\
-									\
-	__lock_name = "(wq_completion)"#fmt#args;			\
-									\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      &__key, __lock_name, ##args);		\
-})
-#else
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      NULL, NULL, ##args)
-#endif
+struct workqueue_struct *alloc_workqueue(const char *fmt,
+					 unsigned int flags,
+					 int max_active, ...);
 
 /**
  * alloc_ordered_workqueue - allocate an ordered workqueue
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 392be4b252f6..391a3db13171 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -259,6 +259,8 @@ struct workqueue_struct {
 	struct wq_device	*wq_dev;	/* I: for sysfs interface */
 #endif
 #ifdef CONFIG_LOCKDEP
+	char			*lock_name;
+	struct lock_class_key	key;
 	struct lockdep_map	lockdep_map;
 #endif
 	char			name[WQ_NAME_LEN]; /* I: workqueue name */
@@ -3314,11 +3316,50 @@ static int init_worker_pool(struct worker_pool *pool)
 	return 0;
 }
 
+#ifdef CONFIG_LOCKDEP
+static void wq_init_lockdep(struct workqueue_struct *wq)
+{
+	char *lock_name;
+
+	lockdep_register_key(&wq->key);
+	lock_name = kasprintf(GFP_KERNEL, "%s%s", "(wq_completion)", wq->name);
+	if (!lock_name)
+		lock_name = wq->name;
+	/* Keep the name so that wq_free_lockdep() can free it later. */
+	wq->lock_name = lock_name;
+	lockdep_init_map(&wq->lockdep_map, lock_name, &wq->key, 0);
+}
+
+static void wq_unregister_lockdep(struct workqueue_struct *wq)
+{
+	lockdep_reset_lock(&wq->lockdep_map);
+	lockdep_unregister_key(&wq->key);
+}
+
+static void wq_free_lockdep(struct workqueue_struct *wq)
+{
+	if (wq->lock_name != wq->name)
+		kfree(wq->lock_name);
+}
+#else
+static void wq_init_lockdep(struct workqueue_struct *wq)
+{
+}
+
+static void wq_unregister_lockdep(struct workqueue_struct *wq)
+{
+}
+
+static void wq_free_lockdep(struct workqueue_struct *wq)
+{
+}
+#endif
+
 static void rcu_free_wq(struct rcu_head *rcu)
 {
 	struct workqueue_struct *wq =
 		container_of(rcu, struct workqueue_struct, rcu);
 
+	wq_free_lockdep(wq);
+
 	if (!(wq->flags & WQ_UNBOUND))
 		free_percpu(wq->cpu_pwqs);
 	else
@@ -3509,8 +3550,10 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
 	 * If we're the last pwq going away, @wq is already dead and no one
 	 * is gonna access it anymore.  Schedule RCU free.
 	 */
-	if (is_last)
+	if (is_last) {
+		wq_unregister_lockdep(wq);
 		call_rcu(&wq->rcu, rcu_free_wq);
+	}
 }
 
 /**
@@ -4044,11 +4087,9 @@ static int init_rescuer(struct workqueue_struct *wq)
 	return 0;
 }
 
-struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
-					       unsigned int flags,
-					       int max_active,
-					       struct lock_class_key *key,
-					       const char *lock_name, ...)
+struct workqueue_struct *alloc_workqueue(const char *fmt,
+					 unsigned int flags,
+					 int max_active, ...)
 {
 	size_t tbl_size = 0;
 	va_list args;
@@ -4083,7 +4124,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 			goto err_free_wq;
 	}
 
-	va_start(args, lock_name);
+	va_start(args, max_active);
 	vsnprintf(wq->name, sizeof(wq->name), fmt, args);
 	va_end(args);
 
@@ -4100,7 +4141,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	INIT_LIST_HEAD(&wq->flusher_overflow);
 	INIT_LIST_HEAD(&wq->maydays);
 
-	lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
+	wq_init_lockdep(wq);
 	INIT_LIST_HEAD(&wq->list);
 
 	if (alloc_and_link_pwqs(wq) < 0)
@@ -4138,7 +4179,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	destroy_workqueue(wq);
 	return NULL;
 }
-EXPORT_SYMBOL_GPL(__alloc_workqueue_key);
+EXPORT_SYMBOL_GPL(alloc_workqueue);
 
 /**
  * destroy_workqueue - safely terminate a workqueue
@@ -4191,6 +4232,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
 		kthread_stop(wq->rescuer->task);
 
 	if (!(wq->flags & WQ_UNBOUND)) {
+		wq_unregister_lockdep(wq);
 		/*
 		 * The base ref is never dropped on per-cpu pwqs.  Directly
 		 * schedule RCU free.
-- 
2.20.1.97.g81188d93c3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v6 16/16] lockdep tests: Test dynamic key registration
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (14 preceding siblings ...)
  2019-01-09 21:02 ` [PATCH v6 15/16] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
@ 2019-01-09 21:02 ` Bart Van Assche
  2019-01-11 12:48 ` [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Peter Zijlstra
  16 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-01-09 21:02 UTC (permalink / raw)
  To: peterz
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Bart Van Assche,
	Johannes Berg

Make sure that the lockdep_register_key() and lockdep_unregister_key()
code is tested when running the lockdep tests.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 tools/lib/lockdep/include/liblockdep/common.h |  2 ++
 tools/lib/lockdep/include/liblockdep/mutex.h  | 11 ++++++-----
 tools/lib/lockdep/tests/ABBA.c                |  9 +++++++++
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index d640a9761f09..a81d91d4fc78 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -45,6 +45,8 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 void lock_release(struct lockdep_map *lock, int nested,
 			unsigned long ip);
 void lockdep_reset_lock(struct lockdep_map *lock);
+void lockdep_register_key(struct lock_class_key *key);
+void lockdep_unregister_key(struct lock_class_key *key);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index 2073d4e1f2f0..783dd0df06f9 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -7,6 +7,7 @@
 
 struct liblockdep_pthread_mutex {
 	pthread_mutex_t mutex;
+	struct lock_class_key key;
 	struct lockdep_map dep_map;
 };
 
@@ -27,11 +28,10 @@ static inline int __mutex_init(liblockdep_pthread_mutex_t *lock,
 	return pthread_mutex_init(&lock->mutex, __mutexattr);
 }
 
-#define liblockdep_pthread_mutex_init(mutex, mutexattr)		\
-({								\
-	static struct lock_class_key __key;			\
-								\
-	__mutex_init((mutex), #mutex, &__key, (mutexattr));	\
+#define liblockdep_pthread_mutex_init(mutex, mutexattr)			\
+({									\
+	lockdep_register_key(&(mutex)->key);				\
+	__mutex_init((mutex), #mutex, &(mutex)->key, (mutexattr));	\
 })
 
 static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock)
@@ -55,6 +55,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
 	lockdep_reset_lock(&lock->dep_map);
+	lockdep_unregister_key(&lock->key);
 	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 623313f54720..543789bc3e37 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -14,4 +14,13 @@ void main(void)
 
 	pthread_mutex_destroy(&b);
 	pthread_mutex_destroy(&a);
+
+	pthread_mutex_init(&a, NULL);
+	pthread_mutex_init(&b, NULL);
+
+	LOCK_UNLOCK_2(a, b);
+	LOCK_UNLOCK_2(b, a);
+
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }
-- 
2.20.1.97.g81188d93c3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
                   ` (15 preceding siblings ...)
  2019-01-09 21:02 ` [PATCH v6 16/16] lockdep tests: Test dynamic key registration Bart Van Assche
@ 2019-01-11 12:48 ` Peter Zijlstra
  2019-01-11 15:55   ` Bart Van Assche
  16 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2019-01-11 12:48 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, longman, johannes.berg, linux-kernel


Hi Bart,

I spotted this new v6 in my inbox and have rebased to it.

On Wed, Jan 09, 2019 at 01:01:48PM -0800, Bart Van Assche wrote:

> The changes compared to v5 are:
> - Modified zap_class() such that it doesn't try to free a list entry that
>   is already being freed.

I however have a question on this; this seems wrong. Once a list entry
is enqueued it should not be reachable anymore. If we can reach an entry
after call_rcu() happened, we've got a problem.

> - Added a patch that fixes an existing bug in add_chain_cache().
> - Improved the code that reports the size needed for lockdep data structures
>   further.
> - Rebased and retested this patch series on top of kernel v5.0-rc1.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-11 12:48 ` [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Peter Zijlstra
@ 2019-01-11 15:55   ` Bart Van Assche
  2019-01-11 16:55     ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2019-01-11 15:55 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, longman, johannes.berg, linux-kernel

On Fri, 2019-01-11 at 13:48 +0100, Peter Zijlstra wrote:
> I spotted this new v6 in my inbox and have rebased to it.

Thanks!

> On Wed, Jan 09, 2019 at 01:01:48PM -0800, Bart Van Assche wrote:
> 
> > The changes compared to v5 are:
> > - Modified zap_class() such that it doesn't try to free a list entry that
> >   is already being freed.
> 
> I however have a question on this; this seems wrong. Once a list entry
> is enqueued it should not be reachable anymore. If we can reach an entry
> after call_rcu() happened, we've got a problem.

Apparently I confused you - sorry that I was not more clear. What I meant is
that I changed a single if test into a loop. The graph lock is held while that
loop is being executed so the code below is serialized against the code called
from inside the RCU callback:

@@ -4574,8 +4563,9 @@ static void zap_class(struct pending_free *pf, struct lock
_class *class)
                entry = list_entries + i;
                if (entry->class != class && entry->links_to != class)
                        continue;
-               if (__test_and_set_bit(i, pf->list_entries_being_freed))
+               if (list_entry_being_freed(i))
                        continue;
+               set_bit(i, pf->list_entries_being_freed);
                nr_list_entries--;
                list_del_rcu(&entry->entry);
        }

Please let me know if you need more information.

Bart.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-11 15:55   ` Bart Van Assche
@ 2019-01-11 16:55     ` Peter Zijlstra
  2019-01-11 17:01       ` Bart Van Assche
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2019-01-11 16:55 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, longman, johannes.berg, linux-kernel

On Fri, Jan 11, 2019 at 07:55:03AM -0800, Bart Van Assche wrote:
> On Fri, 2019-01-11 at 13:48 +0100, Peter Zijlstra wrote:
> > I spotted this new v6 in my inbox and have rebased to it.
> 
> Thanks!
> 
> > On Wed, Jan 09, 2019 at 01:01:48PM -0800, Bart Van Assche wrote:
> > 
> > > The changes compared to v5 are:
> > > - Modified zap_class() such that it doesn't try to free a list entry that
> > >   is already being freed.
> > 
> > I however have a question on this; this seems wrong. Once a list entry
> > is enqueued it should not be reachable anymore. If we can reach an entry
> > after call_rcu() happened, we've got a problem.
> 
> Apparently I confused you - sorry that I was not more clear. What I meant is
> that I changed a single if test into a loop. The graph lock is held while that
> loop is being executed so the code below is serialized against the code called
> from inside the RCU callback:
> 
> @@ -4574,8 +4563,9 @@ static void zap_class(struct pending_free *pf, struct lock
> _class *class)
>                 entry = list_entries + i;
>                 if (entry->class != class && entry->links_to != class)
>                         continue;
> -               if (__test_and_set_bit(i, pf->list_entries_being_freed))
> +               if (list_entry_being_freed(i))
>                         continue;

Yes, it is the above change that caught my eye.. That checks _both_ your
lists. One is your current open one (@pf), but the other could already
be pending the call_rcu().

So my question is why do we have to check both ?! How come the old code,
that only checked @pf, is wrong?

> +               set_bit(i, pf->list_entries_being_freed);
>                 nr_list_entries--;
>                 list_del_rcu(&entry->entry);
>         }
> 
> Please let me know if you need more information.
> 
> Bart.
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-11 16:55     ` Peter Zijlstra
@ 2019-01-11 17:01       ` Bart Van Assche
  2019-01-14 12:52         ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2019-01-11 17:01 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, longman, johannes.berg, linux-kernel

On Fri, 2019-01-11 at 17:55 +0100, Peter Zijlstra wrote:
> On Fri, Jan 11, 2019 at 07:55:03AM -0800, Bart Van Assche wrote:
> > On Fri, 2019-01-11 at 13:48 +0100, Peter Zijlstra wrote:
> > > I spotted this new v6 in my inbox and have rebased to it.
> > 
> > Thanks!
> > 
> > > On Wed, Jan 09, 2019 at 01:01:48PM -0800, Bart Van Assche wrote:
> > > 
> > > > The changes compared to v5 are:
> > > > - Modified zap_class() such that it doesn't try to free a list entry that
> > > >   is already being freed.
> > > 
> > > I however have a question on this; this seems wrong. Once a list entry
> > > is enqueued it should not be reachable anymore. If we can reach an entry
> > > after call_rcu() happened, we've got a problem.
> > 
> > Apparently I confused you - sorry that I was not more clear. What I meant is
> > that I changed a single if test into a loop. The graph lock is held while that
> > loop is being executed so the code below is serialized against the code called
> > from inside the RCU callback:
> > 
> > @@ -4574,8 +4563,9 @@ static void zap_class(struct pending_free *pf, struct lock
> > _class *class)
> >                 entry = list_entries + i;
> >                 if (entry->class != class && entry->links_to != class)
> >                         continue;
> > -               if (__test_and_set_bit(i, pf->list_entries_being_freed))
> > +               if (list_entry_being_freed(i))
> >                         continue;
> 
> Yes, it is the above change that caught my eye.. That checks _both_ your
> lists. One is your current open one (@pf), but the other could already
> be pending the call_rcu().
> 
> So my question is why do we have to check both ?! How come the old code,
> that only checked @pf, is wrong?
> 
> > +               set_bit(i, pf->list_entries_being_freed);
> >                 nr_list_entries--;
> >                 list_del_rcu(&entry->entry);
> >         }

The list_del_rcu() call must only happen once. I ran into complaints reporting that
the list_del_rcu() call triggered list corruption. This change made these complaints
disappear.

Bart.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-11 17:01       ` Bart Van Assche
@ 2019-01-14 12:52         ` Peter Zijlstra
  2019-01-14 16:52           ` Bart Van Assche
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2019-01-14 12:52 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: mingo, tj, longman, johannes.berg, linux-kernel

On Fri, Jan 11, 2019 at 09:01:41AM -0800, Bart Van Assche wrote:
> On Fri, 2019-01-11 at 17:55 +0100, Peter Zijlstra wrote:
> > On Fri, Jan 11, 2019 at 07:55:03AM -0800, Bart Van Assche wrote:
> > > On Fri, 2019-01-11 at 13:48 +0100, Peter Zijlstra wrote:
> > > > I spotted this new v6 in my inbox and have rebased to it.
> > > 
> > > Thanks!
> > > 
> > > > On Wed, Jan 09, 2019 at 01:01:48PM -0800, Bart Van Assche wrote:
> > > > 
> > > > > The changes compared to v5 are:
> > > > > - Modified zap_class() such that it doesn't try to free a list entry that
> > > > >   is already being freed.
> > > > 
> > > > I however have a question on this; this seems wrong. Once a list entry
> > > > is enqueued it should not be reachable anymore. If we can reach an entry
> > > > after call_rcu() happened, we've got a problem.
> > > 
> > > Apparently I confused you - sorry that I was not more clear. What I meant is
> > > that I changed a single if test into a loop. The graph lock is held while that
> > > loop is being executed so the code below is serialized against the code called
> > > from inside the RCU callback:
> > > 
> > > @@ -4574,8 +4563,9 @@ static void zap_class(struct pending_free *pf, struct lock
> > > _class *class)
> > >                 entry = list_entries + i;
> > >                 if (entry->class != class && entry->links_to != class)
> > >                         continue;
> > > -               if (__test_and_set_bit(i, pf->list_entries_being_freed))
> > > +               if (list_entry_being_freed(i))
> > >                         continue;
> > 
> > Yes, it is the above change that caught my eye.. That checks _both_ your
> > lists. One is your current open one (@pf), but the other could already
> > be pending the call_rcu().
> > 
> > So my question is why do we have to check both ?! How come the old code,
> > that only checked @pf, is wrong?
> > 
> > > +               set_bit(i, pf->list_entries_being_freed);
> > >                 nr_list_entries--;
> > >                 list_del_rcu(&entry->entry);
> > >         }
> 
> The list_del_rcu() call must only happen once. 

Yes; obviously. But if we need to check all @pf's, that means the entry
is still reachable after a single reset_lock()/free_key_range(), which
is a bug.

> I ran into complaints reporting that
> the list_del_rcu() call triggered list corruption. This change made these complaints
> disappear.

I'm saying this solution is buggy, because that means the entry is still
reachable after we do call_rcu() (which is a straight up UAF).

Also put it differently, what guarantees checking those two @pf's is
sufficient. Suppose your earlier @pf already did the RCU callback and
freed stuff while the second is in progress. Then you're poking into
dead space.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-14 12:52         ` Peter Zijlstra
@ 2019-01-14 16:52           ` Bart Van Assche
  2019-01-18  9:48             ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2019-01-14 16:52 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, tj, longman, johannes.berg, linux-kernel

On Mon, 2019-01-14 at 13:52 +0100, Peter Zijlstra wrote:
> On Fri, Jan 11, 2019 at 09:01:41AM -0800, Bart Van Assche wrote:
> > The list_del_rcu() call must only happen once. 
> 
> Yes; obviously. But if we need to check all @pf's, that means the entry
> is still reachable after a single reset_lock()/free_key_range(), which
> is a bug.
> 
> > I ran into complaints reporting that
> > the list_del_rcu() call triggered list corruption. This change made these complaints
> > disappear.
> 
> I'm saying this solution is buggy, because that means the entry is still
> reachable after we do call_rcu() (which is a straight up UAF).
> 
> Also put it differently, what guarantees checking those two @pf's is
> sufficient. Suppose your earlier @pf already did the RCU callback and
> freed stuff while the second is in progress. Then you're poking into
> dead space.

zap_class() only examines elements of the list_entries[] array for which the
corresponding bit in list_entries_in_use has been set. The RCU callback clears 
the bits in the list_entries_in_use that correspond to elements that have been
freed. The graph lock serializes zap_class() calls and the code inside the
RCU callback. So it's not clear to me why you are claiming that zap_class()
would trigger a use-after-free?

Bart.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-14 16:52           ` Bart Van Assche
@ 2019-01-18  9:48             ` Peter Zijlstra
  2019-01-19  2:34               ` Bart Van Assche
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2019-01-18  9:48 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Paul McKenney

On Mon, Jan 14, 2019 at 08:52:33AM -0800, Bart Van Assche wrote:
> On Mon, 2019-01-14 at 13:52 +0100, Peter Zijlstra wrote:
> > On Fri, Jan 11, 2019 at 09:01:41AM -0800, Bart Van Assche wrote:
> > > The list_del_rcu() call must only happen once. 
> > 
> > Yes; obviously. But if we need to check all @pf's, that means the entry
> > is still reachable after a single reset_lock()/free_key_range(), which
> > is a bug.
> > 
> > > I ran into complaints reporting that
> > > the list_del_rcu() call triggered list corruption. This change made these complaints
> > > disappear.
> > 
> > I'm saying this solution is buggy, because that means the entry is still
> > reachable after we do call_rcu() (which is a straight up UAF).
> > 
> > Also put it differently, what guarantees checking those two @pf's is
> > sufficient. Suppose your earlier @pf already did the RCU callback and
> > freed stuff while the second is in progress. Then you're poking into
> > dead space.
> 
> zap_class() only examines elements of the list_entries[] array for which the
> corresponding bit in list_entries_in_use has been set. The RCU callback clears 
> the bits in the list_entries_in_use that correspond to elements that have been
> freed. The graph lock serializes zap_class() calls and the code inside the
> RCU callback. So it's not clear to me why you are claiming that zap_class()
> would trigger a use-after-free?

The scenario is like:


CPU0					CPU1					CPU2

lockdep_reset_lock_reg()
  pf = get_pending_free_lock() // pf[0]
  __lockdep_reset_lock(pf)
    zap_class()
  schedule_free_zapped_classes(pf)
    call_rcu()


  // here is where the objects 'freed' in zap_class()
  // can still be used through references obtained
  // __before__ we did call_rcu().


					lockdep_reset_lock_reg()
					  pf = get_pending_free_lock() // pf[1]
					  __lockdep_reset_lock(pf)
					    zap_class()
					      list_entry_being_freed()
						// checks: pf[0]

						// this is a problem, it
						// should _NEVER_ match
						// anything from pf[0]

						// those entries should
						// be unreachable,
						// otherwise:


										rcu_read_lock()
										entry = rcu_dereference()

<rcu-callback>
  free_zapped_classes()

										entry->class // UAF, just freed by rcu-callback

										rcu_read_unlock()




Now, arguably, I'm having a really hard time actually finding the RCU user of
lock_list::entry; the comment in add_lock_to_list() seems to mention
look_up_lock_class(), but the only RCU usage there is the
lock_class::hash_entry, not lock_list::entry.

If lock_class is indeed not RCU used, that would simplify things. Please
double check.

But in any case, the normal RCU pattern is:

lock()
add-to-data-structure()
unlock()

				rcu_read_lock()
				obj = obtain-from-data-structure();

lock()
remove-from-data-structure()
  call_rcu()
unlock();

				use(obj);
				rcu_read_unlock();


<rcu-callback>
  actually-free-obj()



Fundamentally, RCU delays the callback to the point where the last observer
that started before call_rcu() has finished, and no later (in practice it often
is much later, but no guarantees there). So being able to reach an object
after you did call_rcu() on it is a fundamental fail.
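
Concretely, in kernel C the pattern above looks something like this minimal
sketch ('struct obj', obj_lock and obj_list are made-up names, not anything in
lockdep):

    struct obj {
            struct list_head node;
            struct rcu_head rcu;
            int data;
    };

    static LIST_HEAD(obj_list);
    static DEFINE_SPINLOCK(obj_lock);

    static void obj_add(struct obj *o)
    {
            spin_lock(&obj_lock);
            list_add_rcu(&o->node, &obj_list);      /* add-to-data-structure */
            spin_unlock(&obj_lock);
    }

    static void obj_free_cb(struct rcu_head *rcu)   /* <rcu-callback> */
    {
            kfree(container_of(rcu, struct obj, rcu));  /* actually-free-obj */
    }

    static void obj_remove(struct obj *o)
    {
            spin_lock(&obj_lock);
            list_del_rcu(&o->node);                 /* remove-from-data-structure */
            call_rcu(&o->rcu, obj_free_cb);         /* actual free is deferred */
            spin_unlock(&obj_lock);
    }

    static int obj_read_sum(void)
    {
            struct obj *o;
            int sum = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(o, &obj_list, node)
                    sum += o->data;                 /* use(obj) */
            rcu_read_unlock();

            return sum;
    }

Once list_del_rcu() has run under the lock, new readers can no longer find the
object, so call_rcu() only has to wait for the readers that started before the
removal.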

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-18  9:48             ` Peter Zijlstra
@ 2019-01-19  2:34               ` Bart Van Assche
  2019-02-01 12:15                 ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2019-01-19  2:34 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Paul McKenney

On 1/18/19 1:48 AM, Peter Zijlstra wrote:
> On Mon, Jan 14, 2019 at 08:52:33AM -0800, Bart Van Assche wrote:
>> On Mon, 2019-01-14 at 13:52 +0100, Peter Zijlstra wrote:
>>> On Fri, Jan 11, 2019 at 09:01:41AM -0800, Bart Van Assche wrote:
>>>> The list_del_rcu() call must only happen once.
>>>
>>> Yes; obviously. But if we need to check all @pf's, that means the entry
>>> is still reachable after a single reset_lock()/free_key_range(), which
>>> is a bug.
>>>
>>>> I ran into complaints reporting that
>>>> the list_del_rcu() call triggered list corruption. This change made these complaints
>>>> disappear.
>>>
>>> I'm saying this solution is buggy, because that means the entry is still
>>> reachable after we do call_rcu() (which is a straight-up UAF).
>>>
>>> Also, to put it differently: what guarantees that checking those two @pf's is
>>> sufficient? Suppose your earlier @pf already did the RCU callback and
>>> freed stuff while the second is in progress. Then you're poking into
>>> dead space.
>>
>> zap_class() only examines elements of the list_entries[] array for which the
>> corresponding bit in list_entries_in_use has been set. The RCU callback clears
>> the bits in list_entries_in_use that correspond to the elements that have been
>> freed. The graph lock serializes zap_class() calls and the code inside the
>> RCU callback. So it's not clear to me why you claim that zap_class()
>> would trigger a use-after-free.
> 
> The scenario is like:
> 
> 
> CPU0					CPU1					CPU2
> 
> lockdep_reset_lock_reg()
>    pf = get_pending_free_lock() // pf[0]
>    __lockdep_reset_lock(pf)
>      zap_class()
>    schedule_free_zapped_classes(pf)
>      call_rcu()
> 
> 
>    // here is where the objects 'freed' in zap_class()
>    // can still be used through references obtained
>    // __before__ we did call_rcu().
> 
> 
> 					lockdep_reset_lock_reg()
> 					  pf = get_pending_free_lock() // pf[1]
> 					  __lockdep_reset_lock(pf)
> 					    zap_class()
> 					      list_entry_being_freed()
> 						// checks: pf[0]
> 
> 						// this is a problem, it
> 						// should _NEVER_ match
> 						// anything from pf[0]
> 
> 						// those entries should
> 						// be unreachable,
> 						// otherwise:
> 
> 
> 										rcu_read_lock()
> 										entry = rcu_dereference()
> 
> <rcu-callback>
>    free_zapped_classes()
> 
> 										entry->class // UAF, just freed by rcu-callback
> 
> 										rcu_read_unlock()
> 
> 
> 
> 
> Now, arguably, I'm having a really hard time actually finding the RCU user of
> lock_list::entry. The comment in add_lock_to_list() seems to mention
> look_up_lock_class(), but the only RCU usage there is
> lock_class::hash_entry, not lock_list::entry.
> 
> If lock_class is indeed not RCU-used, that would simplify things. Please
> double-check.
> 
> But in any case, the normal RCU pattern is:
> 
> lock()
> add-to-data-structure()
> unlock()
> 
> 				rcu_read_lock()
> 				obj = obtain-from-data-structure();
> 
> lock()
> remove-from-data-structure()
>    call_rcu()
> unlock();
> 
> 				use(obj);
> 				rcu_read_unlock();
> 
> 
> <rcu-callback>
>    actually-free-obj()
> 
> 
> 
> Fundamentally, RCU delays the callback to the point where the last observer
> that started before call_rcu() has finished, and no later (in practice it often
> is much later, but no guarantees there). So being able to reach an object
> after you did call_rcu() on it is a fundamental fail.

Hi Peter,

I agree with what you wrote. The only code I know of that accesses list
entries using RCU is the __bfs() function. In that function I found the
following loop:

	list_for_each_entry_rcu(entry, head, entry) { [ ... ] }

Since zap_class() calls list_del_rcu(&entry->entry), since a grace period
elapses between the call_rcu() invocation and the RCU callback function,
since an RCU read lock must be held around RCU list iterations, and since
sleeping is not allowed while holding an RCU read lock, I think there is
no risk that __bfs() will examine a list entry after it has been freed.
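
For reference, the reader side I am relying on has roughly this shape (a
simplified sketch; find_class_sketch() and 'target' are placeholders, not
actual lockdep code):

    /* Simplified sketch of an RCU-protected list walk like the one in __bfs(). */
    static bool find_class_sketch(struct list_head *head,
                                  struct lock_class *target)
    {
            struct lock_list *entry;
            bool found = false;

            rcu_read_lock();        /* no sleeping until rcu_read_unlock() */
            list_for_each_entry_rcu(entry, head, entry) {
                    /*
                     * Even if zap_class() concurrently does
                     * list_del_rcu(&entry->entry), this entry cannot be
                     * recycled before rcu_read_unlock(): the RCU callback
                     * only runs after a full grace period.
                     */
                    if (entry->class == target) {
                            found = true;
                            break;
                    }
            }
            rcu_read_unlock();

            return found;
    }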

Bart.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-01-19  2:34               ` Bart Van Assche
@ 2019-02-01 12:15                 ` Peter Zijlstra
  2019-02-03 17:36                   ` Bart Van Assche
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2019-02-01 12:15 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Paul McKenney

On Fri, Jan 18, 2019 at 06:34:20PM -0800, Bart Van Assche wrote:

> I agree with what you wrote. The only code I know of that accesses list
> entries using RCU is the __bfs() function. In that function I found the
> following loop:
> 
> 	list_for_each_entry_rcu(entry, head, entry) { [ ... ] }

Thing is; I can't seem to find any __bfs() usage outside of graph_lock.

  count_{fwd,bwd}_deps() - takes graph lock

  check_{noncircular,redundant}() - called from check_prev_add() <-
  check_prevs_add() <- validate_chain() which takes graph lock

  find_usage{,_fwd,_bwd}
    <- check_usage() <- check_irq_usage() <- check_prev_add_irq() <-
    check_prev_add <- check_prevs_add() <- validate_chain() which takes
    graph lock

    <- check_usage_{fwd,bwd}() <- mark_lock_irq() <- mark_lock() which
    takes graph lock

Or did I miss something? If there are no __bfs() users outside of graph
lock, then we can simply remove that _rcu from the iteration, and
simplify all that.
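
IOW, something like the following sketch (process_dep() is a made-up stand-in
for the actual BFS step):

    /* Made-up placeholder for whatever the BFS step does with each entry. */
    static void process_dep(struct lock_list *entry) { }

    /*
     * Sketch: with every __bfs() caller holding the graph lock, the walk
     * no longer needs the RCU variant of the list iterator.
     */
    static void walk_deps_sketch(struct list_head *head)
    {
            struct lock_list *entry;

            /* caller holds the graph lock */
            list_for_each_entry(entry, head, entry)
                    process_dep(entry);
    }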

> Since zap_class() calls list_del_rcu(&entry->entry), since a grace period
> occurs between the call_rcu() invocation and the RCU callback function,
> since at least an RCU reader lock must be held around RCU loops and since
> sleeping is not allowed while holding an RCU read lock I think there is
> no risk that __bfs() will examine a list entry after it has been freed.

So you agree that list_entry_being_freed() should only check the current
pf?



Also: yes, I seem to have completely misplaced your #14. I've no idea
how I managed to lose that patch; it was certainly not intentional, sorry
about that.



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-02-01 12:15                 ` Peter Zijlstra
@ 2019-02-03 17:36                   ` Bart Van Assche
  2019-02-08 11:43                     ` Will Deacon
  0 siblings, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2019-02-03 17:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, tj, longman, johannes.berg, linux-kernel, Paul McKenney

On 2/1/19 4:15 AM, Peter Zijlstra wrote:
> On Fri, Jan 18, 2019 at 06:34:20PM -0800, Bart Van Assche wrote:
>> I agree with what you wrote. The only code I know of that accesses list
>> entries using RCU is the __bfs() function. In that function I found the
>> following loop:
>>
>> 	list_for_each_entry_rcu(entry, head, entry) { [ ... ] }
> 
> Thing is; I can't seem to find any __bfs() usage outside of graph_lock.
> 
>    count_{fwd,bwd}_deps() - takes graph lock
> 
>    check_{noncircular,redundant}() - called from check_prev_add() <-
>    check_prevs_add() <- validate_chain() which takes graph lock
> 
>    find_usage{,_fwd,_bwd}
>      <- check_usage() <- check_irq_usage() <- check_prev_add_irq() <-
>      check_prev_add <- check_prevs_add() <- validate_chain() which takes
>      graph lock
> 
>      <- check_usage_{fwd,bwd}() <- mark_lock_irq() <- mark_lock() which
>      takes graph lock
> 
> Or did I miss something? If there are no __bfs() users outside of graph
> lock, then we can simply remove that _rcu from the iteration, and
> simplify all that.

Every time I make a single change to the lockdep code I have to rerun my 
test case for a week to make sure that no regressions have been 
introduced. In other words, I can make further changes but that could 
take some time. Do you want me to look into this simplification now or 
after this patch series has gone upstream?

>> Since zap_class() calls list_del_rcu(&entry->entry), since a grace period
>> occurs between the call_rcu() invocation and the RCU callback function,
>> since at least an RCU reader lock must be held around RCU loops and since
>> sleeping is not allowed while holding an RCU read lock I think there is
>> no risk that __bfs() will examine a list entry after it has been freed.
> 
> So you agree that list_entry_being_freed() should only check the current
> pf?

Sorry if I wasn't clear enough. In a previous e-mail I tried to explain 
that both pf's have to be checked. Another way to explain that is as 
follows:
- Each list entry has one of the following states: free, in use or being
   freed.
- "Free" means that the corresponding bit in the list_entries_in_use
   bitmap has not been set.
- "In use" means that the corresponding bit in the list_entries_in_use
   bitmap has been set and that none of the corresponding bits in the
   list_entries_being_freed bitmaps have been set.
- "Being freed" means that the corresponding bit in one of the
   list_entries_being_freed bitmaps has been set.

Since it can happen that multiple elements of the pending_free[] array 
are in the state where call_rcu() has been called but the RCU callback 
function has not yet been called, I think that zap_class() must check 
the list_entries_being_freed bitmaps in all pending_free[] array elements.
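
In code, the check I have in mind looks roughly like this (a sketch that
follows the description above rather than the exact patch):

    /* Two pending_free[] slots, each with its own being-freed bitmap. */
    static struct pending_free {
            DECLARE_BITMAP(list_entries_being_freed, MAX_LOCKDEP_ENTRIES);
    } pending_free[2];

    /* Called with the graph lock held. */
    static bool list_entry_being_freed_sketch(int idx)
    {
            int i;

            /*
             * "Being freed" means the bit is set in the being-freed bitmap
             * of *any* pending_free[] element, because both slots can have
             * a call_rcu() outstanding at the same time.
             */
            for (i = 0; i < ARRAY_SIZE(pending_free); i++)
                    if (test_bit(idx, pending_free[i].list_entries_being_freed))
                            return true;

            return false;
    }

With this check in zap_class(), an entry that is already queued in either slot
is skipped, so list_del_rcu() does not run a second time on it.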

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-02-03 17:36                   ` Bart Van Assche
@ 2019-02-08 11:43                     ` Will Deacon
  2019-02-08 16:31                       ` Bart Van Assche
  2019-02-13 22:32                       ` Bart Van Assche
  0 siblings, 2 replies; 30+ messages in thread
From: Will Deacon @ 2019-02-08 11:43 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Peter Zijlstra, mingo, tj, longman, johannes.berg, linux-kernel,
	Paul McKenney

Hi Bart, Peter,

On Sun, Feb 03, 2019 at 09:36:38AM -0800, Bart Van Assche wrote:
> On 2/1/19 4:15 AM, Peter Zijlstra wrote:
> > On Fri, Jan 18, 2019 at 06:34:20PM -0800, Bart Van Assche wrote:
> > > I agree with what you wrote. The only code I know of that accesses list
> > > entries using RCU is the __bfs() function. In that function I found the
> > > following loop:
> > > 
> > > 	list_for_each_entry_rcu(entry, head, entry) { [ ... ] }
> > 
> > Thing is; I can't seem to find any __bfs() usage outside of graph_lock.
> > 
> >    count_{fwd,bwd}_deps() - takes graph lock
> > 
> >    check_{noncircular,redundant}() - called from check_prev_add() <-
> >    check_prevs_add() <- validate_chain() which takes graph lock
> > 
> >    find_usage{,_fwd,_bwd}
> >      <- check_usage() <- check_irq_usage() <- check_prev_add_irq() <-
> >      check_prev_add <- check_prevs_add() <- validate_chain() which takes
> >      graph lock
> > 
> >      <- check_usage_{fwd,bwd}() <- mark_lock_irq() <- mark_lock() which
> >      takes graph lock
> > 
> > Or did I miss something? If there are no __bfs() users outside of graph
> > lock, then we can simply remove that _rcu from the iteration, and
> > simplify all that.
> 
> Every time I make a single change to the lockdep code I have to rerun my
> test case for a week to make sure that no regressions have been introduced.
> In other words, I can make further changes but that could take some time. Do
> you want me to look into this simplification now or after this patch series
> has gone upstream?
> 
> > > Since zap_class() calls list_del_rcu(&entry->entry), since a grace period
> > > occurs between the call_rcu() invocation and the RCU callback function,
> > > since at least an RCU reader lock must be held around RCU loops and since
> > > sleeping is not allowed while holding an RCU read lock I think there is
> > > no risk that __bfs() will examine a list entry after it has been freed.
> > 
> > So you agree that list_entry_being_freed() should only check the current
> > pf?
> 
> Sorry if I wasn't clear enough. In a previous e-mail I tried to explain that
> both pf's have to be checked. Another way to explain that is as follows:
> - Each list entry has one of the following states: free, in use or being
>   freed.
> - "Free" means that the corresponding bit in the list_entries_in_use
>   bitmap has not been set.
> - "In use" means that the corresponding bit in the list_entries_in_use
>   bitmap has been set and that none of the corresponding bits in the
>   list_entries_being_freed bitmaps have been set.
> - "Being freed" means that the corresponding bit in one of the
>   list_entries_being_freed bitmaps has been set.
> 
> Since it can happen that multiple elements of the pending_free[] array are
> in the state where call_rcu() has been called but the RCU callback function
> has not yet been called, I think that zap_class() must check the
> list_entries_being_freed bitmaps in all pending_free[] array elements.

I've also been trying to understand why it's necessary to check both of the
pending_free entries, and I'm still struggling somewhat. It's true that the
wakeup in get_pending_free_lock() could lead to both entries being used
without the RCU callback running in between; however, in this scenario
any list entries marked for freeing in the first pf will have been unhashed
and therefore made unreachable to look_up_lock_class().

So I think the concern remains that entries are somehow remaining visible
after being zapped.

You mentioned earlier in the thread that people actually complained about
list corruption if you only checked the current pf:

  | The list_del_rcu() call must only happen once. I ran into complaints
  | reporting that the list_del_rcu() call triggered list corruption. This
  | change made these complaints disappear.

Do you have any more details about these complaints (e.g. kernel logs etc)?
Failing that, any idea how to reproduce them?

Thanks,

Will

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-02-08 11:43                     ` Will Deacon
@ 2019-02-08 16:31                       ` Bart Van Assche
  2019-02-13 22:32                       ` Bart Van Assche
  1 sibling, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-02-08 16:31 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, mingo, tj, longman, johannes.berg, linux-kernel,
	Paul McKenney

On Fri, 2019-02-08 at 11:43 +0000, Will Deacon wrote:
> I've also been trying to understand why it's necessary to check both of the
> pending_free entries, and I'm still struggling somewhat. It's true that the
> wakeup in get_pending_free_lock() could lead to both entries being used
> without the RCU callback running in between; however, in this scenario
> any list entries marked for freeing in the first pf will have been unhashed
> and therefore made unreachable to look_up_lock_class().
> 
> So I think the concern remains that entries are somehow remaining visible
> after being zapped.
> 
> You mentioned earlier in the thread that people actually complained about
> list corruption if you only checked the current pf:
> 
>   | The list_del_rcu() call must only happen once. I ran into complaints
>   | reporting that the list_del_rcu() call triggered list corruption. This
>   | change made these complaints disappear.
> 
> Do you have any more details about these complaints (e.g. kernel logs etc)?
> Failing that, any idea how to reproduce them?

Hi Will,

The approach I use to test this patch series is to run the following shell
code for several days:

    git clone https://github.com/osandov/blktests/
    cd blktests
    make
    while ./check -q srp; do :; done

This test not only triggers plenty of lock and unlock calls but also
frequently causes kernel modules to be loaded and unloaded.

The oldest kernel logs I have in the VM I use for testing this patch series
are four weeks old. Sorry but that means that these logs do not go back far
enough to retrieve the list corruption issue I mentioned in a previous
e-mail.

Regarding the concern that "entries somehow remain visible after being
zapped": in a previous version of this patch series a struct list_head was
added in struct lock_list. That list head was used to maintain a linked list
of all elements of the list_entries[] array that are in use. zap_class()
used that list to iterate over all list entries that are in use. With that
approach it was not necessary to check in zap_class() whether or not a list
entry was being removed because it got removed from that list before
zap_class() was called again. I removed that list head because Peter asked
me to reduce the amount of memory required at runtime. Using one bitmap to
track list entries that are in use and using two bitmaps to track list
entries that are being freed implies that code that iterates over all
list entries that are in use (zap_class()) must check all three bitmaps. The
only alternative I see when using bitmaps is that zap_class() clears the
bits in list_entries_in_use for the entries that are being freed and that
alloc_list_entry() checks the two bitmaps of list entries that are being
freed. I'm not sure whether one of these two approaches is really better
than the other.
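
The second alternative would look roughly like this (a sketch only; it reuses
the bitmap and array names from this series and leaves out the bookkeeping of
the real allocator):

    /*
     * Sketch of the second alternative: alloc_list_entry() skips entries
     * that zap_class() has released but whose RCU callback has not run yet.
     */
    static struct lock_list *alloc_list_entry_sketch(void)
    {
            int i;

            for (i = 0; i < MAX_LOCKDEP_ENTRIES; i++) {
                    if (test_bit(i, list_entries_in_use))
                            continue;
                    if (test_bit(i, pending_free[0].list_entries_being_freed) ||
                        test_bit(i, pending_free[1].list_entries_being_freed))
                            continue;       /* still waiting for the RCU callback */
                    __set_bit(i, list_entries_in_use);
                    return list_entries + i;
            }

            return NULL;    /* out of list entries */
    }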

Bart.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys
  2019-02-08 11:43                     ` Will Deacon
  2019-02-08 16:31                       ` Bart Van Assche
@ 2019-02-13 22:32                       ` Bart Van Assche
  1 sibling, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2019-02-13 22:32 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, mingo, tj, longman, johannes.berg, linux-kernel,
	Paul McKenney

On Fri, 2019-02-08 at 11:43 +0000, Will Deacon wrote:
> I've also been trying to understand why it's necessary to check both of the
> pending_free entries, and I'm still struggling somewhat. It's true that the
> wakeup in get_pending_free_lock() could lead to both entries being used
> without the RCU callback running in between; however, in this scenario
> any list entries marked for freeing in the first pf will have been unhashed
> and therefore made unreachable to look_up_lock_class().
> 
> So I think the concern remains that entries are somehow remaining visible
> after being zapped.
> 
> You mentioned earlier in the thread that people actually complained about
> list corruption if you only checked the current pf:
> 
>   | The list_del_rcu() call must only happen once. I ran into complaints
>   | reporting that the list_del_rcu() call triggered list corruption. This
>   | change made these complaints disappear.
> 
> Do you have any more details about these complaints (e.g. kernel logs etc)?
> Failing that, any idea how to reproduce them?

Hi Will,

Since elements of the list_entries[] array are always accessed with the graph
lock held, how about removing the list_entries_being_freed bitmap and making
zap_class() clear the appropriate bits in the list_entries_in_use bitmap?
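
I.e., roughly the following sketch (assuming, as discussed above, that no
lockless readers of list_entries[] remain):

    /*
     * Sketch of the proposal above: all accesses to list_entries[] happen
     * under the graph lock, so zap_class() can release entries directly by
     * clearing their in-use bits; no separate being-freed bitmap is needed.
     */
    static void zap_class_locked_sketch(struct lock_class *class)
    {
            int i;

            for_each_set_bit(i, list_entries_in_use, MAX_LOCKDEP_ENTRIES) {
                    struct lock_list *e = list_entries + i;

                    if (e->class != class && e->links_to != class)
                            continue;
                    list_del_rcu(&e->entry);
                    __clear_bit(i, list_entries_in_use);
            }
    }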

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2019-02-13 22:32 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-09 21:01 [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 01/16] locking/lockdep: Fix reported required memory size Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 02/16] locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 03/16] locking/lockdep: Make zap_class() remove all matching lock order entries Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 04/16] locking/lockdep: Reorder struct lock_class members Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 05/16] locking/lockdep: Initialize the locks_before and locks_after lists earlier Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 06/16] locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock() Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 07/16] locking/lockdep: Make it easy to detect whether or not inside a selftest Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 08/16] locking/lockdep: Free lock classes that are no longer in use Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 09/16] locking/lockdep: Reuse list entries " Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 10/16] locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count() Bart Van Assche
2019-01-09 21:01 ` [PATCH v6 11/16] locking/lockdep: Reuse lock chains that have been freed Bart Van Assche
2019-01-09 21:02 ` [PATCH v6 12/16] locking/lockdep: Check data structure consistency Bart Van Assche
2019-01-09 21:02 ` [PATCH v6 13/16] locking/lockdep: Verify whether lock objects are small enough to be used as class keys Bart Van Assche
2019-01-09 21:02 ` [PATCH v6 14/16] locking/lockdep: Add support for dynamic keys Bart Van Assche
2019-01-09 21:02 ` [PATCH v6 15/16] kernel/workqueue: Use dynamic lockdep keys for workqueues Bart Van Assche
2019-01-09 21:02 ` [PATCH v6 16/16] lockdep tests: Test dynamic key registration Bart Van Assche
2019-01-11 12:48 ` [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys Peter Zijlstra
2019-01-11 15:55   ` Bart Van Assche
2019-01-11 16:55     ` Peter Zijlstra
2019-01-11 17:01       ` Bart Van Assche
2019-01-14 12:52         ` Peter Zijlstra
2019-01-14 16:52           ` Bart Van Assche
2019-01-18  9:48             ` Peter Zijlstra
2019-01-19  2:34               ` Bart Van Assche
2019-02-01 12:15                 ` Peter Zijlstra
2019-02-03 17:36                   ` Bart Van Assche
2019-02-08 11:43                     ` Will Deacon
2019-02-08 16:31                       ` Bart Van Assche
2019-02-13 22:32                       ` Bart Van Assche
