linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 00/19] locking/lockdep: Add comments and make some code
@ 2019-03-18  8:57 Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void Yuyang Du
                   ` (18 more replies)
  0 siblings, 19 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Hi Peter,

I recently looked into some system hang issues and, while at it, tried to
use and understand lockdep. These patches are the result. I believe they
would have helped me, so hopefully they help others as well.

Changes from v1:
- Rebased the patch series.
- Added more no-functional-change patches.
- Removed the patch that printed zapped locks in lock chains; it was a
  band-aid, and the real problem was recently fixed by Bart.

Thanks,
Yuyang

--

Yuyang Du (19):
  locking/lockdep: Change all print_*() return type to void
  locking/lockdep: Add description and explanation in lockdep design doc
  locking/lockdep: Adjust lock usage bit character checks
  locking/lockdep: Remove useless conditional macro
  locking/lockdep: Adjust indents for function definitions
  locking/lockdep: Print the right depth for chain key collision
  locking/lockdep: Update obsolete struct field description
  locking/lockdep: Use lockdep_init_task for task initiation
    consistently
  locking/lockdep: Define INITIAL_CHAIN_KEY for chain keys to start with
  locking/lockdep: Change the range of class_idx in held_lock struct
  locking/lockdep: Remove unused argument in validate_chain()
  locking/lockdep: Update comment
  locking/lockdep: Remove unnecessary function pointer argument
  locking/lockdep: Change type of the element field in circular_queue
  locking/lockdep: Remove __cq_empty()
  locking/lockdep: Use function pointer to avoid constant checks
  locking/lockdep: Combine check_noncircular and check_redundant
  locking/lockdep: Update comments on dependency search
  locking/lockdep: Change if to else-if when checking bfs errors

 Documentation/locking/lockdep-design.txt |  89 +++--
 include/linux/lockdep.h                  |  26 +-
 init/init_task.c                         |   2 +
 kernel/fork.c                            |   3 -
 kernel/locking/lockdep.c                 | 573 +++++++++++++++++--------------
 kernel/locking/lockdep_internals.h       |   1 +
 6 files changed, 391 insertions(+), 303 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  9:45   ` Joe Perches
  2019-03-18  8:57 ` [PATCH v2 02/19] locking/lockdep: Add description and explanation in lockdep design doc Yuyang Du
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Since none of the print_*() functions' return values is used, change
their return type to void. No functional change.

Where a constant return value was propagated to the caller, this change
slightly improves readability, e.g.:

	print_x();
	return 0;

is definitely better than:

	return print_x(); /* where print_x() always returns 0 */
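The pattern is illustrated below with a standalone sketch; print_x and
the check_*() wrappers are placeholders for this illustration, not actual
lockdep functions:

```c
#include <assert.h>
#include <stdio.h>

static int debug_locks_silent = 1;	/* silence output for the sketch */

/* Before: the printer returned an invariable 0 that callers propagated. */
static int print_x_old(void)
{
	if (debug_locks_silent)
		return 0;
	printf("lockdep splat\n");
	return 0;
}

/* After: the printer returns void and callers return 0 explicitly. */
static void print_x_new(void)
{
	if (debug_locks_silent)
		return;
	printf("lockdep splat\n");
}

static int check_old(void)
{
	return print_x_old();	/* hides that the result is always 0 */
}

static int check_new(void)
{
	print_x_new();
	return 0;		/* the constant result is now explicit */
}
```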

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 222 ++++++++++++++++++++++++-----------------------
 1 file changed, 112 insertions(+), 110 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 34cdcbe..1c5e947 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1430,23 +1430,20 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
  * Print a dependency chain entry (this is only done when a deadlock
  * has been detected):
  */
-static noinline int
+static noinline void
 print_circular_bug_entry(struct lock_list *target, int depth)
 {
 	if (debug_locks_silent)
-		return 0;
+		return;
 	printk("\n-> #%u", depth);
 	print_lock_name(target->class);
 	printk(KERN_CONT ":\n");
 	print_stack_trace(&target->trace, 6);
-
-	return 0;
 }
 
-static void
-print_circular_lock_scenario(struct held_lock *src,
-			     struct held_lock *tgt,
-			     struct lock_list *prt)
+static void print_circular_lock_scenario(struct held_lock *src,
+					 struct held_lock *tgt,
+					 struct lock_list *prt)
 {
 	struct lock_class *source = hlock_class(src);
 	struct lock_class *target = hlock_class(tgt);
@@ -1497,7 +1494,7 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
  * When a circular dependency is detected, print the
  * header first:
  */
-static noinline int
+static noinline void
 print_circular_bug_header(struct lock_list *entry, unsigned int depth,
 			struct held_lock *check_src,
 			struct held_lock *check_tgt)
@@ -1505,7 +1502,7 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
 	struct task_struct *curr = current;
 
 	if (debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("======================================================\n");
@@ -1523,8 +1520,6 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
 	pr_warn("\nthe existing dependency chain (in reverse order) is:\n");
 
 	print_circular_bug_entry(entry, depth);
-
-	return 0;
 }
 
 static inline int class_equal(struct lock_list *entry, void *data)
@@ -1532,11 +1527,11 @@ static inline int class_equal(struct lock_list *entry, void *data)
 	return entry->class == data;
 }
 
-static noinline int print_circular_bug(struct lock_list *this,
-				struct lock_list *target,
-				struct held_lock *check_src,
-				struct held_lock *check_tgt,
-				struct stack_trace *trace)
+static noinline void print_circular_bug(struct lock_list *this,
+					struct lock_list *target,
+					struct held_lock *check_src,
+					struct held_lock *check_tgt,
+					struct stack_trace *trace)
 {
 	struct task_struct *curr = current;
 	struct lock_list *parent;
@@ -1544,10 +1539,10 @@ static noinline int print_circular_bug(struct lock_list *this,
 	int depth;
 
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
-		return 0;
+		return;
 
 	if (!save_trace(&this->trace))
-		return 0;
+		return;
 
 	depth = get_lock_depth(target);
 
@@ -1569,21 +1564,17 @@ static noinline int print_circular_bug(struct lock_list *this,
 
 	printk("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
-static noinline int print_bfs_bug(int ret)
+static noinline void print_bfs_bug(int ret)
 {
 	if (!debug_locks_off_graph_unlock())
-		return 0;
+		return;
 
 	/*
 	 * Breadth-first-search failed, graph got corrupted?
 	 */
 	WARN(1, "lockdep bfs error:%d\n", ret);
-
-	return 0;
 }
 
 static int noop_count(struct lock_list *entry, void *data)
@@ -1766,7 +1757,7 @@ static void print_lock_class_header(struct lock_class *class, int depth)
  */
 static void __used
 print_shortest_lock_dependencies(struct lock_list *leaf,
-				struct lock_list *root)
+				 struct lock_list *root)
 {
 	struct lock_list *entry = leaf;
 	int depth;
@@ -1788,8 +1779,6 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 		entry = get_lock_parent(entry);
 		depth--;
 	} while (entry && (depth >= 0));
-
-	return;
 }
 
 static void
@@ -1848,7 +1837,7 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 	printk("\n *** DEADLOCK ***\n\n");
 }
 
-static int
+static void
 print_bad_irq_dependency(struct task_struct *curr,
 			 struct lock_list *prev_root,
 			 struct lock_list *next_root,
@@ -1861,7 +1850,7 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 			 const char *irqclass)
 {
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("=====================================================\n");
@@ -1907,19 +1896,17 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 
 	pr_warn("\nthe dependencies between %s-irq-safe lock and the holding lock:\n", irqclass);
 	if (!save_trace(&prev_root->trace))
-		return 0;
+		return;
 	print_shortest_lock_dependencies(backwards_entry, prev_root);
 
 	pr_warn("\nthe dependencies between the lock to be acquired");
 	pr_warn(" and %s-irq-unsafe lock:\n", irqclass);
 	if (!save_trace(&next_root->trace))
-		return 0;
+		return;
 	print_shortest_lock_dependencies(forwards_entry, next_root);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
 static int
@@ -1936,23 +1923,28 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 
 	this.class = hlock_class(prev);
 	ret = find_usage_backwards(&this, bit_backwards, &target_entry);
-	if (ret < 0)
-		return print_bfs_bug(ret);
+	if (ret < 0) {
+		print_bfs_bug(ret);
+		return 0;
+	}
 	if (ret == 1)
 		return ret;
 
 	that.parent = NULL;
 	that.class = hlock_class(next);
 	ret = find_usage_forwards(&that, bit_forwards, &target_entry1);
-	if (ret < 0)
-		return print_bfs_bug(ret);
+	if (ret < 0) {
+		print_bfs_bug(ret);
+		return 0;
+	}
 	if (ret == 1)
 		return ret;
 
-	return print_bad_irq_dependency(curr, &this, &that,
-			target_entry, target_entry1,
-			prev, next,
-			bit_backwards, bit_forwards, irqclass);
+	print_bad_irq_dependency(curr, &this, &that,
+				 target_entry, target_entry1,
+				 prev, next,
+				 bit_backwards, bit_forwards, irqclass);
+	return 0;
 }
 
 static const char *state_names[] = {
@@ -2042,7 +2034,7 @@ static void inc_chains(void)
 
 static inline int
 check_prev_add_irq(struct task_struct *curr, struct held_lock *prev,
-		struct held_lock *next)
+		   struct held_lock *next)
 {
 	return 1;
 }
@@ -2054,9 +2046,8 @@ static inline void inc_chains(void)
 
 #endif
 
-static void
-print_deadlock_scenario(struct held_lock *nxt,
-			     struct held_lock *prv)
+static void print_deadlock_scenario(struct held_lock *nxt,
+				    struct held_lock *prv)
 {
 	struct lock_class *next = hlock_class(nxt);
 	struct lock_class *prev = hlock_class(prv);
@@ -2074,12 +2065,12 @@ static inline void inc_chains(void)
 	printk(" May be due to missing lock nesting notation\n\n");
 }
 
-static int
+static void
 print_deadlock_bug(struct task_struct *curr, struct held_lock *prev,
 		   struct held_lock *next)
 {
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("============================================\n");
@@ -2098,8 +2089,6 @@ static inline void inc_chains(void)
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
 /*
@@ -2141,7 +2130,8 @@ static inline void inc_chains(void)
 		if (nest)
 			return 2;
 
-		return print_deadlock_bug(curr, prev, next);
+		print_deadlock_bug(curr, prev, next);
+		return 0;
 	}
 	return 1;
 }
@@ -2217,10 +2207,13 @@ static inline void inc_chains(void)
 			 */
 			save(trace);
 		}
-		return print_circular_bug(&this, target_entry, next, prev, trace);
+		print_circular_bug(&this, target_entry, next, prev, trace);
+		return 0;
+	}
+	else if (unlikely(ret < 0)) {
+		print_bfs_bug(ret);
+		return 0;
 	}
-	else if (unlikely(ret < 0))
-		return print_bfs_bug(ret);
 
 	if (!check_prev_add_irq(curr, prev, next))
 		return 0;
@@ -2261,8 +2254,10 @@ static inline void inc_chains(void)
 		debug_atomic_inc(nr_redundant);
 		return 2;
 	}
-	if (ret < 0)
-		return print_bfs_bug(ret);
+	if (ret < 0) {
+		print_bfs_bug(ret);
+		return 0;
+	}
 
 
 	if (!trace->entries && !save(trace))
@@ -2452,8 +2447,8 @@ static void print_chain_keys_chain(struct lock_chain *chain)
 }
 
 static void print_collision(struct task_struct *curr,
-			struct held_lock *hlock_next,
-			struct lock_chain *chain)
+			    struct held_lock *hlock_next,
+			    struct lock_chain *chain)
 {
 	pr_warn("\n");
 	pr_warn("============================\n");
@@ -2726,8 +2721,8 @@ static int validate_chain(struct task_struct *curr, struct lockdep_map *lock,
 }
 #else
 static inline int validate_chain(struct task_struct *curr,
-	       	struct lockdep_map *lock, struct held_lock *hlock,
-		int chain_head, u64 chain_key)
+				 struct lockdep_map *lock, struct held_lock *hlock,
+				 int chain_head, u64 chain_key)
 {
 	return 1;
 }
@@ -2784,8 +2779,7 @@ static void check_chain_key(struct task_struct *curr)
 #endif
 }
 
-static void
-print_usage_bug_scenario(struct held_lock *lock)
+static void print_usage_bug_scenario(struct held_lock *lock)
 {
 	struct lock_class *class = hlock_class(lock);
 
@@ -2802,12 +2796,12 @@ static void check_chain_key(struct task_struct *curr)
 	printk("\n *** DEADLOCK ***\n\n");
 }
 
-static int
+static void
 print_usage_bug(struct task_struct *curr, struct held_lock *this,
 		enum lock_usage_bit prev_bit, enum lock_usage_bit new_bit)
 {
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("================================\n");
@@ -2837,8 +2831,6 @@ static void check_chain_key(struct task_struct *curr)
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
 /*
@@ -2848,8 +2840,10 @@ static void check_chain_key(struct task_struct *curr)
 valid_state(struct task_struct *curr, struct held_lock *this,
 	    enum lock_usage_bit new_bit, enum lock_usage_bit bad_bit)
 {
-	if (unlikely(hlock_class(this)->usage_mask & (1 << bad_bit)))
-		return print_usage_bug(curr, this, bad_bit, new_bit);
+	if (unlikely(hlock_class(this)->usage_mask & (1 << bad_bit))) {
+		print_usage_bug(curr, this, bad_bit, new_bit);
+		return 0;
+	}
 	return 1;
 }
 
@@ -2861,7 +2855,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 /*
  * print irq inversion bug:
  */
-static int
+static void
 print_irq_inversion_bug(struct task_struct *curr,
 			struct lock_list *root, struct lock_list *other,
 			struct held_lock *this, int forwards,
@@ -2872,7 +2866,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	int depth;
 
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("========================================================\n");
@@ -2913,13 +2907,11 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 
 	pr_warn("\nthe shortest dependencies between 2nd lock and 1st lock:\n");
 	if (!save_trace(&root->trace))
-		return 0;
+		return;
 	print_shortest_lock_dependencies(other, root);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
 /*
@@ -2937,13 +2929,16 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_forwards(&root, bit, &target_entry);
-	if (ret < 0)
-		return print_bfs_bug(ret);
+	if (ret < 0) {
+		print_bfs_bug(ret);
+		return 0;
+	}
 	if (ret == 1)
 		return ret;
 
-	return print_irq_inversion_bug(curr, &root, target_entry,
-					this, 1, irqclass);
+	print_irq_inversion_bug(curr, &root, target_entry,
+				this, 1, irqclass);
+	return 0;
 }
 
 /*
@@ -2961,13 +2956,16 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_backwards(&root, bit, &target_entry);
-	if (ret < 0)
-		return print_bfs_bug(ret);
+	if (ret < 0) {
+		print_bfs_bug(ret);
+		return 0;
+	}
 	if (ret == 1)
 		return ret;
 
-	return print_irq_inversion_bug(curr, &root, target_entry,
-					this, 0, irqclass);
+	print_irq_inversion_bug(curr, &root, target_entry,
+				this, 0, irqclass);
+	return 0;
 }
 
 void print_irqtrace_events(struct task_struct *curr)
@@ -3510,15 +3508,15 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,
 struct lock_class_key __lockdep_no_validate__;
 EXPORT_SYMBOL_GPL(__lockdep_no_validate__);
 
-static int
+static void
 print_lock_nested_lock_not_held(struct task_struct *curr,
 				struct held_lock *hlock,
 				unsigned long ip)
 {
 	if (!debug_locks_off())
-		return 0;
+		return;
 	if (debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("==================================\n");
@@ -3540,8 +3538,6 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
 static int __lock_is_held(const struct lockdep_map *lock, int read);
@@ -3690,8 +3686,10 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	}
 	chain_key = iterate_chain_key(chain_key, class_idx);
 
-	if (nest_lock && !__lock_is_held(nest_lock, -1))
-		return print_lock_nested_lock_not_held(curr, hlock, ip);
+	if (nest_lock && !__lock_is_held(nest_lock, -1)) {
+		print_lock_nested_lock_not_held(curr, hlock, ip);
+		return 0;
+	}
 
 	if (!debug_locks_silent) {
 		WARN_ON_ONCE(depth && !hlock_class(hlock - 1)->key);
@@ -3727,14 +3725,14 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	return 1;
 }
 
-static int
-print_unlock_imbalance_bug(struct task_struct *curr, struct lockdep_map *lock,
-			   unsigned long ip)
+static void print_unlock_imbalance_bug(struct task_struct *curr,
+				       struct lockdep_map *lock,
+				       unsigned long ip)
 {
 	if (!debug_locks_off())
-		return 0;
+		return;
 	if (debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("=====================================\n");
@@ -3752,8 +3750,6 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
 static int match_held_lock(const struct held_lock *hlock,
@@ -3872,8 +3868,10 @@ static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
 		return 0;
 
 	hlock = find_held_lock(curr, lock, depth, &i);
-	if (!hlock)
-		return print_unlock_imbalance_bug(curr, lock, ip);
+	if (!hlock) {
+		print_unlock_imbalance_bug(curr, lock, ip);
+		return 0;
+	}
 
 	lockdep_init_map(lock, name, key, 0);
 	class = register_lock_class(lock, subclass, 0);
@@ -3913,8 +3911,10 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
 		return 0;
 
 	hlock = find_held_lock(curr, lock, depth, &i);
-	if (!hlock)
-		return print_unlock_imbalance_bug(curr, lock, ip);
+	if (!hlock) {
+		print_unlock_imbalance_bug(curr, lock, ip);
+		return 0;
+	}
 
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
@@ -3958,16 +3958,20 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
 	 * So we're all set to release this lock.. wait what lock? We don't
 	 * own any locks, you've been drinking again?
 	 */
-	if (DEBUG_LOCKS_WARN_ON(depth <= 0))
-		 return print_unlock_imbalance_bug(curr, lock, ip);
+	if (DEBUG_LOCKS_WARN_ON(depth <= 0)) {
+		print_unlock_imbalance_bug(curr, lock, ip);
+		return 0;
+	}
 
 	/*
 	 * Check whether the lock exists in the current stack
 	 * of held locks:
 	 */
 	hlock = find_held_lock(curr, lock, depth, &i);
-	if (!hlock)
-		return print_unlock_imbalance_bug(curr, lock, ip);
+	if (!hlock) {
+		print_unlock_imbalance_bug(curr, lock, ip);
+		return 0;
+	}
 
 	if (hlock->instance == lock)
 		lock_release_holdtime(hlock);
@@ -4310,14 +4314,14 @@ void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie)
 EXPORT_SYMBOL_GPL(lock_unpin_lock);
 
 #ifdef CONFIG_LOCK_STAT
-static int
-print_lock_contention_bug(struct task_struct *curr, struct lockdep_map *lock,
-			   unsigned long ip)
+static void print_lock_contention_bug(struct task_struct *curr,
+				      struct lockdep_map *lock,
+				      unsigned long ip)
 {
 	if (!debug_locks_off())
-		return 0;
+		return;
 	if (debug_locks_silent)
-		return 0;
+		return;
 
 	pr_warn("\n");
 	pr_warn("=================================\n");
@@ -4335,8 +4339,6 @@ void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie)
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
-
-	return 0;
 }
 
 static void
-- 
1.8.3.1



* [PATCH v2 02/19] locking/lockdep: Add description and explanation in lockdep design doc
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 03/19] locking/lockdep: Adjust lock usage bit character checks Yuyang Du
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Expand the lockdep design document's discussion of key concepts, which
helps people understand the design as well as read the reports.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 Documentation/locking/lockdep-design.txt | 89 +++++++++++++++++++++++---------
 1 file changed, 64 insertions(+), 25 deletions(-)

diff --git a/Documentation/locking/lockdep-design.txt b/Documentation/locking/lockdep-design.txt
index 49f58a0..621d8f4 100644
--- a/Documentation/locking/lockdep-design.txt
+++ b/Documentation/locking/lockdep-design.txt
@@ -10,56 +10,95 @@ Lock-class
 The basic object the validator operates upon is a 'class' of locks.
 
 A class of locks is a group of locks that are logically the same with
-respect to locking rules, even if the locks may have multiple (possibly
-tens of thousands of) instantiations. For example a lock in the inode
-struct is one class, while each inode has its own instantiation of that
-lock class.
-
-The validator tracks the 'state' of lock-classes, and it tracks
-dependencies between different lock-classes. The validator maintains a
-rolling proof that the state and the dependencies are correct.
-
-Unlike an lock instantiation, the lock-class itself never goes away: when
-a lock-class is used for the first time after bootup it gets registered,
-and all subsequent uses of that lock-class will be attached to this
-lock-class.
+respect to locking rules, even if the locks may have multiple (possibly tens
+of thousands of) instantiations. For example a lock in the inode struct is
+one class, while each inode has its own instantiation of that lock class.
+
+The validator tracks the 'usage state' of lock-classes, and it tracks the
+dependencies between different lock-classes. A dependency can be understood
+as lock order: L1 -> L2 means that L2 was acquired while L1 was held, and
+the same dependency can be expressed either as a forward dependency
+(L1 -> L2) or as a backward dependency (L2 <- L1). From lockdep's
+perspective, the two locks (L1 and L2) are not necessarily otherwise
+related, as opposed to modules where a strict order must be followed; the
+dependency merely records that this order once happened. The validator
+maintains a rolling proof that the lock usages and their dependencies are
+correct, and it will emit a splat if they are potentially incorrect.
+
+Unlike a lock instance, a lock-class itself never goes away: when a
+lock-class's instance is used for the first time after bootup the class gets
+registered, and all (subsequent) instances of that lock-class will be mapped
+to the lock-class.
 
 State
 -----
 
-The validator tracks lock-class usage history into 4 * nSTATEs + 1 separate
-state bits:
+The validator tracks lock-class usage history and divides the usage into
+(4 usages * n STATEs + 1) categories:
 
+Where the 4 usages can be:
 - 'ever held in STATE context'
 - 'ever held as readlock in STATE context'
 - 'ever held with STATE enabled'
 - 'ever held as readlock with STATE enabled'
 
-Where STATE can be either one of (kernel/locking/lockdep_states.h)
- - hardirq
- - softirq
+Where the n STATEs are coded in kernel/locking/lockdep_states.h and as of
+now they include:
+- hardirq
+- softirq
 
+Where the last 1 category is:
 - 'ever used'                                       [ == !unused        ]
 
-When locking rules are violated, these state bits are presented in the
-locking error messages, inside curlies. A contrived example:
+When locking rules are violated, these usage bits are presented in the
+locking error messages, inside curlies, with a total of 2 * n STATEs bits.
+See a contrived example:
 
    modprobe/2287 is trying to acquire lock:
-    (&sio_locks[i].lock){-.-...}, at: [<c02867fd>] mutex_lock+0x21/0x24
+    (&sio_locks[i].lock){-.-.}, at: [<c02867fd>] mutex_lock+0x21/0x24
 
    but task is already holding lock:
-    (&sio_locks[i].lock){-.-...}, at: [<c02867fd>] mutex_lock+0x21/0x24
+    (&sio_locks[i].lock){-.-.}, at: [<c02867fd>] mutex_lock+0x21/0x24
 
 
-The bit position indicates STATE, STATE-read, for each of the states listed
-above, and the character displayed in each indicates:
+For a given lock, the bit positions from left to right indicate the usage
+of the lock and of its readlock (if one exists), for each of the n STATEs
+listed above respectively, and the character displayed at each bit
+position indicates:
 
    '.'  acquired while irqs disabled and not in irq context
    '-'  acquired in irq context
    '+'  acquired with irqs enabled
    '?'  acquired in irq context with irqs enabled.
 
-Unused mutexes cannot be part of the cause of an error.
+The bits are illustrated with an example:
+
+    (&sio_locks[i].lock){-.-.}, at: [<c02867fd>] mutex_lock+0x21/0x24
+                         ||||
+                         ||| \-> softirq disabled and not in softirq context
+                         || \--> acquired in softirq context
+                         | \---> hardirq disabled and not in hardirq context
+                          \----> acquired in hardirq context
+
+
+For a given STATE, whether the lock was ever acquired in that STATE
+context and whether that STATE is enabled yields the four possible cases
+shown in the table below. The bit character indicates exactly which case
+held for the lock as of the reporting time.
+
+   -------------------------------------------------
+  |              | enabled in irq | disabled in irq |
+   -------------------------------------------------
+  | ever in irq  |        ?       |        -        |
+   -------------------------------------------------
+  | never in irq |        +       |        .        |
+   -------------------------------------------------
+
+The character '-' suggests irqs are disabled, because otherwise the
+character '?' would have been shown instead. A similar deduction can be
+applied for '+' too.
+
+Unused locks (e.g., mutexes) cannot be part of the cause of an error.
 
 
 Single-lock state rules:
-- 
1.8.3.1
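The four usage characters and the two facts each one encodes can be
sketched with a small decoder. This is purely an illustrative helper,
not part of the kernel; lockdep itself works the other way around, from
usage bits to characters:

```c
#include <assert.h>

/* Decode one usage character from the curly-brace annotation into the two
 * facts the table in the patch encodes: whether the lock was ever acquired
 * in irq context, and whether it was ever acquired with that irq state
 * enabled. */
struct usage {
	int ever_in_irq;
	int ever_enabled;
};

static struct usage decode_usage_char(char c)
{
	struct usage u = {0, 0};

	switch (c) {
	case '?': u.ever_in_irq = 1; u.ever_enabled = 1; break;
	case '-': u.ever_in_irq = 1; break;
	case '+': u.ever_enabled = 1; break;
	case '.': break;	/* irqs disabled, never in irq context */
	}
	return u;
}
```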



* [PATCH v2 03/19] locking/lockdep: Adjust lock usage bit character checks
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 02/19] locking/lockdep: Add description and explanation in lockdep design doc Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 04/19] locking/lockdep: Remove useless conditional macro Yuyang Du
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

The lock usage bit characters are defined and determined in a tricky way.
Use a macro and add some explanation to make this clearer. Then adjust the
logic that checks the usage, which optimizes the code a bit.

No functional change.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c           | 21 ++++++++++++++++-----
 kernel/locking/lockdep_internals.h |  1 +
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1c5e947..d5b66cf 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -514,15 +514,26 @@ static inline unsigned long lock_flag(enum lock_usage_bit bit)
 
 static char get_usage_char(struct lock_class *class, enum lock_usage_bit bit)
 {
+	/*
+	 * The usage character defaults to '.' (i.e., irqs disabled and not in
+	 * irq context), which is the safest usage category.
+	 */
 	char c = '.';
 
-	if (class->usage_mask & lock_flag(bit + 2))
+	/*
+	 * The order of the following usage checks matters, which will
+	 * result in the outcome character as follows:
+	 *
+	 * - '+': irq is enabled and not in irq context
+	 * - '-': in irq context and irq is disabled
+	 * - '?': in irq context and irq is enabled
+	 */
+	if (class->usage_mask & lock_flag(bit + LOCK_USAGE_TO_ENABLED_STEP)) {
 		c = '+';
-	if (class->usage_mask & lock_flag(bit)) {
-		c = '-';
-		if (class->usage_mask & lock_flag(bit + 2))
+		if (class->usage_mask & lock_flag(bit))
 			c = '?';
-	}
+	} else if (class->usage_mask & lock_flag(bit))
+		c = '-';
 
 	return c;
 }
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index d4c1974..2fd31d5 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -25,6 +25,7 @@ enum lock_usage_bit {
 #define LOCK_USAGE_READ_MASK 1
 #define LOCK_USAGE_DIR_MASK  2
 #define LOCK_USAGE_STATE_MASK (~(LOCK_USAGE_READ_MASK | LOCK_USAGE_DIR_MASK))
+#define LOCK_USAGE_TO_ENABLED_STEP 2
 
 /*
  * Usage-state bitmasks:
-- 
1.8.3.1
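The adjusted check order can be modeled outside the kernel with a
standalone sketch; the mask values here are illustrative, whereas the real
code reads class->usage_mask:

```c
#include <assert.h>

/* Distance in bits from the 'used in STATE context' bit to the
 * corresponding 'STATE enabled' bit (mirrors LOCK_USAGE_TO_ENABLED_STEP). */
#define LOCK_USAGE_TO_ENABLED_STEP 2

static unsigned long lock_flag(int bit)
{
	return 1UL << bit;
}

/* Mirrors the adjusted logic: test the ENABLED bit first ('+'), refine to
 * '?' if the in-irq-context bit is also set, otherwise fall back to '-' or
 * to the default '.'. */
static char get_usage_char(unsigned long usage_mask, int bit)
{
	char c = '.';

	if (usage_mask & lock_flag(bit + LOCK_USAGE_TO_ENABLED_STEP)) {
		c = '+';
		if (usage_mask & lock_flag(bit))
			c = '?';
	} else if (usage_mask & lock_flag(bit)) {
		c = '-';
	}
	return c;
}
```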



* [PATCH v2 04/19] locking/lockdep: Remove useless conditional macro
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (2 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 03/19] locking/lockdep: Adjust lock usage bit character checks Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 05/19] locking/lockdep: Adjust indents for function definitions Yuyang Du
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Since defined(CONFIG_PROVE_LOCKING) is tested inside the scope of #ifdef
CONFIG_PROVE_LOCKING, it is always true and can be removed.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index d5b66cf..dea495b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1677,7 +1677,7 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 	return result;
 }
 
-#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
+#if defined(CONFIG_TRACE_IRQFLAGS)
 /*
  * Forwards and backwards subgraph searching, for the purposes of
  * proving that two subgraphs can be connected by a new dependency
@@ -2055,7 +2055,7 @@ static inline void inc_chains(void)
 	nr_process_chains++;
 }
 
-#endif
+#endif /* CONFIG_TRACE_IRQFLAGS */
 
 static void print_deadlock_scenario(struct held_lock *nxt,
 				    struct held_lock *prv)
@@ -2737,7 +2737,7 @@ static inline int validate_chain(struct task_struct *curr,
 {
 	return 1;
 }
-#endif
+#endif /* CONFIG_PROVE_LOCKING */
 
 /*
  * We are building curr_chain_key incrementally, so double-check
-- 
1.8.3.1



* [PATCH v2 05/19] locking/lockdep: Adjust indents for function definitions
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (3 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 04/19] locking/lockdep: Remove useless conditional macro Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:33   ` Bart Van Assche
  2019-03-18  8:57 ` [PATCH v2 06/19] locking/lockdep: Print the right depth for chain key collision Yuyang Du
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Adjust indentation so that the argument lists of multi-line function
definitions are aligned with the opening parenthesis. No functional
change.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 63 +++++++++++++++++++++++-------------------------
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index dea495b..320ab62 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1308,7 +1308,7 @@ static inline unsigned int  __cq_get_elem_count(struct circular_queue *cq)
 }
 
 static inline void mark_lock_accessed(struct lock_list *lock,
-					struct lock_list *parent)
+				      struct lock_list *parent)
 {
 	unsigned long nr;
 
@@ -1413,19 +1413,17 @@ static int __bfs(struct lock_list *source_entry,
 	return ret;
 }
 
-static inline int __bfs_forwards(struct lock_list *src_entry,
-			void *data,
-			int (*match)(struct lock_list *entry, void *data),
-			struct lock_list **target_entry)
+static inline int __bfs_forwards(struct lock_list *src_entry, void *data,
+				 int (*match)(struct lock_list *entry, void *data),
+				 struct lock_list **target_entry)
 {
 	return __bfs(src_entry, data, match, target_entry, 1);
 
 }
 
-static inline int __bfs_backwards(struct lock_list *src_entry,
-			void *data,
-			int (*match)(struct lock_list *entry, void *data),
-			struct lock_list **target_entry)
+static inline int __bfs_backwards(struct lock_list *src_entry, void *data,
+				  int (*match)(struct lock_list *entry, void *data),
+				  struct lock_list **target_entry)
 {
 	return __bfs(src_entry, data, match, target_entry, 0);
 
@@ -1507,8 +1505,8 @@ static void print_circular_lock_scenario(struct held_lock *src,
  */
 static noinline void
 print_circular_bug_header(struct lock_list *entry, unsigned int depth,
-			struct held_lock *check_src,
-			struct held_lock *check_tgt)
+			  struct held_lock *check_src,
+			  struct held_lock *check_tgt)
 {
 	struct task_struct *curr = current;
 
@@ -1653,7 +1651,7 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
  */
 static noinline int
 check_noncircular(struct lock_list *root, struct lock_class *target,
-		struct lock_list **target_entry)
+		  struct lock_list **target_entry)
 {
 	int result;
 
@@ -1703,7 +1701,7 @@ static inline int usage_match(struct lock_list *entry, void *bit)
  */
 static int
 find_usage_forwards(struct lock_list *root, enum lock_usage_bit bit,
-			struct lock_list **target_entry)
+		    struct lock_list **target_entry)
 {
 	int result;
 
@@ -1726,7 +1724,7 @@ static inline int usage_match(struct lock_list *entry, void *bit)
  */
 static int
 find_usage_backwards(struct lock_list *root, enum lock_usage_bit bit,
-			struct lock_list **target_entry)
+		     struct lock_list **target_entry)
 {
 	int result;
 
@@ -2018,7 +2016,7 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
 
 static int
 check_prev_add_irq(struct task_struct *curr, struct held_lock *prev,
-		struct held_lock *next)
+		   struct held_lock *next)
 {
 #define LOCKDEP_STATE(__STATE)						\
 	if (!check_irq_usage(curr, prev, next, LOCK_USED_IN_##__STATE))	\
@@ -2392,7 +2390,7 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
  * Returns the index of the first held_lock of the current chain
  */
 static inline int get_first_held_lock(struct task_struct *curr,
-					struct held_lock *hlock)
+				      struct held_lock *hlock)
 {
 	int i;
 	struct held_lock *hlock_curr;
@@ -2487,8 +2485,8 @@ static void print_collision(struct task_struct *curr,
  * Returns: 0 not passed, 1 passed
  */
 static int check_no_collision(struct task_struct *curr,
-			struct held_lock *hlock,
-			struct lock_chain *chain)
+			      struct held_lock *hlock,
+			      struct lock_chain *chain)
 {
 #ifdef CONFIG_DEBUG_LOCKDEP
 	int i, j, id;
@@ -2675,7 +2673,7 @@ static inline int lookup_chain_cache_add(struct task_struct *curr,
 }
 
 static int validate_chain(struct task_struct *curr, struct lockdep_map *lock,
-		struct held_lock *hlock, int chain_head, u64 chain_key)
+			  struct held_lock *hlock, int chain_head, u64 chain_key)
 {
 	/*
 	 * Trylock needs to maintain the stack of held locks, but it
@@ -3032,7 +3030,7 @@ typedef int (*check_usage_f)(struct task_struct *, struct held_lock *,
 
 static int
 mark_lock_irq(struct task_struct *curr, struct held_lock *this,
-		enum lock_usage_bit new_bit)
+	      enum lock_usage_bit new_bit)
 {
 	int excl_bit = exclusive_bit(new_bit);
 	int read = new_bit & LOCK_USAGE_READ_MASK;
@@ -3060,7 +3058,7 @@ typedef int (*check_usage_f)(struct task_struct *, struct held_lock *,
 	 * states.
 	 */
 	if ((!read || !dir || STRICT_READ_CHECKS) &&
-			!usage(curr, this, excl_bit, state_name(new_bit & ~LOCK_USAGE_READ_MASK)))
+	    !usage(curr, this, excl_bit, state_name(new_bit & ~LOCK_USAGE_READ_MASK)))
 		return 0;
 
 	/*
@@ -3071,8 +3069,8 @@ typedef int (*check_usage_f)(struct task_struct *, struct held_lock *,
 			return 0;
 
 		if (STRICT_READ_CHECKS &&
-			!usage(curr, this, excl_bit + LOCK_USAGE_READ_MASK,
-				state_name(new_bit + LOCK_USAGE_READ_MASK)))
+		    !usage(curr, this, excl_bit + LOCK_USAGE_READ_MASK,
+			   state_name(new_bit + LOCK_USAGE_READ_MASK)))
 			return 0;
 	}
 
@@ -3338,7 +3336,7 @@ static inline unsigned int task_irq_context(struct task_struct *task)
 }
 
 static int separate_irq_context(struct task_struct *curr,
-		struct held_lock *hlock)
+				struct held_lock *hlock)
 {
 	unsigned int depth = curr->lockdep_depth;
 
@@ -3364,14 +3362,14 @@ static int separate_irq_context(struct task_struct *curr,
 
 static inline
 int mark_lock_irq(struct task_struct *curr, struct held_lock *this,
-		enum lock_usage_bit new_bit)
+		  enum lock_usage_bit new_bit)
 {
 	WARN_ON(1); /* Impossible innit? when we don't have TRACE_IRQFLAG */
 	return 1;
 }
 
 static inline int mark_irqflags(struct task_struct *curr,
-		struct held_lock *hlock)
+				struct held_lock *hlock)
 {
 	return 1;
 }
@@ -3382,7 +3380,7 @@ static inline unsigned int task_irq_context(struct task_struct *task)
 }
 
 static inline int separate_irq_context(struct task_struct *curr,
-		struct held_lock *hlock)
+				       struct held_lock *hlock)
 {
 	return 0;
 }
@@ -3393,7 +3391,7 @@ static inline int separate_irq_context(struct task_struct *curr,
  * Mark a lock with a usage bit, and validate the state transition:
  */
 static int mark_lock(struct task_struct *curr, struct held_lock *this,
-			     enum lock_usage_bit new_bit)
+		     enum lock_usage_bit new_bit)
 {
 	unsigned int new_mask = 1 << new_bit, ret = 1;
 
@@ -3836,7 +3834,7 @@ static struct held_lock *find_held_lock(struct task_struct *curr,
 }
 
 static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
-			      int idx)
+				int idx)
 {
 	struct held_lock *hlock;
 
@@ -4210,8 +4208,8 @@ void lock_downgrade(struct lockdep_map *lock, unsigned long ip)
  * and also avoid lockdep recursion:
  */
 void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
-			  int trylock, int read, int check,
-			  struct lockdep_map *nest_lock, unsigned long ip)
+		  int trylock, int read, int check,
+		  struct lockdep_map *nest_lock, unsigned long ip)
 {
 	unsigned long flags;
 
@@ -4230,8 +4228,7 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 }
 EXPORT_SYMBOL_GPL(lock_acquire);
 
-void lock_release(struct lockdep_map *lock, int nested,
-			  unsigned long ip)
+void lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 {
 	unsigned long flags;
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 06/19] locking/lockdep: Print the right depth for chain key collision
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (4 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 05/19] locking/lockdep: Adjust indents for function definitions Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 07/19] locking/lockdep: Update obsolete struct field description Yuyang Du
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Chains are separated by irq context, so when printing a chain the depth
should be consistent with that: count only the held locks in the current
irq context.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 320ab62..3df0a5e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2425,10 +2425,11 @@ static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
 	struct held_lock *hlock;
 	u64 chain_key = 0;
 	int depth = curr->lockdep_depth;
-	int i;
+	int i = get_first_held_lock(curr, hlock_next);
 
-	printk("depth: %u\n", depth + 1);
-	for (i = get_first_held_lock(curr, hlock_next); i < depth; i++) {
+	printk("depth: %u (irq_context %u)\n", depth - i + 1,
+		hlock_next->irq_context);
+	for (; i < depth; i++) {
 		hlock = curr->held_locks + i;
 		chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 07/19] locking/lockdep: Update obsolete struct field description
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (5 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 06/19] locking/lockdep: Print the right depth for chain key collision Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:35   ` Bart Van Assche
  2019-03-18  8:57 ` [PATCH v2 08/19] locking/lockdep: Use lockdep_init_task for task initiation consistently Yuyang Du
                   ` (11 subsequent siblings)
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

The lock_chain struct definition has an outdated comment; update it.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 include/linux/lockdep.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 79c3873..1258a62 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -202,7 +202,11 @@ struct lock_list {
  * We record lock dependency chains, so that we can cache them:
  */
 struct lock_chain {
-	/* see BUILD_BUG_ON()s in lookup_chain_cache() */
+	/*
+	 * irq_context: the same as irq_context in held_lock below
+	 * depth:       the number of held locks in this chain
+	 * base:        the index in chain_hlocks for this chain
+	 */
 	unsigned int			irq_context :  2,
 					depth       :  6,
 					base	    : 24;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 08/19] locking/lockdep: Use lockdep_init_task for task initiation consistently
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (6 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 07/19] locking/lockdep: Update obsolete struct field description Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 09/19] locking/lockdep: Define INITIAL_CHAIN_KEY for chain keys to start with Yuyang Du
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Although there is a lockdep_init_task() that does nothing, lockdep
initializes tasks by assigning lockdep fields directly, and does so
inconsistently. Fix this by making lockdep_init_task() do the
initialization and using it consistently.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 include/linux/lockdep.h  |  7 ++++++-
 init/init_task.c         |  2 ++
 kernel/fork.c            |  3 ---
 kernel/locking/lockdep.c | 11 ++++++++---
 4 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1258a62..49b928f 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -280,6 +280,8 @@ struct held_lock {
 extern asmlinkage void lockdep_sys_exit(void);
 extern void lockdep_set_selftest_task(struct task_struct *task);
 
+extern inline void lockdep_init_task(struct task_struct *task);
+
 extern void lockdep_off(void);
 extern void lockdep_on(void);
 
@@ -404,6 +406,10 @@ static inline void lock_set_subclass(struct lockdep_map *lock,
 
 #else /* !CONFIG_LOCKDEP */
 
+static inline void lockdep_init_task(struct task_struct *task)
+{
+}
+
 static inline void lockdep_off(void)
 {
 }
@@ -496,7 +502,6 @@ enum xhlock_context_t {
 	{ .name = (_name), .key = (void *)(_key), }
 
 static inline void lockdep_invariant_state(bool force) {}
-static inline void lockdep_init_task(struct task_struct *task) {}
 static inline void lockdep_free_task(struct task_struct *task) {}
 
 #ifdef CONFIG_LOCK_STAT
diff --git a/init/init_task.c b/init/init_task.c
index 46dbf54..9460878 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -165,6 +165,8 @@ struct task_struct init_task
 	.softirqs_enabled = 1,
 #endif
 #ifdef CONFIG_LOCKDEP
+	.lockdep_depth = 0, /* no locks held yet */
+	.curr_chain_key = 0,
 	.lockdep_recursion = 0,
 #endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
diff --git a/kernel/fork.c b/kernel/fork.c
index 77059b2..c0d2000 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1869,9 +1869,6 @@ static __latent_entropy struct task_struct *copy_process(
 	p->pagefault_disabled = 0;
 
 #ifdef CONFIG_LOCKDEP
-	p->lockdep_depth = 0; /* no locks held yet */
-	p->curr_chain_key = 0;
-	p->lockdep_recursion = 0;
 	lockdep_init_task(p);
 #endif
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 3df0a5e..737fe0a 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -358,6 +358,13 @@ static inline u64 iterate_chain_key(u64 key, u32 idx)
 	return k0 | (u64)k1 << 32;
 }
 
+inline void lockdep_init_task(struct task_struct *task)
+{
+	task->lockdep_depth = 0; /* no locks held yet */
+	task->curr_chain_key = 0;
+	task->lockdep_recursion = 0;
+}
+
 void lockdep_off(void)
 {
 	current->lockdep_recursion++;
@@ -4492,9 +4499,7 @@ void lockdep_reset(void)
 	int i;
 
 	raw_local_irq_save(flags);
-	current->curr_chain_key = 0;
-	current->lockdep_depth = 0;
-	current->lockdep_recursion = 0;
+	lockdep_init_task(current);
 	memset(current->held_locks, 0, MAX_LOCK_DEPTH*sizeof(struct held_lock));
 	nr_hardirq_chains = 0;
 	nr_softirq_chains = 0;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 09/19] locking/lockdep: Define INITIAL_CHAIN_KEY for chain keys to start with
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (7 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 08/19] locking/lockdep: Use lockdep_init_task for task initiation consistently Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 10/19] locking/lockdep: Change the range of class_idx in held_lock struct Yuyang Du
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Chain keys are computed using the Jenkins hash function, which needs an
initial hash value to start from. Dedicate a macro to make this value
clear and configurable. A later patch changes this initial chain key.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 include/linux/lockdep.h  |  1 +
 init/init_task.c         |  2 +-
 kernel/locking/lockdep.c | 18 +++++++++---------
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 49b928f..dd8cf33 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -222,6 +222,7 @@ struct lock_chain {
  * bitfield and hitting the BUG in hlock_class().
  */
 #define MAX_LOCKDEP_KEYS		((1UL << MAX_LOCKDEP_KEYS_BITS) - 1)
+#define INITIAL_CHAIN_KEY		0
 
 struct held_lock {
 	/*
diff --git a/init/init_task.c b/init/init_task.c
index 9460878..ff3e8bf 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -166,7 +166,7 @@ struct task_struct init_task
 #endif
 #ifdef CONFIG_LOCKDEP
 	.lockdep_depth = 0, /* no locks held yet */
-	.curr_chain_key = 0,
+	.curr_chain_key = INITIAL_CHAIN_KEY,
 	.lockdep_recursion = 0,
 #endif
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 737fe0a..5459d37 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -361,7 +361,7 @@ static inline u64 iterate_chain_key(u64 key, u32 idx)
 inline void lockdep_init_task(struct task_struct *task)
 {
 	task->lockdep_depth = 0; /* no locks held yet */
-	task->curr_chain_key = 0;
+	task->curr_chain_key = INITIAL_CHAIN_KEY;
 	task->lockdep_recursion = 0;
 }
 
@@ -867,7 +867,7 @@ static bool class_lock_list_valid(struct lock_class *c, struct list_head *h)
 static bool check_lock_chain_key(struct lock_chain *chain)
 {
 #ifdef CONFIG_PROVE_LOCKING
-	u64 chain_key = 0;
+	u64 chain_key = INITIAL_CHAIN_KEY;
 	int i;
 
 	for (i = chain->base; i < chain->base + chain->depth; i++)
@@ -2430,7 +2430,7 @@ static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
 print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_next)
 {
 	struct held_lock *hlock;
-	u64 chain_key = 0;
+	u64 chain_key = INITIAL_CHAIN_KEY;
 	int depth = curr->lockdep_depth;
 	int i = get_first_held_lock(curr, hlock_next);
 
@@ -2450,7 +2450,7 @@ static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
 static void print_chain_keys_chain(struct lock_chain *chain)
 {
 	int i;
-	u64 chain_key = 0;
+	u64 chain_key = INITIAL_CHAIN_KEY;
 	int class_id;
 
 	printk("depth: %u\n", chain->depth);
@@ -2754,7 +2754,7 @@ static void check_chain_key(struct task_struct *curr)
 #ifdef CONFIG_DEBUG_LOCKDEP
 	struct held_lock *hlock, *prev_hlock = NULL;
 	unsigned int i;
-	u64 chain_key = 0;
+	u64 chain_key = INITIAL_CHAIN_KEY;
 
 	for (i = 0; i < curr->lockdep_depth; i++) {
 		hlock = curr->held_locks + i;
@@ -2778,7 +2778,7 @@ static void check_chain_key(struct task_struct *curr)
 
 		if (prev_hlock && (prev_hlock->irq_context !=
 							hlock->irq_context))
-			chain_key = 0;
+			chain_key = INITIAL_CHAIN_KEY;
 		chain_key = iterate_chain_key(chain_key, hlock->class_idx);
 		prev_hlock = hlock;
 	}
@@ -3691,14 +3691,14 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		/*
 		 * How can we have a chain hash when we ain't got no keys?!
 		 */
-		if (DEBUG_LOCKS_WARN_ON(chain_key != 0))
+		if (DEBUG_LOCKS_WARN_ON(chain_key != INITIAL_CHAIN_KEY))
 			return 0;
 		chain_head = 1;
 	}
 
 	hlock->prev_chain_key = chain_key;
 	if (separate_irq_context(curr, hlock)) {
-		chain_key = 0;
+		chain_key = INITIAL_CHAIN_KEY;
 		chain_head = 1;
 	}
 	chain_key = iterate_chain_key(chain_key, class_idx);
@@ -4539,7 +4539,7 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 	return;
 
 recalc:
-	chain_key = 0;
+	chain_key = INITIAL_CHAIN_KEY;
 	for (i = chain->base; i < chain->base + chain->depth; i++)
 		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
 	if (chain->depth && chain->chain_key == chain_key)
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 10/19] locking/lockdep: Change the range of class_idx in held_lock struct
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (8 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 09/19] locking/lockdep: Define INITIAL_CHAIN_KEY for chain keys to start with Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 11/19] locking/lockdep: Remove unused argument in validate_chain() Yuyang Du
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

held_lock->class_idx is used to point to the class of the held lock. The
index is shifted by 1 so that index 0 means "no class", which results in
the class index being shifted back and forth, and that is not worth doing.

The reasons are: (1) there will be no "no-class" held_lock to begin with,
and (2) index 0 seems to be used for error checking, but if something did
go wrong, the index cannot be counted on to flag it, since nothing sets
class_idx to 0 on purpose to tell us it is wrong.

Therefore, change the index to start from 0. This saves a lot of
back-and-forth shifts and gives a slot back to lock_classes.

Since index 0 is now used for a real lock class, change the initial chain
key to -1 to avoid key collisions, which would otherwise happen because
__jhash_mix(0, 0, 0) = 0. In fact, the initial chain key can be any
arbitrary value other than 0.

In addition, we maintain a bitmap to keep track of the used lock classes,
and we check the validity of the held lock against that bitmap.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 include/linux/lockdep.h  | 14 ++++++------
 kernel/locking/lockdep.c | 59 ++++++++++++++++++++++++++++++++----------------
 2 files changed, 46 insertions(+), 27 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index dd8cf33..ba4f0c7 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -216,13 +216,8 @@ struct lock_chain {
 };
 
 #define MAX_LOCKDEP_KEYS_BITS		13
-/*
- * Subtract one because we offset hlock->class_idx by 1 in order
- * to make 0 mean no class. This avoids overflowing the class_idx
- * bitfield and hitting the BUG in hlock_class().
- */
-#define MAX_LOCKDEP_KEYS		((1UL << MAX_LOCKDEP_KEYS_BITS) - 1)
-#define INITIAL_CHAIN_KEY		0
+#define MAX_LOCKDEP_KEYS		(1UL << MAX_LOCKDEP_KEYS_BITS)
+#define INITIAL_CHAIN_KEY		-1
 
 struct held_lock {
 	/*
@@ -247,6 +242,11 @@ struct held_lock {
 	u64 				waittime_stamp;
 	u64				holdtime_stamp;
 #endif
+	/*
+	 * class_idx is zero-indexed; it points to the element in
+	 * lock_classes this held lock instance belongs to. class_idx is in
+	 * the range from 0 to (MAX_LOCKDEP_KEYS-1) inclusive.
+	 */
 	unsigned int			class_idx:MAX_LOCKDEP_KEYS_BITS;
 	/*
 	 * The lock-stack is unified in that the lock chains of interrupt
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5459d37..e8871f2 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -150,17 +150,28 @@ static inline int debug_locks_off_graph_unlock(void)
 static
 #endif
 struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+static DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
 
 static inline struct lock_class *hlock_class(struct held_lock *hlock)
 {
-	if (!hlock->class_idx) {
+	unsigned int class_idx = hlock->class_idx;
+
+	/* Don't re-read hlock->class_idx, can't use READ_ONCE() on bitfield */
+	barrier();
+
+	if (!test_bit(class_idx, lock_classes_in_use)) {
 		/*
 		 * Someone passed in garbage, we give up.
 		 */
 		DEBUG_LOCKS_WARN_ON(1);
 		return NULL;
 	}
-	return lock_classes + hlock->class_idx - 1;
+
+	/*
+	 * At this point, if the passed hlock->class_idx is still garbage,
+	 * we just have to live with it
+	 */
+	return lock_classes + class_idx;
 }
 
 #ifdef CONFIG_LOCK_STAT
@@ -604,19 +615,22 @@ static void print_lock(struct held_lock *hlock)
 	/*
 	 * We can be called locklessly through debug_show_all_locks() so be
 	 * extra careful, the hlock might have been released and cleared.
+	 *
+	 * If this indeed happens, lets pretend it does not hurt to continue
+	 * to print the lock unless the hlock class_idx does not point to a
+	 * registered class. The rationale here is: since we don't attempt
+	 * to distinguish whether we are in this situation, if it just
+	 * happened we can't count on class_idx to tell either.
 	 */
-	unsigned int class_idx = hlock->class_idx;
+	struct lock_class *lock = hlock_class(hlock);
 
-	/* Don't re-read hlock->class_idx, can't use READ_ONCE() on bitfields: */
-	barrier();
-
-	if (!class_idx || (class_idx - 1) >= MAX_LOCKDEP_KEYS) {
+	if (!lock) {
 		printk(KERN_CONT "<RELEASED>\n");
 		return;
 	}
 
 	printk(KERN_CONT "%p", hlock->instance);
-	print_lock_name(lock_classes + class_idx - 1);
+	print_lock_name(lock);
 	printk(KERN_CONT ", at: %pS\n", (void *)hlock->acquire_ip);
 }
 
@@ -871,7 +885,7 @@ static bool check_lock_chain_key(struct lock_chain *chain)
 	int i;
 
 	for (i = chain->base; i < chain->base + chain->depth; i++)
-		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i]);
 	/*
 	 * The 'unsigned long long' casts avoid that a compiler warning
 	 * is reported when building tools/lib/lockdep.
@@ -1146,6 +1160,7 @@ static bool is_dynamic_key(const struct lock_class_key *key)
 		return NULL;
 	}
 	nr_lock_classes++;
+	__set_bit(class - lock_classes, lock_classes_in_use);
 	debug_atomic_inc(nr_unused_locks);
 	class->key = key;
 	class->name = lock->name;
@@ -2456,7 +2471,7 @@ static void print_chain_keys_chain(struct lock_chain *chain)
 	printk("depth: %u\n", chain->depth);
 	for (i = 0; i < chain->depth; i++) {
 		class_id = chain_hlocks[chain->base + i];
-		chain_key = print_chain_key_iteration(class_id + 1, chain_key);
+		chain_key = print_chain_key_iteration(class_id, chain_key);
 
 		print_lock_name(lock_classes + class_id);
 		printk("\n");
@@ -2507,7 +2522,7 @@ static int check_no_collision(struct task_struct *curr,
 	}
 
 	for (j = 0; j < chain->depth - 1; j++, i++) {
-		id = curr->held_locks[i].class_idx - 1;
+		id = curr->held_locks[i].class_idx;
 
 		if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) {
 			print_collision(curr, hlock, chain);
@@ -2590,7 +2605,7 @@ static inline int add_chain_cache(struct task_struct *curr,
 	if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
 		chain->base = nr_chain_hlocks;
 		for (j = 0; j < chain->depth - 1; j++, i++) {
-			int lock_id = curr->held_locks[i].class_idx - 1;
+			int lock_id = curr->held_locks[i].class_idx;
 			chain_hlocks[chain->base + j] = lock_id;
 		}
 		chain_hlocks[chain->base + j] = class - lock_classes;
@@ -2770,10 +2785,12 @@ static void check_chain_key(struct task_struct *curr)
 				(unsigned long long)hlock->prev_chain_key);
 			return;
 		}
+
 		/*
-		 * Whoops ran out of static storage again?
+		 * hlock->class_idx can't go beyond MAX_LOCKDEP_KEYS, but is
+		 * it registered lock class index?
 		 */
-		if (DEBUG_LOCKS_WARN_ON(hlock->class_idx > MAX_LOCKDEP_KEYS))
+		if (DEBUG_LOCKS_WARN_ON(!test_bit(hlock->class_idx, lock_classes_in_use)))
 			return;
 
 		if (prev_hlock && (prev_hlock->irq_context !=
@@ -3619,7 +3636,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	if (DEBUG_LOCKS_WARN_ON(depth >= MAX_LOCK_DEPTH))
 		return 0;
 
-	class_idx = class - lock_classes + 1;
+	class_idx = class - lock_classes;
 
 	if (depth) {
 		hlock = curr->held_locks + depth - 1;
@@ -3681,9 +3698,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	 * the hash, not class->key.
 	 */
 	/*
-	 * Whoops, we did it again.. ran straight out of our static allocation.
+	 * Whoops, we did it again.. class_idx is invalid.
 	 */
-	if (DEBUG_LOCKS_WARN_ON(class_idx > MAX_LOCKDEP_KEYS))
+	if (DEBUG_LOCKS_WARN_ON(!test_bit(class_idx, lock_classes_in_use)))
 		return 0;
 
 	chain_key = curr->curr_chain_key;
@@ -3798,7 +3815,7 @@ static int match_held_lock(const struct held_lock *hlock,
 		if (DEBUG_LOCKS_WARN_ON(!hlock->nest_lock))
 			return 0;
 
-		if (hlock->class_idx == class - lock_classes + 1)
+		if (hlock->class_idx == class - lock_classes)
 			return 1;
 	}
 
@@ -3892,7 +3909,7 @@ static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
 
 	lockdep_init_map(lock, name, key, 0);
 	class = register_lock_class(lock, subclass, 0);
-	hlock->class_idx = class - lock_classes + 1;
+	hlock->class_idx = class - lock_classes;
 
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
@@ -4541,7 +4558,7 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 recalc:
 	chain_key = INITIAL_CHAIN_KEY;
 	for (i = chain->base; i < chain->base + chain->depth; i++)
-		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i]);
 	if (chain->depth && chain->chain_key == chain_key)
 		return;
 	/* Overwrite the chain key for concurrent RCU readers. */
@@ -4615,6 +4632,7 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 		WRITE_ONCE(class->key, NULL);
 		WRITE_ONCE(class->name, NULL);
 		nr_lock_classes--;
+		__clear_bit(class - lock_classes, lock_classes_in_use);
 	} else {
 		WARN_ONCE(true, "%s() failed for class %s\n", __func__,
 			  class->name);
@@ -4964,6 +4982,7 @@ void __init lockdep_init(void)
 
 	printk(" memory used by lock dependency info: %zu kB\n",
 	       (sizeof(lock_classes) +
+		sizeof(lock_classes_in_use) +
 		sizeof(classhash_table) +
 		sizeof(list_entries) +
 		sizeof(list_entries_in_use) +
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 11/19] locking/lockdep: Remove unused argument in validate_chain()
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (9 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 10/19] locking/lockdep: Change the range of class_idx in held_lock struct Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:45   ` Bart Van Assche
  2019-03-18  8:57 ` [PATCH v2 12/19] locking/lockdep: Update comment Yuyang Du
                   ` (7 subsequent siblings)
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

The lockdep_map argument of validate_chain() is unused; remove it.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e8871f2..dcff644 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2131,8 +2131,7 @@ static void print_deadlock_scenario(struct held_lock *nxt,
  * Returns: 0 on deadlock detected, 1 on OK, 2 on recursive read
  */
 static int
-check_deadlock(struct task_struct *curr, struct held_lock *next,
-	       struct lockdep_map *next_instance, int read)
+check_deadlock(struct task_struct *curr, struct held_lock *next, int read)
 {
 	struct held_lock *prev;
 	struct held_lock *nest = NULL;
@@ -2695,8 +2694,9 @@ static inline int lookup_chain_cache_add(struct task_struct *curr,
 	return 1;
 }
 
-static int validate_chain(struct task_struct *curr, struct lockdep_map *lock,
-			  struct held_lock *hlock, int chain_head, u64 chain_key)
+static int validate_chain(struct task_struct *curr,
+			  struct held_lock *hlock,
+			  int chain_head, u64 chain_key)
 {
 	/*
 	 * Trylock needs to maintain the stack of held locks, but it
@@ -2722,7 +2722,7 @@ static int validate_chain(struct task_struct *curr, struct lockdep_map *lock,
 		 * any of these scenarios could lead to a deadlock. If
 		 * All validations
 		 */
-		int ret = check_deadlock(curr, hlock, lock, hlock->read);
+		int ret = check_deadlock(curr, hlock, hlock->read);
 
 		if (!ret)
 			return 0;
@@ -2753,7 +2753,7 @@ static int validate_chain(struct task_struct *curr, struct lockdep_map *lock,
 }
 #else
 static inline int validate_chain(struct task_struct *curr,
-				 struct lockdep_map *lock, struct held_lock *hlock,
+				 struct held_lock *hlock,
 				 int chain_head, u64 chain_key)
 {
 	return 1;
@@ -3730,7 +3730,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		WARN_ON_ONCE(!hlock_class(hlock)->key);
 	}
 
-	if (!validate_chain(curr, lock, hlock, chain_head, chain_key))
+	if (!validate_chain(curr, hlock, chain_head, chain_key))
 		return 0;
 
 	curr->curr_chain_key = chain_key;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 12/19] locking/lockdep: Update comment
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (10 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 11/19] locking/lockdep: Remove unused argument in validate_chain() Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 13/19] locking/lockdep: Remove unnecessary function pointer argument Yuyang Du
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

Remove an out-of-place comment that no longer matches the code and, while
at it, add more explanatory comments.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index dcff644..250ba64 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2717,10 +2717,16 @@ static int validate_chain(struct task_struct *curr,
 		 * - is softirq-safe, if this lock is hardirq-unsafe
 		 *
 		 * And check whether the new lock's dependency graph
-		 * could lead back to the previous lock.
+		 * could lead back to the previous lock:
 		 *
-		 * any of these scenarios could lead to a deadlock. If
-		 * All validations
+		 * - within the current held-lock stack
+		 * - across our accumulated lock dependency records
+		 *
+		 * any of these scenarios could lead to a deadlock.
+		 */
+		/*
+		 * The simple case: does the current task hold the same lock
+		 * already?
 		 */
 		int ret = check_deadlock(curr, hlock, hlock->read);
 
-- 
1.8.3.1



* [PATCH v2 13/19] locking/lockdep: Remove unnecessary function pointer argument
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (11 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 12/19] locking/lockdep: Update comment Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:48   ` Bart Van Assche
  2019-03-18  8:57 ` [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue Yuyang Du
                   ` (5 subsequent siblings)
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

check_prev_add() is always called with save_trace() as its function-pointer
argument, so passing the pointer is unnecessary; call save_trace() directly
instead.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 250ba64..de731b8 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2190,8 +2190,7 @@ static void print_deadlock_scenario(struct held_lock *nxt,
  */
 static int
 check_prev_add(struct task_struct *curr, struct held_lock *prev,
-	       struct held_lock *next, int distance, struct stack_trace *trace,
-	       int (*save)(struct stack_trace *trace))
+	       struct held_lock *next, int distance, struct stack_trace *trace)
 {
 	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list *entry;
@@ -2231,11 +2230,11 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 	if (unlikely(!ret)) {
 		if (!trace->entries) {
 			/*
-			 * If @save fails here, the printing might trigger
+			 * If save_trace fails here, the printing might trigger
 			 * a WARN but because of the !nr_entries it should
 			 * not do bad things.
 			 */
-			save(trace);
+			save_trace(trace);
 		}
 		print_circular_bug(&this, target_entry, next, prev, trace);
 		return 0;
@@ -2290,7 +2289,7 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 	}
 
 
-	if (!trace->entries && !save(trace))
+	if (!trace->entries && !save_trace(trace))
 		return 0;
 
 	/*
@@ -2355,7 +2354,7 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 		 * added:
 		 */
 		if (hlock->read != 2 && hlock->check) {
-			int ret = check_prev_add(curr, hlock, next, distance, &trace, save_trace);
+			int ret = check_prev_add(curr, hlock, next, distance, &trace);
 			if (!ret)
 				return 0;
 
-- 
1.8.3.1



* [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (12 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 13/19] locking/lockdep: Remove unnecessary function pointer argument Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:50   ` Bart Van Assche
  2019-03-19 16:51   ` Bart Van Assche
  2019-03-18  8:57 ` [PATCH v2 15/19] locking/lockdep: Remove __cq_empty() Yuyang Du
                   ` (4 subsequent siblings)
  18 siblings, 2 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

The element field in struct circular_queue is an array used to keep track of
locks during the search. Making its element type the same as the type of the
locks it stores avoids type casts at the call sites. Also fix a typo in a
nearby comment.

No functional change.
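
The effect of the change can be sketched with a small standalone model of
the queue (simplified stand-in types and names, not the actual kernel
definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a node in the lock dependency graph. */
struct lock_list { int id; };

#define QUEUE_SIZE	8		/* must be a power of two */
#define QUEUE_MASK	(QUEUE_SIZE - 1)

struct circular_queue {
	/* Typed as lock_list pointers: no casts needed at call sites. */
	struct lock_list *element[QUEUE_SIZE];
	unsigned int front, rear;
};

static void cq_init(struct circular_queue *cq)
{
	cq->front = cq->rear = 0;
}

static int cq_full(const struct circular_queue *cq)
{
	return ((cq->rear + 1) & QUEUE_MASK) == cq->front;
}

static int cq_enqueue(struct circular_queue *cq, struct lock_list *elem)
{
	if (cq_full(cq))
		return -1;
	cq->element[cq->rear] = elem;
	cq->rear = (cq->rear + 1) & QUEUE_MASK;
	return 0;
}

static int cq_dequeue(struct circular_queue *cq, struct lock_list **elem)
{
	if (cq->front == cq->rear)	/* empty */
		return -1;
	*elem = cq->element[cq->front];
	cq->front = (cq->front + 1) & QUEUE_MASK;
	return 0;
}
```

With the element array typed this way, enqueue and dequeue take and return
`struct lock_list *` directly, instead of `unsigned long` plus casts at
every caller.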

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index de731b8..a3fb112 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1272,13 +1272,16 @@ static int add_lock_to_list(struct lock_class *this,
 #define CQ_MASK				(MAX_CIRCULAR_QUEUE_SIZE-1)
 
 /*
- * The circular_queue and helpers is used to implement the
- * breadth-first search(BFS)algorithem, by which we can build
- * the shortest path from the next lock to be acquired to the
- * previous held lock if there is a circular between them.
+ * The circular_queue and helpers are used to implement the graph
+ * breadth-first search (BFS) algorithm, by which we can determine whether
+ * there is a path from the next lock to be acquired to a previous held
+ * lock, which indicates that adding the <prev> -> <next> lock dependency
+ * produces a circle in the lock dependency graph. Breadth-first search
+ * instead of depth-first search is used for finding the shortest circular
+ * path.
  */
 struct circular_queue {
-	unsigned long element[MAX_CIRCULAR_QUEUE_SIZE];
+	struct lock_list *element[MAX_CIRCULAR_QUEUE_SIZE];
 	unsigned int  front, rear;
 };
 
@@ -1304,7 +1307,7 @@ static inline int __cq_full(struct circular_queue *cq)
 	return ((cq->rear + 1) & CQ_MASK) == cq->front;
 }
 
-static inline int __cq_enqueue(struct circular_queue *cq, unsigned long elem)
+static inline int __cq_enqueue(struct circular_queue *cq, struct lock_list *elem)
 {
 	if (__cq_full(cq))
 		return -1;
@@ -1314,7 +1317,7 @@ static inline int __cq_enqueue(struct circular_queue *cq, unsigned long elem)
 	return 0;
 }
 
-static inline int __cq_dequeue(struct circular_queue *cq, unsigned long *elem)
+static inline int __cq_dequeue(struct circular_queue *cq, struct lock_list **elem)
 {
 	if (__cq_empty(cq))
 		return -1;
@@ -1392,12 +1395,12 @@ static int __bfs(struct lock_list *source_entry,
 		goto exit;
 
 	__cq_init(cq);
-	__cq_enqueue(cq, (unsigned long)source_entry);
+	__cq_enqueue(cq, source_entry);
 
 	while (!__cq_empty(cq)) {
 		struct lock_list *lock;
 
-		__cq_dequeue(cq, (unsigned long *)&lock);
+		__cq_dequeue(cq, &lock);
 
 		if (!lock->class) {
 			ret = -2;
@@ -1421,7 +1424,7 @@ static int __bfs(struct lock_list *source_entry,
 					goto exit;
 				}
 
-				if (__cq_enqueue(cq, (unsigned long)entry)) {
+				if (__cq_enqueue(cq, entry)) {
 					ret = -1;
 					goto exit;
 				}
-- 
1.8.3.1



* [PATCH v2 15/19] locking/lockdep: Remove __cq_empty()
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (13 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:54   ` Bart Van Assche
  2019-03-18  8:57 ` [PATCH v2 16/19] locking/lockdep: Use function pointer to avoid constant checks Yuyang Du
                   ` (3 subsequent siblings)
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

__cq_empty() can be embedded in __cq_dequeue(); remove it. The result is
slightly simpler code. No functional change.
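
The resulting calling convention can be illustrated with a standalone
sketch (simplified stand-in types, not the kernel code): returning the
element or NULL lets the caller fold the emptiness test into the loop
condition, so a separate __cq_empty() helper is no longer needed.

```c
#include <assert.h>
#include <stddef.h>

struct lock_list { int id; };

#define QUEUE_SIZE	8		/* power of two */
#define QUEUE_MASK	(QUEUE_SIZE - 1)

struct circular_queue {
	struct lock_list *element[QUEUE_SIZE];
	unsigned int front, rear;
};

/*
 * Dequeue and return the element at the front of the queue, or NULL if
 * the queue is empty; the caller's while-condition doubles as the
 * emptiness check.
 */
static struct lock_list *cq_dequeue(struct circular_queue *cq)
{
	struct lock_list *lock;

	if (cq->front == cq->rear)	/* empty */
		return NULL;

	lock = cq->element[cq->front];
	cq->front = (cq->front + 1) & QUEUE_MASK;
	return lock;
}

/* Drain the queue, counting elements, in the style used by __bfs(). */
static int drain_count(struct circular_queue *cq)
{
	struct lock_list *lock;
	int n = 0;

	while ((lock = cq_dequeue(cq)))
		n++;
	return n;
}
```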

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a3fb112..ee8fe64 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1297,11 +1297,6 @@ static inline void __cq_init(struct circular_queue *cq)
 	lockdep_dependency_gen_id++;
 }
 
-static inline int __cq_empty(struct circular_queue *cq)
-{
-	return (cq->front == cq->rear);
-}
-
 static inline int __cq_full(struct circular_queue *cq)
 {
 	return ((cq->rear + 1) & CQ_MASK) == cq->front;
@@ -1317,14 +1312,24 @@ static inline int __cq_enqueue(struct circular_queue *cq, struct lock_list *elem
 	return 0;
 }
 
-static inline int __cq_dequeue(struct circular_queue *cq, struct lock_list **elem)
+/*
+ * Dequeue an element from the circular_queue, return the lock if the queue
+ * is not empty, or NULL otherwise
+ */
+static inline struct lock_list * __cq_dequeue(struct circular_queue *cq)
 {
-	if (__cq_empty(cq))
-		return -1;
+	struct lock_list * lock;
 
-	*elem = cq->element[cq->front];
+	/*
+	 * Is the circular_queue empty?
+	 */
+	if (cq->front == cq->rear)
+		return NULL;
+
+	lock = cq->element[cq->front];
 	cq->front = (cq->front + 1) & CQ_MASK;
-	return 0;
+
+	return lock;
 }
 
 static inline unsigned int  __cq_get_elem_count(struct circular_queue *cq)
@@ -1376,6 +1381,7 @@ static int __bfs(struct lock_list *source_entry,
 		 int forward)
 {
 	struct lock_list *entry;
+	struct lock_list *lock;
 	struct list_head *head;
 	struct circular_queue *cq = &lock_cq;
 	int ret = 1;
@@ -1397,10 +1403,7 @@ static int __bfs(struct lock_list *source_entry,
 	__cq_init(cq);
 	__cq_enqueue(cq, source_entry);
 
-	while (!__cq_empty(cq)) {
-		struct lock_list *lock;
-
-		__cq_dequeue(cq, &lock);
+	while ((lock = __cq_dequeue(cq))) {
 
 		if (!lock->class) {
 			ret = -2;
-- 
1.8.3.1



* [PATCH v2 16/19] locking/lockdep: Use function pointer to avoid constant checks
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (14 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 15/19] locking/lockdep: Remove __cq_empty() Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:56   ` Bart Van Assche
  2019-03-18  8:57 ` [PATCH v2 17/19] locking/lockdep: Combine check_noncircular and check_redundant Yuyang Du
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

When searching for a dependency in the lock graph, the direction of the
search (forward or backward) is checked repeatedly. Use a function pointer
to avoid those checks.

No functional change.
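
The pattern can be sketched in isolation (stand-in types, not the kernel
definitions): the caller picks a selector once, and the search core calls
it per node instead of re-testing a direction flag on every iteration.

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next; };

/*
 * A graph node with separate forward and backward adjacency lists,
 * loosely modelled on lock_class's locks_after/locks_before.
 */
struct node {
	struct list_head locks_after;	/* forward dependencies */
	struct list_head locks_before;	/* backward dependencies */
};

static struct list_head *get_forward_dep(struct node *n)
{
	return &n->locks_after;
}

static struct list_head *get_backward_dep(struct node *n)
{
	return &n->locks_before;
}

/*
 * The search core selects the adjacency list through the function
 * pointer; no per-node "if (forward)" branch is needed.
 */
static struct list_head *first_dep(struct node *n,
				   struct list_head *(*get_dep)(struct node *))
{
	return get_dep(n)->next;
}
```

A forward search then passes `get_forward_dep` and a backward search
`get_backward_dep`, mirroring __bfs_forwards()/__bfs_backwards().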

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ee8fe64..3dbb4d0 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1374,11 +1374,21 @@ static inline int get_lock_depth(struct lock_list *child)
 	return depth;
 }
 
+static inline struct list_head *get_forward_dep(struct lock_list * lock)
+{
+	return &lock->class->locks_after;
+}
+
+static inline struct list_head *get_backward_dep(struct lock_list * lock)
+{
+	return &lock->class->locks_before;
+}
+
 static int __bfs(struct lock_list *source_entry,
 		 void *data,
 		 int (*match)(struct lock_list *entry, void *data),
 		 struct lock_list **target_entry,
-		 int forward)
+		 struct list_head *(*get_dep)(struct lock_list * lock))
 {
 	struct lock_list *entry;
 	struct lock_list *lock;
@@ -1392,11 +1402,7 @@ static int __bfs(struct lock_list *source_entry,
 		goto exit;
 	}
 
-	if (forward)
-		head = &source_entry->class->locks_after;
-	else
-		head = &source_entry->class->locks_before;
-
+	head = get_dep(source_entry);
 	if (list_empty(head))
 		goto exit;
 
@@ -1410,10 +1416,7 @@ static int __bfs(struct lock_list *source_entry,
 			goto exit;
 		}
 
-		if (forward)
-			head = &lock->class->locks_after;
-		else
-			head = &lock->class->locks_before;
+		head = get_dep(lock);
 
 		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
@@ -1445,7 +1448,7 @@ static inline int __bfs_forwards(struct lock_list *src_entry, void *data,
 				 int (*match)(struct lock_list *entry, void *data),
 				 struct lock_list **target_entry)
 {
-	return __bfs(src_entry, data, match, target_entry, 1);
+	return __bfs(src_entry, data, match, target_entry, get_forward_dep);
 
 }
 
@@ -1453,7 +1456,7 @@ static inline int __bfs_backwards(struct lock_list *src_entry, void *data,
 				  int (*match)(struct lock_list *entry, void *data),
 				  struct lock_list **target_entry)
 {
-	return __bfs(src_entry, data, match, target_entry, 0);
+	return __bfs(src_entry, data, match, target_entry, get_backward_dep);
 
 }
 
-- 
1.8.3.1



* [PATCH v2 17/19] locking/lockdep: Combine check_noncircular and check_redundant
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (15 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 16/19] locking/lockdep: Use function pointer to avoid constant checks Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 18/19] locking/lockdep: Update comments on dependency search Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors Yuyang Du
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

These two functions are essentially duplicates; combine them.

No functional change.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 3dbb4d0..90d58cc 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1677,29 +1677,18 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 }
 
 /*
- * Prove that the dependency graph starting at <entry> can not
- * lead to <target>. Print an error and return 0 if it does.
+ * Prove that the dependency graph starting at <root> can not
+ * lead to <target>. If existent, there is a circle when adding
+ * a <target> -> <root> dependency.
+ *
+ * Print an error and return 0 if it does exist.
  */
 static noinline int
-check_noncircular(struct lock_list *root, struct lock_class *target,
+check_nonexistent(struct lock_list *root, struct lock_class *target,
 		  struct lock_list **target_entry)
 {
 	int result;
 
-	debug_atomic_inc(nr_cyclic_checks);
-
-	result = __bfs_forwards(root, target, class_equal, target_entry);
-
-	return result;
-}
-
-static noinline int
-check_redundant(struct lock_list *root, struct lock_class *target,
-		struct lock_list **target_entry)
-{
-	int result;
-
-	debug_atomic_inc(nr_redundant_checks);
 
 	result = __bfs_forwards(root, target, class_equal, target_entry);
 
@@ -2235,7 +2224,8 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 	 */
 	this.class = hlock_class(next);
 	this.parent = NULL;
-	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
+	debug_atomic_inc(nr_cyclic_checks);
+	ret = check_nonexistent(&this, hlock_class(prev), &target_entry);
 	if (unlikely(!ret)) {
 		if (!trace->entries) {
 			/*
@@ -2287,7 +2277,8 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 	 */
 	this.class = hlock_class(prev);
 	this.parent = NULL;
-	ret = check_redundant(&this, hlock_class(next), &target_entry);
+	debug_atomic_inc(nr_redundant_checks);
+	ret = check_nonexistent(&this, hlock_class(next), &target_entry);
 	if (!ret) {
 		debug_atomic_inc(nr_redundant);
 		return 2;
-- 
1.8.3.1



* [PATCH v2 18/19] locking/lockdep: Update comments on dependency search
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (16 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 17/19] locking/lockdep: Combine check_noncircular and check_redundant Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-18  8:57 ` [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors Yuyang Du
  18 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

The breadth-first search is now implemented iteratively, without recursion,
but the comments still describe it as recursive; update the comments
accordingly.
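
To illustrate why the old comments no longer apply, here is a hedged,
self-contained sketch of iterative BFS reachability on a tiny
adjacency-matrix graph (an illustration only, not the kernel
implementation): the explicit queue replaces the call stack, so there are
no recursive stack frames to worry about.

```c
#include <assert.h>
#include <string.h>

#define NODES 8

/* Adjacency matrix for a tiny directed graph. */
static int edge[NODES][NODES];

/*
 * Iterative breadth-first reachability: a fixed-size array queue holds
 * the frontier. Each node is enqueued at most once (guarded by
 * visited[]), so the queue never overflows.
 */
static int reachable(int src, int dst)
{
	int queue[NODES], visited[NODES] = {0};
	int front = 0, rear = 0;

	queue[rear++] = src;
	visited[src] = 1;

	while (front < rear) {
		int n = queue[front++];

		if (n == dst)
			return 1;
		for (int m = 0; m < NODES; m++) {
			if (edge[n][m] && !visited[m]) {
				visited[m] = 1;
				queue[rear++] = m;
			}
		}
	}
	return 0;
}
```

In the kernel the queue is the global lock_cq bounded by
MAX_CIRCULAR_QUEUE_SIZE, but the control flow is the same flat loop.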

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 90d58cc..1d38bf6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1384,6 +1384,10 @@ static inline struct list_head *get_backward_dep(struct lock_list * lock)
 	return &lock->class->locks_before;
 }
 
+/*
+ * Forward- or backward-dependency search, used for both circular dependency
+ * checking and hardirq-unsafe/softirq-unsafe checking.
+ */
 static int __bfs(struct lock_list *source_entry,
 		 void *data,
 		 int (*match)(struct lock_list *entry, void *data),
@@ -1461,12 +1465,6 @@ static inline int __bfs_backwards(struct lock_list *src_entry, void *data,
 }
 
 /*
- * Recursive, forwards-direction lock-dependency checking, used for
- * both noncyclic checking and for hardirq-unsafe/softirq-unsafe
- * checking.
- */
-
-/*
  * Print a dependency chain entry (this is only done when a deadlock
  * has been detected):
  */
@@ -2166,7 +2164,7 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 
 /*
  * There was a chain-cache miss, and we are about to add a new dependency
- * to a previous lock. We recursively validate the following rules:
+ * to a previous lock. We validate the following rules:
  *
  *  - would the adding of the <prev> -> <next> dependency create a
  *    circular dependency in the graph? [== circular deadlock]
@@ -2216,11 +2214,12 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
 	 * create a circular dependency in the graph. (We do this by
-	 * forward-recursing into the graph starting at <next>, and
-	 * checking whether we can reach <prev>.)
+	 * a breadth-first search into the graph starting at <next>,
+	 * which checks whether we can reach <prev>.)
 	 *
-	 * We are using global variables to control the recursion, to
-	 * keep the stackframe size of the recursive functions low:
+	 * The search is limited by the size of the circular queue (i.e.,
+	 * MAX_CIRCULAR_QUEUE_SIZE) which keeps track of a breadth of nodes
+	 * in the graph whose neighbours are to be checked.
 	 */
 	this.class = hlock_class(next);
 	this.parent = NULL;
-- 
1.8.3.1



* [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors
  2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
                   ` (17 preceding siblings ...)
  2019-03-18  8:57 ` [PATCH v2 18/19] locking/lockdep: Update comments on dependency search Yuyang Du
@ 2019-03-18  8:57 ` Yuyang Du
  2019-03-19 16:29   ` Bart Van Assche
  18 siblings, 1 reply; 37+ messages in thread
From: Yuyang Du @ 2019-03-18  8:57 UTC (permalink / raw)
  To: peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel, Yuyang Du

After a BFS search, we check whether an error occurred. These checks are
mutually exclusive, so "else if" can be used instead of "if", which results
in slightly better optimized code.

No functional change.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1d38bf6..3efc00e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1950,21 +1950,21 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 
 	this.class = hlock_class(prev);
 	ret = find_usage_backwards(&this, bit_backwards, &target_entry);
-	if (ret < 0) {
+	if (unlikely(ret < 0)) {
 		print_bfs_bug(ret);
 		return 0;
 	}
-	if (ret == 1)
+	else if (ret == 1)
 		return ret;
 
 	that.parent = NULL;
 	that.class = hlock_class(next);
 	ret = find_usage_forwards(&that, bit_forwards, &target_entry1);
-	if (ret < 0) {
+	if (unlikely(ret < 0)) {
 		print_bfs_bug(ret);
 		return 0;
 	}
-	if (ret == 1)
+	else if (ret == 1)
 		return ret;
 
 	print_bad_irq_dependency(curr, &this, &that,
@@ -2282,7 +2282,7 @@ static void print_deadlock_scenario(struct held_lock *nxt,
 		debug_atomic_inc(nr_redundant);
 		return 2;
 	}
-	if (ret < 0) {
+	else if (unlikely(ret < 0)) {
 		print_bfs_bug(ret);
 		return 0;
 	}
@@ -2967,11 +2967,11 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_forwards(&root, bit, &target_entry);
-	if (ret < 0) {
+	if (unlikely(ret < 0)) {
 		print_bfs_bug(ret);
 		return 0;
 	}
-	if (ret == 1)
+	else if (ret == 1)
 		return ret;
 
 	print_irq_inversion_bug(curr, &root, target_entry,
@@ -2994,11 +2994,11 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	root.parent = NULL;
 	root.class = hlock_class(this);
 	ret = find_usage_backwards(&root, bit, &target_entry);
-	if (ret < 0) {
+	if (unlikely(ret < 0)) {
 		print_bfs_bug(ret);
 		return 0;
 	}
-	if (ret == 1)
+	else if (ret == 1)
 		return ret;
 
 	print_irq_inversion_bug(curr, &root, target_entry,
-- 
1.8.3.1



* Re: [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void
  2019-03-18  8:57 ` [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void Yuyang Du
@ 2019-03-18  9:45   ` Joe Perches
  2019-03-19  3:28     ` Yuyang Du
  2019-03-19 16:30     ` Bart Van Assche
  0 siblings, 2 replies; 37+ messages in thread
From: Joe Perches @ 2019-03-18  9:45 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: bvanassche, ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> Since none of the print_*() function's return value is necessary, change
> their return type to void. No functional change.
> 
> In cases where an invariable return value is used, this change slightly
> improves readability, i.e.:
[]
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
[]
> @@ -1430,23 +1430,20 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
[]
> -static void
> -print_circular_lock_scenario(struct held_lock *src,
> -			     struct held_lock *tgt,
> -			     struct lock_list *prt)
> +static void print_circular_lock_scenario(struct held_lock *src,
> +					 struct held_lock *tgt,
> +					 struct lock_list *prt)

trivia:

This style change seems superfluous as many
other existing functions use the previous style.




* Re: [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void
  2019-03-18  9:45   ` Joe Perches
@ 2019-03-19  3:28     ` Yuyang Du
  2019-03-19 16:30     ` Bart Van Assche
  1 sibling, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-19  3:28 UTC (permalink / raw)
  To: Joe Perches
  Cc: peterz, will.deacon, mingo, bvanassche, ming.lei, linux-kernel

Thanks for the review.

On Mon, 18 Mar 2019 at 17:45, Joe Perches <joe@perches.com> wrote:
>
> On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> > Since none of the print_*() function's return value is necessary, change
> > their return type to void. No functional change.
> >
> > In cases where an invariable return value is used, this change slightly
> > improves readability, i.e.:
> []
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> []
> > @@ -1430,23 +1430,20 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
> []
> > -static void
> > -print_circular_lock_scenario(struct held_lock *src,
> > -                          struct held_lock *tgt,
> > -                          struct lock_list *prt)
> > +static void print_circular_lock_scenario(struct held_lock *src,
> > +                                      struct held_lock *tgt,
> > +                                      struct lock_list *prt)
>
> trivia:
>
> This style change seems superfluous as many
> other existing functions use the previous style.

Those "many other" are kind of bizarre already in the context of that
file IMHO. So the change went to great lengths in #05 :)


* Re: [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors
  2019-03-18  8:57 ` [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors Yuyang Du
@ 2019-03-19 16:29   ` Bart Van Assche
  2019-03-19 17:19     ` Joe Perches
  2019-03-20  2:02     ` Yuyang Du
  0 siblings, 2 replies; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:29 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> -       if (ret < 0) {
> +       if (unlikely(ret < 0)) {
>                 print_bfs_bug(ret);
>                 return 0;
>         }
> -       if (ret == 1)
> +       else if (ret == 1)
>                 return ret;

Have you verified this patch series with checkpatch? This hunk changes code
that conforms to the kernel coding style into code that does not. Checkpatch
should have reported the following:

"else is not generally useful after a break or return"

Thanks,

Bart.


* Re: [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void
  2019-03-18  9:45   ` Joe Perches
  2019-03-19  3:28     ` Yuyang Du
@ 2019-03-19 16:30     ` Bart Van Assche
  1 sibling, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:30 UTC (permalink / raw)
  To: Joe Perches, Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 02:45 -0700, Joe Perches wrote:
> > -static void
> > -print_circular_lock_scenario(struct held_lock *src,
> > -			     struct held_lock *tgt,
> > -			     struct lock_list *prt)
> > +static void print_circular_lock_scenario(struct held_lock *src,
> > +					 struct held_lock *tgt,
> > +					 struct lock_list *prt)
> 
> trivia:
> 
> This style change seems superfluous as many
> other existing functions use the previous style.

I share Joe's opinion. 

Bart.


* Re: [PATCH v2 05/19] locking/lockdep: Adjust indents for function definitions
  2019-03-18  8:57 ` [PATCH v2 05/19] locking/lockdep: Adjust indents for function definitions Yuyang Du
@ 2019-03-19 16:33   ` Bart Van Assche
  2019-03-19 16:36     ` Peter Zijlstra
  0 siblings, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:33 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> Being paranoid to see function arguments lines are aligned.

This patch changes code that conforms to the kernel coding style into
code that does not conform to the kernel coding style. If you have a
look at the c-lineup-arglist-tabs-only function in
Documentation/process/coding-style.rst you will see that only tabs and
no spaces should be used to indent function arguments.

Bart.


* Re: [PATCH v2 07/19] locking/lockdep: Update obsolete struct field description
  2019-03-18  8:57 ` [PATCH v2 07/19] locking/lockdep: Update obsolete struct field description Yuyang Du
@ 2019-03-19 16:35   ` Bart Van Assche
  2019-03-19 16:44     ` Peter Zijlstra
  0 siblings, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:35 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
>  struct lock_chain {
> -       /* see BUILD_BUG_ON()s in lookup_chain_cache() */
> +       /*
> +        * irq_context: the same as irq_context in held_lock below
> +        * depth:       the number of held locks in this chain
> +        * base:        the index in chain_hlocks for this chain
> +        */
>         unsigned int                    irq_context :  2,
>                                         depth       :  6,
>                                         base        : 24;

Have you considered to use the kernel-doc style for documenting these
structure members?

Thanks,

Bart.


* Re: [PATCH v2 05/19] locking/lockdep: Adjust indents for function definitions
  2019-03-19 16:33   ` Bart Van Assche
@ 2019-03-19 16:36     ` Peter Zijlstra
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Zijlstra @ 2019-03-19 16:36 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Yuyang Du, will.deacon, mingo, ming.lei, linux-kernel

On Tue, Mar 19, 2019 at 09:33:00AM -0700, Bart Van Assche wrote:
> On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> > Being paranoid to see function arguments lines are aligned.
> 
> This patch changes code that conforms to the kernel coding style into
> code that does not conform to the kernel coding style. If you have a
> look at the c-lineup-arglist-tabs-only function in
> Documentation/process/coding-style.rst you will see that only tabs and
> no spaces should be used to indent function arguments.

So while I generally do align things with tabs and spaces, I have no
strong opinion either way as long as it's readable; I do hate pure
whitespace patches. They make git-blame so much harder to use :/
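[For reference, the tabs-only continuation style from
Documentation/process/coding-style.rst that Bart points at looks roughly like
this; the function name and arguments are made up for illustration.]

```c
/*
 * Continuation lines are indented with tabs only, one level deeper than
 * the opening line, rather than with spaces lined up under the paren.
 */
static int do_something(unsigned long arg1, unsigned long arg2,
		unsigned long arg3)
{
	return (int)(arg1 + arg2 + arg3);
}
```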


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 07/19] locking/lockdep: Update obsolete struct field description
  2019-03-19 16:35   ` Bart Van Assche
@ 2019-03-19 16:44     ` Peter Zijlstra
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Zijlstra @ 2019-03-19 16:44 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Yuyang Du, will.deacon, mingo, ming.lei, linux-kernel

On Tue, Mar 19, 2019 at 09:35:49AM -0700, Bart Van Assche wrote:
> On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> >  struct lock_chain {
> > -       /* see BUILD_BUG_ON()s in lookup_chain_cache() */

That's add_chain_cache() now, please do keep the reference.

> > +       /*
> > +        * irq_context: the same as irq_context in held_lock below
> > +        * depth:       the number of held locks in this chain
> > +        * base:        the index in chain_hlocks for this chain
> > +        */
> >         unsigned int                    irq_context :  2,
> >                                         depth       :  6,
> >                                         base        : 24;
> 
> Have you considered using the kernel-doc style to document these
> structure members?
> 
> Thanks,
> 
> Bart.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 11/19] locking/lockdep: Remove unused argument in validate_chain()
  2019-03-18  8:57 ` [PATCH v2 11/19] locking/lockdep: Remove unused argument in validate_chain() Yuyang Du
@ 2019-03-19 16:45   ` Bart Van Assche
  0 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:45 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> Its lockdep_map argument is not used, remove it.

Maybe update the commit message to reflect that the same unused argument is
also removed from check_deadlock()? Anyway:

Reviewed-by: Bart Van Assche <bvanassche@acm.org>



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 13/19] locking/lockdep: Remove unnecessary function pointer argument
  2019-03-18  8:57 ` [PATCH v2 13/19] locking/lockdep: Remove unnecessary function pointer argument Yuyang Du
@ 2019-03-19 16:48   ` Bart Van Assche
  0 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:48 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> check_prev_add() always has save_trace() as an input argument, which is
> unnecessary.

I think this kind of transformation is called constant propagation. You may
want to mention that name. Anyway:

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
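[A toy analogue of the transformation Bart names, with hypothetical names;
lockdep's real check_prev_add() and save_trace() are more involved. A parameter
that only ever receives one constant value can be propagated into the body,
turning an indirect call into a direct one.]

```c
static int trace_count;

static int save_trace(void) { return ++trace_count; }

/* Before: the callback is threaded through as a function pointer ... */
static int check_indirect(int (*save)(void))
{
	return save();
}

/* ... after: every caller passed save_trace, so just call it directly. */
static int check_direct(void)
{
	return save_trace();
}
```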



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue
  2019-03-18  8:57 ` [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue Yuyang Du
@ 2019-03-19 16:50   ` Bart Van Assche
  2019-03-19 16:51   ` Bart Van Assche
  1 sibling, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:50 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> The element field is an array in struct circular_queue to keep track of locks
> in the search. Making it the same type as the locks avoids a type cast.  Also
> fix a typo.
> 
> No functional change.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue
  2019-03-18  8:57 ` [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue Yuyang Du
  2019-03-19 16:50   ` Bart Van Assche
@ 2019-03-19 16:51   ` Bart Van Assche
  1 sibling, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:51 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> The element field is an array in struct circular_queue to keep track of locks
> in the search. Making it the same type as the locks avoids a type cast.  Also
> fix a typo.

BTW, your patch does more than fixing a typo. This patch elaborates the comment
above struct circular_queue. I think the commit message should mention that.

Bart.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 15/19] locking/lockdep: Remove __cq_empty()
  2019-03-18  8:57 ` [PATCH v2 15/19] locking/lockdep: Remove __cq_empty() Yuyang Du
@ 2019-03-19 16:54   ` Bart Van Assche
  2019-03-20  2:30     ` Yuyang Du
  0 siblings, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:54 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> __cq_empty() can be embedded in __cq_dequeue(), so remove it. We get slightly
> simpler code. No functional change.

Does inlining __cq_empty() really improve readability of the lockdep code?

> -static inline int __cq_dequeue(struct circular_queue *cq, struct lock_list **elem)
> +/*
> + * Dequeue an element from the circular_queue, return the lock if the queue
> + * is not empty, or NULL otherwise
> + */
> +static inline struct lock_list * __cq_dequeue(struct circular_queue *cq)
>  {
> -       if (__cq_empty(cq))
> -               return -1;
> +       struct lock_list * lock;
>  
> -       *elem = cq->element[cq->front];
> +       /*
> +        * Is the circular_queue empty?
> +        */
> +       if (cq->front == cq->rear)
> +               return NULL;
> +
> +       lock = cq->element[cq->front];
>         cq->front = (cq->front + 1) & CQ_MASK;
> -       return 0;
> +
> +       return lock;
>  }
>  
>  static inline unsigned int  __cq_get_elem_count(struct circular_queue *cq)
> @@ -1376,6 +1381,7 @@ static int __bfs(struct lock_list *source_entry,
>                  int forward)
>  {
>         struct lock_list *entry;
> +       struct lock_list *lock;
>         struct list_head *head;
>         struct circular_queue *cq = &lock_cq;
>         int ret = 1;
> @@ -1397,10 +1403,7 @@ static int __bfs(struct lock_list *source_entry,
>         __cq_init(cq);
>         __cq_enqueue(cq, source_entry);
>  
> -       while (!__cq_empty(cq)) {
> -               struct lock_list *lock;
> -
> -               __cq_dequeue(cq, &lock);
> +       while ((lock = __cq_dequeue(cq))) {
>  
>                 if (!lock->class) {
>                         ret = -2;

This is the most important change in this patch. Using the title "Remove __cq_empty()"
for this patch is misleading because the above patch does something else, namely changing
the return type of __cq_dequeue() from int into a pointer. Should this patch perhaps be
split or should the __cq_empty() removal be left out from this patch?

Bart.
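[A standalone sketch of the pattern under discussion; the queue size and
element type here are hypothetical, while lockdep's queue holds struct
lock_list pointers. The emptiness check is folded into a dequeue that returns
the element, or NULL when there is nothing to take.]

```c
#include <stddef.h>

#define CQ_SIZE	8			/* must be a power of two */
#define CQ_MASK	(CQ_SIZE - 1)

struct item { int v; };

struct circular_queue {
	struct item	*element[CQ_SIZE];
	unsigned int	front, rear;
};

static void cq_init(struct circular_queue *cq)
{
	cq->front = cq->rear = 0;
}

static int cq_enqueue(struct circular_queue *cq, struct item *it)
{
	if (((cq->rear + 1) & CQ_MASK) == cq->front)
		return -1;		/* full */
	cq->element[cq->rear] = it;
	cq->rear = (cq->rear + 1) & CQ_MASK;
	return 0;
}

/* Returns the oldest element, or NULL if the queue is empty. */
static struct item *cq_dequeue(struct circular_queue *cq)
{
	struct item *it;

	if (cq->front == cq->rear)	/* was __cq_empty() */
		return NULL;

	it = cq->element[cq->front];
	cq->front = (cq->front + 1) & CQ_MASK;
	return it;
}
```

The caller can then loop with `while ((it = cq_dequeue(cq)))`, just as the
patch does in __bfs().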

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 16/19] locking/lockdep: Use function pointer to avoid constant checks
  2019-03-18  8:57 ` [PATCH v2 16/19] locking/lockdep: Use function pointer to avoid constant checks Yuyang Du
@ 2019-03-19 16:56   ` Bart Van Assche
  0 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2019-03-19 16:56 UTC (permalink / raw)
  To: Yuyang Du, peterz, will.deacon, mingo; +Cc: ming.lei, linux-kernel

On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> +static inline struct list_head *get_forward_dep(struct lock_list * lock)
> +{
> +       return &lock->class->locks_after;
> +}
> +
> +static inline struct list_head *get_backward_dep(struct lock_list * lock)
> +{
> +       return &lock->class->locks_before;
> +}
> +
>  static int __bfs(struct lock_list *source_entry,
>                  void *data,
>                  int (*match)(struct lock_list *entry, void *data),
>                  struct lock_list **target_entry,
> -                int forward)
> +                struct list_head *(*get_dep)(struct lock_list * lock))
>  {
>         struct lock_list *entry;
>         struct lock_list *lock;
> @@ -1392,11 +1402,7 @@ static int __bfs(struct lock_list *source_entry,
>                 goto exit;
>         }
>  
> -       if (forward)
> -               head = &source_entry->class->locks_after;
> -       else
> -               head = &source_entry->class->locks_before;
> -
> +       head = get_dep(source_entry);
>         if (list_empty(head))
>                 goto exit;
>  
> @@ -1410,10 +1416,7 @@ static int __bfs(struct lock_list *source_entry,
>                         goto exit;
>                 }
>  
> -               if (forward)
> -                       head = &lock->class->locks_after;
> -               else
> -                       head = &lock->class->locks_before;
> +               head = get_dep(lock);
>  
>                 DEBUG_LOCKS_WARN_ON(!irqs_disabled());
>  
> @@ -1445,7 +1448,7 @@ static inline int __bfs_forwards(struct lock_list *src_entry, void *data,
>                                  int (*match)(struct lock_list *entry, void *data),
>                                  struct lock_list **target_entry)
>  {
> -       return __bfs(src_entry, data, match, target_entry, 1);
> +       return __bfs(src_entry, data, match, target_entry, get_forward_dep);
>  
>  }
>  
> @@ -1453,7 +1456,7 @@ static inline int __bfs_backwards(struct lock_list *src_entry, void *data,
>                                   int (*match)(struct lock_list *entry, void *data),
>                                   struct lock_list **target_entry)
>  {
> -       return __bfs(src_entry, data, match, target_entry, 0);
> +       return __bfs(src_entry, data, match, target_entry, get_backward_dep);
>  
>  }

I think this patch makes the code harder to read. Additionally, if the compiler doesn't
do conditional folding, this patch introduces an indirect branch and hence will make the
lockdep code slower.

Bart.
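[The trade-off Bart raises can be seen in a minimal analogue with hypothetical
types; lockdep's version walks struct lock_list dependency lists. The per-node
flag check disappears, but each visit now pays an indirect call unless the
compiler can devirtualize it.]

```c
struct node {
	int forward_dep;
	int backward_dep;
};

static int get_forward_dep(const struct node *n)  { return n->forward_dep; }
static int get_backward_dep(const struct node *n) { return n->backward_dep; }

/* Flag version: one conditional per visit, but a direct field access. */
static int visit_flag(const struct node *n, int forward)
{
	return forward ? n->forward_dep : n->backward_dep;
}

/* Pointer version: no conditional, but an indirect call per visit. */
static int visit_fp(const struct node *n,
		    int (*get_dep)(const struct node *))
{
	return get_dep(n);
}
```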

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors
  2019-03-19 16:29   ` Bart Van Assche
@ 2019-03-19 17:19     ` Joe Perches
  2019-03-20  2:02     ` Yuyang Du
  1 sibling, 0 replies; 37+ messages in thread
From: Joe Perches @ 2019-03-19 17:19 UTC (permalink / raw)
  To: Bart Van Assche, Yuyang Du, peterz, will.deacon, mingo
  Cc: ming.lei, linux-kernel

On Tue, 2019-03-19 at 09:29 -0700, Bart Van Assche wrote:
> On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> > -       if (ret < 0) {
> > +       if (unlikely(ret < 0)) {
> >                 print_bfs_bug(ret);
> >                 return 0;
> >         }
> > -       if (ret == 1)
> > +       else if (ret == 1)
> >                 return ret;
> 
> Have you verified this patch series with checkpatch? Checkpatch should have
> reported that you are changing code that conforms to the coding style into
> code that does not conform to the kernel coding style. Checkpatch should have
> reported the following:
> 
> "else is not generally useful after a break or return"

checkpatch just ain't that smart.
You're welcome to try to improve it.
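[For context, the checkpatch rule being discussed targets code like the sketch
below, with made-up return values: once the first branch always returns, the
`else` adds nothing, so the kernel style drops it.]

```c
/* Flagged form: else after a branch that always returns. */
static int handle_with_else(int ret)
{
	if (ret < 0) {
		return 0;
	} else if (ret == 1)
		return ret;

	return 2;
}

/* Equivalent form without the redundant else. */
static int handle_without_else(int ret)
{
	if (ret < 0)
		return 0;
	if (ret == 1)
		return ret;

	return 2;
}
```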


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors
  2019-03-19 16:29   ` Bart Van Assche
  2019-03-19 17:19     ` Joe Perches
@ 2019-03-20  2:02     ` Yuyang Du
  1 sibling, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-20  2:02 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: peterz, will.deacon, mingo, ming.lei, linux-kernel

Thanks for the review.

On Wed, 20 Mar 2019 at 00:29, Bart Van Assche <bvanassche@acm.org> wrote:
>
> On Mon, 2019-03-18 at 16:57 +0800, Yuyang Du wrote:
> > -       if (ret < 0) {
> > +       if (unlikely(ret < 0)) {
> >                 print_bfs_bug(ret);
> >                 return 0;
> >         }
> > -       if (ret == 1)
> > +       else if (ret == 1)
> >                 return ret;
>
> Have you verified this patch series with checkpatch? Checkpatch should have
> reported that you are changing code that conforms to the coding style into
> code that does not conform to the kernel coding style. Checkpatch should have
> reported the following:
>
> "else is not generally useful after a break or return"

I didn't. These changes were made in a row, and not checking each of
them was careless of me.

Thanks,
Yuyang

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 15/19] locking/lockdep: Remove __cq_empty()
  2019-03-19 16:54   ` Bart Van Assche
@ 2019-03-20  2:30     ` Yuyang Du
  0 siblings, 0 replies; 37+ messages in thread
From: Yuyang Du @ 2019-03-20  2:30 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: peterz, will.deacon, mingo, ming.lei, linux-kernel

Thanks for review.

On Wed, 20 Mar 2019 at 00:54, Bart Van Assche <bvanassche@acm.org> wrote:
>
> This is the most important change in this patch. Using the title "Remove __cq_empty()"
> for this patch is misleading because the above patch does something else, namely changing
> the return type of __cq_dequeue() from int into a pointer. Should this patch perhaps be
> split or should the __cq_empty() removal be left out from this patch?

I actually hesitated over whether __cq_full() should be inlined as well.
Having said that, let me keep both of them and just change the return
type of __cq_dequeue().

Thanks,
Yuyang

^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2019-03-20  2:31 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-18  8:57 [PATCH v2 00/19] locking/lockdep: Add comments and make some code Yuyang Du
2019-03-18  8:57 ` [PATCH v2 01/19] locking/lockdep: Change all print_*() return type to void Yuyang Du
2019-03-18  9:45   ` Joe Perches
2019-03-19  3:28     ` Yuyang Du
2019-03-19 16:30     ` Bart Van Assche
2019-03-18  8:57 ` [PATCH v2 02/19] locking/lockdep: Add description and explanation in lockdep design doc Yuyang Du
2019-03-18  8:57 ` [PATCH v2 03/19] locking/lockdep: Adjust lock usage bit character checks Yuyang Du
2019-03-18  8:57 ` [PATCH v2 04/19] locking/lockdep: Remove useless conditional macro Yuyang Du
2019-03-18  8:57 ` [PATCH v2 05/19] locking/lockdep: Adjust indents for function definitions Yuyang Du
2019-03-19 16:33   ` Bart Van Assche
2019-03-19 16:36     ` Peter Zijlstra
2019-03-18  8:57 ` [PATCH v2 06/19] locking/lockdep: Print the right depth for chain key colission Yuyang Du
2019-03-18  8:57 ` [PATCH v2 07/19] locking/lockdep: Update obsolete struct field description Yuyang Du
2019-03-19 16:35   ` Bart Van Assche
2019-03-19 16:44     ` Peter Zijlstra
2019-03-18  8:57 ` [PATCH v2 08/19] locking/lockdep: Use lockdep_init_task for task initiation consistently Yuyang Du
2019-03-18  8:57 ` [PATCH v2 09/19] locking/lockdep: Define INITIAL_CHAIN_KEY for chain keys to start with Yuyang Du
2019-03-18  8:57 ` [PATCH v2 10/19] locking/lockdep: Change the range of class_idx in held_lock struct Yuyang Du
2019-03-18  8:57 ` [PATCH v2 11/19] locking/lockdep: Remove unused argument in validate_chain() Yuyang Du
2019-03-19 16:45   ` Bart Van Assche
2019-03-18  8:57 ` [PATCH v2 12/19] locking/lockdep: Update comment Yuyang Du
2019-03-18  8:57 ` [PATCH v2 13/19] locking/lockdep: Remove unnecessary function pointer argument Yuyang Du
2019-03-19 16:48   ` Bart Van Assche
2019-03-18  8:57 ` [PATCH v2 14/19] locking/lockdep: Change type of the element field in circular_queue Yuyang Du
2019-03-19 16:50   ` Bart Van Assche
2019-03-19 16:51   ` Bart Van Assche
2019-03-18  8:57 ` [PATCH v2 15/19] locking/lockdep: Remove __cq_empty() Yuyang Du
2019-03-19 16:54   ` Bart Van Assche
2019-03-20  2:30     ` Yuyang Du
2019-03-18  8:57 ` [PATCH v2 16/19] locking/lockdep: Use function pointer to avoid constant checks Yuyang Du
2019-03-19 16:56   ` Bart Van Assche
2019-03-18  8:57 ` [PATCH v2 17/19] locking/lockdep: Combine check_noncircular and check_redundant Yuyang Du
2019-03-18  8:57 ` [PATCH v2 18/19] locking/lockdep: Update comments on dependency search Yuyang Du
2019-03-18  8:57 ` [PATCH v2 19/19] locking/lockdep: Change if to else-if when checking bfs errors Yuyang Du
2019-03-19 16:29   ` Bart Van Assche
2019-03-19 17:19     ` Joe Perches
2019-03-20  2:02     ` Yuyang Du

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).