linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries
@ 2020-01-21 15:40 Waiman Long
  2020-01-21 15:40 ` [PATCH v4 1/7] locking/lockdep: Decrement irq context counters when removing lock chain Waiman Long
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

 v4:
  - Fix build errors reported by kbuild test robot.
  - Adopt the single chain block allocator code suggested by PeterZ,
    which combines the last 3 patches of the v3 series.
  - Add another patch to introduce a fast path in the chain block
    allocator.
  - In patch 1, move the inc_chains() out of CONFIG_TRACE_IRQFLAGS
    conditional compilation block.

 v3:
  - Move the bug fix patches to the beginning of the series.
  - Include a number of changes as suggested by PeterZ.
  - Increase MAX_CHAIN_BUCKETS from 8 to 10 to reduce the chance of using
    the unsized list.
  - Add patch 7 to add a lockdep_early_init() call.
  - Add patch 8 to allocate chain hlocks by splitting a large chain
    block as a last resort.

 v2:
  - Revamp the chain_hlocks reuse patch to store the freed chain_hlocks
    information in the chain_hlocks entries themselves, avoiding the
    need for a separate set of tracking structures. This, however,
    requires a minimum allocation size of at least 2. Thanks to PeterZ
    for his review and for inspiring this change.
  - Remove the leakage counter as it is no longer applicable.
  - Add patch 6 to make the output of /proc/lockdep_chains more readable.

It was found that when running a workload that kept on adding lock
classes and then zapping them repetitively, the system would eventually
run out of chain_hlocks[] entries even though there was still plenty
of room in the other lockdep data buffers.

  [ 4318.443670] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
  [ 4318.444809] turning off the locking correctness validator.

In order to fix this problem, we have to make chain_hlocks[] entries
reusable just like those of the other lockdep arrays. Besides that,
the patchset also adds some zapped-class and chain_hlocks counters to
be tracked by /proc/lockdep_stats. It also fixes a leak in the irq
context counters and makes the output of /proc/lockdep_chains more
readable.

Waiman Long (7):
  locking/lockdep: Decrement irq context counters when removing lock
    chain
  locking/lockdep: Display irq_context names in /proc/lockdep_chains
  locking/lockdep: Track number of zapped classes
  locking/lockdep: Throw away all lock chains with zapped class
  locking/lockdep: Track number of zapped lock chains
  locking/lockdep: Reuse freed chain_hlocks entries
  locking/lockdep: Add a fast path for chain_hlocks allocation

 kernel/locking/lockdep.c           | 335 ++++++++++++++++++++++++-----
 kernel/locking/lockdep_internals.h |  13 +-
 kernel/locking/lockdep_proc.c      |  26 ++-
 3 files changed, 309 insertions(+), 65 deletions(-)

-- 
2.18.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v4 1/7] locking/lockdep: Decrement irq context counters when removing lock chain
  2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
@ 2020-01-21 15:40 ` Waiman Long
  2020-01-21 15:40 ` [PATCH v4 2/7] locking/lockdep: Display irq_context names in /proc/lockdep_chains Waiman Long
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

There are currently three counters to track the irq context of a lock
chain - nr_hardirq_chains, nr_softirq_chains and nr_process_chains.
They are incremented when a new lock chain is added, but they are
not decremented when a lock chain is removed. That causes some of the
statistics reported by /proc/lockdep_stats to be incorrect.

Fix that by decrementing the right counter when a lock chain is removed.

Since inc_chains() no longer accesses hardirq_context and softirq_context
directly, it is moved out of the CONFIG_TRACE_IRQFLAGS conditional
compilation block.

Fixes: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use")
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep.c           | 40 +++++++++++++++++-------------
 kernel/locking/lockdep_internals.h |  6 +++++
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 32282e7112d3..056f1ee1db53 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2299,18 +2299,6 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
 	return 0;
 }
 
-static void inc_chains(void)
-{
-	if (current->hardirq_context)
-		nr_hardirq_chains++;
-	else {
-		if (current->softirq_context)
-			nr_softirq_chains++;
-		else
-			nr_process_chains++;
-	}
-}
-
 #else
 
 static inline int check_irq_usage(struct task_struct *curr,
@@ -2318,13 +2306,27 @@ static inline int check_irq_usage(struct task_struct *curr,
 {
 	return 1;
 }
+#endif /* CONFIG_TRACE_IRQFLAGS */
 
-static inline void inc_chains(void)
+static void inc_chains(int irq_context)
 {
-	nr_process_chains++;
+	if (irq_context & LOCK_CHAIN_HARDIRQ_CONTEXT)
+		nr_hardirq_chains++;
+	else if (irq_context & LOCK_CHAIN_SOFTIRQ_CONTEXT)
+		nr_softirq_chains++;
+	else
+		nr_process_chains++;
 }
 
-#endif /* CONFIG_TRACE_IRQFLAGS */
+static void dec_chains(int irq_context)
+{
+	if (irq_context & LOCK_CHAIN_HARDIRQ_CONTEXT)
+		nr_hardirq_chains--;
+	else if (irq_context & LOCK_CHAIN_SOFTIRQ_CONTEXT)
+		nr_softirq_chains--;
+	else
+		nr_process_chains--;
+}
 
 static void
 print_deadlock_scenario(struct held_lock *nxt, struct held_lock *prv)
@@ -2844,7 +2846,7 @@ static inline int add_chain_cache(struct task_struct *curr,
 
 	hlist_add_head_rcu(&chain->entry, hash_head);
 	debug_atomic_inc(chain_lookup_misses);
-	inc_chains();
+	inc_chains(chain->irq_context);
 
 	return 1;
 }
@@ -3597,7 +3599,8 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
 
 static inline unsigned int task_irq_context(struct task_struct *task)
 {
-	return 2 * !!task->hardirq_context + !!task->softirq_context;
+	return LOCK_CHAIN_HARDIRQ_CONTEXT * !!task->hardirq_context +
+	       LOCK_CHAIN_SOFTIRQ_CONTEXT * !!task->softirq_context;
 }
 
 static int separate_irq_context(struct task_struct *curr,
@@ -4799,6 +4802,8 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 		return;
 	/* Overwrite the chain key for concurrent RCU readers. */
 	WRITE_ONCE(chain->chain_key, chain_key);
+	dec_chains(chain->irq_context);
+
 	/*
 	 * Note: calling hlist_del_rcu() from inside a
 	 * hlist_for_each_entry_rcu() loop is safe.
@@ -4820,6 +4825,7 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 	}
 	*new_chain = *chain;
 	hlist_add_head_rcu(&new_chain->entry, chainhashentry(chain_key));
+	inc_chains(new_chain->irq_context);
 #endif
 }
 
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 18d85aebbb57..a525368b8cf6 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -106,6 +106,12 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ =
 #define STACK_TRACE_HASH_SIZE	16384
 #endif
 
+/*
+ * Bit definitions for lock_chain.irq_context
+ */
+#define LOCK_CHAIN_SOFTIRQ_CONTEXT	(1 << 0)
+#define LOCK_CHAIN_HARDIRQ_CONTEXT	(1 << 1)
+
 #define MAX_LOCKDEP_CHAINS	(1UL << MAX_LOCKDEP_CHAINS_BITS)
 
 #define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS*5)
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v4 2/7] locking/lockdep: Display irq_context names in /proc/lockdep_chains
  2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
  2020-01-21 15:40 ` [PATCH v4 1/7] locking/lockdep: Decrement irq context counters when removing lock chain Waiman Long
@ 2020-01-21 15:40 ` Waiman Long
  2020-01-21 15:40 ` [PATCH v4 3/7] locking/lockdep: Track number of zapped classes Waiman Long
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

Currently, the irq_context field of the lock chains displayed in
/proc/lockdep_chains is just a number, and many people may not know
what a non-zero value means. To make the information more useful,
print the actual irq names ("softirq" and "hardirq") instead.
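
With this change, the irq_context line can take one of the four values
in the new irq_strs[] table below; a hypothetical excerpt:

  irq_context: 0
  irq_context: softirq
  irq_context: hardirq
  irq_context: hardirq|softirq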

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep_proc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index dadb7b7fba37..5e64a6ec09c2 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -128,6 +128,13 @@ static int lc_show(struct seq_file *m, void *v)
 	struct lock_chain *chain = v;
 	struct lock_class *class;
 	int i;
+	static const char * const irq_strs[] = {
+		[0]			     = "0",
+		[LOCK_CHAIN_HARDIRQ_CONTEXT] = "hardirq",
+		[LOCK_CHAIN_SOFTIRQ_CONTEXT] = "softirq",
+		[LOCK_CHAIN_SOFTIRQ_CONTEXT|
+		 LOCK_CHAIN_HARDIRQ_CONTEXT] = "hardirq|softirq",
+	};
 
 	if (v == SEQ_START_TOKEN) {
 		if (nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)
@@ -136,7 +143,7 @@ static int lc_show(struct seq_file *m, void *v)
 		return 0;
 	}
 
-	seq_printf(m, "irq_context: %d\n", chain->irq_context);
+	seq_printf(m, "irq_context: %s\n", irq_strs[chain->irq_context]);
 
 	for (i = 0; i < chain->depth; i++) {
 		class = lock_chain_get_class(chain, i);
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v4 3/7] locking/lockdep: Track number of zapped classes
  2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
  2020-01-21 15:40 ` [PATCH v4 1/7] locking/lockdep: Decrement irq context counters when removing lock chain Waiman Long
  2020-01-21 15:40 ` [PATCH v4 2/7] locking/lockdep: Display irq_context names in /proc/lockdep_chains Waiman Long
@ 2020-01-21 15:40 ` Waiman Long
  2020-01-21 15:40 ` [PATCH v4 4/7] locking/lockdep: Throw away all lock chains with zapped class Waiman Long
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

The whole point of the lockdep dynamic key patch is to allow unused
locks to be removed from the lockdep data buffers so that existing
buffer space can be reused. However, there is no way to find out how
many unused locks have been zapped, so we don't know whether the
zapping process is working properly.

Add a new nr_zapped_classes counter to track that and show it in
/proc/lockdep_stats.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep.c           | 2 ++
 kernel/locking/lockdep_internals.h | 1 +
 kernel/locking/lockdep_proc.c      | 6 ++++++
 3 files changed, 9 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 056f1ee1db53..48e8510529a7 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -147,6 +147,7 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
 #define KEYHASH_SIZE		(1UL << KEYHASH_BITS)
 static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
 unsigned long nr_lock_classes;
+unsigned long nr_zapped_classes;
 #ifndef CONFIG_DEBUG_LOCKDEP
 static
 #endif
@@ -4881,6 +4882,7 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 	}
 
 	remove_class_from_lock_chains(pf, class);
+	nr_zapped_classes++;
 }
 
 static void reinit_class(struct lock_class *class)
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index a525368b8cf6..76db80446a32 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -130,6 +130,7 @@ extern const char *__get_key_name(const struct lockdep_subclass_key *key,
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i);
 
 extern unsigned long nr_lock_classes;
+extern unsigned long nr_zapped_classes;
 extern unsigned long nr_list_entries;
 long lockdep_next_lockchain(long i);
 unsigned long lock_chain_count(void);
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 5e64a6ec09c2..bd853848832b 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -343,6 +343,12 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 	seq_printf(m, " debug_locks:                   %11u\n",
 			debug_locks);
 
+	/*
+	 * Zapped classes and lockdep data buffers reuse statistics.
+	 */
+	seq_puts(m, "\n");
+	seq_printf(m, " zapped classes:                %11lu\n",
+			nr_zapped_classes);
 	return 0;
 }
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v4 4/7] locking/lockdep: Throw away all lock chains with zapped class
  2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
                   ` (2 preceding siblings ...)
  2020-01-21 15:40 ` [PATCH v4 3/7] locking/lockdep: Track number of zapped classes Waiman Long
@ 2020-01-21 15:40 ` Waiman Long
  2020-01-21 15:40 ` [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains Waiman Long
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

If a lock chain contains a class that is zapped, the whole lock chain
is likely to be invalid. If the zapped class is at the end of the
chain, the partial chain without the zapped class should have been
stored already, as the current code stores all of a chain's predecessor
chains. If the zapped class is somewhere in the middle, there is no
guarantee that the partial chain will ever actually occur. It may just
clutter up the hash and make searching slower. I would rather store
the chain only when it actually happens.

So just dump the corresponding chain_hlocks entries for now. A later
patch will reuse the freed chain_hlocks entries.

This patch also changes the type of nr_chain_hlocks to unsigned integer
to be consistent with the other counters.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep.c           | 37 ++++--------------------------
 kernel/locking/lockdep_internals.h |  4 ++--
 kernel/locking/lockdep_proc.c      |  2 +-
 3 files changed, 7 insertions(+), 36 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 48e8510529a7..a259f9e97e3b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2626,8 +2626,8 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 
 struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
 static DECLARE_BITMAP(lock_chains_in_use, MAX_LOCKDEP_CHAINS);
-int nr_chain_hlocks;
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
+unsigned int nr_chain_hlocks;
 
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
 {
@@ -4773,36 +4773,23 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 					 struct lock_class *class)
 {
 #ifdef CONFIG_PROVE_LOCKING
-	struct lock_chain *new_chain;
-	u64 chain_key;
 	int i;
 
 	for (i = chain->base; i < chain->base + chain->depth; i++) {
 		if (chain_hlocks[i] != class - lock_classes)
 			continue;
-		/* The code below leaks one chain_hlock[] entry. */
-		if (--chain->depth > 0) {
-			memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
-				(chain->base + chain->depth - i) *
-				sizeof(chain_hlocks[0]));
-		}
 		/*
 		 * Each lock class occurs at most once in a lock chain so once
 		 * we found a match we can break out of this loop.
 		 */
-		goto recalc;
+		goto free_lock_chain;
 	}
 	/* Since the chain has not been modified, return. */
 	return;
 
-recalc:
-	chain_key = INITIAL_CHAIN_KEY;
-	for (i = chain->base; i < chain->base + chain->depth; i++)
-		chain_key = iterate_chain_key(chain_key, chain_hlocks[i]);
-	if (chain->depth && chain->chain_key == chain_key)
-		return;
+free_lock_chain:
 	/* Overwrite the chain key for concurrent RCU readers. */
-	WRITE_ONCE(chain->chain_key, chain_key);
+	WRITE_ONCE(chain->chain_key, INITIAL_CHAIN_KEY);
 	dec_chains(chain->irq_context);
 
 	/*
@@ -4811,22 +4798,6 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 	 */
 	hlist_del_rcu(&chain->entry);
 	__set_bit(chain - lock_chains, pf->lock_chains_being_freed);
-	if (chain->depth == 0)
-		return;
-	/*
-	 * If the modified lock chain matches an existing lock chain, drop
-	 * the modified lock chain.
-	 */
-	if (lookup_chain_cache(chain_key))
-		return;
-	new_chain = alloc_lock_chain();
-	if (WARN_ON_ONCE(!new_chain)) {
-		debug_locks_off();
-		return;
-	}
-	*new_chain = *chain;
-	hlist_add_head_rcu(&new_chain->entry, chainhashentry(chain_key));
-	inc_chains(new_chain->irq_context);
 #endif
 }
 
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 76db80446a32..926bfa4b4564 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -134,14 +134,14 @@ extern unsigned long nr_zapped_classes;
 extern unsigned long nr_list_entries;
 long lockdep_next_lockchain(long i);
 unsigned long lock_chain_count(void);
-extern int nr_chain_hlocks;
 extern unsigned long nr_stack_trace_entries;
 
 extern unsigned int nr_hardirq_chains;
 extern unsigned int nr_softirq_chains;
 extern unsigned int nr_process_chains;
-extern unsigned int max_lockdep_depth;
+extern unsigned int nr_chain_hlocks;
 
+extern unsigned int max_lockdep_depth;
 extern unsigned int max_bfs_queue_depth;
 
 #ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index bd853848832b..2457bc97924c 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -278,7 +278,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 #ifdef CONFIG_PROVE_LOCKING
 	seq_printf(m, " dependency chains:             %11lu [max: %lu]\n",
 			lock_chain_count(), MAX_LOCKDEP_CHAINS);
-	seq_printf(m, " dependency chain hlocks:       %11d [max: %lu]\n",
+	seq_printf(m, " dependency chain hlocks:       %11u [max: %lu]\n",
 			nr_chain_hlocks, MAX_LOCKDEP_CHAIN_HLOCKS);
 #endif
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains
  2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
                   ` (3 preceding siblings ...)
  2020-01-21 15:40 ` [PATCH v4 4/7] locking/lockdep: Throw away all lock chains with zapped class Waiman Long
@ 2020-01-21 15:40 ` Waiman Long
  2020-01-24 13:18   ` kbuild test robot
  2020-01-24 20:15   ` kbuild test robot
  2020-01-21 15:40 ` [PATCH v4 6/7] locking/lockdep: Reuse freed chain_hlocks entries Waiman Long
  2020-01-21 15:40 ` [PATCH v4 7/7] locking/lockdep: Add a fast path for chain_hlocks allocation Waiman Long
  6 siblings, 2 replies; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

Add a new counter nr_zapped_lock_chains to track the number of lock
chains that have been removed.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep.c           | 2 ++
 kernel/locking/lockdep_internals.h | 1 +
 kernel/locking/lockdep_proc.c      | 2 ++
 3 files changed, 5 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a259f9e97e3b..97ec0786dbda 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2627,6 +2627,7 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
 static DECLARE_BITMAP(lock_chains_in_use, MAX_LOCKDEP_CHAINS);
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
+unsigned long nr_zapped_lock_chains;
 unsigned int nr_chain_hlocks;
 
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
@@ -4798,6 +4799,7 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 	 */
 	hlist_del_rcu(&chain->entry);
 	__set_bit(chain - lock_chains, pf->lock_chains_being_freed);
+	nr_zapped_lock_chains++;
 #endif
 }
 
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 926bfa4b4564..af722ceeda33 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -131,6 +131,7 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i);
 
 extern unsigned long nr_lock_classes;
 extern unsigned long nr_zapped_classes;
+extern unsigned long nr_zapped_lock_chains;
 extern unsigned long nr_list_entries;
 long lockdep_next_lockchain(long i);
 unsigned long lock_chain_count(void);
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 2457bc97924c..df1b97912871 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -349,6 +349,8 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 	seq_puts(m, "\n");
 	seq_printf(m, " zapped classes:                %11lu\n",
 			nr_zapped_classes);
+	seq_printf(m, " zapped lock chains:            %11lu\n",
+			nr_zapped_lock_chains);
 	return 0;
 }
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v4 6/7] locking/lockdep: Reuse freed chain_hlocks entries
  2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
                   ` (4 preceding siblings ...)
  2020-01-21 15:40 ` [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains Waiman Long
@ 2020-01-21 15:40 ` Waiman Long
  2020-01-24 11:57   ` kbuild test robot
  2020-01-21 15:40 ` [PATCH v4 7/7] locking/lockdep: Add a fast path for chain_hlocks allocation Waiman Long
  6 siblings, 1 reply; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

Once a lock class is zapped, all the lock chains that include the zapped
class are essentially useless. The lock_chain structure itself can be
reused, but not the corresponding chain_hlocks[] entries. Over time,
we will run out of chain_hlocks entries while there are still plenty
of other lockdep array entries available.

To fix this imbalance, we have to make chain_hlocks entries reusable
just like the others. As the freed chain_hlocks entries come in blocks
of various lengths, a simple bitmap like the one used in the other
reusable lockdep arrays isn't applicable. Instead the chain_hlocks
entries are put into bucketed lists (MAX_CHAIN_BUCKETS) of chain blocks.
Bucket 0 is the dump bucket, which houses chain blocks larger than
MAX_CHAIN_BUCKETS entries; a freed block of size n <= MAX_CHAIN_BUCKETS
goes into bucket n-1. Initially, the whole array is one chain block in
bucket 0.

Allocation requests for chain_hlocks are fulfilled first by looking
for a chain block of matching size. If none is found, a larger chain
block is split. As the minimum size of a chain block is 2 chain_hlocks[]
entries, that is also the minimum allocation size. In other words, an
allocation request for one chain_hlocks[] entry will cause a 2-entry
block to be returned, and hence 1 entry will be wasted. When the
chain_hlocks[] array is nearly full or highly fragmented, there is a
slight chance that a 1-entry fragment will be created during the chain
block splitting process. Those 1-entry fragments are discarded and
hence leaked.
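
The freelist metadata lives in the freed entries themselves (see the
comment block in the diff below). Here is a minimal user-space sketch
of that encoding; encode_next()/decode_next() are hypothetical helpers
mirroring what init_chain_block()/chain_block_next() do in the patch:

  /*
   * A next-block index is split across two u16 entries, with bit 15
   * of the first entry flagging it as metadata rather than a class
   * index.  An all-1-bits value marks the end of the list.
   */
  #include <assert.h>
  #include <stdint.h>

  #define CHAIN_BLK_FLAG     (1U << 15)
  #define CHAIN_BLK_LIST_END 0xFFFFU

  static void encode_next(uint16_t entry[2], int next)
  {
      entry[0] = (next >> 16) | CHAIN_BLK_FLAG;
      entry[1] = (uint16_t)next;
  }

  static int decode_next(const uint16_t entry[2])
  {
      if (entry[0] == CHAIN_BLK_LIST_END)
          return -1;                       /* end of the bucket list */
      return ((entry[0] & ~CHAIN_BLK_FLAG) << 16) | entry[1];
  }

  int main(void)
  {
      uint16_t e[2];

      encode_next(e, 0x12345);             /* index needs both entries */
      assert(decode_next(e) == 0x12345);

      encode_next(e, -1);                  /* -1 encodes as all 1 bits */
      assert(decode_next(e) == -1);
      return 0;
  }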

By reusing the chain_hlocks entries, we are able to handle workloads
that add and zap a lot of lock classes without the risk of overflowing
the chain_hlocks array, as long as the total number of outstanding lock
classes at any time remains within a reasonable limit.

Two new tracking counters, nr_free_chain_hlocks & nr_large_chain_blocks,
are added to track the total number of chain_hlocks entries in the
free bucketed lists and the number of large chain blocks in buckets[0]
respectively. nr_free_chain_hlocks replaces nr_chain_hlocks.

The nr_large_chain_blocks counter makes it possible to see whether the
number of available buckets (MAX_CHAIN_BUCKETS) should be increased,
as a linear search is used to find a matching chain block in bucket 0,
which is much slower than the O(1) lookup for a chain block no larger
than MAX_CHAIN_BUCKETS entries.

An internal nfsd test that ran for more than an hour while repeatedly
loading and unloading kernel modules could trigger the following
message:

  [ 4318.443670] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!

The patched kernel was able to complete the test with a lot of
chain_hlocks entries to spare:

  # cat /proc/lockdep_stats
     :
   dependency chains:                   26723 [max: 65536]
   dependency chain hlocks:            115052 [max: 327680]
     :
   zapped classes:                       1538
   zapped lock chains:                  56990
   large chain blocks:                      0

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep.c           | 242 +++++++++++++++++++++++++++--
 kernel/locking/lockdep_internals.h |   3 +-
 kernel/locking/lockdep_proc.c      |   9 +-
 3 files changed, 239 insertions(+), 15 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 97ec0786dbda..6d0f6a256d63 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1072,13 +1072,15 @@ static inline void check_data_structures(void) { }
 
 #endif /* CONFIG_DEBUG_LOCKDEP */
 
+static void init_chain_block_buckets(void);
+
 /*
  * Initialize the lock_classes[] array elements, the free_lock_classes list
  * and also the delayed_free structure.
  */
 static void init_data_structures_once(void)
 {
-	static bool ds_initialized, rcu_head_initialized;
+	static bool __read_mostly ds_initialized, rcu_head_initialized;
 	int i;
 
 	if (likely(rcu_head_initialized))
@@ -1102,6 +1104,7 @@ static void init_data_structures_once(void)
 		INIT_LIST_HEAD(&lock_classes[i].locks_after);
 		INIT_LIST_HEAD(&lock_classes[i].locks_before);
 	}
+	init_chain_block_buckets();
 }
 
 static inline struct hlist_head *keyhashentry(const struct lock_class_key *key)
@@ -2628,7 +2631,223 @@ struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
 static DECLARE_BITMAP(lock_chains_in_use, MAX_LOCKDEP_CHAINS);
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
 unsigned long nr_zapped_lock_chains;
-unsigned int nr_chain_hlocks;
+unsigned int nr_free_chain_hlocks;	/* Free chain_hlocks in buckets */
+unsigned int nr_large_chain_blocks;	/* size > MAX_CHAIN_BUCKETS */
+
+/*
+ * The first 2 chain_hlocks entries in the chain block in the bucket
+ * list contains the following meta data:
+ *
+ *   entry[0]:
+ *     Bit    15 - always set to 1 (it is not a class index)
+ *     Bits 0-14 - upper 15 bits of the next block index
+ *   entry[1]    - lower 16 bits of next block index
+ *
+ * A next block index of all 1 bits means it is the end of the list.
+ *
+ * In the unsized bucket (bucket-0), the 3rd and 4th entries contain
+ * the chain block size:
+ *
+ *   entry[2] - upper 16 bits of the chain block size
+ *   entry[3] - lower 16 bits of the chain block size
+ */
+#define MAX_CHAIN_BUCKETS	10
+#define CHAIN_BLK_FLAG		(1U << 15)
+#define CHAIN_BLK_LIST_END	0xFFFFU
+
+static int chain_block_buckets[MAX_CHAIN_BUCKETS];
+
+static inline int size_to_bucket(int size)
+{
+	if (size > MAX_CHAIN_BUCKETS)
+		return 0;
+
+	return size - 1;
+}
+
+/*
+ * Iterate all the chain blocks in a bucket.
+ * The loop body has to set the next parameter.
+ */
+#define for_each_chain_block(bucket, prev, curr, next)		\
+	for ((prev) = bucket - MAX_CHAIN_BUCKETS,		\
+	     (curr) = chain_block_buckets[bucket];		\
+	     (curr) >= 0;					\
+	     (prev) = (curr), (curr) = (next))
+
+/*
+ * next block or -1
+ */
+static inline int chain_block_next(int offset)
+{
+	int next = chain_hlocks[offset];
+
+	WARN_ON_ONCE(!(next & CHAIN_BLK_FLAG));
+
+	if (next == CHAIN_BLK_LIST_END)
+		return -1;
+
+	next &= ~CHAIN_BLK_FLAG;
+	next <<= 16;
+	next |= chain_hlocks[offset + 1];
+
+	return next;
+}
+
+/*
+ * bucket-0 only
+ */
+static inline int chain_block_size(int offset)
+{
+	return (chain_hlocks[offset + 2] << 16) | chain_hlocks[offset + 3];
+}
+
+static inline void init_chain_block(int offset, int next, int bucket, int size)
+{
+	chain_hlocks[offset] = (next >> 16) | CHAIN_BLK_FLAG;
+	chain_hlocks[offset + 1] = (u16)next;
+
+	if (bucket == 0) {
+		chain_hlocks[offset + 2] = size >> 16;
+		chain_hlocks[offset + 3] = (u16)size;
+	}
+}
+
+static inline void add_chain_block(int offset, int size)
+{
+	int bucket = size_to_bucket(size);
+	int next = chain_block_buckets[bucket];
+
+	if (unlikely(size < 2))
+		return; // XXX leak!
+
+	init_chain_block(offset, next, bucket, size);
+	chain_block_buckets[bucket] = offset;
+	nr_free_chain_hlocks += size;
+
+	if (bucket == 0)
+		nr_large_chain_blocks++;
+}
+
+/*
+ * When offset < 0, the bucket number is encoded in it.
+ */
+static inline void del_chain_block(int offset, int size, int next)
+{
+	int bucket = size_to_bucket(size);
+
+	nr_free_chain_hlocks -= size;
+	if (offset < 0)
+		chain_block_buckets[offset + MAX_CHAIN_BUCKETS] = next;
+	else
+		init_chain_block(offset, next, bucket, size);
+
+	if (bucket == 0)
+		nr_large_chain_blocks--;
+}
+
+static void init_chain_block_buckets(void)
+{
+	int i;
+
+	for (i = 0; i < MAX_CHAIN_BUCKETS; i++)
+		chain_block_buckets[i] = -1;
+
+	add_chain_block(0, ARRAY_SIZE(chain_hlocks));
+}
+
+/*
+ * Return offset of a chain block of the right size or -1 if not found.
+ *
+ * Fairly simple worst-fit allocator with the addition of a number of size
+ * specific free lists.
+ */
+static int alloc_chain_hlocks(int req)
+{
+	int max_prev, max_curr, max_next, max_size = INT_MIN;
+	int prev, curr, next, size;
+	int bucket;
+
+	/*
+	 * We rely on the MSB to act as an escape bit to denote freelist
+	 * pointers. Make sure this bit isn't set in 'normal' class_idx usage.
+	 */
+	BUILD_BUG_ON((MAX_LOCKDEP_KEYS-1) & CHAIN_BLK_FLAG);
+
+	init_data_structures_once();
+
+	if (!nr_free_chain_hlocks)
+		return -1;
+
+	/*
+	 * We require a minimum of 2 (u16) entries to encode a freelist
+	 * 'pointer'.
+	 */
+	req = max(req, 2);
+	bucket = size_to_bucket(req);
+
+	if (bucket != 0) {
+		curr = chain_block_buckets[bucket];
+		if (curr < 0)
+			goto search;
+
+		del_chain_block(bucket - MAX_CHAIN_BUCKETS, req,
+				chain_block_next(curr));
+		return curr;
+	}
+
+search:
+	/*
+	 * linear search in the 'dump' bucket; look for an exact match or the
+	 * largest block.
+	 */
+	for_each_chain_block(0, prev, curr, next) {
+		size = chain_block_size(curr);
+		next = chain_block_next(curr);
+
+		if (size == req) {
+			del_chain_block(prev, req, next);
+			return curr;
+		}
+
+		if (size > max_size) {
+			max_prev = prev;
+			max_curr = curr;
+			max_next = next;
+			max_size = size;
+		}
+	}
+
+	if (likely(max_size > req)) {
+		del_chain_block(max_prev, max_size, max_next);
+		add_chain_block(max_curr + req, max_size - req);
+		return max_curr;
+	}
+
+	/*
+	 * Last resort, split a block in a larger sized bucket.
+	 */
+	for (size = MAX_CHAIN_BUCKETS; size > req; size--) {
+		bucket = size_to_bucket(size);
+		curr = chain_block_buckets[bucket];
+		if (curr < 0)
+			continue;
+
+		del_chain_block(bucket - MAX_CHAIN_BUCKETS, size,
+				chain_block_next(curr));
+		add_chain_block(curr + req, size - req);
+		return curr;
+	}
+
+	return -1;
+}
+
+#ifdef CONFIG_PROVE_LOCKING
+static inline void free_chain_hlocks(int base, int size)
+{
+	add_chain_block(base, max(size, 2));
+}
+#endif
 
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
 {
@@ -2829,15 +3048,8 @@ static inline int add_chain_cache(struct task_struct *curr,
 	BUILD_BUG_ON((1UL << 6)  <= ARRAY_SIZE(curr->held_locks));
 	BUILD_BUG_ON((1UL << 8*sizeof(chain_hlocks[0])) <= ARRAY_SIZE(lock_classes));
 
-	if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
-		chain->base = nr_chain_hlocks;
-		for (j = 0; j < chain->depth - 1; j++, i++) {
-			int lock_id = curr->held_locks[i].class_idx;
-			chain_hlocks[chain->base + j] = lock_id;
-		}
-		chain_hlocks[chain->base + j] = class - lock_classes;
-		nr_chain_hlocks += chain->depth;
-	} else {
+	j = alloc_chain_hlocks(chain->depth);
+	if (j < 0) {
 		if (!debug_locks_off_graph_unlock())
 			return 0;
 
@@ -2846,6 +3058,13 @@ static inline int add_chain_cache(struct task_struct *curr,
 		return 0;
 	}
 
+	chain->base = j;
+	for (j = 0; j < chain->depth - 1; j++, i++) {
+		int lock_id = curr->held_locks[i].class_idx;
+
+		chain_hlocks[chain->base + j] = lock_id;
+	}
+	chain_hlocks[chain->base + j] = class - lock_classes;
 	hlist_add_head_rcu(&chain->entry, hash_head);
 	debug_atomic_inc(chain_lookup_misses);
 	inc_chains(chain->irq_context);
@@ -4789,6 +5008,7 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 	return;
 
 free_lock_chain:
+	free_chain_hlocks(chain->base, chain->depth);
 	/* Overwrite the chain key for concurrent RCU readers. */
 	WRITE_ONCE(chain->chain_key, INITIAL_CHAIN_KEY);
 	dec_chains(chain->irq_context);
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index af722ceeda33..5040e6729c05 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -140,7 +140,8 @@ extern unsigned long nr_stack_trace_entries;
 extern unsigned int nr_hardirq_chains;
 extern unsigned int nr_softirq_chains;
 extern unsigned int nr_process_chains;
-extern unsigned int nr_chain_hlocks;
+extern unsigned int nr_free_chain_hlocks;
+extern unsigned int nr_large_chain_blocks;
 
 extern unsigned int max_lockdep_depth;
 extern unsigned int max_bfs_queue_depth;
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index df1b97912871..b1be77371790 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -137,7 +137,7 @@ static int lc_show(struct seq_file *m, void *v)
 	};
 
 	if (v == SEQ_START_TOKEN) {
-		if (nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)
+		if (!nr_free_chain_hlocks)
 			seq_printf(m, "(buggered) ");
 		seq_printf(m, "all lock chains:\n");
 		return 0;
@@ -278,8 +278,9 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 #ifdef CONFIG_PROVE_LOCKING
 	seq_printf(m, " dependency chains:             %11lu [max: %lu]\n",
 			lock_chain_count(), MAX_LOCKDEP_CHAINS);
-	seq_printf(m, " dependency chain hlocks:       %11u [max: %lu]\n",
-			nr_chain_hlocks, MAX_LOCKDEP_CHAIN_HLOCKS);
+	seq_printf(m, " dependency chain hlocks:       %11lu [max: %lu]\n",
+			MAX_LOCKDEP_CHAIN_HLOCKS - nr_free_chain_hlocks,
+			MAX_LOCKDEP_CHAIN_HLOCKS);
 #endif
 
 #ifdef CONFIG_TRACE_IRQFLAGS
@@ -351,6 +352,8 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 			nr_zapped_classes);
 	seq_printf(m, " zapped lock chains:            %11lu\n",
 			nr_zapped_lock_chains);
+	seq_printf(m, " large chain blocks:            %11u\n",
+			nr_large_chain_blocks);
 	return 0;
 }
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v4 7/7] locking/lockdep: Add a fast path for chain_hlocks allocation
  2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
                   ` (5 preceding siblings ...)
  2020-01-21 15:40 ` [PATCH v4 6/7] locking/lockdep: Reuse freed chain_hlocks entries Waiman Long
@ 2020-01-21 15:40 ` Waiman Long
  6 siblings, 0 replies; 11+ messages in thread
From: Waiman Long @ 2020-01-21 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: linux-kernel, Bart Van Assche, Waiman Long

When alloc_chain_hlocks() is called, the most likely scenario is an
allocation from the primordial chain block, which initially holds the
whole chain_hlocks[] array. A block is assumed to be the primordial
one if its size is bigger than MAX_LOCK_DEPTH. As long as the number
of entries left after splitting is still bigger than MAX_CHAIN_BUCKETS,
the block will remain in bucket 0. By splitting out a sub-block at the
end, we only need to adjust the size without changing any of the
existing linkage information. This optimized fast path reduces the
latency of allocation requests.
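
As a sketch of the arithmetic, with hypothetical offsets and sizes:
splitting from the tail only shrinks the recorded size; the block's
offset and linkage words are untouched:

  #include <stdio.h>

  int main(void)
  {
      int base = 0, size = 327680;    /* primordial chain block */
      int req = 5;                    /* allocation request */

      size -= req;                    /* shrink in place; no relinking */
      printf("block now [%d..%d), allocated [%d..%d)\n",
             base, base + size, base + size, base + size + req);
      return 0;
  }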

This patch does change the order in which chain_hlocks entries are
allocated. The original code allocates entries from the beginning of
the array; now they are allocated from the end of the array backward.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 6d0f6a256d63..12148bb6d2c1 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2702,15 +2702,19 @@ static inline int chain_block_size(int offset)
 	return (chain_hlocks[offset + 2] << 16) | chain_hlocks[offset + 3];
 }
 
+static inline void init_chain_block_size(int offset, int size)
+{
+	chain_hlocks[offset + 2] = size >> 16;
+	chain_hlocks[offset + 3] = (u16)size;
+}
+
 static inline void init_chain_block(int offset, int next, int bucket, int size)
 {
 	chain_hlocks[offset] = (next >> 16) | CHAIN_BLK_FLAG;
 	chain_hlocks[offset + 1] = (u16)next;
 
-	if (bucket == 0) {
-		chain_hlocks[offset + 2] = size >> 16;
-		chain_hlocks[offset + 3] = (u16)size;
-	}
+	if (bucket == 0)
+		init_chain_block_size(offset, size);
 }
 
 static inline void add_chain_block(int offset, int size)
@@ -2810,6 +2814,18 @@ static int alloc_chain_hlocks(int req)
 			return curr;
 		}
 
+		/*
+		 * Fast path: splitting out a sub-block at the end of the
+		 * primordial chain block.
+		 */
+		if (likely((size > MAX_LOCK_DEPTH) &&
+			   (size - req > MAX_CHAIN_BUCKETS))) {
+			size -= req;
+			nr_free_chain_hlocks -= req;
+			init_chain_block_size(curr, size);
+			return curr + size;
+		}
+
 		if (size > max_size) {
 			max_prev = prev;
 			max_curr = curr;
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 6/7] locking/lockdep: Reuse freed chain_hlocks entries
  2020-01-21 15:40 ` [PATCH v4 6/7] locking/lockdep: Reuse freed chain_hlocks entries Waiman Long
@ 2020-01-24 11:57   ` kbuild test robot
  0 siblings, 0 replies; 11+ messages in thread
From: kbuild test robot @ 2020-01-24 11:57 UTC (permalink / raw)
  To: Waiman Long
  Cc: kbuild-all, Peter Zijlstra, Ingo Molnar, Will Deacon,
	linux-kernel, Bart Van Assche, Waiman Long

[-- Attachment #1: Type: text/plain, Size: 2583 bytes --]

Hi Waiman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tip/locking/core]
[also build test ERROR on tip/auto-latest arm-perf/for-next/perf v5.5-rc7 next-20200121]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Waiman-Long/locking-lockdep-Reuse-zapped-chain_hlocks-entries/20200124-085419
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git f5bfdc8e3947a7ae489cf8ae9cfd6b3fb357b952
config: xtensa-randconfig-a001-20200124 (attached as .config)
compiler: xtensa-linux-gcc (GCC) 7.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.5.0 make.cross ARCH=xtensa 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   kernel/locking/lockdep.o: In function `arch_local_irq_restore':
>> arch/xtensa/include/asm/irqflags.h:66: undefined reference to `init_chain_block_buckets'
   kernel/locking/lockdep.o: In function `__lock_pin_lock':
   kernel/locking/lockdep.c:4565: undefined reference to `init_chain_block_buckets'
   kernel/locking/lockdep_proc.o: In function `l_stop':
   kernel/locking/lockdep_proc.c:39: undefined reference to `nr_zapped_lock_chains'
>> kernel/locking/lockdep_proc.c:39: undefined reference to `nr_large_chain_blocks'

vim +66 arch/xtensa/include/asm/irqflags.h

df9ee29270c11d David Howells 2010-10-07  63  
df9ee29270c11d David Howells 2010-10-07  64  static inline void arch_local_irq_restore(unsigned long flags)
df9ee29270c11d David Howells 2010-10-07  65  {
bc5378fcba9743 Max Filippov  2012-10-15 @66  	asm volatile("wsr %0, ps; rsync"
df9ee29270c11d David Howells 2010-10-07  67  		     :: "a" (flags) : "memory");
df9ee29270c11d David Howells 2010-10-07  68  }
df9ee29270c11d David Howells 2010-10-07  69  

:::::: The code at line 66 was first introduced by commit
:::::: bc5378fcba974317f9657c4fdc78af227e1e1068 xtensa: reorganize SR referencing

:::::: TO: Max Filippov <jcmvbkbc@gmail.com>
:::::: CC: Chris Zankel <chris@zankel.net>

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 30129 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains
  2020-01-21 15:40 ` [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains Waiman Long
@ 2020-01-24 13:18   ` kbuild test robot
  2020-01-24 20:15   ` kbuild test robot
  1 sibling, 0 replies; 11+ messages in thread
From: kbuild test robot @ 2020-01-24 13:18 UTC (permalink / raw)
  To: Waiman Long
  Cc: kbuild-all, Peter Zijlstra, Ingo Molnar, Will Deacon,
	linux-kernel, Bart Van Assche, Waiman Long

[-- Attachment #1: Type: text/plain, Size: 1316 bytes --]

Hi Waiman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tip/locking/core]
[also build test ERROR on tip/auto-latest arm-perf/for-next/perf v5.5-rc7 next-20200123]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Waiman-Long/locking-lockdep-Reuse-zapped-chain_hlocks-entries/20200124-085419
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git f5bfdc8e3947a7ae489cf8ae9cfd6b3fb357b952
config: x86_64-randconfig-d001-20200124 (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   ld: kernel/locking/lockdep_proc.o: in function `lockdep_stats_show':
>> lockdep_proc.c:(.text+0x3fd): undefined reference to `nr_zapped_lock_chains'

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 35554 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains
  2020-01-21 15:40 ` [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains Waiman Long
  2020-01-24 13:18   ` kbuild test robot
@ 2020-01-24 20:15   ` kbuild test robot
  1 sibling, 0 replies; 11+ messages in thread
From: kbuild test robot @ 2020-01-24 20:15 UTC (permalink / raw)
  To: Waiman Long
  Cc: kbuild-all, Peter Zijlstra, Ingo Molnar, Will Deacon,
	linux-kernel, Bart Van Assche, Waiman Long

[-- Attachment #1: Type: text/plain, Size: 4383 bytes --]

Hi Waiman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tip/locking/core]
[also build test ERROR on tip/auto-latest arm-perf/for-next/perf v5.5-rc7 next-20200124]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Waiman-Long/locking-lockdep-Reuse-zapped-chain_hlocks-entries/20200124-085419
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git f5bfdc8e3947a7ae489cf8ae9cfd6b3fb357b952
config: x86_64-randconfig-f003-20200124 (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   ld: kernel/locking/lockdep_proc.o: in function `lockdep_stats_show':
>> kernel/locking/lockdep_proc.c:352: undefined reference to `nr_zapped_lock_chains'

vim +352 kernel/locking/lockdep_proc.c

   284	
   285	#ifdef CONFIG_TRACE_IRQFLAGS
   286		seq_printf(m, " in-hardirq chains:             %11u\n",
   287				nr_hardirq_chains);
   288		seq_printf(m, " in-softirq chains:             %11u\n",
   289				nr_softirq_chains);
   290	#endif
   291		seq_printf(m, " in-process chains:             %11u\n",
   292				nr_process_chains);
   293		seq_printf(m, " stack-trace entries:           %11lu [max: %lu]\n",
   294				nr_stack_trace_entries, MAX_STACK_TRACE_ENTRIES);
   295	#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
   296		seq_printf(m, " number of stack traces:        %11llu\n",
   297			   lockdep_stack_trace_count());
   298		seq_printf(m, " number of stack hash chains:   %11llu\n",
   299			   lockdep_stack_hash_count());
   300	#endif
   301		seq_printf(m, " combined max dependencies:     %11u\n",
   302				(nr_hardirq_chains + 1) *
   303				(nr_softirq_chains + 1) *
   304				(nr_process_chains + 1)
   305		);
   306		seq_printf(m, " hardirq-safe locks:            %11lu\n",
   307				nr_hardirq_safe);
   308		seq_printf(m, " hardirq-unsafe locks:          %11lu\n",
   309				nr_hardirq_unsafe);
   310		seq_printf(m, " softirq-safe locks:            %11lu\n",
   311				nr_softirq_safe);
   312		seq_printf(m, " softirq-unsafe locks:          %11lu\n",
   313				nr_softirq_unsafe);
   314		seq_printf(m, " irq-safe locks:                %11lu\n",
   315				nr_irq_safe);
   316		seq_printf(m, " irq-unsafe locks:              %11lu\n",
   317				nr_irq_unsafe);
   318	
   319		seq_printf(m, " hardirq-read-safe locks:       %11lu\n",
   320				nr_hardirq_read_safe);
   321		seq_printf(m, " hardirq-read-unsafe locks:     %11lu\n",
   322				nr_hardirq_read_unsafe);
   323		seq_printf(m, " softirq-read-safe locks:       %11lu\n",
   324				nr_softirq_read_safe);
   325		seq_printf(m, " softirq-read-unsafe locks:     %11lu\n",
   326				nr_softirq_read_unsafe);
   327		seq_printf(m, " irq-read-safe locks:           %11lu\n",
   328				nr_irq_read_safe);
   329		seq_printf(m, " irq-read-unsafe locks:         %11lu\n",
   330				nr_irq_read_unsafe);
   331	
   332		seq_printf(m, " uncategorized locks:           %11lu\n",
   333				nr_uncategorized);
   334		seq_printf(m, " unused locks:                  %11lu\n",
   335				nr_unused);
   336		seq_printf(m, " max locking depth:             %11u\n",
   337				max_lockdep_depth);
   338	#ifdef CONFIG_PROVE_LOCKING
   339		seq_printf(m, " max bfs queue depth:           %11u\n",
   340				max_bfs_queue_depth);
   341	#endif
   342		lockdep_stats_debug_show(m);
   343		seq_printf(m, " debug_locks:                   %11u\n",
   344				debug_locks);
   345	
   346		/*
   347		 * Zapped classes and lockdep data buffers reuse statistics.
   348		 */
   349		seq_puts(m, "\n");
   350		seq_printf(m, " zapped classes:                %11lu\n",
   351				nr_zapped_classes);
 > 352		seq_printf(m, " zapped lock chains:            %11lu\n",
   353				nr_zapped_lock_chains);
   354		return 0;
   355	}
   356	

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 35756 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2020-01-24 20:16 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-21 15:40 [PATCH v4 0/7] locking/lockdep: Reuse zapped chain_hlocks entries Waiman Long
2020-01-21 15:40 ` [PATCH v4 1/7] locking/lockdep: Decrement irq context counters when removing lock chain Waiman Long
2020-01-21 15:40 ` [PATCH v4 2/7] locking/lockdep: Display irq_context names in /proc/lockdep_chains Waiman Long
2020-01-21 15:40 ` [PATCH v4 3/7] locking/lockdep: Track number of zapped classes Waiman Long
2020-01-21 15:40 ` [PATCH v4 4/7] locking/lockdep: Throw away all lock chains with zapped class Waiman Long
2020-01-21 15:40 ` [PATCH v4 5/7] locking/lockdep: Track number of zapped lock chains Waiman Long
2020-01-24 13:18   ` kbuild test robot
2020-01-24 20:15   ` kbuild test robot
2020-01-21 15:40 ` [PATCH v4 6/7] locking/lockdep: Reuse freed chain_hlocks entries Waiman Long
2020-01-24 11:57   ` kbuild test robot
2020-01-21 15:40 ` [PATCH v4 7/7] locking/lockdep: Add a fast path for chain_hlocks allocation Waiman Long
