linux-kernel.vger.kernel.org archive mirror
From: Frederic Weisbecker <frederic@kernel.org>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Peter Zijlstra <peterz@infradead.org>,
	"David S . Miller" <davem@davemloft.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	"Paul E . McKenney" <paulmck@linux.vnet.ibm.com>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Pavan Kondeti <pkondeti@codeaurora.org>,
	Ingo Molnar <mingo@kernel.org>,
	Joel Fernandes <joel@joelfernandes.org>
Subject: [PATCH 06/37] locking/lockdep: Test all incompatible scenarios at once in check_irq_usage()
Date: Thu, 28 Feb 2019 18:12:11 +0100	[thread overview]
Message-ID: <20190228171242.32144-7-frederic@kernel.org> (raw)
In-Reply-To: <20190228171242.32144-1-frederic@kernel.org>

check_prev_add_irq() tests all incompatible scenarios one after the
other while adding a new dependency from a lock (@prev) to another
lock (@next):

	LOCK_USED_IN_HARDIRQ          vs         LOCK_ENABLED_HARDIRQ
	LOCK_USED_IN_HARDIRQ_READ     vs         LOCK_ENABLED_HARDIRQ
	LOCK_USED_IN_SOFTIRQ          vs         LOCK_ENABLED_SOFTIRQ
	LOCK_USED_IN_SOFTIRQ_READ     vs         LOCK_ENABLED_SOFTIRQ

For each of these four scenarios, we must iterate the @prev backward
dependencies at least once. Then, if a match is found on the relevant
LOCK_USED_* bit, we must also iterate the @next forward dependencies.

Therefore in the best case we iterate 4 times, in the worst case 8 times.

Now things are going to become much worse, as we plan to add as many
LOCK_USED_IN_{$VEC}_SOFTIRQ[_READ] bits as we have softirq vectors,
currently 10. The expanded test would then take at least 22 iterations
and at most 44. We can't really afford that.
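
Spelled out, assuming each state keeps both a plain and a _READ variant:

    (1 hardirq state + 10 per-vector softirq states) * 2 variants = 22 backward searches
    22 backward searches + up to 22 matching forward searches     = 44 in the worst case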

So we take a different approach to fix this:

1) Iterate through @prev backward dependencies and accumulate all the IRQ
   uses in a single mask.

2) Iterate through @next forward dependencies and try to find a lock
   whose usage is exclusive to the accumulated usages gathered in the
   previous step. If we find one (call it @lockA), we have found an
   incompatible use.

3) Iterate again through the @prev backward dependencies and find the
   lock whose usage matches @lockA in terms of incompatibility. Call
   that lock @lockB.

4) Report the incompatible usages of @lockA and @lockB.

If no incompatible use is found, the verification never goes beyond
step 2, which means at most two graph walks (one backward, one
forward).
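
To illustrate, here is a minimal standalone userspace sketch of that
accumulate-then-match scheme. The bit layout, helper names and example
masks are simplified assumptions chosen for the example; they are not
the kernel implementation (see the actual patch below):

  #include <stdint.h>
  #include <stdio.h>

  #define USAGE_READ_MASK   1     /* _READ variant of a usage bit */
  #define USAGE_DIR_MASK    2     /* USED_IN (0) vs ENABLED (2) */
  #define USAGE_STATE_MASK  (~3)  /* which irq state the bit encodes */

  /* Bits 0-3: HARDIRQ usages, bits 4-7: SOFTIRQ usages. */
  enum {
      USED_IN_HARDIRQ = 0, USED_IN_HARDIRQ_READ = 1,
      ENABLED_HARDIRQ = 2, ENABLED_HARDIRQ_READ = 3,
      USED_IN_SOFTIRQ = 4, USED_IN_SOFTIRQ_READ = 5,
      ENABLED_SOFTIRQ = 6, ENABLED_SOFTIRQ_READ = 7,
  };

  /* Map USED_IN_* <-> ENABLED_* of the same state, dropping the READ bit. */
  static int exclusive_bit(int bit)
  {
      return (bit & USAGE_STATE_MASK) |
             ((bit & USAGE_DIR_MASK) ^ USAGE_DIR_MASK);
  }

  /* Turn a usage mask into the mask of its exclusive counterparts. */
  static uint64_t invert_mask(uint64_t mask, int read)
  {
      uint64_t excl = 0;

      for (int bit = 0; bit < 64; bit++) {
          if (!(mask & (1ULL << bit)))
              continue;
          excl |= 1ULL << exclusive_bit(bit);
          if (read)
              excl |= 1ULL << (exclusive_bit(bit) | USAGE_READ_MASK);
      }
      return excl;
  }

  /* Step 4 analogue: pick one (bit, exclusive bit) pair present in both masks. */
  static int find_exclusive_match(uint64_t mask, uint64_t excl_mask,
                                  int *bit, int *excl_bit)
  {
      for (int nr = 0; nr < 64; nr++) {
          if (!(mask & (1ULL << nr)))
              continue;
          if (!(excl_mask & (1ULL << exclusive_bit(nr))))
              continue;
          *bit = nr;
          *excl_bit = exclusive_bit(nr);
          return 0;
      }
      return -1;
  }

  int main(void)
  {
      /* Step 1: usage bits accumulated on the backward walk from @prev. */
      uint64_t usage_mask = (1ULL << USED_IN_SOFTIRQ_READ) |
                            (1ULL << ENABLED_HARDIRQ);
      /* Usage mask of one lock class met on the forward walk from @next. */
      uint64_t forward_class = 1ULL << ENABLED_SOFTIRQ;
      int bit, excl_bit;

      /* Step 2: does the forward class hit any exclusive counterpart? */
      if (forward_class & invert_mask(usage_mask, 0)) {
          /*
           * Step 3 would re-walk the backward graph with
           * invert_mask(forward_class, 1); step 4 then narrows the
           * conflict down to a single pair of incompatible bits:
           */
          if (!find_exclusive_match(usage_mask, forward_class,
                                    &bit, &excl_bit))
              printf("incompatible usage: bit %d vs bit %d\n",
                     bit, excl_bit);
      }
      return 0;
  }

Compiled and run, this reports bit 5 (USED_IN_SOFTIRQ_READ, backward)
against bit 6 (ENABLED_SOFTIRQ, forward), i.e. one of the four
scenarios listed above.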

Reviewed-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Pavan Kondeti <pkondeti@codeaurora.org>
Cc: Paul E . McKenney <paulmck@linux.vnet.ibm.com>
Cc: David S . Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/locking/lockdep.c | 193 +++++++++++++++++++++++++--------------
 1 file changed, 126 insertions(+), 67 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1a335176cb61..ac1efd16f3e7 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1351,6 +1351,14 @@ check_redundant(struct lock_list *root, struct lock_class *target,
 }
 
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
+
+static inline int usage_accumulate(struct lock_list *entry, void *mask)
+{
+	*(u64 *)mask |= entry->class->usage_mask;
+
+	return 0;
+}
+
 /*
  * Forwards and backwards subgraph searching, for the purposes of
  * proving that two subgraphs can be connected by a new dependency
@@ -1362,8 +1370,6 @@ static inline int usage_match(struct lock_list *entry, void *mask)
 	return entry->class->usage_mask & *(u64 *)mask;
 }
 
-
-
 /*
  * Find a node in the forwards-direction dependency sub-graph starting
  * at @root->class that matches @bit.
@@ -1597,39 +1603,6 @@ print_bad_irq_dependency(struct task_struct *curr,
 	return 0;
 }
 
-static int
-check_usage(struct task_struct *curr, struct held_lock *prev,
-	    struct held_lock *next, enum lock_usage_bit bit_backwards,
-	    enum lock_usage_bit bit_forwards, const char *irqclass)
-{
-	int ret;
-	struct lock_list this, that;
-	struct lock_list *uninitialized_var(target_entry);
-	struct lock_list *uninitialized_var(target_entry1);
-
-	this.parent = NULL;
-
-	this.class = hlock_class(prev);
-	ret = find_usage_backwards(&this, BIT(bit_backwards), &target_entry);
-	if (ret < 0)
-		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
-
-	that.parent = NULL;
-	that.class = hlock_class(next);
-	ret = find_usage_forwards(&that, BIT(bit_forwards), &target_entry1);
-	if (ret < 0)
-		return print_bfs_bug(ret);
-	if (ret == 1)
-		return ret;
-
-	return print_bad_irq_dependency(curr, &this, &that,
-			target_entry, target_entry1,
-			prev, next,
-			bit_backwards, bit_forwards, irqclass);
-}
-
 static const char *state_names[] = {
 #define LOCKDEP_STATE(__STATE) \
 	__stringify(__STATE),
@@ -1660,45 +1633,132 @@ static int exclusive_bit(int new_bit)
 	return state | (dir ^ LOCK_USAGE_DIR_MASK);
 }
 
+static u64 __invert_mask(u64 mask, bool read)
+{
+	u64 excl_mask = 0;
+	int bit;
+
+	for_each_bit_nr(mask, bit) {
+		int excl = exclusive_bit(bit);
+		excl_mask |= lock_flag(excl);
+		if (read)
+			excl_mask |= lock_flag(excl | LOCK_USAGE_READ_MASK);
+	}
+
+	return excl_mask;
+}
+
+static u64 exclusive_mask(u64 mask)
+{
+	return __invert_mask(mask, false);
+}
+
+/*
+ * Retrieve the _possible_ original mask to which @mask is
+ * exclusive. Ie: this is the opposite of exclusive_mask().
+ * Note that 2 possible original bits can match an exclusive
+ * bit: one has LOCK_USAGE_READ_MASK set, the other has it
+ * cleared. So both are returned for each exclusive bit.
+ */
+static u64 original_mask(u64 mask)
+{
+	return __invert_mask(mask, true);
+}
+
+/*
+ * Find the first pair of bit match between an original
+ * usage mask and an exclusive usage mask.
+ */
+static int find_exclusive_match(u64 mask, u64 excl_mask,
+				enum lock_usage_bit *bit,
+				enum lock_usage_bit *excl_bit)
+{
+	int nr;
+
+	for_each_bit_nr(mask, nr) {
+		int excl = exclusive_bit(nr);
+		if (excl_mask & lock_flag(excl)) {
+			*bit = nr;
+			*excl_bit = excl;
+			return 0;
+		}
+	}
+	return -1;
+}
+
+/*
+ * Prove that the new dependency does not connect a hardirq-safe(-read)
+ * lock with a hardirq-unsafe lock - to achieve this we search
+ * the backwards-subgraph starting at <prev>, and the
+ * forwards-subgraph starting at <next>:
+ */
 static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
-			   struct held_lock *next, enum lock_usage_bit bit)
+			   struct held_lock *next)
 {
+	enum lock_usage_bit forward_bit = 0, backward_bit = 0;
+	struct lock_list *uninitialized_var(target_entry1);
+	struct lock_list *uninitialized_var(target_entry);
+	u64 usage_mask = 0, forward_mask, backward_mask;
+	struct lock_list this, that;
+	int ret;
+
 	/*
-	 * Prove that the new dependency does not connect a hardirq-safe
-	 * lock with a hardirq-unsafe lock - to achieve this we search
-	 * the backwards-subgraph starting at <prev>, and the
-	 * forwards-subgraph starting at <next>:
+	 * Step 1: gather all hard/soft IRQs usages backward in an
+	 * accumulated usage mask.
 	 */
-	if (!check_usage(curr, prev, next, bit,
-			   exclusive_bit(bit), state_name(bit)))
-		return 0;
+	this.parent = NULL;
+	this.class = hlock_class(prev);
+
+	ret = __bfs_backwards(&this, &usage_mask, usage_accumulate, NULL);
+	if (ret < 0)
+		return print_bfs_bug(ret);
 
-	bit++; /* _READ */
+	usage_mask &= LOCKF_USED_IN_IRQ | LOCKF_USED_IN_IRQ_READ;
+	if (!usage_mask)
+		return 1;
 
 	/*
-	 * Prove that the new dependency does not connect a hardirq-safe-read
-	 * lock with a hardirq-unsafe lock - to achieve this we search
-	 * the backwards-subgraph starting at <prev>, and the
-	 * forwards-subgraph starting at <next>:
+	 * Step 2: find exclusive uses forward that match the previous
+	 * backward accumulated mask.
 	 */
-	if (!check_usage(curr, prev, next, bit,
-			   exclusive_bit(bit), state_name(bit)))
-		return 0;
+	forward_mask = exclusive_mask(usage_mask);
 
-	return 1;
-}
+	that.parent = NULL;
+	that.class = hlock_class(next);
 
-static int
-check_prev_add_irq(struct task_struct *curr, struct held_lock *prev,
-		struct held_lock *next)
-{
-#define LOCKDEP_STATE(__STATE)						\
-	if (!check_irq_usage(curr, prev, next, LOCK_USED_IN_##__STATE))	\
-		return 0;
-#include "lockdep_states.h"
-#undef LOCKDEP_STATE
+	ret = find_usage_forwards(&that, forward_mask, &target_entry1);
+	if (ret < 0)
+		return print_bfs_bug(ret);
+	if (ret == 1)
+		return ret;
+
+	/*
+	 * Step 3: we found a bad match! Now retrieve a lock from the backward
+	 * list whose usage mask matches the exclusive usage mask from the
+	 * lock found on the forward list.
+	 */
+	backward_mask = original_mask(target_entry1->class->usage_mask);
+
+	ret = find_usage_backwards(&this, backward_mask, &target_entry);
+	if (ret < 0)
+		return print_bfs_bug(ret);
+	if (ret == 1)
+		return print_bfs_bug(ret);
+
+	/*
+	 * Step 4: narrow down to a pair of incompatible usage bits
+	 * and report it.
+	 */
+	ret = find_exclusive_match(target_entry->class->usage_mask,
+				   target_entry1->class->usage_mask,
+				   &backward_bit, &forward_bit);
+	WARN_ON_ONCE(ret == -1);
 
-	return 1;
+	return print_bad_irq_dependency(curr, &this, &that,
+			target_entry, target_entry1,
+			prev, next,
+			backward_bit, forward_bit,
+			state_name(backward_bit));
 }
 
 static void inc_chains(void)
@@ -1715,9 +1775,8 @@ static void inc_chains(void)
 
 #else
 
-static inline int
-check_prev_add_irq(struct task_struct *curr, struct held_lock *prev,
-		struct held_lock *next)
+static inline int check_irq_usage(struct task_struct *curr,
+				  struct held_lock *prev, struct held_lock *next)
 {
 	return 1;
 }
@@ -1879,7 +1938,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);
 
-	if (!check_prev_add_irq(curr, prev, next))
+	if (!check_irq_usage(curr, prev, next))
 		return 0;
 
 	/*
-- 
2.21.0



Thread overview: 44+ messages
2019-02-28 17:12 [PATCH 00/37] softirq: Per vector masking v3 Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 01/37] locking/lockdep: Move valid_state() inside CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 02/37] locking/lockdep: Use expanded masks on find_usage_*() functions Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 03/37] locking/lockdep: Introduce struct lock_usage Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 04/37] locking/lockdep: Convert usage_mask to u64 Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 05/37] locking/lockdep: Introduce lock usage mask iterator Frederic Weisbecker
2019-02-28 17:12 ` Frederic Weisbecker [this message]
2019-02-28 17:12 ` [PATCH 07/37] locking/lockdep: Prepare valid_state() to handle plain masks Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 08/37] locking/lockdep: Prepare check_usage_*() " Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 09/37] locking/lockdep: Prepare state_verbose() to handle all softirqs Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 10/37] locking/lockdep: Make mark_lock() fastpath to work with multiple usage at once Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 11/37] locking/lockdep: Save stack trace for each softirq vector involved Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 12/37] locking/lockdep: Report all usages on mark_lock() verbosity mode Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 13/37] softirq: Macrofy softirq vectors Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 14/37] locking/lockdep: Define per vector softirq lock usage states Frederic Weisbecker
2019-04-09 12:03   ` Peter Zijlstra
2019-02-28 17:12 ` [PATCH 15/37] softirq: Pass softirq vector number to lockdep on vector execution Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 16/37] x86: Revert "x86/irq: Demote irq_cpustat_t::__softirq_pending to u16" Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 17/37] arch/softirq: Rename softirq_pending fields to softirq_data Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 18/37] softirq: Normalize softirq_pending naming scheme Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 19/37] softirq: Convert softirq_pending_*() to set/clear mask scheme Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 20/37] softirq: Introduce disabled softirq vectors bits Frederic Weisbecker
2019-03-01 11:29   ` Sebastian Andrzej Siewior
2019-02-28 17:12 ` [PATCH 21/37] softirq: Rename _local_bh_enable() to local_bh_enable_no_softirq() Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 22/37] softirq: Move vectors bits to bottom_half.h Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 23/37] x86: Init softirq enabled field Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 24/37] parisc: " Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 25/37] powerpc: " Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 26/37] softirq: Init softirq enabled field for default irq_stat definition Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 27/37] softirq: Check enabled vectors before processing Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 28/37] softirq: Remove stale comment Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 29/37] softirq: Uninline !CONFIG_TRACE_IRQFLAGS __local_bh_disable_ip() Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 30/37] softirq: Prepare for mixing all/per-vector masking Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 31/37] softirq: Support per vector masking Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 32/37] locking/lockdep: Remove redundant softirqs on check Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 33/37] locking/lockdep: Update check_flags() according to new layout Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 34/37] locking/lockdep: Branch the new vec-finegrained softirq masking to lockdep Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 35/37] softirq: Allow to soft interrupt vector-specific masked contexts Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 36/37] locking: Introduce spin_[un]lock_bh_mask() Frederic Weisbecker
2019-02-28 17:12 ` [PATCH 37/37] net: Make softirq vector masking finegrained on release_sock() Frederic Weisbecker
2019-02-28 17:33 ` [PATCH 00/37] softirq: Per vector masking v3 Linus Torvalds
2019-03-01  3:45   ` Frederic Weisbecker
2019-03-01 16:51     ` Linus Torvalds
2019-03-08 15:30       ` Frederic Weisbecker
