From: Joel Fernandes <joel@joelfernandes.org>
To: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Peter Zijlstra <peterz@infradead.org>,
Steven Rostedt <rostedt@goodmis.org>, rcu <rcu@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>,
Josh Triplett <josh@joshtriplett.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Lai Jiangshan <jiangshanlai@gmail.com>
Subject: Re: [RFC] Deadlock via recursive wakeup via RCU with threadirqs
Date: Fri, 28 Jun 2019 17:40:18 -0400 [thread overview]
Message-ID: <20190628214018.GB249127@google.com> (raw)
In-Reply-To: <20190628200423.GB26519@linux.ibm.com>
Hi Paul,
On Fri, Jun 28, 2019 at 01:04:23PM -0700, Paul E. McKenney wrote:
[snip]
> > > > Commit
> > > > - 23634ebc1d946 ("rcu: Check for wakeup-safe conditions in
> > > > rcu_read_unlock_special()") does not trigger the bug within 94
> > > > attempts.
> > > >
> > > > - 48d07c04b4cc1 ("rcu: Enable elimination of Tree-RCU softirq
> > > > processing") needed 12 attempts to trigger the bug.
> > >
> > > That matches my belief that 23634ebc1d946 ("rcu: Check for wakeup-safe
> > > conditions in rcu_read_unlock_special()") will at least greatly decrease
> > > the probability of this bug occurring.
> >
> > I was just typing a reply that I can't reproduce it with:
> > rcu: Check for wakeup-safe conditions in rcu_read_unlock_special()
> >
> > I am trying to revert enough of this patch to see what would break things,
> > however I think a better exercise might be to understand more what the patch
> > does why it fixes things in the first place ;-) It is probably the
> > deferred_qs thing.
>
> The deferred_qs flag is part of it! Looking forward to hearing what
> you come up with as being the critical piece of this commit.
The new deferred_qs flag indeed saves the machine from the deadlock.
If we don't want the deferred_qs flag, then the patch below also fixes the
issue. However, I suspect it does not handle all cases (such as: what if we
previously received an expedited grace period IPI in an earlier reader
section and had to defer processing? Then it seems a similar deadlock would
present itself). But anyway, the patch below does fix it for me! It is based
on your -rcu tree commit 23634ebc1d946f19eb112d4455c1d84948875e31 (rcu: Check
for wakeup-safe conditions in rcu_read_unlock_special()).
---8<-----------------------
From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
Subject: [PATCH] Fix RCU recursive deadlock
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
include/linux/sched.h | 2 +-
kernel/rcu/tree_plugin.h | 17 +++++++++++++----
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 942a44c1b8eb..347e6dfcc91b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -565,7 +565,7 @@ union rcu_special {
u8 blocked;
u8 need_qs;
u8 exp_hint; /* Hint for performance. */
- u8 deferred_qs;
+ u8 pad;
} b; /* Bits. */
u32 s; /* Set of bits. */
};
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 75110ea75d01..5b9b12c1ba5c 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -455,7 +455,6 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
local_irq_restore(flags);
return;
}
- t->rcu_read_unlock_special.b.deferred_qs = false;
if (special.b.need_qs) {
rcu_qs();
t->rcu_read_unlock_special.b.need_qs = false;
@@ -608,13 +607,24 @@ static void rcu_read_unlock_special(struct task_struct *t)
if (preempt_bh_were_disabled || irqs_were_disabled) {
t->rcu_read_unlock_special.b.exp_hint = false;
// Need to defer quiescent state until everything is enabled.
+
+ /* If unlock_special was called in the current reader section
+ * just because we were blocked in a previous reader section,
+ * then raising softirqs can deadlock. This is because the
+ * scheduler executes RCU sections with preemption disabled,
+ * but it may have previously blocked in an earlier
+ * non-scheduler reader section and .blocked got set. It is
+ * never safe to call unlock_special from the scheduler path
+ * due to recursive wakeups (unless we are in_irq()), so
+ * prevent this by checking whether we were previously blocked.
+ */
if (irqs_were_disabled && use_softirq &&
- (in_irq() || !t->rcu_read_unlock_special.b.deferred_qs)) {
+ (!t->rcu_read_unlock_special.b.blocked || in_irq())) {
// Using softirq, safe to awaken, and we get
// no help from enabling irqs, unlike bh/preempt.
raise_softirq_irqoff(RCU_SOFTIRQ);
} else if (irqs_were_disabled && !use_softirq &&
- !t->rcu_read_unlock_special.b.deferred_qs) {
+ !t->rcu_read_unlock_special.b.blocked) {
// Safe to awaken and we get no help from enabling
// irqs, unlike bh/preempt.
invoke_rcu_core();
@@ -623,7 +633,6 @@ static void rcu_read_unlock_special(struct task_struct *t)
set_tsk_need_resched(current);
set_preempt_need_resched();
}
- t->rcu_read_unlock_special.b.deferred_qs = true;
local_irq_restore(flags);
return;
}
--
2.22.0.410.gd8fdbe21b5-goog
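
For what it's worth, the decision the patch changes can be sketched as a
small standalone function (purely illustrative: the names, the enum, and the
standalone framing are my own; the real code operates on struct task_struct
and actually calls raise_softirq_irqoff(RCU_SOFTIRQ) or invoke_rcu_core()):

```c
#include <stdbool.h>

/* Possible outcomes of the deferred-QS decision in
 * rcu_read_unlock_special() when preemption, bh, or irqs were disabled. */
enum deferral_action {
	RAISE_SOFTIRQ,    /* safe to raise RCU_SOFTIRQ immediately */
	INVOKE_RCU_CORE,  /* safe to wake the RCU core kthread */
	SET_NEED_RESCHED, /* not safe to wake anything; just mark resched */
};

static enum deferral_action choose_action(bool irqs_were_disabled,
					  bool use_softirq,
					  bool blocked, bool in_irq)
{
	/* A wakeup is only safe if we cannot be on the scheduler path:
	 * either we are in hard-irq context, or this task never blocked
	 * in a prior reader section (so the scheduler cannot be holding
	 * its locks around this rcu_read_unlock()). */
	if (irqs_were_disabled && use_softirq && (!blocked || in_irq))
		return RAISE_SOFTIRQ;
	else if (irqs_were_disabled && !use_softirq && !blocked)
		return INVOKE_RCU_CORE;
	/* Otherwise defer the quiescent state: only set the
	 * need-resched flags and let a later safe context report it. */
	return SET_NEED_RESCHED;
}
```

With this, a task that blocked in an earlier reader section (.blocked set)
and is unlocking with irqs disabled outside hard-irq context never attempts
a wakeup, which is exactly the deadlocking case.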