From: Joel Fernandes <joel@joelfernandes.org>
To: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Peter Zijlstra <peterz@infradead.org>,
Steven Rostedt <rostedt@goodmis.org>, rcu <rcu@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>,
Josh Triplett <josh@joshtriplett.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Lai Jiangshan <jiangshanlai@gmail.com>
Subject: Re: [RFC] Deadlock via recursive wakeup via RCU with threadirqs
Date: Fri, 28 Jun 2019 19:12:41 -0400 [thread overview]
Message-ID: <20190628231241.GA9243@google.com> (raw)
In-Reply-To: <20190628222547.GE26519@linux.ibm.com>

On Fri, Jun 28, 2019 at 03:25:47PM -0700, Paul E. McKenney wrote:
> On Fri, Jun 28, 2019 at 05:40:18PM -0400, Joel Fernandes wrote:
> > Hi Paul,
> >
> > On Fri, Jun 28, 2019 at 01:04:23PM -0700, Paul E. McKenney wrote:
> > [snip]
> > > > > > Commit
> > > > > > - 23634ebc1d946 ("rcu: Check for wakeup-safe conditions in
> > > > > > rcu_read_unlock_special()") does not trigger the bug within 94
> > > > > > attempts.
> > > > > >
> > > > > > - 48d07c04b4cc1 ("rcu: Enable elimination of Tree-RCU softirq
> > > > > > processing") needed 12 attempts to trigger the bug.
> > > > >
> > > > > That matches my belief that 23634ebc1d946 ("rcu: Check for wakeup-safe
> > > > > conditions in rcu_read_unlock_special()") will at least greatly decrease
> > > > > the probability of this bug occurring.
> > > >
> > > > I was just typing a reply that I can't reproduce it with:
> > > > rcu: Check for wakeup-safe conditions in rcu_read_unlock_special()
> > > >
> > > > I am trying to revert enough of this patch to see what would break things,
> > > > however I think a better exercise might be to understand more what the patch
> > > > does why it fixes things in the first place ;-) It is probably the
> > > > deferred_qs thing.
> > >
> > > The deferred_qs flag is part of it! Looking forward to hearing what
> > > you come up with as being the critical piece of this commit.
> >
> > The new deferred_qs flag indeed saves the machine from the dead-lock.
> >
> > If we don't want the deferred_qs, then the below patch also fixes the issue.
> > However, I am more sure than not that it does not handle all cases (such as:
> > what if we previously had an expedited grace period IPI in a previous reader
> > section and had to defer processing? Then it seems a similar deadlock would
> > present). But anyway, the below patch does fix it for me! It is based on
> > your -rcu tree commit 23634ebc1d946f19eb112d4455c1d84948875e31 ("rcu: Check
> > for wakeup-safe conditions in rcu_read_unlock_special()").
>
> The point here being that you rely on .b.blocked rather than
> .b.deferred_qs. Hmmm... There are a number of places that check all
> the bits via the .s leg of the rcu_special union. The .s check in
> rcu_preempt_need_deferred_qs() should be OK because it is conditioned
> on t->rcu_read_lock_nesting of zero or negative.
> Do the rest of those also work out OK?
>
> It would be nice to remove the flag, but doing so clearly needs careful
> review and testing.

Agreed. I am planning to do an audit of this code within the next couple of
weeks, so I will be on the lookout for any optimization opportunities related
to this. Will let you know if this can work. For now I like your patch better
because it is more conservative and doesn't cause any space overhead.

If you'd like, please feel free to include my Tested-by on it:
Tested-by: Joel Fernandes (Google) <joel@joelfernandes.org>

If you have a chance, could you also point me to any tests that show a
performance improvement with the irqwork patch on the expedited GP use case?
I'd like to try them out as well. I guess rcuperf should have some?

thanks!
- Joel

Thread overview: 78+ messages
2019-06-26 13:54 [RFC] Deadlock via recursive wakeup via RCU with threadirqs Sebastian Andrzej Siewior
2019-06-26 16:25 ` Paul E. McKenney
2019-06-27 7:47 ` Sebastian Andrzej Siewior
2019-06-27 15:52 ` Paul E. McKenney
2019-06-27 14:24 ` Joel Fernandes
2019-06-27 14:34 ` Steven Rostedt
2019-06-27 15:30 ` Joel Fernandes
2019-06-27 15:37 ` Joel Fernandes
2019-06-27 15:40 ` Sebastian Andrzej Siewior
2019-06-27 15:42 ` Joel Fernandes
2019-06-27 17:43 ` Joel Fernandes
2019-06-27 17:46 ` Joel Fernandes
2019-06-27 18:11 ` Paul E. McKenney
2019-06-27 18:27 ` Joel Fernandes
2019-06-27 18:51 ` Joel Fernandes
2019-06-27 19:14 ` Paul E. McKenney
2019-06-27 19:15 ` Paul E. McKenney
2019-06-27 18:30 ` Paul E. McKenney
2019-06-27 20:45 ` Paul E. McKenney
2019-06-27 15:55 ` Paul E. McKenney
2019-06-27 16:47 ` Joel Fernandes
2019-06-27 17:38 ` Paul E. McKenney
2019-06-27 18:16 ` Joel Fernandes
2019-06-27 18:41 ` Paul E. McKenney
2019-06-27 20:17 ` Scott Wood
2019-06-27 20:36 ` Paul E. McKenney
2019-06-28 7:31 ` Byungchul Park
2019-06-28 7:43 ` Byungchul Park
2019-06-28 8:14 ` Byungchul Park
2019-06-28 8:24 ` Byungchul Park
2019-06-28 12:24 ` Paul E. McKenney
2019-06-28 9:10 ` Byungchul Park
2019-06-28 9:28 ` Byungchul Park
2019-06-28 12:21 ` Paul E. McKenney
2019-06-28 10:40 ` Byungchul Park
2019-06-28 12:27 ` Paul E. McKenney
2019-06-28 15:44 ` Steven Rostedt
2019-06-29 15:12 ` Andrea Parri
2019-06-29 16:55 ` Paul E. McKenney
2019-06-29 18:09 ` Andrea Parri
2019-06-29 18:21 ` Andrea Parri
2019-06-29 19:15 ` Paul E. McKenney
2019-06-29 19:35 ` Andrea Parri
2019-06-30 23:55 ` Byungchul Park
2019-06-28 14:15 ` Peter Zijlstra
2019-06-28 15:54 ` Paul E. McKenney
2019-06-28 16:04 ` Peter Zijlstra
2019-06-28 17:20 ` Paul E. McKenney
2019-07-01 9:42 ` Peter Zijlstra
2019-07-01 10:24 ` Sebastian Andrzej Siewior
2019-07-01 12:23 ` Paul E. McKenney
2019-07-01 14:00 ` Peter Zijlstra
2019-07-01 16:01 ` Paul E. McKenney
2019-06-28 20:01 ` Scott Wood
2019-07-01 9:45 ` Peter Zijlstra
2019-06-28 13:54 ` Peter Zijlstra
2019-06-28 15:30 ` Paul E. McKenney
2019-06-28 18:40 ` Sebastian Andrzej Siewior
2019-06-28 18:52 ` Paul E. McKenney
2019-06-28 19:24 ` Joel Fernandes
2019-06-28 20:04 ` Paul E. McKenney
2019-06-28 21:40 ` Joel Fernandes
2019-06-28 22:25 ` Paul E. McKenney
2019-06-28 23:12 ` Joel Fernandes [this message]
2019-06-29 0:06 ` Paul E. McKenney
2019-06-28 16:40 ` Joel Fernandes
2019-06-28 16:45 ` Joel Fernandes
2019-06-28 17:30 ` Paul E. McKenney
2019-06-28 17:41 ` Paul E. McKenney
2019-06-28 17:45 ` Sebastian Andrzej Siewior
2019-06-28 18:07 ` Joel Fernandes
2019-06-28 18:20 ` Sebastian Andrzej Siewior
2019-07-01 2:08 ` Joel Fernandes
2019-06-28 18:22 ` Paul E. McKenney
2019-06-28 19:29 ` Joel Fernandes
2019-06-28 20:06 ` Paul E. McKenney
2019-06-28 18:05 ` Joel Fernandes
2019-06-28 18:23 ` Paul E. McKenney