From: Lai Jiangshan <laijs@linux.alibaba.com>
To: linux-kernel@vger.kernel.org
Cc: Lai Jiangshan <laijs@linux.alibaba.com>,
"Paul E. McKenney" <paulmck@kernel.org>,
Josh Triplett <josh@joshtriplett.org>,
Steven Rostedt <rostedt@goodmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Lai Jiangshan <jiangshanlai@gmail.com>,
Joel Fernandes <joel@joelfernandes.org>,
rcu@vger.kernel.org
Subject: [PATCH 01/11] rcu: avoid leaking exp_deferred_qs into next GP
Date: Thu, 31 Oct 2019 10:07:56 +0000
Message-ID: <20191031100806.1326-2-laijs@linux.alibaba.com>
In-Reply-To: <20191031100806.1326-1-laijs@linux.alibaba.com>
If exp_deferred_qs is incorrectly set and leaks into the next
expedited grace period, it can cause that grace period to be
reported as complete prematurely.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
kernel/rcu/tree_exp.h | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index a0e1e51c51c2..6dec21909b30 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -603,6 +603,18 @@ static void rcu_exp_handler(void *unused)
struct rcu_node *rnp = rdp->mynode;
struct task_struct *t = current;
+ /*
+ * Note that there is a large group of race conditions that
+ * can have caused this quiescent state to already have been
+ * reported, so we really do need to check ->expmask first.
+ */
+ raw_spin_lock_irqsave_rcu_node(rnp, flags);
+ if (!(rnp->expmask & rdp->grpmask)) {
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ return;
+ }
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+
/*
* First, the common case of not being in an RCU read-side
* critical section. If also enabled or idle, immediately
@@ -628,17 +640,10 @@ static void rcu_exp_handler(void *unused)
* a future context switch. Either way, if the expedited
* grace period is still waiting on this CPU, set ->deferred_qs
* so that the eventual quiescent state will be reported.
- * Note that there is a large group of race conditions that
- * can have caused this quiescent state to already have been
- * reported, so we really do need to check ->expmask.
*/
if (t->rcu_read_lock_nesting > 0) {
- raw_spin_lock_irqsave_rcu_node(rnp, flags);
- if (rnp->expmask & rdp->grpmask) {
- rdp->exp_deferred_qs = true;
- t->rcu_read_unlock_special.b.exp_hint = true;
- }
- raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ rdp->exp_deferred_qs = true;
+ WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, true);
return;
}
--
2.20.1
Thread overview: 45+ messages
2019-10-31 10:07 [PATCH 00/11] rcu: introduce percpu rcu_preempt_depth Lai Jiangshan
2019-10-31 10:07 ` Lai Jiangshan [this message]
2019-10-31 13:43 ` [PATCH 01/11] rcu: avoid leaking exp_deferred_qs into next GP Paul E. McKenney
2019-10-31 18:19 ` Lai Jiangshan
2019-10-31 19:00 ` Paul E. McKenney
2019-10-31 10:07 ` [PATCH 02/11] rcu: fix bug when rcu_exp_handler() in nested interrupt Lai Jiangshan
2019-10-31 13:47 ` Paul E. McKenney
2019-10-31 14:20 ` Lai Jiangshan
2019-10-31 14:31 ` Paul E. McKenney
2019-10-31 15:14 ` Lai Jiangshan
2019-10-31 18:52 ` Paul E. McKenney
2019-11-01 0:19 ` Boqun Feng
2019-11-01 2:29 ` Lai Jiangshan
2019-10-31 10:07 ` [PATCH 03/11] rcu: clean up rcu_preempt_deferred_qs_irqrestore() Lai Jiangshan
2019-10-31 13:52 ` Paul E. McKenney
2019-10-31 15:25 ` Lai Jiangshan
2019-10-31 18:57 ` Paul E. McKenney
2019-10-31 19:02 ` Paul E. McKenney
2019-10-31 10:07 ` [PATCH 04/11] rcu: cleanup rcu_preempt_deferred_qs() Lai Jiangshan
2019-10-31 14:10 ` Paul E. McKenney
2019-10-31 14:35 ` Lai Jiangshan
2019-10-31 15:07 ` Paul E. McKenney
2019-10-31 18:33 ` Lai Jiangshan
2019-10-31 22:45 ` Paul E. McKenney
2019-10-31 10:08 ` [PATCH 05/11] rcu: clean all rcu_read_unlock_special after report qs Lai Jiangshan
2019-11-01 11:54 ` Paul E. McKenney
2019-10-31 10:08 ` [PATCH 06/11] rcu: clear t->rcu_read_unlock_special in one go Lai Jiangshan
2019-11-01 12:10 ` Paul E. McKenney
2019-11-01 16:58 ` Paul E. McKenney
2019-10-31 10:08 ` [PATCH 07/11] rcu: set special.b.deferred_qs before wake_up() Lai Jiangshan
2019-10-31 10:08 ` [PATCH 08/11] rcu: don't use negative ->rcu_read_lock_nesting Lai Jiangshan
2019-11-01 12:33 ` Paul E. McKenney
2019-11-16 13:04 ` Lai Jiangshan
2019-11-17 21:53 ` Paul E. McKenney
2019-11-18 1:54 ` Lai Jiangshan
2019-11-18 14:57 ` Paul E. McKenney
2019-10-31 10:08 ` [PATCH 09/11] rcu: wrap usages of rcu_read_lock_nesting Lai Jiangshan
2019-10-31 10:08 ` [PATCH 10/11] rcu: clear the special.b.need_qs in rcu_note_context_switch() Lai Jiangshan
2019-10-31 10:08 ` [PATCH 11/11] x86,rcu: use percpu rcu_preempt_depth Lai Jiangshan
2019-11-01 12:58 ` Paul E. McKenney
2019-11-01 13:13 ` Peter Zijlstra
2019-11-01 14:30 ` Paul E. McKenney
2019-11-01 15:32 ` Lai Jiangshan
2019-11-01 16:21 ` Paul E. McKenney
2019-11-01 15:47 ` Lai Jiangshan