rcu.vger.kernel.org archive mirror
From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)" <joel@joelfernandes.org>,
	Palmer Dabbelt <palmer@sifive.com>,
	"Paul E. McKenney" <paulmck@linux.ibm.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	rcu@vger.kernel.org
Subject: [PATCH -rcu/dev] Please squash: fixup! rcu/tree: Add basic support for kfree_rcu() batching
Date: Sat, 17 Aug 2019 00:22:11 -0400	[thread overview]
Message-ID: <20190817042211.137149-1-joel@joelfernandes.org> (raw)

xchg() on a bool breaks the build on riscv and arm32, since xchg() on a
one-byte quantity is not supported by every architecture. Switch
monitor_todo from bool to int so the exchange is word-sized everywhere.

Please squash this into the -rcu dev branch to resolve the issue.

Fixes: -rcu dev commit 3cbd3aa7d9c7bdf ("rcu/tree: Add basic support for kfree_rcu() batching")

Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
 kernel/rcu/tree.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
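
For reference, here is a minimal user-space sketch of the pattern the
fix relies on, using C11 atomics as a stand-in for the kernel's xchg().
Illustration only, not part of the patch; the helper names are made up.

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int monitor_todo;	/* int, not _Bool: word-sized exchange */

	static void maybe_schedule(void)
	{
		/* Only the caller that flips 0 -> 1 schedules the work;
		 * every other caller sees the old value 1 and backs off.
		 */
		if (!atomic_exchange(&monitor_todo, 1))
			printf("schedule_delayed_work()\n");
	}

	int main(void)
	{
		maybe_schedule();	/* schedules the work */
		maybe_schedule();	/* no-op: already pending */
		monitor_todo = 0;	/* analogue of kfree_rcu_monitor()'s xchg(..., 0) */
		maybe_schedule();	/* schedules again */
		return 0;
	}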

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4f7c3096d786..33192a58b39a 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2717,7 +2717,7 @@ struct kfree_rcu_cpu {
 	 * is busy, ->head just continues to grow and we retry flushing later.
 	 */
 	struct delayed_work monitor_work;
-	bool monitor_todo;	/* Is a delayed work pending execution? */
+	int monitor_todo;	/* Is a delayed work pending execution? */
 };
 
 static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc);
@@ -2790,7 +2790,7 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
 	/* Previous batch that was queued to RCU did not get free yet, let us
 	 * try again soon.
 	 */
-	if (!xchg(&krcp->monitor_todo, true))
+	if (!xchg(&krcp->monitor_todo, 1))
 		schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
 	spin_unlock_irqrestore(&krcp->lock, flags);
 }
@@ -2806,7 +2806,7 @@ static void kfree_rcu_monitor(struct work_struct *work)
 						 monitor_work.work);
 
 	spin_lock_irqsave(&krcp->lock, flags);
-	if (xchg(&krcp->monitor_todo, false))
+	if (xchg(&krcp->monitor_todo, 0))
 		kfree_rcu_drain_unlock(krcp, flags);
 	else
 		spin_unlock_irqrestore(&krcp->lock, flags);
@@ -2858,7 +2858,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	krcp->head = head;
 
 	/* Schedule monitor for timely drain after KFREE_DRAIN_JIFFIES. */
-	if (!xchg(&krcp->monitor_todo, true))
+	if (!xchg(&krcp->monitor_todo, 1))
 		schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
 
 	spin_unlock(&krcp->lock);
-- 
2.23.0.rc1.153.gdeed80330f-goog


Thread overview: 4+ messages
2019-08-17  4:22 Joel Fernandes (Google) [this message]
2019-08-17  4:38 ` [PATCH -rcu/dev] Please squash: fixup! rcu/tree: Add basic support for kfree_rcu() batching Paul Walmsley
2019-08-17  4:43   ` Joel Fernandes
2019-08-17 21:42     ` Paul E. McKenney
