From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Will Deacon <will@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Josh Triplett <josh@joshtriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	rcu@vger.kernel.org
Subject: Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
Date: Fri, 22 May 2020 17:12:55 +0200
Message-ID: <20200522151255.rtqnuk2cl3dpruou@linutronix.de>
In-Reply-To: <20200520184345.GU2869@paulmck-ThinkPad-P72>

On 2020-05-20 11:43:45 [-0700], Paul E. McKenney wrote:
> 
> Yes, that CPU's rcu_segcblist structure does need mutual exclusion in
> this case.  This is because rcu_segcblist_pend_cbs() looks not just
> at the ->tails[] pointer, but also at the pointer referenced by the
> ->tails[] pointer.  This last pointer is in an rcu_head structure, and
> not just any rcu_head structure, but one that is ready to be invoked.
> So this callback could vanish into the freelist (or worse) at any time.
> But callback invocation runs on the CPU that enqueued the callbacks
> (as long as that CPU remains online, anyway), so disabling interrupts
> suffices in mainline.
> 
> Now, we could have srcu_might_be_idle() instead acquire the sdp->lock
> to protect the structure.

Joel suggested that.

> What would be really nice is a primitive that acquires such a per-CPU
> lock and remains executing on that CPU, whether by the graces of
> preempt_disable(), local_irq_save(), migrate_disable(), or what have you.

It depends on what is required. migrate_disable() would limit you to
executing on one CPU but would still allow preemption, so you would also
need a lock to ensure exclusive access to the data structure.
preempt_disable() / local_irq_save() guarantee more than that.
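
To make that concrete, here is a rough sketch of the usage pattern such
a primitive gives you, based on the local_lock() API this series
proposes (the per-CPU structure and function below are made up for
illustration, and the exact spelling may differ from patch 1/8):

#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU data, for illustration only. */
struct demo_pcpu {
	local_lock_t	lock;	/* keeps the task on this CPU and serializes access */
	int		count;
};

static DEFINE_PER_CPU(struct demo_pcpu, demo_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void demo_inc(void)
{
	/* !RT: disables preemption; RT: acquires a per-CPU sleeping lock. */
	local_lock(&demo_pcpu.lock);
	this_cpu_inc(demo_pcpu.count);
	local_unlock(&demo_pcpu.lock);
}

On !RT this boils down to preempt_disable()/preempt_enable(), so nothing
changes there, while on RT the per-CPU lock keeps the section
preemptible.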

Looking at the two call sites, there is no harm if a CPU migration
happens after the per-CPU pointer has been obtained. A migration can
also happen before the pointer is obtained, so the code before and after
this function cannot make any assumptions about which CPU it is running
on.

Would something like this work?

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -764,14 +764,15 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
 	unsigned long t;
 	unsigned long tlast;
 
+	check_init_srcu_struct(ssp);
 	/* If the local srcu_data structure has callbacks, not idle.  */
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
+	sdp = raw_cpu_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		local_irq_restore(flags);
+		spin_unlock_irqrestore_rcu_node(sdp, flags);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	local_irq_restore(flags);
+	spin_unlock_irqrestore_rcu_node(sdp, flags);
 
 	/*
 	 * No local callbacks, so probabalistically probe global state.
@@ -851,9 +852,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 	}
 	rhp->func = func;
 	idx = srcu_read_lock(ssp);
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
-	spin_lock_rcu_node(sdp);
+	sdp = raw_cpu_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
 	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_gp_seq));
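
The idea in both hunks is the same: raw_cpu_ptr() only selects which
srcu_data to use, and the spin_lock_irqsave_rcu_node() that follows is
what actually provides the exclusion, so it does not matter if the task
migrates in between. As a standalone illustration of that pattern (the
structure and function names are made up, this is not the actual
srcutree.c code):

#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct pcpu_item {
	spinlock_t lock;		/* protects ->list */
	struct list_head list;
};
/* Assume ->lock and ->list are initialized at boot. */
static DEFINE_PER_CPU(struct pcpu_item, pcpu_item);

static void add_item(struct list_head *new)
{
	struct pcpu_item *it;
	unsigned long flags;

	/* Preemption stays enabled; we might migrate right after this. */
	it = raw_cpu_ptr(&pcpu_item);
	/* The lock, not CPU affinity, provides the exclusion. */
	spin_lock_irqsave(&it->lock, flags);
	list_add(new, &it->list);
	spin_unlock_irqrestore(&it->lock, flags);
}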


The added check_init_srcu_struct() is needed because otherwise:

| BUG: spinlock bad magic on CPU#2, swapper/0/1
|  lock: 0xffff88803ed28ac0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
| CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc6+ #81
| Call Trace:
|  dump_stack+0x71/0xa0
|  do_raw_spin_lock+0x6c/0xb0
|  _raw_spin_lock_irqsave+0x33/0x40
|  synchronize_srcu+0x24/0xc9
|  wakeup_source_remove+0x4d/0x70
|  wakeup_source_unregister.part.0+0x9/0x40
|  device_wakeup_enable+0x99/0xc0

I'm not sure if there should be an explicit init of `wakeup_srcu' or if
an srcu function (like call_srcu()) is supposed to do it.

> 							Thanx, Paul

Sebastian
