* [PATCH 1/2] locking/mutex: Don't hog RCU read lock while optimistically spinning
@ 2020-08-07 19:16 Sultan Alsawaf
  2020-08-07 19:16 ` [PATCH 2/2] locking/rwsem: " Sultan Alsawaf
  2020-09-07 16:20 ` [PATCH 1/2] locking/mutex: " Will Deacon
  0 siblings, 2 replies; 4+ messages in thread
From: Sultan Alsawaf @ 2020-08-07 19:16 UTC (permalink / raw)
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, linux-kernel, Sultan Alsawaf

From: Sultan Alsawaf <sultan@kerneltoast.com>

There's no reason to hold the RCU read lock for the entire time spent
optimistically spinning on a mutex owner. Doing so can needlessly
lengthen RCU grace periods and slow down synchronize_rcu() when the
grace period isn't force-expedited via rcupdate.rcu_expedited=1.
Instead, take the RCU read lock only around the owner dereference in
each iteration of the spin loop and drop it before spinning.
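
Concretely, the spin loop goes from holding one long RCU read-side
critical section for the whole spin (sketch of the existing pattern):

	rcu_read_lock();
	while (__mutex_owner(lock) == owner) {
		/* check owner->on_cpu, need_resched(), vcpu preemption */
		cpu_relax();
	}
	rcu_read_unlock();

to entering and leaving a short critical section on every iteration,
so the spinner never extends a grace period beyond a single owner
check (simplified sketch of the new pattern, not the exact resulting
code):

	for (;;) {
		bool same_owner;

		rcu_read_lock();
		same_owner = __mutex_owner(lock) == owner;
		/* sample owner->on_cpu and task_cpu(owner) here */
		rcu_read_unlock();

		if (!same_owner)
			break;
		/* check need_resched(), vcpu_is_preempted(cpu), etc. */
		cpu_relax();
	}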

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
---
 kernel/locking/mutex.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 5352ce50a97e..cc5676712458 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -552,21 +552,31 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 {
 	bool ret = true;
 
-	rcu_read_lock();
-	while (__mutex_owner(lock) == owner) {
+	for (;;) {
+		unsigned int cpu;
+		bool same_owner;
+
 		/*
-		 * Ensure we emit the owner->on_cpu, dereference _after_
-		 * checking lock->owner still matches owner. If that fails,
+		 * Ensure lock->owner still matches owner. If that fails,
 		 * owner might point to freed memory. If it still matches,
 		 * the rcu_read_lock() ensures the memory stays valid.
 		 */
-		barrier();
+		rcu_read_lock();
+		same_owner = __mutex_owner(lock) == owner;
+		if (same_owner) {
+			ret = owner->on_cpu;
+			if (ret)
+				cpu = task_cpu(owner);
+		}
+		rcu_read_unlock();
+
+		if (!ret || !same_owner)
+			break;
 
 		/*
 		 * Use vcpu_is_preempted to detect lock holder preemption issue.
 		 */
-		if (!owner->on_cpu || need_resched() ||
-				vcpu_is_preempted(task_cpu(owner))) {
+		if (need_resched() || vcpu_is_preempted(cpu)) {
 			ret = false;
 			break;
 		}
@@ -578,7 +588,6 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 
 		cpu_relax();
 	}
-	rcu_read_unlock();
 
 	return ret;
 }
-- 
2.28.0


