* [PATCH 0/2] rtmutex: Lockless reader wake + explicit check for TASK_RTLOCK_WAIT.
@ 2021-09-28 15:00 Sebastian Andrzej Siewior
  2021-09-28 15:00 ` [PATCH 1/2] rtmutex: Check explicit " Sebastian Andrzej Siewior
  2021-09-28 15:00 ` [PATCH 2/2] rtmutex: Wake up the waiters lockless while dropping the read lock Sebastian Andrzej Siewior
  0 siblings, 2 replies; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-09-28 15:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
	Boqun Feng, Thomas Gleixner

This is a follow up to
  https://lkml.kernel.org/r/87wnnjcuam.ffs@tglx

While forming the patch, I got curious why the state check is against
TASK_NORMAL and not against TASK_RTLOCK_WAIT. So I swapped it to
TASK_RTLOCK_WAIT, which should be more robust.
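
For context, a minimal sketch of the state constants involved,
abbreviated from include/linux/sched.h (the exact values vary by
kernel version, so treat the numbers as illustrative):

#define TASK_INTERRUPTIBLE	0x00000001
#define TASK_UNINTERRUPTIBLE	0x00000002
#define TASK_RTLOCK_WAIT	0x00001000

/* TASK_NORMAL covers both regular sleep states */
#define TASK_NORMAL		(TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE)

A waiter's wake_state is only ever TASK_NORMAL or TASK_RTLOCK_WAIT, so
!= TASK_NORMAL and == TASK_RTLOCK_WAIT are equivalent today; the
explicit comparison just stops any future wake_state value from
silently taking the rtlock wake path.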

Sebastian




* [PATCH 1/2] rtmutex: Check explicit for TASK_RTLOCK_WAIT.
  2021-09-28 15:00 [PATCH 0/2] rtmutex: Lockless reader wake + explicit check for TASK_RTLOCK_WAIT Sebastian Andrzej Siewior
@ 2021-09-28 15:00 ` Sebastian Andrzej Siewior
  2021-10-01 15:05   ` [tip: locking/core] " tip-bot2 for Sebastian Andrzej Siewior
  2021-09-28 15:00 ` [PATCH 2/2] rtmutex: Wake up the waiters lockless while dropping the read lock Sebastian Andrzej Siewior
  1 sibling, 1 reply; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-09-28 15:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
	Boqun Feng, Thomas Gleixner, Sebastian Andrzej Siewior

rt_mutex_wake_q_add() needs to distinguish between sleeping locks
(TASK_RTLOCK_WAIT) and normal locks, which use TASK_NORMAL, in order to
use the proper wake mechanism.

Instead of checking for != TASK_NORMAL, make it more robust and check
explicitly for TASK_RTLOCK_WAIT, which is the reason why a different
wake mechanism is used.

No functional change.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
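
For context, the wake_state this patch checks is set at waiter init
time; a sketch paraphrased from kernel/locking/rtmutex_common.h of this
era (not part of this diff):

static __always_inline void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)
{
	debug_rt_mutex_init_waiter(waiter);
	RB_CLEAR_NODE(&waiter->pi_tree_entry);
	RB_CLEAR_NODE(&waiter->tree_entry);
	/* Regular rtmutex waiters are woken with TASK_NORMAL */
	waiter->wake_state = TASK_NORMAL;
	waiter->task = NULL;
}

static __always_inline void rt_mutex_init_rtlock_waiter(struct rt_mutex_waiter *waiter)
{
	/* Sleeping-lock (rtlock) waiters sleep in TASK_RTLOCK_WAIT */
	rt_mutex_init_waiter(waiter);
	waiter->wake_state = TASK_RTLOCK_WAIT;
}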

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 6bb116c559b4a..cafc259ec59d3 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -449,7 +449,7 @@ static __always_inline void rt_mutex_adjust_prio(struct task_struct *p)
 static __always_inline void rt_mutex_wake_q_add(struct rt_wake_q_head *wqh,
 						struct rt_mutex_waiter *w)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && w->wake_state != TASK_NORMAL) {
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && w->wake_state == TASK_RTLOCK_WAIT) {
 		if (IS_ENABLED(CONFIG_PROVE_LOCKING))
 			WARN_ON_ONCE(wqh->rtlock_task);
 		get_task_struct(w->task);
-- 
2.33.0



* [PATCH 2/2] rtmutex: Wake up the waiters lockless while dropping the read lock.
  2021-09-28 15:00 [PATCH 0/2] rtmutex: Lockless reader wake + explicit check for TASK_RTLOCK_WAIT Sebastian Andrzej Siewior
  2021-09-28 15:00 ` [PATCH 1/2] rtmutex: Check explicit " Sebastian Andrzej Siewior
@ 2021-09-28 15:00 ` Sebastian Andrzej Siewior
  2021-10-01 15:05   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
  1 sibling, 1 reply; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-09-28 15:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
	Boqun Feng, Thomas Gleixner, Davidlohr Bueso,
	Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

The rw_semaphore and rwlock_t implementations both wake the waiter
while holding rt_mutex_base::wait_lock. This can be optimized by waking
the waiter lockless, outside of the locked section, to avoid needless
contention on wait_lock.

Extend rt_mutex_wake_q_add() to also accept a task and state, and use
it in __rwbase_read_unlock().

Suggested-by: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/rtmutex.c   | 23 +++++++++++++++--------
 kernel/locking/rwbase_rt.c |  6 +++++-
 2 files changed, 20 insertions(+), 9 deletions(-)
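
For context, the DEFINE_RT_WAKE_Q() used below builds on the
rt_wake_q_head container; a sketch paraphrased from
kernel/locking/rtmutex_common.h of this era (not part of this diff):

struct rt_wake_q_head {
	struct wake_q_head	head;
	struct task_struct	*rtlock_task;
};

#define DEFINE_RT_WAKE_Q(name)						\
	struct rt_wake_q_head name = {					\
		.head		= WAKE_Q_HEAD_INITIALIZER(name.head),	\
		.rtlock_task	= NULL,					\
	}

Regular waiters are queued on head; a single rtlock waiter is parked in
rtlock_task so it can later be woken with TASK_RTLOCK_WAIT instead.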

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index cafc259ec59d3..0c6a48dfcecb3 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -446,17 +446,24 @@ static __always_inline void rt_mutex_adjust_prio(struct task_struct *p)
 }
 
 /* RT mutex specific wake_q wrappers */
+static __always_inline void rt_mutex_wake_q_add_task(struct rt_wake_q_head *wqh,
+						     struct task_struct *task,
+						     unsigned int wake_state)
+{
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && wake_state == TASK_RTLOCK_WAIT) {
+		if (IS_ENABLED(CONFIG_PROVE_LOCKING))
+			WARN_ON_ONCE(wqh->rtlock_task);
+		get_task_struct(task);
+		wqh->rtlock_task = task;
+	} else {
+		wake_q_add(&wqh->head, task);
+	}
+}
+
 static __always_inline void rt_mutex_wake_q_add(struct rt_wake_q_head *wqh,
 						struct rt_mutex_waiter *w)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && w->wake_state == TASK_RTLOCK_WAIT) {
-		if (IS_ENABLED(CONFIG_PROVE_LOCKING))
-			WARN_ON_ONCE(wqh->rtlock_task);
-		get_task_struct(w->task);
-		wqh->rtlock_task = w->task;
-	} else {
-		wake_q_add(&wqh->head, w->task);
-	}
+	rt_mutex_wake_q_add_task(wqh, w->task, w->wake_state);
 }
 
 static __always_inline void rt_mutex_wake_up_q(struct rt_wake_q_head *wqh)
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 88191f6e252c3..15c81100f0e26 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -148,6 +148,7 @@ static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
 {
 	struct rt_mutex_base *rtm = &rwb->rtmutex;
 	struct task_struct *owner;
+	DEFINE_RT_WAKE_Q(wqh);
 
 	raw_spin_lock_irq(&rtm->wait_lock);
 	/*
@@ -158,9 +159,12 @@ static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
 	 */
 	owner = rt_mutex_owner(rtm);
 	if (owner)
-		wake_up_state(owner, state);
+		rt_mutex_wake_q_add_task(&wqh, owner, state);
 
+	/* Pairs with the preempt_enable in rt_mutex_wake_up_q() */
+	preempt_disable();
 	raw_spin_unlock_irq(&rtm->wait_lock);
+	rt_mutex_wake_up_q(&wqh);
 }
 
 static __always_inline void rwbase_read_unlock(struct rwbase_rt *rwb,
-- 
2.33.0
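
For context on the "Pairs with the preempt_enable" comment in the hunk
above, a sketch of the consumer side as it reads in this era,
paraphrased from kernel/locking/rtmutex.c (not part of this diff):

static __always_inline void rt_mutex_wake_up_q(struct rt_wake_q_head *wqh)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT) && wqh->rtlock_task) {
		/* The parked rtlock waiter is woken in TASK_RTLOCK_WAIT */
		wake_up_state(wqh->rtlock_task, TASK_RTLOCK_WAIT);
		put_task_struct(wqh->rtlock_task);
		wqh->rtlock_task = NULL;
	}

	if (!wake_q_empty(&wqh->head))
		wake_up_q(&wqh->head);

	/* Pairs with preempt_disable() done by the waker before unlock */
	preempt_enable();
}

The preempt_disable()/preempt_enable() pair keeps the waker from being
preempted between dropping wait_lock and issuing the deferred wakeups.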



* [tip: locking/core] rtmutex: Wake up the waiters lockless while dropping the read lock.
  2021-09-28 15:00 ` [PATCH 2/2] rtmutex: Wake up the waiters lockless while dropping the read lock Sebastian Andrzej Siewior
@ 2021-10-01 15:05   ` tip-bot2 for Thomas Gleixner
  0 siblings, 0 replies; 5+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2021-10-01 15:05 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Davidlohr Bueso, Thomas Gleixner, Sebastian Andrzej Siewior,
	Peter Zijlstra (Intel),
	x86, linux-kernel

The following commit has been merged into the locking/core branch of tip:

Commit-ID:     9321f8152d9a764208c3f0dad49e0c55f293b7ab
Gitweb:        https://git.kernel.org/tip/9321f8152d9a764208c3f0dad49e0c55f293b7ab
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Tue, 28 Sep 2021 17:00:06 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 01 Oct 2021 13:57:52 +02:00

rtmutex: Wake up the waiters lockless while dropping the read lock.

The rw_semaphore and rwlock_t implementations both wake the waiter
while holding rt_mutex_base::wait_lock. This can be optimized by waking
the waiter lockless, outside of the locked section, to avoid needless
contention on wait_lock.

Extend rt_mutex_wake_q_add() to also accept a task and state, and use
it in __rwbase_read_unlock().

Suggested-by: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210928150006.597310-3-bigeasy@linutronix.de
---
 kernel/locking/rtmutex.c   | 19 +++++++++++++------
 kernel/locking/rwbase_rt.c |  6 +++++-
 2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index cafc259..0c6a48d 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -446,19 +446,26 @@ static __always_inline void rt_mutex_adjust_prio(struct task_struct *p)
 }
 
 /* RT mutex specific wake_q wrappers */
-static __always_inline void rt_mutex_wake_q_add(struct rt_wake_q_head *wqh,
-						struct rt_mutex_waiter *w)
+static __always_inline void rt_mutex_wake_q_add_task(struct rt_wake_q_head *wqh,
+						     struct task_struct *task,
+						     unsigned int wake_state)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && w->wake_state == TASK_RTLOCK_WAIT) {
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && wake_state == TASK_RTLOCK_WAIT) {
 		if (IS_ENABLED(CONFIG_PROVE_LOCKING))
 			WARN_ON_ONCE(wqh->rtlock_task);
-		get_task_struct(w->task);
-		wqh->rtlock_task = w->task;
+		get_task_struct(task);
+		wqh->rtlock_task = task;
 	} else {
-		wake_q_add(&wqh->head, w->task);
+		wake_q_add(&wqh->head, task);
 	}
 }
 
+static __always_inline void rt_mutex_wake_q_add(struct rt_wake_q_head *wqh,
+						struct rt_mutex_waiter *w)
+{
+	rt_mutex_wake_q_add_task(wqh, w->task, w->wake_state);
+}
+
 static __always_inline void rt_mutex_wake_up_q(struct rt_wake_q_head *wqh)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) && wqh->rtlock_task) {
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 4ba1508..6b143fb 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -141,6 +141,7 @@ static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
 {
 	struct rt_mutex_base *rtm = &rwb->rtmutex;
 	struct task_struct *owner;
+	DEFINE_RT_WAKE_Q(wqh);
 
 	raw_spin_lock_irq(&rtm->wait_lock);
 	/*
@@ -151,9 +152,12 @@ static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
 	 */
 	owner = rt_mutex_owner(rtm);
 	if (owner)
-		wake_up_state(owner, state);
+		rt_mutex_wake_q_add_task(&wqh, owner, state);
 
+	/* Pairs with the preempt_enable in rt_mutex_wake_up_q() */
+	preempt_disable();
 	raw_spin_unlock_irq(&rtm->wait_lock);
+	rt_mutex_wake_up_q(&wqh);
 }
 
 static __always_inline void rwbase_read_unlock(struct rwbase_rt *rwb,


* [tip: locking/core] rtmutex: Check explicit for TASK_RTLOCK_WAIT.
  2021-09-28 15:00 ` [PATCH 1/2] rtmutex: Check explicit " Sebastian Andrzej Siewior
@ 2021-10-01 15:05   ` tip-bot2 for Sebastian Andrzej Siewior
  0 siblings, 0 replies; 5+ messages in thread
From: tip-bot2 for Sebastian Andrzej Siewior @ 2021-10-01 15:05 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sebastian Andrzej Siewior, Peter Zijlstra (Intel), x86, linux-kernel

The following commit has been merged into the locking/core branch of tip:

Commit-ID:     8fe46535e10dbfebad68ad9f2f8260e49f5852c9
Gitweb:        https://git.kernel.org/tip/8fe46535e10dbfebad68ad9f2f8260e49f5852c9
Author:        Sebastian Andrzej Siewior <bigeasy@linutronix.de>
AuthorDate:    Tue, 28 Sep 2021 17:00:05 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 01 Oct 2021 13:57:52 +02:00

rtmutex: Check explicit for TASK_RTLOCK_WAIT.

rt_mutex_wake_q_add() needs to distinguish between sleeping locks
(TASK_RTLOCK_WAIT) and normal locks, which use TASK_NORMAL, in order to
use the proper wake mechanism.

Instead of checking for != TASK_NORMAL, make it more robust and check
explicitly for TASK_RTLOCK_WAIT, which is the reason why a different
wake mechanism is used.

No functional change.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210928150006.597310-2-bigeasy@linutronix.de
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 6bb116c..cafc259 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -449,7 +449,7 @@ static __always_inline void rt_mutex_adjust_prio(struct task_struct *p)
 static __always_inline void rt_mutex_wake_q_add(struct rt_wake_q_head *wqh,
 						struct rt_mutex_waiter *w)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && w->wake_state != TASK_NORMAL) {
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && w->wake_state == TASK_RTLOCK_WAIT) {
 		if (IS_ENABLED(CONFIG_PROVE_LOCKING))
 			WARN_ON_ONCE(wqh->rtlock_task);
 		get_task_struct(w->task);

