From: Henrik Austad <henrik@austad.us>
To: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Henrik Austad <haustad@cisco.com>,
	Xunlei Pang <xlpang@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	juri.lelli@arm.com, bigeasy@linutronix.de,
	mathieu.desnoyers@efficios.com, jdesfossez@efficios.com,
	bristot@redhat.com, Thomas Gleixner <tglx@linutronix.de>
Subject: [PATCH 16/17] rtmutex: Deboost before waking up the top waiter
Date: Fri,  9 Nov 2018 11:07:44 +0100
Message-ID: <1541758065-10952-17-git-send-email-henrik@austad.us>
In-Reply-To: <1541758065-10952-1-git-send-email-henrik@austad.us>

From: Xunlei Pang <xlpang@redhat.com>

commit 2a1c6029940675abb2217b590512dbf691867ec4 upstream.

We should deboost before waking the high-priority task, such that we
don't run two tasks with the same "state" (priority, deadline,
sched_class, etc.).

In order to make sure the boosting task doesn't start running between
unlock and deboost (due to a 'spurious' wakeup), we move the deboost
under the wait_lock; that way it is serialized against the wait loop
in __rt_mutex_slowlock().

Doing the deboost early can, however, lead to priority inversion if
current gets preempted after the deboost but before waking our
high-prio task, hence we disable preemption before the deboost and
enable it again once the wakeup is done.

This gets us the right semantic order, but, most importantly, this
change ensures pointer stability for the next patch, where we have
rt_mutex_setprio() cache a pointer to the top-most waiter task. If
we, as before this change, did the wakeup first and then deboosted,
this pointer might point into thin air.
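
For reference, the resulting unlock ordering can be summarised as
follows; this is a condensed sketch of the hunks below (error handling
and the fast paths left out), not verbatim kernel code:

	/* tail of rt_mutex_slowunlock() -- sketch only */
	mark_wakeup_next_waiter(wake_q, lock);	/* dequeue the top waiter and
						 * deboost current, both under
						 * lock->wait_lock */
	preempt_disable();			/* keep current from being
						 * preempted before the wakeup */
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

	/* rt_mutex_postunlock(), run by the unlock callers */
	wake_up_q(wake_q);			/* wake the top waiter */
	if (deboost)
		preempt_enable();		/* pairs with the preempt_disable()
						 * above */

The futex unlock paths (wake_futex_pi() and rt_mutex_futex_unlock())
now funnel through the same rt_mutex_postunlock() helper.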

[peterz: Changelog + patch munging]
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: juri.lelli@arm.com
Cc: bigeasy@linutronix.de
Cc: mathieu.desnoyers@efficios.com
Cc: jdesfossez@efficios.com
Cc: bristot@redhat.com
Link: http://lkml.kernel.org/r/20170323150216.110065320@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Henrik Austad <haustad@cisco.com>
---
 kernel/futex.c                  |  5 +---
 kernel/locking/rtmutex.c        | 59 ++++++++++++++++++++++-------------------
 kernel/locking/rtmutex_common.h |  2 +-
 3 files changed, 34 insertions(+), 32 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index afb02a7..63fa840 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1457,10 +1457,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
 out_unlock:
 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
 
-	if (deboost) {
-		wake_up_q(&wake_q);
-		rt_mutex_adjust_prio(current);
-	}
+	rt_mutex_postunlock(&wake_q, deboost);
 
 	return ret;
 }
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index b061a79..c01d7f4 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -371,24 +371,6 @@ static void __rt_mutex_adjust_prio(struct task_struct *task)
 }
 
 /*
- * Adjust task priority (undo boosting). Called from the exit path of
- * rt_mutex_slowunlock() and rt_mutex_slowlock().
- *
- * (Note: We do this outside of the protection of lock->wait_lock to
- * allow the lock to be taken while or before we readjust the priority
- * of task. We do not use the spin_xx_mutex() variants here as we are
- * outside of the debug path.)
- */
-void rt_mutex_adjust_prio(struct task_struct *task)
-{
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&task->pi_lock, flags);
-	__rt_mutex_adjust_prio(task);
-	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
-}
-
-/*
  * Deadlock detection is conditional:
  *
  * If CONFIG_DEBUG_RT_MUTEXES=n, deadlock detection is only conducted
@@ -1049,6 +1031,7 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	 * lock->wait_lock.
 	 */
 	rt_mutex_dequeue_pi(current, waiter);
+	__rt_mutex_adjust_prio(current);
 
 	/*
 	 * As we are waking up the top waiter, and the waiter stays
@@ -1391,6 +1374,16 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 	 */
 	mark_wakeup_next_waiter(wake_q, lock);
 
+	/*
+	 * We should deboost before waking the top waiter task such that
+	 * we don't run two tasks with the 'same' priority. This however
+	 * can lead to prio-inversion if we would get preempted after
+	 * the deboost but before waking our high-prio task, hence the
+	 * preempt_disable before unlock. Pairs with preempt_enable() in
+	 * rt_mutex_postunlock();
+	 */
+	preempt_disable();
+
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/* check PI boosting */
@@ -1440,6 +1433,18 @@ rt_mutex_fasttrylock(struct rt_mutex *lock,
 	return slowfn(lock);
 }
 
+/*
+ * Undo pi boosting (if necessary) and wake top waiter.
+ */
+void rt_mutex_postunlock(struct wake_q_head *wake_q, bool deboost)
+{
+	wake_up_q(wake_q);
+
+	/* Pairs with preempt_disable() in rt_mutex_slowunlock() */
+	if (deboost)
+		preempt_enable();
+}
+
 static inline void
 rt_mutex_fastunlock(struct rt_mutex *lock,
 		    bool (*slowfn)(struct rt_mutex *lock,
@@ -1453,11 +1458,7 @@ rt_mutex_fastunlock(struct rt_mutex *lock,
 
 	deboost = slowfn(lock, &wake_q);
 
-	wake_up_q(&wake_q);
-
-	/* Undo pi boosting if necessary: */
-	if (deboost)
-		rt_mutex_adjust_prio(current);
+	rt_mutex_postunlock(&wake_q, deboost);
 }
 
 /**
@@ -1570,6 +1571,13 @@ bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
 	}
 
 	mark_wakeup_next_waiter(wake_q, lock);
+	/*
+	 * We've already deboosted, retain preempt_disabled when dropping
+	 * the wait_lock to avoid inversion until the wakeup. Matched
+	 * by rt_mutex_postunlock();
+	 */
+	preempt_disable();
+
 	return true; /* deboost and wakeups */
 }
 
@@ -1582,10 +1590,7 @@ void __sched rt_mutex_futex_unlock(struct rt_mutex *lock)
 	deboost = __rt_mutex_futex_unlock(lock, &wake_q);
 	raw_spin_unlock_irq(&lock->wait_lock);
 
-	if (deboost) {
-		wake_up_q(&wake_q);
-		rt_mutex_adjust_prio(current);
-	}
+	rt_mutex_postunlock(&wake_q, deboost);
 }
 
 /**
diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
index 25ccf71..6f07357 100644
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -122,7 +122,7 @@ extern void rt_mutex_futex_unlock(struct rt_mutex *lock);
 extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock,
 				 struct wake_q_head *wqh);
 
-extern void rt_mutex_adjust_prio(struct task_struct *task);
+extern void rt_mutex_postunlock(struct wake_q_head *wake_q, bool deboost);
 
 #ifdef CONFIG_DEBUG_RT_MUTEXES
 # include "rtmutex-debug.h"
-- 
2.7.4


Thread overview: 22+ messages
2018-11-09 10:07 [PATCH 00/17] Backport rt/deadline crash and the ardous story of FUTEX_UNLOCK_PI to 4.4 Henrik Austad
2018-11-09 10:07 ` [PATCH 01/17] futex: Cleanup variable names for futex_top_waiter() Henrik Austad
2018-11-09 10:07 ` [PATCH 02/17] futex: Use smp_store_release() in mark_wake_futex() Henrik Austad
2018-11-09 10:07 ` [PATCH 03/17] futex: Remove rt_mutex_deadlock_account_*() Henrik Austad
2018-11-09 10:07 ` [PATCH 04/17] rtmutex: Make wait_lock irq safe Henrik Austad
2018-11-09 10:07 ` [PATCH 05/17] futex,rt_mutex: Provide futex specific rt_mutex API Henrik Austad
2018-11-09 10:07 ` [PATCH 06/17] futex: Change locking rules Henrik Austad
2018-11-09 10:07 ` [PATCH 07/17] futex: Cleanup refcounting Henrik Austad
2018-11-09 10:07 ` [PATCH 08/17] futex: Rework inconsistent rt_mutex/futex_q state Henrik Austad
2018-11-09 10:07 ` [PATCH 09/17] futex: Rename free_pi_state() to put_pi_state() Henrik Austad
2018-11-09 10:07 ` [PATCH 10/17] futex: Pull rt_mutex_futex_unlock() out from under hb->lock Henrik Austad
2018-11-09 10:07 ` [PATCH 11/17] futex,rt_mutex: Introduce rt_mutex_init_waiter() Henrik Austad
2018-11-09 10:07 ` [PATCH 12/17] futex,rt_mutex: Restructure rt_mutex_finish_proxy_lock() Henrik Austad
2018-11-09 10:07 ` [PATCH 13/17] futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock() Henrik Austad
2018-11-09 10:07 ` [PATCH 14/17] futex: Futex_unlock_pi() determinism Henrik Austad
2018-11-09 10:07 ` [PATCH 15/17] futex: Drop hb->lock before enqueueing on the rtmutex Henrik Austad
2018-11-09 10:07 ` Henrik Austad [this message]
2018-11-09 10:07 ` [PATCH 17/17] sched/rtmutex/deadline: Fix a PI crash for deadline tasks Henrik Austad
2018-11-09 10:35 ` [PATCH 00/17] Backport rt/deadline crash and the ardous story of FUTEX_UNLOCK_PI to 4.4 Henrik Austad
2018-11-19 11:27   ` Henrik Austad
2018-12-14  7:18     ` Greg Kroah-Hartman
2018-12-14  7:36       ` Henrik Austad
