From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Will Deacon <will@kernel.org>, Waiman Long <longman@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Mike Galbraith <efault@gmx.de>
Subject: [patch V5 66/72] futex: Prevent requeue_pi() lock nesting issue on RT
Date: Sun, 15 Aug 2021 23:29:18 +0200 (CEST)	[thread overview]
Message-ID: <20210815211305.693317658@linutronix.de> (raw)
In-Reply-To: <20210815203225.710392609@linutronix.de>

From: Thomas Gleixner <tglx@linutronix.de>

The requeue_pi() operation on RT kernels creates a problem versus the
task::pi_blocked_on state when a waiter is woken early (signal, timeout)
and that early wake up interleaves with the requeue_pi() operation.

When the requeue manages to block the waiter on the rtmutex which is
associated with the second futex, then a concurrent early wakeup of that
waiter faces the problem that it has to acquire the hash bucket spinlock.
That is not an issue on non-RT kernels, but on RT kernels spinlocks are
substituted by 'sleeping' spinlocks based on rtmutex. If the hash bucket
lock is contended then blocking on that spinlock would result in an
impossible situation: blocking on two locks at the same time (the hash
bucket lock and the rtmutex representing the PI futex).

It was considered to make the hash bucket locks raw_spinlocks, but
especially requeue operations with a large number of waiters can introduce
significant latencies, so that's not an option for RT.

The RT tree carried a solution which (ab)used task::pi_blocked_on to store
the information about an ongoing requeue and an early wakeup. It worked,
but required adding checks for these special states all over the place.

Disentangling an early wakeup of a waiter for a requeue_pi() operation
already requires looking at quite a few different states, and the
task::pi_blocked_on magic just expanded that into a hard to understand
'state machine'.

This can be avoided by keeping track of the waiter/requeue state in the
futex_q object itself.

Add a requeue_state field to struct futex_q with the following possible
states:

	Q_REQUEUE_PI_NONE
	Q_REQUEUE_PI_IGNORE
	Q_REQUEUE_PI_IN_PROGRESS
	Q_REQUEUE_PI_WAIT
	Q_REQUEUE_PI_DONE
	Q_REQUEUE_PI_LOCKED

The waiter starts with state = NONE and the following state transitions are
valid:

On the waiter side:
  Q_REQUEUE_PI_NONE		-> Q_REQUEUE_PI_IGNORE
  Q_REQUEUE_PI_IN_PROGRESS	-> Q_REQUEUE_PI_WAIT

On the requeue side:
  Q_REQUEUE_PI_NONE		-> Q_REQUEUE_PI_IN_PROGRESS
  Q_REQUEUE_PI_IN_PROGRESS	-> Q_REQUEUE_PI_DONE/LOCKED
  Q_REQUEUE_PI_IN_PROGRESS	-> Q_REQUEUE_PI_NONE (requeue failed)
  Q_REQUEUE_PI_WAIT		-> Q_REQUEUE_PI_DONE/LOCKED
  Q_REQUEUE_PI_WAIT		-> Q_REQUEUE_PI_IGNORE (requeue failed)
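
The transitions above can be modeled in plain user-space C with C11
atomics. The following sketch mirrors the atomic_try_cmpxchg() loops the
patch adds; the names are borrowed from the patch, but this is purely
illustrative code, not the kernel implementation:

```c
/* Illustrative user-space model of the requeue_state transitions.
 * Mirrors the kernel's atomic_try_cmpxchg() loops; not the kernel code. */
#include <stdatomic.h>
#include <stdbool.h>

enum {
	Q_REQUEUE_PI_NONE = 0,
	Q_REQUEUE_PI_IGNORE,
	Q_REQUEUE_PI_IN_PROGRESS,
	Q_REQUEUE_PI_WAIT,
	Q_REQUEUE_PI_DONE,
	Q_REQUEUE_PI_LOCKED,
};

/* Requeue side: NONE -> IN_PROGRESS, refusing waiters on the way out */
static bool model_requeue_prepare(_Atomic int *state)
{
	int old = atomic_load_explicit(state, memory_order_acquire);

	do {
		if (old == Q_REQUEUE_PI_IGNORE)
			return false;		/* waiter already bailed out */
		if (old != Q_REQUEUE_PI_NONE)
			break;			/* trylock already set IN_PROGRESS */
	} while (!atomic_compare_exchange_weak(state, &old,
					       Q_REQUEUE_PI_IN_PROGRESS));
	return true;
}

/* Waiter side: NONE -> IGNORE, IN_PROGRESS -> WAIT; returns the new
 * state, or the old one if the requeue already reached DONE/LOCKED. */
static int model_wakeup_sync(_Atomic int *state)
{
	int old = atomic_load_explicit(state, memory_order_acquire);
	int new;

	do {
		if (old >= Q_REQUEUE_PI_DONE)
			return old;		/* requeue already finished */
		new = (old == Q_REQUEUE_PI_NONE) ? Q_REQUEUE_PI_IGNORE
						 : Q_REQUEUE_PI_WAIT;
	} while (!atomic_compare_exchange_weak(state, &old, new));
	return new;
}
```

An early wakeup before any requeue activity moves the state to IGNORE, so
a later requeue attempt backs off; a wakeup after the requeue side claimed
the waiter moves it to WAIT instead.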

The requeue side ignores a waiter with state Q_REQUEUE_PI_IGNORE as this
signals that the waiter is already on the way out. It also means that
the waiter is still on the 'wait' futex, i.e. uaddr1.

The waiter side signals early wakeup to the requeue side either by setting
the state to Q_REQUEUE_PI_IGNORE or to Q_REQUEUE_PI_WAIT, depending on the
current state. In the Q_REQUEUE_PI_IGNORE case it can immediately proceed
to take the hash bucket lock of uaddr1. If it set the state to WAIT, which
means the wakeup interleaved with a requeue in progress, it has to wait
for the requeue side to change the state, either to DONE/LOCKED or to
IGNORE. DONE/LOCKED means the waiter q is now on the uaddr2 futex and
either blocked on it (DONE) or has acquired it (LOCKED). IGNORE is set by
the requeue side when the requeue attempt failed via deadlock detection,
and therefore the waiter's futex_q is still on the uaddr1 futex.
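
The requeue-side completion step described above can be sketched the same
way. This model relies on LOCKED being DONE + 1, as the enum in the patch
does; the 'locked' argument stands in for the rt_mutex_start_proxy_lock()
style result (1 = acquired, 0 = blocked, < 0 = failure). Again an
illustration, not the kernel code:

```c
/* Illustrative user-space model of the requeue-side completion step.
 * Relies on LOCKED == DONE + 1, as the kernel enum does. Not kernel code. */
#include <stdatomic.h>

enum {
	Q_REQUEUE_PI_NONE = 0,
	Q_REQUEUE_PI_IGNORE,
	Q_REQUEUE_PI_IN_PROGRESS,
	Q_REQUEUE_PI_WAIT,
	Q_REQUEUE_PI_DONE,
	Q_REQUEUE_PI_LOCKED,
};

static int model_requeue_complete(_Atomic int *state, int locked)
{
	int old = atomic_load_explicit(state, memory_order_acquire);
	int new;

	do {
		if (old == Q_REQUEUE_PI_IGNORE)
			return old;	/* waiter is gone; leave it alone */
		if (locked >= 0)
			new = Q_REQUEUE_PI_DONE + locked; /* DONE or LOCKED */
		else if (old == Q_REQUEUE_PI_IN_PROGRESS)
			new = Q_REQUEUE_PI_NONE;    /* failed, no early wakeup */
		else
			new = Q_REQUEUE_PI_IGNORE;  /* failed, waiter pending */
	} while (!atomic_compare_exchange_weak(state, &old, new));

	/* A real implementation would wake a waiter seen in WAIT here. */
	return new;
}
```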

While this is not strictly required on !RT, making it unconditional has
the benefit of common code, and it also allows the waiter to avoid taking
the hash bucket lock on the way out in certain cases, which reduces
contention.

Add the helpers required for the state transitions, invoke them at the
right places, and restructure the futex_wait_requeue_pi() code to handle
the return from wait (early or not) based on the state machine values.

On !RT enabled kernels the waiter spin-waits for the state to change from
Q_REQUEUE_PI_WAIT to some other state; on RT enabled kernels this is
handled by rcuwait_wait_event() and the corresponding wake up on the
requeue side.
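
The !RT waiter-side wait boils down to a relaxed-read spin loop, which the
kernel expresses with atomic_cond_read_relaxed(). A minimal open-coded
user-space equivalent, for illustration only:

```c
/* Illustrative sketch of the !RT waiter-side wait: spin with relaxed
 * reads until the requeue side moves the state off WAIT. The kernel
 * uses atomic_cond_read_relaxed() for this; not the kernel code. */
#include <stdatomic.h>

/* Values borrowed from the patch's enum for illustration */
enum {
	Q_REQUEUE_PI_WAIT = 3,
	Q_REQUEUE_PI_DONE = 4,
};

static int model_wait_for_requeue(_Atomic int *state)
{
	int val;

	/* cpu_relax() would sit in the loop body in the kernel */
	while ((val = atomic_load_explicit(state, memory_order_relaxed))
	       == Q_REQUEUE_PI_WAIT)
		;
	return val;
}
```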

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V3: Folded Peter's improvements
---
 kernel/futex.c |  308 +++++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 259 insertions(+), 49 deletions(-)
---
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -197,6 +197,8 @@ struct futex_pi_state {
  * @rt_waiter:		rt_waiter storage for use with requeue_pi
  * @requeue_pi_key:	the requeue_pi target futex key
  * @bitset:		bitset for the optional bitmasked wakeup
+ * @requeue_state:	State field for futex_requeue_pi()
+ * @requeue_wait:	RCU wait for futex_requeue_pi() (RT only)
  *
  * We use this hashed waitqueue, instead of a normal wait_queue_entry_t, so
  * we can wake only the relevant ones (hashed queues may be shared).
@@ -219,12 +221,68 @@ struct futex_q {
 	struct rt_mutex_waiter *rt_waiter;
 	union futex_key *requeue_pi_key;
 	u32 bitset;
+	atomic_t requeue_state;
+#ifdef CONFIG_PREEMPT_RT
+	struct rcuwait requeue_wait;
+#endif
 } __randomize_layout;
 
+/*
+ * On PREEMPT_RT, the hash bucket lock is a 'sleeping' spinlock with an
+ * underlying rtmutex. The task which is about to be requeued could have
+ * just woken up (timeout, signal). After the wake up the task has to
+ * acquire the hash bucket lock held by the requeue code. As a task
+ * can only be blocked on _ONE_ rtmutex at a time, the proxy lock blocking
+ * and the hash bucket lock blocking would collide and corrupt state.
+ *
+ * On !PREEMPT_RT this is not a problem and everything could be serialized
+ * on the hash bucket lock, but aside from the benefit of common code,
+ * this allows avoiding the requeue when the task is already on the way
+ * out, and avoids taking the hash bucket lock of the original uaddr1
+ * when the requeue has been completed.
+ *
+ * The following state transitions are valid:
+ *
+ * On the waiter side:
+ *   Q_REQUEUE_PI_NONE		-> Q_REQUEUE_PI_IGNORE
+ *   Q_REQUEUE_PI_IN_PROGRESS	-> Q_REQUEUE_PI_WAIT
+ *
+ * On the requeue side:
+ *   Q_REQUEUE_PI_NONE		-> Q_REQUEUE_PI_IN_PROGRESS
+ *   Q_REQUEUE_PI_IN_PROGRESS	-> Q_REQUEUE_PI_DONE/LOCKED
+ *   Q_REQUEUE_PI_IN_PROGRESS	-> Q_REQUEUE_PI_NONE (requeue failed)
+ *   Q_REQUEUE_PI_WAIT		-> Q_REQUEUE_PI_DONE/LOCKED
+ *   Q_REQUEUE_PI_WAIT		-> Q_REQUEUE_PI_IGNORE (requeue failed)
+ *
+ * The requeue side ignores a waiter with state Q_REQUEUE_PI_IGNORE as this
+ * signals that the waiter is already on the way out. It also means that
+ * the waiter is still on the 'wait' futex, i.e. uaddr1.
+ *
+ * The waiter side signals early wakeup to the requeue side either by
+ * setting the state to Q_REQUEUE_PI_IGNORE or to Q_REQUEUE_PI_WAIT,
+ * depending on the current state. In the Q_REQUEUE_PI_IGNORE case it can
+ * immediately proceed to take the hash bucket lock of uaddr1. If it set
+ * the state to WAIT, i.e. the wakeup interleaved with a requeue in
+ * progress, it has to wait for the requeue side to change the state,
+ * either to DONE/LOCKED or to IGNORE. DONE/LOCKED means the waiter q is
+ * now on the uaddr2 futex and either blocked (DONE) or has acquired it
+ * (LOCKED). IGNORE is set by the requeue side when the requeue attempt
+ * failed via deadlock detection; the waiter q is then still on uaddr1.
+ */
+enum {
+	Q_REQUEUE_PI_NONE		=  0,
+	Q_REQUEUE_PI_IGNORE,
+	Q_REQUEUE_PI_IN_PROGRESS,
+	Q_REQUEUE_PI_WAIT,
+	Q_REQUEUE_PI_DONE,
+	Q_REQUEUE_PI_LOCKED,
+};
+
 static const struct futex_q futex_q_init = {
 	/* list gets initialized in queue_me()*/
-	.key = FUTEX_KEY_INIT,
-	.bitset = FUTEX_BITSET_MATCH_ANY
+	.key		= FUTEX_KEY_INIT,
+	.bitset		= FUTEX_BITSET_MATCH_ANY,
+	.requeue_state	= ATOMIC_INIT(Q_REQUEUE_PI_NONE),
 };
 
 /*
@@ -1772,6 +1830,108 @@ void requeue_futex(struct futex_q *q, st
 	q->key = *key2;
 }
 
+static inline bool futex_requeue_pi_prepare(struct futex_q *q,
+					    struct futex_pi_state *pi_state)
+{
+	int old, new;
+
+	/*
+	 * Set state to Q_REQUEUE_PI_IN_PROGRESS unless an early wakeup has
+	 * already set Q_REQUEUE_PI_IGNORE to signal that requeue should
+	 * ignore the waiter.
+	 */
+	old = atomic_read_acquire(&q->requeue_state);
+	do {
+		if (old == Q_REQUEUE_PI_IGNORE)
+			return false;
+
+		/*
+		 * futex_proxy_trylock_atomic() might have set it to
+		 * IN_PROGRESS and an interleaved early wake to WAIT.
+		 *
+		 * It was considered to have an extra state for that
+		 * trylock, but that would just add more conditionals
+		 * all over the place for a dubious value.
+		 */
+		if (old != Q_REQUEUE_PI_NONE)
+			break;
+
+		new = Q_REQUEUE_PI_IN_PROGRESS;
+	} while (!atomic_try_cmpxchg(&q->requeue_state, &old, new));
+
+	q->pi_state = pi_state;
+	return true;
+}
+
+static inline void futex_requeue_pi_complete(struct futex_q *q, int locked)
+{
+	int old, new;
+
+	old = atomic_read_acquire(&q->requeue_state);
+	do {
+		if (old == Q_REQUEUE_PI_IGNORE)
+			return;
+
+		if (locked >= 0) {
+			/* Requeue succeeded. Set DONE or LOCKED */
+			WARN_ON_ONCE(old != Q_REQUEUE_PI_IN_PROGRESS &&
+				     old != Q_REQUEUE_PI_WAIT);
+			new = Q_REQUEUE_PI_DONE + locked;
+		} else if (old == Q_REQUEUE_PI_IN_PROGRESS) {
+			/* Deadlock, no early wakeup interleave */
+			new = Q_REQUEUE_PI_NONE;
+		} else {
+			/* Deadlock, early wakeup interleave. */
+			WARN_ON_ONCE(old != Q_REQUEUE_PI_WAIT);
+			new = Q_REQUEUE_PI_IGNORE;
+		}
+	} while (!atomic_try_cmpxchg(&q->requeue_state, &old, new));
+
+#ifdef CONFIG_PREEMPT_RT
+	/* If the waiter interleaved with the requeue, let it know */
+	if (unlikely(old == Q_REQUEUE_PI_WAIT))
+		rcuwait_wake_up(&q->requeue_wait);
+#endif
+}
+
+static inline int futex_requeue_pi_wakeup_sync(struct futex_q *q)
+{
+	int old, new;
+
+	old = atomic_read_acquire(&q->requeue_state);
+	do {
+		/* Is requeue done already? */
+		if (old >= Q_REQUEUE_PI_DONE)
+			return old;
+
+		/*
+		 * If not done, then tell the requeue code to either ignore
+		 * the waiter or to wake it up once the requeue is done.
+		 */
+		new = Q_REQUEUE_PI_WAIT;
+		if (old == Q_REQUEUE_PI_NONE)
+			new = Q_REQUEUE_PI_IGNORE;
+	} while (!atomic_try_cmpxchg(&q->requeue_state, &old, new));
+
+	/* If the requeue was in progress, wait for it to complete */
+	if (old == Q_REQUEUE_PI_IN_PROGRESS) {
+#ifdef CONFIG_PREEMPT_RT
+		rcuwait_wait_event(&q->requeue_wait,
+				   atomic_read(&q->requeue_state) != Q_REQUEUE_PI_WAIT,
+				   TASK_UNINTERRUPTIBLE);
+#else
+		(void)atomic_cond_read_relaxed(&q->requeue_state, VAL != Q_REQUEUE_PI_WAIT);
+#endif
+	}
+
+	/*
+	 * Requeue is now either prohibited or complete. Reread state
+	 * because during the wait above it might have changed. Nothing
+	 * will modify q->requeue_state after this point.
+	 */
+	return atomic_read(&q->requeue_state);
+}
+
 /**
  * requeue_pi_wake_futex() - Wake a task that acquired the lock during requeue
  * @q:		the futex_q
@@ -1799,6 +1959,8 @@ void requeue_pi_wake_futex(struct futex_
 
 	q->lock_ptr = &hb->lock;
 
+	/* Signal locked state to the waiter */
+	futex_requeue_pi_complete(q, 1);
 	wake_up_state(q->task, TASK_NORMAL);
 }
 
@@ -1869,6 +2031,10 @@ futex_proxy_trylock_atomic(u32 __user *p
 	if (!match_futex(top_waiter->requeue_pi_key, key2))
 		return -EINVAL;
 
+	/* Ensure that this does not race against an early wakeup */
+	if (!futex_requeue_pi_prepare(top_waiter, NULL))
+		return -EAGAIN;
+
 	/*
 	 * Try to take the lock for top_waiter.  Set the FUTEX_WAITERS bit in
 	 * the contended case or if set_waiters is 1.  The pi_state is returned
@@ -1878,8 +2044,22 @@ futex_proxy_trylock_atomic(u32 __user *p
 	ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
 				   exiting, set_waiters);
 	if (ret == 1) {
+		/* Dequeue, wake up and update top_waiter::requeue_state */
 		requeue_pi_wake_futex(top_waiter, key2, hb2);
 		return vpid;
+	} else if (ret < 0) {
+		/* Rewind top_waiter::requeue_state */
+		futex_requeue_pi_complete(top_waiter, ret);
+	} else {
+		/*
+		 * futex_lock_pi_atomic() did not acquire the user space
+		 * futex, but managed to establish the proxy lock and pi
+		 * state. top_waiter::requeue_state cannot be fixed up here
+		 * because the waiter is not enqueued on the rtmutex
+		 * yet. This is handled at the callsite depending on the
+		 * result of rt_mutex_start_proxy_lock() which is
+		 * guaranteed to be reached with this function returning 0.
+		 */
 	}
 	return ret;
 }
@@ -2020,6 +2200,8 @@ static int futex_requeue(u32 __user *uad
 		 * intend to requeue waiters, force setting the FUTEX_WAITERS
 		 * bit.  We force this here where we are able to easily handle
 		 * faults rather in the requeue loop below.
+		 *
+		 * Updates topwaiter::requeue_state if a top waiter exists.
 		 */
 		ret = futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1,
 						 &key2, &pi_state,
@@ -2033,6 +2215,24 @@ static int futex_requeue(u32 __user *uad
 		 * VPID of the top waiter task.
 		 * If the lock was not taken, we have pi_state and an initial
 		 * refcount on it. In case of an error we have nothing.
+		 *
+		 * The top waiter's requeue_state is up to date:
+		 *
+		 *  - If the lock was acquired atomically (ret > 0), then
+		 *    the state is Q_REQUEUE_PI_LOCKED.
+		 *
+		 *  - If the trylock failed with an error (ret < 0) then
+		 *    the state is either Q_REQUEUE_PI_NONE, i.e. "nothing
+		 *    happened", or Q_REQUEUE_PI_IGNORE when there was an
+		 *    interleaved early wakeup.
+		 *
+		 *  - If the trylock did not succeed (ret == 0) then the
+		 *    state is either Q_REQUEUE_PI_IN_PROGRESS or
+		 *    Q_REQUEUE_PI_WAIT if an early wakeup interleaved.
+		 *    This will be cleaned up in the loop below, which
+		 *    cannot fail because futex_proxy_trylock_atomic() did
+		 *    the same sanity checks for requeue_pi as the loop
+		 *    below does.
 		 */
 		if (ret > 0) {
 			WARN_ON(pi_state);
@@ -2064,7 +2264,10 @@ static int futex_requeue(u32 __user *uad
 			/* We hold a reference on the pi state. */
 			break;
 
-			/* If the above failed, then pi_state is NULL */
+		/*
+		 * If the above failed, then pi_state is NULL and
+		 * waiter::requeue_state is correct.
+		 */
 		case -EFAULT:
 			double_unlock_hb(hb1, hb2);
 			hb_waiters_dec(hb2);
@@ -2140,21 +2343,39 @@ static int futex_requeue(u32 __user *uad
 		 * object of the waiter.
 		 */
 		get_pi_state(pi_state);
-		this->pi_state = pi_state;
+
+		/* Don't requeue when the waiter is already on the way out. */
+		if (!futex_requeue_pi_prepare(this, pi_state)) {
+			/*
+			 * Early woken waiter signaled that it is on the
+			 * way out. Drop the pi_state reference and try the
+			 * next waiter. @this->pi_state is still NULL.
+			 */
+			put_pi_state(pi_state);
+			continue;
+		}
+
 		ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
-						this->rt_waiter, this->task);
+						this->rt_waiter,
+						this->task);
+
 		if (ret == 1) {
 			/*
 			 * We got the lock. We do neither drop the refcount
 			 * on pi_state nor clear this->pi_state because the
 			 * waiter needs the pi_state for cleaning up the
 			 * user space value. It will drop the refcount
-			 * after doing so.
+			 * after doing so. this::requeue_state is updated
+			 * in the wakeup as well.
 			 */
 			requeue_pi_wake_futex(this, &key2, hb2);
 			task_count++;
-			continue;
-		} else if (ret) {
+		} else if (!ret) {
+			/* Waiter is queued, move it to hb2 */
+			requeue_futex(this, hb1, hb2, &key2);
+			futex_requeue_pi_complete(this, 0);
+			task_count++;
+		} else {
 			/*
 			 * rt_mutex_start_proxy_lock() detected a potential
 			 * deadlock when we tried to queue that waiter.
@@ -2164,15 +2385,13 @@ static int futex_requeue(u32 __user *uad
 			 */
 			this->pi_state = NULL;
 			put_pi_state(pi_state);
+			futex_requeue_pi_complete(this, ret);
 			/*
 			 * We stop queueing more waiters and let user space
 			 * deal with the mess.
 			 */
 			break;
 		}
-		/* Waiter is queued, move it to hb2 */
-		requeue_futex(this, hb1, hb2, &key2);
-		task_count++;
 	}
 
 	/*
@@ -3161,6 +3380,7 @@ static int futex_wait_requeue_pi(u32 __u
 	struct futex_hash_bucket *hb;
 	union futex_key key2 = FUTEX_KEY_INIT;
 	struct futex_q q = futex_q_init;
+	struct rt_mutex_base *pi_mutex;
 	int res, ret;
 
 	if (!IS_ENABLED(CONFIG_FUTEX_PI))
@@ -3210,32 +3430,22 @@ static int futex_wait_requeue_pi(u32 __u
 	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
 	futex_wait_queue_me(hb, &q, to);
 
-	spin_lock(&hb->lock);
-	/* Is @q still queued on uaddr1? */
-	if (!match_futex(&q->key, key2))
+	switch (futex_requeue_pi_wakeup_sync(&q)) {
+	case Q_REQUEUE_PI_IGNORE:
+		/* The waiter is still on uaddr1 */
+		spin_lock(&hb->lock);
 		ret = handle_early_requeue_pi_wakeup(hb, &q, to);
-	spin_unlock(&hb->lock);
-	if (ret)
-		goto out;
-
-	/*
-	 * In order for us to be here, we know our q.key == key2, and since
-	 * we took the hb->lock above, we also know that futex_requeue() has
-	 * completed and we no longer have to concern ourselves with a wakeup
-	 * race with the atomic proxy lock acquisition by the requeue code.
-	 */
+		spin_unlock(&hb->lock);
+		break;
 
-	/*
-	 * Check if the requeue code acquired the second futex for us and do
-	 * any pertinent fixup.
-	 */
-	if (!q.rt_waiter) {
+	case Q_REQUEUE_PI_LOCKED:
+		/* The requeue acquired the lock */
 		if (q.pi_state && (q.pi_state->owner != current)) {
 			spin_lock(q.lock_ptr);
 			ret = fixup_owner(uaddr2, &q, true);
 			/*
-			 * Drop the reference to the pi state which
-			 * the requeue_pi() code acquired for us.
+			 * Drop the reference to the pi state which the
+			 * requeue_pi() code acquired for us.
 			 */
 			put_pi_state(q.pi_state);
 			spin_unlock(q.lock_ptr);
@@ -3245,18 +3455,14 @@ static int futex_wait_requeue_pi(u32 __u
 			 */
 			ret = ret < 0 ? ret : 0;
 		}
-	} else {
-		struct rt_mutex_base *pi_mutex;
+		break;
 
-		/*
-		 * We have been woken up by futex_unlock_pi(), a timeout, or a
-		 * signal.  futex_unlock_pi() will not destroy the lock_ptr nor
-		 * the pi_state.
-		 */
-		WARN_ON(!q.pi_state);
+	case Q_REQUEUE_PI_DONE:
+		/* Requeue completed. Current is 'pi_blocked_on' the rtmutex */
 		pi_mutex = &q.pi_state->pi_mutex;
 		ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
 
+		/* Current is no longer pi_blocked_on */
 		spin_lock(q.lock_ptr);
 		if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
 			ret = 0;
@@ -3276,17 +3482,21 @@ static int futex_wait_requeue_pi(u32 __u
 
 		unqueue_me_pi(&q);
 		spin_unlock(q.lock_ptr);
-	}
 
-	if (ret == -EINTR) {
-		/*
-		 * We've already been requeued, but cannot restart by calling
-		 * futex_lock_pi() directly. We could restart this syscall, but
-		 * it would detect that the user space "val" changed and return
-		 * -EWOULDBLOCK.  Save the overhead of the restart and return
-		 * -EWOULDBLOCK directly.
-		 */
-		ret = -EWOULDBLOCK;
+		if (ret == -EINTR) {
+			/*
+			 * We've already been requeued, but cannot restart
+			 * by calling futex_lock_pi() directly. We could
+			 * restart this syscall, but it would detect that
+			 * the user space "val" changed and return
+			 * -EWOULDBLOCK.  Save the overhead of the restart
+			 * and return -EWOULDBLOCK directly.
+			 */
+			ret = -EWOULDBLOCK;
+		}
+		break;
+	default:
+		BUG();
 	}
 
 out:


2021-08-15 21:29 ` [patch V5 68/72] preempt: Adjust PREEMPT_LOCK_OFFSET for RT Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 69/72] locking/rtmutex: Implement equal priority lock stealing Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Gregory Haskins
2021-08-15 21:29 ` [patch V5 70/72] locking/rtmutex: Add adaptive spinwait mechanism Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Steven Rostedt
2021-08-15 21:29 ` [patch V5 71/72] locking/spinlock/rt: Prepare for RT local_lock Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 72/72] locking/local_lock: Add PREEMPT_RT support Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-16  9:21 ` [patch V5 00/72] locking, sched: The PREEMPT-RT locking infrastructure Sebastian Andrzej Siewior
