* [PATCH v2 0/8] scheduler tinification
@ 2017-06-06 23:24 Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 1/8] cpuset/sched: cpuset makes sense for SMP only Nicolas Pitre
                   ` (8 more replies)
  0 siblings, 9 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Many embedded systems don't need the full scheduler support. Most of the
time, user space is tightly controlled and many of the scheduler facilities
are simply unused.

This patch series makes it possible to configure out some parts of the
scheduler such as the deadline and realtime scheduler classes. The savings
in kernel footprint are non-negligible.

Small ARM kernel config before this series:

   text    data     bss     dec     hex filename
  28623    3404     128   32155    7d9b kernel/sched/built-in.o

With this series applied and the dl and rt classes disabled:

   text    data     bss     dec     hex filename
  20734    3334      40   24108    5e2c kernel/sched/built-in.o

A significant part of the remaining code is support for various system calls
that could be removed automatically when user space doesn't use them, but that
is a topic for another day.

Changes from v1:

- the deadline class is configurable independently from the realtime class
- split-out of the PI futex code so that non-PI futexes remain available
  when RT is configured out
- removal of many #ifdefs to keep the code more readable
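
For illustration only, a tiny embedded configuration aiming for the numbers
above simply leaves the new options unset. The symbol names below are
placeholders; the actual Kconfig symbols are introduced by later patches in
this series:

   # hypothetical option names, for illustration only
   # CONFIG_SCHED_DL is not set
   # CONFIG_SCHED_RT is not set
   CONFIG_FUTEX=y

Once RT_MUTEXES is no longer available (the point of the futex split in
patch 3/8), the promptless FUTEX_PI option (default y, depends on
FUTEX && RT_MUTEXES) becomes n automatically and futex.c no longer pulls in
kernel/futex_pi.c.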

diffstat for this series:

 include/linux/futex.h          |    7 +-
 include/linux/init_task.h      |   15 +-
 include/linux/rtmutex.h        |   69 +
 include/linux/sched.h          |    4 +
 include/linux/sched/deadline.h |    8 +-
 include/linux/sched/rt.h       |   10 +-
 init/Kconfig                   |   28 +-
 kernel/futex.c                 | 2829 ++++++++--------------------------
 kernel/futex_pi.c              | 1563 +++++++++++++++++++
 kernel/locking/Makefile        |    3 +
 kernel/locking/locktorture.c   |    4 +-
 kernel/locking/rtmutex.c       |    6 +-
 kernel/sched/Makefile          |    7 +-
 kernel/sched/core.c            |  759 +--------
 kernel/sched/cpudeadline.h     |    7 +-
 kernel/sched/deadline.c        |  340 ++++
 kernel/sched/debug.c           |    6 +
 kernel/sched/rt.c              |  315 +++-
 kernel/sched/sched.h           |   88 +-
 kernel/sched/stop_task.c       |    6 +
 kernel/sysctl.c                |    4 +-
 kernel/time/posix-cpu-timers.c |    7 +-
 lib/Kconfig.debug              |    2 +-
 23 files changed, 3190 insertions(+), 2897 deletions(-)

* [PATCH v2 1/8] cpuset/sched: cpuset makes sense for SMP only
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 2/8] sched: omit stop_sched_class when !SMP Nicolas Pitre
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Make CONFIG_CPUSETS depend on SMP as this feature makes no sense
on UP. This allows for configuring out cpuset_cpumask_can_shrink()
and task_can_attach() entirely.
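
For illustration (this is a sketch of the resulting shape of
kernel/sched/core.c, not additional code), both functions simply end up
inside the existing CONFIG_SMP block; since their only callers live in the
cpuset code, which can no longer be built on UP, no !SMP stubs are needed:

   #ifdef CONFIG_SMP

   int cpuset_cpumask_can_shrink(const struct cpumask *cur,
                                 const struct cpumask *trial)
   {
           /* unchanged body */
   }

   int task_can_attach(struct task_struct *p,
                       const struct cpumask *cs_cpus_allowed)
   {
           /* unchanged body, minus its inner #ifdef CONFIG_SMP */
   }

   bool sched_smp_initialized __read_mostly;
   /* ... rest of the existing SMP-only code ... */

   #endif /* CONFIG_SMP */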

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 init/Kconfig        | 1 +
 kernel/sched/core.c | 7 +++----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index 4ef946b466..b9aed60cac 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1156,6 +1156,7 @@ config CGROUP_HUGETLB
 
 config CPUSETS
 	bool "Cpuset controller"
+	depends on SMP
 	help
 	  This option will let you create and manage CPUSETs which
 	  allow dynamically partitioning a system into sets of CPUs and
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 803c3bc274..de274b1bd2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5463,6 +5463,8 @@ void init_idle(struct task_struct *idle, int cpu)
 #endif
 }
 
+#ifdef CONFIG_SMP
+
 int cpuset_cpumask_can_shrink(const struct cpumask *cur,
 			      const struct cpumask *trial)
 {
@@ -5506,7 +5508,6 @@ int task_can_attach(struct task_struct *p,
 		goto out;
 	}
 
-#ifdef CONFIG_SMP
 	if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span,
 					      cs_cpus_allowed)) {
 		unsigned int dest_cpu = cpumask_any_and(cpu_active_mask,
@@ -5536,13 +5537,11 @@ int task_can_attach(struct task_struct *p,
 		rcu_read_unlock_sched();
 
 	}
-#endif
+
 out:
 	return ret;
 }
 
-#ifdef CONFIG_SMP
-
 bool sched_smp_initialized __read_mostly;
 
 #ifdef CONFIG_NUMA_BALANCING
-- 
2.9.4

* [PATCH v2 2/8] sched: omit stop_sched_class when !SMP
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 1/8] cpuset/sched: cpuset makes sense for SMP only Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 3/8] futex: make PI support optional Nicolas Pitre
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

The stop class is invoked only through stop_machine(), which makes it
dead code on UP builds.
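
For illustration (not part of the patch), the visible effect is on the class
iteration in kernel/sched/sched.h. The stop -> dl -> rt -> fair -> idle
linkage assumed below is the usual one; only the starting point changes:

   #ifdef CONFIG_SMP
   #define sched_class_highest (&stop_sched_class)  /* stop_task.o is built */
   #else
   #define sched_class_highest (&dl_sched_class)    /* no stop class on UP */
   #endif
   #define for_each_class(class) \
      for (class = sched_class_highest; class; class = class->next)

Every for_each_class() user, such as the pick-next-task loop, now simply
starts one class lower on UP and never references stop_sched_class.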

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 kernel/sched/Makefile |  4 ++--
 kernel/sched/core.c   | 60 +++++++++++++++++++++++++--------------------------
 kernel/sched/sched.h  |  4 ++++
 3 files changed, 36 insertions(+), 32 deletions(-)

diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 89ab675866..5e4c2e7a63 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -16,9 +16,9 @@ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
 endif
 
 obj-y += core.o loadavg.o clock.o cputime.o
-obj-y += idle_task.o fair.o rt.o deadline.o stop_task.o
+obj-y += idle_task.o fair.o rt.o deadline.o
 obj-y += wait.o swait.o completion.o idle.o
-obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o
+obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o
 obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
 obj-$(CONFIG_SCHEDSTATS) += stats.o
 obj-$(CONFIG_SCHED_DEBUG) += debug.o
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index de274b1bd2..94fa712791 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -788,36 +788,6 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
 	dequeue_task(rq, p, flags);
 }
 
-void sched_set_stop_task(int cpu, struct task_struct *stop)
-{
-	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };
-	struct task_struct *old_stop = cpu_rq(cpu)->stop;
-
-	if (stop) {
-		/*
-		 * Make it appear like a SCHED_FIFO task, its something
-		 * userspace knows about and won't get confused about.
-		 *
-		 * Also, it will make PI more or less work without too
-		 * much confusion -- but then, stop work should not
-		 * rely on PI working anyway.
-		 */
-		sched_setscheduler_nocheck(stop, SCHED_FIFO, &param);
-
-		stop->sched_class = &stop_sched_class;
-	}
-
-	cpu_rq(cpu)->stop = stop;
-
-	if (old_stop) {
-		/*
-		 * Reset it back to a normal scheduling class so that
-		 * it can die in pieces.
-		 */
-		old_stop->sched_class = &rt_sched_class;
-	}
-}
-
 /*
  * __normal_prio - return the priority that is based on the static prio
  */
@@ -1588,6 +1558,36 @@ static void update_avg(u64 *avg, u64 sample)
 	*avg += diff >> 3;
 }
 
+void sched_set_stop_task(int cpu, struct task_struct *stop)
+{
+	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };
+	struct task_struct *old_stop = cpu_rq(cpu)->stop;
+
+	if (stop) {
+		/*
+		 * Make it appear like a SCHED_FIFO task, its something
+		 * userspace knows about and won't get confused about.
+		 *
+		 * Also, it will make PI more or less work without too
+		 * much confusion -- but then, stop work should not
+		 * rely on PI working anyway.
+		 */
+		sched_setscheduler_nocheck(stop, SCHED_FIFO, &param);
+
+		stop->sched_class = &stop_sched_class;
+	}
+
+	cpu_rq(cpu)->stop = stop;
+
+	if (old_stop) {
+		/*
+		 * Reset it back to a normal scheduling class so that
+		 * it can die in pieces.
+		 */
+		old_stop->sched_class = &rt_sched_class;
+	}
+}
+
 #else
 
 static inline int __set_cpus_allowed_ptr(struct task_struct *p,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 6dda2aab73..053f60afb7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1422,7 +1422,11 @@ static inline void set_curr_task(struct rq *rq, struct task_struct *curr)
 	curr->sched_class->set_curr_task(rq);
 }
 
+#ifdef CONFIG_SMP
 #define sched_class_highest (&stop_sched_class)
+#else
+#define sched_class_highest (&dl_sched_class)
+#endif
 #define for_each_class(class) \
    for (class = sched_class_highest; class; class = class->next)
 
-- 
2.9.4

* [PATCH v2 3/8] futex: make PI support optional
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 1/8] cpuset/sched: cpuset makes sense for SMP only Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 2/8] sched: omit stop_sched_class when !SMP Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 4/8] sched/deadline: move dl related code out of sched/core.c Nicolas Pitre
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Split out the priority inheritance support into a file of its own
to make futex.c easier to understand and, hopefully, to maintain.
This also makes it possible to compile out the PI support when RT
task support is not available.
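
For illustration, this is a condensed view of the pattern used in the futex.c
hunk below: when CONFIG_FUTEX_PI is not set, the PI entry points are replaced
by stub macros, and callers gate the PI-only paths with IS_ENABLED() so the
compiler can discard them while still type-checking the surrounding code:

   #ifdef CONFIG_FUTEX_PI
   #include "futex_pi.c"				/* the real implementations */
   #else
   #define futex_lock_pi(...)			-ENOSYS
   #define futex_unlock_pi(...)			-ENOSYS
   #define futex_wait_requeue_pi(...)		-ENOSYS
   #define futex_proxy_trylock_atomic(...)	-ENOSYS
   #endif

   /* e.g. in futex_requeue(): */
   if (!IS_ENABLED(CONFIG_FUTEX_PI) && requeue_pi)
           return -ENOSYS;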

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 include/linux/futex.h |    7 +-
 init/Kconfig          |    7 +-
 kernel/futex.c        | 2829 ++++++++++++-------------------------------------
 kernel/futex_pi.c     | 1563 +++++++++++++++++++++++++++
 4 files changed, 2233 insertions(+), 2173 deletions(-)
 create mode 100644 kernel/futex_pi.c

diff --git a/include/linux/futex.h b/include/linux/futex.h
index 7c5b694864..f36bfd26f9 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -54,7 +54,6 @@ union futex_key {
 
 #ifdef CONFIG_FUTEX
 extern void exit_robust_list(struct task_struct *curr);
-extern void exit_pi_state_list(struct task_struct *curr);
 #ifdef CONFIG_HAVE_FUTEX_CMPXCHG
 #define futex_cmpxchg_enabled 1
 #else
@@ -64,8 +63,14 @@ extern int futex_cmpxchg_enabled;
 static inline void exit_robust_list(struct task_struct *curr)
 {
 }
+#endif
+
+#ifdef CONFIG_FUTEX_PI
+extern void exit_pi_state_list(struct task_struct *curr);
+#else
 static inline void exit_pi_state_list(struct task_struct *curr)
 {
 }
 #endif
+
 #endif
diff --git a/init/Kconfig b/init/Kconfig
index b9aed60cac..ad91724f75 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1624,12 +1624,17 @@ config BASE_FULL
 config FUTEX
 	bool "Enable futex support" if EXPERT
 	default y
-	select RT_MUTEXES
+	imply RT_MUTEXES
 	help
 	  Disabling this option will cause the kernel to be built without
 	  support for "fast userspace mutexes".  The resulting kernel may not
 	  run glibc-based applications correctly.
 
+config FUTEX_PI
+	bool
+	depends on FUTEX && RT_MUTEXES
+	default y
+
 config HAVE_FUTEX_CMPXCHG
 	bool
 	depends on FUTEX
diff --git a/kernel/futex.c b/kernel/futex.c
index 357348a6cf..c82ea0098f 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -12,17 +12,9 @@
  *  (C) Copyright 2006 Red Hat Inc, All Rights Reserved
  *  Thanks to Thomas Gleixner for suggestions, analysis and fixes.
  *
- *  PI-futex support started by Ingo Molnar and Thomas Gleixner
- *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
- *  Copyright (C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
- *
  *  PRIVATE futexes by Eric Dumazet
  *  Copyright (C) 2007 Eric Dumazet <dada1@cosmosbay.com>
  *
- *  Requeue-PI support by Darren Hart <dvhltc@us.ibm.com>
- *  Copyright (C) IBM Corporation, 2009
- *  Thanks to Thomas Gleixner for conceptual design and careful reviews.
- *
  *  Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly
  *  enough at me, Linus for the original (flawed) idea, Matthew
  *  Kirkwood for proof-of-concept implementation.
@@ -70,8 +62,6 @@
 
 #include <asm/futex.h>
 
-#include "locking/rtmutex_common.h"
-
 /*
  * READ this before attempting to hack on futexes!
  *
@@ -193,26 +183,7 @@ int __read_mostly futex_cmpxchg_enabled;
 #define FLAGS_CLOCKRT		0x02
 #define FLAGS_HAS_TIMEOUT	0x04
 
-/*
- * Priority Inheritance state:
- */
-struct futex_pi_state {
-	/*
-	 * list of 'owned' pi_state instances - these have to be
-	 * cleaned up in do_exit() if the task exits prematurely:
-	 */
-	struct list_head list;
-
-	/*
-	 * The PI object:
-	 */
-	struct rt_mutex pi_mutex;
-
-	struct task_struct *owner;
-	atomic_t refcount;
-
-	union futex_key key;
-};
+struct futex_pi_state;
 
 /**
  * struct futex_q - The hashed futex queue entry, one per waiting task
@@ -733,25 +704,6 @@ static int fault_in_user_writeable(u32 __user *uaddr)
 	return ret < 0 ? ret : 0;
 }
 
-/**
- * futex_top_waiter() - Return the highest priority waiter on a futex
- * @hb:		the hash bucket the futex_q's reside in
- * @key:	the futex key (to distinguish it from other futex futex_q's)
- *
- * Must be called with the hb lock held.
- */
-static struct futex_q *futex_top_waiter(struct futex_hash_bucket *hb,
-					union futex_key *key)
-{
-	struct futex_q *this;
-
-	plist_for_each_entry(this, &hb->chain, list) {
-		if (match_futex(&this->key, key))
-			return this;
-	}
-	return NULL;
-}
-
 static int cmpxchg_futex_value_locked(u32 *curval, u32 __user *uaddr,
 				      u32 uval, u32 newval)
 {
@@ -779,1114 +731,395 @@ static int get_futex_value_locked(u32 *dest, u32 __user *from)
 /*
  * PI code:
  */
-static int refill_pi_state_cache(void)
-{
-	struct futex_pi_state *pi_state;
-
-	if (likely(current->pi_state_cache))
-		return 0;
-
-	pi_state = kzalloc(sizeof(*pi_state), GFP_KERNEL);
-
-	if (!pi_state)
-		return -ENOMEM;
-
-	INIT_LIST_HEAD(&pi_state->list);
-	/* pi_mutex gets initialized later */
-	pi_state->owner = NULL;
-	atomic_set(&pi_state->refcount, 1);
-	pi_state->key = FUTEX_KEY_INIT;
-
-	current->pi_state_cache = pi_state;
+#ifdef CONFIG_FUTEX_PI
+#include "futex_pi.c"
+#else
+#define get_pi_state(...)
+#define put_pi_state(...)
+#define refill_pi_state_cache()		false
+#define lookup_pi_state(...)		-ENOSYS
+#define rt_mutex_start_proxy_lock(...)	-ENOSYS
+#define requeue_pi_wake_futex(...)
+#define futex_proxy_trylock_atomic(...)	-ENOSYS
+#define futex_lock_pi(...)		-ENOSYS
+#define futex_unlock_pi(...)		-ENOSYS
+#define futex_wait_requeue_pi(...)	-ENOSYS
+#endif
 
-	return 0;
-}
 
-static struct futex_pi_state *alloc_pi_state(void)
+/**
+ * __unqueue_futex() - Remove the futex_q from its futex_hash_bucket
+ * @q:	The futex_q to unqueue
+ *
+ * The q->lock_ptr must not be NULL and must be held by the caller.
+ */
+static void __unqueue_futex(struct futex_q *q)
 {
-	struct futex_pi_state *pi_state = current->pi_state_cache;
-
-	WARN_ON(!pi_state);
-	current->pi_state_cache = NULL;
+	struct futex_hash_bucket *hb;
 
-	return pi_state;
-}
+	if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr))
+	    || WARN_ON(plist_node_empty(&q->list)))
+		return;
 
-static void get_pi_state(struct futex_pi_state *pi_state)
-{
-	WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
+	hb = container_of(q->lock_ptr, struct futex_hash_bucket, lock);
+	plist_del(&q->list, &hb->chain);
+	hb_waiters_dec(hb);
 }
 
 /*
- * Drops a reference to the pi_state object and frees or caches it
- * when the last reference is gone.
- *
- * Must be called with the hb lock held.
+ * The hash bucket lock must be held when this is called.
+ * Afterwards, the futex_q must not be accessed. Callers
+ * must ensure to later call wake_up_q() for the actual
+ * wakeups to occur.
  */
-static void put_pi_state(struct futex_pi_state *pi_state)
+static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
 {
-	if (!pi_state)
-		return;
+	struct task_struct *p = q->task;
 
-	if (!atomic_dec_and_test(&pi_state->refcount))
+	if (WARN(q->pi_state || q->rt_waiter, "refusing to wake PI futex\n"))
 		return;
 
 	/*
-	 * If pi_state->owner is NULL, the owner is most probably dying
-	 * and has cleaned up the pi_state already
+	 * Queue the task for later wakeup for after we've released
+	 * the hb->lock. wake_q_add() grabs reference to p.
 	 */
-	if (pi_state->owner) {
-		raw_spin_lock_irq(&pi_state->owner->pi_lock);
-		list_del_init(&pi_state->list);
-		raw_spin_unlock_irq(&pi_state->owner->pi_lock);
-
-		rt_mutex_proxy_unlock(&pi_state->pi_mutex, pi_state->owner);
-	}
-
-	if (current->pi_state_cache)
-		kfree(pi_state);
-	else {
-		/*
-		 * pi_state->list is already empty.
-		 * clear pi_state->owner.
-		 * refcount is at 0 - put it back to 1.
-		 */
-		pi_state->owner = NULL;
-		atomic_set(&pi_state->refcount, 1);
-		current->pi_state_cache = pi_state;
-	}
+	wake_q_add(wake_q, p);
+	__unqueue_futex(q);
+	/*
+	 * The waiting task can free the futex_q as soon as q->lock_ptr = NULL
+	 * is written, without taking any locks. This is possible in the event
+	 * of a spurious wakeup, for example. A memory barrier is required here
+	 * to prevent the following store to lock_ptr from getting ahead of the
+	 * plist_del in __unqueue_futex().
+	 */
+	smp_store_release(&q->lock_ptr, NULL);
 }
 
 /*
- * Look up the task based on what TID userspace gave us.
- * We dont trust it.
+ * Express the locking dependencies for lockdep:
  */
-static struct task_struct *futex_find_get_task(pid_t pid)
+static inline void
+double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
 {
-	struct task_struct *p;
-
-	rcu_read_lock();
-	p = find_task_by_vpid(pid);
-	if (p)
-		get_task_struct(p);
-
-	rcu_read_unlock();
+	if (hb1 <= hb2) {
+		spin_lock(&hb1->lock);
+		if (hb1 < hb2)
+			spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
+	} else { /* hb1 > hb2 */
+		spin_lock(&hb2->lock);
+		spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
+	}
+}
 
-	return p;
+static inline void
+double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+{
+	spin_unlock(&hb1->lock);
+	if (hb1 != hb2)
+		spin_unlock(&hb2->lock);
 }
 
 /*
- * This task is holding PI mutexes at exit time => bad.
- * Kernel cleans up PI-state, but userspace is likely hosed.
- * (Robust-futex cleanup is separate and might save the day for userspace.)
+ * Wake up waiters matching bitset queued on this futex (uaddr).
  */
-void exit_pi_state_list(struct task_struct *curr)
+static int
+futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
 {
-	struct list_head *next, *head = &curr->pi_state_list;
-	struct futex_pi_state *pi_state;
 	struct futex_hash_bucket *hb;
+	struct futex_q *this, *next;
 	union futex_key key = FUTEX_KEY_INIT;
+	int ret;
+	DEFINE_WAKE_Q(wake_q);
 
-	if (!futex_cmpxchg_enabled)
-		return;
-	/*
-	 * We are a ZOMBIE and nobody can enqueue itself on
-	 * pi_state_list anymore, but we have to be careful
-	 * versus waiters unqueueing themselves:
-	 */
-	raw_spin_lock_irq(&curr->pi_lock);
-	while (!list_empty(head)) {
+	if (!bitset)
+		return -EINVAL;
 
-		next = head->next;
-		pi_state = list_entry(next, struct futex_pi_state, list);
-		key = pi_state->key;
-		hb = hash_futex(&key);
-		raw_spin_unlock_irq(&curr->pi_lock);
+	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ);
+	if (unlikely(ret != 0))
+		goto out;
 
-		spin_lock(&hb->lock);
+	hb = hash_futex(&key);
 
-		raw_spin_lock_irq(&curr->pi_lock);
-		/*
-		 * We dropped the pi-lock, so re-check whether this
-		 * task still owns the PI-state:
-		 */
-		if (head->next != next) {
-			spin_unlock(&hb->lock);
-			continue;
-		}
+	/* Make sure we really have tasks to wakeup */
+	if (!hb_waiters_pending(hb))
+		goto out_put_key;
 
-		WARN_ON(pi_state->owner != curr);
-		WARN_ON(list_empty(&pi_state->list));
-		list_del_init(&pi_state->list);
-		pi_state->owner = NULL;
-		raw_spin_unlock_irq(&curr->pi_lock);
+	spin_lock(&hb->lock);
 
-		get_pi_state(pi_state);
-		spin_unlock(&hb->lock);
+	plist_for_each_entry_safe(this, next, &hb->chain, list) {
+		if (match_futex (&this->key, &key)) {
+			if (this->pi_state || this->rt_waiter) {
+				ret = -EINVAL;
+				break;
+			}
 
-		rt_mutex_futex_unlock(&pi_state->pi_mutex);
-		put_pi_state(pi_state);
+			/* Check if one of the bits is set in both bitsets */
+			if (!(this->bitset & bitset))
+				continue;
 
-		raw_spin_lock_irq(&curr->pi_lock);
+			mark_wake_futex(&wake_q, this);
+			if (++ret >= nr_wake)
+				break;
+		}
 	}
-	raw_spin_unlock_irq(&curr->pi_lock);
-}
 
-/*
- * We need to check the following states:
- *
- *      Waiter | pi_state | pi->owner | uTID      | uODIED | ?
- *
- * [1]  NULL   | ---      | ---       | 0         | 0/1    | Valid
- * [2]  NULL   | ---      | ---       | >0        | 0/1    | Valid
- *
- * [3]  Found  | NULL     | --        | Any       | 0/1    | Invalid
- *
- * [4]  Found  | Found    | NULL      | 0         | 1      | Valid
- * [5]  Found  | Found    | NULL      | >0        | 1      | Invalid
- *
- * [6]  Found  | Found    | task      | 0         | 1      | Valid
- *
- * [7]  Found  | Found    | NULL      | Any       | 0      | Invalid
- *
- * [8]  Found  | Found    | task      | ==taskTID | 0/1    | Valid
- * [9]  Found  | Found    | task      | 0         | 0      | Invalid
- * [10] Found  | Found    | task      | !=taskTID | 0/1    | Invalid
- *
- * [1]	Indicates that the kernel can acquire the futex atomically. We
- *	came came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit.
- *
- * [2]	Valid, if TID does not belong to a kernel thread. If no matching
- *      thread is found then it indicates that the owner TID has died.
- *
- * [3]	Invalid. The waiter is queued on a non PI futex
- *
- * [4]	Valid state after exit_robust_list(), which sets the user space
- *	value to FUTEX_WAITERS | FUTEX_OWNER_DIED.
- *
- * [5]	The user space value got manipulated between exit_robust_list()
- *	and exit_pi_state_list()
- *
- * [6]	Valid state after exit_pi_state_list() which sets the new owner in
- *	the pi_state but cannot access the user space value.
- *
- * [7]	pi_state->owner can only be NULL when the OWNER_DIED bit is set.
- *
- * [8]	Owner and user space value match
- *
- * [9]	There is no transient state which sets the user space TID to 0
- *	except exit_robust_list(), but this is indicated by the
- *	FUTEX_OWNER_DIED bit. See [4]
- *
- * [10] There is no transient state which leaves owner and user space
- *	TID out of sync.
- *
- *
- * Serialization and lifetime rules:
- *
- * hb->lock:
- *
- *	hb -> futex_q, relation
- *	futex_q -> pi_state, relation
- *
- *	(cannot be raw because hb can contain arbitrary amount
- *	 of futex_q's)
- *
- * pi_mutex->wait_lock:
- *
- *	{uval, pi_state}
- *
- *	(and pi_mutex 'obviously')
- *
- * p->pi_lock:
- *
- *	p->pi_state_list -> pi_state->list, relation
- *
- * pi_state->refcount:
- *
- *	pi_state lifetime
- *
- *
- * Lock order:
- *
- *   hb->lock
- *     pi_mutex->wait_lock
- *       p->pi_lock
- *
- */
+	spin_unlock(&hb->lock);
+	wake_up_q(&wake_q);
+out_put_key:
+	put_futex_key(&key);
+out:
+	return ret;
+}
 
 /*
- * Validate that the existing waiter has a pi_state and sanity check
- * the pi_state against the user space value. If correct, attach to
- * it.
+ * Wake up all waiters hashed on the physical page that is mapped
+ * to this virtual address:
  */
-static int attach_to_pi_state(u32 __user *uaddr, u32 uval,
-			      struct futex_pi_state *pi_state,
-			      struct futex_pi_state **ps)
+static int
+futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
+	      int nr_wake, int nr_wake2, int op)
 {
-	pid_t pid = uval & FUTEX_TID_MASK;
-	u32 uval2;
-	int ret;
-
-	/*
-	 * Userspace might have messed up non-PI and PI futexes [3]
-	 */
-	if (unlikely(!pi_state))
-		return -EINVAL;
+	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
+	struct futex_hash_bucket *hb1, *hb2;
+	struct futex_q *this, *next;
+	int ret, op_ret;
+	DEFINE_WAKE_Q(wake_q);
 
-	/*
-	 * We get here with hb->lock held, and having found a
-	 * futex_top_waiter(). This means that futex_lock_pi() of said futex_q
-	 * has dropped the hb->lock in between queue_me() and unqueue_me_pi(),
-	 * which in turn means that futex_lock_pi() still has a reference on
-	 * our pi_state.
-	 *
-	 * The waiter holding a reference on @pi_state also protects against
-	 * the unlocked put_pi_state() in futex_unlock_pi(), futex_lock_pi()
-	 * and futex_wait_requeue_pi() as it cannot go to 0 and consequently
-	 * free pi_state before we can take a reference ourselves.
-	 */
-	WARN_ON(!atomic_read(&pi_state->refcount));
+retry:
+	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, VERIFY_READ);
+	if (unlikely(ret != 0))
+		goto out;
+	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
+	if (unlikely(ret != 0))
+		goto out_put_key1;
 
-	/*
-	 * Now that we have a pi_state, we can acquire wait_lock
-	 * and do the state validation.
-	 */
-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+	hb1 = hash_futex(&key1);
+	hb2 = hash_futex(&key2);
 
-	/*
-	 * Since {uval, pi_state} is serialized by wait_lock, and our current
-	 * uval was read without holding it, it can have changed. Verify it
-	 * still is what we expect it to be, otherwise retry the entire
-	 * operation.
-	 */
-	if (get_futex_value_locked(&uval2, uaddr))
-		goto out_efault;
+retry_private:
+	double_lock_hb(hb1, hb2);
+	op_ret = futex_atomic_op_inuser(op, uaddr2);
+	if (unlikely(op_ret < 0)) {
 
-	if (uval != uval2)
-		goto out_eagain;
+		double_unlock_hb(hb1, hb2);
 
-	/*
-	 * Handle the owner died case:
-	 */
-	if (uval & FUTEX_OWNER_DIED) {
+#ifndef CONFIG_MMU
 		/*
-		 * exit_pi_state_list sets owner to NULL and wakes the
-		 * topmost waiter. The task which acquires the
-		 * pi_state->rt_mutex will fixup owner.
+		 * we don't get EFAULT from MMU faults if we don't have an MMU,
+		 * but we might get them from range checking
 		 */
-		if (!pi_state->owner) {
-			/*
-			 * No pi state owner, but the user space TID
-			 * is not 0. Inconsistent state. [5]
-			 */
-			if (pid)
-				goto out_einval;
-			/*
-			 * Take a ref on the state and return success. [4]
-			 */
-			goto out_attach;
-		}
+		ret = op_ret;
+		goto out_put_keys;
+#endif
 
-		/*
-		 * If TID is 0, then either the dying owner has not
-		 * yet executed exit_pi_state_list() or some waiter
-		 * acquired the rtmutex in the pi state, but did not
-		 * yet fixup the TID in user space.
-		 *
-		 * Take a ref on the state and return success. [6]
-		 */
-		if (!pid)
-			goto out_attach;
-	} else {
-		/*
-		 * If the owner died bit is not set, then the pi_state
-		 * must have an owner. [7]
-		 */
-		if (!pi_state->owner)
-			goto out_einval;
-	}
+		if (unlikely(op_ret != -EFAULT)) {
+			ret = op_ret;
+			goto out_put_keys;
+		}
 
-	/*
-	 * Bail out if user space manipulated the futex value. If pi
-	 * state exists then the owner TID must be the same as the
-	 * user space TID. [9/10]
-	 */
-	if (pid != task_pid_vnr(pi_state->owner))
-		goto out_einval;
+		ret = fault_in_user_writeable(uaddr2);
+		if (ret)
+			goto out_put_keys;
 
-out_attach:
-	get_pi_state(pi_state);
-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-	*ps = pi_state;
-	return 0;
+		if (!(flags & FLAGS_SHARED))
+			goto retry_private;
 
-out_einval:
-	ret = -EINVAL;
-	goto out_error;
+		put_futex_key(&key2);
+		put_futex_key(&key1);
+		goto retry;
+	}
 
-out_eagain:
-	ret = -EAGAIN;
-	goto out_error;
+	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
+		if (match_futex (&this->key, &key1)) {
+			if (this->pi_state || this->rt_waiter) {
+				ret = -EINVAL;
+				goto out_unlock;
+			}
+			mark_wake_futex(&wake_q, this);
+			if (++ret >= nr_wake)
+				break;
+		}
+	}
 
-out_efault:
-	ret = -EFAULT;
-	goto out_error;
+	if (op_ret > 0) {
+		op_ret = 0;
+		plist_for_each_entry_safe(this, next, &hb2->chain, list) {
+			if (match_futex (&this->key, &key2)) {
+				if (this->pi_state || this->rt_waiter) {
+					ret = -EINVAL;
+					goto out_unlock;
+				}
+				mark_wake_futex(&wake_q, this);
+				if (++op_ret >= nr_wake2)
+					break;
+			}
+		}
+		ret += op_ret;
+	}
 
-out_error:
-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+out_unlock:
+	double_unlock_hb(hb1, hb2);
+	wake_up_q(&wake_q);
+out_put_keys:
+	put_futex_key(&key2);
+out_put_key1:
+	put_futex_key(&key1);
+out:
 	return ret;
 }
 
-/*
- * Lookup the task for the TID provided from user space and attach to
- * it after doing proper sanity checks.
+/**
+ * requeue_futex() - Requeue a futex_q from one hb to another
+ * @q:		the futex_q to requeue
+ * @hb1:	the source hash_bucket
+ * @hb2:	the target hash_bucket
+ * @key2:	the new key for the requeued futex_q
  */
-static int attach_to_pi_owner(u32 uval, union futex_key *key,
-			      struct futex_pi_state **ps)
+static inline
+void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1,
+		   struct futex_hash_bucket *hb2, union futex_key *key2)
 {
-	pid_t pid = uval & FUTEX_TID_MASK;
-	struct futex_pi_state *pi_state;
-	struct task_struct *p;
-
-	/*
-	 * We are the first waiter - try to look up the real owner and attach
-	 * the new pi_state to it, but bail out when TID = 0 [1]
-	 */
-	if (!pid)
-		return -ESRCH;
-	p = futex_find_get_task(pid);
-	if (!p)
-		return -ESRCH;
-
-	if (unlikely(p->flags & PF_KTHREAD)) {
-		put_task_struct(p);
-		return -EPERM;
-	}
 
 	/*
-	 * We need to look at the task state flags to figure out,
-	 * whether the task is exiting. To protect against the do_exit
-	 * change of the task flags, we do this protected by
-	 * p->pi_lock:
+	 * If key1 and key2 hash to the same bucket, no need to
+	 * requeue.
 	 */
-	raw_spin_lock_irq(&p->pi_lock);
-	if (unlikely(p->flags & PF_EXITING)) {
-		/*
-		 * The task is on the way out. When PF_EXITPIDONE is
-		 * set, we know that the task has finished the
-		 * cleanup:
-		 */
-		int ret = (p->flags & PF_EXITPIDONE) ? -ESRCH : -EAGAIN;
-
-		raw_spin_unlock_irq(&p->pi_lock);
-		put_task_struct(p);
-		return ret;
+	if (likely(&hb1->chain != &hb2->chain)) {
+		plist_del(&q->list, &hb1->chain);
+		hb_waiters_dec(hb1);
+		hb_waiters_inc(hb2);
+		plist_add(&q->list, &hb2->chain);
+		q->lock_ptr = &hb2->lock;
 	}
-
-	/*
-	 * No existing pi state. First waiter. [2]
-	 *
-	 * This creates pi_state, we have hb->lock held, this means nothing can
-	 * observe this state, wait_lock is irrelevant.
-	 */
-	pi_state = alloc_pi_state();
-
-	/*
-	 * Initialize the pi_mutex in locked state and make @p
-	 * the owner of it:
-	 */
-	rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p);
-
-	/* Store the key for possible exit cleanups: */
-	pi_state->key = *key;
-
-	WARN_ON(!list_empty(&pi_state->list));
-	list_add(&pi_state->list, &p->pi_state_list);
-	pi_state->owner = p;
-	raw_spin_unlock_irq(&p->pi_lock);
-
-	put_task_struct(p);
-
-	*ps = pi_state;
-
-	return 0;
-}
-
-static int lookup_pi_state(u32 __user *uaddr, u32 uval,
-			   struct futex_hash_bucket *hb,
-			   union futex_key *key, struct futex_pi_state **ps)
-{
-	struct futex_q *top_waiter = futex_top_waiter(hb, key);
-
-	/*
-	 * If there is a waiter on that futex, validate it and
-	 * attach to the pi_state when the validation succeeds.
-	 */
-	if (top_waiter)
-		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
-
-	/*
-	 * We are the first waiter - try to look up the owner based on
-	 * @uval and attach to it.
-	 */
-	return attach_to_pi_owner(uval, key, ps);
-}
-
-static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
-{
-	u32 uninitialized_var(curval);
-
-	if (unlikely(should_fail_futex(true)))
-		return -EFAULT;
-
-	if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)))
-		return -EFAULT;
-
-	/* If user space value changed, let the caller retry */
-	return curval != uval ? -EAGAIN : 0;
+	get_futex_key_refs(key2);
+	q->key = *key2;
 }
 
 /**
- * futex_lock_pi_atomic() - Atomic work required to acquire a pi aware futex
- * @uaddr:		the pi futex user address
- * @hb:			the pi futex hash bucket
- * @key:		the futex key associated with uaddr and hb
- * @ps:			the pi_state pointer where we store the result of the
- *			lookup
- * @task:		the task to perform the atomic lock work for.  This will
- *			be "current" except in the case of requeue pi.
- * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
+ * futex_requeue() - Requeue waiters from uaddr1 to uaddr2
+ * @uaddr1:	source futex user address
+ * @flags:	futex flags (FLAGS_SHARED, etc.)
+ * @uaddr2:	target futex user address
+ * @nr_wake:	number of waiters to wake (must be 1 for requeue_pi)
+ * @nr_requeue:	number of waiters to requeue (0-INT_MAX)
+ * @cmpval:	@uaddr1 expected value (or %NULL)
+ * @requeue_pi:	if we are attempting to requeue from a non-pi futex to a
+ *		pi futex (pi to pi requeue is not supported)
  *
- * Return:
- *  0 - ready to wait;
- *  1 - acquired the lock;
- * <0 - error
+ * Requeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire
+ * uaddr2 atomically on behalf of the top waiter.
  *
- * The hb->lock and futex_key refs shall be held by the caller.
+ * Return:
+ * >=0 - on success, the number of tasks requeued or woken;
+ *  <0 - on error
  */
-static int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
-				union futex_key *key,
-				struct futex_pi_state **ps,
-				struct task_struct *task, int set_waiters)
+static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
+			 u32 __user *uaddr2, int nr_wake, int nr_requeue,
+			 u32 *cmpval, int requeue_pi)
 {
-	u32 uval, newval, vpid = task_pid_vnr(task);
-	struct futex_q *top_waiter;
-	int ret;
-
-	/*
-	 * Read the user space value first so we can validate a few
-	 * things before proceeding further.
-	 */
-	if (get_futex_value_locked(&uval, uaddr))
-		return -EFAULT;
-
-	if (unlikely(should_fail_futex(true)))
-		return -EFAULT;
-
-	/*
-	 * Detect deadlocks.
-	 */
-	if ((unlikely((uval & FUTEX_TID_MASK) == vpid)))
-		return -EDEADLK;
-
-	if ((unlikely(should_fail_futex(true))))
-		return -EDEADLK;
+	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
+	int drop_count = 0, task_count = 0, ret;
+	struct futex_pi_state *pi_state = NULL;
+	struct futex_hash_bucket *hb1, *hb2;
+	struct futex_q *this, *next;
+	DEFINE_WAKE_Q(wake_q);
 
 	/*
-	 * Lookup existing state first. If it exists, try to attach to
-	 * its pi_state.
+	 * When PI is not supported, return -ENOSYS if requeue_pi is true;
+	 * otherwise this lets the compiler assume requeue_pi is false,
+	 * which should optimize away all the conditional code further down.
 	 */
-	top_waiter = futex_top_waiter(hb, key);
-	if (top_waiter)
-		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
+	if (!IS_ENABLED(CONFIG_FUTEX_PI) && requeue_pi)
+		return -ENOSYS;
 
-	/*
-	 * No waiter and user TID is 0. We are here because the
-	 * waiters or the owner died bit is set or called from
-	 * requeue_cmp_pi or for whatever reason something took the
-	 * syscall.
-	 */
-	if (!(uval & FUTEX_TID_MASK)) {
+	if (requeue_pi) {
 		/*
-		 * We take over the futex. No other waiters and the user space
-		 * TID is 0. We preserve the owner died bit.
+		 * Requeue PI only works on two distinct uaddrs. This
+		 * check is only valid for private futexes. See below.
 		 */
-		newval = uval & FUTEX_OWNER_DIED;
-		newval |= vpid;
-
-		/* The futex requeue_pi code can enforce the waiters bit */
-		if (set_waiters)
-			newval |= FUTEX_WAITERS;
-
-		ret = lock_pi_update_atomic(uaddr, uval, newval);
-		/* If the take over worked, return 1 */
-		return ret < 0 ? ret : 1;
-	}
-
-	/*
-	 * First waiter. Set the waiters bit before attaching ourself to
-	 * the owner. If owner tries to unlock, it will be forced into
-	 * the kernel and blocked on hb->lock.
-	 */
-	newval = uval | FUTEX_WAITERS;
-	ret = lock_pi_update_atomic(uaddr, uval, newval);
-	if (ret)
-		return ret;
-	/*
-	 * If the update of the user space value succeeded, we try to
-	 * attach to the owner. If that fails, no harm done, we only
-	 * set the FUTEX_WAITERS bit in the user space variable.
-	 */
-	return attach_to_pi_owner(uval, key, ps);
-}
-
-/**
- * __unqueue_futex() - Remove the futex_q from its futex_hash_bucket
- * @q:	The futex_q to unqueue
- *
- * The q->lock_ptr must not be NULL and must be held by the caller.
- */
-static void __unqueue_futex(struct futex_q *q)
-{
-	struct futex_hash_bucket *hb;
-
-	if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr))
-	    || WARN_ON(plist_node_empty(&q->list)))
-		return;
-
-	hb = container_of(q->lock_ptr, struct futex_hash_bucket, lock);
-	plist_del(&q->list, &hb->chain);
-	hb_waiters_dec(hb);
-}
-
-/*
- * The hash bucket lock must be held when this is called.
- * Afterwards, the futex_q must not be accessed. Callers
- * must ensure to later call wake_up_q() for the actual
- * wakeups to occur.
- */
-static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
-{
-	struct task_struct *p = q->task;
-
-	if (WARN(q->pi_state || q->rt_waiter, "refusing to wake PI futex\n"))
-		return;
-
-	/*
-	 * Queue the task for later wakeup for after we've released
-	 * the hb->lock. wake_q_add() grabs reference to p.
-	 */
-	wake_q_add(wake_q, p);
-	__unqueue_futex(q);
-	/*
-	 * The waiting task can free the futex_q as soon as q->lock_ptr = NULL
-	 * is written, without taking any locks. This is possible in the event
-	 * of a spurious wakeup, for example. A memory barrier is required here
-	 * to prevent the following store to lock_ptr from getting ahead of the
-	 * plist_del in __unqueue_futex().
-	 */
-	smp_store_release(&q->lock_ptr, NULL);
-}
-
-/*
- * Caller must hold a reference on @pi_state.
- */
-static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_state)
-{
-	u32 uninitialized_var(curval), newval;
-	struct task_struct *new_owner;
-	bool postunlock = false;
-	DEFINE_WAKE_Q(wake_q);
-	int ret = 0;
+		if (uaddr1 == uaddr2)
+			return -EINVAL;
 
-	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
-	if (WARN_ON_ONCE(!new_owner)) {
 		/*
-		 * As per the comment in futex_unlock_pi() this should not happen.
-		 *
-		 * When this happens, give up our locks and try again, giving
-		 * the futex_lock_pi() instance time to complete, either by
-		 * waiting on the rtmutex or removing itself from the futex
-		 * queue.
+		 * requeue_pi requires a pi_state, try to allocate it now
+		 * without any locks in case it fails.
 		 */
-		ret = -EAGAIN;
-		goto out_unlock;
-	}
-
-	/*
-	 * We pass it to the next owner. The WAITERS bit is always kept
-	 * enabled while there is PI state around. We cleanup the owner
-	 * died bit, because we are the owner.
-	 */
-	newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
-
-	if (unlikely(should_fail_futex(true)))
-		ret = -EFAULT;
-
-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) {
-		ret = -EFAULT;
-
-	} else if (curval != uval) {
+		if (refill_pi_state_cache())
+			return -ENOMEM;
 		/*
-		 * If a unconditional UNLOCK_PI operation (user space did not
-		 * try the TID->0 transition) raced with a waiter setting the
-		 * FUTEX_WAITERS flag between get_user() and locking the hash
-		 * bucket lock, retry the operation.
+		 * requeue_pi must wake as many tasks as it can, up to nr_wake
+		 * + nr_requeue, since it acquires the rt_mutex prior to
+		 * returning to userspace, so as to not leave the rt_mutex with
+		 * waiters and no owner.  However, second and third wake-ups
+		 * cannot be predicted as they involve race conditions with the
+		 * first wake and a fault while looking up the pi_state.  Both
+		 * pthread_cond_signal() and pthread_cond_broadcast() should
+		 * use nr_wake=1.
 		 */
-		if ((FUTEX_TID_MASK & curval) == uval)
-			ret = -EAGAIN;
-		else
-			ret = -EINVAL;
+		if (nr_wake != 1)
+			return -EINVAL;
 	}
 
-	if (ret)
-		goto out_unlock;
+retry:
+	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, VERIFY_READ);
+	if (unlikely(ret != 0))
+		goto out;
+	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2,
+			    requeue_pi ? VERIFY_WRITE : VERIFY_READ);
+	if (unlikely(ret != 0))
+		goto out_put_key1;
 
 	/*
-	 * This is a point of no return; once we modify the uval there is no
-	 * going back and subsequent operations must not fail.
+	 * The check above which compares uaddrs is not sufficient for
+	 * shared futexes. We need to compare the keys:
 	 */
+	if (requeue_pi && match_futex(&key1, &key2)) {
+		ret = -EINVAL;
+		goto out_put_keys;
+	}
 
-	raw_spin_lock(&pi_state->owner->pi_lock);
-	WARN_ON(list_empty(&pi_state->list));
-	list_del_init(&pi_state->list);
-	raw_spin_unlock(&pi_state->owner->pi_lock);
+	hb1 = hash_futex(&key1);
+	hb2 = hash_futex(&key2);
 
-	raw_spin_lock(&new_owner->pi_lock);
-	WARN_ON(!list_empty(&pi_state->list));
-	list_add(&pi_state->list, &new_owner->pi_state_list);
-	pi_state->owner = new_owner;
-	raw_spin_unlock(&new_owner->pi_lock);
+retry_private:
+	hb_waiters_inc(hb2);
+	double_lock_hb(hb1, hb2);
 
-	postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
+	if (likely(cmpval != NULL)) {
+		u32 curval;
 
-out_unlock:
-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+		ret = get_futex_value_locked(&curval, uaddr1);
 
-	if (postunlock)
-		rt_mutex_postunlock(&wake_q);
+		if (unlikely(ret)) {
+			double_unlock_hb(hb1, hb2);
+			hb_waiters_dec(hb2);
 
-	return ret;
-}
-
-/*
- * Express the locking dependencies for lockdep:
- */
-static inline void
-double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
-{
-	if (hb1 <= hb2) {
-		spin_lock(&hb1->lock);
-		if (hb1 < hb2)
-			spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
-	} else { /* hb1 > hb2 */
-		spin_lock(&hb2->lock);
-		spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
-	}
-}
-
-static inline void
-double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
-{
-	spin_unlock(&hb1->lock);
-	if (hb1 != hb2)
-		spin_unlock(&hb2->lock);
-}
-
-/*
- * Wake up waiters matching bitset queued on this futex (uaddr).
- */
-static int
-futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
-{
-	struct futex_hash_bucket *hb;
-	struct futex_q *this, *next;
-	union futex_key key = FUTEX_KEY_INIT;
-	int ret;
-	DEFINE_WAKE_Q(wake_q);
-
-	if (!bitset)
-		return -EINVAL;
-
-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ);
-	if (unlikely(ret != 0))
-		goto out;
-
-	hb = hash_futex(&key);
-
-	/* Make sure we really have tasks to wakeup */
-	if (!hb_waiters_pending(hb))
-		goto out_put_key;
-
-	spin_lock(&hb->lock);
-
-	plist_for_each_entry_safe(this, next, &hb->chain, list) {
-		if (match_futex (&this->key, &key)) {
-			if (this->pi_state || this->rt_waiter) {
-				ret = -EINVAL;
-				break;
-			}
+			ret = get_user(curval, uaddr1);
+			if (ret)
+				goto out_put_keys;
 
-			/* Check if one of the bits is set in both bitsets */
-			if (!(this->bitset & bitset))
-				continue;
+			if (!(flags & FLAGS_SHARED))
+				goto retry_private;
 
-			mark_wake_futex(&wake_q, this);
-			if (++ret >= nr_wake)
-				break;
+			put_futex_key(&key2);
+			put_futex_key(&key1);
+			goto retry;
+		}
+		if (curval != *cmpval) {
+			ret = -EAGAIN;
+			goto out_unlock;
 		}
 	}
 
-	spin_unlock(&hb->lock);
-	wake_up_q(&wake_q);
-out_put_key:
-	put_futex_key(&key);
-out:
-	return ret;
-}
-
-/*
- * Wake up all waiters hashed on the physical page that is mapped
- * to this virtual address:
- */
-static int
-futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
-	      int nr_wake, int nr_wake2, int op)
-{
-	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
-	struct futex_hash_bucket *hb1, *hb2;
-	struct futex_q *this, *next;
-	int ret, op_ret;
-	DEFINE_WAKE_Q(wake_q);
-
-retry:
-	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, VERIFY_READ);
-	if (unlikely(ret != 0))
-		goto out;
-	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
-	if (unlikely(ret != 0))
-		goto out_put_key1;
-
-	hb1 = hash_futex(&key1);
-	hb2 = hash_futex(&key2);
-
-retry_private:
-	double_lock_hb(hb1, hb2);
-	op_ret = futex_atomic_op_inuser(op, uaddr2);
-	if (unlikely(op_ret < 0)) {
-
-		double_unlock_hb(hb1, hb2);
-
-#ifndef CONFIG_MMU
+	if (requeue_pi && (task_count - nr_wake < nr_requeue)) {
 		/*
-		 * we don't get EFAULT from MMU faults if we don't have an MMU,
-		 * but we might get them from range checking
+		 * Attempt to acquire uaddr2 and wake the top waiter. If we
+		 * intend to requeue waiters, force setting the FUTEX_WAITERS
+		 * bit.  We force this here where we are able to easily handle
+		 * faults rather in the requeue loop below.
 		 */
-		ret = op_ret;
-		goto out_put_keys;
-#endif
-
-		if (unlikely(op_ret != -EFAULT)) {
-			ret = op_ret;
-			goto out_put_keys;
-		}
-
-		ret = fault_in_user_writeable(uaddr2);
-		if (ret)
-			goto out_put_keys;
-
-		if (!(flags & FLAGS_SHARED))
-			goto retry_private;
-
-		put_futex_key(&key2);
-		put_futex_key(&key1);
-		goto retry;
-	}
-
-	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
-		if (match_futex (&this->key, &key1)) {
-			if (this->pi_state || this->rt_waiter) {
-				ret = -EINVAL;
-				goto out_unlock;
-			}
-			mark_wake_futex(&wake_q, this);
-			if (++ret >= nr_wake)
-				break;
-		}
-	}
-
-	if (op_ret > 0) {
-		op_ret = 0;
-		plist_for_each_entry_safe(this, next, &hb2->chain, list) {
-			if (match_futex (&this->key, &key2)) {
-				if (this->pi_state || this->rt_waiter) {
-					ret = -EINVAL;
-					goto out_unlock;
-				}
-				mark_wake_futex(&wake_q, this);
-				if (++op_ret >= nr_wake2)
-					break;
-			}
-		}
-		ret += op_ret;
-	}
-
-out_unlock:
-	double_unlock_hb(hb1, hb2);
-	wake_up_q(&wake_q);
-out_put_keys:
-	put_futex_key(&key2);
-out_put_key1:
-	put_futex_key(&key1);
-out:
-	return ret;
-}
-
-/**
- * requeue_futex() - Requeue a futex_q from one hb to another
- * @q:		the futex_q to requeue
- * @hb1:	the source hash_bucket
- * @hb2:	the target hash_bucket
- * @key2:	the new key for the requeued futex_q
- */
-static inline
-void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1,
-		   struct futex_hash_bucket *hb2, union futex_key *key2)
-{
-
-	/*
-	 * If key1 and key2 hash to the same bucket, no need to
-	 * requeue.
-	 */
-	if (likely(&hb1->chain != &hb2->chain)) {
-		plist_del(&q->list, &hb1->chain);
-		hb_waiters_dec(hb1);
-		hb_waiters_inc(hb2);
-		plist_add(&q->list, &hb2->chain);
-		q->lock_ptr = &hb2->lock;
-	}
-	get_futex_key_refs(key2);
-	q->key = *key2;
-}
-
-/**
- * requeue_pi_wake_futex() - Wake a task that acquired the lock during requeue
- * @q:		the futex_q
- * @key:	the key of the requeue target futex
- * @hb:		the hash_bucket of the requeue target futex
- *
- * During futex_requeue, with requeue_pi=1, it is possible to acquire the
- * target futex if it is uncontended or via a lock steal.  Set the futex_q key
- * to the requeue target futex so the waiter can detect the wakeup on the right
- * futex, but remove it from the hb and NULL the rt_waiter so it can detect
- * atomic lock acquisition.  Set the q->lock_ptr to the requeue target hb->lock
- * to protect access to the pi_state to fixup the owner later.  Must be called
- * with both q->lock_ptr and hb->lock held.
- */
-static inline
-void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
-			   struct futex_hash_bucket *hb)
-{
-	get_futex_key_refs(key);
-	q->key = *key;
-
-	__unqueue_futex(q);
-
-	WARN_ON(!q->rt_waiter);
-	q->rt_waiter = NULL;
-
-	q->lock_ptr = &hb->lock;
-
-	wake_up_state(q->task, TASK_NORMAL);
-}
-
-/**
- * futex_proxy_trylock_atomic() - Attempt an atomic lock for the top waiter
- * @pifutex:		the user address of the to futex
- * @hb1:		the from futex hash bucket, must be locked by the caller
- * @hb2:		the to futex hash bucket, must be locked by the caller
- * @key1:		the from futex key
- * @key2:		the to futex key
- * @ps:			address to store the pi_state pointer
- * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
- *
- * Try and get the lock on behalf of the top waiter if we can do it atomically.
- * Wake the top waiter if we succeed.  If the caller specified set_waiters,
- * then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit.
- * hb1 and hb2 must be held by the caller.
- *
- * Return:
- *  0 - failed to acquire the lock atomically;
- * >0 - acquired the lock, return value is vpid of the top_waiter
- * <0 - error
- */
-static int futex_proxy_trylock_atomic(u32 __user *pifutex,
-				 struct futex_hash_bucket *hb1,
-				 struct futex_hash_bucket *hb2,
-				 union futex_key *key1, union futex_key *key2,
-				 struct futex_pi_state **ps, int set_waiters)
-{
-	struct futex_q *top_waiter = NULL;
-	u32 curval;
-	int ret, vpid;
-
-	if (get_futex_value_locked(&curval, pifutex))
-		return -EFAULT;
-
-	if (unlikely(should_fail_futex(true)))
-		return -EFAULT;
-
-	/*
-	 * Find the top_waiter and determine if there are additional waiters.
-	 * If the caller intends to requeue more than 1 waiter to pifutex,
-	 * force futex_lock_pi_atomic() to set the FUTEX_WAITERS bit now,
-	 * as we have means to handle the possible fault.  If not, don't set
-	 * the bit unecessarily as it will force the subsequent unlock to enter
-	 * the kernel.
-	 */
-	top_waiter = futex_top_waiter(hb1, key1);
-
-	/* There are no waiters, nothing for us to do. */
-	if (!top_waiter)
-		return 0;
-
-	/* Ensure we requeue to the expected futex. */
-	if (!match_futex(top_waiter->requeue_pi_key, key2))
-		return -EINVAL;
-
-	/*
-	 * Try to take the lock for top_waiter.  Set the FUTEX_WAITERS bit in
-	 * the contended case or if set_waiters is 1.  The pi_state is returned
-	 * in ps in contended cases.
-	 */
-	vpid = task_pid_vnr(top_waiter->task);
-	ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
-				   set_waiters);
-	if (ret == 1) {
-		requeue_pi_wake_futex(top_waiter, key2, hb2);
-		return vpid;
-	}
-	return ret;
-}
-
-/**
- * futex_requeue() - Requeue waiters from uaddr1 to uaddr2
- * @uaddr1:	source futex user address
- * @flags:	futex flags (FLAGS_SHARED, etc.)
- * @uaddr2:	target futex user address
- * @nr_wake:	number of waiters to wake (must be 1 for requeue_pi)
- * @nr_requeue:	number of waiters to requeue (0-INT_MAX)
- * @cmpval:	@uaddr1 expected value (or %NULL)
- * @requeue_pi:	if we are attempting to requeue from a non-pi futex to a
- *		pi futex (pi to pi requeue is not supported)
- *
- * Requeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire
- * uaddr2 atomically on behalf of the top waiter.
- *
- * Return:
- * >=0 - on success, the number of tasks requeued or woken;
- *  <0 - on error
- */
-static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
-			 u32 __user *uaddr2, int nr_wake, int nr_requeue,
-			 u32 *cmpval, int requeue_pi)
-{
-	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
-	int drop_count = 0, task_count = 0, ret;
-	struct futex_pi_state *pi_state = NULL;
-	struct futex_hash_bucket *hb1, *hb2;
-	struct futex_q *this, *next;
-	DEFINE_WAKE_Q(wake_q);
-
-	if (requeue_pi) {
-		/*
-		 * Requeue PI only works on two distinct uaddrs. This
-		 * check is only valid for private futexes. See below.
-		 */
-		if (uaddr1 == uaddr2)
-			return -EINVAL;
-
-		/*
-		 * requeue_pi requires a pi_state, try to allocate it now
-		 * without any locks in case it fails.
-		 */
-		if (refill_pi_state_cache())
-			return -ENOMEM;
-		/*
-		 * requeue_pi must wake as many tasks as it can, up to nr_wake
-		 * + nr_requeue, since it acquires the rt_mutex prior to
-		 * returning to userspace, so as to not leave the rt_mutex with
-		 * waiters and no owner.  However, second and third wake-ups
-		 * cannot be predicted as they involve race conditions with the
-		 * first wake and a fault while looking up the pi_state.  Both
-		 * pthread_cond_signal() and pthread_cond_broadcast() should
-		 * use nr_wake=1.
-		 */
-		if (nr_wake != 1)
-			return -EINVAL;
-	}
-
-retry:
-	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, VERIFY_READ);
-	if (unlikely(ret != 0))
-		goto out;
-	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2,
-			    requeue_pi ? VERIFY_WRITE : VERIFY_READ);
-	if (unlikely(ret != 0))
-		goto out_put_key1;
-
-	/*
-	 * The check above which compares uaddrs is not sufficient for
-	 * shared futexes. We need to compare the keys:
-	 */
-	if (requeue_pi && match_futex(&key1, &key2)) {
-		ret = -EINVAL;
-		goto out_put_keys;
-	}
-
-	hb1 = hash_futex(&key1);
-	hb2 = hash_futex(&key2);
-
-retry_private:
-	hb_waiters_inc(hb2);
-	double_lock_hb(hb1, hb2);
-
-	if (likely(cmpval != NULL)) {
-		u32 curval;
-
-		ret = get_futex_value_locked(&curval, uaddr1);
-
-		if (unlikely(ret)) {
-			double_unlock_hb(hb1, hb2);
-			hb_waiters_dec(hb2);
-
-			ret = get_user(curval, uaddr1);
-			if (ret)
-				goto out_put_keys;
-
-			if (!(flags & FLAGS_SHARED))
-				goto retry_private;
-
-			put_futex_key(&key2);
-			put_futex_key(&key1);
-			goto retry;
-		}
-		if (curval != *cmpval) {
-			ret = -EAGAIN;
-			goto out_unlock;
-		}
-	}
-
-	if (requeue_pi && (task_count - nr_wake < nr_requeue)) {
-		/*
-		 * Attempt to acquire uaddr2 and wake the top waiter. If we
-		 * intend to requeue waiters, force setting the FUTEX_WAITERS
-		 * bit.  We force this here where we are able to easily handle
-		 * faults rather in the requeue loop below.
-		 */
-		ret = futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1,
-						 &key2, &pi_state, nr_requeue);
+		ret = futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1,
+						 &key2, &pi_state, nr_requeue);
 
 		/*
 		 * At this point the top_waiter has either taken uaddr2 or is
@@ -1912,1085 +1145,415 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
 			 *
 			 * If that call succeeds then we have pi_state and an
 			 * initial refcount on it.
-			 */
-			ret = lookup_pi_state(uaddr2, ret, hb2, &key2, &pi_state);
-		}
-
-		switch (ret) {
-		case 0:
-			/* We hold a reference on the pi state. */
-			break;
-
-			/* If the above failed, then pi_state is NULL */
-		case -EFAULT:
-			double_unlock_hb(hb1, hb2);
-			hb_waiters_dec(hb2);
-			put_futex_key(&key2);
-			put_futex_key(&key1);
-			ret = fault_in_user_writeable(uaddr2);
-			if (!ret)
-				goto retry;
-			goto out;
-		case -EAGAIN:
-			/*
-			 * Two reasons for this:
-			 * - Owner is exiting and we just wait for the
-			 *   exit to complete.
-			 * - The user space value changed.
-			 */
-			double_unlock_hb(hb1, hb2);
-			hb_waiters_dec(hb2);
-			put_futex_key(&key2);
-			put_futex_key(&key1);
-			cond_resched();
-			goto retry;
-		default:
-			goto out_unlock;
-		}
-	}
-
-	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
-		if (task_count - nr_wake >= nr_requeue)
-			break;
-
-		if (!match_futex(&this->key, &key1))
-			continue;
-
-		/*
-		 * FUTEX_WAIT_REQEUE_PI and FUTEX_CMP_REQUEUE_PI should always
-		 * be paired with each other and no other futex ops.
-		 *
-		 * We should never be requeueing a futex_q with a pi_state,
-		 * which is awaiting a futex_unlock_pi().
-		 */
-		if ((requeue_pi && !this->rt_waiter) ||
-		    (!requeue_pi && this->rt_waiter) ||
-		    this->pi_state) {
-			ret = -EINVAL;
-			break;
-		}
-
-		/*
-		 * Wake nr_wake waiters.  For requeue_pi, if we acquired the
-		 * lock, we already woke the top_waiter.  If not, it will be
-		 * woken by futex_unlock_pi().
-		 */
-		if (++task_count <= nr_wake && !requeue_pi) {
-			mark_wake_futex(&wake_q, this);
-			continue;
-		}
-
-		/* Ensure we requeue to the expected futex for requeue_pi. */
-		if (requeue_pi && !match_futex(this->requeue_pi_key, &key2)) {
-			ret = -EINVAL;
-			break;
-		}
-
-		/*
-		 * Requeue nr_requeue waiters and possibly one more in the case
-		 * of requeue_pi if we couldn't acquire the lock atomically.
-		 */
-		if (requeue_pi) {
-			/*
-			 * Prepare the waiter to take the rt_mutex. Take a
-			 * refcount on the pi_state and store the pointer in
-			 * the futex_q object of the waiter.
-			 */
-			get_pi_state(pi_state);
-			this->pi_state = pi_state;
-			ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
-							this->rt_waiter,
-							this->task);
-			if (ret == 1) {
-				/*
-				 * We got the lock. We do neither drop the
-				 * refcount on pi_state nor clear
-				 * this->pi_state because the waiter needs the
-				 * pi_state for cleaning up the user space
-				 * value. It will drop the refcount after
-				 * doing so.
-				 */
-				requeue_pi_wake_futex(this, &key2, hb2);
-				drop_count++;
-				continue;
-			} else if (ret) {
-				/*
-				 * rt_mutex_start_proxy_lock() detected a
-				 * potential deadlock when we tried to queue
-				 * that waiter. Drop the pi_state reference
-				 * which we took above and remove the pointer
-				 * to the state from the waiters futex_q
-				 * object.
-				 */
-				this->pi_state = NULL;
-				put_pi_state(pi_state);
-				/*
-				 * We stop queueing more waiters and let user
-				 * space deal with the mess.
-				 */
-				break;
-			}
-		}
-		requeue_futex(this, hb1, hb2, &key2);
-		drop_count++;
-	}
-
-	/*
-	 * We took an extra initial reference to the pi_state either
-	 * in futex_proxy_trylock_atomic() or in lookup_pi_state(). We
-	 * need to drop it here again.
-	 */
-	put_pi_state(pi_state);
-
-out_unlock:
-	double_unlock_hb(hb1, hb2);
-	wake_up_q(&wake_q);
-	hb_waiters_dec(hb2);
-
-	/*
-	 * drop_futex_key_refs() must be called outside the spinlocks. During
-	 * the requeue we moved futex_q's from the hash bucket at key1 to the
-	 * one at key2 and updated their key pointer.  We no longer need to
-	 * hold the references to key1.
-	 */
-	while (--drop_count >= 0)
-		drop_futex_key_refs(&key1);
-
-out_put_keys:
-	put_futex_key(&key2);
-out_put_key1:
-	put_futex_key(&key1);
-out:
-	return ret ? ret : task_count;
-}
-
-/* The key must be already stored in q->key. */
-static inline struct futex_hash_bucket *queue_lock(struct futex_q *q)
-	__acquires(&hb->lock)
-{
-	struct futex_hash_bucket *hb;
-
-	hb = hash_futex(&q->key);
-
-	/*
-	 * Increment the counter before taking the lock so that
-	 * a potential waker won't miss a to-be-slept task that is
-	 * waiting for the spinlock. This is safe as all queue_lock()
-	 * users end up calling queue_me(). Similarly, for housekeeping,
-	 * decrement the counter at queue_unlock() when some error has
-	 * occurred and we don't end up adding the task to the list.
-	 */
-	hb_waiters_inc(hb);
-
-	q->lock_ptr = &hb->lock;
-
-	spin_lock(&hb->lock); /* implies smp_mb(); (A) */
-	return hb;
-}
-
-static inline void
-queue_unlock(struct futex_hash_bucket *hb)
-	__releases(&hb->lock)
-{
-	spin_unlock(&hb->lock);
-	hb_waiters_dec(hb);
-}
-
-static inline void __queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
-{
-	int prio;
-
-	/*
-	 * The priority used to register this element is
-	 * - either the real thread-priority for the real-time threads
-	 * (i.e. threads with a priority lower than MAX_RT_PRIO)
-	 * - or MAX_RT_PRIO for non-RT threads.
-	 * Thus, all RT-threads are woken first in priority order, and
-	 * the others are woken last, in FIFO order.
-	 */
-	prio = min(current->normal_prio, MAX_RT_PRIO);
-
-	plist_node_init(&q->list, prio);
-	plist_add(&q->list, &hb->chain);
-	q->task = current;
-}
-
-/**
- * queue_me() - Enqueue the futex_q on the futex_hash_bucket
- * @q:	The futex_q to enqueue
- * @hb:	The destination hash bucket
- *
- * The hb->lock must be held by the caller, and is released here. A call to
- * queue_me() is typically paired with exactly one call to unqueue_me().  The
- * exceptions involve the PI related operations, which may use unqueue_me_pi()
- * or nothing if the unqueue is done as part of the wake process and the unqueue
- * state is implicit in the state of woken task (see futex_wait_requeue_pi() for
- * an example).
- */
-static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
-	__releases(&hb->lock)
-{
-	__queue_me(q, hb);
-	spin_unlock(&hb->lock);
-}
-
-/**
- * unqueue_me() - Remove the futex_q from its futex_hash_bucket
- * @q:	The futex_q to unqueue
- *
- * The q->lock_ptr must not be held by the caller. A call to unqueue_me() must
- * be paired with exactly one earlier call to queue_me().
- *
- * Return:
- *   1 - if the futex_q was still queued (and we removed unqueued it);
- *   0 - if the futex_q was already removed by the waking thread
- */
-static int unqueue_me(struct futex_q *q)
-{
-	spinlock_t *lock_ptr;
-	int ret = 0;
-
-	/* In the common case we don't take the spinlock, which is nice. */
-retry:
-	/*
-	 * q->lock_ptr can change between this read and the following spin_lock.
-	 * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
-	 * optimizing lock_ptr out of the logic below.
-	 */
-	lock_ptr = READ_ONCE(q->lock_ptr);
-	if (lock_ptr != NULL) {
-		spin_lock(lock_ptr);
-		/*
-		 * q->lock_ptr can change between reading it and
-		 * spin_lock(), causing us to take the wrong lock.  This
-		 * corrects the race condition.
-		 *
-		 * Reasoning goes like this: if we have the wrong lock,
-		 * q->lock_ptr must have changed (maybe several times)
-		 * between reading it and the spin_lock().  It can
-		 * change again after the spin_lock() but only if it was
-		 * already changed before the spin_lock().  It cannot,
-		 * however, change back to the original value.  Therefore
-		 * we can detect whether we acquired the correct lock.
-		 */
-		if (unlikely(lock_ptr != q->lock_ptr)) {
-			spin_unlock(lock_ptr);
-			goto retry;
-		}
-		__unqueue_futex(q);
-
-		BUG_ON(q->pi_state);
-
-		spin_unlock(lock_ptr);
-		ret = 1;
-	}
-
-	drop_futex_key_refs(&q->key);
-	return ret;
-}
-
-/*
- * PI futexes can not be requeued and must remove themself from the
- * hash bucket. The hash bucket lock (i.e. lock_ptr) is held on entry
- * and dropped here.
- */
-static void unqueue_me_pi(struct futex_q *q)
-	__releases(q->lock_ptr)
-{
-	__unqueue_futex(q);
-
-	BUG_ON(!q->pi_state);
-	put_pi_state(q->pi_state);
-	q->pi_state = NULL;
-
-	spin_unlock(q->lock_ptr);
-}
-
-/*
- * Fixup the pi_state owner with the new owner.
- *
- * Must be called with hash bucket lock held and mm->sem held for non
- * private futexes.
- */
-static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
-				struct task_struct *newowner)
-{
-	u32 newtid = task_pid_vnr(newowner) | FUTEX_WAITERS;
-	struct futex_pi_state *pi_state = q->pi_state;
-	u32 uval, uninitialized_var(curval), newval;
-	struct task_struct *oldowner;
-	int ret;
-
-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
-
-	oldowner = pi_state->owner;
-	/* Owner died? */
-	if (!pi_state->owner)
-		newtid |= FUTEX_OWNER_DIED;
-
-	/*
-	 * We are here either because we stole the rtmutex from the
-	 * previous highest priority waiter or we are the highest priority
-	 * waiter but have failed to get the rtmutex the first time.
-	 *
-	 * We have to replace the newowner TID in the user space variable.
-	 * This must be atomic as we have to preserve the owner died bit here.
-	 *
-	 * Note: We write the user space value _before_ changing the pi_state
-	 * because we can fault here. Imagine swapped out pages or a fork
-	 * that marked all the anonymous memory readonly for cow.
-	 *
-	 * Modifying pi_state _before_ the user space value would leave the
-	 * pi_state in an inconsistent state when we fault here, because we
-	 * need to drop the locks to handle the fault. This might be observed
-	 * in the PID check in lookup_pi_state.
-	 */
-retry:
-	if (get_futex_value_locked(&uval, uaddr))
-		goto handle_fault;
-
-	for (;;) {
-		newval = (uval & FUTEX_OWNER_DIED) | newtid;
-
-		if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
-			goto handle_fault;
-		if (curval == uval)
-			break;
-		uval = curval;
-	}
-
-	/*
-	 * We fixed up user space. Now we need to fix the pi_state
-	 * itself.
-	 */
-	if (pi_state->owner != NULL) {
-		raw_spin_lock(&pi_state->owner->pi_lock);
-		WARN_ON(list_empty(&pi_state->list));
-		list_del_init(&pi_state->list);
-		raw_spin_unlock(&pi_state->owner->pi_lock);
-	}
-
-	pi_state->owner = newowner;
-
-	raw_spin_lock(&newowner->pi_lock);
-	WARN_ON(!list_empty(&pi_state->list));
-	list_add(&pi_state->list, &newowner->pi_state_list);
-	raw_spin_unlock(&newowner->pi_lock);
-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-
-	return 0;
-
-	/*
-	 * To handle the page fault we need to drop the locks here. That gives
-	 * the other task (either the highest priority waiter itself or the
-	 * task which stole the rtmutex) the chance to try the fixup of the
-	 * pi_state. So once we are back from handling the fault we need to
-	 * check the pi_state after reacquiring the locks and before trying to
-	 * do another fixup. When the fixup has been done already we simply
-	 * return.
-	 *
-	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
-	 * drop hb->lock since the caller owns the hb -> futex_q relation.
-	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
-	 */
-handle_fault:
-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-	spin_unlock(q->lock_ptr);
-
-	ret = fault_in_user_writeable(uaddr);
-
-	spin_lock(q->lock_ptr);
-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
-
-	/*
-	 * Check if someone else fixed it for us:
-	 */
-	if (pi_state->owner != oldowner) {
-		ret = 0;
-		goto out_unlock;
-	}
-
-	if (ret)
-		goto out_unlock;
-
-	goto retry;
-
-out_unlock:
-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-	return ret;
-}
-
-static long futex_wait_restart(struct restart_block *restart);
-
-/**
- * fixup_owner() - Post lock pi_state and corner case management
- * @uaddr:	user address of the futex
- * @q:		futex_q (contains pi_state and access to the rt_mutex)
- * @locked:	if the attempt to take the rt_mutex succeeded (1) or not (0)
- *
- * After attempting to lock an rt_mutex, this function is called to cleanup
- * the pi_state owner as well as handle race conditions that may allow us to
- * acquire the lock. Must be called with the hb lock held.
- *
- * Return:
- *  1 - success, lock taken;
- *  0 - success, lock not taken;
- * <0 - on error (-EFAULT)
- */
-static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
-{
-	int ret = 0;
-
-	if (locked) {
-		/*
-		 * Got the lock. We might not be the anticipated owner if we
-		 * did a lock-steal - fix up the PI-state in that case:
-		 *
-		 * We can safely read pi_state->owner without holding wait_lock
-		 * because we now own the rt_mutex, only the owner will attempt
-		 * to change it.
-		 */
-		if (q->pi_state->owner != current)
-			ret = fixup_pi_state_owner(uaddr, q, current);
-		goto out;
-	}
-
-	/*
-	 * Paranoia check. If we did not take the lock, then we should not be
-	 * the owner of the rt_mutex.
-	 */
-	if (rt_mutex_owner(&q->pi_state->pi_mutex) == current) {
-		printk(KERN_ERR "fixup_owner: ret = %d pi-mutex: %p "
-				"pi-state %p\n", ret,
-				q->pi_state->pi_mutex.owner,
-				q->pi_state->owner);
-	}
-
-out:
-	return ret ? ret : locked;
-}
-
-/**
- * futex_wait_queue_me() - queue_me() and wait for wakeup, timeout, or signal
- * @hb:		the futex hash bucket, must be locked by the caller
- * @q:		the futex_q to queue up on
- * @timeout:	the prepared hrtimer_sleeper, or null for no timeout
- */
-static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
-				struct hrtimer_sleeper *timeout)
-{
-	/*
-	 * The task state is guaranteed to be set before another task can
-	 * wake it. set_current_state() is implemented using smp_store_mb() and
-	 * queue_me() calls spin_unlock() upon completion, both serializing
-	 * access to the hash list and forcing another memory barrier.
-	 */
-	set_current_state(TASK_INTERRUPTIBLE);
-	queue_me(q, hb);
-
-	/* Arm the timer */
-	if (timeout)
-		hrtimer_start_expires(&timeout->timer, HRTIMER_MODE_ABS);
-
-	/*
-	 * If we have been removed from the hash list, then another task
-	 * has tried to wake us, and we can skip the call to schedule().
-	 */
-	if (likely(!plist_node_empty(&q->list))) {
-		/*
-		 * If the timer has already expired, current will already be
-		 * flagged for rescheduling. Only call schedule if there
-		 * is no timeout, or if it has yet to expire.
-		 */
-		if (!timeout || timeout->task)
-			freezable_schedule();
-	}
-	__set_current_state(TASK_RUNNING);
-}
-
-/**
- * futex_wait_setup() - Prepare to wait on a futex
- * @uaddr:	the futex userspace address
- * @val:	the expected value
- * @flags:	futex flags (FLAGS_SHARED, etc.)
- * @q:		the associated futex_q
- * @hb:		storage for hash_bucket pointer to be returned to caller
- *
- * Setup the futex_q and locate the hash_bucket.  Get the futex value and
- * compare it with the expected value.  Handle atomic faults internally.
- * Return with the hb lock held and a q.key reference on success, and unlocked
- * with no q.key reference on failure.
- *
- * Return:
- *  0 - uaddr contains val and hb has been locked;
- * <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
- */
-static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
-			   struct futex_q *q, struct futex_hash_bucket **hb)
-{
-	u32 uval;
-	int ret;
-
-	/*
-	 * Access the page AFTER the hash-bucket is locked.
-	 * Order is important:
-	 *
-	 *   Userspace waiter: val = var; if (cond(val)) futex_wait(&var, val);
-	 *   Userspace waker:  if (cond(var)) { var = new; futex_wake(&var); }
-	 *
-	 * The basic logical guarantee of a futex is that it blocks ONLY
-	 * if cond(var) is known to be true at the time of blocking, for
-	 * any cond.  If we locked the hash-bucket after testing *uaddr, that
-	 * would open a race condition where we could block indefinitely with
-	 * cond(var) false, which would violate the guarantee.
-	 *
-	 * On the other hand, we insert q and release the hash-bucket only
-	 * after testing *uaddr.  This guarantees that futex_wait() will NOT
-	 * absorb a wakeup if *uaddr does not match the desired values
-	 * while the syscall executes.
-	 */
-retry:
-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q->key, VERIFY_READ);
-	if (unlikely(ret != 0))
-		return ret;
-
-retry_private:
-	*hb = queue_lock(q);
-
-	ret = get_futex_value_locked(&uval, uaddr);
-
-	if (ret) {
-		queue_unlock(*hb);
-
-		ret = get_user(uval, uaddr);
-		if (ret)
-			goto out;
-
-		if (!(flags & FLAGS_SHARED))
-			goto retry_private;
-
-		put_futex_key(&q->key);
-		goto retry;
-	}
-
-	if (uval != val) {
-		queue_unlock(*hb);
-		ret = -EWOULDBLOCK;
-	}
-
-out:
-	if (ret)
-		put_futex_key(&q->key);
-	return ret;
-}
-
-static int futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
-		      ktime_t *abs_time, u32 bitset)
-{
-	struct hrtimer_sleeper timeout, *to = NULL;
-	struct restart_block *restart;
-	struct futex_hash_bucket *hb;
-	struct futex_q q = futex_q_init;
-	int ret;
-
-	if (!bitset)
-		return -EINVAL;
-	q.bitset = bitset;
-
-	if (abs_time) {
-		to = &timeout;
-
-		hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
-				      CLOCK_REALTIME : CLOCK_MONOTONIC,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
-		hrtimer_set_expires_range_ns(&to->timer, *abs_time,
-					     current->timer_slack_ns);
-	}
-
-retry:
-	/*
-	 * Prepare to wait on uaddr. On success, holds hb lock and increments
-	 * q.key refs.
-	 */
-	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
-	if (ret)
-		goto out;
-
-	/* queue_me and wait for wakeup, timeout, or a signal. */
-	futex_wait_queue_me(hb, &q, to);
-
-	/* If we were woken (and unqueued), we succeeded, whatever. */
-	ret = 0;
-	/* unqueue_me() drops q.key ref */
-	if (!unqueue_me(&q))
-		goto out;
-	ret = -ETIMEDOUT;
-	if (to && !to->task)
-		goto out;
-
-	/*
-	 * We expect signal_pending(current), but we might be the
-	 * victim of a spurious wakeup as well.
-	 */
-	if (!signal_pending(current))
-		goto retry;
-
-	ret = -ERESTARTSYS;
-	if (!abs_time)
-		goto out;
-
-	restart = &current->restart_block;
-	restart->fn = futex_wait_restart;
-	restart->futex.uaddr = uaddr;
-	restart->futex.val = val;
-	restart->futex.time = *abs_time;
-	restart->futex.bitset = bitset;
-	restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;
-
-	ret = -ERESTART_RESTARTBLOCK;
-
-out:
-	if (to) {
-		hrtimer_cancel(&to->timer);
-		destroy_hrtimer_on_stack(&to->timer);
-	}
-	return ret;
-}
-
-
-static long futex_wait_restart(struct restart_block *restart)
-{
-	u32 __user *uaddr = restart->futex.uaddr;
-	ktime_t t, *tp = NULL;
-
-	if (restart->futex.flags & FLAGS_HAS_TIMEOUT) {
-		t = restart->futex.time;
-		tp = &t;
-	}
-	restart->fn = do_no_restart_syscall;
-
-	return (long)futex_wait(uaddr, restart->futex.flags,
-				restart->futex.val, tp, restart->futex.bitset);
-}
-
-
-/*
- * Userspace tried a 0 -> TID atomic transition of the futex value
- * and failed. The kernel side here does the whole locking operation:
- * if there are waiters then it will block as a consequence of relying
- * on rt-mutexes, it does PI, etc. (Due to races the kernel might see
- * a 0 value of the futex too.).
- *
- * Also serves as futex trylock_pi()'ing, and due semantics.
- */
-static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
-			 ktime_t *time, int trylock)
-{
-	struct hrtimer_sleeper timeout, *to = NULL;
-	struct futex_pi_state *pi_state = NULL;
-	struct rt_mutex_waiter rt_waiter;
-	struct futex_hash_bucket *hb;
-	struct futex_q q = futex_q_init;
-	int res, ret;
-
-	if (refill_pi_state_cache())
-		return -ENOMEM;
-
-	if (time) {
-		to = &timeout;
-		hrtimer_init_on_stack(&to->timer, CLOCK_REALTIME,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
-		hrtimer_set_expires(&to->timer, *time);
-	}
-
-retry:
-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q.key, VERIFY_WRITE);
-	if (unlikely(ret != 0))
-		goto out;
-
-retry_private:
-	hb = queue_lock(&q);
+			 */
+			ret = lookup_pi_state(uaddr2, ret, hb2, &key2, &pi_state);
+		}
 
-	ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current, 0);
-	if (unlikely(ret)) {
-		/*
-		 * Atomic work succeeded and we got the lock,
-		 * or failed. Either way, we do _not_ block.
-		 */
 		switch (ret) {
-		case 1:
-			/* We got the lock. */
-			ret = 0;
-			goto out_unlock_put_key;
+		case 0:
+			/* We hold a reference on the pi state. */
+			break;
+
+			/* If the above failed, then pi_state is NULL */
 		case -EFAULT:
-			goto uaddr_faulted;
+			double_unlock_hb(hb1, hb2);
+			hb_waiters_dec(hb2);
+			put_futex_key(&key2);
+			put_futex_key(&key1);
+			ret = fault_in_user_writeable(uaddr2);
+			if (!ret)
+				goto retry;
+			goto out;
 		case -EAGAIN:
 			/*
 			 * Two reasons for this:
-			 * - Task is exiting and we just wait for the
+			 * - Owner is exiting and we just wait for the
 			 *   exit to complete.
 			 * - The user space value changed.
 			 */
-			queue_unlock(hb);
-			put_futex_key(&q.key);
+			double_unlock_hb(hb1, hb2);
+			hb_waiters_dec(hb2);
+			put_futex_key(&key2);
+			put_futex_key(&key1);
 			cond_resched();
 			goto retry;
 		default:
-			goto out_unlock_put_key;
+			goto out_unlock;
 		}
 	}
 
-	WARN_ON(!q.pi_state);
-
-	/*
-	 * Only actually queue now that the atomic ops are done:
-	 */
-	__queue_me(&q, hb);
+	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
+		if (task_count - nr_wake >= nr_requeue)
+			break;
 
-	if (trylock) {
-		ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
-		/* Fixup the trylock return value: */
-		ret = ret ? 0 : -EWOULDBLOCK;
-		goto no_block;
-	}
+		if (!match_futex(&this->key, &key1))
+			continue;
 
-	rt_mutex_init_waiter(&rt_waiter);
+		/*
+		 * FUTEX_WAIT_REQEUE_PI and FUTEX_CMP_REQUEUE_PI should always
+		 * be paired with each other and no other futex ops.
+		 *
+		 * We should never be requeueing a futex_q with a pi_state,
+		 * which is awaiting a futex_unlock_pi().
+		 */
+		if ((requeue_pi && !this->rt_waiter) ||
+		    (!requeue_pi && this->rt_waiter) ||
+		    this->pi_state) {
+			ret = -EINVAL;
+			break;
+		}
 
-	/*
-	 * On PREEMPT_RT_FULL, when hb->lock becomes an rt_mutex, we must not
-	 * hold it while doing rt_mutex_start_proxy(), because then it will
-	 * include hb->lock in the blocking chain, even through we'll not in
-	 * fact hold it while blocking. This will lead it to report -EDEADLK
-	 * and BUG when futex_unlock_pi() interleaves with this.
-	 *
-	 * Therefore acquire wait_lock while holding hb->lock, but drop the
-	 * latter before calling rt_mutex_start_proxy_lock(). This still fully
-	 * serializes against futex_unlock_pi() as that does the exact same
-	 * lock handoff sequence.
-	 */
-	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
-	spin_unlock(q.lock_ptr);
-	ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current);
-	raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);
+		/*
+		 * Wake nr_wake waiters.  For requeue_pi, if we acquired the
+		 * lock, we already woke the top_waiter.  If not, it will be
+		 * woken by futex_unlock_pi().
+		 */
+		if (++task_count <= nr_wake && !requeue_pi) {
+			mark_wake_futex(&wake_q, this);
+			continue;
+		}
 
-	if (ret) {
-		if (ret == 1)
-			ret = 0;
+		/* Ensure we requeue to the expected futex for requeue_pi. */
+		if (requeue_pi && !match_futex(this->requeue_pi_key, &key2)) {
+			ret = -EINVAL;
+			break;
+		}
 
-		spin_lock(q.lock_ptr);
-		goto no_block;
+		/*
+		 * Requeue nr_requeue waiters and possibly one more in the case
+		 * of requeue_pi if we couldn't acquire the lock atomically.
+		 */
+		if (requeue_pi) {
+			/*
+			 * Prepare the waiter to take the rt_mutex. Take a
+			 * refcount on the pi_state and store the pointer in
+			 * the futex_q object of the waiter.
+			 */
+			get_pi_state(pi_state);
+			this->pi_state = pi_state;
+			ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
+							this->rt_waiter,
+							this->task);
+			if (ret == 1) {
+				/*
+				 * We got the lock. We do neither drop the
+				 * refcount on pi_state nor clear
+				 * this->pi_state because the waiter needs the
+				 * pi_state for cleaning up the user space
+				 * value. It will drop the refcount after
+				 * doing so.
+				 */
+				requeue_pi_wake_futex(this, &key2, hb2);
+				drop_count++;
+				continue;
+			} else if (ret) {
+				/*
+				 * rt_mutex_start_proxy_lock() detected a
+				 * potential deadlock when we tried to queue
+				 * that waiter. Drop the pi_state reference
+				 * which we took above and remove the pointer
+				 * to the state from the waiters futex_q
+				 * object.
+				 */
+				this->pi_state = NULL;
+				put_pi_state(pi_state);
+				/*
+				 * We stop queueing more waiters and let user
+				 * space deal with the mess.
+				 */
+				break;
+			}
+		}
+		requeue_futex(this, hb1, hb2, &key2);
+		drop_count++;
 	}
 
-
-	if (unlikely(to))
-		hrtimer_start_expires(&to->timer, HRTIMER_MODE_ABS);
-
-	ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
-
-	spin_lock(q.lock_ptr);
 	/*
-	 * If we failed to acquire the lock (signal/timeout), we must
-	 * first acquire the hb->lock before removing the lock from the
-	 * rt_mutex waitqueue, such that we can keep the hb and rt_mutex
-	 * wait lists consistent.
-	 *
-	 * In particular; it is important that futex_unlock_pi() can not
-	 * observe this inconsistency.
+	 * We took an extra initial reference to the pi_state either
+	 * in futex_proxy_trylock_atomic() or in lookup_pi_state(). We
+	 * need to drop it here again.
 	 */
-	if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter))
-		ret = 0;
+	put_pi_state(pi_state);
 
-no_block:
-	/*
-	 * Fixup the pi_state owner and possibly acquire the lock if we
-	 * haven't already.
-	 */
-	res = fixup_owner(uaddr, &q, !ret);
-	/*
-	 * If fixup_owner() returned an error, proprogate that.  If it acquired
-	 * the lock, clear our -ETIMEDOUT or -EINTR.
-	 */
-	if (res)
-		ret = (res < 0) ? res : 0;
+out_unlock:
+	double_unlock_hb(hb1, hb2);
+	wake_up_q(&wake_q);
+	hb_waiters_dec(hb2);
 
 	/*
-	 * If fixup_owner() faulted and was unable to handle the fault, unlock
-	 * it and return the fault to userspace.
+	 * drop_futex_key_refs() must be called outside the spinlocks. During
+	 * the requeue we moved futex_q's from the hash bucket at key1 to the
+	 * one at key2 and updated their key pointer.  We no longer need to
+	 * hold the references to key1.
 	 */
-	if (ret && (rt_mutex_owner(&q.pi_state->pi_mutex) == current)) {
-		pi_state = q.pi_state;
-		get_pi_state(pi_state);
-	}
-
-	/* Unqueue and drop the lock */
-	unqueue_me_pi(&q);
-
-	if (pi_state) {
-		rt_mutex_futex_unlock(&pi_state->pi_mutex);
-		put_pi_state(pi_state);
-	}
-
-	goto out_put_key;
-
-out_unlock_put_key:
-	queue_unlock(hb);
+	while (--drop_count >= 0)
+		drop_futex_key_refs(&key1);
 
-out_put_key:
-	put_futex_key(&q.key);
+out_put_keys:
+	put_futex_key(&key2);
+out_put_key1:
+	put_futex_key(&key1);
 out:
-	if (to) {
-		hrtimer_cancel(&to->timer);
-		destroy_hrtimer_on_stack(&to->timer);
-	}
-	return ret != -EINTR ? ret : -ERESTARTNOINTR;
-
-uaddr_faulted:
-	queue_unlock(hb);
-
-	ret = fault_in_user_writeable(uaddr);
-	if (ret)
-		goto out_put_key;
-
-	if (!(flags & FLAGS_SHARED))
-		goto retry_private;
-
-	put_futex_key(&q.key);
-	goto retry;
+	return ret ? ret : task_count;
 }
 
-/*
- * Userspace attempted a TID -> 0 atomic transition, and failed.
- * This is the in-kernel slowpath: we look up the PI state (if any),
- * and do the rt-mutex unlock.
- */
-static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
+/* The key must be already stored in q->key. */
+static inline struct futex_hash_bucket *queue_lock(struct futex_q *q)
+	__acquires(&hb->lock)
 {
-	u32 uninitialized_var(curval), uval, vpid = task_pid_vnr(current);
-	union futex_key key = FUTEX_KEY_INIT;
 	struct futex_hash_bucket *hb;
-	struct futex_q *top_waiter;
-	int ret;
 
-retry:
-	if (get_user(uval, uaddr))
-		return -EFAULT;
+	hb = hash_futex(&q->key);
+
 	/*
-	 * We release only a lock we actually own:
+	 * Increment the counter before taking the lock so that
+	 * a potential waker won't miss a to-be-slept task that is
+	 * waiting for the spinlock. This is safe as all queue_lock()
+	 * users end up calling queue_me(). Similarly, for housekeeping,
+	 * decrement the counter at queue_unlock() when some error has
+	 * occurred and we don't end up adding the task to the list.
 	 */
-	if ((uval & FUTEX_TID_MASK) != vpid)
-		return -EPERM;
+	hb_waiters_inc(hb);
 
-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_WRITE);
-	if (ret)
-		return ret;
+	q->lock_ptr = &hb->lock;
 
-	hb = hash_futex(&key);
-	spin_lock(&hb->lock);
+	spin_lock(&hb->lock); /* implies smp_mb(); (A) */
+	return hb;
+}
 
-	/*
-	 * Check waiters first. We do not trust user space values at
-	 * all and we at least want to know if user space fiddled
-	 * with the futex value instead of blindly unlocking.
-	 */
-	top_waiter = futex_top_waiter(hb, &key);
-	if (top_waiter) {
-		struct futex_pi_state *pi_state = top_waiter->pi_state;
+static inline void
+queue_unlock(struct futex_hash_bucket *hb)
+	__releases(&hb->lock)
+{
+	spin_unlock(&hb->lock);
+	hb_waiters_dec(hb);
+}
 
-		ret = -EINVAL;
-		if (!pi_state)
-			goto out_unlock;
+static inline void __queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
+{
+	int prio;
 
-		/*
-		 * If current does not own the pi_state then the futex is
-		 * inconsistent and user space fiddled with the futex value.
-		 */
-		if (pi_state->owner != current)
-			goto out_unlock;
+	/*
+	 * The priority used to register this element is
+	 * - either the real thread-priority for the real-time threads
+	 * (i.e. threads with a priority lower than MAX_RT_PRIO)
+	 * - or MAX_RT_PRIO for non-RT threads.
+	 * Thus, all RT-threads are woken first in priority order, and
+	 * the others are woken last, in FIFO order.
+	 */
+	prio = min(current->normal_prio, MAX_RT_PRIO);
 
-		get_pi_state(pi_state);
-		/*
-		 * By taking wait_lock while still holding hb->lock, we ensure
-		 * there is no point where we hold neither; and therefore
-		 * wake_futex_pi() must observe a state consistent with what we
-		 * observed.
-		 */
-		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
-		spin_unlock(&hb->lock);
+	plist_node_init(&q->list, prio);
+	plist_add(&q->list, &hb->chain);
+	q->task = current;
+}
 
-		ret = wake_futex_pi(uaddr, uval, pi_state);
+/**
+ * queue_me() - Enqueue the futex_q on the futex_hash_bucket
+ * @q:	The futex_q to enqueue
+ * @hb:	The destination hash bucket
+ *
+ * The hb->lock must be held by the caller, and is released here. A call to
+ * queue_me() is typically paired with exactly one call to unqueue_me().  The
+ * exceptions involve the PI related operations, which may use unqueue_me_pi()
+ * or nothing if the unqueue is done as part of the wake process and the unqueue
+ * state is implicit in the state of the woken task (see futex_wait_requeue_pi() for
+ * an example).
+ */
+static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
+	__releases(&hb->lock)
+{
+	__queue_me(q, hb);
+	spin_unlock(&hb->lock);
+}
 
-		put_pi_state(pi_state);
+/**
+ * unqueue_me() - Remove the futex_q from its futex_hash_bucket
+ * @q:	The futex_q to unqueue
+ *
+ * The q->lock_ptr must not be held by the caller. A call to unqueue_me() must
+ * be paired with exactly one earlier call to queue_me().
+ *
+ * Return:
+ *   1 - if the futex_q was still queued (and we unqueued it);
+ *   0 - if the futex_q was already removed by the waking thread
+ */
+static int unqueue_me(struct futex_q *q)
+{
+	spinlock_t *lock_ptr;
+	int ret = 0;
 
+	/* In the common case we don't take the spinlock, which is nice. */
+retry:
+	/*
+	 * q->lock_ptr can change between this read and the following spin_lock.
+	 * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
+	 * optimizing lock_ptr out of the logic below.
+	 */
+	lock_ptr = READ_ONCE(q->lock_ptr);
+	if (lock_ptr != NULL) {
+		spin_lock(lock_ptr);
 		/*
-		 * Success, we're done! No tricky corner cases.
-		 */
-		if (!ret)
-			goto out_putkey;
-		/*
-		 * The atomic access to the futex value generated a
-		 * pagefault, so retry the user-access and the wakeup:
-		 */
-		if (ret == -EFAULT)
-			goto pi_faulted;
-		/*
-		 * A unconditional UNLOCK_PI op raced against a waiter
-		 * setting the FUTEX_WAITERS bit. Try again.
+		 * q->lock_ptr can change between reading it and
+		 * spin_lock(), causing us to take the wrong lock.  This
+		 * corrects the race condition.
+		 *
+		 * Reasoning goes like this: if we have the wrong lock,
+		 * q->lock_ptr must have changed (maybe several times)
+		 * between reading it and the spin_lock().  It can
+		 * change again after the spin_lock() but only if it was
+		 * already changed before the spin_lock().  It cannot,
+		 * however, change back to the original value.  Therefore
+		 * we can detect whether we acquired the correct lock.
 		 */
-		if (ret == -EAGAIN) {
-			put_futex_key(&key);
+		if (unlikely(lock_ptr != q->lock_ptr)) {
+			spin_unlock(lock_ptr);
 			goto retry;
 		}
-		/*
-		 * wake_futex_pi has detected invalid state. Tell user
-		 * space.
-		 */
-		goto out_putkey;
-	}
+		__unqueue_futex(q);
 
-	/*
-	 * We have no kernel internal state, i.e. no waiters in the
-	 * kernel. Waiters which are about to queue themselves are stuck
-	 * on hb->lock. So we can safely ignore them. We do neither
-	 * preserve the WAITERS bit not the OWNER_DIED one. We are the
-	 * owner.
-	 */
-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, 0)) {
-		spin_unlock(&hb->lock);
-		goto pi_faulted;
-	}
+		BUG_ON(q->pi_state);
 
-	/*
-	 * If uval has changed, let user space handle it.
-	 */
-	ret = (curval == uval) ? 0 : -EAGAIN;
+		spin_unlock(lock_ptr);
+		ret = 1;
+	}
 
-out_unlock:
-	spin_unlock(&hb->lock);
-out_putkey:
-	put_futex_key(&key);
+	drop_futex_key_refs(&q->key);
 	return ret;
+}
 
-pi_faulted:
-	put_futex_key(&key);
+static long futex_wait_restart(struct restart_block *restart);
 
-	ret = fault_in_user_writeable(uaddr);
-	if (!ret)
-		goto retry;
+/**
+ * futex_wait_queue_me() - queue_me() and wait for wakeup, timeout, or signal
+ * @hb:		the futex hash bucket, must be locked by the caller
+ * @q:		the futex_q to queue up on
+ * @timeout:	the prepared hrtimer_sleeper, or null for no timeout
+ */
+static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
+				struct hrtimer_sleeper *timeout)
+{
+	/*
+	 * The task state is guaranteed to be set before another task can
+	 * wake it. set_current_state() is implemented using smp_store_mb() and
+	 * queue_me() calls spin_unlock() upon completion, both serializing
+	 * access to the hash list and forcing another memory barrier.
+	 */
+	set_current_state(TASK_INTERRUPTIBLE);
+	queue_me(q, hb);
 
-	return ret;
+	/* Arm the timer */
+	if (timeout)
+		hrtimer_start_expires(&timeout->timer, HRTIMER_MODE_ABS);
+
+	/*
+	 * If we have been removed from the hash list, then another task
+	 * has tried to wake us, and we can skip the call to schedule().
+	 */
+	if (likely(!plist_node_empty(&q->list))) {
+		/*
+		 * If the timer has already expired, current will already be
+		 * flagged for rescheduling. Only call schedule if there
+		 * is no timeout, or if it has yet to expire.
+		 */
+		if (!timeout || timeout->task)
+			freezable_schedule();
+	}
+	__set_current_state(TASK_RUNNING);
 }
 
 /**
- * handle_early_requeue_pi_wakeup() - Detect early wakeup on the initial futex
- * @hb:		the hash_bucket futex_q was original enqueued on
- * @q:		the futex_q woken while waiting to be requeued
- * @key2:	the futex_key of the requeue target futex
- * @timeout:	the timeout associated with the wait (NULL if none)
+ * futex_wait_setup() - Prepare to wait on a futex
+ * @uaddr:	the futex userspace address
+ * @val:	the expected value
+ * @flags:	futex flags (FLAGS_SHARED, etc.)
+ * @q:		the associated futex_q
+ * @hb:		storage for hash_bucket pointer to be returned to caller
  *
- * Detect if the task was woken on the initial futex as opposed to the requeue
- * target futex.  If so, determine if it was a timeout or a signal that caused
- * the wakeup and return the appropriate error code to the caller.  Must be
- * called with the hb lock held.
+ * Setup the futex_q and locate the hash_bucket.  Get the futex value and
+ * compare it with the expected value.  Handle atomic faults internally.
+ * Return with the hb lock held and a q.key reference on success, and unlocked
+ * with no q.key reference on failure.
  *
  * Return:
- *  0 = no early wakeup detected;
- * <0 = -ETIMEDOUT or -ERESTARTNOINTR
+ *  0 - uaddr contains val and hb has been locked;
+ * <0 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
  */
-static inline
-int handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb,
-				   struct futex_q *q, union futex_key *key2,
-				   struct hrtimer_sleeper *timeout)
+static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
+			   struct futex_q *q, struct futex_hash_bucket **hb)
 {
-	int ret = 0;
+	u32 uval;
+	int ret;
 
 	/*
-	 * With the hb lock held, we avoid races while we process the wakeup.
-	 * We only need to hold hb (and not hb2) to ensure atomicity as the
-	 * wakeup code can't change q.key from uaddr to uaddr2 if we hold hb.
-	 * It can't be requeued from uaddr2 to something else since we don't
-	 * support a PI aware source futex for requeue.
+	 * Access the page AFTER the hash-bucket is locked.
+	 * Order is important:
+	 *
+	 *   Userspace waiter: val = var; if (cond(val)) futex_wait(&var, val);
+	 *   Userspace waker:  if (cond(var)) { var = new; futex_wake(&var); }
+	 *
+	 * The basic logical guarantee of a futex is that it blocks ONLY
+	 * if cond(var) is known to be true at the time of blocking, for
+	 * any cond.  If we locked the hash-bucket after testing *uaddr, that
+	 * would open a race condition where we could block indefinitely with
+	 * cond(var) false, which would violate the guarantee.
+	 *
+	 * On the other hand, we insert q and release the hash-bucket only
+	 * after testing *uaddr.  This guarantees that futex_wait() will NOT
+	 * absorb a wakeup if *uaddr does not match the desired values
+	 * while the syscall executes.
 	 */
-	if (!match_futex(&q->key, key2)) {
-		WARN_ON(q->lock_ptr && (&hb->lock != q->lock_ptr));
-		/*
-		 * We were woken prior to requeue by a timeout or a signal.
-		 * Unqueue the futex_q and determine which it was.
-		 */
-		plist_del(&q->list, &hb->chain);
-		hb_waiters_dec(hb);
+retry:
+	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q->key, VERIFY_READ);
+	if (unlikely(ret != 0))
+		return ret;
+
+retry_private:
+	*hb = queue_lock(q);
+
+	ret = get_futex_value_locked(&uval, uaddr);
+
+	if (ret) {
+		queue_unlock(*hb);
+
+		ret = get_user(uval, uaddr);
+		if (ret)
+			goto out;
+
+		if (!(flags & FLAGS_SHARED))
+			goto retry_private;
+
+		put_futex_key(&q->key);
+		goto retry;
+	}
 
-		/* Handle spurious wakeups gracefully */
+	if (uval != val) {
+		queue_unlock(*hb);
 		ret = -EWOULDBLOCK;
-		if (timeout && !timeout->task)
-			ret = -ETIMEDOUT;
-		else if (signal_pending(current))
-			ret = -ERESTARTNOINTR;
 	}
+
+out:
+	if (ret)
+		put_futex_key(&q->key);
 	return ret;
 }
 
-/**
- * futex_wait_requeue_pi() - Wait on uaddr and take uaddr2
- * @uaddr:	the futex we initially wait on (non-pi)
- * @flags:	futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be
- *		the same type, no requeueing from private to shared, etc.
- * @val:	the expected value of uaddr
- * @abs_time:	absolute timeout
- * @bitset:	32 bit wakeup bitset set by userspace, defaults to all
- * @uaddr2:	the pi futex we will take prior to returning to user-space
- *
- * The caller will wait on uaddr and will be requeued by futex_requeue() to
- * uaddr2 which must be PI aware and unique from uaddr.  Normal wakeup will wake
- * on uaddr2 and complete the acquisition of the rt_mutex prior to returning to
- * userspace.  This ensures the rt_mutex maintains an owner when it has waiters;
- * without one, the pi logic would not know which task to boost/deboost, if
- * there was a need to.
- *
- * We call schedule in futex_wait_queue_me() when we enqueue and return there
- * via the following--
- * 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue()
- * 2) wakeup on uaddr2 after a requeue
- * 3) signal
- * 4) timeout
- *
- * If 3, cleanup and return -ERESTARTNOINTR.
- *
- * If 2, we may then block on trying to take the rt_mutex and return via:
- * 5) successful lock
- * 6) signal
- * 7) timeout
- * 8) other lock acquisition failure
- *
- * If 6, return -EWOULDBLOCK (restarting the syscall would do the same).
- *
- * If 4 or 7, we cleanup and return with -ETIMEDOUT.
- *
- * Return:
- *  0 - On success;
- * <0 - On error
- */
-static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
-				 u32 val, ktime_t *abs_time, u32 bitset,
-				 u32 __user *uaddr2)
+static int futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
+		      ktime_t *abs_time, u32 bitset)
 {
 	struct hrtimer_sleeper timeout, *to = NULL;
-	struct futex_pi_state *pi_state = NULL;
-	struct rt_mutex_waiter rt_waiter;
+	struct restart_block *restart;
 	struct futex_hash_bucket *hb;
-	union futex_key key2 = FUTEX_KEY_INIT;
 	struct futex_q q = futex_q_init;
-	int res, ret;
-
-	if (uaddr == uaddr2)
-		return -EINVAL;
+	int ret;
 
 	if (!bitset)
 		return -EINVAL;
+	q.bitset = bitset;
 
 	if (abs_time) {
 		to = &timeout;
+
 		hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
 				      CLOCK_REALTIME : CLOCK_MONOTONIC,
 				      HRTIMER_MODE_ABS);
@@ -2999,139 +1562,47 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
 					     current->timer_slack_ns);
 	}
 
+retry:
 	/*
-	 * The waiter is allocated on our stack, manipulated by the requeue
-	 * code while we sleep on uaddr.
-	 */
-	rt_mutex_init_waiter(&rt_waiter);
-
-	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
-	if (unlikely(ret != 0))
-		goto out;
-
-	q.bitset = bitset;
-	q.rt_waiter = &rt_waiter;
-	q.requeue_pi_key = &key2;
-
-	/*
-	 * Prepare to wait on uaddr. On success, increments q.key (key1) ref
-	 * count.
+	 * Prepare to wait on uaddr. On success, holds hb lock and increments
+	 * q.key refs.
 	 */
 	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
 	if (ret)
-		goto out_key2;
-
-	/*
-	 * The check above which compares uaddrs is not sufficient for
-	 * shared futexes. We need to compare the keys:
-	 */
-	if (match_futex(&q.key, &key2)) {
-		queue_unlock(hb);
-		ret = -EINVAL;
-		goto out_put_keys;
-	}
+		goto out;
 
-	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
+	/* queue_me and wait for wakeup, timeout, or a signal. */
 	futex_wait_queue_me(hb, &q, to);
 
-	spin_lock(&hb->lock);
-	ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
-	spin_unlock(&hb->lock);
-	if (ret)
-		goto out_put_keys;
+	/* If we were woken (and unqueued), we succeeded, whatever. */
+	ret = 0;
+	/* unqueue_me() drops q.key ref */
+	if (!unqueue_me(&q))
+		goto out;
+	ret = -ETIMEDOUT;
+	if (to && !to->task)
+		goto out;
 
 	/*
-	 * In order for us to be here, we know our q.key == key2, and since
-	 * we took the hb->lock above, we also know that futex_requeue() has
-	 * completed and we no longer have to concern ourselves with a wakeup
-	 * race with the atomic proxy lock acquisition by the requeue code. The
-	 * futex_requeue dropped our key1 reference and incremented our key2
-	 * reference count.
+	 * We expect signal_pending(current), but we might be the
+	 * victim of a spurious wakeup as well.
 	 */
+	if (!signal_pending(current))
+		goto retry;
 
-	/* Check if the requeue code acquired the second futex for us. */
-	if (!q.rt_waiter) {
-		/*
-		 * Got the lock. We might not be the anticipated owner if we
-		 * did a lock-steal - fix up the PI-state in that case.
-		 */
-		if (q.pi_state && (q.pi_state->owner != current)) {
-			spin_lock(q.lock_ptr);
-			ret = fixup_pi_state_owner(uaddr2, &q, current);
-			if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
-				pi_state = q.pi_state;
-				get_pi_state(pi_state);
-			}
-			/*
-			 * Drop the reference to the pi state which
-			 * the requeue_pi() code acquired for us.
-			 */
-			put_pi_state(q.pi_state);
-			spin_unlock(q.lock_ptr);
-		}
-	} else {
-		struct rt_mutex *pi_mutex;
-
-		/*
-		 * We have been woken up by futex_unlock_pi(), a timeout, or a
-		 * signal.  futex_unlock_pi() will not destroy the lock_ptr nor
-		 * the pi_state.
-		 */
-		WARN_ON(!q.pi_state);
-		pi_mutex = &q.pi_state->pi_mutex;
-		ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
-
-		spin_lock(q.lock_ptr);
-		if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
-			ret = 0;
-
-		debug_rt_mutex_free_waiter(&rt_waiter);
-		/*
-		 * Fixup the pi_state owner and possibly acquire the lock if we
-		 * haven't already.
-		 */
-		res = fixup_owner(uaddr2, &q, !ret);
-		/*
-		 * If fixup_owner() returned an error, proprogate that.  If it
-		 * acquired the lock, clear -ETIMEDOUT or -EINTR.
-		 */
-		if (res)
-			ret = (res < 0) ? res : 0;
-
-		/*
-		 * If fixup_pi_state_owner() faulted and was unable to handle
-		 * the fault, unlock the rt_mutex and return the fault to
-		 * userspace.
-		 */
-		if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
-			pi_state = q.pi_state;
-			get_pi_state(pi_state);
-		}
-
-		/* Unqueue and drop the lock. */
-		unqueue_me_pi(&q);
-	}
-
-	if (pi_state) {
-		rt_mutex_futex_unlock(&pi_state->pi_mutex);
-		put_pi_state(pi_state);
-	}
+	ret = -ERESTARTSYS;
+	if (!abs_time)
+		goto out;
 
-	if (ret == -EINTR) {
-		/*
-		 * We've already been requeued, but cannot restart by calling
-		 * futex_lock_pi() directly. We could restart this syscall, but
-		 * it would detect that the user space "val" changed and return
-		 * -EWOULDBLOCK.  Save the overhead of the restart and return
-		 * -EWOULDBLOCK directly.
-		 */
-		ret = -EWOULDBLOCK;
-	}
+	restart = &current->restart_block;
+	restart->fn = futex_wait_restart;
+	restart->futex.uaddr = uaddr;
+	restart->futex.val = val;
+	restart->futex.time = *abs_time;
+	restart->futex.bitset = bitset;
+	restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;
 
-out_put_keys:
-	put_futex_key(&q.key);
-out_key2:
-	put_futex_key(&key2);
+	ret = -ERESTART_RESTARTBLOCK;
 
 out:
 	if (to) {
@@ -3141,6 +1612,22 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
 	return ret;
 }
 
+
+static long futex_wait_restart(struct restart_block *restart)
+{
+	u32 __user *uaddr = restart->futex.uaddr;
+	ktime_t t, *tp = NULL;
+
+	if (restart->futex.flags & FLAGS_HAS_TIMEOUT) {
+		t = restart->futex.time;
+		tp = &t;
+	}
+	restart->fn = do_no_restart_syscall;
+
+	return (long)futex_wait(uaddr, restart->futex.flags,
+				restart->futex.val, tp, restart->futex.bitset);
+}
+
 /*
  * Support for robust futexes: the kernel cleans up held futexes at
  * thread exit time.
diff --git a/kernel/futex_pi.c b/kernel/futex_pi.c
new file mode 100644
index 0000000000..a10c962aa2
--- /dev/null
+++ b/kernel/futex_pi.c
@@ -0,0 +1,1563 @@
+/*
+ *  PI-futex support started by Ingo Molnar and Thomas Gleixner
+ *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ *  Copyright (C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
+ *  Requeue-PI support by Darren Hart <dvhltc@us.ibm.com>
+ *  Copyright (C) IBM Corporation, 2009
+ *  Thanks to Thomas Gleixner for conceptual design and careful reviews.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, write to the Free Software
+ *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#include "locking/rtmutex_common.h"
+
+/* from futex.c: */
+static void __unqueue_futex(struct futex_q *q);
+static inline struct futex_hash_bucket *queue_lock(struct futex_q *q);
+static inline void queue_unlock(struct futex_hash_bucket *hb);
+static inline void __queue_me(struct futex_q *q, struct futex_hash_bucket *hb);
+static void futex_wait_queue_me(struct futex_hash_bucket *hb,
+		struct futex_q *q, struct hrtimer_sleeper *timeout);
+static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
+		struct futex_q *q, struct futex_hash_bucket **hb);
+
+/*
+ * Priority Inheritance state:
+ */
+struct futex_pi_state {
+	/*
+	 * list of 'owned' pi_state instances - these have to be
+	 * cleaned up in do_exit() if the task exits prematurely:
+	 */
+	struct list_head list;
+
+	/*
+	 * The PI object:
+	 */
+	struct rt_mutex pi_mutex;
+
+	struct task_struct *owner;
+	atomic_t refcount;
+
+	union futex_key key;
+};
+
+static int refill_pi_state_cache(void)
+{
+	struct futex_pi_state *pi_state;
+
+	if (likely(current->pi_state_cache))
+		return 0;
+
+	pi_state = kzalloc(sizeof(*pi_state), GFP_KERNEL);
+
+	if (!pi_state)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&pi_state->list);
+	/* pi_mutex gets initialized later */
+	pi_state->owner = NULL;
+	atomic_set(&pi_state->refcount, 1);
+	pi_state->key = FUTEX_KEY_INIT;
+
+	current->pi_state_cache = pi_state;
+
+	return 0;
+}
+
+static struct futex_pi_state *alloc_pi_state(void)
+{
+	struct futex_pi_state *pi_state = current->pi_state_cache;
+
+	WARN_ON(!pi_state);
+	current->pi_state_cache = NULL;
+
+	return pi_state;
+}
+
+static void get_pi_state(struct futex_pi_state *pi_state)
+{
+	WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
+}
+
+/*
+ * Drops a reference to the pi_state object and frees or caches it
+ * when the last reference is gone.
+ *
+ * Must be called with the hb lock held.
+ */
+static void put_pi_state(struct futex_pi_state *pi_state)
+{
+	if (!pi_state)
+		return;
+
+	if (!atomic_dec_and_test(&pi_state->refcount))
+		return;
+
+	/*
+	 * If pi_state->owner is NULL, the owner is most probably dying
+	 * and has cleaned up the pi_state already
+	 */
+	if (pi_state->owner) {
+		raw_spin_lock_irq(&pi_state->owner->pi_lock);
+		list_del_init(&pi_state->list);
+		raw_spin_unlock_irq(&pi_state->owner->pi_lock);
+
+		rt_mutex_proxy_unlock(&pi_state->pi_mutex, pi_state->owner);
+	}
+
+	if (current->pi_state_cache)
+		kfree(pi_state);
+	else {
+		/*
+		 * pi_state->list is already empty.
+		 * clear pi_state->owner.
+		 * refcount is at 0 - put it back to 1.
+		 */
+		pi_state->owner = NULL;
+		atomic_set(&pi_state->refcount, 1);
+		current->pi_state_cache = pi_state;
+	}
+}
+
+/*
+ * Look up the task based on what TID userspace gave us.
+ * We don't trust it.
+ */
+static struct task_struct *futex_find_get_task(pid_t pid)
+{
+	struct task_struct *p;
+
+	rcu_read_lock();
+	p = find_task_by_vpid(pid);
+	if (p)
+		get_task_struct(p);
+
+	rcu_read_unlock();
+
+	return p;
+}
+
+/*
+ * This task is holding PI mutexes at exit time => bad.
+ * Kernel cleans up PI-state, but userspace is likely hosed.
+ * (Robust-futex cleanup is separate and might save the day for userspace.)
+ */
+void exit_pi_state_list(struct task_struct *curr)
+{
+	struct list_head *next, *head = &curr->pi_state_list;
+	struct futex_pi_state *pi_state;
+	struct futex_hash_bucket *hb;
+	union futex_key key = FUTEX_KEY_INIT;
+
+	if (!futex_cmpxchg_enabled)
+		return;
+	/*
+	 * We are a ZOMBIE and nobody can enqueue itself on
+	 * pi_state_list anymore, but we have to be careful
+	 * versus waiters unqueueing themselves:
+	 */
+	raw_spin_lock_irq(&curr->pi_lock);
+	while (!list_empty(head)) {
+
+		next = head->next;
+		pi_state = list_entry(next, struct futex_pi_state, list);
+		key = pi_state->key;
+		hb = hash_futex(&key);
+		raw_spin_unlock_irq(&curr->pi_lock);
+
+		spin_lock(&hb->lock);
+
+		raw_spin_lock_irq(&curr->pi_lock);
+		/*
+		 * We dropped the pi-lock, so re-check whether this
+		 * task still owns the PI-state:
+		 */
+		if (head->next != next) {
+			spin_unlock(&hb->lock);
+			continue;
+		}
+
+		WARN_ON(pi_state->owner != curr);
+		WARN_ON(list_empty(&pi_state->list));
+		list_del_init(&pi_state->list);
+		pi_state->owner = NULL;
+		raw_spin_unlock_irq(&curr->pi_lock);
+
+		get_pi_state(pi_state);
+		spin_unlock(&hb->lock);
+
+		rt_mutex_futex_unlock(&pi_state->pi_mutex);
+		put_pi_state(pi_state);
+
+		raw_spin_lock_irq(&curr->pi_lock);
+	}
+	raw_spin_unlock_irq(&curr->pi_lock);
+}
+
+/**
+ * futex_top_waiter() - Return the highest priority waiter on a futex
+ * @hb:		the hash bucket the futex_q's reside in
+ * @key:	the futex key (to distinguish it from other futex_q's)
+ *
+ * Must be called with the hb lock held.
+ */
+static struct futex_q *futex_top_waiter(struct futex_hash_bucket *hb,
+					union futex_key *key)
+{
+	struct futex_q *this;
+
+	plist_for_each_entry(this, &hb->chain, list) {
+		if (match_futex(&this->key, key))
+			return this;
+	}
+	return NULL;
+}
+
+/*
+ * We need to check the following states:
+ *
+ *      Waiter | pi_state | pi->owner | uTID      | uODIED | ?
+ *
+ * [1]  NULL   | ---      | ---       | 0         | 0/1    | Valid
+ * [2]  NULL   | ---      | ---       | >0        | 0/1    | Valid
+ *
+ * [3]  Found  | NULL     | --        | Any       | 0/1    | Invalid
+ *
+ * [4]  Found  | Found    | NULL      | 0         | 1      | Valid
+ * [5]  Found  | Found    | NULL      | >0        | 1      | Invalid
+ *
+ * [6]  Found  | Found    | task      | 0         | 1      | Valid
+ *
+ * [7]  Found  | Found    | NULL      | Any       | 0      | Invalid
+ *
+ * [8]  Found  | Found    | task      | ==taskTID | 0/1    | Valid
+ * [9]  Found  | Found    | task      | 0         | 0      | Invalid
+ * [10] Found  | Found    | task      | !=taskTID | 0/1    | Invalid
+ *
+ * [1]	Indicates that the kernel can acquire the futex atomically. We
+ *	came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit.
+ *
+ * [2]	Valid, if TID does not belong to a kernel thread. If no matching
+ *      thread is found then it indicates that the owner TID has died.
+ *
+ * [3]	Invalid. The waiter is queued on a non PI futex
+ *
+ * [4]	Valid state after exit_robust_list(), which sets the user space
+ *	value to FUTEX_WAITERS | FUTEX_OWNER_DIED.
+ *
+ * [5]	The user space value got manipulated between exit_robust_list()
+ *	and exit_pi_state_list()
+ *
+ * [6]	Valid state after exit_pi_state_list() which sets the new owner in
+ *	the pi_state but cannot access the user space value.
+ *
+ * [7]	pi_state->owner can only be NULL when the OWNER_DIED bit is set.
+ *
+ * [8]	Owner and user space value match
+ *
+ * [9]	There is no transient state which sets the user space TID to 0
+ *	except exit_robust_list(), but this is indicated by the
+ *	FUTEX_OWNER_DIED bit. See [4]
+ *
+ * [10] There is no transient state which leaves owner and user space
+ *	TID out of sync.
+ *
+ *
+ * Serialization and lifetime rules:
+ *
+ * hb->lock:
+ *
+ *	hb -> futex_q, relation
+ *	futex_q -> pi_state, relation
+ *
+ *	(cannot be raw because hb can contain arbitrary amount
+ *	 of futex_q's)
+ *
+ * pi_mutex->wait_lock:
+ *
+ *	{uval, pi_state}
+ *
+ *	(and pi_mutex 'obviously')
+ *
+ * p->pi_lock:
+ *
+ *	p->pi_state_list -> pi_state->list, relation
+ *
+ * pi_state->refcount:
+ *
+ *	pi_state lifetime
+ *
+ *
+ * Lock order:
+ *
+ *   hb->lock
+ *     pi_mutex->wait_lock
+ *       p->pi_lock
+ *
+ */
+
+/*
+ * Validate that the existing waiter has a pi_state and sanity check
+ * the pi_state against the user space value. If correct, attach to
+ * it.
+ */
+static int attach_to_pi_state(u32 __user *uaddr, u32 uval,
+			      struct futex_pi_state *pi_state,
+			      struct futex_pi_state **ps)
+{
+	pid_t pid = uval & FUTEX_TID_MASK;
+	u32 uval2;
+	int ret;
+
+	/*
+	 * Userspace might have messed up non-PI and PI futexes [3]
+	 */
+	if (unlikely(!pi_state))
+		return -EINVAL;
+
+	/*
+	 * We get here with hb->lock held, and having found a
+	 * futex_top_waiter(). This means that futex_lock_pi() of said futex_q
+	 * has dropped the hb->lock in between queue_me() and unqueue_me_pi(),
+	 * which in turn means that futex_lock_pi() still has a reference on
+	 * our pi_state.
+	 *
+	 * The waiter holding a reference on @pi_state also protects against
+	 * the unlocked put_pi_state() in futex_unlock_pi(), futex_lock_pi()
+	 * and futex_wait_requeue_pi() as it cannot go to 0 and consequently
+	 * free pi_state before we can take a reference ourselves.
+	 */
+	WARN_ON(!atomic_read(&pi_state->refcount));
+
+	/*
+	 * Now that we have a pi_state, we can acquire wait_lock
+	 * and do the state validation.
+	 */
+	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+
+	/*
+	 * Since {uval, pi_state} is serialized by wait_lock, and our current
+	 * uval was read without holding it, it can have changed. Verify it
+	 * still is what we expect it to be, otherwise retry the entire
+	 * operation.
+	 */
+	if (get_futex_value_locked(&uval2, uaddr))
+		goto out_efault;
+
+	if (uval != uval2)
+		goto out_eagain;
+
+	/*
+	 * Handle the owner died case:
+	 */
+	if (uval & FUTEX_OWNER_DIED) {
+		/*
+		 * exit_pi_state_list sets owner to NULL and wakes the
+		 * topmost waiter. The task which acquires the
+		 * pi_state->rt_mutex will fixup owner.
+		 */
+		if (!pi_state->owner) {
+			/*
+			 * No pi state owner, but the user space TID
+			 * is not 0. Inconsistent state. [5]
+			 */
+			if (pid)
+				goto out_einval;
+			/*
+			 * Take a ref on the state and return success. [4]
+			 */
+			goto out_attach;
+		}
+
+		/*
+		 * If TID is 0, then either the dying owner has not
+		 * yet executed exit_pi_state_list() or some waiter
+		 * acquired the rtmutex in the pi state, but did not
+		 * yet fixup the TID in user space.
+		 *
+		 * Take a ref on the state and return success. [6]
+		 */
+		if (!pid)
+			goto out_attach;
+	} else {
+		/*
+		 * If the owner died bit is not set, then the pi_state
+		 * must have an owner. [7]
+		 */
+		if (!pi_state->owner)
+			goto out_einval;
+	}
+
+	/*
+	 * Bail out if user space manipulated the futex value. If pi
+	 * state exists then the owner TID must be the same as the
+	 * user space TID. [9/10]
+	 */
+	if (pid != task_pid_vnr(pi_state->owner))
+		goto out_einval;
+
+out_attach:
+	get_pi_state(pi_state);
+	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+	*ps = pi_state;
+	return 0;
+
+out_einval:
+	ret = -EINVAL;
+	goto out_error;
+
+out_eagain:
+	ret = -EAGAIN;
+	goto out_error;
+
+out_efault:
+	ret = -EFAULT;
+	goto out_error;
+
+out_error:
+	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+	return ret;
+}
+
+/*
+ * Lookup the task for the TID provided from user space and attach to
+ * it after doing proper sanity checks.
+ */
+static int attach_to_pi_owner(u32 uval, union futex_key *key,
+			      struct futex_pi_state **ps)
+{
+	pid_t pid = uval & FUTEX_TID_MASK;
+	struct futex_pi_state *pi_state;
+	struct task_struct *p;
+
+	/*
+	 * We are the first waiter - try to look up the real owner and attach
+	 * the new pi_state to it, but bail out when TID = 0 [1]
+	 */
+	if (!pid)
+		return -ESRCH;
+	p = futex_find_get_task(pid);
+	if (!p)
+		return -ESRCH;
+
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		put_task_struct(p);
+		return -EPERM;
+	}
+
+	/*
+	 * We need to look at the task state flags to figure out
+	 * whether the task is exiting. To protect against the do_exit
+	 * change of the task flags, we do this protected by
+	 * p->pi_lock:
+	 */
+	raw_spin_lock_irq(&p->pi_lock);
+	if (unlikely(p->flags & PF_EXITING)) {
+		/*
+		 * The task is on the way out. When PF_EXITPIDONE is
+		 * set, we know that the task has finished the
+		 * cleanup:
+		 */
+		int ret = (p->flags & PF_EXITPIDONE) ? -ESRCH : -EAGAIN;
+
+		raw_spin_unlock_irq(&p->pi_lock);
+		put_task_struct(p);
+		return ret;
+	}
+
+	/*
+	 * No existing pi state. First waiter. [2]
+	 *
+	 * This creates pi_state, we have hb->lock held, this means nothing can
+	 * observe this state, wait_lock is irrelevant.
+	 */
+	pi_state = alloc_pi_state();
+
+	/*
+	 * Initialize the pi_mutex in locked state and make @p
+	 * the owner of it:
+	 */
+	rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p);
+
+	/* Store the key for possible exit cleanups: */
+	pi_state->key = *key;
+
+	WARN_ON(!list_empty(&pi_state->list));
+	list_add(&pi_state->list, &p->pi_state_list);
+	pi_state->owner = p;
+	raw_spin_unlock_irq(&p->pi_lock);
+
+	put_task_struct(p);
+
+	*ps = pi_state;
+
+	return 0;
+}
+
+static int lookup_pi_state(u32 __user *uaddr, u32 uval,
+			   struct futex_hash_bucket *hb,
+			   union futex_key *key, struct futex_pi_state **ps)
+{
+	struct futex_q *top_waiter = futex_top_waiter(hb, key);
+
+	/*
+	 * If there is a waiter on that futex, validate it and
+	 * attach to the pi_state when the validation succeeds.
+	 */
+	if (top_waiter)
+		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
+
+	/*
+	 * We are the first waiter - try to look up the owner based on
+	 * @uval and attach to it.
+	 */
+	return attach_to_pi_owner(uval, key, ps);
+}
+
+static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
+{
+	u32 uninitialized_var(curval);
+
+	if (unlikely(should_fail_futex(true)))
+		return -EFAULT;
+
+	if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)))
+		return -EFAULT;
+
+	/* If user space value changed, let the caller retry */
+	return curval != uval ? -EAGAIN : 0;
+}
+
+/**
+ * futex_lock_pi_atomic() - Atomic work required to acquire a pi aware futex
+ * @uaddr:		the pi futex user address
+ * @hb:			the pi futex hash bucket
+ * @key:		the futex key associated with uaddr and hb
+ * @ps:			the pi_state pointer where we store the result of the
+ *			lookup
+ * @task:		the task to perform the atomic lock work for.  This will
+ *			be "current" except in the case of requeue pi.
+ * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
+ *
+ * Return:
+ *  0 - ready to wait;
+ *  1 - acquired the lock;
+ * <0 - error
+ *
+ * The hb->lock and futex_key refs shall be held by the caller.
+ */
+static int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
+				union futex_key *key,
+				struct futex_pi_state **ps,
+				struct task_struct *task, int set_waiters)
+{
+	u32 uval, newval, vpid = task_pid_vnr(task);
+	struct futex_q *top_waiter;
+	int ret;
+
+	/*
+	 * Read the user space value first so we can validate a few
+	 * things before proceeding further.
+	 */
+	if (get_futex_value_locked(&uval, uaddr))
+		return -EFAULT;
+
+	if (unlikely(should_fail_futex(true)))
+		return -EFAULT;
+
+	/*
+	 * Detect deadlocks.
+	 */
+	if ((unlikely((uval & FUTEX_TID_MASK) == vpid)))
+		return -EDEADLK;
+
+	if ((unlikely(should_fail_futex(true))))
+		return -EDEADLK;
+
+	/*
+	 * Lookup existing state first. If it exists, try to attach to
+	 * its pi_state.
+	 */
+	top_waiter = futex_top_waiter(hb, key);
+	if (top_waiter)
+		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
+
+	/*
+	 * No waiter and user TID is 0. We are here because the
+	 * waiters or the owner died bit is set or called from
+	 * requeue_cmp_pi or for whatever reason something took the
+	 * syscall.
+	 */
+	if (!(uval & FUTEX_TID_MASK)) {
+		/*
+		 * We take over the futex. No other waiters and the user space
+		 * TID is 0. We preserve the owner died bit.
+		 */
+		newval = uval & FUTEX_OWNER_DIED;
+		newval |= vpid;
+
+		/* The futex requeue_pi code can enforce the waiters bit */
+		if (set_waiters)
+			newval |= FUTEX_WAITERS;
+
+		ret = lock_pi_update_atomic(uaddr, uval, newval);
+		/* If the take over worked, return 1 */
+		return ret < 0 ? ret : 1;
+	}
+
+	/*
+	 * First waiter. Set the waiters bit before attaching ourselves to
+	 * the owner. If owner tries to unlock, it will be forced into
+	 * the kernel and blocked on hb->lock.
+	 */
+	newval = uval | FUTEX_WAITERS;
+	ret = lock_pi_update_atomic(uaddr, uval, newval);
+	if (ret)
+		return ret;
+	/*
+	 * If the update of the user space value succeeded, we try to
+	 * attach to the owner. If that fails, no harm done, we only
+	 * set the FUTEX_WAITERS bit in the user space variable.
+	 */
+	return attach_to_pi_owner(uval, key, ps);
+}
+
+/*
+ * Caller must hold a reference on @pi_state.
+ */
+static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_state)
+{
+	u32 uninitialized_var(curval), newval;
+	struct task_struct *new_owner;
+	bool postunlock = false;
+	DEFINE_WAKE_Q(wake_q);
+	int ret = 0;
+
+	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
+	if (WARN_ON_ONCE(!new_owner)) {
+		/*
+		 * As per the comment in futex_unlock_pi() this should not happen.
+		 *
+		 * When this happens, give up our locks and try again, giving
+		 * the futex_lock_pi() instance time to complete, either by
+		 * waiting on the rtmutex or removing itself from the futex
+		 * queue.
+		 */
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
+
+	/*
+	 * We pass it to the next owner. The WAITERS bit is always kept
+	 * enabled while there is PI state around. We cleanup the owner
+	 * died bit, because we are the owner.
+	 */
+	newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
+
+	if (unlikely(should_fail_futex(true)))
+		ret = -EFAULT;
+
+	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) {
+		ret = -EFAULT;
+
+	} else if (curval != uval) {
+		/*
+		 * If an unconditional UNLOCK_PI operation (user space did not
+		 * try the TID->0 transition) raced with a waiter setting the
+		 * FUTEX_WAITERS flag between get_user() and locking the hash
+		 * bucket lock, retry the operation.
+		 */
+		if ((FUTEX_TID_MASK & curval) == uval)
+			ret = -EAGAIN;
+		else
+			ret = -EINVAL;
+	}
+
+	if (ret)
+		goto out_unlock;
+
+	/*
+	 * This is a point of no return; once we modify the uval there is no
+	 * going back and subsequent operations must not fail.
+	 */
+
+	raw_spin_lock(&pi_state->owner->pi_lock);
+	WARN_ON(list_empty(&pi_state->list));
+	list_del_init(&pi_state->list);
+	raw_spin_unlock(&pi_state->owner->pi_lock);
+
+	raw_spin_lock(&new_owner->pi_lock);
+	WARN_ON(!list_empty(&pi_state->list));
+	list_add(&pi_state->list, &new_owner->pi_state_list);
+	pi_state->owner = new_owner;
+	raw_spin_unlock(&new_owner->pi_lock);
+
+	postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
+
+out_unlock:
+	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+
+	if (postunlock)
+		rt_mutex_postunlock(&wake_q);
+
+	return ret;
+}
+
+/**
+ * requeue_pi_wake_futex() - Wake a task that acquired the lock during requeue
+ * @q:		the futex_q
+ * @key:	the key of the requeue target futex
+ * @hb:		the hash_bucket of the requeue target futex
+ *
+ * During futex_requeue, with requeue_pi=1, it is possible to acquire the
+ * target futex if it is uncontended or via a lock steal.  Set the futex_q key
+ * to the requeue target futex so the waiter can detect the wakeup on the right
+ * futex, but remove it from the hb and NULL the rt_waiter so it can detect
+ * atomic lock acquisition.  Set the q->lock_ptr to the requeue target hb->lock
+ * to protect access to the pi_state to fixup the owner later.  Must be called
+ * with both q->lock_ptr and hb->lock held.
+ */
+static inline
+void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+			   struct futex_hash_bucket *hb)
+{
+	get_futex_key_refs(key);
+	q->key = *key;
+
+	__unqueue_futex(q);
+
+	WARN_ON(!q->rt_waiter);
+	q->rt_waiter = NULL;
+
+	q->lock_ptr = &hb->lock;
+
+	wake_up_state(q->task, TASK_NORMAL);
+}
+
+/**
+ * futex_proxy_trylock_atomic() - Attempt an atomic lock for the top waiter
+ * @pifutex:		the user address of the to futex
+ * @hb1:		the from futex hash bucket, must be locked by the caller
+ * @hb2:		the to futex hash bucket, must be locked by the caller
+ * @key1:		the from futex key
+ * @key2:		the to futex key
+ * @ps:			address to store the pi_state pointer
+ * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
+ *
+ * Try and get the lock on behalf of the top waiter if we can do it atomically.
+ * Wake the top waiter if we succeed.  If the caller specified set_waiters,
+ * then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit.
+ * hb1 and hb2 must be held by the caller.
+ *
+ * Return:
+ *  0 - failed to acquire the lock atomically;
+ * >0 - acquired the lock, return value is vpid of the top_waiter
+ * <0 - error
+ */
+static int futex_proxy_trylock_atomic(u32 __user *pifutex,
+				 struct futex_hash_bucket *hb1,
+				 struct futex_hash_bucket *hb2,
+				 union futex_key *key1, union futex_key *key2,
+				 struct futex_pi_state **ps, int set_waiters)
+{
+	struct futex_q *top_waiter = NULL;
+	u32 curval;
+	int ret, vpid;
+
+	if (get_futex_value_locked(&curval, pifutex))
+		return -EFAULT;
+
+	if (unlikely(should_fail_futex(true)))
+		return -EFAULT;
+
+	/*
+	 * Find the top_waiter and determine if there are additional waiters.
+	 * If the caller intends to requeue more than 1 waiter to pifutex,
+	 * force futex_lock_pi_atomic() to set the FUTEX_WAITERS bit now,
+	 * as we have means to handle the possible fault.  If not, don't set
+	 * the bit unnecessarily as it will force the subsequent unlock to enter
+	 * the kernel.
+	 */
+	top_waiter = futex_top_waiter(hb1, key1);
+
+	/* There are no waiters, nothing for us to do. */
+	if (!top_waiter)
+		return 0;
+
+	/* Ensure we requeue to the expected futex. */
+	if (!match_futex(top_waiter->requeue_pi_key, key2))
+		return -EINVAL;
+
+	/*
+	 * Try to take the lock for top_waiter.  Set the FUTEX_WAITERS bit in
+	 * the contended case or if set_waiters is 1.  The pi_state is returned
+	 * in ps in contended cases.
+	 */
+	vpid = task_pid_vnr(top_waiter->task);
+	ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
+				   set_waiters);
+	if (ret == 1) {
+		requeue_pi_wake_futex(top_waiter, key2, hb2);
+		return vpid;
+	}
+	return ret;
+}
+
+/*
+ * PI futexes cannot be requeued and must remove themselves from the
+ * hash bucket. The hash bucket lock (i.e. lock_ptr) is held on entry
+ * and dropped here.
+ */
+static void unqueue_me_pi(struct futex_q *q)
+	__releases(q->lock_ptr)
+{
+	__unqueue_futex(q);
+
+	BUG_ON(!q->pi_state);
+	put_pi_state(q->pi_state);
+	q->pi_state = NULL;
+
+	spin_unlock(q->lock_ptr);
+}
+
+/*
+ * Fixup the pi_state owner with the new owner.
+ *
+ * Must be called with hash bucket lock held and mm->sem held for non
+ * private futexes.
+ */
+static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
+				struct task_struct *newowner)
+{
+	u32 newtid = task_pid_vnr(newowner) | FUTEX_WAITERS;
+	struct futex_pi_state *pi_state = q->pi_state;
+	u32 uval, uninitialized_var(curval), newval;
+	struct task_struct *oldowner;
+	int ret;
+
+	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+
+	oldowner = pi_state->owner;
+	/* Owner died? */
+	if (!pi_state->owner)
+		newtid |= FUTEX_OWNER_DIED;
+
+	/*
+	 * We are here either because we stole the rtmutex from the
+	 * previous highest priority waiter or we are the highest priority
+	 * waiter but have failed to get the rtmutex the first time.
+	 *
+	 * We have to replace the newowner TID in the user space variable.
+	 * This must be atomic as we have to preserve the owner died bit here.
+	 *
+	 * Note: We write the user space value _before_ changing the pi_state
+	 * because we can fault here. Imagine swapped out pages or a fork
+	 * that marked all the anonymous memory readonly for cow.
+	 *
+	 * Modifying pi_state _before_ the user space value would leave the
+	 * pi_state in an inconsistent state when we fault here, because we
+	 * need to drop the locks to handle the fault. This might be observed
+	 * in the PID check in lookup_pi_state.
+	 */
+retry:
+	if (get_futex_value_locked(&uval, uaddr))
+		goto handle_fault;
+
+	for (;;) {
+		newval = (uval & FUTEX_OWNER_DIED) | newtid;
+
+		if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
+			goto handle_fault;
+		if (curval == uval)
+			break;
+		uval = curval;
+	}
+
+	/*
+	 * We fixed up user space. Now we need to fix the pi_state
+	 * itself.
+	 */
+	if (pi_state->owner != NULL) {
+		raw_spin_lock(&pi_state->owner->pi_lock);
+		WARN_ON(list_empty(&pi_state->list));
+		list_del_init(&pi_state->list);
+		raw_spin_unlock(&pi_state->owner->pi_lock);
+	}
+
+	pi_state->owner = newowner;
+
+	raw_spin_lock(&newowner->pi_lock);
+	WARN_ON(!list_empty(&pi_state->list));
+	list_add(&pi_state->list, &newowner->pi_state_list);
+	raw_spin_unlock(&newowner->pi_lock);
+	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+
+	return 0;
+
+	/*
+	 * To handle the page fault we need to drop the locks here. That gives
+	 * the other task (either the highest priority waiter itself or the
+	 * task which stole the rtmutex) the chance to try the fixup of the
+	 * pi_state. So once we are back from handling the fault we need to
+	 * check the pi_state after reacquiring the locks and before trying to
+	 * do another fixup. When the fixup has been done already we simply
+	 * return.
+	 *
+	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
+	 * drop hb->lock since the caller owns the hb -> futex_q relation.
+	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
+	 */
+handle_fault:
+	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+	spin_unlock(q->lock_ptr);
+
+	ret = fault_in_user_writeable(uaddr);
+
+	spin_lock(q->lock_ptr);
+	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+
+	/*
+	 * Check if someone else fixed it for us:
+	 */
+	if (pi_state->owner != oldowner) {
+		ret = 0;
+		goto out_unlock;
+	}
+
+	if (ret)
+		goto out_unlock;
+
+	goto retry;
+
+out_unlock:
+	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+	return ret;
+}
+
+/**
+ * fixup_owner() - Post lock pi_state and corner case management
+ * @uaddr:	user address of the futex
+ * @q:		futex_q (contains pi_state and access to the rt_mutex)
+ * @locked:	if the attempt to take the rt_mutex succeeded (1) or not (0)
+ *
+ * After attempting to lock an rt_mutex, this function is called to cleanup
+ * the pi_state owner as well as handle race conditions that may allow us to
+ * acquire the lock. Must be called with the hb lock held.
+ *
+ * Return:
+ *  1 - success, lock taken;
+ *  0 - success, lock not taken;
+ * <0 - on error (-EFAULT)
+ */
+static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
+{
+	int ret = 0;
+
+	if (locked) {
+		/*
+		 * Got the lock. We might not be the anticipated owner if we
+		 * did a lock-steal - fix up the PI-state in that case:
+		 *
+		 * We can safely read pi_state->owner without holding wait_lock
+		 * because we now own the rt_mutex, only the owner will attempt
+		 * to change it.
+		 */
+		if (q->pi_state->owner != current)
+			ret = fixup_pi_state_owner(uaddr, q, current);
+		goto out;
+	}
+
+	/*
+	 * Paranoia check. If we did not take the lock, then we should not be
+	 * the owner of the rt_mutex.
+	 */
+	if (rt_mutex_owner(&q->pi_state->pi_mutex) == current) {
+		printk(KERN_ERR "fixup_owner: ret = %d pi-mutex: %p "
+				"pi-state %p\n", ret,
+				q->pi_state->pi_mutex.owner,
+				q->pi_state->owner);
+	}
+
+out:
+	return ret ? ret : locked;
+}
+
+/*
+ * Userspace tried a 0 -> TID atomic transition of the futex value
+ * and failed. The kernel side here does the whole locking operation:
+ * if there are waiters then it will block as a consequence of relying
+ * on rt-mutexes, it does PI, etc. (Due to races the kernel might see
+ * a 0 value of the futex too.)
+ *
+ * Also serves as the futex trylock_pi() operation, with the
+ * corresponding semantics.
+ */
+static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
+			 ktime_t *time, int trylock)
+{
+	struct hrtimer_sleeper timeout, *to = NULL;
+	struct futex_pi_state *pi_state = NULL;
+	struct rt_mutex_waiter rt_waiter;
+	struct futex_hash_bucket *hb;
+	struct futex_q q = futex_q_init;
+	int res, ret;
+
+	if (refill_pi_state_cache())
+		return -ENOMEM;
+
+	if (time) {
+		to = &timeout;
+		hrtimer_init_on_stack(&to->timer, CLOCK_REALTIME,
+				      HRTIMER_MODE_ABS);
+		hrtimer_init_sleeper(to, current);
+		hrtimer_set_expires(&to->timer, *time);
+	}
+
+retry:
+	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q.key, VERIFY_WRITE);
+	if (unlikely(ret != 0))
+		goto out;
+
+retry_private:
+	hb = queue_lock(&q);
+
+	ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current, 0);
+	if (unlikely(ret)) {
+		/*
+		 * Atomic work succeeded and we got the lock,
+		 * or failed. Either way, we do _not_ block.
+		 */
+		switch (ret) {
+		case 1:
+			/* We got the lock. */
+			ret = 0;
+			goto out_unlock_put_key;
+		case -EFAULT:
+			goto uaddr_faulted;
+		case -EAGAIN:
+			/*
+			 * Two reasons for this:
+			 * - Task is exiting and we just wait for the
+			 *   exit to complete.
+			 * - The user space value changed.
+			 */
+			queue_unlock(hb);
+			put_futex_key(&q.key);
+			cond_resched();
+			goto retry;
+		default:
+			goto out_unlock_put_key;
+		}
+	}
+
+	WARN_ON(!q.pi_state);
+
+	/*
+	 * Only actually queue now that the atomic ops are done:
+	 */
+	__queue_me(&q, hb);
+
+	if (trylock) {
+		ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
+		/* Fixup the trylock return value: */
+		ret = ret ? 0 : -EWOULDBLOCK;
+		goto no_block;
+	}
+
+	rt_mutex_init_waiter(&rt_waiter);
+
+	/*
+	 * On PREEMPT_RT_FULL, when hb->lock becomes an rt_mutex, we must not
+	 * hold it while doing rt_mutex_start_proxy(), because then it will
+	 * include hb->lock in the blocking chain, even though we'll not in
+	 * fact hold it while blocking. This will lead it to report -EDEADLK
+	 * and BUG when futex_unlock_pi() interleaves with this.
+	 *
+	 * Therefore acquire wait_lock while holding hb->lock, but drop the
+	 * latter before calling rt_mutex_start_proxy_lock(). This still fully
+	 * serializes against futex_unlock_pi() as that does the exact same
+	 * lock handoff sequence.
+	 */
+	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
+	spin_unlock(q.lock_ptr);
+	ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current);
+	raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);
+
+	if (ret) {
+		if (ret == 1)
+			ret = 0;
+
+		spin_lock(q.lock_ptr);
+		goto no_block;
+	}
+
+
+	if (unlikely(to))
+		hrtimer_start_expires(&to->timer, HRTIMER_MODE_ABS);
+
+	ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
+
+	spin_lock(q.lock_ptr);
+	/*
+	 * If we failed to acquire the lock (signal/timeout), we must
+	 * first acquire the hb->lock before removing the lock from the
+	 * rt_mutex waitqueue, such that we can keep the hb and rt_mutex
+	 * wait lists consistent.
+	 *
+	 * In particular; it is important that futex_unlock_pi() can not
+	 * observe this inconsistency.
+	 */
+	if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter))
+		ret = 0;
+
+no_block:
+	/*
+	 * Fixup the pi_state owner and possibly acquire the lock if we
+	 * haven't already.
+	 */
+	res = fixup_owner(uaddr, &q, !ret);
+	/*
+	 * If fixup_owner() returned an error, propagate that.  If it acquired
+	 * the lock, clear our -ETIMEDOUT or -EINTR.
+	 */
+	if (res)
+		ret = (res < 0) ? res : 0;
+
+	/*
+	 * If fixup_owner() faulted and was unable to handle the fault, unlock
+	 * it and return the fault to userspace.
+	 */
+	if (ret && (rt_mutex_owner(&q.pi_state->pi_mutex) == current)) {
+		pi_state = q.pi_state;
+		get_pi_state(pi_state);
+	}
+
+	/* Unqueue and drop the lock */
+	unqueue_me_pi(&q);
+
+	if (pi_state) {
+		rt_mutex_futex_unlock(&pi_state->pi_mutex);
+		put_pi_state(pi_state);
+	}
+
+	goto out_put_key;
+
+out_unlock_put_key:
+	queue_unlock(hb);
+
+out_put_key:
+	put_futex_key(&q.key);
+out:
+	if (to) {
+		hrtimer_cancel(&to->timer);
+		destroy_hrtimer_on_stack(&to->timer);
+	}
+	return ret != -EINTR ? ret : -ERESTARTNOINTR;
+
+uaddr_faulted:
+	queue_unlock(hb);
+
+	ret = fault_in_user_writeable(uaddr);
+	if (ret)
+		goto out_put_key;
+
+	if (!(flags & FLAGS_SHARED))
+		goto retry_private;
+
+	put_futex_key(&q.key);
+	goto retry;
+}
+
+/*
+ * Userspace attempted a TID -> 0 atomic transition, and failed.
+ * This is the in-kernel slowpath: we look up the PI state (if any),
+ * and do the rt-mutex unlock.
+ */
+static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
+{
+	u32 uninitialized_var(curval), uval, vpid = task_pid_vnr(current);
+	union futex_key key = FUTEX_KEY_INIT;
+	struct futex_hash_bucket *hb;
+	struct futex_q *top_waiter;
+	int ret;
+
+retry:
+	if (get_user(uval, uaddr))
+		return -EFAULT;
+	/*
+	 * We release only a lock we actually own:
+	 */
+	if ((uval & FUTEX_TID_MASK) != vpid)
+		return -EPERM;
+
+	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_WRITE);
+	if (ret)
+		return ret;
+
+	hb = hash_futex(&key);
+	spin_lock(&hb->lock);
+
+	/*
+	 * Check waiters first. We do not trust user space values at
+	 * all and we at least want to know if user space fiddled
+	 * with the futex value instead of blindly unlocking.
+	 */
+	top_waiter = futex_top_waiter(hb, &key);
+	if (top_waiter) {
+		struct futex_pi_state *pi_state = top_waiter->pi_state;
+
+		ret = -EINVAL;
+		if (!pi_state)
+			goto out_unlock;
+
+		/*
+		 * If current does not own the pi_state then the futex is
+		 * inconsistent and user space fiddled with the futex value.
+		 */
+		if (pi_state->owner != current)
+			goto out_unlock;
+
+		get_pi_state(pi_state);
+		/*
+		 * By taking wait_lock while still holding hb->lock, we ensure
+		 * there is no point where we hold neither; and therefore
+		 * wake_futex_pi() must observe a state consistent with what we
+		 * observed.
+		 */
+		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+		spin_unlock(&hb->lock);
+
+		ret = wake_futex_pi(uaddr, uval, pi_state);
+
+		put_pi_state(pi_state);
+
+		/*
+		 * Success, we're done! No tricky corner cases.
+		 */
+		if (!ret)
+			goto out_putkey;
+		/*
+		 * The atomic access to the futex value generated a
+		 * pagefault, so retry the user-access and the wakeup:
+		 */
+		if (ret == -EFAULT)
+			goto pi_faulted;
+		/*
+		 * An unconditional UNLOCK_PI op raced against a waiter
+		 * setting the FUTEX_WAITERS bit. Try again.
+		 */
+		if (ret == -EAGAIN) {
+			put_futex_key(&key);
+			goto retry;
+		}
+		/*
+		 * wake_futex_pi has detected invalid state. Tell user
+		 * space.
+		 */
+		goto out_putkey;
+	}
+
+	/*
+	 * We have no kernel internal state, i.e. no waiters in the
+	 * kernel. Waiters which are about to queue themselves are stuck
+	 * on hb->lock. So we can safely ignore them. We preserve neither
+	 * the WAITERS bit nor the OWNER_DIED one. We are the
+	 * owner.
+	 */
+	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, 0)) {
+		spin_unlock(&hb->lock);
+		goto pi_faulted;
+	}
+
+	/*
+	 * If uval has changed, let user space handle it.
+	 */
+	ret = (curval == uval) ? 0 : -EAGAIN;
+
+out_unlock:
+	spin_unlock(&hb->lock);
+out_putkey:
+	put_futex_key(&key);
+	return ret;
+
+pi_faulted:
+	put_futex_key(&key);
+
+	ret = fault_in_user_writeable(uaddr);
+	if (!ret)
+		goto retry;
+
+	return ret;
+}
+
+/**
+ * handle_early_requeue_pi_wakeup() - Detect early wakeup on the initial futex
+ * @hb:		the hash_bucket futex_q was originally enqueued on
+ * @q:		the futex_q woken while waiting to be requeued
+ * @key2:	the futex_key of the requeue target futex
+ * @timeout:	the timeout associated with the wait (NULL if none)
+ *
+ * Detect if the task was woken on the initial futex as opposed to the requeue
+ * target futex.  If so, determine if it was a timeout or a signal that caused
+ * the wakeup and return the appropriate error code to the caller.  Must be
+ * called with the hb lock held.
+ *
+ * Return:
+ *  0 = no early wakeup detected;
+ * <0 = -ETIMEDOUT or -ERESTARTNOINTR
+ */
+static inline
+int handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb,
+				   struct futex_q *q, union futex_key *key2,
+				   struct hrtimer_sleeper *timeout)
+{
+	int ret = 0;
+
+	/*
+	 * With the hb lock held, we avoid races while we process the wakeup.
+	 * We only need to hold hb (and not hb2) to ensure atomicity as the
+	 * wakeup code can't change q.key from uaddr to uaddr2 if we hold hb.
+	 * It can't be requeued from uaddr2 to something else since we don't
+	 * support a PI aware source futex for requeue.
+	 */
+	if (!match_futex(&q->key, key2)) {
+		WARN_ON(q->lock_ptr && (&hb->lock != q->lock_ptr));
+		/*
+		 * We were woken prior to requeue by a timeout or a signal.
+		 * Unqueue the futex_q and determine which it was.
+		 */
+		plist_del(&q->list, &hb->chain);
+		hb_waiters_dec(hb);
+
+		/* Handle spurious wakeups gracefully */
+		ret = -EWOULDBLOCK;
+		if (timeout && !timeout->task)
+			ret = -ETIMEDOUT;
+		else if (signal_pending(current))
+			ret = -ERESTARTNOINTR;
+	}
+	return ret;
+}
+
+/**
+ * futex_wait_requeue_pi() - Wait on uaddr and take uaddr2
+ * @uaddr:	the futex we initially wait on (non-pi)
+ * @flags:	futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be
+ *		the same type, no requeueing from private to shared, etc.
+ * @val:	the expected value of uaddr
+ * @abs_time:	absolute timeout
+ * @bitset:	32 bit wakeup bitset set by userspace, defaults to all
+ * @uaddr2:	the pi futex we will take prior to returning to user-space
+ *
+ * The caller will wait on uaddr and will be requeued by futex_requeue() to
+ * uaddr2 which must be PI aware and unique from uaddr.  Normal wakeup will wake
+ * on uaddr2 and complete the acquisition of the rt_mutex prior to returning to
+ * userspace.  This ensures the rt_mutex maintains an owner when it has waiters;
+ * without one, the pi logic would not know which task to boost/deboost, if
+ * there was a need to.
+ *
+ * We call schedule in futex_wait_queue_me() when we enqueue and return there
+ * via the following--
+ * 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue()
+ * 2) wakeup on uaddr2 after a requeue
+ * 3) signal
+ * 4) timeout
+ *
+ * If 3, cleanup and return -ERESTARTNOINTR.
+ *
+ * If 2, we may then block on trying to take the rt_mutex and return via:
+ * 5) successful lock
+ * 6) signal
+ * 7) timeout
+ * 8) other lock acquisition failure
+ *
+ * If 6, return -EWOULDBLOCK (restarting the syscall would do the same).
+ *
+ * If 4 or 7, we cleanup and return with -ETIMEDOUT.
+ *
+ * Return:
+ *  0 - On success;
+ * <0 - On error
+ */
+static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+				 u32 val, ktime_t *abs_time, u32 bitset,
+				 u32 __user *uaddr2)
+{
+	struct hrtimer_sleeper timeout, *to = NULL;
+	struct futex_pi_state *pi_state = NULL;
+	struct rt_mutex_waiter rt_waiter;
+	struct futex_hash_bucket *hb;
+	union futex_key key2 = FUTEX_KEY_INIT;
+	struct futex_q q = futex_q_init;
+	int res, ret;
+
+	if (uaddr == uaddr2)
+		return -EINVAL;
+
+	if (!bitset)
+		return -EINVAL;
+
+	if (abs_time) {
+		to = &timeout;
+		hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
+				      CLOCK_REALTIME : CLOCK_MONOTONIC,
+				      HRTIMER_MODE_ABS);
+		hrtimer_init_sleeper(to, current);
+		hrtimer_set_expires_range_ns(&to->timer, *abs_time,
+					     current->timer_slack_ns);
+	}
+
+	/*
+	 * The waiter is allocated on our stack, manipulated by the requeue
+	 * code while we sleep on uaddr.
+	 */
+	rt_mutex_init_waiter(&rt_waiter);
+
+	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
+	if (unlikely(ret != 0))
+		goto out;
+
+	q.bitset = bitset;
+	q.rt_waiter = &rt_waiter;
+	q.requeue_pi_key = &key2;
+
+	/*
+	 * Prepare to wait on uaddr. On success, increments q.key (key1) ref
+	 * count.
+	 */
+	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
+	if (ret)
+		goto out_key2;
+
+	/*
+	 * The check above which compares uaddrs is not sufficient for
+	 * shared futexes. We need to compare the keys:
+	 */
+	if (match_futex(&q.key, &key2)) {
+		queue_unlock(hb);
+		ret = -EINVAL;
+		goto out_put_keys;
+	}
+
+	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
+	futex_wait_queue_me(hb, &q, to);
+
+	spin_lock(&hb->lock);
+	ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
+	spin_unlock(&hb->lock);
+	if (ret)
+		goto out_put_keys;
+
+	/*
+	 * In order for us to be here, we know our q.key == key2, and since
+	 * we took the hb->lock above, we also know that futex_requeue() has
+	 * completed and we no longer have to concern ourselves with a wakeup
+	 * race with the atomic proxy lock acquisition by the requeue code. The
+	 * futex_requeue dropped our key1 reference and incremented our key2
+	 * reference count.
+	 */
+
+	/* Check if the requeue code acquired the second futex for us. */
+	if (!q.rt_waiter) {
+		/*
+		 * Got the lock. We might not be the anticipated owner if we
+		 * did a lock-steal - fix up the PI-state in that case.
+		 */
+		if (q.pi_state && (q.pi_state->owner != current)) {
+			spin_lock(q.lock_ptr);
+			ret = fixup_pi_state_owner(uaddr2, &q, current);
+			if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
+				pi_state = q.pi_state;
+				get_pi_state(pi_state);
+			}
+			/*
+			 * Drop the reference to the pi state which
+			 * the requeue_pi() code acquired for us.
+			 */
+			put_pi_state(q.pi_state);
+			spin_unlock(q.lock_ptr);
+		}
+	} else {
+		struct rt_mutex *pi_mutex;
+
+		/*
+		 * We have been woken up by futex_unlock_pi(), a timeout, or a
+		 * signal.  futex_unlock_pi() will not destroy the lock_ptr nor
+		 * the pi_state.
+		 */
+		WARN_ON(!q.pi_state);
+		pi_mutex = &q.pi_state->pi_mutex;
+		ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
+
+		spin_lock(q.lock_ptr);
+		if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
+			ret = 0;
+
+		debug_rt_mutex_free_waiter(&rt_waiter);
+		/*
+		 * Fixup the pi_state owner and possibly acquire the lock if we
+		 * haven't already.
+		 */
+		res = fixup_owner(uaddr2, &q, !ret);
+		/*
+		 * If fixup_owner() returned an error, propagate that.  If it
+		 * acquired the lock, clear -ETIMEDOUT or -EINTR.
+		 */
+		if (res)
+			ret = (res < 0) ? res : 0;
+
+		/*
+		 * If fixup_pi_state_owner() faulted and was unable to handle
+		 * the fault, unlock the rt_mutex and return the fault to
+		 * userspace.
+		 */
+		if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
+			pi_state = q.pi_state;
+			get_pi_state(pi_state);
+		}
+
+		/* Unqueue and drop the lock. */
+		unqueue_me_pi(&q);
+	}
+
+	if (pi_state) {
+		rt_mutex_futex_unlock(&pi_state->pi_mutex);
+		put_pi_state(pi_state);
+	}
+
+	if (ret == -EINTR) {
+		/*
+		 * We've already been requeued, but cannot restart by calling
+		 * futex_lock_pi() directly. We could restart this syscall, but
+		 * it would detect that the user space "val" changed and return
+		 * -EWOULDBLOCK.  Save the overhead of the restart and return
+		 * -EWOULDBLOCK directly.
+		 */
+		ret = -EWOULDBLOCK;
+	}
+
+out_put_keys:
+	put_futex_key(&q.key);
+out_key2:
+	put_futex_key(&key2);
+
+out:
+	if (to) {
+		hrtimer_cancel(&to->timer);
+		destroy_hrtimer_on_stack(&to->timer);
+	}
+	return ret;
+}
-- 
2.9.4
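
For reference, the protocol completed by the futex_lock_pi()/futex_unlock_pi()
slow paths above is driven from user space: the lock fast path is a 0 -> TID
compare-and-swap on the futex word, the unlock fast path is the reverse
TID -> 0 transition, and only on contention does the task enter the kernel via
FUTEX_LOCK_PI / FUTEX_UNLOCK_PI. A minimal user-space sketch of that protocol
(illustrative only; the sys_futex()/pi_lock_* helpers are made up for this
example, and real implementations such as glibc's PI mutexes must also honour
FUTEX_WAITERS and FUTEX_OWNER_DIED and handle errors):

  #include <stdint.h>
  #include <stdatomic.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/futex.h>

  static _Atomic uint32_t pi_lock;	/* 0 == unlocked, otherwise owner TID */

  static long sys_futex(void *uaddr, int op)
  {
  	return syscall(SYS_futex, uaddr, op, 0, NULL, NULL, 0);
  }

  static void pi_lock_acquire(void)
  {
  	uint32_t expected = 0;
  	uint32_t tid = syscall(SYS_gettid);

  	/* Fast path: 0 -> TID done entirely in user space. */
  	if (atomic_compare_exchange_strong(&pi_lock, &expected, tid))
  		return;
  	/* Contended: let futex_lock_pi() do the whole locking operation. */
  	sys_futex(&pi_lock, FUTEX_LOCK_PI);
  }

  static void pi_lock_release(void)
  {
  	uint32_t tid = syscall(SYS_gettid);

  	/* Fast path: TID -> 0, only valid while there are no waiters. */
  	if (atomic_compare_exchange_strong(&pi_lock, &tid, 0))
  		return;
  	/* Waiters present (or state changed): futex_unlock_pi() hands over. */
  	sys_futex(&pi_lock, FUTEX_UNLOCK_PI);
  }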

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 4/8] sched/deadline: move dl related code out of sched/core.c
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
                   ` (2 preceding siblings ...)
  2017-06-06 23:24 ` [PATCH v2 3/8] futex: make PI support optional Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 5/8] sched/rt: move rt " Nicolas Pitre
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

... to sched/deadline.c. This helps make sched/core.c smaller and
hopefully easier to understand and maintain. It will also help with
configuring the deadline scheduling class out of the kernel build.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 kernel/sched/core.c     | 335 +----------------------------------------------
 kernel/sched/deadline.c | 336 ++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h    |  14 ++
 3 files changed, 356 insertions(+), 329 deletions(-)
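
The moved admission-control helpers (sched_dl_overflow(), dl_task_can_attach(),
dl_cpu_busy()) all rest on the same fixed-point arithmetic: to_ratio() expresses
runtime/period in 1/2^20 units, and a request is accepted while the already
allocated bandwidth plus the new one fits under the per-CPU cap times the number
of CPUs in the root domain. A stand-alone sketch of that arithmetic (the values
are illustrative, and the admission test is a simplified stand-in for
__dl_overflow(), which additionally handles the unlimited -1 cap and parameter
changes):

  #include <stdint.h>
  #include <stdio.h>

  /* Same fixed-point form as to_ratio() in kernel/sched/core.c. */
  static uint64_t to_ratio(uint64_t period, uint64_t runtime)
  {
  	return (runtime << 20) / period;	/* div64_u64() in the kernel */
  }

  int main(void)
  {
  	uint64_t cap = to_ratio(1000000, 950000);	/* 95%, the usual rt/dl default */
  	uint64_t new_bw = to_ratio(100000000, 30000000); /* 30ms every 100ms */
  	uint64_t total_bw = 0;
  	int cpus = 4;

  	for (int i = 0; ; i++) {
  		/* Simplified __dl_overflow(): reject once we would exceed cap * cpus. */
  		if (total_bw + new_bw > cap * cpus) {
  			printf("task %d rejected (-EBUSY)\n", i);
  			break;
  		}
  		total_bw += new_bw;
  		printf("task %d admitted, total_bw=%llu of %llu\n", i,
  		       (unsigned long long)total_bw,
  		       (unsigned long long)(cap * cpus));
  	}
  	return 0;
  }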

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 94fa712791..93ce28ea34 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2148,23 +2148,6 @@ int wake_up_state(struct task_struct *p, unsigned int state)
 }
 
 /*
- * This function clears the sched_dl_entity static params.
- */
-void __dl_clear_params(struct task_struct *p)
-{
-	struct sched_dl_entity *dl_se = &p->dl;
-
-	dl_se->dl_runtime = 0;
-	dl_se->dl_deadline = 0;
-	dl_se->dl_period = 0;
-	dl_se->flags = 0;
-	dl_se->dl_bw = 0;
-
-	dl_se->dl_throttled = 0;
-	dl_se->dl_yielded = 0;
-}
-
-/*
  * Perform scheduler related setup for a newly forked process p.
  * p is forked by current.
  *
@@ -2443,90 +2426,6 @@ unsigned long to_ratio(u64 period, u64 runtime)
 	return div64_u64(runtime << 20, period);
 }
 
-#ifdef CONFIG_SMP
-inline struct dl_bw *dl_bw_of(int i)
-{
-	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
-			 "sched RCU must be held");
-	return &cpu_rq(i)->rd->dl_bw;
-}
-
-static inline int dl_bw_cpus(int i)
-{
-	struct root_domain *rd = cpu_rq(i)->rd;
-	int cpus = 0;
-
-	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
-			 "sched RCU must be held");
-	for_each_cpu_and(i, rd->span, cpu_active_mask)
-		cpus++;
-
-	return cpus;
-}
-#else
-inline struct dl_bw *dl_bw_of(int i)
-{
-	return &cpu_rq(i)->dl.dl_bw;
-}
-
-static inline int dl_bw_cpus(int i)
-{
-	return 1;
-}
-#endif
-
-/*
- * We must be sure that accepting a new task (or allowing changing the
- * parameters of an existing one) is consistent with the bandwidth
- * constraints. If yes, this function also accordingly updates the currently
- * allocated bandwidth to reflect the new situation.
- *
- * This function is called while holding p's rq->lock.
- *
- * XXX we should delay bw change until the task's 0-lag point, see
- * __setparam_dl().
- */
-static int dl_overflow(struct task_struct *p, int policy,
-		       const struct sched_attr *attr)
-{
-
-	struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
-	u64 period = attr->sched_period ?: attr->sched_deadline;
-	u64 runtime = attr->sched_runtime;
-	u64 new_bw = dl_policy(policy) ? to_ratio(period, runtime) : 0;
-	int cpus, err = -1;
-
-	/* !deadline task may carry old deadline bandwidth */
-	if (new_bw == p->dl.dl_bw && task_has_dl_policy(p))
-		return 0;
-
-	/*
-	 * Either if a task, enters, leave, or stays -deadline but changes
-	 * its parameters, we may need to update accordingly the total
-	 * allocated bandwidth of the container.
-	 */
-	raw_spin_lock(&dl_b->lock);
-	cpus = dl_bw_cpus(task_cpu(p));
-	if (dl_policy(policy) && !task_has_dl_policy(p) &&
-	    !__dl_overflow(dl_b, cpus, 0, new_bw)) {
-		__dl_add(dl_b, new_bw);
-		err = 0;
-	} else if (dl_policy(policy) && task_has_dl_policy(p) &&
-		   !__dl_overflow(dl_b, cpus, p->dl.dl_bw, new_bw)) {
-		__dl_clear(dl_b, p->dl.dl_bw);
-		__dl_add(dl_b, new_bw);
-		err = 0;
-	} else if (!dl_policy(policy) && task_has_dl_policy(p)) {
-		__dl_clear(dl_b, p->dl.dl_bw);
-		err = 0;
-	}
-	raw_spin_unlock(&dl_b->lock);
-
-	return err;
-}
-
-extern void init_dl_bw(struct dl_bw *dl_b);
-
 /*
  * wake_up_new_task - wake up a newly created task for the first time.
  *
@@ -4009,46 +3908,6 @@ static struct task_struct *find_process_by_pid(pid_t pid)
 }
 
 /*
- * This function initializes the sched_dl_entity of a newly becoming
- * SCHED_DEADLINE task.
- *
- * Only the static values are considered here, the actual runtime and the
- * absolute deadline will be properly calculated when the task is enqueued
- * for the first time with its new policy.
- */
-static void
-__setparam_dl(struct task_struct *p, const struct sched_attr *attr)
-{
-	struct sched_dl_entity *dl_se = &p->dl;
-
-	dl_se->dl_runtime = attr->sched_runtime;
-	dl_se->dl_deadline = attr->sched_deadline;
-	dl_se->dl_period = attr->sched_period ?: dl_se->dl_deadline;
-	dl_se->flags = attr->sched_flags;
-	dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
-
-	/*
-	 * Changing the parameters of a task is 'tricky' and we're not doing
-	 * the correct thing -- also see task_dead_dl() and switched_from_dl().
-	 *
-	 * What we SHOULD do is delay the bandwidth release until the 0-lag
-	 * point. This would include retaining the task_struct until that time
-	 * and change dl_overflow() to not immediately decrement the current
-	 * amount.
-	 *
-	 * Instead we retain the current runtime/deadline and let the new
-	 * parameters take effect after the current reservation period lapses.
-	 * This is safe (albeit pessimistic) because the 0-lag point is always
-	 * before the current scheduling deadline.
-	 *
-	 * We can still have temporary overloads because we do not delay the
-	 * change in bandwidth until that time; so admission control is
-	 * not on the safe side. It does however guarantee tasks will never
-	 * consume more than promised.
-	 */
-}
-
-/*
  * sched_setparam() passes in -1 for its policy, to let the functions
  * it calls know not to change it.
  */
@@ -4101,59 +3960,6 @@ static void __setscheduler(struct rq *rq, struct task_struct *p,
 		p->sched_class = &fair_sched_class;
 }
 
-static void
-__getparam_dl(struct task_struct *p, struct sched_attr *attr)
-{
-	struct sched_dl_entity *dl_se = &p->dl;
-
-	attr->sched_priority = p->rt_priority;
-	attr->sched_runtime = dl_se->dl_runtime;
-	attr->sched_deadline = dl_se->dl_deadline;
-	attr->sched_period = dl_se->dl_period;
-	attr->sched_flags = dl_se->flags;
-}
-
-/*
- * This function validates the new parameters of a -deadline task.
- * We ask for the deadline not being zero, and greater or equal
- * than the runtime, as well as the period of being zero or
- * greater than deadline. Furthermore, we have to be sure that
- * user parameters are above the internal resolution of 1us (we
- * check sched_runtime only since it is always the smaller one) and
- * below 2^63 ns (we have to check both sched_deadline and
- * sched_period, as the latter can be zero).
- */
-static bool
-__checkparam_dl(const struct sched_attr *attr)
-{
-	/* deadline != 0 */
-	if (attr->sched_deadline == 0)
-		return false;
-
-	/*
-	 * Since we truncate DL_SCALE bits, make sure we're at least
-	 * that big.
-	 */
-	if (attr->sched_runtime < (1ULL << DL_SCALE))
-		return false;
-
-	/*
-	 * Since we use the MSB for wrap-around and sign issues, make
-	 * sure it's not set (mind that period can be equal to zero).
-	 */
-	if (attr->sched_deadline & (1ULL << 63) ||
-	    attr->sched_period & (1ULL << 63))
-		return false;
-
-	/* runtime <= deadline <= period (if period != 0) */
-	if ((attr->sched_period != 0 &&
-	     attr->sched_period < attr->sched_deadline) ||
-	    attr->sched_deadline < attr->sched_runtime)
-		return false;
-
-	return true;
-}
-
 /*
  * Check the target process has a UID that matches the current process's:
  */
@@ -4170,19 +3976,6 @@ static bool check_same_owner(struct task_struct *p)
 	return match;
 }
 
-static bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
-{
-	struct sched_dl_entity *dl_se = &p->dl;
-
-	if (dl_se->dl_runtime != attr->sched_runtime ||
-		dl_se->dl_deadline != attr->sched_deadline ||
-		dl_se->dl_period != attr->sched_period ||
-		dl_se->flags != attr->sched_flags)
-		return true;
-
-	return false;
-}
-
 static int __sched_setscheduler(struct task_struct *p,
 				const struct sched_attr *attr,
 				bool user, bool pi)
@@ -4362,7 +4155,7 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * of a SCHED_DEADLINE task) we need to check if enough bandwidth
 	 * is available.
 	 */
-	if ((dl_policy(policy) || dl_task(p)) && dl_overflow(p, policy, attr)) {
+	if ((dl_policy(policy) || dl_task(p)) && sched_dl_overflow(p, policy, attr)) {
 		task_rq_unlock(rq, p, &rf);
 		return -EBUSY;
 	}
@@ -5468,23 +5261,12 @@ void init_idle(struct task_struct *idle, int cpu)
 int cpuset_cpumask_can_shrink(const struct cpumask *cur,
 			      const struct cpumask *trial)
 {
-	int ret = 1, trial_cpus;
-	struct dl_bw *cur_dl_b;
-	unsigned long flags;
+	int ret = 1;
 
 	if (!cpumask_weight(cur))
 		return ret;
 
-	rcu_read_lock_sched();
-	cur_dl_b = dl_bw_of(cpumask_any(cur));
-	trial_cpus = cpumask_weight(trial);
-
-	raw_spin_lock_irqsave(&cur_dl_b->lock, flags);
-	if (cur_dl_b->bw != -1 &&
-	    cur_dl_b->bw * trial_cpus < cur_dl_b->total_bw)
-		ret = 0;
-	raw_spin_unlock_irqrestore(&cur_dl_b->lock, flags);
-	rcu_read_unlock_sched();
+	ret = dl_cpuset_cpumask_can_shrink(cur, trial);
 
 	return ret;
 }
@@ -5509,34 +5291,8 @@ int task_can_attach(struct task_struct *p,
 	}
 
 	if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span,
-					      cs_cpus_allowed)) {
-		unsigned int dest_cpu = cpumask_any_and(cpu_active_mask,
-							cs_cpus_allowed);
-		struct dl_bw *dl_b;
-		bool overflow;
-		int cpus;
-		unsigned long flags;
-
-		rcu_read_lock_sched();
-		dl_b = dl_bw_of(dest_cpu);
-		raw_spin_lock_irqsave(&dl_b->lock, flags);
-		cpus = dl_bw_cpus(dest_cpu);
-		overflow = __dl_overflow(dl_b, cpus, 0, p->dl.dl_bw);
-		if (overflow)
-			ret = -EBUSY;
-		else {
-			/*
-			 * We reserve space for this task in the destination
-			 * root_domain, as we can't fail after this point.
-			 * We will free resources in the source root_domain
-			 * later on (see set_cpus_allowed_dl()).
-			 */
-			__dl_add(dl_b, p->dl.dl_bw);
-		}
-		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-		rcu_read_unlock_sched();
-
-	}
+					      cs_cpus_allowed))
+		ret = dl_task_can_attach(p, cs_cpus_allowed);
 
 out:
 	return ret;
@@ -5804,23 +5560,8 @@ static void cpuset_cpu_active(void)
 
 static int cpuset_cpu_inactive(unsigned int cpu)
 {
-	unsigned long flags;
-	struct dl_bw *dl_b;
-	bool overflow;
-	int cpus;
-
 	if (!cpuhp_tasks_frozen) {
-		rcu_read_lock_sched();
-		dl_b = dl_bw_of(cpu);
-
-		raw_spin_lock_irqsave(&dl_b->lock, flags);
-		cpus = dl_bw_cpus(cpu);
-		overflow = __dl_overflow(dl_b, cpus, 0, 0);
-		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-
-		rcu_read_unlock_sched();
-
-		if (overflow)
+		if (dl_cpu_busy(cpu))
 			return -EBUSY;
 		cpuset_update_active_cpus();
 	} else {
@@ -6740,70 +6481,6 @@ static int sched_rt_global_constraints(void)
 }
 #endif /* CONFIG_RT_GROUP_SCHED */
 
-static int sched_dl_global_validate(void)
-{
-	u64 runtime = global_rt_runtime();
-	u64 period = global_rt_period();
-	u64 new_bw = to_ratio(period, runtime);
-	struct dl_bw *dl_b;
-	int cpu, ret = 0;
-	unsigned long flags;
-
-	/*
-	 * Here we want to check the bandwidth not being set to some
-	 * value smaller than the currently allocated bandwidth in
-	 * any of the root_domains.
-	 *
-	 * FIXME: Cycling on all the CPUs is overdoing, but simpler than
-	 * cycling on root_domains... Discussion on different/better
-	 * solutions is welcome!
-	 */
-	for_each_possible_cpu(cpu) {
-		rcu_read_lock_sched();
-		dl_b = dl_bw_of(cpu);
-
-		raw_spin_lock_irqsave(&dl_b->lock, flags);
-		if (new_bw < dl_b->total_bw)
-			ret = -EBUSY;
-		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-
-		rcu_read_unlock_sched();
-
-		if (ret)
-			break;
-	}
-
-	return ret;
-}
-
-static void sched_dl_do_global(void)
-{
-	u64 new_bw = -1;
-	struct dl_bw *dl_b;
-	int cpu;
-	unsigned long flags;
-
-	def_dl_bandwidth.dl_period = global_rt_period();
-	def_dl_bandwidth.dl_runtime = global_rt_runtime();
-
-	if (global_rt_runtime() != RUNTIME_INF)
-		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
-
-	/*
-	 * FIXME: As above...
-	 */
-	for_each_possible_cpu(cpu) {
-		rcu_read_lock_sched();
-		dl_b = dl_bw_of(cpu);
-
-		raw_spin_lock_irqsave(&dl_b->lock, flags);
-		dl_b->bw = new_bw;
-		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-
-		rcu_read_unlock_sched();
-	}
-}
-
 static int sched_rt_global_validate(void)
 {
 	if (sysctl_sched_rt_period <= 0)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a2ce590156..e879feae5f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -17,6 +17,7 @@
 #include "sched.h"
 
 #include <linux/slab.h>
+#include <uapi/linux/sched/types.h>
 
 struct dl_bandwidth def_dl_bandwidth;
 
@@ -1854,6 +1855,341 @@ const struct sched_class dl_sched_class = {
 	.update_curr		= update_curr_dl,
 };
 
+#ifdef CONFIG_SMP
+struct dl_bw *dl_bw_of(int i)
+{
+	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
+			 "sched RCU must be held");
+	return &cpu_rq(i)->rd->dl_bw;
+}
+
+static inline int dl_bw_cpus(int i)
+{
+	struct root_domain *rd = cpu_rq(i)->rd;
+	int cpus = 0;
+
+	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
+			 "sched RCU must be held");
+	for_each_cpu_and(i, rd->span, cpu_active_mask)
+		cpus++;
+
+	return cpus;
+}
+#else
+struct dl_bw *dl_bw_of(int i)
+{
+	return &cpu_rq(i)->dl.dl_bw;
+}
+
+static inline int dl_bw_cpus(int i)
+{
+	return 1;
+}
+#endif
+
+int sched_dl_global_validate(void)
+{
+	u64 runtime = global_rt_runtime();
+	u64 period = global_rt_period();
+	u64 new_bw = to_ratio(period, runtime);
+	struct dl_bw *dl_b;
+	int cpu, ret = 0;
+	unsigned long flags;
+
+	/*
+	 * Here we want to check the bandwidth not being set to some
+	 * value smaller than the currently allocated bandwidth in
+	 * any of the root_domains.
+	 *
+	 * FIXME: Cycling on all the CPUs is overdoing it, but simpler than
+	 * cycling on root_domains... Discussion on different/better
+	 * solutions is welcome!
+	 */
+	for_each_possible_cpu(cpu) {
+		rcu_read_lock_sched();
+		dl_b = dl_bw_of(cpu);
+
+		raw_spin_lock_irqsave(&dl_b->lock, flags);
+		if (new_bw < dl_b->total_bw)
+			ret = -EBUSY;
+		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+
+		rcu_read_unlock_sched();
+
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+void sched_dl_do_global(void)
+{
+	u64 new_bw = -1;
+	struct dl_bw *dl_b;
+	int cpu;
+	unsigned long flags;
+
+	def_dl_bandwidth.dl_period = global_rt_period();
+	def_dl_bandwidth.dl_runtime = global_rt_runtime();
+
+	if (global_rt_runtime() != RUNTIME_INF)
+		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
+
+	/*
+	 * FIXME: As above...
+	 */
+	for_each_possible_cpu(cpu) {
+		rcu_read_lock_sched();
+		dl_b = dl_bw_of(cpu);
+
+		raw_spin_lock_irqsave(&dl_b->lock, flags);
+		dl_b->bw = new_bw;
+		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+
+		rcu_read_unlock_sched();
+	}
+}
+
+/*
+ * We must be sure that accepting a new task (or allowing changing the
+ * parameters of an existing one) is consistent with the bandwidth
+ * constraints. If yes, this function also accordingly updates the currently
+ * allocated bandwidth to reflect the new situation.
+ *
+ * This function is called while holding p's rq->lock.
+ *
+ * XXX we should delay bw change until the task's 0-lag point, see
+ * __setparam_dl().
+ */
+int sched_dl_overflow(struct task_struct *p, int policy,
+		      const struct sched_attr *attr)
+{
+	struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
+	u64 period = attr->sched_period ?: attr->sched_deadline;
+	u64 runtime = attr->sched_runtime;
+	u64 new_bw = dl_policy(policy) ? to_ratio(period, runtime) : 0;
+	int cpus, err = -1;
+
+	/* !deadline task may carry old deadline bandwidth */
+	if (new_bw == p->dl.dl_bw && task_has_dl_policy(p))
+		return 0;
+
+	/*
+	 * Whether a task enters, leaves, or stays -deadline but changes
+	 * its parameters, we may need to update the total allocated
+	 * bandwidth of the container accordingly.
+	 */
+	raw_spin_lock(&dl_b->lock);
+	cpus = dl_bw_cpus(task_cpu(p));
+	if (dl_policy(policy) && !task_has_dl_policy(p) &&
+	    !__dl_overflow(dl_b, cpus, 0, new_bw)) {
+		__dl_add(dl_b, new_bw);
+		err = 0;
+	} else if (dl_policy(policy) && task_has_dl_policy(p) &&
+		   !__dl_overflow(dl_b, cpus, p->dl.dl_bw, new_bw)) {
+		__dl_clear(dl_b, p->dl.dl_bw);
+		__dl_add(dl_b, new_bw);
+		err = 0;
+	} else if (!dl_policy(policy) && task_has_dl_policy(p)) {
+		__dl_clear(dl_b, p->dl.dl_bw);
+		err = 0;
+	}
+	raw_spin_unlock(&dl_b->lock);
+
+	return err;
+}
+
+/*
+ * This function initializes the sched_dl_entity of a task that is
+ * becoming a SCHED_DEADLINE task.
+ *
+ * Only the static values are considered here, the actual runtime and the
+ * absolute deadline will be properly calculated when the task is enqueued
+ * for the first time with its new policy.
+ */
+void __setparam_dl(struct task_struct *p, const struct sched_attr *attr)
+{
+	struct sched_dl_entity *dl_se = &p->dl;
+
+	dl_se->dl_runtime = attr->sched_runtime;
+	dl_se->dl_deadline = attr->sched_deadline;
+	dl_se->dl_period = attr->sched_period ?: dl_se->dl_deadline;
+	dl_se->flags = attr->sched_flags;
+	dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
+
+	/*
+	 * Changing the parameters of a task is 'tricky' and we're not doing
+	 * the correct thing -- also see task_dead_dl() and switched_from_dl().
+	 *
+	 * What we SHOULD do is delay the bandwidth release until the 0-lag
+	 * point. This would include retaining the task_struct until that time
+	 * and change sched_dl_overflow() to not immediately decrement the
+	 * current amount.
+	 *
+	 * Instead we retain the current runtime/deadline and let the new
+	 * parameters take effect after the current reservation period lapses.
+	 * This is safe (albeit pessimistic) because the 0-lag point is always
+	 * before the current scheduling deadline.
+	 *
+	 * We can still have temporary overloads because we do not delay the
+	 * change in bandwidth until that time; so admission control is
+	 * not on the safe side. It does however guarantee tasks will never
+	 * consume more than promised.
+	 */
+}
+
+void __getparam_dl(struct task_struct *p, struct sched_attr *attr)
+{
+	struct sched_dl_entity *dl_se = &p->dl;
+
+	attr->sched_priority = p->rt_priority;
+	attr->sched_runtime = dl_se->dl_runtime;
+	attr->sched_deadline = dl_se->dl_deadline;
+	attr->sched_period = dl_se->dl_period;
+	attr->sched_flags = dl_se->flags;
+}
+
+/*
+ * This function validates the new parameters of a -deadline task.
+ * We require the deadline to be non-zero and greater than or equal
+ * to the runtime, and the period to be either zero or greater than
+ * or equal to the deadline. Furthermore, we have to be sure that
+ * user parameters are above the internal resolution of 1us (we
+ * check sched_runtime only since it is always the smaller one) and
+ * below 2^63 ns (we have to check both sched_deadline and
+ * sched_period, as the latter can be zero).
+ */
+bool __checkparam_dl(const struct sched_attr *attr)
+{
+	/* deadline != 0 */
+	if (attr->sched_deadline == 0)
+		return false;
+
+	/*
+	 * Since we truncate DL_SCALE bits, make sure we're at least
+	 * that big.
+	 */
+	if (attr->sched_runtime < (1ULL << DL_SCALE))
+		return false;
+
+	/*
+	 * Since we use the MSB for wrap-around and sign issues, make
+	 * sure it's not set (mind that period can be equal to zero).
+	 */
+	if (attr->sched_deadline & (1ULL << 63) ||
+	    attr->sched_period & (1ULL << 63))
+		return false;
+
+	/* runtime <= deadline <= period (if period != 0) */
+	if ((attr->sched_period != 0 &&
+	     attr->sched_period < attr->sched_deadline) ||
+	    attr->sched_deadline < attr->sched_runtime)
+		return false;
+
+	return true;
+}
+
+/*
+ * This function clears the sched_dl_entity static params.
+ */
+void __dl_clear_params(struct task_struct *p)
+{
+	struct sched_dl_entity *dl_se = &p->dl;
+
+	dl_se->dl_runtime = 0;
+	dl_se->dl_deadline = 0;
+	dl_se->dl_period = 0;
+	dl_se->flags = 0;
+	dl_se->dl_bw = 0;
+
+	dl_se->dl_throttled = 0;
+	dl_se->dl_yielded = 0;
+}
+
+bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
+{
+	struct sched_dl_entity *dl_se = &p->dl;
+
+	if (dl_se->dl_runtime != attr->sched_runtime ||
+	    dl_se->dl_deadline != attr->sched_deadline ||
+	    dl_se->dl_period != attr->sched_period ||
+	    dl_se->flags != attr->sched_flags)
+		return true;
+
+	return false;
+}
+
+#ifdef CONFIG_SMP
+int dl_task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed)
+{
+	unsigned int dest_cpu = cpumask_any_and(cpu_active_mask,
+							cs_cpus_allowed);
+	struct dl_bw *dl_b;
+	bool overflow;
+	int cpus, ret;
+	unsigned long flags;
+
+	rcu_read_lock_sched();
+	dl_b = dl_bw_of(dest_cpu);
+	raw_spin_lock_irqsave(&dl_b->lock, flags);
+	cpus = dl_bw_cpus(dest_cpu);
+	overflow = __dl_overflow(dl_b, cpus, 0, p->dl.dl_bw);
+	if (overflow)
+		ret = -EBUSY;
+	else {
+		/*
+		 * We reserve space for this task in the destination
+		 * root_domain, as we can't fail after this point.
+		 * We will free resources in the source root_domain
+		 * later on (see set_cpus_allowed_dl()).
+		 */
+		__dl_add(dl_b, p->dl.dl_bw);
+		ret = 0;
+	}
+	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+	rcu_read_unlock_sched();
+	return ret;
+}
+
+int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
+				 const struct cpumask *trial)
+{
+	int ret = 1, trial_cpus;
+	struct dl_bw *cur_dl_b;
+	unsigned long flags;
+
+	rcu_read_lock_sched();
+	cur_dl_b = dl_bw_of(cpumask_any(cur));
+	trial_cpus = cpumask_weight(trial);
+
+	raw_spin_lock_irqsave(&cur_dl_b->lock, flags);
+	if (cur_dl_b->bw != -1 &&
+	    cur_dl_b->bw * trial_cpus < cur_dl_b->total_bw)
+		ret = 0;
+	raw_spin_unlock_irqrestore(&cur_dl_b->lock, flags);
+	rcu_read_unlock_sched();
+	return ret;
+}
+
+bool dl_cpu_busy(unsigned int cpu)
+{
+	unsigned long flags;
+	struct dl_bw *dl_b;
+	bool overflow;
+	int cpus;
+
+	rcu_read_lock_sched();
+	dl_b = dl_bw_of(cpu);
+	raw_spin_lock_irqsave(&dl_b->lock, flags);
+	cpus = dl_bw_cpus(cpu);
+	overflow = __dl_overflow(dl_b, cpus, 0, 0);
+	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+	rcu_read_unlock_sched();
+	return overflow;
+}
+#endif
+
 #ifdef CONFIG_SCHED_DEBUG
 extern void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 053f60afb7..4a845c19b8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -245,6 +245,20 @@ bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
 }
 
 extern void init_dl_bw(struct dl_bw *dl_b);
+extern int sched_dl_global_validate(void);
+extern void sched_dl_do_global(void);
+extern int sched_dl_overflow(struct task_struct *p, int policy,
+			     const struct sched_attr *attr);
+extern void __setparam_dl(struct task_struct *p, const struct sched_attr *attr);
+extern void __getparam_dl(struct task_struct *p, struct sched_attr *attr);
+extern bool __checkparam_dl(const struct sched_attr *attr);
+extern void __dl_clear_params(struct task_struct *p);
+extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr);
+extern int dl_task_can_attach(struct task_struct *p,
+			      const struct cpumask *cs_cpus_allowed);
+extern int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
+					const struct cpumask *trial);
+extern bool dl_cpu_busy(unsigned int cpu);
 
 #ifdef CONFIG_CGROUP_SCHED
 
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 5/8] sched/rt: move rt related code out of sched/core.c
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
                   ` (3 preceding siblings ...)
  2017-06-06 23:24 ` [PATCH v2 4/8] sched/deadline: move dl related code out of sched/core.c Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 6/8] sched/deadline: make it configurable Nicolas Pitre
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

... to sched/rt.c. This helps make sched/core.c smaller and hopefully
easier to understand and maintain. It will also make it easier to
configure the realtime scheduling class out of the kernel build.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 kernel/sched/core.c  | 315 ---------------------------------------------------
 kernel/sched/rt.c    | 310 ++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |   5 +
 3 files changed, 315 insertions(+), 315 deletions(-)
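
Among the code leaving sched/core.c below is tg_rt_schedulable(), which checks
that a group's runtime does not exceed its period, that nobody exceeds the
global rt_period/rt_runtime setting, and that the children's combined bandwidth
fits within the parent's. A stand-alone sketch of that invariant in the same
to_ratio() fixed-point form (the group values are made up for illustration):

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t to_ratio(uint64_t period, uint64_t runtime)
  {
  	return (runtime << 20) / period;
  }

  int main(void)
  {
  	uint64_t global = to_ratio(1000000, 950000);	/* global 95% limit */
  	uint64_t parent = to_ratio(1000000, 400000);	/* parent group: 40% */
  	uint64_t children[] = {
  		to_ratio(1000000, 250000),		/* child A: 25% */
  		to_ratio(1000000, 200000),		/* child B: 20% */
  	};
  	uint64_t sum = 0;

  	if (parent > global) {
  		printf("parent exceeds global limit: -EINVAL\n");
  		return 1;
  	}

  	for (unsigned int i = 0; i < sizeof(children) / sizeof(children[0]); i++)
  		sum += children[i];

  	if (sum > parent)
  		printf("children (45%%) exceed parent (40%%): -EINVAL\n");
  	else
  		printf("configuration accepted\n");

  	return 0;
  }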

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 93ce28ea34..9923c4b742 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6247,321 +6247,6 @@ void sched_move_task(struct task_struct *tsk)
 
 	task_rq_unlock(rq, tsk, &rf);
 }
-#endif /* CONFIG_CGROUP_SCHED */
-
-#ifdef CONFIG_RT_GROUP_SCHED
-/*
- * Ensure that the real time constraints are schedulable.
- */
-static DEFINE_MUTEX(rt_constraints_mutex);
-
-/* Must be called with tasklist_lock held */
-static inline int tg_has_rt_tasks(struct task_group *tg)
-{
-	struct task_struct *g, *p;
-
-	/*
-	 * Autogroups do not have RT tasks; see autogroup_create().
-	 */
-	if (task_group_is_autogroup(tg))
-		return 0;
-
-	for_each_process_thread(g, p) {
-		if (rt_task(p) && task_group(p) == tg)
-			return 1;
-	}
-
-	return 0;
-}
-
-struct rt_schedulable_data {
-	struct task_group *tg;
-	u64 rt_period;
-	u64 rt_runtime;
-};
-
-static int tg_rt_schedulable(struct task_group *tg, void *data)
-{
-	struct rt_schedulable_data *d = data;
-	struct task_group *child;
-	unsigned long total, sum = 0;
-	u64 period, runtime;
-
-	period = ktime_to_ns(tg->rt_bandwidth.rt_period);
-	runtime = tg->rt_bandwidth.rt_runtime;
-
-	if (tg == d->tg) {
-		period = d->rt_period;
-		runtime = d->rt_runtime;
-	}
-
-	/*
-	 * Cannot have more runtime than the period.
-	 */
-	if (runtime > period && runtime != RUNTIME_INF)
-		return -EINVAL;
-
-	/*
-	 * Ensure we don't starve existing RT tasks.
-	 */
-	if (rt_bandwidth_enabled() && !runtime && tg_has_rt_tasks(tg))
-		return -EBUSY;
-
-	total = to_ratio(period, runtime);
-
-	/*
-	 * Nobody can have more than the global setting allows.
-	 */
-	if (total > to_ratio(global_rt_period(), global_rt_runtime()))
-		return -EINVAL;
-
-	/*
-	 * The sum of our children's runtime should not exceed our own.
-	 */
-	list_for_each_entry_rcu(child, &tg->children, siblings) {
-		period = ktime_to_ns(child->rt_bandwidth.rt_period);
-		runtime = child->rt_bandwidth.rt_runtime;
-
-		if (child == d->tg) {
-			period = d->rt_period;
-			runtime = d->rt_runtime;
-		}
-
-		sum += to_ratio(period, runtime);
-	}
-
-	if (sum > total)
-		return -EINVAL;
-
-	return 0;
-}
-
-static int __rt_schedulable(struct task_group *tg, u64 period, u64 runtime)
-{
-	int ret;
-
-	struct rt_schedulable_data data = {
-		.tg = tg,
-		.rt_period = period,
-		.rt_runtime = runtime,
-	};
-
-	rcu_read_lock();
-	ret = walk_tg_tree(tg_rt_schedulable, tg_nop, &data);
-	rcu_read_unlock();
-
-	return ret;
-}
-
-static int tg_set_rt_bandwidth(struct task_group *tg,
-		u64 rt_period, u64 rt_runtime)
-{
-	int i, err = 0;
-
-	/*
-	 * Disallowing the root group RT runtime is BAD, it would disallow the
-	 * kernel creating (and or operating) RT threads.
-	 */
-	if (tg == &root_task_group && rt_runtime == 0)
-		return -EINVAL;
-
-	/* No period doesn't make any sense. */
-	if (rt_period == 0)
-		return -EINVAL;
-
-	mutex_lock(&rt_constraints_mutex);
-	read_lock(&tasklist_lock);
-	err = __rt_schedulable(tg, rt_period, rt_runtime);
-	if (err)
-		goto unlock;
-
-	raw_spin_lock_irq(&tg->rt_bandwidth.rt_runtime_lock);
-	tg->rt_bandwidth.rt_period = ns_to_ktime(rt_period);
-	tg->rt_bandwidth.rt_runtime = rt_runtime;
-
-	for_each_possible_cpu(i) {
-		struct rt_rq *rt_rq = tg->rt_rq[i];
-
-		raw_spin_lock(&rt_rq->rt_runtime_lock);
-		rt_rq->rt_runtime = rt_runtime;
-		raw_spin_unlock(&rt_rq->rt_runtime_lock);
-	}
-	raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
-unlock:
-	read_unlock(&tasklist_lock);
-	mutex_unlock(&rt_constraints_mutex);
-
-	return err;
-}
-
-static int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
-{
-	u64 rt_runtime, rt_period;
-
-	rt_period = ktime_to_ns(tg->rt_bandwidth.rt_period);
-	rt_runtime = (u64)rt_runtime_us * NSEC_PER_USEC;
-	if (rt_runtime_us < 0)
-		rt_runtime = RUNTIME_INF;
-
-	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
-}
-
-static long sched_group_rt_runtime(struct task_group *tg)
-{
-	u64 rt_runtime_us;
-
-	if (tg->rt_bandwidth.rt_runtime == RUNTIME_INF)
-		return -1;
-
-	rt_runtime_us = tg->rt_bandwidth.rt_runtime;
-	do_div(rt_runtime_us, NSEC_PER_USEC);
-	return rt_runtime_us;
-}
-
-static int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
-{
-	u64 rt_runtime, rt_period;
-
-	rt_period = rt_period_us * NSEC_PER_USEC;
-	rt_runtime = tg->rt_bandwidth.rt_runtime;
-
-	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
-}
-
-static long sched_group_rt_period(struct task_group *tg)
-{
-	u64 rt_period_us;
-
-	rt_period_us = ktime_to_ns(tg->rt_bandwidth.rt_period);
-	do_div(rt_period_us, NSEC_PER_USEC);
-	return rt_period_us;
-}
-#endif /* CONFIG_RT_GROUP_SCHED */
-
-#ifdef CONFIG_RT_GROUP_SCHED
-static int sched_rt_global_constraints(void)
-{
-	int ret = 0;
-
-	mutex_lock(&rt_constraints_mutex);
-	read_lock(&tasklist_lock);
-	ret = __rt_schedulable(NULL, 0, 0);
-	read_unlock(&tasklist_lock);
-	mutex_unlock(&rt_constraints_mutex);
-
-	return ret;
-}
-
-static int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
-{
-	/* Don't accept realtime tasks when there is no way for them to run */
-	if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
-		return 0;
-
-	return 1;
-}
-
-#else /* !CONFIG_RT_GROUP_SCHED */
-static int sched_rt_global_constraints(void)
-{
-	unsigned long flags;
-	int i;
-
-	raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
-	for_each_possible_cpu(i) {
-		struct rt_rq *rt_rq = &cpu_rq(i)->rt;
-
-		raw_spin_lock(&rt_rq->rt_runtime_lock);
-		rt_rq->rt_runtime = global_rt_runtime();
-		raw_spin_unlock(&rt_rq->rt_runtime_lock);
-	}
-	raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
-
-	return 0;
-}
-#endif /* CONFIG_RT_GROUP_SCHED */
-
-static int sched_rt_global_validate(void)
-{
-	if (sysctl_sched_rt_period <= 0)
-		return -EINVAL;
-
-	if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
-		(sysctl_sched_rt_runtime > sysctl_sched_rt_period))
-		return -EINVAL;
-
-	return 0;
-}
-
-static void sched_rt_do_global(void)
-{
-	def_rt_bandwidth.rt_runtime = global_rt_runtime();
-	def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
-}
-
-int sched_rt_handler(struct ctl_table *table, int write,
-		void __user *buffer, size_t *lenp,
-		loff_t *ppos)
-{
-	int old_period, old_runtime;
-	static DEFINE_MUTEX(mutex);
-	int ret;
-
-	mutex_lock(&mutex);
-	old_period = sysctl_sched_rt_period;
-	old_runtime = sysctl_sched_rt_runtime;
-
-	ret = proc_dointvec(table, write, buffer, lenp, ppos);
-
-	if (!ret && write) {
-		ret = sched_rt_global_validate();
-		if (ret)
-			goto undo;
-
-		ret = sched_dl_global_validate();
-		if (ret)
-			goto undo;
-
-		ret = sched_rt_global_constraints();
-		if (ret)
-			goto undo;
-
-		sched_rt_do_global();
-		sched_dl_do_global();
-	}
-	if (0) {
-undo:
-		sysctl_sched_rt_period = old_period;
-		sysctl_sched_rt_runtime = old_runtime;
-	}
-	mutex_unlock(&mutex);
-
-	return ret;
-}
-
-int sched_rr_handler(struct ctl_table *table, int write,
-		void __user *buffer, size_t *lenp,
-		loff_t *ppos)
-{
-	int ret;
-	static DEFINE_MUTEX(mutex);
-
-	mutex_lock(&mutex);
-	ret = proc_dointvec(table, write, buffer, lenp, ppos);
-	/*
-	 * Make sure that internally we keep jiffies.
-	 * Also, writing zero resets the timeslice to default:
-	 */
-	if (!ret && write) {
-		sched_rr_timeslice =
-			sysctl_sched_rr_timeslice <= 0 ? RR_TIMESLICE :
-			msecs_to_jiffies(sysctl_sched_rr_timeslice);
-	}
-	mutex_unlock(&mutex);
-	return ret;
-}
-
-#ifdef CONFIG_CGROUP_SCHED
 
 static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
 {
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 979b734100..29c48a6cfb 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2438,6 +2438,316 @@ const struct sched_class rt_sched_class = {
 	.update_curr		= update_curr_rt,
 };
 
+#ifdef CONFIG_RT_GROUP_SCHED
+/*
+ * Ensure that the real time constraints are schedulable.
+ */
+static DEFINE_MUTEX(rt_constraints_mutex);
+
+/* Must be called with tasklist_lock held */
+static inline int tg_has_rt_tasks(struct task_group *tg)
+{
+	struct task_struct *g, *p;
+
+	/*
+	 * Autogroups do not have RT tasks; see autogroup_create().
+	 */
+	if (task_group_is_autogroup(tg))
+		return 0;
+
+	for_each_process_thread(g, p) {
+		if (rt_task(p) && task_group(p) == tg)
+			return 1;
+	}
+
+	return 0;
+}
+
+struct rt_schedulable_data {
+	struct task_group *tg;
+	u64 rt_period;
+	u64 rt_runtime;
+};
+
+static int tg_rt_schedulable(struct task_group *tg, void *data)
+{
+	struct rt_schedulable_data *d = data;
+	struct task_group *child;
+	unsigned long total, sum = 0;
+	u64 period, runtime;
+
+	period = ktime_to_ns(tg->rt_bandwidth.rt_period);
+	runtime = tg->rt_bandwidth.rt_runtime;
+
+	if (tg == d->tg) {
+		period = d->rt_period;
+		runtime = d->rt_runtime;
+	}
+
+	/*
+	 * Cannot have more runtime than the period.
+	 */
+	if (runtime > period && runtime != RUNTIME_INF)
+		return -EINVAL;
+
+	/*
+	 * Ensure we don't starve existing RT tasks.
+	 */
+	if (rt_bandwidth_enabled() && !runtime && tg_has_rt_tasks(tg))
+		return -EBUSY;
+
+	total = to_ratio(period, runtime);
+
+	/*
+	 * Nobody can have more than the global setting allows.
+	 */
+	if (total > to_ratio(global_rt_period(), global_rt_runtime()))
+		return -EINVAL;
+
+	/*
+	 * The sum of our children's runtime should not exceed our own.
+	 */
+	list_for_each_entry_rcu(child, &tg->children, siblings) {
+		period = ktime_to_ns(child->rt_bandwidth.rt_period);
+		runtime = child->rt_bandwidth.rt_runtime;
+
+		if (child == d->tg) {
+			period = d->rt_period;
+			runtime = d->rt_runtime;
+		}
+
+		sum += to_ratio(period, runtime);
+	}
+
+	if (sum > total)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int __rt_schedulable(struct task_group *tg, u64 period, u64 runtime)
+{
+	int ret;
+
+	struct rt_schedulable_data data = {
+		.tg = tg,
+		.rt_period = period,
+		.rt_runtime = runtime,
+	};
+
+	rcu_read_lock();
+	ret = walk_tg_tree(tg_rt_schedulable, tg_nop, &data);
+	rcu_read_unlock();
+
+	return ret;
+}
+
+static int tg_set_rt_bandwidth(struct task_group *tg,
+		u64 rt_period, u64 rt_runtime)
+{
+	int i, err = 0;
+
+	/*
+	 * Disallowing the root group RT runtime is BAD, it would disallow the
+	 * kernel creating (and or operating) RT threads.
+	 */
+	if (tg == &root_task_group && rt_runtime == 0)
+		return -EINVAL;
+
+	/* No period doesn't make any sense. */
+	if (rt_period == 0)
+		return -EINVAL;
+
+	mutex_lock(&rt_constraints_mutex);
+	read_lock(&tasklist_lock);
+	err = __rt_schedulable(tg, rt_period, rt_runtime);
+	if (err)
+		goto unlock;
+
+	raw_spin_lock_irq(&tg->rt_bandwidth.rt_runtime_lock);
+	tg->rt_bandwidth.rt_period = ns_to_ktime(rt_period);
+	tg->rt_bandwidth.rt_runtime = rt_runtime;
+
+	for_each_possible_cpu(i) {
+		struct rt_rq *rt_rq = tg->rt_rq[i];
+
+		raw_spin_lock(&rt_rq->rt_runtime_lock);
+		rt_rq->rt_runtime = rt_runtime;
+		raw_spin_unlock(&rt_rq->rt_runtime_lock);
+	}
+	raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
+unlock:
+	read_unlock(&tasklist_lock);
+	mutex_unlock(&rt_constraints_mutex);
+
+	return err;
+}
+
+int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
+{
+	u64 rt_runtime, rt_period;
+
+	rt_period = ktime_to_ns(tg->rt_bandwidth.rt_period);
+	rt_runtime = (u64)rt_runtime_us * NSEC_PER_USEC;
+	if (rt_runtime_us < 0)
+		rt_runtime = RUNTIME_INF;
+
+	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
+}
+
+long sched_group_rt_runtime(struct task_group *tg)
+{
+	u64 rt_runtime_us;
+
+	if (tg->rt_bandwidth.rt_runtime == RUNTIME_INF)
+		return -1;
+
+	rt_runtime_us = tg->rt_bandwidth.rt_runtime;
+	do_div(rt_runtime_us, NSEC_PER_USEC);
+	return rt_runtime_us;
+}
+
+int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
+{
+	u64 rt_runtime, rt_period;
+
+	rt_period = rt_period_us * NSEC_PER_USEC;
+	rt_runtime = tg->rt_bandwidth.rt_runtime;
+
+	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
+}
+
+long sched_group_rt_period(struct task_group *tg)
+{
+	u64 rt_period_us;
+
+	rt_period_us = ktime_to_ns(tg->rt_bandwidth.rt_period);
+	do_div(rt_period_us, NSEC_PER_USEC);
+	return rt_period_us;
+}
+
+static int sched_rt_global_constraints(void)
+{
+	int ret = 0;
+
+	mutex_lock(&rt_constraints_mutex);
+	read_lock(&tasklist_lock);
+	ret = __rt_schedulable(NULL, 0, 0);
+	read_unlock(&tasklist_lock);
+	mutex_unlock(&rt_constraints_mutex);
+
+	return ret;
+}
+
+int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
+{
+	/* Don't accept realtime tasks when there is no way for them to run */
+	if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
+		return 0;
+
+	return 1;
+}
+
+#else /* !CONFIG_RT_GROUP_SCHED */
+static int sched_rt_global_constraints(void)
+{
+	unsigned long flags;
+	int i;
+
+	raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
+	for_each_possible_cpu(i) {
+		struct rt_rq *rt_rq = &cpu_rq(i)->rt;
+
+		raw_spin_lock(&rt_rq->rt_runtime_lock);
+		rt_rq->rt_runtime = global_rt_runtime();
+		raw_spin_unlock(&rt_rq->rt_runtime_lock);
+	}
+	raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
+
+	return 0;
+}
+#endif /* CONFIG_RT_GROUP_SCHED */
+
+static int sched_rt_global_validate(void)
+{
+	if (sysctl_sched_rt_period <= 0)
+		return -EINVAL;
+
+	if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
+		(sysctl_sched_rt_runtime > sysctl_sched_rt_period))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void sched_rt_do_global(void)
+{
+	def_rt_bandwidth.rt_runtime = global_rt_runtime();
+	def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
+}
+
+int sched_rt_handler(struct ctl_table *table, int write,
+		void __user *buffer, size_t *lenp,
+		loff_t *ppos)
+{
+	int old_period, old_runtime;
+	static DEFINE_MUTEX(mutex);
+	int ret;
+
+	mutex_lock(&mutex);
+	old_period = sysctl_sched_rt_period;
+	old_runtime = sysctl_sched_rt_runtime;
+
+	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+
+	if (!ret && write) {
+		ret = sched_rt_global_validate();
+		if (ret)
+			goto undo;
+
+		ret = sched_dl_global_validate();
+		if (ret)
+			goto undo;
+
+		ret = sched_rt_global_constraints();
+		if (ret)
+			goto undo;
+
+		sched_rt_do_global();
+		sched_dl_do_global();
+	}
+	if (0) {
+undo:
+		sysctl_sched_rt_period = old_period;
+		sysctl_sched_rt_runtime = old_runtime;
+	}
+	mutex_unlock(&mutex);
+
+	return ret;
+}
+
+int sched_rr_handler(struct ctl_table *table, int write,
+		void __user *buffer, size_t *lenp,
+		loff_t *ppos)
+{
+	int ret;
+	static DEFINE_MUTEX(mutex);
+
+	mutex_lock(&mutex);
+	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+	/*
+	 * Make sure that internally we keep jiffies.
+	 * Also, writing zero resets the timeslice to default:
+	 */
+	if (!ret && write) {
+		sched_rr_timeslice =
+			sysctl_sched_rr_timeslice <= 0 ? RR_TIMESLICE :
+			msecs_to_jiffies(sysctl_sched_rr_timeslice);
+	}
+	mutex_unlock(&mutex);
+	return ret;
+}
+
 #ifdef CONFIG_SCHED_DEBUG
 extern void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4a845c19b8..84ab1be493 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -380,6 +380,11 @@ extern int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent
 extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
 		struct sched_rt_entity *rt_se, int cpu,
 		struct sched_rt_entity *parent);
+extern int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us);
+extern int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us);
+extern long sched_group_rt_runtime(struct task_group *tg);
+extern long sched_group_rt_period(struct task_group *tg);
+extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
 
 extern struct task_group *sched_create_group(struct task_group *parent);
 extern void sched_online_group(struct task_group *tg,
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 6/8] sched/deadline: make it configurable
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
                   ` (4 preceding siblings ...)
  2017-06-06 23:24 ` [PATCH v2 5/8] sched/rt: move rt " Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 7/8] rtmutex: compatibility wrappers when no RT support is configured Nicolas Pitre
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

On most small systems, the deadline scheduler class is a luxury that is
rarely, if ever, used. It is preferable to have the ability to
configure it out to reduce the kernel size in that case.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 include/linux/sched.h          |  2 ++
 include/linux/sched/deadline.h |  8 +++++++-
 init/Kconfig                   |  8 ++++++++
 kernel/locking/rtmutex.c       |  6 +++---
 kernel/sched/Makefile          |  5 +++--
 kernel/sched/core.c            | 15 +++++++++------
 kernel/sched/cpudeadline.h     |  7 ++++++-
 kernel/sched/debug.c           |  4 ++++
 kernel/sched/rt.c              | 13 ++++++++-----
 kernel/sched/sched.h           | 38 +++++++++++++++++++++++++++++++-------
 kernel/sched/stop_task.c       |  4 ++++
 11 files changed, 85 insertions(+), 25 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2b69fc6502..ba0c203669 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -522,7 +522,9 @@ struct task_struct {
 #ifdef CONFIG_CGROUP_SCHED
 	struct task_group		*sched_task_group;
 #endif
+#ifdef CONFIG_SCHED_DL
 	struct sched_dl_entity		dl;
+#endif
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
 	/* List of struct preempt_notifier: */
diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
index 975be862e0..8f191a17dd 100644
--- a/include/linux/sched/deadline.h
+++ b/include/linux/sched/deadline.h
@@ -13,7 +13,7 @@
 
 static inline int dl_prio(int prio)
 {
-	if (unlikely(prio < MAX_DL_PRIO))
+	if (IS_ENABLED(CONFIG_SCHED_DL) && unlikely(prio < MAX_DL_PRIO))
 		return 1;
 	return 0;
 }
@@ -28,4 +28,10 @@ static inline bool dl_time_before(u64 a, u64 b)
 	return (s64)(a - b) < 0;
 }
 
+#ifdef CONFIG_SCHED_DL
+#define dl_deadline(tsk)	(tsk)->dl.deadline
+#else
+#define dl_deadline(tsk)	0
+#endif
+
 #endif /* _LINUX_SCHED_DEADLINE_H */
diff --git a/init/Kconfig b/init/Kconfig
index ad91724f75..43e6ae3414 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1303,6 +1303,14 @@ config SCHED_AUTOGROUP
 	  desktop applications.  Task group autogeneration is currently based
 	  upon task session.
 
+config SCHED_DL
+	bool "Deadline Task Scheduling" if EXPERT
+	default y
+	help
+	  This adds the sched_dl scheduling class to the kernel providing
+	  support for the SCHED_DEADLINE policy. You might want to disable
+	  this to reduce the kernel size. If unsure say y.
+
 config SYSFS_DEPRECATED
 	bool "Enable deprecated sysfs features to support old userspace tools"
 	depends on SYSFS
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 28cd09e635..1deabf9ebd 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -228,7 +228,7 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
  * Only use with rt_mutex_waiter_{less,equal}()
  */
 #define task_to_waiter(p)	\
-	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
+	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = dl_deadline(p) }
 
 static inline int
 rt_mutex_waiter_less(struct rt_mutex_waiter *left,
@@ -692,7 +692,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 	 * the values of the node being removed.
 	 */
 	waiter->prio = task->prio;
-	waiter->deadline = task->dl.deadline;
+	waiter->deadline = dl_deadline(task);
 
 	rt_mutex_enqueue(lock, waiter);
 
@@ -967,7 +967,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
 	waiter->task = task;
 	waiter->lock = lock;
 	waiter->prio = task->prio;
-	waiter->deadline = task->dl.deadline;
+	waiter->deadline = dl_deadline(task);
 
 	/* Get the top priority waiter on the lock */
 	if (rt_mutex_has_waiters(lock))
diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 5e4c2e7a63..3bd6a7c1cc 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -16,9 +16,10 @@ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
 endif
 
 obj-y += core.o loadavg.o clock.o cputime.o
-obj-y += idle_task.o fair.o rt.o deadline.o
 obj-y += wait.o swait.o completion.o idle.o
-obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o
+obj-y += idle_task.o fair.o rt.o
+obj-$(CONFIG_SCHED_DL) += deadline.o $(if $(CONFIG_SMP),cpudeadline.o)
+obj-$(CONFIG_SMP) += cpupri.o topology.o stop_task.o
 obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
 obj-$(CONFIG_SCHEDSTATS) += stats.o
 obj-$(CONFIG_SCHED_DEBUG) += debug.o
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9923c4b742..30138033b7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -635,7 +635,7 @@ bool sched_can_stop_tick(struct rq *rq)
 	int fifo_nr_running;
 
 	/* Deadline tasks, even if single, need the tick */
-	if (rq->dl.dl_nr_running)
+	if (dl_nr_running(rq))
 		return false;
 
 	/*
@@ -2174,9 +2174,11 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
 #endif
 
+#ifdef CONFIG_SCHED_DL
 	RB_CLEAR_NODE(&p->dl.rb_node);
 	init_dl_task_timer(&p->dl);
 	__dl_clear_params(p);
+#endif
 
 	INIT_LIST_HEAD(&p->rt.run_list);
 	p->rt.timeout		= 0;
@@ -3702,20 +3704,20 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 	if (dl_prio(prio)) {
 		if (!dl_prio(p->normal_prio) ||
 		    (pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
-			p->dl.dl_boosted = 1;
+			dl_boosted(p) = 1;
 			queue_flag |= ENQUEUE_REPLENISH;
 		} else
-			p->dl.dl_boosted = 0;
+			dl_boosted(p) = 0;
 		p->sched_class = &dl_sched_class;
 	} else if (rt_prio(prio)) {
 		if (dl_prio(oldprio))
-			p->dl.dl_boosted = 0;
+			dl_boosted(p) = 0;
 		if (oldprio < prio)
 			queue_flag |= ENQUEUE_HEAD;
 		p->sched_class = &rt_sched_class;
 	} else {
 		if (dl_prio(oldprio))
-			p->dl.dl_boosted = 0;
+			dl_boosted(p) = 0;
 		if (rt_prio(oldprio))
 			p->rt.timeout = 0;
 		p->sched_class = &fair_sched_class;
@@ -5266,7 +5268,8 @@ int cpuset_cpumask_can_shrink(const struct cpumask *cur,
 	if (!cpumask_weight(cur))
 		return ret;
 
-	ret = dl_cpuset_cpumask_can_shrink(cur, trial);
+	if (IS_ENABLED(CONFIG_SCHED_DL))
+		ret = dl_cpuset_cpumask_can_shrink(cur, trial);
 
 	return ret;
 }
diff --git a/kernel/sched/cpudeadline.h b/kernel/sched/cpudeadline.h
index f7da8c55bb..5f4c10f837 100644
--- a/kernel/sched/cpudeadline.h
+++ b/kernel/sched/cpudeadline.h
@@ -25,10 +25,15 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,
 	       struct cpumask *later_mask);
 void cpudl_set(struct cpudl *cp, int cpu, u64 dl);
 void cpudl_clear(struct cpudl *cp, int cpu);
-int cpudl_init(struct cpudl *cp);
 void cpudl_set_freecpu(struct cpudl *cp, int cpu);
 void cpudl_clear_freecpu(struct cpudl *cp, int cpu);
+#ifdef CONFIG_SCHED_DL
+int cpudl_init(struct cpudl *cp);
 void cpudl_cleanup(struct cpudl *cp);
+#else
+#define cpudl_init(cp)		0
+#define cpudl_cleanup(cp)	do { } while (0)
+#endif
 #endif /* CONFIG_SMP */
 
 #endif /* _LINUX_CPUDL_H */
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 38f019324f..84f80a81ab 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -646,7 +646,9 @@ do {									\
 	spin_lock_irqsave(&sched_debug_lock, flags);
 	print_cfs_stats(m, cpu);
 	print_rt_stats(m, cpu);
+#ifdef CONFIG_SCHED_DL
 	print_dl_stats(m, cpu);
+#endif
 
 	print_rq(m, rq, cpu);
 	spin_unlock_irqrestore(&sched_debug_lock, flags);
@@ -954,10 +956,12 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 #endif
 	P(policy);
 	P(prio);
+#ifdef CONFIG_SCHED_DL
 	if (p->policy == SCHED_DEADLINE) {
 		P(dl.runtime);
 		P(dl.deadline);
 	}
+#endif
 #undef PN_SCHEDSTAT
 #undef PN
 #undef __PN
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 29c48a6cfb..02ac9b336f 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1545,7 +1545,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 		 * to re-start task selection.
 		 */
 		if (unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
-			     rq->dl.dl_nr_running))
+			     dl_nr_running(rq)))
 			return RETRY_TASK;
 	}
 
@@ -2705,16 +2705,19 @@ int sched_rt_handler(struct ctl_table *table, int write,
 		if (ret)
 			goto undo;
 
-		ret = sched_dl_global_validate();
-		if (ret)
-			goto undo;
+		if (IS_ENABLED(CONFIG_SCHED_DL)) {
+			ret = sched_dl_global_validate();
+			if (ret)
+				goto undo;
+		}
 
 		ret = sched_rt_global_constraints();
 		if (ret)
 			goto undo;
 
 		sched_rt_do_global();
-		sched_dl_do_global();
+		if (IS_ENABLED(CONFIG_SCHED_DL))
+			sched_dl_do_global();
 	}
 	if (0) {
 undo:
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 84ab1be493..c05cc33848 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -137,7 +137,7 @@ static inline int rt_policy(int policy)
 
 static inline int dl_policy(int policy)
 {
-	return policy == SCHED_DEADLINE;
+	return IS_ENABLED(CONFIG_SCHED_DL) && policy == SCHED_DEADLINE;
 }
 static inline bool valid_policy(int policy)
 {
@@ -158,11 +158,15 @@ static inline int task_has_dl_policy(struct task_struct *p)
 /*
  * Tells if entity @a should preempt entity @b.
  */
+#ifdef CONFIG_SCHED_DL
 static inline bool
 dl_entity_preempt(struct sched_dl_entity *a, struct sched_dl_entity *b)
 {
 	return dl_time_before(a->deadline, b->deadline);
 }
+#else
+#define dl_entity_preempt(a, b)	false
+#endif
 
 /*
  * This is the priority-queue data structure of the RT scheduling class:
@@ -244,7 +248,6 @@ bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
 	       dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
 }
 
-extern void init_dl_bw(struct dl_bw *dl_b);
 extern int sched_dl_global_validate(void);
 extern void sched_dl_do_global(void);
 extern int sched_dl_overflow(struct task_struct *p, int policy,
@@ -258,7 +261,27 @@ extern int dl_task_can_attach(struct task_struct *p,
 			      const struct cpumask *cs_cpus_allowed);
 extern int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
 					const struct cpumask *trial);
+extern struct dl_bandwidth def_dl_bandwidth;
+
+struct dl_rq;
+
+#ifdef CONFIG_SCHED_DL
+#define dl_nr_running(rq)	(rq)->dl.dl_nr_running
+#define dl_boosted(tsk)		(tsk)->dl.dl_boosted
 extern bool dl_cpu_busy(unsigned int cpu);
+extern void init_dl_bw(struct dl_bw *dl_b);
+extern void init_sched_dl_class(void);
+extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
+extern void init_dl_rq(struct dl_rq *dl_rq);
+#else
+#define dl_nr_running(rq)	0
+#define dl_boosted(tsk)		(*(int *)0)
+#define dl_cpu_busy(cpu)	false
+#define init_dl_bw(dl_b)	do { } while (0)
+#define init_sched_dl_class()	do { } while (0)
+#define init_dl_bandwidth(...)	do { } while (0)
+#define init_dl_rq(dl_rq)	do { } while (0)
+#endif
 
 #ifdef CONFIG_CGROUP_SCHED
 
@@ -672,7 +695,9 @@ struct rq {
 
 	struct cfs_rq cfs;
 	struct rt_rq rt;
+#ifdef CONFIG_SCHED_DL
 	struct dl_rq dl;
+#endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* list of leaf cfs_rq on this cpu: */
@@ -1443,9 +1468,12 @@ static inline void set_curr_task(struct rq *rq, struct task_struct *curr)
 
 #ifdef CONFIG_SMP
 #define sched_class_highest (&stop_sched_class)
-#else
+#elif defined(CONFIG_SCHED_DL)
 #define sched_class_highest (&dl_sched_class)
+#else
+#define sched_class_highest (&rt_sched_class)
 #endif
+
 #define for_each_class(class) \
    for (class = sched_class_highest; class; class = class->next)
 
@@ -1496,7 +1524,6 @@ extern void sysrq_sched_debug_show(void);
 extern void sched_init_granularity(void);
 extern void update_max_interval(void);
 
-extern void init_sched_dl_class(void);
 extern void init_sched_rt_class(void);
 extern void init_sched_fair_class(void);
 
@@ -1506,8 +1533,6 @@ extern void resched_cpu(int cpu);
 extern struct rt_bandwidth def_rt_bandwidth;
 extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
 
-extern struct dl_bandwidth def_dl_bandwidth;
-extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
 extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
 
 unsigned long to_ratio(u64 period, u64 runtime);
@@ -1933,7 +1958,6 @@ print_numa_stats(struct seq_file *m, int node, unsigned long tsf,
 
 extern void init_cfs_rq(struct cfs_rq *cfs_rq);
 extern void init_rt_rq(struct rt_rq *rt_rq);
-extern void init_dl_rq(struct dl_rq *dl_rq);
 
 extern void cfs_bandwidth_usage_inc(void);
 extern void cfs_bandwidth_usage_dec(void);
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 9f69fb6308..5632dc3e63 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -110,7 +110,11 @@ static void update_curr_stop(struct rq *rq)
  * Simple, special scheduling class for the per-CPU stop tasks:
  */
 const struct sched_class stop_sched_class = {
+#ifdef CONFIG_SCHED_DL
 	.next			= &dl_sched_class,
+#else
+	.next			= &rt_sched_class,
+#endif
 
 	.enqueue_task		= enqueue_task_stop,
 	.dequeue_task		= dequeue_task_stop,
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 7/8] rtmutex: compatibility wrappers when no RT support is configured
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
                   ` (5 preceding siblings ...)
  2017-06-06 23:24 ` [PATCH v2 6/8] sched/deadline: make it configurable Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-06 23:24 ` [PATCH v2 8/8] sched/rt: make it configurable Nicolas Pitre
  2017-06-07 16:00 ` [PATCH v2 0/8] scheduler tinification Ingo Molnar
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Prepare the code for the next patch making RT task support optional.
With no actual RT tasks, there are no priority inversion issues to care about.
We can therefore map RT mutexes to regular mutexes in that case and remain
compatible with most users.

The code that makes explicit assumptions about actual RT mutexes such as
RT mutex debugging and PI futexes will have to be made conditional on the
availability of RT task support. This will be done in a later patch when
CONFIG_SCHED_RT gets defined.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 include/linux/rtmutex.h | 69 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 1abba5ce2a..01db77a41b 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -12,6 +12,8 @@
 #ifndef __LINUX_RT_MUTEX_H
 #define __LINUX_RT_MUTEX_H
 
+#if 1 /* will become def CONFIG_SCHED_RT later */
+
 #include <linux/linkage.h>
 #include <linux/rbtree.h>
 #include <linux/spinlock_types.h>
@@ -98,4 +100,71 @@ extern int rt_mutex_trylock(struct rt_mutex *lock);
 
 extern void rt_mutex_unlock(struct rt_mutex *lock);
 
+#else /* CONFIG_SCHED_RT */
+
+/*
+ * We have no realtime task support and therefore no priority inversion
+ * may occur. Let's map RT mutexes using regular mutexes.
+ */
+
+#include <linux/mutex.h>
+
+struct rt_mutex {
+	struct mutex m;
+};
+
+#define __RT_MUTEX_INITIALIZER(m) \
+	{ .m = __MUTEX_INITIALIZER(m) }
+
+#define DEFINE_RT_MUTEX(mutexname) \
+	struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
+
+static inline void __rt_mutex_init(struct rt_mutex *lock, const char *name)
+{
+	static struct lock_class_key __key;
+	__mutex_init(&lock->m, name, &__key);
+}
+
+#define rt_mutex_init(mutex)	__rt_mutex_init(mutex, #mutex)
+
+static inline int rt_mutex_is_locked(struct rt_mutex *lock)
+{
+	return mutex_is_locked(&lock->m);
+}
+
+static inline void rt_mutex_destroy(struct rt_mutex *lock)
+{
+	mutex_destroy(&lock->m);
+}
+
+static inline void rt_mutex_lock(struct rt_mutex *lock)
+{
+	mutex_lock(&lock->m);
+}
+
+static inline int rt_mutex_lock_interruptible(struct rt_mutex *lock)
+{
+	return mutex_lock_interruptible(&lock->m);
+}
+
+static inline int rt_mutex_trylock(struct rt_mutex *lock)
+{
+	return mutex_trylock(&lock->m);
+}
+
+static inline void rt_mutex_unlock(struct rt_mutex *lock)
+{
+	mutex_unlock(&lock->m);
+}
+
+static inline int rt_mutex_debug_check_no_locks_freed(const void *from,
+						      unsigned long len)
+{
+	return 0;
+}
+#define rt_mutex_debug_check_no_locks_held(task)	do { } while (0)
+#define rt_mutex_debug_task_free(t)			do { } while (0)
+
+#endif /* CONFIG_SCHED_RT */
+
 #endif
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 8/8] sched/rt: make it configurable
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
                   ` (6 preceding siblings ...)
  2017-06-06 23:24 ` [PATCH v2 7/8] rtmutex: compatibility wrappers when no RT support is configured Nicolas Pitre
@ 2017-06-06 23:24 ` Nicolas Pitre
  2017-06-07 16:00 ` [PATCH v2 0/8] scheduler tinification Ingo Molnar
  8 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-06 23:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

On most small systems where user space is tightly controlled, the realtime
scheduling class can often be dispensed with to reduce the kernel footprint.
Let's make it configurable.

The code that makes explicit assumptions about actual RT mutexes (i.e.
where the compatibility wrappers don't make sense) has to be made
conditional on CONFIG_SCHED_RT. This is also done here.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 include/linux/init_task.h      | 15 +++++++++++----
 include/linux/rtmutex.h        |  2 +-
 include/linux/sched.h          |  2 ++
 include/linux/sched/rt.h       | 10 ++++++++--
 init/Kconfig                   | 14 +++++++++++---
 kernel/locking/Makefile        |  3 +++
 kernel/locking/locktorture.c   |  4 ++--
 kernel/sched/Makefile          |  4 ++--
 kernel/sched/core.c            | 33 +++++++++++++++++++++++++++------
 kernel/sched/deadline.c        |  4 ++++
 kernel/sched/debug.c           |  2 ++
 kernel/sched/sched.h           | 33 ++++++++++++++++++++++++---------
 kernel/sched/stop_task.c       |  4 +++-
 kernel/sysctl.c                |  4 +++-
 kernel/time/posix-cpu-timers.c |  8 +++++---
 lib/Kconfig.debug              |  2 +-
 16 files changed, 109 insertions(+), 35 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index e049526bc1..6befc0aa61 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -225,6 +225,16 @@ extern struct cred init_cred;
 #define INIT_TASK_SECURITY
 #endif
 
+#ifdef CONFIG_SCHED_RT
+#define INIT_TASK_RT(tsk)						\
+	.rt		= {						\
+		.run_list	= LIST_HEAD_INIT(tsk.rt.run_list),	\
+		.time_slice	= RR_TIMESLICE,				\
+	},
+#else
+#define INIT_TASK_RT(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -250,10 +260,7 @@ extern struct cred init_cred;
 	.se		= {						\
 		.group_node 	= LIST_HEAD_INIT(tsk.se.group_node),	\
 	},								\
-	.rt		= {						\
-		.run_list	= LIST_HEAD_INIT(tsk.rt.run_list),	\
-		.time_slice	= RR_TIMESLICE,				\
-	},								\
+	INIT_TASK_RT(tsk)						\
 	.tasks		= LIST_HEAD_INIT(tsk.tasks),			\
 	INIT_PUSHABLE_TASKS(tsk)					\
 	INIT_CGROUP_SCHED(tsk)						\
diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 01db77a41b..05c444f930 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -12,7 +12,7 @@
 #ifndef __LINUX_RT_MUTEX_H
 #define __LINUX_RT_MUTEX_H
 
-#if 1 /* will become def CONFIG_SCHED_RT later */
+#ifdef CONFIG_SCHED_RT
 
 #include <linux/linkage.h>
 #include <linux/rbtree.h>
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ba0c203669..71a43480ed 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -518,7 +518,9 @@ struct task_struct {
 
 	const struct sched_class	*sched_class;
 	struct sched_entity		se;
+#ifdef CONFIG_SCHED_RT
 	struct sched_rt_entity		rt;
+#endif
 #ifdef CONFIG_CGROUP_SCHED
 	struct task_group		*sched_task_group;
 #endif
diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
index f93329aba3..681c48361f 100644
--- a/include/linux/sched/rt.h
+++ b/include/linux/sched/rt.h
@@ -7,7 +7,7 @@ struct task_struct;
 
 static inline int rt_prio(int prio)
 {
-	if (unlikely(prio < MAX_RT_PRIO))
+	if (IS_ENABLED(CONFIG_SCHED_RT) && unlikely(prio < MAX_RT_PRIO))
 		return 1;
 	return 0;
 }
@@ -17,7 +17,7 @@ static inline int rt_task(struct task_struct *p)
 	return rt_prio(p->prio);
 }
 
-#ifdef CONFIG_RT_MUTEXES
+#if defined(CONFIG_RT_MUTEXES) && defined(CONFIG_SCHED_RT)
 /*
  * Must hold either p->pi_lock or task_rq(p)->lock.
  */
@@ -52,4 +52,10 @@ extern void normalize_rt_tasks(void);
  */
 #define RR_TIMESLICE		(100 * HZ / 1000)
 
+#ifdef CONFIG_SCHED_RT
+#define rt_timeout(tsk)		(tsk)->rt.timeout
+#else
+#define rt_timeout(tsk)		0
+#endif
+
 #endif /* _LINUX_SCHED_RT_H */
diff --git a/init/Kconfig b/init/Kconfig
index 43e6ae3414..723ec1cb5c 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -687,7 +687,7 @@ config TREE_RCU_TRACE
 
 config RCU_BOOST
 	bool "Enable RCU priority boosting"
-	depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
+	depends on SCHED_RT && RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
 	default n
 	help
 	  This option boosts the priority of preempted RCU readers that
@@ -1090,7 +1090,7 @@ config CFS_BANDWIDTH
 
 config RT_GROUP_SCHED
 	bool "Group scheduling for SCHED_RR/FIFO"
-	depends on CGROUP_SCHED
+	depends on CGROUP_SCHED && SCHED_RT
 	default n
 	help
 	  This feature lets you explicitly allocate real CPU bandwidth
@@ -1303,6 +1303,14 @@ config SCHED_AUTOGROUP
 	  desktop applications.  Task group autogeneration is currently based
 	  upon task session.
 
+config SCHED_RT
+	bool "Real Time Task Scheduling" if EXPERT
+	default y
+	help
+	  This adds the sched_rt scheduling class to the kernel providing
+ 	  support for the SCHED_FIFO and SCHED_RR policies. You might want
+	  to disable this to reduce the kernel size. If unsure say y.
+
 config SCHED_DL
 	bool "Deadline Task Scheduling" if EXPERT
 	default y
@@ -1640,7 +1648,7 @@ config FUTEX
 
 config FUTEX_PI
 	bool
-	depends on FUTEX && RT_MUTEXES
+	depends on FUTEX && RT_MUTEXES && SCHED_RT
 	default y
 
 config HAVE_FUTEX_CMPXCHG
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index 760158d9d9..52892cf26c 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -20,8 +20,11 @@ obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_LOCK_SPIN_ON_OWNER) += osq_lock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
 obj-$(CONFIG_QUEUED_SPINLOCKS) += qspinlock.o
+# Compatibility wrappers in rtmutex.h are used when CONFIG_SCHED_RT=n
+ifeq ($(CONFIG_SCHED_RT),y)
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
+endif
 obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
 obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o
 obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index f24582d4da..53d6753d50 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -446,7 +446,7 @@ static struct lock_torture_ops ww_mutex_lock_ops = {
 	.name		= "ww_mutex_lock"
 };
 
-#ifdef CONFIG_RT_MUTEXES
+#if defined(CONFIG_RT_MUTEXES) && defined(CONFIG_SCHED_RT)
 static DEFINE_RT_MUTEX(torture_rtmutex);
 
 static int torture_rtmutex_lock(void) __acquires(torture_rtmutex)
@@ -872,7 +872,7 @@ static int __init lock_torture_init(void)
 		&rw_lock_ops, &rw_lock_irq_ops,
 		&mutex_lock_ops,
 		&ww_mutex_lock_ops,
-#ifdef CONFIG_RT_MUTEXES
+#if defined(CONFIG_RT_MUTEXES) && defined(CONFIG_SCHED_RT)
 		&rtmutex_lock_ops,
 #endif
 		&rwsem_lock_ops,
diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 3bd6a7c1cc..bccbef85e5 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -16,8 +16,8 @@ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
 endif
 
 obj-y += core.o loadavg.o clock.o cputime.o
-obj-y += wait.o swait.o completion.o idle.o
-obj-y += idle_task.o fair.o rt.o
+obj-y += wait.o swait.o completion.o idle.o idle_task.o fair.o
+obj-$(CONFIG_SCHED_RT) += rt.o
 obj-$(CONFIG_SCHED_DL) += deadline.o $(if $(CONFIG_SMP),cpudeadline.o)
 obj-$(CONFIG_SMP) += cpupri.o topology.o stop_task.o
 obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30138033b7..0d718b68df 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -642,8 +642,8 @@ bool sched_can_stop_tick(struct rq *rq)
 	 * If there are more than one RR tasks, we need the tick to effect the
 	 * actual RR behaviour.
 	 */
-	if (rq->rt.rr_nr_running) {
-		if (rq->rt.rr_nr_running == 1)
+	if (rt_rr_nr_running(rq)) {
+		if (rt_rr_nr_running(rq) == 1)
 			return true;
 		else
 			return false;
@@ -653,7 +653,7 @@ bool sched_can_stop_tick(struct rq *rq)
 	 * If there's no RR tasks, but FIFO tasks, we can skip the tick, no
 	 * forced preemption between FIFO tasks.
 	 */
-	fifo_nr_running = rq->rt.rt_nr_running - rq->rt.rr_nr_running;
+	fifo_nr_running = rt_rt_nr_running(rq) - rt_rr_nr_running(rq);
 	if (fifo_nr_running)
 		return true;
 
@@ -1584,7 +1584,7 @@ void sched_set_stop_task(int cpu, struct task_struct *stop)
 		 * Reset it back to a normal scheduling class so that
 		 * it can die in pieces.
 		 */
-		old_stop->sched_class = &rt_sched_class;
+		old_stop->sched_class = stop_sched_class.next;
 	}
 }
 
@@ -2180,11 +2180,13 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	__dl_clear_params(p);
 #endif
 
+#ifdef CONFIG_SCHED_RT
 	INIT_LIST_HEAD(&p->rt.run_list);
 	p->rt.timeout		= 0;
 	p->rt.time_slice	= sched_rr_timeslice;
 	p->rt.on_rq		= 0;
 	p->rt.on_list		= 0;
+#endif
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
 	INIT_HLIST_HEAD(&p->preempt_notifiers);
@@ -3595,7 +3597,7 @@ int default_wake_function(wait_queue_t *curr, unsigned mode, int wake_flags,
 }
 EXPORT_SYMBOL(default_wake_function);
 
-#ifdef CONFIG_RT_MUTEXES
+#if defined(CONFIG_RT_MUTEXES) && defined(CONFIG_SCHED_RT)
 
 static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
 {
@@ -3994,6 +3996,23 @@ static int __sched_setscheduler(struct task_struct *p,
 
 	/* May grab non-irq protected spin_locks: */
 	BUG_ON(in_interrupt());
+
+	/*
+	 * When the RT scheduling class is disabled, let's make sure kernel threads
+	 * wanting RT still get lowest nice value to give them highest available
+	 * priority rather than simply returning an error. Obviously we can't test
+	 * rt_policy() here as it is always false in that case.
+	 */
+	if (!IS_ENABLED(CONFIG_SCHED_RT) && !user &&
+	    (policy == SCHED_FIFO || policy == SCHED_RR)) {
+		static const struct sched_attr k_attr = {
+			.sched_policy = SCHED_NORMAL,
+			.sched_nice = MIN_NICE,
+		};
+		attr = &k_attr;
+		policy = SCHED_NORMAL;
+	}
+
 recheck:
 	/* Double check policy once rq lock held: */
 	if (policy < 0) {
@@ -5857,7 +5876,10 @@ void __init sched_init(void)
 		rq->calc_load_active = 0;
 		rq->calc_load_update = jiffies + LOAD_FREQ;
 		init_cfs_rq(&rq->cfs);
+#ifdef CONFIG_SCHED_RT
 		init_rt_rq(&rq->rt);
+		rq->rt.rt_runtime = def_rt_bandwidth.rt_runtime;
+#endif
 		init_dl_rq(&rq->dl);
 #ifdef CONFIG_FAIR_GROUP_SCHED
 		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
@@ -5886,7 +5908,6 @@ void __init sched_init(void)
 		init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
-		rq->rt.rt_runtime = def_rt_bandwidth.rt_runtime;
 #ifdef CONFIG_RT_GROUP_SCHED
 		init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL);
 #endif
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index e879feae5f..08d7193ba5 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1825,7 +1825,11 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 }
 
 const struct sched_class dl_sched_class = {
+#ifdef CONFIG_SCHED_RT
 	.next			= &rt_sched_class,
+#else
+	.next			= &fair_sched_class,
+#endif
 	.enqueue_task		= enqueue_task_dl,
 	.dequeue_task		= dequeue_task_dl,
 	.yield_task		= yield_task_dl,
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 84f80a81ab..c550723ce9 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -645,7 +645,9 @@ do {									\
 
 	spin_lock_irqsave(&sched_debug_lock, flags);
 	print_cfs_stats(m, cpu);
+#ifdef CONFIG_SCHED_RT
 	print_rt_stats(m, cpu);
+#endif
 #ifdef CONFIG_SCHED_DL
 	print_dl_stats(m, cpu);
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c05cc33848..07366c1d04 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -132,7 +132,8 @@ static inline int fair_policy(int policy)
 
 static inline int rt_policy(int policy)
 {
-	return policy == SCHED_FIFO || policy == SCHED_RR;
+	return IS_ENABLED(CONFIG_SCHED_RT) &&
+	       (policy == SCHED_FIFO || policy == SCHED_RR);
 }
 
 static inline int dl_policy(int policy)
@@ -398,8 +399,6 @@ extern void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b);
 extern void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
 extern void unthrottle_cfs_rq(struct cfs_rq *cfs_rq);
 
-extern void free_rt_sched_group(struct task_group *tg);
-extern int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent);
 extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
 		struct sched_rt_entity *rt_se, int cpu,
 		struct sched_rt_entity *parent);
@@ -518,7 +517,7 @@ struct cfs_rq {
 
 static inline int rt_bandwidth_enabled(void)
 {
-	return sysctl_sched_rt_runtime >= 0;
+	return IS_ENABLED(CONFIG_SCHED_RT) && sysctl_sched_rt_runtime >= 0;
 }
 
 /* RT IPI pull logic requires IRQ_WORK */
@@ -567,6 +566,24 @@ struct rt_rq {
 #endif
 };
 
+extern struct rt_bandwidth def_rt_bandwidth;
+
+#ifdef CONFIG_SCHED_RT
+#define rt_rr_nr_running(rq)		(rq)->rt.rr_nr_running
+#define rt_rt_nr_running(rq)		(rq)->rt.rt_nr_running
+extern int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent);
+extern void free_rt_sched_group(struct task_group *tg);
+extern void init_sched_rt_class(void);
+extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
+#else
+#define rt_rr_nr_running(rq)		0
+#define rt_rt_nr_running(rq)		0
+#define alloc_rt_sched_group(...)	1
+#define free_rt_sched_group(tg)		do { } while (0)
+#define init_sched_rt_class()		do {  } while (0)
+#define init_rt_bandwidth(...)		do { } while (0)
+#endif
+
 /* Deadline class' related fields in a runqueue */
 struct dl_rq {
 	/* runqueue is an rbtree, ordered by deadline */
@@ -1470,8 +1487,10 @@ static inline void set_curr_task(struct rq *rq, struct task_struct *curr)
 #define sched_class_highest (&stop_sched_class)
 #elif defined(CONFIG_SCHED_DL)
 #define sched_class_highest (&dl_sched_class)
-#else
+#elif defined(CONFIG_SCHED_RT)
 #define sched_class_highest (&rt_sched_class)
+#else
+#define sched_class_highest (&fair_sched_class)
 #endif
 
 #define for_each_class(class) \
@@ -1524,15 +1543,11 @@ extern void sysrq_sched_debug_show(void);
 extern void sched_init_granularity(void);
 extern void update_max_interval(void);
 
-extern void init_sched_rt_class(void);
 extern void init_sched_fair_class(void);
 
 extern void resched_curr(struct rq *rq);
 extern void resched_cpu(int cpu);
 
-extern struct rt_bandwidth def_rt_bandwidth;
-extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
-
 extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
 
 unsigned long to_ratio(u64 period, u64 runtime);
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 5632dc3e63..7cad8c1540 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -112,8 +112,10 @@ static void update_curr_stop(struct rq *rq)
 const struct sched_class stop_sched_class = {
 #ifdef CONFIG_SCHED_DL
 	.next			= &dl_sched_class,
-#else
+#elif defined(CONFIG_SCHED_RT)
 	.next			= &rt_sched_class,
+#else
+	.next			= &fair_sched_class,
 #endif
 
 	.enqueue_task		= enqueue_task_stop,
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 4dfba1a76c..1c670f4053 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -401,6 +401,7 @@ static struct ctl_table kern_table[] = {
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_SCHED_DEBUG */
+#ifdef CONFIG_SCHED_RT
 	{
 		.procname	= "sched_rt_period_us",
 		.data		= &sysctl_sched_rt_period,
@@ -422,6 +423,7 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= sched_rr_handler,
 	},
+#endif
 #ifdef CONFIG_SCHED_AUTOGROUP
 	{
 		.procname	= "sched_autogroup_enabled",
@@ -1071,7 +1073,7 @@ static struct ctl_table kern_table[] = {
 		.extra1		= &neg_one,
 	},
 #endif
-#ifdef CONFIG_RT_MUTEXES
+#if defined(CONFIG_RT_MUTEXES) && defined(CONFIG_SCHED_RT)
 	{
 		.procname	= "max_lock_depth",
 		.data		= &max_lock_depth,
diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
index d2a1e6dd02..32b2ea6212 100644
--- a/kernel/time/posix-cpu-timers.c
+++ b/kernel/time/posix-cpu-timers.c
@@ -4,6 +4,7 @@
 
 #include <linux/sched/signal.h>
 #include <linux/sched/cputime.h>
+#include <linux/sched/rt.h>
 #include <linux/posix-timers.h>
 #include <linux/errno.h>
 #include <linux/math64.h>
@@ -814,13 +815,14 @@ static void check_thread_timers(struct task_struct *tsk,
 	/*
 	 * Check for the special case thread timers.
 	 */
-	soft = READ_ONCE(sig->rlim[RLIMIT_RTTIME].rlim_cur);
+	soft = IS_ENABLED(CONFIG_SCHED_RT) ?
+		READ_ONCE(sig->rlim[RLIMIT_RTTIME].rlim_cur) : RLIM_INFINITY;
 	if (soft != RLIM_INFINITY) {
 		unsigned long hard =
 			READ_ONCE(sig->rlim[RLIMIT_RTTIME].rlim_max);
 
 		if (hard != RLIM_INFINITY &&
-		    tsk->rt.timeout > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) {
+		    rt_timeout(tsk) > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) {
 			/*
 			 * At the hard limit, we just die.
 			 * No need to calculate anything else now.
@@ -832,7 +834,7 @@ static void check_thread_timers(struct task_struct *tsk,
 			__group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
 			return;
 		}
-		if (tsk->rt.timeout > DIV_ROUND_UP(soft, USEC_PER_SEC/HZ)) {
+		if (rt_timeout(tsk) > DIV_ROUND_UP(soft, USEC_PER_SEC/HZ)) {
 			/*
 			 * At the soft limit, send a SIGXCPU every second.
 			 */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index e4587ebe52..0ecc7eb9dc 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1008,7 +1008,7 @@ menu "Lock Debugging (spinlocks, mutexes, etc...)"
 
 config DEBUG_RT_MUTEXES
 	bool "RT Mutex debugging, deadlock detection"
-	depends on DEBUG_KERNEL && RT_MUTEXES
+	depends on DEBUG_KERNEL && RT_MUTEXES && SCHED_RT
 	help
 	 This allows rt mutex semantics violations and rt mutex related
 	 deadlocks (lockups) to be detected and reported automatically.
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
                   ` (7 preceding siblings ...)
  2017-06-06 23:24 ` [PATCH v2 8/8] sched/rt: make it configurable Nicolas Pitre
@ 2017-06-07 16:00 ` Ingo Molnar
  2017-06-07 17:09   ` Nicolas Pitre
  8 siblings, 1 reply; 23+ messages in thread
From: Ingo Molnar @ 2017-06-07 16:00 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner


* Nicolas Pitre <nicolas.pitre@linaro.org> wrote:

> Many embedded systems don't need the full scheduler support. Most of the
> time, user space is tightly controlled and many of the scheduler facilities
> are simply unused.

Sorry, NAK:

>  23 files changed, 3190 insertions(+), 2897 deletions(-)

That's a lot of extra code plus churn for a code base that is already pretty
#ifdef heavy.

Also, the savings are marginal, even with significant functionality disabled:

>   text    data     bss     dec     hex filename
>  28623    3404     128   32155    7d9b kernel/sched/built-in.o
>
> With this series and dl and rt classes disabled:
>
>   text    data     bss     dec     hex filename
>  20734    3334      40   24108    5e2c kernel/sched/built-in.o

With 1GHz + 1GB RAM SoCs being well below $10 in bulk we worry about code 
complexity, predictability, testability, behavioral and ABI uniformity a lot more 
than about the last 10-20k of kernel text footprint...

So I think the 'tiny' efforts are fundamentally misguided and are shooting for an 
ever shrinking market of RAM/ROM starved products whose share is shrinking every 
month.

We want to _remove_ kernel options and reduce complexity, not increase it.

So unless there's convincing counter arguments, or Linus overrules me, this NAK is 
pretty firm.

I'd love to see scheduler complexity reduction patches though, the "CPP count" of 
the scheduler code base is pretty damn high:

  triton:~/tip> git grep -h '^#[^ ]' kernel/sched/  | cut -d' ' -f1 | sort | uniq -c | sort -n | tail -10
      2 #ifdef  CONFIG_SCHED_DEBUG
      4 #endif  /*
     19 #if
     26 #ifndef
     27 #undef
     97 #else
    161 #define
    199 #include
    317 #ifdef
    361 #endif

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-07 16:00 ` [PATCH v2 0/8] scheduler tinification Ingo Molnar
@ 2017-06-07 17:09   ` Nicolas Pitre
  2017-06-07 18:49     ` Alan Cox
  2017-06-08  7:59     ` Ingo Molnar
  0 siblings, 2 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-07 17:09 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner

On Wed, 7 Jun 2017, Ingo Molnar wrote:

> 
> * Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> 
> > Many embedded systems don't need the full scheduler support. Most of the
> > time, user space is tightly controlled and many of the scheduler facilities
> > are simply unused.
> 
> Sorry, NAK:
> 
> >  23 files changed, 3190 insertions(+), 2897 deletions(-)
> 
> That's a lot of extra code plus churn for a code base that is already pretty
> #ifdef heavy.
> 
> Also, the savings are marginal, even with significant functionality disabled:
> 
> >   text    data     bss     dec     hex filename
> >  28623    3404     128   32155    7d9b kernel/sched/built-in.o
> >
> > With this series and dl and rt classes disabled:
> >
> >   text    data     bss     dec     hex filename
> >  20734    3334      40   24108    5e2c kernel/sched/built-in.o
> 
> With 1GHz + 1GB RAM SoCs being well below $10 in bulk we worry about code 
> complexity, predictability, testability, behavioral and ABI uniformity a lot more 
> than about the last 10-20k of kernel text footprint...
> 
> So I think the 'tiny' efforts are fundamentally misguided and are shooting for an 
> ever shrinking market of RAM/ROM starved products whose share is shrinking every 
> month.

I'm rather seeing the opposite: an ever growing market of 
internet-connected coin-cell-battery-powered tiny devices where the 
amount of RAM is counted in kilobytes rather than megabytes.

Let me repeat some background as to what my fundamental motivation is, 
and then maybe you'll understand why I'm doing this.

What is the biggest buzzword in the IT industry besides AI right now?
It is IOT.

Most IOT targets are so small that people are writing new operating 
systems from scratch for them. Lots of fragmentation already exists. 
We're talking about systems with less than one megabyte of RAM, 
sometimes much less.  Still, those things are being connected to the 
internet. And this is going to be a total security nightmare.

I wish to be able to leverage the Linux ecosystem for as much of the IOT 
space as possible to avoid the worst of those nightmares.  The Linux 
ecosystem has a *lot* of knowledgeable people around it, a lot of 
testing infrastructure and tooling available already, etc.  If a 
security issue turns up on Linux, it has a greater chance of being 
caught early, or fixed quickly otherwise, and finding people with the 
right knowledge is easier on Linux than it could be on any RTOS out 
there. Still with me so far?

Yes we have tools that can automatically reduce the kernel size. We can 
use LTO with the compiler, etc.  LTO is pretty good already. It can 
typically reduce the kernel size by 20%.  If all system calls are 
disabled except for a few, then LTO can get rid of another 20%. The 
minimal kernel I get is still 400-500 KB in size.  That's still too big.
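
For what it's worth, the kind of per-object numbers quoted above can be 
reproduced on any built tree roughly like this (a sketch only: the ARM 
cross prefix and the vmlinux.before/vmlinux.after names are just 
examples, not something this series relies on):

  make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- vmlinux
  size kernel/sched/built-in.o                           # per-object text/data/bss
  ./scripts/bloat-o-meter vmlinux.before vmlinux.after   # delta between two builds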

There is this 120 KB of VFS code that is always there even though there 
is no real filesystem at all configured in the kernel. There is that 
other 100 KB of core driver support code despite the fact that the set 
of drivers I'm using is very simple and makes no use of most of that 
core driver code. Etc.

There comes a point where there is no option but to explicitly trim out 
parts of the kernel as such decisions cannot be automated, hence this 
patch series. Bringing the scheduler under 20KB in size is therefore 
very useful in that context. Alternatively I could push for a parallel 
implementation as I did with the TTY layer where I obtained a 6x size 
reduction. But in the scheduler case I obtained only a 2x size reduction 
so I thought it could be more profitable to get about the same saving by 
reworking the existing code instead, and eventually contributing a very 
bare scheduler class that would be a smaller alternative to the fair 
scheduler for deployments where that makes sense. Unless you actually 
changed your mind about alternative whole scheduler implementations, 
that is...

For Linux to be suitable for small IoT, it has to be small, damn small. 
My target is 256 KB of RAM.  And if you look at the kind of application 
those 256-KB systems run, it's basically one main task typically 
acquiring sensor data and sending it in some encrypted protocol over a 
wireless network on the internet, and possibly accepting commands back.  
So what do you need from the OS to achieve that?  A few system calls, a 
minimal scheduler, minimal memory management, minimal filesystem 
structure and minimal network stack. And your user app.

So, why not have each of those blocks created using the existing 
Linux syscall interface and internal API?  At that point, it should be 
possible to take your standard full-featured Linux workstation and 
develop your user app on it, run it there using all the existing native 
debugging tools, etc. In the end you just pick the mini version of 
everything for the final target and you're done.  And you don't have to 
learn a whole new OS, development environment, programming model, etc.

Next on my list would be a cache-less, completely serialized VFS bypass 
that has only what's needed to make the link between the read/write 
syscalls, a filesystem driver and a block driver while preserving the 
existing kernel APIs. And by being really small, the maintenance cost of 
a "parallel" implementation isn't very high, certainly much less than 
trying to maintain a single code path that can scale to both extremes 
in that case.
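
To make that last idea a bit more concrete, here is a purely 
illustrative sketch of what such a cache-less read path could look 
like. None of these names exist anywhere: the tiny_file structure and 
its ops table are assumptions made up for this example, and the real 
question of how existing filesystem drivers would plug into it is 
deliberately ignored here:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

/*
 * Hypothetical minimal file object: just enough state to hand a read
 * request straight to the filesystem driver.  No page cache, no
 * readahead, no fancy locking.
 */
struct tiny_file {
	const struct tiny_file_ops *fops;
	void *fs_private;
};

struct tiny_file_ops {
	ssize_t (*read)(struct tiny_file *f, char __user *buf,
			size_t len, loff_t *pos);
};

/* The whole "VFS" read path: one indirect call into the fs driver. */
static ssize_t tiny_vfs_read(struct tiny_file *f, char __user *buf,
			     size_t len, loff_t *pos)
{
	if (!f->fops || !f->fops->read)
		return -EINVAL;
	return f->fops->read(f, buf, len, pos);
}

The point is only the shape of it: whether something that thin can 
really coexist with the existing filesystem and block driver APIs is 
exactly what such a series would have to demonstrate.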

PS: As far as I remember, Linus didn't condemn the idea last time I 
    brought up this topic in his presence. I therefore hope we could 
    find ways for allowing Linux usage into the largest computing device 
    deployment to come.


Nicolas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-07 17:09   ` Nicolas Pitre
@ 2017-06-07 18:49     ` Alan Cox
  2017-06-07 21:15       ` Nicolas Pitre
  2017-06-08  7:59     ` Ingo Molnar
  1 sibling, 1 reply; 23+ messages in thread
From: Alan Cox @ 2017-06-07 18:49 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Ingo Molnar, Ingo Molnar, Peter Zijlstra, linux-kernel,
	Linus Torvalds, Thomas Gleixner

> Next on my list would be a cache-less, completely serialized VFS bypass 
> that has only what's needed to make the link between the read/write 
> syscalls, a filesystem driver and a block driver while preserving the 
> existing kernel APIs. And by being really small, the maintenance cost of 
> a "parallel" implementation isn't very high, certainly much less than 
> trying to maintain a single code path that can scale to both extremes 
> in that case.

So once you've rewritten the tty layer, the device drivers, the VFS and
removed most of the syscalls why even pretend it's Linux any more. It's
something else, and that something else is totally architecturally
incompatible with Linux. That's btw a good thing - trying to fit Linux
directly into such a tiny device isn't sensible because the core
assumptions you make about scalability are just totally different.

IMHO it would be far far better to just borrow the bits that look handy,
and the bits of the ABI you need and put them together as a new OS
kernel. When you look at tiny hardware even core bits of the Linux
architecture like the wait queues are just not sensible uses of memory
and cause fragmentation. The dcache is completely insane in that
environment, the scheduler is total overkill and the networking is easy
to DoS in a tiny memory. The device layer assumes dynamic hot pluggable
device architecture - and that's extremely expensive but nonsensical for
most µcontrollers.

It's easy to put a Unix-like OS in 256K of RAM and a pile of flash. It's
going to be pretty easy to put all the major bits of the Linux API into
it. You can run 2.11BSD with only 256K of writable memory (you need more
in your PDP-11 to run it, but if you look, all of that in a µcontroller
would live in flash).


Alan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-07 18:49     ` Alan Cox
@ 2017-06-07 21:15       ` Nicolas Pitre
  2017-06-07 21:53         ` Alan Cox
  0 siblings, 1 reply; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-07 21:15 UTC (permalink / raw)
  To: Alan Cox
  Cc: Ingo Molnar, Ingo Molnar, Peter Zijlstra, linux-kernel,
	Linus Torvalds, Thomas Gleixner

On Wed, 7 Jun 2017, Alan Cox wrote:

> > Next on my list would be a cache-less, completely serialized VFS bypass 
> > that has only what's needed to make the link between the read/write 
> > syscalls, a filesystem driver and a block driver while preserving the 
> > existing kernel APIs. And by being really small, the maintenance cost of 
> > a "parallel" implementation isn't very high, certainly much less than 
> > trying to maintain a single code path that can scale to both extremes 
> > in that case.
> 
> So once you've rewritten the tty layer, the device drivers, the VFS and
> removed most of the syscalls why even pretend it's Linux any more. It's
> something else, and that something else is totally architecturally
> incompatible with Linux.

You got at least one thing wrong. One huge benefit is to leverage 
existing device drivers, of which Linux has plenty. So there is no 
point rewriting device drivers.

Then if most syscalls are removed, *of course* you won't be able to 
boot a standard "Linux" distro on it. But that's not the point either. 
However, compatibility is preserved the other way around, i.e. user 
space from this Linux subset should just work as is on a full Linux 
kernel. And it would still be a Linux code base, i.e. architecturally 
compatible with Linux at the source level.

> That's btw a good thing - trying to fit Linux
> directly into such a tiny device isn't sensible because the core
> assumptions you make about scalability are just totally different.

For a couple of core components that's true, hence my approach with the 
TTY layer. But many other parts aren't that bad. And given that a small 
system can't afford that many bells and whistles, it is not as if the 
whole of Linux would need to be rewritten anyway.

> IMHO it would be far far better to just borrow the bits that look 
> handy, and the bits of the ABI you need and put them together as a new 
> OS kernel.

Hasn't that been attempted already, and failed? One nasty effect of such 
an approach is effectively the creation of a fork: you completely 
lose the community leverage and gravitational effect, you create 
fragmentation, fixes are not propagated across, etc.

> When you look at tiny hardware even core bits of the Linux
> architecture like the wait queues are just not sensible uses of memory
> and cause fragmentation. The dcache is completely insane in that
> environment, the scheduler is total overkill and the networking is easy
> to DoS in a tiny memory. The device layer assumes dynamic hot pluggable
> device architecture - and that's extremely expensive but nonsensical for
> most µcontrollers.

Why do you think I'm proposing scheduler patches? And TTY patches before 
that, and having plans for the VFS? Obviously, all those things could be 
reimplemented for small scale in a new and separate tiny OS. But what if 
those things could just live in the Linux source tree alongside their 
big cousins and be swapped according to your needs? Why couldn't those 
arguments, served to the embedded people for years about joining the 
mainline effort, be extended to this use case as well?

> It's easy to put a Unixlike OS in 256K of RAM and a pile of flash. It's
> going to be pretty easy to put all the major bits of the Linux API into
> it. You can run 2.11BSD with only 256K of writable memory (you need more
> in your PDP-11 to run it but if you look all of that in a µcontroller
> would live in flash).

Would be nice if that could share the same source code whenever 
possible, and also the same source tree, no?


Nicolas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-07 21:15       ` Nicolas Pitre
@ 2017-06-07 21:53         ` Alan Cox
  0 siblings, 0 replies; 23+ messages in thread
From: Alan Cox @ 2017-06-07 21:53 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Ingo Molnar, Ingo Molnar, Peter Zijlstra, linux-kernel,
	Linus Torvalds, Thomas Gleixner

> You got at least one thing wrong. One huge benefit is to leverage 
> existing device drivers of which Linux is plentiful. So there is no 
> point rewriting device drivers.

So you want to keep a common interface for some of the common driver
APIs. Several people have managed that.

> > IMHO it would be far far better to just borrow the bits that look 
> > handy, and the bits of the ABI you need and put them together as a new 
> > OS kernel.  
> 
> Hasn't that been attempted and failed already? One nasty effect of such 
> an approach is effectively the creation of a fork, then you completely 
> lose the community leverage and gravitational effect, create 
> fragmentation, fixes are not propagated across, etc.

Almost nothing can be shared though, and as for the drivers you want to
re-use: if you can re-use them, you can share the code for that.

> Why do you think I'm proposing scheduler patches? And TTY patches before 
> that, and having plans for the VFS? Obviously, all those things could be 
> reimplemented for small scale in a new and separate tiny OS. But what if 
> those things could just live in the Linux source tree alongside their 
> big cousins and be swapped according to your needs? Why couldn't those 
> arguments served to the embedded people for years about joining the 
> mainline effort be extended to this use case as well?

I don't think it works like that. The overhead of the duplication
and trying to keep them aligned rapidly exceeds the value they give. The
moment you try and do the job well you also 
> 
> > It's easy to put a Unixlike OS in 256K of RAM and a pile of flash. It's
> > going to be pretty easy to put all the major bits of the Linux API into
> > it. You can run 2.11BSD with only 256K of writable memory (you need more
> > in your PDP-11 to run it but if you look all of that in a µcontroller
> > would live in flash).  
> 
> Would be nice if that could share the same source code whenever 
> possible, and also the same source tree, no?

But that will never work. The fundamental architecture of a tiny system
is different because the scaling rules and underlying algorithms are
different. Wait queues don't work sanely on tiny devices, TCP queues need
a totally different architecture, scheduling is quite different, memory
management is totally different, things like the dcache which is fairly
fundamental to the VFS internals make no sense, the locking model for
file systems makes no sense because you can't use all that expensive
scaling. Even the device core, which is designed for dynamically managed
trees of devices with hotplug, discovery and power management hierarchies,
is basically a large, resource-expensive paperweight.

It goes on and on. Add any desire to do hard real time or meet things
like ASIL-B to that and you hit a brick wall pretty damned quick.

When you proposed the tty changes I was dubious; now that you are talking
about basically writing a new OS kernel in the same git tree that shares
the drivers, it looks even less sensible from a Linux perspective.

Alan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-07 17:09   ` Nicolas Pitre
  2017-06-07 18:49     ` Alan Cox
@ 2017-06-08  7:59     ` Ingo Molnar
  2017-06-08 18:14       ` Alan Cox
  2017-06-08 20:16       ` Nicolas Pitre
  1 sibling, 2 replies; 23+ messages in thread
From: Ingo Molnar @ 2017-06-08  7:59 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner


Also, let me make it clear at the outset that we do care about RAM footprint all 
the time, and I've applied countless data structure and .text reducing patches to 
the kernel. But there's a cost/benefit analysis to be made, and this series fails 
that test in my view, because it increases the complexity of an already complex 
code base:

* Nicolas Pitre <nicolas.pitre@linaro.org> wrote:

> Most IOT targets are so small that people are rewriting new operating systems 
> from scratch for them. Lots of fragmentation already exists.

Let me offer a speculative if somewhat cynical prediction: 90% of those ghastly 
IOT hardware hacks won't survive the market. The remaining 10% will be successful 
financially, despite being ghastly hardware hacks, and will eventually, in the next 
iteration or so, get a proper OS.

As users ask for more features the hardware capabilities will increase 
dramatically and home-grown microcontroller derived code plus minimal OSes will be 
replaced by a 'real' OS. Because both developers and users will demand IPv6 
compatibility, or Bluetooth connectivity, or storage support, or any random range 
of features we have in the Linux kernel.

With the stroke of a pen from the CFO: "yes, we can spend more on our next 
hardware design!" the problem goes away, overnight, and nobody will look back at 
the hardware hack that had only 1MB of RAM.

> [...] We're talking about systems with less than one megabyte of RAM, sometimes 
> much less.

Two data points:

Firstly, by the time any Linux kernel change I commit today gets to a typical 
distro it's at least 0.5-1 years, 2 years for it to get widely used by hardware 
shops - 5 years to get used by enterprises. More latency in more conservative 
places.

Secondly, I don't see Moore's Law reversing:

   http://nerdfever.com/wp-content/uploads/2015/06/2015-06_Moravec_MIPS.png

If you combine those two time frames, the consequence is this:

Even taking the 1MB size at face value (which I don't: a networking enabled system 
can probably not function very well with just 1MB of RAM) - the RAM-starved 1 MB 
system today will effectively be a 2 MB system in 2 years.

And yes, I don't claim Moore's law will go on forever and I'm oversimplifying - 
maybe things are slowing down and it will only be 1.5 MB, but the point remains: 
the importance of your 20kb .text savings will become a 10-15k .text savings in 
just 2 years. In 8 years today's 1 MB system will be a 32 MB system if that trend 
holds up.

You can already fit a mostly full Linux system into 32 MB just fine, i.e. the 
problem has solved itself just by waiting a bit or by increasing the hardware 
capabilities a bit.

But the kernel complexity you introduce with this series stays with us! It will be 
an additional cost added to many scheduler commits going forward. It's an added 
cost for all the other use cases.

Also, it's not like 20k .text savings will magically enable Linux to fit into 1MB 
of RAM - it won't. The smallest still practical more or less generic Linux system 
in existence today is around 16 MB. You can shrink it more, but the effort 
increases exponentially once you go below a natural minimum size.

> [...]  Still, those things are being connected to the internet. [...]

So while I believe small size has its value, I think it's far more important to be 
able to _trust_ those devices than to squeeze the last kilobyte out of the kernel.

In that sense these qualities:

 - reducing complexity,
 - reducing actual line count,
 - increasing testability,
 - increasing reviewability,
 - offering behavioral and ABI uniformity

are more important than 1% of RAM on a very, very RAM-starved system which likely 
won't use Linux to begin with...

So while obviously the "complexity vs. kernel size" trade-off will always be a 
judgement call, for the scheduler it's not really an open question what we need to 
do at this stage: we need to reduce complexity and #ifdef variants, not increase 
them.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-08  7:59     ` Ingo Molnar
@ 2017-06-08 18:14       ` Alan Cox
  2017-06-08 20:16       ` Nicolas Pitre
  1 sibling, 0 replies; 23+ messages in thread
From: Alan Cox @ 2017-06-08 18:14 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Nicolas Pitre, Ingo Molnar, Peter Zijlstra, linux-kernel,
	Linus Torvalds, Thomas Gleixner

> As users ask for more features the hardware capabilities will increase 
> dramatically and home-grown microcontroller derived code plus minimal OSes will be 
> replaced by a 'real' OS. Because both developers and users will demand IPv6 
> compatibility, or Bluetooth connectivity, or storage support, or any random range 
> of features we have in the Linux kernel.

There are already tiny OS's with that feature set but they don't feel
Unixish and aren't quite so fun to program.

> Even taking the 1MB size at face value (which I don't: a networking enabled system 
> can probably not function very well with just 1MB of RAM) - the RAM-starved 1 MB 
> system today will effectively be a 2 MB system in 2 years.

Probably not - I may be wrong, but power and what you can and can't put on
the same die are likely to mean that small-RAM devices are here for a
while, and in fact the CFO will be ordering the engineers to get it into
less RAM to save 20 cents a unit.

> And yes, I don't claim Moore's law will go on forever and I'm oversimplifying - 
> maybe things are slowing down and it will only be 1.5 MB, but the point remains: 
> the importance of your 20kb .text savings will become a 10-15k .text savings in 
> just 2 years. In 8 years today's 1 MB system will be a 32 MB system if that trend 
> holds up.

Power means it's more likely IMHO that today's 256K RAM system will in a
few years be either a 64K RAM system or have tons of persistent memory.

Alan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-08  7:59     ` Ingo Molnar
  2017-06-08 18:14       ` Alan Cox
@ 2017-06-08 20:16       ` Nicolas Pitre
  2017-06-11  9:23         ` Ingo Molnar
  2017-06-11  9:42         ` Ingo Molnar
  1 sibling, 2 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-08 20:16 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner

On Thu, 8 Jun 2017, Ingo Molnar wrote:

> 
> Also, let me make it clear at the outset that we do care about RAM footprint all 
> the time, and I've applied countless data structure and .text reducing patches to 
> the kernel. But there's a cost/benefit analysis to be made, and this series fails 
> that test in my view, because it increases the complexity of an already complex 
> code base:
> 
> * Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> 
> > Most IOT targets are so small that people are rewriting new operating systems 
> > from scratch for them. Lots of fragmentation already exists.
> 
> Let me offer a speculative if somewhat cynical prediction: 90% of those ghastly 
> IOT hardware hacks won't survive the market. The remaining 10% will be successful 
> financially, despite being ghastly hardware hacks and will eventually, in the next 
> iteration or so, get a proper OS.

Your prediction is based on a false premise. There is simply no money to 
be made with IoT hardware, especially in the low end.  Those little 
devices will be given away for free because it is in the service 
subscription that the money is. So the hardware has to, and will be, 
extremely cheap to produce. If a serious bug turns up in one of those 
devices, my own cynical prediction is that no one will bother with field 
upgradability and they will ask you to throw the device away instead 
while they ship you a replacement (field upgradability implies at least 
twice the flash memory size and that comes with a cost, so some will 
gamble that obsolescence will happen before a serious bug turns up).

> As users ask for more features the hardware capabilities will increase 
> dramatically and home-grown microcontroller derived code plus minimal OSes will be 
> replaced by a 'real' OS. Because both developers and users will demand IPv6 
> compatibility, or Bluetooth connectivity, or storage support, or any random range 
> of features we have in the Linux kernel.

The "Cloud" is taking care of most of that. For the rest, your cellphone 
or IoT gateway will take over. IPv6 stacks are already used in tiny 
microcontrollers with as low as 32KB of RAM.

> With the stroke of a pen from the CFO: "yes, we can spend more on our next 
> hardware design!" the problem goes away, overnight, and nobody will look back at 
> the hardware hack that had only 1MB of RAM.

Of course hobbyists can already get a Raspberry Pi Zero and run a full 
featured Linux distro on it... for a mere 5 bucks. That comes with 512MB 
of RAM so my patches certainly don't make a difference there.

But that's not that simple.  First there is a fundamental constraint 
which is power consumption. If you want your device to run for months 
(some will hope years) from the same tiny battery then you just cannot 
afford SDRAM. So we're talking static RAM here. And to keep costs down 
because you want to give away your thingies by the millions for free it 
usually means single-chip designs with on-chip sub-megabyte static RAM.  
And in that field the 256KB mark is located towards the high end of the 
spectrum.  Many IPv6-capable chips available today have less than that.

And the thing is: people already manage to do an awful lot of stuff in 
such a constrained device. Some probably did a good job of it, but most 
of them likely suck and we don't know about their bugs because we have 
no idea what's running inside.

And because it is rather easy to write a new OS from scratch for such a 
small environment (and who didn't dream of writing his own OS, right?), 
just about every company in that field did so. That's not counting most 
Open Source ones, which usually are close to single-person projects. So 
you get a lot of fragmentation, very very little peer review, and no 
incentive for proper maintenance because the cost saving simply isn't 
significant enough.

It is just like asteroids. Some of them collapse to form bigger objects 
like planets, while others have too weak a gravitational field to gather 
more matter. My vision is about leveraging the Linux gravitational power 
to bring the tiny embedded space together because, on its own, the tiny 
embedded space simply has not enough community power to actually 
organize itself.

Of course there are important parts of Linux that couldn't be reused as 
is in such a setup, yet many other things still can be reused with 
either some modifications or a tiny parallel subsystem substitution. 
Technically, it is always possible to find ways to make it low on 
maintenance and beneficial to the wider community. But first and 
foremost you have to agree with the fundamental principle of gathering 
more people around a common codebase to make it better for everyone and 
not suggest that they stick to themselves. If you agree to that then we 
can move back to a technical discussion.

> > [...] We're talking about systems with less than one megabyte of RAM, sometimes 
> > much less.
> 
> Two data points:
> 
> Firstly, by the time any Linux kernel change I commit today gets to a typical 
> distro it's at least 0.5-1 years, 2 years for it to get widely used by hardware 
> shops - 5 years to get used by enterprises. More latency in more conservative 
> places.

Don't forget that you are also merging patches today from the Android 
folks that were deployed in actual products years ago. So the 
enterprise distro comparison simply doesn't hold here.

> Secondly, I don't see Moore's Law reversing:
> 
>    http://nerdfever.com/wp-content/uploads/2015/06/2015-06_Moravec_MIPS.png
> 
> If you combine those two time frames, the consequence of this:
> 
> Even taking the 1MB size at face value (which I don't: a networking enabled system 
> can probably not function very well with just 1MB of RAM) - the RAM-starved 1 MB 
> system today will effectively be a 2 MB system in 2 years.

As surprising as it might be, IPv6 stacks requiring only a few dozen 
kilobytes of memory do exist. Not so surprisingly though, some people 
think that the existing stacks simply suck and they are rewriting yet 
another one ... because they think their own will be better, of course.

So there *is* still a huge market for sub-megabyte systems. I was also 
counting on Moore's law so that by the time Linux actually has the 
ability to be tailored for such systems, typical SRAM in those 
10-cent microcontrollers will be 512KB instead of 128 or 32.

> You can already fit a mostly full Linux system into 32 MB just fine, i.e. the 
> problem has solved itself just by waiting a bit or by increasing the hardware 
> capabilities a bit.

You just can't procure SDRAM chips smaller than 32MB on the market 
anymore. That's why Linux hasn't had any pressure to fit into anything 
smaller than that for quite a while. But I've heard of some people 
having use cases for thousands if not millions of Linux VMs on a single 
server, and they're looking at 10MB VMs or smaller for their application.

> But the kernel complexity you introduce with this series stays with us! It will be 
> an additional cost added to many scheduler commits going forward. It's an added 
> cost for all the other usecases.

OK, let's talk about that a bit. How is sched/core.c with its 7387 
lines not overly complex already? How is my moving of rt related code to 
rt.c and dl related code to dl.c not helping things? Isn't it easier to 
understand the 3500 lines of code in futex.c when half of it, i.e. the PI 
specific code, is split into a separate file? I ask you.

If you want to pick only those patches for now then please be my guest. 
At least the first two patches of the series should be mergeable without 
even a doubt.

As to the actual complexity I'm introducing... this is just about not 
compiling some files in and stubbing calls to them out. Isn't that a 
sign of good isolation when you can stub the dl class out with only 9 
insertions and 6 deletions to sched/core.c? I'm not saying the 
complexity is nonexistent here, but just the _ability_ to remove a 
scheduler class enforces code abstractions which should be a good thing 
maintenance wise, no?
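
To spell out what "stubbing calls out" means here, the following is only 
a rough sketch of the pattern and not the actual patch; the 
CONFIG_SCHED_DL symbol, the init_sched_dl_class() hook and the call site 
below are assumptions made up for the example:

/* kernel/sched/sched.h (sketch) */
#include <linux/sched.h>

/*
 * Declared unconditionally; deadline.c (and thus the definition) is only
 * built when the class is configured in, e.g. via
 * "obj-$(CONFIG_SCHED_DL) += deadline.o" in kernel/sched/Makefile.
 */
extern const struct sched_class dl_sched_class;
extern const struct sched_class fair_sched_class;

#ifdef CONFIG_SCHED_DL
extern void init_sched_dl_class(void);
#else
/* no-op stub so that callers in core.c need no #ifdef of their own */
static inline void init_sched_dl_class(void) { }
#endif

/* kernel/sched/core.c (sketch of a call site) */
static void pick_class(struct task_struct *p, int policy)
{
	/* dead-code elimination drops the dl reference when the class is out */
	if (IS_ENABLED(CONFIG_SCHED_DL) && policy == SCHED_DEADLINE)
		p->sched_class = &dl_sched_class;
	else
		p->sched_class = &fair_sched_class;
}

The #ifdef lives in one header, the bulk of the class stays in its own 
file, and whether it is built at all becomes a one-line Makefile 
decision.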

> Also, it's not like 20k .text savings will magically enable Linux to fit into 1MB 
> of RAM - it won't. The smallest still practical more or less generic Linux system 
> in existence today is around 16 MB. You can shrink it more, but the effort 
> increases exponentially once you go below a natural minimum size.

Again, I'm not after a tiny-and-generic Linux target. I'm after a 
tiny-and-heavily-tailored Linux subset that shares the same ABI and API 
as the generic Linux. Once you start compiling out pieces of the core 
kernel, it obviously isn't generic anymore, but the potential for size 
reduction becomes much bigger.

Anyway... as I said, you have to agree with the high level goal and 
principle of leveraging the Linux codebase to gather the tiny embedded 
people around it. The tiny embedded community simply will never take 
hold otherwise. If we cannot agree on that then any other point of 
discussion is moot. In which case I'll simply drop this project entirely 
and move on.


Nicolas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-08 20:16       ` Nicolas Pitre
@ 2017-06-11  9:23         ` Ingo Molnar
  2017-06-11 15:26           ` Nicolas Pitre
  2017-06-11  9:42         ` Ingo Molnar
  1 sibling, 1 reply; 23+ messages in thread
From: Ingo Molnar @ 2017-06-11  9:23 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner


* Nicolas Pitre <nicolas.pitre@linaro.org> wrote:

> > But the kernel complexity you introduce with this series stays with us! It 
> > will be an additional cost added to many scheduler commits going forward. It's 
> > an added cost for all the other usecases.
> 
> OK, let's talk about that a bit. How isn't sched/core.c with its 7387 
> lines not overly complex already? How is my moving of rt related code to 
> rt.c and dl related code to dl.c not helping things? Isn't it easier to 
> understand the 3500 lines of code in futex.c when half of it i.e. the PI 
> specific code is split into a separate file? I ask you.
> 
> If you want to pick only those patches for now then please be my guest. 
> At least the first two patches of the series should be mergeable without 
> even a doubt.

That's a strawman argument - I was reacting to the combined effect of your series:

 > > >  23 files changed, 3190 insertions(+), 2897 deletions(-)

A subset of the patches might be fine - and note that in fact I already picked a 
patch from your series that made sense; I committed this patch of yours three days 
ago:

  f5832c1998af: sched/core: Omit building stop_sched_class when !SMP

I'll pick others as well as long as they don't complicate the code. Please send a 
revised series that only does unambiguous complexity reduction/cleanups.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-08 20:16       ` Nicolas Pitre
  2017-06-11  9:23         ` Ingo Molnar
@ 2017-06-11  9:42         ` Ingo Molnar
  2017-06-11 16:45           ` Nicolas Pitre
  1 sibling, 1 reply; 23+ messages in thread
From: Ingo Molnar @ 2017-06-11  9:42 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner


* Nicolas Pitre <nicolas.pitre@linaro.org> wrote:

> > With the stroke of a pen from the CFO: "yes, we can spend more on our next 
> > hardware design!" the problem goes away, overnight, and nobody will look back at 
> > the hardware hack that had only 1MB of RAM.
> 
> Of course hobbyists can already get a Raspberry Pi Zero and run a full 
> featured Linux distro on it... for a mere 5 bucks. That comes with 512MB 
> of RAM so my patches certainly don't make a difference there.

Note that those mere 5 bucks are probably 50 cents or less in bulk. Perfectly fine 
economics for many types of 'throw away IoT hardware' products.

> But that's not that simple.  First there is a fundamental constraint 
> which is power consumption. If you want your device to run for months 
> (some will hope years) from the same tiny battery then you just cannot 
> afford SDRAM. So we're talking static RAM here. And to keep costs down 
> because you want to give away your thingies by the millions for free it 
> usually means single-chip designs with on-chip sub-megabyte static RAM.  
> And in that field the 256KB mark is located towards the high end of the 
> spectrum.  Many IPv6-capable chips available today have less than that.
> 
> And the thing is: people already manage to do an awful lot of stuff in 
> such a constrained device. Some probably did a good job of it, but most 
> of them likely suck and we don't know about their bugs because we have 
> no idea what's running inside.

Ok, let me put it this way: there's no way in hell I see a viable Linux kernel 
running (no matter how stripped down) in 32K or even 64K of RAM. 256K is a stretch 
as well - but that RAM size you claim to be already 'high end', so it probably 
wouldn't be used as a standardized solution anyway...

Today a Linux 'allnoconfig' kernel, i.e. a kernel with no device drivers and no 
filesystems whatsoever and with everything optional turned off (including all 
networking!), is over 2MB large text+data (on x86, which has a compressed 
instruction set - it would possibly be larger on simpler CPUs):

 triton:~/tip> size vmlinux
    text    data     bss     dec  filename
  926056  208624 1215904 2350584  vmlinux

A series that shrinks the .text size of the allnoconfig core Linux kernel from 1MB 
to 9.9MB in isolation is not proof.

There will literally have to be two orders of magnitude more patches than that to 
reach the 32K size envelope, if I (very) optimistically assume that the difficulty 
to shrink code is constant (which it most certainly is not).

I.e. the whole stated premise of the series is wildly unrealistic AFAICS: the 
series does not make Linux more usable at all on that category of devices (Linux 
is totally inadequate there because it's way too large), it only increases its 
complexity.

But you can prove me wrong: show me a Linux kernel for a real device that fits 
into 32KB of RAM (or even 256 KB) and _then_ I'll consider the cost/benefit 
equation.

Until that happens I consider most forms of additional complexity on the 
non-hardware dependent side of the kernel a net negative.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-11  9:23         ` Ingo Molnar
@ 2017-06-11 15:26           ` Nicolas Pitre
  0 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-11 15:26 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner

On Sun, 11 Jun 2017, Ingo Molnar wrote:

> 
> * Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> 
> > If you want to pick only those patches for now then please be my guest. 
> > At least the first two patches of the series should be mergeable without 
> > even a doubt.
> 
> That's a strawman argument - I was reacting to the combined effect of your series:
> 
>  > > >  23 files changed, 3190 insertions(+), 2897 deletions(-)

As I mentioned, the bulk of that count comes from moving rt and dl code 
out of sched/core.c into their respective .c files:


    sched/deadline: move dl related code out of sched/core.c
    
    ... to sched/deadline.c. This helps making sched/core.c smaller and
    hopefully easier to understand and maintain. This also will help
    configuring the deadline scheduling class out of the kernel build.
    
    Signed-off-by: Nicolas Pitre <nico@linaro.org>

 kernel/sched/core.c     | 335 +----------------------------------------
 kernel/sched/deadline.c | 336 ++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h    |  14 ++
 3 files changed, 356 insertions(+), 329 deletions(-)


    sched/rt: move rt related code out of sched/core.c
    
    ... to sched/rt.c. This helps making sched/core.c smaller and hopefully
    easier to understand and maintain. This also will make it easier to
    configure the realtime scheduling class out of the kernel build.
    
    Signed-off-by: Nicolas Pitre <nico@linaro.org>

 kernel/sched/core.c  | 315 ---------------------------------------------
 kernel/sched/rt.c    | 310 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |   5 +
 3 files changed, 315 insertions(+), 315 deletions(-)

I also untangled the futex code so the PI support is gathered in a file 
of its own:


    futex: make PI support optional
    
    Split out the priority inheritance support to a file of its own
    to make futex.c easier to understand and, hopefully, to maintain.
    This also makes it possible to compile out the PI support when RT
    task support is not available.
    
    Signed-off-by: Nicolas Pitre <nico@linaro.org>

 include/linux/futex.h |    7 +-
 init/Kconfig          |    7 +-
 kernel/futex.c        | 2829 ++++++++++---------------------------------
 kernel/futex_pi.c     | 1563 ++++++++++++++++++++++++
 4 files changed, 2233 insertions(+), 2173 deletions(-)

Granted, I made a mistake in that last description above: it should have 
said "RT mutex support" instead of "RT task support". But those 3 
patches are making the code easier to understand, I'd say.
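
To show what the split buys at the build level, here is only a sketch of 
the kind of pattern it enables; the CONFIG_FUTEX_PI symbol and the exact 
prototype are assumptions for illustration, not a quote from the patch:

/* sketch of a shared declaration used by futex.c and futex_pi.c */
#include <linux/futex.h>
#include <linux/ktime.h>
#include <linux/errno.h>

#ifdef CONFIG_FUTEX_PI
/* implemented in kernel/futex_pi.c, built only when PI futexes are wanted */
int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
		  ktime_t *time, int trylock);
#else
static inline int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
				ktime_t *time, int trylock)
{
	return -ENOSYS;	/* PI operations rejected when compiled out */
}
#endif

The dispatch code can then keep calling such entry points 
unconditionally, with the PI-only operations failing cleanly on kernels 
that leave the support out.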

> A subset of the patches might be fine and note that in fact I already picked a 
> patch from your series that made sense, I committed this patch of yours three days 
> ago:
> 
>   f5832c1998af: sched/core: Omit building stop_sched_class when !SMP

Good. That was patch #2/8.  Why did you skip over #1/8 "cpuset/sched: 
cpuset makes sense for SMP only"? It is the same kind of simple cleanup 
as the one you did apply.

> I'll pick others as well as long as they don't complicate the code. Please send a 
> revised series that only does unambiguous complexity reduction/cleanups.

Tell me from the above which patches would qualify and I'll repost them.


Nicolas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-11  9:42         ` Ingo Molnar
@ 2017-06-11 16:45           ` Nicolas Pitre
  2017-06-13  7:12             ` Ingo Molnar
  0 siblings, 1 reply; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-11 16:45 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner

On Sun, 11 Jun 2017, Ingo Molnar wrote:

> * Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> 
> > But that's not that simple.  First there is a fundamental constraint 
> > which is power consumption. If you want your device to run for months 
> > (some will hope years) from the same tiny battery then you just cannot 
> > afford SDRAM. So we're talking static RAM here. And to keep costs down 
> > because you want to give away your thingies by the millions for free it 
> > usually means single-chip designs with on-chip sub-megabyte static RAM.  
> > And in that field the 256KB mark is located towards the high end of the 
> > spectrum.  Many IPv6-capable chips available today have less than that.
> > 
> And the thing is: people already manage to do an awful lot of stuff in 
> > such a constrained device. Some probably did a good job of it, but most 
> > of them likely suck and we don't know about their bugs because we have 
> > no idea what's running inside.
> 
> Ok, let me put it this way: there's no way in hell I see a viable Linux kernel 
> running (no matter how stripped down) in 32K or even 64K of RAM. 256K is a stretch 
> as well - but that RAM size you claim to be already 'high end', so it probably 
> wouldn't be used as a standardized solution anyway...

I never claimed to make Linux runnable in 32KB of RAM. Therefore we 
strongly agree here. I did, however, mention that some 32KB chips are IPv6 
capable, just to give you a different perspective given that you're more 
acquainted with multi-gigabyte systems.

And since you did mention Moore's law previously: even if 256KB of 
RAM is somewhat high-end today in that space, it should become 
pretty common in the near future. The test board in front of me has 
384KB of SRAM, and bigger ones exist.

> Today a Linux 'allnoconfig' kernel, i.e. a kernel with no device drivers and no 
> filesystems whatsoever and with everything optional turned off (including all 
> networking!), is over 2MB large text+data (on x86, which has a compressed 
> instruction set - it would possibly be larger on simpler CPUs):
> 
>  triton:~/tip> size vmlinux
>     text    data     bss     dec  filename
>   926056  208624 1215904 2350584  vmlinux

On ARM, allnoconfig produces:

   text    data     bss     dec     hex filename
 548144   95480   24252  667876   a30e4 vmlinux

But more realistically, the test system I'm using currently runs the 
kernel XIP from flash, so the text size is an indirect metric. It uses 
external RAM as the 384KB of SRAM still doesn't allow for a successful 
boot. But here's what I get once booted:

/ # free
             total       used       free     shared    buffers     cached
Mem:          7936       1624       6312          0          0        492
-/+ buffers/cache:       1132       6804
/ # uname -a
Linux (none) 4.12.0-rc4-00013-g32352a9367 #35 PREEMPT Sun Jun 11 10:45:02 EDT 2017 armv7ml GNU/Linux

I could make user space XIP from flash as well, but right now it is just 
some initramfs living in RAM.

Obviously you can't use the native Linux networking stack in such small 
systems. But a few IPv6 stacks have been made to work in a few kilobytes 
already.

> A series that shrinks the .text size of the allnoconfig core Linux kernel from 1MB 
> to 9.9MB in isolation is not proof.

I assume you meant 0.9MB.

It is no proof of course. But I'm following the well known and proven 
"release early, release often" mantra here... unless this is no longer 
promoted?

> There will literally have to be two orders of magnitude more patches than that to 
> reach the 32K size envelope, if I (very) optimistically assume that the difficulty 
> to shrink code is constant (which it most certainly is not).

Once again, my goal is _not_ 32KB.

And I don't intend to shrink code. Most of the time I just want to 
_remove_ code. Compiling it out, to be precise. The goal of this series 
is all about compiling out code. And to achieve that with the scheduler, 
I simply moved some code to different source files and did not include 
those source files in the final build. That keeps the number of #ifdef's 
to a minimum but it makes a big diffstat due to the code movement.
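
As an illustration of why the diffstat is dominated by code movement 
rather than by new conditionals, compare the two styles below. This is 
a sketch with made-up function and config names, not code from the 
series:

/* made-up helpers, assumed to be declared in sched.h */
void update_rt_rq_clock(struct rq *rq);
void update_fair_rq_clock(struct rq *rq);

/* the #ifdef-heavy style the series tries to avoid, at every call site */
void example_tick(struct rq *rq)
{
#ifdef CONFIG_SCHED_RT
	update_rt_rq_clock(rq);
#endif
	update_fair_rq_clock(rq);
}

/*
 * With the rt code moved to rt.c and a one-line stub in sched.h
 * (static inline void update_rt_rq_clock(struct rq *rq) { } when rt.c
 * is not built), the call site stays unconditional:
 */
void example_tick_clean(struct rq *rq)
{
	update_rt_rq_clock(rq);		/* compiles to nothing when RT is out */
	update_fair_rq_clock(rq);
}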

In the TTY layer case, I found out that writing a simplistic parallel 
equivalent that doesn't have to scale to server-class systems and 
remains compatible with existing drivers allowed a 6x size 
reduction. The same strategy could be employed with the VFS, where any 
kind of file caching doesn't make sense in a tiny system. Don't worry, 
I'm not looking forward to using BTRFS in 256KB of RAM either.

To give you an idea, here's the size breakdown from that booting 
kernel above:

$ size */built-in.o
   text    data     bss     dec     hex filename
 290669   41864    3616  336149   52115 drivers/built-in.o
 173275    1189    5472  179936   2bee0 fs/built-in.o
  10135   14084      84   24303    5eef init/built-in.o
 198624   22000   25160  245784   3c018 kernel/built-in.o
  79064     133      53   79250   13592 lib/built-in.o
  97034    6328    3532  106894   1a18e mm/built-in.o
   2135       0       0    2135     857 security/built-in.o
 146046       0       0  146046   23a7e usr/built-in.o
      0       0       0       0       0 virt/built-in.o

That's without LTO (because with LTO there's no way to size individual 
parts) and without syscall trimming. From previous experiments, LTO 
brings a 20% reduction in the final build size, and LTO plus syscall 
trimming together provide about a 40% reduction. One nice thing about LTO 
is that part of the 75KB of lib code automatically gets discarded when 
not referenced, etc.  This is not always the case for most of the core 
driver infrastructure, despite most of it not being used in my case.

But there are pieces of the kernel that can't automatically be 
eliminated, such as scheduler classes, because the compiler just can't 
tell if they'll be used at run time.

Some "memory hogs" (relatively speaking) might need a tiny version to 
cope with a handful of processes max and a few static drivers. As Alan 
said, wait queues as they are right now consume a lot of memory. But 
since they're well defined and encapsulated already, it is possible to 
provide a light alternative implemented in a way that uses much less 
memory with the side effect of being much less scalable. But scalability 
is not a huge concern when you have only 256KB of RAM.
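
To make the wait queue point concrete, here is a purely illustrative 
sketch of what a lighter-weight waiter list could look like. None of 
these names exist in the kernel, locking is deliberately left to the 
caller, and this is not code from the series:

#include <linux/sched.h>

/* two pointers of per-waiter state instead of a full wait queue entry */
struct tiny_waiter {
	struct task_struct *task;
	struct tiny_waiter *next;
};

struct tiny_waitqueue {
	struct tiny_waiter *head;	/* singly linked, unsorted */
};

/* caller provides its own serialization, then calls schedule() */
static inline void tiny_wait_prepare(struct tiny_waitqueue *wq,
				     struct tiny_waiter *w)
{
	w->task = current;
	w->next = wq->head;
	wq->head = w;
	set_current_state(TASK_UNINTERRUPTIBLE);
}

static inline void tiny_wake_all(struct tiny_waitqueue *wq)
{
	struct tiny_waiter *w;

	for (w = wq->head; w; w = w->next)
		wake_up_process(w->task);
	wq->head = NULL;
}

The trade-off is exactly the one described above: no exclusive wakeups, 
no ordering, no poll support, which is acceptable only when you know the 
system will never need them.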

So it is a combination of strategies that will make the 256KB goal 
possible. And as you can see from the free output above, this is not 
_that_ far off already.

> But you can prove me wrong: show me a Linux kernel for a real device that fits 
> into 32KB of RAM (or even 256 KB) and _then_ I'll consider the cost/benefit 
> equation.

Your insistence on 32KB in this discussion is simply disingenuous.

So you are basically saying that you want me to work another year on 
this project "behind closed doors" and come out with "a final solution" 
before you tell me if my approach is worthy of your consideration? 
Thanks but no thanks. As I said elsewhere, the value in this proposal is 
mainline inclusion as an ongoing process; otherwise there is no gain over 
those small OSes out there, and my time is more valuable than that.


Nicolas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-11 16:45           ` Nicolas Pitre
@ 2017-06-13  7:12             ` Ingo Molnar
  2017-06-13 12:29               ` Nicolas Pitre
  0 siblings, 1 reply; 23+ messages in thread
From: Ingo Molnar @ 2017-06-13  7:12 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner


* Nicolas Pitre <nicolas.pitre@linaro.org> wrote:

> > A series that shrinks the .text size of the allnoconfig core Linux kernel from 1MB 
> > to 9.9MB in isolation is not proof.
> 
> I assume you meant 0.9MB.

0.992 MB actually if we apply the ~8k .text savings. 0.9MB would imply 100k of 
savings on an allnoconfig kernel.

> It is no proof of course. But I'm following the well known and proven 
> "release early, release often" mantra here... unless this is no longer 
> promoted?

I'm following that same pattern: I gave you negative review feedback as early as 
possible. Fragmentation of the scheduler ABI increases complexity and has knock-on 
costs - and the kernel size reductions for the use case you cited are still 1-2 
orders of magnitude away from making a practical difference.

> > There will literally have to be two orders of magnitude more patches than that 
> > to reach the 32K size envelope, if I (very) optimistically assume that the 
> > difficulty to shrink code is constant (which it most certainly is not).
> 
> Once again, my goal is _not_ 32KB.
> 
> And I don't intend to shrink code. Most of the time I just want to 
> _remove_ code. Compiling it out to be precise. The goal of this series 
> is all about compiling out code. And to achieve that with the scheduler, 
> I simply moved some code to different source files and not including 
> those source files in the final build. That keeps the number of #ifdef's 
> to a minimum but it makes a big diffstat due to the code movement.

So I'm fine with most of the code movement - let's try this series without any of 
the more controversial bits, which should make future arguments easier.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 0/8] scheduler tinification
  2017-06-13  7:12             ` Ingo Molnar
@ 2017-06-13 12:29               ` Nicolas Pitre
  0 siblings, 0 replies; 23+ messages in thread
From: Nicolas Pitre @ 2017-06-13 12:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Linus Torvalds,
	Thomas Gleixner

On Tue, 13 Jun 2017, Ingo Molnar wrote:

> > I simply moved some code to different source files and not including 
> > those source files in the final build. That keeps the number of #ifdef's 
> > to a minimum but it makes a big diffstat due to the code movement.
> 
> So I'm fine with most of the code movement - let's try this series without any of 
> the more controversial bits which should make future arguments easier.

You should then be able to merge patches #1 to #5 already (you already 
have #2) as the more controversial ones are at the end.


Nicolas

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2017-06-13 12:29 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-06 23:24 [PATCH v2 0/8] scheduler tinification Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 1/8] cpuset/sched: cpuset makes sense for SMP only Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 2/8] sched: omit stop_sched_class when !SMP Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 3/8] futex: make PI support optional Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 4/8] sched/deadline: move dl related code out of sched/core.c Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 5/8] sched/rt: move rt " Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 6/8] sched/deadline: make it configurable Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 7/8] rtmutex: compatibility wrappers when no RT support is configured Nicolas Pitre
2017-06-06 23:24 ` [PATCH v2 8/8] sched/rt: make it configurable Nicolas Pitre
2017-06-07 16:00 ` [PATCH v2 0/8] scheduler tinification Ingo Molnar
2017-06-07 17:09   ` Nicolas Pitre
2017-06-07 18:49     ` Alan Cox
2017-06-07 21:15       ` Nicolas Pitre
2017-06-07 21:53         ` Alan Cox
2017-06-08  7:59     ` Ingo Molnar
2017-06-08 18:14       ` Alan Cox
2017-06-08 20:16       ` Nicolas Pitre
2017-06-11  9:23         ` Ingo Molnar
2017-06-11 15:26           ` Nicolas Pitre
2017-06-11  9:42         ` Ingo Molnar
2017-06-11 16:45           ` Nicolas Pitre
2017-06-13  7:12             ` Ingo Molnar
2017-06-13 12:29               ` Nicolas Pitre

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).