* [STABLE] kernel oops which can be fixed by peterz's patches
@ 2016-01-05  8:52 Byungchul Park
  2016-01-05  9:14 ` Peter Zijlstra
                   ` (4 more replies)
  0 siblings, 5 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  8:52 UTC (permalink / raw)
  To: stable; +Cc: peterz, linux-kernel


Upstream commits to be applied
==============================

e3fca9e: sched: Replace post_schedule with a balance callback list
4c9a4bc: sched: Allow balance callbacks for check_class_changed()
8046d68: sched,rt: Remove return value from pull_rt_task()
fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
0ea60c2: sched,dl: Remove return value from pull_dl_task()
9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks

The reason why these should be applied
======================================

Our products, which are developed on the 3.16 kernel, hit a kernel oops
that the upstream patches above fix. The oops is an "Unable to handle
kernel NULL pointer dereference at virtual address 000000xx" in the
call path,

__sched_setscheduler()
	check_class_changed()
		switched_to_fair()
			check_preempt_curr()
				check_preempt_wakeup()
					find_matching_se()
						is_same_group()

at the "if (se->cfs_rq == pse->cfs_rq)" comparison in is_same_group(),
with both se and pse NULL.
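
For reference, the relevant helpers in 3.16-era kernel/sched/fair.c look
roughly like this (a paraphrased sketch for illustration, trimmed to the
lines needed to see the dereference; not a patch):

	static inline struct cfs_rq *
	is_same_group(struct sched_entity *se, struct sched_entity *pse)
	{
		if (se->cfs_rq == pse->cfs_rq)	/* oopses once se == pse == NULL */
			return se->cfs_rq;

		return NULL;
	}

	static void
	find_matching_se(struct sched_entity **se, struct sched_entity **pse)
	{
		/* ... walking both entities up to equal depth elided ... */

		while (!is_same_group(*se, *pse)) {
			*se = parent_entity(*se);	/* NULL once past the root */
			*pse = parent_entity(*pse);
		}
	}

If the two entities never end up sharing a cfs_rq (for example because
the task was moved to another rq while rq->lock was dropped, as the
patches below explain), the loop walks off the top of both hierarchies
and is_same_group() is called with two NULL pointers.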

How to apply it
===============

For stable 4.2.8+:
	N/A (already applied)

For longterm 4.1.15:
	Cherry-picking the upstream commits works with a trivial conflict.

For longterm 3.18.25:
	Refer to the backported patches in this thread.

For longterm 3.14.58:
	Refer to the backported patches in this thread. Applying the
	additional commit "6c3b4d4: sched: Clean up idle task SMP logic"
	first makes backporting the upstream commits much simpler, so my
	backport series includes it.

For longterm 2.6.32.69 ~ 3.12.51:
	Backporting is still needed. (I have not done it.)
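
In case it helps with testing a backport, below is a minimal userspace
stress sketch that exercises the switched_{from,to} path by flipping
thread policies between SCHED_FIFO and SCHED_OTHER. It is an
illustrative sketch only (the thread count and priorities are
arbitrary); it needs root for SCHED_FIFO, and ideally the threads
should be spread across cpu cgroups so that group scheduling, and thus
find_matching_se(), is involved.

	/* policy-flip stress sketch: build with gcc -O2 -pthread */
	#include <pthread.h>
	#include <sched.h>

	#define NTHREADS 8

	/* Each FIFO -> OTHER transition runs switched_from_rt() and
	 * switched_to_fair() on a live, possibly migrating task. */
	static void *flipper(void *arg)
	{
		struct sched_param fifo   = { .sched_priority = 1 };
		struct sched_param normal = { .sched_priority = 0 };

		(void)arg;
		for (;;) {
			pthread_setschedparam(pthread_self(), SCHED_FIFO, &fifo);
			pthread_setschedparam(pthread_self(), SCHED_OTHER, &normal);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t tid[NTHREADS];
		int i;

		for (i = 0; i < NTHREADS; i++)
			if (pthread_create(&tid[i], NULL, flipper, NULL))
				return 1;
		for (i = 0; i < NTHREADS; i++)
			pthread_join(tid[i], NULL);
		return 0;
	}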



* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-01-05  8:52 [STABLE] kernel oops which can be fixed by peterz's patches Byungchul Park
@ 2016-01-05  9:14 ` Peter Zijlstra
  2016-01-12  8:47   ` Byungchul Park
                     ` (2 more replies)
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 33+ messages in thread
From: Peter Zijlstra @ 2016-01-05  9:14 UTC (permalink / raw)
  To: Byungchul Park; +Cc: stable, linux-kernel

On Tue, Jan 05, 2016 at 05:52:11PM +0900, Byungchul Park wrote:
> 
> Upstream commits to be applied
> ==============================
> 
> e3fca9e: sched: Replace post_schedule with a balance callback list
> 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> 8046d68: sched,rt: Remove return value from pull_rt_task()
> fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
> 0ea60c2: sched,dl: Remove return value from pull_dl_task()
> 9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
> 
> The reason why these should be applied
> ======================================
> 
> Our products, which are developed on the 3.16 kernel, hit a kernel oops
> that the upstream patches above fix. The oops is an "Unable to handle
> kernel NULL pointer dereference at virtual address 000000xx" in the
> call path,
> 
> __sched_setscheduler()
> 	check_class_changed()
> 		switched_to_fair()
> 			check_preempt_curr()
> 				check_preempt_wakeup()
> 					find_matching_se()
> 						is_same_group()
> 
> at the "if (se->cfs_rq == pse->cfs_rq)" comparison in is_same_group(),
> with both se and pse NULL.

So the reason I didn't mark them for stable is that they were
non-trivial; however, they've been in for a while now and nothing broke,
so I suppose backporting them isn't a problem.

> How to apply it
> ===============
> 
> For stable 4.2.8+:
> 	N/A (already applied)
> 
> For longterm 4.1.15:
> 	Cherry-picking the upstream commits works with a trivial conflict.
> 
> For longterm 3.18.25:
> 	Refer to the backported patches in this thread.
> 
> For longterm 3.14.58:
> 	Refer to the backported patches in this thread. Applying the
> 	additional commit "6c3b4d4: sched: Clean up idle task SMP logic"
> 	first makes backporting the upstream commits much simpler, so my
> 	backport series includes it.
> 
> For longterm 2.6.32.69 ~ 3.12.51:
> 	Backporting is still needed. (I have not done it.)

No objection as long as you've actually tested things, etc.


* [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic
  2016-01-05  8:52 [STABLE] kernel oops which can be fixed by peterz's patches Byungchul Park
  2016-01-05  9:14 ` Peter Zijlstra
@ 2016-01-05  9:16 ` Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 2/7] sched: Replace post_schedule with a balance callback list Byungchul Park
                     ` (5 more replies)
  2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
                   ` (2 subsequent siblings)
  4 siblings, 6 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:16 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, Daniel Lezcano, Vincent Guittot, alex.shi,
	mingo, Steven Rostedt

From: Peter Zijlstra <peterz@infradead.org>

The idle post_schedule flag is just a vile waste of time; furthermore,
it appears unneeded. Move the idle_enter_fair() call into
pick_next_task_idle().

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: alex.shi@linaro.org
Cc: mingo@kernel.org
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/n/tip-aljykihtxJt3mkokxi0qZurb@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/idle_task.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/kernel/sched/idle_task.c b/kernel/sched/idle_task.c
index 516c3d9..d08678d 100644
--- a/kernel/sched/idle_task.c
+++ b/kernel/sched/idle_task.c
@@ -19,11 +19,6 @@ static void pre_schedule_idle(struct rq *rq, struct task_struct *prev)
 	idle_exit_fair(rq);
 	rq_last_tick_reset(rq);
 }
-
-static void post_schedule_idle(struct rq *rq)
-{
-	idle_enter_fair(rq);
-}
 #endif /* CONFIG_SMP */
 /*
  * Idle tasks are unconditionally rescheduled:
@@ -37,8 +32,7 @@ static struct task_struct *pick_next_task_idle(struct rq *rq)
 {
 	schedstat_inc(rq, sched_goidle);
 #ifdef CONFIG_SMP
-	/* Trigger the post schedule to do an idle_enter for CFS */
-	rq->post_schedule = 1;
+	idle_enter_fair(rq);
 #endif
 	return rq->idle;
 }
@@ -102,7 +96,6 @@ const struct sched_class idle_sched_class = {
 #ifdef CONFIG_SMP
 	.select_task_rq		= select_task_rq_idle,
 	.pre_schedule		= pre_schedule_idle,
-	.post_schedule		= post_schedule_idle,
 #endif
 
 	.set_curr_task          = set_curr_task_idle,
-- 
1.9.1



* [PATCH for v3.14.58 2/7] sched: Replace post_schedule with a balance callback list
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
@ 2016-01-05  9:16   ` Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 3/7] sched: Allow balance callbacks for check_class_changed() Byungchul Park
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:16 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

Generalize the post_schedule() stuff into a balance callback list.
This allows us to more easily use it outside of schedule() and across
sched_class boundaries.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.424032725@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/core.c
	kernel/sched/deadline.c
	kernel/sched/rt.c
	kernel/sched/sched.h
---
 kernel/sched/core.c     | 36 ++++++++++++++++++++++++------------
 kernel/sched/deadline.c | 23 ++++++++++++++++-------
 kernel/sched/rt.c       | 27 ++++++++++++++++-----------
 kernel/sched/sched.h    | 19 +++++++++++++++++--
 4 files changed, 73 insertions(+), 32 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bbe9577..cc1be56 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2179,18 +2179,30 @@ static inline void pre_schedule(struct rq *rq, struct task_struct *prev)
 }
 
 /* rq->lock is NOT held, but preemption is disabled */
-static inline void post_schedule(struct rq *rq)
+static void __balance_callback(struct rq *rq)
 {
-	if (rq->post_schedule) {
-		unsigned long flags;
+	struct callback_head *head, *next;
+	void (*func)(struct rq *rq);
+	unsigned long flags;
 
-		raw_spin_lock_irqsave(&rq->lock, flags);
-		if (rq->curr->sched_class->post_schedule)
-			rq->curr->sched_class->post_schedule(rq);
-		raw_spin_unlock_irqrestore(&rq->lock, flags);
+	raw_spin_lock_irqsave(&rq->lock, flags);
+	head = rq->balance_callback;
+	rq->balance_callback = NULL;
+	while (head) {
+		func = (void (*)(struct rq *))head->func;
+		next = head->next;
+		head->next = NULL;
+		head = next;
 
-		rq->post_schedule = 0;
+		func(rq);
 	}
+	raw_spin_unlock_irqrestore(&rq->lock, flags);
+}
+
+static inline void balance_callback(struct rq *rq)
+{
+	if (unlikely(rq->balance_callback))
+		__balance_callback(rq);
 }
 
 #else
@@ -2199,7 +2211,7 @@ static inline void pre_schedule(struct rq *rq, struct task_struct *p)
 {
 }
 
-static inline void post_schedule(struct rq *rq)
+static inline void balance_callback(struct rq *rq)
 {
 }
 
@@ -2220,7 +2232,7 @@ asmlinkage void schedule_tail(struct task_struct *prev)
 	 * FIXME: do we need to worry about rq being invalidated by the
 	 * task_switch?
 	 */
-	post_schedule(rq);
+	balance_callback(rq);
 
 #ifdef __ARCH_WANT_UNLOCKED_CTXSW
 	/* In this case, finish_task_switch does not reenable preemption */
@@ -2732,7 +2744,7 @@ need_resched:
 	} else
 		raw_spin_unlock_irq(&rq->lock);
 
-	post_schedule(rq);
+	balance_callback(rq);
 
 	sched_preempt_enable_no_resched();
 	if (need_resched())
@@ -6902,7 +6914,7 @@ void __init sched_init(void)
 		rq->sd = NULL;
 		rq->rd = NULL;
 		rq->cpu_power = SCHED_POWER_SCALE;
-		rq->post_schedule = 0;
+		rq->balance_callback = NULL;
 		rq->active_balance = 0;
 		rq->next_balance = jiffies;
 		rq->push_cpu = 0;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 8d3c5dd..aaefe1b 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -210,6 +210,18 @@ static inline int has_pushable_dl_tasks(struct rq *rq)
 
 static int push_dl_task(struct rq *rq);
 
+static DEFINE_PER_CPU(struct callback_head, dl_balance_head);
+
+static void push_dl_tasks(struct rq *);
+
+static inline void queue_push_tasks(struct rq *rq)
+{
+	if (!has_pushable_dl_tasks(rq))
+		return;
+
+	queue_balance_callback(rq, &per_cpu(dl_balance_head, rq->cpu), push_dl_tasks);
+}
+
 #else
 
 static inline
@@ -232,6 +244,9 @@ void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 {
 }
 
+static inline void queue_push_tasks(struct rq *rq)
+{
+}
 #endif /* CONFIG_SMP */
 
 static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
@@ -1005,7 +1020,7 @@ struct task_struct *pick_next_task_dl(struct rq *rq)
 #endif
 
 #ifdef CONFIG_SMP
-	rq->post_schedule = has_pushable_dl_tasks(rq);
+	queue_push_tasks(rq);
 #endif /* CONFIG_SMP */
 
 	return p;
@@ -1422,11 +1437,6 @@ static void pre_schedule_dl(struct rq *rq, struct task_struct *prev)
 		pull_dl_task(rq);
 }
 
-static void post_schedule_dl(struct rq *rq)
-{
-	push_dl_tasks(rq);
-}
-
 /*
  * Since the task is not running and a reschedule is not going to happen
  * anytime soon on its runqueue, we try pushing it away now.
@@ -1615,7 +1625,6 @@ const struct sched_class dl_sched_class = {
 	.rq_online              = rq_online_dl,
 	.rq_offline             = rq_offline_dl,
 	.pre_schedule		= pre_schedule_dl,
-	.post_schedule		= post_schedule_dl,
 	.task_woken		= task_woken_dl,
 #endif
 
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 27b8e83..2b980d0 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -315,6 +315,18 @@ static inline int has_pushable_tasks(struct rq *rq)
 	return !plist_head_empty(&rq->rt.pushable_tasks);
 }
 
+static DEFINE_PER_CPU(struct callback_head, rt_balance_head);
+
+static void push_rt_tasks(struct rq *);
+
+static inline void queue_push_tasks(struct rq *rq)
+{
+	if (!has_pushable_tasks(rq))
+		return;
+
+	queue_balance_callback(rq, &per_cpu(rt_balance_head, rq->cpu), push_rt_tasks);
+}
+
 static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
 {
 	plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
@@ -359,6 +371,9 @@ void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
 {
 }
 
+static inline void queue_push_tasks(struct rq *rq)
+{
+}
 #endif /* CONFIG_SMP */
 
 static inline int on_rt_rq(struct sched_rt_entity *rt_se)
@@ -1349,11 +1364,7 @@ static struct task_struct *pick_next_task_rt(struct rq *rq)
 		dequeue_pushable_task(rq, p);
 
 #ifdef CONFIG_SMP
-	/*
-	 * We detect this state here so that we can avoid taking the RQ
-	 * lock again later if there is no need to push
-	 */
-	rq->post_schedule = has_pushable_tasks(rq);
+	queue_push_tasks(rq);
 #endif
 
 	return p;
@@ -1731,11 +1742,6 @@ static void pre_schedule_rt(struct rq *rq, struct task_struct *prev)
 		pull_rt_task(rq);
 }
 
-static void post_schedule_rt(struct rq *rq)
-{
-	push_rt_tasks(rq);
-}
-
 /*
  * If we are not running and we are not going to reschedule soon, we should
  * try to push tasks away now
@@ -2008,7 +2014,6 @@ const struct sched_class rt_sched_class = {
 	.rq_online              = rq_online_rt,
 	.rq_offline             = rq_offline_rt,
 	.pre_schedule		= pre_schedule_rt,
-	.post_schedule		= post_schedule_rt,
 	.task_woken		= task_woken_rt,
 	.switched_from		= switched_from_rt,
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 835b6ef..675e147 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -587,9 +587,10 @@ struct rq {
 
 	unsigned long cpu_power;
 
+	struct callback_head *balance_callback;
+
 	unsigned char idle_balance;
 	/* For active balancing */
-	int post_schedule;
 	int active_balance;
 	int push_cpu;
 	struct cpu_stop_work active_balance_work;
@@ -690,6 +691,21 @@ extern int migrate_swap(struct task_struct *, struct task_struct *);
 
 #ifdef CONFIG_SMP
 
+static inline void
+queue_balance_callback(struct rq *rq,
+		       struct callback_head *head,
+		       void (*func)(struct rq *rq))
+{
+	lockdep_assert_held(&rq->lock);
+
+	if (unlikely(head->next))
+		return;
+
+	head->func = (void (*)(struct callback_head *))func;
+	head->next = rq->balance_callback;
+	rq->balance_callback = head;
+}
+
 #define rcu_dereference_check_sched_domain(p) \
 	rcu_dereference_check((p), \
 			      lockdep_is_held(&sched_domains_mutex))
@@ -1131,7 +1147,6 @@ struct sched_class {
 	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
 
 	void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
-	void (*post_schedule) (struct rq *this_rq);
 	void (*task_waking) (struct task_struct *task);
 	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
 
-- 
1.9.1



* [PATCH for v3.14.58 3/7] sched: Allow balance callbacks for check_class_changed()
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 2/7] sched: Replace post_schedule with a balance callback list Byungchul Park
@ 2016-01-05  9:16   ` Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 4/7] sched,rt: Remove return value from pull_rt_task() Byungchul Park
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:16 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

In order to remove dropping rq->lock from the
switched_{to,from}()/prio_changed() sched_class methods, run the
balance callbacks after it.

We need to remove dropping rq->lock because it's buggy; suppose we use
sched_setattr()/sched_setscheduler() to change a running task from FIFO
to OTHER.

By the time we get to switched_from_rt() the task is already enqueued
on the cfs runqueues. If switched_from_rt() does pull_rt_task() and
drops rq->lock, load-balancing can come in and move our task @p to
another rq.

The subsequent switched_to_fair() still assumes @p is on @rq and bad
things will happen.

By using balance callbacks we delay the load-balancing operations
{rt,dl}x{push,pull} until we've done all the important work and the
task is fully set up.

Furthermore, the balance callbacks do not know about @p, therefore
they cannot get confused like this.

Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Link: http://lkml.kernel.org/r/20150611124742.615343911@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/core.c
---
 kernel/sched/core.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cc1be56..459cc86 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -937,6 +937,13 @@ inline int task_curr(const struct task_struct *p)
 	return cpu_curr(task_cpu(p)) == p;
 }
 
+/*
+ * switched_from, switched_to and prio_changed must _NOT_ drop rq->lock,
+ * use the balance_callback list if you want balancing.
+ *
+ * this means any call to check_class_changed() must be followed by a call to
+ * balance_callback().
+ */
 static inline void check_class_changed(struct rq *rq, struct task_struct *p,
 				       const struct sched_class *prev_class,
 				       int oldprio)
@@ -1423,8 +1430,12 @@ ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 
 	p->state = TASK_RUNNING;
 #ifdef CONFIG_SMP
-	if (p->sched_class->task_woken)
+	if (p->sched_class->task_woken) {
+		/*
+		 * XXX can drop rq->lock; most likely ok.
+		 */
 		p->sched_class->task_woken(rq, p);
+	}
 
 	if (rq->idle_stamp) {
 		u64 delta = rq_clock(rq) - rq->idle_stamp;
@@ -3006,7 +3017,11 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
 
 	check_class_changed(rq, p, prev_class, oldprio);
 out_unlock:
+	preempt_disable(); /* avoid rq from going away on us */
 	__task_rq_unlock(rq);
+
+	balance_callback(rq);
+	preempt_enable();
 }
 #endif
 
@@ -3512,10 +3527,17 @@ change:
 		enqueue_task(rq, p, 0);
 
 	check_class_changed(rq, p, prev_class, oldprio);
+	preempt_disable(); /* avoid rq from going away on us */
 	task_rq_unlock(rq, p, &flags);
 
 	rt_mutex_adjust_pi(p);
 
+	/*
+	 * Run balance callbacks after we've adjusted the PI chain.
+	 */
+	balance_callback(rq);
+	preempt_enable();
+
 	return 0;
 }
 
-- 
1.9.1



* [PATCH for v3.14.58 4/7] sched,rt: Remove return value from pull_rt_task()
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 2/7] sched: Replace post_schedule with a balance callback list Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 3/7] sched: Allow balance callbacks for check_class_changed() Byungchul Park
@ 2016-01-05  9:16   ` Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 5/7] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks Byungchul Park
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:16 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

In order to be able to use pull_rt_task() from a callback, we need to
do away with the return value.

Since the return value indicates if we should reschedule, do this
inside the function. Since not all callers currently do this, this can
increase the number of reschedules due to rt balancing.

Too many reschedules is not a correctness issue; too few are.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.679002000@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/rt.c
---
 kernel/sched/rt.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 2b980d0..d235fd7 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1652,14 +1652,15 @@ static void push_rt_tasks(struct rq *rq)
 		;
 }
 
-static int pull_rt_task(struct rq *this_rq)
+static void pull_rt_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
+	bool resched = false;
 	struct task_struct *p;
 	struct rq *src_rq;
 
 	if (likely(!rt_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
 	 * Match the barrier from rt_set_overloaded; this guarantees that if we
@@ -1716,7 +1717,7 @@ static int pull_rt_task(struct rq *this_rq)
 			if (p->prio < src_rq->curr->prio)
 				goto skip;
 
-			ret = 1;
+			resched = true;
 
 			deactivate_task(src_rq, p, 0);
 			set_task_cpu(p, this_cpu);
@@ -1732,7 +1733,8 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_task(this_rq->curr);
 }
 
 static void pre_schedule_rt(struct rq *rq, struct task_struct *prev)
@@ -1835,8 +1837,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!p->on_rq || rq->rt.rt_nr_running)
 		return;
 
-	if (pull_rt_task(rq))
-		resched_task(rq->curr);
+	pull_rt_task(rq);
 }
 
 void init_sched_rt_class(void)
-- 
1.9.1



* [PATCH for v3.14.58 5/7] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
                     ` (2 preceding siblings ...)
  2016-01-05  9:16   ` [PATCH for v3.14.58 4/7] sched,rt: Remove return value from pull_rt_task() Byungchul Park
@ 2016-01-05  9:16   ` Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 6/7] sched,dl: Remove return value from pull_dl_task() Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 7/7] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks Byungchul Park
  5 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:16 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

Remove the direct {push,pull} balancing operations from
switched_{from,to}_rt() / prio_changed_rt() and use the balance
callback queue.

Again, err on the side of too many reschedules, since too few is a
hard bug while too many is just annoying.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.766832367@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/rt.c
---
 kernel/sched/rt.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index d235fd7..0fb72ae 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -315,16 +315,23 @@ static inline int has_pushable_tasks(struct rq *rq)
 	return !plist_head_empty(&rq->rt.pushable_tasks);
 }
 
-static DEFINE_PER_CPU(struct callback_head, rt_balance_head);
+static DEFINE_PER_CPU(struct callback_head, rt_push_head);
+static DEFINE_PER_CPU(struct callback_head, rt_pull_head);
 
 static void push_rt_tasks(struct rq *);
+static void pull_rt_task(struct rq *);
 
 static inline void queue_push_tasks(struct rq *rq)
 {
 	if (!has_pushable_tasks(rq))
 		return;
 
-	queue_balance_callback(rq, &per_cpu(rt_balance_head, rq->cpu), push_rt_tasks);
+	queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu), push_rt_tasks);
+}
+
+static inline void queue_pull_task(struct rq *rq)
+{
+	queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
 }
 
 static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
@@ -1837,7 +1844,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!p->on_rq || rq->rt.rt_nr_running)
 		return;
 
-	pull_rt_task(rq);
+	queue_pull_task(rq);
 }
 
 void init_sched_rt_class(void)
@@ -1858,8 +1865,6 @@ void init_sched_rt_class(void)
  */
 static void switched_to_rt(struct rq *rq, struct task_struct *p)
 {
-	int check_resched = 1;
-
 	/*
 	 * If we are already running, then there's nothing
 	 * that needs to be done. But if we are not running
@@ -1869,13 +1874,12 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 	 */
 	if (p->on_rq && rq->curr != p) {
 #ifdef CONFIG_SMP
-		if (rq->rt.overloaded && push_rt_task(rq) &&
-		    /* Don't resched if we changed runqueues */
-		    rq != task_rq(p))
-			check_resched = 0;
-#endif /* CONFIG_SMP */
-		if (check_resched && p->prio < rq->curr->prio)
+		if (rq->rt.overloaded)
+			queue_push_tasks(rq);
+#else
+		if (p->prio < rq->curr->prio)
 			resched_task(rq->curr);
+#endif /* CONFIG_SMP */
 	}
 }
 
@@ -1896,14 +1900,13 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 		 * may need to pull tasks to this runqueue.
 		 */
 		if (oldprio < p->prio)
-			pull_rt_task(rq);
+			queue_pull_task(rq);
+
 		/*
 		 * If there's a higher priority task waiting to run
-		 * then reschedule. Note, the above pull_rt_task
-		 * can release the rq lock and p could migrate.
-		 * Only reschedule if p is still on the same runqueue.
+		 * then reschedule.
 		 */
-		if (p->prio > rq->rt.highest_prio.curr && rq->curr == p)
+		if (p->prio > rq->rt.highest_prio.curr)
 			resched_task(p);
 #else
 		/* For UP simply resched on drop of prio */
-- 
1.9.1



* [PATCH for v3.14.58 6/7] sched,dl: Remove return value from pull_dl_task()
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
                     ` (3 preceding siblings ...)
  2016-01-05  9:16   ` [PATCH for v3.14.58 5/7] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks Byungchul Park
@ 2016-01-05  9:16   ` Byungchul Park
  2016-01-05  9:16   ` [PATCH for v3.14.58 7/7] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks Byungchul Park
  5 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:16 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

In order to be able to use pull_dl_task() from a callback, we need to
do away with the return value.

Since the return value indicates if we should reschedule, do this
inside the function. Since not all callers currently do this, this can
increase the number of reschedules due to dl balancing.

Too many reschedules is not a correctness issue; too few are.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.859398977@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/deadline.c
---
 kernel/sched/deadline.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index aaefe1b..ec1f21d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1351,15 +1351,16 @@ static void push_dl_tasks(struct rq *rq)
 		;
 }
 
-static int pull_dl_task(struct rq *this_rq)
+static void pull_dl_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
 	struct task_struct *p;
+	bool resched = false;
 	struct rq *src_rq;
 	u64 dmin = LONG_MAX;
 
 	if (likely(!dl_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
 	 * Match the barrier from dl_set_overloaded; this guarantees that if we
@@ -1414,7 +1415,7 @@ static int pull_dl_task(struct rq *this_rq)
 					   src_rq->curr->dl.deadline))
 				goto skip;
 
-			ret = 1;
+			resched = true;
 
 			deactivate_task(src_rq, p, 0);
 			set_task_cpu(p, this_cpu);
@@ -1427,7 +1428,8 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_task(this_rq->curr);
 }
 
 static void pre_schedule_dl(struct rq *rq, struct task_struct *prev)
-- 
1.9.1



* [PATCH for v3.14.58 7/7] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
                     ` (4 preceding siblings ...)
  2016-01-05  9:16   ` [PATCH for v3.14.58 6/7] sched,dl: Remove return value from pull_dl_task() Byungchul Park
@ 2016-01-05  9:16   ` Byungchul Park
  5 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:16 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

Remove the direct {push,pull} balancing operations from
switched_{from,to}_dl() / prio_changed_dl() and use the balance
callback queue.

Again, err on the side of too many reschedules, since too few is a
hard bug while too many is just annoying.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.968262663@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/deadline.c
---
 kernel/sched/deadline.c | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index ec1f21d..6ab59bb 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -210,16 +210,23 @@ static inline int has_pushable_dl_tasks(struct rq *rq)
 
 static int push_dl_task(struct rq *rq);
 
-static DEFINE_PER_CPU(struct callback_head, dl_balance_head);
+static DEFINE_PER_CPU(struct callback_head, dl_push_head);
+static DEFINE_PER_CPU(struct callback_head, dl_pull_head);
 
 static void push_dl_tasks(struct rq *);
+static void pull_dl_task(struct rq *);
 
 static inline void queue_push_tasks(struct rq *rq)
 {
 	if (!has_pushable_dl_tasks(rq))
 		return;
 
-	queue_balance_callback(rq, &per_cpu(dl_balance_head, rq->cpu), push_dl_tasks);
+	queue_balance_callback(rq, &per_cpu(dl_push_head, rq->cpu), push_dl_tasks);
+}
+
+static inline void queue_pull_task(struct rq *rq)
+{
+	queue_balance_callback(rq, &per_cpu(dl_pull_head, rq->cpu), pull_dl_task);
 }
 
 #else
@@ -247,6 +254,10 @@ void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 static inline void queue_push_tasks(struct rq *rq)
 {
 }
+
+static inline void queue_pull_task(struct rq *rq)
+{
+}
 #endif /* CONFIG_SMP */
 
 static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
@@ -1541,7 +1552,7 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
 	 * from an overloaded cpu, if any.
 	 */
 	if (!rq->dl.dl_nr_running)
-		pull_dl_task(rq);
+		queue_pull_task(rq);
 #endif
 }
 
@@ -1551,8 +1562,6 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
  */
 static void switched_to_dl(struct rq *rq, struct task_struct *p)
 {
-	int check_resched = 1;
-
 	/*
 	 * If p is throttled, don't consider the possibility
 	 * of preempting rq->curr, the check will be done right
@@ -1563,12 +1572,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 
 	if (p->on_rq || rq->curr != p) {
 #ifdef CONFIG_SMP
-		if (rq->dl.overloaded && push_dl_task(rq) && rq != task_rq(p))
-			/* Only reschedule if pushing failed */
-			check_resched = 0;
-#endif /* CONFIG_SMP */
-		if (check_resched && task_has_dl_policy(rq->curr))
+		if (rq->dl.overloaded)
+			queue_push_tasks(rq);
+#else
+		if (task_has_dl_policy(rq->curr))
 			check_preempt_curr_dl(rq, p, 0);
+#endif /* CONFIG_SMP */
 	}
 }
 
@@ -1588,15 +1597,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 		 * or lowering its prio, so...
 		 */
 		if (!rq->dl.overloaded)
-			pull_dl_task(rq);
+			queue_pull_task(rq);
 
 		/*
 		 * If we now have a earlier deadline task than p,
 		 * then reschedule, provided p is still on this
 		 * runqueue.
 		 */
-		if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline) &&
-		    rq->curr == p)
+		if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline))
 			resched_task(p);
 #else
 		/*
-- 
1.9.1



* [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list
  2016-01-05  8:52 [STABLE] kernel oops which can be fixed by peterz's patches Byungchul Park
  2016-01-05  9:14 ` Peter Zijlstra
  2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
@ 2016-01-05  9:24 ` Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 2/6] sched: Allow balance callbacks for check_class_changed() Byungchul Park
                     ` (4 more replies)
  2016-03-01  8:15 ` [STABLE] kernel oops which can be fixed by peterz's patches Greg KH
  2016-06-13 18:31 ` Ben Hutchings
  4 siblings, 5 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:24 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

Generalize the post_schedule() stuff into a balance callback list.
This allows us to more easily use it outside of schedule() and across
sched_class boundaries.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.424032725@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/core.c
---
 kernel/sched/core.c     | 36 ++++++++++++++++++++++++------------
 kernel/sched/deadline.c | 21 +++++++++++----------
 kernel/sched/rt.c       | 25 +++++++++++--------------
 kernel/sched/sched.h    | 19 +++++++++++++++++--
 4 files changed, 63 insertions(+), 38 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a38f987..3dff0ef 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2240,23 +2240,35 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
 #ifdef CONFIG_SMP
 
 /* rq->lock is NOT held, but preemption is disabled */
-static inline void post_schedule(struct rq *rq)
+static void __balance_callback(struct rq *rq)
 {
-	if (rq->post_schedule) {
-		unsigned long flags;
+	struct callback_head *head, *next;
+	void (*func)(struct rq *rq);
+	unsigned long flags;
 
-		raw_spin_lock_irqsave(&rq->lock, flags);
-		if (rq->curr->sched_class->post_schedule)
-			rq->curr->sched_class->post_schedule(rq);
-		raw_spin_unlock_irqrestore(&rq->lock, flags);
+	raw_spin_lock_irqsave(&rq->lock, flags);
+	head = rq->balance_callback;
+	rq->balance_callback = NULL;
+	while (head) {
+		func = (void (*)(struct rq *))head->func;
+		next = head->next;
+		head->next = NULL;
+		head = next;
 
-		rq->post_schedule = 0;
+		func(rq);
 	}
+	raw_spin_unlock_irqrestore(&rq->lock, flags);
+}
+
+static inline void balance_callback(struct rq *rq)
+{
+	if (unlikely(rq->balance_callback))
+		__balance_callback(rq);
 }
 
 #else
 
-static inline void post_schedule(struct rq *rq)
+static inline void balance_callback(struct rq *rq)
 {
 }
 
@@ -2277,7 +2289,7 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 	 * FIXME: do we need to worry about rq being invalidated by the
 	 * task_switch?
 	 */
-	post_schedule(rq);
+	balance_callback(rq);
 
 #ifdef __ARCH_WANT_UNLOCKED_CTXSW
 	/* In this case, finish_task_switch does not reenable preemption */
@@ -2804,7 +2816,7 @@ need_resched:
 	} else
 		raw_spin_unlock_irq(&rq->lock);
 
-	post_schedule(rq);
+	balance_callback(rq);
 
 	sched_preempt_enable_no_resched();
 	if (need_resched())
@@ -6973,7 +6985,7 @@ void __init sched_init(void)
 		rq->sd = NULL;
 		rq->rd = NULL;
 		rq->cpu_capacity = SCHED_CAPACITY_SCALE;
-		rq->post_schedule = 0;
+		rq->balance_callback = NULL;
 		rq->active_balance = 0;
 		rq->next_balance = jiffies;
 		rq->push_cpu = 0;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index fc4f98b1..3bdf558 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -213,9 +213,16 @@ static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
 	return dl_task(prev);
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static DEFINE_PER_CPU(struct callback_head, dl_balance_head);
+
+static void push_dl_tasks(struct rq *);
+
+static inline void queue_push_tasks(struct rq *rq)
 {
-	rq->post_schedule = has_pushable_dl_tasks(rq);
+	if (!has_pushable_dl_tasks(rq))
+		return;
+
+	queue_balance_callback(rq, &per_cpu(dl_balance_head, rq->cpu), push_dl_tasks);
 }
 
 #else
@@ -250,7 +257,7 @@ static inline int pull_dl_task(struct rq *rq)
 	return 0;
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static inline void queue_push_tasks(struct rq *rq)
 {
 }
 #endif /* CONFIG_SMP */
@@ -1060,7 +1067,7 @@ struct task_struct *pick_next_task_dl(struct rq *rq, struct task_struct *prev)
 		start_hrtick_dl(rq, p);
 #endif
 
-	set_post_schedule(rq);
+	queue_push_tasks(rq);
 
 	return p;
 }
@@ -1469,11 +1476,6 @@ skip:
 	return ret;
 }
 
-static void post_schedule_dl(struct rq *rq)
-{
-	push_dl_tasks(rq);
-}
-
 /*
  * Since the task is not running and a reschedule is not going to happen
  * anytime soon on its runqueue, we try pushing it away now.
@@ -1661,7 +1663,6 @@ const struct sched_class dl_sched_class = {
 	.set_cpus_allowed       = set_cpus_allowed_dl,
 	.rq_online              = rq_online_dl,
 	.rq_offline             = rq_offline_dl,
-	.post_schedule		= post_schedule_dl,
 	.task_woken		= task_woken_dl,
 #endif
 
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index a490831..5a91237 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -338,13 +338,16 @@ static inline int has_pushable_tasks(struct rq *rq)
 	return !plist_head_empty(&rq->rt.pushable_tasks);
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static DEFINE_PER_CPU(struct callback_head, rt_balance_head);
+
+static void push_rt_tasks(struct rq *);
+
+static inline void queue_push_tasks(struct rq *rq)
 {
-	/*
-	 * We detect this state here so that we can avoid taking the RQ
-	 * lock again later if there is no need to push
-	 */
-	rq->post_schedule = has_pushable_tasks(rq);
+	if (!has_pushable_tasks(rq))
+		return;
+
+	queue_balance_callback(rq, &per_cpu(rt_balance_head, rq->cpu), push_rt_tasks);
 }
 
 static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
@@ -401,7 +404,7 @@ static inline int pull_rt_task(struct rq *this_rq)
 	return 0;
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static inline void queue_push_tasks(struct rq *rq)
 {
 }
 #endif /* CONFIG_SMP */
@@ -1467,7 +1470,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev)
 	if (p)
 		dequeue_pushable_task(rq, p);
 
-	set_post_schedule(rq);
+	queue_push_tasks(rq);
 
 	return p;
 }
@@ -1837,11 +1840,6 @@ skip:
 	return ret;
 }
 
-static void post_schedule_rt(struct rq *rq)
-{
-	push_rt_tasks(rq);
-}
-
 /*
  * If we are not running and we are not going to reschedule soon, we should
  * try to push tasks away now
@@ -2113,7 +2111,6 @@ const struct sched_class rt_sched_class = {
 	.set_cpus_allowed       = set_cpus_allowed_rt,
 	.rq_online              = rq_online_rt,
 	.rq_offline             = rq_offline_rt,
-	.post_schedule		= post_schedule_rt,
 	.task_woken		= task_woken_rt,
 	.switched_from		= switched_from_rt,
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 31cc02e..045e008 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -569,9 +569,10 @@ struct rq {
 
 	unsigned long cpu_capacity;
 
+	struct callback_head *balance_callback;
+
 	unsigned char idle_balance;
 	/* For active balancing */
-	int post_schedule;
 	int active_balance;
 	int push_cpu;
 	struct cpu_stop_work active_balance_work;
@@ -670,6 +671,21 @@ extern int migrate_swap(struct task_struct *, struct task_struct *);
 
 #ifdef CONFIG_SMP
 
+static inline void
+queue_balance_callback(struct rq *rq,
+		       struct callback_head *head,
+		       void (*func)(struct rq *rq))
+{
+	lockdep_assert_held(&rq->lock);
+
+	if (unlikely(head->next))
+		return;
+
+	head->func = (void (*)(struct callback_head *))func;
+	head->next = rq->balance_callback;
+	rq->balance_callback = head;
+}
+
 extern void sched_ttwu_pending(void);
 
 #define rcu_dereference_check_sched_domain(p) \
@@ -1126,7 +1142,6 @@ struct sched_class {
 	int  (*select_task_rq)(struct task_struct *p, int task_cpu, int sd_flag, int flags);
 	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
 
-	void (*post_schedule) (struct rq *this_rq);
 	void (*task_waking) (struct task_struct *task);
 	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
 
-- 
1.9.1



* [PATCH for v3.18.25 2/6] sched: Allow balance callbacks for check_class_changed()
  2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
@ 2016-01-05  9:24   ` Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 3/6] sched,rt: Remove return value from pull_rt_task() Byungchul Park
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:24 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

In order to remove dropping rq->lock from the
switched_{to,from}()/prio_changed() sched_class methods, run the
balance callbacks after it.

We need to remove dropping rq->lock because it's buggy; suppose we use
sched_setattr()/sched_setscheduler() to change a running task from FIFO
to OTHER.

By the time we get to switched_from_rt() the task is already enqueued
on the cfs runqueues. If switched_from_rt() does pull_rt_task() and
drops rq->lock, load-balancing can come in and move our task @p to
another rq.

The subsequent switched_to_fair() still assumes @p is on @rq and bad
things will happen.

By using balance callbacks we delay the load-balancing operations
{rt,dl}x{push,pull} until we've done all the important work and the
task is fully set up.

Furthermore, the balance callbacks do not know about @p, therefore
they cannot get confused like this.

Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Link: http://lkml.kernel.org/r/20150611124742.615343911@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/core.c
---
 kernel/sched/core.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3dff0ef..ca6dad6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -999,6 +999,13 @@ inline int task_curr(const struct task_struct *p)
 	return cpu_curr(task_cpu(p)) == p;
 }
 
+/*
+ * switched_from, switched_to and prio_changed must _NOT_ drop rq->lock,
+ * use the balance_callback list if you want balancing.
+ *
+ * this means any call to check_class_changed() must be followed by a call to
+ * balance_callback().
+ */
 static inline void check_class_changed(struct rq *rq, struct task_struct *p,
 				       const struct sched_class *prev_class,
 				       int oldprio)
@@ -1485,8 +1492,12 @@ ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 
 	p->state = TASK_RUNNING;
 #ifdef CONFIG_SMP
-	if (p->sched_class->task_woken)
+	if (p->sched_class->task_woken) {
+		/*
+		 * XXX can drop rq->lock; most likely ok.
+		 */
 		p->sched_class->task_woken(rq, p);
+	}
 
 	if (rq->idle_stamp) {
 		u64 delta = rq_clock(rq) - rq->idle_stamp;
@@ -3032,7 +3043,11 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
 
 	check_class_changed(rq, p, prev_class, oldprio);
 out_unlock:
+	preempt_disable(); /* avoid rq from going away on us */
 	__task_rq_unlock(rq);
+
+	balance_callback(rq);
+	preempt_enable();
 }
 #endif
 
@@ -3559,10 +3574,17 @@ change:
 	}
 
 	check_class_changed(rq, p, prev_class, oldprio);
+	preempt_disable(); /* avoid rq from going away on us */
 	task_rq_unlock(rq, p, &flags);
 
 	rt_mutex_adjust_pi(p);
 
+	/*
+	 * Run balance callbacks after we've adjusted the PI chain.
+	 */
+	balance_callback(rq);
+	preempt_enable();
+
 	return 0;
 }
 
-- 
1.9.1



* [PATCH for v3.18.25 3/6] sched,rt: Remove return value from pull_rt_task()
  2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 2/6] sched: Allow balance callbacks for check_class_changed() Byungchul Park
@ 2016-01-05  9:24   ` Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 4/6] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks Byungchul Park
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:24 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

In order to be able to use pull_rt_task() from a callback, we need to
do away with the return value.

Since the return value indicates if we should reschedule, do this
inside the function. Since not all callers currently do this, this can
increase the number of reschedules due to rt balancing.

Too many reschedules is not a correctness issue; too few are.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.679002000@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/rt.c
---
 kernel/sched/rt.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 5a91237..ce807aa 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -244,7 +244,7 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 
 #ifdef CONFIG_SMP
 
-static int pull_rt_task(struct rq *this_rq);
+static void pull_rt_task(struct rq *this_rq);
 
 static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 {
@@ -399,9 +399,8 @@ static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 	return false;
 }
 
-static inline int pull_rt_task(struct rq *this_rq)
+static inline void pull_rt_task(struct rq *this_rq)
 {
-	return 0;
 }
 
 static inline void queue_push_tasks(struct rq *rq)
@@ -1757,14 +1756,15 @@ static void push_rt_tasks(struct rq *rq)
 		;
 }
 
-static int pull_rt_task(struct rq *this_rq)
+static void pull_rt_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
+	bool resched = false;
 	struct task_struct *p;
 	struct rq *src_rq;
 
 	if (likely(!rt_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
 	 * Match the barrier from rt_set_overloaded; this guarantees that if we
@@ -1821,7 +1821,7 @@ static int pull_rt_task(struct rq *this_rq)
 			if (p->prio < src_rq->curr->prio)
 				goto skip;
 
-			ret = 1;
+			resched = true;
 
 			deactivate_task(src_rq, p, 0);
 			set_task_cpu(p, this_cpu);
@@ -1837,7 +1837,8 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_curr(this_rq);
 }
 
 /*
@@ -1933,8 +1934,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!p->on_rq || rq->rt.rt_nr_running)
 		return;
 
-	if (pull_rt_task(rq))
-		resched_curr(rq);
+	pull_rt_task(rq);
 }
 
 void __init init_sched_rt_class(void)
-- 
1.9.1



* [PATCH for v3.18.25 4/6] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
  2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 2/6] sched: Allow balance callbacks for check_class_changed() Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 3/6] sched,rt: Remove return value from pull_rt_task() Byungchul Park
@ 2016-01-05  9:24   ` Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 5/6] sched,dl: Remove return value from pull_dl_task() Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 6/6] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks Byungchul Park
  4 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:24 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

Remove the direct {push,pull} balancing operations from
switched_{from,to}_rt() / prio_changed_rt() and use the balance
callback queue.

Again, err on the side of too many reschedules, since too few is a
hard bug while too many is just annoying.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.766832367@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/rt.c
---
 kernel/sched/rt.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ce807aa..fe0399f 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -338,16 +338,23 @@ static inline int has_pushable_tasks(struct rq *rq)
 	return !plist_head_empty(&rq->rt.pushable_tasks);
 }
 
-static DEFINE_PER_CPU(struct callback_head, rt_balance_head);
+static DEFINE_PER_CPU(struct callback_head, rt_push_head);
+static DEFINE_PER_CPU(struct callback_head, rt_pull_head);
 
 static void push_rt_tasks(struct rq *);
+static void pull_rt_task(struct rq *);
 
 static inline void queue_push_tasks(struct rq *rq)
 {
 	if (!has_pushable_tasks(rq))
 		return;
 
-	queue_balance_callback(rq, &per_cpu(rt_balance_head, rq->cpu), push_rt_tasks);
+	queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu), push_rt_tasks);
+}
+
+static inline void queue_pull_task(struct rq *rq)
+{
+	queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
 }
 
 static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
@@ -1934,7 +1941,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!p->on_rq || rq->rt.rt_nr_running)
 		return;
 
-	pull_rt_task(rq);
+	queue_pull_task(rq);
 }
 
 void __init init_sched_rt_class(void)
@@ -1955,8 +1962,6 @@ void __init init_sched_rt_class(void)
  */
 static void switched_to_rt(struct rq *rq, struct task_struct *p)
 {
-	int check_resched = 1;
-
 	/*
 	 * If we are already running, then there's nothing
 	 * that needs to be done. But if we are not running
@@ -1966,13 +1971,12 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 	 */
 	if (p->on_rq && rq->curr != p) {
 #ifdef CONFIG_SMP
-		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded &&
-		    /* Don't resched if we changed runqueues */
-		    push_rt_task(rq) && rq != task_rq(p))
-			check_resched = 0;
-#endif /* CONFIG_SMP */
-		if (check_resched && p->prio < rq->curr->prio)
+		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
+			queue_push_tasks(rq);
+#else
+		if (p->prio < rq->curr->prio)
 			resched_curr(rq);
+#endif /* CONFIG_SMP */
 	}
 }
 
@@ -1993,14 +1997,13 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 		 * may need to pull tasks to this runqueue.
 		 */
 		if (oldprio < p->prio)
-			pull_rt_task(rq);
+			queue_pull_task(rq);
+
 		/*
 		 * If there's a higher priority task waiting to run
-		 * then reschedule. Note, the above pull_rt_task
-		 * can release the rq lock and p could migrate.
-		 * Only reschedule if p is still on the same runqueue.
+		 * then reschedule.
 		 */
-		if (p->prio > rq->rt.highest_prio.curr && rq->curr == p)
+		if (p->prio > rq->rt.highest_prio.curr)
 			resched_curr(rq);
 #else
 		/* For UP simply resched on drop of prio */
-- 
1.9.1



* [PATCH for v3.18.25 5/6] sched,dl: Remove return value from pull_dl_task()
  2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
                     ` (2 preceding siblings ...)
  2016-01-05  9:24   ` [PATCH for v3.18.25 4/6] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks Byungchul Park
@ 2016-01-05  9:24   ` Byungchul Park
  2016-01-05  9:24   ` [PATCH for v3.18.25 6/6] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks Byungchul Park
  4 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:24 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

In order to be able to use pull_dl_task() from a callback, we need to
do away with the return value.

Since the return value indicates whether we should reschedule, do this
inside the function. Since not all callers currently do this, this can
increase the number of reschedules due to dl balancing.

Too many reschedules is not a correctness issue; too few are.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.859398977@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/deadline.c
---
 kernel/sched/deadline.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3bdf558..822b94f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -252,9 +252,8 @@ static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
 	return false;
 }
 
-static inline int pull_dl_task(struct rq *rq)
+static inline void pull_dl_task(struct rq *rq)
 {
-	return 0;
 }
 
 static inline void queue_push_tasks(struct rq *rq)
@@ -974,7 +973,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
 	resched_curr(rq);
 }
 
-static int pull_dl_task(struct rq *this_rq);
+static void pull_dl_task(struct rq *this_rq);
 
 #endif /* CONFIG_SMP */
 
@@ -1397,15 +1396,16 @@ static void push_dl_tasks(struct rq *rq)
 		;
 }
 
-static int pull_dl_task(struct rq *this_rq)
+static void pull_dl_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
 	struct task_struct *p;
+	bool resched = false;
 	struct rq *src_rq;
 	u64 dmin = LONG_MAX;
 
 	if (likely(!dl_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
 	 * Match the barrier from dl_set_overloaded; this guarantees that if we
@@ -1460,7 +1460,7 @@ static int pull_dl_task(struct rq *this_rq)
 					   src_rq->curr->dl.deadline))
 				goto skip;
 
-			ret = 1;
+			resched = true;
 
 			deactivate_task(src_rq, p, 0);
 			set_task_cpu(p, this_cpu);
@@ -1473,7 +1473,8 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_curr(this_rq);
 }
 
 /*
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH for v3.18.25 6/6] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
  2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
                     ` (3 preceding siblings ...)
  2016-01-05  9:24   ` [PATCH for v3.18.25 5/6] sched,dl: Remove return value from pull_dl_task() Byungchul Park
@ 2016-01-05  9:24   ` Byungchul Park
  4 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-05  9:24 UTC (permalink / raw)
  To: stable
  Cc: peterz, linux-kernel, ktkhai, rostedt, juri.lelli, pang.xunlei,
	oleg, wanpeng.li, umgwanakikbuti, Thomas Gleixner

From: Peter Zijlstra <peterz@infradead.org>

Remove the direct {push,pull} balancing operations from
switched_{from,to}_dl() / prio_changed_dl() and use the balance
callback queue.

Again, err on the side of too many reschedules, since too few is a
hard bug while too many is just annoying.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: ktkhai@parallels.com
Cc: rostedt@goodmis.org
Cc: juri.lelli@gmail.com
Cc: pang.xunlei@linaro.org
Cc: oleg@redhat.com
Cc: wanpeng.li@linux.intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/20150611124742.968262663@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>

Conflicts:
	kernel/sched/deadline.c
---
 kernel/sched/deadline.c | 42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1242e5b..6762024 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -213,16 +213,23 @@ static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
 	return dl_task(prev);
 }
 
-static DEFINE_PER_CPU(struct callback_head, dl_balance_head);
+static DEFINE_PER_CPU(struct callback_head, dl_push_head);
+static DEFINE_PER_CPU(struct callback_head, dl_pull_head);
 
 static void push_dl_tasks(struct rq *);
+static void pull_dl_task(struct rq *);
 
 static inline void queue_push_tasks(struct rq *rq)
 {
 	if (!has_pushable_dl_tasks(rq))
 		return;
 
-	queue_balance_callback(rq, &per_cpu(dl_balance_head, rq->cpu), push_dl_tasks);
+	queue_balance_callback(rq, &per_cpu(dl_push_head, rq->cpu), push_dl_tasks);
+}
+
+static inline void queue_pull_task(struct rq *rq)
+{
+	queue_balance_callback(rq, &per_cpu(dl_pull_head, rq->cpu), pull_dl_task);
 }
 
 #else
@@ -259,6 +266,10 @@ static inline void pull_dl_task(struct rq *rq)
 static inline void queue_push_tasks(struct rq *rq)
 {
 }
+
+static inline void queue_pull_task(struct rq *rq)
+{
+}
 #endif /* CONFIG_SMP */
 
 static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
@@ -975,8 +986,6 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
 	resched_curr(rq);
 }
 
-static void pull_dl_task(struct rq *this_rq);
-
 #endif /* CONFIG_SMP */
 
 /*
@@ -1586,7 +1595,7 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
 	 * from an overloaded cpu, if any.
 	 */
 	if (!rq->dl.dl_nr_running)
-		pull_dl_task(rq);
+		queue_pull_task(rq);
 #endif
 }
 
@@ -1596,8 +1605,6 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
  */
 static void switched_to_dl(struct rq *rq, struct task_struct *p)
 {
-	int check_resched = 1;
-
 	/*
 	 * If p is throttled, don't consider the possibility
 	 * of preempting rq->curr, the check will be done right
@@ -1608,16 +1615,14 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 
 	if (task_on_rq_queued(p) && rq->curr != p) {
 #ifdef CONFIG_SMP
-		if (rq->dl.overloaded && push_dl_task(rq) && rq != task_rq(p))
-			/* Only reschedule if pushing failed */
-			check_resched = 0;
+		if (rq->dl.overloaded)
+			queue_push_tasks(rq);
+#else
+		if (dl_task(rq->curr))
+			check_preempt_curr_dl(rq, p, 0);
+		else
+			resched_curr(rq);
 #endif /* CONFIG_SMP */
-		if (check_resched) {
-			if (dl_task(rq->curr))
-				check_preempt_curr_dl(rq, p, 0);
-			else
-				resched_curr(rq);
-		}
 	}
 }
 
@@ -1637,15 +1642,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 		 * or lowering its prio, so...
 		 */
 		if (!rq->dl.overloaded)
-			pull_dl_task(rq);
+			queue_pull_task(rq);
 
 		/*
 		 * If we now have a earlier deadline task than p,
 		 * then reschedule, provided p is still on this
 		 * runqueue.
 		 */
-		if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline) &&
-		    rq->curr == p)
+		if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline))
 			resched_curr(rq);
 #else
 		/*
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-01-05  9:14 ` Peter Zijlstra
@ 2016-01-12  8:47   ` Byungchul Park
  2016-01-12 10:21   ` Willy Tarreau
  2016-01-25  7:25   ` Byungchul Park
  2 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-01-12  8:47 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: stable, linux-kernel

On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> On Tue, Jan 05, 2016 at 05:52:11PM +0900, Byungchul Park wrote:
> > 
> > Upstream commits to be applied
> > ==============================
> > 
> > e3fca9e: sched: Replace post_schedule with a balance callback list
> > 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> > 8046d68: sched,rt: Remove return value from pull_rt_task()
> > fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
> > 0ea60c2: sched,dl: Remove return value from pull_dl_task()
> > 9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
> > 
> > The reason why these should be applied
> > ======================================
> > 
> > Our products developed using 3.16 kernel, faced a kernel oops which can
> > be fixed with above upstreamed patches. The oops is caused by "Unable
> > to handle kernel NULL pointer dereference at virtual address 000000xx"
> > in the call path,
> > 
> > __sched_setscheduler()
> > 	check_class_changed()
> > 		switched_to_fair()
> > 			check_preempt_curr()
> > 				check_preempt_wakeup()
> > 					find_matching_se()
> > 						is_same_group()
> > 
> > by "if (se->cfs_rq == pse->cfs_rq) // se, pse == NULL" condition.
> 
> So the reason I didn't mark them for stable is that they were non
> trivial, however they've been in for a while now and nothing broke, so I
> suppose backporting them isn't a problem.

Do you think the backporting could have any potential problems? I
checked that it works well on my machine. Do you think it needs more
tests to verify its stability?
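
For reference, here is a rough sketch of the kind of stress that
exercises the rt -> fair switch in this call path (illustrative only --
I am not claiming this exact program, it just shows the shape of the
race; it needs root for SCHED_FIFO and an SMP machine):

/* toggle.c -- build with: gcc -O2 toggle.c -o toggle -lpthread */
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

/* Flip the calling thread between the rt and fair classes. */
static void *toggler(void *arg)
{
	struct sched_param fifo  = { .sched_priority = 1 };
	struct sched_param other = { .sched_priority = 0 };

	for (;;) {
		sched_setscheduler(0, SCHED_FIFO, &fifo);
		/* rt -> fair: the transition that hits switched_to_fair() */
		sched_setscheduler(0, SCHED_OTHER, &other);
	}
	return NULL;
}

/* Keep every runqueue busy so the load balancer migrates tasks. */
static void *burner(void *arg)
{
	for (;;)
		;
	return NULL;
}

int main(void)
{
	long i, ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t t;

	for (i = 0; i < ncpus; i++) {
		pthread_create(&t, NULL, toggler, NULL);
		pthread_create(&t, NULL, burner, NULL);
	}
	pause();
	return 0;
}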

> 
> > How to apply it
> > ===============
> > 
> > For stable 4.2.8+:
> > 	N/A (already applied)
> > 
> > For longterm 4.1.15:
> > 	Cherry-picking the upsteam commits works with a trivial conflict.
> > 
> > For longterm 3.18.25:
> > 	Refer to the backported patches in this thread.
> > 
> > For longterm 3.14.58:
> > 	Refer to the backported patches in this thread. And applying
> > 	additional "6c3b4d4: sched: Clean up idle task SMP logic" commit
> > 	makes backporting the upstream commits much simpler. So my
> > 	backporting patches include the patch.
> > 
> > For longterm 2.6.32.69 ~ 3.12.51: Need to be backported. (I didn't)
> 
> No objection as long as you've actually tested things etc..
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-01-05  9:14 ` Peter Zijlstra
  2016-01-12  8:47   ` Byungchul Park
@ 2016-01-12 10:21   ` Willy Tarreau
  2016-01-25  7:25   ` Byungchul Park
  2 siblings, 0 replies; 33+ messages in thread
From: Willy Tarreau @ 2016-01-12 10:21 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Byungchul Park, stable, linux-kernel

Hi Peter,

On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> On Tue, Jan 05, 2016 at 05:52:11PM +0900, Byungchul Park wrote:
> > 
> > Upstream commits to be applied
> > ==============================
> > 
> > e3fca9e: sched: Replace post_schedule with a balance callback list
> > 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> > 8046d68: sched,rt: Remove return value from pull_rt_task()
> > fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
> > 0ea60c2: sched,dl: Remove return value from pull_dl_task()
> > 9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
> > 
> > The reason why these should be applied
> > ======================================
> > 
> > Our products developed using 3.16 kernel, faced a kernel oops which can
> > be fixed with above upstreamed patches. The oops is caused by "Unable
> > to handle kernel NULL pointer dereference at virtual address 000000xx"
> > in the call path,
> > 
> > __sched_setscheduler()
> > 	check_class_changed()
> > 		switched_to_fair()
> > 			check_preempt_curr()
> > 				check_preempt_wakeup()
> > 					find_matching_se()
> > 						is_same_group()
> > 
> > by "if (se->cfs_rq == pse->cfs_rq) // se, pse == NULL" condition.
> 
> So the reason I didn't mark them for stable is that they were non
> trivial, however they've been in for a while now and nothing broke, so I
> suppose backporting them isn't a problem.

I didn't check the code, but for older kernels, can't we simply get rid
of the issue by adding an extra test on se/pse before dereferencing
them, even if that implies a suboptimal fix, which is still better than
an oops? I must confess I don't feel at ease with backporting so many
sensitive changes into 2.6.32!
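
For the record, the "extra test" I have in mind would be something like
the following -- a completely untested sketch of the 3.x fair.c walk
(the depth-equalization steps of the real find_matching_se() are
elided), only to illustrate the idea; callers would also have to
tolerate *se/*pse ending up NULL:

static void
find_matching_se(struct sched_entity **se, struct sched_entity **pse)
{
	/* ... first walk both entities up to the same depth ... */

	/*
	 * Bail out instead of dereferencing a NULL entity in
	 * is_same_group() when the task has been migrated behind
	 * __sched_setscheduler()'s back.
	 */
	while (*se && *pse && !is_same_group(*se, *pse)) {
		*se = parent_entity(*se);
		*pse = parent_entity(*pse);
	}
}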

Thanks,
Willy

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-01-05  9:14 ` Peter Zijlstra
  2016-01-12  8:47   ` Byungchul Park
  2016-01-12 10:21   ` Willy Tarreau
@ 2016-01-25  7:25   ` Byungchul Park
  2016-02-16  7:08     ` Byungchul Park
  2 siblings, 1 reply; 33+ messages in thread
From: Byungchul Park @ 2016-01-25  7:25 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: stable, linux-kernel

On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> So the reason I didn't mark them for stable is that they were non
> trivial, however they've been in for a while now and nothing broke, so I
> suppose backporting them isn't a problem.

Hello,

What do you think about this way of solving the oops problem? Could you
give your opinion on it, or ack or nack this backporting?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-01-25  7:25   ` Byungchul Park
@ 2016-02-16  7:08     ` Byungchul Park
  2016-02-16  8:44       ` Peter Zijlstra
  0 siblings, 1 reply; 33+ messages in thread
From: Byungchul Park @ 2016-02-16  7:08 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: stable, linux-kernel, gregkh

On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > So the reason I didn't mark them for stable is that they were non
> > trivial, however they've been in for a while now and nothing broke, so I
> > suppose backporting them isn't a problem.
> 
> Hello,
> 
> What do you think about the way to solve this oops problem? Could you just
> give your opinion of the way? Or ack or nack about this backporting?

Or would it be better to create a new, simpler patch that solves the
oops problem, since your patches are too complicated to backport to the
stable trees? What do you think about that?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-16  7:08     ` Byungchul Park
@ 2016-02-16  8:44       ` Peter Zijlstra
  2016-02-16 17:42         ` Greg KH
  0 siblings, 1 reply; 33+ messages in thread
From: Peter Zijlstra @ 2016-02-16  8:44 UTC (permalink / raw)
  To: Byungchul Park; +Cc: stable, linux-kernel, gregkh

On Tue, Feb 16, 2016 at 04:08:37PM +0900, Byungchul Park wrote:
> On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> > On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > > So the reason I didn't mark them for stable is that they were non
> > > trivial, however they've been in for a while now and nothing broke, so I
> > > suppose backporting them isn't a problem.
> > 
> > Hello,
> > 
> > What do you think about the way to solve this oops problem? Could you just
> > give your opinion of the way? Or ack or nack about this backporting?
> 
> Or would it be better to create a new simple patch with which we can solve
> the oops problem, because your patch is too complicated to backport to
> stable tree? What do you think about that?

I would prefer just backporting the existing stuff; we know that works.

A separate patch for stable doesn't make sense to me; you get extra
chances for fail and a divergent code-base.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-16  8:44       ` Peter Zijlstra
@ 2016-02-16 17:42         ` Greg KH
  2016-02-17  0:11           ` Byungchul Park
  0 siblings, 1 reply; 33+ messages in thread
From: Greg KH @ 2016-02-16 17:42 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Byungchul Park, stable, linux-kernel

On Tue, Feb 16, 2016 at 09:44:35AM +0100, Peter Zijlstra wrote:
> On Tue, Feb 16, 2016 at 04:08:37PM +0900, Byungchul Park wrote:
> > On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> > > On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > > > So the reason I didn't mark them for stable is that they were non
> > > > trivial, however they've been in for a while now and nothing broke, so I
> > > > suppose backporting them isn't a problem.
> > > 
> > > Hello,
> > > 
> > > What do you think about the way to solve this oops problem? Could you just
> > > give your opinion of the way? Or ack or nack about this backporting?
> > 
> > Or would it be better to create a new simple patch with which we can solve
> > the oops problem, because your patch is too complicated to backport to
> > stable tree? What do you think about that?
> 
> I would prefer just backporting existing stuff, we know that works.
> 
> A separate patch for stable doesn't make sense to me; you get extra
> chances for fail and a divergent code-base.

I agree; I REALLY don't want to take patches that are not as identical
as possible to what is in Linus's tree, because almost every time we
do, the patch is broken in some way.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-16 17:42         ` Greg KH
@ 2016-02-17  0:11           ` Byungchul Park
  2016-02-17  0:41             ` Greg KH
  0 siblings, 1 reply; 33+ messages in thread
From: Byungchul Park @ 2016-02-17  0:11 UTC (permalink / raw)
  To: Greg KH; +Cc: Peter Zijlstra, stable, linux-kernel

On Tue, Feb 16, 2016 at 09:42:12AM -0800, Greg KH wrote:
> On Tue, Feb 16, 2016 at 09:44:35AM +0100, Peter Zijlstra wrote:
> > On Tue, Feb 16, 2016 at 04:08:37PM +0900, Byungchul Park wrote:
> > > On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> > > > On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > > > > So the reason I didn't mark them for stable is that they were non
> > > > > trivial, however they've been in for a while now and nothing broke, so I
> > > > > suppose backporting them isn't a problem.
> > > > 
> > > > Hello,
> > > > 
> > > > What do you think about the way to solve this oops problem? Could you just
> > > > give your opinion of the way? Or ack or nack about this backporting?
> > > 
> > > Or would it be better to create a new simple patch with which we can solve
> > > the oops problem, because your patch is too complicated to backport to
> > > stable tree? What do you think about that?
> > 
> > I would prefer just backporting existing stuff, we know that works.
> > 
> > A separate patch for stable doesn't make sense to me; you get extra
> > chances for fail and a divergent code-base.
> 
> I agree, I REALLY don't want to take patches that are not
> identical-as-much-as-possible to what is in Linus's tree, because almost
> every time we do, the patch is broken in some way.

I also agree, got it. Then could you check whether this backporting was
done properly?

> 
> thanks,
> 
> greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-17  0:11           ` Byungchul Park
@ 2016-02-17  0:41             ` Greg KH
  2016-02-17  2:00               ` Byungchul Park
  0 siblings, 1 reply; 33+ messages in thread
From: Greg KH @ 2016-02-17  0:41 UTC (permalink / raw)
  To: Byungchul Park; +Cc: Peter Zijlstra, stable, linux-kernel

On Wed, Feb 17, 2016 at 09:11:03AM +0900, Byungchul Park wrote:
> On Tue, Feb 16, 2016 at 09:42:12AM -0800, Greg KH wrote:
> > On Tue, Feb 16, 2016 at 09:44:35AM +0100, Peter Zijlstra wrote:
> > > On Tue, Feb 16, 2016 at 04:08:37PM +0900, Byungchul Park wrote:
> > > > On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> > > > > On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > > > > > So the reason I didn't mark them for stable is that they were non
> > > > > > trivial, however they've been in for a while now and nothing broke, so I
> > > > > > suppose backporting them isn't a problem.
> > > > > 
> > > > > Hello,
> > > > > 
> > > > > What do you think about the way to solve this oops problem? Could you just
> > > > > give your opinion of the way? Or ack or nack about this backporting?
> > > > 
> > > > Or would it be better to create a new simple patch with which we can solve
> > > > the oops problem, because your patch is too complicated to backport to
> > > > stable tree? What do you think about that?
> > > 
> > > I would prefer just backporting existing stuff, we know that works.
> > > 
> > > A separate patch for stable doesn't make sense to me; you get extra
> > > chances for fail and a divergent code-base.
> > 
> > I agree, I REALLY don't want to take patches that are not
> > identical-as-much-as-possible to what is in Linus's tree, because almost
> > every time we do, the patch is broken in some way.
> 
> I also agree and got it. Then could you check if this backporting is done
> properly?

What backporting of what to where by whom?

Come on, someone needs to actually send in some patches, in the correct
format, before anyone can do anything with them...

greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-17  0:41             ` Greg KH
@ 2016-02-17  2:00               ` Byungchul Park
  2016-02-17  3:01                 ` Greg KH
  2016-02-17  3:02                 ` Mike Galbraith
  0 siblings, 2 replies; 33+ messages in thread
From: Byungchul Park @ 2016-02-17  2:00 UTC (permalink / raw)
  To: Greg KH; +Cc: Peter Zijlstra, stable, linux-kernel

On Tue, Feb 16, 2016 at 04:41:39PM -0800, Greg KH wrote:
> On Wed, Feb 17, 2016 at 09:11:03AM +0900, Byungchul Park wrote:
> > On Tue, Feb 16, 2016 at 09:42:12AM -0800, Greg KH wrote:
> > > On Tue, Feb 16, 2016 at 09:44:35AM +0100, Peter Zijlstra wrote:
> > > > On Tue, Feb 16, 2016 at 04:08:37PM +0900, Byungchul Park wrote:
> > > > > On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> > > > > > On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > > > > > > So the reason I didn't mark them for stable is that they were non
> > > > > > > trivial, however they've been in for a while now and nothing broke, so I
> > > > > > > suppose backporting them isn't a problem.
> > > > > > 
> > > > > > Hello,
> > > > > > 
> > > > > > What do you think about the way to solve this oops problem? Could you just
> > > > > > give your opinion of the way? Or ack or nack about this backporting?
> > > > > 
> > > > > Or would it be better to create a new simple patch with which we can solve
> > > > > the oops problem, because your patch is too complicated to backport to
> > > > > stable tree? What do you think about that?
> > > > 
> > > > I would prefer just backporting existing stuff, we know that works.
> > > > 
> > > > A separate patch for stable doesn't make sense to me; you get extra
> > > > chances for fail and a divergent code-base.
> > > 
> > > I agree, I REALLY don't want to take patches that are not
> > > identical-as-much-as-possible to what is in Linus's tree, because almost
> > > every time we do, the patch is broken in some way.
> > 
> > I also agree and got it. Then could you check if this backporting is done
> > properly?
> 
> What backporting of what to where by whom?
> 
> Come on, someone needs to actually send in some patches, in the correct
> format, before anyone can do anything with them...

I am sorry for not ccing you when I sent the patches at first. (I
didn't know I should.) The patches are in this thread; refer to:

https://lkml.org/lkml/2016/1/5/60

Thanks,
Byungchul

> 
> greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-17  2:00               ` Byungchul Park
@ 2016-02-17  3:01                 ` Greg KH
  2016-02-17  3:02                 ` Mike Galbraith
  1 sibling, 0 replies; 33+ messages in thread
From: Greg KH @ 2016-02-17  3:01 UTC (permalink / raw)
  To: Byungchul Park; +Cc: Peter Zijlstra, stable, linux-kernel

On Wed, Feb 17, 2016 at 11:00:08AM +0900, Byungchul Park wrote:
> On Tue, Feb 16, 2016 at 04:41:39PM -0800, Greg KH wrote:
> > On Wed, Feb 17, 2016 at 09:11:03AM +0900, Byungchul Park wrote:
> > > On Tue, Feb 16, 2016 at 09:42:12AM -0800, Greg KH wrote:
> > > > On Tue, Feb 16, 2016 at 09:44:35AM +0100, Peter Zijlstra wrote:
> > > > > On Tue, Feb 16, 2016 at 04:08:37PM +0900, Byungchul Park wrote:
> > > > > > On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> > > > > > > On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > > > > > > > So the reason I didn't mark them for stable is that they were non
> > > > > > > > trivial, however they've been in for a while now and nothing broke, so I
> > > > > > > > suppose backporting them isn't a problem.
> > > > > > > 
> > > > > > > Hello,
> > > > > > > 
> > > > > > > What do you think about the way to solve this oops problem? Could you just
> > > > > > > give your opinion of the way? Or ack or nack about this backporting?
> > > > > > 
> > > > > > Or would it be better to create a new simple patch with which we can solve
> > > > > > the oops problem, because your patch is too complicated to backport to
> > > > > > stable tree? What do you think about that?
> > > > > 
> > > > > I would prefer just backporting existing stuff, we know that works.
> > > > > 
> > > > > A separate patch for stable doesn't make sense to me; you get extra
> > > > > chances for fail and a divergent code-base.
> > > > 
> > > > I agree, I REALLY don't want to take patches that are not
> > > > identical-as-much-as-possible to what is in Linus's tree, because almost
> > > > every time we do, the patch is broken in some way.
> > > 
> > > I also agree and got it. Then could you check if this backporting is done
> > > properly?
> > 
> > What backporting of what to where by whom?
> > 
> > Come on, someone needs to actually send in some patches, in the correct
> > format, before anyone can do anything with them...
> 
> I am sorry for not ccing you when I sent the patches at first. (I didn't
> know I should do it.) There are the patches in this thread. Refer to,
> 
> https://lkml.org/lkml/2016/1/5/60

Ok, that's in my "to review" queue, along with 500+ other patches.  I'll
get to them eventually :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-17  2:00               ` Byungchul Park
  2016-02-17  3:01                 ` Greg KH
@ 2016-02-17  3:02                 ` Mike Galbraith
  2016-02-23 21:05                   ` Ben Hutchings
  1 sibling, 1 reply; 33+ messages in thread
From: Mike Galbraith @ 2016-02-17  3:02 UTC (permalink / raw)
  To: Byungchul Park, Greg KH; +Cc: Peter Zijlstra, stable, linux-kernel

On Wed, 2016-02-17 at 11:00 +0900, Byungchul Park wrote:
> On Tue, Feb 16, 2016 at 04:41:39PM -0800, Greg KH wrote:
> > On Wed, Feb 17, 2016 at 09:11:03AM +0900, Byungchul Park wrote:
> > > On Tue, Feb 16, 2016 at 09:42:12AM -0800, Greg KH wrote:
> > > > On Tue, Feb 16, 2016 at 09:44:35AM +0100, Peter Zijlstra wrote:
> > > > > On Tue, Feb 16, 2016 at 04:08:37PM +0900, Byungchul Park wrote:
> > > > > > On Mon, Jan 25, 2016 at 04:25:03PM +0900, Byungchul Park wrote:
> > > > > > > On Tue, Jan 05, 2016 at 10:14:44AM +0100, Peter Zijlstra wrote:
> > > > > > > > So the reason I didn't mark them for stable is that they were non
> > > > > > > > trivial, however they've been in for a while now and nothing broke, so I
> > > > > > > > suppose backporting them isn't a problem.
> > > > > > > 
> > > > > > > Hello,
> > > > > > > 
> > > > > > > What do you think about the way to solve this oops problem? Could you just
> > > > > > > give your opinion of the way? Or ack or nack about this backporting?
> > > > > > 
> > > > > > Or would it be better to create a new simple patch with which we can solve
> > > > > > the oops problem, because your patch is too complicated to backport to
> > > > > > stable tree? What do you think about that?
> > > > > 
> > > > > I would prefer just backporting existing stuff, we know that works.
> > > > > 
> > > > > A separate patch for stable doesn't make sense to me; you get extra
> > > > > chances for fail and a divergent code-base.
> > > > 
> > > > I agree, I REALLY don't want to take patches that are not
> > > > identical-as-much-as-possible to what is in Linus's tree, because almost
> > > > every time we do, the patch is broken in some way.
> > > 
> > > I also agree and got it. Then could you check if this backporting is done
> > > properly?
> > 
> > What backporting of what to where by whom?
> > 
> > Come on, someone needs to actually send in some patches, in the correct
> > format, before anyone can do anything with them...
> 
> I am sorry for not ccing you when I sent the patches at first. (I didn't
> know I should do it.) There are the patches in this thread. Refer to,
> 
> https://lkml.org/lkml/2016/1/5/60

Anybody wanting to fix up a < 3.14 kernel can use the below.

sched: fix __sched_setscheduler() vs load balancing race

__sched_setscheduler() may release rq->lock in pull_rt_task() while a
task is being changed from the rt to the fair class.  Load balancing
may then sneak in and move the task behind __sched_setscheduler()'s
back, which explodes in switched_to_fair() when the passed but no
longer valid rq is used.  Tell can_migrate_task() to say no if
->pi_lock is held.

@stable: Kernels that predate SCHED_DEADLINE can use this simple (and
tested) check in lieu of a backport of the full 18-patch mainline
treatment.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
---
 kernel/sched/fair.c |    9 +++++++++
 1 file changed, 9 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4008,6 +4008,7 @@ int can_migrate_task(struct task_struct
 	 * 2) cannot be migrated to this CPU due to cpus_allowed, or
 	 * 3) running (obviously), or
 	 * 4) are cache-hot on their current CPU.
+	 * 5) p->pi_lock is held.
 	 */
 	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
 		return 0;
@@ -4049,6 +4050,14 @@ int can_migrate_task(struct task_struct
 	}
 
 	/*
+	 * rt -> fair class change may be in progress.  If we sneak in should
+	 * double_lock_balance() release rq->lock, and move the task, we will
+	 * cause switched_to_fair() to meet a passed but no longer valid rq.
+	 */
+	if (raw_spin_is_locked(&p->pi_lock))
+		return 0;
+
+	/*
 	 * Aggressive migration if:
 	 * 1) task is cache cold, or
 	 * 2) too many balance attempts have failed.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-02-17  3:02                 ` Mike Galbraith
@ 2016-02-23 21:05                   ` Ben Hutchings
  0 siblings, 0 replies; 33+ messages in thread
From: Ben Hutchings @ 2016-02-23 21:05 UTC (permalink / raw)
  To: Mike Galbraith, Byungchul Park, Greg KH
  Cc: Peter Zijlstra, stable, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1441 bytes --]

On Wed, 2016-02-17 at 04:02 +0100, Mike Galbraith wrote:
[...]
> @stable: Kernels that predate SCHED_DEADLINE can use this simple (and
> tested) check in lieu of a backport of the full 18-patch mainline
> treatment.
> 
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> ---
>  kernel/sched/fair.c |    9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4008,6 +4008,7 @@ int can_migrate_task(struct task_struct
>  	 * 2) cannot be migrated to this CPU due to cpus_allowed, or
>  	 * 3) running (obviously), or
>  	 * 4) are cache-hot on their current CPU.
> +	 * 5) p->pi_lock is held.
>  	 */
>  	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
>  		return 0;
> @@ -4049,6 +4050,14 @@ int can_migrate_task(struct task_struct
>  	}
>  
>  	/*
> +	 * rt -> fair class change may be in progress.  If we sneak in should
> +	 * double_lock_balance() release rq->lock, and move the task, we will
> +	 * cause switched_to_fair() to meet a passed but no longer valid rq.
> +	 */
> +	if (raw_spin_is_locked(&p->pi_lock))
> +		return 0;
> +
> +	/*
>  	 * Aggressive migration if:
>  	 * 1) task is cache cold, or
>  	 * 2) too many balance attempts have failed.

Queued up for 3.2, thanks.

Ben.

-- 
Ben Hutchings
Any smoothly functioning technology is indistinguishable from a rigged demo.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 811 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-01-05  8:52 [STABLE] kernel oops which can be fixed by peterz's patches Byungchul Park
                   ` (2 preceding siblings ...)
  2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
@ 2016-03-01  8:15 ` Greg KH
  2016-07-18  6:31   ` Byungchul Park
  2016-06-13 18:31 ` Ben Hutchings
  4 siblings, 1 reply; 33+ messages in thread
From: Greg KH @ 2016-03-01  8:15 UTC (permalink / raw)
  To: Byungchul Park; +Cc: stable, peterz, linux-kernel

On Tue, Jan 05, 2016 at 05:52:11PM +0900, Byungchul Park wrote:
> 
> Upstream commits to be applied
> ==============================
> 
> e3fca9e: sched: Replace post_schedule with a balance callback list
> 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> 8046d68: sched,rt: Remove return value from pull_rt_task()
> fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
> 0ea60c2: sched,dl: Remove return value from pull_dl_task()
> 9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
> 
> The reason why these should be applied
> ======================================
> 
> Our products developed using 3.16 kernel, faced a kernel oops which can
> be fixed with above upstreamed patches. The oops is caused by "Unable
> to handle kernel NULL pointer dereference at virtual address 000000xx"
> in the call path,
> 
> __sched_setscheduler()
> 	check_class_changed()
> 		switched_to_fair()
> 			check_preempt_curr()
> 				check_preempt_wakeup()
> 					find_matching_se()
> 						is_same_group()
> 
> by "if (se->cfs_rq == pse->cfs_rq) // se, pse == NULL" condition.
> 
> How to apply it
> ===============
> 
> For stable 4.2.8+:
> 	N/A (already applied)
> 
> For longterm 4.1.15:
> 	Cherry-picking the upsteam commits works with a trivial conflict.
> 
> For longterm 3.18.25:
> 	Refer to the backported patches in this thread.
> 
> For longterm 3.14.58:
> 	Refer to the backported patches in this thread. And applying
> 	additional "6c3b4d4: sched: Clean up idle task SMP logic" commit
> 	makes backporting the upstream commits much simpler. So my
> 	backporting patches include the patch.

All now applied to the 3.14-stable queue, thanks.

greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-01-05  8:52 [STABLE] kernel oops which can be fixed by peterz's patches Byungchul Park
                   ` (3 preceding siblings ...)
  2016-03-01  8:15 ` [STABLE] kernel oops which can be fixed by peterz's patches Greg KH
@ 2016-06-13 18:31 ` Ben Hutchings
  4 siblings, 0 replies; 33+ messages in thread
From: Ben Hutchings @ 2016-06-13 18:31 UTC (permalink / raw)
  To: Byungchul Park, stable; +Cc: peterz, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2001 bytes --]

On Tue, 2016-01-05 at 17:52 +0900, Byungchul Park wrote:
> Upstream commits to be applied
> ==============================
> 
> e3fca9e: sched: Replace post_schedule with a balance callback list
> 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> 8046d68: sched,rt: Remove return value from pull_rt_task()
> fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
> 0ea60c2: sched,dl: Remove return value from pull_dl_task()
> 9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
> 
> The reason why these should be applied
> ======================================
> 
> Our products developed using 3.16 kernel, faced a kernel oops which can
> be fixed with above upstreamed patches. The oops is caused by "Unable
> to handle kernel NULL pointer dereference at virtual address 000000xx"
> in the call path,
> 
> __sched_setscheduler()
> 	check_class_changed()
> 		switched_to_fair()
> 			check_preempt_curr()
> 				check_preempt_wakeup()
> 					find_matching_se()
> 						is_same_group()
> 
> by "if (se->cfs_rq == pse->cfs_rq) // se, pse == NULL" condition.
> 
> How to apply it
> ===============
> 
> For stable 4.2.8+:
> 	N/A (already applied)
> 
> For longterm 4.1.15:
> 	Cherry-picking the upsteam commits works with a trivial conflict.
> 
> For longterm 3.18.25:
> 	Refer to the backported patches in this thread.

I've used your patches for 3.18.25 as the basis for backporting to 3.16
longterm.  Thanks.

Ben.

> For longterm 3.14.58:
> 	Refer to the backported patches in this thread. And applying
> 	additional "6c3b4d4: sched: Clean up idle task SMP logic" commit
> 	makes backporting the upstream commits much simpler. So my
> 	backporting patches include the patch.
> 
> For longterm 2.6.32.69 ~ 3.12.51: Need to be backported. (I didn't)

-- 
Ben Hutchings
One of the nice things about standards is that there are so many of
them.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-03-01  8:15 ` [STABLE] kernel oops which can be fixed by peterz's patches Greg KH
@ 2016-07-18  6:31   ` Byungchul Park
  2016-07-18 12:09     ` Greg KH
  0 siblings, 1 reply; 33+ messages in thread
From: Byungchul Park @ 2016-07-18  6:31 UTC (permalink / raw)
  To: Greg KH; +Cc: stable, peterz, linux-kernel

On Tue, Mar 01, 2016 at 08:15:55AM +0000, Greg KH wrote:
> On Tue, Jan 05, 2016 at 05:52:11PM +0900, Byungchul Park wrote:
> > 
> > Upstream commits to be applied
> > ==============================
> > 
> > e3fca9e: sched: Replace post_schedule with a balance callback list
> > 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> > 8046d68: sched,rt: Remove return value from pull_rt_task()
> > fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
> > 0ea60c2: sched,dl: Remove return value from pull_dl_task()
> > 9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
> > 
> > The reason why these should be applied
> > ======================================
> > 
> > Our products developed using 3.16 kernel, faced a kernel oops which can
> > be fixed with above upstreamed patches. The oops is caused by "Unable
> > to handle kernel NULL pointer dereference at virtual address 000000xx"
> > in the call path,
> > 
> > __sched_setscheduler()
> > 	check_class_changed()
> > 		switched_to_fair()
> > 			check_preempt_curr()
> > 				check_preempt_wakeup()
> > 					find_matching_se()
> > 						is_same_group()
> > 
> > by "if (se->cfs_rq == pse->cfs_rq) // se, pse == NULL" condition.
> > 
> > How to apply it
> > ===============
> > 
> > For stable 4.2.8+:
> > 	N/A (already applied)
> > 
> > For longterm 4.1.15:
> > 	Cherry-picking the upsteam commits works with a trivial conflict.
> > 
> > For longterm 3.18.25:
> > 	Refer to the backported patches in this thread.
> > 
> > For longterm 3.14.58:
> > 	Refer to the backported patches in this thread. And applying
> > 	additional "6c3b4d4: sched: Clean up idle task SMP logic" commit
> > 	makes backporting the upstream commits much simpler. So my
> > 	backporting patches include the patch.
> 
> All now applied to the 3.14-stable queue, thanks.

Hello,

I realized this was not applied to 3.18-stable yet.

Is there any reason?

Thanks,
Byungchul

> 
> greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-07-18  6:31   ` Byungchul Park
@ 2016-07-18 12:09     ` Greg KH
  2016-07-18 23:59       ` Byungchul Park
  0 siblings, 1 reply; 33+ messages in thread
From: Greg KH @ 2016-07-18 12:09 UTC (permalink / raw)
  To: Byungchul Park; +Cc: stable, peterz, linux-kernel

On Mon, Jul 18, 2016 at 03:31:46PM +0900, Byungchul Park wrote:
> On Tue, Mar 01, 2016 at 08:15:55AM +0000, Greg KH wrote:
> > On Tue, Jan 05, 2016 at 05:52:11PM +0900, Byungchul Park wrote:
> > > 
> > > Upstream commits to be applied
> > > ==============================
> > > 
> > > e3fca9e: sched: Replace post_schedule with a balance callback list
> > > 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> > > 8046d68: sched,rt: Remove return value from pull_rt_task()
> > > fd7a4be: sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks
> > > 0ea60c2: sched,dl: Remove return value from pull_dl_task()
> > > 9916e21: sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks
> > > 
> > > The reason why these should be applied
> > > ======================================
> > > 
> > > Our products developed using 3.16 kernel, faced a kernel oops which can
> > > be fixed with above upstreamed patches. The oops is caused by "Unable
> > > to handle kernel NULL pointer dereference at virtual address 000000xx"
> > > in the call path,
> > > 
> > > __sched_setscheduler()
> > > 	check_class_changed()
> > > 		switched_to_fair()
> > > 			check_preempt_curr()
> > > 				check_preempt_wakeup()
> > > 					find_matching_se()
> > > 						is_same_group()
> > > 
> > > by "if (se->cfs_rq == pse->cfs_rq) // se, pse == NULL" condition.
> > > 
> > > How to apply it
> > > ===============
> > > 
> > > For stable 4.2.8+:
> > > 	N/A (already applied)
> > > 
> > > For longterm 4.1.15:
> > > 	Cherry-picking the upsteam commits works with a trivial conflict.
> > > 
> > > For longterm 3.18.25:
> > > 	Refer to the backported patches in this thread.
> > > 
> > > For longterm 3.14.58:
> > > 	Refer to the backported patches in this thread. And applying
> > > 	additional "6c3b4d4: sched: Clean up idle task SMP logic" commit
> > > 	makes backporting the upstream commits much simpler. So my
> > > 	backporting patches include the patch.
> > 
> > All now applied to the 3.14-stable queue, thanks.
> 
> Hello,
> 
> I realized this was not applied to 3.18-stable yet.
> 
> Is there any reason?

I don't maintain the 3.18-stable tree, so there's nothing I can do
there; please be patient and let the other stable maintainers catch up
on things...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [STABLE] kernel oops which can be fixed by peterz's patches
  2016-07-18 12:09     ` Greg KH
@ 2016-07-18 23:59       ` Byungchul Park
  0 siblings, 0 replies; 33+ messages in thread
From: Byungchul Park @ 2016-07-18 23:59 UTC (permalink / raw)
  To: Greg KH; +Cc: stable, peterz, linux-kernel

On Mon, Jul 18, 2016 at 09:09:45PM +0900, Greg KH wrote:
> > > All now applied to the 3.14-stable queue, thanks.
> > 
> > Hello,
> > 
> > I realized this was not applied to 3.18-stable yet.
> > 
> > Is there any reason?
> 
> I don't maintain the 3.18-stable tree, so there's nothing I can do
> there, please be patient and let the other stable maintainers catch up
> on things...

Ah. Thank you :)

> 
> thanks,
> 
> greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2016-07-19  0:01 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-05  8:52 [STABLE] kernel oops which can be fixed by peterz's patches Byungchul Park
2016-01-05  9:14 ` Peter Zijlstra
2016-01-12  8:47   ` Byungchul Park
2016-01-12 10:21   ` Willy Tarreau
2016-01-25  7:25   ` Byungchul Park
2016-02-16  7:08     ` Byungchul Park
2016-02-16  8:44       ` Peter Zijlstra
2016-02-16 17:42         ` Greg KH
2016-02-17  0:11           ` Byungchul Park
2016-02-17  0:41             ` Greg KH
2016-02-17  2:00               ` Byungchul Park
2016-02-17  3:01                 ` Greg KH
2016-02-17  3:02                 ` Mike Galbraith
2016-02-23 21:05                   ` Ben Hutchings
2016-01-05  9:16 ` [PATCH for v3.14.58 1/7] sched: Clean up idle task SMP logic Byungchul Park
2016-01-05  9:16   ` [PATCH for v3.14.58 2/7] sched: Replace post_schedule with a balance callback list Byungchul Park
2016-01-05  9:16   ` [PATCH for v3.14.58 3/7] sched: Allow balance callbacks for check_class_changed() Byungchul Park
2016-01-05  9:16   ` [PATCH for v3.14.58 4/7] sched,rt: Remove return value from pull_rt_task() Byungchul Park
2016-01-05  9:16   ` [PATCH for v3.14.58 5/7] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks Byungchul Park
2016-01-05  9:16   ` [PATCH for v3.14.58 6/7] sched,dl: Remove return value from pull_dl_task() Byungchul Park
2016-01-05  9:16   ` [PATCH for v3.14.58 7/7] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks Byungchul Park
2016-01-05  9:24 ` [PATCH for v3.18.25 1/6] sched: Replace post_schedule with a balance callback list Byungchul Park
2016-01-05  9:24   ` [PATCH for v3.18.25 2/6] sched: Allow balance callbacks for check_class_changed() Byungchul Park
2016-01-05  9:24   ` [PATCH for v3.18.25 3/6] sched,rt: Remove return value from pull_rt_task() Byungchul Park
2016-01-05  9:24   ` [PATCH for v3.18.25 4/6] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks Byungchul Park
2016-01-05  9:24   ` [PATCH for v3.18.25 5/6] sched,dl: Remove return value from pull_dl_task() Byungchul Park
2016-01-05  9:24   ` [PATCH for v3.18.25 6/6] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks Byungchul Park
2016-03-01  8:15 ` [STABLE] kernel oops which can be fixed by peterz's patches Greg KH
2016-07-18  6:31   ` Byungchul Park
2016-07-18 12:09     ` Greg KH
2016-07-18 23:59       ` Byungchul Park
2016-06-13 18:31 ` Ben Hutchings
