* [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio
@ 2015-04-20  8:22 Xunlei Pang
  2015-04-20  8:22 ` [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases Xunlei Pang
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Xunlei Pang @ 2015-04-20  8:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang

From: Xunlei Pang <pang.xunlei@linaro.org>

If there are multiple nodes with the same prio as @node, plist_add()
currently adds @node after all of them. The SMP RT scheduler now needs
to be able to add @node before all such nodes.

This patch adds a common __plist_add() that can add @node either before
or after existing nodes with the same prio, plus plist_add_head() and
plist_add_tail() inline wrappers for convenience.

Finally, plist_add() is defined as plist_add_tail(), which preserves the
previous behaviour.
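
As an illustrative sketch (not part of this patch), with two nodes at
the same prio only their relative order changes depending on which
helper is used:

	struct plist_head head;
	struct plist_node a, b;

	plist_head_init(&head);
	plist_node_init(&a, 10);
	plist_node_init(&b, 10);

	plist_add_tail(&a, &head);	/* a queued after any same-prio nodes  */
	plist_add_head(&b, &head);	/* b queued before a, prios being equal */
	/* plist_first(&head) is now &b; with plist_add_tail() it would be &a */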

Reviewed-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
---
v5->v6:
Add more detailed annotations.

 include/linux/plist.h | 30 +++++++++++++++++++++++++++++-
 lib/plist.c           | 28 +++++++++++++++++++++++-----
 2 files changed, 52 insertions(+), 6 deletions(-)

diff --git a/include/linux/plist.h b/include/linux/plist.h
index 9788360..39806a2 100644
--- a/include/linux/plist.h
+++ b/include/linux/plist.h
@@ -138,7 +138,35 @@ static inline void plist_node_init(struct plist_node *node, int prio)
 	INIT_LIST_HEAD(&node->node_list);
 }
 
-extern void plist_add(struct plist_node *node, struct plist_head *head);
+extern void __plist_add(struct plist_node *node,
+				struct plist_head *head, bool is_head);
+
+/**
+ * plist_add_head - add @node to @head, before all existing same-prio nodes
+ *
+ * @node:	The plist_node to be added to @head
+ * @head:	The plist_head that @node is being added to
+ */
+static inline
+void plist_add_head(struct plist_node *node, struct plist_head *head)
+{
+	__plist_add(node, head, true);
+}
+
+/**
+ * plist_add_tail - add @node to @head, after all existing same-prio nodes
+ *
+ * @node:	The plist_node to be added to @head
+ * @head:	The plist_head that @node is being added to
+ */
+static inline
+void plist_add_tail(struct plist_node *node, struct plist_head *head)
+{
+	__plist_add(node, head, false);
+}
+
+#define plist_add plist_add_tail
+
 extern void plist_del(struct plist_node *node, struct plist_head *head);
 
 extern void plist_requeue(struct plist_node *node, struct plist_head *head);
diff --git a/lib/plist.c b/lib/plist.c
index 3a30c53..c1ee2b0 100644
--- a/lib/plist.c
+++ b/lib/plist.c
@@ -66,12 +66,18 @@ static void plist_check_head(struct plist_head *head)
 #endif
 
 /**
- * plist_add - add @node to @head
+ * __plist_add - add @node to @head
  *
- * @node:	&struct plist_node pointer
- * @head:	&struct plist_head pointer
+ * @node:    The plist_node to be added to @head
+ * @head:    The plist_head that @node is being added to
+ * @is_head: True if adding to head of prio list, false otherwise
+ *
+ * For nodes of the same prio, @node will be added at the
+ * head of previously added nodes if @is_head is true, or
+ * it will be added at the tail of previously added nodes
+ * if @is_head is false.
  */
-void plist_add(struct plist_node *node, struct plist_head *head)
+void __plist_add(struct plist_node *node, struct plist_head *head, bool is_head)
 {
 	struct plist_node *first, *iter, *prev = NULL;
 	struct list_head *node_next = &head->node_list;
@@ -96,8 +102,20 @@ void plist_add(struct plist_node *node, struct plist_head *head)
 				struct plist_node, prio_list);
 	} while (iter != first);
 
-	if (!prev || prev->prio != node->prio)
+	if (!prev || prev->prio != node->prio) {
 		list_add_tail(&node->prio_list, &iter->prio_list);
+	} else if (is_head) {
+		/*
+		 * prev has the same priority as the node that is being
+		 * added. It is also the first node for this priority,
+		 * but the new node needs to be added ahead of it.
+		 * To accomplish this, replace prev in the prio_list
+		 * with node. Then set node_next to prev->node_list so
+		 * that the new node gets added before prev and not iter.
+		 */
+		list_replace_init(&prev->prio_list, &node->prio_list);
+		node_next = &prev->node_list;
+	}
 ins_node:
 	list_add_tail(&node->node_list, node_next);
 
-- 
1.9.1




* [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-20  8:22 [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Xunlei Pang
@ 2015-04-20  8:22 ` Xunlei Pang
  2015-04-20 14:52   ` Steven Rostedt
  2015-04-20  8:22 ` [PATCH v6 3/3] sched/rt: Check to push the task when changing its affinity Xunlei Pang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Xunlei Pang @ 2015-04-20  8:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang

From: Xunlei Pang <pang.xunlei@linaro.org>

Currently, the SMP RT scheduler has trouble dealing with equal-prio
cases.

For example, in check_preempt_equal_prio():
When RT1 (the current task) gets preempted by RT2, and there is a
migratable RT3 with the same prio, RT3 will afterwards be pushed away
instead of RT1, because RT1 is enqueued at the tail of the pushable
list by the subsequent put_prev_task_rt() triggered by the resched.
This breaks FIFO ordering.

Furthermore, this is also a problem for ordinary preemption whenever
other rt tasks are queued at the same prio as current, because current
is then placed behind those tasks in the pushable queue.

So, if a running task gets preempted by a higher-priority task (or even
by a same-priority task, for migration), this patch ensures that it is
put ahead of any existing task with the same priority in the pushable
queue.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
---
v5->v6:
Add enqueue_pushable_task_preempted() and enqueue_pushable_task() as
wrappers around __enqueue_pushable_task(), to avoid passing a "bool head"
parameter everywhere.

 kernel/sched/rt.c | 41 +++++++++++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 575da76..8679eff 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -359,17 +359,32 @@ static inline void set_post_schedule(struct rq *rq)
 	rq->post_schedule = has_pushable_tasks(rq);
 }
 
-static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
+static void
+__enqueue_pushable_task(struct rq *rq, struct task_struct *p, bool head)
 {
 	plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
 	plist_node_init(&p->pushable_tasks, p->prio);
-	plist_add(&p->pushable_tasks, &rq->rt.pushable_tasks);
+	if (head)
+		plist_add_head(&p->pushable_tasks, &rq->rt.pushable_tasks);
+	else
+		plist_add_tail(&p->pushable_tasks, &rq->rt.pushable_tasks);
 
 	/* Update the highest prio pushable task */
 	if (p->prio < rq->rt.highest_prio.next)
 		rq->rt.highest_prio.next = p->prio;
 }
 
+static inline
+void enqueue_pushable_task_preempted(struct rq *rq, struct task_struct *p)
+{
+	__enqueue_pushable_task(rq, p, true);
+}
+
+static inline void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
+{
+	__enqueue_pushable_task(rq, p, false);
+}
+
 static void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
 {
 	plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
@@ -385,6 +400,11 @@ static void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
 
 #else
 
+static inline
+void enqueue_pushable_task_preempted(struct rq *rq, struct task_struct *p)
+{
+}
+
 static inline void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
 {
 }
@@ -1506,8 +1526,21 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
 	 * The previous task needs to be made eligible for pushing
 	 * if it is still active
 	 */
-	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
-		enqueue_pushable_task(rq, p);
+	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1) {
+		/*
+		 * put_prev_task_rt() is called by many functions,
+		 * pick_next_task_rt() is the only one may have
+		 * PREEMPT_ACTIVE set. So if detecting p(current
+		 * task) is preempted in such case, we should
+		 * enqueue it to the front of the pushable plist,
+		 * as there may be multiple tasks with the same
+		 * priority as p.
+		 */
+		if (preempt_count() & PREEMPT_ACTIVE)
+			enqueue_pushable_task_preempted(rq, p);
+		else
+			enqueue_pushable_task(rq, p);
+	}
 }
 
 #ifdef CONFIG_SMP
-- 
1.9.1




* [PATCH v6 3/3] sched/rt: Check to push the task when changing its affinity
  2015-04-20  8:22 [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Xunlei Pang
  2015-04-20  8:22 ` [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases Xunlei Pang
@ 2015-04-20  8:22 ` Xunlei Pang
  2015-04-20 16:06   ` Steven Rostedt
  2015-04-20 14:42 ` [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Steven Rostedt
  2015-04-20 14:48 ` Steven Rostedt
  3 siblings, 1 reply; 18+ messages in thread
From: Xunlei Pang @ 2015-04-20  8:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang

From: Xunlei Pang <pang.xunlei@linaro.org>

An rq can end up rt-overloaded purely because of task affinity, so when
the affinity of any runnable rt task is changed, we should check whether
balancing needs to be triggered; otherwise real-time response may be
delayed unnecessarily. Unfortunately, the current global RT scheduler
does nothing about this.

For example: on a 2-cpu system, two runnable FIFO tasks with the same
rt_priority are bound to CPU0; call them rt1 (running) and rt2
(runnable). CPU1 has no RT tasks. Someone then sets the affinity of rt2
to 0x3 (i.e. CPU0 and CPU1), but rt2 still cannot be scheduled until rt1
enters schedule(), which causes significant response latency for rt2.

So, when set_cpus_allowed_rt() detects such a case, check whether a push
should be triggered.
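
For reference, widening rt2's affinity as in the example above would be
done from userspace roughly like this (rt2_pid is a hypothetical pid for
rt2; error handling abridged). This path reaches set_cpus_allowed_rt()
via the sched_setaffinity() syscall and set_cpus_allowed_ptr()/
do_set_cpus_allowed():

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>

	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	CPU_SET(1, &mask);	/* mask is 0x3: CPU0 and CPU1 */
	if (sched_setaffinity(rt2_pid, sizeof(mask), &mask) == -1)
		perror("sched_setaffinity");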

Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
---
 kernel/sched/rt.c | 81 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 71 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8679eff..846b59c 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1460,10 +1460,9 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *peek_next_task_rt(struct rq *rq)
 {
 	struct sched_rt_entity *rt_se;
-	struct task_struct *p;
 	struct rt_rq *rt_rq  = &rq->rt;
 
 	do {
@@ -1472,7 +1471,14 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
 
-	p = rt_task_of(rt_se);
+	return rt_task_of(rt_se);
+}
+
+static inline struct task_struct *_pick_next_task_rt(struct rq *rq)
+{
+	struct task_struct *p;
+
+	p = peek_next_task_rt(rq);
 	p->se.exec_start = rq_clock_task(rq);
 
 	return p;
@@ -2096,28 +2102,77 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 				const struct cpumask *new_mask)
 {
 	struct rq *rq;
-	int weight;
+	int old_weight, new_weight;
+	int preempt_push = 0, direct_push = 0;
 
 	BUG_ON(!rt_task(p));
 
 	if (!task_on_rq_queued(p))
 		return;
 
-	weight = cpumask_weight(new_mask);
+	old_weight = p->nr_cpus_allowed;
+	new_weight = cpumask_weight(new_mask);
 
+	rq = task_rq(p);
+
+	if (new_weight > 1 &&
+	    rt_task(rq->curr) &&
+	    rq->rt.rt_nr_total > 1 &&
+	    !test_tsk_need_resched(rq->curr)) {
+		/*
+		 * We own p->pi_lock and rq->lock. rq->lock might
+		 * get released when doing direct pushing, however
+		 * p->pi_lock is always held, so it's safe to assign
+		 * new_mask and new_weight to p below.
+		 */
+		if (!task_running(rq, p)) {
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			direct_push = 1;
+		} else if (cpumask_test_cpu(task_cpu(p), new_mask)) {
+			struct task_struct *next;
+
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			if (!cpupri_find(&rq->rd->cpupri, p, NULL))
+				goto update;
+
+			/*
+			 * At this point, current task gets migratable most
+			 * likely due to the change of its affinity, let's
+			 * figure out if we can migrate it.
+			 *
+			 * Can we find any task with the same priority as
+			 * current? To accomplish this, firstly we requeue
+			 * current to the tail and peek next, then restore
+			 * current to the head.
+			 */
+			requeue_task_rt(rq, p, 0);
+			next = peek_next_task_rt(rq);
+			requeue_task_rt(rq, p, 1);
+			if (next != p && next->prio == p->prio) {
+				/*
+				 * Target found, so let's reschedule to try
+				 * and push current away.
+				 */
+				requeue_task_rt(rq, next, 1);
+				preempt_push = 1;
+			}
+		}
+	}
+
+update:
 	/*
 	 * Only update if the process changes its state from whether it
 	 * can migrate or not.
 	 */
-	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
-
-	rq = task_rq(p);
+	if ((old_weight > 1) == (new_weight > 1))
+		goto out;
 
 	/*
 	 * The process used to be able to migrate OR it can now migrate
 	 */
-	if (weight <= 1) {
+	if (new_weight <= 1) {
 		if (!task_current(rq, p))
 			dequeue_pushable_task(rq, p);
 		BUG_ON(!rq->rt.rt_nr_migratory);
@@ -2129,6 +2184,12 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	}
 
 	update_rt_migration(&rq->rt);
+
+out:
+	if (direct_push)
+		push_rt_tasks(rq);
+	else if (preempt_push)
+		resched_curr(rq);
 }
 
 /* Assumes rq->lock is held */
-- 
1.9.1




* Re: [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio
  2015-04-20  8:22 [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Xunlei Pang
  2015-04-20  8:22 ` [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases Xunlei Pang
  2015-04-20  8:22 ` [PATCH v6 3/3] sched/rt: Check to push the task when changing its affinity Xunlei Pang
@ 2015-04-20 14:42 ` Steven Rostedt
  2015-04-20 14:48 ` Steven Rostedt
  3 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2015-04-20 14:42 UTC (permalink / raw)
  To: Xunlei Pang; +Cc: linux-kernel, Peter Zijlstra, Juri Lelli, Xunlei Pang

On Mon, 20 Apr 2015 16:22:46 +0800
Xunlei Pang <xlpang@126.com> wrote:

> From: Xunlei Pang <pang.xunlei@linaro.org>
> 
> If there are multiple nodes with the same prio as @node, plist_add()
> currently adds @node after all of them. The SMP RT scheduler now needs
> to be able to add @node before all such nodes.
> 
> This patch adds a common __plist_add() that can add @node either before
> or after existing nodes with the same prio, plus plist_add_head() and
> plist_add_tail() inline wrappers for convenience.
> 
> Finally, plist_add() is defined as plist_add_tail(), which preserves the
> previous behaviour.
> 
> Reviewed-by: Dan Streetman <ddstreet@ieee.org>

Reviewed-by: Steven Rostedt <rostedt@goodmis.org>

-- Steve

> Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
> ---
> v5->v6:
> Add more detailed annotations.
> 



* Re: [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio
  2015-04-20  8:22 [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Xunlei Pang
                   ` (2 preceding siblings ...)
  2015-04-20 14:42 ` [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Steven Rostedt
@ 2015-04-20 14:48 ` Steven Rostedt
  3 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2015-04-20 14:48 UTC (permalink / raw)
  To: Xunlei Pang; +Cc: linux-kernel, Peter Zijlstra, Juri Lelli, Xunlei Pang

On Mon, 20 Apr 2015 16:22:46 +0800
Xunlei Pang <xlpang@126.com> wrote:

> +/**
> + * plist_add_tail - add @node to @head, after all existing same-prio nodes
> + *
> + * @node:	The plist_node to be added to @head
> + * @head:	The plist_head that @node is being added to
> + */
> +static inline
> +void plist_add_tail(struct plist_node *node, struct plist_head *head)
> +{
> +	__plist_add(node, head, false);
> +}
> +
> +#define plist_add plist_add_tail

I already placed my Reviewed-by, but this is more a matter of taste.

I don't think this should be a #define, but instead a static inline.
The assembly will end up the same, but the compiler warnings will be
more helpful if it is a static inline, as we don't want
"plist_add_tail()" showing up in warnings when the developer never
typed "_tail()".

Thus it should be:

static inline void
plist_add(struct plist_node *node, struct plist_head *head)
{
	plist_add_tail(node, head);
}

You can keep my Reviewed-by, but please make this update.

-- Steve


> +
>  extern void plist_del(struct plist_node *node, struct plist_head *head);
>  
>  extern void plist_requeue(struct plist_node *node, struct plist_head *head);
> diff --git a/lib/plist.c b/lib/plist.c
> index 3a30c53..c1ee2b0 100644
> --- a/lib/plist.c
> +++ b/lib/plist.c



* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-20  8:22 ` [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases Xunlei Pang
@ 2015-04-20 14:52   ` Steven Rostedt
  2015-04-20 17:20     ` Peter Zijlstra
  0 siblings, 1 reply; 18+ messages in thread
From: Steven Rostedt @ 2015-04-20 14:52 UTC (permalink / raw)
  To: Xunlei Pang; +Cc: linux-kernel, Peter Zijlstra, Juri Lelli, Xunlei Pang

On Mon, 20 Apr 2015 16:22:47 +0800
Xunlei Pang <xlpang@126.com> wrote:

>  static inline void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
>  {
>  }
> @@ -1506,8 +1526,21 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
>  	 * The previous task needs to be made eligible for pushing
>  	 * if it is still active
>  	 */
> -	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
> -		enqueue_pushable_task(rq, p);
> +	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1) {
> +		/*
> +		 * put_prev_task_rt() is called by many functions,
> +		 * pick_next_task_rt() is the only one may have
> +		 * PREEMPT_ACTIVE set. So if detecting p(current
> +		 * task) is preempted in such case, we should
> +		 * enqueue it to the front of the pushable plist,
> +		 * as there may be multiple tasks with the same
> +		 * priority as p.

The above comment is very difficult to understand. Maybe something like:

		/*
		 * When put_prev_task_rt() is called by
		 * pick_next_task_rt(), if PREEMPT_ACTIVE is set, it
		 * means that the current rt task is being preempted by
		 * a higher priority task. To maintain FIFO, it must
		 * stay ahead of any other task that is queued at the
		 * same priority.
		 */

-- Steve

> +		 */
> +		if (preempt_count() & PREEMPT_ACTIVE)
> +			enqueue_pushable_task_preempted(rq, p);
> +		else
> +			enqueue_pushable_task(rq, p);
> +	}
>  }
>  
>  #ifdef CONFIG_SMP



* Re: [PATCH v6 3/3] sched/rt: Check to push the task when changing its affinity
  2015-04-20  8:22 ` [PATCH v6 3/3] sched/rt: Check to push the task when changing its affinity Xunlei Pang
@ 2015-04-20 16:06   ` Steven Rostedt
  0 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2015-04-20 16:06 UTC (permalink / raw)
  To: Xunlei Pang; +Cc: linux-kernel, Peter Zijlstra, Juri Lelli, Xunlei Pang

On Mon, 20 Apr 2015 16:22:48 +0800
Xunlei Pang <xlpang@126.com> wrote:


> +	rq = task_rq(p);
> +
> +	if (new_weight > 1 &&
> +	    rt_task(rq->curr) &&
> +	    rq->rt.rt_nr_total > 1 &&
> +	    !test_tsk_need_resched(rq->curr)) {
> +		/*
> +		 * We own p->pi_lock and rq->lock. rq->lock might
> +		 * get released when doing direct pushing, however
> +		 * p->pi_lock is always held, so it's safe to assign
> +		 * new_mask and new_weight to p below.
> +		 */
> +		if (!task_running(rq, p)) {
> +			cpumask_copy(&p->cpus_allowed, new_mask);
> +			p->nr_cpus_allowed = new_weight;
> +			direct_push = 1;
> +		} else if (cpumask_test_cpu(task_cpu(p), new_mask)) {
> +			struct task_struct *next;
> +
> +			cpumask_copy(&p->cpus_allowed, new_mask);
> +			p->nr_cpus_allowed = new_weight;
> +			if (!cpupri_find(&rq->rd->cpupri, p, NULL))
> +				goto update;
> +
> +			/*
> +			 * At this point, current task gets migratable most
> +			 * likely due to the change of its affinity, let's
> +			 * figure out if we can migrate it.
> +			 *
> +			 * Can we find any task with the same priority as
> +			 * current? To accomplish this, firstly we requeue
> +			 * current to the tail and peek next, then restore
> +			 * current to the head.

"current"? Don't you mean 'p'?


> +			 */
> +			requeue_task_rt(rq, p, 0);
> +			next = peek_next_task_rt(rq);
> +			requeue_task_rt(rq, p, 1);

Actually, I'm totally confused by all this. Why are you looking at
next? The running task (current) should not be in the next_task_rt()
list to begin with. If p can not preempt current, and cpupri_find()
finds something for p to run on, then move it there, and be done with
it.

I also looked at set_cpus_allowed_ptr() and think we should probably
add a p->sched_class->pick_cpu() and do:

	if (p->sched_class->pick_cpu)
		dest_cpu = p->sched_class->pick_cpu(p, cpu_active_mask,
				new_mask);
	else
		dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);

Hmm, maybe all this work should be done there, or have
do_set_cpus_allowed() return a value that states that the
sched_class->set_cpus_allowed() did all the work for us.

-- Steve

> +			if (next != p && next->prio == p->prio) {
> +				/*
> +				 * Target found, so let's reschedule to try
> +				 * and push current away.
> +				 */
> +				requeue_task_rt(rq, next, 1);
> +				preempt_push = 1;
> +			}
> +		}
> +	}
> +
> +update:
>  	/*
>  	 * Only update if the process changes its state from whether it
>  	 * can migrate or not.
>  	 */
> -	if ((p->nr_cpus_allowed > 1) == (weight > 1))
> -		return;
> -
> -	rq = task_rq(p);
> +	if ((old_weight > 1) == (new_weight > 1))
> +		goto out;
>  
>  	/*
>  	 * The process used to be able to migrate OR it can now migrate
>  	 */
> -	if (weight <= 1) {
> +	if (new_weight <= 1) {
>  		if (!task_current(rq, p))
>  			dequeue_pushable_task(rq, p);
>  		BUG_ON(!rq->rt.rt_nr_migratory);
> @@ -2129,6 +2184,12 @@ static void set_cpus_allowed_rt(struct task_struct *p,
>  	}
>  
>  	update_rt_migration(&rq->rt);
> +
> +out:
> +	if (direct_push)
> +		push_rt_tasks(rq);
> +	else if (preempt_push)
> +		resched_curr(rq);
>  }
>  
>  /* Assumes rq->lock is held */



* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-20 14:52   ` Steven Rostedt
@ 2015-04-20 17:20     ` Peter Zijlstra
  2015-04-20 17:48       ` Steven Rostedt
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-04-20 17:20 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Xunlei Pang, linux-kernel, Juri Lelli, Xunlei Pang

On Mon, Apr 20, 2015 at 10:52:28AM -0400, Steven Rostedt wrote:
> On Mon, 20 Apr 2015 16:22:47 +0800
> Xunlei Pang <xlpang@126.com> wrote:
> 
> >  static inline void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
> >  {
> >  }
> > @@ -1506,8 +1526,21 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
> >  	 * The previous task needs to be made eligible for pushing
> >  	 * if it is still active
> >  	 */
> > -	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
> > -		enqueue_pushable_task(rq, p);
> > +	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1) {
> > +		/*
> > +		 * put_prev_task_rt() is called by many functions,
> > +		 * pick_next_task_rt() is the only one may have
> > +		 * PREEMPT_ACTIVE set. So if detecting p(current
> > +		 * task) is preempted in such case, we should
> > +		 * enqueue it to the front of the pushable plist,
> > +		 * as there may be multiple tasks with the same
> > +		 * priority as p.
> 
> The above comment is very difficult to understand. Maybe something like:
> 
> 		/*
> 		 * When put_prev_task_rt() is called by
> 		 * pick_next_task_rt(), if PREEMPT_ACTIVE is set, it
> 		 * means that the current rt task is being preempted by
> 		 * a higher priority task. To maintain FIFO, it must
> 		 * stay ahead of any other task that is queued at the
> 		 * same priority.
> 		 */
> 
> -- Steve
> 
> > +		 */
> > +		if (preempt_count() & PREEMPT_ACTIVE)
> > +			enqueue_pushable_task_preempted(rq, p);
> > +		else
> > +			enqueue_pushable_task(rq, p);
> > +	}
> >  }

This looks wrong, what do you want to find? _any_ preemption? In that
case PREEMPT_ACTIVE is wrong. What you need to check is if the task is
still on the RQ or not.

If the task was put to sleep it got dequeued, if it was not dequeued, it
got preempted.

PREEMPT_ACTIVE is only ever set for forced kernel preemption, which is a
special sub case only ever triggered with CONFIG_PREEMPT=y.


* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-20 17:20     ` Peter Zijlstra
@ 2015-04-20 17:48       ` Steven Rostedt
  2015-04-20 23:45         ` Peter Zijlstra
       [not found]         ` <OFB1503F16.1E65406F-ON48257E2E.002B562D-48257E30.0008BFEB@zte.com.cn>
  0 siblings, 2 replies; 18+ messages in thread
From: Steven Rostedt @ 2015-04-20 17:48 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Xunlei Pang, linux-kernel, Juri Lelli, Xunlei Pang

On Mon, 20 Apr 2015 19:20:48 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> > > +		 */
> > > +		if (preempt_count() & PREEMPT_ACTIVE)
> > > +			enqueue_pushable_task_preempted(rq, p);
> > > +		else
> > > +			enqueue_pushable_task(rq, p);
> > > +	}
> > >  }
> 
> This looks wrong, what do you want to find? _any_ preemption? In that
> case PREEMPT_ACTIVE is wrong. What you need to check is if the task is
> still on the RQ or not.
> 
> If the task was put to sleep it got dequeued, if it was not dequeued, it
> got preempted.
> 
> PREEMPT_ACTIVE is only ever set for forced kernel preemption, which is a
> special sub case only ever triggered with CONFIG_PREEMPT=y.

Ah, you're right. I was thinking of just forced preemption, but, I
wasn't thinking about voluntary preemption (preemption points). We want
this behavior for that too (for kernel).

And yes, if we preempt in user space, this isn't enough either.

Actually, I think we only care if the state of the task is
TASK_RUNNING, if it is anything else, the task is probably going to
sleep anyway and we don't care about FIFO order then.

-- Steve
 


* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-20 17:48       ` Steven Rostedt
@ 2015-04-20 23:45         ` Peter Zijlstra
  2015-04-21 13:10           ` Steven Rostedt
       [not found]         ` <OFB1503F16.1E65406F-ON48257E2E.002B562D-48257E30.0008BFEB@zte.com.cn>
  1 sibling, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-04-20 23:45 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Xunlei Pang, linux-kernel, Juri Lelli, Xunlei Pang

On Mon, Apr 20, 2015 at 01:48:03PM -0400, Steven Rostedt wrote:
> On Mon, 20 Apr 2015 19:20:48 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > > > +		 */
> > > > +		if (preempt_count() & PREEMPT_ACTIVE)
> > > > +			enqueue_pushable_task_preempted(rq, p);
> > > > +		else
> > > > +			enqueue_pushable_task(rq, p);
> > > > +	}
> > > >  }
> > 
> > This looks wrong, what do you want to find? _any_ preemption? In that
> > case PREEMPT_ACTIVE is wrong. What you need to check is if the task is
> > still on the RQ or not.
> > 
> > If the task was put to sleep it got dequeued, if it was not dequeued, it
> > got preempted.
> > 
> > PREEMPT_ACTIVE is only ever set for forced kernel preemption, which is a
> > special sub case only ever triggered with CONFIG_PREEMPT=y.
> 
> Ah, you're right. I was thinking of just forced preemption, but, I
> wasn't thinking about voluntary preemption (preemption points). We want
> this behavior for that too (for kernel).
> 
> And yes, if we preempt in user space, this isn't enough either.
> 
> Actually, I think we only care if the state of the task is
> TASK_RUNNING, if it is anything else, the task is probably going to
> sleep anyway and we don't care about FIFO order then.

Please don't try and be clever there :-) Task state can be misleading,
you might get a wakeup before you're running again, in which case you
never went to sleep.

Please use task_on_rq_queued(p) like all other sites.
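
Just as an untested sketch, that suggestion would turn the hunk above
into something like:

	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1) {
		/*
		 * If p is still queued it did not block; it was
		 * preempted, so keep it ahead of other pushable
		 * tasks of the same prio.
		 */
		if (task_on_rq_queued(p))
			enqueue_pushable_task_preempted(rq, p);
		else
			enqueue_pushable_task(rq, p);
	}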


* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-20 23:45         ` Peter Zijlstra
@ 2015-04-21 13:10           ` Steven Rostedt
  0 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2015-04-21 13:10 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Xunlei Pang, linux-kernel, Juri Lelli, Xunlei Pang

On Tue, 21 Apr 2015 01:45:50 +0200
Peter Zijlstra <peterz@infradead.org> wrote:


> Please don't try and be clever there :-) Task state can be misleading,
> you might get a wakeup before you're running again, in which case you
> never went to sleep.

OK, point taken.

> 
> Please use task_on_rq_queued(p) like all other sites.

Agreed.

-- Steve


* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
       [not found]         ` <OFB1503F16.1E65406F-ON48257E2E.002B562D-48257E30.0008BFEB@zte.com.cn>
@ 2015-04-23  3:01           ` Steven Rostedt
       [not found]             ` <OFE3B71874.C9D81693-ON48257E30.0024F69F-48257E30.0025E2D3@zte.com.cn>
  0 siblings, 1 reply; 18+ messages in thread
From: Steven Rostedt @ 2015-04-23  3:01 UTC (permalink / raw)
  To: pang.xunlei; +Cc: Peter Zijlstra, Juri Lelli, linux-kernel, Xunlei Pang

On Thu, 23 Apr 2015 09:35:12 +0800
pang.xunlei@zte.com.cn wrote:

> Hi Steve, Peter,
> 
> Steven Rostedt <rostedt@goodmis.org> wrote 2015-04-21 AM 01:48:03:
> > On Mon, 20 Apr 2015 19:20:48 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> > 
> > > > > +       */
> > > > > +      if (preempt_count() & PREEMPT_ACTIVE)
> > > > > +         enqueue_pushable_task_preempted(rq, p);
> > > > > +      else
> > > > > +         enqueue_pushable_task(rq, p);
> > > > > +   }
> > > > >  }
> > > 
> > > This looks wrong, what do you want to find? _any_ preemption? In that
> > > case PREEMPT_ACTIVE is wrong. What you need to check is if the task is
> > > still on the RQ or not.
> > > 
> > > If the task was put to sleep it got dequeued, if it was not dequeued, 
> it
> > > got preempted.
> > > 
> > > PREEMPT_ACTIVE is only ever set for forced kernel preemption, which is 
> a
> > > special sub case only ever triggered with CONFIG_PREEMPT=y.
> > 
> > Ah, you're right. I was thinking of just forced preemption, but, I
> > wasn't thinking about voluntary preemption (preemption points). We want
> > this behavior for that too (for kernel).
> > 
> > And yes, if we preempt in user space, this isn't enough either.
> 
> Thanks, I understood this.
> 
> So, we can't rely on PREEMPT_ACTIVE to do the job.
> Even for forced kernel preemption, it will be problematic for 
> RR policy when running out of time slice in task_tick_rt(), 
> as it calls resched_curr() to do the reschedule.
> 
> So, I think we need to add a flag in task_struct and set it 
> properly when doing real preemption.
> 
> How about my unfinished patch below for this idea?
> 

Why not use Peter's idea: instead of checking PREEMPT_ACTIVE, just
check whether the task is still on the runqueue. If it scheduled out, it
took itself off the runqueue; if it was preempted by anything, it is
still on the run queue and, according to FIFO, it should still maintain
CPU ownership over other tasks of the same prio.

-- Steve



* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
       [not found]             ` <OFE3B71874.C9D81693-ON48257E30.0024F69F-48257E30.0025E2D3@zte.com.cn>
@ 2015-04-23 13:10               ` Steven Rostedt
  2015-04-24 18:32               ` Peter Zijlstra
  1 sibling, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2015-04-23 13:10 UTC (permalink / raw)
  To: pang.xunlei; +Cc: Juri Lelli, linux-kernel, Xunlei Pang, Peter Zijlstra

On Thu, 23 Apr 2015 14:53:27 +0800
pang.xunlei@zte.com.cn wrote:
 
> > Why not use Peter's idea: instead of checking PREEMPT_ACTIVE, just
> > check whether the task is still on the runqueue. If it scheduled out, it
> > took itself off the runqueue; if it was preempted by anything, it is
> > still on the run queue and, according to FIFO, it should still maintain
> > CPU ownership over other tasks of the same prio.
> > 
> 
> But for yield() or RR scheduling when running out of time slice,
> I think this would still be inappropriate; am I missing something?

Good point, I'll have to look into this a bit more.

-- Steve


* Re: Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
       [not found]             ` <OFE3B71874.C9D81693-ON48257E30.0024F69F-48257E30.0025E2D3@zte.com.cn>
  2015-04-23 13:10               ` Steven Rostedt
@ 2015-04-24 18:32               ` Peter Zijlstra
       [not found]                 ` <OF0B05B4FE.F40C0BE3-ON48257E32.004EF485-48257E32.00513DB3@zte.com.cn>
  1 sibling, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-04-24 18:32 UTC (permalink / raw)
  To: pang.xunlei; +Cc: Steven Rostedt, Juri Lelli, linux-kernel, Xunlei Pang

On Thu, Apr 23, 2015 at 02:53:27PM +0800, pang.xunlei@zte.com.cn wrote:
> But for yield() or RR scheduling when running out of time slice,
> I think this would still be inappropriate; am I missing something?

Those two have explicit hooks you can use to do the right queueing.

Look at yield_task_rt() and task_tick_rt(): both end up doing
requeue_task_rt(), which is exactly what you want, no?
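
From memory, the SCHED_RR time-slice path looks roughly like this
(abridged sketch, not a verbatim quote of kernel/sched/rt.c):

	static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
	{
		struct sched_rt_entity *rt_se = &p->rt;
		...
		/* FIFO tasks have no timeslices */
		if (p->policy != SCHED_RR)
			return;

		if (--p->rt.time_slice)
			return;

		p->rt.time_slice = sched_rr_timeslice;

		/* requeue at the tail behind other tasks of the same prio, if any */
		for_each_sched_rt_entity(rt_se) {
			if (rt_se->run_list.prev != rt_se->run_list.next) {
				requeue_task_rt(rq, p, 0);
				resched_curr(rq);
				return;
			}
		}
	}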



* Re: Re: Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
       [not found]                 ` <OF0B05B4FE.F40C0BE3-ON48257E32.004EF485-48257E32.00513DB3@zte.com.cn>
@ 2015-04-25 18:23                   ` Peter Zijlstra
       [not found]                     ` <OFD410EB1E.5A02675A-ON48257E33.00309252-48257E33.003641FF@zte.com.cn>
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-04-25 18:23 UTC (permalink / raw)
  To: pang.xunlei; +Cc: Juri Lelli, linux-kernel, Xunlei Pang, Steven Rostedt

On Sat, Apr 25, 2015 at 10:47:00PM +0800, pang.xunlei@zte.com.cn wrote:
> We want to do the operation in put_prev_task_rt(), and 
> put_prev_task_rt() has many call sites.
> 

So I've still no clue wtf you're trying to do, there's words in your
changelog but none of them seem to describe the actual problem nor the
proposed solution.




* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
       [not found]                     ` <OFD410EB1E.5A02675A-ON48257E33.00309252-48257E33.003641FF@zte.com.cn>
@ 2015-04-26 15:58                       ` Steven Rostedt
  2015-04-28 10:19                         ` Peter Zijlstra
  0 siblings, 1 reply; 18+ messages in thread
From: Steven Rostedt @ 2015-04-26 15:58 UTC (permalink / raw)
  To: pang.xunlei
  Cc: Peter Zijlstra, Juri Lelli, linux-kernel, linux-kernel-owner,
	Xunlei Pang

On Sun, 26 Apr 2015 17:52:16 +0800
pang.xunlei@zte.com.cn wrote:
 
> The problem I tried to describe here is:
> 
> We know there are two main per-cpu queues in the RT scheduler:
> the "run queue" and the "pushable queue".
> 
> For RT tasks, the scheduler uses a "plist" to manage the pushable
> queue, so when there are multiple tasks queued at the same priority,
> they are kept in FIFO order.
> 
> Currently, when an rt task gets queued, it is put at the head or the
> tail of its "run queue" depending on the scenario. Then, if it is
> migratable, it is also always put at the tail of its "pushable queue".
> 
> For one cpu, assume it initially has multiple migratable tasks queued
> at the same priority as current (RT), in both the "run queue" and the
> "pushable queue", in the same order. When current gets preempted, it
> will be put behind these tasks in the "pushable queue", while it still
> stays ahead of them in the "run queue". Afterwards, if there comes a
> pull from another cpu or a push from the local cpu, the task behind
> current in the "run queue" will be taken from the "pushable queue"
> and gets to run.
> 
> Obviously, to maintain the same order between the two queues, when
> current is preempted (not involving a re-queue in the "run queue"),
> we want to put it ahead of all tasks queued at the same priority in
> the "pushable queue".
> 
> -Xunlei


I think what Xunlei is trying to say, is that we don't currently keep
FIFO when preemption or migration is involved. If a task is currently
running, strict FIFO denotes that it should run ahead of all other
tasks queued at its priority or less until it decides to schedule out.
But the issue is, if it gets preempted or migrates, it gets placed
behind other tasks of the same priority as itself, but it never
voluntarily relinquished the CPU.

Thus, if it gets preempted by a higher priority task, it should at a
minimum be placed ahead of all other tasks of its priority or less to
run on the CPU again. If it gets migrated to another CPU, it should at
least be placed ahead of other tasks on that new CPU of the same
priority. Although, for the migration case, I'm not sure why it would
be migrated to a CPU where it couldn't run right away in the first
place, as the push/pull logic only migrates RT tasks that can run on
the new CPU. Unless, he's talking about a race where a new task just
got scheduled before it made it to the CPU? But that's a separate issue.

But at least for being preempted by a higher priority task, it should
be placed back ahead of the currently running tasks, unless it did a
yield or is RR and its time ran out.

I'm not sure why your solution with yield_task_rt() and task_tick_rt()
doesn't work. Maybe Xunlei is looking too deep into the solution.
Monday, I'll try to spend some time looking at the scheduler logic
there.

-- Steve


* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-26 15:58                       ` Steven Rostedt
@ 2015-04-28 10:19                         ` Peter Zijlstra
  2015-04-28 12:48                           ` Mike Galbraith
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-04-28 10:19 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: pang.xunlei, Juri Lelli, linux-kernel, linux-kernel-owner, Xunlei Pang

On Sun, Apr 26, 2015 at 11:58:51AM -0400, Steven Rostedt wrote:
> I think what Xunlei is trying to say, is that we don't currently keep
> FIFO when preemption or migration is involved. If a task is currently
> running, strict FIFO denotes that it should run ahead of all other
> tasks queued at its priority or less until it decides to schedule out.
> But the issue is, if it gets preempted or migrates, it gets placed
> behind other tasks of the same priority as itself, but it never
> voluntarily relinquished the CPU.

So 1) FIFO is only defined for UP, anything SMP is well outside of the
FIFO spec and therefore we cannot break it.

2) The 'head' of the queue only has meaning on UP; with SMP there are 'n'
heads, so which of those heads is the foremost head? That is, we've
already lost order, you cannot reconstruct it. This cannot be done without
first defining an order and then implementing that.

The Changelog is completely devoid of useful information.

> Thus, if it gets preempted by a higher priority task, it should at a
> minimum be placed ahead of all other tasks of its priority or less to
> run on the CPU again.

Which, with the current status is an impossibility with the exception of
SMP=n.

> If it gets migrated to another CPU, it should at
> least be placed ahead of other tasks on that new CPU of the same
> priority.

Who says this task is further 'ahead' than the head on the new CPU?

> Although, for the migration case, I'm not sure why it would
> be migrated to a CPU where it couldn't run right away in the first
> place, as the push/pull logic only migrates RT tasks that can run on
> the new CPU. Unless, he's talking about a race where a new task just
> got scheduled before it made it to the CPU? But that's a separate issue.

See, even you don't really know wtf he's wanting to do.

> But at least for being preempted by a higher priority task, it should
> be placed back ahead of the currently running tasks, unless it did a
> yield or is RR and its time ran out.

AFAICT this is already so. For RT we do not dequeue running tasks (CFS
does), so pick_next_task/put_prev_task does not change the location of a
task on the queues.

> I'm not sure why your solution with yield_task_rt() and task_tick_rt()
> doesn't work. Maybe Xunlei is looking too deep into the solution.
> Monday, I'll try to spend some time looking at the scheduler logic
> there.

No, have Xunlei go write a coherent problem statement, and for as long
as you don't understand it send him back to it.


* Re: [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases
  2015-04-28 10:19                         ` Peter Zijlstra
@ 2015-04-28 12:48                           ` Mike Galbraith
  0 siblings, 0 replies; 18+ messages in thread
From: Mike Galbraith @ 2015-04-28 12:48 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Steven Rostedt, pang.xunlei, Juri Lelli, linux-kernel,
	linux-kernel-owner, Xunlei Pang

On Tue, 2015-04-28 at 12:19 +0200, Peter Zijlstra wrote:
> On Sun, Apr 26, 2015 at 11:58:51AM -0400, Steven Rostedt wrote:
> > I think what Xunlei is trying to say, is that we don't currently keep
> > FIFO when preemption or migration is involved. If a task is currently
> > running, strict FIFO denotes that it should run ahead of all other
> > tasks queued at its priority or less until it decides to schedule out.
> > But the issue is, if it gets preempted or migrates, it gets placed
> > behind other tasks of the same priority as itself, but it never
> > voluntarily relinquished the CPU.
> 
> So 1) FIFO is only defined for UP, anything SMP is well outside of the
> FIFO spec and therefore we cannot break it.
> 
> 2) The 'head' of the queue only has meaning on UP; with SMP there are 'n'
> heads, so which of those heads is the foremost head? That is, we've
> already lost order, you cannot reconstruct it. This cannot be done without
> first defining an order and then implementing that.

Good luck with that :)  Trying to preserve run order across the box led
me to a seemingly _endless_ supply of deadlocks while piddling with
preemptible spinning locks in rt.  Maybe you can pull that off, me it
gave serious headaches.

	-Mike



Thread overview: 18+ messages
2015-04-20  8:22 [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Xunlei Pang
2015-04-20  8:22 ` [PATCH v6 2/3] sched/rt: Fix wrong SMP scheduler behavior for equal prio cases Xunlei Pang
2015-04-20 14:52   ` Steven Rostedt
2015-04-20 17:20     ` Peter Zijlstra
2015-04-20 17:48       ` Steven Rostedt
2015-04-20 23:45         ` Peter Zijlstra
2015-04-21 13:10           ` Steven Rostedt
     [not found]         ` <OFB1503F16.1E65406F-ON48257E2E.002B562D-48257E30.0008BFEB@zte.com.cn>
2015-04-23  3:01           ` Steven Rostedt
     [not found]             ` <OFE3B71874.C9D81693-ON48257E30.0024F69F-48257E30.0025E2D3@zte.com.cn>
2015-04-23 13:10               ` Steven Rostedt
2015-04-24 18:32               ` Peter Zijlstra
     [not found]                 ` <OF0B05B4FE.F40C0BE3-ON48257E32.004EF485-48257E32.00513DB3@zte.com.cn>
2015-04-25 18:23                   ` Peter Zijlstra
     [not found]                     ` <OFD410EB1E.5A02675A-ON48257E33.00309252-48257E33.003641FF@zte.com.cn>
2015-04-26 15:58                       ` Steven Rostedt
2015-04-28 10:19                         ` Peter Zijlstra
2015-04-28 12:48                           ` Mike Galbraith
2015-04-20  8:22 ` [PATCH v6 3/3] sched/rt: Check to push the task when changing its affinity Xunlei Pang
2015-04-20 16:06   ` Steven Rostedt
2015-04-20 14:42 ` [PATCH v6 1/3] lib/plist: Provide plist_add_head() for nodes with the same prio Steven Rostedt
2015-04-20 14:48 ` Steven Rostedt
