* [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm
@ 2015-04-06  8:53 Wanpeng Li
  2015-04-06  8:53 ` [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
                   ` (7 more replies)
  0 siblings, 8 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-04-06  8:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li

pick_next_earliest_dl_task() is used by the pull algorithm to pick the
earliest pushable dl task from overloaded cpus. However, it traverses the
runqueue rbtree instead of the pushable-task rbtree, which is likewise
ordered by the tasks' deadlines. As a result, no candidate is found on an
overloaded cpu if all of its dl tasks are pinned. This patch fixes it by
traversing the pushable-task rbtree instead.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 kernel/sched/deadline.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5e95145..b57ceba 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1230,6 +1230,33 @@ next_node:
 	return NULL;
 }
 
+/*
+ * Return the earliest pushable task of @rq which is suitable to be
+ * executed on the given cpu, NULL otherwise
+ */
+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq,
+									int cpu)
+{
+	struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
+	struct task_struct *p = NULL;
+
+	if (!has_pushable_dl_tasks(rq))
+		return NULL;
+
+next_node:
+	if (next_node) {
+		p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
+
+		if (pick_dl_task(rq, p, cpu))
+			return p;
+
+		next_node = rb_next(next_node);
+		goto next_node;
+	}
+
+	return NULL;
+}
+
 static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
 
 static int find_later_rq(struct task_struct *task)
@@ -1514,7 +1541,7 @@ static int pull_dl_task(struct rq *this_rq)
 		if (src_rq->dl.dl_nr_running <= 1)
 			goto skip;
 
-		p = pick_next_earliest_dl_task(src_rq, this_cpu);
+		p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
 
 		/*
 		 * We found a task to be pulled if:
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
@ 2015-04-06  8:53 ` Wanpeng Li
  2015-04-07 13:48   ` Juri Lelli
  2015-04-06  8:53 ` [PATCH 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target Wanpeng Li
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-06  8:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li

It's a bootstrap function; make init_sched_dl_class() __init.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 kernel/sched/deadline.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b57ceba..3bd3158 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1686,7 +1686,7 @@ static void rq_offline_dl(struct rq *rq)
 	cpudl_clear_freecpu(&rq->rd->cpudl, rq->cpu);
 }
 
-void init_sched_dl_class(void)
+void __init init_sched_dl_class(void)
 {
 	unsigned int i;
 
-- 
1.9.1



* [PATCH 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
  2015-04-06  8:53 ` [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
@ 2015-04-06  8:53 ` Wanpeng Li
  2015-04-07 13:48   ` Juri Lelli
  2015-04-06  8:53 ` [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations Wanpeng Li
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-06  8:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li

This patch adds a check that prevents futile attempts to move dl tasks to
a CPU whose active tasks have an equal or earlier deadline. This mirrors
commit 80e3d87b2c55 ("sched/rt: Reduce rq lock contention by eliminating
locking of non-feasible target") for the rt class.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 kernel/sched/deadline.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3bd3158..b8b9355 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1012,7 +1012,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
 	    (p->nr_cpus_allowed > 1)) {
 		int target = find_later_rq(p);
 
-		if (target != -1)
+		if (target != -1 &&
+			dl_time_before(p->dl.deadline,
+				cpu_rq(target)->dl.earliest_dl.curr))
 			cpu = target;
 	}
 	rcu_read_unlock();
-- 
1.9.1



* [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
  2015-04-06  8:53 ` [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
  2015-04-06  8:53 ` [PATCH 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target Wanpeng Li
@ 2015-04-06  8:53 ` Wanpeng Li
  2015-04-20 10:27   ` Juri Lelli
  2015-04-06  8:53 ` [PATCH 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration Wanpeng Li
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-06  8:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li

pull_dl_task() can drop (and re-acquire) rq->lock; this means a stop task
can slip in, in which case we need to reschedule. This patch adds that
reschedule.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 kernel/sched/deadline.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b8b9355..844da0f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1739,7 +1739,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
 	if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
 		return;
 
-	if (pull_dl_task(rq))
+	/*
+	 * pull_dl_task() can drop (and re-acquire) rq->lock; this
+	 * means a stop task can slip in, in which case we need to
+	 * reschedule.
+	 */
+	if (pull_dl_task(rq) ||
+		(rq->stop && task_on_rq_queued(rq->stop)))
 		resched_curr(rq);
 }
 
@@ -1786,6 +1792,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 			pull_dl_task(rq);
 
 		/*
+		 * pull_dl_task() can drop (and re-acquire) rq->lock; this
+		 * means a stop task can slip in, in which case we need to
+		 * reschedule.
+		 */
+		if (rq->stop && task_on_rq_queued(rq->stop))
+			resched_curr(rq);
+
+		/*
 		 * If we now have a earlier deadline task than p,
 		 * then reschedule, provided p is still on this
 		 * runqueue.
-- 
1.9.1



* [PATCH 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
                   ` (2 preceding siblings ...)
  2015-04-06  8:53 ` [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations Wanpeng Li
@ 2015-04-06  8:53 ` Wanpeng Li
  2015-04-08  8:47   ` Juri Lelli
  2015-04-06  8:53 ` [PATCH 6/7] sched/deadline: depend on clearing throttled status in replenish_dl_entity Wanpeng Li
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-06  8:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li

There are two init_sched_dl_class() declarations; this patch drops the duplicate.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 kernel/sched/sched.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e0e1299..c9b2689 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1284,7 +1284,6 @@ extern void update_max_interval(void);
 extern void init_sched_dl_class(void);
 extern void init_sched_rt_class(void);
 extern void init_sched_fair_class(void);
-extern void init_sched_dl_class(void);
 
 extern void resched_curr(struct rq *rq);
 extern void resched_cpu(int cpu);
-- 
1.9.1



* [PATCH 6/7] sched/deadline: depend on clearing throttled status in replenish_dl_entity
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
                   ` (3 preceding siblings ...)
  2015-04-06  8:53 ` [PATCH 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration Wanpeng Li
@ 2015-04-06  8:53 ` Wanpeng Li
  2015-04-21  8:16   ` Juri Lelli
  2015-04-06  8:53 ` [PATCH 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations Wanpeng Li
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-06  8:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li

The natural place to clear ->dl_throttled is replenish_dl_entity(). The
task whose priority is being adjusted is the current task, so it will be
dequeued and then enqueued with replenishment, which guarantees that
->dl_throttled is cleared. This patch therefore drops the explicit
clearing of the throttled status in rt_mutex_setprio().

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 kernel/sched/core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 28b0d75..f1b9222 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3037,7 +3037,6 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
 		if (!dl_prio(p->normal_prio) ||
 		    (pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
 			p->dl.dl_boosted = 1;
-			p->dl.dl_throttled = 0;
 			enqueue_flag = ENQUEUE_REPLENISH;
 		} else
 			p->dl.dl_boosted = 0;
-- 
1.9.1



* [PATCH 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
                   ` (4 preceding siblings ...)
  2015-04-06  8:53 ` [PATCH 6/7] sched/deadline: depend on clearing throttled status in replenish_dl_entity Wanpeng Li
@ 2015-04-06  8:53 ` Wanpeng Li
  2015-04-14 23:10 ` [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
  2015-04-27  9:42 ` Wanpeng Li
  7 siblings, 0 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-04-06  8:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li

pull_rt_task() can drop (and re-acquire) rq->lock; this means a dl or stop
task can slip in, in which case we need to reschedule. This patch adds
that reschedule.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 kernel/sched/rt.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 575da76..cc973e8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2136,7 +2136,14 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
 		return;
 
-	if (pull_rt_task(rq))
+	/*
+	 * pull_rt_task() can drop (and re-acquire) rq->lock; this
+	 * means a dl or stop task can slip in, in which case we need
+	 * to reschedule.
+	 */
+	if (pull_rt_task(rq) ||
+		(unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
+			rq->dl.dl_nr_running)))
 		resched_curr(rq);
 }
 
@@ -2197,6 +2204,16 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 		 */
 		if (oldprio < p->prio)
 			pull_rt_task(rq);
+
+		/*
+		 * pull_rt_task() can drop (and re-acquire) rq->lock; this
+		 * means a dl or stop task can slip in, in which case we need
+		 * to reschedule.
+		 */
+		if (unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
+			rq->dl.dl_nr_running))
+			resched_curr(rq);
+
 		/*
 		 * If there's a higher priority task waiting to run
 		 * then reschedule. Note, the above pull_rt_task
-- 
1.9.1



* Re: [PATCH 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target
  2015-04-06  8:53 ` [PATCH 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target Wanpeng Li
@ 2015-04-07 13:48   ` Juri Lelli
  2015-04-07 22:57     ` Wanpeng Li
  0 siblings, 1 reply; 22+ messages in thread
From: Juri Lelli @ 2015-04-07 13:48 UTC (permalink / raw)
  To: Wanpeng Li, Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Hi,

On 06/04/2015 09:53, Wanpeng Li wrote:
> This patch adds a check that prevents futile attempts to move dl tasks to
> a CPU whose active tasks have an equal or earlier deadline. This mirrors
> commit 80e3d87b2c55 ("sched/rt: Reduce rq lock contention by eliminating
> locking of non-feasible target") for the rt class.
> 

That commit also introduced this kind of check in find_lock_lowest_rq().
Don't we need something like this in find_lock_later_rq() as well?

Thanks,

- Juri

> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
>  kernel/sched/deadline.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 3bd3158..b8b9355 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1012,7 +1012,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
>  	    (p->nr_cpus_allowed > 1)) {
>  		int target = find_later_rq(p);
>  
> -		if (target != -1)
> +		if (target != -1 &&
> +			dl_time_before(p->dl.deadline,
> +				cpu_rq(target)->dl.earliest_dl.curr))
>  			cpu = target;
>  	}
>  	rcu_read_unlock();
> 



* Re: [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init
  2015-04-06  8:53 ` [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
@ 2015-04-07 13:48   ` Juri Lelli
  2015-04-07 23:00     ` Wanpeng Li
  0 siblings, 1 reply; 22+ messages in thread
From: Juri Lelli @ 2015-04-07 13:48 UTC (permalink / raw)
  To: Wanpeng Li, Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Hi,

On 06/04/2015 09:53, Wanpeng Li wrote:
> It's a bootstrap function, make init_sched_dl_class() __init.
> 

Looks good, thanks!

Best,

- Juri

> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
>  kernel/sched/deadline.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index b57ceba..3bd3158 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1686,7 +1686,7 @@ static void rq_offline_dl(struct rq *rq)
>  	cpudl_clear_freecpu(&rq->rd->cpudl, rq->cpu);
>  }
>  
> -void init_sched_dl_class(void)
> +void __init init_sched_dl_class(void)
>  {
>  	unsigned int i;
>  
> 



* Re: [PATCH 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target
  2015-04-07 13:48   ` Juri Lelli
@ 2015-04-07 22:57     ` Wanpeng Li
  0 siblings, 0 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-04-07 22:57 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

Hi Juri,
On Tue, Apr 07, 2015 at 02:48:12PM +0100, Juri Lelli wrote:
>Hi,
>
>On 06/04/2015 09:53, Wanpeng Li wrote:
>> This patch adds a check that prevents futile attempts to move dl tasks to
>> a CPU whose active tasks have an equal or earlier deadline. This mirrors
>> commit 80e3d87b2c55 ("sched/rt: Reduce rq lock contention by eliminating
>> locking of non-feasible target") for the rt class.
>> 
>
>That commit also introduced this kind of check in find_lock_lowest_rq().
>Don't we need something like this in find_lock_later_rq() as well?
>

Good point, will do. Great thanks for your review. :)

Regards,
Wanpeng Li 

>Thanks,
>
>- Juri
>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>>  kernel/sched/deadline.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>> 
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index 3bd3158..b8b9355 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -1012,7 +1012,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
>>  	    (p->nr_cpus_allowed > 1)) {
>>  		int target = find_later_rq(p);
>>  
>> -		if (target != -1)
>> +		if (target != -1 &&
>> +			dl_time_before(p->dl.deadline,
>> +				cpu_rq(target)->dl.earliest_dl.curr))
>>  			cpu = target;
>>  	}
>>  	rcu_read_unlock();
>> 


* Re: [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init
  2015-04-07 13:48   ` Juri Lelli
@ 2015-04-07 23:00     ` Wanpeng Li
  0 siblings, 0 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-04-07 23:00 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

On Tue, Apr 07, 2015 at 02:48:23PM +0100, Juri Lelli wrote:
>Hi,
>
>On 06/04/2015 09:53, Wanpeng Li wrote:
>> It's a bootstrap function, make init_sched_dl_class() __init.
>> 
>
>Looks good, thanks!
>

I will add your Acked-by to this one in the next version; if I misunderstood
you, please let me know. Btw, how about the other patches? :)

Regards,
Wanpeng Li 

>Best,
>
>- Juri
>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>>  kernel/sched/deadline.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index b57ceba..3bd3158 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -1686,7 +1686,7 @@ static void rq_offline_dl(struct rq *rq)
>>  	cpudl_clear_freecpu(&rq->rd->cpudl, rq->cpu);
>>  }
>>  
>> -void init_sched_dl_class(void)
>> +void __init init_sched_dl_class(void)
>>  {
>>  	unsigned int i;
>>  
>> 


* Re: [PATCH 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration
  2015-04-08  8:47   ` Juri Lelli
@ 2015-04-08  8:33     ` Wanpeng Li
  0 siblings, 0 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-04-08  8:33 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

Hi Juri,
On Wed, Apr 08, 2015 at 09:47:33AM +0100, Juri Lelli wrote:
>Hi,
>
>On 06/04/2015 09:53, Wanpeng Li wrote:
>> There are two init_sched_dl_class() declarations; this patch drops the duplicate.
>> 
>
>I guess the changelog needs to be trimmed. Apart from this, the

Will do in v2, thanks for your review. :)

Regards,
Wanpeng Li 

>patch looks of course good, thanks!
>
>Best,
>
>- Juri
>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>>  kernel/sched/sched.h | 1 -
>>  1 file changed, 1 deletion(-)
>> 
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index e0e1299..c9b2689 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -1284,7 +1284,6 @@ extern void update_max_interval(void);
>>  extern void init_sched_dl_class(void);
>>  extern void init_sched_rt_class(void);
>>  extern void init_sched_fair_class(void);
>> -extern void init_sched_dl_class(void);
>>  
>>  extern void resched_curr(struct rq *rq);
>>  extern void resched_cpu(int cpu);
>> 


* Re: [PATCH 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration
  2015-04-06  8:53 ` [PATCH 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration Wanpeng Li
@ 2015-04-08  8:47   ` Juri Lelli
  2015-04-08  8:33     ` Wanpeng Li
  0 siblings, 1 reply; 22+ messages in thread
From: Juri Lelli @ 2015-04-08  8:47 UTC (permalink / raw)
  To: Wanpeng Li, Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Hi,

On 06/04/2015 09:53, Wanpeng Li wrote:
> There are two init_sched_dl_class() declarations; this patch drops the duplicate.
> 

I guess the changelog needs to be trimmed. Apart from this, the
patch looks of course good, thanks!

Best,

- Juri

> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
>  kernel/sched/sched.h | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index e0e1299..c9b2689 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1284,7 +1284,6 @@ extern void update_max_interval(void);
>  extern void init_sched_dl_class(void);
>  extern void init_sched_rt_class(void);
>  extern void init_sched_fair_class(void);
> -extern void init_sched_dl_class(void);
>  
>  extern void resched_curr(struct rq *rq);
>  extern void resched_cpu(int cpu);
> 



* Re: [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
                   ` (5 preceding siblings ...)
  2015-04-06  8:53 ` [PATCH 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations Wanpeng Li
@ 2015-04-14 23:10 ` Wanpeng Li
  2015-04-27  9:42 ` Wanpeng Li
  7 siblings, 0 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-04-14 23:10 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

Ping Juri for this patchset. :)
On Mon, Apr 06, 2015 at 04:53:13PM +0800, Wanpeng Li wrote:
>pick_next_earliest_dl_task() is used by the pull algorithm to pick the
>earliest pushable dl task from overloaded cpus. However, it traverses the
>runqueue rbtree instead of the pushable-task rbtree, which is likewise
>ordered by the tasks' deadlines. As a result, no candidate is found on an
>overloaded cpu if all of its dl tasks are pinned. This patch fixes it by
>traversing the pushable-task rbtree instead.
>
>Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>---
> kernel/sched/deadline.c | 29 ++++++++++++++++++++++++++++-
> 1 file changed, 28 insertions(+), 1 deletion(-)
>
>diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>index 5e95145..b57ceba 100644
>--- a/kernel/sched/deadline.c
>+++ b/kernel/sched/deadline.c
>@@ -1230,6 +1230,33 @@ next_node:
> 	return NULL;
> }
> 
>+/*
>+ * Return the earliest pushable task of @rq which is suitable to be
>+ * executed on the given cpu, NULL otherwise
>+ */
>+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq,
>+									int cpu)
>+{
>+	struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
>+	struct task_struct *p = NULL;
>+
>+	if (!has_pushable_dl_tasks(rq))
>+		return NULL;
>+
>+next_node:
>+	if (next_node) {
>+		p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
>+
>+		if (pick_dl_task(rq, p, cpu))
>+			return p;
>+
>+		next_node = rb_next(next_node);
>+		goto next_node;
>+	}
>+
>+	return NULL;
>+}
>+
> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
> 
> static int find_later_rq(struct task_struct *task)
>@@ -1514,7 +1541,7 @@ static int pull_dl_task(struct rq *this_rq)
> 		if (src_rq->dl.dl_nr_running <= 1)
> 			goto skip;
> 
>-		p = pick_next_earliest_dl_task(src_rq, this_cpu);
>+		p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
> 
> 		/*
> 		 * We found a task to be pulled if:
>-- 
>1.9.1


* Re: [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations
  2015-04-06  8:53 ` [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations Wanpeng Li
@ 2015-04-20 10:27   ` Juri Lelli
  2015-04-20 22:59     ` Wanpeng Li
  0 siblings, 1 reply; 22+ messages in thread
From: Juri Lelli @ 2015-04-20 10:27 UTC (permalink / raw)
  To: Wanpeng Li, Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Hi,

On 06/04/2015 09:53, Wanpeng Li wrote:
> pull_dl_task() can drop (and re-acquire) rq->lock; this means a stop task
> can slip in, in which case we need to reschedule. This patch adds that
> reschedule.
> 

Ok, I guess it can happen. Doesn't RT have the same problem? It seems that
it also has to deal with DL tasks slipping in, right?

Thanks,

- Juri

> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
>  kernel/sched/deadline.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index b8b9355..844da0f 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1739,7 +1739,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
>  	if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
>  		return;
>  
> -	if (pull_dl_task(rq))
> +	/*
> +	 * pull_dl_task() can drop (and re-acquire) rq->lock; this
> +	 * means a stop task can slip in, in which case we need to
> +	 * reschedule.
> +	 */
> +	if (pull_dl_task(rq) ||
> +		(rq->stop && task_on_rq_queued(rq->stop)))
>  		resched_curr(rq);
>  }
>  
> @@ -1786,6 +1792,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
>  			pull_dl_task(rq);
>  
>  		/*
> +		 * pull_dl_task() can drop (and re-acquire) rq->lock; this
> +		 * means a stop task can slip in, in which case we need to
> +		 * reschedule.
> +		 */
> +		if (rq->stop && task_on_rq_queued(rq->stop))
> +			resched_curr(rq);
> +
> +		/*
>  		 * If we now have a earlier deadline task than p,
>  		 * then reschedule, provided p is still on this
>  		 * runqueue.
> 



* Re: [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations
  2015-04-20 10:27   ` Juri Lelli
@ 2015-04-20 22:59     ` Wanpeng Li
  2015-04-20 23:02       ` Wanpeng Li
  0 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-20 22:59 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

Hi Juri,
On Mon, Apr 20, 2015 at 11:27:22AM +0100, Juri Lelli wrote:
>Hi,
>
>On 06/04/2015 09:53, Wanpeng Li wrote:
>> pull_dl_task() can drop (and re-acquire) rq->lock; this means a stop task
>> can slip in, in which case we need to reschedule. This patch adds that
>> reschedule.
>> 
>
>Ok, I guess it can happen. Doesn't RT have the same problem? It seems that
>it also has to deal with DL tasks slipping in, right?

Yeah, I will send another patch to handle RT class in the v2 patchset. :)

Regards,
Wanpeng Li 

>
>Thanks,
>
>- Juri
>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>>  kernel/sched/deadline.c | 16 +++++++++++++++-
>>  1 file changed, 15 insertions(+), 1 deletion(-)
>> 
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index b8b9355..844da0f 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -1739,7 +1739,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
>>  	if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
>>  		return;
>>  
>> -	if (pull_dl_task(rq))
>> +	/*
>> +	 * pull_dl_task() can drop (and re-acquire) rq->lock; this
>> +	 * means a stop task can slip in, in which case we need to
>> +	 * reschedule.
>> +	 */
>> +	if (pull_dl_task(rq) ||
>> +		(rq->stop && task_on_rq_queued(rq->stop)))
>>  		resched_curr(rq);
>>  }
>>  
>> @@ -1786,6 +1792,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
>>  			pull_dl_task(rq);
>>  
>>  		/*
>> +		 * pull_dl_task() can drop (and re-acquire) rq->lock; this
>> +		 * means a stop task can slip in, in which case we need to
>> +		 * reschedule.
>> +		 */
>> +		if (rq->stop && task_on_rq_queued(rq->stop))
>> +			resched_curr(rq);
>> +
>> +		/*
>>  		 * If we now have a earlier deadline task than p,
>>  		 * then reschedule, provided p is still on this
>>  		 * runqueue.
>> 


* Re: [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations
  2015-04-20 22:59     ` Wanpeng Li
@ 2015-04-20 23:02       ` Wanpeng Li
  2015-04-21  8:22         ` Juri Lelli
  0 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-20 23:02 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

On Tue, Apr 21, 2015 at 06:59:02AM +0800, Wanpeng Li wrote:
>Hi Juri,
>On Mon, Apr 20, 2015 at 11:27:22AM +0100, Juri Lelli wrote:
>>Hi,
>>
>>On 06/04/2015 09:53, Wanpeng Li wrote:
>>> pull_dl_task() can drop (and re-acquire) rq->lock; this means a stop task
>>> can slip in, in which case we need to reschedule. This patch adds that
>>> reschedule.
>>> 
>>
>>Ok, I guess it can happen. Doesn't RT have the same problem? It seems that
>>it also has to deal with DL tasks slipping in, right?
>
>Yeah, I will send another patch to handle RT class in the v2 patchset. :)

Oh, I just realized I have already done that in patch 7/7.

Regards,
Wanpeng Li 

>
>Regards,
>Wanpeng Li 
>
>>
>>Thanks,
>>
>>- Juri
>>
>>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>>> ---
>>>  kernel/sched/deadline.c | 16 +++++++++++++++-
>>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>> 
>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>> index b8b9355..844da0f 100644
>>> --- a/kernel/sched/deadline.c
>>> +++ b/kernel/sched/deadline.c
>>> @@ -1739,7 +1739,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
>>>  	if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
>>>  		return;
>>>  
>>> -	if (pull_dl_task(rq))
>>> +	/*
>>> +	 * pull_dl_task() can drop (and re-acquire) rq->lock; this
>>> +	 * means a stop task can slip in, in which case we need to
>>> +	 * reschedule.
>>> +	 */
>>> +	if (pull_dl_task(rq) ||
>>> +		(rq->stop && task_on_rq_queued(rq->stop)))
>>>  		resched_curr(rq);
>>>  }
>>>  
>>> @@ -1786,6 +1792,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
>>>  			pull_dl_task(rq);
>>>  
>>>  		/*
>>> +		 * pull_dl_task() can drop (and re-acquire) rq->lock; this
>>> +		 * means a stop task can slip in, in which case we need to
>>> +		 * reschedule.
>>> +		 */
>>> +		if (rq->stop && task_on_rq_queued(rq->stop))
>>> +			resched_curr(rq);
>>> +
>>> +		/*
>>>  		 * If we now have a earlier deadline task than p,
>>>  		 * then reschedule, provided p is still on this
>>>  		 * runqueue.
>>> 


* Re: [PATCH 6/7] sched/deadline: depend on clearing throttled status in replenish_dl_entity
  2015-04-21  8:16   ` Juri Lelli
@ 2015-04-21  8:10     ` Wanpeng Li
  0 siblings, 0 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-04-21  8:10 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Juri Lelli

Hi Juri,
On Tue, Apr 21, 2015 at 09:16:47AM +0100, Juri Lelli wrote:
>Hi,
>
>On 06/04/2015 09:53, Wanpeng Li wrote:
>> The natural place to clear ->dl_throttled is replenish_dl_entity(). The
>> task whose priority is being adjusted is the current task, so it will be
>> dequeued and then enqueued with replenishment, which guarantees that
>> ->dl_throttled is cleared. This patch therefore drops the explicit
>> clearing of the throttled status in rt_mutex_setprio().
>> 
>
>Patch looks good. But, I'd slightly change subject and changelog. Something like
>this, maybe?
>
>sched/core: remove superfluous resetting of dl_throttled flag
>
>Resetting dl_throttled flag in rt_mutex_setprio (for a task that is going
>to be boosted) is superfluous, as the natural place to do so is in
>replenish_dl_entity(). If the task was on the runqueue and it is boosted
>by a DL task, it will be enqueued back with ENQUEUE_REPLENISH flag set,
>which can guarantee that dl_throttled is reset in replenish_dl_entity().
>
>This patch drops the resetting of throttled status in function
>rt_mutex_setprio().

Cool, many thanks for your review. :)

Regards,
Wanpeng Li 

>
>Thanks,
>
>- Juri
>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>>  kernel/sched/core.c | 1 -
>>  1 file changed, 1 deletion(-)
>> 
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 28b0d75..f1b9222 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -3037,7 +3037,6 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
>>  		if (!dl_prio(p->normal_prio) ||
>>  		    (pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
>>  			p->dl.dl_boosted = 1;
>> -			p->dl.dl_throttled = 0;
>>  			enqueue_flag = ENQUEUE_REPLENISH;
>>  		} else
>>  			p->dl.dl_boosted = 0;
>> 


* Re: [PATCH 6/7] sched/deadline: depend on clearing throttled status in replenish_dl_entity
  2015-04-06  8:53 ` [PATCH 6/7] sched/deadline: depend on clearing throttled status in replenish_dl_entity Wanpeng Li
@ 2015-04-21  8:16   ` Juri Lelli
  2015-04-21  8:10     ` Wanpeng Li
  0 siblings, 1 reply; 22+ messages in thread
From: Juri Lelli @ 2015-04-21  8:16 UTC (permalink / raw)
  To: Wanpeng Li, Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

Hi,

On 06/04/2015 09:53, Wanpeng Li wrote:
> Since the natural place to clear ->dl_throttled is in replenish_dl_entity(),
> and the task whose priority is being adjusted is the current task, it will be
> dequeued and then enqueued with replenishment, which guarantees that
> ->dl_throttled is cleared; this patch drops the clearing of the throttled
> status in rt_mutex_setprio().
> 

Patch looks good. But, I'd slightly change subject and changelog. Something like
this, maybe?

sched/core: remove superfluous resetting of dl_throttled flag

Resetting dl_throttled flag in rt_mutex_setprio (for a task that is going
to be boosted) is superfluous, as the natural place to do so is in
replenish_dl_entity(). If the task was on the runqueue and it is boosted
by a DL task, it will be enqueued back with ENQUEUE_REPLENISH flag set,
which can guarantee that dl_throttled is reset in replenish_dl_entity().

This patch drops the resetting of throttled status in function
rt_mutex_setprio().

Thanks,

- Juri

> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
>  kernel/sched/core.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 28b0d75..f1b9222 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3037,7 +3037,6 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
>  		if (!dl_prio(p->normal_prio) ||
>  		    (pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
>  			p->dl.dl_boosted = 1;
> -			p->dl.dl_throttled = 0;
>  			enqueue_flag = ENQUEUE_REPLENISH;
>  		} else
>  			p->dl.dl_boosted = 0;
> 



* Re: [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations
  2015-04-20 23:02       ` Wanpeng Li
@ 2015-04-21  8:22         ` Juri Lelli
  0 siblings, 0 replies; 22+ messages in thread
From: Juri Lelli @ 2015-04-21  8:22 UTC (permalink / raw)
  To: Wanpeng Li; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On 21/04/15 00:02, Wanpeng Li wrote:
> On Tue, Apr 21, 2015 at 06:59:02AM +0800, Wanpeng Li wrote:
>> Hi Juri,
>> On Mon, Apr 20, 2015 at 11:27:22AM +0100, Juri Lelli wrote:
>>> Hi,
>>>
>>> On 06/04/2015 09:53, Wanpeng Li wrote:
>>>> pull_dl_task() can drop (and re-acquire) rq->lock; this means a stop task
>>>> can slip in, in which case we need to reschedule. This patch adds the
>>>> reschedule when this scenario occurs.
>>>>
>>>
>>> Ok, I guess it can happen. Doesn't RT have the same problem? It seems that
>>> it also has to deal with DL tasks slipping in, right?
>>
>> Yeah, I will send another patch to handle RT class in the v2 patchset. :)
> 
> Oh, I just realized I have already done that in patch 7/7.
>

Oh, right! That one ended up in another bucket. Sorry about that.

Best,

- Juri

> Regards,
> Wanpeng Li 
> 
>>
>> Regards,
>> Wanpeng Li 
>>
>>>
>>> Thanks,
>>>
>>> - Juri
>>>
>>>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>>>> ---
>>>>  kernel/sched/deadline.c | 16 +++++++++++++++-
>>>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>>> index b8b9355..844da0f 100644
>>>> --- a/kernel/sched/deadline.c
>>>> +++ b/kernel/sched/deadline.c
>>>> @@ -1739,7 +1739,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
>>>>  	if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
>>>>  		return;
>>>>  
>>>> -	if (pull_dl_task(rq))
>>>> +	/*
>>>> +	 * pull_dl_task() can drop (and re-acquire) rq->lock; this
>>>> +	 * means a stop task can slip in, in which case we need to
>>>> +	 * reschedule.
>>>> +	 */
>>>> +	if (pull_dl_task(rq) ||
>>>> +		(rq->stop && task_on_rq_queued(rq->stop)))
>>>>  		resched_curr(rq);
>>>>  }
>>>>  
>>>> @@ -1786,6 +1792,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
>>>>  			pull_dl_task(rq);
>>>>  
>>>>  		/*
>>>> +		 * pull_dl_task() can drop (and re-acquire) rq->lock; this
>>>> +		 * means a stop task can slip in, in which case we need to
>>>> +		 * reschedule.
>>>> +		 */
>>>> +		if (rq->stop && task_on_rq_queued(rq->stop))
>>>> +			resched_curr(rq);
>>>> +
>>>> +		/*
>>>>  		 * If we now have a earlier deadline task than p,
>>>>  		 * then reschedule, provided p is still on this
>>>>  		 * runqueue.
>>>>
> 



* Re: [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm
  2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
                   ` (6 preceding siblings ...)
  2015-04-14 23:10 ` [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
@ 2015-04-27  9:42 ` Wanpeng Li
  2015-05-06  7:11   ` Wanpeng Li
  7 siblings, 1 reply; 22+ messages in thread
From: Wanpeng Li @ 2015-04-27  9:42 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

How about patch 1/7? :)
On Mon, Apr 06, 2015 at 04:53:13PM +0800, Wanpeng Li wrote:
>Function pick_next_earliest_dl_task() is used to pick the earliest pushable
>dl task from overloaded cpus in the pull algorithm; however, it traverses the
>runqueue rbtree instead of the pushable-task rbtree, even though the latter
>is also ordered by tasks' deadlines. This results in getting no candidates
>from overloaded cpus if all the dl tasks on those cpus are pinned. This
>patch fixes it by traversing the pushable-task rbtree instead.
>
>Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>---
> kernel/sched/deadline.c | 29 ++++++++++++++++++++++++++++-
> 1 file changed, 28 insertions(+), 1 deletion(-)
>
>diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>index 5e95145..b57ceba 100644
>--- a/kernel/sched/deadline.c
>+++ b/kernel/sched/deadline.c
>@@ -1230,6 +1230,33 @@ next_node:
> 	return NULL;
> }
> 
>+/*
>+ * Return the earliest pushable rq's task, which is suitable to be executed
>+ * on the cpu, NULL otherwise
>+ */
>+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq,
>+									int cpu)
>+{
>+	struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
>+	struct task_struct *p = NULL;
>+
>+	if (!has_pushable_dl_tasks(rq))
>+		return NULL;
>+
>+next_node:
>+	if (next_node) {
>+		p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
>+
>+		if (pick_dl_task(rq, p, cpu))
>+			return p;
>+
>+		next_node = rb_next(next_node);
>+		goto next_node;
>+	}
>+
>+	return NULL;
>+}
>+
> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
> 
> static int find_later_rq(struct task_struct *task)
>@@ -1514,7 +1541,7 @@ static int pull_dl_task(struct rq *this_rq)
> 		if (src_rq->dl.dl_nr_running <= 1)
> 			goto skip;
> 
>-		p = pick_next_earliest_dl_task(src_rq, this_cpu);
>+		p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
> 
> 		/*
> 		 * We found a task to be pulled if:
>-- 
>1.9.1


* Re: [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm
  2015-04-27  9:42 ` Wanpeng Li
@ 2015-05-06  7:11   ` Wanpeng Li
  0 siblings, 0 replies; 22+ messages in thread
From: Wanpeng Li @ 2015-05-06  7:11 UTC (permalink / raw)
  To: Juri Lelli; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Wanpeng Li

Ping, :)
On Mon, Apr 27, 2015 at 05:42:38PM +0800, Wanpeng Li wrote:
>How about patch 1/7? :)
>On Mon, Apr 06, 2015 at 04:53:13PM +0800, Wanpeng Li wrote:
>>Function pick_next_earliest_dl_task() is used to pick the earliest pushable
>>dl task from overloaded cpus in the pull algorithm; however, it traverses the
>>runqueue rbtree instead of the pushable-task rbtree, even though the latter
>>is also ordered by tasks' deadlines. This results in getting no candidates
>>from overloaded cpus if all the dl tasks on those cpus are pinned. This
>>patch fixes it by traversing the pushable-task rbtree instead.
>>
>>Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>>---
>> kernel/sched/deadline.c | 29 ++++++++++++++++++++++++++++-
>> 1 file changed, 28 insertions(+), 1 deletion(-)
>>
>>diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>index 5e95145..b57ceba 100644
>>--- a/kernel/sched/deadline.c
>>+++ b/kernel/sched/deadline.c
>>@@ -1230,6 +1230,33 @@ next_node:
>> 	return NULL;
>> }
>> 
>>+/*
>>+ * Return the earliest pushable rq's task, which is suitable to be executed
>>+ * on the cpu, NULL otherwise
>>+ */
>>+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq,
>>+									int cpu)
>>+{
>>+	struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
>>+	struct task_struct *p = NULL;
>>+
>>+	if (!has_pushable_dl_tasks(rq))
>>+		return NULL;
>>+
>>+next_node:
>>+	if (next_node) {
>>+		p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
>>+
>>+		if (pick_dl_task(rq, p, cpu))
>>+			return p;
>>+
>>+		next_node = rb_next(next_node);
>>+		goto next_node;
>>+	}
>>+
>>+	return NULL;
>>+}
>>+
>> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
>> 
>> static int find_later_rq(struct task_struct *task)
>>@@ -1514,7 +1541,7 @@ static int pull_dl_task(struct rq *this_rq)
>> 		if (src_rq->dl.dl_nr_running <= 1)
>> 			goto skip;
>> 
>>-		p = pick_next_earliest_dl_task(src_rq, this_cpu);
>>+		p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
>> 
>> 		/*
>> 		 * We found a task to be pulled if:
>>-- 
>>1.9.1



Thread overview: 22+ messages
2015-04-06  8:53 [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
2015-04-06  8:53 ` [PATCH 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
2015-04-07 13:48   ` Juri Lelli
2015-04-07 23:00     ` Wanpeng Li
2015-04-06  8:53 ` [PATCH 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target Wanpeng Li
2015-04-07 13:48   ` Juri Lelli
2015-04-07 22:57     ` Wanpeng Li
2015-04-06  8:53 ` [PATCH 4/7] sched/deadline: reschedule if stop task slip in after pull operations Wanpeng Li
2015-04-20 10:27   ` Juri Lelli
2015-04-20 22:59     ` Wanpeng Li
2015-04-20 23:02       ` Wanpeng Li
2015-04-21  8:22         ` Juri Lelli
2015-04-06  8:53 ` [PATCH 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration Wanpeng Li
2015-04-08  8:47   ` Juri Lelli
2015-04-08  8:33     ` Wanpeng Li
2015-04-06  8:53 ` [PATCH 6/7] sched/deadline: depend on clearing throttled status in replenish_dl_entity Wanpeng Li
2015-04-21  8:16   ` Juri Lelli
2015-04-21  8:10     ` Wanpeng Li
2015-04-06  8:53 ` [PATCH 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations Wanpeng Li
2015-04-14 23:10 ` [PATCH 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
2015-04-27  9:42 ` Wanpeng Li
2015-05-06  7:11   ` Wanpeng Li
