linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH v2]  sched/rt : return accurate release rq lock info
  2018-10-02 20:17 [PATCH v2] sched/rt : return accurate release rq lock info Peng Hao
@ 2018-10-02 15:18 ` Steven Rostedt
  0 siblings, 0 replies; 2+ messages in thread
From: Steven Rostedt @ 2018-10-02 15:18 UTC (permalink / raw)
  To: Peng Hao; +Cc: mingo, peterz, linux-kernel

On Wed, 3 Oct 2018 04:17:22 +0800
Peng Hao <peng.hao2@zte.com.cn> wrote:

> find_lock_lowest_rq() may or may not release the rq lock, but the
> caller currently cannot tell which happened. When the lock was never
> released, re-calling pick_next_pushable_task() is unnecessary.
> With CONFIG_PREEMPT=n, keeping the lock held is what frequently
> happens in a simple test case:
> four tasks of different rt priorities running on only two cpus.
> 
> Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
> ---
>  kernel/sched/rt.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 2e2955a..7c5382a 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1718,6 +1718,7 @@ static int find_lowest_rq(struct task_struct *task)
>  static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>  {
>  	struct rq *lowest_rq = NULL;
> +	bool release_lock = false;
>  	int tries;
>  	int cpu;
>  
> @@ -1741,6 +1742,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>  
>  		/* if the prio of this runqueue changed, try again */
>  		if (double_lock_balance(rq, lowest_rq)) {
> +			release_lock = true;
>  			/*
>  			 * We had to unlock the run queue. In
>  			 * the mean time, task could have
> @@ -1768,6 +1770,8 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>  		lowest_rq = NULL;
>  	}
>  
> +	if (!lowest_rq && release_lock)
> +		lowest_rq = RETRY_TASK;
>  	return lowest_rq;
>  }

Instead of adding the above boolean variable, wouldn't this work just
as well?

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 2e2955a8cf8f..b363ef70412a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1754,7 +1754,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 				     !task_on_rq_queued(task))) {
 
 				double_unlock_balance(rq, lowest_rq);
-				lowest_rq = NULL;
+				lowest_rq = RETRY_TASK;
 				break;
 			}
 		}

>  
> @@ -1830,7 +1834,7 @@ static int push_rt_task(struct rq *rq)
>  
>  	/* find_lock_lowest_rq locks the rq if found */
>  	lowest_rq = find_lock_lowest_rq(next_task, rq);
> -	if (!lowest_rq) {

Not a biggy, but I would add here:

	if (!lowest_rq)
		goto out;

And then add the below. It just flows better.

-- Steve

> +	if (lowest_rq == RETRY_TASK) {
>  		struct task_struct *task;
>  		/*
>  		 * find_lock_lowest_rq releases rq->lock
> @@ -1863,6 +1867,8 @@ static int push_rt_task(struct rq *rq)
>  		goto retry;
>  	}
>  
> +	if (!lowest_rq)
> +		goto out;
>  	deactivate_task(rq, next_task, 0);
>  	set_task_cpu(next_task, lowest_rq->cpu);
>  	activate_task(lowest_rq, next_task, 0);



* [PATCH v2]  sched/rt : return accurate release rq lock info
@ 2018-10-02 20:17 Peng Hao
  2018-10-02 15:18 ` Steven Rostedt
  0 siblings, 1 reply; 2+ messages in thread
From: Peng Hao @ 2018-10-02 20:17 UTC (permalink / raw)
  To: mingo, peterz; +Cc: linux-kernel, Peng Hao

find_lock_lowest_rq() may or may not release the rq lock, but the
caller currently cannot tell which happened. When the lock was never
released, re-calling pick_next_pushable_task() is unnecessary.
With CONFIG_PREEMPT=n, keeping the lock held is what frequently
happens in a simple test case:
four tasks of different rt priorities running on only two cpus.

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
---
 kernel/sched/rt.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 2e2955a..7c5382a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1718,6 +1718,7 @@ static int find_lowest_rq(struct task_struct *task)
 static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 {
 	struct rq *lowest_rq = NULL;
+	bool release_lock = false;
 	int tries;
 	int cpu;
 
@@ -1741,6 +1742,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 
 		/* if the prio of this runqueue changed, try again */
 		if (double_lock_balance(rq, lowest_rq)) {
+			release_lock = true;
 			/*
 			 * We had to unlock the run queue. In
 			 * the mean time, task could have
@@ -1768,6 +1770,8 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 		lowest_rq = NULL;
 	}
 
+	if (!lowest_rq && release_lock)
+		lowest_rq = RETRY_TASK;
 	return lowest_rq;
 }
 
@@ -1830,7 +1834,7 @@ static int push_rt_task(struct rq *rq)
 
 	/* find_lock_lowest_rq locks the rq if found */
 	lowest_rq = find_lock_lowest_rq(next_task, rq);
-	if (!lowest_rq) {
+	if (lowest_rq == RETRY_TASK) {
 		struct task_struct *task;
 		/*
 		 * find_lock_lowest_rq releases rq->lock
@@ -1863,6 +1867,8 @@ static int push_rt_task(struct rq *rq)
 		goto retry;
 	}
 
+	if (!lowest_rq)
+		goto out;
 	deactivate_task(rq, next_task, 0);
 	set_task_cpu(next_task, lowest_rq->cpu);
 	activate_task(lowest_rq, next_task, 0);
-- 
1.8.3.1


