* [V3 0/3] Fix dump_backtrace() when task is running on another CPU
From: Wang Qing @ 2020-04-14 12:36 UTC
  To: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, James Morse, Mark Rutland,
	Eric W. Biederman, Thomas Gleixner, jinho lim, Wang Qing,
	Dave Martin, linux-arm-kernel, linux-kernel
  Cc: opensource.kernel

We cannot get the FP and PC when the task is running on another CPU:
task->thread.cpu_context only holds the state saved the last time the
task was switched out. It is better to print a reminder than to provide
wrong information, for example when the task is stuck in a spinlock,
an interrupt, or a busy loop.
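
For reference, a simplified view of the saved context in question
(abridged here from arm64's struct cpu_context in
arch/arm64/include/asm/processor.h):

	struct cpu_context {
		unsigned long x19, x20, x21, x22, x23,
			      x24, x25, x26, x27, x28; /* callee-saved GPRs */
		unsigned long fp;	/* frame pointer at the last switch-out */
		unsigned long sp;
		unsigned long pc;	/* where __switch_to() will resume */
	};

While the task keeps running elsewhere, these fields never move, so an
unwind seeded from them describes the past, not the present.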

task_running() should be renamed to task_running_on_rq(), following
the naming of task_running_on_cpu(); that is what it originally
meant.

Add a new task_running() that requires no rq.
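
A hypothetical side-by-side of the two predicates after this series
(illustrative only; the names come from patches 2 and 3):

	/* Old name, new spelling: needs the runqueue; true if p is
	 * rq's current task (UP) or p->on_cpu is set (SMP). */
	task_running_on_rq(rq, p);

	/* New helper: no rq required; just READ_ONCE(p->on_cpu) on SMP. */
	task_running(p);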

V2:
- Added task_running_oncpu()
V3:
- Renamed task_running() to task_running_on_rq()
- Renamed task_running_oncpu() to task_running()

Wang Qing (3):
  arm64: fix dump_backtrace() when task is running on another CPU
  sched: add task_running()
  sched: rename task_running() to task_running_on_rq()

 arch/arm64/kernel/traps.c |  7 +++++++
 include/linux/sched.h     | 10 ++++++++++
 kernel/sched/core.c       | 14 +++++++-------
 kernel/sched/deadline.c   |  6 +++---
 kernel/sched/fair.c       |  2 +-
 kernel/sched/rt.c         |  6 +++---
 kernel/sched/sched.h      |  2 +-
 7 files changed, 32 insertions(+), 15 deletions(-)

-- 
2.7.4


* [V3 1/3] arm64: fix dump_backtrace() when task is running on another CPU
From: Wang Qing @ 2020-04-14 12:36 UTC
  To: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, James Morse, Mark Rutland,
	Eric W. Biederman, Thomas Gleixner, jinho lim, Wang Qing,
	Dave Martin, linux-arm-kernel, linux-kernel
  Cc: opensource.kernel

We cannot get the FP and PC when the task is running on another CPU:
task->thread.cpu_context only holds the state saved the last time the
task was switched out. It is better to print a reminder than to provide
wrong information, for example when the task is stuck in a spinlock,
an interrupt, or a busy loop.

Signed-off-by: Wang Qing <wangqing@vivo.com>
---
 arch/arm64/kernel/traps.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index cf402be..c74090b 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -106,6 +106,13 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 		start_backtrace(&frame,
 				(unsigned long)__builtin_frame_address(0),
 				(unsigned long)dump_backtrace);
+	} else if (task_running(tsk)) {
+		/*
+		 * The task is running on another CPU; its FP and PC cannot be read.
+		 */
+		pr_warn("task %s is running on CPU%d, cannot dump its backtrace\n",
+			tsk->comm, task_cpu(tsk));
+		return;
 	} else {
 		/*
 		 * task blocked in __switch_to
-- 
2.7.4
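
For context, the new branch sits next to the existing switched-out
case, which (approximately, going by arch/arm64/kernel/traps.c around
this version) seeds the unwinder from the saved context:

	} else {
		/*
		 * task blocked in __switch_to(): cpu_context is valid
		 * here, so unwinding from it is safe.
		 */
		start_backtrace(&frame,
				thread_saved_fp(tsk),
				thread_saved_pc(tsk));
	}

thread_saved_fp()/thread_saved_pc() expand to
tsk->thread.cpu_context.fp/.pc, which is exactly the state that goes
stale while the task runs elsewhere.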


* [V3 2/3] sched: add task_running()
From: Wang Qing @ 2020-04-14 12:36 UTC
  To: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, James Morse, Mark Rutland,
	Eric W. Biederman, Thomas Gleixner, jinho lim, Wang Qing,
	Dave Martin, linux-arm-kernel, linux-kernel
  Cc: opensource.kernel

task_running() should be renamed to task_running_on_rq(), following
the naming of task_running_on_cpu(); that is what it originally
meant.

Add a new task_running() that requires no rq.

Signed-off-by: Wang Qing <wangqing@vivo.com>
---
 include/linux/sched.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4418f5c..0a7b150 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1843,6 +1843,11 @@ static inline unsigned int task_cpu(const struct task_struct *p)
 
 extern void set_task_cpu(struct task_struct *p, unsigned int cpu);
 
+static inline int task_running(const struct task_struct *p)
+{
+	return READ_ONCE(p->on_cpu);
+}
+
 #else
 
 static inline unsigned int task_cpu(const struct task_struct *p)
@@ -1854,6 +1859,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 {
 }
 
+static inline int task_running(const struct task_struct *p)
+{
+	return p == current;
+}
+
 #endif /* CONFIG_SMP */
 
 /*
-- 
2.7.4
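
Note that the SMP variant is only a snapshot: p->on_cpu may change the
instant after it is read, so the result suits diagnostics such as the
dump_backtrace() check in patch 1, not synchronization. A hypothetical
diagnostic caller (illustrative only, not part of this series):

	static void report_task(struct task_struct *p)
	{
		if (task_running(p))
			pr_info("%s is on CPU%d right now\n",
				p->comm, task_cpu(p));
		else
			pr_info("%s is not running\n", p->comm);
	}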


* [V3 3/3] sched: rename task_running() to task_running_on_rq()
From: Wang Qing @ 2020-04-14 12:36 UTC
  To: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, James Morse, Mark Rutland,
	Eric W. Biederman, Thomas Gleixner, jinho lim, Wang Qing,
	Dave Martin, linux-arm-kernel, linux-kernel
  Cc: opensource.kernel

task_running() should be renamed to task_running_on_rq(), following
the naming of task_running_on_cpu(); that is what it originally
meant.

This resolves the confusing naming.

Signed-off-by: Wang Qing <wangqing@vivo.com>
---
 kernel/sched/core.c     | 14 +++++++-------
 kernel/sched/deadline.c |  6 +++---
 kernel/sched/fair.c     |  2 +-
 kernel/sched/rt.c       |  6 +++---
 kernel/sched/sched.h    |  2 +-
 5 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a2694ba..7ba1840 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1672,7 +1672,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	if (cpumask_test_cpu(task_cpu(p), new_mask))
 		goto out;
 
-	if (task_running(rq, p) || p->state == TASK_WAKING) {
+	if (task_running_on_rq(rq, p) || p->state == TASK_WAKING) {
 		struct migration_arg arg = { p, dest_cpu };
 		/* Need help from migration thread: drop lock and wait. */
 		task_rq_unlock(rq, p, &rf);
@@ -1905,11 +1905,11 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
 		 *
 		 * NOTE! Since we don't hold any locks, it's not
 		 * even sure that "rq" stays as the right runqueue!
-		 * But we don't care, since "task_running()" will
-		 * return false if the runqueue has changed and p
-		 * is actually now running somewhere else!
+		 * But we don't care, since "task_running_on_rq()"
+		 * will return false if the runqueue has changed
+		 * and p is actually now running somewhere else!
 		 */
-		while (task_running(rq, p)) {
+		while (task_running_on_rq(rq, p)) {
 			if (match_state && unlikely(p->state != match_state))
 				return 0;
 			cpu_relax();
@@ -1922,7 +1922,7 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
 		 */
 		rq = task_rq_lock(p, &rf);
 		trace_sched_wait_task(p);
-		running = task_running(rq, p);
+		running = task_running_on_rq(rq, p);
 		queued = task_on_rq_queued(p);
 		ncsw = 0;
 		if (!match_state || p->state == match_state)
@@ -5745,7 +5745,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
 	if (curr->sched_class != p->sched_class)
 		goto out_unlock;
 
-	if (task_running(p_rq, p) || p->state)
+	if (task_running_on_rq(p_rq, p) || p->state)
 		goto out_unlock;
 
 	yielded = curr->sched_class->yield_to_task(rq, p, preempt);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 504d2f5..c04cecd 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1838,7 +1838,7 @@ static void task_fork_dl(struct task_struct *p)
 
 static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
 {
-	if (!task_running(rq, p) &&
+	if (!task_running_on_rq(rq, p) &&
 	    cpumask_test_cpu(cpu, p->cpus_ptr))
 		return 1;
 	return 0;
@@ -1990,7 +1990,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
 		if (double_lock_balance(rq, later_rq)) {
 			if (unlikely(task_rq(task) != rq ||
 				     !cpumask_test_cpu(later_rq->cpu, task->cpus_ptr) ||
-				     task_running(rq, task) ||
+				     task_running_on_rq(rq, task) ||
 				     !dl_task(task) ||
 				     !task_on_rq_queued(task))) {
 				double_unlock_balance(rq, later_rq);
@@ -2217,7 +2217,7 @@ static void pull_dl_task(struct rq *this_rq)
  */
 static void task_woken_dl(struct rq *rq, struct task_struct *p)
 {
-	if (!task_running(rq, p) &&
+	if (!task_running_on_rq(rq, p) &&
 	    !test_tsk_need_resched(rq->curr) &&
 	    p->nr_cpus_allowed > 1 &&
 	    dl_task(rq->curr) &&
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ea3ddd..6cc0b5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7503,7 +7503,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 	/* Record that we found atleast one task that could run on dst_cpu */
 	env->flags &= ~LBF_ALL_PINNED;
 
-	if (task_running(env->src_rq, p)) {
+	if (task_running_on_rq(env->src_rq, p)) {
 		schedstat_inc(p->se.statistics.nr_failed_migrations_running);
 		return 0;
 	}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index df11d88..ea647d9 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1655,7 +1655,7 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
 
 static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
 {
-	if (!task_running(rq, p) &&
+	if (!task_running_on_rq(rq, p) &&
 	    cpumask_test_cpu(cpu, p->cpus_ptr))
 		return 1;
 
@@ -1810,7 +1810,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 			 */
 			if (unlikely(task_rq(task) != rq ||
 				     !cpumask_test_cpu(lowest_rq->cpu, task->cpus_ptr) ||
-				     task_running(rq, task) ||
+				     task_running_on_rq(rq, task) ||
 				     !rt_task(task) ||
 				     !task_on_rq_queued(task))) {
 
@@ -2218,7 +2218,7 @@ static void pull_rt_task(struct rq *this_rq)
  */
 static void task_woken_rt(struct rq *rq, struct task_struct *p)
 {
-	bool need_to_push = !task_running(rq, p) &&
+	bool need_to_push = !task_running_on_rq(rq, p) &&
 			    !test_tsk_need_resched(rq->curr) &&
 			    p->nr_cpus_allowed > 1 &&
 			    (dl_task(rq->curr) || rt_task(rq->curr)) &&
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0f616bf..e5b6538 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1659,7 +1659,7 @@ static inline int task_current(struct rq *rq, struct task_struct *p)
 	return rq->curr == p;
 }
 
-static inline int task_running(struct rq *rq, struct task_struct *p)
+static inline int task_running_on_rq(struct rq *rq, struct task_struct *p)
 {
 #ifdef CONFIG_SMP
 	return p->on_cpu;
-- 
2.7.4
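
The hunk above is cut short by the diff context; after this patch the
complete helper presumably reads:

	static inline int task_running_on_rq(struct rq *rq, struct task_struct *p)
	{
	#ifdef CONFIG_SMP
		return p->on_cpu;
	#else
		return task_current(rq, p);
	#endif
	}

On SMP the rq argument is unused, which is why wait_task_inactive()
can keep polling it without holding the runqueue lock.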


