* [PATCH 0/2][v3] sched: Clean up newidle_balance() and pick_next_task()
@ 2020-04-21 10:48 Chen Yu
  2020-04-21 10:50 ` [PATCH 1/2][v3] sched: Make newidle_balance() static again Chen Yu
  2020-04-21 10:50 ` [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task() Chen Yu
  0 siblings, 2 replies; 7+ messages in thread
From: Chen Yu @ 2020-04-21 10:48 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Vincent Guittot, Steven Rostedt,
	Valentin Schneider, Ingo Molnar, Juri Lelli, Dietmar Eggemann,
	Mel Gorman, Ben Segall, Chen Yu

As Peter suggested, here are two minor code cleanups in the scheduler:
1. Turn newidle_balance() into a static function.
2. Extract the task putting code from pick_next_task() for future reuse.

Chen Yu (2):
  sched: Make newidle_balance() static again
  sched: Extract the task putting code from pick_next_task()

 kernel/sched/core.c  | 39 +++++++++++++++++++++++----------------
 kernel/sched/fair.c  |  6 ++++--
 kernel/sched/sched.h |  4 ----
 3 files changed, 27 insertions(+), 22 deletions(-)

-- 
2.17.1



* [PATCH 1/2][v3] sched: Make newidle_balance() static again
  2020-04-21 10:48 [PATCH 0/2][v3] sched: Clean up newidle_balance() and pick_next_task() Chen Yu
@ 2020-04-21 10:50 ` Chen Yu
  2020-05-01 18:22   ` [tip: sched/core] " tip-bot2 for Chen Yu
  2020-04-21 10:50 ` [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task() Chen Yu
  1 sibling, 1 reply; 7+ messages in thread
From: Chen Yu @ 2020-04-21 10:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Vincent Guittot, Steven Rostedt,
	Valentin Schneider, Ingo Molnar, Juri Lelli, Dietmar Eggemann,
	Mel Gorman, Ben Segall, Chen Yu

After commit 6e2df0581f56 ("sched: Fix pick_next_task() vs 'change'
pattern race"), there is no need to expose newidle_balance(), as it
is only used within fair.c. Make this function static again.

No functional change.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
v2: Rename the remaining idle_balance() to newidle_balance()
    to fix a compile error when CONFIG_SMP is not set.
v3: Unchanged.
---
 kernel/sched/fair.c  | 6 ++++--
 kernel/sched/sched.h | 4 ----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 02f323b85b6d..cca5c9b7b5ae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3873,6 +3873,8 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
+
 static inline unsigned long task_util(struct task_struct *p)
 {
 	return READ_ONCE(p->se.avg.util_avg);
@@ -4054,7 +4056,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
+static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -10425,7 +10427,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
  *     0 - failed, no new tasks
  *   > 0 - success, new (fair) tasks present
  */
-int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index db3a57675ccf..be83f88495fb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1504,14 +1504,10 @@ static inline void unregister_sched_domain_sysctl(void)
 }
 #endif
 
-extern int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
-
 #else
 
 static inline void sched_ttwu_pending(void) { }
 
-static inline int newidle_balance(struct rq *this_rq, struct rq_flags *rf) { return 0; }
-
 #endif /* CONFIG_SMP */
 
 #include "stats.h"
-- 
2.17.1
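
For readers following along outside the kernel tree, here is a minimal
standalone sketch of the pattern this patch applies (simplified types,
not kernel code; pick_sketch() is a made-up caller standing in for
pick_next_task_fair()): a forward declaration satisfies callers that
appear before the now-static definition, and the !CONFIG_SMP build
keeps a static inline stub in place of the extern declaration that
used to live in sched.h.

#include <stdio.h>

struct rq { int cpu; };
struct rq_flags { unsigned long flags; };

#ifdef CONFIG_SMP
/* forward declaration near the top of the file, as the patch adds */
static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
#else
/* !CONFIG_SMP stub, renamed from idle_balance() so callers compile */
static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
{
	return 0;
}
#endif

/* a caller that sits above the SMP definition in the same file */
static int pick_sketch(struct rq *rq, struct rq_flags *rf)
{
	return newidle_balance(rq, rf);
}

#ifdef CONFIG_SMP
/* the real definition, now static, further down the file */
static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
{
	return 1;	/* pretend we pulled a fair task */
}
#endif

int main(void)
{
	struct rq rq = { .cpu = 0 };
	struct rq_flags rf = { 0 };

	printf("newidle_balance() returned %d\n", pick_sketch(&rq, &rf));
	return 0;
}

Building with and without -DCONFIG_SMP exercises both paths; in
neither case does newidle_balance() need to be visible outside the
file.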



* [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task()
  2020-04-21 10:48 [PATCH 0/2][v3] sched: Clean up newidle_balance() and pick_next_task() Chen Yu
  2020-04-21 10:50 ` [PATCH 1/2][v3] sched: Make newidle_balance() static again Chen Yu
@ 2020-04-21 10:50 ` Chen Yu
  2020-04-21 14:50   ` Steven Rostedt
                     ` (2 more replies)
  1 sibling, 3 replies; 7+ messages in thread
From: Chen Yu @ 2020-04-21 10:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Vincent Guittot, Steven Rostedt,
	Valentin Schneider, Ingo Molnar, Juri Lelli, Dietmar Eggemann,
	Mel Gorman, Ben Segall, Chen Yu

Introduce a new function, put_prev_task_balance(), to do the balance
pass when necessary and then put the previous task back on the run
queue. This code is extracted from pick_next_task() to prepare for
future reuse by other types of task-picking logic.

No functional change.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
v3: Per Steven's suggestion, fix the comment to refer to
    put_prev_task() rather than put_next_task().

    Rename finish_prev_task() to put_prev_task_balance(),
    as this reflects what we do before picking a new task.

    Per Valentin's suggestion, move the declaration of
    struct sched_class inside the #ifdef, given it isn't
    used outside of it.
---
 kernel/sched/core.c | 39 +++++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3a61a3b8eaa9..c9d7880f6bf2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3904,6 +3904,28 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
 	schedstat_inc(this_rq()->sched_count);
 }
 
+static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
+				  struct rq_flags *rf)
+{
+#ifdef CONFIG_SMP
+	const struct sched_class *class;
+	/*
+	 * We must do the balancing pass before put_prev_task(), such
+	 * that when we release the rq->lock the task is in the same
+	 * state as before we took rq->lock.
+	 *
+	 * We can terminate the balance pass as soon as we know there is
+	 * a runnable task of @class priority or higher.
+	 */
+	for_class_range(class, prev->sched_class, &idle_sched_class) {
+		if (class->balance(rq, prev, rf))
+			break;
+	}
+#endif
+
+	put_prev_task(rq, prev);
+}
+
 /*
  * Pick up the highest-prio task:
  */
@@ -3937,22 +3959,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	}
 
 restart:
-#ifdef CONFIG_SMP
-	/*
-	 * We must do the balancing pass before put_next_task(), such
-	 * that when we release the rq->lock the task is in the same
-	 * state as before we took rq->lock.
-	 *
-	 * We can terminate the balance pass as soon as we know there is
-	 * a runnable task of @class priority or higher.
-	 */
-	for_class_range(class, prev->sched_class, &idle_sched_class) {
-		if (class->balance(rq, prev, rf))
-			break;
-	}
-#endif
-
-	put_prev_task(rq, prev);
+	put_prev_task_balance(rq, prev, rf);
 
 	for_each_class(class) {
 		p = class->pick_next_task(rq);
-- 
2.17.1
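
As a sketch of the intended reuse: a hypothetical alternative picking
path could now start from the same prepared state by calling the new
helper before walking the classes. pick_next_task_other() below is an
illustrative name, not part of this series; the helper and iterator
names are the real ones from the patch.

static struct task_struct *
pick_next_task_other(struct rq *rq, struct task_struct *prev,
		     struct rq_flags *rf)
{
	const struct sched_class *class;
	struct task_struct *p;

	/* balance pass plus put_prev_task(), as in the restart path */
	put_prev_task_balance(rq, prev, rf);

	for_each_class(class) {
		p = class->pick_next_task(rq);
		if (p)
			return p;
	}

	BUG();	/* the idle class should always have a runnable task */
}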



* Re: [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task()
  2020-04-21 10:50 ` [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task() Chen Yu
@ 2020-04-21 14:50   ` Steven Rostedt
  2020-04-21 14:53   ` Vincent Guittot
  2020-05-01 18:22   ` [tip: sched/core] " tip-bot2 for Chen Yu
  2 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2020-04-21 14:50 UTC (permalink / raw)
  To: Chen Yu
  Cc: linux-kernel, Peter Zijlstra, Vincent Guittot,
	Valentin Schneider, Ingo Molnar, Juri Lelli, Dietmar Eggemann,
	Mel Gorman, Ben Segall

On Tue, 21 Apr 2020 18:50:43 +0800
Chen Yu <yu.c.chen@intel.com> wrote:

> Introduce a new function, put_prev_task_balance(), to do the balance
> pass when necessary and then put the previous task back on the run
> queue. This code is extracted from pick_next_task() to prepare for
> future reuse by other types of task-picking logic.
> 
> No functional change.
> 
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
> 

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve


* Re: [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task()
  2020-04-21 10:50 ` [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task() Chen Yu
  2020-04-21 14:50   ` Steven Rostedt
@ 2020-04-21 14:53   ` Vincent Guittot
  2020-05-01 18:22   ` [tip: sched/core] " tip-bot2 for Chen Yu
  2 siblings, 0 replies; 7+ messages in thread
From: Vincent Guittot @ 2020-04-21 14:53 UTC (permalink / raw)
  To: Chen Yu
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Valentin Schneider,
	Ingo Molnar, Juri Lelli, Dietmar Eggemann, Mel Gorman,
	Ben Segall

On Tue, 21 Apr 2020 at 12:50, Chen Yu <yu.c.chen@intel.com> wrote:
>
> Introduce a new function, put_prev_task_balance(), to do the balance
> pass when necessary and then put the previous task back on the run
> queue. This code is extracted from pick_next_task() to prepare for
> future reuse by other types of task-picking logic.
>
> No functional change.
>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>



* [tip: sched/core] sched: Make newidle_balance() static again
  2020-04-21 10:50 ` [PATCH 1/2][v3] sched: Make newidle_balance() static again Chen Yu
@ 2020-05-01 18:22   ` tip-bot2 for Chen Yu
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot2 for Chen Yu @ 2020-05-01 18:22 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: kbuild test robot, Peter Zijlstra, Chen Yu, x86, LKML

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     d91cecc156620ec75d94c55369509c807c3d07e6
Gitweb:        https://git.kernel.org/tip/d91cecc156620ec75d94c55369509c807c3d07e6
Author:        Chen Yu <yu.c.chen@intel.com>
AuthorDate:    Tue, 21 Apr 2020 18:50:34 +08:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Thu, 30 Apr 2020 20:14:40 +02:00

sched: Make newidle_balance() static again

After commit 6e2df0581f56 ("sched: Fix pick_next_task() vs 'change'
pattern race"), there is no need to expose newidle_balance(), as it
is only used within fair.c. Make this function static again.

No functional change.

Reported-by: kbuild test robot <lkp@intel.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/83cd3030b031ca5d646cd5e225be10e7a0fdd8f5.1587464698.git.yu.c.chen@intel.com
---
 kernel/sched/fair.c  | 6 ++++--
 kernel/sched/sched.h | 4 ----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4b959c0..c0216ef 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3873,6 +3873,8 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
+
 static inline unsigned long task_util(struct task_struct *p)
 {
 	return READ_ONCE(p->se.avg.util_avg);
@@ -4054,7 +4056,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
+static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -10414,7 +10416,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
  *     0 - failed, no new tasks
  *   > 0 - success, new (fair) tasks present
  */
-int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7198683..978c6fa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1503,14 +1503,10 @@ static inline void unregister_sched_domain_sysctl(void)
 }
 #endif
 
-extern int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
-
 #else
 
 static inline void sched_ttwu_pending(void) { }
 
-static inline int newidle_balance(struct rq *this_rq, struct rq_flags *rf) { return 0; }
-
 #endif /* CONFIG_SMP */
 
 #include "stats.h"


* [tip: sched/core] sched: Extract the task putting code from pick_next_task()
  2020-04-21 10:50 ` [PATCH 2/2][v3] sched: Extract the task putting code from pick_next_task() Chen Yu
  2020-04-21 14:50   ` Steven Rostedt
  2020-04-21 14:53   ` Vincent Guittot
@ 2020-05-01 18:22   ` tip-bot2 for Chen Yu
  2 siblings, 0 replies; 7+ messages in thread
From: tip-bot2 for Chen Yu @ 2020-05-01 18:22 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra, Chen Yu, Valentin Schneider, Vincent Guittot,
	Steven Rostedt (VMware),
	x86, LKML

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     457d1f465778ccb5f14f7d7a62245e41d12a3804
Gitweb:        https://git.kernel.org/tip/457d1f465778ccb5f14f7d7a62245e41d12a3804
Author:        Chen Yu <yu.c.chen@intel.com>
AuthorDate:    Tue, 21 Apr 2020 18:50:43 +08:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Thu, 30 Apr 2020 20:14:40 +02:00

sched: Extract the task putting code from pick_next_task()

Introduce a new function, put_prev_task_balance(), to do the balance
pass when necessary and then put the previous task back on the run
queue. This code is extracted from pick_next_task() to prepare for
future reuse by other types of task-picking logic.

No functional change.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lkml.kernel.org/r/5a99860cf66293db58a397d6248bcb2eee326776.1587464698.git.yu.c.chen@intel.com
---
 kernel/sched/core.c | 39 +++++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9a2fbf9..2e6ba9e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3899,6 +3899,28 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
 	schedstat_inc(this_rq()->sched_count);
 }
 
+static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
+				  struct rq_flags *rf)
+{
+#ifdef CONFIG_SMP
+	const struct sched_class *class;
+	/*
+	 * We must do the balancing pass before put_prev_task(), such
+	 * that when we release the rq->lock the task is in the same
+	 * state as before we took rq->lock.
+	 *
+	 * We can terminate the balance pass as soon as we know there is
+	 * a runnable task of @class priority or higher.
+	 */
+	for_class_range(class, prev->sched_class, &idle_sched_class) {
+		if (class->balance(rq, prev, rf))
+			break;
+	}
+#endif
+
+	put_prev_task(rq, prev);
+}
+
 /*
  * Pick up the highest-prio task:
  */
@@ -3932,22 +3954,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	}
 
 restart:
-#ifdef CONFIG_SMP
-	/*
-	 * We must do the balancing pass before put_next_task(), such
-	 * that when we release the rq->lock the task is in the same
-	 * state as before we took rq->lock.
-	 *
-	 * We can terminate the balance pass as soon as we know there is
-	 * a runnable task of @class priority or higher.
-	 */
-	for_class_range(class, prev->sched_class, &idle_sched_class) {
-		if (class->balance(rq, prev, rf))
-			break;
-	}
-#endif
-
-	put_prev_task(rq, prev);
+	put_prev_task_balance(rq, prev, rf);
 
 	for_each_class(class) {
 		p = class->pick_next_task(rq);
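
The early termination described in the comment can also be modeled
standalone; here is a minimal sketch (simplified types, illustrative
only) of why the balance pass may stop at the first class that
reports a runnable task:

#include <stdio.h>

struct sched_class {
	const char *name;
	/* nonzero: a runnable task of this priority or higher exists */
	int (*balance)(void);
};

static int rt_balance(void)   { return 0; }	/* nothing to pull */
static int fair_balance(void) { return 1; }	/* fair work found */
static int idle_balance_stub(void) { return 1; }	/* idle always runs */

/* ordered highest priority first, like the kernel's class list */
static const struct sched_class classes[] = {
	{ "rt",   rt_balance },
	{ "fair", fair_balance },
	{ "idle", idle_balance_stub },
};

int main(void)
{
	unsigned int i;

	/* stands in for for_class_range(class, prev->sched_class, ...) */
	for (i = 0; i < sizeof(classes) / sizeof(classes[0]); i++) {
		printf("balancing %s\n", classes[i].name);
		if (classes[i].balance())
			break;	/* no lower class can win the pick */
	}
	return 0;
}

This prints "balancing rt" and "balancing fair" and stops before the
idle class: once a runnable fair task exists, balancing lower classes
is wasted work, because the subsequent pick loop will never reach
them.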

