* [PATCH 0/2][v2] sched: Clean up newidle_balance() and pick_next_task()
@ 2020-04-20 15:01 Chen Yu
  2020-04-20 15:01 ` [PATCH 1/2][v2] sched: Make newidle_balance() static again Chen Yu
  2020-04-20 15:01 ` [PATCH 2/2][v2] sched: Extract the task putting code from pick_next_task() Chen Yu
  0 siblings, 2 replies; 7+ messages in thread
From: Chen Yu @ 2020-04-20 15:01 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman
  Cc: linux-kernel, Chen Yu

As Peter suggested, here are two minor code cleanups in the scheduler:
1. turn newidle_balance() into a static function.
2. extract the task putting code from pick_next_task() for future reuse.

Chen Yu (2):
  sched: Make newidle_balance() static again
  sched: Extract the task putting code from pick_next_task()

 kernel/sched/core.c  | 39 +++++++++++++++++++++++----------------
 kernel/sched/fair.c  |  6 ++++--
 kernel/sched/sched.h |  4 ----
 3 files changed, 27 insertions(+), 22 deletions(-)

-- 
2.17.1



* [PATCH 1/2][v2] sched: Make newidle_balance() static again
  2020-04-20 15:01 [PATCH 0/2][v2] sched: Clean up newidle_balance() and pick_next_task() Chen Yu
@ 2020-04-20 15:01 ` Chen Yu
  2020-04-20 17:43   ` Valentin Schneider
  2020-04-21  7:17   ` Vincent Guittot
  2020-04-20 15:01 ` [PATCH 2/2][v2] sched: Extract the task putting code from pick_next_task() Chen Yu
  1 sibling, 2 replies; 7+ messages in thread
From: Chen Yu @ 2020-04-20 15:01 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman
  Cc: linux-kernel, Chen Yu

After commit 6e2df0581f56 ("sched: Fix pick_next_task() vs 'change'
pattern race"), there is no need to expose newidle_balance() as it
is only used within fair.c. Change this function back to static.

No functional change.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
v2: Rename the remaining idle_balance() to newidle_balance()
    to fix a compile error when CONFIG_SMP is not set.
---
 kernel/sched/fair.c  | 6 ++++--
 kernel/sched/sched.h | 4 ----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 02f323b85b6d..cca5c9b7b5ae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3873,6 +3873,8 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
+
 static inline unsigned long task_util(struct task_struct *p)
 {
 	return READ_ONCE(p->se.avg.util_avg);
@@ -4054,7 +4056,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
+static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -10425,7 +10427,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
  *     0 - failed, no new tasks
  *   > 0 - success, new (fair) tasks present
  */
-int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index db3a57675ccf..be83f88495fb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1504,14 +1504,10 @@ static inline void unregister_sched_domain_sysctl(void)
 }
 #endif
 
-extern int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
-
 #else
 
 static inline void sched_ttwu_pending(void) { }
 
-static inline int newidle_balance(struct rq *this_rq, struct rq_flags *rf) { return 0; }
-
 #endif /* CONFIG_SMP */
 
 #include "stats.h"
-- 
2.17.1
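
After this patch the pieces line up roughly like this in fair.c (a
simplified sketch with the function body elided, not the literal
kernel source):

	#ifdef CONFIG_SMP
	/* forward declaration, so earlier SMP-only code can call it */
	static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);

	/* ... much later in the file ... */
	static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
	{
		/* ... try to pull work before this CPU goes idle ... */
	}
	#else
	/* !CONFIG_SMP stub (the renamed idle_balance()): nothing to pull */
	static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
	{
		return 0;
	}
	#endif

The extern declaration and the !SMP stub in sched.h are gone, since
nothing outside fair.c needs the symbol anymore.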



* [PATCH 2/2][v2] sched: Extract the task putting code from pick_next_task()
  2020-04-20 15:01 [PATCH 0/2][v2] sched: Clean up newidle_balance() and pick_next_task() Chen Yu
  2020-04-20 15:01 ` [PATCH 1/2][v2] sched: Make newidle_balance() static again Chen Yu
@ 2020-04-20 15:01 ` Chen Yu
  2020-04-20 17:44   ` Valentin Schneider
  1 sibling, 1 reply; 7+ messages in thread
From: Chen Yu @ 2020-04-20 15:01 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman
  Cc: linux-kernel, Chen Yu

Introduce a new function, finish_prev_task(), to do the balancing
when necessary and then put the previous task back on the run queue.
The code is extracted from pick_next_task() to prepare for future
reuse by other task-picking logic.

No functional change.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
 kernel/sched/core.c | 39 +++++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3a61a3b8eaa9..bf59a5cf030c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3904,6 +3904,28 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
 	schedstat_inc(this_rq()->sched_count);
 }
 
+static void finish_prev_task(struct rq *rq, struct task_struct *prev,
+			     struct rq_flags *rf)
+{
+	const struct sched_class *class;
+#ifdef CONFIG_SMP
+	/*
+	 * We must do the balancing pass before put_next_task(), such
+	 * that when we release the rq->lock the task is in the same
+	 * state as before we took rq->lock.
+	 *
+	 * We can terminate the balance pass as soon as we know there is
+	 * a runnable task of @class priority or higher.
+	 */
+	for_class_range(class, prev->sched_class, &idle_sched_class) {
+		if (class->balance(rq, prev, rf))
+			break;
+	}
+#endif
+
+	put_prev_task(rq, prev);
+}
+
 /*
  * Pick up the highest-prio task:
  */
@@ -3937,22 +3959,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	}
 
 restart:
-#ifdef CONFIG_SMP
-	/*
-	 * We must do the balancing pass before put_next_task(), such
-	 * that when we release the rq->lock the task is in the same
-	 * state as before we took rq->lock.
-	 *
-	 * We can terminate the balance pass as soon as we know there is
-	 * a runnable task of @class priority or higher.
-	 */
-	for_class_range(class, prev->sched_class, &idle_sched_class) {
-		if (class->balance(rq, prev, rf))
-			break;
-	}
-#endif
-
-	put_prev_task(rq, prev);
+	finish_prev_task(rq, prev, rf);
 
 	for_each_class(class) {
 		p = class->pick_next_task(rq);
-- 
2.17.1
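
With the helper in place, the tail of pick_next_task() reads roughly
as follows (a simplified sketch; the optimized fair-class fast path
above the restart label is omitted):

	restart:
		/* balance across classes and put the previous task back */
		finish_prev_task(rq, prev, rf);

		for_each_class(class) {
			p = class->pick_next_task(rq);
			if (p)
				return p;
		}

		/* The idle class should always have a runnable task. */
		BUG();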



* Re: [PATCH 1/2][v2] sched: Make newidle_balance() static again
  2020-04-20 15:01 ` [PATCH 1/2][v2] sched: Make newidle_balance() static again Chen Yu
@ 2020-04-20 17:43   ` Valentin Schneider
  2020-04-21  7:17   ` Vincent Guittot
  1 sibling, 0 replies; 7+ messages in thread
From: Valentin Schneider @ 2020-04-20 17:43 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	linux-kernel


On 20/04/20 16:01, Chen Yu wrote:
> After commit 6e2df0581f56 ("sched: Fix pick_next_task() vs 'change'
> pattern race"), there is no need to expose newidle_balance() as it
> is only used within fair.c. Change this function back to static.
>
> No functional change.
>

That derelict 'idle_balance()' is a funny one.

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Reported-by: kbuild test robot <lkp@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
> v2: Rename the remaining idle_balance() to newidle_balance()
>     to fix a compile error when CONFIG_SMP is not set.
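
(For context: the "derelict idle_balance()" mentioned above is the
!CONFIG_SMP stub in fair.c, which an earlier rename of idle_balance()
to newidle_balance() left behind under its old name. Roughly:

	/* fair.c, !CONFIG_SMP section, before this patch: */
	static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
	{
		return 0;	/* dead code: callers use newidle_balance() */
	}

!SMP builds had been resolving newidle_balance() to the stub in
sched.h instead; renaming the fair.c copy lets it take over once the
sched.h declarations are dropped.)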


* Re: [PATCH 2/2][v2] sched: Extract the task putting code from pick_next_task()
  2020-04-20 15:01 ` [PATCH 2/2][v2] sched: Extract the task putting code from pick_next_task() Chen Yu
@ 2020-04-20 17:44   ` Valentin Schneider
  2020-04-21  8:00     ` Chen Yu
  0 siblings, 1 reply; 7+ messages in thread
From: Valentin Schneider @ 2020-04-20 17:44 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	linux-kernel


On 20/04/20 16:01, Chen Yu wrote:
> Introduce a new function, finish_prev_task(), to do the balancing
> when necessary and then put the previous task back on the run queue.
> The code is extracted from pick_next_task() to prepare for future
> reuse by other task-picking logic.
>
> No functional change.
>

With the absolutely tiny nit below:

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
>  kernel/sched/core.c | 39 +++++++++++++++++++++++----------------
>  1 file changed, 23 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 3a61a3b8eaa9..bf59a5cf030c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3904,6 +3904,28 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
>       schedstat_inc(this_rq()->sched_count);
>  }
>
> +static void finish_prev_task(struct rq *rq, struct task_struct *prev,
> +			     struct rq_flags *rf)
> +{
> +	const struct sched_class *class;
> +#ifdef CONFIG_SMP

Nit: I would've put that declaration within the ifdef, given it isn't used
outside of it.

> +	/*
> +	 * We must do the balancing pass before put_next_task(), such
> +	 * that when we release the rq->lock the task is in the same
> +	 * state as before we took rq->lock.
> +	 *
> +	 * We can terminate the balance pass as soon as we know there is
> +	 * a runnable task of @class priority or higher.
> +	 */
> +	for_class_range(class, prev->sched_class, &idle_sched_class) {
> +		if (class->balance(rq, prev, rf))
> +			break;
> +	}
> +#endif
> +
> +	put_prev_task(rq, prev);
> +}
> +
>  /*
>   * Pick up the highest-prio task:
>   */


* Re: [PATCH 1/2][v2] sched: Make newidle_balance() static again
  2020-04-20 15:01 ` [PATCH 1/2][v2] sched: Make newidle_balance() static again Chen Yu
  2020-04-20 17:43   ` Valentin Schneider
@ 2020-04-21  7:17   ` Vincent Guittot
  1 sibling, 0 replies; 7+ messages in thread
From: Vincent Guittot @ 2020-04-21  7:17 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel

On Mon, 20 Apr 2020 at 17:01, Chen Yu <yu.c.chen@intel.com> wrote:
>
> After commit 6e2df0581f56 ("sched: Fix pick_next_task() vs 'change'
> pattern race"), there is no need to expose newidle_balance() as it
> is only used within fair.c. Change this function back to static.
>
> No functional change.
>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Reported-by: kbuild test robot <lkp@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
> v2: Rename the remaining idle_balance() to newidle_balance()
>     to fix a compile error when CONFIG_SMP is not set.
> ---
>  kernel/sched/fair.c  | 6 ++++--
>  kernel/sched/sched.h | 4 ----
>  2 files changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 02f323b85b6d..cca5c9b7b5ae 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3873,6 +3873,8 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
>         return cfs_rq->avg.load_avg;
>  }
>
> +static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
> +
>  static inline unsigned long task_util(struct task_struct *p)
>  {
>         return READ_ONCE(p->se.avg.util_avg);
> @@ -4054,7 +4056,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
>  static inline void
>  detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
>
> -static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
> +static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
>  {
>         return 0;
>  }
> @@ -10425,7 +10427,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
>   *     0 - failed, no new tasks
>   *   > 0 - success, new (fair) tasks present
>   */
> -int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
> +static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>  {
>         unsigned long next_balance = jiffies + HZ;
>         int this_cpu = this_rq->cpu;
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index db3a57675ccf..be83f88495fb 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1504,14 +1504,10 @@ static inline void unregister_sched_domain_sysctl(void)
>  }
>  #endif
>
> -extern int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
> -
>  #else
>
>  static inline void sched_ttwu_pending(void) { }
>
> -static inline int newidle_balance(struct rq *this_rq, struct rq_flags *rf) { return 0; }
> -
>  #endif /* CONFIG_SMP */
>
>  #include "stats.h"
> --
> 2.17.1
>


* Re: [PATCH 2/2][v2] sched: Extract the task putting code from pick_next_task()
  2020-04-20 17:44   ` Valentin Schneider
@ 2020-04-21  8:00     ` Chen Yu
  0 siblings, 0 replies; 7+ messages in thread
From: Chen Yu @ 2020-04-21  8:00 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	linux-kernel

Hi,
On Mon, Apr 20, 2020 at 06:44:29PM +0100, Valentin Schneider wrote:
> 
> On 20/04/20 16:01, Chen Yu wrote:
> > Introduce a new function, finish_prev_task(), to do the balancing
> > when necessary and then put the previous task back on the run queue.
> > The code is extracted from pick_next_task() to prepare for future
> > reuse by other task-picking logic.
> >
> > No functional change.
> >
> 
> With the absolutely tiny nit below:
> 
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
> 
> > Suggested-by: Peter Zijlstra <peterz@infradead.org>
> > Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> > ---
> >  kernel/sched/core.c | 39 +++++++++++++++++++++++----------------
> >  1 file changed, 23 insertions(+), 16 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 3a61a3b8eaa9..bf59a5cf030c 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3904,6 +3904,28 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
> >       schedstat_inc(this_rq()->sched_count);
> >  }
> >
> > +static void finish_prev_task(struct rq *rq, struct task_struct *prev,
> > +			     struct rq_flags *rf)
> > +{
> > +	const struct sched_class *class;
> > +#ifdef CONFIG_SMP
> 
> Nit: I would've put that declaration within the ifdef, given it isn't used
> outside of it.
> 
Okay, will do.

Thanks,
Chenyu
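
For reference, with the nit addressed the helper would look roughly
like this (a sketch of a possible v3, not an actual posted patch):

	static void finish_prev_task(struct rq *rq, struct task_struct *prev,
				     struct rq_flags *rf)
	{
	#ifdef CONFIG_SMP
		const struct sched_class *class;

		/*
		 * Balance pass first, so the task's state when rq->lock is
		 * released matches its state when the lock was taken; stop
		 * once a class at or above @prev's priority has a runnable
		 * task.
		 */
		for_class_range(class, prev->sched_class, &idle_sched_class) {
			if (class->balance(rq, prev, rf))
				break;
		}
	#endif

		put_prev_task(rq, prev);
	}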

