* [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions
@ 2024-03-08 11:18 Ingo Molnar
  2024-03-08 11:18 ` [PATCH 01/13] sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq() Ingo Molnar
                   ` (13 more replies)
  0 siblings, 14 replies; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Over the years we've grown a colorful zoo of scheduler
load-balancing function names - some following random,
idiosyncratic patterns, others having become historic
misnomers that no longer match what the code does.

We have 'newidle_balance()' to balance a newly idle CPU,
but 'rebalance_domains()' to rebalance domains. We have
a find_idlest_cpu() function whose purpose is not to find
the idlest CPU anymore, and a find_busiest_queue() function
whose purpose is not to find the busiest runqueue anymore.

Fix most of the misnomers and organize the functions along the
sched_balance_*() namespace:

  scheduler_tick()		=> sched_tick()
  run_rebalance_domains()	=> sched_balance_softirq()
  trigger_load_balance()	=> sched_balance_trigger()
  rebalance_domains()		=> sched_balance_domains()
  load_balance()		=> sched_balance_rq()
  newidle_balance()		=> sched_balance_newidle()
  find_busiest_queue()		=> sched_balance_find_src_rq()
  find_busiest_group()		=> sched_balance_find_src_group()
  find_idlest_group_cpu()	=> sched_balance_find_dst_group_cpu()
  find_idlest_group()		=> sched_balance_find_dst_group()
  find_idlest_cpu()		=> sched_balance_find_dst_cpu()
  update_blocked_averages()	=> sched_balance_update_blocked_averages()

I think the visual improvement of left vs. right column
demonstrates the goal nicely.

While the function names got a bit longer, the common prefix
has another advantage beyond readability: now a

  git grep sched_balance_

... will show most of the balancing code nicely.
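
For instance (illustrative, abridged output - the matched lines shown
here are taken from the post-series diffs in this thread):

  $ git grep sched_balance_ kernel/sched/
  kernel/sched/core.c:	sched_balance_trigger(rq);
  kernel/sched/fair.c:static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
  kernel/sched/fair.c:static int sched_balance_rq(int this_cpu, struct rq *this_rq,
  kernel/sched/sched.h:extern void sched_balance_trigger(struct rq *rq);
  ...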

( I have a few more patches that standardize the NOHZ balancing
  code along the nohz_balance_*() nomenclature as well. )

Thanks,

    Ingo

==================>
Ingo Molnar (13):
  sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq()
  sched/balancing: Rename scheduler_tick() => sched_tick()
  sched/balancing: Rename trigger_load_balance() => sched_balance_trigger()
  sched/balancing: Rename rebalance_domains() => sched_balance_domains()
  sched/balancing: Rename load_balance() => sched_balance_rq()
  sched/balancing: Rename find_busiest_queue() => find_src_rq()
  sched/balancing: Rename find_src_rq() => sched_balance_find_src_rq()
  sched/balancing: Rename find_busiest_group() => sched_balance_find_src_group()
  sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages()
  sched/balancing: Rename newidle_balance() => sched_balance_newidle()
  sched/balancing: Rename find_idlest_group_cpu() => sched_balance_find_dst_group_cpu()
  sched/balancing: Rename find_idlest_group() => sched_balance_find_dst_group()
  sched/balancing: Rename find_idlest_cpu() => sched_balance_find_dst_cpu()

 Documentation/scheduler/sched-domains.rst                       | 12 ++--
 Documentation/scheduler/sched-stats.rst                         | 32 +++++------
 Documentation/translations/zh_CN/scheduler/sched-domains.rst    | 10 ++--
 Documentation/translations/zh_CN/scheduler/sched-stats.rst      | 30 +++++-----
 arch/arm/kernel/topology.c                                      |  2 +-
 include/linux/sched.h                                           |  2 +-
 include/linux/sched/topology.h                                  |  2 +-
 kernel/sched/core.c                                             |  6 +-
 kernel/sched/fair.c                                             | 88 ++++++++++++++---------------
 kernel/sched/loadavg.c                                          |  2 +-
 kernel/sched/pelt.c                                             |  2 +-
 kernel/sched/sched.h                                            |  4 +-
 kernel/time/timer.c                                             |  2 +-
 kernel/workqueue.c                                              |  2 +-
 .../selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc      |  2 +-
 15 files changed, 99 insertions(+), 99 deletions(-)

-- 
2.40.1



* [PATCH 01/13] sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 02/13] sched/balancing: Rename scheduler_tick() => sched_tick() Ingo Molnar
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

run_rebalance_domains() is a misnomer, as it doesn't only
run rebalance_domains(), but since the introduction of the
NOHZ code it also runs nohz_idle_balance().

Rename it to sched_balance_softirq(), reflecting its more
generic purpose and that it's a softirq handler.
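
For reference, a condensed sketch of the handler after this rename,
abridged from the fair.c hunks in this thread - note that
update_blocked_averages() and rebalance_domains() only get their
sched_balance_*() names later in this series:

  static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
  {
          struct rq *this_rq = this_rq();
          enum cpu_idle_type idle = this_rq->idle_balance;

          /*
           * NOHZ: balance on behalf of the other idle CPUs whose ticks
           * are stopped; if that pulls some load, skip the normal pass.
           */
          if (nohz_idle_balance(this_rq, idle))
                  return;

          /* Normal periodic balance of this runqueue's domains: */
          update_blocked_averages(this_rq->cpu);
          rebalance_domains(this_rq, idle);
  }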

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 kernel/sched/fair.c                                          | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index e57ad28301bd..6577b068f921 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ out of balance are tasks moved between groups.
 In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
 through scheduler_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
-balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
+balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index e814d4c01141..fbc326668e37 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -36,7 +36,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 
 在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过scheduler_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
-的工作由run_rebalance_domains()->rebalance_domains()完成,在软中断上下文中执行
+的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
 后一个函数有两个入参:当前CPU的运行队列、它在scheduler_tick()调用时是否空闲。函数会从
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 116a640534b9..953f39deb68e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12415,7 +12415,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
  * - indirectly from a remote scheduler_tick() for NOHZ idle balancing
  *   through the SMP cross-call nohz_csd_func()
  */
-static __latent_entropy void run_rebalance_domains(struct softirq_action *h)
+static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 {
 	struct rq *this_rq = this_rq();
 	enum cpu_idle_type idle = this_rq->idle_balance;
@@ -13216,7 +13216,7 @@ __init void init_sched_fair_class(void)
 #endif
 	}
 
-	open_softirq(SCHED_SOFTIRQ, run_rebalance_domains);
+	open_softirq(SCHED_SOFTIRQ, sched_balance_softirq);
 
 #ifdef CONFIG_NO_HZ_COMMON
 	nohz.next_balance = jiffies;
-- 
2.40.1



* [PATCH 02/13] sched/balancing: Rename scheduler_tick() => sched_tick()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
  2024-03-08 11:18 ` [PATCH 01/13] sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 03/13] sched/balancing: Rename trigger_load_balance() => sched_balance_trigger() Ingo Molnar
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

- Standardize on the sched_*() prefix for scheduler-internal functions
  defined in <linux/sched.h>. scheduler_tick() was the only function
  using the scheduler_ prefix. Harmonize it.

- The other reason to rename it is that the NOHZ scheduler-tick
  handling functions are already named sched_tick_*(). This makes
  'git grep sched_tick' more meaningful; see the call-path sketch
  below.
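
For reference, the tick-driven call path this rename touches (per the
kernel/time/timer.c and kernel/sched/core.c hunks below;
trigger_load_balance() itself is renamed in the next patch):

  update_process_times()               /* timer code, HZ frequency */
      -> sched_tick()                  /* was: scheduler_tick() */
          -> trigger_load_balance(rq)  /* raises SCHED_SOFTIRQ when due */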

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Valentin Schneider <vschneid@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 Documentation/scheduler/sched-domains.rst                            | 4 ++--
 Documentation/translations/zh_CN/scheduler/sched-domains.rst         | 4 ++--
 include/linux/sched.h                                                | 2 +-
 kernel/sched/core.c                                                  | 4 ++--
 kernel/sched/loadavg.c                                               | 2 +-
 kernel/time/timer.c                                                  | 2 +-
 kernel/workqueue.c                                                   | 2 +-
 tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc | 2 +-
 8 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 6577b068f921..541d6c617971 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -32,13 +32,13 @@ load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
 In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
-through scheduler_tick(). It raises a softirq after the next regularly scheduled
+through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
-the CPU was idle at the time the scheduler_tick() happened and iterates over all
+the CPU was idle at the time the sched_tick() happened and iterates over all
 sched domains our CPU is on, starting from its base domain and going up the ->parent
 chain. While doing that, it checks to see if the current domain has exhausted its
 rebalance interval. If so, it runs load_balance() on that domain. It then checks
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index fbc326668e37..fa0c0bcc6ba5 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -34,12 +34,12 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 调度域中的负载均衡发生在调度组中。也就是说,每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后,任务才在组之间发生迁移。
 
-在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过scheduler_tick()
+在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
-后一个函数有两个入参:当前CPU的运行队列、它在scheduler_tick()调用时是否空闲。函数会从
+后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
 当前CPU所在的基调度域开始迭代执行,并沿着parent指针链向上进入更高层级的调度域。在迭代
 过程中,函数会检查当前调度域是否已经耗尽了再平衡的时间间隔,如果是,它在该调度域运行
 load_balance()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffe8f618ab86..739e32ead24b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -301,7 +301,7 @@ enum {
 	TASK_COMM_LEN = 16,
 };
 
-extern void scheduler_tick(void);
+extern void sched_tick(void);
 
 #define	MAX_SCHEDULE_TIMEOUT		LONG_MAX
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a76c7095f736..3affa9a6b249 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5651,7 +5651,7 @@ static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
  * This function gets called by the timer code, with HZ frequency.
  * We call it with interrupts disabled.
  */
-void scheduler_tick(void)
+void sched_tick(void)
 {
 	int cpu = smp_processor_id();
 	struct rq *rq = cpu_rq(cpu);
@@ -6574,7 +6574,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
  *      paths. For example, see arch/x86/entry_64.S.
  *
  *      To drive preemption between tasks, the scheduler sets the flag in timer
- *      interrupt handler scheduler_tick().
+ *      interrupt handler sched_tick().
  *
  *   3. Wakeups don't really cause entry into schedule(). They add a
  *      task to the run-queue and that's it.
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index 52c8f8226b0d..ca9da66cc894 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -379,7 +379,7 @@ void calc_global_load(void)
 }
 
 /*
- * Called from scheduler_tick() to periodically update this CPU's
+ * Called from sched_tick() to periodically update this CPU's
  * active count.
  */
 void calc_global_load_tick(struct rq *this_rq)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 352b161113cd..ec003ad18b2d 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -2089,7 +2089,7 @@ void update_process_times(int user_tick)
 	if (in_irq())
 		irq_work_tick();
 #endif
-	scheduler_tick();
+	sched_tick();
 	if (IS_ENABLED(CONFIG_POSIX_TIMERS))
 		run_posix_cpu_timers();
 }
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7b482a26d741..8aa3a0829dd4 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1324,7 +1324,7 @@ void wq_worker_sleeping(struct task_struct *task)
  * wq_worker_tick - a scheduler tick occurred while a kworker is running
  * @task: task currently running
  *
- * Called from scheduler_tick(). We're in the IRQ context and the current
+ * Called from sched_tick(). We're in the IRQ context and the current
  * worker's fields which follow the 'K' locking rule can be accessed safely.
  */
 void wq_worker_tick(struct task_struct *task)
diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
index 25432b8cd5bd..073a748b9380 100644
--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
@@ -19,7 +19,7 @@ fail() { # mesg
 
 FILTER=set_ftrace_filter
 FUNC1="schedule"
-FUNC2="scheduler_tick"
+FUNC2="sched_tick"
 
 ALL_FUNCS="#### all functions enabled ####"
 
-- 
2.40.1



* [PATCH 03/13] sched/balancing: Rename trigger_load_balance() => sched_balance_trigger()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
  2024-03-08 11:18 ` [PATCH 01/13] sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq() Ingo Molnar
  2024-03-08 11:18 ` [PATCH 02/13] sched/balancing: Rename scheduler_tick() => sched_tick() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 04/13] sched/balancing: Rename rebalance_domains() => sched_balance_domains() Ingo Molnar
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 kernel/sched/core.c                                          | 2 +-
 kernel/sched/fair.c                                          | 2 +-
 kernel/sched/sched.h                                         | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 541d6c617971..c7ea05f4107b 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -31,7 +31,7 @@ is treated as one entity. The load of a group is defined as the sum of the
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index fa0c0bcc6ba5..1a8587a971f9 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 调度域中的负载均衡发生在调度组中。也就是说,每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后,任务才在组之间发生迁移。
 
-在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过sched_tick()
+在kernel/sched/core.c中,sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3affa9a6b249..d56ebe8230bc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5689,7 +5689,7 @@ void sched_tick(void)
 
 #ifdef CONFIG_SMP
 	rq->idle_balance = idle_cpu(cpu);
-	trigger_load_balance(rq);
+	sched_balance_trigger(rq);
 #endif
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 953f39deb68e..e377b675920a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12438,7 +12438,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 /*
  * Trigger the SCHED_SOFTIRQ if it is time to do periodic load balancing.
  */
-void trigger_load_balance(struct rq *rq)
+void sched_balance_trigger(struct rq *rq)
 {
 	/*
 	 * Don't need to rebalance while attached to NULL domain or
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d2242679239e..5b0ddb0e6017 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2397,7 +2397,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 
 extern void update_group_capacity(struct sched_domain *sd, int cpu);
 
-extern void trigger_load_balance(struct rq *rq);
+extern void sched_balance_trigger(struct rq *rq);
 
 extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);
 
-- 
2.40.1



* [PATCH 04/13] sched/balancing: Rename rebalance_domains() => sched_balance_domains()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (2 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 03/13] sched/balancing: Rename trigger_load_balance() => sched_balance_trigger() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 05/13] sched/balancing: Rename load_balance() => sched_balance_rq() Ingo Molnar
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Valentin Schneider <vschneid@redhat.com>
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 arch/arm/kernel/topology.c                                   | 2 +-
 kernel/sched/fair.c                                          | 8 ++++----
 kernel/sched/sched.h                                         | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index c7ea05f4107b..5d8e8b8b269e 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ out of balance are tasks moved between groups.
 In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
-balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
+balancing workhorse, sched_balance_softirq()->sched_balance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index 1a8587a971f9..e6590fd80640 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -36,7 +36,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 
 在kernel/sched/core.c中,sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
-的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
+的工作由sched_balance_softirq()->sched_balance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
 后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index ef0058de432b..2336ee2aa44a 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -42,7 +42,7 @@
  * can take this difference into account during load balance. A per cpu
  * structure is preferred because each CPU updates its own cpu_capacity field
  * during the load balance except for idle cores. One idle core is selected
- * to run the rebalance_domains for all idle cores and the cpu_capacity can be
+ * to run the sched_balance_domains for all idle cores and the cpu_capacity can be
  * updated during this sequence.
  */
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e377b675920a..330788b0c617 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11685,7 +11685,7 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
  *
  * Balancing parameters are set up in init_sched_domains.
  */
-static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
+static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 {
 	int continue_balancing = 1;
 	int cpu = rq->cpu;
@@ -12161,7 +12161,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 			rq_unlock_irqrestore(rq, &rf);
 
 			if (flags & NOHZ_BALANCE_KICK)
-				rebalance_domains(rq, CPU_IDLE);
+				sched_balance_domains(rq, CPU_IDLE);
 		}
 
 		if (time_after(next_balance, rq->next_balance)) {
@@ -12422,7 +12422,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 	/*
 	 * If this CPU has a pending NOHZ_BALANCE_KICK, then do the
 	 * balancing on behalf of the other idle CPUs whose ticks are
-	 * stopped. Do nohz_idle_balance *before* rebalance_domains to
+	 * stopped. Do nohz_idle_balance *before* sched_balance_domains to
 	 * give the idle CPUs a chance to load balance. Else we may
 	 * load balance only within the local sched_domain hierarchy
 	 * and abort nohz_idle_balance altogether if we pull some load.
@@ -12432,7 +12432,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 
 	/* normal load balance */
 	update_blocked_averages(this_rq->cpu);
-	rebalance_domains(this_rq, idle);
+	sched_balance_domains(this_rq, idle);
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5b0ddb0e6017..41024c1c49b4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2904,7 +2904,7 @@ extern void cfs_bandwidth_usage_dec(void);
 #define NOHZ_NEWILB_KICK_BIT	2
 #define NOHZ_NEXT_KICK_BIT	3
 
-/* Run rebalance_domains() */
+/* Run sched_balance_domains() */
 #define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
 /* Update blocked load */
 #define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
-- 
2.40.1



* [PATCH 05/13] sched/balancing: Rename load_balance() => sched_balance_rq()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (3 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 04/13] sched/balancing: Rename rebalance_domains() => sched_balance_domains() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-11  8:17   ` Shrikanth Hegde
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 06/13] sched/balancing: Rename find_busiest_queue() => find_src_rq() Ingo Molnar
                   ` (8 subsequent siblings)
  13 siblings, 2 replies; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also, load_balance() has become somewhat of a misnomer: historically
it was the first and primary load-balancing function to be called,
but with the introduction of sched domains it has become a lower-layer
function that balances runqueues.

Rename it to sched_balance_rq() accordingly.
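
To illustrate where it sits, the periodic balancing call chain after
this series, as described by the sched-domains.rst updates in this
thread:

  sched_tick()
      -> sched_balance_trigger()       /* raises SCHED_SOFTIRQ */
  sched_balance_softirq()              /* SCHED_SOFTIRQ handler */
      -> sched_balance_domains()       /* walks the sched domain hierarchy */
          -> sched_balance_rq()        /* balances one runqueue */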

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 Documentation/scheduler/sched-domains.rst                    |  4 ++--
 Documentation/scheduler/sched-stats.rst                      | 32 ++++++++++++++++----------------
 Documentation/translations/zh_CN/scheduler/sched-domains.rst |  4 ++--
 Documentation/translations/zh_CN/scheduler/sched-stats.rst   | 30 +++++++++++++++---------------
 include/linux/sched/topology.h                               |  2 +-
 kernel/sched/fair.c                                          | 10 +++++-----
 6 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 5d8e8b8b269e..5e996fe973b1 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -41,11 +41,11 @@ The latter function takes two arguments: the runqueue of current CPU and whether
 the CPU was idle at the time the sched_tick() happened and iterates over all
 sched domains our CPU is on, starting from its base domain and going up the ->parent
 chain. While doing that, it checks to see if the current domain has exhausted its
-rebalance interval. If so, it runs load_balance() on that domain. It then checks
+rebalance interval. If so, it runs sched_balance_rq() on that domain. It then checks
 the parent sched_domain (if it exists), and the parent of the parent and so
 forth.
 
-Initially, load_balance() finds the busiest group in the current sched domain.
+Initially, sched_balance_rq() finds the busiest group in the current sched domain.
 If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
 that group. If it manages to find such a runqueue, it locks both our initial
 CPU's runqueue and the newly found busiest one and starts moving tasks from it
diff --git a/Documentation/scheduler/sched-stats.rst b/Documentation/scheduler/sched-stats.rst
index 03c062915998..afb39be7d6d2 100644
--- a/Documentation/scheduler/sched-stats.rst
+++ b/Documentation/scheduler/sched-stats.rst
@@ -72,53 +72,53 @@ domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 
 The first field is a bit mask indicating what cpus this domain operates over.
 
-The next 24 are a variety of load_balance() statistics in grouped into types
+The next 24 are a variety of sched_balance_rq() statistics in grouped into types
 of idleness (idle, busy, and newly idle):
 
-    1)  # of times in this domain load_balance() was called when the
+    1)  # of times in this domain sched_balance_rq() was called when the
         cpu was idle
-    2)  # of times in this domain load_balance() checked but found
+    2)  # of times in this domain sched_balance_rq() checked but found
         the load did not require balancing when the cpu was idle
-    3)  # of times in this domain load_balance() tried to move one or
+    3)  # of times in this domain sched_balance_rq() tried to move one or
         more tasks and failed, when the cpu was idle
     4)  sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was idle
+        sched_balance_rq() in this domain when the cpu was idle
     5)  # of times in this domain pull_task() was called when the cpu
         was idle
     6)  # of times in this domain pull_task() was called even though
         the target task was cache-hot when idle
-    7)  # of times in this domain load_balance() was called but did
+    7)  # of times in this domain sched_balance_rq() was called but did
         not find a busier queue while the cpu was idle
     8)  # of times in this domain a busier queue was found while the
         cpu was idle but no busier group was found
-    9)  # of times in this domain load_balance() was called when the
+    9)  # of times in this domain sched_balance_rq() was called when the
         cpu was busy
-    10) # of times in this domain load_balance() checked but found the
+    10) # of times in this domain sched_balance_rq() checked but found the
         load did not require balancing when busy
-    11) # of times in this domain load_balance() tried to move one or
+    11) # of times in this domain sched_balance_rq() tried to move one or
         more tasks and failed, when the cpu was busy
     12) sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was busy
+        sched_balance_rq() in this domain when the cpu was busy
     13) # of times in this domain pull_task() was called when busy
     14) # of times in this domain pull_task() was called even though the
         target task was cache-hot when busy
-    15) # of times in this domain load_balance() was called but did not
+    15) # of times in this domain sched_balance_rq() was called but did not
         find a busier queue while the cpu was busy
     16) # of times in this domain a busier queue was found while the cpu
         was busy but no busier group was found
 
-    17) # of times in this domain load_balance() was called when the
+    17) # of times in this domain sched_balance_rq() was called when the
         cpu was just becoming idle
-    18) # of times in this domain load_balance() checked but found the
+    18) # of times in this domain sched_balance_rq() checked but found the
         load did not require balancing when the cpu was just becoming idle
-    19) # of times in this domain load_balance() tried to move one or more
+    19) # of times in this domain sched_balance_rq() tried to move one or more
         tasks and failed, when the cpu was just becoming idle
     20) sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was just becoming idle
+        sched_balance_rq() in this domain when the cpu was just becoming idle
     21) # of times in this domain pull_task() was called when newly idle
     22) # of times in this domain pull_task() was called even though the
         target task was cache-hot when just becoming idle
-    23) # of times in this domain load_balance() was called but did not
+    23) # of times in this domain sched_balance_rq() was called but did not
         find a busier queue while the cpu was just becoming idle
     24) # of times in this domain a busier queue was found while the cpu
         was just becoming idle but no busier group was found
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index e6590fd80640..06363169c56b 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -42,9 +42,9 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
 当前CPU所在的基调度域开始迭代执行,并沿着parent指针链向上进入更高层级的调度域。在迭代
 过程中,函数会检查当前调度域是否已经耗尽了再平衡的时间间隔,如果是,它在该调度域运行
-load_balance()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
+sched_balance_rq()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
 
-起初,load_balance()查找当前调度域中最繁忙的调度组。如果成功,在该调度组管辖的全部CPU
+起初,sched_balance_rq()查找当前调度域中最繁忙的调度组。如果成功,在该调度组管辖的全部CPU
 的运行队列中找出最繁忙的运行队列。如能找到,对当前的CPU运行队列和新找到的最繁忙运行
 队列均加锁,并把任务从最繁忙队列中迁移到当前CPU上。被迁移的任务数量等于在先前迭代执行
 中计算出的该调度域的调度组的不均衡值。
diff --git a/Documentation/translations/zh_CN/scheduler/sched-stats.rst b/Documentation/translations/zh_CN/scheduler/sched-stats.rst
index c5e0be663837..09eee2517610 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-stats.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-stats.rst
@@ -75,42 +75,42 @@ domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 繁忙,新空闲):
 
 
-    1)  当CPU空闲时,load_balance()在这个调度域中被调用了#次
-    2)  当CPU空闲时,load_balance()在这个调度域中被调用,但是发现负载无需
+    1)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用了#次
+    2)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
         均衡#次
-    3)  当CPU空闲时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    3)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
         任务且失败了#次
-    4)  当CPU空闲时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    4)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
         #次
     5)  当CPU空闲时,pull_task()在这个调度域中被调用#次
     6)  当CPU空闲时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    7)  当CPU空闲时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    7)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
         队列#次
     8)  当CPU空闲时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
         #次
-    9)  当CPU繁忙时,load_balance()在这个调度域中被调用了#次
-    10) 当CPU繁忙时,load_balance()在这个调度域中被调用,但是发现负载无需
+    9)  当CPU繁忙时,sched_balance_rq()在这个调度域中被调用了#次
+    10) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
         均衡#次
-    11) 当CPU繁忙时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    11) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
         任务且失败了#次
-    12) 当CPU繁忙时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    12) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
         #次
     13) 当CPU繁忙时,pull_task()在这个调度域中被调用#次
     14) 当CPU繁忙时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    15) 当CPU繁忙时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    15) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
         队列#次
     16) 当CPU繁忙时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
         #次
-    17) 当CPU新空闲时,load_balance()在这个调度域中被调用了#次
-    18) 当CPU新空闲时,load_balance()在这个调度域中被调用,但是发现负载无需
+    17) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用了#次
+    18) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
         均衡#次
-    19) 当CPU新空闲时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    19) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
         任务且失败了#次
-    20) 当CPU新空闲时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    20) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
         #次
     21) 当CPU新空闲时,pull_task()在这个调度域中被调用#次
     22) 当CPU新空闲时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    23) 当CPU新空闲时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    23) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
         队列#次
     24) 当CPU新空闲时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
         #次
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 191b122158fb..f0b721b5d42d 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -110,7 +110,7 @@ struct sched_domain {
 	unsigned long last_decay_max_lb_cost;
 
 #ifdef CONFIG_SCHEDSTATS
-	/* load_balance() stats */
+	/* sched_balance_rq() stats */
 	unsigned int lb_count[CPU_MAX_IDLE_TYPES];
 	unsigned int lb_failed[CPU_MAX_IDLE_TYPES];
 	unsigned int lb_balanced[CPU_MAX_IDLE_TYPES];
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 330788b0c617..0d2753c50be9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6866,7 +6866,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 #ifdef CONFIG_SMP
 
-/* Working cpumask for: load_balance, load_balance_newidle. */
+/* Working cpumask for: sched_balance_rq, load_balance_newidle. */
 static DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
 static DEFINE_PER_CPU(cpumask_var_t, select_rq_mask);
 static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
@@ -11242,7 +11242,7 @@ static int should_we_balance(struct lb_env *env)
  * Check this_cpu to ensure it is balanced within domain. Attempt to move
  * tasks if there is an imbalance.
  */
-static int load_balance(int this_cpu, struct rq *this_rq,
+static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 			struct sched_domain *sd, enum cpu_idle_type idle,
 			int *continue_balancing)
 {
@@ -11647,7 +11647,7 @@ static int active_load_balance_cpu_stop(void *data)
 static atomic_t sched_balance_running = ATOMIC_INIT(0);
 
 /*
- * Scale the max load_balance interval with the number of CPUs in the system.
+ * Scale the max sched_balance_rq interval with the number of CPUs in the system.
  * This trades load-balance latency on larger machines for less cross talk.
  */
 void update_max_interval(void)
@@ -11727,7 +11727,7 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 		}
 
 		if (time_after_eq(jiffies, sd->last_balance + interval)) {
-			if (load_balance(cpu, rq, sd, idle, &continue_balancing)) {
+			if (sched_balance_rq(cpu, rq, sd, idle, &continue_balancing)) {
 				/*
 				 * The LBF_DST_PINNED logic could have changed
 				 * env->dst_cpu, so we can't know our idle
@@ -12353,7 +12353,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 
 		if (sd->flags & SD_BALANCE_NEWIDLE) {
 
-			pulled_task = load_balance(this_cpu, this_rq,
+			pulled_task = sched_balance_rq(this_cpu, this_rq,
 						   sd, CPU_NEWLY_IDLE,
 						   &continue_balancing);
 
-- 
2.40.1



* [PATCH 06/13] sched/balancing: Rename find_busiest_queue() => find_src_rq()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (4 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 05/13] sched/balancing: Rename load_balance() => sched_balance_rq() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] sched/balancing: Rename find_busiest_queue() => sched_balance_find_src_rq() tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 07/13] sched/balancing: Rename find_src_rq() " Ingo Molnar
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

The find_busiest_queue() naming has two small quirks:

 - Scheduler functions that deal with runqueues usually have a rq_ prefix
   or _rq postfix, but this function has neither.

 - The 'busiest' qualifier was historically correct, but has become
   somewhat of a misnomer: in quite a few cases we will not pick the
   busiest runqueue, but the best (possible) runqueue we can balance
   tasks from. So name it a bit more neutrally, similar to the
   'src/dst' nomenclature we are already using when moving tasks
   between runqueues.

To fix both quirks, rename it to find_src_rq().
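
The 'src/dst' nomenclature above is the one the load-balancing
environment already uses - an abridged view of struct lb_env in
kernel/sched/fair.c (most fields elided):

  struct lb_env {
          struct sched_domain     *sd;

          struct rq               *src_rq;   /* runqueue we pull tasks from */
          int                      src_cpu;

          int                      dst_cpu;  /* CPU we pull tasks to */
          struct rq               *dst_rq;
          ...
  };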

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0d2753c50be9..e600cac7806d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10959,9 +10959,9 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 }
 
 /*
- * find_busiest_queue - find the busiest runqueue among the CPUs in the group.
+ * find_src_rq - find the busiest runqueue among the CPUs in the group.
  */
-static struct rq *find_busiest_queue(struct lb_env *env,
+static struct rq *find_src_rq(struct lb_env *env,
 				     struct sched_group *group)
 {
 	struct rq *busiest = NULL, *rq;
@@ -11280,7 +11280,7 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 		goto out_balanced;
 	}
 
-	busiest = find_busiest_queue(&env, group);
+	busiest = find_src_rq(&env, group);
 	if (!busiest) {
 		schedstat_inc(sd->lb_nobusyq[idle]);
 		goto out_balanced;
-- 
2.40.1



* [PATCH 07/13] sched/balancing: Rename find_src_rq() => sched_balance_find_src_rq()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (5 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 06/13] sched/balancing: Rename find_busiest_queue() => find_src_rq() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-08 13:51   ` Vincent Guittot
  2024-03-08 11:18 ` [PATCH 08/13] sched/balancing: Rename find_busiest_group() => sched_balance_find_src_group() Ingo Molnar
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e600cac7806d..1cd9a18b35e0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10959,9 +10959,9 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 }
 
 /*
- * find_src_rq - find the busiest runqueue among the CPUs in the group.
+ * sched_balance_find_src_rq - find the busiest runqueue among the CPUs in the group.
  */
-static struct rq *find_src_rq(struct lb_env *env,
+static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 				     struct sched_group *group)
 {
 	struct rq *busiest = NULL, *rq;
@@ -11280,7 +11280,7 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 		goto out_balanced;
 	}
 
-	busiest = find_src_rq(&env, group);
+	busiest = sched_balance_find_src_rq(&env, group);
 	if (!busiest) {
 		schedstat_inc(sd->lb_nobusyq[idle]);
 		goto out_balanced;
-- 
2.40.1



* [PATCH 08/13] sched/balancing: Rename find_busiest_group() => sched_balance_find_src_group()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (6 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 07/13] sched/balancing: Rename find_src_rq() " Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 09/13] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages() Ingo Molnar
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Make two naming changes:

1) Standardize scheduler load-balancing function names on the
   sched_balance_() prefix.

2) Similar to find_busiest_queue(), the find_busiest_group() naming
   has become a bit of a misnomer: the 'busiest' qualifier to this
   function was historically correct, but in the current code we will,
   in quite a few cases, not pick the 'busiest' group but the best
   (possible) group we can balance from, based on a complex set of
   constraints.

So name it a bit more neutrally, similar to the 'src/dst' nomenclature
we are already using when moving tasks between runqueues, and also
use the sched_balance_ prefix: sched_balance_find_src_group().

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1cd9a18b35e0..96a81b2fa281 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9430,7 +9430,7 @@ static void update_blocked_averages(int cpu)
 	rq_unlock_irqrestore(rq, &rf);
 }
 
-/********** Helpers for find_busiest_group ************************/
+/********** Helpers for sched_balance_find_src_group ************************/
 
 /*
  * sg_lb_stats - stats of a sched_group required for load-balancing:
@@ -9637,7 +9637,7 @@ static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
  *
  * When this is so detected; this group becomes a candidate for busiest; see
  * update_sd_pick_busiest(). And calculate_imbalance() and
- * find_busiest_group() avoid some of the usual balance conditions to allow it
+ * sched_balance_find_src_group() avoid some of the usual balance conditions to allow it
  * to create an effective group imbalance.
  *
  * This is a somewhat tricky proposition since the next run might not find the
@@ -10788,7 +10788,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	) / SCHED_CAPACITY_SCALE;
 }
 
-/******* find_busiest_group() helpers end here *********************/
+/******* sched_balance_find_src_group() helpers end here *********************/
 
 /*
  * Decision matrix according to the local and busiest group type:
@@ -10811,7 +10811,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  */
 
 /**
- * find_busiest_group - Returns the busiest group within the sched_domain
+ * sched_balance_find_src_group - Returns the busiest group within the sched_domain
  * if there is an imbalance.
  * @env: The load balancing environment.
  *
@@ -10820,7 +10820,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  *
  * Return:	- The busiest group if imbalance exists.
  */
-static struct sched_group *find_busiest_group(struct lb_env *env)
+static struct sched_group *sched_balance_find_src_group(struct lb_env *env)
 {
 	struct sg_lb_stats *local, *busiest;
 	struct sd_lb_stats sds;
@@ -11274,7 +11274,7 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 		goto out_balanced;
 	}
 
-	group = find_busiest_group(&env);
+	group = sched_balance_find_src_group(&env);
 	if (!group) {
 		schedstat_inc(sd->lb_nobusyg[idle]);
 		goto out_balanced;
@@ -11298,7 +11298,7 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 	env.flags |= LBF_ALL_PINNED;
 	if (busiest->nr_running > 1) {
 		/*
-		 * Attempt to move tasks. If find_busiest_group has found
+		 * Attempt to move tasks. If sched_balance_find_src_group has found
 		 * an imbalance but busiest->nr_running <= 1, the group is
 		 * still unbalanced. ld_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.
-- 
2.40.1



* [PATCH 09/13] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (7 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 08/13] sched/balancing: Rename find_busiest_group() => sched_balance_find_src_group() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-11  6:42   ` Honglei Wang
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 10/13] sched/balancing: Rename newidle_balance() => sched_balance_newidle() Ingo Molnar
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 8 ++++----
 kernel/sched/pelt.c | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 96a81b2fa281..95f7092043f3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9411,7 +9411,7 @@ static unsigned long task_h_load(struct task_struct *p)
 }
 #endif
 
-static void update_blocked_averages(int cpu)
+static void sched_balance_update_blocked_averages(int cpu)
 {
 	bool decayed = false, done = true;
 	struct rq *rq = cpu_rq(cpu);
@@ -12079,7 +12079,7 @@ static bool update_nohz_stats(struct rq *rq)
 	if (!time_after(jiffies, READ_ONCE(rq->last_blocked_load_update_tick)))
 		return true;
 
-	update_blocked_averages(cpu);
+	sched_balance_update_blocked_averages(cpu);
 
 	return rq->has_blocked_load;
 }
@@ -12339,7 +12339,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 	raw_spin_rq_unlock(this_rq);
 
 	t0 = sched_clock_cpu(this_cpu);
-	update_blocked_averages(this_cpu);
+	sched_balance_update_blocked_averages(this_cpu);
 
 	rcu_read_lock();
 	for_each_domain(this_cpu, sd) {
@@ -12431,7 +12431,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 		return;
 
 	/* normal load balance */
-	update_blocked_averages(this_rq->cpu);
+	sched_balance_update_blocked_averages(this_rq->cpu);
 	sched_balance_domains(this_rq, idle);
 }
 
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 63b6cf898220..f80955ecdce6 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -209,7 +209,7 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
 	 * This means that weight will be 0 but not running for a sched_entity
 	 * but also for a cfs_rq if the latter becomes idle. As an example,
 	 * this happens during idle_balance() which calls
-	 * update_blocked_averages().
+	 * sched_balance_update_blocked_averages().
 	 *
 	 * Also see the comment in accumulate_sum().
 	 */
-- 
2.40.1



* [PATCH 10/13] sched/balancing: Rename newidle_balance() => sched_balance_newidle()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (8 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 09/13] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 11/13] sched/balancing: Rename find_idlest_group_cpu() => sched_balance_find_dst_group_cpu() Ingo Molnar
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95f7092043f3..aa5ff0efcca8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4816,7 +4816,7 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
-static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
+static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf);
 
 static inline unsigned long task_util(struct task_struct *p)
 {
@@ -5136,7 +5136,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
+static inline int sched_balance_newidle(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -8253,7 +8253,7 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	if (rq->nr_running)
 		return 1;
 
-	return newidle_balance(rq, rf) != 0;
+	return sched_balance_newidle(rq, rf) != 0;
 }
 #endif /* CONFIG_SMP */
 
@@ -8505,10 +8505,10 @@ done: __maybe_unused;
 	if (!rf)
 		return NULL;
 
-	new_tasks = newidle_balance(rq, rf);
+	new_tasks = sched_balance_newidle(rq, rf);
 
 	/*
-	 * Because newidle_balance() releases (and re-acquires) rq->lock, it is
+	 * Because sched_balance_newidle() releases (and re-acquires) rq->lock, it is
 	 * possible for any higher priority task to appear. In that case we
 	 * must re-start the pick_next_entity() loop.
 	 */
@@ -11493,7 +11493,7 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 	ld_moved = 0;
 
 	/*
-	 * newidle_balance() disregards balance intervals, so we could
+	 * sched_balance_newidle() disregards balance intervals, so we could
 	 * repeatedly reach this code, which would lead to balance_interval
 	 * skyrocketing in a short amount of time. Skip the balance_interval
 	 * increase logic to avoid that.
@@ -12277,7 +12277,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
 #endif /* CONFIG_NO_HZ_COMMON */
 
 /*
- * newidle_balance is called by schedule() if this_cpu is about to become
+ * sched_balance_newidle is called by schedule() if this_cpu is about to become
  * idle. Attempts to pull tasks from other CPUs.
  *
  * Returns:
@@ -12285,7 +12285,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
  *     0 - failed, no new tasks
  *   > 0 - success, new (fair) tasks present
  */
-static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
+static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 11/13] sched/balancing: Rename find_idlest_group_cpu() => sched_balance_find_dst_group_cpu()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (9 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 10/13] sched/balancing: Rename newidle_balance() => sched_balance_newidle() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 12/13] sched/balancing: Rename find_idlest_group() => sched_balance_find_dst_group() Ingo Molnar
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also use 'dst' instead of 'idlest': while historically correct,
it's not really true anymore that we return the 'idlest' group
or CPU; we sort by idle-exit latency and only return the idlest
CPUs from the lowest-latency set of CPUs.

The true 'idlest' CPUs often remain idle for a long time
and are never returned as long as the system is under-loaded.
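
For reference, the dst-CPU choice within the group boils down to
something like this - a simplified sketch, not the exact kernel code
(the real sched_balance_find_dst_group_cpu() additionally breaks ties
between equally shallow idle CPUs by most recent idle timestamp):

	unsigned long load, min_load = ULONG_MAX;
	unsigned int min_exit_latency = UINT_MAX;
	int i, shallowest_idle_cpu = -1, least_loaded_cpu = -1;

	for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
		if (available_idle_cpu(i)) {
			/* Prefer the shallowest idle state (lowest exit latency): */
			struct cpuidle_state *idle = idle_get_state(cpu_rq(i));

			if (!idle || idle->exit_latency < min_exit_latency) {
				min_exit_latency = idle ? idle->exit_latency : 0;
				shallowest_idle_cpu = i;
			}
		} else if (shallowest_idle_cpu == -1) {
			/* No idle CPU seen so far: track the least loaded one: */
			load = cpu_load(cpu_rq(i));
			if (load < min_load) {
				min_load = load;
				least_loaded_cpu = i;
			}
		}
	}

	/* Any CPU from the lowest-latency idle set beats a loaded CPU: */
	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;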

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa5ff0efcca8..02ff0272b2e4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7101,10 +7101,10 @@ static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
 
 /*
- * find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
+ * sched_balance_find_dst_group_cpu - find the idlest CPU among the CPUs in the group.
  */
 static int
-find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
+sched_balance_find_dst_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 {
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
@@ -7191,7 +7191,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 			continue;
 		}
 
-		new_cpu = find_idlest_group_cpu(group, p, cpu);
+		new_cpu = sched_balance_find_dst_group_cpu(group, p, cpu);
 		if (new_cpu == cpu) {
 			/* Now try balancing at a lower domain level of 'cpu': */
 			sd = sd->child;
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 12/13] sched/balancing: Rename find_idlest_group() => sched_balance_find_dst_group()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (10 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 11/13] sched/balancing: Rename find_idlest_group_cpu() => sched_balance_find_dst_group_cpu() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:18 ` [PATCH 13/13] sched/balancing: Rename find_idlest_cpu() => sched_balance_find_dst_cpu() Ingo Molnar
  2024-03-08 11:25 ` [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also use 'dst' instead of 'idlest', because it's not really
true that we return the 'idlest' group or CPU; we sort by
idle-exit latency and only return the idlest CPUs from the
lowest-latency set of CPUs.

The true 'idlest' CPUs often remain idle for a long time
and are never returned as long as the system is under-loaded.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 02ff0272b2e4..d0c3a091d7d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7098,7 +7098,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 }
 
 static struct sched_group *
-find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
+sched_balance_find_dst_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
 
 /*
  * sched_balance_find_dst_group_cpu - find the idlest CPU among the CPUs in the group.
@@ -7185,7 +7185,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 			continue;
 		}
 
-		group = find_idlest_group(sd, p, cpu);
+		group = sched_balance_find_dst_group(sd, p, cpu);
 		if (!group) {
 			sd = sd->child;
 			continue;
@@ -10296,13 +10296,13 @@ static bool update_pick_idlest(struct sched_group *idlest,
 }
 
 /*
- * find_idlest_group() finds and returns the least busy CPU group within the
+ * sched_balance_find_dst_group() finds and returns the least busy CPU group within the
  * domain.
  *
  * Assumes p is allowed on at least one CPU in sd.
  */
 static struct sched_group *
-find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
+sched_balance_find_dst_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 {
 	struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
 	struct sg_lb_stats local_sgs, tmp_sgs;
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 13/13] sched/balancing: Rename find_idlest_cpu() => sched_balance_find_dst_cpu()
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (11 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 12/13] sched/balancing: Rename find_idlest_group() => sched_balance_find_dst_group() Ingo Molnar
@ 2024-03-08 11:18 ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2024-03-08 11:25 ` [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
  13 siblings, 1 reply; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also use 'dst' instead of 'idlest', because it's not really
true that we return the 'idlest' group or CPU; we sort by
idle-exit latency and only return the idlest CPUs from the
lowest-latency set of CPUs.

The true 'idlest' CPUs often remain idle for a long time
and are never returned as long as the system is under-loaded.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d0c3a091d7d1..4b3c4a181a91 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7160,7 +7160,7 @@ sched_balance_find_dst_group_cpu(struct sched_group *group, struct task_struct *
 	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
 }
 
-static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
+static inline int sched_balance_find_dst_cpu(struct sched_domain *sd, struct task_struct *p,
 				  int cpu, int prev_cpu, int sd_flag)
 {
 	int new_cpu = cpu;
@@ -7936,7 +7936,7 @@ compute_energy(struct energy_env *eenv, struct perf_domain *pd,
  * NOTE: Forkees are not accepted in the energy-aware wake-up path because
  * they don't have any useful utilization data yet and it's not possible to
  * forecast their impact on energy consumption. Consequently, they will be
- * placed by find_idlest_cpu() on the least loaded CPU, which might turn out
+ * placed by sched_balance_find_dst_cpu() on the least loaded CPU, which might turn out
  * to be energy-inefficient in some use-cases. The alternative would be to
  * bias new tasks towards specific types of CPUs first, or to try to infer
  * their util_avg from the parent task, but those heuristics could hurt
@@ -8201,7 +8201,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 
 	if (unlikely(sd)) {
 		/* Slow path */
-		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
+		new_cpu = sched_balance_find_dst_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
 		/* Fast path */
 		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions
  2024-03-08 11:18 [PATCH -v1 00/13] sched/balancing: Standardize the naming of scheduler load-balancing functions Ingo Molnar
                   ` (12 preceding siblings ...)
  2024-03-08 11:18 ` [PATCH 13/13] sched/balancing: Rename find_idlest_cpu() => sched_balance_find_dst_cpu() Ingo Molnar
@ 2024-03-08 11:25 ` Ingo Molnar
  13 siblings, 0 replies; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 11:25 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot


* Ingo Molnar <mingo@kernel.org> wrote:

> Over the years we've grown a colorful zoo of scheduler
> load-balancing function names - both following random,
> idiosyncratic patterns, and gaining historic misnomers
> that are not accurate anymore.
> 
> We have 'newidle_balance()' to rebalance newly idle tasks,
> but 'balance_domains()' to rebalance domains. We have
> a find_idlest_cpu() function whose purpose is not to find
> the idlest CPU anymore, and a find_busiest_queue() function
> whose purpose is not to find the busiest runqueue anymore.
> 
> Fix most of the misnomers and organize the functions along the
> sched_balance_*() namespace:
> 
>   scheduler_tick()		=> sched_tick()
>   run_rebalance_domains()	=> sched_balance_softirq()
>   trigger_load_balance()	=> sched_balance_trigger()
>   rebalance_domains()		=> sched_balance_domains()
>   load_balance()		=> sched_balance_rq()
>   newidle_balance()		=> sched_balance_newidle()
>   find_busiest_queue()	=> sched_balance_find_src_rq()
>   find_busiest_group()	=> sched_balance_find_src_group()
>   find_idlest_group_cpu()	=> sched_balance_find_dst_group_cpu()
>   find_idlest_group()		=> sched_balance_find_dst_group()
>   find_idlest_cpu()		=> sched_balance_find_dst_cpu()
>   update_blocked_averages()	=> sched_balance_update_blocked_averages()

Forgot to mention that this series is on top of the scheduler tree 
(tip:sched/core) plus my other pending queue:

   https://lore.kernel.org/r/20240308105901.1096078-1-mingo@kernel.org

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 07/13] sched/balancing: Rename find_src_rq() => sched_balance_find_src_rq()
  2024-03-08 11:18 ` [PATCH 07/13] sched/balancing: Rename find_src_rq() " Ingo Molnar
@ 2024-03-08 13:51   ` Vincent Guittot
  2024-03-08 17:49     ` Ingo Molnar
  0 siblings, 1 reply; 33+ messages in thread
From: Vincent Guittot @ 2024-03-08 13:51 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider

On Fri, 8 Mar 2024 at 12:18, Ingo Molnar <mingo@kernel.org> wrote:
>
> Standardize scheduler load-balancing function names on the
> sched_balance_() prefix.

This patch renames the renaming done by the previous one. The two
could be merged into one patch:

sched/balancing: Rename find_busiest_queue() => find_src_rq()
sched/balancing: Rename find_src_rq() => sched_balance_find_src_rq()



>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e600cac7806d..1cd9a18b35e0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10959,9 +10959,9 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>  }
>
>  /*
> - * find_src_rq - find the busiest runqueue among the CPUs in the group.
> + * sched_balance_find_src_rq - find the busiest runqueue among the CPUs in the group.
>   */
> -static struct rq *find_src_rq(struct lb_env *env,
> +static struct rq *sched_balance_find_src_rq(struct lb_env *env,
>                                      struct sched_group *group)
>  {
>         struct rq *busiest = NULL, *rq;
> @@ -11280,7 +11280,7 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
>                 goto out_balanced;
>         }
>
> -       busiest = find_src_rq(&env, group);
> +       busiest = sched_balance_find_src_rq(&env, group);
>         if (!busiest) {
>                 schedstat_inc(sd->lb_nobusyq[idle]);
>                 goto out_balanced;
> --
> 2.40.1
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 07/13] sched/balancing: Rename find_src_rq() => sched_balance_find_src_rq()
  2024-03-08 13:51   ` Vincent Guittot
@ 2024-03-08 17:49     ` Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: Ingo Molnar @ 2024-03-08 17:49 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider


* Vincent Guittot <vincent.guittot@linaro.org> wrote:

> On Fri, 8 Mar 2024 at 12:18, Ingo Molnar <mingo@kernel.org> wrote:
> >
> > Standardize scheduler load-balancing function names on the
> > sched_balance_() prefix.
> 
> This patch renames the renaming done by the previous one. The two
> could be merged into one patch:
> 
> sched/balancing: Rename find_busiest_queue() => find_src_rq()
> sched/balancing: Rename find_src_rq() => sched_balance_find_src_rq()

Yeah - I already did that in the 00/13 summary description, and
have now done it in the series as well.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 09/13] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages()
  2024-03-08 11:18 ` [PATCH 09/13] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages() Ingo Molnar
@ 2024-03-11  6:42   ` Honglei Wang
  2024-03-12 10:36     ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  1 sibling, 1 reply; 33+ messages in thread
From: Honglei Wang @ 2024-03-11  6:42 UTC (permalink / raw)
  To: Ingo Molnar, linux-kernel
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot



On 2024/3/8 19:18, Ingo Molnar wrote:
> Standardize scheduler load-balancing function names on the
> sched_balance_() prefix.
> 
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>   kernel/sched/fair.c | 8 ++++----
>   kernel/sched/pelt.c | 2 +-
>   2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 96a81b2fa281..95f7092043f3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9411,7 +9411,7 @@ static unsigned long task_h_load(struct task_struct *p)
>   }
>   #endif
>   
> -static void update_blocked_averages(int cpu)
> +static void sched_balance_update_blocked_averages(int cpu)
>   {
>   	bool decayed = false, done = true;
>   	struct rq *rq = cpu_rq(cpu);
> @@ -12079,7 +12079,7 @@ static bool update_nohz_stats(struct rq *rq)
>   	if (!time_after(jiffies, READ_ONCE(rq->last_blocked_load_update_tick)))
>   		return true;
>   
> -	update_blocked_averages(cpu);
> +	sched_balance_update_blocked_averages(cpu);
>   
>   	return rq->has_blocked_load;
>   }
> @@ -12339,7 +12339,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>   	raw_spin_rq_unlock(this_rq);
>   
>   	t0 = sched_clock_cpu(this_cpu);
> -	update_blocked_averages(this_cpu);
> +	sched_balance_update_blocked_averages(this_cpu);
>   
>   	rcu_read_lock();
>   	for_each_domain(this_cpu, sd) {
> @@ -12431,7 +12431,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
>   		return;
>   
>   	/* normal load balance */
> -	update_blocked_averages(this_rq->cpu);
> +	sched_balance_update_blocked_averages(this_rq->cpu);
>   	sched_balance_domains(this_rq, idle);
>   }
>   
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index 63b6cf898220..f80955ecdce6 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -209,7 +209,7 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
>   	 * This means that weight will be 0 but not running for a sched_entity
>   	 * but also for a cfs_rq if the latter becomes idle. As an example,
>   	 * this happens during idle_balance() which calls

Could we also fix this ghost idle_balance() reference in this series
(maybe in patch 10)?

Honglei
> -	 * update_blocked_averages().
> +	 * sched_balance_update_blocked_averages().
>   	 *
>   	 * Also see the comment in accumulate_sum().
>   	 */


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 05/13] sched/balancing: Rename load_balance() => sched_balance_rq()
  2024-03-08 11:18 ` [PATCH 05/13] sched/balancing: Rename load_balance() => sched_balance_rq() Ingo Molnar
@ 2024-03-11  8:17   ` Shrikanth Hegde
  2024-03-12 10:27     ` Ingo Molnar
  2024-03-12 12:00   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  1 sibling, 1 reply; 33+ messages in thread
From: Shrikanth Hegde @ 2024-03-11  8:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Valentin Schneider, Vincent Guittot, LKML



On 3/8/24 4:48 PM, Ingo Molnar wrote:
> Standardize scheduler load-balancing function names on the
> sched_balance_() prefix.
> 
> Also load_balance() has become somewhat of a misnomer: historically
> it was the first and primary load-balancing function that was called,
> but with the introduction of sched domains, it's become a lower
> layer function that balances runqueues.
> 
> Rename it to sched_balance_rq() accordingly.

nit: Could this be sched_balance_rqs(), since load balancing happens
between two runqueues?


> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> ---

Though one would have been familiar with the old names (even for
someone who started recently), the notes on the actual behaviour and
the historical context help explain why the name changes make sense.

Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 05/13] sched/balancing: Rename load_balance() => sched_balance_rq()
  2024-03-11  8:17   ` Shrikanth Hegde
@ 2024-03-12 10:27     ` Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: Ingo Molnar @ 2024-03-12 10:27 UTC (permalink / raw)
  To: Shrikanth Hegde
  Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Valentin Schneider, Vincent Guittot, LKML


* Shrikanth Hegde <sshegde@linux.ibm.com> wrote:

> 
> 
> On 3/8/24 4:48 PM, Ingo Molnar wrote:
> > Standardize scheduler load-balancing function names on the
> > sched_balance_() prefix.
> > 
> > Also load_balance() has become somewhat of a misnomer: historically
> > it was the first and primary load-balancing function that was called,
> > but with the introduction of sched domains, it's become a lower
> > layer function that balances runqueues.
> > 
> > Rename it to sched_balance_rq() accordingly.
> 
> nit: Could this be sched_balance_rqs(), since load balancing happens
> between two runqueues?

Yeah, but we really are primarily balancing *this* runqueue - because it
potentially got out of balance due to a newidle event, or we are checking
its balance in the periodic load-balancing tick. So it's really a shortcut 
for 'balance this runqueue' - singular, although internally it will indeed 
search for a source runqueue to move tasks from.

So it's kind of a pull-balancing model, with a singular target (this_cpu).
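
Roughly, with the new names (simplified sketch, not the exact code):

	/* sched_balance_rq(): pull towards a singular target, this_rq: */
	group   = sched_balance_find_src_group(&env);	  /* best source group */
	busiest = sched_balance_find_src_rq(&env, group); /* best source rq    */

	ld_moved = detach_tasks(&env);	/* take tasks off the source rq ...  */
	attach_tasks(&env);		/* ... and attach them to this_rq    */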

> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > Cc: Linus Torvalds <torvalds@linux-foundation.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
> > Cc: Valentin Schneider <vschneid@redhat.com>
> > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> 
> Though one would have been familiar with the old names (even for
> someone who started recently), the notes on the actual behaviour and
> the historical context help explain why the name changes make sense.
> 
> Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>

Thanks! I've added your Reviewed-by tags to the series.

	Ingo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 09/13] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages()
  2024-03-11  6:42   ` Honglei Wang
@ 2024-03-12 10:36     ` Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: Ingo Molnar @ 2024-03-12 10:36 UTC (permalink / raw)
  To: Honglei Wang
  Cc: linux-kernel, Dietmar Eggemann, Linus Torvalds, Peter Zijlstra,
	Shrikanth Hegde, Valentin Schneider, Vincent Guittot


* Honglei Wang <jameshongleiwang@126.com> wrote:

> > --- a/kernel/sched/pelt.c
> > +++ b/kernel/sched/pelt.c
> > @@ -209,7 +209,7 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
> >   	 * This means that weight will be 0 but not running for a sched_entity
> >   	 * but also for a cfs_rq if the latter becomes idle. As an example,
> >   	 * this happens during idle_balance() which calls
> 
> Could we also fix this ghost idle_balance() reference in this series
> (maybe in patch 10)?

Good point - I've added the patch below.

Thanks,

	Ingo

===================>
From: Ingo Molnar <mingo@kernel.org>
Date: Tue, 12 Mar 2024 11:33:50 +0100
Subject: [PATCH] sched/balancing: Fix a couple of outdated function names in comments

The 'idle_balance()' function hasn't existed for years, and there's no
load_balance_newidle() either - both are sched_balance_newidle() today.

Reported-by: Honglei Wang <jameshongleiwang@126.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 2 +-
 kernel/sched/pelt.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 54177ff96e4b..c35452109c76 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6866,7 +6866,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 #ifdef CONFIG_SMP
 
-/* Working cpumask for: sched_balance_rq, load_balance_newidle. */
+/* Working cpumask for: sched_balance_rq(), sched_balance_newidle(). */
 static DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
 static DEFINE_PER_CPU(cpumask_var_t, select_rq_mask);
 static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index f80955ecdce6..3a96da25b67c 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -208,7 +208,7 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
 	 * se has been already dequeued but cfs_rq->curr still points to it.
 	 * This means that weight will be 0 but not running for a sched_entity
 	 * but also for a cfs_rq if the latter becomes idle. As an example,
-	 * this happens during idle_balance() which calls
+	 * this happens during sched_balance_newidle() which calls
 	 * sched_balance_update_blocked_averages().
 	 *
 	 * Also see the comment in accumulate_sum().


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename find_idlest_cpu() => sched_balance_find_dst_cpu()
  2024-03-08 11:18 ` [PATCH 13/13] sched/balancing: Rename find_idlest_cpu() => sched_balance_find_dst_cpu() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     686d148cbb5a1c2891914b8d11147d3c5556a29a
Gitweb:        https://git.kernel.org/tip/686d148cbb5a1c2891914b8d11147d3c5556a29a
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:19 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename find_idlest_cpu() => sched_balance_find_dst_cpu()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also use 'dst' instead of 'idlest', because it's not really
true that we return the 'idlest' group or CPU; we sort by
idle-exit latency and only return the idlest CPUs from the
lowest-latency set of CPUs.

The true 'idlest' CPUs often remain idle for a long time
and are never returned as long as the system is under-loaded.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-14-mingo@kernel.org
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d0c3a09..4b3c4a1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7160,7 +7160,7 @@ sched_balance_find_dst_group_cpu(struct sched_group *group, struct task_struct *
 	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
 }
 
-static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
+static inline int sched_balance_find_dst_cpu(struct sched_domain *sd, struct task_struct *p,
 				  int cpu, int prev_cpu, int sd_flag)
 {
 	int new_cpu = cpu;
@@ -7936,7 +7936,7 @@ compute_energy(struct energy_env *eenv, struct perf_domain *pd,
  * NOTE: Forkees are not accepted in the energy-aware wake-up path because
  * they don't have any useful utilization data yet and it's not possible to
  * forecast their impact on energy consumption. Consequently, they will be
- * placed by find_idlest_cpu() on the least loaded CPU, which might turn out
+ * placed by sched_balance_find_dst_cpu() on the least loaded CPU, which might turn out
  * to be energy-inefficient in some use-cases. The alternative would be to
  * bias new tasks towards specific types of CPUs first, or to try to infer
  * their util_avg from the parent task, but those heuristics could hurt
@@ -8201,7 +8201,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 
 	if (unlikely(sd)) {
 		/* Slow path */
-		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
+		new_cpu = sched_balance_find_dst_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
 		/* Fast path */
 		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename find_idlest_group() => sched_balance_find_dst_group()
  2024-03-08 11:18 ` [PATCH 12/13] sched/balancing: Rename find_idlest_group() => sched_balance_find_dst_group() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     a88b17080294f735c4124acccfa2d803a6a7d46f
Gitweb:        https://git.kernel.org/tip/a88b17080294f735c4124acccfa2d803a6a7d46f
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:18 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename find_idlest_group() => sched_balance_find_dst_group()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also use 'dst' instead of 'idlest', because it's not really
true that we return the 'idlest' group or CPU; we sort by
idle-exit latency and only return the idlest CPUs from the
lowest-latency set of CPUs.

The true 'idlest' CPUs often remain idle for a long time
and are never returned as long as the system is under-loaded.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-13-mingo@kernel.org
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 02ff027..d0c3a09 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7098,7 +7098,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 }
 
 static struct sched_group *
-find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
+sched_balance_find_dst_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
 
 /*
  * sched_balance_find_dst_group_cpu - find the idlest CPU among the CPUs in the group.
@@ -7185,7 +7185,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 			continue;
 		}
 
-		group = find_idlest_group(sd, p, cpu);
+		group = sched_balance_find_dst_group(sd, p, cpu);
 		if (!group) {
 			sd = sd->child;
 			continue;
@@ -10296,13 +10296,13 @@ static bool update_pick_idlest(struct sched_group *idlest,
 }
 
 /*
- * find_idlest_group() finds and returns the least busy CPU group within the
+ * sched_balance_find_dst_group() finds and returns the least busy CPU group within the
  * domain.
  *
  * Assumes p is allowed on at least one CPU in sd.
  */
 static struct sched_group *
-find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
+sched_balance_find_dst_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 {
 	struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
 	struct sg_lb_stats local_sgs, tmp_sgs;

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename find_idlest_group_cpu() => sched_balance_find_dst_group_cpu()
  2024-03-08 11:18 ` [PATCH 11/13] sched/balancing: Rename find_idlest_group_cpu() => sched_balance_find_dst_group_cpu() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     646ebaf51c64c6416ca89765c20041363fc1b518
Gitweb:        https://git.kernel.org/tip/646ebaf51c64c6416ca89765c20041363fc1b518
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:17 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename find_idlest_group_cpu() => sched_balance_find_dst_group_cpu()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also use 'dst' instead of 'idlest': while historically correct,
it's not really true anymore that we return the 'idlest' group
or CPU; we sort by idle-exit latency and only return the idlest
CPUs from the lowest-latency set of CPUs.

The true 'idlest' CPUs often remain idle for a long time
and are never returned as long as the system is under-loaded.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-12-mingo@kernel.org
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa5ff0e..02ff027 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7101,10 +7101,10 @@ static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
 
 /*
- * find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
+ * sched_balance_find_dst_group_cpu - find the idlest CPU among the CPUs in the group.
  */
 static int
-find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
+sched_balance_find_dst_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 {
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
@@ -7191,7 +7191,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 			continue;
 		}
 
-		new_cpu = find_idlest_group_cpu(group, p, cpu);
+		new_cpu = sched_balance_find_dst_group_cpu(group, p, cpu);
 		if (new_cpu == cpu) {
 			/* Now try balancing at a lower domain level of 'cpu': */
 			sd = sd->child;

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename newidle_balance() => sched_balance_newidle()
  2024-03-08 11:18 ` [PATCH 10/13] sched/balancing: Rename newidle_balance() => sched_balance_newidle() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     7d058285cd77cc1411c91efd1b1673530bb1bee8
Gitweb:        https://git.kernel.org/tip/7d058285cd77cc1411c91efd1b1673530bb1bee8
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:16 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename newidle_balance() => sched_balance_newidle()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-11-mingo@kernel.org
---
 kernel/sched/fair.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95f7092..aa5ff0e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4816,7 +4816,7 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
-static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
+static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf);
 
 static inline unsigned long task_util(struct task_struct *p)
 {
@@ -5136,7 +5136,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
+static inline int sched_balance_newidle(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -8253,7 +8253,7 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	if (rq->nr_running)
 		return 1;
 
-	return newidle_balance(rq, rf) != 0;
+	return sched_balance_newidle(rq, rf) != 0;
 }
 #endif /* CONFIG_SMP */
 
@@ -8505,10 +8505,10 @@ idle:
 	if (!rf)
 		return NULL;
 
-	new_tasks = newidle_balance(rq, rf);
+	new_tasks = sched_balance_newidle(rq, rf);
 
 	/*
-	 * Because newidle_balance() releases (and re-acquires) rq->lock, it is
+	 * Because sched_balance_newidle() releases (and re-acquires) rq->lock, it is
 	 * possible for any higher priority task to appear. In that case we
 	 * must re-start the pick_next_entity() loop.
 	 */
@@ -11493,7 +11493,7 @@ out_one_pinned:
 	ld_moved = 0;
 
 	/*
-	 * newidle_balance() disregards balance intervals, so we could
+	 * sched_balance_newidle() disregards balance intervals, so we could
 	 * repeatedly reach this code, which would lead to balance_interval
 	 * skyrocketing in a short amount of time. Skip the balance_interval
 	 * increase logic to avoid that.
@@ -12277,7 +12277,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
 #endif /* CONFIG_NO_HZ_COMMON */
 
 /*
- * newidle_balance is called by schedule() if this_cpu is about to become
+ * sched_balance_newidle is called by schedule() if this_cpu is about to become
  * idle. Attempts to pull tasks from other CPUs.
  *
  * Returns:
@@ -12285,7 +12285,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
  *     0 - failed, no new tasks
  *   > 0 - success, new (fair) tasks present
  */
-static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
+static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages()
  2024-03-08 11:18 ` [PATCH 09/13] sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages() Ingo Molnar
  2024-03-11  6:42   ` Honglei Wang
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  1 sibling, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     391b7a5335c45b2bafe535cb440836ccd17515aa
Gitweb:        https://git.kernel.org/tip/391b7a5335c45b2bafe535cb440836ccd17515aa
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:15 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename update_blocked_averages() => sched_balance_update_blocked_averages()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-10-mingo@kernel.org
---
 kernel/sched/fair.c | 8 ++++----
 kernel/sched/pelt.c | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 96a81b2..95f7092 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9411,7 +9411,7 @@ static unsigned long task_h_load(struct task_struct *p)
 }
 #endif
 
-static void update_blocked_averages(int cpu)
+static void sched_balance_update_blocked_averages(int cpu)
 {
 	bool decayed = false, done = true;
 	struct rq *rq = cpu_rq(cpu);
@@ -12079,7 +12079,7 @@ static bool update_nohz_stats(struct rq *rq)
 	if (!time_after(jiffies, READ_ONCE(rq->last_blocked_load_update_tick)))
 		return true;
 
-	update_blocked_averages(cpu);
+	sched_balance_update_blocked_averages(cpu);
 
 	return rq->has_blocked_load;
 }
@@ -12339,7 +12339,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 	raw_spin_rq_unlock(this_rq);
 
 	t0 = sched_clock_cpu(this_cpu);
-	update_blocked_averages(this_cpu);
+	sched_balance_update_blocked_averages(this_cpu);
 
 	rcu_read_lock();
 	for_each_domain(this_cpu, sd) {
@@ -12431,7 +12431,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 		return;
 
 	/* normal load balance */
-	update_blocked_averages(this_rq->cpu);
+	sched_balance_update_blocked_averages(this_rq->cpu);
 	sched_balance_domains(this_rq, idle);
 }
 
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 63b6cf8..f80955e 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -209,7 +209,7 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
 	 * This means that weight will be 0 but not running for a sched_entity
 	 * but also for a cfs_rq if the latter becomes idle. As an example,
 	 * this happens during idle_balance() which calls
-	 * update_blocked_averages().
+	 * sched_balance_update_blocked_averages().
 	 *
 	 * Also see the comment in accumulate_sum().
 	 */

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename find_busiest_group() => sched_balance_find_src_group()
  2024-03-08 11:18 ` [PATCH 08/13] sched/balancing: Rename find_busiest_group() => sched_balance_find_src_group() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     82cf921432fc184adbbb9c1bced182564876ec5e
Gitweb:        https://git.kernel.org/tip/82cf921432fc184adbbb9c1bced182564876ec5e
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:14 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename find_busiest_group() => sched_balance_find_src_group()

Make two naming changes:

1)
   Standardize scheduler load-balancing function names on the
   sched_balance_() prefix.

2)
   Similar to find_busiest_queue(), the find_busiest_group() naming
   has become a bit of a misnomer: the 'busiest' qualifier to this
   function was historically correct, but in the current code we will,
   in quite a few cases, not pick the 'busiest' group - but the best
   (possible) group we can balance from based on a complex set of
   constraints.

So name it a bit more neutrally, similar to the 'src/dst' nomenclature
we are already using when moving tasks between runqueues, and also
use the sched_balance_ prefix: sched_balance_find_src_group().
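
For example, the group-type checks can force or veto balancing
regardless of raw load (simplified sketch, not the full decision
matrix):

	if (busiest->group_type == group_misfit_task)
		goto force_balance;	/* picked even if not the busiest */

	if (local->group_type > busiest->group_type)
		goto out_balanced;	/* we are worse off: don't balance */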

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-9-mingo@kernel.org
---
 kernel/sched/fair.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1cd9a18..96a81b2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9430,7 +9430,7 @@ static void update_blocked_averages(int cpu)
 	rq_unlock_irqrestore(rq, &rf);
 }
 
-/********** Helpers for find_busiest_group ************************/
+/********** Helpers for sched_balance_find_src_group ************************/
 
 /*
  * sg_lb_stats - stats of a sched_group required for load-balancing:
@@ -9637,7 +9637,7 @@ static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
  *
  * When this is so detected; this group becomes a candidate for busiest; see
  * update_sd_pick_busiest(). And calculate_imbalance() and
- * find_busiest_group() avoid some of the usual balance conditions to allow it
+ * sched_balance_find_src_group() avoid some of the usual balance conditions to allow it
  * to create an effective group imbalance.
  *
  * This is a somewhat tricky proposition since the next run might not find the
@@ -10788,7 +10788,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	) / SCHED_CAPACITY_SCALE;
 }
 
-/******* find_busiest_group() helpers end here *********************/
+/******* sched_balance_find_src_group() helpers end here *********************/
 
 /*
  * Decision matrix according to the local and busiest group type:
@@ -10811,7 +10811,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  */
 
 /**
- * find_busiest_group - Returns the busiest group within the sched_domain
+ * sched_balance_find_src_group - Returns the busiest group within the sched_domain
  * if there is an imbalance.
  * @env: The load balancing environment.
  *
@@ -10820,7 +10820,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  *
  * Return:	- The busiest group if imbalance exists.
  */
-static struct sched_group *find_busiest_group(struct lb_env *env)
+static struct sched_group *sched_balance_find_src_group(struct lb_env *env)
 {
 	struct sg_lb_stats *local, *busiest;
 	struct sd_lb_stats sds;
@@ -11274,7 +11274,7 @@ redo:
 		goto out_balanced;
 	}
 
-	group = find_busiest_group(&env);
+	group = sched_balance_find_src_group(&env);
 	if (!group) {
 		schedstat_inc(sd->lb_nobusyg[idle]);
 		goto out_balanced;
@@ -11298,7 +11298,7 @@ redo:
 	env.flags |= LBF_ALL_PINNED;
 	if (busiest->nr_running > 1) {
 		/*
-		 * Attempt to move tasks. If find_busiest_group has found
+		 * Attempt to move tasks. If sched_balance_find_src_group has found
 		 * an imbalance but busiest->nr_running <= 1, the group is
 		 * still unbalanced. ld_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename find_busiest_queue() => sched_balance_find_src_rq()
  2024-03-08 11:18 ` [PATCH 06/13] sched/balancing: Rename find_busiest_queue() => find_src_rq() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     f1cd2e2e79d283e315356bd403c7f928e994f057
Gitweb:        https://git.kernel.org/tip/f1cd2e2e79d283e315356bd403c7f928e994f057
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Mon, 23 Oct 2023 13:04:12 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename find_busiest_queue() => sched_balance_find_src_rq()

The find_busiest_queue() naming has two small quirks:

 - Scheduler functions that deal with runqueues usually have a rq_ prefix
   or _rq postfix, but this function has neither.

 - Plus the 'busiest' qualifier to this function was historically
   correct, but has become somewhat of a misnomer: in quite a few
   cases we will not pick the busiest runqueue - but the best
   (possible) runqueue we can balance tasks from. So name it a
   bit more neutrally, similar to the 'src/dst' nomenclature
   we are already using when moving tasks between runqueues.

To fix both quirks, and to standardize scheduler load-balancing
function names on the sched_balance_() prefix, rename the
function to sched_balance_find_src_rq().
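
For example, the rq scan can skip the nominally busiest runqueue
outright - a simplified sketch, where the fbq_type filtering is
the NUMA-balancing task-type case:

	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
		struct rq *rq = cpu_rq(i);

		/* Skip rqs whose task mix doesn't fit this balancing pass: */
		if (fbq_classify_rq(rq) > env->fbq_type)
			continue;

		/* Remaining candidates compete on load/capacity/misfit. */
	}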

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-7-mingo@kernel.org
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0d2753c..1cd9a18 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10959,9 +10959,9 @@ out_balanced:
 }
 
 /*
- * find_busiest_queue - find the busiest runqueue among the CPUs in the group.
+ * sched_balance_find_src_rq - find the busiest runqueue among the CPUs in the group.
  */
-static struct rq *find_busiest_queue(struct lb_env *env,
+static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 				     struct sched_group *group)
 {
 	struct rq *busiest = NULL, *rq;
@@ -11280,7 +11280,7 @@ redo:
 		goto out_balanced;
 	}
 
-	busiest = find_busiest_queue(&env, group);
+	busiest = sched_balance_find_src_rq(&env, group);
 	if (!busiest) {
 		schedstat_inc(sd->lb_nobusyq[idle]);
 		goto out_balanced;

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [tip: sched/core] sched/balancing: Rename load_balance() => sched_balance_rq()
  2024-03-08 11:18 ` [PATCH 05/13] sched/balancing: Rename load_balance() => sched_balance_rq() Ingo Molnar
  2024-03-11  8:17   ` Shrikanth Hegde
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  1 sibling, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     4c3e509ea9f249458e8692f8298cceac73105948
Gitweb:        https://git.kernel.org/tip/4c3e509ea9f249458e8692f8298cceac73105948
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:11 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 12:00:00 +01:00

sched/balancing: Rename load_balance() => sched_balance_rq()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also load_balance() has become somewhat of a misnomer: historically
it was the first and primary load-balancing function that was called,
but with the introduction of sched domains, it's become a lower
layer function that balances runqueues.

Rename it to sched_balance_rq() accordingly.
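
For reference, this is roughly where sched_balance_rq() now sits in
the periodic balancing path, using the new names (simplified sketch):

	sched_tick()
	  sched_balance_trigger()
	    raise_softirq(SCHED_SOFTIRQ)
	      => sched_balance_softirq()
	           sched_balance_domains()
	             sched_balance_rq()		/* per sched domain */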

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-6-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                    |  4 +-
 Documentation/scheduler/sched-stats.rst                      | 32 +++----
 Documentation/translations/zh_CN/scheduler/sched-domains.rst |  4 +-
 Documentation/translations/zh_CN/scheduler/sched-stats.rst   | 30 +++----
 include/linux/sched/topology.h                               |  2 +-
 kernel/sched/fair.c                                          | 10 +-
 6 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 5d8e8b8..5e996fe 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -41,11 +41,11 @@ The latter function takes two arguments: the runqueue of current CPU and whether
 the CPU was idle at the time the sched_tick() happened and iterates over all
 sched domains our CPU is on, starting from its base domain and going up the ->parent
 chain. While doing that, it checks to see if the current domain has exhausted its
-rebalance interval. If so, it runs load_balance() on that domain. It then checks
+rebalance interval. If so, it runs sched_balance_rq() on that domain. It then checks
 the parent sched_domain (if it exists), and the parent of the parent and so
 forth.
 
-Initially, load_balance() finds the busiest group in the current sched domain.
+Initially, sched_balance_rq() finds the busiest group in the current sched domain.
 If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
 that group. If it manages to find such a runqueue, it locks both our initial
 CPU's runqueue and the newly found busiest one and starts moving tasks from it
diff --git a/Documentation/scheduler/sched-stats.rst b/Documentation/scheduler/sched-stats.rst
index 73c4126..7c2b16c 100644
--- a/Documentation/scheduler/sched-stats.rst
+++ b/Documentation/scheduler/sched-stats.rst
@@ -77,53 +77,53 @@ domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 
 
 The first field is a bit mask indicating what cpus this domain operates over.
 
-The next 24 are a variety of load_balance() statistics in grouped into types
+The next 24 are a variety of sched_balance_rq() statistics in grouped into types
 of idleness (idle, busy, and newly idle):
 
-    1)  # of times in this domain load_balance() was called when the
+    1)  # of times in this domain sched_balance_rq() was called when the
         cpu was idle
-    2)  # of times in this domain load_balance() checked but found
+    2)  # of times in this domain sched_balance_rq() checked but found
         the load did not require balancing when the cpu was idle
-    3)  # of times in this domain load_balance() tried to move one or
+    3)  # of times in this domain sched_balance_rq() tried to move one or
         more tasks and failed, when the cpu was idle
     4)  sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was idle
+        sched_balance_rq() in this domain when the cpu was idle
     5)  # of times in this domain pull_task() was called when the cpu
         was idle
     6)  # of times in this domain pull_task() was called even though
         the target task was cache-hot when idle
-    7)  # of times in this domain load_balance() was called but did
+    7)  # of times in this domain sched_balance_rq() was called but did
         not find a busier queue while the cpu was idle
     8)  # of times in this domain a busier queue was found while the
         cpu was idle but no busier group was found
-    9)  # of times in this domain load_balance() was called when the
+    9)  # of times in this domain sched_balance_rq() was called when the
         cpu was busy
-    10) # of times in this domain load_balance() checked but found the
+    10) # of times in this domain sched_balance_rq() checked but found the
         load did not require balancing when busy
-    11) # of times in this domain load_balance() tried to move one or
+    11) # of times in this domain sched_balance_rq() tried to move one or
         more tasks and failed, when the cpu was busy
     12) sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was busy
+        sched_balance_rq() in this domain when the cpu was busy
     13) # of times in this domain pull_task() was called when busy
     14) # of times in this domain pull_task() was called even though the
         target task was cache-hot when busy
-    15) # of times in this domain load_balance() was called but did not
+    15) # of times in this domain sched_balance_rq() was called but did not
         find a busier queue while the cpu was busy
     16) # of times in this domain a busier queue was found while the cpu
         was busy but no busier group was found
 
-    17) # of times in this domain load_balance() was called when the
+    17) # of times in this domain sched_balance_rq() was called when the
         cpu was just becoming idle
-    18) # of times in this domain load_balance() checked but found the
+    18) # of times in this domain sched_balance_rq() checked but found the
         load did not require balancing when the cpu was just becoming idle
-    19) # of times in this domain load_balance() tried to move one or more
+    19) # of times in this domain sched_balance_rq() tried to move one or more
         tasks and failed, when the cpu was just becoming idle
     20) sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was just becoming idle
+        sched_balance_rq() in this domain when the cpu was just becoming idle
     21) # of times in this domain pull_task() was called when newly idle
     22) # of times in this domain pull_task() was called even though the
         target task was cache-hot when just becoming idle
-    23) # of times in this domain load_balance() was called but did not
+    23) # of times in this domain sched_balance_rq() was called but did not
         find a busier queue while the cpu was just becoming idle
     24) # of times in this domain a busier queue was found while the cpu
         was just becoming idle but no busier group was found
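
( These counters can be read from /proc/schedstat. A minimal userspace
  reader - a sketch that assumes the domain-line layout documented above,
  i.e. the 24 sched_balance_rq() fields immediately following the cpumask: )

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          FILE *f = fopen("/proc/schedstat", "r");
          char line[4096];

          if (!f) {
                  perror("/proc/schedstat");
                  return 1;
          }
          while (fgets(line, sizeof(line), f)) {
                  char name[32], mask[64];
                  unsigned long v[24];
                  int n;

                  /* domain<N> <cpumask> field1 .. field24 ... */
                  n = sscanf(line,
                             "%31s %63s"
                             " %lu %lu %lu %lu %lu %lu %lu %lu"
                             " %lu %lu %lu %lu %lu %lu %lu %lu"
                             " %lu %lu %lu %lu %lu %lu %lu %lu",
                             name, mask,
                             &v[0],  &v[1],  &v[2],  &v[3],
                             &v[4],  &v[5],  &v[6],  &v[7],
                             &v[8],  &v[9],  &v[10], &v[11],
                             &v[12], &v[13], &v[14], &v[15],
                             &v[16], &v[17], &v[18], &v[19],
                             &v[20], &v[21], &v[22], &v[23]);

                  /* Fields 1, 9 and 17: sched_balance_rq() call counts. */
                  if (n == 26 && !strncmp(name, "domain", 6))
                          printf("%s: calls idle=%lu busy=%lu newidle=%lu\n",
                                 name, v[0], v[8], v[16]);
          }
          fclose(f);
          return 0;
  }
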
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index e6590fd..0636316 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -42,9 +42,9 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
 当前CPU所在的基调度域开始迭代执行,并沿着parent指针链向上进入更高层级的调度域。在迭代
 过程中,函数会检查当前调度域是否已经耗尽了再平衡的时间间隔,如果是,它在该调度域运行
-load_balance()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
+sched_balance_rq()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
 
-起初,load_balance()查找当前调度域中最繁忙的调度组。如果成功,在该调度组管辖的全部CPU
+起初,sched_balance_rq()查找当前调度域中最繁忙的调度组。如果成功,在该调度组管辖的全部CPU
 的运行队列中找出最繁忙的运行队列。如能找到,对当前的CPU运行队列和新找到的最繁忙运行
 队列均加锁,并把任务从最繁忙队列中迁移到当前CPU上。被迁移的任务数量等于在先前迭代执行
 中计算出的该调度域的调度组的不均衡值。
diff --git a/Documentation/translations/zh_CN/scheduler/sched-stats.rst b/Documentation/translations/zh_CN/scheduler/sched-stats.rst
index c5e0be6..09eee25 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-stats.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-stats.rst
@@ -75,42 +75,42 @@ domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 
 繁忙,新空闲):
 
 
-    1)  当CPU空闲时,load_balance()在这个调度域中被调用了#次
-    2)  当CPU空闲时,load_balance()在这个调度域中被调用,但是发现负载无需
+    1)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用了#次
+    2)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
         均衡#次
-    3)  当CPU空闲时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    3)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
         任务且失败了#次
-    4)  当CPU空闲时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    4)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
         #次
     5)  当CPU空闲时,pull_task()在这个调度域中被调用#次
     6)  当CPU空闲时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    7)  当CPU空闲时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    7)  当CPU空闲时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
         队列#次
     8)  当CPU空闲时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
         #次
-    9)  当CPU繁忙时,load_balance()在这个调度域中被调用了#次
-    10) 当CPU繁忙时,load_balance()在这个调度域中被调用,但是发现负载无需
+    9)  当CPU繁忙时,sched_balance_rq()在这个调度域中被调用了#次
+    10) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
         均衡#次
-    11) 当CPU繁忙时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    11) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
         任务且失败了#次
-    12) 当CPU繁忙时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    12) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
         #次
     13) 当CPU繁忙时,pull_task()在这个调度域中被调用#次
     14) 当CPU繁忙时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    15) 当CPU繁忙时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    15) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
         队列#次
     16) 当CPU繁忙时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
         #次
-    17) 当CPU新空闲时,load_balance()在这个调度域中被调用了#次
-    18) 当CPU新空闲时,load_balance()在这个调度域中被调用,但是发现负载无需
+    17) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用了#次
+    18) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
         均衡#次
-    19) 当CPU新空闲时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    19) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
         任务且失败了#次
-    20) 当CPU新空闲时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    20) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
         #次
     21) 当CPU新空闲时,pull_task()在这个调度域中被调用#次
     22) 当CPU新空闲时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    23) 当CPU新空闲时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    23) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
         队列#次
     24) 当CPU新空闲时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
         #次
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 18572c9..c8fe9ba 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -110,7 +110,7 @@ struct sched_domain {
 	unsigned long last_decay_max_lb_cost;
 
 #ifdef CONFIG_SCHEDSTATS
-	/* load_balance() stats */
+	/* sched_balance_rq() stats */
 	unsigned int lb_count[CPU_MAX_IDLE_TYPES];
 	unsigned int lb_failed[CPU_MAX_IDLE_TYPES];
 	unsigned int lb_balanced[CPU_MAX_IDLE_TYPES];
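
( Each of the 24 schedstat fields documented above is one slot in these
  arrays, indexed by the idleness type sched_balance_rq() was invoked with;
  lb_count[] feeds fields 1, 9 and 17. Abridged from the accounting pattern
  in kernel/sched/fair.c - not a complete function: )

  /* In sched_balance_rq(), with 'idle' the enum cpu_idle_type argument: */
  schedstat_inc(sd->lb_count[idle]);                      /* fields 1/9/17  */
  schedstat_add(sd->lb_imbalance[idle], env.imbalance);   /* fields 4/12/20 */
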
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 330788b..0d2753c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6866,7 +6866,7 @@ dequeue_throttle:
 
 #ifdef CONFIG_SMP
 
-/* Working cpumask for: load_balance, load_balance_newidle. */
+/* Working cpumask for: sched_balance_rq, load_balance_newidle. */
 static DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
 static DEFINE_PER_CPU(cpumask_var_t, select_rq_mask);
 static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
@@ -11242,7 +11242,7 @@ static int should_we_balance(struct lb_env *env)
  * Check this_cpu to ensure it is balanced within domain. Attempt to move
  * tasks if there is an imbalance.
  */
-static int load_balance(int this_cpu, struct rq *this_rq,
+static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 			struct sched_domain *sd, enum cpu_idle_type idle,
 			int *continue_balancing)
 {
@@ -11647,7 +11647,7 @@ out_unlock:
 static atomic_t sched_balance_running = ATOMIC_INIT(0);
 
 /*
- * Scale the max load_balance interval with the number of CPUs in the system.
+ * Scale the max sched_balance_rq interval with the number of CPUs in the system.
  * This trades load-balance latency on larger machines for less cross talk.
  */
 void update_max_interval(void)
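
( For reference, the body of update_max_interval() is a single linear
  scaling, as in the kernel tree this series is based on: )

  max_load_balance_interval = HZ*num_online_cpus()/10;
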
@@ -11727,7 +11727,7 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 		}
 
 		if (time_after_eq(jiffies, sd->last_balance + interval)) {
-			if (load_balance(cpu, rq, sd, idle, &continue_balancing)) {
+			if (sched_balance_rq(cpu, rq, sd, idle, &continue_balancing)) {
 				/*
 				 * The LBF_DST_PINNED logic could have changed
 				 * env->dst_cpu, so we can't know our idle
@@ -12353,7 +12353,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 
 		if (sd->flags & SD_BALANCE_NEWIDLE) {
 
-			pulled_task = load_balance(this_cpu, this_rq,
+			pulled_task = sched_balance_rq(this_cpu, this_rq,
 						   sd, CPU_NEWLY_IDLE,
 						   &continue_balancing);
 


* [tip: sched/core] sched/balancing: Rename rebalance_domains() => sched_balance_domains()
  2024-03-08 11:18 ` [PATCH 04/13] sched/balancing: Rename rebalance_domains() => sched_balance_domains() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     14ff4dbd34f46cc6b6105f549983321241ccbba9
Gitweb:        https://git.kernel.org/tip/14ff4dbd34f46cc6b6105f549983321241ccbba9
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:10 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 11:59:59 +01:00

sched/balancing: Rename rebalance_domains() => sched_balance_domains()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-5-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 arch/arm/kernel/topology.c                                   | 2 +-
 kernel/sched/fair.c                                          | 8 +++----
 kernel/sched/sched.h                                         | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index c7ea05f..5d8e8b8 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ out of balance are tasks moved between groups.
 In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
-balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
+balancing workhorse, sched_balance_softirq()->sched_balance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index 1a8587a..e6590fd 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -36,7 +36,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 
 在kernel/sched/core.c中,sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
-的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
+的工作由sched_balance_softirq()->sched_balance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
 后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index ef0058d..2336ee2 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -42,7 +42,7 @@
  * can take this difference into account during load balance. A per cpu
  * structure is preferred because each CPU updates its own cpu_capacity field
  * during the load balance except for idle cores. One idle core is selected
- * to run the rebalance_domains for all idle cores and the cpu_capacity can be
+ * to run the sched_balance_domains for all idle cores and the cpu_capacity can be
  * updated during this sequence.
  */
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e377b67..330788b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11685,7 +11685,7 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
  *
  * Balancing parameters are set up in init_sched_domains.
  */
-static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
+static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 {
 	int continue_balancing = 1;
 	int cpu = rq->cpu;
@@ -12161,7 +12161,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 			rq_unlock_irqrestore(rq, &rf);
 
 			if (flags & NOHZ_BALANCE_KICK)
-				rebalance_domains(rq, CPU_IDLE);
+				sched_balance_domains(rq, CPU_IDLE);
 		}
 
 		if (time_after(next_balance, rq->next_balance)) {
@@ -12422,7 +12422,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 	/*
 	 * If this CPU has a pending NOHZ_BALANCE_KICK, then do the
 	 * balancing on behalf of the other idle CPUs whose ticks are
-	 * stopped. Do nohz_idle_balance *before* rebalance_domains to
+	 * stopped. Do nohz_idle_balance *before* sched_balance_domains to
 	 * give the idle CPUs a chance to load balance. Else we may
 	 * load balance only within the local sched_domain hierarchy
 	 * and abort nohz_idle_balance altogether if we pull some load.
@@ -12432,7 +12432,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 
 	/* normal load balance */
 	update_blocked_averages(this_rq->cpu);
-	rebalance_domains(this_rq, idle);
+	sched_balance_domains(this_rq, idle);
 }
 
 /*
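
( Putting the renamed pieces of the hunks above together, the softirq
  handler reads essentially as follows - condensed from kernel/sched/fair.c,
  with the names as of this series: )

  static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
  {
          struct rq *this_rq = this_rq();
          enum cpu_idle_type idle = this_rq->idle_balance;

          /* NOHZ balancing on behalf of stopped-tick CPUs runs first. */
          if (nohz_idle_balance(this_rq, idle))
                  return;

          /* normal load balance */
          update_blocked_averages(this_rq->cpu);
          sched_balance_domains(this_rq, idle);
  }
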
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5b0ddb0..41024c1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2904,7 +2904,7 @@ extern void cfs_bandwidth_usage_dec(void);
 #define NOHZ_NEWILB_KICK_BIT	2
 #define NOHZ_NEXT_KICK_BIT	3
 
-/* Run rebalance_domains() */
+/* Run sched_balance_domains() */
 #define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
 /* Update blocked load */
 #define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)


* [tip: sched/core] sched/balancing: Rename trigger_load_balance() => sched_balance_trigger()
  2024-03-08 11:18 ` [PATCH 03/13] sched/balancing: Rename trigger_load_balance() => sched_balance_trigger() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     983be0628c061989b6cc175d2f5e429b40699fbb
Gitweb:        https://git.kernel.org/tip/983be0628c061989b6cc175d2f5e429b40699fbb
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:09 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 11:59:59 +01:00

sched/balancing: Rename trigger_load_balance() => sched_balance_trigger()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-4-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 kernel/sched/core.c                                          | 2 +-
 kernel/sched/fair.c                                          | 2 +-
 kernel/sched/sched.h                                         | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 541d6c6..c7ea05f 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -31,7 +31,7 @@ is treated as one entity. The load of a group is defined as the sum of the
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index fa0c0bc..1a8587a 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 调度域中的负载均衡发生在调度组中。也就是说,每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后,任务才在组之间发生迁移。
 
-在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过sched_tick()
+在kernel/sched/core.c中,sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 71b7a08..929fce6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5700,7 +5700,7 @@ void sched_tick(void)
 
 #ifdef CONFIG_SMP
 	rq->idle_balance = idle_cpu(cpu);
-	trigger_load_balance(rq);
+	sched_balance_trigger(rq);
 #endif
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 953f39d..e377b67 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12438,7 +12438,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 /*
  * Trigger the SCHED_SOFTIRQ if it is time to do periodic load balancing.
  */
-void trigger_load_balance(struct rq *rq)
+void sched_balance_trigger(struct rq *rq)
 {
 	/*
 	 * Don't need to rebalance while attached to NULL domain or
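
( The rest of the short function body, for context - condensed from
  kernel/sched/fair.c at the time of this series: )

  void sched_balance_trigger(struct rq *rq)
  {
          /*
           * Don't need to rebalance while attached to NULL domain or
           * runqueue CPU is not active
           */
          if (unlikely(on_null_domain(rq) || !cpu_active(cpu_of(rq))))
                  return;

          if (time_after_eq(jiffies, rq->next_balance))
                  raise_softirq(SCHED_SOFTIRQ);

          nohz_balancer_kick(rq);
  }
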
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d224267..5b0ddb0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2397,7 +2397,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 
 extern void update_group_capacity(struct sched_domain *sd, int cpu);
 
-extern void trigger_load_balance(struct rq *rq);
+extern void sched_balance_trigger(struct rq *rq);
 
 extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);
 


* [tip: sched/core] sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq()
  2024-03-08 11:18 ` [PATCH 01/13] sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Ingo Molnar, Valentin Schneider, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     70a27d6d1b19392a23bb4a41de7788fbc539f18d
Gitweb:        https://git.kernel.org/tip/70a27d6d1b19392a23bb4a41de7788fbc539f18d
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:07 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 11:59:59 +01:00

sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq()

run_rebalance_domains() is a misnomer, as it doesn't only
run rebalance_domains(), but since the introduction of the
NOHZ code it also runs nohz_idle_balance().

Rename it to sched_balance_softirq(), reflecting its more
generic purpose and that it's a softirq handler.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-2-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 kernel/sched/fair.c                                          | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index e57ad28..6577b06 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ out of balance are tasks moved between groups.
 In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
 through scheduler_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
-balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
+balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index e814d4c..fbc3266 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -36,7 +36,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 
 在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过scheduler_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
-的工作由run_rebalance_domains()->rebalance_domains()完成,在软中断上下文中执行
+的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
 后一个函数有两个入参:当前CPU的运行队列、它在scheduler_tick()调用时是否空闲。函数会从
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 116a640..953f39d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12415,7 +12415,7 @@ out:
  * - indirectly from a remote scheduler_tick() for NOHZ idle balancing
  *   through the SMP cross-call nohz_csd_func()
  */
-static __latent_entropy void run_rebalance_domains(struct softirq_action *h)
+static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 {
 	struct rq *this_rq = this_rq();
 	enum cpu_idle_type idle = this_rq->idle_balance;
@@ -13216,7 +13216,7 @@ __init void init_sched_fair_class(void)
 #endif
 	}
 
-	open_softirq(SCHED_SOFTIRQ, run_rebalance_domains);
+	open_softirq(SCHED_SOFTIRQ, sched_balance_softirq);
 
 #ifdef CONFIG_NO_HZ_COMMON
 	nohz.next_balance = jiffies;


* [tip: sched/core] sched/balancing: Rename scheduler_tick() => sched_tick()
  2024-03-08 11:18 ` [PATCH 02/13] sched/balancing: Rename scheduler_tick() => sched_tick() Ingo Molnar
@ 2024-03-12 12:00   ` tip-bot2 for Ingo Molnar
  0 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2024-03-12 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Ingo Molnar, Valentin Schneider, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     86dd6c04ef9f213e14d60c9f64bce1cc019f816e
Gitweb:        https://git.kernel.org/tip/86dd6c04ef9f213e14d60c9f64bce1cc019f816e
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Fri, 08 Mar 2024 12:18:08 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 12 Mar 2024 11:59:59 +01:00

sched/balancing: Rename scheduler_tick() => sched_tick()

- Standardize on prefixing scheduler-internal functions defined
  in <linux/sched.h> with the sched_*() prefix. scheduler_tick()
  was the only function still using the scheduler_ prefix;
  harmonize it.

- The other reason to rename it is that the NOHZ scheduler-tick
  handling functions are already named sched_tick_*(), which makes
  'git grep sched_tick' more meaningful.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-3-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                            | 4 ++--
 Documentation/translations/zh_CN/scheduler/sched-domains.rst         | 4 ++--
 include/linux/sched.h                                                | 2 +-
 kernel/sched/core.c                                                  | 4 ++--
 kernel/sched/loadavg.c                                               | 2 +-
 kernel/time/timer.c                                                  | 2 +-
 kernel/workqueue.c                                                   | 2 +-
 tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc | 2 +-
 8 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 6577b06..541d6c6 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -32,13 +32,13 @@ load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
 In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
-through scheduler_tick(). It raises a softirq after the next regularly scheduled
+through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
-the CPU was idle at the time the scheduler_tick() happened and iterates over all
+the CPU was idle at the time the sched_tick() happened and iterates over all
 sched domains our CPU is on, starting from its base domain and going up the ->parent
 chain. While doing that, it checks to see if the current domain has exhausted its
 rebalance interval. If so, it runs load_balance() on that domain. It then checks
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index fbc3266..fa0c0bc 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -34,12 +34,12 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 调度域中的负载均衡发生在调度组中。也就是说,每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后,任务才在组之间发生迁移。
 
-在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过scheduler_tick()
+在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
-后一个函数有两个入参:当前CPU的运行队列、它在scheduler_tick()调用时是否空闲。函数会从
+后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
 当前CPU所在的基调度域开始迭代执行,并沿着parent指针链向上进入更高层级的调度域。在迭代
 过程中,函数会检查当前调度域是否已经耗尽了再平衡的时间间隔,如果是,它在该调度域运行
 load_balance()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 17cb076..7eb7f31 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -301,7 +301,7 @@ enum {
 	TASK_COMM_LEN = 16,
 };
 
-extern void scheduler_tick(void);
+extern void sched_tick(void);
 
 #define	MAX_SCHEDULE_TIMEOUT		LONG_MAX
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d44efa0..71b7a08 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5662,7 +5662,7 @@ static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
  * This function gets called by the timer code, with HZ frequency.
  * We call it with interrupts disabled.
  */
-void scheduler_tick(void)
+void sched_tick(void)
 {
 	int cpu = smp_processor_id();
 	struct rq *rq = cpu_rq(cpu);
@@ -6585,7 +6585,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
  *      paths. For example, see arch/x86/entry_64.S.
  *
  *      To drive preemption between tasks, the scheduler sets the flag in timer
- *      interrupt handler scheduler_tick().
+ *      interrupt handler sched_tick().
  *
  *   3. Wakeups don't really cause entry into schedule(). They add a
  *      task to the run-queue and that's it.
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index 52c8f82..ca9da66 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -379,7 +379,7 @@ void calc_global_load(void)
 }
 
 /*
- * Called from scheduler_tick() to periodically update this CPU's
+ * Called from sched_tick() to periodically update this CPU's
  * active count.
  */
 void calc_global_load_tick(struct rq *this_rq)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index e69e75d..ff49ddc 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -2478,7 +2478,7 @@ void update_process_times(int user_tick)
 	if (in_irq())
 		irq_work_tick();
 #endif
-	scheduler_tick();
+	sched_tick();
 	if (IS_ENABLED(CONFIG_POSIX_TIMERS))
 		run_posix_cpu_timers();
 }
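
( With this rename in place, the tick-driven balancing call chain greps
  uniformly - names as after the full series: )

  update_process_times()                  /* kernel/time/timer.c, HZ tick  */
      -> sched_tick()                     /* kernel/sched/core.c           */
          -> sched_balance_trigger(rq)    /* raises SCHED_SOFTIRQ when due */
              -> sched_balance_softirq()  /* SCHED_SOFTIRQ handler         */
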
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bf2bdac..8fbb0ec 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1464,7 +1464,7 @@ void wq_worker_sleeping(struct task_struct *task)
  * wq_worker_tick - a scheduler tick occurred while a kworker is running
  * @task: task currently running
  *
- * Called from scheduler_tick(). We're in the IRQ context and the current
+ * Called from sched_tick(). We're in the IRQ context and the current
  * worker's fields which follow the 'K' locking rule can be accessed safely.
  */
 void wq_worker_tick(struct task_struct *task)
diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
index 25432b8..073a748 100644
--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
@@ -19,7 +19,7 @@ fail() { # mesg
 
 FILTER=set_ftrace_filter
 FUNC1="schedule"
-FUNC2="scheduler_tick"
+FUNC2="sched_tick"
 
 ALL_FUNCS="#### all functions enabled ####"
 
