linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched/fair: Fix the scene where group's imbalance is set incorrectly
@ 2022-04-10  4:58 zgpeng
  2022-04-18 16:31 ` Steven Rostedt
  2022-04-19  8:07 ` Vincent Guittot
  0 siblings, 2 replies; 3+ messages in thread
From: zgpeng @ 2022-04-10  4:58 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, linux-kernel

In the load_balance function, if balancing fails because of
task affinity, the parent group's imbalance flag is set to 1.
However, there is a scenario where balance is eventually
achieved but the imbalance flag remains set to 1; this needs
to be fixed.

The specific trigger scenario is as follows. In the
load_balance function, the first loop iteration picks the
busiest cpu. While trying to pull tasks from that cpu, it
turns out that none of its tasks can run on the DST cpu, so
both LBF_SOME_PINNED and LBF_ALL_PINNED are set in env.flags.
Because the balance attempt fails and LBF_SOME_PINNED is set,
the parent group's imbalance flag is set to 1. At the same
time, because LBF_ALL_PINNED is set, the loop is re-entered
to select another busiest cpu to pull tasks from. Since the
busiest cpu has changed, tasks can now be pulled to the DST
cpu, so a balanced state may be reached.

But at this point the parent group's imbalance flag is not
cleared. As a result, load balance is successfully achieved,
yet the parent group's imbalance flag incorrectly remains
set to 1. The upper-level load balance will then erroneously
select this group as the busiest group, thereby breaking the
balance.

Signed-off-by: zgpeng <zgpeng@tencent.com>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299..e137917 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10019,13 +10019,13 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		}
 
 		/*
-		 * We failed to reach balance because of affinity.
+		 * According to balance status, set group_imbalance correctly.
 		 */
 		if (sd_parent) {
 			int *group_imbalance = &sd_parent->groups->sgc->imbalance;
 
-			if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
-				*group_imbalance = 1;
+			if (env.flags & LBF_SOME_PINNED)
+				*group_imbalance = env.imbalance > 0 ? 1 : 0;
 		}
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
-- 
2.9.5



* Re: [PATCH] sched/fair: Fix the scene where group's imbalance is set incorrectly
  2022-04-10  4:58 [PATCH] sched/fair: Fix the scene where group's imbalance is set incorrectly zgpeng
@ 2022-04-18 16:31 ` Steven Rostedt
  2022-04-19  8:07 ` Vincent Guittot
  1 sibling, 0 replies; 3+ messages in thread
From: Steven Rostedt @ 2022-04-18 16:31 UTC (permalink / raw)
  To: zgpeng
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	bsegall, mgorman, bristot, linux-kernel

On Sun, 10 Apr 2022 12:58:36 +0800
zgpeng <zgpeng.linux@gmail.com> wrote:

> The specific trigger scenario is as follows. In the
> load_balance function, the first loop iteration picks the
> busiest cpu. While trying to pull tasks from that cpu, it
> turns out that none of its tasks can run on the DST cpu, so
> both LBF_SOME_PINNED and LBF_ALL_PINNED are set in env.flags.
> Because the balance attempt fails and LBF_SOME_PINNED is set,
> the parent group's imbalance flag is set to 1. At the same
> time, because LBF_ALL_PINNED is set, the loop is re-entered
> to select another busiest cpu to pull tasks from. Since the
> busiest cpu has changed, tasks can now be pulled to the DST
> cpu, so a balanced state may be reached.

But is the next cpu selected really the busiest CPU? Or is the original
still the busiest but doesn't have any tasks that can migrate to the dst
CPU? Does it really mean the parent is now unbalanced?

-- Steve


> 
> But at this point the parent group's imbalance flag is not
> cleared. As a result, load balance is successfully achieved,
> yet the parent group's imbalance flag incorrectly remains
> set to 1. The upper-level load balance will then erroneously
> select this group as the busiest group, thereby breaking the
> balance.


* Re: [PATCH] sched/fair: Fix the scene where group's imbalance is set incorrectly
  2022-04-10  4:58 [PATCH] sched/fair: Fix the scene where group's imbalance is set incorrectly zgpeng
  2022-04-18 16:31 ` Steven Rostedt
@ 2022-04-19  8:07 ` Vincent Guittot
  1 sibling, 0 replies; 3+ messages in thread
From: Vincent Guittot @ 2022-04-19  8:07 UTC (permalink / raw)
  To: zgpeng
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, linux-kernel

On Sun, 10 Apr 2022 at 06:58, zgpeng <zgpeng.linux@gmail.com> wrote:
>
> In the load_balance function, if balancing fails because of
> task affinity, the parent group's imbalance flag is set to 1.
> However, there is a scenario where balance is eventually
> achieved but the imbalance flag remains set to 1; this needs
> to be fixed.
>
> The specific trigger scenario is as follows. In the
> load_balance function, the first loop iteration picks the
> busiest cpu. While trying to pull tasks from that cpu, it
> turns out that none of its tasks can run on the DST cpu, so
> both LBF_SOME_PINNED and LBF_ALL_PINNED are set in env.flags.
> Because the balance attempt fails and LBF_SOME_PINNED

Shouldn't LBF_DST_PINNED and dst_cpu have been set?
And shouldn't the goto more_balance path clear env.imbalance
before we set the group's imbalance flag?

> is set, the parent group's imbalance flag is set to 1. At
> the same time, because LBF_ALL_PINNED is set, the loop is
> re-entered to select another busiest cpu to pull tasks from.
> Since the busiest cpu has changed, tasks can now be pulled
> to the DST cpu, so a balanced state may be reached.

The new load_balance will be done without the previous busiest cpu

>
> But at this point the parent group's imbalance flag is not
> cleared. As a result, load balance is successfully achieved,
> yet the parent group's imbalance flag incorrectly remains
> set to 1. The upper-level load balance will then erroneously
> select this group as the busiest group, thereby breaking the
> balance.
>
> Signed-off-by: zgpeng <zgpeng@tencent.com>
> ---
>  kernel/sched/fair.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299..e137917 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10019,13 +10019,13 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>                 }
>
>                 /*
> -                * We failed to reach balance because of affinity.
> +                * According to balance status, set group_imbalance correctly.
>                  */
>                 if (sd_parent) {
>                         int *group_imbalance = &sd_parent->groups->sgc->imbalance;
>
> -                       if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
> -                               *group_imbalance = 1;
> +                       if (env.flags & LBF_SOME_PINNED)
> +                               *group_imbalance = env.imbalance > 0 ? 1 : 0;
>                 }
>
>                 /* All tasks on this runqueue were pinned by CPU affinity */
> --
> 2.9.5
>

