From: "Li, Aubrey" <aubrey.li@linux.intel.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Aubrey Li <aubrey.li@intel.com>, Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Andi Kleen <ak@linux.intel.com>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>
Subject: Re: [RFC PATCH v1] sched/fair: limit load balance redo times at the same sched_domain level
Date: Tue, 26 Jan 2021 09:40:38 +0800	[thread overview]
Message-ID: <c4066132-d7a2-ef22-2ec0-6ba0c6fc3357@linux.intel.com> (raw)
In-Reply-To: <CAKfTPtCCzy5keKcEOUX6D1+wty0dtYEfA5=oezWRgKY_beO5NQ@mail.gmail.com>

On 2021/1/25 22:51, Vincent Guittot wrote:
> On Mon, 25 Jan 2021 at 15:00, Li, Aubrey <aubrey.li@linux.intel.com> wrote:
>>
>> On 2021/1/25 18:56, Vincent Guittot wrote:
>>> On Mon, 25 Jan 2021 at 06:50, Aubrey Li <aubrey.li@intel.com> wrote:
>>>>
>>>> A long-tail load balance cost is observed on the newly idle path.
>>>> It is caused by a race window between the first nr_running check
>>>> of the busiest runqueue and the nr_running recheck in detach_tasks.
>>>>
>>>> Before the busiest runqueue is locked, its tasks can be pulled by
>>>> other CPUs, so its nr_running may have dropped to 1 by the time
>>>> detach_tasks rechecks it; this makes detach_tasks break out with
>>>> LBF_ALL_PINNED still set.
>>>
>>> We'd better detect that when trying to detach tasks, like below:
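
[The check Vincent posted here is elided from this quote. Based on his
description, it presumably resembled the following sketch against
detach_tasks() in kernel/sched/fair.c; this is an illustration of the
idea, not the verbatim diff:]

static int detach_tasks(struct lb_env *env)
{
	lockdep_assert_held(&env->src_rq->lock);

	/*
	 * The source runqueue has been emptied by another CPU in the
	 * race window, so there is nothing left to detach: clear
	 * LBF_ALL_PINNED so the caller does not mistake an emptied
	 * queue for an all-pinned one, and bail out early.
	 */
	if (env->src_rq->nr_running <= 1) {
		env->flags &= ~LBF_ALL_PINNED;
		return 0;
	}

	/* ... the existing detach loop follows ... */
}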
>>
>> This is a compromise, as I understand it. If we give up the load balance
>> this time due to the race condition, we do reduce the load balance cost on
>> the newly idle path, but if there is indeed an imbalance at the same sched_domain
> 
> The redo path is there for the case where LB has found an imbalance but
> can't move any load from the busiest rq to the dest rq because of cpu
> affinity, so it tries to fix the imbalance by moving load onto another
> rq of the group. In your case the imbalance has disappeared because it
> has already been pulled by another rq, so you don't have to look for
> another imbalance. And I would even say you should not, in order to let
> the other levels take a chance to spread the load.
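
[For context, the redo path being described is the LBF_ALL_PINNED
handling in load_balance(), simplified here from kernel/sched/fair.c
of that era:]

	if (unlikely(env.flags & LBF_ALL_PINNED)) {
		/*
		 * Every task on the busiest rq was pinned by cpu
		 * affinity: drop that rq from the candidate mask and,
		 * if the group still has other candidate CPUs, restart
		 * the balance with a fresh pass.
		 */
		__cpumask_clear_cpu(cpu_of(busiest), cpus);
		if (!cpumask_subset(cpus, env.dst_grpmask)) {
			env.loop = 0;
			env.loop_break = sched_nr_migrate_break;
			goto redo;
		}
		goto out_all_pinned;
	}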

Here is one simple case I have seen:
1) CPU_a becomes idle and invokes newly idle balance
2) Group_b is found to be the busiest group
3) CPU_b_1 is found to be the busiest CPU, nr_running = 5
4) detach_tasks checks CPU_b_1's runqueue again, sees nr_running = 1, and goes to redo
5) Group_b is still found to be the busiest group
6) This time CPU_b_2 is found to be the busiest CPU, nr_running = 3
7) detach_tasks succeeds, 2 tasks are moved

If we skipped the redo:
- CPU_a would exit load balance and remain idle
- the tasks would stay on CPU_b_2's runqueue and wait for the next
  load balancing pass

The two tasks could have been moved to the idle CPU and executed
immediately.
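
[The RFC's compromise, per the subject line, is to cap how many times
load_balance() may take the redo path. A sketch of the idea follows;
the redo_times counter and the sched_nr_lb_redo limit are written here
for illustration and may not match the RFC diff exactly:]

/* Tunable cap on redo iterations at one sched_domain level. */
static unsigned int sched_nr_lb_redo = 1;

	/* in load_balance(), alongside the existing locals: */
	int redo_times = 0;

	if (unlikely(env.flags & LBF_ALL_PINNED)) {
		__cpumask_clear_cpu(cpu_of(busiest), cpus);
		/* Retry only a bounded number of times. */
		if (!cpumask_subset(cpus, env.dst_grpmask) &&
		    redo_times++ < sched_nr_lb_redo) {
			env.loop = 0;
			env.loop_break = sched_nr_migrate_break;
			goto redo;
		}
		goto out_all_pinned;
	}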

> 
>> level, we have to wait for the next softirq entry to handle that imbalance.
>> This means the tasks on the second-busiest runqueue have to stay longer,
>> which could introduce tail latency as well. That's why I introduced a
>> variable to control the redo loops. I'll send this to the benchmark queue
>> to see if it makes any difference.
> 
> TBH, I don't like multiplying the number of knobs

I see.

Thanks,
-Aubrey

Thread overview: 11+ messages
2021-01-25  6:02 [RFC PATCH v1] sched/fair: limit load balance redo times at the same sched_domain level Aubrey Li
2021-01-25  9:06 ` Mel Gorman
2021-01-25 13:53   ` Li, Aubrey
2021-01-25 14:40     ` Mel Gorman
2021-01-25 10:56 ` Vincent Guittot
2021-01-25 14:00   ` Li, Aubrey
2021-01-25 14:51     ` Vincent Guittot
2021-01-26  1:40       ` Li, Aubrey [this message]
2021-02-23  5:41       ` Li, Aubrey
2021-02-23 17:33         ` Vincent Guittot
2021-02-24  2:55           ` Li, Aubrey
