From: Vincent Guittot <vincent.guittot@linaro.org>
To: Cheng Jian <cj.chengjian@huawei.com>
Cc: Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
linux-kernel <linux-kernel@vger.kernel.org>,
chenwandun@huawei.com, Xie XiuQi <xiexiuqi@huawei.com>,
liwei391@huawei.com, huawei.libin@huawei.com,
bobo.shaobowang@huawei.com, Juri Lelli <juri.lelli@redhat.com>
Subject: Re: [PATCH v2] sched/fair: Optimize select_idle_cpu
Date: Fri, 13 Dec 2019 09:37:27 +0100 [thread overview]
Message-ID: <CAKfTPtCQCQio=D3nRTRgbhthKWo752OeaM2X4UcNwr2jByvoNg@mail.gmail.com> (raw)
In-Reply-To: <20191213024530.28052-1-cj.chengjian@huawei.com>
On Fri, 13 Dec 2019 at 03:48, Cheng Jian <cj.chengjian@huawei.com> wrote:
>
> select_idle_cpu() will scan the LLC domain for idle CPUs,
> which is always expensive. So the following commit:
>
> 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
>
> introduces a way to limit how many CPUs we scan.
>
> But it spends some of the 'nr' attempts on CPUs that the task
> is not allowed to run on, wasting them. In the worst case the
> function returns nr_cpumask_bits without finding any CPU the
> task is allowed to run on.
>
> A cpumask may be too big to put on the stack, so, as in
> select_idle_core(), use the per-CPU 'select_idle_mask' to
> avoid a stack overflow.
>
> Fixes: 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
> Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
> kernel/sched/fair.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 08a233e97a01..d48244388ce9 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5828,6 +5828,7 @@ static inline int select_idle_smt(struct task_struct *p, int target)
> */
> static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
> {
> + struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> struct sched_domain *this_sd;
> u64 avg_cost, avg_idle;
> u64 time, cost;
> @@ -5859,11 +5860,11 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>
> time = cpu_clock(this);
>
> - for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
> + cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +
> + for_each_cpu_wrap(cpu, cpus, target) {
> if (!--nr)
> return si_cpu;
> - if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> - continue;
> if (available_idle_cpu(cpu))
> break;
> if (si_cpu == -1 && sched_idle_cpu(cpu))
> --
> 2.20.1
>
Thread overview: 5+ messages
2019-12-13 2:45 [PATCH v2] sched/fair: Optimize select_idle_cpu Cheng Jian
2019-12-13 5:13 ` Srikar Dronamraju
2019-12-13 8:37 ` Vincent Guittot [this message]
2019-12-13 11:48 ` Valentin Schneider
2019-12-17 12:39 ` [tip: sched/core] " tip-bot2 for Cheng Jian