From: Peter Zijlstra <peterz@infradead.org>
To: 王贇 <yun.wang@linux.alibaba.com>
Cc: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
Ingo Molnar <mingo@redhat.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
mcgrof@kernel.org, keescook@chromium.org,
linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org,
Mel Gorman <mgorman@suse.de>,
riel@surriel.com
Subject: Re: [PATCH 4/4] numa: introduce numa cling feature
Date: Fri, 12 Jul 2019 09:53:18 +0200 [thread overview]
Message-ID: <20190712075318.GM3402@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <82f42063-ce51-dd34-ba95-5b32ee733de7@linux.alibaba.com>
On Fri, Jul 12, 2019 at 11:10:08AM +0800, 王贇 wrote:
> On 2019/7/11 at 10:27 PM, Peter Zijlstra wrote:
> >> Thus we introduce numa cling, which tries to prevent tasks from
> >> leaving the preferred node on the wakeup fast path.
> >
> >
> >> @@ -6195,6 +6447,13 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> >> if ((unsigned)i < nr_cpumask_bits)
> >> return i;
> >>
> >> + /*
> >> + * Failed to find an idle cpu; wake affine may want to pull, but
> >> + * try to stay on prev-cpu when the task clings to it.
> >> + */
> >> + if (task_numa_cling(p, cpu_to_node(prev), cpu_to_node(target)))
> >> + return prev;
> >> +
> >> return target;
> >> }
> >
> > Select idle sibling should never cross node boundaries and is thus the
> > entirely wrong place to fix anything.
>
> Hmm.. in our early testing, printk showed that both select_task_rq_fair()
> and task_numa_find_cpu() call select_idle_sibling() with prev and target
> on different nodes, so we picked this point to save a few lines.
But it will never return @prev if it is not in the same cache domain as
@target. See how everything is gated by:
&& cpus_share_cache(x, target)
> But if the semantics of select_idle_sibling() is to return a cpu on the
> same node as target, what about moving the logic to after the
> select_idle_sibling() call in the two callers?
No, that's insane. You don't do select_idle_sibling() to then ignore the
result. You have to change @target before calling select_idle_sibling().
Thread overview: 62+ messages
2019-04-22 2:10 [RFC PATCH 0/5] NUMA Balancer Suite 王贇
2019-04-22 2:11 ` [RFC PATCH 1/5] numa: introduce per-cgroup numa balancing locality, statistic 王贇
2019-04-23 8:44 ` Peter Zijlstra
2019-04-23 9:14 ` 王贇
2019-04-23 8:46 ` Peter Zijlstra
2019-04-23 9:32 ` 王贇
2019-04-23 8:47 ` Peter Zijlstra
2019-04-23 9:33 ` 王贇
2019-04-23 9:46 ` Peter Zijlstra
2019-04-22 2:12 ` [RFC PATCH 2/5] numa: append per-node execution info in memory.numa_stat 王贇
2019-04-23 8:52 ` Peter Zijlstra
2019-04-23 9:36 ` 王贇
2019-04-23 9:46 ` Peter Zijlstra
2019-04-23 10:01 ` 王贇
2019-04-22 2:13 ` [RFC PATCH 3/5] numa: introduce per-cgroup preferred numa node 王贇
2019-04-23 8:55 ` Peter Zijlstra
2019-04-23 9:41 ` 王贇
2019-04-22 2:14 ` [RFC PATCH 4/5] numa: introduce numa balancer infrastructure 王贇
2019-04-22 2:21 ` [RFC PATCH 5/5] numa: numa balancer 王贇
2019-04-23 9:05 ` Peter Zijlstra
2019-04-23 9:59 ` 王贇
[not found] ` <CAHCio2gEw4xyuoiurvwzvEiU8eLas+5ZLhzmqm1V2CJqvt+cyA@mail.gmail.com>
2019-04-23 2:14 ` [RFC PATCH 0/5] NUMA Balancer Suite 王贇
2019-07-03 3:26 ` [PATCH 0/4] per cpu cgroup numa suite 王贇
2019-07-03 3:28 ` [PATCH 1/4] numa: introduce per-cgroup numa balancing locality, statistic 王贇
2019-07-11 13:43 ` Peter Zijlstra
2019-07-12 3:15 ` 王贇
2019-07-11 13:47 ` Peter Zijlstra
2019-07-12 3:43 ` 王贇
2019-07-12 7:58 ` Peter Zijlstra
2019-07-12 9:11 ` 王贇
2019-07-12 9:42 ` Peter Zijlstra
2019-07-12 10:10 ` 王贇
2019-07-15 2:09 ` 王贇
2019-07-15 12:10 ` Michal Koutný
2019-07-16 2:41 ` 王贇
2019-07-19 16:47 ` Michal Koutný
2019-07-03 3:29 ` [PATCH 2/4] numa: append per-node execution info in memory.numa_stat 王贇
2019-07-11 13:45 ` Peter Zijlstra
2019-07-12 3:17 ` 王贇
2019-07-03 3:32 ` [PATCH 3/4] numa: introduce numa group per task group 王贇
2019-07-11 14:10 ` Peter Zijlstra
2019-07-12 4:03 ` 王贇
2019-07-03 3:34 ` [PATCH 4/4] numa: introduce numa cling feature 王贇
2019-07-08 2:25 ` [PATCH v2 " 王贇
2019-07-09 2:15 ` 王贇
2019-07-09 2:24 ` [PATCH v3 " 王贇
2019-07-11 14:27 ` [PATCH " Peter Zijlstra
2019-07-12 3:10 ` 王贇
2019-07-12 7:53 ` Peter Zijlstra [this message]
2019-07-12 8:58 ` 王贇
2019-07-22 3:44 ` 王贇
2019-07-11 9:00 ` [PATCH 0/4] per cgroup numa suite 王贇
2019-07-16 3:38 ` [PATCH v2 0/4] per-cgroup " 王贇
2019-07-16 3:39 ` [PATCH v2 1/4] numa: introduce per-cgroup numa balancing locality statistic 王贇
2019-07-16 3:40 ` [PATCH v2 2/4] numa: append per-node execution time in cpu.numa_stat 王贇
2019-07-19 16:39 ` Michal Koutný
2019-07-22 2:36 ` 王贇
2019-07-16 3:41 ` [PATCH v2 3/4] numa: introduce numa group per task group 王贇
2019-07-16 3:41 ` [PATCH v4 4/4] numa: introduce numa cling feature 王贇
2019-07-22 2:37 ` [PATCH v5 " 王贇
2019-07-25 2:33 ` [PATCH v2 0/4] per-cgroup numa suite 王贇
2019-08-06 1:33 ` 王贇