Linux-Fsdevel Archive on lore.kernel.org
From: 王贇 <yun.wang@linux.alibaba.com>
To: Mel Gorman <mgorman@suse.de>, Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Luis Chamberlain <mcgrof@kernel.org>,
	Kees Cook <keescook@chromium.org>,
	Iurii Zaikin <yzaikin@google.com>,
	Michal Koutný <mkoutny@suse.com>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org,
	"Paul E. McKenney" <paulmck@linux.ibm.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Jonathan Corbet <corbet@lwn.net>
Subject: Re: [PATCH RESEND v8 1/2] sched/numa: introduce per-cgroup NUMA locality info
Date: Mon, 17 Feb 2020 21:23:52 +0800
Message-ID: <881deb50-163e-442a-41ec-b375cc445e4d@linux.alibaba.com>
In-Reply-To: <20200217115810.GA3420@suse.de>



On 2020/2/17 7:58 PM, Mel Gorman wrote:
[snip]
>> Mel, I suspect you still feel that way, right?
>>
> 
> Yes, I still think it would be a struggle to interpret the data
> meaningfully without very specific knowledge of the implementation. If
> the scan rate was constant, it would be easier but that would make NUMA
> balancing worse overall. Similarly, the stat might get very difficult to
> interpret when NUMA balancing is failing because of a load imbalance,
> pages are shared and being interleaved or NUMA groups span multiple
> active nodes.

Hi Mel, glad to have you back at the table :-)

IMHO the changing scan period should not be a problem now, since the
maximum period is defined by the user, so sampling the accumulated
page-access counters once per maximum period is always meaningful, correct?
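
To make the idea concrete, here is a minimal userspace sketch of that
sampling loop. The cgroup file path, the "<local> <remote>" field layout
and the 60s window are assumptions for illustration only, not the exact
interface of this patch:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* hypothetical per-cgroup file: "<local> <remote>" accumulated counts */
	const char *path = "/sys/fs/cgroup/cpu/mygroup/cpu.numa_stat";
	unsigned long long l0, r0, l1, r1;
	FILE *f;

	f = fopen(path, "r");
	if (!f || fscanf(f, "%llu %llu", &l0, &r0) != 2)
		return 1;
	fclose(f);

	/*
	 * Sample once per maximum scan period, e.g. 60s to match the
	 * default numa_balancing_scan_period_max_ms of 60000.
	 */
	sleep(60);

	f = fopen(path, "r");
	if (!f || fscanf(f, "%llu %llu", &l1, &r1) != 2)
		return 1;
	fclose(f);

	unsigned long long dl = l1 - l0, dr = r1 - r0;

	if (dl + dr)
		printf("locality over the window: %.1f%%\n",
		       100.0 * dl / (dl + dr));
	else
		printf("no page accesses sampled in this window\n");
	return 0;
}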

FYI, by monitoring locality we found that KVM vCPU threads were not
covered by NUMA Balancing: no matter how many maximum periods passed,
the counters did not increase, or increased only very slowly, even though
we were copying memory inside the guest.

Later we found that such tasks rarely exit to user space, so they rarely
run the task work callbacks that the NUMA Balancing scan depends on. That
helped us realize how important it is to enable NUMA Balancing inside the
guest too, with the correct NUMA topology exposed to it, a big performance
risk I'd say :-P
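
To illustrate the mechanism we tripped over, below is a small toy model
(plain userspace C, not kernel code). In the real kernel, task_tick_numa()
only queues task_numa_work() as task work via task_work_add(), and task
work runs on the return-to-user-space path, so a thread that keeps looping
kernel<->guest hardly ever executes the scan:

#include <stdbool.h>
#include <stdio.h>

static bool scan_work_pending;
static unsigned long scans_run;

/* stands in for task_tick_numa(): the tick only *queues* the scan */
static void tick(void)
{
	scan_work_pending = true;
}

/* stands in for the return-to-user-space path that runs task work */
static void return_to_user(void)
{
	if (scan_work_pending) {
		scan_work_pending = false;
		scans_run++;	/* task_numa_work() would run here */
	}
}

int main(void)
{
	int i;

	/* an ordinary task: ticks interleaved with returns to user space */
	for (i = 0; i < 1000; i++) {
		tick();
		return_to_user();
	}
	printf("ordinary task, scans run: %lu\n", scans_run);

	/* a vCPU-like task: keeps ticking but (almost) never returns */
	scans_run = 0;
	for (i = 0; i < 1000; i++)
		tick();
	printf("vCPU-like task, scans run: %lu\n", scans_run);

	return 0;
}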

Maybe not the best example, but it shows that NUMA Balancing can run into
trouble in some cases, and we want such cases to be exposed somehow, perhaps
through the locality stat.

Regards,
Michael Wang

> 
> For example, the series that reconciles NUMA and CPU balancers may look
> worse in these stats even though the overall performance may be better.
> 
>> In the document (patch 2/2) you write:
>>
>>> +However, there are no hardware counters for per-task local/remote accessing
>>> +info, we don't know how many remote page accesses have occurred for a
>>> +particular task.
>>
>> We can of course 'fix' that by adding a tracepoint.
>>
>> Mel, would you feel better by having a tracepoint in task_numa_fault() ?
>>
> 
> A bit, although interpreting the data would still be difficult and the
> tracepoint would have to include information about the cgroup. While
> I've never tried, this seems like the type of thing that would be suited
> to a BPF script that probes task_numa_fault and extracts the information
> it needs.

> 
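
For reference, a rough sketch of what such a tracepoint might look like is
below. This is an illustration only, not code that exists in the tree: the
event name and fields are guesses at what a consumer would need, the usual
trace header boilerplate (TRACE_SYSTEM, include guards, CREATE_TRACE_POINTS)
is omitted, and the cgroup id would have to be resolved at the call site in
task_numa_fault():

#include <linux/tracepoint.h>

TRACE_EVENT(sched_numa_fault,

	TP_PROTO(int mem_node, int cpu_node, int pages, bool local, u64 cgrp_id),

	TP_ARGS(mem_node, cpu_node, pages, local, cgrp_id),

	TP_STRUCT__entry(
		__field(int,	mem_node)
		__field(int,	cpu_node)
		__field(int,	pages)
		__field(bool,	local)
		__field(u64,	cgrp_id)
	),

	TP_fast_assign(
		__entry->mem_node	= mem_node;
		__entry->cpu_node	= cpu_node;
		__entry->pages		= pages;
		__entry->local		= local;
		__entry->cgrp_id	= cgrp_id;
	),

	TP_printk("mem_node=%d cpu_node=%d pages=%d local=%d cgrp_id=%llu",
		  __entry->mem_node, __entry->cpu_node, __entry->pages,
		  __entry->local,
		  (unsigned long long)__entry->cgrp_id)
);

A BPF program or a bpftrace script could then attach to this event (or
simply kprobe task_numa_fault() directly, as suggested above) and aggregate
the faults per cgroup.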

Thread overview: 16+ messages
2020-02-07  3:34 [PATCH RESEND v8 0/2] sched/numa: introduce numa locality 王贇
2020-02-07  3:35 ` [PATCH RESEND v8 1/2] sched/numa: introduce per-cgroup NUMA locality info 王贇
2020-02-07  3:37   ` 王贇
2020-02-13  2:35     ` 王贇
2020-02-14 15:10   ` Peter Zijlstra
2020-02-17 11:58     ` Mel Gorman
2020-02-17 13:23       ` 王贇 [this message]
2020-02-17 14:16         ` Mel Gorman
2020-02-18  1:39           ` 王贇
2020-02-21 14:20             ` Mel Gorman
2020-02-21 15:47               ` Peter Zijlstra
2020-02-24  3:13                 ` 王贇
2020-02-24  3:05               ` 王贇
2020-02-21 15:28         ` Peter Zijlstra
2020-02-24  3:09           ` 王贇
2020-02-07  3:35 ` [PATCH RESEND v8 2/2] sched/numa: documentation for per-cgroup numa 王贇
