From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>, Jirka Hladky <jhladky@redhat.com>,
	Rik van Riel <riel@surriel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH 2/2] mm, numa: Migrate pages to local nodes quicker early in the lifetime of a task
Date: Tue, 2 Oct 2018 18:11:49 +0530
Message-ID: <20181002124149.GB4593@linux.vnet.ibm.com>
In-Reply-To: <20181001100525.29789-3-mgorman@techsingularity.net>

>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 25c7c7e09cbd..7fc4a371bdd2 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1392,6 +1392,17 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
>  	int last_cpupid, this_cpupid;
>
>  	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
> +	last_cpupid = page_cpupid_xchg_last(page, this_cpupid);
> +
> +	/*
> +	 * Allow first faults or private faults to migrate immediately early in
> +	 * the lifetime of a task. The magic number 4 is based on waiting for
> +	 * two full passes of the "multi-stage node selection" test that is
> +	 * executed below.
> +	 */
> +	if ((p->numa_preferred_nid == -1 || p->numa_scan_seq <= 4) &&
> +	    (cpupid_pid_unset(last_cpupid) || cpupid_match_pid(p, last_cpupid)))
> +		return true;
>

This does have issues with workloads that access more shared faults than
private faults.

In such workloads, this change would spread the memory across nodes, causing a
regression in behaviour.

5 runs on a 2-socket / 4-node POWER8 box (the columns appear to be min, max,
mean and stddev across the runs, plus the % change in the mean)


Without this patch
./numa01.sh      Real:  382.82    454.29    422.31    29.72
./numa01.sh      Sys:   40.12     74.53     58.50     13.37
./numa01.sh      User:  34230.22  46398.84  40292.62  4915.93

With this patch
./numa01.sh      Real:  415.56    555.04    473.45    51.17    -10.8016%
./numa01.sh      Sys:   43.42     94.22     73.59     17.31    -20.5055%
./numa01.sh      User:  35271.95  56644.19  45615.72  7165.01  -11.6694%

Since we are looking at time, smaller numbers are better.
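
For reference, the percentage column appears to be the change in the mean
relative to the patched run. Taking Real as a worked example:

	(422.31 - 473.45) / 473.45 * 100 ~= -10.80%

which matches the -10.8016% reported above, up to rounding of the means.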

----------------------------------------
# cat numa01.sh
#! /bin/bash
# numa01.sh corresponds to 2 perf bench processes each having ncpus/2 threads
# 50 loops of 3G process memory.

THREADS=${THREADS:-$(($(getconf _NPROCESSORS_ONLN)/2))}
perf bench numa mem --no-data_rand_walk -p 2 -t $THREADS -G 0 -P 3072 -T 0 -l 50 -c -s 2000 $@
----------------------------------------

I know this is a synthetic benchmark, but I wonder if benchmarks run in a VM
guest would show similar behaviour when observed from the host.

SPECjbb did show some small losses and gains.

Our NUMA grouping is not fast enough. It can sometimes take several iterations
before all the tasks belonging to the same group end up being part of the
group. With the current check we end up spreading memory faster than we
should, hurting the chance of early consolidation.

Can we restrict it to something like this?

if (p->numa_scan_seq >= MIN && p->numa_scan_seq <= MIN + 4 &&
    cpupid_match_pid(p, last_cpupid))
	return true;

Meaning: we have run at least MIN scans, and we find this task to be the most
likely task using this page.
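
To make the idea concrete, here is a rough sketch of how such a check could
sit in the quoted should_numa_migrate_memory() hunk. This is illustration
only: NUMA_MIGRATE_SCAN_MIN is a hypothetical placeholder for the MIN
threshold above, not an existing kernel constant.

/* Hypothetical stand-in for MIN; the actual value needs experimentation. */
#define NUMA_MIGRATE_SCAN_MIN	2

	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
	last_cpupid = page_cpupid_xchg_last(page, this_cpupid);

	/*
	 * Only fast-path the migration once the task has completed at least
	 * NUMA_MIGRATE_SCAN_MIN scan passes, so the numa group has had a
	 * chance to form, and only for private faults, i.e. when the last
	 * fault on this page came from this task.
	 */
	if (p->numa_scan_seq >= NUMA_MIGRATE_SCAN_MIN &&
	    p->numa_scan_seq <= NUMA_MIGRATE_SCAN_MIN + 4 &&
	    cpupid_match_pid(p, last_cpupid))
		return true;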

-- 
Thanks and Regards
Srikar Dronamraju


Thread overview: 16+ messages
2018-10-01 10:05 [PATCH 0/2] Faster migration for automatic NUMA balancing Mel Gorman
2018-10-01 10:05 ` [PATCH 1/2] mm, numa: Remove rate-limiting of automatic numa balancing migration Mel Gorman
2018-10-01 15:39   ` Rik van Riel
2018-10-02 10:17   ` [tip:sched/urgent] mm, sched/numa: Remove rate-limiting of automatic NUMA " tip-bot for Mel Gorman
2018-10-02 11:54   ` [PATCH 1/2] mm, numa: Remove rate-limiting of automatic numa " Srikar Dronamraju
2018-10-01 10:05 ` [PATCH 2/2] mm, numa: Migrate pages to local nodes quicker early in the lifetime of a task Mel Gorman
2018-10-01 15:41   ` Rik van Riel
2018-10-02 10:17   ` [tip:sched/urgent] sched/numa: " tip-bot for Mel Gorman
2018-10-02 12:41   ` Srikar Dronamraju [this message]
2018-10-02 13:54     ` [PATCH 2/2] mm, numa: " Mel Gorman
2018-10-02 17:30       ` Srikar Dronamraju
2018-10-02 18:22         ` Mel Gorman
2018-10-03 13:15           ` Srikar Dronamraju
2018-10-03 13:07         ` Srikar Dronamraju
2018-10-03 13:21           ` Mel Gorman
2018-10-03 14:08             ` Srikar Dronamraju
