From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: mgorman@techsingularity.net, torvalds@linux-foundation.org,
	tglx@linutronix.de, mingo@kernel.org, hpa@zytor.com,
	efault@gmx.de, linux-kernel@vger.kernel.org,
	matt@codeblueprint.co.uk, peterz@infradead.org,
	ggherdovich@suse.cz
Cc: linux-tip-commits@vger.kernel.org, mpe@ellerman.id.au
Subject: Re: [tip:sched/core] sched/numa: Delay retrying placement for automatic NUMA balance after wake_affine()
Date: Mon, 7 May 2018 04:06:07 -0700	[thread overview]
Message-ID: <20180507110607.GA3828@linux.vnet.ibm.com> (raw)
In-Reply-To: <tip-7347fc87dfe6b7315e74310ee1243dc222c68086@git.kernel.org>

Hi Mel,

I do see performance improving with this commit 7347fc87df "sched/numa:
Delay retrying placement for automatic NUMA balance after wake_affine()"
even on powerpc, where we have SD_WAKE_AFFINE *disabled* on NUMA sched
domains. Ideally this commit should not have affected powerpc machines at
all, which made me look a bit deeper.

> @@ -1876,7 +1877,18 @@ static void numa_migrate_preferred(struct task_struct *p)
>
>  	/* Periodically retry migrating the task to the preferred node */
>  	interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16);
> -	p->numa_migrate_retry = jiffies + interval;
> +	numa_migrate_retry = jiffies + interval;
> +
> +	/*
> +	 * Check that the new retry threshold is after the current one. If
> +	 * the retry is in the future, it implies that wake_affine has
> +	 * temporarily asked NUMA balancing to backoff from placement.
> +	 */
> +	if (numa_migrate_retry > p->numa_migrate_retry)
> +		return;

The above check looks wrong. It will almost always be true, since
numa_migrate_preferred() itself is called either when jiffies >
p->numa_migrate_retry or when the task's numa_preferred_nid has changed.

Hence we never end up calling task_numa_migrate(), i.e. we never go
through the active CPU balancing path in NUMA balancing.

Reading the comment just above the check makes me think the check
should have been

	if (numa_migrate_retry < p->numa_migrate_retry)
		return;
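
To make the direction of the check concrete, here is a small standalone
userspace sketch (plain unsigned longs stand in for jiffies; the values
are invented purely for illustration and none of this is kernel code):

#include <stdio.h>

static void check(unsigned long now, unsigned long stored_retry,
		  unsigned long interval, const char *scenario)
{
	unsigned long new_retry = now + interval;

	printf("%s:\n", scenario);

	/* The check as committed in 7347fc87df. */
	if (new_retry > stored_retry)
		printf("  committed check: early return, task_numa_migrate() skipped\n");
	else
		printf("  committed check: proceed to task_numa_migrate()\n");

	/* The check as proposed above. */
	if (new_retry < stored_retry)
		printf("  proposed check: early return (wake_affine backoff honoured)\n");
	else
		printf("  proposed check: proceed to task_numa_migrate()\n");
}

int main(void)
{
	/* Common case: the stored retry time has already expired, which is
	 * why numa_migrate_preferred() was called in the first place. */
	check(100000, 99000, 2000, "retry time already expired");

	/* wake_affine case: the stored retry was pushed into the future. */
	check(100000, 105000, 2000, "wake_affine pushed retry into the future");

	return 0;
}

In the common case the stored retry time has already expired (that is why
numa_migrate_preferred() ran at all), so the committed check returns early
every time, while the proposed check only backs off when wake_affine has
pushed the retry into the future.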

Here is the perf stat output with 7347fc87df, running perf bench numa mem
--no-data_rand_walk 96 -p 2 -t 48 -G 0 -P 3072 -T 0 -l 50 -c -s 1000:

          2,13,898      cs                                                            ( +-  2.65% )
            10,228      migrations                                                    ( +- 14.61% )
         21,86,406      faults                                                        ( +-  9.69% )
   40,65,84,68,026      cache-misses                                                  ( +-  0.31% )
                 0      sched:sched_move_numa   <---------------
                 0      sched:sched_stick_numa   <---------------
                 0      sched:sched_swap_numa   <---------------
          1,41,780      migrate:mm_migrate_pages                                      ( +- 24.11% )
                 0      migrate:mm_numa_migrate_ratelimit

     778.331602169 seconds time elapsed


If you look at the sched_move_numa, sched_stick_numa and sched_swap_numa
numbers, it is very clear that we did not try any active CPU migrations.

The same command with the commit reverted:

       2,38,685      cs                                                            ( +-  2.93% )
            25,127      migrations                                                    ( +- 13.22% )
         17,27,858      faults                                                        ( +-  2.61% )
   34,77,06,21,298      cache-misses                                                  ( +-  0.61% )
               560      sched:sched_move_numa                                         ( +-  2.05% )
                16      sched:sched_stick_numa                                        ( +- 33.33% )
               310      sched:sched_swap_numa                                         ( +- 15.16% )
          1,25,062      migrate:mm_migrate_pages                                      ( +-  0.91% )
                 0      migrate:mm_numa_migrate_ratelimit

     916.777315465 seconds time elapsed

(the numbers are almost the same with just that check commented out or modified)

So we are seeing an improvement, but the improvement comes from bypassing
the active CPU balancing. Do we really want to bypass this code?

> +
> +	/* Safe to try placing the task on the preferred node */
> +	p->numa_migrate_retry = numa_migrate_retry;
>
>  	/* Success if task is already running on preferred CPU */
>  	if (task_node(p) == p->numa_preferred_nid)
> @@ -5759,6 +5771,48 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
