From: Peter Zijlstra <peterz@infradead.org>
To: Rik van Riel <riel@redhat.com>
Cc: linux-kernel@vger.kernel.org, jhladky@redhat.com,
mingo@kernel.org, mgorman@suse.de
Subject: Re: [PATCH 4/4] sched,fair: remove effective_load
Date: Mon, 26 Jun 2017 18:12:50 +0200
Message-ID: <20170626161250.GD4941@worktop>
In-Reply-To: <1498490454.13083.45.camel@redhat.com>
On Mon, Jun 26, 2017 at 11:20:54AM -0400, Rik van Riel wrote:
> Oh, indeed. I guess in wake_affine() we should test
> whether the CPUs are in the same NUMA node, rather than
> doing cpus_share_cache() ?
Well, since select_idle_sibling() is on LLC, the early test on
cpus_share_cache(prev, this) seems to actually make sense.
But then cutting out all the other bits seems wrong. Not least
because !NUMA_BALANCING should also still keep working.
> Or, alternatively, have an update_numa_stats() variant
> for numa_wake_affine() that works on the LLC level?
I think we want to retain the existing behaviour for everything
larger than LLC, and when NUMA_BALANCING, smaller than NUMA.
Also note that your use of task_h_load() in the new numa thing suffers
from exactly the problem effective_load() is trying to solve.
Thread overview: 27+ messages
2017-06-23 16:55 [PATCH 0/4] NUMA improvements with task wakeup and load balancing riel
2017-06-23 16:55 ` [PATCH 1/4] sched,numa: override part of migrate_degrades_locality when idle balancing riel
2017-06-24 6:58 ` Ingo Molnar
2017-06-24 23:45 ` Rik van Riel
2017-06-24 7:22 ` [tip:sched/core] sched/numa: Override part of migrate_degrades_locality() " tip-bot for Rik van Riel
2017-06-23 16:55 ` [PATCH 2/4] sched: simplify wake_affine for single socket case riel
2017-06-24 7:22 ` [tip:sched/core] sched/fair: Simplify wake_affine() for the " tip-bot for Rik van Riel
2017-06-23 16:55 ` [PATCH 3/4] sched,numa: implement numa node level wake_affine riel
2017-06-24 7:23 ` [tip:sched/core] sched/numa: Implement NUMA node level wake_affine() tip-bot for Rik van Riel
2017-06-26 14:43 ` [PATCH 3/4] sched,numa: implement numa node level wake_affine Peter Zijlstra
2017-06-23 16:55 ` [PATCH 4/4] sched,fair: remove effective_load riel
2017-06-24 7:23 ` [tip:sched/core] sched/fair: Remove effective_load() tip-bot for Rik van Riel
2017-06-26 14:44 ` [PATCH 4/4] sched,fair: remove effective_load Peter Zijlstra
2017-06-26 14:46 ` Peter Zijlstra
2017-06-26 14:55 ` Rik van Riel
2017-06-26 15:04 ` Peter Zijlstra
2017-06-26 15:20 ` Rik van Riel
2017-06-26 16:12 ` Peter Zijlstra [this message]
2017-06-26 19:34 ` Rik van Riel
2017-06-27 5:39 ` Peter Zijlstra
2017-06-27 14:55 ` Rik van Riel
2017-08-01 12:19 ` [PATCH] sched/fair: Fix wake_affine() for !NUMA_BALANCING Peter Zijlstra
2017-08-01 19:26 ` Josef Bacik
2017-08-01 21:43 ` Peter Zijlstra
2017-08-24 22:29 ` Chris Wilson
2017-08-25 15:46 ` Chris Wilson
2017-06-27 18:27 ` [PATCH 4/4] sched,fair: remove effective_load Rik van Riel