* [PATCH] sched: Calculate effective load even if local weight is 0
From: Mel Gorman @ 2014-01-06 11:39 UTC
  To: Peter Zijlstra; +Cc: Thomas Hellstrom, Rik van Riel, linux-kernel

(Rik, you authored this patch, so it should be sent from you and needs
your Signed-off-by, assuming people are ok with the changelog.)

Thomas Hellstrom bisected a regression in which 3D performance on
virtual machines, as measured by glxgears, became erratic. The bisection
identified commit 58d081b5 (sched/numa: Avoid overloading CPUs on a
preferred NUMA node), which modified the behaviour of effective_load(),
as the culprit.

effective_load() calculates the change in system-wide load if a
scheduling entity were moved to another CPU. The task group does not
become heavier as a result of the move, but overall system load can
increase or decrease because the group's shares are redistributed.
Commit 58d081b5 (sched/numa: Avoid overloading CPUs on a preferred NUMA
node) changed effective_load() to make it suitable for deciding whether
a particular NUMA node was compute overloaded. To reduce the cost of the
function, it assumed that a local sched entity weight (wl) of 0 was
uninteresting, but that is not the case.
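
To illustrate why, here is a small standalone sketch (not kernel code;
share_delta() is a made-up helper, and the MIN_SHARES clipping and the
multi-level hierarchy walk are left out) of the per-level arithmetic
that effective_load() approximates. A group's shares S are split across
CPUs in proportion to per-CPU runqueue weight; adding wl locally and wg
group-wide shifts this CPU's slice of S, and that shift can be nonzero
even when wl is 0:

#include <stdio.h>

/*
 * One level of the effective_load() arithmetic, heavily simplified
 * (hypothetical helper, for illustration only): a group with total
 * shares S distributes them in proportion to per-CPU runqueue weight.
 * Adding wl to this CPU's weight and wg to the group-wide weight
 * changes this CPU's slice of S by the value returned here.
 */
static long share_delta(long S, long rw_cpu, long rw_total, long wl, long wg)
{
	long old_share = S * rw_cpu / rw_total;
	long new_share = S * (rw_cpu + wl) / (rw_total + wg);

	return new_share - old_share;
}

int main(void)
{
	/* S = 1024 shares, this CPU holds 1024 of the group's 2048 weight */
	long S = 1024, rw_cpu = 1024, rw_total = 2048;

	/*
	 * The sync wakeup case described below: the waker's weight (1024)
	 * is removed group-wide (wg = -1024) while nothing is added
	 * locally (wl = 0).  The "!wl" early return would report 0 here,
	 * but the real change in this CPU's share is not 0.
	 */
	printf("wl=0, wg=-1024 -> delta=%ld\n",
	       share_delta(S, rw_cpu, rw_total, 0, -1024));

	/* With no change at all, the delta really is 0 */
	printf("wl=0, wg=0     -> delta=%ld\n",
	       share_delta(S, rw_cpu, rw_total, 0, 0));

	return 0;
}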

wake_affine() uses a weight of 0 for sync wakeups on the grounds that
the waking task is expected to sleep and not contribute to load in the
near future. In that case we still want to calculate the effective load
of the sched entity hierarchy, because the group-wide weight still
changes. As effective_load() has not been used by task_numa_compare()
since commit fb13c7ee (sched/numa: Use a system-wide search to find
swap/migration candidates), this patch simply restores the historical
behaviour.
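
For reference, a heavily abridged sketch of the sync-wakeup path in
wake_affine() (paraphrased from kernel/sched/fair.c of that era, not a
verbatim quote) showing how effective_load() ends up being called with
a wl of 0 but a non-zero wg:

	if (sync) {
		tg = task_group(current);
		weight = current->se.load.weight;

		/* the waker is expected to sleep: drop its weight locally ... */
		this_load += effective_load(tg, this_cpu, -weight, -weight);
		/*
		 * ... while on prev_cpu only the group-wide weight changes,
		 * so wl is 0 and wg is -weight
		 */
		load += effective_load(tg, prev_cpu, 0, -weight);
	}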

[mgorman@suse.de: Wrote changelog]
Reported-and-tested-by: Thomas Hellstrom <thellstrom@vmware.com>
Should-be-signed-off-and-authored-by-Rik
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c7395d9..e64b079 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3923,7 +3923,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 {
 	struct sched_entity *se = tg->se[cpu];
 
-	if (!tg->parent || !wl)	/* the trivial, non-cgroup case */
+	if (!tg->parent)	/* the trivial, non-cgroup case */
 		return wl;
 
 	for_each_sched_entity(se) {
