From: Alex Shi <alex.shi@intel.com>
To: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Paul Turner <pjt@google.com>,
	Michael Wang <wangyun@linux.vnet.ibm.com>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	Borislav Petkov <bp@alien8.de>,
	Namhyung Kim <namhyung@kernel.org>,
	Mike Galbraith <efault@gmx.de>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>,
	Rik van Riel <riel@redhat.com>
Subject: Re: [PATCH v5 7/7] sched: consider runnable load average in effective_load
Date: Tue, 07 May 2013 13:43:52 +0800
Message-ID: <51889498.8090409@intel.com>
In-Reply-To: <51877EF8.20504@linux.vnet.ibm.com>

On 05/06/2013 05:59 PM, Preeti U Murthy wrote:
> Suggestion1: Change the CPU share calculation to use the runnable load
> average all the time.
> 
> Suggestion2: Did the opposite of point 2 above: it used the runnable
> load average while calculating the CPU share *before* a new task has
> been woken up, while retaining the instantaneous weight to calculate
> the CPU share after a new task could be woken up.
> 
> So, since there was no uniformity in the calculation of CPU shares in
> approaches 2 and 3, I think it caused a regression. However, I still
> don't understand how approach 4-Suggestion2 made that go away, although
> there was non-uniformity in the CPU shares calculation there as well.
> 
> But as Paul says, we could retain the use of instantaneous loads
> wherever CPU shares are calculated, for the reason he mentioned, and
> leave effective_load() and calc_cfs_shares() untouched.
> 
> This also brings up another question: should we modify wake_affine()
> to pass the runnable load average of the waking task to effective_load()?
> 
> What do you think?

I am not Paul. :)

The version of the patch that is acceptable to pgbench is attached. In
fact, since effective_load() mixes the direct load with the tg's
runnable load, the patch doesn't make much sense. So I am going to
agree to drop it if there is no performance benefit on my benchmarks.
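
To make the shares math in the comment below concrete, here is a toy
userspace check of equations (1)-(3) (plain C, not kernel code; taking
S = 1 group-weight unit is my assumption, and the rw_i values are the
example from the comment):

/* Toy check of equations (1)-(3) from the effective_load() comment.
 * Plain userspace C, not kernel code; S = 1 is an assumption. */
#include <stdio.h>

int main(void)
{
	double rw[4] = { 2, 4, 1, 0 };			/* per-cpu rq weights rw_i */
	double sum = rw[0] + rw[1] + rw[2] + rw[3];	/* \Sum rw_j = 7 */
	double wl = 1, wg = 1;	/* wake one task of weight 1 on cpu 0 */
	double S = 1;		/* group weight as seen by its parent */

	double s0 = rw[0] / sum;			/* s_i  = 2/7, eq (1) */
	double s0_new = (rw[0] + wl) / (sum + wg);	/* s'_i = 3/8, eq (2) */

	/* dw_i = S * (s'_i - s_i), eq (3): expect 5/56 */
	printf("dw_0 = %f (5/56 = %f)\n", S * (s0_new - s0), 5.0 / 56.0);
	return 0;
}

With the patch, the rw_i inputs become runnable load averages rather
than instantaneous weights, but the redistribution arithmetic itself
is unchanged.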

---

From f58519a8de3cebb7a865c9911c00dce5f1dd87f2 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@intel.com>
Date: Fri, 3 May 2013 13:29:04 +0800
Subject: [PATCH 7/7] sched: consider runnable load average in effective_load

effective_load() calculates the load change as seen from the
root_task_group. It should use the runnable load average of the
changed task.

Thanks to Morten Rasmussen and PeterZ for reminding me of this.

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 kernel/sched/fair.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ca0e051..b683909 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2980,15 +2980,15 @@ static void task_waking_fair(struct task_struct *p)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 /*
- * effective_load() calculates the load change as seen from the root_task_group
+ * effective_load() calculates load avg change as seen from the root_task_group
  *
  * Adding load to a group doesn't make a group heavier, but can cause movement
  * of group shares between cpus. Assuming the shares were perfectly aligned one
  * can calculate the shift in shares.
  *
- * Calculate the effective load difference if @wl is added (subtracted) to @tg
- * on this @cpu and results in a total addition (subtraction) of @wg to the
- * total group weight.
+ * Calculate the effective load avg difference if @wl is added (subtracted) to
+ * @tg on this @cpu and results in a total addition (subtraction) of @wg to the
+ * total group load avg.
  *
  * Given a runqueue weight distribution (rw_i) we can compute a shares
  * distribution (s_i) using:
@@ -3002,7 +3002,7 @@ static void task_waking_fair(struct task_struct *p)
  *   rw_i = {   2,   4,   1,   0 }
  *   s_i  = { 2/7, 4/7, 1/7,   0 }
  *
- * As per wake_affine() we're interested in the load of two CPUs (the CPU the
+ * As per wake_affine() we're interested in load avg of two CPUs (the CPU the
  * task used to run on and the CPU the waker is running on), we need to
  * compute the effect of waking a task on either CPU and, in case of a sync
  * wakeup, compute the effect of the current task going to sleep.
@@ -3012,20 +3012,20 @@ static void task_waking_fair(struct task_struct *p)
  *
  *   s'_i = (rw_i + @wl) / (@wg + \Sum rw_j)				(2)
  *
- * Suppose we're interested in CPUs 0 and 1, and want to compute the load
+ * Suppose we're interested in CPUs 0 and 1, and want to compute the load avg
  * differences in waking a task to CPU 0. The additional task changes the
  * weight and shares distributions like:
  *
  *   rw'_i = {   3,   4,   1,   0 }
  *   s'_i  = { 3/8, 4/8, 1/8,   0 }
  *
- * We can then compute the difference in effective weight by using:
+ * We can then compute the difference in effective load avg by using:
  *
  *   dw_i = S * (s'_i - s_i)						(3)
  *
  * Where 'S' is the group weight as seen by its parent.
  *
- * Therefore the effective change in loads on CPU 0 would be 5/56 (3/8 - 2/7)
+ * Therefore the effective change in load avg on CPU 0 would be 5/56 (3/8 - 2/7)
  * times the weight of the group. The effect on CPU 1 would be -4/56 (4/8 -
  * 4/7) times the weight of the group.
  */
@@ -3070,7 +3070,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		/*
 		 * wl = dw_i = S * (s'_i - s_i); see (3)
 		 */
-		wl -= se->load.weight;
+		wl -= se->avg.load_avg_contrib;
 
 		/*
 		 * Recursively apply this logic to all parent groups to compute
@@ -3116,14 +3116,14 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 */
 	if (sync) {
 		tg = task_group(current);
-		weight = current->se.load.weight;
+		weight = current->se.avg.load_avg_contrib;
 
 		this_load += effective_load(tg, this_cpu, -weight, -weight);
 		load += effective_load(tg, prev_cpu, 0, -weight);
 	}
 
 	tg = task_group(p);
-	weight = p->se.load.weight;
+	weight = p->se.avg.load_avg_contrib;
 
 	/*
 	 * In low-load situations, where prev_cpu is idle and this_cpu is idle
-- 
1.7.12
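
For reference, the reason se->avg.load_avg_contrib can differ from
se->load.weight is that it is the weight scaled by the fraction of
recent time the entity was runnable. A rough userspace model of mine
(assuming the y^32 = 1/2 per-period decay of the per-entity load
tracking; the real kernel code uses fixed-point arithmetic and 1024us
periods, so the numbers only approximate it):

/* Rough model of the runnable average behind avg.load_avg_contrib.
 * Assumption: contributions decay per ~1ms period by y, y^32 = 1/2,
 * as in per-entity load tracking; the real implementation differs
 * in detail (fixed point, 1024us periods, partial periods). */
#include <stdio.h>
#include <math.h>

int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);	/* per-period decay factor */
	double sum = 0, period = 0;
	int weight = 1024;			/* nice-0 task weight */

	/* a task runnable 50% of the time: 1ms runnable, 1ms sleeping */
	for (int ms = 0; ms < 1000; ms++) {
		sum = sum * y + (ms % 2 ? 0.0 : 1.0);
		period = period * y + 1.0;
	}

	/* contrib ~= weight * sum / period settles near 512 here,
	 * while the instantaneous load.weight stays at 1024 */
	printf("load_avg_contrib ~= %.0f of %d\n",
	       weight * sum / period, weight);
	return 0;
}

That difference is exactly what the wake_affine() hunks above now feed
into effective_load(), both for the sync case (subtracting the waker's
contribution) and for the wakee's weight.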

-- 
Thanks
    Alex

