From: Alex Shi <alex.shi@intel.com>
To: mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de,
	akpm@linux-foundation.org, bp@alien8.de, pjt@google.com,
	namhyung@kernel.org, efault@gmx.de, morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org, preeti@linux.vnet.ibm.com,
	viresh.kumar@linaro.org, linux-kernel@vger.kernel.org,
	alex.shi@intel.com, mgorman@suse.de, riel@redhat.com,
	wangyun@linux.vnet.ibm.com
Subject: [PATCH v5 7/7] sched: consider runnable load average in effective_load
Date: Mon,  6 May 2013 09:45:11 +0800	[thread overview]
Message-ID: <1367804711-30308-8-git-send-email-alex.shi@intel.com> (raw)
In-Reply-To: <1367804711-30308-1-git-send-email-alex.shi@intel.com>

effective_load() calculates the load change as seen from the
root_task_group. It needs to use the runnable load average of the
changed task.

Thanks to Morten Rasmussen and PeterZ for the reminder about this.

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 kernel/sched/fair.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 790e23d..6f4f14b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2976,15 +2976,15 @@ static void task_waking_fair(struct task_struct *p)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 /*
- * effective_load() calculates the load change as seen from the root_task_group
+ * effective_load() calculates load avg change as seen from the root_task_group
  *
  * Adding load to a group doesn't make a group heavier, but can cause movement
  * of group shares between cpus. Assuming the shares were perfectly aligned one
  * can calculate the shift in shares.
  *
- * Calculate the effective load difference if @wl is added (subtracted) to @tg
- * on this @cpu and results in a total addition (subtraction) of @wg to the
- * total group weight.
+ * Calculate the effective load avg difference if @wl is added (subtracted) to
+ * @tg on this @cpu and results in a total addition (subtraction) of @wg to the
+ * total group load avg.
  *
  * Given a runqueue weight distribution (rw_i) we can compute a shares
  * distribution (s_i) using:
@@ -2998,7 +2998,7 @@ static void task_waking_fair(struct task_struct *p)
  *   rw_i = {   2,   4,   1,   0 }
  *   s_i  = { 2/7, 4/7, 1/7,   0 }
  *
- * As per wake_affine() we're interested in the load of two CPUs (the CPU the
+ * As per wake_affine() we're interested in load avg of two CPUs (the CPU the
  * task used to run on and the CPU the waker is running on), we need to
  * compute the effect of waking a task on either CPU and, in case of a sync
  * wakeup, compute the effect of the current task going to sleep.
@@ -3008,20 +3008,20 @@ static void task_waking_fair(struct task_struct *p)
  *
  *   s'_i = (rw_i + @wl) / (@wg + \Sum rw_j)				(2)
  *
- * Suppose we're interested in CPUs 0 and 1, and want to compute the load
+ * Suppose we're interested in CPUs 0 and 1, and want to compute the load avg
  * differences in waking a task to CPU 0. The additional task changes the
  * weight and shares distributions like:
  *
  *   rw'_i = {   3,   4,   1,   0 }
  *   s'_i  = { 3/8, 4/8, 1/8,   0 }
  *
- * We can then compute the difference in effective weight by using:
+ * We can then compute the difference in effective load avg by using:
  *
  *   dw_i = S * (s'_i - s_i)						(3)
  *
  * Where 'S' is the group weight as seen by its parent.
  *
- * Therefore the effective change in loads on CPU 0 would be 5/56 (3/8 - 2/7)
+ * Therefore the effective change in load avg on CPU 0 would be 5/56 (3/8 - 2/7)
  * times the weight of the group. The effect on CPU 1 would be -4/56 (4/8 -
  * 4/7) times the weight of the group.
  */
@@ -3045,7 +3045,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		/*
 		 * w = rw_i + @wl
 		 */
-		w = se->my_q->load.weight + wl;
+		w = se->my_q->tg_load_contrib + wl;
 
 		/*
 		 * wl = S * s'_i; see (2)
@@ -3066,7 +3066,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		/*
 		 * wl = dw_i = S * (s'_i - s_i); see (3)
 		 */
-		wl -= se->load.weight;
+		wl -= se->avg.load_avg_contrib;
 
 		/*
 		 * Recursively apply this logic to all parent groups to compute
@@ -3112,14 +3112,14 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 */
 	if (sync) {
 		tg = task_group(current);
-		weight = current->se.load.weight;
+		weight = current->se.avg.load_avg_contrib;
 
 		this_load += effective_load(tg, this_cpu, -weight, -weight);
 		load += effective_load(tg, prev_cpu, 0, -weight);
 	}
 
 	tg = task_group(p);
-	weight = p->se.load.weight;
+	weight = p->se.avg.load_avg_contrib;
 
 	/*
 	 * In low-load situations, where prev_cpu is idle and this_cpu is idle
-- 
1.7.12
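
A note for readers of the archive, not part of the patch itself: the
functional change is that effective_load() and wake_affine() now work from
the per-entity runnable load average (se->avg.load_avg_contrib and
cfs_rq->tg_load_contrib) instead of the instantaneous weights. A rough,
hypothetical sketch of the idea in plain C, ignoring the kernel's real
geometric-series decay:

	/*
	 * Simplified illustration only: the kernel maintains a decaying
	 * average of runnable time; here the weight is just scaled by a
	 * runnable fraction.
	 */
	static long runnable_load_contrib(long weight, long runnable_pct)
	{
		return weight * runnable_pct / 100;
	}

	/*
	 * A nice-0 task (weight 1024) that has been runnable about half of
	 * the recent past contributes ~512:
	 *   runnable_load_contrib(1024, 50) == 512, vs. load.weight == 1024
	 * so wake_affine() balances on the load the task actually generates
	 * rather than on its full static weight.
	 */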

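The share-shift arithmetic in the comment block above can be checked with a
small standalone user-space program (not kernel code); the group weight
S = 56 is assumed only so the worked example comes out in whole numbers:

	#include <stdio.h>

	int main(void)
	{
		double rw[4] = { 2, 4, 1, 0 };	/* per-cpu runnable weights, rw_i */
		double sum = rw[0] + rw[1] + rw[2] + rw[3];
		double S = 56;		/* assumed group weight seen by the parent */
		double wl = 1;		/* load woken onto cpu 0 */

		/* dw_0 = S * (s'_0 - s_0) = 56 * (3/8 - 2/7) = +5 */
		printf("cpu0 %+g\n", S * ((rw[0] + wl) / (sum + wl) - rw[0] / sum));
		/* dw_1 = S * (4/8 - 4/7) = -4 */
		printf("cpu1 %+g\n", S * (rw[1] / (sum + wl) - rw[1] / sum));
		return 0;
	}

This reproduces the 5/56 and -4/56 fractions of the group weight quoted in
the comment.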


Thread overview: 87+ messages
2013-05-06  1:45 [PATCH v5 0/7] use runnable load avg in load balance Alex Shi
2013-05-06  1:45 ` [PATCH v5 1/7] Revert "sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking" Alex Shi
2013-05-06  8:24   ` Paul Turner
2013-05-06  8:49     ` Alex Shi
2013-05-06  8:55       ` Paul Turner
2013-05-06  8:58         ` Alex Shi
2013-05-07  5:05         ` Alex Shi
2013-05-06  1:45 ` [PATCH v5 2/7] sched: remove SMP cover for runnable variables in cfs_rq Alex Shi
2013-05-06  4:11   ` Preeti U Murthy
2013-05-06  7:18     ` Alex Shi
2013-05-06  8:01   ` Paul Turner
2013-05-06  8:57     ` Alex Shi
2013-05-06  9:08       ` Paul Turner
2013-05-06 10:47         ` Preeti U Murthy
2013-05-06 15:02         ` Alex Shi
2013-05-07  5:07         ` Alex Shi
2013-05-06  1:45 ` [PATCH v5 3/7] sched: set initial value of runnable avg for new forked task Alex Shi
2013-05-06  8:19   ` Paul Turner
2013-05-06  9:21     ` Alex Shi
2013-05-06 10:17       ` Paul Turner
2013-05-07  2:18         ` Alex Shi
2013-05-07  3:06           ` Paul Turner
2013-05-07  3:24             ` Alex Shi
2013-05-07  5:03               ` Alex Shi
2013-05-09  8:31                 ` Alex Shi
2013-05-09  9:30                   ` Paul Turner
2013-05-09 14:23                     ` Alex Shi
2013-05-08 11:15               ` Peter Zijlstra
2013-05-09  9:34               ` Paul Turner
2013-05-07  9:57             ` Morten Rasmussen
2013-05-07 11:05               ` Alex Shi
2013-05-07 11:20                 ` Paul Turner
2013-05-08 11:34                   ` Peter Zijlstra
2013-05-08 12:00                     ` Paul Turner
2013-05-09 10:55                       ` Morten Rasmussen
2013-05-09  8:22                     ` Alex Shi
2013-05-09  9:24                       ` Paul Turner
2013-05-09 13:13                         ` Alex Shi
2013-05-06 10:22       ` Paul Turner
2013-05-06 15:26         ` Alex Shi
2013-05-06 15:28           ` Peter Zijlstra
2013-05-07  2:19   ` Alex Shi
2013-05-06  1:45 ` [PATCH v5 4/7] sched: update cpu load after task_tick Alex Shi
2013-05-06  1:45 ` [PATCH v5 5/7] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task Alex Shi
2013-05-06  8:46   ` Paul Turner
2013-05-06 10:19     ` Peter Zijlstra
2013-05-06 10:33       ` Paul Turner
2013-05-06 11:10         ` Peter Zijlstra
2013-05-07  6:17           ` Alex Shi
2013-06-04  1:45             ` Alex Shi
2013-06-04  1:51               ` [DISCUSSION] removing variety rq->cpu_load ? Alex Shi
2013-06-04  2:33                 ` Michael Wang
2013-06-04  2:44                   ` Alex Shi
2013-06-04  3:09                     ` Michael Wang
2013-06-04  4:55                       ` Alex Shi
2013-05-06 15:00     ` [PATCH v5 5/7] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task Alex Shi
2013-05-06 18:34       ` Paul Turner
2013-05-07  0:24         ` Alex Shi
2013-05-07  5:12         ` Alex Shi
2013-05-06  1:45 ` [PATCH v5 6/7] sched: consider runnable load average in move_tasks Alex Shi
2013-05-06  8:53   ` Paul Turner
2013-05-06 15:04     ` Peter Zijlstra
2013-05-06 20:59       ` Paul Turner
2013-05-07  5:17         ` Alex Shi
2013-05-08  1:39           ` Alex Shi
2013-05-09  1:24             ` Alex Shi
2013-05-10 13:58               ` Alex Shi
2013-05-09  5:29             ` Alex Shi
2013-05-10 14:03               ` Alex Shi
2013-05-06 15:07     ` Alex Shi
2013-05-06  1:45 ` Alex Shi [this message]
2013-05-06  3:34   ` [PATCH v5 7/7] sched: consider runnable load average in effective_load Michael Wang
2013-05-06  5:39     ` Alex Shi
2013-05-06  6:11       ` Michael Wang
2013-05-06  9:39         ` Alex Shi
2013-05-06  7:49       ` Michael Wang
2013-05-06  8:02         ` Alex Shi
2013-05-06  8:34           ` Michael Wang
2013-05-06  9:06             ` Paul Turner
2013-05-06  9:35               ` Alex Shi
2013-05-06  9:59                 ` Preeti U Murthy
2013-05-07  2:43                   ` Michael Wang
2013-05-07  5:43                   ` Alex Shi
2013-05-08  1:33                     ` Alex Shi
2013-05-06 10:00                 ` Paul Turner
2013-05-06  7:10     ` Preeti U Murthy
2013-05-06  7:20       ` Michael Wang
