From: Michael Wang <wangyun@linux.vnet.ibm.com>
To: Alex Shi <alex.shi@intel.com>
Cc: pjt@google.com, mingo@redhat.com, peterz@infradead.org,
	tglx@linutronix.de, akpm@linux-foundation.org, bp@alien8.de,
	namhyung@kernel.org, efault@gmx.de, morten.rasmussen@arm.com,
	vincent.guittot@linaro.org, preeti@linux.vnet.ibm.com,
	viresh.kumar@linaro.org, linux-kernel@vger.kernel.org,
	mgorman@suse.de, riel@redhat.com
Subject: Re: [PATCH v5 7/7] sched: consider runnable load average in effective_load
Date: Mon, 06 May 2013 16:34:06 +0800	[thread overview]
Message-ID: <51876AFE.80906@linux.vnet.ibm.com> (raw)
In-Reply-To: <518763B0.30200@intel.com>

On 05/06/2013 04:02 PM, Alex Shi wrote:
> On 05/06/2013 03:49 PM, Michael Wang wrote:
>> On 05/06/2013 01:39 PM, Alex Shi wrote:
>> [snip]
>>
>> Rough test done:
>>
>>>
>>> 1, change the tg_weight in calc_tg_weight() back to using tg_load_contrib rather than the direct load.
>>
>> This way stops the regression of patch 7.
>>
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 6f4f14b..c770f8d 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -1037,8 +1037,8 @@ static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
>>>  	 * update_cfs_rq_load_contribution().
>>>  	 */
>>>  	tg_weight = atomic64_read(&tg->load_avg);
>>> -	tg_weight -= cfs_rq->tg_load_contrib;
>>> -	tg_weight += cfs_rq->load.weight;
>>> +	//tg_weight -= cfs_rq->tg_load_contrib;
>>> +	//tg_weight += cfs_rq->load.weight;
>>>
>>>  	return tg_weight;
>>>  }
>>>
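For reference, with that first change applied cleanly (the dead lines
dropped rather than commented out), calc_tg_weight() would read roughly
as below -- just a sketch of the end state, not a tested patch:

static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
{
	long tg_weight;

	/*
	 * tg->load_avg is the sum of all cfs_rq->tg_load_contrib, so
	 * the group weight comes purely from the decayed per-cfs_rq
	 * contributions, without mixing in this cfs_rq's instantaneous
	 * load.weight.
	 */
	tg_weight = atomic64_read(&tg->load_avg);

	return tg_weight;
}
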
>>> 2, another try is to follow the current calc_tg_weight(), i.e. remove the following change.
>>
>> This way shows even better results than patches 1~6 alone.
> 
> How much better than the first change?

Never mind, it's just a rough test; consider them the same...

>>
>> But the way Preeti suggested doesn't work...
> 
> What's Preeti's suggestion? :)

Pasted at the end.

>>
>> Maybe we should record some explanation about this change here, shouldn't we?
> 
> I don't know why we need this. PJT, would you like to tell us why
> calc_tg_weight() uses cfs_rq->load.weight rather than cfs_rq->tg_load_contrib?

The comment says this is more accurate, but I suppose that was for the
world without the decayed load...

But if it uses 'cfs_rq->load.weight', the denominator M contains that
factor, so the numerator w has to contain it as well...
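
Roughly, in the notation of effective_load()'s comments (just a sketch
of my reading, not the exact code):

  w  = rw_i + wl                          /* numerator */
  M  = wg + \Sum_j tg_load_contrib_j
          - tg_load_contrib_i + rw_i      /* denominator, via calc_tg_weight() */

  wl = S * w / M                          /* i.e. S * s'_i, see (2) */

Today rw_i is cfs_rq->load.weight in both places; if the rw_i in w is
switched to runnable_load_avg while the rw_i folded into M stays
load.weight, the ratio mixes a decayed term with an instantaneous one,
so the two sides should be changed together.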

Regards,
Michael Wang

> 
> 
>>
>> Regards,
>> Michael Wang
>>

sched: Modify effective_load() to use runnable load average

From: Preeti U Murthy <preeti@linux.vnet.ibm.com>

The runqueue weight distribution should use the runnable load average of
the cfs_rq on which the task will be woken up.

However, since the computation of se->load.weight already takes the
runnable load average into consideration in update_cfs_shares(), there is
no need to modify it in effective_load().
---
 kernel/sched/fair.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 790e23d..5489022 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3045,7 +3045,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		/*
 		 * w = rw_i + @wl
 		 */
-		w = se->my_q->load.weight + wl;
+		w = se->my_q->runnable_load_avg + wl;

 		/*
 		 * wl = S * s'_i; see (2)
@@ -3066,6 +3066,9 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		/*
 		 * wl = dw_i = S * (s'_i - s_i); see (3)
 		 */
+		/* Do not modify the below as it already contains runnable
+		 * load average in its computation
+		 */
 		wl -= se->load.weight;

 		/*
@@ -3112,14 +3115,14 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 */
 	if (sync) {
 		tg = task_group(current);
-		weight = current->se.load.weight;
+		weight = current->se.avg.load_avg_contrib;

 		this_load += effective_load(tg, this_cpu, -weight, -weight);
 		load += effective_load(tg, prev_cpu, 0, -weight);
 	}

 	tg = task_group(p);
-	weight = p->se.load.weight;
+	weight = p->se.avg.load_avg_contrib;

 	/*
 	 * In low-load situations, where prev_cpu is idle and this_cpu is idle


Regards
Preeti U Murthy
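
For what it's worth, the relation between the two quantities swapped in
the wake_affine() hunk, as I read __update_task_entity_contrib() in
fair.c (a reader's note, not part of Preeti's patch):

  se->avg.load_avg_contrib
      ~= se->load.weight * runnable_avg_sum / (runnable_avg_period + 1)

i.e. the decayed, runnable-time-scaled counterpart of the instantaneous
se->load.weight.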




