From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Yuyang Du <yuyang.du@intel.com>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	Linaro Kernel Mailman List <linaro-kernel@lists.linaro.org>,
	Paul Turner <pjt@google.com>,
	Benjamin Segall <bsegall@google.com>
Subject: Re: [PATCH 4/7 v3] sched: propagate load during synchronous attach/detach
Date: Thu, 15 Sep 2016 18:20:18 +0100
Message-ID: <161f2b72-7674-6f90-9269-9ef1bba242f0@arm.com>
In-Reply-To: <CAKfTPtC0dpLYsfn7Li__nr3MX45DX2fBL=ZTDqSuDCYKe2XS_w@mail.gmail.com>

On 15/09/16 15:31, Vincent Guittot wrote:
> On 15 September 2016 at 15:11, Dietmar Eggemann
> <dietmar.eggemann@arm.com> wrote:

[...]

>> Wasn't 'consuming <1' related to 'NICE_0_LOAD' and not
>> scale_load_down(gcfs_rq->tg->shares) before the rewrite of PELT (v4.2,
>> __update_group_entity_contrib())?
> 
> Yes, before the rewrite the condition (tg->runnable_avg < NICE_0_LOAD) was used.
> 
> I have used the following examples to choose the condition:
> 
> A task group with only one always-running task TA, whose weight
> equals tg->shares, will have a tg load (cfs_rq->tg->load_avg) equal
> to TA's weight == scale_load_down(tg->shares): the load of the CPU on
> which the task runs will be scale_load_down(task's weight) ==
> scale_load_down(tg->shares), and the load of the other CPUs will be
> zero. In this case, all shares will be given to the cfs_rq CFS1 on
> which TA runs, and the load of the sched_entity SB that represents
> CFS1 at the parent level will be scale_load_down(SB's weight) =
> scale_load_down(tg->shares).
> 
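
To make this concrete with my own numbers: assuming SB's load is
computed as load * scale_load_down(tg->shares) / tg_load (the '1st
part of the calculation' you mention below), and the default
tg->shares = scale_load(1024), i.e. scale_load_down(tg->shares) = 1024
on a 64-bit kernel:

	load    = CFS1's load_avg  ~ 1024  /* TA runs 100% of the time */
	tg_load = tg->load_avg     ~ 1024  /* the other CPUs add 0 */

	SB's load = 1024 * 1024 / 1024 = 1024 == scale_load_down(tg->shares)
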
> If TA is not an always-running task, its load will be less than its
> weight, and therefore less than scale_load_down(tg->shares); as a
> result, tg->load_avg will be less than scale_load_down(tg->shares).
> Nevertheless, the weight of SB is still scale_load_down(tg->shares),
> and its load should be the same as TA's. But the 1st part of the
> calculation gives a load of scale_load_down(gcfs_rq->tg->shares),
> because tg_load == gcfs_rq->tg_load_avg_contrib == load. So if
> tg_load < scale_load_down(gcfs_rq->tg->shares), we have to correct
> the load that we set on SB.
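
And with the same numbers as above, a TA that runs only 50% of the
time shows why the correction is needed:

	load    = CFS1's load_avg  ~ 512
	tg_load = tg->load_avg     ~ 512   /* == gcfs_rq->tg_load_avg_contrib */

	load * scale_load_down(tg->shares) / tg_load = 512 * 1024 / 512 = 1024

whereas SB's load should stay at ~512, like TA's.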

Makes sense to me now. Thanks. Peter already pointed out that this
math can be made easier, so you will probably 'scale gcfs_rq's load
into tg's shares' only if 'tg_load >= shares'.

[...]

Thread overview: 41+ messages
2016-09-12  7:47 [PATCH 0/7 v3] sched: reflect sched_entity move into task_group's load Vincent Guittot
2016-09-12  7:47 ` [PATCH 1/7 v3] sched: factorize attach entity Vincent Guittot
2016-09-12  7:47 ` [PATCH 2/7 v3] sched: fix hierarchical order in rq->leaf_cfs_rq_list Vincent Guittot
2016-09-21 10:14   ` Dietmar Eggemann
2016-09-21 12:34     ` Vincent Guittot
2016-09-21 17:25       ` Dietmar Eggemann
2016-09-21 18:02         ` Vincent Guittot
2016-09-12  7:47 ` [PATCH 3/7 v3] sched: factorize PELT update Vincent Guittot
2016-09-15 13:09   ` Peter Zijlstra
2016-09-15 13:30     ` Vincent Guittot
2016-09-12  7:47 ` [PATCH 4/7 v3] sched: propagate load during synchronous attach/detach Vincent Guittot
2016-09-15 12:55   ` Peter Zijlstra
2016-09-15 13:01     ` Vincent Guittot
2016-09-15 12:59   ` Peter Zijlstra
2016-09-15 13:11     ` Vincent Guittot
2016-09-15 13:11   ` Dietmar Eggemann
2016-09-15 14:31     ` Vincent Guittot
2016-09-15 17:20       ` Dietmar Eggemann [this message]
2016-09-15 15:14     ` Peter Zijlstra
2016-09-15 17:36       ` Dietmar Eggemann
2016-09-15 17:54         ` Peter Zijlstra
2016-09-15 14:43   ` Peter Zijlstra
2016-09-15 14:51     ` Vincent Guittot
2016-09-19  3:19   ` Wanpeng Li
2016-09-12  7:47 ` [PATCH 5/7 v3] sched: propagate asynchrous detach Vincent Guittot
2016-09-12  7:47 ` [PATCH 6/7 v3] sched: fix task group initialization Vincent Guittot
2016-09-12  7:47 ` [PATCH 7/7 v3] sched: fix wrong utilization accounting when switching to fair class Vincent Guittot
2016-09-15 13:18   ` Peter Zijlstra
2016-09-15 15:36     ` Vincent Guittot
2016-09-16 12:16       ` Peter Zijlstra
2016-09-16 14:23         ` Vincent Guittot
2016-09-20 11:54           ` Peter Zijlstra
2016-09-20 13:06             ` Vincent Guittot
2016-09-22 12:25               ` Peter Zijlstra
2016-09-26 14:53                 ` Peter Zijlstra
2016-09-20 16:59             ` bsegall
2016-09-22  8:33               ` Peter Zijlstra
2016-09-22 17:10                 ` bsegall
2016-09-16 10:51   ` Peter Zijlstra
2016-09-16 12:45     ` Vincent Guittot
2016-09-30 12:01   ` [tip:sched/core] sched/core: Fix incorrect " tip-bot for Vincent Guittot
