From: Vincent Guittot <vincent.guittot@linaro.org>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	Patrick Bellasi <patrick.bellasi@arm.com>,
	Paul Turner <pjt@google.com>, Ben Segall <bsegall@google.com>,
	Thara Gopinath <thara.gopinath@linaro.org>,
	pkondeti@codeaurora.org
Subject: Re: [PATCH v5 2/2] sched/fair: update scale invariance of PELT
Date: Wed, 7 Nov 2018 13:58:52 +0100
Message-ID: <CAKfTPtDQJYG1a+JDO_okHcbk9mxn+k+b_dwP5hTbUH1e00ZszA@mail.gmail.com>
In-Reply-To: <28af1313-8153-624d-1ae9-1554bb2db474@arm.com>

On Wed, 7 Nov 2018 at 11:47, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>
> On 11/5/18 10:10 AM, Vincent Guittot wrote:
> > On Fri, 2 Nov 2018 at 16:36, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
> >>
> >> On 10/26/18 6:11 PM, Vincent Guittot wrote:
>
> [...]
>
> >> Thinking about this new approach on a big.LITTLE platform:
> >>
> >> CPU Capacities big: 1024 LITTLE: 512, performance CPUfreq governor
> >>
> >> A 50% (runtime/period) task on a big CPU will become an always running
> >> task on the little CPU. The utilization signals of the task and of the
> >> cfs_rq of the little CPU converge to 1024.
> >>
> >> With contrib scaling the utilization signal of the 50% task converges to
> >> 512 on the little CPU, even though it is always running on it, and so
> >> does that of the cfs_rq.
> >>
> >> Two 25% tasks on a big CPU will become two 50% tasks on a little CPU.
> >> The utilization signal of each task converges to 512 and that of the
> >> cfs_rq of the little CPU converges to 1024.
> >>
> >> With contrib scaling the utilization signal of the 25% tasks converges
> >> to 256 on the little CPU, even though they each run 50% on it, and that
> >> of the cfs_rq converges to 512.
> >>
> >> So what do we consider system-wide invariance? I thought that e.g. a 25%
> >> task should have a utilization value of 256 no matter on which CPU it is
> >> running?
> >>
> >> In both cases, the little CPU is not going idle whereas the big CPU does.
> >
> > IMO, the key point here is that there is no idle time. As soon as
> > there is no idle time, you don't know whether a task has enough compute
> > capacity, so you can't tell the difference between the 50% running task
> > and an always running task on the little core.
>
> Agreed. My '2 25% tasks on a 512 CPU' was a special example in the sense
> that the tasks would stay invariant since they are not restricted by the
> CPU capacity yet. '2 35% tasks' would also end up with 256 utilization
> each under contrib scaling, so that's not invariant either.
>
> Could we say that in the overutilized case, with contrib scaling each of
> the n tasks gets cpu_cap/n utilization, whereas with time scaling they get
> 1024/n utilization? Even though there is no value in this information
> because of the over-utilized state.
>
> > It's also interesting to notice that the task will reach the always
> > running state only after more than 600ms on the little core, with its
> > utilization starting from 0.
> >
> > Then, considering system-wide invariance, the tasks are not really
> > invariant. If we take a 50% running task that runs 40ms in a period of
> > 80ms, the max utilization of the task will be 721 on the big core and
> > 512 on the little core.
>
> Agreed, the utilization of the task on the big CPU oscillates between
> 721 and 321 so the average is still ~512.
>
> > Then, if you take a 39ms running task instead, the utilization on the
> > big core will reach 709 but it will be 507 on the little core. So your
> > utilization depends on the current capacity.
>
> OK, but the average should be ~ 507 on big as well. There is idle time

I don't know about the average; the utilization varies between 709 and 292
on the big core and between 507 and 486 on the little core.
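
For reference, those oscillation bounds can be reproduced with a small
standalone toy simulation of PELT. This is only an illustration assuming
the usual 32ms half-life and 1ms steps, not the kernel code:

  # Toy PELT: per-ms geometric decay plus accrual while running.
  HALF_LIFE_MS = 32
  Y = 0.5 ** (1.0 / HALF_LIFE_MS)          # per-ms decay factor

  def steady_state(run_ms, sleep_ms, contrib_scale):
      """Steady-state max/min utilization of a periodic task whose
      contributions are scaled by contrib_scale (capacity/1024 with
      contrib scaling, 1.0 on the big CPU)."""
      util = util_max = util_min = 0.0
      for _ in range(100):                 # plenty of periods to converge
          for _ in range(run_ms):          # running: decay and accrue
              util = util * Y + (1.0 - Y) * contrib_scale
          util_max = util
          for _ in range(sleep_ms):        # sleeping: decay only
              util *= Y
          util_min = util
      return round(util_max * 1024), round(util_min * 1024)

  # 39ms/80ms task on the big CPU (capacity 1024): ~(709, 292)
  print(steady_state(39, 41, 1.0))
  # Same task on the little CPU (capacity 512) with contrib scaling: it
  # now runs 78ms of each 80ms period, contributions halved: ~(507, 486)
  print(steady_state(78, 2, 0.5))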

> now even on the little CPU. But yeah, with longer period values, there
> are quite big amplitudes.
>
> > With the new proposal, the max utilization will be 709 on both the big
> > and the little core for the 39ms running task. For the 40ms running
> > task, the utilization will be 721 on the big core. Then, if the task
> > moves to the little core, it will reach the value 721 after 80ms, then
> > 900 after more than 160ms and 1000 after 320ms.
>
> We consider max values here? In this case, agreed. So this is a reminder

Yes, we consider the max value, as that is what is mainly used, especially
with the util_est feature.
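
For context, util_est roughly remembers the utilization a task had when it
last ran and lets task placement use the max of that estimate and the
instantaneous signal, so a periodic task is sized by its peak rather than
by whatever the decayed value happens to be at wakeup. A very rough sketch
of the idea (not the kernel code; the EWMA weight here is an assumption):

  class UtilEst:
      WEIGHT_SHIFT = 2                # new sample weighted 1/4 (assumption)

      def __init__(self):
          self.enqueued = 0           # util_avg sampled at last dequeue
          self.ewma = 0               # longer-term moving average

      def dequeue(self, util_avg):
          self.enqueued = util_avg
          self.ewma += (util_avg - self.ewma) >> self.WEIGHT_SHIFT

      def estimate(self, util_avg):
          # placement uses the max of the instantaneous and estimated values
          return max(util_avg, self.enqueued, self.ewma)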

> that even if the average utilization of a task compared to the CPU
> capacity would mark the system as non-overutilized (39ms/80ms on a 512
> CPU), the utilization of that task looks different because of the
> oscillation which is pretty noticeable with long periods.
>
> The important bit for EAS is that it only uses utilization in the
> non-overutilized case. Here, utilization signals should look the same
> between the two approaches, not considering tasks with long periods like
> the 39/80ms example above.
> There are also some advantages for EAS with time scaling: (1) faster
> overutilization detection when a big task runs on a little CPU, (2)
> higher (initial) task utilization value when this task migrates from
> little to big CPU.
>
> We should run our EAS task placement tests with your time scaling patches.
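
To make the difference between the two flavours of invariance discussed
above concrete, here is a small standalone toy model. This is my simplified
reading, not the patch code, and it ignores the catch-up of lost idle time
since the migrated task has no idle time on the little CPU: contrib scaling
scales each accrued contribution by the CPU capacity, whereas time scaling
lets PELT time advance at a rate proportional to capacity while the task
runs. Starting from ~321, the low point of the 40ms/80ms task on the big
CPU, it gives roughly the 721 / 900 / 1000 progression mentioned above:

  HALF_LIFE_MS = 32
  Y = 0.5 ** (1.0 / HALF_LIFE_MS)

  def run_contrib_scaling(util, wall_ms, capacity):
      # contrib scaling: every ms of running accrues a capacity-scaled amount
      for _ in range(wall_ms):
          util = util * Y + (1.0 - Y) * capacity
      return util

  def run_time_scaling(util, wall_ms, capacity):
      # time scaling: PELT time advances at 'capacity' rate while running, so
      # wall_ms of running counts like wall_ms * capacity of full-speed time
      for _ in range(int(wall_ms * capacity)):
          util = util * Y + (1.0 - Y)
      return util

  # 40ms/80ms task migrating from big (cap 1.0) to little (cap 0.5), where
  # it becomes always running, i.e. runs the full 80ms of every period.
  u = 321 / 1024.0
  for period in range(1, 5):
      u = run_time_scaling(u, 80, 0.5)
      print(period * 80, round(u * 1024))   # ~728, ~900, ~972, ~1002

  u = 321 / 1024.0
  for period in range(1, 5):
      u = run_contrib_scaling(u, 80, 0.5)
  print(round(u * 1024))                    # contrib scaling saturates at ~512

These are not exactly the numbers the patch produces (which works on the rq
clock and handles the stolen idle time), but they show why, with time
scaling, a task with no idle time keeps ramping toward 1024 instead of
stopping at the CPU's capacity.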
