From: Patrick Bellasi <patrick.bellasi@arm.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	Paul Turner <pjt@google.com>, Ben Segall <bsegall@google.com>,
	Thara Gopinath <thara.gopinath@linaro.org>,
	pkondeti@codeaurora.org, Quentin Perret <quentin.perret@arm.com>,
	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Subject: Re: [PATCH v7 2/2] sched/fair: update scale invariance of PELT
Date: Wed, 28 Nov 2018 16:35:46 +0000	[thread overview]
Message-ID: <20181128163545.GE23094@e110439-lin> (raw)
In-Reply-To: <CAKfTPtDTSXiVsPepiJDFT9ZCXfUVYTQhRvgfZbF=_w87girPAQ@mail.gmail.com>

On 28-Nov 16:42, Vincent Guittot wrote:
> On Wed, 28 Nov 2018 at 16:21, Patrick Bellasi <patrick.bellasi@arm.com> wrote:
> >
> > On 28-Nov 15:55, Vincent Guittot wrote:
> > > On Wed, 28 Nov 2018 at 15:40, Patrick Bellasi <patrick.bellasi@arm.com> wrote:
> > > >
> > > > On 28-Nov 14:33, Vincent Guittot wrote:
> > > > > On Wed, 28 Nov 2018 at 12:53, Patrick Bellasi <patrick.bellasi@arm.com> wrote:
> > > > > >
> > > > > > On 28-Nov 11:02, Peter Zijlstra wrote:
> > > > > > > On Wed, Nov 28, 2018 at 10:54:13AM +0100, Vincent Guittot wrote:
> > > > > > >
> > > > > > > > Is there anything else that I should do for these patches ?
> > > > > > >
> > > > > > > IIRC, Morten mentioned they break util_est; Patrick was going to explain.
> > > > > >
> > > > > > I guess the problem is that, once we cross the current capacity,
> > > > > > strictly speaking util_avg no longer represents a utilization.
> > > > > >
> > > > > > With the new signal this could happen and we end up storing estimated
> > > > > > utilization samples which will overestimate the task requirements.
> > > > > >
> > > > > > We will have a spike in estimated utilization at next wakeup, since we
> > > > > > use MAX(util_avg@dequeue_time, ewma). Potentially we also inflate the EWMA in
> > > > > > case we collect multiple samples above the current capacity.
> > > > >
> > > > > TBH I don't see how it's different from the current implementation with a
> > > > > task that was scheduled on a big core and now wakes up on a little core.
> > > > > The util_est is overestimated as well.
> > > >
> > > > While running below the capacity of a CPU, either big or LITTLE, we
> > > > can still measure the actual used bandwidth as long as we have idle
> > > > time. If the task is then moved into a lower capacity core, I think
> > > > it's still safe to assume that, likely, it would need more capacity.
> > > >
> > > > Why do you say it's the same ?
> > >
> > > In the example of a task that runs 39ms in a period of 80ms, which we used
> > > during the previous version, the utilization on the big core will reach 709,
> > > and so will util_est. When the task migrates to the little core (512),
> > > util_est is higher than the current cpu capacity
> >
> > Right, and what's the problem ?
> 
> you worry about util_est being higher than the capacity, which is the case there

I worry about util_est being higher than the capacity of the CPU the task
WAS running on... not the capacity of the CPU the task IS running on... if
that value does not correspond to what the task really needs... (more on
that at the end).

> > 1) We know that PELT is calibrated around a 32ms half-life and, in your
> >    example, since the runtime is higher than the half-life, it's
> >    correct to estimate a utilization higher than 50%.
> >
> >    PELT utilization is defined _based on the half-life_: thus
> >    your task having a 50% duty cycle does not mean we are wrong
> >    when we report a utilization != 50%.
> >    That would be as broken as expecting a 10% utilization to be
> >    reported for a task running 100ms every 1s.
> >
> > 2) If it was a 70% task on a previous activation, once it's moved into
> >    a lower capacity CPU it's still correct to assume that it's likely
> >    going to require the same bandwidth and thus will be
> >    under-provisioned.
> >
> > I still don't see where we are wrong in this case :/
> >
> > To me it looks different than the problem I described.
> >
> > > > With your new signal instead, once we cross the current capacity,
> > > > utilization is simply not a utilization anymore. Thus, IMHO it makes sense
> > > > to avoid accumulating a sample for what we call "estimated utilization".
> 
> This is not true. With the example above, the util_est will be exactly the same
> on big and little cores with the new signal

... AFAIU only if we have idle time...
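To put some numbers on that, sticking with the 39ms/80ms example and
assuming the new scaling simply compresses wall-clock running time by
capacity/1024 before feeding it to PELT:

  required capacity      ~= 39/80 * 1024 ~= 499
  runtime on a 512 CPU   ~= 39ms * 1024/512 = 78ms (out of an 80ms period)

so there is (just barely) idle time left and the two CPUs can indeed
converge to the same value. Drop the capacity below ~499, or add a second
task, and the idle time is gone: that is the case I'm worried about below.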

> > > > I would also say that, with the current implementation which caps
> > > > utilization to the current capacity, we get better estimation in
> > > > general. At least we can say with absolute precision:
> > > >
> > > >    "the task needs _at least_ that amount of capacity".
> > > >
> > > > Potentially we can also flag the task as being under-provisioned, in
> > > > case there was not idle time, and _let a policy_ decide what to do
> > > > with it and the granted information we have.
> > > >
> > > > While, with your new signal, once we are over the current capacity,
> > > > the "utilization" is just a sort of "random" number, at best useful to
> > > > draw some conclusions about how long the task has been delayed.
> 
> see my comment above
> 
> > > >
> > > > IOW, I fear that we are embedding a policy within a signal which
> > > > currently represents something very well defined: how much cpu
> > > > bandwidth a task used. Latency/under-provisioning policies should
> > > > perhaps be placed somewhere else.
> > > >
> > > > Perhaps I've missed it in some of the previous discussions:
> > > > have we considered/discussed this signal-vs-policy aspect ?
> >
> > What's your opinion on the above instead ?
> 
> It's not a policy but it gives better knowledge about the amount of work done.
> I have put below the discussion on the subject from the previous version.

Thanks, I think I've skimmed through it, but it's still useful...

> > > With contribution scaling the PELT utilization of a task is a _minimum_
> > > utilization. Regardless of where the task is currently/was running (and
> > > provided that it doesn't change behaviour) its PELT utilization will
> > > approximate its _minimum_ utilization on an idle 1024 capacity CPU.
> >
> > The main drawback is that the _minimum_ utilization depends on the CPU
> > capacity on which the task runs. The two 25% tasks on a 256 capacity
> > CPU will have a utilization of 128, as an example.
> >
> > >
> > > With time scaling the PELT utilization doesn't really have a meaning on
> > > its own. It has to be compared to the capacity of the CPU where it
> > > is/was running to know what its current PELT utilization means. When
> >
> > I would have said the opposite. The utilization of the task will
> > always reflect the same amount of work that has already been done,
> > whatever the CPU capacity.
> > In fact, the new scaling mechanism uses the real amount of work that
> > has already been done to compute the utilization signal, which is not
> > the case currently. This gives more information about the real amount
> > of work that has been done in the over-utilization case.
> >
> > > the utilization over-shoots the capacity, its value no longer
> > > represents utilization; it just means that the task has a higher compute
> > > demand than is offered on its current CPU, and a higher value means that it
> > > has been suffering longer. It can't be used to predict the actual
> > > utilization on an idle 1024 capacity CPU any better than contribution scaled
> > > PELT utilization.
> >
> > I think that it provides earlier detection of over-utilization and a
> > more accurate signal for a longer duration, which can help the
> > load balance.
> > Coming back to the 50% task example, I will use a 50ms running time
> > during a 100ms period for the example below to make it easier.
> >
> > Starting from 0, the evolution of the utilization is:
> >
> > With contribution scaling:
> >          time  0ms  50ms  100ms  150ms  200ms
> > capacity
> > 1024           0    666
> > 512            0    333   453
> > When the CPU starts to be over-utilized (@100ms), the utilization is
> > already too low (453 instead of 666) and the scheduler doesn't yet detect
> > that we are over-utilized.
> > 256            0    169   226    246    252
> > That's even worse with this lower capacity
> >
> > With time scaling,
> >          time  0ms  50ms  100ms  150ms  200ms
> > capacity
> > 1024           0    666
> > 512            0    428   677
> > We know that the current capacity is not enough and the utilization
> > reflects the correct utilization level compared to 1024 capacity (the
> > 666 vs 677 difference comes from the 1024us window: the last window
> > is not full in the case of max capacity).
> > 256            0    234   468    564    677
> > At 100ms, we know that there is not enough capacity (in fact we already
> > know that at 56ms). And even at time 200ms, the amount of work is exactly
> > what would have been executed on a CPU 4x faster.
> >
> > >
> > > This change might not be a showstopper, but it is something to be aware
> > > of and to take into account wherever PELT utilization is used.
> >
> > The point above is clearly a big difference between the 2 approaches
> > in the no-spare-cycles case, but I think it will help by giving more
> > information in the over-utilization case.

I like the idea that we ramp up faster and always get to the same
value. I also like the idea that we always reach the same value on
both LITTLE and big.
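To convince myself of the shape of the two tables quoted above I hacked up
a crude per-millisecond model (definitely NOT the kernel code: it ignores
the 1024us accumulation windows, frequency invariance and the way the
unaccounted running time is given back at idle, so the exact figures
differ a bit from the ones quoted above):

#include <stdio.h>
#include <math.h>

#define MAX_CAP 1024.0

/* 50% duty-cycle task: 50ms of work (at max capacity) every 100ms */
static void simulate(const char *scaling, double cap, int time_scaled)
{
        const double y = pow(0.5, 1.0 / 32.0); /* per-ms decay, 32ms half-life */
        double util = 0.0;
        double need = 50.0 * MAX_CAP / cap;    /* runtime needed at this capacity */

        if (need > 100.0)
                need = 100.0;                  /* over-utilized: runs all the time */

        printf("%-12s cap %4.0f:", scaling, cap);
        for (int ms = 1; ms <= 200; ms++) {
                int running = ((ms - 1) % 100) < need;

                if (!running) {
                        util *= y;             /* sleeping: plain decay */
                } else if (time_scaled) {
                        /* time scaling: 1ms of wall-clock advances PELT
                         * time by cap/1024 ms, at full contribution */
                        double d = pow(y, cap / MAX_CAP);
                        util = util * d + MAX_CAP * (1.0 - d);
                } else {
                        /* contribution scaling: contribution capped at cap */
                        util = util * y + cap * (1.0 - y);
                }
                if (ms % 50 == 0)
                        printf("  %3dms:%4.0f", ms, util);
        }
        printf("\n");
}

int main(void)
{
        const double caps[] = { 1024, 512, 256 };

        for (int i = 0; i < 3; i++)
                simulate("contribution", caps[i], 0);
        for (int i = 0; i < 3; i++)
                simulate("time", caps[i], 1);
        return 0;
}

It reproduces the trend above: with contribution scaling the signal
saturates towards the CPU capacity, while with (this simplified) time
scaling it keeps ramping towards the 1024-relative demand.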

As long as there is idle time this works fine, and in these cases we
should probably also collect util_est samples.

But what happens when we don't have idle time ?

Let's say we have two 15% tasks, co-scheduled on a CPU with <300 capacity.

Aren't these two tasks going to be reported as 50% tasks (after a while)?

If that's the case, these are samples we should not store...
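Back-of-the-envelope, and assuming the scaled PELT clock of the rq is what
both tasks see while the CPU stays busy:

  per-task demand    ~= 0.15 * 1024 ~= 154
  combined demand    ~= 307  >  300 (the CPU capacity)

so the CPU never goes idle and the unaccounted time is never given back.
Both the running and the waiting time of each task are then measured
against the same slowed-down clock, so each task sees itself running ~50%
of that time and ramps towards ~512, which is where the figure in my
question above comes from.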

-- 
#include <best/regards.h>

Patrick Bellasi
