From: Luca Abeni <luca.abeni@unitn.it>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Steve Muckle <steve.muckle@linaro.org>,
	Ingo Molnar <mingo@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Juri Lelli <Juri.Lelli@arm.com>,
	Patrick Bellasi <patrick.bellasi@arm.com>,
	Michael Turquette <mturquette@baylibre.com>
Subject: Re: [RFCv6 PATCH 09/10] sched: deadline: use deadline bandwidth in scale_rt_capacity
Date: Tue, 15 Dec 2015 09:50:14 +0100	[thread overview]
Message-ID: <566FD446.1080004@unitn.it> (raw)
In-Reply-To: <CAKfTPtDT----aVGF-3T90PZQQvsOJay_Y-js19W+NNZpvk7J_w@mail.gmail.com>

On 12/15/2015 05:59 AM, Vincent Guittot wrote:
[...]
>>>> So I don't think this is right. AFAICT this projects the WCET as the
>>>> amount of time actually used by DL. This will, under many
>>>> circumstances, vastly overestimate the amount of time actually
>>>> spent on it, and therefore unduly pessimise the fair capacity of this
>>>> CPU.
>>>
>>> I agree that if the WCET is far from reality, we will underestimate the
>>> available capacity for CFS. Have you got some use case in mind which
>>> overestimates the WCET?
>>> If we can't rely on this parameter to evaluate the amount of capacity
>>> used by the deadline scheduler on a core, this will imply that we can't
>>> use it for requesting capacity from cpufreq either, and we should fall
>>> back on a monitoring mechanism which reacts to a change instead of
>>> anticipating it.
>> I think a more "theoretically sound" approach would be to track the
>> _active_ utilisation (informally speaking, the sum of the utilisations
>> of the tasks that are actually active on a core - the exact definition
>> of "active" is the trick here).
>
> The point is that we probably need 2 definitions of "active" tasks.
Ok; thanks for clarifying. I do not know much about the remaining capacity
used by CFS; however, from what you write I guess CFS really needs an "average"
utilisation (while frequency scaling needs the active utilisation).
So, I suspect you really need to track 2 different things.
From a quick look at the code that is currently in mainline, it seems to
me that it does a reasonable thing for tracking the remaining capacity
used by CFS...

> The 1st one would be used to scale the frequency. From a power saving
> point of view, it has to reflect the minimum frequency needed at the
> current time to handle all the work without missing deadlines.
Right. And it can be computed as shown in the GRUB-PA paper I mentioned
in a previous mail (that is, by tracking the active utilisation, as done
by my patches).
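Just to make the intuition explicit (this is only a rough sketch, not the
exact formulation from the paper; the helper name and the fixed-point scale
are invented for illustration): if the active utilisation is the fraction of
CPU time needed by the currently active deadline tasks, then any frequency
not smaller than U_active * f_max is enough to serve them:

/* u_active is the active utilisation in fixed point, with 1024
 * representing 100% of the CPU at the maximum frequency f_max */
static unsigned long dl_min_freq(unsigned long u_active, unsigned long f_max)
{
	/* f_min = U_active * f_max, rounded up to avoid deadline misses */
	return (u_active * f_max + 1023) / 1024;
}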

> This one
> should be updated quite often, at the wake-up and sleep of tasks as
> well as at throttling.
Strictly speaking, the active utilisation must be updated when a task
wakes up and when a task sleeps/terminates (but when a task sleeps/terminates
you cannot decrease the active utilisation immediately: you have to wait
some time because the task might already have used part of its "future
utilisation").
The active utilisation must not be updated when a task is throttled: a
task is throttled when its current runtime is 0, so it has already used all
of its utilisation for the current period (think about two tasks with
runtime=50ms and period=100ms: they consume 100% of the time on a CPU,
and when the first task has consumed all of its runtime, you cannot decrease
the active utilisation).
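In pseudo-C, the bookkeeping I have in mind looks roughly like this (all
names are made up for illustration; this is not the code from my patches,
just the rule stated above):

struct toy_dl_rq {
	unsigned long active_bw;	/* sum of bandwidths of active tasks */
};

struct toy_dl_task {
	unsigned long dl_bw;		/* runtime / period, fixed point */
	int release_pending;		/* deferred decrease armed? */
};

/* wakeup: the task's bandwidth becomes active immediately */
static void toy_task_wakeup(struct toy_dl_rq *rq, struct toy_dl_task *t)
{
	if (t->release_pending) {
		/* woke up before the deferred release fired: the bandwidth
		 * is still accounted, just cancel the pending release */
		t->release_pending = 0;
		return;
	}
	rq->active_bw += t->dl_bw;
}

/*
 * Sleep/termination: do NOT subtract the bandwidth yet, because the task
 * may already have consumed part of its "future utilisation"; just mark
 * a deferred release (in real code, arm a timer for that moment).
 */
static void toy_task_sleep(struct toy_dl_rq *rq, struct toy_dl_task *t)
{
	t->release_pending = 1;
}

/* called "some time later", when the deferral timer fires */
static void toy_release_bw(struct toy_dl_rq *rq, struct toy_dl_task *t)
{
	if (t->release_pending) {
		rq->active_bw -= t->dl_bw;
		t->release_pending = 0;
	}
}

/* throttling: nothing to do, active_bw is deliberately left unchanged */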

> The 2nd definition is used to compute the remaining capacity for the
> CFS scheduler. This one doesn't need to be updated at each wake/sleep
> of a deadline task but should reflect the capacity used by deadline
> tasks on a larger time scale. The latter will be used by the CFS
> scheduler at the periodic load-balance pace.
Ok, so as I wrote above, this really looks like an average utilisation.
My impression (but I do not know the CFS code too well) is that the mainline
kernel is currently doing the right thing to compute it, so maybe there is no
need to change the current code in this regard.
If the current code is not acceptable for some reason, an alternative would
be to measure the active utilisation for frequency scaling, and then apply a
low-pass filter to it for CFS.
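For example, a trivial geometric (first-order) filter sampled at some fixed
pace, such as the load-balance tick, would be enough. This is only an
illustrative sketch; the names and the decay factor (1/8 per sample) are
made up:

/* filtered value that CFS would see as "capacity used by deadline" */
static unsigned long dl_avg_bw;

/* called periodically with the current active utilisation as input */
static void dl_update_avg_bw(unsigned long active_bw)
{
	/* new = old - old/8 + sample/8 */
	dl_avg_bw = dl_avg_bw - (dl_avg_bw >> 3) + (active_bw >> 3);
}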


				Luca

>
>> As done, for example, here:
>> https://github.com/lucabe72/linux-reclaiming/tree/track-utilisation-v2
>> (in particular, see
>> https://github.com/lucabe72/linux-reclaiming/commit/49fc786a1c453148625f064fa38ea538470df55b
>> )
>> I understand this approach might look too complex... But I think it is
>> much less pessimistic while still being "safe".
>> If there is something that I can do to make that code more acceptable,
>> let me know.
>>
>>
>>                          Luca


