From: Patrick Bellasi <email@example.com>
To: Tejun Heo <firstname.lastname@example.org>
Cc: email@example.com, firstname.lastname@example.org,
Ingo Molnar <email@example.com>,
Peter Zijlstra <firstname.lastname@example.org>,
"Rafael J . Wysocki" <email@example.com>,
Viresh Kumar <firstname.lastname@example.org>,
Vincent Guittot <email@example.com>,
Paul Turner <firstname.lastname@example.org>,
Dietmar Eggemann <email@example.com>,
Morten Rasmussen <firstname.lastname@example.org>,
Juri Lelli <email@example.com>, Todd Kjos <firstname.lastname@example.org>,
Joel Fernandes <email@example.com>,
Steve Muckle <firstname.lastname@example.org>,
Suren Baghdasaryan <email@example.com>
Subject: Re: [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller
Date: Mon, 23 Jul 2018 18:22:15 +0100 [thread overview]
Message-ID: <20180723172215.GG2683@e110439-lin> (raw)
On 23-Jul 08:30, Tejun Heo wrote:
> On Mon, Jul 16, 2018 at 09:29:02AM +0100, Patrick Bellasi wrote:
> > The cgroup's CPU controller allows assigning a specified (maximum)
> > bandwidth to the tasks of a group. However, this bandwidth is defined and
> > enforced only on a temporal basis, without considering the actual
> > frequency a CPU is running at. Thus, the amount of computation completed
> > by a task within an allocated bandwidth can be very different depending
> > on the actual frequency of the CPU running that task.
> > The amount of computation can also be affected by the specific CPU a
> > task is running on, especially on asymmetric capacity
> > systems like Arm's big.LITTLE.
> One basic problem I have with this patchset is that what's being
> described is way more generic than what actually got implemented.
> What's described is computation bandwidth control but what's
> implemented is just frequency clamping.
What I meant to describe is that we already have a computation
bandwidth control mechanism, which works quite well for the
scheduling classes it applies to, i.e. CFS and RT.
For these classes we are usually happy with just a _best effort_
allocation of the bandwidth: nothing is enforced in strict terms.
Indeed, there is no tracking (at least not in kernel space) of the
actual available and allocated bandwidth. If we need strict
enforcement, we already have DL with its CBS servers.
However, the "best effort" bandwidth control we have for CFS and RT
can be further improved if, instead of just looking at the time spent
on CPUs, we give the scheduler some additional hints about the
min/max "MIPS" at which we want to consume the (best effort) time we
have been allocated on a CPU.
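The hint involved can be sketched as a simple clamping of the
utilization signal before it is fed to a schedutil-like frequency
selection, along these lines (a toy model with illustrative names and
numbers, not the actual kernel code):

```python
# Toy model of the utilization clamping idea (illustrative only).
# A task's util_min/util_max clamp the CPU utilization signal that a
# schedutil-like governor would use to pick a frequency.

def clamp_util(util, util_min, util_max):
    """Clamp a utilization value into the [util_min, util_max] range."""
    return max(util_min, min(util, util_max))

def pick_freq(util, max_freq, capacity=1024):
    """Map a (clamped) utilization to a frequency, schedutil-style:
    freq ~ 1.25 * max_freq * util / capacity."""
    return min(max_freq, int(1.25 * max_freq * util / capacity))

# A lightly loaded task (util=128) boosted by util_min=512 runs faster:
boosted = pick_freq(clamp_util(128, 512, 1024), max_freq=2_000_000)
# A heavy background task (util=900) capped by util_max=256 runs slower:
capped = pick_freq(clamp_util(900, 0, 256), max_freq=2_000_000)
```

Note the hint changes only the frequency (the "MIPS") at which already
allocated time is consumed, not how much time is allocated.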
Such a simple extension is still quite useful to satisfy many
use-cases we have, mainly on mobile systems, like the ones I've
described in the "Newcomer's Short Abstract (Updated)"
section of the cover letter:
> So, there are fundamental discrepancies between
> description+interface vs. what it actually does.
Perhaps then I should just change the description to make it less generic...
> I really don't think that's something we can fix up later.
... since, really, I don't think we can later extend this interface
to provide the strict bandwidth enforcement you are thinking about.
That would not be a fixup, but something really close to
re-implementing what we already have with the DL class.
> > These attributes:
> > a) are available only for non-root nodes, both on default and legacy
> > hierarchies
> > b) do not enforce any constraints and/or dependency between the parent
> > and its child nodes, thus relying on the delegation model and
> > permission settings defined by the system management software
> cgroup does host attributes which only concern the cgroup itself and
> thus don't need any hierarchical behaviors on their own, but what's
> being implemented does control resource allocation,
I'm not completely sure I get your point here.
Maybe it all depends on what we mean by "control resource allocation".
AFAIU, currently both the CFS and RT bandwidth controllers allow you
to define how much CPU time a group of tasks can use. They do that by
looking just within the group: there is no enforced/required relation
between the bandwidth assigned to a group and the bandwidth assigned
to its parent, siblings and/or children.
Resource allocation control is eventually enforced "indirectly" by
the fact that, based on task priorities and cgroup shares, the
scheduler will prefer to pick and run certain tasks "more frequently"
and for "longer" than others.
Thus I would say that resource allocation control is already
performed by the combined action of:
A) priorities / shares, to favor certain tasks over others
B) period & bandwidth, to further bias the scheduler toward _not_
selecting tasks which have already executed for their configured
amount of time.
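The combined action of A and B can be sketched with a toy model
(purely illustrative, nothing like the actual CFS implementation):
time is handed out in slices proportionally to shares, but a task
stops being picked once it has consumed its quota for the period:

```python
# Toy model (illustrative only) of shares + bandwidth combining:
# within each period, runnable tasks receive CPU time in proportion
# to their shares, and a task whose quota is exhausted is no longer
# picked for the rest of the period.

def allocate(period_us, tasks):
    """tasks: {name: (shares, quota_us or None)};
    returns the CPU time granted to each task within one period."""
    granted = {name: 0 for name in tasks}
    slice_us = 100
    for _ in range(period_us // slice_us):
        # Only tasks still under their quota compete for this slice.
        runnable = {n: s for n, (s, q) in tasks.items()
                    if q is None or granted[n] < q}
        if not runnable:
            break
        total_shares = sum(runnable.values())
        for n, s in runnable.items():
            granted[n] += slice_us * s / total_shares
    return granted

# Equal shares, but B is capped at 2ms per 10ms period: B gets its
# quota, then A absorbs the remaining time.
grants = allocate(10_000, {"A": (1024, None), "B": (1024, 2_000)})
```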
> and what you're describing inherently breaks the delegation model.
What I describe here is just an additional hint to the scheduler
which enriches the model described above. Provided A and B are
already satisfied, when a task gets a chance to run it will be
executed at a min/max configured frequency. That's really all...
there is no additional impact on "resource allocation".
I don't see why you say this breaks the delegation model.
Maybe an example can help to better explain what you mean?