LKML Archive on lore.kernel.org
From: Quentin Perret <quentin.perret@arm.com>
To: Joel Fernandes <joel@joelfernandes.org>
Cc: Tejun Heo <tj@kernel.org>,
	Patrick Bellasi <patrick.bellasi@arm.com>,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Paul Turner <pjt@google.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Juri Lelli <juri.lelli@redhat.com>, Todd Kjos <tkjos@google.com>,
	Joel Fernandes <joelaf@google.com>,
	Steve Muckle <smuckle@google.com>,
	Suren Baghdasaryan <surenb@google.com>
Subject: Re: [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller
Date: Fri, 27 Jul 2018 09:09:04 +0100
Message-ID: <20180727080827.u7tseookdhsckrec@queper01-lin>
In-Reply-To: <20180727003919.GA22416@joelaf.mtv.corp.google.com>

On Thursday 26 Jul 2018 at 17:39:19 (-0700), Joel Fernandes wrote:
> On Tue, Jul 24, 2018 at 06:29:02AM -0700, Tejun Heo wrote:
> > Hello, Patrick.
> > 
> > On Mon, Jul 23, 2018 at 06:22:15PM +0100, Patrick Bellasi wrote:
> > > However, the "best effort" bandwidth control we have for CFS and RT
> > > can be further improved if, instead of just looking at the time spent
> > > on CPUs, we give the scheduler some additional hints about the min/max
> > > "MIPS" at which we want to consume the (best-effort) time we have been
> > > allocated on a CPU.
> > > 
> > > Such a simple extension is still quite useful to satisfy many use-cases
> > > we have, mainly on mobile systems, like the ones I've described in the
> > >    "Newcomer's Short Abstract (Updated)"
> > > section of the cover letter:
> > >    https://lore.kernel.org/lkml/20180716082906.6061-1-patrick.bellasi@arm.com/T/#u
> > 
> > So, that's all completely fine but then let's please not give it a
> > name which doesn't quite match what it does.  We can just call it
> > e.g. cpufreq range control.
> 
> But then what name can one give it if it does more than one thing, like
> task placement and CPU frequency control?
> 
> It doesn't make sense to name it cpufreq, IMHO. It's a clamp on the
> utilization of the task, which can be used for many purposes.

Indeed, the scheduler could use clamped utilization values in several
places. The capacity-awareness bits (mostly useful for big.LITTLE
platforms) could already use that today I guess.
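
For illustration, this is roughly what the clamping step boils down to.
This is a minimal sketch, not the uclamp code from this series, and the
function and parameter names are made up:

/*
 * Minimal sketch (not the implementation from this series): clamp a
 * task's PELT utilization into a per-task [util_min, util_max] range
 * before cpufreq or capacity-aware placement consumes it.
 */
static unsigned long clamp_task_util(unsigned long util,
				     unsigned long util_min,
				     unsigned long util_max)
{
	if (util < util_min)
		return util_min;
	if (util > util_max)
		return util_max;
	return util;
}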

And in the longer term, depending on where the EAS patches [1] end up,
utilization clamping might actually become very useful to bias task
placement decisions. EAS basically decides where to place tasks based on
their utilization, so util_clamp would make a lot of sense there IMO.
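
To make the placement angle concrete, here is a toy example of how a
clamped utilization could bias CPU selection on an asymmetric
(big.LITTLE) system. This is purely illustrative, with made-up helpers
and a simplified capacity array, not the EAS code from [1]:

#include <limits.h>

/*
 * Toy placement bias (not the EAS implementation): prefer the smallest
 * CPU whose capacity still fits the clamped utilization, so a boosted
 * task is steered towards bigger CPUs while a capped one can stay on a
 * little CPU.
 */
static int pick_fitting_cpu(unsigned long clamped_util,
			    const unsigned long *cpu_capacity,
			    int nr_cpus)
{
	unsigned long best_cap = ULONG_MAX;
	int cpu, best = -1;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		/* Skip CPUs too small for the clamped demand. */
		if (cpu_capacity[cpu] < clamped_util)
			continue;
		/* Prefer the smallest CPU that still fits. */
		if (cpu_capacity[cpu] < best_cap) {
			best_cap = cpu_capacity[cpu];
			best = cpu;
		}
	}
	return best;	/* -1 if nothing fits */
}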

Thanks,
Quentin

[1] https://lkml.org/lkml/2018/7/24/420

Thread overview: 47+ messages
2018-07-16  8:28 [PATCH v2 00/12] Add utilization clamping support Patrick Bellasi
2018-07-16  8:28 ` [PATCH v2 01/12] sched/core: uclamp: extend sched_setattr to support utilization clamping Patrick Bellasi
2018-07-17 17:50   ` Joel Fernandes
2018-07-18  8:42     ` Patrick Bellasi
2018-07-18 17:02       ` Joel Fernandes
2018-07-17 18:04   ` Joel Fernandes
2018-07-16  8:28 ` [PATCH v2 02/12] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups Patrick Bellasi
2018-07-19 23:51   ` Suren Baghdasaryan
2018-07-20 15:11     ` Patrick Bellasi
2018-07-21  0:25       ` Suren Baghdasaryan
2018-07-23 13:36         ` Patrick Bellasi
2018-07-16  8:28 ` [PATCH v2 03/12] sched/core: uclamp: add CPU's clamp groups accounting Patrick Bellasi
2018-07-20 20:25   ` Suren Baghdasaryan
2018-07-16  8:28 ` [PATCH v2 04/12] sched/core: uclamp: update CPU's refcount on clamp changes Patrick Bellasi
2018-07-16  8:28 ` [PATCH v2 05/12] sched/cpufreq: uclamp: add utilization clamping for FAIR tasks Patrick Bellasi
2018-07-16  8:29 ` [PATCH v2 06/12] sched/cpufreq: uclamp: add utilization clamping for RT tasks Patrick Bellasi
2018-07-16  8:29 ` [PATCH v2 07/12] sched/core: uclamp: enforce last task UCLAMP_MAX Patrick Bellasi
2018-07-21  1:23   ` Suren Baghdasaryan
2018-07-23 15:02     ` Patrick Bellasi
2018-07-23 16:40       ` Suren Baghdasaryan
2018-07-16  8:29 ` [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller Patrick Bellasi
2018-07-21  2:37   ` Suren Baghdasaryan
2018-07-21  3:16     ` Suren Baghdasaryan
2018-07-23 15:17     ` Patrick Bellasi
2018-07-23 15:30   ` Tejun Heo
2018-07-23 17:22     ` Patrick Bellasi
2018-07-24 13:29       ` Tejun Heo
2018-07-24 15:39         ` Patrick Bellasi
2018-07-27  0:39         ` Joel Fernandes
2018-07-27  8:09           ` Quentin Perret [this message]
2018-07-16  8:29 ` [PATCH v2 09/12] sched/core: uclamp: map TG's clamp values into CPU's clamp groups Patrick Bellasi
2018-07-16  8:29 ` [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps Patrick Bellasi
2018-07-22  3:05   ` Suren Baghdasaryan
2018-07-23 15:40     ` Patrick Bellasi
2018-07-23 17:11       ` Suren Baghdasaryan
2018-07-24  9:56         ` Patrick Bellasi
2018-07-24 15:28           ` Suren Baghdasaryan
2018-07-24 15:49             ` Patrick Bellasi
2018-07-16  8:29 ` [PATCH v2 11/12] sched/core: uclamp: update CPU's refcount on TG's clamp changes Patrick Bellasi
2018-07-22  3:17   ` Suren Baghdasaryan
2018-07-16  8:29 ` [PATCH v2 12/12] sched/core: uclamp: use percentage clamp values Patrick Bellasi
2018-07-22  4:04   ` Suren Baghdasaryan
2018-07-24 16:43     ` Patrick Bellasi
2018-07-24 17:11       ` Suren Baghdasaryan
2018-07-24 17:17         ` Patrick Bellasi
2018-07-17 13:03 ` [PATCH v2 00/12] Add utilization clamping support Joel Fernandes
2018-07-17 13:41   ` Patrick Bellasi
