From: Patrick Bellasi <patrick.bellasi@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
linux-api@vger.kernel.org
Cc: Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>, Tejun Heo <tj@kernel.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Viresh Kumar <viresh.kumar@linaro.org>,
Paul Turner <pjt@google.com>,
Quentin Perret <quentin.perret@arm.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Morten Rasmussen <morten.rasmussen@arm.com>,
Juri Lelli <juri.lelli@redhat.com>, Todd Kjos <tkjos@google.com>,
Joel Fernandes <joelaf@google.com>,
Steve Muckle <smuckle@google.com>,
Suren Baghdasaryan <surenb@google.com>
Subject: Re: [PATCH v8 02/16] sched/core: Add bucket local max tracking
Date: Mon, 15 Apr 2019 15:51:07 +0100 [thread overview]
Message-ID: <20190415144930.pntid6evu6r67l4o@e110439-lin> (raw)
In-Reply-To: <20190402104153.25404-3-patrick.bellasi@arm.com>
On 02-Apr 11:41, Patrick Bellasi wrote:
> Because of bucketization, different task-specific clamp values are
> tracked in the same bucket. For example, with 20% bucket size and
> assuming to have:
> Task1: util_min=25%
> Task2: util_min=35%
> both tasks will be refcounted in the [20..39]% bucket and always boosted
> only up to 20% thus implementing a simple floor aggregation normally
> used in histograms.
>
> In systems with only a few, well-defined clamp values, it would be
> useful to track the exact clamp value required by a task whenever
> possible. For example, if a system requires only 23% and 47% boost
> values then it's possible to track the exact boost required by each
> task using only 3 buckets of ~33% size each.
>
> Introduce a mechanism to max aggregate the requested clamp values of
> RUNNABLE tasks in the same bucket. Keep it simple by resetting the
> bucket value to its base value only when a bucket becomes inactive.
> Allow a limited and controlled overboosting margin for tasks refcounted
> in the same bucket.
>
> In systems where the boost values are not known in advance, it is still
> possible to control the maximum acceptable overboosting margin by tuning
> the number of clamp groups. For example, 20 groups ensure a 5% maximum
> overboost.
>
> Remove the rq bucket initialization code since a correct bucket value
> is now computed when a task is refcounted into a CPU's rq.
>
> Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
>
> --
> Changes in v8:
> Message-ID: <20190313193916.GQ2482@worktop.programming.kicks-ass.net>
> - split this code out from the previous patch
> ---
> kernel/sched/core.c | 46 ++++++++++++++++++++++++++-------------------
> 1 file changed, 27 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 032211b72110..6e1beae5f348 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -778,6 +778,11 @@ unsigned int uclamp_rq_max_value(struct rq *rq, unsigned int clamp_id)
> * When a task is enqueued on a rq, the clamp bucket currently defined by the
> * task's uclamp::bucket_id is refcounted on that rq. This also immediately
> * updates the rq's clamp value if required.
> + *
> + * Tasks can have a task-specific clamp value requested from user-space,
> + * so track within each bucket the maximum value of the tasks refcounted
> + * in it. This "local max aggregation" allows tracking the exact
> + * "requested" value for each bucket whenever all its RUNNABLE tasks
> + * require the same clamp.
> */
> static inline void uclamp_rq_inc_id(struct task_struct *p, struct rq *rq,
> unsigned int clamp_id)
> @@ -789,8 +794,15 @@ static inline void uclamp_rq_inc_id(struct task_struct *p, struct rq *rq,
> bucket = &uc_rq->bucket[uc_se->bucket_id];
> bucket->tasks++;
>
> + /*
> + * Local max aggregation: rq buckets always track the max
> + * "requested" clamp value of its RUNNABLE tasks.
> + */
> + if (uc_se->value > bucket->value)
> + bucket->value = uc_se->value;
> +
> if (uc_se->value > READ_ONCE(uc_rq->value))
> - WRITE_ONCE(uc_rq->value, bucket->value);
> + WRITE_ONCE(uc_rq->value, uc_se->value);
> }
>
> /*
> @@ -815,6 +827,12 @@ static inline void uclamp_rq_dec_id(struct task_struct *p, struct rq *rq,
> if (likely(bucket->tasks))
> bucket->tasks--;
>
> + /*
> + * Keep "local max aggregation" simple and accept to (possibly)
> + * overboost some RUNNABLE tasks in the same bucket.
> + * The rq clamp bucket value is reset to its base value whenever
> + * there are no more RUNNABLE tasks refcounting it.
> + */
> if (likely(bucket->tasks))
> return;
>
> @@ -824,8 +842,14 @@ static inline void uclamp_rq_dec_id(struct task_struct *p, struct rq *rq,
> * e.g. due to future modification, warn and fixup the expected value.
> */
> SCHED_WARN_ON(bucket->value > rq_clamp);
> - if (bucket->value >= rq_clamp)
> + if (bucket->value >= rq_clamp) {
> + /*
> + * Reset clamp bucket value to its nominal value whenever
> + * there are no more RUNNABLE tasks refcounting it.
> + */
> + bucket->value = uclamp_bucket_base_value(bucket->value);
While running tests on Android, I found that the snippet above would be
better placed in uclamp_rq_inc_id(), for two main reasons:
 1. because of the early return in this function, the reset is skipped
    whenever the dequeued task is not the last one refcounting the
    bucket, which can later trigger the SCHED_WARN_ON above
 2. since an inactive bucket is already ignored, we don't care about
    resetting its "local max value"
I will move the "local max update" into uclamp_rq_inc_id(), where
something like:
/*
* Local max aggregation: rq buckets always track the max
* "requested" clamp value of its RUNNABLE tasks.
*/
if (bucket->tasks == 1 || uc_se->value > bucket->value)
bucket->value = uc_se->value;
should guarantee that the local max is always up to date whenever a
bucket is active.
> WRITE_ONCE(uc_rq->value, uclamp_rq_max_value(rq, clamp_id));
> + }
> }
>
> static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
> @@ -855,25 +879,9 @@ static void __init init_uclamp(void)
> unsigned int clamp_id;
> int cpu;
>
> - for_each_possible_cpu(cpu) {
> - struct uclamp_bucket *bucket;
> - struct uclamp_rq *uc_rq;
> - unsigned int bucket_id;
> -
> + for_each_possible_cpu(cpu)
> memset(&cpu_rq(cpu)->uclamp, 0, sizeof(struct uclamp_rq));
>
> - for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) {
> - uc_rq = &cpu_rq(cpu)->uclamp[clamp_id];
> -
> - bucket_id = 1;
> - while (bucket_id < UCLAMP_BUCKETS) {
> - bucket = &uc_rq->bucket[bucket_id];
> - bucket->value = bucket_id * UCLAMP_BUCKET_DELTA;
> - ++bucket_id;
> - }
> - }
> - }
> -
> for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) {
> struct uclamp_se *uc_se = &init_task.uclamp[clamp_id];
>
> --
> 2.20.1
>
--
#include <best/regards.h>
Patrick Bellasi