From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Vincent Guittot <vincent.guittot@linaro.org>,
	mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, linux-kernel@vger.kernel.org,
	rickyiu@google.com, odin@uged.al
Cc: sachinp@linux.vnet.ibm.com, naresh.kamboju@linaro.org
Subject: Re: [PATCH v2 3/3] sched/pelt: Don't sync hardly load_sum with load_avg
Date: Tue, 4 Jan 2022 12:47:21 +0100	[thread overview]
Message-ID: <06bef8b2-aba6-5dd3-ddb0-ffcdc4f2a689@arm.com> (raw)
In-Reply-To: <20211222093802.22357-4-vincent.guittot@linaro.org>

On 22/12/2021 10:38, Vincent Guittot wrote:
> Similarly to util_avg and util_sum, don't sync load_sum with the low
> bound of load_avg but only ensure that load_sum stays in the correct range.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 41 +++++++++++++++++++++++++----------------
>  1 file changed, 25 insertions(+), 16 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b4c350715c16..488213d98770 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3025,12 +3025,17 @@ enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  	cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;
>  }
>  
> +#define MIN_DIVIDER (LOAD_AVG_MAX - 1024)
> +
>  static inline void
>  dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
> -	u32 divider = get_pelt_divider(&se->avg);
>  	sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
> -	cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider;
> +	sub_positive(&cfs_rq->avg.load_sum, se_weight(se) * se->avg.load_sum);
> +	/* See update_tg_cfs_util() */
> +	cfs_rq->avg.load_sum = max_t(u32, cfs_rq->avg.load_sum,
> +					  cfs_rq->avg.load_avg * MIN_DIVIDER);
> +

Maybe add a:

Fixes: ceb6ba45dc80 ("sched/fair: Sync load_sum with load_avg after dequeue")

[...]

>  static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum)
> @@ -3699,7 +3706,9 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>  
>  		r = removed_load;
>  		sub_positive(&sa->load_avg, r);
> -		sa->load_sum = sa->load_avg * divider;
> +		sub_positive(&sa->load_sum, r * divider);
> +		/* See update_tg_cfs_util() */
> +		sa->load_sum = max_t(u32, sa->load_sum, sa->load_avg * MIN_DIVIDER);

Since this max_t() clamp is used 9 times in this patchset, maybe a macro
in pelt.h would be better:

+/*
+ * Because of rounding, se->util_sum might end up being +1 more than
+ * cfs->util_sum (XXX fix the rounding). Although this is not a
+ * problem by itself, detaching a lot of tasks with this rounding
+ * difference between two updates of util_avg (~1ms) can make
+ * cfs->util_sum become null whereas cfs->util_avg is not.
+ * Check that util_sum is still above its lower bound for the new
+ * util_avg. Given that period_contrib might have moved since the
+ * last sync, we are only sure that util_sum must be above or equal
+ * to util_avg * the minimum possible divider.
+ */
+#define MIN_DIVIDER    (LOAD_AVG_MAX - 1024)
+
+#define enforce_lower_bound_on_pelt_sum(sa, var) do {           \
+       (sa)->var##_sum = max_t(u32,                             \
+                               (sa)->var##_sum,                 \
+                               (sa)->var##_avg * MIN_DIVIDER);  \
+} while (0)

This way, the comment from update_tg_cfs_util() could move there as
well, and the other places where the clamp is used would no longer need
to point to it.
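
Just to illustrate (untested sketch, assuming the macro above lands in
pelt.h), dequeue_load_avg() from this patch could then shrink to:

static inline void
dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
	sub_positive(&cfs_rq->avg.load_sum, se_weight(se) * se->avg.load_sum);
	/* Clamp load_sum to its lower bound: load_avg * MIN_DIVIDER */
	enforce_lower_bound_on_pelt_sum(&cfs_rq->avg, load);
}

and the same pattern would apply to the util_avg and runnable_avg
variants in patches 1/3 and 2/3.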

[...]
