From: Vincent Guittot <vincent.guittot@linaro.org>
To: Odin Ugedal <odin@uged.al>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Sachin Sant <sachinp@linux.vnet.ibm.com>,
	Naresh Kamboju <naresh.kamboju@linaro.org>,
	Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] sched/fair: Ensure _sum and _avg values stay consistent
Date: Thu, 24 Jun 2021 14:16:43 +0200	[thread overview]
Message-ID: <CAKfTPtCiq6jMZwp-9s8wN80rg57fphm5PxH_519XjfAt213tGg@mail.gmail.com> (raw)
In-Reply-To: <20210624111815.57937-1-odin@uged.al>

On Thu, 24 Jun 2021 at 13:21, Odin Ugedal <odin@uged.al> wrote:
>
> The _sum and _avg values are in general in sync with the PELT
> divider. They are, however, not always in perfect sync, resulting in
> situations where _sum gets to zero while _avg stays positive. Such
> situations are undesirable.
>
> This comes from the fact that PELT will increase period_contrib, also
> increasing the PELT divider, without updating _sum and _avg values to
> stay in perfect sync where (_sum == _avg * divider). However, such PELT

_sum is always synced and updated with the PELT contribution, whereas
_avg is only updated when crossing the 1024us period boundary. The
problem here is that the contribution to _sum can be null (task not
running, or sleeping) whereas the formula "_avg * divider" assumes that
all contributions in the current period are not null. So
"_avg * divider" overestimates _sum.
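
To make that concrete, here is a tiny standalone sketch (not kernel
code) of the arithmetic; LOAD_AVG_MAX = 47742 is the PELT series
maximum, and the period_contrib and util_avg values are made up purely
for illustration:

    #include <stdio.h>

    #define LOAD_AVG_MAX 47742  /* max of the PELT geometric series */

    int main(void)
    {
            unsigned int period_contrib = 300;  /* hypothetical, always < 1024 */
            unsigned long divider = LOAD_AVG_MAX - 1024 + period_contrib;

            /*
             * _avg only moves at the 1024us period boundary, while _sum
             * also tracks the (possibly null) contribution of the
             * still-open period. If the entity was not running during
             * that open period, _sum gained nothing for it, but
             * "_avg * divider" charges that period as if it had
             * contributed.
             */
            unsigned long util_avg = 100;
            unsigned long util_sum = util_avg * (LOAD_AVG_MAX - 1024);

            printf("util_sum           = %lu\n", util_sum);             /* 4671800 */
            printf("util_avg * divider = %lu\n", util_avg * divider);   /* 4701800 */
            return 0;
    }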

Another solution would be to underestimate _sum and use
"_avg * (LOAD_AVG_MAX - 1024)" when subtracting from _sum, while keeping
"LOAD_AVG_MAX - 1024 + avg->period_contrib" when adding to _sum. Note
that this doesn't make any real difference in the end for the patch
below, because we don't save any multiplication operation anyway.
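
For concreteness, that alternative would roughly amount to the following
on the removed-util hunk of the patch below (an illustrative sketch
only, not a tested change):

            r = removed_util;
            sub_positive(&sa->util_avg, r);
            /* underestimate: drop period_contrib from the divider */
            sa->util_sum = sa->util_avg * (LOAD_AVG_MAX - 1024);

The attach/add paths would keep using the full
"LOAD_AVG_MAX - 1024 + avg->period_contrib" divider, and either way one
multiplication per signal remains.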

So, after updating the commit message:

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> change will never lower _sum, making it impossible to end up in a
> situation where _sum is zero and _avg is not.
>
> Therefore, we need to ensure that, when subtracting load outside PELT,
> _avg is also set to zero when _sum is zero. This occurs when
> (_sum < _avg * divider) and the subtracted (_avg * divider) is bigger
> than or equal to the current _sum, while the subtracted _avg is smaller
> than the current _avg.
>
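To illustrate that condition with made-up numbers (sub_positive() clamps
at zero), take util_avg = 10, util_sum = 423000 and divider = 47018, and
remove r = 9 with the old code:

            sub_positive(&sa->util_avg, r);            /* util_avg = 10 - 9 = 1 */
            sub_positive(&sa->util_sum, r * divider);  /* 9 * 47018 = 423162 >= 423000, so util_sum = 0 */

_sum ends up at zero while _avg is still positive, which is exactly the
inconsistency the patch removes.
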
> Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
> Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> Signed-off-by: Odin Ugedal <odin@uged.al>
> ---
>
> Reports and discussion can be found here:
>
> https://lore.kernel.org/lkml/2ED1BDF5-BC0C-47CD-8F33-9A46C738F8CF@linux.vnet.ibm.com/
> https://lore.kernel.org/lkml/CA+G9fYsMXELmjGUzw4SY1bghTYz_PeR2diM6dRp2J37bBZzMSA@mail.gmail.com/
>
>  kernel/sched/fair.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bfaa6e1f6067..def48bc2e90b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3688,15 +3688,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>
>                 r = removed_load;
>                 sub_positive(&sa->load_avg, r);
> -               sub_positive(&sa->load_sum, r * divider);
> +               sa->load_sum = sa->load_avg * divider;
>
>                 r = removed_util;
>                 sub_positive(&sa->util_avg, r);
> -               sub_positive(&sa->util_sum, r * divider);
> +               sa->util_sum = sa->util_avg * divider;
>
>                 r = removed_runnable;
>                 sub_positive(&sa->runnable_avg, r);
> -               sub_positive(&sa->runnable_sum, r * divider);
> +               sa->runnable_sum = sa->runnable_avg * divider;
>
>                 /*
>                  * removed_runnable is the unweighted version of removed_load so we
> --
> 2.32.0
>

Thread overview: 10+ messages
2021-06-24 11:18 [PATCH] sched/fair: Ensure _sum and _avg values stay consistent Odin Ugedal
2021-06-24 12:16 ` Vincent Guittot [this message]
2021-06-24 13:48 ` Sachin Sant
2021-06-28 13:58 ` [tip: sched/core] " tip-bot2 for Odin Ugedal
2021-07-01  9:47 ` [PATCH] " Sachin Sant
2021-07-01 10:34   ` Mel Gorman
2021-07-01 11:09     ` Odin Ugedal
2021-07-01 11:17       ` Vincent Guittot
2021-07-01 11:45       ` Sachin Sant
2021-07-01 17:21         ` Vincent Guittot
