bitbake-devel.lists.openembedded.org archive mirror
From: Randy MacLeod <randy.macleod@windriver.com>
To: Qi.Chen@windriver.com, bitbake-devel@lists.openembedded.org
Subject: Re: [bitbake-devel][PATCH 1/2] runqueue: fix PSI check calculation
Date: Fri, 7 Apr 2023 11:02:05 -0400	[thread overview]
Message-ID: <d5f4e8e3-53c2-ab85-be64-78252a4e356d@windriver.com> (raw)
In-Reply-To: <20230407030715.4394-1-Qi.Chen@windriver.com>


On 2023-04-06 23:07, Chen Qi via lists.openembedded.org wrote:
> The current PSI check calculation does not take into account the
> possibility that the time interval between the last check and the
> current check is much larger than 1s. As a result, the current
> behavior does not match what the manual says about BB_PRESSURE_MAX_XXX:
> even with the value set to the upper limit, 1000000, we still get many
> blocks on new task launches. The difference in 'total' should be
> divided by the time interval when that interval is larger than 1s.


Yes!
I had a patch to do this but wanted to write a test case using stress.

Anyway, it's clearly better and won't allow the occasional new job to
slip in when the time diff is small and the pressure hasn't been
recalculated by the kernel. It will mean that pressure-regulated builds
may be a bit slower, but I doubt we'll even be able to measure that.
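
To make the arithmetic concrete, here is a minimal standalone sketch of
the per-second rate idea. The names, the 5 s sleep and the threshold are
illustrative only; it assumes a kernel with PSI enabled and is not the
runqueue code itself:

    # Standalone sketch of a per-second PSI rate check (illustrative only).
    import time

    def read_cpu_stall_total():
        # The "some" line of /proc/pressure/cpu ends with total=<cumulative
        # stall time in microseconds>; field 4 is the "total=..." token.
        with open("/proc/pressure/cpu") as f:
            return int(f.readline().split()[4].split("=")[1])

    max_cpu_pressure = 1000000      # hypothetical limit, in us of stall per second

    prev_total = read_cpu_stall_total()
    prev_time = time.time()

    time.sleep(5)                   # pretend 5 seconds pass between scheduler checks

    curr_total = read_cpu_stall_total()
    tdiff = time.time() - prev_time

    raw_delta = curr_total - prev_total
    # The raw delta accumulates over the whole interval; dividing by tdiff
    # turns it into stall microseconds per second, which is what the
    # threshold is meant to bound.
    rate = raw_delta / tdiff if tdiff > 1.0 else raw_delta

    print(f"delta {raw_delta} us over {tdiff:.1f}s -> {rate:.0f} us/s")
    print("exceeds the limit" if rate > max_cpu_pressure else "within the limit")

With a 5 s gap, even ~30% CPU stall accumulates ~1,500,000 us of 'total',
so comparing the raw delta against the 1,000,000 upper limit blocks new
tasks, while the per-second rate (~300,000 us/s) stays well under it,
which is exactly the mismatch the commit message describes.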

Thanks Qi,

../Randy


>
> Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
> ---
>   bitbake/lib/bb/runqueue.py | 13 +++++++++----
>   1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/bitbake/lib/bb/runqueue.py b/bitbake/lib/bb/runqueue.py
> index e629ab7e7b..02f1474540 100644
> --- a/bitbake/lib/bb/runqueue.py
> +++ b/bitbake/lib/bb/runqueue.py
> @@ -198,15 +198,20 @@ class RunQueueScheduler(object):
>                   curr_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
>                   curr_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
>                   curr_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
> -                exceeds_cpu_pressure =  self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) > self.rq.max_cpu_pressure
> -                exceeds_io_pressure =  self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) > self.rq.max_io_pressure
> -                exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) > self.rq.max_memory_pressure
>                   now = time.time()
> -                if now - self.prev_pressure_time > 1.0:
> +                tdiff = now - self.prev_pressure_time
> +                if tdiff > 1.0:
> +                    exceeds_cpu_pressure =  self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) / tdiff > self.rq.max_cpu_pressure
> +                    exceeds_io_pressure =  self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) / tdiff > self.rq.max_io_pressure
> +                    exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) / tdiff > self.rq.max_memory_pressure
>                       self.prev_cpu_pressure = curr_cpu_pressure
>                       self.prev_io_pressure = curr_io_pressure
>                       self.prev_memory_pressure = curr_memory_pressure
>                       self.prev_pressure_time = now
> +                else:
> +                    exceeds_cpu_pressure =  self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) > self.rq.max_cpu_pressure
> +                    exceeds_io_pressure =  self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) > self.rq.max_io_pressure
> +                    exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) > self.rq.max_memory_pressure
>               return (exceeds_cpu_pressure or exceeds_io_pressure or exceeds_memory_pressure)
>           return False
>   

-- 
# Randy MacLeod
# Wind River Linux


Thread overview: 5+ messages
2023-04-07  3:07 [bitbake-devel][PATCH 1/2] runqueue: fix PSI check calculation Chen Qi
2023-04-07  3:07 ` [bitbake-devel][PATCH 2/2] runqueue.py: use 'full' line for PSI check Chen Qi
2023-04-07 15:14   ` Randy MacLeod
2023-04-07 15:02 ` Randy MacLeod [this message]
2023-04-07 23:02   ` [bitbake-devel][PATCH 1/2] runqueue: fix PSI check calculation contrib
