From: Dario Faggioli <dfaggioli@suse.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
	 "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
	Julien Grall <jgrall@amazon.com>,
	Dario Faggioli <dario.faggioli@suse.com>,
	"Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
	 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: IRQ latency measurements in hypervisor
Date: Thu, 21 Jan 2021 09:39:50 +0100
Message-ID: <616efdeb8c3bc6bec1c02de05e0cfe0eda5b4e56.camel@suse.com>
In-Reply-To: <87turb13wp.fsf@epam.com>

On Thu, 2021-01-21 at 01:20 +0000, Volodymyr Babchuk wrote:
> Hi Dario,
> 
Hi :-)

> Dario Faggioli writes:
> > Anyway, as I was saying, having a latency which is ~ 2x of your
> > period is ok, and it should be expected (when you size the period).
> > In fact, let's say that your budget is Xus, and your period is 30us.
> > This means that you get to execute for Xus every 30us. So,
> > basically, at time t0 you are given a budget of Xus and you are
> > guaranteed to be able to use it all within time t1=t0+30us. At that
> > time (t1=t0+30us) you are given another Xus amount of budget, and
> > you are guaranteed to be able to use it all within
> > t2=t1+30us=t0+60us.
> 
> Well, I'm not sure if I got you right. Are you saying that unused
> budget is preserved?
> 
No, it's not preserved and that's not what I meant... sorry if I did
not manage to make myself clear.

> If I understood it correctly, any unused budget is lost. So,
> basically RTDS guarantees that your vcpu will be able to run Xus in
> total every Yus, where X is the budget and Y is the period.
>
Yep. Every Y, you are given X and you can use it. When, after Y, a new
replenishment happens, you are given X again, so if you did not use all
of X during the previous period, the amount you didn't use is lost.
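
Just to make that rule concrete, here is a toy model of it (plain
Python, with made-up numbers; it is only meant to illustrate the
replenishment semantics, not how the actual RTDS code in Xen is
structured):

  PERIOD = 100   # us
  BUDGET = 30    # us

  budget = BUDGET
  for period_index in range(3):
      used = min(budget, 10)   # pretend the vcpu only needed 10us
      budget -= used
      print(f"[t={period_index * PERIOD}us] ran {used}us, "
            f"{budget}us left unused and now discarded")
      budget = BUDGET          # replenishment: back to the full budget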

> Also, it does not guarantee that the vCPU will be scheduled right
> away, so for a period of 100us and a budget of 30us it will be
> perfectly fine to have a latency of 70+us:
> 
Exactly, that is kind of what I was trying to say myself.

In fact, as you say, with a period of 100us and a budget of 30us, it is
perfectly "legal" for the vcpu to only start executing 70us after the
replenishment (i.e., after the beginning of a new period).

And that is also why, theoretically, the maximum latency between two
consecutive executions of a vcpu could be, in the worst possible case,
2*P-2*B (where, this time, P is the period and B is the budget).
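
To put some numbers on that (reusing the period of 100us and budget of
30us from above): in the worst case the vcpu consumes its whole budget
right at the start of one period and is then scheduled as late as
possible within the next one, i.e. it runs during [0, B) and then not
again until [2*P - B, 2*P). The gap between the end of the first
execution and the start of the second one is therefore:

  2*P - 2*B = 2*100us - 2*30us = 140us

(numbers only meant to illustrate the bound, of course, not an actual
measurement.)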

> If at t=0 a new period begins and at the same time an IRQ fires, but
> RTDS decides to run another task, it is possible that your vCPU will
> be scheduled only at t=70us.
>
Exactly.

> > What it will therefore record would be a latency of t2-5us, which
> > in fact is:
> > 
> >   t1 + 30us - 5us = t0 + 30us + 30us - 5us
> >                   = t0 + 60us - 5us = 55us ~= 60us   (with t0 = 0)
> > 
> > So... May this be the case?
> 
> Yes, probably this is the issue. I used a budget of 15us in this
> case. But, taking into account that the minimal observed latency is
> 3us and the typical one is ~10us, it is quite possible that the
> budget will be emptied before the IRQ handler has a chance to
> complete.
> 
Indeed.

> It would be great to have priorities in RTDS, so a more critical task
> can preempt a less critical one.
>
Well, I see where this comes from, but RTDS is a dynamic-priority
scheduler, so the priority is given by the (absolute) deadline. The
challenge, therefore, is to determine the proper period and budget to
assign to each vcpu, given your workload and your requirements.
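
As a (very rough) first step of that modelling, you can at least
sanity-check that the budgets and periods you pick are feasible at all
on the physical CPUs you have. This is just the usual utilization-based
check (necessary, not sufficient), sketched here in Python with made-up
numbers, nothing RTDS-specific:

  # Each vcpu reserves budget/period of a CPU. The sum of these
  # utilizations must not exceed the number of physical CPUs, or the
  # reservations cannot possibly be honoured.
  vcpus = [
      (30, 100),     # (budget_us, period_us) -- example values only
      (15, 30),
      (100, 1000),
  ]
  num_pcpus = 4
  utilization = sum(b / p for b, p in vcpus)
  print(f"total utilization {utilization:.2f} on {num_pcpus} pCPUs:",
        "maybe feasible" if utilization <= num_pcpus else "overloaded")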

With fixed-priority schedulers, this "modelling" part could be easier,
but then the schedulers themselves usually have worse performance and
only allow you to utilize a fraction of the resources that you have
available.

And adding some kind of priority on top of a dynamic scheduler such as
RTDS (although it may sound appealing at first) would mess up both the
theoretical analysis and the implementation of the algorithm itself,
believe me. :-)

> I believe, I have seen corresponding patches
> somewhere...
> 
Mmm... not sure. Maybe you're referring to how "extratime" is handled?
Because there is a concept of priority in the way it's handled, but
it's mostly an implementation detail.

That being said, was extratime enabled, in your experiments, for the
various vcpus?

Of course, it's not that you can rely on a particular vcpu being able
to take advantage of it, because that's not how it works. But at the
same time it does not disrupt the real-time guarantees, so it's safe to
use... It may be interesting to give it a try.
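
FWIW, checking or flipping it should be a matter of something like
"xl sched-rtds -d <domain> -v all -p <period> -b <budget> -e 1" (that
is from memory, so please double check the exact syntax against the xl
manpage of your Xen version; period and budget are in microseconds
there, and -e 1 is what enables extratime).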

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
