From: Julien Grall <julien.grall@arm.com>
To: Andrii Anisov <andrii.anisov@gmail.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	xen-devel@lists.xenproject.org
Cc: Andrii Anisov <andrii_anisov@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [Xen-devel] [RFC 1/6] xen/arm: Re-enable interrupt later in the trap path
Date: Fri, 2 Aug 2019 14:49:51 +0100	[thread overview]
Message-ID: <c20b6a42-d8e4-379f-f0c7-56ad008ce653@arm.com> (raw)
In-Reply-To: <d92cecb0-397a-004f-aa80-e2761d9fadb5@gmail.com>

Hi,

/!\/!\/!\

I am not a scheduler expert, so my view may well be wrong. Dario, feel free to 
correct me :).

/!\/!\/!\

On 02/08/2019 14:07, Andrii Anisov wrote:
> 
> 
> On 02.08.19 12:15, Julien Grall wrote:
>>> I can make such a list of how it is done in this series:
>>
>> From the list below it is not clear what the split is between hypervisor time 
>> and guest time. See some of the examples below.
> 
> I guess your question is *why* I split hyp/guest time in such a way.
> 
> So for the guest I count the time spent in guest mode, plus the time spent in 
> hypervisor mode serving explicit requests from the guest.
> 
> That time may be quite deterministic from the guest's point of view.
> 
> But the time spent by the hypervisor handling interrupts or updating the hardware 
> state is not requested by the guest itself. It is virtualization overhead, and that 
> overhead heavily depends on the system configuration (e.g. how many guests are 
> running).

While the context switch cost will depend on your system configuration, the HW state 
synchronization on entry to and exit from the hypervisor will always be there, even 
if you have only one guest running or are partitioning your system.

Furthermore, Xen implements a voluntary preemption model. The main preemption point 
on Arm is on return to the guest. So if work initiated by the guest takes a long 
time, you may want to defer it until a point where you can preempt without much 
trouble.
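
To illustrate the deferral pattern I have in mind, here is a rough, self-contained 
sketch (plain C, not Xen code; every name in it is invented for the example): the 
trap handler only flags the guest-initiated work, and the exit path to the guest runs 
it after pending softirqs have been processed, which is exactly where the scheduler 
gets its chance to preempt.

/*
 * A minimal, illustrative sketch (not Xen code): the trap handler flags
 * guest-initiated work instead of doing it inline, and the exit path to
 * the guest runs it only after pending softirqs, the point where the
 * scheduler can preempt.  Every name here is made up for the example.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_sketch {
    bool deferred_work;              /* set by the trap handler */
};

static void process_pending_softirqs_stub(void)
{
    /* Stand-in for softirq processing; this is the preemption point. */
    printf("softirqs (including a possible reschedule) handled here\n");
}

static void do_deferred_work_stub(struct vcpu_sketch *v)
{
    (void)v;
    printf("long-running guest-initiated work runs now\n");
}

/* Trap handler: defer the heavy lifting instead of doing it inline. */
static void handle_guest_request(struct vcpu_sketch *v)
{
    v->deferred_work = true;
}

/* Exit path to the guest: the main Arm preemption point mentioned above. */
static void leave_hypervisor_sketch(struct vcpu_sketch *v)
{
    process_pending_softirqs_stub();

    if ( v->deferred_work )
    {
        v->deferred_work = false;
        do_deferred_work_stub(v);
    }
}

int main(void)
{
    struct vcpu_sketch v = { .deferred_work = false };

    handle_guest_request(&v);
    leave_hypervisor_sketch(&v);

    return 0;
}

The point is simply structural: because the heavy work is reached only via the 
return-to-guest path, that path stays the natural place to preempt.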

Your definition of "virtualization overhead" is somewhat unclear. A guest is not 
aware that a device may be emulated, so emulating any I/O is per se an overhead.

> That overhead may be accounted to a guest or to the hypervisor, depending on the 
> agreed model.

There are some issues with accounting some of the work done on exit as hypervisor 
time. Take the example of the P2M: this task is deferred work from a system register 
emulation, because we need preemption.

The task can be long running (several hundred milliseconds). A scheduler may take 
only the guest time into account and consider that the vCPU does not need to be 
descheduled. You then run the risk that a vCPU will hog a pCPU and delay any other 
vCPU. This is not ideal even for an RT task.
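
To make the hogging risk concrete, here is an illustrative sketch (again not Xen 
code; Xen has its own preemption checks, and all names below are invented) of how 
such a long-running range operation is typically chunked so it can hand the pCPU 
back:

/*
 * Illustrative sketch (not Xen code) of how a long-running P2M-style range
 * operation is typically chunked: every so often it checks a preemption
 * condition and, if needed, reports where to resume.  All names invented.
 */
#include <stdbool.h>
#include <stdint.h>

#define CHUNK_PAGES 64U

/* Stand-in for a real preemption check (e.g. "is a softirq pending?"). */
static bool preempt_check_stub(void)
{
    return false;                    /* nothing pending in this sketch */
}

static void flush_one_page_stub(uint64_t gfn)
{
    (void)gfn;                       /* real cache maintenance would go here */
}

/*
 * Returns the gfn to resume from, or 'end' once the range is done.  The
 * caller is expected to re-invoke the operation later with that gfn.
 */
static uint64_t flush_range_sketch(uint64_t start, uint64_t end)
{
    uint64_t gfn, done = 0;

    for ( gfn = start; gfn < end; gfn++ )
    {
        flush_one_page_stub(gfn);

        /* Every CHUNK_PAGES pages, see whether the pCPU should be yielded. */
        if ( !(++done % CHUNK_PAGES) && preempt_check_stub() )
            return gfn + 1;          /* preempted: resume from here later */
    }

    return end;                      /* finished */
}

int main(void)
{
    uint64_t next = 0;

    /* Keep calling until the whole (made-up) 1024-page range is flushed. */
    while ( next < 1024 )
        next = flush_range_sketch(next, 1024);

    return 0;
}

The chunking only helps if something actually asks for preemption; if this work is 
never charged to the vCPU, the scheduler may see no reason to do so, which is the 
scenario described above.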

Other work done on exit (e.g. syncing the vGIC state to HW) is less of a concern 
wherever it is accounted, because it cannot possibly hog a pCPU.

I understand you want to measure the virtualization overhead. It feels to me that 
this needs to be a different category (i.e. neither hypervisor time nor guest time).
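
Purely as an illustration of that third category (nothing below mirrors real Xen 
structures; the names and buckets are invented), one could keep three per-vCPU 
counters and charge elapsed time to whichever bucket the vCPU is currently in:

/*
 * Purely illustrative layout for the three-way split discussed above:
 * guest time, hypervisor time, and a separate overhead bucket (entry/exit
 * synchronization, vGIC sync, deferred emulation work, ...).  Nothing here
 * mirrors real Xen structures; names and buckets are invented.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t time_ns_t;

enum time_bucket {
    TIME_GUEST,       /* guest mode + emulation explicitly requested by it */
    TIME_HYPERVISOR,  /* scheduler, interrupts unrelated to this guest     */
    TIME_OVERHEAD,    /* entry/exit sync, vGIC sync, deferred guest work   */
    TIME_NR_BUCKETS,
};

struct vcpu_time_sketch {
    time_ns_t bucket[TIME_NR_BUCKETS];
    enum time_bucket current;
    time_ns_t last_stamp;
};

/* Charge the elapsed time to the current bucket, then switch bucket. */
static void account_switch(struct vcpu_time_sketch *v, enum time_bucket next,
                           time_ns_t now)
{
    v->bucket[v->current] += now - v->last_stamp;
    v->last_stamp = now;
    v->current = next;
}

int main(void)
{
    struct vcpu_time_sketch v = { .current = TIME_GUEST, .last_stamp = 0 };

    account_switch(&v, TIME_OVERHEAD, 100);    /* trap: entry sync starts   */
    account_switch(&v, TIME_HYPERVISOR, 120);  /* unrelated interrupt       */
    account_switch(&v, TIME_OVERHEAD, 150);    /* exit sync before resuming */
    account_switch(&v, TIME_GUEST, 160);       /* back in guest mode        */

    printf("guest=%llu hyp=%llu overhead=%llu\n",
           (unsigned long long)v.bucket[TIME_GUEST],
           (unsigned long long)v.bucket[TIME_HYPERVISOR],
           (unsigned long long)v.bucket[TIME_OVERHEAD]);

    return 0;
}

Where each piece of entry/exit work lands then becomes an explicit policy question 
the scheduler can reason about, instead of being folded into either guest or 
hypervisor time.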

> 
> My idea is as following:
> Accounting that overhead to guests is quite OK for server applications: you put the 
> server overhead time on the guests and charge it to their budget.
> Yet for RT applications you will have a more accurate view of the guest execution 
> time if you drop that overhead.
> 
> Our target is Xen in safety-critical systems, so I chose the more deterministic 
> (from my point of view) approach.

See above: I believe you are building an insecure system by accounting some of the 
guest-initiated work to the hypervisor.

Cheers,

-- 
Julien Grall


Thread overview: 49+ messages
2019-07-26 10:37 [Xen-devel] [RFC 0/6] XEN scheduling hardening Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 1/6] xen/arm: Re-enable interrupt later in the trap path Andrii Anisov
2019-07-26 10:48   ` Julien Grall
2019-07-30 17:35     ` Andrii Anisov
2019-07-30 20:10       ` Julien Grall
2019-08-01  6:45         ` Andrii Anisov
2019-08-01  9:37           ` Julien Grall
2019-08-02  8:28             ` Andrii Anisov
2019-08-02  9:03               ` Julien Grall
2019-08-02 12:24                 ` Andrii Anisov
2019-08-02 13:22                   ` Julien Grall
2019-08-01 11:19           ` Dario Faggioli
2019-08-02  7:50             ` Andrii Anisov
2019-08-02  9:15               ` Julien Grall
2019-08-02 13:07                 ` Andrii Anisov
2019-08-02 13:49                   ` Julien Grall [this message]
2019-08-03  1:39                     ` Dario Faggioli
2019-08-03  0:55                   ` Dario Faggioli
2019-08-06 13:09                     ` Andrii Anisov
2019-08-08 14:07                       ` Andrii Anisov
2019-08-13 14:45                         ` Dario Faggioli
2019-08-15 18:25                           ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 2/6] schedule: account true system idle time Andrii Anisov
2019-07-26 12:00   ` Dario Faggioli
2019-07-26 12:42     ` Andrii Anisov
2019-07-29 11:40       ` Dario Faggioli
2019-08-01  8:23         ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 3/6] sysctl: extend XEN_SYSCTL_getcpuinfo interface Andrii Anisov
2019-07-26 12:15   ` Dario Faggioli
2019-07-26 13:06     ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 4/6] xentop: show CPU load information Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 5/6] arm64: сall enter_hypervisor_head only when it is needed Andrii Anisov
2019-07-26 10:44   ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 5/6] arm64: call " Andrii Anisov
2019-07-26 10:59   ` Julien Grall
2019-07-30 17:35     ` Andrii Anisov
2019-07-31 11:02       ` Julien Grall
2019-07-31 11:33         ` Andre Przywara
2019-08-01  7:33         ` Andrii Anisov
2019-08-01 10:17           ` Julien Grall
2019-08-02 13:50             ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 6/6] schedule: account all the hypervisor time to the idle vcpu Andrii Anisov
2019-07-26 11:56 ` [Xen-devel] [RFC 0/6] XEN scheduling hardening Dario Faggioli
2019-07-26 12:14   ` Juergen Gross
2019-07-29 11:53     ` Dario Faggioli
2019-07-29 12:13       ` Juergen Gross
2019-07-29 14:47     ` Andrii Anisov
2019-07-29 18:46       ` Dario Faggioli
2019-07-29 14:28   ` Andrii Anisov
