xen-devel.lists.xenproject.org archive mirror
From: Andrii Anisov <andrii.anisov@gmail.com>
To: Dario Faggioli <dfaggioli@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"andrii_anisov@epam.com" <andrii_anisov@epam.com>,
	"wl@xen.org" <wl@xen.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"George.Dunlap@eu.citrix.com" <George.Dunlap@eu.citrix.com>,
	"tim@xen.org" <tim@xen.org>,
	"ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
	"julien.grall@arm.com" <julien.grall@arm.com>,
	Jan Beulich <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [RFC 3/6] sysctl: extend XEN_SYSCTL_getcpuinfo interface
Date: Fri, 26 Jul 2019 16:06:32 +0300	[thread overview]
Message-ID: <0fd1f291-59d0-4085-6393-ef7809b1c3f0@gmail.com> (raw)
In-Reply-To: <3dbd34f4b4f6286c627b40ed464e565c02111fda.camel@suse.com>



On 26.07.19 15:15, Dario Faggioli wrote:
> Yep, I think being able to know time spent running guests could be
> useful.

Well, my intention was to see the hypervisor's own run time and the true idle time.

With the full series applied, I see a distinct difference in xentop depending on the type of load in the domains:

On my regular system (hardware-less Dom0, a Linux with UI aka DomD, an Android with PV drivers aka DomA), I see the following:

Idle system:

xentop - 10:10:42   Xen 4.13-unstable
3 domains: 1 running, 2 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
%CPU(s):    7.0 gu,    2.6 hy,  390.4 id
Mem: 8257536k total, 8257536k used, 99020k free    CPUs: 4 @ 8MHz
       NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
       DomA --b---         76    3.3    6258456   75.8    6259712      75.8     4    0        0        0    0        0        0        0          0          0    0
   Domain-0 -----r         14    1.0     262144    3.2   no limit       n/a     4    0        0        0    0        0        0        0          0          0    0
       DomD --b---        111    2.8    1181972   14.3    1246208      15.1     4    0        0        0    0        0        0        0          0          0    0


System with CPU burners in all domains:

xentop - 10:12:19   Xen 4.13-unstable
3 domains: 3 running, 0 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
%CPU(s):  389.1 gu,   10.9 hy,    0.0 id
Mem: 8257536k total, 8257536k used, 99020k free    CPUs: 4 @ 8MHz
       NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
       DomA -----r        115  129.7    6258456   75.8    6259712      75.8     4    0        0        0    0        0        0        0          0          0    0
   Domain-0 -----r        120  129.8     262144    3.2   no limit       n/a     4    0        0        0    0        0        0        0          0          0    0
       DomD -----r        163  129.6    1181972   14.3    1246208      15.1     4    0        0        0    0        0        0        0          0          0    0


System with GPU load run both in DomD and DomA:

xentop - 10:14:26   Xen 4.13-unstable
3 domains: 2 running, 1 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
%CPU(s):  165.7 gu,   51.4 hy,  182.9 id
Mem: 8257536k total, 8257536k used, 99020k free    CPUs: 4 @ 8MHz
       NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
       DomA --b---        250   60.8    6258456   75.8    6259712      75.8     4    0        0        0    0        0        0        0          0          0    0
   Domain-0 -----r        159    2.1     262144    3.2   no limit       n/a     4    0        0        0    0        0        0        0          0          0    0
       DomD -----r        275  102.7    1181972   14.3    1246208      15.1     4    0        0        0    0        0        0        0          0          0    0


You can see the rise of CPU time used by the hypervisor itself in the high-IRQ use case (GPU load).
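As a sanity check on the numbers above (this is just my own arithmetic, not part of the series): on a 4-pCPU box the guest, hypervisor and idle percentages should add up to roughly 400%, and they do in all three snapshots:

```python
# Sanity check: with 4 pCPUs, gu + hy + id should sum to ~400%.
# The tuples below are the %CPU(s) lines from the three xentop snapshots above.
snapshots = {
    "idle":     (7.0, 2.6, 390.4),
    "cpu_burn": (389.1, 10.9, 0.0),
    "gpu_load": (165.7, 51.4, 182.9),
}

for name, (gu, hy, idle) in snapshots.items():
    total = gu + hy + idle
    print(f"{name}: total = {total:.1f}%")
    assert abs(total - 400.0) < 1.0, name
```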

> I confirm what I said about patch 1: idle time being the time idle_vcpu
> spent in RUNSTATE_blocked, and hypervisor time being the time idle_vcpu
> spent in RUNSTATE_running sounds quite confusing to me.

As I said before, think of idle_vcpu as hypervisor_vcpu ;)
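To illustrate what I mean (a hypothetical sketch; the function and counter names are mine, not the actual patch code): reading idle_vcpu as hypervisor_vcpu, the two runstates map naturally onto the xentop columns:

```python
# Hypothetical illustration of the proposed accounting; not actual Xen code.
def split_pcpu_time(idle_vcpu_running_ns, idle_vcpu_blocked_ns, guest_ns):
    """Think of idle_vcpu as hypervisor_vcpu:
    - its RUNSTATE_running time -> work done by the hypervisor itself ("hy")
    - its RUNSTATE_blocked time -> true idle time, the pCPU had nothing to do ("id")
    - everything else on the pCPU is guest time ("gu")
    """
    return {
        "guest": guest_ns,
        "hypervisor": idle_vcpu_running_ns,
        "idle": idle_vcpu_blocked_ns,
    }

# Example: a 1-second sample on one pCPU.
sample = split_pcpu_time(26_000_000, 904_000_000, 70_000_000)
assert sum(sample.values()) == 1_000_000_000  # the three buckets cover the whole sample
```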

-- 
Sincerely,
Andrii Anisov.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


Thread overview: 49+ messages
2019-07-26 10:37 [Xen-devel] [RFC 0/6] XEN scheduling hardening Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 1/6] xen/arm: Re-enable interrupt later in the trap path Andrii Anisov
2019-07-26 10:48   ` Julien Grall
2019-07-30 17:35     ` Andrii Anisov
2019-07-30 20:10       ` Julien Grall
2019-08-01  6:45         ` Andrii Anisov
2019-08-01  9:37           ` Julien Grall
2019-08-02  8:28             ` Andrii Anisov
2019-08-02  9:03               ` Julien Grall
2019-08-02 12:24                 ` Andrii Anisov
2019-08-02 13:22                   ` Julien Grall
2019-08-01 11:19           ` Dario Faggioli
2019-08-02  7:50             ` Andrii Anisov
2019-08-02  9:15               ` Julien Grall
2019-08-02 13:07                 ` Andrii Anisov
2019-08-02 13:49                   ` Julien Grall
2019-08-03  1:39                     ` Dario Faggioli
2019-08-03  0:55                   ` Dario Faggioli
2019-08-06 13:09                     ` Andrii Anisov
2019-08-08 14:07                       ` Andrii Anisov
2019-08-13 14:45                         ` Dario Faggioli
2019-08-15 18:25                           ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 2/6] schedule: account true system idle time Andrii Anisov
2019-07-26 12:00   ` Dario Faggioli
2019-07-26 12:42     ` Andrii Anisov
2019-07-29 11:40       ` Dario Faggioli
2019-08-01  8:23         ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 3/6] sysctl: extend XEN_SYSCTL_getcpuinfo interface Andrii Anisov
2019-07-26 12:15   ` Dario Faggioli
2019-07-26 13:06     ` Andrii Anisov [this message]
2019-07-26 10:37 ` [Xen-devel] [RFC 4/6] xentop: show CPU load information Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 5/6] arm64: сall enter_hypervisor_head only when it is needed Andrii Anisov
2019-07-26 10:44   ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 5/6] arm64: call " Andrii Anisov
2019-07-26 10:59   ` Julien Grall
2019-07-30 17:35     ` Andrii Anisov
2019-07-31 11:02       ` Julien Grall
2019-07-31 11:33         ` Andre Przywara
2019-08-01  7:33         ` Andrii Anisov
2019-08-01 10:17           ` Julien Grall
2019-08-02 13:50             ` Andrii Anisov
2019-07-26 10:37 ` [Xen-devel] [RFC 6/6] schedule: account all the hypervisor time to the idle vcpu Andrii Anisov
2019-07-26 11:56 ` [Xen-devel] [RFC 0/6] XEN scheduling hardening Dario Faggioli
2019-07-26 12:14   ` Juergen Gross
2019-07-29 11:53     ` Dario Faggioli
2019-07-29 12:13       ` Juergen Gross
2019-07-29 14:47     ` Andrii Anisov
2019-07-29 18:46       ` Dario Faggioli
2019-07-29 14:28   ` Andrii Anisov
