From: Jan Beulich <jbeulich@suse.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks
Date: Tue, 16 Jun 2020 12:10:29 +0200
Message-ID: <7d3e1741-b8bc-b522-8d64-20ca9c14744b@suse.com>
In-Reply-To: <20200612002205.174295-3-volodymyr_babchuk@epam.com>

On 12.06.2020 02:22, Volodymyr Babchuk wrote:
> In most cases hypervisor code performs guest-related work. Tasks like
> hypercall handling or MMIO access emulation are done on behalf of the
> calling vCPU, so it is okay to charge the time spent in the hypervisor
> to the current vCPU.
> 
> But there are also tasks that do not originate from guests. These
> include things like TLB flushing or running tasklets. We don't want to
> charge the time spent on such tasks to the scheduling unit's total run
> time, so we need to track time spent on this housekeeping work
> separately.
> 
> Those hypervisor tasks are run from the do_softirq() function, so
> we'll install our hooks there.
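
A minimal sketch of what such accounting around do_softirq() could look
like (illustrative only, not taken from the patch; the structure and
helper names below are assumptions):

/* Illustrative sketch (not Xen code): account time spent processing
 * softirqs/tasklets to a separate "hypervisor work" counter instead of
 * charging it to the currently running vCPU.  All names are hypothetical. */
#include <stdint.h>
#include <time.h>

struct vcpu_accounting {
    uint64_t guest_time;      /* ns charged to the vCPU's scheduling unit */
    uint64_t hyp_time;        /* ns spent on housekeeping work */
    uint64_t hyp_enter_time;  /* timestamp taken when housekeeping began */
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Hook on entry to do_softirq(): remember when housekeeping started. */
static void begin_hyp_task(struct vcpu_accounting *acct)
{
    acct->hyp_enter_time = now_ns();
}

/* Hook on exit from do_softirq(): charge the elapsed time to the
 * separate counter rather than to the scheduling unit's run time. */
static void end_hyp_task(struct vcpu_accounting *acct)
{
    acct->hyp_time += now_ns() - acct->hyp_enter_time;
}

/* Mirrors where the two hooks would sit around the softirq loop. */
static void do_softirq_accounted(struct vcpu_accounting *acct)
{
    begin_hyp_task(acct);
    /* ... process pending softirqs / run tasklets ... */
    end_hyp_task(acct);
}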

I can see the point and desire, but it feels like you're moving from
one kind of unfairness to another: A softirq may very well be on
behalf of a specific vCPU, in which case not charging current should
lead to charging that specific one (which may still be current then).
Even more than for TLB flushes, this may be relevant for the cases
where (on x86) we issue WBINVD on behalf of a guest.
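
A purely illustrative way to express that, building on the sketch above
(the on_behalf_of field is an assumption, not an existing Xen
interface): softirq work raised for a particular vCPU records that
vCPU, and the exit hook charges it rather than the generic housekeeping
counter.

/* Hypothetical extension of the earlier sketch: work raised on behalf
 * of a specific vCPU (e.g. a guest-requested TLB flush or WBINVD)
 * records that vCPU, so the time is charged to it rather than to the
 * generic housekeeping counter. */
struct softirq_work {
    struct vcpu_accounting *on_behalf_of;  /* NULL if not guest-originated */
};

static void end_hyp_task_for(struct softirq_work *work,
                             struct vcpu_accounting *housekeeping,
                             uint64_t start)
{
    uint64_t elapsed = now_ns() - start;

    if ( work->on_behalf_of )
        work->on_behalf_of->guest_time += elapsed;  /* charge the requester */
    else
        housekeeping->hyp_time += elapsed;          /* generic hypervisor work */
}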

Jan



Thread overview: 43+ messages
2020-06-12  0:22 [RFC PATCH v1 0/6] Fair scheduling Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks Volodymyr Babchuk
2020-06-12  4:43   ` Jürgen Groß
2020-06-12 11:30     ` Volodymyr Babchuk
2020-06-12 11:40       ` Jürgen Groß
2020-09-24 18:08         ` Volodymyr Babchuk
2020-09-25 17:22           ` Dario Faggioli
2020-09-25 20:21             ` Volodymyr Babchuk
2020-09-25 21:42               ` Dario Faggioli
2020-06-16 10:10   ` Jan Beulich [this message]
2020-06-18  2:50     ` Volodymyr Babchuk
2020-06-18  6:34       ` Jan Beulich
2020-06-12  0:22 ` [RFC PATCH v1 1/6] sched: track time spent in IRQ handler Volodymyr Babchuk
2020-06-12  4:36   ` Jürgen Groß
2020-06-12 11:26     ` Volodymyr Babchuk
2020-06-12 11:29       ` Julien Grall
2020-06-12 11:33         ` Volodymyr Babchuk
2020-06-12 12:21           ` Julien Grall
2020-06-12 20:08             ` Dario Faggioli
2020-06-12 22:25               ` Volodymyr Babchuk
2020-06-12 22:54               ` Julien Grall
2020-06-16 10:06   ` Jan Beulich
2020-06-12  0:22 ` [RFC PATCH v1 3/6] sched, credit2: improve scheduler fairness Volodymyr Babchuk
2020-06-12  4:51   ` Jürgen Groß
2020-06-12 11:38     ` Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 5/6] tools: xentop: show time spent in IRQ and HYP states Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 6/6] trace: add fair scheduling trace events Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 4/6] xentop: collect IRQ and HYP time statistics Volodymyr Babchuk
2020-06-12  4:57   ` Jürgen Groß
2020-06-12 11:44     ` Volodymyr Babchuk
2020-06-12 12:45       ` Julien Grall
2020-06-12 22:16         ` Volodymyr Babchuk
2020-06-18 20:24         ` Volodymyr Babchuk
2020-06-18 20:34           ` Julien Grall
2020-06-18 23:35             ` Volodymyr Babchuk
2020-06-12 12:29     ` Julien Grall
2020-06-12 12:41       ` Jürgen Groß
2020-06-12 15:29         ` Dario Faggioli
2020-06-12 22:27           ` Volodymyr Babchuk
2020-06-13  6:22             ` Jürgen Groß
2020-06-18  2:58               ` Volodymyr Babchuk
2020-06-18 15:17                 ` Julien Grall
2020-06-18 15:23                   ` Jan Beulich
