From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks
Date: Fri, 12 Jun 2020 00:22:36 +0000
Message-ID: <20200612002205.174295-3-volodymyr_babchuk@epam.com>
In-Reply-To: <20200612002205.174295-1-volodymyr_babchuk@epam.com>

In most cases, hypervisor code performs guest-related jobs. Tasks like
hypercall handling or MMIO access emulation are done on behalf of the
calling vCPU, so it is okay to charge the time spent in the hypervisor
to the current vCPU.

But there are also tasks that do not originate from guests, such as
TLB flushing or running tasklets. We don't want to charge the time
spent in these tasks to a scheduling unit's total run time, so we need
to track time spent in such housekeeping tasks separately.
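
As an illustration, here is a minimal sketch of how a scheduler could
drain the accumulated correction value. The helper name and the use of
the returned value are assumptions on my part; the actual consumption
is left to a later patch in this series:

    /*
     * Hypothetical helper, not part of this patch: take the
     * housekeeping time accumulated so far and reset the counter.
     * Subtracting only the value we read keeps anything added
     * concurrently for the next round.
     */
    static int sched_unit_take_hyp_time(struct sched_unit *unit)
    {
        int hyp = atomic_read(&unit->hyp_time);

        if ( hyp )
            atomic_sub(hyp, &unit->hyp_time);

        /* The caller subtracts this from the unit's measured runtime. */
        return hyp;
    }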

These hypervisor tasks are run from the do_softirq() function, so
we'll install our hooks there.

TODO: This change is not tested on ARM, and we'll probably get a
failing assertion there. This is because the ARM code returns from
schedule() and can reach the end of do_softirq(), where
vcpu_end_hyp_task() would run a second time for the same vCPU.
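
One possible shape of a fix, assuming schedule() can indeed return on
ARM (untested; a sketch rather than part of this patch):

    /*
     * Sketch: if the scheduler returns here on ARM, re-open the
     * hypervisor task so that the vcpu_end_hyp_task() at the end of
     * do_softirq() finds a matching vcpu_begin_hyp_task() and its
     * assertion holds.
     */
    static void schedule(void)
    {
        vcpu_end_hyp_task(current);

        /* ... select the next unit and context switch ... */

        /* Reached on ARM, where the context switch returns here. */
        vcpu_begin_hyp_task(current);
    }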

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/common/sched/core.c | 32 ++++++++++++++++++++++++++++++++
 xen/common/softirq.c    |  2 ++
 xen/include/xen/sched.h | 16 +++++++++++++++-
 3 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 8f642ada05..d597811fef 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -945,6 +945,37 @@ void vcpu_end_irq_handler(void)
     atomic_add(delta, &current->sched_unit->irq_time);
 }
 
+void vcpu_begin_hyp_task(struct vcpu *v)
+{
+    if ( is_idle_vcpu(v) )
+        return;
+
+    ASSERT(!v->in_hyp_task);
+
+    v->hyp_entry_time = NOW();
+#ifndef NDEBUG
+    v->in_hyp_task = true;
+#endif
+}
+
+void vcpu_end_hyp_task(struct vcpu *v)
+{
+    int delta;
+
+    if ( is_idle_vcpu(v) )
+        return;
+
+    ASSERT(v->in_hyp_task);
+
+    /* We assume that the hypervisor task time will not overflow an int */
+    delta = NOW() - v->hyp_entry_time;
+    atomic_add(delta, &v->sched_unit->hyp_time);
+
+#ifndef NDEBUG
+    v->in_hyp_task = false;
+#endif
+}
+
 /*
  * Do the actual movement of an unit from old to new CPU. Locks for *both*
  * CPUs needs to have been taken already when calling this!
@@ -2615,6 +2646,7 @@ static void schedule(void)
 
     SCHED_STAT_CRANK(sched_run);
 
+    vcpu_end_hyp_task(current);
     rcu_read_lock(&sched_res_rculock);
 
     lock = pcpu_schedule_lock_irq(cpu);
diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index 063e93cbe3..03a29384d1 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -71,7 +71,9 @@ void process_pending_softirqs(void)
 void do_softirq(void)
 {
     ASSERT_NOT_IN_ATOMIC();
+    vcpu_begin_hyp_task(current);
     __do_softirq(0);
+    vcpu_end_hyp_task(current);
 }
 
 void open_softirq(int nr, softirq_handler handler)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ceed53364b..51dc7c4551 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -239,7 +239,12 @@ struct vcpu
 
     /* Fair scheduling state */
     uint64_t         irq_entry_time;
+    uint64_t         hyp_entry_time;
     unsigned int     irq_nesting;
+#ifndef NDEBUG
+    bool             in_hyp_task;
+#endif
+
     /* Tasklet for continue_hypercall_on_cpu(). */
     struct tasklet   continue_hypercall_tasklet;
 
@@ -279,8 +284,9 @@ struct sched_unit {
     /* Vcpu state summary. */
     unsigned int           runstate_cnt[4];
 
-    /* Fair scheduling correction value */
+    /* Fair scheduling correction values */
     atomic_t               irq_time;
+    atomic_t               hyp_time;
 
     /* Bitmask of CPUs on which this VCPU may run. */
     cpumask_var_t          cpu_hard_affinity;
@@ -703,6 +709,14 @@ void vcpu_sleep_sync(struct vcpu *v);
 void vcpu_begin_irq_handler(void);
 void vcpu_end_irq_handler(void);
 
+/*
+ * Report to the scheduler when we are doing housekeeping tasks on the
+ * current vcpu. This is called from do_softirq(), but may be called
+ * from other places as well.
+ */
+void vcpu_begin_hyp_task(struct vcpu *v);
+void vcpu_end_hyp_task(struct vcpu *v);
+
 /*
  * Force synchronisation of given VCPU's state. If it is currently descheduled,
  * this call will ensure that all its state is committed to memory and that
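
A hypothetical usage sketch for the new hooks (not part of this patch;
the function name and the housekeeping work shown are placeholders):

    static void run_housekeeping_example(void)
    {
        struct vcpu *v = current;

        vcpu_begin_hyp_task(v);  /* records NOW() as hyp_entry_time */

        /* ... housekeeping work, e.g. running tasklets ... */

        vcpu_end_hyp_task(v);    /* adds the delta to sched_unit->hyp_time */
    }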
-- 
2.27.0

