xen-devel.lists.xenproject.org archive mirror
From: "Jürgen Groß" <jgross@suse.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [RFC PATCH v1 4/6] xentop: collect IRQ and HYP time statistics.
Date: Fri, 12 Jun 2020 06:57:13 +0200	[thread overview]
Message-ID: <2a0ff6f5-1ada-9d0a-5014-709c873ec3e3@suse.com> (raw)
In-Reply-To: <20200612002205.174295-5-volodymyr_babchuk@epam.com>

On 12.06.20 02:22, Volodymyr Babchuk wrote:
> As the scheduler code now collects time spent in IRQ handlers and in
> do_softirq(), we can present those values to userspace tools like
> xentop, so a system administrator can see how the system behaves.
> 
> We update the counters only in the sched_get_time_correction() function
> to minimize the number of spinlocks taken. As atomic_t is 32 bits wide,
> it is not large enough to store time with nanosecond precision, so we
> use 64-bit variables and protect them with a spinlock.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
>   xen/common/sched/core.c     | 17 +++++++++++++++++
>   xen/common/sysctl.c         |  1 +
>   xen/include/public/sysctl.h |  4 +++-
>   xen/include/xen/sched.h     |  2 ++
>   4 files changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index a7294ff5c3..ee6b1d9161 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -95,6 +95,10 @@ static struct scheduler __read_mostly ops;
>   
>   static bool scheduler_active;
>   
> +static DEFINE_SPINLOCK(sched_stat_lock);
> +s_time_t sched_stat_irq_time;
> +s_time_t sched_stat_hyp_time;
> +
>   static void sched_set_affinity(
>       struct sched_unit *unit, const cpumask_t *hard, const cpumask_t *soft);
>   
> @@ -994,9 +998,22 @@ s_time_t sched_get_time_correction(struct sched_unit *u)
>               break;
>       }
>   
> +    spin_lock_irqsave(&sched_stat_lock, flags);
> +    sched_stat_irq_time += irq;
> +    sched_stat_hyp_time += hyp;
> +    spin_unlock_irqrestore(&sched_stat_lock, flags);

Please don't use a lock. Just use add_sized() instead which will add
atomically.

> +
>       return irq + hyp;
>   }
>   
> +void sched_get_time_stats(uint64_t *irq_time, uint64_t *hyp_time)
> +{
> +    unsigned long flags;
> +
> +    spin_lock_irqsave(&sched_stat_lock, flags);
> +    *irq_time = sched_stat_irq_time;
> +    *hyp_time = sched_stat_hyp_time;
> +    spin_unlock_irqrestore(&sched_stat_lock, flags);

read_atomic() will do the job without a lock.

>   }
>   
>   /*
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 1c6a817476..00683bc93f 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -270,6 +270,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>           pi->scrub_pages = 0;
>           pi->cpu_khz = cpu_khz;
>           pi->max_mfn = get_upper_mfn_bound();
> +        sched_get_time_stats(&pi->irq_time, &pi->hyp_time);
>           arch_do_physinfo(pi);
>           if ( iommu_enabled )
>           {
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 3a08c512e8..f320144d40 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -35,7 +35,7 @@
>   #include "domctl.h"
>   #include "physdev.h"
>   
> -#define XEN_SYSCTL_INTERFACE_VERSION 0x00000013
> +#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014
>   
>   /*
>    * Read console content from Xen buffer ring.
> @@ -118,6 +118,8 @@ struct xen_sysctl_physinfo {
>       uint64_aligned_t scrub_pages;
>       uint64_aligned_t outstanding_pages;
>       uint64_aligned_t max_mfn; /* Largest possible MFN on this host */
> +    uint64_aligned_t irq_time;
> +    uint64_aligned_t hyp_time;

Would hypfs work, too? That would avoid the need to extend yet another
hypercall.


Juergen



Thread overview: 43+ messages
2020-06-12  0:22 [RFC PATCH v1 0/6] Fair scheduling Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks Volodymyr Babchuk
2020-06-12  4:43   ` Jürgen Groß
2020-06-12 11:30     ` Volodymyr Babchuk
2020-06-12 11:40       ` Jürgen Groß
2020-09-24 18:08         ` Volodymyr Babchuk
2020-09-25 17:22           ` Dario Faggioli
2020-09-25 20:21             ` Volodymyr Babchuk
2020-09-25 21:42               ` Dario Faggioli
2020-06-16 10:10   ` Jan Beulich
2020-06-18  2:50     ` Volodymyr Babchuk
2020-06-18  6:34       ` Jan Beulich
2020-06-12  0:22 ` [RFC PATCH v1 1/6] sched: track time spent in IRQ handler Volodymyr Babchuk
2020-06-12  4:36   ` Jürgen Groß
2020-06-12 11:26     ` Volodymyr Babchuk
2020-06-12 11:29       ` Julien Grall
2020-06-12 11:33         ` Volodymyr Babchuk
2020-06-12 12:21           ` Julien Grall
2020-06-12 20:08             ` Dario Faggioli
2020-06-12 22:25               ` Volodymyr Babchuk
2020-06-12 22:54               ` Julien Grall
2020-06-16 10:06   ` Jan Beulich
2020-06-12  0:22 ` [RFC PATCH v1 3/6] sched, credit2: improve scheduler fairness Volodymyr Babchuk
2020-06-12  4:51   ` Jürgen Groß
2020-06-12 11:38     ` Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 5/6] tools: xentop: show time spent in IRQ and HYP states Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 6/6] trace: add fair scheduling trace events Volodymyr Babchuk
2020-06-12  0:22 ` [RFC PATCH v1 4/6] xentop: collect IRQ and HYP time statistics Volodymyr Babchuk
2020-06-12  4:57   ` Jürgen Groß [this message]
2020-06-12 11:44     ` Volodymyr Babchuk
2020-06-12 12:45       ` Julien Grall
2020-06-12 22:16         ` Volodymyr Babchuk
2020-06-18 20:24         ` Volodymyr Babchuk
2020-06-18 20:34           ` Julien Grall
2020-06-18 23:35             ` Volodymyr Babchuk
2020-06-12 12:29     ` Julien Grall
2020-06-12 12:41       ` Jürgen Groß
2020-06-12 15:29         ` Dario Faggioli
2020-06-12 22:27           ` Volodymyr Babchuk
2020-06-13  6:22             ` Jürgen Groß
2020-06-18  2:58               ` Volodymyr Babchuk
2020-06-18 15:17                 ` Julien Grall
2020-06-18 15:23                   ` Jan Beulich
