From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Dario Faggioli <dfaggioli@suse.com>
Cc: "Jürgen Groß" <jgross@suse.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
"julien@xen.org" <julien@xen.org>,
"jbeulich@suse.com" <jbeulich@suse.com>,
"wl@xen.org" <wl@xen.org>,
"sstabellini@kernel.org" <sstabellini@kernel.org>,
"ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
"george.dunlap@citrix.com" <george.dunlap@citrix.com>,
"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks
Date: Fri, 25 Sep 2020 20:21:44 +0000
Message-ID: <87r1qpa9pk.fsf@epam.com>
In-Reply-To: <66880caef018abdbf9fe99116594a2826efcb603.camel@suse.com>
Hi Dario,
Dario Faggioli writes:
> On Thu, 2020-09-24 at 18:08 +0000, Volodymyr Babchuk wrote:
>> So, as I see this, functions are called in the following way (on
>> x86):
>>
>> 1. do_softirq() calls vcpu_begin_hyp_task() and then executes
>> __do_softirq()
>>
>> 2. __do_softirq() does different jobs and eventually calls schedule()
>>
>> 3. schedule() calls vcpu_end_hyp_task() and makes scheduling decision
>> which leads to call to context_switch()
>>
>> 4. On end context_switch() we will exit hypervisor and enter VM. At
>> least, this is how I understand
>>
>> nextd->arch.ctxt_switch->tail(next);
>>
>> call.
>>
>> So, no need to call vcpu_begin_hyp_task() in context_switch() for
>> x86.
>>
> Mmm... This looks correct to me too.
>
> And what about the cases where schedule() does return?
Can it return on x86? I want to test this case, but how do I force it?
The null scheduler, perhaps?
> Are these also fine because they're handled within __do_softirq()
> (i.e., without actually going back to do_softirq() and hence never
> calling end_hyp_task() for a second time)?
I'm afraid there will be a bug: schedule() calls end_hyp_task(), and if
execution eventually returns from __do_softirq() to do_softirq(),
end_hyp_task() will be called twice.
>
>> I have put bunch of ASSERTs to ensure that vcpu_begin_hyp_task() or
>> vcpu_end_hyp_task() are not called twice and that vcpu_end_hyp_task()
>> is
>> called after vcpu_begin_hyp_task(). Those asserts are not failing, so
>> I
>> assume that I did all this in the right way :)
>>
> Yeah, good to know. :-)
>
> Are you doing these tests with both core-scheduling disabled and
> enabled?
Good question. On x86 I am running Xen in QEMU. With "-smp 2" it sees two
CPUs:
(XEN) Brought up 2 CPUs
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
You are right, I need to try other variants of scheduling granularity.
Do you by any chance know how to emulate a more complex setup in QEMU?
Also, what is the preferred way to test/debug Xen on x86?
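For what it's worth, something along these lines should give QEMU a topology
where core granularity actually groups sibling threads. This is a sketch, not
a verified recipe: the disk/memory/serial options and file names (xen.gz,
vmlinuz) are placeholders for a real setup, and "sched-gran=core" is the Xen
command-line option selecting per-core scheduling granularity.

```shell
# Emulate 2 sockets x 2 cores x 2 threads = 8 CPUs, so that with
# "sched-gran=core" Xen should report 2 CPUs per sched-resource.
# Kernel/initrd paths and the remaining options are placeholders.
qemu-system-x86_64 \
    -machine q35,accel=kvm \
    -smp 8,sockets=2,cores=2,threads=2 \
    -m 2048 \
    -kernel xen.gz \
    -append "console=com1 sched-gran=core" \
    -initrd vmlinuz \
    -serial stdio
```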
--
Volodymyr Babchuk at EPAM