From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "Jürgen Groß" <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"julien@xen.org" <julien@xen.org>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"wl@xen.org" <wl@xen.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
	"george.dunlap@citrix.com" <george.dunlap@citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dfaggioli@suse.com" <dfaggioli@suse.com>
Subject: Re: [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks
Date: Thu, 24 Sep 2020 18:08:55 +0000	[thread overview]
Message-ID: <87d02bavz7.fsf@epam.com> (raw)
In-Reply-To: <918fa2e1-232c-a3ff-d0a9-776b470ee5db@suse.com>


Hello Jürgen,

Jürgen Groß writes:

> On 12.06.20 13:30, Volodymyr Babchuk wrote:
>> On Fri, 2020-06-12 at 06:43 +0200, Jürgen Groß wrote:
>>> On 12.06.20 02:22, Volodymyr Babchuk wrote:

[...]
>>>> +    delta = NOW() - v->hyp_entry_time;
>>>> +    atomic_add(delta, &v->sched_unit->hyp_time);
>>>> +
>>>> +#ifndef NDEBUG
>>>> +    v->in_hyp_task = false;
>>>> +#endif
>>>> +}
>>>> +
>>>>    /*
>>>>     * Do the actual movement of an unit from old to new CPU. Locks for *both*
>>>>     * CPUs needs to have been taken already when calling this!
>>>> @@ -2615,6 +2646,7 @@ static void schedule(void)
>>>>           SCHED_STAT_CRANK(sched_run);
>>>>    +    vcpu_end_hyp_task(current);
>>>>        rcu_read_lock(&sched_res_rculock);
>>>>           lock = pcpu_schedule_lock_irq(cpu);
>>>> diff --git a/xen/common/softirq.c b/xen/common/softirq.c
>>>> index 063e93cbe3..03a29384d1 100644
>>>> --- a/xen/common/softirq.c
>>>> +++ b/xen/common/softirq.c
>>>> @@ -71,7 +71,9 @@ void process_pending_softirqs(void)
>>>>    void do_softirq(void)
>>>>    {
>>>>        ASSERT_NOT_IN_ATOMIC();
>>>> +    vcpu_begin_hyp_task(current);
>>>>        __do_softirq(0);
>>>> +    vcpu_end_hyp_task(current);
>>>
>>> This won't work for scheduling. current will either have changed,
>>> or in x86 case __do_softirq() might just not return. You need to
>>> handle that case explicitly in schedule() (you did that for the
>>> old vcpu, but for the case schedule() is returning you need to
>>> call vcpu_begin_hyp_task(current) there).
>>>
>> Well, this is one of the questions I wanted to discuss. I certainly
>> need to call vcpu_begin_hyp_task(current) after the context switch.
>> But what is the right place? If my understanding is right, code on
>> the x86 platform will never reach this point. Or am I wrong there?
>
> No, this is correct.
>
> You can add the call to context_switch() just after set_current() has
> been called.

Looks like I'm missing something there. If I get this right, the code
you mentioned is executed right before leaving the hypervisor.

So, as I see it, the functions are called in the following way (on x86):

1. do_softirq() calls vcpu_begin_hyp_task() and then executes
__do_softirq()

2. __do_softirq() does different jobs and eventually calls schedule()

3. schedule() calls vcpu_end_hyp_task() and makes a scheduling decision,
which leads to a call to context_switch()

4. At the end of context_switch() we will exit the hypervisor and enter
the VM. At least, this is how I understand the

       nextd->arch.ctxt_switch->tail(next);

call.

So, no need to call vcpu_begin_hyp_task() in context_switch() for x86.
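The call flow above can be sketched as a minimal, self-contained model
(hypothetical and heavily simplified, not the actual Xen code; the
global flag and the trivial function bodies are assumptions made only to
illustrate the begin/end pairing on the x86 path):

```c
#include <assert.h>
#include <stdbool.h>

static bool in_hyp_task; /* models the per-vCPU in_hyp_task debug flag */

static void vcpu_begin_hyp_task(void)
{
    assert(!in_hyp_task);       /* must not be entered twice */
    in_hyp_task = true;
}

static void vcpu_end_hyp_task(void)
{
    assert(in_hyp_task);        /* begin must have been called first */
    in_hyp_task = false;
}

static void context_switch(void)
{
    /* Step 4: on x86 this would tail-call into the next domain and
     * never return here, so no vcpu_begin_hyp_task() is needed after
     * it.  In this standalone model it simply returns. */
}

static void schedule(void)
{
    vcpu_end_hyp_task();        /* step 3: stop accounting before switching */
    context_switch();           /* step 4: exits the hypervisor on x86 */
}

static void do_softirq(void)
{
    vcpu_begin_hyp_task();      /* step 1: start accounting on entry */
    schedule();                 /* step 2: softirq work ends up scheduling */
}
```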

On ARM, this is a different story. There, I am calling
vcpu_begin_hyp_task() after set_current(), because the ARM code will
eventually return to do_softirq(), where the corresponding
vcpu_end_hyp_task() will be called.

I have put a bunch of ASSERTs in place to ensure that
vcpu_begin_hyp_task() and vcpu_end_hyp_task() are not called twice, and
that vcpu_end_hyp_task() is called only after vcpu_begin_hyp_task().
Those asserts are not failing, so I assume that I did all this the
right way :)

-- 
Volodymyr Babchuk at EPAM
