From: Julien Grall <julien@xen.org>
To: "Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>,
"Jürgen Groß" <jgross@suse.com>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
"wl@xen.org" <wl@xen.org>,
"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
"ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
"george.dunlap@citrix.com" <george.dunlap@citrix.com>,
"dfaggioli@suse.com" <dfaggioli@suse.com>,
"jbeulich@suse.com" <jbeulich@suse.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC PATCH v1 4/6] xentop: collect IRQ and HYP time statistics.
Date: Thu, 18 Jun 2020 16:17:49 +0100 [thread overview]
Message-ID: <8b87612e-52e3-8f75-27a9-557ed9e7991f@xen.org> (raw)
In-Reply-To: <87tuz92i6y.fsf@epam.com>
On 18/06/2020 03:58, Volodymyr Babchuk wrote:
>
> Hi Jürgen,
>
> Jürgen Groß writes:
>
>> On 13.06.20 00:27, Volodymyr Babchuk wrote:
>>> On Fri, 2020-06-12 at 17:29 +0200, Dario Faggioli wrote:
>>>> On Fri, 2020-06-12 at 14:41 +0200, Jürgen Groß wrote:
>>>>> On 12.06.20 14:29, Julien Grall wrote:
>>>>>> On 12/06/2020 05:57, Jürgen Groß wrote:
>>>>>>> On 12.06.20 02:22, Volodymyr Babchuk wrote:
>>>>>>>> @@ -994,9 +998,22 @@ s_time_t sched_get_time_correction(struct sched_unit *u)
>>>>>>>>              break;
>>>>>>>>      }
>>>>>>>> + spin_lock_irqsave(&sched_stat_lock, flags);
>>>>>>>> + sched_stat_irq_time += irq;
>>>>>>>> + sched_stat_hyp_time += hyp;
>>>>>>>> + spin_unlock_irqrestore(&sched_stat_lock, flags);
>>>>>>>
>>>>>>> Please don't use a lock. Just use add_sized() instead which will
>>>>>>> add atomically.
>>>>>>
>>>>>> If we expect sched_get_time_correction to be called concurrently
>>>>>> then we would need to introduce atomic64_t or a spin lock.
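For illustration, the atomic approach could look roughly like the sketch below. This is a plain C11 stand-in only: Xen defines its own atomic64_t rather than using <stdatomic.h>, and the sched_stat_* names are simply borrowed from the quoted patch hunk.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative stand-ins for the counters in the quoted patch.  Xen would
 * use its own atomic64_t type; C11 atomics are used here only to show the
 * lock-free accumulation pattern. */
static _Atomic uint64_t sched_stat_irq_time;
static _Atomic uint64_t sched_stat_hyp_time;

/* Replaces the spin_lock_irqsave()/spin_unlock_irqrestore() critical
 * section: each addition is a single atomic read-modify-write, so no lock
 * (and no IRQ masking) is needed. */
static inline void sched_stat_account(uint64_t irq, uint64_t hyp)
{
    atomic_fetch_add_explicit(&sched_stat_irq_time, irq,
                              memory_order_relaxed);
    atomic_fetch_add_explicit(&sched_stat_hyp_time, hyp,
                              memory_order_relaxed);
}

/* Readers (e.g. a sysctl handler) get a full, untorn 64-bit value. */
static inline uint64_t sched_stat_irq_total(void)
{
    return atomic_load_explicit(&sched_stat_irq_time, memory_order_relaxed);
}

static inline uint64_t sched_stat_hyp_total(void)
{
    return atomic_load_explicit(&sched_stat_hyp_time, memory_order_relaxed);
}
```

The trade-off, as discussed below, is that every update is an atomic RMW on a shared cacheline, which bounces between CPUs.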
>>>>>
>>>>> Or we could use percpu variables and add the cpu values up when
>>>>> fetching the values.
>>>>>
>>>> Yes, either percpu or atomic looks much better than locking, to me,
>>>> for this.
>>>
>>> Looks like we are going to have atomic64_t after all. So, I'd prefer
>>> to use atomics there.
>>
>> Performance would be better using percpu variables, as those would
>> avoid the cacheline being moved between cpus a lot.
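A per-CPU layout along these lines could look like the sketch below. This is a plain C illustration, not Xen's DEFINE_PER_CPU machinery; NR_CPUS, the cacheline size, and the helper names are all made up for the example.

```c
#include <stdint.h>

#define NR_CPUS   4    /* illustrative; Xen sizes this at build time */
#define CACHELINE 64   /* typical cacheline size, assumed for the sketch */

/* One set of counters per CPU, padded and aligned so that updates on
 * different CPUs never touch the same cacheline -- this is the property
 * that avoids the cacheline bouncing mentioned above. */
struct percpu_stat {
    uint64_t irq_time;
    uint64_t hyp_time;
    uint8_t  pad[CACHELINE - 2 * sizeof(uint64_t)];
} __attribute__((aligned(CACHELINE)));

static struct percpu_stat stats[NR_CPUS];

/* Hot path: a plain, local-only update; no lock, no atomic RMW, because
 * only this CPU ever writes its own slot. */
static inline void stat_add(unsigned int cpu, uint64_t irq, uint64_t hyp)
{
    stats[cpu].irq_time += irq;
    stats[cpu].hyp_time += hyp;
}

/* Cold path (e.g. when xentop fetches the statistics): sum the per-CPU
 * values.  Each read is of a single aligned 64-bit value. */
static uint64_t stat_irq_total(void)
{
    uint64_t total = 0;

    for ( unsigned int cpu = 0; cpu < NR_CPUS; cpu++ )
        total += stats[cpu].irq_time;

    return total;
}
```

The question raised next in the thread is exactly about the cold path: whether the reader can safely read another CPU's counter while it is being updated.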
>
> I see. But don't we need locking in this case? I can see a scenario
> where one pCPU updates its own counters while another pCPU is reading them.
>
> IIRC, ARMv8 guarantees that a 64-bit read of aligned data would be
> consistent. "Consistent" in the sense that, for example, we would not
> see the lower 32 bits of the new value and the upper 32 bits of the old value.
That's right. Although this would be valid so long as you use
{read, write}_atomic().
>
> I can't say for sure about ARMv7 or about x86.
ARMv7 with LPAE support will guarantee 64-bit atomicity when using
strd/ldrd as long as the alignment is correct. LPAE is mandatory when
supporting HYP mode, so you can safely assume this will work.
64-bit on x86 is also guaranteed to be atomic when using write_atomic().
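A rough illustration of what this boils down to: a single, full-width volatile access, which the compiler may not split or repeat, and which on the architectures above is single-copy atomic for aligned 64-bit data. The macro names below are invented for the sketch; Xen's real {read,write}_atomic() helpers also handle other access sizes.

```c
#include <stdint.h>

/* Illustrative stand-ins for Xen's {read,write}_atomic(): one volatile
 * access of the full width.  On ARMv8, ARMv7+LPAE and x86-64, an aligned
 * 64-bit load/store is single-copy atomic, so a concurrent reader can
 * never observe a torn value (half old, half new). */
#define READ_ONCE_U64(x)      (*(volatile const uint64_t *)&(x))
#define WRITE_ONCE_U64(x, v)  (*(volatile uint64_t *)&(x) = (v))

static uint64_t counter;  /* naturally aligned 64-bit counter */

/* Writer side: e.g. a pCPU updating its own per-cpu statistic. */
static inline void publish(uint64_t v)
{
    WRITE_ONCE_U64(counter, v);
}

/* Reader side: e.g. another pCPU summing the counters; sees either the
 * old or the new value in full, never a mix of the two. */
static inline uint64_t snapshot(void)
{
    return READ_ONCE_U64(counter);
}
```

Note this only gives tear-free single accesses; it provides no ordering between multiple counters, which is fine for statistics that are merely summed up.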
--
Julien Grall