From: Andrew Cooper <email@example.com>
To: Stefano Stabellini <firstname.lastname@example.org>,
Julien Grall <email@example.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
Stefano Stabellini <firstname.lastname@example.org>,
Julien Grall <email@example.com>,
Dario Faggioli <firstname.lastname@example.org>
Subject: Re: IRQ latency measurements in hypervisor
Date: Sat, 16 Jan 2021 12:59:48 +0000
Message-ID: <email@example.com>
On 15/01/2021 23:41, Stefano Stabellini wrote:
>>>>> This is very interesting too. Did you get any spikes with it
>>>>> set to 100us? It would be fantastic if there were none.
>>>>>> 3. Huge latency spike during domain creation. I conducted some
>>>>>> additional tests, including use of PV drivers, but this didn't
>>>>>> affect the latency in my "real time" domain. However, an attempt
>>>>>> to create another domain with a relatively large memory size of
>>>>>> 2GB led to a huge spike in latency. Debugging led to this call path:
>>>>>> XENMEM_populate_physmap -> populate_physmap() ->
>>>>>> alloc_domheap_pages() -> alloc_heap_pages()-> huge
>>>>>> "for ( i = 0; i < (1 << order); i++ )" loop.
>>>> There are two for loops in alloc_heap_pages() using this syntax. Which
>>>> one are you referring to?
>>> I did some tracing with Lauterbach. It pointed to the first loop, and
>>> especially to the flush_page_to_ram() call, if I remember correctly.
>> Thanks, I am not entirely surprised, because we are cleaning and
>> invalidating the region line by line and across all the CPUs.
>> If we are assuming 128-byte cachelines, we will need to issue 32 cache
>> instructions per page. This is going to involve quite a bit of traffic
>> on the interconnect.
> I think Julien is most likely right. It would be good to verify this
> with an experiment. For instance, you could remove the
> flush_page_to_ram() call for one test and see if you still see any
> latency spikes.
>> One possibility would be to defer the cache flush when the domain is created
>> and use the hypercall XEN_DOMCTL_cacheflush to issue the flush.
>> Note that XEN_DOMCTL_cacheflush would need some modification to be
>> preemptible. But at least, it will work on a GFN which is easier to track.
> This looks like a solid suggestion. XEN_DOMCTL_cacheflush is already
> used by the toolstack in a few places.
> I am also wondering if we can get away with fewer flush_page_to_ram()
> calls from alloc_heap_pages() for memory allocations done at boot time
> soon after global boot memory scrubbing.
I'm pretty sure there is room to improve Xen's behaviour in general, by
not scrubbing pages already known to be zero.
As far as I'm aware, there are improvements which never got completed
when lazy scrubbing was added, and I think it is giving us a hit on x86,
where we don't even have to do any cache maintenance on the side.
Thread overview: 21+ messages
2021-01-12 23:48 IRQ latency measurements in hypervisor Volodymyr Babchuk
2021-01-14 23:33 ` Stefano Stabellini
2021-01-15 11:42 ` Julien Grall
2021-01-15 15:45 ` Volodymyr Babchuk
2021-01-15 17:13 ` Julien Grall
2021-01-15 23:41 ` Stefano Stabellini
2021-01-16 12:59 ` Andrew Cooper [this message]
2021-01-20 23:09 ` Volodymyr Babchuk
2021-01-20 23:03 ` Volodymyr Babchuk
2021-01-21 0:52 ` Stefano Stabellini
2021-01-21 21:01 ` Julien Grall
2021-01-15 15:27 ` Volodymyr Babchuk
2021-01-15 23:17 ` Stefano Stabellini
2021-01-16 12:47 ` Julien Grall
2021-01-21 0:49 ` Volodymyr Babchuk
2021-01-21 0:59 ` Stefano Stabellini
2021-01-18 16:40 ` Dario Faggioli
2021-01-21 1:20 ` Volodymyr Babchuk
2021-01-21 8:39 ` Dario Faggioli
2021-01-16 14:40 ` Andrew Cooper
2021-01-21 2:39 ` Volodymyr Babchuk