From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [for-4.9] Re: HVM guest performance regression
Date: Tue, 30 May 2017 16:57:29 +0200	[thread overview]
Message-ID: <41b3b844-9ab6-be69-b7ea-df9073ecba26@suse.com> (raw)
In-Reply-To: <592D68DC020000780015D919@suse.com>

On 30/05/17 12:43, Jan Beulich wrote:
>>>> On 30.05.17 at 12:33, <jgross@suse.com> wrote:
>> On 30/05/17 09:24, Jan Beulich wrote:
>>>>>> On 29.05.17 at 21:05, <jgross@suse.com> wrote:
>>>> Creating the domains with
>>>>
>>>> xl -vvv create ...
>>>>
>>>> showed the numbers of superpages and normal pages allocated for the
>>>> domain.
>>>>
>>>> The following allocation pattern resulted in a slow domain:
>>>>
>>>> xc: detail: PHYSICAL MEMORY ALLOCATION:
>>>> xc: detail:   4KB PAGES: 0x0000000000000600
>>>> xc: detail:   2MB PAGES: 0x00000000000003f9
>>>> xc: detail:   1GB PAGES: 0x0000000000000000
>>>>
>>>> And this one was fast:
>>>>
>>>> xc: detail: PHYSICAL MEMORY ALLOCATION:
>>>> xc: detail:   4KB PAGES: 0x0000000000000400
>>>> xc: detail:   2MB PAGES: 0x00000000000003fa
>>>> xc: detail:   1GB PAGES: 0x0000000000000000
>>>>
>>>> I ballooned dom0 down in small steps to be able to create those
>>>> test cases.
>>>>
>>>> I believe the main reason is that some data needed by the benchmark
>>>> is located near the end of domain memory, resulting in a rather high
>>>> TLB miss rate when not all (or nearly all) of the memory is available
>>>> in the form of 2MB pages.
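
For reference, a quick check of the totals implied by the two reports
above (a rough sketch in Python, taking the hex counts as exact):

# Totals of the two "xl -vvv create" reports, in MiB.
def total_mib(pages_4k, pages_2m, pages_1g):
    return pages_4k * 4 // 1024 + pages_2m * 2 + pages_1g * 1024

slow = total_mib(0x600, 0x3f9, 0)   # 6 MiB in 4KB pages + 2034 MiB in 2MB pages
fast = total_mib(0x400, 0x3fa, 0)   # 4 MiB in 4KB pages + 2036 MiB in 2MB pages
print(slow, fast)                   # 2040 2040

Both domains get the same 2040 MiB total; the slow case merely has one
2MB superpage less, i.e. 2 MiB more of its memory backed by 4KB pages.
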
>>>
>>> Did you double check this by creating some other (persistent)
>>> process prior to running your benchmark? I find it rather
>>> unlikely that you would consistently see space from the top of
>>> guest RAM allocated to your test, unless it consumes all RAM
>>> that's available at the time it runs (but then I'd consider it
>>> quite likely that the overhead of using the few smaller pages would
>>> be mostly hidden in the noise).
>>>
>>> Or are you suspecting some crucial kernel structures to live
>>> there?
>>
>> Yes, I do. When onlining memory at boot time the kernel uses the new
>> memory chunk to add the page structures and, if needed, new kernel
>> page tables. It normally allocates that memory from the end of the
>> new chunk.
> 
> The page tables are 4k allocations, sure. But the page structures
> surely would be allocated with higher granularity?

I'm really not sure. It might depend on the memory model (sparse,
sparse vmemmap, flat).
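
For scale, a rough estimate of how big those page structures are for a
guest of this size (assuming the usual 64 bytes per struct page on
x86_64; I haven't verified that value for these particular kernels):

# Approximate memory taken by the page structures of a ~2040 MiB guest.
STRUCT_PAGE_BYTES = 64                      # assumed x86_64 value
n_pages = 2040 * 1024 * 1024 // 4096        # ~2040 MiB of 4KB frames
print(n_pages * STRUCT_PAGE_BYTES / 2**20)  # ~31.9 MiB

So we are talking about roughly 32 MiB of data; where exactly it ends
up, and with what mapping granularity, is what would depend on the
memory model.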

>>>>>> What makes the whole problem even more mysterious is that the
>>>>>> regression was first detected with SLE12 SP3 (guest and dom0, Xen 4.9
>>>>>> and Linux 4.4) against older systems (guest and dom0). While trying
>>>>>> to find out whether the guest or the Xen version is the culprit I
>>>>>> found that the old guest (based on kernel 3.12) showed the mentioned
>>>>>> performance drop with the above commit. The new guest (based on kernel
>>>>>> 4.4) shows the same bad performance regardless of the Xen version or
>>>>>> amount of free memory. I haven't yet found the Linux kernel commit
>>>>>> responsible for that performance drop.
>>>>
>>>> And this might be a result of the different memory usage of more recent
>>>> kernels: I suspect the critical data is now at the very end of the
>>>> domain's memory. As there are always some pages allocated in 4kB
>>>> chunks, the last pages of the domain will never be part of a 2MB page.
>>>
>>> But if the OS allocated large pages internally for relevant data
>>> structures, those obviously won't come from that necessarily 4k-
>>> mapped tail range.
>>
>> Sure? I think the kernel uses 1GB pages if possible for the direct
>> kernel mapping of physical memory. It doesn't care about the last
>> page mapping some unpopulated space.
> 
> Are you sure? I would very much hope for Linux to not establish
> mappings to addresses where no memory (and no MMIO) resides.
> But I can't tell for sure for recent Linux versions; I do know in the
> old days they were quite careful there.

Looking at phys_pud_init(), it happily uses 1GB pages until all memory
is mapped.
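
As a rough illustration of what that means for a guest of the size used
here (taking the ~2040 MiB total from the allocation reports above and
ignoring holes in the guest-physical layout):

# Coverage of a ~2040 MiB guest by 1GB direct-map entries.
PUD_MIB = 1024                              # one 1GB mapping, in MiB
guest_mib = 2040
entries = -(-guest_mib // PUD_MIB)          # ceiling division -> 2 entries
print(entries * PUD_MIB - guest_mib)        # 8 MiB mapped beyond populated RAM

The second 1GB entry reaches up to 2048 MiB although only 2040 MiB are
populated, which is exactly the "doesn't care about the tail" behaviour
mentioned above.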


Juergen


Thread overview: 27+ messages
2017-05-26 16:14 HVM guest performance regression Juergen Gross
2017-05-26 16:19 ` [for-4.9] " Ian Jackson
2017-05-26 17:00   ` Juergen Gross
2017-05-26 19:01     ` Stefano Stabellini
2017-05-29 19:05       ` Juergen Gross
2017-05-30  7:24         ` Jan Beulich
     [not found]         ` <592D3A3A020000780015D787@suse.com>
2017-05-30 10:33           ` Juergen Gross
2017-05-30 10:43             ` Jan Beulich
     [not found]             ` <592D68DC020000780015D919@suse.com>
2017-05-30 14:57               ` Juergen Gross [this message]
2017-05-30 15:10                 ` Jan Beulich
2017-06-06 13:44       ` Juergen Gross
2017-06-06 16:39         ` Stefano Stabellini
2017-06-06 19:00           ` Juergen Gross
2017-06-06 19:08             ` Stefano Stabellini
2017-06-07  6:55               ` Juergen Gross
2017-06-07 18:19                 ` Stefano Stabellini
2017-06-08  9:37                   ` Juergen Gross
2017-06-08 18:09                     ` Stefano Stabellini
2017-06-08 18:28                       ` Juergen Gross
2017-06-08 21:00                     ` Dario Faggioli
2017-06-11  2:27                       ` Konrad Rzeszutek Wilk
2017-06-12  5:48                       ` Solved: " Juergen Gross
2017-06-12  7:35                         ` Andrew Cooper
2017-06-12  7:47                           ` Juergen Gross
2017-06-12  8:30                             ` Andrew Cooper
2017-05-26 17:04 ` Dario Faggioli
2017-05-26 17:25   ` Juergen Gross
