From: Juergen Gross <jgross@suse.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>
Subject: Solved: HVM guest performance regression
Date: Mon, 12 Jun 2017 07:48:58 +0200	[thread overview]
Message-ID: <dd74f0cf-137f-9577-8bd1-7fe4c4d9a90a@suse.com> (raw)
In-Reply-To: <1496955634.26212.6.camel@citrix.com>

On 08/06/17 23:00, Dario Faggioli wrote:
> Bringing in Konrad because...
> 
> On Thu, 2017-06-08 at 11:37 +0200, Juergen Gross wrote:
>> On 07/06/17 20:19, Stefano Stabellini wrote:
>>> On Wed, 7 Jun 2017, Juergen Gross wrote:
>>>> On 06/06/17 21:08, Stefano Stabellini wrote:
>>>>>
>>>>> 2) PV suspend/resume
>>>>> 3) vector callback
>>>>> 4) interrupt remapping
>>>>>
>>>>> 2) is not on the hot path.
>>>>> I did individual measurements of 3) at some points and it was a
>>>>> clear win.
>>>>
>>>> That might depend on the hardware. Could it be newer processors
>>>> are
>>>> faster here?
>>>
>>> I don't think so: the alternative is an emulated interrupt. It's
>>> slower from all points of view.
>>
>> What about APIC virtualization of modern processors? Are you sure
>> e.g.
>> timer interrupts aren't handled completely by the processor? I guess
>> this might be faster than letting them be handled by the hypervisor
>> and then using the callback into the guest.
>>
> ... I kind of remember an email exchange we had, not here on the list,
> but in private, about some apparently weird scheduling behavior you
> were seeing, there at Oracle, on a particular benchmark/customer's
> workload.
> 
> Not that this is directly related, but I seem to also recall that you
> managed to find out that some of the perf difference (between baremetal
> and guest) was due to vAPIC being faster than the PV path we were
> taking? What I don't recall, though, is whether your guest was PV or
> (PV)HVM... Do you remember anything more precisely than this?

I now tweaked the kernel to use the LAPIC timer instead of the pv one.

While it is very slightly faster (<1%), this doesn't seem to be the
reason for the performance drop.

Using xentrace I've verified that no additional hypercalls or other
VMEXITs are occurring which would explain what is happening (I'm
seeing the timer being set and the related timer interrupt 250 times
a second, which is expected).
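As a rough illustration of that sanity check (a simplified model only: real xentrace output is binary and needs post-processing, and the event names and log format here are invented):

```python
from collections import Counter

def events_per_second(log, name):
    """Count occurrences of event `name` per whole second of the log.
    `log` is a list of (timestamp_ms, event_name) tuples."""
    counts = Counter()
    for ts_ms, event in log:
        if event == name:
            counts[ts_ms // 1000] += 1
    return counts

# Synthetic two-second log: the timer is set and fires every 4 ms (250 Hz).
log = []
for t_ms in range(0, 2000, 4):
    log.append((t_ms, "set_timer"))
    log.append((t_ms, "timer_irq"))

rates = events_per_second(log, "timer_irq")
# rates[0] == rates[1] == 250, matching the expected 250 Hz tick
```

With a real trace, any rate well above the configured tick frequency would point at extra VMEXITs; here the counts match the expectation exactly.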

Using ftrace in the kernel I can see all functions being called on
the munmap path. Nothing worrying and no weird differences between the
pv and the non-pv test.

What is interesting is that the time for the pv test isn't lost at one
or two specific points, but all over the test. All functions seem to run
just a little bit slower than in the non-pv case.

So I concluded it might be TLB related. The main difference between
using pv interfaces or not is the mapping of the shared info page into
the guest. The guest physical page for the shared info page is allocated
rather early via extend_brk(). Mapping the shared info page into the
guest requires that specific page to be mapped via a 4kB EPT entry,
which breaks up the covering 2MB entry. So at least most of the other
data allocated via extend_brk() in the kernel will be hit by this
large-page break-up. The main other data allocated this way are the
early page tables, which are essential for nearly all virtual addresses
of the kernel address space.
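The break-up can be shown with simple address arithmetic (a sketch of the mechanism, not hypervisor code; the example gfn is an arbitrary illustrative value): a 2MB EPT entry covers 512 contiguous 4kB guest frames, so mapping a single frame individually shatters the mapping for all 512 frames of its superpage.

```python
PAGE_SHIFT = 12          # 4 kB pages
SUPERPAGE_SHIFT = 21     # 2 MB superpages
FRAMES_PER_SUPERPAGE = 1 << (SUPERPAGE_SHIFT - PAGE_SHIFT)  # 512

def superpage_of(gfn):
    """Index of the 2 MB superpage containing a guest frame number."""
    return gfn >> (SUPERPAGE_SHIFT - PAGE_SHIFT)

def split_range(gfn):
    """Range of gfns that lose their 2 MB mapping when `gfn` has to be
    mapped via its own 4 kB EPT entry (the whole superpage is split)."""
    first = superpage_of(gfn) << (SUPERPAGE_SHIFT - PAGE_SHIFT)
    return range(first, first + FRAMES_PER_SUPERPAGE)

# Hypothetical shared-info frame placed by extend_brk() shortly after
# the kernel image: splitting it drags 511 neighbouring frames along.
shared_info_gfn = 0x1234
affected = split_range(shared_info_gfn)
```

Everything else that happens to sit in the same 2MB region, here notably the early page tables, is then translated through 4kB EPT entries, raising TLB pressure across the board.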

Instead of using extend_brk() I tried allocating the shared info
pfn from the first MB of memory, as this area is already mapped via
4kB EPT entries. And indeed: this change did speed up the munmap test
even when using pv interfaces in the guest.

I'll send a proper patch for the kernel after doing some more testing.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
