From: "Raslan, KarimAllah" <karahmed@amazon.de>
To: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"x86@kernel.org" <x86@kernel.org>
Cc: "hpa@zytor.com" <hpa@zytor.com>,
"jmattson@google.com" <jmattson@google.com>,
"rkrcmar@redhat.com" <rkrcmar@redhat.com>,
"tglx@linutronix.de" <tglx@linutronix.de>,
"mingo@redhat.com" <mingo@redhat.com>
Subject: Re: [PATCH 00/10] KVM/X86: Handle guest memory that does not have a struct page
Date: Thu, 12 Apr 2018 21:25:58 +0000 [thread overview]
Message-ID: <1523568357.32594.42.camel@amazon.de> (raw)
In-Reply-To: <cedf2ad3-43ef-c2ec-0455-4dc27842f71a@redhat.com>
On Thu, 2018-04-12 at 16:59 +0200, Paolo Bonzini wrote:
> On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> >
> > For the most part, KVM can handle guest memory that does not have a struct
> > page (i.e. memory not directly managed by the kernel). However, there are a
> > few places in the code, especially in the nested code, that do not support that.
> >
> > Patches 1, 2, and 3 avoid the mapping and unmapping altogether and just
> > use kvm_read_guest and kvm_write_guest directly.
> >
> > Patch 4 introduces a new guest mapping interface that encapsulates all the
> > boilerplate code needed to map and unmap guest memory. It also
> > supports guest memory without a "struct page".
> >
> > Patches 5 through 10 switch most of the offending code in VMX and hyperv
> > to use the new guest mapping API.
> >
> > This patch series is the first set of fixes; SVM and the APIC-access page
> > will be handled in a separate patch series.
>
> I like the patches and the new API. However, I'm a bit less convinced
> about the caching aspect; keeping a page pinned is not the nicest thing
> with respect (for example) to memory hot-unplug.
>
> Since you're basically reinventing kmap_high, or alternatively
> (depending on your background) xc_map_foreign_pages, it's not surprising
> that memremap is slow. How slow is it really (as seen e.g. with
> vmexit.flat running in L1, on EC2 compared to vanilla KVM)?
I have not compared EC2 against vanilla KVM yet, but I did compare the
cached version against the uncached one (both in the EC2 setup). The one
that cached the mappings was an order of magnitude better: booting an
Ubuntu L2 guest with QEMU took around 10-13 seconds with the caching and
over 5 minutes without it.
I will test with vanilla KVM and post the results.
>
> Perhaps you can keep some kind of per-CPU cache of the last N remapped
> pfns? This cache would sit between memremap and __kvm_map_gfn and it
> would be completely transparent to the layer below since it takes raw
> pfns. This removes the need to store the memslots generation etc. (If
> you go this way please place it in virt/kvm/pfncache.[ch], since
> kvm_main.c is already way too big).
Yup, that sounds like a good idea. I have actually already implemented
some sort of a per-CPU mapping pool to reduce the overhead when the
vCPUs are over-committed. I will clean it up and post it as you
suggested.
>
> Thanks,
>
> Paolo
>
> >
> > KarimAllah Ahmed (10):
> > X86/nVMX: handle_vmon: Read 4 bytes from guest memory instead of
> > map->read->unmap sequence
> > X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory
> > instead of map->copy->unmap sequence.
> > X86/nVMX: Update the PML table without mapping and unmapping the page
> > KVM: Introduce a new guest mapping API
> > KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
> > KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
> > KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt
> > descriptor table
> > KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
> > KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending
> > KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg
> >
> > arch/x86/kvm/hyperv.c | 28 ++++-----
> > arch/x86/kvm/vmx.c | 144 +++++++++++++++--------------------------------
> > arch/x86/kvm/x86.c | 13 ++---
> > include/linux/kvm_host.h | 15 +++++
> > virt/kvm/kvm_main.c | 50 ++++++++++++++++
> > 5 files changed, 129 insertions(+), 121 deletions(-)
> >
>
>
Amazon Development Center Germany GmbH
Berlin - Dresden - Aachen
main office: Krausenstr. 38, 10117 Berlin
Geschaeftsfuehrer: Dr. Ralf Herbrich, Christian Schlaeger
Ust-ID: DE289237879
Eingetragen am Amtsgericht Charlottenburg HRB 149173 B