From: Steve Rutherford <srutherford@google.com>
To: Ashish Kalra <ashish.kalra@amd.com>
Cc: Sean Christopherson <seanjc@google.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"tglx@linutronix.de" <tglx@linutronix.de>,
"mingo@redhat.com" <mingo@redhat.com>,
"hpa@zytor.com" <hpa@zytor.com>,
"joro@8bytes.org" <joro@8bytes.org>, "bp@suse.de" <bp@suse.de>,
"Lendacky, Thomas" <Thomas.Lendacky@amd.com>,
"x86@kernel.org" <x86@kernel.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"venu.busireddy@oracle.com" <venu.busireddy@oracle.com>,
"Singh, Brijesh" <brijesh.singh@amd.com>
Subject: Re: [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl
Date: Thu, 25 Feb 2021 15:24:41 -0800 [thread overview]
Message-ID: <CABayD+cVKjYrNq84Ak2rJ0W696+BhhAYkEpJ+SwsK5DvNYSGzw@mail.gmail.com> (raw)
In-Reply-To: <CABayD+cn5e3PR6NtSWLeM_qxs6hKWtjEx=aeKpy=WC2dzPdRLw@mail.gmail.com>
On Thu, Feb 25, 2021 at 2:59 PM Steve Rutherford <srutherford@google.com> wrote:
>
> On Thu, Feb 25, 2021 at 12:20 PM Ashish Kalra <ashish.kalra@amd.com> wrote:
> >
> > On Wed, Feb 24, 2021 at 10:22:33AM -0800, Sean Christopherson wrote:
> > > On Wed, Feb 24, 2021, Ashish Kalra wrote:
> > > > # Samples: 19K of event 'kvm:kvm_hypercall'
> > > > # Event count (approx.): 19573
> > > > #
> > > > # Overhead Command Shared Object Symbol
> > > > # ........ ............... ................ .........................
> > > > #
> > > > 100.00% qemu-system-x86 [kernel.vmlinux] [k] kvm_emulate_hypercall
> > > >
> > > > Out of these 19573 hypercalls, the number of page encryption status hcalls
> > > > is 19479, so almost all hypercalls here are page encryption status hypercalls.
> > >
> > > Oof.
> > >
> > > > The above data indicates that there will be ~2% more heavyweight VMEXITs
> > > > during SEV guest boot if we do page encryption status hypercall
> > > > pass-through to host userspace.
> > > >
> > > > But then Brijesh pointed out that OVMF is currently doing a lot of
> > > > VMEXITs because it doesn't use the DMA pool to minimize the C-bit toggles;
> > > > in other words, the OVMF bounce buffer does a page state change on every
> > > > DMA allocate and free.
> > > >
> > > > So here is the performance analysis after the kernel and initrd have
> > > > been loaded into memory using grub, with perf attached just before
> > > > booting the kernel:
> > > >
> > > > # Samples: 1M of event 'kvm:kvm_userspace_exit'
> > > > # Event count (approx.): 1081235
> > > > #
> > > > # Overhead Trace output
> > > > # ........ ........................
> > > > #
> > > > 99.77% reason KVM_EXIT_IO (2)
> > > > 0.23% reason KVM_EXIT_MMIO (6)
> > > >
> > > > # Samples: 1K of event 'kvm:kvm_hypercall'
> > > > # Event count (approx.): 1279
> > > > #
> > > >
> > > > So as the above data indicates, Linux is only making ~1K hypercalls,
> > > > compared to ~18K hypercalls made by OVMF in the above use case.
> > > >
> > > > Does the above add a prerequisite that OVMF needs to be optimized
> > > > before hypercall pass-through can be done?
> > >
> > > Disclaimer: my math could be totally wrong.
> > >
> > > I doubt it's a hard requirement. Assuming a conservative roundtrip time of 50k
> > > cycles, those 18K hypercalls will add well under half a second of boot time.
> > > If userspace can push the roundtrip time down to 10k cycles, the overhead is
> > > more like 50 milliseconds.
> > >
> > > That being said, this does seem like a good OVMF cleanup, irrespective of this
> > > new hypercall. I assume it's not cheap to convert a page between encrypted and
> > > decrypted.
> > >
> > > Thanks much for getting the numbers!
> >
> > Considering the above data and guest boot time latencies
> > (and potential issues with OVMF and optimizations required there),
> > do we have any consensus on whether we want to do page encryption
> > status hypercall passthrough or not?
> >
> > Thanks,
> > Ashish
>
> Thanks for grabbing the data!
>
> I am fine with both paths. Sean has stated an explicit desire for
> hypercall exiting, so I think that would be the current consensus.
>
> If we want to do hypercall exiting, this should be in a follow-up
> series where we implement something more generic, e.g. a hypercall
> exiting bitmap or hypercall exit list. If we are taking the hypercall
> exit route, we can drop the kvm side of the hypercall. Userspace could
> also handle the MSR using MSR filters (would need to confirm that).
> Then userspace could also be in control of the cpuid bit.
>
> Essentially, I think you could drop most of the host kernel work if
> there were generic support for hypercall exiting. Then userspace would
> be responsible for all of that. Thoughts on this?
>
> --Steve
This could even go a step further and use an MSR write from within
the guest instead of a hypercall, which could be passed through to
userspace without host kernel modification, if I understand the MSR
filtering correctly.
--Steve