linux-kernel.vger.kernel.org archive mirror
From: Ashish Kalra <ashish.kalra@amd.com>
To: Sean Christopherson <seanjc@google.com>
Cc: "pbonzini@redhat.com" <pbonzini@redhat.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"rkrcmar@redhat.com" <rkrcmar@redhat.com>,
	"joro@8bytes.org" <joro@8bytes.org>, "bp@suse.de" <bp@suse.de>,
	"Lendacky, Thomas" <Thomas.Lendacky@amd.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"srutherford@google.com" <srutherford@google.com>,
	"venu.busireddy@oracle.com" <venu.busireddy@oracle.com>,
	"Singh, Brijesh" <brijesh.singh@amd.com>
Subject: Re: [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl
Date: Wed, 24 Feb 2021 17:51:22 +0000	[thread overview]
Message-ID: <20210224175122.GA19661@ashkalra_ubuntu_server> (raw)
In-Reply-To: <SN6PR12MB27672FF8358D122EDD8CC0188E859@SN6PR12MB2767.namprd12.prod.outlook.com>

On Thu, Feb 18, 2021 at 12:32:47PM -0600, Kalra, Ashish wrote:
> [AMD Public Use]
> 
> 
> -----Original Message-----
> From: Sean Christopherson <seanjc@google.com> 
> Sent: Tuesday, February 16, 2021 7:03 PM
> To: Kalra, Ashish <Ashish.Kalra@amd.com>
> Cc: pbonzini@redhat.com; tglx@linutronix.de; mingo@redhat.com; hpa@zytor.com; rkrcmar@redhat.com; joro@8bytes.org; bp@suse.de; Lendacky, Thomas <Thomas.Lendacky@amd.com>; x86@kernel.org; kvm@vger.kernel.org; linux-kernel@vger.kernel.org; srutherford@google.com; venu.busireddy@oracle.com; Singh, Brijesh <brijesh.singh@amd.com>
> Subject: Re: [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl
> 
> On Thu, Feb 04, 2021, Ashish Kalra wrote:
> > From: Brijesh Singh <brijesh.singh@amd.com>
> > 
> > The ioctl is used to retrieve a guest's shared pages list.
> 
> >What's the performance hit to boot time if KVM_HC_PAGE_ENC_STATUS is passed through to userspace?  That way, userspace could manage the set of pages >in whatever data structure they want, and these get/set ioctls go away.
> 
> I would be more concerned about the performance hit during guest DMA I/O if the page encryption status hypercalls are passed through to user-space.
> Guest DMA I/O dynamically sets up pages for encryption and then flips them back at DMA completion, so guest I/O will certainly take a performance
> hit with this pass-through approach.
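To illustrate the flow being described, here is a minimal user-space model of what the guest does around each DMA transaction: flip a page to shared on map, flip it back to encrypted on unmap, with each flip costing one KVM_HC_PAGE_ENC_STATUS hypercall. The function names and the bitmap are illustrative, not the actual kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model: one bit per guest page, set = shared (C-bit clear). */
static uint64_t shared_bitmap;
static unsigned long hypercall_count;

/* Stand-in for the KVM_HC_PAGE_ENC_STATUS hypercall: record the new
 * encryption status of a page and count the exit it would cause. */
static void page_enc_status_hc(unsigned int gfn, int enc)
{
	if (enc)
		shared_bitmap &= ~(1ULL << gfn);	/* back to encrypted */
	else
		shared_bitmap |= (1ULL << gfn);		/* shared for DMA */
	hypercall_count++;
}

/* Guest DMA setup: the bounce page must be shared with the host. */
static void dma_map(unsigned int gfn)
{
	page_enc_status_hc(gfn, 0);
}

/* Guest DMA completion: flip the page back to encrypted. */
static void dma_unmap(unsigned int gfn)
{
	page_enc_status_hc(gfn, 1);
}
```

Every map/unmap pair costs two hypercalls, which is why turning each one into a heavy-weight exit to host userspace hits I/O-bound workloads hardest.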
> 

Here are some rough performance numbers comparing the number of heavy-weight VMEXITs
to the number of hypercalls during a SEV guest boot (launch of an Ubuntu 18.04 guest):

# ./perf record -e kvm:kvm_userspace_exit -e kvm:kvm_hypercall -a ./qemu-system-x86_64 -enable-kvm -cpu host -machine q35 -smp 16,maxcpus=64 -m 512M -drive if=pflash,format=raw,unit=0,file=/home/ashish/sev-migration/qemu-5.1.50/OVMF_CODE.fd,readonly -drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd -drive file=../ubuntu-18.04.qcow2,if=none,id=disk0,format=qcow2 -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -device scsi-hd,drive=disk0 -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1,policy=0x0 -machine memory-encryption=sev0 -trace events=/tmp/events -nographic -monitor pty -monitor unix:monitor-source,server,nowait -qmp unix:/tmp/qmp-sock,server,nowait -device virtio-rng-pci,disable-legacy=on,iommu_platform=true

...
...

root@diesel2540:/home/ashish/sev-migration/qemu-5.1.50# ./perf report
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 981K of event 'kvm:kvm_userspace_exit'
# Event count (approx.): 981021
#
# Overhead  Command          Shared Object     Symbol
# ........  ...............  ................  ..................
#
   100.00%  qemu-system-x86  [kernel.vmlinux]  [k] kvm_vcpu_ioctl


# Samples: 19K of event 'kvm:kvm_hypercall'
# Event count (approx.): 19573
#
# Overhead  Command          Shared Object     Symbol
# ........  ...............  ................  .........................
#
   100.00%  qemu-system-x86  [kernel.vmlinux]  [k] kvm_emulate_hypercall

Out of these 19573 hypercalls, 19479 are page encryption status hcalls,
so almost all of the hypercalls here are page encryption status hypercalls.

The above data indicates that there will be ~2% more heavy-weight VMEXITs
during SEV guest boot if the page encryption status hypercalls are passed
through to host userspace.
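The ~2% figure follows directly from the two sample counts above; a quick sketch of the arithmetic (constants taken from the perf report, percentage rounded):

```c
#include <assert.h>

/* Counts taken from the perf report above. */
#define USERSPACE_EXITS		981021.0
#define ENC_STATUS_HCALLS	 19479.0

/* Fractional increase in heavy-weight exits if every page-encryption-
 * status hypercall also bounced out to host userspace. */
static double extra_exit_pct(double exits, double hcalls)
{
	return hcalls / exits * 100.0;
}
```

19479 / 981021 works out to roughly 1.99%, i.e. ~2% more userspace exits.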

But then Brijesh pointed out that OVMF currently causes a lot of these VMEXITs
because it does not use a DMA pool to minimize the C-bit toggles; in other words,
the OVMF bounce buffer does a page state change on every DMA allocate and free.

So here is the performance analysis after the kernel and initrd have been
loaded into memory using grub, with perf attached just before the kernel boots:

# Samples: 1M of event 'kvm:kvm_userspace_exit'
# Event count (approx.): 1081235
#
# Overhead  Trace output
# ........  ........................
#
    99.77%  reason KVM_EXIT_IO (2)
     0.23%  reason KVM_EXIT_MMIO (6)

# Samples: 1K of event 'kvm:kvm_hypercall'
# Event count (approx.): 1279
#

So as the above data indicates, Linux itself makes only ~1K hypercalls,
compared to the ~18K hypercalls made by OVMF in the above use case.
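The ~18K attribution is a rough subtraction across the two runs (they are separate boots, so the counts are not exactly comparable), but it shows where the traffic comes from:

```c
#include <assert.h>

/* Counts from the two perf runs above: full boot vs. kernel-only. */
#define TOTAL_HCALLS	19573
#define KERNEL_HCALLS	 1279

/* Rough attribution: hypercalls not made by the Linux kernel phase
 * are attributed to the OVMF/firmware phase of boot. */
static int ovmf_hcalls(void)
{
	return TOTAL_HCALLS - KERNEL_HCALLS;	/* ~18K from OVMF */
}
```

So on this estimate, well over 90% of the page encryption status hypercalls during boot come from OVMF's per-DMA page state changes, not from Linux.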

Does the above add a prerequisite that OVMF needs to be optimized
before hypercall pass-through can be done?

Thanks,
Ashish

Thread overview: 71+ messages
2021-02-04  0:35 [PATCH v10 00/17] Add AMD SEV guest live migration support Ashish Kalra
2021-02-04  0:36 ` [PATCH v10 01/16] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
2021-02-04  0:36 ` [PATCH v10 02/16] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
2021-02-04  0:37 ` [PATCH v10 03/16] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
2021-02-04  0:37 ` [PATCH v10 04/16] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
2021-02-04  0:37 ` [PATCH v10 05/16] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
2021-02-04  0:37 ` [PATCH v10 06/16] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
2021-02-04  0:38 ` [PATCH v10 07/16] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
2021-02-04  0:38 ` [PATCH v10 08/16] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
2021-02-04 16:03   ` Tom Lendacky
2021-02-05  1:44   ` Steve Rutherford
2021-02-05  3:32     ` Ashish Kalra
2021-02-04  0:39 ` [PATCH v10 09/16] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
2021-02-04  0:39 ` [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl Ashish Kalra
2021-02-04 16:14   ` Tom Lendacky
2021-02-04 16:34     ` Ashish Kalra
2021-02-17  1:03   ` Sean Christopherson
2021-02-17 14:00     ` Kalra, Ashish
2021-02-17 16:13       ` Sean Christopherson
2021-02-18  6:48         ` Kalra, Ashish
2021-02-18 16:39           ` Sean Christopherson
2021-02-18 17:05             ` Kalra, Ashish
2021-02-18 17:50               ` Sean Christopherson
2021-02-18 18:32     ` Kalra, Ashish
2021-02-24 17:51       ` Ashish Kalra [this message]
2021-02-24 18:22         ` Sean Christopherson
2021-02-25 20:20           ` Ashish Kalra
2021-02-25 22:59             ` Steve Rutherford
2021-02-25 23:24               ` Steve Rutherford
2021-02-26 14:04               ` Ashish Kalra
2021-02-26 17:44                 ` Sean Christopherson
2021-03-02 14:55                   ` Ashish Kalra
2021-03-02 15:15                     ` Ashish Kalra
2021-03-03 18:54                     ` Will Deacon
2021-03-03 19:32                       ` Ashish Kalra
2021-03-09 19:10                       ` Ashish Kalra
2021-03-11 18:14                       ` Ashish Kalra
2021-03-11 20:48                         ` Steve Rutherford
2021-03-19 17:59                           ` Ashish Kalra
2021-04-02  1:40                             ` Steve Rutherford
2021-04-02 11:09                               ` Ashish Kalra
2021-03-08 10:40                   ` Ashish Kalra
2021-03-08 19:51                     ` Sean Christopherson
2021-03-08 21:05                       ` Ashish Kalra
2021-03-08 21:11                       ` Brijesh Singh
2021-03-08 21:32                         ` Ashish Kalra
2021-03-08 21:51                         ` Steve Rutherford
2021-03-09 19:42                           ` Sean Christopherson
2021-03-10  3:42                           ` Kalra, Ashish
2021-03-10  3:47                             ` Steve Rutherford
2021-03-08 21:48                       ` Steve Rutherford
2021-02-17  1:06   ` Sean Christopherson
2021-02-04  0:39 ` [PATCH v10 11/16] KVM: x86: Introduce KVM_SET_SHARED_PAGES_LIST ioctl Ashish Kalra
2021-02-04  0:39 ` [PATCH v10 12/16] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
2021-02-05  0:56   ` Steve Rutherford
2021-02-05  3:07     ` Ashish Kalra
2021-02-06  2:54       ` Steve Rutherford
2021-02-06  4:49         ` Ashish Kalra
2021-02-06  5:46         ` Ashish Kalra
2021-02-06 13:56           ` Ashish Kalra
2021-02-08  0:28             ` Ashish Kalra
2021-02-08 22:50               ` Steve Rutherford
2021-02-10 20:36                 ` Ashish Kalra
2021-02-10 22:01                   ` Steve Rutherford
2021-02-10 22:05                     ` Steve Rutherford
2021-02-16 23:20   ` Sean Christopherson
2021-02-04  0:40 ` [PATCH v10 13/16] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
2021-02-04  0:40 ` [PATCH v10 14/16] KVM: x86: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
2021-02-18 17:56   ` Sean Christopherson
2021-02-04  0:40 ` [PATCH v10 15/16] KVM: x86: Add kexec support for SEV Live Migration Ashish Kalra
2021-02-04  0:40 ` [PATCH v10 16/16] KVM: SVM: Bypass DBG_DECRYPT API calls for unencrypted guest memory Ashish Kalra
