From: "Kalra, Ashish" <Ashish.Kalra@amd.com>
To: Steve Rutherford <srutherford@google.com>
Cc: Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
Joerg Roedel <joro@8bytes.org>, Borislav Petkov <bp@suse.de>,
"Lendacky, Thomas" <Thomas.Lendacky@amd.com>,
X86 ML <x86@kernel.org>, KVM list <kvm@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
"Singh, Brijesh" <brijesh.singh@amd.com>,
"dovmurik@linux.vnet.ibm.com" <dovmurik@linux.vnet.ibm.com>,
"tobin@ibm.com" <tobin@ibm.com>,
"jejb@linux.ibm.com" <jejb@linux.ibm.com>,
"frankeh@us.ibm.com" <frankeh@us.ibm.com>,
"dgilbert@redhat.com" <dgilbert@redhat.com>
Subject: Re: [PATCH v2 1/9] KVM: x86: Add AMD SEV specific Hypercall3
Date: Tue, 8 Dec 2020 05:18:39 +0000 [thread overview]
Message-ID: <373DF203-88D9-4501-AC0F-CB7D191050B1@amd.com> (raw)
>
>> I suspect a list
>> would consume far less memory, hopefully without impacting performance.
And how much host memory are we talking about here? For a 4GB guest, the bitmap will be using just something like 128K.
Thanks,
Ashish
> On Dec 7, 2020, at 10:16 PM, Kalra, Ashish <Ashish.Kalra@amd.com> wrote:
>
> I don’t think that the bitmap by itself is really a performance bottleneck here.
>
> Thanks,
> Ashish
>
>>> On Dec 7, 2020, at 9:10 PM, Steve Rutherford <srutherford@google.com> wrote:
>>> On Mon, Dec 7, 2020 at 12:42 PM Sean Christopherson <seanjc@google.com> wrote:
>>>> On Sun, Dec 06, 2020, Paolo Bonzini wrote:
>>>> On 03/12/20 01:34, Sean Christopherson wrote:
>>>>> On Tue, Dec 01, 2020, Ashish Kalra wrote:
>>>>>> From: Brijesh Singh <brijesh.singh@amd.com>
>>>>>> The KVM hypercall framework relies on the alternatives framework to
>>>>>> patch VMCALL -> VMMCALL on AMD platforms. If a hypercall is made
>>>>>> before apply_alternatives() is called, it defaults to VMCALL. That
>>>>>> works fine on non-SEV guests: a VMCALL causes a #UD, and the
>>>>>> hypervisor is able to decode the instruction and do the right thing.
>>>>>> But when SEV is active, guest memory is encrypted with the guest key,
>>>>>> and the hypervisor will not be able to decode the instruction bytes.
>>>>>> Add an SEV-specific hypercall3 that unconditionally uses VMMCALL. The
>>>>>> hypercall will be used by the SEV guest to notify the hypervisor of
>>>>>> encrypted pages.
>>>>> What if we invert KVM_HYPERCALL and X86_FEATURE_VMMCALL to default to VMMCALL
>>>>> and opt into VMCALL? It's a synthetic feature flag either way, and I don't
>>>>> think there are any existing KVM hypercalls that happen before alternatives are
>>>>> patched, i.e. it'll be a nop for sane kernel builds.
>>>>> I'm also skeptical that a KVM specific hypercall is the right approach for the
>>>>> encryption behavior, but I'll take that up in the patches later in the series.
>>>> Do you think that it's the guest that should "donate" memory for the bitmap
>>>> instead?
>>> No. Two things I'd like to explore:
>>> 1. Making the hypercall to announce/request private vs. shared common across
>>> hypervisors (KVM, Hyper-V, VMware, etc...) and technologies (SEV-* and TDX).
>>> I'm concerned that we'll end up with multiple hypercalls that do more or
>>> less the same thing, e.g. KVM+SEV, Hyper-V+SEV, TDX, etc... Maybe it's a
>>> pipe dream, but I'd like to at least explore options before shoving in KVM-
>>> only hypercalls.
>>> 2. Tracking shared memory via a list of ranges instead of using a bitmap to
>>> track all of guest memory. For most use cases, the vast majority of guest
>>> memory will be private, most ranges will be 2mb+, and conversions between
>>> private and shared will be uncommon events, i.e. the overhead to walk and
>>> split/merge list entries is hopefully not a big concern. I suspect a list
>>> would consume far less memory, hopefully without impacting performance.
>> For a fancier data structure, I'd suggest an interval tree. Linux
>> already has an rbtree-based interval tree implementation, which would
>> likely work, and would probably assuage any performance concerns.
>> Something like this would not be worth doing unless most of the shared
>> pages were physically contiguous. A sample Ubuntu 20.04 VM on GCP had
>> 60ish discontiguous shared regions. This is by no means a thorough
>> search, but it's suggestive. If this is typical, then the bitmap would
>> be far less efficient than most any interval-based data structure.
>> You'd have to allow userspace to upper bound the number of intervals
>> (similar to the maximum bitmap size), to prevent host OOMs due to
>> malicious guests. There's something nice about the guest donating
>> memory for this, since that would eliminate the OOM risk.