Subject: Re: [PATCH Part2 v6 35/49] KVM: SVM: Remove the long-lived GHCB host map
From: Peter Gonda
Date: Thu, 7 Jul 2022 14:06:59 -0600
To: "Kalra, Ashish"
Cc: the arch/x86 maintainers, LKML, kvm list, linux-coco@lists.linux.dev,
 linux-mm@kvack.org, Linux Crypto Mailing List, Thomas Gleixner,
 Ingo Molnar, Joerg Roedel, "Lendacky, Thomas", H. Peter Anvin,
 Ard Biesheuvel, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
 Jim Mattson, Andy Lutomirski, Dave Hansen, Sergio Lopez, Peter Zijlstra,
 Srinivas Pandruvada, David Rientjes, Dov Murik, Tobin Feldman-Fitzthum,
 Borislav Petkov, "Roth, Michael", Vlastimil Babka, Kirill A. Shutemov,
 Andi Kleen, Tony Luck, Marc Orr, Sathyanarayanan Kuppuswamy, Alper Gun,
 "Dr. David Alan Gilbert", jarkko@kernel.org
List-Id: linux-coco@lists.linux.dev

On Fri, Jun 24, 2022 at 2:14 PM Kalra, Ashish wrote:
>
> [AMD Official Use Only - General]
>
> Hello Peter,
>
> >> From: Brijesh Singh
> >>
> >> On VMGEXIT, sev_handle_vmgexit() creates a host mapping for the GHCB
> >> GPA, and unmaps it just before VM-entry. This long-lived GHCB map is
> >> used by the VMGEXIT handler through accessors such as ghcb_{set_get}_xxx().
> >>
> >> A long-lived GHCB map can cause issues when SEV-SNP is enabled. When
> >> SEV-SNP is enabled the mapped GPA needs to be protected against a page
> >> state change.
> >>
> >> To eliminate the long-lived GHCB mapping, update the GHCB sync
> >> operations to explicitly map the GHCB before access and unmap it after
> >> access is complete.
> >> This requires that the setting of the GHCB's
> >> sw_exit_info_{1,2} fields be done during sev_es_sync_to_ghcb(), so
> >> create two new fields in the vcpu_svm struct to hold these values when
> >> required to be set outside of the GHCB mapping.
> >>
> >> Signed-off-by: Brijesh Singh
> >> ---
> >>  arch/x86/kvm/svm/sev.c | 131 ++++++++++++++++++++++++++---------------
> >>  arch/x86/kvm/svm/svm.c |  12 ++--
> >>  arch/x86/kvm/svm/svm.h |  24 +++++++-
> >>  3 files changed, 111 insertions(+), 56 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> >> index 01ea257e17d6..c70f3f7e06a8 100644
> >> --- a/arch/x86/kvm/svm/sev.c
> >> +++ b/arch/x86/kvm/svm/sev.c
> >> @@ -2823,15 +2823,40 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
> >>         kvfree(svm->sev_es.ghcb_sa);
> >>  }
> >>
> >> +static inline int svm_map_ghcb(struct vcpu_svm *svm, struct kvm_host_map *map)
> >> +{
> >> +       struct vmcb_control_area *control = &svm->vmcb->control;
> >> +       u64 gfn = gpa_to_gfn(control->ghcb_gpa);
> >> +
> >> +       if (kvm_vcpu_map(&svm->vcpu, gfn, map)) {
> >> +               /* Unable to map GHCB from guest */
> >> +               pr_err("error mapping GHCB GFN [%#llx] from guest\n", gfn);
> >> +               return -EFAULT;
> >> +       }
> >> +
> >> +       return 0;
> >> +}
>
> >There is a perf cost to this suggestion, but it might make accessing the
> >GHCB safer for KVM. Have you thought about just using kvm_read_guest()
> >or copy_from_user() to fully copy out the GHCB into a KVM-owned buffer,
> >then copying it back before the VMRUN? That way KVM doesn't need to
> >guard against page state changes on the GHCBs; that could be a perf
> >improvement in a follow-up.
>
> Along with the performance costs you mentioned, the main concern here
> will be the GHCB write-back path (copying it back) before VMRUN: this
> will again hit the issue we currently have with kvm_write_guest() /
> copy_to_user(), when we use it to sync the scratch buffer back to GHCB.
> This can fail if guest RAM is mapped using huge page(s) while the RMP is
> 4K. Please refer to the patch/fix mentioned below; kvm_write_guest() can
> potentially fail before VMRUN in the case of SNP:
>
> commit 94ed878c2669532ebae8eb9b4503f19aa33cd7aa
> Author: Ashish Kalra
> Date:   Mon Jun 6 22:28:01 2022 +0000
>
>     KVM: SVM: Sync the GHCB scratch buffer using already mapped ghcb
>
>     Using kvm_write_guest() to sync the GHCB scratch buffer can fail
>     due to the host mapping being 2M, but the RMP being 4K. The page fault
>     handling in do_user_addr_fault() fails to split the 2M page to handle
>     the RMP fault due to it being called here in a non-preemptible
>     context. Instead use the already kernel-mapped ghcb to sync the
>     scratch buffer when the scratch buffer is contained within the GHCB.

Ah, I didn't see that issue; thanks for the pointer.

The patch description says "When SEV-SNP is enabled the mapped GPA needs
to be protected against a page state change." That is because, if the
guest were to convert the GHCB page to private while the host is using
the GHCB, the host could get an RMP violation, right? That RMP violation
would cause the host to crash unless we use some copy_to_user()-type
protections.

I don't see any mechanism in this patch that adds the page state change
protection discussed. Can't another vCPU still convert the GHCB to
private?

I was wrong about the importance of this, though: seanjc@ walked me
through how UPM will solve this issue, so no worries about this until
the series is rebased onto UPM.

> Thanks,
> Ashish
>
> >Since we cannot unmap GHCBs, I don't think UPM will help here, so we
> >probably want to make these patches safe against malicious guests making
> >GHCBs private. But maybe UPM does help?