From: Peter Gonda
Date: Thu, 7 Jul 2022 14:06:59 -0600
Subject: Re: [PATCH Part2 v6 35/49] KVM: SVM: Remove the long-lived GHCB host map
To: "Kalra, Ashish"
Cc: the arch/x86 maintainers, LKML, kvm list, linux-coco@lists.linux.dev,
 linux-mm@kvack.org, Linux Crypto Mailing List, Thomas Gleixner, Ingo Molnar,
 Joerg Roedel, "Lendacky, Thomas", "H. Peter Anvin", Ard Biesheuvel,
 Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Jim Mattson,
 Andy Lutomirski, Dave Hansen, Sergio Lopez, Peter Zijlstra,
 Srinivas Pandruvada, David Rientjes, Dov Murik, Tobin Feldman-Fitzthum,
 Borislav Petkov, "Roth, Michael", Vlastimil Babka, "Kirill A. Shutemov",
 Andi Kleen, Tony Luck, Marc Orr, Sathyanarayanan Kuppuswamy, Alper Gun,
 "Dr. David Alan Gilbert", jarkko@kernel.org
References: <7845d453af6344d0b156493eb4555399aad78615.1655761627.git.ashish.kalra@amd.com>
X-Mailing-List: linux-crypto@vger.kernel.org
David Alan Gilbert" , "jarkko@kernel.org" Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org On Fri, Jun 24, 2022 at 2:14 PM Kalra, Ashish wrote: > > [AMD Official Use Only - General] > > Hello Peter, > > >> From: Brijesh Singh > >> > >> On VMGEXIT, sev_handle_vmgexit() creates a host mapping for the GHCB > >> GPA, and unmaps it just before VM-entry. This long-lived GHCB map is > >> used by the VMGEXIT handler through accessors such as ghcb_{set_get}_x= xx(). > >> > >> A long-lived GHCB map can cause issue when SEV-SNP is enabled. When > >> SEV-SNP is enabled the mapped GPA needs to be protected against a page > >> state change. > >> > >> To eliminate the long-lived GHCB mapping, update the GHCB sync > >> operations to explicitly map the GHCB before access and unmap it after > >> access is complete. This requires that the setting of the GHCBs > >> sw_exit_info_{1,2} fields be done during sev_es_sync_to_ghcb(), so > >> create two new fields in the vcpu_svm struct to hold these values when > >> required to be set outside of the GHCB mapping. > >> > >> Signed-off-by: Brijesh Singh > >> --- > >> arch/x86/kvm/svm/sev.c | 131 > >> ++++++++++++++++++++++++++--------------- > >> arch/x86/kvm/svm/svm.c | 12 ++-- > >> arch/x86/kvm/svm/svm.h | 24 +++++++- > >> 3 files changed, 111 insertions(+), 56 deletions(-) > >> > >> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index > >> 01ea257e17d6..c70f3f7e06a8 100644 > >> --- a/arch/x86/kvm/svm/sev.c > >> +++ b/arch/x86/kvm/svm/sev.c > >> @@ -2823,15 +2823,40 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu) > >> kvfree(svm->sev_es.ghcb_sa); > >> } > >> > >> +static inline int svm_map_ghcb(struct vcpu_svm *svm, struct > >> +kvm_host_map *map) { > >> + struct vmcb_control_area *control =3D &svm->vmcb->control; > >> + u64 gfn =3D gpa_to_gfn(control->ghcb_gpa); > >> + > >> + if (kvm_vcpu_map(&svm->vcpu, gfn, map)) { > >> + /* Unable to map GHCB from guest */ > >> + pr_err("error mapping GHCB GFN [%#llx] from guest\n", = gfn); > >> + return -EFAULT; > >> + } > >> + > >> + return 0; > >> +} > > >There is a perf cost to this suggestion but it might make accessing the = GHCB safer for KVM. Have you thought about just using > >kvm_read_guest() or copy_from_user() to fully copy out the GCHB into a K= VM owned buffer, then copying it back before the VMRUN. That way the KVM do= esn't need to guard against page_state_changes on the GHCBs, that could be = a perf ?>improvement in a follow up. > > Along with the performance costs you mentioned, the main concern here wil= l be the GHCB write-back path (copying it back) before VMRUN: this will aga= in hit the issue we have currently with > kvm_write_guest() / copy_to_user(), when we use it to sync the scratch bu= ffer back to GHCB. This can fail if guest RAM is mapped using huge-page(s) = and RMP is 4K. Please refer to the patch/fix > mentioned below, kvm_write_guest() potentially can fail before VMRUN in c= ase of SNP : > > commit 94ed878c2669532ebae8eb9b4503f19aa33cd7aa > Author: Ashish Kalra > Date: Mon Jun 6 22:28:01 2022 +0000 > > KVM: SVM: Sync the GHCB scratch buffer using already mapped ghcb > > Using kvm_write_guest() to sync the GHCB scratch buffer can fail > due to host mapping being 2M, but RMP being 4K. The page fault handli= ng > in do_user_addr_fault() fails to split the 2M page to handle RMP faul= t due > to it being called here in a non-preemptible context. 
>     Instead use the already kernel-mapped ghcb to sync the scratch
>     buffer when the scratch buffer is contained within the GHCB.

Ah, I didn't see that issue, thanks for the pointer.

The patch description says "When SEV-SNP is enabled the mapped GPA needs
to be protected against a page state change" because if the guest were
to convert the GHCB page to private while the host is using the GHCB,
the host could get an RMP violation, right? That RMP violation would
cause the host to crash unless we use some copy_to_user()-style
protections.

I don't see any mechanism in this patch that adds the page state change
protection discussed. Can't another vCPU still convert the GHCB to
private?

I was wrong about the importance of this, though: seanjc@ walked me
through how UPM will solve this issue, so no worries about this until
the series is rebased onto UPM.

> Thanks,
> Ashish
>
> > Since we cannot unmap GHCBs, I don't think UPM will help here, so we
> > probably want to make these patches safe against malicious guests
> > making GHCBs private. But maybe UPM does help?
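
For reference, the per-access pattern the patch moves to looks roughly
like the sketch below, built on the svm_map_ghcb() helper quoted above.
The wrapper name and out-parameter are illustrative, not from the
series; the comment marks the window the page-state-change question is
about.

/*
 * Sketch only: svm_map_ghcb() is quoted from the patch above; this
 * wrapper and its name are illustrative, not part of the series.
 */
static int svm_get_ghcb_exit_code(struct vcpu_svm *svm, u64 *exit_code)
{
	struct kvm_host_map map;
	struct ghcb *ghcb;

	if (svm_map_ghcb(svm, &map))
		return -EFAULT;

	ghcb = map.hva;

	/*
	 * The window in question: between map and unmap, another vCPU
	 * could convert the GHCB page to private, and a host access
	 * through this mapping would then take an RMP violation.
	 */
	*exit_code = ghcb_get_sw_exit_code(ghcb);

	/* dirty=false since this example only reads the GHCB. */
	kvm_vcpu_unmap(&svm->vcpu, &map, false);

	return 0;
}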
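And for comparison, a minimal sketch of the bounce-buffer alternative
raised earlier in the thread, assuming a hypothetical KVM-owned
ghcb_buf page hung off sev_es (not a field in the actual series). The
write-back half is exactly the step where the quoted commit says
kvm_write_guest() can fail under SNP.

/*
 * Sketch only: svm->sev_es.ghcb_buf is an assumed KVM-allocated page,
 * not a field in the actual series.
 */
static int svm_ghcb_copy_in(struct vcpu_svm *svm)
{
	/* Copy the guest's GHCB into KVM-owned memory before using it. */
	return kvm_read_guest(svm->vcpu.kvm, svm->vmcb->control.ghcb_gpa,
			      svm->sev_es.ghcb_buf, PAGE_SIZE);
}

static int svm_ghcb_copy_out(struct vcpu_svm *svm)
{
	/*
	 * Write the buffer back before VMRUN. Per the commit quoted
	 * above, this is the step that can fail under SNP when the host
	 * mapping is 2M but the RMP entry is 4K.
	 */
	return kvm_write_guest(svm->vcpu.kvm, svm->vmcb->control.ghcb_gpa,
			       svm->sev_es.ghcb_buf, PAGE_SIZE);
}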