Date: Fri, 16 Jul 2021 21:00:34 +0000
From: Sean Christopherson
To: Brijesh Singh
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-efi@vger.kernel.org, platform-driver-x86@vger.kernel.org,
	linux-coco@lists.linux.dev, linux-mm@kvack.org, linux-crypto@vger.kernel.org,
	Thomas Gleixner, Ingo Molnar, Joerg Roedel, Tom Lendacky, "H. Peter Anvin",
	Ard Biesheuvel, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Andy Lutomirski, Dave Hansen, Sergio Lopez, Peter Gonda, Peter Zijlstra,
	Srinivas Pandruvada, David Rientjes, Dov Murik, Tobin Feldman-Fitzthum,
	Borislav Petkov, Michael Roth, Vlastimil Babka, tony.luck@intel.com,
	npmccallum@redhat.com, brijesh.ksingh@gmail.com
Subject: Re: [PATCH Part2 RFC v4 33/40] KVM: SVM: Add support to handle MSR based Page State Change VMGEXIT
Message-ID: 
References: <20210707183616.5620-1-brijesh.singh@amd.com> <20210707183616.5620-34-brijesh.singh@amd.com>
In-Reply-To: <20210707183616.5620-34-brijesh.singh@amd.com>

On Wed, Jul 07, 2021, Brijesh Singh wrote:
> +static int __snp_handle_psc(struct kvm_vcpu *vcpu, int op, gpa_t gpa, int level)

I can live with e.g. GHCB_MSR_PSC_REQ, but I'd strongly prefer to spell this
out, e.g. __snp_handle_page_state_change() or whatever.  I had a hell of a
time figuring out what PSC was the first time I saw it in some random context.
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	int rc, tdp_level;
> +	kvm_pfn_t pfn;
> +	gpa_t gpa_end;
> +
> +	gpa_end = gpa + page_level_size(level);
> +
> +	while (gpa < gpa_end) {
> +		/*
> +		 * Get the pfn and level for the gpa from the nested page table.
> +		 *
> +		 * If the TDP walk failed, then its safe to say that we don't have a valid
> +		 * mapping for the gpa in the nested page table. Create a fault to map the
> +		 * page is nested page table.
> +		 */
> +		if (!kvm_mmu_get_tdp_walk(vcpu, gpa, &pfn, &tdp_level)) {
> +			pfn = kvm_mmu_map_tdp_page(vcpu, gpa, PFERR_USER_MASK, level);
> +			if (is_error_noslot_pfn(pfn))
> +				goto out;
> +
> +			if (!kvm_mmu_get_tdp_walk(vcpu, gpa, &pfn, &tdp_level))
> +				goto out;
> +		}
> +
> +		/* Adjust the level so that we don't go higher than the backing page level */
> +		level = min_t(size_t, level, tdp_level);
> +
> +		write_lock(&kvm->mmu_lock);

Retrieving the PFN and level outside of mmu_lock is not correct.  Because the
pages are pinned and the VMM is not malicious, it will function as intended,
but it is far from correct.

The overall approach also feels wrong, e.g. a guest won't be able to convert a
2mb chunk back to a 2mb large page if KVM mapped the GPA as a 4kb page in the
past (from a different conversion).  I'd also strongly prefer to have a common
flow between SNP and TDX for converting between shared/private.

I'll circle back to this next week, it'll probably take a few hours of staring
to figure out a solution, if a common one for SNP+TDX is even possible.
> +
> +		switch (op) {
> +		case SNP_PAGE_STATE_SHARED:
> +			rc = snp_make_page_shared(vcpu, gpa, pfn, level);
> +			break;
> +		case SNP_PAGE_STATE_PRIVATE:
> +			rc = snp_make_page_private(vcpu, gpa, pfn, level);
> +			break;
> +		default:
> +			rc = -EINVAL;
> +			break;
> +		}
> +
> +		write_unlock(&kvm->mmu_lock);
> +
> +		if (rc) {
> +			pr_err_ratelimited("Error op %d gpa %llx pfn %llx level %d rc %d\n",
> +					   op, gpa, pfn, level, rc);
> +			goto out;
> +		}
> +
> +		gpa = gpa + page_level_size(level);
> +	}
> +
> +out:
> +	return rc;
> +}
> +
>  static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
>  {
>  	struct vmcb_control_area *control = &svm->vmcb->control;
> @@ -2941,6 +3063,25 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
>  				       GHCB_MSR_INFO_POS);
>  		break;
>  	}
> +	case GHCB_MSR_PSC_REQ: {
> +		gfn_t gfn;
> +		int ret;
> +		u8 op;
> +
> +		gfn = get_ghcb_msr_bits(svm, GHCB_MSR_PSC_GFN_MASK, GHCB_MSR_PSC_GFN_POS);
> +		op = get_ghcb_msr_bits(svm, GHCB_MSR_PSC_OP_MASK, GHCB_MSR_PSC_OP_POS);
> +
> +		ret = __snp_handle_psc(vcpu, op, gfn_to_gpa(gfn), PG_LEVEL_4K);
> +
> +		/* If failed to change the state then spec requires to return all F's */

That doesn't mesh with what I could find:

  o 0x015 – SNP Page State Change Response
    ▪ GHCBData[63:32] – Error code
    ▪ GHCBData[31:12] – Reserved, must be zero

    Written by the hypervisor in response to a Page State Change request.
    Any non-zero value for the error code indicates that the page state
    change was not successful.

And if "all Fs" is indeed the error code, 'int ret' probably only works by
luck since the return value is a 64-bit value, whereas ret is a 32-bit
signed int.

> +		if (ret)
> +			ret = -1;

Uh, this is fubar.  You've created a shadow of 'ret', i.e. the outer ret is
likely uninitialized.
> +
> +		set_ghcb_msr_bits(svm, ret, GHCB_MSR_PSC_ERROR_MASK, GHCB_MSR_PSC_ERROR_POS);
> +		set_ghcb_msr_bits(svm, 0, GHCB_MSR_PSC_RSVD_MASK, GHCB_MSR_PSC_RSVD_POS);
> +		set_ghcb_msr_bits(svm, GHCB_MSR_PSC_RESP, GHCB_MSR_INFO_MASK, GHCB_MSR_INFO_POS);
> +		break;
> +	}
>  	case GHCB_MSR_TERM_REQ: {
>  		u64 reason_set, reason_code;
> 
> -- 
> 2.17.1
> 