From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B7B2C00144 for ; Fri, 29 Jul 2022 20:58:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239095AbiG2U6u (ORCPT ); Fri, 29 Jul 2022 16:58:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53620 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238806AbiG2U6s (ORCPT ); Fri, 29 Jul 2022 16:58:48 -0400 Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com [IPv6:2607:f8b0:4864:20::533]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBC2C87C2C for ; Fri, 29 Jul 2022 13:58:45 -0700 (PDT) Received: by mail-pg1-x533.google.com with SMTP id f11so4880662pgj.7 for ; Fri, 29 Jul 2022 13:58:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=in-reply-to:content-disposition:mime-version:references:message-id :subject:cc:to:from:date:from:to:cc; bh=xSUX4n5EOZoO6EmWZ2h+wwgMii0Rl96zCog8KMwNDpE=; b=IQa/2kQ54sNuJFQTlEtNq5TKTdwkdwyj7A2Lk5GyapXGPHgft6Y4da5hKX8JyLKYUF dxIS0jRo9GG3npqy+Pg4Ln4aI12jIIGtLZP0/UBLco+CF9pVQjURcXwHKbpgIaL6oDo2 oHPrpODdXTv5tU6hp5fnQVQaLEOG2clh+Dv38QypiPr2SiCugTXODzmnkYiEw9xwIygg /6eAHyLwoYiqcdVMdT02I9kG8osr3/FmkjgMjspXUgU4nrIqcp177Bu4fYi5UEuFgXNu aPQZj8BQskDnwF3+n8m72EOF8vqFzF/UzjAX2l1tE6KGgebsxuoKxNou4XR1uAj2Xvn9 2H6g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=in-reply-to:content-disposition:mime-version:references:message-id :subject:cc:to:from:date:x-gm-message-state:from:to:cc; bh=xSUX4n5EOZoO6EmWZ2h+wwgMii0Rl96zCog8KMwNDpE=; b=FNR1jDHSmEBTjLnbW/unjTgwCAWaI6CaqtgTNcfqu4B9YLzuJi+FTVvzUYP7m9u1B7 ANDmLgJtDyLU7BDT9gGluulv/1G4Nl5UCvUd3Fyoka4p75dyg6NAi27xAhA+LrC/DZpi EDnklPOJdATxzxeGLGonPgMckyr2f2P6kUyg+3q1SFBojnbC8LjUllSA81R9ncIPFEOd lD6EFx+B1PSKUYPOt2RbGpKNJ3+ggsuiIFzCm6pa6wnVjT3Ns1i6CpvDE0M5JNA9ZvLD rAHAIKJmTETMkb+tq4HMOZF85LDBY+Gt606ohj1ZuUreEmXPr8FFy3yiwu9JqhvSUBAp q5dg== X-Gm-Message-State: AJIora8x27GsWzGJ7CLRPfimiIfGgx/ik6x7Rnpzp+Tq86xkL11STRV+ Ou+dQW0FPeJXwsY98YZ15ck/fA== X-Google-Smtp-Source: AGRyM1vN6zx5wsr5GVVc5ytEEhaS3cRKGvLGbuTs4y/gxM5vty96tVvmJRlSNJfUJgonhSJqKrn69g== X-Received: by 2002:a63:535f:0:b0:41a:ee1c:a15f with SMTP id t31-20020a63535f000000b0041aee1ca15fmr4299865pgl.265.1659128325266; Fri, 29 Jul 2022 13:58:45 -0700 (PDT) Received: from google.com (7.104.168.34.bc.googleusercontent.com. [34.168.104.7]) by smtp.gmail.com with ESMTPSA id b10-20020a1709027e0a00b0016d295888e3sm3999789plm.241.2022.07.29.13.58.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 29 Jul 2022 13:58:44 -0700 (PDT) Date: Fri, 29 Jul 2022 20:58:41 +0000 From: Sean Christopherson To: Chao Peng Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org, linux-kselftest@vger.kernel.org, Paolo Bonzini , Jonathan Corbet , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, Muchun Song Subject: Re: [PATCH v7 12/14] KVM: Handle page fault for private memory Message-ID: References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com> <20220706082016.2603916-13-chao.p.peng@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20220706082016.2603916-13-chao.p.peng@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, Jul 06, 2022, Chao Peng wrote: > A page fault can carry the private/shared information for > KVM_MEM_PRIVATE memslot, this can be filled by architecture code(like > TDX code). To handle page fault for such access, KVM maps the page only > when this private property matches the host's view on the page. > > For a successful match, private pfn is obtained with memfile_notifier > callbacks from private fd and shared pfn is obtained with existing > get_user_pages. > > For a failed match, KVM causes a KVM_EXIT_MEMORY_FAULT exit to > userspace. Userspace then can convert memory between private/shared from > host's view then retry the access. > > Co-developed-by: Yu Zhang > Signed-off-by: Yu Zhang > Signed-off-by: Chao Peng > --- > arch/x86/kvm/mmu/mmu.c | 60 ++++++++++++++++++++++++++++++++- > arch/x86/kvm/mmu/mmu_internal.h | 18 ++++++++++ > arch/x86/kvm/mmu/mmutrace.h | 1 + > include/linux/kvm_host.h | 35 ++++++++++++++++++- > 4 files changed, 112 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c > index 545eb74305fe..27dbdd4fe8d1 100644 > --- a/arch/x86/kvm/mmu/mmu.c > +++ b/arch/x86/kvm/mmu/mmu.c > @@ -3004,6 +3004,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, > if (max_level == PG_LEVEL_4K) > return PG_LEVEL_4K; > > + if (kvm_mem_is_private(kvm, gfn)) > + return max_level; > + > host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot); > return min(host_level, max_level); > } > @@ -4101,10 +4104,52 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) > kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); > } > > +static inline u8 order_to_level(int order) > +{ > + enum pg_level level; > + > + for (level = KVM_MAX_HUGEPAGE_LEVEL; level > PG_LEVEL_4K; level--) Curly braces needed for the for-loop. And I think it makes sense to take in the fault->max_level, that way this is slightly more performant when the guest mapping is smaller than the host, e.g. for (level = max_level; level > PG_LEVEL_4K; level--) ... return level; Though I think I'd vote to avoid a loop entirely and do: BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G); if (order > ???) return PG_LEVEL_1G; if (order > ???) return PG_LEVEL_2M; return PG_LEVEL_4K; > + if (order >= page_level_shift(level) - PAGE_SHIFT) > + return level; > + return level; > +} > + > +static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu, > + struct kvm_page_fault *fault) > +{ > + int order; > + struct kvm_memory_slot *slot = fault->slot; > + bool private_exist = kvm_mem_is_private(vcpu->kvm, fault->gfn); > + > + if (fault->is_private != private_exist) { > + vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT; > + if (fault->is_private) > + vcpu->run->memory.flags = KVM_MEMORY_EXIT_FLAG_PRIVATE; > + else > + vcpu->run->memory.flags = 0; > + vcpu->run->memory.padding = 0; > + vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT; > + vcpu->run->memory.size = PAGE_SIZE; > + return RET_PF_USER; > + } > + > + if (fault->is_private) { > + if (kvm_private_mem_get_pfn(slot, fault->gfn, &fault->pfn, &order)) > + return RET_PF_RETRY; > + fault->max_level = min(order_to_level(order), fault->max_level); > + fault->map_writable = !(slot->flags & KVM_MEM_READONLY); > + return RET_PF_FIXED; > + } > + > + /* Fault is shared, fallthrough. */ > + return RET_PF_CONTINUE; > +} > + > static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) > { > struct kvm_memory_slot *slot = fault->slot; > bool async; > + int r; > > /* > * Retry the page fault if the gfn hit a memslot that is being deleted > @@ -4133,6 +4178,12 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) > return RET_PF_EMULATE; > } > > + if (kvm_slot_can_be_private(slot)) { > + r = kvm_faultin_pfn_private(vcpu, fault); > + if (r != RET_PF_CONTINUE) > + return r == RET_PF_FIXED ? RET_PF_CONTINUE : r; I apologize if I've given you conflicting feedback in the past. Now that this returns RET_PF_* directly, I definitely think it makes sense to do: if (kvm_slot_can_be_private(slot) && fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) { vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT; if (fault->is_private) vcpu->run->memory.flags = KVM_MEMORY_EXIT_FLAG_PRIVATE; else vcpu->run->memory.flags = 0; vcpu->run->memory.padding = 0; vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT; vcpu->run->memory.size = PAGE_SIZE; return RET_PF_USER; } if (fault->is_private) return kvm_faultin_pfn_private(vcpu, fault); That way kvm_faultin_pfn_private() only handles private faults, and this doesn't need to play games with RET_PF_FIXED. > + } > + > async = false; > fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async, > fault->write, &fault->map_writable, > @@ -4241,7 +4292,11 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault > read_unlock(&vcpu->kvm->mmu_lock); > else > write_unlock(&vcpu->kvm->mmu_lock); > - kvm_release_pfn_clean(fault->pfn); > + > + if (fault->is_private) > + kvm_private_mem_put_pfn(fault->slot, fault->pfn); > + else > + kvm_release_pfn_clean(fault->pfn); AFAIK, we never bottomed out on whether or not this is needed[*]. Can you follow up with Kirill to get an answer before posting v8? [*] https://lore.kernel.org/all/20220620141647.GC2016793@chaop.bj.intel.com