From: isaku.yamahata@intel.com
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86@kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com,
	Sean Christopherson
Subject: [RFC PATCH 43/67] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX
Date: Mon, 16 Nov 2020 10:26:28 -0800
Message-Id: <3c4a842b7f989090f58da9ec50e238e5cd06588a.1605232743.git.isaku.yamahata@intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Sean Christopherson

Introduce a helper to directly (pun intended) fault-in a TDP page without
having to go through the full page fault path.  This allows TDX to get the
resulting pfn and also allows the RET_PF_* enums to stay in mmu.c where
they belong.
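As a rough illustration only (not part of this patch), a TDX call site
might consume the helper along the lines below; the
tdx_pre_fault_private_gpa() wrapper, the PFERR_WRITE_MASK error code and
the PG_LEVEL_4K choice are hypothetical placeholders, not anything this
series actually adds:

/* Hypothetical caller sketch, for illustration only. */
static int tdx_pre_fault_private_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
{
	kvm_pfn_t pfn;

	/* Fault in the GPA as a 4K write; retries are handled internally. */
	pfn = kvm_mmu_map_tdp_page(vcpu, gpa, PFERR_WRITE_MASK, PG_LEVEL_4K);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	/* pfn is now mapped in the TDP MMU and can be handed to TDX. */
	return 0;
}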
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h     |  3 +++
 arch/x86/kvm/mmu/mmu.c | 25 +++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 3b1243cfc280..a6bb930d1549 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -115,6 +115,9 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	return vcpu->arch.mmu->page_fault(vcpu, cr2_or_gpa, err, prefault);
 }
 
+kvm_pfn_t kvm_mmu_map_tdp_page(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       u32 error_code, int max_level);
+
 /*
  * Currently, we have two sorts of write-protection, a) the first one
  * write-protects guest page to sync the guest modification, b) another one is
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 474173bceb54..bb59e80ade81 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4034,6 +4034,31 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 				 max_level, true, &pfn);
 }
 
+kvm_pfn_t kvm_mmu_map_tdp_page(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       u32 error_code, int max_level)
+{
+	kvm_pfn_t pfn;
+	int r;
+
+	if (mmu_topup_memory_caches(vcpu, false))
+		return KVM_PFN_ERR_FAULT;
+
+	/*
+	 * Loop on the page fault path to handle the case where an mmu_notifier
+	 * invalidation triggers RET_PF_RETRY.  In the normal page fault path,
+	 * KVM needs to resume the guest in case the invalidation changed any
+	 * of the page fault properties, i.e. the gpa or error code.  For this
+	 * path, the gpa and error code are fixed by the caller, and the caller
+	 * expects failure if and only if the page fault can't be fixed.
+	 */
+	do {
+		r = direct_page_fault(vcpu, gpa, error_code, false, max_level,
+				      true, &pfn);
+	} while (r == RET_PF_RETRY && !is_error_noslot_pfn(pfn));
+	return pfn;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_map_tdp_page);
+
 static void nonpaging_init_context(struct kvm_vcpu *vcpu,
 				   struct kvm_mmu *context)
 {
-- 
2.17.1