From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini
Subject: [PATCH v7 038/102] KVM: x86/mmu: Disallow fast page fault on private GPA
Date: Mon, 27 Jun 2022 14:53:30 -0700
Message-Id: <8718a5e35e4ec90e6493541f4f4a5903918f31e5.1656366338.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX requires TDX SEAMCALLs to operate Secure EPT instead of direct memory
access, and a TDX SEAMCALL is a heavy operation.  Fast page fault on
private GPA doesn't make sense.  Disallow fast page fault on private GPA.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 496d0d30839b..e0aa5ad3931d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3081,8 +3081,16 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
 	return RET_PF_CONTINUE;
 }
 
-static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
+static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	/*
+	 * TDX private mapping doesn't support fast page fault because the EPT
+	 * entry is read/written with TDX SEAMCALLs instead of direct memory
+	 * access.
+	 */
+	if (kvm_is_private_gpa(kvm, fault->addr))
+		return false;
+
 	/*
 	 * Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
 	 * reach the common page fault handler if the SPTE has an invalid MMIO
@@ -3192,7 +3200,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	u64 *sptep = NULL;
 	uint retry_count = 0;
 
-	if (!page_fault_can_be_fast(fault))
+	if (!page_fault_can_be_fast(vcpu->kvm, fault))
 		return ret;
 
 	walk_shadow_page_lockless_begin(vcpu);
-- 
2.25.1
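
For readers not following the rest of the TDX series: below is a minimal,
standalone sketch of the test that kvm_is_private_gpa() in the hunk above is
assumed to perform. The idea is that a guest physical address is private when
the VM's shared-GPA bit is clear; the bit position (51) and the names used
here are illustrative assumptions, not part of this patch or of KVM's API.

/*
 * Standalone model, not kernel code: a GPA is treated as private when the
 * VM has a shared-GPA bit and that bit is clear in the address.  Bit 51 is
 * an assumed, illustrative position for the shared bit.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gpa_t;

/* Assumed per-VM shared-GPA bit; 0 would mean a VM with no private memory. */
static const gpa_t shared_gpa_mask = 1ULL << 51;

static bool is_private_gpa(gpa_t gpa)
{
	/* Private only if a shared bit exists and this GPA does not set it. */
	return shared_gpa_mask && !(gpa & shared_gpa_mask);
}

int main(void)
{
	gpa_t private_gpa = 0x1000;                    /* shared bit clear */
	gpa_t shared_gpa  = 0x1000 | shared_gpa_mask;  /* shared bit set   */

	printf("0x%llx -> private=%d\n",
	       (unsigned long long)private_gpa, is_private_gpa(private_gpa));
	printf("0x%llx -> private=%d\n",
	       (unsigned long long)shared_gpa, is_private_gpa(shared_gpa));
	return 0;
}

With such a check in place, fast_page_fault() bails out before touching the
SPTE for any private GPA, which is the behavior this patch wants, since those
entries can only be manipulated through SEAMCALLs.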