From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Sean Christopherson
Subject: [PATCH v7 052/102] KVM: TDX: Add load_mmu_pgd method for TDX
Date: Mon, 27 Jun 2022 14:53:44 -0700

From: Sean Christopherson

For virtual I/O, the guest TD shares guest pages with the VMM without
encryption.  The shared EPT is used to map those guest pages in an
unprotected way.

Add the VMCS field encoding for the shared EPT pointer, which will be used
by TDX to have separate EPT walks for private GPAs (existing EPTP) versus
shared GPAs (new shared EPTP).  Set the shared EPT pointer value for the
TDX guest to initialize the TDX MMU.
Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/include/asm/vmx.h |  1 +
 arch/x86/kvm/vmx/main.c    | 11 ++++++++++-
 arch/x86/kvm/vmx/tdx.c     |  5 +++++
 arch/x86/kvm/vmx/x86_ops.h |  4 ++++
 4 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index f0f8eecf55ac..e169ace97e83 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -234,6 +234,7 @@ enum vmcs_field {
 	TSC_MULTIPLIER_HIGH             = 0x00002033,
 	TERTIARY_VM_EXEC_CONTROL        = 0x00002034,
 	TERTIARY_VM_EXEC_CONTROL_HIGH   = 0x00002035,
+	SHARED_EPT_POINTER              = 0x0000203C,
 	PID_POINTER_TABLE               = 0x00002042,
 	PID_POINTER_TABLE_HIGH          = 0x00002043,
 	GUEST_PHYSICAL_ADDRESS          = 0x00002400,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 9f4c3a0bcc12..252b7298b230 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -110,6 +110,15 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	return vmx_vcpu_reset(vcpu, init_event);
 }
 
+static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
+			    int pgd_level)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
+
+	vmx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -228,7 +237,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.write_tsc_offset = vmx_write_tsc_offset,
 	.write_tsc_multiplier = vmx_write_tsc_multiplier,
 
-	.load_mmu_pgd = vmx_load_mmu_pgd,
+	.load_mmu_pgd = vt_load_mmu_pgd,
 
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2772775457b0..24b428b7491d 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -532,6 +532,11 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->kvm->vm_bugged = true;
 }
 
+void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
+{
+	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 7e38c7b756d4..e70f84d29d21 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -144,6 +144,8 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
+
+void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
@@ -161,6 +163,8 @@ static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
+
+static inline void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level) {}
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1
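
Note (not part of the patch): the split the commit message describes can be
pictured with a minimal, self-contained userspace sketch of the private vs.
shared GPA decision.  GPA_SHARED_BIT, gpa_is_shared() and the example
addresses below are illustrative assumptions, not KVM or TDX-module API; the
real shared-bit position depends on the guest physical address width
configured for the TD.

/*
 * Sketch only: classify a GPA as shared (walked via the shared EPT root
 * that tdx_load_mmu_pgd() writes to SHARED_EPT_POINTER) or private
 * (walked via the Secure EPT managed by the TDX module).
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GPA_SHARED_BIT	47			/* hypothetical bit position */
#define GPA_SHARED_MASK	(1ULL << GPA_SHARED_BIT)

static bool gpa_is_shared(uint64_t gpa)
{
	return gpa & GPA_SHARED_MASK;
}

int main(void)
{
	/* e.g. an unencrypted I/O buffer shared with the VMM */
	uint64_t shared_gpa  = GPA_SHARED_MASK | 0xfe000000ULL;
	/* ordinary guest RAM stays private */
	uint64_t private_gpa = 0x00100000ULL;

	printf("0x%016" PRIx64 " -> %s EPT\n", shared_gpa,
	       gpa_is_shared(shared_gpa) ? "shared" : "secure");
	printf("0x%016" PRIx64 " -> %s EPT\n", private_gpa,
	       gpa_is_shared(private_gpa) ? "shared" : "secure");
	return 0;
}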