From mboxrd@z Thu Jan  1 00:00:00 1970
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [PATCH v9 100/105] KVM: TDX: Silently discard SMI request
Date: Fri, 30 Sep 2022 03:18:34 -0700
Message-Id: <916f8ff8d66e52f10a31bcd812fd9a220c9874bf.1664530908.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata

TDX doesn't support system-management mode (SMM) or the system-management
interrupt (SMI) in guest TDs.  Because guest state (vCPU state and memory
state) is protected, any change to guest state, such as injecting an SMI or
switching a vCPU into SMM, has to go through the TDX module APIs, and the
TDX module provides no interface for the VMM to inject an SMI into a guest
TD or to switch a guest vCPU into SMM.

When an SMI is requested for a guest TD, whether from within KVM or from
the device model (e.g. QEMU), there are two options: 1) silently ignore the
request, or 2) return a meaningful error.  For simplicity, implement
option 1) and silently discard the request.
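Note (illustration only, not part of the patch): a minimal user-space
sketch, assuming a vCPU fd of a guest TD obtained elsewhere, of how a
device model's SMI request behaves after this change.  The KVM_SMI ioctl
still returns 0, but KVM never makes KVM_REQ_SMI for a TD because SMBASE
is not an emulated MSR there, so the SMI is silently discarded:

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* 'vcpu_fd' is a hypothetical vCPU fd of a guest TD created elsewhere. */
	static void request_smi(int vcpu_fd)
	{
		/* Succeeds, but with this patch the SMI is dropped inside KVM. */
		if (ioctl(vcpu_fd, KVM_SMI) == 0)
			printf("KVM_SMI accepted (discarded for a TD)\n");
		else
			perror("KVM_SMI");
	}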
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/lapic.c       |  7 +++++--
 arch/x86/kvm/vmx/main.c    | 43 ++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/tdx.c     | 27 ++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  8 +++++++
 arch/x86/kvm/x86.c         |  3 ++-
 5 files changed, 81 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 4e506084e8ed..e02681061637 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1171,8 +1171,11 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 
 	case APIC_DM_SMI:
 		result = 1;
-		kvm_make_request(KVM_REQ_SMI, vcpu);
-		kvm_vcpu_kick(vcpu);
+		if (static_call(kvm_x86_has_emulated_msr)(vcpu->kvm,
+							  MSR_IA32_SMBASE)) {
+			kvm_make_request(KVM_REQ_SMI, vcpu);
+			kvm_vcpu_kick(vcpu);
+		}
 		break;
 
 	case APIC_DM_NMI:
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 6189bcdc1d80..017c24ed16e5 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -223,6 +223,41 @@ static int vt_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return vmx_get_msr(vcpu, msr_info);
 }
 
+static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_smi_allowed(vcpu, for_injection);
+
+	return vmx_smi_allowed(vcpu, for_injection);
+}
+
+static int vt_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_enter_smm(vcpu, smstate);
+
+	return vmx_enter_smm(vcpu, smstate);
+}
+
+static int vt_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_leave_smm(vcpu, smstate);
+
+	return vmx_leave_smm(vcpu, smstate);
+}
+
+static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_enable_smi_window(vcpu);
+		return;
+	}
+
+	/* RSM will cause a vmexit anyway. */
+	vmx_enable_smi_window(vcpu);
+}
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -580,10 +615,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.setup_mce = vmx_setup_mce,
 
-	.smi_allowed = vmx_smi_allowed,
-	.enter_smm = vmx_enter_smm,
-	.leave_smm = vmx_leave_smm,
-	.enable_smi_window = vmx_enable_smi_window,
+	.smi_allowed = vt_smi_allowed,
+	.enter_smm = vt_enter_smm,
+	.leave_smm = vt_leave_smm,
+	.enable_smi_window = vt_enable_smi_window,
 
 	.can_emulate_instruction = vmx_can_emulate_instruction,
 	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e5337fb24e82..c7164404a79f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1631,6 +1631,33 @@ int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		return 1;
 }
 
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/* SMI isn't supported for TDX. */
+	WARN_ON_ONCE(1);
+	return false;
+}
+
+int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+	/* smi_allowed() is always false for TDX as above. */
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+{
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	/* SMI isn't supported for TDX. Silently discard SMI request. */
+	WARN_ON_ONCE(1);
+	vcpu->arch.smi_pending = false;
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 0a4bdf63e07a..a81b47307b39 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -165,6 +165,10 @@ void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 bool tdx_is_emulated_msr(u32 index, bool write);
 int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
 int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate);
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate);
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -206,6 +210,10 @@ static inline void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason, u64 *in
 static inline bool tdx_is_emulated_msr(u32 index, bool write) { return false; }
 static inline int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
 static inline int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
+static inline int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { return false; }
+static inline int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate) { return 0; }
+static inline int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) { return 0; }
+static inline void tdx_enable_smi_window(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5ecd7a028632..3ba16fe6c9df 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4919,7 +4919,8 @@ static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
 
 static int kvm_vcpu_ioctl_smi(struct kvm_vcpu *vcpu)
 {
-	kvm_make_request(KVM_REQ_SMI, vcpu);
+	if (static_call(kvm_x86_has_emulated_msr)(vcpu->kvm, MSR_IA32_SMBASE))
+		kvm_make_request(KVM_REQ_SMI, vcpu);
 
 	return 0;
 }
-- 
2.25.1