From: Kuppuswamy Sathyanarayanan
To: Peter Zijlstra, Andy Lutomirski, Dave Hansen, Dan Williams, Tony Luck
Cc: Andi Kleen, Kirill Shutemov, Kuppuswamy Sathyanarayanan, Raj Ashok,
    Sean Christopherson, linux-kernel@vger.kernel.org,
    Kuppuswamy Sathyanarayanan
Subject: [RFC v2 11/32] x86/tdx: Add MSR support for TDX guest
Date: Mon, 26 Apr 2021 11:01:38 -0700
Message-Id: 
X-Mailer: git-send-email 2.25.1
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Kirill A. Shutemov"

Operations on context-switched MSRs can be run natively. The rest of the
MSRs should be handled through TDVMCALLs. TDVMCALL[Instruction.RDMSR] and
TDVMCALL[Instruction.WRMSR] provide the MSR operations.

Details of the RDMSR and WRMSR TDVMCALLs can be found in the
Guest-Host-Communication Interface (GHCI) for Intel Trust Domain
Extensions (Intel TDX) specification, sections 3.10 and 3.11.

Also, since the CSTAR MSR is not used by the SYSCALL instruction on Intel
CPUs, ignore accesses to it.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Signed-off-by: Kuppuswamy Sathyanarayanan
---
 arch/x86/kernel/tdx.c | 85 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 83 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 721c213d807d..5b16707b3577 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -107,6 +107,73 @@ static __cpuidle void tdg_safe_halt(void)
 	tdg_halt();
 }
 
+static bool tdg_is_context_switched_msr(unsigned int msr)
+{
+	/* XXX: Update the list of context-switched MSRs */
+
+	switch (msr) {
+	case MSR_EFER:
+	case MSR_IA32_CR_PAT:
+	case MSR_FS_BASE:
+	case MSR_GS_BASE:
+	case MSR_KERNEL_GS_BASE:
+	case MSR_IA32_SYSENTER_CS:
+	case MSR_IA32_SYSENTER_EIP:
+	case MSR_IA32_SYSENTER_ESP:
+	case MSR_STAR:
+	case MSR_LSTAR:
+	case MSR_SYSCALL_MASK:
+	case MSR_IA32_XSS:
+	case MSR_TSC_AUX:
+	case MSR_IA32_BNDCFGS:
+		return true;
+	}
+	return false;
+}
+
+static u64 tdg_read_msr_safe(unsigned int msr, int *err)
+{
+	u64 ret;
+	struct tdvmcall_output out = {0};
+
+	WARN_ON_ONCE(tdg_is_context_switched_msr(msr));
+
+	/*
+	 * Since the CSTAR MSR is not used by the SYSCALL
+	 * instruction on Intel CPUs, just ignore it. Raising
+	 * a TDVMCALL would lead to the same result.
+	 */
+	if (msr == MSR_CSTAR)
+		return 0;
+
+	ret = __tdvmcall(EXIT_REASON_MSR_READ, msr, 0, 0, 0, &out);
+
+	*err = ret ? -EIO : 0;
+
+	return out.r11;
+}
+
+static int tdg_write_msr_safe(unsigned int msr, unsigned int low,
+			      unsigned int high)
+{
+	u64 ret;
+
+	WARN_ON_ONCE(tdg_is_context_switched_msr(msr));
+
+	/*
+	 * Since the CSTAR MSR is not used by the SYSCALL
+	 * instruction on Intel CPUs, just ignore it. Raising
+	 * a TDVMCALL would lead to the same result.
+	 */
+	if (msr == MSR_CSTAR)
+		return 0;
+
+	ret = __tdvmcall(EXIT_REASON_MSR_WRITE, msr, (u64)high << 32 | low,
+			 0, 0, NULL);
+
+	return ret ? -EIO : 0;
+}
+
 unsigned long tdg_get_ve_info(struct ve_info *ve)
 {
 	u64 ret;
@@ -136,19 +203,33 @@ unsigned long tdg_get_ve_info(struct ve_info *ve)
 int tdg_handle_virtualization_exception(struct pt_regs *regs,
 					struct ve_info *ve)
 {
+	unsigned long val;
+	int ret = 0;
+
 	switch (ve->exit_reason) {
 	case EXIT_REASON_HLT:
 		tdg_halt();
 		break;
+	case EXIT_REASON_MSR_READ:
+		val = tdg_read_msr_safe(regs->cx, &ret);
+		if (!ret) {
+			regs->ax = val & UINT_MAX;
+			regs->dx = val >> 32;
+		}
+		break;
+	case EXIT_REASON_MSR_WRITE:
+		ret = tdg_write_msr_safe(regs->cx, regs->ax, regs->dx);
+		break;
 	default:
 		pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
 		return -EFAULT;
 	}
 
 	/* After successful #VE handling, move the IP */
-	regs->ip += ve->instr_len;
+	if (!ret)
+		regs->ip += ve->instr_len;
 
-	return 0;
+	return ret;
 }
 
 void __init tdx_early_init(void)
-- 
2.25.1
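
For readers following the #VE paths above: the handler splits and rejoins
the 64-bit MSR value across EDX:EAX exactly as the RDMSR/WRMSR register
convention requires (regs->dx carries the high half, regs->ax the low
half). Below is a minimal, standalone userspace sketch of that packing;
the helper names are hypothetical and are not part of the patch.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helpers mirroring the packing done by the #VE handler. */

/* WRMSR path: regs->ax (low half) and regs->dx (high half) become one u64. */
static uint64_t msr_pack(uint32_t low, uint32_t high)
{
	return (uint64_t)high << 32 | low;
}

/* RDMSR path: the 64-bit TDVMCALL result is split back into EDX:EAX. */
static void msr_unpack(uint64_t val, uint32_t *low, uint32_t *high)
{
	*low  = val & UINT32_MAX;
	*high = val >> 32;
}

int main(void)
{
	uint32_t lo, hi;
	uint64_t val = msr_pack(0x89abcdefU, 0x01234567U);

	msr_unpack(val, &lo, &hi);
	printf("packed=%#llx low=%#x high=%#x\n",
	       (unsigned long long)val, lo, hi);
	return 0;
}

Built with a plain cc, this should round-trip the value, matching what
tdg_write_msr_safe() sends and what the EXIT_REASON_MSR_READ case writes
back into regs->ax/regs->dx.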