From: Kuppuswamy Sathyanarayanan
To: Peter Zijlstra, Andy Lutomirski, Dave Hansen, Tony Luck
Cc: Andi Kleen, Kirill Shutemov, Kuppuswamy Sathyanarayanan,
    Dan Williams, Raj Ashok, Sean Christopherson,
    Kuppuswamy Sathyanarayanan, linux-kernel@vger.kernel.org
Subject: [RFC v2-fix-v2 2/2] x86/tdx: Handle in-kernel MMIO
Date: Wed, 2 Jun 2021 12:42:19 -0700
Message-Id: <20210602194220.2227863-3-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210602194220.2227863-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <3e9a26c3-8eee-88f5-f8e2-8a2dd2c028ea@intel.com>
 <20210602194220.2227863-1-sathyanarayanan.kuppuswamy@linux.intel.com>

From: "Kirill A. Shutemov"

In traditional VMs, MMIO is usually implemented by giving a guest
access to a mapping that causes a VMEXIT on access, with the VMM then
emulating the access. That's not possible in a TDX guest, because a
VMEXIT would expose the guest's register state to the host. TDX guests
don't trust the host and can't have their state exposed to it.

In TDX, MMIO regions are instead configured to trigger a #VE exception
in the guest. The guest #VE handler emulates the MMIO instruction
inside the guest and converts it into a controlled TDCALL to the host,
rather than exposing the full register state.

Currently, we only support MMIO for instructions known to come from
the io.h macros (build_mmio_read/write()). Drivers that don't use the
io.h macros, or that use a structure overlay to do MMIO, are currently
not supported in TDX guests (for example, the MMIO-based XAPIC is
disabled at runtime for TDX). User-space MMIO access triggers SIGBUS.
This handling is similar to AMD SEV.
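As a purely illustrative sketch (the device base address, mapping
size, and register layout below are invented for the example), the
supported and unsupported access styles in a hypothetical driver
differ like this:

	/* Supported: io.h accessors compile to simple MOV forms that
	 * the #VE handler's instruction decoder can recognize and
	 * convert into a TDCALL. */
	void __iomem *base = ioremap(0xfebd0000, 0x1000);
	u32 status;

	status = readl(base + 0x04);	/* #VE -> decode -> TDCALL */
	writel(0x1, base + 0x00);	/* #VE -> decode -> TDCALL */

	/* Unsupported: structure-overlay MMIO. A read-modify-write
	 * like this can compile to an OR directly against the MMIO
	 * address, which the #VE handler cannot decode. */
	struct fake_regs {
		u32 ctrl;
		u32 status;
	} __iomem *regs = (struct fake_regs __iomem *)base;

	regs->ctrl |= 0x1;		/* decode fails, -EFAULT */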
The reasons for supporting #VE-based MMIO in TDX guests are:

* MMIO is widely used, and more drivers will need it in the future.
* We don't want to annotate every TDX-specific MMIO readl()/writel(),
  etc.
* Without annotations, we would need to add an alternative to every
  MMIO access in the kernel (even though 99.9% will never be used on
  TDX), which would be a complete waste and incredible binary bloat
  for nothing.

Signed-off-by: Kirill A. Shutemov
---
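Side note for reviewers: the register-widening done by the
MMIO_READ_SIGN_EXTEND/MMIO_READ_ZERO_EXTEND arms can be demonstrated
with a small stand-alone program that mirrors the patch logic
(illustrative only, not part of the patch; assumes little endian):

	#include <stdio.h>
	#include <string.h>

	static unsigned long emulate_read(unsigned long val, int size,
					  int opnd_bytes, int sign_extend)
	{
		unsigned long reg = 0;
		unsigned char sign_byte = 0;

		if (sign_extend) {
			if (size == 1)
				sign_byte = (val & 0x80) ? 0xff : 0x00;
			else
				sign_byte = (val & 0x8000) ? 0xff : 0x00;
		}

		/* Same order as the patch: fill the destination with
		 * the extension byte, then overwrite the low bytes
		 * with the value actually read. */
		memset(&reg, sign_byte, opnd_bytes);
		memcpy(&reg, &val, size);
		return reg;
	}

	int main(void)
	{
		/* 2-byte MMIO read of 0x8001, widened to an 8-byte operand */
		printf("sign-extended: %#lx\n", emulate_read(0x8001, 2, 8, 1));
		printf("zero-extended: %#lx\n", emulate_read(0x8001, 2, 8, 0));
		return 0;
	}

This prints 0xffffffffffff8001 and 0x8001, matching what movswq and
movzwq would have left in the destination register.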
 arch/x86/kernel/tdx.c | 108 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index d0e569b607bc..3687144b9131 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -5,6 +5,9 @@
 #include
 #include
+#include <asm/insn.h>
+#include <asm/insn-eval.h>
+#include <linux/sched/signal.h> /* force_sig_fault() */
 #include
 #include
@@ -236,6 +239,104 @@ static void tdg_handle_io(struct pt_regs *regs, u32 exit_qual)
 	}
 }
 
+static unsigned long tdg_mmio(int size, bool write, unsigned long addr,
+			      unsigned long *val)
+{
+	struct tdx_hypercall_output out = {0};
+	u64 err;
+
+	err = __tdx_hypercall(EXIT_REASON_EPT_VIOLATION, size, write,
+			      addr, *val, &out);
+	*val = out.r11;
+	return err;
+}
+
+static int tdg_handle_mmio(struct pt_regs *regs, struct ve_info *ve)
+{
+	struct insn insn = {};
+	char buffer[MAX_INSN_SIZE];
+	enum mmio_type mmio;
+	unsigned long *reg;
+	int size, ret;
+	u8 sign_byte;
+	unsigned long val;
+
+	if (user_mode(regs)) {
+		ret = insn_fetch_from_user(regs, buffer);
+		if (!ret)
+			return -EFAULT;
+		if (!insn_decode_from_regs(&insn, regs, buffer, ret))
+			return -EFAULT;
+	} else {
+		ret = copy_from_kernel_nofault(buffer, (void *)regs->ip,
+					       MAX_INSN_SIZE);
+		if (ret)
+			return -EFAULT;
+		insn_init(&insn, buffer, MAX_INSN_SIZE, 1);
+		insn_get_length(&insn);
+	}
+
+	mmio = insn_decode_mmio(&insn, &size);
+	if (mmio == MMIO_DECODE_FAILED)
+		return -EFAULT;
+
+	if (mmio != MMIO_WRITE_IMM && mmio != MMIO_MOVS) {
+		reg = insn_get_modrm_reg_ptr(&insn, regs);
+		if (!reg)
+			return -EFAULT;
+	}
+
+	switch (mmio) {
+	case MMIO_WRITE:
+		memcpy(&val, reg, size);
+		ret = tdg_mmio(size, true, ve->gpa, &val);
+		break;
+	case MMIO_WRITE_IMM:
+		val = insn.immediate.value;
+		ret = tdg_mmio(size, true, ve->gpa, &val);
+		break;
+	case MMIO_READ:
+		ret = tdg_mmio(size, false, ve->gpa, &val);
+		if (ret)
+			break;
+		/* Zero-extend for 32-bit operation */
+		if (size == 4)
+			*reg = 0;
+		memcpy(reg, &val, size);
+		break;
+	case MMIO_READ_ZERO_EXTEND:
+		ret = tdg_mmio(size, false, ve->gpa, &val);
+		if (ret)
+			break;
+
+		/* Zero extend based on operand size */
+		memset(reg, 0, insn.opnd_bytes);
+		memcpy(reg, &val, size);
+		break;
+	case MMIO_READ_SIGN_EXTEND:
+		ret = tdg_mmio(size, false, ve->gpa, &val);
+		if (ret)
+			break;
+
+		if (size == 1)
+			sign_byte = (val & 0x80) ? 0xff : 0x00;
+		else
+			sign_byte = (val & 0x8000) ? 0xff : 0x00;
+
+		/* Sign extend based on operand size */
+		memset(reg, sign_byte, insn.opnd_bytes);
+		memcpy(reg, &val, size);
+		break;
+	case MMIO_MOVS:
+	case MMIO_DECODE_FAILED:
+		return -EFAULT;
+	}
+
+	if (ret)
+		return -EFAULT;
+	return insn.length;
+}
+
 unsigned long tdg_get_ve_info(struct ve_info *ve)
 {
 	u64 ret;
@@ -285,6 +386,13 @@ int tdg_handle_virtualization_exception(struct pt_regs *regs,
 	case EXIT_REASON_IO_INSTRUCTION:
 		tdg_handle_io(regs, ve->exit_qual);
 		break;
+	case EXIT_REASON_EPT_VIOLATION:
+		ve->instr_len = tdg_handle_mmio(regs, ve);
+		if (ve->instr_len < 0) {
+			pr_warn_once("MMIO failed\n");
+			return -EFAULT;
+		}
+		break;
 	default:
 		pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
 		return -EFAULT;
-- 
2.25.1