From: Kai Huang <kai.huang@intel.com>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org
Cc: sathyanarayanan.kuppuswamy@linux.intel.com, aarcange@redhat.com,
	ak@linux.intel.com, dan.j.williams@intel.com, david@redhat.com,
	hpa@zytor.com, jgross@suse.com, jmattson@google.com,
	joro@8bytes.org, jpoimboe@redhat.com, knsathya@kernel.org,
	pbonzini@redhat.com, sdeep@vmware.com, seanjc@google.com,
	tony.luck@intel.com, vkuznets@redhat.com, wanpengli@tencent.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [PATCHv7 30/30] Documentation/x86: Document TDX kernel architecture
Date: Mon, 04 Apr 2022 16:32:42 +1200	[thread overview]
Message-ID: <7c507b609a6e742ebdcd61ea204d5f05ee0b3b35.camel@intel.com> (raw)
In-Reply-To: <20220318153048.51177-31-kirill.shutemov@linux.intel.com>

On Fri, 2022-03-18 at 18:30 +0300, Kirill A. Shutemov wrote:
> From: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> 
> Document the TDX guest architecture details like #VE support,
> shared memory, etc.
> 
> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
> ---
>  Documentation/x86/index.rst |   1 +
>  Documentation/x86/tdx.rst   | 214 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 215 insertions(+)
>  create mode 100644 Documentation/x86/tdx.rst
> 
> diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
> index f498f1d36cd3..382e53ca850a 100644
> --- a/Documentation/x86/index.rst
> +++ b/Documentation/x86/index.rst
> @@ -24,6 +24,7 @@ x86-specific Documentation
>     intel-iommu
>     intel_txt
>     amd-memory-encryption
> +   tdx
>     pti
>     mds
>     microcode
> diff --git a/Documentation/x86/tdx.rst b/Documentation/x86/tdx.rst
> new file mode 100644
> index 000000000000..8ca60256511b
> --- /dev/null
> +++ b/Documentation/x86/tdx.rst
> @@ -0,0 +1,214 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=====================================
> +Intel Trust Domain Extensions (TDX)
> +=====================================
> +
> +Intel's Trust Domain Extensions (TDX) protect confidential guest VMs
> +from the host and from physical attacks by isolating guest register
> +state and by encrypting guest memory. In TDX, a special TDX module sits
> +between the host and the guest, runs in a special CPU mode, and manages
> +the guest/host separation.
> +
> +Since the host cannot directly access guest registers or memory, much
> +normal functionality of a hypervisor must be moved into the guest. This is
> +implemented using a Virtualization Exception (#VE) that is handled by the
> +guest kernel. Some #VEs are handled entirely inside the guest kernel, but
> +some require the hypervisor to be involved.
> +
> +TDX includes new hypercall-like mechanisms for communicating from the
> +guest to the hypervisor or the TDX module.
> +
> +New TDX Exceptions
> +==================

Hi Kirill, all,

TDX host support also introduces documentation.  Currently a separate
tdx_host.rst is added for it, since it contains a non-trivial amount of
material too (subject to change, of course).  More information here:

https://lore.kernel.org/lkml/cover.1647167475.git.kai.huang@intel.com/T/#m1f51210c0ed72c6dd05abe90453d50740e97ceed

I don't believe this split is ideal.  I think we have two options here:
1) Have dedicated tdx_guest.rst and tdx_host.rst files.
2) Have a single tdx.rst that covers both TDX guest and host material.

I don't know which way is better; perhaps 2).

Kirill,

If 2) is better, and if you need to send out a v8, could we add an
additional menu level for "TDX guest support" and make all the guest
material sub-entries under it?


> +
> +TDX guests behave differently from bare-metal and traditional VMX guests.
> +In TDX guests, otherwise normal instructions or memory accesses can cause
> +#VE or #GP exceptions.
> +
> +Instructions marked with an '*' conditionally cause exceptions.  The
> +details for these instructions are discussed below.
> +
> +Instruction-based #VE
> +---------------------
> +
> +- Port I/O (INS, OUTS, IN, OUT)
> +- HLT
> +- MONITOR, MWAIT
> +- WBINVD, INVD
> +- VMCALL
> +- RDMSR*,WRMSR*
> +- CPUID*
> +
> +Instruction-based #GP
> +---------------------
> +
> +- All VMX instructions: INVEPT, INVVPID, VMCLEAR, VMFUNC, VMLAUNCH,
> +  VMPTRLD, VMPTRST, VMREAD, VMRESUME, VMWRITE, VMXOFF, VMXON
> +- ENCLS, ENCLU
> +- GETSEC
> +- RSM
> +- ENQCMD
> +- RDMSR*,WRMSR*
> +
> +RDMSR/WRMSR Behavior
> +--------------------
> +
> +MSR access behavior falls into three categories:
> +
> +- #GP generated
> +- #VE generated
> +- "Just works"
> +
> +In general, the #GP MSRs should not be used in guests.  Their use likely
> +indicates a bug in the guest.  The guest may try to handle the #GP with a
> +hypercall but it is unlikely to succeed.
> +
> +The #VE MSRs can typically be handled by the hypervisor.  Guests can
> +make a hypercall to the hypervisor to handle the #VE.
> +
> +The "just works" MSRs do not need any special guest handling.  They might
> +be implemented by directly passing through the MSR to the hardware or by
> +trapping and handling in the TDX module.  Other than possibly being slow,
> +these MSRs appear to function just as they would on bare metal.
> +
> +CPUID Behavior
> +--------------
> +
> +For some CPUID leaves and sub-leaves, the virtualized bit fields of CPUID
> +return values (in guest EAX/EBX/ECX/EDX) are configurable by the
> +hypervisor. For such cases, the Intel TDX module architecture defines two
> +virtualization types:
> +
> +- Bit fields for which the hypervisor configures the value seen by the
> +  guest TD.
> +
> +- Bit fields for which the hypervisor configures the value such that the
> +  guest TD either sees their native value or a value of 0.
> +
> +A #VE is generated for CPUID leaves and sub-leaves that the TDX module
> +doesn't know how to handle. The guest kernel may ask the hypervisor for
> +the value with a hypercall.
> +
> +#VE on Memory Accesses
> +======================
> +
> +There are essentially two classes of TDX memory: private and shared.
> +Private memory receives full TDX protections.  Its content is protected
> +against access from the hypervisor.  Shared memory is expected to be
> +shared between guest and hypervisor.
> +
> +A TD guest is in control of whether its memory accesses are treated as
> +private or shared.  It selects the behavior with a bit in its page table
> +entries.  This helps ensure that a guest does not place sensitive
> +information in shared memory, exposing it to the untrusted hypervisor.
> +
> +#VE on Shared Memory
> +--------------------
> +
> +Access to shared mappings can cause a #VE.  The hypervisor ultimately
> +controls whether a shared memory access causes a #VE, so the guest must
> +be careful to only reference shared pages for which it can safely
> +handle a #VE.  For instance, the guest should be careful not to access
> +shared memory in the #VE handler before it reads the #VE info structure
> +(TDG.VP.VEINFO.GET).
> +
> +Shared mapping content is entirely controlled by the hypervisor. Shared
> +mappings must never be used for sensitive memory content like stacks or
> +kernel text, only for I/O buffers and MMIO regions.  A good rule of thumb
> +is that hypervisor-shared memory should be treated the same as memory
> +mapped to userspace.  Both the hypervisor and userspace are completely
> +untrusted.
> +
> +MMIO for virtual devices is implemented as shared memory.  The guest must
> +be careful not to access device MMIO regions unless it is also prepared to
> +handle a #VE.
> +
> +#VE on Private Pages
> +--------------------
> +
> +Accesses to private mappings can also cause #VEs.  Since all kernel memory
> +is also private memory, the kernel might theoretically need to handle a
> +#VE on arbitrary kernel memory accesses.  This is not feasible, so TDX
> +guests ensure that all guest memory has been "accepted" before memory is
> +used by the kernel.
> +
> +A modest amount of memory (typically 512M) is pre-accepted by the firmware
> +before the kernel runs to ensure that the kernel can start up without
> +being subjected to #VE's.
> +
> +The hypervisor is permitted to unilaterally move accepted pages to a
> +"blocked" state. However, if it does this, page access will not generate a
> +#VE.  It will, instead, cause a "TD Exit" where the hypervisor is required
> +to handle the exception.
> +
> +Linux #VE handler
> +=================
> +
> +Just like page faults or #GP's, #VE exceptions can either be handled or
> +be fatal.  Typically, unhandled userspace #VE's result in a SIGSEGV.
> +Unhandled kernel #VE's result in an oops.
> +
> +Handling nested exceptions on x86 is typically nasty business.  A #VE
> +could be interrupted by an NMI which triggers another #VE and hilarity
> +ensues.  TDX #VE's have a novel solution to make it slightly less nasty.
> +
> +During #VE handling, the TDX module ensures that all interrupts (including
> +NMIs) are blocked.  The block remains in place until the guest makes a
> +TDG.VP.VEINFO.GET TDCALL.  This allows the guest to choose when interrupts
> +or new #VE's can be delivered.
> +
> +However, the guest kernel must still be careful to avoid potential
> +#VE-triggering actions (discussed above) while this block is in place.
> +While the block is in place, #VE's are elevated to double faults (#DF)
> +which are not recoverable.
> +
> +MMIO handling
> +=============
> +
> +In non-TDX VMs, MMIO is usually implemented by giving a guest access to
> +a mapping which will cause a VMEXIT on access, and then the hypervisor
> +emulates the access.  That is not possible in TDX guests because a
> +VMEXIT would expose the register state to the host.  TDX guests don't
> +trust the host and can't have their state exposed to it.
> +
> +In TDX, the MMIO regions typically trigger a #VE exception in the guest.
> +The guest #VE handler then emulates the MMIO instruction inside the guest
> +and converts it into a controlled TDCALL to the host, rather than exposing
> +guest state to the host.
> +
> +MMIO addresses on x86 are just special physical addresses. They can
> +theoretically be accessed with any instruction that accesses memory.
> +However, the kernel instruction decoding method is limited. It is only
> +designed to decode instructions like those generated by io.h macros.
> +
> +MMIO access via other means (like structure overlays) may result in an
> +oops.
> +
> +Shared Memory Conversions
> +=========================
> +
> +All TDX guest memory starts out as private at boot.  This memory cannot
> +be accessed by the hypervisor.  However, some kernel users like device
> +drivers might need to share data with the hypervisor.  To do this,
> +memory must be converted between shared and private.  This can be
> +accomplished using some existing memory encryption helpers:
> +
> +set_memory_decrypted() converts a range of pages to shared.
> +set_memory_encrypted() converts memory back to private.
> +
> +Device drivers are the primary user of shared memory, but there's no
> +need to touch every driver. DMA buffers and ioremap()'ed memory do the
> +conversions automatically.
> +
> +TDX uses SWIOTLB for most DMA allocations. The SWIOTLB buffer is
> +converted to shared on boot.
> +
> +For coherent DMA allocation, the DMA buffer gets converted on the
> +allocation. Check force_dma_unencrypted() for details.
> +
> +References
> +==========
> +
> +TDX reference material is collected here:
> +
> +https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html


