From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com,
	Paolo Bonzini <pbonzini@redhat.com>,
	erdemaktas@google.com, Sean Christopherson <seanjc@google.com>,
	Sagi Shahar <sagis@google.com>
Subject: [RFC PATCH v6 103/104] KVM: x86: design documentation on TDX support of x86 KVM TDP MMU
Date: Thu,  5 May 2022 11:15:37 -0700	[thread overview]
Message-ID: <fe205fea212962098165f351e379e8e7305dcf42.1651774251.git.isaku.yamahata@intel.com> (raw)
In-Reply-To: <cover.1651774250.git.isaku.yamahata@intel.com>

From: Isaku Yamahata <isaku.yamahata@intel.com>

Add a high-level design document on the TDX changes to the TDP MMU.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/tdx-tdp-mmu.rst | 466 +++++++++++++++++++++++++
 1 file changed, 466 insertions(+)
 create mode 100644 Documentation/virt/kvm/tdx-tdp-mmu.rst

diff --git a/Documentation/virt/kvm/tdx-tdp-mmu.rst b/Documentation/virt/kvm/tdx-tdp-mmu.rst
new file mode 100644
index 000000000000..6d63bb75f785
--- /dev/null
+++ b/Documentation/virt/kvm/tdx-tdp-mmu.rst
@@ -0,0 +1,466 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Design of TDP MMU for TDX support
+=================================
+This document describes a (high level) design for TDX support in the TDP MMU of
+x86 KVM.
+
+In this document, we use "TD" or "guest TD" to differentiate a TDX virtual
+machine from the conventional "VM" (Virtual Machine) that KVM supports today.
+
+
+Background of TDX
+=================
+TD private memory is designed to hold TD private content, encrypted by the CPU
+using the TD ephemeral key.  An encryption engine holds a table of encryption
+keys, and an encryption key is selected for each memory transaction based on a
+Host Key Identifier (HKID).  By design, the host VMM does not have access to the
+encryption keys.
+
+In the first generation of MKTME, the HKID is "stolen" from the physical address
+by allocating a configurable number of bits from the top of the physical address.
+The HKID space is partitioned into shared HKIDs for legacy MKTME accesses and
+private HKIDs for SEAM-mode-only accesses.  The host uses 0 as the shared HKID so
+that MKTME can be opaque or bypassed on the host.
+
+During TDX non-root operation (i.e. guest TD), memory accesses can be qualified
+as either shared or private, based on the value of a new SHARED bit in the Guest
+Physical Address (GPA).  The CPU translates shared GPAs using the usual VMX EPT
+(Extended Page Table), called the "Shared EPT" in this document, which resides in
+host VMM memory.  The Shared EPT is directly managed by the host VMM, the same as
+with the current VMX.  Since guest TDs usually require I/O and the data exchange
+must be done via shared memory, KVM needs to use the current EPT functionality
+even for TDs.
+
+The CPU translates private GPAs using a separate Secure EPT.  The Secure EPT
+pages are encrypted and integrity-protected with the TD's ephemeral private key.
+Secure EPT can be managed _indirectly_ by the host VMM, using the TDX interface
+functions (SEAMCALLs), and thus conceptually Secure EPT is a subset of EPT
+because not all functionalities are available.
+
+Since the execution of such interface functions takes much longer than accessing
+memory directly, KVM uses the existing TDP code to mirror the Secure EPT for the
+TD.  There are at least two options today for the timing of executing such
+SEAMCALLs:
+
+1. synchronous, i.e. while walking the TDP page tables, or
+2. post-walk, i.e. record what needs to be done to the real Secure EPT during
+   the walk, and execute SEAMCALLs later.
+
+Option 1 seems more intuitive and simpler, but the Secure EPT concurrency rules
+are different from those of the TDP MMU or EPT.  For example, TDH.MEM.SEPT.RD()
+acquires shared access to the whole Secure EPT tree of the target TD.
+
+Secure EPT(SEPT) operations
+---------------------------
+Secure EPT is an Extended Page Table for GPA-to-HPA translation of TD private
+GPAs.  A Secure EPT is designed to be encrypted with the TD's ephemeral private
+key.  SEPT pages are allocated by the host VMM via Intel TDX functions, but their
+content is intended to be hidden and is not architectural.
+
+Unlike the conventional EPT, the host software can't directly read/write its
+entries.  Instead, the TDX SEAMCALL API is used.  Several SEAMCALLs correspond to
+operations on EPT entries.
+
+* TDH.MEM.SEPT.ADD():
+  Add a secure EPT page to the secure EPT tree.  This corresponds to updating
+  a non-leaf EPT entry with the present bit set.
+
+* TDH.MEM.SEPT.REMOVE():
+  Remove a secure EPT page from the secure EPT tree.  There is no corresponding
+  EPT operation.
+
+* TDH.MEM.SEPT.RD():
+  Read a secure EPT entry.  This corresponds to reading the EPT entry as
+  memory.  Please note that this is much slower than reading memory directly.
+
+* TDH.MEM.PAGE.ADD() and TDH.MEM.PAGE.AUG():
+  Add a private page to the secure EPT tree.  This corresponds to updating a
+  leaf EPT entry with the present bit set.
+
+* TDH.MEM.PAGE.REMOVE():
+  Remove a private page from the secure EPT tree.  There is no corresponding
+  EPT operation.
+
+* TDH.MEM.RANGE.BLOCK():
+  This (mostly) corresponds to clearing the present bit of a leaf EPT entry.
+  Note that the private page is still linked in the secure EPT.  To remove it
+  from the secure EPT, TDH.MEM.SEPT.REMOVE() and TDH.MEM.PAGE.REMOVE() need to
+  be called.
+
+* TDH.MEM.TRACK():
+  Increment the TLB epoch counter.  This (mostly) corresponds to an EPT TLB
+  flush.  Note that TDH.MEM.TRACK() itself doesn't remove anything; a blocked
+  private page remains linked in the secure EPT until TDH.MEM.PAGE.REMOVE() is
+  called.
+
+
+Adding private page
+-------------------
+The procedure for populating a private page looks as follows.
+
+1. TDH.MEM.SEPT.ADD(512G level)
+2. TDH.MEM.SEPT.ADD(1G level)
+3. TDH.MEM.SEPT.ADD(2M level)
+4. TDH.MEM.PAGE.AUG(4K level)
+
+Those operations correspond to updating the EPT entries.
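+
+A minimal C sketch of this sequence is shown below.  The wrapper names
+(tdx_sept_add(), tdx_page_aug()), the level numbering and the SEPT page
+allocator are illustrative assumptions, not the actual KVM or TDX module
+interface.
+::
+
+  #include <stdint.h>
+
+  typedef uint64_t gpa_t;
+  typedef uint64_t hpa_t;
+
+  /* Stand-ins for TDH.MEM.SEPT.ADD() and TDH.MEM.PAGE.AUG(). */
+  static int tdx_sept_add(gpa_t gpa, int level, hpa_t sept_page) { return 0; }
+  static int tdx_page_aug(gpa_t gpa, hpa_t private_page) { return 0; }
+  static hpa_t alloc_sept_page(void) { return 0; }
+
+  /* Build the non-leaf Secure EPT pages top-down, then map the leaf page. */
+  static int tdx_map_private_4k(gpa_t gpa, hpa_t private_page)
+  {
+          int level, ret;
+
+          /* 512G, 1G and 2M non-leaf entries (steps 1-3 above). */
+          for (level = 4; level >= 2; level--) {
+                  ret = tdx_sept_add(gpa, level, alloc_sept_page());
+                  if (ret)
+                          return ret;
+          }
+
+          /* 4K leaf entry (step 4): corresponds to setting the present bit. */
+          return tdx_page_aug(gpa, private_page);
+  }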
+
+Dropping private page and TLB shootdown
+---------------------------------------
+The procedure for dropping a private page looks as follows; a C sketch of the
+sequence follows the list.
+
+1. TDH.MEM.RANGE.BLOCK(4K level)
+   This mostly corresponds to clearing the present bit in the EPT entry.  It
+   prevents (or blocks) new TLB entries from being created in the future.  Note
+   that the private page is still linked in the secure EPT tree and existing
+   cached entries in the TLB aren't flushed.
+2. TDH.MEM.TRACK(range) and TLB shootdown
+   This mostly corresponds to the EPT TLB shootdown.  Because all vcpus share
+   the same Secure EPT, all vcpus need to flush their TLBs.
+   * One vcpu issues TDH.MEM.TRACK(range), which increments the global internal
+     TLB epoch counter.
+   * Send an IPI to the remote vcpus.
+   * The other vcpus exit from the guest TD to the VMM and then re-enter via
+     TDH.VP.ENTER().
+   * TDH.VP.ENTER() checks the TLB epoch counter and, if its TLB is stale,
+     flushes the TLB.
+   Note that only a single vcpu issues TDH.MEM.TRACK().
+   Note that, unlike with the conventional EPT, the private page is still linked
+   in the secure EPT tree.
+3. TDH.MEM.PAGE.PROMOTE(), TDH.MEM.PAGE.DEMOTE(), TDH.MEM.PAGE.RELOCATE(), or
+   TDH.MEM.PAGE.REMOVE()
+   There is no corresponding operation in the conventional EPT.
+   * When changing the page size (e.g. 4K <-> 2M), TDH.MEM.PAGE.PROMOTE() or
+     TDH.MEM.PAGE.DEMOTE() is used.  During those operations, the guest page is
+     kept referenced in the Secure EPT.
+   * When migrating a page, TDH.MEM.PAGE.RELOCATE() is used.  This requires both
+     a source page and a destination page.
+   * When destroying a TD, TDH.MEM.PAGE.REMOVE() removes the private page from
+     the secure EPT tree.  In this case a TLB shootdown is not needed because
+     the vcpus don't run any more.
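+
+A rough C sketch of this block/track/remove sequence for a single 4K page is
+shown below.  The wrapper names, the level argument encoding and the IPI helper
+are hypothetical stand-ins, not the actual KVM code.
+::
+
+  #include <stdint.h>
+
+  typedef uint64_t gpa_t;
+
+  /* Stand-ins for the SEAMCALL wrappers and the IPI machinery. */
+  static int tdx_range_block(gpa_t gpa, int level) { return 0; }
+  static int tdx_track(void) { return 0; }
+  static int tdx_page_remove(gpa_t gpa, int level) { return 0; }
+  static void kick_all_vcpus(void) { /* send IPIs, force TD re-entry */ }
+
+  static int tdx_drop_private_4k(gpa_t gpa)
+  {
+          int ret;
+
+          ret = tdx_range_block(gpa, 1);  /* 1. block new TLB entries */
+          if (ret)
+                  return ret;
+
+          ret = tdx_track();              /* 2. bump the TLB epoch counter */
+          if (ret)
+                  return ret;
+          kick_all_vcpus();               /* stale TLBs flush on re-entry */
+
+          return tdx_page_remove(gpa, 1); /* 3. unlink the page from the SEPT */
+  }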
+
+The basic idea for TDX support
+==============================
+Because the shared EPT is the same as the existing EPT, the existing logic is
+used for the shared EPT.  The secure EPT, on the other hand, requires additional
+operations instead of directly reading/writing the EPT entry.
+
+On an EPT violation, the KVM MMU walks down the EPT tree from the root,
+determines the EPT entry to operate on, and updates the entry.  If necessary, a
+TLB shootdown is done.  Because it's very slow to directly walk the secure EPT
+via the TDX SEAMCALL TDH.MEM.SEPT.RD(), a mirror of the secure EPT is created
+and maintained.  Hooks are added to the KVM MMU to reuse the existing code.
+
+EPT violation on shared GPA
+---------------------------
+(1) EPT violation on shared GPA or zapping shared GPA
+    walk down the shared EPT tree (the existing code)
+        |
+        |
+        V
+shared EPT tree (the CPU refers to it)
+(2) update the EPT entry. (the existing code)
+    TLB shootdown in the case of zapping.
+
+
+EPT violation on private GPA
+----------------------------
+(1) EPT violation on private GPA or zapping private GPA
+    walk down the mirror of the secure EPT tree (mostly the same as the
+    existing code)
+        |
+        |
+        V
+mirror of secure EPT tree (KVM MMU software only; reuses the existing code)
+(2) update the (mirrored) EPT entry. (mostly the same as the existing code)
+(3) call the hooks with information about which EPT entry was changed
+        |
+        NEW: hooks in KVM MMU
+        |
+        V
+secure EPT root (the CPU refers to it)
+(4) the TDX backend calls the necessary TDX SEAMCALLs to update the real secure
+    EPT.
+
+The major modification is to add hooks for the TDX backend for the additional
+operations, to pass down which EPT is used (shared EPT or private EPT), and to
+twist the behavior if we're operating on the private EPT.
+
+The following depicts the relationship.
+::
+
+                    KVM                             |       TDX module
+                     |                              |           |
+        -------------+----------                    |           |
+        |                      |                    |           |
+        V                      V                    |           |
+     shared GPA           private GPA               |           |
+  CPU shared EPT pointer  KVM private EPT pointer   |  CPU secure EPT pointer
+        |                      |                    |           |
+        |                      |                    |           |
+        V                      V                    |           V
+  shared EPT                private EPT<-------mirror----->Secure EPT
+        |                      |                    |           |
+        |                      \--------------------+------\    |
+        |                                           |      |    |
+        V                                           |      V    V
+  shared guest page                                 |    private guest page
+                                                    |
+                                                    |
+                              non-encrypted memory  |    encrypted memory
+                                                    |
+
+shared EPT: CPU and KVM walk with shared GPA
+            Maintained by the existing code
+private EPT: KVM walks with private GPA
+             Maintained by the twisted existing code
+secure EPT: CPU walks with private GPA.
+            Maintained by TDX module with TDX SEAMCALLs via hooks
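+
+One way to picture these hooks is as a small set of callbacks invoked whenever
+the mirrored (private) EPT changes.  The structure and member names below are
+an illustrative sketch only, not the actual kvm_x86_ops layout.
+::
+
+  #include <stdint.h>
+
+  typedef uint64_t gfn_t;
+  typedef uint64_t kvm_pfn_t;
+
+  struct private_mmu_hooks {
+          /* A non-leaf entry of the mirrored EPT was added. */
+          int (*link_private_sp)(gfn_t gfn, int level, void *sept_page);
+          /* A leaf entry of the mirrored EPT became present. */
+          int (*set_private_spte)(gfn_t gfn, int level, kvm_pfn_t pfn);
+          /* A leaf entry of the mirrored EPT was zapped (blocked). */
+          int (*zap_private_spte)(gfn_t gfn, int level);
+          /* TLB shootdown time: TDH.MEM.TRACK() plus IPIs. */
+          void (*flush_private_tlbs)(void);
+  };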
+
+
+Tracking private EPT page
+=========================
+Shared EPT pages are managed by struct kvm_mmu_page.  They are linked in a list
+structure.  When necessary, the list is traversed to operate on them.  Private
+EPT pages have different characteristics.  For example, private pages can't be
+swapped out.  When shrinking memory, we'd like to traverse only shared EPT pages
+and skip private EPT pages.  Likewise, page migration isn't supported for
+private pages (yet).  Introduce an additional list so that shared EPT pages and
+private EPT pages are tracked independently.
+
+At the beginning of EPT violation handling, the fault handler knows the faulting
+GPA, and thus it knows which EPT to operate on, private or shared.  If it's the
+private EPT, an additional task is done, something like
+"if (private) { call a hook }".  Since the fault handler involves deep call
+chains, it's cumbersome to carry the information of which EPT is being operated
+on.  Options to mitigate this are:
+
+1. Pass the information as an argument for the function call.
+2. Record the information in struct kvm_mmu_page somehow.
+3. Record the information in the vcpu structure.
+
+Option 2 was chosen because option 1 requires modifying all the functions
+involved, which would badly affect the normal case.  Option 3 doesn't work well
+because in some cases we need to walk both the private and the shared EPT.
+
+The role of the EPT page can be utilized: one bit can be carved out from the
+unused bits in struct kvm_mmu_page_role.  When allocating the EPT page,
+initialize this information.  In most places struct kvm_mmu_page is available
+because we're operating on EPT pages.
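+
+A sketch of the idea is shown below.  The real struct kvm_mmu_page_role is a
+packed union of bit-fields in KVM; the standalone structure and the is_private
+name here are only illustrative.
+::
+
+  #include <stdbool.h>
+
+  struct mmu_page_role_sketch {
+          unsigned level      : 4;   /* existing: page table level */
+          unsigned is_private : 1;   /* NEW: page maps private GPAs */
+          /* ... the other existing role bits ... */
+  };
+
+  static bool sp_is_private(const struct mmu_page_role_sketch *role)
+  {
+          return role->is_private;
+  }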
+
+
+The conversion of private GPA and shared GPA
+============================================
+A page at a given GPA can be assigned as either private or shared, but not both,
+at any one time.  The GPA can't be accessed simultaneously via both a private GPA
+and a shared GPA.  On guest startup, all GPAs are assigned as private.  The guest
+converts a range of GPAs from private (or shared) to shared (or private) with the
+MapGPA hypercall, which takes the start GPA and the size of the region.  If the
+given start GPA is shared, the VMM converts the region into shared (if it's
+already shared, this is a nop).  If the start GPA is private, the VMM converts
+the region into private.  This implies that the guest won't access the now
+unmapped private (or shared) region after converting it to shared (or private).
+
+If the guest TD triggers an EPT violation on a region that has already been
+converted to the other type, the access won't be allowed (it loops in EPT
+violation) until another vcpu converts the region back.
+
+The KVM MMU records which kind of GPA access, private or shared, is allowed.  It
+steals a software-usable bit, SPTE_SHARED_MASK, from the MMU present mask.  The
+bit is recorded in both the shared EPT and the mirror of the secure EPT.
+
+* If SPTE_SHARED_MASK is cleared in the shared EPT and the mirror of the secure
+  EPT: private GPA access is allowed, shared GPA access is not allowed.
+
+* If SPTE_SHARED_MASK is set in the shared EPT and the mirror of the secure EPT:
+  private GPA access is not allowed, shared GPA access is allowed.
+
+The default is that SPTE_SHARED_MASK is cleared so that the existing KVM
+MMU code (mostly) works.
+
+The reason the bit is recorded in both the shared and the private EPT is to
+optimize the EPT violation path at the cost of penalizing the MapGPA hypercall.
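+
+A sketch of the corresponding checks is shown below.  The bit position chosen
+for SPTE_SHARED_MASK is arbitrary and used only for illustration; the bit
+actually stolen from the MMU present mask may differ.
+::
+
+  #include <stdbool.h>
+  #include <stdint.h>
+
+  #define SPTE_SHARED_MASK  (1ULL << 62)   /* assumed software-usable bit */
+
+  /* Shared GPA access is allowed iff the bit is set ... */
+  static bool spte_allows_shared(uint64_t spte)
+  {
+          return spte & SPTE_SHARED_MASK;
+  }
+
+  /* ... and private GPA access is allowed iff the bit is clear. */
+  static bool spte_allows_private(uint64_t spte)
+  {
+          return !(spte & SPTE_SHARED_MASK);
+  }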
+
+The state machine of EPT entry
+------------------------------
+(private EPT entry, shared EPT entry) =
+        (non-present, non-present):             private mapping is allowed
+        (present, non-present):                 private mapping is mapped
+        (non-present | SPTE_SHARED_MASK, non-present | SPTE_SHARED_MASK):
+                                                shared mapping is allowed
+        (non-present | SPTE_SHARED_MASK, present | SPTE_SHARED_MASK):
+                                                shared mapping is mapped
+        (present | SPTE_SHARED_MASK, any):      invalid combination
+
+* map_gpa(private GPA): Mark the region as private-GPA-allowed (NEW)
+        private EPT entry: clear SPTE_SHARED_MASK
+          present: nop
+          non-present: nop
+          non-present | SPTE_SHARED_MASK -> non-present (clear SPTE_SHARED_MASK)
+
+        shared EPT entry: zap the entry, clear SPTE_SHARED_MASK
+          present: invalid
+          non-present -> non-present: nop
+          present | SPTE_SHARED_MASK -> non-present
+          non-present | SPTE_SHARED_MASK -> non-present
+
+* map_gpa(shared GPA): Mark the region as shared-GPA-allowed (NEW); a C sketch
+  of this transition follows these transition lists
+        private EPT entry: zap and set SPTE_SHARED_MASK
+          present     -> non-present | SPTE_SHARED_MASK
+          non-present -> non-present | SPTE_SHARED_MASK
+          non-present | SPTE_SHARED_MASK: nop
+
+        shared EPT entry: set SPTE_SHARED_MASK
+          present: invalid
+          non-present -> non-present | SPTE_SHARED_MASK
+          present | SPTE_SHARED_MASK -> present | SPTE_SHARED_MASK: nop
+          non-present | SPTE_SHARED_MASK -> non-present | SPTE_SHARED_MASK: nop
+
+* map(private GPA)
+        private EPT entry
+          present: nop
+          non-present -> present
+          non-present | SPTE_SHARED_MASK: nop. looping on EPT violation(NEW)
+
+        shared EPT entry: nop
+
+* map(shared GPA)
+        private EPT entry: nop
+
+        shared EPT entry
+          present: invalid
+          present | SPTE_SHARED_MASK: nop
+          non-present | SPTE_SHARED_MASK -> present | SPTE_SHARED_MASK
+          non-present: nop. looping on EPT violation(NEW)
+
+* zap(private GPA)
+        private EPT entry: zap the entry while keeping SPTE_SHARED_MASK
+          present -> non-present
+          present | SPTE_SHARED_MASK: invalid
+          non-present: nop as is_shadow_present_pte() is checked
+          non-present | SPTE_SHARED_MASK: nop as is_shadow_present_pte() is
+                                          checked
+
+        shared EPT entry: nop
+
+* zap(shared GPA)
+        private EPT entry: nop
+
+        shared EPT entry: zap
+          any -> non-present
+          present: invalid
+          present | SPTE_SHARED_MASK -> non-present | SPTE_SHARED_MASK
+          non-present: nop as is_shadow_present_pte() is checked
+          non-present | SPTE_SHARED_MASK: nop as is_shadow_present_pte() is
+                                          checked
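+
+A C sketch of the map_gpa(shared GPA) transition on the private (mirrored)
+entry, following the tables above, is shown below.  The bit encodings, the
+is_shadow_present_pte() stand-in and the zap helper are simplified assumptions,
+not the actual KVM code.
+::
+
+  #include <stdbool.h>
+  #include <stdint.h>
+
+  #define SPTE_SHARED_MASK   (1ULL << 62)   /* assumed software-usable bit */
+  #define SPTE_PRESENT_MASK  (1ULL << 0)    /* simplified present encoding */
+
+  static bool is_shadow_present_pte(uint64_t spte)
+  {
+          return spte & SPTE_PRESENT_MASK;
+  }
+
+  static void zap_private_page(uint64_t spte)
+  {
+          /* BLOCK + TRACK + REMOVE, as described earlier. */
+  }
+
+  static uint64_t map_gpa_shared_on_private_entry(uint64_t spte)
+  {
+          if (spte & SPTE_SHARED_MASK)       /* already shared-allowed: nop */
+                  return spte;
+
+          if (is_shadow_present_pte(spte))   /* present: zap the mapping */
+                  zap_private_page(spte);
+
+          /* present/non-present -> non-present | SPTE_SHARED_MASK */
+          return SPTE_SHARED_MASK;
+  }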
+
+
+The original TDP MMU and race condition
+=======================================
+Because vcpus share the EPT, once an EPT entry is zapped, a TLB shootdown is
+needed: send an IPI to the remote vcpus and have them flush their own TLBs.
+Until the TLB shootdown is done, vcpus may still reference the zapped guest
+page.
+
+The TDP MMU takes the read lock of mmu_lock to mitigate vcpu contention; while
+the read lock is held, it relies on atomic updates of the EPT entries.  (The
+legacy MMU, on the other hand, takes the write lock.)  When a vcpu is
+populating/zapping an EPT entry with the read lock held, another vcpu may be
+populating or zapping the same EPT entry at the same time.
+
+To avoid this race condition, the entry is frozen: the EPT entry is set to a
+special value, REMOVED_SPTE, which clears the present bit.  Then, after the TLB
+shootdown, the EPT entry is updated to the final value.
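+
+A minimal sketch of the freeze step, using C11 atomics as a stand-in for the
+kernel's cmpxchg, is shown below.  The REMOVED_SPTE value is an arbitrary
+non-present marker chosen for illustration.
+::
+
+  #include <stdatomic.h>
+  #include <stdbool.h>
+  #include <stdint.h>
+
+  #define REMOVED_SPTE  0x5a0ULL   /* assumed non-present marker value */
+
+  /* Returns true if this caller froze the entry, false if it lost the race. */
+  static bool try_freeze_spte(_Atomic uint64_t *sptep, uint64_t old_spte)
+  {
+          return atomic_compare_exchange_strong(sptep, &old_spte, REMOVED_SPTE);
+  }
+
+  /* Called only after the TLB shootdown has completed. */
+  static void set_final_spte(_Atomic uint64_t *sptep, uint64_t new_spte)
+  {
+          atomic_store(sptep, new_spte);
+  }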
+
+Concurrent zapping
+------------------
+1. read lock
+2. freeze the EPT entry (atomically set the value to REMOVED_SPTE)
+   If another vcpu has already frozen the entry, restart the page fault.
+3. TLB shootdown
+   * send an IPI to the remote vcpus
+   * TLB flush (local and remote)
+   For each entry update, a TLB shootdown is needed because of the
+   concurrency.
+4. atomically set the EPT entry to the final value
+5. read unlock
+
+Concurrent populating
+---------------------
+In the case of populating a non-present EPT entry, atomically update the EPT
+entry.
+
+1. read lock
+2. atomically update the EPT entry
+   If another vcpu has frozen or updated the entry, restart the page fault.
+3. read unlock
+
+In the case of updating a present EPT entry (e.g. page migration), the operation
+is split into two steps: zapping the entry and populating the entry.
+
+1. read lock
+2. zap the EPT entry, following the concurrent zapping case above
+3. populate the non-present EPT entry
+4. read unlock
+
+Non-concurrent batched zapping
+------------------------------
+In some cases, zapping a range of entries is done exclusively with the write
+lock held.  In this case, the TLB shootdowns are batched into one.
+
+1. write lock
+2. zap the EPT entries by traversing them
+3. TLB shootdown
+4. write unlock
+
+
+For Secure EPT, TDX SEAMCALLs are needed in addition to updating the mirrored
+EPT entry.
+
+TDX concurrent zapping
+----------------------
+Add a hook for TDX SEAMCALLs at the step of the TLB shootdown.
+
+1. read lock
+2. freeze the EPT entry (set the value to REMOVED_SPTE)
+3. TLB shootdown via a hook
+   * TDH.MEM.RANGE.BLOCK()
+   * TDH.MEM.TRACK()
+   * send an IPI to the remote vcpus
+4. set the EPT entry to the final value
+5. read unlock
+
+TDX concurrent populating
+-------------------------
+TDX SEAMCALLs are required in addition to operating on the mirrored EPT entry.
+The frozen entry is utilized, following the zapping case, to avoid the race
+condition.  A hook can be added; a C sketch of the resulting sequence follows
+the race example below.
+
+1. read lock
+2. freeze the EPT entry
+3. hook
+   * TDH.MEM.SEPT.ADD() for a non-leaf entry or TDH.MEM.PAGE.AUG() for a leaf.
+4. set the EPT entry to the final value
+5. read unlock
+
+Without freezing the entry, the following race can happen.  Suppose two vcpus
+are faulting on the same GPA and the 2M and 4K level entries aren't populated
+yet.
+
+* vcpu 1: update 2M level EPT entry
+* vcpu 2: update 4K level EPT entry
+* vcpu 2: TDX SEAMCALL to update 4K secure EPT entry => error
+* vcpu 1: TDX SEAMCALL to update 2M secure EPT entry
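+
+A sketch of the freeze-then-SEAMCALL ordering that prevents this race is shown
+below.  The tdx_sept_add()/tdx_page_aug() wrappers stand in for
+TDH.MEM.SEPT.ADD()/TDH.MEM.PAGE.AUG(), and REMOVED_SPTE is an assumed marker
+value; none of the names reflect the actual KVM code.
+::
+
+  #include <stdatomic.h>
+  #include <stdbool.h>
+  #include <stdint.h>
+
+  #define REMOVED_SPTE  0x5a0ULL   /* assumed non-present marker value */
+
+  typedef uint64_t gpa_t;
+
+  static int tdx_sept_add(gpa_t gpa, int level) { return 0; }
+  static int tdx_page_aug(gpa_t gpa) { return 0; }
+
+  static bool tdx_populate_private_spte(_Atomic uint64_t *sptep,
+                                        uint64_t old_spte, uint64_t new_spte,
+                                        gpa_t gpa, int level, bool leaf)
+  {
+          /* 2. freeze: a vcpu losing this race restarts the page fault. */
+          if (!atomic_compare_exchange_strong(sptep, &old_spte, REMOVED_SPTE))
+                  return false;
+
+          /*
+           * 3. hook: the SEAMCALL is issued while the entry is frozen, so no
+           * other vcpu can issue a conflicting SEAMCALL for the same GPA.
+           */
+          if (leaf)
+                  tdx_page_aug(gpa);
+          else
+                  tdx_sept_add(gpa, level);
+
+          /* 4. set the final value; the entry is no longer frozen. */
+          atomic_store(sptep, new_spte);
+          return true;
+  }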
+
+
+TDX non-concurrent batched zapping
+----------------------------------
+For simplicity, the procedure of concurrent populating is utilized.  The
+procedure can be optimized later.
+
+
+Co-existing with unmapping guest private memory
+===============================================
+TODO.  This needs to be addressed.
+
+
+Restrictions or future work
+===========================
+The following features aren't supported yet.
+
+* Optimizing non-concurrent zap
+* Large pages
+* Page migration
-- 
2.25.1

