* [PATCH v7 000/102] KVM TDX basic feature support
@ 2022-06-27 21:52 isaku.yamahata
  2022-06-27 21:52 ` [PATCH v7 001/102] KVM: x86: Move check_processor_compatibility from init ops to runtime ops isaku.yamahata
                   ` (103 more replies)
  0 siblings, 104 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

KVM TDX basic feature support

Hello.  This is v7 of the patch series for KVM TDX support.
This is based on v5.19-rc1 + kvm/queue branch + TDX HOST patch series.
The tree can be found at https://github.com/intel/tdx/tree/kvm-upstream
How to run/test: It's described at https://github.com/intel/tdx/wiki/TDX-KVM

Major changes from v6:
- rebased to v5.19 base

TODO:
- integrate fd-based guest memory. As the discussion is still ongoing, I
  intentionally left out fd-based guest memory support for now.  The integration
  can be found at https://github.com/intel/tdx/tree/kvm-upstream-workaround.
- 2M large page support. It's work-in-progress.
For large page support, there are several design choices; the options are
described below.  Any thoughts/feedback?

KVM MMU Large page support for TDX

* What needs to be done
- Track private or shared state for each page size (4KB, 2MB, 1GB) based on
  TDG.VP.VMCALL<MapGPA>.  For large pages (2MB, 1GB), the state can be mixed
  (some lower-size pages are private and some are shared).  In this case, the
  page can't be mapped as a large page.
- if necessary, split large page on TDG.VP.VMCALL<MapGPA>
  (split on dirty page tracking is future work)
- resolving KVM page fault
  When resolving a private page and the page is large on the host, the GPA can
  be resolved as a large page in the Secure-EPT.  Even if the page is large on
  the host side, sometimes only a 4KB page can be resolved because it's up to
  the guest TD to accept at 4KB, 2MB, or 1GB.
- collapsing pages into a large page.
  At this point, it's okay to not implement this.  When dirty page tracking is
  supported, this needs to be supported.
  - On MapGPA, the page can be collapsed into a large page
  - handle zapping SPTE and try to collapse the pages on the next KVM page fault
    Unlike the EPT case, some trick is needed.
- For performance, optimize KVM page fault path at the cost of complicating
  MapGPA path.

* options to track private or shared
At each page size (4KB, 2MB, and 1GB), track private, shared, or mixed (the
mixed state applies only to 2MB and 1GB).  For each 4KB page, 1 bit per page is
needed: private or shared.  For large pages (2MB and 1GB), 2 bits per large
page are needed: private, shared, or mixed.  When resolving a KVM page fault,
for performance we don't want to scan the lower-size pages to check whether the
given GPA can be mapped large; check it on MapGPA instead.

Option A). enhance kvm_arch_memory_slot
  enum kvm_page_type {
       KVM_PAGE_TYPE_INVALID,
       KVM_PAGE_TYPE_SHARED,
       KVM_PAGE_TYPE_PRIVATE,
       KVM_PAGE_TYPE_MIXED,
  };

  struct kvm_page_attr {
       enum kvm_page_type type;
  };

  struct kvm_arch_memory_slot {
 +      struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
  };

Option B). steal one more bit SPTE_MIXED_MASK in addition to SPTE_SHARED_MASK
If !SPTE_MIXED_MASK, it can be a large page.

Option C). use SPTE_SHARED_MASK and a kvm_mmu_page::mixed bitmap
A kvm_mmu_page::mixed bitmap in the 1GB-level page and in the root indicates
mixed for 2MB and 1GB ranges, respectively.


* comparison
A).
+ straightforward to implement
+ SPTE_SHARED_MASK isn't needed
- memory overhead compared to B). or C).
- more memory reference on KVM page fault

B).
+ simpler than C) (but more complex than A?)
+ efficient on KVM page fault. (only SPTE reference)
+ low memory overhead
- Waste precious SPTE bits.

C).
+ efficient on KVM page fault. (only SPTE reference)
+ low memory overhead
- complicates MapGPA
- scattered data structure
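
As an illustration of option A), here is a minimal sketch (a hypothetical
helper, not part of this series) of how the per-memslot attribute array could
be consulted on a KVM page fault to decide whether a large mapping is allowed:

  /*
   * Sketch for option A).  kvm_page_attr/kvm_page_type are the structures
   * proposed above; this helper and its indexing scheme are illustrative.
   */
  static bool fault_can_use_large_page(struct kvm_arch_memory_slot *aslot,
                                       unsigned long idx, int level)
  {
          /* idx is the page index within the slot, scaled to 'level'. */
          struct kvm_page_attr *attr = &aslot->page_attr[level - 1][idx];

          /*
           * A large mapping is possible only when the whole range is
           * uniformly private or uniformly shared; MIXED forces 4KB.
           */
          return attr->type == KVM_PAGE_TYPE_PRIVATE ||
                 attr->type == KVM_PAGE_TYPE_SHARED;
  }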

Thanks,
Isaku Yamahata

Changes from v6:
- rebased to v5.19

Changes from v5:
- export __seamcall and use it
- move mutex lock from callee function of smp_call_on_cpu to the caller.
- rename mmu_prezap => flush_shadow_all_private() and tdx_mmu_release_hkid
- updated comment
- drop the use of tdh_mng_key.reclaimid(): as the function is for backward
  compatibility to only return success
- struct kvm_tdx_cmd: metadata => flags, added __u64 error.
- make this ioctl systemwide ioctl
- ABI change to struct kvm_init_vm
- guest_tsc_khz: use kvm->arch.default_tsc_khz
- rename BUILD_BUG_ON_MEMCPY to MEMCPY_SAME_SIZE
- drop exporting kvm_set_tsc_khz().
- fix kvm_tdp_page_fault() for mtrr emulation
- rename it to kvm_gfn_shared_mask(), dropped kvm_gpa_shared_mask()
- drop kvm_is_private_gfn(), kept kvm_is_private_gpa()
  keep kvm_{gfn, gpa}_private(), kvm_gpa_private()
- update commit message
- rename shadow_init_value => shadow_nonpresent_value
- added ept_violation_ve_test mode
- shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in tdp_mmu.c
- legacy MMU case
  => - mmu_topup_shadow_page_cache(), kvm_mmu_create()
     - FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
- #VE warning:
- rename: REMOVED_SPTE => __REMOVED_SPTE, SHADOW_REMOVED_SPTE => REMOVED_SPTE
- merge: as we discussed, this patch was merged with patch
  "KVM: x86/mmu: Allow non-zero init value for shadow PTE".
- fix pointed out by Sagi: !is_private check => (kvm_gfn_shared_mask && !is_private)
- introduce kvm_gfn_for_root(kvm, root, gfn)
- add only_shared argument to kvm_tdp_mmu_handle_gfn()
- use kvm_arch_dirty_log_supported()
- rename SPTE_PRIVATE_PROHIBIT to SPTE_SHARED_MASK.
- rename: is_private_prohibit_spte() => spte_shared_mask()
- fix: shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in comment
- dropped this patch as the change was merged into kvm/queue
- update vt_apicv_post_state_restore()
- use is_64_bit_hypercall()
- comment: expand MSMI -> Machine Check System Management Interrupt
- fixed TDX_SEPT_PFERR
- tdvmcall_p[1234]_{write, read}() => tdvmcall_a[0123]_{read,write}()
- rename tdmvcall_exit_readon() => tdvmcall_leaf()
- remove optional zero check of argument.
- do a check for static_call(kvm_x86_has_emulated_msr)(kvm, MSR_IA32_SMBASE)
   in kvm_vcpu_ioctl_smi and __apic_accept_irq.
- WARN_ON_ONCE in tdx_smi_allowed and tdx_enable_smi_window.
- introduce vcpu_deliver_init to x86_ops
- sprinkled KVM_BUG_ON()

Changes from v4:
- rebased to TDX host kernel patch series.
- include all the patches to make this patch series working.
- add [MARKER] patches to make the patch layers clear.

---
* What's TDX?
TDX stands for Trust Domain Extensions, which extends Intel Virtual Machines
Extensions (VMX) to introduce a kind of virtual machine guest called a Trust
Domain (TD) for confidential computing.

A TD runs in a CPU mode that is designed to protect the confidentiality of its
memory contents and its CPU state from any other software, including the hosting
Virtual Machine Monitor (VMM), unless explicitly shared by the TD itself.

We have more detailed explanations below (***).
We have the high-level design of TDX KVM below (****).

In this patch series, we use "TD" or "guest TD" to differentiate it from the
current "VM" (Virtual Machine), which is supported by KVM today.


* The organization of this patch series
This patch series is on top of the patches series "TDX host kernel support":
https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/

This patch series is available at
https://github.com/intel/tdx/releases/tag/kvm-upstream
The corresponding patches to qemu are available at
https://github.com/intel/qemu-tdx/commits/tdx-upstream

The relations of the layers are depicted as follows.
The arrows below show the order of patch reviews we would like to have.

The layers below are chosen so that the device model, for example qemu, can
exercise each layer step by step: check if TDX is supported, create a TD VM,
create a TD vcpu, allow vcpu running, populate TD guest private memory, and
handle vcpu exits/hypercalls/interrupts to run the TD fully.

  TDX vcpu
  interrupt/exits/hypercall<------------\
        ^                               |
        |                               |
  TD finalization                       |
        ^                               |
        |                               |
  TDX EPT violation<------------\       |
        ^                       |       |
        |                       |       |
  TD vcpu enter/exit            |       |
        ^                       |       |
        |                       |       |
  TD vcpu creation/destruction  |       \-------KVM TDP MMU MapGPA
        ^                       |                       ^
        |                       |                       |
  TD VM creation/destruction    \---------------KVM TDP MMU hooks
        ^                                               ^
        |                                               |
  TDX architectural definitions                 KVM TDP refactoring for TDX
        ^                                               ^
        |                                               |
   TDX, VMX    <--------TDX host kernel         KVM MMU GPA stolen bits
   coexistence          support


The following are explanations of each layer.  Each layer has a dummy commit
that starts with [MARKER] in the subject.  It is intended to help identify
where each layer starts.

TDX host kernel support:
        https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/
        The guts of system-wide initialization of the TDX module.  This is an
        independent patch series for host x86.  The TDX KVM patches call
        functions this patch series provides to initialize the TDX module.

TDX, VMX coexistence:
        Infrastructure to allow TDX to coexist with VMX and trigger the
        initialization of the TDX module.
        This layer starts with
        "KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX"
TDX architectural definitions:
        Add TDX architectural definitions and helper functions
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: TDX architectural definitions".
TD VM creation/destruction:
        Guest TD creation/destruction: allocation and release of TDX-specific
        vm and vcpu structures.  Create an initial guest memory image with TDX
        measurement.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: TD VM creation/destruction".
TD vcpu creation/destruction:
        Guest TD vcpu creation/destruction: allocation and release of
        TDX-specific vm and vcpu structures.  Create an initial guest memory
        image with TDX measurement.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction"
TDX EPT violation:
        Create an initial guest memory image with TDX measurement.  Handle
        secure EPT violations to populate guest pages with TDX SEAMCALLs.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: TDX EPT violation"
TD vcpu enter/exit:
        Allow a TDX vcpu to enter into and exit from the TD.  Save CPU state
        before entering the TD and restore CPU state after exiting the TD.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: TD vcpu enter/exit"
TD vcpu interrupts/exit/hypercall:
        Handle various exits/hypercalls and allow interrupts to be injected so
        that TD vcpu can continue running.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls"

KVM MMU GPA shared bit:
        Introduce a framework to handle the shared bit of the GPA.  TDX
        repurposed one bit of the GPA to indicate whether it is shared or
        private.  If it's shared, it's the same as the conventional VMX EPT
        case and the VMM can access shared guest pages.  If it's private, it's
        handled by the Secure-EPT and the guest page is encrypted.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: KVM MMU GPA stolen bits"
KVM TDP refactoring for TDX:
        TDX Secure EPT requires different constants, e.g. the initial EPT
        entry value.  Various refactoring for those differences.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX"
KVM TDP MMU hooks:
        Introduce a framework for the TDP MMU to add hooks in addition to
        direct EPT access.  TDX added the Secure EPT, which is an enhancement
        to VMX EPT.  Unlike conventional VMX EPT, the CPU can't directly
        read/write the Secure EPT; instead, TDX SEAMCALLs are used to operate
        on it.
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks"
KVM TDP MMU MapGPA:
        Introduce a framework to handle switching guest pages between private
        and shared.  For a given GPA, a guest page can be assigned to either a
        private GPA or a shared GPA exclusively.  With the TDX MapGPA
        hypercall, the guest TD converts GPA assignments from private (or
        shared) to shared (or private).
        This layer starts with
        "[MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA "

KVM guest private memory: (not shown in the above diagram)
[PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private
memory: https://lkml.org/lkml/2022/1/18/395
        Guest private memory requires different memory management in KVM.  The
        patch series proposes a way to do it and its integration with TDX KVM.

(***)
* TDX module
A CPU-attested software module called the "TDX module" is designed to implement
the TDX architecture, and it is loaded by the UEFI firmware today. It can be
loaded by the kernel or driver at runtime, but in this patch series we assume
that the TDX module is already loaded and initialized.

The TDX module provides two main new logical modes of operation built upon the
new SEAM (Secure Arbitration Mode) root and non-root CPU modes added to the VMX
architecture. TDX root mode is mostly identical to the VMX root operation mode,
and the TDX functions (described later) are triggered by the new SEAMCALL
instruction with the desired interface function selected by an input operand
(leaf number, in RAX). TDX non-root mode is used for TD guest operation.  TDX
non-root operation (i.e. "guest TD" mode) is similar to the VMX non-root
operation (i.e. guest VM), with changes and restrictions to better assure that
no other software or hardware has direct visibility of the TD memory and state.

TDX transitions between TDX root operation and TDX non-root operation include TD
Entries, from TDX root to TDX non-root mode, and TD Exits from TDX non-root to
TDX root mode.  A TD Exit might be asynchronous, triggered by some external
event (e.g., external interrupt or SMI) or an exception, or it might be
synchronous, triggered by a TDCALL (TDG.VP.VMCALL) function.

TD VCPUs can be entered using SEAMCALL(TDH.VP.ENTER) by KVM. TDH.VP.ENTER is one
of the TDX interface functions as mentioned above, and "TDH" stands for Trust
Domain Host. Those host-side TDX interface functions are categorized into
various areas just for better organization, such as SYS (TDX module management),
MNG (TD management), VP (VCPU), PHYSMEM (physical memory), MEM (private memory),
etc. For example, SEAMCALL(TDH.SYS.INFO) returns the TDX module information.
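
As a concrete illustration of the SEAMCALL convention (leaf number in RAX,
leaf-specific operands in other registers), here is a hedged sketch of a
host-side wrapper.  The __seamcall() helper is mentioned in the changelog
("export __seamcall and use it"), but the signature and the leaf number used
below are assumptions for illustration, not the exact API of this series.

  /*
   * Illustrative only: leaf number in RAX, operands in RCX/RDX/R8/R9.
   * The wrapper signature and the TDH_SYS_INFO value are assumptions.
   */
  #define TDH_SYS_INFO    32

  static u64 tdh_sys_info(u64 sysinfo_pa, u64 nr_bytes)
  {
          return __seamcall(TDH_SYS_INFO, sysinfo_pa, nr_bytes, 0, 0, NULL);
  }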

TDCS (Trust Domain Control Structure) is the main control structure of a guest
TD, and it is encrypted (using the guest TD's ephemeral private key).  At a
high level, TDCS holds information for controlling TD operation as a whole:
execution controls, EPTP, MSR bitmaps, etc. that KVM needs to set up.  Note
that MSR bitmaps are held as part of TDCS (unlike VMX) because they are meant
to have the same value for all VCPUs of the same TD.

Trust Domain Virtual Processor State (TDVPS) is the root control structure of a
TD VCPU.  It helps the TDX module control the operation of the VCPU, and holds
the VCPU state while the VCPU is not running. TDVPS is opaque to software and
DMA access, accessible only by using the TDX module interface functions (such as
TDH.VP.RD, TDH.VP.WR). TDVPS includes TD VMCS, and TD VMCS auxiliary structures,
such as virtual APIC page, virtualization exception information, etc.

Several VMX control structures (such as Shared EPT and Posted interrupt
descriptor) are directly managed and accessed by the host VMM.  These control
structures are pointed to by fields in the TD VMCS.

The above means that 1) KVM needs to allocate different data structures for
TDs, 2) KVM can reuse the existing code for TDs for some operations, and 3) it
needs to define TD-specific handling for others, redirecting operations to the
TDX specific callbacks, like "if (is_td_vcpu(vcpu)) tdx_callback() else
vmx_callback();".
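
For example, a wrapper in vmx/main.c conceptually looks like the sketch below
(the specific callback chosen here and its body are illustrative; the real
wrappers are introduced by the patches that move vmx_x86_ops to main.c):

  /* Conceptual dispatch: TD vcpus take the TDX path, others the VMX path. */
  static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
  {
          if (is_td_vcpu(vcpu))
                  tdx_flush_tlb(vcpu);            /* backed by TDX SEAMCALLs */
          else
                  vmx_flush_tlb_all(vcpu);        /* conventional VMX path */
  }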

* TD Private Memory
TD private memory is designed to hold TD private content, encrypted by the CPU
using the TD ephemeral key. An encryption engine holds a table of encryption
keys, and an encryption key is selected for each memory transaction based on a
Host Key Identifier (HKID). By design, the host VMM does not have access to the
encryption keys.

In the first generation of MKTME, HKID is "stolen" from the physical address by
allocating a configurable number of bits from the top of the physical
address. The HKID space is partitioned into shared HKIDs for legacy MKTME
accesses and private HKIDs for SEAM-mode-only accesses. We use 0 for the shared
HKID on the host so that MKTME can be opaque or bypassed on the host.

During TDX non-root operation (i.e. guest TD), memory accesses can be qualified
as either shared or private, based on the value of a new SHARED bit in the Guest
Physical Address (GPA).  The CPU translates shared GPAs using the usual VMX EPT
(Extended Page Table) or "Shared EPT" (in this document), which resides in host
VMM memory. The Shared EPT is directly managed by the host VMM - the same as
with the current VMX.  Since guest TDs usually require I/O and the data
exchange needs to be done via shared memory, KVM needs to use the current EPT
functionality even for TDs.

* Secure EPT and Mirroring using the TDP code
The CPU translates private GPAs using a separate Secure EPT.  The Secure EPT
pages are encrypted and integrity-protected with the TD's ephemeral private
key.  Secure EPT can be managed _indirectly_ by the host VMM, using the TDX
interface functions, and thus conceptually Secure EPT is a subset of EPT.
Since execution of such interface functions takes a much longer time than
accessing memory directly, in KVM we use the existing TDP code to mirror the
Secure EPT for the TD.

This way, we can effectively walk Secure EPT without using the TDX interface
functions.

* VM life cycle and TDX specific operations
The userspace VMM, such as QEMU, needs to build and treat TDs differently.  For
example, a TD needs to boot in private memory, and the host software cannot copy
the initial image to private memory.

* TSC Virtualization
The TDX module helps TDs maintain reliable TSC (Time Stamp Counter) values
(e.g. consistent among the TD VCPUs), and the virtual TSC frequency is
determined by TD configuration, i.e. when the TD is created, not per VCPU.
Currently KVM owns TSC virtualization for VMs, but the TDX module does so for
TDs.

* MCE support for TDs
The TDX module doesn't allow the VMM to inject MCE.  Instead, a PV way is
needed for the TD to communicate with the VMM.  For now, KVM silently ignores
MCE requests by the VMM.  MSRs related to MCE (e.g. MCE bank registers) can be
naturally emulated by paravirtualizing MSR access.

For details, the specifications [1], [2], [3], [4], [5], [6], [7] are
available.

* Restrictions or future work
Some features are not included to reduce patch size.  Those features will be
addressed in future independent patch series.
- large page (2M, 1G)
- qemu gdb stub
- guest PMU
- and more

* Prerequisites
The TDX module needs to be loaded and initialized; that is out of the scope of
this patch series.  Another independent patch series for the common x86 code is
planned.  It defines CONFIG_INTEL_TDX_HOST, and this patch series uses
CONFIG_INTEL_TDX_HOST.  It's assumed that with CONFIG_INTEL_TDX_HOST=y, the TDX
module is initialized and the TDX module APIs for the TDX guest life cycle,
such as tdh.mng.init, are ready for KVM to use.

Concretely, global initialization, LP (Logical Processor) initialization,
global configuration, key configuration, and TDMR and PAMT initialization are
done.  The state of the TDX module is SYS_READY.  Please refer to the TDX
module specification, the chapter "Intel TDX Module Lifecycle State Machine".

** Detecting the TDX module readiness.
The TDX host patch series implements detection of the TDX module availability
and its initialization so that KVM can use it.  It also manages the Host KeyID
(HKID) assigned to a guest TD.
The APIs the TDX host patch series is assumed to provide are:
- int seamrr_enabled()
  Check if the required CPU feature (SEAM mode) is available.  This only checks
  CPU feature availability; at this point, the TDX module may not be ready for
  KVM to use.
- int init_tdx(void);
  Initialization of TDX module so that the TDX module is ready for KVM to use.
- const struct tdsysinfo_struct *tdx_get_sysinfo(void);
  Return system-wide information about the TDX module, or NULL if the TDX
  module isn't initialized.
- u32 tdx_get_global_keyid(void);
  Return global key id that is used for the TDX module itself.
- int tdx_keyid_alloc(void);
  Allocate HKID for guest TD.
- void tdx_keyid_free(int keyid);
  Free HKID for guest TD.
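
A hedged sketch of how KVM could consume these APIs when kvm_intel loads; the
function name and error handling below are simplified illustrations, not the
series' final code:

  /* Simplified module-load flow, assuming the host APIs listed above. */
  static int __init tdx_module_setup(void)
  {
          const struct tdsysinfo_struct *sysinfo;

          if (!seamrr_enabled())
                  return -ENODEV;         /* SEAM mode not available */

          if (init_tdx())
                  return -EIO;            /* TDX module failed to initialize */

          sysinfo = tdx_get_sysinfo();
          if (!sysinfo)
                  return -EIO;

          /* sysinfo now describes the module (limits, CPUID config, ...). */
          return 0;
  }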

(****)
* TDX KVM high-level design
- Host key ID management
Host Key ID (HKID) needs to be assigned to each TDX guest for memory encryption.
It is assumed that the TDX host patch series implements the necessary
functions: u32 tdx_get_global_keyid(void), int tdx_keyid_alloc(void), and
void tdx_keyid_free(int keyid).

- Data structures and VM type
Because TDX is different from VMX, TDX defines its own VM/VCPU structures,
struct kvm_tdx and struct vcpu_tdx, instead of struct kvm_vmx and struct
vcpu_vmx.  To identify the VM, a VM type is introduced to specify which VM
type, VMX (default) or TDX, is used.

- VM life cycle and TDX specific operations
Re-purpose the existing KVM_MEMORY_ENCRYPT_OP to add TDX specific operations.
New commands are used to get the TDX system parameters, set TDX specific VM/VCPU
parameters, set initial guest memory and measurement.

Creating a TDX VM requires five new operations in addition to the conventional
VM creation (a hedged userspace sketch follows the list below).
  - Get KVM system capability to check if TDX VM type is supported
  - VM creation (KVM_CREATE_VM)
  - New: Get the TDX specific system parameters.  KVM_TDX_GET_CAPABILITY.
  - New: Set TDX specific VM parameters.  KVM_TDX_INIT_VM.
  - VCPU creation (KVM_CREATE_VCPU)
  - New: Set TDX specific VCPU parameters.  KVM_TDX_INIT_VCPU.
  - New: Initialize guest memory as boot state and extend the measurement with
    the memory.  KVM_TDX_INIT_MEM_REGION.
  - New: Finalize VM. KVM_TDX_FINALIZE. Complete measurement of the initial
    TDX VM contents.
  - VCPU RUN (KVM_VCPU_RUN)
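
From the userspace side, the flow above maps onto a small ioctl sequence.  The
sketch below assumes a command wrapper with id/flags/data/error fields (per the
changelog note "struct kvm_tdx_cmd: metadata => flags, added __u64 error"); the
exact UAPI layout is defined by the series, so treat the struct and field names
here as assumptions:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Assumed layout of the command wrapper passed to KVM_MEMORY_ENCRYPT_OP. */
  struct tdx_cmd {
          uint32_t id;            /* e.g. KVM_TDX_INIT_VM, KVM_TDX_FINALIZE */
          uint32_t flags;
          uint64_t data;          /* pointer to command-specific parameters */
          uint64_t error;         /* TDX error code reported back by KVM */
  };

  static int tdx_vm_ioctl(int vm_fd, uint32_t id, void *data)
  {
          struct tdx_cmd cmd = {
                  .id = id,
                  .data = (uint64_t)(uintptr_t)data,
          };

          /* TDX commands are multiplexed over KVM_MEMORY_ENCRYPT_OP. */
          return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
  }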

- Protected guest state
Because the guest state (CPU state and guest memory) is protected, the KVM VMM
can't operate on it; for example, accessing CPU registers, injecting
exceptions, or accessing guest memory.  Those operations are silently ignored,
returning zero or the initial reset value when requested via KVM API ioctls.

    VM/VCPU state and callbacks for TDX specific operations
    Define TDX-specific VM and VCPU state instead of the VMX ones.  Redirect
    operations to TDX-specific callbacks: "if (tdx) tdx_op() else vmx_op()".

    Operations on the CPU state
    Silently ignore operations on the guest state.  For example, a write to CPU
    registers is ignored and a read from CPU registers returns 0.

    . ignore access to CPU registers except for allowed ones.
    . TSC: add a check whether the TSC is immutable and return an error,
      because the KVM implementation updates the internal TSC state and it's
      difficult to back out those changes.  Instead, skip the logic.
    . dirty logging: add a check whether dirty logging is supported.
    . exceptions/SMI/MCE/SIPI/INIT: silently ignore

    Note: virtual external interrupt and NMI can be injected into TDX guests.

- KVM MMU integration
One bit of the guest physical address (bit 51 or 47) is repurposed to indicate
whether the guest physical address is private (the bit is cleared) or shared
(the bit is set).  This bit is called a stolen bit.
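
The changelog mentions helpers such as kvm_gfn_shared_mask() and
kvm_is_private_gpa(); a minimal sketch of what mask-based helpers of that shape
could look like is below (the mask field and exact definitions are
illustrative, the real ones are introduced by the series):

  /* Illustrative stolen-bit helpers; gfn_shared_mask is 0 for non-TDX VMs. */
  static gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
  {
          return kvm->arch.gfn_shared_mask;
  }

  static bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
  {
          gfn_t mask = kvm_gfn_shared_mask(kvm);

          /* Private GPAs have the shared bit clear. */
          return mask && !(gpa_to_gfn(gpa) & mask);
  }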

  - Stolen bits framework
    systematically tracks which guest physical address, shared or private, is
    used.

  - Shared EPT and Secure EPT
    There are two EPTs: the Shared EPT (the conventional one) and the Secure
    EPT (the new one).  GPAs with the stolen bit set are handled by the Shared
    EPT in the same way as before.  The Secure EPT points to private guest
    pages.  To resolve an EPT violation, KVM walks one of the two EPTs based on
    the faulting GPA.  Because it's costly to access the Secure EPT with
    SEAMCALLs while walking EPTs for a private guest physical address, another
    private EPT is used as a mirror of the Secure EPT with the existing logic,
    at the cost of extra memory.

The following depicts the relationship.

                    KVM                             |       TDX module
                     |                              |           |
        -------------+----------                    |           |
        |                      |                    |           |
        V                      V                    |           |
     shared GPA           private GPA               |           |
  CPU shared EPT pointer  KVM private EPT pointer   |  CPU secure EPT pointer
        |                      |                    |           |
        |                      |                    |           |
        V                      V                    |           V
  shared EPT                private EPT--------mirror----->Secure EPT
        |                      |                    |           |
        |                      \--------------------+------\    |
        |                                           |      |    |
        V                                           |      V    V
  shared guest page                                 |    private guest page
                                                    |
                                                    |
                              non-encrypted memory  |    encrypted memory
                                                    |

  - Operating on Secure EPT
    Use the TDX module APIs to operate on the Secure EPT.  To call the TDX APIs
    while resolving an EPT violation, add hooks for the additional operations
    and wire them to the TDX backend.
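
A hedged sketch of the hook shape: when the TDP MMU installs a leaf SPTE for a
private GFN, it calls out to the TDX backend instead of letting the CPU walk
the page tables directly.  The hook and wrapper names below are illustrative,
not necessarily the series' exact ones:

  /*
   * Conceptual private-SPTE hook: KVM updates its mirror "private EPT" as
   * usual, and the real Secure EPT is updated through the TDX module.
   */
  static int set_private_spte(struct kvm *kvm, gfn_t gfn, int level,
                              kvm_pfn_t pfn)
  {
          if (!is_td(kvm))
                  return 0;       /* shared GPAs use the normal EPT path */

          /* A TDH.MEM.PAGE.AUG-like operation through a SEAMCALL wrapper. */
          return tdx_sept_set_private_spte(kvm, gfn, level, pfn);
  }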

* References

[1] TDX specification
   https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html
[2] Intel Trust Domain Extensions (Intel TDX)
   https://cdrdv2.intel.com/v1/dl/getContent/726790
[3] Intel CPU Architectural Extensions Specification
   https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-cpu-architectural-specification.pdf
[4] Intel TDX Module 1.0 Specification
   https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-module-1.0-public-spec-v0.931.pdf
[5] Intel TDX Loader Interface Specification
  https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-seamldr-interface-specification.pdf
[6] Intel TDX Guest-Hypervisor Communication Interface
   https://cdrdv2.intel.com/v1/dl/getContent/726790
[7] Intel TDX Virtual Firmware Design Guide
   https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.01.pdf
[8] intel public github
   kvm TDX branch: https://github.com/intel/tdx/tree/kvm
   TDX guest branch: https://github.com/intel/tdx/tree/guest
   qemu TDX https://github.com/intel/qemu-tdx
[9] TDVF
    https://github.com/tianocore/edk2-staging/tree/TDVF
    This was merged into EDK2 main branch. https://github.com/tianocore/edk2

Chao Gao (3):
  KVM: x86: Move check_processor_compatibility from init ops to runtime
    ops
  Partially revert "KVM: Pass kvm_init()'s opaque param to additional
    arch funcs"
  KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o
    wrmsr

Isaku Yamahata (72):
  KVM: Refactor CPU compatibility check on module initialization
  x86/virt/vmx/tdx: export platform_tdx_enabled()
  KVM: TDX: Detect CPU feature on kernel module initialization
  KVM: x86: Refactor KVM VMX module init/exit functions
  KVM: TDX: Add placeholders for TDX VM/vcpu structure
  x86/virt/tdx: Add a helper function to return system wide info about
    TDX module
  KVM: TDX: Initialize TDX module when loading kvm_intel.ko
  KVM: TDX: Make TDX VM type supported
  [MARKER] The start of TDX KVM patch series: TDX architectural
    definitions
  KVM: TDX: Define TDX architectural definitions
  KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module
  KVM: TDX: Add helper functions to print TDX SEAMCALL error
  [MARKER] The start of TDX KVM patch series: TD VM creation/destruction
  x86/cpu: Add helper functions to allocate/free TDX private host key id
  KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl
  KVM: TDX: Make pmu_intel.c ignore guest TD case
  [MARKER] The start of TDX KVM patch series: TD vcpu
    creation/destruction
  KVM: TDX: allocate/free TDX vcpu structure
  KVM: TDX: allocate/free TDX vcpu structure
  [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits
  KVM: x86/mmu: introduce config for PRIVATE KVM MMU
  [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for
    TDX
  KVM: x86/mmu: Disallow fast page fault on private GPA
  KVM: VMX: Introduce test mode related to EPT violation VE
  [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks
  KVM: x86/mmu: Forcibly use TDP MMU for TDX
  KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map()
  KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  [MARKER] The start of TDX KVM patch series: TDX EPT violation
  KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs
  KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
  KVM: TDX: TDP MMU TDX support
  [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA
  KVM: x86/mmu: steal software usable bit to record if GFN is for shared
    or not
  KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX
  [MARKER] The start of TDX KVM patch series: TD finalization
  KVM: TDX: Create initial guest memory
  KVM: TDX: Finalize VM initialization
  [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit
  KVM: TDX: Add helper assembly function to TDX vcpu
  KVM: TDX: Implement TDX vcpu enter/exit path
  KVM: TDX: vcpu_run: save/restore host state(host kernel gs)
  KVM: TDX: restore host xsave state when exit from the guest TD
  KVM: TDX: restore user ret MSRs
  [MARKER] The start of TDX KVM patch series: TD vcpu
    exits/interrupts/hypercalls
  KVM: TDX: complete interrupts after tdexit
  KVM: TDX: restore debug store when TD exit
  KVM: TDX: handle vcpu migration over logical processor
  KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched
    behavior
  KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c
  KVM: TDX: Implement interrupt injection
  KVM: TDX: Implements vcpu request_immediate_exit
  KVM: TDX: Implement methods to inject NMI
  KVM: TDX: Add a place holder to handle TDX VM exit
  KVM: TDX: handle EXIT_REASON_OTHER_SMI
  KVM: TDX: handle ept violation/misconfig exit
  KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT
  KVM: TDX: Add a place holder for handler of TDX hypercalls
    (TDG.VP.VMCALL)
  KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL
  KVM: TDX: Handle TDX PV CPUID hypercall
  KVM: TDX: Handle TDX PV HLT hypercall
  KVM: TDX: Handle TDX PV port io hypercall
  KVM: TDX: Implement callbacks for MSR operations for TDX
  KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall
  KVM: TDX: Handle TDX PV report fatal error hypercall
  KVM: TDX: Handle TDX PV map_gpa hypercall
  KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall
  KVM: TDX: Silently discard SMI request
  KVM: TDX: Silently ignore INIT/SIPI
  Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX)
  KVM: x86: design documentation on TDX support of x86 KVM TDP MMU

Rick Edgecombe (1):
  KVM: x86/mmu: Add address conversion functions for TDX shared bits

Sean Christopherson (25):
  KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
  KVM: Enable hardware before doing arch VM initialization
  KVM: x86: Introduce vm_type to differentiate default VMs from
    confidential VMs
  KVM: TDX: Add TDX "architectural" error codes
  KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers
  KVM: TDX: create/destroy VM structure
  KVM: TDX: x86: Add ioctl to get TDX systemwide parameters
  KVM: TDX: Do TDX specific vcpu initialization
  KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
  KVM: x86/mmu: Allow non-zero value for non-present SPTE
  KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  KVM: x86/mmu: Allow per-VM override of the TDP max page level
  KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for
    private mmu
  KVM: x86/mmu: Disallow dirty logging for x86 TDX
  KVM: VMX: Split out guts of EPT violation to common/exposed function
  KVM: VMX: Move setting of EPT MMU masks to common VT-x code
  KVM: TDX: Add load_mmu_pgd method for TDX
  KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX
  KVM: TDX: Add support for find pending IRQ in a protected local APIC
  KVM: x86: Assume timer IRQ was injected if APIC state is protected
  KVM: VMX: Modify NMI and INTR handlers to take intr_info as function
    argument
  KVM: VMX: Move NMI/exception handler to common helper
  KVM: x86: Split core of hypercall emulation to helper function
  KVM: TDX: Handle TDX PV MMIO hypercall
  KVM: TDX: Add methods to ignore accesses to CPU state

Xiaoyao Li (1):
  KVM: TDX: initialize VM with TDX specific parameters

 Documentation/virt/kvm/api.rst                |   30 +-
 .../virt/kvm/intel-tdx-layer-status.rst       |   33 +
 Documentation/virt/kvm/intel-tdx.rst          |  381 +++
 Documentation/virt/kvm/tdx-tdp-mmu.rst        |  466 ++++
 arch/arm64/kvm/arm.c                          |    2 +-
 arch/mips/kvm/mips.c                          |   14 +-
 arch/powerpc/kvm/powerpc.c                    |    2 +-
 arch/riscv/kvm/main.c                         |    2 +-
 arch/s390/kvm/kvm-s390.c                      |    2 +-
 arch/x86/events/intel/ds.c                    |    1 +
 arch/x86/include/asm/kvm-x86-ops.h            |   10 +
 arch/x86/include/asm/kvm_host.h               |   56 +-
 arch/x86/include/asm/tdx.h                    |   67 +
 arch/x86/include/asm/vmx.h                    |   14 +
 arch/x86/include/uapi/asm/kvm.h               |   95 +
 arch/x86/include/uapi/asm/vmx.h               |    5 +-
 arch/x86/kvm/Kconfig                          |    4 +
 arch/x86/kvm/Makefile                         |    3 +-
 arch/x86/kvm/irq.c                            |    3 +
 arch/x86/kvm/lapic.c                          |   37 +-
 arch/x86/kvm/lapic.h                          |    2 +
 arch/x86/kvm/mmu.h                            |   42 +-
 arch/x86/kvm/mmu/mmu.c                        |  360 ++-
 arch/x86/kvm/mmu/mmu_internal.h               |  123 +-
 arch/x86/kvm/mmu/paging_tmpl.h                |    5 +-
 arch/x86/kvm/mmu/spte.c                       |   46 +-
 arch/x86/kvm/mmu/spte.h                       |   65 +-
 arch/x86/kvm/mmu/tdp_iter.c                   |    1 +
 arch/x86/kvm/mmu/tdp_iter.h                   |    5 +-
 arch/x86/kvm/mmu/tdp_mmu.c                    |  690 ++++-
 arch/x86/kvm/mmu/tdp_mmu.h                    |   12 +-
 arch/x86/kvm/svm/svm.c                        |   13 +-
 arch/x86/kvm/vmx/common.h                     |  174 ++
 arch/x86/kvm/vmx/evmcs.c                      |    2 +-
 arch/x86/kvm/vmx/evmcs.h                      |    2 +-
 arch/x86/kvm/vmx/main.c                       | 1071 +++++++
 arch/x86/kvm/vmx/pmu_intel.c                  |   39 +-
 arch/x86/kvm/vmx/pmu_intel.h                  |   28 +
 arch/x86/kvm/vmx/posted_intr.c                |   43 +-
 arch/x86/kvm/vmx/posted_intr.h                |   13 +
 arch/x86/kvm/vmx/tdx.c                        | 2465 +++++++++++++++++
 arch/x86/kvm/vmx/tdx.h                        |  275 ++
 arch/x86/kvm/vmx/tdx_arch.h                   |  157 ++
 arch/x86/kvm/vmx/tdx_errno.h                  |   29 +
 arch/x86/kvm/vmx/tdx_error.c                  |   22 +
 arch/x86/kvm/vmx/tdx_ops.h                    |  188 ++
 arch/x86/kvm/vmx/vmenter.S                    |  146 +
 arch/x86/kvm/vmx/vmx.c                        |  737 ++---
 arch/x86/kvm/vmx/vmx.h                        |   39 +-
 arch/x86/kvm/vmx/x86_ops.h                    |  235 ++
 arch/x86/kvm/x86.c                            |  148 +-
 arch/x86/virt/vmx/tdx/seamcall.S              |    2 +
 arch/x86/virt/vmx/tdx/tdx.c                   |   54 +-
 arch/x86/virt/vmx/tdx/tdx.h                   |   52 -
 include/linux/kvm_host.h                      |    4 +-
 include/uapi/linux/kvm.h                      |    2 +
 tools/arch/x86/include/uapi/asm/kvm.h         |   95 +
 tools/include/uapi/linux/kvm.h                |    1 +
 virt/kvm/kvm_main.c                           |   67 +-
 59 files changed, 7877 insertions(+), 804 deletions(-)
 create mode 100644 Documentation/virt/kvm/intel-tdx-layer-status.rst
 create mode 100644 Documentation/virt/kvm/intel-tdx.rst
 create mode 100644 Documentation/virt/kvm/tdx-tdp-mmu.rst
 create mode 100644 arch/x86/kvm/vmx/common.h
 create mode 100644 arch/x86/kvm/vmx/main.c
 create mode 100644 arch/x86/kvm/vmx/pmu_intel.h
 create mode 100644 arch/x86/kvm/vmx/tdx.c
 create mode 100644 arch/x86/kvm/vmx/tdx.h
 create mode 100644 arch/x86/kvm/vmx/tdx_arch.h
 create mode 100644 arch/x86/kvm/vmx/tdx_errno.h
 create mode 100644 arch/x86/kvm/vmx/tdx_error.c
 create mode 100644 arch/x86/kvm/vmx/tdx_ops.h
 create mode 100644 arch/x86/kvm/vmx/x86_ops.h

-- 
2.25.1



* [PATCH v7 001/102] KVM: x86: Move check_processor_compatibility from init ops to runtime ops
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
@ 2022-06-27 21:52 ` isaku.yamahata
  2022-06-27 21:52 ` [PATCH v7 002/102] Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs" isaku.yamahata
                   ` (102 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Chao Gao,
	Sean Christopherson

From: Chao Gao <chao.gao@intel.com>

so that KVM can do compatibility checks on hotplugged CPUs. Drop __init
from check_processor_compatibility() and its callees.

Use a static_call() to invoke .check_processor_compatibility.

Opportunistically rename {svm,vmx}_check_processor_compat to conform
to the naming convention of fields of kvm_x86_ops.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220216031528.92558-2-chao.gao@intel.com
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 +-
 arch/x86/kvm/svm/svm.c             |  4 ++--
 arch/x86/kvm/vmx/evmcs.c           |  2 +-
 arch/x86/kvm/vmx/evmcs.h           |  2 +-
 arch/x86/kvm/vmx/vmx.c             | 14 +++++++-------
 arch/x86/kvm/x86.c                 |  3 +--
 7 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 6f2f1affbb78..75bc44aa8d51 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -129,6 +129,7 @@ KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
+KVM_X86_OP(check_processor_compatibility)
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7e98b2876380..62dec97f6607 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1427,6 +1427,7 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
 struct kvm_x86_ops {
 	const char *name;
 
+	int (*check_processor_compatibility)(void);
 	int (*hardware_enable)(void);
 	void (*hardware_disable)(void);
 	void (*hardware_unsetup)(void);
@@ -1637,7 +1638,6 @@ struct kvm_x86_nested_ops {
 struct kvm_x86_init_ops {
 	int (*cpu_has_kvm_support)(void);
 	int (*disabled_by_bios)(void);
-	int (*check_processor_compatibility)(void);
 	int (*hardware_setup)(void);
 	unsigned int (*handle_intel_pt_intr)(void);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c6cca0ce127b..247c0ad458a0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4083,7 +4083,7 @@ svm_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
 	hypercall[2] = 0xd9;
 }
 
-static int __init svm_check_processor_compat(void)
+static int svm_check_processor_compatibility(void)
 {
 	return 0;
 }
@@ -4703,6 +4703,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.name = "kvm_amd",
 
 	.hardware_unsetup = svm_hardware_unsetup,
+	.check_processor_compatibility = svm_check_processor_compatibility,
 	.hardware_enable = svm_hardware_enable,
 	.hardware_disable = svm_hardware_disable,
 	.has_emulated_msr = svm_has_emulated_msr,
@@ -5090,7 +5091,6 @@ static struct kvm_x86_init_ops svm_init_ops __initdata = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
 	.hardware_setup = svm_hardware_setup,
-	.check_processor_compatibility = svm_check_processor_compat,
 
 	.runtime_ops = &svm_x86_ops,
 	.pmu_ops = &amd_pmu_ops,
diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 6a61b1ae7942..3f84680c8139 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -295,7 +295,7 @@ const struct evmcs_field vmcs_field_to_evmcs_1[] = {
 const unsigned int nr_evmcs_1_fields = ARRAY_SIZE(vmcs_field_to_evmcs_1);
 
 #if IS_ENABLED(CONFIG_HYPERV)
-__init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf)
+void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf)
 {
 	vmcs_conf->cpu_based_exec_ctrl &= ~EVMCS1_UNSUPPORTED_EXEC_CTRL;
 	vmcs_conf->pin_based_exec_ctrl &= ~EVMCS1_UNSUPPORTED_PINCTRL;
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index f886a8ff0342..276f788cef15 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -212,7 +212,7 @@ static inline void evmcs_load(u64 phys_addr)
 	vp_ap->enlighten_vmentry = 1;
 }
 
-__init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf);
+void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf);
 #else /* !IS_ENABLED(CONFIG_HYPERV) */
 static __always_inline void evmcs_write64(unsigned long field, u64 value) {}
 static inline void evmcs_write32(unsigned long field, u32 value) {}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5e14e4c40007..31e7630203fd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2410,8 +2410,8 @@ static bool cpu_has_sgx(void)
 	return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0));
 }
 
-static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt,
-				      u32 msr, u32 *result)
+static int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt,
+			       u32 msr, u32 *result)
 {
 	u32 vmx_msr_low, vmx_msr_high;
 	u32 ctl = ctl_min | ctl_opt;
@@ -2429,7 +2429,7 @@ static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt,
 	return 0;
 }
 
-static __init u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
+static u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
 {
 	u64 allowed;
 
@@ -2438,8 +2438,8 @@ static __init u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
 	return  ctl_opt & allowed;
 }
 
-static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
-				    struct vmx_capability *vmx_cap)
+static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
+			     struct vmx_capability *vmx_cap)
 {
 	u32 vmx_msr_low, vmx_msr_high;
 	u32 min, opt, min2, opt2;
@@ -7318,7 +7318,7 @@ static int vmx_vm_init(struct kvm *kvm)
 	return 0;
 }
 
-static int __init vmx_check_processor_compat(void)
+static int vmx_check_processor_compatibility(void)
 {
 	struct vmcs_config vmcs_conf;
 	struct vmx_capability vmx_cap;
@@ -7928,6 +7928,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 
 	.hardware_unsetup = vmx_hardware_unsetup,
 
+	.check_processor_compatibility = vmx_check_processor_compatibility,
 	.hardware_enable = vmx_hardware_enable,
 	.hardware_disable = vmx_hardware_disable,
 	.has_emulated_msr = vmx_has_emulated_msr,
@@ -8316,7 +8317,6 @@ static __init int hardware_setup(void)
 static struct kvm_x86_init_ops vmx_init_ops __initdata = {
 	.cpu_has_kvm_support = cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
-	.check_processor_compatibility = vmx_check_processor_compat,
 	.hardware_setup = hardware_setup,
 	.handle_intel_pt_intr = NULL,
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2318a99139fa..3d9dbaf9828f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11802,7 +11802,6 @@ void kvm_arch_hardware_unsetup(void)
 int kvm_arch_check_processor_compat(void *opaque)
 {
 	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
-	struct kvm_x86_init_ops *ops = opaque;
 
 	WARN_ON(!irqs_disabled());
 
@@ -11810,7 +11809,7 @@ int kvm_arch_check_processor_compat(void *opaque)
 	    __cr4_reserved_bits(cpu_has, &boot_cpu_data))
 		return -EIO;
 
-	return ops->check_processor_compatibility();
+	return static_call(kvm_x86_check_processor_compatibility)();
 }
 
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
-- 
2.25.1



* [PATCH v7 002/102] Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs"
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
  2022-06-27 21:52 ` [PATCH v7 001/102] KVM: x86: Move check_processor_compatibility from init ops to runtime ops isaku.yamahata
@ 2022-06-27 21:52 ` isaku.yamahata
  2022-07-13  1:55   ` Kai Huang
  2022-06-27 21:52 ` [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialiization isaku.yamahata
                   ` (101 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Chao Gao,
	Sean Christopherson, Suzuki K Poulose, Anup Patel,
	Claudio Imbrenda

From: Chao Gao <chao.gao@intel.com>

This partially reverts commit b99040853738 ("KVM: Pass kvm_init()'s opaque
param to additional arch funcs") to remove the opaque param from
kvm_arch_check_processor_compat() because no one uses it now.  Address
conflicts for ARM (due to file movement) and manually handle RISC-V, which
came after that commit.

The changes to kvm_arch_hardware_setup() in the original commit are still
needed, so they are not reverted.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Anup Patel <anup@brainfault.org>
Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20220216031528.92558-3-chao.gao@intel.com
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/arm64/kvm/arm.c       |  2 +-
 arch/mips/kvm/mips.c       |  2 +-
 arch/powerpc/kvm/powerpc.c |  2 +-
 arch/riscv/kvm/main.c      |  2 +-
 arch/s390/kvm/kvm-s390.c   |  2 +-
 arch/x86/kvm/x86.c         |  2 +-
 include/linux/kvm_host.h   |  2 +-
 virt/kvm/kvm_main.c        | 16 +++-------------
 8 files changed, 10 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a0188144a122..7588efbac6be 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -68,7 +68,7 @@ int kvm_arch_hardware_setup(void *opaque)
 	return 0;
 }
 
-int kvm_arch_check_processor_compat(void *opaque)
+int kvm_arch_check_processor_compat(void)
 {
 	return 0;
 }
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index a25e0b73ee70..092d09fb6a7e 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -140,7 +140,7 @@ int kvm_arch_hardware_setup(void *opaque)
 	return 0;
 }
 
-int kvm_arch_check_processor_compat(void *opaque)
+int kvm_arch_check_processor_compat(void)
 {
 	return 0;
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 191992fcb2c2..ca8ef51092c6 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -446,7 +446,7 @@ int kvm_arch_hardware_setup(void *opaque)
 	return 0;
 }
 
-int kvm_arch_check_processor_compat(void *opaque)
+int kvm_arch_check_processor_compat(void)
 {
 	return kvmppc_core_check_processor_compat();
 }
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 1549205fe5fe..f8d6372d208f 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -20,7 +20,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
 	return -EINVAL;
 }
 
-int kvm_arch_check_processor_compat(void *opaque)
+int kvm_arch_check_processor_compat(void)
 {
 	return 0;
 }
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 72bd5c9b9617..a05493f1cacf 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -251,7 +251,7 @@ int kvm_arch_hardware_enable(void)
 	return 0;
 }
 
-int kvm_arch_check_processor_compat(void *opaque)
+int kvm_arch_check_processor_compat(void)
 {
 	return 0;
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3d9dbaf9828f..30af2bd0b4d5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11799,7 +11799,7 @@ void kvm_arch_hardware_unsetup(void)
 	static_call(kvm_x86_hardware_unsetup)();
 }
 
-int kvm_arch_check_processor_compat(void *opaque)
+int kvm_arch_check_processor_compat(void)
 {
 	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c20f2d55840c..d4f130a9f5c8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1442,7 +1442,7 @@ int kvm_arch_hardware_enable(void);
 void kvm_arch_hardware_disable(void);
 int kvm_arch_hardware_setup(void *opaque);
 void kvm_arch_hardware_unsetup(void);
-int kvm_arch_check_processor_compat(void *opaque);
+int kvm_arch_check_processor_compat(void);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a67e996cbf7f..a5bada53f1fe 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5697,22 +5697,14 @@ void kvm_unregister_perf_callbacks(void)
 }
 #endif
 
-struct kvm_cpu_compat_check {
-	void *opaque;
-	int *ret;
-};
-
-static void check_processor_compat(void *data)
+static void check_processor_compat(void *rtn)
 {
-	struct kvm_cpu_compat_check *c = data;
-
-	*c->ret = kvm_arch_check_processor_compat(c->opaque);
+	*(int *)rtn = kvm_arch_check_processor_compat();
 }
 
 int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 		  struct module *module)
 {
-	struct kvm_cpu_compat_check c;
 	int r;
 	int cpu;
 
@@ -5740,10 +5732,8 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 	if (r < 0)
 		goto out_free_1;
 
-	c.ret = &r;
-	c.opaque = opaque;
 	for_each_online_cpu(cpu) {
-		smp_call_function_single(cpu, check_processor_compat, &c, 1);
+		smp_call_function_single(cpu, check_processor_compat, &r, 1);
 		if (r < 0)
 			goto out_free_2;
 	}
-- 
2.25.1



* [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialization
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
  2022-06-27 21:52 ` [PATCH v7 001/102] KVM: x86: Move check_processor_compatibility from init ops to runtime ops isaku.yamahata
  2022-06-27 21:52 ` [PATCH v7 002/102] Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs" isaku.yamahata
@ 2022-06-27 21:52 ` isaku.yamahata
  2022-07-12  1:15   ` Kai Huang
                     ` (2 more replies)
  2022-06-27 21:52 ` [PATCH v7 004/102] KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX isaku.yamahata
                   ` (100 subsequent siblings)
  103 siblings, 3 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Isaku Yamahata <isaku.yamahata@intel.com>

Although non-x86 arches don't break as far as I have inspected the code, this
is only by code inspection.  It should be reviewed by each arch's maintainers.

kvm_init() checks CPU compatibility by calling
kvm_arch_check_processor_compat() on all online CPUs.  Move the callback
to hardware_enable_nolock() and add hardware_enable_all() and
hardware_disable_all().
Add an arch specific callback kvm_arch_post_hardware_enable_setup() for an
arch to do arch specific initialization that requires hardware_enable_all().
This makes room for the TDX module to initialize on kvm module loading.  The
TDX module requires all online CPUs to enable VMX by VMXON.

If kvm_arch_hardware_enable/disable() depends on the (*) part below, such a
dependency must be initialized before kvm_init().  In fact kvm_intel does.
Although the other arches don't, as far as I checked (see below), it should be
reviewed by each arch's maintainers.

Before this patch:
- Arch module initialization
  - kvm_init()
    - kvm_arch_init()
    - kvm_arch_check_processor_compat() on each CPUs
  - post arch specific initialization ---- (*)

- when creating/deleting first/last VM
   - kvm_arch_hardware_enable() on each CPUs --- (A)
   - kvm_arch_hardware_disable() on each CPUs --- (B)

After this patch:
- Arch module initialization
  - kvm_init()
    - kvm_arch_init()
    - kvm_arch_hardware_enable() on each CPUs  (A)
    - kvm_arch_check_processor_compat() on each CPUs
    - kvm_arch_hardware_disable() on each CPUs (B)
  - post arch specific initialization  --- (*)

Code inspection result:
(A)/(B) can depend on (*) before this patch.  If there is a dependency, such
initialization must be moved before kvm_init() with this patch.  VMX in fact
does.  As far as I inspected, among the other arches only mips has it.

- arch/mips/kvm/mips.c
  module init function, kvm_mips_init(), does some initialization after
  kvm_init().  Compile test only.  Needs review.

- arch/x86/kvm/x86.c
  - uses vm_list which is statically initialized.
  - static_call(kvm_x86_hardware_enable)();
    - SVM: (*) is empty.
    - VMX: needs fix

- arch/powerpc/kvm/powerpc.c
  kvm_arch_hardware_enable/disable() are nop

- arch/s390/kvm/kvm-s390.c
  kvm_arch_hardware_enable/disable() are nop

- arch/arm64/kvm/arm.c
  module init function, arm_init(), calls only kvm_init().
  (*) is empty

- arch/riscv/kvm/main.c
  module init function, riscv_kvm_init(), calls only kvm_init().
  (*) is empty

Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/mips/kvm/mips.c     | 12 +++++++-----
 arch/x86/kvm/vmx/vmx.c   | 15 +++++++++++----
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 25 ++++++++++++++++++-------
 4 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 092d09fb6a7e..17228584485d 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -1643,11 +1643,6 @@ static int __init kvm_mips_init(void)
 	}
 
 	ret = kvm_mips_entry_setup();
-	if (ret)
-		return ret;
-
-	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
-
 	if (ret)
 		return ret;
 
@@ -1656,6 +1651,13 @@ static int __init kvm_mips_init(void)
 
 	register_die_notifier(&kvm_mips_csr_die_notifier);
 
+	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+
+	if (ret) {
+		unregister_die_notifier(&kvm_mips_csr_die_notifier);
+		return ret;
+	}
+
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 31e7630203fd..d3b68a6dec48 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8372,6 +8372,15 @@ static void vmx_exit(void)
 }
 module_exit(vmx_exit);
 
+/* initialize before kvm_init() so that hardware_enable/disable() can work. */
+static void __init vmx_init_early(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
+}
+
 static int __init vmx_init(void)
 {
 	int r, cpu;
@@ -8409,6 +8418,7 @@ static int __init vmx_init(void)
 	}
 #endif
 
+	vmx_init_early();
 	r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
 		     __alignof__(struct vcpu_vmx), THIS_MODULE);
 	if (r)
@@ -8427,11 +8437,8 @@ static int __init vmx_init(void)
 		return r;
 	}
 
-	for_each_possible_cpu(cpu) {
-		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
-
+	for_each_possible_cpu(cpu)
 		pi_init_cpu(cpu);
-	}
 
 #ifdef CONFIG_KEXEC_CORE
 	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d4f130a9f5c8..79a4988fd51f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1441,6 +1441,7 @@ void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu, struct dentry *debugfs_
 int kvm_arch_hardware_enable(void);
 void kvm_arch_hardware_disable(void);
 int kvm_arch_hardware_setup(void *opaque);
+int kvm_arch_post_hardware_enable_setup(void *opaque);
 void kvm_arch_hardware_unsetup(void);
 int kvm_arch_check_processor_compat(void);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a5bada53f1fe..cee799265ce6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4899,8 +4899,13 @@ static void hardware_enable_nolock(void *junk)
 
 	cpumask_set_cpu(cpu, cpus_hardware_enabled);
 
+	r = kvm_arch_check_processor_compat();
+	if (r)
+		goto out;
+
 	r = kvm_arch_hardware_enable();
 
+out:
 	if (r) {
 		cpumask_clear_cpu(cpu, cpus_hardware_enabled);
 		atomic_inc(&hardware_enable_failed);
@@ -5697,9 +5702,9 @@ void kvm_unregister_perf_callbacks(void)
 }
 #endif
 
-static void check_processor_compat(void *rtn)
+__weak int kvm_arch_post_hardware_enable_setup(void *opaque)
 {
-	*(int *)rtn = kvm_arch_check_processor_compat();
+	return 0;
 }
 
 int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
@@ -5732,11 +5737,17 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 	if (r < 0)
 		goto out_free_1;
 
-	for_each_online_cpu(cpu) {
-		smp_call_function_single(cpu, check_processor_compat, &r, 1);
-		if (r < 0)
-			goto out_free_2;
-	}
+	/* hardware_enable_nolock() checks CPU compatibility on each CPUs. */
+	r = hardware_enable_all();
+	if (r)
+		goto out_free_2;
+	/*
+	 * Arch specific initialization that requires to enable virtualization
+	 * feature.  e.g. TDX module initialization requires VMXON on all
+	 * present CPUs.
+	 */
+	kvm_arch_post_hardware_enable_setup(opaque);
+	hardware_disable_all();
 
 	r = cpuhp_setup_state_nocalls(CPUHP_AP_KVM_STARTING, "kvm/cpu:starting",
 				      kvm_starting_cpu, kvm_dying_cpu);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 004/102] KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (2 preceding siblings ...)
  2022-06-27 21:52 ` [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialiization isaku.yamahata
@ 2022-06-27 21:52 ` isaku.yamahata
  2022-06-27 21:52 ` [PATCH v7 005/102] x86/virt/vmx/tdx: export platform_tdx_enabled() isaku.yamahata
                   ` (99 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Xiaoyao Li

From: Sean Christopherson <sean.j.christopherson@intel.com>

KVM accesses the Virtual Machine Control Structure (VMCS) with VMX
instructions to operate on a VM.  TDX instead defines its own data structures
and TDX SEAMCALL APIs for the VMM to operate on a Trust Domain (TD).

Trust Domain Virtual Processor State (TDVPS) is the root control structure
of a TD VCPU.  It helps the TDX module control the operation of the VCPU,
and holds the VCPU state while the VCPU is not running. TDVPS is opaque to
software and DMA access, accessible only by using the TDX module interface
functions (such as TDH.VP.RD, TDH.VP.WR, ...).  TDVPS includes the TD VMCS
and TD VMCS auxiliary structures, such as the virtual APIC page,
virtualization exception information, etc.  TDVPS is composed of Trust
Domain Virtual
Processor Root (TDVPR) which is the root page of TDVPS and Trust Domain
Virtual Processor eXtension (TDVPX) pages which extend TDVPR to help
provide enough physical space for the logical TDVPS structure.

There is also a new structure, the Trust Domain Control Structure (TDCS),
which is the main control structure of a guest TD and is encrypted using the
guest TD's ephemeral private key.  At a high level, TDCS holds information for
controlling TD operation as a whole, execution, EPTP, MSR bitmaps, etc. KVM
needs to set it up.  Note that MSR bitmaps are held as part of TDCS (unlike
VMX) because they are meant to have the same value for all VCPUs of the
same TD.  TDCS is a multi-page logical structure composed of multiple Trust
Domain Control Extension (TDCX) physical pages.  Trust Domain Root (TDR) is
the root control structure of a guest TD and is encrypted using the TDX
global private key. It holds a minimal set of state variables that enable
guest TD control even during times when the TD's private key is not known,
or when the TD's key management state does not permit access to memory
encrypted using the TD's private key.

The following shows the relationship between those structures.

        TDR--> TDCS                     per-TD
         |       \--> TDCX
         \
          \--> TDVPS                    per-TD VCPU
                 \--> TDVPR and TDVPX
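
For KVM these are opaque, physically contiguous pages handed to the TDX
module.  A rough sketch of how a VMM might track them (made-up structure and
field names, purely to make the per-TD vs. per-vcpu split above concrete; the
real patches define their own structures):

  /* Sketch only: names are illustrative, not from this series. */
  struct kvm_tdx_sketch {                 /* per TD */
          hpa_t tdr_pa;                   /* TDR: root page */
          hpa_t *tdcx_pa;                 /* TDCX pages backing the TDCS */
  };

  struct vcpu_tdx_sketch {                /* per TD vcpu */
          hpa_t tdvpr_pa;                 /* TDVPR: root page of TDVPS */
          hpa_t *tdvpx_pa;                /* TDVPX extension pages */
  };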

The existing global struct kvm_x86_ops already defines an interface which
fits with TDX.  But kvm_x86_ops is a system-wide structure, not a per-VM one.
To allow VMX to coexist with TDs, the kvm_x86_ops callbacks will have
wrappers "if (tdx) tdx_op() else vmx_op()" to switch between VMX and TDX at
run time.

To split the runtime switch, the VMX implementation, and the TDX
implementation, add main.c, and move out the vmx_x86_ops hooks in
preparation for adding TDX, which can coexist with VMX, i.e. KVM can run
both VMs and TDs.  Use 'vt' for the naming scheme as a nod to VT-x and as a
concatenation of VmxTdx.

The current code looks as follows.
In vmx.c
  static vmx_op() { ... }
  static struct kvm_x86_ops vmx_x86_ops = {
        .op = vmx_op,
  initialization code

The eventually converted code will look like
In vmx.c, keep the VMX operations.
  vmx_op() { ... }
  VMX initialization
In tdx.c, define the TDX operations.
  tdx_op() { ... }
  TDX initialization
In x86_ops.h, declare the VMX and TDX operations.
  vmx_op();
  tdx_op();
In main.c, define common wrappers for VMX and TDX.
  static vt_ops() { if (tdx) tdx_ops() else vmx_ops() }
  static struct kvm_x86_ops vt_x86_ops = {
        .op = vt_op,
  initialization to call VMX and TDX initialization
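
A slightly more concrete sketch of one such wrapper (illustration only;
is_td() stands in for however main.c will decide that a VM is a TD, and
tdx_flush_tlb_all() for the future TDX counterpart):

  static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
  {
          /* Dispatch to the TDX or VMX implementation at run time. */
          if (is_td(vcpu->kvm))
                  tdx_flush_tlb_all(vcpu);
          else
                  vmx_flush_tlb_all(vcpu);
  }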

Opportunistically, fix the naming inconsistency of vmx_create_vcpu() and
vmx_free_vcpu() by renaming them to vmx_vcpu_create() and vmx_vcpu_free().

Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/Makefile      |   2 +-
 arch/x86/kvm/vmx/main.c    | 155 ++++++++++++++++
 arch/x86/kvm/vmx/vmx.c     | 363 +++++++++++--------------------------
 arch/x86/kvm/vmx/x86_ops.h | 125 +++++++++++++
 4 files changed, 386 insertions(+), 259 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/main.c
 create mode 100644 arch/x86/kvm/vmx/x86_ops.h

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 30f244b64523..ee4d0999f20f 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -22,7 +22,7 @@ kvm-$(CONFIG_X86_64) += mmu/tdp_iter.o mmu/tdp_mmu.o
 kvm-$(CONFIG_KVM_XEN)	+= xen.o
 
 kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
-			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
+			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o vmx/main.o
 kvm-intel-$(CONFIG_X86_SGX_KVM)	+= vmx/sgx.o
 
 kvm-amd-y		+= svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
new file mode 100644
index 000000000000..636768f5b985
--- /dev/null
+++ b/arch/x86/kvm/vmx/main.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/moduleparam.h>
+
+#include "x86_ops.h"
+#include "vmx.h"
+#include "nested.h"
+#include "pmu.h"
+
+struct kvm_x86_ops vt_x86_ops __initdata = {
+	.name = "kvm_intel",
+
+	.hardware_unsetup = vmx_hardware_unsetup,
+	.check_processor_compatibility = vmx_check_processor_compatibility,
+
+	.hardware_enable = vmx_hardware_enable,
+	.hardware_disable = vmx_hardware_disable,
+	.has_emulated_msr = vmx_has_emulated_msr,
+
+	.vm_size = sizeof(struct kvm_vmx),
+	.vm_init = vmx_vm_init,
+	.vm_destroy = vmx_vm_destroy,
+
+	.vcpu_precreate = vmx_vcpu_precreate,
+	.vcpu_create = vmx_vcpu_create,
+	.vcpu_free = vmx_vcpu_free,
+	.vcpu_reset = vmx_vcpu_reset,
+
+	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
+	.vcpu_load = vmx_vcpu_load,
+	.vcpu_put = vmx_vcpu_put,
+
+	.update_exception_bitmap = vmx_update_exception_bitmap,
+	.get_msr_feature = vmx_get_msr_feature,
+	.get_msr = vmx_get_msr,
+	.set_msr = vmx_set_msr,
+	.get_segment_base = vmx_get_segment_base,
+	.get_segment = vmx_get_segment,
+	.set_segment = vmx_set_segment,
+	.get_cpl = vmx_get_cpl,
+	.get_cs_db_l_bits = vmx_get_cs_db_l_bits,
+	.set_cr0 = vmx_set_cr0,
+	.is_valid_cr4 = vmx_is_valid_cr4,
+	.set_cr4 = vmx_set_cr4,
+	.set_efer = vmx_set_efer,
+	.get_idt = vmx_get_idt,
+	.set_idt = vmx_set_idt,
+	.get_gdt = vmx_get_gdt,
+	.set_gdt = vmx_set_gdt,
+	.set_dr7 = vmx_set_dr7,
+	.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
+	.cache_reg = vmx_cache_reg,
+	.get_rflags = vmx_get_rflags,
+	.set_rflags = vmx_set_rflags,
+	.get_if_flag = vmx_get_if_flag,
+
+	.flush_tlb_all = vmx_flush_tlb_all,
+	.flush_tlb_current = vmx_flush_tlb_current,
+	.flush_tlb_gva = vmx_flush_tlb_gva,
+	.flush_tlb_guest = vmx_flush_tlb_guest,
+
+	.vcpu_pre_run = vmx_vcpu_pre_run,
+	.vcpu_run = vmx_vcpu_run,
+	.handle_exit = vmx_handle_exit,
+	.skip_emulated_instruction = vmx_skip_emulated_instruction,
+	.update_emulated_instruction = vmx_update_emulated_instruction,
+	.set_interrupt_shadow = vmx_set_interrupt_shadow,
+	.get_interrupt_shadow = vmx_get_interrupt_shadow,
+	.patch_hypercall = vmx_patch_hypercall,
+	.inject_irq = vmx_inject_irq,
+	.inject_nmi = vmx_inject_nmi,
+	.queue_exception = vmx_queue_exception,
+	.cancel_injection = vmx_cancel_injection,
+	.interrupt_allowed = vmx_interrupt_allowed,
+	.nmi_allowed = vmx_nmi_allowed,
+	.get_nmi_mask = vmx_get_nmi_mask,
+	.set_nmi_mask = vmx_set_nmi_mask,
+	.enable_nmi_window = vmx_enable_nmi_window,
+	.enable_irq_window = vmx_enable_irq_window,
+	.update_cr8_intercept = vmx_update_cr8_intercept,
+	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
+	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
+	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
+	.load_eoi_exitmap = vmx_load_eoi_exitmap,
+	.apicv_post_state_restore = vmx_apicv_post_state_restore,
+	.check_apicv_inhibit_reasons = vmx_check_apicv_inhibit_reasons,
+	.hwapic_irr_update = vmx_hwapic_irr_update,
+	.hwapic_isr_update = vmx_hwapic_isr_update,
+	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
+	.sync_pir_to_irr = vmx_sync_pir_to_irr,
+	.deliver_interrupt = vmx_deliver_interrupt,
+	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
+
+	.set_tss_addr = vmx_set_tss_addr,
+	.set_identity_map_addr = vmx_set_identity_map_addr,
+	.get_mt_mask = vmx_get_mt_mask,
+
+	.get_exit_info = vmx_get_exit_info,
+
+	.vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
+
+	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
+
+	.get_l2_tsc_offset = vmx_get_l2_tsc_offset,
+	.get_l2_tsc_multiplier = vmx_get_l2_tsc_multiplier,
+	.write_tsc_offset = vmx_write_tsc_offset,
+	.write_tsc_multiplier = vmx_write_tsc_multiplier,
+
+	.load_mmu_pgd = vmx_load_mmu_pgd,
+
+	.check_intercept = vmx_check_intercept,
+	.handle_exit_irqoff = vmx_handle_exit_irqoff,
+
+	.request_immediate_exit = vmx_request_immediate_exit,
+
+	.sched_in = vmx_sched_in,
+
+	.cpu_dirty_log_size = PML_ENTITY_NUM,
+	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
+
+	.nested_ops = &vmx_nested_ops,
+
+	.pi_update_irte = vmx_pi_update_irte,
+	.pi_start_assignment = vmx_pi_start_assignment,
+
+#ifdef CONFIG_X86_64
+	.set_hv_timer = vmx_set_hv_timer,
+	.cancel_hv_timer = vmx_cancel_hv_timer,
+#endif
+
+	.setup_mce = vmx_setup_mce,
+
+	.smi_allowed = vmx_smi_allowed,
+	.enter_smm = vmx_enter_smm,
+	.leave_smm = vmx_leave_smm,
+	.enable_smi_window = vmx_enable_smi_window,
+
+	.can_emulate_instruction = vmx_can_emulate_instruction,
+	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
+	.migrate_timers = vmx_migrate_timers,
+
+	.msr_filter_changed = vmx_msr_filter_changed,
+	.complete_emulated_msr = kvm_complete_insn_gp,
+
+	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
+};
+
+struct kvm_x86_init_ops vt_init_ops __initdata = {
+	.cpu_has_kvm_support = vmx_cpu_has_kvm_support,
+	.disabled_by_bios = vmx_disabled_by_bios,
+	.hardware_setup = vmx_hardware_setup,
+	.handle_intel_pt_intr = NULL,
+
+	.runtime_ops = &vt_x86_ops,
+	.pmu_ops = &intel_pmu_ops,
+};
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d3b68a6dec48..286947c00638 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -66,6 +66,7 @@
 #include "vmcs12.h"
 #include "vmx.h"
 #include "x86.h"
+#include "x86_ops.h"
 
 MODULE_AUTHOR("Qumranet");
 MODULE_LICENSE("GPL");
@@ -1312,7 +1313,7 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
  * Switches to specified vcpu, until a matching vcpu_put(), but assumes
  * vcpu mutex is already taken.
  */
-static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -1323,7 +1324,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	vmx->host_debugctlmsr = get_debugctlmsr();
 }
 
-static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
+void vmx_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	vmx_vcpu_pi_put(vcpu);
 
@@ -1377,7 +1378,7 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 		vmx->emulation_required = vmx_emulation_required(vcpu);
 }
 
-static bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
+bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
 {
 	return vmx_get_rflags(vcpu) & X86_EFLAGS_IF;
 }
@@ -1483,8 +1484,8 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	return 0;
 }
 
-static bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
-					void *insn, int insn_len)
+bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
+				void *insn, int insn_len)
 {
 	/*
 	 * Emulation of instructions in SGX enclaves is impossible as RIP does
@@ -1568,7 +1569,7 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
  * Recognizes a pending MTF VM-exit and records the nested state for later
  * delivery.
  */
-static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
+void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -1591,7 +1592,7 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 		vmx->nested.mtf_pending = false;
 }
 
-static int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)
+int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	vmx_update_emulated_instruction(vcpu);
 	return skip_emulated_instruction(vcpu);
@@ -1610,7 +1611,7 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
 		vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
 }
 
-static void vmx_queue_exception(struct kvm_vcpu *vcpu)
+void vmx_queue_exception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned nr = vcpu->arch.exception.nr;
@@ -1723,12 +1724,12 @@ u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu)
 	return kvm_caps.default_tsc_scaling_ratio;
 }
 
-static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	vmcs_write64(TSC_OFFSET, offset);
 }
 
-static void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
+void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
 {
 	vmcs_write64(TSC_MULTIPLIER, multiplier);
 }
@@ -1752,7 +1753,7 @@ static inline bool vmx_feature_control_msr_valid(struct kvm_vcpu *vcpu,
 	return !(val & ~valid_bits);
 }
 
-static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
+int vmx_get_msr_feature(struct kvm_msr_entry *msr)
 {
 	switch (msr->index) {
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
@@ -1772,7 +1773,7 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
  * Returns 0 on success, non-0 otherwise.
  * Assumes vcpu_load() was already called.
  */
-static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct vmx_uret_msr *msr;
@@ -1950,7 +1951,7 @@ static u64 vcpu_supported_debugctl(struct kvm_vcpu *vcpu)
  * Returns 0 on success, non-0 otherwise.
  * Assumes vcpu_load() was already called.
  */
-static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct vmx_uret_msr *msr;
@@ -2274,7 +2275,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return ret;
 }
 
-static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
+void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 {
 	unsigned long guest_owned_bits;
 
@@ -2317,12 +2318,12 @@ static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	}
 }
 
-static __init int cpu_has_kvm_support(void)
+__init int vmx_cpu_has_kvm_support(void)
 {
 	return cpu_has_vmx();
 }
 
-static __init int vmx_disabled_by_bios(void)
+__init int vmx_disabled_by_bios(void)
 {
 	return !boot_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
 	       !boot_cpu_has(X86_FEATURE_VMX);
@@ -2348,7 +2349,7 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
 	return -EFAULT;
 }
 
-static int vmx_hardware_enable(void)
+int vmx_hardware_enable(void)
 {
 	int cpu = raw_smp_processor_id();
 	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
@@ -2389,7 +2390,7 @@ static void vmclear_local_loaded_vmcss(void)
 		__loaded_vmcs_clear(v);
 }
 
-static void vmx_hardware_disable(void)
+void vmx_hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();
 
@@ -2988,7 +2989,7 @@ static void exit_lmode(struct kvm_vcpu *vcpu)
 
 #endif
 
-static void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
+void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -3018,7 +3019,7 @@ static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
 	return to_vmx(vcpu)->vpid;
 }
 
-static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
+void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	u64 root_hpa = mmu->root.hpa;
@@ -3034,7 +3035,7 @@ static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
 		vpid_sync_context(vmx_get_current_vpid(vcpu));
 }
 
-static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
+void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
 	/*
 	 * vpid_sync_vcpu_addr() is a nop if vpid==0, see the comment in
@@ -3043,7 +3044,7 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 	vpid_sync_vcpu_addr(vmx_get_current_vpid(vcpu), addr);
 }
 
-static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
+void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
 	/*
 	 * vpid_sync_context() is a nop if vpid==0, e.g. if enable_vpid==0 or a
@@ -3198,8 +3199,7 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level)
 	return eptp;
 }
 
-static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
-			     int root_level)
+void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level)
 {
 	struct kvm *kvm = vcpu->kvm;
 	bool update_guest_cr3 = true;
@@ -3227,8 +3227,7 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 		vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
-
-static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	/*
 	 * We operate under the default treatment of SMM, so VMX cannot be
@@ -3344,7 +3343,7 @@ void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
 	var->g = (ar >> 15) & 1;
 }
 
-static u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
+u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
 {
 	struct kvm_segment s;
 
@@ -3424,14 +3423,14 @@ void __vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
 	vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(var));
 }
 
-static void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
+void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
 {
 	__vmx_set_segment(vcpu, var, seg);
 
 	to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
 }
 
-static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
+void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
 {
 	u32 ar = vmx_read_guest_seg_ar(to_vmx(vcpu), VCPU_SREG_CS);
 
@@ -3439,25 +3438,25 @@ static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
 	*l = (ar >> 13) & 1;
 }
 
-static void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	dt->size = vmcs_read32(GUEST_IDTR_LIMIT);
 	dt->address = vmcs_readl(GUEST_IDTR_BASE);
 }
 
-static void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	vmcs_write32(GUEST_IDTR_LIMIT, dt->size);
 	vmcs_writel(GUEST_IDTR_BASE, dt->address);
 }
 
-static void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	dt->size = vmcs_read32(GUEST_GDTR_LIMIT);
 	dt->address = vmcs_readl(GUEST_GDTR_BASE);
 }
 
-static void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	vmcs_write32(GUEST_GDTR_LIMIT, dt->size);
 	vmcs_writel(GUEST_GDTR_BASE, dt->address);
@@ -3955,7 +3954,7 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
 	}
 }
 
-static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
+bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	void *vapic_page;
@@ -3975,7 +3974,7 @@ static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 	return ((rvi & 0xf0) > (vppr & 0xf0));
 }
 
-static void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
+void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u32 i;
@@ -4109,8 +4108,8 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 	return 0;
 }
 
-static void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
-				  int trig_mode, int vector)
+void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector)
 {
 	struct kvm_vcpu *vcpu = apic->vcpu;
 
@@ -4253,7 +4252,7 @@ static u32 vmx_vmexit_ctrl(void)
 		~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
 }
 
-static void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -4493,7 +4492,7 @@ static int vmx_alloc_ipiv_pid_table(struct kvm *kvm)
 	return 0;
 }
 
-static int vmx_vcpu_precreate(struct kvm *kvm)
+int vmx_vcpu_precreate(struct kvm *kvm)
 {
 	return vmx_alloc_ipiv_pid_table(kvm);
 }
@@ -4645,7 +4644,7 @@ static void __vmx_vcpu_reset(struct kvm_vcpu *vcpu)
 	vmx->pi_desc.sn = 1;
 }
 
-static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -4702,12 +4701,12 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vpid_sync_context(vmx->vpid);
 }
 
-static void vmx_enable_irq_window(struct kvm_vcpu *vcpu)
+void vmx_enable_irq_window(struct kvm_vcpu *vcpu)
 {
 	exec_controls_setbit(to_vmx(vcpu), CPU_BASED_INTR_WINDOW_EXITING);
 }
 
-static void vmx_enable_nmi_window(struct kvm_vcpu *vcpu)
+void vmx_enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	if (!enable_vnmi ||
 	    vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI) {
@@ -4718,7 +4717,7 @@ static void vmx_enable_nmi_window(struct kvm_vcpu *vcpu)
 	exec_controls_setbit(to_vmx(vcpu), CPU_BASED_NMI_WINDOW_EXITING);
 }
 
-static void vmx_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
+void vmx_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	uint32_t intr;
@@ -4746,7 +4745,7 @@ static void vmx_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 	vmx_clear_hlt(vcpu);
 }
 
-static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
+void vmx_inject_nmi(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -4824,7 +4823,7 @@ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu)
 		 GUEST_INTR_STATE_NMI));
 }
 
-static int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return -EBUSY;
@@ -4846,7 +4845,7 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
 		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
 }
 
-static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return -EBUSY;
@@ -4861,7 +4860,7 @@ static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 	return !vmx_interrupt_blocked(vcpu);
 }
 
-static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
+int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
 {
 	void __user *ret;
 
@@ -4881,7 +4880,7 @@ static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
 	return init_rmode_tss(kvm, ret);
 }
 
-static int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
+int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
 {
 	to_kvm_vmx(kvm)->ept_identity_map_addr = ident_addr;
 	return 0;
@@ -5160,8 +5159,7 @@ static int handle_io(struct kvm_vcpu *vcpu)
 	return kvm_fast_pio(vcpu, size, port, in);
 }
 
-static void
-vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
+void vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
 {
 	/*
 	 * Patch in the VMCALL instruction:
@@ -5371,7 +5369,7 @@ static int handle_dr(struct kvm_vcpu *vcpu)
 	return kvm_complete_insn_gp(vcpu, err);
 }
 
-static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 {
 	get_debugreg(vcpu->arch.db[0], 0);
 	get_debugreg(vcpu->arch.db[1], 1);
@@ -5390,7 +5388,7 @@ static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 	set_debugreg(DR6_RESERVED, 6);
 }
 
-static void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
 {
 	vmcs_writel(GUEST_DR7, val);
 }
@@ -5661,7 +5659,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu)
+int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu)
 {
 	if (vmx_emulation_required_with_pending_exception(vcpu)) {
 		kvm_prepare_emulation_failure_exit(vcpu);
@@ -5925,9 +5923,8 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 static const int kvm_vmx_max_exit_handlers =
 	ARRAY_SIZE(kvm_vmx_exit_handlers);
 
-static void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
-			      u64 *info1, u64 *info2,
-			      u32 *intr_info, u32 *error_code)
+void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+		u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6370,7 +6367,7 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	return 0;
 }
 
-static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 {
 	int ret = __vmx_handle_exit(vcpu, exit_fastpath);
 
@@ -6458,7 +6455,7 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 		: "eax", "ebx", "ecx", "edx");
 }
 
-static void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	int tpr_threshold;
@@ -6528,7 +6525,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 	vmx_update_msr_bitmap_x2apic(vcpu);
 }
 
-static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
+void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 {
 	struct page *page;
 
@@ -6556,7 +6553,7 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 	put_page(page);
 }
 
-static void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
 {
 	u16 status;
 	u8 old;
@@ -6590,7 +6587,7 @@ static void vmx_set_rvi(int vector)
 	}
 }
 
-static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
+void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
 {
 	/*
 	 * When running L2, updating RVI is only relevant when
@@ -6604,7 +6601,7 @@ static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
 		vmx_set_rvi(max_irr);
 }
 
-static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int max_irr;
@@ -6650,7 +6647,7 @@ static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 	return max_irr;
 }
 
-static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 {
 	if (!kvm_vcpu_apicv_active(vcpu))
 		return;
@@ -6661,7 +6658,7 @@ static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 	vmcs_write64(EOI_EXIT_BITMAP3, eoi_exit_bitmap[3]);
 }
 
-static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6734,7 +6731,7 @@ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
 	vcpu->arch.at_instruction_boundary = true;
 }
 
-static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6751,7 +6748,7 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
  * The kvm parameter can be NULL (module initialization, or invocation before
  * VM creation). Be sure to check the kvm parameter before using it.
  */
-static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
+bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
 {
 	switch (index) {
 	case MSR_IA32_SMBASE:
@@ -6872,7 +6869,7 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 				  IDT_VECTORING_ERROR_CODE);
 }
 
-static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
+void vmx_cancel_injection(struct kvm_vcpu *vcpu)
 {
 	__vmx_complete_interrupts(vcpu,
 				  vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
@@ -6973,7 +6970,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	guest_state_exit_irqoff();
 }
 
-static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
+fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
@@ -7167,7 +7164,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	return vmx_exit_handlers_fastpath(vcpu);
 }
 
-static void vmx_vcpu_free(struct kvm_vcpu *vcpu)
+void vmx_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -7178,7 +7175,7 @@ static void vmx_vcpu_free(struct kvm_vcpu *vcpu)
 	free_loaded_vmcs(vmx->loaded_vmcs);
 }
 
-static int vmx_vcpu_create(struct kvm_vcpu *vcpu)
+int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	struct vmx_uret_msr *tsx_ctrl;
 	struct vcpu_vmx *vmx;
@@ -7287,7 +7284,7 @@ static int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 
-static int vmx_vm_init(struct kvm *kvm)
+int vmx_vm_init(struct kvm *kvm)
 {
 	if (!ple_gap)
 		kvm->arch.pause_in_guest = true;
@@ -7318,7 +7315,7 @@ static int vmx_vm_init(struct kvm *kvm)
 	return 0;
 }
 
-static int vmx_check_processor_compatibility(void)
+int vmx_check_processor_compatibility(void)
 {
 	struct vmcs_config vmcs_conf;
 	struct vmx_capability vmx_cap;
@@ -7341,7 +7338,7 @@ static int vmx_check_processor_compatibility(void)
 	return 0;
 }
 
-static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
 	u8 cache;
 
@@ -7530,7 +7527,7 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
 		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
 }
 
-static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -7642,7 +7639,7 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
 }
 
-static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
+void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
 {
 	to_vmx(vcpu)->req_immediate_exit = true;
 }
@@ -7681,10 +7678,10 @@ static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
 	return intercept ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
 }
 
-static int vmx_check_intercept(struct kvm_vcpu *vcpu,
-			       struct x86_instruction_info *info,
-			       enum x86_intercept_stage stage,
-			       struct x86_exception *exception)
+int vmx_check_intercept(struct kvm_vcpu *vcpu,
+		       struct x86_instruction_info *info,
+		       enum x86_intercept_stage stage,
+		       struct x86_exception *exception)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 
@@ -7749,8 +7746,8 @@ static inline int u64_shl_div_u64(u64 a, unsigned int shift,
 	return 0;
 }
 
-static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
-			    bool *expired)
+int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+		bool *expired)
 {
 	struct vcpu_vmx *vmx;
 	u64 tscl, guest_tscl, delta_tsc, lapic_timer_advance_cycles;
@@ -7789,13 +7786,13 @@ static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
 	return 0;
 }
 
-static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
+void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
 {
 	to_vmx(vcpu)->hv_deadline_tsc = -1;
 }
 #endif
 
-static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
+void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
 {
 	if (!kvm_pause_in_guest(vcpu->kvm))
 		shrink_ple_window(vcpu);
@@ -7821,7 +7818,7 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
 		secondary_exec_controls_clearbit(vmx, SECONDARY_EXEC_ENABLE_PML);
 }
 
-static void vmx_setup_mce(struct kvm_vcpu *vcpu)
+void vmx_setup_mce(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.mcg_cap & MCG_LMCE_P)
 		to_vmx(vcpu)->msr_ia32_feature_control_valid_bits |=
@@ -7831,7 +7828,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
 			~FEAT_CTL_LMCE_ENABLED;
 }
 
-static int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	/* we need a nested vmexit to enter SMM, postpone if run is pending */
 	if (to_vmx(vcpu)->nested.nested_run_pending)
@@ -7839,7 +7836,7 @@ static int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 	return !is_smm(vcpu);
 }
 
-static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -7853,7 +7850,7 @@ static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	return 0;
 }
 
-static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int ret;
@@ -7874,17 +7871,17 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	return 0;
 }
 
-static void vmx_enable_smi_window(struct kvm_vcpu *vcpu)
+void vmx_enable_smi_window(struct kvm_vcpu *vcpu)
 {
 	/* RSM will cause a vmexit anyway.  */
 }
 
-static bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
+bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 {
 	return to_vmx(vcpu)->nested.vmxon && !is_guest_mode(vcpu);
 }
 
-static void vmx_migrate_timers(struct kvm_vcpu *vcpu)
+void vmx_migrate_timers(struct kvm_vcpu *vcpu)
 {
 	if (is_guest_mode(vcpu)) {
 		struct hrtimer *timer = &to_vmx(vcpu)->nested.preemption_timer;
@@ -7894,7 +7891,7 @@ static void vmx_migrate_timers(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void vmx_hardware_unsetup(void)
+void vmx_hardware_unsetup(void)
 {
 	kvm_set_posted_intr_wakeup_handler(NULL);
 
@@ -7904,7 +7901,7 @@ static void vmx_hardware_unsetup(void)
 	free_kvm_area();
 }
 
-static bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason)
+bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason)
 {
 	ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
 			  BIT(APICV_INHIBIT_REASON_ABSENT) |
@@ -7916,151 +7913,13 @@ static bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason)
 	return supported & BIT(reason);
 }
 
-static void vmx_vm_destroy(struct kvm *kvm)
+void vmx_vm_destroy(struct kvm *kvm)
 {
 	struct kvm_vmx *kvm_vmx = to_kvm_vmx(kvm);
 
 	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
 }
 
-static struct kvm_x86_ops vmx_x86_ops __initdata = {
-	.name = "kvm_intel",
-
-	.hardware_unsetup = vmx_hardware_unsetup,
-
-	.check_processor_compatibility = vmx_check_processor_compatibility,
-	.hardware_enable = vmx_hardware_enable,
-	.hardware_disable = vmx_hardware_disable,
-	.has_emulated_msr = vmx_has_emulated_msr,
-
-	.vm_size = sizeof(struct kvm_vmx),
-	.vm_init = vmx_vm_init,
-	.vm_destroy = vmx_vm_destroy,
-
-	.vcpu_precreate = vmx_vcpu_precreate,
-	.vcpu_create = vmx_vcpu_create,
-	.vcpu_free = vmx_vcpu_free,
-	.vcpu_reset = vmx_vcpu_reset,
-
-	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
-	.vcpu_load = vmx_vcpu_load,
-	.vcpu_put = vmx_vcpu_put,
-
-	.update_exception_bitmap = vmx_update_exception_bitmap,
-	.get_msr_feature = vmx_get_msr_feature,
-	.get_msr = vmx_get_msr,
-	.set_msr = vmx_set_msr,
-	.get_segment_base = vmx_get_segment_base,
-	.get_segment = vmx_get_segment,
-	.set_segment = vmx_set_segment,
-	.get_cpl = vmx_get_cpl,
-	.get_cs_db_l_bits = vmx_get_cs_db_l_bits,
-	.set_cr0 = vmx_set_cr0,
-	.is_valid_cr4 = vmx_is_valid_cr4,
-	.set_cr4 = vmx_set_cr4,
-	.set_efer = vmx_set_efer,
-	.get_idt = vmx_get_idt,
-	.set_idt = vmx_set_idt,
-	.get_gdt = vmx_get_gdt,
-	.set_gdt = vmx_set_gdt,
-	.set_dr7 = vmx_set_dr7,
-	.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
-	.cache_reg = vmx_cache_reg,
-	.get_rflags = vmx_get_rflags,
-	.set_rflags = vmx_set_rflags,
-	.get_if_flag = vmx_get_if_flag,
-
-	.flush_tlb_all = vmx_flush_tlb_all,
-	.flush_tlb_current = vmx_flush_tlb_current,
-	.flush_tlb_gva = vmx_flush_tlb_gva,
-	.flush_tlb_guest = vmx_flush_tlb_guest,
-
-	.vcpu_pre_run = vmx_vcpu_pre_run,
-	.vcpu_run = vmx_vcpu_run,
-	.handle_exit = vmx_handle_exit,
-	.skip_emulated_instruction = vmx_skip_emulated_instruction,
-	.update_emulated_instruction = vmx_update_emulated_instruction,
-	.set_interrupt_shadow = vmx_set_interrupt_shadow,
-	.get_interrupt_shadow = vmx_get_interrupt_shadow,
-	.patch_hypercall = vmx_patch_hypercall,
-	.inject_irq = vmx_inject_irq,
-	.inject_nmi = vmx_inject_nmi,
-	.queue_exception = vmx_queue_exception,
-	.cancel_injection = vmx_cancel_injection,
-	.interrupt_allowed = vmx_interrupt_allowed,
-	.nmi_allowed = vmx_nmi_allowed,
-	.get_nmi_mask = vmx_get_nmi_mask,
-	.set_nmi_mask = vmx_set_nmi_mask,
-	.enable_nmi_window = vmx_enable_nmi_window,
-	.enable_irq_window = vmx_enable_irq_window,
-	.update_cr8_intercept = vmx_update_cr8_intercept,
-	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
-	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
-	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
-	.load_eoi_exitmap = vmx_load_eoi_exitmap,
-	.apicv_post_state_restore = vmx_apicv_post_state_restore,
-	.check_apicv_inhibit_reasons = vmx_check_apicv_inhibit_reasons,
-	.hwapic_irr_update = vmx_hwapic_irr_update,
-	.hwapic_isr_update = vmx_hwapic_isr_update,
-	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
-	.sync_pir_to_irr = vmx_sync_pir_to_irr,
-	.deliver_interrupt = vmx_deliver_interrupt,
-	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
-
-	.set_tss_addr = vmx_set_tss_addr,
-	.set_identity_map_addr = vmx_set_identity_map_addr,
-	.get_mt_mask = vmx_get_mt_mask,
-
-	.get_exit_info = vmx_get_exit_info,
-
-	.vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
-
-	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
-
-	.get_l2_tsc_offset = vmx_get_l2_tsc_offset,
-	.get_l2_tsc_multiplier = vmx_get_l2_tsc_multiplier,
-	.write_tsc_offset = vmx_write_tsc_offset,
-	.write_tsc_multiplier = vmx_write_tsc_multiplier,
-
-	.load_mmu_pgd = vmx_load_mmu_pgd,
-
-	.check_intercept = vmx_check_intercept,
-	.handle_exit_irqoff = vmx_handle_exit_irqoff,
-
-	.request_immediate_exit = vmx_request_immediate_exit,
-
-	.sched_in = vmx_sched_in,
-
-	.cpu_dirty_log_size = PML_ENTITY_NUM,
-	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
-
-	.nested_ops = &vmx_nested_ops,
-
-	.pi_update_irte = vmx_pi_update_irte,
-	.pi_start_assignment = vmx_pi_start_assignment,
-
-#ifdef CONFIG_X86_64
-	.set_hv_timer = vmx_set_hv_timer,
-	.cancel_hv_timer = vmx_cancel_hv_timer,
-#endif
-
-	.setup_mce = vmx_setup_mce,
-
-	.smi_allowed = vmx_smi_allowed,
-	.enter_smm = vmx_enter_smm,
-	.leave_smm = vmx_leave_smm,
-	.enable_smi_window = vmx_enable_smi_window,
-
-	.can_emulate_instruction = vmx_can_emulate_instruction,
-	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
-	.migrate_timers = vmx_migrate_timers,
-
-	.msr_filter_changed = vmx_msr_filter_changed,
-	.complete_emulated_msr = kvm_complete_insn_gp,
-
-	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
-};
-
 static unsigned int vmx_handle_intel_pt_intr(void)
 {
 	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
@@ -8126,9 +7985,7 @@ static void __init vmx_setup_me_spte_mask(void)
 	kvm_mmu_set_me_spte_mask(0, me_mask);
 }
 
-static struct kvm_x86_init_ops vmx_init_ops __initdata;
-
-static __init int hardware_setup(void)
+__init int vmx_hardware_setup(void)
 {
 	unsigned long host_bndcfgs;
 	struct desc_ptr dt;
@@ -8188,16 +8045,16 @@ static __init int hardware_setup(void)
 	 * using the APIC_ACCESS_ADDR VMCS field.
 	 */
 	if (!flexpriority_enabled)
-		vmx_x86_ops.set_apic_access_page_addr = NULL;
+		vt_x86_ops.set_apic_access_page_addr = NULL;
 
 	if (!cpu_has_vmx_tpr_shadow())
-		vmx_x86_ops.update_cr8_intercept = NULL;
+		vt_x86_ops.update_cr8_intercept = NULL;
 
 #if IS_ENABLED(CONFIG_HYPERV)
 	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
 	    && enable_ept) {
-		vmx_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
-		vmx_x86_ops.tlb_remote_flush_with_range =
+		vt_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
+		vt_x86_ops.tlb_remote_flush_with_range =
 				hv_remote_flush_tlb_with_range;
 	}
 #endif
@@ -8213,7 +8070,7 @@ static __init int hardware_setup(void)
 	if (!cpu_has_vmx_apicv())
 		enable_apicv = 0;
 	if (!enable_apicv)
-		vmx_x86_ops.sync_pir_to_irr = NULL;
+		vt_x86_ops.sync_pir_to_irr = NULL;
 
 	if (!enable_apicv || !cpu_has_vmx_ipiv())
 		enable_ipiv = false;
@@ -8249,7 +8106,7 @@ static __init int hardware_setup(void)
 		enable_pml = 0;
 
 	if (!enable_pml)
-		vmx_x86_ops.cpu_dirty_log_size = 0;
+		vt_x86_ops.cpu_dirty_log_size = 0;
 
 	if (!cpu_has_vmx_preemption_timer())
 		enable_preemption_timer = false;
@@ -8276,9 +8133,9 @@ static __init int hardware_setup(void)
 	}
 
 	if (!enable_preemption_timer) {
-		vmx_x86_ops.set_hv_timer = NULL;
-		vmx_x86_ops.cancel_hv_timer = NULL;
-		vmx_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
+		vt_x86_ops.set_hv_timer = NULL;
+		vt_x86_ops.cancel_hv_timer = NULL;
+		vt_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
 	}
 
 	kvm_caps.supported_mce_cap |= MCG_LMCE_P;
@@ -8288,9 +8145,9 @@ static __init int hardware_setup(void)
 	if (!enable_ept || !enable_pmu || !cpu_has_vmx_intel_pt())
 		pt_mode = PT_MODE_SYSTEM;
 	if (pt_mode == PT_MODE_HOST_GUEST)
-		vmx_init_ops.handle_intel_pt_intr = vmx_handle_intel_pt_intr;
+		vt_init_ops.handle_intel_pt_intr = vmx_handle_intel_pt_intr;
 	else
-		vmx_init_ops.handle_intel_pt_intr = NULL;
+		vt_init_ops.handle_intel_pt_intr = NULL;
 
 	setup_default_sgx_lepubkeyhash();
 
@@ -8314,16 +8171,6 @@ static __init int hardware_setup(void)
 	return r;
 }
 
-static struct kvm_x86_init_ops vmx_init_ops __initdata = {
-	.cpu_has_kvm_support = cpu_has_kvm_support,
-	.disabled_by_bios = vmx_disabled_by_bios,
-	.hardware_setup = hardware_setup,
-	.handle_intel_pt_intr = NULL,
-
-	.runtime_ops = &vmx_x86_ops,
-	.pmu_ops = &intel_pmu_ops,
-};
-
 static void vmx_cleanup_l1d_flush(void)
 {
 	if (vmx_l1d_flush_pages) {
@@ -8410,7 +8257,7 @@ static int __init vmx_init(void)
 		}
 
 		if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
-			vmx_x86_ops.enable_direct_tlbflush
+			vt_x86_ops.enable_direct_tlbflush
 				= hv_enable_direct_tlbflush;
 
 	} else {
@@ -8419,8 +8266,8 @@ static int __init vmx_init(void)
 #endif
 
 	vmx_init_early();
-	r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
-		     __alignof__(struct vcpu_vmx), THIS_MODULE);
+	r = kvm_init(&vt_init_ops, sizeof(struct vcpu_vmx),
+		__alignof__(struct vcpu_vmx), THIS_MODULE);
 	if (r)
 		return r;
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
new file mode 100644
index 000000000000..0f8a8547958f
--- /dev/null
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_X86_VMX_X86_OPS_H
+#define __KVM_X86_VMX_X86_OPS_H
+
+#include <linux/kvm_host.h>
+
+#include <asm/virtext.h>
+
+#include "x86.h"
+
+__init int vmx_cpu_has_kvm_support(void);
+__init int vmx_disabled_by_bios(void);
+__init int vmx_hardware_setup(void);
+
+extern struct kvm_x86_ops vt_x86_ops __initdata;
+extern struct kvm_x86_init_ops vt_init_ops __initdata;
+
+void vmx_hardware_unsetup(void);
+int vmx_check_processor_compatibility(void);
+int vmx_hardware_enable(void);
+void vmx_hardware_disable(void);
+int vmx_vm_init(struct kvm *kvm);
+void vmx_vm_destroy(struct kvm *kvm);
+int vmx_vcpu_precreate(struct kvm *kvm);
+int vmx_vcpu_create(struct kvm_vcpu *vcpu);
+int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu);
+fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu);
+void vmx_vcpu_free(struct kvm_vcpu *vcpu);
+void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+void vmx_vcpu_put(struct kvm_vcpu *vcpu);
+int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath);
+void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu);
+int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu);
+void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu);
+int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
+int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate);
+int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate);
+void vmx_enable_smi_window(struct kvm_vcpu *vcpu);
+bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
+				void *insn, int insn_len);
+int vmx_check_intercept(struct kvm_vcpu *vcpu,
+			struct x86_instruction_info *info,
+			enum x86_intercept_stage stage,
+			struct x86_exception *exception);
+bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
+void vmx_migrate_timers(struct kvm_vcpu *vcpu);
+void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu);
+bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason);
+void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
+void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
+bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu);
+int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu);
+void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector);
+void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu);
+bool vmx_has_emulated_msr(struct kvm *kvm, u32 index);
+void vmx_msr_filter_changed(struct kvm_vcpu *vcpu);
+void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
+void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
+int vmx_get_msr_feature(struct kvm_msr_entry *msr);
+int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
+u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg);
+void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+int vmx_get_cpl(struct kvm_vcpu *vcpu);
+void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l);
+void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
+void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
+void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val);
+void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu);
+void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu);
+void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
+bool vmx_get_if_flag(struct kvm_vcpu *vcpu);
+void vmx_flush_tlb_all(struct kvm_vcpu *vcpu);
+void vmx_flush_tlb_current(struct kvm_vcpu *vcpu);
+void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr);
+void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu);
+void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
+u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu);
+void vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall);
+void vmx_inject_irq(struct kvm_vcpu *vcpu, bool reinjected);
+void vmx_inject_nmi(struct kvm_vcpu *vcpu);
+void vmx_queue_exception(struct kvm_vcpu *vcpu);
+void vmx_cancel_injection(struct kvm_vcpu *vcpu);
+int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
+void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
+void vmx_enable_nmi_window(struct kvm_vcpu *vcpu);
+void vmx_enable_irq_window(struct kvm_vcpu *vcpu);
+void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr);
+void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu);
+void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu);
+void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
+int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr);
+int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr);
+u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+		u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
+u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
+u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
+void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset);
+void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier);
+void vmx_request_immediate_exit(struct kvm_vcpu *vcpu);
+void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu);
+void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
+#ifdef CONFIG_X86_64
+int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+		bool *expired);
+void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
+#endif
+void vmx_setup_mce(struct kvm_vcpu *vcpu);
+
+#endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 005/102] x86/virt/vmx/tdx: export platform_tdx_enabled()
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (3 preceding siblings ...)
  2022-06-27 21:52 ` [PATCH v7 004/102] KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX isaku.yamahata
@ 2022-06-27 21:52 ` isaku.yamahata
  2022-06-27 21:52 ` [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization isaku.yamahata
                   ` (98 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX KVM uses platform_tdx_enabled() via vmx_hardware_setup() to check whether
the platform supports TDX (concretely, CPU SEAM mode), irrespective of whether
the TDX module is loaded or initialized.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/virt/vmx/tdx/tdx.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 1363998ce1a9..f9a6f8bdade8 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1285,6 +1285,7 @@ bool platform_tdx_enabled(void)
 {
 	return tdx_keyid_num >= 2;
 }
+EXPORT_SYMBOL_GPL(platform_tdx_enabled);
 
 /**
  * tdx_init - Initialize the TDX module
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (4 preceding siblings ...)
  2022-06-27 21:52 ` [PATCH v7 005/102] x86/virt/vmx/tdx: export platform_tdx_enabled() isaku.yamahata
@ 2022-06-27 21:52 ` isaku.yamahata
  2022-06-28  3:43   ` Kai Huang
  2022-06-27 21:52 ` [PATCH v7 007/102] KVM: Enable hardware before doing arch VM initialization isaku.yamahata
                   ` (97 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX requires several initialization steps for KVM to create guest TDs:
detect the CPU feature, enable VMX (TDX is based on VMX), detect TDX module
availability, and initialize the TDX module.  This patch implements the first
step, CPU feature detection.  Because VMX is not yet enabled by the VMXON
instruction at KVM kernel module initialization, defer the remaining
initialization steps until VMX is enabled by the hardware_enable callback.

Introduce a module parameter, enable_tdx, to explicitly enable TDX KVM
support.  It's off by default to keep the same behavior for those who don't
use TDX.  Implement the CPU feature detection at KVM kernel module
initialization in the hardware_setup callback to check whether the CPU
feature is available and to retrieve some CPU parameters.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/Makefile      |  1 +
 arch/x86/kvm/vmx/main.c    | 18 ++++++++++++++++-
 arch/x86/kvm/vmx/tdx.c     | 40 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  6 ++++++
 4 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kvm/vmx/tdx.c

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index ee4d0999f20f..e2c05195cb95 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -24,6 +24,7 @@ kvm-$(CONFIG_KVM_XEN)	+= xen.o
 kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o vmx/main.o
 kvm-intel-$(CONFIG_X86_SGX_KVM)	+= vmx/sgx.o
+kvm-intel-$(CONFIG_INTEL_TDX_HOST)	+= vmx/tdx.o
 
 kvm-amd-y		+= svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
 
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 636768f5b985..fabf5f22c94f 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -6,6 +6,22 @@
 #include "nested.h"
 #include "pmu.h"
 
+static bool __read_mostly enable_tdx = IS_ENABLED(CONFIG_INTEL_TDX_HOST);
+module_param_named(tdx, enable_tdx, bool, 0444);
+
+static __init int vt_hardware_setup(void)
+{
+	int ret;
+
+	ret = vmx_hardware_setup();
+	if (ret)
+		return ret;
+
+	enable_tdx = enable_tdx && !tdx_hardware_setup(&vt_x86_ops);
+
+	return 0;
+}
+
 struct kvm_x86_ops vt_x86_ops __initdata = {
 	.name = "kvm_intel",
 
@@ -147,7 +163,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 struct kvm_x86_init_ops vt_init_ops __initdata = {
 	.cpu_has_kvm_support = vmx_cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
-	.hardware_setup = vmx_hardware_setup,
+	.hardware_setup = vt_hardware_setup,
 	.handle_intel_pt_intr = NULL,
 
 	.runtime_ops = &vt_x86_ops,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
new file mode 100644
index 000000000000..c12e61cdddea
--- /dev/null
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/cpu.h>
+
+#include <asm/tdx.h>
+
+#include "capabilities.h"
+#include "x86_ops.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "tdx: " fmt
+
+static u64 hkid_mask __ro_after_init;
+static u8 hkid_start_pos __ro_after_init;
+
+int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
+{
+	u32 max_pa;
+
+	if (!enable_ept) {
+		pr_warn("Cannot enable TDX with EPT disabled\n");
+		return -EINVAL;
+	}
+
+	if (!platform_tdx_enabled()) {
+		pr_warn("Cannot enable TDX on TDX disabled platform\n");
+		return -ENODEV;
+	}
+
+	/* Safe guard check because TDX overrides tlb_remote_flush callback. */
+	if (WARN_ON_ONCE(x86_ops->tlb_remote_flush))
+		return -EIO;
+
+	max_pa = cpuid_eax(0x80000008) & 0xff;
+	hkid_start_pos = boot_cpu_data.x86_phys_bits;
+	hkid_mask = GENMASK_ULL(max_pa - 1, hkid_start_pos);
+	pr_info("kvm: TDX is supported. hkid start pos %d mask 0x%llx\n",
+		hkid_start_pos, hkid_mask);
+
+	return 0;
+}
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 0f8a8547958f..0a5967a91e26 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -122,4 +122,10 @@ void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
 #endif
 void vmx_setup_mce(struct kvm_vcpu *vcpu);
 
+#ifdef CONFIG_INTEL_TDX_HOST
+int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
+#else
+static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
+#endif
+
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 007/102] KVM: Enable hardware before doing arch VM initialization
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (5 preceding siblings ...)
  2022-06-27 21:52 ` [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization isaku.yamahata
@ 2022-06-27 21:52 ` isaku.yamahata
  2022-06-28  2:59   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions isaku.yamahata
                   ` (96 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:52 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Swap the order of hardware_enable_all() and kvm_arch_init_vm() to
accommodate Intel's TDX, which needs VMX to be enabled during VM init in
order to make SEAMCALLs.

This also provides consistent ordering between kvm_create_vm() and
kvm_destroy_vm() with respect to calling kvm_arch_destroy_vm() and
hardware_disable_all().
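
The resulting ordering, sketched:

  /*
   * kvm_create_vm():   hardware_enable_all();   kvm_arch_init_vm();
   * kvm_destroy_vm():  kvm_arch_destroy_vm();   hardware_disable_all();
   *
   * i.e. arch VM init now runs with virtualization enabled, and the error
   * and destroy paths unwind in the reverse order of creation.
   */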

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 virt/kvm/kvm_main.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index cee799265ce6..0acb0b6d1f82 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1138,19 +1138,19 @@ static struct kvm *kvm_create_vm(unsigned long type)
 		rcu_assign_pointer(kvm->buses[i],
 			kzalloc(sizeof(struct kvm_io_bus), GFP_KERNEL_ACCOUNT));
 		if (!kvm->buses[i])
-			goto out_err_no_arch_destroy_vm;
+			goto out_err_no_disable;
 	}
 
 	kvm->max_halt_poll_ns = halt_poll_ns;
 
-	r = kvm_arch_init_vm(kvm, type);
-	if (r)
-		goto out_err_no_arch_destroy_vm;
-
 	r = hardware_enable_all();
 	if (r)
 		goto out_err_no_disable;
 
+	r = kvm_arch_init_vm(kvm, type);
+	if (r)
+		goto out_err_no_arch_destroy_vm;
+
 #ifdef CONFIG_HAVE_KVM_IRQFD
 	INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list);
 #endif
@@ -1188,10 +1188,10 @@ static struct kvm *kvm_create_vm(unsigned long type)
 		mmu_notifier_unregister(&kvm->mmu_notifier, current->mm);
 #endif
 out_err_no_mmu_notifier:
-	hardware_disable_all();
-out_err_no_disable:
 	kvm_arch_destroy_vm(kvm);
 out_err_no_arch_destroy_vm:
+	hardware_disable_all();
+out_err_no_disable:
 	WARN_ON_ONCE(!refcount_dec_and_test(&kvm->users_count));
 	for (i = 0; i < KVM_NR_BUSES; i++)
 		kfree(kvm_get_bus(kvm, i));
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (6 preceding siblings ...)
  2022-06-27 21:52 ` [PATCH v7 007/102] KVM: Enable hardware before doing arch VM initialization isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-28  3:53   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure isaku.yamahata
                   ` (95 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Currently, the KVM VMX module initialization and exit functions are each a
single function.  Refactor the KVM VMX module initialization function into a
KVM common part and a VMX part so that a TDX specific part can be added
cleanly.  Opportunistically refactor the module exit function as well.

The current module initialization flow is: 1) calculate the sizes of the VMX
kvm structure and VMX vcpu structure, 2) hyper-v specific initialization,
3) report those sizes to the KVM common layer and do KVM common
initialization, and 4) VMX specific system-wide initialization.

Refactor the KVM VMX module initialization function into several functions
with a wrapper so that the VMX specific logic stays in vmx.c and the logic
common to VMX and TDX lives in main.c.  The wrapper is "vt_init() { vmx
kvm/vcpu size calculation; hv_vp_assist_page_init(); kvm_init(); vmx_init(); }"
in main.c, with hv_vp_assist_page_init() and vmx_init() in vmx.c.
hv_vp_assist_page_init() initializes the hyper-v specific assist pages,
kvm_init() does the system-wide initialization of the KVM common layer, and
vmx_init() does the system-wide VMX initialization.

The KVM architecture common layer allocates struct kvm with the size
reported by the architecture-specific code.  The KVM VMX module defines its
structure as struct kvm_vmx { struct kvm kvm; VMX specific members; } and
uses it as struct kvm_vmx.  Similarly for the vcpu structure.  Later TDX KVM
patches will define TDX specific kvm and vcpu structures and add
tdx_pre_kvm_init() to report their sizes to the KVM common layer.

The current module exit function is also a single function, a combination of
VMX specific logic and common KVM logic.  Refactor it into VMX specific
logic and KVM common logic.  This is just refactoring to keep the VMX
specific logic in vmx.c and out of main.c.
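
The resulting flow, summarized (a sketch of what the code below implements):

  vt_init():  hv_vp_assist_page_init() -> vmx_init_early() -> kvm_init() -> vmx_init()
  vt_exit():  vmx_exit() -> kvm_exit() -> hv_vp_assist_page_exit()

with vt_init() unwinding already-completed steps in reverse order when a
later step fails.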

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    |  38 +++++++++++++
 arch/x86/kvm/vmx/vmx.c     | 106 ++++++++++++++++++-------------------
 arch/x86/kvm/vmx/x86_ops.h |   6 +++
 3 files changed, 95 insertions(+), 55 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index fabf5f22c94f..371dad728166 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -169,3 +169,41 @@ struct kvm_x86_init_ops vt_init_ops __initdata = {
 	.runtime_ops = &vt_x86_ops,
 	.pmu_ops = &intel_pmu_ops,
 };
+
+static int __init vt_init(void)
+{
+	unsigned int vcpu_size, vcpu_align;
+	int r;
+
+	vt_x86_ops.vm_size = sizeof(struct kvm_vmx);
+	vcpu_size = sizeof(struct vcpu_vmx);
+	vcpu_align = __alignof__(struct vcpu_vmx);
+
+	hv_vp_assist_page_init();
+	vmx_init_early();
+
+	r = kvm_init(&vt_init_ops, vcpu_size, vcpu_align, THIS_MODULE);
+	if (r)
+		goto err_vmx_post_exit;
+
+	r = vmx_init();
+	if (r)
+		goto err_kvm_exit;
+
+	return 0;
+
+err_kvm_exit:
+	kvm_exit();
+err_vmx_post_exit:
+	hv_vp_assist_page_exit();
+	return r;
+}
+module_init(vt_init);
+
+static void vt_exit(void)
+{
+	vmx_exit();
+	kvm_exit();
+	hv_vp_assist_page_exit();
+}
+module_exit(vt_exit);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 286947c00638..b30d73d28e75 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8181,15 +8181,45 @@ static void vmx_cleanup_l1d_flush(void)
 	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
 }
 
-static void vmx_exit(void)
+void __init hv_vp_assist_page_init(void)
 {
-#ifdef CONFIG_KEXEC_CORE
-	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
-	synchronize_rcu();
-#endif
+#if IS_ENABLED(CONFIG_HYPERV)
+	/*
+	 * Enlightened VMCS usage should be recommended and the host needs
+	 * to support eVMCS v1 or above. We can also disable eVMCS support
+	 * with module parameter.
+	 */
+	if (enlightened_vmcs &&
+	    ms_hyperv.hints & HV_X64_ENLIGHTENED_VMCS_RECOMMENDED &&
+	    (ms_hyperv.nested_features & HV_X64_ENLIGHTENED_VMCS_VERSION) >=
+	    KVM_EVMCS_VERSION) {
+		int cpu;
+
+		/* Check that we have assist pages on all online CPUs */
+		for_each_online_cpu(cpu) {
+			if (!hv_get_vp_assist_page(cpu)) {
+				enlightened_vmcs = false;
+				break;
+			}
+		}
 
-	kvm_exit();
+		if (enlightened_vmcs) {
+			pr_info("KVM: vmx: using Hyper-V Enlightened VMCS\n");
+			static_branch_enable(&enable_evmcs);
+		}
+
+		if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
+			vt_x86_ops.enable_direct_tlbflush
+				= hv_enable_direct_tlbflush;
 
+	} else {
+		enlightened_vmcs = false;
+	}
+#endif
+}
+
+void hv_vp_assist_page_exit(void)
+{
 #if IS_ENABLED(CONFIG_HYPERV)
 	if (static_branch_unlikely(&enable_evmcs)) {
 		int cpu;
@@ -8213,14 +8243,10 @@ static void vmx_exit(void)
 		static_branch_disable(&enable_evmcs);
 	}
 #endif
-	vmx_cleanup_l1d_flush();
-
-	allow_smaller_maxphyaddr = false;
 }
-module_exit(vmx_exit);
 
 /* initialize before kvm_init() so that hardware_enable/disable() can work. */
-static void __init vmx_init_early(void)
+void __init vmx_init_early(void)
 {
 	int cpu;
 
@@ -8228,49 +8254,10 @@ static void __init vmx_init_early(void)
 		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
 }
 
-static int __init vmx_init(void)
+int __init vmx_init(void)
 {
 	int r, cpu;
 
-#if IS_ENABLED(CONFIG_HYPERV)
-	/*
-	 * Enlightened VMCS usage should be recommended and the host needs
-	 * to support eVMCS v1 or above. We can also disable eVMCS support
-	 * with module parameter.
-	 */
-	if (enlightened_vmcs &&
-	    ms_hyperv.hints & HV_X64_ENLIGHTENED_VMCS_RECOMMENDED &&
-	    (ms_hyperv.nested_features & HV_X64_ENLIGHTENED_VMCS_VERSION) >=
-	    KVM_EVMCS_VERSION) {
-
-		/* Check that we have assist pages on all online CPUs */
-		for_each_online_cpu(cpu) {
-			if (!hv_get_vp_assist_page(cpu)) {
-				enlightened_vmcs = false;
-				break;
-			}
-		}
-
-		if (enlightened_vmcs) {
-			pr_info("KVM: vmx: using Hyper-V Enlightened VMCS\n");
-			static_branch_enable(&enable_evmcs);
-		}
-
-		if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
-			vt_x86_ops.enable_direct_tlbflush
-				= hv_enable_direct_tlbflush;
-
-	} else {
-		enlightened_vmcs = false;
-	}
-#endif
-
-	vmx_init_early();
-	r = kvm_init(&vt_init_ops, sizeof(struct vcpu_vmx),
-		__alignof__(struct vcpu_vmx), THIS_MODULE);
-	if (r)
-		return r;
-
 	/*
 	 * Must be called after kvm_init() so enable_ept is properly set
 	 * up. Hand the parameter mitigation value in which was stored in
@@ -8279,10 +8266,8 @@ static int __init vmx_init(void)
 	 * mitigation mode.
 	 */
 	r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
-	if (r) {
-		vmx_exit();
+	if (r)
 		return r;
-	}
 
 	for_each_possible_cpu(cpu)
 		pi_init_cpu(cpu);
@@ -8303,4 +8288,15 @@ static int __init vmx_init(void)
 
 	return 0;
 }
-module_init(vmx_init);
+
+void vmx_exit(void)
+{
+#ifdef CONFIG_KEXEC_CORE
+	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
+	synchronize_rcu();
+#endif
+
+	vmx_cleanup_l1d_flush();
+
+	allow_smaller_maxphyaddr = false;
+}
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 0a5967a91e26..2abead2f60f7 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -8,6 +8,12 @@
 
 #include "x86.h"
 
+void __init hv_vp_assist_page_init(void);
+void hv_vp_assist_page_exit(void);
+void __init vmx_init_early(void);
+int __init vmx_init(void);
+void vmx_exit(void);
+
 __init int vmx_cpu_has_kvm_support(void);
 __init int vmx_disabled_by_bios(void);
 __init int vmx_hardware_setup(void);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (7 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 010/102] x86/virt/tdx: Add a helper function to return system wide info about TDX module isaku.yamahata
                   ` (94 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Add placeholder TDX VM/vcpu structures that overlay the VMX VM/vcpu
structures.  Initialize the VM structure size and the vcpu size/alignment so
that the x86 KVM common code knows those sizes irrespective of VMX or TDX.
Those structures will be populated as the guest creation logic develops.

Add helper functions to check if a VM is a guest TD and add conversion
functions between KVM VM/vcpu and TDX VM/vcpu.
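
A usage sketch of the new helpers (the callsite is hypothetical; to_vmx() is
the existing VMX conversion helper):

  if (is_td_vcpu(vcpu)) {
          struct vcpu_tdx *tdx = to_tdx(vcpu);
          /* TDX specific handling, filled in by later patches. */
  } else {
          struct vcpu_vmx *vmx = to_vmx(vcpu);
          /* existing VMX handling */
  }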

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c |  8 +++---
 arch/x86/kvm/vmx/tdx.c  |  1 +
 arch/x86/kvm/vmx/tdx.h  | 54 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/tdx.h

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 371dad728166..349534412216 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -5,6 +5,7 @@
 #include "vmx.h"
 #include "nested.h"
 #include "pmu.h"
+#include "tdx.h"
 
 static bool __read_mostly enable_tdx = IS_ENABLED(CONFIG_INTEL_TDX_HOST);
 module_param_named(tdx, enable_tdx, bool, 0444);
@@ -175,9 +176,10 @@ static int __init vt_init(void)
 	unsigned int vcpu_size, vcpu_align;
 	int r;
 
-	vt_x86_ops.vm_size = sizeof(struct kvm_vmx);
-	vcpu_size = sizeof(struct vcpu_vmx);
-	vcpu_align = __alignof__(struct vcpu_vmx);
+	vt_x86_ops.vm_size = max(sizeof(struct kvm_vmx), sizeof(struct kvm_tdx));
+	vcpu_size = max(sizeof(struct vcpu_vmx), sizeof(struct vcpu_tdx));
+	vcpu_align = max(__alignof__(struct vcpu_vmx),
+			__alignof__(struct vcpu_tdx));
 
 	hv_vp_assist_page_init();
 	vmx_init_early();
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c12e61cdddea..2617389ef466 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -5,6 +5,7 @@
 
 #include "capabilities.h"
 #include "x86_ops.h"
+#include "tdx.h"
 
 #undef pr_fmt
 #define pr_fmt(fmt) "tdx: " fmt
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
new file mode 100644
index 000000000000..060bf48ec3d6
--- /dev/null
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_X86_TDX_H
+#define __KVM_X86_TDX_H
+
+#ifdef CONFIG_INTEL_TDX_HOST
+struct kvm_tdx {
+	struct kvm kvm;
+	/* TDX specific members follow. */
+};
+
+struct vcpu_tdx {
+	struct kvm_vcpu	vcpu;
+	/* TDX specific members follow. */
+};
+
+static inline bool is_td(struct kvm *kvm)
+{
+	/*
+	 * TDX VM type isn't defined yet.
+	 * return kvm->arch.vm_type == KVM_X86_TDX_VM;
+	 */
+	return false;
+}
+
+static inline bool is_td_vcpu(struct kvm_vcpu *vcpu)
+{
+	return is_td(vcpu->kvm);
+}
+
+static inline struct kvm_tdx *to_kvm_tdx(struct kvm *kvm)
+{
+	return container_of(kvm, struct kvm_tdx, kvm);
+}
+
+static inline struct vcpu_tdx *to_tdx(struct kvm_vcpu *vcpu)
+{
+	return container_of(vcpu, struct vcpu_tdx, vcpu);
+}
+#else
+struct kvm_tdx {
+	struct kvm kvm;
+};
+
+struct vcpu_tdx {
+	struct kvm_vcpu	vcpu;
+};
+
+static inline bool is_td(struct kvm *kvm) { return false; }
+static inline bool is_td_vcpu(struct kvm_vcpu *vcpu) { return false; }
+static inline struct kvm_tdx *to_kvm_tdx(struct kvm *kvm) { return NULL; }
+static inline struct vcpu_tdx *to_tdx(struct kvm_vcpu *vcpu) { return NULL; }
+#endif /* CONFIG_INTEL_TDX_HOST */
+
+#endif /* __KVM_X86_TDX_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 010/102] x86/virt/tdx: Add a helper function to return system wide info about TDX module
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (8 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-07  2:46   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko isaku.yamahata
                   ` (93 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX KVM needs system-wide information about the TDX module, struct
tdsysinfo_struct.  Add a helper function, tdx_get_sysinfo(), to return it
instead of KVM retrieving it itself with various error checks.  Move the
struct definition to the common header <asm/tdx.h> so that KVM can use it.
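
A minimal sketch of the intended KVM-side consumer (the error handling and
message are illustrative, not the final code):

  const struct tdsysinfo_struct *tdsysinfo = tdx_get_sysinfo();

  if (!tdsysinfo)         /* NULL if the TDX module isn't initialized */
          return -EOPNOTSUPP;
  pr_info("TDX module %u.%u, build %u\n", tdsysinfo->major_version,
          tdsysinfo->minor_version, tdsysinfo->build_num);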

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/tdx.h  | 55 +++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.c | 20 +++++++++++---
 arch/x86/virt/vmx/tdx/tdx.h | 52 -----------------------------------
 3 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 801f6e10b2db..dfea0dd71bc1 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -89,11 +89,66 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
 #endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
 
 #ifdef CONFIG_INTEL_TDX_HOST
+struct tdx_cpuid_config {
+	u32	leaf;
+	u32	sub_leaf;
+	u32	eax;
+	u32	ebx;
+	u32	ecx;
+	u32	edx;
+} __packed;
+
+#define TDSYSINFO_STRUCT_SIZE		1024
+#define TDSYSINFO_STRUCT_ALIGNMENT	1024
+
+struct tdsysinfo_struct {
+	/* TDX-SEAM Module Info */
+	u32	attributes;
+	u32	vendor_id;
+	u32	build_date;
+	u16	build_num;
+	u16	minor_version;
+	u16	major_version;
+	u8	reserved0[14];
+	/* Memory Info */
+	u16	max_tdmrs;
+	u16	max_reserved_per_tdmr;
+	u16	pamt_entry_size;
+	u8	reserved1[10];
+	/* Control Struct Info */
+	u16	tdcs_base_size;
+	u8	reserved2[2];
+	u16	tdvps_base_size;
+	u8	tdvps_xfam_dependent_size;
+	u8	reserved3[9];
+	/* TD Capabilities */
+	u64	attributes_fixed0;
+	u64	attributes_fixed1;
+	u64	xfam_fixed0;
+	u64	xfam_fixed1;
+	u8	reserved4[32];
+	u32	num_cpuid_config;
+	/*
+	 * The actual number of CPUID_CONFIG depends on above
+	 * 'num_cpuid_config'.  The size of 'struct tdsysinfo_struct'
+	 * is 1024B defined by TDX architecture.  Use a union with
+	 * specific padding to make 'sizeof(struct tdsysinfo_struct)'
+	 * equal to 1024.
+	 */
+	union {
+		struct tdx_cpuid_config	cpuid_configs[0];
+		u8			reserved5[892];
+	};
+} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
+
 bool platform_tdx_enabled(void);
 int tdx_init(void);
+const struct tdsysinfo_struct *tdx_get_sysinfo(void);
 #else	/* !CONFIG_INTEL_TDX_HOST */
 static inline bool platform_tdx_enabled(void) { return false; }
 static inline int tdx_init(void)  { return -ENODEV; }
+struct tdsysinfo_struct;
+static inline const struct tdsysinfo_struct *tdx_get_sysinfo(void) { return NULL; }
 #endif	/* CONFIG_INTEL_TDX_HOST */
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index f9a6f8bdade8..14f53494156c 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -364,9 +364,9 @@ static int check_cmrs(struct cmr_info *cmr_array, int *actual_cmr_num)
 	return 0;
 }
 
-static int tdx_get_sysinfo(struct tdsysinfo_struct *tdsysinfo,
-			   struct cmr_info *cmr_array,
-			   int *actual_cmr_num)
+static int __tdx_get_sysinfo(struct tdsysinfo_struct *tdsysinfo,
+			     struct cmr_info *cmr_array,
+			     int *actual_cmr_num)
 {
 	struct tdx_module_output out;
 	u64 ret;
@@ -393,6 +393,18 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *tdsysinfo,
 	return check_cmrs(cmr_array, actual_cmr_num);
 }
 
+const struct tdsysinfo_struct *tdx_get_sysinfo(void)
+{
+	const struct tdsysinfo_struct *r = NULL;
+
+	mutex_lock(&tdx_module_lock);
+	if (tdx_module_status == TDX_MODULE_INITIALIZED)
+		r = &tdx_sysinfo;
+	mutex_unlock(&tdx_module_lock);
+	return r;
+}
+EXPORT_SYMBOL_GPL(tdx_get_sysinfo);
+
 /*
  * Skip the memory region below 1MB.  Return true if the entire
  * region is skipped.  Otherwise, the updated range is returned.
@@ -1116,7 +1128,7 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out;
 
-	ret = tdx_get_sysinfo(&tdx_sysinfo, tdx_cmr_array, &tdx_cmr_num);
+	ret = __tdx_get_sysinfo(&tdx_sysinfo, tdx_cmr_array, &tdx_cmr_num);
 	if (ret)
 		goto out;
 
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index e0309558be13..c08e4ee2d0bf 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -65,58 +65,6 @@ struct cmr_info {
 #define MAX_CMRS			32
 #define CMR_INFO_ARRAY_ALIGNMENT	512
 
-struct cpuid_config {
-	u32	leaf;
-	u32	sub_leaf;
-	u32	eax;
-	u32	ebx;
-	u32	ecx;
-	u32	edx;
-} __packed;
-
-#define TDSYSINFO_STRUCT_SIZE		1024
-#define TDSYSINFO_STRUCT_ALIGNMENT	1024
-
-struct tdsysinfo_struct {
-	/* TDX-SEAM Module Info */
-	u32	attributes;
-	u32	vendor_id;
-	u32	build_date;
-	u16	build_num;
-	u16	minor_version;
-	u16	major_version;
-	u8	reserved0[14];
-	/* Memory Info */
-	u16	max_tdmrs;
-	u16	max_reserved_per_tdmr;
-	u16	pamt_entry_size;
-	u8	reserved1[10];
-	/* Control Struct Info */
-	u16	tdcs_base_size;
-	u8	reserved2[2];
-	u16	tdvps_base_size;
-	u8	tdvps_xfam_dependent_size;
-	u8	reserved3[9];
-	/* TD Capabilities */
-	u64	attributes_fixed0;
-	u64	attributes_fixed1;
-	u64	xfam_fixed0;
-	u64	xfam_fixed1;
-	u8	reserved4[32];
-	u32	num_cpuid_config;
-	/*
-	 * The actual number of CPUID_CONFIG depends on above
-	 * 'num_cpuid_config'.  The size of 'struct tdsysinfo_struct'
-	 * is 1024B defined by TDX architecture.  Use a union with
-	 * specific padding to make 'sizeof(struct tdsysinfo_struct)'
-	 * equal to 1024.
-	 */
-	union {
-		struct cpuid_config	cpuid_configs[0];
-		u8			reserved5[892];
-	};
-} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
-
 struct tdmr_reserved_area {
 	u64 offset;
 	u64 size;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (9 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 010/102] x86/virt/tdx: Add a helper function to return system wide info about TDX module isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-28  4:31   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs isaku.yamahata
                   ` (92 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Isaku Yamahata <isaku.yamahata@intel.com>

To use TDX functionality, the TDX module needs to be loaded and initialized.
A TDX host patch series[1] implements the detection of the TDX module and
its initialization, tdx_init().

This patch calls tdx_init(), via tdx_module_setup(), when loading
kvm_intel.ko.

Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
while hardware is enabled, i.e. after hardware_enable_all() and before
hardware_disable_all(), because TDX requires all present CPUs to have VMX
enabled (VMXON) when the module is initialized.

[1] https://lore.kernel.org/lkml/cover.1649219184.git.kai.huang@intel.com/
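
The intended sequence at module load time, per the description above (the
common-code callsite that invokes the hook is not part of this patch):

  hardware_enable_all();                        /* VMXON on all present CPUs */
  kvm_arch_post_hardware_enable_setup(opaque);  /* -> tdx_module_setup() */
  hardware_disable_all();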

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/main.c         | 11 ++++++
 arch/x86/kvm/vmx/tdx.c          | 60 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h          |  4 +++
 arch/x86/kvm/x86.c              |  8 +++++
 5 files changed, 84 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 62dec97f6607..aa11525500d3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1639,6 +1639,7 @@ struct kvm_x86_init_ops {
 	int (*cpu_has_kvm_support)(void);
 	int (*disabled_by_bios)(void);
 	int (*hardware_setup)(void);
+	int (*post_hardware_enable_setup)(void);
 	unsigned int (*handle_intel_pt_intr)(void);
 
 	struct kvm_x86_ops *runtime_ops;
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 349534412216..ac788af17d92 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -23,6 +23,16 @@ static __init int vt_hardware_setup(void)
 	return 0;
 }
 
+static int __init vt_post_hardware_enable_setup(void)
+{
+	enable_tdx = enable_tdx && !tdx_module_setup();
+	/*
+	 * Even if it failed to initialize TDX module, conventional VMX is
+	 * available.  Keep VMX usable.
+	 */
+	return 0;
+}
+
 struct kvm_x86_ops vt_x86_ops __initdata = {
 	.name = "kvm_intel",
 
@@ -165,6 +175,7 @@ struct kvm_x86_init_ops vt_init_ops __initdata = {
 	.cpu_has_kvm_support = vmx_cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
 	.hardware_setup = vt_hardware_setup,
+	.post_hardware_enable_setup = vt_post_hardware_enable_setup,
 	.handle_intel_pt_intr = NULL,
 
 	.runtime_ops = &vt_x86_ops,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2617389ef466..9cb36716b0f3 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -13,6 +13,66 @@
 static u64 hkid_mask __ro_after_init;
 static u8 hkid_start_pos __ro_after_init;
 
+#define TDX_MAX_NR_CPUID_CONFIGS					\
+	((sizeof(struct tdsysinfo_struct) -				\
+		offsetof(struct tdsysinfo_struct, cpuid_configs))	\
+		/ sizeof(struct tdx_cpuid_config))
+
+struct tdx_capabilities {
+	u8 tdcs_nr_pages;
+	u8 tdvpx_nr_pages;
+
+	u64 attrs_fixed0;
+	u64 attrs_fixed1;
+	u64 xfam_fixed0;
+	u64 xfam_fixed1;
+
+	u32 nr_cpuid_configs;
+	struct tdx_cpuid_config cpuid_configs[TDX_MAX_NR_CPUID_CONFIGS];
+};
+
+/* Capabilities of KVM + the TDX module. */
+static struct tdx_capabilities tdx_caps;
+
+int __init tdx_module_setup(void)
+{
+	const struct tdsysinfo_struct *tdsysinfo;
+	int ret = 0;
+
+	BUILD_BUG_ON(sizeof(*tdsysinfo) != 1024);
+	BUILD_BUG_ON(TDX_MAX_NR_CPUID_CONFIGS != 37);
+
+	ret = tdx_init();
+	if (ret) {
+		pr_info("Failed to initialize TDX module.\n");
+		return ret;
+	}
+
+	tdsysinfo = tdx_get_sysinfo();
+	if (tdsysinfo->num_cpuid_config > TDX_MAX_NR_CPUID_CONFIGS)
+		return -EIO;
+
+	tdx_caps = (struct tdx_capabilities) {
+		.tdcs_nr_pages = tdsysinfo->tdcs_base_size / PAGE_SIZE,
+		/*
+		 * TDVPS = TDVPR(4K page) + TDVPX(multiple 4K pages).
+		 * -1 for TDVPR.
+		 */
+		.tdvpx_nr_pages = tdsysinfo->tdvps_base_size / PAGE_SIZE - 1,
+		.attrs_fixed0 = tdsysinfo->attributes_fixed0,
+		.attrs_fixed1 = tdsysinfo->attributes_fixed1,
+		.xfam_fixed0 =	tdsysinfo->xfam_fixed0,
+		.xfam_fixed1 = tdsysinfo->xfam_fixed1,
+		.nr_cpuid_configs = tdsysinfo->num_cpuid_config,
+	};
+	if (!memcpy(tdx_caps.cpuid_configs, tdsysinfo->cpuid_configs,
+			tdsysinfo->num_cpuid_config *
+			sizeof(struct tdx_cpuid_config)))
+		return -EIO;
+
+	return 0;
+}
+
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 {
 	u32 max_pa;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 060bf48ec3d6..54d7a26ed9ee 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -3,6 +3,8 @@
 #define __KVM_X86_TDX_H
 
 #ifdef CONFIG_INTEL_TDX_HOST
+int tdx_module_setup(void);
+
 struct kvm_tdx {
 	struct kvm kvm;
 	/* TDX specific members follow. */
@@ -37,6 +39,8 @@ static inline struct vcpu_tdx *to_tdx(struct kvm_vcpu *vcpu)
 	return container_of(vcpu, struct vcpu_tdx, vcpu);
 }
 #else
+static inline int tdx_module_setup(void) { return -ENODEV; };
+
 struct kvm_tdx {
 	struct kvm kvm;
 };
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 30af2bd0b4d5..fb7a33fbc136 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11792,6 +11792,14 @@ int kvm_arch_hardware_setup(void *opaque)
 	return 0;
 }
 
+int kvm_arch_post_hardware_enable_setup(void *opaque)
+{
+	struct kvm_x86_init_ops *ops = opaque;
+	if (ops->post_hardware_enable_setup)
+		return ops->post_hardware_enable_setup();
+	return 0;
+}
+
 void kvm_arch_hardware_unsetup(void)
 {
 	kvm_unregister_perf_callbacks();
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (10 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-28  2:52   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 013/102] KVM: TDX: Make TDX VM type supported isaku.yamahata
                   ` (91 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Xiaoyao Li

From: Sean Christopherson <sean.j.christopherson@intel.com>

Unlike default VMs, confidential VMs (Intel TDX and AMD SEV-ES) don't allow
some operations (e.g., memory read/write, register state access, etc).

Introduce vm_type to track the type of the VM in x86 KVM.  Other arch KVMs
already use vm_type: KVM_CREATE_VM accepts a vm_type and the arch vm_init
callback takes it, so follow them.  Further, a different policy can be made
based on vm_type.  Define KVM_X86_DEFAULT_VM for the default VM and
KVM_X86_TDX_VM for Intel TDX VMs.  The wrapper function will be defined as
"bool is_td(kvm) { return kvm->arch.vm_type == KVM_X86_TDX_VM; }"

Add a capability, KVM_CAP_VM_TYPES, to allow the device model, e.g. qemu, to
query which VM types are supported by KVM.  This approach (introduce a new
capability and add vm_type) is chosen to align with other arch KVMs that
already have VM types.  Other arch KVMs use different names to query the
supported vm types and there is no common name for it, so a new name was
chosen.
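
A userspace usage sketch of the new capability (assumes a uapi header with
these definitions; error handling omitted):

  int kvm_fd = open("/dev/kvm", O_RDWR);
  int types = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_VM_TYPES);
  unsigned long vm_type = (types & (1 << KVM_X86_TDX_VM)) ?
                           KVM_X86_TDX_VM : KVM_X86_DEFAULT_VM;
  int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, vm_type);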

Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/api.rst        | 21 +++++++++++++++++++++
 arch/x86/include/asm/kvm-x86-ops.h    |  1 +
 arch/x86/include/asm/kvm_host.h       |  2 ++
 arch/x86/include/uapi/asm/kvm.h       |  3 +++
 arch/x86/kvm/svm/svm.c                |  6 ++++++
 arch/x86/kvm/vmx/main.c               |  1 +
 arch/x86/kvm/vmx/tdx.h                |  6 +-----
 arch/x86/kvm/vmx/vmx.c                |  5 +++++
 arch/x86/kvm/vmx/x86_ops.h            |  1 +
 arch/x86/kvm/x86.c                    |  9 ++++++++-
 include/uapi/linux/kvm.h              |  1 +
 tools/arch/x86/include/uapi/asm/kvm.h |  3 +++
 tools/include/uapi/linux/kvm.h        |  1 +
 13 files changed, 54 insertions(+), 6 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 9cbbfdb663b6..b9ab598883b2 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -147,10 +147,31 @@ described as 'basic' will be available.
 The new VM has no virtual cpus and no memory.
 You probably want to use 0 as machine type.
 
+X86:
+^^^^
+
+The supported vm types can be queried via KVM_CAP_VM_TYPES, which returns a
+bitmap of the supported vm types.  Bit @n being set means that a vm type
+with value @n is supported.
+
+S390:
+^^^^^
+
 In order to create user controlled virtual machines on S390, check
 KVM_CAP_S390_UCONTROL and use the flag KVM_VM_S390_UCONTROL as
 privileged user (CAP_SYS_ADMIN).
 
+MIPS:
+^^^^^
+
+To use hardware assisted virtualization on MIPS (VZ ASE) rather than
+the default trap & emulate implementation (which changes the virtual
+memory layout to fit in user mode), check KVM_CAP_MIPS_VZ and use the
+flag KVM_VM_MIPS_VZ.
+
+ARM64:
+^^^^^^
+
 On arm64, the physical address size for a VM (IPA Size limit) is limited
 to 40bits by default. The limit can be configured if the host supports the
 extension KVM_CAP_ARM_VM_IPA_SIZE. When supported, use
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 75bc44aa8d51..a97cdb203a16 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -19,6 +19,7 @@ KVM_X86_OP(hardware_disable)
 KVM_X86_OP(hardware_unsetup)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
+KVM_X86_OP(is_vm_type_supported)
 KVM_X86_OP(vm_init)
 KVM_X86_OP_OPTIONAL(vm_destroy)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_precreate)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index aa11525500d3..089e0a4de926 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1141,6 +1141,7 @@ enum kvm_apicv_inhibit {
 };
 
 struct kvm_arch {
+	unsigned long vm_type;
 	unsigned long n_used_mmu_pages;
 	unsigned long n_requested_mmu_pages;
 	unsigned long n_max_mmu_pages;
@@ -1434,6 +1435,7 @@ struct kvm_x86_ops {
 	bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
 	void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
 
+	bool (*is_vm_type_supported)(unsigned long vm_type);
 	unsigned int vm_size;
 	int (*vm_init)(struct kvm *kvm);
 	void (*vm_destroy)(struct kvm *kvm);
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 50a4e787d5e6..9792ec1cc317 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -531,4 +531,7 @@ struct kvm_pmu_event_filter {
 #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
 #define   KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
 
+#define KVM_X86_DEFAULT_VM	0
+#define KVM_X86_TDX_VM		1
+
 #endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 247c0ad458a0..815a07c594f1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4685,6 +4685,11 @@ static void svm_vm_destroy(struct kvm *kvm)
 	sev_vm_destroy(kvm);
 }
 
+static bool svm_is_vm_type_supported(unsigned long type)
+{
+	return type == KVM_X86_DEFAULT_VM;
+}
+
 static int svm_vm_init(struct kvm *kvm)
 {
 	if (!pause_filter_count || !pause_filter_thresh)
@@ -4712,6 +4717,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.vcpu_free = svm_vcpu_free,
 	.vcpu_reset = svm_vcpu_reset,
 
+	.is_vm_type_supported = svm_is_vm_type_supported,
 	.vm_size = sizeof(struct kvm_svm),
 	.vm_init = svm_vm_init,
 	.vm_destroy = svm_vm_destroy,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index ac788af17d92..7be4941e4c4d 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -43,6 +43,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.hardware_disable = vmx_hardware_disable,
 	.has_emulated_msr = vmx_has_emulated_msr,
 
+	.is_vm_type_supported = vmx_is_vm_type_supported,
 	.vm_size = sizeof(struct kvm_vmx),
 	.vm_init = vmx_vm_init,
 	.vm_destroy = vmx_vm_destroy,
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 54d7a26ed9ee..2f43db5bbefb 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -17,11 +17,7 @@ struct vcpu_tdx {
 
 static inline bool is_td(struct kvm *kvm)
 {
-	/*
-	 * TDX VM type isn't defined yet.
-	 * return kvm->arch.vm_type == KVM_X86_TDX_VM;
-	 */
-	return false;
+	return kvm->arch.vm_type == KVM_X86_TDX_VM;
 }
 
 static inline bool is_td_vcpu(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b30d73d28e75..5ba62f8b42ce 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7281,6 +7281,11 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 	return err;
 }
 
+bool vmx_is_vm_type_supported(unsigned long type)
+{
+	return type == KVM_X86_DEFAULT_VM;
+}
+
 #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 2abead2f60f7..a5e85eb4e183 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -25,6 +25,7 @@ void vmx_hardware_unsetup(void);
 int vmx_check_processor_compatibility(void);
 int vmx_hardware_enable(void);
 void vmx_hardware_disable(void);
+bool vmx_is_vm_type_supported(unsigned long type);
 int vmx_vm_init(struct kvm *kvm);
 void vmx_vm_destroy(struct kvm *kvm);
 int vmx_vcpu_precreate(struct kvm *kvm);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fb7a33fbc136..96dc8f52a137 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4408,6 +4408,11 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_X86_NOTIFY_VMEXIT:
 		r = kvm_caps.has_notify_vmexit;
 		break;
+	case KVM_CAP_VM_TYPES:
+		r = BIT(KVM_X86_DEFAULT_VM);
+		if (static_call(kvm_x86_is_vm_type_supported)(KVM_X86_TDX_VM))
+			r |= BIT(KVM_X86_TDX_VM);
+		break;
 	default:
 		break;
 	}
@@ -11858,9 +11863,11 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	int ret;
 	unsigned long flags;
 
-	if (type)
+	if (!static_call(kvm_x86_is_vm_type_supported)(type))
 		return -EINVAL;
 
+	kvm->arch.vm_type = type;
+
 	ret = kvm_page_track_init(kvm);
 	if (ret)
 		goto out;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7569b4ec199c..6d6785d2685f 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1166,6 +1166,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_PROTECTED_DUMP 217
 #define KVM_CAP_X86_TRIPLE_FAULT_EVENT 218
 #define KVM_CAP_X86_NOTIFY_VMEXIT 219
+#define KVM_CAP_VM_TYPES 220
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index bf6e96011dfe..71a5851475e7 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -525,4 +525,7 @@ struct kvm_pmu_event_filter {
 #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
 #define   KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
 
+#define KVM_X86_DEFAULT_VM	0
+#define KVM_X86_TDX_VM		1
+
 #endif /* _ASM_X86_KVM_H */
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index 6a184d260c7f..1e89b967e050 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -1152,6 +1152,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_DISABLE_QUIRKS2 213
 /* #define KVM_CAP_VM_TSC_CONTROL 214 */
 #define KVM_CAP_SYSTEM_EVENT_DATA 215
+#define KVM_CAP_VM_TYPES 220
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 013/102] KVM: TDX: Make TDX VM type supported
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (11 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-07  2:55   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 014/102] [MARKER] The start of TDX KVM patch series: TDX architectural definitions isaku.yamahata
                   ` (90 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

NOTE: This patch is placed at this point in the series so that developers
can test the code in the middle of the series, even though the series does
not provide functional features until all of its patches are applied.  When
merging the series, this patch can be moved to the end.

As the first step of TDX VM support, report to the device model, e.g. qemu,
that the TDX VM type is supported.  The callback that creates a guest TD is
the vm_init callback for KVM_CREATE_VM.  Add a placeholder function and call
a function to initialize the TDX module on demand, because in that callback
VMX is already enabled by the hardware_enable callback (vmx_hardware_enable).

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 18 ++++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     |  6 ++++++
 arch/x86/kvm/vmx/vmx.c     |  5 -----
 arch/x86/kvm/vmx/x86_ops.h |  3 ++-
 4 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7be4941e4c4d..47bfa94e538e 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -10,6 +10,12 @@
 static bool __read_mostly enable_tdx = IS_ENABLED(CONFIG_INTEL_TDX_HOST);
 module_param_named(tdx, enable_tdx, bool, 0444);
 
+static bool vt_is_vm_type_supported(unsigned long type)
+{
+	return type == KVM_X86_DEFAULT_VM ||
+		(enable_tdx && tdx_is_vm_type_supported(type));
+}
+
 static __init int vt_hardware_setup(void)
 {
 	int ret;
@@ -33,6 +39,14 @@ static int __init vt_post_hardware_enable_setup(void)
 	return 0;
 }
 
+static int vt_vm_init(struct kvm *kvm)
+{
+	if (is_td(kvm))
+		return -EOPNOTSUPP;	/* Not ready to create guest TD yet. */
+
+	return vmx_vm_init(kvm);
+}
+
 struct kvm_x86_ops vt_x86_ops __initdata = {
 	.name = "kvm_intel",
 
@@ -43,9 +57,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.hardware_disable = vmx_hardware_disable,
 	.has_emulated_msr = vmx_has_emulated_msr,
 
-	.is_vm_type_supported = vmx_is_vm_type_supported,
+	.is_vm_type_supported = vt_is_vm_type_supported,
 	.vm_size = sizeof(struct kvm_vmx),
-	.vm_init = vmx_vm_init,
+	.vm_init = vt_vm_init,
 	.vm_destroy = vmx_vm_destroy,
 
 	.vcpu_precreate = vmx_vcpu_precreate,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 9cb36716b0f3..3675f7de2735 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -73,6 +73,12 @@ int __init tdx_module_setup(void)
 	return 0;
 }
 
+bool tdx_is_vm_type_supported(unsigned long type)
+{
+	/* enable_tdx check is done by the caller. */
+	return type == KVM_X86_TDX_VM;
+}
+
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 {
 	u32 max_pa;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5ba62f8b42ce..b30d73d28e75 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7281,11 +7281,6 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 	return err;
 }
 
-bool vmx_is_vm_type_supported(unsigned long type)
-{
-	return type == KVM_X86_DEFAULT_VM;
-}
-
 #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index a5e85eb4e183..dbfd0e43fd89 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -25,7 +25,6 @@ void vmx_hardware_unsetup(void);
 int vmx_check_processor_compatibility(void);
 int vmx_hardware_enable(void);
 void vmx_hardware_disable(void);
-bool vmx_is_vm_type_supported(unsigned long type);
 int vmx_vm_init(struct kvm *kvm);
 void vmx_vm_destroy(struct kvm *kvm);
 int vmx_vcpu_precreate(struct kvm *kvm);
@@ -131,8 +130,10 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
 
 #ifdef CONFIG_INTEL_TDX_HOST
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
+bool tdx_is_vm_type_supported(unsigned long type);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
+static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 014/102] [MARKER] The start of TDX KVM patch series: TDX architectural definitions
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (12 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 013/102] KVM: TDX: Make TDX VM type supported isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 015/102] KVM: TDX: Define " isaku.yamahata
                   ` (89 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of the patch series of TDX
architectural definitions.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 .../virt/kvm/intel-tdx-layer-status.rst       | 29 +++++++++++++++++++
 1 file changed, 29 insertions(+)
 create mode 100644 Documentation/virt/kvm/intel-tdx-layer-status.rst

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
new file mode 100644
index 000000000000..b7a14bc73853
--- /dev/null
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -0,0 +1,29 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===================================
+Intel Trust Domain Extensions (TDX)
+===================================
+
+Layer status
+============
+What qemu can do
+----------------
+- TDX VM TYPE is exposed to Qemu.
+- Qemu can try to create VM of TDX VM type and then fails.
+
+Patch Layer status
+------------------
+  Patch layer                          Status
+* TDX, VMX coexistence:                 Applied
+* TDX architectural definitions:        Applying
+* TD VM creation/destruction:           Not yet
+* TD vcpu creation/destruction:         Not yet
+* TDX EPT violation:                    Not yet
+* TD finalization:                      Not yet
+* TD vcpu enter/exit:                   Not yet
+* TD vcpu interrupts/exit/hypercall:    Not yet
+
+* KVM MMU GPA shared bits:              Not yet
+* KVM TDP refactoring for TDX:          Not yet
+* KVM TDP MMU hooks:                    Not yet
+* KVM TDP MMU MapGPA:                   Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 015/102] KVM: TDX: Define TDX architectural definitions
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (13 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 014/102] [MARKER] The start of TDX KVM patch series: TDX architectural definitions isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 016/102] KVM: TDX: Add TDX "architectural" error codes isaku.yamahata
                   ` (88 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Isaku Yamahata <isaku.yamahata@intel.com>

Define architectural definitions for KVM to issue the TDX SEAMCALLs.

The structures and values here are architecturally defined in the ABI
Reference chapter of the TDX module specification.
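
As a worked example of the field access encoding and the TSC conversion
defined below (values computed from those definitions):

  /*
   * TDVPS_MANAGEMENT(TD_VCPU_PEND_NMI)
   *   == BUILD_TDX_FIELD(32, 11)
   *   == ((u64)32 << TDX_CLASS_SHIFT) | 11 == 0x200000000000000b
   * i.e. class 32 (guest management), field 11, architectural (bit 63 clear).
   *
   * A 2.5 GHz TSC: TDX_TSC_KHZ_TO_25MHZ(2500 * 1000) == 100.
   */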

Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx_arch.h | 157 ++++++++++++++++++++++++++++++++++++
 1 file changed, 157 insertions(+)
 create mode 100644 arch/x86/kvm/vmx/tdx_arch.h

diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
new file mode 100644
index 000000000000..94258056d742
--- /dev/null
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* architectural constants/data definitions for TDX SEAMCALLs */
+
+#ifndef __KVM_X86_TDX_ARCH_H
+#define __KVM_X86_TDX_ARCH_H
+
+#include <linux/types.h>
+
+/*
+ * TDX SEAMCALL API function leaves
+ */
+#define TDH_VP_ENTER			0
+#define TDH_MNG_ADDCX			1
+#define TDH_MEM_PAGE_ADD		2
+#define TDH_MEM_SEPT_ADD		3
+#define TDH_VP_ADDCX			4
+#define TDH_MEM_PAGE_RELOCATE		5
+#define TDH_MEM_PAGE_AUG		6
+#define TDH_MEM_RANGE_BLOCK		7
+#define TDH_MNG_KEY_CONFIG		8
+#define TDH_MNG_CREATE			9
+#define TDH_VP_CREATE			10
+#define TDH_MNG_RD			11
+#define TDH_MR_EXTEND			16
+#define TDH_MR_FINALIZE			17
+#define TDH_VP_FLUSH			18
+#define TDH_MNG_VPFLUSHDONE		19
+#define TDH_MNG_KEY_FREEID		20
+#define TDH_MNG_INIT			21
+#define TDH_VP_INIT			22
+#define TDH_VP_RD			26
+#define TDH_MNG_KEY_RECLAIMID		27
+#define TDH_PHYMEM_PAGE_RECLAIM		28
+#define TDH_MEM_PAGE_REMOVE		29
+#define TDH_MEM_SEPT_REMOVE		30
+#define TDH_MEM_TRACK			38
+#define TDH_MEM_RANGE_UNBLOCK		39
+#define TDH_PHYMEM_CACHE_WB		40
+#define TDH_PHYMEM_PAGE_WBINVD		41
+#define TDH_VP_WR			43
+#define TDH_SYS_LP_SHUTDOWN		44
+
+#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO		0x10000
+#define TDG_VP_VMCALL_MAP_GPA				0x10001
+#define TDG_VP_VMCALL_GET_QUOTE				0x10002
+#define TDG_VP_VMCALL_REPORT_FATAL_ERROR		0x10003
+#define TDG_VP_VMCALL_SETUP_EVENT_NOTIFY_INTERRUPT	0x10004
+
+/* TDX control structure (TDR/TDCS/TDVPS) field access codes */
+#define TDX_NON_ARCH			BIT_ULL(63)
+#define TDX_CLASS_SHIFT			56
+#define TDX_FIELD_MASK			GENMASK_ULL(31, 0)
+
+#define __BUILD_TDX_FIELD(non_arch, class, field)	\
+	(((non_arch) ? TDX_NON_ARCH : 0) |		\
+	 ((u64)(class) << TDX_CLASS_SHIFT) |		\
+	 ((u64)(field) & TDX_FIELD_MASK))
+
+#define BUILD_TDX_FIELD(class, field)			\
+	__BUILD_TDX_FIELD(false, (class), (field))
+
+#define BUILD_TDX_FIELD_NON_ARCH(class, field)		\
+	__BUILD_TDX_FIELD(true, (class), (field))
+
+
+/* @field is the VMCS field encoding */
+#define TDVPS_VMCS(field)		BUILD_TDX_FIELD(0, (field))
+
+enum tdx_guest_other_state {
+	TD_VCPU_STATE_DETAILS_NON_ARCH = 0x100,
+};
+
+union tdx_vcpu_state_details {
+	struct {
+		u64 vmxip	: 1;
+		u64 reserved	: 63;
+	};
+	u64 full;
+};
+
+/* @field is any of enum tdx_guest_other_state */
+#define TDVPS_STATE(field)		BUILD_TDX_FIELD(17, (field))
+#define TDVPS_STATE_NON_ARCH(field)	BUILD_TDX_FIELD_NON_ARCH(17, (field))
+
+/* Management class fields */
+enum tdx_guest_management {
+	TD_VCPU_PEND_NMI = 11,
+};
+
+/* @field is any of enum tdx_guest_management */
+#define TDVPS_MANAGEMENT(field)		BUILD_TDX_FIELD(32, (field))
+
+enum tdx_tdcs_execution_control {
+	TD_TDCS_EXEC_TSC_OFFSET = 10,
+};
+
+/* @field is any of enum tdx_tdcs_execution_control */
+#define TDCS_EXEC(field)		BUILD_TDX_FIELD(17, (field))
+
+#define TDX_EXTENDMR_CHUNKSIZE		256
+
+struct tdx_cpuid_value {
+	u32 eax;
+	u32 ebx;
+	u32 ecx;
+	u32 edx;
+} __packed;
+
+#define TDX_TD_ATTRIBUTE_DEBUG		BIT_ULL(0)
+#define TDX_TD_ATTRIBUTE_PKS		BIT_ULL(30)
+#define TDX_TD_ATTRIBUTE_KL		BIT_ULL(31)
+#define TDX_TD_ATTRIBUTE_PERFMON	BIT_ULL(63)
+
+/*
+ * TD_PARAMS is provided as an input to TDH_MNG_INIT, the size of which is 1024B.
+ */
+struct td_params {
+	u64 attributes;
+	u64 xfam;
+	u32 max_vcpus;
+	u32 reserved0;
+
+	u64 eptp_controls;
+	u64 exec_controls;
+	u16 tsc_frequency;
+	u8  reserved1[38];
+
+	u64 mrconfigid[6];
+	u64 mrowner[6];
+	u64 mrownerconfig[6];
+	u64 reserved2[4];
+
+	union {
+		struct tdx_cpuid_value cpuid_values[0];
+		u8 reserved3[768];
+	};
+} __packed __aligned(1024);
+
+/*
+ * Guest uses MAX_PA for GPAW when set.
+ * 0: GPA.SHARED bit is GPA[47]
+ * 1: GPA.SHARED bit is GPA[51]
+ */
+#define TDX_EXEC_CONTROL_MAX_GPAW      BIT_ULL(0)
+
+/*
+ * TDX requires the frequency to be defined in units of 25MHz, which is the
+ * frequency of the core crystal clock on TDX-capable platforms, i.e. the TDX
+ * module can only program frequencies that are multiples of 25MHz.  The
+ * frequency must be between 100mhz and 10ghz (inclusive).
+ */
+#define TDX_TSC_KHZ_TO_25MHZ(tsc_in_khz)	((tsc_in_khz) / (25 * 1000))
+#define TDX_TSC_25MHZ_TO_KHZ(tsc_in_25mhz)	((tsc_in_25mhz) * (25 * 1000))
+#define TDX_MIN_TSC_FREQUENCY_KHZ		(100 * 1000)
+#define TDX_MAX_TSC_FREQUENCY_KHZ		(10 * 1000 * 1000)
+
+#endif /* __KVM_X86_TDX_ARCH_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 016/102] KVM: TDX: Add TDX "architectural" error codes
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (14 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 015/102] KVM: TDX: Define " isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 017/102] KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module isaku.yamahata
                   ` (87 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add error codes for the TDX SEAMCALLs, both on the TDX VMM side for TDH
SEAMCALLs and on the TDX guest side for TDG.VP.VMCALL.  KVM issues the TDX
SEAMCALLs and checks their error codes.  KVM also handles hypercalls from
the TDX guest and may return an error, so error codes for the TDX guest are
needed as well.

TDX SEAMCALLs use bits 31:0 to return additional information, so these error
codes only exactly match RAX[63:32].  The error codes for TDG.VP.VMCALL are
defined by the TDX Guest-Host-Communication Interface spec.
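
A sketch of how the mask is meant to be used when checking a SEAMCALL return
value (the helper name here is made up for illustration):

  static inline bool tdx_seamcall_status_is(u64 err, u64 code)
  {
          /* Ignore the detail bits 31:0 that the TDX module may set. */
          return (err & TDX_SEAMCALL_STATUS_MASK) == code;
  }

  /* e.g. tdx_seamcall_status_is(err, TDX_KEY_CONFIGURED) */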

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx_errno.h | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)
 create mode 100644 arch/x86/kvm/vmx/tdx_errno.h

diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
new file mode 100644
index 000000000000..5c878488795d
--- /dev/null
+++ b/arch/x86/kvm/vmx/tdx_errno.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* architectural status code for SEAMCALL */
+
+#ifndef __KVM_X86_TDX_ERRNO_H
+#define __KVM_X86_TDX_ERRNO_H
+
+#define TDX_SEAMCALL_STATUS_MASK		0xFFFFFFFF00000000ULL
+
+/*
+ * TDX SEAMCALL Status Codes (returned in RAX)
+ */
+#define TDX_SUCCESS				0x0000000000000000ULL
+#define TDX_NON_RECOVERABLE_VCPU		0x4000000100000000ULL
+#define TDX_INTERRUPTED_RESUMABLE		0x8000000300000000ULL
+#define TDX_LIFECYCLE_STATE_INCORRECT		0xC000060700000000ULL
+#define TDX_VCPU_NOT_ASSOCIATED			0x8000070200000000ULL
+#define TDX_KEY_GENERATION_FAILED		0x8000080000000000ULL
+#define TDX_KEY_STATE_INCORRECT			0xC000081100000000ULL
+#define TDX_KEY_CONFIGURED			0x0000081500000000ULL
+#define TDX_EPT_WALK_FAILED			0xC0000B0000000000ULL
+
+/*
+ * TDG.VP.VMCALL Status Codes (returned in R10)
+ */
+#define TDG_VP_VMCALL_SUCCESS			0x0000000000000000ULL
+#define TDG_VP_VMCALL_INVALID_OPERAND		0x8000000000000000ULL
+#define TDG_VP_VMCALL_TDREPORT_FAILED		0x8000000000000001ULL
+
+#endif /* __KVM_X86_TDX_ERRNO_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 017/102] KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (15 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 016/102] KVM: TDX: Add TDX "architectural" error codes isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 018/102] KVM: TDX: Add helper functions to print TDX SEAMCALL error isaku.yamahata
                   ` (86 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Isaku Yamahata <isaku.yamahata@intel.com>

A VMM interacts with the TDX module using a new instruction (SEAMCALL).  A
TDX VMM uses SEAMCALLs where a VMX VMM would have interacted directly with
VMX instructions.  For instance, a TDX VMM does not have full access to the
VM control structure corresponding to the VMX VMCS.  Instead, the VMM
induces the TDX module to act on its behalf via SEAMCALLs.

Export __seamcall and define C wrapper functions for the SEAMCALLs for
readability.  Some SEAMCALL APIs donate pages to the TDX module or to a
guest TD.  Such pages are encrypted with the TDX private host key ID set in
the high bits of the physical address.  If any modified cache lines may
exist for these pages, flush them to memory with clflush_cache_range().
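
For illustration only (the real call sites appear in later patches), a
typical consumer of these wrappers looks roughly like the sketch below; the
function name is made up, and it assumes the TDX module returns the field
value in R8:

  static int read_tsc_offset_field(hpa_t tdr, u64 *val)
  {
  	struct tdx_module_output out;
  	u64 err;

  	err = tdh_mng_rd(tdr, TDCS_EXEC(TD_TDCS_EXEC_TSC_OFFSET), &out);
  	if (err)
  		return -EIO;

  	*val = out.r8;	/* field value is returned in R8 */
  	return 0;
  }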

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/tdx.h       |   2 +
 arch/x86/kvm/vmx/tdx_ops.h       | 185 +++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/seamcall.S |   2 +
 3 files changed, 189 insertions(+)
 create mode 100644 arch/x86/kvm/vmx/tdx_ops.h

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index dfea0dd71bc1..c887618e3cec 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -144,6 +144,8 @@ struct tdsysinfo_struct {
 bool platform_tdx_enabled(void);
 int tdx_init(void);
 const struct tdsysinfo_struct *tdx_get_sysinfo(void);
+u64 __seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
+	       struct tdx_module_output *out);
 #else	/* !CONFIG_INTEL_TDX_HOST */
 static inline bool platform_tdx_enabled(void) { return false; }
 static inline int tdx_init(void)  { return -ENODEV; }
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
new file mode 100644
index 000000000000..85adbf49c277
--- /dev/null
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* constants/data definitions for TDX SEAMCALLs */
+
+#ifndef __KVM_X86_TDX_OPS_H
+#define __KVM_X86_TDX_OPS_H
+
+#include <linux/compiler.h>
+
+#include <asm/cacheflush.h>
+#include <asm/asm.h>
+#include <asm/kvm_host.h>
+
+#include "tdx_errno.h"
+#include "tdx_arch.h"
+
+#ifdef CONFIG_INTEL_TDX_HOST
+
+static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
+{
+	clflush_cache_range(__va(addr), PAGE_SIZE);
+	return __seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
+				   struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	return __seamcall(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
+}
+
+static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
+				   struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(page), PAGE_SIZE);
+	return __seamcall(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0, out);
+}
+
+static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_t gpa, int level,
+				      struct tdx_module_output *out)
+{
+	return __seamcall(TDH_MEM_SEPT_REMOVE, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
+{
+	clflush_cache_range(__va(addr), PAGE_SIZE);
+	return __seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+					struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	return __seamcall(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
+}
+
+static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+				   struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	return __seamcall(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
+}
+
+static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level,
+				      struct tdx_module_output *out)
+{
+	return __seamcall(TDH_MEM_RANGE_BLOCK, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_mng_key_config(hpa_t tdr)
+{
+	return __seamcall(TDH_MNG_KEY_CONFIG, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
+{
+	clflush_cache_range(__va(tdr), PAGE_SIZE);
+	return __seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
+{
+	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
+	return __seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_output *out)
+{
+	return __seamcall(TDH_MNG_RD, tdr, field, 0, 0, out);
+}
+
+static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
+				struct tdx_module_output *out)
+{
+	return __seamcall(TDH_MR_EXTEND, gpa, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_mr_finalize(hpa_t tdr)
+{
+	return __seamcall(TDH_MR_FINALIZE, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_flush(hpa_t tdvpr)
+{
+	return __seamcall(TDH_VP_FLUSH, tdvpr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_vpflushdone(hpa_t tdr)
+{
+	return __seamcall(TDH_MNG_VPFLUSHDONE, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_key_freeid(hpa_t tdr)
+{
+	return __seamcall(TDH_MNG_KEY_FREEID, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_init(hpa_t tdr, hpa_t td_params,
+			       struct tdx_module_output *out)
+{
+	return __seamcall(TDH_MNG_INIT, tdr, td_params, 0, 0, out);
+}
+
+static inline u64 tdh_vp_init(hpa_t tdvpr, u64 rcx)
+{
+	return __seamcall(TDH_VP_INIT, tdvpr, rcx, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_rd(hpa_t tdvpr, u64 field,
+			    struct tdx_module_output *out)
+{
+	return __seamcall(TDH_VP_RD, tdvpr, field, 0, 0, out);
+}
+
+static inline u64 tdh_mng_key_reclaimid(hpa_t tdr)
+{
+	return __seamcall(TDH_MNG_KEY_RECLAIMID, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_phymem_page_reclaim(hpa_t page,
+					  struct tdx_module_output *out)
+{
+	return __seamcall(TDH_PHYMEM_PAGE_RECLAIM, page, 0, 0, 0, out);
+}
+
+static inline u64 tdh_mem_page_remove(hpa_t tdr, gpa_t gpa, int level,
+				      struct tdx_module_output *out)
+{
+	return __seamcall(TDH_MEM_PAGE_REMOVE, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_sys_lp_shutdown(void)
+{
+	return __seamcall(TDH_SYS_LP_SHUTDOWN, 0, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_track(hpa_t tdr)
+{
+	return __seamcall(TDH_MEM_TRACK, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_range_unblock(hpa_t tdr, gpa_t gpa, int level,
+					struct tdx_module_output *out)
+{
+	return __seamcall(TDH_MEM_RANGE_UNBLOCK, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_phymem_cache_wb(bool resume)
+{
+	return __seamcall(TDH_PHYMEM_CACHE_WB, resume ? 1 : 0, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_phymem_page_wbinvd(hpa_t page)
+{
+	return __seamcall(TDH_PHYMEM_PAGE_WBINVD, page, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_wr(hpa_t tdvpr, u64 field, u64 val, u64 mask,
+			    struct tdx_module_output *out)
+{
+	return __seamcall(TDH_VP_WR, tdvpr, field, val, mask, out);
+}
+#endif /* CONFIG_INTEL_TDX_HOST */
+
+#endif /* __KVM_X86_TDX_OPS_H */
diff --git a/arch/x86/virt/vmx/tdx/seamcall.S b/arch/x86/virt/vmx/tdx/seamcall.S
index f322427e48c3..aced0ed9b76a 100644
--- a/arch/x86/virt/vmx/tdx/seamcall.S
+++ b/arch/x86/virt/vmx/tdx/seamcall.S
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <linux/linkage.h>
+#include <asm/export.h>
 #include <asm/frame.h>
 
 #include "tdxcall.S"
@@ -50,3 +51,4 @@ SYM_FUNC_START(__seamcall)
 	FRAME_END
 	RET
 SYM_FUNC_END(__seamcall)
+EXPORT_SYMBOL_GPL(__seamcall)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 018/102] KVM: TDX: Add helper functions to print TDX SEAMCALL error
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (16 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 017/102] KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 019/102] [MARKER] The start of TDX KVM patch series: TD VM creation/destruction isaku.yamahata
                   ` (85 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Add helper functions to print out errors from the TDX module in a uniform
manner.
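
A sketch of the intended usage (the SEAMCALL wrapper and the struct kvm_tdx
field used here come from other patches in this series; the function name is
made up):

  static int finalize_measurement(struct kvm_tdx *kvm_tdx)
  {
  	u64 err;

  	err = tdh_mr_finalize(kvm_tdx->tdr.pa);
  	if (WARN_ON_ONCE(err)) {
  		pr_tdx_error(TDH_MR_FINALIZE, err, NULL);
  		return -EIO;
  	}
  	return 0;
  }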

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/Makefile        |  2 +-
 arch/x86/kvm/vmx/tdx_error.c | 22 ++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx_ops.h   |  3 +++
 3 files changed, 26 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kvm/vmx/tdx_error.c

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index e2c05195cb95..f1ad445df505 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -24,7 +24,7 @@ kvm-$(CONFIG_KVM_XEN)	+= xen.o
 kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o vmx/main.o
 kvm-intel-$(CONFIG_X86_SGX_KVM)	+= vmx/sgx.o
-kvm-intel-$(CONFIG_INTEL_TDX_HOST)	+= vmx/tdx.o
+kvm-intel-$(CONFIG_INTEL_TDX_HOST)	+= vmx/tdx.o vmx/tdx_error.o
 
 kvm-amd-y		+= svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
 
diff --git a/arch/x86/kvm/vmx/tdx_error.c b/arch/x86/kvm/vmx/tdx_error.c
new file mode 100644
index 000000000000..61ed855d1188
--- /dev/null
+++ b/arch/x86/kvm/vmx/tdx_error.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0
+/* functions to record TDX SEAMCALL error */
+
+#include <linux/kernel.h>
+#include <linux/bug.h>
+
+#include "tdx_ops.h"
+
+void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_output *out)
+{
+	if (!out) {
+		pr_err_ratelimited("SEAMCALL[%lld] failed: 0x%llx\n",
+				op, error_code);
+		return;
+	}
+
+	pr_err_ratelimited(
+		"SEAMCALL[%lld] failed: 0x%llx "
+		"RCX 0x%llx, RDX 0x%llx, R8 0x%llx, R9 0x%llx, R10 0x%llx, R11 0x%llx\n",
+		op, error_code,
+		out->rcx, out->rdx, out->r8, out->r9, out->r10, out->r11);
+}
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 85adbf49c277..8cc2f01c509b 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -9,12 +9,15 @@
 #include <asm/cacheflush.h>
 #include <asm/asm.h>
 #include <asm/kvm_host.h>
+#include <asm/tdx.h>
 
 #include "tdx_errno.h"
 #include "tdx_arch.h"
 
 #ifdef CONFIG_INTEL_TDX_HOST
 
+void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_output *out);
+
 static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 {
 	clflush_cache_range(__va(addr), PAGE_SIZE);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 019/102] [MARKER] The start of TDX KVM patch series: TD VM creation/destruction
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (17 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 018/102] KVM: TDX: Add helper functions to print TDX SEAMCALL error isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 020/102] KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers isaku.yamahata
                   ` (84 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit marks the start of the TD VM creation/destruction section
of the patch series.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index b7a14bc73853..5e0deaebf843 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -15,8 +15,8 @@ Patch Layer status
 ------------------
   Patch layer                          Status
 * TDX, VMX coexistence:                 Applied
-* TDX architectural definitions:        Applying
-* TD VM creation/destruction:           Not yet
+* TDX architectural definitions:        Applied
+* TD VM creation/destruction:           Applying
 * TD vcpu creation/destruction:         Not yet
 * TDX EPT violation:                    Not yet
 * TD finalization:                      Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 020/102] KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (18 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 019/102] [MARKER] The start of TDX KVM patch series: TD VM creation/destruction isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 021/102] x86/cpu: Add helper functions to allocate/free TDX private host key id isaku.yamahata
                   ` (83 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Stub in kvm_tdx, vcpu_tdx, and their various accessors.  TDX defines
SEAMCALL APIs to access TDX control structures corresponding to the VMX
VMCS.  Introduce helper accessors to hide the SEAMCALL ABI details.
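
As a sketch of how the generated accessors hide the SEAMCALL ABI (the helper
below is hypothetical; actual users appear in later patches):

  static void pend_nmi(struct kvm_vcpu *vcpu)
  {
  	struct vcpu_tdx *tdx = to_tdx(vcpu);

  	/* Expands to TDH.VP.WR on the MANAGEMENT-class field. */
  	td_management_write8(tdx, TD_VCPU_PEND_NMI, 1);
  }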

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.h | 103 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 101 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 2f43db5bbefb..f50d37f3fc9c 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -3,16 +3,29 @@
 #define __KVM_X86_TDX_H
 
 #ifdef CONFIG_INTEL_TDX_HOST
+
+#include "tdx_ops.h"
+
 int tdx_module_setup(void);
 
+struct tdx_td_page {
+	unsigned long va;
+	hpa_t pa;
+	bool added;
+};
+
 struct kvm_tdx {
 	struct kvm kvm;
-	/* TDX specific members follow. */
+
+	struct tdx_td_page tdr;
+	struct tdx_td_page *tdcs;
 };
 
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
-	/* TDX specific members follow. */
+
+	struct tdx_td_page tdvpr;
+	struct tdx_td_page *tdvpx;
 };
 
 static inline bool is_td(struct kvm *kvm)
@@ -34,6 +47,92 @@ static inline struct vcpu_tdx *to_tdx(struct kvm_vcpu *vcpu)
 {
 	return container_of(vcpu, struct vcpu_tdx, vcpu);
 }
+
+static __always_inline void tdvps_vmcs_check(u32 field, u8 bits)
+{
+	BUILD_BUG_ON_MSG(__builtin_constant_p(field) && (field) & 0x1,
+			 "Read/Write to TD VMCS *_HIGH fields not supported");
+
+	BUILD_BUG_ON(bits != 16 && bits != 32 && bits != 64);
+
+	BUILD_BUG_ON_MSG(bits != 64 && __builtin_constant_p(field) &&
+			 (((field) & 0x6000) == 0x2000 ||
+			  ((field) & 0x6000) == 0x6000),
+			 "Invalid TD VMCS access for 64-bit field");
+	BUILD_BUG_ON_MSG(bits != 32 && __builtin_constant_p(field) &&
+			 ((field) & 0x6000) == 0x4000,
+			 "Invalid TD VMCS access for 32-bit field");
+	BUILD_BUG_ON_MSG(bits != 16 && __builtin_constant_p(field) &&
+			 ((field) & 0x6000) == 0x0000,
+			 "Invalid TD VMCS access for 16-bit field");
+}
+
+static __always_inline void tdvps_state_non_arch_check(u64 field, u8 bits) {}
+static __always_inline void tdvps_management_check(u64 field, u8 bits) {}
+
+#define TDX_BUILD_TDVPS_ACCESSORS(bits, uclass, lclass)				\
+static __always_inline u##bits td_##lclass##_read##bits(struct vcpu_tdx *tdx,	\
+							u32 field)		\
+{										\
+	struct tdx_module_output out;						\
+	u64 err;								\
+										\
+	tdvps_##lclass##_check(field, bits);					\
+	err = tdh_vp_rd(tdx->tdvpr.pa, TDVPS_##uclass(field), &out);		\
+	if (unlikely(err)) {							\
+		pr_err("TDH_VP_RD["#uclass".0x%x] failed: 0x%llx\n",		\
+		       field, err);						\
+		return 0;							\
+	}									\
+	return (u##bits)out.r8;							\
+}										\
+static __always_inline void td_##lclass##_write##bits(struct vcpu_tdx *tdx,	\
+						      u32 field, u##bits val)	\
+{										\
+	struct tdx_module_output out;						\
+	u64 err;								\
+										\
+	tdvps_##lclass##_check(field, bits);					\
+	err = tdh_vp_wr(tdx->tdvpr.pa, TDVPS_##uclass(field), val,		\
+		      GENMASK_ULL(bits - 1, 0), &out);				\
+	if (unlikely(err))							\
+		pr_err("TDH_VP_WR["#uclass".0x%x] = 0x%llx failed: 0x%llx\n",	\
+		       field, (u64)val, err);					\
+}										\
+static __always_inline void td_##lclass##_setbit##bits(struct vcpu_tdx *tdx,	\
+						       u32 field, u64 bit)	\
+{										\
+	struct tdx_module_output out;						\
+	u64 err;								\
+										\
+	tdvps_##lclass##_check(field, bits);					\
+	err = tdh_vp_wr(tdx->tdvpr.pa, TDVPS_##uclass(field), bit, bit,		\
+			&out);							\
+	if (unlikely(err))							\
+		pr_err("TDH_VP_WR["#uclass".0x%x] |= 0x%llx failed: 0x%llx\n",	\
+		       field, bit, err);					\
+}										\
+static __always_inline void td_##lclass##_clearbit##bits(struct vcpu_tdx *tdx,	\
+							 u32 field, u64 bit)	\
+{										\
+	struct tdx_module_output out;						\
+	u64 err;								\
+										\
+	tdvps_##lclass##_check(field, bits);					\
+	err = tdh_vp_wr(tdx->tdvpr.pa, TDVPS_##uclass(field), 0, bit,		\
+			&out);							\
+	if (unlikely(err))							\
+		pr_err("TDH_VP_WR["#uclass".0x%x] &= ~0x%llx failed: 0x%llx\n",	\
+		       field, bit,  err);					\
+}
+
+TDX_BUILD_TDVPS_ACCESSORS(16, VMCS, vmcs);
+TDX_BUILD_TDVPS_ACCESSORS(32, VMCS, vmcs);
+TDX_BUILD_TDVPS_ACCESSORS(64, VMCS, vmcs);
+
+TDX_BUILD_TDVPS_ACCESSORS(64, STATE_NON_ARCH, state_non_arch);
+TDX_BUILD_TDVPS_ACCESSORS(8, MANAGEMENT, management);
+
 #else
 static inline int tdx_module_setup(void) { return -ENODEV; };
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 021/102] x86/cpu: Add helper functions to allocate/free TDX private host key id
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (19 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 020/102] KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 022/102] KVM: TDX: create/destroy VM structure isaku.yamahata
                   ` (82 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

A TDX private host key ID (HKID) is assigned to each guest TD.  The memory
controller encrypts guest TD memory with the assigned TDX private HKID.
Add helper functions to allocate/free TDX private HKIDs so that TDX KVM can
manage them.

Also export the global TDX private HKID that is used to encrypt the TDX
module itself, its memory, and some dynamic data (TDR).  When the VMM
releases an encrypted page in order to reuse it, the page must be flushed
with the HKID that was used for it.  The VMM therefore needs the global TDX
private HKID to flush pages that the TDX module accesses with the global HKID.
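
A minimal sketch of the intended use from KVM (the kvm_tdx structure and its
hkid field are introduced by later patches in this series):

  static int assign_hkid(struct kvm_tdx *kvm_tdx)
  {
  	int keyid = tdx_keyid_alloc();

  	if (keyid < 0)
  		return keyid;

  	kvm_tdx->hkid = keyid;
  	return 0;
  }

  /* ... and on teardown: tdx_keyid_free(kvm_tdx->hkid); */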

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/tdx.h  |  7 +++++++
 arch/x86/virt/vmx/tdx/tdx.c | 33 ++++++++++++++++++++++++++++++++-
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index c887618e3cec..6c0925e73a27 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -144,6 +144,10 @@ struct tdsysinfo_struct {
 bool platform_tdx_enabled(void);
 int tdx_init(void);
 const struct tdsysinfo_struct *tdx_get_sysinfo(void);
+u32 tdx_get_global_keyid(void);
+int tdx_keyid_alloc(void);
+void tdx_keyid_free(int keyid);
+
 u64 __seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
 	       struct tdx_module_output *out);
 #else	/* !CONFIG_INTEL_TDX_HOST */
@@ -151,6 +155,9 @@ static inline bool platform_tdx_enabled(void) { return false; }
 static inline int tdx_init(void)  { return -ENODEV; }
 struct tdsysinfo_struct;
 static inline const struct tdsysinfo_struct *tdx_get_sysinfo(void) { return NULL; }
+static inline u32 tdx_get_global_keyid(void) { return 0; };
+static inline int tdx_keyid_alloc(void) { return -EOPNOTSUPP; }
+static inline void tdx_keyid_free(int keyid) { }
 #endif	/* CONFIG_INTEL_TDX_HOST */
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 14f53494156c..322b6e0ac7dc 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -57,7 +57,13 @@ static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMEN
 static int tdx_cmr_num;
 
 /* TDX module global KeyID.  Used in TDH.SYS.CONFIG ABI. */
-static u32 tdx_global_keyid;
+static u32 __read_mostly tdx_global_keyid;
+
+u32 tdx_get_global_keyid(void)
+{
+	return tdx_global_keyid;
+}
+EXPORT_SYMBOL_GPL(tdx_get_global_keyid);
 
 /* Detect whether CPU supports SEAM */
 static int detect_seam(void)
@@ -81,6 +87,31 @@ static int detect_seam(void)
 	return 0;
 }
 
+/* TDX KeyID pool */
+static DEFINE_IDA(tdx_keyid_pool);
+
+int tdx_keyid_alloc(void)
+{
+	if (WARN_ON_ONCE(!tdx_keyid_start || !tdx_keyid_num))
+		return -EINVAL;
+
+	/* The first keyID is reserved for the global key. */
+	return ida_alloc_range(&tdx_keyid_pool, tdx_keyid_start + 1,
+			       tdx_keyid_start + tdx_keyid_num - 1,
+			       GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(tdx_keyid_alloc);
+
+void tdx_keyid_free(int keyid)
+{
+	/* keyid = 0 is reserved; a negative value means allocation failed. */
+	if (keyid <= 0)
+		return;
+
+	ida_free(&tdx_keyid_pool, keyid);
+}
+EXPORT_SYMBOL_GPL(tdx_keyid_free);
+
 static int detect_tdx_keyids(void)
 {
 	u64 keyid_part;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 022/102] KVM: TDX: create/destroy VM structure
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (20 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 021/102] x86/cpu: Add helper functions to allocate/free TDX private host key id isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-07  6:16   ` Yuan Yao
  2022-08-02 19:46   ` Sean Christopherson
  2022-06-27 21:53 ` [PATCH v7 023/102] KVM: TDX: x86: Add ioctl to get TDX systemwide parameters isaku.yamahata
                   ` (81 subsequent siblings)
  103 siblings, 2 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

As the first step to creating a TDX guest, create/destroy the VM structure.
Assign a TDX private host key ID (HKID) to the TDX guest for memory
encryption and allocate extra pages for it.  On destruction, free the
allocated pages and the HKID.

Before tearing down private page tables, TDX requires some resources of the
guest TD to be destroyed (i.e. the HKID must have been reclaimed, etc.).
Add a flush_shadow_all_private callback that is invoked before tearing down
private page tables for this purpose.

Add a second kvm_x86_ops hook in kvm_arch_destroy_vm() to support TDX's
destruction path, which needs to first put the VM into a teardown state,
then free per-vCPU resources, and finally free per-VM resources.
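
The resulting destruction ordering, as wired up below, can be summarized
roughly as follows (sketch only):

  kvm_arch_flush_shadow_all()
    -> flush_shadow_all_private()  /* tdx_mmu_release_hkid(): WBCACHE + free HKID */
    -> kvm_mmu_zap_all()           /* now safe to tear down private page tables */
  kvm_arch_destroy_vm()
    -> vm_destroy()                /* VMX-only work; nop for a TD */
    -> free per-vCPU resources
    -> vm_free()                   /* tdx_vm_free(): reclaim TDCS/TDR pages */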

Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |   2 +
 arch/x86/kvm/vmx/main.c            |  34 ++-
 arch/x86/kvm/vmx/tdx.c             | 376 +++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h             |   2 +
 arch/x86/kvm/vmx/tdx_errno.h       |   2 +-
 arch/x86/kvm/vmx/x86_ops.h         |  11 +
 arch/x86/kvm/x86.c                 |   8 +
 8 files changed, 433 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index a97cdb203a16..fbb2c6746066 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -21,7 +21,9 @@ KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
 KVM_X86_OP(is_vm_type_supported)
 KVM_X86_OP(vm_init)
+KVM_X86_OP_OPTIONAL(flush_shadow_all_private)
 KVM_X86_OP_OPTIONAL(vm_destroy)
+KVM_X86_OP_OPTIONAL(vm_free)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_precreate)
 KVM_X86_OP(vcpu_create)
 KVM_X86_OP(vcpu_free)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 089e0a4de926..80df346af117 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1438,7 +1438,9 @@ struct kvm_x86_ops {
 	bool (*is_vm_type_supported)(unsigned long vm_type);
 	unsigned int vm_size;
 	int (*vm_init)(struct kvm *kvm);
+	void (*flush_shadow_all_private)(struct kvm *kvm);
 	void (*vm_destroy)(struct kvm *kvm);
+	void (*vm_free)(struct kvm *kvm);
 
 	/* Create, but do not attach this VCPU */
 	int (*vcpu_precreate)(struct kvm *kvm);
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 47bfa94e538e..6a93b19a8b06 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -39,18 +39,44 @@ static int __init vt_post_hardware_enable_setup(void)
 	return 0;
 }
 
+static void vt_hardware_unsetup(void)
+{
+	tdx_hardware_unsetup();
+	vmx_hardware_unsetup();
+}
+
 static int vt_vm_init(struct kvm *kvm)
 {
 	if (is_td(kvm))
-		return -EOPNOTSUPP;	/* Not ready to create guest TD yet. */
+		return tdx_vm_init(kvm);
 
 	return vmx_vm_init(kvm);
 }
 
+static void vt_flush_shadow_all_private(struct kvm *kvm)
+{
+	if (is_td(kvm))
+		return tdx_mmu_release_hkid(kvm);
+}
+
+static void vt_vm_destroy(struct kvm *kvm)
+{
+	if (is_td(kvm))
+		return;
+
+	vmx_vm_destroy(kvm);
+}
+
+static void vt_vm_free(struct kvm *kvm)
+{
+	if (is_td(kvm))
+		return tdx_vm_free(kvm);
+}
+
 struct kvm_x86_ops vt_x86_ops __initdata = {
 	.name = "kvm_intel",
 
-	.hardware_unsetup = vmx_hardware_unsetup,
+	.hardware_unsetup = vt_hardware_unsetup,
 	.check_processor_compatibility = vmx_check_processor_compatibility,
 
 	.hardware_enable = vmx_hardware_enable,
@@ -60,7 +86,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.is_vm_type_supported = vt_is_vm_type_supported,
 	.vm_size = sizeof(struct kvm_vmx),
 	.vm_init = vt_vm_init,
-	.vm_destroy = vmx_vm_destroy,
+	.flush_shadow_all_private = vt_flush_shadow_all_private,
+	.vm_destroy = vt_vm_destroy,
+	.vm_free = vt_vm_free,
 
 	.vcpu_precreate = vmx_vcpu_precreate,
 	.vcpu_create = vmx_vcpu_create,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 3675f7de2735..63f3c7a02cc8 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -31,9 +31,367 @@ struct tdx_capabilities {
 	struct tdx_cpuid_config cpuid_configs[TDX_MAX_NR_CPUID_CONFIGS];
 };
 
+/*
+ * Key id globally used by TDX module: TDX module maps TDR with this TDX global
+ * key id.  TDR includes key id assigned to the TD.  Then TDX module maps other
+ * TD-related pages with the assigned key id.  TDR requires this TDX global key
+ * id for cache flush unlike other TD-related pages.
+ */
+static u32 tdx_global_keyid __read_mostly;
+
 /* Capabilities of KVM + the TDX module. */
 static struct tdx_capabilities tdx_caps;
 
+/*
+ * Some TDX SEAMCALLs (TDH.MNG.CREATE, TDH.PHYMEM.CACHE.WB,
+ * TDH.MNG.KEY.RECLAIMID, TDH.MNG.KEY.FREEID, etc.) try to acquire a global lock
+ * internally in the TDX module.  On failure, TDX_OPERAND_BUSY is returned without
+ * spinning or waiting due to a constraint on execution time.  It's the caller's
+ * responsibility to avoid the race (or to retry on TDX_OPERAND_BUSY).  Use this
+ * mutex to avoid races in the TDX module, as the kernel knows better about scheduling.
+ */
+static DEFINE_MUTEX(tdx_lock);
+static struct mutex *tdx_mng_key_config_lock;
+
+static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
+{
+	pa &= ~hkid_mask;
+	pa |= (u64)hkid << hkid_start_pos;
+
+	return pa;
+}
+
+static inline bool is_td_created(struct kvm_tdx *kvm_tdx)
+{
+	return kvm_tdx->tdr.added;
+}
+
+static inline void tdx_hkid_free(struct kvm_tdx *kvm_tdx)
+{
+	tdx_keyid_free(kvm_tdx->hkid);
+	kvm_tdx->hkid = -1;
+}
+
+static inline bool is_hkid_assigned(struct kvm_tdx *kvm_tdx)
+{
+	return kvm_tdx->hkid > 0;
+}
+
+static void tdx_clear_page(unsigned long page)
+{
+	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
+	unsigned long i;
+
+	/*
+	 * Zeroing the page is only necessary for systems with MKTME-i:
+	 * when re-assigning a page from an old keyid to a new keyid, MOVDIR64B
+	 * is required to clear/write the page with the new keyid to prevent an
+	 * integrity error when the page is read with the new keyid.
+	 */
+	if (!static_cpu_has(X86_FEATURE_MOVDIR64B))
+		return;
+
+	for (i = 0; i < 4096; i += 64)
+		/* MOVDIR64B [rdx], es:rdi */
+		asm (".byte 0x66, 0x0f, 0x38, 0xf8, 0x3a"
+		     : : "d" (zero_page), "D" (page + i) : "memory");
+}
+
+static int tdx_reclaim_page(unsigned long va, hpa_t pa, bool do_wb, u16 hkid)
+{
+	struct tdx_module_output out;
+	u64 err;
+
+	err = tdh_phymem_page_reclaim(pa, &out);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
+		return -EIO;
+	}
+
+	if (do_wb) {
+		err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(pa, hkid));
+		if (WARN_ON_ONCE(err)) {
+			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+			return -EIO;
+		}
+	}
+
+	tdx_clear_page(va);
+	return 0;
+}
+
+static int tdx_alloc_td_page(struct tdx_td_page *page)
+{
+	page->va = __get_free_page(GFP_KERNEL_ACCOUNT);
+	if (!page->va)
+		return -ENOMEM;
+
+	page->pa = __pa(page->va);
+	return 0;
+}
+
+static void tdx_mark_td_page_added(struct tdx_td_page *page)
+{
+	WARN_ON_ONCE(page->added);
+	page->added = true;
+}
+
+static void tdx_reclaim_td_page(struct tdx_td_page *page)
+{
+	if (page->added) {
+		/*
+		 * TDCX pages are being reclaimed.  The TDX module maps TDCX with
+		 * the HKID assigned to the TD.  The cache associated with the TD
+		 * was already flushed by TDH.PHYMEM.CACHE.WB at this point, so
+		 * the cache doesn't need to be flushed again.
+		 */
+		if (tdx_reclaim_page(page->va, page->pa, false, 0))
+			return;
+
+		page->added = false;
+	}
+	free_page(page->va);
+}
+
+static int tdx_do_tdh_phymem_cache_wb(void *param)
+{
+	u64 err = 0;
+
+	do {
+		err = tdh_phymem_cache_wb(!!err);
+	} while (err == TDX_INTERRUPTED_RESUMABLE);
+
+	/* Other thread may have done for us. */
+	if (err == TDX_NO_HKID_READY_TO_WBCACHE)
+		err = TDX_SUCCESS;
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_PHYMEM_CACHE_WB, err, NULL);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+void tdx_mmu_release_hkid(struct kvm *kvm)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	cpumask_var_t packages;
+	bool cpumask_allocated;
+	u64 err;
+	int ret;
+	int i;
+
+	if (!is_hkid_assigned(kvm_tdx))
+		return;
+
+	if (!is_td_created(kvm_tdx))
+		goto free_hkid;
+
+	cpumask_allocated = zalloc_cpumask_var(&packages, GFP_KERNEL);
+	cpus_read_lock();
+	for_each_online_cpu(i) {
+		if (cpumask_allocated &&
+			cpumask_test_and_set_cpu(topology_physical_package_id(i),
+						packages))
+			continue;
+
+		/*
+		 * Multiple guest TDs may be destroyed simultaneously.
+		 * Prevent tdh_phymem_cache_wb from returning TDX_BUSY by
+		 * serialization.
+		 */
+		mutex_lock(&tdx_lock);
+		ret = smp_call_on_cpu(i, tdx_do_tdh_phymem_cache_wb, NULL, 1);
+		mutex_unlock(&tdx_lock);
+		if (ret)
+			break;
+	}
+	cpus_read_unlock();
+	free_cpumask_var(packages);
+
+	mutex_lock(&tdx_lock);
+	err = tdh_mng_key_freeid(kvm_tdx->tdr.pa);
+	mutex_unlock(&tdx_lock);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_MNG_KEY_FREEID, err, NULL);
+		pr_err("tdh_mng_key_freeid failed. HKID %d is leaked.\n",
+			kvm_tdx->hkid);
+		return;
+	}
+
+free_hkid:
+	tdx_hkid_free(kvm_tdx);
+}
+
+void tdx_vm_free(struct kvm *kvm)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	int i;
+
+	/* Can't reclaim or free TD pages if teardown failed. */
+	if (is_hkid_assigned(kvm_tdx))
+		return;
+
+	for (i = 0; i < tdx_caps.tdcs_nr_pages; i++)
+		tdx_reclaim_td_page(&kvm_tdx->tdcs[i]);
+	kfree(kvm_tdx->tdcs);
+
+	/*
+	 * The TDX module maps TDR with the TDX global HKID and may access TDR
+	 * while operating on the TD (especially when reclaiming TDCS), so a
+	 * cache flush with the TDX global HKID is needed.
+	 */
+	if (kvm_tdx->tdr.added &&
+		tdx_reclaim_page(kvm_tdx->tdr.va, kvm_tdx->tdr.pa, true,
+				tdx_global_keyid))
+		return;
+
+	free_page(kvm_tdx->tdr.va);
+}
+
+static int tdx_do_tdh_mng_key_config(void *param)
+{
+	hpa_t *tdr_p = param;
+	u64 err;
+
+	do {
+		err = tdh_mng_key_config(*tdr_p);
+
+		/*
+		 * If it failed to generate a random key, retry it because this
+		 * is typically caused by an entropy error of the CPU's random
+		 * number generator.
+		 */
+	} while (err == TDX_KEY_GENERATION_FAILED);
+
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_MNG_KEY_CONFIG, err, NULL);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+int tdx_vm_init(struct kvm *kvm)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	cpumask_var_t packages;
+	int ret, i;
+	u64 err;
+
+	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
+	kvm->max_vcpus = 0;
+
+	kvm_tdx->hkid = tdx_keyid_alloc();
+	if (kvm_tdx->hkid < 0)
+		return -EBUSY;
+
+	ret = tdx_alloc_td_page(&kvm_tdx->tdr);
+	if (ret)
+		goto free_hkid;
+
+	kvm_tdx->tdcs = kcalloc(tdx_caps.tdcs_nr_pages, sizeof(*kvm_tdx->tdcs),
+				GFP_KERNEL_ACCOUNT);
+	if (!kvm_tdx->tdcs) {
+		ret = -ENOMEM;
+		goto free_tdr;
+	}
+	for (i = 0; i < tdx_caps.tdcs_nr_pages; i++) {
+		ret = tdx_alloc_td_page(&kvm_tdx->tdcs[i]);
+		if (ret)
+			goto free_tdcs;
+	}
+
+	/*
+	 * Acquire global lock to avoid TDX_OPERAND_BUSY:
+	 * TDH.MNG.CREATE and other APIs try to lock the global Key Owner
+	 * Table (KOT) to track the assigned TDX private HKID.  It doesn't spin
+	 * to acquire the lock but returns TDX_OPERAND_BUSY instead, and lets
+	 * the caller handle the contention.  This is because of the limited
+	 * execution time budget inside the TDX module, and the OS/VMM knows
+	 * better about process scheduling.
+	 *
+	 * APIs to acquire the lock of KOT:
+	 * TDH.MNG.CREATE, TDH.MNG.KEY.FREEID, TDH.MNG.VPFLUSHDONE, and
+	 * TDH.PHYMEM.CACHE.WB.
+	 */
+	mutex_lock(&tdx_lock);
+	err = tdh_mng_create(kvm_tdx->tdr.pa, kvm_tdx->hkid);
+	mutex_unlock(&tdx_lock);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_MNG_CREATE, err, NULL);
+		ret = -EIO;
+		goto free_tdcs;
+	}
+	tdx_mark_td_page_added(&kvm_tdx->tdr);
+
+	if (!zalloc_cpumask_var(&packages, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto free_tdcs;
+	}
+	cpus_read_lock();
+	for_each_online_cpu(i) {
+		int pkg = topology_physical_package_id(i);
+
+		if (cpumask_test_and_set_cpu(pkg, packages))
+			continue;
+
+		/*
+		 * Program the memory controller in the package with an
+		 * encryption key associated to a TDX private host key id
+		 * assigned to this TDR.  Concurrent operations on same memory
+		 * controller results in TDX_OPERAND_BUSY.  Avoid this race by
+		 * mutex.
+		 */
+		mutex_lock(&tdx_mng_key_config_lock[pkg]);
+		ret = smp_call_on_cpu(i, tdx_do_tdh_mng_key_config,
+				      &kvm_tdx->tdr.pa, true);
+		mutex_unlock(&tdx_mng_key_config_lock[pkg]);
+		if (ret)
+			break;
+	}
+	cpus_read_unlock();
+	free_cpumask_var(packages);
+	if (ret)
+		goto teardown;
+
+	for (i = 0; i < tdx_caps.tdcs_nr_pages; i++) {
+		err = tdh_mng_addcx(kvm_tdx->tdr.pa, kvm_tdx->tdcs[i].pa);
+		if (WARN_ON_ONCE(err)) {
+			pr_tdx_error(TDH_MNG_ADDCX, err, NULL);
+			ret = -EIO;
+			goto teardown;
+		}
+		tdx_mark_td_page_added(&kvm_tdx->tdcs[i]);
+	}
+
+	/*
+	 * Note, TDH_MNG_INIT cannot be invoked here.  TDH_MNG_INIT requires a dedicated
+	 * ioctl() to define the configure CPUID values for the TD.
+	 */
+	return 0;
+
+	/*
+	 * The sequence for freeing resources from a partially initialized TD
+	 * varies based on where in the initialization flow failure occurred.
+	 * Simply use the full teardown and destroy, which naturally play nice
+	 * with partial initialization.
+	 */
+teardown:
+	tdx_mmu_release_hkid(kvm);
+	tdx_vm_free(kvm);
+	return ret;
+
+free_tdcs:
+	/* @i points at the TDCS page that failed allocation. */
+	for (--i; i >= 0; i--)
+		free_page(kvm_tdx->tdcs[i].va);
+	kfree(kvm_tdx->tdcs);
+free_tdr:
+	free_page(kvm_tdx->tdr.va);
+free_hkid:
+	tdx_hkid_free(kvm_tdx);
+	return ret;
+}
+
 int __init tdx_module_setup(void)
 {
 	const struct tdsysinfo_struct *tdsysinfo;
@@ -48,6 +406,8 @@ int __init tdx_module_setup(void)
 		return ret;
 	}
 
+	tdx_global_keyid = tdx_get_global_keyid();
+
 	tdsysinfo = tdx_get_sysinfo();
 	if (tdsysinfo->num_cpuid_config > TDX_MAX_NR_CPUID_CONFIGS)
 		return -EIO;
@@ -81,7 +441,9 @@ bool tdx_is_vm_type_supported(unsigned long type)
 
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 {
+	int max_pkgs;
 	u32 max_pa;
+	int i;
 
 	if (!enable_ept) {
 		pr_warn("Cannot enable TDX with EPT disabled\n");
@@ -97,6 +459,14 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 	if (WARN_ON_ONCE(x86_ops->tlb_remote_flush))
 		return -EIO;
 
+	max_pkgs = topology_max_packages();
+	tdx_mng_key_config_lock = kcalloc(max_pkgs, sizeof(*tdx_mng_key_config_lock),
+				   GFP_KERNEL);
+	if (!tdx_mng_key_config_lock)
+		return -ENOMEM;
+	for (i = 0; i < max_pkgs; i++)
+		mutex_init(&tdx_mng_key_config_lock[i]);
+
 	max_pa = cpuid_eax(0x80000008) & 0xff;
 	hkid_start_pos = boot_cpu_data.x86_phys_bits;
 	hkid_mask = GENMASK_ULL(max_pa - 1, hkid_start_pos);
@@ -105,3 +475,9 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 
 	return 0;
 }
+
+void tdx_hardware_unsetup(void)
+{
+	/* kfree accepts NULL. */
+	kfree(tdx_mng_key_config_lock);
+}
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index f50d37f3fc9c..8058b6b153f8 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -19,6 +19,8 @@ struct kvm_tdx {
 
 	struct tdx_td_page tdr;
 	struct tdx_td_page *tdcs;
+
+	int hkid;
 };
 
 struct vcpu_tdx {
diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
index 5c878488795d..590fcfdd1899 100644
--- a/arch/x86/kvm/vmx/tdx_errno.h
+++ b/arch/x86/kvm/vmx/tdx_errno.h
@@ -12,11 +12,11 @@
 #define TDX_SUCCESS				0x0000000000000000ULL
 #define TDX_NON_RECOVERABLE_VCPU		0x4000000100000000ULL
 #define TDX_INTERRUPTED_RESUMABLE		0x8000000300000000ULL
-#define TDX_LIFECYCLE_STATE_INCORRECT		0xC000060700000000ULL
 #define TDX_VCPU_NOT_ASSOCIATED			0x8000070200000000ULL
 #define TDX_KEY_GENERATION_FAILED		0x8000080000000000ULL
 #define TDX_KEY_STATE_INCORRECT			0xC000081100000000ULL
 #define TDX_KEY_CONFIGURED			0x0000081500000000ULL
+#define TDX_NO_HKID_READY_TO_WBCACHE		0x0000082100000000ULL
 #define TDX_EPT_WALK_FAILED			0xC0000B0000000000ULL
 
 /*
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index dbfd0e43fd89..663fd8d4063f 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -131,9 +131,20 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
 #ifdef CONFIG_INTEL_TDX_HOST
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
 bool tdx_is_vm_type_supported(unsigned long type);
+void tdx_hardware_unsetup(void);
+
+int tdx_vm_init(struct kvm *kvm);
+void tdx_mmu_release_hkid(struct kvm *kvm);
+void tdx_vm_free(struct kvm *kvm);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
+static inline void tdx_hardware_unsetup(void) {}
+
+static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
+static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
+static inline void tdx_flush_shadow_all_private(struct kvm *kvm) {}
+static inline void tdx_vm_free(struct kvm *kvm) {}
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 96dc8f52a137..320f902eaf9e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12057,6 +12057,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_page_track_cleanup(kvm);
 	kvm_xen_destroy_vm(kvm);
 	kvm_hv_destroy_vm(kvm);
+	static_call_cond(kvm_x86_vm_free)(kvm);
 }
 
 static void memslot_rmap_free(struct kvm_memory_slot *slot)
@@ -12321,6 +12322,13 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
+	/*
+	 * kvm_mmu_zap_all() zaps both private and shared page tables.  Before
+	 * tearing down private page tables, TDX requires some TD resources to
+	 * be destroyed (i.e. keyID must have been reclaimed, etc).  Invoke
+	 * kvm_x86_flush_shadow_all_private() for this.
+	 */
+	static_call_cond(kvm_x86_flush_shadow_all_private)(kvm);
 	kvm_mmu_zap_all(kvm);
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 023/102] KVM: TDX: x86: Add ioctl to get TDX systemwide parameters
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (21 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 022/102] KVM: TDX: create/destroy VM structure isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-07  6:48   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 024/102] KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl isaku.yamahata
                   ` (80 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Implement a system-scoped ioctl to get system-wide parameters for TDX.
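
A hypothetical userspace sketch of how a VMM could query the capabilities
through this ioctl (names such as kvm_fd are assumptions, error handling is
trimmed):

  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* kvm_fd is an open fd of /dev/kvm; nr is the caller's array capacity. */
  static struct kvm_tdx_capabilities *tdx_get_caps(int kvm_fd, __u32 nr)
  {
  	struct kvm_tdx_capabilities *caps;
  	struct kvm_tdx_cmd cmd = { .id = KVM_TDX_CAPABILITIES };

  	caps = calloc(1, sizeof(*caps) + nr * sizeof(caps->cpuid_configs[0]));
  	if (!caps)
  		return NULL;
  	caps->nr_cpuid_configs = nr;
  	cmd.data = (__u64)(unsigned long)caps;

  	/* Fails with -E2BIG if nr is smaller than what the TDX module reports. */
  	if (ioctl(kvm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd)) {
  		free(caps);
  		return NULL;
  	}
  	return caps;
  }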

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h    |  1 +
 arch/x86/include/asm/kvm_host.h       |  1 +
 arch/x86/include/uapi/asm/kvm.h       | 48 +++++++++++++++++++++++++++
 arch/x86/kvm/vmx/main.c               |  2 ++
 arch/x86/kvm/vmx/tdx.c                | 46 +++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h            |  2 ++
 arch/x86/kvm/x86.c                    |  6 ++++
 tools/arch/x86/include/uapi/asm/kvm.h | 48 +++++++++++++++++++++++++++
 8 files changed, 154 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index fbb2c6746066..3677a5015a4f 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -117,6 +117,7 @@ KVM_X86_OP(smi_allowed)
 KVM_X86_OP(enter_smm)
 KVM_X86_OP(leave_smm)
 KVM_X86_OP(enable_smi_window)
+KVM_X86_OP_OPTIONAL(dev_mem_enc_ioctl)
 KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
 KVM_X86_OP_OPTIONAL(mem_enc_register_region)
 KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 80df346af117..342decc69649 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1591,6 +1591,7 @@ struct kvm_x86_ops {
 	int (*leave_smm)(struct kvm_vcpu *vcpu, const char *smstate);
 	void (*enable_smi_window)(struct kvm_vcpu *vcpu);
 
+	int (*dev_mem_enc_ioctl)(void __user *argp);
 	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
 	int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 9792ec1cc317..273c8d82b9c8 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -534,4 +534,52 @@ struct kvm_pmu_event_filter {
 #define KVM_X86_DEFAULT_VM	0
 #define KVM_X86_TDX_VM		1
 
+/* Trust Domain eXtension sub-ioctl() commands. */
+enum kvm_tdx_cmd_id {
+	KVM_TDX_CAPABILITIES = 0,
+
+	KVM_TDX_CMD_NR_MAX,
+};
+
+struct kvm_tdx_cmd {
+	/* enum kvm_tdx_cmd_id */
+	__u32 id;
+	/* flags for sub-command. If sub-command doesn't use this, set zero. */
+	__u32 flags;
+	/*
+	 * data for each sub-command. An immediate or a pointer to the actual
+	 * data in process virtual address.  If sub-command doesn't use it,
+	 * set zero.
+	 */
+	__u64 data;
+	/*
+	 * Auxiliary error code.  The sub-command may return TDX SEAMCALL
+	 * status code in addition to -Exxx.
+	 * Defined for consistency with struct kvm_sev_cmd.
+	 */
+	__u64 error;
+	/* Reserved: Defined for consistency with struct kvm_sev_cmd. */
+	__u64 unused;
+};
+
+struct kvm_tdx_cpuid_config {
+	__u32 leaf;
+	__u32 sub_leaf;
+	__u32 eax;
+	__u32 ebx;
+	__u32 ecx;
+	__u32 edx;
+};
+
+struct kvm_tdx_capabilities {
+	__u64 attrs_fixed0;
+	__u64 attrs_fixed1;
+	__u64 xfam_fixed0;
+	__u64 xfam_fixed1;
+
+	__u32 nr_cpuid_configs;
+	__u32 padding;
+	struct kvm_tdx_cpuid_config cpuid_configs[0];
+};
+
 #endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 6a93b19a8b06..7b497ed1f21c 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -212,6 +212,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.complete_emulated_msr = kvm_complete_insn_gp,
 
 	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
+
+	.dev_mem_enc_ioctl = tdx_dev_ioctl,
 };
 
 struct kvm_x86_init_ops vt_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 63f3c7a02cc8..ec4ebba4152a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -392,6 +392,52 @@ int tdx_vm_init(struct kvm *kvm)
 	return ret;
 }
 
+int tdx_dev_ioctl(void __user *argp)
+{
+	struct kvm_tdx_capabilities __user *user_caps;
+	struct kvm_tdx_capabilities caps;
+	struct kvm_tdx_cmd cmd;
+
+	BUILD_BUG_ON(sizeof(struct kvm_tdx_cpuid_config) !=
+		     sizeof(struct tdx_cpuid_config));
+
+	if (copy_from_user(&cmd, argp, sizeof(cmd)))
+		return -EFAULT;
+	if (cmd.flags || cmd.error || cmd.unused)
+		return -EINVAL;
+	/*
+	 * Currently only KVM_TDX_CAPABILITIES is defined for system-scoped
+	 * mem_enc_ioctl().
+	 */
+	if (cmd.id != KVM_TDX_CAPABILITIES)
+		return -EINVAL;
+
+	user_caps = (void __user *)cmd.data;
+	if (copy_from_user(&caps, user_caps, sizeof(caps)))
+		return -EFAULT;
+
+	if (caps.nr_cpuid_configs < tdx_caps.nr_cpuid_configs)
+		return -E2BIG;
+
+	caps = (struct kvm_tdx_capabilities) {
+		.attrs_fixed0 = tdx_caps.attrs_fixed0,
+		.attrs_fixed1 = tdx_caps.attrs_fixed1,
+		.xfam_fixed0 = tdx_caps.xfam_fixed0,
+		.xfam_fixed1 = tdx_caps.xfam_fixed1,
+		.nr_cpuid_configs = tdx_caps.nr_cpuid_configs,
+		.padding = 0,
+	};
+
+	if (copy_to_user(user_caps, &caps, sizeof(caps)))
+		return -EFAULT;
+	if (copy_to_user(user_caps->cpuid_configs, &tdx_caps.cpuid_configs,
+			 tdx_caps.nr_cpuid_configs *
+			 sizeof(struct tdx_cpuid_config)))
+		return -EFAULT;
+
+	return 0;
+}
+
 int __init tdx_module_setup(void)
 {
 	const struct tdsysinfo_struct *tdsysinfo;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 663fd8d4063f..3027d9821fe1 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -132,6 +132,7 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
 bool tdx_is_vm_type_supported(unsigned long type);
 void tdx_hardware_unsetup(void);
+int tdx_dev_ioctl(void __user *argp);
 
 int tdx_vm_init(struct kvm *kvm);
 void tdx_mmu_release_hkid(struct kvm *kvm);
@@ -140,6 +141,7 @@ void tdx_vm_free(struct kvm *kvm);
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
 static inline void tdx_hardware_unsetup(void) {}
+static inline int tdx_dev_ioctl(void __user *argp) { return -EOPNOTSUPP; };
 
 static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
 static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 320f902eaf9e..6037ce93bcb7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4565,6 +4565,12 @@ long kvm_arch_dev_ioctl(struct file *filp,
 			break;
 		r = kvm_x86_dev_has_attr(&attr);
 		break;
+		case KVM_MEMORY_ENCRYPT_OP:
+			r = -EINVAL;
+			if (!kvm_x86_ops.dev_mem_enc_ioctl)
+				goto out;
+			r = static_call(kvm_x86_dev_mem_enc_ioctl)(argp);
+			break;
 	}
 	default:
 		r = -EINVAL;
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index 71a5851475e7..a9ea3573be1b 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -528,4 +528,52 @@ struct kvm_pmu_event_filter {
 #define KVM_X86_DEFAULT_VM	0
 #define KVM_X86_TDX_VM		1
 
+/* Trust Domain eXtension sub-ioctl() commands. */
+enum kvm_tdx_cmd_id {
+	KVM_TDX_CAPABILITIES = 0,
+
+	KVM_TDX_CMD_NR_MAX,
+};
+
+struct kvm_tdx_cmd {
+	/* enum kvm_tdx_cmd_id */
+	__u32 id;
+	/* flags for sub-command. If sub-command doesn't use this, set zero. */
+	__u32 flags;
+	/*
+	 * data for each sub-command. An immediate or a pointer to the actual
+	 * data in process virtual address.  If sub-command doesn't use it,
+	 * set zero.
+	 */
+	__u64 data;
+	/*
+	 * Auxiliary error code.  The sub-command may return TDX SEAMCALL
+	 * status code in addition to -Exxx.
+	 * Defined for consistency with struct kvm_sev_cmd.
+	 */
+	__u64 error;
+	/* Reserved: Defined for consistency with struct kvm_sev_cmd. */
+	__u64 unused;
+};
+
+struct kvm_tdx_cpuid_config {
+	__u32 leaf;
+	__u32 sub_leaf;
+	__u32 eax;
+	__u32 ebx;
+	__u32 ecx;
+	__u32 edx;
+};
+
+struct kvm_tdx_capabilities {
+	__u64 attrs_fixed0;
+	__u64 attrs_fixed1;
+	__u64 xfam_fixed0;
+	__u64 xfam_fixed1;
+
+	__u32 nr_cpuid_configs;
+	__u32 padding;
+	struct kvm_tdx_cpuid_config cpuid_configs[0];
+};
+
 #endif /* _ASM_X86_KVM_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 024/102] KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (22 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 023/102] KVM: TDX: x86: Add ioctl to get TDX systemwide parameters isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-07  7:12   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 025/102] KVM: TDX: initialize VM with TDX specific parameters isaku.yamahata
                   ` (79 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Add a placeholder function for the TDX-specific VM-scoped ioctl as
mem_enc_op.  TDX-specific sub-commands will be added later to retrieve/pass
TDX-specific parameters.

KVM_MEMORY_ENCRYPT_OP was introduced for VM-scoped operations specific to
guest state-protected VMs, and defines technology-specific subcommands.
Despite its name, the subcommands are not limited to memory encryption;
various technology-specific operations are defined under it.  It's natural
to reuse KVM_MEMORY_ENCRYPT_OP for TDX-specific operations and define
subcommands for them.

TDX requires VM-scoped and vCPU-scoped TDX-specific operations for the
device model (e.g. qemu): getting system-wide parameters, TDX-specific VM
initialization, and TDX-specific vCPU initialization.  The latter requires
KVM vCPU-scoped operations in addition to the existing VM-scoped operations.
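
From the userspace side, VM-scoped subcommands are issued against the VM fd
rather than /dev/kvm.  A rough sketch (the helper is hypothetical, and the
subcommand shown as an example is only added by a later patch):

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int tdx_vm_enc_op(int vm_fd, __u32 id, void *data)
  {
  	struct kvm_tdx_cmd cmd = {
  		.id = id,				/* e.g. KVM_TDX_INIT_VM (later patch) */
  		.data = (__u64)(unsigned long)data,	/* subcommand-specific payload */
  	};

  	return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
  }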

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    |  9 +++++++++
 arch/x86/kvm/vmx/tdx.c     | 26 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  4 ++++
 3 files changed, 39 insertions(+)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7b497ed1f21c..067f5de56c53 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -73,6 +73,14 @@ static void vt_vm_free(struct kvm *kvm)
 		return tdx_vm_free(kvm);
 }
 
+static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
+{
+	if (!is_td(kvm))
+		return -ENOTTY;
+
+	return tdx_vm_ioctl(kvm, argp);
+}
+
 struct kvm_x86_ops vt_x86_ops __initdata = {
 	.name = "kvm_intel",
 
@@ -214,6 +222,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
 
 	.dev_mem_enc_ioctl = tdx_dev_ioctl,
+	.mem_enc_ioctl = vt_mem_enc_ioctl,
 };
 
 struct kvm_x86_init_ops vt_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index ec4ebba4152a..2a9dfd54189f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -438,6 +438,32 @@ int tdx_dev_ioctl(void __user *argp)
 	return 0;
 }
 
+int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
+{
+	struct kvm_tdx_cmd tdx_cmd;
+	int r;
+
+	if (copy_from_user(&tdx_cmd, argp, sizeof(struct kvm_tdx_cmd)))
+		return -EFAULT;
+	if (tdx_cmd.error || tdx_cmd.unused)
+		return -EINVAL;
+
+	mutex_lock(&kvm->lock);
+
+	switch (tdx_cmd.id) {
+	default:
+		r = -EINVAL;
+		goto out;
+	}
+
+	if (copy_to_user(argp, &tdx_cmd, sizeof(struct kvm_tdx_cmd)))
+		r = -EFAULT;
+
+out:
+	mutex_unlock(&kvm->lock);
+	return r;
+}
+
 int __init tdx_module_setup(void)
 {
 	const struct tdsysinfo_struct *tdsysinfo;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 3027d9821fe1..ef6115ae0e88 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -137,6 +137,8 @@ int tdx_dev_ioctl(void __user *argp);
 int tdx_vm_init(struct kvm *kvm);
 void tdx_mmu_release_hkid(struct kvm *kvm);
 void tdx_vm_free(struct kvm *kvm);
+
+int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
@@ -147,6 +149,8 @@ static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
 static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
 static inline void tdx_flush_shadow_all_private(struct kvm *kvm) {}
 static inline void tdx_vm_free(struct kvm *kvm) {}
+
+static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 025/102] KVM: TDX: initialize VM with TDX specific parameters
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (23 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 024/102] KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-28  8:30   ` Xiaoyao Li
  2022-06-27 21:53 ` [PATCH v7 026/102] KVM: TDX: Make pmu_intel.c ignore guest TD case isaku.yamahata
                   ` (78 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

TDX requires additional parameters for a TDX VM for confidential execution,
to protect the confidentiality of its memory contents and CPU state from any
other software, including the VMM.  When creating a guest TD VM, before
creating any vcpu, the following must be specified: the number of vcpus, the
TSC frequency (which is the same for all vcpus and cannot be changed), the
CPUIDs emulated by the TDX module (which the guest can therefore trust), and
the sha384 values used for measurement.

Add a new subcommand, KVM_TDX_INIT_VM, to pass parameters for the TDX guest.
It assigns an encryption key to the TDX guest for memory encryption; TDX
encrypts memory on a per-guest basis.  Through it the device model passes the
per-VM parameters for the TDX guest: the maximum number of vcpus, the TSC
frequency (a TDX guest has a fixed VM-wide TSC frequency, not per-vcpu, and
cannot change it), attributes (production or debug), the available extended
features (which are reflected into guest XCR0 and the IA32_XSS MSR), CPUIDs,
sha384 measurements, etc.

This subcommand is called before creating any vcpu and before KVM_SET_CPUID2,
i.e. the CPUID configuration isn't available yet.  So the CPUID configuration
values need to be passed in struct kvm_tdx_init_vm.  It's the device model's
responsibility to keep the CPUID configuration consistent between
KVM_TDX_INIT_VM and KVM_SET_CPUID2.
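
As a sketch only (not the actual qemu code), the device model could pack and
submit these parameters roughly as below; the helper name tdx_init_vm() and
the choice of one vcpu with zero attributes are illustrative assumptions, and
the structures come from the uapi headers added by this series:

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>   /* struct kvm_tdx_cmd, struct kvm_tdx_init_vm */

  /* cpuid comes from KVM_GET_SUPPORTED_CPUID, already filtered by the VMM. */
  static int tdx_init_vm(int vm_fd, const struct kvm_cpuid2 *cpuid)
  {
          struct kvm_tdx_cmd cmd = { .id = KVM_TDX_INIT_VM };
          struct kvm_tdx_init_vm *init_vm;
          int ret;

          init_vm = calloc(1, sizeof(*init_vm));   /* 16KB, zero-filled */
          if (!init_vm)
                  return -1;

          init_vm->max_vcpus = 1;                  /* illustrative */
          init_vm->attributes = 0;                 /* production TD */
          init_vm->cpuid.nent = cpuid->nent;
          memcpy(init_vm->entries, cpuid->entries,
                 cpuid->nent * sizeof(cpuid->entries[0]));

          cmd.data = (uintptr_t)init_vm;
          ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
          free(init_vm);
          return ret;
  }

The same entries would then be handed to KVM_SET_CPUID2 on each vcpu so the
two views stay consistent.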

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h       |   2 +
 arch/x86/include/asm/tdx.h            |   3 +
 arch/x86/include/uapi/asm/kvm.h       |  33 +++++
 arch/x86/kvm/vmx/tdx.c                | 206 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h                |  23 +++
 tools/arch/x86/include/uapi/asm/kvm.h |  33 +++++
 6 files changed, 300 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 342decc69649..81638987cdb9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1338,6 +1338,8 @@ struct kvm_arch {
 	 * the global KVM_MAX_VCPU_IDS may lead to significant memory waste.
 	 */
 	u32 max_vcpu_ids;
+
+	gfn_t gfn_shared_mask;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 6c0925e73a27..26e3e2da685a 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -89,6 +89,9 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
 #endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
 
 #ifdef CONFIG_INTEL_TDX_HOST
+
+/* -1 indicates CPUID leaf with no sub-leaves. */
+#define TDX_CPUID_NO_SUBLEAF	((u32)-1)
 struct tdx_cpuid_config {
 	u32	leaf;
 	u32	sub_leaf;
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 273c8d82b9c8..f89774ccd4ae 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -537,6 +537,7 @@ struct kvm_pmu_event_filter {
 /* Trust Domain eXtension sub-ioctl() commands. */
 enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
+	KVM_TDX_INIT_VM,
 
 	KVM_TDX_CMD_NR_MAX,
 };
@@ -582,4 +583,36 @@ struct kvm_tdx_capabilities {
 	struct kvm_tdx_cpuid_config cpuid_configs[0];
 };
 
+struct kvm_tdx_init_vm {
+	__u64 attributes;
+	__u32 max_vcpus;
+	__u32 padding;
+	__u64 mrconfigid[6];	/* sha384 digest */
+	__u64 mrowner[6];	/* sha384 digest */
+	__u64 mrownerconfig[6];	/* sha384 digest */
+	union {
+		/*
+		 * KVM_TDX_INIT_VM is called before vcpu creation, thus before
+		 * KVM_SET_CPUID2.  CPUID configurations needs to be passed.
+		 *
+		 * This configuration supersedes KVM_SET_CPUID{,2}.
+		 * The user space VMM, e.g. qemu, should make them consistent
+		 * with this values.
+		 * sizeof(struct kvm_cpuid_entry2) * KVM_MAX_CPUID_ENTRIES(256)
+		 * = 8KB.
+		 */
+		struct {
+			struct kvm_cpuid2 cpuid;
+			/* 8KB with KVM_MAX_CPUID_ENTRIES. */
+			struct kvm_cpuid_entry2 entries[];
+		};
+		/*
+		 * For future extensibility.
+		 * The size(struct kvm_tdx_init_vm) = 16KB.
+		 * This should be enough given sizeof(TD_PARAMS) = 1024
+		 */
+		__u64 reserved[2028];
+	};
+};
+
 #endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2a9dfd54189f..1273b60a1a00 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -438,6 +438,209 @@ int tdx_dev_ioctl(void __user *argp)
 	return 0;
 }
 
+/*
+ * cpuid entry lookup in TDX cpuid config way.
+ * The difference is how to specify index(subleaves).
+ * Specify index to TDX_CPUID_NO_SUBLEAF for CPUID leaf with no-subleaves.
+ */
+static const struct kvm_cpuid_entry2 *tdx_find_cpuid_entry(
+	const struct kvm_cpuid2 *cpuid, u32 function, u32 index)
+{
+	int i;
+
+
+	/* In TDX CPU CONFIG, TDX_CPUID_NO_SUBLEAF means index = 0. */
+	if (index == TDX_CPUID_NO_SUBLEAF)
+		index = 0;
+
+	for (i = 0; i < cpuid->nent; i++) {
+		const struct kvm_cpuid_entry2 *e = &cpuid->entries[i];
+
+		if (e->function == function &&
+		    (e->index == index ||
+		     !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
+			return e;
+	}
+	return NULL;
+}
+
+static int setup_tdparams(struct kvm *kvm, struct td_params *td_params,
+			struct kvm_tdx_init_vm *init_vm)
+{
+	const struct kvm_cpuid2 *cpuid = &init_vm->cpuid;
+	const struct kvm_cpuid_entry2 *entry;
+	u64 guest_supported_xcr0;
+	u64 guest_supported_xss;
+	int max_pa;
+	int i;
+
+	td_params->max_vcpus = init_vm->max_vcpus;
+
+	td_params->attributes = init_vm->attributes;
+	if (td_params->attributes & TDX_TD_ATTRIBUTE_PERFMON) {
+		/*
+		 * TODO: save/restore PMU related registers around TDENTER.
+		 * Once it's done, remove this guard.
+		 */
+		pr_warn("TD doesn't support perfmon yet. KVM needs to save/restore "
+			"host perf registers properly.\n");
+		return -EOPNOTSUPP;
+	}
+
+	for (i = 0; i < tdx_caps.nr_cpuid_configs; i++) {
+		const struct tdx_cpuid_config *config = &tdx_caps.cpuid_configs[i];
+		const struct kvm_cpuid_entry2 *entry =
+			tdx_find_cpuid_entry(cpuid, config->leaf, config->sub_leaf);
+		struct tdx_cpuid_value *value = &td_params->cpuid_values[i];
+
+		if (!entry)
+			continue;
+
+		value->eax = entry->eax & config->eax;
+		value->ebx = entry->ebx & config->ebx;
+		value->ecx = entry->ecx & config->ecx;
+		value->edx = entry->edx & config->edx;
+	}
+
+	max_pa = 36;
+	entry = tdx_find_cpuid_entry(cpuid, 0x80000008, 0);
+	if (entry)
+		max_pa = entry->eax & 0xff;
+
+	td_params->eptp_controls = VMX_EPTP_MT_WB;
+	/*
+	 * No CPU supports 4-level && max_pa > 48.
+	 * "5-level paging and 5-level EPT" section 4.1 4-level EPT
+	 * "4-level EPT is limited to translating 48-bit guest-physical
+	 *  addresses."
+	 * cpu_has_vmx_ept_5levels() check is just in case.
+	 */
+	if (cpu_has_vmx_ept_5levels() && max_pa > 48) {
+		td_params->eptp_controls |= VMX_EPTP_PWL_5;
+		td_params->exec_controls |= TDX_EXEC_CONTROL_MAX_GPAW;
+	} else {
+		td_params->eptp_controls |= VMX_EPTP_PWL_4;
+	}
+
+	/* Setup td_params.xfam */
+	entry = tdx_find_cpuid_entry(cpuid, 0xd, 0);
+	if (entry)
+		guest_supported_xcr0 = (entry->eax | ((u64)entry->edx << 32));
+	else
+		guest_supported_xcr0 = 0;
+	guest_supported_xcr0 &= kvm_caps.supported_xcr0;
+
+	entry = tdx_find_cpuid_entry(cpuid, 0xd, 1);
+	if (entry)
+		guest_supported_xss = (entry->ecx | ((u64)entry->edx << 32));
+	else
+		guest_supported_xss = 0;
+	/* PT can be exposed to TD guest regardless of KVM's XSS support */
+	guest_supported_xss &= (kvm_caps.supported_xss | XFEATURE_MASK_PT);
+
+	td_params->xfam = guest_supported_xcr0 | guest_supported_xss;
+	if (td_params->xfam & XFEATURE_MASK_LBR) {
+		/*
+		 * TODO: once KVM supports LBR(save/restore LBR related
+		 * registers around TDENTER), remove this guard.
+		 */
+		pr_warn("TD doesn't support LBR yet. KVM needs to save/restore "
+			"IA32_LBR_DEPTH properly.\n");
+		return -EOPNOTSUPP;
+	}
+
+	if (td_params->xfam & XFEATURE_MASK_XTILE) {
+		/*
+		 * TODO: once KVM supports AMX(save/restore AMX related
+		 * registers around TDENTER), remove this guard.
+		 */
+		pr_warn("TD doesn't support AMX yet. KVM needs to save/restore "
+			"IA32_XFD, IA32_XFD_ERR properly.\n");
+		return -EOPNOTSUPP;
+	}
+
+	td_params->tsc_frequency =
+		TDX_TSC_KHZ_TO_25MHZ(kvm->arch.default_tsc_khz);
+
+#define MEMCPY_SAME_SIZE(dst, src)				\
+	do {							\
+		BUILD_BUG_ON(sizeof(dst) != sizeof(src));	\
+		memcpy((dst), (src), sizeof(dst));		\
+	} while (0)
+
+	MEMCPY_SAME_SIZE(td_params->mrconfigid, init_vm->mrconfigid);
+	MEMCPY_SAME_SIZE(td_params->mrowner, init_vm->mrowner);
+	MEMCPY_SAME_SIZE(td_params->mrownerconfig, init_vm->mrownerconfig);
+
+	return 0;
+}
+
+static int tdx_td_init(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	struct kvm_tdx_init_vm *init_vm = NULL;
+	struct td_params *td_params = NULL;
+	struct tdx_module_output out;
+	int ret;
+	u64 err;
+
+	BUILD_BUG_ON(sizeof(*init_vm) != 16 * 1024);
+	BUILD_BUG_ON((sizeof(*init_vm) - offsetof(typeof(*init_vm), entries)) /
+		     sizeof(init_vm->entries[0]) < KVM_MAX_CPUID_ENTRIES);
+	BUILD_BUG_ON(sizeof(struct td_params) != 1024);
+
+	if (is_td_initialized(kvm))
+		return -EINVAL;
+
+	if (cmd->flags)
+		return -EINVAL;
+
+	init_vm = kzalloc(sizeof(*init_vm), GFP_KERNEL);
+	if (copy_from_user(init_vm, (void __user *)cmd->data, sizeof(*init_vm))) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	if (init_vm->max_vcpus > KVM_MAX_VCPUS) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	td_params = kzalloc(sizeof(struct td_params), GFP_KERNEL);
+	if (!td_params) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	ret = setup_tdparams(kvm, td_params, init_vm);
+	if (ret)
+		goto out;
+
+	err = tdh_mng_init(kvm_tdx->tdr.pa, __pa(td_params), &out);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_MNG_INIT, err, &out);
+		ret = -EIO;
+		goto out;
+	}
+
+	kvm_tdx->tsc_offset = td_tdcs_exec_read64(kvm_tdx, TD_TDCS_EXEC_TSC_OFFSET);
+	kvm_tdx->attributes = td_params->attributes;
+	kvm_tdx->xfam = td_params->xfam;
+	kvm_tdx->tsc_khz = TDX_TSC_25MHZ_TO_KHZ(td_params->tsc_frequency);
+	kvm->max_vcpus = td_params->max_vcpus;
+
+	if (td_params->exec_controls & TDX_EXEC_CONTROL_MAX_GPAW)
+		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(51));
+	else
+		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(47));
+
+out:
+	/* kfree() accepts NULL. */
+	kfree(init_vm);
+	kfree(td_params);
+	return ret;
+}
+
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_tdx_cmd tdx_cmd;
@@ -451,6 +654,9 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 	mutex_lock(&kvm->lock);
 
 	switch (tdx_cmd.id) {
+	case KVM_TDX_INIT_VM:
+		r = tdx_td_init(kvm, &tdx_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 8058b6b153f8..8a0793fcc3ab 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -20,7 +20,12 @@ struct kvm_tdx {
 	struct tdx_td_page tdr;
 	struct tdx_td_page *tdcs;
 
+	u64 attributes;
+	u64 xfam;
 	int hkid;
+
+	u64 tsc_offset;
+	unsigned long tsc_khz;
 };
 
 struct vcpu_tdx {
@@ -50,6 +55,11 @@ static inline struct vcpu_tdx *to_tdx(struct kvm_vcpu *vcpu)
 	return container_of(vcpu, struct vcpu_tdx, vcpu);
 }
 
+static inline bool is_td_initialized(struct kvm *kvm)
+{
+	return !!kvm->max_vcpus;
+}
+
 static __always_inline void tdvps_vmcs_check(u32 field, u8 bits)
 {
 	BUILD_BUG_ON_MSG(__builtin_constant_p(field) && (field) & 0x1,
@@ -135,6 +145,19 @@ TDX_BUILD_TDVPS_ACCESSORS(64, VMCS, vmcs);
 TDX_BUILD_TDVPS_ACCESSORS(64, STATE_NON_ARCH, state_non_arch);
 TDX_BUILD_TDVPS_ACCESSORS(8, MANAGEMENT, management);
 
+static __always_inline u64 td_tdcs_exec_read64(struct kvm_tdx *kvm_tdx, u32 field)
+{
+	struct tdx_module_output out;
+	u64 err;
+
+	err = tdh_mng_rd(kvm_tdx->tdr.pa, TDCS_EXEC(field), &out);
+	if (unlikely(err)) {
+		pr_err("TDH_MNG_RD[EXEC.0x%x] failed: 0x%llx\n", field, err);
+		return 0;
+	}
+	return out.r8;
+}
+
 #else
 static inline int tdx_module_setup(void) { return -ENODEV; };
 
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index a9ea3573be1b..779dfd683d66 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -531,6 +531,7 @@ struct kvm_pmu_event_filter {
 /* Trust Domain eXtension sub-ioctl() commands. */
 enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
+	KVM_TDX_INIT_VM,
 
 	KVM_TDX_CMD_NR_MAX,
 };
@@ -576,4 +577,36 @@ struct kvm_tdx_capabilities {
 	struct kvm_tdx_cpuid_config cpuid_configs[0];
 };
 
+struct kvm_tdx_init_vm {
+	__u64 attributes;
+	__u32 max_vcpus;
+	__u32 tsc_khz;
+	__u64 mrconfigid[6];    /* sha384 digest */
+	__u64 mrowner[6];       /* sha384 digest */
+	__u64 mrownerconfig[6]; /* sha384 digest */
+	union {
+		/*
+		 * KVM_TDX_INIT_VM is called before vcpu creation, thus before
+		 * KVM_SET_CPUID2.  CPUID configurations needs to be passed.
+		 *
+		 * This configuration supersedes KVM_SET_CPUID{,2}.
+		 * The user space VMM, e.g. qemu, should make them consistent
+		 * with this values.
+		 * sizeof(struct kvm_cpuid_entry2) * KVM_MAX_CPUID_ENTRIES(256)
+		 * = 8KB.
+		 */
+		struct {
+			struct kvm_cpuid2 cpuid;
+			/* 8KB with KVM_MAX_CPUID_ENTRIES. */
+			struct kvm_cpuid_entry2 entries[];
+		};
+		/*
+		 * For future extensibility.
+		 * The size(struct kvm_tdx_init_vm) = 16KB.
+		 * This should be enough given sizeof(TD_PARAMS) = 1024
+		 */
+		__u64 reserved[2028];
+	};
+};
+
 #endif /* _ASM_X86_KVM_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 026/102] KVM: TDX: Make pmu_intel.c ignore guest TD case
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (24 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 025/102] KVM: TDX: initialize VM with TDX specific parameters isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 027/102] [MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction isaku.yamahata
                   ` (77 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Because TDX KVM doesn't support PMU yet (it's future work to be done in
another patch series) and pmu_intel.c touches a VMX-specific structure during
vcpu initialization, as a workaround add a dummy structure to struct vcpu_tdx
so that pmu_intel.c can ignore the TDX case.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 39 +++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/pmu_intel.h | 28 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h       |  7 +++++++
 arch/x86/kvm/vmx/vmx.c       |  2 +-
 arch/x86/kvm/vmx/vmx.h       | 22 +-------------------
 5 files changed, 75 insertions(+), 23 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/pmu_intel.h

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 422f0a6562ac..f8e8f32b8a3f 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -17,6 +17,7 @@
 #include "lapic.h"
 #include "nested.h"
 #include "pmu.h"
+#include "tdx.h"
 
 #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
 
@@ -35,6 +36,26 @@ static struct kvm_event_hw_type_mapping intel_arch_events[] = {
 /* mapping between fixed pmc index and intel_arch_events array */
 static int fixed_pmc_events[] = {1, 0, 7};
 
+struct lbr_desc *vcpu_to_lbr_desc(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_INTEL_TDX_HOST
+	if (is_td_vcpu(vcpu))
+		return &to_tdx(vcpu)->lbr_desc;
+#endif
+
+	return &to_vmx(vcpu)->lbr_desc;
+}
+
+struct x86_pmu_lbr *vcpu_to_lbr_records(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_INTEL_TDX_HOST
+	if (is_td_vcpu(vcpu))
+		return &to_tdx(vcpu)->lbr_desc.records;
+#endif
+
+	return &to_vmx(vcpu)->lbr_desc.records;
+}
+
 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 {
 	struct kvm_pmc *pmc;
@@ -171,10 +192,20 @@ static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
 	return get_gp_pmc(pmu, msr, MSR_IA32_PMC0);
 }
 
+bool intel_pmu_lbr_is_compatible(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return false;
+	return cpuid_model_is_consistent(vcpu);
+}
+
 bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
 {
 	struct x86_pmu_lbr *lbr = vcpu_to_lbr_records(vcpu);
 
+	if (is_td_vcpu(vcpu))
+		return false;
+
 	return lbr->nr && (vcpu_get_perf_capabilities(vcpu) & PMU_CAP_LBR_FMT);
 }
 
@@ -294,6 +325,9 @@ int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu)
 					PERF_SAMPLE_BRANCH_USER,
 	};
 
+	if (WARN_ON(is_td_vcpu(vcpu)))
+		return 0;
+
 	if (unlikely(lbr_desc->event)) {
 		__set_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use);
 		return 0;
@@ -602,7 +636,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	nested_vmx_pmu_refresh(vcpu,
 			       intel_is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL, false));
 
-	if (cpuid_model_is_consistent(vcpu))
+	if (intel_pmu_lbr_is_compatible(vcpu))
 		x86_perf_get_lbr(&lbr_desc->records);
 	else
 		lbr_desc->records.nr = 0;
@@ -661,6 +695,9 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmc *pmc = NULL;
 	int i;
 
+	if (is_td_vcpu(vcpu))
+		return;
+
 	for (i = 0; i < INTEL_PMC_MAX_GENERIC; i++) {
 		pmc = &pmu->gp_counters[i];
 
diff --git a/arch/x86/kvm/vmx/pmu_intel.h b/arch/x86/kvm/vmx/pmu_intel.h
new file mode 100644
index 000000000000..66bba47c1269
--- /dev/null
+++ b/arch/x86/kvm/vmx/pmu_intel.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_X86_VMX_PMU_INTEL_H
+#define  __KVM_X86_VMX_PMU_INTEL_H
+
+struct lbr_desc *vcpu_to_lbr_desc(struct kvm_vcpu *vcpu);
+struct x86_pmu_lbr *vcpu_to_lbr_records(struct kvm_vcpu *vcpu);
+
+bool intel_pmu_lbr_is_compatible(struct kvm_vcpu *vcpu);
+bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu);
+int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
+
+struct lbr_desc {
+	/* Basic info about guest LBR records. */
+	struct x86_pmu_lbr records;
+
+	/*
+	 * Emulate LBR feature via passthrough LBR registers when the
+	 * per-vcpu guest LBR event is scheduled on the current pcpu.
+	 *
+	 * The records may be inaccurate if the host reclaims the LBR.
+	 */
+	struct perf_event *event;
+
+	/* True if LBRs are marked as not intercepted in the MSR bitmap */
+	bool msr_passthrough;
+};
+
+#endif /* __KVM_X86_VMX_PMU_INTEL_H */
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 8a0793fcc3ab..892e7dc96e99 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -4,6 +4,7 @@
 
 #ifdef CONFIG_INTEL_TDX_HOST
 
+#include "pmu_intel.h"
 #include "tdx_ops.h"
 
 int tdx_module_setup(void);
@@ -33,6 +34,12 @@ struct vcpu_tdx {
 
 	struct tdx_td_page tdvpr;
 	struct tdx_td_page *tdvpx;
+
+	/*
+	 * Dummy to make pmu_intel not corrupt memory.
+	 * TODO: Support PMU for TDX.  Future work.
+	 */
+	struct lbr_desc lbr_desc;
 };
 
 static inline bool is_td(struct kvm *kvm)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b30d73d28e75..1d87885245cc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2246,7 +2246,7 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			if ((data & PMU_CAP_LBR_FMT) !=
 			    (vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT))
 				return 1;
-			if (!cpuid_model_is_consistent(vcpu))
+			if (!intel_pmu_lbr_is_compatible(vcpu))
 				return 1;
 		}
 		if (data & PERF_CAP_PEBS_FORMAT) {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 71bcb486e73f..9feb994e5ea2 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -10,6 +10,7 @@
 #include "capabilities.h"
 #include "kvm_cache_regs.h"
 #include "posted_intr.h"
+#include "pmu_intel.h"
 #include "vmcs.h"
 #include "vmx_ops.h"
 #include "cpuid.h"
@@ -91,31 +92,10 @@ union vmx_exit_reason {
 	u32 full;
 };
 
-#define vcpu_to_lbr_desc(vcpu) (&to_vmx(vcpu)->lbr_desc)
-#define vcpu_to_lbr_records(vcpu) (&to_vmx(vcpu)->lbr_desc.records)
-
 void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
-bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu);
 
-int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
 void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu);
 
-struct lbr_desc {
-	/* Basic info about guest LBR records. */
-	struct x86_pmu_lbr records;
-
-	/*
-	 * Emulate LBR feature via passthrough LBR registers when the
-	 * per-vcpu guest LBR event is scheduled on the current pcpu.
-	 *
-	 * The records may be inaccurate if the host reclaims the LBR.
-	 */
-	struct perf_event *event;
-
-	/* True if LBRs are marked as not intercepted in the MSR bitmap */
-	bool msr_passthrough;
-};
-
 /*
  * The nested_vmx structure is part of vcpu_vmx, and holds information we need
  * for correct emulation of VMX (i.e., nested VMX) on this vcpu.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 027/102] [MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (25 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 026/102] KVM: TDX: Make pmu_intel.c ignore guest TD case isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 028/102] KVM: TDX: allocate/free TDX vcpu structure isaku.yamahata
                   ` (76 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of TD vcpu
creation/destruction.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index 5e0deaebf843..3e8efde3e3f3 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -9,15 +9,15 @@ Layer status
 What qemu can do
 ----------------
 - TDX VM TYPE is exposed to Qemu.
-- Qemu can try to create VM of TDX VM type and then fails.
+- Qemu can create/destroy guest of TDX vm type.
 
 Patch Layer status
 ------------------
   Patch layer                          Status
 * TDX, VMX coexistence:                 Applied
 * TDX architectural definitions:        Applied
-* TD VM creation/destruction:           Applying
-* TD vcpu creation/destruction:         Not yet
+* TD VM creation/destruction:           Applied
+* TD vcpu creation/destruction:         Applying
 * TDX EPT violation:                    Not yet
 * TD finalization:                      Not yet
 * TD vcpu enter/exit:                   Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 028/102] KVM: TDX: allocate/free TDX vcpu structure
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (26 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 027/102] [MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-08-02 19:56   ` Sean Christopherson
  2022-06-27 21:53 ` [PATCH v7 029/102] " isaku.yamahata
                   ` (75 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

The next step of TDX guest creation is to create vcpus.  Allocate the TDX
vcpu structures and initialize them.  Allocate the TDX vcpu pages for the TDX
module.

In the conventional case, the CPUID configuration is empty at vcpu
initialization and is configured after vcpu initialization.  Because TDX
supports only X2APIC mode, the CPUID configuration is forcibly initialized to
advertise X2APIC at vcpu initialization.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 135 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 135 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 1273b60a1a00..d9fe3f6463c3 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -6,6 +6,7 @@
 #include "capabilities.h"
 #include "x86_ops.h"
 #include "tdx.h"
+#include "x86.h"
 
 #undef pr_fmt
 #define pr_fmt(fmt) "tdx: " fmt
@@ -61,6 +62,11 @@ static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
 	return pa;
 }
 
+static inline bool is_td_vcpu_created(struct vcpu_tdx *tdx)
+{
+	return tdx->tdvpr.added;
+}
+
 static inline bool is_td_created(struct kvm_tdx *kvm_tdx)
 {
 	return kvm_tdx->tdr.added;
@@ -392,6 +398,135 @@ int tdx_vm_init(struct kvm *kvm)
 	return ret;
 }
 
+int tdx_vcpu_create(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	int ret, i;
+
+	/* TDX only supports x2APIC, which requires an in-kernel local APIC. */
+	if (!vcpu->arch.apic)
+		return -EINVAL;
+
+	fpstate_set_confidential(&vcpu->arch.guest_fpu);
+
+	ret = tdx_alloc_td_page(&tdx->tdvpr);
+	if (ret)
+		return ret;
+
+	tdx->tdvpx = kcalloc(tdx_caps.tdvpx_nr_pages, sizeof(*tdx->tdvpx),
+			GFP_KERNEL_ACCOUNT);
+	if (!tdx->tdvpx) {
+		ret = -ENOMEM;
+		goto free_tdvpr;
+	}
+	for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++) {
+		ret = tdx_alloc_td_page(&tdx->tdvpx[i]);
+		if (ret)
+			goto free_tdvpx;
+	}
+
+	vcpu->arch.efer = EFER_SCE | EFER_LME | EFER_LMA | EFER_NX;
+
+	vcpu->arch.cr0_guest_owned_bits = -1ul;
+	vcpu->arch.cr4_guest_owned_bits = -1ul;
+
+	vcpu->arch.tsc_offset = to_kvm_tdx(vcpu->kvm)->tsc_offset;
+	vcpu->arch.l1_tsc_offset = vcpu->arch.tsc_offset;
+	vcpu->arch.guest_state_protected =
+		!(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
+
+	return 0;
+
+free_tdvpx:
+	/* @i points at the TDVPX page that failed allocation. */
+	for (--i; i >= 0; i--)
+		free_page(tdx->tdvpx[i].va);
+	kfree(tdx->tdvpx);
+free_tdvpr:
+	free_page(tdx->tdvpr.va);
+
+	return ret;
+}
+
+void tdx_vcpu_free(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	int i;
+
+	/* Can't reclaim or free pages if teardown failed. */
+	if (is_hkid_assigned(to_kvm_tdx(vcpu->kvm)))
+		return;
+
+	for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++)
+		tdx_reclaim_td_page(&tdx->tdvpx[i]);
+	kfree(tdx->tdvpx);
+	tdx_reclaim_td_page(&tdx->tdvpr);
+}
+
+void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	struct msr_data apic_base_msr;
+	u64 err;
+	int i;
+
+	/* TDX doesn't support INIT event. */
+	if (WARN_ON(init_event))
+		goto td_bugged;
+	if (WARN_ON(is_td_vcpu_created(tdx)))
+		goto td_bugged;
+
+	err = tdh_vp_create(kvm_tdx->tdr.pa, tdx->tdvpr.pa);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_VP_CREATE, err, NULL);
+		goto td_bugged;
+	}
+	tdx_mark_td_page_added(&tdx->tdvpr);
+
+	for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++) {
+		err = tdh_vp_addcx(tdx->tdvpr.pa, tdx->tdvpx[i].pa);
+		if (WARN_ON_ONCE(err)) {
+			pr_tdx_error(TDH_VP_ADDCX, err, NULL);
+			goto td_bugged;
+		}
+		tdx_mark_td_page_added(&tdx->tdvpx[i]);
+	}
+
+	if (!vcpu->arch.cpuid_entries) {
+		/*
+		 * On cpu creation, cpuid entry is blank.  Forcibly enable
+		 * X2APIC feature to allow X2APIC.
+		 */
+		struct kvm_cpuid_entry2 *e;
+
+		e = kvmalloc_array(1, sizeof(*e), GFP_KERNEL_ACCOUNT);
+		*e  = (struct kvm_cpuid_entry2) {
+			.function = 1,	/* Features for X2APIC */
+			.index = 0,
+			.eax = 0,
+			.ebx = 0,
+			.ecx = 1ULL << 21,	/* X2APIC */
+			.edx = 0,
+		};
+		vcpu->arch.cpuid_entries = e;
+		vcpu->arch.cpuid_nent = 1;
+	}
+	apic_base_msr.data = APIC_DEFAULT_PHYS_BASE | LAPIC_MODE_X2APIC;
+	if (kvm_vcpu_is_reset_bsp(vcpu))
+		apic_base_msr.data |= MSR_IA32_APICBASE_BSP;
+	apic_base_msr.host_initiated = true;
+	if (WARN_ON(kvm_set_apic_base(vcpu, &apic_base_msr)))
+		goto td_bugged;
+
+	vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+
+	return;
+
+td_bugged:
+	vcpu->kvm->vm_bugged = true;
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 029/102] KVM: TDX: allocate/free TDX vcpu structure
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (27 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 028/102] KVM: TDX: allocate/free TDX vcpu structure isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-28 11:34   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization isaku.yamahata
                   ` (74 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

The next step of TDX guest creation is to create vcpus.  Allocate the TDX
vcpu structures and initialize them.  Allocate the TDX vcpu pages for the TDX
module.

In the conventional case, the CPUID configuration is empty at vcpu
initialization and is configured after vcpu initialization.  Because TDX
supports only X2APIC mode, the CPUID configuration is forcibly initialized to
advertise X2APIC at vcpu initialization.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 40 ++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/x86_ops.h |  8 ++++++++
 2 files changed, 44 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 067f5de56c53..4f4ed4ad65a7 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -73,6 +73,38 @@ static void vt_vm_free(struct kvm *kvm)
 		return tdx_vm_free(kvm);
 }
 
+static int vt_vcpu_precreate(struct kvm *kvm)
+{
+	if (is_td(kvm))
+		return 0;
+
+	return vmx_vcpu_precreate(kvm);
+}
+
+static int vt_vcpu_create(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_create(vcpu);
+
+	return vmx_vcpu_create(vcpu);
+}
+
+static void vt_vcpu_free(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_free(vcpu);
+
+	return vmx_vcpu_free(vcpu);
+}
+
+static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_reset(vcpu, init_event);
+
+	return vmx_vcpu_reset(vcpu, init_event);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -98,10 +130,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vm_destroy = vt_vm_destroy,
 	.vm_free = vt_vm_free,
 
-	.vcpu_precreate = vmx_vcpu_precreate,
-	.vcpu_create = vmx_vcpu_create,
-	.vcpu_free = vmx_vcpu_free,
-	.vcpu_reset = vmx_vcpu_reset,
+	.vcpu_precreate = vt_vcpu_precreate,
+	.vcpu_create = vt_vcpu_create,
+	.vcpu_free = vt_vcpu_free,
+	.vcpu_reset = vt_vcpu_reset,
 
 	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
 	.vcpu_load = vmx_vcpu_load,
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index ef6115ae0e88..42b634971544 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -138,6 +138,10 @@ int tdx_vm_init(struct kvm *kvm);
 void tdx_mmu_release_hkid(struct kvm *kvm);
 void tdx_vm_free(struct kvm *kvm);
 
+int tdx_vcpu_create(struct kvm_vcpu *vcpu);
+void tdx_vcpu_free(struct kvm_vcpu *vcpu);
+void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
@@ -150,6 +154,10 @@ static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
 static inline void tdx_flush_shadow_all_private(struct kvm *kvm) {}
 static inline void tdx_vm_free(struct kvm *kvm) {}
 
+static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
+static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
+
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (28 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 029/102] " isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-08  2:14   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 031/102] [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits isaku.yamahata
                   ` (73 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

A TD guest vcpu needs to be configured before it is ready to run, which
requires additional information from the device model (e.g. qemu); one 64-bit
value is passed to the vcpu's RCX as an initial value.  Repurpose
KVM_MEMORY_ENCRYPT_OP to the vcpu scope and add a new subcommand,
KVM_TDX_INIT_VCPU, under it for this additional vcpu configuration.

Add a callback for vCPU-scoped KVM_MEMORY_ENCRYPT_OP operations and add a new
subcommand, KVM_TDX_INIT_VCPU, for further vcpu initialization.
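
A minimal userspace sketch (illustrative only; tdx_init_vcpu() is a made-up
helper, and struct kvm_tdx_cmd comes from this series' uapi header) of the
vCPU-scoped call would look like:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>      /* KVM_MEMORY_ENCRYPT_OP, struct kvm_tdx_cmd */

  /* Per-vcpu TDX initialization; the 64-bit payload becomes the initial RCX. */
  static int tdx_init_vcpu(int vcpu_fd, uint64_t initial_rcx)
  {
          struct kvm_tdx_cmd cmd = {
                  .id   = KVM_TDX_INIT_VCPU,
                  .data = initial_rcx,
          };

          return ioctl(vcpu_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
  }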

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h    |  1 +
 arch/x86/include/asm/kvm_host.h       |  1 +
 arch/x86/include/uapi/asm/kvm.h       |  1 +
 arch/x86/kvm/vmx/main.c               |  9 +++++++
 arch/x86/kvm/vmx/tdx.c                | 36 +++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h                |  4 +++
 arch/x86/kvm/vmx/x86_ops.h            |  2 ++
 arch/x86/kvm/x86.c                    |  6 +++++
 tools/arch/x86/include/uapi/asm/kvm.h |  1 +
 9 files changed, 61 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 3677a5015a4f..32a6df784ea6 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -119,6 +119,7 @@ KVM_X86_OP(leave_smm)
 KVM_X86_OP(enable_smi_window)
 KVM_X86_OP_OPTIONAL(dev_mem_enc_ioctl)
 KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
+KVM_X86_OP_OPTIONAL(vcpu_mem_enc_ioctl)
 KVM_X86_OP_OPTIONAL(mem_enc_register_region)
 KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
 KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 81638987cdb9..e5d4e5b60fdc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1595,6 +1595,7 @@ struct kvm_x86_ops {
 
 	int (*dev_mem_enc_ioctl)(void __user *argp);
 	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
+	int (*vcpu_mem_enc_ioctl)(struct kvm_vcpu *vcpu, void __user *argp);
 	int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index f89774ccd4ae..399c28b2f4f5 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -538,6 +538,7 @@ struct kvm_pmu_event_filter {
 enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
+	KVM_TDX_INIT_VCPU,
 
 	KVM_TDX_CMD_NR_MAX,
 };
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 4f4ed4ad65a7..ce12cc8276ef 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -113,6 +113,14 @@ static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 	return tdx_vm_ioctl(kvm, argp);
 }
 
+static int vt_vcpu_mem_enc_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
+{
+	if (!is_td_vcpu(vcpu))
+		return -EINVAL;
+
+	return tdx_vcpu_ioctl(vcpu, argp);
+}
+
 struct kvm_x86_ops vt_x86_ops __initdata = {
 	.name = "kvm_intel",
 
@@ -255,6 +263,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.dev_mem_enc_ioctl = tdx_dev_ioctl,
 	.mem_enc_ioctl = vt_mem_enc_ioctl,
+	.vcpu_mem_enc_ioctl = vt_vcpu_mem_enc_ioctl,
 };
 
 struct kvm_x86_init_ops vt_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d9fe3f6463c3..2772775457b0 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -83,6 +83,11 @@ static inline bool is_hkid_assigned(struct kvm_tdx *kvm_tdx)
 	return kvm_tdx->hkid > 0;
 }
 
+static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
+{
+	return kvm_tdx->finalized;
+}
+
 static void tdx_clear_page(unsigned long page)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
@@ -805,6 +810,37 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 	return r;
 }
 
+int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	struct kvm_tdx_cmd cmd;
+	u64 err;
+
+	if (tdx->initialized)
+		return -EINVAL;
+
+	if (!is_td_initialized(vcpu->kvm) || is_td_finalized(kvm_tdx))
+		return -EINVAL;
+
+	if (copy_from_user(&cmd, argp, sizeof(cmd)))
+		return -EFAULT;
+
+	if (cmd.error || cmd.unused)
+		return -EINVAL;
+	if (cmd.flags || cmd.id != KVM_TDX_INIT_VCPU)
+		return -EINVAL;
+
+	err = tdh_vp_init(tdx->tdvpr.pa, cmd.data);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_VP_INIT, err, NULL);
+		return -EIO;
+	}
+
+	tdx->initialized = true;
+	return 0;
+}
+
 int __init tdx_module_setup(void)
 {
 	const struct tdsysinfo_struct *tdsysinfo;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 892e7dc96e99..337c3adb4fcf 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -25,6 +25,8 @@ struct kvm_tdx {
 	u64 xfam;
 	int hkid;
 
+	bool finalized;
+
 	u64 tsc_offset;
 	unsigned long tsc_khz;
 };
@@ -35,6 +37,8 @@ struct vcpu_tdx {
 	struct tdx_td_page tdvpr;
 	struct tdx_td_page *tdvpx;
 
+	bool initialized;
+
 	/*
 	 * Dummy to make pmu_intel not corrupt memory.
 	 * TODO: Support PMU for TDX.  Future work.
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 42b634971544..7e38c7b756d4 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -143,6 +143,7 @@ void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
+int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
@@ -159,6 +160,7 @@ static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
+static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6037ce93bcb7..4309ef0ade21 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5778,6 +5778,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	case KVM_SET_DEVICE_ATTR:
 		r = kvm_vcpu_ioctl_device_attr(vcpu, ioctl, argp);
 		break;
+	case KVM_MEMORY_ENCRYPT_OP:
+		r = -ENOTTY;
+		if (!kvm_x86_ops.vcpu_mem_enc_ioctl)
+			goto out;
+		r = kvm_x86_ops.vcpu_mem_enc_ioctl(vcpu, argp);
+		break;
 	default:
 		r = -EINVAL;
 	}
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index 779dfd683d66..60a79f9ef174 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -532,6 +532,7 @@ struct kvm_pmu_event_filter {
 enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
+	KVM_TDX_INIT_VCPU,
 
 	KVM_TDX_CMD_NR_MAX,
 };
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 031/102] [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (29 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 032/102] KVM: x86/mmu: introduce config for PRIVATE KVM MMU isaku.yamahata
                   ` (72 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of KVM MMU GPA
shared bits.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index 3e8efde3e3f3..6e3f71ab6b59 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -10,6 +10,7 @@ What qemu can do
 ----------------
 - TDX VM TYPE is exposed to Qemu.
 - Qemu can create/destroy guest of TDX vm type.
+- Qemu can create/destroy vcpu of TDX vm type.
 
 Patch Layer status
 ------------------
@@ -17,13 +18,13 @@ Patch Layer status
 * TDX, VMX coexistence:                 Applied
 * TDX architectural definitions:        Applied
 * TD VM creation/destruction:           Applied
-* TD vcpu creation/destruction:         Applying
+* TD vcpu creation/destruction:         Applied
 * TDX EPT violation:                    Not yet
 * TD finalization:                      Not yet
 * TD vcpu enter/exit:                   Not yet
 * TD vcpu interrupts/exit/hypercall:    Not yet
 
-* KVM MMU GPA shared bits:              Not yet
+* KVM MMU GPA shared bits:              Applying
 * KVM TDP refactoring for TDX:          Not yet
 * KVM TDP MMU hooks:                    Not yet
 * KVM TDP MMU MapGPA:                   Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 032/102] KVM: x86/mmu: introduce config for PRIVATE KVM MMU
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (30 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 031/102] [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-08  1:53   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits isaku.yamahata
                   ` (71 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

To keep the non-TDX case intact, introduce a new config option for private
KVM MMU support.  At the moment, it is a synonym for CONFIG_INTEL_TDX_HOST &&
CONFIG_KVM_INTEL.  The new flag makes it clear that the config is only for
the x86 KVM MMU.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index e3cbd7706136..5a59abc83179 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -129,4 +129,8 @@ config KVM_XEN
 config KVM_EXTERNAL_WRITE_TRACKING
 	bool
 
+config KVM_MMU_PRIVATE
+	def_bool y
+	depends on INTEL_TDX_HOST && KVM_INTEL
+
 endif # VIRTUALIZATION
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (31 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 032/102] KVM: x86/mmu: introduce config for PRIVATE KVM MMU isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-08  2:15   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 034/102] [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX isaku.yamahata
                   ` (70 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Rick Edgecombe

From: Rick Edgecombe <rick.p.edgecombe@intel.com>

TDX repurposes one GPA bit (bit 51 or bit 47, depending on configuration) to
indicate whether the GPA is private (if cleared) or shared (if set) with the
VMM.  If the GPA.shared bit is set, the GPA is translated by the existing
conventional EPT pointed to by the EPTP.  If the GPA.shared bit is cleared,
the GPA is translated by the Secure-EPT (S-EPT) that the TDX module manages.
The VMM has to issue SEAMCALLs to the TDX module to operate on the S-EPT,
e.g. populating/zapping a guest page with TDH.PAGE.{ADD, REMOVE} or an S-EPT
page with TDH.PAGE.SEPT.{ADD, REMOVE}.

Several hooks need to be added to the KVM MMU to support TDX.  Add a function
to check whether the KVM MMU is running for TDX and several functions for
address conversion between private GPAs and shared GPAs.
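
A worked example of the conversion (a standalone sketch, not kernel code; the
mask assumes the 52-bit GPAW case where the shared bit is GPA bit 51, i.e.
GFN bit 39):

  #include <stdint.h>
  #include <stdio.h>

  #define GPA_SHARED_BIT  51                            /* 47 for 48-bit GPAW */
  #define GFN_SHARED_MASK (1ULL << (GPA_SHARED_BIT - 12))

  int main(void)
  {
          uint64_t gfn     = 0x12345;                    /* private GFN */
          uint64_t shared  = gfn | GFN_SHARED_MASK;      /* kvm_gfn_shared()  */
          uint64_t private = shared & ~GFN_SHARED_MASK;  /* kvm_gfn_private() */

          printf("shared 0x%llx -> private 0x%llx\n",
                 (unsigned long long)shared, (unsigned long long)private);
          return private == gfn ? 0 : 1;
  }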

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.h              | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e5d4e5b60fdc..2c47aab72a1b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1339,7 +1339,9 @@ struct kvm_arch {
 	 */
 	u32 max_vcpu_ids;
 
+#ifdef CONFIG_KVM_MMU_PRIVATE
 	gfn_t gfn_shared_mask;
+#endif
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index f8192864b496..ccf0ba7a6387 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -286,4 +286,36 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
 		return gpa;
 	return translate_nested_gpa(vcpu, gpa, access, exception);
 }
+
+static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
+{
+#ifdef CONFIG_KVM_MMU_PRIVATE
+	return kvm->arch.gfn_shared_mask;
+#else
+	return 0;
+#endif
+}
+
+static inline gfn_t kvm_gfn_shared(const struct kvm *kvm, gfn_t gfn)
+{
+	return gfn | kvm_gfn_shared_mask(kvm);
+}
+
+static inline gfn_t kvm_gfn_private(const struct kvm *kvm, gfn_t gfn)
+{
+	return gfn & ~kvm_gfn_shared_mask(kvm);
+}
+
+static inline gpa_t kvm_gpa_private(const struct kvm *kvm, gpa_t gpa)
+{
+	return gpa & ~gfn_to_gpa(kvm_gfn_shared_mask(kvm));
+}
+
+static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
+{
+	gfn_t mask = kvm_gfn_shared_mask(kvm);
+
+	return mask && !(gpa_to_gfn(gpa) & mask);
+}
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 034/102] [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (32 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault isaku.yamahata
                   ` (69 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of KVM TDP
refactoring for TDX.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index 6e3f71ab6b59..df003d2ed89e 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -24,7 +24,7 @@ Patch Layer status
 * TD vcpu enter/exit:                   Not yet
 * TD vcpu interrupts/exit/hypercall:    Not yet
 
-* KVM MMU GPA shared bits:              Applying
-* KVM TDP refactoring for TDX:          Not yet
+* KVM MMU GPA shared bits:              Applied
+* KVM TDP refactoring for TDX:          Applying
 * KVM TDP MMU hooks:                    Not yet
 * KVM TDP MMU MapGPA:                   Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (33 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 034/102] [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-30 11:37   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE isaku.yamahata
                   ` (68 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Explicitly check for an MMIO spte in the fast page fault flow.  TDX will
use a not-present entry for MMIO sptes, which can be mistaken for an
access-tracked spte since both have SPTE_SPECIAL_MASK set.

MMIO sptes are handled in handle_mmio_page_fault for non-TDX VMs, so this
patch does not affect them.  TDX will handle MMIO emulation through a
hypercall instead.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 17252f39bd7c..51306b80f47c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3163,7 +3163,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		else
 			sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
 
-		if (!is_shadow_present_pte(spte))
+		if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
 			break;
 
 		sp = sptep_to_sp(sptep);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (34 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-30 11:03   ` Kai Huang
                     ` (2 more replies)
  2022-06-27 21:53 ` [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis isaku.yamahata
                   ` (67 subsequent siblings)
  103 siblings, 3 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

TDX introduces a new EPT, the Secure-EPT, in addition to the existing EPT.
The Secure-EPT maps protected guest memory, which is called private.  Since
the Secure-EPT page tables are also protected, those page tables are also
called private.  The existing EPT is often called the shared EPT to
distinguish it from the Secure-EPT, and the page tables for the shared EPT
are likewise called shared.

Virtualization Exception (#VE) is a new processor exception in VMX non-root
operation.  In certain virtualization-related conditions, #VE is injected
into the guest instead of exiting from the guest to the VMM, so that the
guest is given a chance to inspect it.  One important case is an EPT
violation: when the "EPT-violation #VE" VM-execution control is set and the
"suppress #VE" bit in the EPT entry is cleared, #VE is injected instead of an
EPT violation.

Because guest memory is protected with TDX, the VMM can't parse instructions
in guest memory.  Instead, an MMIO hypercall is used for the guest to pass
the necessary information to the VMM.

To make unmodified device drivers work, a guest TD expects #VE on accesses to
shared GPAs.  The #VE handler converts the MMIO access into an MMIO
hypercall, which requires an EPT entry with #VE enabled, i.e. with the
"suppress #VE" bit cleared.  Before the VMM enables #VE, it needs to figure
out via an EPT violation that the given GPA is for MMIO.  So the execution
flow looks like:

- Allocate unused shared EPT entry with suppress #VE bit set.
- EPT violation on that GPA.
- VMM figures out the faulted GPA is for MMIO.
- VMM clears the suppress #VE bit.
- Guest TD gets #VE, and converts MMIO access into MMIO hypercall.
- If the GPA maps guest memory, VMM resolves it with guest pages.

In both cases, the SPTE needs the "suppress #VE" bit set initially when it is
allocated or zapped, therefore a non-zero non-present value for the SPTE
needs to be allowed.

This change requires updating FNAME(sync_page) for the shadow EPT.
"if (!sp->spt[i])" in FNAME(sync_page) means that the spte entry holds the
initial value.  With the introduction of shadow_nonpresent_value, which can
be non-zero, that no longer holds.  Replace the zero check with
"!is_shadow_present_pte() && !is_mmio_spte()".

When "if (!spt[i])" doesn't hold, but the entry value is
shadow_nonpresent_value, the entry is wrongly synchronized from non-present
to non-present with (wrongly) pfn changed and tries to remove rmap wrongly
and BUG_ON() is hit.

The TDP MMU uses REMOVED_SPTE = 0x5a0ULL as a special constant, an
intermediate value indicating that one thread is operating on the entry; the
value is semi-arbitrary.  For TDX (more precisely, to use #VE), the value
should include the suppress #VE bit, which is SHADOW_NONPRESENT_VALUE.
Rename REMOVED_SPTE to __REMOVED_SPTE and define REMOVED_SPTE as
SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE to keep the suppress #VE bit set.
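
To make the relationship between the constants concrete, a sketch of the
intent (values taken from this commit message; the exact definitions live in
spte.h and the TDP MMU code):

  /*
   * Bit 63 of an EPT entry is "suppress #VE"; keep it set in every
   * non-present SPTE, including the transient "removed" marker used by
   * the TDP MMU.
   */
  #define SHADOW_NONPRESENT_VALUE  (1ULL << 63)
  #define __REMOVED_SPTE           0x5a0ULL
  #define REMOVED_SPTE             (SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE)

  _Static_assert(REMOVED_SPTE & SHADOW_NONPRESENT_VALUE,
                 "a removed SPTE must keep the suppress #VE bit set");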

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         | 55 ++++++++++++++++++++++++++++++----
 arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
 arch/x86/kvm/mmu/spte.c        |  5 +++-
 arch/x86/kvm/mmu/spte.h        | 37 ++++++++++++++++++++---
 arch/x86/kvm/mmu/tdp_mmu.c     | 23 +++++++++-----
 5 files changed, 105 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 51306b80f47c..f239b6cb5d53 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -668,6 +668,44 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 	}
 }
 
+static inline void kvm_init_shadow_page(void *page)
+{
+#ifdef CONFIG_X86_64
+	int ign;
+
+	WARN_ON_ONCE(shadow_nonpresent_value != SHADOW_NONPRESENT_VALUE);
+	asm volatile (
+		"rep stosq\n\t"
+		: "=c"(ign), "=D"(page)
+		: "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
+		: "memory"
+	);
+#else
+	BUG();
+#endif
+}
+
+static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
+	int start, end, i, r;
+	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
+
+	if (is_tdp_mmu && shadow_nonpresent_value)
+		start = kvm_mmu_memory_cache_nr_free_objects(mc);
+
+	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
+	if (r)
+		return r;
+
+	if (is_tdp_mmu && shadow_nonpresent_value) {
+		end = kvm_mmu_memory_cache_nr_free_objects(mc);
+		for (i = start; i < end; i++)
+			kvm_init_shadow_page(mc->objects[i]);
+	}
+	return 0;
+}
+
 static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
 	int r;
@@ -677,8 +715,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
-				       PT64_ROOT_MAX_LEVEL);
+	r = mmu_topup_shadow_page_cache(vcpu);
 	if (r)
 		return r;
 	if (maybe_indirect) {
@@ -5521,9 +5558,16 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 	 * what is used by the kernel for any given HVA, i.e. the kernel's
 	 * capabilities are ultimately consulted by kvm_mmu_hugepage_adjust().
 	 */
-	if (tdp_enabled)
+	if (tdp_enabled) {
+		/*
+		 * For TDP MMU, always set bit 63 for TDX support. See the
+		 * comment on SHADOW_NONPRESENT_VALUE.
+		 */
+#ifdef CONFIG_X86_64
+		shadow_nonpresent_value = SHADOW_NONPRESENT_VALUE;
+#endif
 		max_huge_page_level = tdp_huge_page_level;
-	else if (boot_cpu_has(X86_FEATURE_GBPAGES))
+	} else if (boot_cpu_has(X86_FEATURE_GBPAGES))
 		max_huge_page_level = PG_LEVEL_1G;
 	else
 		max_huge_page_level = PG_LEVEL_2M;
@@ -5654,7 +5698,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
 
-	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	if (!(is_tdp_mmu_enabled(vcpu->kvm) && shadow_nonpresent_value))
+		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
 
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fe35d8fd3276..ee2fb0c073f3 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1031,7 +1031,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		gpa_t pte_gpa;
 		gfn_t gfn;
 
-		if (!sp->spt[i])
+		if (!is_shadow_present_pte(sp->spt[i]) &&
+		    !is_mmio_spte(sp->spt[i]))
 			continue;
 
 		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cda1851ec155..bd441458153f 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -36,6 +36,9 @@ u64 __read_mostly shadow_present_mask;
 u64 __read_mostly shadow_me_value;
 u64 __read_mostly shadow_me_mask;
 u64 __read_mostly shadow_acc_track_mask;
+#ifdef CONFIG_X86_64
+u64 __read_mostly shadow_nonpresent_value;
+#endif
 
 u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
@@ -360,7 +363,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 	 * not set any RWX bits.
 	 */
 	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
-	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
+	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
 		mmio_value = 0;
 
 	if (!mmio_value)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 0127bb6e3c7d..1bfedbe0585f 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -140,6 +140,19 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
 
 #define MMIO_SPTE_GEN_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
 
+/*
+ * non-present SPTE value for both VMX and SVM for TDP MMU.
+ * For SVM NPT, for non-present spte (bit 0 = 0), other bits are ignored.
+ * For VMX EPT, bit 63 is ignored if #VE is disabled.
+ *              bit 63 is #VE suppress if #VE is enabled.
+ */
+#ifdef CONFIG_X86_64
+#define SHADOW_NONPRESENT_VALUE	BIT_ULL(63)
+static_assert(!(SHADOW_NONPRESENT_VALUE & SPTE_MMU_PRESENT_MASK));
+#else
+#define SHADOW_NONPRESENT_VALUE	0ULL
+#endif
+
 extern u64 __read_mostly shadow_host_writable_mask;
 extern u64 __read_mostly shadow_mmu_writable_mask;
 extern u64 __read_mostly shadow_nx_mask;
@@ -154,6 +167,12 @@ extern u64 __read_mostly shadow_present_mask;
 extern u64 __read_mostly shadow_me_value;
 extern u64 __read_mostly shadow_me_mask;
 
+#ifdef CONFIG_X86_64
+extern u64 __read_mostly shadow_nonpresent_value;
+#else
+#define shadow_nonpresent_value	0ULL
+#endif
+
 /*
  * SPTEs in MMUs without A/D bits are marked with SPTE_TDP_AD_DISABLED_MASK;
  * shadow_acc_track_mask is the set of bits to be cleared in non-accessed
@@ -174,9 +193,12 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 
 /*
  * If a thread running without exclusive control of the MMU lock must perform a
- * multi-part operation on an SPTE, it can set the SPTE to REMOVED_SPTE as a
+ * multi-part operation on an SPTE, it can set the SPTE to __REMOVED_SPTE as a
  * non-present intermediate value. Other threads which encounter this value
- * should not modify the SPTE.
+ * should not modify the SPTE.  When TDX is enabled, the value additionally
+ * includes SHADOW_NONPRESENT_VALUE, i.e. the "suppress #VE" bit, because the
+ * TDX module always enables "EPT violation #VE".  The bit is ignored in the
+ * non-TDX case as the present bit (bit 0) is cleared.
  *
  * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
  * bot AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
@@ -184,10 +206,17 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
  *
  * Only used by the TDP MMU.
  */
-#define REMOVED_SPTE	0x5a0ULL
+#define __REMOVED_SPTE	0x5a0ULL
 
 /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
-static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
+static_assert(!(__REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
+static_assert(!(__REMOVED_SPTE & SHADOW_NONPRESENT_VALUE));
+
+/*
+ * See above comment around __REMOVED_SPTE.  REMOVED_SPTE is the actual
+ * intermediate value set to the removed SPTE.  It sets the "suppress #VE" bit.
+ */
+#define REMOVED_SPTE	(SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE)
 
 static inline bool is_removed_spte(u64 spte)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7b9265d67131..2ca03ec3bf52 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -692,8 +692,16 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	 * overwrite the special removed SPTE value. No bookkeeping is needed
 	 * here since the SPTE is going from non-present to non-present.  Use
 	 * the raw write helper to avoid an unnecessary check on volatile bits.
+	 *
+	 * Set non-present value to SHADOW_NONPRESENT_VALUE, rather than 0.
+	 * It is because when TDX is enabled, TDX module always
+	 * enables "EPT-violation #VE", so KVM needs to set
+	 * "suppress #VE" bit in EPT table entries, in order to get
+	 * real EPT violation, rather than TDVMCALL.  KVM sets
+	 * SHADOW_NONPRESENT_VALUE (which sets "suppress #VE" bit) so it
+	 * can be set when EPT table entries are zapped.
 	 */
-	__kvm_tdp_mmu_write_spte(iter->sptep, 0);
+	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
 
 	return 0;
 }
@@ -870,8 +878,8 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 
 		if (!shared)
-			tdp_mmu_set_spte(kvm, &iter, 0);
-		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
+			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
+		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
 			goto retry;
 	}
 }
@@ -927,8 +935,9 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
 		return false;
 
-	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			   sp->gfn, sp->role.level + 1, true, true);
+	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
+			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
+			   true, true);
 
 	return true;
 }
@@ -965,7 +974,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		tdp_mmu_set_spte(kvm, &iter, 0);
+		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
 
@@ -1330,7 +1339,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * invariant that the PFN of a present * leaf SPTE can never change.
 	 * See __handle_changed_spte().
 	 */
-	tdp_mmu_set_spte(kvm, iter, 0);
+	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
 
 	if (!pte_write(range->pte)) {
 		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (35 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-30 11:45   ` Kai Huang
                     ` (2 more replies)
  2022-06-27 21:53 ` [PATCH v7 038/102] KVM: x86/mmu: Disallow fast page fault on private GPA isaku.yamahata
                   ` (66 subsequent siblings)
  103 siblings, 3 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

TDX will use a different shadow PTE entry value for MMIO than VMX.  Add
members to struct kvm_arch and track the MMIO value/mask per-VM instead of
in global variables.  By using a per-VM EPT entry value for MMIO, the
existing VMX logic keeps working.

In the VMX VM case, the EPT entry for MMIO is either a non-present PTE
(present bit cleared) without a backing guest page (on EPT violation, KVM
searches the backing guest memory and finds there is no backing guest page),
or a value that triggers EPT misconfiguration.  Once MMIO is triggered on
the EPT entry, the entry is updated to trigger EPT misconfiguration for
future MMIO on the same GPA, which lets KVM recognize the memory access as
MMIO without searching backing guest pages.  KVM then parses the guest
instruction to figure out the address/value/width of the MMIO.

In the case of a guest TD, the guest memory is protected, so the VMM can't
parse the guest instruction to learn the value and access width of the MMIO.
Instead, the VMM sets up the (shared) EPT to trigger #VE by clearing the
"suppress #VE" bit.  When the guest TD issues MMIO, #VE is injected.  The
guest #VE handler converts the MMIO access into an MMIO hypercall to pass
the address/value/width to the VMM (or the guest directly paravirtualizes
MMIO into a hypercall).  The VMM can then handle the MMIO hypercall without
parsing guest instructions.
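
A minimal sketch of the resulting per-VM check (illustrative struct and
names, not the kernel definitions):

  #include <stdbool.h>
  #include <stdint.h>

  struct kvm_arch_model {
          bool     enable_mmio_caching;
          uint64_t shadow_mmio_value;
          uint64_t shadow_mmio_mask;
  };

  /* The MMIO test consults the VM instead of global variables. */
  static bool is_mmio_spte_model(const struct kvm_arch_model *arch, uint64_t spte)
  {
          return (spte & arch->shadow_mmio_mask) == arch->shadow_mmio_value &&
                 arch->enable_mmio_caching;
  }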

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  4 ++++
 arch/x86/include/asm/vmx.h      |  1 +
 arch/x86/kvm/mmu.h              |  4 +++-
 arch/x86/kvm/mmu/mmu.c          | 20 ++++++++++++----
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 arch/x86/kvm/mmu/spte.c         | 41 +++++++++++++++------------------
 arch/x86/kvm/mmu/spte.h         | 11 ++++-----
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++---
 arch/x86/kvm/svm/svm.c          |  2 +-
 arch/x86/kvm/vmx/vmx.c          |  8 +++++++
 10 files changed, 59 insertions(+), 40 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c47aab72a1b..39215daa8576 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1161,6 +1161,10 @@ struct kvm_arch {
 	 */
 	spinlock_t mmu_unsync_pages_lock;
 
+	bool enable_mmio_caching;
+	u64 shadow_mmio_value;
+	u64 shadow_mmio_mask;
+
 	struct list_head assigned_dev_head;
 	struct iommu_domain *iommu_domain;
 	bool iommu_noncoherent;
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index c371ef695fcc..6231ef005a50 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -511,6 +511,7 @@ enum vmcs_field {
 #define VMX_EPT_IPAT_BIT    			(1ull << 6)
 #define VMX_EPT_ACCESS_BIT			(1ull << 8)
 #define VMX_EPT_DIRTY_BIT			(1ull << 9)
+#define VMX_EPT_SUPPRESS_VE_BIT			(1ull << 63)
 #define VMX_EPT_RWX_MASK                        (VMX_EPT_READABLE_MASK |       \
 						 VMX_EPT_WRITABLE_MASK |       \
 						 VMX_EPT_EXECUTABLE_MASK)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index ccf0ba7a6387..9ba60fd79d33 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -108,7 +108,9 @@ static inline u8 kvm_get_shadow_phys_bits(void)
 	return boot_cpu_data.x86_phys_bits;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
+void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask,
+				u64 access_mask);
+void kvm_mmu_set_default_mmio_spte_mask(u64 mask);
 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
 void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f239b6cb5d53..496d0d30839b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2287,7 +2287,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 				return kvm_mmu_prepare_zap_page(kvm, child,
 								invalid_list);
 		}
-	} else if (is_mmio_spte(pte)) {
+	} else if (is_mmio_spte(kvm, pte)) {
 		mmu_spte_clear_no_track(spte);
 	}
 	return 0;
@@ -3067,8 +3067,13 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
 		 * by L0 userspace (you can observe gfn > L1.MAXPHYADDR if
 		 * and only if L1's MAXPHYADDR is inaccurate with respect to
 		 * the hardware's).
+		 *
+		 * Exclude the Intel TD guest.  Because TD memory is
+		 * protected, the instruction can't be emulated.  Instead, use
+		 * an SPTE value without the "suppress #VE" bit
+		 * (kvm->arch.shadow_mmio_value = 0) so the guest gets #VE.
 		 */
-		if (unlikely(!enable_mmio_caching) ||
+		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
 		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
 			return RET_PF_EMULATE;
 	}
@@ -3200,7 +3205,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		else
 			sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
 
-		if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
+		if (!is_shadow_present_pte(spte) ||
+		    is_mmio_spte(vcpu->kvm, spte))
 			break;
 
 		sp = sptep_to_sp(sptep);
@@ -3907,7 +3913,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	if (WARN_ON(reserved))
 		return -EINVAL;
 
-	if (is_mmio_spte(spte)) {
+	if (is_mmio_spte(vcpu->kvm, spte)) {
 		gfn_t gfn = get_mmio_spte_gfn(spte);
 		unsigned int access = get_mmio_spte_access(spte);
 
@@ -4350,7 +4356,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
 {
-	if (unlikely(is_mmio_spte(*sptep))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, *sptep))) {
 		if (gfn != get_mmio_spte_gfn(*sptep)) {
 			mmu_spte_clear_no_track(sptep);
 			return true;
@@ -5864,6 +5870,10 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
+	kvm_mmu_set_mmio_spte_mask(kvm, shadow_default_mmio_mask,
+				   shadow_default_mmio_mask,
+				   ACC_WRITE_MASK | ACC_USER_MASK);
+
 	return 0;
 }
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ee2fb0c073f3..62ae590d4e5b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1032,7 +1032,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		gfn_t gfn;
 
 		if (!is_shadow_present_pte(sp->spt[i]) &&
-		    !is_mmio_spte(sp->spt[i]))
+		    !is_mmio_spte(vcpu->kvm, sp->spt[i]))
 			continue;
 
 		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index bd441458153f..5194aef60c1f 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -29,8 +29,7 @@ u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 u64 __read_mostly shadow_user_mask;
 u64 __read_mostly shadow_accessed_mask;
 u64 __read_mostly shadow_dirty_mask;
-u64 __read_mostly shadow_mmio_value;
-u64 __read_mostly shadow_mmio_mask;
+u64 __read_mostly shadow_default_mmio_mask;
 u64 __read_mostly shadow_mmio_access_mask;
 u64 __read_mostly shadow_present_mask;
 u64 __read_mostly shadow_me_value;
@@ -62,10 +61,11 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
 	u64 spte = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
-	WARN_ON_ONCE(!shadow_mmio_value);
+	WARN_ON_ONCE(!vcpu->kvm->arch.shadow_mmio_value &&
+		     !kvm_gfn_shared_mask(vcpu->kvm));
 
 	access &= shadow_mmio_access_mask;
-	spte |= shadow_mmio_value | access;
+	spte |= vcpu->kvm->arch.shadow_mmio_value | access;
 	spte |= gpa | shadow_nonpresent_or_rsvd_mask;
 	spte |= (gpa & shadow_nonpresent_or_rsvd_mask)
 		<< SHADOW_NONPRESENT_OR_RSVD_MASK_LEN;
@@ -337,7 +337,8 @@ u64 mark_spte_for_access_track(u64 spte)
 	return spte;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
+void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask,
+				u64 access_mask)
 {
 	BUG_ON((u64)(unsigned)access_mask != access_mask);
 	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
@@ -366,11 +367,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
 		mmio_value = 0;
 
-	if (!mmio_value)
-		enable_mmio_caching = false;
-
-	shadow_mmio_value = mmio_value;
-	shadow_mmio_mask  = mmio_mask;
+	kvm->arch.enable_mmio_caching = !!mmio_value;
+	kvm->arch.shadow_mmio_value = mmio_value;
+	kvm->arch.shadow_mmio_mask = mmio_mask;
 	shadow_mmio_access_mask = access_mask;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
@@ -393,24 +392,18 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
 	shadow_dirty_mask	= has_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull;
 	shadow_nx_mask		= 0ull;
 	shadow_x_mask		= VMX_EPT_EXECUTABLE_MASK;
-	shadow_present_mask	= has_exec_only ? 0ull : VMX_EPT_READABLE_MASK;
+	/* VMX_EPT_SUPPRESS_VE_BIT is needed for W or X violation. */
+	shadow_present_mask	=
+		(has_exec_only ? 0ull : VMX_EPT_READABLE_MASK) | VMX_EPT_SUPPRESS_VE_BIT;
 	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
 	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
 	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
-
-	/*
-	 * EPT Misconfigurations are generated if the value of bits 2:0
-	 * of an EPT paging-structure entry is 110b (write/execute).
-	 */
-	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
-				   VMX_EPT_RWX_MASK, 0);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks);
 
 void kvm_mmu_reset_all_pte_masks(void)
 {
 	u8 low_phys_bits;
-	u64 mask;
 
 	shadow_phys_bits = kvm_get_shadow_phys_bits();
 
@@ -459,9 +452,13 @@ void kvm_mmu_reset_all_pte_masks(void)
 	 * PTEs and so the reserved PA approach must be disabled.
 	 */
 	if (shadow_phys_bits < 52)
-		mask = BIT_ULL(51) | PT_PRESENT_MASK;
+		shadow_default_mmio_mask = BIT_ULL(51) | PT_PRESENT_MASK;
 	else
-		mask = 0;
+		shadow_default_mmio_mask = 0;
+}
 
-	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
+void kvm_mmu_set_default_mmio_spte_mask(u64 mask)
+{
+	shadow_default_mmio_mask = mask;
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_set_default_mmio_spte_mask);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1bfedbe0585f..96312ab4fffb 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -5,8 +5,6 @@
 
 #include "mmu_internal.h"
 
-extern bool __read_mostly enable_mmio_caching;
-
 /*
  * A MMU present SPTE is backed by actual memory and may or may not be present
  * in hardware.  E.g. MMIO SPTEs are not considered present.  Use bit 11, as it
@@ -160,8 +158,7 @@ extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 extern u64 __read_mostly shadow_user_mask;
 extern u64 __read_mostly shadow_accessed_mask;
 extern u64 __read_mostly shadow_dirty_mask;
-extern u64 __read_mostly shadow_mmio_value;
-extern u64 __read_mostly shadow_mmio_mask;
+extern u64 __read_mostly shadow_default_mmio_mask;
 extern u64 __read_mostly shadow_mmio_access_mask;
 extern u64 __read_mostly shadow_present_mask;
 extern u64 __read_mostly shadow_me_value;
@@ -233,10 +230,10 @@ static inline bool is_removed_spte(u64 spte)
  */
 extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
-static inline bool is_mmio_spte(u64 spte)
+static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
 {
-	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
-	       likely(enable_mmio_caching);
+	return (spte & kvm->arch.shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
+		likely(kvm->arch.enable_mmio_caching || kvm_gfn_shared_mask(kvm));
 }
 
 static inline bool is_shadow_present_pte(u64 pte)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2ca03ec3bf52..82f1bfac7ee6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -569,8 +569,8 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 		 * impact the guest since both the former and current SPTEs
 		 * are nonpresent.
 		 */
-		if (WARN_ON(!is_mmio_spte(old_spte) &&
-			    !is_mmio_spte(new_spte) &&
+		if (WARN_ON(!is_mmio_spte(kvm, old_spte) &&
+			    !is_mmio_spte(kvm, new_spte) &&
 			    !is_removed_spte(new_spte)))
 			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
 			       "should not be replaced with another,\n"
@@ -1108,7 +1108,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	}
 
 	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
-	if (unlikely(is_mmio_spte(new_spte))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
 		vcpu->stat.pf_mmio_spte_created++;
 		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
 				     new_spte);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 815a07c594f1..0abc43d6a115 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4870,7 +4870,7 @@ static __init void svm_adjust_mmio_mask(void)
 	 */
 	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
 
-	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
+	kvm_mmu_set_default_mmio_spte_mask(mask);
 }
 
 static __init void svm_set_cpu_caps(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1d87885245cc..e2415ac55317 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7289,6 +7289,14 @@ int vmx_vm_init(struct kvm *kvm)
 	if (!ple_gap)
 		kvm->arch.pause_in_guest = true;
 
+	/*
+	 * EPT Misconfigurations can be generated if the value of bits 2:0
+	 * of an EPT paging-structure entry is 110b (write/execute).
+	 */
+	if (enable_ept)
+		kvm_mmu_set_mmio_spte_mask(kvm, VMX_EPT_MISCONFIG_WX_VALUE,
+					   VMX_EPT_RWX_MASK, 0);
+
 	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
 		switch (l1tf_mitigation) {
 		case L1TF_MITIGATION_OFF:
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 038/102] KVM: x86/mmu: Disallow fast page fault on private GPA
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (36 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level isaku.yamahata
                   ` (65 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX requires a TDX SEAMCALL to operate on Secure EPT instead of direct
memory access, and a TDX SEAMCALL is a heavy operation.  Fast page fault on
a private GPA therefore doesn't make sense.  Disallow fast page fault on
private GPA.
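
For reference, a hypothetical model of the private-GPA test this relies on
(the series defines kvm_is_private_gpa() elsewhere; the mask handling below
is illustrative): a GPA is private when the VM has a shared GFN bit and that
bit is clear in the GPA.

  #include <stdbool.h>
  #include <stdint.h>

  struct vm_model {
          uint64_t gfn_shared_mask;       /* 0 for non-TDX VMs */
  };

  static bool gpa_is_private(const struct vm_model *vm, uint64_t gpa)
  {
          uint64_t gpa_shared_mask = vm->gfn_shared_mask << 12;   /* gfn -> gpa */

          return vm->gfn_shared_mask && !(gpa & gpa_shared_mask);
  }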

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 496d0d30839b..e0aa5ad3931d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3081,8 +3081,16 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
 	return RET_PF_CONTINUE;
 }
 
-static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
+static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	/*
+	 * TDX private mapping doesn't support fast page fault because the EPT
+	 * entry is read/written with TDX SEAMCALLs instead of direct memory
+	 * access.
+	 */
+	if (kvm_is_private_gpa(kvm, fault->addr))
+		return false;
+
 	/*
 	 * Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
 	 * reach the common page fault handler if the SPTE has an invalid MMIO
@@ -3192,7 +3200,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	u64 *sptep = NULL;
 	uint retry_count = 0;
 
-	if (!page_fault_can_be_fast(fault))
+	if (!page_fault_can_be_fast(vcpu->kvm, fault))
 		return ret;
 
 	walk_shadow_page_lockless_begin(vcpu);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (37 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 038/102] KVM: x86/mmu: Disallow fast page fault on private GPA isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-30 12:27   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu isaku.yamahata
                   ` (64 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

TODO: This is a transient workaround patch until large page support for TDX
is implemented.  Once large pages are supported for TDX, remove this patch.

At this point, large pages aren't supported for TDX, and the guest TD needs
to work with 4K pages only.  On the other hand, conventional VMX VMs should
continue to work with large pages.  Allow a per-VM override of the TDP max
page level.

The existing x86 KVM MMU code already has a max_level member in struct
kvm_page_fault, initialized to KVM_MAX_HUGEPAGE_LEVEL.  The KVM page fault
handler rejects page sizes larger than max_level.

Add a per-VM member to indicate the allowed maximum page size, with
KVM_MAX_HUGEPAGE_LEVEL as the default value, and initialize max_level in
struct kvm_page_fault from it.  For a guest TD, set the per-VM value to the
4K page size, so the only allowed page size is 4K and large pages are
effectively disabled.
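
A minimal sketch of the plumbing (illustrative types; the real fields live
in struct kvm_arch and struct kvm_page_fault).  The per-VM cap simply
replaces the global constant when a fault is set up; a later TDX patch is
expected to set it to the 4K level.

  enum { PG_LEVEL_4K = 1, PG_LEVEL_2M = 2, PG_LEVEL_1G = 3 };

  struct vm_model    { int tdp_max_page_level; };
  struct fault_model { int max_level; };

  static void init_fault(struct fault_model *fault, const struct vm_model *vm)
  {
          /* Per-VM cap instead of the global KVM_MAX_HUGEPAGE_LEVEL. */
          fault->max_level = vm->tdp_max_page_level;
  }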

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu/mmu.c          | 1 +
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 39215daa8576..f4d4ed41641b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1146,6 +1146,7 @@ struct kvm_arch {
 	unsigned long n_requested_mmu_pages;
 	unsigned long n_max_mmu_pages;
 	unsigned int indirect_shadow_pages;
+	int tdp_max_page_level;
 	u8 mmu_valid_gen;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	struct list_head active_mmu_pages;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e0aa5ad3931d..80d7c7709af3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5878,6 +5878,7 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
+	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
 	kvm_mmu_set_mmio_spte_mask(kvm, shadow_default_mmio_mask,
 				   shadow_default_mmio_mask,
 				   ACC_WRITE_MASK | ACC_USER_MASK);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bd2a26897b97..44a04fad4bed 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -244,7 +244,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
 		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
 
-		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
+		.max_level = vcpu->kvm->arch.tdp_max_page_level,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
 	};
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (38 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-01 10:41   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE isaku.yamahata
                   ` (63 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

For a KVM MMU that has a shared bit mask, zap only leaf SPTEs when
deleting/moving a memslot.  The existing path, kvm_mmu_zap_all_fast(),
depends on role.invalid with the read lock of mmu_lock so that other vCPUs
can operate on the KVM MMU concurrently: mark the root page table invalid,
unlink it from the CPU's page table pointer, then process the page table.
That doesn't work for the private page table, because unlinking the root
page table requires all SPTE entries to be non-present.  Instead, take the
write lock of mmu_lock and zap only leaf SPTEs for a KVM MMU with a shared
bit mask.
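
As a toy model of the leaf-only zap (not the kernel iterators; the array
stands in for walking the paging structure over the memslot's GFN range):

  #include <stdbool.h>
  #include <stdint.h>

  struct leaf_model { uint64_t gfn; bool present; };

  /* Clear only leaf entries covering [start, end); keep non-leaf pages. */
  static bool zap_leafs(struct leaf_model *leafs, int nr,
                        uint64_t start, uint64_t end)
  {
          bool flush = false;
          int i;

          for (i = 0; i < nr; i++) {
                  if (!leafs[i].present || leafs[i].gfn < start ||
                      leafs[i].gfn >= end)
                          continue;
                  leafs[i].present = false;
                  flush = true;
          }
          return flush;   /* caller flushes TLBs if anything was zapped */
  }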

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/mmu.c | 35 ++++++++++++++++++++++++++++++++++-
 1 file changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 80d7c7709af3..c517c7bca105 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5854,11 +5854,44 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
 }
 
+static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	bool flush = false;
+
+	write_lock(&kvm->mmu_lock);
+
+	/*
+	 * Zapping non-leaf SPTEs, a.k.a. not-last SPTEs, isn't required, worst
+	 * case scenario we'll have unused shadow pages lying around until they
+	 * are recycled due to age or when the VM is destroyed.
+	 */
+	if (is_tdp_mmu_enabled(kvm)) {
+		struct kvm_gfn_range range = {
+		      .slot = slot,
+		      .start = slot->base_gfn,
+		      .end = slot->base_gfn + slot->npages,
+		      .may_block = false,
+		};
+
+		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
+	} else {
+		flush = slot_handle_level(kvm, slot, kvm_zap_rmapp, PG_LEVEL_4K,
+					  KVM_MAX_HUGEPAGE_LEVEL, true);
+	}
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+	write_unlock(&kvm->mmu_lock);
+}
+
 static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
 			struct kvm_memory_slot *slot,
 			struct kvm_page_track_notifier_node *node)
 {
-	kvm_mmu_zap_all_fast(kvm);
+	if (kvm_gfn_shared_mask(kvm))
+		kvm_mmu_zap_memslot(kvm, slot);
+	else
+		kvm_mmu_zap_all_fast(kvm);
 }
 
 int kvm_mmu_init_vm(struct kvm *kvm)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (39 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-08  2:23   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 042/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks isaku.yamahata
                   ` (62 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
the hardware to inject #VE conditionally and sets the "suppress #VE" bit in
EPT entries.  For the VMX case, #VE isn't used; if #VE happens for VMX, it's
a bug.  To be defensive (i.e. test that the VMX case isn't broken),
introduce the module option ept_violation_ve_test: when it's set, intercept
unexpected #VE and report an error.
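
For example, assuming kvm_intel is built as a module, the test mode can only
be chosen at load time because the parameter is read-only (0444):

  modprobe kvm_intel ept_violation_ve_test=1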

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/vmx.h | 12 +++++++
 arch/x86/kvm/vmx/vmx.c     | 68 +++++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/vmx.h     |  3 ++
 3 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 6231ef005a50..f0f8eecf55ac 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -68,6 +68,7 @@
 #define SECONDARY_EXEC_ENCLS_EXITING		VMCS_CONTROL_BIT(ENCLS_EXITING)
 #define SECONDARY_EXEC_RDSEED_EXITING		VMCS_CONTROL_BIT(RDSEED_EXITING)
 #define SECONDARY_EXEC_ENABLE_PML               VMCS_CONTROL_BIT(PAGE_MOD_LOGGING)
+#define SECONDARY_EXEC_EPT_VIOLATION_VE		VMCS_CONTROL_BIT(EPT_VIOLATION_VE)
 #define SECONDARY_EXEC_PT_CONCEAL_VMX		VMCS_CONTROL_BIT(PT_CONCEAL_VMX)
 #define SECONDARY_EXEC_XSAVES			VMCS_CONTROL_BIT(XSAVES)
 #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC	VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC)
@@ -223,6 +224,8 @@ enum vmcs_field {
 	VMREAD_BITMAP_HIGH              = 0x00002027,
 	VMWRITE_BITMAP                  = 0x00002028,
 	VMWRITE_BITMAP_HIGH             = 0x00002029,
+	VE_INFORMATION_ADDRESS		= 0x0000202A,
+	VE_INFORMATION_ADDRESS_HIGH	= 0x0000202B,
 	XSS_EXIT_BITMAP                 = 0x0000202C,
 	XSS_EXIT_BITMAP_HIGH            = 0x0000202D,
 	ENCLS_EXITING_BITMAP		= 0x0000202E,
@@ -628,4 +631,13 @@ enum vmx_l1d_flush_state {
 
 extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
 
+struct vmx_ve_information {
+	u32 exit_reason;
+	u32 delivery;
+	u64 exit_qualification;
+	u64 guest_linear_address;
+	u64 guest_physical_address;
+	u16 eptp_index;
+};
+
 #endif
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e2415ac55317..e3d304b14df0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -126,6 +126,9 @@ module_param(error_on_inconsistent_vmcs_config, bool, 0444);
 static bool __read_mostly dump_invalid_vmcs = 0;
 module_param(dump_invalid_vmcs, bool, 0644);
 
+static bool __read_mostly ept_violation_ve_test = 0;
+module_param(ept_violation_ve_test, bool, 0444);
+
 #define MSR_BITMAP_MODE_X2APIC		1
 #define MSR_BITMAP_MODE_X2APIC_APICV	2
 
@@ -726,6 +729,13 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu)
 
 	eb = (1u << PF_VECTOR) | (1u << UD_VECTOR) | (1u << MC_VECTOR) |
 	     (1u << DB_VECTOR) | (1u << AC_VECTOR);
+	/*
+	 * #VE isn't used for VMX, but for TDX.  To test against unexpected
+	 * change related to #VE for VMX, intercept unexpected #VE and warn on
+	 * it.
+	 */
+	if (ept_violation_ve_test)
+		eb |= 1u << VE_VECTOR;
 	/*
 	 * Guest access to VMware backdoor ports could legitimately
 	 * trigger #GP because of TSS I/O permission bitmap.
@@ -2524,6 +2534,8 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 			SECONDARY_EXEC_NOTIFY_VM_EXITING;
 		if (cpu_has_sgx())
 			opt2 |= SECONDARY_EXEC_ENCLS_EXITING;
+		if (ept_violation_ve_test)
+			opt2 |= SECONDARY_EXEC_EPT_VIOLATION_VE;
 		if (adjust_vmx_controls(min2, opt2,
 					MSR_IA32_VMX_PROCBASED_CTLS2,
 					&_cpu_based_2nd_exec_control) < 0)
@@ -2558,6 +2570,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 			return -EIO;
 
 		vmx_cap->ept = 0;
+		_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
 	}
 	if (!(_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_VPID) &&
 	    vmx_cap->vpid) {
@@ -4390,6 +4403,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 		exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
 	if (!enable_ept) {
 		exec_control &= ~SECONDARY_EXEC_ENABLE_EPT;
+		exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
 		enable_unrestricted_guest = 0;
 	}
 	if (!enable_unrestricted_guest)
@@ -4517,8 +4531,40 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 
 	exec_controls_set(vmx, vmx_exec_control(vmx));
 
-	if (cpu_has_secondary_exec_ctrls())
+	if (cpu_has_secondary_exec_ctrls()) {
 		secondary_exec_controls_set(vmx, vmx_secondary_exec_control(vmx));
+		if (secondary_exec_controls_get(vmx) &
+		    SECONDARY_EXEC_EPT_VIOLATION_VE) {
+			if (!vmx->ve_info) {
+				/* ve_info must be page aligned. */
+				struct page *page;
+
+				BUILD_BUG_ON(sizeof(*vmx->ve_info) > PAGE_SIZE);
+				page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+				if (page)
+					vmx->ve_info = page_to_virt(page);
+			}
+			if (vmx->ve_info) {
+				/*
+				 * Allow #VE delivery. CPU sets this field to
+				 * 0xFFFFFFFF on #VE delivery.  Another #VE can
+				 * occur only if software clears the field.
+				 */
+				vmx->ve_info->delivery = 0;
+				vmcs_write64(VE_INFORMATION_ADDRESS,
+					     __pa(vmx->ve_info));
+			} else {
+				/*
+				 * Because SECONDARY_EXEC_EPT_VIOLATION_VE is
+				 * used only when ept_violation_ve_test is true,
+				 * it's okay to go with the bit disabled.
+				 */
+				pr_err("Failed to allocate ve_info. disabling EPT_VIOLATION_VE.\n");
+				secondary_exec_controls_clearbit(
+					vmx, SECONDARY_EXEC_EPT_VIOLATION_VE);
+			}
+		}
+	}
 
 	if (cpu_has_tertiary_exec_ctrls())
 		tertiary_exec_controls_set(vmx, vmx_tertiary_exec_control(vmx));
@@ -5116,7 +5162,14 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 		if (handle_guest_split_lock(kvm_rip_read(vcpu)))
 			return 1;
 		fallthrough;
+	case VE_VECTOR:
 	default:
+		if (ept_violation_ve_test && ex_no == VE_VECTOR) {
+			pr_err("VMEXIT due to unexpected #VE.\n");
+			secondary_exec_controls_clearbit(
+				vmx, SECONDARY_EXEC_EPT_VIOLATION_VE);
+			return 1;
+		}
 		kvm_run->exit_reason = KVM_EXIT_EXCEPTION;
 		kvm_run->ex.exception = ex_no;
 		kvm_run->ex.error_code = error_code;
@@ -6182,6 +6235,17 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	if (secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
 		pr_err("Virtual processor ID = 0x%04x\n",
 		       vmcs_read16(VIRTUAL_PROCESSOR_ID));
+	if (secondary_exec_control & SECONDARY_EXEC_EPT_VIOLATION_VE) {
+		struct vmx_ve_information *ve_info;
+		pr_err("VE info address = 0x%016llx\n",
+		       vmcs_read64(VE_INFORMATION_ADDRESS));
+		ve_info = __va(vmcs_read64(VE_INFORMATION_ADDRESS));
+		pr_err("ve_info: 0x%08x 0x%08x 0x%016llx 0x%016llx 0x%016llx 0x%04x\n",
+		       ve_info->exit_reason, ve_info->delivery,
+		       ve_info->exit_qualification,
+		       ve_info->guest_linear_address,
+		       ve_info->guest_physical_address, ve_info->eptp_index);
+	}
 }
 
 /*
@@ -7173,6 +7237,8 @@ void vmx_vcpu_free(struct kvm_vcpu *vcpu)
 	free_vpid(vmx->vpid);
 	nested_vmx_free_vcpu(vcpu);
 	free_loaded_vmcs(vmx->loaded_vmcs);
+	if (vmx->ve_info)
+		free_page((unsigned long)vmx->ve_info);
 }
 
 int vmx_vcpu_create(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 9feb994e5ea2..60d93c38e014 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -338,6 +338,9 @@ struct vcpu_vmx {
 		DECLARE_BITMAP(read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
 		DECLARE_BITMAP(write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
 	} shadow_msr_intercept;
+
+	/* ve_info must be page aligned. */
+	struct vmx_ve_information *ve_info;
 };
 
 struct kvm_vmx {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 042/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (40 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 043/102] KVM: x86/mmu: Forcibly use TDP MMU for TDX isaku.yamahata
                   ` (61 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of KVM TDP MMU
hooks.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index df003d2ed89e..d5cace00c433 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -25,6 +25,6 @@ Patch Layer status
 * TD vcpu interrupts/exit/hypercall:    Not yet
 
 * KVM MMU GPA shared bits:              Applied
-* KVM TDP refactoring for TDX:          Applying
-* KVM TDP MMU hooks:                    Not yet
+* KVM TDP refactoring for TDX:          Applied
+* KVM TDP MMU hooks:                    Applying
 * KVM TDP MMU MapGPA:                   Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 043/102] KVM: x86/mmu: Forcibly use TDP MMU for TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (41 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 042/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-11  5:48   ` Yuan Yao
  2022-07-11 14:56   ` Sean Christopherson
  2022-06-27 21:53 ` [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page isaku.yamahata
                   ` (60 subsequent siblings)
  103 siblings, 2 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

In this patch series, TDX supports only the TDP MMU and doesn't support the
legacy MMU.  Forcibly use the TDP MMU for TDX regardless of the kernel
parameter that disables the TDP MMU.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 82f1bfac7ee6..7eb41b176d1e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -18,8 +18,13 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	struct workqueue_struct *wq;
 
-	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
-		return 0;
+	/*
+	 *  Because TDX supports only TDP MMU, forcibly use TDP MMU in the case
+	 *  of TDX.
+	 */
+	if (kvm->arch.vm_type != KVM_X86_TDX_VM &&
+		(!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)))
+		return 0;
 
 	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
 	if (!wq)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (42 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 043/102] KVM: x86/mmu: Forcibly use TDP MMU for TDX isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-01 11:12   ` Kai Huang
                     ` (3 more replies)
  2022-06-27 21:53 ` [PATCH v7 045/102] KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map() isaku.yamahata
                   ` (59 subsequent siblings)
  103 siblings, 4 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

For a private GPA, the CPU refers to a private page table whose contents are
encrypted.  Dedicated APIs have to be used to operate on it (e.g.
updating/reading its PTE entries), and their cost is expensive.

When KVM resolves a KVM page fault, it walks the page tables.  To reuse the
existing KVM MMU code and mitigate the heavy cost of directly walking the
encrypted private page table, allocate one more page to mirror the existing
KVM page table.  Resolve the KVM page fault with the existing code, and do
the additional operations necessary for the mirrored private page table.  To
distinguish the cases, the existing KVM page table is called a shared page
table (i.e. it has no mirrored private page table), and the KVM page table
with a mirrored private page table is called a private page table.  The
relationship is depicted below.

Add a private pointer to struct kvm_mmu_page for the mirrored private page
table and add helper functions to allocate/initialize/free a mirrored
private page table page.  Also, add helper functions to check if a given
kvm_mmu_page is private.  A later patch introduces hooks to operate on the
mirrored private page table.

              KVM page fault                     |
                     |                           |
                     V                           |
        -------------+----------                 |
        |                      |                 |
        V                      V                 |
     shared GPA           private GPA            |
        |                      |                 |
        V                      V                 |
 CPU/KVM shared PT root  KVM private PT root     |  CPU private PT root
        |                      |                 |           |
        V                      V                 |           V
     shared PT            private PT <----mirror----> mirrored private PT
        |                      |                 |           |
        |                      \-----------------+------\    |
        |                                        |      |    |
        V                                        |      V    V
  shared guest page                              |    private guest page
                                                 |
                           non-encrypted memory  |    encrypted memory
                                                 |
PT: page table

Both the CPU and KVM refer to the CPU/KVM shared page table.  The private
page table is used only by KVM.  The CPU refers to the mirrored private page
table.
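
A minimal model of the association described above (illustrative struct, not
the kernel's struct kvm_mmu_page):

  #include <stdbool.h>
  #include <stddef.h>

  struct mmu_page_model {
          void *spt;              /* KVM's page table page (shared or private) */
          void *private_sp;       /* mirrored private page table (e.g. SEPT
                                     page), NULL for a shared page table */
  };

  static bool page_is_private(const struct mmu_page_model *sp)
  {
          return sp->private_sp != NULL;
  }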

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu/mmu.c          |  9 ++++
 arch/x86/kvm/mmu/mmu_internal.h | 84 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++
 4 files changed, 97 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f4d4ed41641b..bfc934dc9a33 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -716,6 +716,7 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
 	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
+	struct kvm_mmu_memory_cache mmu_private_sp_cache;
 
 	/*
 	 * QEMU userspace and the guest each have their own FPU state.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c517c7bca105..a5bf3e40e209 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -691,6 +691,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
 	int start, end, i, r;
 	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
 
+	if (kvm_gfn_shared_mask(vcpu->kvm)) {
+		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
+					       PT64_ROOT_MAX_LEVEL);
+		if (r)
+			return r;
+	}
+
 	if (is_tdp_mmu && shadow_nonpresent_value)
 		start = kvm_mmu_memory_cache_nr_free_objects(mc);
 
@@ -732,6 +739,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_private_sp_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
@@ -1736,6 +1744,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
 	if (!direct)
 		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
+	kvm_mmu_init_private_sp(sp, NULL);
 
 	/*
 	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 44a04fad4bed..9f3a6bea60a3 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -55,6 +55,10 @@ struct kvm_mmu_page {
 	u64 *spt;
 	/* hold the gfn of each spte inside spt */
 	gfn_t *gfns;
+#ifdef CONFIG_KVM_MMU_PRIVATE
+	/* associated private shadow page, e.g. SEPT page. */
+	void *private_sp;
+#endif
 	/* Currently serving as active root */
 	union {
 		int root_count;
@@ -115,6 +119,86 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
 	return kvm_mmu_role_as_id(sp->role);
 }
 
+/*
+ * TDX vcpu allocates page for root Secure EPT page and assigns to CPU secure
+ * EPT pointer.  KVM doesn't need to allocate and link to the secure EPT.
+ * Dummy value to make is_private_sp() return true.
+ */
+#define KVM_MMU_PRIVATE_SP_ROOT	((void *)1)
+
+#ifdef CONFIG_KVM_MMU_PRIVATE
+static inline bool is_private_sp(struct kvm_mmu_page *sp)
+{
+	return !!sp->private_sp;
+}
+
+static inline bool is_private_sptep(u64 *sptep)
+{
+	WARN_ON(!sptep);
+	return is_private_sp(sptep_to_sp(sptep));
+}
+
+static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
+{
+	return sp->private_sp;
+}
+
+static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
+{
+	sp->private_sp = private_sp;
+}
+
+/* Valid sp->role.level is required. */
+static inline void kvm_mmu_alloc_private_sp(
+	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
+{
+	if (is_root)
+		sp->private_sp = KVM_MMU_PRIVATE_SP_ROOT;
+	else
+		sp->private_sp = kvm_mmu_memory_cache_alloc(
+			&vcpu->arch.mmu_private_sp_cache);
+	/*
+	 * Because mmu_private_sp_cache is topped up before starting kvm page
+	 * fault resolving, the allocation above shouldn't fail.
+	 */
+	WARN_ON_ONCE(!sp->private_sp);
+}
+
+static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
+{
+	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
+		free_page((unsigned long)sp->private_sp);
+}
+#else
+static inline bool is_private_sp(struct kvm_mmu_page *sp)
+{
+	return false;
+}
+
+static inline bool is_private_sptep(u64 *sptep)
+{
+	return false;
+}
+
+static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
+{
+	return NULL;
+}
+
+static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
+{
+}
+
+static inline void kvm_mmu_alloc_private_sp(
+	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
+{
+}
+
+static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
+{
+}
+#endif
+
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
 {
 	/*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7eb41b176d1e..b2568b062faa 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -72,6 +72,8 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 
 static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
 {
+	if (is_private_sp(sp))
+		kvm_mmu_free_private_sp(sp);
 	free_page((unsigned long)sp->spt);
 	kmem_cache_free(mmu_page_header_cache, sp);
 }
@@ -295,6 +297,7 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 	sp->gfn = gfn;
 	sp->ptep = sptep;
 	sp->tdp_mmu_page = true;
+	kvm_mmu_init_private_sp(sp, NULL);
 
 	trace_kvm_mmu_get_page(sp, true);
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 045/102] KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map()
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (43 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU isaku.yamahata
                   ` (58 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Factor out the non-leaf SPTE population logic from kvm_tdp_mmu_map().  The
MapGPA hypercall needs to populate non-leaf SPTEs to record which kind of
GPA, private or shared, is allowed in the leaf EPT entry.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b2568b062faa..d874c79ab96c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1167,6 +1167,24 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 	return 0;
 }
 
+static int tdp_mmu_populate_nonleaf(
+	struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
+{
+	struct kvm_mmu_page *sp;
+	int ret;
+
+	WARN_ON(is_shadow_present_pte(iter->old_spte));
+	WARN_ON(is_removed_spte(iter->old_spte));
+
+	sp = tdp_mmu_alloc_sp(vcpu);
+	tdp_mmu_init_child_sp(sp, iter);
+
+	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
+	if (ret)
+		tdp_mmu_free_sp(sp);
+	return ret;
+}
+
 /*
  * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing
  * page tables and SPTEs to translate the faulting guest physical address.
@@ -1175,7 +1193,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	struct tdp_iter iter;
-	struct kvm_mmu_page *sp;
 	int ret;
 
 	kvm_mmu_hugepage_adjust(vcpu, fault);
@@ -1221,13 +1238,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			if (is_removed_spte(iter.old_spte))
 				break;
 
-			sp = tdp_mmu_alloc_sp(vcpu);
-			tdp_mmu_init_child_sp(sp, &iter);
-
-			if (tdp_mmu_link_sp(vcpu->kvm, &iter, sp, account_nx, true)) {
-				tdp_mmu_free_sp(sp);
+			if (tdp_mmu_populate_nonleaf(vcpu, &iter, account_nx))
 				break;
-			}
 		}
 	}
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (44 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 045/102] KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map() isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-08  3:44   ` Kai Huang
                     ` (2 more replies)
  2022-06-27 21:53 ` [PATCH v7 047/102] [MARKER] The start of TDX KVM patch series: TDX EPT violation isaku.yamahata
                   ` (57 subsequent siblings)
  103 siblings, 3 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Kai Huang

From: Isaku Yamahata <isaku.yamahata@intel.com>

Allocate a mirrored private page table for each private page table, and add
hooks to operate on the mirrored private page table.  This patch adds only
the hooks.  Because kvm_gfn_shared_mask() always returns false, the hooks
aren't called yet.

Because private guest pages are protected, copying a page via mmu_notifier
to migrate it doesn't work.  A callback from the backing store is needed.

When the faulting GPA is private, the KVM page fault is also called private.
When resolving a private fault, allocate a mirrored private page table and
call the hooks to operate on it.  On a change of a private PTE entry, invoke
the kvm_x86_ops hook in __handle_changed_spte() to propagate the change to
the mirrored private page table.  The following depicts the relationship.

  private KVM page fault   |
      |                    |
      V                    |
 private GPA               |
      |                    |
      V                    |
 KVM private PT root       |  CPU private PT root
      |                    |           |
      V                    |           V
   private PT ---hook to mirror--->mirrored private PT
      |                    |           |
      \--------------------+------\    |
                           |      |    |
                           |      V    V
                           |    private guest page
                           |
                           |
     non-encrypted memory  |    encrypted memory
                           |
PT: page table
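
To make the hook contract concrete, below is a minimal, purely hypothetical
backend for the new handle_changed_private_spte op declared in the hunks
that follow.  The real TDX implementation, added later in the series,
issues SEAMCALLs where this sketch only has placeholder comments.

  static void ex_handle_changed_private_spte(struct kvm *kvm,
                                             const struct kvm_spte_change *change)
  {
          if (change->new.is_present && !change->new.is_leaf) {
                  /* Non-leaf linked: change->sept_page is the new S-EPT page. */
          } else if (change->new.is_present && change->new.is_leaf) {
                  /* Leaf installed: map change->new.pfn at change->gfn. */
          } else if (change->old.is_present) {
                  /* Existing mapping removed: unmap and flush. */
          }
  }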

The existing KVM TDP MMU code updates SPTEs atomically.  When populating an
EPT entry, the entry is set atomically.  Zapping an SPTE, however, requires
a TLB shootdown.  To handle that, the entry is first frozen with a special
SPTE value that clears the present bit.  After the TLB shootdown, the entry
is set to its eventual value (unfrozen).

For the mirrored private page table, hooks are called to update the mirrored
private page table in addition to the direct access to the private SPTE.
For the zapping case, freezing the SPTE works as-is; the hooks can be called
in addition to the TLB shootdown.  For populating a private SPTE entry,
however, there can be a race condition without further protection:

  vcpu 1: populating 2M private SPTE
  vcpu 2: populating 4K private SPTE
  vcpu 2: TDX SEAMCALL to update 4K mirrored private SPTE => error
  vcpu 1: TDX SEAMCALL to update 2M mirrored private SPTE

To avoid the race, the frozen SPTE is utilized.  Instead of atomically
updating the private entry, freeze the entry, call the hook that updates the
mirrored private SPTE, and then set the entry to the final value.
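
In condensed form, the populate path described above becomes the following
(a simplified sketch of the tdp_mmu_set_spte_atomic() change in this patch,
with error handling and accessed/dirty bookkeeping omitted):

  /* Freeze the SPTE so that concurrent faults back off and retry. */
  if (cmpxchg64(sptep, old_spte, REMOVED_SPTE) != old_spte)
          return -EBUSY;
  /* With the entry frozen, propagate the change to the mirrored private PT. */
  static_call(kvm_x86_handle_changed_private_spte)(kvm, &change);
  /* Unfreeze: publish the final SPTE value. */
  __kvm_tdp_mmu_write_spte(sptep, new_spte);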

Only 4K pages are supported at this stage.  2M page support can be done in
future patches.

Add an is_private member to struct kvm_page_fault to indicate that the
fault is private.  Also add an is_private member to struct tdp_iter to
propagate it.
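
For reference, the private/shared decision boils down to the GPA shared
bit.  The sketch below assumes kvm_gfn_shared_mask() returns the shared bit
in GFN form, as defined by the earlier GPA shared bit patches.

  static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
  {
          gfn_t mask = kvm_gfn_shared_mask(kvm);

          /* Private GPA: TDX is enabled and the shared bit is clear. */
          return mask && !(gpa_to_gfn(gpa) & mask);
  }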

Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |  20 +++
 arch/x86/kvm/mmu/mmu.c             |  86 +++++++++-
 arch/x86/kvm/mmu/mmu_internal.h    |  37 +++++
 arch/x86/kvm/mmu/paging_tmpl.h     |   2 +-
 arch/x86/kvm/mmu/tdp_iter.c        |   1 +
 arch/x86/kvm/mmu/tdp_iter.h        |   5 +-
 arch/x86/kvm/mmu/tdp_mmu.c         | 247 +++++++++++++++++++++++------
 arch/x86/kvm/mmu/tdp_mmu.h         |   7 +-
 virt/kvm/kvm_main.c                |   1 +
 10 files changed, 346 insertions(+), 62 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 32a6df784ea6..6982d57e4518 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -93,6 +93,8 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
 KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
 KVM_X86_OP(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
+KVM_X86_OP_OPTIONAL(free_private_sp)
+KVM_X86_OP_OPTIONAL(handle_changed_private_spte)
 KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
 KVM_X86_OP(get_l2_tsc_multiplier)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index bfc934dc9a33..f2a4d5a18851 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -440,6 +440,7 @@ struct kvm_mmu {
 			 struct kvm_mmu_page *sp);
 	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
+	hpa_t private_root_hpa;
 	union kvm_cpu_role cpu_role;
 	union kvm_mmu_page_role root_role;
 
@@ -1435,6 +1436,20 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
 	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
 }
 
+struct kvm_spte {
+	kvm_pfn_t pfn;
+	bool is_present;
+	bool is_leaf;
+};
+
+struct kvm_spte_change {
+	gfn_t gfn;
+	enum pg_level level;
+	struct kvm_spte old;
+	struct kvm_spte new;
+	void *sept_page;
+};
+
 struct kvm_x86_ops {
 	const char *name;
 
@@ -1547,6 +1562,11 @@ struct kvm_x86_ops {
 	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			     int root_level);
 
+	int (*free_private_sp)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+			       void *private_sp);
+	void (*handle_changed_private_spte)(
+		struct kvm *kvm, const struct kvm_spte_change *change);
+
 	bool (*has_wbinvd_exit)(void);
 
 	u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a5bf3e40e209..ef925722ee28 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1577,7 +1577,11 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
-		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
+		/*
+		 * Private pages need to be kept; page migration is handled
+		 * on the next EPT violation.
+		 */
+		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush, false);
 
 	return flush;
 }
@@ -3082,7 +3086,8 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
 		 * SPTE value without #VE suppress bit cleared
 		 * (kvm->arch.shadow_mmio_value = 0).
 		 */
-		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
+		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching &&
+			     !kvm_gfn_shared_mask(vcpu->kvm)) ||
 		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
 			return RET_PF_EMULATE;
 	}
@@ -3454,7 +3459,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		goto out_unlock;
 
 	if (is_tdp_mmu_enabled(vcpu->kvm)) {
-		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
+		if (kvm_gfn_shared_mask(vcpu->kvm) &&
+		    !VALID_PAGE(mmu->private_root_hpa)) {
+			root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, true);
+			mmu->private_root_hpa = root;
+		}
+		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, false);
 		mmu->root.hpa = root;
 	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
 		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
@@ -4026,6 +4036,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
 }
 
+/*
+ * A private page can't be released via mmu_notifier without losing its
+ * contents.  Help from the backing store (a callback) is needed to allow page
+ * migration.  For now, pin the page.
+ */
+static int kvm_faultin_pfn_private_mapped(struct kvm_vcpu *vcpu,
+					   struct kvm_page_fault *fault)
+{
+	hva_t hva = gfn_to_hva_memslot(fault->slot, fault->gfn);
+	struct page *page[1];
+
+	fault->map_writable = false;
+	fault->pfn = KVM_PFN_ERR_FAULT;
+	if (hva == KVM_HVA_ERR_RO_BAD || hva == KVM_HVA_ERR_BAD)
+		return RET_PF_CONTINUE;
+
+	/* TDX allows only RWX.  Read-only isn't supported. */
+	WARN_ON_ONCE(!fault->write);
+	if (pin_user_pages_fast(hva, 1, FOLL_WRITE, page) != 1)
+		return RET_PF_INVALID;
+
+	fault->map_writable = true;
+	fault->pfn = page_to_pfn(page[0]);
+	return RET_PF_CONTINUE;
+}
+
 static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -4058,6 +4094,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			return RET_PF_EMULATE;
 	}
 
+	if (fault->is_private)
+		return kvm_faultin_pfn_private_mapped(vcpu, fault);
+
 	async = false;
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
 					  fault->write, &fault->map_writable,
@@ -4110,6 +4149,17 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
 }
 
+void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r)
+{
+	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
+		return;
+
+	if (fault->is_private)
+		put_page(pfn_to_page(fault->pfn));
+	else
+		kvm_release_pfn_clean(fault->pfn);
+}
+
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
@@ -4117,7 +4167,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	unsigned long mmu_seq;
 	int r;
 
-	fault->gfn = fault->addr >> PAGE_SHIFT;
+	fault->gfn = gpa_to_gfn(fault->addr) & ~kvm_gfn_shared_mask(vcpu->kvm);
 	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
 
 	if (page_fault_handle_page_track(vcpu, fault))
@@ -4166,7 +4216,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		read_unlock(&vcpu->kvm->mmu_lock);
 	else
 		write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
+	kvm_mmu_release_fault(vcpu->kvm, fault, r);
 	return r;
 }
 
@@ -5665,6 +5715,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
 
 	mmu->root.hpa = INVALID_PAGE;
 	mmu->root.pgd = 0;
+	mmu->private_root_hpa = INVALID_PAGE;
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 		mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
 
@@ -5855,6 +5906,10 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 * lead to use-after-free.
 	 */
 	if (is_tdp_mmu_enabled(kvm))
+		/*
+		 * For now the private root is never invalidated while the VM is running,
+		 * so this can only happen for shared roots.
+		 */
 		kvm_tdp_mmu_zap_invalidated_roots(kvm);
 }
 
@@ -5882,7 +5937,8 @@ static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 		      .may_block = false,
 		};
 
-		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
+		/* All private pages should be zapped on memslot deletion. */
+		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush, true);
 	} else {
 		flush = slot_handle_level(kvm, slot, kvm_zap_rmapp, PG_LEVEL_4K,
 					  KVM_MAX_HUGEPAGE_LEVEL, true);
@@ -5990,7 +6046,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	if (is_tdp_mmu_enabled(kvm)) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
 			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
-						      gfn_end, true, flush);
+						      gfn_end, true, flush, false);
 	}
 
 	if (flush)
@@ -6023,6 +6079,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
+	/*
+	 * For now this can only happen for non-TD VM, because TD private
+	 * mapping doesn't support write protection.  kvm_tdp_mmu_wrprot_slot()
+	 * will give a WARN() if it hits for TD.
+	 */
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
 		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
@@ -6111,6 +6172,9 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		sp = sptep_to_sp(sptep);
 		pfn = spte_to_pfn(*sptep);
 
+		/* Private page dirty logging is not supported. */
+		KVM_BUG_ON(is_private_sptep(sptep), kvm);
+
 		/*
 		 * We cannot do huge page mapping for indirect shadow pages,
 		 * which are found on the last rmap (level = 1) when not using
@@ -6151,6 +6215,11 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
+	/*
+	 * This should only be reachable in case of dirty logging, which TD private
+	 * mapping doesn't support so far.  kvm_tdp_mmu_zap_collapsible_sptes()
+	 * internally gives a WARN() when it hits.
+	 */
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
@@ -6437,6 +6506,9 @@ int kvm_mmu_vendor_module_init(void)
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_unload(vcpu);
+	if (is_tdp_mmu_enabled(vcpu->kvm))
+		mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->private_root_hpa,
+				NULL);
 	free_mmu_pages(&vcpu->arch.root_mmu);
 	free_mmu_pages(&vcpu->arch.guest_mmu);
 	mmu_free_memory_caches(vcpu);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 9f3a6bea60a3..d3b30d62aca0 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -6,6 +6,8 @@
 #include <linux/kvm_host.h>
 #include <asm/kvm_host.h>
 
+#include "mmu.h"
+
 #undef MMU_DEBUG
 
 #ifdef MMU_DEBUG
@@ -164,11 +166,30 @@ static inline void kvm_mmu_alloc_private_sp(
 	WARN_ON_ONCE(!sp->private_sp);
 }
 
+static inline int kvm_alloc_private_sp_for_split(
+	struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	gfp &= ~__GFP_ZERO;
+	sp->private_sp = (void*)__get_free_page(gfp);
+	if (!sp->private_sp)
+		return -ENOMEM;
+	return 0;
+}
+
 static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
 {
 	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
 		free_page((unsigned long)sp->private_sp);
 }
+
+static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
+				     gfn_t gfn)
+{
+	if (is_private_sp(root))
+		return kvm_gfn_private(kvm, gfn);
+	else
+		return kvm_gfn_shared(kvm, gfn);
+}
 #else
 static inline bool is_private_sp(struct kvm_mmu_page *sp)
 {
@@ -194,11 +215,25 @@ static inline void kvm_mmu_alloc_private_sp(
 {
 }
 
+static inline int kvm_alloc_private_sp_for_split(
+	struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	return -ENOMEM;
+}
+
 static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
 {
 }
+
+static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
+				     gfn_t gfn)
+{
+	return gfn;
+}
 #endif
 
+void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r);
+
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
 {
 	/*
@@ -246,6 +281,7 @@ struct kvm_page_fault {
 	/* Derived from mmu and global state.  */
 	const bool is_tdp;
 	const bool nx_huge_page_workaround_enabled;
+	const bool is_private;
 
 	/*
 	 * Whether a >4KB mapping can be created or is forbidden due to NX
@@ -327,6 +363,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.prefetch = prefetch,
 		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
 		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
+		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
 
 		.max_level = vcpu->kvm->arch.tdp_max_page_level,
 		.req_level = PG_LEVEL_4K,
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 62ae590d4e5b..e5b73638bd83 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -877,7 +877,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 
 out_unlock:
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
+	kvm_mmu_release_fault(vcpu->kvm, fault, r);
 	return r;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index ee4802d7b36c..4ed50f3c424d 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -53,6 +53,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 	iter->min_level = min_level;
 	iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root->spt;
 	iter->as_id = kvm_mmu_page_as_id(root);
+	iter->is_private = is_private_sp(root);
 
 	tdp_iter_restart(iter);
 }
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index adfca0cf94d3..dec56795c5da 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -71,7 +71,7 @@ struct tdp_iter {
 	tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
 	/* A pointer to the current SPTE */
 	tdp_ptep_t sptep;
-	/* The lowest GFN mapped by the current SPTE */
+	/* The lowest GFN (shared bits included) mapped by the current SPTE */
 	gfn_t gfn;
 	/* The level of the root page given to the iterator */
 	int root_level;
@@ -94,6 +94,9 @@ struct tdp_iter {
 	 * level instead of advancing to the next entry.
 	 */
 	bool yielded;
+
+	/* True if this iter is handling private KVM page fault. */
+	bool is_private;
 };
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d874c79ab96c..12f75e60a254 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -278,18 +278,24 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 		    kvm_mmu_page_as_id(_root) != _as_id) {		\
 		} else
 
-static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
+static struct kvm_mmu_page *tdp_mmu_alloc_sp(
+	struct kvm_vcpu *vcpu, bool private, bool is_root)
 {
 	struct kvm_mmu_page *sp;
 
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 
+	if (private)
+		kvm_mmu_alloc_private_sp(vcpu, sp, is_root);
+	else
+		kvm_mmu_init_private_sp(sp, NULL);
+
 	return sp;
 }
 
-static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
-			    gfn_t gfn, union kvm_mmu_page_role role)
+static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn,
+			    union kvm_mmu_page_role role)
 {
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
@@ -297,7 +303,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 	sp->gfn = gfn;
 	sp->ptep = sptep;
 	sp->tdp_mmu_page = true;
-	kvm_mmu_init_private_sp(sp);
 
 	trace_kvm_mmu_get_page(sp, true);
 }
@@ -316,7 +321,8 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
 	tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
 }
 
-hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+static struct kvm_mmu_page *kvm_tdp_mmu_get_vcpu_root(struct kvm_vcpu *vcpu,
+						      bool private)
 {
 	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
 	struct kvm *kvm = vcpu->kvm;
@@ -330,11 +336,12 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	 */
 	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
 		if (root->role.word == role.word &&
+		    is_private_sp(root) == private &&
 		    kvm_tdp_mmu_get_root(root))
 			goto out;
 	}
 
-	root = tdp_mmu_alloc_sp(vcpu);
+	root = tdp_mmu_alloc_sp(vcpu, private, true);
 	tdp_mmu_init_sp(root, NULL, 0, role);
 
 	refcount_set(&root->tdp_mmu_root_count, 1);
@@ -344,12 +351,17 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 
 out:
-	return __pa(root->spt);
+	return root;
+}
+
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private)
+{
+	return __pa(kvm_tdp_mmu_get_vcpu_root(vcpu, private)->spt);
 }
 
 static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				u64 old_spte, u64 new_spte, int level,
-				bool shared);
+				bool private_spte, u64 old_spte,
+				u64 new_spte, int level, bool shared);
 
 static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 {
@@ -410,6 +422,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
  *
  * @kvm: kvm instance
  * @pt: the page removed from the paging structure
+ * @is_private: pt is private or not.
  * @shared: This operation may not be running under the exclusive use
  *	    of the MMU lock and the operation must synchronize with other
  *	    threads that might be modifying SPTEs.
@@ -422,7 +435,8 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
  * this thread will be responsible for ensuring the page is freed. Hence the
  * early rcu_dereferences in the function.
  */
-static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
+static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool is_private,
+			      bool shared)
 {
 	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
 	int level = sp->role.level;
@@ -498,8 +512,20 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 			old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
 							  REMOVED_SPTE, level);
 		}
-		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
-				    old_spte, REMOVED_SPTE, level, shared);
+		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn, is_private,
+				    old_spte, REMOVED_SPTE, level,
+				    shared);
+	}
+
+	if (is_private && WARN_ON(static_call(kvm_x86_free_private_sp)(
+					  kvm, sp->gfn, sp->role.level,
+					  kvm_mmu_private_sp(sp)))) {
+		/*
+		 * Failed to unlink Secure EPT page and there is nothing to do
+		 * further.  Intentionally leak the page to prevent the kernel
+		 * from accessing the encrypted page.
+		 */
+		kvm_mmu_init_private_sp(sp, NULL);
 	}
 
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
@@ -510,6 +536,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
  * @kvm: kvm instance
  * @as_id: the address space of the paging structure the SPTE was a part of
  * @gfn: the base GFN that was mapped by the SPTE
+ * @private_spte: the SPTE is private or not
  * @old_spte: The value of the SPTE before the change
  * @new_spte: The value of the SPTE after the change
  * @level: the level of the PT the SPTE is part of in the paging structure
@@ -521,14 +548,30 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
  * This function must be called for all TDP SPTE modifications.
  */
 static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				  u64 old_spte, u64 new_spte, int level,
-				  bool shared)
+				  bool private_spte, u64 old_spte,
+				  u64 new_spte, int level, bool shared)
 {
 	bool was_present = is_shadow_present_pte(old_spte);
 	bool is_present = is_shadow_present_pte(new_spte);
 	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
-	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
+	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
+	bool pfn_changed = old_pfn != new_pfn;
+	struct kvm_spte_change change = {
+		.gfn = gfn,
+		.level = level,
+		.old = {
+			.pfn = old_pfn,
+			.is_present = was_present,
+			.is_leaf = was_leaf,
+		},
+		.new = {
+			.pfn = new_pfn,
+			.is_present = is_present,
+			.is_leaf = is_leaf,
+		},
+	};
 
 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
 	WARN_ON(level < PG_LEVEL_4K);
@@ -595,7 +638,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 
 	if (was_leaf && is_dirty_spte(old_spte) &&
 	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+		kvm_set_pfn_dirty(old_pfn);
 
 	/*
 	 * Recursively handle child PTs if the change removed a subtree from
@@ -604,16 +647,47 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	 * pages are kernel allocations and should never be migrated.
 	 */
 	if (was_present && !was_leaf &&
-	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
-		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
+	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) {
+		WARN_ON(private_spte !=
+			is_private_sptep(spte_to_child_pt(old_spte, level)));
+		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level),
+				  private_spte, shared);
+	}
+
+	/*
+	 * Special handling for the private mapping.  We are either setting
+	 * up a new mapping at a middle-level page table or at a leaf, or
+	 * tearing down an existing mapping.
+	 *
+	 * This is done after the lower page table has been handled by
+	 * handle_removed_pt() above, because S-EPT requires that S-EPT
+	 * tables be removed only after their children have been removed.
+	 */
+	if (private_spte &&
+	    /* Ignore change of software only bits. e.g. host_writable */
+	    (was_leaf != is_leaf || was_present != is_present || pfn_changed)) {
+		void *sept_page = NULL;
+
+		if (is_present && !is_leaf) {
+			struct kvm_mmu_page *sp = to_shadow_page(pfn_to_hpa(new_pfn));
+
+			sept_page = kvm_mmu_private_sp(sp);
+			WARN_ON(!sept_page);
+			WARN_ON(sp->role.level + 1 != level);
+			WARN_ON(sp->gfn != gfn);
+		}
+		change.sept_page = sept_page;
+
+		static_call(kvm_x86_handle_changed_private_spte)(kvm, &change);
+	}
 }
 
 static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				u64 old_spte, u64 new_spte, int level,
-				bool shared)
+				bool private_spte, u64 old_spte, u64 new_spte,
+				int level, bool shared)
 {
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
-			      shared);
+	__handle_changed_spte(kvm, as_id, gfn, private_spte,
+			old_spte, new_spte, level, shared);
 	handle_changed_spte_acc_track(old_spte, new_spte, level);
 	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
 				      new_spte, level);
@@ -640,6 +714,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 					  struct tdp_iter *iter,
 					  u64 new_spte)
 {
+	bool freeze_spte = iter->is_private && !is_removed_spte(new_spte);
+	u64 tmp_spte = freeze_spte ? REMOVED_SPTE : new_spte;
 	u64 *sptep = rcu_dereference(iter->sptep);
 	u64 old_spte;
 
@@ -657,7 +733,7 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
 	 * does not hold the mmu_lock.
 	 */
-	old_spte = cmpxchg64(sptep, iter->old_spte, new_spte);
+	old_spte = cmpxchg64(sptep, iter->old_spte, tmp_spte);
 	if (old_spte != iter->old_spte) {
 		/*
 		 * The page table entry was modified by a different logical
@@ -669,10 +745,14 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 		return -EBUSY;
 	}
 
-	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
-			      new_spte, iter->level, true);
+	__handle_changed_spte(
+		kvm, iter->as_id, iter->gfn, iter->is_private,
+		iter->old_spte, new_spte, iter->level, true);
 	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
 
+	if (freeze_spte)
+		__kvm_tdp_mmu_write_spte(sptep, new_spte);
+
 	return 0;
 }
 
@@ -734,13 +814,15 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
  *		      unless performing certain dirty logging operations.
  *		      Leaving record_dirty_log unset in that case prevents page
  *		      writes from being double counted.
+ * @is_private:       Whether the SPTE is for a private mapping.
  *
  * Returns the old SPTE value, which _may_ be different than @old_spte if the
  * SPTE had voldatile bits.
  */
 static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
-			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
-			      bool record_acc_track, bool record_dirty_log)
+			       u64 old_spte, u64 new_spte, gfn_t gfn, int level,
+			       bool record_acc_track, bool record_dirty_log,
+			       bool is_private)
 {
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
@@ -755,7 +837,8 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
 
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
+	__handle_changed_spte(kvm, as_id, gfn, is_private,
+			      old_spte, new_spte, level, false);
 
 	if (record_acc_track)
 		handle_changed_spte_acc_track(old_spte, new_spte, level);
@@ -774,7 +857,8 @@ static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
 					    iter->old_spte, new_spte,
 					    iter->gfn, iter->level,
-					    record_acc_track, record_dirty_log);
+					    record_acc_track, record_dirty_log,
+					    iter->is_private);
 }
 
 static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
@@ -807,8 +891,11 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
 			continue;					\
 		else
 
-#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)		\
-	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
+#define tdp_mmu_for_each_pte(_iter, _mmu, _private, _start, _end)	\
+	for_each_tdp_pte(_iter,						\
+		 to_shadow_page((_private) ? _mmu->private_root_hpa :	\
+				_mmu->root.hpa),			\
+		_start, _end)
 
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
@@ -945,7 +1032,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
 			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
-			   true, true);
+			   true, true, is_private_sp(sp));
 
 	return true;
 }
@@ -961,13 +1048,21 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
  * operation can cause a soft lockup.
  */
 static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
-			      gfn_t start, gfn_t end, bool can_yield, bool flush)
+			      gfn_t start, gfn_t end, bool can_yield, bool flush,
+			      bool drop_private)
 {
 	struct tdp_iter iter;
 
 	end = min(end, tdp_mmu_max_gfn_exclusive());
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
+	/*
+	 * Extend [start, end) to include the GFN shared bit when TDX is
+	 * enabled and this is the range of a shared mapping.
+	 */
+	WARN_ON_ONCE(!is_private_sp(root) && drop_private);
+	start = kvm_gfn_for_root(kvm, root, start);
+	end = kvm_gfn_for_root(kvm, root, end);
 
 	rcu_read_lock();
 
@@ -1002,12 +1097,13 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
  * MMU lock.
  */
 bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
-			   bool can_yield, bool flush)
+			   bool can_yield, bool flush, bool drop_private)
 {
 	struct kvm_mmu_page *root;
 
 	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
-		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);
+		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush,
+					  drop_private && is_private_sp(root));
 
 	return flush;
 }
@@ -1067,6 +1163,12 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+		/*
+		 * Skip private root since private page table
+		 * is only torn down when VM is destroyed.
+		 */
+		if (is_private_sp(root))
+			continue;
 		if (!root->role.invalid &&
 		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
 			root->role.invalid = true;
@@ -1087,14 +1189,22 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	u64 new_spte;
 	int ret = RET_PF_FIXED;
 	bool wrprot = false;
+	unsigned long pte_access = ACC_ALL;
+	gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
 
 	WARN_ON(sp->role.level != fault->goal_level);
+
+	/* TDX shared GPAs are not executable, enforce this for the SDV. */
+	if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
+		pte_access &= ~ACC_EXEC_MASK;
+
 	if (unlikely(!fault->slot))
-		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
+		new_spte = make_mmio_spte(vcpu, gfn_unalias, pte_access);
 	else
-		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-					 fault->pfn, iter->old_spte, fault->prefetch, true,
-					 fault->map_writable, &new_spte);
+		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
+				   gfn_unalias, fault->pfn, iter->old_spte,
+				   fault->prefetch, true, fault->map_writable,
+				   &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
@@ -1167,8 +1277,7 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 	return 0;
 }
 
-static int tdp_mmu_populate_nonleaf(
-	struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
+static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
 {
 	struct kvm_mmu_page *sp;
 	int ret;
@@ -1176,7 +1285,7 @@ static int tdp_mmu_populate_nonleaf(
 	WARN_ON(is_shadow_present_pte(iter->old_spte));
 	WARN_ON(is_removed_spte(iter->old_spte));
 
-	sp = tdp_mmu_alloc_sp(vcpu);
+	sp = tdp_mmu_alloc_sp(vcpu, iter->is_private, false);
 	tdp_mmu_init_child_sp(sp, iter);
 
 	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
@@ -1193,6 +1302,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	struct tdp_iter iter;
+	gfn_t raw_gfn;
+	bool is_private = fault->is_private;
 	int ret;
 
 	kvm_mmu_hugepage_adjust(vcpu, fault);
@@ -1201,7 +1312,16 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 	rcu_read_lock();
 
-	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
+	raw_gfn = gpa_to_gfn(fault->addr);
+
+	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn)) {
+		if (is_private) {
+			rcu_read_unlock();
+			return -EFAULT;
+		}
+	}
+
+	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
 		if (fault->nx_huge_page_workaround_enabled)
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
 
@@ -1217,6 +1337,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		    is_large_pte(iter.old_spte)) {
 			if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
 				break;
+			/*
+			 * TODO: large page support.
+			 * Large pages aren't supported for TDX yet.
+			 */
+			WARN_ON(is_private_sptep(iter.sptep));
+
 
 			/*
 			 * The iter must explicitly re-read the spte here
@@ -1258,11 +1384,13 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return ret;
 }
 
+/* Used by mmu notifier via kvm_unmap_gfn_range() */
 bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
-				 bool flush)
+				 bool flush, bool drop_private)
 {
 	return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start,
-				     range->end, range->may_block, flush);
+				     range->end, range->may_block, flush,
+				     drop_private);
 }
 
 typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
@@ -1445,7 +1573,8 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
 	return spte_set;
 }
 
-static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
+static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(
+	gfp_t gfp, bool is_private)
 {
 	struct kvm_mmu_page *sp;
 
@@ -1456,6 +1585,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
 		return NULL;
 
 	sp->spt = (void *)__get_free_page(gfp);
+	if (is_private) {
+		if (kvm_alloc_private_sp_for_split(sp, gfp)) {
+			free_page((unsigned long)sp->spt);
+			sp->spt = NULL;
+		}
+	}
 	if (!sp->spt) {
 		kmem_cache_free(mmu_page_header_cache, sp);
 		return NULL;
@@ -1469,6 +1604,11 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 						       bool shared)
 {
 	struct kvm_mmu_page *sp;
+	bool is_private = iter->is_private;
+
+	/* TODO: For now large page isn't supported for private SPTE. */
+	WARN_ON(is_private);
+	WARN_ON(iter->is_private != is_private_sptep(iter->sptep));
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
@@ -1479,7 +1619,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	 * If this allocation fails we drop the lock and retry with reclaim
 	 * allowed.
 	 */
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
+	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT, is_private);
 	if (sp)
 		return sp;
 
@@ -1491,7 +1631,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 
 	iter->yielded = true;
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
+	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT, is_private);
 
 	if (shared)
 		read_lock(&kvm->mmu_lock);
@@ -1907,10 +2047,14 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	int leaf = -1;
+	bool is_private = kvm_is_private_gpa(vcpu->kvm, addr);
 
 	*root_level = vcpu->arch.mmu->root_role.level;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	if (WARN_ON(is_private))
+		return leaf;
+
+	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
 		leaf = iter.level;
 		sptes[leaf] = iter.old_spte;
 	}
@@ -1937,7 +2081,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	tdp_ptep_t sptep = NULL;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	/* fast page fault for private GPA isn't supported. */
+	WARN_ON_ONCE(kvm_is_private_gpa(vcpu->kvm, addr));
+
+	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
 		*spte = iter.old_spte;
 		sptep = iter.sptep;
 	}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index c163f7cc23ca..d1655571eb2f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -5,7 +5,7 @@
 
 #include <linux/kvm_host.h>
 
-hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private);
 
 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
 {
@@ -16,7 +16,8 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			  bool shared);
 
 bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start,
-				 gfn_t end, bool can_yield, bool flush);
+				gfn_t end, bool can_yield, bool flush,
+				bool drop_private);
 bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
@@ -25,7 +26,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm);
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 
 bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
-				 bool flush);
+				 bool flush, bool drop_private);
 bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0acb0b6d1f82..7a5261eb7eb8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -196,6 +196,7 @@ bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
 
 	return true;
 }
+EXPORT_SYMBOL_GPL(kvm_is_reserved_pfn);
 
 /*
  * Switches to specified vcpu, until a matching vcpu_put()
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 047/102] [MARKER] The start of TDX KVM patch series: TDX EPT violation
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (45 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 048/102] KVM: x86/mmu: Disallow dirty logging for x86 TDX isaku.yamahata
                   ` (56 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of the TDX EPT violation part of
the patch series.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index d5cace00c433..c3e675bea802 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -19,12 +19,12 @@ Patch Layer status
 * TDX architectural definitions:        Applied
 * TD VM creation/destruction:           Applied
 * TD vcpu creation/destruction:         Applied
-* TDX EPT violation:                    Not yet
+* TDX EPT violation:                    Applying
 * TD finalization:                      Not yet
 * TD vcpu enter/exit:                   Not yet
 * TD vcpu interrupts/exit/hypercall:    Not yet
 
 * KVM MMU GPA shared bits:              Applied
 * KVM TDP refactoring for TDX:          Applied
-* KVM TDP MMU hooks:                    Applying
+* KVM TDP MMU hooks:                    Applied
 * KVM TDP MMU MapGPA:                   Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 048/102] KVM: x86/mmu: Disallow dirty logging for x86 TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (46 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 047/102] [MARKER] The start of TDX KVM patch series: TDX EPT violation isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-08  2:30   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 049/102] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs isaku.yamahata
                   ` (55 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Xiaoyao Li

From: Sean Christopherson <sean.j.christopherson@intel.com>

TDX doesn't support dirty logging.  Report that dirty logging isn't
supported so that the device model, for example QEMU, can handle it
properly.
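
From the device model's side, the observable effect is that enabling dirty
logging on a memslot now fails for a TDX VM.  A minimal sketch of what
QEMU-like code would see (vm_fd, backing and the slot layout are made up
for illustration):

  struct kvm_userspace_memory_region region = {
          .slot = 0,
          .flags = KVM_MEM_LOG_DIRTY_PAGES,
          .guest_phys_addr = 0,
          .memory_size = 0x100000,
          .userspace_addr = (__u64)backing,
  };

  /* For a KVM_X86_TDX_VM this now fails with -EINVAL. */
  if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0)
          fprintf(stderr, "dirty logging not supported for this VM\n");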

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c       |  5 +++++
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 15 ++++++++++++---
 3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4309ef0ade21..dcd1f5e2ba05 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13164,6 +13164,11 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 }
 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
 
+bool kvm_arch_dirty_log_supported(struct kvm *kvm)
+{
+	return kvm->arch.vm_type != KVM_X86_TDX_VM;
+}
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 79a4988fd51f..6fd8ec297236 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1452,6 +1452,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
 int kvm_arch_post_init_vm(struct kvm *kvm);
 void kvm_arch_pre_destroy_vm(struct kvm *kvm);
 int kvm_arch_create_vm_debugfs(struct kvm *kvm);
+bool kvm_arch_dirty_log_supported(struct kvm *kvm);
 
 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7a5261eb7eb8..703c1d0c98da 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1467,9 +1467,18 @@ static void kvm_replace_memslot(struct kvm *kvm,
 	}
 }
 
-static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
+bool __weak kvm_arch_dirty_log_supported(struct kvm *kvm)
 {
-	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
+	return true;
+}
+
+static int check_memory_region_flags(struct kvm *kvm,
+				     const struct kvm_userspace_memory_region *mem)
+{
+	u32 valid_flags = 0;
+
+	if (kvm_arch_dirty_log_supported(kvm))
+		valid_flags |= KVM_MEM_LOG_DIRTY_PAGES;
 
 #ifdef __KVM_HAVE_READONLY_MEM
 	valid_flags |= KVM_MEM_READONLY;
@@ -1871,7 +1880,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	int as_id, id;
 	int r;
 
-	r = check_memory_region_flags(mem);
+	r = check_memory_region_flags(kvm, mem);
 	if (r)
 		return r;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 049/102] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (47 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 048/102] KVM: x86/mmu: Disallow dirty logging for x86 TDX isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-12  2:58   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 050/102] KVM: VMX: Split out guts of EPT violation to common/exposed function isaku.yamahata
                   ` (54 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Some KVM MMU operations (dirty page logging, page migration, page aging)
aren't supported for private GFNs (yet) with the first generation of TDX.
Silently return on unsupported TDX KVM MMU operations.
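
The patch uses two idioms, condensed below from the hunks that follow: a
silent return on paths that legitimate ioctls (e.g. KVM_GET_DIRTY_LOG,
KVM_CLEAR_DIRTY_LOG) can reach, and a WARN on paths that should already
have been filtered out before getting this far.

  /* Reachable from normal ioctls: silently ignore the request. */
  if (!kvm_arch_dirty_log_supported(kvm))
          return;

  /* Should have been filtered earlier: reaching here is a bug. */
  if (WARN_ON(!kvm_arch_dirty_log_supported(kvm)))
          return false;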

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 74 +++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/x86.c         |  3 ++
 2 files changed, 72 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 12f75e60a254..fef6246086a8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -387,6 +387,8 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 
 	if ((!is_writable_pte(old_spte) || pfn_changed) &&
 	    is_writable_pte(new_spte)) {
+		/* For memory slot operations, use GFN without aliasing */
+		gfn = gfn & ~kvm_gfn_shared_mask(kvm);
 		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
 		mark_page_dirty_in_slot(kvm, slot, gfn);
 	}
@@ -1398,7 +1400,8 @@ typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
 
 static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
 						   struct kvm_gfn_range *range,
-						   tdp_handler_t handler)
+						   tdp_handler_t handler,
+						   bool only_shared)
 {
 	struct kvm_mmu_page *root;
 	struct tdp_iter iter;
@@ -1409,9 +1412,23 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
 	 * into this helper allow blocking; it'd be dead, wasteful code.
 	 */
 	for_each_tdp_mmu_root(kvm, root, range->slot->as_id) {
+		gfn_t start;
+		gfn_t end;
+
+		if (only_shared && is_private_sp(root))
+			continue;
+
 		rcu_read_lock();
 
-		tdp_root_for_each_leaf_pte(iter, root, range->start, range->end)
+		/*
+		 * For TDX shared mappings, set the GFN shared bit on the range
+		 * so that the handler() doesn't need to set it, avoiding
+		 * duplicated code in multiple handler()s.
+		 */
+		start = kvm_gfn_for_root(kvm, root, range->start);
+		end = kvm_gfn_for_root(kvm, root, range->end);
+
+		tdp_root_for_each_leaf_pte(iter, root, start, end)
 			ret |= handler(kvm, &iter, range);
 
 		rcu_read_unlock();
@@ -1455,7 +1472,12 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 
 bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm_tdp_mmu_handle_gfn(kvm, range, age_gfn_range);
+	/*
+	 * First TDX generation doesn't support clearing A bit for private
+	 * mapping, since there's no secure EPT API to support it.  However
+	 * it's a legitimate request for TDX guest.
+	 */
+	return kvm_tdp_mmu_handle_gfn(kvm, range, age_gfn_range, true);
 }
 
 static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,
@@ -1466,7 +1488,7 @@ static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,
 
 bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn);
+	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn, false);
 }
 
 static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
@@ -1511,8 +1533,11 @@ bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	 * No need to handle the remote TLB flush under RCU protection, the
 	 * target SPTE _must_ be a leaf SPTE, i.e. cannot result in freeing a
 	 * shadow page.  See the WARN on pfn_changed in __handle_changed_spte().
+	 *
+	 * The .change_pte() callback should not happen for a private page,
+	 * because for now TDX private pages are pinned for the VM's lifetime.
 	 */
-	return kvm_tdp_mmu_handle_gfn(kvm, range, set_spte_gfn);
+	return kvm_tdp_mmu_handle_gfn(kvm, range, set_spte_gfn, true);
 }
 
 /*
@@ -1566,6 +1591,14 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
+	/*
+	 * Because the first TDX generation doesn't support write protecting
+	 * private mappings and kvm_arch_dirty_log_supported(kvm) is false, it's
+	 * a bug to reach here for a guest TD.
+	 */
+	if (WARN_ON(!kvm_arch_dirty_log_supported(kvm)))
+		return false;
+
 	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
 		spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
 			     slot->base_gfn + slot->npages, min_level);
@@ -1830,6 +1863,14 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
+	/*
+	 * First TDX generation doesn't support clearing dirty bit,
+	 * since there's no secure EPT API to support it.  It is a
+	 * bug to reach here for TDX guest.
+	 */
+	if (WARN_ON(!kvm_arch_dirty_log_supported(kvm)))
+		return false;
+
 	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
 		spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
 				slot->base_gfn + slot->npages);
@@ -1896,6 +1937,13 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	struct kvm_mmu_page *root;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
+	/*
+	 * First TDX generation doesn't support clearing dirty bit,
+	 * since there's no secure EPT API to support it.  For now silently
+	 * ignore KVM_CLEAR_DIRTY_LOG.
+	 */
+	if (!kvm_arch_dirty_log_supported(kvm))
+		return;
 	for_each_tdp_mmu_root(kvm, root, slot->as_id)
 		clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot);
 }
@@ -1975,6 +2023,13 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
+	/*
+	 * This should only be reachable when dirty logging is supported. It's a
+	 * bug to reach here.
+	 */
+	if (WARN_ON(!kvm_arch_dirty_log_supported(kvm)))
+		return;
+
 	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
 		zap_collapsible_spte_range(kvm, root, slot);
 }
@@ -2028,6 +2083,15 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 	bool spte_set = false;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	/*
+	 * First TDX generation doesn't support write protecting private
+	 * mappings, silently ignore the request.  KVM_GET_DIRTY_LOG etc
+	 * can reach here, no warning.
+	 */
+	if (!kvm_arch_dirty_log_supported(kvm))
+		return false;
+
 	for_each_tdp_mmu_root(kvm, root, slot->as_id)
 		spte_set |= write_protect_gfn(kvm, root, gfn, min_level);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dcd1f5e2ba05..8f57dfb2a8c9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12243,6 +12243,9 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 	u32 new_flags = new ? new->flags : 0;
 	bool log_dirty_pages = new_flags & KVM_MEM_LOG_DIRTY_PAGES;
 
+	if (!kvm_arch_dirty_log_supported(kvm) && log_dirty_pages)
+		return;
+
 	/*
 	 * Update CPU dirty logging if dirty logging is being toggled.  This
 	 * applies to all operations.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 050/102] KVM: VMX: Split out guts of EPT violation to common/exposed function
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (48 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 049/102] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-08 10:25   ` Kai Huang
  2022-06-27 21:53 ` [PATCH v7 051/102] KVM: VMX: Move setting of EPT MMU masks to common VT-x code isaku.yamahata
                   ` (53 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

For TDX, an EPT violation differs only in how the information, the GPA and
the exit qualification, is retrieved.  To share the EPT violation handling
code, split out the guts of the EPT violation handler so that the VMX/TDX
exit handlers can call it after retrieving the GPA and exit qualification.
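
For illustration, a TDX-side caller would look roughly like the sketch
below.  The tdexit_*() accessors are placeholders for however the TDX exit
handler retrieves the GPA and exit qualification; they are not part of this
patch.

  static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
  {
          unsigned long exit_qual = tdexit_exit_qual(vcpu);  /* placeholder */
          gpa_t gpa = tdexit_gpa(vcpu);                      /* placeholder */

          trace_kvm_page_fault(gpa, exit_qual);
          return __vmx_handle_ept_violation(vcpu, gpa, exit_qual);
  }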

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/common.h | 33 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c    | 32 ++++++--------------------------
 2 files changed, 39 insertions(+), 26 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/common.h

diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
new file mode 100644
index 000000000000..235908f3e044
--- /dev/null
+++ b/arch/x86/kvm/vmx/common.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_X86_VMX_COMMON_H
+#define __KVM_X86_VMX_COMMON_H
+
+#include <linux/kvm_host.h>
+
+#include "mmu.h"
+
+static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
+					     unsigned long exit_qualification)
+{
+	u64 error_code;
+
+	/* Is it a read fault? */
+	error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
+		     ? PFERR_USER_MASK : 0;
+	/* Is it a write fault? */
+	error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
+		      ? PFERR_WRITE_MASK : 0;
+	/* Is it a fetch fault? */
+	error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
+		      ? PFERR_FETCH_MASK : 0;
+	/* ept page table entry is present? */
+	error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
+		      ? PFERR_PRESENT_MASK : 0;
+
+	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
+	       PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
+
+	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
+}
+
+#endif /* __KVM_X86_VMX_COMMON_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e3d304b14df0..2f1dc06aec3c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -50,6 +50,7 @@
 #include <asm/vmx.h>
 
 #include "capabilities.h"
+#include "common.h"
 #include "cpuid.h"
 #include "evmcs.h"
 #include "hyperv.h"
@@ -5578,11 +5579,10 @@ static int handle_task_switch(struct kvm_vcpu *vcpu)
 
 static int handle_ept_violation(struct kvm_vcpu *vcpu)
 {
-	unsigned long exit_qualification;
-	gpa_t gpa;
-	u64 error_code;
+	unsigned long exit_qualification = vmx_get_exit_qual(vcpu);
+	gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
 
-	exit_qualification = vmx_get_exit_qual(vcpu);
+	trace_kvm_page_fault(gpa, exit_qualification);
 
 	/*
 	 * EPT violation happened while executing iret from NMI,
@@ -5591,29 +5591,9 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	 * AAK134, BY25.
 	 */
 	if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
-			enable_vnmi &&
-			(exit_qualification & INTR_INFO_UNBLOCK_NMI))
+	    enable_vnmi && (exit_qualification & INTR_INFO_UNBLOCK_NMI))
 		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
 
-	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
-	trace_kvm_page_fault(gpa, exit_qualification);
-
-	/* Is it a read fault? */
-	error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
-		     ? PFERR_USER_MASK : 0;
-	/* Is it a write fault? */
-	error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
-		      ? PFERR_WRITE_MASK : 0;
-	/* Is it a fetch fault? */
-	error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
-		      ? PFERR_FETCH_MASK : 0;
-	/* ept page table entry is present? */
-	error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
-		      ? PFERR_PRESENT_MASK : 0;
-
-	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
-	       PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
-
 	vcpu->arch.exit_qualification = exit_qualification;
 
 	/*
@@ -5627,7 +5607,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);
 
-	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
 }
 
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 051/102] KVM: VMX: Move setting of EPT MMU masks to common VT-x code
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (49 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 050/102] KVM: VMX: Split out guts of EPT violation to common/exposed function isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 052/102] KVM: TDX: Add load_mmu_pgd method for TDX isaku.yamahata
                   ` (52 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

The EPT MMU masks are used by both VMX and TDX.  Initialize them in common
code, before either the VMX- or TDX-specific initialization code runs.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c | 5 +++++
 arch/x86/kvm/vmx/vmx.c  | 4 ----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index ce12cc8276ef..9f4c3a0bcc12 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -4,6 +4,7 @@
 #include "x86_ops.h"
 #include "vmx.h"
 #include "nested.h"
+#include "mmu.h"
 #include "pmu.h"
 #include "tdx.h"
 
@@ -26,6 +27,10 @@ static __init int vt_hardware_setup(void)
 
 	enable_tdx = enable_tdx && !tdx_hardware_setup(&vt_x86_ops);
 
+	if (enable_ept)
+		kvm_mmu_set_ept_masks(enable_ept_ad_bits,
+				      cpu_has_vmx_ept_execute_only());
+
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2f1dc06aec3c..3f231159fe3d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8139,10 +8139,6 @@ __init int vmx_hardware_setup(void)
 
 	set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
 
-	if (enable_ept)
-		kvm_mmu_set_ept_masks(enable_ept_ad_bits,
-				      cpu_has_vmx_ept_execute_only());
-
 	/*
 	 * Setup shadow_me_value/shadow_me_mask to include MKTME KeyID
 	 * bits to shadow_zero_check.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 052/102] KVM: TDX: Add load_mmu_pgd method for TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (50 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 051/102] KVM: VMX: Move setting of EPT MMU masks to common VT-x code isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD isaku.yamahata
                   ` (51 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

For virtual I/O, the guest TD shares guest pages with the VMM without
encryption.  A shared EPT is used to map those guest pages in an unprotected
way.

Add the VMCS field encoding for the shared EPTP, which will be used by
TDX to have separate EPT walks for private GPAs (existing EPTP) versus
shared GPAs (new shared EPTP).

Set the shared EPT pointer value for the TDX guest to initialize the TDX MMU.
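
As a hypothetical toy check only (not from this patch), assuming a 52-bit GPA
width where the topmost GPA bit acts as the shared bit, the snippet below
shows which of the two walks a GPA would take; the bit position is an
assumption for illustration:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Assumption for the sketch: GPAW = 52, so bit 51 is the shared bit. */
  #define GPA_SHARED_BIT	(1ULL << 51)

  /* A GPA with the shared bit set is walked via the shared EPTP. */
  static bool uses_shared_eptp(uint64_t gpa)
  {
  	return gpa & GPA_SHARED_BIT;
  }

  int main(void)
  {
  	uint64_t private_gpa = 0x1000;
  	uint64_t shared_gpa = 0x1000 | GPA_SHARED_BIT;

  	printf("private GPA uses shared EPTP: %d\n", uses_shared_eptp(private_gpa));
  	printf("shared GPA uses shared EPTP:  %d\n", uses_shared_eptp(shared_gpa));
  	return 0;
  }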

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/vmx.h |  1 +
 arch/x86/kvm/vmx/main.c    | 11 ++++++++++-
 arch/x86/kvm/vmx/tdx.c     |  5 +++++
 arch/x86/kvm/vmx/x86_ops.h |  4 ++++
 4 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index f0f8eecf55ac..e169ace97e83 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -234,6 +234,7 @@ enum vmcs_field {
 	TSC_MULTIPLIER_HIGH             = 0x00002033,
 	TERTIARY_VM_EXEC_CONTROL	= 0x00002034,
 	TERTIARY_VM_EXEC_CONTROL_HIGH	= 0x00002035,
+	SHARED_EPT_POINTER		= 0x0000203C,
 	PID_POINTER_TABLE		= 0x00002042,
 	PID_POINTER_TABLE_HIGH		= 0x00002043,
 	GUEST_PHYSICAL_ADDRESS          = 0x00002400,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 9f4c3a0bcc12..252b7298b230 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -110,6 +110,15 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	return vmx_vcpu_reset(vcpu, init_event);
 }
 
+static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
+			int pgd_level)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
+
+	vmx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -228,7 +237,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.write_tsc_offset = vmx_write_tsc_offset,
 	.write_tsc_multiplier = vmx_write_tsc_multiplier,
 
-	.load_mmu_pgd = vmx_load_mmu_pgd,
+	.load_mmu_pgd = vt_load_mmu_pgd,
 
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2772775457b0..24b428b7491d 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -532,6 +532,11 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->kvm->vm_bugged = true;
 }
 
+void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
+{
+	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 7e38c7b756d4..e70f84d29d21 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -144,6 +144,8 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
+
+void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
@@ -161,6 +163,8 @@ static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
+
+static inline void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level) {}
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (51 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 052/102] KVM: TDX: Add load_mmu_pgd method for TDX isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-12  3:47   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 054/102] KVM: TDX: TDP MMU TDX support isaku.yamahata
                   ` (50 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX doesn't need the APIC page that vAPIC depends on, and its callback is a
WARN_ON_ONCE(is_tdx) stub.  To avoid unnecessary overhead and hitting that
WARN_ON_ONCE(), skip requesting KVM_REQ_APIC_PAGE_RELOAD for a TD.

  ------------[ cut here ]------------
  WARNING: CPU: 134 PID: 42205 at arch/x86/kvm/vmx/main.c:696 vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
  Modules linked in: squashfs nls_iso8859_1 nls_cp437 vhost_vsock vhost vhost_iotlb tdx_debug kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd i2c_i801 i2c_smbus i2c_ismt
  CPU: 134 PID: 42205 Comm: tdx_vm_tests Tainted: G        W         5.17.0-rc8 #165 4baba67c36c7c1001d782c47f2964b779a5659c7
  Hardware name: Intel Corporation EAGLESTREAM/EAGLESTREAM, BIOS EGSDCRB1.SYS.0066.D24.2110072326 10/07/2021
  RIP: 0010:vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
  Code: e7 d5 49 8b 1c 24 48 8d bb 78 15 00 00 e8 4c 78 e7 d5 48 83 bb 78 15 00 00 01 74 0d 4c 89 e7 e8 7a 9b fd ff 5b 41 5c 5d c3 90 <0f> 0b 90 5b 41 5c 5d c3 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f
  RSP: 0018:ffa0000027477b68 EFLAGS: 00010246
  RAX: 0000000000000000 RBX: ffa00000572d9000 RCX: ffffffffde6864d4
  RDX: dffffc0000000000 RSI: 0000000000000008 RDI: ffa00000572da578
  RBP: ffa0000027477b78 R08: 0000000000000001 R09: ffe21c006df80008
  R10: ff1100036fc0003f R11: ffe21c006df80007 R12: ff1100036fc00000
  R13: ff1100036fc000d8 R14: ff1100036fc00038 R15: ff1100036fc00000
  FS:  00007fdf1ad32740(0000) GS:ff11000e1ed00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007fdf15f1b000 CR3: 000000011e462005 CR4: 0000000000773ee0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
  PKRU: 55555554
  Call Trace:
   <TASK>
   vcpu_enter_guest+0x145d/0x24d0 [kvm]
   ? inject_pending_event+0x750/0x750 [kvm]
   ? xsaves+0x31/0x40
   ? rcu_read_lock_held_common+0x1e/0x60
   ? rcu_read_lock_sched_held+0x60/0xe0
   ? rcu_read_lock_bh_held+0xc0/0xc0
   kvm_arch_vcpu_ioctl_run+0x25d/0xcc0 [kvm]
   kvm_vcpu_ioctl+0x414/0xa30 [kvm]
   ? kvm_clear_dirty_log_protect+0x4d0/0x4d0 [kvm]
   ? userfaultfd_unmap_prep+0x240/0x240
   ? __up_read+0x17f/0x530
   ? rwsem_wake+0x110/0x110
   ? __do_munmap+0x437/0x7c0
   ? rcu_read_lock_held_common+0x1e/0x60
   ? rcu_read_lock_sched_held+0x60/0xe0
   ? rcu_read_lock_sched_held+0x60/0xe0
   ? __kasan_check_read+0x11/0x20
   ? __fget_light+0xa9/0x100
   __x64_sys_ioctl+0xc0/0x100
   do_syscall_64+0x39/0xc0
   entry_SYSCALL_64_after_hwframe+0x44/0xae
  RIP: 0033:0x7fdf1ae493db
  Code: 0f 1e fa 48 8b 05 b5 7a 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 85 7a 0d 00 f7 d8 64 89 01 48
  RSP: 002b:00007ffcf8bdfb38 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
  RAX: ffffffffffffffda RBX: 00000000006f26d0 RCX: 00007fdf1ae493db
  RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000007
  RBP: 0000000000000000 R08: 0000000000411d36 R09: 0000000000000000
  R10: fffffffffffffb69 R11: 0000000000000246 R12: 0000000000402410
  R13: 00000000006f02b0 R14: 0000000000000000 R15: 0000000000000000
   </TASK>
  irq event stamp: 0
  hardirqs last  enabled at (0): [<0000000000000000>] 0x0
  hardirqs last disabled at (0): [<ffffffffb40c809a>] copy_process+0xaca/0x3270
  softirqs last  enabled at (0): [<ffffffffb40c809a>] copy_process+0xaca/0x3270
  softirqs last disabled at (0): [<0000000000000000>] 0x0
  ---[ end trace 0000000000000000 ]---

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/x86.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f57dfb2a8c9..c90ec611de2f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10042,7 +10042,8 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 	 * Update it when it becomes invalid.
 	 */
 	apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
-	if (start <= apic_address && apic_address < end)
+	if (start <= apic_address && apic_address < end &&
+	    !kvm_gfn_shared_mask(kvm))
 		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 054/102] KVM: TDX: TDP MMU TDX support
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (52 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 055/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA isaku.yamahata
                   ` (49 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Implement the TDP MMU hooks for the TDX backend: TLB flush, TLB shootdown,
propagating private EPT entry changes to the Secure EPT, and freeing Secure
EPT pages.

TLB flush handles both the shared EPT and the private EPT.  It flushes the
shared EPT the same way VMX does, and it also waits for the TDX TLB
shootdown to complete.

The hook to free a Secure EPT page unlinks the page from the Secure EPT so
that the page can be returned to the OS.

The hook that propagates an entry change to the Secure EPT handles two
cases: present -> non-present (zapping) and non-present -> present
(population).  On population, just link the Secure EPT page or the private
guest page into the Secure EPT with a TDX SEAMCALL.

Because the TDP MMU allows concurrent zapping/population, zapping requires a
synchronous TLB shootdown with the EPT entry frozen.  It zaps the Secure EPT
entry, increments the TLB tracking counter, sends an IPI to remote vcpus to
trigger a TLB flush, and then unlinks the private guest page from the Secure
EPT.

For simplicity, batched zapping under the exclusive lock is handled the same
way as concurrent zapping.  Although it's inefficient, it can be optimized
in the future.
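
As a rough illustration only, here is a minimal, single-threaded toy model of
the counter handshake between the zapping initiator and the remote vcpus
described above (tdx_track()/tdx_flush_tlb() below).  It is not kernel code;
the "epoch" variables merely stand in for state the TDX module keeps
internally, and only the atomic counter mirrors the kvm_tdx->tdh_mem_track
field added by this patch:

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int tdh_mem_track;	/* mirrors kvm_tdx->tdh_mem_track */
  static int global_epoch;		/* stands in for the TDX module's global epoch */
  static int vcpu_epoch;		/* stands in for one vcpu's local epoch */

  /* Initiator side: roughly the shape of tdx_track() in this patch. */
  static void track(void)
  {
  	atomic_fetch_add(&tdh_mem_track, 1);	/* announce a TRACK in flight */
  	/* kvm_make_all_cpus_request(KVM_REQ_TLB_FLUSH) would kick vcpus here */
  	global_epoch++;				/* stands in for TDH.MEM.TRACK */
  	atomic_fetch_sub(&tdh_mem_track, 1);	/* release the waiting vcpus */
  }

  /* Remote vcpu side: roughly the shape of tdx_flush_tlb() in this patch. */
  static void flush_tlb(void)
  {
  	while (atomic_load(&tdh_mem_track))	/* wait until TRACK was issued */
  		;
  	if (vcpu_epoch < global_epoch) {	/* TDX module's re-entry check */
  		vcpu_epoch = global_epoch;
  		puts("TLB flushed on the next TD enter");
  	}
  }

  int main(void)
  {
  	track();	/* e.g. after TDH.MEM.RANGE.BLOCK on the zapped range */
  	flush_tlb();	/* every vcpu does this before re-entering the TD */
  	return 0;
  }

Only after that handshake completes may the private guest page actually be
removed, which is why the zap path flushes before TDH.MEM.PAGE.REMOVE.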

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    |  40 ++++-
 arch/x86/kvm/vmx/tdx.c     | 318 +++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     |  21 +++
 arch/x86/kvm/vmx/x86_ops.h |   2 +
 4 files changed, 377 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 252b7298b230..442d89e02459 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -110,6 +110,38 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	return vmx_vcpu_reset(vcpu, init_event);
 }
 
+static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_flush_tlb(vcpu);
+
+	vmx_flush_tlb_all(vcpu);
+}
+
+static void vt_flush_tlb_current(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_flush_tlb(vcpu);
+
+	vmx_flush_tlb_current(vcpu);
+}
+
+static void vt_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_flush_tlb_gva(vcpu, addr);
+}
+
+static void vt_flush_tlb_guest(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_flush_tlb_guest(vcpu);
+}
+
 static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			int pgd_level)
 {
@@ -185,10 +217,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.set_rflags = vmx_set_rflags,
 	.get_if_flag = vmx_get_if_flag,
 
-	.flush_tlb_all = vmx_flush_tlb_all,
-	.flush_tlb_current = vmx_flush_tlb_current,
-	.flush_tlb_gva = vmx_flush_tlb_gva,
-	.flush_tlb_guest = vmx_flush_tlb_guest,
+	.flush_tlb_all = vt_flush_tlb_all,
+	.flush_tlb_current = vt_flush_tlb_current,
+	.flush_tlb_gva = vt_flush_tlb_gva,
+	.flush_tlb_guest = vt_flush_tlb_guest,
 
 	.vcpu_pre_run = vmx_vcpu_pre_run,
 	.vcpu_run = vmx_vcpu_run,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 24b428b7491d..3d578197d567 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -5,7 +5,9 @@
 
 #include "capabilities.h"
 #include "x86_ops.h"
+#include "mmu.h"
 #include "tdx.h"
+#include "vmx.h"
 #include "x86.h"
 
 #undef pr_fmt
@@ -290,6 +292,22 @@ int tdx_vm_init(struct kvm *kvm)
 	int ret, i;
 	u64 err;
 
+	/*
+	 * Because guest TD is protected, VMM can't parse the instruction in TD.
+	 * Instead, guest uses MMIO hypercall.  For unmodified device driver,
+	 * #VE needs to be injected for MMIO and #VE handler in TD converts MMIO
+	 * instruction into MMIO hypercall.
+	 *
+	 * SPTE value for MMIO needs to be setup so that #VE is injected into
+	 * TD instead of triggering EPT MISCONFIG.
+	 * - RWX=0 so that EPT violation is triggered.
+	 * - suppress #VE bit is cleared to inject #VE.
+	 */
+	kvm_mmu_set_mmio_spte_mask(kvm, 0, VMX_EPT_RWX_MASK, 0);
+
+	/* TODO: Enable 2mb and 1gb large page support. */
+	kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
+
 	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
 	kvm->max_vcpus = 0;
 
@@ -374,6 +392,8 @@ int tdx_vm_init(struct kvm *kvm)
 		tdx_mark_td_page_added(&kvm_tdx->tdcs[i]);
 	}
 
+	spin_lock_init(&kvm_tdx->seamcall_lock);
+
 	/*
 	 * Note, TDH_MNG_INIT cannot be invoked here.  TDH_MNG_INIT requires a dedicated
 	 * ioctl() to define the configure CPUID values for the TD.
@@ -537,6 +557,281 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }
 
+static void tdx_unpin_pfn(struct kvm *kvm, kvm_pfn_t pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+
+	put_page(page);
+	WARN_ON(!page_count(page) && to_kvm_tdx(kvm)->hkid > 0);
+}
+
+static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
+					enum pg_level level, kvm_pfn_t pfn)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	hpa_t hpa = pfn_to_hpa(pfn);
+	gpa_t gpa = gfn_to_gpa(gfn);
+	struct tdx_module_output out;
+	u64 err;
+
+	if (WARN_ON_ONCE(is_error_noslot_pfn(pfn) || kvm_is_reserved_pfn(pfn)))
+		return;
+
+	/* TODO: handle large pages. */
+	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
+		return;
+
+	/* To prevent page migration, do nothing on mmu notifier. */
+	get_page(pfn_to_page(pfn));
+
+	if (likely(is_td_finalized(kvm_tdx))) {
+		err = tdh_mem_page_aug(kvm_tdx->tdr.pa, gpa, hpa, &out);
+		if (KVM_BUG_ON(err, kvm)) {
+			pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
+			put_page(pfn_to_page(pfn));
+		}
+		return;
+	}
+}
+
+static void tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level, kvm_pfn_t pfn)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+
+	spin_lock(&kvm_tdx->seamcall_lock);
+	__tdx_sept_set_private_spte(kvm, gfn, level, pfn);
+	spin_unlock(&kvm_tdx->seamcall_lock);
+}
+
+static void tdx_sept_drop_private_spte(
+	struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn);
+	hpa_t hpa = pfn_to_hpa(pfn);
+	hpa_t hpa_with_hkid;
+	struct tdx_module_output out;
+	u64 err = 0;
+
+	/* TODO: handle large pages. */
+	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
+		return;
+
+	spin_lock(&kvm_tdx->seamcall_lock);
+	if (is_hkid_assigned(kvm_tdx)) {
+		err = tdh_mem_page_remove(kvm_tdx->tdr.pa, gpa, tdx_level, &out);
+		if (KVM_BUG_ON(err, kvm)) {
+			pr_tdx_error(TDH_MEM_PAGE_REMOVE, err, &out);
+			goto unlock;
+		}
+
+		hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
+		err = tdh_phymem_page_wbinvd(hpa_with_hkid);
+		if (WARN_ON_ONCE(err)) {
+			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+			goto unlock;
+		}
+	} else
+		/*
+		 * The HKID assigned to this TD was already freed and cache
+		 * was already flushed. We don't have to flush again.
+		 */
+		err = tdx_reclaim_page((unsigned long)__va(hpa), hpa, false, 0);
+
+unlock:
+	spin_unlock(&kvm_tdx->seamcall_lock);
+
+	if (!err)
+		tdx_unpin_pfn(kvm, pfn);
+}
+
+static int tdx_sept_link_private_sp(struct kvm *kvm, gfn_t gfn,
+				    enum pg_level level, void *sept_page)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn);
+	hpa_t hpa = __pa(sept_page);
+	struct tdx_module_output out;
+	u64 err;
+
+	spin_lock(&kvm_tdx->seamcall_lock);
+	err = tdh_mem_sept_add(kvm_tdx->tdr.pa, gpa, tdx_level, hpa, &out);
+	spin_unlock(&kvm_tdx->seamcall_lock);
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_SEPT_ADD, err, &out);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static void tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn);
+	struct tdx_module_output out;
+	u64 err;
+
+	/* For now large page isn't supported yet. */
+	WARN_ON_ONCE(level != PG_LEVEL_4K);
+	spin_lock(&kvm_tdx->seamcall_lock);
+	err = tdh_mem_range_block(kvm_tdx->tdr.pa, gpa, tdx_level, &out);
+	spin_unlock(&kvm_tdx->seamcall_lock);
+	if (KVM_BUG_ON(err, kvm))
+		pr_tdx_error(TDH_MEM_RANGE_BLOCK, err, &out);
+}
+
+/*
+ * TLB shoot down procedure:
+ * There is a global epoch counter and each vcpu has local epoch counter.
+ * - TDH.MEM.RANGE.BLOCK(TDR. level, range) on one vcpu
+ *   This blocks the subsequent creation of TLB translations on that range.
+ *   This corresponds to clearing the present bits (all of RWX) in the EPT entry
+ * - TDH.MEM.TRACK(TDR): advances the epoch counter which is global.
+ * - IPI to remote vcpus
+ * - TDExit and re-entry with TDH.VP.ENTER on remote vcpus
+ * - On re-entry, TDX module compares the local epoch counter with the global
+ *   epoch counter.  If the local epoch counter is older than the global epoch
+ *   counter, update the local epoch counter and flushes TLB.
+ */
+static void tdx_track(struct kvm_tdx *kvm_tdx)
+{
+	u64 err;
+
+	WARN_ON(!is_hkid_assigned(kvm_tdx));
+	/* If TD isn't finalized, it's before any vcpu running. */
+	if (unlikely(!is_td_finalized(kvm_tdx)))
+		return;
+
+	/*
+	 * tdx_flush_tlb() waits for this function to issue TDH.MEM.TRACK() by
+	 * the counter.  The counter is used instead of bool because multiple
+	 * TDH_MEM_TRACK() can be issued concurrently by multiple vcpus.
+	 */
+	atomic_inc(&kvm_tdx->tdh_mem_track);
+	/*
+	 * KVM_REQ_TLB_FLUSH waits for the empty IPI handler, ack_flush(), with
+	 * KVM_REQUEST_WAIT.
+	 */
+	kvm_make_all_cpus_request(&kvm_tdx->kvm, KVM_REQ_TLB_FLUSH);
+
+	spin_lock(&kvm_tdx->seamcall_lock);
+	err = tdh_mem_track(kvm_tdx->tdr.pa);
+	spin_unlock(&kvm_tdx->seamcall_lock);
+
+	/* Release remote vcpu waiting for TDH.MEM.TRACK in tdx_flush_tlb(). */
+	atomic_dec(&kvm_tdx->tdh_mem_track);
+
+	if (KVM_BUG_ON(err, &kvm_tdx->kvm))
+		pr_tdx_error(TDH_MEM_TRACK, err, NULL);
+
+}
+
+static int tdx_sept_free_private_sp(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				    void *sept_page)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	int ret;
+
+	/*
+	 * free_private_sp() is (obviously) called when a shadow page is being
+	 * zapped.  KVM doesn't (yet) zap private SPs while the TD is active.
+	 * Note: This function is for a private shadow page, not for a private
+	 * guest page.  A private guest page can be zapped while the TD is active,
+	 * e.g. on shared <-> private conversion and slot move/deletion.
+	 *
+	 * TODO: large page support.  If large page is supported, S-EPT page
+	 * can be freed when promoting 4K page to 2M/1G page during TD running.
+	 * In such case, flush cache and TDH.PAGE.RECLAIM.
+	 */
+	if (KVM_BUG_ON(is_hkid_assigned(to_kvm_tdx(kvm)), kvm))
+		return -EINVAL;
+
+	/*
+	 * The HKID assigned to this TD was already freed and cache was
+	 * already flushed. We don't have to flush again.
+	 */
+	spin_lock(&kvm_tdx->seamcall_lock);
+	ret = tdx_reclaim_page((unsigned long)sept_page, __pa(sept_page), false, 0);
+	spin_unlock(&kvm_tdx->seamcall_lock);
+
+	return ret;
+}
+
+static int tdx_sept_tlb_remote_flush(struct kvm *kvm)
+{
+	struct kvm_tdx *kvm_tdx;
+
+	if (!is_td(kvm))
+		return -EOPNOTSUPP;
+
+	kvm_tdx = to_kvm_tdx(kvm);
+	if (is_hkid_assigned(kvm_tdx))
+		tdx_track(kvm_tdx);
+
+	return 0;
+}
+
+static void tdx_handle_changed_private_spte(
+	struct kvm *kvm, const struct kvm_spte_change *change)
+{
+	const gfn_t gfn = change->gfn;
+	const enum pg_level level = change->level;
+
+	WARN_ON(!is_td(kvm));
+	lockdep_assert_held(&kvm->mmu_lock);
+
+	if (change->new.is_present) {
+		/* TDP MMU doesn't change present -> present */
+		WARN_ON(change->old.is_present);
+
+		/*
+		 * Use different call to either set up middle level
+		 * private page table, or leaf.
+		 */
+		if (change->new.is_leaf)
+			tdx_sept_set_private_spte(
+				kvm, gfn, level, change->new.pfn);
+		else {
+			WARN_ON(!change->sept_page);
+			if (tdx_sept_link_private_sp(
+				    kvm, gfn, level, change->sept_page))
+				/* failed to update Secure-EPT.  */
+				WARN_ON(1);
+		}
+	} else if (change->old.is_leaf) {
+		/* non-present -> non-present doesn't make sense. */
+		WARN_ON(!change->old.is_present);
+
+		/*
+		 * Zap private leaf SPTE.  Zapping private table is done
+		 * below in handle_removed_tdp_mmu_page().
+		 */
+		tdx_sept_zap_private_spte(kvm, gfn, level);
+
+		/*
+		 * TDX requires TLB tracking before dropping private page.  Do
+		 * it here, although it is also done later.
+		 * If hkid isn't assigned, the guest is destroying and no vcpu
+		 * runs further.  TLB shootdown isn't needed.
+		 *
+		 * TODO: implement with_range version for optimization.
+		 * kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		 *   => tdx_sept_tlb_remote_flush_with_range(kvm, gfn,
+		 *                                 KVM_PAGES_PER_HPAGE(level));
+		 */
+		if (is_hkid_assigned(to_kvm_tdx(kvm)))
+			kvm_flush_remote_tlbs(kvm);
+
+		tdx_sept_drop_private_spte(kvm, gfn, level, change->old.pfn);
+	}
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
@@ -786,6 +1081,25 @@ static int tdx_td_init(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	return ret;
 }
 
+void tdx_flush_tlb(struct kvm_vcpu *vcpu)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
+	u64 root_hpa = mmu->root.hpa;
+
+	/* Flush the shared EPTP, if it's valid. */
+	if (VALID_PAGE(root_hpa))
+		ept_sync_context(construct_eptp(vcpu, root_hpa,
+						mmu->root_role.level));
+
+	/*
+	 * See tdx_track().  Wait for the TLB shootdown initiator to finish
+	 * TDH_MEM_TRACK() so that TLB is flushed on the next TDENTER.
+	 */
+	while (atomic_read(&kvm_tdx->tdh_mem_track))
+		cpu_relax();
+}
+
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_tdx_cmd tdx_cmd;
@@ -927,6 +1241,10 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 	pr_info("kvm: TDX is supported. hkid start pos %d mask 0x%llx\n",
 		hkid_start_pos, hkid_mask);
 
+	x86_ops->tlb_remote_flush = tdx_sept_tlb_remote_flush;
+	x86_ops->free_private_sp = tdx_sept_free_private_sp;
+	x86_ops->handle_changed_private_spte = tdx_handle_changed_private_spte;
+
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 337c3adb4fcf..d8dcbedd690b 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -26,9 +26,24 @@ struct kvm_tdx {
 	int hkid;
 
 	bool finalized;
+	atomic_t tdh_mem_track;
 
 	u64 tsc_offset;
 	unsigned long tsc_khz;
+
+	/*
+	 * Some SEAMCALLs try to lock TD resources (e.g. Secure-EPT) they use or
+	 * update.  If TDX module fails to obtain the lock, it returns
+	 * TDX_OPERAND_BUSY error without spinning.  It's VMM/OS responsibility
+	 * to retry or guarantee no contention because TDX module has the
+	 * restriction on cpu cycles it can spend and VMM/OS knows better
+	 * vcpu scheduling.
+	 *
+	 * TDP MMU uses read lock of kvm.arch.mmu_lock so TDP MMU code can be
+	 * run concurrently with multiple vCPUs.   Lock to prevent seamcalls from
+	 * running concurrently when TDP MMU is enabled.
+	 */
+	spinlock_t seamcall_lock;
 };
 
 struct vcpu_tdx {
@@ -169,6 +184,12 @@ static __always_inline u64 td_tdcs_exec_read64(struct kvm_tdx *kvm_tdx, u32 fiel
 	return out.r8;
 }
 
+static __always_inline int pg_level_to_tdx_sept_level(enum pg_level level)
+{
+	WARN_ON(level == PG_LEVEL_NONE);
+	return level - 1;
+}
+
 #else
 static inline int tdx_module_setup(void) { return -ENODEV; };
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index e70f84d29d21..2c55aea8963f 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -145,6 +145,7 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
 
+void tdx_flush_tlb(struct kvm_vcpu *vcpu);
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
@@ -164,6 +165,7 @@ static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 
+static inline void tdx_flush_tlb(struct kvm_vcpu *vcpu) {}
 static inline void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level) {}
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 055/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (53 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 054/102] KVM: TDX: TDP MMU TDX support isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 056/102] KVM: x86/mmu: steal software usable bit to record if GFN is for shared or not isaku.yamahata
                   ` (48 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of KVM TDP MMU
MapGPA.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index c3e675bea802..5797d172176d 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -11,6 +11,7 @@ What qemu can do
 - TDX VM TYPE is exposed to Qemu.
 - Qemu can create/destroy guest of TDX vm type.
 - Qemu can create/destroy vcpu of TDX vm type.
+- Qemu can populate initial guest memory image.
 
 Patch Layer status
 ------------------
@@ -19,7 +20,7 @@ Patch Layer status
 * TDX architectural definitions:        Applied
 * TD VM creation/destruction:           Applied
 * TD vcpu creation/destruction:         Applied
-* TDX EPT violation:                    Applying
+* TDX EPT violation:                    Applied
 * TD finalization:                      Not yet
 * TD vcpu enter/exit:                   Not yet
 * TD vcpu interrupts/exit/hypercall:    Not yet
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 056/102] KVM: x86/mmu: steal software usable bit to record if GFN is for shared or not
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (54 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 055/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-07-18  8:37   ` Yuan Yao
  2022-06-27 21:53 ` [PATCH v7 057/102] KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX isaku.yamahata
                   ` (47 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

With TDX, all GFNs are private at guest boot time.  At run time the guest TD
can explicitly convert a GFN from private to shared, or vice versa, with the
MapGPA hypercall.  Once specified, the given GFN can't be used the other
way: if the guest tells KVM that a GFN is shared, it can't be used as
private, and vice versa.

Steal a software-usable bit, SPTE_SHARED_MASK, from the MMIO generation
counter to record this.  Use SPTE_SHARED_MASK in the shared or private EPT
to determine which mapping, shared or private, is allowed.  If the requested
mapping isn't allowed, return RET_PF_RETRY to wait for another vcpu to
change it.  The bit is recorded in both the shared and the private shadow
page to avoid traversing one more shadow page when resolving a KVM page
fault.

The bit needs to be preserved across zapping of the EPT entry.  Currently a
zapped EPT entry is unconditionally initialized to SHADOW_NONPRESENT_VALUE,
which clears the SPTE_SHARED_MASK bit.  To carry the SPTE_SHARED_MASK bit,
introduce a helper function that returns the initial value for a zapped
entry with the bit preserved, and replace SHADOW_NONPRESENT_VALUE with it.
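
As a minimal stand-alone sketch (not kernel code) of what the new helper
preserves, the snippet below mimics spte_shared_mask() and
shadow_nonpresent_spte() from this patch; SHADOW_NONPRESENT_VALUE is a
placeholder value here, and only the bit-62 handling matters:

  #include <stdint.h>
  #include <stdio.h>

  #define BIT_ULL(n)			(1ULL << (n))
  #define SPTE_SHARED_MASK		BIT_ULL(62)	/* as defined in this patch */
  #define SHADOW_NONPRESENT_VALUE	BIT_ULL(63)	/* placeholder for the sketch */

  static uint64_t spte_shared_mask(uint64_t spte)
  {
  	return spte & SPTE_SHARED_MASK;
  }

  /* Initial value for a zapped SPTE that keeps the shared/private intent. */
  static uint64_t shadow_nonpresent_spte(uint64_t old_spte)
  {
  	return SHADOW_NONPRESENT_VALUE | spte_shared_mask(old_spte);
  }

  int main(void)
  {
  	/* A mapped SPTE whose GFN the guest has declared shared. */
  	uint64_t old_spte = 0x123000ULL | SPTE_SHARED_MASK;
  	uint64_t zapped = shadow_nonpresent_spte(old_spte);

  	/* The mapping is gone, but bit 62 still says "shared is allowed". */
  	printf("zapped=%#llx shared_allowed=%d\n",
  	       (unsigned long long)zapped, spte_shared_mask(zapped) != 0);
  	return 0;
  }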

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/spte.h    | 17 +++++++---
 arch/x86/kvm/mmu/tdp_mmu.c | 65 ++++++++++++++++++++++++++++++++------
 2 files changed, 68 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 96312ab4fffb..7c1aaf0e963e 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -14,6 +14,9 @@
  */
 #define SPTE_MMU_PRESENT_MASK		BIT_ULL(11)
 
+/* Mask used to track shared GPA */
+#define SPTE_SHARED_MASK		BIT_ULL(62)
+
 /*
  * TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also
  * be restricted to using write-protection (for L2 when CPU dirty logging, i.e.
@@ -104,7 +107,7 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
  * the memslots generation and is derived as follows:
  *
  * Bits 0-7 of the MMIO generation are propagated to spte bits 3-10
- * Bits 8-18 of the MMIO generation are propagated to spte bits 52-62
+ * Bits 8-17 of the MMIO generation are propagated to spte bits 52-61
  *
  * The KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag is intentionally not included in
  * the MMIO generation number, as doing so would require stealing a bit from
@@ -118,7 +121,7 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
 #define MMIO_SPTE_GEN_LOW_END		10
 
 #define MMIO_SPTE_GEN_HIGH_START	52
-#define MMIO_SPTE_GEN_HIGH_END		62
+#define MMIO_SPTE_GEN_HIGH_END		61
 
 #define MMIO_SPTE_GEN_LOW_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \
 						    MMIO_SPTE_GEN_LOW_START)
@@ -131,7 +134,7 @@ static_assert(!(SPTE_MMU_PRESENT_MASK &
 #define MMIO_SPTE_GEN_HIGH_BITS		(MMIO_SPTE_GEN_HIGH_END - MMIO_SPTE_GEN_HIGH_START + 1)
 
 /* remember to adjust the comment above as well if you change these */
-static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
+static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 10);
 
 #define MMIO_SPTE_GEN_LOW_SHIFT		(MMIO_SPTE_GEN_LOW_START - 0)
 #define MMIO_SPTE_GEN_HIGH_SHIFT	(MMIO_SPTE_GEN_HIGH_START - MMIO_SPTE_GEN_LOW_BITS)
@@ -208,6 +211,7 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
 static_assert(!(__REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
 static_assert(!(__REMOVED_SPTE & SHADOW_NONPRESENT_VALUE));
+static_assert(!(__REMOVED_SPTE & SPTE_SHARED_MASK));
 
 /*
  * See above comment around __REMOVED_SPTE.  REMOVED_SPTE is the actual
@@ -217,7 +221,12 @@ static_assert(!(__REMOVED_SPTE & SHADOW_NONPRESENT_VALUE));
 
 static inline bool is_removed_spte(u64 spte)
 {
-	return spte == REMOVED_SPTE;
+	return (spte & ~SPTE_SHARED_MASK) == REMOVED_SPTE;
+}
+
+static inline u64 spte_shared_mask(u64 spte)
+{
+	return spte & SPTE_SHARED_MASK;
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index fef6246086a8..4f279700b3cc 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -758,6 +758,11 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 	return 0;
 }
 
+static u64 shadow_nonpresent_spte(u64 old_spte)
+{
+	return SHADOW_NONPRESENT_VALUE | spte_shared_mask(old_spte);
+}
+
 static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 					  struct tdp_iter *iter)
 {
@@ -791,7 +796,8 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	 * SHADOW_NONPRESENT_VALUE (which sets "suppress #VE" bit) so it
 	 * can be set when EPT table entries are zapped.
 	 */
-	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
+	__kvm_tdp_mmu_write_spte(iter->sptep,
+			       shadow_nonpresent_spte(iter->old_spte));
 
 	return 0;
 }
@@ -975,8 +981,11 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 
 		if (!shared)
-			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
-		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
+			tdp_mmu_set_spte(kvm, &iter,
+					 shadow_nonpresent_spte(iter.old_spte));
+		else if (tdp_mmu_set_spte_atomic(
+				 kvm, &iter,
+				 shadow_nonpresent_spte(iter.old_spte)))
 			goto retry;
 	}
 }
@@ -1033,7 +1042,8 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 		return false;
 
 	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
-			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
+			   shadow_nonpresent_spte(old_spte),
+			   sp->gfn, sp->role.level + 1,
 			   true, true, is_private_sp(sp));
 
 	return true;
@@ -1075,11 +1085,20 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 		}
 
+		/*
+		 * SPTE_SHARED_MASK is stored at 4K granularity.  The
+		 * information is lost if we delete an upper-level SPTE page.
+		 * TODO: support large page.
+		 */
+		if (kvm_gfn_shared_mask(kvm) && iter.level > PG_LEVEL_4K)
+			continue;
+
 		if (!is_shadow_present_pte(iter.old_spte) ||
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
+		tdp_mmu_set_spte(kvm, &iter,
+				 shadow_nonpresent_spte(iter.old_spte));
 		flush = true;
 	}
 
@@ -1195,18 +1214,44 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
 
 	WARN_ON(sp->role.level != fault->goal_level);
+	WARN_ON(is_private_sptep(iter->sptep) != fault->is_private);
 
-	/* TDX shared GPAs are no executable, enforce this for the SDV. */
-	if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
-		pte_access &= ~ACC_EXEC_MASK;
+	if (kvm_gfn_shared_mask(vcpu->kvm)) {
+		if (fault->is_private) {
+			 * The SPTE allows only an RWX mapping.  The PFN can't be
+			 * mapped as read-only in the GPA.
+			 * as READONLY in GPA.
+			 */
+			if (fault->slot && !fault->map_writable)
+				return RET_PF_RETRY;
+			/*
+			 * This GPA is not allowed to map as private.  Let
+			 * vcpu loop in page fault until other vcpu change it
+			 * by MapGPA hypercall.
+			 */
+			if (fault->slot &&
+				spte_shared_mask(iter->old_spte))
+				return RET_PF_RETRY;
+		} else {
+			/* This GPA is not allowed to map as shared. */
+			if (fault->slot &&
+				!spte_shared_mask(iter->old_spte))
+				return RET_PF_RETRY;
+			/* TDX shared GPAs are not executable, enforce this. */
+			pte_access &= ~ACC_EXEC_MASK;
+		}
+	}
 
 	if (unlikely(!fault->slot))
 		new_spte = make_mmio_spte(vcpu, gfn_unalias, pte_access);
-	else
+	else {
 		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
 				   gfn_unalias, fault->pfn, iter->old_spte,
 				   fault->prefetch, true, fault->map_writable,
 				   &new_spte);
+		if (spte_shared_mask(iter->old_spte))
+			new_spte |= SPTE_SHARED_MASK;
+	}
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
@@ -1509,7 +1554,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * invariant that the PFN of a present * leaf SPTE can never change.
 	 * See __handle_changed_spte().
 	 */
-	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
+	tdp_mmu_set_spte(kvm, iter, shadow_nonpresent_spte(iter->old_spte));
 
 	if (!pte_write(range->pte)) {
 		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 057/102] KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (55 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 056/102] KVM: x86/mmu: steal software usable bit to record if GFN is for shared or not isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 058/102] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX isaku.yamahata
                   ` (46 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

The TDX Guest-Hypervisor Communication Interface (GHCI) specification
defines the MapGPA hypercall, which the guest TD uses to request that the
host VMM map a given GPA range as private or shared.

It means the guest TD will use the GPA as shared (or private) and won't use
it as private (or shared).  The VMM should enforce that usage; it doesn't
have to actually map the GPA on the hypercall request.

- Allocate 4k PTE to record SPTE_SHARED_MASK bit.

- Zap the aliased region.
  If shared (or private) GPA is requested, zap private (or shared) GPA
  (modulo shared bit).

- Record the request GPA is shared (or private) by SPTE_SHARED_MASK in SPTE
  in both shared and private EPT tables.
  - With SPTE_SHARED_MASK set, a shared GPA is allowed.
  - With SPTE_SHARED_MASK cleared, a private GPA is allowed.

  The reason to record SPTE_SHARED_MASK in both the shared and the private
  EPT is to optimize the EPT violation path for normal guest TD execution,
  at the cost of the MapGPA hypercall path.

  If the guest TD faults on a GPA that isn't allowed (modulo the shared
  bit), KVM doesn't resolve the EPT violation and lets the vcpu retry.  The
  vcpu will keep faulting until another vcpu maps the region with the MapGPA
  hypercall.  In the non-present SPTE value (shadow_nonpresent_value),
  SPTE_SHARED_MASK is cleared, so the default behavior doesn't change.  A
  toy model of these rules is sketched after this list.

- don't map GPA.
  The GPA is mapped on the next EPT violation.
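
As a rough stand-alone sketch only (not kernel code), the toy model below
applies the recording rules above to a single 4K GFN tracked by a pair of
mirrored leaf SPTEs.  The values and names are simplified stand-ins for the
ones added in the diff; in particular SHADOW_NONPRESENT_VALUE is just a
placeholder here:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SPTE_SHARED_MASK		(1ULL << 62)
  #define SHADOW_NONPRESENT_VALUE	(1ULL << 63)	/* placeholder non-present value */

  struct toy_gfn {
  	uint64_t private_spte;	/* leaf SPTE in the private EPT */
  	uint64_t shared_spte;	/* leaf SPTE in the shared EPT */
  };

  /* MapGPA(allow_private): update both mirrors and zap the now-forbidden alias. */
  static void map_gpa(struct toy_gfn *g, bool allow_private)
  {
  	if (allow_private) {
  		g->private_spte &= ~SPTE_SHARED_MASK;		/* private is now allowed */
  		g->shared_spte = SHADOW_NONPRESENT_VALUE;	/* zap the shared alias */
  	} else {
  		g->private_spte = SHADOW_NONPRESENT_VALUE | SPTE_SHARED_MASK; /* zap private */
  		g->shared_spte |= SPTE_SHARED_MASK;		/* shared is now allowed */
  	}
  }

  /* EPT-violation side: a private fault only proceeds when the bit is clear. */
  static bool private_fault_allowed(const struct toy_gfn *g)
  {
  	return !(g->private_spte & SPTE_SHARED_MASK);	/* otherwise RET_PF_RETRY */
  }

  int main(void)
  {
  	struct toy_gfn g = { SHADOW_NONPRESENT_VALUE, SHADOW_NONPRESENT_VALUE };

  	map_gpa(&g, false);	/* guest converts the GFN to shared */
  	printf("private fault allowed: %d\n", private_fault_allowed(&g));
  	map_gpa(&g, true);	/* guest converts it back to private */
  	printf("private fault allowed: %d\n", private_fault_allowed(&g));
  	return 0;
  }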

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu.h         |   3 +
 arch/x86/kvm/mmu/mmu.c     | 106 +++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c | 271 ++++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/tdp_mmu.h |   5 +
 4 files changed, 382 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 9ba60fd79d33..f5edf2e58dba 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -225,6 +225,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
 
 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
 
+int kvm_mmu_map_gpa(struct kvm_vcpu *vcpu, gfn_t *startp, gfn_t end,
+		    bool allow_private);
+
 int kvm_mmu_post_init_vm(struct kvm *kvm);
 void kvm_mmu_pre_destroy_vm(struct kvm *kvm);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ef925722ee28..a777a1d4278c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6323,6 +6323,112 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 	}
 }
 
+static int kvm_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, gfn_t start, gfn_t end)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memslots *slots;
+	struct kvm_memslot_iter iter;
+	int ret = 0;
+
+	/* No need to populate as mmu_map_gpa() handles single GPA. */
+	if (!is_tdp_mmu_enabled(kvm))
+		return 0;
+
+	slots = __kvm_memslots(kvm, 0 /* only normal ram. not SMM. */);
+	kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t s = max(start, memslot->base_gfn);
+		gfn_t e = min(end, memslot->base_gfn + memslot->npages);
+
+		if (WARN_ON_ONCE(s >= e))
+			continue;
+
+		ret = kvm_tdp_mmu_populate_nonleaf(vcpu, kvm_gfn_private(kvm, s),
+						kvm_gfn_private(kvm, e), true, false);
+		if (ret)
+			break;
+		ret = kvm_tdp_mmu_populate_nonleaf(vcpu, kvm_gfn_shared(kvm, s),
+						kvm_gfn_shared(kvm, e), false, false);
+		if (ret)
+			break;
+	}
+	return ret;
+}
+
+int kvm_mmu_map_gpa(struct kvm_vcpu *vcpu, gfn_t *startp, gfn_t end,
+		bool allow_private)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memslots *slots;
+	struct kvm_memslot_iter iter;
+	gfn_t start = *startp;
+	int ret;
+
+	if (!kvm_gfn_shared_mask(kvm))
+		return -EOPNOTSUPP;
+
+	start = start & ~kvm_gfn_shared_mask(kvm);
+	end = end & ~kvm_gfn_shared_mask(kvm);
+
+	/*
+	 * Allocate S-EPT pages first so that the operations leaf SPTE entry
+	 * can be done without memory allocation.
+	 */
+	while (true) {
+		ret = mmu_topup_memory_caches(vcpu, false);
+		if (ret)
+			return ret;
+
+		mutex_lock(&kvm->slots_lock);
+		write_lock(&kvm->mmu_lock);
+
+		ret = kvm_mmu_populate_nonleaf(vcpu, start, end);
+		if (!ret)
+			break;
+
+		write_unlock(&kvm->mmu_lock);
+		mutex_unlock(&kvm->slots_lock);
+		if (ret == -EAGAIN) {
+			if (need_resched())
+				cond_resched();
+			continue;
+		}
+		return ret;
+	}
+
+	slots = __kvm_memslots(kvm, 0 /* only normal ram. not SMM. */);
+	kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t s = max(start, memslot->base_gfn);
+		gfn_t e = min(end, memslot->base_gfn + memslot->npages);
+
+		if (WARN_ON_ONCE(s >= e))
+			continue;
+		if (is_tdp_mmu_enabled(kvm)) {
+			ret = kvm_tdp_mmu_map_gpa(vcpu, &s, e, allow_private);
+			if (ret) {
+				start = s;
+				break;
+			}
+		} else {
+			ret = -EOPNOTSUPP;
+			break;
+		}
+	}
+
+	write_unlock(&kvm->mmu_lock);
+	mutex_unlock(&kvm->slots_lock);
+
+	if (ret == -EAGAIN) {
+		if (allow_private)
+			*startp = kvm_gfn_private(kvm, start);
+		else
+			*startp = kvm_gfn_shared(kvm, start);
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_map_gpa);
+
 static unsigned long
 mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 4f279700b3cc..c99f2c9a86dc 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -680,6 +680,13 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 		}
 		change.sept_page = sept_page;
 
+		/*
+		 * SPTE_SHARED_MASK is only changed by map_gpa that obtains
+		 * write lock of mmu_lock.
+		 */
+		WARN_ON(shared &&
+			(spte_shared_mask(old_spte) !=
+				spte_shared_mask(new_spte)));
 		static_call(kvm_x86_handle_changed_private_spte)(kvm, &change);
 	}
 }
@@ -1324,7 +1331,8 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 	return 0;
 }
 
-static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
+static int tdp_mmu_populate_nonleaf(
+	struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx, bool shared)
 {
 	struct kvm_mmu_page *sp;
 	int ret;
@@ -1335,7 +1343,7 @@ static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter
 	sp = tdp_mmu_alloc_sp(vcpu, iter->is_private, false);
 	tdp_mmu_init_child_sp(sp, iter);
 
-	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
+	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, shared);
 	if (ret)
 		tdp_mmu_free_sp(sp);
 	return ret;
@@ -1411,7 +1419,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			if (is_removed_spte(iter.old_spte))
 				break;
 
-			if (tdp_mmu_populate_nonleaf(vcpu, &iter, account_nx))
+			if (tdp_mmu_populate_nonleaf(vcpu, &iter, account_nx, true))
 				break;
 		}
 	}
@@ -2143,6 +2151,263 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 	return spte_set;
 }
 
+/*
+ * Allocate shadow page table for given gfn so that the following operations
+ * on sptes can be done without memory allocation.
+ */
+int kvm_tdp_mmu_populate_nonleaf(
+	struct kvm_vcpu *vcpu, gfn_t start, gfn_t end, bool is_private, bool shared)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct tdp_iter iter;
+	int ret = 0;
+
+	kvm_lockdep_assert_mmu_lock_held(kvm, false);
+	rcu_read_lock();
+	tdp_mmu_for_each_pte(iter, vcpu->arch.mmu, is_private, start, end) {
+		if (iter.level == PG_LEVEL_4K)
+			continue;
+		if (is_shadow_present_pte(iter.old_spte) &&
+			is_large_pte(iter.old_spte)) {
+			/* TODO: large page support. */
+			WARN_ON_ONCE(true);
+			return -ENOSYS;
+		}
+
+		if (is_shadow_present_pte(iter.old_spte))
+			continue;
+
+		/*
+		 * Guarantee that alloc_tdp_mmu_page() succeeds, as it assumes
+		 * page allocation from the cache always succeeds.
+		 */
+		if (vcpu->arch.mmu_page_header_cache.nobjs == 0 ||
+			vcpu->arch.mmu_shadow_page_cache.nobjs == 0 ||
+			vcpu->arch.mmu_private_sp_cache.nobjs == 0) {
+			ret = -EAGAIN;
+			break;
+		}
+
+		/*
+		 * write lock of mmu_lock is held.  No other thread
+		 * freezes SPTE.
+		 */
+		ret = tdp_mmu_populate_nonleaf(vcpu, &iter, false, shared);
+		if (ret) {
+			/* As the write lock is held, this case shouldn't happen. */
+			WARN_ON_ONCE(true);
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return ret;
+}
+
+typedef void (*update_spte_t)(
+	struct kvm *kvm, struct tdp_iter *iter, bool allow_private);
+
+static int kvm_tdp_mmu_update_range(struct kvm_vcpu *vcpu, bool is_private,
+				gfn_t start, gfn_t end, gfn_t *nextp,
+				update_spte_t fn, bool allow_private)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct tdp_iter iter;
+	int ret = 0;
+
+	rcu_read_lock();
+	tdp_mmu_for_each_pte(iter, vcpu->arch.mmu, is_private, start, end) {
+		if (iter.level == PG_LEVEL_4K) {
+			fn(kvm, &iter, allow_private);
+			continue;
+		}
+
+		/*
+		 * Which GPA is allowed, private or shared, is recorded at 4K
+		 * granularity in the private leaf spte as SPTE_SHARED_MASK.
+		 * Break large page into 4K.
+		 */
+		if (is_shadow_present_pte(iter.old_spte) &&
+			is_large_pte(iter.old_spte)) {
+			/*
+			 * TODO: large page support.
+			 * Doesn't support large page for TDX now
+			 */
+			WARN_ON_ONCE(true);
+			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
+			iter.old_spte = kvm_tdp_mmu_read_spte(iter.sptep);
+		}
+
+		if (!is_shadow_present_pte(iter.old_spte)) {
+			/*
+			 * Guarantee that alloc_tdp_mmu_page() succeeds, as it assumes
+			 * page allocation from the cache always succeeds.
+			 */
+			if (vcpu->arch.mmu_page_header_cache.nobjs == 0 ||
+				vcpu->arch.mmu_shadow_page_cache.nobjs == 0 ||
+				vcpu->arch.mmu_private_sp_cache.nobjs == 0) {
+				ret = -EAGAIN;
+				break;
+			}
+			/*
+			 * write lock of mmu_lock is held.  No other thread
+			 * freezes SPTE.
+			 */
+			ret = tdp_mmu_populate_nonleaf(vcpu, &iter, false, false);
+			if (ret) {
+				/* As the write lock is held, this case shouldn't happen. */
+				WARN_ON_ONCE(true);
+				break;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	if (ret == -EAGAIN)
+		*nextp = iter.next_last_level_gfn;
+
+	return ret;
+}
+
+static void kvm_tdp_mmu_update_shared_spte(
+	struct kvm *kvm, struct tdp_iter *iter, bool allow_private)
+{
+	u64 new_spte;
+
+	WARN_ON(iter->is_private);
+	if (allow_private) {
+		/* Zap SPTE and clear SPTE_SHARED_MASK */
+		new_spte = SHADOW_NONPRESENT_VALUE;
+		if (new_spte != iter->old_spte)
+			tdp_mmu_set_spte(kvm, iter, new_spte);
+	} else {
+		new_spte = iter->old_spte | SPTE_SHARED_MASK;
+		/* No side effect is needed */
+		if (new_spte != iter->old_spte)
+			__kvm_tdp_mmu_write_spte(iter->sptep, new_spte);
+	}
+}
+
+static void kvm_tdp_mmu_update_private_spte(
+	struct kvm *kvm, struct tdp_iter *iter, bool allow_private)
+{
+	u64 new_spte;
+
+	WARN_ON(!iter->is_private);
+	if (allow_private) {
+		new_spte = iter->old_spte & ~SPTE_SHARED_MASK;
+		/* No side effect is needed */
+		if (new_spte != iter->old_spte)
+			__kvm_tdp_mmu_write_spte(iter->sptep, new_spte);
+	} else {
+		if (is_shadow_present_pte(iter->old_spte)) {
+			/* Zap SPTE */
+			new_spte = shadow_nonpresent_spte(iter->old_spte) |
+				SPTE_SHARED_MASK;
+			if (new_spte != iter->old_spte)
+				tdp_mmu_set_spte(kvm, iter, new_spte);
+		} else {
+			new_spte = iter->old_spte | SPTE_SHARED_MASK;
+			/* No side effect is needed */
+			if (new_spte != iter->old_spte)
+				__kvm_tdp_mmu_write_spte(iter->sptep, new_spte);
+		}
+	}
+}
+
+/*
+ * Whether GPA is allowed to map private or shared is recorded in both private
+ * and shared leaf spte entry as SPTE_SHARED_MASK bit.  They must match.
+ * private leaf spte entry
+ * - present: private mapping is allowed. (already mapped)
+ * - non-present: private mapping is allowed.
+ * - present | SPTE_SHARED_MASK: invalid state.
+ * - non-present | SPTE_SHARED_MASK: shared mapping is allowed.
+ *                                        may or may not be mapped as shared.
+ * shared leaf spte entry
+ * - present: invalid state
+ * - non-present: private mapping is allowed.
+ * - present | SPTE_SHARED_MASK: shared mapping is allowed (already mapped)
+ * - non-present | SPTE_SHARED_MASK: shared mapping is allowed.
+ *
+ * state change of private spte:
+ * map_gpa(private):
+ *      private EPT entry: clear SPTE_SHARED_MASK
+ *	  present: nop
+ *	  non-present: nop
+ *	  non-present | SPTE_SHARED_MASK -> non-present
+ *	share EPT entry: zap and clear SPTE_SHARED_MASK
+ *	  any -> non-present
+ * map_gpa(shared):
+ *	private EPT entry: zap and set SPTE_SHARED_MASK
+ *	  present     -> non-present | SPTE_SHARED_MASK
+ *	  non-present -> non-present | SPTE_SHARED_MASK
+ *	  non-present | SPTE_SHARED_MASK: nop
+ *	shared EPT entry: set SPTE_SHARED_MASK
+ *	  present | SPTE_SHARED_MASK: nop
+ *	  non-present -> non-present | SPTE_SHARED_MASK
+ *	  non-present | SPTE_SHARED_MASK: nop
+ * map(private GPA):
+ *	private EPT entry: try to populate
+ *	  present: nop
+ *	  non-present -> present
+ *	  non-present | SPTE_SHARED_MASK: nop. looping on EPT violation
+ *	shared EPT entry: nop
+ * map(shared GPA):
+ *	private EPT entry: nop
+ *	shared EPT entry: populate
+ *	  present | SPTE_SHARED_MASK: nop
+ *	  non-present | SPTE_SHARED_MASK -> present | SPTE_SHARED_MASK
+ *	  non-present: nop. looping on EPT violation
+ * zap(private GPA):
+ *	private EPT entry: zap and keep SPTE_SHARED_MASK
+ *	  present | SPTE_SHARED_MASK -> non-present | SPTE_SHARED_MASK
+ *	  non-present: nop as is_shadow_present_pte() is checked
+ *	  non-present | SPTE_SHARED_MASK: nop by is_shadow_present_pte()
+ *	shared EPT entry: nop
+ * zap(shared GPA):
+ *	private EPT entry: nop
+ *	shared EPT entry: zap and keep SPTE_SHARED_MASK
+ *	  present | SPTE_SHARED_MASK -> non-present | SPTE_SHARED_MASK
+ *	  non-present | SPTE_SHARED_MASK: nop
+ *	  non-present: nop.
+ */
+int kvm_tdp_mmu_map_gpa(struct kvm_vcpu *vcpu,
+			gfn_t *startp, gfn_t end, bool allow_private)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
+	gfn_t start = *startp;
+	gfn_t next;
+	int ret = 0;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	WARN_ON(start & kvm_gfn_shared_mask(kvm));
+	WARN_ON(end & kvm_gfn_shared_mask(kvm));
+
+	if (!VALID_PAGE(mmu->root.hpa) || !VALID_PAGE(mmu->private_root_hpa))
+		return -EINVAL;
+
+	next = end;
+	ret = kvm_tdp_mmu_update_range(
+		vcpu, false, kvm_gfn_shared(kvm, start), kvm_gfn_shared(kvm, end),
+		&next, kvm_tdp_mmu_update_shared_spte, allow_private);
+	if (ret) {
+		kvm_flush_remote_tlbs_with_address(kvm, start, next - start);
+		return ret;
+	}
+
+	ret = kvm_tdp_mmu_update_range(
+		vcpu, true, kvm_gfn_private(kvm, start), kvm_gfn_private(kvm, end),
+		&next, kvm_tdp_mmu_update_private_spte, allow_private);
+	if (ret == -EAGAIN) {
+		*startp = next;
+		end = *startp;
+	}
+	kvm_flush_remote_tlbs_with_address(kvm, start, end - start);
+	return ret;
+}
+
 /*
  * Return the level of the lowest level SPTE added to sptes.
  * That SPTE may be non-present.
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index d1655571eb2f..4d1c27911134 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -51,6 +51,11 @@ void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
 				      gfn_t start, gfn_t end,
 				      int target_level, bool shared);
 
+int kvm_tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, gfn_t start, gfn_t end,
+				bool is_private, bool shared);
+int kvm_tdp_mmu_map_gpa(struct kvm_vcpu *vcpu,
+			gfn_t *startp, gfn_t end, bool allow_private);
+
 static inline void kvm_tdp_mmu_walk_lockless_begin(void)
 {
 	rcu_read_lock();
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 058/102] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (56 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 057/102] KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 059/102] [MARKER] The start of TDX KVM patch series: TD finalization isaku.yamahata
                   ` (45 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Introduce a helper to directly (pun intended) fault-in a TDP page
without having to go through the full page fault path.  This allows
TDX to get the resulting pfn and also allows the RET_PF_* enums to
stay in mmu.c where they belong.
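
As an illustration only, a TDX-side caller would use the helper roughly as
below (a minimal sketch; 'vcpu' and 'gpa' stand in for the caller's
context, and the real caller added later in this series passes
PFERR_WRITE_MASK and PG_LEVEL_4K):

	kvm_pfn_t pfn;

	/* Fault in a single GPA at 4KB granularity and get the pfn back. */
	pfn = kvm_mmu_map_tdp_page(vcpu, gpa, PFERR_WRITE_MASK, PG_LEVEL_4K);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;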

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu.h     |  3 +++
 arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index f5edf2e58dba..ee592348ace1 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -163,6 +163,9 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 					  vcpu->arch.mmu->root_role.level);
 }
 
+kvm_pfn_t kvm_mmu_map_tdp_page(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       u32 error_code, int max_level);
+
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a777a1d4278c..599c81504bea 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4278,6 +4278,45 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return direct_page_fault(vcpu, fault);
 }
 
+kvm_pfn_t kvm_mmu_map_tdp_page(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       u32 error_code, int max_level)
+{
+	int r;
+	struct kvm_page_fault fault = (struct kvm_page_fault) {
+		.addr = gpa,
+		.error_code = error_code,
+		.exec = error_code & PFERR_FETCH_MASK,
+		.write = error_code & PFERR_WRITE_MASK,
+		.present = error_code & PFERR_PRESENT_MASK,
+		.rsvd = error_code & PFERR_RSVD_MASK,
+		.user = error_code & PFERR_USER_MASK,
+		.prefetch = false,
+		.is_tdp = true,
+		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
+		.is_private = kvm_is_private_gpa(vcpu->kvm, gpa),
+	};
+
+	if (mmu_topup_memory_caches(vcpu, false))
+		return KVM_PFN_ERR_FAULT;
+
+	/*
+	 * Loop on the page fault path to handle the case where an mmu_notifier
+	 * invalidation triggers RET_PF_RETRY.  In the normal page fault path,
+	 * KVM needs to resume the guest in case the invalidation changed any
+	 * of the page fault properties, i.e. the gpa or error code.  For this
+	 * path, the gpa and error code are fixed by the caller, and the caller
+	 * expects failure if and only if the page fault can't be fixed.
+	 */
+	do {
+		fault.max_level = max_level;
+		fault.req_level = PG_LEVEL_4K;
+		fault.goal_level = PG_LEVEL_4K;
+		r = direct_page_fault(vcpu, &fault);
+	} while (r == RET_PF_RETRY && !is_error_noslot_pfn(fault.pfn));
+	return fault.pfn;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_map_tdp_page);
+
 static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 059/102] [MARKER] The start of TDX KVM patch series: TD finalization
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (57 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 058/102] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 060/102] KVM: TDX: Create initial guest memory isaku.yamahata
                   ` (44 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of TD finalization.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index 5797d172176d..53897312699f 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -21,11 +21,11 @@ Patch Layer status
 * TD VM creation/destruction:           Applied
 * TD vcpu creation/destruction:         Applied
 * TDX EPT violation:                    Applied
-* TD finalization:                      Not yet
+* TD finalization:                      Applying
 * TD vcpu enter/exit:                   Not yet
 * TD vcpu interrupts/exit/hypercall:    Not yet
 
 * KVM MMU GPA shared bits:              Applied
 * KVM TDP refactoring for TDX:          Applied
 * KVM TDP MMU hooks:                    Applied
-* KVM TDP MMU MapGPA:                   Not yet
+* KVM TDP MMU MapGPA:                   Applied
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 060/102] KVM: TDX: Create initial guest memory
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (58 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 059/102] [MARKER] The start of TDX KVM patch series: TD finalization isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 061/102] KVM: TDX: Finalize VM initialization isaku.yamahata
                   ` (43 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Because guest memory is protected in TDX, creating the initial guest
memory requires a dedicated TDX module API, tdh_mem_page_add, instead of
directly copying the memory contents into guest memory as is done for the
default VM type.  The KVM MMU page fault handler callback,
private_page_add, handles it.

Define a new subcommand, KVM_TDX_INIT_MEM_REGION, of the VM-scoped
KVM_MEMORY_ENCRYPT_OP.  It assigns the guest page, copies the initial
memory contents into the guest memory, and encrypts the guest memory.
Optionally, it also extends the memory measurement of the TDX guest.  It
calls the KVM MMU page fault (EPT-violation) handler to trigger the
callbacks for it.
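
As an illustration, a minimal userspace sketch of invoking the new
subcommand ('vm_fd', 'src_buf', 'gpa' and 'nr_pages' are placeholders, and
the struct kvm_tdx_cmd field names are assumed to follow this series):

	struct kvm_tdx_init_mem_region region = {
		.source_addr = (__u64)(unsigned long)src_buf, /* page-aligned source */
		.gpa = gpa,                                   /* page-aligned private GPA */
		.nr_pages = nr_pages,
	};
	struct kvm_tdx_cmd cmd = {
		.id = KVM_TDX_INIT_MEM_REGION,
		.flags = KVM_TDX_MEASURE_MEMORY_REGION, /* optional: extend measurement */
		.data = (__u64)(unsigned long)&region,
	};

	if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
		return -errno;	/* 'region' is copied back with the remaining pages */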

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/uapi/asm/kvm.h       |   9 ++
 arch/x86/kvm/mmu/mmu.c                |   1 +
 arch/x86/kvm/vmx/tdx.c                | 135 +++++++++++++++++++++++++-
 arch/x86/kvm/vmx/tdx.h                |   2 +
 tools/arch/x86/include/uapi/asm/kvm.h |   9 ++
 5 files changed, 155 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 399c28b2f4f5..cb2b0701f0d9 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -539,6 +539,7 @@ enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
 	KVM_TDX_INIT_VCPU,
+	KVM_TDX_INIT_MEM_REGION,
 
 	KVM_TDX_CMD_NR_MAX,
 };
@@ -616,4 +617,12 @@ struct kvm_tdx_init_vm {
 	};
 };
 
+#define KVM_TDX_MEASURE_MEMORY_REGION	(1UL << 0)
+
+struct kvm_tdx_init_mem_region {
+	__u64 source_addr;
+	__u64 gpa;
+	__u64 nr_pages;
+};
+
 #endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 599c81504bea..da634fa4b75f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5285,6 +5285,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 out:
 	return r;
 }
+EXPORT_SYMBOL(kvm_mmu_load);
 
 void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 3d578197d567..69550a1ea1d0 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -557,6 +557,21 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }
 
+static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
+{
+	struct tdx_module_output out;
+	u64 err;
+	int i;
+
+	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
+		err = tdh_mr_extend(kvm_tdx->tdr.pa, gpa + i, &out);
+		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
+			pr_tdx_error(TDH_MR_EXTEND, err, &out);
+			break;
+		}
+	}
+}
+
 static void tdx_unpin_pfn(struct kvm *kvm, kvm_pfn_t pfn)
 {
 	struct page *page = pfn_to_page(pfn);
@@ -572,6 +587,7 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	hpa_t hpa = pfn_to_hpa(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
 	struct tdx_module_output out;
+	hpa_t source_pa;
 	u64 err;
 
 	if (WARN_ON_ONCE(is_error_noslot_pfn(pfn) || kvm_is_reserved_pfn(pfn)))
@@ -584,14 +600,40 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	/* To prevent page migration, do nothing on mmu notifier. */
 	get_page(pfn_to_page(pfn));
 
+	/* Build-time faults are induced and handled via TDH_MEM_PAGE_ADD. */
 	if (likely(is_td_finalized(kvm_tdx))) {
 		err = tdh_mem_page_aug(kvm_tdx->tdr.pa, gpa, hpa, &out);
 		if (KVM_BUG_ON(err, kvm)) {
 			pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-			put_page(pfn_to_page(pfn));
+			tdx_unpin_pfn(kvm, pfn);
 		}
 		return;
 	}
+
+	/*
+	 * With the TDP MMU, the fault handler can run concurrently.  Note
+	 * that 'source_pa' is a TD-scope variable, so if multiple threads
+	 * reached here all needing to access 'source_pa', it would break.
+	 * Fortunately this can't happen, because the TDH_MEM_PAGE_ADD path
+	 * below is only used while the VM is being created, before it runs,
+	 * via the KVM_TDX_INIT_MEM_REGION ioctl (which always uses vcpu 0's
+	 * page table and is protected by vcpu->mutex).
+	 */
+	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
+		tdx_unpin_pfn(kvm, pfn);
+		return;
+	}
+
+	source_pa = kvm_tdx->source_pa & ~KVM_TDX_MEASURE_MEMORY_REGION;
+
+	err = tdh_mem_page_add(kvm_tdx->tdr.pa, gpa, hpa, source_pa, &out);
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
+		tdx_unpin_pfn(kvm, pfn);
+	} else if ((kvm_tdx->source_pa & KVM_TDX_MEASURE_MEMORY_REGION))
+		tdx_measure_page(kvm_tdx, gpa);
+
+	kvm_tdx->source_pa = INVALID_PAGE;
 }
 
 static void tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
@@ -1100,6 +1142,94 @@ void tdx_flush_tlb(struct kvm_vcpu *vcpu)
 		cpu_relax();
 }
 
+#define TDX_SEPT_PFERR	PFERR_WRITE_MASK
+
+static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	struct kvm_tdx_init_mem_region region;
+	struct kvm_vcpu *vcpu;
+	struct page *page;
+	kvm_pfn_t pfn;
+	int idx, ret = 0;
+
+	/* The BSP vCPU must be created before initializing memory regions. */
+	if (!atomic_read(&kvm->online_vcpus))
+		return -EINVAL;
+
+	if (cmd->flags & ~KVM_TDX_MEASURE_MEMORY_REGION)
+		return -EINVAL;
+
+	if (copy_from_user(&region, (void __user *)cmd->data, sizeof(region)))
+		return -EFAULT;
+
+	/* Sanity check */
+	if (!IS_ALIGNED(region.source_addr, PAGE_SIZE) ||
+	    !IS_ALIGNED(region.gpa, PAGE_SIZE) ||
+	    !region.nr_pages ||
+	    region.gpa + (region.nr_pages << PAGE_SHIFT) <= region.gpa ||
+	    !kvm_is_private_gpa(kvm, region.gpa) ||
+	    !kvm_is_private_gpa(kvm, region.gpa + (region.nr_pages << PAGE_SHIFT)))
+		return -EINVAL;
+
+	vcpu = kvm_get_vcpu(kvm, 0);
+	if (mutex_lock_killable(&vcpu->mutex))
+		return -EINTR;
+
+	vcpu_load(vcpu);
+	idx = srcu_read_lock(&kvm->srcu);
+
+	kvm_mmu_reload(vcpu);
+
+	while (region.nr_pages) {
+		if (signal_pending(current)) {
+			ret = -ERESTARTSYS;
+			break;
+		}
+
+		if (need_resched())
+			cond_resched();
+
+
+		/* Pin the source page. */
+		ret = get_user_pages_fast(region.source_addr, 1, 0, &page);
+		if (ret < 0)
+			break;
+		if (ret != 1) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
+				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
+
+		pfn = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
+					   PG_LEVEL_4K);
+		if (is_error_noslot_pfn(pfn) || kvm->vm_bugged)
+			ret = -EFAULT;
+		else
+			ret = 0;
+
+		put_page(page);
+		if (ret)
+			break;
+
+		region.source_addr += PAGE_SIZE;
+		region.gpa += PAGE_SIZE;
+		region.nr_pages--;
+	}
+
+	srcu_read_unlock(&kvm->srcu, idx);
+	vcpu_put(vcpu);
+
+	mutex_unlock(&vcpu->mutex);
+
+	if (copy_to_user((void __user *)cmd->data, &region, sizeof(region)))
+		ret = -EFAULT;
+
+	return ret;
+}
+
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_tdx_cmd tdx_cmd;
@@ -1116,6 +1246,9 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 	case KVM_TDX_INIT_VM:
 		r = tdx_td_init(kvm, &tdx_cmd);
 		break;
+	case KVM_TDX_INIT_MEM_REGION:
+		r = tdx_init_mem_region(kvm, &tdx_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index d8dcbedd690b..29e7accee733 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -25,6 +25,8 @@ struct kvm_tdx {
 	u64 xfam;
 	int hkid;
 
+	hpa_t source_pa;
+
 	bool finalized;
 	atomic_t tdh_mem_track;
 
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index 60a79f9ef174..af39f3adc179 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -533,6 +533,7 @@ enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
 	KVM_TDX_INIT_VCPU,
+	KVM_TDX_INIT_MEM_REGION,
 
 	KVM_TDX_CMD_NR_MAX,
 };
@@ -610,4 +611,12 @@ struct kvm_tdx_init_vm {
 	};
 };
 
+#define KVM_TDX_MEASURE_MEMORY_REGION	(1UL << 0)
+
+struct kvm_tdx_init_mem_region {
+	__u64 source_addr;
+	__u64 gpa;
+	__u64 nr_pages;
+};
+
 #endif /* _ASM_X86_KVM_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 061/102] KVM: TDX: Finalize VM initialization
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (59 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 060/102] KVM: TDX: Create initial guest memory isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 062/102] [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit isaku.yamahata
                   ` (42 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

To protect the initial contents of the guest TD, the TDX module measures
the guest TD during the build process as a SHA-384 measurement.  The
measurement of the guest TD contents must be completed before the guest
TD is ready to run.

Add a new subcommand, KVM_TDX_FINALIZE_VM, for VM-scoped
KVM_MEMORY_ENCRYPT_OP to finalize the measurement and mark the TDX VM ready
to run.
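
As an illustration, a minimal userspace sketch ('vm_fd' is a placeholder;
no payload is needed, so 'data' is left zero):

	struct kvm_tdx_cmd cmd = {
		.id = KVM_TDX_FINALIZE_VM,
	};

	/* Must be issued after all KVM_TDX_INIT_MEM_REGION calls. */
	if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
		return -errno;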

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/uapi/asm/kvm.h       |  1 +
 arch/x86/kvm/vmx/tdx.c                | 21 +++++++++++++++++++++
 tools/arch/x86/include/uapi/asm/kvm.h |  1 +
 3 files changed, 23 insertions(+)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index cb2b0701f0d9..2fe4cc497bc2 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -540,6 +540,7 @@ enum kvm_tdx_cmd_id {
 	KVM_TDX_INIT_VM,
 	KVM_TDX_INIT_VCPU,
 	KVM_TDX_INIT_MEM_REGION,
+	KVM_TDX_FINALIZE_VM,
 
 	KVM_TDX_CMD_NR_MAX,
 };
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 69550a1ea1d0..d2688bb8e5fa 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1230,6 +1230,24 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	return ret;
 }
 
+static int tdx_td_finalizemr(struct kvm *kvm)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	u64 err;
+
+	if (!is_td_initialized(kvm) || is_td_finalized(kvm_tdx))
+		return -EINVAL;
+
+	err = tdh_mr_finalize(kvm_tdx->tdr.pa);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_MR_FINALIZE, err, NULL);
+		return -EIO;
+	}
+
+	kvm_tdx->finalized = true;
+	return 0;
+}
+
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_tdx_cmd tdx_cmd;
@@ -1249,6 +1267,9 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 	case KVM_TDX_INIT_MEM_REGION:
 		r = tdx_init_mem_region(kvm, &tdx_cmd);
 		break;
+	case KVM_TDX_FINALIZE_VM:
+		r = tdx_td_finalizemr(kvm);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index af39f3adc179..7f5eb5536ec5 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -534,6 +534,7 @@ enum kvm_tdx_cmd_id {
 	KVM_TDX_INIT_VM,
 	KVM_TDX_INIT_VCPU,
 	KVM_TDX_INIT_MEM_REGION,
+	KVM_TDX_FINALIZE_VM,
 
 	KVM_TDX_CMD_NR_MAX,
 };
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 062/102] [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (60 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 061/102] KVM: TDX: Finalize VM initialization isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 063/102] KVM: TDX: Add helper assembly function to TDX vcpu isaku.yamahata
                   ` (41 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of TD vcpu
enter/exit.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index 53897312699f..b51e8e6b1541 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -12,6 +12,7 @@ What qemu can do
 - Qemu can create/destroy guest of TDX vm type.
 - Qemu can create/destroy vcpu of TDX vm type.
 - Qemu can populate initial guest memory image.
+- Qemu can finalize guest TD.
 
 Patch Layer status
 ------------------
@@ -21,8 +22,8 @@ Patch Layer status
 * TD VM creation/destruction:           Applied
 * TD vcpu creation/destruction:         Applied
 * TDX EPT violation:                    Applied
-* TD finalization:                      Applying
-* TD vcpu enter/exit:                   Not yet
+* TD finalization:                      Applied
+* TD vcpu enter/exit:                   Applying
 * TD vcpu interrupts/exit/hypercall:    Not yet
 
 * KVM MMU GPA shared bits:              Applied
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 063/102] KVM: TDX: Add helper assembly function to TDX vcpu
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (61 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 062/102] [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 064/102] KVM: TDX: Implement TDX vcpu enter/exit path isaku.yamahata
                   ` (40 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX defines an API to run TDX vcpu with its own ABI.  Define an assembly
helper function to run TDX vcpu to hide the special ABI so that C code can
call it with function call ABI.
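
For reference, a sketch of the C-side view of the helper as it is declared
and used later in this series (not part of this patch):

	u64 __tdx_vcpu_run(hpa_t tdvpr, void *regs, u32 regs_mask);

	/*
	 * Pass the TDVPR page's physical address and the vcpu GPR array;
	 * a zero mask means no guest GPRs need to be loaded before TDENTER
	 * (i.e. the previous exit was not a TDVMCALL).
	 */
	tdx->exit_reason.full = __tdx_vcpu_run(tdx->tdvpr.pa, vcpu->arch.regs, 0);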

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/vmenter.S | 146 +++++++++++++++++++++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 435c187927c4..f58ea3c97ccf 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -2,6 +2,7 @@
 #include <linux/linkage.h>
 #include <asm/asm.h>
 #include <asm/bitsperlong.h>
+#include <asm/errno.h>
 #include <asm/kvm_vcpu_regs.h>
 #include <asm/nospec-branch.h>
 #include <asm/segment.h>
@@ -28,6 +29,13 @@
 #define VCPU_R15	__VCPU_REGS_R15 * WORD_SIZE
 #endif
 
+#ifdef CONFIG_INTEL_TDX_HOST
+#define TDENTER 		0
+#define EXIT_REASON_TDCALL	77
+#define TDENTER_ERROR_BIT	63
+#define seamcall		.byte 0x66,0x0f,0x01,0xcf
+#endif
+
 .section .noinstr.text, "ax"
 
 /**
@@ -328,3 +336,141 @@ SYM_FUNC_START(vmx_do_interrupt_nmi_irqoff)
 	pop %_ASM_BP
 	RET
 SYM_FUNC_END(vmx_do_interrupt_nmi_irqoff)
+
+#ifdef CONFIG_INTEL_TDX_HOST
+
+.pushsection .noinstr.text, "ax"
+
+/**
+ * __tdx_vcpu_run - Call SEAMCALL(TDENTER) to run a TD vcpu
+ * @tdvpr:	physical address of TDVPR
+ * @regs:	void * (to registers of TDVCPU)
+ * @gpr_mask:	non-zero if guest registers need to be loaded prior to TDENTER
+ *
+ * Returns:
+ *	TD-Exit Reason
+ *
+ * Note: KVM doesn't support using XMM in its hypercalls; it's the HyperV
+ *	 code's responsibility to save/restore XMM registers on TDVMCALL.
+ */
+SYM_FUNC_START(__tdx_vcpu_run)
+	push %rbp
+	mov  %rsp, %rbp
+
+	push %r15
+	push %r14
+	push %r13
+	push %r12
+	push %rbx
+
+	/* Save @regs, which is needed after TDENTER to capture output. */
+	push %rsi
+
+	/* Load @tdvpr to RCX */
+	mov %rdi, %rcx
+
+	/* No need to load guest GPRs if the last exit wasn't a TDVMCALL. */
+	test %dx, %dx
+	je 1f
+
+	/* Load @regs to RAX, which will be clobbered with $TDENTER anyways. */
+	mov %rsi, %rax
+
+	mov VCPU_RBX(%rax), %rbx
+	mov VCPU_RDX(%rax), %rdx
+	mov VCPU_RBP(%rax), %rbp
+	mov VCPU_RSI(%rax), %rsi
+	mov VCPU_RDI(%rax), %rdi
+
+	mov VCPU_R8 (%rax),  %r8
+	mov VCPU_R9 (%rax),  %r9
+	mov VCPU_R10(%rax), %r10
+	mov VCPU_R11(%rax), %r11
+	mov VCPU_R12(%rax), %r12
+	mov VCPU_R13(%rax), %r13
+	mov VCPU_R14(%rax), %r14
+	mov VCPU_R15(%rax), %r15
+
+	/*  Load TDENTER to RAX.  This kills the @regs pointer! */
+1:	mov $TDENTER, %rax
+
+2:	seamcall
+
+	/* Skip to the exit path if TDENTER failed. */
+	bt $TDENTER_ERROR_BIT, %rax
+	jc 4f
+
+	/* Temporarily save the TD-Exit reason. */
+	push %rax
+
+	/* check if TD-exit due to TDVMCALL */
+	cmp $EXIT_REASON_TDCALL, %ax
+
+	/* Reload @regs to RAX. */
+	mov 8(%rsp), %rax
+
+	/* Jump on non-TDVMCALL */
+	jne 3f
+
+	/* Save all output from SEAMCALL(TDENTER) */
+	mov %rbx, VCPU_RBX(%rax)
+	mov %rbp, VCPU_RBP(%rax)
+	mov %rsi, VCPU_RSI(%rax)
+	mov %rdi, VCPU_RDI(%rax)
+	mov %r10, VCPU_R10(%rax)
+	mov %r11, VCPU_R11(%rax)
+	mov %r12, VCPU_R12(%rax)
+	mov %r13, VCPU_R13(%rax)
+	mov %r14, VCPU_R14(%rax)
+	mov %r15, VCPU_R15(%rax)
+
+3:	mov %rcx, VCPU_RCX(%rax)
+	mov %rdx, VCPU_RDX(%rax)
+	mov %r8,  VCPU_R8 (%rax)
+	mov %r9,  VCPU_R9 (%rax)
+
+	/*
+	 * Clear all general purpose registers except RSP and RAX to prevent
+	 * speculative use of the guest's values.
+	 */
+	xor %rbx, %rbx
+	xor %rcx, %rcx
+	xor %rdx, %rdx
+	xor %rsi, %rsi
+	xor %rdi, %rdi
+	xor %rbp, %rbp
+	xor %r8,  %r8
+	xor %r9,  %r9
+	xor %r10, %r10
+	xor %r11, %r11
+	xor %r12, %r12
+	xor %r13, %r13
+	xor %r14, %r14
+	xor %r15, %r15
+
+	/* Restore the TD-Exit reason to RAX for return. */
+	pop %rax
+
+	/* "POP" @regs. */
+4:	add $8, %rsp
+	pop %rbx
+	pop %r12
+	pop %r13
+	pop %r14
+	pop %r15
+
+	pop %rbp
+	RET
+
+5:	cmpb $0, kvm_rebooting
+	je 6f
+	mov $-EFAULT, %rax
+	jmp 4b
+6:	ud2
+	_ASM_EXTABLE(2b, 5b)
+
+SYM_FUNC_END(__tdx_vcpu_run)
+
+.popsection
+
+#endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 064/102] KVM: TDX: Implement TDX vcpu enter/exit path
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (62 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 063/102] KVM: TDX: Add helper assembly function to TDX vcpu isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 065/102] KVM: TDX: vcpu_run: save/restore host state(host kernel gs) isaku.yamahata
                   ` (39 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Implement running a TDX vcpu.  Once a vcpu runs on a logical processor
(LP), the TDX vcpu is associated with it.  When the TDX vcpu moves to
another LP, its state needs to be flushed on the LP it previously ran on.
When destroying a TDX vcpu, the flush must be completed and the CPU memory
cache flushed as well.  Track which LP each TDX vcpu runs on and flush it
as necessary.

Do nothing on the sched_in event as TDX doesn't support pause-loop
exiting.

TDX vcpu execution requires restoring the PMU debug store after returning
to KVM because the TDX module unconditionally resets the value.  To reuse
the existing code, export perf_restore_debug_store.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 21 +++++++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 32 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     | 33 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 arch/x86/kvm/x86.c         |  1 +
 5 files changed, 87 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 442d89e02459..099842a8a397 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -110,6 +110,23 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	return vmx_vcpu_reset(vcpu, init_event);
 }
 
+static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		/* Unconditionally continue to vcpu_run(). */
+		return 1;
+
+	return vmx_vcpu_pre_run(vcpu);
+}
+
+static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_run(vcpu);
+
+	return vmx_vcpu_run(vcpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -222,8 +239,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.flush_tlb_gva = vt_flush_tlb_gva,
 	.flush_tlb_guest = vt_flush_tlb_guest,
 
-	.vcpu_pre_run = vmx_vcpu_pre_run,
-	.vcpu_run = vmx_vcpu_run,
+	.vcpu_pre_run = vt_vcpu_pre_run,
+	.vcpu_run = vt_vcpu_run,
 	.handle_exit = vmx_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d2688bb8e5fa..e13b1c8caa39 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -10,6 +10,9 @@
 #include "vmx.h"
 #include "x86.h"
 
+#include <trace/events/kvm.h>
+#include "trace.h"
+
 #undef pr_fmt
 #define pr_fmt(fmt) "tdx: " fmt
 
@@ -552,6 +555,35 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->kvm->vm_bugged = true;
 }
 
+u64 __tdx_vcpu_run(hpa_t tdvpr, void *regs, u32 regs_mask);
+
+static noinstr void tdx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+					struct vcpu_tdx *tdx)
+{
+	guest_enter_irqoff();
+	tdx->exit_reason.full = __tdx_vcpu_run(tdx->tdvpr.pa, vcpu->arch.regs, 0);
+	guest_exit_irqoff();
+}
+
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (unlikely(vcpu->kvm->vm_bugged)) {
+		tdx->exit_reason.full = TDX_NON_RECOVERABLE_VCPU;
+		return EXIT_FASTPATH_NONE;
+	}
+
+	trace_kvm_entry(vcpu);
+
+	tdx_vcpu_enter_exit(vcpu, tdx);
+
+	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
+	trace_kvm_exit(vcpu, KVM_ISA_VMX);
+
+	return EXIT_FASTPATH_NONE;
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 29e7accee733..f90f83b22d25 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -48,12 +48,45 @@ struct kvm_tdx {
 	spinlock_t seamcall_lock;
 };
 
+union tdx_exit_reason {
+	struct {
+		/* 31:0 mirror the VMX Exit Reason format */
+		u64 basic		: 16;
+		u64 reserved16		: 1;
+		u64 reserved17		: 1;
+		u64 reserved18		: 1;
+		u64 reserved19		: 1;
+		u64 reserved20		: 1;
+		u64 reserved21		: 1;
+		u64 reserved22		: 1;
+		u64 reserved23		: 1;
+		u64 reserved24		: 1;
+		u64 reserved25		: 1;
+		u64 bus_lock_detected	: 1;
+		u64 enclave_mode	: 1;
+		u64 smi_pending_mtf	: 1;
+		u64 smi_from_vmx_root	: 1;
+		u64 reserved30		: 1;
+		u64 failed_vmentry	: 1;
+
+		/* 63:32 are TDX specific */
+		u64 details_l1		: 8;
+		u64 class		: 8;
+		u64 reserved61_48	: 14;
+		u64 non_recoverable	: 1;
+		u64 error		: 1;
+	};
+	u64 full;
+};
+
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
 
 	struct tdx_td_page tdvpr;
 	struct tdx_td_page *tdvpx;
 
+	union tdx_exit_reason exit_reason;
+
 	bool initialized;
 
 	/*
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 2c55aea8963f..ea34671cd23f 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -141,6 +141,7 @@ void tdx_vm_free(struct kvm *kvm);
 int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -161,6 +162,7 @@ static inline void tdx_vm_free(struct kvm *kvm) {}
 static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
+static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c90ec611de2f..70312e195f36 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -300,6 +300,7 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
 };
 
 u64 __read_mostly host_xcr0;
+EXPORT_SYMBOL_GPL(host_xcr0);
 
 static struct kmem_cache *x86_emulator_cache;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 065/102] KVM: TDX: vcpu_run: save/restore host state(host kernel gs)
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (63 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 064/102] KVM: TDX: Implement TDX vcpu enter/exit path isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 066/102] KVM: TDX: restore host xsave state when exit from the guest TD isaku.yamahata
                   ` (38 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

On entering/exiting a TDX vcpu, the preserved or clobbered CPU state
differs from the VMX case.  Add TDX hooks to save/restore host/guest CPU
state.  Save/restore the kernel GS base MSR.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/main.c    | 28 +++++++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 39 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     |  4 ++++
 arch/x86/kvm/vmx/x86_ops.h |  4 ++++
 4 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 099842a8a397..f101f358d90c 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -110,6 +110,30 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	return vmx_vcpu_reset(vcpu, init_event);
 }
 
+static void vt_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * All host state is saved/restored across SEAMCALL/SEAMRET, and the
+	 * guest state of a TD is obviously off limits.  Deferring MSRs and DRs
+	 * is pointless because the TDX module needs to load *something* so as
+	 * not to expose guest state.
+	 */
+	if (is_td_vcpu(vcpu)) {
+		tdx_prepare_switch_to_guest(vcpu);
+		return;
+	}
+
+	vmx_prepare_switch_to_guest(vcpu);
+}
+
+static void vt_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_put(vcpu);
+
+	return vmx_vcpu_put(vcpu);
+}
+
 static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -206,9 +230,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_free = vt_vcpu_free,
 	.vcpu_reset = vt_vcpu_reset,
 
-	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
+	.prepare_switch_to_guest = vt_prepare_switch_to_guest,
 	.vcpu_load = vmx_vcpu_load,
-	.vcpu_put = vmx_vcpu_put,
+	.vcpu_put = vt_vcpu_put,
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
 	.get_msr_feature = vmx_get_msr_feature,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e13b1c8caa39..d9e0dd30c150 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/cpu.h>
+#include <linux/mmu_context.h>
 
 #include <asm/tdx.h>
 
@@ -463,6 +464,9 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.guest_state_protected =
 		!(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
 
+	tdx->host_state_need_save = true;
+	tdx->host_state_need_restore = false;
+
 	return 0;
 
 free_tdvpx:
@@ -476,6 +480,39 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (!tdx->host_state_need_save)
+		return;
+
+	if (likely(is_64bit_mm(current->mm)))
+		tdx->msr_host_kernel_gs_base = current->thread.gsbase;
+	else
+		tdx->msr_host_kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
+
+	tdx->host_state_need_save = false;
+}
+
+static void tdx_prepare_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	tdx->host_state_need_save = true;
+	if (!tdx->host_state_need_restore)
+		return;
+
+	wrmsrl(MSR_KERNEL_GS_BASE, tdx->msr_host_kernel_gs_base);
+	tdx->host_state_need_restore = false;
+}
+
+void tdx_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	vmx_vcpu_pi_put(vcpu);
+	tdx_prepare_switch_to_host(vcpu);
+}
+
 void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
@@ -578,6 +615,8 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_vcpu_enter_exit(vcpu, tdx);
 
+	tdx->host_state_need_restore = true;
+
 	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
 	trace_kvm_exit(vcpu, KVM_ISA_VMX);
 
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index f90f83b22d25..414c15235ed0 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -89,6 +89,10 @@ struct vcpu_tdx {
 
 	bool initialized;
 
+	bool host_state_need_save;
+	bool host_state_need_restore;
+	u64 msr_host_kernel_gs_base;
+
 	/*
 	 * Dummy to make pmu_intel not corrupt memory.
 	 * TODO: Support PMU for TDX.  Future work.
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index ea34671cd23f..2213739c2303 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -142,6 +142,8 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
+void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -163,6 +165,8 @@ static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
+static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 066/102] KVM: TDX: restore host xsave state when exit from the guest TD
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (64 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 065/102] KVM: TDX: vcpu_run: save/restore host state(host kernel gs) isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:53 ` [PATCH v7 067/102] KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o wrmsr isaku.yamahata
                   ` (37 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

On exiting from the guest TD, xsave state is clobbered.  Restore xsave
state on TD exit.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d9e0dd30c150..277525b6ca51 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2,6 +2,7 @@
 #include <linux/cpu.h>
 #include <linux/mmu_context.h>
 
+#include <asm/fpu/xcr.h>
 #include <asm/tdx.h>
 
 #include "capabilities.h"
@@ -592,6 +593,22 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->kvm->vm_bugged = true;
 }
 
+static void tdx_restore_host_xsave_state(struct kvm_vcpu *vcpu)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+
+	if (static_cpu_has(X86_FEATURE_XSAVE) &&
+	    host_xcr0 != (kvm_tdx->xfam & kvm_caps.supported_xcr0))
+		xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
+	if (static_cpu_has(X86_FEATURE_XSAVES) &&
+	    /* PT can be exposed to TD guest regardless of KVM's XSS support */
+	    host_xss != (kvm_tdx->xfam & (kvm_caps.supported_xss | XFEATURE_MASK_PT)))
+		wrmsrl(MSR_IA32_XSS, host_xss);
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    (kvm_tdx->xfam & XFEATURE_MASK_PKRU))
+		write_pkru(vcpu->arch.host_pkru);
+}
+
 u64 __tdx_vcpu_run(hpa_t tdvpr, void *regs, u32 regs_mask);
 
 static noinstr void tdx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
@@ -615,6 +632,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_vcpu_enter_exit(vcpu, tdx);
 
+	tdx_restore_host_xsave_state(vcpu);
 	tdx->host_state_need_restore = true;
 
 	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 067/102] KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o wrmsr
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (65 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 066/102] KVM: TDX: restore host xsave state when exit from the guest TD isaku.yamahata
@ 2022-06-27 21:53 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 068/102] KVM: TDX: restore user ret MSRs isaku.yamahata
                   ` (36 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:53 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Several MSRs are constant and only used in userspace (ring 3), but VMs may
have different values.  KVM uses kvm_set_user_return_msr() to switch to the
guest's values and leverages the user return notifier to restore them when
the kernel is about to return to userspace.  To eliminate unnecessary wrmsr
instructions, KVM also caches the value it last wrote to each MSR.

The TDX module unconditionally resets some of these MSRs to the
architectural INIT state on TD exit.  This makes the cached values in
kvm_user_return_msrs inconsistent with the values in hardware, which may
mislead kvm_on_user_return() into skipping the restore of some MSRs to the
host's values.  kvm_set_user_return_msr() can correct this case, but it is
not optimal because it always does a wrmsr.  So, introduce a variation of
kvm_set_user_return_msr() that only updates the cached values and skips
that wrmsr.
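
As a minimal sketch of the intended difference ('slot' and 'val' are
placeholders; 'slot' would come from kvm_find_user_return_msr()):

	/* Hardware value is unknown or stale: do the wrmsr and cache it. */
	kvm_set_user_return_msr(slot, val, -1ull);

	/*
	 * Hardware is already known to hold 'val' (e.g. reset by the TDX
	 * module on TD exit): only refresh the cache and register the
	 * user return notifier, skipping the wrmsr.
	 */
	kvm_user_return_update_cache(slot, val);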

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/x86.c              | 25 ++++++++++++++++++++-----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f2a4d5a18851..a1d186190287 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2025,6 +2025,7 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
 int kvm_add_user_return_msr(u32 msr);
 int kvm_find_user_return_msr(u32 msr);
 int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
+void kvm_user_return_update_cache(unsigned int index, u64 val);
 
 static inline bool kvm_is_supported_user_return_msr(u32 msr)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 70312e195f36..ce0ef32c2619 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -425,6 +425,15 @@ static void kvm_user_return_msr_cpu_online(void)
 	}
 }
 
+static void kvm_user_return_register_notifier(struct kvm_user_return_msrs *msrs)
+{
+	if (!msrs->registered) {
+		msrs->urn.on_user_return = kvm_on_user_return;
+		user_return_notifier_register(&msrs->urn);
+		msrs->registered = true;
+	}
+}
+
 int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask)
 {
 	unsigned int cpu = smp_processor_id();
@@ -439,15 +448,21 @@ int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask)
 		return 1;
 
 	msrs->values[slot].curr = value;
-	if (!msrs->registered) {
-		msrs->urn.on_user_return = kvm_on_user_return;
-		user_return_notifier_register(&msrs->urn);
-		msrs->registered = true;
-	}
+	kvm_user_return_register_notifier(msrs);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(kvm_set_user_return_msr);
 
+/* Update the cache, "curr", and register the notifier */
+void kvm_user_return_update_cache(unsigned int slot, u64 value)
+{
+	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);
+
+	msrs->values[slot].curr = value;
+	kvm_user_return_register_notifier(msrs);
+}
+EXPORT_SYMBOL_GPL(kvm_user_return_update_cache);
+
 static void drop_user_return_notifiers(void)
 {
 	unsigned int cpu = smp_processor_id();
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 068/102] KVM: TDX: restore user ret MSRs
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (66 preceding siblings ...)
  2022-06-27 21:53 ` [PATCH v7 067/102] KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o wrmsr isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 069/102] [MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls isaku.yamahata
                   ` (35 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Several user-return MSRs are clobbered on TD exit.  Update their cached
values on TD exit so that the host values are restored before returning to
ring 3.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx.c | 43 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 277525b6ca51..3d9898b677bc 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -593,6 +593,28 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->kvm->vm_bugged = true;
 }
 
+struct tdx_uret_msr {
+	u32 msr;
+	unsigned int slot;
+	u64 defval;
+};
+
+static struct tdx_uret_msr tdx_uret_msrs[] = {
+	{.msr = MSR_SYSCALL_MASK,},
+	{.msr = MSR_STAR,},
+	{.msr = MSR_LSTAR,},
+	{.msr = MSR_TSC_AUX,},
+};
+
+static void tdx_user_return_update_cache(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++)
+		kvm_user_return_update_cache(tdx_uret_msrs[i].slot,
+					     tdx_uret_msrs[i].defval);
+}
+
 static void tdx_restore_host_xsave_state(struct kvm_vcpu *vcpu)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
@@ -632,6 +654,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_vcpu_enter_exit(vcpu, tdx);
 
+	tdx_user_return_update_cache();
 	tdx_restore_host_xsave_state(vcpu);
 	tdx->host_state_need_restore = true;
 
@@ -1470,6 +1493,26 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 	if (WARN_ON_ONCE(x86_ops->tlb_remote_flush))
 		return -EIO;
 
+	for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++) {
+		/*
+		 * Check whether the MSRs (tdx_uret_msrs) can be saved/restored
+		 * before returning to user space.
+		 *
+		 * this_cpu_ptr(user_return_msrs)->registered isn't checked
+		 * because the registration is done at vcpu runtime by
+		 * kvm_set_user_return_msr().
+		 * This is hardware setup done before any vcpu runs, so
+		 * 'registered' is always false here.
+		 */
+		tdx_uret_msrs[i].slot = kvm_find_user_return_msr(tdx_uret_msrs[i].msr);
+		if (tdx_uret_msrs[i].slot == -1) {
+			/* If any MSR isn't supported, it is a KVM bug */
+			pr_err("MSR %x isn't included by kvm_find_user_return_msr\n",
+				tdx_uret_msrs[i].msr);
+			return -EIO;
+		}
+	}
+
 	max_pkgs = topology_max_packages();
 	tdx_mng_key_config_lock = kcalloc(max_pkgs, sizeof(*tdx_mng_key_config_lock),
 				   GFP_KERNEL);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 069/102] [MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (67 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 068/102] KVM: TDX: restore user ret MSRs isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 070/102] KVM: TDX: complete interrupts after tdexit isaku.yamahata
                   ` (34 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This empty commit is to mark the start of patch series of TD vcpu
exits, interrupts, and hypercalls.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/intel-tdx-layer-status.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/intel-tdx-layer-status.rst b/Documentation/virt/kvm/intel-tdx-layer-status.rst
index b51e8e6b1541..1cec14213f69 100644
--- a/Documentation/virt/kvm/intel-tdx-layer-status.rst
+++ b/Documentation/virt/kvm/intel-tdx-layer-status.rst
@@ -13,6 +13,7 @@ What qemu can do
 - Qemu can create/destroy vcpu of TDX vm type.
 - Qemu can populate initial guest memory image.
 - Qemu can finalize guest TD.
+- Qemu can start to run vcpu. But vcpu can not make progress yet.
 
 Patch Layer status
 ------------------
@@ -23,7 +24,7 @@ Patch Layer status
 * TD vcpu creation/destruction:         Applied
 * TDX EPT violation:                    Applied
 * TD finalization:                      Applied
-* TD vcpu enter/exit:                   Applying
+* TD vcpu enter/exit:                   Applied
 * TD vcpu interrupts/exit/hypercall:    Not yet
 
 * KVM MMU GPA shared bits:              Applied
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 070/102] KVM: TDX: complete interrupts after tdexit
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (68 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 069/102] [MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 071/102] KVM: TDX: restore debug store when TD exit isaku.yamahata
                   ` (33 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

This corresponds to VMX __vmx_complete_interrupts().  Because TDX
virtualizes the vAPIC, KVM only needs to care about NMI injection.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 3d9898b677bc..c9cb9670f7cf 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -593,6 +593,14 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->kvm->vm_bugged = true;
 }
 
+static void tdx_complete_interrupts(struct kvm_vcpu *vcpu)
+{
+	/* Avoid costly SEAMCALL if no NMI was injected */
+	if (vcpu->arch.nmi_injected)
+		vcpu->arch.nmi_injected = td_management_read8(to_tdx(vcpu),
+							      TD_VCPU_PEND_NMI);
+}
+
 struct tdx_uret_msr {
 	u32 msr;
 	unsigned int slot;
@@ -661,6 +669,8 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
 	trace_kvm_exit(vcpu, KVM_ISA_VMX);
 
+	tdx_complete_interrupts(vcpu);
+
 	return EXIT_FASTPATH_NONE;
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 071/102] KVM: TDX: restore debug store when TD exit
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (69 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 070/102] KVM: TDX: complete interrupts after tdexit isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 072/102] KVM: TDX: handle vcpu migration over logical processor isaku.yamahata
                   ` (32 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Because the debug store is clobbered by the TDX module, restore it on TD
exit.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/events/intel/ds.c | 1 +
 arch/x86/kvm/vmx/tdx.c     | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 376cc3d66094..cdba4227ad3b 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2256,3 +2256,4 @@ void perf_restore_debug_store(void)
 
 	wrmsrl(MSR_IA32_DS_AREA, (unsigned long)ds);
 }
+EXPORT_SYMBOL_GPL(perf_restore_debug_store);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c9cb9670f7cf..0de113a643e4 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -663,6 +663,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 	tdx_vcpu_enter_exit(vcpu, tdx);
 
 	tdx_user_return_update_cache();
+	perf_restore_debug_store();
 	tdx_restore_host_xsave_state(vcpu);
 	tdx->host_state_need_restore = true;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 072/102] KVM: TDX: handle vcpu migration over logical processor
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (70 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 071/102] KVM: TDX: restore debug store when TD exit isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 073/102] KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched behavior isaku.yamahata
                   ` (31 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

For vcpu migration, in the VMX case the VMCS is flushed on the source pcpu
and loaded on the target pcpu.  There are corresponding TDX SEAMCALL APIs;
call them on vcpu migration.  The logic is mostly the same as VMX, except
that the TDX SEAMCALLs are used.

When shutting down the machine, vcpus (VMX or TDX) need to be shut down on
each pcpu.  Do the same for TDX with the TDX SEAMCALL APIs.
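
As a rough sketch of the load side (illustrative only; it relies on the
per-CPU 'associated_tdvcpus' list and the flush helpers added below, and
the actual implementation in this patch may differ in detail):

	void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		struct vcpu_tdx *tdx = to_tdx(vcpu);

		if (vcpu->cpu == cpu)
			return;

		/* Flush the vcpu's state on the pcpu it last ran on. */
		tdx_flush_vp_on_cpu(vcpu);

		local_irq_disable();
		/*
		 * Pairs with the smp_wmb() in tdx_disassociate_vp() so the
		 * vCPU is never on two CPUs' lists at once.
		 */
		smp_rmb();
		list_add(&tdx->cpu_list, &per_cpu(associated_tdvcpus, cpu));
		local_irq_enable();
	}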

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    |  43 +++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 121 +++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     |   2 +
 arch/x86/kvm/vmx/x86_ops.h |   6 ++
 4 files changed, 168 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index f101f358d90c..ad09988c4faa 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -17,6 +17,25 @@ static bool vt_is_vm_type_supported(unsigned long type)
 		(enable_tdx && tdx_is_vm_type_supported(type));
 }
 
+static int vt_hardware_enable(void)
+{
+	int ret;
+
+	ret = vmx_hardware_enable();
+	if (ret)
+		return ret;
+
+	tdx_hardware_enable();
+	return 0;
+}
+
+static void vt_hardware_disable(void)
+{
+	/* Note, TDX *and* VMX need to be disabled if TDX is enabled. */
+	tdx_hardware_disable();
+	vmx_hardware_disable();
+}
+
 static __init int vt_hardware_setup(void)
 {
 	int ret;
@@ -151,6 +170,14 @@ static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu)
 	return vmx_vcpu_run(vcpu);
 }
 
+static void vt_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_load(vcpu, cpu);
+
+	return vmx_vcpu_load(vcpu, cpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -192,6 +219,14 @@ static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 	vmx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
 }
 
+static void vt_sched_in(struct kvm_vcpu *vcpu, int cpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_sched_in(vcpu, cpu);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -214,8 +249,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.hardware_unsetup = vt_hardware_unsetup,
 	.check_processor_compatibility = vmx_check_processor_compatibility,
 
-	.hardware_enable = vmx_hardware_enable,
-	.hardware_disable = vmx_hardware_disable,
+	.hardware_enable = vt_hardware_enable,
+	.hardware_disable = vt_hardware_disable,
 	.has_emulated_msr = vmx_has_emulated_msr,
 
 	.is_vm_type_supported = vt_is_vm_type_supported,
@@ -231,7 +266,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_reset = vt_vcpu_reset,
 
 	.prepare_switch_to_guest = vt_prepare_switch_to_guest,
-	.vcpu_load = vmx_vcpu_load,
+	.vcpu_load = vt_vcpu_load,
 	.vcpu_put = vt_vcpu_put,
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
@@ -317,7 +352,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.request_immediate_exit = vmx_request_immediate_exit,
 
-	.sched_in = vmx_sched_in,
+	.sched_in = vt_sched_in,
 
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
 	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 0de113a643e4..4db9bfe2c534 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -61,6 +61,14 @@ static struct tdx_capabilities tdx_caps;
 static DEFINE_MUTEX(tdx_lock);
 static struct mutex *tdx_mng_key_config_lock;
 
+/*
+ * A per-CPU list of TD vCPUs associated with a given CPU.  Used when a CPU
+ * is brought down, to invoke TDH_VP_FLUSH on the appropriate TD vCPUs.
+ * Protected by interrupt masking.  This list is manipulated in process
+ * context of the vcpu and in the IPI callback.  See tdx_flush_vp_on_cpu().
+ */
+static DEFINE_PER_CPU(struct list_head, associated_tdvcpus);
+
 static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
 {
 	pa &= ~hkid_mask;
@@ -95,6 +103,36 @@ static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
 	return kvm_tdx->finalized;
 }
 
+static inline void tdx_disassociate_vp(struct kvm_vcpu *vcpu)
+{
+	list_del(&to_tdx(vcpu)->cpu_list);
+
+	/*
+	 * Ensure tdx->cpu_list is updated before setting vcpu->cpu to -1;
+	 * otherwise, a different CPU can see vcpu->cpu == -1 and add the vCPU
+	 * to its list before it is deleted from this CPU's list.
+	 */
+	smp_wmb();
+
+	vcpu->cpu = -1;
+}
+
+void tdx_hardware_enable(void)
+{
+	INIT_LIST_HEAD(&per_cpu(associated_tdvcpus, raw_smp_processor_id()));
+}
+
+void tdx_hardware_disable(void)
+{
+	int cpu = raw_smp_processor_id();
+	struct list_head *tdvcpus = &per_cpu(associated_tdvcpus, cpu);
+	struct vcpu_tdx *tdx, *tmp;
+
+	/* Safe variant needed as tdx_disassociate_vp() deletes the entry. */
+	list_for_each_entry_safe(tdx, tmp, tdvcpus, cpu_list)
+		tdx_disassociate_vp(&tdx->vcpu);
+}
+
 static void tdx_clear_page(unsigned long page)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
@@ -171,6 +209,41 @@ static void tdx_reclaim_td_page(struct tdx_td_page *page)
 	free_page(page->va);
 }
 
+static void tdx_flush_vp(void *arg)
+{
+	struct kvm_vcpu *vcpu = arg;
+	u64 err;
+
+	lockdep_assert_irqs_disabled();
+
+	/* Task migration can race with CPU offlining. */
+	if (vcpu->cpu != raw_smp_processor_id())
+		return;
+
+	/*
+	 * No need to do TDH_VP_FLUSH if the vCPU hasn't been initialized.  The
+	 * list tracking still needs to be updated so that it's correct if/when
+	 * the vCPU does get initialized.
+	 */
+	if (is_td_vcpu_created(to_tdx(vcpu))) {
+		err = tdh_vp_flush(to_tdx(vcpu)->tdvpr.pa);
+		if (unlikely(err && err != TDX_VCPU_NOT_ASSOCIATED)) {
+			if (WARN_ON_ONCE(err))
+				pr_tdx_error(TDH_VP_FLUSH, err, NULL);
+		}
+	}
+
+	tdx_disassociate_vp(vcpu);
+}
+
+static void tdx_flush_vp_on_cpu(struct kvm_vcpu *vcpu)
+{
+	if (unlikely(vcpu->cpu == -1))
+		return;
+
+	smp_call_function_single(vcpu->cpu, tdx_flush_vp, vcpu, 1);
+}
+
 static int tdx_do_tdh_phymem_cache_wb(void *param)
 {
 	u64 err = 0;
@@ -195,9 +268,11 @@ void tdx_mmu_release_hkid(struct kvm *kvm)
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	cpumask_var_t packages;
 	bool cpumask_allocated;
+	struct kvm_vcpu *vcpu;
 	u64 err;
 	int ret;
 	int i;
+	unsigned long j;
 
 	if (!is_hkid_assigned(kvm_tdx))
 		return;
@@ -205,6 +280,19 @@ void tdx_mmu_release_hkid(struct kvm *kvm)
 	if (!is_td_created(kvm_tdx))
 		goto free_hkid;
 
+	kvm_for_each_vcpu(j, vcpu, kvm)
+		tdx_flush_vp_on_cpu(vcpu);
+
+	mutex_lock(&tdx_lock);
+	err = tdh_mng_vpflushdone(kvm_tdx->tdr.pa);
+	mutex_unlock(&tdx_lock);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_MNG_VPFLUSHDONE, err, NULL);
+		pr_err("tdh_mng_vpflushdone failed. HKID %d is leaked.\n",
+			kvm_tdx->hkid);
+		return;
+	}
+
 	cpumask_allocated = zalloc_cpumask_var(&packages, GFP_KERNEL);
 	cpus_read_lock();
 	for_each_online_cpu(i) {
@@ -481,6 +569,26 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (vcpu->cpu == cpu)
+		return;
+
+	tdx_flush_vp_on_cpu(vcpu);
+
+	local_irq_disable();
+	/*
+	 * Pairs with the smp_wmb() in tdx_disassociate_vp() to ensure
+	 * vcpu->cpu is read before tdx->cpu_list.
+	 */
+	smp_rmb();
+
+	list_add(&tdx->cpu_list, &per_cpu(associated_tdvcpus, cpu));
+	local_irq_enable();
+}
+
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
@@ -527,6 +635,19 @@ void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 		tdx_reclaim_td_page(&tdx->tdvpx[i]);
 	kfree(tdx->tdvpx);
 	tdx_reclaim_td_page(&tdx->tdvpr);
+
+	/*
+	 * kvm_free_vcpus()
+	 *   -> kvm_unload_vcpu_mmu()
+	 *
+	 * does vcpu_load() for every vcpu after they have already been
+	 * disassociated from the per-CPU list by tdx_vm_teardown().  So
+	 * disassociate them again; otherwise the freed vcpu data would be
+	 * accessed when list_{del,add}() is done on the associated_tdvcpus
+	 * list later.
+	 */
+	tdx_flush_vp_on_cpu(vcpu);
+	WARN_ON(vcpu->cpu != -1);
 }
 
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 414c15235ed0..32e05efa70f9 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -85,6 +85,8 @@ struct vcpu_tdx {
 	struct tdx_td_page tdvpr;
 	struct tdx_td_page *tdvpx;
 
+	struct list_head cpu_list;
+
 	union tdx_exit_reason exit_reason;
 
 	bool initialized;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 2213739c2303..55273a0fe273 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -132,6 +132,8 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
 bool tdx_is_vm_type_supported(unsigned long type);
 void tdx_hardware_unsetup(void);
+void tdx_hardware_enable(void);
+void tdx_hardware_disable(void);
 int tdx_dev_ioctl(void __user *argp);
 
 int tdx_vm_init(struct kvm *kvm);
@@ -144,6 +146,7 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void tdx_vcpu_put(struct kvm_vcpu *vcpu);
+void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -154,6 +157,8 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
 static inline void tdx_hardware_unsetup(void) {}
+static inline void tdx_hardware_enable(void) {}
+static inline void tdx_hardware_disable(void) {}
 static inline int tdx_dev_ioctl(void __user *argp) { return -EOPNOTSUPP; };
 
 static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
@@ -167,6 +172,7 @@ static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
 static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 073/102] KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched behavior
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (71 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 072/102] KVM: TDX: handle vcpu migration over logical processor isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 074/102] KVM: TDX: Add support for find pending IRQ in a protected local APIC isaku.yamahata
                   ` (30 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Xiaoyao Li,
	Sean Christopherson, Chao Gao

From: Isaku Yamahata <isaku.yamahata@intel.com>

Add a flag, KVM_DEBUGREG_AUTO_SWITCH, to skip saving/restoring guest DRs
irrespective of any other flags.  TDX-SEAM unconditionally saves and
restores guest DRs and resets them to the architectural INIT state on TD
exit.  So KVM needs to save host DRs before TD entry without restoring
guest DRs, and restore host DRs after TD exit.

Opportunistically convert the KVM_DEBUGREG_* definitions to use BIT().
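
For reference, a simplified sketch of how the flag gates DR handling around
guest entry (condensed from the vcpu_enter_guest() hunks below;
load_guest_debug_regs() and run_guest() are shorthand, not real functions):

  if (unlikely(vcpu->arch.switch_db_regs & ~KVM_DEBUGREG_AUTO_SWITCH))
          load_guest_debug_regs(vcpu);    /* the set_debugreg() calls */

  run_guest(vcpu);                        /* TD enter / VM enter */

  if (hw_breakpoint_active()) {
          if (!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_AUTO_SWITCH))
                  hw_breakpoint_restore();        /* host DR0-DR3, DR6, DR7 */
          else
                  set_debugreg(__this_cpu_read(cpu_dr7), 7);      /* DR7 only */
  }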

Reported-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  9 +++++++--
 arch/x86/kvm/vmx/tdx.c          |  1 +
 arch/x86/kvm/x86.c              | 11 ++++++++---
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a1d186190287..1f5be98b7630 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -556,8 +556,13 @@ struct kvm_pmu {
 struct kvm_pmu_ops;
 
 enum {
-	KVM_DEBUGREG_BP_ENABLED = 1,
-	KVM_DEBUGREG_WONT_EXIT = 2,
+	KVM_DEBUGREG_BP_ENABLED		= BIT(0),
+	KVM_DEBUGREG_WONT_EXIT		= BIT(1),
+	/*
+	 * Guest debug registers are saved/restored by hardware on exit from
+	 * or entry to the guest.  KVM doesn't need to switch them.
+	 */
+	KVM_DEBUGREG_AUTO_SWITCH	= BIT(2),
 };
 
 struct kvm_mtrr_range {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 4db9bfe2c534..c256853efed5 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -545,6 +545,7 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.efer = EFER_SCE | EFER_LME | EFER_LMA | EFER_NX;
 
+	vcpu->arch.switch_db_regs = KVM_DEBUGREG_AUTO_SWITCH;
 	vcpu->arch.cr0_guest_owned_bits = -1ul;
 	vcpu->arch.cr4_guest_owned_bits = -1ul;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ce0ef32c2619..39473b561e27 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10326,7 +10326,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.guest_fpu.xfd_err)
 		wrmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
 
-	if (unlikely(vcpu->arch.switch_db_regs)) {
+	if (unlikely(vcpu->arch.switch_db_regs & ~KVM_DEBUGREG_AUTO_SWITCH)) {
 		set_debugreg(0, 7);
 		set_debugreg(vcpu->arch.eff_db[0], 0);
 		set_debugreg(vcpu->arch.eff_db[1], 1);
@@ -10368,6 +10368,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 */
 	if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
 		WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
+		WARN_ON(vcpu->arch.switch_db_regs & KVM_DEBUGREG_AUTO_SWITCH);
 		static_call(kvm_x86_sync_dirty_debug_regs)(vcpu);
 		kvm_update_dr0123(vcpu);
 		kvm_update_dr7(vcpu);
@@ -10380,8 +10381,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * care about the messed up debug address registers. But if
 	 * we have some of them active, restore the old state.
 	 */
-	if (hw_breakpoint_active())
-		hw_breakpoint_restore();
+	if (hw_breakpoint_active()) {
+		if (!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_AUTO_SWITCH))
+			hw_breakpoint_restore();
+		else
+			set_debugreg(__this_cpu_read(cpu_dr7), 7);
+	}
 
 	vcpu->arch.last_vmentry_cpu = vcpu->cpu;
 	vcpu->arch.last_guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 074/102] KVM: TDX: Add support for find pending IRQ in a protected local APIC
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (72 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 073/102] KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched behavior isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 075/102] KVM: x86: Assume timer IRQ was injected if APIC state is protected isaku.yamahata
                   ` (29 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <seanjc@google.com>

Add a flag and a hook to KVM's local APIC management to support determining
whether or not a TDX guest has a pending IRQ.  For TDX vCPUs, the virtual
APIC page is owned by the TDX module and cannot be accessed by KVM.  As a
result, registers that are virtualized by the CPU, e.g. PPR, cannot be
read or written by KVM.  To deliver interrupts for TDX guests, KVM must
send an IRQ to the CPU on the posted interrupt notification vector.  And
to determine if a TDX vCPU has a pending interrupt, KVM must check if there
is an outstanding notification.

Return "no interrupt" in kvm_apic_has_interrupt() if the guest APIC is
protected to short-circuit the various other flows that try to pull an
IRQ out of the vAPIC, the only valid operation is querying _if_ an IRQ is
pending, KVM can't do anything based on _which_ IRQ is pending.

Intentionally omit sanity checks from other flows, e.g. PPR update, so as
not to degrade non-TDX guests with unnecessary checks.  A well-behaved KVM
and userspace will never reach those flows for TDX guests, but reaching
them is not fatal if something does go awry.

Note, this doesn't handle interrupts that have been delivered to the vCPU
but not yet recognized by the core, i.e. interrupts that are sitting in
vmcs.GUEST_INTR_STATUS.  Querying that state requires a SEAMCALL and will
be supported in a future patch.
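
A minimal sketch of the resulting query path (it mirrors the
kvm_cpu_has_interrupt() hunk below; for TDX the new hook boils down to
checking the posted-interrupt descriptor's outstanding-notification bit):

  int kvm_cpu_has_interrupt_sketch(struct kvm_vcpu *v)
  {
          if (kvm_cpu_has_extint(v))
                  return 1;

          /* The vAPIC is inaccessible; ask the vendor hook instead. */
          if (lapic_in_kernel(v) && v->arch.apic->guest_apic_protected)
                  return static_call(kvm_x86_protected_apic_has_interrupt)(v);

          return kvm_apic_has_interrupt(v) != -1;         /* LAPIC */
  }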

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  1 +
 arch/x86/kvm/irq.c                 |  3 +++
 arch/x86/kvm/lapic.c               |  3 +++
 arch/x86/kvm/lapic.h               |  2 ++
 arch/x86/kvm/vmx/main.c            | 11 +++++++++++
 arch/x86/kvm/vmx/tdx.c             |  6 ++++++
 arch/x86/kvm/vmx/x86_ops.h         |  2 ++
 8 files changed, 29 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 6982d57e4518..ec98b3f734a2 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -112,6 +112,7 @@ KVM_X86_OP_OPTIONAL(pi_update_irte)
 KVM_X86_OP_OPTIONAL(pi_start_assignment)
 KVM_X86_OP_OPTIONAL(apicv_post_state_restore)
 KVM_X86_OP_OPTIONAL_RET0(dy_apicv_has_pending_interrupt)
+KVM_X86_OP_OPTIONAL(protected_apic_has_interrupt)
 KVM_X86_OP_OPTIONAL(set_hv_timer)
 KVM_X86_OP_OPTIONAL(cancel_hv_timer)
 KVM_X86_OP(setup_mce)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1f5be98b7630..6a940700eb9a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1614,6 +1614,7 @@ struct kvm_x86_ops {
 	void (*pi_start_assignment)(struct kvm *kvm);
 	void (*apicv_post_state_restore)(struct kvm_vcpu *vcpu);
 	bool (*dy_apicv_has_pending_interrupt)(struct kvm_vcpu *vcpu);
+	bool (*protected_apic_has_interrupt)(struct kvm_vcpu *vcpu);
 
 	int (*set_hv_timer)(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
 			    bool *expired);
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index f371f1292ca3..56e52eef0269 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -100,6 +100,9 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *v)
 	if (kvm_cpu_has_extint(v))
 		return 1;
 
+	if (lapic_in_kernel(v) && v->arch.apic->guest_apic_protected)
+		return static_call(kvm_x86_protected_apic_has_interrupt)(v);
+
 	return kvm_apic_has_interrupt(v) != -1;	/* LAPIC */
 }
 EXPORT_SYMBOL_GPL(kvm_cpu_has_interrupt);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index a413a1d8df4c..c85ed9f6a8c9 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2585,6 +2585,9 @@ int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu)
 	if (!kvm_apic_present(vcpu))
 		return -1;
 
+	if (apic->guest_apic_protected)
+		return -1;
+
 	__apic_update_ppr(apic, &ppr);
 	return apic_has_interrupt_for_ppr(apic, ppr);
 }
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 65bb2a8cf145..1fa316b81fef 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -51,6 +51,8 @@ struct kvm_lapic {
 	bool sw_enabled;
 	bool irr_pending;
 	bool lvt0_in_nmi_mode;
+	/* Select registers in the vAPIC cannot be read/written. */
+	bool guest_apic_protected;
 	/* Number of bits set in ISR. */
 	s16 isr_count;
 	/* The highest vector set in ISR; if -1 - invalid, must scan ISR. */
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index ad09988c4faa..f14519c6a861 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -46,6 +46,9 @@ static __init int vt_hardware_setup(void)
 
 	enable_tdx = enable_tdx && !tdx_hardware_setup(&vt_x86_ops);
 
+	if (!enable_tdx)
+		vt_x86_ops.protected_apic_has_interrupt = NULL;
+
 	if (enable_ept)
 		kvm_mmu_set_ept_masks(enable_ept_ad_bits,
 				      cpu_has_vmx_ept_execute_only());
@@ -178,6 +181,13 @@ static void vt_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	return vmx_vcpu_load(vcpu, cpu);
 }
 
+static bool vt_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
+{
+	KVM_BUG_ON(!is_td_vcpu(vcpu), vcpu->kvm);
+
+	return tdx_protected_apic_has_interrupt(vcpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -329,6 +339,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.sync_pir_to_irr = vmx_sync_pir_to_irr,
 	.deliver_interrupt = vmx_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
+	.protected_apic_has_interrupt = vt_protected_apic_has_interrupt,
 
 	.set_tss_addr = vmx_set_tss_addr,
 	.set_identity_map_addr = vmx_set_identity_map_addr,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c256853efed5..244477713fee 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -526,6 +526,7 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 		return -EINVAL;
 
 	fpstate_set_confidential(&vcpu->arch.guest_fpu);
+	vcpu->arch.apic->guest_apic_protected = true;
 
 	ret = tdx_alloc_td_page(&tdx->tdvpr);
 	if (ret)
@@ -590,6 +591,11 @@ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	local_irq_enable();
 }
 
+bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
+{
+	return pi_has_pending_interrupt(vcpu);
+}
+
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 55273a0fe273..17aaa0b3d921 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -147,6 +147,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -173,6 +174,7 @@ static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTP
 static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
+static inline bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu) { return false; }
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 075/102] KVM: x86: Assume timer IRQ was injected if APIC state is protected
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (73 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 074/102] KVM: TDX: Add support for find pending IRQ in a protected local APIC isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 076/102] KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c isaku.yamahata
                   ` (28 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <seanjc@google.com>

If APIC state is protected, i.e. the vCPU is a TDX guest, assume a timer
IRQ was injected when deciding whether or not to busy wait in the "timer
advanced" path.  The "real" vIRR is not readable/writable, so trying to
query for a pending timer IRQ will return garbage.

Note, TDX can scour the PIR if it wants to be more precise and skip the
"wait" call entirely.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/lapic.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index c85ed9f6a8c9..707f1ff90f8a 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1578,8 +1578,17 @@ static void apic_update_lvtt(struct kvm_lapic *apic)
 static bool lapic_timer_int_injected(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
-	u32 reg = kvm_lapic_get_reg(apic, APIC_LVTT);
+	u32 reg;
 
+	/*
+	 * Assume a timer IRQ was "injected" if the APIC is protected.  KVM's
+	 * copy of the vIRR is bogus; it's the responsibility of the caller to
+	 * precisely check whether or not a timer IRQ is pending.
+	 */
+	if (apic->guest_apic_protected)
+		return true;
+
+	reg  = kvm_lapic_get_reg(apic, APIC_LVTT);
 	if (kvm_apic_hw_enabled(apic)) {
 		int vec = reg & APIC_VECTOR_MASK;
 		void *bitmap = apic->regs + APIC_ISR;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 076/102] KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (74 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 075/102] KVM: x86: Assume timer IRQ was injected if APIC state is protected isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 077/102] KVM: TDX: Implement interrupt injection isaku.yamahata
                   ` (27 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

As TDX will use posted_intr.c, the use of struct vcpu_vmx there is a
blocker.  Because the members struct pi_desc pi_desc and struct list_head
pi_wakeup_list are only used in posted_intr.c, introduce a common
structure, struct vcpu_pi, and make vcpu_vmx and vcpu_tdx have the same
layout at the top of the structure.

To minimize the diff size, avoid a code conversion like
vmx->pi_desc => vmx->common->pi_desc.  Instead, add compile-time checks
that the layout is as expected.
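
A sketch of the layout trick (illustrative only; the real definitions are
in the hunks below):

  /*
   * vcpu_vmx and vcpu_tdx both start with the same members as vcpu_pi, so
   * a kvm_vcpu pointer can be cast to vcpu_pi regardless of the VM type.
   */
  struct vcpu_pi {
          struct kvm_vcpu vcpu;
          struct pi_desc pi_desc;
          struct list_head pi_wakeup_list;
  };

  static_assert(offsetof(struct vcpu_pi, pi_desc) ==
                offsetof(struct vcpu_vmx, pi_desc));
  static_assert(offsetof(struct vcpu_pi, pi_desc) ==
                offsetof(struct vcpu_tdx, pi_desc));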

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/posted_intr.c | 41 ++++++++++++++++++++++++++--------
 arch/x86/kvm/vmx/posted_intr.h | 11 +++++++++
 arch/x86/kvm/vmx/tdx.c         |  1 +
 arch/x86/kvm/vmx/tdx.h         |  8 +++++++
 arch/x86/kvm/vmx/vmx.h         | 14 +++++++-----
 5 files changed, 60 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index 237a1f40f939..196bf9f86ee4 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -9,6 +9,7 @@
 #include "posted_intr.h"
 #include "trace.h"
 #include "vmx.h"
+#include "tdx.h"
 
 /*
  * Maintain a per-CPU list of vCPUs that need to be awakened by wakeup_handler()
@@ -29,9 +30,29 @@ static DEFINE_PER_CPU(struct list_head, wakeup_vcpus_on_cpu);
  */
 static DEFINE_PER_CPU(raw_spinlock_t, wakeup_vcpus_on_cpu_lock);
 
+/*
+ * The layout of the head of struct vcpu_vmx and struct vcpu_tdx must match
+ * struct vcpu_pi.
+ */
+static_assert(offsetof(struct vcpu_pi, pi_desc) ==
+	      offsetof(struct vcpu_vmx, pi_desc));
+static_assert(offsetof(struct vcpu_pi, pi_wakeup_list) ==
+	      offsetof(struct vcpu_vmx, pi_wakeup_list));
+#ifdef CONFIG_INTEL_TDX_HOST
+static_assert(offsetof(struct vcpu_pi, pi_desc) ==
+	      offsetof(struct vcpu_tdx, pi_desc));
+static_assert(offsetof(struct vcpu_pi, pi_wakeup_list) ==
+	      offsetof(struct vcpu_tdx, pi_wakeup_list));
+#endif
+
+static inline struct vcpu_pi *vcpu_to_pi(struct kvm_vcpu *vcpu)
+{
+	return (struct vcpu_pi*)vcpu;
+}
+
 static inline struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
 {
-	return &(to_vmx(vcpu)->pi_desc);
+	return &vcpu_to_pi(vcpu)->pi_desc;
 }
 
 static int pi_try_set_control(struct pi_desc *pi_desc, u64 old, u64 new)
@@ -50,8 +71,8 @@ static int pi_try_set_control(struct pi_desc *pi_desc, u64 old, u64 new)
 
 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct vcpu_pi *vcpu_pi = vcpu_to_pi(vcpu);
+	struct pi_desc *pi_desc = &vcpu_pi->pi_desc;
 	struct pi_desc old, new;
 	unsigned long flags;
 	unsigned int dest;
@@ -88,7 +109,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
 	 */
 	if (pi_desc->nv == POSTED_INTR_WAKEUP_VECTOR) {
 		raw_spin_lock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
-		list_del(&vmx->pi_wakeup_list);
+		list_del(&vcpu_pi->pi_wakeup_list);
 		raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
 	}
 
@@ -142,15 +163,15 @@ static bool vmx_can_use_vtd_pi(struct kvm *kvm)
  */
 static void pi_enable_wakeup_handler(struct kvm_vcpu *vcpu)
 {
-	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct vcpu_pi *vcpu_pi = vcpu_to_pi(vcpu);
+	struct pi_desc *pi_desc = &vcpu_pi->pi_desc;
 	struct pi_desc old, new;
 	unsigned long flags;
 
 	local_irq_save(flags);
 
 	raw_spin_lock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
-	list_add_tail(&vmx->pi_wakeup_list,
+	list_add_tail(&vcpu_pi->pi_wakeup_list,
 		      &per_cpu(wakeup_vcpus_on_cpu, vcpu->cpu));
 	raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
 
@@ -187,7 +208,8 @@ static bool vmx_needs_pi_wakeup(struct kvm_vcpu *vcpu)
 	 * notification vector is switched to the one that calls
 	 * back to the pi_wakeup_handler() function.
 	 */
-	return vmx_can_use_ipiv(vcpu) || vmx_can_use_vtd_pi(vcpu->kvm);
+	return (vmx_can_use_ipiv(vcpu) && !is_td_vcpu(vcpu)) ||
+		vmx_can_use_vtd_pi(vcpu->kvm);
 }
 
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
@@ -197,7 +219,8 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 	if (!vmx_needs_pi_wakeup(vcpu))
 		return;
 
-	if (kvm_vcpu_is_blocking(vcpu) && !vmx_interrupt_blocked(vcpu))
+	if (kvm_vcpu_is_blocking(vcpu) &&
+	    (is_td_vcpu(vcpu) || !vmx_interrupt_blocked(vcpu)))
 		pi_enable_wakeup_handler(vcpu);
 
 	/*
diff --git a/arch/x86/kvm/vmx/posted_intr.h b/arch/x86/kvm/vmx/posted_intr.h
index 26992076552e..2fe8222308b2 100644
--- a/arch/x86/kvm/vmx/posted_intr.h
+++ b/arch/x86/kvm/vmx/posted_intr.h
@@ -94,6 +94,17 @@ static inline bool pi_test_sn(struct pi_desc *pi_desc)
 			(unsigned long *)&pi_desc->control);
 }
 
+struct vcpu_pi {
+	struct kvm_vcpu	vcpu;
+
+	/* Posted interrupt descriptor */
+	struct pi_desc pi_desc;
+
+	/* Used if this vCPU is waiting for PI notification wakeup. */
+	struct list_head pi_wakeup_list;
+	/* Until here, common layout between vcpu_vmx and vcpu_tdx. */
+};
+
 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu);
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu);
 void pi_wakeup_handler(void);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 244477713fee..01dd2376c3a1 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -527,6 +527,7 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 
 	fpstate_set_confidential(&vcpu->arch.guest_fpu);
 	vcpu->arch.apic->guest_apic_protected = true;
+	INIT_LIST_HEAD(&tdx->pi_wakeup_list);
 
 	ret = tdx_alloc_td_page(&tdx->tdvpr);
 	if (ret)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 32e05efa70f9..1268a49fdf18 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -4,6 +4,7 @@
 
 #ifdef CONFIG_INTEL_TDX_HOST
 
+#include "posted_intr.h"
 #include "pmu_intel.h"
 #include "tdx_ops.h"
 
@@ -82,6 +83,13 @@ union tdx_exit_reason {
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
 
+	/* Posted interrupt descriptor */
+	struct pi_desc pi_desc;
+
+	/* Used if this vCPU is waiting for PI notification wakeup. */
+	struct list_head pi_wakeup_list;
+	/* Until here, same layout as struct vcpu_pi. */
+
 	struct tdx_td_page tdvpr;
 	struct tdx_td_page *tdvpx;
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 60d93c38e014..1cb34a8533ff 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -226,6 +226,14 @@ struct nested_vmx {
 
 struct vcpu_vmx {
 	struct kvm_vcpu       vcpu;
+
+	/* Posted interrupt descriptor */
+	struct pi_desc pi_desc;
+
+	/* Used if this vCPU is waiting for PI notification wakeup. */
+	struct list_head pi_wakeup_list;
+	/* Until here, same layout as struct vcpu_pi. */
+
 	u8                    fail;
 	u8		      x2apic_msr_bitmap_mode;
 
@@ -295,12 +303,6 @@ struct vcpu_vmx {
 
 	union vmx_exit_reason exit_reason;
 
-	/* Posted interrupt descriptor */
-	struct pi_desc pi_desc;
-
-	/* Used if this vCPU is waiting for PI notification wakeup. */
-	struct list_head pi_wakeup_list;
-
 	/* Support for a guest hypervisor (nested VMX) */
 	struct nested_vmx nested;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 077/102] KVM: TDX: Implement interrupt injection
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (75 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 076/102] KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 078/102] KVM: TDX: Implements vcpu request_immediate_exit isaku.yamahata
                   ` (26 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX supports interrupt injection into a vcpu via posted interrupts.  Wire
up the corresponding kvm x86 operations to posted interrupts.  Move
kvm_vcpu_trigger_posted_interrupt() from vmx.c to common.h to share the
code.

VMX can inject an interrupt by setting the interrupt information field,
VM_ENTRY_INTR_INFO_FIELD, of the VMCS.  TDX supports interrupt injection
only via posted interrupts, so ignore the execution paths that access
VM_ENTRY_INTR_INFO_FIELD.

As the cpu state is protected and APICv is enabled for the TDX guest, the
VMM can inject an interrupt by updating the posted-interrupt descriptor.
Treat interrupts as always injectable.
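
For illustration, a condensed sketch of the delivery path that ends up
shared between VMX and TDX (it mirrors __vmx_deliver_posted_interrupt()
added below):

  static void deliver_posted_interrupt_sketch(struct kvm_vcpu *vcpu,
                                              struct pi_desc *pi_desc, int vector)
  {
          /* Record the vector in the PIR; bail if it was already pending. */
          if (pi_test_and_set_pir(vector, pi_desc))
                  return;

          /* If a previous notification already set ON, the IPI was sent. */
          if (pi_test_and_set_on(pi_desc))
                  return;

          /* Notify the target pcpu, or wake the vCPU if it is not in the guest. */
          kvm_vcpu_trigger_posted_interrupt(vcpu, POSTED_INTR_VECTOR);
  }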

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/common.h      | 71 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/main.c        | 92 ++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/posted_intr.c |  2 +-
 arch/x86/kvm/vmx/posted_intr.h |  2 +
 arch/x86/kvm/vmx/tdx.c         | 25 +++++++++
 arch/x86/kvm/vmx/vmx.c         | 67 +------------------------
 arch/x86/kvm/vmx/x86_ops.h     |  7 ++-
 7 files changed, 189 insertions(+), 77 deletions(-)

diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 235908f3e044..1522e9e6851b 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -4,6 +4,7 @@
 
 #include <linux/kvm_host.h>
 
+#include "posted_intr.h"
 #include "mmu.h"
 
 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
@@ -30,4 +31,74 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
 
+static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
+						     int pi_vec)
+{
+#ifdef CONFIG_SMP
+	if (vcpu->mode == IN_GUEST_MODE) {
+		/*
+		 * The vector of the virtual has already been set in the PIR.
+		 * Send a notification event to deliver the virtual interrupt
+		 * unless the vCPU is the currently running vCPU, i.e. the
+		 * event is being sent from a fastpath VM-Exit handler, in
+		 * which case the PIR will be synced to the vIRR before
+		 * re-entering the guest.
+		 *
+		 * When the target is not the running vCPU, the following
+		 * possibilities emerge:
+		 *
+		 * Case 1: vCPU stays in non-root mode. Sending a notification
+		 * event posts the interrupt to the vCPU.
+		 *
+		 * Case 2: vCPU exits to root mode and is still runnable. The
+		 * PIR will be synced to the vIRR before re-entering the guest.
+		 * Sending a notification event is ok as the host IRQ handler
+		 * will ignore the spurious event.
+		 *
+		 * Case 3: vCPU exits to root mode and is blocked. vcpu_block()
+		 * has already synced PIR to vIRR and never blocks the vCPU if
+		 * the vIRR is not empty. Therefore, a blocked vCPU here does
+		 * not wait for any requested interrupts in PIR, and sending a
+		 * notification event also results in a benign, spurious event.
+		 */
+
+		if (vcpu != kvm_get_running_vcpu())
+			apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
+		return;
+	}
+#endif
+	/*
+	 * The vCPU isn't in the guest; wake the vCPU in case it is blocking,
+	 * otherwise do nothing as KVM will grab the highest priority pending
+	 * IRQ via ->sync_pir_to_irr() in vcpu_enter_guest().
+	 */
+	kvm_vcpu_wake_up(vcpu);
+}
+
+/*
+ * Send interrupt to vcpu via posted interrupt way.
+ * 1. If target vcpu is running(non-root mode), send posted interrupt
+ * notification to vcpu and hardware will sync PIR to vIRR atomically.
+ * 2. If target vcpu isn't running(root mode), kick it to pick up the
+ * interrupt from PIR in next vmentry.
+ */
+static inline void __vmx_deliver_posted_interrupt(
+	struct kvm_vcpu *vcpu, struct pi_desc *pi_desc, int vector)
+{
+	if (pi_test_and_set_pir(vector, pi_desc))
+		return;
+
+	/* If a previous notification has sent the IPI, nothing to do.  */
+	if (pi_test_and_set_on(pi_desc))
+		return;
+
+	/*
+	 * The implied barrier in pi_test_and_set_on() pairs with the smp_mb_*()
+	 * after setting vcpu->mode in vcpu_enter_guest(), thus the vCPU is
+	 * guaranteed to see PID.ON=1 and sync the PIR to IRR if triggering a
+	 * posted interrupt "fails" because vcpu->mode != IN_GUEST_MODE.
+	 */
+	kvm_vcpu_trigger_posted_interrupt(vcpu, POSTED_INTR_VECTOR);
+}
+
 #endif /* __KVM_X86_VMX_COMMON_H */
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index f14519c6a861..07ea7211c633 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -188,6 +188,33 @@ static bool vt_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
 	return tdx_protected_apic_has_interrupt(vcpu);
 }
 
+static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+{
+	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
+	pi_clear_on(pi);
+	memset(pi->pir, 0, sizeof(pi->pir));
+}
+
+static int vt_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return -1;
+
+	return vmx_sync_pir_to_irr(vcpu);
+}
+
+static void vt_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector)
+{
+	if (is_td_vcpu(apic->vcpu)) {
+		tdx_deliver_interrupt(apic, delivery_mode, trig_mode,
+					     vector);
+		return;
+	}
+
+	vmx_deliver_interrupt(apic, delivery_mode, trig_mode, vector);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -237,6 +264,53 @@ static void vt_sched_in(struct kvm_vcpu *vcpu, int cpu)
 	vmx_sched_in(vcpu, cpu);
 }
 
+static void vt_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+	vmx_set_interrupt_shadow(vcpu, mask);
+}
+
+static u32 vt_get_interrupt_shadow(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return 0;
+
+	return vmx_get_interrupt_shadow(vcpu);
+}
+
+static void vt_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_inject_irq(vcpu, reinjected);
+}
+
+static void vt_cancel_injection(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_cancel_injection(vcpu);
+}
+
+static int vt_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	if (is_td_vcpu(vcpu))
+		return true;
+
+	return vmx_interrupt_allowed(vcpu, for_injection);
+}
+
+static void vt_enable_irq_window(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_enable_irq_window(vcpu);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -313,31 +387,31 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.handle_exit = vmx_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
-	.set_interrupt_shadow = vmx_set_interrupt_shadow,
-	.get_interrupt_shadow = vmx_get_interrupt_shadow,
+	.set_interrupt_shadow = vt_set_interrupt_shadow,
+	.get_interrupt_shadow = vt_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
-	.inject_irq = vmx_inject_irq,
+	.inject_irq = vt_inject_irq,
 	.inject_nmi = vmx_inject_nmi,
 	.queue_exception = vmx_queue_exception,
-	.cancel_injection = vmx_cancel_injection,
-	.interrupt_allowed = vmx_interrupt_allowed,
+	.cancel_injection = vt_cancel_injection,
+	.interrupt_allowed = vt_interrupt_allowed,
 	.nmi_allowed = vmx_nmi_allowed,
 	.get_nmi_mask = vmx_get_nmi_mask,
 	.set_nmi_mask = vmx_set_nmi_mask,
 	.enable_nmi_window = vmx_enable_nmi_window,
-	.enable_irq_window = vmx_enable_irq_window,
+	.enable_irq_window = vt_enable_irq_window,
 	.update_cr8_intercept = vmx_update_cr8_intercept,
 	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
 	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
 	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
 	.load_eoi_exitmap = vmx_load_eoi_exitmap,
-	.apicv_post_state_restore = vmx_apicv_post_state_restore,
+	.apicv_post_state_restore = vt_apicv_post_state_restore,
 	.check_apicv_inhibit_reasons = vmx_check_apicv_inhibit_reasons,
 	.hwapic_irr_update = vmx_hwapic_irr_update,
 	.hwapic_isr_update = vmx_hwapic_isr_update,
 	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
-	.sync_pir_to_irr = vmx_sync_pir_to_irr,
-	.deliver_interrupt = vmx_deliver_interrupt,
+	.sync_pir_to_irr = vt_sync_pir_to_irr,
+	.deliver_interrupt = vt_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
 	.protected_apic_has_interrupt = vt_protected_apic_has_interrupt,
 
diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index 196bf9f86ee4..86c5bc9255c5 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -50,7 +50,7 @@ static inline struct vcpu_pi *vcpu_to_pi(struct kvm_vcpu *vcpu)
 	return (struct vcpu_pi*)vcpu;
 }
 
-static inline struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
+struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
 {
 	return &vcpu_to_pi(vcpu)->pi_desc;
 }
diff --git a/arch/x86/kvm/vmx/posted_intr.h b/arch/x86/kvm/vmx/posted_intr.h
index 2fe8222308b2..0f9983b6910b 100644
--- a/arch/x86/kvm/vmx/posted_intr.h
+++ b/arch/x86/kvm/vmx/posted_intr.h
@@ -105,6 +105,8 @@ struct vcpu_pi {
 	/* Until here, common layout between vcpu_vmx and vcpu_tdx. */
 };
 
+struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu);
+
 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu);
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu);
 void pi_wakeup_handler(void);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 01dd2376c3a1..db3840c040f9 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -7,6 +7,7 @@
 
 #include "capabilities.h"
 #include "x86_ops.h"
+#include "common.h"
 #include "mmu.h"
 #include "tdx.h"
 #include "vmx.h"
@@ -556,6 +557,9 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.guest_state_protected =
 		!(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
 
+	tdx->pi_desc.nv = POSTED_INTR_VECTOR;
+	tdx->pi_desc.sn = 1;
+
 	tdx->host_state_need_save = true;
 	tdx->host_state_need_restore = false;
 
@@ -576,6 +580,7 @@ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
 
+	vmx_vcpu_pi_load(vcpu, cpu);
 	if (vcpu->cpu == cpu)
 		return;
 
@@ -789,6 +794,12 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	trace_kvm_entry(vcpu);
 
+	if (pi_test_on(&tdx->pi_desc)) {
+		apic->send_IPI_self(POSTED_INTR_VECTOR);
+
+		kvm_wait_lapic_expire(vcpu);
+	}
+
 	tdx_vcpu_enter_exit(vcpu, tdx);
 
 	tdx_user_return_update_cache();
@@ -1126,6 +1137,16 @@ static void tdx_handle_changed_private_spte(
 	}
 }
 
+void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector)
+{
+	struct kvm_vcpu *vcpu = apic->vcpu;
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	/* TDX supports only posted interrupt.  No lapic emulation. */
+	__vmx_deliver_posted_interrupt(vcpu, &tdx->pi_desc, vector);
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
@@ -1562,6 +1583,10 @@ int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 		return -EIO;
 	}
 
+	td_vmcs_write16(tdx, POSTED_INTR_NV, POSTED_INTR_VECTOR);
+	td_vmcs_write64(tdx, POSTED_INTR_DESC_ADDR, __pa(&tdx->pi_desc));
+	td_vmcs_setbit32(tdx, PIN_BASED_VM_EXEC_CONTROL, PIN_BASED_POSTED_INTR);
+
 	tdx->initialized = true;
 	return 0;
 }
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3f231159fe3d..3aca3976ba1b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4010,50 +4010,6 @@ void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
 	pt_update_intercept_for_msr(vcpu);
 }
 
-static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
-						     int pi_vec)
-{
-#ifdef CONFIG_SMP
-	if (vcpu->mode == IN_GUEST_MODE) {
-		/*
-		 * The vector of the virtual has already been set in the PIR.
-		 * Send a notification event to deliver the virtual interrupt
-		 * unless the vCPU is the currently running vCPU, i.e. the
-		 * event is being sent from a fastpath VM-Exit handler, in
-		 * which case the PIR will be synced to the vIRR before
-		 * re-entering the guest.
-		 *
-		 * When the target is not the running vCPU, the following
-		 * possibilities emerge:
-		 *
-		 * Case 1: vCPU stays in non-root mode. Sending a notification
-		 * event posts the interrupt to the vCPU.
-		 *
-		 * Case 2: vCPU exits to root mode and is still runnable. The
-		 * PIR will be synced to the vIRR before re-entering the guest.
-		 * Sending a notification event is ok as the host IRQ handler
-		 * will ignore the spurious event.
-		 *
-		 * Case 3: vCPU exits to root mode and is blocked. vcpu_block()
-		 * has already synced PIR to vIRR and never blocks the vCPU if
-		 * the vIRR is not empty. Therefore, a blocked vCPU here does
-		 * not wait for any requested interrupts in PIR, and sending a
-		 * notification event also results in a benign, spurious event.
-		 */
-
-		if (vcpu != kvm_get_running_vcpu())
-			apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
-		return;
-	}
-#endif
-	/*
-	 * The vCPU isn't in the guest; wake the vCPU in case it is blocking,
-	 * otherwise do nothing as KVM will grab the highest priority pending
-	 * IRQ via ->sync_pir_to_irr() in vcpu_enter_guest().
-	 */
-	kvm_vcpu_wake_up(vcpu);
-}
-
 static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
 						int vector)
 {
@@ -4105,20 +4061,7 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 	if (!vcpu->arch.apicv_active)
 		return -1;
 
-	if (pi_test_and_set_pir(vector, &vmx->pi_desc))
-		return 0;
-
-	/* If a previous notification has sent the IPI, nothing to do.  */
-	if (pi_test_and_set_on(&vmx->pi_desc))
-		return 0;
-
-	/*
-	 * The implied barrier in pi_test_and_set_on() pairs with the smp_mb_*()
-	 * after setting vcpu->mode in vcpu_enter_guest(), thus the vCPU is
-	 * guaranteed to see PID.ON=1 and sync the PIR to IRR if triggering a
-	 * posted interrupt "fails" because vcpu->mode != IN_GUEST_MODE.
-	 */
-	kvm_vcpu_trigger_posted_interrupt(vcpu, POSTED_INTR_VECTOR);
+	__vmx_deliver_posted_interrupt(vcpu, &vmx->pi_desc, vector);
 	return 0;
 }
 
@@ -6702,14 +6645,6 @@ void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 	vmcs_write64(EOI_EXIT_BITMAP3, eoi_exit_bitmap[3]);
 }
 
-void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-
-	pi_clear_on(&vmx->pi_desc);
-	memset(vmx->pi_desc.pir, 0, sizeof(vmx->pi_desc.pir));
-}
-
 void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
 
 static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 17aaa0b3d921..bc25260aefc6 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -53,7 +53,6 @@ int vmx_check_intercept(struct kvm_vcpu *vcpu,
 bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
 void vmx_migrate_timers(struct kvm_vcpu *vcpu);
 void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
-void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu);
 bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason);
 void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
 void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
@@ -149,6 +148,9 @@ void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
 
+void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector);
+
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
 
@@ -176,6 +178,9 @@ static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 static inline bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu) { return false; }
 
+static inline void tdx_deliver_interrupt(
+	struct kvm_lapic *apic, int delivery_mode, int trig_mode, int vector) {}
+
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 078/102] KVM: TDX: Implements vcpu request_immediate_exit
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (76 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 077/102] KVM: TDX: Implement interrupt injection isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 079/102] KVM: TDX: Implement methods to inject NMI isaku.yamahata
                   ` (25 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Now that interrupts can be injected into a TDX vcpu, it is ready to block a
TDX vcpu.  Wire up the kvm x86 methods for blocking/unblocking a vcpu for
TDX.  To unblock on pending events, the request_immediate_exit method is
also needed.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/main.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 07ea7211c633..d743de7b087c 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -311,6 +311,14 @@ static void vt_enable_irq_window(struct kvm_vcpu *vcpu)
 	vmx_enable_irq_window(vcpu);
 }
 
+static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return __kvm_request_immediate_exit(vcpu);
+
+	vmx_request_immediate_exit(vcpu);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -435,7 +443,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
 
-	.request_immediate_exit = vmx_request_immediate_exit,
+	.request_immediate_exit = vt_request_immediate_exit,
 
 	.sched_in = vt_sched_in,
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 079/102] KVM: TDX: Implement methods to inject NMI
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (77 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 078/102] KVM: TDX: Implements vcpu request_immediate_exit isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 080/102] KVM: VMX: Modify NMI and INTR handlers to take intr_info as function argument isaku.yamahata
                   ` (24 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

The TDX vcpu control structure defines one bit for a pending NMI so that
the VMM can inject an NMI by setting the bit without knowing the TDX vcpu's
NMI state.  Because the vcpu state is protected, the VMM can't know the NMI
state of a TDX vcpu.  The TDX module handles the actual injection and the
NMI state transitions.

Add methods for NMI and treat NMIs as always injectable.
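
A minimal sketch of the injection path (it mirrors tdx_inject_nmi() added
below; the TDX module injects the NMI on a subsequent TD entry once it is
allowed to do so):

  static void inject_nmi_sketch(struct kvm_vcpu *vcpu)
  {
          /* Request an NMI by setting PEND_NMI in the vcpu control structure. */
          td_management_write8(to_tdx(vcpu), TD_VCPU_PEND_NMI, 1);
  }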

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/main.c    | 62 +++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/vmx/tdx.c     |  5 +++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 3 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d743de7b087c..eddfd07506df 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -247,6 +247,58 @@ static void vt_flush_tlb_guest(struct kvm_vcpu *vcpu)
 	vmx_flush_tlb_guest(vcpu);
 }
 
+static void vt_inject_nmi(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_inject_nmi(vcpu);
+
+	vmx_inject_nmi(vcpu);
+}
+
+static int vt_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/*
+	 * The TDX module manages NMI windows and NMI reinjection, and hides NMI
+	 * blocking, all KVM can do is throw an NMI over the wall.
+	 */
+	if (is_td_vcpu(vcpu))
+		return true;
+
+	return vmx_nmi_allowed(vcpu, for_injection);
+}
+
+static bool vt_get_nmi_mask(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Assume NMIs are always unmasked.  KVM could query PEND_NMI and treat
+	 * NMIs as masked if a previous NMI is still pending, but SEAMCALLs are
+	 * expensive and the end result is unchanged as the only relevant usage
+	 * of get_nmi_mask() is to limit the number of pending NMIs, i.e. it
+	 * only changes whether KVM or the TDX module drops an NMI.
+	 */
+	if (is_td_vcpu(vcpu))
+		return false;
+
+	return vmx_get_nmi_mask(vcpu);
+}
+
+static void vt_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_set_nmi_mask(vcpu, masked);
+}
+
+static void vt_enable_nmi_window(struct kvm_vcpu *vcpu)
+{
+	/* Refer the comment in vt_get_nmi_mask(). */
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_enable_nmi_window(vcpu);
+}
+
 static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			int pgd_level)
 {
@@ -399,14 +451,14 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.get_interrupt_shadow = vt_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
 	.inject_irq = vt_inject_irq,
-	.inject_nmi = vmx_inject_nmi,
+	.inject_nmi = vt_inject_nmi,
 	.queue_exception = vmx_queue_exception,
 	.cancel_injection = vt_cancel_injection,
 	.interrupt_allowed = vt_interrupt_allowed,
-	.nmi_allowed = vmx_nmi_allowed,
-	.get_nmi_mask = vmx_get_nmi_mask,
-	.set_nmi_mask = vmx_set_nmi_mask,
-	.enable_nmi_window = vmx_enable_nmi_window,
+	.nmi_allowed = vt_nmi_allowed,
+	.get_nmi_mask = vt_get_nmi_mask,
+	.set_nmi_mask = vt_set_nmi_mask,
+	.enable_nmi_window = vt_enable_nmi_window,
 	.enable_irq_window = vt_enable_irq_window,
 	.update_cr8_intercept = vmx_update_cr8_intercept,
 	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index db3840c040f9..de696d82ddbf 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -815,6 +815,11 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 	return EXIT_FASTPATH_NONE;
 }
 
+void tdx_inject_nmi(struct kvm_vcpu *vcpu)
+{
+	td_management_write8(to_tdx(vcpu), TD_VCPU_PEND_NMI, 1);
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index bc25260aefc6..174e90eb7e2d 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -150,6 +150,7 @@ bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
 
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 			   int trig_mode, int vector);
+void tdx_inject_nmi(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -180,6 +181,7 @@ static inline bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu) { ret
 
 static inline void tdx_deliver_interrupt(
 	struct kvm_lapic *apic, int delivery_mode, int trig_mode, int vector) {}
+static inline void tdx_inject_nmi(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 080/102] KVM: VMX: Modify NMI and INTR handlers to take intr_info as function argument
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (78 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 079/102] KVM: TDX: Implement methods to inject NMI isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 081/102] KVM: VMX: Move NMI/exception handler to common helper isaku.yamahata
                   ` (23 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

TDX uses a different ABI to get information about a VM exit.  Pass
intr_info to the NMI and INTR handlers instead of pulling it from vcpu_vmx,
in preparation for sharing the bulk of the handlers with TDX.

When the guest TD exits to the VMM, RAX holds the status and exit reason,
RCX holds the exit qualification, etc., rather than the VMCS fields,
because the VMM doesn't have access to the VMCS.  The eventual code, also
sketched after this list, will be:

VMX:
  - get exit reason, intr_info, exit qualification, etc. from the VMCS
  - call NMI/INTR handlers (common code)

TDX:
  - get exit reason, intr_info, exit qualification, etc. from guest
    registers
  - call NMI/INTR handlers (common code)
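
A hedged sketch of where this ends up (tdx_get_exit_intr_info() is a
hypothetical placeholder for reading the exit information out of the guest
registers saved at TD exit; only the VMX side exists in this patch):

  u32 intr_info;

  if (is_td_vcpu(vcpu))
          intr_info = tdx_get_exit_intr_info(vcpu);       /* hypothetical, guest regs */
  else
          intr_info = vmx_get_intr_info(vcpu);            /* from the VMCS */

  /* Common code takes intr_info as an argument. */
  handle_exception_nmi_irqoff(vcpu, intr_info);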

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3aca3976ba1b..ccc245fbe0a1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6677,28 +6677,27 @@ static void handle_nm_fault_irqoff(struct kvm_vcpu *vcpu)
 		rdmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
 }
 
-static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
+static void handle_exception_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
 {
 	const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist;
-	u32 intr_info = vmx_get_intr_info(&vmx->vcpu);
 
 	/* if exit due to PF check for async PF */
 	if (is_page_fault(intr_info))
-		vmx->vcpu.arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
+		vcpu->arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
 	/* if exit due to NM, handle before interrupts are enabled */
 	else if (is_nm_fault(intr_info))
-		handle_nm_fault_irqoff(&vmx->vcpu);
+		handle_nm_fault_irqoff(vcpu);
 	/* Handle machine checks before interrupts are enabled */
 	else if (is_machine_check(intr_info))
 		kvm_machine_check();
 	/* We need to handle NMIs before interrupts are enabled */
 	else if (is_nmi(intr_info))
-		handle_interrupt_nmi_irqoff(&vmx->vcpu, nmi_entry);
+		handle_interrupt_nmi_irqoff(vcpu, nmi_entry);
 }
 
-static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
+static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
+					     u32 intr_info)
 {
-	u32 intr_info = vmx_get_intr_info(vcpu);
 	unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
 	gate_desc *desc = (gate_desc *)host_idt_base + vector;
 
@@ -6718,9 +6717,9 @@ void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 		return;
 
 	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
-		handle_external_interrupt_irqoff(vcpu);
+		handle_external_interrupt_irqoff(vcpu, vmx_get_intr_info(vcpu));
 	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
-		handle_exception_nmi_irqoff(vmx);
+		handle_exception_nmi_irqoff(vcpu, vmx_get_intr_info(vcpu));
 }
 
 /*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 081/102] KVM: VMX: Move NMI/exception handler to common helper
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (79 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 080/102] KVM: VMX: Modify NMI and INTR handlers to take intr_info as function argument isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 082/102] KVM: x86: Split core of hypercall emulation to helper function isaku.yamahata
                   ` (22 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

TDX handles the NMI/exception exit mostly the same as the VMX case.  The
difference is how to retrieve the exit qualification.  To share the code
with TDX, move the NMI/exception handlers to a common header, common.h.
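
As a sketch (not part of this diff), the only per-VM-type difference that
remains is where the exit information comes from:

  /* VMX: read from the VMCS. */
  exit_qual = vmx_get_exit_qual(vcpu);
  intr_info = vmx_get_intr_info(vcpu);

  /* TDX: passed by the TDX module in guest GPRs (RCX and R9); the
   * tdexit_*() accessors for these are added later in this series.
   */
  exit_qual = tdexit_exit_qual(vcpu);
  intr_info = tdexit_intr_info(vcpu);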

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/common.h | 70 ++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c    | 79 ++++-----------------------------------
 2 files changed, 78 insertions(+), 71 deletions(-)

diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 1522e9e6851b..fd5ed3c0f894 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -4,8 +4,78 @@
 
 #include <linux/kvm_host.h>
 
+#include <asm/traps.h>
+
 #include "posted_intr.h"
 #include "mmu.h"
+#include "vmcs.h"
+#include "x86.h"
+
+extern unsigned long vmx_host_idt_base;
+void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
+
+static inline void vmx_handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu,
+					       unsigned long entry)
+{
+	bool is_nmi = entry == (unsigned long)asm_exc_nmi_noist;
+
+	kvm_before_interrupt(vcpu, is_nmi ? KVM_HANDLING_NMI : KVM_HANDLING_IRQ);
+	vmx_do_interrupt_nmi_irqoff(entry);
+	kvm_after_interrupt(vcpu);
+}
+
+static inline void vmx_handle_nm_fault_irqoff(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Save xfd_err to guest_fpu before interrupt is enabled, so the
+	 * MSR value is not clobbered by the host activity before the guest
+	 * has chance to consume it.
+	 *
+	 * Do not blindly read xfd_err here, since this exception might
+	 * be caused by L1 interception on a platform which doesn't
+	 * support xfd at all.
+	 *
+	 * Do it conditionally upon guest_fpu::xfd. xfd_err matters
+	 * only when xfd contains a non-zero value.
+	 *
+	 * Queuing exception is done in vmx_handle_exit. See comment there.
+	 */
+	if (vcpu->arch.guest_fpu.fpstate->xfd)
+		rdmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
+}
+
+static inline void vmx_handle_exception_nmi_irqoff(struct kvm_vcpu *vcpu,
+						   u32 intr_info)
+{
+	const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist;
+
+	/* if exit due to PF check for async PF */
+	if (is_page_fault(intr_info))
+		vcpu->arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
+	/* if exit due to NM, handle before interrupts are enabled */
+	else if (is_nm_fault(intr_info))
+		vmx_handle_nm_fault_irqoff(vcpu);
+	/* Handle machine checks before interrupts are enabled */
+	else if (is_machine_check(intr_info))
+		kvm_machine_check();
+	/* We need to handle NMIs before interrupts are enabled */
+	else if (is_nmi(intr_info))
+		vmx_handle_interrupt_nmi_irqoff(vcpu, nmi_entry);
+}
+
+static inline void vmx_handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
+							u32 intr_info)
+{
+	unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
+	gate_desc *desc = (gate_desc *)vmx_host_idt_base + vector;
+
+	if (KVM_BUG(!is_external_intr(intr_info), vcpu->kvm,
+	    "KVM: unexpected VM-Exit interrupt info: 0x%x", intr_info))
+		return;
+
+	vmx_handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
+	vcpu->arch.at_instruction_boundary = true;
+}
 
 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 					     unsigned long exit_qualification)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ccc245fbe0a1..5c5580ab98d3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -468,7 +468,7 @@ static inline void vmx_segment_cache_clear(struct vcpu_vmx *vmx)
 	vmx->segment_cache.bitmask = 0;
 }
 
-static unsigned long host_idt_base;
+unsigned long vmx_host_idt_base;
 
 #if IS_ENABLED(CONFIG_HYPERV)
 static bool __read_mostly enlightened_vmcs = true;
@@ -4125,7 +4125,7 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
 	vmcs_write16(HOST_SS_SELECTOR, __KERNEL_DS);  /* 22.2.4 */
 	vmcs_write16(HOST_TR_SELECTOR, GDT_ENTRY_TSS*8);  /* 22.2.4 */
 
-	vmcs_writel(HOST_IDTR_BASE, host_idt_base);   /* 22.2.4 */
+	vmcs_writel(HOST_IDTR_BASE, vmx_host_idt_base);   /* 22.2.4 */
 
 	vmcs_writel(HOST_RIP, (unsigned long)vmx_vmexit); /* 22.2.5 */
 
@@ -4970,10 +4970,10 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 	intr_info = vmx_get_intr_info(vcpu);
 
 	if (is_machine_check(intr_info) || is_nmi(intr_info))
-		return 1; /* handled by handle_exception_nmi_irqoff() */
+		return 1; /* handled by vmx_handle_exception_nmi_irqoff() */
 
 	/*
-	 * Queue the exception here instead of in handle_nm_fault_irqoff().
+	 * Queue the exception here instead of in vmx_handle_nm_fault_irqoff().
 	 * This ensures the nested_vmx check is not skipped so vmexit can
 	 * be reflected to L1 (when it intercepts #NM) before reaching this
 	 * point.
@@ -6645,70 +6645,6 @@ void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 	vmcs_write64(EOI_EXIT_BITMAP3, eoi_exit_bitmap[3]);
 }
 
-void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
-
-static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu,
-					unsigned long entry)
-{
-	bool is_nmi = entry == (unsigned long)asm_exc_nmi_noist;
-
-	kvm_before_interrupt(vcpu, is_nmi ? KVM_HANDLING_NMI : KVM_HANDLING_IRQ);
-	vmx_do_interrupt_nmi_irqoff(entry);
-	kvm_after_interrupt(vcpu);
-}
-
-static void handle_nm_fault_irqoff(struct kvm_vcpu *vcpu)
-{
-	/*
-	 * Save xfd_err to guest_fpu before interrupt is enabled, so the
-	 * MSR value is not clobbered by the host activity before the guest
-	 * has chance to consume it.
-	 *
-	 * Do not blindly read xfd_err here, since this exception might
-	 * be caused by L1 interception on a platform which doesn't
-	 * support xfd at all.
-	 *
-	 * Do it conditionally upon guest_fpu::xfd. xfd_err matters
-	 * only when xfd contains a non-zero value.
-	 *
-	 * Queuing exception is done in vmx_handle_exit. See comment there.
-	 */
-	if (vcpu->arch.guest_fpu.fpstate->xfd)
-		rdmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
-}
-
-static void handle_exception_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
-{
-	const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist;
-
-	/* if exit due to PF check for async PF */
-	if (is_page_fault(intr_info))
-		vcpu->arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
-	/* if exit due to NM, handle before interrupts are enabled */
-	else if (is_nm_fault(intr_info))
-		handle_nm_fault_irqoff(vcpu);
-	/* Handle machine checks before interrupts are enabled */
-	else if (is_machine_check(intr_info))
-		kvm_machine_check();
-	/* We need to handle NMIs before interrupts are enabled */
-	else if (is_nmi(intr_info))
-		handle_interrupt_nmi_irqoff(vcpu, nmi_entry);
-}
-
-static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
-					     u32 intr_info)
-{
-	unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
-	gate_desc *desc = (gate_desc *)host_idt_base + vector;
-
-	if (KVM_BUG(!is_external_intr(intr_info), vcpu->kvm,
-	    "KVM: unexpected VM-Exit interrupt info: 0x%x", intr_info))
-		return;
-
-	handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
-	vcpu->arch.at_instruction_boundary = true;
-}
-
 void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -6717,9 +6653,10 @@ void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 		return;
 
 	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
-		handle_external_interrupt_irqoff(vcpu, vmx_get_intr_info(vcpu));
+		vmx_handle_external_interrupt_irqoff(vcpu,
+						     vmx_get_intr_info(vcpu));
 	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
-		handle_exception_nmi_irqoff(vcpu, vmx_get_intr_info(vcpu));
+		vmx_handle_exception_nmi_irqoff(vcpu, vmx_get_intr_info(vcpu));
 }
 
 /*
@@ -7980,7 +7917,7 @@ __init int vmx_hardware_setup(void)
 	int r;
 
 	store_idt(&dt);
-	host_idt_base = dt.address;
+	vmx_host_idt_base = dt.address;
 
 	vmx_setup_user_return_msrs();
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 082/102] KVM: x86: Split core of hypercall emulation to helper function
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (80 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 081/102] KVM: VMX: Move NMI/exception handler to common helper isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 083/102] KVM: TDX: Add a place holder to handle TDX VM exit isaku.yamahata
                   ` (21 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

By necessity, TDX will use a different register ABI for hypercalls.
Break out the core functionality so that it may be reused for TDX.
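
For illustration (sketch only; the actual TDX wiring comes in a later
patch), the TDX side gathers the arguments from guest GPRs per its own ABI
and then reuses the same core:

  nr = kvm_r10_read(vcpu);   /* KVM hypercall number */
  a0 = kvm_r11_read(vcpu);
  a1 = kvm_r12_read(vcpu);
  a2 = kvm_r13_read(vcpu);
  a3 = kvm_r14_read(vcpu);
  ret = __kvm_emulate_hypercall(vcpu, nr, a0, a1, a2, a3, true);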

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  4 +++
 arch/x86/kvm/x86.c              | 54 ++++++++++++++++++++-------------
 2 files changed, 37 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6a940700eb9a..42d209fe0a4f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1948,6 +1948,10 @@ static inline void kvm_clear_apicv_inhibit(struct kvm *kvm,
 	kvm_set_or_clear_apicv_inhibit(kvm, reason, false);
 }
 
+unsigned long __kvm_emulate_hypercall(struct kvm_vcpu *vcpu, unsigned long nr,
+				      unsigned long a0, unsigned long a1,
+				      unsigned long a2, unsigned long a3,
+				      int op_64_bit);
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 39473b561e27..a68a917ebdff 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9316,26 +9316,15 @@ static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
 	return kvm_skip_emulated_instruction(vcpu);
 }
 
-int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
+unsigned long __kvm_emulate_hypercall(struct kvm_vcpu *vcpu, unsigned long nr,
+				      unsigned long a0, unsigned long a1,
+				      unsigned long a2, unsigned long a3,
+				      int op_64_bit)
 {
-	unsigned long nr, a0, a1, a2, a3, ret;
-	int op_64_bit;
-
-	if (kvm_xen_hypercall_enabled(vcpu->kvm))
-		return kvm_xen_hypercall(vcpu);
-
-	if (kvm_hv_hypercall_enabled(vcpu))
-		return kvm_hv_hypercall(vcpu);
-
-	nr = kvm_rax_read(vcpu);
-	a0 = kvm_rbx_read(vcpu);
-	a1 = kvm_rcx_read(vcpu);
-	a2 = kvm_rdx_read(vcpu);
-	a3 = kvm_rsi_read(vcpu);
+	unsigned long ret;
 
 	trace_kvm_hypercall(nr, a0, a1, a2, a3);
 
-	op_64_bit = is_64_bit_hypercall(vcpu);
 	if (!op_64_bit) {
 		nr &= 0xFFFFFFFF;
 		a0 &= 0xFFFFFFFF;
@@ -9344,11 +9333,6 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		a3 &= 0xFFFFFFFF;
 	}
 
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0) {
-		ret = -KVM_EPERM;
-		goto out;
-	}
-
 	ret = -KVM_ENOSYS;
 
 	switch (nr) {
@@ -9407,6 +9391,34 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		ret = -KVM_ENOSYS;
 		break;
 	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(__kvm_emulate_hypercall);
+
+int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
+{
+	unsigned long nr, a0, a1, a2, a3, ret;
+	int op_64_bit;
+
+	if (kvm_xen_hypercall_enabled(vcpu->kvm))
+		return kvm_xen_hypercall(vcpu);
+
+	if (kvm_hv_hypercall_enabled(vcpu))
+		return kvm_hv_hypercall(vcpu);
+
+	nr = kvm_rax_read(vcpu);
+	a0 = kvm_rbx_read(vcpu);
+	a1 = kvm_rcx_read(vcpu);
+	a2 = kvm_rdx_read(vcpu);
+	a3 = kvm_rsi_read(vcpu);
+	op_64_bit = is_64_bit_hypercall(vcpu);
+
+	if (static_call(kvm_x86_get_cpl)(vcpu) != 0) {
+		ret = -KVM_EPERM;
+		goto out;
+	}
+
+	ret = __kvm_emulate_hypercall(vcpu, nr, a0, a1, a2, a3, op_64_bit);
 out:
 	if (!op_64_bit)
 		ret = (u32)ret;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 083/102] KVM: TDX: Add a place holder to handle TDX VM exit
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (81 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 082/102] KVM: x86: Split core of hypercall emulation to helper function isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 084/102] KVM: TDX: handle EXIT_REASON_OTHER_SMI isaku.yamahata
                   ` (20 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up the handle_exit and handle_exit_irqoff methods and add a place
holder to handle VM exits.  Add helper functions to get the exit info,
exit qualification, etc.
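
For reference, the helpers follow the TDX-module convention that exit
information is reported in guest GPRs at TD exit (summary of the accessors
added below):

  RCX - exit qualification            (tdexit_exit_qual())
  RDX - extended exit qualification   (tdexit_ext_exit_qual())
  R8  - guest physical address        (tdexit_gpa())
  R9  - interrupt information         (tdexit_intr_info())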

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/main.c    | 33 ++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 81 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h | 11 ++++++
 3 files changed, 122 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index eddfd07506df..227739c2490e 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -188,6 +188,23 @@ static bool vt_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
 	return tdx_protected_apic_has_interrupt(vcpu);
 }
 
+static int vt_handle_exit(struct kvm_vcpu *vcpu,
+			     enum exit_fastpath_completion fastpath)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_handle_exit(vcpu, fastpath);
+
+	return vmx_handle_exit(vcpu, fastpath);
+}
+
+static void vt_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_handle_exit_irqoff(vcpu);
+
+	vmx_handle_exit_irqoff(vcpu);
+}
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -371,6 +388,16 @@ static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
 	vmx_request_immediate_exit(vcpu);
 }
 
+static void vt_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+			u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_get_exit_info(vcpu, reason, info1, info2, intr_info,
+					 error_code);
+
+	return vmx_get_exit_info(vcpu, reason, info1, info2, intr_info, error_code);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -444,7 +471,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.vcpu_pre_run = vt_vcpu_pre_run,
 	.vcpu_run = vt_vcpu_run,
-	.handle_exit = vmx_handle_exit,
+	.handle_exit = vt_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
 	.set_interrupt_shadow = vt_set_interrupt_shadow,
@@ -479,7 +506,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.set_identity_map_addr = vmx_set_identity_map_addr,
 	.get_mt_mask = vmx_get_mt_mask,
 
-	.get_exit_info = vmx_get_exit_info,
+	.get_exit_info = vt_get_exit_info,
 
 	.vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
 
@@ -493,7 +520,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.load_mmu_pgd = vt_load_mmu_pgd,
 
 	.check_intercept = vmx_check_intercept,
-	.handle_exit_irqoff = vmx_handle_exit_irqoff,
+	.handle_exit_irqoff = vt_handle_exit_irqoff,
 
 	.request_immediate_exit = vt_request_immediate_exit,
 
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index de696d82ddbf..c29501a69167 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -78,6 +78,26 @@ static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
 	return pa;
 }
 
+static __always_inline unsigned long tdexit_exit_qual(struct kvm_vcpu *vcpu)
+{
+	return kvm_rcx_read(vcpu);
+}
+
+static __always_inline unsigned long tdexit_ext_exit_qual(struct kvm_vcpu *vcpu)
+{
+	return kvm_rdx_read(vcpu);
+}
+
+static __always_inline unsigned long tdexit_gpa(struct kvm_vcpu *vcpu)
+{
+	return kvm_r8_read(vcpu);
+}
+
+static __always_inline unsigned long tdexit_intr_info(struct kvm_vcpu *vcpu)
+{
+	return kvm_r9_read(vcpu);
+}
+
 static inline bool is_td_vcpu_created(struct vcpu_tdx *tdx)
 {
 	return tdx->tdvpr.added;
@@ -820,6 +840,25 @@ void tdx_inject_nmi(struct kvm_vcpu *vcpu)
 	td_management_write8(to_tdx(vcpu), TD_VCPU_PEND_NMI, 1);
 }
 
+void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	u16 exit_reason = tdx->exit_reason.basic;
+
+	if (exit_reason == EXIT_REASON_EXCEPTION_NMI)
+		vmx_handle_exception_nmi_irqoff(vcpu, tdexit_intr_info(vcpu));
+	else if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
+		vmx_handle_external_interrupt_irqoff(vcpu,
+						     tdexit_intr_info(vcpu));
+}
+
+static int tdx_handle_triple_fault(struct kvm_vcpu *vcpu)
+{
+	vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
+	vcpu->mmio_needed = 0;
+	return 0;
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
@@ -1152,6 +1191,48 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 	__vmx_deliver_posted_interrupt(vcpu, &tdx->pi_desc, vector);
 }
 
+int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
+{
+	union tdx_exit_reason exit_reason = to_tdx(vcpu)->exit_reason;
+
+	if (unlikely(exit_reason.non_recoverable || exit_reason.error)) {
+		if (exit_reason.basic == EXIT_REASON_TRIPLE_FAULT)
+			return tdx_handle_triple_fault(vcpu);
+
+		kvm_pr_unimpl("TD exit 0x%llx, %d hkid 0x%x hkid pa 0x%llx\n",
+			      exit_reason.full, exit_reason.basic,
+			      to_kvm_tdx(vcpu->kvm)->hkid,
+			      set_hkid_to_hpa(0, to_kvm_tdx(vcpu->kvm)->hkid));
+		goto unhandled_exit;
+	}
+
+	WARN_ON_ONCE(fastpath != EXIT_FASTPATH_NONE);
+
+	switch (exit_reason.basic) {
+	default:
+		break;
+	}
+
+unhandled_exit:
+	vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
+	vcpu->run->hw.hardware_exit_reason = exit_reason.full;
+	return 0;
+}
+
+void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+		u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	*reason = tdx->exit_reason.full;
+
+	*info1 = tdexit_exit_qual(vcpu);
+	*info2 = tdexit_ext_exit_qual(vcpu);
+
+	*intr_info = tdexit_intr_info(vcpu);
+	*error_code = 0;
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 174e90eb7e2d..78f2d624b58e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -147,10 +147,15 @@ void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
+void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu);
+int tdx_handle_exit(struct kvm_vcpu *vcpu,
+		enum exit_fastpath_completion fastpath);
 
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 			   int trig_mode, int vector);
 void tdx_inject_nmi(struct kvm_vcpu *vcpu);
+void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+		u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -178,10 +183,16 @@ static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 static inline bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu) { return false; }
+static inline void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu) {}
+static inline int tdx_handle_exit(struct kvm_vcpu *vcpu,
+		enum exit_fastpath_completion fastpath) { return 0; }
 
 static inline void tdx_deliver_interrupt(
 	struct kvm_lapic *apic, int delivery_mode, int trig_mode, int vector) {}
 static inline void tdx_inject_nmi(struct kvm_vcpu *vcpu) {}
+static inline void tdx_get_exit_info(
+	struct kvm_vcpu *vcpu, u32 *reason, u64 *info1, u64 *info2,
+	u32 *intr_info, u32 *error_code) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 084/102] KVM: TDX: handle EXIT_REASON_OTHER_SMI
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (82 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 083/102] KVM: TDX: Add a place holder to handle TDX VM exit isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 085/102] KVM: TDX: handle ept violation/misconfig exit isaku.yamahata
                   ` (19 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

If control reaches EXIT_REASON_OTHER_SMI, the #SMI has already been
delivered and handled right after returning from the TDX module to KVM, so
nothing needs to be done in KVM.  Continue TDX vcpu execution.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/uapi/asm/vmx.h | 1 +
 arch/x86/kvm/vmx/tdx.c          | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index a5faf6d88f1b..b3a30ef3efdd 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -34,6 +34,7 @@
 #define EXIT_REASON_TRIPLE_FAULT        2
 #define EXIT_REASON_INIT_SIGNAL			3
 #define EXIT_REASON_SIPI_SIGNAL         4
+#define EXIT_REASON_OTHER_SMI           6
 
 #define EXIT_REASON_INTERRUPT_WINDOW    7
 #define EXIT_REASON_NMI_WINDOW          8
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c29501a69167..e5268bfa8d27 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1209,6 +1209,13 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
 	WARN_ON_ONCE(fastpath != EXIT_FASTPATH_NONE);
 
 	switch (exit_reason.basic) {
+	case EXIT_REASON_OTHER_SMI:
+		/*
+		 * If reach here, it's not a Machine Check System Management
+		 * Interrupt(MSMI).  #SMI is delivered and handled right after
+		 * SEAMRET, nothing needs to be done in KVM.
+		 */
+		return 1;
 	default:
 		break;
 	}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 085/102] KVM: TDX: handle ept violation/misconfig exit
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (83 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 084/102] KVM: TDX: handle EXIT_REASON_OTHER_SMI isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 086/102] KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT isaku.yamahata
                   ` (18 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

On EPT violation, call a common function, __vmx_handle_ept_violation(), to
trigger the x86 MMU code.  On EPT misconfiguration, exit to ring 3 with
KVM_EXIT_UNKNOWN, because EPT misconfiguration can't happen: MMIO is
triggered by TDG.VP.VMCALL instead.  There is no point in setting a
misconfiguration value for the fast path.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 46 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e5268bfa8d27..14f65d7b3824 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1191,6 +1191,48 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 	__vmx_deliver_posted_interrupt(vcpu, &tdx->pi_desc, vector);
 }
 
+static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
+{
+	unsigned long exit_qual;
+
+	if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) {
+		/*
+		 * Always treat SEPT violations as write faults.  Ignore the
+		 * EXIT_QUALIFICATION reported by TDX-SEAM for SEPT violations.
+		 * TD private pages are always RWX in the SEPT tables,
+		 * i.e. they're always mapped writable.  Just as importantly,
+		 * treating SEPT violations as write faults is necessary to
+		 * avoid COW allocations, which will cause TDAUGPAGE failures
+		 * due to aliasing a single HPA to multiple GPAs.
+		 */
+#define TDX_SEPT_VIOLATION_EXIT_QUAL	EPT_VIOLATION_ACC_WRITE
+		exit_qual = TDX_SEPT_VIOLATION_EXIT_QUAL;
+	} else {
+		exit_qual = tdexit_exit_qual(vcpu);
+		if (exit_qual & EPT_VIOLATION_ACC_INSTR) {
+			pr_warn("kvm: TDX instr fetch to shared GPA = 0x%lx @ RIP = 0x%lx\n",
+				tdexit_gpa(vcpu), kvm_rip_read(vcpu));
+			vcpu->run->exit_reason = KVM_EXIT_EXCEPTION;
+			vcpu->run->ex.exception = PF_VECTOR;
+			vcpu->run->ex.error_code = exit_qual;
+			return 0;
+		}
+	}
+
+	trace_kvm_page_fault(tdexit_gpa(vcpu), exit_qual);
+	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual);
+}
+
+static int tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu)
+{
+	WARN_ON(1);
+
+	vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
+	vcpu->run->hw.hardware_exit_reason = EXIT_REASON_EPT_MISCONFIG;
+
+	return 0;
+}
+
 int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
 {
 	union tdx_exit_reason exit_reason = to_tdx(vcpu)->exit_reason;
@@ -1209,6 +1251,10 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
 	WARN_ON_ONCE(fastpath != EXIT_FASTPATH_NONE);
 
 	switch (exit_reason.basic) {
+	case EXIT_REASON_EPT_VIOLATION:
+		return tdx_handle_ept_violation(vcpu);
+	case EXIT_REASON_EPT_MISCONFIG:
+		return tdx_handle_ept_misconfig(vcpu);
 	case EXIT_REASON_OTHER_SMI:
 		/*
 		 * If reach here, it's not a Machine Check System Management
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 086/102] KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (84 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 085/102] KVM: TDX: handle ept violation/misconfig exit isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 087/102] KVM: TDX: Add a place holder for handler of TDX hypercalls (TDG.VP.VMCALL) isaku.yamahata
                   ` (17 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Because guest TD state is protected, exceptions in guest TDs can't be
intercepted, so the TDX VMM doesn't need to handle exceptions.
tdx_handle_exit_irqoff() already handles NMI and machine check; ignore them
here and continue guest TD execution.

For external interrupts, increment the stats, the same as in the VMX case.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 14f65d7b3824..6e8a7e4b4da2 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -852,6 +852,25 @@ void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 						     tdexit_intr_info(vcpu));
 }
 
+static int tdx_handle_exception(struct kvm_vcpu *vcpu)
+{
+	u32 intr_info = tdexit_intr_info(vcpu);
+
+	if (is_nmi(intr_info) || is_machine_check(intr_info))
+		return 1;
+
+	kvm_pr_unimpl("unexpected exception 0x%x(exit_reason 0x%llx qual 0x%lx)\n",
+		intr_info,
+		to_tdx(vcpu)->exit_reason.full, tdexit_exit_qual(vcpu));
+	return -EFAULT;
+}
+
+static int tdx_handle_external_interrupt(struct kvm_vcpu *vcpu)
+{
+	++vcpu->stat.irq_exits;
+	return 1;
+}
+
 static int tdx_handle_triple_fault(struct kvm_vcpu *vcpu)
 {
 	vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
@@ -1251,6 +1270,10 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
 	WARN_ON_ONCE(fastpath != EXIT_FASTPATH_NONE);
 
 	switch (exit_reason.basic) {
+	case EXIT_REASON_EXCEPTION_NMI:
+		return tdx_handle_exception(vcpu);
+	case EXIT_REASON_EXTERNAL_INTERRUPT:
+		return tdx_handle_external_interrupt(vcpu);
 	case EXIT_REASON_EPT_VIOLATION:
 		return tdx_handle_ept_violation(vcpu);
 	case EXIT_REASON_EPT_MISCONFIG:
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 087/102] KVM: TDX: Add a place holder for handler of TDX hypercalls (TDG.VP.VMCALL)
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (85 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 086/102] KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 088/102] KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL isaku.yamahata
                   ` (16 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Xiaoyao Li,
	Sean Christopherson

From: Isaku Yamahata <isaku.yamahata@intel.com>

The TDX module specification defines the TDG.VP.VMCALL API (TDVMCALL for
short) for the guest TD to issue hypercalls to the VMM.  When the guest TD
issues TDG.VP.VMCALL, the guest TD exits to the VMM with a new exit reason
of TDVMCALL.  The arguments from the guest TD and the values returned by
the VMM are passed in the guest registers.  The guest RCX register
indicates which registers are used.  Define helper functions to access
those registers following this ABI.

Define the TDVMCALL exit reason, which is carved out from the VMX exit
reason namespace, as the TDVMCALL exit from a TDX guest to TDX-SEAM is
really just a VM-Exit.  Add a place holder to handle the TDVMCALL exit.
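
For reference (summary of the accessors added below, not normative), the
register convention as seen by KVM is:

  RCX     - bitmap of the guest GPRs/XMMs exposed to the VMM (regs_mask)
  R10     - exit type: zero for standard TDVMCALL leaves, non-zero for
            vendor-specific ones; KVM writes the return code back here
  R11     - TDVMCALL leaf number; KVM writes the return value back here
  R12-R15 - arguments a0..a3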

Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/uapi/asm/vmx.h |  4 ++-
 arch/x86/kvm/vmx/tdx.c          | 56 ++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/tdx.h          | 13 ++++++++
 3 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index b3a30ef3efdd..f0f4a4cf84a7 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -93,6 +93,7 @@
 #define EXIT_REASON_TPAUSE              68
 #define EXIT_REASON_BUS_LOCK            74
 #define EXIT_REASON_NOTIFY              75
+#define EXIT_REASON_TDCALL              77
 
 #define VMX_EXIT_REASONS \
 	{ EXIT_REASON_EXCEPTION_NMI,         "EXCEPTION_NMI" }, \
@@ -156,7 +157,8 @@
 	{ EXIT_REASON_UMWAIT,                "UMWAIT" }, \
 	{ EXIT_REASON_TPAUSE,                "TPAUSE" }, \
 	{ EXIT_REASON_BUS_LOCK,              "BUS_LOCK" }, \
-	{ EXIT_REASON_NOTIFY,                "NOTIFY" }
+	{ EXIT_REASON_NOTIFY,                "NOTIFY" }, \
+	{ EXIT_REASON_TDCALL,                "TDCALL" }
 
 #define VMX_EXIT_REASON_FLAGS \
 	{ VMX_EXIT_REASONS_FAILED_VMENTRY,	"FAILED_VMENTRY" }
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 6e8a7e4b4da2..c9663df83292 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -98,6 +98,41 @@ static __always_inline unsigned long tdexit_intr_info(struct kvm_vcpu *vcpu)
 	return kvm_r9_read(vcpu);
 }
 
+#define BUILD_TDVMCALL_ACCESSORS(param, gpr)				\
+static __always_inline							\
+unsigned long tdvmcall_##param##_read(struct kvm_vcpu *vcpu)		\
+{									\
+	return kvm_##gpr##_read(vcpu);					\
+}									\
+static __always_inline void tdvmcall_##param##_write(struct kvm_vcpu *vcpu, \
+						     unsigned long val)	\
+{									\
+	kvm_##gpr##_write(vcpu, val);					\
+}
+BUILD_TDVMCALL_ACCESSORS(a0, r12);
+BUILD_TDVMCALL_ACCESSORS(a1, r13);
+BUILD_TDVMCALL_ACCESSORS(a2, r14);
+BUILD_TDVMCALL_ACCESSORS(a3, r15);
+
+static __always_inline unsigned long tdvmcall_exit_type(struct kvm_vcpu *vcpu)
+{
+	return kvm_r10_read(vcpu);
+}
+static __always_inline unsigned long tdvmcall_leaf(struct kvm_vcpu *vcpu)
+{
+	return kvm_r11_read(vcpu);
+}
+static __always_inline void tdvmcall_set_return_code(struct kvm_vcpu *vcpu,
+						     long val)
+{
+	kvm_r10_write(vcpu, val);
+}
+static __always_inline void tdvmcall_set_return_val(struct kvm_vcpu *vcpu,
+						    unsigned long val)
+{
+	kvm_r11_write(vcpu, val);
+}
+
 static inline bool is_td_vcpu_created(struct vcpu_tdx *tdx)
 {
 	return tdx->tdvpr.added;
@@ -799,7 +834,8 @@ static noinstr void tdx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 					struct vcpu_tdx *tdx)
 {
 	guest_enter_irqoff();
-	tdx->exit_reason.full = __tdx_vcpu_run(tdx->tdvpr.pa, vcpu->arch.regs, 0);
+	tdx->exit_reason.full = __tdx_vcpu_run(tdx->tdvpr.pa, vcpu->arch.regs,
+					tdx->tdvmcall.regs_mask);
 	guest_exit_irqoff();
 }
 
@@ -832,6 +868,11 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_complete_interrupts(vcpu);
 
+	if (tdx->exit_reason.basic == EXIT_REASON_TDCALL)
+		tdx->tdvmcall.rcx = vcpu->arch.regs[VCPU_REGS_RCX];
+	else
+		tdx->tdvmcall.rcx = 0;
+
 	return EXIT_FASTPATH_NONE;
 }
 
@@ -878,6 +919,17 @@ static int tdx_handle_triple_fault(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static int handle_tdvmcall(struct kvm_vcpu *vcpu)
+{
+	switch (tdvmcall_leaf(vcpu)) {
+	default:
+		break;
+	}
+
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
+	return 1;
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
@@ -1274,6 +1326,8 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
 		return tdx_handle_exception(vcpu);
 	case EXIT_REASON_EXTERNAL_INTERRUPT:
 		return tdx_handle_external_interrupt(vcpu);
+	case EXIT_REASON_TDCALL:
+		return handle_tdvmcall(vcpu);
 	case EXIT_REASON_EPT_VIOLATION:
 		return tdx_handle_ept_violation(vcpu);
 	case EXIT_REASON_EPT_MISCONFIG:
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 1268a49fdf18..b0bb239b51bf 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -95,6 +95,19 @@ struct vcpu_tdx {
 
 	struct list_head cpu_list;
 
+	union {
+		struct {
+			union {
+				struct {
+					u16 gpr_mask;
+					u16 xmm_mask;
+				};
+				u32 regs_mask;
+			};
+			u32 reserved;
+		};
+		u64 rcx;
+	} tdvmcall;
 	union tdx_exit_reason exit_reason;
 
 	bool initialized;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 088/102] KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (86 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 087/102] KVM: TDX: Add a place holder for handler of TDX hypercalls (TDG.VP.VMCALL) isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 089/102] KVM: TDX: Handle TDX PV CPUID hypercall isaku.yamahata
                   ` (15 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

The TDX Guest-Host Communication Interface (GHCI) specification defines
the ABI for the guest TD to issue hypercalls.  It reserves vendor-specific
arguments for VMM-specific use.  Use them for the KVM hypercall and handle
it.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c9663df83292..a30be04229d7 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -919,8 +919,39 @@ static int tdx_handle_triple_fault(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static int tdx_emulate_vmcall(struct kvm_vcpu *vcpu)
+{
+	unsigned long nr, a0, a1, a2, a3, ret;
+
+	/*
+	 * ABI for KVM tdvmcall argument:
+	 * In the Guest-Host Communication Interface (GHCI) specification,
+	 * Non-zero leaf number (R10 != 0) is defined to indicate
+	 * vendor-specific.  KVM uses this for KVM hypercall.  NOTE: KVM
+	 * hypercall number starts from one.  Zero isn't used for KVM hypercall
+	 * number.
+	 *
+	 * R10: KVM hypercall number
+	 * arguments: R11, R12, R13, R14.
+	 */
+	nr = kvm_r10_read(vcpu);
+	a0 = kvm_r11_read(vcpu);
+	a1 = kvm_r12_read(vcpu);
+	a2 = kvm_r13_read(vcpu);
+	a3 = kvm_r14_read(vcpu);
+
+	ret = __kvm_emulate_hypercall(vcpu, nr, a0, a1, a2, a3, true);
+
+	tdvmcall_set_return_code(vcpu, ret);
+
+	return 1;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
+	if (tdvmcall_exit_type(vcpu))
+		return tdx_emulate_vmcall(vcpu);
+
 	switch (tdvmcall_leaf(vcpu)) {
 	default:
 		break;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 089/102] KVM: TDX: Handle TDX PV CPUID hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (87 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 088/102] KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 090/102] KVM: TDX: Handle TDX PV HLT hypercall isaku.yamahata
                   ` (14 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up TDX PV CPUID hypercall to the KVM backend function.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a30be04229d7..96e41602125b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -947,12 +947,34 @@ static int tdx_emulate_vmcall(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int tdx_emulate_cpuid(struct kvm_vcpu *vcpu)
+{
+	u32 eax, ebx, ecx, edx;
+
+	/* EAX and ECX for cpuid is stored in R12 and R13. */
+	eax = tdvmcall_a0_read(vcpu);
+	ecx = tdvmcall_a1_read(vcpu);
+
+	kvm_cpuid(vcpu, &eax, &ebx, &ecx, &edx, true);
+
+	tdvmcall_a0_write(vcpu, eax);
+	tdvmcall_a1_write(vcpu, ebx);
+	tdvmcall_a2_write(vcpu, ecx);
+	tdvmcall_a3_write(vcpu, edx);
+
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+
+	return 1;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	if (tdvmcall_exit_type(vcpu))
 		return tdx_emulate_vmcall(vcpu);
 
 	switch (tdvmcall_leaf(vcpu)) {
+	case EXIT_REASON_CPUID:
+		return tdx_emulate_cpuid(vcpu);
 	default:
 		break;
 	}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 090/102] KVM: TDX: Handle TDX PV HLT hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (88 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 089/102] KVM: TDX: Handle TDX PV CPUID hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 091/102] KVM: TDX: Handle TDX PV port io hypercall isaku.yamahata
                   ` (13 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up TDX PV HLT hypercall to the KVM backend function.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 42 +++++++++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/tdx.h |  3 +++
 2 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 96e41602125b..15dc0ae61e0f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -654,7 +654,32 @@ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
-	return pi_has_pending_interrupt(vcpu);
+	bool ret = pi_has_pending_interrupt(vcpu);
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (ret || vcpu->arch.mp_state != KVM_MP_STATE_HALTED)
+		return true;
+
+	if (tdx->interrupt_disabled_hlt)
+		return false;
+
+	/*
+	 * This is for the case where the virtual interrupt is recognized,
+	 * i.e. set in vmcs.RVI, between the STI and "HLT".  KVM doesn't have
+	 * access to RVI and the interrupt is no longer in the PID (because it
+	 * was "recognized").  It doesn't get delivered in the guest because the
+	 * TDCALL completes before interrupts are enabled.
+	 *
+	 * The TDX module sets RVI while in an STI interrupt shadow.
+	 * - TDExit (typically TDG.VP.VMCALL<HLT>) from the guest to the TDX module.
+	 *   The interrupt shadow at this point is gone.
+	 * - It knows that there is an interrupt that can be delivered
+	 *   (RVI > PPR && EFLAGS.IF=1, the other conditions of 29.2.2 don't
+	 *    matter)
+	 * - It forwards the TDExit nevertheless, to a clueless hypervisor that
+	 *   has no way to glean either RVI or PPR.
+	 */
+	return !!xchg(&tdx->buggy_hlt_workaround, 0);
 }
 
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
@@ -967,6 +992,17 @@ static int tdx_emulate_cpuid(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int tdx_emulate_hlt(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	/* See tdx_protected_apic_has_interrupt() to avoid heavy seamcall */
+	tdx->interrupt_disabled_hlt = tdvmcall_a0_read(vcpu);
+
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+	return kvm_emulate_halt_noskip(vcpu);
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	if (tdvmcall_exit_type(vcpu))
@@ -975,6 +1011,8 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 	switch (tdvmcall_leaf(vcpu)) {
 	case EXIT_REASON_CPUID:
 		return tdx_emulate_cpuid(vcpu);
+	case EXIT_REASON_HLT:
+		return tdx_emulate_hlt(vcpu);
 	default:
 		break;
 	}
@@ -1311,6 +1349,8 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 	struct kvm_vcpu *vcpu = apic->vcpu;
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
 
+	/* See comment in tdx_protected_apic_has_interrupt(). */
+	tdx->buggy_hlt_workaround = 1;
 	/* TDX supports only posted interrupt.  No lapic emulation. */
 	__vmx_deliver_posted_interrupt(vcpu, &tdx->pi_desc, vector);
 }
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index b0bb239b51bf..a456ca6ec187 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -116,6 +116,9 @@ struct vcpu_tdx {
 	bool host_state_need_restore;
 	u64 msr_host_kernel_gs_base;
 
+	bool interrupt_disabled_hlt;
+	unsigned int buggy_hlt_workaround;
+
 	/*
 	 * Dummy to make pmu_intel not corrupt memory.
 	 * TODO: Support PMU for TDX.  Future work.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 091/102] KVM: TDX: Handle TDX PV port io hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (89 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 090/102] KVM: TDX: Handle TDX PV HLT hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 092/102] KVM: TDX: Handle TDX PV MMIO hypercall isaku.yamahata
                   ` (12 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up TDX PV port IO hypercall to the KVM backend function.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx.c | 57 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 15dc0ae61e0f..a62586a83b80 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1003,6 +1003,61 @@ static int tdx_emulate_hlt(struct kvm_vcpu *vcpu)
 	return kvm_emulate_halt_noskip(vcpu);
 }
 
+static int tdx_complete_pio_in(struct kvm_vcpu *vcpu)
+{
+	struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
+	unsigned long val = 0;
+	int ret;
+
+	WARN_ON(vcpu->arch.pio.count != 1);
+
+	ret = ctxt->ops->pio_in_emulated(ctxt, vcpu->arch.pio.size,
+					 vcpu->arch.pio.port, &val, 1);
+	WARN_ON(!ret);
+
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+	tdvmcall_set_return_val(vcpu, val);
+
+	return 1;
+}
+
+static int tdx_emulate_io(struct kvm_vcpu *vcpu)
+{
+	struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
+	unsigned long val = 0;
+	unsigned int port;
+	int size, ret;
+	bool write;
+
+	++vcpu->stat.io_exits;
+
+	size = tdvmcall_a0_read(vcpu);
+	write = tdvmcall_a1_read(vcpu);
+	port = tdvmcall_a2_read(vcpu);
+
+	if (size != 1 && size != 2 && size != 4) {
+		tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
+		return 1;
+	}
+
+	if (write) {
+		val = tdvmcall_a3_read(vcpu);
+		ret = ctxt->ops->pio_out_emulated(ctxt, size, port, &val, 1);
+
+		/* No need for a complete_userspace_io callback. */
+		vcpu->arch.pio.count = 0;
+	} else {
+		ret = ctxt->ops->pio_in_emulated(ctxt, size, port, &val, 1);
+		if (!ret)
+			vcpu->arch.complete_userspace_io = tdx_complete_pio_in;
+		else
+			tdvmcall_set_return_val(vcpu, val);
+	}
+	if (ret)
+		tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+	return ret;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	if (tdvmcall_exit_type(vcpu))
@@ -1013,6 +1068,8 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 		return tdx_emulate_cpuid(vcpu);
 	case EXIT_REASON_HLT:
 		return tdx_emulate_hlt(vcpu);
+	case EXIT_REASON_IO_INSTRUCTION:
+		return tdx_emulate_io(vcpu);
 	default:
 		break;
 	}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 092/102] KVM: TDX: Handle TDX PV MMIO hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (90 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 091/102] KVM: TDX: Handle TDX PV port io hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 093/102] KVM: TDX: Implement callbacks for MSR operations for TDX isaku.yamahata
                   ` (11 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Export kvm_io_bus_read() and the kvm_mmio tracepoint, and wire up the TDX
PV MMIO hypercall to the KVM backend functions.

kvm_io_bus_read/write() searches for the KVM device emulated in the kernel
at the given MMIO address and emulates the MMIO.  As TDX PV MMIO also needs
this, export kvm_io_bus_read(); kvm_io_bus_write() is already exported.
TDX PV MMIO emulates some of the MMIO itself.  To add the trace point
consistently with x86 KVM, export the kvm_mmio tracepoint.
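
As a high-level sketch (illustrative only), the in-kernel emulation below
tries each backend in order before punting to the userspace VMM:

  if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL))
          return 1;        /* fast MMIO (eventfd) handled it */
  if (write ? !tdx_mmio_write(vcpu, gpa, size, val)
            : !tdx_mmio_read(vcpu, gpa, size))
          return 1;        /* local APIC or a KVM_MMIO_BUS device */
  vcpu->run->exit_reason = KVM_EXIT_MMIO;   /* otherwise exit to userspace */
  vcpu->arch.complete_userspace_io = tdx_complete_mmio;
  return 0;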

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx.c | 114 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c     |   1 +
 virt/kvm/kvm_main.c    |   2 +
 3 files changed, 117 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a62586a83b80..3a955a2a4f0b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1058,6 +1058,118 @@ static int tdx_emulate_io(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static int tdx_complete_mmio(struct kvm_vcpu *vcpu)
+{
+	unsigned long val = 0;
+	gpa_t gpa;
+	int size;
+
+	WARN_ON(vcpu->mmio_needed != 1);
+	vcpu->mmio_needed = 0;
+
+	if (!vcpu->mmio_is_write) {
+		gpa = vcpu->mmio_fragments[0].gpa;
+		size = vcpu->mmio_fragments[0].len;
+
+		memcpy(&val, vcpu->run->mmio.data, size);
+		tdvmcall_set_return_val(vcpu, val);
+		trace_kvm_mmio(KVM_TRACE_MMIO_READ, size, gpa, &val);
+	}
+	return 1;
+}
+
+static inline int tdx_mmio_write(struct kvm_vcpu *vcpu, gpa_t gpa, int size,
+				 unsigned long val)
+{
+	if (kvm_iodevice_write(vcpu, &vcpu->arch.apic->dev, gpa, size, &val) &&
+	    kvm_io_bus_write(vcpu, KVM_MMIO_BUS, gpa, size, &val))
+		return -EOPNOTSUPP;
+
+	trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, size, gpa, &val);
+	return 0;
+}
+
+static inline int tdx_mmio_read(struct kvm_vcpu *vcpu, gpa_t gpa, int size)
+{
+	unsigned long val;
+
+	if (kvm_iodevice_read(vcpu, &vcpu->arch.apic->dev, gpa, size, &val) &&
+	    kvm_io_bus_read(vcpu, KVM_MMIO_BUS, gpa, size, &val))
+		return -EOPNOTSUPP;
+
+	tdvmcall_set_return_val(vcpu, val);
+	trace_kvm_mmio(KVM_TRACE_MMIO_READ, size, gpa, &val);
+	return 0;
+}
+
+static int tdx_emulate_mmio(struct kvm_vcpu *vcpu)
+{
+	struct kvm_memory_slot *slot;
+	int size, write, r;
+	unsigned long val;
+	gpa_t gpa;
+
+	WARN_ON(vcpu->mmio_needed);
+
+	size = tdvmcall_a0_read(vcpu);
+	write = tdvmcall_a1_read(vcpu);
+	gpa = tdvmcall_a2_read(vcpu);
+	val = write ? tdvmcall_a3_read(vcpu) : 0;
+
+	if (size != 1 && size != 2 && size != 4 && size != 8)
+		goto error;
+	if (write != 0 && write != 1)
+		goto error;
+
+	/* Strip the shared bit, allow MMIO with and without it set. */
+	gpa = gpa & ~gfn_to_gpa(kvm_gfn_shared_mask(vcpu->kvm));
+
+	if (size > 8u || ((gpa + size - 1) ^ gpa) & PAGE_MASK)
+		goto error;
+
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, gpa_to_gfn(gpa));
+	if (slot && !(slot->flags & KVM_MEMSLOT_INVALID))
+		goto error;
+
+	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
+		trace_kvm_fast_mmio(gpa);
+		return 1;
+	}
+
+	if (write)
+		r = tdx_mmio_write(vcpu, gpa, size, val);
+	else
+		r = tdx_mmio_read(vcpu, gpa, size);
+	if (!r) {
+		/* Kernel completed device emulation. */
+		tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+		return 1;
+	}
+
+	/* Request the device emulation to userspace device model. */
+	vcpu->mmio_needed = 1;
+	vcpu->mmio_is_write = write;
+	vcpu->arch.complete_userspace_io = tdx_complete_mmio;
+
+	vcpu->run->mmio.phys_addr = gpa;
+	vcpu->run->mmio.len = size;
+	vcpu->run->mmio.is_write = write;
+	vcpu->run->exit_reason = KVM_EXIT_MMIO;
+
+	if (write) {
+		memcpy(vcpu->run->mmio.data, &val, size);
+	} else {
+		vcpu->mmio_fragments[0].gpa = gpa;
+		vcpu->mmio_fragments[0].len = size;
+		trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, size, gpa, NULL);
+	}
+	return 0;
+
+error:
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
+	return 1;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	if (tdvmcall_exit_type(vcpu))
@@ -1070,6 +1182,8 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 		return tdx_emulate_hlt(vcpu);
 	case EXIT_REASON_IO_INSTRUCTION:
 		return tdx_emulate_io(vcpu);
+	case EXIT_REASON_EPT_VIOLATION:
+		return tdx_emulate_mmio(vcpu);
 	default:
 		break;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a68a917ebdff..ccb1670adfbc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13208,6 +13208,7 @@ bool kvm_arch_dirty_log_supported(struct kvm *kvm)
 
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_mmio);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_page_fault);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 703c1d0c98da..753442bddd96 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2303,6 +2303,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
 
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
 
 bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
 {
@@ -5185,6 +5186,7 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
 	r = __kvm_io_bus_read(vcpu, bus, &range, val);
 	return r < 0 ? r : 0;
 }
+EXPORT_SYMBOL_GPL(kvm_io_bus_read);
 
 /* Caller must hold slots_lock. */
 int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 093/102] KVM: TDX: Implement callbacks for MSR operations for TDX
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (91 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 092/102] KVM: TDX: Handle TDX PV MMIO hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 094/102] KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall isaku.yamahata
                   ` (10 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Implement the set_msr/get_msr/has_emulated_msr methods for TDX to handle
hypercalls from the guest TD for paravirtualized rdmsr and wrmsr.  The TDX
module virtualizes MSRs.  For some MSRs, it injects #VE into the guest TD
upon RDMSR or WRMSR.  The exact list of such MSRs is defined in the spec.

Upon #VE, the guest TD may execute the hypercalls
TDG.VP.VMCALL<INSTRUCTION.RDMSR> and TDG.VP.VMCALL<INSTRUCTION.WRMSR>,
which are defined in the GHCI (Guest-Host Communication Interface), so that
the host VMM (e.g. KVM) can virtualize the MSRs.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/main.c    | 34 +++++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 68 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  6 ++++
 3 files changed, 105 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 227739c2490e..2696278e9b17 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -205,6 +205,34 @@ static void vt_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 	vmx_handle_exit_irqoff(vcpu);
 }
 
+static int vt_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_set_msr(vcpu, msr_info);
+
+	return vmx_set_msr(vcpu, msr_info);
+}
+
+/*
+ * The kvm parameter can be NULL (module initialization, or invocation before
+ * VM creation). Be sure to check the kvm parameter before using it.
+ */
+static bool vt_has_emulated_msr(struct kvm *kvm, u32 index)
+{
+	if (kvm && is_td(kvm))
+		return tdx_is_emulated_msr(index, true);
+
+	return vmx_has_emulated_msr(kvm, index);
+}
+
+static int vt_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_get_msr(vcpu, msr_info);
+
+	return vmx_get_msr(vcpu, msr_info);
+}
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -422,7 +450,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.hardware_enable = vt_hardware_enable,
 	.hardware_disable = vt_hardware_disable,
-	.has_emulated_msr = vmx_has_emulated_msr,
+	.has_emulated_msr = vt_has_emulated_msr,
 
 	.is_vm_type_supported = vt_is_vm_type_supported,
 	.vm_size = sizeof(struct kvm_vmx),
@@ -442,8 +470,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
 	.get_msr_feature = vmx_get_msr_feature,
-	.get_msr = vmx_get_msr,
-	.set_msr = vmx_set_msr,
+	.get_msr = vt_get_msr,
+	.set_msr = vt_set_msr,
 	.get_segment_base = vmx_get_segment_base,
 	.get_segment = vmx_get_segment,
 	.set_segment = vmx_set_segment,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 3a955a2a4f0b..162cab67d1ef 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1627,6 +1627,74 @@ void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 	*error_code = 0;
 }
 
+bool tdx_is_emulated_msr(u32 index, bool write)
+{
+	switch (index) {
+	case MSR_IA32_UCODE_REV:
+	case MSR_IA32_ARCH_CAPABILITIES:
+	case MSR_IA32_POWER_CTL:
+	case MSR_MTRRcap:
+	case 0x200 ... 0x26f:
+		/* IA32_MTRR_PHYS{BASE, MASK}, IA32_MTRR_FIX*_* */
+	case MSR_IA32_CR_PAT:
+	case MSR_MTRRdefType:
+	case MSR_IA32_TSC_DEADLINE:
+	case MSR_IA32_MISC_ENABLE:
+	case MSR_KVM_STEAL_TIME:
+	case MSR_KVM_POLL_CONTROL:
+	case MSR_PLATFORM_INFO:
+	case MSR_MISC_FEATURES_ENABLES:
+	case MSR_IA32_MCG_CAP:
+	case MSR_IA32_MCG_STATUS:
+	case MSR_IA32_MCG_CTL:
+	case MSR_IA32_MCG_EXT_CTL:
+	case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_MISC(28) - 1:
+		/* MSR_IA32_MCx_{CTL, STATUS, ADDR, MISC} */
+		return true;
+	case APIC_BASE_MSR ... APIC_BASE_MSR + 0xff:
+		/*
+		 * x2APIC registers that are virtualized by the CPU can't be
+		 * emulated, KVM doesn't have access to the virtual APIC page.
+		 */
+		switch (index) {
+		case X2APIC_MSR(APIC_TASKPRI):
+		case X2APIC_MSR(APIC_PROCPRI):
+		case X2APIC_MSR(APIC_EOI):
+		case X2APIC_MSR(APIC_ISR) ... X2APIC_MSR(APIC_ISR + APIC_ISR_NR):
+		case X2APIC_MSR(APIC_TMR) ... X2APIC_MSR(APIC_TMR + APIC_ISR_NR):
+		case X2APIC_MSR(APIC_IRR) ... X2APIC_MSR(APIC_IRR + APIC_ISR_NR):
+			return false;
+		default:
+			return true;
+		}
+	case MSR_IA32_APICBASE:
+	case MSR_EFER:
+		return !write;
+	case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31):
+		/*
+		 * 0x280 - 0x29f: The x86 common code doesn't emulate MCx_CTL2.
+		 * Refer to kvm_{get,set}_msr_common(),
+		 * kvm_mtrr_{get, set}_msr(), and msr_mtrr_valid().
+		 */
+	default:
+		return false;
+	}
+}
+
+int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+	if (tdx_is_emulated_msr(msr->index, false))
+		return kvm_get_msr_common(vcpu, msr);
+	return 1;
+}
+
+int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+	if (tdx_is_emulated_msr(msr->index, true))
+		return kvm_set_msr_common(vcpu, msr);
+	return 1;
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 78f2d624b58e..1a8fd74a7a3c 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -156,6 +156,9 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 void tdx_inject_nmi(struct kvm_vcpu *vcpu);
 void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 		u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
+bool tdx_is_emulated_msr(u32 index, bool write);
+int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
+int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -193,6 +196,9 @@ static inline void tdx_inject_nmi(struct kvm_vcpu *vcpu) {}
 static inline void tdx_get_exit_info(
 	struct kvm_vcpu *vcpu, u32 *reason, u64 *info1, u64 *info2,
 	u32 *intr_info, u32 *error_code) {}
+static inline bool tdx_is_emulated_msr(u32 index, bool write) { return false; }
+static inline int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
+static inline int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 094/102] KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (92 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 093/102] KVM: TDX: Implement callbacks for MSR operations for TDX isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 095/102] KVM: TDX: Handle TDX PV report fatal error hypercall isaku.yamahata
                   ` (9 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up TDX PV rdmsr/wrmsr hypercall to the KVM backend function.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/tdx.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 162cab67d1ef..dc66c799cae8 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1170,6 +1170,39 @@ static int tdx_emulate_mmio(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int tdx_emulate_rdmsr(struct kvm_vcpu *vcpu)
+{
+	u32 index = tdvmcall_a0_read(vcpu);
+	u64 data;
+
+	if (kvm_get_msr(vcpu, index, &data)) {
+		trace_kvm_msr_read_ex(index);
+		tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
+		return 1;
+	}
+	trace_kvm_msr_read(index, data);
+
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+	tdvmcall_set_return_val(vcpu, data);
+	return 1;
+}
+
+static int tdx_emulate_wrmsr(struct kvm_vcpu *vcpu)
+{
+	u32 index = tdvmcall_a0_read(vcpu);
+	u64 data = tdvmcall_a1_read(vcpu);
+
+	if (kvm_set_msr(vcpu, index, data)) {
+		trace_kvm_msr_write_ex(index, data);
+		tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
+		return 1;
+	}
+
+	trace_kvm_msr_write(index, data);
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+	return 1;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	if (tdvmcall_exit_type(vcpu))
@@ -1184,6 +1217,10 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 		return tdx_emulate_io(vcpu);
 	case EXIT_REASON_EPT_VIOLATION:
 		return tdx_emulate_mmio(vcpu);
+	case EXIT_REASON_MSR_READ:
+		return tdx_emulate_rdmsr(vcpu);
+	case EXIT_REASON_MSR_WRITE:
+		return tdx_emulate_wrmsr(vcpu);
 	default:
 		break;
 	}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 095/102] KVM: TDX: Handle TDX PV report fatal error hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (93 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 094/102] KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 096/102] KVM: TDX: Handle TDX PV map_gpa hypercall isaku.yamahata
                   ` (8 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up the TDX PV report fatal error hypercall to the KVM_EXIT_SYSTEM_EVENT
exit with the new KVM_SYSTEM_EVENT_TDX type, so that the userspace VMM can
tear down the guest TD.
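
For illustration (not part of this patch), a userspace VMM consuming this
exit might do something like the sketch below.  The helper name is made up;
the struct kvm_run fields follow the uapi change in this patch.

  #include <stdio.h>
  #include <stdlib.h>
  #include <linux/kvm.h>

  /* Hypothetical run-loop helper in the userspace device model. */
  static void handle_tdx_fatal_error(struct kvm_run *run)
  {
          if (run->exit_reason != KVM_EXIT_SYSTEM_EVENT ||
              run->system_event.type != KVM_SYSTEM_EVENT_TDX)
                  return;

          /* data[0..2] are filled in by tdx_report_fatal_error() below. */
          fprintf(stderr, "TD fatal error: leaf=%#llx a0=%#llx a1=%#llx\n",
                  (unsigned long long)run->system_event.data[0],
                  (unsigned long long)run->system_event.data[1],
                  (unsigned long long)run->system_event.data[2]);

          exit(EXIT_FAILURE); /* the guest TD cannot be resumed */
  }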

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c   | 20 ++++++++++++++++++++
 include/uapi/linux/kvm.h |  1 +
 2 files changed, 21 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index dc66c799cae8..00baecbb62ff 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1203,6 +1203,24 @@ static int tdx_emulate_wrmsr(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int tdx_report_fatal_error(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Exit to userspace device model for teardown.
+	 * Because the guest TD is already panicking, returning an error to the
+	 * guest TD doesn't make sense.  No argument check is done.
+	 */
+
+	vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
+	vcpu->run->system_event.type = KVM_SYSTEM_EVENT_TDX;
+	vcpu->run->system_event.ndata = 3;
+	vcpu->run->system_event.data[0] = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
+	vcpu->run->system_event.data[1] = tdvmcall_a0_read(vcpu);
+	vcpu->run->system_event.data[2] = tdvmcall_a1_read(vcpu);
+
+	return 0;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	if (tdvmcall_exit_type(vcpu))
@@ -1221,6 +1239,8 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 		return tdx_emulate_rdmsr(vcpu);
 	case EXIT_REASON_MSR_WRITE:
 		return tdx_emulate_wrmsr(vcpu);
+	case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
+		return tdx_report_fatal_error(vcpu);
 	default:
 		break;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 6d6785d2685f..014337760dfa 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -448,6 +448,7 @@ struct kvm_run {
 #define KVM_SYSTEM_EVENT_WAKEUP         4
 #define KVM_SYSTEM_EVENT_SUSPEND        5
 #define KVM_SYSTEM_EVENT_SEV_TERM       6
+#define KVM_SYSTEM_EVENT_TDX            7
 			__u32 type;
 			__u32 ndata;
 			union {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 096/102] KVM: TDX: Handle TDX PV map_gpa hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (94 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 095/102] KVM: TDX: Handle TDX PV report fatal error hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 097/102] KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall isaku.yamahata
                   ` (7 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up TDX PV map_gpa hypercall to the kvm/mmu backend.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 60 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 00baecbb62ff..d4ac573d9db3 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1221,6 +1221,64 @@ static int tdx_report_fatal_error(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static int tdx_map_gpa(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	gpa_t gpa = tdvmcall_a0_read(vcpu);
+	gpa_t size = tdvmcall_a1_read(vcpu);
+	gpa_t end = gpa + size;
+	bool allow_private = kvm_is_private_gpa(kvm, gpa);
+
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
+	if (!IS_ALIGNED(gpa, 4096) || !IS_ALIGNED(size, 4096) ||
+		end < gpa ||
+		end > kvm_gfn_shared_mask(kvm) << (PAGE_SHIFT + 1) ||
+		kvm_is_private_gpa(kvm, gpa) != kvm_is_private_gpa(kvm, end))
+		return 1;
+
+	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+
+#define TDX_MAP_GPA_SIZE_MAX   (16 * 1024 * 1024)
+	while (gpa < end) {
+		gfn_t s = gpa_to_gfn(gpa);
+		gfn_t e = gpa_to_gfn(
+			min(roundup(gpa + 1, TDX_MAP_GPA_SIZE_MAX), end));
+		int ret = kvm_mmu_map_gpa(vcpu, &s, e, allow_private);
+
+		if (ret == -EAGAIN)
+			e = s;
+		else if (ret) {
+			tdvmcall_set_return_code(vcpu,
+						TDG_VP_VMCALL_INVALID_OPERAND);
+			break;
+		}
+
+		gpa = gfn_to_gpa(e);
+
+		/*
+		 * TODO:
+		 * Interrupt this hypercall invocation to return the remaining
+		 * region to the guest and let the guest resume the hypercall.
+		 *
+		 * The TDX Guest-Hypervisor Communication Interface (GHCI)
+		 * specification and the guest implementation need to be
+		 * updated.
+		 *
+		 * if (gpa < end && need_resched()) {
+		 *	size = end - gpa;
+		 *	tdvmcall_a0_write(vcpu, gpa);
+		 *	tdvmcall_a1_write(vcpu, size);
+		 *	tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INTERRUPTED_RESUME);
+		 *	break;
+		 * }
+		 */
+		if (gpa < end && need_resched())
+			cond_resched();
+	}
+
+	return 1;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	if (tdvmcall_exit_type(vcpu))
@@ -1241,6 +1299,8 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 		return tdx_emulate_wrmsr(vcpu);
 	case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
 		return tdx_report_fatal_error(vcpu);
+	case TDG_VP_VMCALL_MAP_GPA:
+		return tdx_map_gpa(vcpu);
 	default:
 		break;
 	}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 097/102] KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (95 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 096/102] KVM: TDX: Handle TDX PV map_gpa hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 098/102] KVM: TDX: Silently discard SMI request isaku.yamahata
                   ` (6 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Implement the TDG.VP.VMCALL<GetTdVmCallInfo> hypercall.  If the input value
is zero, return a success code and zero in the output registers.

The TDG.VP.VMCALL<GetTdVmCallInfo> hypercall is a subleaf of TDG.VP.VMCALL
to enumerate which TDG.VP.VMCALL subleaves are supported.  This hypercall
is for future enhancement of the Guest-Host-Communication Interface (GHCI)
specification.  GHCI version 344426-001US defines it to require the input
R12 to be zero and to return zero in the output registers R11, R12, R13,
and R14, so that the guest TD enumerates no enhancements.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d4ac573d9db3..b1a1a7d96f39 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1203,6 +1203,20 @@ static int tdx_emulate_wrmsr(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int tdx_get_td_vm_call_info(struct kvm_vcpu *vcpu)
+{
+	if (tdvmcall_a0_read(vcpu))
+		tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
+	else {
+		tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_SUCCESS);
+		kvm_r11_write(vcpu, 0);
+		tdvmcall_a0_write(vcpu, 0);
+		tdvmcall_a1_write(vcpu, 0);
+		tdvmcall_a2_write(vcpu, 0);
+	}
+	return 1;
+}
+
 static int tdx_report_fatal_error(struct kvm_vcpu *vcpu)
 {
 	/*
@@ -1297,6 +1311,8 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 		return tdx_emulate_rdmsr(vcpu);
 	case EXIT_REASON_MSR_WRITE:
 		return tdx_emulate_wrmsr(vcpu);
+	case TDG_VP_VMCALL_GET_TD_VM_CALL_INFO:
+		return tdx_get_td_vm_call_info(vcpu);
 	case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
 		return tdx_report_fatal_error(vcpu);
 	case TDG_VP_VMCALL_MAP_GPA:
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 098/102] KVM: TDX: Silently discard SMI request
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (96 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 097/102] KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 099/102] KVM: TDX: Silently ignore INIT/SIPI isaku.yamahata
                   ` (5 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX doesn't support system-management mode (SMM) or the system-management
interrupt (SMI) in guest TDs.  Because guest state (vcpu state, memory
state) is protected, any change to it, such as injecting an SMI or
switching the vcpu into SMM, must go through the TDX module APIs.  The TDX
module doesn't provide a way for the VMM to inject an SMI into a guest TD
or to switch a guest vcpu into SMM.

We have two options in KVM when handling SMM or SMI in the guest TD or the
device model (e.g. QEMU): 1) silently ignore the request or 2) return a
meaningful error.

For simplicity, option 1) is implemented.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/lapic.c       |  7 +++++--
 arch/x86/kvm/vmx/main.c    | 43 ++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/tdx.c     | 27 ++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  8 +++++++
 arch/x86/kvm/x86.c         |  3 ++-
 5 files changed, 81 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 707f1ff90f8a..67dbc26aa1bd 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1146,8 +1146,11 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 
 	case APIC_DM_SMI:
 		result = 1;
-		kvm_make_request(KVM_REQ_SMI, vcpu);
-		kvm_vcpu_kick(vcpu);
+		if (static_call(kvm_x86_has_emulated_msr)(vcpu->kvm,
+							  MSR_IA32_SMBASE)) {
+			kvm_make_request(KVM_REQ_SMI, vcpu);
+			kvm_vcpu_kick(vcpu);
+		}
 		break;
 
 	case APIC_DM_NMI:
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 2696278e9b17..294919913dfd 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -233,6 +233,41 @@ static int vt_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return vmx_get_msr(vcpu, msr_info);
 }
 
+static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_smi_allowed(vcpu, for_injection);
+
+	return vmx_smi_allowed(vcpu, for_injection);
+}
+
+static int vt_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_enter_smm(vcpu, smstate);
+
+	return vmx_enter_smm(vcpu, smstate);
+}
+
+static int vt_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_leave_smm(vcpu, smstate);
+
+	return vmx_leave_smm(vcpu, smstate);
+}
+
+static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_enable_smi_window(vcpu);
+		return;
+	}
+
+	/* RSM will cause a vmexit anyway.  */
+	vmx_enable_smi_window(vcpu);
+}
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -569,10 +604,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.setup_mce = vmx_setup_mce,
 
-	.smi_allowed = vmx_smi_allowed,
-	.enter_smm = vmx_enter_smm,
-	.leave_smm = vmx_leave_smm,
-	.enable_smi_window = vmx_enable_smi_window,
+	.smi_allowed = vt_smi_allowed,
+	.enter_smm = vt_enter_smm,
+	.leave_smm = vt_leave_smm,
+	.enable_smi_window = vt_enable_smi_window,
 
 	.can_emulate_instruction = vmx_can_emulate_instruction,
 	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b1a1a7d96f39..d81a0a832ce2 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1828,6 +1828,33 @@ int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	return 1;
 }
 
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/* SMI isn't supported for TDX. */
+	WARN_ON_ONCE(1);
+	return false;
+}
+
+int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+	/* smi_allowed() is always false for TDX as above. */
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+{
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	/* SMI isn't supported for TDX.  Silently discard SMI request. */
+	WARN_ON_ONCE(1);
+	vcpu->arch.smi_pending = false;
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 1a8fd74a7a3c..1c4672037a2e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -159,6 +159,10 @@ void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 bool tdx_is_emulated_msr(u32 index, bool write);
 int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
 int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate);
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate);
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -199,6 +203,10 @@ static inline void tdx_get_exit_info(
 static inline bool tdx_is_emulated_msr(u32 index, bool write) { return false; }
 static inline int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
 static inline int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
+static inline int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { return false; }
+static inline int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate) { return 0; }
+static inline int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) { return 0; }
+static inline void tdx_enable_smi_window(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ccb1670adfbc..a13040e22d25 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4816,7 +4816,8 @@ static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
 
 static int kvm_vcpu_ioctl_smi(struct kvm_vcpu *vcpu)
 {
-	kvm_make_request(KVM_REQ_SMI, vcpu);
+	if (static_call(kvm_x86_has_emulated_msr)(vcpu->kvm, MSR_IA32_SMBASE))
+		kvm_make_request(KVM_REQ_SMI, vcpu);
 
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 099/102] KVM: TDX: Silently ignore INIT/SIPI
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (97 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 098/102] KVM: TDX: Silently discard SMI request isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 100/102] KVM: TDX: Add methods to ignore accesses to CPU state isaku.yamahata
                   ` (4 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

The TDX module doesn't provide an API for the VMM to inject an INIT IPI or
SIPI.  Instead, it defines a different protocol to boot application
processors.  Ignore INIT and SIPI events for the TDX guest.

There are two options: 1) silently ignore the INIT/SIPI request or
2) return an error to the guest TD somehow.  Given that the TDX guest is
paravirtualized to boot APs, option 1 is chosen for simplicity.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/lapic.c               | 16 +++++++++++-----
 arch/x86/kvm/svm/svm.c             |  1 +
 arch/x86/kvm/vmx/main.c            | 22 +++++++++++++++++++++-
 5 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index ec98b3f734a2..ff658969cfff 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -136,6 +136,7 @@ KVM_X86_OP_OPTIONAL(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
+KVM_X86_OP(vcpu_deliver_init)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
 KVM_X86_OP(check_processor_compatibility)
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 42d209fe0a4f..2b79d1c9cabb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1649,6 +1649,7 @@ struct kvm_x86_ops {
 	int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
 
 	void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
+	void (*vcpu_deliver_init)(struct kvm_vcpu *vcpu);
 
 	/*
 	 * Returns vCPU specific APICv inhibit reasons
@@ -1858,6 +1859,7 @@ int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);
 void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
 int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg);
 void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
+void kvm_vcpu_deliver_init(struct kvm_vcpu *vcpu);
 
 int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
 		    int reason, bool has_error_code, u32 error_code);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 67dbc26aa1bd..596955070721 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2996,6 +2996,16 @@ int kvm_lapic_set_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len)
 	return 0;
 }
 
+void kvm_vcpu_deliver_init(struct kvm_vcpu *vcpu)
+{
+	kvm_vcpu_reset(vcpu, true);
+	if (kvm_vcpu_is_bsp(vcpu))
+		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+	else
+		vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_deliver_init);
+
 int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
@@ -3043,11 +3053,7 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 
 	if (test_bit(KVM_APIC_INIT, &pe)) {
 		clear_bit(KVM_APIC_INIT, &apic->pending_events);
-		kvm_vcpu_reset(vcpu, true);
-		if (kvm_vcpu_is_bsp(apic->vcpu))
-			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
-		else
-			vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
+		static_call(kvm_x86_vcpu_deliver_init)(vcpu);
 	}
 	if (test_bit(KVM_APIC_SIPI, &pe)) {
 		clear_bit(KVM_APIC_SIPI, &apic->pending_events);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0abc43d6a115..0f4ce62b30c0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4829,6 +4829,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.complete_emulated_msr = svm_complete_emulated_msr,
 
 	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
+	.vcpu_deliver_init = kvm_vcpu_deliver_init,
 	.vcpu_get_apicv_inhibit_reasons = avic_vcpu_get_apicv_inhibit_reasons,
 };
 
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 294919913dfd..552f2576d3ae 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -295,6 +295,25 @@ static void vt_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 	vmx_deliver_interrupt(apic, delivery_mode, trig_mode, vector);
 }
 
+static void vt_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	kvm_vcpu_deliver_sipi_vector(vcpu, vector);
+}
+
+static void vt_vcpu_deliver_init(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		/* TDX doesn't support INIT.  Ignore INIT event */
+		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+		return;
+	}
+
+	kvm_vcpu_deliver_init(vcpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -616,7 +635,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.msr_filter_changed = vmx_msr_filter_changed,
 	.complete_emulated_msr = kvm_complete_insn_gp,
 
-	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
+	.vcpu_deliver_sipi_vector = vt_vcpu_deliver_sipi_vector,
+	.vcpu_deliver_init = vt_vcpu_deliver_init,
 
 	.dev_mem_enc_ioctl = tdx_dev_ioctl,
 	.mem_enc_ioctl = vt_mem_enc_ioctl,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 100/102] KVM: TDX: Add methods to ignore accesses to CPU state
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (98 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 099/102] KVM: TDX: Silently ignore INIT/SIPI isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-06-27 21:54 ` [PATCH v7 101/102] Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX) isaku.yamahata
                   ` (3 subsequent siblings)
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, Sean Christopherson

From: Sean Christopherson <sean.j.christopherson@intel.com>

TDX protects TDX guest state from the VMM.  Implement the access methods
for TDX guest state so that they ignore the access or return zero.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 463 +++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/tdx.c     |  55 ++++-
 arch/x86/kvm/vmx/x86_ops.h |  17 ++
 3 files changed, 490 insertions(+), 45 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 552f2576d3ae..b9ad41ace499 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -268,6 +268,46 @@ static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
 	vmx_enable_smi_window(vcpu);
 }
 
+static bool vt_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
+				       void *insn, int insn_len)
+{
+	if (is_td_vcpu(vcpu))
+		return false;
+
+	return vmx_can_emulate_instruction(vcpu, emul_type, insn, insn_len);
+}
+
+static int vt_check_intercept(struct kvm_vcpu *vcpu,
+				 struct x86_instruction_info *info,
+				 enum x86_intercept_stage stage,
+				 struct x86_exception *exception)
+{
+	/*
+	 * This callback is triggered by the x86 instruction emulator.  TDX
+	 * doesn't allow guest memory inspection.
+	 */
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return X86EMUL_UNHANDLEABLE;
+
+	return vmx_check_intercept(vcpu, info, stage, exception);
+}
+
+static bool vt_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return true;
+
+	return vmx_apic_init_signal_blocked(vcpu);
+}
+
+static void vt_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_set_virtual_apic_mode(vcpu);
+
+	return vmx_set_virtual_apic_mode(vcpu);
+}
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -275,6 +315,31 @@ static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 	memset(pi->pir, 0, sizeof(pi->pir));
 }
 
+static void vt_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	return vmx_hwapic_irr_update(vcpu, max_irr);
+}
+
+static void vt_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	return vmx_hwapic_isr_update(vcpu, max_isr);
+}
+
+static bool vt_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
+{
+	/* TDX doesn't support L2 at the moment. */
+	if (WARN_ON_ONCE(is_td_vcpu(vcpu)))
+		return false;
+
+	return vmx_guest_apic_has_interrupt(vcpu);
+}
+
 static int vt_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -314,6 +379,177 @@ static void vt_vcpu_deliver_init(struct kvm_vcpu *vcpu)
 	kvm_vcpu_deliver_init(vcpu);
 }
 
+static void vt_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	return vmx_vcpu_after_set_cpuid(vcpu);
+}
+
+static void vt_update_exception_bitmap(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_update_exception_bitmap(vcpu);
+}
+
+static u64 vt_get_segment_base(struct kvm_vcpu *vcpu, int seg)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return tdx_get_segment_base(vcpu, seg);
+
+	return vmx_get_segment_base(vcpu, seg);
+}
+
+static void vt_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+			      int seg)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return tdx_get_segment(vcpu, var, seg);
+
+	vmx_get_segment(vcpu, var, seg);
+}
+
+static void vt_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+			      int seg)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_set_segment(vcpu, var, seg);
+}
+
+static int vt_get_cpl(struct kvm_vcpu *vcpu)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return tdx_get_cpl(vcpu);
+
+	return vmx_get_cpl(vcpu);
+}
+
+static void vt_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_get_cs_db_l_bits(vcpu, db, l);
+}
+
+static void vt_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_set_cr0(vcpu, cr0);
+}
+
+static void vt_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_set_cr4(vcpu, cr4);
+}
+
+static int vt_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+{
+	if (is_td_vcpu(vcpu))
+		return 0;
+
+	return vmx_set_efer(vcpu, efer);
+}
+
+static void vt_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm)) {
+		memset(dt, 0, sizeof(*dt));
+		return;
+	}
+
+	vmx_get_idt(vcpu, dt);
+}
+
+static void vt_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_set_idt(vcpu, dt);
+}
+
+static void vt_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm)) {
+		memset(dt, 0, sizeof(*dt));
+		return;
+	}
+
+	vmx_get_gdt(vcpu, dt);
+}
+
+static void vt_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_set_gdt(vcpu, dt);
+}
+
+static void vt_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_set_dr7(vcpu, val);
+}
+
+static void vt_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * MOV-DR exiting is always cleared for TD guest, even in debug mode.
+	 * Thus KVM_DEBUGREG_WONT_EXIT can never be set and it should never
+	 * reach here for TD vcpu.
+	 */
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_sync_dirty_debug_regs(vcpu);
+}
+
+static void vt_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_cache_reg(vcpu, reg);
+
+	return vmx_cache_reg(vcpu, reg);
+}
+
+static unsigned long vt_get_rflags(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_get_rflags(vcpu);
+
+	return vmx_get_rflags(vcpu);
+}
+
+static void vt_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_set_rflags(vcpu, rflags);
+}
+
+static bool vt_get_if_flag(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return false;
+
+	return vmx_get_if_flag(vcpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -430,6 +666,15 @@ static u32 vt_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 	return vmx_get_interrupt_shadow(vcpu);
 }
 
+static void vt_patch_hypercall(struct kvm_vcpu *vcpu,
+				  unsigned char *hypercall)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_patch_hypercall(vcpu, hypercall);
+}
+
 static void vt_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 {
 	if (is_td_vcpu(vcpu))
@@ -438,6 +683,14 @@ static void vt_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 	vmx_inject_irq(vcpu, reinjected);
 }
 
+static void vt_queue_exception(struct kvm_vcpu *vcpu)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_queue_exception(vcpu);
+}
+
 static void vt_cancel_injection(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -470,6 +723,130 @@ static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
 	vmx_request_immediate_exit(vcpu);
 }
 
+static void vt_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_update_cr8_intercept(vcpu, tpr, irr);
+}
+
+static void vt_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
+{
+	if (WARN_ON_ONCE(is_td_vcpu(vcpu)))
+		return;
+
+	vmx_set_apic_access_page_addr(vcpu);
+}
+
+static void vt_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+{
+	if (WARN_ON_ONCE(is_td_vcpu(vcpu)))
+		return;
+
+	vmx_refresh_apicv_exec_ctrl(vcpu);
+}
+
+static void vt_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_load_eoi_exitmap(vcpu, eoi_exit_bitmap);
+}
+
+static int vt_set_tss_addr(struct kvm *kvm, unsigned int addr)
+{
+	if (is_td(kvm))
+		return 0;
+
+	return vmx_set_tss_addr(kvm, addr);
+}
+
+static int vt_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
+{
+	if (is_td(kvm))
+		return 0;
+
+	return vmx_set_identity_map_addr(kvm, ident_addr);
+}
+
+static u64 vt_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+{
+	if (is_td_vcpu(vcpu)) {
+		if (is_mmio)
+			return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
+		return  MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
+	}
+
+	return vmx_get_mt_mask(vcpu, gfn, is_mmio);
+}
+
+static u64 vt_get_l2_tsc_offset(struct kvm_vcpu *vcpu)
+{
+	/* TDX doesn't support L2 guest at the moment. */
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return 0;
+
+	return vmx_get_l2_tsc_offset(vcpu);
+}
+
+static u64 vt_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu)
+{
+	/* TDX doesn't support L2 guest at the moment. */
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return 0;
+
+	return vmx_get_l2_tsc_multiplier(vcpu);
+}
+
+static void vt_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+{
+	/* In TDX, tsc offset can't be changed. */
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_write_tsc_offset(vcpu, offset);
+}
+
+static void vt_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
+{
+	/* In TDX, tsc multiplier can't be changed. */
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_write_tsc_multiplier(vcpu, multiplier);
+}
+
+static void vt_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
+{
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_update_cpu_dirty_logging(vcpu);
+}
+
+#ifdef CONFIG_X86_64
+static int vt_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+			      bool *expired)
+{
+	/* VMX-preemption timer isn't available for TDX. */
+	if (is_td_vcpu(vcpu))
+		return -EINVAL;
+
+	return vmx_set_hv_timer(vcpu, guest_deadline_tsc, expired);
+}
+
+static void vt_cancel_hv_timer(struct kvm_vcpu *vcpu)
+{
+	/* VMX-preemption timer can't be set.  See vt_set_hv_timer(). */
+	if (KVM_BUG_ON(is_td_vcpu(vcpu), vcpu->kvm))
+		return;
+
+	vmx_cancel_hv_timer(vcpu);
+}
+#endif
+
 static void vt_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 			u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code)
 {
@@ -522,29 +899,29 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_load = vt_vcpu_load,
 	.vcpu_put = vt_vcpu_put,
 
-	.update_exception_bitmap = vmx_update_exception_bitmap,
+	.update_exception_bitmap = vt_update_exception_bitmap,
 	.get_msr_feature = vmx_get_msr_feature,
 	.get_msr = vt_get_msr,
 	.set_msr = vt_set_msr,
-	.get_segment_base = vmx_get_segment_base,
-	.get_segment = vmx_get_segment,
-	.set_segment = vmx_set_segment,
-	.get_cpl = vmx_get_cpl,
-	.get_cs_db_l_bits = vmx_get_cs_db_l_bits,
-	.set_cr0 = vmx_set_cr0,
+	.get_segment_base = vt_get_segment_base,
+	.get_segment = vt_get_segment,
+	.set_segment = vt_set_segment,
+	.get_cpl = vt_get_cpl,
+	.get_cs_db_l_bits = vt_get_cs_db_l_bits,
+	.set_cr0 = vt_set_cr0,
 	.is_valid_cr4 = vmx_is_valid_cr4,
-	.set_cr4 = vmx_set_cr4,
-	.set_efer = vmx_set_efer,
-	.get_idt = vmx_get_idt,
-	.set_idt = vmx_set_idt,
-	.get_gdt = vmx_get_gdt,
-	.set_gdt = vmx_set_gdt,
-	.set_dr7 = vmx_set_dr7,
-	.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
-	.cache_reg = vmx_cache_reg,
-	.get_rflags = vmx_get_rflags,
-	.set_rflags = vmx_set_rflags,
-	.get_if_flag = vmx_get_if_flag,
+	.set_cr4 = vt_set_cr4,
+	.set_efer = vt_set_efer,
+	.get_idt = vt_get_idt,
+	.set_idt = vt_set_idt,
+	.get_gdt = vt_get_gdt,
+	.set_gdt = vt_set_gdt,
+	.set_dr7 = vt_set_dr7,
+	.sync_dirty_debug_regs = vt_sync_dirty_debug_regs,
+	.cache_reg = vt_cache_reg,
+	.get_rflags = vt_get_rflags,
+	.set_rflags = vt_set_rflags,
+	.get_if_flag = vt_get_if_flag,
 
 	.flush_tlb_all = vt_flush_tlb_all,
 	.flush_tlb_current = vt_flush_tlb_current,
@@ -558,10 +935,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.update_emulated_instruction = vmx_update_emulated_instruction,
 	.set_interrupt_shadow = vt_set_interrupt_shadow,
 	.get_interrupt_shadow = vt_get_interrupt_shadow,
-	.patch_hypercall = vmx_patch_hypercall,
+	.patch_hypercall = vt_patch_hypercall,
 	.inject_irq = vt_inject_irq,
 	.inject_nmi = vt_inject_nmi,
-	.queue_exception = vmx_queue_exception,
+	.queue_exception = vt_queue_exception,
 	.cancel_injection = vt_cancel_injection,
 	.interrupt_allowed = vt_interrupt_allowed,
 	.nmi_allowed = vt_nmi_allowed,
@@ -569,39 +946,39 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.set_nmi_mask = vt_set_nmi_mask,
 	.enable_nmi_window = vt_enable_nmi_window,
 	.enable_irq_window = vt_enable_irq_window,
-	.update_cr8_intercept = vmx_update_cr8_intercept,
-	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
-	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
-	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
-	.load_eoi_exitmap = vmx_load_eoi_exitmap,
+	.update_cr8_intercept = vt_update_cr8_intercept,
+	.set_virtual_apic_mode = vt_set_virtual_apic_mode,
+	.set_apic_access_page_addr = vt_set_apic_access_page_addr,
+	.refresh_apicv_exec_ctrl = vt_refresh_apicv_exec_ctrl,
+	.load_eoi_exitmap = vt_load_eoi_exitmap,
 	.apicv_post_state_restore = vt_apicv_post_state_restore,
 	.check_apicv_inhibit_reasons = vmx_check_apicv_inhibit_reasons,
-	.hwapic_irr_update = vmx_hwapic_irr_update,
-	.hwapic_isr_update = vmx_hwapic_isr_update,
-	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
+	.hwapic_irr_update = vt_hwapic_irr_update,
+	.hwapic_isr_update = vt_hwapic_isr_update,
+	.guest_apic_has_interrupt = vt_guest_apic_has_interrupt,
 	.sync_pir_to_irr = vt_sync_pir_to_irr,
 	.deliver_interrupt = vt_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
 	.protected_apic_has_interrupt = vt_protected_apic_has_interrupt,
 
-	.set_tss_addr = vmx_set_tss_addr,
-	.set_identity_map_addr = vmx_set_identity_map_addr,
-	.get_mt_mask = vmx_get_mt_mask,
+	.set_tss_addr = vt_set_tss_addr,
+	.set_identity_map_addr = vt_set_identity_map_addr,
+	.get_mt_mask = vt_get_mt_mask,
 
 	.get_exit_info = vt_get_exit_info,
 
-	.vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
+	.vcpu_after_set_cpuid = vt_vcpu_after_set_cpuid,
 
 	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
 
-	.get_l2_tsc_offset = vmx_get_l2_tsc_offset,
-	.get_l2_tsc_multiplier = vmx_get_l2_tsc_multiplier,
-	.write_tsc_offset = vmx_write_tsc_offset,
-	.write_tsc_multiplier = vmx_write_tsc_multiplier,
+	.get_l2_tsc_offset = vt_get_l2_tsc_offset,
+	.get_l2_tsc_multiplier = vt_get_l2_tsc_multiplier,
+	.write_tsc_offset = vt_write_tsc_offset,
+	.write_tsc_multiplier = vt_write_tsc_multiplier,
 
 	.load_mmu_pgd = vt_load_mmu_pgd,
 
-	.check_intercept = vmx_check_intercept,
+	.check_intercept = vt_check_intercept,
 	.handle_exit_irqoff = vt_handle_exit_irqoff,
 
 	.request_immediate_exit = vt_request_immediate_exit,
@@ -609,7 +986,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.sched_in = vt_sched_in,
 
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
-	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
+	.update_cpu_dirty_logging = vt_update_cpu_dirty_logging,
 
 	.nested_ops = &vmx_nested_ops,
 
@@ -617,8 +994,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.pi_start_assignment = vmx_pi_start_assignment,
 
 #ifdef CONFIG_X86_64
-	.set_hv_timer = vmx_set_hv_timer,
-	.cancel_hv_timer = vmx_cancel_hv_timer,
+	.set_hv_timer = vt_set_hv_timer,
+	.cancel_hv_timer = vt_cancel_hv_timer,
 #endif
 
 	.setup_mce = vmx_setup_mce,
@@ -628,8 +1005,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.leave_smm = vt_leave_smm,
 	.enable_smi_window = vt_enable_smi_window,
 
-	.can_emulate_instruction = vmx_can_emulate_instruction,
-	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
+	.can_emulate_instruction = vt_can_emulate_instruction,
+	.apic_init_signal_blocked = vt_apic_init_signal_blocked,
 	.migrate_timers = vmx_migrate_timers,
 
 	.msr_filter_changed = vmx_msr_filter_changed,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d81a0a832ce2..10207afddec8 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -3,6 +3,7 @@
 #include <linux/mmu_context.h>
 
 #include <asm/fpu/xcr.h>
+#include <asm/virtext.h>
 #include <asm/tdx.h>
 
 #include "capabilities.h"
@@ -609,8 +610,15 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.tsc_offset = to_kvm_tdx(vcpu->kvm)->tsc_offset;
 	vcpu->arch.l1_tsc_offset = vcpu->arch.tsc_offset;
-	vcpu->arch.guest_state_protected =
-		!(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
+	/*
+	 * TODO: support off-TD debug.  If TD DEBUG is enabled, guest state
+	 * can be accessed: guest_state_protected = false, and the kvm ioctls
+	 * to access CPU state should be usable by the user space VMM (e.g.
+	 * qemu).
+	 *
+	 * vcpu->arch.guest_state_protected =
+	 *	!(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
+	 */
+	vcpu->arch.guest_state_protected = true;
 
 	tdx->pi_desc.nv = POSTED_INTR_VECTOR;
 	tdx->pi_desc.sn = 1;
@@ -1855,6 +1863,49 @@ void tdx_enable_smi_window(struct kvm_vcpu *vcpu)
 	vcpu->arch.smi_pending = false;
 }
 
+void tdx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+{
+	/* Only x2APIC mode is supported for TD. */
+	WARN_ON_ONCE(kvm_get_apic_mode(vcpu) != LAPIC_MODE_X2APIC);
+}
+
+int tdx_get_cpl(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
+void tdx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
+{
+	kvm_register_mark_available(vcpu, reg);
+	switch (reg) {
+	case VCPU_REGS_RSP:
+	case VCPU_REGS_RIP:
+	case VCPU_EXREG_PDPTR:
+	case VCPU_EXREG_CR0:
+	case VCPU_EXREG_CR3:
+	case VCPU_EXREG_CR4:
+		break;
+	default:
+		KVM_BUG_ON(1, vcpu->kvm);
+		break;
+	}
+}
+
+unsigned long tdx_get_rflags(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
+u64 tdx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
+{
+	return 0;
+}
+
+void tdx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
+{
+	memset(var, 0, sizeof(*var));
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 1c4672037a2e..2e204002efb1 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -163,6 +163,14 @@ int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
 int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate);
 int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate);
 void tdx_enable_smi_window(struct kvm_vcpu *vcpu);
+void tdx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+
+int tdx_get_cpl(struct kvm_vcpu *vcpu);
+void tdx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+unsigned long tdx_get_rflags(struct kvm_vcpu *vcpu);
+bool tdx_is_emulated_msr(u32 index, bool write);
+u64 tdx_get_segment_base(struct kvm_vcpu *vcpu, int seg);
+void tdx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -203,10 +211,19 @@ static inline void tdx_get_exit_info(
 static inline bool tdx_is_emulated_msr(u32 index, bool write) { return false; }
 static inline int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
 static inline int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
+
 static inline int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { return false; }
 static inline int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate) { return 0; }
 static inline int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) { return 0; }
 static inline void tdx_enable_smi_window(struct kvm_vcpu *vcpu) {}
+static inline void tdx_set_virtual_apic_mode(struct kvm_vcpu *vcpu) {}
+
+static inline int tdx_get_cpl(struct kvm_vcpu *vcpu) { return 0; }
+static inline void tdx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg) {}
+static inline unsigned long tdx_get_rflags(struct kvm_vcpu *vcpu) { return 0; }
+static inline u64 tdx_get_segment_base(struct kvm_vcpu *vcpu, int seg) { return 0;}
+static inline void tdx_get_segment(
+	struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 101/102] Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX)
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (99 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 100/102] KVM: TDX: Add methods to ignore accesses to CPU state isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-07-08  1:34   ` Kai Huang
  2022-06-27 21:54 ` [PATCH v7 102/102] KVM: x86: design documentation on TDX support of x86 KVM TDP MMU isaku.yamahata
                   ` (2 subsequent siblings)
  103 siblings, 1 reply; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Add documentation for Intel Trust Domain Extensions (TDX) support.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/api.rst       |   9 +-
 Documentation/virt/kvm/intel-tdx.rst | 381 +++++++++++++++++++++++++++
 2 files changed, 389 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/virt/kvm/intel-tdx.rst

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index b9ab598883b2..653ba93452f3 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1402,6 +1402,9 @@ It is recommended to use this API instead of the KVM_SET_MEMORY_REGION ioctl.
 The KVM_SET_MEMORY_REGION does not allow fine grained control over memory
 allocation and is deprecated.
 
+For a TDX guest, deleting or moving a memory region loses the guest memory
+contents.  Read-only regions aren't supported.  Only as-id 0 is supported.
+
 
 4.36 KVM_SET_TSS_ADDR
 ---------------------
@@ -4688,7 +4691,7 @@ H_GET_CPU_CHARACTERISTICS hypercall.
 
 :Capability: basic
 :Architectures: x86
-:Type: vm
+:Type: vm ioctl, vcpu ioctl
 :Parameters: an opaque platform specific structure (in/out)
 :Returns: 0 on success; -1 on error
 
@@ -4700,6 +4703,10 @@ Currently, this ioctl is used for issuing Secure Encrypted Virtualization
 (SEV) commands on AMD Processors. The SEV commands are defined in
 Documentation/virt/kvm/amd-memory-encryption.rst.
 
+Currently, this ioctl is also used for issuing Trust Domain Extensions
+(TDX) commands on Intel Processors.  The TDX commands are defined in
+Documentation/virt/kvm/intel-tdx.rst.
+
 4.111 KVM_MEMORY_ENCRYPT_REG_REGION
 -----------------------------------
 
diff --git a/Documentation/virt/kvm/intel-tdx.rst b/Documentation/virt/kvm/intel-tdx.rst
new file mode 100644
index 000000000000..3fae2cf9e534
--- /dev/null
+++ b/Documentation/virt/kvm/intel-tdx.rst
@@ -0,0 +1,381 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===================================
+Intel Trust Domain Extensions (TDX)
+===================================
+
+Overview
+========
+TDX stands for Trust Domain Extensions, which isolates VMs from
+the virtual-machine manager (VMM)/hypervisor and any other software on
+the platform. [1]
+For details, see the specifications [2], [3], [4], [5], [6] and [7].
+
+
+API description
+===============
+
+KVM_MEMORY_ENCRYPT_OP
+---------------------
+:Type: vm ioctl, vcpu ioctl
+
+For TDX operations, KVM_MEMORY_ENCRYPT_OP is re-purposed to be a generic
+ioctl with TDX-specific sub-ioctl commands.
+
+::
+
+  /* Trust Domain eXtension sub-ioctl() commands. */
+  enum kvm_tdx_cmd_id {
+          KVM_TDX_CAPABILITIES = 0,
+          KVM_TDX_INIT_VM,
+          KVM_TDX_INIT_VCPU,
+          KVM_TDX_INIT_MEM_REGION,
+          KVM_TDX_FINALIZE_VM,
+
+          KVM_TDX_CMD_NR_MAX,
+  };
+
+  struct kvm_tdx_cmd {
+        /* enum kvm_tdx_cmd_id */
+        __u32 id;
+        /* flags for sub-command. If the sub-command doesn't use this, set zero. */
+        __u32 flags;
+        /*
+         * data for each sub-command. An immediate or a pointer to the actual
+         * data in process virtual address.  If sub-command doesn't use it,
+         * set zero.
+         */
+        __u64 data;
+        /*
+         * Auxiliary error code.  The sub-command may return TDX SEAMCALL
+         * status code in addition to -Exxx.
+         * Defined for consistency with struct kvm_sev_cmd.
+         */
+        __u64 error;
+        /* Reserved: Defined for consistency with struct kvm_sev_cmd. */
+        __u64 unused;
+  };
+
+KVM_TDX_CAPABILITIES
+--------------------
+:Type: vm ioctl
+
+A subset of TDSYSINFO_STRUCT retrieved by the TDH.SYS.INFO TDX SEAM call
+will be returned, which describes the Intel TDX module.
+
+- id: KVM_TDX_CAPABILITIES
+- flags: must be 0
+- data: pointer to struct kvm_tdx_capabilities
+- error: must be 0
+- unused: must be 0
+
+::
+
+  struct kvm_tdx_cpuid_config {
+          __u32 leaf;
+          __u32 sub_leaf;
+          __u32 eax;
+          __u32 ebx;
+          __u32 ecx;
+          __u32 edx;
+  };
+
+  struct kvm_tdx_capabilities {
+          __u64 attrs_fixed0;
+          __u64 attrs_fixed1;
+          __u64 xfam_fixed0;
+          __u64 xfam_fixed1;
+
+          __u32 nr_cpuid_configs;
+          struct kvm_tdx_cpuid_config cpuid_configs[0];
+  };
+
+
+KVM_TDX_INIT_VM
+---------------
+:Type: vm ioctl
+
+Does additional VM initialization specific to TDX, which corresponds to
+the TDH.MNG.INIT TDX SEAM call.
+
+- id: KVM_TDX_INIT_VM
+- flags: must be 0
+- data: pointer to struct kvm_tdx_init_vm
+- error: must be 0
+- unused: must be 0
+
+::
+
+  struct kvm_tdx_init_vm {
+          __u32 max_vcpus;
+          __u32 reserved;
+          __u64 attributes;
+          __u64 cpuid;  /* pointer to struct kvm_cpuid2 */
+          __u64 mrconfigid[6];          /* sha384 digest */
+          __u64 mrowner[6];             /* sha384 digest */
+          __u64 mrownerconfig[6];       /* sha384 digest */
+          __u64 reserved[43];           /* must be zero for future extensibility */
+  };
+
+
+KVM_TDX_INIT_VCPU
+-----------------
+:Type: vcpu ioctl
+
+Does additional VCPU initialization specific to TDX which corresponds to
+TDH.VP.INIT TDX SEAM call.
+
+- id: KVM_TDX_INIT_VCPU
+- flags: must be 0
+- data: initial value of the guest TD VCPU RCX
+- error: must be 0
+- unused: must be 0
+
+KVM_TDX_INIT_MEM_REGION
+-----------------------
+:Type: vm ioctl
+
+Encrypt a contiguous memory region, which corresponds to the TDH.MEM.PAGE.ADD
+TDX SEAM call.
+If the KVM_TDX_MEASURE_MEMORY_REGION flag is specified, it also extends the
+measurement, which corresponds to the TDH.MR.EXTEND TDX SEAM call.
+
+- id: KVM_TDX_INIT_MEM_REGION
+- flags: currently only KVM_TDX_MEASURE_MEMORY_REGION is defined
+- data: pointer to struct kvm_tdx_init_mem_region
+- error: must be 0
+- unused: must be 0
+
+::
+
+  #define KVM_TDX_MEASURE_MEMORY_REGION   (1UL << 0)
+
+  struct kvm_tdx_init_mem_region {
+          __u64 source_addr;
+          __u64 gpa;
+          __u64 nr_pages;
+  };
+
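+As a hedged illustration, using the hypothetical tdx_vm_ioctl() helper sketched
+in the KVM_MEMORY_ENCRYPT_OP section (tdvf_buf, tdvf_gpa and tdvf_size are
+placeholders), adding and measuring the TDVF image could look like:
+
+::
+
+  struct kvm_tdx_init_mem_region region = {
+          .source_addr = (__u64)(unsigned long)tdvf_buf, /* page-aligned HVA */
+          .gpa = tdvf_gpa,                               /* private GPA */
+          .nr_pages = tdvf_size / 4096,
+  };
+
+  tdx_vm_ioctl(vm_fd, KVM_TDX_INIT_MEM_REGION,
+               KVM_TDX_MEASURE_MEMORY_REGION, &region);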
+
+KVM_TDX_FINALIZE_VM
+-------------------
+:Type: vm ioctl
+
+Complete the measurement of the initial TD contents and mark the TD ready to
+run, which corresponds to the TDH.MR.FINALIZE TDX SEAM call.
+
+- id: KVM_TDX_FINALIZE_VM
+- flags: must be 0
+- data: must be 0
+- error: must be 0
+- unused: must be 0
+
+KVM TDX creation flow
+=====================
+In addition to the normal KVM flow, new TDX ioctls need to be called.  The
+control flow looks as follows; a condensed userspace sketch follows the list.
+
+#. system wide capability check
+  * KVM_CAP_VM_TYPES: check if the VM type is supported and if
+    KVM_X86_TDX_VM is supported.
+
+#. creating VM
+  * KVM_CREATE_VM
+  * KVM_TDX_CAPABILITIES: query if TDX is supported on the platform.
+  * KVM_TDX_INIT_VM: pass TDX specific VM parameters.
+
+#. creating VCPU
+  * KVM_CREATE_VCPU
+  * KVM_TDX_INIT_VCPU: pass TDX specific VCPU parameters.
+
+#. initializing guest memory
+  * allocate guest memory and initialize pages, same as the normal KVM case.
+    In the TDX case, additionally parse and load TDVF into guest memory.
+  * KVM_TDX_INIT_MEM_REGION to add and measure guest pages.
+    Pages that already have contents (e.g. TDVF) must be added this way;
+    otherwise their contents are lost and the guest sees zero pages.
+  * KVM_TDX_FINALIZE_VM: finalize the VM and its measurement.
+    This must be done after KVM_TDX_INIT_MEM_REGION.
+
+#. run vcpu
+
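+A condensed userspace sketch of the flow above follows.  Error handling,
+variable declarations and the TDVF loading details are omitted, and
+tdx_vm_ioctl()/tdx_vcpu_ioctl() are illustrative wrappers around
+KVM_MEMORY_ENCRYPT_OP, not part of this series.
+
+::
+
+  vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_TDX_VM);
+  tdx_vm_ioctl(vm_fd, KVM_TDX_CAPABILITIES, 0, &caps);
+  tdx_vm_ioctl(vm_fd, KVM_TDX_INIT_VM, 0, &init_vm);
+
+  vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0);
+  /* data is an immediate value here: the initial guest RCX. */
+  tdx_vcpu_ioctl(vcpu_fd, KVM_TDX_INIT_VCPU, 0, initial_rcx);
+
+  /* Load TDVF etc. into guest memory first, then add and measure it. */
+  tdx_vm_ioctl(vm_fd, KVM_TDX_INIT_MEM_REGION,
+               KVM_TDX_MEASURE_MEMORY_REGION, &region);
+  tdx_vm_ioctl(vm_fd, KVM_TDX_FINALIZE_VM, 0, NULL);
+
+  ioctl(vcpu_fd, KVM_RUN, 0);
+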
+Design discussion
+=================
+
+Coexistence of normal(VMX) VM and TD VM
+---------------------------------------
+It's required to allow both legacy (normal VMX) VMs and new TD VMs to
+coexist.  Otherwise the benefits of VM flexibility would be eliminated.
+The main issue is that the logic of the kvm_x86_ops callbacks for
+TDX is different from VMX, while kvm_x86_ops is a single global
+variable, neither per-VM nor per-vcpu.
+
+Several points to be considered:
+  . No or minimal overhead when TDX is disabled (CONFIG_INTEL_TDX_HOST=n).
+  . Avoid the overhead of indirect calls via function pointers.
+  . Contain the changes under the arch/x86/kvm/vmx directory and share logic
+    with VMX for maintenance.
+    Even though the way to operate on a VM (VMX instructions vs TDX
+    SEAM calls) is different, the basic idea remains the same, so much
+    of the logic can be shared.
+  . Future maintenance
+    No huge change of kvm_x86_ops is expected in the (near) future, so
+    a centralized file is acceptable.
+
+- Wrapping kvm x86_ops: The current choice
+  Introduce a dedicated file, arch/x86/kvm/vmx/main.c (the name main.c is
+  just chosen to show the main entry points for callbacks), and wrapper
+  functions around all the callbacks with
+  "if (is-tdx) tdx-callback() else vmx-callback()" (sketched below).
+
+  Pros:
+  - No major change in common x86 KVM code. The change is (mostly)
+    contained under arch/x86/kvm/vmx/.
+  - When TDX is disabled (CONFIG_INTEL_TDX_HOST=n), the overhead is
+    optimized out.
+  - Micro optimization by avoiding function pointer calls.
+  Cons:
+  - A lot of boilerplate in arch/x86/kvm/vmx/main.c.
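+
+  A hedged sketch of the wrapper shape (the callback and the TDX helper name
+  are illustrative)::
+
+    static int vt_vcpu_create(struct kvm_vcpu *vcpu)
+    {
+            if (is_td_vcpu(vcpu))
+                    return tdx_vcpu_create(vcpu);
+
+            return vmx_vcpu_create(vcpu);
+    }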
+
+Alternative:
+- Introduce another callback layer under arch/x86/kvm/vmx.
+  Pros:
+  - No major change in common x86 KVM code. The change is (mostly)
+    contained under arch/x86/kvm/vmx/.
+  - clear separation on callbacks.
+  Cons:
+  - overhead in VMX even when TDX is disabled (CONFIG_INTEL_TDX_HOST=n).
+
+- Allow per-VM kvm_x86_ops callbacks instead of global kvm_x86_ops
+  Pros:
+  - clear separation on callbacks.
+  Cons:
+  - Big change in common x86 code.
+  - overhead in common code even when TDX is
+    disabled (CONFIG_INTEL_TDX_HOST=n).
+
+- Introduce new directory arch/x86/kvm/tdx
+  Pros:
+  - It clarifies that TDX is different from VMX.
+  Cons:
+  - Given how much logic is shared with VMX, a separate directory
+    complicates the code sharing.
+
+KVM MMU Changes
+---------------
+KVM MMU needs to be enhanced to handle Secure/Shared-EPT.  The
+high-level execution flow is mostly the same as the normal EPT case:
+EPT violation/misconfiguration -> invoke TDP fault handler ->
+resolve TDP fault -> resume execution (or emulate MMIO).
+The difference is that the S-EPT is operated on (read/written) via
+expensive TDX SEAM calls instead of directly reading/writing the EPT entry.
+One bit of the GPA (bit 51 or 47) is repurposed to mean shared with the
+host (if set to 1) or private to the TD (if cleared to 0).
+
+- The current implementation
+  . Reuse the existing MMU code with minimal updates, because the
+    execution flow is mostly the same.  The additional operation, the TDX
+    call for S-EPT, is handled by adding hooks for it to kvm_x86_ops.
+  . For performance, minimize the TDX SEAM calls that operate on S-EPT.
+    When getting the corresponding S-EPT pages/entry from a faulting GPA,
+    don't use a TDX SEAM call to read the S-EPT entry.  Instead, create a
+    shadow copy in host memory.
+    Repurpose the existing kvm_mmu_page as the shadow copy of S-EPT and
+    associate the S-EPT with it.
+  . Treat the share bit as an attribute.  Mask/unmask the bit where
+    necessary to keep the existing traversing code working.
+    Introduce kvm.arch.gfn_shared_mask and use "if (gfn_shared_mask)"
+    for the special case (a sketch of the helpers follows the Pros/Cons
+    below):
+    = 0 : for the non-TDX case
+    = bit 51 or 47 set for the TDX case.
+
+  Pros:
+  - Large code reuse with minimal new hooks.
+  - The execution path is the same.
+  Cons:
+  - Complicates the existing code.
+  - Repurposing kvm_mmu_page as a shadow of Secure-EPT can be confusing.
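+
+  A minimal sketch of the share-bit handling mentioned above, assuming
+  kvm.arch.gfn_shared_mask; the helper names are illustrative::
+
+    static inline gfn_t kvm_gfn_shared(const struct kvm *kvm, gfn_t gfn)
+    {
+            return gfn | kvm->arch.gfn_shared_mask;
+    }
+
+    static inline gfn_t kvm_gfn_private(const struct kvm *kvm, gfn_t gfn)
+    {
+            return gfn & ~kvm->arch.gfn_shared_mask;
+    }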
+
+Alternative:
+- Replace direct read/write on EPT entry with TDX-SEAM call by
+  introducing callbacks on EPT entry.
+  Pros:
+  - Straightforward.
+  Cons:
+  - Too many touch points.
+  - Too slow due to TDX-SEAM calls.
+  - Overhead even when TDX is disabled (CONFIG_INTEL_TDX_HOST=n).
+
+- Sprinkle "if (is-tdx)" for TDX special case
+  Pros:
+  - Straightforward.
+  Cons:
+  - The result is non-generic and ugly.
+  - Put TDX specific logic into common KVM MMU code.
+
+New KVM API, ioctl (sub)command, to manage TD VMs
+-------------------------------------------------
+Additional KVM APIs are needed to control TD VMs.  The operations on TD
+VMs are specific to TDX.
+
+- Piggyback and repurpose KVM_MEMORY_ENCRYPT_OP
+  Although not every operation is memory encryption, repurpose it to carry
+  TDX specific ioctls.
+  Pros:
+  - No major change in common x86 KVM code.
+  Cons:
+  - The operations aren't actually memory encryption, but operations
+    on TD VMs.
+
+Alternative:
+- Introduce new ioctl for guest protection like
+  KVM_GUEST_PROTECTION_OP and introduce subcommand for TDX.
+  Pros:
+  - Clean name.
+  Cons:
+  - One more new ioctl for guest protection.
+  - Confusion between KVM_MEMORY_ENCRYPT_OP and KVM_GUEST_PROTECTION_OP.
+
+- Rename KVM_MEMORY_ENCRYPT_OP to KVM_GUEST_PROTECTION_OP and keep
+  KVM_MEMORY_ENCRYPT_OP with the same value for uapi compatibility, i.e.
+  "#define KVM_MEMORY_ENCRYPT_OP KVM_GUEST_PROTECTION_OP".
+  Pros:
+  - No new ioctl, and a more suitable name.
+  Cons:
+  - May confuse existing user programs.
+
+
+References
+==========
+
+.. [1] TDX specification
+   https://software.intel.com/content/www/us/en/develop/articles/intel-trust-domain-extensions.html
+.. [2] Intel Trust Domain Extensions (Intel TDX)
+   https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-whitepaper-final9-17.pdf
+.. [3] Intel CPU Architectural Extensions Specification
+   https://software.intel.com/content/dam/develop/external/us/en/documents/intel-tdx-cpu-architectural-specification.pdf
+.. [4] Intel TDX Module 1.0 EAS
+   https://software.intel.com/content/dam/develop/external/us/en/documents/intel-tdx-module-1eas.pdf
+.. [5] Intel TDX Loader Interface Specification
+   https://software.intel.com/content/dam/develop/external/us/en/documents/intel-tdx-seamldr-interface-specification.pdf
+.. [6] Intel TDX Guest-Hypervisor Communication Interface
+   https://software.intel.com/content/dam/develop/external/us/en/documents/intel-tdx-guest-hypervisor-communication-interface.pdf
+.. [7] Intel TDX Virtual Firmware Design Guide
+   https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.
+.. [8] intel public github
+   kvm TDX branch: https://github.com/intel/tdx/tree/kvm
+   TDX guest branch: https://github.com/intel/tdx/tree/guest
+.. [9] tdvf
+    https://github.com/tianocore/edk2-staging/tree/TDVF
+.. [10] KVM forum 2020: Intel Virtualization Technology Extensions to
+     Enable Hardware Isolated VMs
+     https://osseu2020.sched.com/event/eDzm/intel-virtualization-technology-extensions-to-enable-hardware-isolated-vms-sean-christopherson-intel
+.. [11] Linux Security Summit EU 2020:
+     Architectural Extensions for Hardware Virtual Machine Isolation
+     to Advance Confidential Computing in Public Clouds - Ravi Sahita
+     & Jun Nakajima, Intel Corporation
+     https://osseu2020.sched.com/event/eDOx/architectural-extensions-for-hardware-virtual-machine-isolation-to-advance-confidential-computing-in-public-clouds-ravi-sahita-jun-nakajima-intel-corporation
+.. [12] [RFCv2,00/16] KVM protected memory extension
+     https://lkml.org/lkml/2020/10/20/66
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* [PATCH v7 102/102] KVM: x86: design documentation on TDX support of x86 KVM TDP MMU
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (100 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 101/102] Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX) isaku.yamahata
@ 2022-06-27 21:54 ` isaku.yamahata
  2022-07-11 15:17 ` [PATCH v7 000/102] KVM TDX basic feature support Isaku Yamahata
  2022-07-14  1:03 ` Sean Christopherson
  103 siblings, 0 replies; 219+ messages in thread
From: isaku.yamahata @ 2022-06-27 21:54 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini

From: Isaku Yamahata <isaku.yamahata@intel.com>

Add a high level design document on TDX changes to TDP MMU.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 Documentation/virt/kvm/tdx-tdp-mmu.rst | 466 +++++++++++++++++++++++++
 1 file changed, 466 insertions(+)
 create mode 100644 Documentation/virt/kvm/tdx-tdp-mmu.rst

diff --git a/Documentation/virt/kvm/tdx-tdp-mmu.rst b/Documentation/virt/kvm/tdx-tdp-mmu.rst
new file mode 100644
index 000000000000..6d63bb75f785
--- /dev/null
+++ b/Documentation/virt/kvm/tdx-tdp-mmu.rst
@@ -0,0 +1,466 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Design of TDP MMU for TDX support
+=================================
+This document describes a (high level) design for TDX support in the TDP MMU of
+x86 KVM.
+
+In this document, we use "TD" or "guest TD" to differentiate it from the current
+"VM" (Virtual Machine), which is supported by KVM today.
+
+
+Background of TDX
+=================
+TD private memory is designed to hold TD private content, encrypted by the CPU
+using the TD ephemeral key.  An encryption engine holds a table of encryption
+keys, and an encryption key is selected for each memory transaction based on a
+Host Key Identifier (HKID).  By design, the host VMM does not have access to the
+encryption keys.
+
+In the first generation of MKTME, HKID is "stolen" from the physical address by
+allocating a configurable number of bits from the top of the physical address.
+The HKID space is partitioned into shared HKIDs for legacy MKTME accesses and
+private HKIDs for SEAM-mode-only accesses.  We use 0 for the shared HKID on the
+host so that MKTME can be opaque or bypassed on the host.
+
+During TDX non-root operation (i.e. guest TD), memory accesses can be qualified
+as either shared or private, based on the value of a new SHARED bit in the Guest
+Physical Address (GPA).  The CPU translates shared GPAs using the usual VMX EPT
+(Extended Page Table) or "Shared EPT" (in this document), which resides in the
+host VMM memory.  The Shared EPT is directly managed by the host VMM - the same
+as with the current VMX.  Since guest TDs usually require I/O and the data
+exchange needs to be done via shared memory, KVM needs to use the current
+EPT functionality even for TDs.
+
+The CPU translates private GPAs using a separate Secure EPT.  The Secure EPT
+pages are encrypted and integrity-protected with the TD's ephemeral private key.
+Secure EPT can be managed _indirectly_ by the host VMM, using the TDX interface
+functions (SEAMCALLs), and thus conceptually Secure EPT is a subset of EPT
+because not all functionalities are available.
+
+Since the execution of such interface functions takes a much longer time than
+accessing memory directly, in KVM we use the existing TDP code to mirror the
+Secure EPT for the TD.  There are at least two options today in terms of the
+timing for executing such SEAMCALLs:
+
+1. synchronous, i.e. while walking the TDP page tables, or
+2. post-walk, i.e. record what needs to be done to the real Secure EPT during
+   the walk, and execute SEAMCALLs later.
+
+Option 1 seems more intuitive and simpler, but the Secure EPT concurrency
+rules are different from those of the TDP MMU or EPT.  For example,
+MEM.SEPT.RD acquires shared access to the whole Secure EPT tree of the target
+TD.
+
+Secure EPT(SEPT) operations
+---------------------------
+Secure EPT is an Extended Page Table for GPA-to-HPA translation of TD private
+GPAs.  A Secure EPT is designed to be encrypted with the TD's ephemeral private
+key.  SEPT pages are allocated by the host VMM via Intel TDX functions, but their
+content is intended to be hidden and is not architectural.
+
+Unlike the conventional EPT, the CPU can't directly read/write its entries.
+Instead, the TDX SEAMCALL API is used.  Several SEAMCALLs correspond to
+operations on EPT entries; a wrapper sketch follows the list.
+
+* TDH.MEM.SEPT.ADD():
+  Add a secure EPT page to the secure EPT tree.  This corresponds to updating
+  a non-leaf EPT entry with the present bit set.
+
+* TDH.MEM.SEPT.REMOVE():
+  Remove the secure EPT page from the secure EPT tree.  There is no
+  corresponding EPT operation.
+
+* TDH.MEM.SEPT.RD():
+  Read the secure EPT entry.  This corresponds to reading the EPT entry as
+  memory.  Please note that this is much slower than direct memory reading.
+
+* TDH.MEM.PAGE.ADD() and TDH.MEM.PAGE.AUG():
+  Add a private page to the secure EPT tree.  This corresponds to updating a
+  leaf EPT entry with the present bit set.
+
+* TDH.MEM.PAGE.REMOVE():
+  Remove a private page from the secure EPT tree.  There is no corresponding
+  EPT operation.
+
+* TDH.MEM.RANGE.BLOCK():
+  This (mostly) corresponds to clearing the present bit of the leaf EPT entry.
+  Note that the private page is still linked in the secure EPT.  To remove it
+  from the secure EPT, TDH.MEM.SEPT.REMOVE() and TDH.MEM.PAGE.REMOVE() need to
+  be called.
+
+* TDH.MEM.TRACK():
+  Increment the TLB epoch counter. This (mostly) corresponds to EPT TLB flush.
+  Note that the private page is still linked in the secure EPT.  To remove it
+  from the secure EPT, tdh_mem_page_remove() needs to be called.
+
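+As a hedged illustration of how KVM drives such an operation, a wrapper could
+look like the following.  The seamcall() helper and the exact argument packing
+are assumptions, not the actual ABI.
+
+::
+
+  static u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t sept_page,
+                              struct tdx_module_output *out)
+  {
+          /* gpa and level select the S-EPT entry; sept_page backs the new table. */
+          return seamcall(TDH_MEM_SEPT_ADD, gpa | level, tdr, sept_page, 0, out);
+  }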
+
+Adding private page
+-------------------
+The procedure of populating the private page looks as follows.
+
+1. TDH.MEM.SEPT.ADD(512G level)
+2. TDH.MEM.SEPT.ADD(1G level)
+3. TDH.MEM.SEPT.ADD(2M level)
+4. TDH.MEM.PAGE.AUG(4K level)
+
+Those operations correspond to updating the EPT entries.
+
+Dropping private page and TLB shootdown
+---------------------------------------
+The procedure of dropping the private page looks as follows.
+
+1. TDH.MEM.RANGE.BLOCK(4K level)
+   This mostly corresponds to clearing the present bit in the EPT entry.  It
+   prevents (or blocks) new TLB entries from being created in the future.  Note
+   that the private page is still linked in the secure EPT tree and the existing
+   cache entries in the TLB aren't flushed.
+2. TDH.MEM.TRACK(range) and TLB shootdown (a sketch of this step follows the
+   list)
+   This mostly corresponds to the EPT TLB shootdown.  Because all vcpus share
+   the same Secure EPT, all vcpus need to flush the TLB.
+   * TDH.MEM.TRACK(range) by one vcpu.  It increments the global internal TLB
+     epoch counter.
+   * Send an IPI to the remote vcpus.
+   * The other vcpus exit from the guest TD to the VMM and then re-enter via
+     TDH.VP.ENTER().
+   * TDH.VP.ENTER() checks the TLB epoch counter and, if its TLB is stale,
+     flushes the TLB.
+   Note that only a single vcpu issues tdh_mem_track().
+   Note that the private page is still linked in the secure EPT tree, unlike the
+   conventional EPT.
+3. TDH.MEM.PAGE.PROMOTE(), TDH.MEM.PAGE.DEMOTE(), TDH.MEM.PAGE.RELOCATE(), or
+   TDH.MEM.PAGE.REMOVE()
+   There is no corresponding operation in the conventional EPT.
+   * When changing the page size (e.g. 4K <-> 2M), TDH.MEM.PAGE.PROMOTE() or
+     TDH.MEM.PAGE.DEMOTE() is used.  During those operations, the guest page is
+     kept referenced in the Secure EPT.
+   * When migrating a page, TDH.MEM.PAGE.RELOCATE() is used.  This requires both
+     the source page and the destination page.
+   * When destroying a TD, TDH.MEM.PAGE.REMOVE() removes the private page from
+     the secure EPT tree.  In this case a TLB shootdown is not needed because
+     vcpus don't run any more.
+
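+A hedged sketch of step 2 as one vcpu would drive it; the TD handle field and
+the tdh_mem_track() wrapper are assumptions based on this series.
+
+::
+
+  static void tdx_track(struct kvm *kvm)
+  {
+          /* One vcpu increments the TLB epoch for the whole TD. */
+          tdh_mem_track(to_kvm_tdx(kvm)->tdr.pa);
+
+          /*
+           * Kick every vcpu out of the guest; TDH.VP.ENTER flushes a stale
+           * TLB when it observes the new epoch on re-entry.
+           */
+          kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH);
+  }
+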
+The basic idea for TDX support
+==============================
+Because the shared EPT is the same as the existing EPT, use the existing logic
+for the shared EPT.  On the other hand, the secure EPT requires additional
+operations instead of directly reading/writing the EPT entry.
+
+On an EPT violation, the KVM MMU walks down the EPT tree from the root,
+determines the EPT entry to operate on, and updates the entry.  If necessary, a
+TLB shootdown is done.  Because it's very slow to directly walk the secure EPT
+via the TDX SEAMCALL TDH.MEM.SEPT.RD(), a mirror of the secure EPT is created
+and maintained.  Hooks are added to the KVM MMU to reuse the existing code.
+
+EPT violation on shared GPA
+---------------------------
+(1) EPT violation on shared GPA or zapping shared GPA
+    walk down shared EPT tree (the existing code)
+        |
+        |
+        V
+shared EPT tree (CPU refers.)
+(2) update the EPT entry. (the existing code)
+    TLB shootdown in the case of zapping.
+
+
+EPT violation on private GPA
+----------------------------
+(1) EPT violation on private GPA or zapping private GPA
+    walk down the mirror of secure EPT tree (mostly same as the existing code)
+        |
+        |
+        V
+mirror of secure EPT tree (KVM MMU software only. reuse of the existing code)
+(2) update the (mirrored) EPT entry. (mostly same as the existing code)
+(3) call the hooks with what EPT entry is changed
+        |
+        NEW: hooks in KVM MMU
+        |
+        V
+secure EPT root(CPU refers)
+(4) the TDX backend calls necessary TDX SEAMCALLs to update real secure EPT.
+
+The major modification is to add hooks for the TDX backend for additional
+operations and to pass down which EPT (shared EPT or private EPT) is used, and
+to adjust the behavior if we're operating on the private EPT.
+
+The following depicts the relationship.
+::
+
+                    KVM                             |       TDX module
+                     |                              |           |
+        -------------+----------                    |           |
+        |                      |                    |           |
+        V                      V                    |           |
+     shared GPA           private GPA               |           |
+  CPU shared EPT pointer  KVM private EPT pointer   |  CPU secure EPT pointer
+        |                      |                    |           |
+        |                      |                    |           |
+        V                      V                    |           V
+  shared EPT                private EPT<-------mirror----->Secure EPT
+        |                      |                    |           |
+        |                      \--------------------+------\    |
+        |                                           |      |    |
+        V                                           |      V    V
+  shared guest page                                 |    private guest page
+                                                    |
+                                                    |
+                              non-encrypted memory  |    encrypted memory
+                                                    |
+
+shared EPT: CPU and KVM walk with shared GPA
+            Maintained by the existing code
+private EPT: KVM walks with private GPA
+             Maintained by the twisted existing code
+secure EPT: CPU walks with private GPA.
+            Maintained by TDX module with TDX SEAMCALLs via hooks
+
+
+Tracking private EPT page
+=========================
+Shared EPT pages are managed by struct kvm_mmu_page.  They are linked in a list
+structure.  When necessary, the list is traversed to operate on them.  Private
+EPT pages have different characteristics.  For example, private pages can't be
+swapped out.  When shrinking memory, we'd like to traverse only shared EPT pages
+and skip private EPT pages.  Likewise, page migration isn't supported for
+private pages (yet).  Introduce an additional list so that shared EPT pages and
+private EPT pages are tracked independently.
+
+At the beginning of an EPT violation, the fault handler knows the faulting GPA,
+thus it knows which EPT to operate on, private or shared.  If it's the private
+EPT, an additional task is done, something like "if (private) { call a hook }".
+Since the fault handler has deep function calls, it's cumbersome to pass around
+the information of which EPT is being operated on.  Options to mitigate this are
+
+1. Pass the information as an argument for the function call.
+2. Record the information in struct kvm_mmu_page somehow.
+3. Record the information in vcpu structure.
+
+Option 2 was chosen because option 1 requires modifying all the functions in the
+call chain, which would badly affect the normal case.  Option 3 doesn't work
+well because, in some cases, we need to walk both the private and the shared EPT.
+
+The role of the EPT page can be utilized and one bit can be carved out from the
+unused bits in struct kvm_mmu_page_role.  The information is initialized when
+allocating the EPT page.  struct kvm_mmu_page is mostly available because
+we're operating on EPT pages.
+
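+A hedged sketch of option 2; the role bit name is illustrative::
+
+  static inline bool is_private_sp(const struct kvm_mmu_page *sp)
+  {
+          return sp->role.is_private;
+  }
+
+  static inline bool is_private_sptep(u64 *sptep)
+  {
+          return is_private_sp(sptep_to_sp(sptep));
+  }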
+
+The conversion of private GPA and shared GPA
+============================================
+A page of a given GPA can be assigned to only private GPA xor shared GPA at one
+time.  The GPA can't be accessed simultaneously via both private GPA and shared
+GPA.  On guest startup, all the GPAs are assigned as private.  The guest
+converts a range of GPAs from private (or shared) to shared (or private) via the
+MapGPA hypercall, which takes the start GPA and the size of the region.  If the
+given start GPA is shared, the VMM converts the region into shared (if it's
+already shared, nop).  If the start GPA is private, the VMM converts the region
+into private.  It implies the guest won't access the unmapped private (or
+shared) region after converting it to shared (or private).
+
+If the guest TD triggers an EPT violation on the already converted region, the
+access won't be allowed (loop in EPT violation) until other vcpu converts back
+the region.
+
+KVM MMU records which GPA is allowed to be accessed, private or shared.  It
+steals a software-usable SPTE bit, SPTE_SHARED_MASK, for this.  The bit is
+recorded in both the shared EPT and the mirror of the secure EPT.
+
+* If SPTE_SHARED_MASK is cleared in the shared EPT and the mirror of secure EPT:
+  Private GPA is allowed.  Shared GPA is not allowed.
+
+* If SPTE_SHARED_MASK is set in the shared EPT and the mirror of secure EPT:
+  Private GPA is not allowed.  Shared GPA is allowed.
+
+The default is that SPTE_SHARED_MASK is cleared so that the existing KVM
+MMU code (mostly) works.
+
+The reason the bit is recorded in both the shared and private EPT is to optimize
+the EPT violation path at the cost of penalizing the MapGPA hypercall path.
+
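+A hedged sketch of the bookkeeping; the bit position is an assumption, the
+actual mask being taken from the software-available SPTE bits::
+
+  /* One software-available bit stolen from the SPTE encoding. */
+  #define SPTE_SHARED_MASK        BIT_ULL(62)
+
+  static inline bool spte_shared_allowed(u64 spte)
+  {
+          return spte & SPTE_SHARED_MASK;
+  }
+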
+The state machine of EPT entry
+------------------------------
+(private EPT entry, shared EPT entry) =
+        (non-present, non-present):             private mapping is allowed
+        (present, non-present):                 private mapping is mapped
+        (non-present | SPTE_SHARED_MASK, non-present | SPTE_SHARED_MASK):
+                                                shared mapping is allowed
+        (non-present | SPTE_SHARED_MASK, present | SPTE_SHARED_MASK):
+                                                shared mapping is mapped
+        (present | SPTE_SHARED_MASK, any)       invalid combination
+
+* map_gpa(private GPA): Mark the region where private GPA is allowed (NEW)
+        private EPT entry: clear SPTE_SHARED_MASK
+          present: nop
+          non-present: nop
+          non-present | SPTE_SHARED_MASK -> non-present (clear SPTE_SHARED_MASK)
+
+        shared EPT entry: zap the entry, clear SPTE_SHARED_MASK
+          present: invalid
+          non-present -> non-present: nop
+          present | SPTE_SHARED_MASK -> non-present
+          non-present | SPTE_SHARED_MASK -> non-present
+
+* map_gpa(shared GPA): Mark the region where shared GPA is allowed (NEW)
+        private EPT entry: zap and set SPTE_SHARED_MASK
+          present     -> non-present | SPTE_SHARED_MASK
+          non-present -> non-present | SPTE_SHARED_MASK
+          non-present | SPTE_SHARED_MASK: nop
+
+        shared EPT entry: set SPTE_SHARED_MASK
+          present: invalid
+          non-present -> non-present | SPTE_SHARED_MASK
+          present | SPTE_SHARED_MASK -> present | SPTE_SHARED_MASK: nop
+          non-present | SPTE_SHARED_MASK -> non-present | SPTE_SHARED_MASK: nop
+
+* map(private GPA)
+        private EPT entry
+          present: nop
+          non-present -> present
+          non-present | SPTE_SHARED_MASK: nop. looping on EPT violation(NEW)
+
+        shared EPT entry: nop
+
+* map(shared GPA)
+        private EPT entry: nop
+
+        shared EPT entry
+          present: invalid
+          present | SPTE_SHARED_MASK: nop
+          non-present | SPTE_SHARED_MASK -> present | SPTE_SHARED_MASK
+          non-present: nop. looping on EPT violation(NEW)
+
+* zap(private GPA)
+        private EPT entry: zap the entry with keeping SPTE_SHARED_MASK
+          present -> non-present
+          present | SPTE_SHARED_MASK: invalid
+          non-present: nop as is_shadow_present_pte() is checked
+          non-present | SPTE_SHARED_MASK: nop as is_shadow_present_pte() is
+                                          checked
+
+        shared EPT entry: nop
+
+* zap(shared GPA)
+        private EPT entry: nop
+
+        shared EPT entry: zap
+          any -> non-present
+          present: invalid
+          present | SPTE_SHARED_MASK -> non-present | SPTE_SHARED_MASK
+          non-present: nop as is_shadow_present_pte() is checked
+          non-present | SPTE_SHARED_MASK: nop as is_shadow_present_pte() is
+                                          checked
+
+
+The original TDP MMU and race condition
+=======================================
+Because vcpus share the EPT, once an EPT entry is zapped, we need to shoot down
+the TLB: send an IPI to the remote vcpus so that they flush their own TLBs.
+Until the TLB shootdown is done, vcpus may still reference the zapped guest page.
+
+The TDP MMU uses the read lock of mmu_lock to mitigate vcpu contention.  When
+the read lock is held, it relies on atomic updates of the EPT entries.  (The
+legacy MMU, on the other hand, uses the write lock.)  When a vcpu is
+populating/zapping an EPT entry with the read lock held, another vcpu may be
+populating or zapping the same EPT entry at the same time.
+
+To avoid the race condition, the entry is frozen, i.e. the EPT entry is set to
+the special value REMOVED_SPTE, which clears the present bit.  Then, after the
+TLB shootdown, the EPT entry is updated to the final value.
+
+Concurrent zapping
+------------------
+1. read lock
+2. freeze the EPT entry (atomically set the value to REMOVED_SPTE; a sketch
+   follows this list)
+   If another vcpu froze the entry, restart the page fault.
+3. TLB shootdown
+   * send IPI to remote vcpus
+   * TLB flush (local and remote)
+   For each entry update, TLB shootdown is needed because of the
+   concurrency.
+4. atomically set the EPT entry to the final value
+5. read unlock
+
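+A minimal sketch of step 2, assuming REMOVED_SPTE as used by the existing TDP
+MMU; the helper name is illustrative::
+
+  static bool tdp_mmu_try_freeze_spte(u64 *sptep, u64 old_spte)
+  {
+          /* Fails, and the fault is retried, if another vcpu raced us. */
+          return cmpxchg64(sptep, old_spte, REMOVED_SPTE) == old_spte;
+  }
+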
+Concurrent populating
+---------------------
+In the case of populating the non-present EPT entry, atomically update the EPT
+entry.
+1. read lock
+2. atomically update the EPT entry
+   If another vcpu froze or updated the entry, restart the page fault.
+3. read unlock
+
+In the case of updating a present EPT entry (e.g. page migration), the
+operation is split into two: zapping the entry and then populating it.
+1. read lock
+2. zap the EPT entry, following the concurrent zapping case.
+3. populate the non-present EPT entry.
+4. read unlock
+
+Non-concurrent batched zapping
+------------------------------
+In some cases, zapping the ranges is done exclusively with a write lock held.
+In this case, the TLB shootdown is batched into one.
+
+1. write lock
+2. zap the EPT entries by traversing them
+3. TLB shootdown
+4. write unlock
+
+
+For Secure EPT, TDX SEAMCALLs are needed in addition to updating the mirrored
+EPT entry.
+
+TDX concurrent zapping
+----------------------
+Add a hook for TDX SEAMCALLs at the step of the TLB shootdown.
+
+1. read lock
+2. freeze the EPT entry (set the value to REMOVED_SPTE)
+3. TLB shootdown via a hook
+   * TDH.MEM.RANGE.BLOCK()
+   * TDH.MEM.TRACK()
+   * send an IPI to the remote vcpus
+4. set the EPT entry to the final value
+5. read unlock
+
+TDX concurrent populating
+-------------------------
+TDX SEAMCALLs are required in addition to operating on the mirrored EPT entry.
+The frozen entry is utilized, following the zapping case, to avoid the race
+condition.  A hook can be added.
+
+1. read lock
+2. freeze the EPT entry
+3. hook
+   * TDH_MEM_SEPT_ADD() for non-leaf or TDH_MEM_PAGE_AUG() for leaf.
+4. set the EPT entry to the final value
+5. read unlock
+
+Without freezing the entry, the following race can happen.  Suppose two vcpus
+are faulting on the same GPA and the 2M and 4K level entries aren't populated
+yet.
+
+* vcpu 1: update 2M level EPT entry
+* vcpu 2: update 4K level EPT entry
+* vcpu 2: TDX SEAMCALL to update 4K secure EPT entry => error
+* vcpu 1: TDX SEAMCALL to update 2M secure EPT entry
+
+
+TDX non-concurrent batched zapping
+----------------------------------
+For simplicity, the procedure of concurrent zapping is utilized.  The
+procedure can be optimized later.
+
+
+Co-existing with unmapping guest private memory
+===============================================
+TODO.  This needs to be addressed.
+
+
+Restrictions or future work
+===========================
+The following features aren't supported yet.
+
+* optimizing non-concurrent zap
+* Large page
+* Page migration
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs
  2022-06-27 21:53 ` [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs isaku.yamahata
@ 2022-06-28  2:52   ` Kai Huang
  2022-07-04  6:44     ` Kai Huang
  2022-07-12  1:01     ` Isaku Yamahata
  0 siblings, 2 replies; 219+ messages in thread
From: Kai Huang @ 2022-06-28  2:52 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson, Xiaoyao Li

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> Unlike default VMs, confidential VMs (Intel TDX and AMD SEV-ES) don't allow
> some operations (e.g., memory read/write, register state access, etc).
> 
> Introduce vm_type to track the type of the VM to x86 KVM.  Other arch KVMs
> already use vm_type, KVM_INIT_VM accepts vm_type, and x86 KVM callback
> vm_init accepts vm_type.  So follow them.  Further, a different policy can
> be made based on vm_type.  Define KVM_X86_DEFAULT_VM for default VM as
> default and define KVM_X86_TDX_VM for Intel TDX VM.  The wrapper function
> will be defined as "bool is_td(kvm) { return vm_type == VM_TYPE_TDX; }"
> 
> Add a capability KVM_CAP_VM_TYPES to effectively allow device model,
> e.g. qemu, to query what VM types are supported by KVM.  This (introduce a
> new capability and add vm_type) is chosen to align with other arch KVMs
> that have VM types already.  Other arch KVMs uses different name to query
> supported vm types and there is no common name for it, so new name was
> chosen.
> 
> Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  Documentation/virt/kvm/api.rst        | 21 +++++++++++++++++++++
>  arch/x86/include/asm/kvm-x86-ops.h    |  1 +
>  arch/x86/include/asm/kvm_host.h       |  2 ++
>  arch/x86/include/uapi/asm/kvm.h       |  3 +++
>  arch/x86/kvm/svm/svm.c                |  6 ++++++
>  arch/x86/kvm/vmx/main.c               |  1 +
>  arch/x86/kvm/vmx/tdx.h                |  6 +-----
>  arch/x86/kvm/vmx/vmx.c                |  5 +++++
>  arch/x86/kvm/vmx/x86_ops.h            |  1 +
>  arch/x86/kvm/x86.c                    |  9 ++++++++-
>  include/uapi/linux/kvm.h              |  1 +
>  tools/arch/x86/include/uapi/asm/kvm.h |  3 +++
>  tools/include/uapi/linux/kvm.h        |  1 +
>  13 files changed, 54 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 9cbbfdb663b6..b9ab598883b2 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -147,10 +147,31 @@ described as 'basic' will be available.
>  The new VM has no virtual cpus and no memory.
>  You probably want to use 0 as machine type.
>  
> +X86:
> +^^^^
> +
> +Supported vm type can be queried from KVM_CAP_VM_TYPES, which returns the
> +bitmap of supported vm types. The 1-setting of bit @n means vm type with
> +value @n is supported.


Perhaps I am missing something, but I don't understand how the below changes
(except the x86 part above) in Documentation are related to this patch.

> +
> +S390:
> +^^^^^
> +
>  In order to create user controlled virtual machines on S390, check
>  KVM_CAP_S390_UCONTROL and use the flag KVM_VM_S390_UCONTROL as
>  privileged user (CAP_SYS_ADMIN).
>  
> +MIPS:
> +^^^^^
> +
> +To use hardware assisted virtualization on MIPS (VZ ASE) rather than
> +the default trap & emulate implementation (which changes the virtual
> +memory layout to fit in user mode), check KVM_CAP_MIPS_VZ and use the
> +flag KVM_VM_MIPS_VZ.
> +
> +ARM64:
> +^^^^^^
> +
>  On arm64, the physical address size for a VM (IPA Size limit) is limited
>  to 40bits by default. The limit can be configured if the host supports the
>  extension KVM_CAP_ARM_VM_IPA_SIZE. When supported, use
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 75bc44aa8d51..a97cdb203a16 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -19,6 +19,7 @@ KVM_X86_OP(hardware_disable)
>  KVM_X86_OP(hardware_unsetup)
>  KVM_X86_OP(has_emulated_msr)
>  KVM_X86_OP(vcpu_after_set_cpuid)
> +KVM_X86_OP(is_vm_type_supported)
>  KVM_X86_OP(vm_init)
>  KVM_X86_OP_OPTIONAL(vm_destroy)
>  KVM_X86_OP_OPTIONAL_RET0(vcpu_precreate)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index aa11525500d3..089e0a4de926 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1141,6 +1141,7 @@ enum kvm_apicv_inhibit {
>  };
>  
>  struct kvm_arch {
> +	unsigned long vm_type;
>  	unsigned long n_used_mmu_pages;
>  	unsigned long n_requested_mmu_pages;
>  	unsigned long n_max_mmu_pages;
> @@ -1434,6 +1435,7 @@ struct kvm_x86_ops {
>  	bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
>  	void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
>  
> +	bool (*is_vm_type_supported)(unsigned long vm_type);
>  	unsigned int vm_size;
>  	int (*vm_init)(struct kvm *kvm);
>  	void (*vm_destroy)(struct kvm *kvm);
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index 50a4e787d5e6..9792ec1cc317 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -531,4 +531,7 @@ struct kvm_pmu_event_filter {
>  #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
>  #define   KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
>  
> +#define KVM_X86_DEFAULT_VM	0
> +#define KVM_X86_TDX_VM		1
> +
>  #endif /* _ASM_X86_KVM_H */
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 247c0ad458a0..815a07c594f1 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4685,6 +4685,11 @@ static void svm_vm_destroy(struct kvm *kvm)
>  	sev_vm_destroy(kvm);
>  }
>  
> +static bool svm_is_vm_type_supported(unsigned long type)
> +{
> +	return type == KVM_X86_DEFAULT_VM;
> +}
> +
>  static int svm_vm_init(struct kvm *kvm)
>  {
>  	if (!pause_filter_count || !pause_filter_thresh)
> @@ -4712,6 +4717,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>  	.vcpu_free = svm_vcpu_free,
>  	.vcpu_reset = svm_vcpu_reset,
>  
> +	.is_vm_type_supported = svm_is_vm_type_supported,
>  	.vm_size = sizeof(struct kvm_svm),
>  	.vm_init = svm_vm_init,
>  	.vm_destroy = svm_vm_destroy,
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index ac788af17d92..7be4941e4c4d 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -43,6 +43,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.hardware_disable = vmx_hardware_disable,
>  	.has_emulated_msr = vmx_has_emulated_msr,
>  
> +	.is_vm_type_supported = vmx_is_vm_type_supported,
>  	.vm_size = sizeof(struct kvm_vmx),
>  	.vm_init = vmx_vm_init,
>  	.vm_destroy = vmx_vm_destroy,
> diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> index 54d7a26ed9ee..2f43db5bbefb 100644
> --- a/arch/x86/kvm/vmx/tdx.h
> +++ b/arch/x86/kvm/vmx/tdx.h
> @@ -17,11 +17,7 @@ struct vcpu_tdx {
>  
>  static inline bool is_td(struct kvm *kvm)
>  {
> -	/*
> -	 * TDX VM type isn't defined yet.
> -	 * return kvm->arch.vm_type == KVM_X86_TDX_VM;
> -	 */
> -	return false;
> +	return kvm->arch.vm_type == KVM_X86_TDX_VM;
>  }

If you put this patch before patch:

	[PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure

Then you don't need to introduce this chunk in the above patch and then remove
it here, which is unnecessary and ugly.

And you can even introduce only KVM_X86_DEFAULT_VM, but not KVM_X86_TDX_VM, in
this patch, making this patch an infrastructural patch to report the VM type.
KVM_X86_TDX_VM can come with the patch where is_td() is introduced (your patch 9
above).

To me, that's a cleaner way to write the patches.  For instance, this
infrastructural patch could theoretically be used by other series that have a
similar thing to support, without carrying the is_td() and KVM_X86_TDX_VM burden
introduced here.

>  
>  static inline bool is_td_vcpu(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index b30d73d28e75..5ba62f8b42ce 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7281,6 +7281,11 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
>  	return err;
>  }
>  
> +bool vmx_is_vm_type_supported(unsigned long type)
> +{
> +	return type == KVM_X86_DEFAULT_VM;
> +}
> +
>  #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
>  #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
>  
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index 2abead2f60f7..a5e85eb4e183 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -25,6 +25,7 @@ void vmx_hardware_unsetup(void);
>  int vmx_check_processor_compatibility(void);
>  int vmx_hardware_enable(void);
>  void vmx_hardware_disable(void);
> +bool vmx_is_vm_type_supported(unsigned long type);
>  int vmx_vm_init(struct kvm *kvm);
>  void vmx_vm_destroy(struct kvm *kvm);
>  int vmx_vcpu_precreate(struct kvm *kvm);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fb7a33fbc136..96dc8f52a137 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4408,6 +4408,11 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_X86_NOTIFY_VMEXIT:
>  		r = kvm_caps.has_notify_vmexit;
>  		break;
> +	case KVM_CAP_VM_TYPES:
> +		r = BIT(KVM_X86_DEFAULT_VM);
> +		if (static_call(kvm_x86_is_vm_type_supported)(KVM_X86_TDX_VM))
> +			r |= BIT(KVM_X86_TDX_VM);
> +		break;
>  	default:
>  		break;
>  	}
> @@ -11858,9 +11863,11 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  	int ret;
>  	unsigned long flags;
>  
> -	if (type)
> +	if (!static_call(kvm_x86_is_vm_type_supported)(type))
>  		return -EINVAL;
>  
> +	kvm->arch.vm_type = type;
> +
>  	ret = kvm_page_track_init(kvm);
>  	if (ret)
>  		goto out;
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 7569b4ec199c..6d6785d2685f 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1166,6 +1166,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_S390_PROTECTED_DUMP 217
>  #define KVM_CAP_X86_TRIPLE_FAULT_EVENT 218
>  #define KVM_CAP_X86_NOTIFY_VMEXIT 219
> +#define KVM_CAP_VM_TYPES 220
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
> index bf6e96011dfe..71a5851475e7 100644
> --- a/tools/arch/x86/include/uapi/asm/kvm.h
> +++ b/tools/arch/x86/include/uapi/asm/kvm.h
> @@ -525,4 +525,7 @@ struct kvm_pmu_event_filter {
>  #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
>  #define   KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
>  
> +#define KVM_X86_DEFAULT_VM	0
> +#define KVM_X86_TDX_VM		1
> +
>  #endif /* _ASM_X86_KVM_H */
> diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
> index 6a184d260c7f..1e89b967e050 100644
> --- a/tools/include/uapi/linux/kvm.h
> +++ b/tools/include/uapi/linux/kvm.h
> @@ -1152,6 +1152,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_DISABLE_QUIRKS2 213
>  /* #define KVM_CAP_VM_TSC_CONTROL 214 */
>  #define KVM_CAP_SYSTEM_EVENT_DATA 215
> +#define KVM_CAP_VM_TYPES 220
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  

-- 
Thanks,
-Kai



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 007/102] KVM: Enable hardware before doing arch VM initialization
  2022-06-27 21:52 ` [PATCH v7 007/102] KVM: Enable hardware before doing arch VM initialization isaku.yamahata
@ 2022-06-28  2:59   ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-06-28  2:59 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> Swap the order of hardware_enable_all() and kvm_arch_init_vm() to
> accommodate Intel's TDX, which needs VMX to be enabled during VM init in
> order to make SEAMCALLs.
> 
> This also provides consistent ordering between kvm_create_vm() and
> kvm_destroy_vm() with respect to calling kvm_arch_destroy_vm() and
> hardware_disable_all().

Reviewed-by: Kai Huang <kai.huang@intel.com>

> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  virt/kvm/kvm_main.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index cee799265ce6..0acb0b6d1f82 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1138,19 +1138,19 @@ static struct kvm *kvm_create_vm(unsigned long type)
>  		rcu_assign_pointer(kvm->buses[i],
>  			kzalloc(sizeof(struct kvm_io_bus), GFP_KERNEL_ACCOUNT));
>  		if (!kvm->buses[i])
> -			goto out_err_no_arch_destroy_vm;
> +			goto out_err_no_disable;
>  	}
>  
>  	kvm->max_halt_poll_ns = halt_poll_ns;
>  
> -	r = kvm_arch_init_vm(kvm, type);
> -	if (r)
> -		goto out_err_no_arch_destroy_vm;
> -
>  	r = hardware_enable_all();
>  	if (r)
>  		goto out_err_no_disable;
>  
> +	r = kvm_arch_init_vm(kvm, type);
> +	if (r)
> +		goto out_err_no_arch_destroy_vm;
> +
>  #ifdef CONFIG_HAVE_KVM_IRQFD
>  	INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list);
>  #endif
> @@ -1188,10 +1188,10 @@ static struct kvm *kvm_create_vm(unsigned long type)
>  		mmu_notifier_unregister(&kvm->mmu_notifier, current->mm);
>  #endif
>  out_err_no_mmu_notifier:
> -	hardware_disable_all();
> -out_err_no_disable:
>  	kvm_arch_destroy_vm(kvm);
>  out_err_no_arch_destroy_vm:
> +	hardware_disable_all();
> +out_err_no_disable:
>  	WARN_ON_ONCE(!refcount_dec_and_test(&kvm->users_count));
>  	for (i = 0; i < KVM_NR_BUSES; i++)
>  		kfree(kvm_get_bus(kvm, i));


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization
  2022-06-27 21:52 ` [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization isaku.yamahata
@ 2022-06-28  3:43   ` Kai Huang
  2022-07-11 23:48     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-06-28  3:43 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> TDX requires several initialization steps for KVM to create guest TDs.
> Detect CPU feature, enable VMX (TDX is based on VMX), detect TDX module
> availability, and initialize TDX module.  This patch implements the first
> step to detect CPU feature.  Because VMX isn't enabled yet by VMXON
> instruction on KVM kernel module initialization, defer further
> initialization step until VMX is enabled by hardware_enable callback.

It's not clear why you need to split this into multiple patches.  If we put all
the initialization into one patch, it's much easier to see why those steps are
done the way they are.

> 
> Introduce a module parameter, enable_tdx, to explicitly enable TDX KVM
> support.  It's off by default to keep same behavior for those who don't use
> TDX.  Implement CPU feature detection at KVM kernel module initialization
> as hardware_setup callback to check if CPU feature is available and get
> some CPU parameters.
> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/Makefile      |  1 +
>  arch/x86/kvm/vmx/main.c    | 18 ++++++++++++++++-
>  arch/x86/kvm/vmx/tdx.c     | 40 ++++++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/x86_ops.h |  6 ++++++
>  4 files changed, 64 insertions(+), 1 deletion(-)
>  create mode 100644 arch/x86/kvm/vmx/tdx.c
> 
> diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
> index ee4d0999f20f..e2c05195cb95 100644
> --- a/arch/x86/kvm/Makefile
> +++ b/arch/x86/kvm/Makefile
> @@ -24,6 +24,7 @@ kvm-$(CONFIG_KVM_XEN)	+= xen.o
>  kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
>  			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o vmx/main.o
>  kvm-intel-$(CONFIG_X86_SGX_KVM)	+= vmx/sgx.o
> +kvm-intel-$(CONFIG_INTEL_TDX_HOST)	+= vmx/tdx.o
>  
>  kvm-amd-y		+= svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
>  
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 636768f5b985..fabf5f22c94f 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -6,6 +6,22 @@
>  #include "nested.h"
>  #include "pmu.h"
>  
> +static bool __read_mostly enable_tdx = IS_ENABLED(CONFIG_INTEL_TDX_HOST);
> +module_param_named(tdx, enable_tdx, bool, 0444);
> +
> +static __init int vt_hardware_setup(void)
> +{
> +	int ret;
> +
> +	ret = vmx_hardware_setup();
> +	if (ret)
> +		return ret;
> +
> +	enable_tdx = enable_tdx && !tdx_hardware_setup(&vt_x86_ops);
> +
> +	return 0;
> +}
> +
>  struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.name = "kvm_intel",
>  
> @@ -147,7 +163,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  struct kvm_x86_init_ops vt_init_ops __initdata = {
>  	.cpu_has_kvm_support = vmx_cpu_has_kvm_support,
>  	.disabled_by_bios = vmx_disabled_by_bios,
> -	.hardware_setup = vmx_hardware_setup,
> +	.hardware_setup = vt_hardware_setup,
>  	.handle_intel_pt_intr = NULL,
>  
>  	.runtime_ops = &vt_x86_ops,
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> new file mode 100644
> index 000000000000..c12e61cdddea
> --- /dev/null
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -0,0 +1,40 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/cpu.h>
> +
> +#include <asm/tdx.h>
> +
> +#include "capabilities.h"
> +#include "x86_ops.h"
> +
> +#undef pr_fmt
> +#define pr_fmt(fmt) "tdx: " fmt
> +
> +static u64 hkid_mask __ro_after_init;
> +static u8 hkid_start_pos __ro_after_init;
> +
> +int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
> +{
> +	u32 max_pa;
> +
> +	if (!enable_ept) {
> +		pr_warn("Cannot enable TDX with EPT disabled\n");
> +		return -EINVAL;
> +	}
> +
> +	if (!platform_tdx_enabled()) {
> +		pr_warn("Cannot enable TDX on TDX disabled platform\n");
> +		return -ENODEV;
> +	}
> +
> +	/* Safe guard check because TDX overrides tlb_remote_flush callback. */
> +	if (WARN_ON_ONCE(x86_ops->tlb_remote_flush))
> +		return -EIO;

To me it's better to move this chunk to the patch which actually implements how
to flush the TLB for private pages.  W/o some background, it's hard to tell why
TDX needs to override the tlb_remote_flush callback.  Otherwise it's quite hard
to review here.

For instance, even if it must be replaced, I am wondering why it must be empty
at the beginning?  For instance, assuming it has an original version which does
something:

	x86_ops->tlb_remote_flush = vmx_remote_flush;

Why can't it be replaced with vt_tlb_remote_flush():

	int vt_tlb_remote_flush(struct kvm *kvm)
	{
		if (is_td(kvm))
			return tdx_tlb_remote_flush(kvm);

		return vmx_remote_flush(kvm);
	}

?

> +
> +	max_pa = cpuid_eax(0x80000008) & 0xff;
> +	hkid_start_pos = boot_cpu_data.x86_phys_bits;
> +	hkid_mask = GENMASK_ULL(max_pa - 1, hkid_start_pos);
> +	pr_info("kvm: TDX is supported. hkid start pos %d mask 0x%llx\n",
> +		hkid_start_pos, hkid_mask);

Again, I think it's better to introduce those in the patch where you actually
need them.  It will be clearer if you introduce them together with the code that
actually uses them.

For instance, I think both hkid_start_pos and hkid_mask are not necessary.  If
you want to apply one keyid to an address, isn't below enough?

	u64 phys |= ((keyid) << boot_cpu_data.x86_phys_bits);

> +
> +	return 0;
> +}
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index 0f8a8547958f..0a5967a91e26 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -122,4 +122,10 @@ void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
>  #endif
>  void vmx_setup_mce(struct kvm_vcpu *vcpu);
>  
> +#ifdef CONFIG_INTEL_TDX_HOST
> +int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
> +#else
> +static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
> +#endif

I think if you introduce a "tdx_ops.h" or "tdx_x86_ops.h" and only include it
when CONFIG_INTEL_TDX_HOST is true, then you don't need those stubs.

Makes sense?

> +
>  #endif /* __KVM_X86_VMX_X86_OPS_H */


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions
  2022-06-27 21:53 ` [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions isaku.yamahata
@ 2022-06-28  3:53   ` Kai Huang
  2022-07-12  0:38     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-06-28  3:53 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> Currently, KVM VMX module initialization/exit functions are a single
> function each.  Refactor KVM VMX module initialization functions into KVM
> common part and VMX part so that TDX specific part can be added cleanly.
> Opportunistically refactor module exit function as well.
> 
> The current module initialization flow is, 1.) calculate the sizes of VMX
> kvm structure and VMX vcpu structure, 2.) hyper-v specific initialization
> 3.) report those sizes to the KVM common layer and KVM common
> initialization, and 4.) VMX specific system-wide initialization.
> 
> Refactor the KVM VMX module initialization function into functions with a
> wrapper function to separate VMX logic in vmx.c from a file, main.c, common
> among VMX and TDX.  We have a wrapper function, "vt_init() {vmx kvm/vcpu
> size calculation; hv_vp_assist_page_init(); kvm_init(); vmx_init(); }" in
> main.c, and hv_vp_assist_page_init() and vmx_init() in vmx.c.
> hv_vp_assist_page_init() initializes hyper-v specific assist pages,
> kvm_init() does system-wide initialization of the KVM common layer, and
> vmx_init() does system-wide VMX initialization.
> 
> The KVM architecture common layer allocates struct kvm with reported size
> for architecture-specific code.  The KVM VMX module defines its structure
> as struct vmx_kvm { struct kvm; VMX specific members;} and uses it as
> struct vmx kvm.  Similar for vcpu structure. TDX KVM patches will define
> TDX specific kvm and vcpu structures, add tdx_pre_kvm_init() to report the
> sizes of them to the KVM common layer.
> 
> The current module exit function is also a single function, a combination
> of VMX specific logic and common KVM logic.  Refactor it into VMX specific
> logic and KVM common logic.  This is just refactoring to keep the VMX
> specific logic in vmx.c from main.c.

This patch, coupled with the patch:

	KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX

Basically provides an infrastructure to support both VMX and TDX.  Why can't we
merge them into one patch?  What's the benefit of splitting them?

At least, why can't the two patches be put close together?

> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/vmx/main.c    |  38 +++++++++++++
>  arch/x86/kvm/vmx/vmx.c     | 106 ++++++++++++++++++-------------------
>  arch/x86/kvm/vmx/x86_ops.h |   6 +++
>  3 files changed, 95 insertions(+), 55 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index fabf5f22c94f..371dad728166 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -169,3 +169,41 @@ struct kvm_x86_init_ops vt_init_ops __initdata = {
>  	.runtime_ops = &vt_x86_ops,
>  	.pmu_ops = &intel_pmu_ops,
>  };
> +
> +static int __init vt_init(void)
> +{
> +	unsigned int vcpu_size, vcpu_align;
> +	int r;
> +
> +	vt_x86_ops.vm_size = sizeof(struct kvm_vmx);
> +	vcpu_size = sizeof(struct vcpu_vmx);
> +	vcpu_align = __alignof__(struct vcpu_vmx);
> +
> +	hv_vp_assist_page_init();
> +	vmx_init_early();
> +
> +	r = kvm_init(&vt_init_ops, vcpu_size, vcpu_align, THIS_MODULE);
> +	if (r)
> +		goto err_vmx_post_exit;
> +
> +	r = vmx_init();
> +	if (r)
> +		goto err_kvm_exit;
> +
> +	return 0;
> +
> +err_kvm_exit:
> +	kvm_exit();
> +err_vmx_post_exit:
> +	hv_vp_assist_page_exit();
> +	return r;
> +}
> +module_init(vt_init);
> +
> +static void vt_exit(void)
> +{
> +	vmx_exit();
> +	kvm_exit();
> +	hv_vp_assist_page_exit();
> +}
> +module_exit(vt_exit);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 286947c00638..b30d73d28e75 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8181,15 +8181,45 @@ static void vmx_cleanup_l1d_flush(void)
>  	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
>  }
>  
> -static void vmx_exit(void)
> +void __init hv_vp_assist_page_init(void)
>  {
> -#ifdef CONFIG_KEXEC_CORE
> -	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
> -	synchronize_rcu();
> -#endif
> +#if IS_ENABLED(CONFIG_HYPERV)
> +	/*
> +	 * Enlightened VMCS usage should be recommended and the host needs
> +	 * to support eVMCS v1 or above. We can also disable eVMCS support
> +	 * with module parameter.
> +	 */
> +	if (enlightened_vmcs &&
> +	    ms_hyperv.hints & HV_X64_ENLIGHTENED_VMCS_RECOMMENDED &&
> +	    (ms_hyperv.nested_features & HV_X64_ENLIGHTENED_VMCS_VERSION) >=
> +	    KVM_EVMCS_VERSION) {
> +		int cpu;
> +
> +		/* Check that we have assist pages on all online CPUs */
> +		for_each_online_cpu(cpu) {
> +			if (!hv_get_vp_assist_page(cpu)) {
> +				enlightened_vmcs = false;
> +				break;
> +			}
> +		}
>  
> -	kvm_exit();
> +		if (enlightened_vmcs) {
> +			pr_info("KVM: vmx: using Hyper-V Enlightened VMCS\n");
> +			static_branch_enable(&enable_evmcs);
> +		}
> +
> +		if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
> +			vt_x86_ops.enable_direct_tlbflush
> +				= hv_enable_direct_tlbflush;
>  
> +	} else {
> +		enlightened_vmcs = false;
> +	}
> +#endif
> +}
> +
> +void hv_vp_assist_page_exit(void)
> +{
>  #if IS_ENABLED(CONFIG_HYPERV)
>  	if (static_branch_unlikely(&enable_evmcs)) {
>  		int cpu;
> @@ -8213,14 +8243,10 @@ static void vmx_exit(void)
>  		static_branch_disable(&enable_evmcs);
>  	}
>  #endif
> -	vmx_cleanup_l1d_flush();
> -
> -	allow_smaller_maxphyaddr = false;
>  }
> -module_exit(vmx_exit);
>  
>  /* initialize before kvm_init() so that hardware_enable/disable() can work. */
> -static void __init vmx_init_early(void)
> +void __init vmx_init_early(void)
>  {
>  	int cpu;
>  
> @@ -8228,49 +8254,10 @@ static void __init vmx_init_early(void)
>  		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
>  }
>  
> -static int __init vmx_init(void)
> +int __init vmx_init(void)
>  {
>  	int r, cpu;
>  
> -#if IS_ENABLED(CONFIG_HYPERV)
> -	/*
> -	 * Enlightened VMCS usage should be recommended and the host needs
> -	 * to support eVMCS v1 or above. We can also disable eVMCS support
> -	 * with module parameter.
> -	 */
> -	if (enlightened_vmcs &&
> -	    ms_hyperv.hints & HV_X64_ENLIGHTENED_VMCS_RECOMMENDED &&
> -	    (ms_hyperv.nested_features & HV_X64_ENLIGHTENED_VMCS_VERSION) >=
> -	    KVM_EVMCS_VERSION) {
> -
> -		/* Check that we have assist pages on all online CPUs */
> -		for_each_online_cpu(cpu) {
> -			if (!hv_get_vp_assist_page(cpu)) {
> -				enlightened_vmcs = false;
> -				break;
> -			}
> -		}
> -
> -		if (enlightened_vmcs) {
> -			pr_info("KVM: vmx: using Hyper-V Enlightened VMCS\n");
> -			static_branch_enable(&enable_evmcs);
> -		}
> -
> -		if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
> -			vt_x86_ops.enable_direct_tlbflush
> -				= hv_enable_direct_tlbflush;
> -
> -	} else {
> -		enlightened_vmcs = false;
> -	}
> -#endif
> -
> -	vmx_init_early();
> -	r = kvm_init(&vt_init_ops, sizeof(struct vcpu_vmx),
> -		__alignof__(struct vcpu_vmx), THIS_MODULE);
> -	if (r)
> -		return r;
> -
>  	/*
>  	 * Must be called after kvm_init() so enable_ept is properly set
>  	 * up. Hand the parameter mitigation value in which was stored in
> @@ -8279,10 +8266,8 @@ static int __init vmx_init(void)
>  	 * mitigation mode.
>  	 */
>  	r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
> -	if (r) {
> -		vmx_exit();
> +	if (r)
>  		return r;
> -	}
>  
>  	for_each_possible_cpu(cpu)
>  		pi_init_cpu(cpu);
> @@ -8303,4 +8288,15 @@ static int __init vmx_init(void)
>  
>  	return 0;
>  }
> -module_init(vmx_init);
> +
> +void vmx_exit(void)
> +{
> +#ifdef CONFIG_KEXEC_CORE
> +	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
> +	synchronize_rcu();
> +#endif
> +
> +	vmx_cleanup_l1d_flush();
> +
> +	allow_smaller_maxphyaddr = false;
> +}
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index 0a5967a91e26..2abead2f60f7 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -8,6 +8,12 @@
>  
>  #include "x86.h"
>  
> +void __init hv_vp_assist_page_init(void);
> +void hv_vp_assist_page_exit(void);
> +void __init vmx_init_early(void);
> +int __init vmx_init(void);
> +void vmx_exit(void);
> +
>  __init int vmx_cpu_has_kvm_support(void);
>  __init int vmx_disabled_by_bios(void);
>  __init int vmx_hardware_setup(void);


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko
  2022-06-27 21:53 ` [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko isaku.yamahata
@ 2022-06-28  4:31   ` Kai Huang
  2022-07-12  0:46     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-06-28  4:31 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> To use TDX functionality, TDX module needs to be loaded and initialized.
> A TDX host patch series[1] implements the detection of the TDX module,
> tdx_detect() and its initialization, tdx_init().

"A TDX host patch series[1]" really isn't a commit message material.  You can
put it to the cover letter, but not here.

Also tdx_detect() is removed in latest code.

> 
> Call those functions, tdx_detect() and tdx_init(), when loading
> kvm_intel.ko.
> 
> Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
> while hardware is enabled, i.e. after hardware_enable_all() and before
> hardware_disable_all(), because TDX requires all present CPUs to enable
> VMX (VMXON).
> 
> [1] https://lore.kernel.org/lkml/cover.1649219184.git.kai.huang@intel.com/
> 
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/vmx/main.c         | 11 ++++++
>  arch/x86/kvm/vmx/tdx.c          | 60 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/tdx.h          |  4 +++
>  arch/x86/kvm/x86.c              |  8 +++++
>  5 files changed, 84 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 62dec97f6607..aa11525500d3 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1639,6 +1639,7 @@ struct kvm_x86_init_ops {
>  	int (*cpu_has_kvm_support)(void);
>  	int (*disabled_by_bios)(void);
>  	int (*hardware_setup)(void);
> +	int (*post_hardware_enable_setup)(void);
>  	unsigned int (*handle_intel_pt_intr)(void);
>  
>  	struct kvm_x86_ops *runtime_ops;
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 349534412216..ac788af17d92 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -23,6 +23,16 @@ static __init int vt_hardware_setup(void)
>  	return 0;
>  }
>  
> +static int __init vt_post_hardware_enable_setup(void)
> +{
> +	enable_tdx = enable_tdx && !tdx_module_setup();
> +	/*
> +	 * Even if it failed to initialize TDX module, conventional VMX is
> +	 * available.  Keep VMX usable.
> +	 */
> +	return 0;
> +}
> +
>  struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.name = "kvm_intel",
>  
> @@ -165,6 +175,7 @@ struct kvm_x86_init_ops vt_init_ops __initdata = {
>  	.cpu_has_kvm_support = vmx_cpu_has_kvm_support,
>  	.disabled_by_bios = vmx_disabled_by_bios,
>  	.hardware_setup = vt_hardware_setup,
> +	.post_hardware_enable_setup = vt_post_hardware_enable_setup,
>  	.handle_intel_pt_intr = NULL,
>  
>  	.runtime_ops = &vt_x86_ops,
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 2617389ef466..9cb36716b0f3 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -13,6 +13,66 @@
>  static u64 hkid_mask __ro_after_init;
>  static u8 hkid_start_pos __ro_after_init;
>  
> +#define TDX_MAX_NR_CPUID_CONFIGS					\
> +	((sizeof(struct tdsysinfo_struct) -				\
> +		offsetof(struct tdsysinfo_struct, cpuid_configs))	\
> +		/ sizeof(struct tdx_cpuid_config))
> +
> +struct tdx_capabilities {
> +	u8 tdcs_nr_pages;
> +	u8 tdvpx_nr_pages;
> +
> +	u64 attrs_fixed0;
> +	u64 attrs_fixed1;
> +	u64 xfam_fixed0;
> +	u64 xfam_fixed1;
> +
> +	u32 nr_cpuid_configs;
> +	struct tdx_cpuid_config cpuid_configs[TDX_MAX_NR_CPUID_CONFIGS];
> +};
> +
> +/* Capabilities of KVM + the TDX module. */
> +static struct tdx_capabilities tdx_caps;
> +
> +int __init tdx_module_setup(void)
> +{
> +	const struct tdsysinfo_struct *tdsysinfo;
> +	int ret = 0;
> +
> +	BUILD_BUG_ON(sizeof(*tdsysinfo) != 1024);
> +	BUILD_BUG_ON(TDX_MAX_NR_CPUID_CONFIGS != 37);
> +
> +	ret = tdx_init();
> +	if (ret) {
> +		pr_info("Failed to initialize TDX module.\n");
> +		return ret;
> +	}
> +
> +	tdsysinfo = tdx_get_sysinfo();
> +	if (tdsysinfo->num_cpuid_config > TDX_MAX_NR_CPUID_CONFIGS)
> +		return -EIO;
> +
> +	tdx_caps = (struct tdx_capabilities) {
> +		.tdcs_nr_pages = tdsysinfo->tdcs_base_size / PAGE_SIZE,
> +		/*
> +		 * TDVPS = TDVPR(4K page) + TDVPX(multiple 4K pages).
> +		 * -1 for TDVPR.
> +		 */
> +		.tdvpx_nr_pages = tdsysinfo->tdvps_base_size / PAGE_SIZE - 1,
> +		.attrs_fixed0 = tdsysinfo->attributes_fixed0,
> +		.attrs_fixed1 = tdsysinfo->attributes_fixed1,
> +		.xfam_fixed0 =	tdsysinfo->xfam_fixed0,
> +		.xfam_fixed1 = tdsysinfo->xfam_fixed1,
> +		.nr_cpuid_configs = tdsysinfo->num_cpuid_config,
> +	};
> +	if (!memcpy(tdx_caps.cpuid_configs, tdsysinfo->cpuid_configs,
> +			tdsysinfo->num_cpuid_config *
> +			sizeof(struct tdx_cpuid_config)))
> +		return -EIO;
> +
> +	return 0;
> +}
> +
>  int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>  {
>  	u32 max_pa;
> diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> index 060bf48ec3d6..54d7a26ed9ee 100644
> --- a/arch/x86/kvm/vmx/tdx.h
> +++ b/arch/x86/kvm/vmx/tdx.h
> @@ -3,6 +3,8 @@
>  #define __KVM_X86_TDX_H
>  
>  #ifdef CONFIG_INTEL_TDX_HOST
> +int tdx_module_setup(void);
> +
>  struct kvm_tdx {
>  	struct kvm kvm;
>  	/* TDX specific members follow. */
> @@ -37,6 +39,8 @@ static inline struct vcpu_tdx *to_tdx(struct kvm_vcpu *vcpu)
>  	return container_of(vcpu, struct vcpu_tdx, vcpu);
>  }
>  #else
> +static inline int tdx_module_setup(void) { return -ENODEV; };
> +
>  struct kvm_tdx {
>  	struct kvm kvm;
>  };
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 30af2bd0b4d5..fb7a33fbc136 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11792,6 +11792,14 @@ int kvm_arch_hardware_setup(void *opaque)
>  	return 0;
>  }
>  
> +int kvm_arch_post_hardware_enable_setup(void *opaque)
> +{
> +	struct kvm_x86_init_ops *ops = opaque;
> +	if (ops->post_hardware_enable_setup)
> +		return ops->post_hardware_enable_setup();
> +	return 0;
> +}
> +

Where is this kvm_arch_post_hardware_enable_setup() called?

Shouldn't the code change which calls it be part of this patch?
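
Presumably the common code needs a call site roughly like the below (just a
sketch of what I would expect, assuming it lands in the hardware-enable path in
virt/kvm/kvm_main.c; only the function name is taken from this patch, the rest
is made up):

	/* after hardware_enable_all() has done VMXON on all present CPUs */
	r = kvm_arch_post_hardware_enable_setup(opaque);
	if (r)
		goto out_free;	/* error label is hypothetical */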

>  void kvm_arch_hardware_unsetup(void)
>  {
>  	kvm_unregister_perf_callbacks();


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 025/102] KVM: TDX: initialize VM with TDX specific parameters
  2022-06-27 21:53 ` [PATCH v7 025/102] KVM: TDX: initialize VM with TDX specific parameters isaku.yamahata
@ 2022-06-28  8:30   ` Xiaoyao Li
  2022-07-12  7:11     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Xiaoyao Li @ 2022-06-28  8:30 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On 6/28/2022 5:53 AM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
> 
> TDX requires additional parameters for a TDX VM for confidential execution,
> to protect the confidentiality of its memory contents and its CPU state from
> any other software, including the VMM.  When creating a guest TD VM, before
> creating any vcpu, the following must be specified: the number of vcpus, the
> TSC frequency (which is the same among vcpus and can't be changed), the
> CPUIDs which are emulated by the TDX module (meaning the guest can trust
> those CPUIDs), and sha384 values for measurement.
> 
> Add a new subcommand, KVM_TDX_INIT_VM, to pass parameters for the TDX guest.
> It assigns an encryption key to the TDX guest for memory encryption; TDX
> encrypts memory on a per-guest basis.  Through it the device model passes
> per-VM parameters for the TDX guest: the maximum number of vcpus, the tsc
> frequency (a TDX guest has a fixed VM-wide TSC frequency, not per-vcpu, and
> the TDX guest can not change it), attributes (production or debug), available
> extended features (which are reflected into guest XCR0 and the IA32_XSS MSR),
> cpuids, sha384 measurements, etc.
> 
> This subcommand is called before creating any vcpu and before KVM_SET_CPUID2,
> i.e. the cpuid configurations aren't available yet.  So the CPUID
> configuration values need to be passed in struct kvm_init_vm.  It's the
> device model's responsibility to construct the cpuid config for both
> KVM_TDX_INIT_VM and KVM_SET_CPUID2.
> 
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/include/asm/kvm_host.h       |   2 +
>   arch/x86/include/asm/tdx.h            |   3 +
>   arch/x86/include/uapi/asm/kvm.h       |  33 +++++
>   arch/x86/kvm/vmx/tdx.c                | 206 ++++++++++++++++++++++++++
>   arch/x86/kvm/vmx/tdx.h                |  23 +++
>   tools/arch/x86/include/uapi/asm/kvm.h |  33 +++++
>   6 files changed, 300 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 342decc69649..81638987cdb9 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1338,6 +1338,8 @@ struct kvm_arch {
>   	 * the global KVM_MAX_VCPU_IDS may lead to significant memory waste.
>   	 */
>   	u32 max_vcpu_ids;
> +
> +	gfn_t gfn_shared_mask;

I think it's better to put it in a separate patch, or in the patch that
consumes it.

>   };
>   
...

> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 2a9dfd54189f..1273b60a1a00 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -438,6 +438,209 @@ int tdx_dev_ioctl(void __user *argp)
>   	return 0;
>   }
>   
> +/*
> + * cpuid entry lookup in TDX cpuid config way.
> + * The difference is how to specify index(subleaves).
> + * Specify index to TDX_CPUID_NO_SUBLEAF for CPUID leaf with no-subleaves.
> + */
> +static const struct kvm_cpuid_entry2 *tdx_find_cpuid_entry(
> +	const struct kvm_cpuid2 *cpuid, u32 function, u32 index)
> +{
> +	int i;
> +
> +

superfluous line

> +	/* In TDX CPU CONFIG, TDX_CPUID_NO_SUBLEAF means index = 0. */
> +	if (index == TDX_CPUID_NO_SUBLEAF)
> +		index = 0;
> +
> +	for (i = 0; i < cpuid->nent; i++) {
> +		const struct kvm_cpuid_entry2 *e = &cpuid->entries[i];
> +
> +		if (e->function == function &&
> +		    (e->index == index ||
> +		     !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
> +			return e;
> +	}
> +	return NULL;
> +}

There is no need for the kvm_tdx->tsc_khz field; we have
kvm->arch.default_tsc_khz.  It also seems kvm_tdx->tsc_khz is not used in the
following patches.

...

> +
> +	kvm_tdx->tsc_offset = td_tdcs_exec_read64(kvm_tdx, TD_TDCS_EXEC_TSC_OFFSET);
> +	kvm_tdx->attributes = td_params->attributes;
> +	kvm_tdx->xfam = td_params->xfam;
> +	kvm_tdx->tsc_khz = TDX_TSC_25MHZ_TO_KHZ(td_params->tsc_frequency);
> +	kvm->max_vcpus = td_params->max_vcpus;
> +
> +	if (td_params->exec_controls & TDX_EXEC_CONTROL_MAX_GPAW)
> +		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(51));
> +	else
> +		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(47));
> +

....

> diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
> index a9ea3573be1b..779dfd683d66 100644
> --- a/tools/arch/x86/include/uapi/asm/kvm.h
> +++ b/tools/arch/x86/include/uapi/asm/kvm.h
> @@ -531,6 +531,7 @@ struct kvm_pmu_event_filter {
>   /* Trust Domain eXtension sub-ioctl() commands. */
>   enum kvm_tdx_cmd_id {
>   	KVM_TDX_CAPABILITIES = 0,
> +	KVM_TDX_INIT_VM,
>   
>   	KVM_TDX_CMD_NR_MAX,
>   };
> @@ -576,4 +577,36 @@ struct kvm_tdx_capabilities {
>   	struct kvm_tdx_cpuid_config cpuid_configs[0];
>   };
>   
> +struct kvm_tdx_init_vm {
> +	__u64 attributes;
> +	__u32 max_vcpus;
> +	__u32 tsc_khz;

This needs to stay aligned with arch/x86/include/uapi/asm/kvm.h, i.e. @tsc_khz
needs to be removed.



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 029/102] KVM: TDX: allocate/free TDX vcpu structure
  2022-06-27 21:53 ` [PATCH v7 029/102] " isaku.yamahata
@ 2022-06-28 11:34   ` Kai Huang
  2022-07-12  7:55     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-06-28 11:34 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> The next step of TDX guest creation is to create vcpu.  Allocate TDX vcpu
> structures, initialize it.  Allocate pages of TDX vcpu for the TDX module.
> 
> In the conventional case, cpuid is empty at initialization, and cpuid is
> configured after vcpu initialization.  Because TDX supports only X2APIC
> mode, cpuid is forcibly initialized to support X2APIC at vcpu
> initialization.

The patch title and commit message of this patch are identical to the previous
patch.

What happened? Did you forget to squash two patches together?
 
> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/vmx/main.c    | 40 ++++++++++++++++++++++++++++++++++----
>  arch/x86/kvm/vmx/x86_ops.h |  8 ++++++++
>  2 files changed, 44 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 067f5de56c53..4f4ed4ad65a7 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -73,6 +73,38 @@ static void vt_vm_free(struct kvm *kvm)
>  		return tdx_vm_free(kvm);
>  }
>  
> +static int vt_vcpu_precreate(struct kvm *kvm)
> +{
> +	if (is_td(kvm))
> +		return 0;
> +
> +	return vmx_vcpu_precreate(kvm);
> +}
> +
> +static int vt_vcpu_create(struct kvm_vcpu *vcpu)
> +{
> +	if (is_td_vcpu(vcpu))
> +		return tdx_vcpu_create(vcpu);
> +
> +	return vmx_vcpu_create(vcpu);
> +}
> +
> +static void vt_vcpu_free(struct kvm_vcpu *vcpu)
> +{
> +	if (is_td_vcpu(vcpu))
> +		return tdx_vcpu_free(vcpu);
> +
> +	return vmx_vcpu_free(vcpu);
> +}
> +
> +static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> +{
> +	if (is_td_vcpu(vcpu))
> +		return tdx_vcpu_reset(vcpu, init_event);
> +
> +	return vmx_vcpu_reset(vcpu, init_event);
> +}
> +
>  static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>  {
>  	if (!is_td(kvm))
> @@ -98,10 +130,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.vm_destroy = vt_vm_destroy,
>  	.vm_free = vt_vm_free,
>  
> -	.vcpu_precreate = vmx_vcpu_precreate,
> -	.vcpu_create = vmx_vcpu_create,
> -	.vcpu_free = vmx_vcpu_free,
> -	.vcpu_reset = vmx_vcpu_reset,
> +	.vcpu_precreate = vt_vcpu_precreate,
> +	.vcpu_create = vt_vcpu_create,
> +	.vcpu_free = vt_vcpu_free,
> +	.vcpu_reset = vt_vcpu_reset,
>  
>  	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
>  	.vcpu_load = vmx_vcpu_load,
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index ef6115ae0e88..42b634971544 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -138,6 +138,10 @@ int tdx_vm_init(struct kvm *kvm);
>  void tdx_mmu_release_hkid(struct kvm *kvm);
>  void tdx_vm_free(struct kvm *kvm);
>  
> +int tdx_vcpu_create(struct kvm_vcpu *vcpu);
> +void tdx_vcpu_free(struct kvm_vcpu *vcpu);
> +void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
> +
>  int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
>  #else
>  static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
> @@ -150,6 +154,10 @@ static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
>  static inline void tdx_flush_shadow_all_private(struct kvm *kvm) {}
>  static inline void tdx_vm_free(struct kvm *kvm) {}
>  
> +static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
> +static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
> +static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
> +
>  static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
>  #endif
>  


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-06-27 21:53 ` [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE isaku.yamahata
@ 2022-06-30 11:03   ` Kai Huang
  2022-07-14 18:05     ` Isaku Yamahata
  2022-07-08  5:18   ` Yuan Yao
  2022-07-14 18:41   ` Isaku Yamahata
  2 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-06-30 11:03 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> TDX introduced a new EPT, Secure-EPT, in addition to the existing EPT.
> Secure-EPT maps protected guest memory, which is called private.  Since
> Secure-EPT page tables are also protected, those page tables are also called
> private.  The existing EPT is often called shared EPT to distinguish it from
> Secure-EPT, and the page tables for shared EPT are also called shared.

Does this patch have anything to do with secure-EPT?

> 
> Virtualization Exception, #VE, is a new processor exception in VMX non-root

#VE isn't new.  It's already in pre-TDX public spec AFAICT.

> operation.  In certain virtualization-related conditions, #VE is injected
> into the guest instead of exiting from the guest to the VMM, so that the
> guest is given a chance to inspect it.  One important case is EPT violation.
> When the "EPT-violation #VE" VM-execution control is set and the "suppress
> #VE" bit in the EPT entry is cleared, #VE is injected instead of an EPT
> violation.

We already know this fact from the pre-TDX public spec.  Instead of repeating it
here, why not focus on what's new in TDX, so your paragraph below about setting
a non-zero value for non-present SPTEs can be justified?

> 
> Because guest memory is protected with TDX, the VMM can't parse instructions
> in the guest memory.  Instead, an MMIO hypercall is used for the guest to
> pass the necessary information to the VMM.
> 
> To make unmodified device drivers work, the guest TD expects #VE on accessing
> a shared GPA.  The #VE handler converts the MMIO access into an MMIO
> hypercall; to deliver #VE, the VMM clears the "suppress #VE" bit in the EPT
> entry.  Before enabling #VE, the VMM needs to figure out via EPT violation
> that the given GPA is for MMIO.
> 

As I said above, before this point you need to explain that in TDX the VMCS is
controlled by the TDX module, and that it always sets the "EPT-violation #VE"
execution control bit.

> So the execution flow looks like
> 
> - Allocate unused shared EPT entry with suppress #VE bit set.
> - EPT violation on that GPA.
> - VMM figures out the faulted GPA is for MMIO.
> - VMM clears the suppress #VE bit.
> - Guest TD gets #VE, and converts MMIO access into MMIO hypercall.
> - If the GPA maps guest memory, VMM resolves it with guest pages.
> 
> For both cases, the SPTE needs the "suppress #VE" bit set initially when it
> is allocated or zapped, therefore a non-zero non-present value for the SPTE
> needs to be allowed.
> 
> This change requires updating FNAME(sync_page) for shadow EPT.
> "if (!sp->spt[i])" in FNAME(sync_page) means that the spte entry is still the
> initial value.  With the introduction of shadow_nonpresent_value, which can
> be non-zero, that no longer holds.  Replace the zero check with
> "!is_shadow_present_pte() && !is_mmio_spte()".

I don't think you need the above paragraph.  Reading the above paragraphs, it's
absolutely unclear how is_mmio_spte() will be impacted by this patch.

From the "execution flow" you mentioned above, you will change MMIO fault from
EPT misconfiguration to EPT violation (in order to get #VE), so theoretically
you may effectively disable MMIO caching, in which case, if I understand
correctly, is_mmio_spte() always returns false.

I guess you can just change it to check:

	if (sp->spt[i] != shadow_nonpresent_value)

Anyway, IMO you can just explain this in a comment in the code.
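
I.e., keeping the original "skip entries that are still at their initial value"
structure, something roughly like the below (just a sketch of the idea, not
tested):

	/*
	 * Skip entries still holding the initial non-present value, which
	 * may be non-zero (e.g. "suppress #VE" set) instead of 0.
	 */
	if (sp->spt[i] == shadow_nonpresent_value)
		continue;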

After all, what is shadow_nonpresent_value, given you haven't explained what it
is?

> 
> When "if (!spt[i])" doesn't hold, but the entry value is
> shadow_nonpresent_value, the entry is wrongly synchronized from non-present
> to non-present with (wrongly) pfn changed and tries to remove rmap wrongly
> and BUG_ON() is hit.

Ditto.

> 
> The TDP MMU uses REMOVED_SPTE = 0x5a0ULL as a special, semi-arbitrary
> constant: an intermediate value indicating that one thread is operating on
> the SPTE.  For TDX (more precisely, to use #VE), the value should include
> the "suppress #VE" bit, which is SHADOW_NONPRESENT_VALUE.

What is SHADOW_NONPRESENT_VALUE?

> Rename REMOVED_SPTE to __REMOVED_SPTE and define REMOVED_SPTE as
> SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE to set the "suppress #VE" bit.

Ditto. IMHO you don't even need to mention REMOVED_SPTE in changelog.

> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c         | 55 ++++++++++++++++++++++++++++++----
>  arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
>  arch/x86/kvm/mmu/spte.c        |  5 +++-
>  arch/x86/kvm/mmu/spte.h        | 37 ++++++++++++++++++++---
>  arch/x86/kvm/mmu/tdp_mmu.c     | 23 +++++++++-----
>  5 files changed, 105 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 51306b80f47c..f239b6cb5d53 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -668,6 +668,44 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
>  	}
>  }
>  
> +static inline void kvm_init_shadow_page(void *page)
> +{
> +#ifdef CONFIG_X86_64
> +	int ign;
> +
> +	WARN_ON_ONCE(shadow_nonpresent_value != SHADOW_NONPRESENT_VALUE);
> +	asm volatile (
> +		"rep stosq\n\t"
> +		: "=c"(ign), "=D"(page)
> +		: "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> +		: "memory"
> +	);
> +#else
> +	BUG();
> +#endif
> +}
> +
> +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> +	int start, end, i, r;
> +	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> +
> +	if (is_tdp_mmu && shadow_nonpresent_value)
> +		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> +
> +	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
> +	if (r)
> +		return r;
> +
> +	if (is_tdp_mmu && shadow_nonpresent_value) {
> +		end = kvm_mmu_memory_cache_nr_free_objects(mc);
> +		for (i = start; i < end; i++)
> +			kvm_init_shadow_page(mc->objects[i]);
> +	}

I think you can just extend this to the legacy MMU too, not only the TDP MMU.

After all, before this patch, where have you declared that TDX only supports TDP
MMU?  This is only enforced in:

	[PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX

Which is 7 patches later.

Also, shadow_nonpresent_value is only used in a couple of places, while
SHADOW_NONPRESENT_VALUE is used directly in more places.  Does it make more
sense to always use shadow_nonpresent_value, instead of using
SHADOW_NONPRESENT_VALUE?

Similar to other shadow values, we can provide a function to let the caller
(VMX/SVM) decide whether it wants to use a non-zero value for non-present
SPTEs (see also the usage sketch below):

	void kvm_mmu_set_non_present_value(u64 value)
	{
		shadow_nonpresent_value = value;
	}
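
The caller that wants "suppress #VE" semantics (e.g. VMX when TDX is enabled)
could then simply do something like the below; just a sketch, the exact call
site is up to you:

	kvm_mmu_set_non_present_value(SHADOW_NONPRESENT_VALUE);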


> +	return 0;
> +}
> +
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
>  	int r;
> @@ -677,8 +715,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
>  	if (r)
>  		return r;
> -	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -				       PT64_ROOT_MAX_LEVEL);
> +	r = mmu_topup_shadow_page_cache(vcpu);
>  	if (r)
>  		return r;
>  	if (maybe_indirect) {
> @@ -5521,9 +5558,16 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>  	 * what is used by the kernel for any given HVA, i.e. the kernel's
>  	 * capabilities are ultimately consulted by kvm_mmu_hugepage_adjust().
>  	 */
> -	if (tdp_enabled)
> +	if (tdp_enabled) {
> +		/*
> +		 * For TDP MMU, always set bit 63 for TDX support. See the
> +		 * comment on SHADOW_NONPRESENT_VALUE.
> +		 */
> +#ifdef CONFIG_X86_64
> +		shadow_nonpresent_value = SHADOW_NONPRESENT_VALUE;
> +#endif

'tdp_enabled' doesn't mean TDP MMU, right? 

>  		max_huge_page_level = tdp_huge_page_level;
> -	else if (boot_cpu_has(X86_FEATURE_GBPAGES))
> +	} else if (boot_cpu_has(X86_FEATURE_GBPAGES))
>  		max_huge_page_level = PG_LEVEL_1G;
>  	else
>  		max_huge_page_level = PG_LEVEL_2M;
> @@ -5654,7 +5698,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
>  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>  
> -	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +	if (!(is_tdp_mmu_enabled(vcpu->kvm) && shadow_nonpresent_value))
> +		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
>  
>  	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index fe35d8fd3276..ee2fb0c073f3 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -1031,7 +1031,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  		gpa_t pte_gpa;
>  		gfn_t gfn;
>  
> -		if (!sp->spt[i])
> +		if (!is_shadow_present_pte(sp->spt[i]) &&
> +		    !is_mmio_spte(sp->spt[i]))
>  			continue;

As I said regarding the changelog, I don't think this is correct.

I guess you can just change this to check:

	if (sp->spt[i] != shadow_nonpresent_value)

>  
>  		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index cda1851ec155..bd441458153f 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -36,6 +36,9 @@ u64 __read_mostly shadow_present_mask;
>  u64 __read_mostly shadow_me_value;
>  u64 __read_mostly shadow_me_mask;
>  u64 __read_mostly shadow_acc_track_mask;
> +#ifdef CONFIG_X86_64
> +u64 __read_mostly shadow_nonpresent_value;
> +#endif
>  
>  u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>  u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
> @@ -360,7 +363,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	 * not set any RWX bits.
>  	 */
>  	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
> -	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
> +	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
>  		mmio_value = 0;
>  
>  	if (!mmio_value)
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 0127bb6e3c7d..1bfedbe0585f 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -140,6 +140,19 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
>  
>  #define MMIO_SPTE_GEN_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
>  
> +/*
> + * non-present SPTE value for both VMX and SVM for TDP MMU.
> + * For SVM NPT, for non-present spte (bit 0 = 0), other bits are ignored.
> + * For VMX EPT, bit 63 is ignored if #VE is disabled.
> + *              bit 63 is #VE suppress if #VE is enabled.
> + */
> +#ifdef CONFIG_X86_64
> +#define SHADOW_NONPRESENT_VALUE	BIT_ULL(63)
> +static_assert(!(SHADOW_NONPRESENT_VALUE & SPTE_MMU_PRESENT_MASK));
> +#else
> +#define SHADOW_NONPRESENT_VALUE	0ULL
> +#endif
> +
>  extern u64 __read_mostly shadow_host_writable_mask;
>  extern u64 __read_mostly shadow_mmu_writable_mask;
>  extern u64 __read_mostly shadow_nx_mask;
> @@ -154,6 +167,12 @@ extern u64 __read_mostly shadow_present_mask;
>  extern u64 __read_mostly shadow_me_value;
>  extern u64 __read_mostly shadow_me_mask;
>  
> +#ifdef CONFIG_X86_64
> +extern u64 __read_mostly shadow_nonpresent_value;
> +#else
> +#define shadow_nonpresent_value	0ULL
> +#endif
> +
>  /*
>   * SPTEs in MMUs without A/D bits are marked with SPTE_TDP_AD_DISABLED_MASK;
>   * shadow_acc_track_mask is the set of bits to be cleared in non-accessed
> @@ -174,9 +193,12 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>  
>  /*
>   * If a thread running without exclusive control of the MMU lock must perform a
> - * multi-part operation on an SPTE, it can set the SPTE to REMOVED_SPTE as a
> + * multi-part operation on an SPTE, it can set the SPTE to __REMOVED_SPTE as a
>   * non-present intermediate value. Other threads which encounter this value
> - * should not modify the SPTE.
> + * should not modify the SPTE.  For the case that TDX is enabled,
> + * SHADOW_NONPRESENT_VALUE, which is "suppress #VE" bit set because TDX module
> + * always enables "EPT violation #VE".  The bit is ignored by non-TDX case as
> + * present bit (bit 0) is cleared.
>   *
>   * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
>   * bot AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
> @@ -184,10 +206,17 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>   *
>   * Only used by the TDP MMU.
>   */
> -#define REMOVED_SPTE	0x5a0ULL
> +#define __REMOVED_SPTE	0x5a0ULL
>  
>  /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
> -static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
> +static_assert(!(__REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
> +static_assert(!(__REMOVED_SPTE & SHADOW_NONPRESENT_VALUE));

I don't think you need this.  My understanding is that REMOVED_SPTE is checked
against SPTE_MMU_PRESENT_MASK because they are both in the low 12 bits.
SHADOW_NONPRESENT_VALUE is bit 63, so it can't conflict with REMOVED_SPTE,
which per the comment only uses low bits.

> +
> +/*
> + * See above comment around __REMOVED_SPTE.  REMOVED_SPTE is the actual
> + * intermediate value set to the removed SPET.  it sets the "suppress #VE" bit.
> + */
> +#define REMOVED_SPTE	(SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE)
>  
>  static inline bool is_removed_spte(u64 spte)
>  {
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7b9265d67131..2ca03ec3bf52 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -692,8 +692,16 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>  	 * overwrite the special removed SPTE value. No bookkeeping is needed
>  	 * here since the SPTE is going from non-present to non-present.  Use
>  	 * the raw write helper to avoid an unnecessary check on volatile bits.
> +	 *
> +	 * Set non-present value to SHADOW_NONPRESENT_VALUE, rather than 0.
> +	 * It is because when TDX is enabled, TDX module always
> +	 * enables "EPT-violation #VE", so KVM needs to set
> +	 * "suppress #VE" bit in EPT table entries, in order to get
> +	 * real EPT violation, rather than TDVMCALL.  KVM sets
> +	 * SHADOW_NONPRESENT_VALUE (which sets "suppress #VE" bit) so it
> +	 * can be set when EPT table entries are zapped.
>  	 */
> -	__kvm_tdp_mmu_write_spte(iter->sptep, 0);
> +	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
>  
>  	return 0;
>  }
> @@ -870,8 +878,8 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
>  			continue;
>  
>  		if (!shared)
> -			tdp_mmu_set_spte(kvm, &iter, 0);
> -		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
> +			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
> +		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
>  			goto retry;
>  	}
>  }
> @@ -927,8 +935,9 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
>  		return false;
>  
> -	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
> -			   sp->gfn, sp->role.level + 1, true, true);
> +	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
> +			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
> +			   true, true);
>  
>  	return true;
>  }
> @@ -965,7 +974,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>  		    !is_last_spte(iter.old_spte, iter.level))
>  			continue;
>  
> -		tdp_mmu_set_spte(kvm, &iter, 0);
> +		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
>  		flush = true;
>  	}
>  
> @@ -1330,7 +1339,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
>  	 * invariant that the PFN of a present * leaf SPTE can never change.
>  	 * See __handle_changed_spte().
>  	 */
> -	tdp_mmu_set_spte(kvm, iter, 0);
> +	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
>  
>  	if (!pte_write(range->pte)) {
>  		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
  2022-06-27 21:53 ` [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault isaku.yamahata
@ 2022-06-30 11:37   ` Kai Huang
  2022-07-13  8:35     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-06-30 11:37 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> Explicitly check for an MMIO spte in the fast page fault flow.  TDX will
> use a not-present entry for MMIO sptes, which can be mistaken for an
> access-tracked spte since both have SPTE_SPECIAL_MASK set.

SPTE_SPECIAL_MASK has been removed in the latest KVM code.  The changelog needs
updating.

In fact, if I understand correctly, I don't think this changelog is correct:

The existing code doesn't check is_mmio_spte() because:

1) If MMIO caching is enabled, MMIO fault is always handled in
handle_mmio_page_fault() before reaching here; 

2) If MMIO caching is disabled, is_shadow_present_pte() always returns false for
an MMIO spte, and is_mmio_spte() also always returns false for an MMIO spte, so
there's no need to check here.

"A non-present entry for MMIO spte" doesn't necessarily mean
is_shadow_present_pte() will return true for it, and there's no explanation at
all that for TDX guest a MMIO spte could reach here and is_shadow_present_pte()
returns true for it.

If this patch is ever needed, it should come with or after the patch (or
patches) that handles MMIO faults for TD guests.

Hi Sean, Paolo,

Did I miss anything?

> 
> MMIO sptes are handled in handle_mmio_page_fault for non-TDX VMs, so this
> patch does not affect them.  TDX will handle MMIO emulation through a
> hypercall instead.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 17252f39bd7c..51306b80f47c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3163,7 +3163,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  		else
>  			sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
>  
> -		if (!is_shadow_present_pte(spte))
> +		if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
>  			break;
>  
>  		sp = sptep_to_sp(sptep);


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  2022-06-27 21:53 ` [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis isaku.yamahata
@ 2022-06-30 11:45   ` Kai Huang
  2022-07-05 14:06   ` Kai Huang
  2022-07-19  8:47   ` Isaku Yamahata
  2 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-06-30 11:45 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> TDX will use a different shadow PTE entry value for MMIO from VMX.  Add
> members to kvm_arch and track the MMIO value per-VM instead of using global
> variables.  By using the per-VM EPT entry value for MMIO, the existing VMX
> logic is kept working.
> 
> In the VMX VM case, the EPT entry for MMIO is either a non-present PTE
> (present bit cleared) without a backing guest physical address (on EPT
> violation, KVM searches the backing guest memory and finds there is no
> backing guest page), or the value that triggers EPT misconfiguration.  Once
> MMIO is triggered on the EPT entry, the EPT entry is updated to trigger EPT
> misconfiguration for future MMIO on the same GPA.  That allows KVM to
> recognize the memory access as MMIO without searching backing guest pages.
> KVM then parses the guest instruction to figure out the address/value/width
> for the MMIO.
> 
> In the case of a guest TD, the guest memory is protected, so the VMM can't
> parse the guest instruction to understand the value and access width for
> MMIO.  Instead, the VMM sets up the (shared) EPT to trigger #VE by clearing
> the "suppress #VE" bit.  When the guest TD issues MMIO, #VE is injected.  The
> guest #VE handler converts the MMIO access into an MMIO hypercall to pass the
> address/value/width for the MMIO to the VMM (or directly paravirtualizes MMIO
> into a hypercall).  Then the VMM can handle the MMIO hypercall without
> parsing guest instructions.

This is an infrastructural patch which enables per-VM MMIO caching.  Why not
put this patch first so you don't need the below changes (which are
introduced by your previous patches)?

[...]

>  
> -		if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
> +		if (!is_shadow_present_pte(spte) ||
> +		    is_mmio_spte(vcpu->kvm, spte))
>  			break;
>  
> 

[...]

> @@ -1032,7 +1032,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  		gfn_t gfn;
>  
>  		if (!is_shadow_present_pte(sp->spt[i]) &&
> -		    !is_mmio_spte(sp->spt[i]))
> +		    !is_mmio_spte(vcpu->kvm, sp->spt[i]))
>  			continue;


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level
  2022-06-27 21:53 ` [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level isaku.yamahata
@ 2022-06-30 12:27   ` Kai Huang
  2022-07-19 10:26     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-06-30 12:27 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> TODO: This is a transient workaround patch until the large page support for
> TDX is implemented.  Support large page for TDX and remove this patch.

I don't understand.  How does this patch have anything to do with what you are
talking about here?

If you want to remove this patch later, then why not just explain the reason to
remove it when you actually have that patch?

> 
> At this point, large page for TDX isn't supported, and guest TD needs to be
> allowed to work only with 4K pages.  On the other hand, conventional VMX VMs
> should continue to work with large pages.  Allow per-VM override of the TDP
> max page level.

At which point/previous patch have you made/declared "large page for TDX isn't
supported"?

If you want to declare you don't want to support large page for TDX, IMHO just
declare it here, for instance:

"For simplicity, only support 4K page for TD guest."
  
> 
> In the existing x86 KVM MMU code, there is already a max_level member in
> struct kvm_page_fault, with KVM_MAX_HUGEPAGE_LEVEL as its initial value.  The
> KVM page fault handler denies page sizes larger than max_level.
> 
> Add a per-VM member to indicate the allowed maximum page size, with
> KVM_MAX_HUGEPAGE_LEVEL as the default value, and initialize max_level in
> struct kvm_page_fault with it.  For the guest TD, set the per-VM value so that
> the allowed maximum page size is 4K.  Then the only allowed page size is 4K,
> which means large pages are disabled.

To me it's overcomplicated.  You just need simple sentences for such a simple
infrastructural patch.  For instance:

"TDX requires special handling to support large private page.  For simplicity,
only support 4K page for TD guest for now.  Add per-VM maximum page level
support to support different maximum page sizes for TD guest and conventional
VMX guest."

Just for your reference.

> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/mmu/mmu.c          | 1 +
>  arch/x86/kvm/mmu/mmu_internal.h | 2 +-
>  3 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 39215daa8576..f4d4ed41641b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1146,6 +1146,7 @@ struct kvm_arch {
>  	unsigned long n_requested_mmu_pages;
>  	unsigned long n_max_mmu_pages;
>  	unsigned int indirect_shadow_pages;
> +	int tdp_max_page_level;
>  	u8 mmu_valid_gen;
>  	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
>  	struct list_head active_mmu_pages;
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index e0aa5ad3931d..80d7c7709af3 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5878,6 +5878,7 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>  	node->track_write = kvm_mmu_pte_write;
>  	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
> +	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
>  	kvm_mmu_set_mmio_spte_mask(kvm, shadow_default_mmio_mask,
>  				   shadow_default_mmio_mask,
>  				   ACC_WRITE_MASK | ACC_USER_MASK);
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index bd2a26897b97..44a04fad4bed 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -244,7 +244,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
>  		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
>  
> -		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
> +		.max_level = vcpu->kvm->arch.tdp_max_page_level,
>  		.req_level = PG_LEVEL_4K,
>  		.goal_level = PG_LEVEL_4K,
>  	};


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu
  2022-06-27 21:53 ` [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu isaku.yamahata
@ 2022-07-01 10:41   ` Kai Huang
  2022-07-19 11:06     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-01 10:41 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> For kvm mmu that has shared bit mask, zap only leaf SPTEs when
> deleting/moving a memslot.  The existing kvm_mmu_zap_memslot() depends on

Unless I am mistaken, I don't see there's an 'existing' kvm_mmu_zap_memslot().

> role.invalid with read lock of mmu_lock so that other vcpu can operate on
> kvm mmu concurrently. 
> 

> Mark the root page table invalid, unlink it from page
> table pointer of CPU, process the page table.  
> 

Are you talking about the behaviour of existing code, or the change you are
going to make?  Looks like you mean the latter but I believe it's the former. 

> It doesn't work for private
> page table to unlink the root page table because it requires all SPTE entry
> to be non-present.  
> 

I don't think we can truly *unlink* the private root page table from secure
EPTP, right?  The EPTP (root table) is fixed (and hidden) during TD's runtime.

I guess you are trying to say: removing/unlinking one secure-EPT page requires
removing/unlinking all its children first? 

So the reason to only zap leaf SPTEs is that we cannot truly unlink the private
root page table, correct?  Sorry, your changelog is not obvious to me.

> Instead, with write-lock of mmu_lock and zap only leaf
> SPTEs for kvm mmu with shared bit mask.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 35 ++++++++++++++++++++++++++++++++++-
>  1 file changed, 34 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 80d7c7709af3..c517c7bca105 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5854,11 +5854,44 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
>  	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
>  }
>  
> +static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
> +{
> +	bool flush = false;
> +
> +	write_lock(&kvm->mmu_lock);
> +
> +	/*
> +	 * Zapping non-leaf SPTEs, a.k.a. not-last SPTEs, isn't required, worst
> +	 * case scenario we'll have unused shadow pages lying around until they
> +	 * are recycled due to age or when the VM is destroyed.
> +	 */
> +	if (is_tdp_mmu_enabled(kvm)) {
> +		struct kvm_gfn_range range = {
> +		      .slot = slot,
> +		      .start = slot->base_gfn,
> +		      .end = slot->base_gfn + slot->npages,
> +		      .may_block = false,
> +		};
> +
> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);


It appears you only unmap private GFNs (because the base_gfn doesn't have the
shared bit)?  I think the shared mappings in this slot must be zapped too?

How is this done?  Or does kvm_tdp_mmu_unmap_gfn_range() also zap shared
mappings?

It's hard to review if one patch's behaviour/logic depends on further patches.

> +	} else {
> +		flush = slot_handle_level(kvm, slot, kvm_zap_rmapp, PG_LEVEL_4K,
> +					  KVM_MAX_HUGEPAGE_LEVEL, true);
> +	}
> +	if (flush)
> +		kvm_flush_remote_tlbs(kvm);
> +
> +	write_unlock(&kvm->mmu_lock);
> +}
> +
>  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
>  			struct kvm_memory_slot *slot,
>  			struct kvm_page_track_notifier_node *node)
>  {
> -	kvm_mmu_zap_all_fast(kvm);
> +	if (kvm_gfn_shared_mask(kvm))
> +		kvm_mmu_zap_memslot(kvm, slot);
> +	else
> +		kvm_mmu_zap_all_fast(kvm);
>  }
>  
>  int kvm_mmu_init_vm(struct kvm *kvm)


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-06-27 21:53 ` [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page isaku.yamahata
@ 2022-07-01 11:12   ` Kai Huang
  2022-07-19 15:35     ` Isaku Yamahata
  2022-07-11  6:28   ` Yuan Yao
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-01 11:12 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> For private GPA, CPU refers a private page table whose contents are
> encrypted.  The dedicated APIs to operate on it (e.g. updating/reading its
> PTE entry) are used and their cost is expensive.
> 
> When KVM resolves KVM page fault, it walks the page tables.  To reuse the
> existing KVM MMU code and mitigate the heavy cost to directly walk
> encrypted private page table, allocate one more page to mirror the existing
> KVM page table.  Resolve KVM page fault with the existing code, and do
> additional operations necessary for the mirrored private page table.  To
> distinguish such cases, the existing KVM page table is called a shared page
> table (i.e. no mirrored private page table), and the KVM page table with
> mirrored private page table is called a private page table.  The
> relationship is depicted below.
> 
> Add private pointer to struct kvm_mmu_page for mirrored private page table
> and add helper functions to allocate/initialize/free a mirrored private
> page table page.  Also, add helper functions to check if a given
> kvm_mmu_page is private.  The later patch introduces hooks to operate on
> the mirrored private page table.
> 
>               KVM page fault                     |
>                      |                           |
>                      V                           |
>         -------------+----------                 |
>         |                      |                 |
>         V                      V                 |
>      shared GPA           private GPA            |
>         |                      |                 |
>         V                      V                 |
>  CPU/KVM shared PT root  KVM private PT root     |  CPU private PT root
>         |                      |                 |           |
>         V                      V                 |           V
>      shared PT            private PT <----mirror----> mirrored private PT
>         |                      |                 |           |
>         |                      \-----------------+------\    |
>         |                                        |      |    |
>         V                                        |      V    V
>   shared guest page                              |    private guest page
>                                                  |
>                            non-encrypted memory  |    encrypted memory
>                                                  |
> PT: page table
> 
> Both CPU and KVM refer to CPU/KVM shared page table.  Private page table
> is used only by KVM.  CPU refers to mirrored private page table.

Shouldn't the private page table maintained by KVM be "mirrored private PT"?

To me "mirrored" normally implies it is fake, or backup which isn't actually
used.  But here "mirrored private PT" is actually used by hardware.

And to me, "CPU and KVM" above are confusing.  For instance, "Both CPU and KVM
refer to CPU/KVM shared page table" took me at least one minute to understand,
with the help from the diagram -- otherwise I won't be able to understand.

I guess you can just say somewhere:

1) Shared PT is visible to KVM and it is used by CPU;
2) Private PT is used by CPU but it is invisible to KVM;
3) Mirrored private PT is visible to KVM but not used by CPU.  It is used to
mirror the actual private PT which is used by CPU.


[...]

> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
> +{
> +	sp->private_sp = private_sp;
> +}
> 

[...]

> @@ -295,6 +297,7 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>  	sp->gfn = gfn;
>  	sp->ptep = sptep;
>  	sp->tdp_mmu_page = true;
> +	kvm_mmu_init_private_sp(sp);

Can this even compile?  Unless I am mistaken, kvm_mmu_init_private_sp()
(see above) has two arguments...

Please make sure each patch at least compiles and doesn't cause warnings...

-- 
Thanks,
-Kai



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs
  2022-06-28  2:52   ` Kai Huang
@ 2022-07-04  6:44     ` Kai Huang
  2022-07-12  1:01     ` Isaku Yamahata
  1 sibling, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-04  6:44 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson, Xiaoyao Li

On Tue, 2022-06-28 at 14:52 +1200, Kai Huang wrote:
> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > Unlike default VMs, confidential VMs (Intel TDX and AMD SEV-ES) don't allow
> > some operations (e.g., memory read/write, register state access, etc).
> > 
> > Introduce vm_type to track the type of the VM to x86 KVM.  Other arch KVMs
> > already use vm_type, KVM_INIT_VM accepts vm_type, and x86 KVM callback
> > vm_init accepts vm_type.  So follow them.  Further, a different policy can
> > be made based on vm_type.  Define KVM_X86_DEFAULT_VM for default VM as
> > default and define KVM_X86_TDX_VM for Intel TDX VM.  The wrapper function
> > will be defined as "bool is_td(kvm) { return vm_type == VM_TYPE_TDX; }"
> > 
> > Add a capability KVM_CAP_VM_TYPES to effectively allow device model,
> > e.g. qemu, to query what VM types are supported by KVM.  This (introduce a
> > new capability and add vm_type) is chosen to align with other arch KVMs
> > that have VM types already.  Other arch KVMs uses different name to query
> > supported vm types and there is no common name for it, so new name was
> > chosen.
> > 
> > Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  Documentation/virt/kvm/api.rst        | 21 +++++++++++++++++++++
> >  arch/x86/include/asm/kvm-x86-ops.h    |  1 +
> >  arch/x86/include/asm/kvm_host.h       |  2 ++
> >  arch/x86/include/uapi/asm/kvm.h       |  3 +++
> >  arch/x86/kvm/svm/svm.c                |  6 ++++++
> >  arch/x86/kvm/vmx/main.c               |  1 +
> >  arch/x86/kvm/vmx/tdx.h                |  6 +-----
> >  arch/x86/kvm/vmx/vmx.c                |  5 +++++
> >  arch/x86/kvm/vmx/x86_ops.h            |  1 +
> >  arch/x86/kvm/x86.c                    |  9 ++++++++-
> >  include/uapi/linux/kvm.h              |  1 +
> >  tools/arch/x86/include/uapi/asm/kvm.h |  3 +++
> >  tools/include/uapi/linux/kvm.h        |  1 +
> >  13 files changed, 54 insertions(+), 6 deletions(-)
> > 
> > diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> > index 9cbbfdb663b6..b9ab598883b2 100644
> > --- a/Documentation/virt/kvm/api.rst
> > +++ b/Documentation/virt/kvm/api.rst
> > @@ -147,10 +147,31 @@ described as 'basic' will be available.
> >  The new VM has no virtual cpus and no memory.
> >  You probably want to use 0 as machine type.
> >  
> > +X86:
> > +^^^^
> > +
> > +Supported vm type can be queried from KVM_CAP_VM_TYPES, which returns the
> > +bitmap of supported vm types. The 1-setting of bit @n means vm type with
> > +value @n is supported.
> 
> 
> Perhaps I am missing something, but I don't understand how the below changes
> (except the x86 part above) in Documentation are related to this patch.
> 
> > +
> > +S390:
> > +^^^^^
> > +
> >  In order to create user controlled virtual machines on S390, check
> >  KVM_CAP_S390_UCONTROL and use the flag KVM_VM_S390_UCONTROL as
> >  privileged user (CAP_SYS_ADMIN).
> >  
> > +MIPS:
> > +^^^^^
> > +
> > +To use hardware assisted virtualization on MIPS (VZ ASE) rather than
> > +the default trap & emulate implementation (which changes the virtual
> > +memory layout to fit in user mode), check KVM_CAP_MIPS_VZ and use the
> > +flag KVM_VM_MIPS_VZ.
> > +
> > +ARM64:
> > +^^^^^^
> > +
> >  On arm64, the physical address size for a VM (IPA Size limit) is limited
> >  to 40bits by default. The limit can be configured if the host supports the
> >  extension KVM_CAP_ARM_VM_IPA_SIZE. When supported, use
> > diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> > index 75bc44aa8d51..a97cdb203a16 100644
> > --- a/arch/x86/include/asm/kvm-x86-ops.h
> > +++ b/arch/x86/include/asm/kvm-x86-ops.h
> > @@ -19,6 +19,7 @@ KVM_X86_OP(hardware_disable)
> >  KVM_X86_OP(hardware_unsetup)
> >  KVM_X86_OP(has_emulated_msr)
> >  KVM_X86_OP(vcpu_after_set_cpuid)
> > +KVM_X86_OP(is_vm_type_supported)
> >  KVM_X86_OP(vm_init)
> >  KVM_X86_OP_OPTIONAL(vm_destroy)
> >  KVM_X86_OP_OPTIONAL_RET0(vcpu_precreate)
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index aa11525500d3..089e0a4de926 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1141,6 +1141,7 @@ enum kvm_apicv_inhibit {
> >  };
> >  
> >  struct kvm_arch {
> > +	unsigned long vm_type;
> >  	unsigned long n_used_mmu_pages;
> >  	unsigned long n_requested_mmu_pages;
> >  	unsigned long n_max_mmu_pages;
> > @@ -1434,6 +1435,7 @@ struct kvm_x86_ops {
> >  	bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
> >  	void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
> >  
> > +	bool (*is_vm_type_supported)(unsigned long vm_type);
> >  	unsigned int vm_size;
> >  	int (*vm_init)(struct kvm *kvm);
> >  	void (*vm_destroy)(struct kvm *kvm);
> > diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> > index 50a4e787d5e6..9792ec1cc317 100644
> > --- a/arch/x86/include/uapi/asm/kvm.h
> > +++ b/arch/x86/include/uapi/asm/kvm.h
> > @@ -531,4 +531,7 @@ struct kvm_pmu_event_filter {
> >  #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
> >  #define   KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
> >  
> > +#define KVM_X86_DEFAULT_VM	0
> > +#define KVM_X86_TDX_VM		1
> > +
> >  #endif /* _ASM_X86_KVM_H */
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 247c0ad458a0..815a07c594f1 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -4685,6 +4685,11 @@ static void svm_vm_destroy(struct kvm *kvm)
> >  	sev_vm_destroy(kvm);
> >  }
> >  
> > +static bool svm_is_vm_type_supported(unsigned long type)
> > +{
> > +	return type == KVM_X86_DEFAULT_VM;
> > +}
> > +
> >  static int svm_vm_init(struct kvm *kvm)
> >  {
> >  	if (!pause_filter_count || !pause_filter_thresh)
> > @@ -4712,6 +4717,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
> >  	.vcpu_free = svm_vcpu_free,
> >  	.vcpu_reset = svm_vcpu_reset,
> >  
> > +	.is_vm_type_supported = svm_is_vm_type_supported,
> >  	.vm_size = sizeof(struct kvm_svm),
> >  	.vm_init = svm_vm_init,
> >  	.vm_destroy = svm_vm_destroy,
> > diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> > index ac788af17d92..7be4941e4c4d 100644
> > --- a/arch/x86/kvm/vmx/main.c
> > +++ b/arch/x86/kvm/vmx/main.c
> > @@ -43,6 +43,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
> >  	.hardware_disable = vmx_hardware_disable,
> >  	.has_emulated_msr = vmx_has_emulated_msr,
> >  
> > +	.is_vm_type_supported = vmx_is_vm_type_supported,
> >  	.vm_size = sizeof(struct kvm_vmx),
> >  	.vm_init = vmx_vm_init,
> >  	.vm_destroy = vmx_vm_destroy,
> > diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> > index 54d7a26ed9ee..2f43db5bbefb 100644
> > --- a/arch/x86/kvm/vmx/tdx.h
> > +++ b/arch/x86/kvm/vmx/tdx.h
> > @@ -17,11 +17,7 @@ struct vcpu_tdx {
> >  
> >  static inline bool is_td(struct kvm *kvm)
> >  {
> > -	/*
> > -	 * TDX VM type isn't defined yet.
> > -	 * return kvm->arch.vm_type == KVM_X86_TDX_VM;
> > -	 */
> > -	return false;
> > +	return kvm->arch.vm_type == KVM_X86_TDX_VM;
> >  }
> 
> If you put this patch before patch:
> 
> 	[PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure
> 
> Then you don't need to introduce this chunk in the above patch and then remove
> it here, which is unnecessary and ugly.
> 
> And you can even introduce only KVM_X86_DEFAULT_VM, but not KVM_X86_TDX_VM, in
> this patch, so you can make this an infrastructural patch to report the VM
> type.  KVM_X86_TDX_VM can then come with the patch where is_td() is introduced
> (your patch 9 above).
> 
> To me, that's a cleaner way to write the patches.  For instance, this
> infrastructural patch could theoretically be used by other series that have
> something similar to support, without carrying the is_td() and KVM_X86_TDX_VM
> burden.

Sorry, I missed that this patch already has Paolo's Reviewed-by.  Please feel
free to ignore my comments.


-- 
Thanks,
-Kai



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  2022-06-27 21:53 ` [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis isaku.yamahata
  2022-06-30 11:45   ` Kai Huang
@ 2022-07-05 14:06   ` Kai Huang
  2022-07-19  8:47   ` Isaku Yamahata
  2 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-05 14:06 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> TDX will use a different shadow PTE entry value for MMIO from VMX.  Add
> members to kvm_arch and track value for MMIO per-VM instead of global
> variables.  By using the per-VM EPT entry value for MMIO, the existing VMX
> logic is kept working.
> 
> In the case of VMX VM case, the EPT entry for MMIO is non-present PTE
> (present bit cleared) without backing guest physical address (on EPT
> violation, KVM searches backing guest memory and it finds there is no
> backing guest page.) or the value to trigger EPT misconfiguration.  Once
> MMIO is triggered on the EPT entry, the EPT entry is updated to trigger EPT
> misconfiguration for the future MMIO on the same GPA.  It allows KVM to
> understand the memory access is for MMIO without searching backing guest
> pages.). And then KVM parses guest instruction to figure out
> address/value/width for MMIO.
> 
> In the case of the guest TD, the guest memory is protected so that VMM
> can't parse guest instruction to understand the value and access width for
> MMIO.  Instead, VMM sets up (Shared) EPT to trigger #VE by clearing
> the VE-suppress bit.  When the guest TD issues MMIO, #VE is injected.  Guest VE
> handler converts MMIO access into MMIO hypercall to pass
> address/value/width for MMIO to VMM. (or directly paravirtualize MMIO into
> hypercall.)  Then VMM can handle the MMIO hypercall without parsing guest
> instructions.

To me only the first paragraph is needed.  It already describes _why_ we need
this patch and _how_ you are going to implement it.

The last two paragraphs only elaborate on the _why_ in the first paragraph; they
do not say this patch will do more.  And they have been explained in previous
patches, so it looks like they are not mandatory here.

> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  4 ++++
>  arch/x86/include/asm/vmx.h      |  1 +
>  arch/x86/kvm/mmu.h              |  4 +++-
>  arch/x86/kvm/mmu/mmu.c          | 20 ++++++++++++----
>  arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
>  arch/x86/kvm/mmu/spte.c         | 41 +++++++++++++++------------------
>  arch/x86/kvm/mmu/spte.h         | 11 ++++-----
>  arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++---
>  arch/x86/kvm/svm/svm.c          |  2 +-
>  arch/x86/kvm/vmx/vmx.c          |  8 +++++++
>  10 files changed, 59 insertions(+), 40 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 2c47aab72a1b..39215daa8576 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1161,6 +1161,10 @@ struct kvm_arch {
>  	 */
>  	spinlock_t mmu_unsync_pages_lock;
>  
> +	bool enable_mmio_caching;
> +	u64 shadow_mmio_value;
> +	u64 shadow_mmio_mask;
> +
>  	struct list_head assigned_dev_head;
>  	struct iommu_domain *iommu_domain;
>  	bool iommu_noncoherent;
> diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
> index c371ef695fcc..6231ef005a50 100644
> --- a/arch/x86/include/asm/vmx.h
> +++ b/arch/x86/include/asm/vmx.h
> @@ -511,6 +511,7 @@ enum vmcs_field {
>  #define VMX_EPT_IPAT_BIT    			(1ull << 6)
>  #define VMX_EPT_ACCESS_BIT			(1ull << 8)
>  #define VMX_EPT_DIRTY_BIT			(1ull << 9)
> +#define VMX_EPT_SUPPRESS_VE_BIT			(1ull << 63)

Both the patch title and the changelog say this patch only does per-VM MMIO
value/mask tracking.  Why do we need this bit here?

>  #define VMX_EPT_RWX_MASK                        (VMX_EPT_READABLE_MASK |       \
>  						 VMX_EPT_WRITABLE_MASK |       \
>  						 VMX_EPT_EXECUTABLE_MASK)
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index ccf0ba7a6387..9ba60fd79d33 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -108,7 +108,9 @@ static inline u8 kvm_get_shadow_phys_bits(void)
>  	return boot_cpu_data.x86_phys_bits;
>  }
>  
> -void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
> +void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask,
> +				u64 access_mask);
> +void kvm_mmu_set_default_mmio_spte_mask(u64 mask);
>  void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
>  void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
>  
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index f239b6cb5d53..496d0d30839b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2287,7 +2287,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
>  				return kvm_mmu_prepare_zap_page(kvm, child,
>  								invalid_list);
>  		}
> -	} else if (is_mmio_spte(pte)) {
> +	} else if (is_mmio_spte(kvm, pte)) {
>  		mmu_spte_clear_no_track(spte);
>  	}
>  	return 0;
> @@ -3067,8 +3067,13 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
>  		 * by L0 userspace (you can observe gfn > L1.MAXPHYADDR if
>  		 * and only if L1's MAXPHYADDR is inaccurate with respect to
>  		 * the hardware's).
> +		 *
> +		 * Excludes the INTEL TD guest.  Because TD memory is
> +		 * protected, the instruction can't be emulated.  Instead, use
> +		 * SPTE value without #VE suppress bit cleared
> +		 * (kvm->arch.shadow_mmio_value = 0).
>  		 */

Again, I don't think this chunk should be in this patch.  It's out of scope for
what the patch claims to do.

I see you will make the code change below in a later patch (a couple of patches
later):

-		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
+		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching &&
+			     !kvm_gfn_shared_mask(vcpu->kvm)) ||
 		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
 			return RET_PF_EMULATE;

So why not put the comment and the code change together?

> -		if (unlikely(!enable_mmio_caching) ||
> +		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
>  		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
>  			return RET_PF_EMULATE;
>  	}
> @@ -3200,7 +3205,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  		else
>  			sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
>  
> -		if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
> +		if (!is_shadow_present_pte(spte) ||
> +		    is_mmio_spte(vcpu->kvm, spte))
>  			break;
>  
>  		sp = sptep_to_sp(sptep);
> @@ -3907,7 +3913,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
>  	if (WARN_ON(reserved))
>  		return -EINVAL;
>  
> -	if (is_mmio_spte(spte)) {
> +	if (is_mmio_spte(vcpu->kvm, spte)) {
>  		gfn_t gfn = get_mmio_spte_gfn(spte);
>  		unsigned int access = get_mmio_spte_access(spte);
>  
> @@ -4350,7 +4356,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
>  static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
>  			   unsigned int access)
>  {
> -	if (unlikely(is_mmio_spte(*sptep))) {
> +	if (unlikely(is_mmio_spte(vcpu->kvm, *sptep))) {
>  		if (gfn != get_mmio_spte_gfn(*sptep)) {
>  			mmu_spte_clear_no_track(sptep);
>  			return true;
> @@ -5864,6 +5870,10 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>  	node->track_write = kvm_mmu_pte_write;
>  	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
> +	kvm_mmu_set_mmio_spte_mask(kvm, shadow_default_mmio_mask,
> +				   shadow_default_mmio_mask,
> +				   ACC_WRITE_MASK | ACC_USER_MASK);
> +

This (along with shadow_default_mmio_mask) looks a little bit weird.  Please
also see comments below.
 
>  	return 0;
>  }
>  
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index ee2fb0c073f3..62ae590d4e5b 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -1032,7 +1032,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  		gfn_t gfn;
>  
>  		if (!is_shadow_present_pte(sp->spt[i]) &&
> -		    !is_mmio_spte(sp->spt[i]))
> +		    !is_mmio_spte(vcpu->kvm, sp->spt[i]))
>  			continue;
>  
>  		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index bd441458153f..5194aef60c1f 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -29,8 +29,7 @@ u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
>  u64 __read_mostly shadow_user_mask;
>  u64 __read_mostly shadow_accessed_mask;
>  u64 __read_mostly shadow_dirty_mask;
> -u64 __read_mostly shadow_mmio_value;
> -u64 __read_mostly shadow_mmio_mask;
> +u64 __read_mostly shadow_default_mmio_mask;

This shadow_default_mmio_mask looks a little bit weird.  Please also see below.

>  u64 __read_mostly shadow_mmio_access_mask;
>  u64 __read_mostly shadow_present_mask;
>  u64 __read_mostly shadow_me_value;
> @@ -62,10 +61,11 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
>  	u64 spte = generation_mmio_spte_mask(gen);
>  	u64 gpa = gfn << PAGE_SHIFT;
>  
> -	WARN_ON_ONCE(!shadow_mmio_value);
> +	WARN_ON_ONCE(!vcpu->kvm->arch.shadow_mmio_value &&
> +		     !kvm_gfn_shared_mask(vcpu->kvm));

This chunk doesn't belong in this patch.

>  
>  	access &= shadow_mmio_access_mask;
> -	spte |= shadow_mmio_value | access;
> +	spte |= vcpu->kvm->arch.shadow_mmio_value | access;
>  	spte |= gpa | shadow_nonpresent_or_rsvd_mask;
>  	spte |= (gpa & shadow_nonpresent_or_rsvd_mask)
>  		<< SHADOW_NONPRESENT_OR_RSVD_MASK_LEN;
> @@ -337,7 +337,8 @@ u64 mark_spte_for_access_track(u64 spte)
>  	return spte;
>  }
>  
> -void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
> +void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask,
> +				u64 access_mask)
>  {
>  	BUG_ON((u64)(unsigned)access_mask != access_mask);
>  	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
> @@ -366,11 +367,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
>  		mmio_value = 0;
>  
> -	if (!mmio_value)
> -		enable_mmio_caching = false;
> -
> -	shadow_mmio_value = mmio_value;
> -	shadow_mmio_mask  = mmio_mask;
> +	kvm->arch.enable_mmio_caching = !!mmio_value;
> +	kvm->arch.shadow_mmio_value = mmio_value;
> +	kvm->arch.shadow_mmio_mask = mmio_mask;
>  	shadow_mmio_access_mask = access_mask;
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
> @@ -393,24 +392,18 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
>  	shadow_dirty_mask	= has_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull;
>  	shadow_nx_mask		= 0ull;
>  	shadow_x_mask		= VMX_EPT_EXECUTABLE_MASK;
> -	shadow_present_mask	= has_exec_only ? 0ull : VMX_EPT_READABLE_MASK;
> +	/* VMX_EPT_SUPPRESS_VE_BIT is needed for W or X violation. */
> +	shadow_present_mask	=
> +		(has_exec_only ? 0ull : VMX_EPT_READABLE_MASK) | VMX_EPT_SUPPRESS_VE_BIT;

Again, this chunk shouldn't be in this patch.

>  	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
>  	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
>  	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
> -
> -	/*
> -	 * EPT Misconfigurations are generated if the value of bits 2:0
> -	 * of an EPT paging-structure entry is 110b (write/execute).
> -	 */
> -	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
> -				   VMX_EPT_RWX_MASK, 0);
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks);
>  
>  void kvm_mmu_reset_all_pte_masks(void)
>  {
>  	u8 low_phys_bits;
> -	u64 mask;
>  
>  	shadow_phys_bits = kvm_get_shadow_phys_bits();
>  
> @@ -459,9 +452,13 @@ void kvm_mmu_reset_all_pte_masks(void)
>  	 * PTEs and so the reserved PA approach must be disabled.
>  	 */
>  	if (shadow_phys_bits < 52)
> -		mask = BIT_ULL(51) | PT_PRESENT_MASK;
> +		shadow_default_mmio_mask = BIT_ULL(51) | PT_PRESENT_MASK;
>  	else
> -		mask = 0;
> +		shadow_default_mmio_mask = 0;
> +}

shadow_default_mmio_mask alone looks a little bit weird with per-VM MMIO
tracking.  I think it can be removed by moving this code to vmx_vm_init() and
using it there as the VM's MMIO mask/value for the non-EPT case.  If EPT is
enabled, it can be overridden with the new mask/value.

>  
> -	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
> +void kvm_mmu_set_default_mmio_spte_mask(u64 mask)
> +{
> +	shadow_default_mmio_mask = mask;
>  }
> +EXPORT_SYMBOL_GPL(kvm_mmu_set_default_mmio_spte_mask);
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 1bfedbe0585f..96312ab4fffb 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -5,8 +5,6 @@
>  
>  #include "mmu_internal.h"
>  
> -extern bool __read_mostly enable_mmio_caching;
> -
>  /*
>   * A MMU present SPTE is backed by actual memory and may or may not be present
>   * in hardware.  E.g. MMIO SPTEs are not considered present.  Use bit 11, as it
> @@ -160,8 +158,7 @@ extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
>  extern u64 __read_mostly shadow_user_mask;
>  extern u64 __read_mostly shadow_accessed_mask;
>  extern u64 __read_mostly shadow_dirty_mask;
> -extern u64 __read_mostly shadow_mmio_value;
> -extern u64 __read_mostly shadow_mmio_mask;
> +extern u64 __read_mostly shadow_default_mmio_mask;
>  extern u64 __read_mostly shadow_mmio_access_mask;
>  extern u64 __read_mostly shadow_present_mask;
>  extern u64 __read_mostly shadow_me_value;
> @@ -233,10 +230,10 @@ static inline bool is_removed_spte(u64 spte)
>   */
>  extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
>  
> -static inline bool is_mmio_spte(u64 spte)
> +static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
>  {
> -	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
> -	       likely(enable_mmio_caching);
> +	return (spte & kvm->arch.shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
> +		likely(kvm->arch.enable_mmio_caching || kvm_gfn_shared_mask(kvm));
>  }

This chunk (checking kvm_gfn_shared_mask(kvm)) should not be in this patch. 

>  
>  static inline bool is_shadow_present_pte(u64 pte)
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 2ca03ec3bf52..82f1bfac7ee6 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -569,8 +569,8 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  		 * impact the guest since both the former and current SPTEs
>  		 * are nonpresent.
>  		 */
> -		if (WARN_ON(!is_mmio_spte(old_spte) &&
> -			    !is_mmio_spte(new_spte) &&
> +		if (WARN_ON(!is_mmio_spte(kvm, old_spte) &&
> +			    !is_mmio_spte(kvm, new_spte) &&
>  			    !is_removed_spte(new_spte)))
>  			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
>  			       "should not be replaced with another,\n"
> @@ -1108,7 +1108,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  	}
>  
>  	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
> -	if (unlikely(is_mmio_spte(new_spte))) {
> +	if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
>  		vcpu->stat.pf_mmio_spte_created++;
>  		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
>  				     new_spte);
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 815a07c594f1..0abc43d6a115 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4870,7 +4870,7 @@ static __init void svm_adjust_mmio_mask(void)
>  	 */
>  	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
>  
> -	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
> +	kvm_mmu_set_default_mmio_spte_mask(mask);

SVM doesn't need shadow_default_mmio_mask.  Instead, it can define a local
variable in svm.c, and call kvm_mmu_set_mmio_spte_mask(mask, mask,
PT_WRITABLE_MASK | PT_USER_MASK) in svm_vm_init().
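
For example (a rough sketch only; 'svm_mmio_mask' is just a name I made up):

	static u64 __read_mostly svm_mmio_mask;

	static __init void svm_adjust_mmio_mask(void)
	{
		/* ... existing code computing mask_bit ... */

		svm_mmio_mask = (mask_bit < 52) ?
			rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
	}

	static int svm_vm_init(struct kvm *kvm)
	{
		/* per-VM MMIO mask/value, using the new signature */
		kvm_mmu_set_mmio_spte_mask(kvm, svm_mmio_mask, svm_mmio_mask,
					   PT_WRITABLE_MASK | PT_USER_MASK);

		/* ... rest of the existing svm_vm_init() ... */
		return 0;
	}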

>  }
>  
>  static __init void svm_set_cpu_caps(void)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 1d87885245cc..e2415ac55317 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7289,6 +7289,14 @@ int vmx_vm_init(struct kvm *kvm)
>  	if (!ple_gap)
>  		kvm->arch.pause_in_guest = true;
>  
> +	/*
> +	 * EPT Misconfigurations can be generated if the value of bits 2:0
> +	 * of an EPT paging-structure entry is 110b (write/execute).
> +	 */
> +	if (enable_ept)
> +		kvm_mmu_set_mmio_spte_mask(kvm, VMX_EPT_MISCONFIG_WX_VALUE,
> +					   VMX_EPT_RWX_MASK, 0);
> +

As commented above, I think we can remove shadow_default_mmio_mask by moving the
logic from kvm_mmu_reset_all_pte_masks() here.

Or do it the same way as SVM: use a local variable 'mask' in vmx.c, calculate it
during hardware_setup(), and use it here for the non-EPT case.
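
Something like below (a sketch only; 'vmx_default_mmio_mask' is a name I made
up, computed once during hardware_setup() the same way
kvm_mmu_reset_all_pte_masks() computes it today):

	if (enable_ept)
		kvm_mmu_set_mmio_spte_mask(kvm, VMX_EPT_MISCONFIG_WX_VALUE,
					   VMX_EPT_RWX_MASK, 0);
	else
		kvm_mmu_set_mmio_spte_mask(kvm, vmx_default_mmio_mask,
					   vmx_default_mmio_mask,
					   ACC_WRITE_MASK | ACC_USER_MASK);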


>  	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
>  		switch (l1tf_mitigation) {
>  		case L1TF_MITIGATION_OFF:


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 010/102] x86/virt/tdx: Add a helper function to return system wide info about TDX module
  2022-06-27 21:53 ` [PATCH v7 010/102] x86/virt/tdx: Add a helper function to return system wide info about TDX module isaku.yamahata
@ 2022-07-07  2:46   ` Yuan Yao
  2022-07-12  0:39     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-07  2:46 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:02PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> TDX KVM needs system-wide information about the TDX module, struct
> tdsysinfo_struct.  Add a helper function tdx_get_sysinfo() to return it
> instead of KVM getting it with various error checks.  Move out the struct
> definition about it to common place tdx_host.h.

Please correct "tdx_host.h" in the changelog to tdx.h, i.e. arch/x86/include/asm/tdx.h.
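
By the way, for reference, this is roughly how the later KVM patches consume
the new helper (a sketch only; the -EOPNOTSUPP for the not-initialized case is
just an example error code):

	const struct tdsysinfo_struct *tdsysinfo = tdx_get_sysinfo();

	if (!tdsysinfo)		/* TDX module not initialized */
		return -EOPNOTSUPP;
	if (tdsysinfo->num_cpuid_config > TDX_MAX_NR_CPUID_CONFIGS)
		return -EIO;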

>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/tdx.h  | 55 +++++++++++++++++++++++++++++++++++++
>  arch/x86/virt/vmx/tdx/tdx.c | 20 +++++++++++---
>  arch/x86/virt/vmx/tdx/tdx.h | 52 -----------------------------------
>  3 files changed, 71 insertions(+), 56 deletions(-)
>
> diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> index 801f6e10b2db..dfea0dd71bc1 100644
> --- a/arch/x86/include/asm/tdx.h
> +++ b/arch/x86/include/asm/tdx.h
> @@ -89,11 +89,66 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
>  #endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
>
>  #ifdef CONFIG_INTEL_TDX_HOST
> +struct tdx_cpuid_config {
> +	u32	leaf;
> +	u32	sub_leaf;
> +	u32	eax;
> +	u32	ebx;
> +	u32	ecx;
> +	u32	edx;
> +} __packed;
> +
> +#define TDSYSINFO_STRUCT_SIZE		1024
> +#define TDSYSINFO_STRUCT_ALIGNMENT	1024
> +
> +struct tdsysinfo_struct {
> +	/* TDX-SEAM Module Info */
> +	u32	attributes;
> +	u32	vendor_id;
> +	u32	build_date;
> +	u16	build_num;
> +	u16	minor_version;
> +	u16	major_version;
> +	u8	reserved0[14];
> +	/* Memory Info */
> +	u16	max_tdmrs;
> +	u16	max_reserved_per_tdmr;
> +	u16	pamt_entry_size;
> +	u8	reserved1[10];
> +	/* Control Struct Info */
> +	u16	tdcs_base_size;
> +	u8	reserved2[2];
> +	u16	tdvps_base_size;
> +	u8	tdvps_xfam_dependent_size;
> +	u8	reserved3[9];
> +	/* TD Capabilities */
> +	u64	attributes_fixed0;
> +	u64	attributes_fixed1;
> +	u64	xfam_fixed0;
> +	u64	xfam_fixed1;
> +	u8	reserved4[32];
> +	u32	num_cpuid_config;
> +	/*
> +	 * The actual number of CPUID_CONFIG depends on above
> +	 * 'num_cpuid_config'.  The size of 'struct tdsysinfo_struct'
> +	 * is 1024B defined by TDX architecture.  Use a union with
> +	 * specific padding to make 'sizeof(struct tdsysinfo_struct)'
> +	 * equal to 1024.
> +	 */
> +	union {
> +		struct tdx_cpuid_config	cpuid_configs[0];
> +		u8			reserved5[892];
> +	};
> +} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
> +
>  bool platform_tdx_enabled(void);
>  int tdx_init(void);
> +const struct tdsysinfo_struct *tdx_get_sysinfo(void);
>  #else	/* !CONFIG_INTEL_TDX_HOST */
>  static inline bool platform_tdx_enabled(void) { return false; }
>  static inline int tdx_init(void)  { return -ENODEV; }
> +struct tdsysinfo_struct;
> +static inline const struct tdsysinfo_struct *tdx_get_sysinfo(void) { return NULL; }
>  #endif	/* CONFIG_INTEL_TDX_HOST */
>
>  #endif /* !__ASSEMBLY__ */
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index f9a6f8bdade8..14f53494156c 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -364,9 +364,9 @@ static int check_cmrs(struct cmr_info *cmr_array, int *actual_cmr_num)
>  	return 0;
>  }
>
> -static int tdx_get_sysinfo(struct tdsysinfo_struct *tdsysinfo,
> -			   struct cmr_info *cmr_array,
> -			   int *actual_cmr_num)
> +static int __tdx_get_sysinfo(struct tdsysinfo_struct *tdsysinfo,
> +			     struct cmr_info *cmr_array,
> +			     int *actual_cmr_num)
>  {
>  	struct tdx_module_output out;
>  	u64 ret;
> @@ -393,6 +393,18 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *tdsysinfo,
>  	return check_cmrs(cmr_array, actual_cmr_num);
>  }
>
> +const struct tdsysinfo_struct *tdx_get_sysinfo(void)
> +{
> +       const struct tdsysinfo_struct *r = NULL;
> +
> +       mutex_lock(&tdx_module_lock);
> +       if (tdx_module_status == TDX_MODULE_INITIALIZED)
> +	       r = &tdx_sysinfo;
> +       mutex_unlock(&tdx_module_lock);
> +       return r;
> +}
> +EXPORT_SYMBOL_GPL(tdx_get_sysinfo);
> +
>  /*
>   * Skip the memory region below 1MB.  Return true if the entire
>   * region is skipped.  Otherwise, the updated range is returned.
> @@ -1116,7 +1128,7 @@ static int init_tdx_module(void)
>  	if (ret)
>  		goto out;
>
> -	ret = tdx_get_sysinfo(&tdx_sysinfo, tdx_cmr_array, &tdx_cmr_num);
> +	ret = __tdx_get_sysinfo(&tdx_sysinfo, tdx_cmr_array, &tdx_cmr_num);
>  	if (ret)
>  		goto out;
>
> diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
> index e0309558be13..c08e4ee2d0bf 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.h
> +++ b/arch/x86/virt/vmx/tdx/tdx.h
> @@ -65,58 +65,6 @@ struct cmr_info {
>  #define MAX_CMRS			32
>  #define CMR_INFO_ARRAY_ALIGNMENT	512
>
> -struct cpuid_config {
> -	u32	leaf;
> -	u32	sub_leaf;
> -	u32	eax;
> -	u32	ebx;
> -	u32	ecx;
> -	u32	edx;
> -} __packed;
> -
> -#define TDSYSINFO_STRUCT_SIZE		1024
> -#define TDSYSINFO_STRUCT_ALIGNMENT	1024
> -
> -struct tdsysinfo_struct {
> -	/* TDX-SEAM Module Info */
> -	u32	attributes;
> -	u32	vendor_id;
> -	u32	build_date;
> -	u16	build_num;
> -	u16	minor_version;
> -	u16	major_version;
> -	u8	reserved0[14];
> -	/* Memory Info */
> -	u16	max_tdmrs;
> -	u16	max_reserved_per_tdmr;
> -	u16	pamt_entry_size;
> -	u8	reserved1[10];
> -	/* Control Struct Info */
> -	u16	tdcs_base_size;
> -	u8	reserved2[2];
> -	u16	tdvps_base_size;
> -	u8	tdvps_xfam_dependent_size;
> -	u8	reserved3[9];
> -	/* TD Capabilities */
> -	u64	attributes_fixed0;
> -	u64	attributes_fixed1;
> -	u64	xfam_fixed0;
> -	u64	xfam_fixed1;
> -	u8	reserved4[32];
> -	u32	num_cpuid_config;
> -	/*
> -	 * The actual number of CPUID_CONFIG depends on above
> -	 * 'num_cpuid_config'.  The size of 'struct tdsysinfo_struct'
> -	 * is 1024B defined by TDX architecture.  Use a union with
> -	 * specific padding to make 'sizeof(struct tdsysinfo_struct)'
> -	 * equal to 1024.
> -	 */
> -	union {
> -		struct cpuid_config	cpuid_configs[0];
> -		u8			reserved5[892];
> -	};
> -} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
> -
>  struct tdmr_reserved_area {
>  	u64 offset;
>  	u64 size;
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 013/102] KVM: TDX: Make TDX VM type supported
  2022-06-27 21:53 ` [PATCH v7 013/102] KVM: TDX: Make TDX VM type supported isaku.yamahata
@ 2022-07-07  2:55   ` Yuan Yao
  2022-07-12  1:06     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-07  2:55 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:05PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> NOTE: This patch is in position of the patch series for developers to be
> able to test codes during the middle of the patch series although this
> patch series doesn't provide functional features until the all the patches
> of this patch series.  When merging this patch series, this patch can be
> moved to the end.
>
> As first step TDX VM support, return that TDX VM type supported to device
> model, e.g. qemu.  The callback to create guest TD is vm_init callback for
> KVM_CREATE_VM.  Add a place holder function and call a function to
> initialize TDX module on demand because in that callback VMX is enabled by
> hardware_enable callback (vmx_hardware_enable).

if the "initialize TDX module on demand" means calling tdx_init() then
it's already done in kvm_init() ->
kvm_arch_post_hardware_enable_setup from patch 11, so may need commit
messsage update here.

>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/vmx/main.c    | 18 ++++++++++++++++--
>  arch/x86/kvm/vmx/tdx.c     |  6 ++++++
>  arch/x86/kvm/vmx/vmx.c     |  5 -----
>  arch/x86/kvm/vmx/x86_ops.h |  3 ++-
>  4 files changed, 24 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 7be4941e4c4d..47bfa94e538e 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -10,6 +10,12 @@
>  static bool __read_mostly enable_tdx = IS_ENABLED(CONFIG_INTEL_TDX_HOST);
>  module_param_named(tdx, enable_tdx, bool, 0444);
>
> +static bool vt_is_vm_type_supported(unsigned long type)
> +{
> +	return type == KVM_X86_DEFAULT_VM ||
> +		(enable_tdx && tdx_is_vm_type_supported(type));
> +}
> +
>  static __init int vt_hardware_setup(void)
>  {
>  	int ret;
> @@ -33,6 +39,14 @@ static int __init vt_post_hardware_enable_setup(void)
>  	return 0;
>  }
>
> +static int vt_vm_init(struct kvm *kvm)
> +{
> +	if (is_td(kvm))
> +		return -EOPNOTSUPP;	/* Not ready to create guest TD yet. */
> +
> +	return vmx_vm_init(kvm);
> +}
> +
>  struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.name = "kvm_intel",
>
> @@ -43,9 +57,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.hardware_disable = vmx_hardware_disable,
>  	.has_emulated_msr = vmx_has_emulated_msr,
>
> -	.is_vm_type_supported = vmx_is_vm_type_supported,
> +	.is_vm_type_supported = vt_is_vm_type_supported,
>  	.vm_size = sizeof(struct kvm_vmx),
> -	.vm_init = vmx_vm_init,
> +	.vm_init = vt_vm_init,
>  	.vm_destroy = vmx_vm_destroy,
>
>  	.vcpu_precreate = vmx_vcpu_precreate,
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 9cb36716b0f3..3675f7de2735 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -73,6 +73,12 @@ int __init tdx_module_setup(void)
>  	return 0;
>  }
>
> +bool tdx_is_vm_type_supported(unsigned long type)
> +{
> +	/* enable_tdx check is done by the caller. */
> +	return type == KVM_X86_TDX_VM;
> +}
> +
>  int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>  {
>  	u32 max_pa;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 5ba62f8b42ce..b30d73d28e75 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7281,11 +7281,6 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
>  	return err;
>  }
>
> -bool vmx_is_vm_type_supported(unsigned long type)
> -{
> -	return type == KVM_X86_DEFAULT_VM;
> -}
> -
>  #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
>  #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
>
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index a5e85eb4e183..dbfd0e43fd89 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -25,7 +25,6 @@ void vmx_hardware_unsetup(void);
>  int vmx_check_processor_compatibility(void);
>  int vmx_hardware_enable(void);
>  void vmx_hardware_disable(void);
> -bool vmx_is_vm_type_supported(unsigned long type);
>  int vmx_vm_init(struct kvm *kvm);
>  void vmx_vm_destroy(struct kvm *kvm);
>  int vmx_vcpu_precreate(struct kvm *kvm);
> @@ -131,8 +130,10 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
>
>  #ifdef CONFIG_INTEL_TDX_HOST
>  int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
> +bool tdx_is_vm_type_supported(unsigned long type);
>  #else
>  static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
> +static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
>  #endif
>
>  #endif /* __KVM_X86_VMX_X86_OPS_H */
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 022/102] KVM: TDX: create/destroy VM structure
  2022-06-27 21:53 ` [PATCH v7 022/102] KVM: TDX: create/destroy VM structure isaku.yamahata
@ 2022-07-07  6:16   ` Yuan Yao
  2022-07-12  6:21     ` Isaku Yamahata
  2022-08-02 19:46   ` Sean Christopherson
  1 sibling, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-07  6:16 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Kai Huang

On Mon, Jun 27, 2022 at 02:53:14PM -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> As the first step to create TDX guest, create/destroy VM struct.  Assign
> TDX private Host Key ID (HKID) to the TDX guest for memory encryption and
> allocate extra pages for the TDX guest. On destruction, free allocated
> pages, and HKID.
>
> Before tearing down private page tables, TDX requires some resources of the
> guest TD to be destroyed (i.e. keyID must have been reclaimed, etc).  Add
> flush_shadow_all_private callback before tearing down private page tables
> for it.
>
> Add a second kvm_x86_ops hook in kvm_arch_destroy_vm() to support TDX's
> destruction path, which needs to first put the VM into a teardown state,
> then free per-vCPU resources, and finally free per-VM resources.
>
> Co-developed-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm-x86-ops.h |   2 +
>  arch/x86/include/asm/kvm_host.h    |   2 +
>  arch/x86/kvm/vmx/main.c            |  34 ++-
>  arch/x86/kvm/vmx/tdx.c             | 376 +++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/tdx.h             |   2 +
>  arch/x86/kvm/vmx/tdx_errno.h       |   2 +-
>  arch/x86/kvm/vmx/x86_ops.h         |  11 +
>  arch/x86/kvm/x86.c                 |   8 +
>  8 files changed, 433 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index a97cdb203a16..fbb2c6746066 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -21,7 +21,9 @@ KVM_X86_OP(has_emulated_msr)
>  KVM_X86_OP(vcpu_after_set_cpuid)
>  KVM_X86_OP(is_vm_type_supported)
>  KVM_X86_OP(vm_init)
> +KVM_X86_OP_OPTIONAL(flush_shadow_all_private)
>  KVM_X86_OP_OPTIONAL(vm_destroy)
> +KVM_X86_OP_OPTIONAL(vm_free)
>  KVM_X86_OP_OPTIONAL_RET0(vcpu_precreate)
>  KVM_X86_OP(vcpu_create)
>  KVM_X86_OP(vcpu_free)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 089e0a4de926..80df346af117 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1438,7 +1438,9 @@ struct kvm_x86_ops {
>  	bool (*is_vm_type_supported)(unsigned long vm_type);
>  	unsigned int vm_size;
>  	int (*vm_init)(struct kvm *kvm);
> +	void (*flush_shadow_all_private)(struct kvm *kvm);
>  	void (*vm_destroy)(struct kvm *kvm);
> +	void (*vm_free)(struct kvm *kvm);
>
>  	/* Create, but do not attach this VCPU */
>  	int (*vcpu_precreate)(struct kvm *kvm);
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 47bfa94e538e..6a93b19a8b06 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -39,18 +39,44 @@ static int __init vt_post_hardware_enable_setup(void)
>  	return 0;
>  }
>
> +static void vt_hardware_unsetup(void)
> +{
> +	tdx_hardware_unsetup();
> +	vmx_hardware_unsetup();
> +}
> +
>  static int vt_vm_init(struct kvm *kvm)
>  {
>  	if (is_td(kvm))
> -		return -EOPNOTSUPP;	/* Not ready to create guest TD yet. */
> +		return tdx_vm_init(kvm);
>
>  	return vmx_vm_init(kvm);
>  }
>
> +static void vt_flush_shadow_all_private(struct kvm *kvm)
> +{
> +	if (is_td(kvm))
> +		return tdx_mmu_release_hkid(kvm);
> +}
> +
> +static void vt_vm_destroy(struct kvm *kvm)
> +{
> +	if (is_td(kvm))
> +		return;
> +
> +	vmx_vm_destroy(kvm);
> +}
> +
> +static void vt_vm_free(struct kvm *kvm)
> +{
> +	if (is_td(kvm))
> +		return tdx_vm_free(kvm);
> +}
> +
>  struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.name = "kvm_intel",
>
> -	.hardware_unsetup = vmx_hardware_unsetup,
> +	.hardware_unsetup = vt_hardware_unsetup,
>  	.check_processor_compatibility = vmx_check_processor_compatibility,
>
>  	.hardware_enable = vmx_hardware_enable,
> @@ -60,7 +86,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.is_vm_type_supported = vt_is_vm_type_supported,
>  	.vm_size = sizeof(struct kvm_vmx),
>  	.vm_init = vt_vm_init,
> -	.vm_destroy = vmx_vm_destroy,
> +	.flush_shadow_all_private = vt_flush_shadow_all_private,
> +	.vm_destroy = vt_vm_destroy,
> +	.vm_free = vt_vm_free,
>
>  	.vcpu_precreate = vmx_vcpu_precreate,
>  	.vcpu_create = vmx_vcpu_create,
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 3675f7de2735..63f3c7a02cc8 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -31,9 +31,367 @@ struct tdx_capabilities {
>  	struct tdx_cpuid_config cpuid_configs[TDX_MAX_NR_CPUID_CONFIGS];
>  };
>
> +/*
> + * Key id globally used by TDX module: TDX module maps TDR with this TDX global
> + * key id.  TDR includes key id assigned to the TD.  Then TDX module maps other
> + * TD-related pages with the assigned key id.  TDR requires this TDX global key
> + * id for cache flush unlike other TD-related pages.
> + */
> +static u32 tdx_global_keyid __read_mostly;
> +
>  /* Capabilities of KVM + the TDX module. */
>  static struct tdx_capabilities tdx_caps;
>
> +/*
> + * Some TDX SEAMCALLs (TDH.MNG.CREATE, TDH.PHYMEM.CACHE.WB,
> + * TDH.MNG.KEY.RECLAIMID, TDH.MNG.KEY.FREEID etc) tries to acquire a global lock
> + * internally in TDX module.  If failed, TDX_OPERAND_BUSY is returned without
> + * spinning or waiting due to a constraint on execution time.  It's caller's
> + * responsibility to avoid race (or retry on TDX_OPERAND_BUSY).  Use this mutex
> + * to avoid race in TDX module because the kernel knows better about scheduling.
> + */
> +static DEFINE_MUTEX(tdx_lock);
> +static struct mutex *tdx_mng_key_config_lock;
> +
> +static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
> +{
> +	pa &= ~hkid_mask;
> +	pa |= (u64)hkid << hkid_start_pos;
> +
> +	return pa;
> +}
> +
> +static inline bool is_td_created(struct kvm_tdx *kvm_tdx)
> +{
> +	return kvm_tdx->tdr.added;
> +}
> +
> +static inline void tdx_hkid_free(struct kvm_tdx *kvm_tdx)
> +{
> +	tdx_keyid_free(kvm_tdx->hkid);
> +	kvm_tdx->hkid = -1;
> +}
> +
> +static inline bool is_hkid_assigned(struct kvm_tdx *kvm_tdx)
> +{
> +	return kvm_tdx->hkid > 0;
> +}
> +
> +static void tdx_clear_page(unsigned long page)
> +{
> +	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
> +	unsigned long i;
> +
> +	/*
> +	 * Zeroing the page is only necessary for systems with MKTME-i:
> +	 * when re-assign one page from old keyid to a new keyid, MOVDIR64B is
> +	 * required to clear/write the page with new keyid to prevent integrity
> +	 * error when read on the page with new keyid.
> +	 */
> +	if (!static_cpu_has(X86_FEATURE_MOVDIR64B))
> +		return;
> +
> +	for (i = 0; i < 4096; i += 64)
> +		/* MOVDIR64B [rdx], es:rdi */
> +		asm (".byte 0x66, 0x0f, 0x38, 0xf8, 0x3a"
> +		     : : "d" (zero_page), "D" (page + i) : "memory");
> +}
> +
> +static int tdx_reclaim_page(unsigned long va, hpa_t pa, bool do_wb, u16 hkid)
> +{
> +	struct tdx_module_output out;
> +	u64 err;
> +
> +	err = tdh_phymem_page_reclaim(pa, &out);
> +	if (WARN_ON_ONCE(err)) {
> +		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
> +		return -EIO;
> +	}
> +
> +	if (do_wb) {
> +		err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(pa, hkid));
> +		if (WARN_ON_ONCE(err)) {
> +			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
> +			return -EIO;
> +		}
> +	}
> +
> +	tdx_clear_page(va);
> +	return 0;
> +}
> +
> +static int tdx_alloc_td_page(struct tdx_td_page *page)
> +{
> +	page->va = __get_free_page(GFP_KERNEL_ACCOUNT);
> +	if (!page->va)
> +		return -ENOMEM;
> +
> +	page->pa = __pa(page->va);
> +	return 0;
> +}
> +
> +static void tdx_mark_td_page_added(struct tdx_td_page *page)
> +{
> +	WARN_ON_ONCE(page->added);
> +	page->added = true;
> +}
> +
> +static void tdx_reclaim_td_page(struct tdx_td_page *page)
> +{
> +	if (page->added) {
> +		/*
> +		 * TDCX are being reclaimed.  TDX module maps TDCX with HKID
> +		 * assigned to the TD.  Here the cache associated to the TD
> +		 * was already flushed by TDH.PHYMEM.CACHE.WB before here, So
> +		 * cache doesn't need to be flushed again.
> +		 */
> +		if (tdx_reclaim_page(page->va, page->pa, false, 0))
> +			return;
> +
> +		page->added = false;
> +	}
> +	free_page(page->va);
> +}
> +
> +static int tdx_do_tdh_phymem_cache_wb(void *param)
> +{
> +	u64 err = 0;
> +
> +	do {
> +		err = tdh_phymem_cache_wb(!!err);
> +	} while (err == TDX_INTERRUPTED_RESUMABLE);
> +
> +	/* Other thread may have done for us. */
> +	if (err == TDX_NO_HKID_READY_TO_WBCACHE)
> +		err = TDX_SUCCESS;
> +	if (WARN_ON_ONCE(err)) {
> +		pr_tdx_error(TDH_PHYMEM_CACHE_WB, err, NULL);
> +		return -EIO;
> +	}
> +
> +	return 0;
> +}
> +
> +void tdx_mmu_release_hkid(struct kvm *kvm)
> +{
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +	cpumask_var_t packages;
> +	bool cpumask_allocated;
> +	u64 err;
> +	int ret;
> +	int i;
> +
> +	if (!is_hkid_assigned(kvm_tdx))
> +		return;
> +
> +	if (!is_td_created(kvm_tdx))
> +		goto free_hkid;
> +
> +	cpumask_allocated = zalloc_cpumask_var(&packages, GFP_KERNEL);
> +	cpus_read_lock();
> +	for_each_online_cpu(i) {
> +		if (cpumask_allocated &&
> +			cpumask_test_and_set_cpu(topology_physical_package_id(i),
> +						packages))
> +			continue;
> +
> +		/*
> +		 * We can destroy multiple the guest TDs simultaneously.
> +		 * Prevent tdh_phymem_cache_wb from returning TDX_BUSY by
> +		 * serialization.
> +		 */
> +		mutex_lock(&tdx_lock);
> +		ret = smp_call_on_cpu(i, tdx_do_tdh_phymem_cache_wb, NULL, 1);
> +		mutex_unlock(&tdx_lock);
> +		if (ret)
> +			break;
> +	}
> +	cpus_read_unlock();
> +	free_cpumask_var(packages);
> +
> +	mutex_lock(&tdx_lock);
> +	err = tdh_mng_key_freeid(kvm_tdx->tdr.pa);
> +	mutex_unlock(&tdx_lock);
> +	if (WARN_ON_ONCE(err)) {
> +		pr_tdx_error(TDH_MNG_KEY_FREEID, err, NULL);
> +		pr_err("tdh_mng_key_freeid failed. HKID %d is leaked.\n",
> +			kvm_tdx->hkid);
> +		return;
> +	}
> +
> +free_hkid:
> +	tdx_hkid_free(kvm_tdx);
> +}
> +
> +void tdx_vm_free(struct kvm *kvm)
> +{
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +	int i;
> +
> +	/* Can't reclaim or free TD pages if teardown failed. */
> +	if (is_hkid_assigned(kvm_tdx))
> +		return;
> +
> +	for (i = 0; i < tdx_caps.tdcs_nr_pages; i++)
> +		tdx_reclaim_td_page(&kvm_tdx->tdcs[i]);
> +	kfree(kvm_tdx->tdcs);
> +
> +	/*
> +	 * TDX module maps TDR with TDX global HKID.  TDX module may access TDR
> +	 * while operating on TD (Especially reclaiming TDCS).  Cache flush with
> +	 * TDX global HKID is needed.
> +	 */
> +	if (kvm_tdx->tdr.added &&
> +		tdx_reclaim_page(kvm_tdx->tdr.va, kvm_tdx->tdr.pa, true,
> +				tdx_global_keyid))
> +		return;
> +
> +	free_page(kvm_tdx->tdr.va);
> +}
> +
> +static int tdx_do_tdh_mng_key_config(void *param)
> +{
> +	hpa_t *tdr_p = param;
> +	u64 err;
> +
> +	do {
> +		err = tdh_mng_key_config(*tdr_p);
> +
> +		/*
> +		 * If it failed to generate a random key, retry it because this
> +		 * is typically caused by an entropy error of the CPU's random
> +		 * number generator.
> +		 */
> +	} while (err == TDX_KEY_GENERATION_FAILED);
> +
> +	if (WARN_ON_ONCE(err)) {
> +		pr_tdx_error(TDH_MNG_KEY_CONFIG, err, NULL);
> +		return -EIO;
> +	}
> +
> +	return 0;
> +}
> +
> +int tdx_vm_init(struct kvm *kvm)
> +{
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +	cpumask_var_t packages;
> +	int ret, i;
> +	u64 err;
> +
> +	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
> +	kvm->max_vcpus = 0;
> +
> +	kvm_tdx->hkid = tdx_keyid_alloc();
> +	if (kvm_tdx->hkid < 0)
> +		return -EBUSY;
> +
> +	ret = tdx_alloc_td_page(&kvm_tdx->tdr);
> +	if (ret)
> +		goto free_hkid;
> +
> +	kvm_tdx->tdcs = kcalloc(tdx_caps.tdcs_nr_pages, sizeof(*kvm_tdx->tdcs),
> +				GFP_KERNEL_ACCOUNT);
> +	if (!kvm_tdx->tdcs)
> +		goto free_tdr;
> +	for (i = 0; i < tdx_caps.tdcs_nr_pages; i++) {
> +		ret = tdx_alloc_td_page(&kvm_tdx->tdcs[i]);
> +		if (ret)
> +			goto free_tdcs;
> +	}
> +
> +	/*
> +	 * Acquire global lock to avoid TDX_OPERAND_BUSY:
> +	 * TDH.MNG.CREATE and other APIs try to lock the global Key Owner
> +	 * Table (KOT) to track the assigned TDX private HKID.  It doesn't spin
> +	 * to acquire the lock, returns TDX_OPERAND_BUSY instead, and let the
> +	 * caller to handle the contention.  This is because of time limitation
> +	 * usable inside the TDX module and OS/VMM knows better about process
> +	 * scheduling.
> +	 *
> +	 * APIs to acquire the lock of KOT:
> +	 * TDH.MNG.CREATE, TDH.MNG.KEY.FREEID, TDH.MNG.VPFLUSHDONE, and
> +	 * TDH.PHYMEM.CACHE.WB.
> +	 */
> +	mutex_lock(&tdx_lock);
> +	err = tdh_mng_create(kvm_tdx->tdr.pa, kvm_tdx->hkid);
> +	mutex_unlock(&tdx_lock);
> +	if (WARN_ON_ONCE(err)) {
> +		pr_tdx_error(TDH_MNG_CREATE, err, NULL);
> +		ret = -EIO;
> +		goto free_tdcs;
> +	}
> +	tdx_mark_td_page_added(&kvm_tdx->tdr);
> +
> +	if (!zalloc_cpumask_var(&packages, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto free_tdcs;
> +	}
> +	cpus_read_lock();
> +	for_each_online_cpu(i) {
> +		int pkg = topology_physical_package_id(i);
> +
> +		if (cpumask_test_and_set_cpu(pkg, packages))
> +			continue;
> +
> +		/*
> +		 * Program the memory controller in the package with an
> +		 * encryption key associated to a TDX private host key id
> +		 * assigned to this TDR.  Concurrent operations on same memory
> +		 * controller results in TDX_OPERAND_BUSY.  Avoid this race by
> +		 * mutex.
> +		 */
> +		mutex_lock(&tdx_mng_key_config_lock[pkg]);
> +		ret = smp_call_on_cpu(i, tdx_do_tdh_mng_key_config,
> +				      &kvm_tdx->tdr.pa, true);
> +		mutex_unlock(&tdx_mng_key_config_lock[pkg]);
> +		if (ret)
> +			break;
> +	}
> +	cpus_read_unlock();
> +	free_cpumask_var(packages);
> +	if (ret)
> +		goto teardown;
> +
> +	for (i = 0; i < tdx_caps.tdcs_nr_pages; i++) {
> +		err = tdh_mng_addcx(kvm_tdx->tdr.pa, kvm_tdx->tdcs[i].pa);
> +		if (WARN_ON_ONCE(err)) {
> +			pr_tdx_error(TDH_MNG_ADDCX, err, NULL);
> +			ret = -EIO;
> +			goto teardown;
> +		}
> +		tdx_mark_td_page_added(&kvm_tdx->tdcs[i]);
> +	}
> +
> +	/*
> +	 * Note, TDH_MNG_INIT cannot be invoked here.  TDH_MNG_INIT requires a dedicated
> +	 * ioctl() to define the configure CPUID values for the TD.
> +	 */
> +	return 0;
> +
> +	/*
> +	 * The sequence for freeing resources from a partially initialized TD
> +	 * varies based on where in the initialization flow failure occurred.
> +	 * Simply use the full teardown and destroy, which naturally play nice
> +	 * with partial initialization.
> +	 */
> +teardown:
> +	tdx_mmu_release_hkid(kvm);
> +	tdx_vm_free(kvm);
> +	return ret;
> +
> +free_tdcs:
> +	/* @i points at the TDCS page that failed allocation. */
> +	for (--i; i >= 0; i--)
> +		free_page(kvm_tdx->tdcs[i].va);
> +	kfree(kvm_tdx->tdcs);
> +free_tdr:
> +	free_page(kvm_tdx->tdr.va);
> +free_hkid:
> +	tdx_hkid_free(kvm_tdx);
> +	return ret;
> +}
> +
>  int __init tdx_module_setup(void)
>  {
>  	const struct tdsysinfo_struct *tdsysinfo;
> @@ -48,6 +406,8 @@ int __init tdx_module_setup(void)
>  		return ret;
>  	}
>
> +	tdx_global_keyid = tdx_get_global_keyid();

I remember there's already a static variable named "tdx_global_keyid" in
arch/x86/virt/vmx/tdx/tdx.c, isn't there?  We can just use
tdx_get_global_keyid() here without introducing another static variable.
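
E.g. something like below (a sketch of the suggestion, here in tdx_vm_free():
call the accessor at the use site instead of caching the value in KVM):

	if (kvm_tdx->tdr.added &&
	    tdx_reclaim_page(kvm_tdx->tdr.va, kvm_tdx->tdr.pa, true,
			     tdx_get_global_keyid()))
		return;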

> +
>  	tdsysinfo = tdx_get_sysinfo();
>  	if (tdsysinfo->num_cpuid_config > TDX_MAX_NR_CPUID_CONFIGS)
>  		return -EIO;
> @@ -81,7 +441,9 @@ bool tdx_is_vm_type_supported(unsigned long type)
>
>  int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>  {
> +	int max_pkgs;
>  	u32 max_pa;
> +	int i;
>
>  	if (!enable_ept) {
>  		pr_warn("Cannot enable TDX with EPT disabled\n");
> @@ -97,6 +459,14 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>  	if (WARN_ON_ONCE(x86_ops->tlb_remote_flush))
>  		return -EIO;
>
> +	max_pkgs = topology_max_packages();
> +	tdx_mng_key_config_lock = kcalloc(max_pkgs, sizeof(*tdx_mng_key_config_lock),
> +				   GFP_KERNEL);
> +	if (!tdx_mng_key_config_lock)
> +		return -ENOMEM;
> +	for (i = 0; i < max_pkgs; i++)
> +		mutex_init(&tdx_mng_key_config_lock[i]);
> +
>  	max_pa = cpuid_eax(0x80000008) & 0xff;
>  	hkid_start_pos = boot_cpu_data.x86_phys_bits;
>  	hkid_mask = GENMASK_ULL(max_pa - 1, hkid_start_pos);
> @@ -105,3 +475,9 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>
>  	return 0;
>  }
> +
> +void tdx_hardware_unsetup(void)
> +{
> +	/* kfree accepts NULL. */
> +	kfree(tdx_mng_key_config_lock);
> +}
> diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> index f50d37f3fc9c..8058b6b153f8 100644
> --- a/arch/x86/kvm/vmx/tdx.h
> +++ b/arch/x86/kvm/vmx/tdx.h
> @@ -19,6 +19,8 @@ struct kvm_tdx {
>
>  	struct tdx_td_page tdr;
>  	struct tdx_td_page *tdcs;
> +
> +	int hkid;
>  };
>
>  struct vcpu_tdx {
> diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
> index 5c878488795d..590fcfdd1899 100644
> --- a/arch/x86/kvm/vmx/tdx_errno.h
> +++ b/arch/x86/kvm/vmx/tdx_errno.h
> @@ -12,11 +12,11 @@
>  #define TDX_SUCCESS				0x0000000000000000ULL
>  #define TDX_NON_RECOVERABLE_VCPU		0x4000000100000000ULL
>  #define TDX_INTERRUPTED_RESUMABLE		0x8000000300000000ULL
> -#define TDX_LIFECYCLE_STATE_INCORRECT		0xC000060700000000ULL
>  #define TDX_VCPU_NOT_ASSOCIATED			0x8000070200000000ULL
>  #define TDX_KEY_GENERATION_FAILED		0x8000080000000000ULL
>  #define TDX_KEY_STATE_INCORRECT			0xC000081100000000ULL
>  #define TDX_KEY_CONFIGURED			0x0000081500000000ULL
> +#define TDX_NO_HKID_READY_TO_WBCACHE		0x0000082100000000ULL
>  #define TDX_EPT_WALK_FAILED			0xC0000B0000000000ULL
>
>  /*
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index dbfd0e43fd89..663fd8d4063f 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -131,9 +131,20 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
>  #ifdef CONFIG_INTEL_TDX_HOST
>  int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
>  bool tdx_is_vm_type_supported(unsigned long type);
> +void tdx_hardware_unsetup(void);
> +
> +int tdx_vm_init(struct kvm *kvm);
> +void tdx_mmu_release_hkid(struct kvm *kvm);
> +void tdx_vm_free(struct kvm *kvm);
>  #else
>  static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
>  static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
> +static inline void tdx_hardware_unsetup(void) {}
> +
> +static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
> +static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
> +static inline void tdx_flush_shadow_all_private(struct kvm *kvm) {}
> +static inline void tdx_vm_free(struct kvm *kvm) {}
>  #endif
>
>  #endif /* __KVM_X86_VMX_X86_OPS_H */
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 96dc8f52a137..320f902eaf9e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12057,6 +12057,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  	kvm_page_track_cleanup(kvm);
>  	kvm_xen_destroy_vm(kvm);
>  	kvm_hv_destroy_vm(kvm);
> +	static_call_cond(kvm_x86_vm_free)(kvm);
>  }
>
>  static void memslot_rmap_free(struct kvm_memory_slot *slot)
> @@ -12321,6 +12322,13 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>
>  void kvm_arch_flush_shadow_all(struct kvm *kvm)
>  {
> +	/*
> +	 * kvm_mmu_zap_all() zaps both private and shared page tables.  Before
> +	 * tearing down private page tables, TDX requires some TD resources to
> +	 * be destroyed (i.e. keyID must have been reclaimed, etc).  Invoke
> +	 * kvm_x86_flush_shadow_all_private() for this.
> +	 */
> +	static_call_cond(kvm_x86_flush_shadow_all_private)(kvm);
>  	kvm_mmu_zap_all(kvm);
>  }
>
> --
> 2.25.1
>

* Re: [PATCH v7 023/102] KVM: TDX: x86: Add ioctl to get TDX systemwide parameters
  2022-06-27 21:53 ` [PATCH v7 023/102] KVM: TDX: x86: Add ioctl to get TDX systemwide parameters isaku.yamahata
@ 2022-07-07  6:48   ` Yuan Yao
  0 siblings, 0 replies; 219+ messages in thread
From: Yuan Yao @ 2022-07-07  6:48 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, Jun 27, 2022 at 02:53:15PM -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> Implement a system-scoped ioctl to get system-wide parameters for TDX.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm-x86-ops.h    |  1 +
>  arch/x86/include/asm/kvm_host.h       |  1 +
>  arch/x86/include/uapi/asm/kvm.h       | 48 +++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/main.c               |  2 ++
>  arch/x86/kvm/vmx/tdx.c                | 46 +++++++++++++++++++++++++
>  arch/x86/kvm/vmx/x86_ops.h            |  2 ++
>  arch/x86/kvm/x86.c                    |  6 ++++
>  tools/arch/x86/include/uapi/asm/kvm.h | 48 +++++++++++++++++++++++++++
>  8 files changed, 154 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index fbb2c6746066..3677a5015a4f 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -117,6 +117,7 @@ KVM_X86_OP(smi_allowed)
>  KVM_X86_OP(enter_smm)
>  KVM_X86_OP(leave_smm)
>  KVM_X86_OP(enable_smi_window)
> +KVM_X86_OP_OPTIONAL(dev_mem_enc_ioctl)
>  KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
>  KVM_X86_OP_OPTIONAL(mem_enc_register_region)
>  KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 80df346af117..342decc69649 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1591,6 +1591,7 @@ struct kvm_x86_ops {
>  	int (*leave_smm)(struct kvm_vcpu *vcpu, const char *smstate);
>  	void (*enable_smi_window)(struct kvm_vcpu *vcpu);
>
> +	int (*dev_mem_enc_ioctl)(void __user *argp);
>  	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
>  	int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
>  	int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index 9792ec1cc317..273c8d82b9c8 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -534,4 +534,52 @@ struct kvm_pmu_event_filter {
>  #define KVM_X86_DEFAULT_VM	0
>  #define KVM_X86_TDX_VM		1
>
> +/* Trust Domain eXtension sub-ioctl() commands. */
> +enum kvm_tdx_cmd_id {
> +	KVM_TDX_CAPABILITIES = 0,
> +
> +	KVM_TDX_CMD_NR_MAX,
> +};
> +
> +struct kvm_tdx_cmd {
> +	/* enum kvm_tdx_cmd_id */
> +	__u32 id;
> +	/* flags for sub-commend. If sub-command doesn't use this, set zero. */
> +	__u32 flags;
> +	/*
> +	 * data for each sub-command. An immediate or a pointer to the actual
> +	 * data in process virtual address.  If sub-command doesn't use it,
> +	 * set zero.
> +	 */
> +	__u64 data;
> +	/*
> +	 * Auxiliary error code.  The sub-command may return TDX SEAMCALL
> +	 * status code in addition to -Exxx.
> +	 * Defined for consistency with struct kvm_sev_cmd.
> +	 */
> +	__u64 error;
> +	/* Reserved: Defined for consistency with struct kvm_sev_cmd. */
> +	__u64 unused;
> +};
> +
> +struct kvm_tdx_cpuid_config {
> +	__u32 leaf;
> +	__u32 sub_leaf;
> +	__u32 eax;
> +	__u32 ebx;
> +	__u32 ecx;
> +	__u32 edx;
> +};
> +
> +struct kvm_tdx_capabilities {
> +	__u64 attrs_fixed0;
> +	__u64 attrs_fixed1;
> +	__u64 xfam_fixed0;
> +	__u64 xfam_fixed1;
> +
> +	__u32 nr_cpuid_configs;
> +	__u32 padding;
> +	struct kvm_tdx_cpuid_config cpuid_configs[0];
> +};
> +
>  #endif /* _ASM_X86_KVM_H */
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 6a93b19a8b06..7b497ed1f21c 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -212,6 +212,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.complete_emulated_msr = kvm_complete_insn_gp,
>
>  	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
> +
> +	.dev_mem_enc_ioctl = tdx_dev_ioctl,
>  };
>
>  struct kvm_x86_init_ops vt_init_ops __initdata = {
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 63f3c7a02cc8..ec4ebba4152a 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -392,6 +392,52 @@ int tdx_vm_init(struct kvm *kvm)
>  	return ret;
>  }
>
> +int tdx_dev_ioctl(void __user *argp)
> +{
> +	struct kvm_tdx_capabilities __user *user_caps;
> +	struct kvm_tdx_capabilities caps;
> +	struct kvm_tdx_cmd cmd;
> +
> +	BUILD_BUG_ON(sizeof(struct kvm_tdx_cpuid_config) !=
> +		     sizeof(struct tdx_cpuid_config));
> +
> +	if (copy_from_user(&cmd, argp, sizeof(cmd)))
> +		return -EFAULT;
> +	if (cmd.flags || cmd.error || cmd.unused)
> +		return -EINVAL;
> +	/*
> +	 * Currently only KVM_TDX_CAPABILITIES is defined for system-scoped
> +	 * mem_enc_ioctl().
> +	 */
> +	if (cmd.id != KVM_TDX_CAPABILITIES)
> +		return -EINVAL;
> +
> +	user_caps = (void __user *)cmd.data;
> +	if (copy_from_user(&caps, user_caps, sizeof(caps)))
> +		return -EFAULT;
> +
> +	if (caps.nr_cpuid_configs < tdx_caps.nr_cpuid_configs)
> +		return -E2BIG;
> +
> +	caps = (struct kvm_tdx_capabilities) {
> +		.attrs_fixed0 = tdx_caps.attrs_fixed0,
> +		.attrs_fixed1 = tdx_caps.attrs_fixed1,
> +		.xfam_fixed0 = tdx_caps.xfam_fixed0,
> +		.xfam_fixed1 = tdx_caps.xfam_fixed1,
> +		.nr_cpuid_configs = tdx_caps.nr_cpuid_configs,
> +		.padding = 0,
> +	};
> +
> +	if (copy_to_user(user_caps, &caps, sizeof(caps)))
> +		return -EFAULT;
> +	if (copy_to_user(user_caps->cpuid_configs, &tdx_caps.cpuid_configs,
> +			 tdx_caps.nr_cpuid_configs *
> +			 sizeof(struct tdx_cpuid_config)))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
>  int __init tdx_module_setup(void)
>  {
>  	const struct tdsysinfo_struct *tdsysinfo;
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index 663fd8d4063f..3027d9821fe1 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -132,6 +132,7 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
>  int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
>  bool tdx_is_vm_type_supported(unsigned long type);
>  void tdx_hardware_unsetup(void);
> +int tdx_dev_ioctl(void __user *argp);
>
>  int tdx_vm_init(struct kvm *kvm);
>  void tdx_mmu_release_hkid(struct kvm *kvm);
> @@ -140,6 +141,7 @@ void tdx_vm_free(struct kvm *kvm);
>  static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
>  static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
>  static inline void tdx_hardware_unsetup(void) {}
> +static inline int tdx_dev_ioctl(void __user *argp) { return -EOPNOTSUPP; };
>
>  static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
>  static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 320f902eaf9e..6037ce93bcb7 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4565,6 +4565,12 @@ long kvm_arch_dev_ioctl(struct file *filp,
>  			break;
>  		r = kvm_x86_dev_has_attr(&attr);
>  		break;
> +		case KVM_MEMORY_ENCRYPT_OP:
> +			r = -EINVAL;
> +			if (!kvm_x86_ops.dev_mem_enc_ioctl)
> +				goto out;
> +			r = static_call(kvm_x86_dev_mem_enc_ioctl)(argp);
> +			break;

Incorrect indentation, and please move it out of
case KVM_HAS_DEVICE_ATTR: {
}
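
For illustration, a rough sketch of the placement being asked for, with the
new case as a sibling of KVM_HAS_DEVICE_ATTR inside kvm_arch_dev_ioctl()
(reassembled from the quoted hunk above, not a tested diff):

		r = kvm_x86_dev_has_attr(&attr);
		break;
	}
	case KVM_MEMORY_ENCRYPT_OP:
		r = -EINVAL;
		if (!kvm_x86_ops.dev_mem_enc_ioctl)
			goto out;
		r = static_call(kvm_x86_dev_mem_enc_ioctl)(argp);
		break;
	default:
		r = -EINVAL;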

>  	}
>  	default:
>  		r = -EINVAL;
> diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
> index 71a5851475e7..a9ea3573be1b 100644
> --- a/tools/arch/x86/include/uapi/asm/kvm.h
> +++ b/tools/arch/x86/include/uapi/asm/kvm.h
> @@ -528,4 +528,52 @@ struct kvm_pmu_event_filter {
>  #define KVM_X86_DEFAULT_VM	0
>  #define KVM_X86_TDX_VM		1
>
> +/* Trust Domain eXtension sub-ioctl() commands. */
> +enum kvm_tdx_cmd_id {
> +	KVM_TDX_CAPABILITIES = 0,
> +
> +	KVM_TDX_CMD_NR_MAX,
> +};
> +
> +struct kvm_tdx_cmd {
> +	/* enum kvm_tdx_cmd_id */
> +	__u32 id;
> +	/* flags for sub-commend. If sub-command doesn't use this, set zero. */
> +	__u32 flags;
> +	/*
> +	 * data for each sub-command. An immediate or a pointer to the actual
> +	 * data in process virtual address.  If sub-command doesn't use it,
> +	 * set zero.
> +	 */
> +	__u64 data;
> +	/*
> +	 * Auxiliary error code.  The sub-command may return TDX SEAMCALL
> +	 * status code in addition to -Exxx.
> +	 * Defined for consistency with struct kvm_sev_cmd.
> +	 */
> +	__u64 error;
> +	/* Reserved: Defined for consistency with struct kvm_sev_cmd. */
> +	__u64 unused;
> +};
> +
> +struct kvm_tdx_cpuid_config {
> +	__u32 leaf;
> +	__u32 sub_leaf;
> +	__u32 eax;
> +	__u32 ebx;
> +	__u32 ecx;
> +	__u32 edx;
> +};
> +
> +struct kvm_tdx_capabilities {
> +	__u64 attrs_fixed0;
> +	__u64 attrs_fixed1;
> +	__u64 xfam_fixed0;
> +	__u64 xfam_fixed1;
> +
> +	__u32 nr_cpuid_configs;
> +	__u32 padding;
> +	struct kvm_tdx_cpuid_config cpuid_configs[0];
> +};
> +
>  #endif /* _ASM_X86_KVM_H */
> --
> 2.25.1
>

* Re: [PATCH v7 024/102] KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl
  2022-06-27 21:53 ` [PATCH v7 024/102] KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl isaku.yamahata
@ 2022-07-07  7:12   ` Yuan Yao
  0 siblings, 0 replies; 219+ messages in thread
From: Yuan Yao @ 2022-07-07  7:12 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:16PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Add a place holder function for TDX specific VM-scoped ioctl as mem_enc_op.
> TDX specific sub-commands will be added to retrieve/pass TDX specific
> parameters.
>
> KVM_MEMORY_ENCRYPT_OP was introduced for VM-scoped operations specific for
> guest state-protected VM.  It defined subcommands for technology-specific
> operations under KVM_MEMORY_ENCRYPT_OP.  Despite its name, the subcommands
> are not limited to memory encryption, but various technology-specific
> operations are defined.  It's natural to repurpose KVM_MEMORY_ENCRYPT_OP
> for TDX specific operations and define subcommands.
>
> TDX requires VM-scoped, and VCPU-scoped TDX-specific operations for device
> model, for example, qemu.  Getting system-wide parameters, TDX-specific VM
> initialization, and TDX-specific vCPU initialization.  Which requires KVM
> vCPU-scoped operations in addition to the existing VM-scoped operations.

I suggest not talking about vCPU-scoped operations here, because
they're not available in this patch; we can talk about them in the
patch which introduces them.

>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/vmx/main.c    |  9 +++++++++
>  arch/x86/kvm/vmx/tdx.c     | 26 ++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/x86_ops.h |  4 ++++
>  3 files changed, 39 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 7b497ed1f21c..067f5de56c53 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -73,6 +73,14 @@ static void vt_vm_free(struct kvm *kvm)
>  		return tdx_vm_free(kvm);
>  }
>
> +static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
> +{
> +	if (!is_td(kvm))
> +		return -ENOTTY;
> +
> +	return tdx_vm_ioctl(kvm, argp);
> +}
> +
>  struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.name = "kvm_intel",
>
> @@ -214,6 +222,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
>
>  	.dev_mem_enc_ioctl = tdx_dev_ioctl,
> +	.mem_enc_ioctl = vt_mem_enc_ioctl,
>  };
>
>  struct kvm_x86_init_ops vt_init_ops __initdata = {
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index ec4ebba4152a..2a9dfd54189f 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -438,6 +438,32 @@ int tdx_dev_ioctl(void __user *argp)
>  	return 0;
>  }
>
> +int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
> +{
> +	struct kvm_tdx_cmd tdx_cmd;
> +	int r;
> +
> +	if (copy_from_user(&tdx_cmd, argp, sizeof(struct kvm_tdx_cmd)))
> +		return -EFAULT;
> +	if (tdx_cmd.error || tdx_cmd.unused)
> +		return -EINVAL;
> +
> +	mutex_lock(&kvm->lock);
> +
> +	switch (tdx_cmd.id) {
> +	default:
> +		r = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (copy_to_user(argp, &tdx_cmd, sizeof(struct kvm_tdx_cmd)))
> +		r = -EFAULT;
> +
> +out:
> +	mutex_unlock(&kvm->lock);
> +	return r;
> +}
> +
>  int __init tdx_module_setup(void)
>  {
>  	const struct tdsysinfo_struct *tdsysinfo;
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index 3027d9821fe1..ef6115ae0e88 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -137,6 +137,8 @@ int tdx_dev_ioctl(void __user *argp);
>  int tdx_vm_init(struct kvm *kvm);
>  void tdx_mmu_release_hkid(struct kvm *kvm);
>  void tdx_vm_free(struct kvm *kvm);
> +
> +int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
>  #else
>  static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
>  static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
> @@ -147,6 +149,8 @@ static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
>  static inline void tdx_mmu_release_hkid(struct kvm *kvm) {}
>  static inline void tdx_flush_shadow_all_private(struct kvm *kvm) {}
>  static inline void tdx_vm_free(struct kvm *kvm) {}
> +
> +static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
>  #endif
>
>  #endif /* __KVM_X86_VMX_X86_OPS_H */
> --
> 2.25.1
>

* Re: [PATCH v7 101/102] Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX)
  2022-06-27 21:54 ` [PATCH v7 101/102] Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX) isaku.yamahata
@ 2022-07-08  1:34   ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-08  1:34 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini


> +
> +- Wrapping kvm x86_ops: The current choice
> +  Introduce dedicated file for arch/x86/kvm/vmx/main.c (the name,
> +  main.c, is just chosen to show main entry points for callbacks.) and
> +  wrapper functions around all the callbacks with
> +  "if (is-tdx) tdx-callback() else vmx-callback()".
> +
> +  Pros:
> +  - No major change in common x86 KVM code. The change is (mostly)
> +    contained under arch/x86/kvm/vmx/.
> +  - When TDX is disabled(CONFIG_INTEL_TDX_HOST=n), the overhead is
> +    optimized out.
> +  - Micro optimization by avoiding function pointer.
> +  Cons:
> +  - Many boiler plates in arch/x86/kvm/vmx/main.c.
> +
> +Alternative:
> +- Introduce another callback layer under arch/x86/kvm/vmx.
> +  Pros:
> +  - No major change in common x86 KVM code. The change is (mostly)
> +    contained under arch/x86/kvm/vmx/.
> +  - clear separation on callbacks.
> +  Cons:
> +  - overhead in VMX even when TDX is disabled(CONFIG_INTEL_TDX_HOST=n).
> +

Why put "Alternative" in the documentation?  You may put it in the cover
letter so people can judge whether the design is reasonable, but it should not
be in the documentation.

-- 
Thanks,
-Kai



* Re: [PATCH v7 032/102] KVM: x86/mmu: introduce config for PRIVATE KVM MMU
  2022-06-27 21:53 ` [PATCH v7 032/102] KVM: x86/mmu: introduce config for PRIVATE KVM MMU isaku.yamahata
@ 2022-07-08  1:53   ` Kai Huang
  2022-07-13  1:25     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-08  1:53 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> To Keep the case of non TDX intact, introduce a new config option for
> private KVM MMU support.  At the moment, this is synonym for
> CONFIG_INTEL_TDX_HOST && CONFIG_KVM_INTEL.  The new flag make it clear
> that the config is only for x86 KVM MMU.

What is the "new flag"?

> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/Kconfig | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index e3cbd7706136..5a59abc83179 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -129,4 +129,8 @@ config KVM_XEN
>  config KVM_EXTERNAL_WRITE_TRACKING
>  	bool
>  
> +config KVM_MMU_PRIVATE
> +	def_bool y
> +	depends on INTEL_TDX_HOST && KVM_INTEL
> +
>  endif # VIRTUALIZATION


* Re: [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization
  2022-06-27 21:53 ` [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization isaku.yamahata
@ 2022-07-08  2:14   ` Yuan Yao
  2022-07-12 20:35     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-08  2:14 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, Jun 27, 2022 at 02:53:22PM -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> TD guest vcpu need to be configured before ready to run which requests
> addtional information from Device model (e.g. qemu), one 64bit value is
> passed to vcpu's RCX as an initial value.  Repurpose KVM_MEMORY_ENCRYPT_OP
> to vcpu-scope and add new sub-commands KVM_TDX_INIT_VCPU under it for such
> additional vcpu configuration.
>
> Add callback for kvm vCPU-scoped operations of KVM_MEMORY_ENCRYPT_OP and
> add a new subcommand, KVM_TDX_INIT_VCPU, for further vcpu initialization.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm-x86-ops.h    |  1 +
>  arch/x86/include/asm/kvm_host.h       |  1 +
>  arch/x86/include/uapi/asm/kvm.h       |  1 +
>  arch/x86/kvm/vmx/main.c               |  9 +++++++
>  arch/x86/kvm/vmx/tdx.c                | 36 +++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/tdx.h                |  4 +++
>  arch/x86/kvm/vmx/x86_ops.h            |  2 ++
>  arch/x86/kvm/x86.c                    |  6 +++++
>  tools/arch/x86/include/uapi/asm/kvm.h |  1 +
>  9 files changed, 61 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 3677a5015a4f..32a6df784ea6 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -119,6 +119,7 @@ KVM_X86_OP(leave_smm)
>  KVM_X86_OP(enable_smi_window)
>  KVM_X86_OP_OPTIONAL(dev_mem_enc_ioctl)
>  KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
> +KVM_X86_OP_OPTIONAL(vcpu_mem_enc_ioctl)
>  KVM_X86_OP_OPTIONAL(mem_enc_register_region)
>  KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
>  KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 81638987cdb9..e5d4e5b60fdc 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1595,6 +1595,7 @@ struct kvm_x86_ops {
>
>  	int (*dev_mem_enc_ioctl)(void __user *argp);
>  	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
> +	int (*vcpu_mem_enc_ioctl)(struct kvm_vcpu *vcpu, void __user *argp);
>  	int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
>  	int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
>  	int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index f89774ccd4ae..399c28b2f4f5 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -538,6 +538,7 @@ struct kvm_pmu_event_filter {
>  enum kvm_tdx_cmd_id {
>  	KVM_TDX_CAPABILITIES = 0,
>  	KVM_TDX_INIT_VM,
> +	KVM_TDX_INIT_VCPU,
>
>  	KVM_TDX_CMD_NR_MAX,
>  };
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 4f4ed4ad65a7..ce12cc8276ef 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -113,6 +113,14 @@ static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>  	return tdx_vm_ioctl(kvm, argp);
>  }
>
> +static int vt_vcpu_mem_enc_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> +{
> +	if (!is_td_vcpu(vcpu))
> +		return -EINVAL;
> +
> +	return tdx_vcpu_ioctl(vcpu, argp);
> +}
> +
>  struct kvm_x86_ops vt_x86_ops __initdata = {
>  	.name = "kvm_intel",
>
> @@ -255,6 +263,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>
>  	.dev_mem_enc_ioctl = tdx_dev_ioctl,
>  	.mem_enc_ioctl = vt_mem_enc_ioctl,
> +	.vcpu_mem_enc_ioctl = vt_vcpu_mem_enc_ioctl,
>  };
>
>  struct kvm_x86_init_ops vt_init_ops __initdata = {
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index d9fe3f6463c3..2772775457b0 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -83,6 +83,11 @@ static inline bool is_hkid_assigned(struct kvm_tdx *kvm_tdx)
>  	return kvm_tdx->hkid > 0;
>  }
>
> +static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
> +{
> +	return kvm_tdx->finalized;
> +}
> +
>  static void tdx_clear_page(unsigned long page)
>  {
>  	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
> @@ -805,6 +810,37 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
>  	return r;
>  }
>
> +int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> +{
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
> +	struct vcpu_tdx *tdx = to_tdx(vcpu);
> +	struct kvm_tdx_cmd cmd;
> +	u64 err;
> +
> +	if (tdx->initialized)

Minor: How about "tdx_vcpu->initialized"?  There's "is_td_initialized()"
below, and the bare "tdx" here may lead readers to treat it as the whole TD
VM until they confirm its type again.
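
For clarity, roughly what that rename would look like in tdx_vcpu_ioctl()
(just a naming sketch of the quoted code above, not a tested change):

	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
	struct vcpu_tdx *tdx_vcpu = to_tdx(vcpu);

	if (tdx_vcpu->initialized)
		return -EINVAL;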

> +		return -EINVAL;
> +
> +	if (!is_td_initialized(vcpu->kvm) || is_td_finalized(kvm_tdx))
> +		return -EINVAL;
> +
> +	if (copy_from_user(&cmd, argp, sizeof(cmd)))
> +		return -EFAULT;
> +
> +	if (cmd.error || cmd.unused)
> +		return -EINVAL;
> +	if (cmd.flags || cmd.id != KVM_TDX_INIT_VCPU)
> +		return -EINVAL;
> +
> +	err = tdh_vp_init(tdx->tdvpr.pa, cmd.data);
> +	if (WARN_ON_ONCE(err)) {
> +		pr_tdx_error(TDH_VP_INIT, err, NULL);
> +		return -EIO;
> +	}
> +
> +	tdx->initialized = true;
> +	return 0;
> +}
> +
>  int __init tdx_module_setup(void)
>  {
>  	const struct tdsysinfo_struct *tdsysinfo;
> diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> index 892e7dc96e99..337c3adb4fcf 100644
> --- a/arch/x86/kvm/vmx/tdx.h
> +++ b/arch/x86/kvm/vmx/tdx.h
> @@ -25,6 +25,8 @@ struct kvm_tdx {
>  	u64 xfam;
>  	int hkid;
>
> +	bool finalized;
> +
>  	u64 tsc_offset;
>  	unsigned long tsc_khz;
>  };
> @@ -35,6 +37,8 @@ struct vcpu_tdx {
>  	struct tdx_td_page tdvpr;
>  	struct tdx_td_page *tdvpx;
>
> +	bool initialized;
> +
>  	/*
>  	 * Dummy to make pmu_intel not corrupt memory.
>  	 * TODO: Support PMU for TDX.  Future work.
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index 42b634971544..7e38c7b756d4 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -143,6 +143,7 @@ void tdx_vcpu_free(struct kvm_vcpu *vcpu);
>  void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
>
>  int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
> +int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
>  #else
>  static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
>  static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
> @@ -159,6 +160,7 @@ static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
>  static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
>
>  static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
> +static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
>  #endif
>
>  #endif /* __KVM_X86_VMX_X86_OPS_H */
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6037ce93bcb7..4309ef0ade21 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5778,6 +5778,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  	case KVM_SET_DEVICE_ATTR:
>  		r = kvm_vcpu_ioctl_device_attr(vcpu, ioctl, argp);
>  		break;
> +	case KVM_MEMORY_ENCRYPT_OP:
> +		r = -ENOTTY;
> +		if (!kvm_x86_ops.vcpu_mem_enc_ioctl)
> +			goto out;
> +		r = kvm_x86_ops.vcpu_mem_enc_ioctl(vcpu, argp);
> +		break;
>  	default:
>  		r = -EINVAL;
>  	}
> diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
> index 779dfd683d66..60a79f9ef174 100644
> --- a/tools/arch/x86/include/uapi/asm/kvm.h
> +++ b/tools/arch/x86/include/uapi/asm/kvm.h
> @@ -532,6 +532,7 @@ struct kvm_pmu_event_filter {
>  enum kvm_tdx_cmd_id {
>  	KVM_TDX_CAPABILITIES = 0,
>  	KVM_TDX_INIT_VM,
> +	KVM_TDX_INIT_VCPU,
>
>  	KVM_TDX_CMD_NR_MAX,
>  };
> --
> 2.25.1
>

* Re: [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits
  2022-06-27 21:53 ` [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits isaku.yamahata
@ 2022-07-08  2:15   ` Kai Huang
  2022-07-13  4:52     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-08  2:15 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Rick Edgecombe

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Rick Edgecombe <rick.p.edgecombe@intel.com>

I don't think this is appropriate any more.  You can add Co-developed-by I
guess.

> 
> TDX repurposes one GPA bits (51 bit or 47 bit based on configuration) to
> indicate the GPA is private(if cleared) or shared (if set) with VMM.  If
> GPA.shared is set, GPA is converted existing conventional EPT pointed by
> EPTP.  If GPA.shared bit is cleared, GPA is converted by Secure-EPT(S-EPT)

Not sure whether Secure EPT has even been mentioned before in this series.  If
not, perhaps better to explain it here.  Or not sure whether you need to mention
S-EPT at all.

> TDX module manages.  VMM has to issue SEAM call to TDX module to operate on

SEAM call -> SEAMCALL

> S-EPT.  e.g. populating/zapping guest page or shadow page by TDH.PAGE.{ADD,
> REMOVE} for guest page, TDH.PAGE.SEPT.{ADD, REMOVE} S-EPT etc.

Not sure why you want to mention those particular SEAMCALLs.

> 
> Several hooks needs to be added to KVM MMU to support TDX.  Add a function

needs -> need.

Not sure why you need the first sentence at all.

But I do think you should mention adding the per-VM scope 'gfn_shared_mask'.

> to check if KVM MMU is running for TDX and several functions for address
> conversation between private-GPA and shared-GPA.
> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  2 ++
>  arch/x86/kvm/mmu.h              | 32 ++++++++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e5d4e5b60fdc..2c47aab72a1b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1339,7 +1339,9 @@ struct kvm_arch {
>  	 */
>  	u32 max_vcpu_ids;
>  
> +#ifdef CONFIG_KVM_MMU_PRIVATE
>  	gfn_t gfn_shared_mask;
> +#endif

As Xiaoyao said, please introduce gfn_shared_mask in this patch.

And by applying this patch, nothing will prevent you from turning on
INTEL_TDX_HOST and KVM_INTEL, which also turns on KVM_MMU_PRIVATE.

So is 'kvm_arch::gfn_shared_mask' guaranteed to be 0?  If not, can a legal
(shared) GFN for a normal VM potentially be treated as private?

If yes, perhaps explicitly call this out in the changelog so people don't need
to worry about it?

>  };
>  
>  struct kvm_vm_stat {
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index f8192864b496..ccf0ba7a6387 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -286,4 +286,36 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
>  		return gpa;
>  	return translate_nested_gpa(vcpu, gpa, access, exception);
>  }
> +
> +static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
> +{
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +	return kvm->arch.gfn_shared_mask;
> +#else
> +	return 0;
> +#endif
> +}
> +
> +static inline gfn_t kvm_gfn_shared(const struct kvm *kvm, gfn_t gfn)
> +{
> +	return gfn | kvm_gfn_shared_mask(kvm);
> +}
> +
> +static inline gfn_t kvm_gfn_private(const struct kvm *kvm, gfn_t gfn)
> +{
> +	return gfn & ~kvm_gfn_shared_mask(kvm);
> +}
> +
> +static inline gpa_t kvm_gpa_private(const struct kvm *kvm, gpa_t gpa)
> +{
> +	return gpa & ~gfn_to_gpa(kvm_gfn_shared_mask(kvm));
> +}
> +
> +static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
> +{
> +	gfn_t mask = kvm_gfn_shared_mask(kvm);
> +
> +	return mask && !(gpa_to_gfn(gpa) & mask);
> +}
> +
>  #endif


* Re: [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-06-27 21:53 ` [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE isaku.yamahata
@ 2022-07-08  2:23   ` Kai Huang
  2022-07-19 14:49     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-08  2:23 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
> to inject #VE conditionally and set #VE suppress bit in EPT entry.  For VMX
> case, #VE isn't used.  If #VE happens for VMX, it's a bug.  To be
> defensive (test that VMX case isn't broken), introduce option
> ept_violation_ve_test and when it's set, set error.

I don't see why we need this patch.  It may be helpful during your test, but why
do we need this patch for formal submission?

And for a normal guest, what prevents one vcpu from sending #VE IPI to another
vcpu?
 
> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/vmx.h | 12 +++++++
>  arch/x86/kvm/vmx/vmx.c     | 68 +++++++++++++++++++++++++++++++++++++-
>  arch/x86/kvm/vmx/vmx.h     |  3 ++
>  3 files changed, 82 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
> index 6231ef005a50..f0f8eecf55ac 100644
> --- a/arch/x86/include/asm/vmx.h
> +++ b/arch/x86/include/asm/vmx.h
> @@ -68,6 +68,7 @@
>  #define SECONDARY_EXEC_ENCLS_EXITING		VMCS_CONTROL_BIT(ENCLS_EXITING)
>  #define SECONDARY_EXEC_RDSEED_EXITING		VMCS_CONTROL_BIT(RDSEED_EXITING)
>  #define SECONDARY_EXEC_ENABLE_PML               VMCS_CONTROL_BIT(PAGE_MOD_LOGGING)
> +#define SECONDARY_EXEC_EPT_VIOLATION_VE		VMCS_CONTROL_BIT(EPT_VIOLATION_VE)
>  #define SECONDARY_EXEC_PT_CONCEAL_VMX		VMCS_CONTROL_BIT(PT_CONCEAL_VMX)
>  #define SECONDARY_EXEC_XSAVES			VMCS_CONTROL_BIT(XSAVES)
>  #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC	VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC)
> @@ -223,6 +224,8 @@ enum vmcs_field {
>  	VMREAD_BITMAP_HIGH              = 0x00002027,
>  	VMWRITE_BITMAP                  = 0x00002028,
>  	VMWRITE_BITMAP_HIGH             = 0x00002029,
> +	VE_INFORMATION_ADDRESS		= 0x0000202A,
> +	VE_INFORMATION_ADDRESS_HIGH	= 0x0000202B,
>  	XSS_EXIT_BITMAP                 = 0x0000202C,
>  	XSS_EXIT_BITMAP_HIGH            = 0x0000202D,
>  	ENCLS_EXITING_BITMAP		= 0x0000202E,
> @@ -628,4 +631,13 @@ enum vmx_l1d_flush_state {
>  
>  extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
>  
> +struct vmx_ve_information {
> +	u32 exit_reason;
> +	u32 delivery;
> +	u64 exit_qualification;
> +	u64 guest_linear_address;
> +	u64 guest_physical_address;
> +	u16 eptp_index;
> +};
> +
>  #endif
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e2415ac55317..e3d304b14df0 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -126,6 +126,9 @@ module_param(error_on_inconsistent_vmcs_config, bool, 0444);
>  static bool __read_mostly dump_invalid_vmcs = 0;
>  module_param(dump_invalid_vmcs, bool, 0644);
>  
> +static bool __read_mostly ept_violation_ve_test = 0;
> +module_param(ept_violation_ve_test, bool, 0444);
> +
>  #define MSR_BITMAP_MODE_X2APIC		1
>  #define MSR_BITMAP_MODE_X2APIC_APICV	2
>  
> @@ -726,6 +729,13 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu)
>  
>  	eb = (1u << PF_VECTOR) | (1u << UD_VECTOR) | (1u << MC_VECTOR) |
>  	     (1u << DB_VECTOR) | (1u << AC_VECTOR);
> +	/*
> +	 * #VE isn't used for VMX, but for TDX.  To test against unexpected
> +	 * change related to #VE for VMX, intercept unexpected #VE and warn on
> +	 * it.
> +	 */
> +	if (ept_violation_ve_test)
> +		eb |= 1u << VE_VECTOR;
>  	/*
>  	 * Guest access to VMware backdoor ports could legitimately
>  	 * trigger #GP because of TSS I/O permission bitmap.
> @@ -2524,6 +2534,8 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
>  			SECONDARY_EXEC_NOTIFY_VM_EXITING;
>  		if (cpu_has_sgx())
>  			opt2 |= SECONDARY_EXEC_ENCLS_EXITING;
> +		if (ept_violation_ve_test)
> +			opt2 |= SECONDARY_EXEC_EPT_VIOLATION_VE;
>  		if (adjust_vmx_controls(min2, opt2,
>  					MSR_IA32_VMX_PROCBASED_CTLS2,
>  					&_cpu_based_2nd_exec_control) < 0)
> @@ -2558,6 +2570,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
>  			return -EIO;
>  
>  		vmx_cap->ept = 0;
> +		_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
>  	}
>  	if (!(_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_VPID) &&
>  	    vmx_cap->vpid) {
> @@ -4390,6 +4403,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
>  		exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
>  	if (!enable_ept) {
>  		exec_control &= ~SECONDARY_EXEC_ENABLE_EPT;
> +		exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
>  		enable_unrestricted_guest = 0;
>  	}
>  	if (!enable_unrestricted_guest)
> @@ -4517,8 +4531,40 @@ static void init_vmcs(struct vcpu_vmx *vmx)
>  
>  	exec_controls_set(vmx, vmx_exec_control(vmx));
>  
> -	if (cpu_has_secondary_exec_ctrls())
> +	if (cpu_has_secondary_exec_ctrls()) {
>  		secondary_exec_controls_set(vmx, vmx_secondary_exec_control(vmx));
> +		if (secondary_exec_controls_get(vmx) &
> +		    SECONDARY_EXEC_EPT_VIOLATION_VE) {
> +			if (!vmx->ve_info) {
> +				/* ve_info must be page aligned. */
> +				struct page *page;
> +
> +				BUILD_BUG_ON(sizeof(*vmx->ve_info) > PAGE_SIZE);
> +				page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
> +				if (page)
> +					vmx->ve_info = page_to_virt(page);
> +			}
> +			if (vmx->ve_info) {
> +				/*
> +				 * Allow #VE delivery. CPU sets this field to
> +				 * 0xFFFFFFFF on #VE delivery.  Another #VE can
> +				 * occur only if software clears the field.
> +				 */
> +				vmx->ve_info->delivery = 0;
> +				vmcs_write64(VE_INFORMATION_ADDRESS,
> +					     __pa(vmx->ve_info));
> +			} else {
> +				/*
> +				 * Because SECONDARY_EXEC_EPT_VIOLATION_VE is
> +				 * used only when ept_violation_ve_test is true,
> +				 * it's okay to go with the bit disabled.
> +				 */
> +				pr_err("Failed to allocate ve_info. disabling EPT_VIOLATION_VE.\n");
> +				secondary_exec_controls_clearbit(
> +					vmx, SECONDARY_EXEC_EPT_VIOLATION_VE);
> +			}
> +		}
> +	}
>  
>  	if (cpu_has_tertiary_exec_ctrls())
>  		tertiary_exec_controls_set(vmx, vmx_tertiary_exec_control(vmx));
> @@ -5116,7 +5162,14 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>  		if (handle_guest_split_lock(kvm_rip_read(vcpu)))
>  			return 1;
>  		fallthrough;
> +	case VE_VECTOR:
>  	default:
> +		if (ept_violation_ve_test && ex_no == VE_VECTOR) {
> +			pr_err("VMEXIT due to unexpected #VE.\n");
> +			secondary_exec_controls_clearbit(
> +				vmx, SECONDARY_EXEC_EPT_VIOLATION_VE);
> +			return 1;
> +		}
>  		kvm_run->exit_reason = KVM_EXIT_EXCEPTION;
>  		kvm_run->ex.exception = ex_no;
>  		kvm_run->ex.error_code = error_code;
> @@ -6182,6 +6235,17 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
>  	if (secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
>  		pr_err("Virtual processor ID = 0x%04x\n",
>  		       vmcs_read16(VIRTUAL_PROCESSOR_ID));
> +	if (secondary_exec_control & SECONDARY_EXEC_EPT_VIOLATION_VE) {
> +		struct vmx_ve_information *ve_info;
> +		pr_err("VE info address = 0x%016llx\n",
> +		       vmcs_read64(VE_INFORMATION_ADDRESS));
> +		ve_info = __va(vmcs_read64(VE_INFORMATION_ADDRESS));
> +		pr_err("ve_info: 0x%08x 0x%08x 0x%016llx 0x%016llx 0x%016llx 0x%04x\n",
> +		       ve_info->exit_reason, ve_info->delivery,
> +		       ve_info->exit_qualification,
> +		       ve_info->guest_linear_address,
> +		       ve_info->guest_physical_address, ve_info->eptp_index);
> +	}
>  }
>  
>  /*
> @@ -7173,6 +7237,8 @@ void vmx_vcpu_free(struct kvm_vcpu *vcpu)
>  	free_vpid(vmx->vpid);
>  	nested_vmx_free_vcpu(vcpu);
>  	free_loaded_vmcs(vmx->loaded_vmcs);
> +	if (vmx->ve_info)
> +		free_page((unsigned long)vmx->ve_info);
>  }
>  
>  int vmx_vcpu_create(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 9feb994e5ea2..60d93c38e014 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -338,6 +338,9 @@ struct vcpu_vmx {
>  		DECLARE_BITMAP(read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
>  		DECLARE_BITMAP(write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
>  	} shadow_msr_intercept;
> +
> +	/* ve_info must be page aligned. */
> +	struct vmx_ve_information *ve_info;
>  };
>  
>  struct kvm_vmx {


* Re: [PATCH v7 048/102] KVM: x86/mmu: Disallow dirty logging for x86 TDX
  2022-06-27 21:53 ` [PATCH v7 048/102] KVM: x86/mmu: Disallow dirty logging for x86 TDX isaku.yamahata
@ 2022-07-08  2:30   ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-08  2:30 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson, Xiaoyao Li

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> TDX doesn't support dirty logging.  Report dirty logging isn't supported so
> that device model, for example qemu, can properly handle it.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>

Xiaoyao's SoB looks weird.

> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/x86.c       |  5 +++++
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/kvm_main.c      | 15 ++++++++++++---
>  3 files changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 4309ef0ade21..dcd1f5e2ba05 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -13164,6 +13164,11 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
>  }
>  EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
>  
> +bool kvm_arch_dirty_log_supported(struct kvm *kvm)
> +{
> +	return kvm->arch.vm_type != KVM_X86_TDX_VM;
> +}
> +
>  EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
>  EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
>  EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 79a4988fd51f..6fd8ec297236 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1452,6 +1452,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
>  int kvm_arch_post_init_vm(struct kvm *kvm);
>  void kvm_arch_pre_destroy_vm(struct kvm *kvm);
>  int kvm_arch_create_vm_debugfs(struct kvm *kvm);
> +bool kvm_arch_dirty_log_supported(struct kvm *kvm);
>  
>  #ifndef __KVM_HAVE_ARCH_VM_ALLOC
>  /*
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 7a5261eb7eb8..703c1d0c98da 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1467,9 +1467,18 @@ static void kvm_replace_memslot(struct kvm *kvm,
>  	}
>  }
>  
> -static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
> +bool __weak kvm_arch_dirty_log_supported(struct kvm *kvm)
>  {
> -	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
> +	return true;
> +}
> +
> +static int check_memory_region_flags(struct kvm *kvm,
> +				     const struct kvm_userspace_memory_region *mem)
> +{
> +	u32 valid_flags = 0;
> +
> +	if (kvm_arch_dirty_log_supported(kvm))
> +		valid_flags |= KVM_MEM_LOG_DIRTY_PAGES;
>  
>  #ifdef __KVM_HAVE_READONLY_MEM
>  	valid_flags |= KVM_MEM_READONLY;
> @@ -1871,7 +1880,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  	int as_id, id;
>  	int r;
>  
> -	r = check_memory_region_flags(mem);
> +	r = check_memory_region_flags(kvm, mem);
>  	if (r)
>  		return r;
>  


* Re: [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  2022-06-27 21:53 ` [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU isaku.yamahata
@ 2022-07-08  3:44   ` Kai Huang
  2022-07-26 23:39     ` Isaku Yamahata
  2022-07-11  8:28   ` Yuan Yao
  2022-07-12  2:36   ` Yuan Yao
  2 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-08  3:44 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel; +Cc: isaku.yamahata, Paolo Bonzini

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> Allocate mirrored private page table for private page table, and add hooks
> to operate on mirrored private page table.  This patch adds only hooks. As
> kvm_gfn_shared_mask() returns false always, those hooks aren't called yet.
> 
> Because private guest page is protected, page copy with mmu_notifier to
> migrate page doesn't work.  Callback from backing store is needed.
> 
> When the faulting GPA is private, the KVM fault is also called private.
> When resolving private KVM, allocate mirrored private page table and call
> hooks to operate on mirrored private page table. On the change of the
> private PTE entry, invoke kvm_x86_ops hook in __handle_changed_spte() to
> propagate the change to mirrored private page table. The following depicts
> the relationship.
> 
>   private KVM page fault   |
>       |                    |
>       V                    |
>  private GPA               |
>       |                    |
>       V                    |
>  KVM private PT root       |  CPU private PT root
>       |                    |           |
>       V                    |           V
>    private PT ---hook to mirror--->mirrored private PT
>       |                    |           |
>       \--------------------+------\    |
>                            |      |    |
>                            |      V    V
>                            |    private guest page
>                            |
>                            |
>      non-encrypted memory  |    encrypted memory
>                            |
> PT: page table
> 
> The existing KVM TDP MMU code uses atomic update of SPTE.  On populating
> the EPT entry, atomically set the entry.  However, it requires TLB
> shootdown to zap SPTE.  To address it, the entry is frozen with the special
> SPTE value that clears the present bit. After the TLB shootdown, the entry
> is set to the eventual value (unfreeze).
> 
> For mirrored private page table, hooks are called to update mirrored
> private page table in addition to direct access to the private SPTE. For
> the zapping case, it works to freeze the SPTE. It can call hooks in
> addition to TLB shootdown.  For populating the private SPTE entry, there
> can be a race condition without further protection
> 
>   vcpu 1: populating 2M private SPTE
>   vcpu 2: populating 4K private SPTE
>   vcpu 2: TDX SEAMCALL to update 4K mirrored private SPTE => error
>   vcpu 1: TDX SEAMCALL to update 2M mirrored private SPTE
> 
> To avoid the race, the frozen SPTE is utilized.  Instead of atomic update
> of the private entry, freeze the entry, call the hook that update mirrored
> private SPTE, set the entry to the final value.
> 
> Support 4K page only at this stage.  2M page support can be done in future
> patches.
> 
> Add is_private member to kvm_page_fault to indicate the fault is private.
> Also is_private member to struct tdp_inter to propagate it.
> 
> Co-developed-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm-x86-ops.h |   2 +
>  arch/x86/include/asm/kvm_host.h    |  20 +++
>  arch/x86/kvm/mmu/mmu.c             |  86 +++++++++-
>  arch/x86/kvm/mmu/mmu_internal.h    |  37 +++++
>  arch/x86/kvm/mmu/paging_tmpl.h     |   2 +-
>  arch/x86/kvm/mmu/tdp_iter.c        |   1 +
>  arch/x86/kvm/mmu/tdp_iter.h        |   5 +-
>  arch/x86/kvm/mmu/tdp_mmu.c         | 247 +++++++++++++++++++++++------
>  arch/x86/kvm/mmu/tdp_mmu.h         |   7 +-
>  virt/kvm/kvm_main.c                |   1 +
>  10 files changed, 346 insertions(+), 62 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 32a6df784ea6..6982d57e4518 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -93,6 +93,8 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
>  KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
>  KVM_X86_OP(get_mt_mask)
>  KVM_X86_OP(load_mmu_pgd)
> +KVM_X86_OP_OPTIONAL(free_private_sp)
> +KVM_X86_OP_OPTIONAL(handle_changed_private_spte)
>  KVM_X86_OP(has_wbinvd_exit)
>  KVM_X86_OP(get_l2_tsc_offset)
>  KVM_X86_OP(get_l2_tsc_multiplier)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index bfc934dc9a33..f2a4d5a18851 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -440,6 +440,7 @@ struct kvm_mmu {
>  			 struct kvm_mmu_page *sp);
>  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
>  	struct kvm_mmu_root_info root;
> +	hpa_t private_root_hpa;
>  	union kvm_cpu_role cpu_role;
>  	union kvm_mmu_page_role root_role;
>  
> @@ -1435,6 +1436,20 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
>  	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
>  }
>  
> +struct kvm_spte {
> +	kvm_pfn_t pfn;
> +	bool is_present;
> +	bool is_leaf;
> +};
> +
> +struct kvm_spte_change {
> +	gfn_t gfn;
> +	enum pg_level level;
> +	struct kvm_spte old;
> +	struct kvm_spte new;
> +	void *sept_page;
> +};
> +
>  struct kvm_x86_ops {
>  	const char *name;
>  
> @@ -1547,6 +1562,11 @@ struct kvm_x86_ops {
>  	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
>  			     int root_level);
>  
> +	int (*free_private_sp)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +			       void *private_sp);
> +	void (*handle_changed_private_spte)(
> +		struct kvm *kvm, const struct kvm_spte_change *change);
> +
>  	bool (*has_wbinvd_exit)(void);
>  
>  	u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a5bf3e40e209..ef925722ee28 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1577,7 +1577,11 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
>  
>  	if (is_tdp_mmu_enabled(kvm))
> -		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
> +		/*
> +		 * private page needs to be kept and handle page migration
> +		 * on next EPT violation.
> +		 */

I don't think this series supports page migration, does it?  How can page
migration be handled on the next EPT violation?

> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush, false);

The meaning of the additional 'false' isn't clear at all.  I need to go through
the entire patch to figure out what it means.

How about splitting the 'add additional false argument' part into a separate
patch (no functional change), giving it a short changelog to explain, and
putting it before this patch?  In this way we can clearly understand what it
does here.
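
As an aside, independent of how the series is split, the call sites could be
made self-documenting by giving the flag a name instead of passing a bare
bool.  Purely a hypothetical sketch (these enum names are made up; the series
itself just passes true/false meaning "also zap private mappings"):

	enum kvm_tdp_zap_private {
		KVM_TDP_ZAP_SHARED_ONLY = 0,
		KVM_TDP_ZAP_PRIVATE_TOO = 1,
	};

	/* e.g. in kvm_unmap_gfn_range(): keep private mappings. */
	flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush,
					    KVM_TDP_ZAP_SHARED_ONLY);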
>  
>  	return flush;
>  }
> @@ -3082,7 +3086,8 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
>  		 * SPTE value without #VE suppress bit cleared
>  		 * (kvm->arch.shadow_mmio_value = 0).
>  		 */
> -		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
> +		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching &&
> +			     !kvm_gfn_shared_mask(vcpu->kvm)) ||

This chunk belongs to the MMIO fault handling patch.

>  		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
>  			return RET_PF_EMULATE;
>  	}
> @@ -3454,7 +3459,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
>  		goto out_unlock;
>  
>  	if (is_tdp_mmu_enabled(vcpu->kvm)) {
> -		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
> +		if (kvm_gfn_shared_mask(vcpu->kvm) &&
> +		    !VALID_PAGE(mmu->private_root_hpa)) {
> +			root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, true);
> +			mmu->private_root_hpa = root;
> +		}
> +		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, false);
>  		mmu->root.hpa = root;
>  	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
>  		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
> @@ -4026,6 +4036,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
>  	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
>  }
>  
> +/*
> + * Private page can't be release on mmu_notifier without losing page contents.
> + * The help, callback, from backing store is needed to allow page migration.

I can hardly understand what the ', callback, ' part means.  I guess it is
used to explain exactly what the 'help' is.  I am not a native speaker, but
the grammar doesn't look right to me.

This series is full of this pattern.  It hurts readability a lot.  Would you
improve it?

> + * For now, pin the page.
> + */

Back to the technical point.  IMHO you need to first explain how page
migration is supposed to work, to justify why page migration needs help from
the backing store.  Perhaps you can briefly explain in the changelog so people
can understand which part is done by the backing store and which part is done
by KVM.

For instance, for anonymous pages, page migration is done by the core kernel.
So why can't the backing store handle page migration for TDX?  Is it
technically unable to by design, or is it just not implemented yet?

If the latter, why doesn't the backing store pin the page directly instead of
requiring KVM to do it?


> +static int kvm_faultin_pfn_private_mapped(struct kvm_vcpu *vcpu,
> +					   struct kvm_page_fault *fault)
> +{
> +	hva_t hva = gfn_to_hva_memslot(fault->slot, fault->gfn);
> +	struct page *page[1];
> +
> +	fault->map_writable = false;
> +	fault->pfn = KVM_PFN_ERR_FAULT;
> +	if (hva == KVM_HVA_ERR_RO_BAD || hva == KVM_HVA_ERR_BAD)
> +		return RET_PF_CONTINUE;
> +
> +	/* TDX allows only RWX.  Read-only isn't supported. */
> +	WARN_ON_ONCE(!fault->write);
> +	if (pin_user_pages_fast(hva, 1, FOLL_WRITE, page) != 1)
> +		return RET_PF_INVALID;
> +
> +	fault->map_writable = true;
> +	fault->pfn = page_to_pfn(page[0]);
> +	return RET_PF_CONTINUE;
> +}
> +
>  static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_memory_slot *slot = fault->slot;
> @@ -4058,6 +4094,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  			return RET_PF_EMULATE;
>  	}
>  
> +	if (fault->is_private)
> +		return kvm_faultin_pfn_private_mapped(vcpu, fault);
> +
>  	async = false;
>  	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
>  					  fault->write, &fault->map_writable,
> @@ -4110,6 +4149,17 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
>  	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
>  }
>  
> +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r)
> +{
> +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
> +		return;
> +
> +	if (fault->is_private)
> +		put_page(pfn_to_page(fault->pfn));
> +	else
> +		kvm_release_pfn_clean(fault->pfn);
> +}

What's the purpose of 'int r'?  Is it even used?

> +
>  static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
> @@ -4117,7 +4167,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	unsigned long mmu_seq;
>  	int r;
>  
> -	fault->gfn = fault->addr >> PAGE_SHIFT;
> +	fault->gfn = gpa_to_gfn(fault->addr) & ~kvm_gfn_shared_mask(vcpu->kvm);
>  	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);

Where is fault->is_private set? Shouldn't it be set here?
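
Since kvm_page_fault::is_private is declared const later in this patch, it
presumably has to be filled in where the fault is constructed (e.g. in
kvm_mmu_do_page_fault()'s initializer) rather than here.  A rough guess at
what that could look like, assuming kvm_is_private_gpa() from patch 033 is the
intended predicate (not code quoted from the series):

	struct kvm_page_fault fault = {
		.addr = cr2_or_gpa,
		/* ... existing initializers ... */
		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
	};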

>  
>  	if (page_fault_handle_page_track(vcpu, fault))
> @@ -4166,7 +4216,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  		read_unlock(&vcpu->kvm->mmu_lock);
>  	else
>  		write_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(fault->pfn);
> +	kvm_mmu_release_fault(vcpu->kvm, fault, r);
>  	return r;
>  }
>  
> @@ -5665,6 +5715,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
>  
>  	mmu->root.hpa = INVALID_PAGE;
>  	mmu->root.pgd = 0;
> +	mmu->private_root_hpa = INVALID_PAGE;
>  	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
>  		mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
>  
> @@ -5855,6 +5906,10 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>  	 * lead to use-after-free.
>  	 */
>  	if (is_tdp_mmu_enabled(kvm))
> +		/*
> +		 * For now private root is never invalidate during VM is running,
> +		 * so this can only happen for shared roots.
> +		 */

Please put the comment on the code that actually does the job.

>  		kvm_tdp_mmu_zap_invalidated_roots(kvm);
>  }
>  
> @@ -5882,7 +5937,8 @@ static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
>  		      .may_block = false,
>  		};
>  
> -		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
> +		/* All private page should be zapped on memslot deletion. */
> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush, true);
>  	} else {
>  		flush = slot_handle_level(kvm, slot, kvm_zap_rmapp, PG_LEVEL_4K,
>  					  KVM_MAX_HUGEPAGE_LEVEL, true);
> @@ -5990,7 +6046,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
>  	if (is_tdp_mmu_enabled(kvm)) {
>  		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
>  			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
> -						      gfn_end, true, flush);
> +						      gfn_end, true, flush, false);

Add a comment on why kvm_zap_gfn_range() only zaps shared mappings?
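
For example, something along these lines, with the wording guessed from the
comments elsewhere in this patch about private mappings only being zapped on
memslot deletion (the author should confirm the actual reason):

	/*
	 * Zap only the shared mappings here.  Private mappings are kept and
	 * are only zapped on memslot deletion, since the private root is
	 * never invalidated while the VM is running.
	 */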

>  	}
>  
>  	if (flush)
> @@ -6023,6 +6079,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  	}
>  
> +	/*
> +	 * For now this can only happen for non-TD VM, because TD private
> +	 * mapping doesn't support write protection.  kvm_tdp_mmu_wrprot_slot()
> +	 * will give a WARN() if it hits for TD.
> +	 */

Unless I am mistaken, "kvm_tdp_mmu_wrprot_slot() will give a WARN() if it hits
for TD" is done in your later patch "KVM: x86/tdp_mmu: Ignore unsupported mmu
operation on private GFNs".  Why put the comment here?

Please move this comment to that patch, and I think you can put that patch
before this patch.

And this problem happens repeatedly in this series.  Could you check the entire
series?


>  	if (is_tdp_mmu_enabled(kvm)) {
>  		read_lock(&kvm->mmu_lock);
>  		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
> @@ -6111,6 +6172,9 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>  		sp = sptep_to_sp(sptep);
>  		pfn = spte_to_pfn(*sptep);
>  
> +		/* Private page dirty logging is not supported. */
> +		KVM_BUG_ON(is_private_sptep(sptep), kvm);
> +

Looks like this chunk should belong to patch "KVM: x86/tdp_mmu: Ignore
unsupported mmu operation on private GFNs".

Or you can just merge the two patches together if that makes things clearer.

>  		/*
>  		 * We cannot do huge page mapping for indirect shadow pages,
>  		 * which are found on the last rmap (level = 1) when not using
> @@ -6151,6 +6215,11 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  	}
>  
> +	/*
> +	 * This should only be reachable in case of log-dirty, wihch TD private
> +	 * mapping doesn't support so far.  kvm_tdp_mmu_zap_collapsible_sptes()
> +	 * internally gives a WARN() when it hits.
> +	 */

Same as above.

>  	if (is_tdp_mmu_enabled(kvm)) {
>  		read_lock(&kvm->mmu_lock);
>  		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
> @@ -6437,6 +6506,9 @@ int kvm_mmu_vendor_module_init(void)
>  void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_unload(vcpu);
> +	if (is_tdp_mmu_enabled(vcpu->kvm))
> +		mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->private_root_hpa,
> +				NULL);

Cannot judge correctness now, but at least a comment would help here.
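
Something like the below is what I would expect, with my guess at the reason in
the comment (please fix it up if the real reason is different):

	kvm_mmu_unload(vcpu);
	/*
	 * The private root is never invalidated or put on prev_roots, so it
	 * is not freed by kvm_mmu_unload() above and has to be freed
	 * explicitly when the vCPU goes away.
	 */
	if (is_tdp_mmu_enabled(vcpu->kvm))
		mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->private_root_hpa,
				   NULL);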

>  	free_mmu_pages(&vcpu->arch.root_mmu);
>  	free_mmu_pages(&vcpu->arch.guest_mmu);
>  	mmu_free_memory_caches(vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 9f3a6bea60a3..d3b30d62aca0 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -6,6 +6,8 @@
>  #include <linux/kvm_host.h>
>  #include <asm/kvm_host.h>
>  
> +#include "mmu.h"
> +
>  #undef MMU_DEBUG
>  
>  #ifdef MMU_DEBUG
> @@ -164,11 +166,30 @@ static inline void kvm_mmu_alloc_private_sp(
>  	WARN_ON_ONCE(!sp->private_sp);
>  }
>  
> +static inline int kvm_alloc_private_sp_for_split(
> +	struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	gfp &= ~__GFP_ZERO;
> +	sp->private_sp = (void*)__get_free_page(gfp);
> +	if (!sp->private_sp)
> +		return -ENOMEM;
> +	return 0;
> +}

What does "for_split" mean?  Why do we need it?

> +
>  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
>  {
>  	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
>  		free_page((unsigned long)sp->private_sp);
>  }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	if (is_private_sp(root))
> +		return kvm_gfn_private(kvm, gfn);
> +	else
> +		return kvm_gfn_shared(kvm, gfn);
> +}
>  #else
>  static inline bool is_private_sp(struct kvm_mmu_page *sp)
>  {
> @@ -194,11 +215,25 @@ static inline void kvm_mmu_alloc_private_sp(
>  {
>  }
>  
> +static inline int kvm_alloc_private_sp_for_split(
> +	struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	return -ENOMEM;
> +}
> +
>  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
>  {
>  }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	return gfn;
> +}
>  #endif
>  
> +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r);
> +
>  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>  	/*
> @@ -246,6 +281,7 @@ struct kvm_page_fault {
>  	/* Derived from mmu and global state.  */
>  	const bool is_tdp;
>  	const bool nx_huge_page_workaround_enabled;
> +	const bool is_private;
>  
>  	/*
>  	 * Whether a >4KB mapping can be created or is forbidden due to NX
> @@ -327,6 +363,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		.prefetch = prefetch,
>  		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
>  		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
> +		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),

I guess putting this chunk together with the code that sets up fault->gfn would be clearer?

>  
>  		.max_level = vcpu->kvm->arch.tdp_max_page_level,
>  		.req_level = PG_LEVEL_4K,
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 62ae590d4e5b..e5b73638bd83 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -877,7 +877,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  
>  out_unlock:
>  	write_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(fault->pfn);
> +	kvm_mmu_release_fault(vcpu->kvm, fault, r);

Too painful to review.  If 'r' is ever needed, please at least consider a more
meaningful name.

>  	return r;
>  }
>  
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index ee4802d7b36c..4ed50f3c424d 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -53,6 +53,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
>  	iter->min_level = min_level;
>  	iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root->spt;
>  	iter->as_id = kvm_mmu_page_as_id(root);
> +	iter->is_private = is_private_sp(root);
>  
>  	tdp_iter_restart(iter);
>  }
> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index adfca0cf94d3..dec56795c5da 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -71,7 +71,7 @@ struct tdp_iter {
>  	tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
>  	/* A pointer to the current SPTE */
>  	tdp_ptep_t sptep;
> -	/* The lowest GFN mapped by the current SPTE */
> +	/* The lowest GFN (shared bits included) mapped by the current SPTE */
>  	gfn_t gfn;
>  	/* The level of the root page given to the iterator */
>  	int root_level;
> @@ -94,6 +94,9 @@ struct tdp_iter {
>  	 * level instead of advancing to the next entry.
>  	 */
>  	bool yielded;
> +
> +	/* True if this iter is handling private KVM page fault. */
> +	bool is_private;
>  };
>  
>  /*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index d874c79ab96c..12f75e60a254 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -278,18 +278,24 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
>  		    kvm_mmu_page_as_id(_root) != _as_id) {		\
>  		} else
>  
> -static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *tdp_mmu_alloc_sp(
> +	struct kvm_vcpu *vcpu, bool private, bool is_root)
>  {
>  	struct kvm_mmu_page *sp;
>  
>  	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
>  	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
>  
> +	if (private)
> +		kvm_mmu_alloc_private_sp(vcpu, sp, is_root);
> +	else
> +		kvm_mmu_init_private_sp(sp, NULL);
> +
>  	return sp;
>  }
>  
> -static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
> -			    gfn_t gfn, union kvm_mmu_page_role role)
> +static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn,
> +			    union kvm_mmu_page_role role)
>  {
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
>  
> @@ -297,7 +303,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>  	sp->gfn = gfn;
>  	sp->ptep = sptep;
>  	sp->tdp_mmu_page = true;
> -	kvm_mmu_init_private_sp(sp);
>  
>  	trace_kvm_mmu_get_page(sp, true);
>  }
> @@ -316,7 +321,8 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
>  	tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
>  }
>  
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *kvm_tdp_mmu_get_vcpu_root(struct kvm_vcpu *vcpu,
> +						      bool private)
>  {
>  	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
>  	struct kvm *kvm = vcpu->kvm;
> @@ -330,11 +336,12 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>  	 */
>  	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
>  		if (root->role.word == role.word &&
> +		    is_private_sp(root) == private &&
>  		    kvm_tdp_mmu_get_root(root))

Would it be better to have a role.private bit, so you don't need this change?

>  			goto out;
>  	}
>  
> -	root = tdp_mmu_alloc_sp(vcpu);
> +	root = tdp_mmu_alloc_sp(vcpu, private, true);

With role.private, I think you can avoid the 'private' argument here?

And can you check sp->role.level to determine whether it is a root?
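
Roughly what I have in mind, where role.private is a hypothetical new bit in
kvm_mmu_page_role (it does not exist today), is:

	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;

	role.private = private;		/* hypothetical new role bit */

	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
		/* role.word now covers private vs. shared, no extra check. */
		if (root->role.word == role.word &&
		    kvm_tdp_mmu_get_root(root))
			goto out;
	}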


>  	tdp_mmu_init_sp(root, NULL, 0, role);
>  
>  	refcount_set(&root->tdp_mmu_root_count, 1);
> @@ -344,12 +351,17 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>  	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
>  
>  out:
> -	return __pa(root->spt);
> +	return root;
> +}
> +
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private)
> +{
> +	return __pa(kvm_tdp_mmu_get_vcpu_root(vcpu, private)->spt);
>  }
>  
>  static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				u64 old_spte, u64 new_spte, int level,
> -				bool shared);
> +				bool private_spte, u64 old_spte,
> +				u64 new_spte, int level, bool shared);
>  
>  static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
>  {
> @@ -410,6 +422,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
>   *
>   * @kvm: kvm instance
>   * @pt: the page removed from the paging structure
> + * @is_private: pt is private or not.
>   * @shared: This operation may not be running under the exclusive use
>   *	    of the MMU lock and the operation must synchronize with other
>   *	    threads that might be modifying SPTEs.
> @@ -422,7 +435,8 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
>   * this thread will be responsible for ensuring the page is freed. Hence the
>   * early rcu_dereferences in the function.
>   */
> -static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
> +static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool is_private,
> +			      bool shared)

I think you can get whether the page table is private or not by ...
>  {
>  	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));

... checking is_private_sp(sp), right?

Why do you need the 'is_private' argument?
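
I.e. just derive it locally, e.g. (untested sketch):

static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
{
	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
	bool is_private = is_private_sp(sp);
	int level = sp->role.level;

	/* ... the rest of the function uses the local is_private ... */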

>  	int level = sp->role.level;
> @@ -498,8 +512,20 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>  			old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
>  							  REMOVED_SPTE, level);
>  		}
> -		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
> -				    old_spte, REMOVED_SPTE, level, shared);
> +		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn, is_private,
> +				    old_spte, REMOVED_SPTE, level,
> +				    shared);
> +	}
> +
> +	if (is_private && WARN_ON(static_call(kvm_x86_free_private_sp)(
> +					  kvm, sp->gfn, sp->role.level,
> +					  kvm_mmu_private_sp(sp)))) {
> +		/*
> +		 * Failed to unlink Secure EPT page and there is nothing to do
> +		 * further.  Intentionally leak the page to prevent the kernel
> +		 * from accessing the encrypted page.
> +		 */
> +		kvm_mmu_init_private_sp(sp, NULL);

At least explicitly give an error message, or even a WARN().
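
E.g. (the message wording is only an illustration):

	if (is_private && WARN_ON(static_call(kvm_x86_free_private_sp)(
					  kvm, sp->gfn, sp->role.level,
					  kvm_mmu_private_sp(sp)))) {
		pr_err("failed to free Secure EPT page for GFN 0x%llx, leaking it\n",
		       sp->gfn);
		kvm_mmu_init_private_sp(sp, NULL);
	}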

>  	}
>  
>  	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
> @@ -510,6 +536,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   * @kvm: kvm instance
>   * @as_id: the address space of the paging structure the SPTE was a part of
>   * @gfn: the base GFN that was mapped by the SPTE
> + * @private_spte: the SPTE is private or not
>   * @old_spte: The value of the SPTE before the change
>   * @new_spte: The value of the SPTE after the change
>   * @level: the level of the PT the SPTE is part of in the paging structure
> @@ -521,14 +548,30 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   * This function must be called for all TDP SPTE modifications.
>   */
>  static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				  u64 old_spte, u64 new_spte, int level,
> -				  bool shared)
> +				  bool private_spte, u64 old_spte,
> +				  u64 new_spte, int level, bool shared)

I am wondering whether you can just pass the parent 'sp', or the sptep, so that you
can get all the role information internally, including whether it is private.  I guess
that would be more flexible.

>  {
>  	bool was_present = is_shadow_present_pte(old_spte);
>  	bool is_present = is_shadow_present_pte(new_spte);
>  	bool was_leaf = was_present && is_last_spte(old_spte, level);
>  	bool is_leaf = is_present && is_last_spte(new_spte, level);
> -	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
> +	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
> +	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
> +	bool pfn_changed = old_pfn != new_pfn;
> +	struct kvm_spte_change change = {
> +		.gfn = gfn,
> +		.level = level,
> +		.old = {
> +			.pfn = old_pfn,
> +			.is_present = was_present,
> +			.is_leaf = was_leaf,
> +		},
> +		.new = {
> +			.pfn = new_pfn,
> +			.is_present = is_present,
> +			.is_leaf = is_leaf,
> +		},
> +	};
>  
>  	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
>  	WARN_ON(level < PG_LEVEL_4K);
> @@ -595,7 +638,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  
>  	if (was_leaf && is_dirty_spte(old_spte) &&
>  	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
> -		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
> +		kvm_set_pfn_dirty(old_pfn);
>  
>  	/*
>  	 * Recursively handle child PTs if the change removed a subtree from
> @@ -604,16 +647,47 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  	 * pages are kernel allocations and should never be migrated.
>  	 */
>  	if (was_present && !was_leaf &&
> -	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
> -		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
> +	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) {
> +		WARN_ON(private_spte !=
> +			is_private_sptep(spte_to_child_pt(old_spte, level)));
> +		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level),
> +				  private_spte, shared);
> +	}
> +
> +	/*
> +	 * Special handling for the private mapping.  We are either
> +	 * setting up new mapping at middle level page table, or leaf,
> +	 * or tearing down existing mapping.
> +	 *
> +	 * This is after handling lower page table by above
> +	 * handle_remove_tdp_mmu_page().  S-EPT requires to remove S-EPT tables
> +	 * after removing childrens.
> +	 */
> +	if (private_spte &&
> +	    /* Ignore change of software only bits. e.g. host_writable */
> +	    (was_leaf != is_leaf || was_present != is_present || pfn_changed)) {
> +		void *sept_page = NULL;
> +
> +		if (is_present && !is_leaf) {
> +			struct kvm_mmu_page *sp = to_shadow_page(pfn_to_hpa(new_pfn));
> +
> +			sept_page = kvm_mmu_private_sp(sp);
> +			WARN_ON(!sept_page);
> +			WARN_ON(sp->role.level + 1 != level);
> +			WARN_ON(sp->gfn != gfn);
> +		}
> +		change.sept_page = sept_page;
> +
> +		static_call(kvm_x86_handle_changed_private_spte)(kvm, &change);
> +	}
>  }
>  
>  static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				u64 old_spte, u64 new_spte, int level,
> -				bool shared)
> +				bool private_spte, u64 old_spte, u64 new_spte,
> +				int level, bool shared)
>  {
> -	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
> -			      shared);
> +	__handle_changed_spte(kvm, as_id, gfn, private_spte,
> +			old_spte, new_spte, level, shared);
>  	handle_changed_spte_acc_track(old_spte, new_spte, level);
>  	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
>  				      new_spte, level);
> @@ -640,6 +714,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  					  struct tdp_iter *iter,
>  					  u64 new_spte)
>  {
> +	bool freeze_spte = iter->is_private && !is_removed_spte(new_spte);
> +	u64 tmp_spte = freeze_spte ? REMOVED_SPTE : new_spte;

Perhaps I am missing something.  Could you add comments to explain the logic?
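
If my reading is right, the intent is roughly the below; a comment along these
lines (corrected where my guess is off) above freeze_spte would help a lot:

	/*
	 * For a private SPTE the Secure-EPT update (via SEAMCALL) cannot be
	 * done atomically together with the KVM-side SPTE update.  Freeze the
	 * SPTE with REMOVED_SPTE first so that other threads back off, and
	 * only write the real new_spte once __handle_changed_spte() has
	 * updated the Secure EPT.
	 */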

>  	u64 *sptep = rcu_dereference(iter->sptep);
>  	u64 old_spte;
>  
> @@ -657,7 +733,7 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
>  	 * does not hold the mmu_lock.
>  	 */
> -	old_spte = cmpxchg64(sptep, iter->old_spte, new_spte);
> +	old_spte = cmpxchg64(sptep, iter->old_spte, tmp_spte);
>  	if (old_spte != iter->old_spte) {
>  		/*
>  		 * The page table entry was modified by a different logical
> @@ -669,10 +745,14 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  		return -EBUSY;
>  	}
>  
> -	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
> -			      new_spte, iter->level, true);
> +	__handle_changed_spte(
> +		kvm, iter->as_id, iter->gfn, iter->is_private,
> +		iter->old_spte, new_spte, iter->level, true);
>  	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
>  
> +	if (freeze_spte)
> +		__kvm_tdp_mmu_write_spte(sptep, new_spte);
> +
>  	return 0;
>  }
>  
> @@ -734,13 +814,15 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>   *		      unless performing certain dirty logging operations.
>   *		      Leaving record_dirty_log unset in that case prevents page
>   *		      writes from being double counted.
> + * @is_private:       The fault is private.
>   *
>   * Returns the old SPTE value, which _may_ be different than @old_spte if the
>   * SPTE had voldatile bits.
>   */
>  static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
> -			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
> -			      bool record_acc_track, bool record_dirty_log)
> +			       u64 old_spte, u64 new_spte, gfn_t gfn, int level,
> +			       bool record_acc_track, bool record_dirty_log,
> +			       bool is_private)
>  {
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>  
> @@ -755,7 +837,8 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
>  
>  	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
>  
> -	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
> +	__handle_changed_spte(kvm, as_id, gfn, is_private,
> +			      old_spte, new_spte, level, false);
>  
>  	if (record_acc_track)
>  		handle_changed_spte_acc_track(old_spte, new_spte, level);
> @@ -774,7 +857,8 @@ static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
>  	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
>  					    iter->old_spte, new_spte,
>  					    iter->gfn, iter->level,
> -					    record_acc_track, record_dirty_log);
> +					    record_acc_track, record_dirty_log,
> +					    iter->is_private);
>  }
>  
>  static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
> @@ -807,8 +891,11 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
>  			continue;					\
>  		else
>  
> -#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)		\
> -	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
> +#define tdp_mmu_for_each_pte(_iter, _mmu, _private, _start, _end)	\
> +	for_each_tdp_pte(_iter,						\
> +		 to_shadow_page((_private) ? _mmu->private_root_hpa :	\
> +				_mmu->root.hpa),			\
> +		_start, _end)
>  
>  /*
>   * Yield if the MMU lock is contended or this thread needs to return control
> @@ -945,7 +1032,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  
>  	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
>  			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
> -			   true, true);
> +			   true, true, is_private_sp(sp));
>  
>  	return true;
>  }
> @@ -961,13 +1048,21 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>   * operation can cause a soft lockup.
>   */
>  static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
> -			      gfn_t start, gfn_t end, bool can_yield, bool flush)
> +			      gfn_t start, gfn_t end, bool can_yield, bool flush,
> +			      bool drop_private)
>  {
>  	struct tdp_iter iter;
>  
>  	end = min(end, tdp_mmu_max_gfn_exclusive());
>  
>  	lockdep_assert_held_write(&kvm->mmu_lock);
> +	/*
> +	 * Extend [start, end) to include GFN shared bit when TDX is enabled,
> +	 * and for shared mapping range.
> +	 */
> +	WARN_ON_ONCE(!is_private_sp(root) && drop_private);
> +	start = kvm_gfn_for_root(kvm, root, start);
> +	end = kvm_gfn_for_root(kvm, root, end);

So the GFN given to the iterator never has the shared bit set, right?

>  
>  	rcu_read_lock();
>  
> @@ -1002,12 +1097,13 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>   * MMU lock.
>   */
>  bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
> -			   bool can_yield, bool flush)
> +			   bool can_yield, bool flush, bool drop_private)
>  {
>  	struct kvm_mmu_page *root;
>  
>  	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
> -		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);
> +		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush,
> +					  drop_private && is_private_sp(root));


	if (is_private_sp(root) && !drop_private)
		continue;

	flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);

In this case, I guess you can remove the 'drop_private' argument from tdp_mmu_zap_leafs()?

>  
>  	return flush;
>  }
> @@ -1067,6 +1163,12 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
>  
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>  	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
> +		/*
> +		 * Skip private root since private page table
> +		 * is only torn down when VM is destroyed.
> +		 */
> +		if (is_private_sp(root))
> +			continue;
>  		if (!root->role.invalid &&
>  		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
>  			root->role.invalid = true;
> @@ -1087,14 +1189,22 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  	u64 new_spte;
>  	int ret = RET_PF_FIXED;
>  	bool wrprot = false;
> +	unsigned long pte_access = ACC_ALL;
> +	gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);

Here it looks like iter->gfn still contains the shared bits, which is not
consistent with the above.

Can you put some words into the changelog explaining exactly which GFN you
put into the iterator?

Or can you even split this part out as a separate patch?

>  
>  	WARN_ON(sp->role.level != fault->goal_level);
> +
> +	/* TDX shared GPAs are no executable, enforce this for the SDV. */
> +	if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
> +		pte_access &= ~ACC_EXEC_MASK;
> +
>  	if (unlikely(!fault->slot))
> -		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
> +		new_spte = make_mmio_spte(vcpu, gfn_unalias, pte_access);

This part belongs to the MMIO fault handling patch.

>  	else
> -		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
> -					 fault->pfn, iter->old_spte, fault->prefetch, true,
> -					 fault->map_writable, &new_spte);
> +		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
> +				   gfn_unalias, fault->pfn, iter->old_spte,
> +				   fault->prefetch, true, fault->map_writable,
> +				   &new_spte);
>  
>  	if (new_spte == iter->old_spte)
>  		ret = RET_PF_SPURIOUS;
> @@ -1167,8 +1277,7 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
>  	return 0;
>  }
>  
> -static int tdp_mmu_populate_nonleaf(
> -	struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
> +static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
>  {
>  	struct kvm_mmu_page *sp;
>  	int ret;
> @@ -1176,7 +1285,7 @@ static int tdp_mmu_populate_nonleaf(
>  	WARN_ON(is_shadow_present_pte(iter->old_spte));
>  	WARN_ON(is_removed_spte(iter->old_spte));
>  
> -	sp = tdp_mmu_alloc_sp(vcpu);
> +	sp = tdp_mmu_alloc_sp(vcpu, iter->is_private, false);
>  	tdp_mmu_init_child_sp(sp, iter);
>  
>  	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
> @@ -1193,6 +1302,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	struct tdp_iter iter;
> +	gfn_t raw_gfn;
> +	bool is_private = fault->is_private;
>  	int ret;
>  
>  	kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -1201,7 +1312,16 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  
>  	rcu_read_lock();
>  
> -	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
> +	raw_gfn = gpa_to_gfn(fault->addr);
> +
> +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn)) {
> +		if (is_private) {
> +			rcu_read_unlock();
> +			return -EFAULT;
> +		}
> +	}
> +
> +	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
>  		if (fault->nx_huge_page_workaround_enabled)
>  			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
>  
> @@ -1217,6 +1337,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  		    is_large_pte(iter.old_spte)) {
>  			if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
>  				break;
> +			/*
> +			 * TODO: large page support.
> +			 * Doesn't support large page for TDX now
> +			 */
> +			WARN_ON(is_private_sptep(iter.sptep));
> +
>  
>  			/*
>  			 * The iter must explicitly re-read the spte here
> @@ -1258,11 +1384,13 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	return ret;
>  }
>  
> +/* Used by mmu notifier via kvm_unmap_gfn_range() */
>  bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
> -				 bool flush)
> +				 bool flush, bool drop_private)
>  {
>  	return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start,
> -				     range->end, range->may_block, flush);
> +				     range->end, range->may_block, flush,
> +				     drop_private);
>  }
>  
>  typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
> @@ -1445,7 +1573,8 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
>  	return spte_set;
>  }
>  
> -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> +static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(
> +	gfp_t gfp, bool is_private)
>  {
>  	struct kvm_mmu_page *sp;
>  
> @@ -1456,6 +1585,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
>  		return NULL;
>  
>  	sp->spt = (void *)__get_free_page(gfp);
> +	if (is_private) {
> +		if (kvm_alloc_private_sp_for_split(sp, gfp)) {
> +			free_page((unsigned long)sp->spt);
> +			sp->spt = NULL;
> +		}
> +	}
>  	if (!sp->spt) {
>  		kmem_cache_free(mmu_page_header_cache, sp);
>  		return NULL;
> @@ -1469,6 +1604,11 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  						       bool shared)
>  {
>  	struct kvm_mmu_page *sp;
> +	bool is_private = iter->is_private;
> +
> +	/* TODO: For now large page isn't supported for private SPTE. */
> +	WARN_ON(is_private);
> +	WARN_ON(iter->is_private != is_private_sptep(iter->sptep));
>  
>  	/*
>  	 * Since we are allocating while under the MMU lock we have to be
> @@ -1479,7 +1619,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  	 * If this allocation fails we drop the lock and retry with reclaim
>  	 * allowed.
>  	 */
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT, is_private);
>  	if (sp)
>  		return sp;
>  
> @@ -1491,7 +1631,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  
>  	iter->yielded = true;
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT, is_private);
>  
>  	if (shared)
>  		read_lock(&kvm->mmu_lock);
> @@ -1907,10 +2047,14 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	gfn_t gfn = addr >> PAGE_SHIFT;
>  	int leaf = -1;
> +	bool is_private = kvm_is_private_gpa(vcpu->kvm, addr);
>  
>  	*root_level = vcpu->arch.mmu->root_role.level;
>  
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	if (WARN_ON(is_private))
> +		return leaf;
> +
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>  		leaf = iter.level;
>  		sptes[leaf] = iter.old_spte;
>  	}
> @@ -1937,7 +2081,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
>  	gfn_t gfn = addr >> PAGE_SHIFT;
>  	tdp_ptep_t sptep = NULL;
>  
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	/* fast page fault for private GPA isn't supported. */
> +	WARN_ON_ONCE(kvm_is_private_gpa(vcpu->kvm, addr));

Shouldn't this chunk belong to patch:

[PATCH v7 038/102] KVM: x86/mmu: Disallow fast page fault on private GPA

?

> +
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>  		*spte = iter.old_spte;
>  		sptep = iter.sptep;
>  	}
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index c163f7cc23ca..d1655571eb2f 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -5,7 +5,7 @@
>  
>  #include <linux/kvm_host.h>
>  
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private);
>  
>  __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
>  {
> @@ -16,7 +16,8 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
>  			  bool shared);
>  
>  bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start,
> -				 gfn_t end, bool can_yield, bool flush);
> +				gfn_t end, bool can_yield, bool flush,
> +				bool drop_private);
>  bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
>  void kvm_tdp_mmu_zap_all(struct kvm *kvm);
>  void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
> @@ -25,7 +26,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm);
>  int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
>  
>  bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
> -				 bool flush);
> +				 bool flush, bool drop_private);
>  bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
>  bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
>  bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 0acb0b6d1f82..7a5261eb7eb8 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -196,6 +196,7 @@ bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
>  
>  	return true;
>  }
> +EXPORT_SYMBOL_GPL(kvm_is_reserved_pfn);
>  
>  /*
>   * Switches to specified vcpu, until a matching vcpu_put()


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-06-27 21:53 ` [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE isaku.yamahata
  2022-06-30 11:03   ` Kai Huang
@ 2022-07-08  5:18   ` Yuan Yao
  2022-07-08 15:30     ` Sean Christopherson
  2022-07-14 18:41   ` Isaku Yamahata
  2 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-08  5:18 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, Jun 27, 2022 at 02:53:28PM -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> TDX introduced a new ETP, Secure-EPT, in addition to the existing EPT.
> Secure-EPT maps protected guest memory, which is called private. Since
> Secure-EPT page tables is also protected, those page tables is also called
> private.  The existing EPT is often called shared EPT to distinguish from
> Secure-EPT.  And also page tables for share EPT is also called shared.
>
> Virtualization Exception, #VE, is a new processor exception in VMX non-root
> operation.  In certain virtualizatoin-related conditions, #VE is injected
> into guest instead of exiting from guest to VMM so that guest is given a
> chance to inspect it.  One important one is EPT violation.  When
> "ETP-violation #VE" VM-execution is set, "#VE suppress bit" in EPT entry
> is cleared, #VE is injected instead of EPT violation.
>
> Because guest memory is protected with TDX, VMM can't parse instructions
> in the guest memory.  Instead, MMIO hypercall is used for guest to pass
> necessary information to VMM.
>
> To make unmodified device driver work, guest TD expects #VE on accessing
> shared GPA.  The #VE handler converts MMIO access into MMIO hypercall with
> the EPT entry of enabled "#VE" by clearing "suppress #VE" bit.  Before VMM
> enabling #VE, it needs to figure out the given GPA is for MMIO by EPT
> violation.  So the execution flow looks like
>
> - Allocate unused shared EPT entry with suppress #VE bit set.
> - EPT violation on that GPA.
> - VMM figures out the faulted GPA is for MMIO.
> - VMM clears the suppress #VE bit.
> - Guest TD gets #VE, and converts MMIO access into MMIO hypercall.
> - If the GPA maps guest memory, VMM resolves it with guest pages.
>
> For both cases, SPTE needs suppress #VE" bit set initially when it
> is allocated or zapped, therefore non-zero non-present value for SPTE
> needs to be allowed.
>
> This change requires to update FNAME(sync_page) for shadow EPT.
> "if(!sp->spte[i])" in FNAME(sync_page) means that the spte entry is the
> initial value.  With the introduction of shadow_nonpresent_value which can
> be non-zero, it doesn't hold any more. Replace zero check with
> "!is_shadow_present_pte() && !is_mmio_spte()".
>
> When "if (!spt[i])" doesn't hold, but the entry value is
> shadow_nonpresent_value, the entry is wrongly synchronized from non-present
> to non-present with (wrongly) pfn changed and tries to remove rmap wrongly
> and BUG_ON() is hit.
>
> TDP MMU uses REMOVED_SPTE = 0x5a0ULL as special constant to indicate the
> intermediate value to indicate one thread is operating on it and the value
> should be semi-arbitrary value.  For TDX (more correctly to use #VE), the
> value should include suppress #VE value which is SHADOW_NONPRESENT_VALUE.
> Rename REMOVED_SPTE to __REMOVED_SPTE and define REMOVED_SPTE as
> SHADOW_NONPRESENT_VALUE | REMOVED_SPTE to set suppress #VE bit.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c         | 55 ++++++++++++++++++++++++++++++----
>  arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
>  arch/x86/kvm/mmu/spte.c        |  5 +++-
>  arch/x86/kvm/mmu/spte.h        | 37 ++++++++++++++++++++---
>  arch/x86/kvm/mmu/tdp_mmu.c     | 23 +++++++++-----
>  5 files changed, 105 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 51306b80f47c..f239b6cb5d53 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -668,6 +668,44 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
>  	}
>  }
>
> +static inline void kvm_init_shadow_page(void *page)
> +{
> +#ifdef CONFIG_X86_64
> +	int ign;
> +
> +	WARN_ON_ONCE(shadow_nonpresent_value != SHADOW_NONPRESENT_VALUE);
> +	asm volatile (
> +		"rep stosq\n\t"
> +		: "=c"(ign), "=D"(page)
> +		: "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> +		: "memory"
> +	);
> +#else
> +	BUG();
> +#endif
> +}
> +
> +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> +	int start, end, i, r;
> +	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> +
> +	if (is_tdp_mmu && shadow_nonpresent_value)
> +		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> +
> +	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
> +	if (r)
> +		return r;
> +
> +	if (is_tdp_mmu && shadow_nonpresent_value) {
> +		end = kvm_mmu_memory_cache_nr_free_objects(mc);
> +		for (i = start; i < end; i++)
> +			kvm_init_shadow_page(mc->objects[i]);
> +	}
> +	return 0;
> +}
> +
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
>  	int r;
> @@ -677,8 +715,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
>  	if (r)
>  		return r;
> -	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -				       PT64_ROOT_MAX_LEVEL);
> +	r = mmu_topup_shadow_page_cache(vcpu);
>  	if (r)
>  		return r;
>  	if (maybe_indirect) {
> @@ -5521,9 +5558,16 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>  	 * what is used by the kernel for any given HVA, i.e. the kernel's
>  	 * capabilities are ultimately consulted by kvm_mmu_hugepage_adjust().
>  	 */
> -	if (tdp_enabled)
> +	if (tdp_enabled) {
> +		/*
> +		 * For TDP MMU, always set bit 63 for TDX support. See the
> +		 * comment on SHADOW_NONPRESENT_VALUE.
> +		 */
> +#ifdef CONFIG_X86_64
> +		shadow_nonpresent_value = SHADOW_NONPRESENT_VALUE;
> +#endif
>  		max_huge_page_level = tdp_huge_page_level;
> -	else if (boot_cpu_has(X86_FEATURE_GBPAGES))
> +	} else if (boot_cpu_has(X86_FEATURE_GBPAGES))
>  		max_huge_page_level = PG_LEVEL_1G;
>  	else
>  		max_huge_page_level = PG_LEVEL_2M;
> @@ -5654,7 +5698,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
>  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>
> -	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +	if (!(is_tdp_mmu_enabled(vcpu->kvm) && shadow_nonpresent_value))
> +		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;

I'm not sure why this is skipped for TDX; arch.mmu_shadow_page_cache is
still used for allocating sp->spt, which is used to track the S-EPT in KVM
for a TDX guest.  Is there anything I missed here?

>
>  	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index fe35d8fd3276..ee2fb0c073f3 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -1031,7 +1031,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  		gpa_t pte_gpa;
>  		gfn_t gfn;
>
> -		if (!sp->spt[i])
> +		if (!is_shadow_present_pte(sp->spt[i]) &&
> +		    !is_mmio_spte(sp->spt[i]))
>  			continue;
>
>  		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index cda1851ec155..bd441458153f 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -36,6 +36,9 @@ u64 __read_mostly shadow_present_mask;
>  u64 __read_mostly shadow_me_value;
>  u64 __read_mostly shadow_me_mask;
>  u64 __read_mostly shadow_acc_track_mask;
> +#ifdef CONFIG_X86_64
> +u64 __read_mostly shadow_nonpresent_value;
> +#endif
>
>  u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>  u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
> @@ -360,7 +363,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	 * not set any RWX bits.
>  	 */
>  	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
> -	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
> +	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
>  		mmio_value = 0;
>
>  	if (!mmio_value)
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 0127bb6e3c7d..1bfedbe0585f 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -140,6 +140,19 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
>
>  #define MMIO_SPTE_GEN_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
>
> +/*
> + * non-present SPTE value for both VMX and SVM for TDP MMU.
> + * For SVM NPT, for non-present spte (bit 0 = 0), other bits are ignored.
> + * For VMX EPT, bit 63 is ignored if #VE is disabled.
> + *              bit 63 is #VE suppress if #VE is enabled.
> + */
> +#ifdef CONFIG_X86_64
> +#define SHADOW_NONPRESENT_VALUE	BIT_ULL(63)
> +static_assert(!(SHADOW_NONPRESENT_VALUE & SPTE_MMU_PRESENT_MASK));
> +#else
> +#define SHADOW_NONPRESENT_VALUE	0ULL
> +#endif
> +
>  extern u64 __read_mostly shadow_host_writable_mask;
>  extern u64 __read_mostly shadow_mmu_writable_mask;
>  extern u64 __read_mostly shadow_nx_mask;
> @@ -154,6 +167,12 @@ extern u64 __read_mostly shadow_present_mask;
>  extern u64 __read_mostly shadow_me_value;
>  extern u64 __read_mostly shadow_me_mask;
>
> +#ifdef CONFIG_X86_64
> +extern u64 __read_mostly shadow_nonpresent_value;
> +#else
> +#define shadow_nonpresent_value	0ULL
> +#endif
> +
>  /*
>   * SPTEs in MMUs without A/D bits are marked with SPTE_TDP_AD_DISABLED_MASK;
>   * shadow_acc_track_mask is the set of bits to be cleared in non-accessed
> @@ -174,9 +193,12 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>
>  /*
>   * If a thread running without exclusive control of the MMU lock must perform a
> - * multi-part operation on an SPTE, it can set the SPTE to REMOVED_SPTE as a
> + * multi-part operation on an SPTE, it can set the SPTE to __REMOVED_SPTE as a
>   * non-present intermediate value. Other threads which encounter this value
> - * should not modify the SPTE.
> + * should not modify the SPTE.  For the case that TDX is enabled,
> + * SHADOW_NONPRESENT_VALUE, which is "suppress #VE" bit set because TDX module
> + * always enables "EPT violation #VE".  The bit is ignored by non-TDX case as
> + * present bit (bit 0) is cleared.
>   *
>   * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
>   * bot AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
> @@ -184,10 +206,17 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>   *
>   * Only used by the TDP MMU.
>   */
> -#define REMOVED_SPTE	0x5a0ULL
> +#define __REMOVED_SPTE	0x5a0ULL
>
>  /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
> -static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
> +static_assert(!(__REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
> +static_assert(!(__REMOVED_SPTE & SHADOW_NONPRESENT_VALUE));
> +
> +/*
> + * See above comment around __REMOVED_SPTE.  REMOVED_SPTE is the actual
> + * intermediate value set to the removed SPET.  it sets the "suppress #VE" bit.
> + */
> +#define REMOVED_SPTE	(SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE)
>
>  static inline bool is_removed_spte(u64 spte)
>  {
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7b9265d67131..2ca03ec3bf52 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -692,8 +692,16 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>  	 * overwrite the special removed SPTE value. No bookkeeping is needed
>  	 * here since the SPTE is going from non-present to non-present.  Use
>  	 * the raw write helper to avoid an unnecessary check on volatile bits.
> +	 *
> +	 * Set non-present value to SHADOW_NONPRESENT_VALUE, rather than 0.
> +	 * It is because when TDX is enabled, TDX module always
> +	 * enables "EPT-violation #VE", so KVM needs to set
> +	 * "suppress #VE" bit in EPT table entries, in order to get
> +	 * real EPT violation, rather than TDVMCALL.  KVM sets
> +	 * SHADOW_NONPRESENT_VALUE (which sets "suppress #VE" bit) so it
> +	 * can be set when EPT table entries are zapped.
>  	 */
> -	__kvm_tdp_mmu_write_spte(iter->sptep, 0);
> +	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
>
>  	return 0;
>  }
> @@ -870,8 +878,8 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
>  			continue;
>
>  		if (!shared)
> -			tdp_mmu_set_spte(kvm, &iter, 0);
> -		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
> +			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
> +		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
>  			goto retry;
>  	}
>  }
> @@ -927,8 +935,9 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
>  		return false;
>
> -	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
> -			   sp->gfn, sp->role.level + 1, true, true);
> +	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
> +			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
> +			   true, true);
>
>  	return true;
>  }
> @@ -965,7 +974,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>  		    !is_last_spte(iter.old_spte, iter.level))
>  			continue;
>
> -		tdp_mmu_set_spte(kvm, &iter, 0);
> +		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
>  		flush = true;
>  	}
>
> @@ -1330,7 +1339,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
>  	 * invariant that the PFN of a present * leaf SPTE can never change.
>  	 * See __handle_changed_spte().
>  	 */
> -	tdp_mmu_set_spte(kvm, iter, 0);
> +	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
>
>  	if (!pte_write(range->pte)) {
>  		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 050/102] KVM: VMX: Split out guts of EPT violation to common/exposed function
  2022-06-27 21:53 ` [PATCH v7 050/102] KVM: VMX: Split out guts of EPT violation to common/exposed function isaku.yamahata
@ 2022-07-08 10:25   ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-08 10:25 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> The difference of TDX EPT violation is how to retrieve information, GPA,
> and exit qualification.  To share the code to handle EPT violation, split
> out the guts of EPT violation handler so that VMX/TDX exit handler can call
> it after retrieving GPA and exit qualification.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/vmx/common.h | 33 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/vmx.c    | 32 ++++++--------------------------
>  2 files changed, 39 insertions(+), 26 deletions(-)
>  create mode 100644 arch/x86/kvm/vmx/common.h
> 
> diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
> new file mode 100644
> index 000000000000..235908f3e044
> --- /dev/null
> +++ b/arch/x86/kvm/vmx/common.h
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef __KVM_X86_VMX_COMMON_H
> +#define __KVM_X86_VMX_COMMON_H
> +
> +#include <linux/kvm_host.h>
> +
> +#include "mmu.h"
> +
> +static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
> +					     unsigned long exit_qualification)
> +{
> +	u64 error_code;
> +
> +	/* Is it a read fault? */
> +	error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
> +		     ? PFERR_USER_MASK : 0;
> +	/* Is it a write fault? */
> +	error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
> +		      ? PFERR_WRITE_MASK : 0;
> +	/* Is it a fetch fault? */
> +	error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
> +		      ? PFERR_FETCH_MASK : 0;
> +	/* ept page table entry is present? */
> +	error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
> +		      ? PFERR_PRESENT_MASK : 0;
> +
> +	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
> +	       PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
> +
> +	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
> +}
> +
> +#endif /* __KVM_X86_VMX_COMMON_H */
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e3d304b14df0..2f1dc06aec3c 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -50,6 +50,7 @@
>  #include <asm/vmx.h>
>  
>  #include "capabilities.h"
> +#include "common.h"
>  #include "cpuid.h"
>  #include "evmcs.h"
>  #include "hyperv.h"
> @@ -5578,11 +5579,10 @@ static int handle_task_switch(struct kvm_vcpu *vcpu)
>  
>  static int handle_ept_violation(struct kvm_vcpu *vcpu)
>  {
> -	unsigned long exit_qualification;
> -	gpa_t gpa;
> -	u64 error_code;
> +	unsigned long exit_qualification = vmx_get_exit_qual(vcpu);
> +	gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
>  
> -	exit_qualification = vmx_get_exit_qual(vcpu);
> +	trace_kvm_page_fault(gpa, exit_qualification);
>  
>  	/*
>  	 * EPT violation happened while executing iret from NMI,
> @@ -5591,29 +5591,9 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>  	 * AAK134, BY25.
>  	 */
>  	if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
> -			enable_vnmi &&
> -			(exit_qualification & INTR_INFO_UNBLOCK_NMI))
> +	    enable_vnmi && (exit_qualification & INTR_INFO_UNBLOCK_NMI))

Why this code change?

With this removed:

Reviewed-by: Kai Huang <kai.huang@intel.com>

>  		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
>  
> -	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> -	trace_kvm_page_fault(gpa, exit_qualification);
> -
> -	/* Is it a read fault? */
> -	error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
> -		     ? PFERR_USER_MASK : 0;
> -	/* Is it a write fault? */
> -	error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
> -		      ? PFERR_WRITE_MASK : 0;
> -	/* Is it a fetch fault? */
> -	error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
> -		      ? PFERR_FETCH_MASK : 0;
> -	/* ept page table entry is present? */
> -	error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
> -		      ? PFERR_PRESENT_MASK : 0;
> -
> -	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
> -	       PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
> -
>  	vcpu->arch.exit_qualification = exit_qualification;
>  
>  	/*
> @@ -5627,7 +5607,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>  	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
>  		return kvm_emulate_instruction(vcpu, 0);
>  
> -	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
> +	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
>  }
>  
>  static int handle_ept_misconfig(struct kvm_vcpu *vcpu)


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-07-08  5:18   ` Yuan Yao
@ 2022-07-08 15:30     ` Sean Christopherson
  2022-07-11  7:05       ` Yuan Yao
  0 siblings, 1 reply; 219+ messages in thread
From: Sean Christopherson @ 2022-07-08 15:30 UTC (permalink / raw)
  To: Yuan Yao
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

Please trim replies.

On Fri, Jul 08, 2022, Yuan Yao wrote:
> On Mon, Jun 27, 2022 at 02:53:28PM -0700, isaku.yamahata@intel.com wrote:
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 51306b80f47c..f239b6cb5d53 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -668,6 +668,44 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> >  	}
> >  }
> >
> > +static inline void kvm_init_shadow_page(void *page)
> > +{
> > +#ifdef CONFIG_X86_64
> > +	int ign;
> > +
> > +	WARN_ON_ONCE(shadow_nonpresent_value != SHADOW_NONPRESENT_VALUE);
> > +	asm volatile (
> > +		"rep stosq\n\t"

I have a slight preference for:

	asm volatile ("rep stosq\n\t"
		      <align here>
	);

so that searching for "asm" or "asm volatile" shows the "rep stosq" in the
result without needing to capture the next line.

> > +		: "=c"(ign), "=D"(page)
> > +		: "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> > +		: "memory"
> > +	);
> > +#else
> > +	BUG();
> > +#endif

Rather than put the #ifdef here, split mmu_topup_shadow_page_cache() on 64-bit
versus 32-bit.  Then this BUG() goes away and we don't get slapped on the wrist
by Linus :-)

> > +}
> > +
> > +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> > +{
> > +	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> > +	int start, end, i, r;
> > +	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> > +
> > +	if (is_tdp_mmu && shadow_nonpresent_value)
> > +		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> > +
> > +	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
> > +	if (r)
> > +		return r;

Bailing immediately is wrong.  If kvm_mmu_topup_memory_cache() fails after allocating
at least one page, then KVM needs to initialize those pages, otherwise it will leave
uninitialized pages in the cache.  If userspace frees up memory in response to the
-ENOMEM and resumes the vCPU, KVM will consume uninitialized data.

> > +
> > +	if (is_tdp_mmu && shadow_nonpresent_value) {

So I'm pretty sure I effectively suggested keeping shadow_nonpresent_value, but
seeing it in code, I really don't like it.  It's an unnecessary check on every
SPT allocation, and it's misleading because it suggests shadow_nonpresent_value
might be zero when the TDP MMU is enabled.

My vote is to drop shadow_nonpresent_value and then rename kvm_init_shadow_page()
to make it clear that it's specific to the TDP MMU.

So this?  Completely untested.

#ifdef CONFIG_X86_64
static void kvm_init_tdp_mmu_shadow_page(void *page)
{
	int ign;

	asm volatile ("rep stosq\n\t"
		      : "=c"(ign), "=D"(page)
		      : "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
		      : "memory"
	);
}

static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
{
	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
	int start, end, i, r;

	if (is_tdp_mmu)
		start = kvm_mmu_memory_cache_nr_free_objects(mc);

	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);

	/*
	 * Note, topup may have allocated objects even if it failed to allocate
	 * the minimum number of objects required to make forward progress _at
	 * this time_.  Initialize newly allocated objects even on failure, as
	 * userspace can free memory and rerun the vCPU in response to -ENOMEM.
	 */
	if (is_tdp_mmu) {
		end = kvm_mmu_memory_cache_nr_free_objects(mc);
		for (i = start; i < end; i++)
			kvm_init_tdp_mmu_shadow_page(mc->objects[i]);
	}
	return r;
}
#else
static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
{
	return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
					  PT64_ROOT_MAX_LEVEL);
}
#endif /* CONFIG_X86_64 */

> > +		end = kvm_mmu_memory_cache_nr_free_objects(mc);
> > +		for (i = start; i < end; i++)
> > +			kvm_init_shadow_page(mc->objects[i]);
> > +	}
> > +	return 0;
> > +}
> > +

...

> > @@ -5654,7 +5698,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >  	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> >  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> >
> > -	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > +	if (!(is_tdp_mmu_enabled(vcpu->kvm) && shadow_nonpresent_value))
> > +		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> 
> I'm not sure why this is skipped for TDX; arch.mmu_shadow_page_cache is
> still used for allocating sp->spt, which is used to track the S-EPT in KVM
> for a TDX guest.  Is there anything I missed here?

Shared EPTEs need to be initialized with SUPPRESS_VE=1, otherwise not-present
EPT violations would be reflected into the guest by hardware as #VE exceptions.
This is handled by initializing page allocations via kvm_init_shadow_page() during
cache topup if shadow_nonpresent_value is non-zero.  In that case, telling the
page allocation to zero-initialize the page would be wasted effort.

The initialization is harmless for S-EPT entries because KVM's copy of the S-EPT
isn't consumed by hardware, and because under the hood S-EPT entries should never
#VE (I forget if this is enforced by hardware or if the TDX module sets SUPPRESS_VE).

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX
  2022-06-27 21:53 ` [PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX isaku.yamahata
@ 2022-07-11  5:48   ` Yuan Yao
  2022-07-11 14:56   ` Sean Christopherson
  1 sibling, 0 replies; 219+ messages in thread
From: Yuan Yao @ 2022-07-11  5:48 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:35PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> In this patch series, TDX supports only TDP MMU and doesn't support legacy
> MMU.  Forcibly use TDP MMU for TDX irrelevant of kernel parameter to
> disable TDP MMU.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 82f1bfac7ee6..7eb41b176d1e 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -18,8 +18,13 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
>  {
>  	struct workqueue_struct *wq;
>
> -	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
> -		return 0;
> +	/*
> +	 *  Because TDX supports only TDP MMU, forcibly use TDP MMU in the case
> +	 *  of TDX.
> +	 */
> +	if (kvm->arch.vm_type != KVM_X86_TDX_VM &&
> +		(!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)))
> +		return false;

Please return 0 here, since the return type is int.

>
>  	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
>  	if (!wq)
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-06-27 21:53 ` [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page isaku.yamahata
  2022-07-01 11:12   ` Kai Huang
@ 2022-07-11  6:28   ` Yuan Yao
  2022-07-28 19:41   ` David Matlack
  2022-07-28 20:13   ` David Matlack
  3 siblings, 0 replies; 219+ messages in thread
From: Yuan Yao @ 2022-07-11  6:28 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:36PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> For a private GPA, the CPU refers to a private page table whose contents are
> encrypted.  Dedicated APIs must be used to operate on it (e.g. to update/read
> its PTE entries), and they are expensive.
>
> When KVM resolves a KVM page fault, it walks the page tables.  To reuse the
> existing KVM MMU code and mitigate the heavy cost of directly walking the
> encrypted private page table, allocate one more page to mirror the existing
> KVM page table.  Resolve the KVM page fault with the existing code, and do
> the additional operations necessary for the mirrored private page table.  To
> distinguish the two cases, the existing KVM page table is called a shared page
> table (i.e. it has no mirrored private page table), and a KVM page table with
> a mirrored private page table is called a private page table.  The
> relationship is depicted below.
>
> Add a private pointer to struct kvm_mmu_page for the mirrored private page
> table and add helper functions to allocate/initialize/free a mirrored private
> page table page.  Also, add helper functions to check whether a given
> kvm_mmu_page is private.  A later patch introduces hooks to operate on
> the mirrored private page table.
>
>               KVM page fault                     |
>                      |                           |
>                      V                           |
>         -------------+----------                 |
>         |                      |                 |
>         V                      V                 |
>      shared GPA           private GPA            |
>         |                      |                 |
>         V                      V                 |
>  CPU/KVM shared PT root  KVM private PT root     |  CPU private PT root
>         |                      |                 |           |
>         V                      V                 |           V
>      shared PT            private PT <----mirror----> mirrored private PT
>         |                      |                 |           |
>         |                      \-----------------+------\    |
>         |                                        |      |    |
>         V                                        |      V    V
>   shared guest page                              |    private guest page
>                                                  |
>                            non-encrypted memory  |    encrypted memory
>                                                  |
> PT: page table
>
> Both the CPU and KVM refer to the CPU/KVM shared page table.  The private page
> table is used only by KVM.  The CPU refers to the mirrored private page table.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/mmu/mmu.c          |  9 ++++
>  arch/x86/kvm/mmu/mmu_internal.h | 84 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++
>  4 files changed, 97 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f4d4ed41641b..bfc934dc9a33 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -716,6 +716,7 @@ struct kvm_vcpu_arch {
>  	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
>  	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
>  	struct kvm_mmu_memory_cache mmu_page_header_cache;
> +	struct kvm_mmu_memory_cache mmu_private_sp_cache;
>
>  	/*
>  	 * QEMU userspace and the guest each have their own FPU state.
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c517c7bca105..a5bf3e40e209 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -691,6 +691,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
>  	int start, end, i, r;
>  	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
>
> +	if (kvm_gfn_shared_mask(vcpu->kvm)) {
> +		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
> +					       PT64_ROOT_MAX_LEVEL);
> +		if (r)
> +			return r;
> +	}
> +
>  	if (is_tdp_mmu && shadow_nonpresent_value)
>  		start = kvm_mmu_memory_cache_nr_free_objects(mc);
>
> @@ -732,6 +739,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> +	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_private_sp_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -1736,6 +1744,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
>  	if (!direct)
>  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> +	kvm_mmu_init_private_sp(sp, NULL);
>
>  	/*
>  	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 44a04fad4bed..9f3a6bea60a3 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -55,6 +55,10 @@ struct kvm_mmu_page {
>  	u64 *spt;
>  	/* hold the gfn of each spte inside spt */
>  	gfn_t *gfns;
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +	/* associated private shadow page, e.g. SEPT page. */
> +	void *private_sp;
> +#endif
>  	/* Currently serving as active root */
>  	union {
>  		int root_count;
> @@ -115,6 +119,86 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
>  	return kvm_mmu_role_as_id(sp->role);
>  }
>
> +/*
> + * TDX vcpu allocates page for root Secure EPT page and assigns to CPU secure

"TDX vcpu" is a little confused, how about "TDX moudule allocates(or manages) page
for ..." ?

> + * EPT pointer.  KVM doesn't need to allocate and link to the secure EPT.
> + * Dummy value to make is_private_sp() return true.
> + */
> +#define KVM_MMU_PRIVATE_SP_ROOT	((void *)1)
> +
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return !!sp->private_sp;
> +}
> +
> +static inline bool is_private_sptep(u64 *sptep)
> +{
> +	WARN_ON(!sptep);
> +	return is_private_sp(sptep_to_sp(sptep));
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return sp->private_sp;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
> +{
> +	sp->private_sp = private_sp;
> +}
> +
> +/* Valid sp->role.level is required. */

I didn't see such a requirement in kvm_mmu_alloc_private_sp(); please
consider moving the comment together with the code that introduces that
requirement.

> +static inline void kvm_mmu_alloc_private_sp(
> +	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
> +{
> +	if (is_root)
> +		sp->private_sp = KVM_MMU_PRIVATE_SP_ROOT;
> +	else
> +		sp->private_sp = kvm_mmu_memory_cache_alloc(
> +			&vcpu->arch.mmu_private_sp_cache);
> +	/*
> +	 * Because mmu_private_sp_cache is topped up before starting KVM page
> +	 * fault resolution, the allocation above shouldn't fail.
> +	 */
> +	WARN_ON_ONCE(!sp->private_sp);
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
> +		free_page((unsigned long)sp->private_sp);
> +}
> +#else
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return false;
> +}
> +
> +static inline bool is_private_sptep(u64 *sptep)
> +{
> +	return false;
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return NULL;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
> +{
> +}
> +
> +static inline void kvm_mmu_alloc_private_sp(
> +	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
> +{
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +}
> +#endif
> +
>  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>  	/*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7eb41b176d1e..b2568b062faa 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -72,6 +72,8 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
>
>  static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
>  {
> +	if (is_private_sp(sp))
> +		kvm_mmu_free_private_sp(sp);
>  	free_page((unsigned long)sp->spt);
>  	kmem_cache_free(mmu_page_header_cache, sp);
>  }
> @@ -295,6 +297,7 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>  	sp->gfn = gfn;
>  	sp->ptep = sptep;
>  	sp->tdp_mmu_page = true;
> +	kvm_mmu_init_private_sp(sp);
>
>  	trace_kvm_mmu_get_page(sp, true);
>  }
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-07-08 15:30     ` Sean Christopherson
@ 2022-07-11  7:05       ` Yuan Yao
  2022-07-11 14:47         ` Sean Christopherson
  0 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-11  7:05 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Fri, Jul 08, 2022 at 03:30:23PM +0000, Sean Christopherson wrote:
> Please trim replies.
>
> On Fri, Jul 08, 2022, Yuan Yao wrote:
> > On Mon, Jun 27, 2022 at 02:53:28PM -0700, isaku.yamahata@intel.com wrote:
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 51306b80f47c..f239b6cb5d53 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -668,6 +668,44 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> > >  	}
> > >  }
> > >
> > > +static inline void kvm_init_shadow_page(void *page)
> > > +{
> > > +#ifdef CONFIG_X86_64
> > > +	int ign;
> > > +
> > > +	WARN_ON_ONCE(shadow_nonpresent_value != SHADOW_NONPRESENT_VALUE);
> > > +	asm volatile (
> > > +		"rep stosq\n\t"
>
> I have a slight preference for:
>
> 	asm volatile ("rep stosq\n\t"
> 		      <align here>
> 	);
>
> so that searching for "asm" or "asm volatile" shows the "rep stosq" in the
> result without needing to capture the next line.
>
> > > +		: "=c"(ign), "=D"(page)
> > > +		: "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> > > +		: "memory"
> > > +	);
> > > +#else
> > > +	BUG();
> > > +#endif
>
> Rather than put the #ifdef here, split mmu_topup_shadow_page_cache() on 64-bit
> versus 32-bit.  Then this BUG() goes away and we don't get slapped on the wrist
> by Linus :-)
>
> > > +}
> > > +
> > > +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> > > +{
> > > +	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> > > +	int start, end, i, r;
> > > +	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> > > +
> > > +	if (is_tdp_mmu && shadow_nonpresent_value)
> > > +		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> > > +
> > > +	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
> > > +	if (r)
> > > +		return r;
>
> Bailing immediately is wrong.  If kvm_mmu_topup_memory_cache() fails after allocating
> at least one page, then KVM needs to initialize those pages, otherwise it will leave
> uninitialized pages in the cache.  If userspace frees up memory in response to the
> -ENOMEM and resumes the vCPU, KVM will consume uninitialized data.
>
> > > +
> > > +	if (is_tdp_mmu && shadow_nonpresent_value) {
>
> So I'm pretty sure I effectively suggested keeping shadow_nonpresent_value, but
> seeing it in code, I really don't like it.  It's an unnecessary check on every
> SPT allocation, and it's misleading because it suggests shadow_nonpresent_value
> might be zero when the TDP MMU is enabled.
>
> My vote is to drop shadow_nonpresent_value and then rename kvm_init_shadow_page()
> to make it clear that it's specific to the TDP MMU.
>
> So this?  Completely untested.
>
> #ifdef CONFIG_X86_64
> static void kvm_init_tdp_mmu_shadow_page(void *page)
> {
> 	int ign;
>
> 	asm volatile ("rep stosq\n\t"
> 		      : "=c"(ign), "=D"(page)
> 		      : "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> 		      : "memory"
> 	);
> }
>
> static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> {
> 	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> 	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> 	int start, end, i, r;
>
> 	if (is_tdp_mmu)
> 		start = kvm_mmu_memory_cache_nr_free_objects(mc);
>
> 	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
>
> 	/*
> 	 * Note, topup may have allocated objects even if it failed to allocate
> 	 * the minimum number of objects required to make forward progress _at
> 	 * this time_.  Initialize newly allocated objects even on failure, as
> 	 * userspace can free memory and rerun the vCPU in response to -ENOMEM.
> 	 */
> 	if (is_tdp_mmu) {
> 		end = kvm_mmu_memory_cache_nr_free_objects(mc);
> 		for (i = start; i < end; i++)
> 			kvm_init_tdp_mmu_shadow_page(mc->objects[i]);
> 	}
> 	return r;
> }
> #else
> static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> {
> 	return kvm_mmu_topup_memory_cache(vcpu->arch.mmu_shadow_page_cache,
> 					  PT64_ROOT_MAX_LEVEL);
> }
> #endif /* CONFIG_X86_64 */
>
> > > +		end = kvm_mmu_memory_cache_nr_free_objects(mc);
> > > +		for (i = start; i < end; i++)
> > > +			kvm_init_shadow_page(mc->objects[i]);
> > > +	}
> > > +	return 0;
> > > +}
> > > +
>
> ...
>
> > > @@ -5654,7 +5698,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> > >  	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> > >  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> > >
> > > -	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > > +	if (!(is_tdp_mmu_enabled(vcpu->kvm) && shadow_nonpresent_value))
> > > +		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> >
> > I'm not sure why this is skipped for TDX; arch.mmu_shadow_page_cache is
> > still used for allocating sp->spt, which is used to track the S-EPT in KVM
> > for a TDX guest.  Is there anything I missed here?
>
> Shared EPTEs need to be initialized with SUPPRESS_VE=1, otherwise not-present
> EPT violations would be reflected into the guest by hardware as #VE exceptions.
> This is handled by initializing page allocations via kvm_init_shadow_page() during
> cache topup if shadow_nonpresent_value is non-zero.  In that case, telling the
> page allocation to zero-initialize the page would be wasted effort.
>
> The initialization is harmless for S-EPT entries because KVM's copy of the S-EPT
> isn't consumed by hardware, and because under the hood S-EPT entries should never
> #VE (I forget if this is enforced by hardware or if the TDX module sets SUPPRESS_VE).

Ah, I see, you're right, thanks for the explanation!  I think with the
changes you suggested above, __GFP_ZERO can be removed from
mmu_shadow_page_cache for VMs for which is_tdp_mmu_enabled() is true:

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8de26cbde295..0b412f3eb0c5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6483,8 +6483,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;

-	if (!(tdp_enabled && shadow_nonpresent_value))
-		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	if (!is_tdp_mmu_enabled(vcpu->kvm))
+	    vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;

 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  2022-06-27 21:53 ` [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU isaku.yamahata
  2022-07-08  3:44   ` Kai Huang
@ 2022-07-11  8:28   ` Yuan Yao
  2022-07-26 23:41     ` Isaku Yamahata
  2022-07-12  2:36   ` Yuan Yao
  2 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-11  8:28 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Kai Huang

On Mon, Jun 27, 2022 at 02:53:38PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Allocate a mirrored private page table for the private page table, and add
> hooks to operate on the mirrored private page table.  This patch adds only the
> hooks.  As kvm_gfn_shared_mask() always returns false, those hooks aren't
> called yet.
>
> Because a private guest page is protected, copying the page via mmu_notifier
> to migrate it doesn't work.  A callback from the backing store is needed.
>
> When the faulting GPA is private, the KVM page fault is also called private.
> When resolving a private KVM page fault, allocate a mirrored private page
> table and call the hooks to operate on it.  On a change to a private PTE
> entry, invoke the kvm_x86_ops hook in __handle_changed_spte() to propagate
> the change to the mirrored private page table.  The following depicts the
> relationship.
>
>   private KVM page fault   |
>       |                    |
>       V                    |
>  private GPA               |
>       |                    |
>       V                    |
>  KVM private PT root       |  CPU private PT root
>       |                    |           |
>       V                    |           V
>    private PT ---hook to mirror--->mirrored private PT
>       |                    |           |
>       \--------------------+------\    |
>                            |      |    |
>                            |      V    V
>                            |    private guest page
>                            |
>                            |
>      non-encrypted memory  |    encrypted memory
>                            |
> PT: page table
>
> The existing KVM TDP MMU code uses atomic updates of the SPTE.  When
> populating an EPT entry, atomically set the entry.  Zapping an SPTE, however,
> requires a TLB shootdown.  To address that, the entry is frozen with a special
> SPTE value that clears the present bit.  After the TLB shootdown, the entry
> is set to the eventual value (unfrozen).
>
> For the mirrored private page table, hooks are called to update the mirrored
> private page table in addition to the direct access to the private SPTE.  For
> the zapping case, freezing the SPTE works: the hooks can be called in addition
> to the TLB shootdown.  For populating a private SPTE entry, there can be a
> race condition without further protection:
>
>   vcpu 1: populating 2M private SPTE
>   vcpu 2: populating 4K private SPTE
>   vcpu 2: TDX SEAMCALL to update 4K mirrored private SPTE => error
>   vcpu 1: TDX SEAMCALL to update 2M mirrored private SPTE
>
> To avoid the race, the frozen SPTE is utilized.  Instead of an atomic update
> of the private entry, freeze the entry, call the hook that updates the
> mirrored private SPTE, then set the entry to the final value.
>
> Support 4K pages only at this stage.  2M page support can be done in future
> patches.
>
> Add an is_private member to kvm_page_fault to indicate that the fault is
> private.  Also add an is_private member to struct tdp_iter to propagate it.
>
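For illustration only, an untested sketch of the freeze/propagate/unfreeze
flow described above; set_private_spte_atomic() is a hypothetical helper,
while REMOVED_SPTE and the handle_changed_private_spte hook are from the
patch (the "change" argument and error handling are omitted):

static int set_private_spte_atomic(struct kvm *kvm, struct tdp_iter *iter,
				   u64 new_spte)
{
	u64 *sptep = rcu_dereference(iter->sptep);

	/* 1. Freeze: claim the entry so concurrent faults see REMOVED_SPTE. */
	if (cmpxchg64(sptep, iter->old_spte, REMOVED_SPTE) != iter->old_spte)
		return -EBUSY;

	/*
	 * 2. Propagate: call the kvm_x86_handle_changed_private_spte hook
	 *    (which issues the SEAMCALL) to update the mirrored private
	 *    (S-EPT) page table while the entry is frozen.
	 */

	/* 3. Unfreeze: publish the final SPTE value. */
	WRITE_ONCE(*sptep, new_spte);
	return 0;
}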
> Co-developed-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm-x86-ops.h |   2 +
>  arch/x86/include/asm/kvm_host.h    |  20 +++
>  arch/x86/kvm/mmu/mmu.c             |  86 +++++++++-
>  arch/x86/kvm/mmu/mmu_internal.h    |  37 +++++
>  arch/x86/kvm/mmu/paging_tmpl.h     |   2 +-
>  arch/x86/kvm/mmu/tdp_iter.c        |   1 +
>  arch/x86/kvm/mmu/tdp_iter.h        |   5 +-
>  arch/x86/kvm/mmu/tdp_mmu.c         | 247 +++++++++++++++++++++++------
>  arch/x86/kvm/mmu/tdp_mmu.h         |   7 +-
>  virt/kvm/kvm_main.c                |   1 +
>  10 files changed, 346 insertions(+), 62 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 32a6df784ea6..6982d57e4518 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -93,6 +93,8 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
>  KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
>  KVM_X86_OP(get_mt_mask)
>  KVM_X86_OP(load_mmu_pgd)
> +KVM_X86_OP_OPTIONAL(free_private_sp)
> +KVM_X86_OP_OPTIONAL(handle_changed_private_spte)
>  KVM_X86_OP(has_wbinvd_exit)
>  KVM_X86_OP(get_l2_tsc_offset)
>  KVM_X86_OP(get_l2_tsc_multiplier)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index bfc934dc9a33..f2a4d5a18851 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -440,6 +440,7 @@ struct kvm_mmu {
>  			 struct kvm_mmu_page *sp);
>  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
>  	struct kvm_mmu_root_info root;
> +	hpa_t private_root_hpa;
>  	union kvm_cpu_role cpu_role;
>  	union kvm_mmu_page_role root_role;
>
> @@ -1435,6 +1436,20 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
>  	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
>  }
>
> +struct kvm_spte {
> +	kvm_pfn_t pfn;
> +	bool is_present;
> +	bool is_leaf;
> +};
> +
> +struct kvm_spte_change {
> +	gfn_t gfn;
> +	enum pg_level level;
> +	struct kvm_spte old;
> +	struct kvm_spte new;
> +	void *sept_page;
> +};
> +
>  struct kvm_x86_ops {
>  	const char *name;
>
> @@ -1547,6 +1562,11 @@ struct kvm_x86_ops {
>  	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
>  			     int root_level);
>
> +	int (*free_private_sp)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +			       void *private_sp);
> +	void (*handle_changed_private_spte)(
> +		struct kvm *kvm, const struct kvm_spte_change *change);
> +
>  	bool (*has_wbinvd_exit)(void);
>
>  	u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a5bf3e40e209..ef925722ee28 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1577,7 +1577,11 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
>
>  	if (is_tdp_mmu_enabled(kvm))
> -		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
> +		/*
> +		 * A private page needs to be kept; page migration is handled
> +		 * on the next EPT violation.
> +		 */
> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush, false);
>
>  	return flush;
>  }
> @@ -3082,7 +3086,8 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
>  		 * SPTE value without #VE suppress bit cleared
>  		 * (kvm->arch.shadow_mmio_value = 0).
>  		 */
> -		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
> +		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching &&
> +			     !kvm_gfn_shared_mask(vcpu->kvm)) ||
>  		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
>  			return RET_PF_EMULATE;
>  	}
> @@ -3454,7 +3459,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
>  		goto out_unlock;
>
>  	if (is_tdp_mmu_enabled(vcpu->kvm)) {
> -		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
> +		if (kvm_gfn_shared_mask(vcpu->kvm) &&
> +		    !VALID_PAGE(mmu->private_root_hpa)) {
> +			root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, true);
> +			mmu->private_root_hpa = root;
> +		}
> +		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, false);
>  		mmu->root.hpa = root;
>  	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
>  		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
> @@ -4026,6 +4036,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
>  	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
>  }
>
> +/*
> + * A private page can't be released on mmu_notifier without losing page
> + * contents.  A callback from the backing store is needed to allow page migration.
> + * For now, pin the page.
> + */
> +static int kvm_faultin_pfn_private_mapped(struct kvm_vcpu *vcpu,
> +					   struct kvm_page_fault *fault)
> +{
> +	hva_t hva = gfn_to_hva_memslot(fault->slot, fault->gfn);
> +	struct page *page[1];
> +
> +	fault->map_writable = false;
> +	fault->pfn = KVM_PFN_ERR_FAULT;
> +	if (hva == KVM_HVA_ERR_RO_BAD || hva == KVM_HVA_ERR_BAD)
> +		return RET_PF_CONTINUE;
> +
> +	/* TDX allows only RWX.  Read-only isn't supported. */
> +	WARN_ON_ONCE(!fault->write);
> +	if (pin_user_pages_fast(hva, 1, FOLL_WRITE, page) != 1)
> +		return RET_PF_INVALID;
> +
> +	fault->map_writable = true;
> +	fault->pfn = page_to_pfn(page[0]);
> +	return RET_PF_CONTINUE;
> +}
> +
>  static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_memory_slot *slot = fault->slot;
> @@ -4058,6 +4094,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  			return RET_PF_EMULATE;
>  	}
>
> +	if (fault->is_private)
> +		return kvm_faultin_pfn_private_mapped(vcpu, fault);
> +
>  	async = false;
>  	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
>  					  fault->write, &fault->map_writable,
> @@ -4110,6 +4149,17 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
>  	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
>  }
>
> +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r)
> +{
> +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
> +		return;
> +
> +	if (fault->is_private)
> +		put_page(pfn_to_page(fault->pfn));

pin_user_pages_fast() is used above, which sets FOLL_PIN internally, so
should we use unpin_user_page() here?  FOLL_PIN means the unpin should be
done by unpin_user_page(), not put_page(); please see
Documentation/core-api/pin_user_pages.rst and the comments on FOLL_PIN.
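
I.e. something like this (an untested sketch, just to show the suggested
pairing; kvm_mmu_release_fault() is the helper from the quoted patch):

void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r)
{
	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
		return;

	if (fault->is_private)
		/* Pairs with pin_user_pages_fast(FOLL_WRITE) in the fault path. */
		unpin_user_page(pfn_to_page(fault->pfn));
	else
		kvm_release_pfn_clean(fault->pfn);
}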

> +	else
> +		kvm_release_pfn_clean(fault->pfn);
> +}
> +
>  static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
> @@ -4117,7 +4167,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	unsigned long mmu_seq;
>  	int r;
>
> -	fault->gfn = fault->addr >> PAGE_SHIFT;
> +	fault->gfn = gpa_to_gfn(fault->addr) & ~kvm_gfn_shared_mask(vcpu->kvm);
>  	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
>
>  	if (page_fault_handle_page_track(vcpu, fault))
> @@ -4166,7 +4216,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  		read_unlock(&vcpu->kvm->mmu_lock);
>  	else
>  		write_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(fault->pfn);
> +	kvm_mmu_release_fault(vcpu->kvm, fault, r);
>  	return r;
>  }
>
> @@ -5665,6 +5715,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
>
>  	mmu->root.hpa = INVALID_PAGE;
>  	mmu->root.pgd = 0;
> +	mmu->private_root_hpa = INVALID_PAGE;
>  	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
>  		mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
>
> @@ -5855,6 +5906,10 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>  	 * lead to use-after-free.
>  	 */
>  	if (is_tdp_mmu_enabled(kvm))
> +		/*
> +		 * For now the private root is never invalidated while the VM is running,
> +		 * so this can only happen for shared roots.
> +		 */
>  		kvm_tdp_mmu_zap_invalidated_roots(kvm);
>  }
>
> @@ -5882,7 +5937,8 @@ static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
>  		      .may_block = false,
>  		};
>
> -		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
> +		/* All private page should be zapped on memslot deletion. */
> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush, true);
>  	} else {
>  		flush = slot_handle_level(kvm, slot, kvm_zap_rmapp, PG_LEVEL_4K,
>  					  KVM_MAX_HUGEPAGE_LEVEL, true);
> @@ -5990,7 +6046,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
>  	if (is_tdp_mmu_enabled(kvm)) {
>  		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
>  			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
> -						      gfn_end, true, flush);
> +						      gfn_end, true, flush, false);
>  	}
>
>  	if (flush)
> @@ -6023,6 +6079,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  	}
>
> +	/*
> +	 * For now this can only happen for non-TD VM, because TD private
> +	 * mapping doesn't support write protection.  kvm_tdp_mmu_wrprot_slot()
> +	 * will give a WARN() if it hits for TD.
> +	 */
>  	if (is_tdp_mmu_enabled(kvm)) {
>  		read_lock(&kvm->mmu_lock);
>  		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
> @@ -6111,6 +6172,9 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>  		sp = sptep_to_sp(sptep);
>  		pfn = spte_to_pfn(*sptep);
>
> +		/* Private page dirty logging is not supported. */
> +		KVM_BUG_ON(is_private_sptep(sptep), kvm);
> +
>  		/*
>  		 * We cannot do huge page mapping for indirect shadow pages,
>  		 * which are found on the last rmap (level = 1) when not using
> @@ -6151,6 +6215,11 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  	}
>
> +	/*
> +	 * This should only be reachable in case of log-dirty, which TD private
> +	 * mapping doesn't support so far.  kvm_tdp_mmu_zap_collapsible_sptes()
> +	 * internally gives a WARN() when it hits.
> +	 */
>  	if (is_tdp_mmu_enabled(kvm)) {
>  		read_lock(&kvm->mmu_lock);
>  		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
> @@ -6437,6 +6506,9 @@ int kvm_mmu_vendor_module_init(void)
>  void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_unload(vcpu);
> +	if (is_tdp_mmu_enabled(vcpu->kvm))
> +		mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->private_root_hpa,
> +				NULL);
>  	free_mmu_pages(&vcpu->arch.root_mmu);
>  	free_mmu_pages(&vcpu->arch.guest_mmu);
>  	mmu_free_memory_caches(vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 9f3a6bea60a3..d3b30d62aca0 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -6,6 +6,8 @@
>  #include <linux/kvm_host.h>
>  #include <asm/kvm_host.h>
>
> +#include "mmu.h"
> +
>  #undef MMU_DEBUG
>
>  #ifdef MMU_DEBUG
> @@ -164,11 +166,30 @@ static inline void kvm_mmu_alloc_private_sp(
>  	WARN_ON_ONCE(!sp->private_sp);
>  }
>
> +static inline int kvm_alloc_private_sp_for_split(
> +	struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	gfp &= ~__GFP_ZERO;
> +	sp->private_sp = (void*)__get_free_page(gfp);
> +	if (!sp->private_sp)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
>  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
>  {
>  	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
>  		free_page((unsigned long)sp->private_sp);
>  }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	if (is_private_sp(root))
> +		return kvm_gfn_private(kvm, gfn);
> +	else
> +		return kvm_gfn_shared(kvm, gfn);
> +}
>  #else
>  static inline bool is_private_sp(struct kvm_mmu_page *sp)
>  {
> @@ -194,11 +215,25 @@ static inline void kvm_mmu_alloc_private_sp(
>  {
>  }
>
> +static inline int kvm_alloc_private_sp_for_split(
> +	struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	return -ENOMEM;
> +}
> +
>  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
>  {
>  }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	return gfn;
> +}
>  #endif
>
> +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r);
> +
>  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>  	/*
> @@ -246,6 +281,7 @@ struct kvm_page_fault {
>  	/* Derived from mmu and global state.  */
>  	const bool is_tdp;
>  	const bool nx_huge_page_workaround_enabled;
> +	const bool is_private;
>
>  	/*
>  	 * Whether a >4KB mapping can be created or is forbidden due to NX
> @@ -327,6 +363,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		.prefetch = prefetch,
>  		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
>  		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
> +		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
>
>  		.max_level = vcpu->kvm->arch.tdp_max_page_level,
>  		.req_level = PG_LEVEL_4K,
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 62ae590d4e5b..e5b73638bd83 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -877,7 +877,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>
>  out_unlock:
>  	write_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(fault->pfn);
> +	kvm_mmu_release_fault(vcpu->kvm, fault, r);
>  	return r;
>  }
>
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index ee4802d7b36c..4ed50f3c424d 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -53,6 +53,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
>  	iter->min_level = min_level;
>  	iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root->spt;
>  	iter->as_id = kvm_mmu_page_as_id(root);
> +	iter->is_private = is_private_sp(root);
>
>  	tdp_iter_restart(iter);
>  }
> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index adfca0cf94d3..dec56795c5da 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -71,7 +71,7 @@ struct tdp_iter {
>  	tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
>  	/* A pointer to the current SPTE */
>  	tdp_ptep_t sptep;
> -	/* The lowest GFN mapped by the current SPTE */
> +	/* The lowest GFN (shared bits included) mapped by the current SPTE */
>  	gfn_t gfn;
>  	/* The level of the root page given to the iterator */
>  	int root_level;
> @@ -94,6 +94,9 @@ struct tdp_iter {
>  	 * level instead of advancing to the next entry.
>  	 */
>  	bool yielded;
> +
> +	/* True if this iter is handling private KVM page fault. */
> +	bool is_private;
>  };
>
>  /*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index d874c79ab96c..12f75e60a254 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -278,18 +278,24 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
>  		    kvm_mmu_page_as_id(_root) != _as_id) {		\
>  		} else
>
> -static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *tdp_mmu_alloc_sp(
> +	struct kvm_vcpu *vcpu, bool private, bool is_root)
>  {
>  	struct kvm_mmu_page *sp;
>
>  	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
>  	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
>
> +	if (private)
> +		kvm_mmu_alloc_private_sp(vcpu, sp, is_root);
> +	else
> +		kvm_mmu_init_private_sp(sp, NULL);
> +
>  	return sp;
>  }
>
> -static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
> -			    gfn_t gfn, union kvm_mmu_page_role role)
> +static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn,
> +			    union kvm_mmu_page_role role)
>  {
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
>
> @@ -297,7 +303,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>  	sp->gfn = gfn;
>  	sp->ptep = sptep;
>  	sp->tdp_mmu_page = true;
> -	kvm_mmu_init_private_sp(sp);
>
>  	trace_kvm_mmu_get_page(sp, true);
>  }
> @@ -316,7 +321,8 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
>  	tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
>  }
>
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *kvm_tdp_mmu_get_vcpu_root(struct kvm_vcpu *vcpu,
> +						      bool private)
>  {
>  	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
>  	struct kvm *kvm = vcpu->kvm;
> @@ -330,11 +336,12 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>  	 */
>  	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
>  		if (root->role.word == role.word &&
> +		    is_private_sp(root) == private &&
>  		    kvm_tdp_mmu_get_root(root))
>  			goto out;
>  	}
>
> -	root = tdp_mmu_alloc_sp(vcpu);
> +	root = tdp_mmu_alloc_sp(vcpu, private, true);
>  	tdp_mmu_init_sp(root, NULL, 0, role);
>
>  	refcount_set(&root->tdp_mmu_root_count, 1);
> @@ -344,12 +351,17 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>  	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
>
>  out:
> -	return __pa(root->spt);
> +	return root;
> +}
> +
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private)
> +{
> +	return __pa(kvm_tdp_mmu_get_vcpu_root(vcpu, private)->spt);
>  }
>
>  static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				u64 old_spte, u64 new_spte, int level,
> -				bool shared);
> +				bool private_spte, u64 old_spte,
> +				u64 new_spte, int level, bool shared);
>
>  static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
>  {
> @@ -410,6 +422,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
>   *
>   * @kvm: kvm instance
>   * @pt: the page removed from the paging structure
> + * @is_private: pt is private or not.
>   * @shared: This operation may not be running under the exclusive use
>   *	    of the MMU lock and the operation must synchronize with other
>   *	    threads that might be modifying SPTEs.
> @@ -422,7 +435,8 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
>   * this thread will be responsible for ensuring the page is freed. Hence the
>   * early rcu_dereferences in the function.
>   */
> -static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
> +static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool is_private,
> +			      bool shared)
>  {
>  	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
>  	int level = sp->role.level;
> @@ -498,8 +512,20 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>  			old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
>  							  REMOVED_SPTE, level);
>  		}
> -		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
> -				    old_spte, REMOVED_SPTE, level, shared);
> +		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn, is_private,
> +				    old_spte, REMOVED_SPTE, level,
> +				    shared);
> +	}
> +
> +	if (is_private && WARN_ON(static_call(kvm_x86_free_private_sp)(
> +					  kvm, sp->gfn, sp->role.level,
> +					  kvm_mmu_private_sp(sp)))) {
> +		/*
> +		 * Failed to unlink Secure EPT page and there is nothing to do
> +		 * further.  Intentionally leak the page to prevent the kernel
> +		 * from accessing the encrypted page.
> +		 */
> +		kvm_mmu_init_private_sp(sp, NULL);
>  	}
>
>  	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
> @@ -510,6 +536,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   * @kvm: kvm instance
>   * @as_id: the address space of the paging structure the SPTE was a part of
>   * @gfn: the base GFN that was mapped by the SPTE
> + * @private_spte: the SPTE is private or not
>   * @old_spte: The value of the SPTE before the change
>   * @new_spte: The value of the SPTE after the change
>   * @level: the level of the PT the SPTE is part of in the paging structure
> @@ -521,14 +548,30 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   * This function must be called for all TDP SPTE modifications.
>   */
>  static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				  u64 old_spte, u64 new_spte, int level,
> -				  bool shared)
> +				  bool private_spte, u64 old_spte,
> +				  u64 new_spte, int level, bool shared)
>  {
>  	bool was_present = is_shadow_present_pte(old_spte);
>  	bool is_present = is_shadow_present_pte(new_spte);
>  	bool was_leaf = was_present && is_last_spte(old_spte, level);
>  	bool is_leaf = is_present && is_last_spte(new_spte, level);
> -	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
> +	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
> +	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
> +	bool pfn_changed = old_pfn != new_pfn;
> +	struct kvm_spte_change change = {
> +		.gfn = gfn,
> +		.level = level,
> +		.old = {
> +			.pfn = old_pfn,
> +			.is_present = was_present,
> +			.is_leaf = was_leaf,
> +		},
> +		.new = {
> +			.pfn = new_pfn,
> +			.is_present = is_present,
> +			.is_leaf = is_leaf,
> +		},
> +	};
>
>  	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
>  	WARN_ON(level < PG_LEVEL_4K);
> @@ -595,7 +638,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>
>  	if (was_leaf && is_dirty_spte(old_spte) &&
>  	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
> -		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
> +		kvm_set_pfn_dirty(old_pfn);
>
>  	/*
>  	 * Recursively handle child PTs if the change removed a subtree from
> @@ -604,16 +647,47 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  	 * pages are kernel allocations and should never be migrated.
>  	 */
>  	if (was_present && !was_leaf &&
> -	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
> -		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
> +	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) {
> +		WARN_ON(private_spte !=
> +			is_private_sptep(spte_to_child_pt(old_spte, level)));
> +		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level),
> +				  private_spte, shared);
> +	}
> +
> +	/*
> +	 * Special handling for the private mapping.  We are either
> +	 * setting up new mapping at middle level page table, or leaf,
> +	 * or tearing down existing mapping.
> +	 *
> +	 * This is after handling lower page table by above
> +	 * handle_remove_tdp_mmu_page().  S-EPT requires removing S-EPT tables
> +	 * after removing their children.
> +	 */
> +	if (private_spte &&
> +	    /* Ignore change of software only bits. e.g. host_writable */
> +	    (was_leaf != is_leaf || was_present != is_present || pfn_changed)) {
> +		void *sept_page = NULL;
> +
> +		if (is_present && !is_leaf) {
> +			struct kvm_mmu_page *sp = to_shadow_page(pfn_to_hpa(new_pfn));
> +
> +			sept_page = kvm_mmu_private_sp(sp);
> +			WARN_ON(!sept_page);
> +			WARN_ON(sp->role.level + 1 != level);
> +			WARN_ON(sp->gfn != gfn);
> +		}
> +		change.sept_page = sept_page;
> +
> +		static_call(kvm_x86_handle_changed_private_spte)(kvm, &change);
> +	}
>  }
>
>  static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				u64 old_spte, u64 new_spte, int level,
> -				bool shared)
> +				bool private_spte, u64 old_spte, u64 new_spte,
> +				int level, bool shared)
>  {
> -	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
> -			      shared);
> +	__handle_changed_spte(kvm, as_id, gfn, private_spte,
> +			old_spte, new_spte, level, shared);
>  	handle_changed_spte_acc_track(old_spte, new_spte, level);
>  	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
>  				      new_spte, level);
> @@ -640,6 +714,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  					  struct tdp_iter *iter,
>  					  u64 new_spte)
>  {
> +	bool freeze_spte = iter->is_private && !is_removed_spte(new_spte);
> +	u64 tmp_spte = freeze_spte ? REMOVED_SPTE : new_spte;
>  	u64 *sptep = rcu_dereference(iter->sptep);
>  	u64 old_spte;
>
> @@ -657,7 +733,7 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
>  	 * does not hold the mmu_lock.
>  	 */
> -	old_spte = cmpxchg64(sptep, iter->old_spte, new_spte);
> +	old_spte = cmpxchg64(sptep, iter->old_spte, tmp_spte);
>  	if (old_spte != iter->old_spte) {
>  		/*
>  		 * The page table entry was modified by a different logical
> @@ -669,10 +745,14 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  		return -EBUSY;
>  	}
>
> -	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
> -			      new_spte, iter->level, true);
> +	__handle_changed_spte(
> +		kvm, iter->as_id, iter->gfn, iter->is_private,
> +		iter->old_spte, new_spte, iter->level, true);
>  	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
>
> +	if (freeze_spte)
> +		__kvm_tdp_mmu_write_spte(sptep, new_spte);
> +
>  	return 0;
>  }
>
> @@ -734,13 +814,15 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>   *		      unless performing certain dirty logging operations.
>   *		      Leaving record_dirty_log unset in that case prevents page
>   *		      writes from being double counted.
> + * @is_private:       The fault is private.
>   *
>   * Returns the old SPTE value, which _may_ be different than @old_spte if the
>   * SPTE had voldatile bits.
>   */
>  static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
> -			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
> -			      bool record_acc_track, bool record_dirty_log)
> +			       u64 old_spte, u64 new_spte, gfn_t gfn, int level,
> +			       bool record_acc_track, bool record_dirty_log,
> +			       bool is_private)
>  {
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>
> @@ -755,7 +837,8 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
>
>  	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
>
> -	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
> +	__handle_changed_spte(kvm, as_id, gfn, is_private,
> +			      old_spte, new_spte, level, false);
>
>  	if (record_acc_track)
>  		handle_changed_spte_acc_track(old_spte, new_spte, level);
> @@ -774,7 +857,8 @@ static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
>  	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
>  					    iter->old_spte, new_spte,
>  					    iter->gfn, iter->level,
> -					    record_acc_track, record_dirty_log);
> +					    record_acc_track, record_dirty_log,
> +					    iter->is_private);
>  }
>
>  static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
> @@ -807,8 +891,11 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
>  			continue;					\
>  		else
>
> -#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)		\
> -	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
> +#define tdp_mmu_for_each_pte(_iter, _mmu, _private, _start, _end)	\
> +	for_each_tdp_pte(_iter,						\
> +		 to_shadow_page((_private) ? _mmu->private_root_hpa :	\
> +				_mmu->root.hpa),			\
> +		_start, _end)
>
>  /*
>   * Yield if the MMU lock is contended or this thread needs to return control
> @@ -945,7 +1032,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>
>  	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
>  			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
> -			   true, true);
> +			   true, true, is_private_sp(sp));
>
>  	return true;
>  }
> @@ -961,13 +1048,21 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>   * operation can cause a soft lockup.
>   */
>  static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
> -			      gfn_t start, gfn_t end, bool can_yield, bool flush)
> +			      gfn_t start, gfn_t end, bool can_yield, bool flush,
> +			      bool drop_private)
>  {
>  	struct tdp_iter iter;
>
>  	end = min(end, tdp_mmu_max_gfn_exclusive());
>
>  	lockdep_assert_held_write(&kvm->mmu_lock);
> +	/*
> +	 * Extend [start, end) to include GFN shared bit when TDX is enabled,
> +	 * and for shared mapping range.
> +	 */
> +	WARN_ON_ONCE(!is_private_sp(root) && drop_private);
> +	start = kvm_gfn_for_root(kvm, root, start);
> +	end = kvm_gfn_for_root(kvm, root, end);
>
>  	rcu_read_lock();
>
> @@ -1002,12 +1097,13 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>   * MMU lock.
>   */
>  bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
> -			   bool can_yield, bool flush)
> +			   bool can_yield, bool flush, bool drop_private)
>  {
>  	struct kvm_mmu_page *root;
>
>  	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
> -		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);
> +		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush,
> +					  drop_private && is_private_sp(root));
>
>  	return flush;
>  }
> @@ -1067,6 +1163,12 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
>
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>  	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
> +		/*
> +		 * Skip private root since private page table
> +		 * is only torn down when VM is destroyed.
> +		 */
> +		if (is_private_sp(root))
> +			continue;
>  		if (!root->role.invalid &&
>  		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
>  			root->role.invalid = true;
> @@ -1087,14 +1189,22 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  	u64 new_spte;
>  	int ret = RET_PF_FIXED;
>  	bool wrprot = false;
> +	unsigned long pte_access = ACC_ALL;
> +	gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
>
>  	WARN_ON(sp->role.level != fault->goal_level);
> +
> +	/* TDX shared GPAs are not executable, enforce this for the SDV. */
> +	if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
> +		pte_access &= ~ACC_EXEC_MASK;
> +
>  	if (unlikely(!fault->slot))
> -		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
> +		new_spte = make_mmio_spte(vcpu, gfn_unalias, pte_access);
>  	else
> -		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
> -					 fault->pfn, iter->old_spte, fault->prefetch, true,
> -					 fault->map_writable, &new_spte);
> +		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
> +				   gfn_unalias, fault->pfn, iter->old_spte,
> +				   fault->prefetch, true, fault->map_writable,
> +				   &new_spte);
>
>  	if (new_spte == iter->old_spte)
>  		ret = RET_PF_SPURIOUS;
> @@ -1167,8 +1277,7 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
>  	return 0;
>  }
>
> -static int tdp_mmu_populate_nonleaf(
> -	struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
> +static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
>  {
>  	struct kvm_mmu_page *sp;
>  	int ret;
> @@ -1176,7 +1285,7 @@ static int tdp_mmu_populate_nonleaf(
>  	WARN_ON(is_shadow_present_pte(iter->old_spte));
>  	WARN_ON(is_removed_spte(iter->old_spte));
>
> -	sp = tdp_mmu_alloc_sp(vcpu);
> +	sp = tdp_mmu_alloc_sp(vcpu, iter->is_private, false);
>  	tdp_mmu_init_child_sp(sp, iter);
>
>  	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
> @@ -1193,6 +1302,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	struct tdp_iter iter;
> +	gfn_t raw_gfn;
> +	bool is_private = fault->is_private;
>  	int ret;
>
>  	kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -1201,7 +1312,16 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>
>  	rcu_read_lock();
>
> -	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
> +	raw_gfn = gpa_to_gfn(fault->addr);
> +
> +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn)) {
> +		if (is_private) {
> +			rcu_read_unlock();
> +			return -EFAULT;
> +		}
> +	}
> +
> +	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
>  		if (fault->nx_huge_page_workaround_enabled)
>  			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
>
> @@ -1217,6 +1337,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  		    is_large_pte(iter.old_spte)) {
>  			if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
>  				break;
> +			/*
> +			 * TODO: large page support.
> +			 * Doesn't support large page for TDX now
> +			 */
> +			WARN_ON(is_private_sptep(iter.sptep));
> +
>
>  			/*
>  			 * The iter must explicitly re-read the spte here
> @@ -1258,11 +1384,13 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	return ret;
>  }
>
> +/* Used by mmu notifier via kvm_unmap_gfn_range() */
>  bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
> -				 bool flush)
> +				 bool flush, bool drop_private)
>  {
>  	return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start,
> -				     range->end, range->may_block, flush);
> +				     range->end, range->may_block, flush,
> +				     drop_private);
>  }
>
>  typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
> @@ -1445,7 +1573,8 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
>  	return spte_set;
>  }
>
> -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> +static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(
> +	gfp_t gfp, bool is_private)
>  {
>  	struct kvm_mmu_page *sp;
>
> @@ -1456,6 +1585,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
>  		return NULL;
>
>  	sp->spt = (void *)__get_free_page(gfp);
> +	if (is_private) {
> +		if (kvm_alloc_private_sp_for_split(sp, gfp)) {
> +			free_page((unsigned long)sp->spt);
> +			sp->spt = NULL;
> +		}
> +	}
>  	if (!sp->spt) {
>  		kmem_cache_free(mmu_page_header_cache, sp);
>  		return NULL;
> @@ -1469,6 +1604,11 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  						       bool shared)
>  {
>  	struct kvm_mmu_page *sp;
> +	bool is_private = iter->is_private;
> +
> +	/* TODO: For now large page isn't supported for private SPTE. */
> +	WARN_ON(is_private);
> +	WARN_ON(iter->is_private != is_private_sptep(iter->sptep));
>
>  	/*
>  	 * Since we are allocating while under the MMU lock we have to be
> @@ -1479,7 +1619,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  	 * If this allocation fails we drop the lock and retry with reclaim
>  	 * allowed.
>  	 */
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT, is_private);
>  	if (sp)
>  		return sp;
>
> @@ -1491,7 +1631,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>
>  	iter->yielded = true;
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT, is_private);
>
>  	if (shared)
>  		read_lock(&kvm->mmu_lock);
> @@ -1907,10 +2047,14 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	gfn_t gfn = addr >> PAGE_SHIFT;
>  	int leaf = -1;
> +	bool is_private = kvm_is_private_gpa(vcpu->kvm, addr);
>
>  	*root_level = vcpu->arch.mmu->root_role.level;
>
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	if (WARN_ON(is_private))
> +		return leaf;
> +
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>  		leaf = iter.level;
>  		sptes[leaf] = iter.old_spte;
>  	}
> @@ -1937,7 +2081,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
>  	gfn_t gfn = addr >> PAGE_SHIFT;
>  	tdp_ptep_t sptep = NULL;
>
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	/* fast page fault for private GPA isn't supported. */
> +	WARN_ON_ONCE(kvm_is_private_gpa(vcpu->kvm, addr));
> +
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>  		*spte = iter.old_spte;
>  		sptep = iter.sptep;
>  	}
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index c163f7cc23ca..d1655571eb2f 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -5,7 +5,7 @@
>
>  #include <linux/kvm_host.h>
>
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private);
>
>  __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
>  {
> @@ -16,7 +16,8 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
>  			  bool shared);
>
>  bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start,
> -				 gfn_t end, bool can_yield, bool flush);
> +				gfn_t end, bool can_yield, bool flush,
> +				bool drop_private);
>  bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
>  void kvm_tdp_mmu_zap_all(struct kvm *kvm);
>  void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
> @@ -25,7 +26,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm);
>  int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
>
>  bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
> -				 bool flush);
> +				 bool flush, bool drop_private);
>  bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
>  bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
>  bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 0acb0b6d1f82..7a5261eb7eb8 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -196,6 +196,7 @@ bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
>
>  	return true;
>  }
> +EXPORT_SYMBOL_GPL(kvm_is_reserved_pfn);
>
>  /*
>   * Switches to specified vcpu, until a matching vcpu_put()
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-07-11  7:05       ` Yuan Yao
@ 2022-07-11 14:47         ` Sean Christopherson
  0 siblings, 0 replies; 219+ messages in thread
From: Sean Christopherson @ 2022-07-11 14:47 UTC (permalink / raw)
  To: Yuan Yao
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Mon, Jul 11, 2022, Yuan Yao wrote:
> On Fri, Jul 08, 2022 at 03:30:23PM +0000, Sean Christopherson wrote:
> > Please trim replies.
> > > I'm not sure why skip this for TDX, arch.mmu_shadow_page_cache is
> > > still used for allocating sp->spt which used to track the S-EPT in kvm
> > > for tdx guest.  Anything I missed for this ?
> >
> > Shared EPTEs need to be initialized with SUPPRESS_VE=1, otherwise not-present
> > EPT violations would be reflected into the guest by hardware as #VE exceptions.
> > This is handled by initializing page allocations via kvm_init_shadow_page() during
> > cache topup if shadow_nonpresent_value is non-zero.  In that case, telling the
> > page allocation to zero-initialize the page would be wasted effort.
> >
> > The initialization is harmless for S-EPT entries because KVM's copy of the S-EPT
> > isn't consumed by hardware, and because under the hood S-EPT entries should never
> > #VE (I forget if this is enforced by hardware or if the TDX module sets SUPPRESS_VE).
> 
> Ah I see, you're right, thanks for the explanation! I think with the
> changes you suggested above, __GFP_ZERO can be removed from
> mmu_shadow_page_cache for VMs where is_tdp_mmu_enabled() is true:

Yep.

> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 8de26cbde295..0b412f3eb0c5 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6483,8 +6483,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
>  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> 
> -	if (!(tdp_enabled && shadow_nonpresent_value))
> -		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +	if (!is_tdp_mmu_enabled(vcpu->kvm))
> +		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> 
>  	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
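
For reference, a minimal sketch of the initialization mentioned above (the exact
code in the series may differ; this is only an illustration of the idea):

	/*
	 * Illustrative sketch: fill a freshly allocated shadow page with the
	 * non-present SPTE value (which has SUPPRESS_VE=1), so
	 * zero-initialization via __GFP_ZERO is redundant.
	 */
	static void kvm_init_shadow_page(void *page)
	{
		u64 *sptep = page;
		int i;

		for (i = 0; i < PAGE_SIZE / sizeof(u64); i++)
			sptep[i] = SHADOW_NONPRESENT_VALUE;
	}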

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX
  2022-06-27 21:53 ` [PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX isaku.yamahata
  2022-07-11  5:48   ` Yuan Yao
@ 2022-07-11 14:56   ` Sean Christopherson
  2022-07-19 15:04     ` Isaku Yamahata
  1 sibling, 1 reply; 219+ messages in thread
From: Sean Christopherson @ 2022-07-11 14:56 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

s/Focibly/Forcibly, but that's a moot point because KVM shouldn't override the
module param.  KVM should instead _require_ the TDP MMU to be enabled.  E.g.
if userspace disables the TDP MMU to workaround a fatal bug, then forcing the TDP
MMU may silently expose KVM to said bug.

And overriding tdp_enabled is just mind-boggling broken, all of the SPTE masks
will be wrong.

On Mon, Jun 27, 2022, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> In this patch series, TDX supports only TDP MMU and doesn't support legacy
> MMU.  Forcibly use TDP MMU for TDX irrelevant of kernel parameter to
> disable TDP MMU.

Do not refer to the "patch series", instead phrase the statement with respect to
what KVM support.

  Require the TDP MMU for TDX guests, the so called "shadow" MMU does not
  support mapping guest private memory, i.e. does not support Secure-EPT.

> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 82f1bfac7ee6..7eb41b176d1e 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -18,8 +18,13 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
>  {
>  	struct workqueue_struct *wq;
>  
> -	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
> -		return 0;
> +	/*
> +	 *  Because TDX supports only TDP MMU, forcibly use TDP MMU in the case
> +	 *  of TDX.
> +	 */
> +	if (kvm->arch.vm_type != KVM_X86_TDX_VM &&
> +		(!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)))
> +		return false;

Yeah, no.

	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
		return kvm->arch.vm_type == KVM_X86_TDX_VM ? -EINVAL : 0;

>  
>  	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
>  	if (!wq)
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (101 preceding siblings ...)
  2022-06-27 21:54 ` [PATCH v7 102/102] KVM: x86: design documentation on TDX support of x86 KVM TDP MMU isaku.yamahata
@ 2022-07-11 15:17 ` Isaku Yamahata
  2022-07-12  5:07   ` Chao Gao
  2022-07-12 10:49   ` Chao Peng
  2022-07-14  1:03 ` Sean Christopherson
  103 siblings, 2 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-11 15:17 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

Hi. Because my description of large page support was terse, I wrote up a more
detailed one.  Any feedback/thoughts on large page support?

TDP MMU large page support design

Two main discussion points
* how to track page status: private vs shared, no-largepage vs can-be-largepage
* how to trigger merging mappings from 4KB/2MB to 2MB/1GB

Expected private-vs-shared page usage
-------------------------------------
On TD boot, all pages are private and the TD converts pages to shared as
necessary.
* Most of the guest pages remain private.
* Only a limited number of pages are converted at kernel boot
  ** bounce buffer for IO (virt-io).  It's allocated as swiotlb.  Its size is
     64MB or 6% of total guest memory.
  ** KVM PV shared page. (the current guest TD doesn't use KVM PV shared page.)
* Only a small number of pages are dynamically converted from private to shared
  and vice versa.  This usage is very limited, e.g. GetQuote or when the swiotlb
  buffer runs short.


Theory of Secure-EPT operations related to large page
-----------------------------------------------------
TDX Secure-EPT has differences from VMX EPT.
To add a page to Secure-EPT:

* Here are the steps to resolve an EPT violation.
1. TD: Accepts GPA.  TD needs to accept GPA before accessing GPA because TD
   needs to detect that VMM unmaps GPA and maps GPA again.
2. EPT violation is triggered.  TD exit to VMM.
3. VMM: allocate a page for GPA and TDH.MEM.PAGE.AUG it to GPA.  Resume TD vcpu.
   (3a. TD: #VE<EPT violation> is injected.  #VE handler accepts the page)
4. TD: resume #VE and continue TD vcpu execution

The TD may choose to skip step 1.  In that case, after step 3, a #VE is injected
into the TD and the TD #VE handler needs to accept the page.

When adding a page to the Secure-EPT again, the page contents are cleared and
the page is encrypted.  If a page is disassociated from the Secure-EPT and added
again, the page content is lost.

* TDG.VP.VMCALL<MapGPA> hypercall
The page associated with a GPA can be private or shared.  The TD converts a GPA
from private to shared, or vice versa, with the TDG.VP.VMCALL<MapGPA> hypercall.
The VMM tracks whether the given GPA is private or shared.

* mapping merge(promote)/split(demote)
A page can be mapped as a large page (2MB or 1GB) in addition to 4KB.  The
mapping can be merged (4KB/2MB -> 2MB/1GB) or split (2MB/1GB -> 4KB/2MB) by the
TDX SEAMCALLs TDH.MEM.PAGE.PROMOTE and TDH.MEM.PAGE.DEMOTE.
Unlike VMX EPT, merging a mapping requires all the lower-level pages to be
mapped, because of encryption.  This implies the current KVM implementation
doesn't work for TDX when merging mappings, as follows:

- EPT violation and host page is 2MB mappable.
  some of the 4KB pages of the given 2MB page are already mapped, some not.
  i.e. 2MB EPT -> 4KB EPT -> 4K pages
- KVM page fault handler zap 2MB EPT entry and populate 2MB EPT entry
  zap: 2MB EPT: non present
  populate 2MB: -> 2MB page

If the VMM zaps a 2MB Secure-EPT entry, the page contents will be lost for TDX.
Mapping merge requires that all pages are already mapped.

Instead, the following steps are needed (see the sketch below).
- EPT violation and host page is 2MB mappable.
  some of the 4KB pages of the given 2MB page are already mapped.  Some not.
  i.e. 2MB EPT -> 4KB EPT -> 4K pages
- VMM checks all 4KB GPAs are private. If not, it can't be mapped as a large page.
  (****)
- VMM checks all 4KB GPAs are already mapped.  If not, give up mapping merge.
  (or map missing 4KB pages.)
- mapping merge by TDH.MEM.PAGE.PROMOTE

The mapping split for TDX Secure-EPT works similarly to the VMX EPT case.
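
A rough sketch of the merge flow above (all helper names here are hypothetical
placeholders, for illustration only; the real code would be split between the
TDP MMU and the TDX backend):

	static int tdx_try_merge_private_mapping(struct kvm *kvm, gfn_t gfn, int level)
	{
		/* (****) All lower-level GPAs must be private; otherwise stay at 4KB. */
		if (!all_lower_gfns_private(kvm, gfn, level))
			return -EBUSY;

		/* All lower-level pages must already be mapped before the merge. */
		if (!all_lower_sptes_present(kvm, gfn, level))
			return -EBUSY;

		/* TDH.MEM.PAGE.PROMOTE merges the 512 mappings into one large page. */
		return tdh_mem_page_promote(kvm, gfn, level);
	}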


EPT violation and MapGPA
------------------------
- EPT violation is a fast path
- MapGPA is not a fast path.
=> Keep the EPT violation path optimized and complicate the MapGPA path instead.
For the (****) check, we don't want to scan the 4KB mappings on EPT violation.
Instead, the MapGPA path scans them and records whether the page can be mapped
as 2MB with respect to private/shared.


Tracking private/shared and large page mappable
-----------------------------------------------
The VMM needs to track whether a page is mapped as private or shared at 4KB
granularity.  For efficiency of the EPT violation path (****), at the 2MB and
1GB levels the VMM should track whether the page can be mapped as a large page
(with respect to private/shared).  The VMM updates this on MapGPA and references
it on the EPT violation path. (****)

For 4KB pages, 1 bit is needed: private or shared.  Let's call it the
shared-mask bit.  For 2MB/1GB pages, 2 bits are needed: large-page mappable or
not, and private or shared if mappable.  Let's call the former the no-largepage
bit.

Option A.)
  Allocate array for pages in struct kvm_arch_memory_slot on TD creation.
  struct kvm_arch_memory_slot {
    +struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
  }

  pros:
  +straightforward implementation
  +SPTE_SHARED_MASK is not needed
  cons:
  -memory overhead is high
  -not optimized for expected usage
  -one more look-up on EPT violation

Option B.) Steal two software-usable bits from the SPTE and record them in the
           SPTE (see the sketch below).
           SPTE_SHARED_MASK, SPTE_NOLARGE_PAGE_MASK
  pros:
  +optimized for EPT violation
  cons:
  -2bits used in SPTE entry
  -complicates the MapGPA path.

Option C.) Steal one software usable bit from SPTE and record it in SPTE.
           SPTE_SHARED_MASK
           For 2MB/1GB, allocate bitmap in kvm_mmu_page.
           struct kvm_mmu_page {
             bitmap nolarge
           }
  pros:
  +optimized for EPT violation
  cons:
  -complicates the MapGPA path.
  -information is scattered in SPTE and struct kvm_mmu_page
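
To make Option B.) concrete, a rough sketch of the two stolen bits (the bit
positions are purely illustrative; the real ones must come from
software-available bits of the SPTE encoding):

	/* Illustrative only: positions of the stolen software-available bits. */
	#define SPTE_SHARED_MASK	BIT_ULL(58)	/* 4KB: the GPA is shared */
	#define SPTE_NOLARGE_PAGE_MASK	BIT_ULL(59)	/* 2MB/1GB: no large page */

	static inline bool spte_shared(u64 spte)
	{
		return spte & SPTE_SHARED_MASK;
	}

	static inline bool spte_nolarge_page(u64 spte)
	{
		return spte & SPTE_NOLARGE_PAGE_MASK;
	}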


How to update those bits
------------------------
- MapGPA
  - at 4KB level, set or clear shared-mask bit.
  - Scan the 512 4KB bits at the 2MB level
    - set or clear shared-mask bit, clear no-largepage bit or
    - clear shared-mask bit, set no-largepage bit
    - increment/decrement lpageinfo to prevent/allow large page
  - similar for 1GB level
  Note: This logic might be a bit tricky (see the sketch below).

- EPT violation
  - If a 2MB large page is allowed, check the no-largepage bit
    - If no-largepage bit is set, => go down to 4KB page
    - If no-largepage bit is cleared => try to map 2MB page
      - If 4KB level is not mapped, map 2MB page
      - If some 4KB level is already mapped, go down to 4KB.
        Don't try to merge the mapping (or optionally, try to merge it).
  Note: Scanning the 512 4KB entries is not done on EPT violation because it is
        the fast path.
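
A rough sketch of the two paths (illustrative pseudocode with hypothetical
helper names, not the actual series code):

	/*
	 * MapGPA does the expensive 512-entry scan and caches the result;
	 * the EPT violation path only consults the cached no-largepage state.
	 */
	static void map_gpa_update_2m(struct kvm *kvm, gfn_t gfn_2m, bool to_private)
	{
		/* Set or clear the shared-mask bit for each of the 512 4KB GFNs. */
		set_shared_mask_range(kvm, gfn_2m, 512, !to_private);

		/* Scan the 512 entries to decide whether a 2MB mapping is allowed. */
		if (range_is_uniform(kvm, gfn_2m, 512))
			clear_nolarge_page(kvm, gfn_2m);
		else
			set_nolarge_page(kvm, gfn_2m);
	}

	static int private_max_mapping_level(struct kvm *kvm, gfn_t gfn)
	{
		/* Fast path: no 512-entry scan, just check the cached bit. */
		if (nolarge_page(kvm, gfn))
			return PG_LEVEL_4K;
		return PG_LEVEL_2M;
	}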


Map merging
-----------
Map merging is necessary for TD migration. (Map split is the easy part.)  The
current KVM implementation zaps the range (from the mmu notifier or the NX lpage
recovery worker) and expects a large page mapping on the next EPT violation.

Option A.) Keep the code similar to the existing map merging logic.
Zap the 2MB EPT entry in some sense and trigger the map merging logic on the
next EPT violation.  To keep the encrypted page contents, zapped EPT entries
need to keep the page.  Steal one more bit from the SPTE:
SPTE_PRIVATE_BLOCKED_MASK.  It means that the page is zapped from the SPTE, but
the page is still alive and still referenced.

Option B.) In the callback, directly merge the mapping somehow.  In this case,
mmu notifier usage doesn't make sense.

NOTE:
- Implement map merging in MapGPA. This doesn't work for dirty page logging.
- We can utilize kvm_nx_lpage_recovery_worker
- We can utilize THP. Probably doesn't work well for fd-based private memory.

Thanks,
Isaku Yamahata

On Mon, Jun 27, 2022 at 02:52:52PM -0700,
isaku.yamahata@intel.com wrote:

> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> KVM TDX basic feature support
> 
> Hello.  This is v7 the patch series vof KVM TDX support.
> This is based on v5.19-rc1 + kvm/queue branch + TDX HOST patch series.
> The tree can be found at https://github.com/intel/tdx/tree/kvm-upstream
> How to run/test: It's describe at https://github.com/intel/tdx/wiki/TDX-KVM
> 
> Major changes from v6:
> - rebased to v5.19 base
> 
> TODO:
> - integrate fd-based guest memory. As the discussion is still on-going, I
>   intentionally dropped fd-based guest memory support yet.  The integration can
>   be found at https://github.com/intel/tdx/tree/kvm-upstream-workaround.
> - 2M large page support. It's work-in-progress.
> For large page support, there are several design choices. Here is the design options.
> Any thoughts/feedback?
> 
> KVM MMU Large page support for TDX
> 
> * What needs to be done
> - Track private or shared of each page size (4KB, 2MB, 1GB) based on
>   TDG.VP.VMCALL<MapGPA>.  For large pages(2MB, 1GB), it can be mixed (some
>   lower-size pages are private and some shared.)  In this case, the page can't
>   be large.
> - if necessary, split large page on TDG.VP.VMCALL<MapGPA>
>   (split on dirty page tracking is future work)
> - resolving KVM page fault
>   When resolving a private page and the page is large in the host, GPA can be
>   resolved as a large page in Secure-EPT.  Even if the page is large on the host
>   side, sometimes a 4KB page can be resolved because it's up to guest TD to
>   accept at 4KB, 2MB, or 1GB.
> - collapsing pages into a large page.
>   At this point, it's okay to not implement this.  When dirty page tracking is
>   supported, this needs to be supported.
>   - On MapGPA, the page can be collapsed into a large page
>   - handle zapping SPTE and try to collapse the pages on the next KVM page fault
>     Unlike the EPT case, some trick is needed.
> - For performance, optimize KVM page fault path at the cost of complicating
>   MapGPA path.
> 
> * options to track private or shared
> At each page size (4KB, 2MB, and 1GB), track private, shared, or mixed (2MB and
> 1GB case). For 4KB each page, 1 bit per page is needed. private or shared.  For
> large pages (2MB and 1GB), 2 bits per large page is needed. (private, shared, or
> mixed).  When resolving KVM page fault, we don't want to check the lower-size
> pages to check if the given GPA can be a large for performance.  On MapGPA check
> it instead.
> 
> Option A). enhance kvm_arch_memory_slot
>   enum kvm_page_type {
>        KVM_PAGE_TYPE_INVALID,
>        KVM_PAGE_TYPE_SHARED,
>        KVM_PAGE_TYPE_PRIVATE,
>        KVM_PAGE_TYPE_MIXED,
>   };
> 
>   struct kvm_page_attr {
>        enum kvm_page_type type;
>   };
> 
>  struct kvm_arch_memory_slot {
>  +      struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
> 
> Option B). steal one more bit SPTE_MIXED_MASK in addition to SPTE_SHARED_MASK
> If !SPTE_MIXED_MASK, it can be large page.
> 
> Option C). use SPTE_SHARED_MASK and kvm_mmu_page::mixed bitmap
> kvm_mmu_page::mixed bitmap of 1GB, root indicates mixed for 2MB, 1GB.
> 
> 
> * comparison
> A).
> + straightforward to implement
> + SPTE_SHARED_MASK isn't needed
> - memory overhead compared to B). or C).
> - more memory reference on KVM page fault
> 
> B).
> + simpler than C) (complex than A)?)
> + efficient on KVM page fault. (only SPTE reference)
> + low memory overhead
> - Waste precious SPTE bits.
> 
> C).
> + efficient on KVM page fault. (only SPTE reference)
> + low memory overhead
> - complicates MapGPA
> - scattered data structure
> 
> Thanks,
> Isaku Yamahata
> 
> Changes from v6:
> - rebased to v5.19
> 
> Changes from v5:
> - export __seamcall and use it
> - move mutex lock from callee function of smp_call_on_cpu to the caller.
> - rename mmu_prezap => flush_shadow_all_private() and tdx_mmu_release_hkid
> - updated comment
> - drop the use of tdh_mng_key.reclaimid(): as the function is for backward
>   compatibility to only return success
> - struct kvm_tdx_cmd: metadata => flags, added __u64 error.
> - make this ioctl systemwide ioctl
> - ABI change to struct kvm_init_vm
> - guest_tsc_khz: use kvm->arch.default_tsc_khz
> - rename BUILD_BUG_ON_MEMCPY to MEMCPY_SAME_SIZE
> - drop exporting kvm_set_tsc_khz().
> - fix kvm_tdp_page_fault() for mtrr emulation
> - rename it to kvm_gfn_shared_mask(), dropped kvm_gpa_shared_mask()
> - drop kvm_is_private_gfn(), kept kvm_is_private_gpa()
>   keep kvm_{gfn, gpa}_private(), kvm_gpa_private()
> - update commit message
> - rename shadow_init_value => shadow_nonprsent_value
> - added ept_violation_ve_test mode
> - shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in tdp_mmu.c
> - legacy MMU case
>   => - mmu_topup_shadow_page_cache(), kvm_mmu_create()
>      - FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> - #VE warning:
> - rename: REMOVED_SPTE => __REMOVED_SPTE, SHADOW_REMOVED_SPTE => REMOVED_SPTE
> - merge into Like we discussed, this patch should be merged with patch
>   "KVM: x86/mmu: Allow non-zero init value for shadow PTE".
> - fix pointed by Sagi. check !is_private check => (kvm_gfn_shared_mask && !is_private)
> - introduce kvm_gfn_for_root(kvm, root, gfn)
> - add only_shared argument to kvm_tdp_mmu_handle_gfn()
> - use kvm_arch_dirty_log_supported()
> - rename SPTE_PRIVATE_PROHIBIT to SPTE_SHARED_MASK.
> - rename: is_private_prohibit_spte() => spte_shared_mask()
> - fix: shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in comment
> - dropped this patch as the change was merged into kvm/queue
> - update vt_apicv_post_state_restore()
> - use is_64_bit_hypercall()
> - comment: expand MSMI -> Machine Check System Management Interrupt
> - fixed TDX_SEPT_PFERR
> - tdvmcall_p[1234]_{write, read}() => tdvmcall_a[0123]_{read,write}()
> - rename tdmvcall_exit_readon() => tdvmcall_leaf()
> - remove optional zero check of argument.
> - do a check for static_call(kvm_x86_has_emulated_msr)(kvm, MSR_IA32_SMBASE)
>    in kvm_vcpu_ioctl_smi and __apic_accept_irq.
> - WARN_ON_ONCE in tdx_smi_allowed and tdx_enable_smi_window.
> - introduce vcpu_deliver_init to x86_ops
> - sprinkeled KVM_BUG_ON()
> 
> Changes from v4:
> - rebased to TDX host kernel patch series.
> - include all the patches to make this patch series working.
> - add [MARKER] patches to mark the patch layer clear.
> 
> ---
> * What's TDX?
> TDX stands for Trust Domain Extensions, which extends Intel Virtual Machines
> Extensions (VMX) to introduce a kind of virtual machine guest called a Trust
> Domain (TD) for confidential computing.
> 
> A TD runs in a CPU mode that is designed to protect the confidentiality of its
> memory contents and its CPU state from any other software, including the hosting
> Virtual Machine Monitor (VMM), unless explicitly shared by the TD itself.
> 
> We have more detailed explanations below (***).
> We have the high-level design of TDX KVM below (****).
> 
> In this patch series, we use "TD" or "guest TD" to differentiate it from the
> current "VM" (Virtual Machine), which is supported by KVM today.
> 
> 
> * The organization of this patch series
> This patch series is on top of the patches series "TDX host kernel support":
> https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/
> 
> this patch series is available at
> https://github.com/intel/tdx/releases/tag/kvm-upstream
> The corresponding patches to qemu are available at
> https://github.com/intel/qemu-tdx/commits/tdx-upstream
> 
> The relations of the layers are depicted as follows.
> The arrows below show the order of patch reviews we would like to have.
> 
> The below layers are chosen so that the device model, for example, qemu can
> exercise each layering step by step.  Check if TDX is supported, create TD VM,
> create TD vcpu, allow vcpu running, populate TD guest private memory, and handle
> vcpu exits/hypercalls/interrupts to run TD fully.
> 
>   TDX vcpu
>   interrupt/exits/hypercall<------------\
>         ^                               |
>         |                               |
>   TD finalization                       |
>         ^                               |
>         |                               |
>   TDX EPT violation<------------\       |
>         ^                       |       |
>         |                       |       |
>   TD vcpu enter/exit            |       |
>         ^                       |       |
>         |                       |       |
>   TD vcpu creation/destruction  |       \-------KVM TDP MMU MapGPA
>         ^                       |                       ^
>         |                       |                       |
>   TD VM creation/destruction    \---------------KVM TDP MMU hooks
>         ^                                               ^
>         |                                               |
>   TDX architectural definitions                 KVM TDP refactoring for TDX
>         ^                                               ^
>         |                                               |
>    TDX, VMX    <--------TDX host kernel         KVM MMU GPA stolen bits
>    coexistence          support
> 
> 
> The followings are explanations of each layer.  Each layer has a dummy commit
> that starts with [MARKER] in subject.  It is intended to help to identify where
> each layer starts.
> 
> TDX host kernel support:
>         https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/
>         The guts of system-wide initialization of TDX module.  There is an
>         independent patch series for host x86.  TDX KVM patches call functions
>         this patch series provides to initialize the TDX module.
> 
> TDX, VMX coexistence:
>         Infrastructure to allow TDX to coexist with VMX and trigger the
>         initialization of the TDX module.
>         This layer starts with
>         "KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX"
> TDX architectural definitions:
>         Add TDX architectural definitions and helper functions
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: TDX architectural definitions".
> TD VM creation/destruction:
>         Guest TD creation/destroy allocation and releasing of TDX specific vm
>         and vcpu structure.  Create an initial guest memory image with TDX
>         measurement.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: TD VM creation/destruction".
> TD vcpu creation/destruction:
>         guest TD creation/destroy Allocation and releasing of TDX specific vm
>         and vcpu structure.  Create an initial guest memory image with TDX
>         measurement.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction"
> TDX EPT violation:
>         Create an initial guest memory image with TDX measurement.  Handle
>         secure EPT violations to populate guest pages with TDX SEAMCALLs.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: TDX EPT violation"
> TD vcpu enter/exit:
>         Allow TDX vcpu to enter into TD and exit from TD.  Save CPU state before
>         entering into TD.  Restore CPU state after exiting from TD.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: TD vcpu enter/exit"
> TD vcpu interrupts/exit/hypercall:
>         Handle various exits/hypercalls and allow interrupts to be injected so
>         that TD vcpu can continue running.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls"
> 
> KVM MMU GPA shared bit:
>         Introduce framework to handle shared bit repurposed bit of GPA TDX
>         repurposed a bit of GPA to indicate shared or private. If it's shared,
>         it's the same as the conventional VMX EPT case.  VMM can access shared
>         guest pages.  If it's private, it's handled by Secure-EPT and the guest
>         page is encrypted.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: KVM MMU GPA stolen bits"
> KVM TDP refactoring for TDX:
>         TDX Secure EPT requires different constants. e.g. initial value EPT
>         entry value etc. Various refactoring for those differences.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX"
> KVM TDP MMU hooks:
>         Introduce framework to TDP MMU to add hooks in addition to direct EPT
>         access TDX added Secure EPT which is an enhancement to VMX EPT.  Unlike
>         conventional VMX EPT, CPU can't directly read/write Secure EPT. Instead,
>         use TDX SEAMCALLs to operate on Secure EPT.
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks"
> KVM TDP MMU MapGPA:
>         Introduce framework to handle switching guest pages from private/shared
>         to shared/private.  For a given GPA, a guest page can be assigned to a
>         private GPA or a shared GPA exclusively.  With TDX MapGPA hypercall,
>         guest TD converts GPA assignments from private (or shared) to shared (or
>         private).
>         This layer starts with
>         "[MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA "
> 
> KVM guest private memory: (not shown in the above diagram)
> [PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private
> memory: https://lkml.org/lkml/2022/1/18/395
>         Guest private memory requires different memory management in KVM.  The
>         patch proposes a way for it.  Integration with TDX KVM.
> 
> (***)
> * TDX module
> A CPU-attested software module called the "TDX module" is designed to implement
> the TDX architecture, and it is loaded by the UEFI firmware today. It can be
> loaded by the kernel or driver at runtime, but in this patch series we assume
> that the TDX module is already loaded and initialized.
> 
> The TDX module provides two main new logical modes of operation built upon the
> new SEAM (Secure Arbitration Mode) root and non-root CPU modes added to the VMX
> architecture. TDX root mode is mostly identical to the VMX root operation mode,
> and the TDX functions (described later) are triggered by the new SEAMCALL
> instruction with the desired interface function selected by an input operand
> (leaf number, in RAX). TDX non-root mode is used for TD guest operation.  TDX
> non-root operation (i.e. "guest TD" mode) is similar to the VMX non-root
> operation (i.e. guest VM), with changes and restrictions to better assure that
> no other software or hardware has direct visibility of the TD memory and state.
> 
> TDX transitions between TDX root operation and TDX non-root operation include TD
> Entries, from TDX root to TDX non-root mode, and TD Exits from TDX non-root to
> TDX root mode.  A TD Exit might be asynchronous, triggered by some external
> event (e.g., external interrupt or SMI) or an exception, or it might be
> synchronous, triggered by a TDCALL (TDG.VP.VMCALL) function.
> 
> TD VCPUs can be entered using SEAMCALL(TDH.VP.ENTER) by KVM. TDH.VP.ENTER is one
> of the TDX interface functions as mentioned above, and "TDH" stands for Trust
> Domain Host. Those host-side TDX interface functions are categorized into
> various areas just for better organization, such as SYS (TDX module management),
> MNG (TD management), VP (VCPU), PHYSMEM (physical memory), MEM (private memory),
> etc. For example, SEAMCALL(TDH.SYS.INFO) returns the TDX module information.
> 
> TDCS (Trust Domain Control Structure) is the main control structure of a guest
> TD, and encrypted (using the guest TD's ephemeral private key).  At a high
> level, TDCS holds information for controlling TD operation as a whole,
> execution, EPTP, MSR bitmaps, etc that KVM needs to set it up.  Note that MSR
> bitmaps are held as part of TDCS (unlike VMX) because they are meant to have the
> same value for all VCPUs of the same TD.
> 
> Trust Domain Virtual Processor State (TDVPS) is the root control structure of a
> TD VCPU.  It helps the TDX module control the operation of the VCPU, and holds
> the VCPU state while the VCPU is not running. TDVPS is opaque to software and
> DMA access, accessible only by using the TDX module interface functions (such as
> TDH.VP.RD, TDH.VP.WR). TDVPS includes TD VMCS, and TD VMCS auxiliary structures,
> such as virtual APIC page, virtualization exception information, etc.
> 
> Several VMX control structures (such as Shared EPT and Posted interrupt
> descriptor) are directly managed and accessed by the host VMM.  These control
> structures are pointed to by fields in the TD VMCS.
> 
> The above means that 1) KVM needs to allocate different data structures for TDs,
> 2) KVM can reuse the existing code for TDs for some operations, 3) it needs to
> define TD-specific handling for others.  3) Redirect operations to .  3)
> Redirect operations to the TDX specific callbacks, like "if (is_td_vcpu(vcpu))
> tdx_callback() else vmx_callback();".
> 
> *TD Private Memory
> TD private memory is designed to hold TD private content, encrypted by the CPU
> using the TD ephemeral key. An encryption engine holds a table of encryption
> keys, and an encryption key is selected for each memory transaction based on a
> Host Key Identifier (HKID). By design, the host VMM does not have access to the
> encryption keys.
> 
> In the first generation of MKTME, HKID is "stolen" from the physical address by
> allocating a configurable number of bits from the top of the physical
> address. The HKID space is partitioned into shared HKIDs for legacy MKTME
> accesses and private HKIDs for SEAM-mode-only accesses. We use 0 for the shared
> HKID on the host so that MKTME can be opaque or bypassed on the host.
> 
> During TDX non-root operation (i.e. guest TD), memory accesses can be qualified
> as either shared or private, based on the value of a new SHARED bit in the Guest
> Physical Address (GPA).  The CPU translates shared GPAs using the usual VMX EPT
> (Extended Page Table) or "Shared EPT" (in this document), which resides in host
> VMM memory. The Shared EPT is directly managed by the host VMM - the same as
> with the current VMX. Since guest TDs usually require I/O, and the data exchange
> needs to be done via shared memory, thus KVM needs to use the current EPT
> functionality even for TDs.
> 
> * Secure EPT and Mirroring using the TDP code
> The CPU translates private GPAs using a separate Secure EPT.  The Secure EPT
> pages are encrypted and integrity-protected with the TD's ephemeral private
> key.  Secure EPT can be managed _indirectly_ by the host VMM, using the TDX
> interface functions, and thus conceptually Secure EPT is a subset of EPT (why
> "subset"). Since execution of such interface functions takes much longer time
> than accessing memory directly, in KVM we use the existing TDP code to mirror the
> Secure EPT for the TD.
> 
> This way, we can effectively walk Secure EPT without using the TDX interface
> functions.
> 
> * VM life cycle and TDX specific operations
> The userspace VMM, such as QEMU, needs to build and treat TDs differently.  For
> example, a TD needs to boot in private memory, and the host software cannot copy
> the initial image to private memory.
> 
> * TSC Virtualization
> The TDX module helps TDs maintain reliable TSC (Time Stamp Counter) values
> (e.g. consistent among the TD VCPUs) and the virtual TSC frequency is determined
> by TD configuration, i.e. when the TD is created, not per VCPU.  The current KVM
> owns TSC virtualization for VMs, but the TDX module does for TDs.
> 
> * MCE support for TDs
> The TDX module doesn't allow VMM to inject MCE.  Instead PV way is needed for TD
> to communicate with VMM.  For now, KVM silently ignores MCE request by VMM.  MSRs
> related to MCE (e.g, MCE bank registers) can be naturally emulated by
> paravirtualizing MSR access.
> 
> [1] For details, the specifications, [2], [3], [4], [5], [6], [7], are
> available.
> 
> * Restrictions or future work
> Some features are not included to reduce patch size.  Those features are
> addressed as future independent patch series.
> - large page (2M, 1G)
> - qemu gdb stub
> - guest PMU
> - and more
> 
> * Prerequisites
> It's required to load the TDX module and initialize it.  It's out of the scope
> of this patch series.  Another independent patch for the common x86 code is
> planned.  It defines CONFIG_INTEL_TDX_HOST and this patch series uses
> CONFIG_INTEL_TDX_HOST.  It's assumed that With CONFIG_INTEL_TDX_HOST=y, the TDX
> module is initialized and ready for KVM to use the TDX module APIs for TDX guest
> life cycle like tdh.mng.init are ready to use.
> 
> Concretely Global initialization, LP (Logical Processor) initialization, global
> configuration, the key configuration, and TDMR and PAMT initialization are done.
> The state of the TDX module is SYS_READY.  Please refer to the TDX module
> specification, the chapter Intel TDX Module Lifecycle State Machine
> 
> ** Detecting the TDX module readiness.
> TDX host patch series implements the detection of the TDX module availability
> and its initialization so that KVM can use it.  Also it manages Host KeyID
> (HKID) assigned to guest TD.
> The assumed APIs the TDX host patch series provides are
> - int seamrr_enabled()
>   Check if required cpu feature (SEAM mode) is available. This only check CPU
>   feature availability.  At this point, the TDX module may not be ready for KVM
>   to use.
> - int init_tdx(void);
>   Initialization of TDX module so that the TDX module is ready for KVM to use.
> - const struct tdsysinfo_struct *tdx_get_sysinfo(void);
>   Return the system wide information about the TDX module.  NULL if the TDX
>   isn't initialized.
> - u32 tdx_get_global_keyid(void);
>   Return global key id that is used for the TDX module itself.
> - int tdx_keyid_alloc(void);
>   Allocate HKID for guest TD.
> - void tdx_keyid_free(int keyid);
>   Free HKID for guest TD.
> 
> (****)
> * TDX KVM high-level design
> - Host key ID management
> Host Key ID (HKID) needs to be assigned to each TDX guest for memory encryption.
> It is assumed The TDX host patch series implements necessary functions,
> u32 tdx_get_global_keyid(void), int tdx_keyid_alloc(void) and,
> void tdx_keyid_free(int keyid).
> 
> - Data structures and VM type
> Because TDX is different from VMX, define its own VM/VCPU structures, struct
> kvm_tdx and struct vcpu_tdx instead of struct kvm_vmx and struct vcpu_vmx.  To
> identify the VM, introduce VM-type to specify which VM type, VMX (default) or
> TDX, is used.
> 
> - VM life cycle and TDX specific operations
> Re-purpose the existing KVM_MEMORY_ENCRYPT_OP to add TDX specific operations.
> New commands are used to get the TDX system parameters, set TDX specific VM/VCPU
> parameters, set initial guest memory and measurement.
> 
> The creation of TDX VM requires five additional operations in addition to the
> conventional VM creation.
>   - Get KVM system capability to check if TDX VM type is supported
>   - VM creation (KVM_CREATE_VM)
>   - New: Get the TDX specific system parameters.  KVM_TDX_GET_CAPABILITY.
>   - New: Set TDX specific VM parameters.  KVM_TDX_INIT_VM.
>   - VCPU creation (KVM_CREATE_VCPU)
>   - New: Set TDX specific VCPU parameters.  KVM_TDX_INIT_VCPU.
>   - New: Initialize guest memory as boot state and extend the measurement with
>     the memory.  KVM_TDX_INIT_MEM_REGION.
>   - New: Finalize VM. KVM_TDX_FINALIZE. Complete measurement of the initial
>     TDX VM contents.
>   - VCPU RUN (KVM_VCPU_RUN)
> 
> - Protected guest state
> Because the guest state (CPU state and guest memory) is protected, the KVM VMM
> can't operate on them.  For example, accessing CPU registers, injecting
> exceptions, and accessing guest memory.  Those operations are handled as
> silently ignored, returning zero or initial reset value when it's requested via
> KVM API ioctls.
> 
>     VM/VCPU state and callbacks for TDX specific operations.
>     Define tdx specific VM state and VCPU state instead of VMX ones.  Redirect
>     operations to TDX specific callbacks.  "if (tdx) tdx_op() else vmx_op()".
> 
>     Operations on the CPU state
>     silently ignore operations on the guest state.  For example, the write to
>     CPU registers is ignored and the read from CPU registers returns 0.
> 
>     . ignore access to CPU registers except for allowed ones.
>     . TSC: add a check if tsc is immutable and return an error.  Because the KVM
>       implementation updates the internal tsc state and it's difficult to back
>       out those changes.  Instead, skip the logic.
>     . dirty logging: add check if dirty logging is supported.
>     . exceptions/SMI/MCE/SIPI/INIT: silently ignore
> 
>     Note: virtual external interrupt and NMI can be injected into TDX guests.
> 
> - KVM MMU integration
> One bit of the guest physical address (bit 51 or 47) is repurposed to indicate if
> the guest physical address is private (the bit is cleared) or shared (the bit is
> set).  The bits are called stolen bits.
> 
>   - Stolen bits framework
>     systematically tracks which guest physical address, shared or private, is
>     used.
> 
>   - Shared EPT and secure EPT
>     There are two EPTs. Shared EPT (the conventional one) and Secure
>     EPT(the new one). Shared EPT is handled the same for the stolen
>     bit set.  Secure EPT points to private guest pages.  To resolve
>     EPT violation, KVM walks one of two EPTs based on faulted GPA.
>     Because it's costly to access secure EPT during walking EPTs with
>     SEAMCALLs for the private guest physical address, another private
>     EPT is used as a shadow of Secure-EPT with the existing logic at
>     the cost of extra memory.
> 
> The following depicts the relationship.
> 
>                     KVM                             |       TDX module
>                      |                              |           |
>         -------------+----------                    |           |
>         |                      |                    |           |
>         V                      V                    |           |
>      shared GPA           private GPA               |           |
>   CPU shared EPT pointer  KVM private EPT pointer   |  CPU secure EPT pointer
>         |                      |                    |           |
>         |                      |                    |           |
>         V                      V                    |           V
>   shared EPT                private EPT--------mirror----->Secure EPT
>         |                      |                    |           |
>         |                      \--------------------+------\    |
>         |                                           |      |    |
>         V                                           |      V    V
>   shared guest page                                 |    private guest page
>                                                     |
>                                                     |
>                               non-encrypted memory  |    encrypted memory
>                                                     |
> 
>   - Operating on Secure EPT
>     Use the TDX module APIs to operate on Secure EPT.  To call the TDX API
>     during resolving EPT violation, add hooks to additional operation and wiring
>     it to TDX backend.
> 
> * References
> 
> [1] TDX specification
>    https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html
> [2] Intel Trust Domain Extensions (Intel TDX)
>    https://cdrdv2.intel.com/v1/dl/getContent/726790
> [3] Intel CPU Architectural Extensions Specification
>    https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-cpu-architectural-specification.pdf
> [4] Intel TDX Module 1.0 Specification
>    https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-module-1.0-public-spec-v0.931.pdf
> [5] Intel TDX Loader Interface Specification
>   https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-seamldr-interface-specification.pdf
> [6] Intel TDX Guest-Hypervisor Communication Interface
>    https://cdrdv2.intel.com/v1/dl/getContent/726790
> [7] Intel TDX Virtual Firmware Design Guide
>    https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.01.pdf
> [8] intel public github
>    kvm TDX branch: https://github.com/intel/tdx/tree/kvm
>    TDX guest branch: https://github.com/intel/tdx/tree/guest
>    qemu TDX https://github.com/intel/qemu-tdx
> [9] TDVF
>     https://github.com/tianocore/edk2-staging/tree/TDVF
>     This was merged into EDK2 main branch. https://github.com/tianocore/edk2
> 
> Chao Gao (3):
>   KVM: x86: Move check_processor_compatibility from init ops to runtime
>     ops
>   Partially revert "KVM: Pass kvm_init()'s opaque param to additional
>     arch funcs"
>   KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o
>     wrmsr
> 
> Isaku Yamahata (72):
>   KVM: Refactor CPU compatibility check on module initialiization
>   x86/virt/vmx/tdx: export platform_tdx_enabled()
>   KVM: TDX: Detect CPU feature on kernel module initialization
>   KVM: x86: Refactor KVM VMX module init/exit functions
>   KVM: TDX: Add placeholders for TDX VM/vcpu structure
>   x86/virt/tdx: Add a helper function to return system wide info about
>     TDX module
>   KVM: TDX: Initialize TDX module when loading kvm_intel.ko
>   KVM: TDX: Make TDX VM type supported
>   [MARKER] The start of TDX KVM patch series: TDX architectural
>     definitions
>   KVM: TDX: Define TDX architectural definitions
>   KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module
>   KVM: TDX: Add helper functions to print TDX SEAMCALL error
>   [MARKER] The start of TDX KVM patch series: TD VM creation/destruction
>   x86/cpu: Add helper functions to allocate/free TDX private host key id
>   KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl
>   KVM: TDX: Make pmu_intel.c ignore guest TD case
>   [MARKER] The start of TDX KVM patch series: TD vcpu
>     creation/destruction
>   KVM: TDX: allocate/free TDX vcpu structure
>   KVM: TDX: allocate/free TDX vcpu structure
>   [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits
>   KVM: x86/mmu: introduce config for PRIVATE KVM MMU
>   [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for
>     TDX
>   KVM: x86/mmu: Disallow fast page fault on private GPA
>   KVM: VMX: Introduce test mode related to EPT violation VE
>   [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks
>   KVM: x86/mmu: Focibly use TDP MMU for TDX
>   KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
>   KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map()
>   KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
>   [MARKER] The start of TDX KVM patch series: TDX EPT violation
>   KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs
>   KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
>   KVM: TDX: TDP MMU TDX support
>   [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA
>   KVM: x86/mmu: steal software usable git to record if GFN is for shared
>     or not
>   KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX
>   [MARKER] The start of TDX KVM patch series: TD finalization
>   KVM: TDX: Create initial guest memory
>   KVM: TDX: Finalize VM initialization
>   [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit
>   KVM: TDX: Add helper assembly function to TDX vcpu
>   KVM: TDX: Implement TDX vcpu enter/exit path
>   KVM: TDX: vcpu_run: save/restore host state(host kernel gs)
>   KVM: TDX: restore host xsave state when exit from the guest TD
>   KVM: TDX: restore user ret MSRs
>   [MARKER] The start of TDX KVM patch series: TD vcpu
>     exits/interrupts/hypercalls
>   KVM: TDX: complete interrupts after tdexit
>   KVM: TDX: restore debug store when TD exit
>   KVM: TDX: handle vcpu migration over logical processor
>   KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched
>     behavior
>   KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c
>   KVM: TDX: Implement interrupt injection
>   KVM: TDX: Implements vcpu request_immediate_exit
>   KVM: TDX: Implement methods to inject NMI
>   KVM: TDX: Add a place holder to handle TDX VM exit
>   KVM: TDX: handle EXIT_REASON_OTHER_SMI
>   KVM: TDX: handle ept violation/misconfig exit
>   KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT
>   KVM: TDX: Add a place holder for handler of TDX hypercalls
>     (TDG.VP.VMCALL)
>   KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL
>   KVM: TDX: Handle TDX PV CPUID hypercall
>   KVM: TDX: Handle TDX PV HLT hypercall
>   KVM: TDX: Handle TDX PV port io hypercall
>   KVM: TDX: Implement callbacks for MSR operations for TDX
>   KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall
>   KVM: TDX: Handle TDX PV report fatal error hypercall
>   KVM: TDX: Handle TDX PV map_gpa hypercall
>   KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall
>   KVM: TDX: Silently discard SMI request
>   KVM: TDX: Silently ignore INIT/SIPI
>   Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX)
>   KVM: x86: design documentation on TDX support of x86 KVM TDP MMU
> 
> Rick Edgecombe (1):
>   KVM: x86/mmu: Add address conversion functions for TDX shared bits
> 
> Sean Christopherson (25):
>   KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
>   KVM: Enable hardware before doing arch VM initialization
>   KVM: x86: Introduce vm_type to differentiate default VMs from
>     confidential VMs
>   KVM: TDX: Add TDX "architectural" error codes
>   KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers
>   KVM: TDX: create/destroy VM structure
>   KVM: TDX: x86: Add ioctl to get TDX systemwide parameters
>   KVM: TDX: Do TDX specific vcpu initialization
>   KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
>   KVM: x86/mmu: Allow non-zero value for non-present SPTE
>   KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
>   KVM: x86/mmu: Allow per-VM override of the TDP max page level
>   KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for
>     private mmu
>   KVM: x86/mmu: Disallow dirty logging for x86 TDX
>   KVM: VMX: Split out guts of EPT violation to common/exposed function
>   KVM: VMX: Move setting of EPT MMU masks to common VT-x code
>   KVM: TDX: Add load_mmu_pgd method for TDX
>   KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX
>   KVM: TDX: Add support for find pending IRQ in a protected local APIC
>   KVM: x86: Assume timer IRQ was injected if APIC state is proteced
>   KVM: VMX: Modify NMI and INTR handlers to take intr_info as function
>     argument
>   KVM: VMX: Move NMI/exception handler to common helper
>   KVM: x86: Split core of hypercall emulation to helper function
>   KVM: TDX: Handle TDX PV MMIO hypercall
>   KVM: TDX: Add methods to ignore accesses to CPU state
> 
> Xiaoyao Li (1):
>   KVM: TDX: initialize VM with TDX specific parameters
> 
>  Documentation/virt/kvm/api.rst                |   30 +-
>  .../virt/kvm/intel-tdx-layer-status.rst       |   33 +
>  Documentation/virt/kvm/intel-tdx.rst          |  381 +++
>  Documentation/virt/kvm/tdx-tdp-mmu.rst        |  466 ++++
>  arch/arm64/kvm/arm.c                          |    2 +-
>  arch/mips/kvm/mips.c                          |   14 +-
>  arch/powerpc/kvm/powerpc.c                    |    2 +-
>  arch/riscv/kvm/main.c                         |    2 +-
>  arch/s390/kvm/kvm-s390.c                      |    2 +-
>  arch/x86/events/intel/ds.c                    |    1 +
>  arch/x86/include/asm/kvm-x86-ops.h            |   10 +
>  arch/x86/include/asm/kvm_host.h               |   56 +-
>  arch/x86/include/asm/tdx.h                    |   67 +
>  arch/x86/include/asm/vmx.h                    |   14 +
>  arch/x86/include/uapi/asm/kvm.h               |   95 +
>  arch/x86/include/uapi/asm/vmx.h               |    5 +-
>  arch/x86/kvm/Kconfig                          |    4 +
>  arch/x86/kvm/Makefile                         |    3 +-
>  arch/x86/kvm/irq.c                            |    3 +
>  arch/x86/kvm/lapic.c                          |   37 +-
>  arch/x86/kvm/lapic.h                          |    2 +
>  arch/x86/kvm/mmu.h                            |   42 +-
>  arch/x86/kvm/mmu/mmu.c                        |  360 ++-
>  arch/x86/kvm/mmu/mmu_internal.h               |  123 +-
>  arch/x86/kvm/mmu/paging_tmpl.h                |    5 +-
>  arch/x86/kvm/mmu/spte.c                       |   46 +-
>  arch/x86/kvm/mmu/spte.h                       |   65 +-
>  arch/x86/kvm/mmu/tdp_iter.c                   |    1 +
>  arch/x86/kvm/mmu/tdp_iter.h                   |    5 +-
>  arch/x86/kvm/mmu/tdp_mmu.c                    |  690 ++++-
>  arch/x86/kvm/mmu/tdp_mmu.h                    |   12 +-
>  arch/x86/kvm/svm/svm.c                        |   13 +-
>  arch/x86/kvm/vmx/common.h                     |  174 ++
>  arch/x86/kvm/vmx/evmcs.c                      |    2 +-
>  arch/x86/kvm/vmx/evmcs.h                      |    2 +-
>  arch/x86/kvm/vmx/main.c                       | 1071 +++++++
>  arch/x86/kvm/vmx/pmu_intel.c                  |   39 +-
>  arch/x86/kvm/vmx/pmu_intel.h                  |   28 +
>  arch/x86/kvm/vmx/posted_intr.c                |   43 +-
>  arch/x86/kvm/vmx/posted_intr.h                |   13 +
>  arch/x86/kvm/vmx/tdx.c                        | 2465 +++++++++++++++++
>  arch/x86/kvm/vmx/tdx.h                        |  275 ++
>  arch/x86/kvm/vmx/tdx_arch.h                   |  157 ++
>  arch/x86/kvm/vmx/tdx_errno.h                  |   29 +
>  arch/x86/kvm/vmx/tdx_error.c                  |   22 +
>  arch/x86/kvm/vmx/tdx_ops.h                    |  188 ++
>  arch/x86/kvm/vmx/vmenter.S                    |  146 +
>  arch/x86/kvm/vmx/vmx.c                        |  737 ++---
>  arch/x86/kvm/vmx/vmx.h                        |   39 +-
>  arch/x86/kvm/vmx/x86_ops.h                    |  235 ++
>  arch/x86/kvm/x86.c                            |  148 +-
>  arch/x86/virt/vmx/tdx/seamcall.S              |    2 +
>  arch/x86/virt/vmx/tdx/tdx.c                   |   54 +-
>  arch/x86/virt/vmx/tdx/tdx.h                   |   52 -
>  include/linux/kvm_host.h                      |    4 +-
>  include/uapi/linux/kvm.h                      |    2 +
>  tools/arch/x86/include/uapi/asm/kvm.h         |   95 +
>  tools/include/uapi/linux/kvm.h                |    1 +
>  virt/kvm/kvm_main.c                           |   67 +-
>  59 files changed, 7877 insertions(+), 804 deletions(-)
>  create mode 100644 Documentation/virt/kvm/intel-tdx-layer-status.rst
>  create mode 100644 Documentation/virt/kvm/intel-tdx.rst
>  create mode 100644 Documentation/virt/kvm/tdx-tdp-mmu.rst
>  create mode 100644 arch/x86/kvm/vmx/common.h
>  create mode 100644 arch/x86/kvm/vmx/main.c
>  create mode 100644 arch/x86/kvm/vmx/pmu_intel.h
>  create mode 100644 arch/x86/kvm/vmx/tdx.c
>  create mode 100644 arch/x86/kvm/vmx/tdx.h
>  create mode 100644 arch/x86/kvm/vmx/tdx_arch.h
>  create mode 100644 arch/x86/kvm/vmx/tdx_errno.h
>  create mode 100644 arch/x86/kvm/vmx/tdx_error.c
>  create mode 100644 arch/x86/kvm/vmx/tdx_ops.h
>  create mode 100644 arch/x86/kvm/vmx/x86_ops.h
> 
> -- 
> 2.25.1
> 

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization
  2022-06-28  3:43   ` Kai Huang
@ 2022-07-11 23:48     ` Isaku Yamahata
  2022-07-12  0:45       ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-11 23:48 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Tue, Jun 28, 2022 at 03:43:00PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > TDX requires several initialization steps for KVM to create guest TDs.
> > Detect CPU feature, enable VMX (TDX is based on VMX), detect TDX module
> > availability, and initialize TDX module.  This patch implements the first
> > step to detect CPU feature.  Because VMX isn't enabled yet by VMXON
> > instruction on KVM kernel module initialization, defer further
> > initialization step until VMX is enabled by hardware_enable callback.
> 
> Not clear why you need to split into multiple patches.  If we put all
> initialization into one patch, it's much easier to see why those steps are done
> in whatever way.

I moved this patch down to just before "KVM: TDX: Initialize TDX module when
loading kvm_intel.ko".  So the patch flow is: detect the TDX CPU feature, and
then initialize the TDX module.


> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > new file mode 100644
> > index 000000000000..c12e61cdddea
> > --- /dev/null
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -0,0 +1,40 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include <linux/cpu.h>
> > +
> > +#include <asm/tdx.h>
> > +
> > +#include "capabilities.h"
> > +#include "x86_ops.h"
> > +
> > +#undef pr_fmt
> > +#define pr_fmt(fmt) "tdx: " fmt
> > +
> > +static u64 hkid_mask __ro_after_init;
> > +static u8 hkid_start_pos __ro_after_init;
> > +
> > +int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
> > +{
> > +	u32 max_pa;
> > +
> > +	if (!enable_ept) {
> > +		pr_warn("Cannot enable TDX with EPT disabled\n");
> > +		return -EINVAL;
> > +	}
> > +
> > +	if (!platform_tdx_enabled()) {
> > +		pr_warn("Cannot enable TDX on TDX disabled platform\n");
> > +		return -ENODEV;
> > +	}
> > +
> > +	/* Safe guard check because TDX overrides tlb_remote_flush callback. */
> > +	if (WARN_ON_ONCE(x86_ops->tlb_remote_flush))
> > +		return -EIO;
> 
> To me it's better to move this chunk to the patch which actually implements how
> to flush TLB for private pages.  W/o some background, it's hard to tell why TDX
> needs to override the tlb_remote_flush callback.  Otherwise it's quite hard to
> review here.
> 
> For instance, even if it must be replaced, I am wondering why it must be empty
> at the beginning?  For instance, assuming it has an original version which does
> something:
> 
> 	x86_ops->tlb_remote_flush = vmx_remote_flush;
> 
> Why cannot it be replaced with vt_tlb_remote_flush():
> 
> 	int vt_tlb_remote_flush(struct kvm *kvm)
> 	{
> 		if (is_td(kvm))
> 			return tdx_tlb_remote_flush(kvm);
> 
> 		return vmx_remote_flush(kvm);
> 	}
> 
> ?

There is a slightly tricky part.  Anyway, I rewrote it to follow this approach.
Here is a preparation patch that allows it.

Subject: [PATCH 055/290] KVM: x86/VMX: introduce vmx tlb_remote_flush and
 tlb_remote_flush_with_range

This is preparation for TDX to define its own tlb_remote_flush and
tlb_remote_flush_with_range.  Currently the vmx code leaves tlb_remote_flush
and tlb_remote_flush_with_range as NULL by default, and they are set to
non-NULL methods only in the nested hyper-v guest case.

To make TDX code to override those two methods consistently with other
methods, define vmx_tlb_remote_flush and vmx_tlb_remote_flush_with_range
as nops by default and call the hyper-v code only in the nested hyper-v guest
case.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/kvm_onhyperv.c     |  5 ++++-
 arch/x86/kvm/kvm_onhyperv.h     |  1 +
 arch/x86/kvm/mmu/mmu.c          |  2 +-
 arch/x86/kvm/svm/svm_onhyperv.h |  1 +
 arch/x86/kvm/vmx/main.c         |  2 ++
 arch/x86/kvm/vmx/vmx.c          | 34 ++++++++++++++++++++++++++++-----
 arch/x86/kvm/vmx/x86_ops.h      |  3 +++
 7 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/kvm_onhyperv.c b/arch/x86/kvm/kvm_onhyperv.c
index ee4f696a0782..d43518da1c0e 100644
--- a/arch/x86/kvm/kvm_onhyperv.c
+++ b/arch/x86/kvm/kvm_onhyperv.c
@@ -93,11 +93,14 @@ int hv_remote_flush_tlb(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(hv_remote_flush_tlb);
 
+bool hv_use_remote_flush_tlb __ro_after_init;
+EXPORT_SYMBOL_GPL(hv_use_remote_flush_tlb);
+
 void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
 {
 	struct kvm_arch *kvm_arch = &vcpu->kvm->arch;
 
-	if (kvm_x86_ops.tlb_remote_flush == hv_remote_flush_tlb) {
+	if (hv_use_remote_flush_tlb) {
 		spin_lock(&kvm_arch->hv_root_tdp_lock);
 		vcpu->arch.hv_root_tdp = root_tdp;
 		if (root_tdp != kvm_arch->hv_root_tdp)
diff --git a/arch/x86/kvm/kvm_onhyperv.h b/arch/x86/kvm/kvm_onhyperv.h
index 287e98ef9df3..9a07a34666fb 100644
--- a/arch/x86/kvm/kvm_onhyperv.h
+++ b/arch/x86/kvm/kvm_onhyperv.h
@@ -10,6 +10,7 @@
 int hv_remote_flush_tlb_with_range(struct kvm *kvm,
 		struct kvm_tlb_range *range);
 int hv_remote_flush_tlb(struct kvm *kvm);
+extern bool hv_use_remote_flush_tlb __ro_after_init;
 void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp);
 #else /* !CONFIG_HYPERV */
 static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ef925722ee28..a11c78c8831b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -264,7 +264,7 @@ static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
 {
 	int ret = -ENOTSUPP;
 
-	if (range && kvm_x86_ops.tlb_remote_flush_with_range)
+	if (range && kvm_available_flush_tlb_with_range())
 		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
 
 	if (ret)
diff --git a/arch/x86/kvm/svm/svm_onhyperv.h b/arch/x86/kvm/svm/svm_onhyperv.h
index e2fc59380465..b3cd61c62305 100644
--- a/arch/x86/kvm/svm/svm_onhyperv.h
+++ b/arch/x86/kvm/svm/svm_onhyperv.h
@@ -36,6 +36,7 @@ static inline void svm_hv_hardware_setup(void)
 		svm_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
 		svm_x86_ops.tlb_remote_flush_with_range =
 				hv_remote_flush_tlb_with_range;
+		hv_use_remote_flush_tlb = true;
 	}
 
 	if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH) {
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 252b7298b230..e52e12b8d49a 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -187,6 +187,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.flush_tlb_all = vmx_flush_tlb_all,
 	.flush_tlb_current = vmx_flush_tlb_current,
+	.tlb_remote_flush = vmx_tlb_remote_flush,
+	.tlb_remote_flush_with_range = vmx_tlb_remote_flush_with_range,
 	.flush_tlb_gva = vmx_flush_tlb_gva,
 	.flush_tlb_guest = vmx_flush_tlb_guest,
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5b8d399dd319..dc7ede3706e1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3110,6 +3110,33 @@ void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
 		vpid_sync_context(vmx_get_current_vpid(vcpu));
 }
 
+int vmx_tlb_remote_flush(struct kvm *kvm)
+{
+#if IS_ENABLED(CONFIG_HYPERV)
+	if (hv_use_remote_flush_tlb)
+		return hv_remote_flush_tlb(kvm);
+#endif
+	/*
+	 * fallback to KVM_REQ_TLB_FLUSH.
+	 * See kvm_arch_flush_remote_tlb() and kvm_flush_remote_tlbs().
+	 */
+	return -EOPNOTSUPP;
+}
+
+int vmx_tlb_remote_flush_with_range(struct kvm *kvm,
+				    struct kvm_tlb_range *range)
+{
+#if IS_ENABLED(CONFIG_HYPERV)
+	if (hv_use_remote_flush_tlb)
+		return hv_remote_flush_tlb_with_range(kvm, range);
+#endif
+	/*
+	 * fallback to tlb_remote_flush. See
+	 * kvm_flush_remote_tlbs_with_range()
+	 */
+	return -EOPNOTSUPP;
+}
+
 void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
 	/*
@@ -8176,11 +8203,8 @@ __init int vmx_hardware_setup(void)
 
 #if IS_ENABLED(CONFIG_HYPERV)
 	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
-	    && enable_ept) {
-		vt_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
-		vt_x86_ops.tlb_remote_flush_with_range =
-				hv_remote_flush_tlb_with_range;
-	}
+	    && enable_ept)
+		hv_use_remote_flush_tlb = true;
 #endif
 
 	if (!cpu_has_vmx_ple()) {
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index e70f84d29d21..5ecf99170b30 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -90,6 +90,9 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 bool vmx_get_if_flag(struct kvm_vcpu *vcpu);
 void vmx_flush_tlb_all(struct kvm_vcpu *vcpu);
 void vmx_flush_tlb_current(struct kvm_vcpu *vcpu);
+int vmx_tlb_remote_flush(struct kvm *kvm);
+int vmx_tlb_remote_flush_with_range(struct kvm *kvm,
+				    struct kvm_tlb_range *range);
 void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr);
 void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu);
 void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
-- 
2.25.1


> > +
> > +	max_pa = cpuid_eax(0x80000008) & 0xff;
> > +	hkid_start_pos = boot_cpu_data.x86_phys_bits;
> > +	hkid_mask = GENMASK_ULL(max_pa - 1, hkid_start_pos);
> > +	pr_info("kvm: TDX is supported. hkid start pos %d mask 0x%llx\n",
> > +		hkid_start_pos, hkid_mask);
> 
> Again, I think it's better to introduce those in the patch where you actually
> need those.  It will be more clear if you introduce those with the code which
> actually uses them.
> 
> For instance, I think both hkid_start_pos and hkid_mask are not necessary.  If
> you want to apply one keyid to an address, isn't below enough?
> 
> 	u64 phys |= ((keyid) << boot_cpu_data.x86_phys_bits);

Nice catch.  I removed max_pa, hkid_start_pos and hkid_mask.
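
For illustration, applying a keyid could then be a one-liner along these lines
(a minimal sketch; the helper name is just a placeholder, not necessarily what
the next revision will use):

	static inline u64 set_hkid_to_hpa(u64 hpa, u16 hkid)
	{
		/* HKID bits live immediately above the physical address bits. */
		return hpa | ((u64)hkid << boot_cpu_data.x86_phys_bits);
	}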


> > diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> > index 0f8a8547958f..0a5967a91e26 100644
> > --- a/arch/x86/kvm/vmx/x86_ops.h
> > +++ b/arch/x86/kvm/vmx/x86_ops.h
> > @@ -122,4 +122,10 @@ void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
> >  #endif
> >  void vmx_setup_mce(struct kvm_vcpu *vcpu);
> >  
> > +#ifdef CONFIG_INTEL_TDX_HOST
> > +int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
> > +#else
> > +static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
> > +#endif
> 
> I think if you introduce a "tdx_ops.h", or "tdx_x86_ops.h", and you can only
> include it when CONFIG_INTEL_TDX_HOST is true, then in tdx_ops.h you don't need
> those stubs.
> 
> Makes sense?

main.c calls many tdx_xxx() functions.  If we do that without stubs, main.c
ends up with many #ifdef CONFIG_INTEL_TDX_HOST blocks.
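
To illustrate the point with a hypothetical main.c call site (the function
below is only a sketch, not the actual code), the stub keeps the caller free
of #ifdef:

	static int __init vt_hardware_setup(struct kvm_x86_ops *x86_ops)
	{
		/* The stub returns 0 when CONFIG_INTEL_TDX_HOST=n. */
		return tdx_hardware_setup(x86_ops);
	}

whereas without the stub every such call would need something like:

	#ifdef CONFIG_INTEL_TDX_HOST
		ret = tdx_hardware_setup(x86_ops);
	#else
		ret = 0;
	#endif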
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions
  2022-06-28  3:53   ` Kai Huang
@ 2022-07-12  0:38     ` Isaku Yamahata
  2022-07-12  1:30       ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  0:38 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Tue, Jun 28, 2022 at 03:53:31PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > Currently, KVM VMX module initialization/exit functions are a single
> > function each.  Refactor KVM VMX module initialization functions into KVM
> > common part and VMX part so that TDX specific part can be added cleanly.
> > Opportunistically refactor module exit function as well.
> > 
> > The current module initialization flow is, 1.) calculate the sizes of VMX
> > kvm structure and VMX vcpu structure, 2.) hyper-v specific initialization
> > 3.) report those sizes to the KVM common layer and KVM common
> > initialization, and 4.) VMX specific system-wide initialization.
> > 
> > Refactor the KVM VMX module initialization function into functions with a
> > wrapper function to separate VMX logic in vmx.c from a file, main.c, common
> > among VMX and TDX.  We have a wrapper function, "vt_init() {vmx kvm/vcpu
> > size calculation; hv_vp_assist_page_init(); kvm_init(); vmx_init(); }" in
> > main.c, and hv_vp_assist_page_init() and vmx_init() in vmx.c.
> > hv_vp_assist_page_init() initializes hyper-v specific assist pages,
> > kvm_init() does system-wide initialization of the KVM common layer, and
> > vmx_init() does system-wide VMX initialization.
> > 
> > The KVM architecture common layer allocates struct kvm with reported size
> > for architecture-specific code.  The KVM VMX module defines its structure
> > as struct vmx_kvm { struct kvm; VMX specific members;} and uses it as
> > struct vmx kvm.  Similar for vcpu structure. TDX KVM patches will define
> > TDX specific kvm and vcpu structures, add tdx_pre_kvm_init() to report the
> > sizes of them to the KVM common layer.
> > 
> > The current module exit function is also a single function, a combination
> > of VMX specific logic and common KVM logic.  Refactor it into VMX specific
> > logic and KVM common logic.  This is just refactoring to keep the VMX
> > specific logic in vmx.c from main.c.
> 
> This patch, coupled with the patch:
> 
> 	KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
> 
> Basically provides an infrastructure to support both VMX and TDX.  Why we cannot
> merge them into one patch?  What's the benefit of splitting them?
> 
> At least, why the two patches cannot be put together closely?

It is trivial for the change of "KVM: VMX: Move out vmx_x86_ops to 'main.c' to
wrap VMX and TDX" to introduce no functional change.  But it's not trivial
for this patch to introduce no functional change.

So I moved this patch right after the main.c patch.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 010/102] x86/virt/tdx: Add a helper function to return system wide info about TDX module
  2022-07-07  2:46   ` Yuan Yao
@ 2022-07-12  0:39     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  0:39 UTC (permalink / raw)
  To: Yuan Yao; +Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Thu, Jul 07, 2022 at 10:46:02AM +0800,
Yuan Yao <yuan.yao@linux.intel.com> wrote:

> On Mon, Jun 27, 2022 at 02:53:02PM -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> > TDX KVM needs system-wide information about the TDX module, struct
> > tdsysinfo_struct.  Add a helper function tdx_get_sysinfo() to return it
> > instead of KVM getting it with various error checks.  Move out the struct
> > definition about it to common place tdx_host.h.
> 
> Please correct the tdx_host.h to tdx.h or arch/x86/include/asm/tdx.h

Oops.  Thanks for catching it.  Fixed it.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization
  2022-07-11 23:48     ` Isaku Yamahata
@ 2022-07-12  0:45       ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-12  0:45 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Mon, 2022-07-11 at 16:48 -0700, Isaku Yamahata wrote:
> On Tue, Jun 28, 2022 at 03:43:00PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > 
> > > TDX requires several initialization steps for KVM to create guest TDs.
> > > Detect CPU feature, enable VMX (TDX is based on VMX), detect TDX module
> > > availability, and initialize TDX module.  This patch implements the first
> > > step to detect CPU feature.  Because VMX isn't enabled yet by VMXON
> > > instruction on KVM kernel module initialization, defer further
> > > initialization step until VMX is enabled by hardware_enable callback.
> > 
> > Not clear why you need to split into multiple patches.  If we put all
> > initialization into one patch, it's much easier to see why those steps are done
> > in whatever way.
> 
> I moved this patch down to just before "KVM: TDX: Initialize TDX module when
> loading kvm_intel.ko".  So the patch flow is:
> - detect the TDX CPU feature, and then
> - initialize the TDX module.

Unable to comment until I see the actual patch/code.  My point is this series
is already very big (107 patches!!).  We should avoid splitting code chunks
into small patches when there's no real benefit.  Splitting makes sense, for
instance, when the code change is an infrastructural patch that logically can
and should be separated (also easier to review), or when the patch is too big
(thus hard to review) and splitting "dependencies" out into smaller patches
helps review.

To me this patch (and init TDX module) doesn't belong to any of above.  The only
piece in this patch that makes sense to me is below:

	if (!enable_ept) {
		pr_warn("Cannot enable TDX with EPT disabled\n");
		return -EINVAL;
	}

	if (!platform_tdx_enabled()) {
		pr_warn("Cannot enable TDX on TDX disabled platform\n");
		return -ENODEV;
	}

And I don't see why it cannot be done together with initializing TDX module.

Btw, I do see in the init TDX module patch, you did more than tdx_init() such as
setting up 'tdx_capabilities' etc.  To me it makes more sense to split that part
out (if necessary) with some explanation why we need those 'tdx_capabilities'
after tdx_init().
	
> 
> 
> > > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > > new file mode 100644
> > > index 000000000000..c12e61cdddea
> > > --- /dev/null
> > > +++ b/arch/x86/kvm/vmx/tdx.c
> > > @@ -0,0 +1,40 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +#include <linux/cpu.h>
> > > +
> > > +#include <asm/tdx.h>
> > > +
> > > +#include "capabilities.h"
> > > +#include "x86_ops.h"
> > > +
> > > +#undef pr_fmt
> > > +#define pr_fmt(fmt) "tdx: " fmt
> > > +
> > > +static u64 hkid_mask __ro_after_init;
> > > +static u8 hkid_start_pos __ro_after_init;
> > > +
> > > +int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
> > > +{
> > > +	u32 max_pa;
> > > +
> > > +	if (!enable_ept) {
> > > +		pr_warn("Cannot enable TDX with EPT disabled\n");
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	if (!platform_tdx_enabled()) {
> > > +		pr_warn("Cannot enable TDX on TDX disabled platform\n");
> > > +		return -ENODEV;
> > > +	}
> > > +
> > > +	/* Safe guard check because TDX overrides tlb_remote_flush callback. */
> > > +	if (WARN_ON_ONCE(x86_ops->tlb_remote_flush))
> > > +		return -EIO;
> > 
> > To me it's better to move this chunk to the patch which actually implements how
> > to flush TLB for private pages.  W/o some background, it's hard to tell why TDX
> > needs to overrides tlb_remote_flush callback.  Otherwise it's quite hard to
> > review here.
> > 
> > For instance, even if it must be replaced, I am wondering why it must be empty
> > at the beginning?  For instance, assuming it has an original version which does
> > something:
> > 
> > 	x86_ops->tlb_remote_flush = vmx_remote_flush;
> > 
> > Why cannot it be replaced with vt_tlb_remote_flush():
> > 
> > 	int vt_tlb_remote_flush(struct kvm *kvm)
> > 	{
> > 		if (is_td(kvm))
> > 			return tdx_tlb_remote_flush(kvm);
> > 
> > 		return vmx_remote_flush(kvm);
> > 	}
> > 
> > ?
> 
> There is a slightly tricky part.  Anyway, I rewrote it to follow this approach.
> Here is a preparation patch that allows it.
> 
> Subject: [PATCH 055/290] KVM: x86/VMX: introduce vmx tlb_remote_flush and
>  tlb_remote_flush_with_range
> 
> This is preparation for TDX to define its own tlb_remote_flush and
> tlb_remote_flush_with_range.  Currently the vmx code leaves tlb_remote_flush
> and tlb_remote_flush_with_range as NULL by default, and they are set to
> non-NULL methods only in the nested hyper-v guest case.
> 
> To make TDX code to override those two methods consistently with other
> methods, define vmx_tlb_remote_flush and vmx_tlb_remote_flush_with_range
> as nops by default and call the hyper-v code only in the nested hyper-v guest
> case.

So why put it into this patch, which does "Detect CPU feature on kernel module
initialization"?

(btw, can you improve the patch title to be more specific about which CPU
feature and which kernel module initialization?)

Even with your above explanation, it's hard to justify why we need this, because
you didn't explain _why_ we need to "make TDX code to override those two
methods".

Makes sense?

Skip below code now as I'd like to see that in a separate patch.

> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/kvm_onhyperv.c     |  5 ++++-
>  arch/x86/kvm/kvm_onhyperv.h     |  1 +
>  arch/x86/kvm/mmu/mmu.c          |  2 +-
>  arch/x86/kvm/svm/svm_onhyperv.h |  1 +
>  arch/x86/kvm/vmx/main.c         |  2 ++
>  arch/x86/kvm/vmx/vmx.c          | 34 ++++++++++++++++++++++++++++-----
>  arch/x86/kvm/vmx/x86_ops.h      |  3 +++
>  7 files changed, 41 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/kvm_onhyperv.c b/arch/x86/kvm/kvm_onhyperv.c
> index ee4f696a0782..d43518da1c0e 100644
> --- a/arch/x86/kvm/kvm_onhyperv.c
> +++ b/arch/x86/kvm/kvm_onhyperv.c
> @@ -93,11 +93,14 @@ int hv_remote_flush_tlb(struct kvm *kvm)
>  }
>  EXPORT_SYMBOL_GPL(hv_remote_flush_tlb);
>  
> +bool hv_use_remote_flush_tlb __ro_after_init;
> +EXPORT_SYMBOL_GPL(hv_use_remote_flush_tlb);
> +
>  void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
>  {
>  	struct kvm_arch *kvm_arch = &vcpu->kvm->arch;
>  
> -	if (kvm_x86_ops.tlb_remote_flush == hv_remote_flush_tlb) {
> +	if (hv_use_remote_flush_tlb) {
>  		spin_lock(&kvm_arch->hv_root_tdp_lock);
>  		vcpu->arch.hv_root_tdp = root_tdp;
>  		if (root_tdp != kvm_arch->hv_root_tdp)
> diff --git a/arch/x86/kvm/kvm_onhyperv.h b/arch/x86/kvm/kvm_onhyperv.h
> index 287e98ef9df3..9a07a34666fb 100644
> --- a/arch/x86/kvm/kvm_onhyperv.h
> +++ b/arch/x86/kvm/kvm_onhyperv.h
> @@ -10,6 +10,7 @@
>  int hv_remote_flush_tlb_with_range(struct kvm *kvm,
>  		struct kvm_tlb_range *range);
>  int hv_remote_flush_tlb(struct kvm *kvm);
> +extern bool hv_use_remote_flush_tlb __ro_after_init;
>  void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp);
>  #else /* !CONFIG_HYPERV */
>  static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index ef925722ee28..a11c78c8831b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -264,7 +264,7 @@ static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
>  {
>  	int ret = -ENOTSUPP;
>  
> -	if (range && kvm_x86_ops.tlb_remote_flush_with_range)
> +	if (range && kvm_available_flush_tlb_with_range())
>  		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
>  
>  	if (ret)
> diff --git a/arch/x86/kvm/svm/svm_onhyperv.h b/arch/x86/kvm/svm/svm_onhyperv.h
> index e2fc59380465..b3cd61c62305 100644
> --- a/arch/x86/kvm/svm/svm_onhyperv.h
> +++ b/arch/x86/kvm/svm/svm_onhyperv.h
> @@ -36,6 +36,7 @@ static inline void svm_hv_hardware_setup(void)
>  		svm_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
>  		svm_x86_ops.tlb_remote_flush_with_range =
>  				hv_remote_flush_tlb_with_range;
> +		hv_use_remote_flush_tlb = true;
>  	}
>  
>  	if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH) {
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 252b7298b230..e52e12b8d49a 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -187,6 +187,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
>  
>  	.flush_tlb_all = vmx_flush_tlb_all,
>  	.flush_tlb_current = vmx_flush_tlb_current,
> +	.tlb_remote_flush = vmx_tlb_remote_flush,
> +	.tlb_remote_flush_with_range = vmx_tlb_remote_flush_with_range,
>  	.flush_tlb_gva = vmx_flush_tlb_gva,
>  	.flush_tlb_guest = vmx_flush_tlb_guest,
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 5b8d399dd319..dc7ede3706e1 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3110,6 +3110,33 @@ void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
>  		vpid_sync_context(vmx_get_current_vpid(vcpu));
>  }
>  
> +int vmx_tlb_remote_flush(struct kvm *kvm)
> +{
> +#if IS_ENABLED(CONFIG_HYPERV)
> +	if (hv_use_remote_flush_tlb)
> +		return hv_remote_flush_tlb(kvm);
> +#endif
> +	/*
> +	 * fallback to KVM_REQ_TLB_FLUSH.
> +	 * See kvm_arch_flush_remote_tlb() and kvm_flush_remote_tlbs().
> +	 */
> +	return -EOPNOTSUPP;
> +}
> +
> +int vmx_tlb_remote_flush_with_range(struct kvm *kvm,
> +				    struct kvm_tlb_range *range)
> +{
> +#if IS_ENABLED(CONFIG_HYPERV)
> +	if (hv_use_remote_flush_tlb)
> +		return hv_remote_flush_tlb_with_range(kvm, range);
> +#endif
> +	/*
> +	 * fallback to tlb_remote_flush. See
> +	 * kvm_flush_remote_tlbs_with_range()
> +	 */
> +	return -EOPNOTSUPP;
> +}
> +
>  void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
>  {
>  	/*
> @@ -8176,11 +8203,8 @@ __init int vmx_hardware_setup(void)
>  
>  #if IS_ENABLED(CONFIG_HYPERV)
>  	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
> -	    && enable_ept) {
> -		vt_x86_ops.tlb_remote_flush = hv_remote_flush_tlb;
> -		vt_x86_ops.tlb_remote_flush_with_range =
> -				hv_remote_flush_tlb_with_range;
> -	}
> +	    && enable_ept)
> +		hv_use_remote_flush_tlb = true;
>  #endif
>  
>  	if (!cpu_has_vmx_ple()) {
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index e70f84d29d21..5ecf99170b30 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -90,6 +90,9 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
>  bool vmx_get_if_flag(struct kvm_vcpu *vcpu);
>  void vmx_flush_tlb_all(struct kvm_vcpu *vcpu);
>  void vmx_flush_tlb_current(struct kvm_vcpu *vcpu);
> +int vmx_tlb_remote_flush(struct kvm *kvm);
> +int vmx_tlb_remote_flush_with_range(struct kvm *kvm,
> +				    struct kvm_tlb_range *range);
>  void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr);
>  void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu);
>  void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
> -- 
> 2.25.1
> 
> 
> > > +
> > > +	max_pa = cpuid_eax(0x80000008) & 0xff;
> > > +	hkid_start_pos = boot_cpu_data.x86_phys_bits;
> > > +	hkid_mask = GENMASK_ULL(max_pa - 1, hkid_start_pos);
> > > +	pr_info("kvm: TDX is supported. hkid start pos %d mask 0x%llx\n",
> > > +		hkid_start_pos, hkid_mask);
> > 
> > Again, I think it's better to introduce those in the patch where you actually
> > need those.  It will be more clear if you introduce those with the code which
> > actually uses them.
> > 
> > For instance, I think both hkid_start_pos and hkid_mask are not necessary.  If
> > you want to apply one keyid to an address, isn't below enough?
> > 
> > 	u64 phys |= ((keyid) << boot_cpu_data.x86_phys_bits);
> 
> Nice catch.  I removed max_pa, hkid_start_pos and hkid_mask.

Regardless of whether you need 'max_pa, hkid_start_pos and hkid_mask', the
point is it's better to introduce them when you really need them.

They are not a big chunk that needs to be separated out to improve readability.

> 
> 
> > > diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> > > index 0f8a8547958f..0a5967a91e26 100644
> > > --- a/arch/x86/kvm/vmx/x86_ops.h
> > > +++ b/arch/x86/kvm/vmx/x86_ops.h
> > > @@ -122,4 +122,10 @@ void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
> > >  #endif
> > >  void vmx_setup_mce(struct kvm_vcpu *vcpu);
> > >  
> > > +#ifdef CONFIG_INTEL_TDX_HOST
> > > +int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
> > > +#else
> > > +static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
> > > +#endif
> > 
> > I think if you introduce a "tdx_ops.h", or "tdx_x86_ops.h", and you can only
> > include it when CONFIG_INTEL_TDX_HOST is true, then in tdx_ops.h you don't need
> > those stubs.
> > 
> > Makes sense?
> 
> main.c calls many tdx_xxx() functions.  If we do that without stubs, main.c
> ends up with many #ifdef CONFIG_INTEL_TDX_HOST blocks.

OK.


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko
  2022-06-28  4:31   ` Kai Huang
@ 2022-07-12  0:46     ` Isaku Yamahata
  2022-07-12  1:13       ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  0:46 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Tue, Jun 28, 2022 at 04:31:35PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > To use TDX functionality, TDX module needs to be loaded and initialized.
> > A TDX host patch series[1] implements the detection of the TDX module,
> > tdx_detect() and its initialization, tdx_init().
> 
> "A TDX host patch series[1]" really isn't a commit message material.  You can
> put it to the cover letter, but not here.
> 
> Also tdx_detect() is removed in latest code.

How about the following?

    KVM: TDX: Initialize TDX module when loading kvm_intel.ko
    
    To use TDX functionality, TDX module needs to be loaded and initialized.
    This patch is to call a function, tdx_init(), when loading kvm_intel.ko.
    
    Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
    while hardware is enabled, i.e. after hardware_enable_all() and before
    hardware_disable_all().  Because TDX requires all present CPUs to enable
    VMX (VMXON).
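
As a rough sketch of how kvm_intel could wire the hook up (the names and exact
plumbing below are placeholders for illustration, not the final patch):

	static int __init vt_post_hardware_enable_setup(void)
	{
		/*
		 * All present CPUs have enabled VMX (VMXON) at this point,
		 * which is what TDX module initialization requires.
		 */
		return tdx_init();
	}

with .post_hardware_enable_setup = vt_post_hardware_enable_setup set in the
kvm_x86_init_ops of kvm_intel.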

> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 30af2bd0b4d5..fb7a33fbc136 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -11792,6 +11792,14 @@ int kvm_arch_hardware_setup(void *opaque)
> >  	return 0;
> >  }
> >  
> > +int kvm_arch_post_hardware_enable_setup(void *opaque)
> > +{
> > +	struct kvm_x86_init_ops *ops = opaque;
> > +	if (ops->post_hardware_enable_setup)
> > +		return ops->post_hardware_enable_setup();
> > +	return 0;
> > +}
> > +
> 
> Where is this kvm_arch_post_hardware_enable_setup() called?
> 
> Shouldn't the code change which calls it be part of this patch?

The patch of "4/102 KVM: Refactor CPU compatibility check on module
initialiization" introduces it.  Because the patch affects multiple archs
(mips, x86, poerpc, s390, and arm), I deliberately put it in early.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs
  2022-06-28  2:52   ` Kai Huang
  2022-07-04  6:44     ` Kai Huang
@ 2022-07-12  1:01     ` Isaku Yamahata
  2022-07-12  1:24       ` Kai Huang
  1 sibling, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  1:01 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Xiaoyao Li

On Tue, Jun 28, 2022 at 02:52:28PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > Unlike default VMs, confidential VMs (Intel TDX and AMD SEV-ES) don't allow
> > some operations (e.g., memory read/write, register state access, etc).
> > 
> > Introduce vm_type to track the type of the VM to x86 KVM.  Other arch KVMs
> > already use vm_type, KVM_INIT_VM accepts vm_type, and x86 KVM callback
> > vm_init accepts vm_type.  So follow them.  Further, a different policy can
> > be made based on vm_type.  Define KVM_X86_DEFAULT_VM for default VM as
> > default and define KVM_X86_TDX_VM for Intel TDX VM.  The wrapper function
> > will be defined as "bool is_td(kvm) { return vm_type == VM_TYPE_TDX; }"
> > 
> > Add a capability KVM_CAP_VM_TYPES to effectively allow device model,
> > e.g. qemu, to query what VM types are supported by KVM.  This (introduce a
> > new capability and add vm_type) is chosen to align with other arch KVMs
> > that have VM types already.  Other arch KVMs uses different name to query
> > supported vm types and there is no common name for it, so new name was
> > chosen.
> > 
> > Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  Documentation/virt/kvm/api.rst        | 21 +++++++++++++++++++++
> >  arch/x86/include/asm/kvm-x86-ops.h    |  1 +
> >  arch/x86/include/asm/kvm_host.h       |  2 ++
> >  arch/x86/include/uapi/asm/kvm.h       |  3 +++
> >  arch/x86/kvm/svm/svm.c                |  6 ++++++
> >  arch/x86/kvm/vmx/main.c               |  1 +
> >  arch/x86/kvm/vmx/tdx.h                |  6 +-----
> >  arch/x86/kvm/vmx/vmx.c                |  5 +++++
> >  arch/x86/kvm/vmx/x86_ops.h            |  1 +
> >  arch/x86/kvm/x86.c                    |  9 ++++++++-
> >  include/uapi/linux/kvm.h              |  1 +
> >  tools/arch/x86/include/uapi/asm/kvm.h |  3 +++
> >  tools/include/uapi/linux/kvm.h        |  1 +
> >  13 files changed, 54 insertions(+), 6 deletions(-)
> > 
> > diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> > index 9cbbfdb663b6..b9ab598883b2 100644
> > --- a/Documentation/virt/kvm/api.rst
> > +++ b/Documentation/virt/kvm/api.rst
> > @@ -147,10 +147,31 @@ described as 'basic' will be available.
> >  The new VM has no virtual cpus and no memory.
> >  You probably want to use 0 as machine type.
> >  
> > +X86:
> > +^^^^
> > +
> > +Supported vm type can be queried from KVM_CAP_VM_TYPES, which returns the
> > +bitmap of supported vm types. The 1-setting of bit @n means vm type with
> > +value @n is supported.
> 
> 
> Perhaps I am missing something, but I don't understand how the below changes
> (except the x86 part above) in Documentation are related to this patch.

This is to summarize the divergence among archs.  Those archs (s390, mips, and
arm64) introduce essentially the same KVM capability, but under different
names.  This patch makes things worse, so I thought it's a good idea to
summarize it.  Probably this documentation part can be split out into its own
patch.  Thoughts?
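
For context, the intended usage from the device model side is simply along
these lines (illustrative only; kvm_fd is the /dev/kvm file descriptor):

	int types = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_VM_TYPES);

	if (types & (1 << KVM_X86_TDX_VM))
		vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_TDX_VM);
	else
		vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_DEFAULT_VM);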


> > diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> > index 54d7a26ed9ee..2f43db5bbefb 100644
> > --- a/arch/x86/kvm/vmx/tdx.h
> > +++ b/arch/x86/kvm/vmx/tdx.h
> > @@ -17,11 +17,7 @@ struct vcpu_tdx {
> >  
> >  static inline bool is_td(struct kvm *kvm)
> >  {
> > -	/*
> > -	 * TDX VM type isn't defined yet.
> > -	 * return kvm->arch.vm_type == KVM_X86_TDX_VM;
> > -	 */
> > -	return false;
> > +	return kvm->arch.vm_type == KVM_X86_TDX_VM;
> >  }
> 
> If you put this patch before patch:
> 
> 	[PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure
> 
> Then you don't need to introduce this chunk in above patch and then remove it
> here, which is unnecessary and ugly.
> 
> And you can even only introduce KVM_X86_DEFAULT_VM but not KVM_X86_TDX_VM in
> this patch, so you can make this patch as a infrastructural patch to report VM
> type.  The KVM_X86_TDX_VM can come with the patch where is_td() is introduced
> (in your above patch 9).  
> 
> To me, it's more clean way to write patch.  For instance, this infrastructural
> patch can be theoretically used by other series if they have similar thing to
> support, but doesn't need to carry is_td() and KVM_X86_TDX_VM burden that you
> made.

There are two choices.  One is to put this patch before patch 9 as you
suggested; the other is to put it here, right before patch 13, which uses
vm_type_supported().

Thanks,
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 013/102] KVM: TDX: Make TDX VM type supported
  2022-07-07  2:55   ` Yuan Yao
@ 2022-07-12  1:06     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  1:06 UTC (permalink / raw)
  To: Yuan Yao; +Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Thu, Jul 07, 2022 at 10:55:35AM +0800,
Yuan Yao <yuan.yao@linux.intel.com> wrote:

> On Mon, Jun 27, 2022 at 02:53:05PM -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> > NOTE: This patch is placed at this point in the series so that developers
> > can test the code in the middle of the series, although the series doesn't
> > provide functional features until all of its patches are applied.  When
> > merging this patch series, this patch can be moved to the end.
> >
> > As the first step of TDX VM support, report to the device model, e.g.
> > qemu, that the TDX VM type is supported.  The callback to create a guest
> > TD is the vm_init callback for KVM_CREATE_VM.  Add a place holder function
> > and call a function to initialize TDX module on demand because in that
> > callback VMX is enabled by the hardware_enable callback
> > (vmx_hardware_enable).
> 
> if the "initialize TDX module on demand" means calling tdx_init() then
> it's already done in kvm_init() ->
> kvm_arch_post_hardware_enable_setup from patch 11, so may need commit
> message update here.

Somehow I forgot to delete those lines.  I will remove "Add a place ...".
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko
  2022-07-12  0:46     ` Isaku Yamahata
@ 2022-07-12  1:13       ` Kai Huang
  2022-07-27  0:39         ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-12  1:13 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, Sean Christopherson

On Mon, 2022-07-11 at 17:46 -0700, Isaku Yamahata wrote:
> On Tue, Jun 28, 2022 at 04:31:35PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > 
> > > To use TDX functionality, TDX module needs to be loaded and initialized.
> > > A TDX host patch series[1] implements the detection of the TDX module,
> > > tdx_detect() and its initialization, tdx_init().
> > 
> > "A TDX host patch series[1]" really isn't a commit message material.  You can
> > put it to the cover letter, but not here.
> > 
> > Also tdx_detect() is removed in latest code.
> 
> How about the following?
> 
>     KVM: TDX: Initialize TDX module when loading kvm_intel.ko

Personally don't like kvm_intel.ko in title (or changelog), but will leave to
maintainers.

>     
>     To use TDX functionality, TDX module needs to be loaded and initialized.
>     This patch is to call a function, tdx_init(), when loading kvm_intel.ko.

Could you add an explanation of why we need to init the TDX module when
loading the KVM module?

You don't have to say "call a function, tdx_init()", which can be easily seen in
the code.  

>     
>     Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
>     while hardware is enabled, i.e. after hardware_enable_all() and before
>     hardware_disable_all().  Because TDX requires all present CPUs to enable
>     VMX (VMXON).

Please explicitly say it is a replacement of the default __weak version, so
people can know there's already a default one.  Otherwise people may wonder why
this isn't called in this patch (i.e. I skipped patch 03 as it doesn't look
directly related to TDX).

That being said, why can't you send that patch out separately instead of
having to include it in the TDX series?

Looking at it, the only thing that is related to TDX is an empty
kvm_arch_post_hardware_enable_setup() with a comment saying TDX needs to do
something there.  This logic has nothing to do with the actual job in that
patch. 

So why cannot we introduce that __weak version in this patch, so that the rest
of it can be non-TDX related at all and can be upstreamed separately?

> 
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index 30af2bd0b4d5..fb7a33fbc136 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -11792,6 +11792,14 @@ int kvm_arch_hardware_setup(void *opaque)
> > >  	return 0;
> > >  }
> > >  
> > > +int kvm_arch_post_hardware_enable_setup(void *opaque)
> > > +{
> > > +	struct kvm_x86_init_ops *ops = opaque;
> > > +	if (ops->post_hardware_enable_setup)
> > > +		return ops->post_hardware_enable_setup();
> > > +	return 0;
> > > +}
> > > +
> > 
> > Where is this kvm_arch_post_hardware_enable_setup() called?
> > 
> > Shouldn't the code change which calls it be part of this patch?
> 
> The patch of "4/102 KVM: Refactor CPU compatibility check on module
> initialiization" introduces it.  Because the patch affects multiple archs
> (mips, x86, poerpc, s390, and arm), I deliberately put it in early.

It's patch 03, but not 04.  And see above.

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialiization
  2022-06-27 21:52 ` [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialiization isaku.yamahata
@ 2022-07-12  1:15   ` Kai Huang
  2022-07-13  3:16     ` Kai Huang
  2022-07-13  3:11   ` Kai Huang
  2022-07-27 22:04   ` Isaku Yamahata
  2 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-12  1:15 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> Although non-x86 archs don't break as far as I inspected the code, this is
> only based on code inspection.  It should be reviewed by each arch's
> maintainers.
> 
> kvm_init() checks CPU compatibility by calling
> kvm_arch_check_processor_compat() on all online CPUs.  Move the callback
> to hardware_enable_nolock() and add hardware_enable_all() and
> hardware_disable_all().
> Add arch specific callback kvm_arch_post_hardware_enable_setup() for arch
> to do arch specific initialization that required hardware_enable_all().
> This makes a room for TDX module to initialize on kvm module loading.  TDX
> module requires all online cpu to enable VMX by VMXON.
> 
> If kvm_arch_hardware_enable/disable() depend on the (*) part, that part
> must be done before kvm_init().  In fact kvm_intel does.  As far as I
> checked, the other archs don't depend on it (see below), but it should be
> reviewed by each arch's maintainers.
> 
> Before this patch:
> - Arch module initialization
>   - kvm_init()
>     - kvm_arch_init()
>     - kvm_arch_check_processor_compat() on each CPUs
>   - post arch specific initialization ---- (*)
> 
> - when creating/deleting first/last VM
>    - kvm_arch_hardware_enable() on each CPUs --- (A)
>    - kvm_arch_hardware_disable() on each CPUs --- (B)
> 
> After this patch:
> - Arch module initialization
>   - kvm_init()
>     - kvm_arch_init()
>     - kvm_arch_hardware_enable() on each CPUs  (A)
>     - kvm_arch_check_processor_compat() on each CPUs
>     - kvm_arch_hardware_disable() on each CPUs (B)
>   - post arch specific initialization  --- (*)
> 
> Code inspection result:
> (A)/(B) can depend on (*) before this patch.  If there is such a dependency,
> that initialization must be moved before kvm_init() with this patch.  VMX in
> fact does.  As far as I inspected the other archs, only mips has such
> initialization.
> 
> - arch/mips/kvm/mips.c
>   module init function, kvm_mips_init(), does some initialization after
>   kvm_init().  Compile test only.  Needs review.
> 
> - arch/x86/kvm/x86.c
>   - uses vm_list which is statically initialized.
>   - static_call(kvm_x86_hardware_enable)();
>     - SVM: (*) is empty.
>     - VMX: needs fix
> 
> - arch/powerpc/kvm/powerpc.c
>   kvm_arch_hardware_enable/disable() are nop
> 
> - arch/s390/kvm/kvm-s390.c
>   kvm_arch_hardware_enable/disable() are nop
> 
> - arch/arm64/kvm/arm.c
>   module init function, arm_init(), calls only kvm_init().
>   (*) is empty
> 
> - arch/riscv/kvm/main.c
>   module init function, riscv_kvm_init(), calls only kvm_init().
>   (*) is empty
> 
> Co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/mips/kvm/mips.c     | 12 +++++++-----
>  arch/x86/kvm/vmx/vmx.c   | 15 +++++++++++----
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/kvm_main.c      | 25 ++++++++++++++++++-------
>  4 files changed, 37 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 092d09fb6a7e..17228584485d 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -1643,11 +1643,6 @@ static int __init kvm_mips_init(void)
>  	}
>  
>  	ret = kvm_mips_entry_setup();
> -	if (ret)
> -		return ret;
> -
> -	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> -
>  	if (ret)
>  		return ret;
>  
> @@ -1656,6 +1651,13 @@ static int __init kvm_mips_init(void)
>  
>  	register_die_notifier(&kvm_mips_csr_die_notifier);
>  
> +	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> +
> +	if (ret) {
> +		unregister_die_notifier(&kvm_mips_csr_die_notifier);
> +		return ret;
> +	}
> +
>  	return 0;
>  }
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 31e7630203fd..d3b68a6dec48 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8372,6 +8372,15 @@ static void vmx_exit(void)
>  }
>  module_exit(vmx_exit);
>  
> +/* initialize before kvm_init() so that hardware_enable/disable() can work. */
> +static void __init vmx_init_early(void)
> +{
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu)
> +		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> +}
> +
>  static int __init vmx_init(void)
>  {
>  	int r, cpu;
> @@ -8409,6 +8418,7 @@ static int __init vmx_init(void)
>  	}
>  #endif
>  
> +	vmx_init_early();
>  	r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
>  		     __alignof__(struct vcpu_vmx), THIS_MODULE);
>  	if (r)
> @@ -8427,11 +8437,8 @@ static int __init vmx_init(void)
>  		return r;
>  	}
>  
> -	for_each_possible_cpu(cpu) {
> -		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> -
> +	for_each_possible_cpu(cpu)
>  		pi_init_cpu(cpu);
> -	}
>  
>  #ifdef CONFIG_KEXEC_CORE
>  	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index d4f130a9f5c8..79a4988fd51f 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1441,6 +1441,7 @@ void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu, struct dentry *debugfs_
>  int kvm_arch_hardware_enable(void);
>  void kvm_arch_hardware_disable(void);
>  int kvm_arch_hardware_setup(void *opaque);
> +int kvm_arch_post_hardware_enable_setup(void *opaque);
>  void kvm_arch_hardware_unsetup(void);
>  int kvm_arch_check_processor_compat(void);
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index a5bada53f1fe..cee799265ce6 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4899,8 +4899,13 @@ static void hardware_enable_nolock(void *junk)
>  
>  	cpumask_set_cpu(cpu, cpus_hardware_enabled);
>  
> +	r = kvm_arch_check_processor_compat();
> +	if (r)
> +		goto out;
> +
>  	r = kvm_arch_hardware_enable();
>  
> +out:
>  	if (r) {
>  		cpumask_clear_cpu(cpu, cpus_hardware_enabled);
>  		atomic_inc(&hardware_enable_failed);
> @@ -5697,9 +5702,9 @@ void kvm_unregister_perf_callbacks(void)
>  }
>  #endif
>  
> -static void check_processor_compat(void *rtn)
> +__weak int kvm_arch_post_hardware_enable_setup(void *opaque)
>  {
> -	*(int *)rtn = kvm_arch_check_processor_compat();
> +	return 0;
>  }
>  
>  int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> @@ -5732,11 +5737,17 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
>  	if (r < 0)
>  		goto out_free_1;
>  
> -	for_each_online_cpu(cpu) {
> -		smp_call_function_single(cpu, check_processor_compat, &r, 1);
> -		if (r < 0)
> -			goto out_free_2;
> -	}
> +	/* hardware_enable_nolock() checks CPU compatibility on each CPUs. */
> +	r = hardware_enable_all();
> +	if (r)
> +		goto out_free_2;
> +	/*
> +	 * Arch specific initialization that requires to enable virtualization
> +	 * feature.  e.g. TDX module initialization requires VMXON on all
> +	 * present CPUs.
> +	 */
> +	kvm_arch_post_hardware_enable_setup(opaque);

Please see my reply to your patch  "KVM: TDX: Initialize TDX module when loading
kvm_intel.ko".

The introduce of __weak kvm_arch_post_hardware_enable_setup() should be in that
patch since it has nothing to do the job you claimed to do in this patch.

And by removing it, this patch can be taken out of TDX series and upstreamed
separately.

> +	hardware_disable_all();
>  
>  	r = cpuhp_setup_state_nocalls(CPUHP_AP_KVM_STARTING, "kvm/cpu:starting",
>  				      kvm_starting_cpu, kvm_dying_cpu);


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs
  2022-07-12  1:01     ` Isaku Yamahata
@ 2022-07-12  1:24       ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-12  1:24 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini,
	Sean Christopherson, Xiaoyao Li

On Mon, 2022-07-11 at 18:01 -0700, Isaku Yamahata wrote:
> On Tue, Jun 28, 2022 at 02:52:28PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > > 
> > > Unlike default VMs, confidential VMs (Intel TDX and AMD SEV-ES) don't allow
> > > some operations (e.g., memory read/write, register state access, etc).
> > > 
> > > Introduce vm_type to track the type of the VM to x86 KVM.  Other arch KVMs
> > > already use vm_type, KVM_INIT_VM accepts vm_type, and x86 KVM callback
> > > vm_init accepts vm_type.  So follow them.  Further, a different policy can
> > > be made based on vm_type.  Define KVM_X86_DEFAULT_VM for default VM as
> > > default and define KVM_X86_TDX_VM for Intel TDX VM.  The wrapper function
> > > will be defined as "bool is_td(kvm) { return vm_type == VM_TYPE_TDX; }"
> > > 
> > > Add a capability KVM_CAP_VM_TYPES to effectively allow device model,
> > > e.g. qemu, to query what VM types are supported by KVM.  This (introduce a
> > > new capability and add vm_type) is chosen to align with other arch KVMs
> > > that have VM types already.  Other arch KVMs uses different name to query
> > > supported vm types and there is no common name for it, so new name was
> > > chosen.
> > > 
> > > Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > ---
> > >  Documentation/virt/kvm/api.rst        | 21 +++++++++++++++++++++
> > >  arch/x86/include/asm/kvm-x86-ops.h    |  1 +
> > >  arch/x86/include/asm/kvm_host.h       |  2 ++
> > >  arch/x86/include/uapi/asm/kvm.h       |  3 +++
> > >  arch/x86/kvm/svm/svm.c                |  6 ++++++
> > >  arch/x86/kvm/vmx/main.c               |  1 +
> > >  arch/x86/kvm/vmx/tdx.h                |  6 +-----
> > >  arch/x86/kvm/vmx/vmx.c                |  5 +++++
> > >  arch/x86/kvm/vmx/x86_ops.h            |  1 +
> > >  arch/x86/kvm/x86.c                    |  9 ++++++++-
> > >  include/uapi/linux/kvm.h              |  1 +
> > >  tools/arch/x86/include/uapi/asm/kvm.h |  3 +++
> > >  tools/include/uapi/linux/kvm.h        |  1 +
> > >  13 files changed, 54 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> > > index 9cbbfdb663b6..b9ab598883b2 100644
> > > --- a/Documentation/virt/kvm/api.rst
> > > +++ b/Documentation/virt/kvm/api.rst
> > > @@ -147,10 +147,31 @@ described as 'basic' will be available.
> > >  The new VM has no virtual cpus and no memory.
> > >  You probably want to use 0 as machine type.
> > >  
> > > +X86:
> > > +^^^^
> > > +
> > > +Supported vm type can be queried from KVM_CAP_VM_TYPES, which returns the
> > > +bitmap of supported vm types. The 1-setting of bit @n means vm type with
> > > +value @n is supported.
> > 
> > 
> > Perhaps I am missing something, but I don't understand how the below changes
> > (except the x86 part above) in Documentation are related to this patch.
> 
> This is to summarize the divergence among archs.  Those archs (s390, mips,
> and arm64) introduce essentially the same KVM capability, but under different
> names.  This patch makes things worse, so I thought it's a good idea to
> summarize it.  Probably this documentation part can be split out into its own
> patch.  Thoughts?

I will leave it to the maintainers here.  Though personally I would split
different things into different patches.

> 
> 
> > > diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> > > index 54d7a26ed9ee..2f43db5bbefb 100644
> > > --- a/arch/x86/kvm/vmx/tdx.h
> > > +++ b/arch/x86/kvm/vmx/tdx.h
> > > @@ -17,11 +17,7 @@ struct vcpu_tdx {
> > >  
> > >  static inline bool is_td(struct kvm *kvm)
> > >  {
> > > -	/*
> > > -	 * TDX VM type isn't defined yet.
> > > -	 * return kvm->arch.vm_type == KVM_X86_TDX_VM;
> > > -	 */
> > > -	return false;
> > > +	return kvm->arch.vm_type == KVM_X86_TDX_VM;
> > >  }
> > 
> > If you put this patch before patch:
> > 
> > 	[PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure
> > 
> > Then you don't need to introduce this chunk in above patch and then remove it
> > here, which is unnecessary and ugly.
> > 
> > And you can even only introduce KVM_X86_DEFAULT_VM but not KVM_X86_TDX_VM in
> > this patch, so you can make this patch as a infrastructural patch to report VM
> > type.  The KVM_X86_TDX_VM can come with the patch where is_td() is introduced
> > (in your above patch 9).  
> > 
> > To me, it's more clean way to write patch.  For instance, this infrastructural
> > patch can be theoretically used by other series if they have similar thing to
> > support, but doesn't need to carry is_td() and KVM_X86_TDX_VM burden that you
> > made.
> 
> There are two choices.  One is to put this patch before patch 9 as you
> suggested; the other is to put it here, right before patch 13, which uses
> vm_type_supported().
> 
> Thanks,

To me this belongs to category of "infrastructural patch", which does "Add new
ABI to support reporting VM types".  It can originally support default VM only.
TDX VM can come later.  But will leave to maintainers.




^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions
  2022-07-12  0:38     ` Isaku Yamahata
@ 2022-07-12  1:30       ` Kai Huang
  2022-07-27  0:44         ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-12  1:30 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Mon, 2022-07-11 at 17:38 -0700, Isaku Yamahata wrote:
> On Tue, Jun 28, 2022 at 03:53:31PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > 
> > > Currently, KVM VMX module initialization/exit functions are a single
> > > function each.  Refactor KVM VMX module initialization functions into KVM
> > > common part and VMX part so that TDX specific part can be added cleanly.
> > > Opportunistically refactor module exit function as well.
> > > 
> > > The current module initialization flow is, 1.) calculate the sizes of VMX
> > > kvm structure and VMX vcpu structure, 2.) hyper-v specific initialization
> > > 3.) report those sizes to the KVM common layer and KVM common
> > > initialization, and 4.) VMX specific system-wide initialization.
> > > 
> > > Refactor the KVM VMX module initialization function into functions with a
> > > wrapper function to separate VMX logic in vmx.c from a file, main.c, common
> > > among VMX and TDX.  We have a wrapper function, "vt_init() {vmx kvm/vcpu
> > > size calculation; hv_vp_assist_page_init(); kvm_init(); vmx_init(); }" in
> > > main.c, and hv_vp_assist_page_init() and vmx_init() in vmx.c.
> > > hv_vp_assist_page_init() initializes hyper-v specific assist pages,
> > > kvm_init() does system-wide initialization of the KVM common layer, and
> > > vmx_init() does system-wide VMX initialization.
> > > 
> > > The KVM architecture common layer allocates struct kvm with reported size
> > > for architecture-specific code.  The KVM VMX module defines its structure
> > > as struct vmx_kvm { struct kvm; VMX specific members;} and uses it as
> > > struct vmx kvm.  Similar for vcpu structure. TDX KVM patches will define
> > > TDX specific kvm and vcpu structures, add tdx_pre_kvm_init() to report the
> > > sizes of them to the KVM common layer.
> > > 
> > > The current module exit function is also a single function, a combination
> > > of VMX specific logic and common KVM logic.  Refactor it into VMX specific
> > > logic and KVM common logic.  This is just refactoring to keep the VMX
> > > specific logic in vmx.c from main.c.
> > 
> > This patch, coupled with the patch:
> > 
> > 	KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
> > 
> > Basically provides an infrastructure to support both VMX and TDX.  Why we cannot
> > merge them into one patch?  What's the benefit of splitting them?
> > 
> > At least, why the two patches cannot be put together closely?
> 
> It is trivial for the change of "KVM: VMX: Move out vmx_x86_ops to 'main.c' to
> wrap VMX and TDX" to introduce no functional change.  But it's not trivial
> for this patch to introduce no functional change.

This doesn't sound right.  If I understand correctly, this patch supposedly
shouldn't bring any functional change, right?  Could you explain what
functional change this patch brings?



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  2022-06-27 21:53 ` [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU isaku.yamahata
  2022-07-08  3:44   ` Kai Huang
  2022-07-11  8:28   ` Yuan Yao
@ 2022-07-12  2:36   ` Yuan Yao
  2022-07-26 23:42     ` Isaku Yamahata
  2 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-12  2:36 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Kai Huang

On Mon, Jun 27, 2022 at 02:53:38PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Allocate a mirrored private page table for the private page table, and add
> hooks to operate on the mirrored private page table.  This patch adds only
> the hooks.  As kvm_gfn_shared_mask() always returns false, those hooks aren't
> called yet.
>
> Because private guest pages are protected, page copy with mmu_notifier to
> migrate pages doesn't work.  A callback from the backing store is needed.
>
> When the faulting GPA is private, the KVM page fault is also called private.
> When resolving a private KVM page fault, allocate a mirrored private page
> table and call the hooks to operate on it.  On a change to a private PTE
> entry, invoke the kvm_x86_ops hook in __handle_changed_spte() to propagate
> the change to the mirrored private page table.  The following depicts the
> relationship.
>
>   private KVM page fault   |
>       |                    |
>       V                    |
>  private GPA               |
>       |                    |
>       V                    |
>  KVM private PT root       |  CPU private PT root
>       |                    |           |
>       V                    |           V
>    private PT ---hook to mirror--->mirrored private PT
>       |                    |           |
>       \--------------------+------\    |
>                            |      |    |
>                            |      V    V
>                            |    private guest page
>                            |
>                            |
>      non-encrypted memory  |    encrypted memory
>                            |
> PT: page table
>
> The existing KVM TDP MMU code uses atomic updates of the SPTE.  When
> populating an EPT entry, the entry is set atomically.  Zapping an SPTE,
> however, requires a TLB shootdown, so the entry is first frozen with a
> special SPTE value that clears the present bit.  After the TLB shootdown,
> the entry is set to the eventual value (unfreeze).
>
> For the mirrored private page table, hooks are called to update the mirrored
> private page table in addition to directly accessing the private SPTE.  For
> the zapping case, freezing the SPTE works: the hooks can be called in
> addition to the TLB shootdown.  For populating the private SPTE entry,
> however, there can be a race condition without further protection:
>
>   vcpu 1: populating 2M private SPTE
>   vcpu 2: populating 4K private SPTE
>   vcpu 2: TDX SEAMCALL to update 4K mirrored private SPTE => error
>   vcpu 1: TDX SEAMCALL to update 2M mirrored private SPTE
>
> To avoid the race, the frozen SPTE is utilized.  Instead of an atomic update
> of the private entry, freeze the entry, call the hook that updates the
> mirrored private SPTE, then set the entry to the final value.
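To make that sequence concrete, a hedged sketch of the freeze path, simplified from the tdp_mmu_set_spte_atomic() change in the diff below (error handling and the non-private path are omitted; names follow the diff):

	u64 *sptep = rcu_dereference(iter->sptep);

	/* 1. Freeze: install REMOVED_SPTE so concurrent updaters back off. */
	if (cmpxchg64(sptep, iter->old_spte, REMOVED_SPTE) != iter->old_spte)
		return -EBUSY;

	/*
	 * 2. Propagate the change to the mirrored private page table (S-EPT)
	 *    via the kvm_x86_handle_changed_private_spte hook.
	 */
	__handle_changed_spte(kvm, iter->as_id, iter->gfn, true /* private */,
			      iter->old_spte, new_spte, iter->level, true);

	/* 3. Unfreeze: write the final SPTE value. */
	__kvm_tdp_mmu_write_spte(sptep, new_spte);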
>
> Only 4K pages are supported at this stage.  2M page support can be added in
> future patches.
>
> Add an is_private member to kvm_page_fault to indicate the fault is private.
> Also add an is_private member to struct tdp_iter to propagate it.
>
> Co-developed-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm-x86-ops.h |   2 +
>  arch/x86/include/asm/kvm_host.h    |  20 +++
>  arch/x86/kvm/mmu/mmu.c             |  86 +++++++++-
>  arch/x86/kvm/mmu/mmu_internal.h    |  37 +++++
>  arch/x86/kvm/mmu/paging_tmpl.h     |   2 +-
>  arch/x86/kvm/mmu/tdp_iter.c        |   1 +
>  arch/x86/kvm/mmu/tdp_iter.h        |   5 +-
>  arch/x86/kvm/mmu/tdp_mmu.c         | 247 +++++++++++++++++++++++------
>  arch/x86/kvm/mmu/tdp_mmu.h         |   7 +-
>  virt/kvm/kvm_main.c                |   1 +
>  10 files changed, 346 insertions(+), 62 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 32a6df784ea6..6982d57e4518 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -93,6 +93,8 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
>  KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
>  KVM_X86_OP(get_mt_mask)
>  KVM_X86_OP(load_mmu_pgd)
> +KVM_X86_OP_OPTIONAL(free_private_sp)
> +KVM_X86_OP_OPTIONAL(handle_changed_private_spte)
>  KVM_X86_OP(has_wbinvd_exit)
>  KVM_X86_OP(get_l2_tsc_offset)
>  KVM_X86_OP(get_l2_tsc_multiplier)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index bfc934dc9a33..f2a4d5a18851 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -440,6 +440,7 @@ struct kvm_mmu {
>  			 struct kvm_mmu_page *sp);
>  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
>  	struct kvm_mmu_root_info root;
> +	hpa_t private_root_hpa;
>  	union kvm_cpu_role cpu_role;
>  	union kvm_mmu_page_role root_role;
>
> @@ -1435,6 +1436,20 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
>  	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
>  }
>
> +struct kvm_spte {
> +	kvm_pfn_t pfn;
> +	bool is_present;
> +	bool is_leaf;
> +};
> +
> +struct kvm_spte_change {
> +	gfn_t gfn;
> +	enum pg_level level;
> +	struct kvm_spte old;
> +	struct kvm_spte new;
> +	void *sept_page;
> +};
> +
>  struct kvm_x86_ops {
>  	const char *name;
>
> @@ -1547,6 +1562,11 @@ struct kvm_x86_ops {
>  	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
>  			     int root_level);
>
> +	int (*free_private_sp)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +			       void *private_sp);
> +	void (*handle_changed_private_spte)(
> +		struct kvm *kvm, const struct kvm_spte_change *change);
> +
>  	bool (*has_wbinvd_exit)(void);
>
>  	u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a5bf3e40e209..ef925722ee28 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1577,7 +1577,11 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
>
>  	if (is_tdp_mmu_enabled(kvm))
> -		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
> +		/*
> +		 * Private pages need to be kept; page migration is handled
> +		 * on the next EPT violation.
> +		 */
> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush, false);
>
>  	return flush;
>  }
> @@ -3082,7 +3086,8 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
>  		 * SPTE value without #VE suppress bit cleared
>  		 * (kvm->arch.shadow_mmio_value = 0).
>  		 */
> -		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
> +		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching &&
> +			     !kvm_gfn_shared_mask(vcpu->kvm)) ||
>  		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
>  			return RET_PF_EMULATE;
>  	}
> @@ -3454,7 +3459,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
>  		goto out_unlock;
>
>  	if (is_tdp_mmu_enabled(vcpu->kvm)) {
> -		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
> +		if (kvm_gfn_shared_mask(vcpu->kvm) &&
> +		    !VALID_PAGE(mmu->private_root_hpa)) {
> +			root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, true);
> +			mmu->private_root_hpa = root;
> +		}
> +		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, false);
>  		mmu->root.hpa = root;
>  	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
>  		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
> @@ -4026,6 +4036,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
>  	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
>  }
>
> +/*
> + * A private page can't be released on mmu_notifier without losing its contents.
> + * A callback from the backing store is needed to allow page migration.
> + * For now, pin the page.
> + */
> +static int kvm_faultin_pfn_private_mapped(struct kvm_vcpu *vcpu,
> +					   struct kvm_page_fault *fault)
> +{
> +	hva_t hva = gfn_to_hva_memslot(fault->slot, fault->gfn);
> +	struct page *page[1];
> +
> +	fault->map_writable = false;
> +	fault->pfn = KVM_PFN_ERR_FAULT;
> +	if (hva == KVM_HVA_ERR_RO_BAD || hva == KVM_HVA_ERR_BAD)
> +		return RET_PF_CONTINUE;
> +
> +	/* TDX allows only RWX.  Read-only isn't supported. */
> +	WARN_ON_ONCE(!fault->write);
> +	if (pin_user_pages_fast(hva, 1, FOLL_WRITE, page) != 1)
> +		return RET_PF_INVALID;
> +
> +	fault->map_writable = true;
> +	fault->pfn = page_to_pfn(page[0]);
> +	return RET_PF_CONTINUE;
> +}
> +
>  static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_memory_slot *slot = fault->slot;
> @@ -4058,6 +4094,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  			return RET_PF_EMULATE;
>  	}
>
> +	if (fault->is_private)
> +		return kvm_faultin_pfn_private_mapped(vcpu, fault);
> +
>  	async = false;
>  	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
>  					  fault->write, &fault->map_writable,
> @@ -4110,6 +4149,17 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
>  	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
>  }
>
> +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r)
> +{
> +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
> +		return;
> +
> +	if (fault->is_private)
> +		put_page(pfn_to_page(fault->pfn));
> +	else
> +		kvm_release_pfn_clean(fault->pfn);
> +}
> +
>  static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
> @@ -4117,7 +4167,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	unsigned long mmu_seq;
>  	int r;
>
> -	fault->gfn = fault->addr >> PAGE_SHIFT;
> +	fault->gfn = gpa_to_gfn(fault->addr) & ~kvm_gfn_shared_mask(vcpu->kvm);
>  	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
>
>  	if (page_fault_handle_page_track(vcpu, fault))
> @@ -4166,7 +4216,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  		read_unlock(&vcpu->kvm->mmu_lock);
>  	else
>  		write_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(fault->pfn);
> +	kvm_mmu_release_fault(vcpu->kvm, fault, r);
>  	return r;
>  }
>
> @@ -5665,6 +5715,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
>
>  	mmu->root.hpa = INVALID_PAGE;
>  	mmu->root.pgd = 0;
> +	mmu->private_root_hpa = INVALID_PAGE;
>  	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
>  		mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
>
> @@ -5855,6 +5906,10 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>  	 * lead to use-after-free.
>  	 */
>  	if (is_tdp_mmu_enabled(kvm))
> +		/*
> +		 * For now the private root is never invalidated while the VM is running,
> +		 * so this can only happen for shared roots.
> +		 */
>  		kvm_tdp_mmu_zap_invalidated_roots(kvm);
>  }
>
> @@ -5882,7 +5937,8 @@ static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
>  		      .may_block = false,
>  		};
>
> -		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
> +		/* All private pages should be zapped on memslot deletion. */
> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush, true);
>  	} else {
>  		flush = slot_handle_level(kvm, slot, kvm_zap_rmapp, PG_LEVEL_4K,
>  					  KVM_MAX_HUGEPAGE_LEVEL, true);
> @@ -5990,7 +6046,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
>  	if (is_tdp_mmu_enabled(kvm)) {
>  		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
>  			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
> -						      gfn_end, true, flush);
> +						      gfn_end, true, flush, false);
>  	}
>
>  	if (flush)
> @@ -6023,6 +6079,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  	}
>
> +	/*
> +	 * For now this can only happen for a non-TD VM, because TD private
> +	 * mappings don't support write protection.  kvm_tdp_mmu_wrprot_slot()
> +	 * will WARN() if it is hit for a TD.
> +	 */
>  	if (is_tdp_mmu_enabled(kvm)) {
>  		read_lock(&kvm->mmu_lock);
>  		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
> @@ -6111,6 +6172,9 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>  		sp = sptep_to_sp(sptep);
>  		pfn = spte_to_pfn(*sptep);
>
> +		/* Private page dirty logging is not supported. */
> +		KVM_BUG_ON(is_private_sptep(sptep), kvm);
> +
>  		/*
>  		 * We cannot do huge page mapping for indirect shadow pages,
>  		 * which are found on the last rmap (level = 1) when not using
> @@ -6151,6 +6215,11 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  	}
>
> +	/*
> +	 * This should only be reachable in case of log-dirty, which TD private
> +	 * mapping doesn't support so far.  kvm_tdp_mmu_zap_collapsible_sptes()
> +	 * internally gives a WARN() when it hits.
> +	 */
>  	if (is_tdp_mmu_enabled(kvm)) {
>  		read_lock(&kvm->mmu_lock);
>  		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
> @@ -6437,6 +6506,9 @@ int kvm_mmu_vendor_module_init(void)
>  void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_unload(vcpu);
> +	if (is_tdp_mmu_enabled(vcpu->kvm))
> +		mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->private_root_hpa,
> +				NULL);
>  	free_mmu_pages(&vcpu->arch.root_mmu);
>  	free_mmu_pages(&vcpu->arch.guest_mmu);
>  	mmu_free_memory_caches(vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 9f3a6bea60a3..d3b30d62aca0 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -6,6 +6,8 @@
>  #include <linux/kvm_host.h>
>  #include <asm/kvm_host.h>
>
> +#include "mmu.h"
> +
>  #undef MMU_DEBUG
>
>  #ifdef MMU_DEBUG
> @@ -164,11 +166,30 @@ static inline void kvm_mmu_alloc_private_sp(
>  	WARN_ON_ONCE(!sp->private_sp);
>  }
>
> +static inline int kvm_alloc_private_sp_for_split(
> +	struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	gfp &= ~__GFP_ZERO;
> +	sp->private_sp = (void*)__get_free_page(gfp);
> +	if (!sp->private_sp)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
>  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
>  {
>  	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
>  		free_page((unsigned long)sp->private_sp);
>  }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	if (is_private_sp(root))
> +		return kvm_gfn_private(kvm, gfn);
> +	else
> +		return kvm_gfn_shared(kvm, gfn);
> +}
>  #else
>  static inline bool is_private_sp(struct kvm_mmu_page *sp)
>  {
> @@ -194,11 +215,25 @@ static inline void kvm_mmu_alloc_private_sp(
>  {
>  }
>
> +static inline int kvm_alloc_private_sp_for_split(
> +	struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	return -ENOMEM;
> +}
> +
>  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
>  {
>  }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	return gfn;
> +}
>  #endif
>
> +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r);
> +
>  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>  	/*
> @@ -246,6 +281,7 @@ struct kvm_page_fault {
>  	/* Derived from mmu and global state.  */
>  	const bool is_tdp;
>  	const bool nx_huge_page_workaround_enabled;
> +	const bool is_private;
>
>  	/*
>  	 * Whether a >4KB mapping can be created or is forbidden due to NX
> @@ -327,6 +363,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		.prefetch = prefetch,
>  		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
>  		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
> +		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
>
>  		.max_level = vcpu->kvm->arch.tdp_max_page_level,
>  		.req_level = PG_LEVEL_4K,
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 62ae590d4e5b..e5b73638bd83 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -877,7 +877,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>
>  out_unlock:
>  	write_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(fault->pfn);
> +	kvm_mmu_release_fault(vcpu->kvm, fault, r);

Do we really need this?  The shadow page table is not supported for TD guests.

>  	return r;
>  }
>
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index ee4802d7b36c..4ed50f3c424d 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -53,6 +53,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
>  	iter->min_level = min_level;
>  	iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root->spt;
>  	iter->as_id = kvm_mmu_page_as_id(root);
> +	iter->is_private = is_private_sp(root);
>
>  	tdp_iter_restart(iter);
>  }
> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index adfca0cf94d3..dec56795c5da 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -71,7 +71,7 @@ struct tdp_iter {
>  	tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
>  	/* A pointer to the current SPTE */
>  	tdp_ptep_t sptep;
> -	/* The lowest GFN mapped by the current SPTE */
> +	/* The lowest GFN (shared bits included) mapped by the current SPTE */
>  	gfn_t gfn;
>  	/* The level of the root page given to the iterator */
>  	int root_level;
> @@ -94,6 +94,9 @@ struct tdp_iter {
>  	 * level instead of advancing to the next entry.
>  	 */
>  	bool yielded;
> +
> +	/* True if this iter is handling private KVM page fault. */
> +	bool is_private;
>  };
>
>  /*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index d874c79ab96c..12f75e60a254 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -278,18 +278,24 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
>  		    kvm_mmu_page_as_id(_root) != _as_id) {		\
>  		} else
>
> -static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *tdp_mmu_alloc_sp(
> +	struct kvm_vcpu *vcpu, bool private, bool is_root)
>  {
>  	struct kvm_mmu_page *sp;
>
>  	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
>  	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
>
> +	if (private)
> +		kvm_mmu_alloc_private_sp(vcpu, sp, is_root);
> +	else
> +		kvm_mmu_init_private_sp(sp, NULL);
> +
>  	return sp;
>  }
>
> -static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
> -			    gfn_t gfn, union kvm_mmu_page_role role)
> +static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn,
> +			    union kvm_mmu_page_role role)
>  {
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
>
> @@ -297,7 +303,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>  	sp->gfn = gfn;
>  	sp->ptep = sptep;
>  	sp->tdp_mmu_page = true;
> -	kvm_mmu_init_private_sp(sp);
>
>  	trace_kvm_mmu_get_page(sp, true);
>  }
> @@ -316,7 +321,8 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
>  	tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
>  }
>
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *kvm_tdp_mmu_get_vcpu_root(struct kvm_vcpu *vcpu,
> +						      bool private)
>  {
>  	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
>  	struct kvm *kvm = vcpu->kvm;
> @@ -330,11 +336,12 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>  	 */
>  	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
>  		if (root->role.word == role.word &&
> +		    is_private_sp(root) == private &&
>  		    kvm_tdp_mmu_get_root(root))
>  			goto out;
>  	}
>
> -	root = tdp_mmu_alloc_sp(vcpu);
> +	root = tdp_mmu_alloc_sp(vcpu, private, true);
>  	tdp_mmu_init_sp(root, NULL, 0, role);
>
>  	refcount_set(&root->tdp_mmu_root_count, 1);
> @@ -344,12 +351,17 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>  	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
>
>  out:
> -	return __pa(root->spt);
> +	return root;
> +}
> +
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private)
> +{
> +	return __pa(kvm_tdp_mmu_get_vcpu_root(vcpu, private)->spt);
>  }
>
>  static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				u64 old_spte, u64 new_spte, int level,
> -				bool shared);
> +				bool private_spte, u64 old_spte,
> +				u64 new_spte, int level, bool shared);
>
>  static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
>  {
> @@ -410,6 +422,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
>   *
>   * @kvm: kvm instance
>   * @pt: the page removed from the paging structure
> + * @is_private: pt is private or not.
>   * @shared: This operation may not be running under the exclusive use
>   *	    of the MMU lock and the operation must synchronize with other
>   *	    threads that might be modifying SPTEs.
> @@ -422,7 +435,8 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
>   * this thread will be responsible for ensuring the page is freed. Hence the
>   * early rcu_dereferences in the function.
>   */
> -static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
> +static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool is_private,
> +			      bool shared)
>  {
>  	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
>  	int level = sp->role.level;
> @@ -498,8 +512,20 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>  			old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
>  							  REMOVED_SPTE, level);
>  		}
> -		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
> -				    old_spte, REMOVED_SPTE, level, shared);
> +		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn, is_private,
> +				    old_spte, REMOVED_SPTE, level,
> +				    shared);
> +	}
> +
> +	if (is_private && WARN_ON(static_call(kvm_x86_free_private_sp)(
> +					  kvm, sp->gfn, sp->role.level,
> +					  kvm_mmu_private_sp(sp)))) {
> +		/*
> +		 * Failed to unlink Secure EPT page and there is nothing to do
> +		 * further.  Intentionally leak the page to prevent the kernel
> +		 * from accessing the encrypted page.
> +		 */
> +		kvm_mmu_init_private_sp(sp, NULL);
>  	}
>
>  	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
> @@ -510,6 +536,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   * @kvm: kvm instance
>   * @as_id: the address space of the paging structure the SPTE was a part of
>   * @gfn: the base GFN that was mapped by the SPTE
> + * @private_spte: the SPTE is private or not
>   * @old_spte: The value of the SPTE before the change
>   * @new_spte: The value of the SPTE after the change
>   * @level: the level of the PT the SPTE is part of in the paging structure
> @@ -521,14 +548,30 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   * This function must be called for all TDP SPTE modifications.
>   */
>  static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				  u64 old_spte, u64 new_spte, int level,
> -				  bool shared)
> +				  bool private_spte, u64 old_spte,
> +				  u64 new_spte, int level, bool shared)
>  {
>  	bool was_present = is_shadow_present_pte(old_spte);
>  	bool is_present = is_shadow_present_pte(new_spte);
>  	bool was_leaf = was_present && is_last_spte(old_spte, level);
>  	bool is_leaf = is_present && is_last_spte(new_spte, level);
> -	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
> +	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
> +	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
> +	bool pfn_changed = old_pfn != new_pfn;
> +	struct kvm_spte_change change = {
> +		.gfn = gfn,
> +		.level = level,
> +		.old = {
> +			.pfn = old_pfn,
> +			.is_present = was_present,
> +			.is_leaf = was_leaf,
> +		},
> +		.new = {
> +			.pfn = new_pfn,
> +			.is_present = is_present,
> +			.is_leaf = is_leaf,
> +		},
> +	};
>
>  	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
>  	WARN_ON(level < PG_LEVEL_4K);
> @@ -595,7 +638,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>
>  	if (was_leaf && is_dirty_spte(old_spte) &&
>  	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
> -		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
> +		kvm_set_pfn_dirty(old_pfn);
>
>  	/*
>  	 * Recursively handle child PTs if the change removed a subtree from
> @@ -604,16 +647,47 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  	 * pages are kernel allocations and should never be migrated.
>  	 */
>  	if (was_present && !was_leaf &&
> -	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
> -		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
> +	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) {
> +		WARN_ON(private_spte !=
> +			is_private_sptep(spte_to_child_pt(old_spte, level)));
> +		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level),
> +				  private_spte, shared);
> +	}
> +
> +	/*
> +	 * Special handling for the private mapping.  We are either
> +	 * setting up a new mapping at a middle-level page table or a leaf,
> +	 * or tearing down an existing mapping.
> +	 *
> +	 * This is done after handling the lower page table by
> +	 * handle_remove_tdp_mmu_page() above.  S-EPT requires removing
> +	 * S-EPT tables after removing their children.
> +	 */
> +	if (private_spte &&
> +	    /* Ignore change of software only bits. e.g. host_writable */
> +	    (was_leaf != is_leaf || was_present != is_present || pfn_changed)) {
> +		void *sept_page = NULL;
> +
> +		if (is_present && !is_leaf) {
> +			struct kvm_mmu_page *sp = to_shadow_page(pfn_to_hpa(new_pfn));
> +
> +			sept_page = kvm_mmu_private_sp(sp);
> +			WARN_ON(!sept_page);
> +			WARN_ON(sp->role.level + 1 != level);
> +			WARN_ON(sp->gfn != gfn);
> +		}
> +		change.sept_page = sept_page;
> +
> +		static_call(kvm_x86_handle_changed_private_spte)(kvm, &change);
> +	}
>  }
>
>  static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -				u64 old_spte, u64 new_spte, int level,
> -				bool shared)
> +				bool private_spte, u64 old_spte, u64 new_spte,
> +				int level, bool shared)
>  {
> -	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
> -			      shared);
> +	__handle_changed_spte(kvm, as_id, gfn, private_spte,
> +			old_spte, new_spte, level, shared);
>  	handle_changed_spte_acc_track(old_spte, new_spte, level);
>  	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
>  				      new_spte, level);
> @@ -640,6 +714,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  					  struct tdp_iter *iter,
>  					  u64 new_spte)
>  {
> +	bool freeze_spte = iter->is_private && !is_removed_spte(new_spte);
> +	u64 tmp_spte = freeze_spte ? REMOVED_SPTE : new_spte;
>  	u64 *sptep = rcu_dereference(iter->sptep);
>  	u64 old_spte;
>
> @@ -657,7 +733,7 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
>  	 * does not hold the mmu_lock.
>  	 */
> -	old_spte = cmpxchg64(sptep, iter->old_spte, new_spte);
> +	old_spte = cmpxchg64(sptep, iter->old_spte, tmp_spte);
>  	if (old_spte != iter->old_spte) {
>  		/*
>  		 * The page table entry was modified by a different logical
> @@ -669,10 +745,14 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  		return -EBUSY;
>  	}
>
> -	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
> -			      new_spte, iter->level, true);
> +	__handle_changed_spte(
> +		kvm, iter->as_id, iter->gfn, iter->is_private,
> +		iter->old_spte, new_spte, iter->level, true);
>  	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
>
> +	if (freeze_spte)
> +		__kvm_tdp_mmu_write_spte(sptep, new_spte);
> +
>  	return 0;
>  }
>
> @@ -734,13 +814,15 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>   *		      unless performing certain dirty logging operations.
>   *		      Leaving record_dirty_log unset in that case prevents page
>   *		      writes from being double counted.
> + * @is_private:       The fault is private.
>   *
>   * Returns the old SPTE value, which _may_ be different than @old_spte if the
>   * SPTE had voldatile bits.
>   */
>  static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
> -			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
> -			      bool record_acc_track, bool record_dirty_log)
> +			       u64 old_spte, u64 new_spte, gfn_t gfn, int level,
> +			       bool record_acc_track, bool record_dirty_log,
> +			       bool is_private)
>  {
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>
> @@ -755,7 +837,8 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
>
>  	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
>
> -	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
> +	__handle_changed_spte(kvm, as_id, gfn, is_private,
> +			      old_spte, new_spte, level, false);
>
>  	if (record_acc_track)
>  		handle_changed_spte_acc_track(old_spte, new_spte, level);
> @@ -774,7 +857,8 @@ static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
>  	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
>  					    iter->old_spte, new_spte,
>  					    iter->gfn, iter->level,
> -					    record_acc_track, record_dirty_log);
> +					    record_acc_track, record_dirty_log,
> +					    iter->is_private);
>  }
>
>  static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
> @@ -807,8 +891,11 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
>  			continue;					\
>  		else
>
> -#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)		\
> -	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
> +#define tdp_mmu_for_each_pte(_iter, _mmu, _private, _start, _end)	\
> +	for_each_tdp_pte(_iter,						\
> +		 to_shadow_page((_private) ? _mmu->private_root_hpa :	\
> +				_mmu->root.hpa),			\
> +		_start, _end)
>
>  /*
>   * Yield if the MMU lock is contended or this thread needs to return control
> @@ -945,7 +1032,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>
>  	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
>  			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
> -			   true, true);
> +			   true, true, is_private_sp(sp));
>
>  	return true;
>  }
> @@ -961,13 +1048,21 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>   * operation can cause a soft lockup.
>   */
>  static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
> -			      gfn_t start, gfn_t end, bool can_yield, bool flush)
> +			      gfn_t start, gfn_t end, bool can_yield, bool flush,
> +			      bool drop_private)
>  {
>  	struct tdp_iter iter;
>
>  	end = min(end, tdp_mmu_max_gfn_exclusive());
>
>  	lockdep_assert_held_write(&kvm->mmu_lock);
> +	/*
> +	 * When TDX is enabled, extend [start, end) to include the GFN shared
> +	 * bit for the shared mapping range.
> +	 */
> +	WARN_ON_ONCE(!is_private_sp(root) && drop_private);
> +	start = kvm_gfn_for_root(kvm, root, start);
> +	end = kvm_gfn_for_root(kvm, root, end);
>
>  	rcu_read_lock();
>
> @@ -1002,12 +1097,13 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>   * MMU lock.
>   */
>  bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
> -			   bool can_yield, bool flush)
> +			   bool can_yield, bool flush, bool drop_private)
>  {
>  	struct kvm_mmu_page *root;
>
>  	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
> -		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);
> +		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush,
> +					  drop_private && is_private_sp(root));
>
>  	return flush;
>  }
> @@ -1067,6 +1163,12 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
>
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>  	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
> +		/*
> +		 * Skip private root since private page table
> +		 * is only torn down when VM is destroyed.
> +		 */
> +		if (is_private_sp(root))
> +			continue;
>  		if (!root->role.invalid &&
>  		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
>  			root->role.invalid = true;
> @@ -1087,14 +1189,22 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  	u64 new_spte;
>  	int ret = RET_PF_FIXED;
>  	bool wrprot = false;
> +	unsigned long pte_access = ACC_ALL;
> +	gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
>
>  	WARN_ON(sp->role.level != fault->goal_level);
> +
> +	/* TDX shared GPAs are not executable; enforce this for the SDV. */
> +	if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
> +		pte_access &= ~ACC_EXEC_MASK;
> +
>  	if (unlikely(!fault->slot))
> -		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
> +		new_spte = make_mmio_spte(vcpu, gfn_unalias, pte_access);
>  	else
> -		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
> -					 fault->pfn, iter->old_spte, fault->prefetch, true,
> -					 fault->map_writable, &new_spte);
> +		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
> +				   gfn_unalias, fault->pfn, iter->old_spte,
> +				   fault->prefetch, true, fault->map_writable,
> +				   &new_spte);
>
>  	if (new_spte == iter->old_spte)
>  		ret = RET_PF_SPURIOUS;
> @@ -1167,8 +1277,7 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
>  	return 0;
>  }
>
> -static int tdp_mmu_populate_nonleaf(
> -	struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
> +static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter, bool account_nx)
>  {
>  	struct kvm_mmu_page *sp;
>  	int ret;
> @@ -1176,7 +1285,7 @@ static int tdp_mmu_populate_nonleaf(
>  	WARN_ON(is_shadow_present_pte(iter->old_spte));
>  	WARN_ON(is_removed_spte(iter->old_spte));
>
> -	sp = tdp_mmu_alloc_sp(vcpu);
> +	sp = tdp_mmu_alloc_sp(vcpu, iter->is_private, false);
>  	tdp_mmu_init_child_sp(sp, iter);
>
>  	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
> @@ -1193,6 +1302,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	struct tdp_iter iter;
> +	gfn_t raw_gfn;
> +	bool is_private = fault->is_private;
>  	int ret;
>
>  	kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -1201,7 +1312,16 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>
>  	rcu_read_lock();
>
> -	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
> +	raw_gfn = gpa_to_gfn(fault->addr);
> +
> +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn)) {
> +		if (is_private) {
> +			rcu_read_unlock();
> +			return -EFAULT;
> +		}
> +	}
> +
> +	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
>  		if (fault->nx_huge_page_workaround_enabled)
>  			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
>
> @@ -1217,6 +1337,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  		    is_large_pte(iter.old_spte)) {
>  			if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
>  				break;
> +			/*
> +			 * TODO: large page support.
> +			 * Large pages aren't supported for TDX yet.
> +			 */
> +			WARN_ON(is_private_sptep(iter.sptep));
> +
>
>  			/*
>  			 * The iter must explicitly re-read the spte here
> @@ -1258,11 +1384,13 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	return ret;
>  }
>
> +/* Used by mmu notifier via kvm_unmap_gfn_range() */
>  bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
> -				 bool flush)
> +				 bool flush, bool drop_private)
>  {
>  	return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start,
> -				     range->end, range->may_block, flush);
> +				     range->end, range->may_block, flush,
> +				     drop_private);
>  }
>
>  typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
> @@ -1445,7 +1573,8 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
>  	return spte_set;
>  }
>
> -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> +static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(
> +	gfp_t gfp, bool is_private)
>  {
>  	struct kvm_mmu_page *sp;
>
> @@ -1456,6 +1585,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
>  		return NULL;
>
>  	sp->spt = (void *)__get_free_page(gfp);
> +	if (is_private) {
> +		if (kvm_alloc_private_sp_for_split(sp, gfp)) {
> +			free_page((unsigned long)sp->spt);
> +			sp->spt = NULL;
> +		}
> +	}
>  	if (!sp->spt) {
>  		kmem_cache_free(mmu_page_header_cache, sp);
>  		return NULL;
> @@ -1469,6 +1604,11 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  						       bool shared)
>  {
>  	struct kvm_mmu_page *sp;
> +	bool is_private = iter->is_private;
> +
> +	/* TODO: For now large page isn't supported for private SPTE. */
> +	WARN_ON(is_private);
> +	WARN_ON(iter->is_private != is_private_sptep(iter->sptep));
>
>  	/*
>  	 * Since we are allocating while under the MMU lock we have to be
> @@ -1479,7 +1619,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  	 * If this allocation fails we drop the lock and retry with reclaim
>  	 * allowed.
>  	 */
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT, is_private);
>  	if (sp)
>  		return sp;
>
> @@ -1491,7 +1631,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>
>  	iter->yielded = true;
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT, is_private);
>
>  	if (shared)
>  		read_lock(&kvm->mmu_lock);
> @@ -1907,10 +2047,14 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	gfn_t gfn = addr >> PAGE_SHIFT;
>  	int leaf = -1;
> +	bool is_private = kvm_is_private_gpa(vcpu->kvm, addr);
>
>  	*root_level = vcpu->arch.mmu->root_role.level;
>
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	if (WARN_ON(is_private))
> +		return leaf;
> +
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>  		leaf = iter.level;
>  		sptes[leaf] = iter.old_spte;
>  	}
> @@ -1937,7 +2081,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
>  	gfn_t gfn = addr >> PAGE_SHIFT;
>  	tdp_ptep_t sptep = NULL;
>
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	/* fast page fault for private GPA isn't supported. */
> +	WARN_ON_ONCE(kvm_is_private_gpa(vcpu->kvm, addr));
> +
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>  		*spte = iter.old_spte;
>  		sptep = iter.sptep;
>  	}
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index c163f7cc23ca..d1655571eb2f 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -5,7 +5,7 @@
>
>  #include <linux/kvm_host.h>
>
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private);
>
>  __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
>  {
> @@ -16,7 +16,8 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
>  			  bool shared);
>
>  bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start,
> -				 gfn_t end, bool can_yield, bool flush);
> +				gfn_t end, bool can_yield, bool flush,
> +				bool drop_private);
>  bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
>  void kvm_tdp_mmu_zap_all(struct kvm *kvm);
>  void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
> @@ -25,7 +26,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm);
>  int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
>
>  bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
> -				 bool flush);
> +				 bool flush, bool drop_private);
>  bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
>  bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
>  bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 0acb0b6d1f82..7a5261eb7eb8 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -196,6 +196,7 @@ bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
>
>  	return true;
>  }
> +EXPORT_SYMBOL_GPL(kvm_is_reserved_pfn);
>
>  /*
>   * Switches to specified vcpu, until a matching vcpu_put()
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 049/102] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs
  2022-06-27 21:53 ` [PATCH v7 049/102] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs isaku.yamahata
@ 2022-07-12  2:58   ` Yuan Yao
  2022-07-19 18:03     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-12  2:58 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:41PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Some KVM MMU operations (dirty page logging, page migration, page aging)
> aren't supported for private GFNs (yet) with the first generation of TDX.
> Silently return from those unsupported KVM MMU operations for TDX.
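The pattern the diff below applies to each operation is an early bail-out; a minimal sketch (the "return false" variant is used in the bool helpers):

	/*
	 * Dirty logging isn't supported for TD private mappings (yet);
	 * silently ignore the request instead of touching private SPTEs.
	 */
	if (!kvm_arch_dirty_log_supported(kvm))
		return;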
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 74 +++++++++++++++++++++++++++++++++++---
>  arch/x86/kvm/x86.c         |  3 ++
>  2 files changed, 72 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 12f75e60a254..fef6246086a8 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -387,6 +387,8 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
>
>  	if ((!is_writable_pte(old_spte) || pfn_changed) &&
>  	    is_writable_pte(new_spte)) {
> +		/* For memory slot operations, use GFN without aliasing */
> +		gfn = gfn & ~kvm_gfn_shared_mask(kvm);

This should be part of the enabling; please consider squashing it into patch 46.

>  		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
>  		mark_page_dirty_in_slot(kvm, slot, gfn);
>  	}
> @@ -1398,7 +1400,8 @@ typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
>
>  static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
>  						   struct kvm_gfn_range *range,
> -						   tdp_handler_t handler)
> +						   tdp_handler_t handler,
> +						   bool only_shared)
>  {
>  	struct kvm_mmu_page *root;
>  	struct tdp_iter iter;
> @@ -1409,9 +1412,23 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
>  	 * into this helper allow blocking; it'd be dead, wasteful code.
>  	 */
>  	for_each_tdp_mmu_root(kvm, root, range->slot->as_id) {
> +		gfn_t start;
> +		gfn_t end;
> +
> +		if (only_shared && is_private_sp(root))
> +			continue;
> +
>  		rcu_read_lock();
>
> -		tdp_root_for_each_leaf_pte(iter, root, range->start, range->end)
> +		/*
> +		 * For TDX shared mappings, set the GFN shared bit on the range
> +		 * so the handler() doesn't need to set it, avoiding duplicated
> +		 * code in multiple handler()s.
> +		 */
> +		start = kvm_gfn_for_root(kvm, root, range->start);
> +		end = kvm_gfn_for_root(kvm, root, range->end);
> +
> +		tdp_root_for_each_leaf_pte(iter, root, start, end)
>  			ret |= handler(kvm, &iter, range);
>
>  		rcu_read_unlock();
> @@ -1455,7 +1472,12 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
>
>  bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	return kvm_tdp_mmu_handle_gfn(kvm, range, age_gfn_range);
> +	/*
> +	 * First TDX generation doesn't support clearing A bit for private
> +	 * mapping, since there's no secure EPT API to support it.  However
> +	 * it's a legitimate request for TDX guest.
> +	 */
> +	return kvm_tdp_mmu_handle_gfn(kvm, range, age_gfn_range, true);
>  }
>
>  static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,
> @@ -1466,7 +1488,7 @@ static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,
>
>  bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn);
> +	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn, false);

The "false" here means we will do young testing for even private
pages, but we don't have actual A bit state in iter->old_spte for
them, so may here should be "true" ?

>  }
>
>  static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
> @@ -1511,8 +1533,11 @@ bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  	 * No need to handle the remote TLB flush under RCU protection, the
>  	 * target SPTE _must_ be a leaf SPTE, i.e. cannot result in freeing a
>  	 * shadow page.  See the WARN on pfn_changed in __handle_changed_spte().
> +	 *
> +	 * The .change_pte() callback should not happen for private pages, because
> +	 * for now TDX private pages are pinned for the VM's lifetime.
>  	 */

Worth catching this with a WARN_ON()?  Up to you.

> -	return kvm_tdp_mmu_handle_gfn(kvm, range, set_spte_gfn);
> +	return kvm_tdp_mmu_handle_gfn(kvm, range, set_spte_gfn, true);
>  }
>
>  /*
> @@ -1566,6 +1591,14 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
>
>  	lockdep_assert_held_read(&kvm->mmu_lock);
>
> +	/*
> +	 * Because first TDX generation doesn't support write protecting private
> +	 * mappings and kvm_arch_dirty_log_supported(kvm) = false, it's a bug
> +	 * to reach here for guest TD.
> +	 */
> +	if (WARN_ON(!kvm_arch_dirty_log_supported(kvm)))
> +		return false;
> +
>  	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
>  		spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
>  			     slot->base_gfn + slot->npages, min_level);
> @@ -1830,6 +1863,14 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
>
>  	lockdep_assert_held_read(&kvm->mmu_lock);
>
> +	/*
> +	 * First TDX generation doesn't support clearing dirty bit,
> +	 * since there's no secure EPT API to support it.  It is a
> +	 * bug to reach here for TDX guest.
> +	 */
> +	if (WARN_ON(!kvm_arch_dirty_log_supported(kvm)))
> +		return false;
> +
>  	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
>  		spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
>  				slot->base_gfn + slot->npages);
> @@ -1896,6 +1937,13 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
>  	struct kvm_mmu_page *root;
>
>  	lockdep_assert_held_write(&kvm->mmu_lock);
> +	/*
> +	 * First TDX generation doesn't support clearing dirty bit,
> +	 * since there's no secure EPT API to support it.  For now silently
> +	 * ignore KVM_CLEAR_DIRTY_LOG.
> +	 */
> +	if (!kvm_arch_dirty_log_supported(kvm))
> +		return;
>  	for_each_tdp_mmu_root(kvm, root, slot->as_id)
>  		clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot);
>  }
> @@ -1975,6 +2023,13 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
>
>  	lockdep_assert_held_read(&kvm->mmu_lock);
>
> +	/*
> +	 * This should only be reachable when dirty-log is supported.  It's a
> +	 * bug to reach here.
> +	 */
> +	if (WARN_ON(!kvm_arch_dirty_log_supported(kvm)))
> +		return;
> +
>  	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
>  		zap_collapsible_spte_range(kvm, root, slot);
>  }
> @@ -2028,6 +2083,15 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
>  	bool spte_set = false;
>
>  	lockdep_assert_held_write(&kvm->mmu_lock);
> +
> +	/*
> +	 * First TDX generation doesn't support write protecting private
> +	 * mappings, silently ignore the request.  KVM_GET_DIRTY_LOG etc
> +	 * can reach here, no warning.
> +	 */
> +	if (!kvm_arch_dirty_log_supported(kvm))
> +		return false;
> +
>  	for_each_tdp_mmu_root(kvm, root, slot->as_id)
>  		spte_set |= write_protect_gfn(kvm, root, gfn, min_level);
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index dcd1f5e2ba05..8f57dfb2a8c9 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12243,6 +12243,9 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
>  	u32 new_flags = new ? new->flags : 0;
>  	bool log_dirty_pages = new_flags & KVM_MEM_LOG_DIRTY_PAGES;
>
> +	if (!kvm_arch_dirty_log_supported(kvm) && log_dirty_pages)
> +		return;
> +
>  	/*
>  	 * Update CPU dirty logging if dirty logging is being toggled.  This
>  	 * applies to all operations.
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
  2022-06-27 21:53 ` [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD isaku.yamahata
@ 2022-07-12  3:47   ` Yuan Yao
  2022-07-12  6:14     ` Chao Gao
  0 siblings, 1 reply; 219+ messages in thread
From: Yuan Yao @ 2022-07-12  3:47 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:45PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> TDX doesn't need the APIC page that depends on vAPIC, and its callback is
> WARN_ON_ONCE(is_tdx).  To avoid the unnecessary overhead and the WARN_ON_ONCE(),
> skip requesting KVM_REQ_APIC_PAGE_RELOAD for a TD.
>
>   ------------[ cut here ]------------
>   WARNING: CPU: 134 PID: 42205 at arch/x86/kvm/vmx/main.c:696 vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
>   Modules linked in: squashfs nls_iso8859_1 nls_cp437 vhost_vsock vhost vhost_iotlb tdx_debug kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd i2c_i801 i2c_smbus i2c_ismt
>   CPU: 134 PID: 42205 Comm: tdx_vm_tests Tainted: G        W         5.17.0-rc8 #165 4baba67c36c7c1001d782c47f2964b779a5659c7
>   Hardware name: Intel Corporation EAGLESTREAM/EAGLESTREAM, BIOS EGSDCRB1.SYS.0066.D24.2110072326 10/07/2021
>   RIP: 0010:vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
>   Code: e7 d5 49 8b 1c 24 48 8d bb 78 15 00 00 e8 4c 78 e7 d5 48 83 bb 78 15 00 00 01 74 0d 4c 89 e7 e8 7a 9b fd ff 5b 41 5c 5d c3 90 <0f  0b 90 5b 41 5c 5d c3 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f
>   RSP: 0018:ffa0000027477b68 EFLAGS: 00010246
>   RAX: 0000000000000000 RBX: ffa00000572d9000 RCX: ffffffffde6864d4
>   RDX: dffffc0000000000 RSI: 0000000000000008 RDI: ffa00000572da578
>   RBP: ffa0000027477b78 R08: 0000000000000001 R09: ffe21c006df80008
>   R10: ff1100036fc0003f R11: ffe21c006df80007 R12: ff1100036fc00000
>   R13: ff1100036fc000d8 R14: ff1100036fc00038 R15: ff1100036fc00000
>   FS:  00007fdf1ad32740(0000) GS:ff11000e1ed00000(0000) knlGS:0000000000000000
>   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>   CR2: 00007fdf15f1b000 CR3: 000000011e462005 CR4: 0000000000773ee0
>   DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>   DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
>   PKRU: 55555554
>   Call Trace:
>    <TASK>
>    vcpu_enter_guest+0x145d/0x24d0 [kvm]
>    ? inject_pending_event+0x750/0x750 [kvm]
>    ? xsaves+0x31/0x40
>    ? rcu_read_lock_held_common+0x1e/0x60
>    ? rcu_read_lock_sched_held+0x60/0xe0
>    ? rcu_read_lock_bh_held+0xc0/0xc0
>    kvm_arch_vcpu_ioctl_run+0x25d/0xcc0 [kvm]
>    kvm_vcpu_ioctl+0x414/0xa30 [kvm]]
>    ? kvm_clear_dirty_log_protect+0x4d0/0x4d0 [kvm]
>    ? userfaultfd_unmap_prep+0x240/0x240
>    ? __up_read+0x17f/0x530
>    ? rwsem_wake+0x110/0x110
>    ? __do_munmap+0x437/0x7c0
>    ? rcu_read_lock_held_common+0x1e/0x60
>    ? rcu_read_lock_sched_held+0x60/0xe0
>    ? rcu_read_lock_sched_held+0x60/0xe0
>    ? __kasan_check_read+0x11/0x20
>    ? __fget_light+0xa9/0x100
>    __x64_sys_ioctl+0xc0/0x100
>    do_syscall_64+0x39/0xc0
>    entry_SYSCALL_64_after_hwframe+0x44/0xae
>   RIP: 0033:0x7fdf1ae493db
>   Code: 0f 1e fa 48 8b 05 b5 7a 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48  3d 01 f0 ff ff 73 01 c3 48 8b 0d 85 7a 0d 00 f7 d8 64 89 01 48
>   RSP: 002b:00007ffcf8bdfb38 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
>   RAX: ffffffffffffffda RBX: 00000000006f26d0 RCX: 00007fdf1ae493db
>   RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000007
>   RBP: 0000000000000000 R08: 0000000000411d36 R09: 0000000000000000
>   R10: fffffffffffffb69 R11: 0000000000000246 R12: 0000000000402410
>   R13: 00000000006f02b0 R14: 0000000000000000 R15: 0000000000000000
>    </TASK>
>   irq event stamp: 0
>   hardirqs last  enabled at (0): [<0000000000000000>] 0x0
>   hardirqs last disabled at (0): [<ffffffffb40c809a>] copy_process+0xaca/0x3270
>   softirqs last  enabled at (0): [<ffffffffb40c809a>] copy_process+0xaca/0x3270
>   softirqs last disabled at (0): [<0000000000000000>] 0x0
>   ---[ end trace 0000000000000000 ]---

The trace can be simplified to:

WARNING: arch/x86/kvm/vmx/main.c:696 vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
RIP: 0010:vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
Call Trace:
 vcpu_enter_guest+0x145d/0x24d0 [kvm]
 kvm_arch_vcpu_ioctl_run+0x25d/0xcc0 [kvm]
 kvm_vcpu_ioctl+0x414/0xa30 [kvm]]
 __x64_sys_ioctl+0xc0/0x100
 do_syscall_64+0x39/0xc0
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Because here you just want to record the trace path of the WARN_ON_ONCE(),
not ask for help debugging it.

>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/x86.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8f57dfb2a8c9..c90ec611de2f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10042,7 +10042,8 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
>  	 * Update it when it becomes invalid.
>  	 */
>  	apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
> -	if (start <= apic_address && apic_address < end)
> +	if (start <= apic_address && apic_address < end &&
> +	    !kvm_gfn_shared_mask(kvm))

Minor: please consider checking kvm_gfn_shared_mask(kvm) before the range check,
i.e. first check whether the check applies at all, then whether the range matches.
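Something like this (a sketch of the suggested ordering, same logic as the hunk above):

	if (!kvm_gfn_shared_mask(kvm) &&
	    start <= apic_address && apic_address < end)
		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);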

>  		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
>  }
>
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-11 15:17 ` [PATCH v7 000/102] KVM TDX basic feature support Isaku Yamahata
@ 2022-07-12  5:07   ` Chao Gao
  2022-07-12 10:54     ` Chao Peng
  2022-07-12 10:49   ` Chao Peng
  1 sibling, 1 reply; 219+ messages in thread
From: Chao Gao @ 2022-07-12  5:07 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, chao.p.peng

On Mon, Jul 11, 2022 at 08:17:01AM -0700, Isaku Yamahata wrote:
>Hi. Because my description of large page support was terse, I wrote up a more
>detailed one.  Any feedback/thoughts on large page support?
>
>TDP MMU large page support design
>
>Two main discussion points
>* how to track page status. private vs shared, no-largepage vs can-be-largepage

...

>
>Tracking private/shared and large page mappable
>-----------------------------------------------
>The VMM needs to track whether each page is mapped as private or shared at 4KB
>granularity.  For efficiency of the EPT violation path (****), at the 2MB and
>1GB levels the VMM should track whether the page can be mapped as a large page
>(with regard to private/shared).  The VMM updates this on MapGPA and references
>it on the EPT violation path. (****)

Isaku,

+ Peng Chao

Doesn't UPM guarantee that 2MB/1GB large page in CR3 should be either all
private or all shared?

KVM always retrieves the mapping level in CR3 and enforces that EPT's
page level is not greater than that in CR3. My point is if UPM already enforces
no mixed pages in a large page, then KVM needn't do that again (UPM can
be trusted).

Maybe I am misunderstanding something?
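For reference, a hedged sketch of the 4KB-granularity tracking described above: one bit per 4KB page (set = private) kept per memslot, consulted on the EPT-violation path to decide whether a 2MB/1GB mapping is allowed and updated on MapGPA.  The bitmap and the helper name are hypothetical, not existing KVM code:

static bool kvm_range_is_mixed(unsigned long *private_bitmap,
			       unsigned long first_idx, unsigned long npages)
{
	bool first_private = test_bit(first_idx, private_bitmap);
	unsigned long i;

	/* Mixed private/shared 4KB pages => the range can't be a large page. */
	for (i = 1; i < npages; i++)
		if (test_bit(first_idx + i, private_bitmap) != first_private)
			return true;
	return false;
}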

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
  2022-07-12  3:47   ` Yuan Yao
@ 2022-07-12  6:14     ` Chao Gao
  2022-07-19 18:12       ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Chao Gao @ 2022-07-12  6:14 UTC (permalink / raw)
  To: Yuan Yao; +Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Tue, Jul 12, 2022 at 11:47:43AM +0800, Yuan Yao wrote:
>On Mon, Jun 27, 2022 at 02:53:45PM -0700, isaku.yamahata@intel.com wrote:
>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>
>> TDX doesn't need the APIC page that depends on vAPIC, and its callback is
>> WARN_ON_ONCE(is_tdx).  To avoid the unnecessary overhead and the WARN_ON_ONCE(),
>> skip requesting KVM_REQ_APIC_PAGE_RELOAD for a TD.

!kvm_gfn_shared_mask() doesn't ensure the VM is a TD. Right?

>>
>>
>> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
>> ---
>>  arch/x86/kvm/x86.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 8f57dfb2a8c9..c90ec611de2f 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -10042,7 +10042,8 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
>>  	 * Update it when it becomes invalid.
>>  	 */
>>  	apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
>> -	if (start <= apic_address && apic_address < end)
>> +	if (start <= apic_address && apic_address < end &&
>> +	    !kvm_gfn_shared_mask(kvm))
>
>Minor: please consider checking kvm_gfn_shared_mask(kvm) before the range
>check, i.e. first check whether this is a TD at all, then whether the range
>is relevant.
>
>>  		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
>>  }
>>
>> --
>> 2.25.1
>>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 022/102] KVM: TDX: create/destroy VM structure
  2022-07-07  6:16   ` Yuan Yao
@ 2022-07-12  6:21     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  6:21 UTC (permalink / raw)
  To: Yuan Yao
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Kai Huang

On Thu, Jul 07, 2022 at 02:16:29PM +0800,
Yuan Yao <yuan.yao@linux.intel.com> wrote:

> On Mon, Jun 27, 2022 at 02:53:14PM -0700, isaku.yamahata@intel.com wrote:
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index 3675f7de2735..63f3c7a02cc8 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
...
> >  int __init tdx_module_setup(void)
> >  {
> >  	const struct tdsysinfo_struct *tdsysinfo;
> > @@ -48,6 +406,8 @@ int __init tdx_module_setup(void)
> >  		return ret;
> >  	}
> >
> > +	tdx_global_keyid = tdx_get_global_keyid();
> 
> I remember there's another static variable also named
> "tdx_global_keyid" in arch/x86/virt/vmx/tdx/tdx.c ?
> We can just use tdx_get_global_keyid() here without introducing
> another static variable.

Hmm, it can be done by exporting the variable itself.

 static inline int tdx_keyid_alloc(void) { return -EOPNOTSUPP; }
 static inline void tdx_keyid_free(int keyid) { }
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c1d41350e021..71f6d026bfd2 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -43,14 +43,6 @@ struct tdx_capabilities {
        struct tdx_cpuid_config cpuid_configs[TDX_MAX_NR_CPUID_CONFIGS];
 };
 
-/*
- * Key id globally used by TDX module: TDX module maps TDR with this TDX global
- * key id.  TDR includes key id assigned to the TD.  Then TDX module maps other
- * TD-related pages with the assigned key id.  TDR requires this TDX global key
- * id for cache flush unlike other TD-related pages.
- */
-static u32 tdx_global_keyid __read_mostly;
-
 /* Capabilities of KVM + the TDX module. */
 static struct tdx_capabilities tdx_caps;
 
@@ -3572,8 +3564,6 @@ int __init tdx_module_setup(void)
                return ret;
        }
 
-       tdx_global_keyid = tdx_get_global_keyid();
-
        tdsysinfo = tdx_get_sysinfo();
        if (tdsysinfo->num_cpuid_config > TDX_MAX_NR_CPUID_CONFIGS)
                return -EIO;
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index ea35230f0814..68ddcb06c7f1 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -65,13 +65,8 @@ static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMEN
 static int tdx_cmr_num;
 
 /* TDX module global KeyID.  Used in TDH.SYS.CONFIG ABI. */
-static u32 __read_mostly tdx_global_keyid;
-
-u32 tdx_get_global_keyid(void)
-{
-       return tdx_global_keyid;
-}
-EXPORT_SYMBOL_GPL(tdx_get_global_keyid);
+u32 tdx_global_keyid __ro_after_init;
+EXPORT_SYMBOL_GPL(tdx_global_keyid);
 
 u32 tdx_get_num_keyid(void)
 {

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 025/102] KVM: TDX: initialize VM with TDX specific parameters
  2022-06-28  8:30   ` Xiaoyao Li
@ 2022-07-12  7:11     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  7:11 UTC (permalink / raw)
  To: Xiaoyao Li
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Tue, Jun 28, 2022 at 04:30:53PM +0800,
Xiaoyao Li <xiaoyao.li@intel.com> wrote:

> On 6/28/2022 5:53 AM, isaku.yamahata@intel.com wrote:
> > From: Xiaoyao Li <xiaoyao.li@intel.com>
> > 
> > TDX requires additional parameters for a TDX VM for confidential execution,
> > to protect the confidentiality of its memory contents and its CPU state from
> > any other software, including the VMM.  Those parameters are given when
> > creating the guest TD, before creating any vcpu: the number of vcpus, the TSC
> > frequency (which is the same for all vcpus and can't be changed), the CPUIDs
> > emulated by the TDX module (so the guest can trust those CPUIDs), and the
> > sha384 values for measurement.
> > 
> > Add a new subcommand, KVM_TDX_INIT_VM, to pass parameters for the TDX guest.
> > It assigns an encryption key to the TDX guest for memory encryption; TDX
> > encrypts memory on a per-guest basis.  The device model passes the per-VM
> > parameters for the TDX guest: the maximum number of vcpus, the TSC frequency
> > (the TDX guest has a fixed VM-wide TSC frequency, not per-vcpu, which the
> > guest can not change), attributes (production or debug), available extended
> > features (reflected into guest XCR0 and the IA32_XSS MSR), cpuids, sha384
> > measurements, etc.
> > 
> > This subcommand is called before creating any vcpu and before KVM_SET_CPUID2,
> > i.e. the cpuid configurations aren't available yet.  So the CPUID configuration
> > values need to be passed in struct kvm_init_vm.  It's the device model's
> > responsibility to build the cpuid config for KVM_TDX_INIT_VM and KVM_SET_CPUID2.
> > 
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >   arch/x86/include/asm/kvm_host.h       |   2 +
> >   arch/x86/include/asm/tdx.h            |   3 +
> >   arch/x86/include/uapi/asm/kvm.h       |  33 +++++
> >   arch/x86/kvm/vmx/tdx.c                | 206 ++++++++++++++++++++++++++
> >   arch/x86/kvm/vmx/tdx.h                |  23 +++
> >   tools/arch/x86/include/uapi/asm/kvm.h |  33 +++++
> >   6 files changed, 300 insertions(+)
> > 
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 342decc69649..81638987cdb9 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1338,6 +1338,8 @@ struct kvm_arch {
> >   	 * the global KVM_MAX_VCPU_IDS may lead to significant memory waste.
> >   	 */
> >   	u32 max_vcpu_ids;
> > +
> > +	gfn_t gfn_shared_mask;
> 
> I think it's better to put in a seperate patch or the patch that consumes
> it.
> 
> >   };
> ...
> 
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index 2a9dfd54189f..1273b60a1a00 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -438,6 +438,209 @@ int tdx_dev_ioctl(void __user *argp)
> >   	return 0;
> >   }
> > +/*
> > + * cpuid entry lookup in TDX cpuid config way.
> > + * The difference is how to specify index(subleaves).
> > + * Specify index to TDX_CPUID_NO_SUBLEAF for CPUID leaf with no-subleaves.
> > + */
> > +static const struct kvm_cpuid_entry2 *tdx_find_cpuid_entry(
> > +	const struct kvm_cpuid2 *cpuid, u32 function, u32 index)
> > +{
> > +	int i;
> > +
> > +
> 
> superfluous line
> 
> > +	/* In TDX CPU CONFIG, TDX_CPUID_NO_SUBLEAF means index = 0. */
> > +	if (index == TDX_CPUID_NO_SUBLEAF)
> > +		index = 0;
> > +
> > +	for (i = 0; i < cpuid->nent; i++) {
> > +		const struct kvm_cpuid_entry2 *e = &cpuid->entries[i];
> > +
> > +		if (e->function == function &&
> > +		    (e->index == index ||
> > +		     !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
> > +			return e;
> > +	}
> > +	return NULL;
> > +}
> 
> no need for kvm_tdx->tsc_khz field. We have kvm->arch.default_tsc_khz.
> It seems kvm_tdx->tsc_khz is not used in the following patches.
> 
> ...
> 
> > +
> > +	kvm_tdx->tsc_offset = td_tdcs_exec_read64(kvm_tdx, TD_TDCS_EXEC_TSC_OFFSET);
> > +	kvm_tdx->attributes = td_params->attributes;
> > +	kvm_tdx->xfam = td_params->xfam;
> > +	kvm_tdx->tsc_khz = TDX_TSC_25MHZ_TO_KHZ(td_params->tsc_frequency);
> > +	kvm->max_vcpus = td_params->max_vcpus;
> > +
> > +	if (td_params->exec_controls & TDX_EXEC_CONTROL_MAX_GPAW)
> > +		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(51));
> > +	else
> > +		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(47));
> > +
> 
> ....
> 
> > diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
> > index a9ea3573be1b..779dfd683d66 100644
> > --- a/tools/arch/x86/include/uapi/asm/kvm.h
> > +++ b/tools/arch/x86/include/uapi/asm/kvm.h
> > @@ -531,6 +531,7 @@ struct kvm_pmu_event_filter {
> >   /* Trust Domain eXtension sub-ioctl() commands. */
> >   enum kvm_tdx_cmd_id {
> >   	KVM_TDX_CAPABILITIES = 0,
> > +	KVM_TDX_INIT_VM,
> >   	KVM_TDX_CMD_NR_MAX,
> >   };
> > @@ -576,4 +577,36 @@ struct kvm_tdx_capabilities {
> >   	struct kvm_tdx_cpuid_config cpuid_configs[0];
> >   };
> > +struct kvm_tdx_init_vm {
> > +	__u64 attributes;
> > +	__u32 max_vcpus;
> > +	__u32 tsc_khz;
> 
> it needs to stay aligned with arch/x86/include/uapi/asm/kvm.h, i.e. @tsc_khz
> needs to be removed here as well.

Thanks, I fixed this patch as follows.


diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 81638987cdb9..342decc69649 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1338,8 +1338,6 @@ struct kvm_arch {
         * the global KVM_MAX_VCPU_IDS may lead to significant memory waste.
         */
        u32 max_vcpu_ids;
-
-       gfn_t gfn_shared_mask;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 190b77f9cdd1..570127d4e566 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -441,7 +441,6 @@ static const struct kvm_cpuid_entry2 *tdx_find_cpuid_entry(
 {
        int i;
 
-
        /* In TDX CPU CONFIG, TDX_CPUID_NO_SUBLEAF means index = 0. */
        if (index == TDX_CPUID_NO_SUBLEAF)
                index = 0;
@@ -619,7 +618,6 @@ static int tdx_td_init(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
        kvm_tdx->tsc_offset = td_tdcs_exec_read64(kvm_tdx, TD_TDCS_EXEC_TSC_OFFSET);
        kvm_tdx->attributes = td_params->attributes;
        kvm_tdx->xfam = td_params->xfam;
-       kvm_tdx->tsc_khz = TDX_TSC_25MHZ_TO_KHZ(td_params->tsc_frequency);
        kvm->max_vcpus = td_params->max_vcpus;
 
        if (td_params->exec_controls & TDX_EXEC_CONTROL_MAX_GPAW)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 8a0793fcc3ab..3e5782438dc9 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -25,7 +25,6 @@ struct kvm_tdx {
        int hkid;
 
        u64 tsc_offset;
-       unsigned long tsc_khz;
 };
 
 struct vcpu_tdx {
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index 18654ba2ee87..965a1c2e347d 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -581,7 +581,7 @@ struct kvm_tdx_capabilities {
 struct kvm_tdx_init_vm {
        __u64 attributes;
        __u32 max_vcpus;
-       __u32 tsc_khz;
+       __u32 padding;
        __u64 mrconfigid[6];    /* sha384 digest */
        __u64 mrowner[6];       /* sha384 digest */
        __u64 mrownerconfig[6]; /* sha348 digest */
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 029/102] KVM: TDX: allocate/free TDX vcpu structure
  2022-06-28 11:34   ` Kai Huang
@ 2022-07-12  7:55     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12  7:55 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Tue, Jun 28, 2022 at 11:34:55PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > The next step of TDX guest creation is to create the vcpu.  Allocate the TDX
> > vcpu structures and initialize them.  Allocate the pages of the TDX vcpu for
> > the TDX module.
> > 
> > In the conventional case, cpuid is empty at initialization and is configured
> > after vcpu initialization.  Because TDX supports only X2APIC mode, cpuid is
> > forcibly initialized to support X2APIC at vcpu initialization.
> 
> The patch title and commit message of this patch are identical to the previous
> patch.
> 
> What happened? Did you forget to squash two patches together?

Forgot to squash this patch into the previous patch. Will fix it.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-11 15:17 ` [PATCH v7 000/102] KVM TDX basic feature support Isaku Yamahata
  2022-07-12  5:07   ` Chao Gao
@ 2022-07-12 10:49   ` Chao Peng
  2022-07-12 17:35     ` Isaku Yamahata
  1 sibling, 1 reply; 219+ messages in thread
From: Chao Peng @ 2022-07-12 10:49 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Mon, Jul 11, 2022 at 08:17:01AM -0700, Isaku Yamahata wrote:
> Hi. Because my description on large page support was terse, I wrote up more
> detailed one.  Any feedback/thoughts on large page support?
> 
> TDP MMU large page support design
> 
> Two main discussion points
> * how to track page status. private vs shared, no-largepage vs can-be-largepage
> * how to trigger merging mapping from 4KB/2MB to 2MB/1GB
> 
> Expected private-vs-shared page usage
> -------------------------------------
> On TD boot all pages are private and TD converts pages into shared if necessary.
> * Most of the guest pages remain private.
> * Only limited pages are converted at kernel boot
>   ** bounce buffer for IO (virt-io).  It's allocated as swiotlb.  Its size is
>      64MB or 6% of total guest memory.
>   ** KVM PV shared page. (the current guest TD doesn't use KVM PV shared page.)
> * Only a small number of pages are dynamically converted from private to shared
>   and vice versa.  This usage is very limited. e.g. GetQuote, the lack of
>   swiotlb buffer
> 
> 
> Theory of Secure-EPT operations related to large page
> -----------------------------------------------------
> TDX Secure-EPT has differences from VMX EPT.
> To add a page to Secure-EPT
> 
> * Here is the operation to resolve the EPT violation.
> 1. TD: Accepts GPA.  TD needs to accept GPA before accessing GPA because TD
>    needs to detect that VMM unmaps GPA and maps GPA again.
> 2. EPT violation is triggered.  TD exit to VMM.
> 3. VMM: allocate a page for GPA and TDH.MEM.PAGE.AUG it to GPA.  Resume TD vcpu.
>    (3a. TD: #VE<EPT violation> is injected.  #VE handler accepts the page)
> 4. TD: resume #VE and continue TD vcpu execution
> 
> TD may choose to skip step 1 (not accept the GPA in advance).  In that case,
> after step 3, #VE is injected into the TD and the TD's #VE handler needs to
> accept the page.
> 
> When adding a page to the Secure-EPT again, the page contents are cleared and
> the page is encrypted.  If a page is disassociated from the Secure-EPT and added
> again, the page content is lost.
> 
> * TDG.VP.VMCALL<MapGPA> hypercall
> The page associated with GPA can be private or shared.  TD converts the GPA by
> TDG.VP.VMCALL<MapGPA> hypercall from private to shared or vice versa.  VMM
> tracks whether the given GPA is private or shared.
> 
> * mapping merge(promote)/split(demote)
> The page can be mapped as large page (2MB or 1GB) in addition to 4KB.  The
> mapping can be merged(4KB/2MB -> 2MB/1GB) or split(2MB/1GB -> 4KB/2MB) by TDX
> SEAMCALL TDH.MEM.PAGE.PROMOTE and TDH.MEM.PAGE.DEMOTE.
> Unlike VMX EPT, merging a mapping requires that all the pages are already
> mapped, because of encryption.  This implies the current KVM implementation
> doesn't work for TDX when merging mappings, as follows:
> 
> - EPT violation and host page is 2MB mappable.
>   some of the 4KB pages of the given 2MB page are already mapped, some not.
>   i.e. 2MB EPT -> 4KB EPT -> 4K pages
> - KVM page fault handler zap 2MB EPT entry and populate 2MB EPT entry
>   zap: 2MB EPT: non present
>   populate 2MB: -> 2MB page
> 
> If VMM zaps 2MB Secure-EPT entry, the page contents will be lost for TDX.
> Mapping merge requires all pages are already mapped.
> 
> Instead, the following steps are needed.
> - EPT violation and host page is 2MB mappable.
>   some of the 4KB pages of the given 2MB page are already mapped.  Some not.
>   i.e. 2MB EPT -> 4KB EPT -> 4K pages
> - VMM checks all 4KB GPAs are private. If not, it can't be mapped as a large page.
>   (****)
> - VMM checks all 4KB GPAs are already mapped.  If not, give up mapping merge.
>   (or map missing 4KB pages.)
> - mapping merge by TDH.MEM.PAGE.PROMOTE
> 
> The mapping split for TDX Secure-EPT works similarly to the VMX EPT case.
> 
> 
> EPT violation and MapGPA
> ------------------------
> - EPT violation is a fast path
> - MapGPA is not a fast path.
> => Keep the EPT violation path optimized and complicate the MapGPA path instead.
> For the (****) check, we don't want to scan the 4KB mappings on EPT violation.
> Instead, the MapGPA path scans them and records whether the page can be mapped
> as 2MB with respect to private/shared.

This sounds reasonable.  Instead of tracking that in MapGPA, maybe
KVM_MEMORY_ENCRYPT_{UN,}REG_REGION introduced in UPM v7 is a better
place to put the scan code in:

  https://lkml.org/lkml/2022/7/6/259

Both MapGPA (explicit conversion) and the EPT violation (implicit
conversion) can cause these two ioctls to be invoked, and both need to
update this info.
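
Concretely, something like this in the conversion path (helper name made up,
only to show where the scan would live):

	/*
	 * Sketch: after a GFN range is converted between private and
	 * shared (explicitly via MapGPA or implicitly on an EPT violation,
	 * both ending up in the two ioctls above), rescan each touched 2MB
	 * range (and similarly 1GB) and cache whether it is now pure or
	 * mixed, so that the fault path only reads the cached result.
	 */
	for (gfn = start & ~(KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) - 1);
	     gfn < end;
	     gfn += KVM_PAGES_PER_HPAGE(PG_LEVEL_2M))
		update_2m_mixed_info(kvm, slot, gfn);	/* made-up helper */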

> 
> 
> Tracking private/shared and large page mappable
> -----------------------------------------------
> VMM needs to track that page is mapped as private or shared at 4KB granularity.
> For efficiency of EPT violation path (****), at 2MB and 1GB level, VMM should
> track the page can be mapped as a large page (regarding private/shared).  VMM
> updates it on MapGPA and references it on the EPT violation path. (****)
> 
> For 4KB pages, 1 bit is needed: private or shared.  Let's call it the shared-mask
> bit.  For 2MB/1GB pages, 2 bits are needed: large-page mappable or not, and
> private or shared if mappable.  Let's call the former the no-largepage bit.

I'm just thinking that maybe we don't need to introduce new bits; instead we
can reuse lpage_info, which we already use to track whether a page can be
mapped at a given page level in kvm_mmu_max_mapping_level(). Then in
the above two ioctls we do a scan for each level and update lpage_info.
For example, we should set disallow_lpage if private/shared pages are mixed
at that page level.

It's however a bit tricky to manage lpage_info.disallow_lpage in these
two ioctls with the current code. We can't simply do disallow_lpage++ and
disallow_lpage--. One possible solution is to treat disallow_lpage as a
mask instead of a count. Then we define bits like below:
  - USER_GFN_UNALIGNED: set when the memslot user_address/private_offset/gfn
    is not aligned on the page level
  - PAGE_TRACKING: set during page tracking
  - PRIVATE_SHARED_MIXED: set when private/shared pages are mixed

In the page fault handler the page can be mapped at that level only when all
bits are zero, and in the above two ioctls we just switch on/off the
PRIVATE_SHARED_MIXED bit.

Currently UPM doesn't have this code yet, but it can be added if feasible.
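
A minimal sketch of what I mean (all names are made up, and the existing
count-based users of disallow_lpage would need converting to bits as well):

/* Treat kvm_lpage_info.disallow_lpage as a bit mask instead of a count. */
#define KVM_LPAGE_USER_GFN_UNALIGNED	BIT(0)
#define KVM_LPAGE_PAGE_TRACKING		BIT(1)
#define KVM_LPAGE_PRIVATE_SHARED_MIXED	BIT(2)

/* Called from the two conversion ioctls for each affected 2MB/1GB page. */
static void update_mixed(struct kvm_lpage_info *linfo, bool mixed)
{
	if (mixed)
		linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
	else
		linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
}

/*
 * The fault path keeps its existing check: a large mapping is allowed
 * only when no bit is set, i.e. when !linfo->disallow_lpage.
 */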

Chao
> 
> Option A.)
>   Allocate array for pages in struct kvm_arch_memory_slot on TD creation.
>   struct kvm_arch_memory_slot {
>     +struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
>   }
> 
>   pros:
>   +straight forward implementation
>   +SPTE_SHARED_MASK is not needed
>   cons:
>   -memory overhead is high
>   -not optimized for expected usage
>   -one more look-up on EPT violation
> 
> Option B.) Steal two software usable bits from SPTE and record them in SPTE.
>            SPTE_SHARED_MASK, SPTE_NOLARGE_PAGE_MASK
>   pros:
>   +optimized for EPT violation
>   cons:
>   -2bits used in SPTE entry
>   -complicates the MapGPA path.
> 
> Option C.) Steal one software usable bit from SPTE and record it in SPTE.
>            SPTE_SHARED_MASK
>            For 2MB/1GB, allocate bitmap in kvm_mmu_page.
>            struct kvm_mmu_page {
>              bitmap nolarge
>            }
>   pros:
>   +optimized for EPT violation
>   cons:
>   -complicates the MapGPA path.
>   -information is scattered in SPTE and struct kvm_mmu_page
> 
> 
> How to update those bits
> ------------------------
> - MapGPA
>   - at 4KB level, set or clear shared-mask bit.
>   - Scan 512 4KB bit, at 2MB level
>     - set or clear shared-mask bit, clear no-largepage bit or
>     - clear shared-mask bit, set no-largepage bit
>     - increment/decrement lpageinfo to prevent/allow large page
>   - similar for 1GB level
>   Note: This logic might be a bit tricky.
> 
> - EPT violation
>   - If 2MB large page is allowed, check if no-largepage bit
>     - If no-largepage bit is set, => go down to 4KB page
>     - If no-largepage bit is cleared => try to map 2MB page
>       - If 4KB level is not mapped, map 2MB page
>       - If some 4KB level is already mapped, go down to 4KB.
>         Don't try to merge mapping. Or it's possible to try to merge mapping.
>   Note: scanning the 512 4KB entries is not done on EPT violation because it is
>         a fast path.
> 
> 
> Map merging
> -----------
> Map merging is necessary for TD migration. (Map split is the easy part.)  The
> current KVM implementation zaps the range (mmu notification or lpage recovery
> worker) and expects large page mapping on the next EPT violation.
> 
> Option A.) Keep the code similar to map merging logic.
> Zap 2MB EPT entry in some sense and trigger map merging logic on the next EPT
> violation.  To keep the encrypted page contents, zapped EPT entries need to keep
> the page.  Steal one more bit from the SPTE: SPTE_PRIVATE_BLOCKED_MASK.
> It means that the page is zapped from the SPTE, but it is still alive and still
> references the page.
> 
> Option B.) In the callback, directly merge mapping somehow.  In this case, mmu
> notifier usage doesn't make sense.
> 
> NOTE:
> - Implement map merging in MapGPA. This doesn't work for dirty page logging.
> - We can utilize kvm_nx_lpage_recovery_worker
> - We can utilize THP. Probably doesn't work well for fd-based private memory.
> 
> Thanks,
> Isaku Yamayhata
> 
> On Mon, Jun 27, 2022 at 02:52:52PM -0700,
> isaku.yamahata@intel.com wrote:
> 
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > KVM TDX basic feature support
> > 
> > Hello.  This is v7 the patch series vof KVM TDX support.
> > This is based on v5.19-rc1 + kvm/queue branch + TDX HOST patch series.
> > The tree can be found at https://github.com/intel/tdx/tree/kvm-upstream
> > How to run/test: It's describe at https://github.com/intel/tdx/wiki/TDX-KVM
> > 
> > Major changes from v6:
> > - rebased to v5.19 base
> > 
> > TODO:
> > - integrate fd-based guest memory. As the discussion is still on-going, I
> >   intentionally dropped fd-based guest memory support yet.  The integration can
> >   be found at https://github.com/intel/tdx/tree/kvm-upstream-workaround.
> > - 2M large page support. It's work-in-progress.
> > For large page support, there are several design choices. Here is the design options.
> > Any thoughts/feedback?
> > 
> > KVM MMU Large page support for TDX
> > 
> > * What needs to be done
> > - Track private or shared of each page size (4KB, 2MB, 1GB) based on
> >   TDG.VP.VMCALL<MapGPA>.  For large pages(2MB, 1GB), it can be mixed (some
> >   lower-size pages are private and some shared.)  In this case, the page can't
> >   be large.
> > - if necessary, split large page on TDG.VP.VMCALL<MapGPA>
> >   (split on dirty page tracking is future work)
> > - resolving KVM page fault
> >   When resolving a private page and the page is large in the host, GPA can be
> >   resolved as a large page in Secure-EPT.  Even if the page is large on the host
> >   side, sometimes a 4KB page can be resolved because it's up to guest TD to
> >   accept at 4KB, 2MB, or 1GB.
> > - collapsing pages into a large page.
> >   At this point, it's okay to not implement this.  When dirty page tracking is
> >   supported, this needs to be supported.
> >   - On MapGPA, the page can be collapsed into a large page
> >   - handle zapping SPTE and try to collapse the pages on the next KVM page fault
> >     Unlike the EPT case, some trick is needed.
> > - For performance, optimize KVM page fault path at the cost of complicating
> >   MapGPA path.
> > 
> > * options to track private or shared
> > At each page size (4KB, 2MB, and 1GB), track private, shared, or mixed (2MB and
> > 1GB case). For 4KB each page, 1 bit per page is needed. private or shared.  For
> > large pages (2MB and 1GB), 2 bits per large page is needed. (private, shared, or
> > mixed).  When resolving KVM page fault, we don't want to check the lower-size
> > pages to check if the given GPA can be a large for performance.  On MapGPA check
> > it instead.
> > 
> > Option A). enhance kvm_arch_memory_slot
> >   enum kvm_page_type {
> >        KVM_PAGE_TYPE_INVALID,
> >        KVM_PAGE_TYPE_SHARED,
> >        KVM_PAGE_TYPE_PRIVATE,
> >        KVM_PAGE_TYPE_MIXED,
> >   };
> > 
> >   struct kvm_page_attr {
> >        enum kvm_page_type type;
> >   };
> > 
> >  struct kvm_arch_memory_slot {
> >  +      struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
> > 
> > Option B). steal one more bit SPTE_MIXED_MASK in addition to SPTE_SHARED_MASK
> > If !SPTE_MIXED_MASK, it can be large page.
> > 
> > Option C). use SPTE_SHARED_MASK and kvm_mmu_page::mixed bitmap
> > kvm_mmu_page::mixed bitmap of 1GB, root indicates mixed for 2MB, 1GB.
> > 
> > 
> > * comparison
> > A).
> > + straightforward to implement
> > + SPTE_SHARED_MASK isn't needed
> > - memory overhead compared to B). or C).
> > - more memory reference on KVM page fault
> > 
> > B).
> > + simpler than C) (complex than A)?)
> > + efficient on KVM page fault. (only SPTE reference)
> > + low memory overhead
> > - Waste precious SPTE bits.
> > 
> > C).
> > + efficient on KVM page fault. (only SPTE reference)
> > + low memory overhead
> > - complicates MapGPA
> > - scattered data structure
> > 
> > Thanks,
> > Isaku Yamahata
> > 
> > Changes from v6:
> > - rebased to v5.19
> > 
> > Changes from v5:
> > - export __seamcall and use it
> > - move mutex lock from callee function of smp_call_on_cpu to the caller.
> > - rename mmu_prezap => flush_shadow_all_private() and tdx_mmu_release_hkid
> > - updated comment
> > - drop the use of tdh_mng_key.reclaimid(): as the function is for backward
> >   compatibility to only return success
> > - struct kvm_tdx_cmd: metadata => flags, added __u64 error.
> > - make this ioctl systemwide ioctl
> > - ABI change to struct kvm_init_vm
> > - guest_tsc_khz: use kvm->arch.default_tsc_khz
> > - rename BUILD_BUG_ON_MEMCPY to MEMCPY_SAME_SIZE
> > - drop exporting kvm_set_tsc_khz().
> > - fix kvm_tdp_page_fault() for mtrr emulation
> > - rename it to kvm_gfn_shared_mask(), dropped kvm_gpa_shared_mask()
> > - drop kvm_is_private_gfn(), kept kvm_is_private_gpa()
> >   keep kvm_{gfn, gpa}_private(), kvm_gpa_private()
> > - update commit message
> > - rename shadow_init_value => shadow_nonprsent_value
> > - added ept_violation_ve_test mode
> > - shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in tdp_mmu.c
> > - legacy MMU case
> >   => - mmu_topup_shadow_page_cache(), kvm_mmu_create()
> >      - FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> > - #VE warning:
> > - rename: REMOVED_SPTE => __REMOVED_SPTE, SHADOW_REMOVED_SPTE => REMOVED_SPTE
> > - merge into Like we discussed, this patch should be merged with patch
> >   "KVM: x86/mmu: Allow non-zero init value for shadow PTE".
> > - fix pointed by Sagi. check !is_private check => (kvm_gfn_shared_mask && !is_private)
> > - introduce kvm_gfn_for_root(kvm, root, gfn)
> > - add only_shared argument to kvm_tdp_mmu_handle_gfn()
> > - use kvm_arch_dirty_log_supported()
> > - rename SPTE_PRIVATE_PROHIBIT to SPTE_SHARED_MASK.
> > - rename: is_private_prohibit_spte() => spte_shared_mask()
> > - fix: shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in comment
> > - dropped this patch as the change was merged into kvm/queue
> > - update vt_apicv_post_state_restore()
> > - use is_64_bit_hypercall()
> > - comment: expand MSMI -> Machine Check System Management Interrupt
> > - fixed TDX_SEPT_PFERR
> > - tdvmcall_p[1234]_{write, read}() => tdvmcall_a[0123]_{read,write}()
> > - rename tdmvcall_exit_readon() => tdvmcall_leaf()
> > - remove optional zero check of argument.
> > - do a check for static_call(kvm_x86_has_emulated_msr)(kvm, MSR_IA32_SMBASE)
> >    in kvm_vcpu_ioctl_smi and __apic_accept_irq.
> > - WARN_ON_ONCE in tdx_smi_allowed and tdx_enable_smi_window.
> > - introduce vcpu_deliver_init to x86_ops
> > - sprinkeled KVM_BUG_ON()
> > 
> > Changes from v4:
> > - rebased to TDX host kernel patch series.
> > - include all the patches to make this patch series working.
> > - add [MARKER] patches to mark the patch layer clear.
> > 
> > ---
> > * What's TDX?
> > TDX stands for Trust Domain Extensions, which extends Intel Virtual Machines
> > Extensions (VMX) to introduce a kind of virtual machine guest called a Trust
> > Domain (TD) for confidential computing.
> > 
> > A TD runs in a CPU mode that is designed to protect the confidentiality of its
> > memory contents and its CPU state from any other software, including the hosting
> > Virtual Machine Monitor (VMM), unless explicitly shared by the TD itself.
> > 
> > We have more detailed explanations below (***).
> > We have the high-level design of TDX KVM below (****).
> > 
> > In this patch series, we use "TD" or "guest TD" to differentiate it from the
> > current "VM" (Virtual Machine), which is supported by KVM today.
> > 
> > 
> > * The organization of this patch series
> > This patch series is on top of the patches series "TDX host kernel support":
> > https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/
> > 
> > this patch series is available at
> > https://github.com/intel/tdx/releases/tag/kvm-upstream
> > The corresponding patches to qemu are available at
> > https://github.com/intel/qemu-tdx/commits/tdx-upstream
> > 
> > The relations of the layers are depicted as follows.
> > The arrows below show the order of patch reviews we would like to have.
> > 
> > The below layers are chosen so that the device model, for example, qemu can
> > exercise each layering step by step.  Check if TDX is supported, create TD VM,
> > create TD vcpu, allow vcpu running, populate TD guest private memory, and handle
> > vcpu exits/hypercalls/interrupts to run TD fully.
> > 
> >   TDX vcpu
> >   interrupt/exits/hypercall<------------\
> >         ^                               |
> >         |                               |
> >   TD finalization                       |
> >         ^                               |
> >         |                               |
> >   TDX EPT violation<------------\       |
> >         ^                       |       |
> >         |                       |       |
> >   TD vcpu enter/exit            |       |
> >         ^                       |       |
> >         |                       |       |
> >   TD vcpu creation/destruction  |       \-------KVM TDP MMU MapGPA
> >         ^                       |                       ^
> >         |                       |                       |
> >   TD VM creation/destruction    \---------------KVM TDP MMU hooks
> >         ^                                               ^
> >         |                                               |
> >   TDX architectural definitions                 KVM TDP refactoring for TDX
> >         ^                                               ^
> >         |                                               |
> >    TDX, VMX    <--------TDX host kernel         KVM MMU GPA stolen bits
> >    coexistence          support
> > 
> > 
> > The followings are explanations of each layer.  Each layer has a dummy commit
> > that starts with [MARKER] in subject.  It is intended to help to identify where
> > each layer starts.
> > 
> > TDX host kernel support:
> >         https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/
> >         The guts of system-wide initialization of TDX module.  There is an
> >         independent patch series for host x86.  TDX KVM patches call functions
> >         this patch series provides to initialize the TDX module.
> > 
> > TDX, VMX coexistence:
> >         Infrastructure to allow TDX to coexist with VMX and trigger the
> >         initialization of the TDX module.
> >         This layer starts with
> >         "KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX"
> > TDX architectural definitions:
> >         Add TDX architectural definitions and helper functions
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: TDX architectural definitions".
> > TD VM creation/destruction:
> >         Guest TD creation/destroy allocation and releasing of TDX specific vm
> >         and vcpu structure.  Create an initial guest memory image with TDX
> >         measurement.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: TD VM creation/destruction".
> > TD vcpu creation/destruction:
> >         guest TD creation/destroy Allocation and releasing of TDX specific vm
> >         and vcpu structure.  Create an initial guest memory image with TDX
> >         measurement.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction"
> > TDX EPT violation:
> >         Create an initial guest memory image with TDX measurement.  Handle
> >         secure EPT violations to populate guest pages with TDX SEAMCALLs.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: TDX EPT violation"
> > TD vcpu enter/exit:
> >         Allow TDX vcpu to enter into TD and exit from TD.  Save CPU state before
> >         entering into TD.  Restore CPU state after exiting from TD.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: TD vcpu enter/exit"
> > TD vcpu interrupts/exit/hypercall:
> >         Handle various exits/hypercalls and allow interrupts to be injected so
> >         that TD vcpu can continue running.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls"
> > 
> > KVM MMU GPA shared bit:
> >         Introduce framework to handle shared bit repurposed bit of GPA TDX
> >         repurposed a bit of GPA to indicate shared or private. If it's shared,
> >         it's the same as the conventional VMX EPT case.  VMM can access shared
> >         guest pages.  If it's private, it's handled by Secure-EPT and the guest
> >         page is encrypted.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: KVM MMU GPA stolen bits"
> > KVM TDP refactoring for TDX:
> >         TDX Secure EPT requires different constants. e.g. initial value EPT
> >         entry value etc. Various refactoring for those differences.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX"
> > KVM TDP MMU hooks:
> >         Introduce framework to TDP MMU to add hooks in addition to direct EPT
> >         access TDX added Secure EPT which is an enhancement to VMX EPT.  Unlike
> >         conventional VMX EPT, CPU can't directly read/write Secure EPT. Instead,
> >         use TDX SEAMCALLs to operate on Secure EPT.
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks"
> > KVM TDP MMU MapGPA:
> >         Introduce framework to handle switching guest pages from private/shared
> >         to shared/private.  For a given GPA, a guest page can be assigned to a
> >         private GPA or a shared GPA exclusively.  With TDX MapGPA hypercall,
> >         guest TD converts GPA assignments from private (or shared) to shared (or
> >         private).
> >         This layer starts with
> >         "[MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA "
> > 
> > KVM guest private memory: (not shown in the above diagram)
> > [PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private
> > memory: https://lkml.org/lkml/2022/1/18/395
> >         Guest private memory requires different memory management in KVM.  The
> >         patch proposes a way for it.  Integration with TDX KVM.
> > 
> > (***)
> > * TDX module
> > A CPU-attested software module called the "TDX module" is designed to implement
> > the TDX architecture, and it is loaded by the UEFI firmware today. It can be
> > loaded by the kernel or driver at runtime, but in this patch series we assume
> > that the TDX module is already loaded and initialized.
> > 
> > The TDX module provides two main new logical modes of operation built upon the
> > new SEAM (Secure Arbitration Mode) root and non-root CPU modes added to the VMX
> > architecture. TDX root mode is mostly identical to the VMX root operation mode,
> > and the TDX functions (described later) are triggered by the new SEAMCALL
> > instruction with the desired interface function selected by an input operand
> > (leaf number, in RAX). TDX non-root mode is used for TD guest operation.  TDX
> > non-root operation (i.e. "guest TD" mode) is similar to the VMX non-root
> > operation (i.e. guest VM), with changes and restrictions to better assure that
> > no other software or hardware has direct visibility of the TD memory and state.
> > 
> > TDX transitions between TDX root operation and TDX non-root operation include TD
> > Entries, from TDX root to TDX non-root mode, and TD Exits from TDX non-root to
> > TDX root mode.  A TD Exit might be asynchronous, triggered by some external
> > event (e.g., external interrupt or SMI) or an exception, or it might be
> > synchronous, triggered by a TDCALL (TDG.VP.VMCALL) function.
> > 
> > TD VCPUs can be entered using SEAMCALL(TDH.VP.ENTER) by KVM. TDH.VP.ENTER is one
> > of the TDX interface functions as mentioned above, and "TDH" stands for Trust
> > Domain Host. Those host-side TDX interface functions are categorized into
> > various areas just for better organization, such as SYS (TDX module management),
> > MNG (TD management), VP (VCPU), PHYSMEM (physical memory), MEM (private memory),
> > etc. For example, SEAMCALL(TDH.SYS.INFO) returns the TDX module information.
> > 
> > TDCS (Trust Domain Control Structure) is the main control structure of a guest
> > TD, and encrypted (using the guest TD's ephemeral private key).  At a high
> > level, TDCS holds information for controlling TD operation as a whole,
> > execution, EPTP, MSR bitmaps, etc that KVM needs to set it up.  Note that MSR
> > bitmaps are held as part of TDCS (unlike VMX) because they are meant to have the
> > same value for all VCPUs of the same TD.
> > 
> > Trust Domain Virtual Processor State (TDVPS) is the root control structure of a
> > TD VCPU.  It helps the TDX module control the operation of the VCPU, and holds
> > the VCPU state while the VCPU is not running. TDVPS is opaque to software and
> > DMA access, accessible only by using the TDX module interface functions (such as
> > TDH.VP.RD, TDH.VP.WR). TDVPS includes TD VMCS, and TD VMCS auxiliary structures,
> > such as virtual APIC page, virtualization exception information, etc.
> > 
> > Several VMX control structures (such as Shared EPT and Posted interrupt
> > descriptor) are directly managed and accessed by the host VMM.  These control
> > structures are pointed to by fields in the TD VMCS.
> > 
> > The above means that 1) KVM needs to allocate different data structures for TDs,
> > 2) KVM can reuse the existing code for TDs for some operations, and 3) it needs
> > to define TD-specific handling for others, redirecting those operations to the
> > TDX specific callbacks, like "if (is_td_vcpu(vcpu)) tdx_callback() else
> > vmx_callback();".
> > 
> > *TD Private Memory
> > TD private memory is designed to hold TD private content, encrypted by the CPU
> > using the TD ephemeral key. An encryption engine holds a table of encryption
> > keys, and an encryption key is selected for each memory transaction based on a
> > Host Key Identifier (HKID). By design, the host VMM does not have access to the
> > encryption keys.
> > 
> > In the first generation of MKTME, HKID is "stolen" from the physical address by
> > allocating a configurable number of bits from the top of the physical
> > address. The HKID space is partitioned into shared HKIDs for legacy MKTME
> > accesses and private HKIDs for SEAM-mode-only accesses. We use 0 for the shared
> > HKID on the host so that MKTME can be opaque or bypassed on the host.
> > 
> > During TDX non-root operation (i.e. guest TD), memory accesses can be qualified
> > as either shared or private, based on the value of a new SHARED bit in the Guest
> > Physical Address (GPA).  The CPU translates shared GPAs using the usual VMX EPT
> > (Extended Page Table) or "Shared EPT" (in this document), which resides in host
> > VMM memory. The Shared EPT is directly managed by the host VMM - the same as
> > with the current VMX. Since guest TDs usually require I/O, and the data exchange
> > needs to be done via shared memory, thus KVM needs to use the current EPT
> > functionality even for TDs.
> > 
> > * Secure EPT and Mirroring using the TDP code
> > The CPU translates private GPAs using a separate Secure EPT.  The Secure EPT
> > pages are encrypted and integrity-protected with the TD's ephemeral private
> > key.  Secure EPT can be managed _indirectly_ by the host VMM, using the TDX
> > interface functions, and thus conceptually Secure EPT is a subset of EPT (why
> > "subset"). Since execution of such interface functions takes much longer time
> > than accessing memory directly, in KVM we use the existing TDP code to mirror the
> > Secure EPT for the TD.
> > 
> > This way, we can effectively walk Secure EPT without using the TDX interface
> > functions.
> > 
> > * VM life cycle and TDX specific operations
> > The userspace VMM, such as QEMU, needs to build and treat TDs differently.  For
> > example, a TD needs to boot in private memory, and the host software cannot copy
> > the initial image to private memory.
> > 
> > * TSC Virtualization
> > The TDX module helps TDs maintain reliable TSC (Time Stamp Counter) values
> > (e.g. consistent among the TD VCPUs) and the virtual TSC frequency is determined
> > by TD configuration, i.e. when the TD is created, not per VCPU.  The current KVM
> > owns TSC virtualization for VMs, but the TDX module does for TDs.
> > 
> > * MCE support for TDs
> > The TDX module doesn't allow VMM to inject MCE.  Instead PV way is needed for TD
> > to communicate with VMM.  For now, KVM silently ignores MCE request by VMM.  MSRs
> > related to MCE (e.g, MCE bank registers) can be naturally emulated by
> > paravirtualizing MSR access.
> > 
> > [1] For details, the specifications, [2], [3], [4], [5], [6], [7], are
> > available.
> > 
> > * Restrictions or future work
> > Some features are not included to reduce patch size.  Those features are
> > addressed as future independent patch series.
> > - large page (2M, 1G)
> > - qemu gdb stub
> > - guest PMU
> > - and more
> > 
> > * Prerequisites
> > It's required to load the TDX module and initialize it.  It's out of the scope
> > of this patch series.  Another independent patch for the common x86 code is
> > planned.  It defines CONFIG_INTEL_TDX_HOST and this patch series uses
> > CONFIG_INTEL_TDX_HOST.  It's assumed that with CONFIG_INTEL_TDX_HOST=y, the TDX
> > module is initialized and the TDX module APIs for the TDX guest life cycle,
> > like tdh.mng.init, are ready for KVM to use.
> > 
> > Concretely Global initialization, LP (Logical Processor) initialization, global
> > configuration, the key configuration, and TDMR and PAMT initialization are done.
> > The state of the TDX module is SYS_READY.  Please refer to the TDX module
> > specification, the chapter Intel TDX Module Lifecycle State Machine
> > 
> > ** Detecting the TDX module readiness.
> > TDX host patch series implements the detection of the TDX module availability
> > and its initialization so that KVM can use it.  Also it manages Host KeyID
> > (HKID) assigned to guest TD.
> > The assumed APIs the TDX host patch series provides are
> > - int seamrr_enabled()
> >   Check if required cpu feature (SEAM mode) is available. This only check CPU
> >   feature availability.  At this point, the TDX module may not be ready for KVM
> >   to use.
> > - int init_tdx(void);
> >   Initialization of TDX module so that the TDX module is ready for KVM to use.
> > - const struct tdsysinfo_struct *tdx_get_sysinfo(void);
> >   Return the system wide information about the TDX module.  NULL if the TDX
> >   isn't initialized.
> > - u32 tdx_get_global_keyid(void);
> >   Return global key id that is used for the TDX module itself.
> > - int tdx_keyid_alloc(void);
> >   Allocate HKID for guest TD.
> > - void tdx_keyid_free(int keyid);
> >   Free HKID for guest TD.
> > 
> > (****)
> > * TDX KVM high-level design
> > - Host key ID management
> > Host Key ID (HKID) needs to be assigned to each TDX guest for memory encryption.
> > It is assumed The TDX host patch series implements necessary functions,
> > u32 tdx_get_global_keyid(void), int tdx_keyid_alloc(void) and,
> > void tdx_keyid_free(int keyid).
> > 
> > - Data structures and VM type
> > Because TDX is different from VMX, define its own VM/VCPU structures, struct
> > kvm_tdx and struct vcpu_tdx instead of struct kvm_vmx and struct vcpu_vmx.  To
> > identify the VM, introduce VM-type to specify which VM type, VMX (default) or
> > TDX, is used.
> > 
> > - VM life cycle and TDX specific operations
> > Re-purpose the existing KVM_MEMORY_ENCRYPT_OP to add TDX specific operations.
> > New commands are used to get the TDX system parameters, set TDX specific VM/VCPU
> > parameters, set initial guest memory and measurement.
> > 
> > The creation of TDX VM requires five additional operations in addition to the
> > conventional VM creation.
> >   - Get KVM system capability to check if TDX VM type is supported
> >   - VM creation (KVM_CREATE_VM)
> >   - New: Get the TDX specific system parameters.  KVM_TDX_GET_CAPABILITY.
> >   - New: Set TDX specific VM parameters.  KVM_TDX_INIT_VM.
> >   - VCPU creation (KVM_CREATE_VCPU)
> >   - New: Set TDX specific VCPU parameters.  KVM_TDX_INIT_VCPU.
> >   - New: Initialize guest memory as boot state and extend the measurement with
> >     the memory.  KVM_TDX_INIT_MEM_REGION.
> >   - New: Finalize VM. KVM_TDX_FINALIZE. Complete measurement of the initial
> >     TDX VM contents.
> >   - VCPU RUN (KVM_VCPU_RUN)
> > 
> > - Protected guest state
> > Because the guest state (CPU state and guest memory) is protected, the KVM VMM
> > can't operate on them.  For example, accessing CPU registers, injecting
> > exceptions, and accessing guest memory.  Those operations are handled as
> > silently ignored, returning zero or initial reset value when it's requested via
> > KVM API ioctls.
> > 
> >     VM/VCPU state and callbacks for TDX specific operations.
> >     Define tdx specific VM state and VCPU state instead of VMX ones.  Redirect
> >     operations to TDX specific callbacks.  "if (tdx) tdx_op() else vmx_op()".
> > 
> >     Operations on the CPU state
> >     silently ignore operations on the guest state.  For example, the write to
> >     CPU registers is ignored and the read from CPU registers returns 0.
> > 
> >     . ignore access to CPU registers except for allowed ones.
> >     . TSC: add a check if tsc is immutable and return an error.  Because the KVM
> >       implementation updates the internal tsc state and it's difficult to back
> >       out those changes.  Instead, skip the logic.
> >     . dirty logging: add check if dirty logging is supported.
> >     . exceptions/SMI/MCE/SIPI/INIT: silently ignore
> > 
> >     Note: virtual external interrupt and NMI can be injected into TDX guests.
> > 
> > - KVM MMU integration
> > One bit of the guest physical address (bit 51 or 47) is repurposed to indicate if
> > the guest physical address is private (the bit is cleared) or shared (the bit is
> > set).  The bits are called stolen bits.
> > 
> >   - Stolen bits framework
> >     systematically tracks which guest physical address, shared or private, is
> >     used.
> > 
> >   - Shared EPT and secure EPT
> >     There are two EPTs. Shared EPT (the conventional one) and Secure
> >     EPT(the new one). Shared EPT is handled the same for the stolen
> >     bit set.  Secure EPT points to private guest pages.  To resolve
> >     EPT violation, KVM walks one of two EPTs based on faulted GPA.
> >     Because it's costly to access secure EPT during walking EPTs with
> >     SEAMCALLs for the private guest physical address, another private
> >     EPT is used as a shadow of Secure-EPT with the existing logic at
> >     the cost of extra memory.
> > 
> > The following depicts the relationship.
> > 
> >                     KVM                             |       TDX module
> >                      |                              |           |
> >         -------------+----------                    |           |
> >         |                      |                    |           |
> >         V                      V                    |           |
> >      shared GPA           private GPA               |           |
> >   CPU shared EPT pointer  KVM private EPT pointer   |  CPU secure EPT pointer
> >         |                      |                    |           |
> >         |                      |                    |           |
> >         V                      V                    |           V
> >   shared EPT                private EPT--------mirror----->Secure EPT
> >         |                      |                    |           |
> >         |                      \--------------------+------\    |
> >         |                                           |      |    |
> >         V                                           |      V    V
> >   shared guest page                                 |    private guest page
> >                                                     |
> >                                                     |
> >                               non-encrypted memory  |    encrypted memory
> >                                                     |
> > 
> >   - Operating on Secure EPT
> >     Use the TDX module APIs to operate on Secure EPT.  To call the TDX API
> >     during resolving EPT violation, add hooks to additional operation and wiring
> >     it to TDX backend.
> > 
> > * References
> > 
> > [1] TDX specification
> >    https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html
> > [2] Intel Trust Domain Extensions (Intel TDX)
> >    https://cdrdv2.intel.com/v1/dl/getContent/726790
> > [3] Intel CPU Architectural Extensions Specification
> >    https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-cpu-architectural-specification.pdf
> > [4] Intel TDX Module 1.0 Specification
> >    https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-module-1.0-public-spec-v0.931.pdf
> > [5] Intel TDX Loader Interface Specification
> >   https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-seamldr-interface-specification.pdf
> > [6] Intel TDX Guest-Hypervisor Communication Interface
> >    https://cdrdv2.intel.com/v1/dl/getContent/726790
> > [7] Intel TDX Virtual Firmware Design Guide
> >    https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.01.pdf
> > [8] intel public github
> >    kvm TDX branch: https://github.com/intel/tdx/tree/kvm
> >    TDX guest branch: https://github.com/intel/tdx/tree/guest
> >    qemu TDX https://github.com/intel/qemu-tdx
> > [9] TDVF
> >     https://github.com/tianocore/edk2-staging/tree/TDVF
> >     This was merged into EDK2 main branch. https://github.com/tianocore/edk2
> > 
> > Chao Gao (3):
> >   KVM: x86: Move check_processor_compatibility from init ops to runtime
> >     ops
> >   Partially revert "KVM: Pass kvm_init()'s opaque param to additional
> >     arch funcs"
> >   KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o
> >     wrmsr
> > 
> > Isaku Yamahata (72):
> >   KVM: Refactor CPU compatibility check on module initialiization
> >   x86/virt/vmx/tdx: export platform_tdx_enabled()
> >   KVM: TDX: Detect CPU feature on kernel module initialization
> >   KVM: x86: Refactor KVM VMX module init/exit functions
> >   KVM: TDX: Add placeholders for TDX VM/vcpu structure
> >   x86/virt/tdx: Add a helper function to return system wide info about
> >     TDX module
> >   KVM: TDX: Initialize TDX module when loading kvm_intel.ko
> >   KVM: TDX: Make TDX VM type supported
> >   [MARKER] The start of TDX KVM patch series: TDX architectural
> >     definitions
> >   KVM: TDX: Define TDX architectural definitions
> >   KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module
> >   KVM: TDX: Add helper functions to print TDX SEAMCALL error
> >   [MARKER] The start of TDX KVM patch series: TD VM creation/destruction
> >   x86/cpu: Add helper functions to allocate/free TDX private host key id
> >   KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl
> >   KVM: TDX: Make pmu_intel.c ignore guest TD case
> >   [MARKER] The start of TDX KVM patch series: TD vcpu
> >     creation/destruction
> >   KVM: TDX: allocate/free TDX vcpu structure
> >   KVM: TDX: allocate/free TDX vcpu structure
> >   [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits
> >   KVM: x86/mmu: introduce config for PRIVATE KVM MMU
> >   [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for
> >     TDX
> >   KVM: x86/mmu: Disallow fast page fault on private GPA
> >   KVM: VMX: Introduce test mode related to EPT violation VE
> >   [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks
> >   KVM: x86/mmu: Focibly use TDP MMU for TDX
> >   KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
> >   KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map()
> >   KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
> >   [MARKER] The start of TDX KVM patch series: TDX EPT violation
> >   KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs
> >   KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
> >   KVM: TDX: TDP MMU TDX support
> >   [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA
> >   KVM: x86/mmu: steal software usable git to record if GFN is for shared
> >     or not
> >   KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX
> >   [MARKER] The start of TDX KVM patch series: TD finalization
> >   KVM: TDX: Create initial guest memory
> >   KVM: TDX: Finalize VM initialization
> >   [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit
> >   KVM: TDX: Add helper assembly function to TDX vcpu
> >   KVM: TDX: Implement TDX vcpu enter/exit path
> >   KVM: TDX: vcpu_run: save/restore host state(host kernel gs)
> >   KVM: TDX: restore host xsave state when exit from the guest TD
> >   KVM: TDX: restore user ret MSRs
> >   [MARKER] The start of TDX KVM patch series: TD vcpu
> >     exits/interrupts/hypercalls
> >   KVM: TDX: complete interrupts after tdexit
> >   KVM: TDX: restore debug store when TD exit
> >   KVM: TDX: handle vcpu migration over logical processor
> >   KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched
> >     behavior
> >   KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c
> >   KVM: TDX: Implement interrupt injection
> >   KVM: TDX: Implements vcpu request_immediate_exit
> >   KVM: TDX: Implement methods to inject NMI
> >   KVM: TDX: Add a place holder to handle TDX VM exit
> >   KVM: TDX: handle EXIT_REASON_OTHER_SMI
> >   KVM: TDX: handle ept violation/misconfig exit
> >   KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT
> >   KVM: TDX: Add a place holder for handler of TDX hypercalls
> >     (TDG.VP.VMCALL)
> >   KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL
> >   KVM: TDX: Handle TDX PV CPUID hypercall
> >   KVM: TDX: Handle TDX PV HLT hypercall
> >   KVM: TDX: Handle TDX PV port io hypercall
> >   KVM: TDX: Implement callbacks for MSR operations for TDX
> >   KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall
> >   KVM: TDX: Handle TDX PV report fatal error hypercall
> >   KVM: TDX: Handle TDX PV map_gpa hypercall
> >   KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall
> >   KVM: TDX: Silently discard SMI request
> >   KVM: TDX: Silently ignore INIT/SIPI
> >   Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX)
> >   KVM: x86: design documentation on TDX support of x86 KVM TDP MMU
> > 
> > Rick Edgecombe (1):
> >   KVM: x86/mmu: Add address conversion functions for TDX shared bits
> > 
> > Sean Christopherson (25):
> >   KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
> >   KVM: Enable hardware before doing arch VM initialization
> >   KVM: x86: Introduce vm_type to differentiate default VMs from
> >     confidential VMs
> >   KVM: TDX: Add TDX "architectural" error codes
> >   KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers
> >   KVM: TDX: create/destroy VM structure
> >   KVM: TDX: x86: Add ioctl to get TDX systemwide parameters
> >   KVM: TDX: Do TDX specific vcpu initialization
> >   KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
> >   KVM: x86/mmu: Allow non-zero value for non-present SPTE
> >   KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
> >   KVM: x86/mmu: Allow per-VM override of the TDP max page level
> >   KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for
> >     private mmu
> >   KVM: x86/mmu: Disallow dirty logging for x86 TDX
> >   KVM: VMX: Split out guts of EPT violation to common/exposed function
> >   KVM: VMX: Move setting of EPT MMU masks to common VT-x code
> >   KVM: TDX: Add load_mmu_pgd method for TDX
> >   KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX
> >   KVM: TDX: Add support for find pending IRQ in a protected local APIC
> >   KVM: x86: Assume timer IRQ was injected if APIC state is protected
> >   KVM: VMX: Modify NMI and INTR handlers to take intr_info as function
> >     argument
> >   KVM: VMX: Move NMI/exception handler to common helper
> >   KVM: x86: Split core of hypercall emulation to helper function
> >   KVM: TDX: Handle TDX PV MMIO hypercall
> >   KVM: TDX: Add methods to ignore accesses to CPU state
> > 
> > Xiaoyao Li (1):
> >   KVM: TDX: initialize VM with TDX specific parameters
> > 
> >  Documentation/virt/kvm/api.rst                |   30 +-
> >  .../virt/kvm/intel-tdx-layer-status.rst       |   33 +
> >  Documentation/virt/kvm/intel-tdx.rst          |  381 +++
> >  Documentation/virt/kvm/tdx-tdp-mmu.rst        |  466 ++++
> >  arch/arm64/kvm/arm.c                          |    2 +-
> >  arch/mips/kvm/mips.c                          |   14 +-
> >  arch/powerpc/kvm/powerpc.c                    |    2 +-
> >  arch/riscv/kvm/main.c                         |    2 +-
> >  arch/s390/kvm/kvm-s390.c                      |    2 +-
> >  arch/x86/events/intel/ds.c                    |    1 +
> >  arch/x86/include/asm/kvm-x86-ops.h            |   10 +
> >  arch/x86/include/asm/kvm_host.h               |   56 +-
> >  arch/x86/include/asm/tdx.h                    |   67 +
> >  arch/x86/include/asm/vmx.h                    |   14 +
> >  arch/x86/include/uapi/asm/kvm.h               |   95 +
> >  arch/x86/include/uapi/asm/vmx.h               |    5 +-
> >  arch/x86/kvm/Kconfig                          |    4 +
> >  arch/x86/kvm/Makefile                         |    3 +-
> >  arch/x86/kvm/irq.c                            |    3 +
> >  arch/x86/kvm/lapic.c                          |   37 +-
> >  arch/x86/kvm/lapic.h                          |    2 +
> >  arch/x86/kvm/mmu.h                            |   42 +-
> >  arch/x86/kvm/mmu/mmu.c                        |  360 ++-
> >  arch/x86/kvm/mmu/mmu_internal.h               |  123 +-
> >  arch/x86/kvm/mmu/paging_tmpl.h                |    5 +-
> >  arch/x86/kvm/mmu/spte.c                       |   46 +-
> >  arch/x86/kvm/mmu/spte.h                       |   65 +-
> >  arch/x86/kvm/mmu/tdp_iter.c                   |    1 +
> >  arch/x86/kvm/mmu/tdp_iter.h                   |    5 +-
> >  arch/x86/kvm/mmu/tdp_mmu.c                    |  690 ++++-
> >  arch/x86/kvm/mmu/tdp_mmu.h                    |   12 +-
> >  arch/x86/kvm/svm/svm.c                        |   13 +-
> >  arch/x86/kvm/vmx/common.h                     |  174 ++
> >  arch/x86/kvm/vmx/evmcs.c                      |    2 +-
> >  arch/x86/kvm/vmx/evmcs.h                      |    2 +-
> >  arch/x86/kvm/vmx/main.c                       | 1071 +++++++
> >  arch/x86/kvm/vmx/pmu_intel.c                  |   39 +-
> >  arch/x86/kvm/vmx/pmu_intel.h                  |   28 +
> >  arch/x86/kvm/vmx/posted_intr.c                |   43 +-
> >  arch/x86/kvm/vmx/posted_intr.h                |   13 +
> >  arch/x86/kvm/vmx/tdx.c                        | 2465 +++++++++++++++++
> >  arch/x86/kvm/vmx/tdx.h                        |  275 ++
> >  arch/x86/kvm/vmx/tdx_arch.h                   |  157 ++
> >  arch/x86/kvm/vmx/tdx_errno.h                  |   29 +
> >  arch/x86/kvm/vmx/tdx_error.c                  |   22 +
> >  arch/x86/kvm/vmx/tdx_ops.h                    |  188 ++
> >  arch/x86/kvm/vmx/vmenter.S                    |  146 +
> >  arch/x86/kvm/vmx/vmx.c                        |  737 ++---
> >  arch/x86/kvm/vmx/vmx.h                        |   39 +-
> >  arch/x86/kvm/vmx/x86_ops.h                    |  235 ++
> >  arch/x86/kvm/x86.c                            |  148 +-
> >  arch/x86/virt/vmx/tdx/seamcall.S              |    2 +
> >  arch/x86/virt/vmx/tdx/tdx.c                   |   54 +-
> >  arch/x86/virt/vmx/tdx/tdx.h                   |   52 -
> >  include/linux/kvm_host.h                      |    4 +-
> >  include/uapi/linux/kvm.h                      |    2 +
> >  tools/arch/x86/include/uapi/asm/kvm.h         |   95 +
> >  tools/include/uapi/linux/kvm.h                |    1 +
> >  virt/kvm/kvm_main.c                           |   67 +-
> >  59 files changed, 7877 insertions(+), 804 deletions(-)
> >  create mode 100644 Documentation/virt/kvm/intel-tdx-layer-status.rst
> >  create mode 100644 Documentation/virt/kvm/intel-tdx.rst
> >  create mode 100644 Documentation/virt/kvm/tdx-tdp-mmu.rst
> >  create mode 100644 arch/x86/kvm/vmx/common.h
> >  create mode 100644 arch/x86/kvm/vmx/main.c
> >  create mode 100644 arch/x86/kvm/vmx/pmu_intel.h
> >  create mode 100644 arch/x86/kvm/vmx/tdx.c
> >  create mode 100644 arch/x86/kvm/vmx/tdx.h
> >  create mode 100644 arch/x86/kvm/vmx/tdx_arch.h
> >  create mode 100644 arch/x86/kvm/vmx/tdx_errno.h
> >  create mode 100644 arch/x86/kvm/vmx/tdx_error.c
> >  create mode 100644 arch/x86/kvm/vmx/tdx_ops.h
> >  create mode 100644 arch/x86/kvm/vmx/x86_ops.h
> > 
> > -- 
> > 2.25.1
> > 
> 
> -- 
> Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-12  5:07   ` Chao Gao
@ 2022-07-12 10:54     ` Chao Peng
  2022-07-12 17:22       ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Chao Peng @ 2022-07-12 10:54 UTC (permalink / raw)
  To: Chao Gao
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini,
	chao.p.peng

On Tue, Jul 12, 2022 at 01:07:20PM +0800, Chao Gao wrote:
> On Mon, Jul 11, 2022 at 08:17:01AM -0700, Isaku Yamahata wrote:
> >Hi. Because my description on large page support was terse, I wrote up a more
> >detailed one.  Any feedback/thoughts on large page support?
> >
> >TDP MMU large page support design
> >
> >Two main discussion points
> >* how to track page status. private vs shared, no-largepage vs can-be-largepage
> 
> ...
> 
> >
> >Tracking private/shared and large page mappable
> >-----------------------------------------------
> >VMM needs to track whether a page is mapped as private or shared at 4KB granularity.
> >For efficiency of the EPT violation path (****), at 2MB and 1GB level, VMM should
> >track whether the page can be mapped as a large page (regarding private/shared).  VMM
> >updates it on MapGPA and references it on the EPT violation path. (****)
> 
> Isaku,
> 
> + Peng Chao
> 
> Doesn't UPM guarantee that 2MB/1GB large page in CR3 should be either all
> private or all shared?
> 
> KVM always retrieves the mapping level in CR3 and enforces that EPT's
> page level is not greater than that in CR3. My point is if UPM already enforces
> no mixed pages in a large page, then KVM needn't do that again (UPM can
> be trusted).

The backing store in UPM can tell KVM which page level it can
support for a given private GPA, similar to host_pfn_mapping_level() for
shared addresses.

However, this solely represents the backing store's capability; KVM still
needs additional info to decide whether the GPA can be safely mapped as
2M/1G, e.g. all the pages in the 2M/1G range should be private.  Currently
this is not something the backing store can tell.

Actually, in UPM v7 we let KVM record this info so one possible solution
is making use of it.

  https://lkml.org/lkml/2022/7/6/259

Then to map a page as 2M, KVM needs to check (a rough sketch follows the list):
  - The memory backing store supports that level
  - All pages in the 2M range are private, as recorded through
    KVM_MEMORY_ENCRYPT_{UN,}REG_REGION
  - No existing partial 4K map(s) in the 2M range
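
A minimal sketch of how these three checks could combine on the private fault
path (the helpers below are hypothetical placeholders for the three checks,
not the actual UPM or KVM APIs):

  /*
   * Hypothetical: return the max mapping level KVM may use for a private gfn.
   * kvm_private_mem_mapping_level(), kvm_range_all_private() and
   * kvm_range_has_private_map() stand in for checks 1-3 above.
   */
  static int max_private_mapping_level(struct kvm *kvm, gfn_t gfn)
  {
          gfn_t base = gfn & ~511ULL;     /* first gfn of the 2M range */

          /* 1. the backing store must be able to back a 2M page */
          if (kvm_private_mem_mapping_level(kvm, gfn) < PG_LEVEL_2M)
                  return PG_LEVEL_4K;

          /* 2. all 512 gfns were recorded private via the ioctls above */
          if (!kvm_range_all_private(kvm, base, 512))
                  return PG_LEVEL_4K;

          /* 3. no partial 4K private mapping already exists in the range */
          if (kvm_range_has_private_map(kvm, base, 512))
                  return PG_LEVEL_4K;

          return PG_LEVEL_2M;
  }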

Chao
> 
> Maybe I am misunderstanding something?



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-12 10:54     ` Chao Peng
@ 2022-07-12 17:22       ` Isaku Yamahata
  2022-07-13  7:37         ` Chao Peng
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12 17:22 UTC (permalink / raw)
  To: Chao Peng
  Cc: Chao Gao, Isaku Yamahata, isaku.yamahata, kvm, linux-kernel,
	Paolo Bonzini, chao.p.peng

On Tue, Jul 12, 2022 at 06:54:19PM +0800,
Chao Peng <chao.p.peng@linux.intel.com> wrote:

> On Tue, Jul 12, 2022 at 01:07:20PM +0800, Chao Gao wrote:
> > On Mon, Jul 11, 2022 at 08:17:01AM -0700, Isaku Yamahata wrote:
> > >Hi. Because my description on large page support was terse, I wrote up more
> > >detailed one.  Any feedback/thoughts on large page support?
> > >
> > >TDP MMU large page support design
> > >
> > >Two main discussion points
> > >* how to track page status. private vs shared, no-largepage vs can-be-largepage
> > 
> > ...
> > 
> > >
> > >Tracking private/shared and large page mappable
> > >-----------------------------------------------
> > >VMM needs to track whether a page is mapped as private or shared at 4KB granularity.
> > >For efficiency of the EPT violation path (****), at 2MB and 1GB level, VMM should
> > >track whether the page can be mapped as a large page (regarding private/shared).  VMM
> > >updates it on MapGPA and references it on the EPT violation path. (****)
> > 
> > Isaku,
> > 
> > + Peng Chao
> > 
> > Doesn't UPM guarantee that 2MB/1GB large page in CR3 should be either all
> > private or all shared?
> > 
> > KVM always retrieves the mapping level in CR3 and enforces that EPT's
> > page level is not greater than that in CR3. My point is if UPM already enforces
> > no mixed pages in a large page, then KVM needn't do that again (UPM can
> > be trusted).
> 
> The backing store in UPM can tell KVM which page level it can
> support for a given private GPA, similar to host_pfn_mapping_level() for
> shared addresses.
>
> However, this solely represents the backing store's capability; KVM still
> needs additional info to decide whether the GPA can be safely mapped as
> 2M/1G, e.g. all the pages in the 2M/1G range should be private.  Currently
> this is not something the backing store can tell.

This argument also applies to shared GPAs.  With UPM, shared pages are backed
by a normal file mapping, so the same check is needed when KVM maps a shared
GPA.  So I think KVM has to track all-private, all-shared, or no-largepage at
the 2MB/1GB level.  If UPM tracks shared-or-private at the 4KB level, KVM
probably doesn't need to track it at the 4KB level.
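
For illustration only (the names below are invented for this sketch, not
taken from the series), the three per-2MB/1GB states and the fast-path check
could look like:

  /* One possible encoding of the state tracked per 2MB/1GB index. */
  enum kvm_lpage_mix {
          KVM_LPAGE_ALL_PRIVATE,
          KVM_LPAGE_ALL_SHARED,
          KVM_LPAGE_MIXED,        /* can't be mapped as a large page */
  };

  /*
   * EPT-violation side: consult only the per-large-page record instead of
   * scanning 512 lower-level entries on every fault; MapGPA is the slow
   * path that keeps this record up to date.
   */
  static bool large_map_allowed(enum kvm_lpage_mix mix, bool fault_is_private)
  {
          return fault_is_private ? mix == KVM_LPAGE_ALL_PRIVATE :
                                    mix == KVM_LPAGE_ALL_SHARED;
  }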


> Actually, in UPM v7 we let KVM record this info so one possible solution
> is making use of it.
> 
>   https://lkml.org/lkml/2022/7/6/259
> 
> Then to map a page as 2M, KVM needs to check:
>   - The memory backing store supports that level
>   - All pages in the 2M range are private, as recorded through
>     KVM_MEMORY_ENCRYPT_{UN,}REG_REGION
>   - No existing partial 4K map(s) in the 2M range
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-12 10:49   ` Chao Peng
@ 2022-07-12 17:35     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12 17:35 UTC (permalink / raw)
  To: Chao Peng
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Tue, Jul 12, 2022 at 06:49:25PM +0800,
Chao Peng <chao.p.peng@linux.intel.com> wrote:

> On Mon, Jul 11, 2022 at 08:17:01AM -0700, Isaku Yamahata wrote:
> > Hi. Because my description on large page support was terse, I wrote up a more
> > detailed one.  Any feedback/thoughts on large page support?
> > 
> > TDP MMU large page support design
> > 
> > Two main discussion points
> > * how to track page status. private vs shared, no-largepage vs can-be-largepage
> > * how to trigger merging mapping from 4KB/2MB to 2MB/1GB
> > 
> > Expected private-vs-shared page usage
> > -------------------------------------
> > On TD boot all pages are private and TD converts pages into shared if necessary.
> > * Most of the guest pages remain private.
> > * Only limited pages are converted at kernel boot
> >   ** bounce buffer for IO (virt-io).  It's allocated as swiotlb.  Its size is
> >      64MB or 6% of total guest memory.
> >   ** KVM PV shared page. (the current guest TD doesn't use KVM PV shared page.)
> > * Only a small number of pages are dynamically converted from private to shared
> >   and vice versa.  This usage is very limited. e.g. GetQuote, the lack of
> >   swiotlb buffer
> > 
> > 
> > Theory of Secure-EPT operations related to large page
> > -----------------------------------------------------
> > TDX Secure-EPT has differences from VMX EPT.
> > To add a page to Secure-EPT
> > 
> > * Here is the operation to resolve the EPT violation.
> > 1. TD: Accepts GPA.  TD needs to accept GPA before accessing GPA because TD
> >    needs to detect that VMM unmaps GPA and maps GPA again.
> > 2. EPT violation is triggered.  TD exit to VMM.
> > 3. VMM: allocate a page for GPA and TDH.MEM.PAGE.AUG it to GPA.  Resume TD vcpu.
> >    (3a. TD: #VE<EPT violation> is injected.  #VE handler accepts the page)
> > 4. TD: resume #VE and continue TD vcpu execution
> > 
> > TD may choose step 1. In that case, after step 3, #VE is injected into the TD
> > and the TD #VE handler needs to accept the page.
> > 
> > When adding a page to Secure-EPT again, the page contents are cleared and the
> > page is encrypted.  If a page is disassociated from Secure-EPT and added again,
> > the page content is lost.
> > 
> > * TDG.VP.VMCALL<MapGPA> hypercall
> > The page associated with GPA can be private or shared.  TD converts the GPA by
> > TDG.VP.VMCALL<MapGPA> hypercall from private to shared or vice versa.  VMM
> > tracks whether the given GPA is private or shared.
> > 
> > * mapping merge(promote)/split(demote)
> > The page can be mapped as large page (2MB or 1GB) in addition to 4KB.  The
> > mapping can be merged(4KB/2MB -> 2MB/1GB) or split(2MB/1GB -> 4KB/2MB) by TDX
> > SEAMCALL TDH.MEM.PAGE.PROMOTE and TDH.MEM.PAGE.DEMOTE.
> > The merge of mapping requires that all the pages are already mapped, unlike VMX
> > EPT, because of encryption.  This implies the current KVM implementation doesn't
> > work for TDX when merging mappings, as follows:
> > 
> > - EPT violation and host page is 2MB mappable.
> >   some of the 4KB pages of the given 2MB page are already mapped, some not.
> >   i.e. 2MB EPT -> 4KB EPT -> 4K pages
> > - KVM page fault handler zap 2MB EPT entry and populate 2MB EPT entry
> >   zap: 2MB EPT: non present
> >   populate 2MB: -> 2MB page
> > 
> > If VMM zaps 2MB Secure-EPT entry, the page contents will be lost for TDX.
> > Mapping merge requires all pages are already mapped.
> > 
> > Instead, the following steps are needed.
> > - EPT violation and host page is 2MB mappable.
> >   some of the 4KB pages of the given 2MB page are already mapped.  Some not.
> >   i.e. 2MB EPT -> 4KB EPT -> 4K pages
> > - VMM checks all 4KB GPAs are private. If not, it can't be mapped as a large page.
> >   (****)
> > - VMM checks all 4KB GPAs are already mapped.  If not, give up mapping merge.
> >   (or map missing 4KB pages.)
> > - mapping merge by TDH.MEM.PAGE.PROMOTE
> > 
> > The mapping split for TDX Secure-EPT works similarly to the VMX EPT case.
> > 
> > 
> > EPT violation and MapGPA
> > ------------------------
> > - EPT violation is a fast path
> > - MapGPA is not a fast path.
> > => Keep the EPT violation path optimized and complicate the MapGPA path.  For the
> > (****) check, we don't want to scan the 4KB mappings on EPT violation.  Instead,
> > the MapGPA path scans them and records whether the page can be mapped as 2MB
> > with regard to private/shared.
> 
> This sounds reasonable, Instead of tracking that in MapGPA,  maybe
> KVM_MEMORY_ENCRYPT_{UN,}REG_REGION introduced in UPM v7 is a better
> place to put the scan code in.
> 
>   https://lkml.org/lkml/2022/7/6/259
> 
> Both the MapGPA (explicit conversion) and the EPT violation (implicit
> conversion) can cause invocation to these two ioctls and need update to
> this info.
> 
> > 
> > 
> > Tracking private/shared and large page mappable
> > -----------------------------------------------
> > VMM needs to track whether a page is mapped as private or shared at 4KB granularity.
> > For efficiency of the EPT violation path (****), at 2MB and 1GB level, VMM should
> > track whether the page can be mapped as a large page (regarding private/shared).  VMM
> > updates it on MapGPA and references it on the EPT violation path. (****)
> > 
> > For 4KB pages, 1 bit is needed: private or shared.  Let's call it the
> > shared-mask bit.  For 2MB/1GB pages, 2 bits are needed: large-page mappable
> > or not, and private or shared if mappable.  Let's call the former the
> > no-largepage bit.
> 
> > I'm just thinking maybe we don't need to introduce new bits; instead we
> reuse lpage_info where we already use it to track whether a page can be
> mapped at specified page level in kvm_mmu_max_mapping_level(). Then in
> the above two ioctls we do a scan for each level and update lpage_info.
> For example, we should disallow_lpage if private/shared pages are mixed
> in that page level.
> 
> It's however a bit tricky to manage lpage_info.disallow_lpage in these
> two ioctls with current code. We can't simply do disallow_lpage++ and
> > disallow_lpage--. One possible solution is to treat disallow_lpage as a
> mask instead of a count. Then we define bits like below for use:
>   - USER_GFN_UNALIGNED set when memslot user_address/private_offset/gfn
>     is not aligned on the page level
>   - PAGE_TRACKING set during page tracking
> >   - PRIVATE_SHARED_MIXED set when private/shared pages are mixed
> 
> > In the page fault handler the page can be mapped at that level only when all
> > bits are zero, and in the above two ioctls we just switch on/off the
> > PRIVATE_SHARED_MIXED bit.

So steal 1 or 2 bits from kvm_lpage_info.disallow_lpage instead of adding one more
array in struct kvm_arch_memory_slot.  Nice idea.  Let's call it option A.1).
With option A), we increment/decrement disallow_lpage; with option A.1), it is
handled automatically.

pros:
+SPTE_SHARED_MASK is not needed
cons:
-one more look-up on EPT violation
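
A minimal sketch of option A.1), assuming kvm_lpage_info::disallow_lpage keeps
its existing count in the low bits and one high bit is stolen for the mixed
state (the flag name is invented here):

  #define KVM_LPAGE_MIXED_FLAG    BIT(31)

  static void linfo_set_mixed(struct kvm_lpage_info *linfo, bool mixed)
  {
          if (mixed)
                  linfo->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
          else
                  linfo->disallow_lpage &= ~KVM_LPAGE_MIXED_FLAG;
  }

  static bool linfo_lpage_disallowed(struct kvm_lpage_info *linfo)
  {
          /* non-zero if either the existing count or the mixed flag is set */
          return !!linfo->disallow_lpage;
  }

The existing checks that treat a non-zero disallow_lpage as "no large page"
then keep working unchanged; MapGPA (or the UPM attribute-update path) only
flips the flag.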


> Currently UPM doesn't have this code yet, but it can be added if feasible.

Anyway let me integrate UPM v7.

Thanks,


> Chao
> > 
> > Option A.)
> >   Allocate array for pages in struct kvm_arch_memory_slot on TD creation.
> >   struct kvm_arch_memory_slot {
> >     +struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
> >   }
> > 
> >   pros:
> >   +straightforward implementation
> >   +SPTE_SHARED_MASK is not needed
> >   cons:
> >   -memory overhead is high
> >   -not optimized for expected usage
> >   -one more look-up on EPT violation
> > 
> > Option B.) Steal two software usable bits from SPTE and record them in SPTE.
> >            SPTE_SHARED_MASK, SPTE_NOLARGE_PAGE_MASK
> >   pros:
> >   +optimized for EPT violation
> >   cons:
> >   -2bits used in SPTE entry
> >   -complicates the MapGPA path.
> > 
> > Option C.) Steal one software usable bit from SPTE and record it in SPTE.
> >            SPTE_SHARED_MASK
> >            For 2MB/1GB, allocate bitmap in kvm_mmu_page.
> >            struct kvm_mmu_page {
> >              bitmap nolarge
> >            }
> >   pros:
> >   +optimized for EPT violation
> >   cons:
> >   -complicates the MapGPA path.
> >   -information is scattered in SPTE and struct kvm_mmu_page
> > 
> > 
> > How to update those bits
> > ------------------------
> > - MapGPA
> >   - at 4KB level, set or clear shared-mask bit.
> >   - Scan 512 4KB bit, at 2MB level
> >     - set or clear shared-mask bit, clear no-largepage bit or
> >     - clear shared-mask bit, set no-largepage bit
> >     - increment/decrement lpageinfo to prevent/allow large page
> >   - similar for 1GB level
> >   Note: This logic might be a bit tricky.
> > 
> > - EPT violation
> >   - If 2MB large page is allowed, check if no-largepage bit
> >     - If no-largepage bit is set, => go down to 4KB page
> >     - If no-largepage bit is cleared => try to map 2MB page
> >       - If 4KB level is not mapped, map 2MB page
> >       - If some 4KB level is already mapped, go down to 4KB.
> >         Don't try to merge mapping. Or it's possible to try to merge mapping.
> >   Note: 512 4KB entry scanning is not done at EPT violation because it's a fast
> >         path.
> > 
> > 
> > Map merging
> > -----------
> > Map merging is necessary for TD migration. (Map split is the easy part.)  The
> > current KVM implementation zaps the range (mmu notification or lpage recovery
> > worker) and expects large page mapping on the next EPT violation.
> > 
> > Option A.) Keep the code similar to map merging logic.
> > Zap the 2MB EPT entry in some sense and trigger map merging logic on the next EPT
> > violation.  To keep encrypted page contents, zapped EPT entries need to keep
> > the page.  Steal one more bit from SPTE: SPTE_PRIVATE_BLOCKED_MASK.
> > It means that the page is zapped from the SPTE, but it is still alive and
> > references the page.
> > 
> > Option B.) In the callback, directly merge mapping somehow.  In this case, mmu
> > notifier usage doesn't make sense.
> > 
> > NOTE:
> > - Implement map merging in MapGPA. This doesn't work for dirty page logging.
> > - We can utilize kvm_nx_lpage_recovery_worker
> > - We can utilize THP. Probably doesn't work well for fd-based private memory.
> > 
> > Thanks,
> > Isaku Yamahata
> > 
> > On Mon, Jun 27, 2022 at 02:52:52PM -0700,
> > isaku.yamahata@intel.com wrote:
> > 
> > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > 
> > > KVM TDX basic feature support
> > > 
> > > Hello.  This is v7 the patch series vof KVM TDX support.
> > > This is based on v5.19-rc1 + kvm/queue branch + TDX HOST patch series.
> > > The tree can be found at https://github.com/intel/tdx/tree/kvm-upstream
> > > How to run/test: It's describe at https://github.com/intel/tdx/wiki/TDX-KVM
> > > 
> > > Major changes from v6:
> > > - rebased to v5.19 base
> > > 
> > > TODO:
> > > - integrate fd-based guest memory. As the discussion is still on-going, I
> > >   intentionally dropped fd-based guest memory support yet.  The integration can
> > >   be found at https://github.com/intel/tdx/tree/kvm-upstream-workaround.
> > > - 2M large page support. It's work-in-progress.
> > > For large page support, there are several design choices. Here is the design options.
> > > Any thoughts/feedback?
> > > 
> > > KVM MMU Large page support for TDX
> > > 
> > > * What needs to be done
> > > - Track private or shared of each page size (4KB, 2MB, 1GB) based on
> > >   TDG.VP.VMCALL<MapGPA>.  For large pages(2MB, 1GB), it can be mixed (some
> > >   lower-size pages are private and some shared.)  In this case, the page can't
> > >   be large.
> > > - if necessary, split large page on TDG.VP.VMCALL<MapGPA>
> > >   (split on dirty page tracking is future work)
> > > - resolving KVM page fault
> > >   When resolving a private page and the page is large in the host, GPA can be
> > >   resolved as a large page in Secure-EPT.  Even if the page is large on the host
> > >   side, sometimes a 4KB page can be resolved because it's up to guest TD to
> > >   accept at 4KB, 2MB, or 1GB.
> > > - collapsing pages into a large page.
> > >   At this point, it's okay to not implement this.  When dirty page tracking is
> > >   supported, this needs to be supported.
> > >   - On MapGPA, the page can be collapsed into a large page
> > >   - handle zapping SPTE and try to collapse the pages on the next KVM page fault
> > >     Unlike the EPT case, some trick is needed.
> > > - For performance, optimize KVM page fault path at the cost of complicating
> > >   MapGPA path.
> > > 
> > > * options to track private or shared
> > > At each page size (4KB, 2MB, and 1GB), track private, shared, or mixed (2MB and
> > > 1GB case). For 4KB each page, 1 bit per page is needed. private or shared.  For
> > > large pages (2MB and 1GB), 2 bits per large page is needed. (private, shared, or
> > > mixed).  When resolving KVM page fault, we don't want to check the lower-size
> > > pages to check if the given GPA can be a large for performance.  On MapGPA check
> > > it instead.
> > > 
> > > Option A). enhance kvm_arch_memory_slot
> > >   enum kvm_page_type {
> > >        KVM_PAGE_TYPE_INVALID,
> > >        KVM_PAGE_TYPE_SHARED,
> > >        KVM_PAGE_TYPE_PRIVATE,
> > >        KVM_PAGE_TYPE_MIXED,
> > >   };
> > > 
> > >   struct kvm_page_attr {
> > >        enum kvm_page_type type;
> > >   };
> > > 
> > >  struct kvm_arch_memory_slot {
> > >  +      struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
> > > 
> > > Option B). steal one more bit SPTE_MIXED_MASK in addition to SPTE_SHARED_MASK
> > > If !SPTE_MIXED_MASK, it can be large page.
> > > 
> > > Option C). use SPTE_SHARED_MASK and kvm_mmu_page::mixed bitmap
> > > kvm_mmu_page::mixed bitmap of 1GB, root indicates mixed for 2MB, 1GB.
> > > 
> > > 
> > > * comparison
> > > A).
> > > + straightforward to implement
> > > + SPTE_SHARED_MASK isn't needed
> > > - memory overhead compared to B). or C).
> > > - more memory reference on KVM page fault
> > > 
> > > B).
> > > + simpler than C) (complex than A)?)
> > > + efficient on KVM page fault. (only SPTE reference)
> > > + low memory overhead
> > > - Waste precious SPTE bits.
> > > 
> > > C).
> > > + efficient on KVM page fault. (only SPTE reference)
> > > + low memory overhead
> > > - complicates MapGPA
> > > - scattered data structure
> > > 
> > > Thanks,
> > > Isaku Yamahata
> > > 
> > > Changes from v6:
> > > - rebased to v5.19
> > > 
> > > Changes from v5:
> > > - export __seamcall and use it
> > > - move mutex lock from callee function of smp_call_on_cpu to the caller.
> > > - rename mmu_prezap => flush_shadow_all_private() and tdx_mmu_release_hkid
> > > - updated comment
> > > - drop the use of tdh_mng_key.reclaimid(): as the function is for backward
> > >   compatibility to only return success
> > > - struct kvm_tdx_cmd: metadata => flags, added __u64 error.
> > > - make this ioctl systemwide ioctl
> > > - ABI change to struct kvm_init_vm
> > > - guest_tsc_khz: use kvm->arch.default_tsc_khz
> > > - rename BUILD_BUG_ON_MEMCPY to MEMCPY_SAME_SIZE
> > > - drop exporting kvm_set_tsc_khz().
> > > - fix kvm_tdp_page_fault() for mtrr emulation
> > > - rename it to kvm_gfn_shared_mask(), dropped kvm_gpa_shared_mask()
> > > - drop kvm_is_private_gfn(), kept kvm_is_private_gpa()
> > >   keep kvm_{gfn, gpa}_private(), kvm_gpa_private()
> > > - update commit message
> > > - rename shadow_init_value => shadow_nonpresent_value
> > > - added ept_violation_ve_test mode
> > > - shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in tdp_mmu.c
> > > - legacy MMU case
> > >   => - mmu_topup_shadow_page_cache(), kvm_mmu_create()
> > >      - FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> > > - #VE warning:
> > > - rename: REMOVED_SPTE => __REMOVED_SPTE, SHADOW_REMOVED_SPTE => REMOVED_SPTE
> > > - merge into Like we discussed, this patch should be merged with patch
> > >   "KVM: x86/mmu: Allow non-zero init value for shadow PTE".
> > > - fix pointed by Sagi. check !is_private check => (kvm_gfn_shared_mask && !is_private)
> > > - introduce kvm_gfn_for_root(kvm, root, gfn)
> > > - add only_shared argument to kvm_tdp_mmu_handle_gfn()
> > > - use kvm_arch_dirty_log_supported()
> > > - rename SPTE_PRIVATE_PROHIBIT to SPTE_SHARED_MASK.
> > > - rename: is_private_prohibit_spte() => spte_shared_mask()
> > > - fix: shadow_nonpresent_value => SHADOW_NONPRESENT_VALUE in comment
> > > - dropped this patch as the change was merged into kvm/queue
> > > - update vt_apicv_post_state_restore()
> > > - use is_64_bit_hypercall()
> > > - comment: expand MSMI -> Machine Check System Management Interrupt
> > > - fixed TDX_SEPT_PFERR
> > > - tdvmcall_p[1234]_{write, read}() => tdvmcall_a[0123]_{read,write}()
> > > - rename tdmvcall_exit_readon() => tdvmcall_leaf()
> > > - remove optional zero check of argument.
> > > - do a check for static_call(kvm_x86_has_emulated_msr)(kvm, MSR_IA32_SMBASE)
> > >    in kvm_vcpu_ioctl_smi and __apic_accept_irq.
> > > - WARN_ON_ONCE in tdx_smi_allowed and tdx_enable_smi_window.
> > > - introduce vcpu_deliver_init to x86_ops
> > > - sprinkled KVM_BUG_ON()
> > > 
> > > Changes from v4:
> > > - rebased to TDX host kernel patch series.
> > > - include all the patches to make this patch series working.
> > > - add [MARKER] patches to mark the patch layer clear.
> > > 
> > > ---
> > > * What's TDX?
> > > TDX stands for Trust Domain Extensions, which extends Intel Virtual Machines
> > > Extensions (VMX) to introduce a kind of virtual machine guest called a Trust
> > > Domain (TD) for confidential computing.
> > > 
> > > A TD runs in a CPU mode that is designed to protect the confidentiality of its
> > > memory contents and its CPU state from any other software, including the hosting
> > > Virtual Machine Monitor (VMM), unless explicitly shared by the TD itself.
> > > 
> > > We have more detailed explanations below (***).
> > > We have the high-level design of TDX KVM below (****).
> > > 
> > > In this patch series, we use "TD" or "guest TD" to differentiate it from the
> > > current "VM" (Virtual Machine), which is supported by KVM today.
> > > 
> > > 
> > > * The organization of this patch series
> > > This patch series is on top of the patches series "TDX host kernel support":
> > > https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/
> > > 
> > > this patch series is available at
> > > https://github.com/intel/tdx/releases/tag/kvm-upstream
> > > The corresponding patches to qemu are available at
> > > https://github.com/intel/qemu-tdx/commits/tdx-upstream
> > > 
> > > The relations of the layers are depicted as follows.
> > > The arrows below show the order of patch reviews we would like to have.
> > > 
> > > The below layers are chosen so that the device model, for example, qemu can
> > > exercise each layering step by step.  Check if TDX is supported, create TD VM,
> > > create TD vcpu, allow vcpu running, populate TD guest private memory, and handle
> > > vcpu exits/hypercalls/interrupts to run TD fully.
> > > 
> > >   TDX vcpu
> > >   interrupt/exits/hypercall<------------\
> > >         ^                               |
> > >         |                               |
> > >   TD finalization                       |
> > >         ^                               |
> > >         |                               |
> > >   TDX EPT violation<------------\       |
> > >         ^                       |       |
> > >         |                       |       |
> > >   TD vcpu enter/exit            |       |
> > >         ^                       |       |
> > >         |                       |       |
> > >   TD vcpu creation/destruction  |       \-------KVM TDP MMU MapGPA
> > >         ^                       |                       ^
> > >         |                       |                       |
> > >   TD VM creation/destruction    \---------------KVM TDP MMU hooks
> > >         ^                                               ^
> > >         |                                               |
> > >   TDX architectural definitions                 KVM TDP refactoring for TDX
> > >         ^                                               ^
> > >         |                                               |
> > >    TDX, VMX    <--------TDX host kernel         KVM MMU GPA stolen bits
> > >    coexistence          support
> > > 
> > > 
> > > The followings are explanations of each layer.  Each layer has a dummy commit
> > > that starts with [MARKER] in subject.  It is intended to help to identify where
> > > each layer starts.
> > > 
> > > TDX host kernel support:
> > >         https://lore.kernel.org/lkml/cover.1646007267.git.kai.huang@intel.com/
> > >         The guts of system-wide initialization of TDX module.  There is an
> > >         independent patch series for host x86.  TDX KVM patches call functions
> > >         this patch series provides to initialize the TDX module.
> > > 
> > > TDX, VMX coexistence:
> > >         Infrastructure to allow TDX to coexist with VMX and trigger the
> > >         initialization of the TDX module.
> > >         This layer starts with
> > >         "KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX"
> > > TDX architectural definitions:
> > >         Add TDX architectural definitions and helper functions
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: TDX architectural definitions".
> > > TD VM creation/destruction:
> > >         Guest TD creation/destroy: allocation and releasing of TDX specific vm
> > >         and vcpu structures.  Create an initial guest memory image with TDX
> > >         measurement.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: TD VM creation/destruction".
> > > TD vcpu creation/destruction:
> > >         Guest TD vcpu creation/destroy: allocation and releasing of the TDX
> > >         specific vcpu structure.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction"
> > > TDX EPT violation:
> > >         Create an initial guest memory image with TDX measurement.  Handle
> > >         secure EPT violations to populate guest pages with TDX SEAMCALLs.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: TDX EPT violation"
> > > TD vcpu enter/exit:
> > >         Allow TDX vcpu to enter into TD and exit from TD.  Save CPU state before
> > >         entering into TD.  Restore CPU state after exiting from TD.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: TD vcpu enter/exit"
> > > TD vcpu interrupts/exit/hypercall:
> > >         Handle various exits/hypercalls and allow interrupts to be injected so
> > >         that TD vcpu can continue running.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls"
> > > 
> > > KVM MMU GPA shared bit:
> > >         Introduce a framework to handle the repurposed shared bit of GPA.  TDX
> > >         repurposed a bit of GPA to indicate shared or private. If it's shared,
> > >         it's the same as the conventional VMX EPT case.  VMM can access shared
> > >         guest pages.  If it's private, it's handled by Secure-EPT and the guest
> > >         page is encrypted.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: KVM MMU GPA stolen bits"
> > > KVM TDP refactoring for TDX:
> > >         TDX Secure EPT requires different constants. e.g. initial value EPT
> > >         entry value etc. Various refactoring for those differences.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX"
> > > KVM TDP MMU hooks:
> > >         Introduce a framework for the TDP MMU to add hooks in addition to direct
> > >         EPT access.  TDX added Secure EPT, which is an enhancement to VMX EPT.  Unlike
> > >         conventional VMX EPT, CPU can't directly read/write Secure EPT. Instead,
> > >         use TDX SEAMCALLs to operate on Secure EPT.
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks"
> > > KVM TDP MMU MapGPA:
> > >         Introduce framework to handle switching guest pages from private/shared
> > >         to shared/private.  For a given GPA, a guest page can be assigned to a
> > >         private GPA or a shared GPA exclusively.  With TDX MapGPA hypercall,
> > >         guest TD converts GPA assignments from private (or shared) to shared (or
> > >         private).
> > >         This layer starts with
> > >         "[MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA "
> > > 
> > > KVM guest private memory: (not shown in the above diagram)
> > > [PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private
> > > memory: https://lkml.org/lkml/2022/1/18/395
> > >         Guest private memory requires different memory management in KVM.  The
> > >         patch proposes a way for it.  Integration with TDX KVM.
> > > 
> > > (***)
> > > * TDX module
> > > A CPU-attested software module called the "TDX module" is designed to implement
> > > the TDX architecture, and it is loaded by the UEFI firmware today. It can be
> > > loaded by the kernel or driver at runtime, but in this patch series we assume
> > > that the TDX module is already loaded and initialized.
> > > 
> > > The TDX module provides two main new logical modes of operation built upon the
> > > new SEAM (Secure Arbitration Mode) root and non-root CPU modes added to the VMX
> > > architecture. TDX root mode is mostly identical to the VMX root operation mode,
> > > and the TDX functions (described later) are triggered by the new SEAMCALL
> > > instruction with the desired interface function selected by an input operand
> > > (leaf number, in RAX). TDX non-root mode is used for TD guest operation.  TDX
> > > non-root operation (i.e. "guest TD" mode) is similar to the VMX non-root
> > > operation (i.e. guest VM), with changes and restrictions to better assure that
> > > no other software or hardware has direct visibility of the TD memory and state.
> > > 
> > > TDX transitions between TDX root operation and TDX non-root operation include TD
> > > Entries, from TDX root to TDX non-root mode, and TD Exits from TDX non-root to
> > > TDX root mode.  A TD Exit might be asynchronous, triggered by some external
> > > event (e.g., external interrupt or SMI) or an exception, or it might be
> > > synchronous, triggered by a TDCALL (TDG.VP.VMCALL) function.
> > > 
> > > TD VCPUs can be entered using SEAMCALL(TDH.VP.ENTER) by KVM. TDH.VP.ENTER is one
> > > of the TDX interface functions as mentioned above, and "TDH" stands for Trust
> > > Domain Host. Those host-side TDX interface functions are categorized into
> > > various areas just for better organization, such as SYS (TDX module management),
> > > MNG (TD management), VP (VCPU), PHYSMEM (physical memory), MEM (private memory),
> > > etc. For example, SEAMCALL(TDH.SYS.INFO) returns the TDX module information.
> > > 
> > > TDCS (Trust Domain Control Structure) is the main control structure of a guest
> > > TD, and encrypted (using the guest TD's ephemeral private key).  At a high
> > > level, TDCS holds information for controlling TD operation as a whole,
> > > execution, EPTP, MSR bitmaps, etc that KVM needs to set it up.  Note that MSR
> > > bitmaps are held as part of TDCS (unlike VMX) because they are meant to have the
> > > same value for all VCPUs of the same TD.
> > > 
> > > Trust Domain Virtual Processor State (TDVPS) is the root control structure of a
> > > TD VCPU.  It helps the TDX module control the operation of the VCPU, and holds
> > > the VCPU state while the VCPU is not running. TDVPS is opaque to software and
> > > DMA access, accessible only by using the TDX module interface functions (such as
> > > TDH.VP.RD, TDH.VP.WR). TDVPS includes TD VMCS, and TD VMCS auxiliary structures,
> > > such as virtual APIC page, virtualization exception information, etc.
> > > 
> > > Several VMX control structures (such as Shared EPT and Posted interrupt
> > > descriptor) are directly managed and accessed by the host VMM.  These control
> > > structures are pointed to by fields in the TD VMCS.
> > > 
> > > The above means that 1) KVM needs to allocate different data structures for TDs,
> > > 2) KVM can reuse the existing code for TDs for some operations, 3) it needs to
> > > define TD-specific handling for others by redirecting operations to the
> > > TDX specific callbacks, like "if (is_td_vcpu(vcpu))
> > > tdx_callback() else vmx_callback();".
> > > 
> > > *TD Private Memory
> > > TD private memory is designed to hold TD private content, encrypted by the CPU
> > > using the TD ephemeral key. An encryption engine holds a table of encryption
> > > keys, and an encryption key is selected for each memory transaction based on a
> > > Host Key Identifier (HKID). By design, the host VMM does not have access to the
> > > encryption keys.
> > > 
> > > In the first generation of MKTME, HKID is "stolen" from the physical address by
> > > allocating a configurable number of bits from the top of the physical
> > > address. The HKID space is partitioned into shared HKIDs for legacy MKTME
> > > accesses and private HKIDs for SEAM-mode-only accesses. We use 0 for the shared
> > > HKID on the host so that MKTME can be opaque or bypassed on the host.
> > > 
> > > During TDX non-root operation (i.e. guest TD), memory accesses can be qualified
> > > as either shared or private, based on the value of a new SHARED bit in the Guest
> > > Physical Address (GPA).  The CPU translates shared GPAs using the usual VMX EPT
> > > (Extended Page Table) or "Shared EPT" (in this document), which resides in host
> > > VMM memory. The Shared EPT is directly managed by the host VMM - the same as
> > > with the current VMX. Since guest TDs usually require I/O, and the data exchange
> > > needs to be done via shared memory, thus KVM needs to use the current EPT
> > > functionality even for TDs.
> > > 
> > > * Secure EPT and Mirroring using the TDP code
> > > The CPU translates private GPAs using a separate Secure EPT.  The Secure EPT
> > > pages are encrypted and integrity-protected with the TD's ephemeral private
> > > key.  Secure EPT can be managed _indirectly_ by the host VMM, using the TDX
> > > interface functions, and thus conceptually Secure EPT is a subset of EPT (why
> > > "subset"). Since execution of such interface functions takes much longer time
> > > than accessing memory directly, in KVM we use the existing TDP code to mirror the
> > > Secure EPT for the TD.
> > > 
> > > This way, we can effectively walk Secure EPT without using the TDX interface
> > > functions.
> > > 
> > > * VM life cycle and TDX specific operations
> > > The userspace VMM, such as QEMU, needs to build and treat TDs differently.  For
> > > example, a TD needs to boot in private memory, and the host software cannot copy
> > > the initial image to private memory.
> > > 
> > > * TSC Virtualization
> > > The TDX module helps TDs maintain reliable TSC (Time Stamp Counter) values
> > > (e.g. consistent among the TD VCPUs) and the virtual TSC frequency is determined
> > > by TD configuration, i.e. when the TD is created, not per VCPU.  The current KVM
> > > owns TSC virtualization for VMs, but the TDX module does for TDs.
> > > 
> > > * MCE support for TDs
> > > The TDX module doesn't allow VMM to inject MCE.  Instead, a PV way is needed for
> > > TD to communicate with VMM.  For now, KVM silently ignores MCE requests by VMM.
> > > MSRs related to MCE (e.g., MCE bank registers) can be naturally emulated by
> > > paravirtualizing MSR access.
> > > 
> > > [1] For details, the specifications, [2], [3], [4], [5], [6], [7], are
> > > available.
> > > 
> > > * Restrictions or future work
> > > Some features are not included to reduce patch size.  Those features are
> > > addressed as future independent patch series.
> > > - large page (2M, 1G)
> > > - qemu gdb stub
> > > - guest PMU
> > > - and more
> > > 
> > > * Prerequisites
> > > It's required to load the TDX module and initialize it.  It's out of the scope
> > > of this patch series.  Another independent patch for the common x86 code is
> > > planned.  It defines CONFIG_INTEL_TDX_HOST and this patch series uses
> > > CONFIG_INTEL_TDX_HOST.  It's assumed that With CONFIG_INTEL_TDX_HOST=y, the TDX
> > > module is initialized and ready for KVM to use the TDX module APIs for TDX guest
> > > life cycle like tdh.mng.init are ready to use.
> > > 
> > > Concretely, global initialization, LP (Logical Processor) initialization, global
> > > configuration, the key configuration, and TDMR and PAMT initialization are done.
> > > The state of the TDX module is SYS_READY.  Please refer to the TDX module
> > > specification, the chapter Intel TDX Module Lifecycle State Machine
> > > 
> > > ** Detecting the TDX module readiness.
> > > TDX host patch series implements the detection of the TDX module availability
> > > and its initialization so that KVM can use it.  Also it manages Host KeyID
> > > (HKID) assigned to guest TD.
> > > The assumed APIs the TDX host patch series provides are
> > > - int seamrr_enabled()
> > >   Check if the required CPU feature (SEAM mode) is available. This only checks CPU
> > >   feature availability.  At this point, the TDX module may not be ready for KVM
> > >   to use.
> > > - int init_tdx(void);
> > >   Initialization of TDX module so that the TDX module is ready for KVM to use.
> > > - const struct tdsysinfo_struct *tdx_get_sysinfo(void);
> > >   Return the system wide information about the TDX module.  NULL if the TDX
> > >   isn't initialized.
> > > - u32 tdx_get_global_keyid(void);
> > >   Return global key id that is used for the TDX module itself.
> > > - int tdx_keyid_alloc(void);
> > >   Allocate HKID for guest TD.
> > > - void tdx_keyid_free(int keyid);
> > >   Free HKID for guest TD.
> > > 
> > > (****)
> > > * TDX KVM high-level design
> > > - Host key ID management
> > > Host Key ID (HKID) needs to be assigned to each TDX guest for memory encryption.
> > > It is assumed that the TDX host patch series implements the necessary functions:
> > > u32 tdx_get_global_keyid(void), int tdx_keyid_alloc(void) and,
> > > void tdx_keyid_free(int keyid).
> > > 
> > > - Data structures and VM type
> > > Because TDX is different from VMX, define its own VM/VCPU structures, struct
> > > kvm_tdx and struct vcpu_tdx instead of struct kvm_vmx and struct vcpu_vmx.  To
> > > identify the VM, introduce VM-type to specify which VM type, VMX (default) or
> > > TDX, is used.
> > > 
> > > - VM life cycle and TDX specific operations
> > > Re-purpose the existing KVM_MEMORY_ENCRYPT_OP to add TDX specific operations.
> > > New commands are used to get the TDX system parameters, set TDX specific VM/VCPU
> > > parameters, set initial guest memory and measurement.
> > > 
> > > The creation of TDX VM requires five additional operations in addition to the
> > > conventional VM creation.
> > >   - Get KVM system capability to check if TDX VM type is supported
> > >   - VM creation (KVM_CREATE_VM)
> > >   - New: Get the TDX specific system parameters.  KVM_TDX_GET_CAPABILITY.
> > >   - New: Set TDX specific VM parameters.  KVM_TDX_INIT_VM.
> > >   - VCPU creation (KVM_CREATE_VCPU)
> > >   - New: Set TDX specific VCPU parameters.  KVM_TDX_INIT_VCPU.
> > >   - New: Initialize guest memory as boot state and extend the measurement with
> > >     the memory.  KVM_TDX_INIT_MEM_REGION.
> > >   - New: Finalize VM. KVM_TDX_FINALIZE. Complete measurement of the initial
> > >     TDX VM contents.
> > >   - VCPU RUN (KVM_VCPU_RUN)
> > > 
> > > - Protected guest state
> > > Because the guest state (CPU state and guest memory) is protected, the KVM VMM
> > > can't operate on it: for example, accessing CPU registers, injecting
> > > exceptions, and accessing guest memory.  Those operations are
> > > silently ignored, returning zero or the initial reset value when requested via
> > > KVM API ioctls.
> > > 
> > >     VM/VCPU state and callbacks for TDX specific operations.
> > >     Define tdx specific VM state and VCPU state instead of VMX ones.  Redirect
> > >     operations to TDX specific callbacks.  "if (tdx) tdx_op() else vmx_op()".
> > > 
> > >     Operations on the CPU state
> > >     silently ignore operations on the guest state.  For example, the write to
> > >     CPU registers is ignored and the read from CPU registers returns 0.
> > > 
> > >     . ignore access to CPU registers except for allowed ones.
> > >     . TSC: add a check if tsc is immutable and return an error.  Because the KVM
> > >       implementation updates the internal tsc state and it's difficult to back
> > >       out those changes.  Instead, skip the logic.
> > >     . dirty logging: add check if dirty logging is supported.
> > >     . exceptions/SMI/MCE/SIPI/INIT: silently ignore
> > > 
> > >     Note: virtual external interrupt and NMI can be injected into TDX guests.
> > > 
> > > - KVM MMU integration
> > > One bit of the guest physical address (bit 51 or 47) is repurposed to indicate if
> > > the guest physical address is private (the bit is cleared) or shared (the bit is
> > > set).  The bits are called stolen bits.
> > > 
> > >   - Stolen bits framework
> > >     systematically tracks which guest physical address, shared or private, is
> > >     used.
> > > 
> > >   - Shared EPT and secure EPT
> > >     There are two EPTs. Shared EPT (the conventional one) and Secure
> > >     EPT(the new one). Shared EPT is handled the same for the stolen
> > >     bit set.  Secure EPT points to private guest pages.  To resolve
> > >     EPT violation, KVM walks one of two EPTs based on faulted GPA.
> > >     Because it's costly to access secure EPT during walking EPTs with
> > >     SEAMCALLs for the private guest physical address, another private
> > >     EPT is used as a shadow of Secure-EPT with the existing logic at
> > >     the cost of extra memory.
> > > 
> > > The following depicts the relationship.
> > > 
> > >                     KVM                             |       TDX module
> > >                      |                              |           |
> > >         -------------+----------                    |           |
> > >         |                      |                    |           |
> > >         V                      V                    |           |
> > >      shared GPA           private GPA               |           |
> > >   CPU shared EPT pointer  KVM private EPT pointer   |  CPU secure EPT pointer
> > >         |                      |                    |           |
> > >         |                      |                    |           |
> > >         V                      V                    |           V
> > >   shared EPT                private EPT--------mirror----->Secure EPT
> > >         |                      |                    |           |
> > >         |                      \--------------------+------\    |
> > >         |                                           |      |    |
> > >         V                                           |      V    V
> > >   shared guest page                                 |    private guest page
> > >                                                     |
> > >                                                     |
> > >                               non-encrypted memory  |    encrypted memory
> > >                                                     |
> > > 
> > >   - Operating on Secure EPT
> > >     Use the TDX module APIs to operate on Secure EPT.  To call the TDX API
> > >     during resolving EPT violation, add hooks to additional operation and wiring
> > >     it to TDX backend.
> > > 
> > > * References
> > > 
> > > [1] TDX specification
> > >    https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html
> > > [2] Intel Trust Domain Extensions (Intel TDX)
> > >    https://cdrdv2.intel.com/v1/dl/getContent/726790
> > > [3] Intel CPU Architectural Extensions Specification
> > >    https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-cpu-architectural-specification.pdf
> > > [4] Intel TDX Module 1.0 Specification
> > >    https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-module-1.0-public-spec-v0.931.pdf
> > > [5] Intel TDX Loader Interface Specification
> > >   https://www.intel.com/content/dam/develop/external/us/en/documents-tps/intel-tdx-seamldr-interface-specification.pdf
> > > [6] Intel TDX Guest-Hypervisor Communication Interface
> > >    https://cdrdv2.intel.com/v1/dl/getContent/726790
> > > [7] Intel TDX Virtual Firmware Design Guide
> > >    https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.01.pdf
> > > [8] intel public github
> > >    kvm TDX branch: https://github.com/intel/tdx/tree/kvm
> > >    TDX guest branch: https://github.com/intel/tdx/tree/guest
> > >    qemu TDX https://github.com/intel/qemu-tdx
> > > [9] TDVF
> > >     https://github.com/tianocore/edk2-staging/tree/TDVF
> > >     This was merged into EDK2 main branch. https://github.com/tianocore/edk2
> > > 
> > > Chao Gao (3):
> > >   KVM: x86: Move check_processor_compatibility from init ops to runtime
> > >     ops
> > >   Partially revert "KVM: Pass kvm_init()'s opaque param to additional
> > >     arch funcs"
> > >   KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o
> > >     wrmsr
> > > 
> > > Isaku Yamahata (72):
> > >   KVM: Refactor CPU compatibility check on module initialiization
> > >   x86/virt/vmx/tdx: export platform_tdx_enabled()
> > >   KVM: TDX: Detect CPU feature on kernel module initialization
> > >   KVM: x86: Refactor KVM VMX module init/exit functions
> > >   KVM: TDX: Add placeholders for TDX VM/vcpu structure
> > >   x86/virt/tdx: Add a helper function to return system wide info about
> > >     TDX module
> > >   KVM: TDX: Initialize TDX module when loading kvm_intel.ko
> > >   KVM: TDX: Make TDX VM type supported
> > >   [MARKER] The start of TDX KVM patch series: TDX architectural
> > >     definitions
> > >   KVM: TDX: Define TDX architectural definitions
> > >   KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module
> > >   KVM: TDX: Add helper functions to print TDX SEAMCALL error
> > >   [MARKER] The start of TDX KVM patch series: TD VM creation/destruction
> > >   x86/cpu: Add helper functions to allocate/free TDX private host key id
> > >   KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl
> > >   KVM: TDX: Make pmu_intel.c ignore guest TD case
> > >   [MARKER] The start of TDX KVM patch series: TD vcpu
> > >     creation/destruction
> > >   KVM: TDX: allocate/free TDX vcpu structure
> > >   KVM: TDX: allocate/free TDX vcpu structure
> > >   [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits
> > >   KVM: x86/mmu: introduce config for PRIVATE KVM MMU
> > >   [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for
> > >     TDX
> > >   KVM: x86/mmu: Disallow fast page fault on private GPA
> > >   KVM: VMX: Introduce test mode related to EPT violation VE
> > >   [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks
> > >   KVM: x86/mmu: Forcibly use TDP MMU for TDX
> > >   KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
> > >   KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map()
> > >   KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
> > >   [MARKER] The start of TDX KVM patch series: TDX EPT violation
> > >   KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs
> > >   KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
> > >   KVM: TDX: TDP MMU TDX support
> > >   [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA
> > >   KVM: x86/mmu: steal software usable bit to record if GFN is for shared
> > >     or not
> > >   KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX
> > >   [MARKER] The start of TDX KVM patch series: TD finalization
> > >   KVM: TDX: Create initial guest memory
> > >   KVM: TDX: Finalize VM initialization
> > >   [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit
> > >   KVM: TDX: Add helper assembly function to TDX vcpu
> > >   KVM: TDX: Implement TDX vcpu enter/exit path
> > >   KVM: TDX: vcpu_run: save/restore host state(host kernel gs)
> > >   KVM: TDX: restore host xsave state when exit from the guest TD
> > >   KVM: TDX: restore user ret MSRs
> > >   [MARKER] The start of TDX KVM patch series: TD vcpu
> > >     exits/interrupts/hypercalls
> > >   KVM: TDX: complete interrupts after tdexit
> > >   KVM: TDX: restore debug store when TD exit
> > >   KVM: TDX: handle vcpu migration over logical processor
> > >   KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched
> > >     behavior
> > >   KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c
> > >   KVM: TDX: Implement interrupt injection
> > >   KVM: TDX: Implements vcpu request_immediate_exit
> > >   KVM: TDX: Implement methods to inject NMI
> > >   KVM: TDX: Add a place holder to handle TDX VM exit
> > >   KVM: TDX: handle EXIT_REASON_OTHER_SMI
> > >   KVM: TDX: handle ept violation/misconfig exit
> > >   KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT
> > >   KVM: TDX: Add a place holder for handler of TDX hypercalls
> > >     (TDG.VP.VMCALL)
> > >   KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL
> > >   KVM: TDX: Handle TDX PV CPUID hypercall
> > >   KVM: TDX: Handle TDX PV HLT hypercall
> > >   KVM: TDX: Handle TDX PV port io hypercall
> > >   KVM: TDX: Implement callbacks for MSR operations for TDX
> > >   KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall
> > >   KVM: TDX: Handle TDX PV report fatal error hypercall
> > >   KVM: TDX: Handle TDX PV map_gpa hypercall
> > >   KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall
> > >   KVM: TDX: Silently discard SMI request
> > >   KVM: TDX: Silently ignore INIT/SIPI
> > >   Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX)
> > >   KVM: x86: design documentation on TDX support of x86 KVM TDP MMU
> > > 
> > > Rick Edgecombe (1):
> > >   KVM: x86/mmu: Add address conversion functions for TDX shared bits
> > > 
> > > Sean Christopherson (25):
> > >   KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
> > >   KVM: Enable hardware before doing arch VM initialization
> > >   KVM: x86: Introduce vm_type to differentiate default VMs from
> > >     confidential VMs
> > >   KVM: TDX: Add TDX "architectural" error codes
> > >   KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers
> > >   KVM: TDX: create/destroy VM structure
> > >   KVM: TDX: x86: Add ioctl to get TDX systemwide parameters
> > >   KVM: TDX: Do TDX specific vcpu initialization
> > >   KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
> > >   KVM: x86/mmu: Allow non-zero value for non-present SPTE
> > >   KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
> > >   KVM: x86/mmu: Allow per-VM override of the TDP max page level
> > >   KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for
> > >     private mmu
> > >   KVM: x86/mmu: Disallow dirty logging for x86 TDX
> > >   KVM: VMX: Split out guts of EPT violation to common/exposed function
> > >   KVM: VMX: Move setting of EPT MMU masks to common VT-x code
> > >   KVM: TDX: Add load_mmu_pgd method for TDX
> > >   KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX
> > >   KVM: TDX: Add support for find pending IRQ in a protected local APIC
> > >   KVM: x86: Assume timer IRQ was injected if APIC state is protected
> > >   KVM: VMX: Modify NMI and INTR handlers to take intr_info as function
> > >     argument
> > >   KVM: VMX: Move NMI/exception handler to common helper
> > >   KVM: x86: Split core of hypercall emulation to helper function
> > >   KVM: TDX: Handle TDX PV MMIO hypercall
> > >   KVM: TDX: Add methods to ignore accesses to CPU state
> > > 
> > > Xiaoyao Li (1):
> > >   KVM: TDX: initialize VM with TDX specific parameters
> > > 
> > >  Documentation/virt/kvm/api.rst                |   30 +-
> > >  .../virt/kvm/intel-tdx-layer-status.rst       |   33 +
> > >  Documentation/virt/kvm/intel-tdx.rst          |  381 +++
> > >  Documentation/virt/kvm/tdx-tdp-mmu.rst        |  466 ++++
> > >  arch/arm64/kvm/arm.c                          |    2 +-
> > >  arch/mips/kvm/mips.c                          |   14 +-
> > >  arch/powerpc/kvm/powerpc.c                    |    2 +-
> > >  arch/riscv/kvm/main.c                         |    2 +-
> > >  arch/s390/kvm/kvm-s390.c                      |    2 +-
> > >  arch/x86/events/intel/ds.c                    |    1 +
> > >  arch/x86/include/asm/kvm-x86-ops.h            |   10 +
> > >  arch/x86/include/asm/kvm_host.h               |   56 +-
> > >  arch/x86/include/asm/tdx.h                    |   67 +
> > >  arch/x86/include/asm/vmx.h                    |   14 +
> > >  arch/x86/include/uapi/asm/kvm.h               |   95 +
> > >  arch/x86/include/uapi/asm/vmx.h               |    5 +-
> > >  arch/x86/kvm/Kconfig                          |    4 +
> > >  arch/x86/kvm/Makefile                         |    3 +-
> > >  arch/x86/kvm/irq.c                            |    3 +
> > >  arch/x86/kvm/lapic.c                          |   37 +-
> > >  arch/x86/kvm/lapic.h                          |    2 +
> > >  arch/x86/kvm/mmu.h                            |   42 +-
> > >  arch/x86/kvm/mmu/mmu.c                        |  360 ++-
> > >  arch/x86/kvm/mmu/mmu_internal.h               |  123 +-
> > >  arch/x86/kvm/mmu/paging_tmpl.h                |    5 +-
> > >  arch/x86/kvm/mmu/spte.c                       |   46 +-
> > >  arch/x86/kvm/mmu/spte.h                       |   65 +-
> > >  arch/x86/kvm/mmu/tdp_iter.c                   |    1 +
> > >  arch/x86/kvm/mmu/tdp_iter.h                   |    5 +-
> > >  arch/x86/kvm/mmu/tdp_mmu.c                    |  690 ++++-
> > >  arch/x86/kvm/mmu/tdp_mmu.h                    |   12 +-
> > >  arch/x86/kvm/svm/svm.c                        |   13 +-
> > >  arch/x86/kvm/vmx/common.h                     |  174 ++
> > >  arch/x86/kvm/vmx/evmcs.c                      |    2 +-
> > >  arch/x86/kvm/vmx/evmcs.h                      |    2 +-
> > >  arch/x86/kvm/vmx/main.c                       | 1071 +++++++
> > >  arch/x86/kvm/vmx/pmu_intel.c                  |   39 +-
> > >  arch/x86/kvm/vmx/pmu_intel.h                  |   28 +
> > >  arch/x86/kvm/vmx/posted_intr.c                |   43 +-
> > >  arch/x86/kvm/vmx/posted_intr.h                |   13 +
> > >  arch/x86/kvm/vmx/tdx.c                        | 2465 +++++++++++++++++
> > >  arch/x86/kvm/vmx/tdx.h                        |  275 ++
> > >  arch/x86/kvm/vmx/tdx_arch.h                   |  157 ++
> > >  arch/x86/kvm/vmx/tdx_errno.h                  |   29 +
> > >  arch/x86/kvm/vmx/tdx_error.c                  |   22 +
> > >  arch/x86/kvm/vmx/tdx_ops.h                    |  188 ++
> > >  arch/x86/kvm/vmx/vmenter.S                    |  146 +
> > >  arch/x86/kvm/vmx/vmx.c                        |  737 ++---
> > >  arch/x86/kvm/vmx/vmx.h                        |   39 +-
> > >  arch/x86/kvm/vmx/x86_ops.h                    |  235 ++
> > >  arch/x86/kvm/x86.c                            |  148 +-
> > >  arch/x86/virt/vmx/tdx/seamcall.S              |    2 +
> > >  arch/x86/virt/vmx/tdx/tdx.c                   |   54 +-
> > >  arch/x86/virt/vmx/tdx/tdx.h                   |   52 -
> > >  include/linux/kvm_host.h                      |    4 +-
> > >  include/uapi/linux/kvm.h                      |    2 +
> > >  tools/arch/x86/include/uapi/asm/kvm.h         |   95 +
> > >  tools/include/uapi/linux/kvm.h                |    1 +
> > >  virt/kvm/kvm_main.c                           |   67 +-
> > >  59 files changed, 7877 insertions(+), 804 deletions(-)
> > >  create mode 100644 Documentation/virt/kvm/intel-tdx-layer-status.rst
> > >  create mode 100644 Documentation/virt/kvm/intel-tdx.rst
> > >  create mode 100644 Documentation/virt/kvm/tdx-tdp-mmu.rst
> > >  create mode 100644 arch/x86/kvm/vmx/common.h
> > >  create mode 100644 arch/x86/kvm/vmx/main.c
> > >  create mode 100644 arch/x86/kvm/vmx/pmu_intel.h
> > >  create mode 100644 arch/x86/kvm/vmx/tdx.c
> > >  create mode 100644 arch/x86/kvm/vmx/tdx.h
> > >  create mode 100644 arch/x86/kvm/vmx/tdx_arch.h
> > >  create mode 100644 arch/x86/kvm/vmx/tdx_errno.h
> > >  create mode 100644 arch/x86/kvm/vmx/tdx_error.c
> > >  create mode 100644 arch/x86/kvm/vmx/tdx_ops.h
> > >  create mode 100644 arch/x86/kvm/vmx/x86_ops.h
> > > 
> > > -- 
> > > 2.25.1
> > > 
> > 
> > -- 
> > Isaku Yamahata <isaku.yamahata@gmail.com>

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization
  2022-07-08  2:14   ` Yuan Yao
@ 2022-07-12 20:35     ` Isaku Yamahata
  2022-07-13  0:22       ` Xiaoyao Li
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-12 20:35 UTC (permalink / raw)
  To: Yuan Yao
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Fri, Jul 08, 2022 at 10:14:43AM +0800,
Yuan Yao <yuan.yao@linux.intel.com> wrote:

> On Mon, Jun 27, 2022 at 02:53:22PM -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> >
> > A TD guest vcpu needs to be configured before it is ready to run, which
> > requires additional information from the device model (e.g. qemu); one
> > 64-bit value is passed to the vcpu's RCX as an initial value.  Repurpose
> > KVM_MEMORY_ENCRYPT_OP to vcpu scope and add a new sub-command,
> > KVM_TDX_INIT_VCPU, under it for such additional vcpu configuration.
> >
> > Add callback for kvm vCPU-scoped operations of KVM_MEMORY_ENCRYPT_OP and
> > add a new subcommand, KVM_TDX_INIT_VCPU, for further vcpu initialization.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/include/asm/kvm-x86-ops.h    |  1 +
> >  arch/x86/include/asm/kvm_host.h       |  1 +
> >  arch/x86/include/uapi/asm/kvm.h       |  1 +
> >  arch/x86/kvm/vmx/main.c               |  9 +++++++
> >  arch/x86/kvm/vmx/tdx.c                | 36 +++++++++++++++++++++++++++
> >  arch/x86/kvm/vmx/tdx.h                |  4 +++
> >  arch/x86/kvm/vmx/x86_ops.h            |  2 ++
> >  arch/x86/kvm/x86.c                    |  6 +++++
> >  tools/arch/x86/include/uapi/asm/kvm.h |  1 +
> >  9 files changed, 61 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> > index 3677a5015a4f..32a6df784ea6 100644
> > --- a/arch/x86/include/asm/kvm-x86-ops.h
> > +++ b/arch/x86/include/asm/kvm-x86-ops.h
> > @@ -119,6 +119,7 @@ KVM_X86_OP(leave_smm)
> >  KVM_X86_OP(enable_smi_window)
> >  KVM_X86_OP_OPTIONAL(dev_mem_enc_ioctl)
> >  KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
> > +KVM_X86_OP_OPTIONAL(vcpu_mem_enc_ioctl)
> >  KVM_X86_OP_OPTIONAL(mem_enc_register_region)
> >  KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
> >  KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 81638987cdb9..e5d4e5b60fdc 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1595,6 +1595,7 @@ struct kvm_x86_ops {
> >
> >  	int (*dev_mem_enc_ioctl)(void __user *argp);
> >  	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
> > +	int (*vcpu_mem_enc_ioctl)(struct kvm_vcpu *vcpu, void __user *argp);
> >  	int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
> >  	int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
> >  	int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
> > diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> > index f89774ccd4ae..399c28b2f4f5 100644
> > --- a/arch/x86/include/uapi/asm/kvm.h
> > +++ b/arch/x86/include/uapi/asm/kvm.h
> > @@ -538,6 +538,7 @@ struct kvm_pmu_event_filter {
> >  enum kvm_tdx_cmd_id {
> >  	KVM_TDX_CAPABILITIES = 0,
> >  	KVM_TDX_INIT_VM,
> > +	KVM_TDX_INIT_VCPU,
> >
> >  	KVM_TDX_CMD_NR_MAX,
> >  };
> > diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> > index 4f4ed4ad65a7..ce12cc8276ef 100644
> > --- a/arch/x86/kvm/vmx/main.c
> > +++ b/arch/x86/kvm/vmx/main.c
> > @@ -113,6 +113,14 @@ static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
> >  	return tdx_vm_ioctl(kvm, argp);
> >  }
> >
> > +static int vt_vcpu_mem_enc_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> > +{
> > +	if (!is_td_vcpu(vcpu))
> > +		return -EINVAL;
> > +
> > +	return tdx_vcpu_ioctl(vcpu, argp);
> > +}
> > +
> >  struct kvm_x86_ops vt_x86_ops __initdata = {
> >  	.name = "kvm_intel",
> >
> > @@ -255,6 +263,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
> >
> >  	.dev_mem_enc_ioctl = tdx_dev_ioctl,
> >  	.mem_enc_ioctl = vt_mem_enc_ioctl,
> > +	.vcpu_mem_enc_ioctl = vt_vcpu_mem_enc_ioctl,
> >  };
> >
> >  struct kvm_x86_init_ops vt_init_ops __initdata = {
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index d9fe3f6463c3..2772775457b0 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -83,6 +83,11 @@ static inline bool is_hkid_assigned(struct kvm_tdx *kvm_tdx)
> >  	return kvm_tdx->hkid > 0;
> >  }
> >
> > +static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
> > +{
> > +	return kvm_tdx->finalized;
> > +}
> > +
> >  static void tdx_clear_page(unsigned long page)
> >  {
> >  	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
> > @@ -805,6 +810,37 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
> >  	return r;
> >  }
> >
> > +int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> > +{
> > +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
> > +	struct vcpu_tdx *tdx = to_tdx(vcpu);
> > +	struct kvm_tdx_cmd cmd;
> > +	u64 err;
> > +
> > +	if (tdx->initialized)
> 
> Minor: How about "tdx_vcpu->initialized"? There's
> "is_td_initialized()" below; the "tdx" here may lead people to treat it
> as the whole TD VM until they confirm its type again.

I think you mean tdx->vcpu_initialized.  If so, that makes sense. I'll rename it.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization
  2022-07-12 20:35     ` Isaku Yamahata
@ 2022-07-13  0:22       ` Xiaoyao Li
  0 siblings, 0 replies; 219+ messages in thread
From: Xiaoyao Li @ 2022-07-13  0:22 UTC (permalink / raw)
  To: Isaku Yamahata, Yuan Yao
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, Sean Christopherson

On 7/13/2022 4:35 AM, Isaku Yamahata wrote:
> On Fri, Jul 08, 2022 at 10:14:43AM +0800,
> Yuan Yao <yuan.yao@linux.intel.com> wrote:
> 
...
>>> +int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
>>> +{
>>> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
>>> +	struct vcpu_tdx *tdx = to_tdx(vcpu);
>>> +	struct kvm_tdx_cmd cmd;
>>> +	u64 err;
>>> +
>>> +	if (tdx->initialized)
>>
>> Minor: How about "tdx_vcpu->initialized"? There's
>> "is_td_initialized()" below; the "tdx" here may lead people to treat it
>> as the whole TD VM until they confirm its type again.
> 
> I think you mean tdx->vcpu_initialized.  If so, that makes sense. I'll rename it.

IMO, no need to do so.

All around tdx.c, "tdx" is the brief pointer name, just like "vmx" used 
in vmx.c. People will get used to it.


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 032/102] KVM: x86/mmu: introduce config for PRIVATE KVM MMU
  2022-07-08  1:53   ` Kai Huang
@ 2022-07-13  1:25     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-13  1:25 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Fri, Jul 08, 2022 at 01:53:48PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > To keep the non-TDX case intact, introduce a new config option for
> > private KVM MMU support.  At the moment, this is a synonym for
> > CONFIG_INTEL_TDX_HOST && CONFIG_KVM_INTEL.  The new flag makes it clear
> > that the config is only for the x86 KVM MMU.
> 
> What is the "new flag"?

Oops. "flag" should be "config". Will fix it. Thanks for pointing it out.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 002/102] Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs"
  2022-06-27 21:52 ` [PATCH v7 002/102] Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs" isaku.yamahata
@ 2022-07-13  1:55   ` Kai Huang
  2022-07-26 23:57     ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-13  1:55 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Chao Gao, Sean Christopherson,
	Suzuki K Poulose, Anup Patel, Claudio Imbrenda

On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> From: Chao Gao <chao.gao@intel.com>
> 
> This partially reverts commit b99040853738 ("KVM: Pass kvm_init()'s opaque
> param to additional arch funcs") to remove the opaque param from
> kvm_arch_check_processor_compat() because no one uses this opaque now.
> Address conflicts for ARM (due to file movement) and manually handle RISC-V,
> which comes after that commit.
> 
> And the changes to kvm_arch_hardware_setup() in the original commit are still
> needed, so they are not reverted.

I tried to dig through the history to find out why we are doing this.

IMHO it's better to give a reason why you need to revert the opaque.  I guess
"no one uses this opaque now" doesn't mean we need to remove it?

Perhaps you should mention this is a preparation for calling
hardware_enable_all()/hardware_disable_all() during module loading time.
Instead of extending hardware_enable_all()/hardware_disable_all() to take the
opaque and pass it to kvm_arch_check_processor_compat(), just remove the opaque.

Or perhaps just merge this patch to next one?

> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> Reviewed-by: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Acked-by: Anup Patel <anup@brainfault.org>
> Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Link: https://lore.kernel.org/r/20220216031528.92558-3-chao.gao@intel.com
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/arm64/kvm/arm.c       |  2 +-
>  arch/mips/kvm/mips.c       |  2 +-
>  arch/powerpc/kvm/powerpc.c |  2 +-
>  arch/riscv/kvm/main.c      |  2 +-
>  arch/s390/kvm/kvm-s390.c   |  2 +-
>  arch/x86/kvm/x86.c         |  2 +-
>  include/linux/kvm_host.h   |  2 +-
>  virt/kvm/kvm_main.c        | 16 +++-------------
>  8 files changed, 10 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index a0188144a122..7588efbac6be 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -68,7 +68,7 @@ int kvm_arch_hardware_setup(void *opaque)
>  	return 0;
>  }
>  
> -int kvm_arch_check_processor_compat(void *opaque)
> +int kvm_arch_check_processor_compat(void)
>  {
>  	return 0;
>  }
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index a25e0b73ee70..092d09fb6a7e 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -140,7 +140,7 @@ int kvm_arch_hardware_setup(void *opaque)
>  	return 0;
>  }
>  
> -int kvm_arch_check_processor_compat(void *opaque)
> +int kvm_arch_check_processor_compat(void)
>  {
>  	return 0;
>  }
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 191992fcb2c2..ca8ef51092c6 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -446,7 +446,7 @@ int kvm_arch_hardware_setup(void *opaque)
>  	return 0;
>  }
>  
> -int kvm_arch_check_processor_compat(void *opaque)
> +int kvm_arch_check_processor_compat(void)
>  {
>  	return kvmppc_core_check_processor_compat();
>  }
> diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
> index 1549205fe5fe..f8d6372d208f 100644
> --- a/arch/riscv/kvm/main.c
> +++ b/arch/riscv/kvm/main.c
> @@ -20,7 +20,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
>  	return -EINVAL;
>  }
>  
> -int kvm_arch_check_processor_compat(void *opaque)
> +int kvm_arch_check_processor_compat(void)
>  {
>  	return 0;
>  }
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 72bd5c9b9617..a05493f1cacf 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -251,7 +251,7 @@ int kvm_arch_hardware_enable(void)
>  	return 0;
>  }
>  
> -int kvm_arch_check_processor_compat(void *opaque)
> +int kvm_arch_check_processor_compat(void)
>  {
>  	return 0;
>  }
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 3d9dbaf9828f..30af2bd0b4d5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11799,7 +11799,7 @@ void kvm_arch_hardware_unsetup(void)
>  	static_call(kvm_x86_hardware_unsetup)();
>  }
>  
> -int kvm_arch_check_processor_compat(void *opaque)
> +int kvm_arch_check_processor_compat(void)
>  {
>  	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
>  
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index c20f2d55840c..d4f130a9f5c8 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1442,7 +1442,7 @@ int kvm_arch_hardware_enable(void);
>  void kvm_arch_hardware_disable(void);
>  int kvm_arch_hardware_setup(void *opaque);
>  void kvm_arch_hardware_unsetup(void);
> -int kvm_arch_check_processor_compat(void *opaque);
> +int kvm_arch_check_processor_compat(void);
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
>  bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index a67e996cbf7f..a5bada53f1fe 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -5697,22 +5697,14 @@ void kvm_unregister_perf_callbacks(void)
>  }
>  #endif
>  
> -struct kvm_cpu_compat_check {
> -	void *opaque;
> -	int *ret;
> -};
> -
> -static void check_processor_compat(void *data)
> +static void check_processor_compat(void *rtn)
>  {
> -	struct kvm_cpu_compat_check *c = data;
> -
> -	*c->ret = kvm_arch_check_processor_compat(c->opaque);
> +	*(int *)rtn = kvm_arch_check_processor_compat();
>  }
>  
>  int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
>  		  struct module *module)
>  {
> -	struct kvm_cpu_compat_check c;
>  	int r;
>  	int cpu;
>  
> @@ -5740,10 +5732,8 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
>  	if (r < 0)
>  		goto out_free_1;
>  
> -	c.ret = &r;
> -	c.opaque = opaque;
>  	for_each_online_cpu(cpu) {
> -		smp_call_function_single(cpu, check_processor_compat, &c, 1);
> +		smp_call_function_single(cpu, check_processor_compat, &r, 1);
>  		if (r < 0)
>  			goto out_free_2;
>  	}


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialization
  2022-06-27 21:52 ` [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialization isaku.yamahata
  2022-07-12  1:15   ` Kai Huang
@ 2022-07-13  3:11   ` Kai Huang
  2022-07-27 22:04   ` Isaku Yamahata
  2 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-13  3:11 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson

On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> Although non-x86 archs don't break as far as I inspected the code, it's only
> by code inspection.  This should be reviewed by each arch's maintainers.

This first paragraph doesn't make sense to me.  At this moment, we don't know
why you need this patch at all.

> 
> kvm_init() checks CPU compatibility by calling
> kvm_arch_check_processor_compat() on all online CPUs.  
> 

What's the problem here which requires you need to do ..

> Move the callback
> to hardware_enable_nolock() and add hardware_enable_all() and
> hardware_disable_all().

.. this?


> Add an arch specific callback, kvm_arch_post_hardware_enable_setup(), for an
> arch to do arch specific initialization that requires hardware_enable_all().
> This makes room for the TDX module to be initialized on kvm module loading.
> The TDX module requires all online CPUs to enable VMX by VMXON.

So to me this is the reason why you need to do hardware_enable_all() in
kvm_init().  There's nothing wrong with "kvm_init() checks CPU compatibility by
calling kvm_arch_check_processor_compat() on all online CPUs", right?

In this case, shouldn't we say something like "opportunistically move
kvm_arch_check_processor_compat() to hardware_enable_nolock()" because this is
not the reason that you want this patch, correct?

Also, maybe I am missing something obvious, but why do you need to do
hardware_disable_all() right after hardware_enable_all() in kvm_init()?  Could
you at least put some explanation in the changelog?

And again, it's better to add one sentence or so to explain why you want to
init the TDX module during module loading time.

> 
> If kvm_arch_hardware_enable/disable() depend on (*) part, such dependency
> must be called before kvm_init().  
> 

I don't follow the logic here.  If kvm_arch_hardware_enable() depends on
something, then you need to put kvm_arch_hardware_enable() after that, or move
that forward.  But why must such a dependency be called "before kvm_init()"?

Also, I think you are talking about a problem that only exists _after_ you move
hardware_enable_all() to kvm_init(), not a problem in the existing code, right?


> In fact kvm_intel() does.  
> 

No such function kvm_intel().

Again, what's the issue here? Can you add more sentences to explain the
_problem_, or _why_?


> Although
> other archs don't, as far as I checked (as follows), it should be reviewed
> by each arch's maintainers.
> 
> Before this patch:
> - Arch module initialization
>   - kvm_init()
>     - kvm_arch_init()
>     - kvm_arch_check_processor_compat() on each CPUs
>   - post arch specific initialization ---- (*)
> 
> - when creating/deleting first/last VM
>    - kvm_arch_hardware_enable() on each CPUs --- (A)
>    - kvm_arch_hardware_disable() on each CPUs --- (B)
> 
> After this patch:
> - Arch module initialization
>   - kvm_init()
>     - kvm_arch_init()
>     - kvm_arch_hardware_enable() on each CPUs  (A)
>     - kvm_arch_check_processor_compat() on each CPUs

Even with this patch, unless I am mistaken, kvm_arch_hardware_enable()
is called _after_ kvm_arch_check_processor_compat().

>     - kvm_arch_hardware_disable() on each CPUs (B)
>   - post arch specific initialization  --- (*)
> 
> Code inspection result:
> (A)/(B) can depend on (*) before this patch.  If there is dependency, such
> initialization must be moved before kvm_init() with this patch.  
> 

Must be moved to before (A)/(B), right?

> VMX does
> in fact.  
> 

More details will help, please.

> As far as I inspected the other archs, only mips has it.
> 
> - arch/mips/kvm/mips.c
>   module init function, kvm_mips_init(), does some initialization after
>   kvm_init().  Compile test only.  Needs review.

Is "Needs review" changelog material?

> 
> - arch/x86/kvm/x86.c
>   - uses vm_list which is statically initialized.

I can hardly see how "vm_list is statically initialized" is causing any problem
here.  Exactly what's the problem here??

>   - static_call(kvm_x86_hardware_enable)();
>     - SVM: (*) is empty.
>     - VMX: needs fix

What's the problem, and how are you going to fix?  Shouldn't this be in
changelog?

> 
> - arch/powerpc/kvm/powerpc.c
>   kvm_arch_hardware_enable/disable() are nop
> 
> - arch/s390/kvm/kvm-s390.c
>   kvm_arch_hardware_enable/disable() are nop
> 
> - arch/arm64/kvm/arm.c
>   module init function, arm_init(), calls only kvm_init().
>   (*) is empty
> 
> - arch/riscv/kvm/main.c
>   module init function, riscv_kvm_init(), calls only kvm_init().
>   (*) is empty
> 
> Co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/mips/kvm/mips.c     | 12 +++++++-----
>  arch/x86/kvm/vmx/vmx.c   | 15 +++++++++++----
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/kvm_main.c      | 25 ++++++++++++++++++-------
>  4 files changed, 37 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 092d09fb6a7e..17228584485d 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -1643,11 +1643,6 @@ static int __init kvm_mips_init(void)
>  	}
>  
>  	ret = kvm_mips_entry_setup();
> -	if (ret)
> -		return ret;
> -
> -	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> -
>  	if (ret)
>  		return ret;
>  
> @@ -1656,6 +1651,13 @@ static int __init kvm_mips_init(void)
>  
>  	register_die_notifier(&kvm_mips_csr_die_notifier);
>  
> +	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> +
> +	if (ret) {
> +		unregister_die_notifier(&kvm_mips_csr_die_notifier);
> +		return ret;
> +	}
> +

I don't understand how moving "hardware_enable_all()/hardware_disable_all()" to
kvm_init() is related to this change.

Anyway, at least some comments?

>  	return 0;
>  }
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 31e7630203fd..d3b68a6dec48 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8372,6 +8372,15 @@ static void vmx_exit(void)
>  }
>  module_exit(vmx_exit);
>  
> +/* initialize before kvm_init() so that hardware_enable/disable() can work. */

There's no function named hardware_enable() or hardware_disable().

> +static void __init vmx_init_early(void)
> +{
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu)
> +		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> +}
> +

Perhaps I am missing something, but I couldn't see why this must be done before
kvm_init().  Please give some comments?

>  static int __init vmx_init(void)
>  {
>  	int r, cpu;
> @@ -8409,6 +8418,7 @@ static int __init vmx_init(void)
>  	}
>  #endif
>  
> +	vmx_init_early();
>  	r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
>  		     __alignof__(struct vcpu_vmx), THIS_MODULE);
>  	if (r)
> @@ -8427,11 +8437,8 @@ static int __init vmx_init(void)
>  		return r;
>  	}
>  
> -	for_each_possible_cpu(cpu) {
> -		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> -
> +	for_each_possible_cpu(cpu)
>  		pi_init_cpu(cpu);
> -	}
>  
>  #ifdef CONFIG_KEXEC_CORE
>  	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index d4f130a9f5c8..79a4988fd51f 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1441,6 +1441,7 @@ void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu, struct dentry *debugfs_
>  int kvm_arch_hardware_enable(void);
>  void kvm_arch_hardware_disable(void);
>  int kvm_arch_hardware_setup(void *opaque);
> +int kvm_arch_post_hardware_enable_setup(void *opaque);
>  void kvm_arch_hardware_unsetup(void);
>  int kvm_arch_check_processor_compat(void);
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index a5bada53f1fe..cee799265ce6 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4899,8 +4899,13 @@ static void hardware_enable_nolock(void *junk)
>  
>  	cpumask_set_cpu(cpu, cpus_hardware_enabled);
>  
> +	r = kvm_arch_check_processor_compat();
> +	if (r)
> +		goto out;
> +
>  	r = kvm_arch_hardware_enable();
>  
> +out:
>  	if (r) {
>  		cpumask_clear_cpu(cpu, cpus_hardware_enabled);
>  		atomic_inc(&hardware_enable_failed);
> @@ -5697,9 +5702,9 @@ void kvm_unregister_perf_callbacks(void)
>  }
>  #endif
>  
> -static void check_processor_compat(void *rtn)
> +__weak int kvm_arch_post_hardware_enable_setup(void *opaque)
>  {
> -	*(int *)rtn = kvm_arch_check_processor_compat();
> +	return 0;
>  }
>  
>  int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> @@ -5732,11 +5737,17 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
>  	if (r < 0)
>  		goto out_free_1;
>  
> -	for_each_online_cpu(cpu) {
> -		smp_call_function_single(cpu, check_processor_compat, &r, 1);
> -		if (r < 0)
> -			goto out_free_2;
> -	}
> +	/* hardware_enable_nolock() checks CPU compatibility on each CPUs. */
> +	r = hardware_enable_all();
> +	if (r)
> +		goto out_free_2;
> +	/*
> +	 * Arch specific initialization that requires to enable virtualization
> +	 * feature.  e.g. TDX module initialization requires VMXON on all
> +	 * present CPUs.
> +	 */
> +	kvm_arch_post_hardware_enable_setup(opaque);

So after digging through the history and looking at the code again, I guess
perhaps it's also fine to introduce this __weak version here (since you have
given the reason to do so in the changelog), but in this way perhaps it's
better to put this patch and the patch to load the TDX module close together
so it's easier to review.

Or to me it's also fine to move this chunk to the patch to init TDX module as I
replied before.

> +	hardware_disable_all();

IMO this needs a comment on why we need to do hardware_disable_all() here.  It
doesn't make a lot of sense to me.

>  
>  	r = cpuhp_setup_state_nocalls(CPUHP_AP_KVM_STARTING, "kvm/cpu:starting",
>  				      kvm_starting_cpu, kvm_dying_cpu);


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialization
  2022-07-12  1:15   ` Kai Huang
@ 2022-07-13  3:16     ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-13  3:16 UTC (permalink / raw)
  To: isaku.yamahata, kvm, linux-kernel
  Cc: isaku.yamahata, Paolo Bonzini, Sean Christopherson


> > +	/* hardware_enable_nolock() checks CPU compatibility on each CPUs. */
> > +	r = hardware_enable_all();
> > +	if (r)
> > +		goto out_free_2;
> > +	/*
> > +	 * Arch specific initialization that requires to enable virtualization
> > +	 * feature.  e.g. TDX module initialization requires VMXON on all
> > +	 * present CPUs.
> > +	 */
> > +	kvm_arch_post_hardware_enable_setup(opaque);
> 
> Please see my reply to your patch  "KVM: TDX: Initialize TDX module when loading
> kvm_intel.ko".
> 
> The introduction of the __weak kvm_arch_post_hardware_enable_setup() should be
> in that patch since it has nothing to do with the job you claimed to do in this
> patch.
> 
> And by removing it, this patch can be taken out of the TDX series and upstreamed
> separately.

I tried to dig more into the history.  Please see my other reply and ignore
this.


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits
  2022-07-08  2:15   ` Kai Huang
@ 2022-07-13  4:52     ` Isaku Yamahata
  2022-07-13 10:41       ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-13  4:52 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Rick Edgecombe

On Fri, Jul 08, 2022 at 02:15:20PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Rick Edgecombe <rick.p.edgecombe@intel.com>
> 
> I don't think this is appropriate any more.  You can add Co-developed-by I
> guess.

Makes sense.


> > 
> > TDX repurposes one GPA bits (51 bit or 47 bit based on configuration) to
> > indicate the GPA is private(if cleared) or shared (if set) with VMM.  If
> > GPA.shared is set, GPA is converted existing conventional EPT pointed by
> > EPTP.  If GPA.shared bit is cleared, GPA is converted by Secure-EPT(S-EPT)
> 
> Not sure whether Secure EPT has even been mentioned before in this series.  If
> not, perhaps better to explain it here.  Or not sure whether you need to mention
> S-EPT at all.
> 
> > TDX module manages.  VMM has to issue SEAM call to TDX module to operate on
> 
> SEAM call -> SEAMCALL
> 
> > S-EPT.  e.g. populating/zapping guest page or shadow page by TDH.PAGE.{ADD,
> > REMOVE} for guest page, TDH.PAGE.SEPT.{ADD, REMOVE} S-EPT etc.
> 
> Not sure why you want to mention those particular SEAMCALLs.
> 
> > 
> > Several hooks needs to be added to KVM MMU to support TDX.  Add a function
> 
> needs -> need.
> 
> Not sure why you need first sentence at all.
> 
> But I do think you should mention adding per-VM scope 'gfn_shared_mask' thing.
> 
> > to check if KVM MMU is running for TDX and several functions for address
> > conversation between private-GPA and shared-GPA.
> > 
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  2 ++
> >  arch/x86/kvm/mmu.h              | 32 ++++++++++++++++++++++++++++++++
> >  2 files changed, 34 insertions(+)
> > 
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index e5d4e5b60fdc..2c47aab72a1b 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1339,7 +1339,9 @@ struct kvm_arch {
> >  	 */
> >  	u32 max_vcpu_ids;
> >  
> > +#ifdef CONFIG_KVM_MMU_PRIVATE
> >  	gfn_t gfn_shared_mask;
> > +#endif
> 
> As Xiaoyao said, please introduce gfn_shared_mask in this patch.
> 
> And by applying this patch, nothing will prevent you to turn on INTEL_TDX_HOST
> and KVM_INTEL, which also turns on KVM_MMU_PRIVATE.
> 
> So 'kvm_arch::gfn_shared_mask' is guaranteed to be 0?  If not, can legal
> (shared) GFN for normal VM be potentially treated as private?
> 
> If yes, perhaps explicitly call out in changelog so people don't need to worry
> about?

struct kvm, which includes struct kvm_arch, is guaranteed to be zero-cleared when allocated.

Here is the updated commit message.

Author: Isaku Yamahata <isaku.yamahata@intel.com>
Date:   Tue Jul 12 00:10:13 2022 -0700

    KVM: x86/mmu: Add address conversion functions for TDX shared bit of GPA
    
    TDX repurposes one GPA bit (51 bit or 47 bit based on configuration) to
    indicate the GPA is private(if cleared) or shared (if set) with VMM.  If
    GPA.shared is set, GPA is converted existing conventional EPT pointed by
    EPTP.  If GPA.shared bit is cleared, GPA is converted by TDX module.
    VMM has to issue SEAMCALLs to operate.
    
    Add a member to remember GPA shared bit for each guest TDs, add address
    conversion functions between private GPA and shared GPA and test if GPA
    is private.
    
    Because struct kvm_arch (or struct kvm which includes struct kvm_arch. See
    kvm_arch_alloc_vm() that passes __GPF_ZERO) is zero-cleared when allocated,
    the new member to remember GPA shared bit is guaranteed to be zero with
    this patch unless it's initialized explicitly.
    
    Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
    Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
    Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
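
For reference, a minimal sketch of what such helpers could look like in
arch/x86/kvm/mmu.h, assuming the per-VM gfn_shared_mask member above (the
helper names are illustrative and the CONFIG_KVM_MMU_PRIVATE #ifdef is
omitted):

static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
{
	return kvm->arch.gfn_shared_mask;
}

static inline gfn_t kvm_gfn_to_shared(const struct kvm *kvm, gfn_t gfn)
{
	/* Set the shared bit; for a non-TDX VM the mask is 0 and this is a no-op. */
	return gfn | kvm_gfn_shared_mask(kvm);
}

static inline gfn_t kvm_gfn_to_private(const struct kvm *kvm, gfn_t gfn)
{
	/* Clear the shared bit to get the private view of the GFN. */
	return gfn & ~kvm_gfn_shared_mask(kvm);
}

static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
{
	gfn_t mask = kvm_gfn_shared_mask(kvm);

	/* Private iff the VM has a shared mask and that bit is clear in the GPA. */
	return mask && !(gpa_to_gfn(gpa) & mask);
}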


-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-12 17:22       ` Isaku Yamahata
@ 2022-07-13  7:37         ` Chao Peng
  0 siblings, 0 replies; 219+ messages in thread
From: Chao Peng @ 2022-07-13  7:37 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: Chao Gao, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, chao.p.peng

On Tue, Jul 12, 2022 at 10:22:50AM -0700, Isaku Yamahata wrote:
> On Tue, Jul 12, 2022 at 06:54:19PM +0800,
> Chao Peng <chao.p.peng@linux.intel.com> wrote:
> 
> > On Tue, Jul 12, 2022 at 01:07:20PM +0800, Chao Gao wrote:
> > > On Mon, Jul 11, 2022 at 08:17:01AM -0700, Isaku Yamahata wrote:
> > > >Hi. Because my description on large page support was terse, I wrote up more
> > > >detailed one.  Any feedback/thoughts on large page support?
> > > >
> > > >TDP MMU large page support design
> > > >
> > > >Two main discussion points
> > > >* how to track page status. private vs shared, no-largepage vs can-be-largepage
> > > 
> > > ...
> > > 
> > > >
> > > >Tracking private/shared and large page mappable
> > > >-----------------------------------------------
> > > >VMM needs to track that page is mapped as private or shared at 4KB granularity.
> > > >For efficiency of EPT violation path (****), at 2MB and 1GB level, VMM should
> > > >track the page can be mapped as a large page (regarding private/shared).  VMM
> > > >updates it on MapGPA and references it on the EPT violation path. (****)
> > > 
> > > Isaku,
> > > 
> > > + Peng Chao
> > > 
> > > Doesn't UPM guarantee that 2MB/1GB large page in CR3 should be either all
> > > private or all shared?
> > > 
> > > KVM always retrieves the mapping level in CR3 and enforces that EPT's
> > > page level is not greater than that in CR3. My point is if UPM already enforces
> > > no mixed pages in a large page, then KVM needn't do that again (UPM can
> > > be trusted).
> > 
> > The backing store in the UPM can tell KVM which page level it can
> > support for a given private gpa, similar to host_pfn_mapping_level() for
> > shared addresses.
> >
> > However, this solely represents the backing store's capability, KVM
> > still needs additional info to decide whether that can be safely mapped
> > as 2M/1G, e.g. all the following pages in the 2M/1G range should be all
> > private, currently this is not something backing store can tell.
> 
> This argument applies to shared GPAs too.  The shared pages are backed by a
> normal file mapping with UPM.  When KVM is mapping a shared GPA, the same check
> is needed.  So I think KVM has to track all-private, all-shared, or no-largepage
> at the 2MB/1GB level.  If UPM tracks shared-or-private at the 4KB level, KVM
> probably doesn't need to track it at the 4KB level.

Right, the same also applies to shared memory. All the info we need is
whether the pages of a 2M range are all private/shared and not mixed. UPM v7
has code tracking that in KVM; in previous versions we tracked that in
the backing store, which has been discussed as not a good idea.

Chao
> 
> 
> > Actually, in UPM v7 we let KVM record this info so one possible solution
> > is making use of it.
> > 
> >   https://lkml.org/lkml/2022/7/6/259
> > 
> > Then to map a page as 2M, KVM needs to check:
> >   - Memory backing store support that level
> >   - All pages in 2M range are private as we recorded through
> >     KVM_MEMORY_ENCRYPT_{UN,}REG_REGION
> >   - No existing partial 4K map(s) in 2M range
> -- 
> Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
  2022-06-30 11:37   ` Kai Huang
@ 2022-07-13  8:35     ` Isaku Yamahata
  2022-07-13 10:29       ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-13  8:35 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Thu, Jun 30, 2022 at 11:37:15PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > Explicitly check for an MMIO spte in the fast page fault flow.  TDX will
> > use a not-present entry for MMIO sptes, which can be mistaken for an
> > access-tracked spte since both have SPTE_SPECIAL_MASK set.
> 
> SPTE_SPECIAL_MASK has been removed in latest KVM code.  The changelog needs
> update.

It was renamed to SPTE_TDP_AD_MASK, not removed.


> In fact, if I understand correctly, I don't think this changelog is correct:

> The existing code doesn't check is_mmio_spte() because:
> 
> 1) If MMIO caching is enabled, MMIO fault is always handled in
> handle_mmio_page_fault() before reaching here; 
>
> 2) If MMIO caching is disabled, is_shadow_present_pte() always returns false for
> MMIO spte, and is_mmio_spte() also always returns false for MMIO spte, so there's
> no need to check here.
> 
> "A non-present entry for MMIO spte" doesn't necessarily mean
> is_shadow_present_pte() will return true for it, and there's no explanation at
> all that for TDX guest a MMIO spte could reach here and is_shadow_present_pte()
> returns true for it.

Although it was needed, I noticed the following commit made this patch
unnecessary.  So I'll drop this patch. Kudos to Sean.

edea7c4fc215c7ee1cc98363b016ad505cbac9f7
"KVM: x86/mmu: Use a dedicated bit to track shadow/MMU-present SPTEs"

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
  2022-07-13  8:35     ` Isaku Yamahata
@ 2022-07-13 10:29       ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-13 10:29 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, Sean Christopherson

> 
> Although it was needed, I noticed the following commit made this patch
> unnecessary.  So I'll drop this patch. Kudos to Sean.
> 
> edea7c4fc215c7ee1cc98363b016ad505cbac9f7
> "KVM: x86/mmu: Use a dedicated bit to track shadow/MMU-present SPTEs"
> 

Yes, is_shadow_present_pte() always returns false for MMIO, so this patch isn't
needed anymore.

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits
  2022-07-13  4:52     ` Isaku Yamahata
@ 2022-07-13 10:41       ` Kai Huang
  2022-07-14  0:14         ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-13 10:41 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, Rick Edgecombe


> > 
> > And by applying this patch, nothing will prevent you to turn on INTEL_TDX_HOST
> > and KVM_INTEL, which also turns on KVM_MMU_PRIVATE.
> > 
> > So 'kvm_arch::gfn_shared_mask' is guaranteed to be 0?  If not, can legal
> > (shared) GFN for normal VM be potentially treated as private?
> > 
> > If yes, perhaps explicitly call out in changelog so people don't need to worry
> > about?
> 
> struct kvm that includes struct kvm_arch is guaranteed to be zero.
> 
> Here is the updated commit message.
> 
> Author: Isaku Yamahata <isaku.yamahata@intel.com>
> Date:   Tue Jul 12 00:10:13 2022 -0700
> 
>     KVM: x86/mmu: Add address conversion functions for TDX shared bit of GPA
>     
>     TDX repurposes one GPA bit (51 bit or 47 bit based on configuration) to
>     indicate the GPA is private(if cleared) or shared (if set) with VMM.  If
>     GPA.shared is set, GPA is converted existing conventional EPT pointed by
>     EPTP.  If GPA.shared bit is cleared, GPA is converted by TDX module.
>     VMM has to issue SEAMCALLs to operate.

Sorry what does "GPA is converted ..." mean?


-- 
Thanks,
-Kai



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits
  2022-07-13 10:41       ` Kai Huang
@ 2022-07-14  0:14         ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-14  0:14 UTC (permalink / raw)
  To: Kai Huang
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini,
	Rick Edgecombe

On Wed, Jul 13, 2022 at 10:41:33PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> 
> > > 
> > > And by applying this patch, nothing will prevent you to turn on INTEL_TDX_HOST
> > > and KVM_INTEL, which also turns on KVM_MMU_PRIVATE.
> > > 
> > > So 'kvm_arch::gfn_shared_mask' is guaranteed to be 0?  If not, can legal
> > > (shared) GFN for normal VM be potentially treated as private?
> > > 
> > > If yes, perhaps explicitly call out in changelog so people don't need to worry
> > > about?
> > 
> > struct kvm that includes struct kvm_arch is guaranteed to be zero.
> > 
> > Here is the updated commit message.
> > 
> > Author: Isaku Yamahata <isaku.yamahata@intel.com>
> > Date:   Tue Jul 12 00:10:13 2022 -0700
> > 
> >     KVM: x86/mmu: Add address conversion functions for TDX shared bit of GPA
> >     
> >     TDX repurposes one GPA bit (51 bit or 47 bit based on configuration) to
> >     indicate the GPA is private(if cleared) or shared (if set) with VMM.  If
> >     GPA.shared is set, GPA is converted existing conventional EPT pointed by
> >     EPTP.  If GPA.shared bit is cleared, GPA is converted by TDX module.
> >     VMM has to issue SEAMCALLs to operate.
> 
> Sorry what does "GPA is converted ..." mean?

Oops. typo. I meant GPA is covered by ...

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
                   ` (102 preceding siblings ...)
  2022-07-11 15:17 ` [PATCH v7 000/102] KVM TDX basic feature support Isaku Yamahata
@ 2022-07-14  1:03 ` Sean Christopherson
  2022-07-14  4:09   ` Xiaoyao Li
  2022-07-20 14:59   ` Chao Peng
  103 siblings, 2 replies; 219+ messages in thread
From: Sean Christopherson @ 2022-07-14  1:03 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> KVM TDX basic feature support
> 
> Hello.  This is v7 the patch series vof KVM TDX support.
> This is based on v5.19-rc1 + kvm/queue branch + TDX HOST patch series.
> The tree can be found at https://github.com/intel/tdx/tree/kvm-upstream
> How to run/test: It's describe at https://github.com/intel/tdx/wiki/TDX-KVM
> 
> Major changes from v6:
> - rebased to v5.19 base
> 
> TODO:
> - integrate fd-based guest memory. As the discussion is still on-going, I
>   intentionally dropped fd-based guest memory support yet.  The integration can
>   be found at https://github.com/intel/tdx/tree/kvm-upstream-workaround.
> - 2M large page support. It's work-in-progress.
> For large page support, there are several design choices. Here is the design options.
> Any thoughts/feedback?

Apologies, I didn't read beyond the intro paragraph.  In case something like this
comes up again, it's probably best to send a standalone email tagged RFC, I doubt
I'm the only one that missed this embedded RFC.

> KVM MMU Large page support for TDX
 
...

> * options to track private or shared
> At each page size (4KB, 2MB, and 1GB), track private, shared, or mixed (2MB and
> 1GB case). For 4KB each page, 1 bit per page is needed. private or shared.  For
> large pages (2MB and 1GB), 2 bits per large page is needed. (private, shared, or
> mixed).  When resolving KVM page fault, we don't want to check the lower-size
> pages to check if the given GPA can be a large for performance.  On MapGPA check
> it instead.
> 
> Option A). enhance kvm_arch_memory_slot
>   enum kvm_page_type {
>        KVM_PAGE_TYPE_INVALID,
>        KVM_PAGE_TYPE_SHARED,
>        KVM_PAGE_TYPE_PRIVATE,
>        KVM_PAGE_TYPE_MIXED,
>   };
> 
>   struct kvm_page_attr {
>        enum kvm_page_type type;
>   };
> 
>  struct kvm_arch_memory_slot {
>  +      struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
> 
> Option B). steal one more bit SPTE_MIXED_MASK in addition to SPTE_SHARED_MASK
> If !SPTE_MIXED_MASK, it can be large page.
> 
> Option C). use SPTE_SHARED_MASK and kvm_mmu_page::mixed bitmap
> kvm_mmu_page::mixed bitmap of 1GB, root indicates mixed for 2MB, 1GB.
> 
> 
> * comparison
> A).
> + straightforward to implement
> + SPTE_SHARED_MASK isn't needed
> - memory overhead compared to B). or C).
> - more memory reference on KVM page fault
> 
> B).
> + simpler than C) (complex than A)?)
> + efficient on KVM page fault. (only SPTE reference)
> + low memory overhead
> - Waste precious SPTE bits.
> 
> C).
> + efficient on KVM page fault. (only SPTE reference)
> + low memory overhead
> - complicates MapGPA
> - scattered data structure

Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
on insertion/removal to (dis)allow hugepages as needed.

  + efficient on KVM page fault (no new lookups)
  + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
  + straightforward to implement
  + can (and should) be merged as part of the UPM series

I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
completely covered (fully shared) or not covered at all (fully private), but I'm
not 100% certain that xa_for_each_range() works the way I think it does.
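
A rough sketch of such a check, assuming an xarray indexed by gfn where a
present entry marks the 4KB page as shared (purely illustrative, untested):

static bool gfn_range_is_mixed(struct xarray *shared_gfns, gfn_t start, gfn_t end)
{
	unsigned long index, nr_shared = 0;
	void *entry;

	/* Count the 4KB GFNs in [start, end) that have a "shared" entry. */
	xa_for_each_range(shared_gfns, index, entry, start, end - 1)
		nr_shared++;

	/* Mixed iff the range is neither fully shared nor fully private. */
	return nr_shared && nr_shared != end - start;
}

If a range is not mixed, the corresponding 2mb/1gb mapping can be allowed;
otherwise lpage_info for that range needs to disallow hugepages.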

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-14  1:03 ` Sean Christopherson
@ 2022-07-14  4:09   ` Xiaoyao Li
  2022-07-20 14:59   ` Chao Peng
  1 sibling, 0 replies; 219+ messages in thread
From: Xiaoyao Li @ 2022-07-14  4:09 UTC (permalink / raw)
  To: Sean Christopherson, isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On 7/14/2022 9:03 AM, Sean Christopherson wrote:
> On Mon, Jun 27, 2022, isaku.yamahata@intel.com wrote:
>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>
>> KVM TDX basic feature support
>>
>> Hello.  This is v7 the patch series vof KVM TDX support.
>> This is based on v5.19-rc1 + kvm/queue branch + TDX HOST patch series.
>> The tree can be found at https://github.com/intel/tdx/tree/kvm-upstream
>> How to run/test: It's describe at https://github.com/intel/tdx/wiki/TDX-KVM
>>
>> Major changes from v6:
>> - rebased to v5.19 base
>>
>> TODO:
>> - integrate fd-based guest memory. As the discussion is still on-going, I
>>    intentionally dropped fd-based guest memory support yet.  The integration can
>>    be found at https://github.com/intel/tdx/tree/kvm-upstream-workaround.
>> - 2M large page support. It's work-in-progress.
>> For large page support, there are several design choices. Here is the design options.
>> Any thoughts/feedback?
> 
> Apologies, I didn't read beyond the intro paragraph.  In case something like this
> comes up again, it's probably best to send a standalone email tagged RFC, I doubt
> I'm the only one that missed this embedded RFC.
> 
>> KVM MMU Large page support for TDX
>   
> ...
> 
>> * options to track private or shared
>> At each page size (4KB, 2MB, and 1GB), track private, shared, or mixed (2MB and
>> 1GB case). For 4KB each page, 1 bit per page is needed. private or shared.  For
>> large pages (2MB and 1GB), 2 bits per large page is needed. (private, shared, or
>> mixed).  When resolving KVM page fault, we don't want to check the lower-size
>> pages to check if the given GPA can be a large for performance.  On MapGPA check
>> it instead.
>>
>> Option A). enhance kvm_arch_memory_slot
>>    enum kvm_page_type {
>>         KVM_PAGE_TYPE_INVALID,
>>         KVM_PAGE_TYPE_SHARED,
>>         KVM_PAGE_TYPE_PRIVATE,
>>         KVM_PAGE_TYPE_MIXED,
>>    };
>>
>>    struct kvm_page_attr {
>>         enum kvm_page_type type;
>>    };
>>
>>   struct kvm_arch_memory_slot {
>>   +      struct kvm_page_attr *page_attr[KVM_NR_PAGE_SIZES];
>>
>> Option B). steal one more bit SPTE_MIXED_MASK in addition to SPTE_SHARED_MASK
>> If !SPTE_MIXED_MASK, it can be large page.

I don't think this is a good option, since it requires all the mappings to
exist all the time in both the shared spte tree and the private spte tree.

>> Option C). use SPTE_SHARED_MASK and kvm_mmu_page::mixed bitmap
>> kvm_mmu_page::mixed bitmap of 1GB, root indicates mixed for 2MB, 1GB.
>>
>>
>> * comparison
>> A).
>> + straightforward to implement
>> + SPTE_SHARED_MASK isn't needed
>> - memory overhead compared to B). or C).
>> - more memory reference on KVM page fault
>>
>> B).
>> + simpler than C) (complex than A)?)
>> + efficient on KVM page fault. (only SPTE reference)
>> + low memory overhead
>> - Waste precious SPTE bits.
>>
>> C).
>> + efficient on KVM page fault. (only SPTE reference)
>> + low memory overhead
>> - complicates MapGPA
>> - scattered data structure
> 
> Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
> on insertion/removal to (dis)allow hugepages as needed.

UPM v7[1] introduces "struct xarray mem_attr_array" to track the 
shared/private attr of a range.

So in kvm_vm_ioctl_set_encrypted_region() it needs to

- increase the lpage_info counter when a 2m/1g range changes from
identical to mixed, and

- decrease the counter when it changes from mixed back to identical

(a rough sketch of this follows below).

[1]: 
https://lore.kernel.org/all/20220706082016.2603916-12-chao.p.peng@linux.intel.com/
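
A rough sketch of that counter update, assuming a helper in mmu.c that reuses
kvm_lpage_info.disallow_lpage as the counter (the helper name and its callers
are hypothetical; the real code may want a dedicated flag bit instead):

static void account_mixed_2m(struct kvm_memory_slot *slot, gfn_t gfn,
			     bool becomes_mixed)
{
	/* lpage_info_slot() returns the 2MB-level tracking entry for this gfn. */
	struct kvm_lpage_info *linfo = lpage_info_slot(gfn, slot, PG_LEVEL_2M);

	/* A non-zero disallow_lpage prevents KVM from creating a 2MB mapping. */
	if (becomes_mixed)
		linfo->disallow_lpage++;
	else
		linfo->disallow_lpage--;
}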

> 
>    + efficient on KVM page fault (no new lookups)
>    + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
>    + straightforward to implement
>    + can (and should) be merged as part of the UPM series
> 
> I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
> completely covered (fully shared) or not covered at all (fully private), but I'm
> not 100% certain that xa_for_each_range() works the way I think it does.


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-06-30 11:03   ` Kai Huang
@ 2022-07-14 18:05     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-14 18:05 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Thu, Jun 30, 2022 at 11:03:56PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > TDX introduced a new ETP, Secure-EPT, in addition to the existing EPT.
> > Secure-EPT maps protected guest memory, which is called private. Since
> > Secure-EPT page tables is also protected, those page tables is also called
> > private.  The existing EPT is often called shared EPT to distinguish from
> > Secure-EPT.  And also page tables for share EPT is also called shared.
> 
> Does this patch has anything to do with secure-EPT?
> 
> > 
> > Virtualization Exception, #VE, is a new processor exception in VMX non-root
> 
> #VE isn't new.  It's already in pre-TDX public spec AFAICT.
> 
> > operation.  In certain virtualizatoin-related conditions, #VE is injected
> > into guest instead of exiting from guest to VMM so that guest is given a
> > chance to inspect it.  One important one is EPT violation.  When
> > "ETP-violation #VE" VM-execution is set, "#VE suppress bit" in EPT entry
> > is cleared, #VE is injected instead of EPT violation.
> 
> We already know such fact based on pre-TDX public spec.  Instead of repeating it
> here, why not focusing on saying what's new in TDX, so your below paragraph of
> setting a non-zero value for non-present SPTE can be justified?

Ok, will drop those two paragraphs above.


> > Because guest memory is protected with TDX, VMM can't parse instructions
> > in the guest memory.  Instead, MMIO hypercall is used for guest to pass
> > necessary information to VMM.
> > 
> > To make unmodified device driver work, guest TD expects #VE on accessing
> > shared GPA.  The #VE handler converts MMIO access into MMIO hypercall with
> > the EPT entry of enabled "#VE" by clearing "suppress #VE" bit.  Before VMM
> > enabling #VE, it needs to figure out the given GPA is for MMIO by EPT
> > violation.  
> > 
> 
> As I said above, before here, you need to explain in TDX VMCS is controlled by
> the TDX module and it always sets the "EPT-violation #VE" in execution control
> bit.
> 
> > So the execution flow looks like
> > 
> > - Allocate unused shared EPT entry with suppress #VE bit set.
> > - EPT violation on that GPA.
> > - VMM figures out the faulted GPA is for MMIO.
> > - VMM clears the suppress #VE bit.
> > - Guest TD gets #VE, and converts MMIO access into MMIO hypercall.
> > - If the GPA maps guest memory, VMM resolves it with guest pages.
> > 
> > For both cases, SPTE needs suppress #VE" bit set initially when it
> > is allocated or zapped, therefore non-zero non-present value for SPTE
> > needs to be allowed.
> > 
> > This change requires to update FNAME(sync_page) for shadow EPT.
> > "if(!sp->spte[i])" in FNAME(sync_page) means that the spte entry is the
> > initial value.  With the introduction of shadow_nonpresent_value which can
> > be non-zero, it doesn't hold any more. Replace zero check with
> > "!is_shadow_present_pte() && !is_mmio_spte()".
> 
> I don't think you need to mention above paragraph.  It's absolutely unclear how
> is_mmio_spte() will be impacted by this patch by reading above paragraphs.
> 
> From the "execution flow" you mentioned above, you will change MMIO fault from
> EPT misconfiguration to EPT violation (in order to get #VE), so theoretically
> you may effectively disable MMIO caching, in which case, if I understand
> correctly, is_mmio_spte() always returns false.
> 
> I guess you can just change to check:
> 
> 	if (sp->spte[i] != shadow_nonpresent_value)
> 
> Anyway, IMO you can just comment in the code.
> 
> After all, what is shadow_nonpresent_value, given you haven't explained what it
> is?

I'll drop the paragraph and add a comment on the code.


> > TDP MMU uses REMOVED_SPTE = 0x5a0ULL as special constant to indicate the
> > intermediate value to indicate one thread is operating on it and the value
> > should be semi-arbitrary value.  For TDX (more correctly to use #VE), the
> > value should include suppress #VE value which is SHADOW_NONPRESENT_VALUE.
> 
> What is SHADOW_NONPRESENT_VALUE?
> 
> > Rename REMOVED_SPTE to __REMOVED_SPTE and define REMOVED_SPTE as
> > SHADOW_NONPRESENT_VALUE | REMOVED_SPTE to set suppress #VE bit.
> 
> Ditto. IMHO you don't even need to mention REMOVED_SPTE in changelog.



> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c         | 55 ++++++++++++++++++++++++++++++----
> >  arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
> >  arch/x86/kvm/mmu/spte.c        |  5 +++-
> >  arch/x86/kvm/mmu/spte.h        | 37 ++++++++++++++++++++---
> >  arch/x86/kvm/mmu/tdp_mmu.c     | 23 +++++++++-----
> >  5 files changed, 105 insertions(+), 18 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 51306b80f47c..f239b6cb5d53 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -668,6 +668,44 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> >  	}
> >  }
> >  
> > +static inline void kvm_init_shadow_page(void *page)
> > +{
> > +#ifdef CONFIG_X86_64
> > +	int ign;
> > +
> > +	WARN_ON_ONCE(shadow_nonpresent_value != SHADOW_NONPRESENT_VALUE);
> > +	asm volatile (
> > +		"rep stosq\n\t"
> > +		: "=c"(ign), "=D"(page)
> > +		: "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> > +		: "memory"
> > +	);
> > +#else
> > +	BUG();
> > +#endif
> > +}
> > +
> > +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> > +{
> > +	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> > +	int start, end, i, r;
> > +	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> > +
> > +	if (is_tdp_mmu && shadow_nonpresent_value)
> > +		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> > +
> > +	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
> > +	if (r)
> > +		return r;
> > +
> > +	if (is_tdp_mmu && shadow_nonpresent_value) {
> > +		end = kvm_mmu_memory_cache_nr_free_objects(mc);
> > +		for (i = start; i < end; i++)
> > +			kvm_init_shadow_page(mc->objects[i]);
> > +	}
> 
> I think you can just extend this to legacy MMU too, but not only TDP MMU.
> 
> After all, before this patch, where have you declared that TDX only supports TDP
> MMU?  This is only enforced in:
> 
> 	[PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX
> 
> Which is 7 patches later.
> 
> Also, shadow_nonpresent_value is only used in couple of places, while
> SHADOW_NONPRESENT_VALUE is used directly in more places.  Does it make more
> sense to always use shadow_nonpresent_value, instead of using
> SHADOW_NONPRESENT_VALUE?
> 
> Similar to other shadow values, we can provide a function to let caller
> (VMX/SVM) to decide whether it wants to use non-zero for non-present SPTE.
> 
> 	void kvm_mmu_set_non_present_value(u64 value)
> 	{
> 		shadow_nonpresent_value = value;
> 	}

As you pointed out, that logic is independent of TDP MMU vs legacy MMU,
so I'll remove is_tdp_mmu.  I'll also drop shadow_nonpresent_value and use
SHADOW_NONPRESENT_VALUE directly.

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-06-27 21:53 ` [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE isaku.yamahata
  2022-06-30 11:03   ` Kai Huang
  2022-07-08  5:18   ` Yuan Yao
@ 2022-07-14 18:41   ` Isaku Yamahata
  2022-07-20  2:44     ` Kai Huang
  2022-07-20  3:12     ` Kai Huang
  2 siblings, 2 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-14 18:41 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson, Kai Huang, Yuan Yao

Thanks for the review.  Here is the updated version.

From f1ee540d62ba13511b2c7d3db7662e32bd263e48 Mon Sep 17 00:00:00 2001
Message-Id: <f1ee540d62ba13511b2c7d3db7662e32bd263e48.1657823906.git.isaku.yamahata@intel.com>
In-Reply-To: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1657823906.git.isaku.yamahata@intel.com>
References: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1657823906.git.isaku.yamahata@intel.com>
From: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Mon, 29 Jul 2019 19:23:46 -0700
Subject: [PATCH 036/304] KVM: x86/mmu: Allow non-zero value for non-present
 SPTE

TDX introduces a new EPT, Secure-EPT, in addition to the existing EPT.
Secure-EPT maps protected guest memory, which is called private.  Since
Secure-EPT page tables are also protected, those page tables are also
called private.  The existing EPT is often called shared EPT to
distinguish it from Secure-EPT, and its page tables are likewise called
shared.

TDX module enables #VE injection by setting "EPT-violation #VE" in
secondary processor-based VM-execution controls of TD VMCS.  It also sets
"suppress #VE" bit in Secure-EPT so that EPT violation on Secure-EPT causes
exit to VMM.

Because guest memory is protected with TDX, VMM can't parse instructions in
the guest memory.  Instead, MMIO hypercall is used for guest TD to pass
necessary information to VMM.  To make unmodified device driver work, guest
TD expects #VE on accessing shared GPA for MMIO. The #VE handler of guest
TD converts MMIO access into MMIO hypercall.  To trigger #VE in guest TD,
VMM needs to clear "suppress #VE" bit in shared EPT entry that corresponds
to MMIO address.

So the execution flow related to MMIO is as follows:

- TDX module sets "EPT-violation #VE" in secondary processor-based
  VM-execution controls of TD VMCS.
- Allocate page for shared EPT PML4E page. Shared EPT entries are
  initialized with suppress #VE bit set.  Update the EPTP pointer.
- TD accesses a GPA for MMIO to trigger EPT violation.  It exits to VMM with
  EPT violation due to suppress #VE bit of EPT entries of PML4E page.
- VMM figures out the faulted GPA is for MMIO
- start shared EPT page table walk.
- Allocate non-leaf EPT pages for the shared EPT.
- Allocate leaf EPT page for the shared EPT and initialize EPT entries with
  suppress #VE bit set.
- VMM clears the suppress #VE bit for faulted GPA for MMIO.
  Note that the leaf EPT page has 512 SPTEs, and the other 511 SPTE entries
  need to keep the "suppress #VE" bit set because the GPAs for those SPTEs
  are not known to be MMIO (that would require further lookups).
  If GPA is a guest page, link the guest page from the leaf SPTE entry.
- resume TD vcpu.
- Guest TD gets #VE, and converts MMIO access into MMIO hypercall.
- If the GPA maps guest memory, VMM resolves it with guest pages.

SPTEs for the shared EPT need the "suppress #VE" bit set initially when
they are allocated or zapped, therefore a non-zero non-present value for
SPTEs needs to be allowed.

The TDP MMU uses REMOVED_SPTE = 0x5a0ULL as a special intermediate value
to indicate that one thread is operating on the SPTE; the value only needs
to be semi-arbitrary.  For TDX (more exactly, to use #VE), the value must
also include the "suppress #VE" bit.  Rename REMOVED_SPTE to
__REMOVED_SPTE and define REMOVED_SPTE as (__REMOVED_SPTE | "suppress #VE"
bit).

For simplicity, the "suppress #VE" bit is set unconditionally for
non-present SPTEs on X86_64.  Because the "suppress #VE" bit (bit 63) of a
non-present SPTE is ignored in the non-TD case (AMD CPUs, or Intel VMX
with "EPT-violation #VE" cleared), the functionality shouldn't change.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         | 71 ++++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
 arch/x86/kvm/mmu/spte.c        |  5 ++-
 arch/x86/kvm/mmu/spte.h        | 28 +++++++++++++-
 arch/x86/kvm/mmu/tdp_mmu.c     | 23 +++++++----
 5 files changed, 116 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 51306b80f47c..992f31458f94 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -668,6 +668,55 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 	}
 }
 
+#ifdef CONFIG_X86_64
+static inline void kvm_init_shadow_page(void *page)
+{
+	int ign;
+
+	/*
+	 * AMD: "suppress #VE" bit is ignored
+	 * Intel non-TD(VMX): "suppress #VE" bit is ignored because
+	 *   EPT_VIOLATION_VE isn't set.
+	 * guest TD: TDX module sets EPT_VIOLATION_VE
+	 *   conventional SEPT: "suppress #VE" bit must be set to get EPT violation
+	 *   private SEPT: "suppress #VE" bit is ignored.  CPU doesn't walk it
+	 *
+	 * For simplicity, unconditionally initialize SPTEs to set "suppress #VE".
+	 */
+	asm volatile ("rep stosq\n\t"
+		      : "=c"(ign), "=D"(page)
+		      : "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
+		      : "memory"
+	);
+}
+
+static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
+	int start, end, i, r;
+
+	start = kvm_mmu_memory_cache_nr_free_objects(mc);
+	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
+
+	/*
+	 * Note, topup may have allocated objects even if it failed to allocate
+	 * the minimum number of objects required to make forward progress _at
+	 * this time_.  Initialize newly allocated objects even on failure, as
+	 * userspace can free memory and rerun the vCPU in response to -ENOMEM.
+	 */
+	end = kvm_mmu_memory_cache_nr_free_objects(mc);
+	for (i = start; i < end; i++)
+		kvm_init_shadow_page(mc->objects[i]);
+	return r;
+}
+#else
+static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
+{
+	return kvm_mmu_topup_memory_cache(vcpu->arch.mmu_shadow_page_cache,
+					  PT64_ROOT_MAX_LEVEL);
+}
+#endif /* CONFIG_X86_64 */
+
 static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
 	int r;
@@ -677,8 +726,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
-				       PT64_ROOT_MAX_LEVEL);
+	r = mmu_topup_shadow_page_cache(vcpu);
 	if (r)
 		return r;
 	if (maybe_indirect) {
@@ -5654,7 +5702,24 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
 
-	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	/*
+	 * When X86_64, initial SPTE entries are initialized with
+	 * SHADOW_NONPRESENT_VALUE.  Otherwise zeroed.  See
+	 * mmu_topup_shadow_page_cache().
+	 *
+	 * Shared EPTEs need to be initialized with SUPPRESS_VE=1, otherwise
+	 * not-present EPT violations would be reflected into the guest by
+	 * hardware as #VE exceptions.  This is handled by initializing page
+	 * allocations via kvm_init_shadow_page() during cache topup.
+	 * In that case, telling the page allocation to zero-initialize the page
+	 * would be wasted effort.
+	 *
+	 * The initialization is harmless for S-EPT entries because KVM's copy
+	 * of the S-EPT isn't consumed by hardware, and because under the hood
+	 * S-EPT entries should never #VE.
+	 */
+	if (!IS_ENABLED(CONFIG_X86_64))
+		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
 
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fe35d8fd3276..964ec76579f0 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1031,7 +1031,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		gpa_t pte_gpa;
 		gfn_t gfn;
 
-		if (!sp->spt[i])
+		/* spt[i] has initial value of shadow page table allocation */
+		if (sp->spt[i] != SHADOW_NONPRESENT_VALUE)
 			continue;
 
 		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cda1851ec155..bd441458153f 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -36,6 +36,9 @@ u64 __read_mostly shadow_present_mask;
 u64 __read_mostly shadow_me_value;
 u64 __read_mostly shadow_me_mask;
 u64 __read_mostly shadow_acc_track_mask;
+#ifdef CONFIG_X86_64
+u64 __read_mostly shadow_nonpresent_value;
+#endif
 
 u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
@@ -360,7 +363,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 	 * not set any RWX bits.
 	 */
 	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
-	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
+	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
 		mmio_value = 0;
 
 	if (!mmio_value)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 0127bb6e3c7d..f5fd22f6bf5f 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -140,6 +140,19 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
 
 #define MMIO_SPTE_GEN_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
 
+/*
+ * non-present SPTE value for both VMX and SVM for TDP MMU.
+ * For SVM NPT, for non-present spte (bit 0 = 0), other bits are ignored.
+ * For VMX EPT, bit 63 is ignored if #VE is disabled.
+ *              bit 63 is #VE suppress if #VE is enabled.
+ */
+#ifdef CONFIG_X86_64
+#define SHADOW_NONPRESENT_VALUE	BIT_ULL(63)
+static_assert(!(SHADOW_NONPRESENT_VALUE & SPTE_MMU_PRESENT_MASK));
+#else
+#define SHADOW_NONPRESENT_VALUE	0ULL
+#endif
+
 extern u64 __read_mostly shadow_host_writable_mask;
 extern u64 __read_mostly shadow_mmu_writable_mask;
 extern u64 __read_mostly shadow_nx_mask;
@@ -178,16 +191,27 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
  * non-present intermediate value. Other threads which encounter this value
  * should not modify the SPTE.
  *
+ * On X86_64, SHADOW_NONPRESENT_VALUE (the "suppress #VE" bit) is set because
+ * "EPT-violation #VE" in the secondary VM-execution controls may be enabled.
+ * Because the TDX module sets "EPT-violation #VE" for a TD, the "suppress #VE"
+ * bit needs to be set for the conventional EPT as well.
+ *
  * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
  * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
  * vulnerability.  Use only low bits to avoid 64-bit immediates.
  *
  * Only used by the TDP MMU.
  */
-#define REMOVED_SPTE	0x5a0ULL
+#define __REMOVED_SPTE	0x5a0ULL
 
 /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
-static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
+static_assert(!(__REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
+
+/*
+ * See above comment around __REMOVED_SPTE.  REMOVED_SPTE is the actual
+ * intermediate value set for the removed SPTE.  It sets the "suppress #VE" bit.
+ */
+#define REMOVED_SPTE	(SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE)
 
 static inline bool is_removed_spte(u64 spte)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7b9265d67131..2ca03ec3bf52 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -692,8 +692,16 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	 * overwrite the special removed SPTE value. No bookkeeping is needed
 	 * here since the SPTE is going from non-present to non-present.  Use
 	 * the raw write helper to avoid an unnecessary check on volatile bits.
+	 *
+	 * Set the non-present value to SHADOW_NONPRESENT_VALUE, rather than 0.
+	 * When TDX is enabled, the TDX module always enables
+	 * "EPT-violation #VE", so KVM needs to set the "suppress #VE" bit in
+	 * EPT table entries in order to get a real EPT violation rather than
+	 * a TDVMCALL.  Writing SHADOW_NONPRESENT_VALUE (which has the
+	 * "suppress #VE" bit set) keeps the bit set when EPT table entries
+	 * are zapped.
 	 */
-	__kvm_tdp_mmu_write_spte(iter->sptep, 0);
+	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
 
 	return 0;
 }
@@ -870,8 +878,8 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 
 		if (!shared)
-			tdp_mmu_set_spte(kvm, &iter, 0);
-		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
+			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
+		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
 			goto retry;
 	}
 }
@@ -927,8 +935,9 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
 		return false;
 
-	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			   sp->gfn, sp->role.level + 1, true, true);
+	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
+			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
+			   true, true);
 
 	return true;
 }
@@ -965,7 +974,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		tdp_mmu_set_spte(kvm, &iter, 0);
+		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
 
@@ -1330,7 +1339,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * invariant that the PFN of a present leaf SPTE can never change.
 	 * See __handle_changed_spte().
 	 */
-	tdp_mmu_set_spte(kvm, iter, 0);
+	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
 
 	if (!pte_write(range->pte)) {
 		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
-- 
2.25.1



-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 056/102] KVM: x86/mmu: steal software usable git to record if GFN is for shared or not
  2022-06-27 21:53 ` [PATCH v7 056/102] KVM: x86/mmu: steal software usable git to record if GFN is for shared or not isaku.yamahata
@ 2022-07-18  8:37   ` Yuan Yao
  0 siblings, 0 replies; 219+ messages in thread
From: Yuan Yao @ 2022-07-18  8:37 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:48PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>

Subject: s/git/bit

>
> With TDX, all GFNs are private at guest boot time.  At run time guest TD
> can explicitly change it to shared from private or vice-versa by MapGPA
> hypercall.  If it's specified, the given GFN can't be used as otherwise.
> That's is, if a guest tells KVM that the GFN is shared, it can't be used
> as private.  or vice-versa.
>
> Steal software usable bit, SPTE_SHARED_MASK, for it from MMIO counter to
> record it.  Use the bit SPTE_SHARED_MASK in shared or private EPT to
> determine which mapping, shared or private, is allowed.  If requested
> mapping isn't allowed, return RET_PF_RETRY to wait for other vcpu to change
> it.  The bit is recorded in both shared and private shadow page to avoid
> traverse one more shadow page when resolving KVM page fault.
>
> The bit needs to be kept over zapping the EPT entry.  Currently the EPT
> entry is initialized SHADOW_NONPRESENT_VALUE unconditionally to clear
> SPTE_SHARED_MASK bit.  To carry SPTE_SHARED_MASK bit, introduce a helper
> function to get initial value for zapped entry with SPTE_SHARED_MASK bit.
> Replace SHADOW_NONPRESENT_VALUE with it.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/spte.h    | 17 +++++++---
>  arch/x86/kvm/mmu/tdp_mmu.c | 65 ++++++++++++++++++++++++++++++++------
>  2 files changed, 68 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 96312ab4fffb..7c1aaf0e963e 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -14,6 +14,9 @@
>   */
>  #define SPTE_MMU_PRESENT_MASK		BIT_ULL(11)
>
> +/* Masks that used to track for shared GPA **/
> +#define SPTE_SHARED_MASK		BIT_ULL(62)
> +
>  /*
>   * TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also
>   * be restricted to using write-protection (for L2 when CPU dirty logging, i.e.
> @@ -104,7 +107,7 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
>   * the memslots generation and is derived as follows:
>   *
>   * Bits 0-7 of the MMIO generation are propagated to spte bits 3-10
> - * Bits 8-18 of the MMIO generation are propagated to spte bits 52-62
> + * Bits 8-18 of the MMIO generation are propagated to spte bits 52-61

Should be 8-17.

>   *
>   * The KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag is intentionally not included in
>   * the MMIO generation number, as doing so would require stealing a bit from
> @@ -118,7 +121,7 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
>  #define MMIO_SPTE_GEN_LOW_END		10
>
>  #define MMIO_SPTE_GEN_HIGH_START	52
> -#define MMIO_SPTE_GEN_HIGH_END		62
> +#define MMIO_SPTE_GEN_HIGH_END		61
>
>  #define MMIO_SPTE_GEN_LOW_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \
>  						    MMIO_SPTE_GEN_LOW_START)
> @@ -131,7 +134,7 @@ static_assert(!(SPTE_MMU_PRESENT_MASK &
>  #define MMIO_SPTE_GEN_HIGH_BITS		(MMIO_SPTE_GEN_HIGH_END - MMIO_SPTE_GEN_HIGH_START + 1)
>
>  /* remember to adjust the comment above as well if you change these */
> -static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
> +static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 10);
>
>  #define MMIO_SPTE_GEN_LOW_SHIFT		(MMIO_SPTE_GEN_LOW_START - 0)
>  #define MMIO_SPTE_GEN_HIGH_SHIFT	(MMIO_SPTE_GEN_HIGH_START - MMIO_SPTE_GEN_LOW_BITS)
> @@ -208,6 +211,7 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>  /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
>  static_assert(!(__REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
>  static_assert(!(__REMOVED_SPTE & SHADOW_NONPRESENT_VALUE));
> +static_assert(!(__REMOVED_SPTE & SPTE_SHARED_MASK));
>
>  /*
>   * See above comment around __REMOVED_SPTE.  REMOVED_SPTE is the actual
> @@ -217,7 +221,12 @@ static_assert(!(__REMOVED_SPTE & SHADOW_NONPRESENT_VALUE));
>
>  static inline bool is_removed_spte(u64 spte)
>  {
> -	return spte == REMOVED_SPTE;
> +	return (spte & ~SPTE_SHARED_MASK) == REMOVED_SPTE;
> +}
> +
> +static inline u64 spte_shared_mask(u64 spte)
> +{
> +	return spte & SPTE_SHARED_MASK;
>  }
>
>  /*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index fef6246086a8..4f279700b3cc 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -758,6 +758,11 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  	return 0;
>  }
>
> +static u64 shadow_nonpresent_spte(u64 old_spte)
> +{
> +	return SHADOW_NONPRESENT_VALUE | spte_shared_mask(old_spte);
> +}
> +
>  static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>  					  struct tdp_iter *iter)
>  {
> @@ -791,7 +796,8 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>  	 * SHADOW_NONPRESENT_VALUE (which sets "suppress #VE" bit) so it
>  	 * can be set when EPT table entries are zapped.
>  	 */
> -	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
> +	__kvm_tdp_mmu_write_spte(iter->sptep,
> +			       shadow_nonpresent_spte(iter->old_spte));
>
>  	return 0;
>  }
> @@ -975,8 +981,11 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
>  			continue;
>
>  		if (!shared)
> -			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
> -		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
> +			tdp_mmu_set_spte(kvm, &iter,
> +					 shadow_nonpresent_spte(iter.old_spte));
> +		else if (tdp_mmu_set_spte_atomic(
> +				 kvm, &iter,
> +				 shadow_nonpresent_spte(iter.old_spte)))
>  			goto retry;
>  	}
>  }
> @@ -1033,7 +1042,8 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  		return false;
>
>  	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
> -			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
> +			   shadow_nonpresent_spte(old_spte),
> +			   sp->gfn, sp->role.level + 1,
>  			   true, true, is_private_sp(sp));
>
>  	return true;
> @@ -1075,11 +1085,20 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>  			continue;
>  		}
>
> +		/*
> +		 * SPTE_SHARED_MASK is stored as 4K granularity.  The
> +		 * information is lost if we delete upper level SPTE page.
> +		 * TODO: support large page.
> +		 */
> +		if (kvm_gfn_shared_mask(kvm) && iter.level > PG_LEVEL_4K)
> +			continue;
> +
>  		if (!is_shadow_present_pte(iter.old_spte) ||
>  		    !is_last_spte(iter.old_spte, iter.level))
>  			continue;
>
> -		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
> +		tdp_mmu_set_spte(kvm, &iter,
> +				 shadow_nonpresent_spte(iter.old_spte));
>  		flush = true;
>  	}
>
> @@ -1195,18 +1214,44 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  	gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
>
>  	WARN_ON(sp->role.level != fault->goal_level);
> +	WARN_ON(is_private_sptep(iter->sptep) != fault->is_private);
>
> -	/* TDX shared GPAs are no executable, enforce this for the SDV. */
> -	if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
> -		pte_access &= ~ACC_EXEC_MASK;
> +	if (kvm_gfn_shared_mask(vcpu->kvm)) {
> +		if (fault->is_private) {
> +			/*
> +			 * SPTE allows only RWX mapping. PFN can't be mapped it
> +			 * as READONLY in GPA.
> +			 */
> +			if (fault->slot && !fault->map_writable)
> +				return RET_PF_RETRY;
> +			/*
> +			 * This GPA is not allowed to map as private.  Let
> +			 * vcpu loop in page fault until other vcpu change it
> +			 * by MapGPA hypercall.
> +			 */
> +			if (fault->slot &&

Please consider to merge this if into above "if (fault->slot) {}"

> +				spte_shared_mask(iter->old_spte))
> +				return RET_PF_RETRY;
> +		} else {
> +			/* This GPA is not allowed to map as shared. */
> +			if (fault->slot &&
> +				!spte_shared_mask(iter->old_spte))
> +				return RET_PF_RETRY;
> +			/* TDX shared GPAs are no executable, enforce this. */
> +			pte_access &= ~ACC_EXEC_MASK;
> +		}
> +	}
>
>  	if (unlikely(!fault->slot))
>  		new_spte = make_mmio_spte(vcpu, gfn_unalias, pte_access);
> -	else
> +	else {
>  		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
>  				   gfn_unalias, fault->pfn, iter->old_spte,
>  				   fault->prefetch, true, fault->map_writable,
>  				   &new_spte);
> +		if (spte_shared_mask(iter->old_spte))
> +			new_spte |= SPTE_SHARED_MASK;
> +	}

The if can be eliminated:
new_spte |= spte_shared_mask(iter->old_spte);

>
>  	if (new_spte == iter->old_spte)
>  		ret = RET_PF_SPURIOUS;
> @@ -1509,7 +1554,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
>  	 * invariant that the PFN of a present * leaf SPTE can never change.
>  	 * See __handle_changed_spte().
>  	 */
> -	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
> +	tdp_mmu_set_spte(kvm, iter, shadow_nonpresent_spte(iter->old_spte));
>
>  	if (!pte_write(range->pte)) {
>  		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  2022-06-27 21:53 ` [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis isaku.yamahata
  2022-06-30 11:45   ` Kai Huang
  2022-07-05 14:06   ` Kai Huang
@ 2022-07-19  8:47   ` Isaku Yamahata
  2022-07-20  3:45     ` Kai Huang
  2 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19  8:47 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Sean Christopherson


Here is the updated one. The changes are:
- removed hunks that should be part of other patches.
- removed shadow_default_mmio_mask.
- trimmed down the commit message.

From ed6b4a076e515550878b069596cf156a1bc33514 Mon Sep 17 00:00:00 2001
Message-Id: <ed6b4a076e515550878b069596cf156a1bc33514.1658220363.git.isaku.yamahata@intel.com>
In-Reply-To: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1658220363.git.isaku.yamahata@intel.com>
References: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1658220363.git.isaku.yamahata@intel.com>
From: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Wed, 10 Jun 2020 15:46:38 -0700
Subject: [PATCH 036/306] KVM: x86/mmu: Track shadow MMIO value/mask on a
 per-VM basis

TDX will use a different shadow PTE entry value for MMIO than VMX.  Add
members to kvm_arch and track the MMIO value/mask per-VM instead of in
global variables.  By using the per-VM EPT entry value for MMIO, the
existing VMX logic keeps working.  To untangle the logic that initializes
shadow_mmio_access_mask, introduce a setter function.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  4 +++
 arch/x86/kvm/mmu.h              |  3 ++-
 arch/x86/kvm/mmu/mmu.c          |  8 +++---
 arch/x86/kvm/mmu/spte.c         | 45 +++++++++------------------------
 arch/x86/kvm/mmu/spte.h         | 10 +++-----
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++---
 arch/x86/kvm/svm/svm.c          | 11 +++++---
 arch/x86/kvm/vmx/tdx.c          |  4 +++
 arch/x86/kvm/vmx/vmx.c          | 26 +++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h      |  1 +
 10 files changed, 66 insertions(+), 52 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c47aab72a1b..39215daa8576 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1161,6 +1161,10 @@ struct kvm_arch {
 	 */
 	spinlock_t mmu_unsync_pages_lock;
 
+	bool enable_mmio_caching;
+	u64 shadow_mmio_value;
+	u64 shadow_mmio_mask;
+
 	struct list_head assigned_dev_head;
 	struct iommu_domain *iommu_domain;
 	bool iommu_noncoherent;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index ccf0ba7a6387..cfa3e658162c 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -108,7 +108,8 @@ static inline u8 kvm_get_shadow_phys_bits(void)
 	return boot_cpu_data.x86_phys_bits;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
+void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask);
+void kvm_mmu_set_mmio_access_mask(u64 mmio_access_mask);
 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
 void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5bfccfa0f50e..34240fcc45de 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2298,7 +2298,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 				return kvm_mmu_prepare_zap_page(kvm, child,
 								invalid_list);
 		}
-	} else if (is_mmio_spte(pte)) {
+	} else if (is_mmio_spte(kvm, pte)) {
 		mmu_spte_clear_no_track(spte);
 	}
 	return 0;
@@ -3079,7 +3079,7 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
 		 * and only if L1's MAXPHYADDR is inaccurate with respect to
 		 * the hardware's).
 		 */
-		if (unlikely(!enable_mmio_caching) ||
+		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
 		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
 			return RET_PF_EMULATE;
 	}
@@ -3918,7 +3918,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	if (WARN_ON(reserved))
 		return -EINVAL;
 
-	if (is_mmio_spte(spte)) {
+	if (is_mmio_spte(vcpu->kvm, spte)) {
 		gfn_t gfn = get_mmio_spte_gfn(spte);
 		unsigned int access = get_mmio_spte_access(spte);
 
@@ -4361,7 +4361,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
 {
-	if (unlikely(is_mmio_spte(*sptep))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, *sptep))) {
 		if (gfn != get_mmio_spte_gfn(*sptep)) {
 			mmu_spte_clear_no_track(sptep);
 			return true;
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 92968e5605fc..9a130dd3d6a3 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -29,8 +29,6 @@ u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 u64 __read_mostly shadow_user_mask;
 u64 __read_mostly shadow_accessed_mask;
 u64 __read_mostly shadow_dirty_mask;
-u64 __read_mostly shadow_mmio_value;
-u64 __read_mostly shadow_mmio_mask;
 u64 __read_mostly shadow_mmio_access_mask;
 u64 __read_mostly shadow_present_mask;
 u64 __read_mostly shadow_me_value;
@@ -62,10 +60,10 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
 	u64 spte = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
-	WARN_ON_ONCE(!shadow_mmio_value);
+	WARN_ON_ONCE(!vcpu->kvm->arch.shadow_mmio_value);
 
 	access &= shadow_mmio_access_mask;
-	spte |= shadow_mmio_value | access;
+	spte |= vcpu->kvm->arch.shadow_mmio_value | access;
 	spte |= gpa | shadow_nonpresent_or_rsvd_mask;
 	spte |= (gpa & shadow_nonpresent_or_rsvd_mask)
 		<< SHADOW_NONPRESENT_OR_RSVD_MASK_LEN;
@@ -337,9 +335,8 @@ u64 mark_spte_for_access_track(u64 spte)
 	return spte;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
+void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask)
 {
-	BUG_ON((u64)(unsigned)access_mask != access_mask);
 	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
 
 	if (!enable_mmio_caching)
@@ -366,12 +363,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
 		mmio_value = 0;
 
-	if (!mmio_value)
-		enable_mmio_caching = false;
-
-	shadow_mmio_value = mmio_value;
-	shadow_mmio_mask  = mmio_mask;
-	shadow_mmio_access_mask = access_mask;
+	kvm->arch.enable_mmio_caching = !!mmio_value;
+	kvm->arch.shadow_mmio_value = mmio_value;
+	kvm->arch.shadow_mmio_mask = mmio_mask;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
@@ -399,20 +393,12 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
 	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
 	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
 	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
-
-	/*
-	 * EPT Misconfigurations are generated if the value of bits 2:0
-	 * of an EPT paging-structure entry is 110b (write/execute).
-	 */
-	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
-				   VMX_EPT_RWX_MASK, 0);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks);
 
 void kvm_mmu_reset_all_pte_masks(void)
 {
 	u8 low_phys_bits;
-	u64 mask;
 
 	shadow_phys_bits = kvm_get_shadow_phys_bits();
 
@@ -452,18 +438,11 @@ void kvm_mmu_reset_all_pte_masks(void)
 
 	shadow_host_writable_mask = DEFAULT_SPTE_HOST_WRITABLE;
 	shadow_mmu_writable_mask  = DEFAULT_SPTE_MMU_WRITABLE;
+}
 
-	/*
-	 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
-	 * PFEC.RSVD=1 on MMIO accesses.  64-bit PTEs (PAE, x86-64, and EPT
-	 * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
-	 * 52-bit physical addresses then there are no reserved PA bits in the
-	 * PTEs and so the reserved PA approach must be disabled.
-	 */
-	if (shadow_phys_bits < 52)
-		mask = BIT_ULL(51) | PT_PRESENT_MASK;
-	else
-		mask = 0;
-
-	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
+void kvm_mmu_set_mmio_access_mask(u64 mmio_access_mask)
+{
+	BUG_ON((u64)(unsigned)mmio_access_mask != mmio_access_mask);
+	shadow_mmio_access_mask = mmio_access_mask;
 }
+EXPORT_SYMBOL(kvm_mmu_set_mmio_access_mask);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index f5fd22f6bf5f..99bce92b596e 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -5,8 +5,6 @@
 
 #include "mmu_internal.h"
 
-extern bool __read_mostly enable_mmio_caching;
-
 /*
  * A MMU present SPTE is backed by actual memory and may or may not be present
  * in hardware.  E.g. MMIO SPTEs are not considered present.  Use bit 11, as it
@@ -160,8 +158,6 @@ extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 extern u64 __read_mostly shadow_user_mask;
 extern u64 __read_mostly shadow_accessed_mask;
 extern u64 __read_mostly shadow_dirty_mask;
-extern u64 __read_mostly shadow_mmio_value;
-extern u64 __read_mostly shadow_mmio_mask;
 extern u64 __read_mostly shadow_mmio_access_mask;
 extern u64 __read_mostly shadow_present_mask;
 extern u64 __read_mostly shadow_me_value;
@@ -228,10 +224,10 @@ static inline bool is_removed_spte(u64 spte)
  */
 extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
-static inline bool is_mmio_spte(u64 spte)
+static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
 {
-	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
-	       likely(enable_mmio_caching);
+	return (spte & kvm->arch.shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
+		likely(kvm->arch.enable_mmio_caching);
 }
 
 static inline bool is_shadow_present_pte(u64 pte)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2ca03ec3bf52..82f1bfac7ee6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -569,8 +569,8 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 		 * impact the guest since both the former and current SPTEs
 		 * are nonpresent.
 		 */
-		if (WARN_ON(!is_mmio_spte(old_spte) &&
-			    !is_mmio_spte(new_spte) &&
+		if (WARN_ON(!is_mmio_spte(kvm, old_spte) &&
+			    !is_mmio_spte(kvm, new_spte) &&
 			    !is_removed_spte(new_spte)))
 			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
 			       "should not be replaced with another,\n"
@@ -1108,7 +1108,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	}
 
 	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
-	if (unlikely(is_mmio_spte(new_spte))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
 		vcpu->stat.pf_mmio_spte_created++;
 		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
 				     new_spte);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f01821f48bfd..0f63257161a6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -198,6 +198,7 @@ module_param(dump_invalid_vmcb, bool, 0644);
 bool intercept_smi = true;
 module_param(intercept_smi, bool, 0444);
 
+static u64 __read_mostly svm_shadow_mmio_mask;
 
 static bool svm_gp_erratum_intercept = true;
 
@@ -4685,6 +4686,9 @@ static bool svm_is_vm_type_supported(unsigned long type)
 
 static int svm_vm_init(struct kvm *kvm)
 {
+	kvm_mmu_set_mmio_spte_mask(kvm, svm_shadow_mmio_mask,
+				   svm_shadow_mmio_mask);
+
 	if (!pause_filter_count || !pause_filter_thresh)
 		kvm->arch.pause_in_guest = true;
 
@@ -4834,7 +4838,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 static __init void svm_adjust_mmio_mask(void)
 {
 	unsigned int enc_bit, mask_bit;
-	u64 msr, mask;
+	u64 msr;
 
 	/* If there is no memory encryption support, use existing mask */
 	if (cpuid_eax(0x80000000) < 0x8000001f)
@@ -4861,9 +4865,8 @@ static __init void svm_adjust_mmio_mask(void)
 	 *
 	 * If the mask bit location is 52 (or above), then clear the mask.
 	 */
-	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
-
-	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
+	svm_shadow_mmio_mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
+	kvm_mmu_set_mmio_access_mask(PT_WRITABLE_MASK | PT_USER_MASK);
 }
 
 static __init void svm_set_cpu_caps(void)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 36d2127cb7b7..52fb54880f9b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -7,6 +7,7 @@
 #include "x86_ops.h"
 #include "tdx.h"
 #include "x86.h"
+#include "mmu.h"
 
 #undef pr_fmt
 #define pr_fmt(fmt) "tdx: " fmt
@@ -276,6 +277,9 @@ int tdx_vm_init(struct kvm *kvm)
 	int ret, i;
 	u64 err;
 
+	kvm_mmu_set_mmio_spte_mask(kvm, vmx_shadow_mmio_mask,
+				   vmx_shadow_mmio_mask);
+
 	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
 	kvm->max_vcpus = 0;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e129ee663498..88e893fdffe8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -141,6 +141,8 @@ module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
 extern bool __read_mostly allow_smaller_maxphyaddr;
 module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
 
+u64 __ro_after_init vmx_shadow_mmio_mask;
+
 #define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD)
 #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
 #define KVM_VM_CR0_ALWAYS_ON				\
@@ -7359,6 +7361,17 @@ int vmx_vm_init(struct kvm *kvm)
 	if (!ple_gap)
 		kvm->arch.pause_in_guest = true;
 
+	/*
+	 * EPT Misconfigurations can be generated if the value of bits 2:0
+	 * of an EPT paging-structure entry is 110b (write/execute).
+	 */
+	if (enable_ept)
+		kvm_mmu_set_mmio_spte_mask(kvm, VMX_EPT_MISCONFIG_WX_VALUE,
+					   VMX_EPT_RWX_MASK);
+	else
+		kvm_mmu_set_mmio_spte_mask(kvm, vmx_shadow_mmio_mask,
+					   vmx_shadow_mmio_mask);
+
 	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
 		switch (l1tf_mitigation) {
 		case L1TF_MITIGATION_OFF:
@@ -8358,6 +8371,19 @@ int __init vmx_init(void)
 	if (!enable_ept)
 		allow_smaller_maxphyaddr = true;
 
+	/*
+	 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
+	 * PFEC.RSVD=1 on MMIO accesses.  64-bit PTEs (PAE, x86-64, and EPT
+	 * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
+	 * 52-bit physical addresses then there are no reserved PA bits in the
+	 * PTEs and so the reserved PA approach must be disabled.
+	 */
+	if (kvm_get_shadow_phys_bits() < 52)
+		vmx_shadow_mmio_mask = BIT_ULL(51) | PT_PRESENT_MASK;
+	else
+		vmx_shadow_mmio_mask = 0;
+	kvm_mmu_set_mmio_access_mask(0);
+
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 7e38c7b756d4..279e5360c555 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -13,6 +13,7 @@ void hv_vp_assist_page_exit(void);
 void __init vmx_init_early(void);
 int __init vmx_init(void);
 void vmx_exit(void);
+extern u64 __ro_after_init vmx_shadow_mmio_mask;
 
 __init int vmx_cpu_has_kvm_support(void);
 __init int vmx_disabled_by_bios(void);
-- 
2.25.1


-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level
  2022-06-30 12:27   ` Kai Huang
@ 2022-07-19 10:26     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19 10:26 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Fri, Jul 01, 2022 at 12:27:24AM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > TODO: This is a transient workaround patch until the large page support for
> > TDX is implemented.  Support large page for TDX and remove this patch.
> 
> I don't understand.  How does this patch have anything to do with what you are
> talking about here?
> 
> If you want to remove this patch later, then why not just explain the reason to
> remove when you actually have that patch?
> 
> > 
> > At this point, large page for TDX isn't supported, and need to allow guest
> > TD to work only with 4K pages.  On the other hand, conventional VMX VMs
> > should continue to work with large page.  Allow per-VM override of the TDP
> > max page level.
> 
> At which point/previous patch have you made/declared "large page for TDX isn't
> supported"?
> 
> If you want to declare you don't want to support large page for TDX, IMHO just
> declare it here, for instance:
> 
> "For simplicity, only support 4K page for TD guest."
>   
> > 
> > In the existing x86 KVM MMU code, there is already max_level member in
> > struct kvm_page_fault with KVM_MAX_HUGEPAGE_LEVEL initial value.  The KVM
> > page fault handler denies page size larger than max_level.
> > 
> > Add per-VM member to indicate the allowed maximum page size with
> > KVM_MAX_HUGEPAGE_LEVEL as default value and initialize max_level in struct
> > kvm_page_fault with it.  For the guest TD, the set per-VM value for allows
> > maximum page size to 4K page size.  Then only allowed page size is 4K.  It
> > means large page is disabled.
> 
> To me it's overcomplicated.  You just need simple sentences for such simple
> infrastructural patch.  For instance:
> 
> "TDX requires special handling to support large private page.  For simplicity,
> only support 4K page for TD guest for now.  Add per-VM maximum page level
> support to support different maximum page sizes for TD guest and conventional
> VMX guest."
> 
> Just for your reference.

Thanks for the sentences. I'll replace the commit message with yours.

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu
  2022-07-01 10:41   ` Kai Huang
@ 2022-07-19 11:06     ` Isaku Yamahata
  2022-07-19 23:17       ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19 11:06 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Sean Christopherson

On Fri, Jul 01, 2022 at 10:41:08PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > For kvm mmu that has shared bit mask, zap only leaf SPTEs when
> > deleting/moving a memslot.  The existing kvm_mmu_zap_memslot() depends on
> 
> Unless I am mistaken, I don't see there's an 'existing' kvm_mmu_zap_memslot().

Oops, it should be kvm_tdp_mmu_invalidate_all_roots().


> > role.invalid with read lock of mmu_lock so that other vcpu can operate on
> > kvm mmu concurrently. 
> > 
> 
> > Mark the root page table invalid, unlink it from page
> > table pointer of CPU, process the page table.  
> > 
> 
> Are you talking about the behaviour of existing code, or the change you are
> going to make?  Looks like you mean the latter but I believe it's the former.


The existing code.  It should be "It marks ...".


> > It doesn't work for private
> > page table to unlink the root page table because it requires all SPTE entry
> > to be non-present. 
> > 
> 
> I don't think we can truly *unlink* the private root page table from secure
> EPTP, right?  The EPTP (root table) is fixed (and hidden) during TD's runtime.
> 
> I guess you are trying to say: removing/unlinking one secure-EPT page requires
> removing/unlinking all its children first? 

That's right. I'll update it as follows.
                          

> So the reason to only zap leaf is we cannot truly unlink the private root page
> table, correct?  Sorry your changelog is not obvious to me.

The reason is, to unlink a page table from the parent's SPTE, all SPTEs of the
page table need to be non-present.

Here is the updated commit message.

KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu

For kvm mmu that has shared bit mask, zap only leaf SPTEs when
deleting/moving a memslot.  The existing kvm_tdp_mmu_invalidate_all_roots()
depends on role.invalid with read lock of mmu_lock so that other vcpus can
operate on kvm mmu concurrently.  It marks the root page table invalid and
zaps SPTEs of the root page tables.

It doesn't work to unlink a private page table from the parent's SPTE entry
because it requires all SPTE entries of the page table to be non-present.
Instead, take the write-lock of mmu_lock and zap only leaf SPTEs for kvm
mmu with shared bit mask.

> > Instead, with write-lock of mmu_lock and zap only leaf
> > SPTEs for kvm mmu with shared bit mask.
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 35 ++++++++++++++++++++++++++++++++++-
> >  1 file changed, 34 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 80d7c7709af3..c517c7bca105 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -5854,11 +5854,44 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> >  	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> >  }
> >  
> > +static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
> > +{
> > +	bool flush = false;
> > +
> > +	write_lock(&kvm->mmu_lock);
> > +
> > +	/*
> > +	 * Zapping non-leaf SPTEs, a.k.a. not-last SPTEs, isn't required, worst
> > +	 * case scenario we'll have unused shadow pages lying around until they
> > +	 * are recycled due to age or when the VM is destroyed.
> > +	 */
> > +	if (is_tdp_mmu_enabled(kvm)) {
> > +		struct kvm_gfn_range range = {
> > +		      .slot = slot,
> > +		      .start = slot->base_gfn,
> > +		      .end = slot->base_gfn + slot->npages,
> > +		      .may_block = false,
> > +		};
> > +
> > +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
> 
> 
> It appears you only unmap private GFNs (because the base_gfn doesn't have shared
> bit)?  I think shared mapping in this slot must be zapped too?  
>
> How is this done?  Or the kvm_tdp_mmu_unmap_gfn_range() also zaps shared
> mappings?

kvm_tdp_mmu_unmap_gfn_range() handles both private gfns and shared gfns,
as they are aliases of each other.


> It's hard to review if one patch's behaviour/logic depends on further patches.

I'll add a comment on the call.
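
Roughly something like this (illustrative only):

		/*
		 * kvm_tdp_mmu_unmap_gfn_range() zaps both the shared and the
		 * private mappings of the range, because private GFNs and
		 * shared GFNs are aliases of each other.
		 */
		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);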

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-07-08  2:23   ` Kai Huang
@ 2022-07-19 14:49     ` Isaku Yamahata
  2022-07-20  5:13       ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19 14:49 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Fri, Jul 08, 2022 at 02:23:43PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
> > to inject #VE conditionally and set #VE suppress bit in EPT entry.  For VMX
> > case, #VE isn't used.  If #VE happens for VMX, it's a bug.  To be
> > defensive (test that VMX case isn't broken), introduce option
> > ept_violation_ve_test and when it's set, set error.
> 
> I don't see why we need this patch.  It may be helpful during your test, but why
> do we need this patch for formal submission?
> 
> And for a normal guest, what prevents one vcpu from sending #VE IPI to another
> vcpu?

Paolo suggested it as follows.  Maybe it should be a kernel config option.
(I forgot to add a Suggested-by tag; I'll add it.)

https://lore.kernel.org/lkml/84d56339-4a8a-6ddb-17cb-12074588ba9c@redhat.com/

> On 3/4/22 20:48, isaku.yamahata@intel.com wrote:
> > + if (enable_ept) {
> > +  const u64 init_value = enable_tdx ? VMX_EPT_SUPPRESS_VE_BIT : 0ull;
> >     kvm_mmu_set_ept_masks(enable_ept_ad_bits,
> > -          cpu_has_vmx_ept_execute_only());
> > +          cpu_has_vmx_ept_execute_only(), init_value);
> > +  kvm_mmu_set_spte_init_value(init_value);
> > + }
> 
> I think kvm-intel.ko should use VMX_EPT_SUPPRESS_VE_BIT unconditionally 
> as the init value.  The bit is ignored anyway if the "EPT-violation #VE" 
> execution control is 0.  Otherwise looks good, but I have a couple more 
> crazy ideas:
> 
> 1) there could even be a test mode where KVM enables the execution 
> control, traps #VE in the exception bitmap, and shouts loudly if it gets 
> a #VE.  That might avoid hard-to-find bugs due to forgetting about 
> VMX_EPT_SUPPRESS_VE_BIT.
> 
> 2) or even, perhaps the init_value for the TDP MMU could set bit 63 
> _unconditionally_, because KVM always sets the NX bit on AMD hardware. 
> That would remove the whole infrastructure to keep shadow_init_value, 
> because it would be constant 0 in mmu.c and constant BIT(63) in tdp_mmu.c.
> 
> Sean, what do you think?
> 
> Paolo
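
To make idea 1) concrete, here is a hedged sketch (not the actual patch): the
SECONDARY_EXEC_EPT_VIOLATION_VE and is_ve_fault() names follow the series and
are assumptions here; X86_TRAP_VE and KVM_BUG_ON() are existing kernel symbols.

	/* At vCPU setup, only when the test-mode module param is set: */
	if (ept_violation_ve_test) {
		secondary_exec_controls_setbit(vmx,
					       SECONDARY_EXEC_EPT_VIOLATION_VE);
		vmcs_write32(EXCEPTION_BITMAP,
			     vmcs_read32(EXCEPTION_BITMAP) | (1u << X86_TRAP_VE));
	}

	/* In the exception exit handler: a #VE reaching KVM means a bug. */
	if (is_ve_fault(intr_info)) {
		KVM_BUG_ON(1, vcpu->kvm);
		return -EIO;
	}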
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX
  2022-07-11 14:56   ` Sean Christopherson
@ 2022-07-19 15:04     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19 15:04 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jul 11, 2022 at 02:56:29PM +0000,
Sean Christopherson <seanjc@google.com> wrote:

> s/Focibly/Forcibly, but that's a moot point because KVM shouldn't override
> the module param.  KVM should instead _require_ the TDP MMU to be enabled.  E.g.
> if userspace disables the TDP MMU to workaround a fatal bug, then forcing the TDP
> MMU may silently expose KVM to said bug.
> 
> And overriding tdp_enabled is just mind-boggling broken, all of the SPTE masks
> will be wrong.
> 
> On Mon, Jun 27, 2022, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > In this patch series, TDX supports only TDP MMU and doesn't support legacy
> > MMU.  Forcibly use TDP MMU for TDX irrelevant of kernel parameter to
> > disable TDP MMU.
> 
> Do not refer to the "patch series", instead phrase the statement with respect to
> what KVM support.
> 
>   Require the TDP MMU for TDX guests; the so-called "shadow" MMU does not
>   support mapping guest private memory, i.e. does not support Secure-EPT.

Thanks for the rewrite of the commit message.  Now that the TDP MMU is the
default, I'll change it accordingly.

> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/kvm/mmu/tdp_mmu.c | 9 +++++++--
> >  1 file changed, 7 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 82f1bfac7ee6..7eb41b176d1e 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -18,8 +18,13 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
> >  {
> >  	struct workqueue_struct *wq;
> >  
> > -	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
> > -		return 0;
> > +	/*
> > +	 *  Because TDX supports only TDP MMU, forcibly use TDP MMU in the case
> > +	 *  of TDX.
> > +	 */
> > +	if (kvm->arch.vm_type != KVM_X86_TDX_VM &&
> > +		(!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)))
> > +		return false;
> 
> Yeah, no.
> 
> 	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
> 		return kvm->arch.vm_type == KVM_X86_TDX_VM ? -EINVAL : 0;

I'll use -EOPNOTSUPP instead of -EINVAL.
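
For reference, the resulting check would then look roughly like this (a sketch
combining Sean's suggestion with the error-code change above; the final patch
may differ):

	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
		return kvm->arch.vm_type == KVM_X86_TDX_VM ? -EOPNOTSUPP : 0;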
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-07-01 11:12   ` Kai Huang
@ 2022-07-19 15:35     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19 15:35 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Fri, Jul 01, 2022 at 11:12:44PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > For private GPA, CPU refers a private page table whose contents are
> > encrypted.  The dedicated APIs to operate on it (e.g. updating/reading its
> > PTE entry) are used and their cost is expensive.
> > 
> > When KVM resolves KVM page fault, it walks the page tables.  To reuse the
> > existing KVM MMU code and mitigate the heavy cost to directly walk
> > encrypted private page table, allocate a more page to mirror the existing
> > KVM page table.  Resolve KVM page fault with the existing code, and do
> > additional operations necessary for the mirrored private page table.  To
> > distinguish such cases, the existing KVM page table is called a shared page
> > table (i.e. no mirrored private page table), and the KVM page table with
> > mirrored private page table is called a private page table.  The
> > relationship is depicted below.
> > 
> > Add private pointer to struct kvm_mmu_page for mirrored private page table
> > and add helper functions to allocate/initialize/free a mirrored private
> > page table page.  Also, add helper functions to check if a given
> > kvm_mmu_page is private.  The later patch introduces hooks to operate on
> > the mirrored private page table.
> > 
> >               KVM page fault                     |
> >                      |                           |
> >                      V                           |
> >         -------------+----------                 |
> >         |                      |                 |
> >         V                      V                 |
> >      shared GPA           private GPA            |
> >         |                      |                 |
> >         V                      V                 |
> >  CPU/KVM shared PT root  KVM private PT root     |  CPU private PT root
> >         |                      |                 |           |
> >         V                      V                 |           V
> >      shared PT            private PT <----mirror----> mirrored private PT
> >         |                      |                 |           |
> >         |                      \-----------------+------\    |
> >         |                                        |      |    |
> >         V                                        |      V    V
> >   shared guest page                              |    private guest page
> >                                                  |
> >                            non-encrypted memory  |    encrypted memory
> >                                                  |
> > PT: page table
> > 
> > Both CPU and KVM refer to CPU/KVM shared page table.  Private page table
> > is used only by KVM.  CPU refers to mirrored private page table.
> 
> Shouldn't the private page table maintained by KVM be "mirrored private PT"?
> 
> To me "mirrored" normally implies it is fake, or backup which isn't actually
> used.  But here "mirrored private PT" is actually used by hardware.
> 
> And to me, "CPU and KVM" above are confusing.  For instance, "Both CPU and KVM
> refer to CPU/KVM shared page table" took me at least one minute to understand,
> with the help from the diagram -- otherwise I won't be able to understand.
> 
> I guess you can just say somewhere:
> 
> 1) Shared PT is visible to KVM and it is used by the CPU;
> 2) Private PT is used by the CPU but it is invisible to KVM;
> 3) Mirrored private PT is visible to KVM but not used by the CPU.  It is used
> to mirror the actual private PT which is used by the CPU.

I removed "mirror" word and use protected for encrypted page table.


    KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
    
    For a private GPA, the CPU refers to a private page table whose contents
    are encrypted.  Dedicated APIs must be used to operate on it (e.g. to
    update/read its PTE entries), and they are costly.
    
    When KVM resolves a KVM page fault, it walks the page tables.  To reuse
    the existing KVM MMU code and mitigate the heavy cost of directly walking
    the protected (encrypted) page table, allocate one more page to copy the
    protected page table for the KVM MMU code to walk directly.  Resolve the
    KVM page fault with the existing code, and do the additional operations
    necessary for the protected page table.  To distinguish the two cases, the
    existing KVM page table is called a shared page table (i.e. not associated
    with a protected page table), and a page table associated with a protected
    page table is called a private page table.  The relationship is depicted
    below.
    
    Add a private pointer to struct kvm_mmu_page for protected page table and
    add helper functions to allocate/initialize/free a protected page table
    page.  Also, add helper functions to check if a given kvm_mmu_page is
    private.  The later patch introduces hooks to operate on the protected page
    table.
    
                  KVM page fault                     |
                         |                           |
                         V                           |
            -------------+----------                 |
            |                      |                 |
            V                      V                 |
         shared GPA           private GPA            |
            |                      |                 |
            V                      V                 |
        shared PT root      private PT root          |    protected PT root
            |                      |                 |           |
            V                      V                 |           V
         shared PT            private PT ----propagate----> protected PT
            |                      |                 |           |
            |                      \-----------------+------\    |
            |                                        |      |    |
            V                                        |      V    V
      shared guest page                              |    private guest page
                                                     |
                               non-encrypted memory  |    encrypted memory
                                                     |
    PT: page table
    - Shared PT is visible to KVM and it is used by CPU.
    - Protected PT is used by CPU but it is invisible to KVM.
    - Private PT is visible to KVM but not used by CPU.  It is used to
      propagate PT change to the actual protected PT which is used by CPU.
    
    Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
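
As a rough illustration of the shape of the change (a sketch, not the actual
hunk; the field name private_spt is an assumption and varies across postings
of this series):

 struct kvm_mmu_page {
 	...
+	/*
+	 * Page used to propagate KVM page table changes to the protected
+	 * (S-EPT) page table.  NULL for page tables that map only shared
+	 * GPAs.
+	 */
+	void *private_spt;
 	...
 };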

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 049/102] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs
  2022-07-12  2:58   ` Yuan Yao
@ 2022-07-19 18:03     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19 18:03 UTC (permalink / raw)
  To: Yuan Yao; +Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Tue, Jul 12, 2022 at 10:58:06AM +0800,
Yuan Yao <yuan.yao@linux.intel.com> wrote:

> On Mon, Jun 27, 2022 at 02:53:41PM -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> > Some KVM MMU operations (dirty page logging, page migration, aging page)
> > aren't supported for private GFNs (yet) with the first generation of TDX.
> > Silently return on unsupported TDX KVM MMU operations.
> >
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/kvm/mmu/tdp_mmu.c | 74 +++++++++++++++++++++++++++++++++++---
> >  arch/x86/kvm/x86.c         |  3 ++
> >  2 files changed, 72 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 12f75e60a254..fef6246086a8 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -387,6 +387,8 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
> >
> >  	if ((!is_writable_pte(old_spte) || pfn_changed) &&
> >  	    is_writable_pte(new_spte)) {
> > +		/* For memory slot operations, use GFN without aliasing */
> > +		gfn = gfn & ~kvm_gfn_shared_mask(kvm);
> 
> This should be part of enabling, please consider squashing it into patch 46.

Yes, merged into it.


> >  		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
> >  		mark_page_dirty_in_slot(kvm, slot, gfn);
> >  	}
> > @@ -1398,7 +1400,8 @@ typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
> >
> >  static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
> >  						   struct kvm_gfn_range *range,
> > -						   tdp_handler_t handler)
> > +						   tdp_handler_t handler,
> > +						   bool only_shared)
> >  {
> >  	struct kvm_mmu_page *root;
> >  	struct tdp_iter iter;
> > @@ -1409,9 +1412,23 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
> >  	 * into this helper allow blocking; it'd be dead, wasteful code.
> >  	 */
> >  	for_each_tdp_mmu_root(kvm, root, range->slot->as_id) {
> > +		gfn_t start;
> > +		gfn_t end;
> > +
> > +		if (only_shared && is_private_sp(root))
> > +			continue;
> > +
> >  		rcu_read_lock();
> >
> > -		tdp_root_for_each_leaf_pte(iter, root, range->start, range->end)
> > +		/*
> > +		 * For TDX shared mapping, set GFN shared bit to the range,
> > +		 * so the handler() doesn't need to set it, to avoid duplicated
> > +		 * code in multiple handler()s.
> > +		 */
> > +		start = kvm_gfn_for_root(kvm, root, range->start);
> > +		end = kvm_gfn_for_root(kvm, root, range->end);
> > +
> > +		tdp_root_for_each_leaf_pte(iter, root, start, end)
> >  			ret |= handler(kvm, &iter, range);
> >
> >  		rcu_read_unlock();
> > @@ -1455,7 +1472,12 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
> >
> >  bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> >  {
> > -	return kvm_tdp_mmu_handle_gfn(kvm, range, age_gfn_range);
> > +	/*
> > +	 * First TDX generation doesn't support clearing A bit for private
> > +	 * mapping, since there's no secure EPT API to support it.  However
> > +	 * it's a legitimate request for TDX guest.
> > +	 */
> > +	return kvm_tdp_mmu_handle_gfn(kvm, range, age_gfn_range, true);
> >  }
> >
> >  static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,
> > @@ -1466,7 +1488,7 @@ static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,
> >
> >  bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> >  {
> > -	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn);
> > +	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn, false);
> 
> The "false" here means we will do young testing for even private
> pages, but we don't have actual A bit state in iter->old_spte for
> them, so may here should be "true" ?

Yes, nice catch.
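
For reference, the agreed fix would look roughly like this (a sketch based on
the exchange above; the final patch may differ):

 bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn, false);
+	/* Private mappings have no A bit state to test; restrict to shared. */
+	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn, true);
 }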


> >  }
> >
> >  static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
> > @@ -1511,8 +1533,11 @@ bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> >  	 * No need to handle the remote TLB flush under RCU protection, the
> >  	 * target SPTE _must_ be a leaf SPTE, i.e. cannot result in freeing a
> >  	 * shadow page.  See the WARN on pfn_changed in __handle_changed_spte().
> > +	 *
> > +	 * .change_pte() callback should not happen for private page, because
> > +	 * for now TDX private pages are pinned during VM's life time.
> >  	 */
> 
> Worth catching this with a WARN_ON()?  Up to you.

The callback can be called for shared pages.  There is no easy way here to
tell which kind of GPA (private or shared) caused it, i.e. no easy condition
for a WARN_ON().

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
  2022-07-12  6:14     ` Chao Gao
@ 2022-07-19 18:12       ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-19 18:12 UTC (permalink / raw)
  To: Chao Gao
  Cc: Yuan Yao, isaku.yamahata, kvm, linux-kernel, isaku.yamahata,
	Paolo Bonzini

On Tue, Jul 12, 2022 at 02:14:45PM +0800,
Chao Gao <chao.gao@intel.com> wrote:

> On Tue, Jul 12, 2022 at 11:47:43AM +0800, Yuan Yao wrote:
> >On Mon, Jun 27, 2022 at 02:53:45PM -0700, isaku.yamahata@intel.com wrote:
> >> From: Isaku Yamahata <isaku.yamahata@intel.com>
> >>
> >> TDX doesn't need APIC page depending on vapic and its callback is
> >> WARN_ON_ONCE(is_tdx).  To avoid unnecessary overhead and WARN_ON_ONCE(),
> >> skip requesting KVM_REQ_APIC_PAGE_RELOAD when TD.
> 
> !kvm_gfn_shared_mask() doesn't ensure the VM is a TD. Right?


That's right. I changed the check as follows.

commit 6753fc53f3b3fcbbd07ac688578ff5fb7f7f7d96 (HEAD)
Author: Isaku Yamahata <isaku.yamahata@intel.com>
Date:   Wed Mar 30 22:32:03 2022 -0700

    KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD
    
    TDX doesn't need APIC page depending on vapic and its callback is
    WARN_ON_ONCE(is_tdx).  To avoid unnecessary overhead and WARN_ON_ONCE(),
    skip requesting KVM_REQ_APIC_PAGE_RELOAD when TD.
    
      WARNING: arch/x86/kvm/vmx/main.c:696 vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
      RIP: 0010:vt_set_apic_access_page_addr+0x3c/0x50 [kvm_intel]
      Call Trace:
       vcpu_enter_guest+0x145d/0x24d0 [kvm]
       kvm_arch_vcpu_ioctl_run+0x25d/0xcc0 [kvm]
       kvm_vcpu_ioctl+0x414/0xa30 [kvm]
       __x64_sys_ioctl+0xc0/0x100
       do_syscall_64+0x39/0xc0
       entry_SYSCALL_64_after_hwframe+0x44/0xae
    
    Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 51ba2d163ec4..bfd7ed6ba385 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10045,7 +10045,9 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
         * Update it when it becomes invalid.
         */
        apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
-       if (start <= apic_address && apic_address < end)
+       /* TDX doesn't need APIC page. */
+       if (kvm->arch.vm_type != KVM_X86_TDX_VM &&
+           start <= apic_address && apic_address < end)
                kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
 }
 

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu
  2022-07-19 11:06     ` Isaku Yamahata
@ 2022-07-19 23:17       ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-19 23:17 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, Sean Christopherson

On Tue, 2022-07-19 at 04:06 -0700, Isaku Yamahata wrote:
> On Fri, Jul 01, 2022 at 10:41:08PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > > 
> > > For kvm mmu that has shared bit mask, zap only leaf SPTEs when
> > > deleting/moving a memslot.  The existing kvm_mmu_zap_memslot() depends on
> > 
> > Unless I am mistaken, I don't see there's an 'existing' kvm_mmu_zap_memslot().
> 
> Oops. it should be kvm_tdp_mmu_invalidate_all_roots().
> 
> 
> > > role.invalid with read lock of mmu_lock so that other vcpu can operate on
> > > kvm mmu concurrently. 
> > > 
> > 
> > > Mark the root page table invalid, unlink it from page
> > > table pointer of CPU, process the page table.  
> > > 
> > 
> > Are you talking about the behaviour of existing code, or the change you are
> > going to make?  Looks like you mean the latter but I believe it's the former.
> 
> 
> The existing code.  It should read "It marks ...".
> 
> 
> > > It doesn't work for private
> > > page table to unlink the root page table because it requires all SPTE entry
> > > to be non-present. 
> > > 
> > 
> > I don't think we can truly *unlink* the private root page table from secure
> > EPTP, right?  The EPTP (root table) is fixed (and hidden) during TD's runtime.
> > 
> > I guess you are trying to say: removing/unlinking one secure-EPT page requires
> > removing/unlinking all its children first? 
> 
> That's right. I'll update it as follows.
>                           
> 
> > So the reason to only zap leaf is we cannot truly unlink the private root page
> > table, correct?  Sorry your changelog is not obvious to me.
> 
> The reason is, to unlink a page table from the parent's SPTE, all SPTEs of the
> page table need to be non-present.
> 
> Here is the updated commit message.
> 
> KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu
> 
> For a KVM MMU that has a shared bit mask, zap only leaf SPTEs when
> deleting/moving a memslot.  The existing kvm_tdp_mmu_invalidate_all_roots()
> depends on role.invalid with the read lock of mmu_lock so that other vCPUs
> can operate on the KVM MMU concurrently.  It marks the root page table
> invalid and zaps the SPTEs of the root page tables.
> 
> That doesn't work for unlinking a private page table from the parent's SPTE
> entry, because it requires all SPTE entries of the page table to be
> non-present.

AFAICT this isn't the real reason that you cannot mark the private root table
as invalid and do the same zapping as you mentioned above.  Some change might
be needed to support "zapping all children before zapping the parent for a
private table" (currently the actual page table is freed after the RCU grace
period, not at unlink time), but I don't see why this couldn't be supported,
or at least the changelog doesn't explain why it cannot be.

The true reason is, if I understand correctly, that you cannot truly unlink
the private root page table from the hardware and then, e.g., allocate a new
one for it.  So just zap the leaves.

> Instead, take the write lock of mmu_lock and zap only the leaf SPTEs for a
> KVM MMU with a shared bit mask.
> 
> > > Instead, with write-lock of mmu_lock and zap only leaf
> > > SPTEs for kvm mmu with shared bit mask.
> > > 
> > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c | 35 ++++++++++++++++++++++++++++++++++-
> > >  1 file changed, 34 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 80d7c7709af3..c517c7bca105 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -5854,11 +5854,44 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> > >  	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> > >  }
> > >  
> > > +static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
> > > +{
> > > +	bool flush = false;
> > > +
> > > +	write_lock(&kvm->mmu_lock);
> > > +
> > > +	/*
> > > +	 * Zapping non-leaf SPTEs, a.k.a. not-last SPTEs, isn't required, worst
> > > +	 * case scenario we'll have unused shadow pages lying around until they
> > > +	 * are recycled due to age or when the VM is destroyed.
> > > +	 */
> > > +	if (is_tdp_mmu_enabled(kvm)) {
> > > +		struct kvm_gfn_range range = {
> > > +		      .slot = slot,
> > > +		      .start = slot->base_gfn,
> > > +		      .end = slot->base_gfn + slot->npages,
> > > +		      .may_block = false,
> > > +		};
> > > +
> > > +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
> > 
> > 
> > It appears you only unmap private GFNs (because the base_gfn doesn't have shared
> > bit)?  I think shared mapping in this slot must be zapped too?  
> > 
> > How is this done?  Or the kvm_tdp_mmu_unmap_gfn_range() also zaps shared
> > mappings?
> 
> kvm_tdp_mmu_unmap_gfn_range() handles both private gfn and shared gfn as
> they are alias.  
> 
> 
> > It's hard to review if one patch's behaviour/logic depends on further patches.
> 
> I'll add a comment on the call.
> 

I don't think adding a comment is enough.  Having the correctness of one patch
depend on a future patch doesn't seem right.  Please also consider
reorganizing/reordering the patches.


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-07-14 18:41   ` Isaku Yamahata
@ 2022-07-20  2:44     ` Kai Huang
  2022-07-20  3:12     ` Kai Huang
  1 sibling, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-20  2:44 UTC (permalink / raw)
  To: Isaku Yamahata, isaku.yamahata
  Cc: kvm, linux-kernel, Paolo Bonzini, Sean Christopherson, Yuan Yao

On Thu, 2022-07-14 at 11:41 -0700, Isaku Yamahata wrote:
Thanks for the review.  Here is the updated version.
> 
> From f1ee540d62ba13511b2c7d3db7662e32bd263e48 Mon Sep 17 00:00:00 2001
> Message-Id: <f1ee540d62ba13511b2c7d3db7662e32bd263e48.1657823906.git.isaku.yamahata@intel.com>
> In-Reply-To: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1657823906.git.isaku.yamahata@intel.com>
> References: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1657823906.git.isaku.yamahata@intel.com>
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> Date: Mon, 29 Jul 2019 19:23:46 -0700
> Subject: [PATCH 036/304] KVM: x86/mmu: Allow non-zero value for non-present
>  SPTE
> 
> TDX introduces a new EPT, Secure-EPT, in addition to the existing EPT.
> Secure-EPT maps protected guest memory, which is called private.  Since the
> Secure-EPT page tables are also protected, those page tables are also called
> private.  The existing EPT is often called shared EPT to distinguish it from
> Secure-EPT, and the page tables for the shared EPT are likewise called
> shared.

AFAICT Secure-EPT isn't directly related here, so I don't think you should
spend the paragraph on it.  The first paragraph should state the problem and
catch the reviewer's eye.

> 
> TDX module enables #VE injection by setting "EPT-violation #VE" in
> secondary processor-based VM-execution controls of TD VMCS.  It also sets
> "suppress #VE" bit in Secure-EPT so that EPT violation on Secure-EPT causes
> exit to VMM.
> 
> Because guest memory is protected with TDX, VMM can't parse instructions in
> the guest memory.  Instead, MMIO hypercall is used for guest TD to pass
> necessary information to VMM.  To make unmodified device driver work, guest
> TD expects #VE on accessing shared GPA for MMIO. The #VE handler of guest
> TD converts MMIO access into MMIO hypercall.  To trigger #VE in guest TD,
> VMM needs to clear "suppress #VE" bit in shared EPT entry that corresponds
> to MMIO address.
> 
> So the execution flow related for MMIO is as follows
> 
> - TDX module sets "EPT-violation #VE" in secondary processor-based
>   VM-execution controls of TD VMCS.
> - Allocate page for shared EPT PML4E page. Shared EPT entries are
>   initialized with suppress #VE bit set.  Update the EPTP pointer.
> - TD accesses a GPA for MMIO to trigger EPT violation.  It exits to VMM with
>   EPT violation due to suppress #VE bit of EPT entries of PML4E page.
> - VMM figures out the faulted GPA is for MMIO
> - start shared EPT page table walk.
> - Allocate non-leaf EPT pages for the shared EPT.
> - Allocate leaf EPT page for the shared EPT and initialize EPT entries with
>   suppress #VE bit set.
> - VMM clears the suppress #VE bit for faulted GPA for MMIO.
>   Please notice the leaf EPT page has 512 SPTE and other 511 SPTE entries
>   need to keep "suppress #VE" bit set because GPAs for those SPTEs are not
>   known to be MMIO. (It requires further lookups.)
>   If GPA is a guest page, link the guest page from the leaf SPTE entry.
> - resume TD vcpu.
> - Guest TD gets #VE, and converts MMIO access into MMIO hypercall.
> - If the GPA maps guest memory, VMM resolves it with guest pages.

Too many details IMHO.

Also, you forgot to mention the non-zero value for non-present SPTE is not just
for MMIO, but also for shared memory.

How about below?

For a TD guest, the current way to emulate MMIO doesn't work any more, as KVM
is not able to access the private memory of the TD guest and do the emulation.
Instead, the TD guest expects to receive a #VE when it accesses MMIO, and it
can then explicitly make a hypercall to KVM to get the expected information.

To achieve this, the TDX module always enables "EPT-violation #VE" in the VMCS
control.  Accordingly, KVM needs to configure the MMIO SPTE to trigger an EPT
violation (instead of an EPT misconfiguration) and, at the same time, clear
the "suppress #VE" bit so the TD guest gets the #VE instead of causing an
actual EPT violation to KVM.

In order for KVM to have a chance to set up the correct SPTE for MMIO for the
TD guest, the default non-present SPTE must have the "suppress #VE" bit set,
so KVM gets a real EPT violation the first time the TD guest accesses the
MMIO.

Also, when the TD guest accesses actual shared memory, it should continue to
trigger an EPT violation to KVM instead of receiving a #VE (the TDX module
guarantees KVM will receive an EPT violation for private memory accesses).
This means that for shared memory, the non-present SPTE must also have the
"suppress #VE" bit set.

Add support for a non-zero value for the non-present SPTE (i.e. when the page
table is first allocated, and when an SPTE is zapped) so that the "suppress
#VE" bit can be set for non-present SPTEs.

Introduce a new macro SHADOW_NONPRESENT_VALUE to be the "suppress #VE" bit.
Unconditionally set the "suppress #VE" bit (which is bit 63) for both AMD and
Intel: 1) AMD hardware doesn't use this bit; 2) for a normal VMX guest, KVM
never enables "EPT-violation #VE" in the VMCS control, and the "suppress #VE"
bit is ignored by hardware.

(If you want to set SHADOW_NONPRESENT_VALUE only for the TDP MMU, then
continue to describe that, but I don't see this done in your patch below.)

> 
> SPTEs for the shared EPT need the "suppress #VE" bit set initially when they
> are allocated or zapped, therefore a non-zero non-present value for SPTEs
> needs to be allowed.
> 
> The TDP MMU uses REMOVED_SPTE = 0x5a0ULL as a special, semi-arbitrary
> constant to indicate the intermediate value while one thread is operating on
> the SPTE.  For TDX (more exactly, to use #VE), the value should include the
> "suppress #VE" bit.  Rename REMOVED_SPTE to __REMOVED_SPTE and define
> REMOVED_SPTE as (__REMOVED_SPTE | "suppress #VE" bit).

IMHO REMOVED_SPTE is implementation details so it's OK to not mention in
changelog.

> 
> For simplicity, "suppress #VE" bit is set unconditionally for X86_64 for
> non-present SPTE.  Because "suppress #VE" bit (bit position of 63) for
> non-present SPTE is ignored for non-TD case (AMD CPUs or Intel VMX case
> with "EPT-violation #VE" cleared), the functionality shouldn't change.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c         | 71 ++++++++++++++++++++++++++++++++--
>  arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
>  arch/x86/kvm/mmu/spte.c        |  5 ++-
>  arch/x86/kvm/mmu/spte.h        | 28 +++++++++++++-
>  arch/x86/kvm/mmu/tdp_mmu.c     | 23 +++++++----
>  5 files changed, 116 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 51306b80f47c..992f31458f94 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -668,6 +668,55 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
>  	}
>  }
>  
> +#ifdef CONFIG_X86_64
> +static inline void kvm_init_shadow_page(void *page)
> +{
> +	int ign;
> +
> +	/*
> +	 * AMD: "suppress #VE" bit is ignored
> +	 * Intel non-TD(VMX): "suppress #VE" bit is ignored because
> +	 *   EPT_VIOLATION_VE isn't set.
> +	 * guest TD: TDX module sets EPT_VIOLATION_VE
> +	 *   conventional SEPT: "suppress #VE" bit must be set to get EPT violation
> +	 *   private SEPT: "suppress #VE" bit is ignored.  CPU doesn't walk it
> +	 *
> > +	 * For simplicity, unconditionally initialize SPTEs to set "suppress #VE".
> +	 */
> +	asm volatile ("rep stosq\n\t"
> +		      : "=c"(ign), "=D"(page)
> +		      : "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> +		      : "memory"
> +	);
> +}
> +
> +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> +	int start, end, i, r;
> +
> +	start = kvm_mmu_memory_cache_nr_free_objects(mc);
> +	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
> +
> +	/*
> +	 * Note, topup may have allocated objects even if it failed to allocate
> +	 * the minimum number of objects required to make forward progress _at
> +	 * this time_.  Initialize newly allocated objects even on failure, as
> +	 * userspace can free memory and rerun the vCPU in response to -ENOMEM.
> +	 */
> +	end = kvm_mmu_memory_cache_nr_free_objects(mc);
> +	for (i = start; i < end; i++)
> +		kvm_init_shadow_page(mc->objects[i]);
> +	return r;
> +}
> +#else
> +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> +{
> +	return kvm_mmu_topup_memory_cache(vcpu->arch.mmu_shadow_page_cache,
> +					  PT64_ROOT_MAX_LEVEL);
> +}
> +#endif /* CONFIG_X86_64 */
> +
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
>  	int r;
> @@ -677,8 +726,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
>  	if (r)
>  		return r;
> -	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -				       PT64_ROOT_MAX_LEVEL);
> +	r = mmu_topup_shadow_page_cache(vcpu);
>  	if (r)
>  		return r;
>  	if (maybe_indirect) {
> @@ -5654,7 +5702,24 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
>  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>  
> -	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +	/*
> +	 * When X86_64, initial SEPT entries are initialized with
> +	 * SHADOW_NONPRESENT_VALUE.  Otherwise zeroed.  See
> +	 * mmu_topup_shadow_page_cache().
> +	 *
> +	 * Shared EPTEs need to be initialized with SUPPRESS_VE=1, otherwise
> +	 * not-present EPT violations would be reflected into the guest by
> +	 * hardware as #VE exceptions.  This is handled by initializing page
> +	 * allocations via kvm_init_shadow_page() during cache topup.
> +	 * In that case, telling the page allocation to zero-initialize the page
> +	 * would be wasted effort.
> +	 *
> +	 * The initialization is harmless for S-EPT entries because KVM's copy
> +	 * of the S-EPT isn't consumed by hardware, and because under the hood
> +	 * S-EPT entries should never #VE.
> +	 */
> +	if (!IS_ENABLED(X86_64))
> +		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
>  
>  	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index fe35d8fd3276..964ec76579f0 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -1031,7 +1031,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  		gpa_t pte_gpa;
>  		gfn_t gfn;
>  
> -		if (!sp->spt[i])
> +		/* spt[i] has initial value of shadow page table allocation */
> +		if (sp->spt[i] != SHADOW_NONPRESENT_VALUE)
>  			continue;
>  
>  		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index cda1851ec155..bd441458153f 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -36,6 +36,9 @@ u64 __read_mostly shadow_present_mask;
>  u64 __read_mostly shadow_me_value;
>  u64 __read_mostly shadow_me_mask;
>  u64 __read_mostly shadow_acc_track_mask;
> +#ifdef CONFIG_X86_64
> +u64 __read_mostly shadow_nonpresent_value;
> +#endif
>  
>  u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>  u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
> @@ -360,7 +363,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	 * not set any RWX bits.
>  	 */
>  	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
> -	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
> +	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
>  		mmio_value = 0;
>  
>  	if (!mmio_value)
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 0127bb6e3c7d..f5fd22f6bf5f 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -140,6 +140,19 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
>  
>  #define MMIO_SPTE_GEN_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
>  
> +/*
> + * non-present SPTE value for both VMX and SVM for TDP MMU.
> + * For SVM NPT, for non-present spte (bit 0 = 0), other bits are ignored.
> + * For VMX EPT, bit 63 is ignored if #VE is disabled.
> + *              bit 63 is #VE suppress if #VE is enabled.
> + */
> +#ifdef CONFIG_X86_64
> +#define SHADOW_NONPRESENT_VALUE	BIT_ULL(63)
> +static_assert(!(SHADOW_NONPRESENT_VALUE & SPTE_MMU_PRESENT_MASK));
> +#else
> +#define SHADOW_NONPRESENT_VALUE	0ULL
> +#endif
> +
>  extern u64 __read_mostly shadow_host_writable_mask;
>  extern u64 __read_mostly shadow_mmu_writable_mask;
>  extern u64 __read_mostly shadow_nx_mask;
> @@ -178,16 +191,27 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>   * non-present intermediate value. Other threads which encounter this value
>   * should not modify the SPTE.
>   *
> + * For X86_64 case, SHADOW_NONPRESENT_VALUE, "suppress #VE" bit, is set because
> + * "EPT violation #VE" in the secondary VM execution control may be enabled.
> + * Because TDX module sets "EPT violation #VE" for TD, "suppress #VE" bit for
> + * the conventional EPT needs to be set.
> + *
>   * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
>   * bot AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
>   * vulnerability.  Use only low bits to avoid 64-bit immediates.
>   *
>   * Only used by the TDP MMU.
>   */
> -#define REMOVED_SPTE	0x5a0ULL
> +#define __REMOVED_SPTE	0x5a0ULL
>  
>  /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
> -static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
> +static_assert(!(__REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
> +
> +/*
> + * See above comment around __REMOVED_SPTE.  REMOVED_SPTE is the actual
> + * intermediate value set to the removed SPTE.  It sets the "suppress #VE" bit.
> + */
> +#define REMOVED_SPTE	(SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE)
>  
>  static inline bool is_removed_spte(u64 spte)
>  {
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7b9265d67131..2ca03ec3bf52 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -692,8 +692,16 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
>  	 * overwrite the special removed SPTE value. No bookkeeping is needed
>  	 * here since the SPTE is going from non-present to non-present.  Use
>  	 * the raw write helper to avoid an unnecessary check on volatile bits.
> +	 *
> +	 * Set non-present value to SHADOW_NONPRESENT_VALUE, rather than 0.
> +	 * It is because when TDX is enabled, TDX module always
> +	 * enables "EPT-violation #VE", so KVM needs to set
> +	 * "suppress #VE" bit in EPT table entries, in order to get
> +	 * real EPT violation, rather than TDVMCALL.  KVM sets
> +	 * SHADOW_NONPRESENT_VALUE (which sets "suppress #VE" bit) so it
> +	 * can be set when EPT table entries are zapped.
>  	 */
> -	__kvm_tdp_mmu_write_spte(iter->sptep, 0);
> +	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);

Since you also always set SHADOW_NONPRESENT_VALUE in the SPTEs for the legacy
MMU when the page table is first allocated, it also makes sense to set it when
an SPTE is zapped for the legacy MMU.

This part is missing in this patch.
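
As a hedged sketch of what that missing piece could look like in mmu.c (these
call sites exist in the current code, but the exact hunks in an eventual patch
may differ):

 static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 {
 	...
-		__update_clear_spte_fast(sptep, 0ull);
+		__update_clear_spte_fast(sptep, SHADOW_NONPRESENT_VALUE);
 	...
 }

 static void mmu_spte_clear_no_track(u64 *sptep)
 {
-	__update_clear_spte_fast(sptep, 0ull);
+	__update_clear_spte_fast(sptep, SHADOW_NONPRESENT_VALUE);
 }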

>  
>  	return 0;
>  }
> @@ -870,8 +878,8 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
>  			continue;
>  
>  		if (!shared)
> -			tdp_mmu_set_spte(kvm, &iter, 0);
> -		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
> +			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
> +		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
>  			goto retry;
>  	}
>  }
> @@ -927,8 +935,9 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
>  		return false;
>  
> -	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
> -			   sp->gfn, sp->role.level + 1, true, true);
> +	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
> +			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
> +			   true, true);
>  
>  	return true;
>  }
> @@ -965,7 +974,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>  		    !is_last_spte(iter.old_spte, iter.level))
>  			continue;
>  
> -		tdp_mmu_set_spte(kvm, &iter, 0);
> +		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
>  		flush = true;
>  	}
>  
> @@ -1330,7 +1339,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
>  	 * invariant that the PFN of a present * leaf SPTE can never change.
>  	 * See __handle_changed_spte().
>  	 */
> -	tdp_mmu_set_spte(kvm, iter, 0);
> +	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
>  
>  	if (!pte_write(range->pte)) {
>  		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
> -- 
> 2.25.1
> 
> 
> 


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE
  2022-07-14 18:41   ` Isaku Yamahata
  2022-07-20  2:44     ` Kai Huang
@ 2022-07-20  3:12     ` Kai Huang
  1 sibling, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-20  3:12 UTC (permalink / raw)
  To: Isaku Yamahata, isaku.yamahata
  Cc: kvm, linux-kernel, Paolo Bonzini, Sean Christopherson, Yuan Yao


> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -36,6 +36,9 @@ u64 __read_mostly shadow_present_mask;
>  u64 __read_mostly shadow_me_value;
>  u64 __read_mostly shadow_me_mask;
>  u64 __read_mostly shadow_acc_track_mask;
> +#ifdef CONFIG_X86_64
> +u64 __read_mostly shadow_nonpresent_value;
> +#endif

Is this ever used?

>  
>  u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
>  u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
> @@ -360,7 +363,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	 * not set any RWX bits.
>  	 */
>  	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
> -	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
> +	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
>  		mmio_value = 0;

This chunk doesn't look right, or necessary.  We need mmio_mask/mmio_value which
causes EPT violation but with "suppress #VE" bit clear.  

So, actually, we want to make sure SHADOW_NONPRESENT_VALUE is *NOT* in mmio_mask
and mmio_value.  Using (REMOVED_SPTE & mmio_mask) == mmio_value can actually
ensure SHADOW_NONPRESENT_VALUE is never set in MMIO spte, correct?  So I think
using REMOVED_SPTE is fine.

Or maybe additionally adding a explicit check is even better:

	if (WARN_ON(mmio_mask & SHADOW_NONPRESENT_VALUE))
		mmio_value = 0;

But this change should maybe go in another patch which deals with setting up
the per-VM mmio_mask/mmio_value anyway.  This patch, instead, focuses on
allowing a non-zero value for non-present SPTEs.


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  2022-07-19  8:47   ` Isaku Yamahata
@ 2022-07-20  3:45     ` Kai Huang
  2022-07-27 23:20       ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-20  3:45 UTC (permalink / raw)
  To: Isaku Yamahata, isaku.yamahata
  Cc: kvm, linux-kernel, Paolo Bonzini, Sean Christopherson

On Tue, 2022-07-19 at 01:47 -0700, Isaku Yamahata wrote:
> Here is the updated one.  The changes are:
> - removed hunks that should be part of other patches
> - removed shadow_default_mmio_mask
> - trimmed down the commit messages
> 
> From ed6b4a076e515550878b069596cf156a1bc33514 Mon Sep 17 00:00:00 2001
> Message-Id: <ed6b4a076e515550878b069596cf156a1bc33514.1658220363.git.isaku.yamahata@intel.com>
> In-Reply-To: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1658220363.git.isaku.yamahata@intel.com>
> References: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1658220363.git.isaku.yamahata@intel.com>
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> Date: Wed, 10 Jun 2020 15:46:38 -0700
> Subject: [PATCH 036/306] KVM: x86/mmu: Track shadow MMIO value/mask on a
>  per-VM basis
> 
> TDX will use a different shadow PTE entry value for MMIO from VMX.  Add
> members to kvm_arch and track value for MMIO per-VM instead of global
> variables.  By using the per-VM EPT entry value for MMIO, the existing VMX
> logic is kept working.  To untangle the logic to initialize
> shadow_mmio_access_mask, introduce a setter function.

introduce a separate setter function for it.

> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  4 +++
>  arch/x86/kvm/mmu.h              |  3 ++-
>  arch/x86/kvm/mmu/mmu.c          |  8 +++---
>  arch/x86/kvm/mmu/spte.c         | 45 +++++++++------------------------
>  arch/x86/kvm/mmu/spte.h         | 10 +++-----
>  arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++---
>  arch/x86/kvm/svm/svm.c          | 11 +++++---
>  arch/x86/kvm/vmx/tdx.c          |  4 +++
>  arch/x86/kvm/vmx/vmx.c          | 26 +++++++++++++++++++
>  arch/x86/kvm/vmx/x86_ops.h      |  1 +
>  10 files changed, 66 insertions(+), 52 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 2c47aab72a1b..39215daa8576 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1161,6 +1161,10 @@ struct kvm_arch {
>  	 */
>  	spinlock_t mmu_unsync_pages_lock;
>  
> +	bool enable_mmio_caching;
> +	u64 shadow_mmio_value;
> +	u64 shadow_mmio_mask;
> +
>  	struct list_head assigned_dev_head;
>  	struct iommu_domain *iommu_domain;
>  	bool iommu_noncoherent;
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index ccf0ba7a6387..cfa3e658162c 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -108,7 +108,8 @@ static inline u8 kvm_get_shadow_phys_bits(void)
>  	return boot_cpu_data.x86_phys_bits;
>  }
>  
> -void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
> +void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask);
> +void kvm_mmu_set_mmio_access_mask(u64 mmio_access_mask);
>  void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
>  void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
>  
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 5bfccfa0f50e..34240fcc45de 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2298,7 +2298,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
>  				return kvm_mmu_prepare_zap_page(kvm, child,
>  								invalid_list);
>  		}
> -	} else if (is_mmio_spte(pte)) {
> +	} else if (is_mmio_spte(kvm, pte)) {
>  		mmu_spte_clear_no_track(spte);
>  	}
>  	return 0;
> @@ -3079,7 +3079,7 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
>  		 * and only if L1's MAXPHYADDR is inaccurate with respect to
>  		 * the hardware's).
>  		 */
> -		if (unlikely(!enable_mmio_caching) ||
> +		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
>  		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
>  			return RET_PF_EMULATE;
>  	}
> @@ -3918,7 +3918,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
>  	if (WARN_ON(reserved))
>  		return -EINVAL;
>  
> -	if (is_mmio_spte(spte)) {
> +	if (is_mmio_spte(vcpu->kvm, spte)) {
>  		gfn_t gfn = get_mmio_spte_gfn(spte);
>  		unsigned int access = get_mmio_spte_access(spte);
>  
> @@ -4361,7 +4361,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
>  static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
>  			   unsigned int access)
>  {
> -	if (unlikely(is_mmio_spte(*sptep))) {
> +	if (unlikely(is_mmio_spte(vcpu->kvm, *sptep))) {
>  		if (gfn != get_mmio_spte_gfn(*sptep)) {
>  			mmu_spte_clear_no_track(sptep);
>  			return true;
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 92968e5605fc..9a130dd3d6a3 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -29,8 +29,6 @@ u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
>  u64 __read_mostly shadow_user_mask;
>  u64 __read_mostly shadow_accessed_mask;
>  u64 __read_mostly shadow_dirty_mask;
> -u64 __read_mostly shadow_mmio_value;
> -u64 __read_mostly shadow_mmio_mask;
>  u64 __read_mostly shadow_mmio_access_mask;
>  u64 __read_mostly shadow_present_mask;
>  u64 __read_mostly shadow_me_value;
> @@ -62,10 +60,10 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
>  	u64 spte = generation_mmio_spte_mask(gen);
>  	u64 gpa = gfn << PAGE_SHIFT;
>  
> -	WARN_ON_ONCE(!shadow_mmio_value);
> +	WARN_ON_ONCE(!vcpu->kvm->arch.shadow_mmio_value);
>  
>  	access &= shadow_mmio_access_mask;
> -	spte |= shadow_mmio_value | access;
> +	spte |= vcpu->kvm->arch.shadow_mmio_value | access;
>  	spte |= gpa | shadow_nonpresent_or_rsvd_mask;
>  	spte |= (gpa & shadow_nonpresent_or_rsvd_mask)
>  		<< SHADOW_NONPRESENT_OR_RSVD_MASK_LEN;
> @@ -337,9 +335,8 @@ u64 mark_spte_for_access_track(u64 spte)
>  	return spte;
>  }
>  
> -void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
> +void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask)
>  {
> -	BUG_ON((u64)(unsigned)access_mask != access_mask);
>  	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
>  
>  	if (!enable_mmio_caching)
> @@ -366,12 +363,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
>  		mmio_value = 0;
>  
> -	if (!mmio_value)
> -		enable_mmio_caching = false;
> -
> -	shadow_mmio_value = mmio_value;
> -	shadow_mmio_mask  = mmio_mask;
> -	shadow_mmio_access_mask = access_mask;
> +	kvm->arch.enable_mmio_caching = !!mmio_value;

KVM has a global enable_mmio_caching boolean, and I think we should honor it
here (in this patch) by doing the following first:

	if (!enable_mmio_caching)
		mmio_value = 0;

For TD guest, the logic around enable_mmio_caching doesn't make sense anymore,
so perhaps we can later tweak it by doing something like:

	/*
	 * Treat mmio_caching is false for TD guest, or true for it, depending
	 * on how you define it.
	 */
	if (kvm_gfn_shared_mask(kvm))
		kvm->arch.enable_mmio_caching = false;	/* or true? */
	else {
		if (!enable_mmio_caching)
			mmio_value = 0;
		kvm->arch.enable_mmio_caching = !!mmio_value;
	}

> +	kvm->arch.shadow_mmio_value = mmio_value;
> +	kvm->arch.shadow_mmio_mask = mmio_mask;
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
>  
> @@ -399,20 +393,12 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
>  	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
>  	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
>  	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
> -
> -	/*
> -	 * EPT Misconfigurations are generated if the value of bits 2:0
> -	 * of an EPT paging-structure entry is 110b (write/execute).
> -	 */
> -	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
> -				   VMX_EPT_RWX_MASK, 0);
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks);
>  
>  void kvm_mmu_reset_all_pte_masks(void)
>  {
>  	u8 low_phys_bits;
> -	u64 mask;
>  
>  	shadow_phys_bits = kvm_get_shadow_phys_bits();
>  
> @@ -452,18 +438,11 @@ void kvm_mmu_reset_all_pte_masks(void)
>  
>  	shadow_host_writable_mask = DEFAULT_SPTE_HOST_WRITABLE;
>  	shadow_mmu_writable_mask  = DEFAULT_SPTE_MMU_WRITABLE;
> +}
>  
> -	/*
> -	 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
> -	 * PFEC.RSVD=1 on MMIO accesses.  64-bit PTEs (PAE, x86-64, and EPT
> -	 * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
> -	 * 52-bit physical addresses then there are no reserved PA bits in the
> -	 * PTEs and so the reserved PA approach must be disabled.
> -	 */
> -	if (shadow_phys_bits < 52)
> -		mask = BIT_ULL(51) | PT_PRESENT_MASK;
> -	else
> -		mask = 0;
> -
> -	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
> +void kvm_mmu_set_mmio_access_mask(u64 mmio_access_mask)
> +{
> +	BUG_ON((u64)(unsigned)mmio_access_mask != mmio_access_mask);
> +	shadow_mmio_access_mask = mmio_access_mask;
>  }
> +EXPORT_SYMBOL(kvm_mmu_set_mmio_access_mask);
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index f5fd22f6bf5f..99bce92b596e 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -5,8 +5,6 @@
>  
>  #include "mmu_internal.h"
>  
> -extern bool __read_mostly enable_mmio_caching;
> -

Here you removed the ability to control enable_mmio_caching globally.  It's not
something you stated to do in the changelog.  Perhaps we should still keep it,
and enforce it in kvm_mmu_set_mmio_spte_mask() as commented above.

And in upstream KVM, it is a module parameter.  What happens to it?

>  /*
>   * A MMU present SPTE is backed by actual memory and may or may not be present
>   * in hardware.  E.g. MMIO SPTEs are not considered present.  Use bit 11, as it
> @@ -160,8 +158,6 @@ extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
>  extern u64 __read_mostly shadow_user_mask;
>  extern u64 __read_mostly shadow_accessed_mask;
>  extern u64 __read_mostly shadow_dirty_mask;
> -extern u64 __read_mostly shadow_mmio_value;
> -extern u64 __read_mostly shadow_mmio_mask;
>  extern u64 __read_mostly shadow_mmio_access_mask;
>  extern u64 __read_mostly shadow_present_mask;
>  extern u64 __read_mostly shadow_me_value;
> @@ -228,10 +224,10 @@ static inline bool is_removed_spte(u64 spte)
>   */
>  extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
>  
> -static inline bool is_mmio_spte(u64 spte)
> +static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
>  {
> -	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
> -	       likely(enable_mmio_caching);
> +	return (spte & kvm->arch.shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
> +		likely(kvm->arch.enable_mmio_caching);
>  }
>  
>  static inline bool is_shadow_present_pte(u64 pte)
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 2ca03ec3bf52..82f1bfac7ee6 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -569,8 +569,8 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  		 * impact the guest since both the former and current SPTEs
>  		 * are nonpresent.
>  		 */
> -		if (WARN_ON(!is_mmio_spte(old_spte) &&
> -			    !is_mmio_spte(new_spte) &&
> +		if (WARN_ON(!is_mmio_spte(kvm, old_spte) &&
> +			    !is_mmio_spte(kvm, new_spte) &&
>  			    !is_removed_spte(new_spte)))
>  			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
>  			       "should not be replaced with another,\n"
> @@ -1108,7 +1108,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  	}
>  
>  	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
> -	if (unlikely(is_mmio_spte(new_spte))) {
> +	if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
>  		vcpu->stat.pf_mmio_spte_created++;
>  		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
>  				     new_spte);
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index f01821f48bfd..0f63257161a6 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -198,6 +198,7 @@ module_param(dump_invalid_vmcb, bool, 0644);
>  bool intercept_smi = true;
>  module_param(intercept_smi, bool, 0444);
>  
> +static u64 __read_mostly svm_shadow_mmio_mask;
>  
>  static bool svm_gp_erratum_intercept = true;
>  
> @@ -4685,6 +4686,9 @@ static bool svm_is_vm_type_supported(unsigned long type)
>  
>  static int svm_vm_init(struct kvm *kvm)
>  {
> +	kvm_mmu_set_mmio_spte_mask(kvm, svm_shadow_mmio_mask,
> +				   svm_shadow_mmio_mask);
> +
>  	if (!pause_filter_count || !pause_filter_thresh)
>  		kvm->arch.pause_in_guest = true;
>  
> @@ -4834,7 +4838,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>  static __init void svm_adjust_mmio_mask(void)
>  {
>  	unsigned int enc_bit, mask_bit;
> -	u64 msr, mask;
> +	u64 msr;
>  
>  	/* If there is no memory encryption support, use existing mask */
>  	if (cpuid_eax(0x80000000) < 0x8000001f)
> @@ -4861,9 +4865,8 @@ static __init void svm_adjust_mmio_mask(void)
>  	 *
>  	 * If the mask bit location is 52 (or above), then clear the mask.
>  	 */
> -	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
> -
> -	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
> +	svm_shadow_mmio_mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
> +	kvm_mmu_set_mmio_access_mask(PT_WRITABLE_MASK | PT_USER_MASK);
>  }
>  
>  static __init void svm_set_cpu_caps(void)
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 36d2127cb7b7..52fb54880f9b 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -7,6 +7,7 @@
>  #include "x86_ops.h"
>  #include "tdx.h"
>  #include "x86.h"
> +#include "mmu.h"
>  
>  #undef pr_fmt
>  #define pr_fmt(fmt) "tdx: " fmt
> @@ -276,6 +277,9 @@ int tdx_vm_init(struct kvm *kvm)
>  	int ret, i;
>  	u64 err;
>  
> +	kvm_mmu_set_mmio_spte_mask(kvm, vmx_shadow_mmio_mask,
> +				   vmx_shadow_mmio_mask);
> +

I prefer to split this chunk out into another patch so this patch can be purely
infrastructural.  In this way you can even move this patch around easily in
this series.

>  	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
>  	kvm->max_vcpus = 0;
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e129ee663498..88e893fdffe8 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -141,6 +141,8 @@ module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
>  extern bool __read_mostly allow_smaller_maxphyaddr;
>  module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
>  
> +u64 __ro_after_init vmx_shadow_mmio_mask;
> +
>  #define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD)
>  #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
>  #define KVM_VM_CR0_ALWAYS_ON				\
> @@ -7359,6 +7361,17 @@ int vmx_vm_init(struct kvm *kvm)
>  	if (!ple_gap)
>  		kvm->arch.pause_in_guest = true;
>  
> +	/*
> +	 * EPT Misconfigurations can be generated if the value of bits 2:0
> +	 * of an EPT paging-structure entry is 110b (write/execute).
> +	 */
> +	if (enable_ept)
> +		kvm_mmu_set_mmio_spte_mask(kvm, VMX_EPT_MISCONFIG_WX_VALUE,
> +					   VMX_EPT_RWX_MASK);
> +	else
> +		kvm_mmu_set_mmio_spte_mask(kvm, vmx_shadow_mmio_mask,
> +					   vmx_shadow_mmio_mask);
> +
>  	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
>  		switch (l1tf_mitigation) {
>  		case L1TF_MITIGATION_OFF:
> @@ -8358,6 +8371,19 @@ int __init vmx_init(void)
>  	if (!enable_ept)
>  		allow_smaller_maxphyaddr = true;
>  
> +	/*
> +	 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
> +	 * PFEC.RSVD=1 on MMIO accesses.  64-bit PTEs (PAE, x86-64, and EPT
> +	 * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
> +	 * 52-bit physical addresses then there are no reserved PA bits in the
> +	 * PTEs and so the reserved PA approach must be disabled.
> +	 */
> +	if (kvm_get_shadow_phys_bits() < 52)
> +		vmx_shadow_mmio_mask = BIT_ULL(51) | PT_PRESENT_MASK;
> +	else
> +		vmx_shadow_mmio_mask = 0;
> +	kvm_mmu_set_mmio_access_mask(0);
> +
>  	return 0;
>  }
>  
> diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
> index 7e38c7b756d4..279e5360c555 100644
> --- a/arch/x86/kvm/vmx/x86_ops.h
> +++ b/arch/x86/kvm/vmx/x86_ops.h
> @@ -13,6 +13,7 @@ void hv_vp_assist_page_exit(void);
>  void __init vmx_init_early(void);
>  int __init vmx_init(void);
>  void vmx_exit(void);
> +extern u64 __ro_after_init vmx_shadow_mmio_mask;
>  
>  __init int vmx_cpu_has_kvm_support(void);
>  __init int vmx_disabled_by_bios(void);
> -- 
> 2.25.1
> 
> 


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-07-19 14:49     ` Isaku Yamahata
@ 2022-07-20  5:13       ` Kai Huang
  2022-07-27 23:39         ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-20  5:13 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Tue, 2022-07-19 at 07:49 -0700, Isaku Yamahata wrote:
> On Fri, Jul 08, 2022 at 02:23:43PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > 
> > > To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
> > > to inject #VE conditionally and set #VE suppress bit in EPT entry.  For VMX
> > > case, #VE isn't used.  If #VE happens for VMX, it's a bug.  To be
> > > defensive (test that VMX case isn't broken), introduce option
> > > ept_violation_ve_test and when it's set, set error.
> > 
> > I don't see why we need this patch.  It may be helpful during your test, but why
> > do we need this patch for formal submission?
> > 
> > And for a normal guest, what prevents one vcpu from sending #VE IPI to another
> > vcpu?
> 
> Paolo suggested it as follows.  Maybe it should be kernel config.
> (I forgot to add suggested-by. I'll add it)
> 
> https://lore.kernel.org/lkml/84d56339-4a8a-6ddb-17cb-12074588ba9c@redhat.com/
> 
> > 

OK.  But can we assume a normal guest won't send a #VE IPI?


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-14  1:03 ` Sean Christopherson
  2022-07-14  4:09   ` Xiaoyao Li
@ 2022-07-20 14:59   ` Chao Peng
  2022-07-25 13:46     ` Nikunj A. Dadhania
  1 sibling, 1 reply; 219+ messages in thread
From: Chao Peng @ 2022-07-20 14:59 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Thu, Jul 14, 2022 at 01:03:46AM +0000, Sean Christopherson wrote:
...
> 
> Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
> on insertion/removal to (dis)allow hugepages as needed.
> 
>   + efficient on KVM page fault (no new lookups)
>   + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
>   + straightforward to implement
>   + can (and should) be merged as part of the UPM series
> 
> I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
> completely covered (fully shared) or not covered at all (fully private), but I'm
> not 100% certain that xa_for_each_range() works the way I think it does.

Hi Sean,

Below is the implementation to support 2M as you mentioned as option D.
It's based on UPM v7 xarray code: https://lkml.org/lkml/2022/7/6/259

Everything sounds good; the only tricky bit is the inc/dec of disallow_lpage.
If we still treat it as a count, it will be a challenge to keep the inc/dec
balanced.  So in this patch I stole a bit for the purpose, which looks ugly.
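
As for the xa_for_each_range() check Sean mentioned, a rough sketch (untested;
the helper name is made up, and it assumes the UPM xarray holds a non-NULL
entry for every private GFN and no entry for a shared GFN) might look like:

static bool range_is_mixed(struct kvm *kvm, gfn_t start, gfn_t end)
{
	unsigned long index;
	unsigned long nr_private = 0;
	void *entry;

	/* Count only the present (i.e. private) entries in [start, end). */
	xa_for_each_range(&kvm->mem_attr_array, index, entry, start, end - 1)
		nr_private++;

	/* 0 is fully shared, end - start is fully private, otherwise mixed. */
	return nr_private && nr_private != end - start;
}

Compared to the xas_load()/xas_next() walk in the patch, this visits only the
present entries, but it has to count all of them before it can declare the
range fully private, while the open-coded walk can bail out as soon as it has
seen both a private and a shared GFN.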

Any feedback is welcome.

Thanks,
Chao

-----------------------------------------------------------------------
From: Chao Peng <chao.p.peng@linux.intel.com>
Date: Wed, 20 Jul 2022 11:37:18 +0800
Subject: [PATCH] KVM: Add large page support for private memory

Update lpage_info when handling KVM_MEMORY_ENCRYPT_{UN,}REG_REGION.

Reserve a bit in disallow_lpage to indicate a large page has private/shared
pages mixed.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
---
 arch/x86/include/asm/kvm_host.h |   8 +++
 arch/x86/kvm/mmu/mmu.c          | 120 +++++++++++++++++++++++++++++++-
 include/linux/kvm_host.h        |  14 ++++
 virt/kvm/kvm_main.c             |  12 +++-
 4 files changed, 150 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d460b8511041..b6ffe8b1c547 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -38,6 +38,7 @@
 
 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
 #define __KVM_HAVE_ZAP_GFN_RANGE
+#define __KVM_HAVE_ARCH_UPDATE_MEM_ATTR
 
 #define KVM_MAX_VCPUS 1024
 
@@ -935,6 +936,13 @@ struct kvm_vcpu_arch {
 #endif
 };
 
+/*
+ * Use a bit in disallow_lpage to indicate private/shared pages mixed at the
+ * level. The remaining bits will be used as a reference count for other users.
+ */
+#define KVM_LPAGE_PRIVATE_SHARED_MIXED		(1U << 31)
+#define KVM_LPAGE_COUNT_MAX 			((1U << 31) - 1)
+
 struct kvm_lpage_info {
 	int disallow_lpage;
 };
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 771ffd147e77..d040eeaf1f1c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -843,11 +843,16 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot,
 {
 	struct kvm_lpage_info *linfo;
 	int i;
+	int disallow_count;
 
 	for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
+
+		disallow_count = linfo->disallow_lpage & KVM_LPAGE_COUNT_MAX;
+		WARN_ON(disallow_count + count < 0 ||
+			disallow_count > KVM_LPAGE_COUNT_MAX - count);
+
 		linfo->disallow_lpage += count;
-		WARN_ON(linfo->disallow_lpage < 0);
 	}
 }
 
@@ -7246,3 +7251,116 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 	if (kvm->arch.nx_lpage_recovery_thread)
 		kthread_stop(kvm->arch.nx_lpage_recovery_thread);
 }
+
+static bool mem_attr_is_mixed(struct kvm *kvm, unsigned int attr,
+			      gfn_t start, gfn_t end)
+{
+	XA_STATE(xas, &kvm->mem_attr_array, start);
+	gfn_t gfn = start;
+	void *entry;
+	bool shared, private;
+	bool mixed = false;
+
+	if (attr == KVM_MEM_ATTR_SHARED) {
+		shared = true;
+		private = false;
+	} else {
+		shared = false;
+		private = true;
+	}
+
+	rcu_read_lock();
+	entry = xas_load(&xas);
+	while (gfn < end) {
+		if (xas_retry(&xas, entry))
+			continue;
+
+		KVM_BUG_ON(gfn != xas.xa_index, kvm);
+
+		if (entry)
+			private = true;
+		else
+			shared = true;
+
+		if (private && shared) {
+			mixed = true;
+			goto out;
+		}
+
+		entry = xas_next(&xas);
+		gfn++;
+	}
+out:
+	rcu_read_unlock();
+	return mixed;
+}
+
+static inline void update_mixed(struct kvm_lpage_info *linfo, bool mixed)
+{
+	if (mixed)
+		linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
+	else
+		linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
+}
+
+static void update_mem_lpage_info(struct kvm *kvm,
+				  struct kvm_memory_slot *slot,
+				  unsigned int attr,
+				  gfn_t start, gfn_t end)
+{
+	unsigned long lpage_start, lpage_end;
+	unsigned long gfn, pages, mask;
+	int level;
+
+	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
+		pages = KVM_PAGES_PER_HPAGE(level);
+		mask = ~(pages - 1);
+		lpage_start = start & mask;
+		lpage_end = end & mask;
+
+		/*
+		 * We only need to scan the head and tail page, for middle pages
+		 * we know they are not mixed.
+		 */
+		update_mixed(lpage_info_slot(lpage_start, slot, level),
+			     mem_attr_is_mixed(kvm, attr, lpage_start,
+							  lpage_start + pages));
+
+		if (lpage_start == lpage_end)
+			return;
+
+		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) {
+			update_mixed(lpage_info_slot(gfn, slot, level), false);
+		}
+
+		update_mixed(lpage_info_slot(lpage_end, slot, level),
+			     mem_attr_is_mixed(kvm, attr, lpage_end,
+							  lpage_end + pages));
+	}
+}
+
+void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
+			      gfn_t start, gfn_t end)
+{
+	struct kvm_memory_slot *slot;
+	struct kvm_memslots *slots;
+	struct kvm_memslot_iter iter;
+	int i;
+
+	WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)),
+			"Unsupported mem attribute.\n");
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			slot = iter.slot;
+			start = max(start, slot->base_gfn);
+			end = min(end, slot->base_gfn + slot->npages);
+			if (WARN_ON_ONCE(start >= end))
+				continue;
+
+			update_mem_lpage_info(kvm, slot, attr, start, end);
+		}
+	}
+}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d45f00f5b3ee..7b18fcd71df5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2282,6 +2282,10 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
 
 #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
+
+#define KVM_MEM_ATTR_SHARED	0x0001
+#define KVM_MEM_ATTR_PRIVATE	0x0002
+
 static inline int kvm_private_mem_get_pfn(struct kvm_memory_slot *slot,
 					  gfn_t gfn, kvm_pfn_t *pfn, int *order)
 {
@@ -2307,6 +2311,16 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 	return !!xa_load(&kvm->mem_attr_array, gfn);
 }
 
+#ifdef __KVM_HAVE_ARCH_UPDATE_MEM_ATTR
+void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
+			      gfn_t start, gfn_t end);
+#else
+static inline void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
+					    gfn_t start, gfn_t end)
+{
+}
+#endif
+
 #endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */
 
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1ba4b9e5449c..1d22c8603f91 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -863,12 +863,12 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
 
 #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
-#define KVM_MEM_ATTR_PRIVATE	0x0001
 static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl,
 					     struct kvm_enc_region *region)
 {
 	unsigned long start, end;
 	void *entry;
+	int attr;
 	int r;
 
 	if (region->size == 0 || region->addr + region->size < region->addr)
@@ -879,13 +879,19 @@ static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl
 	start = region->addr >> PAGE_SHIFT;
 	end = (region->addr + region->size - 1) >> PAGE_SHIFT;
 
-	entry = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION ?
-				xa_mk_value(KVM_MEM_ATTR_PRIVATE) : NULL;
+	if (ioctl == KVM_MEMORY_ENCRYPT_REG_REGION) {
+		attr = KVM_MEM_ATTR_PRIVATE;
+		entry = xa_mk_value(KVM_MEM_ATTR_PRIVATE);
+	} else {
+		attr = KVM_MEM_ATTR_SHARED;
+		entry = NULL;
+	}
 
 	r = xa_err(xa_store_range(&kvm->mem_attr_array, start, end,
 					entry, GFP_KERNEL_ACCOUNT));
 
 	kvm_zap_gfn_range(kvm, start, end + 1);
+	kvm_arch_update_mem_attr(kvm, attr, start, end + 1);
 
 	return r;
 }
-- 
2

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-20 14:59   ` Chao Peng
@ 2022-07-25 13:46     ` Nikunj A. Dadhania
  2022-07-26 14:32       ` Chao Peng
  0 siblings, 1 reply; 219+ messages in thread
From: Nikunj A. Dadhania @ 2022-07-25 13:46 UTC (permalink / raw)
  To: Chao Peng, Sean Christopherson
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On 7/20/2022 8:29 PM, Chao Peng wrote:
> On Thu, Jul 14, 2022 at 01:03:46AM +0000, Sean Christopherson wrote:
> ...
>>
>> Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
>> on insertion/removal to (dis)allow hugepages as needed.
>>
>>   + efficient on KVM page fault (no new lookups)
>>   + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
>>   + straightforward to implement
>>   + can (and should) be merged as part of the UPM series
>>
>> I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
>> completely covered (fully shared) or not covered at all (fully private), but I'm
>> not 100% certain that xa_for_each_range() works the way I think it does.
> 
> Hi Sean,
> 
> Below is the implementation to support 2M as you mentioned as option D.
> It's based on UPM v7 xarray code: https://lkml.org/lkml/2022/7/6/259
> 
> Everything sounds good, the only trick bit is inc/dec disallow_lpage. If
> we still treat it as a count, it will be a challenge to make the inc/dec
> balanced. So in this patch I stole a bit for the purpose, looks ugly.
> 
> Any feedback is welcome.
> 
> Thanks,
> Chao
> 
> -----------------------------------------------------------------------
> From: Chao Peng <chao.p.peng@linux.intel.com>
> Date: Wed, 20 Jul 2022 11:37:18 +0800
> Subject: [PATCH] KVM: Add large page support for private memory
> 
> Update lpage_info when handling KVM_MEMORY_ENCRYPT_{UN,}REG_REGION.
> 
> Reserve a bit in disallow_lpage to indicate a large page has
> private/share pages mixed.
> 
> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> ---


> +static void update_mem_lpage_info(struct kvm *kvm,
> +				  struct kvm_memory_slot *slot,
> +				  unsigned int attr,
> +				  gfn_t start, gfn_t end)
> +{
> +	unsigned long lpage_start, lpage_end;
> +	unsigned long gfn, pages, mask;
> +	int level;
> +
> +	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> +		pages = KVM_PAGES_PER_HPAGE(level);
> +		mask = ~(pages - 1);
> +		lpage_start = start & mask;
> +		lpage_end = end & mask;
> +
> +		/*
> +		 * We only need to scan the head and tail page, for middle pages
> +		 * we know they are not mixed.
> +		 */
> +		update_mixed(lpage_info_slot(lpage_start, slot, level),
> +			     mem_attr_is_mixed(kvm, attr, lpage_start,
> +							  lpage_start + pages));
> +
> +		if (lpage_start == lpage_end)
> +			return;
> +
> +		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) {
> +			update_mixed(lpage_info_slot(gfn, slot, level), false);
> +		}

Boundary check missing here for the case when gfn reaches lpage_end.

		if (gfn == lpage_end)
			return;

> +
> +		update_mixed(lpage_info_slot(lpage_end, slot, level),
> +			     mem_attr_is_mixed(kvm, attr, lpage_end,
> +							  lpage_end + pages));
> +	}
> +}

Regards
Nikunj

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-25 13:46     ` Nikunj A. Dadhania
@ 2022-07-26 14:32       ` Chao Peng
  2022-07-27  9:26         ` Nikunj A. Dadhania
  0 siblings, 1 reply; 219+ messages in thread
From: Chao Peng @ 2022-07-26 14:32 UTC (permalink / raw)
  To: Nikunj A. Dadhania
  Cc: Sean Christopherson, isaku.yamahata, kvm, linux-kernel,
	isaku.yamahata, Paolo Bonzini

On Mon, Jul 25, 2022 at 07:16:24PM +0530, Nikunj A. Dadhania wrote:
> On 7/20/2022 8:29 PM, Chao Peng wrote:
> > On Thu, Jul 14, 2022 at 01:03:46AM +0000, Sean Christopherson wrote:
> > ...
> >>
> >> Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
> >> on insertion/removal to (dis)allow hugepages as needed.
> >>
> >>   + efficient on KVM page fault (no new lookups)
> >>   + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
> >>   + straightforward to implement
> >>   + can (and should) be merged as part of the UPM series
> >>
> >> I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
> >> completely covered (fully shared) or not covered at all (fully private), but I'm
> >> not 100% certain that xa_for_each_range() works the way I think it does.
> > 
> > Hi Sean,
> > 
> > Below is the implementation to support 2M as you mentioned as option D.
> > It's based on UPM v7 xarray code: https://lkml.org/lkml/2022/7/6/259
> > 
> > Everything sounds good, the only trick bit is inc/dec disallow_lpage. If
> > we still treat it as a count, it will be a challenge to make the inc/dec
> > balanced. So in this patch I stole a bit for the purpose, looks ugly.
> > 
> > Any feedback is welcome.
> > 
> > Thanks,
> > Chao
> > 
> > -----------------------------------------------------------------------
> > From: Chao Peng <chao.p.peng@linux.intel.com>
> > Date: Wed, 20 Jul 2022 11:37:18 +0800
> > Subject: [PATCH] KVM: Add large page support for private memory
> > 
> > Update lpage_info when handling KVM_MEMORY_ENCRYPT_{UN,}REG_REGION.
> > 
> > Reserve a bit in disallow_lpage to indicate a large page has
> > private/share pages mixed.
> > 
> > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > ---
> 
> 
> > +static void update_mem_lpage_info(struct kvm *kvm,
> > +				  struct kvm_memory_slot *slot,
> > +				  unsigned int attr,
> > +				  gfn_t start, gfn_t end)
> > +{
> > +	unsigned long lpage_start, lpage_end;
> > +	unsigned long gfn, pages, mask;
> > +	int level;
> > +
> > +	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> > +		pages = KVM_PAGES_PER_HPAGE(level);
> > +		mask = ~(pages - 1);
> > +		lpage_start = start & mask;
> > +		lpage_end = end & mask;
> > +
> > +		/*
> > +		 * We only need to scan the head and tail page, for middle pages
> > +		 * we know they are not mixed.
> > +		 */
> > +		update_mixed(lpage_info_slot(lpage_start, slot, level),
> > +			     mem_attr_is_mixed(kvm, attr, lpage_start,
> > +							  lpage_start + pages));
> > +
> > +		if (lpage_start == lpage_end)
> > +			return;
> > +
> > +		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) {
> > +			update_mixed(lpage_info_slot(gfn, slot, level), false);
> > +		}
> 
> Boundary check missing here for the case when gfn reaches lpage_end.
> 
> 		if (gfn == lpage_end)
> 			return;

In this case, it's actually the tail page that I want to scan with the code
below.
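
To make it concrete with made-up numbers: say start = 0x2ff and end = 0xa01,
with 2M pages covering 512 GFNs each.  Then lpage_start = 0x200 and lpage_end =
0xa00.  The head 2M range [0x200, 0x400) and the tail 2M range [0xa00, 0xc00)
both straddle the converted range, so they have to be scanned for mixed pages,
while the middle ranges [0x400, 0xa00) are fully covered and can simply be
marked not-mixed.  The final update_mixed() call is that tail scan, so
returning early when gfn == lpage_end would skip it.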

It's also possible I misunderstand something here.

Chao
> 
> > +
> > +		update_mixed(lpage_info_slot(lpage_end, slot, level),
> > +			     mem_attr_is_mixed(kvm, attr, lpage_end,
> > +							  lpage_end + pages));
> > +	}
> > +}
> 
> Regards
> Nikunj

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  2022-07-08  3:44   ` Kai Huang
@ 2022-07-26 23:39     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-26 23:39 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Fri, Jul 08, 2022 at 03:44:05PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> > +static int kvm_faultin_pfn_private_mapped(struct kvm_vcpu *vcpu,
> > +					   struct kvm_page_fault *fault)
> > +{
> > +	hva_t hva = gfn_to_hva_memslot(fault->slot, fault->gfn);
> > +	struct page *page[1];
> > +
> > +	fault->map_writable = false;
> > +	fault->pfn = KVM_PFN_ERR_FAULT;
> > +	if (hva == KVM_HVA_ERR_RO_BAD || hva == KVM_HVA_ERR_BAD)
> > +		return RET_PF_CONTINUE;
> > +
> > +	/* TDX allows only RWX.  Read-only isn't supported. */
> > +	WARN_ON_ONCE(!fault->write);
> > +	if (pin_user_pages_fast(hva, 1, FOLL_WRITE, page) != 1)
> > +		return RET_PF_INVALID;
> > +
> > +	fault->map_writable = true;
> > +	fault->pfn = page_to_pfn(page[0]);
> > +	return RET_PF_CONTINUE;
> > +}
> > +
> >  static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  {
> >  	struct kvm_memory_slot *slot = fault->slot;
> > @@ -4058,6 +4094,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  			return RET_PF_EMULATE;
> >  	}
> >  
> > +	if (fault->is_private)
> > +		return kvm_faultin_pfn_private_mapped(vcpu, fault);
> > +
> >  	async = false;
> >  	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
> >  					  fault->write, &fault->map_writable,
> > @@ -4110,6 +4149,17 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
> >  	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
> >  }
> >  
> > +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r)
> > +{
> > +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
> > +		return;
> > +
> > +	if (fault->is_private)
> > +		put_page(pfn_to_page(fault->pfn));
> > +	else
> > +		kvm_release_pfn_clean(fault->pfn);
> > +}
> 
> What's the purpose of 'int r'?  Is it even used?

Removed r because it is unused.


> >  static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  {
> >  	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
> > @@ -4117,7 +4167,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >  	unsigned long mmu_seq;
> >  	int r;
> >  
> > -	fault->gfn = fault->addr >> PAGE_SHIFT;
> > +	fault->gfn = gpa_to_gfn(fault->addr) & ~kvm_gfn_shared_mask(vcpu->kvm);
> >  	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
> 
> Where is fault->is_private set? Shouldn't it be set here?

kvm_mmu_do_page_fault() does it, and no, because is_private is constant.
is_private is an input; gfn and slot, on the other hand, are working variables.


> >  	}
> >  
> >  	if (flush)
> > @@ -6023,6 +6079,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
> >  		write_unlock(&kvm->mmu_lock);
> >  	}
> >  
> > +	/*
> > +	 * For now this can only happen for non-TD VM, because TD private
> > +	 * mapping doesn't support write protection.  kvm_tdp_mmu_wrprot_slot()
> > +	 * will give a WARN() if it hits for TD.
> > +	 */
> 
> Unless I am mistaken, 'kvm_tdp_mmu_wrprot_slot() will give a WARN() if it hits
> for TD" is done in your later patch "KVM: x86/tdp_mmu: Ignore unsupported mmu
> operation on private GFNs".  Why putting comment here?
> 
> Please move this comment to that patch, and I think you can put that patch
> before this patch.
> 
> And this problem happens repeatedly in this series.  Could you check the entire
> series?

Split that stuff out into a separate patch.


> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index 9f3a6bea60a3..d3b30d62aca0 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -6,6 +6,8 @@
> >  #include <linux/kvm_host.h>
> >  #include <asm/kvm_host.h>
> >  
> > +#include "mmu.h"
> > +
> >  #undef MMU_DEBUG
> >  
> >  #ifdef MMU_DEBUG
> > @@ -164,11 +166,30 @@ static inline void kvm_mmu_alloc_private_sp(
> >  	WARN_ON_ONCE(!sp->private_sp);
> >  }
> >  
> > +static inline int kvm_alloc_private_sp_for_split(
> > +	struct kvm_mmu_page *sp, gfp_t gfp)
> > +{
> > +	gfp &= ~__GFP_ZERO;
> > +	sp->private_sp = (void*)__get_free_page(gfp);
> > +	if (!sp->private_sp)
> > +		return -ENOMEM;
> > +	return 0;
> > +}
> 
> What does "for_split" mean?  Why do we need it?

It means splitting a large page into smaller-sized ones, following
tdp_mmu_alloc_sp_for_split().  We can defer introducing this function until
large page support.


> > +
> >  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> >  {
> >  	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
> >  		free_page((unsigned long)sp->private_sp);
> >  }
> > +
> > +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> > +				     gfn_t gfn)
> > +{
> > +	if (is_private_sp(root))
> > +		return kvm_gfn_private(kvm, gfn);
> > +	else
> > +		return kvm_gfn_shared(kvm, gfn);
> > +}
> >  #else
> >  static inline bool is_private_sp(struct kvm_mmu_page *sp)
> >  {
> > @@ -194,11 +215,25 @@ static inline void kvm_mmu_alloc_private_sp(
> >  {
> >  }
> >  
> > +static inline int kvm_alloc_private_sp_for_split(
> > +	struct kvm_mmu_page *sp, gfp_t gfp)
> > +{
> > +	return -ENOMEM;
> > +}
> > +
> >  static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> >  {
> >  }
> > +
> > +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> > +				     gfn_t gfn)
> > +{
> > +	return gfn;
> > +}
> >  #endif
> >  
> > +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r);
> > +
> >  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
> >  {
> >  	/*
> > @@ -246,6 +281,7 @@ struct kvm_page_fault {
> >  	/* Derived from mmu and global state.  */
> >  	const bool is_tdp;
> >  	const bool nx_huge_page_workaround_enabled;
> > +	const bool is_private;
> >  
> >  	/*
> >  	 * Whether a >4KB mapping can be created or is forbidden due to NX
> > @@ -327,6 +363,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> >  		.prefetch = prefetch,
> >  		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
> >  		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
> > +		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
> 
> I guess putting this chunk and setting up fault->gfn together would be clearer?

is_private is an input for the KVM page fault; gfn is a working variable used
to resolve the KVM page fault.

> >  static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> > -				u64 old_spte, u64 new_spte, int level,
> > -				bool shared)
> > +				bool private_spte, u64 old_spte, u64 new_spte,
> > +				int level, bool shared)
> >  {
> > -	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
> > -			      shared);
> > +	__handle_changed_spte(kvm, as_id, gfn, private_spte,
> > +			old_spte, new_spte, level, shared);
> >  	handle_changed_spte_acc_track(old_spte, new_spte, level);
> >  	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
> >  				      new_spte, level);
> > @@ -640,6 +714,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
> >  					  struct tdp_iter *iter,
> >  					  u64 new_spte)
> >  {
> > +	bool freeze_spte = iter->is_private && !is_removed_spte(new_spte);
> > +	u64 tmp_spte = freeze_spte ? REMOVED_SPTE : new_spte;
> 
> Perhaps I am missing something.  Could you add comments to explain the logic?

Add a comment.
+       /*
+        * For conventional page table, the update flow is
+        * - update SPTE with atomic operation
+        * - handle changed SPTE. __handle_changed_spte()
+        * NOTE: __handle_changed_spte() (and functions) must be safe against
+        * concurrent update.  It is an exception to zap SPTE.  See
+        * tdp_mmu_zap_spte_atomic().
+        *
+        * For private page table, callbacks are needed to propagate SPTE
+        * change into the protected page table.  In order to atomically update
+        * both the SPTE and the protected page tables with callbacks, utilize
+        * freezing SPTE.
+        * - Freeze the SPTE. Set entry to REMOVED_SPTE.
+        * - Trigger callbacks for protected page tables. __handle_changed_spte()
+        * - Unfreeze the SPTE.  Set the entry to new_spte.
+        */


> > @@ -1067,6 +1163,12 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
> >  
> >  	lockdep_assert_held_write(&kvm->mmu_lock);
> >  	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
> > +		/*
> > +		 * Skip private root since private page table
> > +		 * is only torn down when VM is destroyed.
> > +		 */
> > +		if (is_private_sp(root))
> > +			continue;
> >  		if (!root->role.invalid &&
> >  		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
> >  			root->role.invalid = true;
> > @@ -1087,14 +1189,22 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
> >  	u64 new_spte;
> >  	int ret = RET_PF_FIXED;
> >  	bool wrprot = false;
> > +	unsigned long pte_access = ACC_ALL;
> > +	gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
> 
> Here looks the iter->gfn still contains the shared bits.  It is not consistent
> with above.
> 
> Can you put some words into the changelog explaining exactly what GFN will you
> put to iterator?
> 
> Or can you even split out this part as a separate patch?

I think you meant the above is the zap_leafs function.  It zaps a GPA range
modulo alias (modulo the shared bit).
This function is to resolve a KVM page fault, which means the GPA includes the
shared bit.
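
For example, assuming the shared bit is GPA bit 47 (GPAW 48; the actual
position depends on the TD configuration, this is only an illustration), a
shared access to GPA 0x1000 faults with fault->addr = 0x8000_0000_1000.
raw_gfn = gpa_to_gfn(fault->addr) keeps the shared bit (0x8_0000_0001) so the
page-table walk uses the shared alias, while fault->gfn masks it off (0x1) for
the memslot and pfn lookups.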

here is the updated patch.

From ae3cee62e53a877bef04813e6ae8d710b4a9128a Mon Sep 17 00:00:00 2001
Message-Id: <ae3cee62e53a877bef04813e6ae8d710b4a9128a.1658878587.git.isaku.yamahata@intel.com>
In-Reply-To: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1658878587.git.isaku.yamahata@intel.com>
References: <3941849bf08a55cfbbe69b222f0fd0dac7c5ee53.1658878587.git.isaku.yamahata@intel.com>
From: Isaku Yamahata <isaku.yamahata@intel.com>
Date: Thu, 14 Jul 2022 15:11:24 -0700
Subject: [PATCH 048/292] KVM: x86/tdp_mmu: Support TDX private mapping for TDP
 MMU

Allocate a protected page table for the private page table, and add hooks to
operate on the protected page table.  This patch adds allocation/free of
protected page tables and the hooks.  When calling hooks to update an SPTE
entry, freeze the entry, call the hooks and unfreeze the entry to allow
concurrent updates on page tables, which is the advantage of the TDP MMU.  As
kvm_gfn_shared_mask() always returns false, those hooks aren't called yet
with this patch.

When the faulting GPA is private, the KVM page fault is called private.  When
resolving a private KVM page fault, allocate a protected page table and call
hooks to operate on it.  On a change to a private PTE entry, invoke the
kvm_x86_ops hook in __handle_changed_spte() to propagate the change to the
protected page table.  The following depicts the relationship.

  private KVM page fault   |
      |                    |
      V                    |
 private GPA               |     CPU protected EPTP
      |                    |           |
      V                    |           V
 private PT root           |     protected PT root
      |                    |           |
      V                    |           V
   private PT --hook to propagate-->protected PT
      |                    |           |
      \--------------------+------\    |
                           |      |    |
                           |      V    V
                           |    private guest page
                           |
                           |
     non-encrypted memory  |    encrypted memory
                           |
PT: page table

The existing KVM TDP MMU code uses atomic update of SPTE.  On populating
the EPT entry, atomically set the entry.  However, it requires TLB
shootdown to zap SPTE.  To address it, the entry is frozen with the special
SPTE value that clears the present bit. After the TLB shootdown, the entry
is set to the eventual value (unfreeze).

For the protected page table, hooks are called to update the protected page
table in addition to the direct access to the private SPTE.  For the zapping
case, freezing the SPTE still works; the hooks can be called in addition to
the TLB shootdown.  For populating the private SPTE entry, however, there can
be a race condition without further protection:

  vcpu 1: populating 2M private SPTE
  vcpu 2: populating 4K private SPTE
  vcpu 2: TDX SEAMCALL to update 4K protected SPTE => error
  vcpu 1: TDX SEAMCALL to update 2M protected SPTE

To avoid the race, the frozen SPTE is utilized.  Instead of an atomic update
of the private entry, freeze the entry, call the hook that updates the
protected SPTE, then set the entry to the final value.

Support 4K page only at this stage.  2M page support can be done in future
patches.

Add an is_private member to kvm_page_fault to indicate the fault is private.
Also add an is_private member to struct tdp_iter to propagate it.

Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Acked-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |  20 +++
 arch/x86/kvm/mmu/mmu.c             |  15 +-
 arch/x86/kvm/mmu/mmu_internal.h    |  35 +++++
 arch/x86/kvm/mmu/tdp_iter.h        |   2 +-
 arch/x86/kvm/mmu/tdp_mmu.c         | 215 ++++++++++++++++++++++++-----
 arch/x86/kvm/mmu/tdp_mmu.h         |   2 +-
 virt/kvm/kvm_main.c                |   1 +
 8 files changed, 254 insertions(+), 38 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 32a6df784ea6..6982d57e4518 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -93,6 +93,8 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
 KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
 KVM_X86_OP(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
+KVM_X86_OP_OPTIONAL(free_private_sp)
+KVM_X86_OP_OPTIONAL(handle_changed_private_spte)
 KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
 KVM_X86_OP(get_l2_tsc_multiplier)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a73050a69aab..23a4d9d06772 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -467,6 +467,7 @@ struct kvm_mmu {
 			 struct kvm_mmu_page *sp);
 	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
+	hpa_t private_root_hpa;
 	union kvm_cpu_role cpu_role;
 	union kvm_mmu_page_role root_role;
 
@@ -1462,6 +1463,20 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
 	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
 }
 
+struct kvm_spte {
+	kvm_pfn_t pfn;
+	bool is_present;
+	bool is_leaf;
+};
+
+struct kvm_spte_change {
+	gfn_t gfn;
+	enum pg_level level;
+	struct kvm_spte old;
+	struct kvm_spte new;
+	void *sept_page;
+};
+
 struct kvm_x86_ops {
 	const char *name;
 
@@ -1574,6 +1589,11 @@ struct kvm_x86_ops {
 	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			     int root_level);
 
+	int (*free_private_sp)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+			       void *private_sp);
+	void (*handle_changed_private_spte)(
+		struct kvm *kvm, const struct kvm_spte_change *change);
+
 	bool (*has_wbinvd_exit)(void);
 
 	u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 37ae04ef0719..98138e688c59 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3465,7 +3465,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		goto out_unlock;
 
 	if (is_tdp_mmu_enabled(vcpu->kvm)) {
-		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
+		if (kvm_gfn_shared_mask(vcpu->kvm) &&
+		    !VALID_PAGE(mmu->private_root_hpa)) {
+			root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, true);
+			mmu->private_root_hpa = root;
+		}
+		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, false);
 		mmu->root.hpa = root;
 	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
 		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
@@ -4128,7 +4133,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	unsigned long mmu_seq;
 	int r;
 
-	fault->gfn = fault->addr >> PAGE_SHIFT;
+	fault->gfn = gpa_to_gfn(fault->addr) & ~kvm_gfn_shared_mask(vcpu->kvm);
 	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
 
 	if (page_fault_handle_page_track(vcpu, fault))
@@ -5669,6 +5674,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
 
 	mmu->root.hpa = INVALID_PAGE;
 	mmu->root.pgd = 0;
+	mmu->private_root_hpa = INVALID_PAGE;
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 		mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
 
@@ -6467,6 +6473,9 @@ int kvm_mmu_vendor_module_init(void)
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_unload(vcpu);
+	if (is_tdp_mmu_enabled(vcpu->kvm))
+		mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->private_root_hpa,
+				NULL);
 	free_mmu_pages(&vcpu->arch.root_mmu);
 	free_mmu_pages(&vcpu->arch.guest_mmu);
 	mmu_free_memory_caches(vcpu);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 29904f8d8719..6c529c804875 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -6,6 +6,8 @@
 #include <linux/kvm_host.h>
 #include <asm/kvm_host.h>
 
+#include "mmu.h"
+
 #undef MMU_DEBUG
 
 #ifdef MMU_DEBUG
@@ -163,11 +165,30 @@ static inline void kvm_mmu_alloc_private_sp(
 	}
 }
 
+static inline int kvm_alloc_private_sp_for_split(
+	struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	gfp &= ~__GFP_ZERO;
+	sp->private_sp = (void*)__get_free_page(gfp);
+	if (!sp->private_sp)
+		return -ENOMEM;
+	return 0;
+}
+
 static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
 {
 	if (sp->private_sp)
 		free_page((unsigned long)sp->private_sp);
 }
+
+static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
+				     gfn_t gfn)
+{
+	if (is_private_sp(root))
+		return kvm_gfn_private(kvm, gfn);
+	else
+		return kvm_gfn_shared(kvm, gfn);
+}
 #else
 static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
 {
@@ -183,9 +204,21 @@ static inline void kvm_mmu_alloc_private_sp(
 {
 }
 
+static inline int kvm_alloc_private_sp_for_split(
+	struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	return -ENOMEM;
+}
+
 static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
 {
 }
+
+static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
+				     gfn_t gfn)
+{
+	return gfn;
+}
 #endif
 
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
@@ -235,6 +268,7 @@ struct kvm_page_fault {
 	/* Derived from mmu and global state.  */
 	const bool is_tdp;
 	const bool nx_huge_page_workaround_enabled;
+	const bool is_private;
 
 	/*
 	 * Whether a >4KB mapping can be created or is forbidden due to NX
@@ -316,6 +350,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.prefetch = prefetch,
 		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
 		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
+		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
 
 		.max_level = vcpu->kvm->arch.tdp_max_page_level,
 		.req_level = PG_LEVEL_4K,
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index e25992df5bba..20422eeba6aa 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -71,7 +71,7 @@ struct tdp_iter {
 	tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
 	/* A pointer to the current SPTE */
 	tdp_ptep_t sptep;
-	/* The lowest GFN mapped by the current SPTE */
+	/* The lowest GFN (shared bits included) mapped by the current SPTE */
 	gfn_t gfn;
 	/* The level of the root page given to the iterator */
 	int root_level;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 8d8481beca4e..9d0bd5e1afbf 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -285,6 +285,11 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu,
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 	sp->role = role;
 
+	if (kvm_mmu_page_role_is_private(role))
+		kvm_mmu_alloc_private_sp(vcpu, sp);
+	else
+		kvm_mmu_init_private_sp(sp, NULL);
+
 	return sp;
 }
 
@@ -301,12 +306,12 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 	sp->gfn = gfn;
 	sp->ptep = sptep;
 	sp->tdp_mmu_page = true;
-	kvm_mmu_init_private_sp(sp, NULL);
 
 	trace_kvm_mmu_get_page(sp, true);
 }
 
-hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+static struct kvm_mmu_page *kvm_tdp_mmu_get_vcpu_root(struct kvm_vcpu *vcpu,
+						      bool private)
 {
 	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
 	struct kvm *kvm = vcpu->kvm;
@@ -318,6 +323,8 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	 * Check for an existing root before allocating a new one.  Note, the
 	 * role check prevents consuming an invalid root.
 	 */
+	if (private)
+		kvm_mmu_page_role_set_private(&role);
 	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
 		if (root->role.word == role.word &&
 		    kvm_tdp_mmu_get_root(root))
@@ -334,12 +341,17 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 
 out:
-	return __pa(root->spt);
+	return root;
+}
+
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private)
+{
+	return __pa(kvm_tdp_mmu_get_vcpu_root(vcpu, private)->spt);
 }
 
 static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				u64 old_spte, u64 new_spte, int level,
-				bool shared);
+				u64 old_spte, u64 new_spte,
+				union kvm_mmu_page_role role, bool shared);
 
 static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 {
@@ -365,6 +377,8 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 
 	if ((!is_writable_pte(old_spte) || pfn_changed) &&
 	    is_writable_pte(new_spte)) {
+		/* For memory slot operations, use GFN without aliasing */
+		gfn = gfn & ~kvm_gfn_shared_mask(kvm);
 		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
 		mark_page_dirty_in_slot(kvm, slot, gfn);
 	}
@@ -489,7 +503,18 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 							  REMOVED_SPTE, level);
 		}
 		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
-				    old_spte, REMOVED_SPTE, level, shared);
+				    old_spte, REMOVED_SPTE, sp->role, shared);
+	}
+
+	if (is_private_sp(sp) && WARN_ON(static_call(kvm_x86_free_private_sp)(
+						   kvm, sp->gfn, sp->role.level,
+						   kvm_mmu_private_sp(sp)))) {
+		/*
+		 * Failed to unlink Secure EPT page and there is nothing to do
+		 * further.  Intentionally leak the page to prevent the kernel
+		 * from accessing the encrypted page.
+		 */
+		kvm_mmu_init_private_sp(sp, NULL);
 	}
 
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
@@ -502,7 +527,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
  * @gfn: the base GFN that was mapped by the SPTE
  * @old_spte: The value of the SPTE before the change
  * @new_spte: The value of the SPTE after the change
- * @level: the level of the PT the SPTE is part of in the paging structure
+ * @role: the role of the PT the SPTE is part of in the paging structure
  * @shared: This operation may not be running under the exclusive use of
  *	    the MMU lock and the operation must synchronize with other
  *	    threads that might be modifying SPTEs.
@@ -511,14 +536,32 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
  * This function must be called for all TDP SPTE modifications.
  */
 static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				  u64 old_spte, u64 new_spte, int level,
-				  bool shared)
+				  u64 old_spte, u64 new_spte,
+				  union kvm_mmu_page_role role, bool shared)
 {
+	bool is_private = kvm_mmu_page_role_is_private(role);
+	int level = role.level;
 	bool was_present = is_shadow_present_pte(old_spte);
 	bool is_present = is_shadow_present_pte(new_spte);
 	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
-	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
+	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
+	bool pfn_changed = old_pfn != new_pfn;
+	struct kvm_spte_change change = {
+		.gfn = gfn,
+		.level = level,
+		.old = {
+			.pfn = old_pfn,
+			.is_present = was_present,
+			.is_leaf = was_leaf,
+		},
+		.new = {
+			.pfn = new_pfn,
+			.is_present = is_present,
+			.is_leaf = is_leaf,
+		},
+	};
 
 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
 	WARN_ON(level < PG_LEVEL_4K);
@@ -585,7 +628,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 
 	if (was_leaf && is_dirty_spte(old_spte) &&
 	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+		kvm_set_pfn_dirty(old_pfn);
 
 	/*
 	 * Recursively handle child PTs if the change removed a subtree from
@@ -594,19 +637,48 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	 * pages are kernel allocations and should never be migrated.
 	 */
 	if (was_present && !was_leaf &&
-	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
+	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) {
+		WARN_ON(is_private !=
+			is_private_sptep(spte_to_child_pt(old_spte, level)));
 		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
+	}
+
+	/*
+	 * Special handling for the private mapping.  We are either
+	 * setting up new mapping at middle level page table, or leaf,
+	 * or tearing down existing mapping.
+	 *
+	 * This is after handling lower page table by above
+	 * handle_removed_pt().  S-EPT requires removing S-EPT tables
+	 * after removing children.
+	 */
+	if (is_private &&
+	    /* Ignore change of software only bits. e.g. host_writable */
+	    (was_leaf != is_leaf || was_present != is_present || pfn_changed)) {
+		void *sept_page = NULL;
+
+		if (is_present && !is_leaf) {
+			struct kvm_mmu_page *sp = to_shadow_page(pfn_to_hpa(new_pfn));
+
+			sept_page = kvm_mmu_private_sp(sp);
+			WARN_ON(!sept_page);
+			WARN_ON(sp->role.level + 1 != level);
+			WARN_ON(sp->gfn != gfn);
+		}
+		change.sept_page = sept_page;
+
+		static_call(kvm_x86_handle_changed_private_spte)(kvm, &change);
+	}
 }
 
 static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				u64 old_spte, u64 new_spte, int level,
-				bool shared)
+				u64 old_spte, u64 new_spte,
+				union kvm_mmu_page_role role, bool shared)
 {
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
-			      shared);
-	handle_changed_spte_acc_track(old_spte, new_spte, level);
+	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, role, shared);
+	handle_changed_spte_acc_track(old_spte, new_spte, role.level);
 	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
-				      new_spte, level);
+				      new_spte, role.level);
 }
 
 /*
@@ -630,6 +702,24 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 					  struct tdp_iter *iter,
 					  u64 new_spte)
 {
+	/*
+	 * For conventional page table, the update flow is
+	 * - update SPTE with atomic operation
+	 * - handle changed SPTE. __handle_changed_spte()
+	 * NOTE: __handle_changed_spte() (and functions) must be safe against
+	 * concurrent update.  It is an exception to zap SPTE.  See
+	 * tdp_mmu_zap_spte_atomic().
+	 *
+	 * For private page table, callbacks are needed to propagate SPTE
+	 * change into the protected page table.  In order to atomically update
+	 * both the SPTE and the protected page tables with callbacks, utilize
+	 * freezing SPTE.
+	 * - Freeze the SPTE. Set entry to REMOVED_SPTE.
+	 * - Trigger callbacks for protected page tables. __handle_changed_spte()
+	 * - Unfreeze the SPTE.  Set the entry to new_spte.
+	 */
+	bool freeze_spte = is_private_sptep(iter->sptep) && !is_removed_spte(new_spte);
+	u64 tmp_spte = freeze_spte ? REMOVED_SPTE : new_spte;
 	u64 *sptep = rcu_dereference(iter->sptep);
 	u64 old_spte;
 
@@ -647,7 +737,7 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
 	 * does not hold the mmu_lock.
 	 */
-	old_spte = cmpxchg64(sptep, iter->old_spte, new_spte);
+	old_spte = cmpxchg64(sptep, iter->old_spte, tmp_spte);
 	if (old_spte != iter->old_spte) {
 		/*
 		 * The page table entry was modified by a different logical
@@ -659,10 +749,14 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 		return -EBUSY;
 	}
 
-	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
-			      new_spte, iter->level, true);
+	__handle_changed_spte(
+		kvm, iter->as_id, iter->gfn,
+		iter->old_spte, new_spte, sptep_to_sp(sptep)->role, true);
 	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
 
+	if (freeze_spte)
+		__kvm_tdp_mmu_write_spte(sptep, new_spte);
+
 	return 0;
 }
 
@@ -729,9 +823,11 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
  * SPTE had voldatile bits.
  */
 static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
-			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
-			      bool record_acc_track, bool record_dirty_log)
+			       u64 old_spte, u64 new_spte, gfn_t gfn, int level,
+			       bool record_acc_track, bool record_dirty_log)
 {
+	union kvm_mmu_page_role role;
+
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	/*
@@ -745,7 +841,9 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
 
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
+	role = sptep_to_sp(sptep)->role;
+	role.level = level;
+	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, role, false);
 
 	if (record_acc_track)
 		handle_changed_spte_acc_track(old_spte, new_spte, level);
@@ -797,8 +895,11 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
 			continue;					\
 		else
 
-#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)		\
-	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
+#define tdp_mmu_for_each_pte(_iter, _mmu, _private, _start, _end)	\
+	for_each_tdp_pte(_iter,						\
+		 to_shadow_page((_private) ? _mmu->private_root_hpa :	\
+				_mmu->root.hpa),			\
+		_start, _end)
 
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
@@ -964,6 +1065,14 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 	if (!zap_private && is_private_sp(root))
 		return false;
 
+	/*
+	 * start and end doesn't have GFN shared bit.  This function zaps
+	 * a region including alias.  Adjust shared bit of [start, end) if the
+	 * root is shared.
+	 */
+	start = kvm_gfn_for_root(kvm, root, start);
+	end = kvm_gfn_for_root(kvm, root, end);
+
 	rcu_read_lock();
 
 	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) {
@@ -1093,10 +1202,19 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	WARN_ON(sp->role.level != fault->goal_level);
 	if (unlikely(!fault->slot))
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
-	else
-		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-					 fault->pfn, iter->old_spte, fault->prefetch, true,
-					 fault->map_writable, &new_spte);
+	else {
+		unsigned long pte_access = ACC_ALL;
+		gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
+
+		/* TDX shared GPAs are not executable, enforce this for the SDV. */
+		if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
+			pte_access &= ~ACC_EXEC_MASK;
+
+		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
+				   gfn_unalias, fault->pfn, iter->old_spte,
+				   fault->prefetch, true, fault->map_writable,
+				   &new_spte);
+	}
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
@@ -1195,6 +1313,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	struct tdp_iter iter;
+	gfn_t raw_gfn;
+	bool is_private = fault->is_private;
 	int ret;
 
 	kvm_mmu_hugepage_adjust(vcpu, fault);
@@ -1203,7 +1323,16 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 	rcu_read_lock();
 
-	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
+	raw_gfn = gpa_to_gfn(fault->addr);
+
+	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn)) {
+		if (is_private) {
+			rcu_read_unlock();
+			return -EFAULT;
+		}
+	}
+
+	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
 		if (fault->nx_huge_page_workaround_enabled)
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
 
@@ -1219,6 +1348,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		    is_large_pte(iter.old_spte)) {
 			if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
 				break;
+			/*
+			 * TODO: large page support.
+			 * Doesn't support large page for TDX now
+			 */
+			WARN_ON(is_private_sptep(iter.sptep));
+
 
 			/*
 			 * The iter must explicitly re-read the spte here
@@ -1462,6 +1597,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(
 
 	sp->role = role;
 	sp->spt = (void *)__get_free_page(gfp);
+	if (kvm_mmu_page_role_is_private(role)) {
+		if (kvm_alloc_private_sp_for_split(sp, gfp)) {
+			free_page((unsigned long)sp->spt);
+			sp->spt = NULL;
+		}
+	}
 	if (!sp->spt) {
 		kmem_cache_free(mmu_page_header_cache, sp);
 		return NULL;
@@ -1477,6 +1618,11 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	union kvm_mmu_page_role role = tdp_iter_child_role(iter);
 	struct kvm_mmu_page *sp;
 
+	WARN_ON(kvm_mmu_page_role_is_private(role) !=
+		is_private_sptep(iter->sptep));
+	/* TODO: Large page isn't supported for private SPTE yet. */
+	WARN_ON(kvm_mmu_page_role_is_private(role));
+
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
 	 * careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct
@@ -1924,7 +2070,7 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 	if (WARN_ON(kvm_gfn_shared_mask(vcpu->kvm)))
 		return leaf;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
 		leaf = iter.level;
 		sptes[leaf] = iter.old_spte;
 	}
@@ -1951,7 +2097,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	tdp_ptep_t sptep = NULL;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	/* fast page fault for private GPA isn't supported. */
+	WARN_ON_ONCE(kvm_is_private_gpa(vcpu->kvm, addr));
+
+	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
 		*spte = iter.old_spte;
 		sptep = iter.sptep;
 	}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index c98c7df449a8..695175c921a5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -5,7 +5,7 @@
 
 #include <linux/kvm_host.h>
 
-hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private);
 
 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0acb0b6d1f82..7a5261eb7eb8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -196,6 +196,7 @@ bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
 
 	return true;
 }
+EXPORT_SYMBOL_GPL(kvm_is_reserved_pfn);
 
 /*
  * Switches to specified vcpu, until a matching vcpu_put()
-- 
2.25.1




-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  2022-07-11  8:28   ` Yuan Yao
@ 2022-07-26 23:41     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-26 23:41 UTC (permalink / raw)
  To: Yuan Yao
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Kai Huang

On Mon, Jul 11, 2022 at 04:28:18PM +0800,
Yuan Yao <yuan.yao@linux.intel.com> wrote:

> On Mon, Jun 27, 2022 at 02:53:38PM -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> > Allocate mirrored private page table for private page table, and add hooks
> > to operate on mirrored private page table.  This patch adds only hooks. As
> > kvm_gfn_shared_mask() returns false always, those hooks aren't called yet.
> >
> > Because private guest page is protected, page copy with mmu_notifier to
> > migrate page doesn't work.  Callback from backing store is needed.
> >
> > When the faulting GPA is private, the KVM fault is also called private.
> > When resolving private KVM, allocate mirrored private page table and call
> > hooks to operate on mirrored private page table. On the change of the
> > private PTE entry, invoke kvm_x86_ops hook in __handle_changed_spte() to
> > propagate the change to mirrored private page table. The following depicts
> > the relationship.
> >
> >   private KVM page fault   |
> >       |                    |
> >       V                    |
> >  private GPA               |
> >       |                    |
> >       V                    |
> >  KVM private PT root       |  CPU private PT root
> >       |                    |           |
> >       V                    |           V
> >    private PT ---hook to mirror--->mirrored private PT
> >       |                    |           |
> >       \--------------------+------\    |
> >                            |      |    |
> >                            |      V    V
> >                            |    private guest page
> >                            |
> >                            |
> >      non-encrypted memory  |    encrypted memory
> >                            |
> > PT: page table
> >
> > The existing KVM TDP MMU code uses atomic update of SPTE.  On populating
> > the EPT entry, atomically set the entry.  However, it requires TLB
> > shootdown to zap SPTE.  To address it, the entry is frozen with the special
> > SPTE value that clears the present bit. After the TLB shootdown, the entry
> > is set to the eventual value (unfreeze).
> >
> > For mirrored private page table, hooks are called to update mirrored
> > private page table in addition to direct access to the private SPTE. For
> > the zapping case, it works to freeze the SPTE. It can call hooks in
> > addition to TLB shootdown.  For populating the private SPTE entry, there
> > can be a race condition without further protection
> >
> >   vcpu 1: populating 2M private SPTE
> >   vcpu 2: populating 4K private SPTE
> >   vcpu 2: TDX SEAMCALL to update 4K mirrored private SPTE => error
> >   vcpu 1: TDX SEAMCALL to update 2M mirrored private SPTE
> >
> > To avoid the race, the frozen SPTE is utilized.  Instead of atomic update
> > of the private entry, freeze the entry, call the hook that update mirrored
> > private SPTE, set the entry to the final value.
> >
> > Support 4K page only at this stage.  2M page support can be done in future
> > patches.
> >
> > Add is_private member to kvm_page_fault to indicate the fault is private.
> > Also add an is_private member to struct tdp_iter to propagate it.
> >
> > Co-developed-by: Kai Huang <kai.huang@intel.com>
> > Signed-off-by: Kai Huang <kai.huang@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/include/asm/kvm-x86-ops.h |   2 +
> >  arch/x86/include/asm/kvm_host.h    |  20 +++
> >  arch/x86/kvm/mmu/mmu.c             |  86 +++++++++-
> >  arch/x86/kvm/mmu/mmu_internal.h    |  37 +++++
> >  arch/x86/kvm/mmu/paging_tmpl.h     |   2 +-
> >  arch/x86/kvm/mmu/tdp_iter.c        |   1 +
> >  arch/x86/kvm/mmu/tdp_iter.h        |   5 +-
> >  arch/x86/kvm/mmu/tdp_mmu.c         | 247 +++++++++++++++++++++++------
> >  arch/x86/kvm/mmu/tdp_mmu.h         |   7 +-
> >  virt/kvm/kvm_main.c                |   1 +
> >  10 files changed, 346 insertions(+), 62 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> > index 32a6df784ea6..6982d57e4518 100644
> > --- a/arch/x86/include/asm/kvm-x86-ops.h
> > +++ b/arch/x86/include/asm/kvm-x86-ops.h
> > @@ -93,6 +93,8 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
> >  KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
> >  KVM_X86_OP(get_mt_mask)
> >  KVM_X86_OP(load_mmu_pgd)
> > +KVM_X86_OP_OPTIONAL(free_private_sp)
> > +KVM_X86_OP_OPTIONAL(handle_changed_private_spte)
> >  KVM_X86_OP(has_wbinvd_exit)
> >  KVM_X86_OP(get_l2_tsc_offset)
> >  KVM_X86_OP(get_l2_tsc_multiplier)
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index bfc934dc9a33..f2a4d5a18851 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -440,6 +440,7 @@ struct kvm_mmu {
> >  			 struct kvm_mmu_page *sp);
> >  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
> >  	struct kvm_mmu_root_info root;
> > +	hpa_t private_root_hpa;
> >  	union kvm_cpu_role cpu_role;
> >  	union kvm_mmu_page_role root_role;
> >
> > @@ -1435,6 +1436,20 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
> >  	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
> >  }
> >
> > +struct kvm_spte {
> > +	kvm_pfn_t pfn;
> > +	bool is_present;
> > +	bool is_leaf;
> > +};
> > +
> > +struct kvm_spte_change {
> > +	gfn_t gfn;
> > +	enum pg_level level;
> > +	struct kvm_spte old;
> > +	struct kvm_spte new;
> > +	void *sept_page;
> > +};
> > +
> >  struct kvm_x86_ops {
> >  	const char *name;
> >
> > @@ -1547,6 +1562,11 @@ struct kvm_x86_ops {
> >  	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
> >  			     int root_level);
> >
> > +	int (*free_private_sp)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> > +			       void *private_sp);
> > +	void (*handle_changed_private_spte)(
> > +		struct kvm *kvm, const struct kvm_spte_change *change);
> > +
> >  	bool (*has_wbinvd_exit)(void);
> >
> >  	u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu);
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index a5bf3e40e209..ef925722ee28 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1577,7 +1577,11 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> >  		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
> >
> >  	if (is_tdp_mmu_enabled(kvm))
> > -		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
> > +		/*
> > +		 * private page needs to be kept and handle page migration
> > +		 * on next EPT violation.
> > +		 */
> > +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush, false);
> >
> >  	return flush;
> >  }
> > @@ -3082,7 +3086,8 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
> >  		 * SPTE value without #VE suppress bit cleared
> >  		 * (kvm->arch.shadow_mmio_value = 0).
> >  		 */
> > -		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching) ||
> > +		if (unlikely(!vcpu->kvm->arch.enable_mmio_caching &&
> > +			     !kvm_gfn_shared_mask(vcpu->kvm)) ||
> >  		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
> >  			return RET_PF_EMULATE;
> >  	}
> > @@ -3454,7 +3459,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
> >  		goto out_unlock;
> >
> >  	if (is_tdp_mmu_enabled(vcpu->kvm)) {
> > -		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
> > +		if (kvm_gfn_shared_mask(vcpu->kvm) &&
> > +		    !VALID_PAGE(mmu->private_root_hpa)) {
> > +			root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, true);
> > +			mmu->private_root_hpa = root;
> > +		}
> > +		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, false);
> >  		mmu->root.hpa = root;
> >  	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
> >  		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
> > @@ -4026,6 +4036,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
> >  	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
> >  }
> >
> > +/*
> > + * Private page can't be release on mmu_notifier without losing page contents.
> > + * The help, callback, from backing store is needed to allow page migration.
> > + * For now, pin the page.
> > + */
> > +static int kvm_faultin_pfn_private_mapped(struct kvm_vcpu *vcpu,
> > +					   struct kvm_page_fault *fault)
> > +{
> > +	hva_t hva = gfn_to_hva_memslot(fault->slot, fault->gfn);
> > +	struct page *page[1];
> > +
> > +	fault->map_writable = false;
> > +	fault->pfn = KVM_PFN_ERR_FAULT;
> > +	if (hva == KVM_HVA_ERR_RO_BAD || hva == KVM_HVA_ERR_BAD)
> > +		return RET_PF_CONTINUE;
> > +
> > +	/* TDX allows only RWX.  Read-only isn't supported. */
> > +	WARN_ON_ONCE(!fault->write);
> > +	if (pin_user_pages_fast(hva, 1, FOLL_WRITE, page) != 1)
> > +		return RET_PF_INVALID;
> > +
> > +	fault->map_writable = true;
> > +	fault->pfn = page_to_pfn(page[0]);
> > +	return RET_PF_CONTINUE;
> > +}
> > +
> >  static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  {
> >  	struct kvm_memory_slot *slot = fault->slot;
> > @@ -4058,6 +4094,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  			return RET_PF_EMULATE;
> >  	}
> >
> > +	if (fault->is_private)
> > +		return kvm_faultin_pfn_private_mapped(vcpu, fault);
> > +
> >  	async = false;
> >  	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
> >  					  fault->write, &fault->map_writable,
> > @@ -4110,6 +4149,17 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
> >  	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
> >  }
> >
> > +void kvm_mmu_release_fault(struct kvm *kvm, struct kvm_page_fault *fault, int r)
> > +{
> > +	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
> > +		return;
> > +
> > +	if (fault->is_private)
> > +		put_page(pfn_to_page(fault->pfn));
> 
> pin_user_pages_fast() is used above, which sets FOLL_PIN internally, so
> should we use unpin_user_page() here?  FOLL_PIN means the unpin should be
> done by unpin_user_page(), not put_page(); please see
> /Documentation/core-api/pin_user_pages.rst and the comments on FOLL_PIN.

To align with large page support, I'll make it use get_user_pages_fast() and
put_page().
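
Just to make the pairing concrete, here is a rough sketch of the revised
helper with get_user_pages_fast(); it mirrors the helper in the posted patch
with pin_user_pages_fast() swapped out, only to illustrate the FOLL_GET /
put_page() pairing, and is not the final patch:

/*
 * Sketch only: take a FOLL_GET reference with get_user_pages_fast() so that
 * the put_page() in kvm_mmu_release_fault() is the matching release.
 */
static int kvm_faultin_pfn_private_mapped(struct kvm_vcpu *vcpu,
					  struct kvm_page_fault *fault)
{
	hva_t hva = gfn_to_hva_memslot(fault->slot, fault->gfn);
	struct page *page;

	fault->map_writable = false;
	fault->pfn = KVM_PFN_ERR_FAULT;
	if (hva == KVM_HVA_ERR_RO_BAD || hva == KVM_HVA_ERR_BAD)
		return RET_PF_CONTINUE;

	/* TDX allows only RWX.  Read-only isn't supported. */
	WARN_ON_ONCE(!fault->write);
	if (get_user_pages_fast(hva, 1, FOLL_WRITE, &page) != 1)
		return RET_PF_INVALID;

	fault->map_writable = true;
	fault->pfn = page_to_pfn(page);
	return RET_PF_CONTINUE;
}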

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
  2022-07-12  2:36   ` Yuan Yao
@ 2022-07-26 23:42     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-26 23:42 UTC (permalink / raw)
  To: Yuan Yao
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Kai Huang

On Tue, Jul 12, 2022 at 10:36:57AM +0800,
Yuan Yao <yuan.yao@linux.intel.com> wrote:

> > diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> > index 62ae590d4e5b..e5b73638bd83 100644
> > --- a/arch/x86/kvm/mmu/paging_tmpl.h
> > +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> > @@ -877,7 +877,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >
> >  out_unlock:
> >  	write_unlock(&vcpu->kvm->mmu_lock);
> > -	kvm_release_pfn_clean(fault->pfn);
> > +	kvm_mmu_release_fault(vcpu->kvm, fault, r);
> 
> Do we really need this? Shadow page table is not supported for TD guest.

For consistency: it keeps kvm_faultin_pfn() and kvm_mmu_release_fault() as a pair.

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 002/102] Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs"
  2022-07-13  1:55   ` Kai Huang
@ 2022-07-26 23:57     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-26 23:57 UTC (permalink / raw)
  To: Kai Huang
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Chao Gao, Sean Christopherson, Suzuki K Poulose, Anup Patel,
	Claudio Imbrenda

On Wed, Jul 13, 2022 at 01:55:46PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-06-27 at 14:52 -0700, isaku.yamahata@intel.com wrote:
> > From: Chao Gao <chao.gao@intel.com>
> > 
> > This partially reverts commit b99040853738 ("KVM: Pass kvm_init()'s opaque
> > param to additional arch funcs") remove opaque from
> > kvm_arch_check_processor_compat because no one uses this opaque now.
> > Address conflicts for ARM (due to file movement) and manually handle RISC-V
> > which comes after the commit.
> > 
> > And changes about kvm_arch_hardware_setup() in original commit are still
> > needed so they are not reverted.
> 
> I tried to dig the history to find out why we are doing this.
> 
> IMHO it's better to give a reason why you need to revert the opaque.  I guess no
> one uses this opaque now doesn't mean we need to remove it?
> 
> Perhaps you should mention this is a preparation to
> hardware_enable_all()/hardware_disable_all() during module loading time. 
> Instead of extending hardware_enable_all()/hardware_disable_all() to take the
> opaque and pass to kvm_arch_check_process_compat(), just remove the opaque.
> 
> Or perhaps just merge this patch to next one?


Here is the updated commit message.

    Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs"
    
    This partially reverts commit b99040853738 ("KVM: Pass kvm_init()'s opaque
    param to additional arch funcs") to remove the opaque parameter from
    kvm_arch_check_processor_compat() because no one uses it now.  Address
    conflicts for ARM (due to file movement) and manually handle RISC-V, which
    was added after that commit.  The changes to kvm_arch_hardware_setup() in
    the original commit are still needed, so they are not reverted.
    
    The current implementation enables hardware (e.g. VMXON on all CPUs) as
    arch-specific initialization when the first VM is created, and disables
    hardware (on x86, VMXOFF on all CPUs) when the last VM is destroyed.
    
    TDX requires its initialization at KVM module load time, with VMX enabled
    on all available CPUs, so hardware needs to be enabled/disabled during
    module initialization as well.  To reuse the same enable/disable logic,
    one way is to pass around the unused opaque argument; another is to remove
    it.  This patch prepares for the latter by removing the argument.


-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko
  2022-07-12  1:13       ` Kai Huang
@ 2022-07-27  0:39         ` Isaku Yamahata
  2022-07-27  4:38           ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-27  0:39 UTC (permalink / raw)
  To: Kai Huang
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini,
	Sean Christopherson

On Tue, Jul 12, 2022 at 01:13:10PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> >     To use TDX functionality, TDX module needs to be loaded and initialized.
> >     This patch is to call a function, tdx_init(), when loading kvm_intel.ko.
> 
> Could you add explain why we need to init TDX module when loading KVM module?

Makes sense. Added a paragraph for it.


> >     Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
> >     while hardware is enabled, i.e. after hardware_enable_all() and before
> >     hardware_disable_all().  Because TDX requires all present CPUs to enable
> >     VMX (VMXON).
> 
> Please explicitly say it is a replacement of the default __weak version, so
> people can know there's already a default one.  Otherwise people may wonder why
> this isn't called in this patch (i.e. I skipped patch 03 as it looks not
> directly related to TDX).
> 
> That being said, why cannot you send out that patch separately but have to
> include it into TDX series?
> 
> Looking at it, the only thing that is related to TDX is an empty
> kvm_arch_post_hardware_enable_setup() with a comment saying TDX needs to do
> something there.  This logic has nothing to do with the actual job in that
> patch. 
> 
> So why cannot we introduce that __weak version in this patch, so that the rest
> of it can be non-TDX related at all and can be upstreamed separately?

The patch that adds the weak kvm_arch_post_hardware_enable_setup() doesn't make
sense without the hook, because on its own it would only enable hardware and
then disable it again immediately.
That patch touches multiple KVM archs, and I split the TDX-specific part out
into this patch.  Ideally those two patches should sit next to each other, but
I moved the former earlier to draw attention from reviewers of the other KVM
archs.

Here is the updated version.

    KVM: TDX: Initialize the TDX module when loading the KVM intel kernel module
    
    To use TDX, the TDX module needs to be loaded and initialized.  This patch
    is to call a function to initialize the TDX module when loading KVM intel
    kernel module.
    
    There are several options on when to initialize the TDX module.  A.)
    kernel boot time as builtin, B.) kernel module loading time, C.) the first
    guest TD creation time.  B.) was chosen.  A.) causes unnecessary overhead
    (boot time and memory) even when TDX isn't used.  With C.), a user may hit
    an error of the TDX initialization when trying to create the first guest
    TD.  The machine that fails to initialize the TDX module can't boot any
    guest TD further.  Such failure is undesirable.  B.) has a good balance
    between them.
    
    Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
    while hardware is enabled, i.e. after hardware_enable_all() and before
    hardware_disable_all().  Because TDX requires all present CPUs to enable
    VMX (VMXON).  The x86 specific kvm_arch_post_hardware_enable_setup overrides
    the existing weak symbol of kvm_arch_post_hardware_enable_setup which is
    called at the KVM module initialization.
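
For illustration only, a minimal sketch of what the x86 override could look
like; the post_hardware_enable_setup member used below is a hypothetical
placeholder for however the series actually reaches the TDX module
initialization, not the exact code:

/*
 * Sketch: called from kvm_init() between hardware_enable_all() and
 * hardware_disable_all(), i.e. with VMX enabled on all present CPUs.
 */
int kvm_arch_post_hardware_enable_setup(void *opaque)
{
	struct kvm_x86_init_ops *ops = opaque;

	/* hypothetical hook standing in for the VMX/TDX-specific setup */
	if (ops->post_hardware_enable_setup)
		return ops->post_hardware_enable_setup();
	return 0;
}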

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions
  2022-07-12  1:30       ` Kai Huang
@ 2022-07-27  0:44         ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-27  0:44 UTC (permalink / raw)
  To: Kai Huang
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Tue, Jul 12, 2022 at 01:30:34PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Mon, 2022-07-11 at 17:38 -0700, Isaku Yamahata wrote:
> > On Tue, Jun 28, 2022 at 03:53:31PM +1200,
> > Kai Huang <kai.huang@intel.com> wrote:
> > 
> > > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > > 
> > > > Currently, KVM VMX module initialization/exit functions are a single
> > > > function each.  Refactor KVM VMX module initialization functions into KVM
> > > > common part and VMX part so that TDX specific part can be added cleanly.
> > > > Opportunistically refactor module exit function as well.
> > > > 
> > > > The current module initialization flow is, 1.) calculate the sizes of VMX
> > > > kvm structure and VMX vcpu structure, 2.) hyper-v specific initialization
> > > > 3.) report those sizes to the KVM common layer and KVM common
> > > > initialization, and 4.) VMX specific system-wide initialization.
> > > > 
> > > > Refactor the KVM VMX module initialization function into functions with a
> > > > wrapper function to separate VMX logic in vmx.c from a file, main.c, common
> > > > among VMX and TDX.  We have a wrapper function, "vt_init() {vmx kvm/vcpu
> > > > size calculation; hv_vp_assist_page_init(); kvm_init(); vmx_init(); }" in
> > > > main.c, and hv_vp_assist_page_init() and vmx_init() in vmx.c.
> > > > hv_vp_assist_page_init() initializes hyper-v specific assist pages,
> > > > kvm_init() does system-wide initialization of the KVM common layer, and
> > > > vmx_init() does system-wide VMX initialization.
> > > > 
> > > > The KVM architecture common layer allocates struct kvm with reported size
> > > > for architecture-specific code.  The KVM VMX module defines its structure
> > > > as struct vmx_kvm { struct kvm; VMX specific members;} and uses it as
> > > > struct vmx kvm.  Similar for vcpu structure. TDX KVM patches will define
> > > > TDX specific kvm and vcpu structures, add tdx_pre_kvm_init() to report the
> > > > sizes of them to the KVM common layer.
> > > > 
> > > > The current module exit function is also a single function, a combination
> > > > of VMX specific logic and common KVM logic.  Refactor it into VMX specific
> > > > logic and KVM common logic.  This is just refactoring to keep the VMX
> > > > specific logic in vmx.c from main.c.
> > > 
> > > This patch, coupled with the patch:
> > > 
> > > 	KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX
> > > 
> > > Basically provides an infrastructure to support both VMX and TDX.  Why we cannot
> > > merge them into one patch?  What's the benefit of splitting them?
> > > 
> > > At least, why the two patches cannot be put together closely?
> > 
> > It is trivial for the change of "KVM: VMX: Move out vmx_x86_ops to 'main.c' to
> > wrap VMX and TDX" to introduce no functional change.  But it's not trivial
> > for this patch to introduce no functional change.
> 
> This doesn't sound right.  If I understand correctly, this patch supposedly
> shouldn't bring any functional change, right?  Could you explain what functional
> change does this patch bring?

This patch doesn't bring a functional change, but it does change the order of
some function calls.  In practice that doesn't matter, but I don't think it's
trivial to see that.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko
  2022-07-27  0:39         ` Isaku Yamahata
@ 2022-07-27  4:38           ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-27  4:38 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, Sean Christopherson

On Tue, 2022-07-26 at 17:39 -0700, Isaku Yamahata wrote:
> On Tue, Jul 12, 2022 at 01:13:10PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > >     To use TDX functionality, TDX module needs to be loaded and initialized.
> > >     This patch is to call a function, tdx_init(), when loading kvm_intel.ko.
> > 
> > Could you add explain why we need to init TDX module when loading KVM module?
> 
> Makes sense. Added a paragraph for it.
> 
> 
> > >     Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
> > >     while hardware is enabled, i.e. after hardware_enable_all() and before
> > >     hardware_disable_all().  Because TDX requires all present CPUs to enable
> > >     VMX (VMXON).
> > 
> > Please explicitly say it is a replacement of the default __weak version, so
> > people can know there's already a default one.  Otherwise people may wonder why
> > this isn't called in this patch (i.e. I skipped patch 03 as it looks not
> > directly related to TDX).
> > 
> > That being said, why cannot you send out that patch separately but have to
> > include it into TDX series?
> > 
> > Looking at it, the only thing that is related to TDX is an empty
> > kvm_arch_post_hardware_enable_setup() with a comment saying TDX needs to do
> > something there.  This logic has nothing to do with the actual job in that
> > patch. 
> > 
> > So why cannot we introduce that __weak version in this patch, so that the rest
> > of it can be non-TDX related at all and can be upstreamed separately?
> 
> The patch that adds the weak kvm_arch_post_hardware_enable_setup() doesn't make
> sense without the hook, because on its own it would only enable hardware and
> then disable it again immediately.

It's not a disaster if you describe the reason to do so in the changelog, but no
strong opinion here.

But I do think you need a comment to explain why hardware is disabled again
immediately.  Is it because we want to maintain the current behaviour of
allowing an out-of-tree driver, e.g. VirtualBox, to be loaded while KVM is
loaded?

 
> That patch touches multiple KVM archs, and I split the TDX-specific part out
> into this patch.  Ideally those two patches should sit next to each other, but
> I moved the former earlier to draw attention from reviewers of the other KVM
> archs.

Explicitly say this is the replacement of the default __weak version is fine.

> 
> Here is the updated version.
> 
>     KVM: TDX: Initialize the TDX module when loading the KVM intel kernel module
>     
>     To use TDX, the TDX module needs to be loaded and initialized.  This patch
>     is to call a function to initialize the TDX module when loading KVM intel
>     kernel module.
>     
>     There are several options on when to initialize the TDX module.  A.)
>     kernel boot time as builtin, B.) kernel module loading time, C.) the first
>     guest TD creation time.  B.) was chosen.  A.) causes unnecessary overhead
>     (boot time and memory) even when TDX isn't used.  With C.), a user may hit
>     an error of the TDX initialization when trying to create the first guest
>     TD.  The machine that fails to initialize the TDX module can't boot any
>     guest TD further.  Such failure is undesirable.  B.) has a good balance
>     between them.

You don't need to mention A.  When this patch is merged, the host series must
have been merged already.  In other words, this is already a fact, not an
option.

>     
>     Add a hook, kvm_arch_post_hardware_enable_setup, to module initialization
>     while hardware is enabled, i.e. after hardware_enable_all() and before
>     hardware_disable_all().  
> 

You don't need to say "add a hook ..., i.e. after hardware_enable_all() and
before hardware_disable_all()".  Where the function is called is already a fact.
We have a __weak version already.


> Because TDX requires all present CPUs to enable
>     VMX (VMXON).  The x86 specific kvm_arch_post_hardware_enable_setup overrides
>     the existing weak symbol of kvm_arch_post_hardware_enable_setup which is
>     called at the KVM module initialization.
> 


^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-26 14:32       ` Chao Peng
@ 2022-07-27  9:26         ` Nikunj A. Dadhania
  2022-08-03 10:48           ` Chao Peng
  0 siblings, 1 reply; 219+ messages in thread
From: Nikunj A. Dadhania @ 2022-07-27  9:26 UTC (permalink / raw)
  To: Chao Peng
  Cc: Sean Christopherson, isaku.yamahata, kvm, linux-kernel,
	isaku.yamahata, Paolo Bonzini

On 7/26/2022 8:02 PM, Chao Peng wrote:
> On Mon, Jul 25, 2022 at 07:16:24PM +0530, Nikunj A. Dadhania wrote:
>> On 7/20/2022 8:29 PM, Chao Peng wrote:
>>> On Thu, Jul 14, 2022 at 01:03:46AM +0000, Sean Christopherson wrote:
>>> ...
>>>>
>>>> Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
>>>> on insertion/removal to (dis)allow hugepages as needed.
>>>>
>>>>   + efficient on KVM page fault (no new lookups)
>>>>   + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
>>>>   + straightforward to implement
>>>>   + can (and should) be merged as part of the UPM series
>>>>
>>>> I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
>>>> completely covered (fully shared) or not covered at all (fully private), but I'm
>>>> not 100% certain that xa_for_each_range() works the way I think it does.
>>>
>>> Hi Sean,
>>>
>>> Below is the implementation to support 2M as you mentioned as option D.
>>> It's based on UPM v7 xarray code: https://lkml.org/lkml/2022/7/6/259
>>>
>>> Everything sounds good, the only trick bit is inc/dec disallow_lpage. If
>>> we still treat it as a count, it will be a challenge to make the inc/dec
>>> balanced. So in this patch I stole a bit for the purpose, looks ugly.
>>>
>>> Any feedback is welcome.
>>>
>>> Thanks,
>>> Chao
>>>
>>> -----------------------------------------------------------------------
>>> From: Chao Peng <chao.p.peng@linux.intel.com>
>>> Date: Wed, 20 Jul 2022 11:37:18 +0800
>>> Subject: [PATCH] KVM: Add large page support for private memory
>>>
>>> Update lpage_info when handling KVM_MEMORY_ENCRYPT_{UN,}REG_REGION.
>>>
>>> Reserve a bit in disallow_lpage to indicate a large page has
>>> private/share pages mixed.
>>>
>>> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
>>> ---
>>
>>
>>> +static void update_mem_lpage_info(struct kvm *kvm,
>>> +				  struct kvm_memory_slot *slot,
>>> +				  unsigned int attr,
>>> +				  gfn_t start, gfn_t end)
>>> +{
>>> +	unsigned long lpage_start, lpage_end;
>>> +	unsigned long gfn, pages, mask;
>>> +	int level;
>>> +
>>> +	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
>>> +		pages = KVM_PAGES_PER_HPAGE(level);
>>> +		mask = ~(pages - 1);
>>> +		lpage_start = start & mask;
>>> +		lpage_end = end & mask;
>>> +
>>> +		/*
>>> +		 * We only need to scan the head and tail page, for middle pages
>>> +		 * we know they are not mixed.
>>> +		 */
>>> +		update_mixed(lpage_info_slot(lpage_start, slot, level),
>>> +			     mem_attr_is_mixed(kvm, attr, lpage_start,
>>> +							  lpage_start + pages));
>>> +
>>> +		if (lpage_start == lpage_end)
>>> +			return;
>>> +
>>> +		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) {
>>> +			update_mixed(lpage_info_slot(gfn, slot, level), false);
>>> +		}
>>
>> Boundary check missing here for the case when gfn reaches lpage_end.
>>
>> 		if (gfn == lpage_end)
>> 			return;
> 
> In this case, it's actually the tail page that I want to scan for with
> below code.

What if you do not have the tail lpage?

For example: memslot base_gfn = 0x1000 and npages is 0x800, so memslot range
is 0x1000 to 0x17ff.

Assume a case where this function is called with start = 0x1000 and end = 0x1800.
For 2M, pages = 0x200 (so the low-bits mask is 0x1ff), and start and end are
both 2M aligned.

The first update_mixed() takes care of 0x1000-0x1200.
The loop update_mixed() covers 0x1200-0x1800, so nothing is left for the last
(tail) update_mixed() to process; gfn 0x1800 is already past the memslot.

> 
> It's also possible I misunderstand something here.
> 
> Chao
>>
>>> +
>>> +		update_mixed(lpage_info_slot(lpage_end, slot, level),
>>> +			     mem_attr_is_mixed(kvm, attr, lpage_end,
>>> +							  lpage_end + pages));

lpage_info_slot() sometimes causes a crash here, as I noticed that it returns
an out-of-bounds index.
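
One possible fix for the tail handling, assuming the rest of the helper stays
as posted above, is to skip the tail scan when end is already aligned to the
current level, since then there is no partial tail large page and lpage_end
may even lie past the memslot:

		/*
		 * Sketch only: no partial tail page when end is aligned to
		 * this level; lpage_end may be beyond the memslot, so don't
		 * touch lpage_info for it.
		 */
		if (lpage_end == end)
			continue;

		update_mixed(lpage_info_slot(lpage_end, slot, level),
			     mem_attr_is_mixed(kvm, attr, lpage_end,
							  lpage_end + pages));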

Regards
Nikunj



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialiization
  2022-06-27 21:52 ` [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialiization isaku.yamahata
  2022-07-12  1:15   ` Kai Huang
  2022-07-13  3:11   ` Kai Huang
@ 2022-07-27 22:04   ` Isaku Yamahata
  2 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-27 22:04 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Sean Christopherson

Here is the updated version.

commit 7d042749b631f668ed9e99044228f16c212161bc
Author: Isaku Yamahata <isaku.yamahata@intel.com>
Date:   Fri Apr 22 16:56:51 2022 -0700

    KVM: Refactor CPU compatibility check on module initialization
    
    The TDX module requires initialization, and that initialization requires
    VMX to be enabled.  Although there are several options of when to
    initialize it, the choice is the initialization time of the KVM kernel
    module.  There is no usable arch-specific hook for the TDX module to
    utilize during KVM kernel module initialization, and the current code
    doesn't enable/disable hardware (VMX in the TDX case) at that point.  Add
    a hook for enabling hardware, arch-specific initialization, and disabling
    hardware during KVM kernel module initialization to make room for TDX
    module initialization.  The current KVM enables hardware when the first VM
    is created and disables hardware when the last VM is destroyed; when no VM
    is running, hardware is disabled.  To follow these semantics, the kernel
    module initialization needs to disable hardware again at the end.
    Opportunistically refactor the code to enable/disable hardware.
    
    Add hardware_enable_all() and hardware_disable_all() to kvm_init() and
    introduce a new arch-specific callback function,
    kvm_arch_post_hardware_enable_setup(), for an arch to do arch-specific
    initialization that requires hardware_enable_all().  Opportunistically,
    move kvm_arch_check_processor_compat() into hardware_enable_nolock().
    TDX module initialization code will go into
    kvm_arch_post_hardware_enable_setup().
    
    This patch reorders some function calls, as shown below, from (*), (**),
    then (A) and (B), to (*), (A), (B), then (**).  Here (A) and (B) depend on
    (*), but not on (**).  By code inspection, only mips and VMX have code for
    (*); every other arch has an empty (*).  So refactor mips and VMX and
    eliminate the need for a hook for (*) instead of adding a hook that would
    be unused everywhere else.
    
    Before this patch:
    - Arch module initialization
      - kvm_init()
        - kvm_arch_init()
        - kvm_arch_check_processor_compat() on each CPUs
      - post-arch-specific initialization -- (*): (A) and (B) depends on this
      - post-arch-specific initialization -- (**): no dependency to (A) and (B)
    
    - When creating/deleting the first/last VM
       - kvm_arch_hardware_enable() on each CPUs -- (A)
       - kvm_arch_hardware_disable() on each CPUs -- (B)
    
    After this patch:
    - Arch module initialization
      - kvm_init()
        - kvm_arch_init()
        - arch-specific initialization -- (*)
        - kvm_arch_check_processor_compat() on each CPUs
        - kvm_arch_hardware_enable() on each CPUs -- (A)
        - kvm_arch_hardware_disable() on each CPUs -- (B)
      - post-arch-specific initialization  -- (**)
    
    - When creating/deleting the first/last VM (no logic change)
       - kvm_arch_hardware_enable() on each CPUs -- (A)
       - kvm_arch_hardware_disable() on each CPUs -- (B)
    
    Code inspection result:
    As far as I inspected, only mips and VMX have a non-empty (*) or a
    non-empty (A)/(B).
    x86: tested on a real machine
    mips: compile test only
    powerpc, s390, arm, riscv: code inspection only
    
    - arch/mips/kvm/mips.c
      module init function, kvm_mips_init(), does some initialization after
      kvm_init().  Compile test only.
    
    - arch/x86/kvm/x86.c
      - uses vm_list which is statically initialized.
      - static_call(kvm_x86_hardware_enable)();
        - SVM: (*) and (**) are empty.
        - VMX: initialize percpu variable loaded_vmcss_on_cpu that VMXON uses.
    
    - arch/powerpc/kvm/powerpc.c
      kvm_arch_hardware_enable/disable() are nop
    
    - arch/s390/kvm/kvm-s390.c
      kvm_arch_hardware_enable/disable() are nop
    
    - arch/arm64/kvm/arm.c
      module init function, arm_init(), calls only kvm_init().
      (*) and (**) are empty
    
    - arch/riscv/kvm/main.c
      module init function, riscv_kvm_init(), calls only kvm_init().
      (*) and (**) are empty
    
    Co-developed-by: Sean Christopherson <seanjc@google.com>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 092d09fb6a7e..fd7339cff57c 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -1642,12 +1642,11 @@ static int __init kvm_mips_init(void)
 		return -EOPNOTSUPP;
 	}
 
+	/*
+	 * kvm_init() calls kvm_arch_hardware_enable/disable().  The early
+	 * initialization is needed before calling kvm_init().
+	 */
 	ret = kvm_mips_entry_setup();
-	if (ret)
-		return ret;
-
-	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
-
 	if (ret)
 		return ret;
 
@@ -1656,6 +1655,13 @@ static int __init kvm_mips_init(void)
 
 	register_die_notifier(&kvm_mips_csr_die_notifier);
 
+	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+
+	if (ret) {
+		unregister_die_notifier(&kvm_mips_csr_die_notifier);
+		return ret;
+	}
+
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 111e0c42479a..5c59b4ea6524 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8442,6 +8442,23 @@ static void vmx_exit(void)
 }
 module_exit(vmx_exit);
 
+/*
+ * Early initialization before kvm_init() so that vmx_hardware_enable/disable()
+ * can work.
+ */
+static void __init vmx_init_early(void)
+{
+	int cpu;
+
+	/*
+	 * vmx_hardware_disable() accesses loaded_vmcss_on_cpu list.
+	 * Initialize the variable before kvm_init() that calls
+	 * vmx_hardware_enable/disable().
+	 */
+	for_each_possible_cpu(cpu)
+		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
+}
+
 static int __init vmx_init(void)
 {
 	int r, cpu;
@@ -8479,6 +8496,7 @@ static int __init vmx_init(void)
 	}
 #endif
 
+	vmx_init_early();
 	r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
 		     __alignof__(struct vcpu_vmx), THIS_MODULE);
 	if (r)
@@ -8499,11 +8517,8 @@ static int __init vmx_init(void)
 
 	vmx_setup_fb_clear_ctrl();
 
-	for_each_possible_cpu(cpu) {
-		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
-
+	for_each_possible_cpu(cpu)
 		pi_init_cpu(cpu);
-	}
 
 #ifdef CONFIG_KEXEC_CORE
 	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d4f130a9f5c8..79a4988fd51f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1441,6 +1441,7 @@ void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu, struct dentry *debugfs_
 int kvm_arch_hardware_enable(void);
 void kvm_arch_hardware_disable(void);
 int kvm_arch_hardware_setup(void *opaque);
+int kvm_arch_post_hardware_enable_setup(void *opaque);
 void kvm_arch_hardware_unsetup(void);
 int kvm_arch_check_processor_compat(void);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a5bada53f1fe..51b8ac5faca5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4899,8 +4899,13 @@ static void hardware_enable_nolock(void *junk)
 
 	cpumask_set_cpu(cpu, cpus_hardware_enabled);
 
+	r = kvm_arch_check_processor_compat();
+	if (r)
+		goto out;
+
 	r = kvm_arch_hardware_enable();
 
+out:
 	if (r) {
 		cpumask_clear_cpu(cpu, cpus_hardware_enabled);
 		atomic_inc(&hardware_enable_failed);
@@ -5697,9 +5702,9 @@ void kvm_unregister_perf_callbacks(void)
 }
 #endif
 
-static void check_processor_compat(void *rtn)
+__weak int kvm_arch_post_hardware_enable_setup(void *opaque)
 {
-	*(int *)rtn = kvm_arch_check_processor_compat();
+	return 0;
 }
 
 int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
@@ -5732,11 +5737,23 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 	if (r < 0)
 		goto out_free_1;
 
-	for_each_online_cpu(cpu) {
-		smp_call_function_single(cpu, check_processor_compat, &r, 1);
-		if (r < 0)
-			goto out_free_2;
-	}
+	/* hardware_enable_nolock() checks CPU compatibility on each CPUs. */
+	r = hardware_enable_all();
+	if (r)
+		goto out_free_2;
+	/*
+	 * Arch specific initialization that requires to enable virtualization
+	 * feature.  e.g. TDX module initialization requires VMXON on all
+	 * present CPUs.
+	 */
+	kvm_arch_post_hardware_enable_setup(opaque);
+	/*
+	 * Make hardware disabled after the KVM module initialization.  KVM
+	 * enables hardware when the first KVM VM is created and disables
+	 * hardware when the last KVM VM is destroyed.  When no KVM VM is
+	 * running, hardware is disabled.  Keep that semantics.
+	 */
+	hardware_disable_all();
 
 	r = cpuhp_setup_state_nocalls(CPUHP_AP_KVM_STARTING, "kvm/cpu:starting",
 				      kvm_starting_cpu, kvm_dying_cpu);

-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  2022-07-20  3:45     ` Kai Huang
@ 2022-07-27 23:20       ` Isaku Yamahata
  2022-07-28  0:48         ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-27 23:20 UTC (permalink / raw)
  To: Kai Huang
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini,
	Sean Christopherson

On Wed, Jul 20, 2022 at 03:45:59PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> > @@ -337,9 +335,8 @@ u64 mark_spte_for_access_track(u64 spte)
> >  	return spte;
> >  }
> >  
> > -void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
> > +void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask)
> >  {
> > -	BUG_ON((u64)(unsigned)access_mask != access_mask);
> >  	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
> >  
> >  	if (!enable_mmio_caching)
> > @@ -366,12 +363,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
> >  	    WARN_ON(mmio_value && (__REMOVED_SPTE & mmio_mask) == mmio_value))
> >  		mmio_value = 0;
> >  
> > -	if (!mmio_value)
> > -		enable_mmio_caching = false;
> > -
> > -	shadow_mmio_value = mmio_value;
> > -	shadow_mmio_mask  = mmio_mask;
> > -	shadow_mmio_access_mask = access_mask;
> > +	kvm->arch.enable_mmio_caching = !!mmio_value;
> 
> KVM has a global enable_mmio_caching boolean, and I think we should honor it
> here (in this patch) by doing below first:
> 
> 	if (enabling_mmio_caching)
> 		mmio_value = 0;

This function already includes "if (!enable_mmio_caching) mmio_value = 0;" in
the beginning. (But not in this hunk, though).  So this patch honors the kernel
module parameter.
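
Condensed for reference (not the full function), the ordering that keeps the
global module parameter authoritative over the new per-VM flag is roughly:

void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask)
{
	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);

	if (!enable_mmio_caching)	/* global module parameter checked first */
		mmio_value = 0;

	/* ... the MMIO mask/value sanity checks quoted above are elided ... */

	kvm->arch.enable_mmio_caching = !!mmio_value;	/* per-VM result */
}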


> > diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> > index f5fd22f6bf5f..99bce92b596e 100644
> > --- a/arch/x86/kvm/mmu/spte.h
> > +++ b/arch/x86/kvm/mmu/spte.h
> > @@ -5,8 +5,6 @@
> >  
> >  #include "mmu_internal.h"
> >  
> > -extern bool __read_mostly enable_mmio_caching;
> > -
> 
> Here you removed the ability to control enable_mmio_caching globally.  It's not
> something you stated to do in the changelog.  Perhaps we should still keep it,
> and enforce it in kvm_mmu_set_mmio_spte_mask() as commented above.
> 
> And in upstream KVM, it is a module parameter.  What happens to it?

Ditto.  The upstream kvm_mmu_set_mmio_spte_mask() has
"if (!enable_mmio_caching) mmio_value = 0;" and this patch keeps it.


> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index 36d2127cb7b7..52fb54880f9b 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -7,6 +7,7 @@
> >  #include "x86_ops.h"
> >  #include "tdx.h"
> >  #include "x86.h"
> > +#include "mmu.h"
> >  
> >  #undef pr_fmt
> >  #define pr_fmt(fmt) "tdx: " fmt
> > @@ -276,6 +277,9 @@ int tdx_vm_init(struct kvm *kvm)
> >  	int ret, i;
> >  	u64 err;
> >  
> > +	kvm_mmu_set_mmio_spte_mask(kvm, vmx_shadow_mmio_mask,
> > +				   vmx_shadow_mmio_mask);
> > +
> 
> I prefer to split this chunk out to another patch so this patch can be purely
> infrastructural.   In this way you can even move this patch around easily in
> this series.

Ok. I'll move it to a patch that touches TDX.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-07-20  5:13       ` Kai Huang
@ 2022-07-27 23:39         ` Isaku Yamahata
  2022-07-28  0:54           ` Kai Huang
  0 siblings, 1 reply; 219+ messages in thread
From: Isaku Yamahata @ 2022-07-27 23:39 UTC (permalink / raw)
  To: Kai Huang
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Wed, Jul 20, 2022 at 05:13:08PM +1200,
Kai Huang <kai.huang@intel.com> wrote:

> On Tue, 2022-07-19 at 07:49 -0700, Isaku Yamahata wrote:
> > On Fri, Jul 08, 2022 at 02:23:43PM +1200,
> > Kai Huang <kai.huang@intel.com> wrote:
> > 
> > > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > > 
> > > > To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
> > > > to inject #VE conditionally and set #VE suppress bit in EPT entry.  For VMX
> > > > case, #VE isn't used.  If #VE happens for VMX, it's a bug.  To be
> > > > defensive (test that VMX case isn't broken), introduce option
> > > > ept_violation_ve_test and when it's set, set error.
> > > 
> > > I don't see why we need this patch.  It may be helpful during your test, but why
> > > do we need this patch for formal submission?
> > > 
> > > And for a normal guest, what prevents one vcpu from sending #VE IPI to another
> > > vcpu?
> > 
> > Paolo suggested it as follows.  Maybe it should be kernel config.
> > (I forgot to add suggested-by. I'll add it)
> > 
> > https://lore.kernel.org/lkml/84d56339-4a8a-6ddb-17cb-12074588ba9c@redhat.com/
> > 
> > > 
> 
> OK.  But can we assume a normal guest won't send #VE IPI?

Theoretically nothing prevents that.  I wouldn't say "normal".
Anyway this is off by default.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
  2022-07-27 23:20       ` Isaku Yamahata
@ 2022-07-28  0:48         ` Kai Huang
  0 siblings, 0 replies; 219+ messages in thread
From: Kai Huang @ 2022-07-28  0:48 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini, Sean Christopherson

On Wed, 2022-07-27 at 16:20 -0700, Isaku Yamahata wrote:
> > KVM has a global enable_mmio_caching boolean, and I think we should honor it
> > here (in this patch) by doing below first:
> > 
> >  	if (enabling_mmio_caching)
> >  		mmio_value = 0;
> 
> This function already includes "if (!enable_mmio_caching) mmio_value = 0;" in
> the beginning. (But not in this hunk, though).  So this patch honors the
> kernel
> module parameter.

Yeah I missed that. Thanks.

-- 
Thanks,
-Kai



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-07-27 23:39         ` Isaku Yamahata
@ 2022-07-28  0:54           ` Kai Huang
  2022-07-28 20:11             ` Sean Christopherson
  0 siblings, 1 reply; 219+ messages in thread
From: Kai Huang @ 2022-07-28  0:54 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Wed, 2022-07-27 at 16:39 -0700, Isaku Yamahata wrote:
> On Wed, Jul 20, 2022 at 05:13:08PM +1200,
> Kai Huang <kai.huang@intel.com> wrote:
> 
> > On Tue, 2022-07-19 at 07:49 -0700, Isaku Yamahata wrote:
> > > On Fri, Jul 08, 2022 at 02:23:43PM +1200,
> > > Kai Huang <kai.huang@intel.com> wrote:
> > > 
> > > > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > > > 
> > > > > To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
> > > > > to inject #VE conditionally and set #VE suppress bit in EPT entry.  For VMX
> > > > > case, #VE isn't used.  If #VE happens for VMX, it's a bug.  To be
> > > > > defensive (test that VMX case isn't broken), introduce option
> > > > > ept_violation_ve_test and when it's set, set error.
> > > > 
> > > > I don't see why we need this patch.  It may be helpful during your test, but why
> > > > do we need this patch for formal submission?
> > > > 
> > > > And for a normal guest, what prevents one vcpu from sending #VE IPI to another
> > > > vcpu?
> > > 
> > > Paolo suggested it as follows.  Maybe it should be kernel config.
> > > (I forgot to add suggested-by. I'll add it)
> > > 
> > > https://lore.kernel.org/lkml/84d56339-4a8a-6ddb-17cb-12074588ba9c@redhat.com/
> > > 
> > > > 
> > 
> > OK.  But can we assume a normal guest won't send #VE IPI?
> 
> Theoretically nothing prevents that.  I wouldn't say "normal".
> Anyway this is off by default.

I don't think whether it is on or off by default matters.  If it can happen
legitimately in the guest, it doesn't look right to print out something like
below:

	pr_err("VMEXIT due to unexpected #VE.\n");

Anyway, will let maintainers to decide.

-- 
Thanks,
-Kai



^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-06-27 21:53 ` [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page isaku.yamahata
  2022-07-01 11:12   ` Kai Huang
  2022-07-11  6:28   ` Yuan Yao
@ 2022-07-28 19:41   ` David Matlack
  2022-08-09 23:52     ` Isaku Yamahata
  2022-07-28 20:13   ` David Matlack
  3 siblings, 1 reply; 219+ messages in thread
From: David Matlack @ 2022-07-28 19:41 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:36PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> For private GPA, CPU refers a private page table whose contents are
> encrypted.  The dedicated APIs to operate on it (e.g. updating/reading its
> PTE entry) are used and their cost is expensive.
> 
> When KVM resolves KVM page fault, it walks the page tables.  To reuse the
> existing KVM MMU code and mitigate the heavy cost to directly walk
> encrypted private page table, allocate a more page to mirror the existing
> KVM page table.  Resolve KVM page fault with the existing code, and do
> additional operations necessary for the mirrored private page table.  To
> distinguish such cases, the existing KVM page table is called a shared page
> table (i.e. no mirrored private page table), and the KVM page table with
> mirrored private page table is called a private page table.  The
> relationship is depicted below.
> 
> Add private pointer to struct kvm_mmu_page for mirrored private page table
> and add helper functions to allocate/initialize/free a mirrored private
> page table page.  Also, add helper functions to check if a given
> kvm_mmu_page is private.  The later patch introduces hooks to operate on
> the mirrored private page table.
> 
>               KVM page fault                     |
>                      |                           |
>                      V                           |
>         -------------+----------                 |
>         |                      |                 |
>         V                      V                 |
>      shared GPA           private GPA            |
>         |                      |                 |
>         V                      V                 |
>  CPU/KVM shared PT root  KVM private PT root     |  CPU private PT root
>         |                      |                 |           |
>         V                      V                 |           V
>      shared PT            private PT <----mirror----> mirrored private PT
>         |                      |                 |           |
>         |                      \-----------------+------\    |
>         |                                        |      |    |
>         V                                        |      V    V
>   shared guest page                              |    private guest page
>                                                  |
>                            non-encrypted memory  |    encrypted memory
>                                                  |
> PT: page table
> 
> Both CPU and KVM refer to CPU/KVM shared page table.  Private page table
> is used only by KVM.  CPU refers to mirrored private page table.
> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/mmu/mmu.c          |  9 ++++
>  arch/x86/kvm/mmu/mmu_internal.h | 84 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++
>  4 files changed, 97 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f4d4ed41641b..bfc934dc9a33 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -716,6 +716,7 @@ struct kvm_vcpu_arch {
>  	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
>  	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
>  	struct kvm_mmu_memory_cache mmu_page_header_cache;
> +	struct kvm_mmu_memory_cache mmu_private_sp_cache;

I notice that mmu_private_sp_cache.gfp_zero is left unset so these pages
may contain garbage. Is this by design because the TDX module can't rely
on the contents being zero and has to take care of initializing the page
itself? i.e. GFP_ZERO would be a waste of cycles?

If I'm correct please include a comment here in the next revision to
explain why GFP_ZERO is not necessary.
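
For reference, if zero-filled pages were wanted, the new cache could be wired
up the same way the existing caches are in kvm_mmu_create(); a one-line
sketch, not something in the posted patch:

	/* only if zeroing were actually needed; see the question above */
	vcpu->arch.mmu_private_sp_cache.gfp_zero = __GFP_ZERO;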

>  
>  	/*
>  	 * QEMU userspace and the guest each have their own FPU state.
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c517c7bca105..a5bf3e40e209 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -691,6 +691,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
>  	int start, end, i, r;
>  	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
>  
> +	if (kvm_gfn_shared_mask(vcpu->kvm)) {
> +		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
> +					       PT64_ROOT_MAX_LEVEL);
> +		if (r)
> +			return r;
> +	}
> +
>  	if (is_tdp_mmu && shadow_nonpresent_value)
>  		start = kvm_mmu_memory_cache_nr_free_objects(mc);
>  
> @@ -732,6 +739,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> +	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_private_sp_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -1736,6 +1744,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
>  	if (!direct)
>  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> +	kvm_mmu_init_private_sp(sp, NULL);

This is unnecessary. kvm_mmu_page structs are zero-initialized so
private_sp will already be NULL.

>  
>  	/*
>  	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 44a04fad4bed..9f3a6bea60a3 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -55,6 +55,10 @@ struct kvm_mmu_page {
>  	u64 *spt;
>  	/* hold the gfn of each spte inside spt */
>  	gfn_t *gfns;
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +	/* associated private shadow page, e.g. SEPT page. */

Can we use "Secure EPT" instead of SEPT in KVM code and comments? (i.e.
also including variable names like sept_page -> secure_ept_page)

"SEPT" looks like a mispelling of SPTE, which is used all over KVM. It
will be difficult to read code that contains both acronyms.

> +	void *private_sp;

Please name this "private_spt" and move it up next to "spt".

sp" or "shadow page" is used to refer to kvm_mmu_page structs. For
example, look at all the code in KVM that uses `struct kvm_mmu_page *sp`.

"spt" is "shadow page table", i.e. the actual page table memory. See
kvm_mmu_page.spt. Calling this field "private_spt" makes it obvious that
this pointer is pointing to a page table.

Also please update the language in the comment accordingly to "private
shadow page table".

> +#endif
>  	/* Currently serving as active root */
>  	union {
>  		int root_count;
> @@ -115,6 +119,86 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
>  	return kvm_mmu_role_as_id(sp->role);
>  }
>  
> +/*
> + * TDX vcpu allocates page for root Secure EPT page and assigns to CPU secure
> + * EPT pointer.  KVM doesn't need to allocate and link to the secure EPT.
> > + * Dummy value to make is_private_sp() return true.
> + */
> +#define KVM_MMU_PRIVATE_SP_ROOT	((void *)1)
> +
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return !!sp->private_sp;
> +}
> +
> +static inline bool is_private_sptep(u64 *sptep)
> +{
> +	WARN_ON(!sptep);
> +	return is_private_sp(sptep_to_sp(sptep));
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return sp->private_sp;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
> +{
> +	sp->private_sp = private_sp;
> +}
> +
> +/* Valid sp->role.level is required. */
> +static inline void kvm_mmu_alloc_private_sp(
> +	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
> +{
> +	if (is_root)
> +		sp->private_sp = KVM_MMU_PRIVATE_SP_ROOT;
> +	else
> +		sp->private_sp = kvm_mmu_memory_cache_alloc(
> +			&vcpu->arch.mmu_private_sp_cache);
> +	/*
> +	 * Because mmu_private_sp_cache is topped up before staring kvm page
> +	 * fault resolving, the allocation above shouldn't fail.
> +	 */
> +	WARN_ON_ONCE(!sp->private_sp);
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
> +		free_page((unsigned long)sp->private_sp);
> +}
> +#else
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return false;
> +}
> +
> +static inline bool is_private_sptep(u64 *sptep)
> +{
> +	return false;
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return NULL;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
> +{
> +}
> +
> +static inline void kvm_mmu_alloc_private_sp(
> +	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
> +{
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +}
> +#endif
> +
>  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>  	/*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7eb41b176d1e..b2568b062faa 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -72,6 +72,8 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
>  
>  static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
>  {
> +	if (is_private_sp(sp))
> +		kvm_mmu_free_private_sp(sp);
>  	free_page((unsigned long)sp->spt);
>  	kmem_cache_free(mmu_page_header_cache, sp);
>  }
> @@ -295,6 +297,7 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>  	sp->gfn = gfn;
>  	sp->ptep = sptep;
>  	sp->tdp_mmu_page = true;
> +	kvm_mmu_init_private_sp(sp);
>  
>  	trace_kvm_mmu_get_page(sp, true);
>  }
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-07-28  0:54           ` Kai Huang
@ 2022-07-28 20:11             ` Sean Christopherson
  2022-08-09  0:48               ` Isaku Yamahata
  0 siblings, 1 reply; 219+ messages in thread
From: Sean Christopherson @ 2022-07-28 20:11 UTC (permalink / raw)
  To: Kai Huang
  Cc: Isaku Yamahata, isaku.yamahata, kvm, linux-kernel, Paolo Bonzini

On Thu, Jul 28, 2022, Kai Huang wrote:
> On Wed, 2022-07-27 at 16:39 -0700, Isaku Yamahata wrote:
> > On Wed, Jul 20, 2022 at 05:13:08PM +1200,
> > Kai Huang <kai.huang@intel.com> wrote:
> > 
> > > On Tue, 2022-07-19 at 07:49 -0700, Isaku Yamahata wrote:
> > > > On Fri, Jul 08, 2022 at 02:23:43PM +1200,
> > > > Kai Huang <kai.huang@intel.com> wrote:
> > > > 
> > > > > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > > > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > > > > 
> > > > > > To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
> > > > > > to inject #VE conditionally and set #VE suppress bit in EPT entry.  For VMX
> > > > > > case, #VE isn't used.  If #VE happens for VMX, it's a bug.  To be
> > > > > > defensive (test that VMX case isn't broken), introduce option
> > > > > > ept_violation_ve_test and when it's set, set error.
> > > > > 
> > > > > I don't see why we need this patch.  It may be helpful during your test, but why
> > > > > do we need this patch for formal submission?
> > > > > 
> > > > > And for a normal guest, what prevents one vcpu from sending #VE IPI to another
> > > > > vcpu?
> > > > 
> > > > Paolo suggested it as follows.  Maybe it should be kernel config.
> > > > (I forgot to add suggested-by. I'll add it)
> > > > 
> > > > https://lore.kernel.org/lkml/84d56339-4a8a-6ddb-17cb-12074588ba9c@redhat.com/
> > > > 
> > > > > 
> > > 
> > > OK.  But can we assume a normal guest won't sending #VE IPI?
> > 
> > Theoretically nothing prevents that.  I wouldn't way "normal".
> > Anyway this is off by default.
> 
> I don't think whether it is on or off by default matters.

It matters in the sense that the module param is intended purely for testing, i.e.
there's zero reason to ever enable it in production.  That changes what is and
isn't a reasonable response to an unexpected #VE.

> If it can happen legitimately in the guest, it doesn't look right to print
> out something like below:
> 
> 	pr_err("VMEXIT due to unexpected #VE.\n");

Agreed.  In this particular case I think the right approach is to treat an
unexpected #VE as a fatal KVM bug.  Yes, disabling EPT violation #VEs would likely
allow the guest to live, but as above the module param should never be enabled in
production.  And if we get a #VE with the module param disabled, then KVM is truly
in the weeds and killing the VM is the safe option.

E.g. something like

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4fd25e1d6ec9..54b9cb56f6e2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5010,6 +5010,9 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
        if (is_invalid_opcode(intr_info))
                return handle_ud(vcpu);

+       if (KVM_BUG_ON(is_ve_fault(intr_info), vcpu->kvm))
+               return -EIO;
+
        error_code = 0;
        if (intr_info & INTR_INFO_DELIVER_CODE_MASK)
                error_code = vmcs_read32(VM_EXIT_INTR_ERROR_CODE);

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-06-27 21:53 ` [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page isaku.yamahata
                     ` (2 preceding siblings ...)
  2022-07-28 19:41   ` David Matlack
@ 2022-07-28 20:13   ` David Matlack
  2022-08-09 23:50     ` Isaku Yamahata
  3 siblings, 1 reply; 219+ messages in thread
From: David Matlack @ 2022-07-28 20:13 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022 at 02:53:36PM -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> For private GPA, CPU refers a private page table whose contents are
> encrypted.  The dedicated APIs to operate on it (e.g. updating/reading its
> PTE entry) are used and their cost is expensive.
> 
> When KVM resolves KVM page fault, it walks the page tables.  To reuse the
> existing KVM MMU code and mitigate the heavy cost to directly walk
> encrypted private page table, allocate a more page to mirror the existing
> KVM page table.  Resolve KVM page fault with the existing code, and do
> additional operations necessary for the mirrored private page table.  To
> distinguish such cases, the existing KVM page table is called a shared page
> table (i.e. no mirrored private page table), and the KVM page table with
> mirrored private page table is called a private page table.  The
> relationship is depicted below.
> 
> Add private pointer to struct kvm_mmu_page for mirrored private page table
> and add helper functions to allocate/initialize/free a mirrored private
> page table page.  Also, add helper functions to check if a given
> kvm_mmu_page is private.  The later patch introduces hooks to operate on
> the mirrored private page table.
> 
>               KVM page fault                     |
>                      |                           |
>                      V                           |
>         -------------+----------                 |
>         |                      |                 |
>         V                      V                 |
>      shared GPA           private GPA            |
>         |                      |                 |
>         V                      V                 |
>  CPU/KVM shared PT root  KVM private PT root     |  CPU private PT root
>         |                      |                 |           |
>         V                      V                 |           V
>      shared PT            private PT <----mirror----> mirrored private PT
>         |                      |                 |           |
>         |                      \-----------------+------\    |
>         |                                        |      |    |
>         V                                        |      V    V
>   shared guest page                              |    private guest page
>                                                  |
>                            non-encrypted memory  |    encrypted memory
>                                                  |
> PT: page table
> 
> Both CPU and KVM refer to CPU/KVM shared page table.  Private page table
> is used only by KVM.  CPU refers to mirrored private page table.
> 
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/mmu/mmu.c          |  9 ++++
>  arch/x86/kvm/mmu/mmu_internal.h | 84 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++
>  4 files changed, 97 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f4d4ed41641b..bfc934dc9a33 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -716,6 +716,7 @@ struct kvm_vcpu_arch {
>  	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
>  	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
>  	struct kvm_mmu_memory_cache mmu_page_header_cache;
> +	struct kvm_mmu_memory_cache mmu_private_sp_cache;
>  
>  	/*
>  	 * QEMU userspace and the guest each have their own FPU state.
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c517c7bca105..a5bf3e40e209 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -691,6 +691,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
>  	int start, end, i, r;
>  	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
>  
> +	if (kvm_gfn_shared_mask(vcpu->kvm)) {
> +		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
> +					       PT64_ROOT_MAX_LEVEL);
> +		if (r)
> +			return r;
> +	}
> +
>  	if (is_tdp_mmu && shadow_nonpresent_value)
>  		start = kvm_mmu_memory_cache_nr_free_objects(mc);
>  
> @@ -732,6 +739,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> +	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_private_sp_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -1736,6 +1744,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
>  	if (!direct)
>  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> +	kvm_mmu_init_private_sp(sp, NULL);
>  
>  	/*
>  	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 44a04fad4bed..9f3a6bea60a3 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -55,6 +55,10 @@ struct kvm_mmu_page {
>  	u64 *spt;
>  	/* hold the gfn of each spte inside spt */
>  	gfn_t *gfns;
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +	/* associated private shadow page, e.g. SEPT page. */
> +	void *private_sp;
> +#endif

write_flooding_count and unsync_children are only used in shadow MMU SPs
and private_sp is only used in TDP MMU SPs. So it seems like we could
put these together in a union and drop CONFIG_KVM_MMU_PRIVATE without
increasing the size of kvm_mmu_page. i.e.

	union {
		struct {
			unsigned int unsync_children;
			/* Number of writes since the last time traversal visited this page.  */
			atomic_t write_flooding_count;
		};
		/*
		 * The associated private shadow page table, e.g. for Secure EPT.
		 * Only valid if tdp_mmu_page is true.
		 */
		void *private_spt;
	};

Then change is_private_sp() to:

static inline bool is_private_sp(struct kvm_mmu_page *sp)
{
	return sp->tdp_mmu_page && sp->private_sp;
}

This will allow us to drop CONFIG_KVM_MMU_PRIVATE, the only benefit of
which I see is to avoid increasing the size of kvm_mmu_page. However
to actually realize that benefit Cloud vendors (for example) would have
to create separate kernel builds for TDX and non-TDX hosts, which seems
like a huge hassle.

>  	/* Currently serving as active root */
>  	union {
>  		int root_count;
> @@ -115,6 +119,86 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
>  	return kvm_mmu_role_as_id(sp->role);
>  }
>  
> +/*
> + * TDX vcpu allocates page for root Secure EPT page and assigns to CPU secure
> + * EPT pointer.  KVM doesn't need to allocate and link to the secure EPT.
> + * Dummy value to make is_pivate_sp() return true.
> + */
> +#define KVM_MMU_PRIVATE_SP_ROOT	((void *)1)
> +
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return !!sp->private_sp;
> +}
> +
> +static inline bool is_private_sptep(u64 *sptep)
> +{
> +	WARN_ON(!sptep);
> +	return is_private_sp(sptep_to_sp(sptep));
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return sp->private_sp;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
> +{
> +	sp->private_sp = private_sp;
> +}
> +
> +/* Valid sp->role.level is required. */
> +static inline void kvm_mmu_alloc_private_sp(
> +	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
> +{
> +	if (is_root)
> +		sp->private_sp = KVM_MMU_PRIVATE_SP_ROOT;
> +	else
> +		sp->private_sp = kvm_mmu_memory_cache_alloc(
> +			&vcpu->arch.mmu_private_sp_cache);
> +	/*
> +	 * Because mmu_private_sp_cache is topped up before staring kvm page
> +	 * fault resolving, the allocation above shouldn't fail.
> +	 */
> +	WARN_ON_ONCE(!sp->private_sp);
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
> +		free_page((unsigned long)sp->private_sp);
> +}
> +#else
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return false;
> +}
> +
> +static inline bool is_private_sptep(u64 *sptep)
> +{
> +	return false;
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return NULL;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp, void *private_sp)
> +{
> +}
> +
> +static inline void kvm_mmu_alloc_private_sp(
> +	struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool is_root)
> +{
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +}
> +#endif
> +
>  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>  	/*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7eb41b176d1e..b2568b062faa 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -72,6 +72,8 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
>  
>  static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
>  {
> +	if (is_private_sp(sp))
> +		kvm_mmu_free_private_sp(sp);
>  	free_page((unsigned long)sp->spt);
>  	kmem_cache_free(mmu_page_header_cache, sp);
>  }
> @@ -295,6 +297,7 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>  	sp->gfn = gfn;
>  	sp->ptep = sptep;
>  	sp->tdp_mmu_page = true;
> +	kvm_mmu_init_private_sp(sp);
>  
>  	trace_kvm_mmu_get_page(sp, true);
>  }
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 022/102] KVM: TDX: create/destroy VM structure
  2022-06-27 21:53 ` [PATCH v7 022/102] KVM: TDX: create/destroy VM structure isaku.yamahata
  2022-07-07  6:16   ` Yuan Yao
@ 2022-08-02 19:46   ` Sean Christopherson
  2022-08-11 18:29     ` Isaku Yamahata
  1 sibling, 1 reply; 219+ messages in thread
From: Sean Christopherson @ 2022-08-02 19:46 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, Kai Huang, Sagi Shahar

On Mon, Jun 27, 2022, isaku.yamahata@intel.com wrote:
> +int tdx_vm_init(struct kvm *kvm)
> +{
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +	cpumask_var_t packages;
> +	int ret, i;
> +	u64 err;
> +
> +	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
> +	kvm->max_vcpus = 0;
> +
> +	kvm_tdx->hkid = tdx_keyid_alloc();
> +	if (kvm_tdx->hkid < 0)
> +		return -EBUSY;

We (Google) have been working through potential flows for intrahost (copyless)
migration, and one of the things that came up is that allocating the HKID during
KVM_CREATE_VM will be problematic as HKID are a relatively scarce resource.  E.g.
if all key IDs are in use, then creating a destination TDX VM will be impossible
even though intrahost migration can succeed since the "new" VM would reuse
the source's HKID.

Allocating the various pages is also annoying, e.g. they'd have to be freed, but
not as directly problematic.

SEV (all flavors) has a similar problem with ASIDs.  The solution for SEV was to
not allocate an ASID during KVM_CREATE_VM and instead "activate" SEV during
KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM.

I think we should prepare for a similar future for TDX and move the HKID allocation
and all dependent resource allocation to KVM_TDX_INIT_VM.  AFAICT (and remember),
this should be a fairly simple code movement, but I'd prefer it be done before
merging TDX so that if it's not so simple, e.g. requires another sub-ioctl, then
we don't have to try and tweak KVM's ABI to enable intrahost migration.
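
To make the suggestion concrete, below is a rough sketch of that code
movement.  It is illustrative only, not the actual patch: tdx_td_init() is a
hypothetical handler behind a KVM_TDX_INIT_VM sub-ioctl, and tdx_keyid_free()
stands in for whatever the real free_hkid path ends up doing.

/* Keep KVM_CREATE_VM cheap: no scarce resources are claimed here. */
int tdx_vm_init(struct kvm *kvm)
{
	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
	kvm->max_vcpus = 0;
	return 0;
}

/* Hypothetical handler behind the KVM_TDX_INIT_VM sub-ioctl. */
static int tdx_td_init(struct kvm *kvm)
{
	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
	int ret;

	/* HKIDs are scarce; claim one only when the TD is actually initialized. */
	kvm_tdx->hkid = tdx_keyid_alloc();
	if (kvm_tdx->hkid < 0)
		return -EBUSY;

	ret = tdx_alloc_td_page(&kvm_tdx->tdr);
	if (ret)
		goto free_hkid;

	/* ... TDCS allocation and the rest of today's tdx_vm_init() ... */
	return 0;

free_hkid:
	tdx_keyid_free(kvm_tdx->hkid);
	return ret;
}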

> +
> +	ret = tdx_alloc_td_page(&kvm_tdx->tdr);
> +	if (ret)
> +		goto free_hkid;
> +
> +	kvm_tdx->tdcs = kcalloc(tdx_caps.tdcs_nr_pages, sizeof(*kvm_tdx->tdcs),
> +				GFP_KERNEL_ACCOUNT);
> +	if (!kvm_tdx->tdcs)
> +		goto free_tdr;
> +	for (i = 0; i < tdx_caps.tdcs_nr_pages; i++) {
> +		ret = tdx_alloc_td_page(&kvm_tdx->tdcs[i]);
> +		if (ret)
> +			goto free_tdcs;
> +	}

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 028/102] KVM: TDX: allocate/free TDX vcpu structure
  2022-06-27 21:53 ` [PATCH v7 028/102] KVM: TDX: allocate/free TDX vcpu structure isaku.yamahata
@ 2022-08-02 19:56   ` Sean Christopherson
  0 siblings, 0 replies; 219+ messages in thread
From: Sean Christopherson @ 2022-08-02 19:56 UTC (permalink / raw)
  To: isaku.yamahata; +Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Mon, Jun 27, 2022, isaku.yamahata@intel.com wrote:
> +int tdx_vcpu_create(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_tdx *tdx = to_tdx(vcpu);
> +	int ret, i;
> +
> +	/* TDX only supports x2APIC, which requires an in-kernel local APIC. */
> +	if (!vcpu->arch.apic)
> +		return -EINVAL;
> +
> +	fpstate_set_confidential(&vcpu->arch.guest_fpu);
> +
> +	ret = tdx_alloc_td_page(&tdx->tdvpr);
> +	if (ret)
> +		return ret;
> +
> +	tdx->tdvpx = kcalloc(tdx_caps.tdvpx_nr_pages, sizeof(*tdx->tdvpx),
> +			GFP_KERNEL_ACCOUNT);
> +	if (!tdx->tdvpx) {
> +		ret = -ENOMEM;
> +		goto free_tdvpr;
> +	}
> +	for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++) {
> +		ret = tdx_alloc_td_page(&tdx->tdvpx[i]);
> +		if (ret)
> +			goto free_tdvpx;
> +	}

Similar to HKID allocation for intrahost migration, can the TDVPX allocations be
moved to KVM_TDX_INIT_VCPU?

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 000/102] KVM TDX basic feature support
  2022-07-27  9:26         ` Nikunj A. Dadhania
@ 2022-08-03 10:48           ` Chao Peng
  0 siblings, 0 replies; 219+ messages in thread
From: Chao Peng @ 2022-08-03 10:48 UTC (permalink / raw)
  To: Nikunj A. Dadhania
  Cc: Sean Christopherson, isaku.yamahata, kvm, linux-kernel,
	isaku.yamahata, Paolo Bonzini

On Wed, Jul 27, 2022 at 02:56:40PM +0530, Nikunj A. Dadhania wrote:
> On 7/26/2022 8:02 PM, Chao Peng wrote:
> > On Mon, Jul 25, 2022 at 07:16:24PM +0530, Nikunj A. Dadhania wrote:
> >> On 7/20/2022 8:29 PM, Chao Peng wrote:
> >>> On Thu, Jul 14, 2022 at 01:03:46AM +0000, Sean Christopherson wrote:
> >>> ...
> >>>>
> >>>> Option D). track shared regions in an Xarray, update kvm_arch_memory_slot.lpage_info
> >>>> on insertion/removal to (dis)allow hugepages as needed.
> >>>>
> >>>>   + efficient on KVM page fault (no new lookups)
> >>>>   + zero memory overhead (assuming KVM has to eat the cost of the Xarray anyways)
> >>>>   + straightforward to implement
> >>>>   + can (and should) be merged as part of the UPM series
> >>>>
> >>>> I believe xa_for_each_range() can be used to see if a given 2mb/1gb range is
> >>>> completely covered (fully shared) or not covered at all (fully private), but I'm
> >>>> not 100% certain that xa_for_each_range() works the way I think it does.
> >>>
> >>> Hi Sean,
> >>>
> >>> Below is the implementation to support 2M as you mentioned as option D.
> >>> It's based on UPM v7 xarray code: https://lkml.org/lkml/2022/7/6/259
> >>>
> >>> Everything sounds good, the only trick bit is inc/dec disallow_lpage. If
> >>> we still treat it as a count, it will be a challenge to make the inc/dec
> >>> balanced. So in this patch I stole a bit for the purpose, looks ugly.
> >>>
> >>> Any feedback is welcome.
> >>>
> >>> Thanks,
> >>> Chao
> >>>
> >>> -----------------------------------------------------------------------
> >>> From: Chao Peng <chao.p.peng@linux.intel.com>
> >>> Date: Wed, 20 Jul 2022 11:37:18 +0800
> >>> Subject: [PATCH] KVM: Add large page support for private memory
> >>>
> >>> Update lpage_info when handling KVM_MEMORY_ENCRYPT_{UN,}REG_REGION.
> >>>
> >>> Reserve a bit in disallow_lpage to indicate a large page has
> >>> private/share pages mixed.
> >>>
> >>> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> >>> ---
> >>
> >>
> >>> +static void update_mem_lpage_info(struct kvm *kvm,
> >>> +				  struct kvm_memory_slot *slot,
> >>> +				  unsigned int attr,
> >>> +				  gfn_t start, gfn_t end)
> >>> +{
> >>> +	unsigned long lpage_start, lpage_end;
> >>> +	unsigned long gfn, pages, mask;
> >>> +	int level;
> >>> +
> >>> +	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> >>> +		pages = KVM_PAGES_PER_HPAGE(level);
> >>> +		mask = ~(pages - 1);
> >>> +		lpage_start = start & mask;
> >>> +		lpage_end = end & mask;
> >>> +
> >>> +		/*
> >>> +		 * We only need to scan the head and tail page, for middle pages
> >>> +		 * we know they are not mixed.
> >>> +		 */
> >>> +		update_mixed(lpage_info_slot(lpage_start, slot, level),
> >>> +			     mem_attr_is_mixed(kvm, attr, lpage_start,
> >>> +							  lpage_start + pages));
> >>> +
> >>> +		if (lpage_start == lpage_end)
> >>> +			return;
> >>> +
> >>> +		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) {
> >>> +			update_mixed(lpage_info_slot(gfn, slot, level), false);
> >>> +		}
> >>
> >> Boundary check missing here for the case when gfn reaches lpage_end.
> >>
> >> 		if (gfn == lpage_end)
> >> 			return;
> > 
> > In this case, it's actually the tail page that I want to scan for with
> > below code.
> 
> What if you do not have the tail lpage?
> 
> For example: memslot base_gfn = 0x1000 and npages is 0x800, so memslot range
> is 0x1000 to 0x17ff.
> 
> Assume a case when this function is called with start = 1000 and end = 1800.
> For 2M, page mask is 0x1ff. start and end both are 2M aligned.
> 
> First update_mixed takes care of 0x1000-0x1200
> Loop update_mixed: goes over from 0x1200 - 0x1800, there are no pages left
> for last update_mixed to process.

Oops, good catch. I would fix it differently by playing with lpage_end:
	lpage_end = (end - 1) & mask;
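
As a sanity check, here is a tiny standalone sketch (plain userspace C, not
kernel code; update_mixed() and mem_attr_is_mixed() are just stand-ins for
the helpers in the patch) of the head/middle/tail walk with the adjusted
lpage_end, run against Nikunj's 0x1000-0x1800 example:

#include <stdio.h>

#define PAGES_2M 512UL	/* KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) */

static void update_mixed(unsigned long lpage_gfn, int mixed)
{
	printf("2M page at gfn 0x%lx -> %s\n",
	       lpage_gfn, mixed ? "mixed" : "not mixed");
}

/* Pretend only the first 2M page contains both private and shared 4K pages. */
static int mem_attr_is_mixed(unsigned long start, unsigned long end)
{
	return start < 0x1200;
}

static void update_lpage_info_2m(unsigned long start, unsigned long end)
{
	unsigned long mask = ~(PAGES_2M - 1);
	unsigned long lpage_start = start & mask;
	unsigned long lpage_end = (end - 1) & mask;	/* the proposed fix */
	unsigned long gfn;

	/* Head page: scan its 4K pages for a private/shared mix. */
	update_mixed(lpage_start,
		     mem_attr_is_mixed(lpage_start, lpage_start + PAGES_2M));
	if (lpage_start == lpage_end)
		return;

	/* Middle pages are fully covered by [start, end), so never mixed. */
	for (gfn = lpage_start + PAGES_2M; gfn < lpage_end; gfn += PAGES_2M)
		update_mixed(gfn, 0);

	/*
	 * Tail page: with (end - 1) & mask this is the last 2M page inside
	 * [start, end), so the walk no longer steps past the memslot.
	 */
	update_mixed(lpage_end,
		     mem_attr_is_mixed(lpage_end, lpage_end + PAGES_2M));
}

int main(void)
{
	/* Nikunj's case: memslot 0x1000-0x17ff, update covers the whole slot. */
	update_lpage_info_2m(0x1000, 0x1800);
	return 0;
}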

Thanks,
Chao

> 
> > 
> > It's also possible I misunderstand something here.
> > 
> > Chao
> >>
> >>> +
> >>> +		update_mixed(lpage_info_slot(lpage_end, slot, level),
> >>> +			     mem_attr_is_mixed(kvm, attr, lpage_end,
> >>> +							  lpage_end + pages));
> 
> lpage_info_slot some times causes a crash, as I noticed that
> lpage_info_slot() returns out-of-bound index.
> 
> Regards
> Nikunj
> 

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE
  2022-07-28 20:11             ` Sean Christopherson
@ 2022-08-09  0:48               ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-08-09  0:48 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Kai Huang, Isaku Yamahata, isaku.yamahata, kvm, linux-kernel,
	Paolo Bonzini

On Thu, Jul 28, 2022 at 08:11:59PM +0000,
Sean Christopherson <seanjc@google.com> wrote:

> On Thu, Jul 28, 2022, Kai Huang wrote:
> > On Wed, 2022-07-27 at 16:39 -0700, Isaku Yamahata wrote:
> > > On Wed, Jul 20, 2022 at 05:13:08PM +1200,
> > > Kai Huang <kai.huang@intel.com> wrote:
> > > 
> > > > On Tue, 2022-07-19 at 07:49 -0700, Isaku Yamahata wrote:
> > > > > On Fri, Jul 08, 2022 at 02:23:43PM +1200,
> > > > > Kai Huang <kai.huang@intel.com> wrote:
> > > > > 
> > > > > > On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@intel.com wrote:
> > > > > > > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > > > > > > 
> > > > > > > To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
> > > > > > > to inject #VE conditionally and set #VE suppress bit in EPT entry.  For VMX
> > > > > > > case, #VE isn't used.  If #VE happens for VMX, it's a bug.  To be
> > > > > > > defensive (test that VMX case isn't broken), introduce option
> > > > > > > ept_violation_ve_test and when it's set, set error.
> > > > > > 
> > > > > > I don't see why we need this patch.  It may be helpful during your test, but why
> > > > > > do we need this patch for formal submission?
> > > > > > 
> > > > > > And for a normal guest, what prevents one vcpu from sending #VE IPI to another
> > > > > > vcpu?
> > > > > 
> > > > > Paolo suggested it as follows.  Maybe it should be kernel config.
> > > > > (I forgot to add suggested-by. I'll add it)
> > > > > 
> > > > > https://lore.kernel.org/lkml/84d56339-4a8a-6ddb-17cb-12074588ba9c@redhat.com/
> > > > > 
> > > > > > 
> > > > 
> > > > OK.  But can we assume a normal guest won't sending #VE IPI?
> > > 
> > > Theoretically nothing prevents that.  I wouldn't way "normal".
> > > Anyway this is off by default.
> > 
> > I don't think whether it is on or off by default matters.
> 
> It matters in the sense that the module param is intended purely for testing, i.e.
> there's zero reason to ever enable it in production.  That changes what is and
> isn't a reasonable response to an unexpected #VE.
> 
> > If it can happen legitimately in the guest, it doesn't look right to print
> > out something like below:
> > 
> > 	pr_err("VMEXIT due to unexpected #VE.\n");
> 
> Agreed.  In this particular case I think the right approach is to treat an
> unexpected #VE as a fatal KVM bug.  Yes, disabling EPT violation #VEs would likely
> allow the guest to live, but as above the module param should never be enabled in
> production.  And if we get a #VE with the module param disabled, then KVM is truly
> in the weeds and killing the VM is the safe option.
> 
> E.g. something like

Thanks, I finally ended up with the following.

diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
index ac290a44a693..9277676057a7 100644
--- a/arch/x86/kvm/vmx/vmcs.h
+++ b/arch/x86/kvm/vmx/vmcs.h
@@ -140,6 +140,11 @@ static inline bool is_nm_fault(u32 intr_info)
 	return is_exception_n(intr_info, NM_VECTOR);
 }
 
+static inline bool is_ve_fault(u32 intr_info)
+{
+	return is_exception_n(intr_info, VE_VECTOR);
+}
+
 /* Undocumented: icebp/int1 */
 static inline bool is_icebp(u32 intr_info)
 {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 881db80ceee9..c3e4c0d17b63 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5047,6 +5047,12 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 	if (is_invalid_opcode(intr_info))
 		return handle_ud(vcpu);
 
+	/*
+	 * #VE isn't supposed to happen.  Although a vcpu can send #VE as an
+	 * IPI, treat an unexpected #VE as a fatal KVM bug and kill the VM.
+	 */
+	if (KVM_BUG_ON(is_ve_fault(intr_info), vcpu->kvm))
+		return -EIO;
+
 	error_code = 0;
 	if (intr_info & INTR_INFO_DELIVER_CODE_MASK)
 		error_code = vmcs_read32(VM_EXIT_INTR_ERROR_CODE);
@@ -5167,14 +5173,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 		if (handle_guest_split_lock(kvm_rip_read(vcpu)))
 			return 1;
 		fallthrough;
-	case VE_VECTOR:
 	default:
-		if (ept_violation_ve_test && ex_no == VE_VECTOR) {
-			pr_err("VMEXIT due to unexpected #VE.\n");
-			secondary_exec_controls_clearbit(
-				vmx, SECONDARY_EXEC_EPT_VIOLATION_VE);
-			return 1;
-		}
 		kvm_run->exit_reason = KVM_EXIT_EXCEPTION;
 		kvm_run->ex.exception = ex_no;
 		kvm_run->ex.error_code = error_code;



-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply related	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-07-28 20:13   ` David Matlack
@ 2022-08-09 23:50     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-08-09 23:50 UTC (permalink / raw)
  To: David Matlack
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Thu, Jul 28, 2022 at 01:13:35PM -0700,
David Matlack <dmatlack@google.com> wrote:

> On Mon, Jun 27, 2022 at 02:53:36PM -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > For private GPA, CPU refers a private page table whose contents are
> > encrypted.  The dedicated APIs to operate on it (e.g. updating/reading its
> > PTE entry) are used and their cost is expensive.
> > 
> > When KVM resolves KVM page fault, it walks the page tables.  To reuse the
> > existing KVM MMU code and mitigate the heavy cost to directly walk
> > encrypted private page table, allocate a more page to mirror the existing
> > KVM page table.  Resolve KVM page fault with the existing code, and do
> > additional operations necessary for the mirrored private page table.  To
> > distinguish such cases, the existing KVM page table is called a shared page
> > table (i.e. no mirrored private page table), and the KVM page table with
> > mirrored private page table is called a private page table.  The
> > relationship is depicted below.
> > 
> > Add private pointer to struct kvm_mmu_page for mirrored private page table
> > and add helper functions to allocate/initialize/free a mirrored private
> > page table page.  Also, add helper functions to check if a given
> > kvm_mmu_page is private.  The later patch introduces hooks to operate on
> > the mirrored private page table.
> > 
> >               KVM page fault                     |
> >                      |                           |
> >                      V                           |
> >         -------------+----------                 |
> >         |                      |                 |
> >         V                      V                 |
> >      shared GPA           private GPA            |
> >         |                      |                 |
> >         V                      V                 |
> >  CPU/KVM shared PT root  KVM private PT root     |  CPU private PT root
> >         |                      |                 |           |
> >         V                      V                 |           V
> >      shared PT            private PT <----mirror----> mirrored private PT
> >         |                      |                 |           |
> >         |                      \-----------------+------\    |
> >         |                                        |      |    |
> >         V                                        |      V    V
> >   shared guest page                              |    private guest page
> >                                                  |
> >                            non-encrypted memory  |    encrypted memory
> >                                                  |
> > PT: page table
> > 
> > Both CPU and KVM refer to CPU/KVM shared page table.  Private page table
> > is used only by KVM.  CPU refers to mirrored private page table.
> > 
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  1 +
> >  arch/x86/kvm/mmu/mmu.c          |  9 ++++
> >  arch/x86/kvm/mmu/mmu_internal.h | 84 +++++++++++++++++++++++++++++++++
> >  arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++
> >  4 files changed, 97 insertions(+)
> > 
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index f4d4ed41641b..bfc934dc9a33 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -716,6 +716,7 @@ struct kvm_vcpu_arch {
> >  	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
> >  	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
> >  	struct kvm_mmu_memory_cache mmu_page_header_cache;
> > +	struct kvm_mmu_memory_cache mmu_private_sp_cache;
> >  
> >  	/*
> >  	 * QEMU userspace and the guest each have their own FPU state.
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index c517c7bca105..a5bf3e40e209 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -691,6 +691,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> >  	int start, end, i, r;
> >  	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> >  
> > +	if (kvm_gfn_shared_mask(vcpu->kvm)) {
> > +		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
> > +					       PT64_ROOT_MAX_LEVEL);
> > +		if (r)
> > +			return r;
> > +	}
> > +
> >  	if (is_tdp_mmu && shadow_nonpresent_value)
> >  		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> >  
> > @@ -732,6 +739,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> >  {
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> > +	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_private_sp_cache);
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
> >  }
> > @@ -1736,6 +1744,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
> >  	if (!direct)
> >  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
> >  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> > +	kvm_mmu_init_private_sp(sp, NULL);
> >  
> >  	/*
> >  	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index 44a04fad4bed..9f3a6bea60a3 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -55,6 +55,10 @@ struct kvm_mmu_page {
> >  	u64 *spt;
> >  	/* hold the gfn of each spte inside spt */
> >  	gfn_t *gfns;
> > +#ifdef CONFIG_KVM_MMU_PRIVATE
> > +	/* associated private shadow page, e.g. SEPT page. */
> > +	void *private_sp;
> > +#endif
> 
> write_flooding_count and unsync_children are only used in shadow MMU SPs
> and private_sp is only used in TDP MMU SPs. So it seems like we could
> put these together in a union and drop CONFIG_KVM_MMU_PRIVATE without
> increasing the size of kvm_mmu_page. i.e.

I introduced KVM_MMU_PRIVATE as an alias to INTEL_TDX_HOST because I don't
want to use INTEL_TDX_HOST directly in kvm/mmu and I'd like KVM_MMU_PRIVATE
to be (sort of) independent from INTEL_TDX_HOST.  Anyway, once the patch
series is merged, we can drop KVM_MMU_PRIVATE.


> 	union {
> 		struct {
> 			unsigned int unsync_children;
> 			/* Number of writes since the last time traversal visited this page.  */
> 			atomic_t write_flooding_count;
> 		};
> 		/*
> 		 * The associated private shadow page table, e.g. for Secure EPT.
> 		 * Only valid if tdp_mmu_page is true.
> 		 */
> 		void *private_spt;
> 	};
> 
> Then change is_private_sp() to:
> 
> static inline bool is_private_sp(struct kvm_mmu_page *sp)
> {
> 	return sp->tdp_mmu_page && sp->private_sp;
> }
> 
> This will allow us to drop CONFIG_KVM_MMU_PRIVATE, the only benefit of
> which I see is to avoid increasing the size of kvm_mmu_page. However
> to actually realize that benefit Cloud vendors (for example) would have
> to create separate kernel builds for TDX and non-TDX hosts, which seems
> like a huge hassle.

Good idea. I'll use union.
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page
  2022-07-28 19:41   ` David Matlack
@ 2022-08-09 23:52     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-08-09 23:52 UTC (permalink / raw)
  To: David Matlack
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini

On Thu, Jul 28, 2022 at 12:41:51PM -0700,
David Matlack <dmatlack@google.com> wrote:

> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index f4d4ed41641b..bfc934dc9a33 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -716,6 +716,7 @@ struct kvm_vcpu_arch {
> >  	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
> >  	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
> >  	struct kvm_mmu_memory_cache mmu_page_header_cache;
> > +	struct kvm_mmu_memory_cache mmu_private_sp_cache;
> 
> I notice that mmu_private_sp_cache.gfp_zero is left unset so these pages
> may contain garbage. Is this by design because the TDX module can't rely
> on the contents being zero and has to take care of initializing the page
> itself? i.e. GFP_ZERO would be a waste of cycles?
> 
> If I'm correct please include a comment here in the next revision to
> explain why GFP_ZERO is not necessary.

Yes, exactly.  Here is the added comment.
 /*
  * This cache is used to allocate pages for the Secure EPT used by the
  * TDX module.  Because the TDX module doesn't trust the VMM and
  * initializes the pages itself, KVM doesn't initialize them.  Allocate
  * pages with garbage and give them to the TDX module.
  */
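
For reference, a sketch of how the cache setup might look so the distinction
is visible in one place.  mmu_init_memory_caches() is a made-up helper and
mmu_private_spt_cache uses the renamed field; this is not the actual patch.

static void mmu_init_memory_caches(struct kvm_vcpu *vcpu)
{
	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;

	/* KVM itself reads and writes these page tables, so hand out zeroed pages. */
	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;

	/*
	 * Deliberately no gfp_zero for mmu_private_spt_cache: the TDX module
	 * doesn't trust the VMM and initializes Secure EPT pages itself, so
	 * pre-zeroing them in KVM would just waste cycles.
	 */
}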

> >  	/*
> >  	 * QEMU userspace and the guest each have their own FPU state.
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index c517c7bca105..a5bf3e40e209 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -691,6 +691,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> >  	int start, end, i, r;
> >  	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> >  
> > +	if (kvm_gfn_shared_mask(vcpu->kvm)) {
> > +		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
> > +					       PT64_ROOT_MAX_LEVEL);
> > +		if (r)
> > +			return r;
> > +	}
> > +
> >  	if (is_tdp_mmu && shadow_nonpresent_value)
> >  		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> >  
> > @@ -732,6 +739,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> >  {
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> > +	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_private_sp_cache);
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
> >  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
> >  }
> > @@ -1736,6 +1744,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
> >  	if (!direct)
> >  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
> >  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> > +	kvm_mmu_init_private_sp(sp, NULL);
> 
> This is unnecessary. kvm_mmu_page structs are zero-initialized so
> private_sp will already be NULL.

Ok. 


> >  
> >  	/*
> >  	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index 44a04fad4bed..9f3a6bea60a3 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -55,6 +55,10 @@ struct kvm_mmu_page {
> >  	u64 *spt;
> >  	/* hold the gfn of each spte inside spt */
> >  	gfn_t *gfns;
> > +#ifdef CONFIG_KVM_MMU_PRIVATE
> > +	/* associated private shadow page, e.g. SEPT page. */
> 
> Can we use "Secure EPT" instead of SEPT in KVM code and comments? (i.e.
> also including variable names like sept_page -> secure_ept_page)
> 
> "SEPT" looks like a mispelling of SPTE, which is used all over KVM. It
> will be difficult to read code that contains both acronyms.

Makes sense. Will update it.


> > +	void *private_sp;
> 
> Please name this "private_spt" and move it up next to "spt".
> 
> sp" or "shadow page" is used to refer to kvm_mmu_page structs. For
> example, look at all the code in KVM that uses `struct kvm_mmu_page *sp`.
> 
> "spt" is "shadow page table", i.e. the actual page table memory. See
> kvm_mmu_page.spt. Calling this field "private_spt" makes it obvious that
> this pointer is pointing to a page table.
> 
> Also please update the language in the comment accordingly to "private
> shadow page table".

I'll rename as follows

private_sp => private_spt
sept_page => private_spt
mmu_private_sp_cache => mmu_private_spt_cache
kvm_mmu_init_private_sp => kvm_mmu_init_private_spt
kvm_mmu_alloc_private_sp => kvm_mmu_alloc_private_spt
kvm_mmu_free_private_sp => kvm_mmu_free_private_spt
kvm_alloc_private_sp_for_split => kvm_alloc_private_spt_for_split

Thanks,
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

* Re: [PATCH v7 022/102] KVM: TDX: create/destroy VM structure
  2022-08-02 19:46   ` Sean Christopherson
@ 2022-08-11 18:29     ` Isaku Yamahata
  0 siblings, 0 replies; 219+ messages in thread
From: Isaku Yamahata @ 2022-08-11 18:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	Kai Huang, Sagi Shahar

On Tue, Aug 02, 2022 at 07:46:21PM +0000,
Sean Christopherson <seanjc@google.com> wrote:

> On Mon, Jun 27, 2022, isaku.yamahata@intel.com wrote:
> > +int tdx_vm_init(struct kvm *kvm)
> > +{
> > +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> > +	cpumask_var_t packages;
> > +	int ret, i;
> > +	u64 err;
> > +
> > +	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
> > +	kvm->max_vcpus = 0;
> > +
> > +	kvm_tdx->hkid = tdx_keyid_alloc();
> > +	if (kvm_tdx->hkid < 0)
> > +		return -EBUSY;
> 
> We (Google) have been working through potential flows for intrahost (copyless)
> migration, and one of the things that came up is that allocating the HKID during
> KVM_CREATE_VM will be problematic as HKID are a relatively scarce resource.  E.g.
> if all key IDs are in use, then creating a destination TDX VM will be impossible
> even though intrahost migration can succeed since the "new" VM would reuse
> the source's HKID.
> 
> Allocating the various pages is also annoying, e.g. they'd have to be freed, but
> not as directly problematic.
> 
> SEV (all flavors) has a similar problem with ASIDs.  The solution for SEV was to
> not allocate an ASID during KVM_CREATE_VM and instead "activate" SEV during
> KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM.
> 
> I think we should prepare for a similar future for TDX and move the HKID allocation
> and all dependent resource allocation to KVM_TDX_INIT_VM.  AFAICT (and remember),
> this should be a fairly simple code movement, but I'd prefer it be done before
> merging TDX so that if it's not so simple, e.g. requires another sub-ioctl, then
> we don't have to try and tweak KVM's ABI to enable intrahost migration.

The simple code movement works here.  The TDX related initialization/allocation
can simply be moved to KVM_TDX_INIT_VM and KVM_TDX_INIT_VCPU.

I'll update them with the next respin.

Thanks,
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>

^ permalink raw reply	[flat|nested] 219+ messages in thread

end of thread, other threads:[~2022-08-11 18:29 UTC | newest]

Thread overview: 219+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-27 21:52 [PATCH v7 000/102] KVM TDX basic feature support isaku.yamahata
2022-06-27 21:52 ` [PATCH v7 001/102] KVM: x86: Move check_processor_compatibility from init ops to runtime ops isaku.yamahata
2022-06-27 21:52 ` [PATCH v7 002/102] Partially revert "KVM: Pass kvm_init()'s opaque param to additional arch funcs" isaku.yamahata
2022-07-13  1:55   ` Kai Huang
2022-07-26 23:57     ` Isaku Yamahata
2022-06-27 21:52 ` [PATCH v7 003/102] KVM: Refactor CPU compatibility check on module initialiization isaku.yamahata
2022-07-12  1:15   ` Kai Huang
2022-07-13  3:16     ` Kai Huang
2022-07-13  3:11   ` Kai Huang
2022-07-27 22:04   ` Isaku Yamahata
2022-06-27 21:52 ` [PATCH v7 004/102] KVM: VMX: Move out vmx_x86_ops to 'main.c' to wrap VMX and TDX isaku.yamahata
2022-06-27 21:52 ` [PATCH v7 005/102] x86/virt/vmx/tdx: export platform_tdx_enabled() isaku.yamahata
2022-06-27 21:52 ` [PATCH v7 006/102] KVM: TDX: Detect CPU feature on kernel module initialization isaku.yamahata
2022-06-28  3:43   ` Kai Huang
2022-07-11 23:48     ` Isaku Yamahata
2022-07-12  0:45       ` Kai Huang
2022-06-27 21:52 ` [PATCH v7 007/102] KVM: Enable hardware before doing arch VM initialization isaku.yamahata
2022-06-28  2:59   ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 008/102] KVM: x86: Refactor KVM VMX module init/exit functions isaku.yamahata
2022-06-28  3:53   ` Kai Huang
2022-07-12  0:38     ` Isaku Yamahata
2022-07-12  1:30       ` Kai Huang
2022-07-27  0:44         ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 009/102] KVM: TDX: Add placeholders for TDX VM/vcpu structure isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 010/102] x86/virt/tdx: Add a helper function to return system wide info about TDX module isaku.yamahata
2022-07-07  2:46   ` Yuan Yao
2022-07-12  0:39     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 011/102] KVM: TDX: Initialize TDX module when loading kvm_intel.ko isaku.yamahata
2022-06-28  4:31   ` Kai Huang
2022-07-12  0:46     ` Isaku Yamahata
2022-07-12  1:13       ` Kai Huang
2022-07-27  0:39         ` Isaku Yamahata
2022-07-27  4:38           ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 012/102] KVM: x86: Introduce vm_type to differentiate default VMs from confidential VMs isaku.yamahata
2022-06-28  2:52   ` Kai Huang
2022-07-04  6:44     ` Kai Huang
2022-07-12  1:01     ` Isaku Yamahata
2022-07-12  1:24       ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 013/102] KVM: TDX: Make TDX VM type supported isaku.yamahata
2022-07-07  2:55   ` Yuan Yao
2022-07-12  1:06     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 014/102] [MARKER] The start of TDX KVM patch series: TDX architectural definitions isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 015/102] KVM: TDX: Define " isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 016/102] KVM: TDX: Add TDX "architectural" error codes isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 017/102] KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 018/102] KVM: TDX: Add helper functions to print TDX SEAMCALL error isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 019/102] [MARKER] The start of TDX KVM patch series: TD VM creation/destruction isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 020/102] KVM: TDX: Stub in tdx.h with structs, accessors, and VMCS helpers isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 021/102] x86/cpu: Add helper functions to allocate/free TDX private host key id isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 022/102] KVM: TDX: create/destroy VM structure isaku.yamahata
2022-07-07  6:16   ` Yuan Yao
2022-07-12  6:21     ` Isaku Yamahata
2022-08-02 19:46   ` Sean Christopherson
2022-08-11 18:29     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 023/102] KVM: TDX: x86: Add ioctl to get TDX systemwide parameters isaku.yamahata
2022-07-07  6:48   ` Yuan Yao
2022-06-27 21:53 ` [PATCH v7 024/102] KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl isaku.yamahata
2022-07-07  7:12   ` Yuan Yao
2022-06-27 21:53 ` [PATCH v7 025/102] KVM: TDX: initialize VM with TDX specific parameters isaku.yamahata
2022-06-28  8:30   ` Xiaoyao Li
2022-07-12  7:11     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 026/102] KVM: TDX: Make pmu_intel.c ignore guest TD case isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 027/102] [MARKER] The start of TDX KVM patch series: TD vcpu creation/destruction isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 028/102] KVM: TDX: allocate/free TDX vcpu structure isaku.yamahata
2022-08-02 19:56   ` Sean Christopherson
2022-06-27 21:53 ` [PATCH v7 029/102] " isaku.yamahata
2022-06-28 11:34   ` Kai Huang
2022-07-12  7:55     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 030/102] KVM: TDX: Do TDX specific vcpu initialization isaku.yamahata
2022-07-08  2:14   ` Yuan Yao
2022-07-12 20:35     ` Isaku Yamahata
2022-07-13  0:22       ` Xiaoyao Li
2022-06-27 21:53 ` [PATCH v7 031/102] [MARKER] The start of TDX KVM patch series: KVM MMU GPA shared bits isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 032/102] KVM: x86/mmu: introduce config for PRIVATE KVM MMU isaku.yamahata
2022-07-08  1:53   ` Kai Huang
2022-07-13  1:25     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits isaku.yamahata
2022-07-08  2:15   ` Kai Huang
2022-07-13  4:52     ` Isaku Yamahata
2022-07-13 10:41       ` Kai Huang
2022-07-14  0:14         ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 034/102] [MARKER] The start of TDX KVM patch series: KVM TDP refactoring for TDX isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault isaku.yamahata
2022-06-30 11:37   ` Kai Huang
2022-07-13  8:35     ` Isaku Yamahata
2022-07-13 10:29       ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for non-present SPTE isaku.yamahata
2022-06-30 11:03   ` Kai Huang
2022-07-14 18:05     ` Isaku Yamahata
2022-07-08  5:18   ` Yuan Yao
2022-07-08 15:30     ` Sean Christopherson
2022-07-11  7:05       ` Yuan Yao
2022-07-11 14:47         ` Sean Christopherson
2022-07-14 18:41   ` Isaku Yamahata
2022-07-20  2:44     ` Kai Huang
2022-07-20  3:12     ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis isaku.yamahata
2022-06-30 11:45   ` Kai Huang
2022-07-05 14:06   ` Kai Huang
2022-07-19  8:47   ` Isaku Yamahata
2022-07-20  3:45     ` Kai Huang
2022-07-27 23:20       ` Isaku Yamahata
2022-07-28  0:48         ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 038/102] KVM: x86/mmu: Disallow fast page fault on private GPA isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level isaku.yamahata
2022-06-30 12:27   ` Kai Huang
2022-07-19 10:26     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu isaku.yamahata
2022-07-01 10:41   ` Kai Huang
2022-07-19 11:06     ` Isaku Yamahata
2022-07-19 23:17       ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 041/102] KVM: VMX: Introduce test mode related to EPT violation VE isaku.yamahata
2022-07-08  2:23   ` Kai Huang
2022-07-19 14:49     ` Isaku Yamahata
2022-07-20  5:13       ` Kai Huang
2022-07-27 23:39         ` Isaku Yamahata
2022-07-28  0:54           ` Kai Huang
2022-07-28 20:11             ` Sean Christopherson
2022-08-09  0:48               ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 042/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU hooks isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX isaku.yamahata
2022-07-11  5:48   ` Yuan Yao
2022-07-11 14:56   ` Sean Christopherson
2022-07-19 15:04     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 044/102] KVM: x86/mmu: Add a private pointer to struct kvm_mmu_page isaku.yamahata
2022-07-01 11:12   ` Kai Huang
2022-07-19 15:35     ` Isaku Yamahata
2022-07-11  6:28   ` Yuan Yao
2022-07-28 19:41   ` David Matlack
2022-08-09 23:52     ` Isaku Yamahata
2022-07-28 20:13   ` David Matlack
2022-08-09 23:50     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 045/102] KVM: x86/tdp_mmu: refactor kvm_tdp_mmu_map() isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 046/102] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU isaku.yamahata
2022-07-08  3:44   ` Kai Huang
2022-07-26 23:39     ` Isaku Yamahata
2022-07-11  8:28   ` Yuan Yao
2022-07-26 23:41     ` Isaku Yamahata
2022-07-12  2:36   ` Yuan Yao
2022-07-26 23:42     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 047/102] [MARKER] The start of TDX KVM patch series: TDX EPT violation isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 048/102] KVM: x86/mmu: Disallow dirty logging for x86 TDX isaku.yamahata
2022-07-08  2:30   ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 049/102] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs isaku.yamahata
2022-07-12  2:58   ` Yuan Yao
2022-07-19 18:03     ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 050/102] KVM: VMX: Split out guts of EPT violation to common/exposed function isaku.yamahata
2022-07-08 10:25   ` Kai Huang
2022-06-27 21:53 ` [PATCH v7 051/102] KVM: VMX: Move setting of EPT MMU masks to common VT-x code isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 052/102] KVM: TDX: Add load_mmu_pgd method for TDX isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 053/102] KVM: TDX: don't request KVM_REQ_APIC_PAGE_RELOAD isaku.yamahata
2022-07-12  3:47   ` Yuan Yao
2022-07-12  6:14     ` Chao Gao
2022-07-19 18:12       ` Isaku Yamahata
2022-06-27 21:53 ` [PATCH v7 054/102] KVM: TDX: TDP MMU TDX support isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 055/102] [MARKER] The start of TDX KVM patch series: KVM TDP MMU MapGPA isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 056/102] KVM: x86/mmu: steal software usable git to record if GFN is for shared or not isaku.yamahata
2022-07-18  8:37   ` Yuan Yao
2022-06-27 21:53 ` [PATCH v7 057/102] KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 058/102] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 059/102] [MARKER] The start of TDX KVM patch series: TD finalization isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 060/102] KVM: TDX: Create initial guest memory isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 061/102] KVM: TDX: Finalize VM initialization isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 062/102] [MARKER] The start of TDX KVM patch series: TD vcpu enter/exit isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 063/102] KVM: TDX: Add helper assembly function to TDX vcpu isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 064/102] KVM: TDX: Implement TDX vcpu enter/exit path isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 065/102] KVM: TDX: vcpu_run: save/restore host state(host kernel gs) isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 066/102] KVM: TDX: restore host xsave state when exit from the guest TD isaku.yamahata
2022-06-27 21:53 ` [PATCH v7 067/102] KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o wrmsr isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 068/102] KVM: TDX: restore user ret MSRs isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 069/102] [MARKER] The start of TDX KVM patch series: TD vcpu exits/interrupts/hypercalls isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 070/102] KVM: TDX: complete interrupts after tdexit isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 071/102] KVM: TDX: restore debug store when TD exit isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 072/102] KVM: TDX: handle vcpu migration over logical processor isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 073/102] KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched behavior isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 074/102] KVM: TDX: Add support for find pending IRQ in a protected local APIC isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 075/102] KVM: x86: Assume timer IRQ was injected if APIC state is proteced isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 076/102] KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 077/102] KVM: TDX: Implement interrupt injection isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 078/102] KVM: TDX: Implements vcpu request_immediate_exit isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 079/102] KVM: TDX: Implement methods to inject NMI isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 080/102] KVM: VMX: Modify NMI and INTR handlers to take intr_info as function argument isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 081/102] KVM: VMX: Move NMI/exception handler to common helper isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 082/102] KVM: x86: Split core of hypercall emulation to helper function isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 083/102] KVM: TDX: Add a place holder to handle TDX VM exit isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 084/102] KVM: TDX: handle EXIT_REASON_OTHER_SMI isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 085/102] KVM: TDX: handle ept violation/misconfig exit isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 086/102] KVM: TDX: handle EXCEPTION_NMI and EXTERNAL_INTERRUPT isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 087/102] KVM: TDX: Add a place holder for handler of TDX hypercalls (TDG.VP.VMCALL) isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 088/102] KVM: TDX: handle KVM hypercall with TDG.VP.VMCALL isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 089/102] KVM: TDX: Handle TDX PV CPUID hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 090/102] KVM: TDX: Handle TDX PV HLT hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 091/102] KVM: TDX: Handle TDX PV port io hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 092/102] KVM: TDX: Handle TDX PV MMIO hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 093/102] KVM: TDX: Implement callbacks for MSR operations for TDX isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 094/102] KVM: TDX: Handle TDX PV rdmsr/wrmsr hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 095/102] KVM: TDX: Handle TDX PV report fatal error hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 096/102] KVM: TDX: Handle TDX PV map_gpa hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 097/102] KVM: TDX: Handle TDG.VP.VMCALL<GetTdVmCallInfo> hypercall isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 098/102] KVM: TDX: Silently discard SMI request isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 099/102] KVM: TDX: Silently ignore INIT/SIPI isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 100/102] KVM: TDX: Add methods to ignore accesses to CPU state isaku.yamahata
2022-06-27 21:54 ` [PATCH v7 101/102] Documentation/virtual/kvm: Document on Trust Domain Extensions(TDX) isaku.yamahata
2022-07-08  1:34   ` Kai Huang
2022-06-27 21:54 ` [PATCH v7 102/102] KVM: x86: design documentation on TDX support of x86 KVM TDP MMU isaku.yamahata
2022-07-11 15:17 ` [PATCH v7 000/102] KVM TDX basic feature support Isaku Yamahata
2022-07-12  5:07   ` Chao Gao
2022-07-12 10:54     ` Chao Peng
2022-07-12 17:22       ` Isaku Yamahata
2022-07-13  7:37         ` Chao Peng
2022-07-12 10:49   ` Chao Peng
2022-07-12 17:35     ` Isaku Yamahata
2022-07-14  1:03 ` Sean Christopherson
2022-07-14  4:09   ` Xiaoyao Li
2022-07-20 14:59   ` Chao Peng
2022-07-25 13:46     ` Nikunj A. Dadhania
2022-07-26 14:32       ` Chao Peng
2022-07-27  9:26         ` Nikunj A. Dadhania
2022-08-03 10:48           ` Chao Peng

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.