From: Chenyi Qiang <chenyi.qiang@intel.com>
To: <isaku.yamahata@intel.com>, <kvm@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>
Cc: <isaku.yamahata@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	<erdemaktas@google.com>, Sean Christopherson <seanjc@google.com>,
	Sagi Shahar <sagis@google.com>,
	David Matlack <dmatlack@google.com>,
	Kai Huang <kai.huang@intel.com>
Subject: Re: [PATCH v10 049/108] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
Date: Wed, 16 Nov 2022 09:40:38 +0800	[thread overview]
Message-ID: <67b782ee-c95c-d6bc-3cca-cdfe75f4bf6a@intel.com> (raw)
In-Reply-To: <9d5595dfe1b5ab77bcb5650bc4d940dd977b0a32.1667110240.git.isaku.yamahata@intel.com>


On 10/30/2022 2:22 PM, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Allocate protected page tables for private page tables, and add hooks to
> operate on the protected page tables.  This patch adds allocation/free of
> protected page tables and the hooks.  When calling hooks to update an SPTE
> entry, freeze the entry, call the hooks and unfreeze the entry to allow
> concurrent updates on page tables, which is the advantage of the TDP MMU.
> As kvm_gfn_shared_mask() always returns false, those hooks aren't called
> yet with this patch.
>
> When the faulting GPA is private, the KVM fault is called private.  When
> resolving a private KVM fault, allocate a protected page table and call
> hooks to operate on it.  On a change of a private PTE entry, invoke the
> kvm_x86_ops hook in __handle_changed_spte() to propagate the change to the
> protected page table.  The following depicts the relationship.
>
>    private KVM page fault   |
>        |                    |
>        V                    |
>   private GPA               |     CPU protected EPTP
>        |                    |           |
>        V                    |           V
>   private PT root           |     protected PT root
>        |                    |           |
>        V                    |           V
>     private PT --hook to propagate-->protected PT
>        |                    |           |
>        \--------------------+------\    |
>                             |      |    |
>                             |      V    V
>                             |    private guest page
>                             |
>                             |
>       non-encrypted memory  |    encrypted memory
>                             |
> PT: page table
>
> The existing KVM TDP MMU code uses atomic updates of the SPTE.  On
> populating an EPT entry, the entry is set atomically.  Zapping an SPTE,
> however, requires a TLB shootdown.  To address that, the entry is frozen
> with a special SPTE value that clears the present bit.  After the TLB
> shootdown, the entry is set to the eventual value (unfrozen).
>
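
For reference, a condensed sketch of that existing freeze-for-zap flow
(modeled on tdp_mmu_zap_spte_atomic(); cmpxchg64_spte() and flush_tlbs()
below are hypothetical stand-ins for the real try_cmpxchg64() and remote
TLB flush helpers, so this is an illustration rather than kernel code):

	static int zap_spte_frozen(u64 *sptep, u64 old_spte)
	{
		/* Freeze: atomically replace the live SPTE with REMOVED_SPTE. */
		if (!cmpxchg64_spte(sptep, &old_spte, REMOVED_SPTE))
			return -EBUSY;	/* lost the race; the caller retries */

		/* While the entry is frozen, no vCPU can re-populate it. */
		flush_tlbs();

		/* Unfreeze: install the eventual non-present value. */
		WRITE_ONCE(*sptep, 0);
		return 0;
	}
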
> For the protected page table, hooks are called to update the protected page
> table in addition to the direct access to the private SPTE.  For the
> zapping case, freezing the SPTE is sufficient: the hooks can be called in
> addition to the TLB shootdown.  For populating a private SPTE entry, there
> can be a race condition without further protection:
>
>    vcpu 1: populating 2M private SPTE
>    vcpu 2: populating 4K private SPTE
>    vcpu 2: TDX SEAMCALL to update 4K protected SPTE => error
>    vcpu 1: TDX SEAMCALL to update 2M protected SPTE
>
> To avoid the race, the frozen SPTE is utilized.  Instead of atomically
> updating the private entry, freeze the entry, call the hook that updates
> the protected SPTE, then set the entry to the final value.
>
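
Putting it together, the update sequence for a private SPTE, as implemented
by the tdp_mmu_set_spte_atomic() hunk further down, is roughly the following
(again a sketch; propagate_to_protected_ept() is a hypothetical stand-in for
the handle_changed_private_spte() / kvm_x86 hook path in this patch):

	static int set_private_spte_frozen(u64 *sptep, u64 old_spte, u64 new_spte)
	{
		/* 1. Freeze: concurrent faults see REMOVED_SPTE and bail with -EBUSY. */
		if (!cmpxchg64_spte(sptep, &old_spte, REMOVED_SPTE))
			return -EBUSY;

		/* 2. Call the hooks (SEAMCALLs) to update the protected page table. */
		if (propagate_to_protected_ept(old_spte, new_spte)) {
			/* Roll the SPTE back so KVM's state stays consistent. */
			WRITE_ONCE(*sptep, old_spte);
			return -EIO;
		}

		/* 3. Unfreeze: install the final value. */
		WRITE_ONCE(*sptep, new_spte);
		return 0;
	}
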
> Only 4K pages are supported at this stage.  2M page support can be done in
> future patches.
>
> Co-developed-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/include/asm/kvm-x86-ops.h |   5 +
>   arch/x86/include/asm/kvm_host.h    |  11 ++
>   arch/x86/kvm/mmu/mmu.c             |  15 +-
>   arch/x86/kvm/mmu/mmu_internal.h    |  32 ++++
>   arch/x86/kvm/mmu/tdp_iter.h        |   2 +-
>   arch/x86/kvm/mmu/tdp_mmu.c         | 244 +++++++++++++++++++++++++----
>   arch/x86/kvm/mmu/tdp_mmu.h         |   2 +-
>   virt/kvm/kvm_main.c                |   1 +
>   8 files changed, 280 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index f28c9fd72ac4..1b01dc2098b0 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -94,6 +94,11 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
>   KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
>   KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
>   KVM_X86_OP(load_mmu_pgd)
> +KVM_X86_OP_OPTIONAL(link_private_spt)
> +KVM_X86_OP_OPTIONAL(free_private_spt)
> +KVM_X86_OP_OPTIONAL(set_private_spte)
> +KVM_X86_OP_OPTIONAL(remove_private_spte)
> +KVM_X86_OP_OPTIONAL(zap_private_spte)
>   KVM_X86_OP(has_wbinvd_exit)
>   KVM_X86_OP(get_l2_tsc_offset)
>   KVM_X86_OP(get_l2_tsc_multiplier)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 381df2c8136d..5f9634c130d0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -467,6 +467,7 @@ struct kvm_mmu {
>   			 struct kvm_mmu_page *sp);
>   	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
>   	struct kvm_mmu_root_info root;
> +	hpa_t private_root_hpa;
>   	union kvm_cpu_role cpu_role;
>   	union kvm_mmu_page_role root_role;
>   
> @@ -1613,6 +1614,16 @@ struct kvm_x86_ops {
>   	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
>   			     int root_level);
>   
> +	int (*link_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +				void *private_spt);
> +	int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +				void *private_spt);
> +	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +				 kvm_pfn_t pfn);
> +	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +				    kvm_pfn_t pfn);
> +	int (*zap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
> +
>   	bool (*has_wbinvd_exit)(void);
>   
>   	u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 0237e143299c..02e7b5cf3231 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3646,7 +3646,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
>   		goto out_unlock;
>   
>   	if (is_tdp_mmu_enabled(vcpu->kvm)) {
> -		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
> +		if (kvm_gfn_shared_mask(vcpu->kvm) &&
> +		    !VALID_PAGE(mmu->private_root_hpa)) {
> +			root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, true);
> +			mmu->private_root_hpa = root;
> +		}
> +		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, false);
>   		mmu->root.hpa = root;
>   	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
>   		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level);
> @@ -4357,7 +4362,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>   	unsigned long mmu_seq;
>   	int r;
>   
> -	fault->gfn = fault->addr >> PAGE_SHIFT;
> +	fault->gfn = gpa_to_gfn(fault->addr) & ~kvm_gfn_shared_mask(vcpu->kvm);
>   	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
>   
>   	if (page_fault_handle_page_track(vcpu, fault))
> @@ -5893,6 +5898,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
>   
>   	mmu->root.hpa = INVALID_PAGE;
>   	mmu->root.pgd = 0;
> +	mmu->private_root_hpa = INVALID_PAGE;
>   	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
>   		mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
>   
> @@ -6116,7 +6122,7 @@ static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
>   		};
>   
>   		/*
> -		 * this handles both private gfn and shared gfn.
> +		 * This handles both private gfn and shared gfn.
>   		 * All private page should be zapped on memslot deletion.
>   		 */
>   		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush, true);
> @@ -6919,6 +6925,9 @@ int kvm_mmu_vendor_module_init(void)
>   void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
>   {
>   	kvm_mmu_unload(vcpu);
> +	if (is_tdp_mmu_enabled(vcpu->kvm))
> +		mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->private_root_hpa,
> +				NULL);
>   	free_mmu_pages(&vcpu->arch.root_mmu);
>   	free_mmu_pages(&vcpu->arch.guest_mmu);
>   	mmu_free_memory_caches(vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 4c013124534b..508e8402c07a 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -6,6 +6,8 @@
>   #include <linux/kvm_host.h>
>   #include <asm/kvm_host.h>
>   
> +#include "mmu.h"
> +
>   #undef MMU_DEBUG
>   
>   #ifdef MMU_DEBUG
> @@ -209,11 +211,29 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu,
>   	}
>   }
>   
> +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	gfp &= ~__GFP_ZERO;
> +	sp->private_spt = (void *)__get_free_page(gfp);
> +	if (!sp->private_spt)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
>   static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
>   {
>   	if (sp->private_spt)
>   		free_page((unsigned long)sp->private_spt);
>   }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	if (is_private_sp(root))
> +		return kvm_gfn_private(kvm, gfn);
> +	else
> +		return kvm_gfn_shared(kvm, gfn);
> +}
>   #else
>   static inline void *kvm_mmu_private_spt(struct kvm_mmu_page *sp)
>   {
> @@ -230,9 +250,20 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu,
>   {
>   }
>   
> +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> +	return -ENOMEM;
> +}
> +
>   static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
>   {
>   }
> +
> +static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
> +				     gfn_t gfn)
> +{
> +	return gfn;
> +}
>   #endif
>   
>   static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
> @@ -367,6 +398,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>   		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
>   		.nx_huge_page_workaround_enabled =
>   			is_nx_huge_page_enabled(vcpu->kvm),
> +		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
>   
>   		.max_level = vcpu->kvm->arch.tdp_max_page_level,
>   		.req_level = PG_LEVEL_4K,
> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index 9e56a5b1024c..eab62baf8549 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -71,7 +71,7 @@ struct tdp_iter {
>   	tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
>   	/* A pointer to the current SPTE */
>   	tdp_ptep_t sptep;
> -	/* The lowest GFN mapped by the current SPTE */
> +	/* The lowest GFN (shared bits included) mapped by the current SPTE */
>   	gfn_t gfn;
>   	/* The level of the root page given to the iterator */
>   	int root_level;
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index bdb50c26849f..0e053b96444a 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -285,6 +285,9 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu,
>   	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
>   	sp->role = role;
>   
> +	if (kvm_mmu_page_role_is_private(role))
> +		kvm_mmu_alloc_private_spt(vcpu, NULL, sp);
> +
>   	return sp;
>   }
>   
> @@ -305,7 +308,8 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
>   	trace_kvm_mmu_get_page(sp, true);
>   }
>   
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *kvm_tdp_mmu_get_vcpu_root(struct kvm_vcpu *vcpu,
> +						      bool private)
>   {
>   	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
>   	struct kvm *kvm = vcpu->kvm;
> @@ -317,6 +321,8 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>   	 * Check for an existing root before allocating a new one.  Note, the
>   	 * role check prevents consuming an invalid root.
>   	 */
> +	if (private)
> +		kvm_mmu_page_role_set_private(&role);
>   	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
>   		if (root->role.word == role.word &&
>   		    kvm_tdp_mmu_get_root(root))
> @@ -333,11 +339,17 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>   	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
>   
>   out:
> -	return __pa(root->spt);
> +	return root;
> +}
> +
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private)
> +{
> +	return __pa(kvm_tdp_mmu_get_vcpu_root(vcpu, private)->spt);
>   }
>   
>   static int __must_check handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -					    u64 old_spte, u64 new_spte, int level,
> +					    u64 old_spte, u64 new_spte,
> +					    union kvm_mmu_page_role role,
>   					    bool shared);
>   
>   static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
> @@ -364,6 +376,8 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
>   
>   	if ((!is_writable_pte(old_spte) || pfn_changed) &&
>   	    is_writable_pte(new_spte)) {
> +		/* For memory slot operations, use GFN without aliasing */
> +		gfn = gfn & ~kvm_gfn_shared_mask(kvm);
>   		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
>   		mark_page_dirty_in_slot(kvm, slot, gfn);
>   	}
> @@ -500,7 +514,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   							  REMOVED_SPTE, level);
>   		}
>   		ret = handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
> -					  old_spte, REMOVED_SPTE, level, shared);
> +					  old_spte, REMOVED_SPTE, sp->role,
> +					  shared);
>   		/*
>   		 * We are removing page tables.  Because in TDX case we don't
>   		 * zap private page tables except tearing down VM.  It means
> @@ -509,9 +524,81 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>   		WARN_ON_ONCE(ret);
>   	}
>   
> +	if (is_private_sp(sp) &&
> +	    WARN_ON(static_call(kvm_x86_free_private_spt)(kvm, sp->gfn, sp->role.level,
> +							  kvm_mmu_private_spt(sp)))) {
> +		/*
> +		 * Failed to unlink Secure EPT page and there is nothing to do
> +		 * further.  Intentionally leak the page to prevent the kernel
> +		 * from accessing the encrypted page.
> +		 */
> +		kvm_mmu_init_private_spt(sp, NULL);
> +	}
> +
>   	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
>   }
>   
> +static void *get_private_spt(gfn_t gfn, u64 new_spte, int level)
> +{
> +	if (is_shadow_present_pte(new_spte) && !is_last_spte(new_spte, level)) {
> +		struct kvm_mmu_page *sp = to_shadow_page(pfn_to_hpa(spte_to_pfn(new_spte)));
> +		void *private_spt = kvm_mmu_private_spt(sp);
> +
> +		WARN_ON_ONCE(!private_spt);
> +		WARN_ON_ONCE(sp->role.level + 1 != level);
> +		WARN_ON_ONCE(sp->gfn != gfn);
> +		return private_spt;
> +	}
> +
> +	return NULL;
> +}
> +
> +static int __must_check handle_changed_private_spte(struct kvm *kvm, gfn_t gfn,
> +						    u64 old_spte, u64 new_spte,
> +						    int level)
> +{
> +	bool was_present = is_shadow_present_pte(old_spte);
> +	bool is_present = is_shadow_present_pte(new_spte);
> +	bool was_leaf = was_present && is_last_spte(old_spte, level);
> +	bool is_leaf = is_present && is_last_spte(new_spte, level);
> +	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
> +	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
> +	int ret;

int ret = 0;

> +
> +	lockdep_assert_held(&kvm->mmu_lock);
> +	if (is_present) {
> +		/* TDP MMU doesn't change present -> present */
> +		KVM_BUG_ON(was_present, kvm);
> +
> +		/*
> +		 * Use different call to either set up middle level
> +		 * private page table, or leaf.
> +		 */
> +		if (is_leaf)
> +			ret = static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn);
> +		else {
> +			void *private_spt = get_private_spt(gfn, new_spte, level);
> +
> +			KVM_BUG_ON(!private_spt, kvm);
> +			ret = static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_spt);
> +		}
> +	} else if (was_leaf) {
> +		/* non-present -> non-present doesn't make sense. */
> +		KVM_BUG_ON(!was_present, kvm);
> +		/*
> +		 * Zap private leaf SPTE.  Zapping private table is done
> +		 * below in handle_removed_tdp_mmu_page().
> +		 */
> +		lockdep_assert_held_write(&kvm->mmu_lock);
> +		ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
> +		if (!ret) {
> +			ret = static_call(kvm_x86_remove_private_spte)(kvm, gfn, level, old_pfn);
> +			WARN_ON_ONCE(ret);
> +		}
> +	}

Otherwise, "ret" may be returned without being initialized, which will then
trigger the WARN_ON_ONCE(ret) after handle_changed_spte() in patch 48.
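
To make the problematic path concrete (a condensed paraphrase of the hunk
above, not the literal code): a change that is neither a population
(!is_present) nor a leaf zap (!was_leaf), e.g. removing a non-leaf private
SPTE, skips both branches and returns the uninitialized value:

	int ret;			/* suggested: int ret = 0; */

	if (is_present) {
		/* link middle-level table or set leaf: ret is assigned */
	} else if (was_leaf) {
		/* zap + remove leaf: ret is assigned */
	}
	/* !is_present && !was_leaf: neither branch runs, ret is garbage */
	return ret;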

> +	return ret;
> +}
> +
>   /**
>    * __handle_changed_spte - handle bookkeeping associated with an SPTE change
>    * @kvm: kvm instance
> @@ -519,7 +606,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>    * @gfn: the base GFN that was mapped by the SPTE
>    * @old_spte: The value of the SPTE before the change
>    * @new_spte: The value of the SPTE after the change
> - * @level: the level of the PT the SPTE is part of in the paging structure
> + * @role: the role of the PT the SPTE is part of in the paging structure
>    * @shared: This operation may not be running under the exclusive use of
>    *	    the MMU lock and the operation must synchronize with other
>    *	    threads that might be modifying SPTEs.
> @@ -528,14 +615,18 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
>    * This function must be called for all TDP SPTE modifications.
>    */
>   static int __must_check __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -					      u64 old_spte, u64 new_spte, int level,
> -					      bool shared)
> +					      u64 old_spte, u64 new_spte,
> +					      union kvm_mmu_page_role role, bool shared)
>   {
> +	bool is_private = kvm_mmu_page_role_is_private(role);
> +	int level = role.level;
>   	bool was_present = is_shadow_present_pte(old_spte);
>   	bool is_present = is_shadow_present_pte(new_spte);
>   	bool was_leaf = was_present && is_last_spte(old_spte, level);
>   	bool is_leaf = is_present && is_last_spte(new_spte, level);
> -	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
> +	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
> +	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
> +	bool pfn_changed = old_pfn != new_pfn;
>   
>   	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
>   	WARN_ON(level < PG_LEVEL_4K);
> @@ -602,7 +693,7 @@ static int __must_check __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t
>   
>   	if (was_leaf && is_dirty_spte(old_spte) &&
>   	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
> -		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
> +		kvm_set_pfn_dirty(old_pfn);
>   
>   	/*
>   	 * Recursively handle child PTs if the change removed a subtree from
> @@ -611,26 +702,42 @@ static int __must_check __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t
>   	 * pages are kernel allocations and should never be migrated.
>   	 */
>   	if (was_present && !was_leaf &&
> -	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
> +	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) {
> +		KVM_BUG_ON(is_private != is_private_sptep(spte_to_child_pt(old_spte, level)),
> +			   kvm);
>   		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
> +	}
>   
> +	/*
> +	 * Special handling for the private mapping.  We are either
> +	 * setting up new mapping at middle level page table, or leaf,
> +	 * or tearing down existing mapping.
> +	 *
> +	 * This is after handling lower page table by above
> +	 * handle_remove_tdp_mmu_page().  Secure-EPT requires to remove
> +	 * Secure-EPT tables after removing children.
> +	 */
> +	if (is_private &&
> +	    /* Ignore change of software only bits. e.g. host_writable */
> +	    (was_leaf != is_leaf || was_present != is_present || pfn_changed))
> +		return handle_changed_private_spte(kvm, gfn, old_spte, new_spte, role.level);
>   	return 0;
>   }
>   
>   static int __must_check handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> -					    u64 old_spte, u64 new_spte, int level,
> +					    u64 old_spte, u64 new_spte,
> +					    union kvm_mmu_page_role role,
>   					    bool shared)
>   {
>   	int ret;
>   
> -	ret = __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
> -				    shared);
> +	ret = __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, role, shared);
>   	if (ret)
>   		return ret;
>   
> -	handle_changed_spte_acc_track(old_spte, new_spte, level);
> +	handle_changed_spte_acc_track(old_spte, new_spte, role.level);
>   	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
> -				      new_spte, level);
> +				      new_spte, role.level);
>   	return 0;
>   }
>   
> @@ -656,6 +763,24 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
>   						       struct tdp_iter *iter,
>   						       u64 new_spte)
>   {
> +	/*
> +	 * For conventional page table, the update flow is
> +	 * - update STPE with atomic operation
> +	 * - handle changed SPTE. __handle_changed_spte()
> +	 * NOTE: __handle_changed_spte() (and functions) must be safe against
> +	 * concurrent update.  It is an exception to zap SPTE.  See
> +	 * tdp_mmu_zap_spte_atomic().
> +	 *
> +	 * For private page table, callbacks are needed to propagate SPTE
> +	 * change into the protected page table.  In order to atomically update
> +	 * both the SPTE and the protected page tables with callbacks, utilize
> +	 * freezing SPTE.
> +	 * - Freeze the SPTE. Set entry to REMOVED_SPTE.
> +	 * - Trigger callbacks for protected page tables. __handle_changed_spte()
> +	 * - Unfreeze the SPTE.  Set the entry to new_spte.
> +	 */
> +	bool freeze_spte = is_private_sptep(iter->sptep) && !is_removed_spte(new_spte);
> +	u64 tmp_spte = freeze_spte ? REMOVED_SPTE : new_spte;
>   	u64 *sptep = rcu_dereference(iter->sptep);
>   	int ret;
>   
> @@ -673,14 +798,24 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
>   	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
>   	 * does not hold the mmu_lock.
>   	 */
> -	if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte))
> +	if (!try_cmpxchg64(sptep, &iter->old_spte, tmp_spte))
>   		return -EBUSY;
>   
>   	ret = __handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
> -				    new_spte, iter->level, true);
> +				    new_spte, sptep_to_sp(sptep)->role, true);
>   	if (!ret)
>   		handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
>   
> +	if (ret) {
> +		/*
> +		 * !freeze_spte means this fault isn't private.  No call to
> +		 * operation on Secure EPT.
> +		 */
> +		WARN_ON_ONCE(!freeze_spte);
> +		__kvm_tdp_mmu_write_spte(sptep, iter->old_spte);
> +	} else if (freeze_spte)
> +		__kvm_tdp_mmu_write_spte(sptep, new_spte);
> +
>   	return ret;
>   }
>   
> @@ -750,6 +885,7 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
>   			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
>   			      bool record_acc_track, bool record_dirty_log)
>   {
> +	union kvm_mmu_page_role role;
>   	int ret;
>   
>   	lockdep_assert_held_write(&kvm->mmu_lock);
> @@ -765,7 +901,9 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
>   
>   	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
>   
> -	ret = __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
> +	role = sptep_to_sp(sptep)->role;
> +	role.level = level;
> +	ret = __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, role, false);
>   	/* Because write spin lock is held, no race.  It should success. */
>   	WARN_ON_ONCE(ret);
>   
> @@ -819,8 +957,11 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
>   			continue;					\
>   		else
>   
> -#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)		\
> -	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
> +#define tdp_mmu_for_each_pte(_iter, _mmu, _private, _start, _end)	\
> +	for_each_tdp_pte(_iter,						\
> +		 to_shadow_page((_private) ? _mmu->private_root_hpa :	\
> +				_mmu->root.hpa),			\
> +		_start, _end)
>   
>   /*
>    * Yield if the MMU lock is contended or this thread needs to return control
> @@ -983,6 +1124,14 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>   	if (!zap_private && is_private_sp(root))
>   		return false;
>   
> +	/*
> +	 * start and end doesn't have GFN shared bit.  This function zaps
> +	 * a region including alias.  Adjust shared bit of [start, end) if the
> +	 * root is shared.
> +	 */
> +	start = kvm_gfn_for_root(kvm, root, start);
> +	end = kvm_gfn_for_root(kvm, root, end);
> +
>   	rcu_read_lock();
>   
>   	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) {
> @@ -1111,10 +1260,19 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>   	WARN_ON(sp->role.level != fault->goal_level);
>   	if (unlikely(!fault->slot))
>   		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
> -	else
> -		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
> -					 fault->pfn, iter->old_spte, fault->prefetch, true,
> -					 fault->map_writable, &new_spte);
> +	else {
> +		unsigned long pte_access = ACC_ALL;
> +		gfn_t gfn_unalias = iter->gfn & ~kvm_gfn_shared_mask(vcpu->kvm);
> +
> +		/* TDX shared GPAs are no executable, enforce this for the SDV. */
> +		if (kvm_gfn_shared_mask(vcpu->kvm) && !fault->is_private)
> +			pte_access &= ~ACC_EXEC_MASK;
> +
> +		wrprot = make_spte(vcpu, sp, fault->slot, pte_access,
> +				   gfn_unalias, fault->pfn, iter->old_spte,
> +				   fault->prefetch, true, fault->map_writable,
> +				   &new_spte);
> +	}
>   
>   	if (new_spte == iter->old_spte)
>   		ret = RET_PF_SPURIOUS;
> @@ -1214,6 +1372,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>   {
>   	struct kvm_mmu *mmu = vcpu->arch.mmu;
>   	struct tdp_iter iter;
> +	gfn_t raw_gfn;
> +	bool is_private = fault->is_private;
>   	int ret;
>   
>   	kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -1222,7 +1382,17 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>   
>   	rcu_read_lock();
>   
> -	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
> +	raw_gfn = gpa_to_gfn(fault->addr);
> +
> +	if (is_error_noslot_pfn(fault->pfn) ||
> +	    !kvm_pfn_to_refcounted_page(fault->pfn)) {
> +		if (is_private) {
> +			rcu_read_unlock();
> +			return -EFAULT;
> +		}
> +	}
> +
> +	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
>   		if (fault->nx_huge_page_workaround_enabled)
>   			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
>   
> @@ -1238,6 +1408,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>   		    is_large_pte(iter.old_spte)) {
>   			if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
>   				break;
> +			/*
> +			 * TODO: large page support.
> +			 * Doesn't support large page for TDX now
> +			 */
> +			KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm);
> +
>   
>   			/*
>   			 * The iter must explicitly re-read the spte here
> @@ -1480,6 +1656,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp, union kvm_mm
>   
>   	sp->role = role;
>   	sp->spt = (void *)__get_free_page(gfp);
> +	if (kvm_mmu_page_role_is_private(role)) {
> +		if (kvm_alloc_private_spt_for_split(sp, gfp)) {
> +			free_page((unsigned long)sp->spt);
> +			sp->spt = NULL;
> +		}
> +	}
>   	if (!sp->spt) {
>   		kmem_cache_free(mmu_page_header_cache, sp);
>   		return NULL;
> @@ -1495,6 +1677,11 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>   	union kvm_mmu_page_role role = tdp_iter_child_role(iter);
>   	struct kvm_mmu_page *sp;
>   
> +	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
> +		   is_private_sptep(iter->sptep), kvm);
> +	/* TODO: Large page isn't supported for private SPTE yet. */
> +	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
> +
>   	/*
>   	 * Since we are allocating while under the MMU lock we have to be
>   	 * careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct
> @@ -1929,7 +2116,7 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
>   	if (WARN_ON_ONCE(kvm_gfn_shared_mask(vcpu->kvm)))
>   		return leaf;
>   
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>   		leaf = iter.level;
>   		sptes[leaf] = iter.old_spte;
>   	}
> @@ -1956,7 +2143,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
>   	gfn_t gfn = addr >> PAGE_SHIFT;
>   	tdp_ptep_t sptep = NULL;
>   
> -	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> +	/* fast page fault for private GPA isn't supported. */
> +	WARN_ON_ONCE(kvm_is_private_gpa(vcpu->kvm, addr));
> +
> +	tdp_mmu_for_each_pte(iter, mmu, false, gfn, gfn + 1) {
>   		*spte = iter.old_spte;
>   		sptep = iter.sptep;
>   	}
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index c98c7df449a8..695175c921a5 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -5,7 +5,7 @@
>   
>   #include <linux/kvm_host.h>
>   
> -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
> +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, bool private);
>   
>   __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
>   {
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index dda2f2ec4faa..8c996f40b544 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -211,6 +211,7 @@ struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)
>   
>   	return NULL;
>   }
> +EXPORT_SYMBOL_GPL(kvm_pfn_to_refcounted_page);
>   
>   /*
>    * Switches to specified vcpu, until a matching vcpu_put()
