kvm.vger.kernel.org archive mirror
* [PATCH v3 0/3] TDP MMU: several minor fixes or improvements
@ 2021-06-15  0:57 Kai Huang
  2021-06-15  0:57 ` [PATCH v3 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level() Kai Huang
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Kai Huang @ 2021-06-15  0:57 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

v2 -> v3:

 - Rebased to latest kvm/queue.
 - Added Sean's Reviewed-by for patch 2.

v1 -> v2:

 - Updated patch 2, using Sean's suggestion.
 - Updated patch 3, based on Ben's review.

v2:

https://lore.kernel.org/kvm/cover.1620343751.git.kai.huang@intel.com/

v1:

https://lore.kernel.org/kvm/cover.1620200410.git.kai.huang@intel.com/T/#mcc2e6ea6d9e3caec2bcc9e5f99cbbe2a8dd24145


Kai Huang (3):
  KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level()
  KVM: x86/mmu: Fix pf_fixed count in tdp_mmu_map_handle_target_level()
  KVM: x86/mmu: Fix TDP MMU page table level

 arch/x86/kvm/mmu/tdp_mmu.c | 16 ++++++++++------
 arch/x86/kvm/mmu/tdp_mmu.h |  2 +-
 2 files changed, 11 insertions(+), 7 deletions(-)

-- 
2.31.1


* [PATCH v3 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level()
  2021-06-15  0:57 [PATCH v3 0/3] TDP MMU: several minor fixes or improvements Kai Huang
@ 2021-06-15  0:57 ` Kai Huang
  2021-06-15  0:57 ` [PATCH v3 2/3] KVM: x86/mmu: Fix pf_fixed count " Kai Huang
  2021-06-15  0:57 ` [PATCH v3 3/3] KVM: x86/mmu: Fix TDP MMU page table level Kai Huang
  2 siblings, 0 replies; 5+ messages in thread
From: Kai Huang @ 2021-06-15  0:57 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

Currently tdp_mmu_map_handle_target_level() returns 0, which is
RET_PF_RETRY, when the page fault is actually fixed.  This makes
kvm_tdp_mmu_map() also return RET_PF_RETRY in this case, instead of
RET_PF_FIXED.  Fix this by initializing ret to RET_PF_FIXED.

Note that kvm_mmu_page_fault() resumes the guest on both RET_PF_RETRY
and RET_PF_FIXED, so in practice returning either value makes no
difference; this fix alone therefore does not need to go to stable
trees.
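
For reference, a minimal sketch of the RET_PF_* return codes as they
were defined in KVM's mmu_internal.h around this time (paraphrased;
only the fact that RET_PF_RETRY is the zero value matters here, the
rest is illustrative):

	enum {
		RET_PF_RETRY = 0,	/* let the CPU fault again on the address */
		RET_PF_EMULATE,		/* MMIO fault, emulate the instruction */
		RET_PF_INVALID,		/* the SPTE is invalid, take the slow path */
		RET_PF_FIXED,		/* the faulting entry has been fixed */
		RET_PF_SPURIOUS,	/* already fixed, e.g. by another vCPU */
	};

Because RET_PF_RETRY is the zero value, leaving ret initialized to 0
silently reports a fixed fault as a retry; starting from RET_PF_FIXED
makes the success path report the right code.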

Fixes: bb18842e2111 ("kvm: x86/mmu: Add TDP MMU PF handler")
Reviewed-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index cc13e001f3de..6c9c6917925a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -914,7 +914,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
 					  kvm_pfn_t pfn, bool prefault)
 {
 	u64 new_spte;
-	int ret = 0;
+	int ret = RET_PF_FIXED;
 	int make_spte_ret = 0;
 
 	if (unlikely(is_noslot_pfn(pfn)))
-- 
2.31.1


* [PATCH v3 2/3] KVM: x86/mmu: Fix pf_fixed count in tdp_mmu_map_handle_target_level()
  2021-06-15  0:57 [PATCH v3 0/3] TDP MMU: several minor fixes or improvements Kai Huang
  2021-06-15  0:57 ` [PATCH v3 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level() Kai Huang
@ 2021-06-15  0:57 ` Kai Huang
  2021-06-15  0:57 ` [PATCH v3 3/3] KVM: x86/mmu: Fix TDP MMU page table level Kai Huang
  2 siblings, 0 replies; 5+ messages in thread
From: Kai Huang @ 2021-06-15  0:57 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

Currently pf_fixed is not increased when prefault is true.  This is not
correct, since prefault here really means "async page fault completed".
In that case, the original page fault from the guest was morphed into
an async page fault and pf_fixed was not increased at that time.  So
when prefault indicates that the async page fault has completed,
pf_fixed should be increased.

Additionally, pf_fixed is currently increased even when the page fault
is spurious, whereas the legacy MMU only increases pf_fixed when the
page fault returns RET_PF_EMULATE or RET_PF_FIXED.

To fix both issues, increase pf_fixed whenever the return value is not
RET_PF_SPURIOUS (RET_PF_RETRY has already been ruled out by the time
this point is reached).

More information:
https://lore.kernel.org/kvm/cover.1620200410.git.kai.huang@intel.com/T/#mbb5f8083e58a2cd262231512b9211cbe70fc3bd5
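
For clarity, an annotated sketch of what the new check sees at the
accounting point (the set of possible values is taken from the
reasoning above, not from the full function):

	/*
	 * RET_PF_RETRY returns earlier and never reaches this point,
	 * so ret can only be one of:
	 *   RET_PF_EMULATE  - MMIO fault     -> counted, as in the legacy MMU
	 *   RET_PF_FIXED    - SPTE installed -> counted, as in the legacy MMU
	 *   RET_PF_SPURIOUS - already fixed  -> not counted
	 */
	if (ret != RET_PF_SPURIOUS)
		vcpu->stat.pf_fixed++;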

Fixes: bb18842e2111 ("kvm: x86/mmu: Add TDP MMU PF handler")
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6c9c6917925a..efb7503ed4d5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -951,7 +951,11 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
 				       rcu_dereference(iter->sptep));
 	}
 
-	if (!prefault)
+	/*
+	 * Increase pf_fixed in both RET_PF_EMULATE and RET_PF_FIXED to be
+	 * consistent with legacy MMU behavior.
+	 */
+	if (ret != RET_PF_SPURIOUS)
 		vcpu->stat.pf_fixed++;
 
 	return ret;
-- 
2.31.1


* [PATCH v3 3/3] KVM: x86/mmu: Fix TDP MMU page table level
  2021-06-15  0:57 [PATCH v3 0/3] TDP MMU: several minor fixes or improvements Kai Huang
  2021-06-15  0:57 ` [PATCH v3 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level() Kai Huang
  2021-06-15  0:57 ` [PATCH v3 2/3] KVM: x86/mmu: Fix pf_fixed count " Kai Huang
@ 2021-06-15  0:57 ` Kai Huang
  2021-06-17 18:27   ` Paolo Bonzini
  2 siblings, 1 reply; 5+ messages in thread
From: Kai Huang @ 2021-06-15  0:57 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

The TDP MMU iterator's level is identical to the page table's actual
level.  For instance, for the last level page table (whose entries
point to 4K pages), iter->level is 1 (PG_LEVEL_4K), and at the root
with 5-level paging, iter->level is mmu->shadow_root_level, which is 5.
However, struct kvm_mmu_page's level is currently not set correctly
when the page is allocated in kvm_tdp_mmu_map().  When the iterator
hits a non-present SPTE and needs to allocate a new child page table,
it currently uses iter->level, which is the level of the page table
that the non-present SPTE belongs to.  This results in the new struct
kvm_mmu_page's level always being its parent's level (except for the
root table, whose level is initialized explicitly using
mmu->shadow_root_level).

This is subtly wrong, and inconsistent with the existing non-TDP-MMU
code.  Fortunately sp->role.level is only used in
handle_removed_tdp_mmu_page() and kvm_tdp_mmu_zap_sp(), and both are
already aware of this and behave correctly.  However, to make it
consistent with the legacy MMU code (and to fix the issue that both the
root page table and its child page table have shadow_root_level), use
iter->level - 1 in kvm_tdp_mmu_map(), and adjust
handle_removed_tdp_mmu_page() and kvm_tdp_mmu_zap_sp() accordingly.
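
As a worked example (a hypothetical walk; 4-level paging assumed): when
a fault on a 4K page finds the PG_LEVEL_2M entry non-present,
iter->level is 2 and the child being allocated is a level-1
(PG_LEVEL_4K) table of 512 PTEs:

	/*
	 * Before: sp->role.level = iter->level     = 2  (the parent's level)
	 * After:  sp->role.level = iter->level - 1 = 1  (the table's own
	 *         level, matching the legacy MMU convention)
	 *
	 * With the new convention, each SPTE in the table covers
	 * KVM_PAGES_PER_HPAGE(level) pages and the whole table covers
	 * KVM_PAGES_PER_HPAGE(level + 1) pages, which is why the patch
	 * drops the per-entry "- 1" adjustments in
	 * handle_removed_tdp_mmu_page() and uses "level + 1" where a whole
	 * table's range is needed (the TLB flush and kvm_tdp_mmu_zap_sp()).
	 */
	sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);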

Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index efb7503ed4d5..4d658882a4d8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -337,7 +337,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
 		sptep = rcu_dereference(pt) + i;
-		gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
+		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
 
 		if (shared) {
 			/*
@@ -379,12 +379,12 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 			WRITE_ONCE(*sptep, REMOVED_SPTE);
 		}
 		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
-				    old_child_spte, REMOVED_SPTE, level - 1,
+				    old_child_spte, REMOVED_SPTE, level,
 				    shared);
 	}
 
 	kvm_flush_remote_tlbs_with_address(kvm, gfn,
-					   KVM_PAGES_PER_HPAGE(level));
+					   KVM_PAGES_PER_HPAGE(level + 1));
 
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
 }
@@ -1030,7 +1030,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 			if (is_removed_spte(iter.old_spte))
 				break;
 
-			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
+			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);
 			child_pt = sp->spt;
 
 			new_spte = make_nonleaf_spte(child_pt,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index f7a7990da11d..408aa49731d5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -31,7 +31,7 @@ static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id,
 }
 static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
+	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level + 1);
 
 	/*
 	 * Don't allow yielding, as the caller may have a flush pending.  Note,
-- 
2.31.1


* Re: [PATCH v3 3/3] KVM: x86/mmu: Fix TDP MMU page table level
  2021-06-15  0:57 ` [PATCH v3 3/3] KVM: x86/mmu: Fix TDP MMU page table level Kai Huang
@ 2021-06-17 18:27   ` Paolo Bonzini
  0 siblings, 0 replies; 5+ messages in thread
From: Paolo Bonzini @ 2021-06-17 18:27 UTC (permalink / raw)
  To: Kai Huang, kvm; +Cc: bgardon, seanjc, vkuznets, wanpengli, jmattson, joro

On 15/06/21 02:57, Kai Huang wrote:

Queued, thanks.

Paolo

