kvm.vger.kernel.org archive mirror
* [PATCH v2 0/3] TDP MMU: several minor fixes or improvements
@ 2021-05-06 23:33 Kai Huang
  2021-05-06 23:34 ` [PATCH v2 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level() Kai Huang
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Kai Huang @ 2021-05-06 23:33 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

v1:

https://lore.kernel.org/kvm/cover.1620200410.git.kai.huang@intel.com/T/#mcc2e6ea6d9e3caec2bcc9e5f99cbbe2a8dd24145

v1 -> v2:
 - Update patch 2, using Sean's suggestion.
 - Update patch 3, based on Ben's review.

Kai Huang (3):
  KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level()
  KVM: x86/mmu: Fix pf_fixed count in tdp_mmu_map_handle_target_level()
  KVM: x86/mmu: Fix TDP MMU page table level

 arch/x86/kvm/mmu/tdp_mmu.c | 16 ++++++++++------
 arch/x86/kvm/mmu/tdp_mmu.h |  2 +-
 2 files changed, 11 insertions(+), 7 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v2 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level()
  2021-05-06 23:33 [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
@ 2021-05-06 23:34 ` Kai Huang
  2021-05-06 23:34 ` [PATCH v2 2/3] KVM: x86/mmu: Fix pf_fixed count " Kai Huang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Kai Huang @ 2021-05-06 23:34 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

Currently tdp_mmu_map_handle_target_level() returns 0, which is
RET_PF_RETRY, when the page fault is actually fixed.  This makes
kvm_tdp_mmu_map() also return RET_PF_RETRY in this case, instead of
RET_PF_FIXED.  Fix this by initializing ret to RET_PF_FIXED.

Note that kvm_mmu_page_fault() resumes the guest on both RET_PF_RETRY
and RET_PF_FIXED, so in practice returning either value makes no
difference, and this fix alone does not need to go to the stable tree.
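
For reference, the problem above hinges on RET_PF_RETRY being 0.  The
standalone sketch below models the failure mode; the enum is a
paraphrase of the RET_PF_* codes (its exact members and ordering are
assumed here), not a copy of arch/x86/kvm/mmu/mmu_internal.h.

#include <stdio.h>

/* Paraphrased page-fault return codes (ordering assumed, not authoritative). */
enum {
	RET_PF_RETRY = 0,	/* let the vCPU fault again on the address   */
	RET_PF_EMULATE,		/* the faulting instruction must be emulated */
	RET_PF_INVALID,		/* the SPTE is invalid, run the full handler */
	RET_PF_FIXED,		/* the fault has been fixed                  */
	RET_PF_SPURIOUS,	/* the fault was already fixed elsewhere     */
};

/* Old behavior: a successful path that never assigns ret reports "retry". */
static int handle_target_level_old(void)
{
	int ret = 0;		/* bug: 0 happens to be RET_PF_RETRY */
	/* ... SPTE is installed, nothing overwrites ret on success ... */
	return ret;
}

/* New behavior: default to "fixed"; other paths overwrite ret explicitly. */
static int handle_target_level_new(void)
{
	int ret = RET_PF_FIXED;
	/* ... spurious/emulate paths would overwrite ret ... */
	return ret;
}

int main(void)
{
	printf("old: %d (RET_PF_RETRY)\n", handle_target_level_old());
	printf("new: %d (RET_PF_FIXED)\n", handle_target_level_new());
	return 0;
}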

Fixes: bb18842e2111 ("kvm: x86/mmu: Add TDP MMU PF handler")
Reviewed-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 83cbdbe5de5a..ed85b09f0119 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -905,7 +905,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
 					  kvm_pfn_t pfn, bool prefault)
 {
 	u64 new_spte;
-	int ret = 0;
+	int ret = RET_PF_FIXED;
 	int make_spte_ret = 0;
 
 	if (unlikely(is_noslot_pfn(pfn)))
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v2 2/3] KVM: x86/mmu: Fix pf_fixed count in tdp_mmu_map_handle_target_level()
  2021-05-06 23:33 [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
  2021-05-06 23:34 ` [PATCH v2 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level() Kai Huang
@ 2021-05-06 23:34 ` Kai Huang
  2021-05-07 17:23   ` Sean Christopherson
  2021-05-06 23:34 ` [PATCH v2 3/3] KVM: x86/mmu: Fix TDP MMU page table level Kai Huang
  2021-05-27  2:03 ` [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
  3 siblings, 1 reply; 7+ messages in thread
From: Kai Huang @ 2021-05-06 23:34 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

Currently pf_fixed is not increased when prefault is true.  This is not
correct, since prefault here really means "async page fault completed".
In that case, the original page fault from the guest was morphed into an
async page fault and pf_fixed was not increased at that point.  So when
prefault indicates the async page fault has completed, pf_fixed should
be increased.

Additionally, pf_fixed is currently increased even when the page fault
is spurious, whereas the legacy MMU only increases pf_fixed when the
page fault returns RET_PF_EMULATE or RET_PF_FIXED.

To fix both issues, increase pf_fixed whenever the return value is not
RET_PF_SPURIOUS (RET_PF_RETRY has already been ruled out by the time
this point is reached).
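
As an illustration only (a simplified sketch, not the kernel code),
the accounting change boils down to the following; struct vcpu_stat
and the RET_PF_* values here are stand-ins for the real vcpu->stat and
return-code definitions.

#include <stdbool.h>
#include <stdio.h>

struct vcpu_stat { unsigned long pf_fixed; };	/* stand-in for vcpu->stat */

enum { RET_PF_RETRY, RET_PF_EMULATE, RET_PF_INVALID, RET_PF_FIXED, RET_PF_SPURIOUS };

/* Old behavior: skip async-PF completions, but count spurious faults. */
static void account_pf_fixed_old(struct vcpu_stat *stat, int ret, bool prefault)
{
	(void)ret;
	if (!prefault)
		stat->pf_fixed++;
}

/*
 * New behavior: count every non-spurious fault.  By this point
 * RET_PF_RETRY has been ruled out, so "not spurious" means exactly
 * RET_PF_EMULATE or RET_PF_FIXED, matching the legacy MMU, and the
 * prefault flag no longer matters for the statistic.
 */
static void account_pf_fixed_new(struct vcpu_stat *stat, int ret, bool prefault)
{
	(void)prefault;
	if (ret != RET_PF_SPURIOUS)
		stat->pf_fixed++;
}

int main(void)
{
	struct vcpu_stat before = { 0 }, after = { 0 };

	/* Async-PF completion that fixed the fault: only the new logic counts it. */
	account_pf_fixed_old(&before, RET_PF_FIXED, true);
	account_pf_fixed_new(&after, RET_PF_FIXED, true);

	/* Spurious fault: only the old logic (incorrectly) counts it. */
	account_pf_fixed_old(&before, RET_PF_SPURIOUS, false);
	account_pf_fixed_new(&after, RET_PF_SPURIOUS, false);

	printf("old pf_fixed=%lu, new pf_fixed=%lu\n", before.pf_fixed, after.pf_fixed);
	return 0;
}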

More information:
https://lore.kernel.org/kvm/cover.1620200410.git.kai.huang@intel.com/T/#mbb5f8083e58a2cd262231512b9211cbe70fc3bd5

Fixes: bb18842e2111 ("kvm: x86/mmu: Add TDP MMU PF handler")
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ed85b09f0119..c389d20418e3 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -942,7 +942,11 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
 				       rcu_dereference(iter->sptep));
 	}
 
-	if (!prefault)
+	/*
+	 * Increase pf_fixed in both RET_PF_EMULATE and RET_PF_FIXED to be
+	 * consistent with legacy MMU behavior.
+	 */
+	if (ret != RET_PF_SPURIOUS)
 		vcpu->stat.pf_fixed++;
 
 	return ret;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v2 3/3] KVM: x86/mmu: Fix TDP MMU page table level
  2021-05-06 23:33 [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
  2021-05-06 23:34 ` [PATCH v2 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level() Kai Huang
  2021-05-06 23:34 ` [PATCH v2 2/3] KVM: x86/mmu: Fix pf_fixed count " Kai Huang
@ 2021-05-06 23:34 ` Kai Huang
  2021-05-06 23:46   ` Kai Huang
  2021-05-27  2:03 ` [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
  3 siblings, 1 reply; 7+ messages in thread
From: Kai Huang @ 2021-05-06 23:34 UTC (permalink / raw)
  To: kvm
  Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro,
	Kai Huang

The TDP MMU iterator's level is identical to the page table's actual
level.  For instance, for the last level page table (whose entries
point to 4K pages), iter->level is 1 (PG_LEVEL_4K), and for the root
table with 5-level paging, iter->level is mmu->shadow_root_level,
which is 5.  However, struct kvm_mmu_page's level is currently not set
correctly when it is allocated in kvm_tdp_mmu_map().  When the
iterator hits a non-present SPTE and needs to allocate a new child
page table, it currently passes iter->level, which is the level of the
page table the non-present SPTE belongs to.  This results in struct
kvm_mmu_page's level always being its parent's level (except for the
root table, whose level is initialized explicitly using
mmu->shadow_root_level).

This is not quite right, and is inconsistent with the existing non-TDP
MMU code.  Fortunately sp->role.level is only used in
handle_removed_tdp_mmu_page() and kvm_tdp_mmu_zap_sp(), which are
already aware of this and behave correctly.  Still, to make it
consistent with the legacy MMU code (and to fix the issue that both
the root page table and its child page table have shadow_root_level),
use iter->level - 1 in kvm_tdp_mmu_map(), and adjust
handle_removed_tdp_mmu_page() and kvm_tdp_mmu_zap_sp() accordingly.
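
To illustrate the convention this patch establishes, here is a
simplified, standalone sketch (PAGES_PER_HPAGE() is a paraphrase of
KVM_PAGES_PER_HPAGE() and the constants are assumptions, not the
kernel definitions): the table containing the faulting non-present
SPTE sits at iter->level, the child table installed into that SPTE is
one level below it, and one entry of a table at a given level maps
KVM_PAGES_PER_HPAGE(level) 4K pages, which is why the callers now use
level as-is for a single entry and level + 1 for the whole table.

#include <stdio.h>

#define PG_LEVEL_4K	1
#define PT64_LEVEL_BITS	9
/* Paraphrase of KVM_PAGES_PER_HPAGE(): 4K pages mapped by one entry at 'level'. */
#define PAGES_PER_HPAGE(level)	(1ULL << (((level) - PG_LEVEL_4K) * PT64_LEVEL_BITS))

int main(void)
{
	int iter_level = 3;			/* table holding the non-present SPTE */
	int child_level = iter_level - 1;	/* level recorded in the new sp->role */

	/* With the fix, sp->role.level describes the child table itself ...        */
	printf("child table level: %d\n", child_level);
	/* ... so callers use 'level' (not 'level - 1') for one entry's reach, and  */
	/* 'level + 1' when flushing or zapping the range of the whole table.       */
	printf("pages mapped by one child entry: %llu\n", PAGES_PER_HPAGE(child_level));
	printf("pages mapped by the whole child table: %llu\n", PAGES_PER_HPAGE(child_level + 1));
	return 0;
}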

Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c389d20418e3..a1db99d10680 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -335,7 +335,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
 		sptep = rcu_dereference(pt) + i;
-		gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
+		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
 
 		if (shared) {
 			/*
@@ -377,12 +377,12 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 			WRITE_ONCE(*sptep, REMOVED_SPTE);
 		}
 		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
-				    old_child_spte, REMOVED_SPTE, level - 1,
+				    old_child_spte, REMOVED_SPTE, level,
 				    shared);
 	}
 
 	kvm_flush_remote_tlbs_with_address(kvm, gfn,
-					   KVM_PAGES_PER_HPAGE(level));
+					   KVM_PAGES_PER_HPAGE(level + 1));
 
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
 }
@@ -1013,7 +1013,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		}
 
 		if (!is_shadow_present_pte(iter.old_spte)) {
-			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
+			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);
 			child_pt = sp->spt;
 
 			new_spte = make_nonleaf_spte(child_pt,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 5fdf63090451..7f9974c5d0b4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -31,7 +31,7 @@ static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id,
 }
 static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
+	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level + 1);
 
 	/*
 	 * Don't allow yielding, as the caller may have a flush pending.  Note,
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v2 3/3] KVM: x86/mmu: Fix TDP MMU page table level
  2021-05-06 23:34 ` [PATCH v2 3/3] KVM: x86/mmu: Fix TDP MMU page table level Kai Huang
@ 2021-05-06 23:46   ` Kai Huang
  0 siblings, 0 replies; 7+ messages in thread
From: Kai Huang @ 2021-05-06 23:46 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro

Oops, this patch has a merge conflict with the latest kvm/queue due to
commit ff76d506030da ("KVM: x86/mmu: Avoid unnecessary page table
allocation in kvm_tdp_mmu_map()"), but it is very easy to resolve.

Sorry that I forgot to git pull before sending these :)

On Fri, 2021-05-07 at 11:34 +1200, Kai Huang wrote:
> The TDP MMU iterator's level is identical to the page table's actual
> level.  For instance, for the last level page table (whose entries
> point to 4K pages), iter->level is 1 (PG_LEVEL_4K), and for the root
> table with 5-level paging, iter->level is mmu->shadow_root_level,
> which is 5.  However, struct kvm_mmu_page's level is currently not set
> correctly when it is allocated in kvm_tdp_mmu_map().  When the
> iterator hits a non-present SPTE and needs to allocate a new child
> page table, it currently passes iter->level, which is the level of the
> page table the non-present SPTE belongs to.  This results in struct
> kvm_mmu_page's level always being its parent's level (except for the
> root table, whose level is initialized explicitly using
> mmu->shadow_root_level).
> 
> This is not quite right, and is inconsistent with the existing non-TDP
> MMU code.  Fortunately sp->role.level is only used in
> handle_removed_tdp_mmu_page() and kvm_tdp_mmu_zap_sp(), which are
> already aware of this and behave correctly.  Still, to make it
> consistent with the legacy MMU code (and to fix the issue that both
> the root page table and its child page table have shadow_root_level),
> use iter->level - 1 in kvm_tdp_mmu_map(), and adjust
> handle_removed_tdp_mmu_page() and kvm_tdp_mmu_zap_sp() accordingly.
> 
> Reviewed-by: Ben Gardon <bgardon@google.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++----
>  arch/x86/kvm/mmu/tdp_mmu.h | 2 +-
>  2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index c389d20418e3..a1db99d10680 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -335,7 +335,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
>  
>  	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
>  		sptep = rcu_dereference(pt) + i;
> -		gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
> +		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
>  
>  		if (shared) {
>  			/*
> @@ -377,12 +377,12 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
>  			WRITE_ONCE(*sptep, REMOVED_SPTE);
>  		}
>  		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
> -				    old_child_spte, REMOVED_SPTE, level - 1,
> +				    old_child_spte, REMOVED_SPTE, level,
>  				    shared);
>  	}
>  
>  	kvm_flush_remote_tlbs_with_address(kvm, gfn,
> -					   KVM_PAGES_PER_HPAGE(level));
> +					   KVM_PAGES_PER_HPAGE(level + 1));
>  
>  	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
>  }
> @@ -1013,7 +1013,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  		}
>  
>  		if (!is_shadow_present_pte(iter.old_spte)) {
> -			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
> +			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);
>  			child_pt = sp->spt;
>  
>  			new_spte = make_nonleaf_spte(child_pt,
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index 5fdf63090451..7f9974c5d0b4 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -31,7 +31,7 @@ static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id,
>  }
>  static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
> +	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level + 1);
>  
>  	/*
>  	 * Don't allow yielding, as the caller may have a flush pending.  Note,



^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2 2/3] KVM: x86/mmu: Fix pf_fixed count in tdp_mmu_map_handle_target_level()
  2021-05-06 23:34 ` [PATCH v2 2/3] KVM: x86/mmu: Fix pf_fixed count " Kai Huang
@ 2021-05-07 17:23   ` Sean Christopherson
  0 siblings, 0 replies; 7+ messages in thread
From: Sean Christopherson @ 2021-05-07 17:23 UTC (permalink / raw)
  To: Kai Huang; +Cc: kvm, pbonzini, bgardon, vkuznets, wanpengli, jmattson, joro

On Fri, May 07, 2021, Kai Huang wrote:
> Currently pf_fixed is not increased when prefault is true.  This is not
> correct, since prefault here really means "async page fault completed".
> In that case, the original page fault from the guest was morphed into an
> async page fault and pf_fixed was not increased at that point.  So when
> prefault indicates the async page fault has completed, pf_fixed should
> be increased.
> 
> Additionally, pf_fixed is currently increased even when the page fault
> is spurious, whereas the legacy MMU only increases pf_fixed when the
> page fault returns RET_PF_EMULATE or RET_PF_FIXED.
> 
> To fix both issues, increase pf_fixed whenever the return value is not
> RET_PF_SPURIOUS (RET_PF_RETRY has already been ruled out by the time
> this point is reached).
> 
> More information:
> https://lore.kernel.org/kvm/cover.1620200410.git.kai.huang@intel.com/T/#mbb5f8083e58a2cd262231512b9211cbe70fc3bd5
> 
> Fixes: bb18842e2111 ("kvm: x86/mmu: Add TDP MMU PF handler")
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> ---

Reviewed-by: Sean Christopherson <seanjc@google.com>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2 0/3] TDP MMU: several minor fixes or improvements
  2021-05-06 23:33 [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
                   ` (2 preceding siblings ...)
  2021-05-06 23:34 ` [PATCH v2 3/3] KVM: x86/mmu: Fix TDP MMU page table level Kai Huang
@ 2021-05-27  2:03 ` Kai Huang
  3 siblings, 0 replies; 7+ messages in thread
From: Kai Huang @ 2021-05-27  2:03 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, bgardon, seanjc, vkuznets, wanpengli, jmattson, joro

On Fri, 2021-05-07 at 11:33 +1200, Kai Huang wrote:
> v1:
> 
> https://lore.kernel.org/kvm/cover.1620200410.git.kai.huang@intel.com/T/#mcc2e6ea6d9e3caec2bcc9e5f99cbbe2a8dd24145
> 
> v1 -> v2:
>  - Update patch 2, using Sean's suggestion.
>  - Update patch 3, based on Ben's review.
> 
> Kai Huang (3):
>   KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level()
>   KVM: x86/mmu: Fix pf_fixed count in tdp_mmu_map_handle_target_level()
>   KVM: x86/mmu: Fix TDP MMU page table level
> 
>  arch/x86/kvm/mmu/tdp_mmu.c | 16 ++++++++++------
>  arch/x86/kvm/mmu/tdp_mmu.h |  2 +-
>  2 files changed, 11 insertions(+), 7 deletions(-)
> 

Hi Paolo,

Kindly ping.


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2021-05-27  2:03 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-06 23:33 [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
2021-05-06 23:34 ` [PATCH v2 1/3] KVM: x86/mmu: Fix return value in tdp_mmu_map_handle_target_level() Kai Huang
2021-05-06 23:34 ` [PATCH v2 2/3] KVM: x86/mmu: Fix pf_fixed count " Kai Huang
2021-05-07 17:23   ` Sean Christopherson
2021-05-06 23:34 ` [PATCH v2 3/3] KVM: x86/mmu: Fix TDP MMU page table level Kai Huang
2021-05-06 23:46   ` Kai Huang
2021-05-27  2:03 ` [PATCH v2 0/3] TDP MMU: several minor fixes or improvements Kai Huang
