* [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes
@ 2020-09-23 18:37 Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 1/8] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages Sean Christopherson
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Sean Christopherson @ 2020-09-23 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Junaid Shahid

Patch 1 is a minor fix for a very theoretical bug where KVM could skip
the final "commit zap" when recovering shadow pages for the NX huge
page mitigation.

Patch 2 is cleanup that's made possible by patch 1.

Patches 3-5 are the main course and fix bugs in the NX huge page
accounting where shadow pages are incorrectly added to the list of
disallowed huge pages.  KVM doesn't check whether the page could
actually have been a large page before adding it to the disallowed
list.  This results in what are effectively spurious zaps.  The
biggest issue is likely with shadow pages in the upper levels, i.e.
levels 3 and 4, as they are either unlikely to be huge (1gb) or flat
out can't be huge (512gb).
And because of the way KVM zaps, the upper levels will be zapped first,
i.e. KVM is likely zapping and rebuilding a decent number of its shadow
pages for zero benefit.
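
For reference, the crux of the fix in patch 5 is gating the accounting
on the captured request level; the snippet below is lifted from that
patch (names as in __direct_map()):

	if (is_tdp && huge_page_disallowed && req_level >= it.level)
		account_huge_nx_page(vcpu->kvm, sp);

i.e. a shadow page is accounted only when the walk is at or below the
level at which a huge page was actually denied.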

Ideally, patches 3-5 would be a single patch to ease backporting.  In the
end, I decided the change is probably not suitable for stable as at worst
it creates an infrequent performance spike (assuming the admin isn't going
crazy with the recovery frequency), and it's far from straightforward
or risk-free.  Cramming everything into a single patch was a mess.

Patches 6-8 are cleanups in related code.  The 'hlevel' name in particular
has been on my todo list for a while.

v2:
  - Rebased to kvm/queue, commit e1ba1a15af73 ("KVM: SVM: Enable INVPCID
    feature on AMD").

Sean Christopherson (8):
  KVM: x86/mmu: Commit zap of remaining invalid pages when recovering
    lpages
  KVM: x86/mmu: Refactor the zap loop for recovering NX lpages
  KVM: x86/mmu: Move "huge page disallowed" calculation into mapping
    helpers
  KVM: x86/mmu: Capture requested page level before NX huge page
    workaround
  KVM: x86/mmu: Account NX huge page disallowed iff huge page was
    requested
  KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch)
  KVM: x86/mmu: Hoist ITLB multi-hit workaround check up a level
  KVM: x86/mmu: Track write/user faults using bools

 arch/x86/kvm/mmu/mmu.c         | 58 +++++++++++++++++++++-------------
 arch/x86/kvm/mmu/paging_tmpl.h | 39 ++++++++++++-----------
 2 files changed, 57 insertions(+), 40 deletions(-)

-- 
2.28.0



* [PATCH v2 1/8] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
  2020-09-23 18:37 [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Sean Christopherson
@ 2020-09-23 18:37 ` Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 4/8] KVM: x86/mmu: Capture requested page level before NX huge page workaround Sean Christopherson
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2020-09-23 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Junaid Shahid

Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock when rescheduling, it's
possible for a different thread to empty lpage_disallowed_mmu_pages
before to_zap reaches zero, even though to_zap is derived from the
number of disallowed lpages.

Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid <junaids@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
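A simplified sketch of the loop in question (paraphrased, not the
exact kernel code; stats and comments elided), showing how pages can
be left on invalid_list when the loop breaks early:

	while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
		sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
				      struct kvm_mmu_page, lpage_disallowed_link);
		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

		/* Zaps are committed only on this path... */
		if (!--to_zap || need_resched() ||
		    spin_needbreak(&kvm->mmu_lock)) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			if (to_zap)
				cond_resched_lock(&kvm->mmu_lock);
		}
		/*
		 * ...so if another thread empties the list while mmu_lock
		 * is dropped, the loop exits with prepared-but-uncommitted
		 * pages still on invalid_list.
		 */
	}
	kvm_mmu_commit_zap_page(kvm, &invalid_list);	/* <- this patch */
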
 arch/x86/kvm/mmu/mmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 76c5826e29a2..dde6503d8926 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6376,6 +6376,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 				cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);
-- 
2.28.0



* [PATCH v2 4/8] KVM: x86/mmu: Capture requested page level before NX huge page workaround
  2020-09-23 18:37 [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 1/8] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages Sean Christopherson
@ 2020-09-23 18:37 ` Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 5/8] KVM: x86/mmu: Account NX huge page disallowed iff huge page was requested Sean Christopherson
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2020-09-23 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Junaid Shahid

Apply the "huge page disallowed" adjustment of the max level only after
capturing the original requested level.  The requested level will be
used in a future patch to skip adding pages to the list of disallowed
huge pages if a huge page wasn't possible anyway, e.g. if the page
isn't mapped as a huge page in the host.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
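An illustrative example (hypothetical values; assumes the workaround
is enabled and the fault is an instruction fetch, so
huge_page_disallowed is true) of what the new out-param conveys when
the host maps the gfn with a 2MB page:

	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
					huge_page_disallowed, &req_level);
	/*
	 * req_level == PG_LEVEL_2M: what would have been used
	 * level    == PG_LEVEL_4K: what the workaround forces
	 */
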
 arch/x86/kvm/mmu/mmu.c         | 22 +++++++++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h |  8 +++-----
 2 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index feb326e79b7b..39bc9bc1ac2f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3240,7 +3240,8 @@ static int host_pfn_mapping_level(struct kvm_vcpu *vcpu, gfn_t gfn,
 }
 
 static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
-				   int max_level, kvm_pfn_t *pfnp)
+				   int max_level, kvm_pfn_t *pfnp,
+				   bool huge_page_disallowed, int *req_level)
 {
 	struct kvm_memory_slot *slot;
 	struct kvm_lpage_info *linfo;
@@ -3248,6 +3249,8 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	kvm_pfn_t mask;
 	int level;
 
+	*req_level = PG_LEVEL_4K;
+
 	if (unlikely(max_level == PG_LEVEL_4K))
 		return PG_LEVEL_4K;
 
@@ -3272,7 +3275,14 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	if (level == PG_LEVEL_4K)
 		return level;
 
-	level = min(level, max_level);
+	*req_level = level = min(level, max_level);
+
+	/*
+	 * Enforce the iTLB multihit workaround after capturing the requested
+	 * level, which will be used to do precise, accurate accounting.
+	 */
+	if (huge_page_disallowed)
+		return PG_LEVEL_4K;
 
 	/*
 	 * mmu_notifier_retry() was successful and mmu_lock is held, so
@@ -3318,17 +3328,15 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
-	int level, ret;
+	int level, req_level, ret;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	gfn_t base_gfn = gfn;
 
 	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
 		return RET_PF_RETRY;
 
-	if (huge_page_disallowed)
-		max_level = PG_LEVEL_4K;
-
-	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn);
+	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
+					huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(gpa, level, pfn);
 	for_each_shadow_entry(vcpu, gpa, it) {
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 3998ba0daf2e..e88d2acfd805 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -636,7 +636,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
 	unsigned direct_access, access = gw->pt_access;
-	int top_level, hlevel, ret;
+	int top_level, hlevel, req_level, ret;
 	gfn_t base_gfn = gw->gfn;
 
 	direct_access = gw->pte_access;
@@ -682,10 +682,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			link_shadow_page(vcpu, it.sptep, sp);
 	}
 
-	if (huge_page_disallowed)
-		max_level = PG_LEVEL_4K;
-
-	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn);
+	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
+					 huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
 
-- 
2.28.0



* [PATCH v2 5/8] KVM: x86/mmu: Account NX huge page disallowed iff huge page was requested
  2020-09-23 18:37 [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 1/8] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 4/8] KVM: x86/mmu: Capture requested page level before NX huge page workaround Sean Christopherson
@ 2020-09-23 18:37 ` Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 6/8] KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch) Sean Christopherson
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2020-09-23 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Junaid Shahid

Condition the accounting of a disallowed huge NX page on the original
requested level of the page being greater than the current iterator
level.  This does two things: accounts the page if and only if a huge
page was actually disallowed, and accounts the shadow page if and only
if it was the level at which the huge page was disallowed.  For the
latter case, the previous logic would account all shadow pages used to
create the translation for the forced small page, e.g. even PML4, which
can't be a huge page on current hardware, would be accounted as having
been a disallowed huge page when using 5-level EPT.

The overzealous accounting is purely a performance issue, i.e. the
recovery thread will spuriously zap shadow pages, but otherwise the bad
behavior is harmless.

Cc: Junaid Shahid <junaids@google.com>
Fixes: b8e8c8303ff28 ("kvm: mmu: ITLB_MULTIHIT mitigation")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
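A worked example (hypothetical, TDP with 4-level paging and
req_level == PG_LEVEL_2M): as the walk descends, it.level takes the
values 4, 3 and 2, but the new check

	if (is_tdp && huge_page_disallowed && req_level >= it.level)
		account_huge_nx_page(vcpu->kvm, sp);

passes only at it.level == 2, i.e. only the page table standing in
for the denied 2MB page is accounted, not the PML4 or PDPT shadow
pages.
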
 arch/x86/kvm/mmu/mmu.c         | 3 ++-
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 39bc9bc1ac2f..0957795be384 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3356,7 +3356,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 					      it.level - 1, true, ACC_ALL);
 
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (is_tdp && huge_page_disallowed)
+			if (is_tdp && huge_page_disallowed &&
+			    req_level >= it.level)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e88d2acfd805..7f00f266c75b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -708,7 +708,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			sp = kvm_mmu_get_page(vcpu, base_gfn, addr,
 					      it.level - 1, true, direct_access);
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (huge_page_disallowed)
+			if (huge_page_disallowed && req_level >= it.level)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}
-- 
2.28.0



* [PATCH v2 6/8] KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch)
  2020-09-23 18:37 [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Sean Christopherson
                   ` (2 preceding siblings ...)
  2020-09-23 18:37 ` [PATCH v2 5/8] KVM: x86/mmu: Account NX huge page disallowed iff huge page was requested Sean Christopherson
@ 2020-09-23 18:37 ` Sean Christopherson
  2020-09-25 21:25   ` Paolo Bonzini
  2020-09-23 18:37 ` [PATCH v2 7/8] KVM: x86/mmu: Hoist ITLB multi-hit workaround check up a level Sean Christopherson
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2020-09-23 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Junaid Shahid

Rename 'hlevel', which presumably stands for 'host level', to simply
'level' in FNAME(fetch).  The variable hasn't tracked the host level for
quite some time.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/paging_tmpl.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7f00f266c75b..26dde437abc0 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -636,7 +636,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
 	unsigned direct_access, access = gw->pt_access;
-	int top_level, hlevel, req_level, ret;
+	int top_level, level, req_level, ret;
 	gfn_t base_gfn = gw->gfn;
 
 	direct_access = gw->pte_access;
@@ -682,8 +682,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			link_shadow_page(vcpu, it.sptep, sp);
 	}
 
-	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
-					 huge_page_disallowed, &req_level);
+	level = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
+					huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
 
@@ -694,10 +694,10 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		 * We cannot overwrite existing page tables with an NX
 		 * large page, as the leaf could be executable.
 		 */
-		disallowed_hugepage_adjust(it, gw->gfn, &pfn, &hlevel);
+		disallowed_hugepage_adjust(it, gw->gfn, &pfn, &level);
 
 		base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
-		if (it.level == hlevel)
+		if (it.level == level)
 			break;
 
 		validate_direct_spte(vcpu, it.sptep, direct_access);
-- 
2.28.0



* [PATCH v2 7/8] KVM: x86/mmu: Hoist ITLB multi-hit workaround check up a level
  2020-09-23 18:37 [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Sean Christopherson
                   ` (3 preceding siblings ...)
  2020-09-23 18:37 ` [PATCH v2 6/8] KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch) Sean Christopherson
@ 2020-09-23 18:37 ` Sean Christopherson
  2020-09-23 18:37 ` [PATCH v2 8/8] KVM: x86/mmu: Track write/user faults using bools Sean Christopherson
  2020-09-25 21:27 ` [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Paolo Bonzini
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2020-09-23 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Junaid Shahid

Move the "ITLB multi-hit workaround enabled" check into the callers of
disallowed_hugepage_adjust() to make it more obvious that the helper is
specific to the workaround, and to be consistent with the accounting,
i.e. account_huge_nx_page() is called if and only if the workaround is
enabled.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
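The resulting caller-side pattern, with the workaround check sitting
next to the accounting it pairs with (condensed from __direct_map()
as it looks after this series):

	if (nx_huge_page_workaround_enabled)
		disallowed_hugepage_adjust(it, gfn, &pfn, &level);
	...
	if (is_tdp && huge_page_disallowed && req_level >= it.level)
		account_huge_nx_page(vcpu->kvm, sp);
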
 arch/x86/kvm/mmu/mmu.c         | 4 ++--
 arch/x86/kvm/mmu/paging_tmpl.h | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0957795be384..fbee958927ce 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3302,7 +3302,6 @@ static void disallowed_hugepage_adjust(struct kvm_shadow_walk_iterator it,
 	u64 spte = *it.sptep;
 
 	if (it.level == level && level > PG_LEVEL_4K &&
-	    is_nx_huge_page_enabled() &&
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte)) {
 		/*
@@ -3344,7 +3343,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		 * We cannot overwrite existing page tables with an NX
 		 * large page, as the leaf could be executable.
 		 */
-		disallowed_hugepage_adjust(it, gfn, &pfn, &level);
+		if (nx_huge_page_workaround_enabled)
+			disallowed_hugepage_adjust(it, gfn, &pfn, &level);
 
 		base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
 		if (it.level == level)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 26dde437abc0..84ea1094fbe2 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -694,7 +694,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		 * We cannot overwrite existing page tables with an NX
 		 * large page, as the leaf could be executable.
 		 */
-		disallowed_hugepage_adjust(it, gw->gfn, &pfn, &level);
+		if (nx_huge_page_workaround_enabled)
+			disallowed_hugepage_adjust(it, gw->gfn, &pfn, &level);
 
 		base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
 		if (it.level == level)
-- 
2.28.0



* [PATCH v2 8/8] KVM: x86/mmu: Track write/user faults using bools
  2020-09-23 18:37 [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Sean Christopherson
                   ` (4 preceding siblings ...)
  2020-09-23 18:37 ` [PATCH v2 7/8] KVM: x86/mmu: Hoist ITLB multi-hit workaround check up a level Sean Christopherson
@ 2020-09-23 18:37 ` Sean Christopherson
  2020-09-25 21:27 ` [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Paolo Bonzini
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2020-09-23 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Junaid Shahid

Use bools to track write and user faults throughout the page fault paths
and down into mmu_set_spte().  The actual usage is purely boolean, but
that's not obvious without digging into all paths as the current code
uses a mix of bools (TDP and try_async_pf) and ints (shadow paging and
mmu_set_spte()).

No true functional change intended (although the pgprintk() will now
print 0/1 instead of 0/PFERR_WRITE_MASK).

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
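For reference, PFERR_WRITE_MASK is bit 1 of the x86 page fault error
code, hence the pgprintk note above (old_way below is just a stand-in
for the previous int usage):

	bool write_fault = error_code & PFERR_WRITE_MASK; /* false/true */
	int  old_way     = error_code & PFERR_WRITE_MASK; /* 0 or BIT(1) */
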
 arch/x86/kvm/mmu/mmu.c         |  4 ++--
 arch/x86/kvm/mmu/paging_tmpl.h | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index fbee958927ce..a49a98ae1b25 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3064,7 +3064,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 }
 
 static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-			unsigned int pte_access, int write_fault, int level,
+			unsigned int pte_access, bool write_fault, int level,
 			gfn_t gfn, kvm_pfn_t pfn, bool speculative,
 			bool host_writable)
 {
@@ -3161,7 +3161,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 		return -1;
 
 	for (i = 0; i < ret; i++, gfn++, start++) {
-		mmu_set_spte(vcpu, start, access, 0, sp->role.level, gfn,
+		mmu_set_spte(vcpu, start, access, false, sp->role.level, gfn,
 			     page_to_pfn(pages[i]), true, true);
 		put_page(pages[i]);
 	}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 84ea1094fbe2..012ef1548983 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -550,7 +550,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	 * we call mmu_set_spte() with host_writable = true because
 	 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
 	 */
-	mmu_set_spte(vcpu, spte, pte_access, 0, PG_LEVEL_4K, gfn, pfn,
+	mmu_set_spte(vcpu, spte, pte_access, false, PG_LEVEL_4K, gfn, pfn,
 		     true, true);
 
 	kvm_release_pfn_clean(pfn);
@@ -630,7 +630,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			 bool prefault)
 {
 	bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled();
-	int write_fault = error_code & PFERR_WRITE_MASK;
+	bool write_fault = error_code & PFERR_WRITE_MASK;
 	bool exec = error_code & PFERR_FETCH_MASK;
 	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_mmu_page *sp = NULL;
@@ -743,7 +743,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
  */
 static bool
 FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
-			      struct guest_walker *walker, int user_fault,
+			      struct guest_walker *walker, bool user_fault,
 			      bool *write_fault_to_shadow_pgtable)
 {
 	int level;
@@ -781,8 +781,8 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
 static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 			     bool prefault)
 {
-	int write_fault = error_code & PFERR_WRITE_MASK;
-	int user_fault = error_code & PFERR_USER_MASK;
+	bool write_fault = error_code & PFERR_WRITE_MASK;
+	bool user_fault = error_code & PFERR_USER_MASK;
 	struct guest_walker walker;
 	int r;
 	kvm_pfn_t pfn;
-- 
2.28.0



* Re: [PATCH v2 6/8] KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch)
  2020-09-23 18:37 ` [PATCH v2 6/8] KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch) Sean Christopherson
@ 2020-09-25 21:25   ` Paolo Bonzini
  0 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2020-09-25 21:25 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Junaid Shahid

On 23/09/20 20:37, Sean Christopherson wrote:
> Rename 'hlevel', which presumably stands for 'host level', to simply
> 'level' in FNAME(fetch).  The variable hasn't tracked the host level for
> quite some time.

One could say that it stands for "huge" level...  I am not too attached
to it; the only qualm is that "level" is usually used as the starting
or current level and rarely as the end level in a loop.  But then it's
used like that in __direct_map, so...

Paolo



* Re: [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes
  2020-09-23 18:37 [PATCH v2 0/8] KVM: x86/mmu: ITLB multi-hit workaround fixes Sean Christopherson
                   ` (5 preceding siblings ...)
  2020-09-23 18:37 ` [PATCH v2 8/8] KVM: x86/mmu: Track write/user faults using bools Sean Christopherson
@ 2020-09-25 21:27 ` Paolo Bonzini
  6 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2020-09-25 21:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Junaid Shahid

On 23/09/20 20:37, Sean Christopherson wrote:
> Patch 1 is a minor fix for a very theoretical bug where KVM could skip
> the final "commit zap" when recovering shadow pages for the NX huge
> page mitigation.
> 
> Patch 2 is cleanup that's made possible by patch 1.
> 
> Patches 3-5 are the main course and fix bugs in the NX huge page
> accounting where shadow pages are incorrectly added to the list of
> disallowed huge pages.  KVM doesn't check whether the page could
> actually have been a large page before adding it to the disallowed
> list.  This results in what are effectively spurious zaps.  The
> biggest issue is likely with shadow pages in the upper levels, i.e.
> levels 3 and 4, as they are either unlikely to be huge (1gb) or flat
> out can't be huge (512gb).
> And because of the way KVM zaps, the upper levels will be zapped first,
> i.e. KVM is likely zapping and rebuilding a decent number of its shadow
> pages for zero benefit.
> 
> Ideally, patches 3-5 would be a single patch to ease backporting.  In the
> end, I decided the change is probably not suitable for stable as at worst
> it creates an infrequent performance spike (assuming the admin isn't going
> crazy with the recovery frequency), and it's far from straightforward
> or risk-free.  Cramming everything into a single patch was a mess.
> 
> Patches 6-8 are cleanups in related code.  The 'hlevel' name in particular
> has been on my todo list for a while.
> 
> v2:
>   - Rebased to kvm/queue, commit e1ba1a15af73 ("KVM: SVM: Enable INVPCID
>     feature on AMD").
> 
> Sean Christopherson (8):
>   KVM: x86/mmu: Commit zap of remaining invalid pages when recovering
>     lpages
>   KVM: x86/mmu: Refactor the zap loop for recovering NX lpages
>   KVM: x86/mmu: Move "huge page disallowed" calculation into mapping
>     helpers
>   KVM: x86/mmu: Capture requested page level before NX huge page
>     workaround
>   KVM: x86/mmu: Account NX huge page disallowed iff huge page was
>     requested
>   KVM: x86/mmu: Rename 'hlevel' to 'level' in FNAME(fetch)
>   KVM: x86/mmu: Hoist ITLB multi-hit workaround check up a level
>   KVM: x86/mmu: Track write/user faults using bools
> 
>  arch/x86/kvm/mmu/mmu.c         | 58 +++++++++++++++++++++-------------
>  arch/x86/kvm/mmu/paging_tmpl.h | 39 ++++++++++++-----------
>  2 files changed, 57 insertions(+), 40 deletions(-)
> 

Queued, thanks.

Paolo


