* [PATCH v3 0/7] KVM: MMU: fix release pfn in mmu code
From: Xiao Guangrong @ 2012-09-21 6:56 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, LKML, KVM
Changelog:
changes from Avi's comments:
- comment for FNAME(fetch)
- add annotations (__acquires, __releases) for page_fault_start and
page_fault_end
changes from Marcelo's comments:
- remove mmu_is_invalid
- make release noslot pfn path more readable
The last patch, which introduces page_fault_start and page_fault_end, is
controversial; I hope we can try it since it wraps up the ugly pfn-release
path, but I respect your opinion. :)
Releasing a pfn in the MMU code is a little special because we allow a
no-slot pfn to go through the spte walk on the page-fault path; that means,
on the page-fault failure path, we cannot directly call kvm_release_pfn_clean.
This patchset fixes the bug of releasing a no-slot pfn on the failure path
and cleans up all the paths where kvm_release_pfn_clean is called.
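For reference, a minimal sketch of the release pattern this series converges
on (is_noslot_pfn and kvm_release_pfn_clean are the existing helpers; the
wrapper name release_pfn_if_real is only illustrative, not part of the
patches):

	/*
	 * A noslot pfn is a synthetic marker used to cache MMIO info in the
	 * spte; it holds no page reference, so only a real, memslot-backed
	 * pfn may be passed to kvm_release_pfn_clean.
	 */
	static void release_pfn_if_real(pfn_t pfn)
	{
		if (likely(!is_noslot_pfn(pfn)))
			kvm_release_pfn_clean(pfn);
	}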
* [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Xiao Guangrong @ 2012-09-21 6:57 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
We cannot directly call kvm_release_pfn_clean to release the pfn, since we
can meet a noslot pfn, which is used to cache MMIO info in the spte
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
arch/x86/kvm/mmu.c | 6 ++++--
arch/x86/kvm/paging_tmpl.h | 6 ++++--
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index aa0b469..0f56169 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2877,7 +2877,8 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
- kvm_release_pfn_clean(pfn);
+ if (likely(!is_noslot_pfn(pfn)))
+ kvm_release_pfn_clean(pfn);
return 0;
}
@@ -3345,7 +3346,8 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
- kvm_release_pfn_clean(pfn);
+ if (likely(!is_noslot_pfn(pfn)))
+ kvm_release_pfn_clean(pfn);
return 0;
}
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index bf8c42b..9ce6bc0 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -544,7 +544,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
out_gpte_changed:
if (sp)
kvm_mmu_put_page(sp, it.sptep);
- kvm_release_pfn_clean(pfn);
+ if (likely(!is_noslot_pfn(pfn)))
+ kvm_release_pfn_clean(pfn);
return NULL;
}
@@ -645,7 +646,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
- kvm_release_pfn_clean(pfn);
+ if (likely(!is_noslot_pfn(pfn)))
+ kvm_release_pfn_clean(pfn);
return 0;
}
--
1.7.7.6
* [PATCH v3 2/7] KVM: MMU: remove mmu_is_invalid
From: Xiao Guangrong @ 2012-09-21 6:57 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
Remove mmu_invalid_pfn and use is_invalid_pfn instead
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
arch/x86/kvm/mmu.c | 5 -----
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 0f56169..3e9728b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2700,11 +2700,6 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
}
}
-static bool mmu_invalid_pfn(pfn_t pfn)
-{
- return unlikely(is_invalid_pfn(pfn));
-}
-
static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
pfn_t pfn, unsigned access, int *ret_val)
{
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 9ce6bc0..b400761 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -370,7 +370,7 @@ static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte, true);
pfn = gfn_to_pfn_atomic(vcpu->kvm, gpte_to_gfn(gpte));
- if (mmu_invalid_pfn(pfn))
+ if (is_invalid_pfn(pfn))
return;
/*
@@ -446,7 +446,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
gfn = gpte_to_gfn(gpte);
pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
pte_access & ACC_WRITE_MASK);
- if (mmu_invalid_pfn(pfn))
+ if (is_invalid_pfn(pfn))
break;
mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
--
1.7.7.6
* [PATCH v3 3/7] KVM: MMU: do not release pfn in mmu_set_spte
From: Xiao Guangrong @ 2012-09-21 6:58 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
It helps us clean up the pfn release in the later patches
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
arch/x86/kvm/mmu.c | 29 ++++++++++++++---------------
arch/x86/kvm/paging_tmpl.h | 18 ++++++++++++------
2 files changed, 26 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3e9728b..bc1cda4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2496,9 +2496,6 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
rmap_recycle(vcpu, sptep, gfn);
}
}
-
- if (!is_error_pfn(pfn))
- kvm_release_pfn_clean(pfn);
}
static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
@@ -2535,12 +2532,15 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
if (ret <= 0)
return -1;
- for (i = 0; i < ret; i++, gfn++, start++)
+ for (i = 0; i < ret; i++, gfn++, start++) {
mmu_set_spte(vcpu, start, ACC_ALL,
access, 0, 0, NULL,
sp->role.level, gfn,
page_to_pfn(pages[i]), true, true);
+ kvm_release_page_clean(pages[i]);
+ }
+
return 0;
}
@@ -2858,23 +2858,22 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
return r;
spin_lock(&vcpu->kvm->mmu_lock);
- if (mmu_notifier_retry(vcpu, mmu_seq))
+ if (mmu_notifier_retry(vcpu, mmu_seq)) {
+ r = 0;
goto out_unlock;
+ }
+
kvm_mmu_free_some_pages(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
prefault);
- spin_unlock(&vcpu->kvm->mmu_lock);
-
-
- return r;
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
if (likely(!is_noslot_pfn(pfn)))
kvm_release_pfn_clean(pfn);
- return 0;
+ return r;
}
@@ -3328,22 +3327,22 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
return r;
spin_lock(&vcpu->kvm->mmu_lock);
- if (mmu_notifier_retry(vcpu, mmu_seq))
+ if (mmu_notifier_retry(vcpu, mmu_seq)) {
+ r = 0;
goto out_unlock;
+ }
+
kvm_mmu_free_some_pages(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
r = __direct_map(vcpu, gpa, write, map_writable,
level, gfn, pfn, prefault);
- spin_unlock(&vcpu->kvm->mmu_lock);
-
- return r;
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
if (likely(!is_noslot_pfn(pfn)))
kvm_release_pfn_clean(pfn);
- return 0;
+ return r;
}
static void nonpaging_free(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index b400761..56f8085 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -380,6 +380,9 @@ static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
NULL, PT_PAGE_TABLE_LEVEL,
gpte_to_gfn(gpte), pfn, true, true);
+
+ if (likely(!is_noslot_pfn(pfn)))
+ kvm_release_pfn_clean(pfn);
}
static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
@@ -452,6 +455,9 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
NULL, PT_PAGE_TABLE_LEVEL, gfn,
pfn, true, true);
+
+ if (likely(!is_noslot_pfn(pfn)))
+ kvm_release_pfn_clean(pfn);
}
}
@@ -544,8 +550,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
out_gpte_changed:
if (sp)
kvm_mmu_put_page(sp, it.sptep);
- if (likely(!is_noslot_pfn(pfn)))
- kvm_release_pfn_clean(pfn);
+
return NULL;
}
@@ -625,8 +630,10 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
return r;
spin_lock(&vcpu->kvm->mmu_lock);
- if (mmu_notifier_retry(vcpu, mmu_seq))
+ if (mmu_notifier_retry(vcpu, mmu_seq)) {
+ r = 0;
goto out_unlock;
+ }
kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
kvm_mmu_free_some_pages(vcpu);
@@ -640,15 +647,14 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
++vcpu->stat.pf_fixed;
kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);
- spin_unlock(&vcpu->kvm->mmu_lock);
- return emulate;
+ r = emulate;
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
if (likely(!is_noslot_pfn(pfn)))
kvm_release_pfn_clean(pfn);
- return 0;
+ return r;
}
static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
--
1.7.7.6
* [PATCH v3 4/7] KVM: MMU: cleanup FNAME(page_fault)
From: Xiao Guangrong @ 2012-09-21 6:58 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
Let it return the emulate state instead of the spte, like __direct_map does
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
arch/x86/kvm/paging_tmpl.h | 31 ++++++++++++-------------------
1 files changed, 12 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 56f8085..6fd1c8b 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -463,21 +463,21 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
/*
* Fetch a shadow pte for a specific level in the paging hierarchy.
+ * If the guest tries to write a write-protected page, we need to
+ * emulate this operation, return 1 to indicate this case.
*/
-static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
+static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
struct guest_walker *gw,
int user_fault, int write_fault, int hlevel,
- int *emulate, pfn_t pfn, bool map_writable,
- bool prefault)
+ pfn_t pfn, bool map_writable, bool prefault)
{
- unsigned access = gw->pt_access;
struct kvm_mmu_page *sp = NULL;
- int top_level;
- unsigned direct_access;
struct kvm_shadow_walk_iterator it;
+ unsigned direct_access, access = gw->pt_access;
+ int top_level, emulate = 0;
if (!is_present_gpte(gw->ptes[gw->level - 1]))
- return NULL;
+ return 0;
direct_access = gw->pte_access;
@@ -541,17 +541,17 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
clear_sp_write_flooding_count(it.sptep);
mmu_set_spte(vcpu, it.sptep, access, gw->pte_access,
- user_fault, write_fault, emulate, it.level,
+ user_fault, write_fault, &emulate, it.level,
gw->gfn, pfn, prefault, map_writable);
FNAME(pte_prefetch)(vcpu, gw, it.sptep);
- return it.sptep;
+ return emulate;
out_gpte_changed:
if (sp)
kvm_mmu_put_page(sp, it.sptep);
- return NULL;
+ return 0;
}
/*
@@ -574,8 +574,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
int write_fault = error_code & PFERR_WRITE_MASK;
int user_fault = error_code & PFERR_USER_MASK;
struct guest_walker walker;
- u64 *sptep;
- int emulate = 0;
int r;
pfn_t pfn;
int level = PT_PAGE_TABLE_LEVEL;
@@ -639,17 +637,12 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
kvm_mmu_free_some_pages(vcpu);
if (!force_pt_level)
transparent_hugepage_adjust(vcpu, &walker.gfn, &pfn, &level);
- sptep = FNAME(fetch)(vcpu, addr, &walker, user_fault, write_fault,
- level, &emulate, pfn, map_writable, prefault);
- (void)sptep;
- pgprintk("%s: shadow pte %p %llx emulate %d\n", __func__,
- sptep, *sptep, emulate);
+ r = FNAME(fetch)(vcpu, addr, &walker, user_fault, write_fault,
+ level, pfn, map_writable, prefault);
++vcpu->stat.pf_fixed;
kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);
- r = emulate;
-
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
if (likely(!is_noslot_pfn(pfn)))
--
1.7.7.6
* [PATCH v3 5/7] KVM: MMU: introduce FNAME(prefetch_gpte)
From: Xiao Guangrong @ 2012-09-21 6:59 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
The only difference between FNAME(update_pte) and FNAME(pte_prefetch) is
that the former is allowed to prefetch a gfn from a dirty-logged slot, so
introduce a common function to prefetch the spte
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
arch/x86/kvm/paging_tmpl.h | 58 ++++++++++++++++++-------------------------
1 files changed, 24 insertions(+), 34 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6fd1c8b..5b1af72 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -356,33 +356,45 @@ no_present:
return true;
}
-static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
- u64 *spte, const void *pte)
+static bool
+FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+ u64 *spte, pt_element_t gpte, bool no_dirty_log)
{
- pt_element_t gpte;
unsigned pte_access;
+ gfn_t gfn;
pfn_t pfn;
- gpte = *(const pt_element_t *)pte;
if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
- return;
+ return false;
pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
+
+ gfn = gpte_to_gfn(gpte);
pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte, true);
- pfn = gfn_to_pfn_atomic(vcpu->kvm, gpte_to_gfn(gpte));
+ pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
+ no_dirty_log && (pte_access & ACC_WRITE_MASK));
if (is_invalid_pfn(pfn))
- return;
+ return false;
/*
- * we call mmu_set_spte() with host_writable = true because that
- * vcpu->arch.update_pte.pfn was fetched from get_user_pages(write = 1).
+ * we call mmu_set_spte() with host_writable = true because
+ * pte_prefetch_gfn_to_pfn always gets a writable pfn.
*/
mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
- NULL, PT_PAGE_TABLE_LEVEL,
- gpte_to_gfn(gpte), pfn, true, true);
+ NULL, PT_PAGE_TABLE_LEVEL, gfn, pfn, true, true);
if (likely(!is_noslot_pfn(pfn)))
kvm_release_pfn_clean(pfn);
+
+ return true;
+}
+
+static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+ u64 *spte, const void *pte)
+{
+ pt_element_t gpte = *(const pt_element_t *)pte;
+
+ FNAME(prefetch_gpte)(vcpu, sp, spte, gpte, false);
}
static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
@@ -428,36 +440,14 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
spte = sp->spt + i;
for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
- pt_element_t gpte;
- unsigned pte_access;
- gfn_t gfn;
- pfn_t pfn;
-
if (spte == sptep)
continue;
if (is_shadow_present_pte(*spte))
continue;
- gpte = gptep[i];
-
- if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
- continue;
-
- pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte,
- true);
- gfn = gpte_to_gfn(gpte);
- pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
- pte_access & ACC_WRITE_MASK);
- if (is_invalid_pfn(pfn))
+ if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true))
break;
-
- mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
- NULL, PT_PAGE_TABLE_LEVEL, gfn,
- pfn, true, true);
-
- if (likely(!is_noslot_pfn(pfn)))
- kvm_release_pfn_clean(pfn);
}
}
--
1.7.7.6
* [PATCH v3 6/7] KVM: MMU: move prefetch_invalid_gpte out of paging_tmpl.h
From: Xiao Guangrong @ 2012-09-21 6:59 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
The function does not depend on the guest MMU mode, so move it out of
paging_tmpl.h
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
arch/x86/kvm/mmu.c | 36 ++++++++++++++++++++++++++++--------
arch/x86/kvm/paging_tmpl.h | 24 ++----------------------
2 files changed, 30 insertions(+), 30 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bc1cda4..a455c0d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2503,6 +2503,14 @@ static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
mmu_free_roots(vcpu);
}
+static bool is_rsvd_bits_set(struct kvm_mmu *mmu, u64 gpte, int level)
+{
+ int bit7;
+
+ bit7 = (gpte >> 7) & 1;
+ return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
+}
+
static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
bool no_dirty_log)
{
@@ -2515,6 +2523,26 @@ static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
return gfn_to_pfn_memslot_atomic(slot, gfn);
}
+static bool prefetch_invalid_gpte(struct kvm_vcpu *vcpu,
+ struct kvm_mmu_page *sp, u64 *spte,
+ u64 gpte)
+{
+ if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
+ goto no_present;
+
+ if (!is_present_gpte(gpte))
+ goto no_present;
+
+ if (!(gpte & PT_ACCESSED_MASK))
+ goto no_present;
+
+ return false;
+
+no_present:
+ drop_spte(vcpu->kvm, spte);
+ return true;
+}
+
static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *sp,
u64 *start, u64 *end)
@@ -3396,14 +3424,6 @@ static void paging_free(struct kvm_vcpu *vcpu)
nonpaging_free(vcpu);
}
-static bool is_rsvd_bits_set(struct kvm_mmu *mmu, u64 gpte, int level)
-{
- int bit7;
-
- bit7 = (gpte >> 7) & 1;
- return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
-}
-
static bool sync_mmio_spte(u64 *sptep, gfn_t gfn, unsigned access,
int *nr_present)
{
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 5b1af72..298e5c2 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -336,26 +336,6 @@ static int FNAME(walk_addr_nested)(struct guest_walker *walker,
addr, access);
}
-static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
- struct kvm_mmu_page *sp, u64 *spte,
- pt_element_t gpte)
-{
- if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
- goto no_present;
-
- if (!is_present_gpte(gpte))
- goto no_present;
-
- if (!(gpte & PT_ACCESSED_MASK))
- goto no_present;
-
- return false;
-
-no_present:
- drop_spte(vcpu->kvm, spte);
- return true;
-}
-
static bool
FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
u64 *spte, pt_element_t gpte, bool no_dirty_log)
@@ -364,7 +344,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
gfn_t gfn;
pfn_t pfn;
- if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
+ if (prefetch_invalid_gpte(vcpu, sp, spte, gpte))
return false;
pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
@@ -778,7 +758,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
sizeof(pt_element_t)))
return -EINVAL;
- if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
+ if (prefetch_invalid_gpte(vcpu, sp, &sp->spt[i], gpte)) {
vcpu->kvm->tlbs_dirty++;
continue;
}
--
1.7.7.6
* [PATCH v3 7/7] KVM: MMU: introduce page_fault_start/page_fault_end
From: Xiao Guangrong @ 2012-09-21 7:00 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
Wrap the common operations into these two functions
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
arch/x86/kvm/mmu.c | 55 ++++++++++++++++++++++++++++----------------
arch/x86/kvm/paging_tmpl.h | 12 ++++-----
2 files changed, 40 insertions(+), 27 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a455c0d..d75bf9a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2848,6 +2848,31 @@ exit:
static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
gva_t gva, pfn_t *pfn, bool write, bool *writable);
+static bool
+page_fault_start(struct kvm_vcpu *vcpu, gfn_t *gfnp, pfn_t *pfnp, int *levelp,
+ bool force_pt_level, unsigned long mmu_seq)
+ __acquires(vcpu->kvm->mmu_lock)
+{
+ spin_lock(&vcpu->kvm->mmu_lock);
+ if (mmu_notifier_retry(vcpu, mmu_seq))
+ return false;
+
+ kvm_mmu_free_some_pages(vcpu);
+ if (likely(!force_pt_level))
+ transparent_hugepage_adjust(vcpu, gfnp, pfnp, levelp);
+
+ return true;
+}
+
+static void page_fault_end(struct kvm_vcpu *vcpu, pfn_t pfn)
+ __releases(vcpu->kvm->mmu_lock)
+{
+ spin_unlock(&vcpu->kvm->mmu_lock);
+
+ if (likely(!is_noslot_pfn(pfn)))
+ kvm_release_pfn_clean(pfn);
+}
+
static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
gfn_t gfn, bool prefault)
{
@@ -2885,22 +2910,17 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
if (handle_abnormal_pfn(vcpu, v, gfn, pfn, ACC_ALL, &r))
return r;
- spin_lock(&vcpu->kvm->mmu_lock);
- if (mmu_notifier_retry(vcpu, mmu_seq)) {
+ if (!page_fault_start(vcpu, &gfn, &pfn, &level, force_pt_level,
+ mmu_seq)) {
r = 0;
- goto out_unlock;
+ goto exit;
}
- kvm_mmu_free_some_pages(vcpu);
- if (likely(!force_pt_level))
- transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
prefault);
-out_unlock:
- spin_unlock(&vcpu->kvm->mmu_lock);
- if (likely(!is_noslot_pfn(pfn)))
- kvm_release_pfn_clean(pfn);
+exit:
+ page_fault_end(vcpu, pfn);
return r;
}
@@ -3354,22 +3374,17 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
if (handle_abnormal_pfn(vcpu, 0, gfn, pfn, ACC_ALL, &r))
return r;
- spin_lock(&vcpu->kvm->mmu_lock);
- if (mmu_notifier_retry(vcpu, mmu_seq)) {
+ if (!page_fault_start(vcpu, &gfn, &pfn, &level, force_pt_level,
+ mmu_seq)) {
r = 0;
- goto out_unlock;
+ goto exit;
}
- kvm_mmu_free_some_pages(vcpu);
- if (likely(!force_pt_level))
- transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
r = __direct_map(vcpu, gpa, write, map_writable,
level, gfn, pfn, prefault);
-out_unlock:
- spin_unlock(&vcpu->kvm->mmu_lock);
- if (likely(!is_noslot_pfn(pfn)))
- kvm_release_pfn_clean(pfn);
+exit:
+ page_fault_end(vcpu, pfn);
return r;
}
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 298e5c2..269116d 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -597,10 +597,10 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
walker.gfn, pfn, walker.pte_access, &r))
return r;
- spin_lock(&vcpu->kvm->mmu_lock);
- if (mmu_notifier_retry(vcpu, mmu_seq)) {
+ if (!page_fault_start(vcpu, &walker.gfn, &pfn, &level,
+ force_pt_level, mmu_seq)) {
r = 0;
- goto out_unlock;
+ goto exit;
}
kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
@@ -613,10 +613,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
++vcpu->stat.pf_fixed;
kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);
-out_unlock:
- spin_unlock(&vcpu->kvm->mmu_lock);
- if (likely(!is_noslot_pfn(pfn)))
- kvm_release_pfn_clean(pfn);
+exit:
+ page_fault_end(vcpu, pfn);
return r;
}
--
1.7.7.6
* Re: [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Gleb Natapov @ 2012-09-23 9:13 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
> We can not directly call kvm_release_pfn_clean to release the pfn
> since we can meet noslot pfn which is used to cache mmio info into
> spte
>
Wouldn't it be better to move the check into kvm_release_pfn_clean()?
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> ---
> arch/x86/kvm/mmu.c | 6 ++++--
> arch/x86/kvm/paging_tmpl.h | 6 ++++--
> 2 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index aa0b469..0f56169 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2877,7 +2877,8 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
>
> out_unlock:
> spin_unlock(&vcpu->kvm->mmu_lock);
> - kvm_release_pfn_clean(pfn);
> + if (likely(!is_noslot_pfn(pfn)))
> + kvm_release_pfn_clean(pfn);
> return 0;
> }
>
> @@ -3345,7 +3346,8 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
>
> out_unlock:
> spin_unlock(&vcpu->kvm->mmu_lock);
> - kvm_release_pfn_clean(pfn);
> + if (likely(!is_noslot_pfn(pfn)))
> + kvm_release_pfn_clean(pfn);
> return 0;
> }
>
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index bf8c42b..9ce6bc0 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -544,7 +544,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
> out_gpte_changed:
> if (sp)
> kvm_mmu_put_page(sp, it.sptep);
> - kvm_release_pfn_clean(pfn);
> + if (likely(!is_noslot_pfn(pfn)))
> + kvm_release_pfn_clean(pfn);
> return NULL;
> }
>
> @@ -645,7 +646,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>
> out_unlock:
> spin_unlock(&vcpu->kvm->mmu_lock);
> - kvm_release_pfn_clean(pfn);
> + if (likely(!is_noslot_pfn(pfn)))
> + kvm_release_pfn_clean(pfn);
> return 0;
> }
>
> --
> 1.7.7.6
>
--
Gleb.
* Re: [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Xiao Guangrong @ 2012-09-24 4:59 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
>> We can not directly call kvm_release_pfn_clean to release the pfn
>> since we can meet noslot pfn which is used to cache mmio info into
>> spte
>>
> Wouldn't it be better to move the check into kvm_release_pfn_clean()?
I think there is no reason for us to prefer adding this branch to the
common code. :)
* Re: [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Gleb Natapov @ 2012-09-24 11:24 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
> > On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
> >> We can not directly call kvm_release_pfn_clean to release the pfn
> >> since we can meet noslot pfn which is used to cache mmio info into
> >> spte
> >>
> > Wouldn't it be better to move the check into kvm_release_pfn_clean()?
>
> I think there is no reason for us to prefer to adding this branch in
> the common code. :)
Is the function performance critical? Is the function called without the
check on a hot path? The function already contains the much heavier
kvm_is_mmio_pfn() check. If most/all invocations require the check before
the call, it's better to move it inside.
--
Gleb.
* Re: [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Xiao Guangrong @ 2012-09-24 11:49 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
On 09/24/2012 07:24 PM, Gleb Natapov wrote:
> On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
>> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
>>> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
>>>> We can not directly call kvm_release_pfn_clean to release the pfn
>>>> since we can meet noslot pfn which is used to cache mmio info into
>>>> spte
>>>>
>>> Wouldn't it be better to move the check into kvm_release_pfn_clean()?
>>
>> I think there is no reason for us to prefer to adding this branch in
>> the common code. :)
>
> Is the function performance critical? Is function called without the check
> on a hot path? The function already contains much heavier kvm_is_mmio_pfn()
> check. If most/all function invocation require check before call it's
> better to move it inside.
It is not most/all callers that need to do this check - it is only needed
on the x86 MMU page-fault/prefetch path.
* Re: [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Gleb Natapov @ 2012-09-24 12:04 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
On Mon, Sep 24, 2012 at 07:49:37PM +0800, Xiao Guangrong wrote:
> On 09/24/2012 07:24 PM, Gleb Natapov wrote:
> > On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
> >> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
> >>> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
> >>>> We can not directly call kvm_release_pfn_clean to release the pfn
> >>>> since we can meet noslot pfn which is used to cache mmio info into
> >>>> spte
> >>>>
> >>> Wouldn't it be better to move the check into kvm_release_pfn_clean()?
> >>
> >> I think there is no reason for us to prefer to adding this branch in
> >> the common code. :)
> >
> > Is the function performance critical? Is function called without the check
> > on a hot path? The function already contains much heavier kvm_is_mmio_pfn()
> > check. If most/all function invocation require check before call it's
> > better to move it inside.
>
> It is not most/all functions need do this check - it is only needed on x86 mmu
> page-fault/prefetch path.
At least on x86 there are 7 calls to kvm_release_pfn_clean(); 5 of them are
guarded by is_noslot_pfn() (after this patch) and one by the even stronger
is_error_pfn(). I guess when/if other architectures add MMIO MMU caching
they will need to guard kvm_release_pfn_clean() with is_noslot_pfn() too in
most cases. I am not insisting, but as this patch shows, it is easy to miss
the check before calling the function.
--
Gleb.
* Re: [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Xiao Guangrong @ 2012-09-24 12:32 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
On 09/24/2012 08:04 PM, Gleb Natapov wrote:
> On Mon, Sep 24, 2012 at 07:49:37PM +0800, Xiao Guangrong wrote:
>> On 09/24/2012 07:24 PM, Gleb Natapov wrote:
>>> On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
>>>> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
>>>>> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
>>>>>> We can not directly call kvm_release_pfn_clean to release the pfn
>>>>>> since we can meet noslot pfn which is used to cache mmio info into
>>>>>> spte
>>>>>>
>>>>> Wouldn't it be better to move the check into kvm_release_pfn_clean()?
>>>>
>>>> I think there is no reason for us to prefer to adding this branch in
>>>> the common code. :)
>>>
>>> Is the function performance critical? Is function called without the check
>>> on a hot path? The function already contains much heavier kvm_is_mmio_pfn()
>>> check. If most/all function invocation require check before call it's
>>> better to move it inside.
>>
>> It is not most/all functions need do this check - it is only needed on x86 mmu
>> page-fault/prefetch path.
> At least on x86 there 7 calls to kvm_release_pfn_clean(), 5 of them are
> guarded by is_noslot_pfn() (after this patch)
3 places after the whole patchset (There are some cleanups after this patch).
> and one by even stronger is_error_pfn().
This one is:
| if (!is_error_pfn(pfn)) {
| kvm_release_pfn_clean(pfn);
| return true;
| }
|
| return false;
We can change it to:
| if (is_error_pfn(pfn))
| return false;
|
| kvm_release_pfn_clean(pfn);
| return true;
> I guess when/if other architectures will add MMIO MMU
> caching they will need to guard kvm_release_pfn_clean() by is_noslot_pfn()
> too in most cases. I am not insisting, but as this patch shows it is
> easy to miss the check before calling the function.
Sounds reasonable. I will consider it if Avi/Marcelo have no objection to
it.
* Re: [PATCH v3 1/7] KVM: MMU: fix release noslot pfn
From: Avi Kivity @ 2012-09-27 16:25 UTC (permalink / raw)
To: Xiao Guangrong; +Cc: Gleb Natapov, Marcelo Tosatti, LKML, KVM
On 09/24/2012 02:32 PM, Xiao Guangrong wrote:
> 3 places after the whole patchset (There are some cleanups after this patch).
>
>> and one by even stronger is_error_pfn().
>
> This one is:
>
> | if (!is_error_pfn(pfn)) {
> | kvm_release_pfn_clean(pfn);
> | return true;
> | }
> |
> | return false;
>
> We can change it to:
>
> | if (is_error_pfn(pfn))
> | return false;
> |
> | kvm_release_pfn_clean(pfn);
> | return true;
>
>> I guess when/if other architectures will add MMIO MMU
>> caching they will need to guard kvm_release_pfn_clean() by is_noslot_pfn()
>> too in most cases. I am not insisting, but as this patch shows it is
>> easy to miss the check before calling the function.
>
> Sounds reasonable. I will consider it if Avi/Marcelo have no object on
> it.
I think it's a good idea.
Looks like we traded the unscalable error pages for these branches; I think
it's a reasonable tradeoff.
--
error compiling committee.c: too many arguments to function