* [PATCH v2] kvm: set page dirty only if page has been writable
From: Yu Zhao @ 2016-03-30 20:38 UTC
To: Gleb Natapov, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin
Cc: x86, kvm, Yu Zhao
In the absence of a shadow dirty mask, there is no need to set the page
dirty if the page has never been writable. This is a tiny optimization,
but good to have for people who care about dirty page tracking.
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
arch/x86/kvm/mmu.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 70e95d0..1ff4dbb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -557,8 +557,15 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
!is_writable_pte(new_spte))
ret = true;
- if (!shadow_accessed_mask)
+ if (!shadow_accessed_mask) {
+ /*
+ * We don't set page dirty when dropping non-writable spte.
+ * So do it now if the new spte is becoming non-writable.
+ */
+ if (ret)
+ kvm_set_pfn_dirty(spte_to_pfn(old_spte));
return ret;
+ }
/*
* Flush TLB when accessed/dirty bits are changed in the page tables,
@@ -605,7 +612,8 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
if (!shadow_accessed_mask || old_spte & shadow_accessed_mask)
kvm_set_pfn_accessed(pfn);
- if (!shadow_dirty_mask || (old_spte & shadow_dirty_mask))
+ if (old_spte & (shadow_dirty_mask ? shadow_dirty_mask :
+ PT_WRITABLE_MASK))
kvm_set_pfn_dirty(pfn);
return 1;
}
--
2.8.0.rc3.226.g39d4020
* Re: [PATCH v2] kvm: set page dirty only if page has been writable
From: Paolo Bonzini @ 2016-03-30 21:08 UTC
To: Yu Zhao, Gleb Natapov, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
Cc: x86, kvm
On 30/03/2016 22:38, Yu Zhao wrote:
> In the absence of a shadow dirty mask, there is no need to set the page
> dirty if the page has never been writable. This is a tiny optimization,
> but good to have for people who care about dirty page tracking.
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
> arch/x86/kvm/mmu.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 70e95d0..1ff4dbb 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -557,8 +557,15 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
> !is_writable_pte(new_spte))
> ret = true;
>
> - if (!shadow_accessed_mask)
> + if (!shadow_accessed_mask) {
> + /*
> + * We don't set page dirty when dropping non-writable spte.
> + * So do it now if the new spte is becoming non-writable.
> + */
> + if (ret)
> + kvm_set_pfn_dirty(spte_to_pfn(old_spte));
> return ret;
> + }
>
> /*
> * Flush TLB when accessed/dirty bits are changed in the page tables,
> @@ -605,7 +612,8 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
>
> if (!shadow_accessed_mask || old_spte & shadow_accessed_mask)
> kvm_set_pfn_accessed(pfn);
> - if (!shadow_dirty_mask || (old_spte & shadow_dirty_mask))
> + if (old_spte & (shadow_dirty_mask ? shadow_dirty_mask :
> + PT_WRITABLE_MASK))
> kvm_set_pfn_dirty(pfn);
> return 1;
> }
>
Looks good, thanks!
Paolo