* [PATCH 0/2] *** SUBJECT HERE ***
From: Izik Eidus @ 2009-06-10 11:51 UTC (permalink / raw)
To: kvm; +Cc: avi, Izik Eidus

RFC: move the dirty page tracking to use the dirty bit

Well, I was bored this morning and have had this idea for a while; I
didn't test it much yet. First I want to hear what people think.

Thanks.

Izik Eidus (2):
  kvm: fix dirty bit tracking for slots with large pages
  kvm: change the dirty page tracking to work with dirty bit instead of
    page fault

 arch/ia64/kvm/kvm-ia64.c        |  4 ++++
 arch/powerpc/kvm/powerpc.c      |  4 ++++
 arch/s390/kvm/kvm-s390.c        |  4 ++++
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu.c              | 32 +++++++++++++++++++++++++++++---
 arch/x86/kvm/svm.c              |  7 +++++++
 arch/x86/kvm/vmx.c              |  7 +++++++
 arch/x86/kvm/x86.c              | 21 ++++++++++++++++++---
 include/linux/kvm_host.h        |  1 +
 virt/kvm/kvm_main.c             | 17 ++++++++++++-----
 10 files changed, 89 insertions(+), 11 deletions(-)
* [PATCH 1/2] kvm: fix dirty bit tracking for slots with large pages
From: Izik Eidus @ 2009-06-10 11:51 UTC (permalink / raw)
To: kvm; +Cc: avi, Izik Eidus

When a slot is already allocated and is asked to be tracked, we need to
break its large pages.

This patch flushes the MMU when someone asks a slot to start dirty bit
tracking.

Signed-off-by: Izik Eidus <ieidus@redhat.com>
---
 virt/kvm/kvm_main.c | 2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 669eb4a..4a60c72 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1160,6 +1160,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			new.userspace_addr = mem->userspace_addr;
 		else
 			new.userspace_addr = 0;
+
+		kvm_arch_flush_shadow(kvm);
 	}
 	if (npages && !new.lpage_info) {
 		largepages = 1 + (base_gfn + npages - 1) / KVM_PAGES_PER_HPAGE;
-- 
1.5.6.5
* [PATCH 2/2] kvm: change the dirty page tracking to work with dirty bit instead of page fault
From: Izik Eidus @ 2009-06-10 11:51 UTC (permalink / raw)
To: kvm; +Cc: avi, Izik Eidus

Right now the dirty page tracking works with the help of page faults:
when we want to track a page for being dirty, we write-protect it and
mark it dirty when we get a write page fault. This patch moves to
looking at the dirty bit of the spte instead.

Signed-off-by: Izik Eidus <ieidus@redhat.com>
---
 arch/ia64/kvm/kvm-ia64.c        |  4 ++++
 arch/powerpc/kvm/powerpc.c      |  4 ++++
 arch/s390/kvm/kvm-s390.c        |  4 ++++
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu.c              | 32 +++++++++++++++++++++++++++++---
 arch/x86/kvm/svm.c              |  7 +++++++
 arch/x86/kvm/vmx.c              |  7 +++++++
 arch/x86/kvm/x86.c              | 21 ++++++++++++++++++---
 include/linux/kvm_host.h        |  1 +
 virt/kvm/kvm_main.c             | 15 ++++++++++-----
 10 files changed, 87 insertions(+), 11 deletions(-)

diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 3199221..5914128 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1809,6 +1809,10 @@ void kvm_arch_exit(void)
 	kvm_vmm_info = NULL;
 }
 
+void kvm_arch_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+}
+
 static int kvm_ia64_sync_dirty_log(struct kvm *kvm,
 		struct kvm_dirty_log *log)
 {
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 2cf915e..6beb368 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -418,6 +418,10 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 	return -ENOTSUPP;
 }
 
+void kvm_arch_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
                        unsigned int ioctl, unsigned long arg)
 {
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 981ab04..ab6f115 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -130,6 +130,10 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 	return 0;
 }
 
+void kvm_arch_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
                        unsigned int ioctl, unsigned long arg)
 {
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c7b0cc2..8a24149 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -527,6 +527,7 @@ struct kvm_x86_ops {
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 	int (*get_tdp_level)(void);
 	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+	int (*dirty_bit_support)(void);
 };
 
 extern struct kvm_x86_ops *kvm_x86_ops;
@@ -796,4 +797,6 @@ int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
 int kvm_age_hva(struct kvm *kvm, unsigned long hva);
 int cpuid_maxphyaddr(struct kvm_vcpu *vcpu);
 
+int is_dirty_and_clean_rmapp(struct kvm *kvm, unsigned long *rmapp);
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 809cce0..3ec6a7d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -140,6 +140,8 @@ module_param(oos_shadow, bool, 0644);
 #define ACC_USER_MASK    PT_USER_MASK
 #define ACC_ALL          (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
 
+#define SPTE_DONT_DIRTY (1ULL << PT_FIRST_AVAIL_BITS_SHIFT)
+
 #define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
 
 struct kvm_rmap_desc {
@@ -629,6 +631,25 @@ static u64 *rmap_next(struct kvm *kvm, unsigned long *rmapp, u64 *spte)
 	return NULL;
 }
 
+int is_dirty_and_clean_rmapp(struct kvm *kvm, unsigned long *rmapp)
+{
+	u64 *spte;
+	int dirty = 0;
+
+	spte = rmap_next(kvm, rmapp, NULL);
+	while (spte) {
+		if (*spte & PT_DIRTY_MASK) {
+			set_shadow_pte(spte, (*spte &= ~PT_DIRTY_MASK) |
+				       SPTE_DONT_DIRTY);
+			dirty = 1;
+		}
+		spte = rmap_next(kvm, rmapp, spte);
+	}
+
+	return dirty;
+}
+
+
 static int rmap_write_protect(struct kvm *kvm, u64 gfn)
 {
 	unsigned long *rmapp;
@@ -1676,7 +1697,10 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 	 * whether the guest actually used the pte (in order to detect
 	 * demand paging).
 	 */
-	spte = shadow_base_present_pte | shadow_dirty_mask;
+	spte = shadow_base_present_pte;
+	if (!(spte & SPTE_DONT_DIRTY))
+		spte |= shadow_dirty_mask;
+
 	if (!speculative)
 		spte |= shadow_accessed_mask;
 	if (!dirty)
@@ -1725,8 +1749,10 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 		}
 	}
 
-	if (pte_access & ACC_WRITE_MASK)
-		mark_page_dirty(vcpu->kvm, gfn);
+	if (!shadow_dirty_mask) {
+		if (pte_access & ACC_WRITE_MASK)
+			mark_page_dirty(vcpu->kvm, gfn);
+	}
 
 set_pte:
 	set_shadow_pte(shadow_pte, spte);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 37397f6..5b6351e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2724,6 +2724,11 @@ static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	return 0;
 }
 
+static int svm_dirty_bit_support(void)
+{
+	return 1;
+}
+
 static struct kvm_x86_ops svm_x86_ops = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -2785,6 +2790,8 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.set_tss_addr = svm_set_tss_addr,
 	.get_tdp_level = get_npt_level,
 	.get_mt_mask = svm_get_mt_mask,
+
+	.dirty_bit_support = svm_dirty_bit_support,
 };
 
 static int __init svm_init(void)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 673bcb3..8903314 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3774,6 +3774,11 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	return ret;
 }
 
+static int vmx_dirty_bit_support(void)
+{
+	return false;
+}
+
 static struct kvm_x86_ops vmx_x86_ops = {
 	.cpu_has_kvm_support = cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
@@ -3833,6 +3838,8 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.set_tss_addr = vmx_set_tss_addr,
 	.get_tdp_level = get_ept_level,
 	.get_mt_mask = vmx_get_mt_mask,
+
+	.dirty_bit_support = vmx_dirty_bit_support,
 };
 
 static int __init vmx_init(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 272e2e8..d28fbcb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1963,6 +1963,19 @@ static int kvm_vm_ioctl_reinject(struct kvm *kvm,
 	return 0;
 }
 
+void kvm_arch_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+	int i;
+	gfn_t gfn;
+
+	for (i = 0; i < memslot->npages; ++i) {
+		if (!test_bit(i, memslot->dirty_bitmap)) {
+			if (is_dirty_and_clean_rmapp(kvm, &memslot->rmap[i]))
+				set_bit(i, memslot->dirty_bitmap);
+		}
+	}
+}
+
 /*
  * Get (and clear) the dirty memory log for a memory slot.
  */
@@ -1982,9 +1995,11 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 
 	/* If nothing is dirty, don't bother messing with page tables. */
 	if (is_dirty) {
-		spin_lock(&kvm->mmu_lock);
-		kvm_mmu_slot_remove_write_access(kvm, log->slot);
-		spin_unlock(&kvm->mmu_lock);
+		if (kvm_x86_ops->dirty_bit_support()) {
+			spin_lock(&kvm->mmu_lock);
+			kvm_mmu_slot_remove_write_access(kvm, log->slot);
+			spin_unlock(&kvm->mmu_lock);
+		}
 		kvm_flush_remote_tlbs(kvm);
 		memslot = &kvm->memslots[log->slot];
 		n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cdf1279..d1657a3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -250,6 +250,7 @@ int kvm_dev_ioctl_check_extension(long ext);
 
 int kvm_get_dirty_log(struct kvm *kvm,
 			struct kvm_dirty_log *log, int *is_dirty);
+void kvm_arch_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 				struct kvm_dirty_log *log);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4a60c72..bddc561 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -998,14 +998,18 @@ out:
 /*
  * Free any memory in @free but not in @dont.
  */
-static void kvm_free_physmem_slot(struct kvm_memory_slot *free,
+static void kvm_free_physmem_slot(struct kvm *kvm,
+				  struct kvm_memory_slot *free,
 				  struct kvm_memory_slot *dont)
 {
 	if (!dont || free->rmap != dont->rmap)
 		vfree(free->rmap);
 
-	if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
+	if (!dont || free->dirty_bitmap != dont->dirty_bitmap) {
+		if (dont && free->rmap == dont->rmap)
+			kvm_arch_flush_shadow(kvm);
 		vfree(free->dirty_bitmap);
+	}
 
 	if (!dont || free->lpage_info != dont->lpage_info)
 		vfree(free->lpage_info);
@@ -1021,7 +1025,7 @@ void kvm_free_physmem(struct kvm *kvm)
 	int i;
 
 	for (i = 0; i < kvm->nmemslots; ++i)
-		kvm_free_physmem_slot(&kvm->memslots[i], NULL);
+		kvm_free_physmem_slot(kvm, &kvm->memslots[i], NULL);
 }
 
 static void kvm_destroy_vm(struct kvm *kvm)
@@ -1217,7 +1221,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			goto out_free;
 	}
 
-	kvm_free_physmem_slot(&old, npages ? &new : NULL);
+	kvm_free_physmem_slot(kvm, &old, npages ? &new : NULL);
 	/* Slot deletion case: we have to update the current slot */
 	spin_lock(&kvm->mmu_lock);
 	if (!npages)
@@ -1232,7 +1236,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	return 0;
 
 out_free:
-	kvm_free_physmem_slot(&new, &old);
+	kvm_free_physmem_slot(kvm, &new, &old);
 out:
 	return r;
@@ -1279,6 +1283,7 @@ int kvm_get_dirty_log(struct kvm *kvm,
 	if (!memslot->dirty_bitmap)
 		goto out;
 
+	kvm_arch_get_dirty_log(kvm, memslot);
 	n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
 
 	for (i = 0; !any && i < n/sizeof(long); ++i)
-- 
1.5.6.5
* Re: [PATCH 2/2] kvm: change the dirty page tracking to work with dirty bit instead of page fault
From: Izik Eidus @ 2009-06-10 12:02 UTC (permalink / raw)
To: kvm; +Cc: avi

A few quick thoughts:

> +int is_dirty_and_clean_rmapp(struct kvm *kvm, unsigned long *rmapp)
> +{
> +	u64 *spte;
> +	int dirty = 0;
> +

Here we should add:

	if (!shadow_dirty_mask)
		return 0;

> +	spte = rmap_next(kvm, rmapp, NULL);
> +	while (spte) {
> +		if (*spte & PT_DIRTY_MASK) {
> +			set_shadow_pte(spte, (*spte &= ~PT_DIRTY_MASK) |
> +				       SPTE_DONT_DIRTY);
> +			dirty = 1;
> +		}
> +		spte = rmap_next(kvm, rmapp, spte);
> +	}
> +
> +	return dirty;
> +}

[...]

> 	/* If nothing is dirty, don't bother messing with page tables. */
> 	if (is_dirty) {
> -		spin_lock(&kvm->mmu_lock);
> -		kvm_mmu_slot_remove_write_access(kvm, log->slot);
> -		spin_unlock(&kvm->mmu_lock);
> +		if (kvm_x86_ops->dirty_bit_support()) {

This should be inverted:

	if (!kvm_x86_ops->dirty_bit_support())

> +			spin_lock(&kvm->mmu_lock);
> +			kvm_mmu_slot_remove_write_access(kvm, log->slot);
> +			spin_unlock(&kvm->mmu_lock);
> +		}
* Re: [PATCH 1/2] kvm: fix dirty bit tracking for slots with large pages
From: Avi Kivity @ 2009-06-10 11:54 UTC (permalink / raw)
To: Izik Eidus; +Cc: kvm, Marcelo Tosatti, Ryan Harper

Izik Eidus wrote:
> When a slot is already allocated and is asked to be tracked, we need
> to break its large pages.
>
> This patch flushes the MMU when someone asks a slot to start dirty
> bit tracking.
>
> Signed-off-by: Izik Eidus <ieidus@redhat.com>
>
> @@ -1160,6 +1160,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  			new.userspace_addr = mem->userspace_addr;
>  		else
>  			new.userspace_addr = 0;
> +
> +		kvm_arch_flush_shadow(kvm);
>  	}
>  	if (npages && !new.lpage_info) {
>  		largepages = 1 + (base_gfn + npages - 1) / KVM_PAGES_PER_HPAGE;

Ryan, can you try this out with your large page migration failures?

-- 
error compiling committee.c: too many arguments to function
* Re: [PATCH 1/2] kvm: fix dirty bit tracking for slots with large pages
From: Izik Eidus @ 2009-06-10 12:06 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Marcelo Tosatti, Ryan Harper

Avi Kivity wrote:
> Ryan, can you try this out with your large page migration failures?

Wait, I think it is in the wrong place. I am sending a second series :(
* [PATCH 0/2] RFC: use dirty bit for page dirty tracking (v2)
From: Izik Eidus @ 2009-06-10 16:23 UTC (permalink / raw)
To: kvm; +Cc: avi, Izik Eidus

RFC: move to dirty bit tracking using the page table dirty bit (v2)

(BTW, it seems like the VNC code in mainline has some bugs; I wasted
two hours debugging a rendering bug that I thought was related to this
series, but it turned out not to be.)

Thanks.

Izik Eidus (2):
  kvm: fix dirty bit tracking for slots with large pages
  kvm: change the dirty page tracking to work with the dirty bit

 arch/ia64/kvm/kvm-ia64.c        |  4 +++
 arch/powerpc/kvm/powerpc.c      |  4 +++
 arch/s390/kvm/kvm-s390.c        |  4 +++
 arch/x86/include/asm/kvm_host.h |  3 ++
 arch/x86/kvm/mmu.c              | 42 ++++++++++++++++++++++++++++++++++++--
 arch/x86/kvm/svm.c              |  7 ++++++
 arch/x86/kvm/vmx.c              |  7 ++++++
 arch/x86/kvm/x86.c              | 26 ++++++++++++++++++++---
 include/linux/kvm_host.h        |  1 +
 virt/kvm/kvm_main.c             |  8 ++++++-
 10 files changed, 98 insertions(+), 8 deletions(-)
* [PATCH 1/2] kvm: fix dirty bit tracking for slots with large pages
From: Izik Eidus @ 2009-06-10 16:23 UTC (permalink / raw)
To: kvm; +Cc: avi, Izik Eidus

When a slot is already allocated and is asked to be tracked, we need to
break its large pages.

This patch flushes the MMU when someone asks a slot to start dirty bit
tracking.

Signed-off-by: Izik Eidus <ieidus@redhat.com>
---
 virt/kvm/kvm_main.c | 2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 669eb4a..3046e9c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1194,6 +1194,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if (!new.dirty_bitmap)
 			goto out_free;
 		memset(new.dirty_bitmap, 0, dirty_bytes);
+		if (old.npages)
+			kvm_arch_flush_shadow(kvm);
 	}
 #endif /* not defined CONFIG_S390 */
-- 
1.5.6.5
* Re: [PATCH 1/2] kvm: fix dirty bit tracking for slots with large pages
From: Avi Kivity @ 2009-06-14 11:10 UTC (permalink / raw)
To: Izik Eidus; +Cc: kvm

Izik Eidus wrote:
> When a slot is already allocated and is asked to be tracked, we need
> to break its large pages.
>
> This patch flushes the MMU when someone asks a slot to start dirty
> bit tracking.

Applied, thanks.

-- 
error compiling committee.c: too many arguments to function