Message-ID: <5523FAC5.6080209@redhat.com>
Date: Tue, 07 Apr 2015 17:41:57 +0200
From: Paolo Bonzini
To: Wanpeng Li, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Xiao Guangrong
Subject: Re: [PATCH v3] kvm: mmu: lazy collapse small sptes into large sptes
References: <1428046825-6905-1-git-send-email-wanpeng.li@linux.intel.com>
In-Reply-To: <1428046825-6905-1-git-send-email-wanpeng.li@linux.intel.com>

On 03/04/2015 09:40, Wanpeng Li wrote:
> There are two scenarios that call for collapsing small sptes back into
> large sptes:
>
> - Dirty logging tracks sptes at 4k granularity, so large sptes are
>   split.  If live migration succeeds, large sptes are reallocated on
>   the destination machine and the guest on the source machine is
>   destroyed; if migration fails for some reason, however, the guest
>   keeps running on the source machine with small sptes, which hurts
>   performance.
>
> - Our customers write tools that track the dirty rate of guests via
>   the EPT D bit/PML in order to pick the most suitable guest to live
>   migrate; the sptes remain small after the dirty-rate tracking ends.
>
> This patch introduces lazy collapsing of small sptes into large sptes:
> when dirty logging is stopped, the memory region is scanned in the
> ioctl context, sptes that can be collapsed into large pages are
> dropped during the scan, and later #PFs reallocate them as large
> sptes.
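For readers following along: the userspace trigger for this path is a
KVM_SET_USER_MEMORY_REGION call that clears KVM_MEM_LOG_DIRTY_PAGES on
the slot.  A minimal, illustrative sketch, not part of this patch (the
vm_fd/slot/address parameters and the helper name are hypothetical):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Hypothetical helper: turn dirty logging off for one memslot.  With
 * this patch applied, kvm_arch_commit_memory_region() sees the
 * KVM_MEM_LOG_DIRTY_PAGES flag go away and calls
 * kvm_mmu_zap_collapsible_sptes() on the slot. */
static int stop_dirty_logging(int vm_fd, __u32 slot, __u64 gpa,
			      __u64 size, __u64 hva)
{
	struct kvm_userspace_memory_region region = {
		.slot            = slot,
		.flags           = 0,	/* dirty logging off */
		.guest_phys_addr = gpa,
		.memory_size     = size,
		.userspace_addr  = hva,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}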
>
> Reviewed-by: Xiao Guangrong
> Signed-off-by: Wanpeng Li
> ---
> v2 -> v3:
>  * update comments
>  * fix infinite for loop
> v1 -> v2:
>  * use 'bool' instead of 'int'
>  * add more comments
>  * fix failure to get the next spte after dropping the current spte
>
>  arch/x86/include/asm/kvm_host.h |  2 ++
>  arch/x86/kvm/mmu.c              | 73 +++++++++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/x86.c              | 19 +++++++++++
>  3 files changed, 94 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 30b28dc..91b5bdb 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -854,6 +854,8 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
>  void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
>  void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  				      struct kvm_memory_slot *memslot);
> +void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> +				   struct kvm_memory_slot *memslot);
>  void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
>  				   struct kvm_memory_slot *memslot);
>  void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index cee7592..ba002a0 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4465,6 +4465,79 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  	kvm_flush_remote_tlbs(kvm);
>  }
>
> +static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
> +					 unsigned long *rmapp)
> +{
> +	u64 *sptep;
> +	struct rmap_iterator iter;
> +	int need_tlb_flush = 0;
> +	pfn_t pfn;
> +	struct kvm_mmu_page *sp;
> +
> +	for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
> +		BUG_ON(!(*sptep & PT_PRESENT_MASK));
> +
> +		sp = page_header(__pa(sptep));
> +		pfn = spte_to_pfn(*sptep);
> +
> +		/*
> +		 * Only EPT is supported for now; we still need to figure
> +		 * out an efficient way to make this code aware of the
> +		 * mapping level used in the guest.
> +		 */
> +		if (sp->role.direct &&
> +			!kvm_is_reserved_pfn(pfn) &&
> +			PageTransCompound(pfn_to_page(pfn))) {
> +			drop_spte(kvm, sptep);
> +			sptep = rmap_get_first(*rmapp, &iter);
> +			need_tlb_flush = 1;
> +		} else
> +			sptep = rmap_get_next(&iter);
> +	}
> +
> +	return need_tlb_flush;
> +}
> +
> +void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> +				   struct kvm_memory_slot *memslot)
> +{
> +	bool flush = false;
> +	unsigned long *rmapp;
> +	unsigned long last_index, index;
> +	gfn_t gfn_start, gfn_end;
> +
> +	spin_lock(&kvm->mmu_lock);
> +
> +	gfn_start = memslot->base_gfn;
> +	gfn_end = memslot->base_gfn + memslot->npages - 1;
> +
> +	if (gfn_start >= gfn_end)
> +		goto out;
> +
> +	rmapp = memslot->arch.rmap[0];
> +	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
> +				  PT_PAGE_TABLE_LEVEL);
> +
> +	for (index = 0; index <= last_index; ++index, ++rmapp) {
> +		if (*rmapp)
> +			flush |= kvm_mmu_zap_collapsible_spte(kvm, rmapp);
> +
> +		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
> +			if (flush) {
> +				kvm_flush_remote_tlbs(kvm);
> +				flush = false;
> +			}
> +			cond_resched_lock(&kvm->mmu_lock);
> +		}
> +	}
> +
> +	if (flush)
> +		kvm_flush_remote_tlbs(kvm);
> +
> +out:
> +	spin_unlock(&kvm->mmu_lock);
> +}
> +
>  void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
>  				   struct kvm_memory_slot *memslot)
>  {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 50861dd..a6cd10b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7647,6 +7647,25 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  	new = id_to_memslot(kvm->memslots, mem->slot);
>
>  	/*
> +	 * Dirty logging tracks sptes at 4k granularity, so large sptes are
> +	 * split.  If live migration succeeds, large sptes are reallocated
> +	 * on the destination machine and the guest on the source machine
> +	 * is destroyed; if migration fails for some reason, however, the
> +	 * guest keeps running on the source machine with small sptes,
> +	 * which hurts performance.
> +	 *
> +	 * Lazily collapsing small sptes into large sptes handles this:
> +	 * when dirty logging is stopped, the memory region is scanned in
> +	 * the ioctl context, sptes that can be collapsed into large pages
> +	 * are dropped during the scan, and later #PFs reallocate them as
> +	 * large sptes.
> +	 */
> +	if ((change != KVM_MR_DELETE) &&
> +		(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> +		!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
> +		kvm_mmu_zap_collapsible_sptes(kvm, new);
> +
> +	/*
>  	 * Set up write protection and/or dirty logging for the new slot.
>  	 *
>  	 * For KVM_MR_DELETE and KVM_MR_MOVE, the shadow pages of old slot have
>

Applied, with just some editing of the comments and the commit message.
Thanks to you and Xiao.

Paolo