From: Wanpeng Li <wanpeng.li@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Marcelo Tosatti, Paolo Bonzini, Xiao Guangrong, Wanpeng Li
Subject: [PATCH] kvm: mmu: lazy collapse small sptes into large sptes
Date: Mon, 30 Mar 2015 07:48:35 +0800
Message-Id: <1427672915-5749-1-git-send-email-wanpeng.li@linux.intel.com>
X-Mailer: git-send-email 1.9.1

There are two scenarios that call for collapsing small sptes back into
large sptes.

- Dirty logging tracks sptes at 4k granularity, so large sptes are split.
  When live migration succeeds, the large sptes are reallocated on the
  destination machine and the guest on the source machine is destroyed.
  However, if live migration fails for some reason, the guest on the
  source machine keeps running with sptes that are still small, which
  leads to bad performance.

- Our customers write tools that track the dirty speed of guests via the
  EPT D bit/PML in order to determine the most appropriate guest to live
  migrate; the sptes still stay small after that dirty-speed tracking.

This patch introduces lazy collapsing of small sptes into large sptes:
when dirty logging is stopped, the memory region is scanned in the ioctl
context, the sptes that can be collapsed into large pages are dropped
during the scan, and the later #PFs reallocate all the large sptes.
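For context, a minimal hypothetical userspace sketch (not part of this
patch) of the trigger path: a VMM that had registered a slot with
KVM_MEM_LOG_DIRTY_PAGES turns dirty logging off by reissuing
KVM_SET_USER_MEMORY_REGION with the flag cleared, which is the ioctl
context in which the new kvm_mmu_zap_collapsible_sptes() scan runs.
The vm_fd/slot/address parameters below are placeholders.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hypothetical helper: vm_fd, slot, guest_phys, size and hva are assumed
 * to describe a memslot that was previously registered with the
 * KVM_MEM_LOG_DIRTY_PAGES flag set.
 */
static int stop_dirty_logging(int vm_fd, __u32 slot, __u64 guest_phys,
			      __u64 size, void *hva)
{
	struct kvm_userspace_memory_region region;

	memset(&region, 0, sizeof(region));
	region.slot = slot;
	region.flags = 0;		/* KVM_MEM_LOG_DIRTY_PAGES cleared */
	region.guest_phys_addr = guest_phys;
	region.memory_size = size;
	region.userspace_addr = (__u64)(unsigned long)hva;

	/*
	 * Old flags had KVM_MEM_LOG_DIRTY_PAGES, new flags do not, so the
	 * commit hook drops the collapsible 4k sptes for this slot; later
	 * faults rebuild them as large sptes.
	 */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}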
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.c              | 66 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c              |  5 ++++
 3 files changed, 73 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a236e39..73de5d3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -859,6 +859,8 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot);
+void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				   struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index cee7592..d25ced1 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4465,6 +4465,72 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	kvm_flush_remote_tlbs(kvm);
 }
 
+static int kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
+					unsigned long *rmapp)
+{
+	u64 *sptep;
+	struct rmap_iterator iter;
+	int need_tlb_flush = 0;
+	pfn_t pfn;
+	struct kvm_mmu_page *sp;
+
+	for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
+		BUG_ON(!(*sptep & PT_PRESENT_MASK));
+
+		sp = page_header(__pa(sptep));
+		pfn = spte_to_pfn(*sptep);
+		if (sp->role.direct &&
+			!kvm_is_reserved_pfn(pfn) &&
+			PageTransCompound(pfn_to_page(pfn))) {
+			drop_spte(kvm, sptep);
+			need_tlb_flush = 1;
+		}
+		sptep = rmap_get_next(&iter);
+	}
+
+	return need_tlb_flush;
+}
+
+void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
+			struct kvm_memory_slot *memslot)
+{
+	bool flush = false;
+	unsigned long *rmapp;
+	unsigned long last_index, index;
+	gfn_t gfn_start, gfn_end;
+
+	spin_lock(&kvm->mmu_lock);
+
+	gfn_start = memslot->base_gfn;
+	gfn_end = memslot->base_gfn + memslot->npages - 1;
+
+	if (gfn_start >= gfn_end)
+		goto out;
+
+	rmapp = memslot->arch.rmap[0];
+	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
+			PT_PAGE_TABLE_LEVEL);
+
+	for (index = 0; index <= last_index; ++index, ++rmapp) {
+		if (*rmapp)
+			flush |= kvm_mmu_zap_collapsible_spte(kvm, rmapp);
+
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+			if (flush) {
+				kvm_flush_remote_tlbs(kvm);
+				flush = false;
+			}
+			cond_resched_lock(&kvm->mmu_lock);
+		}
+	}
+
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+out:
+	spin_unlock(&kvm->mmu_lock);
+}
+
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c5f7e03..6037389 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7618,6 +7618,11 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	/* It's OK to get 'new' slot here as it has already been installed */
 	new = id_to_memslot(kvm->memslots, mem->slot);
 
+	if ((change != KVM_MR_DELETE) &&
+		(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
+		!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		kvm_mmu_zap_collapsible_sptes(kvm, new);
+
 	/*
 	 * Set up write protection and/or dirty logging for the new slot.
 	 *
-- 
1.9.1