From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752090AbcBNLlE (ORCPT ); Sun, 14 Feb 2016 06:41:04 -0500
Received: from mga14.intel.com ([192.55.52.115]:13055 "EHLO mga14.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751890AbcBNLkR (ORCPT ); Sun, 14 Feb 2016 06:40:17 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.22,445,1449561600"; d="scan'208";a="911770841"
From: Xiao Guangrong
To: pbonzini@redhat.com
Cc: gleb@kernel.org, mtosatti@redhat.com, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, kai.huang@linux.intel.com,
	jike.song@intel.com, Xiao Guangrong
Subject: [PATCH v3 08/11] KVM: MMU: use page track for non-leaf shadow pages
Date: Sun, 14 Feb 2016 19:31:40 +0800
Message-Id: <1455449503-20993-9-git-send-email-guangrong.xiao@linux.intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1455449503-20993-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1455449503-20993-1-git-send-email-guangrong.xiao@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Non-leaf shadow pages are always write protected, so they can be made
users of page track.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bd9c278..e9dbd85 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -806,11 +806,17 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	struct kvm_memory_slot *slot;
 	gfn_t gfn;
 
+	kvm->arch.indirect_shadow_pages++;
 	gfn = sp->gfn;
 	slots = kvm_memslots_for_spte_role(kvm, sp->role);
 	slot = __gfn_to_memslot(slots, gfn);
+
+	/* the non-leaf shadow pages are keeping readonly. */
+	if (sp->role.level > PT_PAGE_TABLE_LEVEL)
+		return kvm_slot_page_track_add_page_nolock(kvm, slot, gfn,
+						KVM_PAGE_TRACK_WRITE);
+
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
-	kvm->arch.indirect_shadow_pages++;
 }
 
 static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -819,11 +825,15 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	struct kvm_memory_slot *slot;
 	gfn_t gfn;
 
+	kvm->arch.indirect_shadow_pages--;
 	gfn = sp->gfn;
 	slots = kvm_memslots_for_spte_role(kvm, sp->role);
 	slot = __gfn_to_memslot(slots, gfn);
+	if (sp->role.level > PT_PAGE_TABLE_LEVEL)
+		return kvm_slot_page_track_remove_page_nolock(kvm, slot, gfn,
+						KVM_PAGE_TRACK_WRITE);
+
 	kvm_mmu_gfn_allow_lpage(slot, gfn);
-	kvm->arch.indirect_shadow_pages--;
 }
 
 static bool __mmu_gfn_lpage_is_disallowed(gfn_t gfn, int level,
@@ -2132,12 +2142,18 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	hlist_add_head(&sp->hash_link,
 		&vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]);
 	if (!direct) {
-		if (rmap_write_protect(vcpu, gfn))
+		/*
+		 * we should do write protection before syncing pages
+		 * otherwise the content of the synced shadow page may
+		 * be inconsistent with guest page table.
+		 */
+		account_shadowed(vcpu->kvm, sp);
+
+		if (level == PT_PAGE_TABLE_LEVEL &&
+		      rmap_write_protect(vcpu, gfn))
 			kvm_flush_remote_tlbs(vcpu->kvm);
 		if (level > PT_PAGE_TABLE_LEVEL && need_sync)
 			kvm_sync_pages(vcpu, gfn);
-
-		account_shadowed(vcpu->kvm, sp);
 	}
 	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
 	clear_page(sp->spt);
-- 
1.8.3.1
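
For reference, account_shadowed() reads roughly as follows once the hunk
above is applied. This is a sketch assembled from the patch context, not
the exact file contents: the "struct kvm_memslots *slots;" declaration
sits outside the hunk's context lines and is assumed here.

/* Sketch of account_shadowed() with this patch applied. */
static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	struct kvm_memslots *slots;	/* assumed; not shown in the hunk */
	struct kvm_memory_slot *slot;
	gfn_t gfn;

	kvm->arch.indirect_shadow_pages++;
	gfn = sp->gfn;
	slots = kvm_memslots_for_spte_role(kvm, sp->role);
	slot = __gfn_to_memslot(slots, gfn);

	/*
	 * Non-leaf shadow pages stay read only and are handed to the
	 * write-track machinery; only last-level pages still go through
	 * the large-page accounting below.
	 */
	if (sp->role.level > PT_PAGE_TABLE_LEVEL)
		return kvm_slot_page_track_add_page_nolock(kvm, slot, gfn,
							   KVM_PAGE_TRACK_WRITE);

	kvm_mmu_gfn_disallow_lpage(slot, gfn);
}

Note also that kvm_mmu_get_page() now calls account_shadowed() before
kvm_sync_pages(), so upper-level shadow pages are already write tracked
by the time their contents are synced, and the rmap-based write
protection plus remote TLB flush is only needed for PT_PAGE_TABLE_LEVEL
pages.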