From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [PATCH 1/3] kvm: dont hold pagecount reference for mapped sptes pages
Date: Thu, 24 Sep 2009 11:18:32 -0300
Message-ID: <20090924141832.GB13623@amt.cnet>
References: <1253731638-24575-1-git-send-email-ieidus@redhat.com>
 <1253731638-24575-2-git-send-email-ieidus@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: avi@redhat.com, kvm@vger.kernel.org, aarcange@redhat.com, Jan Kiszka
To: Izik Eidus
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:32548 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751932AbZIXOjq (ORCPT );
	Thu, 24 Sep 2009 10:39:46 -0400
Content-Disposition: inline
In-Reply-To: <1253731638-24575-2-git-send-email-ieidus@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

This needs compat code for the !MMU_NOTIFIERS case in kvm-kmod (Jan CC'ed).

Otherwise looks good.

On Wed, Sep 23, 2009 at 09:47:16PM +0300, Izik Eidus wrote:
> When using mmu notifiers, we are allowed to remove the page count
> reference taken by get_user_pages to a specific page that is mapped
> inside the shadow page tables.
>
> This is needed so we can balance the pagecount against mapcount
> checking.
>
> (Right now kvm increases the pagecount but does not increase the
> mapcount when mapping a page into a shadow page table entry,
> so when comparing pagecount against mapcount, you have no
> reliable result.)
>
> Signed-off-by: Izik Eidus
> ---
>  arch/x86/kvm/mmu.c |    7 ++-----
>  1 files changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index eca41ae..6c67b23 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -634,9 +634,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
>  	if (*spte & shadow_accessed_mask)
>  		kvm_set_pfn_accessed(pfn);
>  	if (is_writeble_pte(*spte))
> -		kvm_release_pfn_dirty(pfn);
> -	else
> -		kvm_release_pfn_clean(pfn);
> +		kvm_set_pfn_dirty(pfn);
>  	rmapp = gfn_to_rmap(kvm, sp->gfns[spte - sp->spt], sp->role.level);
>  	if (!*rmapp) {
>  		printk(KERN_ERR "rmap_remove: %p %llx 0->BUG\n", spte, *spte);
> @@ -1877,8 +1875,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>  	page_header_update_slot(vcpu->kvm, sptep, gfn);
>  	if (!was_rmapped) {
>  		rmap_count = rmap_add(vcpu, sptep, gfn);
> -		if (!is_rmap_spte(*sptep))
> -			kvm_release_pfn_clean(pfn);
> +		kvm_release_pfn_clean(pfn);
>  		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
>  			rmap_recycle(vcpu, sptep, gfn);
>  	} else {
> --
> 1.5.6.5