From: Marcelo Tosatti
Subject: Re: [patch 07/10] KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update
Date: Tue, 22 Sep 2009 13:16:26 -0300
Message-ID: <20090922161626.GA12506@amt.cnet>
References: <20090921233711.213665413@amt.cnet> <20090921234124.596305294@amt.cnet> <4AB875B8.80904@redhat.com>
In-Reply-To: <4AB875B8.80904@redhat.com>
To: Avi Kivity
Cc: kvm@vger.kernel.org

On Tue, Sep 22, 2009 at 09:59:04AM +0300, Avi Kivity wrote:
> On 09/22/2009 02:37 AM, Marcelo Tosatti wrote:
>> Use two steps for memslot deletion: mark the slot invalid (which stops
>> instantiation of new shadow pages for that slot, but allows destruction),
>> then instantiate the new empty slot.
>>
>> Also simplifies kvm_handle_hva locking.
>>
>>  unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>>  {
>> -	int i;
>> +	int i, idx;
>>  	unsigned int nr_mmu_pages;
>>  	unsigned int nr_pages = 0;
>> +	struct kvm_memslots *slots;
>>
>> -	for (i = 0; i < kvm->memslots->nmemslots; i++)
>> -		nr_pages += kvm->memslots->memslots[i].npages;
>> +	idx = srcu_read_lock(&kvm->srcu);
>>
>
> Doesn't the caller hold the srcu_read_lock() here?

No:

kvm_vm_ioctl_set_nr_mmu_pages -> kvm_mmu_change_mmu_pages

And even if the caller did, recursive "locking" is tolerated.

>> Index: kvm-slotslock/arch/x86/kvm/vmx.c
>> ===================================================================
>> --- kvm-slotslock.orig/arch/x86/kvm/vmx.c
>> +++ kvm-slotslock/arch/x86/kvm/vmx.c
>> @@ -24,6 +24,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>  #include
>>  #include
>>  #include "kvm_cache_regs.h"
>> @@ -1465,10 +1466,18 @@ static void enter_pmode(struct kvm_vcpu
>>  static gva_t rmode_tss_base(struct kvm *kvm)
>>  {
>>  	if (!kvm->arch.tss_addr) {
>> -		gfn_t base_gfn = kvm->memslots->memslots[0].base_gfn +
>> -			kvm->memslots->memslots[0].npages - 3;
>> +		struct kvm_memslots *slots;
>> +		gfn_t base_gfn;
>> +		int idx;
>> +
>> +		idx = srcu_read_lock(&kvm->srcu);
>> +		slots = rcu_dereference(kvm->memslots);
>> +		base_gfn = slots->memslots[0].base_gfn +
>> +			slots->memslots[0].npages - 3;
>> +		srcu_read_unlock(&kvm->srcu, idx);
>>  		return base_gfn << PAGE_SHIFT;
>>  	}
>> +
>>
>
> And here?

kvm_arch_vcpu_ioctl_set_sregs -> kvm_x86_ops->set_cr0.

> Maybe we should take the srcu_lock in vcpu_load/put and only
> drop it when going into vcpu context or explicitly sleeping, just to
> simplify things.

Hmm, possible, but I'd rather leave it for later.
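
For reference, the locking scheme being adopted above is the standard SRCU
publish/reclaim pattern: readers fetch the current copy inside an SRCU
read-side section (where they are allowed to sleep), while updaters install
a replacement copy and wait for a grace period before freeing the old one.
Below is a minimal, self-contained sketch of that pattern; the names
(foo_table, foo_total_pages, foo_set_slot) are invented stand-ins for
kvm->memslots and its accessors, not the actual KVM code.

/*
 * Illustrative sketch only -- invented names, not the KVM code itself.
 * A pointer to an immutable table is published with rcu_assign_pointer();
 * readers fetch it inside an SRCU read-side section (and may sleep there);
 * updaters install a new copy and call synchronize_srcu() before freeing
 * the old one.
 */
#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/srcu.h>

struct foo_table {
	int nslots;
	unsigned long npages[32];
};

static struct foo_table *foo;		/* published, read via rcu_dereference */
static struct srcu_struct foo_srcu;	/* read-side protection for foo */
static DEFINE_MUTEX(foo_lock);		/* serializes updaters only */

static int foo_init(void)
{
	foo = kzalloc(sizeof(*foo), GFP_KERNEL);
	if (!foo)
		return -ENOMEM;
	return init_srcu_struct(&foo_srcu);
}

/* Reader: may block, and may nest inside another SRCU read-side section. */
static unsigned long foo_total_pages(void)
{
	struct foo_table *t;
	unsigned long total = 0;
	int i, idx;

	idx = srcu_read_lock(&foo_srcu);
	t = rcu_dereference(foo);
	for (i = 0; i < t->nslots; i++)
		total += t->npages[i];
	srcu_read_unlock(&foo_srcu, idx);

	return total;
}

/* Updater: copy, modify, publish, wait out old readers, then free. */
static int foo_set_slot(int slot, unsigned long npages)
{
	struct foo_table *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	mutex_lock(&foo_lock);
	old = foo;
	*new = *old;			/* start from the current table */
	new->npages[slot] = npages;	/* e.g. mark a slot invalid, or install it */
	rcu_assign_pointer(foo, new);	/* readers now see the new table */
	mutex_unlock(&foo_lock);

	synchronize_srcu(&foo_srcu);	/* no readers of "old" remain */
	kfree(old);

	return 0;
}

srcu_read_lock() calls nest freely, since each returns its own index to pass
to the matching srcu_read_unlock(); that is why the recursive "locking"
mentioned above is harmless. The cost is paid on the update side, where
synchronize_srcu() must wait out every reader of the old copy; the two-step
memslot deletion described in the changelog runs this update twice, first
publishing a copy with the slot marked invalid, then one with the slot gone.
(Newer kernels would spell the reader-side fetch srcu_dereference() rather
than rcu_dereference(), but the structure is the same.)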