KVM Archive on lore.kernel.org
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: kvm@vger.kernel.org, avi@redhat.com
Subject: Re: [patch 07/10] KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update
Date: Thu, 24 Sep 2009 10:28:41 -0700
Message-ID: <20090924172841.GC6265@linux.vnet.ibm.com> (raw)
In-Reply-To: <20090924140651.GA13623@amt.cnet>

On Thu, Sep 24, 2009 at 11:06:51AM -0300, Marcelo Tosatti wrote:
> On Mon, Sep 21, 2009 at 08:37:18PM -0300, Marcelo Tosatti wrote:
> > Use two steps for memslot deletion: mark the slot invalid (which stops 
> > instantiation of new shadow pages for that slot, but allows destruction),
> > then instantiate the new empty slot.
> > 
> > Also simplifies kvm_handle_hva locking.
> > 
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> > 
> 
> <snip>
> 
> > -	if (!npages)
> > +	if (!npages) {
> > +		slots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
> > +		if (!slots)
> > +			goto out_free;
> > +		memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
> > +		if (mem->slot >= slots->nmemslots)
> > +			slots->nmemslots = mem->slot + 1;
> > +		slots->memslots[mem->slot].flags |= KVM_MEMSLOT_INVALID;
> > +
> > +		old_memslots = kvm->memslots;
> > +		rcu_assign_pointer(kvm->memslots, slots);
> > +		synchronize_srcu(&kvm->srcu);
> > +		/* From this point no new shadow pages pointing to a deleted
> > +		 * memslot will be created.
> > +		 *
> > +		 * validation of sp->gfn happens in:
> > +		 *	- gfn_to_hva (kvm_read_guest, gfn_to_pfn)
> > +		 *	- kvm_is_visible_gfn (mmu_check_roots)
> > +		 */
> >  		kvm_arch_flush_shadow(kvm);
> > +		kfree(old_memslots);
> > +	}
> >  
> >  	r = kvm_arch_prepare_memory_region(kvm, &new, old, user_alloc);
> >  	if (r)
> >  		goto out_free;
> >  
> > -	spin_lock(&kvm->mmu_lock);
> > -	if (mem->slot >= kvm->memslots->nmemslots)
> > -		kvm->memslots->nmemslots = mem->slot + 1;
> > +#ifdef CONFIG_DMAR
> > +	/* map the pages in iommu page table */
> > +	if (npages) {
> > +		r = kvm_iommu_map_pages(kvm, &new);
> > +		if (r)
> > +			goto out_free;
> > +	}
> > +#endif
> >  
> > -	*memslot = new;
> > -	spin_unlock(&kvm->mmu_lock);
> > +	slots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
> > +	if (!slots)
> > +		goto out_free;
> > +	memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
> > +	if (mem->slot >= slots->nmemslots)
> > +		slots->nmemslots = mem->slot + 1;
> > +
> > +	/* actual memory is freed via old in kvm_free_physmem_slot below */
> > +	if (!npages) {
> > +		new.rmap = NULL;
> > +		new.dirty_bitmap = NULL;
> > +		for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i)
> > +			new.lpage_info[i] = NULL;
> > +	}
> > +
> > +	slots->memslots[mem->slot] = new;
> > +	old_memslots = kvm->memslots;
> > +	rcu_assign_pointer(kvm->memslots, slots);
> > +	synchronize_srcu(&kvm->srcu);
> >  
> >  	kvm_arch_commit_memory_region(kvm, mem, old, user_alloc);
> 
> Paul,
> 
> There is a scenario where this path, which updates KVM memory slots, is
> called relatively often.
> 
> Each synchronize_srcu() call takes about 10ms (an average of 3ms per
> underlying synchronize_sched() call), so this is hurting us.
> 
> Is this expected? Is there any way to optimize synchronize_srcu()?
> 
> There are other angles we can work on, such as reducing the frequency
> of memory slot updates, but I'm wondering what can be done regarding
> SRCU itself.

This is expected behavior, but a possible fix is already in mainline
(Linus's git tree).  The idea would be to create a
synchronize_srcu_expedited() that starts from synchronize_srcu() and
replaces its synchronize_sched() calls with synchronize_sched_expedited().

This could potentially reduce the overall synchronize_srcu() latency
to well under a microsecond.  The price to be paid is that each instance
of synchronize_sched_expedited() IPIs all the online CPUs, and awakens
the migration thread on each.

Would this approach likely work for you?

							Thanx, Paul
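[Editorial sketch, for readers following along in the archive: the factoring below is an illustration of the approach Paul describes, not a quote of kernel/srcu.c. The classic synchronize_srcu() brackets its counter flip with full sched grace periods, so an expedited variant only needs to swap in the faster primitive:]

```c
/* Sketch only.  synchronize_srcu() and synchronize_srcu_expedited()
 * share one body and differ only in which sched-grace-period
 * primitive they invoke. */
static void __synchronize_srcu(struct srcu_struct *sp, void (*sync)(void))
{
	int idx;

	mutex_lock(&sp->mutex);
	sync();				/* prior readers see consistent state */
	idx = sp->completed & 0x1;
	sp->completed++;		/* flip the active counter pair */
	sync();
	while (srcu_readers_active_idx(sp, idx))
		schedule_timeout_interruptible(1);
	sync();				/* readers of the old index are done */
	mutex_unlock(&sp->mutex);
}

void synchronize_srcu(struct srcu_struct *sp)
{
	__synchronize_srcu(sp, synchronize_sched);
}

void synchronize_srcu_expedited(struct srcu_struct *sp)
{
	__synchronize_srcu(sp, synchronize_sched_expedited);
}
```

This is where the cost Marcelo measured comes from: three sequential sched grace periods at roughly 3ms each. Substituting the expedited primitive trades that latency for IPIs to all online CPUs on each call.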

Thread overview: 26+ messages
2009-09-21 23:37 [patch 00/10] RFC: switch vcpu context to use SRCU Marcelo Tosatti
2009-09-21 23:37 ` [patch 01/10] KVM: modify memslots layout in struct kvm Marcelo Tosatti
2009-09-21 23:37 ` [patch 02/10] KVM: modify alias layout in x86s struct kvm_arch Marcelo Tosatti
2009-09-21 23:37 ` [patch 03/10] KVM: switch dirty_log to mmu_lock protection Marcelo Tosatti
2009-09-22  6:37   ` Avi Kivity
2009-09-22 12:44     ` Marcelo Tosatti
2009-09-22 12:52       ` Avi Kivity
2009-09-21 23:37 ` [patch 04/10] KVM: split kvm_arch_set_memory_region into prepare and commit Marcelo Tosatti
2009-09-22  6:40   ` Avi Kivity
2009-09-21 23:37 ` [patch 05/10] KVM: introduce gfn_to_pfn_memslot Marcelo Tosatti
2009-09-21 23:37 ` [patch 06/10] KVM: use gfn_to_pfn_memslot in kvm_iommu_map_pages Marcelo Tosatti
2009-09-21 23:37 ` [patch 07/10] KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update Marcelo Tosatti
2009-09-22  6:59   ` Avi Kivity
2009-09-22 16:16     ` Marcelo Tosatti
2009-09-22 10:40   ` Fernando Carrijo
2009-09-22 12:55     ` Marcelo Tosatti
2009-09-24 14:06   ` Marcelo Tosatti
2009-09-24 17:28     ` Paul E. McKenney [this message]
2009-09-24 18:05       ` Marcelo Tosatti
2009-09-25 15:05       ` Avi Kivity
2009-09-21 23:37 ` [patch 08/10] KVM: x86: switch kvm_set_memory_alias " Marcelo Tosatti
2009-09-22  7:04   ` Avi Kivity
2009-09-21 23:37 ` [patch 09/10] KVM: convert io_bus to SRCU Marcelo Tosatti
2009-09-21 23:37 ` [patch 10/10] KVM: switch vcpu context to use SRCU Marcelo Tosatti
2009-09-22  7:07   ` Avi Kivity
2009-09-22  7:09 ` [patch 00/10] RFC: " Avi Kivity
