* [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
@ 2012-12-18  7:25 Takuya Yoshikawa
  2012-12-18  7:26 ` [PATCH 1/7] KVM: Write protect the updated slot only " Takuya Yoshikawa
                   ` (8 more replies)
  0 siblings, 9 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:25 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

This patch set makes kvm_mmu_slot_remove_write_access() rmap based and
adds conditional rescheduling to it.

The motivation for this change is of course to reduce the mmu_lock hold
time when we start dirty logging for a large memory slot.  You may not
see the problem if you give the guest only 8GB or less of memory with
THP enabled on the host -- this series is for the worst case.
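
For reference, here is a sketch of where the series ends up, assembled
from the diffs in patches 3, 6 and 7 (a sketch, not the literal final
code; see the individual patches for the exact hunks):

void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
{
	struct kvm_memory_slot *memslot;
	gfn_t last_gfn;
	int i;

	memslot = id_to_memslot(kvm->memslots, slot);
	last_gfn = memslot->base_gfn + memslot->npages - 1;

	spin_lock(&kvm->mmu_lock);

	/* Walk the slot's rmap array for each mapping level. */
	for (i = PT_PAGE_TABLE_LEVEL;
	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
		unsigned long *rmapp;
		unsigned long last_index, index;

		rmapp = memslot->arch.rmap[i - PT_PAGE_TABLE_LEVEL];
		last_index = gfn_to_index(last_gfn, memslot->base_gfn, i);

		for (index = 0; index <= last_index; ++index, ++rmapp) {
			if (*rmapp)
				__rmap_write_protect(kvm, rmapp, false);

			/* Flush and break the lock if someone is waiting. */
			if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
				kvm_flush_remote_tlbs(kvm);
				cond_resched_lock(&kvm->mmu_lock);
			}
		}
	}

	kvm_flush_remote_tlbs(kvm);
	spin_unlock(&kvm->mmu_lock);
}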


IMPORTANT NOTE (not about this patch set):

I have hit the following bug many times with the current next branch,
even WITHOUT my patches.  Although I do not know a way to reproduce this
yet, it seems that something was broken around slot->dirty_bitmap.  I am
now investigating the new code in __kvm_set_memory_region().

The bug:
[  575.238063] BUG: unable to handle kernel paging request at 00000002efe83a77
[  575.238185] IP: [<ffffffffa05f9619>] mark_page_dirty_in_slot+0x19/0x20 [kvm]
[  575.238308] PGD 0 
[  575.238343] Oops: 0002 [#1] SMP 

The call trace:
[  575.241207] Call Trace:
[  575.241257]  [<ffffffffa05f96b1>] kvm_write_guest_cached+0x91/0xb0 [kvm]
[  575.241370]  [<ffffffffa0610db9>] kvm_arch_vcpu_ioctl_run+0x1109/0x12c0 [kvm]
[  575.241488]  [<ffffffffa060fd55>] ? kvm_arch_vcpu_ioctl_run+0xa5/0x12c0 [kvm]
[  575.241595]  [<ffffffff81679194>] ? mutex_lock_killable_nested+0x274/0x340
[  575.241706]  [<ffffffffa05faf80>] ? kvm_set_ioapic_irq+0x20/0x20 [kvm]
[  575.241813]  [<ffffffffa05f71c9>] kvm_vcpu_ioctl+0x559/0x670 [kvm]
[  575.241913]  [<ffffffffa05f8a58>] ? kvm_vm_ioctl+0x1b8/0x570 [kvm]
[  575.242007]  [<ffffffff8101b9d3>] ? native_sched_clock+0x13/0x80
[  575.242125]  [<ffffffff8101ba49>] ? sched_clock+0x9/0x10
[  575.242208]  [<ffffffff8109015d>] ? sched_clock_cpu+0xbd/0x110
[  575.242298]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
[  575.242381]  [<ffffffff8119dfa8>] do_vfs_ioctl+0x98/0x570
[  575.242463]  [<ffffffff811a91b1>] ? fget_light+0xa1/0x140
[  575.246393]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
[  575.250363]  [<ffffffff8119e511>] sys_ioctl+0x91/0xb0
[  575.254327]  [<ffffffff81684c19>] system_call_fastpath+0x16/0x1b


Takuya Yoshikawa (7):
  KVM: Write protect the updated slot only when we start dirty logging
  KVM: MMU: Remove unused parameter level from __rmap_write_protect()
  KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
  KVM: x86: Remove unused slot_bitmap from kvm_mmu_page
  KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
  KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
  KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time

 Documentation/virtual/kvm/mmu.txt |    7 ----
 arch/x86/include/asm/kvm_host.h   |    5 ---
 arch/x86/kvm/mmu.c                |   56 +++++++++++++++++++-----------------
 arch/x86/kvm/x86.c                |   13 +++++---
 virt/kvm/kvm_main.c               |    1 -
 5 files changed, 38 insertions(+), 44 deletions(-)

-- 
1.7.5.4



* [PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
@ 2012-12-18  7:26 ` Takuya Yoshikawa
  2012-12-24 13:27   ` Gleb Natapov
  2013-01-07 20:11   ` Marcelo Tosatti
  2012-12-18  7:27 ` [PATCH 2/7] KVM: MMU: Remove unused parameter level from __rmap_write_protect() Takuya Yoshikawa
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:26 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

This is needed to make kvm_mmu_slot_remove_write_access() rmap based:
otherwise we may end up using invalid rmaps.
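
As an illustration of when this path is reached, a hypothetical
userspace snippet (vm_fd, hva and mem_size are placeholders for an
already registered slot; only the flags change):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Turning on dirty logging for an existing slot triggers the
 * LOG_DIRTY_PAGES transition tested in the hunk below. */
static int enable_dirty_log(int vm_fd, void *hva, uint64_t mem_size)
{
	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.flags = KVM_MEM_LOG_DIRTY_PAGES,
		.guest_phys_addr = 0,
		.memory_size = mem_size,
		.userspace_addr = (uintptr_t)hva,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}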

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/x86.c  |    9 ++++++++-
 virt/kvm/kvm_main.c |    1 -
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1c9c834..9451efa 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6897,7 +6897,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	spin_lock(&kvm->mmu_lock);
 	if (nr_mmu_pages)
 		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
-	kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+	/*
+	 * Write protect all pages for dirty logging.
+	 * Existing largepage mappings are destroyed here and new ones will
+	 * not be created until the end of the logging.
+	 */
+	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
+	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
+		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
 	spin_unlock(&kvm->mmu_lock);
 	/*
 	 * If memory slot is created, or moved, we need to clear all
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bd31096..0ef5daa 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -805,7 +805,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
 		if (kvm_create_dirty_bitmap(&new) < 0)
 			goto out_free;
-		/* destroy any largepage mappings for dirty tracking */
 	}
 
 	if (!npages || base_gfn != old.base_gfn) {
-- 
1.7.5.4



* [PATCH 2/7] KVM: MMU: Remove unused parameter level from __rmap_write_protect()
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
  2012-12-18  7:26 ` [PATCH 1/7] KVM: Write protect the updated slot only " Takuya Yoshikawa
@ 2012-12-18  7:27 ` Takuya Yoshikawa
  2012-12-18  7:28 ` [PATCH 3/7] KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based Takuya Yoshikawa
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:27 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

We no longer need to care about the mapping level in this function.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 01d7c2a..bee3509 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1142,7 +1142,7 @@ spte_write_protect(struct kvm *kvm, u64 *sptep, bool *flush, bool pt_protect)
 }
 
 static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
-				 int level, bool pt_protect)
+				 bool pt_protect)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1180,7 +1180,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmapp = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 				      PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmapp, PT_PAGE_TABLE_LEVEL, false);
+		__rmap_write_protect(kvm, rmapp, false);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1199,7 +1199,7 @@ static bool rmap_write_protect(struct kvm *kvm, u64 gfn)
 	for (i = PT_PAGE_TABLE_LEVEL;
 	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
 		rmapp = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmapp, i, true);
+		write_protected |= __rmap_write_protect(kvm, rmapp, true);
 	}
 
 	return write_protected;
-- 
1.7.5.4



* [PATCH 3/7] KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
  2012-12-18  7:26 ` [PATCH 1/7] KVM: Write protect the updated slot only " Takuya Yoshikawa
  2012-12-18  7:27 ` [PATCH 2/7] KVM: MMU: Remove unused parameter level from __rmap_write_protect() Takuya Yoshikawa
@ 2012-12-18  7:28 ` Takuya Yoshikawa
  2012-12-18  7:28 ` [PATCH 4/7] KVM: x86: Remove unused slot_bitmap from kvm_mmu_page Takuya Yoshikawa
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:28 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

This makes it possible to release mmu_lock and reschedule conditionally
in a later patch.  Although this may increase the time needed to protect
the whole slot when we start dirty logging, the kernel should not allow
userspace to trigger something that holds a spinlock for as long as tens
of milliseconds: in fact there is no upper bound, since the hold time is
roughly proportional to the number of guest pages.

Another point to note is that this patch removes the only user of
slot_bitmap, a structure that would cause problems if we increased the
number of slots further.
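
For reference, the per-level rmap index used in the new loop comes from
gfn_to_index(); at the time it looked roughly like this (quoted from
memory, from arch/x86/include/asm/kvm_host.h, so treat it as a sketch):

static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
{
	/* KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) must be 0. */
	return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
	       (base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
}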

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |   28 +++++++++++++++-------------
 1 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bee3509..b4d4fd1 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4198,25 +4198,27 @@ int kvm_mmu_setup(struct kvm_vcpu *vcpu)
 
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 {
-	struct kvm_mmu_page *sp;
-	bool flush = false;
+	struct kvm_memory_slot *memslot;
+	gfn_t last_gfn;
+	int i;
 
-	list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link) {
-		int i;
-		u64 *pt;
+	memslot = id_to_memslot(kvm->memslots, slot);
+	last_gfn = memslot->base_gfn + memslot->npages - 1;
 
-		if (!test_bit(slot, sp->slot_bitmap))
-			continue;
+	for (i = PT_PAGE_TABLE_LEVEL;
+	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
+		unsigned long *rmapp;
+		unsigned long last_index, index;
 
-		pt = sp->spt;
-		for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
-			if (!is_shadow_present_pte(pt[i]) ||
-			      !is_last_spte(pt[i], sp->role.level))
-				continue;
+		rmapp = memslot->arch.rmap[i - PT_PAGE_TABLE_LEVEL];
+		last_index = gfn_to_index(last_gfn, memslot->base_gfn, i);
 
-			spte_write_protect(kvm, &pt[i], &flush, false);
+		for (index = 0; index <= last_index; ++index, ++rmapp) {
+			if (*rmapp)
+				__rmap_write_protect(kvm, rmapp, false);
 		}
 	}
+
 	kvm_flush_remote_tlbs(kvm);
 }
 
-- 
1.7.5.4



* [PATCH 4/7] KVM: x86: Remove unused slot_bitmap from kvm_mmu_page
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
                   ` (2 preceding siblings ...)
  2012-12-18  7:28 ` [PATCH 3/7] KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based Takuya Yoshikawa
@ 2012-12-18  7:28 ` Takuya Yoshikawa
  2012-12-18  7:29 ` [PATCH 5/7] KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself Takuya Yoshikawa
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:28 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

Not needed any more.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 Documentation/virtual/kvm/mmu.txt |    7 -------
 arch/x86/include/asm/kvm_host.h   |    5 -----
 arch/x86/kvm/mmu.c                |   10 ----------
 3 files changed, 0 insertions(+), 22 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
index fa5f1db..43fcb76 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -187,13 +187,6 @@ Shadow pages contain the following information:
     perform a reverse map from a pte to a gfn. When role.direct is set, any
     element of this array can be calculated from the gfn field when used, in
     this case, the array of gfns is not allocated. See role.direct and gfn.
-  slot_bitmap:
-    A bitmap containing one bit per memory slot.  If the page contains a pte
-    mapping a page from memory slot n, then bit n of slot_bitmap will be set
-    (if a page is aliased among several slots, then it is not guaranteed that
-    all slots will be marked).
-    Used during dirty logging to avoid scanning a shadow page if none if its
-    pages need tracking.
   root_count:
     A counter keeping track of how many hardware registers (guest cr3 or
     pdptrs) are now pointing at the page.  While this counter is nonzero, the
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c431b33..f75e1fe 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -219,11 +219,6 @@ struct kvm_mmu_page {
 	u64 *spt;
 	/* hold the gfn of each spte inside spt */
 	gfn_t *gfns;
-	/*
-	 * One bit set per slot which has memory
-	 * in this shadow page.
-	 */
-	DECLARE_BITMAP(slot_bitmap, KVM_MEM_SLOTS_NUM);
 	bool unsync;
 	int root_count;          /* Currently serving as active root */
 	unsigned int unsync_children;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b4d4fd1..bb964b3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1522,7 +1522,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
 		sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-	bitmap_zero(sp->slot_bitmap, KVM_MEM_SLOTS_NUM);
 	sp->parent_ptes = 0;
 	mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 	kvm_mod_used_mmu_pages(vcpu->kvm, +1);
@@ -2183,14 +2182,6 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_unprotect_page);
 
-static void page_header_update_slot(struct kvm *kvm, void *pte, gfn_t gfn)
-{
-	int slot = memslot_id(kvm, gfn);
-	struct kvm_mmu_page *sp = page_header(__pa(pte));
-
-	__set_bit(slot, sp->slot_bitmap);
-}
-
 /*
  * The function is based on mtrr_type_lookup() in
  * arch/x86/kernel/cpu/mtrr/generic.c
@@ -2497,7 +2488,6 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		++vcpu->kvm->stat.lpages;
 
 	if (is_shadow_present_pte(*sptep)) {
-		page_header_update_slot(vcpu->kvm, sptep, gfn);
 		if (!was_rmapped) {
 			rmap_count = rmap_add(vcpu, sptep, gfn);
 			if (rmap_count > RMAP_RECYCLE_THRESHOLD)
-- 
1.7.5.4



* [PATCH 5/7] KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
                   ` (3 preceding siblings ...)
  2012-12-18  7:28 ` [PATCH 4/7] KVM: x86: Remove unused slot_bitmap from kvm_mmu_page Takuya Yoshikawa
@ 2012-12-18  7:29 ` Takuya Yoshikawa
  2012-12-18  7:30 ` [PATCH 6/7] KVM: Make kvm_mmu_slot_remove_write_access() " Takuya Yoshikawa
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:29 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

There is no reason to make callers take mmu_lock, since we do not need
to protect kvm_mmu_change_mmu_pages() and
kvm_mmu_slot_remove_write_access() together with mmu_lock in
kvm_arch_commit_memory_region(): the former calls
kvm_mmu_commit_zap_page() and flushes TLBs by itself.

Note: we do not need to protect kvm->arch.n_requested_mmu_pages with
mmu_lock, as can be seen from the fact that it is read locklessly.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |    4 ++++
 arch/x86/kvm/x86.c |    9 ++++-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bb964b3..fc7d84a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2143,6 +2143,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
 	 * change the value
 	 */
 
+	spin_lock(&kvm->mmu_lock);
+
 	if (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages) {
 		while (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages &&
 			!list_empty(&kvm->arch.active_mmu_pages)) {
@@ -2157,6 +2159,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
 	}
 
 	kvm->arch.n_max_mmu_pages = goal_nr_mmu_pages;
+
+	spin_unlock(&kvm->mmu_lock);
 }
 
 int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9451efa..d9d0f6b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3270,12 +3270,10 @@ static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
 		return -EINVAL;
 
 	mutex_lock(&kvm->slots_lock);
-	spin_lock(&kvm->mmu_lock);
 
 	kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
 	kvm->arch.n_requested_mmu_pages = kvm_nr_mmu_pages;
 
-	spin_unlock(&kvm->mmu_lock);
 	mutex_unlock(&kvm->slots_lock);
 	return 0;
 }
@@ -6894,7 +6892,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	if (!kvm->arch.n_requested_mmu_pages)
 		nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
 
-	spin_lock(&kvm->mmu_lock);
 	if (nr_mmu_pages)
 		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
 	/*
@@ -6903,9 +6900,11 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * not be created until the end of the logging.
 	 */
 	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
-	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
+	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES)) {
+		spin_lock(&kvm->mmu_lock);
 		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
-	spin_unlock(&kvm->mmu_lock);
+		spin_unlock(&kvm->mmu_lock);
+	}
 	/*
 	 * If memory slot is created, or moved, we need to clear all
 	 * mmio sptes.
-- 
1.7.5.4



* [PATCH 6/7] KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
                   ` (4 preceding siblings ...)
  2012-12-18  7:29 ` [PATCH 5/7] KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself Takuya Yoshikawa
@ 2012-12-18  7:30 ` Takuya Yoshikawa
  2012-12-18  7:30 ` [PATCH 7/7] KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time Takuya Yoshikawa
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:30 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

Better to place mmu_lock handling and TLB flushing code together since
this is a self-contained function.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |    3 +++
 arch/x86/kvm/x86.c |    5 +----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fc7d84a..b7a1235 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4199,6 +4199,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 	memslot = id_to_memslot(kvm->memslots, slot);
 	last_gfn = memslot->base_gfn + memslot->npages - 1;
 
+	spin_lock(&kvm->mmu_lock);
+
 	for (i = PT_PAGE_TABLE_LEVEL;
 	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
 		unsigned long *rmapp;
@@ -4214,6 +4216,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 	}
 
 	kvm_flush_remote_tlbs(kvm);
+	spin_unlock(&kvm->mmu_lock);
 }
 
 void kvm_mmu_zap_all(struct kvm *kvm)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d9d0f6b..aa183e9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6900,11 +6900,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * not be created until the end of the logging.
 	 */
 	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
-	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES)) {
-		spin_lock(&kvm->mmu_lock);
+	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
 		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
-		spin_unlock(&kvm->mmu_lock);
-	}
 	/*
 	 * If memory slot is created, or moved, we need to clear all
 	 * mmio sptes.
-- 
1.7.5.4



* [PATCH 7/7] KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
                   ` (5 preceding siblings ...)
  2012-12-18  7:30 ` [PATCH 6/7] KVM: Make kvm_mmu_slot_remove_write_access() " Takuya Yoshikawa
@ 2012-12-18  7:30 ` Takuya Yoshikawa
  2012-12-19 12:30 ` [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
  2013-01-07 20:36 ` Marcelo Tosatti
  8 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-18  7:30 UTC (permalink / raw)
  To: mtosatti, gleb; +Cc: kvm, linux-kernel

If userspace starts dirty logging for a large slot, say 64GB of memory,
kvm_mmu_slot_remove_write_access() needs to hold mmu_lock for a long
time, on the order of tens of milliseconds.  This patch bounds the lock
hold time by asking the scheduler whether we need to reschedule for
others.
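
(For scale: a 64GB slot backed by 4KB pages has 64GB / 4KB = 16M
last-level gfns, and the loop visits one rmap slot per gfn per mapping
level, so the hold time really does grow linearly with slot size.)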

One penalty for this is that we need to flush TLBs before releasing
mmu_lock.  But since holding mmu_lock for a long time affects not only
the guest, in other words its vCPU threads, but also the host as a
whole, it is a price worth paying.
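
Concretely, the hunk below has this shape (a sketch, with the
reasoning spelled out in comments):

	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
		/*
		 * Flush before dropping mmu_lock: sptes made read-only
		 * above may still have writable TLB entries on other
		 * CPUs, and vCPUs must not run with those while we are
		 * rescheduled.
		 */
		kvm_flush_remote_tlbs(kvm);
		cond_resched_lock(&kvm->mmu_lock);
	}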

In practice, the cost will not be so high, because we can protect a fair
amount of memory before being rescheduled: in my test environment,
cond_resched_lock() was called only once while protecting 12GB of
memory, even without THP.  We can also revisit Avi's "unlocked TLB
flush" work later to suppress the extra TLB flushes completely, if
needed.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b7a1235..a32e8cf 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4212,6 +4212,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 		for (index = 0; index <= last_index; ++index, ++rmapp) {
 			if (*rmapp)
 				__rmap_write_protect(kvm, rmapp, false);
+
+			if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+				kvm_flush_remote_tlbs(kvm);
+				cond_resched_lock(&kvm->mmu_lock);
+			}
 		}
 	}
 
-- 
1.7.5.4



* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
                   ` (6 preceding siblings ...)
  2012-12-18  7:30 ` [PATCH 7/7] KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time Takuya Yoshikawa
@ 2012-12-19 12:30 ` Takuya Yoshikawa
  2012-12-19 15:42   ` Alex Williamson
  2013-01-07 20:36 ` Marcelo Tosatti
  8 siblings, 1 reply; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-19 12:30 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: mtosatti, gleb, kvm, linux-kernel, alex.williamson

Ccing Alex,

I tested the kvm.git next branch before Alex's patch set was applied,
and did not see the bug.  I'm not 100% sure, but it is possible that
something was broken by a patch in the following series:

[01/10] KVM: Restrict non-existing slot state transitions
[02/10] KVM: Check userspace_addr when modifying a memory slot
[03/10] KVM: Fix iommu map/unmap to handle memory slot moves
[04/10] KVM: Minor memory slot optimization
[05/10] KVM: Rename KVM_MEMORY_SLOTS -> KVM_USER_MEM_SLOTS
[06/10] KVM: Make KVM_PRIVATE_MEM_SLOTS optional
[07/10] KVM: struct kvm_memory_slot.user_alloc -> bool
[08/10] KVM: struct kvm_memory_slot.flags -> u32
[09/10] KVM: struct kvm_memory_slot.id -> short
[10/10] KVM: Increase user memory slots on x86 to 125

If I can get time, I will check which one caused the problem tomorrow.

Thanks,
	Takuya


On Tue, 18 Dec 2012 16:25:58 +0900
Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp> wrote:

> IMPORTANT NOTE (not about this patch set):
> 
> I have hit the following bug many times with the current next branch,
> even WITHOUT my patches.  Although I do not know a way to reproduce this
> yet, it seems that something was broken around slot->dirty_bitmap.  I am
> now investigating the new code in __kvm_set_memory_region().
> 
> The bug:
> [  575.238063] BUG: unable to handle kernel paging request at 00000002efe83a77
> [  575.238185] IP: [<ffffffffa05f9619>] mark_page_dirty_in_slot+0x19/0x20 [kvm]
> [  575.238308] PGD 0 
> [  575.238343] Oops: 0002 [#1] SMP 
> 
> The call trace:
> [  575.241207] Call Trace:
> [  575.241257]  [<ffffffffa05f96b1>] kvm_write_guest_cached+0x91/0xb0 [kvm]
> [  575.241370]  [<ffffffffa0610db9>] kvm_arch_vcpu_ioctl_run+0x1109/0x12c0 [kvm]
> [  575.241488]  [<ffffffffa060fd55>] ? kvm_arch_vcpu_ioctl_run+0xa5/0x12c0 [kvm]
> [  575.241595]  [<ffffffff81679194>] ? mutex_lock_killable_nested+0x274/0x340
> [  575.241706]  [<ffffffffa05faf80>] ? kvm_set_ioapic_irq+0x20/0x20 [kvm]
> [  575.241813]  [<ffffffffa05f71c9>] kvm_vcpu_ioctl+0x559/0x670 [kvm]
> [  575.241913]  [<ffffffffa05f8a58>] ? kvm_vm_ioctl+0x1b8/0x570 [kvm]
> [  575.242007]  [<ffffffff8101b9d3>] ? native_sched_clock+0x13/0x80
> [  575.242125]  [<ffffffff8101ba49>] ? sched_clock+0x9/0x10
> [  575.242208]  [<ffffffff8109015d>] ? sched_clock_cpu+0xbd/0x110
> [  575.242298]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
> [  575.242381]  [<ffffffff8119dfa8>] do_vfs_ioctl+0x98/0x570
> [  575.242463]  [<ffffffff811a91b1>] ? fget_light+0xa1/0x140
> [  575.246393]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
> [  575.250363]  [<ffffffff8119e511>] sys_ioctl+0x91/0xb0
> [  575.254327]  [<ffffffff81684c19>] system_call_fastpath+0x16/0x1b


* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-19 12:30 ` [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
@ 2012-12-19 15:42   ` Alex Williamson
  2012-12-20  5:02     ` Takuya Yoshikawa
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-12-19 15:42 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: Takuya Yoshikawa, mtosatti, gleb, kvm, linux-kernel

On Wed, 2012-12-19 at 21:30 +0900, Takuya Yoshikawa wrote:
> Ccing Alex,
> 
> I tested the kvm.git next branch before Alex's patch set was applied,
> and did not see the bug.  I'm not 100% sure, but it is possible that
> something was broken by a patch in the following series:
> 
> [01/10] KVM: Restrict non-existing slot state transitions
> [02/10] KVM: Check userspace_addr when modifying a memory slot
> [03/10] KVM: Fix iommu map/unmap to handle memory slot moves
> [04/10] KVM: Minor memory slot optimization
> [05/10] KVM: Rename KVM_MEMORY_SLOTS -> KVM_USER_MEM_SLOTS
> [06/10] KVM: Make KVM_PRIVATE_MEM_SLOTS optional
> [07/10] KVM: struct kvm_memory_slot.user_alloc -> bool
> [08/10] KVM: struct kvm_memory_slot.flags -> u32
> [09/10] KVM: struct kvm_memory_slot.id -> short
> [10/10] KVM: Increase user memory slots on x86 to 125
> 
> If I can get time, I will check which one caused the problem tomorrow.

Please let me know if you can identify one of these as the culprit.
They're all very simple, but there's always a chance I've missed a hard
coding of slot numbers somewhere.  Thanks,

Alex

> On Tue, 18 Dec 2012 16:25:58 +0900
> Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp> wrote:
> 
> > IMPORTANT NOTE (not about this patch set):
> > 
> > I have hit the following bug many times with the current next branch,
> > even WITHOUT my patches.  Although I do not know a way to reproduce this
> > yet, it seems that something was broken around slot->dirty_bitmap.  I am
> > now investigating the new code in __kvm_set_memory_region().
> > 
> > The bug:
> > [  575.238063] BUG: unable to handle kernel paging request at 00000002efe83a77
> > [  575.238185] IP: [<ffffffffa05f9619>] mark_page_dirty_in_slot+0x19/0x20 [kvm]
> > [  575.238308] PGD 0 
> > [  575.238343] Oops: 0002 [#1] SMP 
> > 
> > The call trace:
> > [  575.241207] Call Trace:
> > [  575.241257]  [<ffffffffa05f96b1>] kvm_write_guest_cached+0x91/0xb0 [kvm]
> > [  575.241370]  [<ffffffffa0610db9>] kvm_arch_vcpu_ioctl_run+0x1109/0x12c0 [kvm]
> > [  575.241488]  [<ffffffffa060fd55>] ? kvm_arch_vcpu_ioctl_run+0xa5/0x12c0 [kvm]
> > [  575.241595]  [<ffffffff81679194>] ? mutex_lock_killable_nested+0x274/0x340
> > [  575.241706]  [<ffffffffa05faf80>] ? kvm_set_ioapic_irq+0x20/0x20 [kvm]
> > [  575.241813]  [<ffffffffa05f71c9>] kvm_vcpu_ioctl+0x559/0x670 [kvm]
> > [  575.241913]  [<ffffffffa05f8a58>] ? kvm_vm_ioctl+0x1b8/0x570 [kvm]
> > [  575.242007]  [<ffffffff8101b9d3>] ? native_sched_clock+0x13/0x80
> > [  575.242125]  [<ffffffff8101ba49>] ? sched_clock+0x9/0x10
> > [  575.242208]  [<ffffffff8109015d>] ? sched_clock_cpu+0xbd/0x110
> > [  575.242298]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
> > [  575.242381]  [<ffffffff8119dfa8>] do_vfs_ioctl+0x98/0x570
> > [  575.242463]  [<ffffffff811a91b1>] ? fget_light+0xa1/0x140
> > [  575.246393]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
> > [  575.250363]  [<ffffffff8119e511>] sys_ioctl+0x91/0xb0
> > [  575.254327]  [<ffffffff81684c19>] system_call_fastpath+0x16/0x1b





* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-19 15:42   ` Alex Williamson
@ 2012-12-20  5:02     ` Takuya Yoshikawa
  2012-12-20 12:59       ` Marcelo Tosatti
  0 siblings, 1 reply; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-20  5:02 UTC (permalink / raw)
  To: Alex Williamson; +Cc: Takuya Yoshikawa, mtosatti, gleb, kvm, linux-kernel

On Wed, 19 Dec 2012 08:42:57 -0700
Alex Williamson <alex.williamson@redhat.com> wrote:

> Please let me know if you can identify one of these as the culprit.
> They're all very simple, but there's always a chance I've missed a hard
> coding of slot numbers somewhere.  Thanks,

I identified the one:
  commit b7f69c555ca430129b6cde81e9f0927531420c5c
  KVM: Minor memory slot optimization

IIUC, the problem was that you did not take into account the generation
of the slots, which is updated by update_memslots():

  Your patch reused the old memslots structure that was there before
  the update that invalidated the slot, and worse, we flushed shadow
  pages after that, before the second update that finally installs the
  new slot.  As a result, the generation did not change from that of
  the invalidated one, although the ghc (gfn to hva cache) might be
  stale.

  After that, kvm_write_guest_cached() checked whether ghc should be
  initialized by comparing ghc's generation with that old one, with the
  result that mark_page_dirty_in_slot() was called with the stale cache
  contents.

Although we can do something to correct the generation alone, I do not
think such a trick is worth it because this is not a hot path.  Let's
just revert the patch.
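
For reference, the consumer of the generation is
kvm_write_guest_cached(); a rough sketch of the relevant logic
(paraphrased from the virt/kvm/kvm_main.c of that time, so treat the
details as approximate):

	struct kvm_memslots *slots = kvm_memslots(kvm);

	/*
	 * The cache is trusted whenever the generations match.  If an
	 * update forgets to bump slots->generation, a ghc pointing
	 * into an invalidated memslot passes this check.
	 */
	if (slots->generation != ghc->generation)
		kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa);
	...
	/* With a stale ghc, this dereferences a dead memslot. */
	mark_page_dirty_in_slot(kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);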

Thanks,
	Takuya


* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-20  5:02     ` Takuya Yoshikawa
@ 2012-12-20 12:59       ` Marcelo Tosatti
  2012-12-20 13:22         ` Gleb Natapov
  0 siblings, 1 reply; 28+ messages in thread
From: Marcelo Tosatti @ 2012-12-20 12:59 UTC (permalink / raw)
  To: Takuya Yoshikawa
  Cc: Alex Williamson, Takuya Yoshikawa, gleb, kvm, linux-kernel

On Thu, Dec 20, 2012 at 02:02:32PM +0900, Takuya Yoshikawa wrote:
> On Wed, 19 Dec 2012 08:42:57 -0700
> Alex Williamson <alex.williamson@redhat.com> wrote:
> 
> > Please let me know if you can identify one of these as the culprit.
> > They're all very simple, but there's always a chance I've missed a hard
> > coding of slot numbers somewhere.  Thanks,
> 
> I identified the one:
>   commit b7f69c555ca430129b6cde81e9f0927531420c5c
>   KVM: Minor memory slot optimization
> 
> IIUC, the problem was that you did not take into account the generation
> of the slots, which is updated by update_memslots():
> 
>   Your patch reused the old memslots structure that was there before
>   the update that invalidated the slot, and worse, we flushed shadow
>   pages after that, before the second update that finally installs the
>   new slot.  As a result, the generation did not change from that of
>   the invalidated one, although the ghc (gfn to hva cache) might be
>   stale.
> 
>   After that, kvm_write_guest_cached() checked whether ghc should be
>   initialized by comparing ghc's generation with that old one, with the
>   result that mark_page_dirty_in_slot() was called with the stale cache
>   contents.
> 
> Although we can do something to correct the generation alone, I do not
> think such a trick is worth it because this is not a hot path.  Let's
> just revert the patch.

Agreed. No dependencies on it in the following patches?



* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-20 12:59       ` Marcelo Tosatti
@ 2012-12-20 13:22         ` Gleb Natapov
  2012-12-20 13:41           ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Gleb Natapov @ 2012-12-20 13:22 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Takuya Yoshikawa, Alex Williamson, Takuya Yoshikawa, kvm, linux-kernel

On Thu, Dec 20, 2012 at 10:59:46AM -0200, Marcelo Tosatti wrote:
> On Thu, Dec 20, 2012 at 02:02:32PM +0900, Takuya Yoshikawa wrote:
> > On Wed, 19 Dec 2012 08:42:57 -0700
> > Alex Williamson <alex.williamson@redhat.com> wrote:
> > 
> > > Please let me know if you can identify one of these as the culprit.
> > > They're all very simple, but there's always a chance I've missed a hard
> > > coding of slot numbers somewhere.  Thanks,
> > 
> > I identified the one:
> >   commit b7f69c555ca430129b6cde81e9f0927531420c5c
> >   KVM: Minor memory slot optimization
> > 
> > IIUC, the problem was that you did not take into account the generation
> > of the slots, which is updated by update_memslots():
> > 
> >   Your patch reused the old memslots structure that was there before
> >   the update that invalidated the slot, and worse, we flushed shadow
> >   pages after that, before the second update that finally installs the
> >   new slot.  As a result, the generation did not change from that of
> >   the invalidated one, although the ghc (gfn to hva cache) might be
> >   stale.
> > 
> >   After that, kvm_write_guest_cached() checked whether ghc should be
> >   initialized by comparing ghc's generation with that old one, with the
> >   result that mark_page_dirty_in_slot() was called with the stale cache
> >   contents.
> > 
> > Although we can do something to correct the generation alone, I do not
> > think such a trick is worth it because this is not a hot path.  Let's
> > just revert the patch.
> 
> Agreed. No dependencies on it in the following patches?
Heh, this generation management looks subtle.  It would be easy to break
with other changes to the code.  I wonder whether we can make it less
subtle somehow.

--
			Gleb.


* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-20 13:22         ` Gleb Natapov
@ 2012-12-20 13:41           ` Alex Williamson
  2012-12-20 14:35             ` Takuya Yoshikawa
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-12-20 13:41 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Marcelo Tosatti, Takuya Yoshikawa, Takuya Yoshikawa, kvm, linux-kernel

On Thu, 2012-12-20 at 15:22 +0200, Gleb Natapov wrote:
> On Thu, Dec 20, 2012 at 10:59:46AM -0200, Marcelo Tosatti wrote:
> > On Thu, Dec 20, 2012 at 02:02:32PM +0900, Takuya Yoshikawa wrote:
> > > On Wed, 19 Dec 2012 08:42:57 -0700
> > > Alex Williamson <alex.williamson@redhat.com> wrote:
> > > 
> > > > Please let me know if you can identify one of these as the culprit.
> > > > They're all very simple, but there's always a chance I've missed a hard
> > > > coding of slot numbers somewhere.  Thanks,
> > > 
> > > I identified the one:
> > >   commit b7f69c555ca430129b6cde81e9f0927531420c5c
> > >   KVM: Minor memory slot optimization
> > > 
> > > IIUC, the problem was that you did not take into account the generation
> > > of the slots, which is updated by update_memslots():
> > > 
> > >   Your patch reused the old memslots structure that was there before
> > >   the update that invalidated the slot, and worse, we flushed shadow
> > >   pages after that, before the second update that finally installs the
> > >   new slot.  As a result, the generation did not change from that of
> > >   the invalidated one, although the ghc (gfn to hva cache) might be
> > >   stale.
> > > 
> > >   After that, kvm_write_guest_cached() checked whether ghc should be
> > >   initialized by comparing ghc's generation with that old one, with the
> > >   result that mark_page_dirty_in_slot() was called with the stale cache
> > >   contents.
> > > 
> > > Although we can do something to correct the generation alone, I do not
> > > think such a trick is worth it because this is not a hot path.  Let's
> > > just revert the patch.
> > 
> > Agreed. No dependencies on it in the following patches?
> Heh, this generation management looks subtle.  It would be easy to break
> with other changes to the code.  I wonder whether we can make it less
> subtle somehow.

Hmm, isn't the fix as simple as:

--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -847,7 +847,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
                                GFP_KERNEL);
                if (!slots)
                        goto out_free;
-       }
+       } else
+               slots->generation = kvm->memslots->generation;
 
        /* map new memory slot into the iommu */
        if (npages) {

Or even just slots->generation++ since we're holding the lock across all
of this.

The original patch can be reverted, there are no following dependencies,
but the idea was that we're making the memslot array larger, so there
could be more pressure in allocating it, so let's not trivially do extra
frees and allocs.  Thanks,

Alex



* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-20 13:41           ` Alex Williamson
@ 2012-12-20 14:35             ` Takuya Yoshikawa
  2012-12-20 14:55               ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-20 14:35 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Gleb Natapov, Marcelo Tosatti, Takuya Yoshikawa, kvm, linux-kernel

On Thu, 20 Dec 2012 06:41:27 -0700
Alex Williamson <alex.williamson@redhat.com> wrote:

> Hmm, isn't the fix as simple as:
> 
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -847,7 +847,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
>                                 GFP_KERNEL);
>                 if (!slots)
>                         goto out_free;
> -       }
> +       } else
> +               slots->generation = kvm->memslots->generation;
>  
>         /* map new memory slot into the iommu */
>         if (npages) {
> 
> Or even just slots->generation++ since we're holding the lock across all
> of this.

Yes, the fix should work, but I do not want to update the
generation from outside of update_memslots().

> The original patch can be reverted, there are no following dependencies,
> but the idea was that we're making the memslot array larger, so there
> could be more pressure in allocating it, so let's not trivially do extra
> frees and allocs.  Thanks,

I agree that the current set_memory_region() is not good for frequent updates.
But the alloc/free is not a major factor at the moment: flushing shadows should
be more problematic.

Thanks,
	Takuya


* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-20 14:35             ` Takuya Yoshikawa
@ 2012-12-20 14:55               ` Alex Williamson
  2012-12-21  8:02                 ` Takuya Yoshikawa
  2012-12-21  8:05                 ` Takuya Yoshikawa
  0 siblings, 2 replies; 28+ messages in thread
From: Alex Williamson @ 2012-12-20 14:55 UTC (permalink / raw)
  To: Takuya Yoshikawa
  Cc: Gleb Natapov, Marcelo Tosatti, Takuya Yoshikawa, kvm, linux-kernel

On Thu, 2012-12-20 at 23:35 +0900, Takuya Yoshikawa wrote:
> On Thu, 20 Dec 2012 06:41:27 -0700
> Alex Williamson <alex.williamson@redhat.com> wrote:
> 
> > Hmm, isn't the fix as simple as:
> > 
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -847,7 +847,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >                                 GFP_KERNEL);
> >                 if (!slots)
> >                         goto out_free;
> > -       }
> > +       } else
> > +               slots->generation = kvm->memslots->generation;
> >  
> >         /* map new memory slot into the iommu */
> >         if (npages) {
> > 
> > Or even just slots->generation++ since we're holding the lock across all
> > of this.
> 
> Yes, the fix should work, but I do not want to update the
> generation from outside of update_memslots().

Ok, then:

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 87089dd..c7b5061 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -413,7 +413,8 @@ void kvm_exit(void);
 
 void kvm_get_kvm(struct kvm *kvm);
 void kvm_put_kvm(struct kvm *kvm);
-void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new);
+void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
+                     u64 last_generation);
 
 static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c4c8ec1..06961ea 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -667,7 +667,8 @@ static void sort_memslots(struct kvm_memslots *slots)
 		slots->id_to_index[slots->memslots[i].id] = i;
 }
 
-void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
+void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
+                     u64 last_generation)
 {
 	if (new) {
 		int id = new->id;
@@ -679,7 +680,7 @@ void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
 			sort_memslots(slots);
 	}
 
-	slots->generation++;
+	slots->generation = last_generation + 1;
 }
 
 static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
@@ -814,7 +815,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		slot = id_to_memslot(slots, mem->slot);
 		slot->flags |= KVM_MEMSLOT_INVALID;
 
-		update_memslots(slots, NULL);
+		update_memslots(slots, NULL, kvm->memslots->generation);
 
 		old_memslots = kvm->memslots;
 		rcu_assign_pointer(kvm->memslots, slots);
@@ -862,7 +863,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		memset(&new.arch, 0, sizeof(new.arch));
 	}
 
-	update_memslots(slots, &new);
+	update_memslots(slots, &new, kvm->memslots->generation);
 	old_memslots = kvm->memslots;
 	rcu_assign_pointer(kvm->memslots, slots);
 	synchronize_srcu_expedited(&kvm->srcu);

> > The original patch can be reverted, there are no following dependencies,
> > but the idea was that we're making the memslot array larger, so there
> > could be more pressure in allocating it, so let's not trivially do extra
> > frees and allocs.  Thanks,
> 
> I agree that the current set_memory_region() is not good for frequent updates.
> But the alloc/free is not a major factor at the moment: flushing shadows should
> be more problematic.

I don't understand why we'd throw away even a minor optimization that's
so easy to fix.  We're not only removing a free/alloc, but we're being
more friendly to the cache by avoiding the extra memcpy.  The above also
makes the generation management a bit more explicit.  Thanks,

Alex



* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-20 14:55               ` Alex Williamson
@ 2012-12-21  8:02                 ` Takuya Yoshikawa
  2012-12-21  8:54                   ` Gleb Natapov
  2012-12-21  8:05                 ` Takuya Yoshikawa
  1 sibling, 1 reply; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-21  8:02 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Takuya Yoshikawa, Gleb Natapov, Marcelo Tosatti, kvm, linux-kernel

On Thu, 20 Dec 2012 07:55:43 -0700
Alex Williamson <alex.williamson@redhat.com> wrote:

> > Yes, the fix should work, but I do not want to update the
> > generation from outside of update_memslots().
> 
> Ok, then:
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 87089dd..c7b5061 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -413,7 +413,8 @@ void kvm_exit(void);
>  
>  void kvm_get_kvm(struct kvm *kvm);
>  void kvm_put_kvm(struct kvm *kvm);
> -void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new);
> +void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
> +                     u64 last_generation);
>  
>  static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
>  {
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c4c8ec1..06961ea 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -667,7 +667,8 @@ static void sort_memslots(struct kvm_memslots *slots)
>  		slots->id_to_index[slots->memslots[i].id] = i;
>  }
>  
> -void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
> +void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
> +                     u64 last_generation)
>  {
>  	if (new) {
>  		int id = new->id;
> @@ -679,7 +680,7 @@ void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
>  			sort_memslots(slots);
>  	}
>  
> -	slots->generation++;
> +	slots->generation = last_generation + 1;
>  }
>  
>  static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
> @@ -814,7 +815,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		slot = id_to_memslot(slots, mem->slot);
>  		slot->flags |= KVM_MEMSLOT_INVALID;
>  
> -		update_memslots(slots, NULL);
> +		update_memslots(slots, NULL, kvm->memslots->generation);
>  
>  		old_memslots = kvm->memslots;
>  		rcu_assign_pointer(kvm->memslots, slots);
> @@ -862,7 +863,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		memset(&new.arch, 0, sizeof(new.arch));
>  	}
>  
> -	update_memslots(slots, &new);
> +	update_memslots(slots, &new, kvm->memslots->generation);
>  	old_memslots = kvm->memslots;
>  	rcu_assign_pointer(kvm->memslots, slots);
>  	synchronize_srcu_expedited(&kvm->srcu);
> 
> > > The original patch can be reverted, there are no following dependencies,
> > > but the idea was that we're making the memslot array larger, so there
> > > could be more pressure in allocating it, so let's not trivially do extra
> > > frees and allocs.  Thanks,
> > 
> > I agree that the current set_memory_region() is not good for frequent updates.
> > But the alloc/free is not a major factor at the moment: flushing shadows should
> > be more problematic.
> 
> I don't understand why we'd throw away even a minor optimization that's
> so easy to fix.  We're not only removing a free/alloc, but we're being
> more friendly to the cache by avoiding the extra memcpy.  The above also
> makes the generation management a bit more explicit.  Thanks,

Looks good to me!

I just wanted to keep the code readable, so no reason to object to
a clean solution.  Any chance to see the fix on kvm.git soon?

Thanks,
	Takuya


* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-21  8:02                 ` Takuya Yoshikawa
@ 2012-12-21  8:54                   ` Gleb Natapov
  2012-12-21 13:24                     ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Gleb Natapov @ 2012-12-21  8:54 UTC (permalink / raw)
  To: Takuya Yoshikawa
  Cc: Alex Williamson, Takuya Yoshikawa, Marcelo Tosatti, kvm, linux-kernel

On Fri, Dec 21, 2012 at 05:02:50PM +0900, Takuya Yoshikawa wrote:
> On Thu, 20 Dec 2012 07:55:43 -0700
> Alex Williamson <alex.williamson@redhat.com> wrote:
> 
> > > Yes, the fix should work, but I do not want to update the
> > > generation from outside of update_memslots().
> > 
> > Ok, then:
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 87089dd..c7b5061 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -413,7 +413,8 @@ void kvm_exit(void);
> >  
> >  void kvm_get_kvm(struct kvm *kvm);
> >  void kvm_put_kvm(struct kvm *kvm);
> > -void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new);
> > +void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
> > +                     u64 last_generation);
> >  
> >  static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
> >  {
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index c4c8ec1..06961ea 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -667,7 +667,8 @@ static void sort_memslots(struct kvm_memslots *slots)
> >  		slots->id_to_index[slots->memslots[i].id] = i;
> >  }
> >  
> > -void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
> > +void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
> > +                     u64 last_generation)
> >  {
> >  	if (new) {
> >  		int id = new->id;
> > @@ -679,7 +680,7 @@ void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
> >  			sort_memslots(slots);
> >  	}
> >  
> > -	slots->generation++;
> > +	slots->generation = last_generation + 1;
> >  }
> >  
> >  static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
> > @@ -814,7 +815,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >  		slot = id_to_memslot(slots, mem->slot);
> >  		slot->flags |= KVM_MEMSLOT_INVALID;
> >  
> > -		update_memslots(slots, NULL);
> > +		update_memslots(slots, NULL, kvm->memslots->generation);
> >  
> >  		old_memslots = kvm->memslots;
> >  		rcu_assign_pointer(kvm->memslots, slots);
> > @@ -862,7 +863,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >  		memset(&new.arch, 0, sizeof(new.arch));
> >  	}
> >  
> > -	update_memslots(slots, &new);
> > +	update_memslots(slots, &new, kvm->memslots->generation);
> >  	old_memslots = kvm->memslots;
> >  	rcu_assign_pointer(kvm->memslots, slots);
> >  	synchronize_srcu_expedited(&kvm->srcu);
> > 
> > > > The original patch can be reverted, there are no following dependencies,
> > > > but the idea was that we're making the memslot array larger, so there
> > > > could be more pressure in allocating it, so let's not trivially do extra
> > > > frees and allocs.  Thanks,
> > > 
> > > I agree that the current set_memory_region() is not good for frequent updates.
> > > But the alloc/free is not a major factor at the moment: flushing shadows should
> > > be more problematic.
> > 
> > I don't understand why we'd throw away even a minor optimization that's
> > so easy to fix.  We're not only removing a free/alloc, but we're being
> > more friendly to the cache by avoiding the extra memcpy.  The above also
> > makes the generation management a bit more explicit.  Thanks,
> 
> Looks good to me!
> 
Me too.

> I just wanted to keep the code readable, so no reason to object to
> a clean solution.  Any chance to see the fix on kvm.git soon?
> 
Soon, after Alex sends a proper patch with a Signed-off-by.

--
			Gleb.


* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-21  8:54                   ` Gleb Natapov
@ 2012-12-21 13:24                     ` Alex Williamson
  0 siblings, 0 replies; 28+ messages in thread
From: Alex Williamson @ 2012-12-21 13:24 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Takuya Yoshikawa, Takuya Yoshikawa, Marcelo Tosatti, kvm, linux-kernel

On Fri, 2012-12-21 at 10:54 +0200, Gleb Natapov wrote:
> On Fri, Dec 21, 2012 at 05:02:50PM +0900, Takuya Yoshikawa wrote:
> > On Thu, 20 Dec 2012 07:55:43 -0700
> > Alex Williamson <alex.williamson@redhat.com> wrote:
> > 
> > > > Yes, the fix should work, but I do not want to update the
> > > > generation from outside of update_memslots().
> > > 
> > > Ok, then:
> > >
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index 87089dd..c7b5061 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -413,7 +413,8 @@ void kvm_exit(void);
> > >  
> > >  void kvm_get_kvm(struct kvm *kvm);
> > >  void kvm_put_kvm(struct kvm *kvm);
> > > -void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new);
> > > +void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
> > > +                     u64 last_generation);
> > >  
> > >  static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
> > >  {
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index c4c8ec1..06961ea 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -667,7 +667,8 @@ static void sort_memslots(struct kvm_memslots *slots)
> > >  		slots->id_to_index[slots->memslots[i].id] = i;
> > >  }
> > >  
> > > -void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
> > > +void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
> > > +                     u64 last_generation)
> > >  {
> > >  	if (new) {
> > >  		int id = new->id;
> > > @@ -679,7 +680,7 @@ void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
> > >  			sort_memslots(slots);
> > >  	}
> > >  
> > > -	slots->generation++;
> > > +	slots->generation = last_generation + 1;
> > >  }
> > >  
> > >  static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
> > > @@ -814,7 +815,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> > >  		slot = id_to_memslot(slots, mem->slot);
> > >  		slot->flags |= KVM_MEMSLOT_INVALID;
> > >  
> > > -		update_memslots(slots, NULL);
> > > +		update_memslots(slots, NULL, kvm->memslots->generation);
> > >  
> > >  		old_memslots = kvm->memslots;
> > >  		rcu_assign_pointer(kvm->memslots, slots);
> > > @@ -862,7 +863,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> > >  		memset(&new.arch, 0, sizeof(new.arch));
> > >  	}
> > >  
> > > -	update_memslots(slots, &new);
> > > +	update_memslots(slots, &new, kvm->memslots->generation);
> > >  	old_memslots = kvm->memslots;
> > >  	rcu_assign_pointer(kvm->memslots, slots);
> > >  	synchronize_srcu_expedited(&kvm->srcu);
> > > 
> > > > > The original patch can be reverted, there are no following dependencies,
> > > > > but the idea was that we're making the memslot array larger, so there
> > > > > could be more pressure in allocating it, so let's not trivially do extra
> > > > > frees and allocs.  Thanks,
> > > > 
> > > > I agree that the current set_memory_region() is not good for frequent updates.
> > > > But the alloc/free is not a major factor at the moment: flushing shadows should
> > > > be more problematic.
> > > 
> > > I don't understand why we'd throw away even a minor optimization that's
> > > so easy to fix.  We're not only removing a free/alloc, but we're being
> > > more friendly to the cache by avoiding the extra memcpy.  The above also
> > > makes the generation management a bit more explicit.  Thanks,
> > 
> > Looks good to me!
> > 
> Me too.
> 
> > I just wanted to keep the code readable, so no reason to object to
> > a clean solution.  Any chance to see the fix on kvm.git soon?
> > 
> Soon after Alex sends a proper patch with a Signed-off-by.

I'll test and do that first thing today, thanks,

Alex
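
(For reference -- a sketch of why the generation is passed in explicitly,
distilled from the hunks above; illustrative only, not the literal
committed code:)

	/*
	 * With the reuse optimization, the slots array being updated may
	 * be the previously installed one, whose generation is one behind
	 * the array currently visible to readers.  Passing the visible
	 * generation keeps the sequence monotonic no matter which array
	 * is being recycled.
	 */
	update_memslots(slots, &new, kvm->memslots->generation);

	old_memslots = kvm->memslots;
	rcu_assign_pointer(kvm->memslots, slots);
	synchronize_srcu_expedited(&kvm->srcu);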


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging
  2012-12-18  7:26 ` [PATCH 1/7] KVM: Write protect the updated slot only " Takuya Yoshikawa
@ 2012-12-24 13:27   ` Gleb Natapov
  2012-12-25  4:08     ` Takuya Yoshikawa
  2013-01-07 20:11   ` Marcelo Tosatti
  1 sibling, 1 reply; 28+ messages in thread
From: Gleb Natapov @ 2012-12-24 13:27 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: mtosatti, kvm, linux-kernel

On Tue, Dec 18, 2012 at 04:26:47PM +0900, Takuya Yoshikawa wrote:
> This is needed to make kvm_mmu_slot_remove_write_access() rmap based:
> otherwise we may end up using invalid rmap's.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/x86.c  |    9 ++++++++-
>  virt/kvm/kvm_main.c |    1 -
>  2 files changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 1c9c834..9451efa 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6897,7 +6897,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  	spin_lock(&kvm->mmu_lock);
>  	if (nr_mmu_pages)
>  		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
> -	kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> +	/*
> +	 * Write protect all pages for dirty logging.
> +	 * Existing largepage mappings are destroyed here and new ones will
> +	 * not be created until the end of the logging.
> +	 */
> +	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> +	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
> +		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
We should not check old slot flags here or at least check that
old.npages is not zero. Userspace may delete a slot using old flags,
then, if new memslot is created with dirty log enabled, it will not be
protected.
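
Concretely, the sequence I have in mind (a sketch; SLOT_ID, GPA, SIZE and
HVA are placeholders):

	struct kvm_userspace_memory_region mem = {
		.slot            = SLOT_ID,
		.flags           = KVM_MEM_LOG_DIRTY_PAGES,
		.guest_phys_addr = GPA,
		.memory_size     = 0,	/* size 0 == delete the slot */
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);	/* delete */

	mem.memory_size    = SIZE;	/* recreate, logging requested */
	mem.userspace_addr = HVA;
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);	/* create */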

>  	spin_unlock(&kvm->mmu_lock);
>  	/*
>  	 * If memory slot is created, or moved, we need to clear all
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index bd31096..0ef5daa 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -805,7 +805,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
>  		if (kvm_create_dirty_bitmap(&new) < 0)
>  			goto out_free;
> -		/* destroy any largepage mappings for dirty tracking */
>  	}
>  
>  	if (!npages || base_gfn != old.base_gfn) {
> -- 
> 1.7.5.4

--
			Gleb.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging
  2012-12-24 13:27   ` Gleb Natapov
@ 2012-12-25  4:08     ` Takuya Yoshikawa
  2012-12-25  5:05       ` Gleb Natapov
  0 siblings, 1 reply; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-25  4:08 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: mtosatti, kvm, linux-kernel

On Mon, 24 Dec 2012 15:27:17 +0200
Gleb Natapov <gleb@redhat.com> wrote:

> > @@ -6897,7 +6897,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> >  	spin_lock(&kvm->mmu_lock);
> >  	if (nr_mmu_pages)
> >  		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
> > -	kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> > +	/*
> > +	 * Write protect all pages for dirty logging.
> > +	 * Existing largepage mappings are destroyed here and new ones will
> > +	 * not be created until the end of the logging.
> > +	 */
> > +	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> > +	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
> > +		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> We should not check old slot flags here or at least check that
> old.npages is not zero. Userspace may delete a slot using old flags,
> then, if new memslot is created with dirty log enabled, it will not be
> protected.

The flag, KVM_MEM_LOG_DIRTY_PAGES, is explicitly cleared when we
delete a slot to free dirty_bitmap:

	if (!npages)
		mem->flags &= ~KVM_MEM_LOG_DIRTY_PAGES;

So when old.npages is zero, the second condition must be true.
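
That is, spelled out (a sketch only; the helper name is made up):

	static bool starts_dirty_logging(u32 new_flags,
					 struct kvm_memory_slot *old)
	{
		/*
		 * Deletion forces KVM_MEM_LOG_DIRTY_PAGES off (see above),
		 * so a deleted slot (old->npages == 0) can never keep the
		 * bit set -- re-creating it with logging enabled satisfies
		 * both conditions and the slot gets write protected.
		 */
		return (new_flags & KVM_MEM_LOG_DIRTY_PAGES) &&
		       !(old->flags & KVM_MEM_LOG_DIRTY_PAGES);
	}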

Other parts are doing "if (!slot->dirty_bitmap)" to see if the slot
is in dirty logging mode.  If you prefer, we can do the same here.

Thanks,
	Takuya

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging
  2012-12-25  4:08     ` Takuya Yoshikawa
@ 2012-12-25  5:05       ` Gleb Natapov
  2012-12-25  5:26         ` Takuya Yoshikawa
  0 siblings, 1 reply; 28+ messages in thread
From: Gleb Natapov @ 2012-12-25  5:05 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: mtosatti, kvm, linux-kernel

On Tue, Dec 25, 2012 at 01:08:40PM +0900, Takuya Yoshikawa wrote:
> On Mon, 24 Dec 2012 15:27:17 +0200
> Gleb Natapov <gleb@redhat.com> wrote:
> 
> > > @@ -6897,7 +6897,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> > >  	spin_lock(&kvm->mmu_lock);
> > >  	if (nr_mmu_pages)
> > >  		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
> > > -	kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> > > +	/*
> > > +	 * Write protect all pages for dirty logging.
> > > +	 * Existing largepage mappings are destroyed here and new ones will
> > > +	 * not be created until the end of the logging.
> > > +	 */
> > > +	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> > > +	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
> > > +		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> > We should not check old slot flags here or at least check that
> > old.npages is not zero. Userspace may delete a slot using old flags,
> > then, if new memslot is created with dirty log enabled, it will not be
> > protected.
> 
> The flag, KVM_MEM_LOG_DIRTY_PAGES, is explicitly cleared when we
> delete a slot to free dirty_bitmap:
> 
> 	if (!npages)
> 		mem->flags &= ~KVM_MEM_LOG_DIRTY_PAGES;
> 
> So when old.npages is zero, the second condition must be true.
> 
Right. I was looking for where it is cleared and missed that.

> Other parts are doing "if (!slot->dirty_bitmap)" to see if the slot
> is in dirty logging mode.  If you prefer, we can do the same here.
> 
Does not matter to me. If you think it will be more consistent, do it.

--
			Gleb.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging
  2012-12-25  5:05       ` Gleb Natapov
@ 2012-12-25  5:26         ` Takuya Yoshikawa
  0 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2012-12-25  5:26 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: mtosatti, kvm, linux-kernel

On Tue, 25 Dec 2012 07:05:55 +0200
Gleb Natapov <gleb@redhat.com> wrote:

> > Other parts are doing "if (!slot->dirty_bitmap)" to see if the slot
> > is in dirty logging mode.  If you prefer, we can do the same here.
> > 
> Does not matter to me. If you think it will be more consistent, do it.

No preference at the moment.  But since .dirty_bitmap and .rmap may be
changed to control free_physmem(new, old) behaviour, I'll keep the code
as is.

I'll send some patches to clean up the current "mem, new, old, slot"
games in __kvm_set_memory_region() later.  I was confused many times
by these.

In addition, passing the old slot by value to kvm_arch_commit_memory_region()
should be fixed as well.  The structure is too big to copy.

Thanks,
	Takuya

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging
  2012-12-18  7:26 ` [PATCH 1/7] KVM: Write protect the updated slot only " Takuya Yoshikawa
  2012-12-24 13:27   ` Gleb Natapov
@ 2013-01-07 20:11   ` Marcelo Tosatti
  2013-01-08 11:50     ` Gleb Natapov
  1 sibling, 1 reply; 28+ messages in thread
From: Marcelo Tosatti @ 2013-01-07 20:11 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: gleb, kvm, linux-kernel

On Tue, Dec 18, 2012 at 04:26:47PM +0900, Takuya Yoshikawa wrote:
> This is needed to make kvm_mmu_slot_remove_write_access() rmap based:
> otherwise we may end up using invalid rmap's.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>

Why? memslot->arch.rmap[] has been properly allocated at this point.

> ---
>  arch/x86/kvm/x86.c  |    9 ++++++++-
>  virt/kvm/kvm_main.c |    1 -
>  2 files changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 1c9c834..9451efa 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6897,7 +6897,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  	spin_lock(&kvm->mmu_lock);
>  	if (nr_mmu_pages)
>  		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
> -	kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> +	/*
> +	 * Write protect all pages for dirty logging.
> +	 * Existing largepage mappings are destroyed here and new ones will
> +	 * not be created until the end of the logging.
> +	 */
> +	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> +	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
> +		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
>  	spin_unlock(&kvm->mmu_lock);
>  	/*
>  	 * If memory slot is created, or moved, we need to clear all
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index bd31096..0ef5daa 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -805,7 +805,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
>  		if (kvm_create_dirty_bitmap(&new) < 0)
>  			goto out_free;
> -		/* destroy any largepage mappings for dirty tracking */
>  	}
>  
>  	if (!npages || base_gfn != old.base_gfn) {
> -- 
> 1.7.5.4
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
                   ` (7 preceding siblings ...)
  2012-12-19 12:30 ` [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
@ 2013-01-07 20:36 ` Marcelo Tosatti
  2013-01-08 10:40   ` Takuya Yoshikawa
  8 siblings, 1 reply; 28+ messages in thread
From: Marcelo Tosatti @ 2013-01-07 20:36 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: gleb, kvm, linux-kernel

On Tue, Dec 18, 2012 at 04:25:58PM +0900, Takuya Yoshikawa wrote:
> This patch set makes kvm_mmu_slot_remove_write_access() rmap based and
> adds conditional rescheduling to it.
> 
> The motivation for this change is of course to reduce the mmu_lock hold
> time when we start dirty logging for a large memory slot.  You may not
> see the problem if you just give 8GB or less of the memory to the guest
> with THP enabled on the host -- this is for the worst case.

Neat.

Looks good, except patch 1 - 

a) don't understand why it is necessary and 
b) not confident it's safe - isn't clearing necessary for KVM_SET_MEMORY
instances other than

!(old.flags & LOG_DIRTY) && (new.flags & LOG_DIRTY)

> 
> 
> IMPORTANT NOTE (not about this patch set):
> 
> I have hit the following bug many times with the current next branch,
> even WITHOUT my patches.  Although I do not know a way to reproduce this
> yet, it seems that something was broken around slot->dirty_bitmap.  I am
> now investigating the new code in __kvm_set_memory_region().
> 
> The bug:
> [  575.238063] BUG: unable to handle kernel paging request at 00000002efe83a77
> [  575.238185] IP: [<ffffffffa05f9619>] mark_page_dirty_in_slot+0x19/0x20 [kvm]
> [  575.238308] PGD 0 
> [  575.238343] Oops: 0002 [#1] SMP 
> 
> The call trace:
> [  575.241207] Call Trace:
> [  575.241257]  [<ffffffffa05f96b1>] kvm_write_guest_cached+0x91/0xb0 [kvm]
> [  575.241370]  [<ffffffffa0610db9>] kvm_arch_vcpu_ioctl_run+0x1109/0x12c0 [kvm]
> [  575.241488]  [<ffffffffa060fd55>] ? kvm_arch_vcpu_ioctl_run+0xa5/0x12c0 [kvm]
> [  575.241595]  [<ffffffff81679194>] ? mutex_lock_killable_nested+0x274/0x340
> [  575.241706]  [<ffffffffa05faf80>] ? kvm_set_ioapic_irq+0x20/0x20 [kvm]
> [  575.241813]  [<ffffffffa05f71c9>] kvm_vcpu_ioctl+0x559/0x670 [kvm]
> [  575.241913]  [<ffffffffa05f8a58>] ? kvm_vm_ioctl+0x1b8/0x570 [kvm]
> [  575.242007]  [<ffffffff8101b9d3>] ? native_sched_clock+0x13/0x80
> [  575.242125]  [<ffffffff8101ba49>] ? sched_clock+0x9/0x10
> [  575.242208]  [<ffffffff8109015d>] ? sched_clock_cpu+0xbd/0x110
> [  575.242298]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
> [  575.242381]  [<ffffffff8119dfa8>] do_vfs_ioctl+0x98/0x570
> [  575.242463]  [<ffffffff811a91b1>] ? fget_light+0xa1/0x140
> [  575.246393]  [<ffffffff811a914c>] ? fget_light+0x3c/0x140
> [  575.250363]  [<ffffffff8119e511>] sys_ioctl+0x91/0xb0
> [  575.254327]  [<ffffffff81684c19>] system_call_fastpath+0x16/0x1b
> 
> 
> Takuya Yoshikawa (7):
>   KVM: Write protect the updated slot only when we start dirty logging
>   KVM: MMU: Remove unused parameter level from __rmap_write_protect()
>   KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
>   KVM: x86: Remove unused slot_bitmap from kvm_mmu_page
>   KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
>   KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
>   KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time
> 
>  Documentation/virtual/kvm/mmu.txt |    7 ----
>  arch/x86/include/asm/kvm_host.h   |    5 ---
>  arch/x86/kvm/mmu.c                |   56 +++++++++++++++++++-----------------
>  arch/x86/kvm/x86.c                |   13 +++++---
>  virt/kvm/kvm_main.c               |    1 -
>  5 files changed, 38 insertions(+), 44 deletions(-)
> 
> -- 
> 1.7.5.4
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging
  2013-01-07 20:36 ` Marcelo Tosatti
@ 2013-01-08 10:40   ` Takuya Yoshikawa
  0 siblings, 0 replies; 28+ messages in thread
From: Takuya Yoshikawa @ 2013-01-08 10:40 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: gleb, kvm, linux-kernel

On Mon, 7 Jan 2013 18:36:42 -0200
Marcelo Tosatti <mtosatti@redhat.com> wrote:

> Looks good, except patch 1 - 
> 
> a) don't understand why it is necessary and 

What's really necessary is to make sure that we don't call the function
for a deleted slot.  My explanation was wrong.

> b) not confident its safe - isnt clearing necessary for KVM_SET_MEMORY
> instances other than
> 
> !(old.flags & LOG_DIRTY) && (new.flags & LOG_DIRTY)

I think flushing shadows should be enough for other cases, e.g. moving a slot.

But I've changed the condition (see v2) to make it easier to understand:
  npages && LOG_DIRTY

Since remove_write_access() is for dirty logging, this condition should be safe.
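
Roughly (a sketch of the v2 check, assuming npages is
mem->memory_size >> PAGE_SHIFT as computed earlier in the function;
see the v2 posting for the exact code):

	/* write protect only when the slot exists and logging is on */
	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) && npages)
		kvm_mmu_slot_remove_write_access(kvm, mem->slot);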

Thanks,
	Takuya

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging
  2013-01-07 20:11   ` Marcelo Tosatti
@ 2013-01-08 11:50     ` Gleb Natapov
  0 siblings, 0 replies; 28+ messages in thread
From: Gleb Natapov @ 2013-01-08 11:50 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Takuya Yoshikawa, kvm, linux-kernel

On Mon, Jan 07, 2013 at 06:11:46PM -0200, Marcelo Tosatti wrote:
> On Tue, Dec 18, 2012 at 04:26:47PM +0900, Takuya Yoshikawa wrote:
> > This is needed to make kvm_mmu_slot_remove_write_access() rmap based:
> > otherwise we may end up using invalid rmap's.
> > 
> > Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> 
> Why? memslot->arch.rmap[] has been properly allocated at this point.
> 
FWIW a long time ago in a galaxy far, far away there was a check for
KVM_MEM_LOG_DIRTY_PAGES before the call to kvm_mmu_slot_remove_write_access(),
but it was removed by 90cb0529dd230548a7, as far as I can tell, accidentally.

> > ---
> >  arch/x86/kvm/x86.c  |    9 ++++++++-
> >  virt/kvm/kvm_main.c |    1 -
> >  2 files changed, 8 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 1c9c834..9451efa 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -6897,7 +6897,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> >  	spin_lock(&kvm->mmu_lock);
> >  	if (nr_mmu_pages)
> >  		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
> > -	kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> > +	/*
> > +	 * Write protect all pages for dirty logging.
> > +	 * Existing largepage mappings are destroyed here and new ones will
> > +	 * not be created until the end of the logging.
> > +	 */
> > +	if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> > +	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
> > +		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
> >  	spin_unlock(&kvm->mmu_lock);
> >  	/*
> >  	 * If memory slot is created, or moved, we need to clear all
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index bd31096..0ef5daa 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -805,7 +805,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >  	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> >  		if (kvm_create_dirty_bitmap(&new) < 0)
> >  			goto out_free;
> > -		/* destroy any largepage mappings for dirty tracking */
> >  	}
> >  
> >  	if (!npages || base_gfn != old.base_gfn) {
> > -- 
> > 1.7.5.4
> > 
> > --
> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
			Gleb.

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2013-01-08 11:53 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-12-18  7:25 [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
2012-12-18  7:26 ` [PATCH 1/7] KVM: Write protect the updated slot only " Takuya Yoshikawa
2012-12-24 13:27   ` Gleb Natapov
2012-12-25  4:08     ` Takuya Yoshikawa
2012-12-25  5:05       ` Gleb Natapov
2012-12-25  5:26         ` Takuya Yoshikawa
2013-01-07 20:11   ` Marcelo Tosatti
2013-01-08 11:50     ` Gleb Natapov
2012-12-18  7:27 ` [PATCH 2/7] KVM: MMU: Remove unused parameter level from __rmap_write_protect() Takuya Yoshikawa
2012-12-18  7:28 ` [PATCH 3/7] KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based Takuya Yoshikawa
2012-12-18  7:28 ` [PATCH 4/7] KVM: x86: Remove unused slot_bitmap from kvm_mmu_page Takuya Yoshikawa
2012-12-18  7:29 ` [PATCH 5/7] KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself Takuya Yoshikawa
2012-12-18  7:30 ` [PATCH 6/7] KVM: Make kvm_mmu_slot_remove_write_access() " Takuya Yoshikawa
2012-12-18  7:30 ` [PATCH 7/7] KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time Takuya Yoshikawa
2012-12-19 12:30 ` [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
2012-12-19 15:42   ` Alex Williamson
2012-12-20  5:02     ` Takuya Yoshikawa
2012-12-20 12:59       ` Marcelo Tosatti
2012-12-20 13:22         ` Gleb Natapov
2012-12-20 13:41           ` Alex Williamson
2012-12-20 14:35             ` Takuya Yoshikawa
2012-12-20 14:55               ` Alex Williamson
2012-12-21  8:02                 ` Takuya Yoshikawa
2012-12-21  8:54                   ` Gleb Natapov
2012-12-21 13:24                     ` Alex Williamson
2012-12-21  8:05                 ` Takuya Yoshikawa
2013-01-07 20:36 ` Marcelo Tosatti
2013-01-08 10:40   ` Takuya Yoshikawa
