* [PATCH v3 0/6] Optimize vcpu->requests processing
@ 2012-07-09 17:05 Avi Kivity
  2012-07-09 17:05 ` [PATCH v3 1/6] KVM: Don't use KVM_REQ_PENDING_TIMER Avi Kivity
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Avi Kivity @ 2012-07-09 17:05 UTC
  To: Marcelo Tosatti; +Cc: kvm

Currently, any time a request bit is set (not too uncommon) we check all of them.
This patchset optimizes the process slightly by skipping over unset bits using
for_each_set_bit().
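
As an illustration, the dispatch pattern the series converges on looks
roughly like this (a simplified sketch, not the actual x86.c code;
handle_request() is a made-up stand-in for the per-request handlers):

	unsigned req;

	if (vcpu->requests)
		for_each_set_bit(req, &vcpu->requests, BITS_PER_LONG) {
			clear_bit(req, &vcpu->requests);
			handle_request(vcpu, req);	/* switch (req) { ... } */
		}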

v3:
   new approach using for_each_set_bit(), as the previous one might have skipped a bit.

Avi Kivity (6):
  KVM: Don't use KVM_REQ_PENDING_TIMER
  KVM: Simplify KVM_REQ_EVENT/req_int_win handling
  KVM: Move mmu reload out of line
  KVM: Optimize vcpu->requests checking
  KVM: Reorder KVM_REQ_EVENT to optimize processing
  KVM: Clean up vcpu->requests == 0 processing

 arch/x86/kvm/mmu.c       |   4 +-
 arch/x86/kvm/svm.c       |   1 +
 arch/x86/kvm/timer.c     |   8 +--
 arch/x86/kvm/x86.c       | 142 ++++++++++++++++++++++++++---------------------
 include/linux/kvm_host.h |  14 ++---
 5 files changed, 91 insertions(+), 78 deletions(-)

-- 
1.7.11



* [PATCH v3 1/6] KVM: Don't use KVM_REQ_PENDING_TIMER
  2012-07-09 17:05 [PATCH v3 0/6] Optimize vcpu->requests processing Avi Kivity
@ 2012-07-09 17:05 ` Avi Kivity
  2012-07-10  8:50   ` Gleb Natapov
  2012-07-09 17:05 ` [PATCH v3 2/6] KVM: Simplify KVM_REQ_EVENT/req_int_win handling Avi Kivity
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Avi Kivity @ 2012-07-09 17:05 UTC
  To: Marcelo Tosatti; +Cc: kvm

It's a write-only bit, set by the timer and cleared by the main loop.
Remove it.  Retain the definition since ppc uses it.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/timer.c | 8 ++------
 arch/x86/kvm/x86.c   | 1 -
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/timer.c b/arch/x86/kvm/timer.c
index 6b85cc6..c28f838 100644
--- a/arch/x86/kvm/timer.c
+++ b/arch/x86/kvm/timer.c
@@ -27,14 +27,10 @@ enum hrtimer_restart kvm_timer_fn(struct hrtimer *data)
 	/*
 	 * There is a race window between reading and incrementing, but we do
 	 * not care about potentially losing timer events in the !reinject
-	 * case anyway. Note: KVM_REQ_PENDING_TIMER is implicitly checked
-	 * in vcpu_enter_guest.
+	 * case anyway.
 	 */
-	if (ktimer->reinject || !atomic_read(&ktimer->pending)) {
+	if (ktimer->reinject || !atomic_read(&ktimer->pending))
 		atomic_inc(&ktimer->pending);
-		/* FIXME: this code should not know anything about vcpus */
-		kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
-	}
 
 	if (waitqueue_active(q))
 		wake_up_interruptible(q);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ff0b487..ae07ef2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5437,7 +5437,6 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 		if (r <= 0)
 			break;
 
-		clear_bit(KVM_REQ_PENDING_TIMER, &vcpu->requests);
 		if (kvm_cpu_has_pending_timer(vcpu))
 			kvm_inject_pending_timer_irqs(vcpu);
 
-- 
1.7.11



* [PATCH v3 2/6] KVM: Simplify KVM_REQ_EVENT/req_int_win handling
  2012-07-09 17:05 [PATCH v3 0/6] Optimize vcpu->requests processing Avi Kivity
  2012-07-09 17:05 ` [PATCH v3 1/6] KVM: Don't use KVM_REQ_PENDING_TIMER Avi Kivity
@ 2012-07-09 17:05 ` Avi Kivity
  2012-07-09 17:05 ` [PATCH v3 3/6] KVM: Move mmu reload out of line Avi Kivity
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Avi Kivity @ 2012-07-09 17:05 UTC
  To: Marcelo Tosatti; +Cc: kvm

Put the KVM_REQ_EVENT block in the regular vcpu->requests if (),
instead of its own little check.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/x86.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ae07ef2..959e5a9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5222,6 +5222,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		vcpu->run->request_interrupt_window;
 	bool req_immediate_exit = 0;
 
+	if (unlikely(req_int_win))
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	if (vcpu->requests) {
 		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
 			kvm_mmu_unload(vcpu);
@@ -5266,20 +5269,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			kvm_handle_pmu_event(vcpu);
 		if (kvm_check_request(KVM_REQ_PMI, vcpu))
 			kvm_deliver_pmi(vcpu);
-	}
-
-	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
-		inject_pending_event(vcpu);
-
-		/* enable NMI/IRQ window open exits if needed */
-		if (vcpu->arch.nmi_pending)
-			kvm_x86_ops->enable_nmi_window(vcpu);
-		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
-			kvm_x86_ops->enable_irq_window(vcpu);
-
-		if (kvm_lapic_enabled(vcpu)) {
-			update_cr8_intercept(vcpu);
-			kvm_lapic_sync_to_vapic(vcpu);
+		if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+			inject_pending_event(vcpu);
+
+			/* enable NMI/IRQ window open exits if needed */
+			if (vcpu->arch.nmi_pending)
+				kvm_x86_ops->enable_nmi_window(vcpu);
+			else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+				kvm_x86_ops->enable_irq_window(vcpu);
+
+			if (kvm_lapic_enabled(vcpu)) {
+				update_cr8_intercept(vcpu);
+				kvm_lapic_sync_to_vapic(vcpu);
+			}
 		}
 	}
 
-- 
1.7.11



* [PATCH v3 3/6] KVM: Move mmu reload out of line
  2012-07-09 17:05 [PATCH v3 0/6] Optimize vcpu->requests processing Avi Kivity
  2012-07-09 17:05 ` [PATCH v3 1/6] KVM: Don't use KVM_REQ_PENDING_TIMER Avi Kivity
  2012-07-09 17:05 ` [PATCH v3 2/6] KVM: Simplify KVM_REQ_EVENT/req_int_win handling Avi Kivity
@ 2012-07-09 17:05 ` Avi Kivity
  2012-07-10  3:57   ` Xiao Guangrong
  2012-07-09 17:05 ` [PATCH v3 4/6] KVM: Optimize vcpu->requests checking Avi Kivity
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Avi Kivity @ 2012-07-09 17:05 UTC
  To: Marcelo Tosatti; +Cc: kvm

Currently we check that the mmu root exists before every entry.  Use the
existing KVM_REQ_MMU_RELOAD mechanism instead, by making it really reload
the mmu, and by adding the request to the mmu initialization code.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/mmu.c |  4 +++-
 arch/x86/kvm/svm.c |  1 +
 arch/x86/kvm/x86.c | 13 +++++++------
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 569cd66..136d757 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3180,7 +3180,8 @@ void kvm_mmu_flush_tlb(struct kvm_vcpu *vcpu)
 static void paging_new_cr3(struct kvm_vcpu *vcpu)
 {
 	pgprintk("%s: cr3 %lx\n", __func__, kvm_read_cr3(vcpu));
-	mmu_free_roots(vcpu);
+	kvm_mmu_unload(vcpu);
+	kvm_mmu_load(vcpu);
 }
 
 static unsigned long get_cr3(struct kvm_vcpu *vcpu)
@@ -3469,6 +3470,7 @@ static int init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 
 static int init_kvm_mmu(struct kvm_vcpu *vcpu)
 {
+	kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
 	if (mmu_is_nested(vcpu))
 		return init_kvm_nested_mmu(vcpu);
 	else if (tdp_enabled)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7a41878..d77ad8c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2523,6 +2523,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 
 	if (nested_vmcb->control.nested_ctl) {
 		kvm_mmu_unload(&svm->vcpu);
+		kvm_make_request(KVM_REQ_MMU_RELOAD, &svm->vcpu);
 		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
 		nested_svm_init_mmu_context(&svm->vcpu);
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 959e5a9..162231f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5226,8 +5226,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 
 	if (vcpu->requests) {
-		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
+		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
 			kvm_mmu_unload(vcpu);
+			r = kvm_mmu_reload(vcpu);
+			if (unlikely(r)) {
+				kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+				goto out;
+			}
+		}
 		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
 			__kvm_migrate_timers(vcpu);
 		if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
@@ -5285,11 +5291,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		}
 	}
 
-	r = kvm_mmu_reload(vcpu);
-	if (unlikely(r)) {
-		goto cancel_injection;
-	}
-
 	preempt_disable();
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);
-- 
1.7.11



* [PATCH v3 4/6] KVM: Optimize vcpu->requests checking
  2012-07-09 17:05 [PATCH v3 0/6] Optimize vcpu->requests processing Avi Kivity
                   ` (2 preceding siblings ...)
  2012-07-09 17:05 ` [PATCH v3 3/6] KVM: Move mmu reload out of line Avi Kivity
@ 2012-07-09 17:05 ` Avi Kivity
  2012-07-09 17:05 ` [PATCH v3 5/6] KVM: Reorder KVM_REQ_EVENT to optimize processing Avi Kivity
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Avi Kivity @ 2012-07-09 17:05 UTC
  To: Marcelo Tosatti; +Cc: kvm

Instead of checking for each request linearly, use for_each_set_bit() to
iterate on just the requests that are set (should be 0 or 1 most of the
time).

To avoid a useless call to find_first_bit(), add an extra check for no
requests set.  To avoid an extra indent and an unreviewable patch, I
added a rather ugly goto.  This is cleaned up in the final patch of the
series.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/x86.c | 62 ++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 39 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 162231f..9296dce 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5217,6 +5217,7 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 
 static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 {
+	unsigned req;
 	int r;
 	bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
 		vcpu->run->request_interrupt_window;
@@ -5225,57 +5226,67 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (unlikely(req_int_win))
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 
-	if (vcpu->requests) {
-		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
+	if (!vcpu->requests)
+		goto no_requests;
+
+	for_each_set_bit(req, &vcpu->requests, BITS_PER_LONG) {
+		clear_bit(req, &vcpu->requests);
+		switch (req) {
+		case KVM_REQ_MMU_RELOAD:
 			kvm_mmu_unload(vcpu);
 			r = kvm_mmu_reload(vcpu);
 			if (unlikely(r)) {
 				kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
 				goto out;
 			}
-		}
-		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
+			break;
+		case KVM_REQ_MIGRATE_TIMER:
 			__kvm_migrate_timers(vcpu);
-		if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
+			break;
+		case KVM_REQ_CLOCK_UPDATE:
 			r = kvm_guest_time_update(vcpu);
 			if (unlikely(r))
 				goto out;
-		}
-		if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
+			break;
+		case KVM_REQ_MMU_SYNC:
 			kvm_mmu_sync_roots(vcpu);
-		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
+			break;
+		case KVM_REQ_TLB_FLUSH:
 			kvm_x86_ops->tlb_flush(vcpu);
-		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
+			break;
+		case KVM_REQ_REPORT_TPR_ACCESS:
 			vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
 			r = 0;
 			goto out;
-		}
-		if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
+		case KVM_REQ_TRIPLE_FAULT:
 			vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
 			r = 0;
 			goto out;
-		}
-		if (kvm_check_request(KVM_REQ_DEACTIVATE_FPU, vcpu)) {
+		case KVM_REQ_DEACTIVATE_FPU:
 			vcpu->fpu_active = 0;
 			kvm_x86_ops->fpu_deactivate(vcpu);
-		}
-		if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
+			break;
+		case KVM_REQ_APF_HALT:
 			/* Page is swapped out. Do synthetic halt */
 			vcpu->arch.apf.halted = true;
 			r = 1;
 			goto out;
-		}
-		if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
+		case KVM_REQ_STEAL_UPDATE:
 			record_steal_time(vcpu);
-		if (kvm_check_request(KVM_REQ_NMI, vcpu))
+			break;
+		case KVM_REQ_NMI:
 			process_nmi(vcpu);
-		req_immediate_exit =
-			kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
-		if (kvm_check_request(KVM_REQ_PMU, vcpu))
+			break;
+		case KVM_REQ_IMMEDIATE_EXIT:
+			req_immediate_exit = true;
+			break;
+		case KVM_REQ_PMU:
 			kvm_handle_pmu_event(vcpu);
-		if (kvm_check_request(KVM_REQ_PMI, vcpu))
+			break;
+		case KVM_REQ_PMI:
 			kvm_deliver_pmi(vcpu);
-		if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+			break;
+		case KVM_REQ_EVENT:
 			inject_pending_event(vcpu);
 
 			/* enable NMI/IRQ window open exits if needed */
@@ -5288,9 +5299,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 				update_cr8_intercept(vcpu);
 				kvm_lapic_sync_to_vapic(vcpu);
 			}
+			break;
+		default:
+			BUG();
 		}
 	}
 
+no_requests:
+
 	preempt_disable();
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);
-- 
1.7.11



* [PATCH v3 5/6] KVM: Reorder KVM_REQ_EVENT to optimize processing
  2012-07-09 17:05 [PATCH v3 0/6] Optimize vcpu->requests processing Avi Kivity
                   ` (3 preceding siblings ...)
  2012-07-09 17:05 ` [PATCH v3 4/6] KVM: Optimize vcpu->requests checking Avi Kivity
@ 2012-07-09 17:05 ` Avi Kivity
  2012-07-09 17:05 ` [PATCH v3 6/6] KVM: Clean up vcpu->requests == 0 processing Avi Kivity
  2012-09-24  5:55 ` [PATCH v3 0/6] Optimize vcpu->requests processing Xiao Guangrong
  6 siblings, 0 replies; 17+ messages in thread
From: Avi Kivity @ 2012-07-09 17:05 UTC
  To: Marcelo Tosatti; +Cc: kvm

Since we now process requests in order of their bit values, we need to keep
KVM_REQ_EVENT last.  That way, if a request which may generate an
interrupt (say, KVM_REQ_PMU) is handled, the KVM_REQ_EVENT it raises is
still processed in the same pass, instead of us entering the critical
section and then aborting because vcpu->requests has KVM_REQ_EVENT set.

Reorder the KVM_REQ_ definitions to make it so.
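
As a small illustration (using the new values below; not part of the
patch itself):

	/*
	 * requests = (1UL << KVM_REQ_PMU) | (1UL << KVM_REQ_EVENT);
	 *
	 * for_each_set_bit() scans upward from bit 0, so KVM_REQ_PMU (15)
	 * is handled before KVM_REQ_EVENT (17); and if the PMU handler
	 * raises KVM_REQ_EVENT, find_next_bit() still picks it up in the
	 * same pass.
	 */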

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 include/linux/kvm_host.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e3c86f8..6d71058 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -62,13 +62,13 @@
 #define KVM_REQ_CLOCK_UPDATE       8
 #define KVM_REQ_KICK               9
 #define KVM_REQ_DEACTIVATE_FPU    10
-#define KVM_REQ_EVENT             11
-#define KVM_REQ_APF_HALT          12
-#define KVM_REQ_STEAL_UPDATE      13
-#define KVM_REQ_NMI               14
-#define KVM_REQ_IMMEDIATE_EXIT    15
-#define KVM_REQ_PMU               16
-#define KVM_REQ_PMI               17
+#define KVM_REQ_APF_HALT          11
+#define KVM_REQ_STEAL_UPDATE      12
+#define KVM_REQ_NMI               13
+#define KVM_REQ_IMMEDIATE_EXIT    14
+#define KVM_REQ_PMU               15
+#define KVM_REQ_PMI               16
+#define KVM_REQ_EVENT             17 /* Keep after any req which can cause an interrupt */
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID	0
 
-- 
1.7.11



* [PATCH v3 6/6] KVM: Clean up vcpu->requests == 0 processing
  2012-07-09 17:05 [PATCH v3 0/6] Optimize vcpu->requests processing Avi Kivity
                   ` (4 preceding siblings ...)
  2012-07-09 17:05 ` [PATCH v3 5/6] KVM: Reorder KVM_REQ_EVENT to optimize processing Avi Kivity
@ 2012-07-09 17:05 ` Avi Kivity
  2012-09-24  5:55 ` [PATCH v3 0/6] Optimize vcpu->requests processing Xiao Guangrong
  6 siblings, 0 replies; 17+ messages in thread
From: Avi Kivity @ 2012-07-09 17:05 UTC
  To: Marcelo Tosatti; +Cc: kvm

A previous patch introduced a goto to make that patch clearer.  This
patch cleans up the code; there are no functional changes.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/x86.c | 148 ++++++++++++++++++++++++++---------------------------
 1 file changed, 72 insertions(+), 76 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9296dce..44a2c87 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5226,86 +5226,82 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (unlikely(req_int_win))
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 
-	if (!vcpu->requests)
-		goto no_requests;
-
-	for_each_set_bit(req, &vcpu->requests, BITS_PER_LONG) {
-		clear_bit(req, &vcpu->requests);
-		switch (req) {
-		case KVM_REQ_MMU_RELOAD:
-			kvm_mmu_unload(vcpu);
-			r = kvm_mmu_reload(vcpu);
-			if (unlikely(r)) {
-				kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+	if (vcpu->requests)
+		for_each_set_bit(req, &vcpu->requests, BITS_PER_LONG) {
+			clear_bit(req, &vcpu->requests);
+			switch (req) {
+			case KVM_REQ_MMU_RELOAD:
+				kvm_mmu_unload(vcpu);
+				r = kvm_mmu_reload(vcpu);
+				if (unlikely(r)) {
+					kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+					goto out;
+				}
+				break;
+			case KVM_REQ_MIGRATE_TIMER:
+				__kvm_migrate_timers(vcpu);
+				break;
+			case KVM_REQ_CLOCK_UPDATE:
+				r = kvm_guest_time_update(vcpu);
+				if (unlikely(r))
+					goto out;
+				break;
+			case KVM_REQ_MMU_SYNC:
+				kvm_mmu_sync_roots(vcpu);
+				break;
+			case KVM_REQ_TLB_FLUSH:
+				kvm_x86_ops->tlb_flush(vcpu);
+				break;
+			case KVM_REQ_REPORT_TPR_ACCESS:
+				vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
+				r = 0;
 				goto out;
-			}
-			break;
-		case KVM_REQ_MIGRATE_TIMER:
-			__kvm_migrate_timers(vcpu);
-			break;
-		case KVM_REQ_CLOCK_UPDATE:
-			r = kvm_guest_time_update(vcpu);
-			if (unlikely(r))
+			case KVM_REQ_TRIPLE_FAULT:
+				vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
+				r = 0;
 				goto out;
-			break;
-		case KVM_REQ_MMU_SYNC:
-			kvm_mmu_sync_roots(vcpu);
-			break;
-		case KVM_REQ_TLB_FLUSH:
-			kvm_x86_ops->tlb_flush(vcpu);
-			break;
-		case KVM_REQ_REPORT_TPR_ACCESS:
-			vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
-			r = 0;
-			goto out;
-		case KVM_REQ_TRIPLE_FAULT:
-			vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
-			r = 0;
-			goto out;
-		case KVM_REQ_DEACTIVATE_FPU:
-			vcpu->fpu_active = 0;
-			kvm_x86_ops->fpu_deactivate(vcpu);
-			break;
-		case KVM_REQ_APF_HALT:
-			/* Page is swapped out. Do synthetic halt */
-			vcpu->arch.apf.halted = true;
-			r = 1;
-			goto out;
-		case KVM_REQ_STEAL_UPDATE:
-			record_steal_time(vcpu);
-			break;
-		case KVM_REQ_NMI:
-			process_nmi(vcpu);
-			break;
-		case KVM_REQ_IMMEDIATE_EXIT:
-			req_immediate_exit = true;
-			break;
-		case KVM_REQ_PMU:
-			kvm_handle_pmu_event(vcpu);
-			break;
-		case KVM_REQ_PMI:
-			kvm_deliver_pmi(vcpu);
-			break;
-		case KVM_REQ_EVENT:
-			inject_pending_event(vcpu);
-
-			/* enable NMI/IRQ window open exits if needed */
-			if (vcpu->arch.nmi_pending)
-				kvm_x86_ops->enable_nmi_window(vcpu);
-			else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
-				kvm_x86_ops->enable_irq_window(vcpu);
-
-			if (kvm_lapic_enabled(vcpu)) {
-				update_cr8_intercept(vcpu);
-				kvm_lapic_sync_to_vapic(vcpu);
+			case KVM_REQ_DEACTIVATE_FPU:
+				vcpu->fpu_active = 0;
+				kvm_x86_ops->fpu_deactivate(vcpu);
+				break;
+			case KVM_REQ_APF_HALT:
+				/* Page is swapped out. Do synthetic halt */
+				vcpu->arch.apf.halted = true;
+				r = 1;
+				goto out;
+			case KVM_REQ_STEAL_UPDATE:
+				record_steal_time(vcpu);
+				break;
+			case KVM_REQ_NMI:
+				process_nmi(vcpu);
+				break;
+			case KVM_REQ_IMMEDIATE_EXIT:
+				req_immediate_exit = true;
+				break;
+			case KVM_REQ_PMU:
+				kvm_handle_pmu_event(vcpu);
+				break;
+			case KVM_REQ_PMI:
+				kvm_deliver_pmi(vcpu);
+				break;
+			case KVM_REQ_EVENT:
+				inject_pending_event(vcpu);
+
+				/* enable NMI/IRQ window open exits if needed */
+				if (vcpu->arch.nmi_pending)
+					kvm_x86_ops->enable_nmi_window(vcpu);
+				else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+					kvm_x86_ops->enable_irq_window(vcpu);
+
+				if (kvm_lapic_enabled(vcpu)) {
+					update_cr8_intercept(vcpu);
+					kvm_lapic_sync_to_vapic(vcpu);
+				}
+				break;
+			default:
+				BUG();
 			}
-			break;
-		default:
-			BUG();
 		}
-	}
-
-no_requests:
 
 	preempt_disable();
 
-- 
1.7.11



* Re: [PATCH v3 3/6] KVM: Move mmu reload out of line
  2012-07-09 17:05 ` [PATCH v3 3/6] KVM: Move mmu reload out of line Avi Kivity
@ 2012-07-10  3:57   ` Xiao Guangrong
  2012-07-10  7:48     ` Avi Kivity
  0 siblings, 1 reply; 17+ messages in thread
From: Xiao Guangrong @ 2012-07-10  3:57 UTC
  To: Avi Kivity; +Cc: Marcelo Tosatti, kvm

On 07/10/2012 01:05 AM, Avi Kivity wrote:
> Currently we check that the mmu root exists before every entry.  Use the
> existing KVM_REQ_MMU_RELOAD mechanism instead, by making it really reload
> the mmu, and by adding the request to the mmu initialization code.
> 
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
>  arch/x86/kvm/mmu.c |  4 +++-
>  arch/x86/kvm/svm.c |  1 +
>  arch/x86/kvm/x86.c | 13 +++++++------
>  3 files changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 569cd66..136d757 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3180,7 +3180,8 @@ void kvm_mmu_flush_tlb(struct kvm_vcpu *vcpu)
>  static void paging_new_cr3(struct kvm_vcpu *vcpu)
>  {
>  	pgprintk("%s: cr3 %lx\n", __func__, kvm_read_cr3(vcpu));
> -	mmu_free_roots(vcpu);
> +	kvm_mmu_unload(vcpu);
> +	kvm_mmu_load(vcpu);
>  }
> 
>  static unsigned long get_cr3(struct kvm_vcpu *vcpu)
> @@ -3469,6 +3470,7 @@ static int init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
> 
>  static int init_kvm_mmu(struct kvm_vcpu *vcpu)
>  {
> +	kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
>  	if (mmu_is_nested(vcpu))
>  		return init_kvm_nested_mmu(vcpu);
>  	else if (tdp_enabled)
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 7a41878..d77ad8c 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -2523,6 +2523,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
> 
>  	if (nested_vmcb->control.nested_ctl) {
>  		kvm_mmu_unload(&svm->vcpu);
> +		kvm_make_request(KVM_REQ_MMU_RELOAD, &svm->vcpu);
>  		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
>  		nested_svm_init_mmu_context(&svm->vcpu);
>  	}
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 959e5a9..162231f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5226,8 +5226,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  		kvm_make_request(KVM_REQ_EVENT, vcpu);
> 
>  	if (vcpu->requests) {
> -		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
> +		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
>  			kvm_mmu_unload(vcpu);
> +			r = kvm_mmu_reload(vcpu);
> +			if (unlikely(r)) {
> +				kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
> +				goto out;
> +			}

Now that the mmu reload happens before event injection, can the bug below be triggered again?

commit d8368af8b46b904def42a0f341d2f4f29001fa77
Author: Avi Kivity <avi@redhat.com>
Date:   Mon May 14 18:07:56 2012 +0300

    KVM: Fix mmu_reload() clash with nested vmx event injection

    Currently the inject_pending_event() call during guest entry happens after
    kvm_mmu_reload().  This is for historical reasons - we used to
    inject_pending_event() in atomic context, while kvm_mmu_reload() needs task
    context.

    A problem is that nested vmx can cause the mmu context to be reset, if event
    injection is intercepted and causes a #VMEXIT instead (the #VMEXIT resets
    CR0/CR3/CR4).  If this happens, we end up with invalid root_hpa, and since
    kvm_mmu_reload() has already run, no one will fix it and we end up entering
    the guest this way.

    Fix by reordering event injection to be before kvm_mmu_reload().  Use
    ->cancel_injection() to undo if kvm_mmu_reload() fails.

    https://bugzilla.kernel.org/show_bug.cgi?id=42980

    Reported-by: Luke-Jr <luke-jr+linuxbugs@utopios.org>
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>



* Re: [PATCH v3 3/6] KVM: Move mmu reload out of line
  2012-07-10  3:57   ` Xiao Guangrong
@ 2012-07-10  7:48     ` Avi Kivity
  0 siblings, 0 replies; 17+ messages in thread
From: Avi Kivity @ 2012-07-10  7:48 UTC
  To: Xiao Guangrong; +Cc: Marcelo Tosatti, kvm

On 07/10/2012 06:57 AM, Xiao Guangrong wrote:
> On 07/10/2012 01:05 AM, Avi Kivity wrote:
> > Currently we check that the mmu root exists before every entry.  Use the
> > existing KVM_REQ_MMU_RELOAD mechanism instead, by making it really reload
> > the mmu, and by adding the request to the mmu initialization code.
> > 
> > Signed-off-by: Avi Kivity <avi@redhat.com>
> > ---
> >  arch/x86/kvm/mmu.c |  4 +++-
> >  arch/x86/kvm/svm.c |  1 +
> >  arch/x86/kvm/x86.c | 13 +++++++------
> >  3 files changed, 11 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 569cd66..136d757 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -3180,7 +3180,8 @@ void kvm_mmu_flush_tlb(struct kvm_vcpu *vcpu)
> >  static void paging_new_cr3(struct kvm_vcpu *vcpu)
> >  {
> >  	pgprintk("%s: cr3 %lx\n", __func__, kvm_read_cr3(vcpu));
> > -	mmu_free_roots(vcpu);
> > +	kvm_mmu_unload(vcpu);
> > +	kvm_mmu_load(vcpu);
> >  }
> > 
> >  static unsigned long get_cr3(struct kvm_vcpu *vcpu)
> > @@ -3469,6 +3470,7 @@ static int init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
> > 
> >  static int init_kvm_mmu(struct kvm_vcpu *vcpu)
> >  {
> > +	kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
> >  	if (mmu_is_nested(vcpu))
> >  		return init_kvm_nested_mmu(vcpu);
> >  	else if (tdp_enabled)
> > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > index 7a41878..d77ad8c 100644
> > --- a/arch/x86/kvm/svm.c
> > +++ b/arch/x86/kvm/svm.c
> > @@ -2523,6 +2523,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
> > 
> >  	if (nested_vmcb->control.nested_ctl) {
> >  		kvm_mmu_unload(&svm->vcpu);
> > +		kvm_make_request(KVM_REQ_MMU_RELOAD, &svm->vcpu);
> >  		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
> >  		nested_svm_init_mmu_context(&svm->vcpu);
> >  	}
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 959e5a9..162231f 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -5226,8 +5226,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >  		kvm_make_request(KVM_REQ_EVENT, vcpu);
> > 
> >  	if (vcpu->requests) {
> > -		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
> > +		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
> >  			kvm_mmu_unload(vcpu);
> > +			r = kvm_mmu_reload(vcpu);
> > +			if (unlikely(r)) {
> > +				kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
> > +				goto out;
> > +			}
>
> Now that the mmu reload happens before event injection, can the bug below be triggered again?
>
> commit d8368af8b46b904def42a0f341d2f4f29001fa77
> Author: Avi Kivity <avi@redhat.com>
> Date:   Mon May 14 18:07:56 2012 +0300
>
>     KVM: Fix mmu_reload() clash with nested vmx event injection
>
>     Currently the inject_pending_event() call during guest entry happens after
>     kvm_mmu_reload().  This is for historical reasons - we used to
>     inject_pending_event() in atomic context, while kvm_mmu_reload() needs task
>     context.
>
>     A problem is that nested vmx can cause the mmu context to be reset, if event
>     injection is intercepted and causes a #VMEXIT instead (the #VMEXIT resets
>     CR0/CR3/CR4).  If this happens, we end up with invalid root_hpa, and since
>     kvm_mmu_reload() has already run, no one will fix it and we end up entering
>     the guest this way.
>
>     Fix by reordering event injection to be before kvm_mmu_reload().  Use
>     ->cancel_injection() to undo if kvm_mmu_reload() fails.
>

I haven't considered it, but I think the patch is safe.  If
init_kvm_mmu() is called as part of event injection then it will set
KVM_REQ_MMU_RELOAD and we will process the requests again before
entering the guest.
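
(For reference, the safety net is the existing re-check under
local_irq_disable() in vcpu_enter_guest(); quoting it roughly from
memory, not from this series:

	if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
	    || need_resched() || signal_pending(current)) {
		...
		r = 1;
		goto cancel_injection;
	}

so a request raised during injection forces another pass through the
request loop before we actually enter the guest.)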

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: [PATCH v3 1/6] KVM: Don't use KVM_REQ_PENDING_TIMER
  2012-07-09 17:05 ` [PATCH v3 1/6] KVM: Don't use KVM_REQ_PENDING_TIMER Avi Kivity
@ 2012-07-10  8:50   ` Gleb Natapov
  2012-07-10  9:13     ` Avi Kivity
  0 siblings, 1 reply; 17+ messages in thread
From: Gleb Natapov @ 2012-07-10  8:50 UTC
  To: Avi Kivity; +Cc: Marcelo Tosatti, kvm

On Mon, Jul 09, 2012 at 08:05:40PM +0300, Avi Kivity wrote:
> It's a write-only bit, set by the timer and cleared by the main loop.
> Remove it.  Retain the definition since ppc uses it.
> 
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
>  arch/x86/kvm/timer.c | 8 ++------
>  arch/x86/kvm/x86.c   | 1 -
>  2 files changed, 2 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/timer.c b/arch/x86/kvm/timer.c
> index 6b85cc6..c28f838 100644
> --- a/arch/x86/kvm/timer.c
> +++ b/arch/x86/kvm/timer.c
> @@ -27,14 +27,10 @@ enum hrtimer_restart kvm_timer_fn(struct hrtimer *data)
>  	/*
>  	 * There is a race window between reading and incrementing, but we do
>  	 * not care about potentially losing timer events in the !reinject
> -	 * case anyway. Note: KVM_REQ_PENDING_TIMER is implicitly checked
> -	 * in vcpu_enter_guest.
So what about the part that says "KVM_REQ_PENDING_TIMER is implicitly
checked in vcpu_enter_guest"? This patch drops the check. So now, if the
timer triggers while the vcpu is in userspace, it will enter guest mode
without injecting a timer interrupt on the next ioctl(KVM_RUN). Or am I
missing something here?

> +	 * case anyway.
>  	 */
> -	if (ktimer->reinject || !atomic_read(&ktimer->pending)) {
> +	if (ktimer->reinject || !atomic_read(&ktimer->pending))
>  		atomic_inc(&ktimer->pending);
> -		/* FIXME: this code should not know anything about vcpus */
> -		kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
> -	}
>  
>  	if (waitqueue_active(q))
>  		wake_up_interruptible(q);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index ff0b487..ae07ef2 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5437,7 +5437,6 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
>  		if (r <= 0)
>  			break;
>  
> -		clear_bit(KVM_REQ_PENDING_TIMER, &vcpu->requests);
>  		if (kvm_cpu_has_pending_timer(vcpu))
>  			kvm_inject_pending_timer_irqs(vcpu);
>  
> -- 
> 1.7.11
> 

--
			Gleb.


* Re: [PATCH v3 1/6] KVM: Don't use KVM_REQ_PENDING_TIMER
  2012-07-10  8:50   ` Gleb Natapov
@ 2012-07-10  9:13     ` Avi Kivity
  0 siblings, 0 replies; 17+ messages in thread
From: Avi Kivity @ 2012-07-10  9:13 UTC
  To: Gleb Natapov; +Cc: Marcelo Tosatti, kvm

On 07/10/2012 11:50 AM, Gleb Natapov wrote:
> On Mon, Jul 09, 2012 at 08:05:40PM +0300, Avi Kivity wrote:
> > It's a write-only bit, set by the timer and cleared by the main loop.
> > Remove it.  Retain the definition since ppc uses it.
> > 
> > Signed-off-by: Avi Kivity <avi@redhat.com>
> > ---
> >  arch/x86/kvm/timer.c | 8 ++------
> >  arch/x86/kvm/x86.c   | 1 -
> >  2 files changed, 2 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/timer.c b/arch/x86/kvm/timer.c
> > index 6b85cc6..c28f838 100644
> > --- a/arch/x86/kvm/timer.c
> > +++ b/arch/x86/kvm/timer.c
> > @@ -27,14 +27,10 @@ enum hrtimer_restart kvm_timer_fn(struct hrtimer *data)
> >  	/*
> >  	 * There is a race window between reading and incrementing, but we do
> >  	 * not care about potentially losing timer events in the !reinject
> > -	 * case anyway. Note: KVM_REQ_PENDING_TIMER is implicitly checked
> > -	 * in vcpu_enter_guest.
> So what about the part that says "KVM_REQ_PENDING_TIMER is implicitly
> checked in vcpu_enter_guest"? This patch drops the check. So now, if the
> timer triggers while the vcpu is in userspace, it will enter guest mode
> without injecting a timer interrupt on the next ioctl(KVM_RUN). Or am I
> missing something here?

You're right, the bit appears to be write-only, but it isn't.  The check
inside the critical section for vcpu->requests reads it.

I guess we can make the check explicit by doing a 'goto out' if the bit
is set.
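
Something along these lines, as one more case in the new request switch
(an untested sketch, not an actual patch):

	case KVM_REQ_PENDING_TIMER:
		/* let __vcpu_run() inject pending timer irqs */
		r = 1;
		goto out;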


-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: [PATCH v3 0/6] Optimize vcpu->requests processing
  2012-07-09 17:05 [PATCH v3 0/6] Optimize vcpu->requests processing Avi Kivity
                   ` (5 preceding siblings ...)
  2012-07-09 17:05 ` [PATCH v3 6/6] KVM: Clean up vcpu->requests == 0 processing Avi Kivity
@ 2012-09-24  5:55 ` Xiao Guangrong
  2012-09-24  9:48   ` Avi Kivity
  6 siblings, 1 reply; 17+ messages in thread
From: Xiao Guangrong @ 2012-09-24  5:55 UTC
  To: Avi Kivity; +Cc: Marcelo Tosatti, kvm

On 07/10/2012 01:05 AM, Avi Kivity wrote:
> Currently, any time a request bit is set (not too uncommon) we check all of them.
> This patchset optimizes the process slightly by skipping over unset bits using
> for_each_set_bit().
> 

I also notice that kvm_check_request costs lots of cpu time. What is the status
of this patchset?



* Re: [PATCH v3 0/6] Optimize vcpu->requests processing
  2012-09-24  5:55 ` [PATCH v3 0/6] Optimize vcpu->requests processing Xiao Guangrong
@ 2012-09-24  9:48   ` Avi Kivity
  2012-09-24 10:19     ` Xiao Guangrong
  0 siblings, 1 reply; 17+ messages in thread
From: Avi Kivity @ 2012-09-24  9:48 UTC
  To: Xiao Guangrong; +Cc: Marcelo Tosatti, kvm

On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
> On 07/10/2012 01:05 AM, Avi Kivity wrote:
>> Currently, any time a request bit is set (not too uncommon) we check all of them.
>> This patchset optimizes the process slightly by skipping over unset bits using
>> for_each_set_bit().
>> 
> 
> I also notice that kvm_check_request costs lots of cpu time. What is the status
> of this patchset?
> 

I had problems getting rid of KVM_REQ_PENDING_TIMER.  I'll try again.

In what workloads did you see kvm_check_request()?

-- 
error compiling committee.c: too many arguments to function


* Re: [PATCH v3 0/6] Optimize vcpu->requests processing
  2012-09-24  9:48   ` Avi Kivity
@ 2012-09-24 10:19     ` Xiao Guangrong
  2012-09-24 10:52       ` Avi Kivity
  0 siblings, 1 reply; 17+ messages in thread
From: Xiao Guangrong @ 2012-09-24 10:19 UTC
  To: Avi Kivity; +Cc: Marcelo Tosatti, kvm

On 09/24/2012 05:48 PM, Avi Kivity wrote:
> On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
>> On 07/10/2012 01:05 AM, Avi Kivity wrote:
>>> Currently, any time a request bit is set (not too uncommon) we check all of them.
>>> This patchset optimizes the process slightly by skipping over unset bits using
>>> for_each_set_bit().
>>>
>>
>> I also notice that kvm_check_request costs lots of cpu time. What is the status
>> of this patchset?
>>
> 
> I had problems getting rid of KVM_REQ_PENDING_TIMER.  I'll try again.
> 
> In what workloads did you see kvm_check_request()?
> 

Run kernbench on the guest and use perf to sample; this function is
really hot.



* Re: [PATCH v3 0/6] Optimize vcpu->requests processing
  2012-09-24 10:19     ` Xiao Guangrong
@ 2012-09-24 10:52       ` Avi Kivity
  2012-09-24 11:16         ` Xiao Guangrong
  0 siblings, 1 reply; 17+ messages in thread
From: Avi Kivity @ 2012-09-24 10:52 UTC
  To: Xiao Guangrong; +Cc: Marcelo Tosatti, kvm

On 09/24/2012 12:19 PM, Xiao Guangrong wrote:
> On 09/24/2012 05:48 PM, Avi Kivity wrote:
>> On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
>>> On 07/10/2012 01:05 AM, Avi Kivity wrote:
>>>> Currently, any time a request bit is set (not too uncommon) we check all of them.
>>>> This patchset optimizes the process slightly by skipping over unset bits using
>>>> for_each_set_bit().
>>>>
>>>
>>> I also notice that kvm_check_request costs lots of cpu time. What is the status
>>> of this patchset?
>>>
>> 
>> I had problems getting rid of KVM_REQ_PENDING_TIMER.  I'll try again.
>> 
>> In what workloads did you see kvm_check_request()?
>> 
> 
> Run kernbench on the guest and use perf to sample; this function is
> really hot.
> 

I don't see it at all.  Westmere, 4-way guest.

-- 
error compiling committee.c: too many arguments to function


* Re: [PATCH v3 0/6] Optimize vcpu->requests processing
  2012-09-24 10:52       ` Avi Kivity
@ 2012-09-24 11:16         ` Xiao Guangrong
  2012-09-24 12:12           ` Avi Kivity
  0 siblings, 1 reply; 17+ messages in thread
From: Xiao Guangrong @ 2012-09-24 11:16 UTC
  To: Avi Kivity; +Cc: Marcelo Tosatti, kvm

On 09/24/2012 06:52 PM, Avi Kivity wrote:
> On 09/24/2012 12:19 PM, Xiao Guangrong wrote:
>> On 09/24/2012 05:48 PM, Avi Kivity wrote:
>>> On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
>>>> On 07/10/2012 01:05 AM, Avi Kivity wrote:
>>>>> Currently, any time a request bit is set (not too uncommon) we check all of them.
>>>>> This patchset optimizes the process slightly by skipping over unset bits using
>>>>> for_each_set_bit().
>>>>>
>>>>
>>>> I also notice that kvm_check_request costs lots of cpu time. What is the status
>>>> of this patchset?
>>>>
>>>
>>> I had problems getting rid of KVM_REQ_PENDING_TIMER.  I'll try again.
>>>
>>> In what workloads did you see kvm_check_request()?
>>>
>>
>> Run kernbench on the guest and use perf to sample; this function is
>> really hot.
>>
> 
> I don't see it at all.  Westmere, 4-way guest.
> 

This is the result I got on my laptop:

# ./perf report | grep "\[kvm\]"
    85.59%  qemu-kvm  [kvm]                    [k] arch_local_irq_enable
     0.18%  qemu-kvm  [kvm]                    [k] kvm_arch_vcpu_ioctl_run
     0.10%  qemu-kvm  [kvm]                    [k] paging64_walk_addr_generic
     0.10%  qemu-kvm  [kvm]                    [k] x86_decode_insn
     0.08%  qemu-kvm  [kvm]                    [k] kvm_check_request
     0.06%  qemu-kvm  [kvm]                    [k] apic_clear_vector

My box: i5-2540M CPU @ 2.60GHz * 4 + 4G
guest: 2 vcpu + 1G



* Re: [PATCH v3 0/6] Optimize vcpu->requests processing
  2012-09-24 11:16         ` Xiao Guangrong
@ 2012-09-24 12:12           ` Avi Kivity
  0 siblings, 0 replies; 17+ messages in thread
From: Avi Kivity @ 2012-09-24 12:12 UTC
  To: Xiao Guangrong; +Cc: Marcelo Tosatti, kvm

On 09/24/2012 01:16 PM, Xiao Guangrong wrote:
> On 09/24/2012 06:52 PM, Avi Kivity wrote:
>> On 09/24/2012 12:19 PM, Xiao Guangrong wrote:
>>> On 09/24/2012 05:48 PM, Avi Kivity wrote:
>>>> On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
>>>>> On 07/10/2012 01:05 AM, Avi Kivity wrote:
>>>>>> Currently, any time a request bit is set (not too uncommon) we check all of them.
>>>>>> This patchset optimizes the process slightly by skipping over unset bits using
>>>>>> for_each_set_bit().
>>>>>>
>>>>>
>>>>> I also notice that kvm_check_request costs lots of cpu time. What is the status
>>>>> of this patchset?
>>>>>
>>>>
>>>> I had problems getting rid of KVM_REQ_PENDING_TIMER.  I'll try again.
>>>>
>>>> In what workloads did you see kvm_check_request()?
>>>>
>>>
>>> Run kernbench on the guest and use perf to sample; this function is
>>> really hot.
>>>
>> 
>> I don't see it at all.  Westmere, 4-way guest.
>> 
> 
> This is the result i got on my laptop:
> 
> # ./perf report | grep "\[kvm\]"
>     85.59%  qemu-kvm  [kvm]                    [k] arch_local_irq_enable

Something's wrong here: this should be inlined, and it certainly shouldn't take 86% of the cpu time.

>      0.18%  qemu-kvm  [kvm]                    [k] kvm_arch_vcpu_ioctl_run
>      0.10%  qemu-kvm  [kvm]                    [k] paging64_walk_addr_generic
>      0.10%  qemu-kvm  [kvm]                    [k] x86_decode_insn
>      0.08%  qemu-kvm  [kvm]                    [k] kvm_check_request

0.08% is hardly hot.

>      0.06%  qemu-kvm  [kvm]                    [k] apic_clear_vector
> 
> My box: i5-2540M CPU @ 2.60GHz * 4 + 4G
> guest: 2 vcpu + 1G
> 



-- 
error compiling committee.c: too many arguments to function

