From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marcelo Tosatti
Subject: [patch 6/8] KVM: timer: optimize next_timer_event and vcpu_arm_exit
Date: Sun, 05 Jul 2009 22:55:17 -0300
Message-ID: <20090706015813.001127585@localhost.localdomain>
References: <20090706015511.923596553@localhost.localdomain>
Cc: Marcelo Tosatti
To: kvm@vger.kernel.org
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:39620 "EHLO mx2.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753325AbZGFUBV
	(ORCPT ); Mon, 6 Jul 2009 16:01:21 -0400
Received: from int-mx2.corp.redhat.com (int-mx2.corp.redhat.com [172.16.27.26])
	by mx2.redhat.com (8.13.8/8.13.8) with ESMTP id n66K1PRP013637
	for ; Mon, 6 Jul 2009 16:01:25 -0400
Content-Disposition: inline; filename=kvm-armexit-optimize
Sender: kvm-owner@vger.kernel.org
List-ID:

Avoid re-arming the exit timer when it is already active, and read the
clock only for timers that are actually candidates for injection.  This
reduces the added entry/exit overhead down to ~= 30 cycles.

Signed-off-by: Marcelo Tosatti

Index: kvm-new/arch/x86/kvm/timer.c
===================================================================
--- kvm-new.orig/arch/x86/kvm/timer.c
+++ kvm-new/arch/x86/kvm/timer.c
@@ -135,14 +135,15 @@ ktime_t kvm_vcpu_next_timer_event(struct
 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_timer *ktimer, *n;
-	ktime_t now = ktime_get();
 
 	list_for_each_entry_safe(ktimer, n, &vcpu->arch.timers, vcpu_timer) {
-		ktime_t expire;
+		ktime_t expire, now;
 
 		if (!ktimer->can_inject)
 			continue;
 
+		now = ktime_get();
+
 		expire = kvm_timer_next_event(ktimer);
 		if (ktime_to_ns(now) < ktime_to_ns(expire))
 			continue;
@@ -173,8 +174,12 @@ void kvm_vcpu_arm_exit(struct kvm_vcpu *
 {
 	ktime_t expire;
 	ktime_t now;
-	struct kvm_timer *ktimer = kvm_vcpu_injectable_timer_event(vcpu);
+	struct kvm_timer *ktimer;
+
+	if (hrtimer_active(&vcpu->arch.exit_timer))
+		return;
 
+	ktimer = kvm_vcpu_injectable_timer_event(vcpu);
 	if (!ktimer)
 		return;
 
Index: kvm-new/arch/x86/kvm/x86.c
===================================================================
--- kvm-new.orig/arch/x86/kvm/x86.c
+++ kvm-new/arch/x86/kvm/x86.c
@@ -3567,8 +3567,6 @@ static int vcpu_enter_guest(struct kvm_v
 
 	preempt_enable();
 
-	kvm_vcpu_cleanup_timer(vcpu);
-
 	down_read(&vcpu->kvm->slots_lock);
 
 	/*
--
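
For readers who want the pattern in isolation: below is a minimal user-space
sketch, not KVM code, and every name in it (struct fake_timer, timer_expired,
arm_exit_timer) is a hypothetical stand-in.  It illustrates the two
micro-optimizations the patch makes: read the clock only after the cheap
can_inject filter has passed, and return early instead of re-arming a timer
that is already pending (the role hrtimer_active() plays in
kvm_vcpu_arm_exit above).

/*
 * Minimal user-space sketch, not KVM code; all names are hypothetical
 * stand-ins for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct fake_timer {
	bool can_inject;          /* stand-in for ktimer->can_inject     */
	bool exit_timer_armed;    /* stand-in for hrtimer_active() state */
	struct timespec expires;  /* stand-in for kvm_timer_next_event() */
};

static long long ts_to_ns(const struct timespec *ts)
{
	return (long long)ts->tv_sec * 1000000000LL + ts->tv_nsec;
}

/* Optimization 1: defer the clock read until it is actually needed. */
static bool timer_expired(const struct fake_timer *t)
{
	struct timespec now;

	if (!t->can_inject)       /* cheap filter first, no clock read */
		return false;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return ts_to_ns(&now) >= ts_to_ns(&t->expires);
}

/* Optimization 2: do not re-arm a timer that is already pending. */
static void arm_exit_timer(struct fake_timer *t)
{
	if (t->exit_timer_armed)
		return;

	/* ...program the underlying timer here... */
	t->exit_timer_armed = true;
}

int main(void)
{
	struct fake_timer t = { .can_inject = false };

	clock_gettime(CLOCK_MONOTONIC, &t.expires);

	printf("expired: %d\n", timer_expired(&t)); /* 0: filtered out, clock not read */
	arm_exit_timer(&t);
	arm_exit_timer(&t);                         /* second call is a no-op */
	printf("armed: %d\n", t.exit_timer_armed);  /* 1 */
	return 0;
}

The early-return guard is the cheaper of the two wins on the hot path: a
flag check per guest entry instead of reprogramming a timer that is still
armed from the previous entry.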