From: Marcelo Tosatti <mtosatti@redhat.com>
To: Gleb Natapov <gleb@redhat.com>
Cc: kvm@vger.kernel.org
Subject: Re: [patch 4/8] KVM: x86: replace hrtimer based timer emulation
Date: Wed, 8 Jul 2009 10:17:21 -0300
Message-ID: <20090708131721.GA3382@amt.cnet>
In-Reply-To: <20090708125819.GM28046@redhat.com>
On Wed, Jul 08, 2009 at 03:58:19PM +0300, Gleb Natapov wrote:
> Excellent patch series.
>
> On Sun, Jul 05, 2009 at 10:55:15PM -0300, Marcelo Tosatti wrote:
> > int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
> > {
> > - int ret;
> > + ktime_t now, expires;
> >
> > - ret = pit_has_pending_timer(vcpu);
> > - ret |= apic_has_pending_timer(vcpu);
> > + expires = kvm_vcpu_next_timer_event(vcpu);
> > + now = ktime_get();
> > + if (expires.tv64 <= now.tv64) {
> > + if (kvm_arch_interrupt_allowed(vcpu))
> > + set_bit(KVM_REQ_UNHALT, &vcpu->requests);
> You shouldn't unhalt vcpu here. Not every timer event will generate
> interrupt (vector can be masked in pic/ioapic)
Yeah. Note however that kvm_vcpu_next_timer_event only returns the
expiration time for events that have been acked; for timers that have
had events injected but not yet acked, it returns KTIME_MAX.
So, the code above will set one spurious unhalt if:
- timer irq is injected
- guest acks the irq
- guest masks the irq
- unhalt (once)
I had a "kvm_timer_mask" callback before (along with the attached
patch), but decided to keep it simpler using the ack trick above.
I suppose one spurious unhalt is harmless, or is it a correctness issue?
> or timer event can generate NMI instead of interrupt.
In the NMI case, should it not unhalt the processor?
(Granted, bypassing the irq injection system is not a very beautiful
shortcut, but it is done in other places too, e.g. i8254.c NMI injection
via all CPUs' LINT0.)
> Leaving this code out probably means
> that you can't remove kvm_inject_pending_timer_irqs() call from
> __vcpu_run().
>
> > + return 1;
> > + }
> >
> > - return ret;
> > + return 0;
> > }
> > EXPORT_SYMBOL(kvm_cpu_has_pending_timer);
> >
>
> --
> Gleb.
[-- Attachment #2: kvm-int-mask-helpers --]
Index: kvm/arch/x86/kvm/i8259.c
===================================================================
--- kvm.orig/arch/x86/kvm/i8259.c
+++ kvm/arch/x86/kvm/i8259.c
@@ -122,6 +122,18 @@ static inline int get_priority(struct kv
return priority;
}
+int pic_int_is_masked(struct kvm *kvm, int irq)
+{
+ u8 imr;
+ struct kvm_pic *s = pic_irqchip(kvm);
+
+ pic_lock(s);
+ imr = s->pics[SELECT_PIC(irq)].imr;
+ pic_unlock(s);
+
+ return (imr & (1 << irq));
+}
+
/*
* return the pic wanted interrupt. return -1 if none
*/
Index: kvm/arch/x86/kvm/irq.c
===================================================================
--- kvm.orig/arch/x86/kvm/irq.c
+++ kvm/arch/x86/kvm/irq.c
@@ -42,6 +42,29 @@ int kvm_cpu_has_pending_timer(struct kvm
EXPORT_SYMBOL(kvm_cpu_has_pending_timer);
/*
+ * The PIT interrupt can be masked on different levels: PIC/IOAPIC/LAPIC.
+ * Hide the maze from the PIT mask notifier, all it cares about is whether
+ * the interrupt can reach the processor.
+ */
+void kvm_pit_mask_notifier(struct kvm_irq_mask_notifier *kimn, bool mask)
+{
+	bool masked = false;
+	struct kvm *kvm;
+	struct kvm_pit *pit = container_of(kimn, struct kvm_pit, mask_notifier);
+
+	kvm = pit->kvm;
+
+	if (kvm_apic_accept_pic_intr(kvm->vcpus[0])) {
+		/* LAPIC hardware disabled or INTIN0 programmed as ExtINT */
+		masked = pic_int_is_masked(kvm, 0);
+	} else {
+		/* IOAPIC -> LAPIC */
+		masked = ioapic_int_is_masked(kvm, 0);
+	}
+
+	kvm_pit_internal_mask_notifier(kvm, masked);
+}
+
+/*
* check if there is pending interrupt without
* intack.
*/
Index: kvm/virt/kvm/ioapic.c
===================================================================
--- kvm.orig/virt/kvm/ioapic.c
+++ kvm/virt/kvm/ioapic.c
@@ -101,6 +101,16 @@ static int ioapic_service(struct kvm_ioa
return injected;
}
+int ioapic_int_is_masked(struct kvm *kvm, int pin)
+{
+ struct kvm_ioapic *ioapic = kvm->arch.vioapic;
+
+ if (!ioapic)
+ return 1;
+
+ return ioapic->redirtbl[pin].fields.mask;
+}
+
static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
{
unsigned index;
Index: kvm/arch/x86/kvm/i8254.c
===================================================================
--- kvm.orig/arch/x86/kvm/i8254.c
+++ kvm/arch/x86/kvm/i8254.c
@@ -558,9 +558,9 @@ void kvm_pit_reset(struct kvm_pit *pit)
pit->pit_state.irq_ack = 1;
}
-static void pit_mask_notifer(struct kvm_irq_mask_notifier *kimn, bool mask)
+void kvm_pit_internal_mask_notifier(struct kvm *kvm, bool mask)
{
- struct kvm_pit *pit = container_of(kimn, struct kvm_pit, mask_notifier);
+ struct kvm_pit *pit = kvm->arch.vpit;
if (!mask) {
atomic_set(&pit->pit_state.pit_timer.pending, 0);
@@ -614,8 +614,8 @@ struct kvm_pit *kvm_create_pit(struct kv
kvm_pit_reset(pit);
- pit->mask_notifier.func = pit_mask_notifer;
- kvm_register_irq_mask_notifier(kvm, 0, &pit->mask_notifier);
+ pit->mask_notifier.func = kvm_pit_mask_notifier;
+ kvm_register_irq_mask_notifier(kvm, 0, &kvm->arch.vpit->mask_notifier);
kvm_iodevice_init(&pit->dev, &pit_dev_ops);
kvm_io_bus_register_dev(&kvm->pio_bus, &pit->dev);
Index: kvm/arch/x86/kvm/irq.h
===================================================================
--- kvm.orig/arch/x86/kvm/irq.h
+++ kvm/arch/x86/kvm/irq.h
@@ -98,7 +98,13 @@ void __kvm_migrate_apic_timer(struct kvm
void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu);
void __kvm_migrate_timers(struct kvm_vcpu *vcpu);
+void kvm_pit_internal_mask_notifier(struct kvm *kvm, bool mask);
+void kvm_pit_mask_notifier(struct kvm_irq_mask_notifier *kimn, bool mask);
+
int pit_has_pending_timer(struct kvm_vcpu *vcpu);
int apic_has_pending_timer(struct kvm_vcpu *vcpu);
+int pic_int_is_masked(struct kvm *kvm, int irq);
+int ioapic_int_is_masked(struct kvm *kvm, int irq);
+
#endif
Thread overview:
2009-07-06 1:55 [patch 0/8] RFC: in-kernel timer emulation changes Marcelo Tosatti
2009-07-06 1:55 ` [patch 1/8] KVM: timer interface unification Marcelo Tosatti
2009-07-08 13:33 ` Avi Kivity
2009-07-06 1:55 ` [patch 2/8] KVM: move lapic timer ack to EOI handler Marcelo Tosatti
2009-07-06 1:55 ` [patch 3/8] KVM: x86: per-vcpu timer list Marcelo Tosatti
2009-07-06 1:55 ` [patch 4/8] KVM: x86: replace hrtimer based timer emulation Marcelo Tosatti
2009-07-08 12:58 ` Gleb Natapov
2009-07-08 13:17 ` Marcelo Tosatti [this message]
2009-07-08 13:39 ` Gleb Natapov
2009-07-08 15:42 ` Marcelo Tosatti
2009-07-08 16:13 ` Gleb Natapov
2009-07-08 16:29 ` Gleb Natapov
2009-07-08 16:37 ` Marcelo Tosatti
2009-07-08 16:40 ` Gleb Natapov
2009-07-08 16:47 ` Marcelo Tosatti
2009-07-08 13:41 ` Avi Kivity
2009-07-08 16:24 ` Marcelo Tosatti
2009-07-08 16:36 ` Avi Kivity
2009-07-06 1:55 ` [patch 5/8] KVM: timer: honor noreinject tunable Marcelo Tosatti
2009-07-06 1:55 ` [patch 6/8] KVM: timer: optimize next_timer_event and vcpu_arm_exit Marcelo Tosatti
2009-07-06 1:55 ` [patch 7/8] KVM: PIT: removed unused code Marcelo Tosatti
2009-07-06 1:55 ` [patch 8/8] kvmctl: time testcase Marcelo Tosatti
2009-07-08 13:43 ` [patch 0/8] RFC: in-kernel timer emulation changes Avi Kivity