* windows workload: many ept_violation and mmio exits
@ 2009-12-03 13:46 Andrew Theurer
2009-12-03 14:34 ` Avi Kivity
From: Andrew Theurer @ 2009-12-03 13:46 UTC (permalink / raw)
To: kvm
I am running a Windows workload which has 26 Windows VMs running many
instances of a J2EE workload. There are 13 pairs of an application
server VM and a database server VM. There seem to be quite a few
vm_exits, and it looks like over a third of them are mmio_exits:
> efer_relo 0
> exits 337139
> fpu_reloa 247321
> halt_exit 19092
> halt_wake 18611
> host_stat 247332
> hypercall 0
> insn_emul 184265
> insn_emul 184265
> invlpg 0
> io_exits 69184
> irq_exits 52953
> irq_injec 48115
> irq_windo 2411
> largepage 19
> mmio_exit 123554
> mmu_cache 0
> mmu_flood 0
> mmu_pde_z 0
> mmu_pte_u 0
> mmu_pte_w 0
> mmu_recyc 0
> mmu_shado 0
> mmu_unsyn 0
> nmi_injec 0
> nmi_windo 0
> pf_fixed 19
> pf_guest 0
> remote_tl 0
> request_i 0
> signal_ex 0
> tlb_flush 0
I collected a kvmtrace, and below is a very small portion of that. Is
there a way I can figure out what device the mmios are for? Also, is
it normal to have lots of ept_violations? This is a 2-socket Nehalem
system with SMT on.
> qemu-system-x86-19673 [014] 213577.939614: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939624: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939624: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939627: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939629: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f214d
> qemu-system-x86-19673 [014] 213577.939631: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939633: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939634: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939636: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19332 [008] 213577.939637: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939638: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f24e2
> qemu-system-x86-19673 [014] 213577.939640: kvm_entry: vcpu 0
> qemu-system-x86-19211 [010] 213577.939663: kvm_set_irq: gsi 11 level 1 source 0
> qemu-system-x86-19211 [010] 213577.939664: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19211 [010] 213577.939665: kvm_apic_accept_irq: apicid 0 vec 130 (LowPrio|level)
> qemu-system-x86-19211 [010] 213577.939666: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19673 [014] 213577.939692: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939693: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939696: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19332 [008] 213577.939699: kvm_exit: reason ept_violation rip 0xfffff80001b3af8e
> qemu-system-x86-19332 [008] 213577.939700: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939702: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f3da6
> qemu-system-x86-19563 [010] 213577.939702: kvm_set_irq: gsi 11 level 1 source 0
> qemu-system-x86-19563 [010] 213577.939703: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19673 [014] 213577.939704: kvm_entry: vcpu 0
> qemu-system-x86-19563 [010] 213577.939705: kvm_apic_accept_irq: apicid 0 vec 130 (LowPrio|level)
> qemu-system-x86-19332 [008] 213577.939706: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19563 [010] 213577.939707: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19332 [008] 213577.939713: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0x29a105de
> qemu-system-x86-19332 [008] 213577.939715: kvm_entry: vcpu 0
> qemu-system-x86-19201 [011] 213577.939716: kvm_exit: reason exception rip 0x1162412
> qemu-system-x86-19332 [008] 213577.939717: kvm_exit: reason halt rip 0xfffffa6000fae7a1
> qemu-system-x86-19201 [011] 213577.939717: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939761: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939762: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939766: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939772: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f58dd
> qemu-system-x86-19673 [014] 213577.939774: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939776: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939776: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939779: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939782: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f5d09
> qemu-system-x86-19673 [014] 213577.939784: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939791: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939791: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939794: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939798: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f62fb
> qemu-system-x86-19673 [014] 213577.939799: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939802: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939802: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939805: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939808: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f66f4
> qemu-system-x86-19673 [014] 213577.939809: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939836: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939837: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939845: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19661 [014] 213577.939875: kvm_set_irq: gsi 11 level 1 source 0
> qemu-system-x86-19661 [014] 213577.939876: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19661 [014] 213577.939876: kvm_apic_accept_irq: apicid 0 vec 130 (LowPrio|level)
> qemu-system-x86-19661 [014] 213577.939877: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19320 [008] 213577.939895: kvm_set_irq: gsi 11 level 1 source 0
> qemu-system-x86-19320 [008] 213577.939896: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19320 [008] 213577.939897: kvm_apic_accept_irq: apicid 0 vec 130 (LowPrio|level)
> qemu-system-x86-19673 [014] 213577.939898: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f89f2
> qemu-system-x86-19320 [008] 213577.939899: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19673 [014] 213577.939900: kvm_inj_virq: irq 130
> qemu-system-x86-19673 [014] 213577.939901: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939904: kvm_exit: reason io_instruction rip 0xfffffa6001acb435
> qemu-system-x86-19673 [014] 213577.939904: kvm_pio: pio_read at 0xc033 size 1 count 1
> qemu-system-x86-19673 [014] 213577.939907: kvm_set_irq: gsi 11 level 0 source 0
> qemu-system-x86-19673 [014] 213577.939907: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19673 [014] 213577.939908: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19673 [014] 213577.939910: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939912: kvm_exit: reason apic_access rip 0xfffff800016a050c
> qemu-system-x86-19673 [014] 213577.939914: kvm_mmio: mmio write len 4 gpa 0xfee000b0 val 0x0
> qemu-system-x86-19673 [014] 213577.939914: kvm_apic: apic_write APIC_EOI = 0x0
> qemu-system-x86-19673 [014] 213577.939914: kvm_ack_irq: irqchip IOAPIC pin 11
> qemu-system-x86-19673 [014] 213577.939915: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939918: kvm_exit: reason ext_irq rip 0xfffff800016a12f0
> qemu-system-x86-19661 [014] 213577.939934: kvm_set_irq: gsi 11 level 1 source 0
> qemu-system-x86-19661 [014] 213577.939935: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19661 [014] 213577.939936: kvm_apic_accept_irq: apicid 0 vec 130 (LowPrio|level)
> qemu-system-x86-19661 [014] 213577.939936: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19332 [008] 213577.939940: kvm_inj_virq: irq 130
> qemu-system-x86-19332 [008] 213577.939941: kvm_entry: vcpu 0
> qemu-system-x86-19332 [008] 213577.939944: kvm_exit: reason io_instruction rip 0xfffffa6000f9d435
> qemu-system-x86-19332 [008] 213577.939944: kvm_pio: pio_read at 0xc033 size 1 count 1
> qemu-system-x86-19332 [008] 213577.939948: kvm_set_irq: gsi 11 level 0 source 0
> qemu-system-x86-19332 [008] 213577.939949: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19332 [008] 213577.939949: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19673 [014] 213577.939950: kvm_inj_virq: irq 130
> qemu-system-x86-19673 [014] 213577.939951: kvm_entry: vcpu 0
> qemu-system-x86-19332 [008] 213577.939953: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939953: kvm_exit: reason io_instruction rip 0xfffffa6001acb435
> qemu-system-x86-19673 [014] 213577.939954: kvm_pio: pio_read at 0xc033 size 1 count 1
> qemu-system-x86-19332 [008] 213577.939955: kvm_exit: reason apic_access rip 0xfffff8000166e50c
> qemu-system-x86-19673 [014] 213577.939957: kvm_set_irq: gsi 11 level 0 source 0
> qemu-system-x86-19332 [008] 213577.939958: kvm_mmio: mmio write len 4 gpa 0xfee000b0 val 0x0
> qemu-system-x86-19673 [014] 213577.939958: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19332 [008] 213577.939958: kvm_apic: apic_write APIC_EOI = 0x0
> qemu-system-x86-19673 [014] 213577.939958: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19332 [008] 213577.939958: kvm_ack_irq: irqchip IOAPIC pin 11
> qemu-system-x86-19332 [008] 213577.939959: kvm_entry: vcpu 0
> qemu-system-x86-19332 [008] 213577.939961: kvm_exit: reason ept_violation rip 0xfffff80001b3af8e
> qemu-system-x86-19332 [008] 213577.939961: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939962: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939964: kvm_exit: reason apic_access rip 0xfffff800016a050c
> qemu-system-x86-19332 [008] 213577.939965: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939966: kvm_mmio: mmio write len 4 gpa 0xfee000b0 val 0x0
> qemu-system-x86-19673 [014] 213577.939967: kvm_apic: apic_write APIC_EOI = 0x0
> qemu-system-x86-19673 [014] 213577.939967: kvm_ack_irq: irqchip IOAPIC pin 11
> qemu-system-x86-19673 [014] 213577.939968: kvm_entry: vcpu 0
> qemu-system-x86-19332 [008] 213577.939969: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0x29a16a40
> qemu-system-x86-19332 [008] 213577.939971: kvm_entry: vcpu 0
> qemu-system-x86-19332 [008] 213577.939981: kvm_exit: reason ept_violation rip 0xfffff80001b3af8e
> qemu-system-x86-19332 [008] 213577.939982: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939982: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939983: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19332 [008] 213577.939985: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939987: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19332 [008] 213577.939989: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0x29a1722f
> qemu-system-x86-19673 [014] 213577.939991: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8fae83
> qemu-system-x86-19332 [008] 213577.939991: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939993: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.940010: kvm_exit: reason cr_access rip 0xfffff800016ee2b2
> qemu-system-x86-19673 [014] 213577.940011: kvm_cr: cr_write 4 = 0x678
> qemu-system-x86-19673 [014] 213577.940017: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.940019: kvm_exit: reason cr_access rip 0xfffff800016ee2b5
> qemu-system-x86-19673 [014] 213577.940019: kvm_cr: cr_write 4 = 0x6f8
> qemu-system-x86-19673 [014] 213577.940021: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.940048: kvm_exit: reason exception rip 0xfffff800016a0620
> qemu-system-x86-19673 [014] 213577.940049: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.940079: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.940080: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.940083: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
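FWIW, the error_code printed by kvm_page_fault above is the raw EPT
exit qualification in hex. A small decoder, as a sketch (bit positions
taken from the Intel SDM, not from the kvm sources):

/* Decode an EPT-violation exit qualification, e.g. the error_code 181
 * (hex) seen throughout the trace above. */
#include <stdio.h>

static void decode_ept_qual(unsigned long q)
{
    printf("read=%lu write=%lu fetch=%lu ",
           q & 1, (q >> 1) & 1, (q >> 2) & 1);
    printf("ept_r=%lu ept_w=%lu ept_x=%lu ",
           (q >> 3) & 1, (q >> 4) & 1, (q >> 5) & 1);
    printf("gla_valid=%lu final_access=%lu\n",
           (q >> 7) & 1, (q >> 8) & 1);
}

int main(void)
{
    decode_ept_qual(0x181);
    return 0;
}

For 0x181 this prints read=1 with ept_r/w/x all zero: a plain data read
of a guest physical address that has no EPT mapping at all, which is
exactly what an mmio hole looks like.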
Here is oprofile:
> 4117817 62.2029 kvm-intel.ko kvm-intel.ko vmx_vcpu_run
> 338198 5.1087 qemu-system-x86_64 qemu-system-x86_64 /usr/local/qemu/48bb360cc687b89b74dfb1cac0f6e8812b64841c/bin/qemu-system-x86_64
> 62449 0.9433 kvm.ko kvm.ko kvm_arch_vcpu_ioctl_run
> 56512 0.8537 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 copy_user_generic_string
> 52373 0.7911 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 native_write_msr_safe
> 34847 0.5264 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 schedule
> 34678 0.5238 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 fget_light
> 29894 0.4516 kvm.ko kvm.ko paging64_walk_addr
> 27778 0.4196 kvm.ko kvm.ko gfn_to_hva
> 24563 0.3710 kvm.ko kvm.ko x86_decode_insn
> 23900 0.3610 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 do_select
> 21123 0.3191 libc-2.10.90.so libc-2.10.90.so memcpy
> 20694 0.3126 kvm.ko kvm.ko x86_emulate_insn
> 19862 0.3000 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 kfree
> 19107 0.2886 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 __switch_to
> 18319 0.2767 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 update_curr
> 17981 0.2716 libc-2.10.90.so libc-2.10.90.so ioctl
> 17934 0.2709 librt-2.10.90.so librt-2.10.90.so clock_gettime
> 17874 0.2700 ioatdma.ko ioatdma.ko ioat2_issue_pending
> 17578 0.2655 libpthread-2.10.90.so libpthread-2.10.90.so pthread_mutex_lock
> 17041 0.2574 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 task_rq_lock
> 15806 0.2388 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 native_read_msr_safe
> 15292 0.2310 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 fput
> 14197 0.2145 libc-2.10.90.so libc-2.10.90.so memset
> 14167 0.2140 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 __up_read
> 13974 0.2111 kvm.ko kvm.ko kvm_arch_vcpu_put
> 13885 0.2097 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 select_task_rq_fair
> 13766 0.2079 bnx2.ko bnx2.ko bnx2_poll_work
> 13349 0.2016 kvm.ko kvm.ko find_highest_vector
> 13121 0.1982 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 __down_read
> 12518 0.1891 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 do_vfs_ioctl
> 12184 0.1840 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 try_to_wake_up
> 12095 0.1827 kvm.ko kvm.ko gfn_to_memslot
> 11870 0.1793 kvm.ko kvm.ko kvm_read_guest
> 11657 0.1761 tun.ko tun.ko tun_chr_aio_read
-Andrew
* Re: windows workload: many ept_violation and mmio exits
2009-12-03 13:46 windows workload: many ept_violation and mmio exits Andrew Theurer
@ 2009-12-03 14:34 ` Avi Kivity
2011-08-26 5:32 ` ya su
From: Avi Kivity @ 2009-12-03 14:34 UTC (permalink / raw)
To: Andrew Theurer; +Cc: kvm
On 12/03/2009 03:46 PM, Andrew Theurer wrote:
> I am running a Windows workload which has 26 Windows VMs running many
> instances of a J2EE workload. There are 13 pairs of an application
> server VM and a database server VM. There seem to be quite a few
> vm_exits, and it looks like over a third of them are mmio_exits:
>
>> efer_relo 0
>> exits 337139
>> fpu_reloa 247321
>> halt_exit 19092
>> halt_wake 18611
>> host_stat 247332
>> hypercall 0
>> insn_emul 184265
>> insn_emul 184265
>> invlpg 0
>> io_exits 69184
>> irq_exits 52953
>> irq_injec 48115
>> irq_windo 2411
>> largepage 19
>> mmio_exit 123554
> I collected a kvmtrace, and below is a very small portion of that. Is
> there a way I can figure out what device the mmio's are for?
We want 'info physical_address_space' in the monitor.
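In the meantime the gpa can be matched against the fixed PC mmio map by
hand. A rough sketch (the table is the conventional layout qemu uses,
written from memory and not exhaustive; 0xfed000f0 is offset 0xf0 into
the hpet block, which is the hpet main counter):

/* Map a gpa from the trace onto the fixed PC platform mmio ranges. */
#include <stdio.h>

struct range { unsigned long base, size; const char *dev; };

static const struct range pc_mmio[] = {
    { 0xfec00000, 0x1000, "ioapic" },
    { 0xfed00000, 0x400,  "hpet" },
    { 0xfee00000, 0x1000, "local apic" },
};

int main(void)
{
    unsigned long gpa = 0xfed000f0;   /* from the kvm_mmio lines */
    unsigned i;

    for (i = 0; i < sizeof(pc_mmio) / sizeof(pc_mmio[0]); i++)
        if (gpa - pc_mmio[i].base < pc_mmio[i].size)
            printf("0x%lx -> %s (offset 0x%lx)\n",
                   gpa, pc_mmio[i].dev, gpa - pc_mmio[i].base);
    return 0;
}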
> Also, is it normal to have lots of ept_violations? This is a 2 socket
> Nehalem system with SMT on.
So long as pf_fixed is low, these are all mmio or apic accesses.
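Each of those accesses is expensive: an ept_violation exit, a pass
through the in-kernel emulator, and a KVM_EXIT_MMIO round trip to
userspace. A minimal sketch of the userspace half (schematic, not
qemu's actual code; device_mmio_read() is a hypothetical dispatch
helper, and error handling is elided):

/* Userspace side of one mmio exit, schematically. */
#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* hypothetical: find the device covering this gpa and read from it */
uint64_t device_mmio_read(uint64_t gpa, unsigned len);

void vcpu_loop(int vcpu_fd, struct kvm_run *run)  /* run is mmap()ed */
{
    for (;;) {
        ioctl(vcpu_fd, KVM_RUN, 0);
        if (run->exit_reason == KVM_EXIT_MMIO && !run->mmio.is_write) {
            uint64_t val = device_mmio_read(run->mmio.phys_addr,
                                            run->mmio.len);
            memcpy(run->mmio.data, &val, run->mmio.len);
        }
        /* all other exit reasons elided */
    }
}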
>
>
>> qemu-system-x86-19673 [014] 213577.939624: kvm_page_fault: address
>> fed000f0 error_code 181
>> qemu-system-x86-19673 [014] 213577.939627: kvm_mmio: mmio
>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
>> qemu-system-x86-19673 [014] 213577.939629: kvm_mmio: mmio read len 4
>> gpa 0xfed000f0 val 0xfb8f214d
hpet
>> qemu-system-x86-19673 [014] 213577.939631: kvm_entry: vcpu 0
>> qemu-system-x86-19673 [014] 213577.939633: kvm_exit: reason
>> ept_violation rip 0xfffff8000160ef8e
>> qemu-system-x86-19673 [014] 213577.939634: kvm_page_fault: address
>> fed000f0 error_code 181
hpet - was this the same exit? we ought to skip over the emulated
instruction.
>> qemu-system-x86-19673 [014] 213577.939693: kvm_page_fault: address
>> fed000f0 error_code 181
>> qemu-system-x86-19673 [014] 213577.939696: kvm_mmio: mmio
>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
hpet
>> qemu-system-x86-19332 [008] 213577.939699: kvm_exit: reason
>> ept_violation rip 0xfffff80001b3af8e
>> qemu-system-x86-19332 [008] 213577.939700: kvm_page_fault: address
>> fed000f0 error_code 181
>> qemu-system-x86-19673 [014] 213577.939702: kvm_mmio: mmio read len 4
>> gpa 0xfed000f0 val 0xfb8f3da6
hpet
>> qemu-system-x86-19332 [008] 213577.939706: kvm_mmio: mmio
>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
>> qemu-system-x86-19563 [010] 213577.939707: kvm_ioapic_set_irq: pin
>> 11 dst 1 vec=130 (LowPrio|logical|level)
>> qemu-system-x86-19332 [008] 213577.939713: kvm_mmio: mmio read len 4
>> gpa 0xfed000f0 val 0x29a105de
hpet ...
>> qemu-system-x86-19673 [014] 213577.939908: kvm_ioapic_set_irq: pin
>> 11 dst 1 vec=130 (LowPrio|logical|level)
>> qemu-system-x86-19673 [014] 213577.939910: kvm_entry: vcpu 0
>> qemu-system-x86-19673 [014] 213577.939912: kvm_exit: reason
>> apic_access rip 0xfffff800016a050c
>> qemu-system-x86-19673 [014] 213577.939914: kvm_mmio: mmio write len
>> 4 gpa 0xfee000b0 val 0x0
apic eoi
>> qemu-system-x86-19332 [008] 213577.939958: kvm_mmio: mmio write len
>> 4 gpa 0xfee000b0 val 0x0
>> qemu-system-x86-19673 [014] 213577.939958: kvm_pic_set_irq: chip 1
>> pin 3 (level|masked)
>> qemu-system-x86-19332 [008] 213577.939958: kvm_apic: apic_write
>> APIC_EOI = 0x0
apic eoi
>> qemu-system-x86-19673 [014] 213577.940010: kvm_exit: reason
>> cr_access rip 0xfffff800016ee2b2
>> qemu-system-x86-19673 [014] 213577.940011: kvm_cr: cr_write 4 = 0x678
>> qemu-system-x86-19673 [014] 213577.940017: kvm_entry: vcpu 0
>> qemu-system-x86-19673 [014] 213577.940019: kvm_exit: reason
>> cr_access rip 0xfffff800016ee2b5
>> qemu-system-x86-19673 [014] 213577.940019: kvm_cr: cr_write 4 = 0x6f8
toggling global pages, we can avoid that with CR4_GUEST_HOST_MASK.
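For reference, a sketch of that mechanism: bits set in the VMCS CR4
guest/host mask are host-owned and trap on write, bits left clear are
guest-owned, so leaving CR4.PGE (bit 7, exactly the 0x678 <-> 0x6f8
difference above) to the guest makes the toggle exit-free. Field
encodings are from the SDM; vmcs_writel() stands in for kvm's internal
helper:

/* Simplified sketch; a CR4 write only exits if it touches a
 * host-owned bit. */
#define X86_CR4_PGE (1UL << 7)

enum vmcs_field {                     /* encodings per the SDM */
    CR4_GUEST_HOST_MASK = 0x6002,
    CR4_READ_SHADOW     = 0x6006,
};

void vmcs_writel(unsigned long field, unsigned long value);

static void cr4_intercept_setup(unsigned long host_owned_bits,
                                unsigned long guest_cr4)
{
    host_owned_bits &= ~X86_CR4_PGE;  /* let the guest toggle PGE freely */
    vmcs_writel(CR4_GUEST_HOST_MASK, host_owned_bits);
    vmcs_writel(CR4_READ_SHADOW, guest_cr4);
}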
So, tons of hpet and eois. We can accelerate both by using the Hyper-V
accelerations; we already have some (unmerged) code for eoi, so this
should be improved soon.
>
> Here is oprofile:
>
>> 4117817 62.2029 kvm-intel.ko kvm-intel.ko
>> vmx_vcpu_run
>> 338198 5.1087 qemu-system-x86_64 qemu-system-x86_64
>> /usr/local/qemu/48bb360cc687b89b74dfb1cac0f6e8812b64841c/bin/qemu-system-x86_64
>>
>> 62449 0.9433 kvm.ko kvm.ko
>> kvm_arch_vcpu_ioctl_run
>> 56512 0.8537
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> copy_user_generic_string
We ought to switch to put_user/get_user. rep movs has quite a slow start-up.
>> 52373 0.7911
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> native_write_msr_safe
hpet in kernel or hyper-V timers will reduce this.
>> 34847 0.5264
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> schedule
>> 34678 0.5238
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> fget_light
and this.
>> 29894 0.4516 kvm.ko kvm.ko
>> paging64_walk_addr
>> 27778 0.4196 kvm.ko kvm.ko
>> gfn_to_hva
>> 24563 0.3710 kvm.ko kvm.ko
>> x86_decode_insn
>> 23900 0.3610
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>> do_select
>> 21123 0.3191 libc-2.10.90.so libc-2.10.90.so
>> memcpy
>> 20694 0.3126 kvm.ko kvm.ko
>> x86_emulate_insn
hyper-V APIC and timers will reduce all of the above (except memcpy).
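From the guest's point of view the enlightenments replace the apic-page
write at 0xfee000b0 with a synthetic MSR write, and the hpet polling
with synthetic timers. A guest-side sketch (MSR numbers as given in the
Hyper-V TLFS; treat them as assumptions):

/* What an enlightened Windows guest does instead of the mmio EOI. */
#define HV_X64_MSR_EOI            0x40000070
#define HV_X64_MSR_STIMER0_CONFIG 0x400000b0
#define HV_X64_MSR_STIMER0_COUNT  0x400000b1

static inline void wrmsr(unsigned int msr, unsigned long long val)
{
    __asm__ volatile("wrmsr" : : "c"(msr),
                     "a"((unsigned int)val),
                     "d"((unsigned int)(val >> 32)));
}

static void hv_apic_eoi(void)
{
    /* a single wrmsr the hypervisor can handle cheaply, instead of an
     * apic_access exit plus mmio emulation */
    wrmsr(HV_X64_MSR_EOI, 0);
}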
--
error compiling committee.c: too many arguments to function
* Re: windows workload: many ept_violation and mmio exits
2009-12-03 14:34 ` Avi Kivity
@ 2011-08-26 5:32 ` ya su
2011-08-28 7:42 ` Avi Kivity
From: ya su @ 2011-08-26 5:32 UTC (permalink / raw)
To: Avi Kivity; +Cc: Andrew Theurer, kvm
Hi, Avi:
I met the same problem: tons of hpet vm_exits (vector 209, with the
fault address in the guest VM's hpet mmio range). Even if I disable the
hpet device in the win7 guest VM, it still produces a large amount of
vm_exits when tracing with trace-cmd; and if I add -no-hpet to start
the VM, it still has an HPET device inside the VM.
Does that mean the HPET device in the VM does not depend on the
emulated hpet device in qemu-kvm? Is there any way to disable the VM's
HPET device to prevent so many vm_exits? Thanks.
Regards.
Suya.
2009/12/3 Avi Kivity <avi@redhat.com>:
> On 12/03/2009 03:46 PM, Andrew Theurer wrote:
>>
>> I am running a Windows workload which has 26 Windows VMs running many
>> instances of a J2EE workload. There are 13 pairs of an application server
>> VM and a database server VM. There seem to be quite a few vm_exits, and it
>> looks like over a third of them are mmio_exits:
>>
>>> efer_relo 0
>>> exits 337139
>>> fpu_reloa 247321
>>> halt_exit 19092
>>> halt_wake 18611
>>> host_stat 247332
>>> hypercall 0
>>> insn_emul 184265
>>> insn_emul 184265
>>> invlpg 0
>>> io_exits 69184
>>> irq_exits 52953
>>> irq_injec 48115
>>> irq_windo 2411
>>> largepage 19
>>> mmio_exit 123554
>>
>> I collected a kvmtrace, and below is a very small portion of that. Is
>> there a way I can figure out what device the mmio's are for?
>
> We want 'info physical_address_space' in the monitor.
>
>> Also, is it normal to have lots of ept_violations? This is a 2 socket
>> Nehalem system with SMT on.
>
> So long as pf_fixed is low, these are all mmio or apic accesses.
>
>>
>>
>>> qemu-system-x86-19673 [014] 213577.939624: kvm_page_fault: address
>>> fed000f0 error_code 181
>>> qemu-system-x86-19673 [014] 213577.939627: kvm_mmio: mmio
>>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
>>> qemu-system-x86-19673 [014] 213577.939629: kvm_mmio: mmio read len 4 gpa
>>> 0xfed000f0 val 0xfb8f214d
>
> hpet
>
>>> qemu-system-x86-19673 [014] 213577.939631: kvm_entry: vcpu 0
>>> qemu-system-x86-19673 [014] 213577.939633: kvm_exit: reason
>>> ept_violation rip 0xfffff8000160ef8e
>>> qemu-system-x86-19673 [014] 213577.939634: kvm_page_fault: address
>>> fed000f0 error_code 181
>
> hpet - was this the same exit? we ought to skip over the emulated
> instruction.
>
>>> qemu-system-x86-19673 [014] 213577.939693: kvm_page_fault: address
>>> fed000f0 error_code 181
>>> qemu-system-x86-19673 [014] 213577.939696: kvm_mmio: mmio
>>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
>
> hpet
>
>>> qemu-system-x86-19332 [008] 213577.939699: kvm_exit: reason
>>> ept_violation rip 0xfffff80001b3af8e
>>> qemu-system-x86-19332 [008] 213577.939700: kvm_page_fault: address
>>> fed000f0 error_code 181
>>> qemu-system-x86-19673 [014] 213577.939702: kvm_mmio: mmio read len 4 gpa
>>> 0xfed000f0 val 0xfb8f3da6
>
> hpet
>
>>> qemu-system-x86-19332 [008] 213577.939706: kvm_mmio: mmio
>>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
>>> qemu-system-x86-19563 [010] 213577.939707: kvm_ioapic_set_irq: pin 11
>>> dst 1 vec=130 (LowPrio|logical|level)
>>> qemu-system-x86-19332 [008] 213577.939713: kvm_mmio: mmio read len 4 gpa
>>> 0xfed000f0 val 0x29a105de
>
> hpet ...
>
>>> qemu-system-x86-19673 [014] 213577.939908: kvm_ioapic_set_irq: pin 11
>>> dst 1 vec=130 (LowPrio|logical|level)
>>> qemu-system-x86-19673 [014] 213577.939910: kvm_entry: vcpu 0
>>> qemu-system-x86-19673 [014] 213577.939912: kvm_exit: reason apic_access
>>> rip 0xfffff800016a050c
>>> qemu-system-x86-19673 [014] 213577.939914: kvm_mmio: mmio write len 4
>>> gpa 0xfee000b0 val 0x0
>
> apic eoi
>
>>> qemu-system-x86-19332 [008] 213577.939958: kvm_mmio: mmio write len 4
>>> gpa 0xfee000b0 val 0x0
>>> qemu-system-x86-19673 [014] 213577.939958: kvm_pic_set_irq: chip 1 pin 3
>>> (level|masked)
>>> qemu-system-x86-19332 [008] 213577.939958: kvm_apic: apic_write APIC_EOI
>>> = 0x0
>
> apic eoi
>
>>> qemu-system-x86-19673 [014] 213577.940010: kvm_exit: reason cr_access
>>> rip 0xfffff800016ee2b2
>>> qemu-system-x86-19673 [014] 213577.940011: kvm_cr: cr_write 4 = 0x678
>>> qemu-system-x86-19673 [014] 213577.940017: kvm_entry: vcpu 0
>>> qemu-system-x86-19673 [014] 213577.940019: kvm_exit: reason cr_access
>>> rip 0xfffff800016ee2b5
>>> qemu-system-x86-19673 [014] 213577.940019: kvm_cr: cr_write 4 = 0x6f8
>
> toggling global pages, we can avoid that with CR4_GUEST_HOST_MASK.
>
> So, tons of hpet and eois. We can accelerate both by using the Hyper-V
> accelerations; we already have some (unmerged) code for eoi, so this should
> be improved soon.
>
>>
>> Here is oprofile:
>>
>>> 4117817 62.2029 kvm-intel.ko kvm-intel.ko
>>> vmx_vcpu_run
>>> 338198 5.1087 qemu-system-x86_64 qemu-system-x86_64
>>> /usr/local/qemu/48bb360cc687b89b74dfb1cac0f6e8812b64841c/bin/qemu-system-x86_64
>>> 62449 0.9433 kvm.ko kvm.ko
>>> kvm_arch_vcpu_ioctl_run
>>> 56512 0.8537
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> copy_user_generic_string
>
> We ought to switch to put_user/get_user. rep movs has quite a slow start-up.
>
>>> 52373 0.7911
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> native_write_msr_safe
>
> hpet in kernel or hyper-V timers will reduce this.
>
>>> 34847 0.5264
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> schedule
>>> 34678 0.5238
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> fget_light
>
> and this.
>
>>> 29894 0.4516 kvm.ko kvm.ko
>>> paging64_walk_addr
>>> 27778 0.4196 kvm.ko kvm.ko
>>> gfn_to_hva
>>> 24563 0.3710 kvm.ko kvm.ko
>>> x86_decode_insn
>>> 23900 0.3610
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1
>>> do_select
>>> 21123 0.3191 libc-2.10.90.so libc-2.10.90.so
>>> memcpy
>>> 20694 0.3126 kvm.ko kvm.ko
>>> x86_emulate_insn
>
> hyper-V APIC and timers will reduce all of the above (except memcpy).
>
> --
> error compiling committee.c: too many arguments to function
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
* Re: windows workload: many ept_violation and mmio exits
2011-08-26 5:32 ` ya su
@ 2011-08-28 7:42 ` Avi Kivity
2011-08-28 18:54 ` [Qemu-devel] " Alexander Graf
From: Avi Kivity @ 2011-08-28 7:42 UTC (permalink / raw)
To: ya su; +Cc: Andrew Theurer, kvm
On 08/26/2011 08:32 AM, ya su wrote:
> Hi, Avi:
>
> I met the same problem: tons of hpet vm_exits (vector 209, with the
> fault address in the guest VM's hpet mmio range). Even if I disable the
> hpet device in the win7 guest VM, it still produces a large amount of
> vm_exits when tracing with trace-cmd; and if I add -no-hpet to start
> the VM, it still has an HPET device inside the VM.
>
> Does that mean the HPET device in the VM does not depend on the
> emulated hpet device in qemu-kvm? Is there any way to disable the VM's
> HPET device to prevent so many vm_exits? Thanks.
>
Looks like a bug to me.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: windows workload: many ept_violation and mmio exits
2011-08-28 7:42 ` Avi Kivity
@ 2011-08-28 18:54 ` Alexander Graf
From: Alexander Graf @ 2011-08-28 18:54 UTC (permalink / raw)
To: Avi Kivity
Cc: Andrew Theurer, ya su, kvm@vger.kernel.org list, QEMU Developers
On 28.08.2011, at 02:42, Avi Kivity wrote:
> On 08/26/2011 08:32 AM, ya su wrote:
>> Hi, Avi:
>>
>> I met the same problem: tons of hpet vm_exits (vector 209, with the
>> fault address in the guest VM's hpet mmio range). Even if I disable the
>> hpet device in the win7 guest VM, it still produces a large amount of
>> vm_exits when tracing with trace-cmd; and if I add -no-hpet to start
>> the VM, it still has an HPET device inside the VM.
>>
>> Does that mean the HPET device in the VM does not depend on the
>> emulated hpet device in qemu-kvm? Is there any way to disable the VM's
>> HPET device to prevent so many vm_exits? Thanks.
>>
>
> Looks like a bug to me.
IIRC disabling the HPET device doesn't remove the entry from the DSDT, no? So the guest OS might still think it's there while nothing responds (read returns -1).
Alex
* HPET configuration in Seabios (was: Re: windows workload: many ept_violation and mmio exits)
2011-08-28 18:54 ` [Qemu-devel] " Alexander Graf
@ 2011-08-28 20:42 ` Jan Kiszka
From: Jan Kiszka @ 2011-08-28 20:42 UTC (permalink / raw)
To: Alexander Graf, Kevin O'Connor
Cc: Avi Kivity, Andrew Theurer, ya su, kvm@vger.kernel.org list,
QEMU Developers, Gleb Natapov, seabios
On 2011-08-28 20:54, Alexander Graf wrote:
>
> On 28.08.2011, at 02:42, Avi Kivity wrote:
>
>> On 08/26/2011 08:32 AM, ya su wrote:
>>> Hi, Avi:
>>>
>>> I met the same problem: tons of hpet vm_exits (vector 209, with the
>>> fault address in the guest VM's hpet mmio range). Even if I disable the
>>> hpet device in the win7 guest VM, it still produces a large amount of
>>> vm_exits when tracing with trace-cmd; and if I add -no-hpet to start
>>> the VM, it still has an HPET device inside the VM.
>>>
>>> Does that mean the HPET device in the VM does not depend on the
>>> emulated hpet device in qemu-kvm? Is there any way to disable the VM's
>>> HPET device to prevent so many vm_exits? Thanks.
>>>
>>
>> Looks like a bug to me.
>
> IIRC disabling the HPET device doesn't remove the entry from the DSDT, no? So the guest OS might still think it's there while nothing responds (read returns -1).
Exactly. We have had a fw_cfg interface in place for quite a while now
(though I wonder how the firmware is supposed to tell -no-hpet apart
from QEMU versions that don't provide this data - both return count =
255), but SeaBIOS still exposes one HPET block at a hard-coded address
unconditionally.
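The interface in question, sketched in C (selector/data ioports as on
x86; the FW_CFG_HPET key and the leading count byte follow my reading
of qemu's hpet_fw_config, so verify against qemu's hpet header before
relying on it):

/* Firmware-side read of qemu's fw_cfg hpet entry. */
#include <stdint.h>
#include <sys/io.h>     /* firmware would use its own port accessors */

#define FW_CFG_PORT_SEL   0x510
#define FW_CFG_PORT_DATA  0x511
#define FW_CFG_HPET       0x8004

static int hpet_count_from_fw_cfg(void)
{
    uint8_t count;

    outw(FW_CFG_HPET, FW_CFG_PORT_SEL);
    count = inb(FW_CFG_PORT_DATA);  /* first byte: hpet_fw_config.count */
    if (count != 0xff)
        return count;
    /* 0xff: -no-hpet on a new qemu, or an old qemu that does not
     * publish the entry at all -- the ambiguity described above */
    return -1;
}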
There was quite some discussion about the corresponding SeaBIOS patches
back then but apparently no consensus was found. Re-reading it, I think
Kevin asked for passing the necessary DSDT fragments from QEMU to the
firmware instead of using a new, proprietary fw_cfg format. Is that
still the key requirement for any patch finally fixing this bug?
Jan
* Re: HPET configuration in Seabios (was: Re: windows workload: many ept_violation and mmio exits)
2011-08-28 20:42 ` [Qemu-devel] " Jan Kiszka
@ 2011-08-28 22:14 ` Kevin O'Connor
From: Kevin O'Connor @ 2011-08-28 22:14 UTC (permalink / raw)
To: Jan Kiszka
Cc: Alexander Graf, Avi Kivity, Andrew Theurer, ya su,
kvm@vger.kernel.org list, QEMU Developers, Gleb Natapov, seabios
On Sun, Aug 28, 2011 at 10:42:49PM +0200, Jan Kiszka wrote:
> On 2011-08-28 20:54, Alexander Graf wrote:
> >
> > On 28.08.2011, at 02:42, Avi Kivity wrote:
> >
> >> On 08/26/2011 08:32 AM, ya su wrote:
> >>> Hi, Avi:
> >>>
> >>> I met the same problem: tons of hpet vm_exits (vector 209, with the
> >>> fault address in the guest VM's hpet mmio range). Even if I disable the
> >>> hpet device in the win7 guest VM, it still produces a large amount of
> >>> vm_exits when tracing with trace-cmd; and if I add -no-hpet to start
> >>> the VM, it still has an HPET device inside the VM.
> >>>
> >>> Does that mean the HPET device in the VM does not depend on the
> >>> emulated hpet device in qemu-kvm? Is there any way to disable the VM's
> >>> HPET device to prevent so many vm_exits? Thanks.
> >>>
> >>
> >> Looks like a bug to me.
> >
> > IIRC disabling the HPET device doesn't remove the entry from the DSDT, no? So the guest OS might still think it's there while nothing responds (read returns -1).
>
> Exactly. We have had a fw_cfg interface in place for quite a while now
> (though I wonder how the firmware is supposed to tell -no-hpet apart
> from QEMU versions that don't provide this data - both return count =
> 255), but SeaBIOS still exposes one HPET block at a hard-coded address
> unconditionally.
>
> There was quite some discussion about the corresponding SeaBIOS patches
> back then but apparently no consensus was found. Re-reading it, I think
> Kevin asked for passing the necessary DSDT fragments from QEMU to the
> firmware instead of using a new, proprietary fw_cfg format. Is that
> still the key requirement for any patch finally fixing this bug?
My preference would be to use the existing ACPI table passing
interface (fw_cfg slot 0x8000) to pass different ACPI tables to
SeaBIOS.
SeaBIOS doesn't currently allow that interface to override tables
SeaBIOS builds itself, but it's a simple change to rectify that.
When this was last proposed, it was raised that the header information
in the ACPI table may then not match the tables that SeaBIOS builds.
I think I proposed at that time that SeaBIOS could use the header of
the first fw_cfg table (or some other fw_cfg interface) to populate
the headers of the tables it builds. However, there was no consensus.
Note - the above is in regard to the HPET table. If the HPET entry in
the DSDT needs to be removed then that's a bigger change.
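For concreteness, the table being discussed is the ACPI "HPET"
description table that would travel through that fw_cfg slot. A sketch
of its layout from the ACPI and IA-PC HPET specs (not lifted from the
SeaBIOS sources):

/* ACPI "HPET" description table, schematically. */
#include <stdint.h>

struct acpi_table_header {
    char     signature[4];            /* "HPET" */
    uint32_t length;
    uint8_t  revision;
    uint8_t  checksum;                /* whole table sums to zero */
    char     oem_id[6];
    char     oem_table_id[8];
    uint32_t oem_revision;
    char     asl_compiler_id[4];
    uint32_t asl_compiler_revision;
} __attribute__((packed));

struct acpi_generic_address {
    uint8_t  space_id;                /* 0 = system memory */
    uint8_t  bit_width;
    uint8_t  bit_offset;
    uint8_t  access_size;
    uint64_t address;                 /* 0xfed00000 for the one block */
} __attribute__((packed));

struct acpi_table_hpet {
    struct acpi_table_header    header;
    uint32_t                    event_timer_block_id;
    struct acpi_generic_address base_address;
    uint8_t                     hpet_number;
    uint16_t                    min_clock_tick;
    uint8_t                     page_protection;
} __attribute__((packed));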
-Kevin
* Re: HPET configuration in Seabios
2011-08-28 22:14 ` [Qemu-devel] " Kevin O'Connor
@ 2011-08-29 5:32 ` Avi Kivity
From: Avi Kivity @ 2011-08-29 5:32 UTC (permalink / raw)
To: Kevin O'Connor
Cc: Andrew Theurer, Gleb Natapov, kvm@vger.kernel.org list, seabios,
ya su, Alexander Graf, QEMU Developers, Jan Kiszka
On 08/29/2011 01:14 AM, Kevin O'Connor wrote:
> On Sun, Aug 28, 2011 at 10:42:49PM +0200, Jan Kiszka wrote:
> > On 2011-08-28 20:54, Alexander Graf wrote:
> > >
> > > On 28.08.2011, at 02:42, Avi Kivity wrote:
> > >
> > >> On 08/26/2011 08:32 AM, ya su wrote:
> > >>> Hi, Avi:
> > >>>
> > >>> I met the same problem: tons of hpet vm_exits (vector 209, with the
> > >>> fault address in the guest VM's hpet mmio range). Even if I disable the
> > >>> hpet device in the win7 guest VM, it still produces a large amount of
> > >>> vm_exits when tracing with trace-cmd; and if I add -no-hpet to start
> > >>> the VM, it still has an HPET device inside the VM.
> > >>>
> > >>> Does that mean the HPET device in the VM does not depend on the
> > >>> emulated hpet device in qemu-kvm? Is there any way to disable the VM's
> > >>> HPET device to prevent so many vm_exits? Thanks.
> > >>>
> > >>
> > >> Looks like a bug to me.
> > >
> > > IIRC disabling the HPET device doesn't remove the entry from the DSDT, no? So the guest OS might still think it's there while nothing responds (read returns -1).
> >
> > Exactly. We have had a fw_cfg interface in place for quite a while now
> > (though I wonder how the firmware is supposed to tell -no-hpet apart
> > from QEMU versions that don't provide this data - both return count =
> > 255), but SeaBIOS still exposes one HPET block at a hard-coded address
> > unconditionally.
> >
> > There was quite some discussion about the corresponding SeaBIOS patches
> > back then but apparently no consensus was found. Re-reading it, I think
> > Kevin asked for passing the necessary DSDT fragments from QEMU to the
> > firmware instead of using a new, proprietary fw_cfg format. Is that
> > still the key requirement for any patch finally fixing this bug?
>
> My preference would be to use the existing ACPI table passing
> interface (fw_cfg slot 0x8000) to pass different ACPI tables to
> SeaBIOS.
>
> SeaBIOS doesn't currently allow that interface to override tables
> SeaBIOS builds itself, but it's a simple change to rectify that.
>
> When this was last proposed, it was raised that the header information
> in the ACPI table may then not match the tables that SeaBIOS builds.
> I think I proposed at that time that SeaBIOS could use the header of
> the first fw_cfg table (or some other fw_cfg interface) to populate
> the headers of the tables it builds. However, there was no consensus.
>
> Note - the above is in regard to the HPET table. If the HPET entry in
> the DSDT needs to be removed then that's a bigger change.
>
Can't seabios just poke at the hpet itself and see if it exists or not?
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: HPET configuration in Seabios
2011-08-29 5:32 ` [Qemu-devel] " Avi Kivity
@ 2011-08-29 10:25 ` Jan Kiszka
From: Jan Kiszka @ 2011-08-29 10:25 UTC (permalink / raw)
To: Avi Kivity
Cc: Andrew Theurer, Gleb Natapov, kvm@vger.kernel.org list, seabios,
ya su, Alexander Graf, QEMU Developers, Kevin O'Connor
On 2011-08-29 07:32, Avi Kivity wrote:
> On 08/29/2011 01:14 AM, Kevin O'Connor wrote:
>> On Sun, Aug 28, 2011 at 10:42:49PM +0200, Jan Kiszka wrote:
>> > On 2011-08-28 20:54, Alexander Graf wrote:
>> > >
>> > > On 28.08.2011, at 02:42, Avi Kivity wrote:
>> > >
>> > >> On 08/26/2011 08:32 AM, ya su wrote:
>> > >>> Hi, Avi:
>> > >>>
>> > >>> I met the same problem: tons of hpet vm_exits (vector 209, with
>> > >>> the fault address in the guest VM's hpet mmio range). Even if I
>> > >>> disable the hpet device in the win7 guest VM, it still produces a
>> > >>> large amount of vm_exits when tracing with trace-cmd; and if I add
>> > >>> -no-hpet to start the VM, it still has an HPET device inside the VM.
>> > >>>
>> > >>> Does that mean the HPET device in the VM does not depend on the
>> > >>> emulated hpet device in qemu-kvm? Is there any way to disable the
>> > >>> VM's HPET device to prevent so many vm_exits? Thanks.
>> > >>>
>> > >>
>> > >> Looks like a bug to me.
>> > >
>> > > IIRC disabling the HPET device doesn't remove the entry from the
>> > > DSDT, no? So the guest OS might still think it's there while nothing
>> > > responds (read returns -1).
>> >
>> > Exactly. We have had a fw_cfg interface in place for quite a while now
>> > (though I wonder how the firmware is supposed to tell -no-hpet apart
>> > from QEMU versions that don't provide this data - both return count =
>> > 255), but SeaBIOS still exposes one HPET block at a hard-coded address
>> > unconditionally.
>> >
>> > There was quite some discussion about the corresponding SeaBIOS patches
>> > back then but apparently no consensus was found. Re-reading it, I think
>> > Kevin asked for passing the necessary DSDT fragments from QEMU to the
>> > firmware instead of using a new, proprietary fw_cfg format. Is that
>> > still the key requirement for any patch finally fixing this bug?
>>
>> My preference would be to use the existing ACPI table passing
>> interface (fw_cfg slot 0x8000) to pass different ACPI tables to
>> SeaBIOS.
>>
>> SeaBIOS doesn't currently allow that interface to override tables
>> SeaBIOS builds itself, but it's a simple change to rectify that.
>>
>> When this was last proposed, it was raised that the header information
>> in the ACPI table may then not match the tables that SeaBIOS builds.
>> I think I proposed at that time that SeaBIOS could use the header of
>> the first fw_cfg table (or some other fw_cfg interface) to populate
>> the headers of the tables it builds. However, there was no consensus.
>>
>> Note - the above is in regard to the HPET table. If the HPET entry in
>> the DSDT needs to be removed then that's a bigger change.
>>
>
> Can't seabios just poke at the hpet itself and see if it exists or not?
>
Would be hard for the BIOS to guess the locations of the blocks unless
we define the addresses used by QEMU as something like base + hpet_no *
block_size in all cases.
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: HPET configuration in Seabios
2011-08-29 10:25 ` [Qemu-devel] " Jan Kiszka
@ 2011-08-29 11:00 ` Avi Kivity
From: Avi Kivity @ 2011-08-29 11:00 UTC (permalink / raw)
To: Jan Kiszka
Cc: Andrew Theurer, Gleb Natapov, kvm@vger.kernel.org list, seabios,
ya su, Alexander Graf, QEMU Developers, Kevin O'Connor
On 08/29/2011 01:25 PM, Jan Kiszka wrote:
> >
> > Can't seabios just poke at the hpet itself and see if it exists or not?
> >
>
> Would be hard for the BIOS to guess the locations of the blocks unless
> we define the addresses used by QEMU as something like base + hpet_no *
> block_size in all cases.
>
Currently we have a fixed address. We could do:
if available in fw_cfg:
    use that (may indicate no hpet)
elif fixed address works:
    use that
else
    no hpet
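A minimal sketch of that strategy in C, assuming a firmware context
with flat physical addressing, a fw_cfg helper like the
hpet_count_from_fw_cfg() sketched earlier in the thread, and the fact
(noted above) that an unbacked mmio read returns all-ones:

/* Probe order: fw_cfg first, then the fixed address. */
#include <stdint.h>

#define HPET_FIXED_BASE 0xfed00000ul

int hpet_count_from_fw_cfg(void);    /* >= 0, or -1 if no fw_cfg info */

static int detect_hpet(uint64_t *base)
{
    int count = hpet_count_from_fw_cfg();

    if (count >= 0) {                /* fw_cfg knows the answer */
        if (count == 0)
            return 0;                /* no hpet */
        *base = HPET_FIXED_BASE;     /* real code would take the address
                                        from the fw_cfg entry */
        return 1;
    }
    /* no usable fw_cfg info: poke the general capabilities register */
    if (*(volatile uint32_t *)HPET_FIXED_BASE != 0xffffffff) {
        *base = HPET_FIXED_BASE;
        return 1;
    }
    return 0;
}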
--
error compiling committee.c: too many arguments to function
* Re: HPET configuration in Seabios
2011-08-29 11:00 ` [Qemu-devel] " Avi Kivity
@ 2011-08-29 11:05 ` Jan Kiszka
-1 siblings, 0 replies; 22+ messages in thread
From: Jan Kiszka @ 2011-08-29 11:05 UTC (permalink / raw)
To: Avi Kivity
Cc: Andrew Theurer, Gleb Natapov, kvm@vger.kernel.org list, seabios,
ya su, Alexander Graf, QEMU Developers, Kevin O'Connor
On 2011-08-29 13:00, Avi Kivity wrote:
> On 08/29/2011 01:25 PM, Jan Kiszka wrote:
>>>
>>> Can't seabios just poke at the hpet itself and see if it exists or not?
>>>
>>
>> Would be hard for the BIOS to guess the locations of the blocks unless
>> we define the addresses used by QEMU as something like base + hpet_no *
>> block_size in all cases.
>>
>
> Currently we have a fixed address. We could do:
>
> if available in fw_cfg:
>     use that (may indicate no hpet)
> elif fixed address works:
>     use that
> else:
>     no hpet
Currently, we also only have a single HPET block, but that's just
because of some QEMU limitations that will vanish sooner or later. Then
nothing will prevent multiple "-device hpet,base=XXX".
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 22+ messages in thread
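Jan's "base + hpet_no * block_size" convention, spelled out as a C
helper so the probing alternative is concrete. The 1 KiB stride comes
from the HPET spec's per-block register-space size; the convention
itself is a proposal in this thread, not current QEMU behaviour.

#include <stdint.h>

#define HPET_BASE        0xfed00000u
#define HPET_BLOCK_SIZE  0x400u   /* 1 KiB of register space per block */

/* If QEMU guaranteed this layout, firmware could probe block n at a
 * fixed stride from the base without any fw_cfg help. */
static inline uint32_t hpet_block_addr(unsigned int hpet_no)
{
    return HPET_BASE + hpet_no * HPET_BLOCK_SIZE;
}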
* Re: HPET configuration in Seabios
2011-08-29 11:05 ` [Qemu-devel] " Jan Kiszka
@ 2011-08-29 11:11 ` Avi Kivity
-1 siblings, 0 replies; 22+ messages in thread
From: Avi Kivity @ 2011-08-29 11:11 UTC (permalink / raw)
To: Jan Kiszka
Cc: Andrew Theurer, Gleb Natapov, kvm@vger.kernel.org list, seabios,
ya su, Alexander Graf, QEMU Developers, Kevin O'Connor
On 08/29/2011 02:05 PM, Jan Kiszka wrote:
> On 2011-08-29 13:00, Avi Kivity wrote:
> > On 08/29/2011 01:25 PM, Jan Kiszka wrote:
> >>>
> >>> Can't seabios just poke at the hpet itself and see if it exists or not?
> >>>
> >>
> >> Would be hard for the BIOS to guess the locations of the blocks unless
> >> we define the addresses used by QEMU as something like base + hpet_no *
> >> block_size in all cases.
> >>
> >
> > Currently we have a fixed address. We could do:
> >
> > if available in fw_cfg:
> >     use that (may indicate no hpet)
> > elif fixed address works:
> >     use that
> > else:
> >     no hpet
>
> Currently, we also only have a single HPET block, but that's just
> because of some QEMU limitations that will vanish sooner or later. Then
> nothing will prevent multiple "-device hpet,base=XXX".
>
Yes, so we should enable the fw_cfg interface before that happens.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 22+ messages in thread
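For reference, the per-block blob QEMU publishes through its HPET
fw_cfg key looks roughly like this; the layout is recalled from QEMU's
hw/hpet_emul.h, so field names and widths are best-effort and should be
checked against the tree. Teaching SeaBIOS to parse it is the "enable
the fw_cfg interface" step Avi argues for.

#include <stdint.h>

struct hpet_fw_entry {
    uint32_t event_timer_block_id;   /* mirrors the block's capability ID */
    uint64_t address;                /* MMIO base of this block */
    uint16_t min_tick;               /* minimum tick in periodic mode */
    uint8_t  page_prot;
} __attribute__((packed));

struct hpet_fw_config {
    uint8_t count;                   /* 0xff currently doubles as "unknown" */
    struct hpet_fw_entry hpet[8];    /* assumed maximum of 8 blocks */
} __attribute__((packed));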
* Re: HPET configuration in Seabios
2011-08-29 11:05 ` [Qemu-devel] " Jan Kiszka
@ 2011-08-29 11:12 ` Jan Kiszka
-1 siblings, 0 replies; 22+ messages in thread
From: Jan Kiszka @ 2011-08-29 11:12 UTC (permalink / raw)
To: Avi Kivity
Cc: Andrew Theurer, Gleb Natapov, kvm@vger.kernel.org list, seabios,
ya su, Alexander Graf, QEMU Developers, Kevin O'Connor
On 2011-08-29 13:05, Jan Kiszka wrote:
> On 2011-08-29 13:00, Avi Kivity wrote:
>> On 08/29/2011 01:25 PM, Jan Kiszka wrote:
>>>>
>>>> Can't seabios just poke at the hpet itself and see if it exists or not?
>>>>
>>>
>>> Would be hard for the BIOS to guess the locations of the blocks unless
>>> we define the addresses used by QEMU as something like base + hpet_no *
>>> block_size in all cases.
>>>
>>
>> Currently we have a fixed address. We could do:
>>
>> if available in fw_cfg:
>>     use that (may indicate no hpet)
>> elif fixed address works:
>>     use that
>> else:
>>     no hpet
>
> Currently, we also only have a single HPET block, but that's just
> because of some QEMU limitations that will vanish sooner or later. Then
> nothing will prevent multiple "-device hpet,base=XXX".
That said, some HPET probing (without any fw_cfg) may be a short-term
workaround to fix SeaBIOS until we have defined the solution for
communicating HPET block configurations.
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 22+ messages in thread
Thread overview: 22+ messages
2009-12-03 13:46 windows workload: many ept_violation and mmio exits Andrew Theurer
2009-12-03 14:34 ` Avi Kivity
2011-08-26 5:32 ` ya su
2011-08-28 7:42 ` Avi Kivity
2011-08-28 18:54 ` Alexander Graf
2011-08-28 18:54 ` [Qemu-devel] " Alexander Graf
2011-08-28 20:42 ` HPET configuration in Seabios (was: Re: windows workload: many ept_violation and mmio exits) Jan Kiszka
2011-08-28 20:42 ` [Qemu-devel] " Jan Kiszka
2011-08-28 22:14 ` Kevin O'Connor
2011-08-28 22:14 ` [Qemu-devel] " Kevin O'Connor
2011-08-29 5:32 ` HPET configuration in Seabios Avi Kivity
2011-08-29 5:32 ` [Qemu-devel] " Avi Kivity
2011-08-29 10:25 ` Jan Kiszka
2011-08-29 10:25 ` [Qemu-devel] " Jan Kiszka
2011-08-29 11:00 ` Avi Kivity
2011-08-29 11:00 ` [Qemu-devel] " Avi Kivity
2011-08-29 11:05 ` Jan Kiszka
2011-08-29 11:05 ` [Qemu-devel] " Jan Kiszka
2011-08-29 11:11 ` Avi Kivity
2011-08-29 11:11 ` [Qemu-devel] " Avi Kivity
2011-08-29 11:12 ` Jan Kiszka
2011-08-29 11:12 ` [Qemu-devel] " Jan Kiszka