kvm.vger.kernel.org archive mirror
* high load with win7 & usb tablet
@ 2010-09-03 12:21 Michael Tokarev
  2010-09-10 11:47 ` Jes Sorensen
  2010-09-10 12:19 ` high load with win7 & usb tablet Avi Kivity
  0 siblings, 2 replies; 36+ messages in thread
From: Michael Tokarev @ 2010-09-03 12:21 UTC (permalink / raw)
  To: KVM list

Hello.

I noticed that Windows 7, unlike the Windows XP guests I played
with previously, puts quite a high load on the host when idle.  On
my machine, host CPU usage is about 25% when one single-CPU
win7 guest is running with the "-usbdevice tablet" option.
Here's a typical kvm_stat output when such a guest is idling:

 efer_reload                  0       0
 exits                   991832     695
 fpu_reload                3786       0
 halt_exits                7759      67
 halt_wakeup               7717      67
 host_state_reload       613116     495
 hypercalls                   0       0
 insn_emulation          873244     488
 insn_emulation_fail          0       0
 invlpg                       0       0
 io_exits                582925     409
 irq_exits                30188       8
 irq_injections           41093       0
 irq_window                   0       0
 largepages                 288       0
 mmio_exits               16718       0
 mmu_cache_miss             217       0
 mmu_flooded                  0       0
 mmu_pde_zapped               0       0
 mmu_pte_updated              0       0
 mmu_pte_write                0       0
 mmu_recycled                 0       0
 mmu_shadow_zapped          269       0
 mmu_unsync                   0       0
 nmi_injections               0       0
 nmi_window                   0       0
 pf_fixed                 20630       0
 pf_guest                     0       0
 remote_tlb_flush           176       0
 request_irq                  0       0

Here -- http://www.corpit.ru/mjt/kvm-trace-w7-idle --
are some trace results captured after enabling the kvm events
via /sys/kernel/debug/tracing/events/kvm/enable
(read from /sys/kernel/debug/tracing/trace_pipe).
The file is >5MB in size.

When I run the same guest without -usbdevice tablet,
it idles at about 3..4% host CPU time -- still a bit
high but acceptable.

Here's a typical kvm_stats output withOUT usb tablet:

 efer_reload                  0       0
 exits                  1204081     686
 fpu_reload                4235       2
 halt_exits               12116      67
 halt_wakeup              12047      67
 host_state_reload       797865     493
 hypercalls                   0       0
 insn_emulation         1074589     473
 insn_emulation_fail          0       0
 invlpg                       0       0
 io_exits                761834     408
 irq_exits                22240      12
 irq_injections           33099       0
 irq_window                   0       0
 largepages                 304       0
 mmio_exits               16718       0
 mmu_cache_miss             220       0
 mmu_flooded                  0       0
 mmu_pde_zapped               0       0
 mmu_pte_updated              0       0
 mmu_pte_write                0       0
 mmu_recycled                 0       0
 mmu_shadow_zapped          271       0
 mmu_unsync                   0       0
 nmi_injections               0       0
 nmi_window                   0       0
 pf_fixed                 18790       0
 pf_guest                     0       0
 remote_tlb_flush           109       0
 request_irq                  0       0

And here are the two tables combined (the last column from
each of the above two tables, without and with the usb tablet):

statistics         without  with
efer_reload              0     0
exits                  686   695
fpu_reload               2     0
halt_exits              67    67
halt_wakeup             67    67
host_state_reload      493   495
hypercalls               0     0
insn_emulation         473   488
insn_emulation_fail      0     0
invlpg                   0     0
io_exits               408   409
irq_exits               12     8
irq_injections           0     0
irq_window               0     0
largepages               0     0
mmio_exits               0     0
mmu_cache_miss           0     0
mmu_flooded              0     0
mmu_pde_zapped           0     0
mmu_pte_updated          0     0
mmu_pte_write            0     0
mmu_recycled             0     0
mmu_shadow_zapped        0     0
mmu_unsync               0     0
nmi_injections           0     0
nmi_window               0     0
pf_fixed                 0     0
pf_guest                 0     0
remote_tlb_flush         0     0
request_irq              0     0

So basically they're quite similar to each other,
give or take some random fluctuations.
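(For the record, here's roughly how such a side-by-side table can be
produced from two saved outputs.  This is just an illustrative sketch
of my own, assuming the plain "name total rate" format shown above;
it is not part of any tool:)

```python
def parse_kvm_stat(text):
    """Parse 'name total rate' lines from saved kvm_stat output.

    Returns {statistic_name: rate}, keeping the last (per-interval)
    column, which is the interesting one for an idle-load comparison.
    """
    stats = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 3:
            name, _total, rate = parts
            stats[name] = int(rate)
    return stats

def combine(without_text, with_text):
    """Build (name, without-rate, with-rate) rows from two snapshots."""
    a = parse_kvm_stat(without_text)
    b = parse_kvm_stat(with_text)
    rows = [("statistics", "without", "with")]
    for name in sorted(set(a) | set(b)):
        rows.append((name, str(a.get(name, 0)), str(b.get(name, 0))))
    return rows
```

Feeding it the two outputs above reproduces the combined table, since
kvm_stat already prints the counters in alphabetical order.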

kvm version is 0.12.5, kvm command line:

 kvm -nodefaults -m 1G \
  -drive file=w7.raw,cache=none,boot=on,if=virtio \
  -monitor stdio \
  -vga std \
  -device virtio-net-pci,netdev=net \
  -netdev type=tap,ifname=tap-kvm,id=net \
  -no-hpet \
  -mem-path /dev/hugepages/

(and also -usbdevice tablet).  I deliberately disabled almost
everything (incl. the HPET) and used "light" virtio drivers in
order to reduce the load as much as possible.

The host CPU is AthlonII X2 260 (Regor).

Anyone know how to make w7 less idle-hungry for usb?

Thanks!

/mjt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with win7 & usb tablet
  2010-09-03 12:21 high load with win7 & usb tablet Michael Tokarev
@ 2010-09-10 11:47 ` Jes Sorensen
  2010-09-10 12:16   ` Michael Tokarev
  2010-09-10 12:19 ` high load with win7 & usb tablet Avi Kivity
  1 sibling, 1 reply; 36+ messages in thread
From: Jes Sorensen @ 2010-09-10 11:47 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

On 09/03/10 14:21, Michael Tokarev wrote:
> Hello.
> 
> I noticed that Windows 7, unlike the Windows XP guests I played
> with previously, puts quite a high load on the host when idle.  On
> my machine, host CPU usage is about 25% when one single-CPU
> win7 guest is running with the "-usbdevice tablet" option.
> Here's a typical kvm_stat output when such a guest is idling:

Hi Michael,

I am curious, the XP numbers you are referring to, are they against the
same version of KVM/QEMU or is it something you saw in the past? It
would be interesting to track down whether this idle time consumption is
only with win7 or if it happens with all versions of windows, as opposed
to being a general problem in current KVM/QEMU.

Cheers,
Jes


* Re: high load with win7 & usb tablet
  2010-09-10 11:47 ` Jes Sorensen
@ 2010-09-10 12:16   ` Michael Tokarev
  2010-09-10 16:17     ` high load with usb device Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-10 12:16 UTC (permalink / raw)
  To: Jes Sorensen; +Cc: KVM list

10.09.2010 15:47, Jes Sorensen wrote:
> On 09/03/10 14:21, Michael Tokarev wrote:
>> Hello.
>>
>> I noticed that Windows 7, unlike the Windows XP guests I played
>> with previously, puts quite a high load on the host when idle.  On
>> my machine, host CPU usage is about 25% when one single-CPU
>> win7 guest is running with the "-usbdevice tablet" option.
>> Here's a typical kvm_stat output when such a guest is idling:
> 
> Hi Michael,
> 
> I am curious, the XP numbers you are referring to, are they against the
> same version of KVM/QEMU or is it something you saw in the past? It

I ran a winXP guest and a win7 guest one after another on the
same machine using the same version of kvm (it was 0.12.5).

You run a freshly installed win7 guest and at idle it consumes
about 25% of host CPU.  You shut it down and run a freshly
installed winXP guest - about 3..4% host CPU usage at idle.

(This is an AthlonII CPU, 3.2GHz.  Cpufreq is enabled, and that
activity is not enough for the ondemand governor to increase the
frequency from the lowest available 800MHz, so that's 25% of
800MHz, not 25% of 3.2GHz).
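(A quick back-of-the-envelope check of what that means relative to the
full clock; just my own arithmetic, assuming CPU usage scales linearly
with frequency, which is a simplification:)

```python
# 25% load at the lowest cpufreq step, expressed against the full clock.
low_freq_mhz = 800
full_freq_mhz = 3200
load_at_low = 0.25

# Assuming the same work scales linearly with clock speed, 25% of the
# 800MHz clock corresponds to about 6% of one core at full 3.2GHz.
load_at_full = load_at_low * low_freq_mhz / full_freq_mhz
print(load_at_full)  # 0.0625
```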

> would be interesting to track down whether this idle time consumption is
> only with win7 or if it happens with all versions of windows, as opposed
> to being a general problem in current KVM/QEMU.

It does not happen on winXP and happens on win7.
I haven't tried vista - don't have it around.

Since my initial report, I learned a bit about USB
(trying to find a way to change the polling frequency).
I don't fully understand how it works, but some
sources claim that the USB device descriptor, among
other things, may also include a preferred polling
interval.  It has also been mentioned that win7
has a different USB driver for HID devices, and some
sources report that it now checks for mouse moves
1000 times per second, instead of 120 or 250 times as
previous versions of windows did.  I've no idea how
accurate these sources and claims are, however... :)

Thanks!

/mjt


* Re: high load with win7 & usb tablet
  2010-09-03 12:21 high load with win7 & usb tablet Michael Tokarev
  2010-09-10 11:47 ` Jes Sorensen
@ 2010-09-10 12:19 ` Avi Kivity
  2010-09-10 12:44   ` Michael Tokarev
  1 sibling, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-10 12:19 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/03/2010 03:21 PM, Michael Tokarev wrote:
> Hello.
>
> I noticed that Windows 7, unlike the Windows XP guests I played
> with previously, puts quite a high load on the host when idle.  On
> my machine, host CPU usage is about 25% when one single-CPU
> win7 guest is running with the "-usbdevice tablet" option.
> Here's a typical kvm_stat output when such a guest is idling:
>
>   efer_reload                  0       0
>   exits                   991832     695
>   fpu_reload                3786       0
>   halt_exits                7759      67
>   halt_wakeup               7717      67
>   host_state_reload       613116     495
>   hypercalls                   0       0
>   insn_emulation          873244     488
>   insn_emulation_fail          0       0
>   invlpg                       0       0
>   io_exits                582925     409
>   irq_exits                30188       8
>   irq_injections           41093       0
>   irq_window                   0       0
>   largepages                 288       0
>   mmio_exits               16718       0
>   mmu_cache_miss             217       0
>   mmu_flooded                  0       0
>   mmu_pde_zapped               0       0
>   mmu_pte_updated              0       0
>   mmu_pte_write                0       0
>   mmu_recycled                 0       0
>   mmu_shadow_zapped          269       0
>   mmu_unsync                   0       0
>   nmi_injections               0       0
>   nmi_window                   0       0
>   pf_fixed                 20630       0
>   pf_guest                     0       0
>   remote_tlb_flush           176       0
>   request_irq                  0       0
>

That's pretty good for an idle guest.

What does the guest think it is doing?  The Windows performance monitor?

-- 

I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: high load with win7 & usb tablet
  2010-09-10 12:19 ` high load with win7 & usb tablet Avi Kivity
@ 2010-09-10 12:44   ` Michael Tokarev
  2010-09-10 12:50     ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-10 12:44 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

[-- Attachment #1: Type: text/plain, Size: 2564 bytes --]

10.09.2010 16:19, Avi Kivity wrote:
>  On 09/03/2010 03:21 PM, Michael Tokarev wrote:
>> Hello.
>>
>> I noticed that Windows 7, unlike the Windows XP guests I played
>> with previously, puts quite a high load on the host when idle.  On
>> my machine, host CPU usage is about 25% when one single-CPU
>> win7 guest is running with the "-usbdevice tablet" option.
>> Here's a typical kvm_stat output when such a guest is idling:
[]
> That's pretty good for an idle guest.

Hm.  Well.  Sometimes it goes "a bit" further than that:

 efer_reload                  0       0
 exits                  6307547   12445
 fpu_reload               39995      49
 halt_exits               74950      50
 halt_wakeup              72075      49
 host_state_reload      2150687    5585
 hypercalls                   0       0
 insn_emulation         3218768    6337
 insn_emulation_fail          0       0
 invlpg                       0       0
 io_exits                962906    5012
 irq_exits               190637     436
 irq_injections           31017       2
 irq_window                   0       0
 largepages                   0       0
 mmio_exits             1076036     496
 mmu_cache_miss           31812       9
 mmu_flooded                  0       0
 mmu_pde_zapped               0       0
 mmu_pte_updated              0       0
 mmu_pte_write             9818       9
 mmu_recycled                 0       0
 mmu_shadow_zapped        43946       0
 mmu_unsync                   0       0
 nmi_injections               0       0
 nmi_window                   0       0
 pf_fixed               2553881    5512
 pf_guest                     0       0
 remote_tlb_flush          4502       0
 request_irq                  0       0

It happens every few seconds, but it does not look like
this is the cause of the high CPU load; most likely it's some
periodic task that really does trigger only every few seconds,
not more often, so it adds almost nothing to the average and
minimum loads.

> What does the guest think it is doing?  The Windows performance monitor?

Nothing.  It shows perfmon.exe using 3% CPU, and that's about
it.  I'm attaching a .png screenshot of the KVM window with
w7 perfmon running.  Unfortunately I don't have English
version handy, it's all in Russian, and I wasn't able to
change system language to English.  But it is still readable.

During this shot, host CPU was about 30..35% in use - higher
than usual 25%, but I guess it's due to perfmon.

Also, I haven't found a way to see, for example, individual
interrupt activity in windows (like /proc/interrupts in
linux).

Thanks!

/mjt

[-- Attachment #2: w7-perfmon-ru-idle.png --]
[-- Type: image/png, Size: 31806 bytes --]


* Re: high load with win7 & usb tablet
  2010-09-10 12:44   ` Michael Tokarev
@ 2010-09-10 12:50     ` Avi Kivity
       [not found]       ` <4C8A44E6.2060008@msgid.tls.msk.ru>
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-10 12:50 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/10/2010 03:44 PM, Michael Tokarev wrote:
> 10.09.2010 16:19, Avi Kivity wrote:
>>   On 09/03/2010 03:21 PM, Michael Tokarev wrote:
>>> Hello.
>>>
>>> I noticed that Windows 7, unlike the Windows XP guests I played
>>> with previously, puts quite a high load on the host when idle.  On
>>> my machine, host CPU usage is about 25% when one single-CPU
>>> win7 guest is running with the "-usbdevice tablet" option.
>>> Here's a typical kvm_stat output when such a guest is idling:
> []
>> That's pretty good for an idle guest.
> Hm.  Well.  Sometimes it goes "a bit" further than that:
>
>   efer_reload                  0       0
>   exits                  6307547   12445

This still isn't worth 25% cpu.

>> What does the guest think it is doing?  The Windows performance monitor?
> Nothing.  It shows perfmon.exe using 3% CPU, and that's about
> it.  I'm attaching a .png screenshot of the KVM window with
> w7 perfmon running.  Unfortunately I don't have English
> version handy, it's all in Russian, and I wasn't able to
> change system language to English.  But it is still readable.
>
> During this shot, host CPU was about 30..35% in use - higher
> than usual 25%, but I guess it's due to perfmon.

Please build qemu with --disable-strip and use 'perf' to profile it.

> Also, I haven't found a way to see, for example, individual
> interrupt activity in windows (like /proc/interrupts in
> linux).

irq_injections is very low anyway.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: high load with win7 & usb tablet
       [not found]       ` <4C8A44E6.2060008@msgid.tls.msk.ru>
@ 2010-09-10 14:56         ` Avi Kivity
  2010-09-10 15:03           ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-10 14:56 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/10/2010 05:47 PM, Michael Tokarev wrote:
> 10.09.2010 16:50, Avi Kivity wrote:
> []
>>> During this shot, host CPU was about 30..35% in use - higher
>>> than usual 25%, but I guess it's due to perfmon.
>> Please build qemu with --disable-strip and use 'perf' to profile it.
> Done.  I tried several different options, but I can't find
> anything interesting in there.  Attaching a screenshot of
> 3 windows: the win7 guest (~5 minutes after startup), `perf top'
> in one xterm and `top' in another.
>
> Here are random snapshots of both, in text form:
>
> ---------------------------------------------------------------------------------------
>     PerfTop:    1409 irqs/sec  kernel:79.6%  exact:  0.0% [1000Hz cycles],  (all, 2 CPUs)
> ---------------------------------------------------------------------------------------
>
>               samples  pcnt function                        DSO
>               _______ _____ _______________________________ _______________________
>
>              11837.00 64.9% native_safe_halt                [kernel.kallsyms]
>                570.00  3.1% rb_next                         [kernel.kallsyms]
>                206.00  1.1% hpet_next_event                 [kernel.kallsyms]
>                149.00  0.8% do_select                       [kernel.kallsyms]
>                149.00  0.8% svm_vcpu_run                    [kvm_amd]
>                134.00  0.7% schedule                        [kernel.kallsyms]
>
>
> Tasks: 199 total,   2 running, 196 sleeping,   0 stopped,   1 zombie
> Cpu(s): 13.4%us, 10.2%sy,  0.0%ni, 76.3%id,  0.0%wa,  0.0%hi,  0.2%si,  0.0%st
> Mem:   5860724k total,  4601132k used,  1259592k free,    18508k buffers
> Swap:  4192928k total,   286928k used,  3906000k free,  1593908k cached
>
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 28492 mjt       20   0 1111m 1.0g 5456 R   27 17.7   3:50.79 qemu-system-x86
> 28763 root      20   0  105m  30m 6176 S   11  0.5 184:08.58 Xorg
> 29263 mjt       20   0  637m 212m  11m S    6  3.7  92:08.82 firefox-bin
> 28845 mjt       20   0 58612 5840 2372 S    2  0.1  49:27.17 compiz
>
> I'm still not sure where to look.. :)
>

Strange.  Can you also post a few lines of 'vmstat 1'?

Maybe we'll see a lot of context switches in there.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: high load with win7 & usb tablet
  2010-09-10 14:56         ` Avi Kivity
@ 2010-09-10 15:03           ` Michael Tokarev
  2010-09-10 16:03             ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-10 15:03 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

10.09.2010 18:56, Avi Kivity wrote:
>  On 09/10/2010 05:47 PM, Michael Tokarev wrote:
>> 10.09.2010 16:50, Avi Kivity wrote:
>> []
>>>> During this shot, host CPU was about 30..35% in use - higher
>>>> than usual 25%, but I guess it's due to perfmon.
>>> Please build qemu with --disable-strip and use 'perf' to profile it.
>> Done.  I tried several different options, but I can't find
>> anything interesting in there.  Attaching a screenshot of
>> 3 windows: the win7 guest (~5 minutes after startup), `perf top'
>> in one xterm and `top' in another.
>>
>> Here are random snapshots of both, in text form:
>>
>>  PerfTop:    1409 irqs/sec  kernel:79.6%  exact:  0.0% [1000Hz cycles],  (all, 2 CPUs)
>>
>>   samples  pcnt function               DSO
>>   _______ _____ __________________    _________________
>>  11837.00 64.9% native_safe_halt      [kernel.kallsyms]
>>    570.00  3.1% rb_next               [kernel.kallsyms]
>>    206.00  1.1% hpet_next_event       [kernel.kallsyms]
>>    149.00  0.8% do_select             [kernel.kallsyms]
>>    149.00  0.8% svm_vcpu_run          [kvm_amd]
>>    134.00  0.7% schedule              [kernel.kallsyms]
>>
>>
>> Tasks: 199 total,   2 running, 196 sleeping,   0 stopped,   1 zombie
>> Cpu(s): 13.4%us, 10.2%sy,  0.0%ni, 76.3%id,  0.0%wa,  0.0%hi, 0.2%si,  0.0%st
>> Mem:   5860724k total,  4601132k used,  1259592k free,    18508k buffers
>> Swap:  4192928k total,   286928k used,  3906000k free,  1593908k cached
>>
>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> 28492 mjt       20   0 1111m 1.0g 5456 R   27 17.7   3:50.79 qemu-system-x86
>> 28763 root      20   0  105m  30m 6176 S   11  0.5 184:08.58 Xorg
>> 29263 mjt       20   0  637m 212m  11m S    6  3.7  92:08.82 firefox-bin
>> 28845 mjt       20   0 58612 5840 2372 S    2  0.1  49:27.17 compiz
>>
>> I'm still not sure where to look.. :)
> 
> Strange.  Can you also post a few lines of 'vmstat 1'?
> 
> Maybe we'll see a lot of context switches in there.

Not that many.  Still running the same w7 guest, still
~25..27% CPU usage reported by top for the kvm process;
here's `vmstat 5' after a while (I tend to use a larger
delay to mitigate large(ish) spikes):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
0  0 286764 1262976  20640 1618468    0    0     0    24 6206 4460  7  9 84  0
1  0 286764 1263040  20640 1618436    0    0     0     0 6080 4415  7  9 84  0
1  0 286764 1264380  20648 1618448    0    0     0     5 6332 4576  8 11 81  0
1  0 286760 1264700  20656 1618436    6    0     6    18 6464 4742  7  8 84  1
1  0 286760 1264864  20664 1618472    0    0     0    10 6266 4704  8  8 85  0
0  0 286760 1265004  20668 1618440    0    0     0     1 6330 4683  6  8 86  0
0  0 286760 1264660  20668 1618452    0    0     0     1 6401 4675  7  8 85  0
1  0 286760 1264668  20668 1618464    0    0     0     0 6239 4688  7  8 85  0
0  0 286760 1264692  20676 1618472    0    0     0    10 6445 4703  7  9 84  0
1  0 286760 1264204  20676 1618480    0    0     8     1 6180 4587  7  8 85  1
0  0 286760 1263956  20684 1618492    0    0     0    10 6314 4630  7  9 83  0
0  0 286760 1263972  20684 1618504    0    0     0     0 6222 4569  7  9 84  0
2  0 286760 1264128  20692 1618504    0    0     0    18 6458 5006  8  9 83  0

Thanks!

/mjt


* Re: high load with win7 & usb tablet
  2010-09-10 15:03           ` Michael Tokarev
@ 2010-09-10 16:03             ` Avi Kivity
  2010-09-14  8:26               ` Brad Campbell
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-10 16:03 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/10/2010 06:03 PM, Michael Tokarev wrote:
>
>> Strange.  Can you also post a few lines of 'vmstat 1'?
>>
>> Maybe we'll see a lot of context switches in there.
> Not that many.  Still running the same w7 guest, still
> ~25..27% CPU usage reported by top for the kvm process,
> here's `vmstat 5' after a while (I tend to use a larger
> delay to mitigate large(ish) spikes):
>
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> 0  0 286764 1262976  20640 1618468    0    0     0    24 6206 4460  7  9 84  0
> 1  0 286764 1263040  20640 1618436    0    0     0     0 6080 4415  7  9 84  0
> 1  0 286764 1264380  20648 1618448    0    0     0     5 6332 4576  8 11 81  0
> 1  0 286760 1264700  20656 1618436    6    0     6    18 6464 4742  7  8 84  1
> 1  0 286760 1264864  20664 1618472    0    0     0    10 6266 4704  8  8 85  0
> 0

Stumped.

A shot in the dark: can you try running with vnc instead of sdl (and 
disconnecting your vnc client while measuring)?  Maybe the guest is 
doing a lot of vga updates.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: high load with usb device
  2010-09-10 12:16   ` Michael Tokarev
@ 2010-09-10 16:17     ` Michael Tokarev
  2010-09-12  9:26       ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-10 16:17 UTC (permalink / raw)
  To: KVM list; +Cc: Jes Sorensen, Avi Kivity

Note the changed subject line.

I just did a few tests with linux guest (amd64 2.6.35 kernel).

And it shows the same behaviour as win7 (unlike winXP), namely,
high host CPU load when the guest is idle.

When I run this guest (which network-boots and only runs udevd
and a busybox shell after loading all the needed drivers) without
-usbdevice, the host shows <1% CPU usage when this guest is idle.
But once I add -usbdevice -- be it tablet or mouse -- the guest
starts eating 16..19% host CPU when idling.  Note that there are
no other services running - no dbus, X, inetd etc., just udevd,
busybox's init and sh.

So now we have a quite well-supported, debuggable guest.

Looking at /proc/interrupts in the guest reveals nothing
interesting.  Even the usb irq# is not increasing.

But I looked at the USB endpoint descriptors in qemu, in
particular hw/usb-hid.c.  There, all bInterval values -
apparently - assume the usb-2.0 spec.  But as far as I can
see, all the devices are usb-1, where bInterval is expressed
in different units - in ms.  So, for example, bInterval=10
for the tablet is _not_ 255ms, it is 10ms!  For the interval
to be 255ms, the actual value 255 should be written in bInterval.
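(To make the difference between the two interpretations concrete,
here's a small sketch; my own illustration, following the usb 2.0
spec's rules for interrupt endpoints as I read them: low-/full-speed
bInterval is the interval directly in 1ms frames, while high-speed
uses 2^(bInterval-1) microframes of 125us each:)

```python
def polling_interval_ms(b_interval, high_speed=False):
    """Effective polling interval for a USB interrupt endpoint.

    Low-/full-speed devices: bInterval is the interval directly,
    in 1ms frames.  High-speed devices: the interval is
    2**(b_interval - 1) microframes of 125us each.
    """
    if high_speed:
        return (2 ** (b_interval - 1)) * 0.125
    return float(b_interval)

# bInterval=10 on a low-/full-speed device: 10ms, i.e. 100 polls/sec.
print(polling_interval_ms(10))                   # 10.0
# To actually get a 255ms interval, the descriptor must say 255:
print(polling_interval_ms(255))                  # 255.0
# On a high-speed device, bInterval=10 would mean 2**9 * 125us = 64ms:
print(polling_interval_ms(10, high_speed=True))  # 64.0
```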

I tried changing this field directly, but it didn't help.
It reduced idle CPU usage by a few percent, however.
Probably that's because there are also USB root hubs
and other similar stuff which needs to be checked too.
I'll do that in a moment.

Reportedly, this problem (high system load during idle)
first shows up with win7/win2008.  My _guess_ would be that
starting with this version of windows it is actually able
to honour this 10ms interval - XP definitely does not,
it polls the mouse every 130ms.  But this is just guesswork
and speculation...

Digging further.

Answering the sdl question: the linux guest experiment
shows that sdl has nothing to do with this, since linux
does not do any vga updates at all when idling in text
mode...

Thanks!

/mjt


* Re: high load with usb device
  2010-09-10 16:17     ` high load with usb device Michael Tokarev
@ 2010-09-12  9:26       ` Avi Kivity
  2010-09-14  6:51         ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-12  9:26 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list, Jes Sorensen

  On 09/10/2010 07:17 PM, Michael Tokarev wrote:
> Note the changed subject line.
>
> I just did a few tests with linux guest (amd64 2.6.35 kernel).
>
> And it shows the same behaviour as win7 (unlike winXP), namely,
> high host CPU load when the guest is idle.

Not for me - F12 idle guest takes 3.5% cpu.

What does 'top' in the guest show now?  Connect with ssh to avoid 
triggering the GUI.

-- 
error compiling committee.c: too many arguments to function



* Re: high load with usb device
  2010-09-12  9:26       ` Avi Kivity
@ 2010-09-14  6:51         ` Michael Tokarev
  2010-09-14  9:06           ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14  6:51 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list, Jes Sorensen

12.09.2010 13:26, Avi Kivity wrote:
>  On 09/10/2010 07:17 PM, Michael Tokarev wrote:
>> Note the changed subject line.
>>
>> I just did a few tests with linux guest (amd64 2.6.35 kernel).
>>
>> And it shows the same behaviour as win7 (unlike winXP), namely,
>> high host CPU load when the guest is idle.
> 
> Not for me - F12 idle guest takes 3.5% cpu.
> 
> What does 'top' in the guest show now?  Connect with ssh to avoid
> triggering the GUI.

Guest:
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Host:
Cpu(s):  5.3%us,  6.7%sy,  0.0%ni, 88.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
25685 mjt       20   0 1074m 109m 4652 S   19  1.9   0:29.03 qemu-system-x86

So it appears it's the qemu process that's doing something,
after all.  Is there a way to compile it with profiling
information?  I tried, but it does not compile: it uses
-fomit-frame-pointer, which is not compatible with -p[g],
but when I remove -fomit-frame-pointer it complains about
the assembly.

Thanks!

/mjt


* Re: high load with win7 & usb tablet
  2010-09-10 16:03             ` Avi Kivity
@ 2010-09-14  8:26               ` Brad Campbell
  2010-09-14  8:33                 ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Brad Campbell @ 2010-09-14  8:26 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Michael Tokarev, KVM list

On 11/09/10 00:03, Avi Kivity wrote:
> On 09/10/2010 06:03 PM, Michael Tokarev wrote:
>>
>>> Strange. Can you also post a few lines of 'vmstat 1'?
>>>
>>> Maybe we'll see a lot of context switches in there.
>> Not that many. Still running the same w7 guest, still
>> ~25..27% CPU usage reported by top for the kvm process;
>> here's `vmstat 5' after a while (I tend to use a larger
>> delay to mitigate large(ish) spikes):
>>
>> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>> r b swpd free buff cache si so bi bo in cs us sy id wa
>> 0 0 286764 1262976 20640 1618468 0 0 0 24 6206 4460 7 9 84 0
>> 1 0 286764 1263040 20640 1618436 0 0 0 0 6080 4415 7 9 84 0
>> 1 0 286764 1264380 20648 1618448 0 0 0 5 6332 4576 8 11 81 0
>> 1 0 286760 1264700 20656 1618436 6 0 6 18 6464 4742 7 8 84 1
>> 1 0 286760 1264864 20664 1618472 0 0 0 10 6266 4704 8 8 85 0
>> 0
>
> Stumped.
>
> A shot in the dark: can you try running with vnc instead of sdl (and disconnecting your vnc client
> while measuring)? Maybe the guest is doing a lot of vga updates.
>

Just a data point. An idle XP guest for me without -usb sees the host running at about 8,000 context 
switches a second. With the guest using -usb -usbdevice tablet, the host runs at between 15,000 and 
18,000 context switches a second.

The descriptors show the device is requesting a polling frequency of 100Hz (0x0A ms), but don't 
forget QEMU is emulating a controller with a 1000Hz microframe rate at worst.

There would probably be a lower load if the emulated controller was OHCI rather than UHCI, just from 
the reduction in work required of the guest.

If you were prepared to tolerate a bit less responsiveness, you could always tweak the endpoint 
descriptor to turn the polling frequency down.

         0x0a,       /*  u8  ep_bInterval; (255ms -- usb 2.0 spec) */

I'm unclear as to where that comment came from. The usb 2.0 spec says a LOW_SPEED device can request 
between 10->255 ms polling intervals. If the device is reporting as a HIGH_SPEED device, then all 
bets are off.




* Re: high load with win7 & usb tablet
  2010-09-14  8:26               ` Brad Campbell
@ 2010-09-14  8:33                 ` Michael Tokarev
  2010-09-14  8:46                   ` Brad Campbell
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14  8:33 UTC (permalink / raw)
  To: Brad Campbell; +Cc: Avi Kivity, KVM list

14.09.2010 12:26, Brad Campbell wrote:
[]
> Just a data point. An idle XP guest for me without -usb sees the host
> running at about 8,000 context switches a second. With the guest using
> -usb -usbdevice tablet, the host runs at between 15,000 - 18,000 context
> switches a second.

Yes, I observed the same as well.

> The descriptors show the device is requesting a polling interval of
> 0x0A ms (10 ms, i.e. 100 Hz), but don't forget QEMU is emulating a
> controller with a 1000 Hz frame rate at worst.

Yes, that's 10ms, which is way too frequent IMHO.  WinXP performs
polling at 130Hz, win7 can do higher.

> Load would probably be lower if the emulated controller were OHCI
> rather than UHCI, simply because the guest would have less work to do.

What's the diff. between the two?

> If you were prepared to tolerate a bit less responsiveness, you could
> always tweak the endpoint descriptor to turn the polling frequency down.
> 
>         0x0a,       /*  u8  ep_bInterval; (255ms -- usb 2.0 spec) */
> 
> I'm unclear as to where that comment came from. The usb 2.0 spec says a
> LOW_SPEED device can request between 10->255 ms polling intervals. If
> the device is reporting as a HIGH_SPEED device, then all bets are off.

I already found this place, and already questioned where that
comment came from (no one answered).

But the thing is that changing this value to, say, 0xff (255 ms)
does not actually change the load level for me -- neither for a linux
guest nor for a w7 guest.  The descriptor does indeed change, and lsusb
correctly reports bInterval=255, but exactly nothing else changes.

/mjt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with win7 & usb tablet
  2010-09-14  8:33                 ` Michael Tokarev
@ 2010-09-14  8:46                   ` Brad Campbell
  0 siblings, 0 replies; 36+ messages in thread
From: Brad Campbell @ 2010-09-14  8:46 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Avi Kivity, KVM list

On 14/09/10 16:33, Michael Tokarev wrote:
> 14.09.2010 12:26, Brad Campbell wrote:
> []
>> Just a data point. An idle XP guest for me without -usb sees the host
>> running at about 8,000 context switches a second. With the guest using
>> -usb -usbdevice tablet, the host runs at between 15,000 - 18,000 context
>> switches a second.
>
> Yes, I observed the same as well.
>
>> The descriptors show the device is requesting a polling interval of
>> 0x0A ms (10 ms, i.e. 100 Hz), but don't forget QEMU is emulating a
>> controller with a 1000 Hz frame rate at worst.
>
> Yes, that's 10ms, which is way too frequent IMHO.  WinXP performs
> polling at 130Hz, win7 can do higher.
>
>> Load would probably be lower if the emulated controller were OHCI
>> rather than UHCI, simply because the guest would have less work to do.
>
> What's the diff. between the two?

UHCI uses a less intelligent controller that pushes a bit more of the load onto the host CPU. OHCI 
uses silicon that has a few more smarts and therefore requires less work on the part of the host. In 
a QEMU context I'd hazard a guess that OHCI would require fewer switches into the guest, but more 
host code.

I tried to fire up Virtualbox to see what it reports for its controllers, and how it goes with host 
load, but it refused to load with the kvm kernel module in and I have a guest I can't shut down 
right now.

>> If you were prepared to tolerate a bit less responsiveness, you could
>> always tweak the endpoint descriptor to turn the polling frequency down.
>>
>>          0x0a,       /*  u8  ep_bInterval; (255ms -- usb 2.0 spec) */
>>
>> I'm unclear as to where that comment came from. The usb 2.0 spec says a
>> LOW_SPEED device can request between 10->255 ms polling intervals. If
>> the device is reporting as a HIGH_SPEED device, then all bets are off.
>
> I already found this place, and already questioned where that
> comment came from (no one answered).
>
> But the thing is that changing this value to, say, 0xff (255 ms)
> does not actually change the load level for me -- neither for a linux
> guest nor for a w7 guest.  The descriptor does indeed change, and lsusb
> correctly reports bInterval=255, but exactly nothing else changes.

I can confirm this also. Should have tested before posting. Sorry.

How about running Powertop in the linux guest to get a handle on what it's doing?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14  6:51         ` Michael Tokarev
@ 2010-09-14  9:06           ` Avi Kivity
  2010-09-14  9:08             ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-14  9:06 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list, Jes Sorensen, linux-perf-users

  On 09/14/2010 08:51 AM, Michael Tokarev wrote:
> 12.09.2010 13:26, Avi Kivity wrote:
>>   On 09/10/2010 07:17 PM, Michael Tokarev wrote:
>>> Note the changed subject line.
>>>
>>> I just did a few tests with linux guest (amd64 2.6.35 kernel).
>>>
>>> And it shows the same behaviour as win7 (unlike winXP), namely,
>>> high host CPU load when guest is idle.
>> Not for me - F12 idle guest takes 3.5% cpu.
>>
>> What does 'top' in the guest show now?  Connect with ssh to avoid
>> triggering the GUI.
> Guest:
> Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>
> Host:
> Cpu(s):  5.3%us,  6.7%sy,  0.0%ni, 88.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 25685 mjt       20   0 1074m 109m 4652 S   19  1.9   0:29.03 qemu-system-x86
>
> So it appears it's the qemu process that's doing something,
> after all.  Is there a way to compile it with profiling
> information?  I tried, but it does not compile: it uses
> -fomit-frame-pointer which is not compatible with -p[g],
> but when removing -fo-f-p it complains about assembly.
>
>

Really, 'perf top' should have shown what qemu was doing.  I don't have 
a lot of experience with profiling userspace, copying linux-perf-users 
for tips.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14  9:06           ` Avi Kivity
@ 2010-09-14  9:08             ` Avi Kivity
  2010-09-14 10:00               ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-14  9:08 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list, Jes Sorensen, linux-perf-users

  On 09/14/2010 11:06 AM, Avi Kivity wrote:
>  On 09/14/2010 08:51 AM, Michael Tokarev wrote:
>> 12.09.2010 13:26, Avi Kivity wrote:
>>>   On 09/10/2010 07:17 PM, Michael Tokarev wrote:
>>>> Note the changed subject line.
>>>>
>>>> I just did a few tests with linux guest (amd64 2.6.35 kernel).
>>>>
>>>> And it shows the same behaviour as win7 (unlike winXP), namely,
>>>> high host CPU load when guest is idle.
>>> Not for me - F12 idle guest takes 3.5% cpu.
>>>
>>> What does 'top' in the guest show now?  Connect with ssh to avoid
>>> triggering the GUI.
>> Guest:
>> Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  
>> 0.0%si,  0.0%st
>>
>> Host:
>> Cpu(s):  5.3%us,  6.7%sy,  0.0%ni, 88.0%id,  0.0%wa,  0.0%hi,  
>> 0.0%si,  0.0%st
>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> 25685 mjt       20   0 1074m 109m 4652 S   19  1.9   0:29.03 
>> qemu-system-x86
>>
>> So it appears it's the qemu process that's doing something,
>> after all.  Is there a way to compile it with profiling
>> information?  I tried, but it does not compile: it uses
>> -fomit-frame-pointer which is not compatible with -p[g],
>> but when removing -fo-f-p it complains about assembly.
>>
>>
>
> Really, 'perf top' should have shown what qemu was doing.  I don't 
> have a lot of experience with profiling userspace, copying 
> linux-perf-users for tips.
>

Also, please strace qemu.  Try 'strace -p $pid1 -p $pid2 -p $pid3 -c' to 
get a summary.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14  9:08             ` Avi Kivity
@ 2010-09-14 10:00               ` Michael Tokarev
  2010-09-14 10:07                 ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 10:00 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list, Jes Sorensen, linux-perf-users

14.09.2010 13:08, Avi Kivity wrote:
>>  On 09/14/2010 08:51 AM, Michael Tokarev wrote:
>>> 12.09.2010 13:26, Avi Kivity wrote:
>>> Guest: 0 load
>>> Host:
>>> Cpu(s):  5.3%us,  6.7%sy,  0.0%ni, 88.0%id,  0.0%wa,  0.0%hi, 
>>> 0.0%si,  0.0%st
>>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>> 25685 mjt       20   0 1074m 109m 4652 S   19  1.9   0:29.03
>>> qemu-system-x86
>>>
>>> So it appears it's the qemu process that's doing something,
>>> after all.  Is there a way to compile it with profiling
>>> information?  I tried, but it does not compile: it uses
>>> -fomit-frame-pointer which is not compatible with -p[g],
>>> but when removing -fo-f-p it complains about assembly.
>>
>> Really, 'perf top' should have shown what qemu was doing.  I don't
>> have a lot of experience with profiling userspace, copying
>> linux-perf-users for tips.

`perf top' shows nothing interesting.  Here it is:

-----------------------------------------------------------------------------------------
   PerfTop:    1161 irqs/sec  kernel:93.7%  exact:  0.0% [1000Hz cycles],  (all, 2 CPUs)
-----------------------------------------------------------------------------------------

             samples  pcnt function                       DSO
             _______ _____ ______________________________ __________________

            12345.00 74.4% native_safe_halt               [kernel.kallsyms]
              224.00  1.4% hpet_next_event                [kernel.kallsyms]
              178.00  1.1% tick_dev_program_event         [kernel.kallsyms]
              125.00  0.8% tick_broadcast_oneshot_control [kernel.kallsyms]
              113.00  0.7% do_select                      [kernel.kallsyms]
              105.00  0.6% delay_tsc                      [kernel.kallsyms]
               94.00  0.6% schedule                       [kernel.kallsyms]
               79.00  0.5% ktime_get                      [kernel.kallsyms]
               78.00  0.5% main_loop_wait                 qemu-system-x86_64
               75.00  0.5% __hrtimer_start_range_ns       [kernel.kallsyms]
               69.00  0.4% tick_nohz_stop_sched_tick      [kernel.kallsyms]
               61.00  0.4% rb_erase                       [kernel.kallsyms]
               58.00  0.3% _raw_spin_unlock_irqrestore    [kernel.kallsyms]
               55.00  0.3% _raw_spin_lock_irqsave         [kernel.kallsyms]
               54.00  0.3% enqueue_hrtimer                [kernel.kallsyms]
               54.00  0.3% __switch_to                    [kernel.kallsyms]
               49.00  0.3% ia32_syscall                   [kernel.kallsyms]

I started perf after the guest had been running for some time, and waited
about a minute for perf to gather proper stats.

Here's vmstat on host, when an idle linux guest is running with usbdevice:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 306744 1213984 1376000    0    0     1     0 5531 4114  4  5 91  0
 1  0      0 306356 1213996 1375972    0    0     0     0 5886 4229  4  6 90  0

when linux guest is running w/o -usbdevice:

 1  0      0 303604 1214116 1376388    0    0     0     9  986  623  1  2 97  0
 1  0      0 303356 1214116 1376340    0    0     0     0  946  600  2  1 96  0

when no guest running:

 1  0      0 415048 1214000 1374860    0    0     0     9  821  481  2  2 96  0
 0  0      0 415056 1214000 1374868    0    0     0     0  717  454  1  1 98  0

> Also, please strace qemu.  Try 'strace -p $pid1 -p $pid2 -p $pid3 -c' to
> get a summary.

There are only two qemu threads running (as shown in /proc/$pid/task).  Here's
a sample (after hitting Ctrl+C):

# strace -p 23020 -p 23023 -c
Process 23020 attached - interrupt to quit
Process 23023 attached - interrupt to quit
^CProcess 23020 detached
Process 23023 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 98.98   23.462158       15569      1507           ioctl
  0.64    0.151725          37      4063         5 futex
  0.11    0.025051           0     86932           select
  0.09    0.021188           0    408836           clock_gettime
  0.07    0.016543           0    271933    136484 read
  0.04    0.008958           0    181118           gettimeofday
  0.02    0.005756           0    101190           write
  0.02    0.005506           0     89180           timer_settime
  0.02    0.004754           0     96271           timer_gettime
  0.01    0.002682           0     50595           rt_sigaction
  0.00    0.000000           0         1           rt_sigpending
  0.00    0.000000           0         1         1 rt_sigtimedwait
------ ----------- ----------- --------- --------- ----------------
100.00   23.704321               1291627    136490 total

(Note that during the strace run the CPU is >100% busy -- top(1)
reports 65% for strace and 45% for the qemu process).

Thanks!

/mjt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14 10:00               ` Michael Tokarev
@ 2010-09-14 10:07                 ` Avi Kivity
  2010-09-14 10:25                   ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-14 10:07 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list, Jes Sorensen, linux-perf-users

  On 09/14/2010 12:00 PM, Michael Tokarev wrote:
>   # strace -p 23020 -p 23023 -c
> Process 23020 attached - interrupt to quit
> Process 23023 attached - interrupt to quit
> ^CProcess 23020 detached
> Process 23023 detached
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>   98.98   23.462158       15569      1507           ioctl
>    0.64    0.151725          37      4063         5 futex
>    0.11    0.025051           0     86932           select
>    0.09    0.021188           0    408836           clock_gettime
>    0.07    0.016543           0    271933    136484 read
>    0.04    0.008958           0    181118           gettimeofday
>    0.02    0.005756           0    101190           write
>    0.02    0.005506           0     89180           timer_settime
>    0.02    0.004754           0     96271           timer_gettime
>    0.01    0.002682           0     50595           rt_sigaction
>    0.00    0.000000           0         1           rt_sigpending
>    0.00    0.000000           0         1         1 rt_sigtimedwait
> ------ ----------- ----------- --------- --------- ----------------
> 100.00   23.704321               1291627    136490 total
>
>

Well, well.  Translating to rates, we have 63.6 ioctls/sec but 17k 
clock_gettime, 11k read() and 4k writes per second.  The device model 
has gone haywire, somehow without any trace in perf top.

Let's see what those reads and writes are about.  Please strace a short 
segment again, without -c, and use lsof to see which fds are referenced 
in the read and write calls.


-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14 10:07                 ` Avi Kivity
@ 2010-09-14 10:25                   ` Michael Tokarev
  2010-09-14 10:39                     ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 10:25 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list, Jes Sorensen, linux-perf-users

14.09.2010 14:07, Avi Kivity wrote:
>  On 09/14/2010 12:00 PM, Michael Tokarev wrote:
>>   # strace -p 23020 -p 23023 -c
>> Process 23020 attached - interrupt to quit
>> Process 23023 attached - interrupt to quit
>> ^CProcess 23020 detached
>> Process 23023 detached
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>   98.98   23.462158       15569      1507           ioctl
>>    0.64    0.151725          37      4063         5 futex
>>    0.11    0.025051           0     86932           select
>>    0.09    0.021188           0    408836           clock_gettime
>>    0.07    0.016543           0    271933    136484 read
>>    0.04    0.008958           0    181118           gettimeofday
>>    0.02    0.005756           0    101190           write
>>    0.02    0.005506           0     89180           timer_settime
>>    0.02    0.004754           0     96271           timer_gettime
>>    0.01    0.002682           0     50595           rt_sigaction
>>    0.00    0.000000           0         1           rt_sigpending
>>    0.00    0.000000           0         1         1 rt_sigtimedwait
>> ------ ----------- ----------- --------- --------- ----------------
>> 100.00   23.704321               1291627    136490 total
>>
>>
> 
> Well, well.  Translating to rates, we have 63.6 ioctls/sec but 17k
> clock_gettime, 11k read() and 4k writes per second.  The device model
> has gone haywire, somehow without any trace in perf top.
> 
> Let's see what those reads and writes are about.  Please strace a short
> segment again, without -c, and use lsof to see which fds are referenced
> in the read and write calls.

Not that it is much help either.  lsof:

qemu-syst 23203  mjt    0u   CHR   136,9      0t0      12 /dev/pts/9
qemu-syst 23203  mjt    1u   CHR   136,9      0t0      12 /dev/pts/9
qemu-syst 23203  mjt    2u   CHR   136,9      0t0      12 /dev/pts/9
qemu-syst 23203  mjt    3u   CHR  10,232      0t0    4402 /dev/kvm
qemu-syst 23203  mjt    4u  0000     0,9        0     607 anon_inode
qemu-syst 23203  mjt    5r  FIFO     0,8      0t0 8172675 pipe
qemu-syst 23203  mjt    6w  FIFO     0,8      0t0 8172675 pipe
qemu-syst 23203  mjt    7u   CHR  10,200      0t0    1228 /dev/net/tun
qemu-syst 23203  mjt    8u  0000     0,9        0     607 anon_inode
qemu-syst 23203  mjt    9u  IPv4 8173217      0t0     TCP *:5900 (LISTEN)
qemu-syst 23203  mjt   10u  0000     0,9        0     607 anon_inode
qemu-syst 23203  mjt   11u  0000     0,9        0     607 anon_inode
qemu-syst 23203  mjt   12u  0000     0,9        0     607 anon_inode

(I switched to (unconnected) vnc from sdl to remove X stuff).

Here's the strace (200 lines of it):

23203 14:21:04.417579 select(13, [0 5 7 9 10 12], [], [], {0, 999495} <unfinished ...>
23206 14:21:04.417827 rt_sigtimedwait([BUS RT_6],  <unfinished ...>
23203 14:21:04.417897 <... select resumed> ) = 1 (in [12], left {0, 999476}) <0.000140>
23206 14:21:04.417957 <... rt_sigtimedwait resumed> 0xf6acb124, {0, 0}, 8) = -1 EAGAIN (Resource temporarily unavailable) <0.000070>
23203 14:21:04.418016 read(12,  <unfinished ...>
23206 14:21:04.418063 futex(0x824c444, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
23203 14:21:04.418098 <... read resumed> "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000057>
23203 14:21:04.418213 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000020>
23203 14:21:04.418441 write(6, "\0", 1) = 1 <0.000074>
23203 14:21:04.418787 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000050>
23203 14:21:04.418911 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000021>
23203 14:21:04.419000 clock_gettime(CLOCK_MONOTONIC, {119262, 938833745}) = 0 <0.000019>
23203 14:21:04.419081 gettimeofday({1284459664, 419106}, NULL) = 0 <0.000019>
23203 14:21:04.419158 clock_gettime(CLOCK_MONOTONIC, {119262, 938986680}) = 0 <0.000019>
23203 14:21:04.419232 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000020>
23203 14:21:04.419315 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000023>
23203 14:21:04.419400 clock_gettime(CLOCK_MONOTONIC, {119262, 939228476}) = 0 <0.000018>
23203 14:21:04.419477 clock_gettime(CLOCK_MONOTONIC, {119262, 939305118}) = 0 <0.000018>
23203 14:21:04.419550 clock_gettime(CLOCK_MONOTONIC, {119262, 939378782}) = 0 <0.000019>
23203 14:21:04.419629 gettimeofday({1284459664, 419652}, NULL) = 0 <0.000019>
23203 14:21:04.419704 clock_gettime(CLOCK_MONOTONIC, {119262, 939532390}) = 0 <0.000018>
23203 14:21:04.419777 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000019>
23203 14:21:04.419857 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 927000}}, NULL) = 0 <0.000020>
23203 14:21:04.419936 clock_gettime(CLOCK_MONOTONIC, {119262, 939763985}) = 0 <0.000018>
23203 14:21:04.420008 gettimeofday({1284459664, 420031}, NULL) = 0 <0.000019>
23203 14:21:04.420089 futex(0x824c444, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
23206 14:21:04.420143 <... futex resumed> ) = 0 <0.002052>
23203 14:21:04.420174 <... futex resumed> ) = 1 <0.000064>
23206 14:21:04.420217 rt_sigpending( <unfinished ...>
23203 14:21:04.420249 select(13, [0 5 7 9 10 12], [], [], {1, 0} <unfinished ...>
23206 14:21:04.420294 <... rt_sigpending resumed> [ALRM]) = 0 <0.000059>
23203 14:21:04.420343 <... select resumed> ) = 3 (in [5 10 12], left {0, 999984}) <0.000055>
23206 14:21:04.420393 futex(0x824c444, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
23203 14:21:04.420424 read(12,  <unfinished ...>
23206 14:21:04.420452 <... futex resumed> ) = 0 <0.000037>
23203 14:21:04.420479 <... read resumed> "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000036>
23206 14:21:04.420543 ioctl(8, KVM_RUN <unfinished ...>
23203 14:21:04.420592 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000024>
23203 14:21:04.420739 write(6, "\0", 1) = 1 <0.000027>
23203 14:21:04.420828 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000018>
23203 14:21:04.420910 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000021>
23203 14:21:04.421039 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000020>
23203 14:21:04.421181 write(6, "\0", 1) = 1 <0.000025>
23203 14:21:04.421264 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000052>
23203 14:21:04.421379 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000020>
23203 14:21:04.421457 read(10, "\3\0\0\0\0\0\0\0", 4096) = 8 <0.000019>
23203 14:21:04.421538 read(10, 0xffa86510, 4096) = -1 EAGAIN (Resource temporarily unavailable) <0.000019>
23203 14:21:04.421632 read(5, "\0\0\0", 512) = 3 <0.000047>
23203 14:21:04.421742 read(5, 0xffa87310, 512) = -1 EAGAIN (Resource temporarily unavailable) <0.000052>
23203 14:21:04.421864 clock_gettime(CLOCK_MONOTONIC, {119262, 941710424}) = 0 <0.000037>
23203 14:21:04.421959 gettimeofday({1284459664, 421983}, NULL) = 0 <0.000019>
23203 14:21:04.422034 clock_gettime(CLOCK_MONOTONIC, {119262, 941862127}) = 0 <0.000018>
23203 14:21:04.422108 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000020>
23203 14:21:04.422189 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000099>
23203 14:21:04.422413 clock_gettime(CLOCK_MONOTONIC, {119262, 942253027}) = 0 <0.000022>
23203 14:21:04.422512 clock_gettime(CLOCK_MONOTONIC, {119262, 942342550}) = 0 <0.000019>
23203 14:21:04.422589 clock_gettime(CLOCK_MONOTONIC, {119262, 942418192}) = 0 <0.000018>
23203 14:21:04.422663 gettimeofday({1284459664, 422686}, NULL) = 0 <0.000018>
23203 14:21:04.422737 clock_gettime(CLOCK_MONOTONIC, {119262, 942565097}) = 0 <0.000018>
23203 14:21:04.422809 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000019>
23203 14:21:04.422890 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 925000}}, NULL) = 0 <0.000087>
23203 14:21:04.423100 clock_gettime(CLOCK_MONOTONIC, {119262, 942938261}) = 0 <0.000021>
23203 14:21:04.423188 gettimeofday({1284459664, 423212}, NULL) = 0 <0.000020>
23203 14:21:04.423273 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999982}) <0.000089>
23203 14:21:04.423517 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000026>
23203 14:21:04.423673 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000020>
23203 14:21:04.423861 write(6, "\0", 1) = 1 <0.000050>
23203 14:21:04.424008 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000054>
23203 14:21:04.424127 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000022>
23203 14:21:04.424311 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000058>
23203 14:21:04.424521 write(6, "\0", 1) = 1 <0.000027>
23203 14:21:04.424607 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000020>
23203 14:21:04.424691 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000021>
23203 14:21:04.424771 clock_gettime(CLOCK_MONOTONIC, {119262, 944601250}) = 0 <0.000020>
23203 14:21:04.424848 gettimeofday({1284459664, 424872}, NULL) = 0 <0.000019>
23203 14:21:04.424923 clock_gettime(CLOCK_MONOTONIC, {119262, 944805889}) = 0 <0.000048>
23203 14:21:04.425068 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000021>
23203 14:21:04.425152 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000075>
23203 14:21:04.425292 clock_gettime(CLOCK_MONOTONIC, {119262, 945121851}) = 0 <0.000020>
23203 14:21:04.425372 clock_gettime(CLOCK_MONOTONIC, {119262, 945200578}) = 0 <0.000019>
23203 14:21:04.425446 clock_gettime(CLOCK_MONOTONIC, {119262, 945278488}) = 0 <0.000021>
23203 14:21:04.425524 gettimeofday({1284459664, 425548}, NULL) = 0 <0.000020>
23203 14:21:04.425600 clock_gettime(CLOCK_MONOTONIC, {119262, 945428567}) = 0 <0.000019>
23203 14:21:04.425674 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000019>
23203 14:21:04.425754 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 923000}}, NULL) = 0 <0.000019>
23203 14:21:04.425835 clock_gettime(CLOCK_MONOTONIC, {119262, 945663413}) = 0 <0.000019>
23203 14:21:04.425909 gettimeofday({1284459664, 425932}, NULL) = 0 <0.000018>
23203 14:21:04.425991 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999983}) <0.000091>
23203 14:21:04.426232 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000025>
23203 14:21:04.426388 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000019>
23203 14:21:04.426535 write(6, "\0", 1) = 1 <0.000027>
23203 14:21:04.426621 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000055>
23203 14:21:04.426744 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000022>
23203 14:21:04.426946 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000057>
23203 14:21:04.427155 write(6, "\0", 1) = 1 <0.000026>
23203 14:21:04.427240 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000019>
23203 14:21:04.427322 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000020>
23203 14:21:04.427400 read(10, "\4\0\0\0\0\0\0\0", 4096) = 8 <0.000019>
23203 14:21:04.427481 read(10, 0xffa86510, 4096) = -1 EAGAIN (Resource temporarily unavailable) <0.000018>
23203 14:21:04.427555 read(5, "\0\0\0\0", 512) = 4 <0.000022>
23203 14:21:04.427635 read(5, 0xffa87310, 512) = -1 EAGAIN (Resource temporarily unavailable) <0.000019>
23203 14:21:04.427709 clock_gettime(CLOCK_MONOTONIC, {119262, 947539408}) = 0 <0.000020>
23203 14:21:04.429301 gettimeofday({1284459664, 429383}, NULL) = 0 <0.000018>
23203 14:21:04.429442 clock_gettime(CLOCK_MONOTONIC, {119262, 949275654}) = 0 <0.000014>
23203 14:21:04.429518 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000015>
23203 14:21:04.429598 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000017>
23203 14:21:04.429676 clock_gettime(CLOCK_MONOTONIC, {119262, 949504996}) = 0 <0.000013>
23203 14:21:04.429750 clock_gettime(CLOCK_MONOTONIC, {119262, 949579332}) = 0 <0.000014>
23203 14:21:04.429819 clock_gettime(CLOCK_MONOTONIC, {119262, 949647805}) = 0 <0.000013>
23203 14:21:04.429891 gettimeofday({1284459664, 429914}, NULL) = 0 <0.000013>
23203 14:21:04.429959 clock_gettime(CLOCK_MONOTONIC, {119262, 949787250}) = 0 <0.000014>
23203 14:21:04.430025 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000013>
23203 14:21:04.430098 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 932000}}, NULL) = 0 <0.000015>
23203 14:21:04.430173 clock_gettime(CLOCK_MONOTONIC, {119262, 950001657}) = 0 <0.000013>
23203 14:21:04.430240 gettimeofday({1284459664, 430263}, NULL) = 0 <0.000013>
23203 14:21:04.430319 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999988}) <0.000028>
23203 14:21:04.430438 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000015>
23203 14:21:04.430546 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000014>
23203 14:21:04.430658 write(6, "\0", 1) = 1 <0.000017>
23203 14:21:04.430733 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000015>
23203 14:21:04.430810 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000014>
23203 14:21:04.430883 clock_gettime(CLOCK_MONOTONIC, {119262, 950713026}) = 0 <0.000013>
23203 14:21:04.430952 gettimeofday({1284459664, 430975}, NULL) = 0 <0.000013>
23203 14:21:04.431020 clock_gettime(CLOCK_MONOTONIC, {119262, 950848643}) = 0 <0.000013>
23203 14:21:04.431091 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000014>
23203 14:21:04.431165 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000015>
23203 14:21:04.431241 clock_gettime(CLOCK_MONOTONIC, {119262, 951070442}) = 0 <0.000014>
23203 14:21:04.431312 clock_gettime(CLOCK_MONOTONIC, {119262, 951141565}) = 0 <0.000014>
23203 14:21:04.431381 clock_gettime(CLOCK_MONOTONIC, {119262, 951209628}) = 0 <0.000014>
23203 14:21:04.431448 gettimeofday({1284459664, 431474}, NULL) = 0 <0.000014>
23203 14:21:04.431519 clock_gettime(CLOCK_MONOTONIC, {119262, 951347952}) = 0 <0.000013>
23203 14:21:04.431586 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000014>
23203 14:21:04.431662 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 932000}}, NULL) = 0 <0.000016>
23203 14:21:04.431740 clock_gettime(CLOCK_MONOTONIC, {119262, 951569525}) = 0 <0.000014>
23203 14:21:04.431809 gettimeofday({1284459664, 431833}, NULL) = 0 <0.000013>
23203 14:21:04.431897 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999993}) <0.000021>
23203 14:21:04.432005 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000015>
23203 14:21:04.432125 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000014>
23203 14:21:04.432233 write(6, "\0", 1) = 1 <0.000017>
23203 14:21:04.432305 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000014>
23203 14:21:04.432378 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000012>
23203 14:21:04.432444 read(10, "\2\0\0\0\0\0\0\0", 4096) = 8 <0.000013>
23203 14:21:04.432518 read(10, 0xffa86510, 4096) = -1 EAGAIN (Resource temporarily unavailable) <0.000013>
23203 14:21:04.432584 read(5, "\0\0", 512) = 2 <0.000014>
23203 14:21:04.432658 read(5, 0xffa87310, 512) = -1 EAGAIN (Resource temporarily unavailable) <0.000012>
23203 14:21:04.432724 clock_gettime(CLOCK_MONOTONIC, {119262, 952551357}) = 0 <0.000012>
23203 14:21:04.432789 gettimeofday({1284459664, 432811}, NULL) = 0 <0.000012>
23203 14:21:04.432854 clock_gettime(CLOCK_MONOTONIC, {119262, 952681486}) = 0 <0.000012>
23203 14:21:04.432919 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000013>
23203 14:21:04.432990 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000014>
23203 14:21:04.433062 clock_gettime(CLOCK_MONOTONIC, {119262, 952889944}) = 0 <0.000012>
23203 14:21:04.433129 clock_gettime(CLOCK_MONOTONIC, {119262, 952957001}) = 0 <0.000012>
23203 14:21:04.433194 clock_gettime(CLOCK_MONOTONIC, {119262, 953021746}) = 0 <0.000012>
23203 14:21:04.433259 gettimeofday({1284459664, 433283}, NULL) = 0 <0.000012>
23203 14:21:04.433327 clock_gettime(CLOCK_MONOTONIC, {119262, 953154033}) = 0 <0.000013>
23203 14:21:04.433392 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000013>
23203 14:21:04.433463 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 936000}}, NULL) = 0 <0.000014>
23203 14:21:04.433535 clock_gettime(CLOCK_MONOTONIC, {119262, 953363023}) = 0 <0.000013>
23203 14:21:04.433601 gettimeofday({1284459664, 433622}, NULL) = 0 <0.000012>
23203 14:21:04.433670 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999992}) <0.000021>
23203 14:21:04.433771 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000013>
23203 14:21:04.433887 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000022>
23203 14:21:04.434089 write(6, "\0", 1) = 1 <0.000064>
23203 14:21:04.434247 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000052>
23203 14:21:04.434367 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000021>
23203 14:21:04.434456 clock_gettime(CLOCK_MONOTONIC, {119262, 954287414}) = 0 <0.000020>
23203 14:21:04.434535 gettimeofday({1284459664, 434559}, NULL) = 0 <0.000019>
23203 14:21:04.434610 clock_gettime(CLOCK_MONOTONIC, {119262, 954438671}) = 0 <0.000019>
23203 14:21:04.434684 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000020>
23203 14:21:04.434766 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000023>
23203 14:21:04.434849 clock_gettime(CLOCK_MONOTONIC, {119262, 954678059}) = 0 <0.000018>
23203 14:21:04.434929 clock_gettime(CLOCK_MONOTONIC, {119262, 954778987}) = 0 <0.000040>
23203 14:21:04.435027 clock_gettime(CLOCK_MONOTONIC, {119262, 954856108}) = 0 <0.000019>
23203 14:21:04.435101 gettimeofday({1284459664, 435125}, NULL) = 0 <0.000019>
23203 14:21:04.435176 clock_gettime(CLOCK_MONOTONIC, {119262, 955004347}) = 0 <0.000018>
23203 14:21:04.435276 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000019>
23203 14:21:04.435357 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 923000}}, NULL) = 0 <0.000020>
23203 14:21:04.435437 clock_gettime(CLOCK_MONOTONIC, {119262, 955265191}) = 0 <0.000018>
23203 14:21:04.435510 gettimeofday({1284459664, 435533}, NULL) = 0 <0.000018>
23203 14:21:04.435594 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999983}) <0.000096>
23203 14:21:04.435840 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000026>
23203 14:21:04.436037 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000054>
23203 14:21:04.436260 write(6, "\0", 1) = 1 <0.000028>
23203 14:21:04.436356 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000054>
23203 14:21:04.436475 read(12, "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0"..., 128) = 128 <0.000022>
23203 14:21:04.436661 rt_sigaction(SIGALRM, NULL, {0x80522d0, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 <0.000062>
23203 14:21:04.436874 write(6, "\0", 1) = 1 <0.000027>
23203 14:21:04.436960 write(11, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000021>
23203 14:21:04.437045 read(12, 0xffa87480, 128) = -1 EAGAIN (Resource temporarily unavailable) <0.000021>
23203 14:21:04.437123 read(10, "\3\0\0\0\0\0\0\0", 4096) = 8 <0.000020>
23203 14:21:04.437205 read(10, 0xffa86510, 4096) = -1 EAGAIN (Resource temporarily unavailable) <0.000018>
23203 14:21:04.437279 read(5, "\0\0\0", 512) = 3 <0.000023>
23203 14:21:04.437360 read(5, 0xffa87310, 512) = -1 EAGAIN (Resource temporarily unavailable) <0.000019>
23203 14:21:04.437434 clock_gettime(CLOCK_MONOTONIC, {119262, 957264356}) = 0 <0.000020>
23203 14:21:04.437511 gettimeofday({1284459664, 437535}, NULL) = 0 <0.000019>
23203 14:21:04.437586 clock_gettime(CLOCK_MONOTONIC, {119262, 957414785}) = 0 <0.000019>
23203 14:21:04.437660 timer_gettime(0x1, {it_interval={0, 0}, it_value={0, 0}}) = 0 <0.000020>
23203 14:21:04.437743 timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 <0.000091>
23203 14:21:04.437958 clock_gettime(CLOCK_MONOTONIC, {119262, 957798172}) = 0 <0.000022>
23203 14:21:04.438054 clock_gettime(CLOCK_MONOTONIC, {119262, 957882793}) = 0 <0.000025>

So it is constantly poking fds# 11, 12, 10, 5 & 6.
5 and 6 are pipe (selfpipe?), and 10..12 are "anon inode".

Here's the command line again:

qemu-system-x86_64 \
  -netdev type=tap,ifname=tap-kvm,id=x \
  -device virtio-net-pci,netdev=x \
  -monitor stdio \
  -boot n \
  -usbdevice tablet \
  -m 1G \
  -vnc :0

Yes, it does quite a lot of timer stuff... ;)

Thanks!

/mjt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14 10:25                   ` Michael Tokarev
@ 2010-09-14 10:39                     ` Avi Kivity
  2010-09-14 11:02                       ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-14 10:39 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list, Jes Sorensen, linux-perf-users

  On 09/14/2010 12:25 PM, Michael Tokarev wrote:
> Not that it is much helpful either.  lsof:
>
> qemu-syst 23203  mjt    0u   CHR   136,9      0t0      12 /dev/pts/9
> qemu-syst 23203  mjt    1u   CHR   136,9      0t0      12 /dev/pts/9
> qemu-syst 23203  mjt    2u   CHR   136,9      0t0      12 /dev/pts/9
> qemu-syst 23203  mjt    3u   CHR  10,232      0t0    4402 /dev/kvm
> qemu-syst 23203  mjt    4u  0000     0,9        0     607 anon_inode
> qemu-syst 23203  mjt    5r  FIFO     0,8      0t0 8172675 pipe
> qemu-syst 23203  mjt    6w  FIFO     0,8      0t0 8172675 pipe
> qemu-syst 23203  mjt    7u   CHR  10,200      0t0    1228 /dev/net/tun
> qemu-syst 23203  mjt    8u  0000     0,9        0     607 anon_inode
> qemu-syst 23203  mjt    9u  IPv4 8173217      0t0     TCP *:5900 (LISTEN)
> qemu-syst 23203  mjt   10u  0000     0,9        0     607 anon_inode
> qemu-syst 23203  mjt   11u  0000     0,9        0     607 anon_inode
> qemu-syst 23203  mjt   12u  0000     0,9        0     607 anon_inode

> So it is constantly poking fds# 11, 12, 10, 5&  6.
> 5 and 6 are pipe (selfpipe?),

signalfd emulation, used to deliver signals efficiently.  Older glibc?

> and 10..12 are "anon inode".

Those are likely eventfds.

> Here's the command line again:
>
> qemu-system-x86_64 \
>    -netdev type=tap,ifname=tap-kvm,id=x \
>    -device virtio-net-pci,netdev=x \
>    -monitor stdio \
>    -boot n \
>    -usbdevice tablet \
>    -m 1G \
>    -vnc :0
>
> Yes, it does quite a lot of timer stuff... ;)
>
>

So timers internal to usb.

Please try (independently):

- just -usb, without -usbdevice tablet
- instrument calls to qemu_mod_timer() in hw/usb-*hci.c.  Looks like 
these are all 1kHz, but something else is clearly happening.

-- 
error compiling committee.c: too many arguments to function



* Re: high load with usb device
  2010-09-14 10:39                     ` Avi Kivity
@ 2010-09-14 11:02                       ` Michael Tokarev
  2010-09-14 13:15                         ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 11:02 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list, Jes Sorensen, linux-perf-users

14.09.2010 14:39, Avi Kivity wrote:
>  On 09/14/2010 12:25 PM, Michael Tokarev wrote:
>> Not that it is much helpful either.  lsof:
>>
>> qemu-syst 23203  mjt    0u   CHR   136,9      0t0      12 /dev/pts/9
>> qemu-syst 23203  mjt    1u   CHR   136,9      0t0      12 /dev/pts/9
>> qemu-syst 23203  mjt    2u   CHR   136,9      0t0      12 /dev/pts/9
>> qemu-syst 23203  mjt    3u   CHR  10,232      0t0    4402 /dev/kvm
>> qemu-syst 23203  mjt    4u  0000     0,9        0     607 anon_inode
>> qemu-syst 23203  mjt    5r  FIFO     0,8      0t0 8172675 pipe
>> qemu-syst 23203  mjt    6w  FIFO     0,8      0t0 8172675 pipe
>> qemu-syst 23203  mjt    7u   CHR  10,200      0t0    1228 /dev/net/tun
>> qemu-syst 23203  mjt    8u  0000     0,9        0     607 anon_inode
>> qemu-syst 23203  mjt    9u  IPv4 8173217      0t0     TCP *:5900 (LISTEN)
>> qemu-syst 23203  mjt   10u  0000     0,9        0     607 anon_inode
>> qemu-syst 23203  mjt   11u  0000     0,9        0     607 anon_inode
>> qemu-syst 23203  mjt   12u  0000     0,9        0     607 anon_inode
> 
>> So it is constantly poking fds# 11, 12, 10, 5&  6.
>> 5 and 6 are pipe (selfpipe?),
> 
> signalfd emulation, used to deliver signals efficiently.  Older glibc?

[e]glibc-2.11.

$ grep SIGNAL config-host.mak
CONFIG_SIGNALFD=y

From strace of another run:
24318 signalfd(-1, [BUS ALRM IO], 8)    = 12
(so one of the remaining fds is a signalfd :)

>> and 10..12 are "anon inode".
> 
> Those are likely eventfds.
> 
>> Here's the command line again:
>>
>> qemu-system-x86_64 \
>>    -netdev type=tap,ifname=tap-kvm,id=x \
>>    -device virtio-net-pci,netdev=x \
>>    -monitor stdio \
>>    -boot n \
>>    -usbdevice tablet \
>>    -m 1G \
>>    -vnc :0
>>
>> Yes, it does quite a lot of timer stuff... ;)
> 
> So timers internal to usb.
> 
> Please try (independently):
> 
> - just -usb, without -usbdevice tablet

No, that one works as expected - all quiet.
-usbdevice tablet is also quiet up until the
guest loads the usb host controller driver
(not a particular usb device driver).

> - instrument calls to qemu_mod_timer() in hw/usb-*hci.c.  Looks like
> these are all 1kHz, but something else is clearly happening.

Doing that now...

Thanks!

/mjt


* Re: high load with usb device
  2010-09-14 11:02                       ` Michael Tokarev
@ 2010-09-14 13:15                         ` Michael Tokarev
  2010-09-14 13:25                           ` Avi Kivity
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 13:15 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

[linux-perf-users removed from Cc]

14.09.2010 15:02, Michael Tokarev wrote:
> 14.09.2010 14:39, Avi Kivity wrote:
>>  On 09/14/2010 12:25 PM, Michael Tokarev wrote:
>>> Not that it is much helpful either.  lsof:
>>>
>>> qemu-syst 23203  mjt    0u   CHR   136,9      0t0      12 /dev/pts/9
>>> qemu-syst 23203  mjt    1u   CHR   136,9      0t0      12 /dev/pts/9
>>> qemu-syst 23203  mjt    2u   CHR   136,9      0t0      12 /dev/pts/9
>>> qemu-syst 23203  mjt    3u   CHR  10,232      0t0    4402 /dev/kvm
>>> qemu-syst 23203  mjt    4u  0000     0,9        0     607 anon_inode
>>> qemu-syst 23203  mjt    5r  FIFO     0,8      0t0 8172675 pipe
>>> qemu-syst 23203  mjt    6w  FIFO     0,8      0t0 8172675 pipe
>>> qemu-syst 23203  mjt    7u   CHR  10,200      0t0    1228 /dev/net/tun
>>> qemu-syst 23203  mjt    8u  0000     0,9        0     607 anon_inode
>>> qemu-syst 23203  mjt    9u  IPv4 8173217      0t0     TCP *:5900 (LISTEN)
>>> qemu-syst 23203  mjt   10u  0000     0,9        0     607 anon_inode
>>> qemu-syst 23203  mjt   11u  0000     0,9        0     607 anon_inode
>>> qemu-syst 23203  mjt   12u  0000     0,9        0     607 anon_inode
>>
>>> So it is constantly poking fds# 11, 12, 10, 5&  6.
>>> 5 and 6 are pipe (selfpipe?),
>>
>> signalfd emulation, used to deliver signals efficiently.  Older glibc?
> 
> [e]glibc-2.11.
> 
> $ grep SIGNAL config-host.mak
> CONFIG_SIGNALFD=y
> 
> From strace of another run:
> 24318 signalfd(-1, [BUS ALRM IO], 8)    = 12
> (so one of the remaining fds is a signalfd :)
> 
>>> and 10..12 are "anon inode".
>>
>> Those are likely eventfds.
>>
>>> Here's the command line again:
>>>
>>> qemu-system-x86_64 \
>>>    -netdev type=tap,ifname=tap-kvm,id=x \
>>>    -device virtio-net-pci,netdev=x \
>>>    -monitor stdio \
>>>    -boot n \
>>>    -usbdevice tablet \
>>>    -m 1G \
>>>    -vnc :0
>>>
>>> Yes, it does quite a lot of timer stuff... ;)
>>
>> So timers internal to usb.
>>
>> Please try (independently):
>>
>> - just -usb, without -usbdevice tablet
> 
> No, that one works as expected - all quiet.
> -usbdevice tablet is also quiet up until the
> guest loads the usb host controller driver
> (not a particular usb device driver).
> 
>> - instrument calls to qemu_mod_timer() in hw/usb-*hci.c.  Looks like
>> these are all 1kHz, but something else is clearly happening.

Ok. There's nothing interesting going on there either,
apparently.

It is using hw/usb-uhci.c.  I added a few prints() in there,
but they're firing at the defined 1KHz frequency.

Just for test, I lowered the frequency (FRAME_TIMER_FREQ)
from 1000 to 500, and the load dropped to half, from 19%
to 9..10%.

Looking at what the hw/usb-uhci.c:uhci_frame_timer()
routine does, that many writes and reads and that
many gettime calls are quite expected.  It is polling
for events every 1/1000th of a second, instead of
using some form of select().

I'll do some more tests -- after all I'm curious why winXP
does not show this behaviour.

Thanks!

/mjt


* Re: high load with usb device
  2010-09-14 13:15                         ` Michael Tokarev
@ 2010-09-14 13:25                           ` Avi Kivity
  2010-09-14 13:29                             ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-14 13:25 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/14/2010 03:15 PM, Michael Tokarev wrote:
> >
> >>  - instrument calls to qemu_mod_timer() in hw/usb-*hci.c.  Looks like
> >>  these are all 1kHz, but something else is clearly happening.
>
> Ok. There's nothing interesting going on there either,
> apparently.
>
> It is using hw/usb-uhci.c.  I added a few prints() in there,
> but they're firing at the defined 1KHz frequency.
>
> Just for test, I lowered the frequency (FRAME_TIMER_FREQ)
> from 1000 to 500, and the load dropped to half, from 19%
> to 9..10%.
>
> Looking at what hw/usb-uhci.c:uhci_frame_timer() routine
> does, it is quite expected to have that many writes and
> reads and that many gettimers().  It is polling for events
> every 1/1000th of a second, instead of using some form of
> select().
>

IIUC that's mandated by USB hardware.  The guest may place data in
memory, and USB polls it to see if it needs to send some message on
the bus.

Please post an strace again, this time with -e trace=select.  Looks like
each timer callback results in >50 syscalls, 4 of which are select()s.



-- 
error compiling committee.c: too many arguments to function



* Re: high load with usb device
  2010-09-14 13:25                           ` Avi Kivity
@ 2010-09-14 13:29                             ` Michael Tokarev
  2010-09-14 13:38                               ` Michael Tokarev
  2010-09-14 14:45                               ` Avi Kivity
  0 siblings, 2 replies; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 13:29 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

14.09.2010 17:25, Avi Kivity wrote:
>  On 09/14/2010 03:15 PM, Michael Tokarev wrote:
[]
>> Looking at what hw/usb-uhci.c:uhci_frame_timer() routine
>> does, it is quite expected to have that many writes and
>> reads and that many gettimers().  It is polling for events
>> every 1/1000th of a second, instead of using some form of
>> select().
> 
> IIUC that's mandated by USB hardware.  The guest may place data in
> memory, and USB polls it to see if it needs to send some message on
> the bus.

Well, checking guest memory does not involve so many
reads/writes, I guess ;)

> Please post an strace again, this time with -e trace=select.  Looks like
> each timer callback results in >50 syscalls, 4 of which are select()s).

Here we go.

qemu-syst 25728  mjt    0u   CHR    136,9      0t0       12 /dev/pts/9
qemu-syst 25728  mjt    1u   CHR    136,9      0t0       12 /dev/pts/9
qemu-syst 25728  mjt    2u   CHR    136,9      0t0       12 /dev/pts/9
qemu-syst 25728  mjt    3u   CHR   10,232      0t0     4402 /dev/kvm
qemu-syst 25728  mjt    4u  0000      0,9        0      607 anon_inode
qemu-syst 25728  mjt    5r  FIFO      0,8      0t0 11703862 pipe
qemu-syst 25728  mjt    6w  FIFO      0,8      0t0 11703862 pipe
qemu-syst 25728  mjt    7u   CHR   10,200      0t0     1228 /dev/net/tun
qemu-syst 25728  mjt    8u  0000      0,9        0      607 anon_inode
qemu-syst 25728  mjt    9u  IPv4 11704055      0t0      TCP *:5900 (LISTEN)
qemu-syst 25728  mjt   10u  0000      0,9        0      607 anon_inode
qemu-syst 25728  mjt   11u  0000      0,9        0      607 anon_inode
qemu-syst 25728  mjt   12u  0000      0,9        0      607 anon_inode

17:27:23.995096 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998573}) <0.001461>
17:27:23.996994 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999987}) <0.000042>
17:27:23.997258 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999998}) <0.000011>
17:27:23.997561 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000009>
17:27:23.997739 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998771}) <0.001256>
17:27:23.999458 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999991}) <0.000017>
17:27:23.999665 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999998}) <0.000010>
17:27:23.999996 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000009>
17:27:24.000199 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998775}) <0.001241>
17:27:24.001666 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000006>
17:27:24.001768 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999932}) <0.000103>
17:27:24.001985 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.002061 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998407}) <0.001617>
17:27:24.004234 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999992}) <0.000041>
17:27:24.004827 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998683}) <0.001361>
17:27:24.006746 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999991}) <0.000019>
17:27:24.007218 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998545}) <0.001479>
17:27:24.009212 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999992}) <0.000014>
17:27:24.009366 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.009667 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999992}) <0.000017>
17:27:24.009964 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998755}) <0.001272>
17:27:24.011573 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.011771 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.016422 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999991}) <0.000014>
17:27:24.016596 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998705}) <0.001313>
17:27:24.018166 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000008>
17:27:24.018307 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999935}) <0.000071>
17:27:24.018643 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000007>
17:27:24.018767 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999992}) <0.000012>
17:27:24.018964 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000009>
17:27:24.019097 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998457}) <0.001562>
17:27:24.021019 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.021167 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.021388 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000008>
17:27:24.021512 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998518}) <0.001499>
17:27:24.023341 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.023583 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.023777 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.023900 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998605}) <0.001414>
17:27:24.025611 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.025755 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.025945 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.026066 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999154}) <0.000861>
17:27:24.027218 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.027358 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.027550 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.027733 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999990}) <0.000015>
17:27:24.027932 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000008>
17:27:24.028053 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998340}) <0.001693>
17:27:24.030049 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.030439 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.030634 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.030757 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998712}) <0.001312>
17:27:24.032273 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000006>
17:27:24.032367 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999918}) <0.000086>
17:27:24.032732 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999995}) <0.000013>
17:27:24.032896 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998530}) <0.001511>
17:27:24.034841 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999988}) <0.000021>
17:27:24.035046 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000010>
17:27:24.035325 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000010>
17:27:24.035490 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998655}) <0.001384>
17:27:24.037374 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999987}) <0.000022>
17:27:24.037581 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000012>
17:27:24.037868 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000010>
17:27:24.038038 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998679}) <0.001343>
17:27:24.039753 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000011>
17:27:24.040002 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999992}) <0.000010>
17:27:24.040135 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.040210 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998511}) <0.001502>
17:27:24.041899 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000006>
17:27:24.041988 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999878}) <0.000135>
17:27:24.042273 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.042360 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998431}) <0.001590>
17:27:24.044257 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000010>
17:27:24.044448 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.044639 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.044761 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998520}) <0.001501>
17:27:24.046557 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.046704 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000007>
17:27:24.046997 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000007>
17:27:24.047121 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998574}) <0.001445>
17:27:24.048859 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000008>
17:27:24.049002 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.049193 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.049315 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998456}) <0.001562>
17:27:24.051173 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.051316 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000007>
17:27:24.051506 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.051628 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998455}) <0.001564>
17:27:24.053595 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999996}) <0.000009>
17:27:24.053854 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998375}) <0.001644>
17:27:24.055793 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.055936 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.056127 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.056249 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999133}) <0.000889>
17:27:24.057437 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.057581 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.057833 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.057955 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999988}) <0.000017>
17:27:24.058260 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999992}) <0.000017>
17:27:24.058473 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998336}) <0.001688>
17:27:24.061690 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999993}) <0.000013>
17:27:24.061868 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000009>
17:27:24.062120 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999993}) <0.000016>
17:27:24.062325 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998650}) <0.001386>
17:27:24.064563 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999992}) <0.000069>
17:27:24.065264 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998966}) <0.001069>
17:27:24.067189 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999985}) <0.000027>
17:27:24.067689 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998523}) <0.001517>
17:27:24.069684 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999991}) <0.000018>
17:27:24.070097 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999994}) <0.000015>
17:27:24.070460 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000011>
17:27:24.070648 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998988}) <0.001031>
17:27:24.071969 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.072110 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.072302 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.072423 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998442}) <0.001584>
17:27:24.074628 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999993}) <0.000019>
17:27:24.075215 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998650}) <0.001379>
17:27:24.077279 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999993}) <0.000020>
17:27:24.077875 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998656}) <0.001376>
17:27:24.079896 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999993}) <0.000019>
17:27:24.080511 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 997648}) <0.002393>
17:27:24.083586 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999993}) <0.000019>
17:27:24.084182 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 997956}) <0.002214>
17:27:24.087306 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999993}) <0.000020>
17:27:24.088097 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999025}) <0.001024>
17:27:24.090080 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999972}) <0.000050>
17:27:24.090816 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998969}) <0.001068>
17:27:24.092516 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999988}) <0.000025>
17:27:24.093052 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998552}) <0.001490>
17:27:24.095219 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999986}) <0.000028>
17:27:24.095814 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998631}) <0.001411>
17:27:24.097910 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999985}) <0.000029>
17:27:24.098495 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998609}) <0.001422>
17:27:24.100709 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999986}) <0.000052>
17:27:24.104943 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999865}) <0.000150>
17:27:24.105349 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999995}) <0.000010>
17:27:24.105492 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998544}) <0.001480>
17:27:24.107290 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000010>
17:27:24.107443 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.107635 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.107757 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998444}) <0.001572>
17:27:24.109618 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000008>
17:27:24.109757 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.110038 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000008>
17:27:24.110164 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998548}) <0.001471>
17:27:24.111935 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.112078 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.112268 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.112390 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998464}) <0.001555>
17:27:24.114241 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.114384 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.114573 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.114694 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998458}) <0.001560>
17:27:24.116617 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000008>
17:27:24.116762 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.116954 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.117076 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998517}) <0.001506>
17:27:24.118889 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.119032 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.119222 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.119343 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 997620}) <0.002398>
17:27:24.122048 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.122213 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999994}) <0.000015>
17:27:24.122549 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000010>
17:27:24.122731 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998613}) <0.001481>
17:27:24.124724 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999987}) <0.000055>
17:27:24.125243 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998521}) <0.001510>
17:27:24.127315 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999985}) <0.000084>
17:27:24.127896 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998608}) <0.001432>
17:27:24.129848 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999991}) <0.000016>
17:27:24.130044 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000010>
17:27:24.133414 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999994}) <0.000011>
17:27:24.133598 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.133794 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.133917 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998490}) <0.001532>
17:27:24.135747 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.135894 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.136087 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000008>
17:27:24.136208 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998465}) <0.001552>
17:27:24.138056 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.138198 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.138388 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.138510 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 997378}) <0.002637>
17:27:24.141445 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.141590 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.141784 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000008>
17:27:24.141906 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998464}) <0.001556>
17:27:24.143770 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.143915 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.144105 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.144226 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998472}) <0.001548>
17:27:24.145983 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000006>
17:27:24.146078 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999819}) <0.000255>
17:27:24.146771 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999998}) <0.000060>
17:27:24.147258 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999207}) <0.000824>
17:27:24.148562 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999989}) <0.000021>
17:27:24.148778 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999998}) <0.000010>
17:27:24.149129 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000009>
17:27:24.149300 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998755}) <0.001267>
17:27:24.150998 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999992}) <0.000015>
17:27:24.151191 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999998}) <0.000010>
17:27:24.151468 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000009>
17:27:24.151636 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998667}) <0.001354>
17:27:24.153310 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.153456 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.153650 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.153773 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998451}) <0.001600>
17:27:24.155956 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999986}) <0.000077>
17:27:24.156793 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998898}) <0.001178>
17:27:24.158563 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999990}) <0.000021>
17:27:24.158792 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000012>
17:27:24.159120 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000011>
17:27:24.159317 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 997877}) <0.002140>
17:27:24.161746 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000008>
17:27:24.161889 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.162081 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.162202 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998468}) <0.001550>
17:27:24.164047 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.164191 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000007>
17:27:24.164384 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000008>
17:27:24.164506 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998462}) <0.001557>
17:27:24.166362 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.166517 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.166709 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.166832 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998473}) <0.001545>
17:27:24.168678 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000009>
17:27:24.168820 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.169011 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.169132 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998460}) <0.001557>
17:27:24.170986 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.171128 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.171318 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.171438 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998456}) <0.001563>
17:27:24.173309 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000010>
17:27:24.173542 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000009>
17:27:24.173738 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.173861 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998583}) <0.001457>
17:27:24.175614 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999996}) <0.000008>
17:27:24.175759 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000007>
17:27:24.175954 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.176077 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999139}) <0.000896>
17:27:24.177434 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999989}) <0.000055>
17:27:24.177834 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999998}) <0.000009>
17:27:24.178133 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000009>
17:27:24.178309 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998528}) <0.001492>
17:27:24.180354 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999989}) <0.000018>
17:27:24.180719 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998434}) <0.001599>
17:27:24.182759 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999991}) <0.000017>
17:27:24.182938 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000010>
17:27:24.183198 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000009>
17:27:24.183346 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998594}) <0.001440>
17:27:24.185268 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999991}) <0.000018>
17:27:24.185487 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000012>
17:27:24.185796 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000145>
17:27:24.186095 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998910}) <0.001104>
17:27:24.187599 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000009>
17:27:24.187740 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.187935 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.188058 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998462}) <0.001570>
17:27:24.190181 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999989}) <0.000017>
17:27:24.190374 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000008>
17:27:24.190567 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.190690 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998521}) <0.001496>
17:27:24.192478 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000008>
17:27:24.192618 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000007>
17:27:24.192809 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000007>
17:27:24.192930 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 989677}) <0.016745>
17:27:24.209931 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999992}) <0.000012>
17:27:24.210033 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999958}) <0.000045>
17:27:24.210265 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.210342 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998429}) <0.001591>
17:27:24.212144 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000007>
17:27:24.212241 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999880}) <0.000141>
17:27:24.212558 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000007>
17:27:24.212655 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998491}) <0.001529>
17:27:24.214431 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000007>
17:27:24.214538 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999948}) <0.000056>
17:27:24.214735 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000006>
17:27:24.214828 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998401}) <0.001613>
17:27:24.216668 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000007>
17:27:24.216877 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999997}) <0.000006>
17:27:24.217025 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.217118 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998457}) <0.001558>
17:27:24.218902 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 3 (in [5 10 12], left {0, 999991}) <0.000023>
17:27:24.232348 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999883}) <0.000123>
17:27:24.232690 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000008>
17:27:24.232825 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998519}) <0.001489>
17:27:24.234500 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.234588 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999916}) <0.000088>
17:27:24.234800 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.234880 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998371}) <0.001644>
17:27:24.236733 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000007>
17:27:24.236845 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999895}) <0.000124>
17:27:24.237148 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997}) <0.000006>
17:27:24.237249 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998537}) <0.001475>
17:27:24.238919 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000005>
17:27:24.239011 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999895}) <0.000116>
17:27:24.239270 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998}) <0.000006>

/mjt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14 13:29                             ` Michael Tokarev
@ 2010-09-14 13:38                               ` Michael Tokarev
  2010-09-14 14:45                               ` Avi Kivity
  1 sibling, 0 replies; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 13:38 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

14.09.2010 17:29, Michael Tokarev wrote:
[]
>> Please post an strace again, this time with -e trace=select.  Looks like
>> each timer callback results in >50 syscalls, 4 of which are select()s.

I just built 0.13-rc1 to see how that one performs.  It is very similar
to 0.12 in this respect.  The selfpipe is gone, but the high load is
still there, exactly as it was with 0.12, and with a very similar strace.

/mjt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14 13:29                             ` Michael Tokarev
  2010-09-14 13:38                               ` Michael Tokarev
@ 2010-09-14 14:45                               ` Avi Kivity
  2010-09-14 14:53                                 ` Michael Tokarev
  2010-09-14 15:51                                 ` David S. Ahern
  1 sibling, 2 replies; 36+ messages in thread
From: Avi Kivity @ 2010-09-14 14:45 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/14/2010 03:29 PM, Michael Tokarev wrote:
> 14.09.2010 17:25, Avi Kivity wrote:
> >   On 09/14/2010 03:15 PM, Michael Tokarev wrote:
> []
> >>  Looking at what the hw/usb-uhci.c:uhci_frame_timer() routine
> >>  does, it is quite expected to have that many writes and
> >>  reads and that many gettimeofday()s.  It is polling for events
> >>  every 1/1000th of a second, instead of using some form of
> >>  select().
> >
> >  IIUC that's mandated by USB hardware.  The guest may place data in
> >  memory, and USB polls it to see if it needs to send some message on
> >  the bus.
>
> Well, checking guest memory does not involve so many
> reads/writes, I guess ;)
>
> >  Please post an strace again, this time with -e trace=select.  Looks like
> >  each timer callback results in >50 syscalls, 4 of which are select()s.
>
> Here we go.
>
> qemu-syst 25728  mjt    0u   CHR    136,9      0t0       12 /dev/pts/9
> qemu-syst 25728  mjt    1u   CHR    136,9      0t0       12 /dev/pts/9
> qemu-syst 25728  mjt    2u   CHR    136,9      0t0       12 /dev/pts/9
> qemu-syst 25728  mjt    3u   CHR   10,232      0t0     4402 /dev/kvm
> qemu-syst 25728  mjt    4u  0000      0,9        0      607 anon_inode
> qemu-syst 25728  mjt    5r  FIFO      0,8      0t0 11703862 pipe
> qemu-syst 25728  mjt    6w  FIFO      0,8      0t0 11703862 pipe
> qemu-syst 25728  mjt    7u   CHR   10,200      0t0     1228 /dev/net/tun
> qemu-syst 25728  mjt    8u  0000      0,9        0      607 anon_inode
> qemu-syst 25728  mjt    9u  IPv4 11704055      0t0      TCP *:5900 (LISTEN)
> qemu-syst 25728  mjt   10u  0000      0,9        0      607 anon_inode
> qemu-syst 25728  mjt   11u  0000      0,9        0      607 anon_inode
> qemu-syst 25728  mjt   12u  0000      0,9        0      607 anon_inode
>
> 17:27:23.995096 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998573})<0.001461>
> 17:27:23.996994 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999987})<0.000042>
> 17:27:23.997258 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999998})<0.000011>
> 17:27:23.997561 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998})<0.000009>
> 17:27:23.997739 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998771})<0.001256>
> 17:27:23.999458 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999991})<0.000017>
> 17:27:23.999665 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999998})<0.000010>
> 17:27:23.999996 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998})<0.000009>
> 17:27:24.000199 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998775})<0.001241>
> 17:27:24.001666 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999997})<0.000006>
> 17:27:24.001768 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 999932})<0.000103>
> 17:27:24.001985 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5 10], left {0, 999998})<0.000005>
> 17:27:24.002061 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in [12], left {0, 998407})<0.001617>
>

That pipe is doing a lot of damage (I don't have it, and couldn't 
reproduce your results, another pointer).  Do you have CONFIG_EVENTFD 
set?  If not, why not?

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14 14:45                               ` Avi Kivity
@ 2010-09-14 14:53                                 ` Michael Tokarev
  2010-09-14 16:00                                   ` Avi Kivity
  2010-09-14 15:51                                 ` David S. Ahern
  1 sibling, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 14:53 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

14.09.2010 18:45, Avi Kivity wrote:
>> 17:27:23.999996 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5
>> 10], left {0, 999998})<0.000009>
>> 17:27:24.000199 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in
>> [12], left {0, 998775})<0.001241>
>> 17:27:24.001666 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5
>> 10], left {0, 999997})<0.000006>
>> 17:27:24.001768 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in
>> [12], left {0, 999932})<0.000103>
>> 17:27:24.001985 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5
>> 10], left {0, 999998})<0.000005>
>> 17:27:24.002061 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in
>> [12], left {0, 998407})<0.001617>
> 
> That pipe is doing a lot of damage (I don't have it, and couldn't
> reproduce your results, another pointer).  Do you have CONFIG_EVENTFD
> set?  If not, why not?

As I mentioned in other emails in this thread:

o yes, I do have CONFIG_EVENTFD set, and it is being used
  too (fd#12 in the above strace).

o 0.13.0-rc1 behaves the same way (that is, it also shows
  high load when idle -- the same 18% of host CPU), but it
  has no pipe on fd#5.

Here's how it looks for 0.13.0-rc1
(so far I was playing with 0.12.5):

qemu-syst 11397  mjt    0u   CHR              136,5      0t0        8 /dev/pts/5
qemu-syst 11397  mjt    1u   CHR              136,5      0t0        8 /dev/pts/5
qemu-syst 11397  mjt    2u   CHR              136,5      0t0        8 /dev/pts/5
qemu-syst 11397  mjt    3u   CHR             10,232      0t0     4402 /dev/kvm
qemu-syst 11397  mjt    4u  0000                0,9        0      607 anon_inode
qemu-syst 11397  mjt    5u  0000                0,9        0      607 anon_inode
qemu-syst 11397  mjt    6u  0000                0,9        0      607 anon_inode
qemu-syst 11397  mjt    7u   CHR             10,200      0t0     1228 /dev/net/tun
qemu-syst 11397  mjt    8u  0000                0,9        0      607 anon_inode
qemu-syst 11397  mjt    9u  unix 0xffff8801950a4c80      0t0 13170832 socket
qemu-syst 11397  mjt   10u  unix 0xffff8801950a4680      0t0 13170834 socket
qemu-syst 11397  mjt   11u  0000                0,9        0      607 anon_inode
qemu-syst 11397  mjt   12u  0000                0,9        0      607 anon_inode
qemu-syst 11397  mjt   13u  0000                0,9        0      607 anon_inode

18:52:24.871650 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999867}) <0.000165>
18:52:24.872175 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999993}) <0.000015>
18:52:24.872300 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000010>
18:52:24.872548 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000010>
18:52:24.872648 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999861}) <0.000171>
18:52:24.874516 select(10, [9], NULL, NULL, {0, 0}) = 0 (Timeout) <0.000008>
18:52:24.874686 select(14, [0 5 7 11 13], [], [], {1, 0}) = 2 (in [11 13], left {0, 999993}) <0.000012>
18:52:24.875000 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999994}) <0.000037>
18:52:24.875305 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000010>
18:52:24.875406 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999633}) <0.000387>
18:52:24.876084 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.876195 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999975}) <0.000031>
18:52:24.876414 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000008>
18:52:24.876507 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999741}) <0.000276>
18:52:24.877066 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000009>
18:52:24.877173 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999971}) <0.000035>
18:52:24.877393 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000007>
18:52:24.877485 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999720}) <0.000298>
18:52:24.878067 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000009>
18:52:24.878174 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999971}) <0.000034>
18:52:24.878394 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000007>
18:52:24.878485 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 997665}) <0.002351>
18:52:24.881150 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.881240 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000006>
18:52:24.881389 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000006>
18:52:24.881460 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999703}) <0.000314>
18:52:24.882137 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.882224 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000006>
18:52:24.882373 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000006>
18:52:24.882443 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999664}) <0.000353>
18:52:24.883024 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.883110 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999894}) <0.000124>
18:52:24.883423 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.883506 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999750}) <0.000265>
18:52:24.884001 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.884084 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999888}) <0.000129>
18:52:24.884392 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000006>
18:52:24.884472 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999715}) <0.000300>
18:52:24.884993 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000006>
18:52:24.885074 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999882}) <0.000135>
18:52:24.885385 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.885467 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999710}) <0.000306>
18:52:24.885990 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.886074 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999912}) <0.000093>
18:52:24.886337 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000007>
18:52:24.886425 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999663}) <0.000356>
18:52:24.887058 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000010>
18:52:24.887167 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999962}) <0.000044>
18:52:24.887383 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000007>
18:52:24.887471 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999708}) <0.000312>
18:52:24.888060 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.888165 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999960}) <0.000045>
18:52:24.888382 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.888470 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999687}) <0.000318>
18:52:24.889015 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.889185 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999993}) <0.000018>
18:52:24.889526 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000011>
18:52:24.889653 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999806}) <0.000235>
18:52:24.890433 select(14, [0 5 7 11 13], [], [], {1, 0}) = 2 (in [11 13], left {0, 999996}) <0.000015>
18:52:24.890700 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999770}) <0.000253>
18:52:24.891327 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999993}) <0.000015>
18:52:24.891452 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999997}) <0.000010>
18:52:24.891690 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000010>
18:52:24.891856 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999985}) <0.000022>
18:52:24.892104 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000009>
18:52:24.892204 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999217}) <0.000816>
18:52:24.893380 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999993}) <0.000014>
18:52:24.893555 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999993}) <0.000018>
18:52:24.893962 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000009>
18:52:24.894084 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000009>
18:52:24.894377 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000006>
18:52:24.894440 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999651}) <0.000364>
18:52:24.894994 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000006>
18:52:24.895065 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999859}) <0.000155>
18:52:24.895370 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000006>
18:52:24.895466 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999720}) <0.000295>
18:52:24.895968 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000006>
18:52:24.896035 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999874}) <0.000140>
18:52:24.896467 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000012>
18:52:24.896603 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999812}) <0.000220>
18:52:24.897222 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999992}) <0.000016>
18:52:24.897366 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000010>
18:52:24.897631 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000010>
18:52:24.897825 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000009>
18:52:24.898446 select(14, [0 5 7 11 13], [], [], {1, 0}) = 2 (in [11 13], left {0, 999992}) <0.000553>
18:52:24.899377 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999885}) <0.000136>
18:52:24.900028 select(14, [0 5 7 11 13], [], [], {1, 0}) = 2 (in [11 13], left {0, 999993}) <0.000045>
18:52:24.900461 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999620}) <0.000420>
18:52:24.901384 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999987}) <0.000050>
18:52:24.901588 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000012>
18:52:24.901975 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000011>
18:52:24.902102 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000011>
18:52:24.902392 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000010>
18:52:24.902524 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999714}) <0.000328>
18:52:24.903280 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999991}) <0.000018>
18:52:24.903435 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000010>
18:52:24.903724 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000011>
18:52:24.903955 select(10, [9], NULL, NULL, {0, 0}) = 0 (Timeout) <0.000009>
18:52:24.904137 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999997}) <0.000011>
18:52:24.904427 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000011>
18:52:24.904551 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999742}) <0.000283>
18:52:24.906453 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999993}) <0.000013>
18:52:24.906587 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999997}) <0.000008>
18:52:24.906831 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000008>
18:52:24.906924 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999854}) <0.000153>
18:52:24.907266 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000008>
18:52:24.907359 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999589}) <0.000429>
18:52:24.908075 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.908183 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999969}) <0.000036>
18:52:24.908405 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000008>
18:52:24.908497 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999733}) <0.000284>
18:52:24.909063 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000009>
18:52:24.909255 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999997}) <0.000009>
18:52:24.909471 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000008>
18:52:24.909565 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999804}) <0.000215>
18:52:24.910066 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.910174 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999967}) <0.000038>
18:52:24.910397 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000008>
18:52:24.910489 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999725}) <0.000293>
18:52:24.911074 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.911183 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999969}) <0.000036>
18:52:24.911405 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000007>
18:52:24.911496 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999728}) <0.000291>
18:52:24.912082 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000010>
18:52:24.912192 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999976}) <0.000030>
18:52:24.912407 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000008>
18:52:24.912499 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999713}) <0.000304>
18:52:24.913093 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.913203 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999976}) <0.000029>
18:52:24.913418 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000008>
18:52:24.913508 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 998544}) <0.001471>
18:52:24.915301 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000009>
18:52:24.915411 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999997}) <0.000007>
18:52:24.915627 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000007>
18:52:24.915716 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999924}) <0.000081>
18:52:24.916018 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000008>
18:52:24.916133 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999962}) <0.000043>
18:52:24.916350 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000007>
18:52:24.916436 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999674}) <0.000345>
18:52:24.917054 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.917158 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999959}) <0.000046>
18:52:24.917376 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000008>
18:52:24.917463 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999093}) <0.000927>
18:52:24.918821 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000008>
18:52:24.918928 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000008>
18:52:24.919111 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000007>
18:52:24.919198 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999470}) <0.000537>
18:52:24.919991 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999995}) <0.000030>
18:52:24.920170 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999992}) <0.000037>
18:52:24.920521 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999997}) <0.000011>
18:52:24.920632 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999819}) <0.000207>
18:52:24.921272 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999992}) <0.000016>
18:52:24.921396 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000009>
18:52:24.921630 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000009>
18:52:24.921729 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999877}) <0.000145>
18:52:24.922233 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999993}) <0.000015>
18:52:24.922352 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000010>
18:52:24.922857 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999991}) <0.000017>
18:52:24.922975 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000009>
18:52:24.923213 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000010>
18:52:24.923311 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999513}) <0.000508>
18:52:24.924160 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000011>
18:52:24.924274 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000010>
18:52:24.924508 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000009>
18:52:24.924607 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999810}) <0.000212>
18:52:24.925158 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000011>
18:52:24.925272 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000009>
18:52:24.925507 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000009>
18:52:24.925606 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999809}) <0.000260>
18:52:24.926293 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000013>
18:52:24.926436 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000011>
18:52:24.927194 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999993}) <0.000034>
18:52:24.927394 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999997}) <0.000011>
18:52:24.927688 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000010>
18:52:24.927900 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000011>
18:52:24.928189 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000011>
18:52:24.928315 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999506}) <0.000519>
18:52:24.929554 select(14, [0 5 7 11 13], [], [], {1, 0}) = 2 (in [11 13], left {0, 999995}) <0.000011>
18:52:24.929953 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999949}) <0.000057>
18:52:24.930424 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999994}) <0.000012>
18:52:24.930557 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999629}) <0.000388>
18:52:24.931233 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999996}) <0.000009>
18:52:24.931342 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999973}) <0.000032>
18:52:24.931560 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999998}) <0.000008>
18:52:24.931652 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999865}) <0.000153>
18:52:24.932194 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [11], left {0, 999994}) <0.000032>
18:52:24.932377 select(14, [0 5 7 11 13], [], [], {1, 0}) = 1 (in [13], left {0, 999998}) <0.000011>

/mjt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: high load with usb device
  2010-09-14 14:45                               ` Avi Kivity
  2010-09-14 14:53                                 ` Michael Tokarev
@ 2010-09-14 15:51                                 ` David S. Ahern
  2010-09-14 16:00                                   ` Michael Tokarev
  1 sibling, 1 reply; 36+ messages in thread
From: David S. Ahern @ 2010-09-14 15:51 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Avi Kivity, KVM list

What's your clock source on the host?
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

With the usb tablet device the host clock source is read 2-3 times
more frequently, which for acpi_pm and hpet jacks up the CPU.

David


* Re: high load with usb device
  2010-09-14 15:51                                 ` David S. Ahern
@ 2010-09-14 16:00                                   ` Michael Tokarev
  2010-09-14 16:09                                     ` David S. Ahern
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 16:00 UTC (permalink / raw)
  To: David S. Ahern; +Cc: Avi Kivity, KVM list

14.09.2010 19:51, David S. Ahern wrote:
> cat /sys/devices/system/clocksource/clocksource0/current_clocksource

It's tsc (AthlonII CPU).  Also available are hpet and acpi_pm.
Switching to hpet or acpi_pm does not have visible effect, at
least not while the guest is running.

Thanks!

/mjt


* Re: high load with usb device
  2010-09-14 14:53                                 ` Michael Tokarev
@ 2010-09-14 16:00                                   ` Avi Kivity
  2010-09-14 16:29                                     ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-14 16:00 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/14/2010 04:53 PM, Michael Tokarev wrote:
> 14.09.2010 18:45, Avi Kivity wrote:
> >>  17:27:23.999996 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5
> >>  10], left {0, 999998})<0.000009>
> >>  17:27:24.000199 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in
> >>  [12], left {0, 998775})<0.001241>
> >>  17:27:24.001666 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5
> >>  10], left {0, 999997})<0.000006>
> >>  17:27:24.001768 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in
> >>  [12], left {0, 999932})<0.000103>
> >>  17:27:24.001985 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 2 (in [5
> >>  10], left {0, 999998})<0.000005>
> >>  17:27:24.002061 select(13, [0 5 7 9 10 12], [], [], {1, 0}) = 1 (in
> >>  [12], left {0, 998407})<0.001617>
> >
> >  That pipe is doing a lot of damage (I don't have it, and couldn't
> >  reproduce your results, another pointer).  Do you have CONFIG_EVENTFD
> >  set?  If not, why not?
>
> As I mentioned in other emails in this thread:
>
> o yes, I do have CONFIG_EVENTFD set, and it is being used
>    too (fd#12 in the above strace).

I thought that was the signalfd.

> o 0.13.0-rc1 behaves the same way (that is, it also shows
>    high load when idle -- the same 18% of host CPU), but it
>    has no pipe on fd#5.
>
>

I think it's host_alarm_handler()'s use of qemu_notify_event().  It's 
telling the main loop to rescan pending events, even though it's called 
from the main loop itself.  Please drop it and rerun.

It booted for me and seems to work.

Marcelo, it's safe to remove it, yes? (except for tcg or upstream 
without iothread).

-- 
error compiling committee.c: too many arguments to function



* Re: high load with usb device
  2010-09-14 16:00                                   ` Michael Tokarev
@ 2010-09-14 16:09                                     ` David S. Ahern
  0 siblings, 0 replies; 36+ messages in thread
From: David S. Ahern @ 2010-09-14 16:09 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Avi Kivity, KVM list



On 09/14/10 10:00, Michael Tokarev wrote:
> 14.09.2010 19:51, David S. Ahern wrote:
>> cat /sys/devices/system/clocksource/clocksource0/current_clocksource
> 
> It's tsc (AthlonII CPU).  Also available are hpet and acpi_pm.
> Switching to hpet or acpi_pm does not have visible effect, at
> least not while the guest is running.


acpi_pm takes more cycles to read. On my laptop switching from hpet to
acpi_pm caused the winxp VM to jump up in CPU usage. For both time
sources 'perf top -p <pid of main thread>' shows timer reads as the top
function for qemu-kvm (e.g., read_hpet).

On a Nehalem box the clock source is TSC. 'perf top -p <pid>' for a
winxp VM does not show clock reads at all.

David


> 
> Thanks!
> 
> /mjt


* Re: high load with usb device
  2010-09-14 16:00                                   ` Avi Kivity
@ 2010-09-14 16:29                                     ` Michael Tokarev
  2010-09-14 16:37                                       ` Avi Kivity
  2010-09-14 16:45                                       ` David S. Ahern
  0 siblings, 2 replies; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 16:29 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

14.09.2010 20:00, Avi Kivity wrote:
>> As I mentioned in other emails in this thread:
>>
>> o yes, I do have CONFIG_EVENTFD set, and it is being used
>>    too (fd#12 in the above strace).
> 
> I thought that was the signalfd.

Uh. Yes, it was, I confused the two.  And yes, CONFIG_EVENTFD is
set and used too.

$ grep FD= config-host.mak
CONFIG_EVENTFD=y
CONFIG_SIGNALFD=y

>> o 0.13.0-rc1 behaves the same way (that is, it also shows
>>    high load when idle -- the same 18% of host CPU), but it
>>    has no pipe on fd#5.
> 
> I think it's host_alarm_handler()'s use of qemu_notify_event().  It's
> telling the main loop to rescan pending events, even though it's called
> from the main loop itself.  Please drop it and rerun.

Without qemu_notify_event() in host_alarm_handler():

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 98.96   48.184832       13747      3505           ioctl
  0.39    0.191613          25      7745        28 futex
  0.37    0.181032           1    173192           select
  0.09    0.045379           0    980369           clock_gettime
  0.05    0.024362           0    351024    173220 read
  0.05    0.023247           0    487766           gettimeofday
  0.04    0.017996           0    319428           timer_gettime
  0.03    0.013837           0    198267           timer_settime
  0.02    0.010036           0    177790           rt_sigaction
  0.00    0.000000           0         1           writev
  0.00    0.000000           0         2           poll
  0.00    0.000000           0         1           rt_sigpending
  0.00    0.000000           0         1         1 rt_sigtimedwait
------ ----------- ----------- --------- --------- ----------------
100.00   48.692334               2699091    173249 total

The picture is pretty similar to the one before :)
(And yes, I'm sure I've run the right binary).

But it looks like we're barking up the wrong tree.  It's spending
99% time in ioctl.  Yes, there's quite large amount of reads
and gettimes, but they're taking very small time, even combined.

It is still spending much more time in the ioctl (apparently in
kvm_run).

For comparison, here's the same strace stats without -usbdevice:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.70    0.080237          22      3584           select
  1.09    0.000895           0      6018      3584 read
  0.33    0.000271           0      9670           clock_gettime
  0.31    0.000254           0      6086           gettimeofday
  0.26    0.000210           0      2432           rt_sigaction
  0.17    0.000137           0      3653           timer_gettime
  0.15    0.000122           0      2778           timer_settime
  0.00    0.000000           0         1           ioctl
  0.00    0.000000           0         1           rt_sigpending
  0.00    0.000000           0         1         1 rt_sigtimedwait
------ ----------- ----------- --------- --------- ----------------
100.00    0.082126                 34224      3585 total

Yes, it is still doing lots of unnecessary stuff, but the load
is <1%.

(This is without qemu_notify_event() in host_alarm_handler())

/mjt


* Re: high load with usb device
  2010-09-14 16:29                                     ` Michael Tokarev
@ 2010-09-14 16:37                                       ` Avi Kivity
  2010-09-14 16:50                                         ` Michael Tokarev
  2010-09-14 16:45                                       ` David S. Ahern
  1 sibling, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2010-09-14 16:37 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: KVM list

  On 09/14/2010 06:29 PM, Michael Tokarev wrote:
> 14.09.2010 20:00, Avi Kivity wrote:
> >>  As I mentioned in other emails in this thread:
> >>
> >>  o yes, I do have CONFIG_EVENTFD set, and it is being used
> >>     too (fd#12 in the above strace).
> >
> >  I thought that was the signalfd.
>
> Uh. Yes, it was, i confused the two.  And yes, CONFIG_EVENTFD is
> set and used too.
>
> $ grep FD= config-host.mak
> CONFIG_EVENTFD=y
> CONFIG_SIGNALFD=y
>
> >>  o 0.13.0-rc1 behaves the same way (that is, it also shows
> >>     high load when idle -- the same 18% of host CPU), but it
> >>     has no pipe on fd#5.
> >
> >  I think it's host_alarm_handler()'s use of qemu_notify_event().  It's
> >  telling the main loop to rescan pending events, even though it's called
> >  from the main loop itself.  Please drop it and rerun.
>
> Without qemu_notify_event() in host_alarm_handler():
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>   98.96   48.184832       13747      3505           ioctl
>    0.39    0.191613          25      7745        28 futex
>    0.37    0.181032           1    173192           select
>    0.09    0.045379           0    980369           clock_gettime
>    0.05    0.024362           0    351024    173220 read
>    0.05    0.023247           0    487766           gettimeofday
>    0.04    0.017996           0    319428           timer_gettime
>    0.03    0.013837           0    198267           timer_settime
>    0.02    0.010036           0    177790           rt_sigaction
>    0.00    0.000000           0         1           writev
>    0.00    0.000000           0         2           poll
>    0.00    0.000000           0         1           rt_sigpending
>    0.00    0.000000           0         1         1 rt_sigtimedwait
> ------ ----------- ----------- --------- --------- ----------------
> 100.00   48.692334               2699091    173249 total
>
> The picture is pretty similar to the one before :)
> (And yes, I'm sure I've run the right binary).

No more writes, and read() is cut to twice select() (due to the need to 
see a 0, we can probably eliminate it if we know it's a real eventfd), 
somewhat fewer select()s.

What's the cpu load?

> But it looks like we're barking the wrong tree.  It's spending
> 99% time in ioctl.  Yes, there's quite large amount of reads
> and gettimes, but they're taking very small time, even combined.
>
> It is still spending much more time in the ioctl (apparently in
> kvm_run).

That time includes guest sleep time, not just cpu time, so it isn't an 
indicator.

> For comparison, here's the same strace stats without -usbdevice:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>   97.70    0.080237          22      3584           select
>    1.09    0.000895           0      6018      3584 read
>    0.33    0.000271           0      9670           clock_gettime
>    0.31    0.000254           0      6086           gettimeofday
>    0.26    0.000210           0      2432           rt_sigaction
>    0.17    0.000137           0      3653           timer_gettime
>    0.15    0.000122           0      2778           timer_settime
>    0.00    0.000000           0         1           ioctl
>    0.00    0.000000           0         1           rt_sigpending
>    0.00    0.000000           0         1         1 rt_sigtimedwait
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.082126                 34224      3585 total
>
> Yes, it is still doing lots of unnecessary stuff, but the load
> is <1%.
>
> (This is without qemu_notify_event() in host_alarm_handler())
>
>

I guess it only saw a single ioctl, so it didn't measure it.  Ping the 
guest and you'll see ioctl times surge back.

-- 
error compiling committee.c: too many arguments to function



* Re: high load with usb device
  2010-09-14 16:29                                     ` Michael Tokarev
  2010-09-14 16:37                                       ` Avi Kivity
@ 2010-09-14 16:45                                       ` David S. Ahern
  1 sibling, 0 replies; 36+ messages in thread
From: David S. Ahern @ 2010-09-14 16:45 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Avi Kivity, KVM list



On 09/14/10 10:29, Michael Tokarev wrote:

> For comparison, here's the same strace stats without -usbdevice:
> 
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  97.70    0.080237          22      3584           select
>   1.09    0.000895           0      6018      3584 read
>   0.33    0.000271           0      9670           clock_gettime
>   0.31    0.000254           0      6086           gettimeofday
>   0.26    0.000210           0      2432           rt_sigaction
>   0.17    0.000137           0      3653           timer_gettime
>   0.15    0.000122           0      2778           timer_settime
>   0.00    0.000000           0         1           ioctl
>   0.00    0.000000           0         1           rt_sigpending
>   0.00    0.000000           0         1         1 rt_sigtimedwait
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.082126                 34224      3585 total
> 
> Yes, it is still doing lots of unnecessary stuff, but the load
> is <1%.

Without a USB device attached the controller is turned off. See the call
to qemu_del_timer() in uhci_frame_timer(). As soon as you add the tablet
device the polling starts (see qemu_mod_timer in uhci_ioport_writew)
and the cpu load starts.

David


> 
> (This is without qemu_notify_event() in host_alarm_handler())
> 
> /mjt


* Re: high load with usb device
  2010-09-14 16:37                                       ` Avi Kivity
@ 2010-09-14 16:50                                         ` Michael Tokarev
  0 siblings, 0 replies; 36+ messages in thread
From: Michael Tokarev @ 2010-09-14 16:50 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

14.09.2010 20:37, Avi Kivity wrote:
>  On 09/14/2010 06:29 PM, Michael Tokarev wrote:
>> 14.09.2010 20:00, Avi Kivity wrote:
>> >>  As I mentioned in other emails in this thread:
>> >>
>> >>  o yes, I do have CONFIG_EVENTFD set, and it is being used
>> >>     too (fd#12 in the above strace).
>> >
>> >  I thought that was the signalfd.
>>
>> Uh. Yes, it was, i confused the two.  And yes, CONFIG_EVENTFD is
>> set and used too.
>>
>> $ grep FD= config-host.mak
>> CONFIG_EVENTFD=y
>> CONFIG_SIGNALFD=y
>>
>> >>  o 0.13.0-rc1 behaves the same way (that is, it also shows
>> >>     high load when idle -- the same 18% of host CPU), but it
>> >>     has no pipe on fd#5.
>> >
>> >  I think it's host_alarm_handler()'s use of qemu_notify_event().  It's
>> >  telling the main loop to rescan pending events, even though it's
>> called
>> >  from the main loop itself.  Please drop it and rerun.
>>
>> Without qemu_notify_event() in host_alarm_handler():
>>
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>   98.96   48.184832       13747      3505           ioctl
>>    0.39    0.191613          25      7745        28 futex
>>    0.37    0.181032           1    173192           select
>>    0.09    0.045379           0    980369           clock_gettime
>>    0.05    0.024362           0    351024    173220 read
>>    0.05    0.023247           0    487766           gettimeofday
>>    0.04    0.017996           0    319428           timer_gettime
>>    0.03    0.013837           0    198267           timer_settime
>>    0.02    0.010036           0    177790           rt_sigaction
>>    0.00    0.000000           0         1           writev
>>    0.00    0.000000           0         2           poll
>>    0.00    0.000000           0         1           rt_sigpending
>>    0.00    0.000000           0         1         1 rt_sigtimedwait
>> ------ ----------- ----------- --------- --------- ----------------
>> 100.00   48.692334               2699091    173249 total
>>
>> The picture is pretty similar to the one before :)

I mean, the picture of the host CPU load.  There's less extra stuff
going on, but the load is almost the same.

>> (And yes, I'm sure I've run the right binary).
> 
> No more writes, and read() is cut to twice select() (due to the need to
> see a 0, we can probably elimate it if we know it's a real eventfd),
> somewhat fewer select()s.
> 
> What's the cpu load?

According to top(1):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
12039 mjt       20   0 1078m 106m 4892 S   17  1.9   0:15.23 qemu-system-x86

So now it's 16..17%, not 18..19% as before.  Better, but far
from good :)

And I still don't understand why the load's almost zero on winXP.
Lemme try it out again with winXP...

>> It is still spending much more time in the ioctl (apparently in
>> kvm_run).
> 
> That time includes guest sleep time, not just cpu time, so it isn't an
> indicator.

Oh.  I see....

/mjt


end of thread, other threads:[~2010-09-14 16:50 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-09-03 12:21 high load with win7 & usb tablet Michael Tokarev
2010-09-10 11:47 ` Jes Sorensen
2010-09-10 12:16   ` Michael Tokarev
2010-09-10 16:17     ` high load with usb device Michael Tokarev
2010-09-12  9:26       ` Avi Kivity
2010-09-14  6:51         ` Michael Tokarev
2010-09-14  9:06           ` Avi Kivity
2010-09-14  9:08             ` Avi Kivity
2010-09-14 10:00               ` Michael Tokarev
2010-09-14 10:07                 ` Avi Kivity
2010-09-14 10:25                   ` Michael Tokarev
2010-09-14 10:39                     ` Avi Kivity
2010-09-14 11:02                       ` Michael Tokarev
2010-09-14 13:15                         ` Michael Tokarev
2010-09-14 13:25                           ` Avi Kivity
2010-09-14 13:29                             ` Michael Tokarev
2010-09-14 13:38                               ` Michael Tokarev
2010-09-14 14:45                               ` Avi Kivity
2010-09-14 14:53                                 ` Michael Tokarev
2010-09-14 16:00                                   ` Avi Kivity
2010-09-14 16:29                                     ` Michael Tokarev
2010-09-14 16:37                                       ` Avi Kivity
2010-09-14 16:50                                         ` Michael Tokarev
2010-09-14 16:45                                       ` David S. Ahern
2010-09-14 15:51                                 ` David S. Ahern
2010-09-14 16:00                                   ` Michael Tokarev
2010-09-14 16:09                                     ` David S. Ahern
2010-09-10 12:19 ` high load with win7 & usb tablet Avi Kivity
2010-09-10 12:44   ` Michael Tokarev
2010-09-10 12:50     ` Avi Kivity
     [not found]       ` <4C8A44E6.2060008@msgid.tls.msk.ru>
2010-09-10 14:56         ` Avi Kivity
2010-09-10 15:03           ` Michael Tokarev
2010-09-10 16:03             ` Avi Kivity
2010-09-14  8:26               ` Brad Campbell
2010-09-14  8:33                 ` Michael Tokarev
2010-09-14  8:46                   ` Brad Campbell
