* I/O performance of VirtIO
@ 2009-10-12 20:49 René Pfeiffer
  2009-10-12 21:45 ` Michael Tokarev
  0 siblings, 1 reply; 9+ messages in thread
From: René Pfeiffer @ 2009-10-12 20:49 UTC (permalink / raw)
  To: kvm

Hello!

I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1 and
noticed that the I/O performance of an unattended stock Debian Lenny
install dropped. The test machines ran with kvm-88 and 2.6.30.x before.
The difference is very noticeable (the install time went from about 5
minutes up to 15-25 minutes). The two test machines have different CPUs
(one is an Intel Core2, the other an AMD Athlon 64 X2).

Is this an effect of the caching/data-integrity code added to the VirtIO
block layer, or of something else? The qemu-system-x86_64 process seems to
hang a lot more under heavy I/O (showing state 'D' in top/htop).

The command line is quite straightforward:
qemu-system-x86_64 -drive file=debian.qcow2,if=virtio,boot=on -cdrom \
/srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 -net nic \
-net user -usb

The installation was repeated multiple times; each time the test machines
had no other load. The effect is the same with a Windows XP guest running
without VirtIO.

Best,
René.

-- 
  )\._.,--....,'``.  fL  Let GNU/Linux work for you while you take a nap.
 /,   _.. \   _\  (`._ ,. R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems?  http://web.luchs.at/information/blockedmail.php


* Re: I/O performance of VirtIO
  2009-10-12 20:49 I/O performance of VirtIO René Pfeiffer
@ 2009-10-12 21:45 ` Michael Tokarev
  2009-10-12 21:54   ` René Pfeiffer
  2009-10-13  6:35   ` Jan Kiszka
  0 siblings, 2 replies; 9+ messages in thread
From: Michael Tokarev @ 2009-10-12 21:45 UTC (permalink / raw)
  To: René Pfeiffer; +Cc: kvm

René Pfeiffer wrote:
> Hello!
> 
> I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1 and
> noticed that the I/O performance of an unattended stock Debian Lenny
> install dropped. The test machines ran with kvm-88 and 2.6.30.x before.
> The difference is very noticeable (the install time went from about 5
> minutes up to 15-25 minutes). The two test machines have different CPUs
> (one is an Intel Core2, the other an AMD Athlon 64 X2).
>
> Is this an effect of the caching/data-integrity code added to the VirtIO
> block layer, or of something else? The qemu-system-x86_64 process seems to
> hang a lot more under heavy I/O (showing state 'D' in top/htop).
>
> The command line is quite straightforward:
> qemu-system-x86_64 -drive file=debian.qcow2,if=virtio,boot=on -cdrom \
> /srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 -net nic \
> -net user -usb
   ^^^^^^^^^

Care to try with something more realistic than user-level networking?
You're using a netinstall, which apparently downloads components over
the network, and user-level networking is known to be very, very
slow...

Also try the same with raw images.
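
Untested from here, but something along these lines; tap0, /etc/qemu-ifup
and the raw image name are only placeholders for whatever your setup uses:

qemu-img convert -O raw debian.qcow2 debian.raw
qemu-system-x86_64 -drive file=debian.raw,if=virtio,boot=on -cdrom \
/srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 \
-net nic,model=virtio -net tap,ifname=tap0,script=/etc/qemu-ifup -usb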

I, for one, don't see any noticeable speed difference with tap
networking (virtio, e1000 or rtl8139) and with raw disks (either
virtio or IDE), on either Linux or Windows guests (Windows without
virtio so far). But granted, I didn't try user-level networking,
and I don't use qcow very often (though a colleague of mine who uses
qcow didn't complain about speed either).

/mjt

> The installation was repeated multiple times; each time the test machines
> had no other load. The effect is the same with a Windows XP guest running
> without VirtIO.


* Re: I/O performance of VirtIO
  2009-10-12 21:45 ` Michael Tokarev
@ 2009-10-12 21:54   ` René Pfeiffer
  2009-10-13  6:35   ` Jan Kiszka
  1 sibling, 0 replies; 9+ messages in thread
From: René Pfeiffer @ 2009-10-12 21:54 UTC (permalink / raw)
  To: kvm

On Oct 13, 2009 at 0145 +0400, Michael Tokarev appeared and said:
> René Pfeiffer wrote:
> >Hello!
> >
> >I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1 and
> >noticed that the I/O performance of an unattended stock Debian Lenny
> >install dropped. The test machines ran with kvm-88 and 2.6.30.x before.
> >The difference is very noticeable (the install time went from about 5
> >minutes up to 15-25 minutes). The two test machines have different CPUs
> >(one is an Intel Core2, the other an AMD Athlon 64 X2).
> >
> >Is this an effect of the caching/data-integrity code added to the VirtIO
> >block layer, or of something else? The qemu-system-x86_64 process seems to
> >hang a lot more under heavy I/O (showing state 'D' in top/htop).
> >
> >The command line is quite straightforward:
> >qemu-system-x86_64 -drive file=debian.qcow2,if=virtio,boot=on -cdrom \
> >/srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 -net nic \
> >-net user -usb
>   ^^^^^^^^^
> 
> Care to try with something more realistic than user-level networking?

Yes, I tried that on the other machine. It didn't make much difference
(the installation files are on local Squid proxies).

But I found that setting cache=writeback restores the old behaviour. I
think the default changed to cache=writethrough.
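
In other words, the install is fast again with something like this on the
-drive option:

qemu-system-x86_64 -drive file=debian.qcow2,if=virtio,boot=on,cache=writeback \
-cdrom /srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 \
-net nic -net user -usb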

> You're using a netinstall, which apparently downloads components over
> the network, and user-level networking is known to be very, very
> slow...

Right, I just verified the disk I/O performance with several runs of
hdparm and fresh installations using the cache=none, cache=writeback and
cache=writethrough settings. The network settings were the same (the test
machine with the software bridge setup is down at the moment); I wanted
to compare the I/O load behaviour.
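
(Concretely: a few runs of something like "hdparm -t /dev/vda" inside the
guest for each cache setting.)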

Best,
René.

-- 
  )\._.,--....,'``.  fL  Let GNU/Linux work for you while you take a nap.
 /,   _.. \   _\  (`._ ,. R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems?  http://web.luchs.at/information/blockedmail.php


* Re: I/O performance of VirtIO
  2009-10-12 21:45 ` Michael Tokarev
  2009-10-12 21:54   ` René Pfeiffer
@ 2009-10-13  6:35   ` Jan Kiszka
  2009-10-22 16:29     ` Avi Kivity
  1 sibling, 1 reply; 9+ messages in thread
From: Jan Kiszka @ 2009-10-13  6:35 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: René Pfeiffer, kvm

Michael Tokarev wrote:
> René Pfeiffer wrote:
>> Hello!
>>
>> I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1 and
>> noticed that the I/O performance of an unattended stock Debian Lenny
>> install dropped. The test machines ran with kvm-88 and 2.6.30.x before.
>> The difference is very noticeable (the install time went from about 5
>> minutes up to 15-25 minutes). The two test machines have different CPUs
>> (one is an Intel Core2, the other an AMD Athlon 64 X2).
>>
>> Is this an effect of the caching/data-integrity code added to the VirtIO
>> block layer, or of something else? The qemu-system-x86_64 process seems to
>> hang a lot more under heavy I/O (showing state 'D' in top/htop).
>>
>> The command line is quite straightforward:
>> qemu-system-x86_64 -drive file=debian.qcow2,if=virtio,boot=on -cdrom \
>> /srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 -net nic \
>> -net user -usb
>   ^^^^^^^^^
> 
> Care to try with something more realistic than user-level networking?
> You're using a netinstall, which apparently downloads components over
> the network, and user-level networking is known to be very, very
> slow...

It can be particularly slow if you use the in-kernel irqchip and the
default NIC emulation (up to 10 times slower), an effect I've always
wanted to understand on a rainy day. So, if you actually want -net
user, try -no-kvm-irqchip.
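
I.e. with your command line from above, appending the switch should be
enough:

qemu-system-x86_64 -drive file=debian.qcow2,if=virtio,boot=on -cdrom \
/srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 -net nic \
-net user -usb -no-kvm-irqchip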

Jan


* Re: I/O performance of VirtIO
  2009-10-13  6:35   ` Jan Kiszka
@ 2009-10-22 16:29     ` Avi Kivity
  2009-10-22 22:06       ` Alexander Graf
  0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2009-10-22 16:29 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Michael Tokarev, René Pfeiffer, kvm

On 10/13/2009 08:35 AM, Jan Kiszka wrote:
> It can be particularly slow if you use the in-kernel irqchip and the
> default NIC emulation (up to 10 times slower), an effect I've always
> wanted to understand on a rainy day. So, if you actually want -net
> user, try -no-kvm-irqchip.
>    

This might be due to a missing SIGIO or SIGALRM; -no-kvm-irqchip 
generates a lot of extra signals and thus polling opportunities.

I see 300 kB/s from slirp; I always assumed it was due to missing TCP
features.

-- 
error compiling committee.c: too many arguments to function


* Re: I/O performance of VirtIO
  2009-10-22 16:29     ` Avi Kivity
@ 2009-10-22 22:06       ` Alexander Graf
  2009-10-25  5:44         ` Avi Kivity
  0 siblings, 1 reply; 9+ messages in thread
From: Alexander Graf @ 2009-10-22 22:06 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Jan Kiszka, Michael Tokarev, René Pfeiffer, kvm


Am 22.10.2009 um 18:29 schrieb Avi Kivity <avi@redhat.com>:

> On 10/13/2009 08:35 AM, Jan Kiszka wrote:
>> It can be particularly slow if you use the in-kernel irqchip and the
>> default NIC emulation (up to 10 times slower), an effect I've always
>> wanted to understand on a rainy day. So, if you actually want -net
>> user, try -no-kvm-irqchip.
>>
>
> This might be due to a missing SIGIO or SIGALRM; -no-kvm-irqchip  
> generates a lot of extra signals and thus polling opportunities.

Isn't that what dedicated io threads are supposed to solve?

Alex

* Re: I/O performance of VirtIO
  2009-10-22 22:06       ` Alexander Graf
@ 2009-10-25  5:44         ` Avi Kivity
  2009-10-26  8:12           ` Jan Kiszka
  0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2009-10-25  5:44 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Jan Kiszka, Michael Tokarev, René Pfeiffer, kvm

On 10/23/2009 12:06 AM, Alexander Graf wrote:
>
> Am 22.10.2009 um 18:29 schrieb Avi Kivity <avi@redhat.com>:
>
>> On 10/13/2009 08:35 AM, Jan Kiszka wrote:
>>> It can be particularly slow if you use the in-kernel irqchip and the
>>> default NIC emulation (up to 10 times slower), an effect I've always
>>> wanted to understand on a rainy day. So, if you actually want -net
>>> user, try -no-kvm-irqchip.
>>>
>>
>> This might be due to a missing SIGIO or SIGALRM; -no-kvm-irqchip 
>> generates a lot of extra signals and thus polling opportunities.
>
> Isn't that what dedicated io threads are supposed to solve?
>

No.  Dedicated I/O threads provide parallelism.  All latency needs is to 
have SIGIO sent on all file descriptors (or rather, in qemu-kvm with 
irqchip, to have all file descriptors in the poll() call).
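
Just to illustrate what "having SIGIO sent on a file descriptor" means in
general (a plain POSIX sketch, not the actual qemu-kvm code):

#include <fcntl.h>
#include <unistd.h>

/* Ask the kernel to raise SIGIO for this process whenever the descriptor
 * becomes ready, so a blocked main loop gets interrupted. */
static int enable_sigio(int fd)
{
    if (fcntl(fd, F_SETOWN, getpid()) < 0)      /* deliver the signal to us */
        return -1;
    int flags = fcntl(fd, F_GETFL);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_ASYNC); /* turn on SIGIO generation */
}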

Jan, does slirp add new connections to the select set?

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.


* Re: I/O performance of VirtIO
  2009-10-25  5:44         ` Avi Kivity
@ 2009-10-26  8:12           ` Jan Kiszka
  2009-10-26  8:35             ` Avi Kivity
  0 siblings, 1 reply; 9+ messages in thread
From: Jan Kiszka @ 2009-10-26  8:12 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Alexander Graf, Michael Tokarev, René Pfeiffer, kvm

Avi Kivity wrote:
> On 10/23/2009 12:06 AM, Alexander Graf wrote:
>>
>> Am 22.10.2009 um 18:29 schrieb Avi Kivity <avi@redhat.com>:
>>
>>> On 10/13/2009 08:35 AM, Jan Kiszka wrote:
>>>> It can be particularly slow if you use the in-kernel irqchip and the
>>>> default NIC emulation (up to 10 times slower), an effect I've always
>>>> wanted to understand on a rainy day. So, if you actually want -net
>>>> user, try -no-kvm-irqchip.
>>>>
>>>
>>> This might be due to a missing SIGIO or SIGALRM; -no-kvm-irqchip
>>> generates a lot of extra signals and thus polling opportunities.
>>
>> Isn't that what dedicated io threads are supposed to solve?
>>
> 
> No.  Dedicated I/O threads provide parallelism.  All latency needs is to
> have SIGIO sent on all file descriptors (or rather, in qemu-kvm with
> irqchip, to have all file descriptors in the poll() call).
> 
> Jan, does slirp add new connections to the select set?
> 

It should do so in slirp_select_fill (it iterates over all TCP&UDP
sockets of all instances). I think without doing this, slirp wouldn't
receive a single bit at all (no activity without FD_ISSET).
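
Roughly this shape (just the pattern, not the real slirp code; the struct
and field names here are made up):

#include <sys/select.h>

struct sock { int fd; struct sock *next; };   /* stand-in for slirp's socket list */

/* Put every live socket into the read set before select(); only descriptors
 * that come back marked via FD_ISSET() get serviced afterwards. */
static int fill_read_set(struct sock *sockets, fd_set *readfds)
{
    int nfds = 0;
    FD_ZERO(readfds);
    for (struct sock *so = sockets; so; so = so->next) {
        FD_SET(so->fd, readfds);
        if (so->fd >= nfds)
            nfds = so->fd + 1;
    }
    return nfds;   /* max fd + 1, the first argument to select() */
}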

Jan


* Re: I/O performance of VirtIO
  2009-10-26  8:12           ` Jan Kiszka
@ 2009-10-26  8:35             ` Avi Kivity
  0 siblings, 0 replies; 9+ messages in thread
From: Avi Kivity @ 2009-10-26  8:35 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Alexander Graf, Michael Tokarev, René Pfeiffer, kvm

On 10/26/2009 10:12 AM, Jan Kiszka wrote:
>> No.  Dedicated I/O threads provide parallelism.  All latency needs is to
>> have SIGIO sent on all file descriptors (or rather, in qemu-kvm with
>> irqchip, to have all file descriptors in the poll() call).
>>
>> Jan, does slirp add new connections to the select set?
>>
>>      
> It should do so in slirp_select_fill (it iterates over all TCP&UDP
> sockets of all instances). I think without doing this, slirp wouldn't
> receive a single bit at all (no activity without FD_ISSET).
>    

Yes, so it seems from the code.  But something is missing if you get 
better performance with -no-kvm-irqchip.  Perhaps timers are off.

-- 
error compiling committee.c: too many arguments to function

