* qemu-xen qdisk performance
@ 2012-02-10 15:22 Roger Pau Monné
  2012-02-10 15:53 ` Stefano Stabellini
  0 siblings, 1 reply; 5+ messages in thread
From: Roger Pau Monné @ 2012-02-10 15:22 UTC (permalink / raw)
  To: xen-devel

Hello,

I've recently set up a Linux Dom0 with a 3.0.17 kernel and Xen 4.1.2,
and since the 3.x series doesn't have blktap support I'm using qdisk
to attach raw images. I've been playing with small images, around
1GB, and everything seemed fine: speed was not fantastic, but it was
OK. Today I've set up a bigger machine with a 20GB raw HDD, and the
disk write throughput is really slow, below 0.5MB/s. I'm trying to
install a Debian PV guest there, and after more than 3 hours it is
still installing the base system.

I've looked at the xenstore backend entries, and everything looks fine:

/local/domain/0/backend/qdisk/21/51712/frontend =
"/local/domain/21/device/vbd/51712"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/params =
"aio:/hdd/vm/servlet/servlet.img"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/frontend-id = "21"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/online = "1"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/removable = "0"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/bootable = "1"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/state = "4"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/dev = "xvda"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/type = "tap"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/mode = "w"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/feature-barrier = "1"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/info = "0"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/sector-size = "512"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/sectors = "40960000"   (n0,r21)
/local/domain/0/backend/qdisk/21/51712/hotplug-status = "connected"   (n0,r21)

Also, the related qemu-dm process doesn't seem to be CPU-bound; in
fact it reports 0% CPU usage almost all the time. I've attached to
the qemu-dm process with strace, and it is doing lseeks and writes
like crazy. Is this normal? Is there any improvement when using
qemu-upstream?
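
For reference, the lseek/write pattern I'm seeing in strace is just
ordinary synchronous file I/O. A minimal Python sketch of what each
request looks like at the syscall level (hypothetical names, not
qemu-dm's actual code):

```python
import os
import tempfile

# Synchronous write pattern as seen in strace: each request is an
# lseek() to the target offset followed by a blocking write().
def sync_write(fd, offset, data):
    os.lseek(fd, offset, os.SEEK_SET)  # shows up as lseek(2) in strace
    return os.write(fd, data)          # shows up as write(2) in strace

fd, path = tempfile.mkstemp()
try:
    # Write one 512-byte sector at offset 512 (sector 1).
    n = sync_write(fd, 512, b"\x00" * 512)
    print(n)  # 512
finally:
    os.close(fd)
    os.unlink(path)
```

Each guest block request turning into one such blocking pair would
explain the syscall churn without any visible CPU load.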

Thanks, Roger.

* Re: qemu-xen qdisk performance
  2012-02-10 15:22 qemu-xen qdisk performance Roger Pau Monné
@ 2012-02-10 15:53 ` Stefano Stabellini
  2012-02-13 11:37   ` Roger Pau Monné
  0 siblings, 1 reply; 5+ messages in thread
From: Stefano Stabellini @ 2012-02-10 15:53 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel

On Fri, 10 Feb 2012, Roger Pau Monné wrote:
> Hello,
> 
> I've recently set up a Linux Dom0 with a 3.0.17 kernel and Xen 4.1.2,
> and since the 3.x series doesn't have blktap support I'm using qdisk
> to attach raw images. I've been playing with small images, around
> 1GB, and everything seemed fine: speed was not fantastic, but it was
> OK. Today I've set up a bigger machine with a 20GB raw HDD, and the
> disk write throughput is really slow, below 0.5MB/s. I'm trying to
> install a Debian PV guest there, and after more than 3 hours it is
> still installing the base system.
> 
> I've looked at the xenstore backend entries, and everything looks fine:
> 
> /local/domain/0/backend/qdisk/21/51712/frontend =
> "/local/domain/21/device/vbd/51712"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/params =
> "aio:/hdd/vm/servlet/servlet.img"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/frontend-id = "21"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/online = "1"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/removable = "0"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/bootable = "1"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/state = "4"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/dev = "xvda"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/type = "tap"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/mode = "w"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/feature-barrier = "1"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/info = "0"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/sector-size = "512"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/sectors = "40960000"   (n0,r21)
> /local/domain/0/backend/qdisk/21/51712/hotplug-status = "connected"   (n0,r21)
> 
> Also, the related qemu-dm process doesn't seem to be CPU-bound; in
> fact it reports 0% CPU usage almost all the time. I've attached to
> the qemu-dm process with strace, and it is doing lseeks and writes
> like crazy. Is this normal? Is there any improvement when using
> qemu-upstream?

Yes, great improvements.
The old qemu-xen uses threads to simulate async IO, so it is very
slow; upstream QEMU uses Linux AIO and is much faster.
I wouldn't expect it to hang completely, though; that might be a bug.
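
As a rough illustration (hypothetical names, not QEMU's actual code),
the old model amounts to pushing each blocking pwrite() onto a pool
of worker threads, which is a Python one-liner with a thread pool:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# "Async" IO simulated with threads, as in the old qemu-xen: the
# caller gets a future back immediately, while the blocking pwrite()
# runs on a worker thread -- that hand-off is where time is lost.
pool = ThreadPoolExecutor(max_workers=8)

def aio_write(fd, offset, data):
    return pool.submit(os.pwrite, fd, data, offset)

fd, path = tempfile.mkstemp()
try:
    # Queue four 512-byte sector writes, then wait for completion.
    futures = [aio_write(fd, i * 512, b"\xab" * 512) for i in range(4)]
    total = sum(f.result() for f in futures)
    print(total)  # 2048
finally:
    os.close(fd)
    os.unlink(path)
```

Upstream QEMU instead submits the request to the kernel with Linux
AIO (io_submit) and gets completions back directly, so no worker
thread sits between the device model and the disk.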

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

* Re: qemu-xen qdisk performance
  2012-02-10 15:53 ` Stefano Stabellini
@ 2012-02-13 11:37   ` Roger Pau Monné
  2012-02-13 11:59     ` Stefano Stabellini
  0 siblings, 1 reply; 5+ messages in thread
From: Roger Pau Monné @ 2012-02-13 11:37 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel

2012/2/10 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
> On Fri, 10 Feb 2012, Roger Pau Monné wrote:
>> Hello,
>>
>> I've recently set up a Linux Dom0 with a 3.0.17 kernel and Xen 4.1.2,
>> and since the 3.x series doesn't have blktap support I'm using qdisk
>> to attach raw images. I've been playing with small images, around
>> 1GB, and everything seemed fine: speed was not fantastic, but it was
>> OK. Today I've set up a bigger machine with a 20GB raw HDD, and the
>> disk write throughput is really slow, below 0.5MB/s. I'm trying to
>> install a Debian PV guest there, and after more than 3 hours it is
>> still installing the base system.
>>
>> I've looked at the xenstore backend entries, and everything looks fine:
>>
>> /local/domain/0/backend/qdisk/21/51712/frontend =
>> "/local/domain/21/device/vbd/51712"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/params =
>> "aio:/hdd/vm/servlet/servlet.img"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/frontend-id = "21"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/online = "1"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/removable = "0"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/bootable = "1"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/state = "4"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/dev = "xvda"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/type = "tap"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/mode = "w"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/feature-barrier = "1"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/info = "0"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/sector-size = "512"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/sectors = "40960000"   (n0,r21)
>> /local/domain/0/backend/qdisk/21/51712/hotplug-status = "connected"   (n0,r21)
>>
>> Also, the related qemu-dm process doesn't seem to be CPU-bound; in
>> fact it reports 0% CPU usage almost all the time. I've attached to
>> the qemu-dm process with strace, and it is doing lseeks and writes
>> like crazy. Is this normal? Is there any improvement when using
>> qemu-upstream?
>
> Yes, great improvements.
> The old qemu-xen uses threads to simulate async IO, so it is very
> slow; upstream QEMU uses Linux AIO and is much faster.

That's great news, so qdisk performance in qemu-upstream should be
similar to blktap?

> I wouldn't expect it to hang completely, though; that might be a bug.

No, it doesn't hang completely; it's just very slow.

* Re: qemu-xen qdisk performance
  2012-02-13 11:37   ` Roger Pau Monné
@ 2012-02-13 11:59     ` Stefano Stabellini
  2012-02-13 14:41       ` Roger Pau Monné
  0 siblings, 1 reply; 5+ messages in thread
From: Stefano Stabellini @ 2012-02-13 11:59 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Stefano Stabellini

On Mon, 13 Feb 2012, Roger Pau Monné wrote:
> > Yes, great improvements.
> > The old qemu-xen uses threads to simulate async IO, so it is very
> > slow; upstream QEMU uses Linux AIO and is much faster.
> 
> That's great news, so qdisk performance in qemu-upstream should be
> similar to blktap?

Slightly better actually, from my very quick and dirty tests.

* Re: qemu-xen qdisk performance
  2012-02-13 11:59     ` Stefano Stabellini
@ 2012-02-13 14:41       ` Roger Pau Monné
  0 siblings, 0 replies; 5+ messages in thread
From: Roger Pau Monné @ 2012-02-13 14:41 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel

2012/2/13 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
> On Mon, 13 Feb 2012, Roger Pau Monné wrote:
>> > Yes, great improvements.
>> > The old qemu-xen uses threads to simulate async IO, so it is very
>> > slow; upstream QEMU uses Linux AIO and is much faster.
>>
>> That's great news, so qdisk performance in qemu-upstream should be
>> similar to blktap?
>
> Slightly better actually, from my very quick and dirty tests.

I'm sure blktap involves some context switches (between kernel and
userspace) that qdisk, being a pure userspace implementation, doesn't
have, so it's plausible for qdisk to be faster.
