* [Bug 1875762] [NEW] Poor disk performance on sparse VMDKs
@ 2020-04-28 22:45 Alan Murtagh
2020-05-04 13:35 ` Stefan Hajnoczi
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Alan Murtagh @ 2020-04-28 22:45 UTC (permalink / raw)
To: qemu-devel
Public bug reported:
Found in QEMU 4.1, and reproduced on master.
QEMU appears to suffer from remarkably poor disk performance when
writing to sparse-extent VMDKs. Of course it's to be expected that
allocation takes time and sparse VMDKs perform worse than allocated
VMDKs, but surely not by the orders of magnitude I'm observing. On my
system, the fully allocated write speeds are approximately 1.5 GB/s,
while the fully sparse write speeds can be as low as 10 MB/s. I've
noticed that adding `cache=unsafe` reduces the issue dramatically,
bringing speeds up to around 750 MB/s. I don't know if this is still slow
or if this perhaps reveals a problem with the default caching method.
To reproduce the issue I've attached two 4 GiB VMDKs. Both are completely
empty and both are technically sparse-extent VMDKs, but one is 100%
pre-allocated and the other is 100% unallocated. If you attach these VMDKs
as second and third disks to an Ubuntu VM running on QEMU (with KVM) and
measure their write performance (using dd to write to /dev/sdb and
/dev/sdc, for example), the difference in write speeds is clear.
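A minimal benchmark along these lines, assuming the two attached VMDKs
show up inside the guest as /dev/sdb and /dev/sdc (device names will
vary with your configuration):

```shell
# Run inside the guest as root. oflag=direct bypasses the guest page
# cache so the numbers reflect the virtual disk, not guest RAM.
# WARNING: this overwrites the target devices -- scratch disks only.
dd if=/dev/zero of=/dev/sdb bs=1M count=1024 oflag=direct   # pre-allocated VMDK
dd if=/dev/zero of=/dev/sdc bs=1M count=1024 oflag=direct   # sparse VMDK
```

dd prints the achieved throughput on completion, which is the figure
being compared above.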
For what it's worth, the flags I'm using that relate to the VMDK are as
follows:
`-drive if=none,file=sparse.vmdk,id=hd0,format=vmdk -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd0`
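For reference, the cache experiment mentioned above amounts to adding
cache=unsafe to the -drive options. A sketch of a full invocation (the
qemu-system-x86_64 binary, memory size, and -enable-kvm flag are
assumptions, not from the report; note that cache=unsafe drops flush
requests entirely, so it is only appropriate for throwaway data):

```shell
qemu-system-x86_64 -enable-kvm -m 2G \
  -drive if=none,file=sparse.vmdk,id=hd0,format=vmdk,cache=unsafe \
  -device virtio-scsi-pci,id=scsi \
  -device scsi-hd,drive=hd0
```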
** Affects: qemu
Importance: Undecided
Status: New
** Attachment added: "Two different empty VMDKs with vastly different performance."
https://bugs.launchpad.net/bugs/1875762/+attachment/5363023/+files/vmdks.zip
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1875762
Title:
Poor disk performance on sparse VMDKs
Status in QEMU:
New
* Re: [Bug 1875762] [NEW] Poor disk performance on sparse VMDKs
@ 2020-05-04 13:35 ` Stefan Hajnoczi
0 siblings, 0 replies; 5+ messages in thread
From: Stefan Hajnoczi @ 2020-05-04 13:35 UTC (permalink / raw)
To: Bug 1875762; +Cc: qemu-devel, qemu-block
On Tue, Apr 28, 2020 at 10:45:07PM -0000, Alan Murtagh wrote:
> QEMU appears to suffer from remarkably poor disk performance when
> writing to sparse-extent VMDKs. Of course it's to be expected that
> allocation takes time and sparse VMDKs perform worse than allocated
> VMDKs, but surely not on the orders of magnitude I'm observing.
Hi Alan,
This is expected behavior. The VMDK block driver is not intended for
running VMs. It is primarily there for qemu-img convert support.
You can get good performance by converting the image file to qcow2 or
raw instead.
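A sketch of that conversion (qemu-img ships with QEMU; the filenames
here are placeholders matching the report's example image):

```shell
# Convert the sparse VMDK to QEMU's native qcow2 format; -p shows progress.
qemu-img convert -p -f vmdk -O qcow2 sparse.vmdk sparse.qcow2
# Then point -drive at the converted image:
#   -drive if=none,file=sparse.qcow2,id=hd0,format=qcow2
```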
The effort required to develop a high-performance image format driver
for non-trivial file formats like VMDK is quite high. Therefore only
qcow2 goes to the lengths required to deliver good performance
(request parallelism, metadata caching, optimizing metadata update
dependencies, etc).
The non-native image format drivers are simple and basically only work
well for sequential I/O with no parallel requests. That's all qemu-img
convert needs!
If someone volunteers to optimize VMDK then I'm sure the patches could
be merged. In the meantime I suggest using QEMU's native image formats:
qcow2 or raw.
Stefan
* [Bug 1875762] Re: Poor disk performance on sparse VMDKs
2020-04-28 22:45 [Bug 1875762] [NEW] Poor disk performance on sparse VMDKs Alan Murtagh
2020-05-04 13:35 ` Stefan Hajnoczi
@ 2020-05-05 1:11 ` Alan Murtagh
2021-05-06 14:30 ` Thomas Huth
2 siblings, 0 replies; 5+ messages in thread
From: Alan Murtagh @ 2020-05-05 1:11 UTC (permalink / raw)
To: qemu-devel
Thanks Stefan.
* [Bug 1875762] Re: Poor disk performance on sparse VMDKs
2020-04-28 22:45 [Bug 1875762] [NEW] Poor disk performance on sparse VMDKs Alan Murtagh
2020-05-04 13:35 ` Stefan Hajnoczi
2020-05-05 1:11 ` [Bug 1875762] " Alan Murtagh
@ 2021-05-06 14:30 ` Thomas Huth
2 siblings, 0 replies; 5+ messages in thread
From: Thomas Huth @ 2021-05-06 14:30 UTC (permalink / raw)
To: qemu-devel
Ok, I'm closing this now, since this is the expected behavior according
to Stefan's description.
** Changed in: qemu
Status: New => Won't Fix