From: Stefan Hajnoczi <stefanha@gmail.com>
To: Bug 1875762 <1875762@bugs.launchpad.net>
Cc: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: Re: [Bug 1875762] [NEW] Poor disk performance on sparse VMDKs
Date: Mon, 4 May 2020 14:35:02 +0100 [thread overview]
Message-ID: <20200504133502.GG354891@stefanha-x1.localdomain> (raw)
In-Reply-To: <158811390770.10067.14727390581808721252.malonedeb@soybean.canonical.com>
On Tue, Apr 28, 2020 at 10:45:07PM -0000, Alan Murtagh wrote:
> QEMU appears to suffer from remarkably poor disk performance when
> writing to sparse-extent VMDKs. Of course it's to be expected that
> allocation takes time and sparse VMDKs perform worse than allocated
> VMDKs, but surely not on the orders of magnitude I'm observing.
Hi Alan,
This is expected behavior. The VMDK block driver is not intended for
running VMs. It is primarily there for qemu-img convert support.
You can get good performance by converting the image file to qcow2 or
raw instead.
The effort required to develop a high-performance image format driver
for non-trivial file formats like VMDK is quite high. Therefore only
qcow2 goes to the lengths required to deliver good performance
(request parallelism, metadata caching, optimizing metadata update
dependencies, etc).
The non-native image format drivers are simple and basically only work
well for sequential I/O with no parallel requests. That's all qemu-img
convert needs!
If someone volunteers to optimize VMDK then I'm sure the patches could
be merged. In the meantime I suggest using QEMU's native image formats:
qcow2 or raw.
Stefan
https://bugs.launchpad.net/bugs/1875762
Title:
Poor disk performance on sparse VMDKs
Status in QEMU:
New
Bug description:
Found in QEMU 4.1, and reproduced on master.
QEMU appears to suffer from remarkably poor disk performance when
writing to sparse-extent VMDKs. Of course it's to be expected that
allocation takes time and sparse VMDKs perform worse than allocated
VMDKs, but surely not on the orders of magnitude I'm observing. On my
system, the fully allocated write speeds are approximately 1.5GB/s,
while the fully sparse write speeds can be as low as 10MB/s. I've
noticed that adding "cache unsafe" reduces the issue dramatically,
bringing speeds up to around 750MB/s. I don't know if this is still
slow or if this perhaps reveals a problem with the default caching
method.
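The "cache unsafe" setting above presumably refers to QEMU's cache=unsafe mode, which is selected on the -drive option, e.g. (using the same file name as the reproduction flags below):

```
-drive if=none,file=sparse.vmdk,id=hd0,format=vmdk,cache=unsafe
```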
To reproduce the issue I've attached two 4GiB VMDKs. Both are
completely empty and both are technically sparse-extent VMDKs, but one
is 100% pre-allocated and the other is 100% unallocated. If you attach
these VMDKs as second and third disks to an Ubuntu VM running on QEMU
(with KVM) and measure their write performance (using dd to write to
/dev/sdb and /dev/sdc for example) the difference in write speeds is
clear.
For what it's worth, the flags I'm using that relate to the VMDK are
as follows:
`-drive if=none,file=sparse.vmdk,id=hd0,format=vmdk -device virtio-
scsi-pci,id=scsi -device scsi-hd,drive=hd0`
Thread overview: 5+ messages
2020-04-28 22:45 [Bug 1875762] [NEW] Poor disk performance on sparse VMDKs Alan Murtagh
2020-05-04 13:35 ` Stefan Hajnoczi [this message]
2020-05-04 13:35 ` Stefan Hajnoczi
2020-05-05 1:11 ` [Bug 1875762] " Alan Murtagh
2021-05-06 14:30 ` Thomas Huth