From: ein <ein.net@gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] Very poor IO performance which looks like some design problem.
Date: Sat, 11 Apr 2015 19:10:08 +0200	[thread overview]
Message-ID: <55295570.6010900@gmail.com> (raw)
In-Reply-To: <55291CF2.4000905@redhat.com>


On 04/11/2015 03:09 PM, Paolo Bonzini wrote:
> On 10/04/2015 22:38, ein wrote:
>> Qemu creates more than 70 threads and every one of them tries to write
>> to disk, which results in:
>> 1. High I/O time.
>> 2. Large latency.
>> 3. Poor sequential read/write speeds.
>>
>> When I limited the number of cores, I guess I limited the number of
>> threads as well. That's why I got better numbers.
>>
>> I've tried combining the native and thread AIO settings with the
>> deadline scheduler. Native AIO was much worse.
>>
>> The final question: is there any way to prevent QEMU from creating so
>> many threads when the VM does only one sequential R/W operation?
> Use "aio=native,cache=none".  If that's not enough, you'll need to use
> XFS or a block device; ext4 suffers from spinlock contention on O_DIRECT
> I/O.
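(For context, Paolo's suggestion maps onto the QEMU command line roughly as follows. This is only a sketch; the machine size, disk path, and device model are placeholders, not taken from the thread. Note that aio=native needs cache=none, since Linux native AIO requires O_DIRECT.)

```shell
# Sketch only: memory, SMP, and the disk path are illustrative placeholders.
# cache=none opens the image with O_DIRECT; aio=native then uses Linux
# native AIO instead of QEMU's thread pool for I/O submission.
qemu-system-x86_64 \
    -m 4096 -smp 4 \
    -drive file=/dev/vg0/vm-disk,if=virtio,format=raw,cache=none,aio=native
```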
Hello Paolo, and thank you for the reply.

Firstly, I use ext2 now, which gave me more MiB/s than XFS in the past.
I have tried the combination of XFS and a block device with NTFS (4 KB)
on it. I did the tests with aio=native,cache=none; the results in this
workload were significantly worse. I don't have the numbers on me right
now, but if somebody is interested, I'll redo the tests. From my
experience I can say that disabling every software cache gives a
significant boost in sequential R/W operations. I mean the QEMU cache,
Linux kernel dirty pages, and even caching inside the VM itself. It
somehow makes the data flow smoother and more stable, whereas using a
cache creates hiccups: at first there is enormous speed for a couple of
seconds, more than the hardware is capable of, then a flush and no data
flow at all (or very little) for a few to over a dozen seconds.
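(The burst-then-stall pattern described above is typical of unbounded dirty-page accumulation followed by bulk writeback. One way to soften it on the host is to cap how much dirty data the kernel may buffer before writeback starts. A hedged sketch; the values are illustrative, workload-dependent, and not recommendations from this thread:)

```shell
# Sketch: bound the kernel's dirty-page cache so writeback starts early
# and writers are throttled sooner, trading peak burst speed for a
# steadier sequential flow. Values are examples, not tuned settings.
sysctl -w vm.dirty_background_bytes=67108864   # begin async writeback at 64 MiB
sysctl -w vm.dirty_bytes=268435456             # throttle writers at 256 MiB dirty
```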






Thread overview: 8+ messages
2015-04-10 20:38 [Qemu-devel] Very poor IO performance which looks like some design problem ein
2015-04-11 13:09 ` Paolo Bonzini
2015-04-11 17:10   ` ein [this message]
2015-04-11 19:00     ` ein
2015-04-13  1:45 ` Fam Zheng
2015-04-13 12:28   ` ein
2015-04-13 13:53     ` Paolo Bonzini
2015-04-14 10:31     ` Kevin Wolf
