* qcow2 performance: read-only IO on the guest generates high write IO on the host
@ 2021-08-11 11:36 Christopher Pereira
  2021-08-24 15:37 ` Kevin Wolf
  0 siblings, 1 reply; 3+ messages in thread
From: Christopher Pereira @ 2021-08-11 11:36 UTC (permalink / raw)
  To: qemu-devel


Hi,

I'm reading a directory with 5,000,000 files (2.4 GB) inside a guest 
using "find | grep -c".

On the host I saw high write IO (40 MB/s!) for over an hour using 
virt-top.

I later repeated the read-only operation inside the guest and no 
additional data was written on the host. The operation took only a few 
seconds.

I believe QEMU was creating some kind of cache or metadata map the first 
time I accessed the inodes.

But I wonder why the cache or metadata map wasn't available the first 
time and why QEMU had to recreate it?

The VM has "compressed base <- snap 1" and base was converted without 
prealloc.

Is it because we created the base using convert without metadata 
prealloc and so the metadata map got lost?

I will do some experiments soon using convert + metadata prealloc and 
will probably find out myself, but I will be happy to read your comments 
and gain some additional insights.
If the problem persists, I would try again without compression.
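For the experiments, a sketch of the conversion commands (file names are placeholders; the options shown exist for the qcow2 driver in current qemu-img, but an old build such as 2.3 may differ):

```shell
# Rebuild the base with preallocated qcow2 metadata: the L1/L2 tables
# are allocated up front instead of on first access. For the compressed
# variant, -c would be added to the convert command.
qemu-img convert -O qcow2 -o preallocation=metadata original.img base.qcow2

# Recreate the external snapshot on top of the new base.
qemu-img create -f qcow2 \
    -o backing_file=base.qcow2,backing_fmt=qcow2 snap1.qcow2
```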

Additional info:

  * Guest fs is xfs.
  * (I believe the snapshot didn't significantly increase in size, but I
    would need to double check)
  * This is a production host with old QEMU emulator version 2.3.0
    (qemu-kvm-ev-2.3.0-31.el7_2.10.1)



^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: qcow2 performance: read-only IO on the guest generates high write IO on the host
  2021-08-11 11:36 qcow2 performance: read-only IO on the guest generates high write IO on the host Christopher Pereira
@ 2021-08-24 15:37 ` Kevin Wolf
  2021-09-09 10:23   ` Christopher Pereira
  0 siblings, 1 reply; 3+ messages in thread
From: Kevin Wolf @ 2021-08-24 15:37 UTC (permalink / raw)
  To: Christopher Pereira; +Cc: qemu-devel, qemu-block

[ Cc: qemu-block ]

On 11.08.2021 at 13:36, Christopher Pereira wrote:
> Hi,
> 
> I'm reading a directory with 5,000,000 files (2.4 GB) inside a guest using
> "find | grep -c".
> 
> On the host I saw high write IO (40 MB/s!) for over an hour using
> virt-top.
> 
> I later repeated the read-only operation inside the guest and no additional
> data was written on the host. The operation took only a few seconds.
> 
> I believe QEMU was creating some kind of cache or metadata map the first
> time I accessed the inodes.

No, at least in theory, QEMU shouldn't allocate anything when you're
just reading.

Are you sure that this isn't activity coming from your guest OS?
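One way to answer that question from inside the guest (a sketch; "vda" is a placeholder device name) is to watch the write counters while the find runs:

```shell
# Per-device write statistics (sysstat package): for a truly read-only
# workload, the write columns (w/s, wMB/s) should stay near zero.
iostat -dxm vda 1

# Or read the raw counter: field 10 of /proc/diskstats is sectors
# written; it should not advance while the find is running.
awk '$3 == "vda" { print $10 }' /proc/diskstats
```

One guest-side source worth ruling out: scanning millions of inodes on a filesystem mounted without noatime can itself generate metadata writes through atime updates.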

> But I wonder why the cache or metadata map wasn't available the first time
> and why QEMU had to recreate it?
> 
> The VM has "compressed base <- snap 1" and base was converted without
> prealloc.
> 
> Is it because we created the base using convert without metadata prealloc
> and so the metadata map got lost?
> 
> I will do some experiments soon using convert + metadata prealloc and
> will probably find out myself, but I will be happy to read your comments and
> gain some additional insights.
> If the problem persists, I would try again without compression.

What were the results of your experiments? Is the behaviour related to
any of these options?

> Additional info:
> 
>  * Guest fs is xfs.
>  * (I believe the snapshot didn't significantly increase in size, but I
>    would need to double check)
>  * This is a production host with old QEMU emulator version 2.3.0
>    (qemu-kvm-ev-2.3.0-31.el7_2.10.1)

Discussing the most recent version is generally easier, but the expected
behaviour has always been the same, so it probably doesn't matter much
in this case.

Kevin




* Re: qcow2 performance: read-only IO on the guest generates high write IO on the host
  2021-08-24 15:37 ` Kevin Wolf
@ 2021-09-09 10:23   ` Christopher Pereira
  0 siblings, 0 replies; 3+ messages in thread
From: Christopher Pereira @ 2021-09-09 10:23 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-devel, qemu-block



On 24-08-2021 11:37, Kevin Wolf wrote:
> [ Cc: qemu-block ]
>
> On 11.08.2021 at 13:36, Christopher Pereira wrote:
>> Hi,
>>
>> I'm reading a directory with 5,000,000 files (2.4 GB) inside a guest using
>> "find | grep -c".
>>
>> On the host I saw high write IO (40 MB/s!) for over an hour using
>> virt-top.
>>
>> I later repeated the read-only operation inside the guest and no additional
>> data was written on the host. The operation took only a few seconds.
>>
>> I believe QEMU was creating some kind of cache or metadata map the first
>> time I accessed the inodes.
> No, at least in theory, QEMU shouldn't allocate anything when you're
> just reading.
Hmm...interesting.
> Are you sure that this isn't activity coming from your guest OS?

Yes. iotop was showing only read IOs on the guest, while on the host 
iotop and virt-top were showing heavy write IOs for hours.
Stopping the "find" command on the guest also stopped the write IOs on 
the host.
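On the host side, a sketch for checking where the writes actually land (image paths are placeholders): if the overlay grows during the find, QEMU is allocating clusters in the snapshot; if neither file grows, the writes go elsewhere.

```shell
# Watch the apparent and allocated sizes of both images while the guest
# workload runs (du -b reports apparent size, ls -s allocated blocks).
watch -n 5 'du -b base.qcow2 snap1.qcow2; ls -s base.qcow2 snap1.qcow2'
```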

>> But I wonder why the cache or metadata map wasn't available the first time
>> and why QEMU had to recreate it?
>>
>> The VM has "compressed base <- snap 1" and base was converted without
>> prealloc.
>>
>> Is it because we created the base using convert without metadata prealloc
>> and so the metadata map got lost?
>>
>> I will do some experiments soon using convert + metadata prealloc and
>> will probably find out myself, but I will be happy to read your comments and
>> gain some additional insights.
>> If the problem persists, I would try again without compression.
> What were the results of your experiments? Is the behaviour related to
> any of these options?

I will do the experiments and report back.

It's also strange that the second time I ran the "find" command, I saw 
no more write IOs and it took only seconds instead of hours.

I was assuming QEMU was creating some kind of map or cache on the 
snapshot for the content present in the base, but now I'm even more curious.




end of thread, other threads:[~2021-09-09 10:24 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-11 11:36 qcow2 performance: read-only IO on the guest generates high write IO on the host Christopher Pereira
2021-08-24 15:37 ` Kevin Wolf
2021-09-09 10:23   ` Christopher Pereira
