From: Stefan Ring <stefanrin@gmail.com>
To: qemu-block@nongnu.org, qemu-devel@nongnu.org, integration@gluster.org
Subject: Re: Strange data corruption issue with gluster (libgfapi) and ZFS
Date: Mon, 24 Feb 2020 13:35:36 +0100
Message-ID: <CAAxjCEx79Fkjw9tFbSMo+b1LGv2LNivLRXf1GS9JsYnXrNVVkQ@mail.gmail.com>
In-Reply-To: <CAAxjCEzHQz4cG_8m7S6=CwCBoN5daQs+KVyuU5GL5Tq3Bky1NA@mail.gmail.com>

On Thu, Feb 20, 2020 at 10:19 AM Stefan Ring <stefanrin@gmail.com> wrote:
>
> Hi,
>
> I have a very curious problem on an oVirt-like virtualization host
> whose storage lives on gluster (as qcow2).
>
> The problem is that of the writes done by ZFS (whose sizes according
> to blktrace are a mixture of 8, 16, 24, ..., 256 blocks of 512 bytes),
> the first 4 KB, and sometimes more, end up zeroed out when read back
> later from storage. To clarify: ZFS is only used in the guest. In my
> current test scenario, I write approx. 3 GB to the guest machine,
> which takes roughly a minute. Actually it's around 35 GB, which lz4
> compresses down to 3 GB. Within that, I end up with close to 100 data
> errors when I read it back from storage afterwards (zpool scrub).
>
> There are quite a few machines running on this host, and we have not
> experienced other problems so far. So right now, only ZFS is able to
> trigger this for some reason. The guest has 8 virtual cores. I also
> tried writing directly to the affected device from user space in
> patterns mimicking what I see in blktrace, but so far I have been
> unable to trigger the same issue that way. Of the many ZFS knobs, I
> know at least one that makes a huge difference: when I set
> zfs_vdev_async_write_max_active to 1 (as opposed to 2 or 10), the
> error count goes through the roof (11,000). Curiously, when I switch
> off ZFS compression, the amount of data written increases almost
> 10-fold, while the absolute error count drops to close to, but not
> entirely, zero. I guess this supports my suspicion that the problem
> is somehow related to timing.
>
> Switching the guest storage driver between scsi and virtio does not
> make a difference.
>
> Switching the storage backend to file on glusterfs-fuse does make a
> difference, i.e. the problem disappears.
>
> Any hints? I'm still trying to investigate a few things, but what
> bugs me most is that only ZFS seems to trigger this behavior,
> although I am almost sure that ZFS is not really at fault here.
>
> Software versions used:
>
> Host
> kernel 3.10.0-957.12.1.el7.x86_64
> qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
> glusterfs-api-5.6-1.el7.x86_64
>
> Guest
> kernel 3.10.0-1062.12.1.el7.x86_64
> kmod-zfs-0.8.3-1.el7.x86_64 (from the official ZoL binaries)

I can actually reproduce this on my Fedora 31 home machine with 3 VMs,
all running CentOS 7.7: two for glusterd, one for ZFS. Briefly, I also
got rid of the 2 glusterd VMs altogether, i.e. I ran glusterd (the
Fedora version) directly on the host, and the corruption still
occurred. So my impression is that the server side of GlusterFS does
not matter much; I have seen it happen with 4.x, 6.x, 7.2 and 7.3.
Also, as it happens in the same way with a Fedora 31 qemu as well as a
CentOS 7 one, the qemu version is equally irrelevant.

The main conclusion so far is that it has to do with growing the qcow2
image. With a fully pre-populated image, I cannot trigger it.
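
For anyone who wants to try the pre-populated case themselves: I
assume a fully preallocated image created along the lines of

    qemu-img create -f qcow2 -o preallocation=full test.qcow2 40G

should behave the same way (the file name and size are just
placeholders); the point is simply that no new clusters have to be
allocated while the guest writes.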

I poked around a little in the glfs API integration, but trying to
make sense of two unfamiliar asynchronous I/O systems (QEMU's and
GlusterFS's) interacting with each other is a bit much for a single
weekend ;). The one thing I did verify so far is that there is only
one thread ever calling qemu_gluster_co_rw. As already stated in the
original post, the problem only occurs when multiple write requests
are in flight in parallel.
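
For reference, a trivial way to double-check the single-thread
observation on any build is to log the thread id at the top of that
function, roughly like this (just a debugging sketch; the
sector_num/nb_sectors parameter names are the ones from the 2.12-era
qemu_gluster_co_rw and may differ in other versions):

    /* near the other #includes at the top of block/gluster.c */
    #include <unistd.h>
    #include <sys/syscall.h>

    /* first statement inside qemu_gluster_co_rw() */
    fprintf(stderr, "qemu_gluster_co_rw: tid=%ld write=%d sector=%" PRId64
            " nb_sectors=%d\n",
            (long)syscall(SYS_gettid), write, sector_num, nb_sectors);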

What I plan to do next is look at the block ranges being written in
the hope of finding overlaps there.
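
To make that concrete, what I have in mind is a small helper that
takes "offset length" pairs (sectors or bytes, as long as the unit is
consistent, extracted from blktrace or from a log line like the one
above) and reports any two ranges that intersect, ignoring timing
entirely. A rough sketch:

    /* overlap-check.c: report pairs of write ranges that intersect.
     * Input: one "offset length" pair per line on stdin. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>

    struct range { uint64_t off, len; };

    static int cmp(const void *a, const void *b)
    {
        const struct range *x = a, *y = b;
        return (x->off > y->off) - (x->off < y->off);
    }

    int main(void)
    {
        static struct range r[1 << 20];
        size_t n = 0;

        while (n < sizeof(r) / sizeof(r[0]) &&
               scanf("%" SCNu64 " %" SCNu64, &r[n].off, &r[n].len) == 2) {
            n++;
        }
        qsort(r, n, sizeof(r[0]), cmp);

        /* sorted by offset, so a range can only overlap with the
         * ranges that immediately follow it */
        for (size_t i = 0; i + 1 < n; i++) {
            for (size_t j = i + 1;
                 j < n && r[j].off < r[i].off + r[i].len; j++) {
                printf("overlap: %" PRIu64 "+%" PRIu64
                       " vs %" PRIu64 "+%" PRIu64 "\n",
                       r[i].off, r[i].len, r[j].off, r[j].len);
            }
        }
        return 0;
    }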

