From: Eric Blake <eblake@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>, Ed Swierk <eswierk@skyportsystems.com>
Cc: qemu-block@nongnu.org, qemu-devel@nongnu.org,
	"Denis V. Lunev" <den@openvz.org>
Subject: Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
Date: Tue, 25 Oct 2016 08:51:14 -0500	[thread overview]
Message-ID: <28dabb20-ab80-95a3-f399-292afd52ecdf@redhat.com> (raw)
In-Reply-To: <20161025083924.GB4695@noname.str.redhat.com>


On 10/25/2016 03:39 AM, Kevin Wolf wrote:
>> It appears loop devices (with or without dm-crypt/LUKS) report a
>> 255-sector maximum per request via the BLKSECTGET ioctl, which qemu
>> rounds down to 64k in raw_refresh_limits().

Cool - I can make a loop device, so now I should have enough info to
reproduce locally.

I wonder if we should continue rounding down to a power of 2 (128
sectors), or if we should try to utilize the full 255-sector limit.  But
that's an independent patch and a bigger audit, more along the lines of
what we did for the weird scsi device with 15M limits.

>> However this maximum
>> appears to be just a hint: bdrv_driver_pwritev() succeeds even with a
>> 385024-byte buffer of zeroes.
> 
> I suppose what happens is that we get short writes, but the raw-posix
> driver actually has the loop to deal with this, so eventually we return
> with the whole thing written.
> 
> Considering the presence of this loop, maybe we shouldn't set
> bs->bl.max_transfer at all for raw-posix. Hm, except that for Linux AIO
> we might actually need it.

bdrv_aligned_preadv() fragments before calling into
bdrv_driver_preadv(), but our write-zeroes fallback code is calling
directly into bdrv_driver_pwritev() rather than going through
bdrv_aligned_pwritev(); and I don't think we want to change that.  In
other words, it sounds like bdrv_co_do_pwrite_zeroes() has to do the
same fragmentation that bdrv_aligned_preadv() would be doing.  Okay, I
think I know where to go with the patch, and hope to have something
posted later today.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org




Thread overview: 12+ messages
2016-10-21  0:24 [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k Ed Swierk
2016-10-21  1:38 ` Eric Blake
2016-10-21 13:14   ` Ed Swierk
2016-10-24 20:32     ` Eric Blake
2016-11-14  9:50       ` Kevin Wolf
2016-11-14 15:46         ` Eric Blake
2016-11-14 15:54           ` Kevin Wolf
2016-11-14 16:59             ` Eric Blake
2016-10-24 21:21 ` Eric Blake
2016-10-24 23:06   ` Ed Swierk
2016-10-25  8:39     ` Kevin Wolf
2016-10-25 13:51       ` Eric Blake [this message]
