* [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Ed Swierk
Date: 2016-10-21 0:24 UTC (permalink / raw)
To: qemu-block, qemu-devel; +Cc: Kevin Wolf, Denis V. Lunev

Shortly after I start qemu 2.7.0 with a qcow2 disk image created with
-o cluster_size=1048576, it prints the following and dies:

  block/qcow2.c:2451: qcow2_co_pwrite_zeroes: Assertion `head + count <=
  s->cluster_size' failed.

I narrowed the problem to bdrv_co_do_pwrite_zeroes(), called by
bdrv_aligned_pwritev() with flags & BDRV_REQ_ZERO_WRITE set.

On the first loop iteration, offset=8003584, count=2093056, head=663552,
tail=659456 and num=2093056. qcow2_co_pwrite_zeroes() is called with
offset=8003584 and count=385024 and finds that the head portion is not
already zero, so it returns -ENOTSUP. bdrv_co_do_pwrite_zeroes() falls
back to a normal write, with max_transfer=65536.

The next iteration starts with offset incremented by 65536, count and num
decremented by 65536, and head=0, violating the assumption that the
entire 385024 bytes of the head remainder had been zeroed the first time
around. Then it calls qcow2_co_pwrite_zeroes() with an unaligned
offset=8069120 and a count=1368064 greater than the cluster size,
triggering the assertion failure.

Changing max_transfer in the normal write case to
MIN_NON_ZERO(alignment, MAX_WRITE_ZEROES_BOUNCE_BUFFER) appears to fix
the problem, but I don't pretend to understand all the subtleties here.

--Ed

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Eric Blake
Date: 2016-10-21 1:38 UTC (permalink / raw)
To: Ed Swierk, qemu-block, qemu-devel; +Cc: Kevin Wolf, Denis V. Lunev

On 10/20/2016 07:24 PM, Ed Swierk wrote:
> Shortly after I start qemu 2.7.0 with a qcow2 disk image created with
> -o cluster_size=1048576, it prints the following and dies:
>
> block/qcow2.c:2451: qcow2_co_pwrite_zeroes: Assertion `head + count <=
> s->cluster_size' failed.
>
> I narrowed the problem to bdrv_co_do_pwrite_zeroes(), called by
> bdrv_aligned_pwritev() with flags & BDRV_REQ_ZERO_WRITE set.
>
> On the first loop iteration, offset=8003584, count=2093056,
> head=663552, tail=659456 and num=2093056. qcow2_co_pwrite_zeroes() is
> called with offset=8003584 and count=385024 and finds that the head
> portion is not already zero, so it returns -ENOTSUP.
> bdrv_co_do_pwrite_zeroes() falls back to a normal write, with
> max_transfer=65536.

Ah. When the cluster is larger than 64k, we HAVE to handle the entire
cluster in one operation, or else the head occupies more than one
'sector' while the assert is that at most 'sector' is unaligned.

> The next iteration starts with offset incremented by 65536, count and
> num decremented by 65536, and head=0, violating the assumption that
> the entire 385024 bytes of the head remainder had been zeroed the
> first time around. Then it calls qcow2_co_pwrite_zeroes() with an
> unaligned offset=8069120 and a count=1368064 greater than the cluster
> size, triggering the assertion failure.
>
> Changing max_transfer in the normal write case to
> MIN_NON_ZERO(alignment, MAX_WRITE_ZEROES_BOUNCE_BUFFER) appears to fix
> the problem, but I don't pretend to understand all the subtleties
> here.

That actually sounds like the right fix. But since the bug was probably
caused by my code, I'll formalize it into a patch and see if I can
modify the testsuite to give it coverage.

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Ed Swierk
Date: 2016-10-21 13:14 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-block, qemu-devel, Kevin Wolf, Denis V. Lunev

On Thu, Oct 20, 2016 at 6:38 PM, Eric Blake <eblake@redhat.com> wrote:
> On 10/20/2016 07:24 PM, Ed Swierk wrote:
>> Changing max_transfer in the normal write case to
>> MIN_NON_ZERO(alignment, MAX_WRITE_ZEROES_BOUNCE_BUFFER) appears to fix
>> the problem, but I don't pretend to understand all the subtleties
>> here.
>
> That actually sounds like the right fix. But since the bug was probably
> caused by my code, I'll formalize it into a patch and see if I can
> modify the testsuite to give it coverage.

If alignment > MAX_WRITE_ZEROES_BOUNCE_BUFFER (however unlikely) we
have the same problem, so maybe this would be better?

  max_transfer = alignment > 0 ? alignment : MAX_WRITE_ZEROES_BOUNCE_BUFFER

--Ed
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Eric Blake
Date: 2016-10-24 20:32 UTC (permalink / raw)
To: Ed Swierk; +Cc: qemu-block, qemu-devel, Kevin Wolf, Denis V. Lunev

On 10/21/2016 08:14 AM, Ed Swierk wrote:
> On Thu, Oct 20, 2016 at 6:38 PM, Eric Blake <eblake@redhat.com> wrote:
>> On 10/20/2016 07:24 PM, Ed Swierk wrote:
>>> Changing max_transfer in the normal write case to
>>> MIN_NON_ZERO(alignment, MAX_WRITE_ZEROES_BOUNCE_BUFFER) appears to fix
>>> the problem, but I don't pretend to understand all the subtleties
>>> here.
>>
>> That actually sounds like the right fix. But since the bug was probably
>> caused by my code, I'll formalize it into a patch and see if I can
>> modify the testsuite to give it coverage.
>
> If alignment > MAX_WRITE_ZEROES_BOUNCE_BUFFER (however unlikely) we
> have the same problem, so maybe this would be better?
>
> max_transfer = alignment > 0 ? alignment : MAX_WRITE_ZEROES_BOUNCE_BUFFER

Our qcow2 support is currently limited to a maximum of 2M clusters,
while MAX_WRITE_ZEROES_BOUNCE_BUFFER is 32k * 512, or 16M. The
maximum-size bounce buffer should not be the problem here; but for some
reason, it looks like alignment is larger than max_transfer, which
should not normally be possible. I'm still playing with what should be
the right patch, but hope to have something posted soon.

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Kevin Wolf
Date: 2016-11-14 9:50 UTC (permalink / raw)
To: Eric Blake; +Cc: Ed Swierk, qemu-block, qemu-devel, Denis V. Lunev

On 24.10.2016 at 22:32, Eric Blake wrote:
> On 10/21/2016 08:14 AM, Ed Swierk wrote:
>> On Thu, Oct 20, 2016 at 6:38 PM, Eric Blake <eblake@redhat.com> wrote:
>>> On 10/20/2016 07:24 PM, Ed Swierk wrote:
>>>> Changing max_transfer in the normal write case to
>>>> MIN_NON_ZERO(alignment, MAX_WRITE_ZEROES_BOUNCE_BUFFER) appears to fix
>>>> the problem, but I don't pretend to understand all the subtleties
>>>> here.
>>>
>>> That actually sounds like the right fix. But since the bug was probably
>>> caused by my code, I'll formalize it into a patch and see if I can
>>> modify the testsuite to give it coverage.
>>
>> If alignment > MAX_WRITE_ZEROES_BOUNCE_BUFFER (however unlikely) we
>> have the same problem, so maybe this would be better?
>
> Our qcow2 support is currently limited to a maximum of 2M clusters;
> while MAX_WRITE_ZEROES_BOUNCE_BUFFER is 32k * 512, or 16M. The
> maximum-size bounce buffer should not be the problem here; but for some
> reason, it looks like alignment is larger than max_transfer which should
> not normally be possible. I'm still playing with what should be the
> right patch, but hope to have something posted soon.

Are you still playing with it?

Kevin
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Eric Blake
Date: 2016-11-14 15:46 UTC (permalink / raw)
To: Kevin Wolf; +Cc: Ed Swierk, qemu-block, qemu-devel, Denis V. Lunev

On 11/14/2016 03:50 AM, Kevin Wolf wrote:
> On 24.10.2016 at 22:32, Eric Blake wrote:
>> On 10/21/2016 08:14 AM, Ed Swierk wrote:
>>> On Thu, Oct 20, 2016 at 6:38 PM, Eric Blake <eblake@redhat.com> wrote:
>>>> On 10/20/2016 07:24 PM, Ed Swierk wrote:
>>>>> Changing max_transfer in the normal write case to
>>>>> MIN_NON_ZERO(alignment, MAX_WRITE_ZEROES_BOUNCE_BUFFER) appears to fix
>>>>> the problem, but I don't pretend to understand all the subtleties
>>>>> here.
>>>>
>>>> That actually sounds like the right fix. But since the bug was probably
>>>> caused by my code, I'll formalize it into a patch and see if I can
>>>> modify the testsuite to give it coverage.
>>>
>>> If alignment > MAX_WRITE_ZEROES_BOUNCE_BUFFER (however unlikely) we
>>> have the same problem, so maybe this would be better?
>>
>> Our qcow2 support is currently limited to a maximum of 2M clusters;
>> while MAX_WRITE_ZEROES_BOUNCE_BUFFER is 32k * 512, or 16M. The
>> maximum-size bounce buffer should not be the problem here; but for some
>> reason, it looks like alignment is larger than max_transfer which should
>> not normally be possible. I'm still playing with what should be the
>> right patch, but hope to have something posted soon.
>
> Are you still playing with it?

Patch was posted here:

https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg01603.html

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Kevin Wolf
Date: 2016-11-14 15:54 UTC (permalink / raw)
To: Eric Blake; +Cc: Ed Swierk, qemu-block, qemu-devel, Denis V. Lunev

On 14.11.2016 at 16:46, Eric Blake wrote:
> On 11/14/2016 03:50 AM, Kevin Wolf wrote:
>> On 24.10.2016 at 22:32, Eric Blake wrote:
>>> On 10/21/2016 08:14 AM, Ed Swierk wrote:
>>>> On Thu, Oct 20, 2016 at 6:38 PM, Eric Blake <eblake@redhat.com> wrote:
>>>>> On 10/20/2016 07:24 PM, Ed Swierk wrote:
>>>>>> Changing max_transfer in the normal write case to
>>>>>> MIN_NON_ZERO(alignment, MAX_WRITE_ZEROES_BOUNCE_BUFFER) appears to fix
>>>>>> the problem, but I don't pretend to understand all the subtleties
>>>>>> here.
>>>>>
>>>>> That actually sounds like the right fix. But since the bug was probably
>>>>> caused by my code, I'll formalize it into a patch and see if I can
>>>>> modify the testsuite to give it coverage.
>>>>
>>>> If alignment > MAX_WRITE_ZEROES_BOUNCE_BUFFER (however unlikely) we
>>>> have the same problem, so maybe this would be better?
>>>
>>> Our qcow2 support is currently limited to a maximum of 2M clusters;
>>> while MAX_WRITE_ZEROES_BOUNCE_BUFFER is 32k * 512, or 16M. The
>>> maximum-size bounce buffer should not be the problem here; but for some
>>> reason, it looks like alignment is larger than max_transfer which should
>>> not normally be possible. I'm still playing with what should be the
>>> right patch, but hope to have something posted soon.
>>
>> Are you still playing with it?
>
> Patch was posted here:
>
> https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg01603.html

Ah, yes, sorry. I didn't make the connection today because this one had
qcow2 in the subject line and the patch doesn't.

Are you going to add a qemu-iotests case then, or do you want us to
merge the patch as it is?

Kevin
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Eric Blake
Date: 2016-11-14 16:59 UTC (permalink / raw)
To: Kevin Wolf; +Cc: Ed Swierk, qemu-block, qemu-devel, Denis V. Lunev

On 11/14/2016 09:54 AM, Kevin Wolf wrote:
>>>> Our qcow2 support is currently limited to a maximum of 2M clusters;
>>>> while MAX_WRITE_ZEROES_BOUNCE_BUFFER is 32k * 512, or 16M. The
>>>> maximum-size bounce buffer should not be the problem here; but for some
>>>> reason, it looks like alignment is larger than max_transfer which should
>>>> not normally be possible. I'm still playing with what should be the
>>>> right patch, but hope to have something posted soon.
>>>
>>> Are you still playing with it?
>>
>> Patch was posted here:
>>
>> https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg01603.html
>
> Ah, yes, sorry. I didn't make the connection today because this one had
> qcow2 in the subject line and the patch doesn't.
>
> Are you going to add a qemu-iotests case then, or do you want us to
> merge the patch as it is?

I think the patch is ready to merge as-is; I'm still working on a
qemu-iotests addition, but the addition requires enhancing blkdebug,
which is a bit extreme to be doing after soft freeze (so the tests are
probably 2.9 material, while the bug fix is still for 2.8).

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Eric Blake
Date: 2016-10-24 21:21 UTC (permalink / raw)
To: Ed Swierk, qemu-block, qemu-devel; +Cc: Kevin Wolf, Denis V. Lunev

On 10/20/2016 07:24 PM, Ed Swierk wrote:
> Shortly after I start qemu 2.7.0 with a qcow2 disk image created with
> -o cluster_size=1048576, it prints the following and dies:
>
> block/qcow2.c:2451: qcow2_co_pwrite_zeroes: Assertion `head + count <=
> s->cluster_size' failed.
>
> I narrowed the problem to bdrv_co_do_pwrite_zeroes(), called by
> bdrv_aligned_pwritev() with flags & BDRV_REQ_ZERO_WRITE set.
>
> On the first loop iteration, offset=8003584, count=2093056,
> head=663552, tail=659456 and num=2093056. qcow2_co_pwrite_zeroes() is
> called with offset=8003584 and count=385024 and finds that the head
> portion is not already zero, so it returns -ENOTSUP.
> bdrv_co_do_pwrite_zeroes() falls back to a normal write, with
> max_transfer=65536.

How are you getting max_transfer == 65536? I can't reproduce it with
the following setup:

$ qemu-img create -f qcow2 -o cluster_size=1M file 10M
$ qemu-io -f qcow2 -c 'w 7m 1k' file
$ qemu-io -f qcow2 -c 'w -z 8003584 2093056' file

although I did confirm that the above sequence was enough to get the
-ENOTSUP failure and fall into the code calculating max_transfer.

I'm guessing that you are using something other than a file system as
the backing protocol for your qcow2 image. But do you really have a
protocol that takes AT MOST 64k per transaction, while still trying to
use a cluster size of 1M in the qcow2 format? That's rather awkward, as
it means that you are required to do 16 transactions per cluster (the
whole point of using larger clusters is usually to get fewer
transactions). I think we need to get to a root cause of why you are
seeing such a small max_transfer, before I can propose the right patch,
since I haven't been able to reproduce it locally yet (although I admit
I haven't tried to see if blkdebug could reliably introduce artificial
limits to simulate your setup). And it may turn out that I just have to
fix the bdrv_co_do_pwrite_zeroes() code to loop multiple times if the
size of the unaligned head really does exceed the max_transfer size
that the underlying protocol is able to support, rather than assuming
that the unaligned head/tail always fit in a single fallback write.

Can you also try this patch? If I'm right, you'll still fail, but the
assertion will be slightly different. (Again, I'm passing locally, but
that's because I'm using the file protocol, and my file system does not
impose a puny 64k max transfer.)

diff --git i/block/io.c w/block/io.c
index b136c89..8757063 100644
--- i/block/io.c
+++ w/block/io.c
@@ -1179,6 +1179,8 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int max_write_zeroes = MIN_NON_ZERO(bs->bl.max_pwrite_zeroes, INT_MAX);
     int alignment = MAX(bs->bl.pwrite_zeroes_alignment,
                         bs->bl.request_alignment);
+    int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+                                    MAX_WRITE_ZEROES_BOUNCE_BUFFER);

     assert(alignment % bs->bl.request_alignment == 0);
     head = offset % alignment;
@@ -1197,6 +1199,8 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
         /* Make a small request up to the first aligned sector. */
         num = MIN(count, alignment - head);
         head = 0;
+        assert(num < max_write_zeroes);
+        assert(num < max_transfer);
     } else if (tail && num > alignment) {
         /* Shorten the request to the last aligned sector. */
         num -= tail;
@@ -1222,8 +1226,6 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,

     if (ret == -ENOTSUP) {
         /* Fall back to bounce buffer if write zeroes is unsupported */
-        int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
-                                        MAX_WRITE_ZEROES_BOUNCE_BUFFER);
         BdrvRequestFlags write_flags = flags & ~BDRV_REQ_ZERO_WRITE;

         if ((flags & BDRV_REQ_FUA) &&

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Ed Swierk
Date: 2016-10-24 23:06 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-block, qemu-devel, Kevin Wolf, Denis V. Lunev

On Mon, Oct 24, 2016 at 2:21 PM, Eric Blake <eblake@redhat.com> wrote:
> How are you getting max_transfer == 65536? I can't reproduce it with
> the following setup:
>
> $ qemu-img create -f qcow2 -o cluster_size=1M file 10M
> $ qemu-io -f qcow2 -c 'w 7m 1k' file
> $ qemu-io -f qcow2 -c 'w -z 8003584 2093056' file
>
> although I did confirm that the above sequence was enough to get the
> -ENOTSUP failure and fall into the code calculating max_transfer.
>
> I'm guessing that you are using something other than a file system as
> the backing protocol for your qcow2 image. But do you really have a
> protocol that takes AT MOST 64k per transaction, while still trying to a
> cluster size of 1M in the qcow2 format? That's rather awkward, as it
> means that you are required to do 16 transactions per cluster (the whole
> point of using larger clusters is usually to get fewer transactions). I
> think we need to get to a root cause of why you are seeing such a small
> max_transfer, before I can propose the right patch, since I haven't been
> able to reproduce it locally yet (although I admit I haven't tried to
> see if blkdebug could reliably introduce artificial limits to simulate
> your setup). And it may turn out that I just have to fix the
> bdrv_co_do_pwrite_zeroes() code to loop multiple times if the size of
> the unaligned head really does exceed the max_transfer size that the
> underlying protocol is able to support, rather than assuming that the
> unaligned head/tail always fit in a single fallback write.

In this case I'm using a qcow2 image that's stored directly in a raw
dm-crypt/LUKS container, which is itself a loop device on an ext4
filesystem.

It appears loop devices (with or without dm-crypt/LUKS) report a
255-sector maximum per request via the BLKSECTGET ioctl, which qemu
rounds down to 64k in raw_refresh_limits(). However this maximum
appears to be just a hint: bdrv_driver_pwritev() succeeds even with a
385024-byte buffer of zeroes.

As for the 1M cluster size, this is a temporary workaround for another
qemu issue (the default qcow2 L2 table cache size performs well with
random reads covering only up to 8 GB of image data with 64k clusters;
beyond that the L2 table cache thrashes). I agree this is not an
optimal configuration for writes.

> Can you also try this patch? If I'm right, you'll still fail, but the
> assertion will be slightly different. (Again, I'm passing locally, but
> that's because I'm using the file protocol, and my file system does not
> impose a puny 64k max transfer).
>
> diff --git i/block/io.c w/block/io.c
> index b136c89..8757063 100644
> --- i/block/io.c
> +++ w/block/io.c
> @@ -1179,6 +1179,8 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
>      int max_write_zeroes = MIN_NON_ZERO(bs->bl.max_pwrite_zeroes, INT_MAX);
>      int alignment = MAX(bs->bl.pwrite_zeroes_alignment,
>                          bs->bl.request_alignment);
> +    int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
> +                                    MAX_WRITE_ZEROES_BOUNCE_BUFFER);
>
>      assert(alignment % bs->bl.request_alignment == 0);
>      head = offset % alignment;
> @@ -1197,6 +1199,8 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
>          /* Make a small request up to the first aligned sector. */
>          num = MIN(count, alignment - head);
>          head = 0;
> +        assert(num < max_write_zeroes);
> +        assert(num < max_transfer);
>      } else if (tail && num > alignment) {
>          /* Shorten the request to the last aligned sector. */
>          num -= tail;
> @@ -1222,8 +1226,6 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
>
>      if (ret == -ENOTSUP) {
>          /* Fall back to bounce buffer if write zeroes is unsupported */
> -        int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
> -                                        MAX_WRITE_ZEROES_BOUNCE_BUFFER);
>          BdrvRequestFlags write_flags = flags & ~BDRV_REQ_ZERO_WRITE;
>
>          if ((flags & BDRV_REQ_FUA) &&

With this change, the num < max_transfer assertion fails on the first
iteration (with num=385024 and max_transfer=65536).

--Ed
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Kevin Wolf
Date: 2016-10-25 8:39 UTC (permalink / raw)
To: Ed Swierk; +Cc: Eric Blake, qemu-block, qemu-devel, Denis V. Lunev

On 25.10.2016 at 01:06, Ed Swierk wrote:
> On Mon, Oct 24, 2016 at 2:21 PM, Eric Blake <eblake@redhat.com> wrote:
>> How are you getting max_transfer == 65536? I can't reproduce it with
>> the following setup:
>>
>> $ qemu-img create -f qcow2 -o cluster_size=1M file 10M
>> $ qemu-io -f qcow2 -c 'w 7m 1k' file
>> $ qemu-io -f qcow2 -c 'w -z 8003584 2093056' file
>>
>> although I did confirm that the above sequence was enough to get the
>> -ENOTSUP failure and fall into the code calculating max_transfer.
>>
>> I'm guessing that you are using something other than a file system as
>> the backing protocol for your qcow2 image. But do you really have a
>> protocol that takes AT MOST 64k per transaction, while still trying to a
>> cluster size of 1M in the qcow2 format? That's rather awkward, as it
>> means that you are required to do 16 transactions per cluster (the whole
>> point of using larger clusters is usually to get fewer transactions). I
>> think we need to get to a root cause of why you are seeing such a small
>> max_transfer, before I can propose the right patch, since I haven't been
>> able to reproduce it locally yet (although I admit I haven't tried to
>> see if blkdebug could reliably introduce artificial limits to simulate
>> your setup). And it may turn out that I just have to fix the
>> bdrv_co_do_pwrite_zeroes() code to loop multiple times if the size of
>> the unaligned head really does exceed the max_transfer size that the
>> underlying protocol is able to support, rather than assuming that the
>> unaligned head/tail always fit in a single fallback write.
>
> In this case I'm using a qcow2 image that's stored directly in a raw
> dm-crypt/LUKS container, which is itself a loop device on an ext4
> filesystem.
>
> It appears loop devices (with or without dm-crypt/LUKS) report a
> 255-sector maximum per request via the BLKSECTGET ioctl, which qemu
> rounds down to 64k in raw_refresh_limits(). However this maximum
> appears to be just a hint: bdrv_driver_pwritev() succeeds even with a
> 385024-byte buffer of zeroes.

I suppose what happens is that we get short writes, but the raw-posix
driver actually has the loop to deal with this, so eventually we return
with the whole thing written.

Considering the presence of this loop, maybe we shouldn't set
bs->bl.max_transfer at all for raw-posix. Hm, except that for Linux AIO
we might actually need it.

> As for the 1M cluster size, this is a temporary workaround for another
> qemu issue (the default qcow2 L2 table cache size performs well with
> random reads covering only up to 8 GB of image data with 64k clusters;
> beyond that the L2 table cache thrashes). I agree this is not an
> optimal configuration for writes.

You can configure the qcow2 cache size without changing the cluster
size (though of course larger cluster sizes help the total metadata
size to stay smaller for larger image sizes):

  -drive file=test.qcow2,l2-cache-size=16M

Kevin
* Re: [Qemu-devel] Assertion failure on qcow2 disk with cluster_size != 64k
From: Eric Blake
Date: 2016-10-25 13:51 UTC (permalink / raw)
To: Kevin Wolf, Ed Swierk; +Cc: qemu-block, qemu-devel, Denis V. Lunev

On 10/25/2016 03:39 AM, Kevin Wolf wrote:
>> It appears loop devices (with or without dm-crypt/LUKS) report a
>> 255-sector maximum per request via the BLKSECTGET ioctl, which qemu
>> rounds down to 64k in raw_refresh_limits().

Cool - I can make a loop device, so now I should have enough info to
reproduce locally. I wonder if we should continue rounding down to a
power of 2 (128 sectors), or if we should try to utilize the full
255-sector limit. But that's an independent patch and bigger audit,
more along the lines of what we did for the weird scsi device with 15M
limits.

>> However this maximum appears to be just a hint: bdrv_driver_pwritev()
>> succeeds even with a 385024-byte buffer of zeroes.
>
> I suppose what happens is that we get short writes, but the raw-posix
> driver actually has the loop to deal with this, so eventually we return
> with the whole thing written.
>
> Considering the presence of this loop, maybe we shouldn't set
> bs->bl.max_transfer at all for raw-posix. Hm, except that for Linux AIO
> we might actually need it.

bdrv_aligned_pwritev() fragments before calling into
bdrv_driver_pwritev(), but our write-zeroes fallback code is calling
directly into bdrv_driver_pwritev() rather than going through
bdrv_aligned_pwritev(); and I don't think we want to change that. In
other words, it sounds like bdrv_co_do_pwrite_zeroes() has to do the
same fragmentation that bdrv_aligned_pwritev() would be doing.

Okay, I think I know where to go with the patch, and hope to have
something posted later today.

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org