* [Qemu-devel] [PATCH v3] block/vdi: Add locking for parallel requests
@ 2015-02-27 19:54 Max Reitz
From: Max Reitz @ 2015-02-27 19:54 UTC (permalink / raw)
To: qemu-block
Cc: Kevin Wolf, Stefan Weil, qemu-devel, Max Reitz, Stefan Hajnoczi,
Paolo Bonzini
When allocating a new cluster, the first write to it must be the one
doing the allocation, because that one pads its write request to the
cluster size; if another write to that cluster is executed before it,
that write will be overwritten due to the padding.
See https://bugs.launchpad.net/qemu/+bug/1422307 for what can go wrong
without this patch.
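For illustration, the failure mode described above can be reproduced with plain buffers in a standalone sketch (the names `allocating_write`, `plain_write`, and the tiny cluster size are inventions of this example, not QEMU code):

```c
#include <string.h>

#define CLUSTER_SIZE 8

/* The backing "file": a single cluster. */
static char file[CLUSTER_SIZE];

/* The allocating write copies its data into a zero-filled buffer of
 * cluster size and then writes the *whole* cluster, i.e. it pads the
 * request with zeroes. */
static void allocating_write(int off, const char *buf, int len)
{
    char block[CLUSTER_SIZE] = {0};     /* zero padding */
    memcpy(block + off, buf, len);
    memcpy(file, block, CLUSTER_SIZE);  /* full-cluster write */
}

/* A plain write into an already-allocated cluster touches only its
 * own range. */
static void plain_write(int off, const char *buf, int len)
{
    memcpy(file + off, buf, len);
}

/* Returns 1 iff the concurrent (plain) write's data survives. */
static int survives(int plain_first)
{
    memset(file, '?', CLUSTER_SIZE);
    if (plain_first) {
        plain_write(4, "BB", 2);
        allocating_write(0, "AA", 2);   /* padding clobbers "BB" */
    } else {
        allocating_write(0, "AA", 2);
        plain_write(4, "BB", 2);
    }
    return file[4] == 'B';
}
```

Running both orderings shows that the concurrent write only survives when the allocating (padding) write is executed first, which is exactly the ordering the lock below enforces.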
Cc: qemu-stable <qemu-stable@nongnu.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
v3: Hopefully finally found the real issue which causes the problems
described in the bug report; at least it sounds very reasonable and
I can no longer reproduce any of the issues described there.
Thank you, Paolo and Stefan!
---
block/vdi.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/block/vdi.c b/block/vdi.c
index 74030c6..53bd02f 100644
--- a/block/vdi.c
+++ b/block/vdi.c
@@ -53,6 +53,7 @@
#include "block/block_int.h"
#include "qemu/module.h"
#include "migration/migration.h"
+#include "block/coroutine.h"
#if defined(CONFIG_UUID)
#include <uuid/uuid.h>
@@ -196,6 +197,8 @@ typedef struct {
/* VDI header (converted to host endianness). */
VdiHeader header;
+ CoMutex write_lock;
+
Error *migration_blocker;
} BDRVVdiState;
@@ -504,6 +507,8 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
"vdi", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
+ qemu_co_mutex_init(&s->write_lock);
+
return 0;
fail_free_bmap:
@@ -639,11 +644,31 @@ static int vdi_co_write(BlockDriverState *bs,
buf, n_sectors * SECTOR_SIZE);
memset(block + (sector_in_block + n_sectors) * SECTOR_SIZE, 0,
(s->block_sectors - n_sectors - sector_in_block) * SECTOR_SIZE);
+
+ /* Note that this coroutine does not yield anywhere from reading the
+ * bmap entry until here, so with regard to all the coroutines trying
+ * to write to this cluster, the one doing the allocation will
+ * always be the first to try to acquire the lock.
+ * Therefore, it is also the first that will actually be able to
+ * acquire the lock, and thus the padded cluster is written before
+ * the other coroutines can write to the affected area. */
+ qemu_co_mutex_lock(&s->write_lock);
ret = bdrv_write(bs->file, offset, block, s->block_sectors);
+ qemu_co_mutex_unlock(&s->write_lock);
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
+ qemu_co_mutex_lock(&s->write_lock);
+ /* This lock is only used to make sure the following write operation
+ * is executed after the write issued by the coroutine allocating
+ * this cluster, so we do not need to keep it locked.
+ * As stated above, the allocating coroutine will always try to lock
+ * the mutex before all the other concurrent accesses to that
+ * cluster, so at this point we can be absolutely certain that the
+ * allocating write has returned (there may be other writes in
+ * flight, but they do not concern this very operation). */
+ qemu_co_mutex_unlock(&s->write_lock);
ret = bdrv_write(bs->file, offset, buf, n_sectors);
}
--
2.1.0
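The way the patch uses the mutex in the non-allocating path, an empty lock/unlock pair as a pure ordering barrier, can be imitated with POSIX threads in a self-contained sketch (here `pthread_mutex_t` stands in for CoMutex and all names are illustrative, not QEMU API):

```c
#include <pthread.h>

static pthread_mutex_t write_lock = PTHREAD_MUTEX_INITIALIZER;
static int order[2];
static int n;

static void record(int who) { order[n++] = who; }

static void *plain_writer(void *arg)
{
    (void)arg;
    /* Mirror of the non-allocating path: lock and immediately unlock.
     * This blocks until the allocating writer has finished its padded
     * write and released the lock; no critical section is needed. */
    pthread_mutex_lock(&write_lock);
    pthread_mutex_unlock(&write_lock);
    record(1);                          /* the plain write */
    return NULL;
}

/* Returns 1 iff the allocating write is recorded before the plain one. */
static int run_demo(void)
{
    pthread_t t;
    n = 0;
    /* Allocating path: in the real code there is no yield between
     * reading the bmap entry and locking, so the allocator always
     * locks first; here we model that by locking before the rival
     * thread even exists. */
    pthread_mutex_lock(&write_lock);
    pthread_create(&t, NULL, plain_writer, NULL);
    record(0);                          /* the padded, allocating write */
    pthread_mutex_unlock(&write_lock);
    pthread_join(t, NULL);
    return n == 2 && order[0] == 0 && order[1] == 1;
}
```

The sketch makes the design choice visible: the lock never protects shared data, it only serializes the first write to a freshly allocated cluster against later writes to the same cluster.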
* Re: [Qemu-devel] [PATCH v3] block/vdi: Add locking for parallel requests
From: Paolo Bonzini @ 2015-03-02 10:30 UTC (permalink / raw)
To: Max Reitz, qemu-block
Cc: Kevin Wolf, Stefan Weil, qemu-devel, Stefan Hajnoczi
On 27/02/2015 20:54, Max Reitz wrote:
> When allocating a new cluster, the first write to it must be the one
> doing the allocation, because that one pads its write request to the
> cluster size; if another write to that cluster is executed before it,
> that write will be overwritten due to the padding.
>
> See https://bugs.launchpad.net/qemu/+bug/1422307 for what can go wrong
> without this patch.
>
> Cc: qemu-stable <qemu-stable@nongnu.org>
> Signed-off-by: Max Reitz <mreitz@redhat.com>
Usage of CoMutex is tricky, but well commented. So:
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> v3: Hopefully finally found the real issue which causes the problems
> described in the bug report; at least it sounds very reasonable and
> I can no longer reproduce any of the issues described there.
> Thank you, Paolo and Stefan!
> ---
> block/vdi.c | 25 +++++++++++++++++++++++++
> 1 file changed, 25 insertions(+)
>
> diff --git a/block/vdi.c b/block/vdi.c
> index 74030c6..53bd02f 100644
> --- a/block/vdi.c
> +++ b/block/vdi.c
> @@ -53,6 +53,7 @@
> #include "block/block_int.h"
> #include "qemu/module.h"
> #include "migration/migration.h"
> +#include "block/coroutine.h"
>
> #if defined(CONFIG_UUID)
> #include <uuid/uuid.h>
> @@ -196,6 +197,8 @@ typedef struct {
> /* VDI header (converted to host endianness). */
> VdiHeader header;
>
> + CoMutex write_lock;
> +
> Error *migration_blocker;
> } BDRVVdiState;
>
> @@ -504,6 +507,8 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
> "vdi", bdrv_get_device_name(bs), "live migration");
> migrate_add_blocker(s->migration_blocker);
>
> + qemu_co_mutex_init(&s->write_lock);
> +
> return 0;
>
> fail_free_bmap:
> @@ -639,11 +644,31 @@ static int vdi_co_write(BlockDriverState *bs,
> buf, n_sectors * SECTOR_SIZE);
> memset(block + (sector_in_block + n_sectors) * SECTOR_SIZE, 0,
> (s->block_sectors - n_sectors - sector_in_block) * SECTOR_SIZE);
> +
> + /* Note that this coroutine does not yield anywhere from reading the
> + * bmap entry until here, so with regard to all the coroutines trying
> + * to write to this cluster, the one doing the allocation will
> + * always be the first to try to acquire the lock.
> + * Therefore, it is also the first that will actually be able to
> + * acquire the lock, and thus the padded cluster is written before
> + * the other coroutines can write to the affected area. */
> + qemu_co_mutex_lock(&s->write_lock);
> ret = bdrv_write(bs->file, offset, block, s->block_sectors);
> + qemu_co_mutex_unlock(&s->write_lock);
> } else {
> uint64_t offset = s->header.offset_data / SECTOR_SIZE +
> (uint64_t)bmap_entry * s->block_sectors +
> sector_in_block;
> + qemu_co_mutex_lock(&s->write_lock);
> + /* This lock is only used to make sure the following write operation
> + * is executed after the write issued by the coroutine allocating
> + * this cluster, so we do not need to keep it locked.
> + * As stated above, the allocating coroutine will always try to lock
> + * the mutex before all the other concurrent accesses to that
> + * cluster, so at this point we can be absolutely certain that the
> + * allocating write has returned (there may be other writes in
> + * flight, but they do not concern this very operation). */
> + qemu_co_mutex_unlock(&s->write_lock);
> ret = bdrv_write(bs->file, offset, buf, n_sectors);
> }
>
>
* Re: [Qemu-devel] [PATCH v3] block/vdi: Add locking for parallel requests
From: Stefan Hajnoczi @ 2015-03-02 17:31 UTC (permalink / raw)
To: Max Reitz; +Cc: Kevin Wolf, Stefan Weil, qemu-devel, qemu-block, Paolo Bonzini
On Fri, Feb 27, 2015 at 02:54:39PM -0500, Max Reitz wrote:
> When allocating a new cluster, the first write to it must be the one
> doing the allocation, because that one pads its write request to the
> cluster size; if another write to that cluster is executed before it,
> that write will be overwritten due to the padding.
>
> See https://bugs.launchpad.net/qemu/+bug/1422307 for what can go wrong
> without this patch.
>
> Cc: qemu-stable <qemu-stable@nongnu.org>
> Signed-off-by: Max Reitz <mreitz@redhat.com>
> ---
> v3: Hopefully finally found the real issue which causes the problems
> described in the bug report; at least it sounds very reasonable and
> I can no longer reproduce any of the issues described there.
> Thank you, Paolo and Stefan!
> ---
> block/vdi.c | 25 +++++++++++++++++++++++++
> 1 file changed, 25 insertions(+)
I think this is overkill but let's merge it since the patch has been
written. A simple s->lock for I/O operations would have been fine since
this block driver was never designed for parallel I/O.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [Qemu-devel] [PATCH v3] block/vdi: Add locking for parallel requests
From: Kevin Wolf @ 2015-03-04 11:04 UTC (permalink / raw)
To: Max Reitz
Cc: Stefan Weil, Stefan Hajnoczi, qemu-devel, qemu-block, Paolo Bonzini
On 27.02.2015 at 20:54, Max Reitz wrote:
> When allocating a new cluster, the first write to it must be the one
> doing the allocation, because that one pads its write request to the
> cluster size; if another write to that cluster is executed before it,
> that write will be overwritten due to the padding.
>
> See https://bugs.launchpad.net/qemu/+bug/1422307 for what can go wrong
> without this patch.
>
> Cc: qemu-stable <qemu-stable@nongnu.org>
> Signed-off-by: Max Reitz <mreitz@redhat.com>
> ---
> v3: Hopefully finally found the real issue which causes the problems
> described in the bug report; at least it sounds very reasonable and
> I can no longer reproduce any of the issues described there.
> Thank you, Paolo and Stefan!
Thanks, applied to the block branch.
Kevin