* [PATCH 0/2] qcow2: seriously improve savevm performance
@ 2020-06-10 14:41 Denis V. Lunev
  2020-06-10 14:41 ` [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine Denis V. Lunev
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Denis V. Lunev @ 2020-06-10 14:41 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Denis V . Lunev, Vladimir Sementsov-Ogievskiy,
	Denis Plotnikov, Max Reitz

This series does two standard, basic things:
- it creates an intermediate buffer for all writes from the QEMU migration
  code to the QCOW2 image,
- this buffer is sent to disk asynchronously, allowing several writes to
  run in parallel (see the sketch below).
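
The batching part of the idea, outside of QEMU and much simplified (a
hypothetical standalone sketch, not the patch code; in the patch the full
buffers are handed to an AioTaskPool so several of these writes are in
flight at once):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK_SIZE (1024 * 1024)            /* flush granularity: 1 MiB */

typedef struct WriteBuffer {
    int fd;                                 /* destination file */
    off_t file_off;                         /* where the next flush lands */
    size_t filled;                          /* bytes accumulated so far */
    char *buf;                              /* aligned staging buffer */
} WriteBuffer;

/* Write out whatever has been accumulated as one large request. */
static int wb_flush(WriteBuffer *wb)
{
    if (wb->filled == 0) {
        return 0;
    }
    if (pwrite(wb->fd, wb->buf, wb->filled, wb->file_off) !=
        (ssize_t)wb->filled) {
        return -1;
    }
    wb->file_off += wb->filled;
    wb->filled = 0;
    return 0;
}

/* Accept arbitrary small, unaligned pieces; issue one big write per MiB. */
static int wb_write(WriteBuffer *wb, const void *data, size_t len)
{
    if (wb->buf == NULL) {
        wb->buf = aligned_alloc(4096, CHUNK_SIZE);
        if (wb->buf == NULL) {
            return -1;
        }
    }
    while (len > 0) {
        size_t chunk = CHUNK_SIZE - wb->filled;
        if (chunk > len) {
            chunk = len;
        }
        memcpy(wb->buf + wb->filled, data, chunk);
        wb->filled += chunk;
        data = (const char *)data + chunk;
        len -= chunk;
        if (wb->filled == CHUNK_SIZE && wb_flush(wb) < 0) {
            return -1;
        }
    }
    return 0;
}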

In general, the migration code is fantastically inefficient (by observation):
buffers are not aligned and are sent in arbitrary pieces, often less than
100 bytes per chunk, which results in read-modify-write cycles when the image
is opened without caching. It should also be noted that all writes go into
unallocated image blocks, which suffer further from the partial writes to
such new clusters.

This patch series implements the idea discussed in the RFC posted by Denis:
https://lists.gnu.org/archive/html/qemu-devel/2020-04/msg01925.html
Results with this series on NVMe are better than the original code:
                original     rfc    this
cached:          1.79s      2.38s   1.27s
non-cached:      3.29s      1.31s   0.81s

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
CC: Denis Plotnikov <dplotnikov@virtuozzo.com>




* [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine
  2020-06-10 14:41 [PATCH 0/2] qcow2: seriously improve savevm performance Denis V. Lunev
@ 2020-06-10 14:41 ` Denis V. Lunev
  2020-06-10 15:10   ` Vladimir Sementsov-Ogievskiy
  2020-06-10 14:41 ` [PATCH 2/2] qcow2: improve savevm performance Denis V. Lunev
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 8+ messages in thread
From: Denis V. Lunev @ 2020-06-10 14:41 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Denis V. Lunev, Vladimir Sementsov-Ogievskiy,
	Denis Plotnikov, Max Reitz

The patch preserves the constraint that only a single waiter is allowed.

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
CC: Denis Plotnikov <dplotnikov@virtuozzo.com>
---
 block/aio_task.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/aio_task.c b/block/aio_task.c
index 88989fa248..f338049147 100644
--- a/block/aio_task.c
+++ b/block/aio_task.c
@@ -27,7 +27,7 @@
 #include "block/aio_task.h"
 
 struct AioTaskPool {
-    Coroutine *main_co;
+    Coroutine *wake_co;
     int status;
     int max_busy_tasks;
     int busy_tasks;
@@ -54,15 +54,15 @@ static void coroutine_fn aio_task_co(void *opaque)
 
     if (pool->waiting) {
         pool->waiting = false;
-        aio_co_wake(pool->main_co);
+        aio_co_wake(pool->wake_co);
     }
 }
 
 void coroutine_fn aio_task_pool_wait_one(AioTaskPool *pool)
 {
     assert(pool->busy_tasks > 0);
-    assert(qemu_coroutine_self() == pool->main_co);
 
+    pool->wake_co = qemu_coroutine_self();
     pool->waiting = true;
     qemu_coroutine_yield();
 
@@ -98,7 +98,7 @@ AioTaskPool *coroutine_fn aio_task_pool_new(int max_busy_tasks)
 {
     AioTaskPool *pool = g_new0(AioTaskPool, 1);
 
-    pool->main_co = qemu_coroutine_self();
+    pool->wake_co = NULL;
     pool->max_busy_tasks = max_busy_tasks;
 
     return pool;
-- 
2.17.1




* [PATCH 2/2] qcow2: improve savevm performance
  2020-06-10 14:41 [PATCH 0/2] qcow2: seriously improve savevm performance Denis V. Lunev
  2020-06-10 14:41 ` [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine Denis V. Lunev
@ 2020-06-10 14:41 ` Denis V. Lunev
  2020-06-10 18:19 ` [PATCH 0/2] qcow2: seriously " no-reply
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Denis V. Lunev @ 2020-06-10 14:41 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Denis V. Lunev, Vladimir Sementsov-Ogievskiy,
	Denis Plotnikov, Max Reitz

This patch does two standard, basic things:
- it creates an intermediate buffer for all writes from the QEMU migration
  code to the QCOW2 image,
- this buffer is sent to disk asynchronously, allowing several writes to
  run in parallel.

In general, the migration code is fantastically inefficient (by observation):
buffers are not aligned and are sent in arbitrary pieces, often less than
100 bytes per chunk, which results in read-modify-write cycles when the image
is opened without caching. It should also be noted that all writes go into
unallocated image blocks, which suffer further from the partial writes to
such new clusters.

Snapshot creation time (2 GB Fedora 31 VM running on NVMe storage):
                original     fixed
cached:          1.79s       1.27s
non-cached:      3.29s       0.81s

The difference on HDD would be even more significant :)

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
CC: Denis Plotnikov <dplotnikov@virtuozzo.com>
---
 block/qcow2.c | 111 +++++++++++++++++++++++++++++++++++++++++++++++++-
 block/qcow2.h |   4 ++
 2 files changed, 113 insertions(+), 2 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 0cd2e6757e..e2ae69422a 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -4797,11 +4797,43 @@ static int qcow2_make_empty(BlockDriverState *bs)
     return ret;
 }
 
+
+typedef struct Qcow2VMStateTask {
+    AioTask task;
+
+    BlockDriverState *bs;
+    int64_t offset;
+    void *buf;
+    size_t bytes;
+} Qcow2VMStateTask;
+
+typedef struct Qcow2SaveVMState {
+    AioTaskPool *pool;
+    Qcow2VMStateTask *t;
+} Qcow2SaveVMState;
+
 static coroutine_fn int qcow2_co_flush_to_os(BlockDriverState *bs)
 {
     BDRVQcow2State *s = bs->opaque;
+    Qcow2SaveVMState *state = s->savevm_state;
     int ret;
 
+    if (state != NULL) {
+        aio_task_pool_start_task(state->pool, &state->t->task);
+
+        aio_task_pool_wait_all(state->pool);
+        ret = aio_task_pool_status(state->pool);
+
+        aio_task_pool_free(state->pool);
+        g_free(state);
+
+        s->savevm_state = NULL;
+
+        if (ret < 0) {
+            return ret;
+        }
+    }
+
     qemu_co_mutex_lock(&s->lock);
     ret = qcow2_write_caches(bs);
     qemu_co_mutex_unlock(&s->lock);
@@ -5098,14 +5130,89 @@ static int qcow2_has_zero_init(BlockDriverState *bs)
     }
 }
 
+
+static coroutine_fn int qcow2_co_vmstate_task_entry(AioTask *task)
+{
+    int err;
+    Qcow2VMStateTask *t = container_of(task, Qcow2VMStateTask, task);
+
+    if (t->bytes != 0) {
+        QEMUIOVector local_qiov;
+        qemu_iovec_init_buf(&local_qiov, t->buf, t->bytes);
+        err = t->bs->drv->bdrv_co_pwritev_part(t->bs, t->offset, t->bytes,
+                                               &local_qiov, 0, 0);
+    }
+
+    qemu_vfree(t->buf);
+    return err;
+}
+
+static Qcow2VMStateTask *qcow2_vmstate_task_create(BlockDriverState *bs,
+                                                    int64_t pos, size_t size)
+{
+    BDRVQcow2State *s = bs->opaque;
+    Qcow2VMStateTask *t = g_new(Qcow2VMStateTask, 1);
+
+    *t = (Qcow2VMStateTask) {
+        .task.func = qcow2_co_vmstate_task_entry,
+        .buf = qemu_blockalign(bs, size),
+        .offset = qcow2_vm_state_offset(s) + pos,
+        .bs = bs,
+    };
+
+    return t;
+}
+
 static int qcow2_save_vmstate(BlockDriverState *bs, QEMUIOVector *qiov,
                               int64_t pos)
 {
     BDRVQcow2State *s = bs->opaque;
+    Qcow2SaveVMState *state = s->savevm_state;
+    Qcow2VMStateTask *t;
+    size_t buf_size = MAX(s->cluster_size, 1 * MiB);
+    size_t to_copy;
+    size_t off;
 
     BLKDBG_EVENT(bs->file, BLKDBG_VMSTATE_SAVE);
-    return bs->drv->bdrv_co_pwritev_part(bs, qcow2_vm_state_offset(s) + pos,
-                                         qiov->size, qiov, 0, 0);
+
+    if (state == NULL) {
+        state = g_new(Qcow2SaveVMState, 1);
+        *state = (Qcow2SaveVMState) {
+            .pool = aio_task_pool_new(QCOW2_MAX_WORKERS),
+            .t = qcow2_vmstate_task_create(bs, pos, buf_size),
+        };
+
+        s->savevm_state = state;
+    }
+
+    if (aio_task_pool_status(state->pool) != 0) {
+        return aio_task_pool_status(state->pool);
+    }
+
+    t = state->t;
+    if (t->offset + t->bytes != qcow2_vm_state_offset(s) + pos) {
+        /* Normally this branch is not reachable from migration */
+        return bs->drv->bdrv_co_pwritev_part(bs,
+                qcow2_vm_state_offset(s) + pos, qiov->size, qiov, 0, 0);
+    }
+
+    off = 0;
+    while (1) {
+        to_copy = MIN(qiov->size - off, buf_size - t->bytes);
+        qemu_iovec_to_buf(qiov, off, t->buf + t->bytes, to_copy);
+        t->bytes += to_copy;
+        if (t->bytes < buf_size) {
+            return 0;
+        }
+
+        aio_task_pool_start_task(state->pool, &t->task);
+
+        pos += to_copy;
+        off += to_copy;
+        state->t = t = qcow2_vmstate_task_create(bs, pos, buf_size);
+    }
+
+    return 0;
 }
 
 static int qcow2_load_vmstate(BlockDriverState *bs, QEMUIOVector *qiov,
diff --git a/block/qcow2.h b/block/qcow2.h
index 7ce2c23bdb..146cfed739 100644
--- a/block/qcow2.h
+++ b/block/qcow2.h
@@ -291,6 +291,8 @@ typedef struct Qcow2BitmapHeaderExt {
 
 #define QCOW2_MAX_THREADS 4
 
+typedef struct Qcow2SaveVMState Qcow2SaveVMState;
+
 typedef struct BDRVQcow2State {
     int cluster_bits;
     int cluster_size;
@@ -384,6 +386,8 @@ typedef struct BDRVQcow2State {
      * is to convert the image with the desired compression type set.
      */
     Qcow2CompressionType compression_type;
+
+    Qcow2SaveVMState *savevm_state;
 } BDRVQcow2State;
 
 typedef struct Qcow2COWRegion {
-- 
2.17.1




* Re: [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine
  2020-06-10 14:41 ` [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine Denis V. Lunev
@ 2020-06-10 15:10   ` Vladimir Sementsov-Ogievskiy
  2020-06-10 16:52     ` Denis V. Lunev
  0 siblings, 1 reply; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-10 15:10 UTC (permalink / raw)
  To: Denis V. Lunev, qemu-block, qemu-devel
  Cc: Kevin Wolf, Denis Plotnikov, Max Reitz

10.06.2020 17:41, Denis V. Lunev wrote:
> The patch preserves the constraint that only a single waiter is allowed.
> 
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> CC: Denis Plotnikov <dplotnikov@virtuozzo.com>
> ---
>   block/aio_task.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/block/aio_task.c b/block/aio_task.c
> index 88989fa248..f338049147 100644
> --- a/block/aio_task.c
> +++ b/block/aio_task.c
> @@ -27,7 +27,7 @@
>   #include "block/aio_task.h"
>   
>   struct AioTaskPool {
> -    Coroutine *main_co;
> +    Coroutine *wake_co;
>       int status;
>       int max_busy_tasks;
>       int busy_tasks;
> @@ -54,15 +54,15 @@ static void coroutine_fn aio_task_co(void *opaque)
>   
>       if (pool->waiting) {
>           pool->waiting = false;
> -        aio_co_wake(pool->main_co);
> +        aio_co_wake(pool->wake_co);
>       }
>   }
>   
>   void coroutine_fn aio_task_pool_wait_one(AioTaskPool *pool)
>   {
>       assert(pool->busy_tasks > 0);
> -    assert(qemu_coroutine_self() == pool->main_co);
>   
> +    pool->wake_co = qemu_coroutine_self();
>       pool->waiting = true;
>       qemu_coroutine_yield();
>   
> @@ -98,7 +98,7 @@ AioTaskPool *coroutine_fn aio_task_pool_new(int max_busy_tasks)
>   {
>       AioTaskPool *pool = g_new0(AioTaskPool, 1);
>   
> -    pool->main_co = qemu_coroutine_self();
> +    pool->wake_co = NULL;
>       pool->max_busy_tasks = max_busy_tasks;
>   
>       return pool;
> 

With such an approach, if several coroutines wait simultaneously, only one of
them will finally be woken and the others will hang.

I think we should use a CoQueue here: a CoQueue instead of wake_co,
qemu_co_queue_wait() in wait_one, and qemu_co_queue_next() instead of
aio_co_wake().
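
Roughly like this (an untested sketch just to show the shape, written from
memory of the coroutine API; the waiting flag should become unnecessary,
since waking an empty queue is a no-op):

struct AioTaskPool {
    CoQueue waiters;                  /* instead of wake_co/waiting */
    int status;
    int max_busy_tasks;
    int busy_tasks;
    ...
};

/* in aio_task_co(), once a task has finished: */
    qemu_co_queue_next(&pool->waiters);

void coroutine_fn aio_task_pool_wait_one(AioTaskPool *pool)
{
    assert(pool->busy_tasks > 0);
    /* no mutex needed, everything runs in one AioContext */
    qemu_co_queue_wait(&pool->waiters, NULL);
    ...
}

/* and in aio_task_pool_new(): */
    qemu_co_queue_init(&pool->waiters);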


-- 
Best regards,
Vladimir



* Re: [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine
  2020-06-10 15:10   ` Vladimir Sementsov-Ogievskiy
@ 2020-06-10 16:52     ` Denis V. Lunev
  0 siblings, 0 replies; 8+ messages in thread
From: Denis V. Lunev @ 2020-06-10 16:52 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: Kevin Wolf, Denis Plotnikov, Max Reitz

On 6/10/20 6:10 PM, Vladimir Sementsov-Ogievskiy wrote:
> 10.06.2020 17:41, Denis V. Lunev wrote:
>> The patch preserves the constraint that only a single waiter is allowed.
>>
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: Kevin Wolf <kwolf@redhat.com>
>> CC: Max Reitz <mreitz@redhat.com>
>> CC: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> CC: Denis Plotnikov <dplotnikov@virtuozzo.com>
>> ---
>>   block/aio_task.c | 8 ++++----
>>   1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/block/aio_task.c b/block/aio_task.c
>> index 88989fa248..f338049147 100644
>> --- a/block/aio_task.c
>> +++ b/block/aio_task.c
>> @@ -27,7 +27,7 @@
>>   #include "block/aio_task.h"
>>     struct AioTaskPool {
>> -    Coroutine *main_co;
>> +    Coroutine *wake_co;
>>       int status;
>>       int max_busy_tasks;
>>       int busy_tasks;
>> @@ -54,15 +54,15 @@ static void coroutine_fn aio_task_co(void *opaque)
>>         if (pool->waiting) {
>>           pool->waiting = false;
>> -        aio_co_wake(pool->main_co);
>> +        aio_co_wake(pool->wake_co);
>>       }
>>   }
>>     void coroutine_fn aio_task_pool_wait_one(AioTaskPool *pool)
>>   {
>>       assert(pool->busy_tasks > 0);
>> -    assert(qemu_coroutine_self() == pool->main_co);
>>   +    pool->wake_co = qemu_coroutine_self();
>>       pool->waiting = true;
>>       qemu_coroutine_yield();
>>   @@ -98,7 +98,7 @@ AioTaskPool *coroutine_fn aio_task_pool_new(int
>> max_busy_tasks)
>>   {
>>       AioTaskPool *pool = g_new0(AioTaskPool, 1);
>>   -    pool->main_co = qemu_coroutine_self();
>> +    pool->wake_co = NULL;
>>       pool->max_busy_tasks = max_busy_tasks;
>>         return pool;
>>
>
> With such an approach, if several coroutines wait simultaneously, only
> one of them will finally be woken and the others will hang.
>
> I think we should use a CoQueue here: a CoQueue instead of wake_co,
> qemu_co_queue_wait() in wait_one, and qemu_co_queue_next() instead of
> aio_co_wake().
>
>
I will check, but for now it would be enough to add
  assert(!pool->waiting);
at the beginning of aio_task_pool_wait_one().
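
I.e. something like this on top of the patch (a sketch; the tail of the
function stays as it is):

void coroutine_fn aio_task_pool_wait_one(AioTaskPool *pool)
{
    assert(!pool->waiting);           /* only a single waiter at a time */
    assert(pool->busy_tasks > 0);

    pool->wake_co = qemu_coroutine_self();
    pool->waiting = true;
    qemu_coroutine_yield();
    ...
}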

Den



* Re: [PATCH 0/2] qcow2: seriously improve savevm performance
  2020-06-10 14:41 [PATCH 0/2] qcow2: seriously improve savevm performance Denis V. Lunev
  2020-06-10 14:41 ` [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine Denis V. Lunev
  2020-06-10 14:41 ` [PATCH 2/2] qcow2: improve savevm performance Denis V. Lunev
@ 2020-06-10 18:19 ` no-reply
  2020-06-10 18:24 ` no-reply
  2020-06-10 18:24 ` no-reply
  4 siblings, 0 replies; 8+ messages in thread
From: no-reply @ 2020-06-10 18:19 UTC (permalink / raw)
  To: den; +Cc: kwolf, vsementsov, qemu-block, qemu-devel, mreitz, dplotnikov, den

Patchew URL: https://patchew.org/QEMU/20200610144129.27659-1-den@openvz.org/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC      crypto/hash.o
  CC      crypto/hash-nettle.o
/tmp/qemu-test/src/block/qcow2.c: In function 'qcow2_co_vmstate_task_entry':
/tmp/qemu-test/src/block/qcow2.c:5147:5: error: 'err' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     return err;
     ^
cc1: all warnings being treated as errors
  CC      crypto/hmac.o
  CC      crypto/hmac-nettle.o
make: *** [block/qcow2.o] Error 1
make: *** Waiting for unfinished jobs....
  CC      crypto/desrfb.o
Traceback (most recent call last):
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=3bb6d855342d412ca997d990b1688b3c', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-0hsvevb2/src/docker-src.2020-06-10-14.17.46.12598:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=3bb6d855342d412ca997d990b1688b3c
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-0hsvevb2/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    2m9.722s
user    0m9.192s


The full log is available at
http://patchew.org/logs/20200610144129.27659-1-den@openvz.org/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [PATCH 0/2] qcow2: seriously improve savevm performance
  2020-06-10 14:41 [PATCH 0/2] qcow2: seriously improve savevm performance Denis V. Lunev
                   ` (2 preceding siblings ...)
  2020-06-10 18:19 ` [PATCH 0/2] qcow2: seriously " no-reply
@ 2020-06-10 18:24 ` no-reply
  2020-06-10 18:24 ` no-reply
  4 siblings, 0 replies; 8+ messages in thread
From: no-reply @ 2020-06-10 18:24 UTC (permalink / raw)
  To: den; +Cc: kwolf, vsementsov, qemu-block, qemu-devel, mreitz, dplotnikov, den

Patchew URL: https://patchew.org/QEMU/20200610144129.27659-1-den@openvz.org/



Hi,

This series failed the docker-mingw@fedora build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#! /bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-mingw@fedora J=14 NETWORK=1
=== TEST SCRIPT END ===

  BUNZIP2 pc-bios/edk2-i386-code.fd.bz2
  BUNZIP2 pc-bios/edk2-arm-vars.fd.bz2
/tmp/qemu-test/src/block/qcow2.c: In function 'qcow2_co_vmstate_task_entry':
/tmp/qemu-test/src/block/qcow2.c:5147:12: error: 'err' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     return err;
            ^~~
cc1: all warnings being treated as errors
make: *** [/tmp/qemu-test/src/rules.mak:69: block/qcow2.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 665, in <module>
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=a0327ae2ef3c4163bdd307b30bc90a7c', '-u', '1003', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew2/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-fbvrtr6u/src/docker-src.2020-06-10-14.22.01.21453:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=a0327ae2ef3c4163bdd307b30bc90a7c
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-fbvrtr6u/src'
make: *** [docker-run-test-mingw@fedora] Error 2

real    2m20.791s
user    0m8.483s


The full log is available at
http://patchew.org/logs/20200610144129.27659-1-den@openvz.org/testing.docker-mingw@fedora/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [PATCH 0/2] qcow2: seriously improve savevm performance
  2020-06-10 14:41 [PATCH 0/2] qcow2: seriously improve savevm performance Denis V. Lunev
                   ` (3 preceding siblings ...)
  2020-06-10 18:24 ` no-reply
@ 2020-06-10 18:24 ` no-reply
  4 siblings, 0 replies; 8+ messages in thread
From: no-reply @ 2020-06-10 18:24 UTC (permalink / raw)
  To: den; +Cc: kwolf, vsementsov, qemu-block, qemu-devel, mreitz, dplotnikov, den

Patchew URL: https://patchew.org/QEMU/20200610144129.27659-1-den@openvz.org/



Hi,

This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC      block/gluster.o
  CC      block/ssh.o
  CC      block/dmg-bz2.o
/tmp/qemu-test/src/block/qcow2.c:5139:9: error: variable 'err' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
    if (t->bytes != 0) {
        ^~~~~~~~~~~~~
/tmp/qemu-test/src/block/qcow2.c:5147:12: note: uninitialized use occurs here
---
           ^
            = 0
1 error generated.
make: *** [/tmp/qemu-test/src/rules.mak:69: block/qcow2.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 665, in <module>
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=213a8da69081459b91db63888e1cc6a0', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=x86_64-softmmu', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-v54hgiy2/src/docker-src.2020-06-10-14.20.39.19315:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-debug']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=213a8da69081459b91db63888e1cc6a0
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-v54hgiy2/src'
make: *** [docker-run-test-debug@fedora] Error 2

real    4m8.609s
user    0m8.917s


The full log is available at
http://patchew.org/logs/20200610144129.27659-1-den@openvz.org/testing.asan/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com

