* [PULL 0/4] Block layer patches
From: Kevin Wolf @ 2023-03-28 12:35 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

The following changes since commit e3debd5e7d0ce031356024878a0a18b9d109354a:

  Merge tag 'pull-request-2023-03-24' of https://gitlab.com/thuth/qemu into staging (2023-03-24 16:08:46 +0000)

are available in the Git repository at:

  https://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to d8fbf9aa85aed64450907580a1d70583f097e9df:

  block/export: Fix graph locking in blk_get_geometry() call (2023-03-27 15:16:05 +0200)

----------------------------------------------------------------
Block layer patches

- aio-posix: Fix race during epoll upgrade
- vhost-user-blk/VDUSE export: Fix a potential deadlock and an assertion
  failure when the export runs in an iothread
- NBD server: Push pending frames after sending reply to fix performance
  especially when used with TLS

----------------------------------------------------------------
Florian Westphal (1):
      nbd/server: push pending frames after sending reply

Kevin Wolf (1):
      block/export: Fix graph locking in blk_get_geometry() call

Stefan Hajnoczi (2):
      block/export: only acquire AioContext once for vhost_user_server_stop()
      aio-posix: fix race between epoll upgrade and aio_set_fd_handler()

 include/block/block-io.h          |  4 +++-
 include/sysemu/block-backend-io.h |  5 ++++-
 block.c                           |  5 +++--
 block/block-backend.c             |  7 +++++--
 block/export/virtio-blk-handler.c |  7 ++++---
 nbd/server.c                      |  3 +++
 util/fdmon-epoll.c                | 25 ++++++++++++++++++-------
 util/vhost-user-server.c          |  5 +----
 8 files changed, 41 insertions(+), 20 deletions(-)




* [PULL 1/4] nbd/server: push pending frames after sending reply
From: Kevin Wolf @ 2023-03-28 12:35 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Florian Westphal <fw@strlen.de>

qemu-nbd doesn't set TCP_NODELAY on the TCP socket.

The kernel therefore waits for more data and avoids transmitting small
packets. Without TLS this is barely noticeable, but with TLS the effect
is dramatic.

Booting a VM via qemu-nbd on localhost (with TLS) takes more than
2 minutes on my system.  tcpdump shows frequent stalls during which no
packets are sent for about 40ms.

Add explicit (un)corking when processing (and responding to) requests.
Setting "TCP_CORK, &zero" after an earlier "TCP_CORK, &one" flushes the
pending data.

VM Boot time:
main:    no tls:  23s, with tls: 2m45s
patched: no tls:  14s, with tls: 15s

VM Boot time, qemu-nbd via network (same lan):
main:    no tls:  18s, with tls: 1m50s
patched: no tls:  17s, with tls: 18s

Future optimization: if we could detect whether another request is
already pending, we could defer the uncork operation, because more data
would be appended anyway.

Signed-off-by: Florian Westphal <fw@strlen.de>
Message-Id: <20230324104720.2498-1-fw@strlen.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 nbd/server.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/nbd/server.c b/nbd/server.c
index a4750e4188..848836d414 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2667,6 +2667,8 @@ static coroutine_fn void nbd_trip(void *opaque)
         goto disconnect;
     }
 
+    qio_channel_set_cork(client->ioc, true);
+
     if (ret < 0) {
         /* It wasn't -EIO, so, according to nbd_co_receive_request()
          * semantics, we should return the error to the client. */
@@ -2692,6 +2694,7 @@ static coroutine_fn void nbd_trip(void *opaque)
         goto disconnect;
     }
 
+    qio_channel_set_cork(client->ioc, false);
 done:
     nbd_request_put(req);
     nbd_client_put(client);
-- 
2.39.2




* [PULL 2/4] block/export: only acquire AioContext once for vhost_user_server_stop()
From: Kevin Wolf @ 2023-03-28 12:35 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Stefan Hajnoczi <stefanha@redhat.com>

vhost_user_server_stop() uses AIO_WAIT_WHILE(), and AIO_WAIT_WHILE()
requires that the AioContext be acquired exactly once.

Since blk_exp_request_shutdown() already acquires the AioContext, it
must not be acquired again in vhost_user_server_stop().
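
The resulting calling convention, as a minimal sketch (export_shutdown()
is a hypothetical caller; the aio_context_*() functions and
vhost_user_server_stop() are the real APIs):

  /* The caller owns the single AioContext acquisition. */
  static void export_shutdown(VuServer *server)
  {
      aio_context_acquire(server->ctx);
      vhost_user_server_stop(server);   /* may run AIO_WAIT_WHILE() inside */
      aio_context_release(server->ctx);
  }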

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230323145853.1345527-1-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 util/vhost-user-server.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 40f36ea214..5b6216069c 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -346,10 +346,9 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
     aio_context_release(server->ctx);
 }
 
+/* server->ctx acquired by caller */
 void vhost_user_server_stop(VuServer *server)
 {
-    aio_context_acquire(server->ctx);
-
     qemu_bh_delete(server->restart_listener_bh);
     server->restart_listener_bh = NULL;
 
@@ -366,8 +365,6 @@ void vhost_user_server_stop(VuServer *server)
         AIO_WAIT_WHILE(server->ctx, server->co_trip);
     }
 
-    aio_context_release(server->ctx);
-
     if (server->listener) {
         qio_net_listener_disconnect(server->listener);
         object_unref(OBJECT(server->listener));
-- 
2.39.2




* [PULL 3/4] aio-posix: fix race between epoll upgrade and aio_set_fd_handler()
From: Kevin Wolf @ 2023-03-28 12:35 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Stefan Hajnoczi <stefanha@redhat.com>

If another thread calls aio_set_fd_handler() while the IOThread event
loop is upgrading from ppoll(2) to epoll(7) then we might miss new
AioHandlers. The epollfd will not monitor the new AioHandler's fd,
resulting in hangs.

Take the AioHandler list lock while upgrading to epoll. This prevents
the set of AioHandlers from changing while epoll is being set up. If we
cannot take the lock because we're in a nested event loop, don't upgrade
to epoll; the upgrade will happen on a later iteration that isn't nested.

The downside to taking the lock is that the aio_set_fd_handler() thread
has to wait until the epoll upgrade is finished, which involves many
epoll_ctl(2) system calls. However, this scenario is rare and I couldn't
think of another solution that is still simple.
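
The locking idiom, in sketch form (work_on_handler_list() is a
placeholder; the qemu_lockcnt calls are the real API):

  /* qemu_lockcnt_dec_if_lock() drops our list reference and takes the
   * lock only if it was the last reference, i.e. no nested event loop
   * is currently walking the AioHandler list. */
  if (!qemu_lockcnt_dec_if_lock(&ctx->list_lock)) {
      return false;                     /* nested: upgrade some other time */
  }
  work_on_handler_list();               /* the list cannot change here */
  qemu_lockcnt_inc_and_unlock(&ctx->list_lock);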

Reported-by: Qing Wang <qinwang@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2090998
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Fam Zheng <fam@euphon.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230323144859.1338495-1-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 util/fdmon-epoll.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
index e11a8a022e..1683aa1105 100644
--- a/util/fdmon-epoll.c
+++ b/util/fdmon-epoll.c
@@ -127,6 +127,8 @@ static bool fdmon_epoll_try_enable(AioContext *ctx)
 
 bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
 {
+    bool ok;
+
     if (ctx->epollfd < 0) {
         return false;
     }
@@ -136,14 +138,23 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
         return false;
     }
 
-    if (npfd >= EPOLL_ENABLE_THRESHOLD) {
-        if (fdmon_epoll_try_enable(ctx)) {
-            return true;
-        } else {
-            fdmon_epoll_disable(ctx);
-        }
+    if (npfd < EPOLL_ENABLE_THRESHOLD) {
+        return false;
+    }
+
+    /* The list must not change while we add fds to epoll */
+    if (!qemu_lockcnt_dec_if_lock(&ctx->list_lock)) {
+        return false;
+    }
+
+    ok = fdmon_epoll_try_enable(ctx);
+
+    qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
+
+    if (!ok) {
+        fdmon_epoll_disable(ctx);
     }
-    return false;
+    return ok;
 }
 
 void fdmon_epoll_setup(AioContext *ctx)
-- 
2.39.2




* [PULL 4/4] block/export: Fix graph locking in blk_get_geometry() call
From: Kevin Wolf @ 2023-03-28 12:35 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

blk_get_geometry() eventually calls bdrv_nb_sectors(), which is a
co_wrapper_mixed_bdrv_rdlock. This means that when it is called from
coroutine context, it already assumes that the graph is locked.

However, virtio_blk_sect_range_ok() in block/export/virtio-blk-handler.c
(used by the vhost-user-blk and VDUSE exports) runs in a coroutine, but
doesn't take the graph lock; blk_*() functions are generally expected
to take it internally. This causes an assertion failure when an export
running in an iothread is accessed for the first time.

This is an example of the crash:

  $ ./storage-daemon/qemu-storage-daemon --object iothread,id=th0 --blockdev file,filename=/home/kwolf/images/hd.img,node-name=disk --export vhost-user-blk,addr.type=unix,addr.path=/tmp/vhost.sock,node-name=disk,id=exp0,iothread=th0
  qemu-storage-daemon: ../block/graph-lock.c:268: void assert_bdrv_graph_readable(void): Assertion `qemu_in_main_thread() || reader_count()' failed.

  (gdb) bt
  #0  0x00007ffff6eafe5c in __pthread_kill_implementation () from /lib64/libc.so.6
  #1  0x00007ffff6e5fa76 in raise () from /lib64/libc.so.6
  #2  0x00007ffff6e497fc in abort () from /lib64/libc.so.6
  #3  0x00007ffff6e4971b in __assert_fail_base.cold () from /lib64/libc.so.6
  #4  0x00007ffff6e58656 in __assert_fail () from /lib64/libc.so.6
  #5  0x00005555556337a3 in assert_bdrv_graph_readable () at ../block/graph-lock.c:268
  #6  0x00005555555fd5a2 in bdrv_co_nb_sectors (bs=0x5555564c5ef0) at ../block.c:5847
  #7  0x00005555555ee949 in bdrv_nb_sectors (bs=0x5555564c5ef0) at block/block-gen.c:256
  #8  0x00005555555fd6b9 in bdrv_get_geometry (bs=0x5555564c5ef0, nb_sectors_ptr=0x7fffef7fedd0) at ../block.c:5884
  #9  0x000055555562ad6d in blk_get_geometry (blk=0x5555564cb200, nb_sectors_ptr=0x7fffef7fedd0) at ../block/block-backend.c:1624
  #10 0x00005555555ddb74 in virtio_blk_sect_range_ok (blk=0x5555564cb200, block_size=512, sector=0, size=512) at ../block/export/virtio-blk-handler.c:44
  #11 0x00005555555dd80d in virtio_blk_process_req (handler=0x5555564cbb98, in_iov=0x7fffe8003830, out_iov=0x7fffe8003860, in_num=1, out_num=0) at ../block/export/virtio-blk-handler.c:189
  #12 0x00005555555dd546 in vu_blk_virtio_process_req (opaque=0x7fffe8003800) at ../block/export/vhost-user-blk-server.c:66
  #13 0x00005555557bf4a1 in coroutine_trampoline (i0=-402635264, i1=32767) at ../util/coroutine-ucontext.c:177
  #14 0x00007ffff6e75c20 in ?? () from /lib64/libc.so.6
  #15 0x00007fffefffa870 in ?? ()
  #16 0x0000000000000000 in ?? ()

Fix this by creating a new blk_co_get_geometry() that takes the lock,
and changing blk_get_geometry() to be a co_wrapper_mixed around it.

To make the resulting code cleaner, virtio-blk-handler.c can now call
the coroutine version directly (though that isn't necessary for fixing
the bug; taking the lock in blk_co_get_geometry() is what fixes it).
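
For context, declaring a function co_wrapper_mixed makes the build
generate a synchronous wrapper around the coroutine version (see
scripts/block-coroutine-wrapper.py). A simplified sketch of the
wrapper's shape, with the non-coroutine branch reduced to a comment
because the generated code is more involved:

  void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr)
  {
      if (qemu_in_coroutine()) {
          /* Already in a coroutine: call the real function directly. */
          blk_co_get_geometry(blk, nb_sectors_ptr);
      } else {
          /* Otherwise: pack the arguments, create a coroutine that runs
           * blk_co_get_geometry(), and poll until it has finished. */
      }
  }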

Fixes: 8ab8140a04cf771d63e9754d6ba6c1e676bfe507
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230327113959.60071-1-kwolf@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block-io.h          | 4 +++-
 include/sysemu/block-backend-io.h | 5 ++++-
 block.c                           | 5 +++--
 block/block-backend.c             | 7 +++++--
 block/export/virtio-blk-handler.c | 7 ++++---
 5 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/include/block/block-io.h b/include/block/block-io.h
index 5da99d4d60..dbc034b728 100644
--- a/include/block/block-io.h
+++ b/include/block/block-io.h
@@ -89,7 +89,9 @@ int64_t co_wrapper bdrv_get_allocated_file_size(BlockDriverState *bs);
 
 BlockMeasureInfo *bdrv_measure(BlockDriver *drv, QemuOpts *opts,
                                BlockDriverState *in_bs, Error **errp);
-void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr);
+
+void coroutine_fn GRAPH_RDLOCK
+bdrv_co_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr);
 
 int coroutine_fn GRAPH_RDLOCK
 bdrv_co_delete_file(BlockDriverState *bs, Error **errp);
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index 40ab178719..c672b77247 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -70,7 +70,10 @@ void co_wrapper blk_eject(BlockBackend *blk, bool eject_flag);
 int64_t coroutine_fn blk_co_getlength(BlockBackend *blk);
 int64_t co_wrapper_mixed blk_getlength(BlockBackend *blk);
 
-void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr);
+void coroutine_fn blk_co_get_geometry(BlockBackend *blk,
+                                      uint64_t *nb_sectors_ptr);
+void co_wrapper_mixed blk_get_geometry(BlockBackend *blk,
+                                       uint64_t *nb_sectors_ptr);
 
 int64_t coroutine_fn blk_co_nb_sectors(BlockBackend *blk);
 int64_t co_wrapper_mixed blk_nb_sectors(BlockBackend *blk);
diff --git a/block.c b/block.c
index 0dd604d0f6..e0c6c648b1 100644
--- a/block.c
+++ b/block.c
@@ -5879,9 +5879,10 @@ int64_t coroutine_fn bdrv_co_getlength(BlockDriverState *bs)
 }
 
 /* return 0 as number of sectors if no device present or error */
-void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr)
+void coroutine_fn bdrv_co_get_geometry(BlockDriverState *bs,
+                                       uint64_t *nb_sectors_ptr)
 {
-    int64_t nb_sectors = bdrv_nb_sectors(bs);
+    int64_t nb_sectors = bdrv_co_nb_sectors(bs);
     IO_CODE();
 
     *nb_sectors_ptr = nb_sectors < 0 ? 0 : nb_sectors;
diff --git a/block/block-backend.c b/block/block-backend.c
index 278b04ce69..2ee39229e4 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1615,13 +1615,16 @@ int64_t coroutine_fn blk_co_getlength(BlockBackend *blk)
     return bdrv_co_getlength(blk_bs(blk));
 }
 
-void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr)
+void coroutine_fn blk_co_get_geometry(BlockBackend *blk,
+                                      uint64_t *nb_sectors_ptr)
 {
     IO_CODE();
+    GRAPH_RDLOCK_GUARD();
+
     if (!blk_bs(blk)) {
         *nb_sectors_ptr = 0;
     } else {
-        bdrv_get_geometry(blk_bs(blk), nb_sectors_ptr);
+        bdrv_co_get_geometry(blk_bs(blk), nb_sectors_ptr);
     }
 }
 
diff --git a/block/export/virtio-blk-handler.c b/block/export/virtio-blk-handler.c
index 313666e8ab..bc1cec6757 100644
--- a/block/export/virtio-blk-handler.c
+++ b/block/export/virtio-blk-handler.c
@@ -22,8 +22,9 @@ struct virtio_blk_inhdr {
     unsigned char status;
 };
 
-static bool virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
-                                     uint64_t sector, size_t size)
+static bool coroutine_fn
+virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
+                         uint64_t sector, size_t size)
 {
     uint64_t nb_sectors;
     uint64_t total_sectors;
@@ -41,7 +42,7 @@ static bool virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
     if ((sector << VIRTIO_BLK_SECTOR_BITS) % block_size) {
         return false;
     }
-    blk_get_geometry(blk, &total_sectors);
+    blk_co_get_geometry(blk, &total_sectors);
     if (sector > total_sectors || nb_sectors > total_sectors - sector) {
         return false;
     }
-- 
2.39.2




* Re: [PULL 0/4] Block layer patches
From: Peter Maydell @ 2023-03-28 19:42 UTC
  To: Kevin Wolf; +Cc: qemu-block, qemu-devel

On Tue, 28 Mar 2023 at 13:35, Kevin Wolf <kwolf@redhat.com> wrote:
>
> The following changes since commit e3debd5e7d0ce031356024878a0a18b9d109354a:
>
>   Merge tag 'pull-request-2023-03-24' of https://gitlab.com/thuth/qemu into staging (2023-03-24 16:08:46 +0000)
>
> are available in the Git repository at:
>
>   https://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to d8fbf9aa85aed64450907580a1d70583f097e9df:
>
>   block/export: Fix graph locking in blk_get_geometry() call (2023-03-27 15:16:05 +0200)
>
> ----------------------------------------------------------------
> Block layer patches
>
> - aio-posix: Fix race during epoll upgrade
> - vhost-user-blk/VDUSE export: Fix a potential deadlock and an assertion
>   failure when the export runs in an iothread
> - NBD server: Push pending frames after sending reply to fix performance
>   especially when used with TLS


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/8.0
for any user-visible changes.

-- PMM



* [PULL 0/4] Block layer patches
From: Kevin Wolf @ 2023-11-28 14:09 UTC
  To: qemu-block; +Cc: kwolf, stefanha, qemu-devel

The following changes since commit e867b01cd6658a64c16052117dbb18093a2f9772:

  Merge tag 'qga-pull-2023-11-25' of https://github.com/kostyanf14/qemu into staging (2023-11-27 08:59:00 -0500)

are available in the Git repository at:

  https://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 6e081324facf9aeece9c286774bab5af3b8d6099:

  ide/via: Fix BAR4 value in legacy mode (2023-11-28 14:56:32 +0100)

----------------------------------------------------------------
Block layer patches

- ide/via: Fix BAR4 value in legacy mode
- export/vhost-user-blk: Fix consecutive drains
- vmdk: Don't corrupt desc file in vmdk_write_cid
- iotests: fix default machine type detection

----------------------------------------------------------------
Andrey Drobyshev (1):
      iotests: fix default machine type detection

BALATON Zoltan (1):
      ide/via: Fix BAR4 value in legacy mode

Fam Zheng (1):
      vmdk: Don't corrupt desc file in vmdk_write_cid

Kevin Wolf (1):
      export/vhost-user-blk: Fix consecutive drains

 include/qemu/vhost-user-server.h     |  1 +
 block/export/vhost-user-blk-server.c |  9 +++++++--
 block/vmdk.c                         | 28 ++++++++++++++++++--------
 hw/ide/via.c                         | 17 ++++++++++------
 util/vhost-user-server.c             | 39 ++++++++++++++++++++++++++++--------
 tests/qemu-iotests/testenv.py        |  2 +-
 tests/qemu-iotests/059               |  2 ++
 tests/qemu-iotests/059.out           |  4 ++++
 8 files changed, 77 insertions(+), 25 deletions(-)




* Re: [PULL 0/4] Block layer patches
From: Peter Maydell @ 2019-09-23  9:49 UTC
  To: Kevin Wolf; +Cc: QEMU Developers, Qemu-block

On Fri, 20 Sep 2019 at 17:21, Kevin Wolf <kwolf@redhat.com> wrote:
>
> The following changes since commit 521db80318d6c749a6f6c5a65a68397af9e3ef16:
>
>   Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2019-09-16' into staging (2019-09-16 15:25:55 +0100)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to d2c8c09fca9210d0f2399c8d570086a4a66bd22e:
>
>   iotests: Remove Python 2 compatibility code (2019-09-20 17:58:51 +0200)
>
> ----------------------------------------------------------------
> Block layer patches:
>
> - Fix internal snapshots with typical -blockdev setups
> - iotests: Require Python 3.6 or later
>
> ----------------------------------------------------------------
> Kevin Wolf (4):
>       block/snapshot: Restrict set of snapshot nodes
>       iotests: Test internal snapshots with -blockdev
>       iotests: Require Python 3.6 or later
>       iotests: Remove Python 2 compatibility code

Hi. This fails 'make check' on all the non-x86 Linux hosts:
iotests 267 fails on aarch32, ppc64, s390x, aarch64.
Sample output from the aarch32 run; others are similar
but the listed snapshot size differs.

  TEST    iotest-qcow2: 267 [fail]
QEMU          -- "/home/peter.maydell/qemu/build/all-a32/tests/qemu-iotests/../../aarch64-softmmu/qemu-system-aarch64" -nodefaults -display none -machine virt,accel=qtest
QEMU_IMG      -- "/home/peter.maydell/qemu/build/all-a32/tests/qemu-iotests/../../qemu-img"
QEMU_IO       -- "/home/peter.maydell/qemu/build/all-a32/tests/qemu-iotests/../../qemu-io" --cache writeback -f qcow2
QEMU_NBD      -- "/home/peter.maydell/qemu/build/all-a32/tests/qemu-iotests/../../qemu-nbd"
IMGFMT        -- qcow2 (compat=1.1)
IMGPROTO      -- file
PLATFORM      -- Linux/aarch64 mustang-maydell 4.15.0-51-generic
TEST_DIR      -- /home/peter.maydell/qemu/build/all-a32/tests/qemu-iotests/scratch
SOCKET_SCM_HELPER -- /home/peter.maydell/qemu/build/all-a32/tests/qemu-iotests/socket_scm_helper

--- /home/peter.maydell/qemu/tests/qemu-iotests/267.out	2019-09-20 17:54:40.127012142 +0000
+++ /home/peter.maydell/qemu/build/all-a32/tests/qemu-iotests/267.out.bad	2019-09-20 18:02:11.756586745 +0000
@@ -34,7 +34,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -45,7 +45,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               636 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               684 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -70,7 +70,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               636 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               684 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -95,7 +95,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -106,7 +106,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -120,7 +120,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -135,7 +135,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -146,14 +146,14 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

 Internal snapshots on overlay:
 Snapshot list:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
-1         snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+1         snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 Internal snapshots on backing file:

 === -blockdev with NBD server on the backing file ===
@@ -167,7 +167,7 @@
 (qemu) info snapshots
 List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
---        snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+--        snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 (qemu) loadvm snap0
 (qemu) quit

@@ -178,5 +178,5 @@
 Internal snapshots on backing file:
 Snapshot list:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
-1         snap0               591 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
+1         snap0               640 KiB yyyy-mm-dd hh:mm:ss   00:00:00.000
 *** done
Not run: 172 186 192 220
Failures: 267
Failed 1 of 104 iotests


thanks
-- PMM



* [PULL 0/4] Block layer patches
From: Kevin Wolf @ 2019-09-20 16:20 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

The following changes since commit 521db80318d6c749a6f6c5a65a68397af9e3ef16:

  Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2019-09-16' into staging (2019-09-16 15:25:55 +0100)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to d2c8c09fca9210d0f2399c8d570086a4a66bd22e:

  iotests: Remove Python 2 compatibility code (2019-09-20 17:58:51 +0200)

----------------------------------------------------------------
Block layer patches:

- Fix internal snapshots with typical -blockdev setups
- iotests: Require Python 3.6 or later

----------------------------------------------------------------
Kevin Wolf (4):
      block/snapshot: Restrict set of snapshot nodes
      iotests: Test internal snapshots with -blockdev
      iotests: Require Python 3.6 or later
      iotests: Remove Python 2 compatibility code

 block/snapshot.c                         |  26 +++--
 tests/qemu-iotests/044                   |   3 -
 tests/qemu-iotests/163                   |   3 -
 tests/qemu-iotests/267                   | 168 ++++++++++++++++++++++++++++
 tests/qemu-iotests/267.out               | 182 +++++++++++++++++++++++++++++++
 tests/qemu-iotests/check                 |  13 ++-
 tests/qemu-iotests/common.filter         |   5 +-
 tests/qemu-iotests/group                 |   1 +
 tests/qemu-iotests/iotests.py            |  13 +--
 tests/qemu-iotests/nbd-fault-injector.py |   7 +-
 10 files changed, 389 insertions(+), 32 deletions(-)
 create mode 100755 tests/qemu-iotests/267
 create mode 100644 tests/qemu-iotests/267.out


