qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH] nbd: Grab aio context lock in more places
@ 2019-09-17  2:39 Eric Blake
  2019-09-20 22:07 ` [PATCH 2/1] tests: Use iothreads during iotest 223 Eric Blake
  2019-09-23 14:10 ` [PATCH] nbd: Grab aio context lock in more places Sergio Lopez
  0 siblings, 2 replies; 4+ messages in thread
From: Eric Blake @ 2019-09-17  2:39 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, open list:Network Block Dev..., slp, Max Reitz

When iothreads are in use, the failure to grab the aio context results
in an assertion failure when trying to unlock things during blk_unref,
when trying to unlock a mutex that was not locked.  In short, all
calls to nbd_export_put need to be done from within the correct aio
context.  But since nbd_export_put can recursively reach itself via
nbd_export_close, and recursively grabbing the context would deadlock,
we can't do the context grab directly in those functions, but must do
so in their callers.

Hoist the use of the correct aio_context from nbd_export_new() to its
caller qmp_nbd_server_add().  Then tweak qmp_nbd_server_remove(),
nbd_eject_notifier(), and nbd_export_close_all() to grab the right
context, so that all callers within qemu now own the context before
nbd_export_put() can call blk_unref().

Remaining uses in qemu-nbd don't matter (since that use case does not
support iothreads).

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
---

With this in place, my emailed formula [1] for causing an iothread
assertion failure no longer hits, and all the -nbd and -qcow2 iotests
still pass.  I would still like to update iotests to cover things (I
could not quickly figure out how to make iotest 222 use iothreads -
either we modify that one or add a new one), but wanted to get review
started on this first.

[1] https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg03383.html

 include/block/nbd.h |  1 +
 blockdev-nbd.c      | 14 ++++++++++++--
 nbd/server.c        | 23 +++++++++++++++++++----
 3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 21550747cf35..316fd705a9e4 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -340,6 +340,7 @@ void nbd_export_put(NBDExport *exp);

 BlockBackend *nbd_export_get_blockdev(NBDExport *exp);

+AioContext *nbd_export_aio_context(NBDExport *exp);
 NBDExport *nbd_export_find(const char *name);
 void nbd_export_close_all(void);

diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index 213f226ac1c4..6a8b206e1d74 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -151,6 +151,7 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
     BlockBackend *on_eject_blk;
     NBDExport *exp;
     int64_t len;
+    AioContext *aio_context;

     if (!nbd_server) {
         error_setg(errp, "NBD server not running");
@@ -173,11 +174,13 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
         return;
     }

+    aio_context = bdrv_get_aio_context(bs);
+    aio_context_acquire(aio_context);
     len = bdrv_getlength(bs);
     if (len < 0) {
         error_setg_errno(errp, -len,
                          "Failed to determine the NBD export's length");
-        return;
+        goto out;
     }

     if (!has_writable) {
@@ -190,13 +193,16 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
     exp = nbd_export_new(bs, 0, len, name, NULL, bitmap, !writable, !writable,
                          NULL, false, on_eject_blk, errp);
     if (!exp) {
-        return;
+        goto out;
     }

     /* The list of named exports has a strong reference to this export now and
      * our only way of accessing it is through nbd_export_find(), so we can drop
      * the strong reference that is @exp. */
     nbd_export_put(exp);
+
+ out:
+    aio_context_release(aio_context);
 }

 void qmp_nbd_server_remove(const char *name,
@@ -204,6 +210,7 @@ void qmp_nbd_server_remove(const char *name,
                            Error **errp)
 {
     NBDExport *exp;
+    AioContext *aio_context;

     if (!nbd_server) {
         error_setg(errp, "NBD server not running");
@@ -220,7 +227,10 @@ void qmp_nbd_server_remove(const char *name,
         mode = NBD_SERVER_REMOVE_MODE_SAFE;
     }

+    aio_context = nbd_export_aio_context(exp);
+    aio_context_acquire(aio_context);
     nbd_export_remove(exp, mode, errp);
+    aio_context_release(aio_context);
 }

 void qmp_nbd_server_stop(Error **errp)
diff --git a/nbd/server.c b/nbd/server.c
index 378784c1e54a..3003381c86b4 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1458,7 +1458,12 @@ static void blk_aio_detach(void *opaque)
 static void nbd_eject_notifier(Notifier *n, void *data)
 {
     NBDExport *exp = container_of(n, NBDExport, eject_notifier);
+    AioContext *aio_context;
+
+    aio_context = exp->ctx;
+    aio_context_acquire(aio_context);
     nbd_export_close(exp);
+    aio_context_release(aio_context);
 }

 NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
@@ -1477,12 +1482,11 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
      * NBD exports are used for non-shared storage migration.  Make sure
      * that BDRV_O_INACTIVE is cleared and the image is ready for write
      * access since the export could be available before migration handover.
+     * ctx was acquired in the caller.
      */
     assert(name);
     ctx = bdrv_get_aio_context(bs);
-    aio_context_acquire(ctx);
     bdrv_invalidate_cache(bs, NULL);
-    aio_context_release(ctx);

     /* Don't allow resize while the NBD server is running, otherwise we don't
      * care what happens with the node. */
@@ -1490,7 +1494,7 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
     if (!readonly) {
         perm |= BLK_PERM_WRITE;
     }
-    blk = blk_new(bdrv_get_aio_context(bs), perm,
+    blk = blk_new(ctx, perm,
                   BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
                   BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
     ret = blk_insert_bs(blk, bs, errp);
@@ -1557,7 +1561,7 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
     }

     exp->close = close;
-    exp->ctx = blk_get_aio_context(blk);
+    exp->ctx = ctx;
     blk_add_aio_context_notifier(blk, blk_aio_attached, blk_aio_detach, exp);

     if (on_eject_blk) {
@@ -1590,11 +1594,18 @@ NBDExport *nbd_export_find(const char *name)
     return NULL;
 }

+AioContext *
+nbd_export_aio_context(NBDExport *exp)
+{
+    return exp->ctx;
+}
+
 void nbd_export_close(NBDExport *exp)
 {
     NBDClient *client, *next;

     nbd_export_get(exp);
+
     /*
      * TODO: Should we expand QMP NbdServerRemoveNode enum to allow a
      * close mode that stops advertising the export to new clients but
@@ -1684,9 +1695,13 @@ BlockBackend *nbd_export_get_blockdev(NBDExport *exp)
 void nbd_export_close_all(void)
 {
     NBDExport *exp, *next;
+    AioContext *aio_context;

     QTAILQ_FOREACH_SAFE(exp, &exports, next, next) {
+        aio_context = exp->ctx;
+        aio_context_acquire(aio_context);
         nbd_export_close(exp);
+        aio_context_release(aio_context);
     }
 }

-- 
2.21.0




* [PATCH 2/1] tests: Use iothreads during iotest 223
  2019-09-17  2:39 [Qemu-devel] [PATCH] nbd: Grab aio context lock in more places Eric Blake
@ 2019-09-20 22:07 ` Eric Blake
  2019-09-23 14:10 ` [PATCH] nbd: Grab aio context lock in more places Sergio Lopez
  1 sibling, 0 replies; 4+ messages in thread
From: Eric Blake @ 2019-09-20 22:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, jsnow, open list:Block layer core, slp, Max Reitz

Doing so catches the bugs we just fixed with NBD not properly using
correct contexts.

Signed-off-by: Eric Blake <eblake@redhat.com>
---

This is https://bugzilla.redhat.com/show_bug.cgi?id=1741094,
distinct from Sergio's patch also related to aiocontext in NBD, which is
https://bugzilla.redhat.com/show_bug.cgi?id=1748253

I could not easily figure out how to tweak iotests to cover Sergio's
issue, but really want to get both fixes in a pull request soon.

 tests/qemu-iotests/223     | 6 ++++--
 tests/qemu-iotests/223.out | 1 +
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/223 b/tests/qemu-iotests/223
index cc48e78ea7dc..2ba3d8124b4f 100755
--- a/tests/qemu-iotests/223
+++ b/tests/qemu-iotests/223
@@ -2,7 +2,7 @@
 #
 # Test reading dirty bitmap over NBD
 #
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2018-2019 Red Hat, Inc.
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -109,7 +109,7 @@ echo
 echo "=== End dirty bitmaps, and start serving image over NBD ==="
 echo

-_launch_qemu 2> >(_filter_nbd)
+_launch_qemu -object iothread,id=io0 2> >(_filter_nbd)

 # Intentionally provoke some errors as well, to check error handling
 silent=
@@ -117,6 +117,8 @@ _send_qemu_cmd $QEMU_HANDLE '{"execute":"qmp_capabilities"}' "return"
 _send_qemu_cmd $QEMU_HANDLE '{"execute":"blockdev-add",
   "arguments":{"driver":"qcow2", "node-name":"n",
     "file":{"driver":"file", "filename":"'"$TEST_IMG"'"}}}' "return"
+_send_qemu_cmd $QEMU_HANDLE '{"execute":"x-blockdev-set-iothread",
+  "arguments":{"node-name":"n", "iothread":"io0"}}' "return"
 _send_qemu_cmd $QEMU_HANDLE '{"execute":"block-dirty-bitmap-disable",
   "arguments":{"node":"n", "name":"b"}}' "return"
 _send_qemu_cmd $QEMU_HANDLE '{"execute":"nbd-server-add",
diff --git a/tests/qemu-iotests/223.out b/tests/qemu-iotests/223.out
index 5d00398c11cb..23b34fcd202e 100644
--- a/tests/qemu-iotests/223.out
+++ b/tests/qemu-iotests/223.out
@@ -27,6 +27,7 @@ wrote 2097152/2097152 bytes at offset 2097152
 {"return": {}}
 {"return": {}}
 {"return": {}}
+{"return": {}}
 {"error": {"class": "GenericError", "desc": "NBD server not running"}}
 {"return": {}}
 {"error": {"class": "GenericError", "desc": "NBD server already running"}}
-- 
2.21.0




* Re: [PATCH] nbd: Grab aio context lock in more places
  2019-09-17  2:39 [Qemu-devel] [PATCH] nbd: Grab aio context lock in more places Eric Blake
  2019-09-20 22:07 ` [PATCH 2/1] tests: Use iothreads during iotest 223 Eric Blake
@ 2019-09-23 14:10 ` Sergio Lopez
  2019-09-23 14:14   ` Eric Blake
  1 sibling, 1 reply; 4+ messages in thread
From: Sergio Lopez @ 2019-09-23 14:10 UTC (permalink / raw)
  To: Eric Blake; +Cc: kwolf, qemu-devel, open list:Network Block Dev..., Max Reitz



Eric Blake <eblake@redhat.com> writes:

> When iothreads are in use, the failure to grab the aio context results
> in an assertion failure when trying to unlock things during blk_unref,
> when trying to unlock a mutex that was not locked.  In short, all
> calls to nbd_export_put need to be done from within the correct aio
> context.  But since nbd_export_put can recursively reach itself via
> nbd_export_close, and recursively grabbing the context would deadlock,
> we can't do the context grab directly in those functions, but must do
> so in their callers.
>
> Hoist the use of the correct aio_context from nbd_export_new() to its
> caller qmp_nbd_server_add().  Then tweak qmp_nbd_server_remove(),
> nbd_eject_notifier(), and nbd_export_close_all() to grab the right
> context, so that all callers within qemu now own the context before
> nbd_export_put() can call blk_unref().
>
> Remaining uses in qemu-nbd don't matter (since that use case does not
> support iothreads).
>
> Suggested-by: Kevin Wolf <kwolf@redhat.com>
> Signed-off-by: Eric Blake <eblake@redhat.com>
> ---
>
> With this in place, my emailed formula [1] for causing an iothread
> assertion failure no longer hits, and all the -nbd and -qcow2 iotests
> still pass.  I would still like to update iotests to cover things (I
> could not quickly figure out how to make iotest 222 use iothreads -
> either we modify that one or add a new one), but wanted to get review
> started on this first.
>
> [1] https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg03383.html
>
>  include/block/nbd.h |  1 +
>  blockdev-nbd.c      | 14 ++++++++++++--
>  nbd/server.c        | 23 +++++++++++++++++++----
>  3 files changed, 32 insertions(+), 6 deletions(-)
>
> diff --git a/include/block/nbd.h b/include/block/nbd.h
> index 21550747cf35..316fd705a9e4 100644
> --- a/include/block/nbd.h
> +++ b/include/block/nbd.h
> @@ -340,6 +340,7 @@ void nbd_export_put(NBDExport *exp);
>
>  BlockBackend *nbd_export_get_blockdev(NBDExport *exp);
>
> +AioContext *nbd_export_aio_context(NBDExport *exp);
>  NBDExport *nbd_export_find(const char *name);
>  void nbd_export_close_all(void);
>
> diff --git a/blockdev-nbd.c b/blockdev-nbd.c
> index 213f226ac1c4..6a8b206e1d74 100644
> --- a/blockdev-nbd.c
> +++ b/blockdev-nbd.c
> @@ -151,6 +151,7 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
>      BlockBackend *on_eject_blk;
>      NBDExport *exp;
>      int64_t len;
> +    AioContext *aio_context;
>
>      if (!nbd_server) {
>          error_setg(errp, "NBD server not running");
> @@ -173,11 +174,13 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
>          return;
>      }
>
> +    aio_context = bdrv_get_aio_context(bs);
> +    aio_context_acquire(aio_context);
>      len = bdrv_getlength(bs);
>      if (len < 0) {
>          error_setg_errno(errp, -len,
>                           "Failed to determine the NBD export's length");
> -        return;
> +        goto out;
>      }
>
>      if (!has_writable) {
> @@ -190,13 +193,16 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
>      exp = nbd_export_new(bs, 0, len, name, NULL, bitmap, !writable, !writable,
>                           NULL, false, on_eject_blk, errp);
>      if (!exp) {
> -        return;
> +        goto out;
>      }
>
>      /* The list of named exports has a strong reference to this export now and
>       * our only way of accessing it is through nbd_export_find(), so we can drop
>       * the strong reference that is @exp. */
>      nbd_export_put(exp);
> +
> + out:
> +    aio_context_release(aio_context);
>  }
>
>  void qmp_nbd_server_remove(const char *name,
> @@ -204,6 +210,7 @@ void qmp_nbd_server_remove(const char *name,
>                             Error **errp)
>  {
>      NBDExport *exp;
> +    AioContext *aio_context;
>
>      if (!nbd_server) {
>          error_setg(errp, "NBD server not running");
> @@ -220,7 +227,10 @@ void qmp_nbd_server_remove(const char *name,
>          mode = NBD_SERVER_REMOVE_MODE_SAFE;
>      }
>
> +    aio_context = nbd_export_aio_context(exp);
> +    aio_context_acquire(aio_context);
>      nbd_export_remove(exp, mode, errp);
> +    aio_context_release(aio_context);
>  }
>
>  void qmp_nbd_server_stop(Error **errp)
> diff --git a/nbd/server.c b/nbd/server.c
> index 378784c1e54a..3003381c86b4 100644
> --- a/nbd/server.c
> +++ b/nbd/server.c
> @@ -1458,7 +1458,12 @@ static void blk_aio_detach(void *opaque)
>  static void nbd_eject_notifier(Notifier *n, void *data)
>  {
>      NBDExport *exp = container_of(n, NBDExport, eject_notifier);
> +    AioContext *aio_context;
> +
> +    aio_context = exp->ctx;
> +    aio_context_acquire(aio_context);
>      nbd_export_close(exp);
> +    aio_context_release(aio_context);
>  }
>
>  NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
> @@ -1477,12 +1482,11 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
>       * NBD exports are used for non-shared storage migration.  Make sure
>       * that BDRV_O_INACTIVE is cleared and the image is ready for write
>       * access since the export could be available before migration handover.
> +     * ctx was acquired in the caller.
>       */
>      assert(name);
>      ctx = bdrv_get_aio_context(bs);
> -    aio_context_acquire(ctx);
>      bdrv_invalidate_cache(bs, NULL);
> -    aio_context_release(ctx);
>
>      /* Don't allow resize while the NBD server is running, otherwise we don't
>       * care what happens with the node. */
> @@ -1490,7 +1494,7 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
>      if (!readonly) {
>          perm |= BLK_PERM_WRITE;
>      }
> -    blk = blk_new(bdrv_get_aio_context(bs), perm,
> +    blk = blk_new(ctx, perm,
>                    BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>                    BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
>      ret = blk_insert_bs(blk, bs, errp);
> @@ -1557,7 +1561,7 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
>      }
>
>      exp->close = close;
> -    exp->ctx = blk_get_aio_context(blk);
> +    exp->ctx = ctx;
>      blk_add_aio_context_notifier(blk, blk_aio_attached, blk_aio_detach, exp);
>
>      if (on_eject_blk) {
> @@ -1590,11 +1594,18 @@ NBDExport *nbd_export_find(const char *name)
>      return NULL;
>  }
>
> +AioContext *
> +nbd_export_aio_context(NBDExport *exp)
> +{
> +    return exp->ctx;
> +}
> +
>  void nbd_export_close(NBDExport *exp)
>  {
>      NBDClient *client, *next;
>
>      nbd_export_get(exp);
> +

I'm not sure if this new line was added here on purpose.

>      /*
>       * TODO: Should we expand QMP NbdServerRemoveNode enum to allow a
>       * close mode that stops advertising the export to new clients but
> @@ -1684,9 +1695,13 @@ BlockBackend *nbd_export_get_blockdev(NBDExport *exp)
>  void nbd_export_close_all(void)
>  {
>      NBDExport *exp, *next;
> +    AioContext *aio_context;
>
>      QTAILQ_FOREACH_SAFE(exp, &exports, next, next) {
> +        aio_context = exp->ctx;
> +        aio_context_acquire(aio_context);
>          nbd_export_close(exp);
> +        aio_context_release(aio_context);
>      }
>  }

Otherwise, LGTM.

Reviewed-by: Sergio Lopez <slp@redhat.com>



* Re: [PATCH] nbd: Grab aio context lock in more places
  2019-09-23 14:10 ` [PATCH] nbd: Grab aio context lock in more places Sergio Lopez
@ 2019-09-23 14:14   ` Eric Blake
  0 siblings, 0 replies; 4+ messages in thread
From: Eric Blake @ 2019-09-23 14:14 UTC (permalink / raw)
  To: Sergio Lopez; +Cc: kwolf, qemu-devel, open list:Network Block Dev..., Max Reitz



On 9/23/19 9:10 AM, Sergio Lopez wrote:
> 
> Eric Blake <eblake@redhat.com> writes:
> 
>> When iothreads are in use, the failure to grab the aio context results
>> in an assertion failure when trying to unlock things during blk_unref,
>> when trying to unlock a mutex that was not locked.  In short, all
>> calls to nbd_export_put need to be done from within the correct aio
>> context.  But since nbd_export_put can recursively reach itself via
>> nbd_export_close, and recursively grabbing the context would deadlock,
>> we can't do the context grab directly in those functions, but must do
>> so in their callers.
>>
>> Hoist the use of the correct aio_context from nbd_export_new() to its
>> caller qmp_nbd_server_add().  Then tweak qmp_nbd_server_remove(),
>> nbd_eject_notifier(), and nbd_export_close_all() to grab the right
>> context, so that all callers within qemu now own the context before
>> nbd_export_put() can call blk_unref().
>>
>> Remaining uses in qemu-nbd don't matter (since that use case does not
>> support iothreads).
>>
>> Suggested-by: Kevin Wolf <kwolf@redhat.com>
>> Signed-off-by: Eric Blake <eblake@redhat.com>
>> ---
>>
>> With this in place, my emailed formula [1] for causing an iothread
>> assertion failure no longer hits, and all the -nbd and -qcow2 iotests
>> still pass.  I would still like to update iotests to cover things (I
>> could not quickly figure out how to make iotest 222 use iothreads -
>> either we modify that one or add a new one), but wanted to get review
>> started on this first.
>>
>> [1] https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg03383.html

I ended up patching 223 instead, as patch 2/1 in reply to the original.


>>  void nbd_export_close(NBDExport *exp)
>>  {
>>      NBDClient *client, *next;
>>
>>      nbd_export_get(exp);
>> +
> 
> I'm not sure if this new line was added here on purpose.
> 

Spurious leftovers from an alternative attempt.  Will drop this.

>>      /*
>>       * TODO: Should we expand QMP NbdServerRemoveNode enum to allow a
>>       * close mode that stops advertising the export to new clients but
>> @@ -1684,9 +1695,13 @@ BlockBackend *nbd_export_get_blockdev(NBDExport *exp)
>>  void nbd_export_close_all(void)
>>  {
>>      NBDExport *exp, *next;
>> +    AioContext *aio_context;
>>
>>      QTAILQ_FOREACH_SAFE(exp, &exports, next, next) {
>> +        aio_context = exp->ctx;
>> +        aio_context_acquire(aio_context);
>>          nbd_export_close(exp);
>> +        aio_context_release(aio_context);
>>      }
>>  }
> 
> Otherwise, LGTM.
> 
> Reviewed-by: Sergio Lopez <slp@redhat.com>
> 

Thanks; queuing this on my NBD tree, along with the iotest followup
(although there's still a bit of time for that to get a review before I
send the pull request later today).


-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




end of thread, other threads:[~2019-09-23 14:17 UTC | newest]

