* [PATCH 0/8] virtio-blk: remove AioContext lock
@ 2022-11-08 21:19 Stefan Hajnoczi
  2022-11-08 21:19 ` [PATCH 1/8] virtio_queue_aio_attach_host_notifier: " Stefan Hajnoczi
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

This is a continuation of Emanuele Esposito's work to remove the AioContext
lock in virtio-blk. In the past it was necessary to acquire the AioContext lock
in order to do I/O. Paolo Bonzini and Emanuele have removed the need for the
AioContext lock in the core block layer code, with a few exceptions like
blk_drain() and blk_set_aio_context().

This patch series annotates virtio-blk functions with
IO_CODE()/GLOBAL_STATE_CODE() so it's clear in which context they are called.
It also removes unnecessary AioContext locks.
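
Roughly speaking, the markers expand to assertions/no-ops along these
lines (a sketch from memory; see include/qemu/main-loop.h for the real
definitions):

    /* must run in the main loop thread, with the BQL held */
    #define GLOBAL_STATE_CODE() \
        do { assert(qemu_in_main_thread()); } while (0)

    /* may run in any AioContext/iothread; mostly a documentation marker */
    #define IO_CODE() \
        do { /* nop */ } while (0)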

The end result is that virtio-blk rarely takes the AioContext lock. Future
patch series will add support for multiple IOThreads so that true multi-queue
can be achieved, but right now virtio-blk still has unprotected shared state
like s->rq, so that needs to wait for another day.

Based-on: <20221102182337.252202-1-stefanha@redhat.com>

Emanuele Giuseppe Esposito (6):
  virtio_queue_aio_attach_host_notifier: remove AioContext lock
  block-backend: enable_write_cache should be atomic
  virtio: categorize callbacks in GS
  virtio-blk: mark GLOBAL_STATE_CODE functions
  virtio-blk: mark IO_CODE functions
  virtio-blk: remove unnecessary AioContext lock from function already
    safe

Stefan Hajnoczi (2):
  virtio-blk: don't acquire AioContext in virtio_blk_handle_vq()
  virtio-blk: minimize virtio_blk_reset() AioContext lock region

 include/block/aio-wait.h        |  4 +-
 block/block-backend.c           |  6 +--
 hw/block/dataplane/virtio-blk.c | 18 +++++--
 hw/block/virtio-blk.c           | 92 ++++++++++++++++++++++++---------
 hw/scsi/virtio-scsi-dataplane.c | 10 ++--
 hw/virtio/virtio-bus.c          |  5 ++
 hw/virtio/virtio-pci.c          |  2 +
 hw/virtio/virtio.c              |  8 +++
 util/aio-wait.c                 |  2 +-
 9 files changed, 106 insertions(+), 41 deletions(-)

-- 
2.38.1

* [PATCH 1/8] virtio_queue_aio_attach_host_notifier: remove AioContext lock
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:17   ` Emanuele Giuseppe Esposito
  2022-11-08 21:19 ` [PATCH 2/8] block-backend: enable_write_cache should be atomic Stefan Hajnoczi
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>

virtio_queue_aio_attach_host_notifier() and
virtio_queue_aio_attach_host_notifier_no_poll() always run in the
main loop, so there is no need to protect them with the AioContext
lock.

On the other hand, virtio_queue_aio_detach_host_notifier() runs
in a BH in the iothread context, but it is always scheduled
(and thus serialized) by the main loop. Therefore removing the
AioContext lock is safe.

In order to remove the AioContext lock it is necessary to switch
aio_wait_bh_oneshot() to AIO_WAIT_WHILE_UNLOCKED(). virtio-blk and
virtio-scsi are the only users of aio_wait_bh_oneshot() so it is
possible to make this change.

For now bdrv_set_aio_context() still needs the AioContext lock.
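
For reference, the caller contract differs roughly as follows (a sketch
of the usage, not the exact macro bodies):

    /* caller holds ctx; the macro releases it while polling: */
    aio_context_acquire(ctx);
    AIO_WAIT_WHILE(ctx, !data.done);
    aio_context_release(ctx);

    /* caller must NOT hold ctx; polling happens without the lock: */
    AIO_WAIT_WHILE_UNLOCKED(ctx, !data.done);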

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220609143727.1151816-2-eesposit@redhat.com>
---
 include/block/aio-wait.h        |  4 ++--
 hw/block/dataplane/virtio-blk.c | 10 ++++++----
 hw/block/virtio-blk.c           |  2 ++
 hw/scsi/virtio-scsi-dataplane.c | 10 ++++------
 util/aio-wait.c                 |  2 +-
 5 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index dd9a7f6461..fce6bfee3a 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -131,8 +131,8 @@ void aio_wait_kick(void);
  *
  * Run a BH in @ctx and wait for it to complete.
  *
- * Must be called from the main loop thread with @ctx acquired exactly once.
- * Note that main loop event processing may occur.
+ * Must be called from the main loop thread. @ctx must not be acquired by the
+ * caller. Note that main loop event processing may occur.
  */
 void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque);
 
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..975f5ca8c4 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -167,6 +167,8 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     Error *local_err = NULL;
     int r;
 
+    GLOBAL_STATE_CODE();
+
     if (vblk->dataplane_started || s->starting) {
         return 0;
     }
@@ -245,13 +247,11 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     }
 
     /* Get this show started by hooking up our callbacks */
-    aio_context_acquire(s->ctx);
     for (i = 0; i < nvqs; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
 
         virtio_queue_aio_attach_host_notifier(vq, s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
   fail_aio_context:
@@ -301,6 +301,8 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     unsigned i;
     unsigned nvqs = s->conf->num_queues;
 
+    GLOBAL_STATE_CODE();
+
     if (!vblk->dataplane_started || s->stopping) {
         return;
     }
@@ -314,9 +316,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     s->stopping = true;
     trace_virtio_blk_data_plane_stop(s);
 
-    aio_context_acquire(s->ctx);
     aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
 
+    aio_context_acquire(s->ctx);
+
     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
     blk_drain(s->conf->conf.blk);
 
@@ -325,7 +328,6 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
      * BlockBackend in the iothread, that's ok
      */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context(), NULL);
-
     aio_context_release(s->ctx);
 
     /*
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 1762517878..cdc6fd5979 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -100,6 +100,8 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
     VirtIOBlock *s = next->dev;
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
 
+    IO_CODE();
+
     aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
     while (next) {
         VirtIOBlockReq *req = next;
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 20bb91766e..f6f55d4511 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -91,6 +91,8 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
 
+    GLOBAL_STATE_CODE();
+
     if (s->dataplane_started ||
         s->dataplane_starting ||
         s->dataplane_fenced) {
@@ -138,20 +140,18 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
 
     /*
      * These fields are visible to the IOThread so we rely on implicit barriers
-     * in aio_context_acquire() on the write side and aio_notify_accept() on
-     * the read side.
+     * in virtio_queue_aio_attach_host_notifier() on the write side and
+     * aio_notify_accept() on the read side.
      */
     s->dataplane_starting = false;
     s->dataplane_started = true;
 
-    aio_context_acquire(s->ctx);
     virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
     virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
 fail_host_notifiers:
@@ -197,9 +197,7 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     }
     s->dataplane_stopping = true;
 
-    aio_context_acquire(s->ctx);
     aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
-    aio_context_release(s->ctx);
 
     blk_drain_all(); /* ensure there are no in-flight requests */
 
diff --git a/util/aio-wait.c b/util/aio-wait.c
index 98c5accd29..80f26ee520 100644
--- a/util/aio-wait.c
+++ b/util/aio-wait.c
@@ -82,5 +82,5 @@ void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
 
     aio_bh_schedule_oneshot(ctx, aio_wait_bh, &data);
-    AIO_WAIT_WHILE(ctx, !data.done);
+    AIO_WAIT_WHILE_UNLOCKED(ctx, !data.done);
 }
-- 
2.38.1

* [PATCH 2/8] block-backend: enable_write_cache should be atomic
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
  2022-11-08 21:19 ` [PATCH 1/8] virtio_queue_aio_attach_host_notifier: " Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:17   ` Emanuele Giuseppe Esposito
  2022-11-08 21:19 ` [PATCH 3/8] virtio: categorize callbacks in GS Stefan Hajnoczi
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>

It is read from IO_CODE functions and written with the BQL held,
so making it atomic should be enough.

Also remove the AioContext lock that was sporadically
taken around the setter.
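
The resulting access pattern, as applied in the diff below, is the
usual single-writer/concurrent-reader atomics idiom:

    /* writer: main loop, BQL held */
    qatomic_set(&blk->enable_write_cache, wce);

    /* reader: possibly running in an iothread */
    bool wce = qatomic_read(&blk->enable_write_cache);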

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220609143727.1151816-3-eesposit@redhat.com>
---
 block/block-backend.c | 6 +++---
 hw/block/virtio-blk.c | 4 ----
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index c0c7d56c8d..949418cad4 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -60,7 +60,7 @@ struct BlockBackend {
      * can be used to restore those options in the new BDS on insert) */
     BlockBackendRootState root_state;
 
-    bool enable_write_cache;
+    bool enable_write_cache; /* Atomic */
 
     /* I/O stats (display with "info blockstats"). */
     BlockAcctStats stats;
@@ -1939,13 +1939,13 @@ bool blk_is_sg(BlockBackend *blk)
 bool blk_enable_write_cache(BlockBackend *blk)
 {
     IO_CODE();
-    return blk->enable_write_cache;
+    return qatomic_read(&blk->enable_write_cache);
 }
 
 void blk_set_enable_write_cache(BlockBackend *blk, bool wce)
 {
     IO_CODE();
-    blk->enable_write_cache = wce;
+    qatomic_set(&blk->enable_write_cache, wce);
 }
 
 void blk_activate(BlockBackend *blk, Error **errp)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cdc6fd5979..96d00103a4 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -961,9 +961,7 @@ static void virtio_blk_set_config(VirtIODevice *vdev, const uint8_t *config)
 
     memcpy(&blkcfg, config, s->config_size);
 
-    aio_context_acquire(blk_get_aio_context(s->blk));
     blk_set_enable_write_cache(s->blk, blkcfg.wce != 0);
-    aio_context_release(blk_get_aio_context(s->blk));
 }
 
 static uint64_t virtio_blk_get_features(VirtIODevice *vdev, uint64_t features,
@@ -1031,11 +1029,9 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status)
      * s->blk would erroneously be placed in writethrough mode.
      */
     if (!virtio_vdev_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE)) {
-        aio_context_acquire(blk_get_aio_context(s->blk));
         blk_set_enable_write_cache(s->blk,
                                    virtio_vdev_has_feature(vdev,
                                                            VIRTIO_BLK_F_WCE));
-        aio_context_release(blk_get_aio_context(s->blk));
     }
 }
 
-- 
2.38.1

* [PATCH 3/8] virtio: categorize callbacks in GS
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
  2022-11-08 21:19 ` [PATCH 1/8] virtio_queue_aio_attach_host_notifier: " Stefan Hajnoczi
  2022-11-08 21:19 ` [PATCH 2/8] block-backend: enable_write_cache should be atomic Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:19   ` Emanuele Giuseppe Esposito
  2022-11-08 21:19 ` [PATCH 4/8] virtio-blk: mark GLOBAL_STATE_CODE functions Stefan Hajnoczi
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>

All the callbacks below always run in the main loop.

The callbacks are the following:
- start/stop_ioeventfd: these are the callbacks where
  blk_set_aio_context(iothread) is done, so they are called in the main
  loop.

- save and load: called during migration, when VM is stopped from the
  main loop.

- reset: before calling this callback, stop_ioeventfd is invoked, so
  it can only run in the main loop.

- set_status: going through all the callers we can see it is called
  from a MemoryRegionOps callback, which always runs in a vCPU thread
  and holds the BQL.

- realize: the iothread is not even created yet.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220609143727.1151816-5-eesposit@redhat.com>
---
 hw/block/virtio-blk.c  | 2 ++
 hw/virtio/virtio-bus.c | 5 +++++
 hw/virtio/virtio-pci.c | 2 ++
 hw/virtio/virtio.c     | 8 ++++++++
 4 files changed, 17 insertions(+)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 96d00103a4..96bc11d2fe 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1005,6 +1005,8 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status)
 {
     VirtIOBlock *s = VIRTIO_BLK(vdev);
 
+    GLOBAL_STATE_CODE();
+
     if (!(status & (VIRTIO_CONFIG_S_DRIVER | VIRTIO_CONFIG_S_DRIVER_OK))) {
         assert(!s->dataplane_started);
     }
diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c
index 896feb37a1..74cdf4bd27 100644
--- a/hw/virtio/virtio-bus.c
+++ b/hw/virtio/virtio-bus.c
@@ -23,6 +23,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "qapi/error.h"
@@ -224,6 +225,8 @@ int virtio_bus_start_ioeventfd(VirtioBusState *bus)
     VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
     int r;
 
+    GLOBAL_STATE_CODE();
+
     if (!k->ioeventfd_assign || !k->ioeventfd_enabled(proxy)) {
         return -ENOSYS;
     }
@@ -248,6 +251,8 @@ void virtio_bus_stop_ioeventfd(VirtioBusState *bus)
     VirtIODevice *vdev;
     VirtioDeviceClass *vdc;
 
+    GLOBAL_STATE_CODE();
+
     if (!bus->ioeventfd_started) {
         return;
     }
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index a1c9dfa7bb..4f9a94f61b 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -313,6 +313,8 @@ static void virtio_ioport_write(void *opaque, uint32_t addr, uint32_t val)
     uint16_t vector;
     hwaddr pa;
 
+    GLOBAL_STATE_CODE();
+
     switch (addr) {
     case VIRTIO_PCI_GUEST_FEATURES:
         /* Guest does not negotiate properly?  We have to assume nothing. */
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9683b2e158..468e8f5ad0 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -2422,6 +2422,8 @@ int virtio_set_status(VirtIODevice *vdev, uint8_t val)
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
     trace_virtio_set_status(vdev, val);
 
+    GLOBAL_STATE_CODE();
+
     if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
         if (!(vdev->status & VIRTIO_CONFIG_S_FEATURES_OK) &&
             val & VIRTIO_CONFIG_S_FEATURES_OK) {
@@ -2515,6 +2517,8 @@ void virtio_reset(void *opaque)
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
     int i;
 
+    GLOBAL_STATE_CODE();
+
     virtio_set_status(vdev, 0);
     if (current_cpu) {
         /* Guest initiated reset */
@@ -3357,6 +3361,8 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
     uint32_t guest_features_lo = (vdev->guest_features & 0xffffffff);
     int i;
 
+    GLOBAL_STATE_CODE();
+
     if (k->save_config) {
         k->save_config(qbus->parent, f);
     }
@@ -3508,6 +3514,8 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
     VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
 
+    GLOBAL_STATE_CODE();
+
     /*
      * We poison the endianness to ensure it does not get used before
      * subsections have been loaded.
-- 
2.38.1

* [PATCH 4/8] virtio-blk: mark GLOBAL_STATE_CODE functions
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2022-11-08 21:19 ` [PATCH 3/8] virtio: categorize callbacks in GS Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:19   ` Emanuele Giuseppe Esposito
  2022-11-08 21:19 ` [PATCH 5/8] virtio-blk: mark IO_CODE functions Stefan Hajnoczi
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>

Just as done in the block API, mark functions in virtio-blk
that are always called in the main loop with the BQL held.

We know these functions are GS because they are all callbacks
from the virtio.c API, which has already classified them as GS.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220609143727.1151816-6-eesposit@redhat.com>
---
 hw/block/dataplane/virtio-blk.c |  4 ++++
 hw/block/virtio-blk.c           | 27 +++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 975f5ca8c4..728c9cd86c 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -89,6 +89,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
 
+    GLOBAL_STATE_CODE();
+
     *dataplane = NULL;
 
     if (conf->iothread) {
@@ -140,6 +142,8 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
 {
     VirtIOBlock *vblk;
 
+    GLOBAL_STATE_CODE();
+
     if (!s) {
         return;
     }
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 96bc11d2fe..02b213a140 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -845,11 +845,17 @@ static void virtio_blk_dma_restart_bh(void *opaque)
     aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
+/*
+ * Only called when VM is started or stopped in cpus.c.
+ * No iothread runs in parallel
+ */
 static void virtio_blk_dma_restart_cb(void *opaque, bool running,
                                       RunState state)
 {
     VirtIOBlock *s = opaque;
 
+    GLOBAL_STATE_CODE();
+
     if (!running) {
         return;
     }
@@ -867,8 +873,14 @@ static void virtio_blk_reset(VirtIODevice *vdev)
     AioContext *ctx;
     VirtIOBlockReq *req;
 
+    GLOBAL_STATE_CODE();
+
     ctx = blk_get_aio_context(s->blk);
     aio_context_acquire(ctx);
+    /*
+     * This drain together with ->stop_ioeventfd() in virtio_pci_reset()
+     * stops all Iothreads.
+     */
     blk_drain(s->blk);
 
     /* We drop queued requests after blk_drain() because blk_drain() itself can
@@ -1037,11 +1049,17 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status)
     }
 }
 
+/*
+ * VM is stopped while doing migration, so iothread has
+ * no requests to process.
+ */
 static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f)
 {
     VirtIOBlock *s = VIRTIO_BLK(vdev);
     VirtIOBlockReq *req = s->rq;
 
+    GLOBAL_STATE_CODE();
+
     while (req) {
         qemu_put_sbyte(f, 1);
 
@@ -1055,11 +1073,17 @@ static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f)
     qemu_put_sbyte(f, 0);
 }
 
+/*
+ * VM is stopped while doing migration, so iothread has
+ * no requests to process.
+ */
 static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f,
                                   int version_id)
 {
     VirtIOBlock *s = VIRTIO_BLK(vdev);
 
+    GLOBAL_STATE_CODE();
+
     while (qemu_get_sbyte(f)) {
         unsigned nvqs = s->conf.num_queues;
         unsigned vq_idx = 0;
@@ -1108,6 +1132,7 @@ static const BlockDevOps virtio_block_ops = {
     .resize_cb = virtio_blk_resize,
 };
 
+/* Iothread is not yet created */
 static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -1116,6 +1141,8 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
     Error *err = NULL;
     unsigned i;
 
+    GLOBAL_STATE_CODE();
+
     if (!conf->conf.blk) {
         error_setg(errp, "drive property not set");
         return;
-- 
2.38.1

* [PATCH 5/8] virtio-blk: mark IO_CODE functions
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
                   ` (3 preceding siblings ...)
  2022-11-08 21:19 ` [PATCH 4/8] virtio-blk: mark GLOBAL_STATE_CODE functions Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:20   ` Emanuele Giuseppe Esposito
  2022-11-08 21:19 ` [PATCH 6/8] virtio-blk: remove unnecessary AioContext lock from function already safe Stefan Hajnoczi
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>

Just as done in the block API, mark functions in virtio-blk
that are also called from iothread(s).

We know these functions are IO because many are blk_* callbacks,
which always run in the device iothread, and the rest are propagated
up from the leaf IO functions (if a function calls an IO_CODE
function, it is categorized as IO_CODE too).
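
As an illustration of the propagation rule (hypothetical helpers, not
code from this series):

    static void leaf_io_fn(void)    /* e.g. a blk_* completion callback */
    {
        IO_CODE();
    }

    static void caller_fn(void)     /* calls IO code, so it is IO code too */
    {
        IO_CODE();
        leaf_io_fn();
    }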

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220609143727.1151816-7-eesposit@redhat.com>
---
 hw/block/dataplane/virtio-blk.c |  4 +++
 hw/block/virtio-blk.c           | 45 ++++++++++++++++++++++++++++-----
 2 files changed, 43 insertions(+), 6 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 728c9cd86c..3593ac0e7b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -63,6 +63,8 @@ static void notify_guest_bh(void *opaque)
     unsigned long bitmap[BITS_TO_LONGS(nvqs)];
     unsigned j;
 
+    IO_CODE();
+
     memcpy(bitmap, s->batch_notify_vqs, sizeof(bitmap));
     memset(s->batch_notify_vqs, 0, sizeof(bitmap));
 
@@ -288,6 +290,8 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
     VirtIOBlockDataPlane *s = opaque;
     unsigned i;
 
+    IO_CODE();
+
     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
 
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 02b213a140..f8fcf25292 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -39,6 +39,8 @@
 static void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq,
                                     VirtIOBlockReq *req)
 {
+    IO_CODE();
+
     req->dev = s;
     req->vq = vq;
     req->qiov.size = 0;
@@ -57,6 +59,8 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
     VirtIOBlock *s = req->dev;
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
 
+    IO_CODE();
+
     trace_virtio_blk_req_complete(vdev, req, status);
 
     stb_p(&req->in->status, status);
@@ -76,6 +80,8 @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error,
     VirtIOBlock *s = req->dev;
     BlockErrorAction action = blk_get_error_action(s->blk, is_read, error);
 
+    IO_CODE();
+
     if (action == BLOCK_ERROR_ACTION_STOP) {
         /* Break the link as the next request is going to be parsed from the
          * ring again. Otherwise we may end up doing a double completion! */
@@ -143,7 +149,9 @@ static void virtio_blk_flush_complete(void *opaque, int ret)
     VirtIOBlockReq *req = opaque;
     VirtIOBlock *s = req->dev;
 
+    IO_CODE();
     aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
+
     if (ret) {
         if (virtio_blk_handle_rw_error(req, -ret, 0, true)) {
             goto out;
@@ -165,7 +173,9 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
     bool is_write_zeroes = (virtio_ldl_p(VIRTIO_DEVICE(s), &req->out.type) &
                             ~VIRTIO_BLK_T_BARRIER) == VIRTIO_BLK_T_WRITE_ZEROES;
 
+    IO_CODE();
     aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
+
     if (ret) {
         if (virtio_blk_handle_rw_error(req, -ret, false, is_write_zeroes)) {
             goto out;
@@ -198,6 +208,8 @@ static void virtio_blk_ioctl_complete(void *opaque, int status)
     struct virtio_scsi_inhdr *scsi;
     struct sg_io_hdr *hdr;
 
+    IO_CODE();
+
     scsi = (void *)req->elem.in_sg[req->elem.in_num - 2].iov_base;
 
     if (status) {
@@ -239,6 +251,8 @@ static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s, VirtQueue *vq)
 {
     VirtIOBlockReq *req = virtqueue_pop(vq, sizeof(VirtIOBlockReq));
 
+    IO_CODE();
+
     if (req) {
         virtio_blk_init_request(s, vq, req);
     }
@@ -259,6 +273,8 @@ static int virtio_blk_handle_scsi_req(VirtIOBlockReq *req)
     BlockAIOCB *acb;
 #endif
 
+    IO_CODE();
+
     /*
      * We require at least one output segment each for the virtio_blk_outhdr
      * and the SCSI command block.
@@ -357,6 +373,7 @@ fail:
 static void virtio_blk_handle_scsi(VirtIOBlockReq *req)
 {
     int status;
+    IO_CODE();
 
     status = virtio_blk_handle_scsi_req(req);
     if (status != -EINPROGRESS) {
@@ -374,6 +391,8 @@ static inline void submit_requests(VirtIOBlock *s, MultiReqBuffer *mrb,
     bool is_write = mrb->is_write;
     BdrvRequestFlags flags = 0;
 
+    IO_CODE();
+
     if (num_reqs > 1) {
         int i;
         struct iovec *tmp_iov = qiov->iov;
@@ -423,6 +442,8 @@ static int multireq_compare(const void *a, const void *b)
     const VirtIOBlockReq *req1 = *(VirtIOBlockReq **)a,
                          *req2 = *(VirtIOBlockReq **)b;
 
+    IO_CODE();
+
     /*
      * Note that we can't simply subtract sector_num1 from sector_num2
      * here as that could overflow the return value.
@@ -442,6 +463,8 @@ static void virtio_blk_submit_multireq(VirtIOBlock *s, MultiReqBuffer *mrb)
     uint32_t max_transfer;
     int64_t sector_num = 0;
 
+    IO_CODE();
+
     if (mrb->num_reqs == 1) {
         submit_requests(s, mrb, 0, 1, -1);
         mrb->num_reqs = 0;
@@ -491,6 +514,8 @@ static void virtio_blk_handle_flush(VirtIOBlockReq *req, MultiReqBuffer *mrb)
 {
     VirtIOBlock *s = req->dev;
 
+    IO_CODE();
+
     block_acct_start(blk_get_stats(s->blk), &req->acct, 0,
                      BLOCK_ACCT_FLUSH);
 
@@ -509,6 +534,8 @@ static bool virtio_blk_sect_range_ok(VirtIOBlock *dev,
     uint64_t nb_sectors = size >> BDRV_SECTOR_BITS;
     uint64_t total_sectors;
 
+    IO_CODE();
+
     if (nb_sectors > BDRV_REQUEST_MAX_SECTORS) {
         return false;
     }
@@ -535,6 +562,8 @@ static uint8_t virtio_blk_handle_discard_write_zeroes(VirtIOBlockReq *req,
     uint8_t err_status;
     int bytes;
 
+    IO_CODE();
+
     sector = virtio_ldq_p(vdev, &dwz_hdr->sector);
     num_sectors = virtio_ldl_p(vdev, &dwz_hdr->num_sectors);
     flags = virtio_ldl_p(vdev, &dwz_hdr->flags);
@@ -613,6 +642,8 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
     VirtIOBlock *s = req->dev;
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
 
+    IO_CODE();
+
     if (req->elem.out_num < 1 || req->elem.in_num < 1) {
         virtio_error(vdev, "virtio-blk missing headers");
         return -1;
@@ -763,6 +794,8 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     MultiReqBuffer mrb = {};
     bool suppress_notifications = virtio_queue_get_notification(vq);
 
+    IO_CODE();
+
     aio_context_acquire(blk_get_aio_context(s->blk));
     blk_io_plug(s->blk);
 
@@ -796,6 +829,8 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIOBlock *s = (VirtIOBlock *)vdev;
 
+    IO_CODE();
+
     if (s->dataplane && !s->dataplane_started) {
         /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
          * dataplane here instead of waiting for .set_status().
@@ -846,8 +881,9 @@ static void virtio_blk_dma_restart_bh(void *opaque)
 }
 
 /*
- * Only called when VM is started or stopped in cpus.c.
- * No iothread runs in parallel
+ * Only called when VM is started or stopped in cpus.c. When running is true
+ * ->start_ioeventfd() has already been called. When running is false
+ * ->stop_ioeventfd() has not yet been called.
  */
 static void virtio_blk_dma_restart_cb(void *opaque, bool running,
                                       RunState state)
@@ -867,6 +903,7 @@ static void virtio_blk_dma_restart_cb(void *opaque, bool running,
             virtio_blk_dma_restart_bh, s);
 }
 
+/* ->stop_ioeventfd() has already been called by virtio_bus_reset() */
 static void virtio_blk_reset(VirtIODevice *vdev)
 {
     VirtIOBlock *s = VIRTIO_BLK(vdev);
@@ -877,10 +914,6 @@ static void virtio_blk_reset(VirtIODevice *vdev)
 
     ctx = blk_get_aio_context(s->blk);
     aio_context_acquire(ctx);
-    /*
-     * This drain together with ->stop_ioeventfd() in virtio_pci_reset()
-     * stops all Iothreads.
-     */
     blk_drain(s->blk);
 
     /* We drop queued requests after blk_drain() because blk_drain() itself can
-- 
2.38.1

* [PATCH 6/8] virtio-blk: remove unnecessary AioContext lock from function already safe
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
                   ` (4 preceding siblings ...)
  2022-11-08 21:19 ` [PATCH 5/8] virtio-blk: mark IO_CODE functions Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:23   ` Emanuele Giuseppe Esposito
  2022-11-08 21:19 ` [PATCH 7/8] virtio-blk: don't acquire AioContext in virtio_blk_handle_vq() Stefan Hajnoczi
  2022-11-08 21:19 ` [PATCH 8/8] virtio-blk: minimize virtio_blk_reset() AioContext lock region Stefan Hajnoczi
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>

The AioContext lock was introduced in commit b9e413dd375 and in this
instance it is used to protect these 3 functions:
- virtio_blk_handle_rw_error
- virtio_blk_req_complete
- block_acct_done

Now that all three of the above functions are protected with
their own locks, we can get rid of the AioContext lock.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220609143727.1151816-9-eesposit@redhat.com>
---
 hw/block/virtio-blk.c | 19 ++-----------------
 1 file changed, 2 insertions(+), 17 deletions(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f8fcf25292..faea045178 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -108,7 +108,6 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
 
     IO_CODE();
 
-    aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
     while (next) {
         VirtIOBlockReq *req = next;
         next = req->mr_next;
@@ -141,7 +140,6 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
         block_acct_done(blk_get_stats(s->blk), &req->acct);
         virtio_blk_free_request(req);
     }
-    aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
 static void virtio_blk_flush_complete(void *opaque, int ret)
@@ -150,20 +148,16 @@ static void virtio_blk_flush_complete(void *opaque, int ret)
     VirtIOBlock *s = req->dev;
 
     IO_CODE();
-    aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
 
     if (ret) {
         if (virtio_blk_handle_rw_error(req, -ret, 0, true)) {
-            goto out;
+            return;
         }
     }
 
     virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
     block_acct_done(blk_get_stats(s->blk), &req->acct);
     virtio_blk_free_request(req);
-
-out:
-    aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
 static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
@@ -174,11 +168,10 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
                             ~VIRTIO_BLK_T_BARRIER) == VIRTIO_BLK_T_WRITE_ZEROES;
 
     IO_CODE();
-    aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
 
     if (ret) {
         if (virtio_blk_handle_rw_error(req, -ret, false, is_write_zeroes)) {
-            goto out;
+            return;
         }
     }
 
@@ -187,9 +180,6 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
         block_acct_done(blk_get_stats(s->blk), &req->acct);
     }
     virtio_blk_free_request(req);
-
-out:
-    aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
 #ifdef __linux__
@@ -238,10 +228,8 @@ static void virtio_blk_ioctl_complete(void *opaque, int status)
     virtio_stl_p(vdev, &scsi->data_len, hdr->dxfer_len);
 
 out:
-    aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
     virtio_blk_req_complete(req, status);
     virtio_blk_free_request(req);
-    aio_context_release(blk_get_aio_context(s->conf.conf.blk));
     g_free(ioctl_req);
 }
 
@@ -852,7 +840,6 @@ static void virtio_blk_dma_restart_bh(void *opaque)
 
     s->rq = NULL;
 
-    aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
     while (req) {
         VirtIOBlockReq *next = req->next;
         if (virtio_blk_handle_request(req, &mrb)) {
@@ -876,8 +863,6 @@ static void virtio_blk_dma_restart_bh(void *opaque)
 
     /* Paired with inc in virtio_blk_dma_restart_cb() */
     blk_dec_in_flight(s->conf.conf.blk);
-
-    aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
 /*
-- 
2.38.1

* [PATCH 7/8] virtio-blk: don't acquire AioContext in virtio_blk_handle_vq()
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
                   ` (5 preceding siblings ...)
  2022-11-08 21:19 ` [PATCH 6/8] virtio-blk: remove unnecessary AioContext lock from function already safe Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:41   ` Emanuele Giuseppe Esposito
  2022-11-08 21:19 ` [PATCH 8/8] virtio-blk: minimize virtio_blk_reset() AioContext lock region Stefan Hajnoczi
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

There is no need to acquire the AioContext lock in virtio_blk_handle_vq()
because no APIs used in the function require it and nothing else in the
virtio-blk code requires mutual exclusion anymore.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/virtio-blk.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index faea045178..771d87cfbe 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -784,7 +784,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
 
     IO_CODE();
 
-    aio_context_acquire(blk_get_aio_context(s->blk));
     blk_io_plug(s->blk);
 
     do {
@@ -810,7 +809,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     }
 
     blk_io_unplug(s->blk);
-    aio_context_release(blk_get_aio_context(s->blk));
 }
 
 static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
-- 
2.38.1

* [PATCH 8/8] virtio-blk: minimize virtio_blk_reset() AioContext lock region
  2022-11-08 21:19 [PATCH 0/8] virtio-blk: remove AioContext lock Stefan Hajnoczi
                   ` (6 preceding siblings ...)
  2022-11-08 21:19 ` [PATCH 7/8] virtio-blk: don't acquire AioContext in virtio_blk_handle_vq() Stefan Hajnoczi
@ 2022-11-08 21:19 ` Stefan Hajnoczi
  2022-11-11 12:41   ` Emanuele Giuseppe Esposito
  7 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2022-11-08 21:19 UTC (permalink / raw)
  To: qemu-devel
  Cc: Emanuele Giuseppe Esposito, Michael S. Tsirkin, qemu-block,
	Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
	Stefan Hajnoczi

blk_drain() needs the lock because it calls AIO_WAIT_WHILE().

The s->rq loop doesn't need the lock because dataplane has been stopped
when virtio_blk_reset() is called.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/virtio-blk.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 771d87cfbe..0b411b3065 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -898,6 +898,7 @@ static void virtio_blk_reset(VirtIODevice *vdev)
     ctx = blk_get_aio_context(s->blk);
     aio_context_acquire(ctx);
     blk_drain(s->blk);
+    aio_context_release(ctx);
 
     /* We drop queued requests after blk_drain() because blk_drain() itself can
      * produce them. */
@@ -908,8 +909,6 @@ static void virtio_blk_reset(VirtIODevice *vdev)
         virtio_blk_free_request(req);
     }
 
-    aio_context_release(ctx);
-
     assert(!s->dataplane_started);
     blk_set_enable_write_cache(s->blk, s->original_wce);
 }
-- 
2.38.1

* Re: [PATCH 1/8] virtio_queue_aio_attach_host_notifier: remove AioContext lock
  2022-11-08 21:19 ` [PATCH 1/8] virtio_queue_aio_attach_host_notifier: " Stefan Hajnoczi
@ 2022-11-11 12:17   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:17 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



On 08/11/2022 22:19, Stefan Hajnoczi wrote:
> From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>
> virtio_queue_aio_attach_host_notifier() and
> virtio_queue_aio_attach_host_notifier_no_poll() always run in the
> main loop, so there is no need to protect them with the AioContext
> lock.
>
> On the other hand, virtio_queue_aio_detach_host_notifier() runs
> in a BH in the iothread context, but it is always scheduled
> (and thus serialized) by the main loop. Therefore removing the
> AioContext lock is safe.
>
> In order to remove the AioContext lock it is necessary to switch
> aio_wait_bh_oneshot() to AIO_WAIT_WHILE_UNLOCKED(). virtio-blk and
> virtio-scsi are the only users of aio_wait_bh_oneshot() so it is
> possible to make this change.
>
> For now bdrv_set_aio_context() still needs the AioContext lock.
>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Message-Id: <20220609143727.1151816-2-eesposit@redhat.com>
>


Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

* Re: [PATCH 2/8] block-backend: enable_write_cache should be atomic
  2022-11-08 21:19 ` [PATCH 2/8] block-backend: enable_write_cache should be atomic Stefan Hajnoczi
@ 2022-11-11 12:17   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:17 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



On 08/11/2022 22:19, Stefan Hajnoczi wrote:
> From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>
> It is read from IO_CODE functions and written with the BQL held,
> so making it atomic should be enough.
>
> Also remove the AioContext lock that was sporadically
> taken around the setter.
>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Message-Id: <20220609143727.1151816-3-eesposit@redhat.com>
>


Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

* Re: [PATCH 3/8] virtio: categorize callbacks in GS
  2022-11-08 21:19 ` [PATCH 3/8] virtio: categorize callbacks in GS Stefan Hajnoczi
@ 2022-11-11 12:19   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:19 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



On 08/11/2022 22:19, Stefan Hajnoczi wrote:
> From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>
> All the callbacks below always run in the main loop.
>
> The callbacks are the following:
> - start/stop_ioeventfd: these are the callbacks where
>   blk_set_aio_context(iothread) is done, so they are called in the main
>   loop.
>
> - save and load: called during migration, when VM is stopped from the
>   main loop.
>
> - reset: before calling this callback, stop_ioeventfd is invoked, so
>   it can only run in the main loop.
>
> - set_status: going through all the callers we can see it is called
>   from a MemoryRegionOps callback, which always runs in a vCPU thread
>   and holds the BQL.
>
> - realize: the iothread is not even created yet.
>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Message-Id: <20220609143727.1151816-5-eesposit@redhat.com>
>


Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

* Re: [PATCH 4/8] virtio-blk: mark GLOBAL_STATE_CODE functions
  2022-11-08 21:19 ` [PATCH 4/8] virtio-blk: mark GLOBAL_STATE_CODE functions Stefan Hajnoczi
@ 2022-11-11 12:19   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:19 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



On 08/11/2022 22:19, Stefan Hajnoczi wrote:
> From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>
> Just as done in the block API, mark functions in virtio-blk
> that are always called in the main loop with the BQL held.
>
> We know these functions are GS because they are all callbacks
> from the virtio.c API, which has already classified them as GS.
>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Message-Id: <20220609143727.1151816-6-eesposit@redhat.com>
>


Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

* Re: [PATCH 5/8] virtio-blk: mark IO_CODE functions
  2022-11-08 21:19 ` [PATCH 5/8] virtio-blk: mark IO_CODE functions Stefan Hajnoczi
@ 2022-11-11 12:20   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:20 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



On 08/11/2022 22:19, Stefan Hajnoczi wrote:
> From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>
> Just as done in the block API, mark functions in virtio-blk
> that are also called from iothread(s).
>
> We know these functions are IO because many are blk_* callbacks,
> which always run in the device iothread, and the rest are propagated
> up from the leaf IO functions (if a function calls an IO_CODE
> function, it is categorized as IO_CODE too).
>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Message-Id: <20220609143727.1151816-7-eesposit@redhat.com>
>


Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

* Re: [PATCH 6/8] virtio-blk: remove unnecessary AioContext lock from function already safe
  2022-11-08 21:19 ` [PATCH 6/8] virtio-blk: remove unnecessary AioContext lock from function already safe Stefan Hajnoczi
@ 2022-11-11 12:23   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:23 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



On 08/11/2022 22:19, Stefan Hajnoczi wrote:
> From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>
> The AioContext lock was introduced in commit b9e413dd375 and in this
> instance it is used to protect these 3 functions:
> - virtio_blk_handle_rw_error
> - virtio_blk_req_complete
> - block_acct_done
>
> Now that all three of the above functions are protected with
> their own locks, we can get rid of the AioContext lock.
>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Message-Id: <20220609143727.1151816-9-eesposit@redhat.com>
>


Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

* Re: [PATCH 7/8] virtio-blk: don't acquire AioContext in virtio_blk_handle_vq()
  2022-11-08 21:19 ` [PATCH 7/8] virtio-blk: don't acquire AioContext in virtio_blk_handle_vq() Stefan Hajnoczi
@ 2022-11-11 12:41   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:41 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



On 08/11/2022 22:19, Stefan Hajnoczi wrote:
> There is no need to acquire the AioContext lock in virtio_blk_handle_vq()
> because no APIs used in the function require it and nothing else in the
> virtio-blk code requires mutual exclusion anymore.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/block/virtio-blk.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index faea045178..771d87cfbe 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -784,7 +784,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
>  
>      IO_CODE();
>  
> -    aio_context_acquire(blk_get_aio_context(s->blk));
>      blk_io_plug(s->blk);
>  
>      do {
> @@ -810,7 +809,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
>      }
>  
>      blk_io_unplug(s->blk);
> -    aio_context_release(blk_get_aio_context(s->blk));
>  }
>  
>  static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> 

As already discussed offline, this might be problematic for the work I
am doing.

Basically I am trying to replace the AioContext lock with a rwlock for
graph modifications, and in order to use it we must convert all
BlockDriver IO functions into coroutines, because they traverse the graph
and must take the read lock (defined as coroutine_fn).

This implies that for now we need to implement blk_* and bdrv_*
functions in a similar fashion to generated_co_wrapper: create a
coroutine and poll until it completes. And polling uses
AIO_WAIT_WHILE(), which assumes the AioContext lock is taken.
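
To make the wrapper shape concrete, here is a rough sketch (blk_foo()
and blk_co_foo() are hypothetical, this is not the generated code
itself):

    typedef struct {
        BlockBackend *blk;
        int ret;
        bool done;
    } BlkFooData;

    static void coroutine_fn blk_foo_entry(void *opaque)
    {
        BlkFooData *data = opaque;
        data->ret = blk_co_foo(data->blk);  /* hypothetical coroutine fn */
        data->done = true;
        aio_wait_kick();                    /* wake up the polling loop */
    }

    int blk_foo(BlockBackend *blk)
    {
        BlkFooData data = { .blk = blk };

        if (qemu_in_coroutine()) {
            blk_foo_entry(&data);           /* already in coroutine context */
        } else {
            Coroutine *co = qemu_coroutine_create(blk_foo_entry, &data);
            aio_co_enter(blk_get_aio_context(blk), co);
            AIO_WAIT_WHILE(blk_get_aio_context(blk), !data.done);
        }
        return data.ret;
    }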

In the future, we will use AIO_WAIT_WHILE_UNLOCKED, as you did in patch
1, but right now it's definitely too early to do it for a g_c_w function.

For this specific case, I see that blk_io_plug()/blk_io_unplug() are not
called in many places:
- here, and it's ok
- virtio-scsi, and I think you are going to cover this soon too
- xen-block

So if you manage to make all callers AioContext-lock-free, then I can
just rebase on top of your series and use AIO_WAIT_WHILE_POLL for
blk_io_plug()/blk_io_unplug() when I convert them into coroutines.

Thank you,
Emanuele

* Re: [PATCH 8/8] virtio-blk: minimize virtio_blk_reset() AioContext lock region
  2022-11-08 21:19 ` [PATCH 8/8] virtio-blk: minimize virtio_blk_reset() AioContext lock region Stefan Hajnoczi
@ 2022-11-11 12:41   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 17+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-11-11 12:41 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Michael S. Tsirkin, qemu-block, Kevin Wolf, Hanna Reitz,
	Paolo Bonzini, Fam Zheng



Am 08/11/2022 um 22:19 schrieb Stefan Hajnoczi:
> blk_drain() needs the lock because it calls AIO_WAIT_WHILE().
>
> The s->rq loop doesn't need the lock because dataplane has been stopped
> when virtio_blk_reset() is called.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>


Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
