qemu-devel.nongnu.org archive mirror
* [RFC PATCH 0/4] block layer: split block APIs in graph and I/O
@ 2021-09-08 13:10 Emanuele Giuseppe Esposito
  2021-09-08 13:10 ` [RFC PATCH 1/4] main-loop.h: introduce qemu_in_main_thread() Emanuele Giuseppe Esposito
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Emanuele Giuseppe Esposito @ 2021-09-08 13:10 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, Juan Quintela, John Snow,
	Emanuele Giuseppe Esposito, Richard Henderson,
	Dr. David Alan Gilbert, qemu-devel, Hanna Reitz, Stefan Hajnoczi,
	Paolo Bonzini, Eric Blake

Currently, block layer APIs like block-backend.h contain a mix of
functions: some run in the main loop under the BQL, while others
are thread-safe and run in iothreads performing I/O.
The functions running under the BQL also take care of modifying
the block graph, using drain and/or aio_context_acquire/release.
This makes it very confusing to understand where each function
runs, and what assumptions it provides with regard to thread
safety.

We call the functions running under the BQL the "graph API", and
distinguish them from the thread-safe "I/O API".

The aim of this series is to split the relevant block headers into
graph and I/O sub-headers. The division is done in this way:
header.h is split into header-graph.h, header-io.h and
header-common.h. The latter just contains the data structures
needed by header-graph.h and header-io.h. header.h remains for
legacy reasons, to avoid changing the includes in all QEMU .c
files, but only includes the two new headers.
Once we split all relevant headers, it will be much easier to see what
uses the AioContext lock and remove it, which is the overall main
goal of this and other series that I posted/will post.
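The split described above boils down to a legacy header stub like
the following sketch (the names are illustrative; patch 3 applies
this exact scheme to block-backend.h):

```c
/* header.h: kept only so existing QEMU .c files keep compiling;
 * all declarations now live in the two new sub-headers. */
#ifndef HEADER_H
#define HEADER_H

#include "header-graph.h"  /* BQL-only functions, may modify the graph */
#include "header-io.h"     /* thread-safe I/O functions */

#endif /* HEADER_H */
```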

RFC: I am happy to change all includes, if you think that it would
be better than leaving an empty header.h file.

The relevant headers I found so far are:
- block-backend.h
- block.h
- block_int.h
- backup-top.h and blockdev.h (though they contain very few functions)
Once these are done, we will also need to audit the callbacks
offered by struct JobDriver, BlockDevOps and BlockDriver.

Each function in the graph API will have an assertion checking
that it always runs under the BQL.
I/O functions are instead thread-safe (or should be), meaning
that they *can* run under the BQL, but also in an iothread in
another AioContext. Therefore they do not get any assertion, and
need to be audited manually to verify their correctness.
RFC: Any idea on how to verify the thread safety of I/O functions
is welcome.

Adding assertions has already helped find a bug, as shown in
patch 2.

RFC: Because this task is very time consuming and requires a lot
of auditing, this series only provides the block-backend.h split,
to gather initial feedback.

Tested this series by running unit tests, qemu-iotests and qtests
(x86_64).
Some functions in the graph API are used everywhere but not
properly tested, so their assertions never actually run in the
tests; despite my very careful auditing, I cannot exclude that
some will trigger while actually using QEMU.

Patch 1 introduces qemu_in_main_thread(), the function used in
all assertions. It had to be introduced because otherwise all
unit tests would fail: they run in the main loop but use the code
in stubs/iothread-lock.c.
Patch 2 fixes a bug that came up while auditing the code.
Patch 3 splits the block-backend header, and to reduce the diff
the assertions are moved to patch 4.

Next steps:
1) if this series gets positive comments, convert block.h and block_int.h
2) audit the graph API and replace the AioContext lock with
drains, or remove it when not necessary (requires further
discussion, not needed now).
3) [optional as it should be already the case] audit the I/O API
and check that thread safety is guaranteed

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

Emanuele Giuseppe Esposito (4):
  main-loop.h: introduce qemu_in_main_thread()
  migration: block-dirty-bitmap: add missing qemu_mutex_lock_iothread
  include/sysemu/block-backend: split header into I/O and graph API
  block/block-backend.c: assertions for block-backend

 block/block-backend.c                 | 102 +++++++++-
 include/qemu/main-loop.h              |  13 ++
 include/sysemu/block-backend-common.h |  74 ++++++++
 include/sysemu/block-backend-graph.h  | 132 +++++++++++++
 include/sysemu/block-backend-io.h     | 123 ++++++++++++
 include/sysemu/block-backend.h        | 262 +-------------------------
 migration/block-dirty-bitmap.c        |   5 +-
 softmmu/cpus.c                        |   5 +
 softmmu/qdev-monitor.c                |   2 +
 stubs/iothread-lock.c                 |   5 +
 10 files changed, 459 insertions(+), 264 deletions(-)
 create mode 100644 include/sysemu/block-backend-common.h
 create mode 100644 include/sysemu/block-backend-graph.h
 create mode 100644 include/sysemu/block-backend-io.h

-- 
2.27.0




* [RFC PATCH 1/4] main-loop.h: introduce qemu_in_main_thread()
  2021-09-08 13:10 [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Emanuele Giuseppe Esposito
@ 2021-09-08 13:10 ` Emanuele Giuseppe Esposito
  2021-09-13 13:15   ` Stefan Hajnoczi
  2021-09-08 13:10 ` [RFC PATCH 2/4] migration: block-dirty-bitmap: add missing qemu_mutex_lock_iothread Emanuele Giuseppe Esposito
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Emanuele Giuseppe Esposito @ 2021-09-08 13:10 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, Juan Quintela, John Snow,
	Emanuele Giuseppe Esposito, Richard Henderson,
	Dr. David Alan Gilbert, qemu-devel, Hanna Reitz, Stefan Hajnoczi,
	Paolo Bonzini, Eric Blake

When invoked from the main loop, this function is the same as
qemu_mutex_iothread_locked(), and returns true if the BQL is held.
When invoked from iothreads or tests, it returns true only if the
current AioContext is the main loop's.

This essentially just extends qemu_mutex_iothread_locked() to
also work for unit tests and other users like the storage daemon,
which run in the main loop but end up using the implementation in
stubs/iothread-lock.c.

qemu_mutex_iothread_locked() always returns false in unit tests,
because they use the stub implementation; this would make all the
assertions added in the next patches fail even though the
AioContext is still the main loop.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/qemu/main-loop.h | 13 +++++++++++++
 softmmu/cpus.c           |  5 +++++
 stubs/iothread-lock.c    |  5 +++++
 3 files changed, 23 insertions(+)

diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index 8dbc6fcb89..c6547207f7 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -245,6 +245,19 @@ AioContext *iohandler_get_aio_context(void);
  */
 bool qemu_mutex_iothread_locked(void);
 
+/**
+ * qemu_in_main_thread: Return true if the function runs with BQL
+ * or in the main loop AioContext.
+ *
+ * This function falls back to qemu_mutex_iothread_locked() if
+ * called from the main loop, otherwise it checks if the current
+ * AioContext is the main loop. This is useful to check that the BQL
+ * is held, and not make it return false when invoked by unit
+ * tests or other users like storage-daemon that end up using
+ * stubs/iothread-lock.c implementation.
+ */
+bool qemu_in_main_thread(void);
+
 /**
  * qemu_mutex_lock_iothread: Lock the main loop mutex.
  *
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index 071085f840..3f61a3c31d 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -481,6 +481,11 @@ bool qemu_mutex_iothread_locked(void)
     return iothread_locked;
 }
 
+bool qemu_in_main_thread(void)
+{
+    return qemu_mutex_iothread_locked();
+}
+
 /*
  * The BQL is taken from so many places that it is worth profiling the
  * callers directly, instead of funneling them all through a single function.
diff --git a/stubs/iothread-lock.c b/stubs/iothread-lock.c
index 5b45b7fc8b..ff7386e42c 100644
--- a/stubs/iothread-lock.c
+++ b/stubs/iothread-lock.c
@@ -6,6 +6,11 @@ bool qemu_mutex_iothread_locked(void)
     return false;
 }
 
+bool qemu_in_main_thread(void)
+{
+    return qemu_get_current_aio_context() == qemu_get_aio_context();
+}
+
 void qemu_mutex_lock_iothread_impl(const char *file, int line)
 {
 }
-- 
2.27.0




* [RFC PATCH 2/4] migration: block-dirty-bitmap: add missing qemu_mutex_lock_iothread
  2021-09-08 13:10 [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Emanuele Giuseppe Esposito
  2021-09-08 13:10 ` [RFC PATCH 1/4] main-loop.h: introduce qemu_in_main_thread() Emanuele Giuseppe Esposito
@ 2021-09-08 13:10 ` Emanuele Giuseppe Esposito
  2021-09-13 13:19   ` Stefan Hajnoczi
  2021-09-08 13:10 ` [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API Emanuele Giuseppe Esposito
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Emanuele Giuseppe Esposito @ 2021-09-08 13:10 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, Juan Quintela, John Snow,
	Emanuele Giuseppe Esposito, Richard Henderson,
	Dr. David Alan Gilbert, qemu-devel, Hanna Reitz, Stefan Hajnoczi,
	Paolo Bonzini, Eric Blake

init_dirty_bitmap_migration assumes that the iothread lock (BQL)
is held, but it isn't.

Instead of adding the lock to qemu_savevm_state_setup(),
follow the same pattern as the other ->save_setup callbacks
and lock+unlock inside dirty_bitmap_save_setup().

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 migration/block-dirty-bitmap.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index 35f5ef688d..9aba7d9c22 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -1215,7 +1215,10 @@ static int dirty_bitmap_save_setup(QEMUFile *f, void *opaque)
 {
     DBMSaveState *s = &((DBMState *)opaque)->save;
     SaveBitmapState *dbms = NULL;
+
+    qemu_mutex_lock_iothread();
     if (init_dirty_bitmap_migration(s) < 0) {
+        qemu_mutex_unlock_iothread();
         return -1;
     }
 
@@ -1223,7 +1226,7 @@ static int dirty_bitmap_save_setup(QEMUFile *f, void *opaque)
         send_bitmap_start(f, s, dbms);
     }
     qemu_put_bitmap_flags(f, DIRTY_BITMAP_MIG_FLAG_EOS);
-
+    qemu_mutex_unlock_iothread();
     return 0;
 }
 
-- 
2.27.0




* [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API
  2021-09-08 13:10 [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Emanuele Giuseppe Esposito
  2021-09-08 13:10 ` [RFC PATCH 1/4] main-loop.h: introduce qemu_in_main_thread() Emanuele Giuseppe Esposito
  2021-09-08 13:10 ` [RFC PATCH 2/4] migration: block-dirty-bitmap: add missing qemu_mutex_lock_iothread Emanuele Giuseppe Esposito
@ 2021-09-08 13:10 ` Emanuele Giuseppe Esposito
  2021-09-13 13:27   ` Stefan Hajnoczi
  2021-09-13 13:41   ` Stefan Hajnoczi
  2021-09-08 13:10 ` [RFC PATCH 4/4] block/block-backend.c: assertions for block-backend Emanuele Giuseppe Esposito
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 18+ messages in thread
From: Emanuele Giuseppe Esposito @ 2021-09-08 13:10 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, Juan Quintela, John Snow,
	Emanuele Giuseppe Esposito, Richard Henderson,
	Dr. David Alan Gilbert, qemu-devel, Hanna Reitz, Stefan Hajnoczi,
	Paolo Bonzini, Eric Blake

block-backend.h currently contains a mix of functions:
some of them run under the BQL and modify the block layer graph,
others are thread-safe and perform I/O in iothreads.
It is not easy to understand which function belongs to which
group (I/O vs. graph), and this patch aims to clarify that.

The graph functions need the BQL, and often use
aio_context_acquire/release and/or drain to make sure they
can modify the graph safely.
The I/O functions are instead thread-safe, and can run in
any AioContext.

By splitting the header into two files, block-backend-io.h
and block-backend-graph.h, we get a clearer view of what
needs which kind of protection. block-backend-common.h
contains the common structures shared by both headers.

In addition, remove the block/block.h include, together with
qemu/iov.h, as they no longer seem necessary.

block-backend.h is left there for legacy, to avoid changing
all includes in all .c files that use the block-backend APIs.

Assertions are added in the next patch.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/block-backend.c                 |   9 +-
 include/sysemu/block-backend-common.h |  74 ++++++++
 include/sysemu/block-backend-graph.h  | 132 +++++++++++++
 include/sysemu/block-backend-io.h     | 123 ++++++++++++
 include/sysemu/block-backend.h        | 262 +-------------------------
 5 files changed, 338 insertions(+), 262 deletions(-)
 create mode 100644 include/sysemu/block-backend-common.h
 create mode 100644 include/sysemu/block-backend-graph.h
 create mode 100644 include/sysemu/block-backend-io.h

diff --git a/block/block-backend.c b/block/block-backend.c
index deb55c272e..7d5216a9a0 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -78,6 +78,7 @@ struct BlockBackend {
     bool allow_aio_context_change;
     bool allow_write_beyond_eof;
 
+    /* Protected by BQL lock */
     NotifierList remove_bs_notifiers, insert_bs_notifiers;
     QLIST_HEAD(, BlockBackendAioNotifier) aio_notifiers;
 
@@ -110,12 +111,14 @@ static const AIOCBInfo block_backend_aiocb_info = {
 static void drive_info_del(DriveInfo *dinfo);
 static BlockBackend *bdrv_first_blk(BlockDriverState *bs);
 
-/* All BlockBackends */
+/* All BlockBackends. Protected by the BQL lock. */
 static QTAILQ_HEAD(, BlockBackend) block_backends =
     QTAILQ_HEAD_INITIALIZER(block_backends);
 
-/* All BlockBackends referenced by the monitor and which are iterated through by
- * blk_next() */
+/*
+ * All BlockBackends referenced by the monitor and which are iterated through by
+ * blk_next(). Protected by the BQL lock.
+ */
 static QTAILQ_HEAD(, BlockBackend) monitor_block_backends =
     QTAILQ_HEAD_INITIALIZER(monitor_block_backends);
 
diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
new file mode 100644
index 0000000000..52ff6a4d26
--- /dev/null
+++ b/include/sysemu/block-backend-common.h
@@ -0,0 +1,74 @@
+/*
+ * QEMU Block backends
+ *
+ * Copyright (C) 2014-2016 Red Hat, Inc.
+ *
+ * Authors:
+ *  Markus Armbruster <armbru@redhat.com>,
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.1
+ * or later.  See the COPYING.LIB file in the top-level directory.
+ */
+
+#ifndef BLOCK_BACKEND_COMMON_H
+#define BLOCK_BACKEND_COMMON_H
+
+#include "block/throttle-groups.h"
+
+/* Callbacks for block device models */
+typedef struct BlockDevOps {
+    /*
+     * Runs when virtual media changed (monitor commands eject, change)
+     * Argument load is true on load and false on eject.
+     * Beware: doesn't run when a host device's physical media
+     * changes.  Sure would be useful if it did.
+     * Device models with removable media must implement this callback.
+     */
+    void (*change_media_cb)(void *opaque, bool load, Error **errp);
+    /*
+     * Runs when an eject request is issued from the monitor, the tray
+     * is closed, and the medium is locked.
+     * Device models that do not implement is_medium_locked will not need
+     * this callback.  Device models that can lock the medium or tray might
+     * want to implement the callback and unlock the tray when "force" is
+     * true, even if they do not support eject requests.
+     */
+    void (*eject_request_cb)(void *opaque, bool force);
+    /*
+     * Is the virtual tray open?
+     * Device models implement this only when the device has a tray.
+     */
+    bool (*is_tray_open)(void *opaque);
+    /*
+     * Is the virtual medium locked into the device?
+     * Device models implement this only when device has such a lock.
+     */
+    bool (*is_medium_locked)(void *opaque);
+    /*
+     * Runs when the size changed (e.g. monitor command block_resize)
+     */
+    void (*resize_cb)(void *opaque);
+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+} BlockDevOps;
+
+/*
+ * This struct is embedded in (the private) BlockBackend struct and contains
+ * fields that must be public. This is in particular for QLIST_ENTRY() and
+ * friends so that BlockBackends can be kept in lists outside block-backend.c
+ */
+typedef struct BlockBackendPublic {
+    ThrottleGroupMember throttle_group_member;
+} BlockBackendPublic;
+
+#endif /* BLOCK_BACKEND_COMMON_H */
diff --git a/include/sysemu/block-backend-graph.h b/include/sysemu/block-backend-graph.h
new file mode 100644
index 0000000000..3310987b16
--- /dev/null
+++ b/include/sysemu/block-backend-graph.h
@@ -0,0 +1,132 @@
+/*
+ * QEMU Block backends
+ *
+ * Copyright (C) 2014-2016 Red Hat, Inc.
+ *
+ * Authors:
+ *  Markus Armbruster <armbru@redhat.com>,
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.1
+ * or later.  See the COPYING.LIB file in the top-level directory.
+ */
+
+#ifndef BLOCK_BACKEND_GRAPH_H
+#define BLOCK_BACKEND_GRAPH_H
+
+#include "block-backend-common.h"
+
+/*
+ * Graph API. These functions run under the BQL lock.
+ *
+ * If a function modifies the graph, it uses drain and/or
+ * aio_context_acquire/release to be sure it has unique access.
+ *
+ * All functions in this header must use this assertion:
+ * g_assert(qemu_in_main_thread());
+ * to be sure they belong here.
+ */
+
+BlockBackend *blk_new(AioContext *ctx, uint64_t perm, uint64_t shared_perm);
+BlockBackend *blk_new_with_bs(BlockDriverState *bs, uint64_t perm,
+                              uint64_t shared_perm, Error **errp);
+BlockBackend *blk_new_open(const char *filename, const char *reference,
+                           QDict *options, int flags, Error **errp);
+int blk_get_refcnt(BlockBackend *blk);
+void blk_ref(BlockBackend *blk);
+void blk_unref(BlockBackend *blk);
+void blk_remove_all_bs(void);
+const char *blk_name(const BlockBackend *blk);
+BlockBackend *blk_by_name(const char *name);
+BlockBackend *blk_next(BlockBackend *blk);
+BlockBackend *blk_all_next(BlockBackend *blk);
+bool monitor_add_blk(BlockBackend *blk, const char *name, Error **errp);
+void monitor_remove_blk(BlockBackend *blk);
+
+BlockBackendPublic *blk_get_public(BlockBackend *blk);
+BlockBackend *blk_by_public(BlockBackendPublic *public);
+
+void blk_remove_bs(BlockBackend *blk);
+int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp);
+bool bdrv_has_blk(BlockDriverState *bs);
+bool bdrv_is_root_node(BlockDriverState *bs);
+int blk_set_perm(BlockBackend *blk, uint64_t perm, uint64_t shared_perm,
+                 Error **errp);
+void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm);
+
+void blk_iostatus_enable(BlockBackend *blk);
+bool blk_iostatus_is_enabled(const BlockBackend *blk);
+BlockDeviceIoStatus blk_iostatus(const BlockBackend *blk);
+void blk_iostatus_disable(BlockBackend *blk);
+void blk_iostatus_reset(BlockBackend *blk);
+void blk_iostatus_set_err(BlockBackend *blk, int error);
+int blk_attach_dev(BlockBackend *blk, DeviceState *dev);
+void blk_detach_dev(BlockBackend *blk, DeviceState *dev);
+DeviceState *blk_get_attached_dev(BlockBackend *blk);
+char *blk_get_attached_dev_id(BlockBackend *blk);
+BlockBackend *blk_by_dev(void *dev);
+BlockBackend *blk_by_qdev_id(const char *id, Error **errp);
+void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops, void *opaque);
+
+int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
+int64_t blk_nb_sectors(BlockBackend *blk);
+int blk_commit_all(void);
+void blk_drain(BlockBackend *blk);
+void blk_drain_all(void);
+void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
+                      BlockdevOnError on_write_error);
+BlockdevOnError blk_get_on_error(BlockBackend *blk, bool is_read);
+BlockErrorAction blk_get_error_action(BlockBackend *blk, bool is_read,
+                                      int error);
+void blk_error_action(BlockBackend *blk, BlockErrorAction action,
+                      bool is_read, int error);
+bool blk_supports_write_perm(BlockBackend *blk);
+bool blk_is_sg(BlockBackend *blk);
+bool blk_enable_write_cache(BlockBackend *blk);
+void blk_set_enable_write_cache(BlockBackend *blk, bool wce);
+void blk_lock_medium(BlockBackend *blk, bool locked);
+void blk_eject(BlockBackend *blk, bool eject_flag);
+int blk_get_flags(BlockBackend *blk);
+int blk_get_max_iov(BlockBackend *blk);
+void blk_set_guest_block_size(BlockBackend *blk, int align);
+bool blk_op_is_blocked(BlockBackend *blk, BlockOpType op, Error **errp);
+void blk_op_unblock(BlockBackend *blk, BlockOpType op, Error *reason);
+void blk_op_block_all(BlockBackend *blk, Error *reason);
+void blk_op_unblock_all(BlockBackend *blk, Error *reason);
+int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
+                        Error **errp);
+void blk_add_aio_context_notifier(BlockBackend *blk,
+        void (*attached_aio_context)(AioContext *new_context, void *opaque),
+        void (*detach_aio_context)(void *opaque), void *opaque);
+void blk_remove_aio_context_notifier(BlockBackend *blk,
+                                     void (*attached_aio_context)(AioContext *,
+                                                                  void *),
+                                     void (*detach_aio_context)(void *),
+                                     void *opaque);
+void blk_add_remove_bs_notifier(BlockBackend *blk, Notifier *notify);
+void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify);
+BlockBackendRootState *blk_get_root_state(BlockBackend *blk);
+void blk_update_root_state(BlockBackend *blk);
+bool blk_get_detect_zeroes_from_root_state(BlockBackend *blk);
+int blk_get_open_flags_from_root_state(BlockBackend *blk);
+
+int blk_save_vmstate(BlockBackend *blk, const uint8_t *buf,
+                     int64_t pos, int size);
+int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size);
+int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz);
+int blk_probe_geometry(BlockBackend *blk, HDGeometry *geo);
+BlockAIOCB *blk_abort_aio_request(BlockBackend *blk,
+                                  BlockCompletionFunc *cb,
+                                  void *opaque, int ret);
+
+void blk_set_io_limits(BlockBackend *blk, ThrottleConfig *cfg);
+void blk_io_limits_disable(BlockBackend *blk);
+void blk_io_limits_enable(BlockBackend *blk, const char *group);
+void blk_io_limits_update_group(BlockBackend *blk, const char *group);
+void blk_set_force_allow_inactivate(BlockBackend *blk);
+
+void blk_register_buf(BlockBackend *blk, void *host, size_t size);
+void blk_unregister_buf(BlockBackend *blk, void *host);
+
+const BdrvChild *blk_root(BlockBackend *blk);
+
+#endif /* BLOCK_BACKEND_GRAPH_H */
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
new file mode 100644
index 0000000000..66a7bed9f0
--- /dev/null
+++ b/include/sysemu/block-backend-io.h
@@ -0,0 +1,123 @@
+/*
+ * QEMU Block backends
+ *
+ * Copyright (C) 2014-2016 Red Hat, Inc.
+ *
+ * Authors:
+ *  Markus Armbruster <armbru@redhat.com>,
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.1
+ * or later.  See the COPYING.LIB file in the top-level directory.
+ */
+
+#ifndef BLOCK_BACKEND_IO_H
+#define BLOCK_BACKEND_IO_H
+
+#include "block-backend-common.h"
+
+/*
+ * I/O API functions. These functions are thread-safe, and therefore
+ * can run in any AioContext.
+ */
+
+BlockDriverState *blk_bs(BlockBackend *blk);
+
+void blk_set_allow_write_beyond_eof(BlockBackend *blk, bool allow);
+void blk_set_allow_aio_context_change(BlockBackend *blk, bool allow);
+void blk_set_disable_request_queuing(BlockBackend *blk, bool disable);
+
+int blk_pread(BlockBackend *blk, int64_t offset, void *buf, int bytes);
+int blk_pwrite(BlockBackend *blk, int64_t offset, const void *buf, int bytes,
+               BdrvRequestFlags flags);
+int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
+                               unsigned int bytes, QEMUIOVector *qiov,
+                               BdrvRequestFlags flags);
+int coroutine_fn blk_co_pwritev_part(BlockBackend *blk, int64_t offset,
+                                     unsigned int bytes,
+                                     QEMUIOVector *qiov, size_t qiov_offset,
+                                     BdrvRequestFlags flags);
+int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
+                               unsigned int bytes, QEMUIOVector *qiov,
+                               BdrvRequestFlags flags);
+
+static inline int coroutine_fn blk_co_pread(BlockBackend *blk, int64_t offset,
+                                            unsigned int bytes, void *buf,
+                                            BdrvRequestFlags flags)
+{
+    QEMUIOVector qiov = QEMU_IOVEC_INIT_BUF(qiov, buf, bytes);
+
+    return blk_co_preadv(blk, offset, bytes, &qiov, flags);
+}
+
+static inline int coroutine_fn blk_co_pwrite(BlockBackend *blk, int64_t offset,
+                                             unsigned int bytes, void *buf,
+                                             BdrvRequestFlags flags)
+{
+    QEMUIOVector qiov = QEMU_IOVEC_INIT_BUF(qiov, buf, bytes);
+
+    return blk_co_pwritev(blk, offset, bytes, &qiov, flags);
+}
+
+BlockAIOCB *blk_aio_pwrite_zeroes(BlockBackend *blk, int64_t offset,
+                                  int bytes, BdrvRequestFlags flags,
+                                  BlockCompletionFunc *cb, void *opaque);
+
+BlockAIOCB *blk_aio_preadv(BlockBackend *blk, int64_t offset,
+                           QEMUIOVector *qiov, BdrvRequestFlags flags,
+                           BlockCompletionFunc *cb, void *opaque);
+BlockAIOCB *blk_aio_pwritev(BlockBackend *blk, int64_t offset,
+                            QEMUIOVector *qiov, BdrvRequestFlags flags,
+                            BlockCompletionFunc *cb, void *opaque);
+BlockAIOCB *blk_aio_flush(BlockBackend *blk,
+                          BlockCompletionFunc *cb, void *opaque);
+BlockAIOCB *blk_aio_pdiscard(BlockBackend *blk, int64_t offset, int bytes,
+                             BlockCompletionFunc *cb, void *opaque);
+void blk_aio_cancel(BlockAIOCB *acb);
+void blk_aio_cancel_async(BlockAIOCB *acb);
+int blk_ioctl(BlockBackend *blk, unsigned long int req, void *buf);
+BlockAIOCB *blk_aio_ioctl(BlockBackend *blk, unsigned long int req, void *buf,
+                          BlockCompletionFunc *cb, void *opaque);
+int blk_co_pdiscard(BlockBackend *blk, int64_t offset, int bytes);
+int blk_co_flush(BlockBackend *blk);
+int blk_flush(BlockBackend *blk);
+void blk_inc_in_flight(BlockBackend *blk);
+void blk_dec_in_flight(BlockBackend *blk);
+bool blk_is_inserted(BlockBackend *blk);
+bool blk_is_available(BlockBackend *blk);
+int64_t blk_getlength(BlockBackend *blk);
+void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr);
+void *blk_try_blockalign(BlockBackend *blk, size_t size);
+void *blk_blockalign(BlockBackend *blk, size_t size);
+bool blk_is_writable(BlockBackend *blk);
+
+void blk_invalidate_cache(BlockBackend *blk, Error **errp);
+
+void blk_io_plug(BlockBackend *blk);
+void blk_io_unplug(BlockBackend *blk);
+AioContext *blk_get_aio_context(BlockBackend *blk);
+BlockAcctStats *blk_get_stats(BlockBackend *blk);
+void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
+                  BlockCompletionFunc *cb, void *opaque);
+int blk_pwrite_compressed(BlockBackend *blk, int64_t offset, const void *buf,
+                          int bytes);
+int blk_pdiscard(BlockBackend *blk, int64_t offset, int bytes);
+int blk_pwrite_zeroes(BlockBackend *blk, int64_t offset,
+                      int bytes, BdrvRequestFlags flags);
+int coroutine_fn blk_co_pwrite_zeroes(BlockBackend *blk, int64_t offset,
+                                      int bytes, BdrvRequestFlags flags);
+int blk_truncate(BlockBackend *blk, int64_t offset, bool exact,
+                 PreallocMode prealloc, BdrvRequestFlags flags, Error **errp);
+
+uint32_t blk_get_request_alignment(BlockBackend *blk);
+uint32_t blk_get_max_transfer(BlockBackend *blk);
+uint64_t blk_get_max_hw_transfer(BlockBackend *blk);
+
+int coroutine_fn blk_co_copy_range(BlockBackend *blk_in, int64_t off_in,
+                                   BlockBackend *blk_out, int64_t off_out,
+                                   int bytes, BdrvRequestFlags read_flags,
+                                   BdrvRequestFlags write_flags);
+
+
+int blk_make_empty(BlockBackend *blk, Error **errp);
+
+#endif /* BLOCK_BACKEND_IO_H */
diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index 9ac5f7bbd3..8c64a46008 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -13,265 +13,9 @@
 #ifndef BLOCK_BACKEND_H
 #define BLOCK_BACKEND_H
 
-#include "qemu/iov.h"
-#include "block/throttle-groups.h"
+#include "block-backend-graph.h"
+#include "block-backend-io.h"
 
-/*
- * TODO Have to include block/block.h for a bunch of block layer
- * types.  Unfortunately, this pulls in the whole BlockDriverState
- * API, which we don't want used by many BlockBackend users.  Some of
- * the types belong here, and the rest should be split into a common
- * header and one for the BlockDriverState API.
- */
-#include "block/block.h"
-
-/* Callbacks for block device models */
-typedef struct BlockDevOps {
-    /*
-     * Runs when virtual media changed (monitor commands eject, change)
-     * Argument load is true on load and false on eject.
-     * Beware: doesn't run when a host device's physical media
-     * changes.  Sure would be useful if it did.
-     * Device models with removable media must implement this callback.
-     */
-    void (*change_media_cb)(void *opaque, bool load, Error **errp);
-    /*
-     * Runs when an eject request is issued from the monitor, the tray
-     * is closed, and the medium is locked.
-     * Device models that do not implement is_medium_locked will not need
-     * this callback.  Device models that can lock the medium or tray might
-     * want to implement the callback and unlock the tray when "force" is
-     * true, even if they do not support eject requests.
-     */
-    void (*eject_request_cb)(void *opaque, bool force);
-    /*
-     * Is the virtual tray open?
-     * Device models implement this only when the device has a tray.
-     */
-    bool (*is_tray_open)(void *opaque);
-    /*
-     * Is the virtual medium locked into the device?
-     * Device models implement this only when device has such a lock.
-     */
-    bool (*is_medium_locked)(void *opaque);
-    /*
-     * Runs when the size changed (e.g. monitor command block_resize)
-     */
-    void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
-} BlockDevOps;
-
-/* This struct is embedded in (the private) BlockBackend struct and contains
- * fields that must be public. This is in particular for QLIST_ENTRY() and
- * friends so that BlockBackends can be kept in lists outside block-backend.c
- * */
-typedef struct BlockBackendPublic {
-    ThrottleGroupMember throttle_group_member;
-} BlockBackendPublic;
-
-BlockBackend *blk_new(AioContext *ctx, uint64_t perm, uint64_t shared_perm);
-BlockBackend *blk_new_with_bs(BlockDriverState *bs, uint64_t perm,
-                              uint64_t shared_perm, Error **errp);
-BlockBackend *blk_new_open(const char *filename, const char *reference,
-                           QDict *options, int flags, Error **errp);
-int blk_get_refcnt(BlockBackend *blk);
-void blk_ref(BlockBackend *blk);
-void blk_unref(BlockBackend *blk);
-void blk_remove_all_bs(void);
-const char *blk_name(const BlockBackend *blk);
-BlockBackend *blk_by_name(const char *name);
-BlockBackend *blk_next(BlockBackend *blk);
-BlockBackend *blk_all_next(BlockBackend *blk);
-bool monitor_add_blk(BlockBackend *blk, const char *name, Error **errp);
-void monitor_remove_blk(BlockBackend *blk);
-
-BlockBackendPublic *blk_get_public(BlockBackend *blk);
-BlockBackend *blk_by_public(BlockBackendPublic *public);
-
-BlockDriverState *blk_bs(BlockBackend *blk);
-void blk_remove_bs(BlockBackend *blk);
-int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp);
-bool bdrv_has_blk(BlockDriverState *bs);
-bool bdrv_is_root_node(BlockDriverState *bs);
-int blk_set_perm(BlockBackend *blk, uint64_t perm, uint64_t shared_perm,
-                 Error **errp);
-void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm);
-
-void blk_set_allow_write_beyond_eof(BlockBackend *blk, bool allow);
-void blk_set_allow_aio_context_change(BlockBackend *blk, bool allow);
-void blk_set_disable_request_queuing(BlockBackend *blk, bool disable);
-void blk_iostatus_enable(BlockBackend *blk);
-bool blk_iostatus_is_enabled(const BlockBackend *blk);
-BlockDeviceIoStatus blk_iostatus(const BlockBackend *blk);
-void blk_iostatus_disable(BlockBackend *blk);
-void blk_iostatus_reset(BlockBackend *blk);
-void blk_iostatus_set_err(BlockBackend *blk, int error);
-int blk_attach_dev(BlockBackend *blk, DeviceState *dev);
-void blk_detach_dev(BlockBackend *blk, DeviceState *dev);
-DeviceState *blk_get_attached_dev(BlockBackend *blk);
-char *blk_get_attached_dev_id(BlockBackend *blk);
-BlockBackend *blk_by_dev(void *dev);
-BlockBackend *blk_by_qdev_id(const char *id, Error **errp);
-void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops, void *opaque);
-int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
-                               unsigned int bytes, QEMUIOVector *qiov,
-                               BdrvRequestFlags flags);
-int coroutine_fn blk_co_pwritev_part(BlockBackend *blk, int64_t offset,
-                                     unsigned int bytes,
-                                     QEMUIOVector *qiov, size_t qiov_offset,
-                                     BdrvRequestFlags flags);
-int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
-                               unsigned int bytes, QEMUIOVector *qiov,
-                               BdrvRequestFlags flags);
-
-static inline int coroutine_fn blk_co_pread(BlockBackend *blk, int64_t offset,
-                                            unsigned int bytes, void *buf,
-                                            BdrvRequestFlags flags)
-{
-    QEMUIOVector qiov = QEMU_IOVEC_INIT_BUF(qiov, buf, bytes);
-
-    return blk_co_preadv(blk, offset, bytes, &qiov, flags);
-}
-
-static inline int coroutine_fn blk_co_pwrite(BlockBackend *blk, int64_t offset,
-                                             unsigned int bytes, void *buf,
-                                             BdrvRequestFlags flags)
-{
-    QEMUIOVector qiov = QEMU_IOVEC_INIT_BUF(qiov, buf, bytes);
-
-    return blk_co_pwritev(blk, offset, bytes, &qiov, flags);
-}
-
-int blk_pwrite_zeroes(BlockBackend *blk, int64_t offset,
-                      int bytes, BdrvRequestFlags flags);
-BlockAIOCB *blk_aio_pwrite_zeroes(BlockBackend *blk, int64_t offset,
-                                  int bytes, BdrvRequestFlags flags,
-                                  BlockCompletionFunc *cb, void *opaque);
-int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
-int blk_pread(BlockBackend *blk, int64_t offset, void *buf, int bytes);
-int blk_pwrite(BlockBackend *blk, int64_t offset, const void *buf, int bytes,
-               BdrvRequestFlags flags);
-int64_t blk_getlength(BlockBackend *blk);
-void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr);
-int64_t blk_nb_sectors(BlockBackend *blk);
-BlockAIOCB *blk_aio_preadv(BlockBackend *blk, int64_t offset,
-                           QEMUIOVector *qiov, BdrvRequestFlags flags,
-                           BlockCompletionFunc *cb, void *opaque);
-BlockAIOCB *blk_aio_pwritev(BlockBackend *blk, int64_t offset,
-                            QEMUIOVector *qiov, BdrvRequestFlags flags,
-                            BlockCompletionFunc *cb, void *opaque);
-BlockAIOCB *blk_aio_flush(BlockBackend *blk,
-                          BlockCompletionFunc *cb, void *opaque);
-BlockAIOCB *blk_aio_pdiscard(BlockBackend *blk, int64_t offset, int bytes,
-                             BlockCompletionFunc *cb, void *opaque);
-void blk_aio_cancel(BlockAIOCB *acb);
-void blk_aio_cancel_async(BlockAIOCB *acb);
-int blk_ioctl(BlockBackend *blk, unsigned long int req, void *buf);
-BlockAIOCB *blk_aio_ioctl(BlockBackend *blk, unsigned long int req, void *buf,
-                          BlockCompletionFunc *cb, void *opaque);
-int blk_co_pdiscard(BlockBackend *blk, int64_t offset, int bytes);
-int blk_co_flush(BlockBackend *blk);
-int blk_flush(BlockBackend *blk);
-int blk_commit_all(void);
-void blk_inc_in_flight(BlockBackend *blk);
-void blk_dec_in_flight(BlockBackend *blk);
-void blk_drain(BlockBackend *blk);
-void blk_drain_all(void);
-void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
-                      BlockdevOnError on_write_error);
-BlockdevOnError blk_get_on_error(BlockBackend *blk, bool is_read);
-BlockErrorAction blk_get_error_action(BlockBackend *blk, bool is_read,
-                                      int error);
-void blk_error_action(BlockBackend *blk, BlockErrorAction action,
-                      bool is_read, int error);
-bool blk_supports_write_perm(BlockBackend *blk);
-bool blk_is_writable(BlockBackend *blk);
-bool blk_is_sg(BlockBackend *blk);
-bool blk_enable_write_cache(BlockBackend *blk);
-void blk_set_enable_write_cache(BlockBackend *blk, bool wce);
-void blk_invalidate_cache(BlockBackend *blk, Error **errp);
-bool blk_is_inserted(BlockBackend *blk);
-bool blk_is_available(BlockBackend *blk);
-void blk_lock_medium(BlockBackend *blk, bool locked);
-void blk_eject(BlockBackend *blk, bool eject_flag);
-int blk_get_flags(BlockBackend *blk);
-uint32_t blk_get_request_alignment(BlockBackend *blk);
-uint32_t blk_get_max_transfer(BlockBackend *blk);
-uint64_t blk_get_max_hw_transfer(BlockBackend *blk);
-int blk_get_max_iov(BlockBackend *blk);
-void blk_set_guest_block_size(BlockBackend *blk, int align);
-void *blk_try_blockalign(BlockBackend *blk, size_t size);
-void *blk_blockalign(BlockBackend *blk, size_t size);
-bool blk_op_is_blocked(BlockBackend *blk, BlockOpType op, Error **errp);
-void blk_op_unblock(BlockBackend *blk, BlockOpType op, Error *reason);
-void blk_op_block_all(BlockBackend *blk, Error *reason);
-void blk_op_unblock_all(BlockBackend *blk, Error *reason);
-AioContext *blk_get_aio_context(BlockBackend *blk);
-int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
-                        Error **errp);
-void blk_add_aio_context_notifier(BlockBackend *blk,
-        void (*attached_aio_context)(AioContext *new_context, void *opaque),
-        void (*detach_aio_context)(void *opaque), void *opaque);
-void blk_remove_aio_context_notifier(BlockBackend *blk,
-                                     void (*attached_aio_context)(AioContext *,
-                                                                  void *),
-                                     void (*detach_aio_context)(void *),
-                                     void *opaque);
-void blk_add_remove_bs_notifier(BlockBackend *blk, Notifier *notify);
-void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify);
-void blk_io_plug(BlockBackend *blk);
-void blk_io_unplug(BlockBackend *blk);
-BlockAcctStats *blk_get_stats(BlockBackend *blk);
-BlockBackendRootState *blk_get_root_state(BlockBackend *blk);
-void blk_update_root_state(BlockBackend *blk);
-bool blk_get_detect_zeroes_from_root_state(BlockBackend *blk);
-int blk_get_open_flags_from_root_state(BlockBackend *blk);
-
-void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
-                  BlockCompletionFunc *cb, void *opaque);
-int coroutine_fn blk_co_pwrite_zeroes(BlockBackend *blk, int64_t offset,
-                                      int bytes, BdrvRequestFlags flags);
-int blk_pwrite_compressed(BlockBackend *blk, int64_t offset, const void *buf,
-                          int bytes);
-int blk_truncate(BlockBackend *blk, int64_t offset, bool exact,
-                 PreallocMode prealloc, BdrvRequestFlags flags, Error **errp);
-int blk_pdiscard(BlockBackend *blk, int64_t offset, int bytes);
-int blk_save_vmstate(BlockBackend *blk, const uint8_t *buf,
-                     int64_t pos, int size);
-int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size);
-int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz);
-int blk_probe_geometry(BlockBackend *blk, HDGeometry *geo);
-BlockAIOCB *blk_abort_aio_request(BlockBackend *blk,
-                                  BlockCompletionFunc *cb,
-                                  void *opaque, int ret);
-
-void blk_set_io_limits(BlockBackend *blk, ThrottleConfig *cfg);
-void blk_io_limits_disable(BlockBackend *blk);
-void blk_io_limits_enable(BlockBackend *blk, const char *group);
-void blk_io_limits_update_group(BlockBackend *blk, const char *group);
-void blk_set_force_allow_inactivate(BlockBackend *blk);
-
-void blk_register_buf(BlockBackend *blk, void *host, size_t size);
-void blk_unregister_buf(BlockBackend *blk, void *host);
-
-int coroutine_fn blk_co_copy_range(BlockBackend *blk_in, int64_t off_in,
-                                   BlockBackend *blk_out, int64_t off_out,
-                                   int bytes, BdrvRequestFlags read_flags,
-                                   BdrvRequestFlags write_flags);
-
-const BdrvChild *blk_root(BlockBackend *blk);
-
-int blk_make_empty(BlockBackend *blk, Error **errp);
+/* DO NOT ADD ANYTHING IN HERE. USE ONE OF THE HEADERS INCLUDED ABOVE */
 
 #endif
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [RFC PATCH 4/4] block/block-backend.c: assertions for block-backend
  2021-09-08 13:10 [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Emanuele Giuseppe Esposito
                   ` (2 preceding siblings ...)
  2021-09-08 13:10 ` [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API Emanuele Giuseppe Esposito
@ 2021-09-08 13:10 ` Emanuele Giuseppe Esposito
  2021-09-13 13:38   ` Stefan Hajnoczi
  2021-09-13 13:10 ` [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Stefan Hajnoczi
  2021-09-13 13:39 ` Stefan Hajnoczi
  5 siblings, 1 reply; 18+ messages in thread
From: Emanuele Giuseppe Esposito @ 2021-09-08 13:10 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, Juan Quintela, John Snow,
	Emanuele Giuseppe Esposito, Richard Henderson,
	Dr. David Alan Gilbert, qemu-devel, Hanna Reitz, Stefan Hajnoczi,
	Paolo Bonzini, Eric Blake

All the graph API functions now assert that
qemu_in_main_thread() returns true. If an assertion
fires, the function runs without the BQL's protection
and therefore belongs in the I/O API instead.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/block-backend.c  | 93 +++++++++++++++++++++++++++++++++++++++++-
 softmmu/qdev-monitor.c |  2 +
 2 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 7d5216a9a0..7289c4f62e 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -227,6 +227,7 @@ static void blk_root_activate(BdrvChild *child, Error **errp)
 
 void blk_set_force_allow_inactivate(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     blk->force_allow_inactivate = true;
 }
 
@@ -345,6 +346,8 @@ BlockBackend *blk_new(AioContext *ctx, uint64_t perm, uint64_t shared_perm)
 {
     BlockBackend *blk;
 
+    g_assert(qemu_in_main_thread());
+
     blk = g_new0(BlockBackend, 1);
     blk->refcnt = 1;
     blk->ctx = ctx;
@@ -382,6 +385,8 @@ BlockBackend *blk_new_with_bs(BlockDriverState *bs, uint64_t perm,
 {
     BlockBackend *blk = blk_new(bdrv_get_aio_context(bs), perm, shared_perm);
 
+    g_assert(qemu_in_main_thread());
+
     if (blk_insert_bs(blk, bs, errp) < 0) {
         blk_unref(blk);
         return NULL;
@@ -410,6 +415,8 @@ BlockBackend *blk_new_open(const char *filename, const char *reference,
     uint64_t perm = 0;
     uint64_t shared = BLK_PERM_ALL;
 
+    g_assert(qemu_in_main_thread());
+
     /*
      * blk_new_open() is mainly used in .bdrv_create implementations and the
      * tools where sharing isn't a major concern because the BDS stays private
@@ -487,6 +494,7 @@ static void drive_info_del(DriveInfo *dinfo)
 
 int blk_get_refcnt(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk ? blk->refcnt : 0;
 }
 
@@ -497,6 +505,7 @@ int blk_get_refcnt(BlockBackend *blk)
 void blk_ref(BlockBackend *blk)
 {
     assert(blk->refcnt > 0);
+    g_assert(qemu_in_main_thread());
     blk->refcnt++;
 }
 
@@ -507,6 +516,7 @@ void blk_ref(BlockBackend *blk)
  */
 void blk_unref(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     if (blk) {
         assert(blk->refcnt > 0);
         if (blk->refcnt > 1) {
@@ -527,6 +537,7 @@ void blk_unref(BlockBackend *blk)
  */
 BlockBackend *blk_all_next(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk ? QTAILQ_NEXT(blk, link)
                : QTAILQ_FIRST(&block_backends);
 }
@@ -535,6 +546,8 @@ void blk_remove_all_bs(void)
 {
     BlockBackend *blk = NULL;
 
+    g_assert(qemu_in_main_thread());
+
     while ((blk = blk_all_next(blk)) != NULL) {
         AioContext *ctx = blk_get_aio_context(blk);
 
@@ -558,6 +571,7 @@ void blk_remove_all_bs(void)
  */
 BlockBackend *blk_next(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk ? QTAILQ_NEXT(blk, monitor_link)
                : QTAILQ_FIRST(&monitor_block_backends);
 }
@@ -624,6 +638,7 @@ static void bdrv_next_reset(BdrvNextIterator *it)
 
 BlockDriverState *bdrv_first(BdrvNextIterator *it)
 {
+    g_assert(qemu_in_main_thread());
     bdrv_next_reset(it);
     return bdrv_next(it);
 }
@@ -661,6 +676,7 @@ bool monitor_add_blk(BlockBackend *blk, const char *name, Error **errp)
 {
     assert(!blk->name);
     assert(name && name[0]);
+    g_assert(qemu_in_main_thread());
 
     if (!id_wellformed(name)) {
         error_setg(errp, "Invalid device name");
@@ -688,6 +704,8 @@ bool monitor_add_blk(BlockBackend *blk, const char *name, Error **errp)
  */
 void monitor_remove_blk(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
+
     if (!blk->name) {
         return;
     }
@@ -703,6 +721,7 @@ void monitor_remove_blk(BlockBackend *blk)
  */
 const char *blk_name(const BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->name ?: "";
 }
 
@@ -714,6 +733,7 @@ BlockBackend *blk_by_name(const char *name)
 {
     BlockBackend *blk = NULL;
 
+    g_assert(qemu_in_main_thread());
     assert(name);
     while ((blk = blk_next(blk)) != NULL) {
         if (!strcmp(name, blk->name)) {
@@ -748,6 +768,7 @@ static BlockBackend *bdrv_first_blk(BlockDriverState *bs)
  */
 bool bdrv_has_blk(BlockDriverState *bs)
 {
+    g_assert(qemu_in_main_thread());
     return bdrv_first_blk(bs) != NULL;
 }
 
@@ -758,6 +779,7 @@ bool bdrv_is_root_node(BlockDriverState *bs)
 {
     BdrvChild *c;
 
+    g_assert(qemu_in_main_thread());
     QLIST_FOREACH(c, &bs->parents, next_parent) {
         if (c->klass != &child_root) {
             return false;
@@ -807,6 +829,7 @@ BlockBackend *blk_by_legacy_dinfo(DriveInfo *dinfo)
  */
 BlockBackendPublic *blk_get_public(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return &blk->public;
 }
 
@@ -815,6 +838,7 @@ BlockBackendPublic *blk_get_public(BlockBackend *blk)
  */
 BlockBackend *blk_by_public(BlockBackendPublic *public)
 {
+    g_assert(qemu_in_main_thread());
     return container_of(public, BlockBackend, public);
 }
 
@@ -827,6 +851,8 @@ void blk_remove_bs(BlockBackend *blk)
     BlockDriverState *bs;
     BdrvChild *root;
 
+    g_assert(qemu_in_main_thread());
+
     notifier_list_notify(&blk->remove_bs_notifiers, blk);
     if (tgm->throttle_state) {
         bs = blk_bs(blk);
@@ -854,6 +880,7 @@ void blk_remove_bs(BlockBackend *blk)
 int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
 {
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
+    g_assert(qemu_in_main_thread());
     bdrv_ref(bs);
     blk->root = bdrv_root_attach_child(bs, "root", &child_root,
                                        BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
@@ -880,6 +907,7 @@ int blk_set_perm(BlockBackend *blk, uint64_t perm, uint64_t shared_perm,
 {
     int ret;
 
+    g_assert(qemu_in_main_thread());
     if (blk->root && !blk->disable_perm) {
         ret = bdrv_child_try_set_perm(blk->root, perm, shared_perm, errp);
         if (ret < 0) {
@@ -895,6 +923,7 @@ int blk_set_perm(BlockBackend *blk, uint64_t perm, uint64_t shared_perm,
 
 void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
 {
+    g_assert(qemu_in_main_thread());
     *perm = blk->perm;
     *shared_perm = blk->shared_perm;
 }
@@ -905,6 +934,7 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
  */
 int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
 {
+    g_assert(qemu_in_main_thread());
     if (blk->dev) {
         return -EBUSY;
     }
@@ -930,6 +960,7 @@ int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
 void blk_detach_dev(BlockBackend *blk, DeviceState *dev)
 {
     assert(blk->dev == dev);
+    g_assert(qemu_in_main_thread());
     blk->dev = NULL;
     blk->dev_ops = NULL;
     blk->dev_opaque = NULL;
@@ -943,6 +974,7 @@ void blk_detach_dev(BlockBackend *blk, DeviceState *dev)
  */
 DeviceState *blk_get_attached_dev(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->dev;
 }
 
@@ -951,6 +983,7 @@ DeviceState *blk_get_attached_dev(BlockBackend *blk)
 char *blk_get_attached_dev_id(BlockBackend *blk)
 {
     DeviceState *dev = blk->dev;
+    g_assert(qemu_in_main_thread());
 
     if (!dev) {
         return g_strdup("");
@@ -971,6 +1004,8 @@ BlockBackend *blk_by_dev(void *dev)
 {
     BlockBackend *blk = NULL;
 
+    g_assert(qemu_in_main_thread());
+
     assert(dev != NULL);
     while ((blk = blk_all_next(blk)) != NULL) {
         if (blk->dev == dev) {
@@ -1009,6 +1044,7 @@ void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops,
  */
 void blk_dev_change_media_cb(BlockBackend *blk, bool load, Error **errp)
 {
+    g_assert(qemu_in_main_thread());
     if (blk->dev_ops && blk->dev_ops->change_media_cb) {
         bool tray_was_open, tray_is_open;
         Error *local_err = NULL;
@@ -1100,6 +1136,7 @@ static void blk_root_resize(BdrvChild *child)
 
 void blk_iostatus_enable(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     blk->iostatus_enabled = true;
     blk->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
 }
@@ -1108,6 +1145,7 @@ void blk_iostatus_enable(BlockBackend *blk)
  * enables it _and_ the VM is configured to stop on errors */
 bool blk_iostatus_is_enabled(const BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return (blk->iostatus_enabled &&
            (blk->on_write_error == BLOCKDEV_ON_ERROR_ENOSPC ||
             blk->on_write_error == BLOCKDEV_ON_ERROR_STOP   ||
@@ -1116,16 +1154,19 @@ bool blk_iostatus_is_enabled(const BlockBackend *blk)
 
 BlockDeviceIoStatus blk_iostatus(const BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->iostatus;
 }
 
 void blk_iostatus_disable(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     blk->iostatus_enabled = false;
 }
 
 void blk_iostatus_reset(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     if (blk_iostatus_is_enabled(blk)) {
         blk->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
     }
@@ -1133,6 +1174,7 @@ void blk_iostatus_reset(BlockBackend *blk)
 
 void blk_iostatus_set_err(BlockBackend *blk, int error)
 {
+    g_assert(qemu_in_main_thread());
     assert(blk_iostatus_is_enabled(blk));
     if (blk->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
         blk->iostatus = error == ENOSPC ? BLOCK_DEVICE_IO_STATUS_NOSPACE :
@@ -1365,6 +1407,7 @@ int blk_pwrite_zeroes(BlockBackend *blk, int64_t offset,
 
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags)
 {
+    g_assert(qemu_in_main_thread());
     return bdrv_make_zero(blk->root, flags);
 }
 
@@ -1393,6 +1436,7 @@ BlockAIOCB *blk_abort_aio_request(BlockBackend *blk,
                                   void *opaque, int ret)
 {
     struct BlockBackendAIOCB *acb;
+    g_assert(qemu_in_main_thread());
 
     blk_inc_in_flight(blk);
     acb = blk_aio_get(&block_backend_aiocb_info, blk, cb, opaque);
@@ -1543,6 +1587,7 @@ void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr)
 
 int64_t blk_nb_sectors(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     if (!blk_is_available(blk)) {
         return -ENOMEDIUM;
     }
@@ -1568,6 +1613,7 @@ BlockAIOCB *blk_aio_pwritev(BlockBackend *blk, int64_t offset,
 
 void blk_aio_cancel(BlockAIOCB *acb)
 {
+    g_assert(qemu_in_main_thread());
     bdrv_aio_cancel(acb);
 }
 
@@ -1731,6 +1777,8 @@ void blk_drain(BlockBackend *blk)
 {
     BlockDriverState *bs = blk_bs(blk);
 
+    g_assert(qemu_in_main_thread());
+
     if (bs) {
         bdrv_drained_begin(bs);
     }
@@ -1748,6 +1796,8 @@ void blk_drain_all(void)
 {
     BlockBackend *blk = NULL;
 
+    g_assert(qemu_in_main_thread());
+
     bdrv_drain_all_begin();
 
     while ((blk = blk_all_next(blk)) != NULL) {
@@ -1767,12 +1817,14 @@ void blk_drain_all(void)
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
                       BlockdevOnError on_write_error)
 {
+    g_assert(qemu_in_main_thread());
     blk->on_read_error = on_read_error;
     blk->on_write_error = on_write_error;
 }
 
 BlockdevOnError blk_get_on_error(BlockBackend *blk, bool is_read)
 {
+    g_assert(qemu_in_main_thread());
     return is_read ? blk->on_read_error : blk->on_write_error;
 }
 
@@ -1780,6 +1832,7 @@ BlockErrorAction blk_get_error_action(BlockBackend *blk, bool is_read,
                                       int error)
 {
     BlockdevOnError on_err = blk_get_on_error(blk, is_read);
+    g_assert(qemu_in_main_thread());
 
     switch (on_err) {
     case BLOCKDEV_ON_ERROR_ENOSPC:
@@ -1819,6 +1872,7 @@ void blk_error_action(BlockBackend *blk, BlockErrorAction action,
                       bool is_read, int error)
 {
     assert(error >= 0);
+    g_assert(qemu_in_main_thread());
 
     if (action == BLOCK_ERROR_ACTION_STOP) {
         /* First set the iostatus, so that "info block" returns an iostatus
@@ -1850,6 +1904,7 @@ void blk_error_action(BlockBackend *blk, BlockErrorAction action,
 bool blk_supports_write_perm(BlockBackend *blk)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (bs) {
         return !bdrv_is_read_only(bs);
@@ -1870,6 +1925,7 @@ bool blk_is_writable(BlockBackend *blk)
 bool blk_is_sg(BlockBackend *blk)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (!bs) {
         return false;
@@ -1880,17 +1936,20 @@ bool blk_is_sg(BlockBackend *blk)
 
 bool blk_enable_write_cache(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->enable_write_cache;
 }
 
 void blk_set_enable_write_cache(BlockBackend *blk, bool wce)
 {
+    g_assert(qemu_in_main_thread());
     blk->enable_write_cache = wce;
 }
 
 void blk_invalidate_cache(BlockBackend *blk, Error **errp)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (!bs) {
         error_setg(errp, "Device '%s' has no medium", blk->name);
@@ -1903,7 +1962,6 @@ void blk_invalidate_cache(BlockBackend *blk, Error **errp)
 bool blk_is_inserted(BlockBackend *blk)
 {
     BlockDriverState *bs = blk_bs(blk);
-
     return bs && bdrv_is_inserted(bs);
 }
 
@@ -1915,6 +1973,7 @@ bool blk_is_available(BlockBackend *blk)
 void blk_lock_medium(BlockBackend *blk, bool locked)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (bs) {
         bdrv_lock_medium(bs, locked);
@@ -1924,6 +1983,8 @@ void blk_lock_medium(BlockBackend *blk, bool locked)
 void blk_eject(BlockBackend *blk, bool eject_flag)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
+
     char *id;
 
     if (bs) {
@@ -1941,6 +2002,7 @@ void blk_eject(BlockBackend *blk, bool eject_flag)
 int blk_get_flags(BlockBackend *blk)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (bs) {
         return bdrv_get_flags(bs);
@@ -1983,11 +2045,13 @@ uint32_t blk_get_max_transfer(BlockBackend *blk)
 
 int blk_get_max_iov(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->root->bs->bl.max_iov;
 }
 
 void blk_set_guest_block_size(BlockBackend *blk, int align)
 {
+    g_assert(qemu_in_main_thread());
     blk->guest_block_size = align;
 }
 
@@ -2004,6 +2068,7 @@ void *blk_blockalign(BlockBackend *blk, size_t size)
 bool blk_op_is_blocked(BlockBackend *blk, BlockOpType op, Error **errp)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (!bs) {
         return false;
@@ -2015,6 +2080,7 @@ bool blk_op_is_blocked(BlockBackend *blk, BlockOpType op, Error **errp)
 void blk_op_unblock(BlockBackend *blk, BlockOpType op, Error *reason)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (bs) {
         bdrv_op_unblock(bs, op, reason);
@@ -2024,6 +2090,7 @@ void blk_op_unblock(BlockBackend *blk, BlockOpType op, Error *reason)
 void blk_op_block_all(BlockBackend *blk, Error *reason)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (bs) {
         bdrv_op_block_all(bs, reason);
@@ -2033,6 +2100,7 @@ void blk_op_block_all(BlockBackend *blk, Error *reason)
 void blk_op_unblock_all(BlockBackend *blk, Error *reason)
 {
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     if (bs) {
         bdrv_op_unblock_all(bs, reason);
@@ -2087,6 +2155,7 @@ static int blk_do_set_aio_context(BlockBackend *blk, AioContext *new_context,
 int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
                         Error **errp)
 {
+    g_assert(qemu_in_main_thread());
     return blk_do_set_aio_context(blk, new_context, true, errp);
 }
 
@@ -2123,6 +2192,7 @@ void blk_add_aio_context_notifier(BlockBackend *blk,
 {
     BlockBackendAioNotifier *notifier;
     BlockDriverState *bs = blk_bs(blk);
+    g_assert(qemu_in_main_thread());
 
     notifier = g_new(BlockBackendAioNotifier, 1);
     notifier->attached_aio_context = attached_aio_context;
@@ -2145,6 +2215,8 @@ void blk_remove_aio_context_notifier(BlockBackend *blk,
     BlockBackendAioNotifier *notifier;
     BlockDriverState *bs = blk_bs(blk);
 
+    g_assert(qemu_in_main_thread());
+
     if (bs) {
         bdrv_remove_aio_context_notifier(bs, attached_aio_context,
                                          detach_aio_context, opaque);
@@ -2165,11 +2237,13 @@ void blk_remove_aio_context_notifier(BlockBackend *blk,
 
 void blk_add_remove_bs_notifier(BlockBackend *blk, Notifier *notify)
 {
+    g_assert(qemu_in_main_thread());
     notifier_list_add(&blk->remove_bs_notifiers, notify);
 }
 
 void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify)
 {
+    g_assert(qemu_in_main_thread());
     notifier_list_add(&blk->insert_bs_notifiers, notify);
 }
 
@@ -2231,6 +2305,7 @@ int blk_save_vmstate(BlockBackend *blk, const uint8_t *buf,
                      int64_t pos, int size)
 {
     int ret;
+    g_assert(qemu_in_main_thread());
 
     if (!blk_is_available(blk)) {
         return -ENOMEDIUM;
@@ -2250,6 +2325,7 @@ int blk_save_vmstate(BlockBackend *blk, const uint8_t *buf,
 
 int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size)
 {
+    g_assert(qemu_in_main_thread());
     if (!blk_is_available(blk)) {
         return -ENOMEDIUM;
     }
@@ -2259,6 +2335,7 @@ int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size)
 
 int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz)
 {
+    g_assert(qemu_in_main_thread());
     if (!blk_is_available(blk)) {
         return -ENOMEDIUM;
     }
@@ -2268,6 +2345,7 @@ int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz)
 
 int blk_probe_geometry(BlockBackend *blk, HDGeometry *geo)
 {
+    g_assert(qemu_in_main_thread());
     if (!blk_is_available(blk)) {
         return -ENOMEDIUM;
     }
@@ -2281,6 +2359,7 @@ int blk_probe_geometry(BlockBackend *blk, HDGeometry *geo)
  */
 void blk_update_root_state(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     assert(blk->root);
 
     blk->root_state.open_flags    = blk->root->bs->open_flags;
@@ -2293,6 +2372,7 @@ void blk_update_root_state(BlockBackend *blk)
  */
 bool blk_get_detect_zeroes_from_root_state(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->root_state.detect_zeroes;
 }
 
@@ -2302,17 +2382,20 @@ bool blk_get_detect_zeroes_from_root_state(BlockBackend *blk)
  */
 int blk_get_open_flags_from_root_state(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->root_state.open_flags;
 }
 
 BlockBackendRootState *blk_get_root_state(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return &blk->root_state;
 }
 
 int blk_commit_all(void)
 {
     BlockBackend *blk = NULL;
+    g_assert(qemu_in_main_thread());
 
     while ((blk = blk_all_next(blk)) != NULL) {
         AioContext *aio_context = blk_get_aio_context(blk);
@@ -2337,6 +2420,7 @@ int blk_commit_all(void)
 /* throttling disk I/O limits */
 void blk_set_io_limits(BlockBackend *blk, ThrottleConfig *cfg)
 {
+    g_assert(qemu_in_main_thread());
     throttle_group_config(&blk->public.throttle_group_member, cfg);
 }
 
@@ -2345,6 +2429,7 @@ void blk_io_limits_disable(BlockBackend *blk)
     BlockDriverState *bs = blk_bs(blk);
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
     assert(tgm->throttle_state);
+    g_assert(qemu_in_main_thread());
     if (bs) {
         bdrv_drained_begin(bs);
     }
@@ -2358,12 +2443,14 @@ void blk_io_limits_disable(BlockBackend *blk)
 void blk_io_limits_enable(BlockBackend *blk, const char *group)
 {
     assert(!blk->public.throttle_group_member.throttle_state);
+    g_assert(qemu_in_main_thread());
     throttle_group_register_tgm(&blk->public.throttle_group_member,
                                 group, blk_get_aio_context(blk));
 }
 
 void blk_io_limits_update_group(BlockBackend *blk, const char *group)
 {
+    g_assert(qemu_in_main_thread());
     /* this BB is not part of any group */
     if (!blk->public.throttle_group_member.throttle_state) {
         return;
@@ -2431,11 +2518,13 @@ static void blk_root_drained_end(BdrvChild *child, int *drained_end_counter)
 
 void blk_register_buf(BlockBackend *blk, void *host, size_t size)
 {
+    g_assert(qemu_in_main_thread());
     bdrv_register_buf(blk_bs(blk), host, size);
 }
 
 void blk_unregister_buf(BlockBackend *blk, void *host)
 {
+    g_assert(qemu_in_main_thread());
     bdrv_unregister_buf(blk_bs(blk), host);
 }
 
@@ -2460,11 +2549,13 @@ int coroutine_fn blk_co_copy_range(BlockBackend *blk_in, int64_t off_in,
 
 const BdrvChild *blk_root(BlockBackend *blk)
 {
+    g_assert(qemu_in_main_thread());
     return blk->root;
 }
 
 int blk_make_empty(BlockBackend *blk, Error **errp)
 {
+    g_assert(qemu_in_main_thread());
     if (!blk_is_available(blk)) {
         error_setg(errp, "No medium inserted");
         return -ENOMEDIUM;
diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
index 721dec2d82..dcf298b6af 100644
--- a/softmmu/qdev-monitor.c
+++ b/softmmu/qdev-monitor.c
@@ -928,6 +928,8 @@ BlockBackend *blk_by_qdev_id(const char *id, Error **errp)
     DeviceState *dev;
     BlockBackend *blk;
 
+    g_assert(qemu_in_main_thread());
+
     dev = find_device_state(id, errp);
     if (dev == NULL) {
         return NULL;
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH 0/4] block layer: split block APIs in graph and I/O
  2021-09-08 13:10 [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Emanuele Giuseppe Esposito
                   ` (3 preceding siblings ...)
  2021-09-08 13:10 ` [RFC PATCH 4/4] block/block-backend.c: assertions for block-backend Emanuele Giuseppe Esposito
@ 2021-09-13 13:10 ` Stefan Hajnoczi
  2021-09-15 12:11   ` Paolo Bonzini
  2021-09-13 13:39 ` Stefan Hajnoczi
  5 siblings, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-13 13:10 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Paolo Bonzini, Eric Blake


On Wed, Sep 08, 2021 at 09:10:17AM -0400, Emanuele Giuseppe Esposito wrote:
> Currently, block layer APIs like block-backend.h contain a mix of
> functions that are either running in the main loop and under the
> BQL, or are thread-safe functions and run in iothreads performing I/O.
> The functions running under BQL also take care of modifying the
> block graph, by using drain and/or aio_context_acquire/release.
> This makes it very confusing to understand where each function
> runs, and what assumptions it provides with regard to thread
> safety.
> 
> We call the functions running under BQL "graph API", and 
> distinguish them from the thread-safe "I/O API".

Maybe "BQL" is clearer than "graph" because not all functions classified
as "graph" need to traverse/modify the graph.

Stefan


* Re: [RFC PATCH 1/4] main-loop.h: introduce qemu_in_main_thread()
  2021-09-08 13:10 ` [RFC PATCH 1/4] main-loop.h: introduce qemu_in_main_thread() Emanuele Giuseppe Esposito
@ 2021-09-13 13:15   ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-13 13:15 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Paolo Bonzini, Eric Blake


On Wed, Sep 08, 2021 at 09:10:18AM -0400, Emanuele Giuseppe Esposito wrote:
> When invoked from the main loop, this function is the same
> as qemu_mutex_iothread_locked, and returns true if the BQL is held.
> When invoked from iothreads or tests, it returns true only
> if the current AioContext is the Main Loop.
> 
> This essentially just extends qemu_mutex_iothread_locked to work
> also in unit tests or other users like storage-daemon, that run
> in the Main Loop but end up using the implementation in
> stubs/iothread-lock.c.
> 
> Using qemu_mutex_iothread_locked in unit tests defaults to false
> because they use the implementation in stubs/iothread-lock,
> making all assertions added in the next patches fail even though
> the AioContext is still the main loop.
> 
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> ---
>  include/qemu/main-loop.h | 13 +++++++++++++
>  softmmu/cpus.c           |  5 +++++
>  stubs/iothread-lock.c    |  5 +++++
>  3 files changed, 23 insertions(+)
> 
> diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
> index 8dbc6fcb89..c6547207f7 100644
> --- a/include/qemu/main-loop.h
> +++ b/include/qemu/main-loop.h
> @@ -245,6 +245,19 @@ AioContext *iohandler_get_aio_context(void);
>   */
>  bool qemu_mutex_iothread_locked(void);
>  
> +/**
> + * qemu_in_main_thread: Return true if the function runs with BQL
> + * or in the main loop AioContext.
> + *
> + * This function falls back to qemu_mutex_iothread_locked() if
> + * called from the main loop, otherwise it checks if the current
> + * AioContext is the main loop. This is useful to check that the BQL
> + * is held, and not make it return false when invoked by unit
> + * tests or other users like storage-daemon that end up using
> + * stubs/iothread-lock.c implementation.
> + */
> +bool qemu_in_main_thread(void);

This description doesn't match the behavior because the "or in the main
loop AioContext" part only applies to non-softmmu builds (e.g. tests).

Please phrase it so it's clear that this function is the same as
qemu_mutex_iothread_locked() when softmmu/cpus.c is linked into the
program. Otherwise it checks that the current AioContext is the global
AioContext.


* Re: [RFC PATCH 2/4] migration: block-dirty-bitmap: add missing qemu_mutex_lock_iothread
  2021-09-08 13:10 ` [RFC PATCH 2/4] migration: block-dirty-bitmap: add missing qemu_mutex_lock_iothread Emanuele Giuseppe Esposito
@ 2021-09-13 13:19   ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-13 13:19 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Paolo Bonzini, Eric Blake


On Wed, Sep 08, 2021 at 09:10:19AM -0400, Emanuele Giuseppe Esposito wrote:
> init_dirty_bitmap_migration assumes the iothread lock (BQL)
> to be held, but it isn't.
> 
> Instead of adding the lock to qemu_savevm_state_setup(),
> follow the same pattern as the other ->save_setup callbacks
> and lock+unlock inside dirty_bitmap_save_setup().
> 
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> ---
>  migration/block-dirty-bitmap.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


* Re: [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API
  2021-09-08 13:10 ` [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API Emanuele Giuseppe Esposito
@ 2021-09-13 13:27   ` Stefan Hajnoczi
  2021-09-15 12:14     ` Paolo Bonzini
  2021-09-13 13:41   ` Stefan Hajnoczi
  1 sibling, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-13 13:27 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Paolo Bonzini, Eric Blake


On Wed, Sep 08, 2021 at 09:10:20AM -0400, Emanuele Giuseppe Esposito wrote:
> diff --git a/include/sysemu/block-backend-graph.h b/include/sysemu/block-backend-graph.h
> new file mode 100644
> index 0000000000..3310987b16
> --- /dev/null
> +++ b/include/sysemu/block-backend-graph.h
> @@ -0,0 +1,132 @@
> +/*
> + * QEMU Block backends
> + *
> + * Copyright (C) 2014-2016 Red Hat, Inc.
> + *
> + * Authors:
> + *  Markus Armbruster <armbru@redhat.com>,
> + *
> + * This work is licensed under the terms of the GNU LGPL, version 2.1
> + * or later.  See the COPYING.LIB file in the top-level directory.
> + */
> +
> +#ifndef BLOCK_BACKEND_GRAPH_H
> +#define BLOCK_BACKEND_GRAPH_H
> +
> +#include "block-backend-common.h"
> +
> +/*
> + * Graph API. These functions run under the BQL lock.
> + *
> + * If a function modifies the graph, it uses drain and/or
> + * aio_context_acquire/release to be sure it has unique access.

It's not obvious why additional locking is needed if the BQL is already
held. Please refer to the thread-safe I/O API functions that can be
running concurrently without the BQL.

> + *
> + * All functions in this header must use this assertion:
> + * g_assert(qemu_in_main_thread());
> + * to be sure they belong here.

I suggest "to catch when they are accidentally called without the BQL".
It explains the rationale whereas "to be sure they belong here" doesn't
explain anything.

> diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
> new file mode 100644
> index 0000000000..66a7bed9f0
> --- /dev/null
> +++ b/include/sysemu/block-backend-io.h
> @@ -0,0 +1,123 @@
> +/*
> + * QEMU Block backends
> + *
> + * Copyright (C) 2014-2016 Red Hat, Inc.
> + *
> + * Authors:
> + *  Markus Armbruster <armbru@redhat.com>,
> + *
> + * This work is licensed under the terms of the GNU LGPL, version 2.1
> + * or later.  See the COPYING.LIB file in the top-level directory.
> + */
> +
> +#ifndef BLOCK_BACKEND_IO_H
> +#define BLOCK_BACKEND_IO_H
> +
> +#include "block-backend-common.h"
> +
> +/*
> + * I/O API functions. These functions are thread-safe, and therefore
> + * can run in any AioContext.

"can run in any AioContext" makes me wonder what the exact requirements
are. Can they run in any *thread* (regardless of whether an AioContext
even exists for that thread) or do they need to run in a thread that has
called qemu_set_current_aio_context()?


* Re: [RFC PATCH 4/4] block/block-backend.c: assertions for block-backend
  2021-09-08 13:10 ` [RFC PATCH 4/4] block/block-backend.c: assertions for block-backend Emanuele Giuseppe Esposito
@ 2021-09-13 13:38   ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-13 13:38 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Paolo Bonzini, Eric Blake


On Wed, Sep 08, 2021 at 09:10:21AM -0400, Emanuele Giuseppe Esposito wrote:
> @@ -1767,12 +1817,14 @@ void blk_drain_all(void)
>  void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
>                        BlockdevOnError on_write_error)
>  {
> +    g_assert(qemu_in_main_thread());
>      blk->on_read_error = on_read_error;
>      blk->on_write_error = on_write_error;
>  }
>  
>  BlockdevOnError blk_get_on_error(BlockBackend *blk, bool is_read)
>  {
> +    g_assert(qemu_in_main_thread());
>      return is_read ? blk->on_read_error : blk->on_write_error;
>  }
>  
> @@ -1780,6 +1832,7 @@ BlockErrorAction blk_get_error_action(BlockBackend *blk, bool is_read,
>                                        int error)
>  {
>      BlockdevOnError on_err = blk_get_on_error(blk, is_read);
> +    g_assert(qemu_in_main_thread());
>  
>      switch (on_err) {
>      case BLOCKDEV_ON_ERROR_ENOSPC:
> @@ -1819,6 +1872,7 @@ void blk_error_action(BlockBackend *blk, BlockErrorAction action,
>                        bool is_read, int error)
>  {
>      assert(error >= 0);
> +    g_assert(qemu_in_main_thread());
>  
>      if (action == BLOCK_ERROR_ACTION_STOP) {
>          /* First set the iostatus, so that "info block" returns an iostatus

The error action APIs are called from the I/O code path. For example,
hw/block/virtio-blk.c:virtio_blk_handle_rw_error() is called from an
IOThread (with -device virtio-blk-pci,iothread=...) while the
AioContext is held.

> @@ -1983,11 +2045,13 @@ uint32_t blk_get_max_transfer(BlockBackend *blk)
>  
>  int blk_get_max_iov(BlockBackend *blk)
>  {
> +    g_assert(qemu_in_main_thread());
>      return blk->root->bs->bl.max_iov;
>  }

This is called by hw/block/virtio-blk.c:virtio_blk_submit_multireq()
from an IOThread with the AioContext held.


* Re: [RFC PATCH 0/4] block layer: split block APIs in graph and I/O
  2021-09-08 13:10 [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Emanuele Giuseppe Esposito
                   ` (4 preceding siblings ...)
  2021-09-13 13:10 ` [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Stefan Hajnoczi
@ 2021-09-13 13:39 ` Stefan Hajnoczi
  5 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-13 13:39 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Paolo Bonzini, Eric Blake


Thanks, marking block layer APIs that require the BQL and those that
don't is an important step. I have posted comments.

Stefan


* Re: [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API
  2021-09-08 13:10 ` [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API Emanuele Giuseppe Esposito
  2021-09-13 13:27   ` Stefan Hajnoczi
@ 2021-09-13 13:41   ` Stefan Hajnoczi
  1 sibling, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-13 13:41 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Paolo Bonzini, Eric Blake


On Wed, Sep 08, 2021 at 09:10:20AM -0400, Emanuele Giuseppe Esposito wrote:
> +/*
> + * Graph API. These functions run under the BQL lock.
> + *
> + * If a function modifies the graph, it uses drain and/or
> + * aio_context_acquire/release to be sure it has unique access.
> + *
> + * All functions in this header must use this assertion:
> + * g_assert(qemu_in_main_thread());
> + * to be sure they belong here.
> + */

It's important to note that not all of these functions are necessarily
limited to running under the BQL, but moving them into the I/O API
would require additional auditing and possibly small thread-safety
changes. Often that work is not worth doing, since at the moment these
APIs are only used with the BQL held, so they have been placed in the
graph API (for now).

Stefan


* Re: [RFC PATCH 0/4] block layer: split block APIs in graph and I/O
  2021-09-13 13:10 ` [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Stefan Hajnoczi
@ 2021-09-15 12:11   ` Paolo Bonzini
  2021-09-15 14:43     ` Stefan Hajnoczi
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2021-09-15 12:11 UTC (permalink / raw)
  To: Stefan Hajnoczi, Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Eric Blake

On 13/09/21 15:10, Stefan Hajnoczi wrote:
> On Wed, Sep 08, 2021 at 09:10:17AM -0400, Emanuele Giuseppe Esposito wrote:
>> Currently, block layer APIs like block-backend.h contain a mix of
>> functions that are either running in the main loop and under the
>> BQL, or are thread-safe functions and run in iothreads performing I/O.
>> The functions running under BQL also take care of modifying the
>> block graph, by using drain and/or aio_context_acquire/release.
>> This makes it very confusing to understand where each function
>> runs, and what assumptions it provides with regard to thread
>> safety.
>>
>> We call the functions running under BQL "graph API", and
>> distinguish them from the thread-safe "I/O API".
> 
> Maybe "BQL" is clearer than "graph" because not all functions classified
> as "graph" need to traverse/modify the graph.

Bikeshedding, I like it! :)

... on the other hand, qemu-storage-daemon does not have a BQL (see 
patch 1); "graph API" functions run from the main (monitor) thread.

The characteristic of the "graph API" is that they affect global state, 
so another possibility could be "global state API".  But is there any 
global state apart from the BlockDriverState graph and the associated 
BlockBackends?

Paolo




* Re: [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API
  2021-09-13 13:27   ` Stefan Hajnoczi
@ 2021-09-15 12:14     ` Paolo Bonzini
  2021-09-15 14:38       ` Stefan Hajnoczi
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2021-09-15 12:14 UTC (permalink / raw)
  To: Stefan Hajnoczi, Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Eric Blake

On 13/09/21 15:27, Stefan Hajnoczi wrote:
>> +/*
>> + * I/O API functions. These functions are thread-safe, and therefore
>> + * can run in any AioContext.
> "can run in any AioContext" makes me wonder what the exact requirements
> are. Can they run in any*thread*  (regardless of whether an AioContext
> even exists for that thread) or do they need to run in a thread that has
> called qemu_set_current_aio_context()?

I think they can run in any thread as long as they have called 
aio_context_acquire/release; later on, they will be able to run in any 
thread completely (which will be the underlying mechanism for multiqueue).
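
[Editor's note: the calling convention described above can be sketched as
follows. This is an illustrative fragment, not code from the series; the
variables blk, offset, buf, bytes and ret are assumed to exist in the caller,
while blk_get_aio_context(), aio_context_acquire()/aio_context_release() and
blk_pread() are existing QEMU APIs:]

```c
/* Illustrative fragment: a thread-safe I/O API call made from an
 * arbitrary thread, bracketed by the AioContext lock -- the bracket
 * that multiqueue would eventually make unnecessary. */
AioContext *ctx = blk_get_aio_context(blk);

aio_context_acquire(ctx);
ret = blk_pread(blk, offset, buf, bytes);   /* I/O API, thread-safe */
aio_context_release(ctx);
```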

Paolo




* Re: [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API
  2021-09-15 12:14     ` Paolo Bonzini
@ 2021-09-15 14:38       ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-15 14:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Emanuele Giuseppe Esposito, Kevin Wolf,
	Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Fam Zheng, Eric Blake


On Wed, Sep 15, 2021 at 02:14:08PM +0200, Paolo Bonzini wrote:
> On 13/09/21 15:27, Stefan Hajnoczi wrote:
> > > +/*
> > > + * I/O API functions. These functions are thread-safe, and therefore
> > > + * can run in any AioContext.
> > "can run in any AioContext" makes me wonder what the exact requirements
> > are. Can they run in any*thread*  (regardless of whether an AioContext
> > even exists for that thread) or do they need to run in a thread that has
> > called qemu_set_current_aio_context()?
> 
> I think they can run in any thread as long as they have called
> aio_context_acquire/release; later on, they will be able to run in any
> thread completely (which will be the underlying mechanism for multiqueue).

Great, it would be good to reflect this in the comment.

Stefan


* Re: [RFC PATCH 0/4] block layer: split block APIs in graph and I/O
  2021-09-15 12:11   ` Paolo Bonzini
@ 2021-09-15 14:43     ` Stefan Hajnoczi
  2021-09-16 14:03       ` Emanuele Giuseppe Esposito
  0 siblings, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2021-09-15 14:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Emanuele Giuseppe Esposito, Kevin Wolf,
	Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Fam Zheng, Eric Blake


On Wed, Sep 15, 2021 at 02:11:41PM +0200, Paolo Bonzini wrote:
> On 13/09/21 15:10, Stefan Hajnoczi wrote:
> > On Wed, Sep 08, 2021 at 09:10:17AM -0400, Emanuele Giuseppe Esposito wrote:
> > > Currently, block layer APIs like block-backend.h contain a mix of
> > > functions that are either running in the main loop and under the
> > > BQL, or are thread-safe functions and run in iothreads performing I/O.
> > > The functions running under BQL also take care of modifying the
> > > block graph, by using drain and/or aio_context_acquire/release.
> > > This makes it very confusing to understand where each function
> > > runs, and what assumptions it provides with regard to thread
> > > safety.
> > > 
> > > We call the functions running under BQL "graph API", and
> > > distinguish them from the thread-safe "I/O API".
> > 
> > Maybe "BQL" is clearer than "graph" because not all functions classified
> > as "graph" need to traverse/modify the graph.
> 
> Bikeshedding, I like it! :)
> 
> ... on the other hand, qemu-storage-daemon does not have a BQL (see patch
> 1); "graph API" functions run from the main (monitor) thread.
> 
> The characteristic of the "graph API" is that they affect global state, so
> another possibility could be "global state API".  But is there any global
> state apart from the BlockDriverState graph and the associated
> BlockBackends?

I would be happy with that name too.

Stefan


* Re: [RFC PATCH 0/4] block layer: split block APIs in graph and I/O
  2021-09-15 14:43     ` Stefan Hajnoczi
@ 2021-09-16 14:03       ` Emanuele Giuseppe Esposito
  2021-09-16 19:49         ` Paolo Bonzini
  0 siblings, 1 reply; 18+ messages in thread
From: Emanuele Giuseppe Esposito @ 2021-09-16 14:03 UTC (permalink / raw)
  To: Stefan Hajnoczi, Paolo Bonzini
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Juan Quintela, John Snow,
	Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Eric Blake



On 15/09/2021 16:43, Stefan Hajnoczi wrote:
> On Wed, Sep 15, 2021 at 02:11:41PM +0200, Paolo Bonzini wrote:
>> On 13/09/21 15:10, Stefan Hajnoczi wrote:
>>> On Wed, Sep 08, 2021 at 09:10:17AM -0400, Emanuele Giuseppe Esposito wrote:
>>>> Currently, block layer APIs like block-backend.h contain a mix of
>>>> functions that are either running in the main loop and under the
>>>> BQL, or are thread-safe functions and run in iothreads performing I/O.
>>>> The functions running under BQL also take care of modifying the
>>>> block graph, by using drain and/or aio_context_acquire/release.
>>>> This makes it very confusing to understand where each function
>>>> runs, and what assumptions it provides with regard to thread
>>>> safety.
>>>>
>>>> We call the functions running under BQL "graph API", and
>>>> distinguish them from the thread-safe "I/O API".
>>>
>>> Maybe "BQL" is clearer than "graph" because not all functions classified
>>> as "graph" need to traverse/modify the graph.
>>
>> Bikeshedding, I like it! :)
>>
>> ... on the other hand, qemu-storage-daemon does not have a BQL (see patch
>> 1); "graph API" functions run from the main (monitor) thread.
>>
>> The characteristic of the "graph API" is that they affect global state, so
>> another possibility could be "global state API".  But is there any global
>> state apart from the BlockDriverState graph and the associated
>> BlockBackends?
> 
> I would be happy with that name too.
> 

Sounds good to me too, thanks.
One more minor cosmetic thing: should I name the header
block-backend-global-state.h, or is block-backend-gs.h
straightforward enough?

Thank you,
Emanuele




* Re: [RFC PATCH 0/4] block layer: split block APIs in graph and I/O
  2021-09-16 14:03       ` Emanuele Giuseppe Esposito
@ 2021-09-16 19:49         ` Paolo Bonzini
  0 siblings, 0 replies; 18+ messages in thread
From: Paolo Bonzini @ 2021-09-16 19:49 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy,
	Daniel P. Berrangé,
	Eduardo Habkost, open list:Block layer core, Juan Quintela,
	John Snow, Richard Henderson, Dr. David Alan Gilbert, qemu-devel,
	Hanna Reitz, Stefan Hajnoczi, Eric Blake


I think either -global or -global-state.

Paolo


Il gio 16 set 2021, 16:03 Emanuele Giuseppe Esposito <eesposit@redhat.com>
ha scritto:

>
>
> On 15/09/2021 16:43, Stefan Hajnoczi wrote:
> > On Wed, Sep 15, 2021 at 02:11:41PM +0200, Paolo Bonzini wrote:
> >> On 13/09/21 15:10, Stefan Hajnoczi wrote:
> >>> On Wed, Sep 08, 2021 at 09:10:17AM -0400, Emanuele Giuseppe Esposito wrote:
> >>>> Currently, block layer APIs like block-backend.h contain a mix of
> >>>> functions that are either running in the main loop and under the
> >>>> BQL, or are thread-safe functions and run in iothreads performing I/O.
> >>>> The functions running under BQL also take care of modifying the
> >>>> block graph, by using drain and/or aio_context_acquire/release.
> >>>> This makes it very confusing to understand where each function
> >>>> runs, and what assumptions it provides with regard to thread
> >>>> safety.
> >>>>
> >>>> We call the functions running under BQL "graph API", and
> >>>> distinguish them from the thread-safe "I/O API".
> >>>
> >>> Maybe "BQL" is clearer than "graph" because not all functions classified
> >>> as "graph" need to traverse/modify the graph.
> >>
> >> Bikeshedding, I like it! :)
> >>
> >> ... on the other hand, qemu-storage-daemon does not have a BQL (see patch
> >> 1); "graph API" functions run from the main (monitor) thread.
> >>
> >> The characteristic of the "graph API" is that they affect global state, so
> >> another possibility could be "global state API".  But is there any global
> >> state apart from the BlockDriverState graph and the associated
> >> BlockBackends?
> >
> > I would be happy with that name too.
> >
>
> Sounds good to me too, thanks.
> One more minor cosmetic thing: should I name the header
> block-backend-global-state.h, or is block-backend-gs.h
> straightforward enough?
>
> Thank you,
> Emanuele
>


end of thread, other threads:[~2021-09-16 19:51 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-08 13:10 [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Emanuele Giuseppe Esposito
2021-09-08 13:10 ` [RFC PATCH 1/4] main-loop.h: introduce qemu_in_main_thread() Emanuele Giuseppe Esposito
2021-09-13 13:15   ` Stefan Hajnoczi
2021-09-08 13:10 ` [RFC PATCH 2/4] migration: block-dirty-bitmap: add missing qemu_mutex_lock_iothread Emanuele Giuseppe Esposito
2021-09-13 13:19   ` Stefan Hajnoczi
2021-09-08 13:10 ` [RFC PATCH 3/4] include/sysemu/block-backend: split header into I/O and graph API Emanuele Giuseppe Esposito
2021-09-13 13:27   ` Stefan Hajnoczi
2021-09-15 12:14     ` Paolo Bonzini
2021-09-15 14:38       ` Stefan Hajnoczi
2021-09-13 13:41   ` Stefan Hajnoczi
2021-09-08 13:10 ` [RFC PATCH 4/4] block/block-backend.c: assertions for block-backend Emanuele Giuseppe Esposito
2021-09-13 13:38   ` Stefan Hajnoczi
2021-09-13 13:10 ` [RFC PATCH 0/4] block layer: split block APIs in graph and I/O Stefan Hajnoczi
2021-09-15 12:11   ` Paolo Bonzini
2021-09-15 14:43     ` Stefan Hajnoczi
2021-09-16 14:03       ` Emanuele Giuseppe Esposito
2021-09-16 19:49         ` Paolo Bonzini
2021-09-13 13:39 ` Stefan Hajnoczi
