* [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
@ 2017-11-29 14:49 Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 1/9] block: Remove unused bdrv_requests_pending Fam Zheng
                   ` (9 more replies)
  0 siblings, 10 replies; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

While we look at the fixes for 2.11, I briefly prototyped this series to see if
it makes sense as a simplification of the drain API for 2.12.

The idea is to let AioContext manage quiesce callbacks, then the block layer
only needs to do the in-flight request waiting. This lets us get rid of the
callback recursion (both up and down).
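
For illustration only (not code from this series): a user of the new API, e.g.
a block job or a filter driver, registers a pair of quiesce callbacks with its
AioContext,

    aio_context_add_drain_ops(ctx, my_drained_begin, my_drained_end, opaque);
    /* ... */
    aio_context_del_drain_ops(ctx, my_drained_begin, my_drained_end, opaque);

(my_drained_begin/my_drained_end/opaque being whatever the user needs), and
bdrv_drained_begin() then reduces to aio_context_drained_begin() plus polling
until bs->in_flight reaches zero (see patch 7).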

Many commit logs and code comments are definitely missing, but it would be good
to get high-level review and maybe some testing already.

Fam

Fam Zheng (9):
  block: Remove unused bdrv_requests_pending
  aio: Add drain begin/end API to AioContext
  blockjob: Implement AioContext drain ops
  throttle: Implement AioContext drain ops
  qed: Implement AioContext drain ops
  block: Use aio_context_drained_begin in bdrv_set_aio_context
  block: Switch to use AIO drained begin/end API
  block: Drop old drained_{begin,end} callbacks
  blockjob: Drop unused functions

 block.c                        |  30 +--------
 block/block-backend.c          |  22 -------
 block/io.c                     | 134 +++--------------------------------------
 block/qed.c                    |  34 ++++++++---
 block/throttle.c               |  34 ++++++++---
 blockjob.c                     |  67 ++++++++-------------
 include/block/aio.h            |  27 ++++++++-
 include/block/block.h          |  16 -----
 include/block/block_int.h      |  12 ----
 include/block/blockjob_int.h   |  14 -----
 include/sysemu/block-backend.h |   8 ---
 util/async.c                   |  73 ++++++++++++++++++++++
 12 files changed, 186 insertions(+), 285 deletions(-)

-- 
2.14.3

* [Qemu-devel] [PATCH RFC 1/9] block: Remove unused bdrv_requests_pending
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-12-01 12:35   ` [Qemu-devel] [Qemu-block] " Alberto Garcia
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 2/9] aio: Add drain begin/end API to AioContext Fam Zheng
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block/io.c                | 18 ------------------
 include/block/block_int.h |  1 -
 2 files changed, 19 deletions(-)

diff --git a/block/io.c b/block/io.c
index 4fdf93a014..7f07972489 100644
--- a/block/io.c
+++ b/block/io.c
@@ -134,24 +134,6 @@ void bdrv_disable_copy_on_read(BlockDriverState *bs)
     assert(old >= 1);
 }
 
-/* Check if any requests are in-flight (including throttled requests) */
-bool bdrv_requests_pending(BlockDriverState *bs)
-{
-    BdrvChild *child;
-
-    if (atomic_read(&bs->in_flight)) {
-        return true;
-    }
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        if (bdrv_requests_pending(child->bs)) {
-            return true;
-        }
-    }
-
-    return false;
-}
-
 typedef struct {
     Coroutine *co;
     BlockDriverState *bs;
diff --git a/include/block/block_int.h b/include/block/block_int.h
index a5482775ec..e107163594 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1045,7 +1045,6 @@ bool blk_dev_is_tray_open(BlockBackend *blk);
 bool blk_dev_is_medium_locked(BlockBackend *blk);
 
 void bdrv_set_dirty(BlockDriverState *bs, int64_t offset, int64_t bytes);
-bool bdrv_requests_pending(BlockDriverState *bs);
 
 void bdrv_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap **out);
 void bdrv_undo_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap *in);
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 2/9] aio: Add drain begin/end API to AioContext
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 1/9] block: Remove unused bdrv_requests_pending Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-30 16:07   ` Stefan Hajnoczi
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 3/9] blockjob: Implement AioContext drain ops Fam Zheng
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 include/block/aio.h | 27 +++++++++++++++++---
 util/async.c        | 73 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+), 3 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index e9aeeaec94..40c2f64544 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -47,6 +47,15 @@ typedef void QEMUBHFunc(void *opaque);
 typedef bool AioPollFn(void *opaque);
 typedef void IOHandler(void *opaque);
 
+typedef void AioDrainFn(void *opaque);
+typedef struct AioDrainOps {
+    AioDrainFn *drained_begin;
+    AioDrainFn *drained_end;
+    void *opaque;
+    bool is_new;
+    QTAILQ_ENTRY(AioDrainOps) next;
+} AioDrainOps;
+
 struct Coroutine;
 struct ThreadPool;
 struct LinuxAioState;
@@ -147,6 +156,9 @@ struct AioContext {
     int epollfd;
     bool epoll_enabled;
     bool epoll_available;
+
+    QTAILQ_HEAD(, AioDrainOps) drain_ops;
+    bool drain_ops_updated;
 };
 
 /**
@@ -441,9 +453,9 @@ int64_t aio_compute_timeout(AioContext *ctx);
  *
  * Disable the further processing of external clients.
  */
-static inline void aio_disable_external(AioContext *ctx)
+static inline bool aio_disable_external(AioContext *ctx)
 {
-    atomic_inc(&ctx->external_disable_cnt);
+    return atomic_fetch_inc(&ctx->external_disable_cnt) == 0;
 }
 
 /**
@@ -452,7 +464,7 @@ static inline void aio_disable_external(AioContext *ctx)
  *
  * Enable the processing of external clients.
  */
-static inline void aio_enable_external(AioContext *ctx)
+static inline bool aio_enable_external(AioContext *ctx)
 {
     int old;
 
@@ -462,6 +474,7 @@ static inline void aio_enable_external(AioContext *ctx)
         /* Kick event loop so it re-arms file descriptors */
         aio_notify(ctx);
     }
+    return old == 1;
 }
 
 /**
@@ -564,4 +577,12 @@ void aio_context_set_poll_params(AioContext *ctx, int64_t max_ns,
                                  int64_t grow, int64_t shrink,
                                  Error **errp);
 
+void aio_context_drained_begin(AioContext *ctx);
+void aio_context_drained_end(AioContext *ctx);
+
+void aio_context_add_drain_ops(AioContext *ctx,
+                               AioDrainFn *begin, AioDrainFn *end, void *opaque);
+void aio_context_del_drain_ops(AioContext *ctx,
+                               AioDrainFn *begin, AioDrainFn *end, void *opaque);
+
 #endif
diff --git a/util/async.c b/util/async.c
index 4dd9d95a9e..cca0efd263 100644
--- a/util/async.c
+++ b/util/async.c
@@ -402,6 +402,7 @@ AioContext *aio_context_new(Error **errp)
     AioContext *ctx;
 
     ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
+    QTAILQ_INIT(&ctx->drain_ops);
     aio_context_setup(ctx);
 
     ret = event_notifier_init(&ctx->notifier, false);
@@ -506,3 +507,75 @@ void aio_context_release(AioContext *ctx)
 {
     qemu_rec_mutex_unlock(&ctx->lock);
 }
+
+/* Called with ctx->lock */
+void aio_context_drained_begin(AioContext *ctx)
+{
+    AioDrainOps *ops;
+
+    /* TODO: When all external fds are handled in the following drain_ops
+     * callbacks, aio_disable_external can be dropped. */
+    aio_disable_external(ctx);
+restart:
+    ctx->drain_ops_updated = false;
+    QTAILQ_FOREACH(ops, &ctx->drain_ops, next) {
+        ops->drained_begin(ops->opaque);
+        if (ctx->drain_ops_updated) {
+            goto restart;
+        }
+    }
+}
+
+/* Called with ctx->lock */
+void aio_context_drained_end(AioContext *ctx)
+{
+    AioDrainOps *ops;
+
+restart:
+    ctx->drain_ops_updated = false;
+    QTAILQ_FOREACH(ops, &ctx->drain_ops, next) {
+        if (ops->is_new) {
+            continue;
+        }
+        ops->drained_end(ops->opaque);
+        if (ctx->drain_ops_updated) {
+            goto restart;
+        }
+    }
+    if (aio_enable_external(ctx)) {
+        QTAILQ_FOREACH(ops, &ctx->drain_ops, next) {
+            ops->is_new = false;
+        }
+    }
+}
+
+/* Called with ctx->lock */
+void aio_context_add_drain_ops(AioContext *ctx,
+                               AioDrainFn *begin, AioDrainFn *end, void *opaque)
+{
+    AioDrainOps *ops = g_new0(AioDrainOps, 1);
+    ops->drained_begin = begin;
+    ops->drained_end = end;
+    ops->opaque = opaque;
+    ops->is_new = true;
+    QTAILQ_INSERT_TAIL(&ctx->drain_ops, ops, next);
+    ctx->drain_ops_updated = true;
+}
+
+/* Called with ctx->lock */
+void aio_context_del_drain_ops(AioContext *ctx,
+                               AioDrainFn *begin, AioDrainFn *end, void *opaque)
+{
+    AioDrainOps *ops;
+
+    QTAILQ_FOREACH(ops, &ctx->drain_ops, next) {
+        if (ops->drained_begin == begin &&
+            ops->drained_end == end &&
+            ops->opaque == opaque) {
+            QTAILQ_REMOVE(&ctx->drain_ops, ops, next);
+            ctx->drain_ops_updated = true;
+            g_free(ops);
+            return;
+        }
+    }
+}
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 3/9] blockjob: Implement AioContext drain ops
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 1/9] block: Remove unused bdrv_requests_pending Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 2/9] aio: Add drain begin/end API to AioContext Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-30 16:09   ` Stefan Hajnoczi
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 4/9] throttle: " Fam Zheng
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 blockjob.c | 47 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 30 insertions(+), 17 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index ff9a614531..86d060c89c 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -148,6 +148,23 @@ static void block_job_attached_aio_context(AioContext *new_context,
                                            void *opaque);
 static void block_job_detach_aio_context(void *opaque);
 
+static void block_job_drained_begin(void *opaque)
+{
+    BlockJob *job = opaque;
+    block_job_pause(job);
+}
+
+static void block_job_drained_end(void *opaque)
+{
+    BlockJob *job = opaque;
+    block_job_resume(job);
+}
+
+static const BlockDevOps block_job_dev_ops = {
+    .drained_begin = block_job_drained_begin,
+    .drained_end = block_job_drained_end,
+};
+
 void block_job_unref(BlockJob *job)
 {
     if (--job->refcnt == 0) {
@@ -157,6 +174,10 @@ void block_job_unref(BlockJob *job)
         blk_remove_aio_context_notifier(job->blk,
                                         block_job_attached_aio_context,
                                         block_job_detach_aio_context, job);
+        aio_context_del_drain_ops(blk_get_aio_context(job->blk),
+                                  block_job_drained_begin,
+                                  block_job_drained_end,
+                                  job);
         blk_unref(job->blk);
         error_free(job->blocker);
         g_free(job->id);
@@ -170,6 +191,9 @@ static void block_job_attached_aio_context(AioContext *new_context,
 {
     BlockJob *job = opaque;
 
+    aio_context_add_drain_ops(blk_get_aio_context(job->blk),
+                              block_job_drained_begin, block_job_drained_end,
+                              job);
     if (job->driver->attached_aio_context) {
         job->driver->attached_aio_context(job, new_context);
     }
@@ -192,6 +216,9 @@ static void block_job_detach_aio_context(void *opaque)
 {
     BlockJob *job = opaque;
 
+    aio_context_del_drain_ops(blk_get_aio_context(job->blk),
+                              block_job_drained_begin, block_job_drained_end,
+                              job);
     /* In case the job terminates during aio_poll()... */
     block_job_ref(job);
 
@@ -217,23 +244,6 @@ static const BdrvChildRole child_job = {
     .stay_at_node       = true,
 };
 
-static void block_job_drained_begin(void *opaque)
-{
-    BlockJob *job = opaque;
-    block_job_pause(job);
-}
-
-static void block_job_drained_end(void *opaque)
-{
-    BlockJob *job = opaque;
-    block_job_resume(job);
-}
-
-static const BlockDevOps block_job_dev_ops = {
-    .drained_begin = block_job_drained_begin,
-    .drained_end = block_job_drained_end,
-};
-
 void block_job_remove_all_bdrv(BlockJob *job)
 {
     GSList *l;
@@ -671,6 +681,9 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     bs->job = job;
 
     blk_set_dev_ops(blk, &block_job_dev_ops, job);
+    aio_context_add_drain_ops(blk_get_aio_context(blk),
+                              block_job_drained_begin, block_job_drained_end,
+                              job);
     bdrv_op_unblock(bs, BLOCK_OP_TYPE_DATAPLANE, job->blocker);
 
     QLIST_INSERT_HEAD(&block_jobs, job, job_list);
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 4/9] throttle: Implement AioContext drain ops
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
                   ` (2 preceding siblings ...)
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 3/9] blockjob: Implement AioContext drain ops Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 5/9] qed: " Fam Zheng
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block/throttle.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

diff --git a/block/throttle.c b/block/throttle.c
index 833175ac77..35b740e3de 100644
--- a/block/throttle.c
+++ b/block/throttle.c
@@ -70,6 +70,25 @@ fin:
     return ret;
 }
 
+static void throttle_drained_begin(void *opaque)
+{
+    BlockDriverState *bs = opaque;
+    ThrottleGroupMember *tgm = bs->opaque;
+
+    if (atomic_fetch_inc(&tgm->io_limits_disabled) == 0) {
+        throttle_group_restart_tgm(tgm);
+    }
+}
+
+static void throttle_drained_end(void *opaque)
+{
+    BlockDriverState *bs = opaque;
+    ThrottleGroupMember *tgm = bs->opaque;
+
+    assert(tgm->io_limits_disabled);
+    atomic_dec(&tgm->io_limits_disabled);
+}
+
 static int throttle_open(BlockDriverState *bs, QDict *options,
                          int flags, Error **errp)
 {
@@ -146,6 +165,9 @@ static int throttle_co_flush(BlockDriverState *bs)
 static void throttle_detach_aio_context(BlockDriverState *bs)
 {
     ThrottleGroupMember *tgm = bs->opaque;
+    aio_context_del_drain_ops(bdrv_get_aio_context(bs),
+                              throttle_drained_begin, throttle_drained_end,
+                              bs);
     throttle_group_detach_aio_context(tgm);
 }
 
@@ -153,6 +175,9 @@ static void throttle_attach_aio_context(BlockDriverState *bs,
                                         AioContext *new_context)
 {
     ThrottleGroupMember *tgm = bs->opaque;
+    aio_context_add_drain_ops(new_context,
+                              throttle_drained_begin, throttle_drained_end,
+                              bs);
     throttle_group_attach_aio_context(tgm, new_context);
 }
 
@@ -199,17 +224,12 @@ static bool throttle_recurse_is_first_non_filter(BlockDriverState *bs,
 
 static void coroutine_fn throttle_co_drain_begin(BlockDriverState *bs)
 {
-    ThrottleGroupMember *tgm = bs->opaque;
-    if (atomic_fetch_inc(&tgm->io_limits_disabled) == 0) {
-        throttle_group_restart_tgm(tgm);
-    }
+    throttle_drained_begin(bs);
 }
 
 static void coroutine_fn throttle_co_drain_end(BlockDriverState *bs)
 {
-    ThrottleGroupMember *tgm = bs->opaque;
-    assert(tgm->io_limits_disabled);
-    atomic_dec(&tgm->io_limits_disabled);
+    throttle_drained_end(bs);
 }
 
 static BlockDriver bdrv_throttle = {
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 5/9] qed: Implement AioContext drain ops
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
                   ` (3 preceding siblings ...)
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 4/9] throttle: " Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 6/9] block: Use aio_context_drained_begin in bdrv_set_aio_context Fam Zheng
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block/qed.c | 34 +++++++++++++++++++++++++---------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/block/qed.c b/block/qed.c
index 821dcaa055..8ddaa31e7c 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -337,12 +337,33 @@ static void qed_cancel_need_check_timer(BDRVQEDState *s)
     timer_del(s->need_check_timer);
 }
 
+static void bdrv_qed_drained_begin(void *opaque)
+{
+    BlockDriverState *bs = opaque;
+    BDRVQEDState *s = bs->opaque;
+
+    /* Fire the timer immediately in order to start doing I/O as soon as the
+     * header is flushed.
+     */
+    if (s->need_check_timer && timer_pending(s->need_check_timer)) {
+        qed_cancel_need_check_timer(s);
+        qed_need_check_timer_entry(s);
+    }
+}
+
+static void bdrv_qed_drained_end(void *opaque)
+{
+}
+
 static void bdrv_qed_detach_aio_context(BlockDriverState *bs)
 {
     BDRVQEDState *s = bs->opaque;
 
     qed_cancel_need_check_timer(s);
     timer_free(s->need_check_timer);
+    aio_context_del_drain_ops(bdrv_get_aio_context(bs),
+                              bdrv_qed_drained_begin, bdrv_qed_drained_end,
+                              bs);
 }
 
 static void bdrv_qed_attach_aio_context(BlockDriverState *bs,
@@ -356,19 +377,14 @@ static void bdrv_qed_attach_aio_context(BlockDriverState *bs,
     if (s->header.features & QED_F_NEED_CHECK) {
         qed_start_need_check_timer(s);
     }
+    aio_context_add_drain_ops(new_context,
+                              bdrv_qed_drained_begin, bdrv_qed_drained_end,
+                              bs);
 }
 
 static void coroutine_fn bdrv_qed_co_drain_begin(BlockDriverState *bs)
 {
-    BDRVQEDState *s = bs->opaque;
-
-    /* Fire the timer immediately in order to start doing I/O as soon as the
-     * header is flushed.
-     */
-    if (s->need_check_timer && timer_pending(s->need_check_timer)) {
-        qed_cancel_need_check_timer(s);
-        qed_need_check_timer_entry(s);
-    }
+    bdrv_qed_drained_begin(bs);
 }
 
 static void bdrv_qed_init_state(BlockDriverState *bs)
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 6/9] block: Use aio_context_drained_begin in bdrv_set_aio_context
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
                   ` (4 preceding siblings ...)
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 5/9] qed: " Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 7/9] block: Switch to use AIO drained begin/end API Fam Zheng
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/block.c b/block.c
index 9a1a0d1e73..949f0dec11 100644
--- a/block.c
+++ b/block.c
@@ -4745,8 +4745,7 @@ void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
 {
     AioContext *ctx = bdrv_get_aio_context(bs);
 
-    aio_disable_external(ctx);
-    bdrv_parent_drained_begin(bs);
+    aio_context_drained_begin(ctx);
     bdrv_drain(bs); /* ensure there are no in-flight requests */
 
     while (aio_poll(ctx, false)) {
@@ -4760,8 +4759,7 @@ void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
      */
     aio_context_acquire(new_context);
     bdrv_attach_aio_context(bs, new_context);
-    bdrv_parent_drained_end(bs);
-    aio_enable_external(ctx);
+    aio_context_drained_end(ctx);
     aio_context_release(new_context);
 }
 
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 7/9] block: Switch to use AIO drained begin/end API
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
                   ` (5 preceding siblings ...)
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 6/9] block: Use aio_context_drained_begin in bdrv_set_aio_context Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 8/9] block: Drop old drained_{begin, end} callbacks Fam Zheng
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Instead of recursing the "disable/enable external requests" operations
over the graph, switch to the AioContext API and disable/enable the
whole AioContext at once. Strictly speaking this is a bit more than
necessary, but since all drained sections are short, it is not a big
problem.

Drained end needs nothing more than that.  The other half of drained
begin is to wait for in-flight requests, which we can do with
BDRV_POLL_WHILE() in a loop.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block/io.c | 116 ++++++-------------------------------------------------------
 1 file changed, 10 insertions(+), 106 deletions(-)

diff --git a/block/io.c b/block/io.c
index 7f07972489..914037b21a 100644
--- a/block/io.c
+++ b/block/io.c
@@ -40,28 +40,6 @@
 static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int64_t offset, int bytes, BdrvRequestFlags flags);
 
-void bdrv_parent_drained_begin(BlockDriverState *bs)
-{
-    BdrvChild *c;
-
-    QLIST_FOREACH(c, &bs->parents, next_parent) {
-        if (c->role->drained_begin) {
-            c->role->drained_begin(c);
-        }
-    }
-}
-
-void bdrv_parent_drained_end(BlockDriverState *bs)
-{
-    BdrvChild *c;
-
-    QLIST_FOREACH(c, &bs->parents, next_parent) {
-        if (c->role->drained_end) {
-            c->role->drained_end(c);
-        }
-    }
-}
-
 static void bdrv_merge_limits(BlockLimits *dst, const BlockLimits *src)
 {
     dst->opt_transfer = MAX(dst->opt_transfer, src->opt_transfer);
@@ -141,71 +119,6 @@ typedef struct {
     bool begin;
 } BdrvCoDrainData;
 
-static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
-{
-    BdrvCoDrainData *data = opaque;
-    BlockDriverState *bs = data->bs;
-
-    if (data->begin) {
-        bs->drv->bdrv_co_drain_begin(bs);
-    } else {
-        bs->drv->bdrv_co_drain_end(bs);
-    }
-
-    /* Set data->done before reading bs->wakeup.  */
-    atomic_mb_set(&data->done, true);
-    bdrv_wakeup(bs);
-}
-
-static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
-{
-    BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
-
-    if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) ||
-            (!begin && !bs->drv->bdrv_co_drain_end)) {
-        return;
-    }
-
-    data.co = qemu_coroutine_create(bdrv_drain_invoke_entry, &data);
-    bdrv_coroutine_enter(bs, data.co);
-    BDRV_POLL_WHILE(bs, !data.done);
-}
-
-static bool bdrv_drain_recurse(BlockDriverState *bs, bool begin)
-{
-    BdrvChild *child, *tmp;
-    bool waited;
-
-    /* Ensure any pending metadata writes are submitted to bs->file.  */
-    bdrv_drain_invoke(bs, begin);
-
-    /* Wait for drained requests to finish */
-    waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
-
-    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
-        BlockDriverState *bs = child->bs;
-        bool in_main_loop =
-            qemu_get_current_aio_context() == qemu_get_aio_context();
-        assert(bs->refcnt > 0);
-        if (in_main_loop) {
-            /* In case the recursive bdrv_drain_recurse processes a
-             * block_job_defer_to_main_loop BH and modifies the graph,
-             * let's hold a reference to bs until we are done.
-             *
-             * IOThread doesn't have such a BH, and it is not safe to call
-             * bdrv_unref without BQL, so skip doing it there.
-             */
-            bdrv_ref(bs);
-        }
-        waited |= bdrv_drain_recurse(bs, begin);
-        if (in_main_loop) {
-            bdrv_unref(bs);
-        }
-    }
-
-    return waited;
-}
-
 static void bdrv_co_drain_bh_cb(void *opaque)
 {
     BdrvCoDrainData *data = opaque;
@@ -256,12 +169,13 @@ void bdrv_drained_begin(BlockDriverState *bs)
         return;
     }
 
-    if (atomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
-        bdrv_parent_drained_begin(bs);
+    if (atomic_fetch_inc(&bs->quiesce_counter) > 0) {
+        return;
+    }
+    aio_context_drained_begin(bdrv_get_aio_context(bs));
+    while (BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0)) {
+        /* Loop until no progress is made. */
     }
-
-    bdrv_drain_recurse(bs, true);
 }
 
 void bdrv_drained_end(BlockDriverState *bs)
@@ -275,9 +189,7 @@ void bdrv_drained_end(BlockDriverState *bs)
         return;
     }
 
-    bdrv_parent_drained_end(bs);
-    bdrv_drain_recurse(bs, false);
-    aio_enable_external(bdrv_get_aio_context(bs));
+    aio_context_drained_end(bdrv_get_aio_context(bs));
 }
 
 /*
@@ -324,14 +236,11 @@ void bdrv_drain_all_begin(void)
     BdrvNextIterator it;
     GSList *aio_ctxs = NULL, *ctx;
 
-    block_job_pause_all();
-
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
         aio_context_acquire(aio_context);
-        bdrv_parent_drained_begin(bs);
-        aio_disable_external(aio_context);
+        aio_context_drained_begin(aio_context);
         aio_context_release(aio_context);
 
         if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -347,14 +256,13 @@ void bdrv_drain_all_begin(void)
      */
     while (waited) {
         waited = false;
-
         for (ctx = aio_ctxs; ctx != NULL; ctx = ctx->next) {
             AioContext *aio_context = ctx->data;
 
             aio_context_acquire(aio_context);
             for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
                 if (aio_context == bdrv_get_aio_context(bs)) {
-                    waited |= bdrv_drain_recurse(bs, true);
+                    waited |= BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
                 }
             }
             aio_context_release(aio_context);
@@ -373,13 +281,9 @@ void bdrv_drain_all_end(void)
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
         aio_context_acquire(aio_context);
-        aio_enable_external(aio_context);
-        bdrv_parent_drained_end(bs);
-        bdrv_drain_recurse(bs, false);
+        aio_context_drained_end(aio_context);
         aio_context_release(aio_context);
     }
-
-    block_job_resume_all();
 }
 
 void bdrv_drain_all(void)
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 8/9] block: Drop old drained_{begin, end} callbacks
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
                   ` (6 preceding siblings ...)
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 7/9] block: Switch to use AIO drained begin/end API Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 9/9] blockjob: Drop unused functions Fam Zheng
  2017-11-29 17:25 ` [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Kevin Wolf
  9 siblings, 0 replies; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block.c                        | 24 ------------------------
 block/block-backend.c          | 22 ----------------------
 blockjob.c                     |  6 ------
 include/block/block.h          | 16 ----------------
 include/block/block_int.h      | 11 -----------
 include/sysemu/block-backend.h |  8 --------
 6 files changed, 87 deletions(-)

diff --git a/block.c b/block.c
index 949f0dec11..4434441df5 100644
--- a/block.c
+++ b/block.c
@@ -810,18 +810,6 @@ static char *bdrv_child_get_parent_desc(BdrvChild *c)
     return g_strdup(bdrv_get_device_or_node_name(parent));
 }
 
-static void bdrv_child_cb_drained_begin(BdrvChild *child)
-{
-    BlockDriverState *bs = child->opaque;
-    bdrv_drained_begin(bs);
-}
-
-static void bdrv_child_cb_drained_end(BdrvChild *child)
-{
-    BlockDriverState *bs = child->opaque;
-    bdrv_drained_end(bs);
-}
-
 static int bdrv_child_cb_inactivate(BdrvChild *child)
 {
     BlockDriverState *bs = child->opaque;
@@ -887,8 +875,6 @@ static void bdrv_inherited_options(int *child_flags, QDict *child_options,
 const BdrvChildRole child_file = {
     .get_parent_desc = bdrv_child_get_parent_desc,
     .inherit_options = bdrv_inherited_options,
-    .drained_begin   = bdrv_child_cb_drained_begin,
-    .drained_end     = bdrv_child_cb_drained_end,
     .inactivate      = bdrv_child_cb_inactivate,
 };
 
@@ -909,8 +895,6 @@ static void bdrv_inherited_fmt_options(int *child_flags, QDict *child_options,
 const BdrvChildRole child_format = {
     .get_parent_desc = bdrv_child_get_parent_desc,
     .inherit_options = bdrv_inherited_fmt_options,
-    .drained_begin   = bdrv_child_cb_drained_begin,
-    .drained_end     = bdrv_child_cb_drained_end,
     .inactivate      = bdrv_child_cb_inactivate,
 };
 
@@ -1022,8 +1006,6 @@ const BdrvChildRole child_backing = {
     .attach          = bdrv_backing_attach,
     .detach          = bdrv_backing_detach,
     .inherit_options = bdrv_backing_options,
-    .drained_begin   = bdrv_child_cb_drained_begin,
-    .drained_end     = bdrv_child_cb_drained_end,
     .inactivate      = bdrv_child_cb_inactivate,
     .update_filename = bdrv_backing_update_filename,
 };
@@ -1973,9 +1955,6 @@ static void bdrv_replace_child_noperm(BdrvChild *child,
         assert(bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs));
     }
     if (old_bs) {
-        if (old_bs->quiesce_counter && child->role->drained_end) {
-            child->role->drained_end(child);
-        }
         if (child->role->detach) {
             child->role->detach(child);
         }
@@ -1986,9 +1965,6 @@ static void bdrv_replace_child_noperm(BdrvChild *child,
 
     if (new_bs) {
         QLIST_INSERT_HEAD(&new_bs->parents, child, next_parent);
-        if (new_bs->quiesce_counter && child->role->drained_begin) {
-            child->role->drained_begin(child);
-        }
 
         if (child->role->attach) {
             child->role->attach(child);
diff --git a/block/block-backend.c b/block/block-backend.c
index baef8e7abc..05855ab767 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -68,7 +68,6 @@ struct BlockBackend {
 
     NotifierList remove_bs_notifiers, insert_bs_notifiers;
 
-    int quiesce_counter;
     VMChangeStateEntry *vmsh;
     bool force_allow_inactivate;
 };
@@ -245,9 +244,6 @@ static const BdrvChildRole child_root = {
     .get_name           = blk_root_get_name,
     .get_parent_desc    = blk_root_get_parent_desc,
 
-    .drained_begin      = blk_root_drained_begin,
-    .drained_end        = blk_root_drained_end,
-
     .activate           = blk_root_activate,
     .inactivate         = blk_root_inactivate,
 };
@@ -887,11 +883,6 @@ void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops,
 
     blk->dev_ops = ops;
     blk->dev_opaque = opaque;
-
-    /* Are we currently quiesced? Should we enforce this right now? */
-    if (blk->quiesce_counter && ops->drained_begin) {
-        ops->drained_begin(opaque);
-    }
 }
 
 /*
@@ -2068,12 +2059,6 @@ static void blk_root_drained_begin(BdrvChild *child)
 {
     BlockBackend *blk = child->opaque;
 
-    if (++blk->quiesce_counter == 1) {
-        if (blk->dev_ops && blk->dev_ops->drained_begin) {
-            blk->dev_ops->drained_begin(blk->dev_opaque);
-        }
-    }
-
     /* Note that blk->root may not be accessible here yet if we are just
      * attaching to a BlockDriverState that is drained. Use child instead. */
 
@@ -2085,14 +2070,7 @@ static void blk_root_drained_begin(BdrvChild *child)
 static void blk_root_drained_end(BdrvChild *child)
 {
     BlockBackend *blk = child->opaque;
-    assert(blk->quiesce_counter);
 
     assert(blk->public.throttle_group_member.io_limits_disabled);
     atomic_dec(&blk->public.throttle_group_member.io_limits_disabled);
-
-    if (--blk->quiesce_counter == 0) {
-        if (blk->dev_ops && blk->dev_ops->drained_end) {
-            blk->dev_ops->drained_end(blk->dev_opaque);
-        }
-    }
 }
diff --git a/blockjob.c b/blockjob.c
index 86d060c89c..809111bf24 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -160,11 +160,6 @@ static void block_job_drained_end(void *opaque)
     block_job_resume(job);
 }
 
-static const BlockDevOps block_job_dev_ops = {
-    .drained_begin = block_job_drained_begin,
-    .drained_end = block_job_drained_end,
-};
-
 void block_job_unref(BlockJob *job)
 {
     if (--job->refcnt == 0) {
@@ -680,7 +675,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     block_job_add_bdrv(job, "main node", bs, 0, BLK_PERM_ALL, &error_abort);
     bs->job = job;
 
-    blk_set_dev_ops(blk, &block_job_dev_ops, job);
     aio_context_add_drain_ops(blk_get_aio_context(blk),
                               block_job_drained_begin, block_job_drained_end,
                               job);
diff --git a/include/block/block.h b/include/block/block.h
index c05cac57e5..df73e77200 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -579,22 +579,6 @@ int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo);
 void bdrv_io_plug(BlockDriverState *bs);
 void bdrv_io_unplug(BlockDriverState *bs);
 
-/**
- * bdrv_parent_drained_begin:
- *
- * Begin a quiesced section of all users of @bs. This is part of
- * bdrv_drained_begin.
- */
-void bdrv_parent_drained_begin(BlockDriverState *bs);
-
-/**
- * bdrv_parent_drained_end:
- *
- * End a quiesced section of all users of @bs. This is part of
- * bdrv_drained_end.
- */
-void bdrv_parent_drained_end(BlockDriverState *bs);
-
 /**
  * bdrv_drained_begin:
  *
diff --git a/include/block/block_int.h b/include/block/block_int.h
index e107163594..8eed595682 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -535,17 +535,6 @@ struct BdrvChildRole {
      * caller is responsible for freeing the memory. */
     char *(*get_parent_desc)(BdrvChild *child);
 
-    /*
-     * If this pair of functions is implemented, the parent doesn't issue new
-     * requests after returning from .drained_begin() until .drained_end() is
-     * called.
-     *
-     * Note that this can be nested. If drained_begin() was called twice, new
-     * I/O is allowed only after drained_end() was called twice, too.
-     */
-    void (*drained_begin)(BdrvChild *child);
-    void (*drained_end)(BdrvChild *child);
-
     /* Notifies the parent that the child has been activated/inactivated (e.g.
      * when migration is completing) and it can start/stop requesting
      * permissions and doing I/O on it. */
diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index c4e52a5fa3..9e81232a83 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -58,14 +58,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
 } BlockDevOps;
 
 /* This struct is embedded in (the private) BlockBackend struct and contains
-- 
2.14.3

* [Qemu-devel] [PATCH RFC 9/9] blockjob: Drop unused functions
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
                   ` (7 preceding siblings ...)
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 8/9] block: Drop old drained_{begin, end} callbacks Fam Zheng
@ 2017-11-29 14:49 ` Fam Zheng
  2017-11-29 17:25 ` [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Kevin Wolf
  9 siblings, 0 replies; 21+ messages in thread
From: Fam Zheng @ 2017-11-29 14:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini,
	Stefan Hajnoczi, Fam Zheng

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 blockjob.c                   | 24 ------------------------
 include/block/blockjob_int.h | 14 --------------
 2 files changed, 38 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 809111bf24..bfeb7c4ace 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -699,18 +699,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     return job;
 }
 
-void block_job_pause_all(void)
-{
-    BlockJob *job = NULL;
-    while ((job = block_job_next(job))) {
-        AioContext *aio_context = blk_get_aio_context(job->blk);
-
-        aio_context_acquire(aio_context);
-        block_job_pause(job);
-        aio_context_release(aio_context);
-    }
-}
-
 void block_job_early_fail(BlockJob *job)
 {
     block_job_unref(job);
@@ -764,18 +752,6 @@ void coroutine_fn block_job_pause_point(BlockJob *job)
     }
 }
 
-void block_job_resume_all(void)
-{
-    BlockJob *job = NULL;
-    while ((job = block_job_next(job))) {
-        AioContext *aio_context = blk_get_aio_context(job->blk);
-
-        aio_context_acquire(aio_context);
-        block_job_resume(job);
-        aio_context_release(aio_context);
-    }
-}
-
 void block_job_enter(BlockJob *job)
 {
     if (!block_job_started(job)) {
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index 43f3be2965..4b43367608 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -156,20 +156,6 @@ void block_job_sleep_ns(BlockJob *job, QEMUClockType type, int64_t ns);
  */
 void block_job_yield(BlockJob *job);
 
-/**
- * block_job_pause_all:
- *
- * Asynchronously pause all jobs.
- */
-void block_job_pause_all(void);
-
-/**
- * block_job_resume_all:
- *
- * Resume all block jobs.  Must be paired with a preceding block_job_pause_all.
- */
-void block_job_resume_all(void);
-
 /**
  * block_job_early_fail:
  * @bs: The block device.
-- 
2.14.3

* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
                   ` (8 preceding siblings ...)
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 9/9] blockjob: Drop unused functions Fam Zheng
@ 2017-11-29 17:25 ` Kevin Wolf
  2017-11-30  2:03   ` Fam Zheng
  9 siblings, 1 reply; 21+ messages in thread
From: Kevin Wolf @ 2017-11-29 17:25 UTC (permalink / raw)
  To: Fam Zheng
  Cc: qemu-devel, qemu-block, jcody, Max Reitz, pbonzini, Stefan Hajnoczi

On 29.11.2017 at 15:49, Fam Zheng wrote:
> While we look at the fixes for 2.11, I briefly prototyped this series
> to see if it makes sense as a simplification of the drain API for
> 2.12.
> 
> The idea is to let AioContext manage quiesce callbacks, then the block
> layer only needs to do the in-flight request waiting. This lets us get
> rid of the callback recursion (both up and down).

So essentially you don't drain individual nodes any more, but whole
AioContexts. I have a feeling that this would be a step in the wrong
direction.

Not only would it completely bypass the path I/O requests take and
potentially drain a lot more than is actually necessary, but it also
requires that all nodes that are connected in a tree are in the same
AioContext.

Paolo can say more on this, but my understanding was that the long-term
plan is to make the block layer fully thread safe so that any thread
could call things on any node. I remember Paolo saying that AioContext
was even fully going away in the future. I don't see how this is
compatible with your approach.

And finally, I don't really think that the recursion is even a problem.
The problem is with graph changes made by callbacks that drain allows to
run. With your changes, it might be a bit easier to avoid that
bdrv_drain() itself gets into trouble due to graph changes, but this
doesn't solve the problem for any (possibly indirect) callers of
bdrv_drain().

Kevin

* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-11-29 17:25 ` [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Kevin Wolf
@ 2017-11-30  2:03   ` Fam Zheng
  2017-11-30 10:31     ` Kevin Wolf
  0 siblings, 1 reply; 21+ messages in thread
From: Fam Zheng @ 2017-11-30  2:03 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-devel, qemu-block, jcody, Max Reitz, pbonzini, Stefan Hajnoczi

On Wed, 11/29 18:25, Kevin Wolf wrote:
> On 29.11.2017 at 15:49, Fam Zheng wrote:
> > While we look at the fixes for 2.11, I briefly prototyped this series
> > to see if it makes sense as a simplification of the drain API for
> > 2.12.
> > 
> > The idea is to let AioContext manage quiesce callbacks, then the block
> > layer only needs to do the in-flight request waiting. This lets us get
> > rid of the callback recursion (both up and down).
> 
> So essentially you don't drain individual nodes any more, but whole
> AioContexts. I have a feeling that this would be a step in the wrong
> direction.
> 
> Not only would it completely bypass the path I/O requests take and
> potentially drain a lot more than is actually necessary, but it also
> requires that all nodes that are connected in a tree are in the same
> AioContext.

Yeah, good point. Initially I wanted to introduce a BlockGraph object which
manages the per-graph draining (i.e. where to register the drain callbacks),
but I felt lazy and used AioContext.

Will that make it better?  BlockGraph would be a proper abstraction and will not
limit the API to one AioContext per tree.

> 
> And finally, I don't really think that the recursion is even a problem.
> The problem is with graph changes made by callbacks that drain allows to
> run. With your changes, it might be a bit easier to avoid that
> bdrv_drain() itself gets into trouble due to graph changes, but this
> doesn't solve the problem for any (possibly indirect) callers of
> bdrv_drain().

The recursion is the one big place that can be easily broken by graph changes,
fixing this doesn't make the situation any worse. We could still fix the
indirect callers by taking references or by introducing "ubiquitous coroutines".

Fam

* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-11-30  2:03   ` Fam Zheng
@ 2017-11-30 10:31     ` Kevin Wolf
  2017-11-30 14:34       ` Fam Zheng
  0 siblings, 1 reply; 21+ messages in thread
From: Kevin Wolf @ 2017-11-30 10:31 UTC (permalink / raw)
  To: Fam Zheng
  Cc: qemu-devel, qemu-block, jcody, Max Reitz, pbonzini, Stefan Hajnoczi

On 30.11.2017 at 03:03, Fam Zheng wrote:
> On Wed, 11/29 18:25, Kevin Wolf wrote:
> > On 29.11.2017 at 15:49, Fam Zheng wrote:
> > > While we look at the fixes for 2.11, I briefly prototyped this series
> > > to see if it makes sense as a simplification of the drain API for
> > > 2.12.
> > > 
> > > The idea is to let AioContext manage quiesce callbacks, then the block
> > > layer only needs to do the in-flight request waiting. This lets us get
> > > rid of the callback recursion (both up and down).
> > 
> > So essentially you don't drain individual nodes any more, but whole
> > AioContexts. I have a feeling that this would be a step in the wrong
> > direction.
> > 
> > Not only would it completely bypass the path I/O requests take and
> > potentially drain a lot more than is actually necessary, but it also
> > requires that all nodes that are connected in a tree are in the same
> > AioContext.
> 
> Yeah, good point. Initially I wanted to introduce a BlockGraph object
> which manages the per-graph draining (i.e. where to register the
> drain callbacks), but I felt lazy and used AioContext.
> 
> Will that make it better?  BlockGraph would be a proper abstraction
> and will not limit the API to one AioContext per tree.

There is only a single graph, so this would mean going back to global
bdrv_drain_all() exclusively.

What you really mean is probably connected components in the graph, but
do we really want to manage merging and splitting objects representing
connected components when a node is added or removed from the graph?
Especially when that graph change occurs in a drain callback?

You can also still easily introduce bugs where graph changes during a
drain end up in nodes not being drained, possibly drained twice, you
still access the next pointer of a deleted node or you accidentally
switch to draining a different component.

It's probably possible to get this right, but essentially you're just
switching from iterating a tree to iterating a list. You get roughly the
same set of problems that you have to consider as today, and getting it
right should be about the same difficulty.

> > And finally, I don't really think that the recursion is even a problem.
> > The problem is with graph changes made by callbacks that drain allows to
> > run. With your changes, it might be a bit easier to avoid that
> > bdrv_drain() itself gets into trouble due to graph changes, but this
> > doesn't solve the problem for any (possibly indirect) callers of
> > bdrv_drain().
> 
> The recursion is the one big place that can be easily broken by graph changes,
> fixing this doesn't make the situation any worse. We could still fix the
> indirect callers by taking references or by introducing "ubiquitous coroutines".

But hiding a bug in 80% of the cases where it shows isn't enough.

I think the only real solution is to forbid graph changes until after
any critical operation has completed. I haven't tried it out in
practice, but I suppose we could use a CoMutex around them and take it
in bdrv_drained_begin/end() and all other places that can get into
trouble with graph changes.
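
Something of this shape, perhaps (a hypothetical sketch; none of these names
exist in QEMU, it only shows where the lock would sit, assuming the usual
qemu/coroutine.h declarations):

    static CoMutex graph_lock;   /* qemu_co_mutex_init()ed at startup */

    /* graph-changing operations take the lock... */
    static void coroutine_fn graph_change_example(void)
    {
        qemu_co_mutex_lock(&graph_lock);
        /* ... reparent/delete nodes here ... */
        qemu_co_mutex_unlock(&graph_lock);
    }

    /* ...and so does the drained section, so a graph change cannot run
     * in the middle of it. */
    static void coroutine_fn drained_work_example(BlockDriverState *bs)
    {
        qemu_co_mutex_lock(&graph_lock);
        bdrv_drained_begin(bs);
        /* ... critical work ... */
        bdrv_drained_end(bs);
        qemu_co_mutex_unlock(&graph_lock);
    }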

Kevin

* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-11-30 10:31     ` Kevin Wolf
@ 2017-11-30 14:34       ` Fam Zheng
  2017-11-30 15:10         ` Kevin Wolf
  0 siblings, 1 reply; 21+ messages in thread
From: Fam Zheng @ 2017-11-30 14:34 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-devel, qemu-block, jcody, Max Reitz, pbonzini, Stefan Hajnoczi

On Thu, 11/30 11:31, Kevin Wolf wrote:
> What you really mean is probably connected components in the graph, but
> do we really want to manage merging and splitting objects representing
> connected components when a node is added or removed from the graph?
> Especially when that graph change occurs in a drain callback?
> 
> You can also still easily introduce bugs where graph changes during a
> drain end up in nodes not being drained, possibly drained twice, you
> still access the next pointer of a deleted node or you accidentally
> switch to draining a different component.
> 
> It's probably possible to get this right, but essentially you're just
> switching from iterating a tree to iterating a list. You get roughly the
> same set of problems that you have to consider as today, and getting it
> right should be about the same difficulty.

Not quite. The essential idea is to redo the drain begin/end in a correct way:

bdrv_drained_begin(bs):
    1.a) stop all sources of new requests in the connected components, including
        * aio_disable_external()
        * stop QED timer
        * stop block job
    1.b) aio_poll() until:
        * no request is in flight for bs
        * no progress is made (any progress may generate new requests)

bdrv_drained_end(bs):
    2.a) resume stopped sources of new requests
    2.b) no need to do aio_poll() because the main loop will move on

1.a) can be done either with a graph recursion like now, or with a list
traversal if managed somewhere, like in this series. Or better, BlockGraph will
be the manager. The rule is that these operations will not cause graph changes,
so it's an improvement.

1.b) doesn't need recursion IMO, only looking at bs->in_flight and the return
value of aio_poll() should be enough.
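
As a sketch, 1.b) is just (mirroring what patch 7 does):

    while (BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0)) {
        /* keep polling as long as progress is made, because any
         * progress may generate new requests */
    }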

For 2.a), if the drain begin/end callbacks are managed in a list, it's easy to
only call drained_end() for entries that got drained_begin() earlier, as shown
in patch 2.

> 
> > > And finally, I don't really think that the recursion is even a problem.
> > > The problem is with graph changes made by callbacks that drain allows to
> > > run. With your changes, it might be a bit easier to avoid that
> > > bdrv_drain() itself gets into trouble due to graph changes, but this
> > > doesn't solve the problem for any (possibly indirect) callers of
> > > bdrv_drain().
> > 
> > The recursion is the one big place that can be easily broken by graph changes,
> > fixing this doesn't make the situation any worse. We could still fix the
> > indirect callers by taking references or by introducing "ubiquitous coroutines".
> 
> But hiding a bug in 80% of the cases where it shows isn't enough.

I think they are separate bugs. And with the one that this series is fixing,
the others (in bdrv_drain*() callers) may not even show up, because
bdrv_drained_begin() has already crashed.

> 
> I think the only real solution is to forbid graph changes until after
> any critical operation has completed. I haven't tried it out in
> practice, but I suppose we could use a CoMutex around them and take it
> in bdrv_drained_begin/end() and all other places that can get into
> trouble with graph changes.

Yes, I agree, but that (using a CoMutex around graph changes) requires that
everything, especially the defer_to_main_loop_bh, runs in coroutine context,
which is exactly what I mean by "introducing 'ubiquitous coroutines'", because
currently we don't have them.

Fam

* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-11-30 14:34       ` Fam Zheng
@ 2017-11-30 15:10         ` Kevin Wolf
  2017-11-30 16:04           ` Paolo Bonzini
  0 siblings, 1 reply; 21+ messages in thread
From: Kevin Wolf @ 2017-11-30 15:10 UTC (permalink / raw)
  To: Fam Zheng
  Cc: qemu-devel, qemu-block, jcody, Max Reitz, pbonzini, Stefan Hajnoczi

On 30.11.2017 at 15:34, Fam Zheng wrote:
> On Thu, 11/30 11:31, Kevin Wolf wrote:
> > What you really mean is probably connected components in the graph, but
> > do we really want to manage merging and splitting objects representing
> > connected components when a node is added or removed from the graph?
> > Especially when that graph change occurs in a drain callback?
> > 
> > You can also still easily introduce bugs where graph changes during a
> > drain end up in nodes not being drained, possibly drained twice, you
> > still access the next pointer of a deleted node or you accidentally
> > switch to draining a different component.
> > 
> > It's probably possible to get this right, but essentially you're just
> > switching from iterating a tree to iterating a list. You get roughly the
> > same set of problems that you have to consider as today, and getting it
> > right should be about the same difficulty.
> 
> Not quite. The essential idea is to redo the drain begin/end in a correct way:
> 
> bdrv_drained_begin(bs):
>     1.a) stop all sources of new requests in the connected components, including
>         * aio_disable_external()
>         * stop QED timer
>         * stop block job
>     1.b) aio_poll() until:
>         * no request is in flight for bs
>         * no progress is made (any progress may generate new requests)
> 
> bdrv_drained_end(bs):
>     2.a) resume stopped sources of new requests
>     2.b) no need to do aio_poll() because the main loop will move on
> 
> 1.a) can be done either with a graph recursion like now, or with a list
> traversal if managed somewhere, like in this series. Or better, BlockGraph will
> be the manager. The rule is that these operations will not cause graph changes,
> so it's an improvement.
> 
> 1.b) doesn't need recursion IMO, only looking at bs->in_flight and the return
> value of aio_poll() should be enough.

The trouble is with 1.b), which can cause graph changes, including nodes
being added to or removed from a connected component. This means that
you can get nodes in the component for which 1.a) hasn't been executed,
or nodes that are outside the component, but for which 1.a) has still
been executed.

You then need to compensate for that somehow, and basically execute 1.a)
for any new nodes (otherwise you have nodes in the component that could
still be issuing requests; skipping drained_end for them is not enough)
and execute 2.*) for the removed ones.

> For 2.a), if the drain begin/end callbacks are managed in a list, it's
> easy to only call drained_end() for entries that got drained_begin()
> earlier, as shown in patch 2.

Yes, but not draining them was a bug in the first place.

I'll give you that trying to separate 1.a) and 1.b), so that we have only
a single BDRV_POLL_WHILE() after the most critical code, is an
interesting idea. But as long as 1.b) can involve graph changes, it
remains quite tricky.

And of course, it can't solve the problem with graph changes in callers
of bdrv_drained_begin/end because at least one BDRV_POLL_WHILE() will
stay there that can cause graph changes.

> > > > And finally, I don't really think that the recursion is even a problem.
> > > > The problem is with graph changes made by callbacks that drain allows to
> > > > run. With your changes, it might be a bit easier to avoid that
> > > > bdrv_drain() itself gets into trouble due to graph changes, but this
> > > > doesn't solve the problem for any (possibly indirect) callers of
> > > > bdrv_drain().
> > > 
> > > The recursion is the one big place that can be easily broken by graph changes,
> > > fixing this doesn't make the situation any worse. We could still fix the
> > > indirect callers by taking references or by introducing "ubiquitous coroutines".
> > 
> > But hiding a bug in 80% of the cases where it shows isn't enough.
> 
> I think they are separate bugs. And with the one that this series is fixing,
> the others (in bdrv_drain*() callers) may not even show up, because
> bdrv_drained_begin() has already crashed.
> 
> > 
> > I think the only real solution is to forbid graph changes until after
> > any critical operation has completed. I haven't tried it out in
> > practice, but I suppose we could use a CoMutex around them and take it
> > in bdrv_drained_begin/end() and all other places that can get into
> > trouble with graph changes.
> 
> Yes, I agree, but that (using a CoMutex around graph changes) requires that
> everything, especially the defer_to_main_loop_bh, runs in coroutine context,
> which is exactly what I mean by "introducing 'ubiquitous coroutines'",
> because currently we don't have them.

Is it hard to do, though? Instead of using a BH to switch to the main
loop and outside of coroutine context, you could use aio_co_schedule()
and yield, which would leave you in the main loop, but still in
coroutine context.

Would this have any bad side effects I'm not aware of?

I'm not sure about other places, of course. We'd have to check that.

Kevin


* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-11-30 15:10         ` Kevin Wolf
@ 2017-11-30 16:04           ` Paolo Bonzini
  2017-12-01  9:51             ` Fam Zheng
  0 siblings, 1 reply; 21+ messages in thread
From: Paolo Bonzini @ 2017-11-30 16:04 UTC (permalink / raw)
  To: Kevin Wolf, Fam Zheng
  Cc: qemu-devel, qemu-block, jcody, Max Reitz, Stefan Hajnoczi

On 30/11/2017 16:10, Kevin Wolf wrote:
>> Yes, I agree, but that (using a CoMutex around graph changes) requires
>> that everything, especially the defer_to_main_loop_bh, runs in a coroutine
>> context, which is exactly what I mean by "introducing 'ubiquitous
>> coroutines'", because currently we don't have them.
> Is it hard to do, though? Instead of using a BH to switch to the main
> loop and outside of coroutine context, you could use aio_co_schedule()
> and yield, which would leave you in the main loop, but still in
> coroutine context.

Not that I can think of, but aio_co_schedule alone wouldn't work, because
"the coroutine must have yielded unless ctx is the context in which the
coroutine is running (i.e. the value of qemu_get_current_aio_context()
from the coroutine itself)".

So you'd have to use a bottom half that calls aio_co_schedule.  But that
would work.
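
Something like this, I suppose (just a sketch, the helper names are
made up):

static void defer_to_main_loop_co_bh(void *opaque)
{
    Coroutine *co = opaque;

    /* Runs from the source context's event loop, so the coroutine below
     * has already yielded and aio_co_schedule() is safe to call. */
    aio_co_schedule(qemu_get_aio_context(), co);
}

static void coroutine_fn defer_to_main_loop_co(void)
{
    aio_bh_schedule_oneshot(qemu_get_current_aio_context(),
                            defer_to_main_loop_co_bh,
                            qemu_coroutine_self());
    qemu_coroutine_yield();
    /* Resumes here in the main loop, still in coroutine context. */
}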

Paolo


* Re: [Qemu-devel] [PATCH RFC 2/9] aio: Add drain begin/end API to AioContext
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 2/9] aio: Add drain begin/end API to AioContext Fam Zheng
@ 2017-11-30 16:07   ` Stefan Hajnoczi
  0 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2017-11-30 16:07 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini

On Wed, Nov 29, 2017 at 10:49:49PM +0800, Fam Zheng wrote:
> diff --git a/util/async.c b/util/async.c
> index 4dd9d95a9e..cca0efd263 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -402,6 +402,7 @@ AioContext *aio_context_new(Error **errp)
>      AioContext *ctx;
>  
>      ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
> +    QTAILQ_INIT(&ctx->drain_ops);
>      aio_context_setup(ctx);
>  
>      ret = event_notifier_init(&ctx->notifier, false);
> @@ -506,3 +507,75 @@ void aio_context_release(AioContext *ctx)
>  {
>      qemu_rec_mutex_unlock(&ctx->lock);
>  }
> +
> +/* Called with ctx->lock */
> +void aio_context_drained_begin(AioContext *ctx)
> +{
> +    AioDrainOps *ops;
> +
> +    /* TODO: When all external fds are handled in the following drain_ops
> +     * callbacks, aio_disable_external can be dropped. */
> +    aio_disable_external(ctx);
> +restart:
> +    ctx->drain_ops_updated = false;
> +    QTAILQ_FOREACH(ops, &ctx->drain_ops, next) {
> +        ops->drained_begin(ops->opaque);
> +        if (ctx->drain_ops_updated) {
> +            goto restart;

drained_begin() can be called multiple times.  This needs to be clearly
documented to avoid surprises.

> +        }
> +    }
> +}
> +
> +/* Called with ctx->lock */
> +void aio_context_drained_end(AioContext *ctx)
> +{
> +    AioDrainOps *ops;
> +
> +restart:
> +    ctx->drain_ops_updated = false;
> +    QTAILQ_FOREACH(ops, &ctx->drain_ops, next) {
> +        if (ops->is_new) {
> +            continue;
> +        }
> +        ops->drained_end(ops->opaque);

drained_end() can be called multiple times.  This needs to be clearly
documented to avoid surprises.


> +        if (ctx->drain_ops_updated) {
> +            goto restart;
> +        }
> +    }
> +    if (aio_enable_external(ctx)) {
> +        QTAILQ_FOREACH(ops, &ctx->drain_ops, next) {
> +            ops->is_new = false;
> +        }
> +    }

This is weird: aio_context_drained_end() has nesting support for
->is_new but not for ->drained_end() calls.  I'm not sure where you're
going with these semantics yet.
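
For example, something along these lines in the header would help (the
wording is only a suggestion):

/*
 * AioDrainOps: per-AioContext quiesce callbacks.
 *
 * Note that .drained_begin() and .drained_end() may each be invoked
 * more than once per drained section, e.g. when the drain_ops list is
 * modified while draining and the traversal restarts, so
 * implementations must tolerate repeated calls (make them idempotent
 * or reference-counted).
 */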


* Re: [Qemu-devel] [PATCH RFC 3/9] blockjob: Implement AioContext drain ops
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 3/9] blockjob: Implement AioContext drain ops Fam Zheng
@ 2017-11-30 16:09   ` Stefan Hajnoczi
  0 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2017-11-30 16:09 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, qemu-block, jcody, Kevin Wolf, Max Reitz, pbonzini

On Wed, Nov 29, 2017 at 10:49:50PM +0800, Fam Zheng wrote:
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  blockjob.c | 47 ++++++++++++++++++++++++++++++-----------------
>  1 file changed, 30 insertions(+), 17 deletions(-)
> 
> diff --git a/blockjob.c b/blockjob.c
> index ff9a614531..86d060c89c 100644
> --- a/blockjob.c
> +++ b/blockjob.c
> @@ -148,6 +148,23 @@ static void block_job_attached_aio_context(AioContext *new_context,
>                                             void *opaque);
>  static void block_job_detach_aio_context(void *opaque);
>  
> +static void block_job_drained_begin(void *opaque)
> +{
> +    BlockJob *job = opaque;
> +    block_job_pause(job);
> +}

This is buggy because block_job_pause() increments a counter.  Remember
the .drained_begin() semantics are that it can be called any number of
times (see comment in previous patch)!
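
One way to make it tolerate repeated calls (sketch only; the 'drained'
flag is made up, it is not in this patch):

static void block_job_drained_begin(void *opaque)
{
    BlockJob *job = opaque;

    /* Pause only once per drained section, no matter how often the
     * callback fires ('drained' is a hypothetical bool in BlockJob). */
    if (!job->drained) {
        job->drained = true;
        block_job_pause(job);
    }
}

static void block_job_drained_end(void *opaque)
{
    BlockJob *job = opaque;

    if (job->drained) {
        job->drained = false;
        block_job_resume(job);
    }
}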


* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-11-30 16:04           ` Paolo Bonzini
@ 2017-12-01  9:51             ` Fam Zheng
  2017-12-01 12:24               ` Kevin Wolf
  0 siblings, 1 reply; 21+ messages in thread
From: Fam Zheng @ 2017-12-01  9:51 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-devel, Paolo Bonzini, qemu-block, jcody, Max Reitz, Stefan Hajnoczi

On Thu, 11/30 17:04, Paolo Bonzini wrote:
> On 30/11/2017 16:10, Kevin Wolf wrote:
> >> Yes, I agree, but that (using a CoMutex around graph changes) requires
> >> that everything, especially the defer_to_main_loop_bh, runs in a coroutine
> >> context, which is exactly what I mean by "introducing 'ubiquitous
> >> coroutines'", because currently we don't have them.
> > Is it hard to do, though? Instead of using a BH to switch to the main
> > loop and outside of coroutine context, you could use aio_co_schedule()
> > and yield, which would leave you in the main loop, but still in
> > coroutine context.
> 
> Not that I can think of, but aio_co_schedule alone wouldn't work, because
> "the coroutine must have yielded unless ctx is the context in which the
> coroutine is running (i.e. the value of qemu_get_current_aio_context()
> from the coroutine itself)".
> 
> So you'd have to use a bottom half that calls aio_co_schedule.  But that
> would work.
> 

We have QMP commands that can manipulate the graph, none of which run in
coroutines. I think running QMP commands in coroutines has its merits,
especially with regard to the nested event loops.

Also, bdrv_close_all() and similar calls at the end of main() do draining too,
and I'm not sure how to deal with them. Maybe special-case them and skip the
draining CoMutex?

Fam


* Re: [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end
  2017-12-01  9:51             ` Fam Zheng
@ 2017-12-01 12:24               ` Kevin Wolf
  0 siblings, 0 replies; 21+ messages in thread
From: Kevin Wolf @ 2017-12-01 12:24 UTC (permalink / raw)
  To: Fam Zheng
  Cc: qemu-devel, Paolo Bonzini, qemu-block, jcody, Max Reitz, Stefan Hajnoczi

Am 01.12.2017 um 10:51 hat Fam Zheng geschrieben:
> On Thu, 11/30 17:04, Paolo Bonzini wrote:
> > On 30/11/2017 16:10, Kevin Wolf wrote:
> > >> Yes, I agree, but that (using a CoMutex around graph changes) requires
> > >> that everything, especially the defer_to_main_loop_bh, runs in a coroutine
> > >> context, which is exactly what I mean by "introducing 'ubiquitous
> > >> coroutines'", because currently we don't have them.
> > > Is it hard to do, though? Instead of using a BH to switch to the main
> > > loop and outside of coroutine context, you could use aio_co_schedule()
> > > and yield, which would leave you in the main loop, but still in
> > > coroutine context.
> > 
> > Not that I can think of, but aio_co_schedule alone wouldn't work, because
> > "the coroutine must have yielded unless ctx is the context in which the
> > coroutine is running (i.e. the value of qemu_get_current_aio_context()
> > from the coroutine itself)".
> > 
> > So you'd have to use a bottom half that calls aio_co_schedule.  But that
> > would work.
> 
> We have QMP commands that can manipulate the graph, none of which run in
> coroutines. I think running QMP commands in coroutines has its merits,
> especially with regard to the nested event loops.
> 
> Also, bdrv_close_all() and similar calls at the end of main() do draining too,
> and I'm not sure how to deal with them. Maybe special-case them and skip the
> draining CoMutex?

All of these cases have in common that they are the outermost layer with
respect to the block subsystem. This means that a nested event loop
there should be harmless because the callbacks called by it won't
influence callers further up in the call stack.

Kevin


* Re: [Qemu-devel] [Qemu-block] [PATCH RFC 1/9] block: Remove unused bdrv_requests_pending
  2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 1/9] block: Remove unused bdrv_requests_pending Fam Zheng
@ 2017-12-01 12:35   ` Alberto Garcia
  0 siblings, 0 replies; 21+ messages in thread
From: Alberto Garcia @ 2017-12-01 12:35 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel
  Cc: Kevin Wolf, qemu-block, Max Reitz, Stefan Hajnoczi, pbonzini

On Wed 29 Nov 2017 03:49:48 PM CET, Fam Zheng wrote:
> Signed-off-by: Fam Zheng <famz@redhat.com>

Reviewed-by: Alberto Garcia <berto@igalia.com>

Berto



Thread overview: 21+ messages
2017-11-29 14:49 [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Fam Zheng
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 1/9] block: Remove unused bdrv_requests_pending Fam Zheng
2017-12-01 12:35   ` [Qemu-devel] [Qemu-block] " Alberto Garcia
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 2/9] aio: Add drain begin/end API to AioContext Fam Zheng
2017-11-30 16:07   ` Stefan Hajnoczi
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 3/9] blockjob: Implement AioContext drain ops Fam Zheng
2017-11-30 16:09   ` Stefan Hajnoczi
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 4/9] throttle: " Fam Zheng
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 5/9] qed: " Fam Zheng
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 6/9] block: Use aio_context_drained_begin in bdrv_set_aio_context Fam Zheng
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 7/9] block: Switch to use AIO drained begin/end API Fam Zheng
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 8/9] block: Drop old drained_{begin, end} callbacks Fam Zheng
2017-11-29 14:49 ` [Qemu-devel] [PATCH RFC 9/9] blockjob: Drop unused functions Fam Zheng
2017-11-29 17:25 ` [Qemu-devel] [PATCH RFC 0/9] block: Rewrite block drain begin/end Kevin Wolf
2017-11-30  2:03   ` Fam Zheng
2017-11-30 10:31     ` Kevin Wolf
2017-11-30 14:34       ` Fam Zheng
2017-11-30 15:10         ` Kevin Wolf
2017-11-30 16:04           ` Paolo Bonzini
2017-12-01  9:51             ` Fam Zheng
2017-12-01 12:24               ` Kevin Wolf
