From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:57988)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Zw9hN-00018e-My for qemu-devel@nongnu.org;
	Tue, 10 Nov 2015 09:15:07 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1Zw9hM-0005Wj-Mk
	for qemu-devel@nongnu.org; Tue, 10 Nov 2015 09:15:01 -0500
Received: from mx1.redhat.com ([209.132.183.28]:47155)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Zw9hM-0005Wb-HU for qemu-devel@nongnu.org;
	Tue, 10 Nov 2015 09:15:00 -0500
From: Stefan Hajnoczi
Date: Tue, 10 Nov 2015 14:14:03 +0000
Message-Id: <1447164879-6756-9-git-send-email-stefanha@redhat.com>
In-Reply-To: <1447164879-6756-1-git-send-email-stefanha@redhat.com>
References: <1447164879-6756-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL 08/44] block: Introduce BlockDriver.bdrv_drain callback
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Fam Zheng, Stefan Hajnoczi

From: Fam Zheng

Drivers can have internal request sources that generate I/O, like the
need_check_timer in QED.  Since we want quiesced periods that contain
nested event loops in the block layer, we need a way to disable such
event sources.

Block drivers must implement the "bdrv_drain" callback if they have any
internal sources that can generate I/O activity, like a timer or a
worker thread (even in a library) that can schedule a QEMUBH in an
asynchronous callback.

Update the comments of bdrv_drain and bdrv_drained_begin accordingly.

Like bdrv_requests_pending(), we should consider all the children of
bs.  Before, the while loop just worked, as bdrv_requests_pending()
already tracks its children; now we mustn't miss the callback, so
recurse down explicitly.

Signed-off-by: Fam Zheng
Reviewed-by: Paolo Bonzini
Message-id: 1447064214-29930-9-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi
---
 block/io.c                | 16 +++++++++++++++-
 include/block/block_int.h |  6 ++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/block/io.c b/block/io.c
index 4ecb171..adc1eab 100644
--- a/block/io.c
+++ b/block/io.c
@@ -237,8 +237,21 @@ bool bdrv_requests_pending(BlockDriverState *bs)
     return false;
 }
 
+static void bdrv_drain_recurse(BlockDriverState *bs)
+{
+    BdrvChild *child;
+
+    if (bs->drv && bs->drv->bdrv_drain) {
+        bs->drv->bdrv_drain(bs);
+    }
+    QLIST_FOREACH(child, &bs->children, next) {
+        bdrv_drain_recurse(child->bs);
+    }
+}
+
 /*
- * Wait for pending requests to complete on a single BlockDriverState subtree
+ * Wait for pending requests to complete on a single BlockDriverState subtree,
+ * and suspend block driver's internal I/O until next request arrives.
  *
  * Note that unlike bdrv_drain_all(), the caller must hold the BlockDriverState
  * AioContext.
@@ -251,6 +264,7 @@ void bdrv_drain(BlockDriverState *bs)
 {
     bool busy = true;
 
+    bdrv_drain_recurse(bs);
     while (busy) {
         /* Keep iterating */
         bdrv_flush_io_queue(bs);
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 550ce18..4a9f8ff 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -295,6 +295,12 @@ struct BlockDriver {
      */
     int (*bdrv_probe_geometry)(BlockDriverState *bs, HDGeometry *geo);
 
+    /**
+     * Drain and stop any internal sources of requests in the driver, and
+     * remain so until next I/O callback (e.g. bdrv_co_writev) is called.
+     */
+    void (*bdrv_drain)(BlockDriverState *bs);
+
     QLIST_ENTRY(BlockDriver) list;
 };
 
-- 
2.5.0
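
A note on implementing the new callback: for illustration only, a driver
that drives I/O from an internal timer might hook it up roughly as in the
sketch below.  The BDRVExampleState type, its background_timer field and
the bdrv_example driver are hypothetical stand-ins, not part of this patch
or of any in-tree driver.

/* Hypothetical sketch -- not part of this patch. */
#include "block/block_int.h"
#include "qemu/timer.h"

typedef struct BDRVExampleState {
    /* Internal request source that can issue I/O on its own */
    QEMUTimer *background_timer;
} BDRVExampleState;

static void bdrv_example_drain(BlockDriverState *bs)
{
    BDRVExampleState *s = bs->opaque;

    /* Quiesce the internal source; a later I/O callback such as
     * bdrv_co_writev would be responsible for re-arming it. */
    if (s->background_timer) {
        timer_del(s->background_timer);
    }
}

static BlockDriver bdrv_example = {
    .format_name   = "example",
    .instance_size = sizeof(BDRVExampleState),
    .bdrv_drain    = bdrv_example_drain,
    /* ... other callbacks ... */
};

Only sources that generate requests by themselves need this treatment;
requests submitted through the normal I/O path are already waited for by
the while loop in bdrv_drain().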