From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:48386)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1aPD0K-0005t6-0e
	for qemu-devel@nongnu.org; Fri, 29 Jan 2016 12:38:41 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1aPD0J-0006uy-0W
	for qemu-devel@nongnu.org; Fri, 29 Jan 2016 12:38:39 -0500
From: Kevin Wolf
Date: Fri, 29 Jan 2016 18:37:35 +0100
Message-Id: <1454089074-4819-30-git-send-email-kwolf@redhat.com>
In-Reply-To: <1454089074-4819-1-git-send-email-kwolf@redhat.com>
References: <1454089074-4819-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 29/48] block: Rewrite bdrv_close_all()
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org

From: Max Reitz

This patch rewrites bdrv_close_all(): Until now, all root BDSs have
been force-closed. This is bad because it can lead to cached data not
being flushed to disk.

Instead, try to make all reference holders relinquish their reference
voluntarily:

1. All BlockBackend users are handled by making all BBs simply eject
   their BDS tree. Since a BDS can never be on top of a BB, this will
   not cause any of the issues seen with the force-closing of BDSs.
   The references will be relinquished and any further access to the
   BB will fail gracefully. (A toy model of the ejection follows the
   patch.)

2. All BDSs which are owned by the monitor itself (because they do not
   have a BB) are relinquished next.

3. Besides BBs and the monitor, block jobs and other BDSs are the only
   things left that can hold a reference to a BDS. After every
   remaining block job has been canceled, there should not be any BDSs
   left. The loop added here therefore always terminates (as long as
   NDEBUG is not defined): on each pass, either all_bdrv_states is
   already empty, or some BDS still carries a block job that can be
   canceled, or no job is left and the assertion fails. (A sketch of
   this termination argument also follows the patch.)

Signed-off-by: Max Reitz
Signed-off-by: Kevin Wolf
---
 block.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/block.c b/block.c
index d687d2c..ff1aafc 100644
--- a/block.c
+++ b/block.c
@@ -2145,9 +2145,7 @@ static void bdrv_close(BlockDriverState *bs)
 {
     BdrvAioNotifier *ban, *ban_next;
 
-    if (bs->job) {
-        block_job_cancel_sync(bs->job);
-    }
+    assert(!bs->job);
 
     /* Disable I/O limits and drain all pending throttled requests */
     if (bs->throttle_state) {
@@ -2214,13 +2212,33 @@ static void bdrv_close(BlockDriverState *bs)
 void bdrv_close_all(void)
 {
     BlockDriverState *bs;
+    AioContext *aio_context;
 
-    QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
-        AioContext *aio_context = bdrv_get_aio_context(bs);
+    /* Drop references from requests still in flight, such as canceled block
+     * jobs whose AIO context has not been polled yet */
+    bdrv_drain_all();
 
-        aio_context_acquire(aio_context);
-        bdrv_close(bs);
-        aio_context_release(aio_context);
+    blk_remove_all_bs();
+    blockdev_close_all_bdrv_states();
+
+    /* Cancel all block jobs */
+    while (!QTAILQ_EMPTY(&all_bdrv_states)) {
+        QTAILQ_FOREACH(bs, &all_bdrv_states, bs_list) {
+            aio_context = bdrv_get_aio_context(bs);
+
+            aio_context_acquire(aio_context);
+            if (bs->job) {
+                block_job_cancel_sync(bs->job);
+                aio_context_release(aio_context);
+                break;
+            }
+            aio_context_release(aio_context);
+        }
+
+        /* All the remaining BlockDriverStates are referenced directly or
+         * indirectly from block jobs, so there needs to be at least one BDS
+         * directly used by a block job */
+        assert(bs);
     }
 }
 
-- 
1.8.3.1
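
Below is a minimal, self-contained toy model of step 1, a BlockBackend
ejecting its BDS tree. All names in it (Bds, Blk, bds_unref, blk_eject)
are invented for illustration: this is a sketch of the reference-counting
idea behind the patch, not the QEMU implementation.

#include <assert.h>
#include <stdlib.h>

/* One child pointer keeps the model small; real BDS trees can have
 * several children (backing file, protocol layer, filters, ...). */
typedef struct Bds {
    int refcnt;
    struct Bds *child;
} Bds;

static void bds_unref(Bds *bs)
{
    if (bs && --bs->refcnt == 0) {
        bds_unref(bs->child);   /* dropping a node drops its children */
        free(bs);
    }
}

typedef struct Blk {
    Bds *root;                  /* the backend's reference to its tree */
} Blk;

/* Eject the tree: clear the pointer first, then drop the reference.
 * Later I/O through the Blk sees root == NULL and fails gracefully
 * instead of touching freed memory. */
static void blk_eject(Blk *blk)
{
    Bds *root = blk->root;
    blk->root = NULL;
    bds_unref(root);
}

int main(void)
{
    Bds *backing = calloc(1, sizeof(Bds));
    Bds *top = calloc(1, sizeof(Bds));

    backing->refcnt = 1;        /* referenced by top */
    top->refcnt = 1;            /* referenced by the backend */
    top->child = backing;

    Blk blk = { .root = top };
    blk_eject(&blk);            /* frees top, then backing */
    assert(blk.root == NULL);
    return 0;
}

Because a BDS can never sit on top of a BB, ejecting the tree cannot
leave a dangling pointer in any other component, which is exactly why
step 1 avoids the problems of force-closing BDSs.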
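
The cancel loop in the new bdrv_close_all() deserves one more remark:
assert(bs) works because QTAILQ_FOREACH leaves bs == NULL when a full
scan completes without hitting the break, and the scan restarts from
the head after every cancellation because block_job_cancel_sync() can
delete BDSs and thereby invalidate the iteration. Here is a
self-contained sketch of that restart-and-assert pattern over a plain
singly linked list; all names (Node, all_nodes, cancel_job, close_all)
are invented, and canceling a job is modeled as simply unlinking and
freeing the node that carried it.

#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct Node {
    bool has_job;           /* stands in for bs->job */
    struct Node *next;
} Node;

static Node *all_nodes;     /* stands in for all_bdrv_states */

/* Canceling the job releases the references that kept the node
 * alive; modeled as unlinking and freeing it. */
static void cancel_job(Node **link)
{
    Node *n = *link;
    *link = n->next;
    free(n);
}

static void close_all(void)
{
    while (all_nodes != NULL) {
        Node **link = &all_nodes;
        bool canceled = false;

        /* Stop at the first node with a job, cancel it, and restart
         * from the head: cancellation may have removed nodes further
         * down the list. */
        for (; *link != NULL; link = &(*link)->next) {
            if ((*link)->has_job) {
                cancel_job(link);
                canceled = true;
                break;
            }
        }

        /* If nodes remain but none carries a job, nothing can ever
         * release them; fail the assertion instead of spinning. */
        assert(canceled);
    }
}

int main(void)
{
    /* Two nodes with jobs, so close_all() needs two passes. */
    for (int i = 0; i < 2; i++) {
        Node *n = calloc(1, sizeof(Node));
        n->has_job = true;
        n->next = all_nodes;
        all_nodes = n;
    }
    close_all();
    assert(all_nodes == NULL);
    return 0;
}

In the real code progress is subtler, since canceling one job may
release many BDSs at once, but the invariant is the same: as long as
all_bdrv_states is non-empty, at least one remaining BDS must be
directly used by a block job, or shutdown could never complete.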