From: Greg Kurz
Date: Fri, 25 May 2018 00:53:55 +0200
Message-ID: <152720243501.517758.270483838608061631.stgit@bahia.lan>
Subject: [Qemu-devel] [PATCH v3] block: fix QEMU crash with scsi-hd and drive_del
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, qemu-stable@nongnu.org

Removing a drive with drive_del while it is being used to run an I/O
intensive workload can cause QEMU to crash.

An AIO flush can yield at some point:

blk_aio_flush_entry()
 blk_co_flush(blk)
  bdrv_co_flush(blk->root->bs)
   ...
    qemu_coroutine_yield()

and let the HMP command run, free blk->root and give control back to the
AIO flush:

    hmp_drive_del()
     blk_remove_bs()
      bdrv_root_unref_child(blk->root)
       child_bs = blk->root->bs
       bdrv_detach_child(blk->root)
        bdrv_replace_child(blk->root, NULL)
         blk->root->bs = NULL
        g_free(blk->root) <============== blk->root becomes stale
       bdrv_unref(child_bs)
        bdrv_delete(child_bs)
         bdrv_close()
          bdrv_drained_begin()
           bdrv_do_drained_begin()
            bdrv_drain_recurse()
             aio_poll()
              ...
               qemu_coroutine_switch()

and the AIO flush completion ends up dereferencing blk->root:

  blk_aio_complete()
   scsi_aio_complete()
    blk_get_aio_context(blk)
     bs = blk_bs(blk)
     i.e., bs = blk->root ? blk->root->bs : NULL
                ^^^^^^^^^
                stale

The problem is that we should not make changes to the block driver graph
while there are in-flight requests. This patch hence adds a drained
section to bdrv_detach_child(), so that all requests are guaranteed to
have been drained before blk->root becomes stale.

Signed-off-by: Greg Kurz
---
v3: - start drained section before modifying the graph (Stefan)
v2: - drain I/O requests when detaching the BDS (Stefan, Paolo)
---
 block.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block.c b/block.c
index 501b64c8193f..715c1b56c1e2 100644
--- a/block.c
+++ b/block.c
@@ -2127,12 +2127,16 @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
 static void bdrv_detach_child(BdrvChild *child)
 {
+    BlockDriverState *child_bs = child->bs;
+
+    bdrv_drained_begin(child_bs);
     if (child->next.le_prev) {
         QLIST_REMOVE(child, next);
         child->next.le_prev = NULL;
     }
     bdrv_replace_child(child, NULL);
+    bdrv_drained_end(child_bs);
 
     g_free(child->name);
     g_free(child);
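
With the hunk applied, bdrv_detach_child() reads as follows (reconstructed
from the context lines above, with comments added to show where the drained
section sits; not a verbatim excerpt of the resulting block.c):

    static void bdrv_detach_child(BdrvChild *child)
    {
        BlockDriverState *child_bs = child->bs;

        /* Quiesce child_bs: wait for in-flight requests (such as the AIO
         * flush in the trace above) to complete before the graph changes. */
        bdrv_drained_begin(child_bs);
        if (child->next.le_prev) {
            QLIST_REMOVE(child, next);
            child->next.le_prev = NULL;
        }
        bdrv_replace_child(child, NULL);
        bdrv_drained_end(child_bs);

        /* With requests drained, no completion can still reach this
         * BdrvChild through blk->root, so freeing it is safe. */
        g_free(child->name);
        g_free(child);
    }

The key point (per the v3 changelog note) is that the drained section starts
before the graph is modified, not after.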
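
For context on the stale dereference in the trace, the accessor it points at
boils down to the expression quoted there (a sketch derived from the trace,
not a verbatim excerpt of block/block-backend.c):

    /* Follows blk->root; once bdrv_detach_child() has freed blk->root
     * without draining first, this read is a use-after-free. */
    BlockDriverState *blk_bs(BlockBackend *blk)
    {
        return blk->root ? blk->root->bs : NULL;
    }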