From: Kevin Wolf
Date: Fri, 25 May 2018 10:37:15 +0200
Message-ID: <20180525083715.GC5562@localhost.localdomain>
In-Reply-To: <152720243501.517758.270483838608061631.stgit@bahia.lan>
Subject: Re: [Qemu-devel] [PATCH v3] block: fix QEMU crash with scsi-hd and drive_del
To: Greg Kurz
Cc: qemu-devel@nongnu.org, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, qemu-stable@nongnu.org

On 25.05.2018 at 00:53, Greg Kurz wrote:
> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
>
> An AIO flush can yield at some point:
>
> blk_aio_flush_entry()
>  blk_co_flush(blk)
>   bdrv_co_flush(blk->root->bs)
>    ...
>     qemu_coroutine_yield()
>
> and let the HMP command run, free blk->root and give control back to
> the AIO flush:
>
> hmp_drive_del()
>  blk_remove_bs()
>   bdrv_root_unref_child(blk->root)
>    child_bs = blk->root->bs
>    bdrv_detach_child(blk->root)
>     bdrv_replace_child(blk->root, NULL)
>      blk->root->bs = NULL
>     g_free(blk->root) <============== blk->root becomes stale
>    bdrv_unref(child_bs)
>     bdrv_delete(child_bs)
>      bdrv_close()
>       bdrv_drained_begin()
>        bdrv_do_drained_begin()
>         bdrv_drain_recurse()
>          aio_poll()
>           ...
>            qemu_coroutine_switch()
>
> and the AIO flush completion ends up dereferencing blk->root:
>
> blk_aio_complete()
>  scsi_aio_complete()
>   blk_get_aio_context(blk)
>    bs = blk_bs(blk)
>    i.e., bs = blk->root ? blk->root->bs : NULL
>               ^^^^^
>               stale
>
> The problem is that we should avoid making block driver graph changes
> while we have in-flight requests. This patch hence adds a drained
> section to bdrv_detach_child(), so that we're sure all requests have
> been drained before blk->root becomes stale.
>
> Signed-off-by: Greg Kurz
> ---
> v3: - start drained section before modifying the graph (Stefan)
>
> v2: - drain I/O requests when detaching the BDS (Stefan, Paolo)
> ---
>  block.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/block.c b/block.c
> index 501b64c8193f..715c1b56c1e2 100644
> --- a/block.c
> +++ b/block.c
> @@ -2127,12 +2127,16 @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
>
>  static void bdrv_detach_child(BdrvChild *child)
>  {
> +    BlockDriverState *child_bs = child->bs;
> +
> +    bdrv_drained_begin(child_bs);
>      if (child->next.le_prev) {
>          QLIST_REMOVE(child, next);
>          child->next.le_prev = NULL;
>      }
>
>      bdrv_replace_child(child, NULL);
> +    bdrv_drained_end(child_bs);
>
>      g_free(child->name);
>      g_free(child);

I wonder if the better fix would be calling blk_drain() in
blk_remove_bs() (which would also better be blk_drained_begin/end...).

Doing the proposed change in bdrv_detach_child() should fix the problem
that you're seeing, but at first sight it promises that callers don't
have to care about shutting down their activity on the child node
first. This isn't necessarily correct if the parent may still issue a
new request (e.g. in response to the completion of an old one). What
really needs to be drained is the parent's use of the child, not the
activity of the child.

Another minor problem with your approach: if a child node is used by
more than one parent, this patch would unnecessarily quiesce those
other parents and wait for the completion of their requests.

Kevin
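
[Editorial illustration, not part of the thread: a minimal sketch of the
alternative Kevin suggests, i.e. quiescing the parent in blk_remove_bs()
before the root child is detached. blk_drain(), bdrv_root_unref_child()
and the blk->root field are taken from the discussion above; the placement
shown and the elided bookkeeping are assumptions about how such a change
could look, not the actual blk_remove_bs() implementation or the committed
fix.]

/*
 * Hypothetical sketch: drain the parent's use of the child in
 * blk_remove_bs() instead of draining the child node itself in
 * bdrv_detach_child().  Once this BlockBackend has no requests in
 * flight, completion paths such as blk_aio_complete() ->
 * blk_get_aio_context() can no longer run after blk->root is freed,
 * and other parents sharing the same child node are not paused.
 */
void blk_remove_bs(BlockBackend *blk)
{
    /* Wait until this BlockBackend has no requests in flight. */
    blk_drain(blk);

    /* ... the real function's notifier/throttling/root-state handling
     * would remain here; it is elided in this sketch ... */

    /* Only now break the parent/child link; blk->root may be freed. */
    bdrv_root_unref_child(blk->root);
    blk->root = NULL;
}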