From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <555326ED.3050609@redhat.com>
Date: Wed, 13 May 2015 12:26:53 +0200
From: Paolo Bonzini
MIME-Version: 1.0
References: <1431538099-3286-1-git-send-email-famz@redhat.com>
 <1431538099-3286-12-git-send-email-famz@redhat.com>
In-Reply-To: <1431538099-3286-12-git-send-email-famz@redhat.com>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v2 11/11] block: Block "device IO" during
 bdrv_drain and bdrv_drain_all
To: Fam Zheng, qemu-devel@nongnu.org
Cc: Kevin Wolf, qemu-block@nongnu.org, jcody@redhat.com, armbru@redhat.com,
 mreitz@redhat.com, Stefan Hajnoczi

On 13/05/2015 19:28, Fam Zheng wrote:
> We don't want new requests from guest, so block the operation around the
> nested poll.
> 
> Signed-off-by: Fam Zheng
> ---
>  block/io.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/block/io.c b/block/io.c
> index 1ce62c4..d369de3 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -289,9 +289,15 @@ static bool bdrv_drain_one(BlockDriverState *bs)
>   */
>  void bdrv_drain(BlockDriverState *bs)
>  {
> +    Error *blocker = NULL;
> +
> +    error_setg(&blocker, "bdrv_drain in progress");
> +    bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
>      while (bdrv_drain_one(bs)) {
>          /* Keep iterating */
>      }
> +    bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> +    error_free(blocker);
>  }
> 
>  /*
> @@ -311,6 +317,9 @@ void bdrv_drain_all(void)
>      /* Always run first iteration so any pending completion BHs run */
>      bool busy = true;
>      BlockDriverState *bs = NULL;
> +    Error *blocker = NULL;
> +
> +    error_setg(&blocker, "bdrv_drain_all in progress");
> 
>      while ((bs = bdrv_next(bs))) {
>          AioContext *aio_context = bdrv_get_aio_context(bs);
> @@ -319,6 +328,7 @@
>          if (bs->job) {
>              block_job_pause(bs->job);
>          }
> +        bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
>          aio_context_release(aio_context);
>      }
> 
> @@ -343,8 +353,10 @@
>          if (bs->job) {
>              block_job_resume(bs->job);
>          }
> +        bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
>          aio_context_release(aio_context);
>      }
> +    error_free(blocker);
>  }
> 
>  /**

I think this isn't enough.  It's the callers of bdrv_drain and
bdrv_drain_all that need to block before drain and unblock before
aio_context_release.

Paolo
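
To make the point above concrete, here is a rough caller-side sketch of
the pattern being suggested: block device IO before draining, and only
unblock right before the AioContext is released.  This is an illustration,
not code from the patch; the helper name and call site are invented, and
it assumes the BLOCK_OP_TYPE_DEVICE_IO blocker added by this series.

    #include "block/block.h"   /* bdrv_drain, bdrv_op_block, ... (QEMU-internal) */
    #include "qapi/error.h"    /* error_setg, error_free */

    /* Hypothetical caller-side helper: keeping the block/unblock pair in
     * the caller lets the window without new guest requests extend past
     * the drain itself. */
    static void drain_with_device_io_blocked(BlockDriverState *bs)
    {
        AioContext *aio_context = bdrv_get_aio_context(bs);
        Error *blocker = NULL;

        aio_context_acquire(aio_context);
        error_setg(&blocker, "device IO is blocked while draining");
        bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);

        bdrv_drain(bs);
        /* ... caller-specific work that must not see new device IO ... */

        bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
        error_free(blocker);
        aio_context_release(aio_context);
    }

With this arrangement bdrv_drain itself stays unchanged, and each caller
decides how long device IO has to remain blocked.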