Date: Tue, 16 Feb 2016 11:02:08 +0100
From: Kevin Wolf
Message-ID: <20160216100208.GA4920@noname.str.redhat.com>
References: <001201d16597$fa5de6a0$ef19b3e0$@Dovgaluk@ispras.ru>
	<20160212135820.GD4828@noname.redhat.com>
	<003301d167cc$4d7d9480$e878bd80$@ru>
	<003a01d167d1$42df95f0$c89ec1d0$@ru>
	<20160215093810.GC5244@noname.str.redhat.com>
	<004701d167f8$5cbe70f0$163b52d0$@ru>
	<20160215140635.GF5244@noname.str.redhat.com>
	<005501d167fc$8ed75030$ac85f090$@ru>
	<20160215150110.GG5244@noname.str.redhat.com>
	<000601d16882$c9637270$5c2a5750$@ru>
In-Reply-To: <000601d16882$c9637270$5c2a5750$@ru>
Subject: Re: [Qemu-devel] [PATCH 3/3] replay: introduce block devices record/replay
To: Pavel Dovgalyuk
Cc: edgar.iglesias@xilinx.com, peter.maydell@linaro.org, igor.rubinov@gmail.com,
	mark.burton@greensocs.com, real@ispras.ru, hines@cert.org,
	qemu-devel@nongnu.org, maria.klimushenkova@ispras.ru, stefanha@redhat.com,
	pbonzini@redhat.com, batuzovk@ispras.ru, alex.bennee@linaro.org,
	fred.konrad@greensocs.com

Am 16.02.2016 um 07:25 hat Pavel Dovgalyuk geschrieben:
> > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > Am 15.02.2016 um 15:24 hat Pavel Dovgalyuk geschrieben:
> > > > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > >
> > > > > There could be asynchronous events that occur in non-CPU threads.
> > > > > For now these events are shutdown requests and block task execution.
> > > > > They may "hide" subsequent clock (or other) events. That is why
> > > > > we process them until a synchronous event (such as a clock event,
> > > > > instruction execution, or a checkpoint) is met.
> > > > >
> > > > > > Anyway, what does "can't proceed" mean? The coroutine yields because
> > > > > > it's waiting for I/O, but it is never reentered? Or is it hanging while
> > > > > > trying to acquire a lock?
> > > > >
> > > > > I've solved this problem by slightly modifying the queue.
> > > > > I haven't yet implemented the BlockDriverState assignment to the
> > > > > request ids. Therefore aio_poll was temporarily replaced with usleep.
> > > > > Now execution starts and hangs at some random moment of OS loading.
> > > > >
> > > > > Here is the current version of the blkreplay functions:
> > > > >
> > > > > static int coroutine_fn blkreplay_co_readv(BlockDriverState *bs,
> > > > >     int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
> > > > > {
> > > > >     uint32_t reqid = request_id++;
> > > > >     Request *req;
> > > > >     req = block_request_insert(reqid, bs, qemu_coroutine_self());
> > > > >     bdrv_co_readv(bs->file->bs, sector_num, nb_sectors, qiov);
> > > > >
> > > > >     if (replay_mode == REPLAY_MODE_RECORD) {
> > > > >         replay_save_block_event(reqid);
> > > > >     } else {
> > > > >         assert(replay_mode == REPLAY_MODE_PLAY);
> > > > >         qemu_coroutine_yield();
> > > > >     }
> > > > >     block_request_remove(req);
> > > > >
> > > > >     return 0;
> > > > > }
> > > > >
> > > > > void replay_run_block_event(uint32_t id)
> > > > > {
> > > > >     Request *req;
> > > > >     if (replay_mode == REPLAY_MODE_PLAY) {
> > > > >         while (!(req = block_request_find(id))) {
> > > > >             //aio_poll(bdrv_get_aio_context(req->bs), true);
> > > > >             usleep(1);
> > > > >         }
> > > >
> > > > How is this loop supposed to make any progress?
> > >
> > > This loop is not supposed to make any progress. It waits until the
> > > block_request_insert call adds the request to the queue.
> >
> > Yes. And who is supposed to add something to the queue? You are looping
> > and doing nothing, so no other code gets to run in this thread. It's
> > only aio_poll() that would run event handlers that could eventually add
> > something to the list.
>
> blkreplay_co_readv adds the request to the queue.

I don't see blkreplay_co_readv() called in this loop without aio_poll().

> > > > And I still don't understand why aio_poll() doesn't work and where it
> > > > hangs.
> > >
> > > aio_poll hangs if the
> > > "req = block_request_insert(reqid, bs, qemu_coroutine_self());" line
> > > is executed after bdrv_co_readv. When bdrv_co_readv yields,
> > > replay_run_block_event has no information about the pending request
> > > and cannot jump to its coroutine.
> > > Maybe I should implement aio_poll execution there to make progress in
> > > that case?
> >
> > Up in the thread, I posted code that was a bit more complex than what
> > you have, and which considered both cases (completion before
> > replay/completion after replay). If you implement it like this, it might
> > just work. And yes, it involved an aio_poll() loop in the replay
> > function.
>
> You are right, but I found a kind of fundamental problem here.
> There are two possible approaches for replaying coroutine events:
>
> 1. Waiting with aio_poll in replay_run_block_event.
> In this case replay cannot proceed, because of a recursive mutex lock:
> aio_poll -> qemu_clock_get_ns -> (mutex lock) ->
> replay_run_block_event -> aio_poll -> qemu_clock_get_ns -> (recursive mutex lock)
>
> I.e. we have recursive aio_poll function calls that lead to recursive
> replay calls and to a recursive mutex lock.

That's what you get for replaying stuff in qemu_clock_get_ns() when it
should really be replayed in the hardware emulation.
This is a problem you would also have had with your original patch, as
there are some places in the block layer that use aio_poll() internally.
Using the blkreplay driver only made you find the bug earlier.

But... can you detail how we get from aio_poll() to qemu_clock_get_ns()?
I don't see it directly calling that function, so it might just be a
callback into some guest device code that is running now.

> 2. Adding block events to the queue until a checkpoint is met.
> This will not work with synchronous requests, because they will wait
> infinitely and the checkpoint will never happen. Correct me if I'm
> wrong about this case.
> If blkreplay_co_readv yields, can the vcpu thread unlock the BQL and
> wait for the checkpoint in the iothread?

I don't know enough about checkpoints and CPU emulation to give a
definite answer. This kind of blocking of the I/O thread feels wrong,
though.

Kevin