From: Kevin Wolf <kwolf@redhat.com>
To: Pavel Dovgalyuk <dovgaluk@ispras.ru>
Cc: edgar.iglesias@xilinx.com, peter.maydell@linaro.org,
igor.rubinov@gmail.com, mark.burton@greensocs.com,
real@ispras.ru, hines@cert.org, qemu-devel@nongnu.org,
maria.klimushenkova@ispras.ru, stefanha@redhat.com,
pbonzini@redhat.com, batuzovk@ispras.ru, alex.bennee@linaro.org,
fred.konrad@greensocs.com
Subject: Re: [Qemu-devel] [PATCH 3/3] replay: introduce block devices record/replay
Date: Tue, 16 Feb 2016 13:54:53 +0100 [thread overview]
Message-ID: <20160216125453.GC4920@noname.str.redhat.com> (raw)
In-Reply-To: <000a01d168ac$09929500$1cb7bf00$@ru>
On 16.02.2016 12:20, Pavel Dovgalyuk wrote:
> > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > On 16.02.2016 07:25, Pavel Dovgalyuk wrote:
> > > > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > > On 15.02.2016 15:24, Pavel Dovgalyuk wrote:
> > > > > > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > > > > > Here is the current version of blkreplay functions:
> > > > > > >
> > > > > > > static int coroutine_fn blkreplay_co_readv(BlockDriverState *bs,
> > > > > > >     int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
> > > > > > > {
> > > > > > >     uint32_t reqid = request_id++;
> > > > > > >     Request *req;
> > > > > > >     req = block_request_insert(reqid, bs, qemu_coroutine_self());
> > > > > > >     bdrv_co_readv(bs->file->bs, sector_num, nb_sectors, qiov);
> > > > > > >
> > > > > > >     if (replay_mode == REPLAY_MODE_RECORD) {
> > > > > > >         replay_save_block_event(reqid);
> > > > > > >     } else {
> > > > > > >         assert(replay_mode == REPLAY_MODE_PLAY);
> > > > > > >         qemu_coroutine_yield();
> > > > > > >     }
> > > > > > >     block_request_remove(req);
> > > > > > >
> > > > > > >     return 0;
> > > > > > > }
> > > > > > >
> > > > > > > void replay_run_block_event(uint32_t id)
> > > > > > > {
> > > > > > >     Request *req;
> > > > > > >     if (replay_mode == REPLAY_MODE_PLAY) {
> > > > > > >         while (!(req = block_request_find(id))) {
> > > > > > >             //aio_poll(bdrv_get_aio_context(req->bs), true);
> > > > > > >             usleep(1);
> > > > > > >         }
> > > > > >
> > > > > > How is this loop supposed to make any progress?
> > > > >
> > > > > This loop is not supposed to make any progress. It waits until
> > > > > block_request_insert adds the request to the queue.
> > > >
> > > > Yes. And who is supposed to add something to the queue? You are looping
> > > > and doing nothing, so no other code gets to run in this thread. It's
> > > > only aio_poll() that would run event handlers that could eventually add
> > > > something to the list.
> > >
> > > blkreplay_co_readv adds the request to the queue.
> >
> > I don't see blkreplay_co_readv() called in this loop without aio_poll().
>
> Is this mandatory?
If you don't want it to be an endless loop, someone needs to make the
value of the loop condition change...
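To make that concrete, here is a self-contained toy model of the point (not
QEMU code; all names below are made-up stand-ins): a wait loop can only
terminate if its body dispatches the pending handlers, which is the role
aio_poll() plays in QEMU. With usleep(1) in the body instead, nothing in
this thread ever flips the condition.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical single-threaded model: the loop condition only changes
 * if the loop body itself runs the pending event handler. */

typedef void (*handler_fn)(void *opaque);

typedef struct {
    handler_fn fn;
    void *opaque;
} Event;

static Event pending;              /* one pending event, for simplicity */

static void dispatch_pending(void) /* stands in for aio_poll() */
{
    if (pending.fn) {
        handler_fn fn = pending.fn;
        pending.fn = NULL;
        fn(pending.opaque);
    }
}

static void complete_request(void *opaque)
{
    *(bool *)opaque = true;        /* what the event handler would do */
}

bool wait_for_request(void)
{
    bool found = false;
    pending.fn = complete_request;
    pending.opaque = &found;

    /* Replace dispatch_pending() with usleep(1) and this spins forever:
     * no other code runs in this thread, so 'found' never changes. */
    while (!found) {
        dispatch_pending();
    }
    return found;
}
```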
> I thought the version you proposed has a race condition at the following lines
> when an AIO request is handled by a worker thread. Is this correct?
Unfortunately this ended up line-wrapped; trying to restore what you
meant:
> Coroutine                                   Replay
> bool *done = req_replayed_list_get(reqid) // NULL
>                                             co = req_completed_list_get(e.reqid); // NULL
There was no yield, so this context switch cannot happen. The same goes
for the switch back.
> req_completed_list_insert(reqid, qemu_coroutine_self());
> qemu_coroutine_yield();
This is the point at which a context switch happens. The only other
point in my code is the qemu_coroutine_enter() in the other function.
>                                             bool done = false;
>                                             req_replayed_list_insert(reqid, &done);
>                                             while (!done) { // nobody will change done anymore
>                                                 aio_poll();
>                                             }
>
>
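To spell out why there is no race in a cooperatively scheduled setting:
whichever side runs second finds the entry the first side left behind, so
"nobody will change done anymore" cannot happen. Below is a hypothetical,
simplified single-threaded model (one request, plain pointers instead of
lists); the names echo the trace above but are not real QEMU APIs.

```c
#include <stdbool.h>
#include <stddef.h>

/* One-slot stand-ins for the two lookup lists in the trace above. */
static void *completed_entry;   /* models req_completed_list */
static bool *replayed_entry;    /* models req_replayed_list */

/* Coroutine side: the request finished; look for a waiting replayer.
 * Returns true if it woke the replay side, false if it parked itself
 * (where the real code would qemu_coroutine_yield()). */
static bool request_complete(void *co)
{
    bool *done = replayed_entry;        /* req_replayed_list_get() */
    if (done) {
        *done = true;                   /* wake the replay loop */
        return true;
    }
    completed_entry = co;               /* req_completed_list_insert() */
    return false;
}

/* Replay side, step 1: look for an already-completed request
 * (where the real code would qemu_coroutine_enter() it). */
static void *replay_find_completed(void)
{
    return completed_entry;             /* req_completed_list_get() */
}

/* Replay side, step 2: nothing completed yet, so register a flag for
 * the coroutine to set (the real code would then loop on aio_poll()). */
static void replay_register(bool *done)
{
    replayed_entry = done;              /* req_replayed_list_insert() */
}
```

Since context switches only happen at the explicit yield/enter points,
the two lookups and inserts on each side run atomically with respect to
each other, and exactly one side finds the other's entry.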
> > > You are right, but I found a kind of fundamental problem here.
> > > There are two possible approaches for replaying coroutine events:
> > >
> > > 1. Waiting with aio_poll in replay_run_block_event.
> > > In this case replay cannot proceed, because of a recursive mutex lock:
> > > aio_poll -> qemu_clock_get_ns -> <lock replay mutex> ->
> > > replay_run_block_event -> aio_poll -> qemu_clock_get_ns -> <lock replay mutex> ->
> > > <assert failed>
> > > I.e., we have recursive aio_poll calls that lead to recursive replay calls
> > > and to a recursive mutex lock.
> >
> > That's what you get for replaying stuff in qemu_clock_get_ns() when it
> > should really be replayed in the hardware emulation.
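The recursion you describe can be modeled in isolation. A hypothetical
sketch (none of these are the real QEMU functions): a non-reentrant lock
plus a clock function whose callee reads the clock again, which is exactly
where the real assertion fires.

```c
#include <stdbool.h>

/* Toy model of the recursive call chain above. */
static bool replay_mutex_taken;    /* non-reentrant replay mutex */
static bool recursion_detected;    /* where the real assert would fire */
static int  nested_calls = 1;      /* let the first call recurse once */

static bool replay_mutex_trylock(void)
{
    if (replay_mutex_taken) {
        return false;              /* recursive lock attempt */
    }
    replay_mutex_taken = true;
    return true;
}

static long qemu_clock_get_ns_model(void);

/* aio_poll() computes its poll timeout, which reads the clocks again. */
static void aio_poll_model(void)
{
    (void)qemu_clock_get_ns_model();
}

static long qemu_clock_get_ns_model(void)
{
    if (!replay_mutex_trylock()) {
        recursion_detected = true; /* real code: assertion failure */
        return 0;
    }
    /* Under the replay mutex, replay_run_block_event() ends up in
     * aio_poll(), closing the cycle from the trace above. */
    if (nested_calls-- > 0) {
        aio_poll_model();
    }
    replay_mutex_taken = false;
    return 0;
}
```

The fix has to break the cycle somewhere: either the clock read must not
take the replay lock re-entrantly, or the replay side must not wait via
code paths that read the clock.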
>
> I'm not sure that is possible for the RTC. The host clock should run even if the VM is stopped.
> If we emulate the host clock, its behavior should somehow be synchronized with the host.
I think the RTC behaviour should be replayed, whereas the host clock
should be the right one.
Or maybe it would actually make the most sense to forbid clock=host for
record/replay, because during replay you don't get the host clock anyway,
just saved values.
> > This is a problem you would also have had with your original patch, as
> > there are some places in the block layer that use aio_poll() internally.
> > Using the blkreplay driver only made you find the bug earlier.
>
> Not exactly. Host clock replay will use the cached clock value if there is no such event in the log.
> The problem I observe here is with asynchronous block events that should be processed
> immediately as they are read from the log.
Doesn't your earlier patch do the same? I seem to remember that it calls
bdrv_drain_all(), which is essentially the same as calling aio_poll() in
a loop, except that its loop condition is slightly different.
> > But... can you detail how we get from aio_poll() to qemu_clock_get_ns()?
> > I don't see it directly calling that function, so it might just be a
> > callback to some guest device that is running now.
>
> aio_poll (win32) has the following line:
>
> timeout = blocking && !have_select_revents
>           ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
>
> The POSIX version also uses that function. aio_compute_timeout() requests the time from
> all existing clocks by calling timerlistgroup_deadline_ns().
I see, thanks.
Kevin