From: John Snow <jsnow@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	qemu-block@nongnu.org, qemu-devel <qemu-devel@nongnu.org>,
	Hanna Reitz <hreitz@redhat.com>, Cleber Rosa <crosa@redhat.com>
Subject: Re: [PATCH v5 2/8] python/machine: Handle QMP errors on close more meticulously
Date: Wed, 27 Oct 2021 13:49:26 -0400	[thread overview]
Message-ID: <CAFn=p-bU3SjifQXDEBX07Fxy-G-TAj3hbBJYqMJQ=Xrr0bJqEw@mail.gmail.com> (raw)
In-Reply-To: <YXk1wbRWNLlvXxMU@redhat.com>

This reply is long, sorry.

On Wed, Oct 27, 2021 at 7:19 AM Kevin Wolf <kwolf@redhat.com> wrote:

> Am 26.10.2021 um 19:56 hat John Snow geschrieben:
> > To use the AQMP backend, Machine just needs to be a little more diligent
> > about what happens when closing a QMP connection. The operation is no
> > longer a freebie in the async world; it may return errors encountered in
> > the async bottom half on incoming message receipt, etc.
> >
> > (AQMP's disconnect, ultimately, serves as the quiescence point where all
> > async contexts are gathered together, and any final errors reported at
> > that point.)
> >
> > Because async QMP continues to check for messages asynchronously, it's
> > almost certain that the loop will have exited due to EOF after
> > issuing the last 'quit' command. That error will ultimately be bubbled
> > up when attempting to close the QMP connection. The manager class here
> > then is free to discard it -- if it was expected.
> >
> > Signed-off-by: John Snow <jsnow@redhat.com>
> > Reviewed-by: Hanna Reitz <hreitz@redhat.com>
> > ---
> >  python/qemu/machine/machine.py | 48 +++++++++++++++++++++++++++++-----
> >  1 file changed, 42 insertions(+), 6 deletions(-)
> >
> > diff --git a/python/qemu/machine/machine.py
> b/python/qemu/machine/machine.py
> > index 0bd40bc2f76..a0cf69786b4 100644
> > --- a/python/qemu/machine/machine.py
> > +++ b/python/qemu/machine/machine.py
> > @@ -342,9 +342,15 @@ def _post_shutdown(self) -> None:
> >          # Comprehensive reset for the failed launch case:
> >          self._early_cleanup()
> >
> > -        if self._qmp_connection:
> > -            self._qmp.close()
> > -            self._qmp_connection = None
> > +        try:
> > +            self._close_qmp_connection()
> > +        except Exception as err:  # pylint: disable=broad-except
> > +            LOG.warning(
> > +                "Exception closing QMP connection: %s",
> > +                str(err) if str(err) else type(err).__name__
> > +            )
> > +        finally:
> > +            assert self._qmp_connection is None
> >
> >          self._close_qemu_log_file()
> >
> > @@ -420,6 +426,31 @@ def _launch(self) -> None:
> >                                         close_fds=False)
> >          self._post_launch()
> >
> > +    def _close_qmp_connection(self) -> None:
> > +        """
> > +        Close the underlying QMP connection, if any.
> > +
> > +        Dutifully report errors that occurred while closing, but assume
> > +        that any error encountered indicates an abnormal termination
> > +        process and not a failure to close.
> > +        """
> > +        if self._qmp_connection is None:
> > +            return
> > +
> > +        try:
> > +            self._qmp.close()
> > +        except EOFError:
> > +            # EOF can occur as an Exception here when using the Async
> > +            # QMP backend. It indicates that the server closed the
> > +            # stream. If we successfully issued 'quit' at any point,
> > +            # then this was expected. If the remote went away without
> > +            # our permission, it's worth reporting that as an abnormal
> > +            # shutdown case.
> > +            if not (self._user_killed or self._quit_issued):
> > +                raise
>
> Isn't this racy for those tests that expect QEMU to quit by itself and
> then later call wait()? self._quit_issued is only set to True in wait(),
> but whatever will cause QEMU to quit happens earlier and it might
> actually quit before wait() is called.
>

_quit_issued is also set to True via qmp() and command(), but I think
you're referring to cases where QEMU terminates for other reasons than an
explicit command. So, yes, QEMU might indeed terminate/abort/quit before
machine.py has recorded that fact somewhere. I wasn't aware of that being a
problem. I suppose it'd be racy if, say, you orchestrated a "side-channel
quit" and then didn't call wait() and instead called shutdown(). I think
that's the case you're worried about? But, this code should ultimately be
called in only four cases:

(1) Connection failure
(2) shutdown()
(3) shutdown(), via wait()
(4) shutdown(), via kill()

I had considered it a matter of calling the correct exit path from the test
code. If shutdown() does what the label on the tin says (it shuts down the
VM), I actually would expect it to be racy if QEMU was in the middle of
deconstructing itself. That's the belief behind changing around iotest 300
the way I did.


> It would make sense to me that such tests need to declare that they
> expect QEMU to quit before actually performing the action. And then
> wait() becomes less weird in patch 1, too, because it can just assert
> self._quit_issued instead of unconditionally setting it.
>

That's a thought. I was working on the model that calling wait() implies
the fact that the writer is expecting it to terminate through some
mechanism otherwise invisible to machine.py (HMP perhaps, a QGA action,
migration failure, etc.) It's one less call vs. saying "expect_shutdown();
wait_shutdown()". I wasn't aware of a circumstance where you'd want to
separate these two semantic bits of info ("I expect QEMU will shut down" /
"I am waiting for QEMU to shut down") and since the latter implied the
former, I rolled 'em up into one.
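As a minimal sketch of that rolled-up model (attribute names follow the quoted machine.py code, but this is illustrative only, not the real class):

```python
# Sketch: calling wait() is itself the declaration that the test-writer
# expects QEMU to terminate through some side channel, so wait() records
# that intent before performing the shutdown steps.
class MachineSketch:
    def __init__(self):
        self._quit_issued = False

    def wait(self, timeout=None):
        # "I am waiting for a shutdown" implies "I expect a shutdown".
        self._quit_issued = True
        self.shutdown(hard=False)

    def shutdown(self, hard=False):
        pass  # placeholder for the real shutdown sequence
```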

I suppose in your case, you're wondering what happens if we decide to call
shutdown() instead of wait() after orchestrating a "side-channel quit"? I
*would* have considered that a semantic error in the author's code, but I
suppose it's reasonable to expect that "shutdown() should DTRT". Though, in
order for it to DTRT, you have to inform machine that you're expecting that
side-channel termination, and if you already forgot to use wait() instead
of shutdown(), I'm not sure I can help the author remember that they should
have issued a "expect_shutdown()" or similar.
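The suggested split might look something like this (purely hypothetical API; expect_shutdown() and wait_shutdown() are not real machine.py methods):

```python
# Hypothetical sketch of separating "I expect QEMU will shut down" from
# "I am waiting for QEMU to shut down", as discussed above.
class MachineSketch:
    def __init__(self):
        self._quit_issued = False

    def expect_shutdown(self):
        # Declare intent *before* the action that causes QEMU to exit.
        self._quit_issued = True

    def wait_shutdown(self, timeout=None):
        # wait() could then assert the declaration instead of making it.
        assert self._quit_issued, "expect_shutdown() was never called"
```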

> The other point I'm unsure about is whether you can actually kill QEMU without
> getting either self._user_killed or self._quit_issued set. The
> potentially problematic case I see is shutdown(hard = False) where soft
> shutdown fails. Then a hard shutdown will be performed without setting
> self._user_killed (is this a bug?).
>

The original intent of _user_killed was specifically to record cases where
the test-writer *intentionally* killed the VM process. We then used that
flag to change the error reporting on cleanup in _post_shutdown. The idea
was that if the QEMU process terminated due to signal and we didn't
explicitly intentionally cause it, that we should loudly report that
occurrence. We added this specifically to catch problems on the exit path
in various iotests. So, the fact that soft shutdown fails over to a hard
shutdown but doesn't set that intent flag is the intended behavior.
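A simplified sketch of that intent-recording distinction (names follow the quoted code, but this is not the real implementation):

```python
# Sketch: only an explicit kill() records _user_killed. The hard-shutdown
# fallback after a failed soft shutdown deliberately does not, so
# _post_shutdown can still report a signal-terminated process loudly.
class MachineSketch:
    def __init__(self):
        self._user_killed = False

    def _hard_shutdown(self):
        pass  # placeholder for SIGKILL + wait

    def kill(self):
        self._user_killed = True   # intentional: suppress the report
        self._hard_shutdown()

    def _do_shutdown_fallback(self):
        self._hard_shutdown()      # intent flag deliberately left False
```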

So, what about _close_qmp_connection()? I figured that if any of the
following were true:

1) quit was explicitly issued via qmp() or command()
2) wait() was called instead of shutdown(), implying an anticipated
side-channel termination
3) kill() or shutdown(hard=True) was called

... then we were expecting the remote to hang up on us, and we didn't need
to report that error.

What about when the VM is killed but it wasn't explicitly intentional by
the test-writer? As far as machine.py knows, it was unintentional and so it
will allow that error to trickle up when _close_qmp_connection() is called.
The server hung up on us and we have no idea why. The error will be
reported upwards. From the context of just this one function, that seems
like the correct behavior.

> Of course, sending the 'quit' command in _soft_shutdown() will set
> self._quit_issued at least, but are we absolutely sure that it can never
> raise an exception before getting to qmp()? I guess in theory, closing
> the console socket in _early_cleanup() could raise one? (But either way,
> not relying on such subtleties would make the code easier to
> understand.)
>

If it does raise an exception before getting to qmp(), we'll fail over to
the hard shutdown path.

shutdown()
  _do_shutdown()
    _soft_shutdown()
      _early_cleanup() -- Let's say this one chokes.

So we'll likely have _quit_issued = False and _user_killed = False.

We fail over:

shutdown()
  _do_shutdown()
    _hard_shutdown()
      _early_cleanup() -- Uh oh, if it failed once, it might fail again.
      self._subp.kill()
      self._subp.wait(timeout=60)

I'm going to ignore the _early_cleanup() problem here for a moment, as it's
a pre-existing problem and I'll fix it separately. Let's just continue the
hypothetical.

shutdown()
  _post_shutdown()
    _early_cleanup()  -- Uh oh for a third time.
    _close_qmp_connection()
    ...

At this point, we're finally going to try to close this connection.

        try:
            self._close_qmp_connection()
        except Exception as err:  # pylint: disable=broad-except
            LOG.warning(
                "Exception closing QMP connection: %s",
                str(err) if str(err) else type(err).__name__
            )
        finally:
            assert self._qmp_connection is None

And the grand result is that it's very likely just going to log a warning
to the error stream saying that the QMP connection closed in a weird way --
possibly EOFError, possibly ECONNABORTED, ECONNRESET, EPIPE. The rest of
cleanup will continue, and the exception that was stored prior to executing
the "finally:" phase of shutdown() will now be raised, which would be:

raise AbnormalShutdown("Could not perform graceful shutdown") from exc

where 'exc' is the theoretical exception from _soft_shutdown >
_early_cleanup.
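The overall failover shape being traced here is roughly this (simplified from the quoted machine.py behavior; not the full implementation):

```python
class AbnormalShutdown(Exception):
    """Raised when the graceful shutdown path failed."""

def do_shutdown(soft_shutdown, hard_shutdown):
    # Try the graceful path first; on any failure, fall over to the hard
    # path and chain the original error in as the cause.
    try:
        soft_shutdown()
    except Exception as exc:
        hard_shutdown()
        raise AbnormalShutdown(
            "Could not perform graceful shutdown") from exc
```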

Ultimately, we'll see an error log for the QMP connection terminating in a
strange way, followed by a stack trace for our failure to gracefully shut
down the VM (because the early cleanup code failed). I think this is
probably a reasonable way to report this circumstance -- the machine.py
code has no idea what's going on, so it (as best as it can) just reports
the errors it sees. This makes the cleanup code a little "eager to fail",
but for a testing suite I thought that was actually appropriate behavior.

I think this is at least functional as written, ultimately? The trick is
remembering to call wait() instead of shutdown() when you need it, but I
don't see a way to predict or track the intent of the test-writer any
better than that, so I don't know how to improve the usability of the
library around that case, exactly.

--js


>
> Kevin
>
>
