From: Markus Armbruster <armbru@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] monitor: enable OOB by default
Date: Thu, 28 Jun 2018 11:29:30 +0200
Message-ID: <87r2kr2r9h.fsf@dusky.pond.sub.org>
In-Reply-To: <20180627123228.GE2516@xz-mi> (Peter Xu's message of "Wed, 27 Jun 2018 20:32:28 +0800")

Peter Xu <peterx@redhat.com> writes:

> On Wed, Jun 27, 2018 at 10:35:15AM +0200, Markus Armbruster wrote:
>> Markus Armbruster <armbru@redhat.com> writes:
>> 
>> > Another loose end: event COMMAND_DROPPED seems to lack test coverage.
>> 
>> Hmm, dropping commands serves to limit the request queue.  What limits
>> the response queue?
>
> As long as we have a request queue limit, that will in effect also
> limit the response queue, since the real responses (leaving events
> aside for now) can never outnumber the QMP requests we allow in the
> request queue (one response per request).  In that sense it seems
> fine to me.

"Normal" flow of the request through the server (QEMU):

    receive
 -> handle_qmp_command()

 -> request queue: mon->qmp.qmp_requests

 -> monitor_qmp_bh_dispatcher()
 -> monitor_qmp_dispatch_one()
 -> monitor_qmp_respond()
 -> monitor_json_emitter()

 -> response queue: mon->qmp.qmp_responses

 -> monitor_qmp_bh_responder()
 -> monitor_qmp_response_pop_one()
 -> monitor_json_emitter_raw()
 -> monitor_puts()

 -> output buffer: mon->outbuf

 -> monitor_flush_locked()
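
For orientation, the three buffering points live in fields roughly like
these (an abridged sketch from memory, not the exact layout in
monitor.c; QString is QEMU's string type, GQueue is GLib's):

    #include <glib.h>

    typedef struct QString QString;      /* opaque here; real type in QEMU */

    typedef struct MonitorQMP {
        GQueue *qmp_requests;    /* request queue, fed by handle_qmp_command() */
        GQueue *qmp_responses;   /* response queue, fed by monitor_json_emitter() */
        /* ... */
    } MonitorQMP;

    typedef struct Monitor {
        MonitorQMP qmp;
        QString *outbuf;         /* output buffer, fed by monitor_puts() */
        /* ... */
    } Monitor;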

The purpose of event COMMAND_DROPPED is flow control notification: when
the client sends more than we're willing to buffer, we drop the excess
and notify the client.

If the client sends too many requests too quickly, the request queue
fills up, and flow control kicks in.  Good.
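
Just to spell out the idea in code (a minimal, self-contained sketch,
not QEMU's actual implementation; the bound and the printf stand in for
QEMU's queue limit and the COMMAND_DROPPED event):

    #include <glib.h>
    #include <stdio.h>

    #define REQ_QUEUE_MAX 8    /* hypothetical bound */

    /* Either queue the request, or drop it and notify the client. */
    static void queue_or_drop(GQueue *requests, gpointer req)
    {
        if (g_queue_get_length(requests) >= REQ_QUEUE_MAX) {
            printf("{\"event\": \"COMMAND_DROPPED\","
                   " \"data\": {\"reason\": \"queue-full\"}}\n");
            return;
        }
        g_queue_push_tail(requests, req);
    }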

As long as the client sends requests at a moderate pace, the request
queue never fills up: the dispatch & execute code sitting between
request queue and response queue drains it just fine.

The response queue proper also doesn't fill up: the emitter code sitting
between response queue and output buffer drains it just fine.

However, the output buffer can still grow without bounds!
monitor_flush_locked() can only make progress if the client keeps
reading.  That's where our flow control fails.

Extreme case: a (misbehaving!) client that keeps sending requests at a
moderate pace while not reading any responses.  The output buffer grows
without bounds.

Less extreme case: a client sends a small number of requests quickly,
then reads responses very slowly or not at all for some reason, say
because the network goes bad right at this time.  Here, the size of the
request queue does limit the size of the output buffer, as you proposed,
but the size multiplier isn't really known.

My point is: the idea "limiting the request queue also limits the
response queue + output buffer" isn't entirely wrong, but it's not
entirely right, either.

Can we improve flow control to cover the complete flow, not just the
flow into the request queue?
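
One possible shape for that, purely as a sketch (self-contained, the
names and threshold are made up; in QEMU the suspend/resume hooks could
be monitor_suspend()/monitor_resume()): stop reading input while the
output buffer is over a soft limit, resume once it has drained.

    #include <stdbool.h>
    #include <stddef.h>

    #define OUTBUF_SOFT_LIMIT (1024 * 1024)   /* made-up threshold */

    struct mon_flow {
        size_t outbuf_len;       /* current size of the output buffer */
        bool   input_suspended;  /* are we refusing to read more input? */
    };

    /* Call after every append to, and every flush of, the output buffer. */
    static void flow_control(struct mon_flow *f)
    {
        if (!f->input_suspended && f->outbuf_len > OUTBUF_SOFT_LIMIT) {
            f->input_suspended = true;    /* e.g. monitor_suspend() */
        } else if (f->input_suspended && f->outbuf_len == 0) {
            f->input_suspended = false;   /* e.g. monitor_resume() */
        }
    }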

>> Before OOB, the monitor read at most one command, and wrote its response
>> with monitor_puts().
>> 
>> For input, this leaves queueing to the kernel: if the client sends
>> commands faster than the server can execute them, eventually the kernel
>> refuses to buffer more, and the client's send either blocks or fails
>> with EAGAIN.
>> 
>> Output is a mess.  monitor_puts() uses an output buffer.  It tries to
>> flush at newline.  Issues:
>> 
>> * If flushing runs into a partial write, the unwritten remainder remains
>>   in the output buffer until the next newline.  That newline may take
>>   its own sweet time to arrive.

Hmm, it's also flushed via mon->out_watch.  If that works the way I
guess it does, there's no deadlock.
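
My guess at the pattern, as a generic GLib sketch (not QEMU's code):
when a flush cannot complete, arm a writability watch whose callback
retries the flush and removes itself once the buffer is empty.

    #include <glib.h>

    /* Fires when the channel becomes writable again. */
    static gboolean out_watch_cb(GIOChannel *chan, GIOCondition cond,
                                 gpointer opaque)
    {
        gboolean *buffer_empty = opaque;

        /* ... retry flushing the output buffer here ... */

        /* Keep the watch while data remains; drop it once drained. */
        return *buffer_empty ? G_SOURCE_REMOVE : G_SOURCE_CONTINUE;
    }

    static guint arm_out_watch(GIOChannel *chan, gboolean *buffer_empty)
    {
        return g_io_add_watch(chan, G_IO_OUT | G_IO_HUP,
                              out_watch_cb, buffer_empty);
    }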

>>                                  Could even lead to deadlocks, where a
>>   client awaits complete output before it sends more input.  Bug,
>>   predates OOB, doesn't block this series.
>
> True.  Though I noticed that we have a "hackish" line in
> monitor_json_emitter_raw():
>
>     qstring_append_chr(json, '\n');
>
> So it seems we should never actually run into a deadlock, since there
> will always be a newline there.  But I agree with you that it's not
> exactly "beautiful". :-)

The newline ensures responses arrive on their own line.  JSON doesn't
care about lines (makes sense), and qmp-spec.txt doesn't care, either
(that's wrong, in my opinion).  Anyone playing with QMP by hand will
definitely care.  Even QMP clients might.

The newline also triggers the actual write(), because monitor_puts() is
line-buffered.  That buffering makes sense for HMP, but it's useless for
QMP.  Let's not worry about that right now.

The flushing, however, is not guaranteed to write anything!  If
qemu_chr_fe_write() fails with EAGAIN, mon->outbuf remains unchanged.
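
For illustration, the usual shape of such a flush with a non-blocking fd
(generic code, not the actual monitor_flush_locked()): write what the
kernel accepts, keep the rest buffered, and treat EAGAIN as "nothing
written, try again later".

    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    /* Flush as much of buf[0..*len) as possible; on EAGAIN or a partial
     * write, the unwritten tail stays in the buffer for a later retry. */
    static int flush_some(int fd, char *buf, size_t *len)
    {
        ssize_t n = write(fd, buf, *len);

        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                return 0;         /* nothing written, buffer unchanged */
            }
            return -errno;        /* real error */
        }
        memmove(buf, buf + n, *len - n);   /* keep the unwritten tail */
        *len -= n;
        return 0;
    }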

>> * If the client fails to read, the output buffer can grow without bound.
>>   Not a security issue; the client is trusted.  Just bad workmanship.
>
> True.
>
>> 
>> OOB doesn't change this for monitors running in the main thread.  Only
>> mux chardevs run there.
>> 
>> Aside: keeping special case code around just for mux is a really bad
>> idea.  We need to get rid of it.
>
> We should be running the same code path even for mux-typed chardevs,
> right?  Do you mean to put the mux-typed handling onto iothreads as
> well when you say "get rid of it"?

I figure I'll cover this in my reply to Daniel.  If not, I'll reply to
this one again.

>> For monitors running in an I/O thread, we add another buffer: the
>> response queue.  It's drained by monitor_qmp_bh_responder().  I guess
>> that means the response queue is effectively bounded by timely draining.
>> Correct?
>
> I don't see an explicit timer to flush it, but whenever we queue
> anything (including events) onto the response queue, we poke the
> bottom half (monitor_json_emitter() calls qemu_bh_schedule()), so the
> queue should be drained very soon, and there is no chance for a
> message to go stale in that queue.

As long as bottom halves work, the response queue remains small.  That's
okay.
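
The pattern, in the abstract (a self-contained sketch, not the monitor
code; schedule_bh stands in for qemu_bh_schedule()):

    #include <glib.h>

    /* Producer side: enqueue and kick the bottom half. */
    static void emit(GQueue *responses, gpointer rsp,
                     void (*schedule_bh)(void))
    {
        g_queue_push_tail(responses, rsp);
        schedule_bh();    /* qemu_bh_schedule() in the real code */
    }

    /* Bottom half: drain everything that has accumulated so far. */
    static void responder_bh(GQueue *responses, void (*write_out)(gpointer))
    {
        gpointer rsp;

        while ((rsp = g_queue_pop_head(responses)) != NULL) {
            write_out(rsp);    /* emit + flush in the real code */
        }
    }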

>> Buffering twice seems silly, but that could be addressed in follow-up
>> patches.
>
> Do you mean that we could write the response immediately into
> Monitor.outbuf, and only flush it in the iothread?  IMHO that's fine -
> after all, as mentioned above, the response queue is naturally bounded
> by the request queue, so we won't waste too many resources on it.
> Meanwhile, a queue of QMP response objects seems a bit cleaner to me
> from a design point of view (though I might be wrong).

Again, let's not worry about this right now.
