* [PATCH v3 0/5] Increase amount of data for monitor to read
@ 2020-11-27 13:35 Andrey Shinkevich via
  2020-11-27 13:35 ` [PATCH v3 1/5] monitor: change function obsolete name in comments Andrey Shinkevich via
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Andrey Shinkevich via @ 2020-11-27 13:35 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, kwolf, mreitz, mdroth, thuth, lvivier, armbru,
	dgilbert, pbonzini, den, vsementsov, andrey.shinkevich

The subject was discussed here:
https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg00206.html
https://patchew.org/QEMU/20190610105906.28524-1-dplotnikov@virtuozzo.com/#
Message-ID: <31dd78ba-bd64-2ed6-3c8f-eed4e904d14c@virtuozzo.com>
and v2:
Message-Id: <1606146274-246154-1-git-send-email-andrey.shinkevich@virtuozzo.com>

This series addresses the overflow of the monitor queue with QMP
requests while keeping the maximum queue length unchanged (=8).
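
For context on the mechanism involved: the chardev layer asks the monitor,
via its can-read callback, how many bytes it is willing to accept, and sizes
the next read accordingly; in current upstream the monitor's callback
effectively answers "one byte". Below is a minimal standalone sketch of why
that matters (hypothetical names, not QEMU code; it only counts how many
reads are needed to consume one request):

#include <stdio.h>
#include <string.h>

/* Hypothetical front end: 0 while suspended, otherwise its capacity. */
static int toy_can_read(int suspended, int capacity)
{
    return suspended ? 0 : capacity;
}

/* Count how many reads it takes to consume one request. */
static unsigned reads_needed(size_t request_len, int capacity)
{
    size_t consumed = 0;
    unsigned reads = 0;

    while (consumed < request_len) {
        int n = toy_can_read(0, capacity);
        size_t chunk = (size_t)n < request_len - consumed
                       ? (size_t)n : request_len - consumed;
        consumed += chunk;
        reads++;
    }
    return reads;
}

int main(void)
{
    const char *req = "{ \"execute\": \"query-status\" }\n";
    size_t len = strlen(req);

    printf("%u reads with capacity 1, %u with capacity 4096\n",
           reads_needed(len, 1), reads_needed(len, 4096));
    return 0;
}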

v3:
  01: New
  02: New
  03: The additional small JSON parser was removed; the existing JSON
      parser is now used to track the end of a QMP command.
  04: Only the amount of input data read at once is increased.

Andrey Shinkevich (4):
  monitor: change function obsolete name in comments
  monitor: drain requests queue with 'channel closed' event
  monitor: let QMP monitor track JSON message content
  monitor: increase amount of data for monitor to read

Vladimir Sementsov-Ogievskiy (1):
  iotests: 129 don't check backup "busy"

 include/qapi/qmp/json-parser.h |  5 ++--
 monitor/monitor.c              |  2 +-
 monitor/qmp.c                  | 66 ++++++++++++++++++++++++------------------
 qga/main.c                     |  2 +-
 qobject/json-lexer.c           | 30 +++++++++++++------
 qobject/json-parser-int.h      |  8 +++--
 qobject/json-streamer.c        | 15 +++++-----
 qobject/qjson.c                |  2 +-
 tests/qemu-iotests/129         |  1 -
 tests/qtest/libqtest.c         |  2 +-
 10 files changed, 79 insertions(+), 54 deletions(-)

-- 
1.8.3.1




* [PATCH v3 1/5] monitor: change function obsolete name in comments
  2020-11-27 13:35 [PATCH v3 0/5] Increase amount of data for monitor to read Andrey Shinkevich via
@ 2020-11-27 13:35 ` Andrey Shinkevich via
  2021-03-02 13:45   ` Markus Armbruster
  2020-11-27 13:35 ` [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event Andrey Shinkevich via
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Shinkevich via @ 2020-11-27 13:35 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, kwolf, mreitz, mdroth, thuth, lvivier, armbru,
	dgilbert, pbonzini, den, vsementsov, andrey.shinkevich

The function monitor_qmp_bh_dispatcher() was renamed to
monitor_qmp_dispatcher_co() in commit 9ce44e2c. Let's amend the
comments accordingly.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 monitor/qmp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/monitor/qmp.c b/monitor/qmp.c
index b42f8c6..7169366 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -80,7 +80,7 @@ static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
     qemu_mutex_lock(&mon->qmp_queue_lock);
 
     /*
-     * Same condition as in monitor_qmp_bh_dispatcher(), but before
+     * Same condition as in monitor_qmp_dispatcher_co(), but before
      * removing an element from the queue (hence no `- 1`).
      * Also, the queue should not be empty either, otherwise the
      * monitor hasn't been suspended yet (or was already resumed).
@@ -343,7 +343,7 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
 
     /*
      * Suspend the monitor when we can't queue more requests after
-     * this one.  Dequeuing in monitor_qmp_bh_dispatcher() or
+     * this one.  Dequeuing in monitor_qmp_dispatcher_co() or
      * monitor_qmp_cleanup_queue_and_resume() will resume it.
      * Note that when OOB is disabled, we queue at most one command,
      * for backward compatibility.
-- 
1.8.3.1




* [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event
  2020-11-27 13:35 [PATCH v3 0/5] Increase amount of data for monitor to read Andrey Shinkevich via
  2020-11-27 13:35 ` [PATCH v3 1/5] monitor: change function obsolete name in comments Andrey Shinkevich via
@ 2020-11-27 13:35 ` Andrey Shinkevich via
  2021-03-02 13:53   ` Markus Armbruster
  2020-11-27 13:35 ` [PATCH v3 3/5] monitor: let QMP monitor track JSON message content Andrey Shinkevich via
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Shinkevich via @ 2020-11-27 13:35 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, kwolf, mreitz, mdroth, thuth, lvivier, armbru,
	dgilbert, pbonzini, den, vsementsov, andrey.shinkevich

When CHR_EVENT_CLOSED comes, the QMP request queue may still contain
unprocessed commands. This can happen with the QMP OOB capability
enabled. Let the dispatcher finish handling the requests that remain
in the monitor queue.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 monitor/qmp.c | 46 +++++++++++++++++++++-------------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/monitor/qmp.c b/monitor/qmp.c
index 7169366..a86ed35 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -75,36 +75,32 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
     }
 }
 
-static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
+/*
+ * Let unprocessed QMP commands be handled.
+ */
+static void monitor_qmp_drain_queue(MonitorQMP *mon)
 {
-    qemu_mutex_lock(&mon->qmp_queue_lock);
+    bool q_is_empty = false;
 
-    /*
-     * Same condition as in monitor_qmp_dispatcher_co(), but before
-     * removing an element from the queue (hence no `- 1`).
-     * Also, the queue should not be empty either, otherwise the
-     * monitor hasn't been suspended yet (or was already resumed).
-     */
-    bool need_resume = (!qmp_oob_enabled(mon) ||
-        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
-        && !g_queue_is_empty(mon->qmp_requests);
+    while (!q_is_empty) {
+        qemu_mutex_lock(&mon->qmp_queue_lock);
+        q_is_empty = g_queue_is_empty(mon->qmp_requests);
+        qemu_mutex_unlock(&mon->qmp_queue_lock);
 
-    monitor_qmp_cleanup_req_queue_locked(mon);
+        if (!q_is_empty) {
+            if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
+                /* Kick the dispatcher coroutine */
+                aio_co_wake(qmp_dispatcher_co);
+            } else {
+                /* Let the dispatcher do its job for a while */
+                g_usleep(40);
+            }
+        }
+    }
 
-    if (need_resume) {
-        /*
-         * handle_qmp_command() suspended the monitor because the
-         * request queue filled up, to be resumed when the queue has
-         * space again.  We just emptied it; resume the monitor.
-         *
-         * Without this, the monitor would remain suspended forever
-         * when we get here while the monitor is suspended.  An
-         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
-         */
+    if (qatomic_mb_read(&mon->common.suspend_cnt)) {
         monitor_resume(&mon->common);
     }
-
-    qemu_mutex_unlock(&mon->qmp_queue_lock);
 }
 
 void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
@@ -418,7 +414,7 @@ static void monitor_qmp_event(void *opaque, QEMUChrEvent event)
          * stdio, it's possible that stdout is still open when stdin
          * is closed.
          */
-        monitor_qmp_cleanup_queue_and_resume(mon);
+        monitor_qmp_drain_queue(mon);
         json_message_parser_destroy(&mon->parser);
         json_message_parser_init(&mon->parser, handle_qmp_command,
                                  mon, NULL);
-- 
1.8.3.1




* [PATCH v3 3/5] monitor: let QMP monitor track JSON message content
  2020-11-27 13:35 [PATCH v3 0/5] Increase amount of data for monitor to read Andrey Shinkevich via
  2020-11-27 13:35 ` [PATCH v3 1/5] monitor: change function obsolete name in comments Andrey Shinkevich via
  2020-11-27 13:35 ` [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event Andrey Shinkevich via
@ 2020-11-27 13:35 ` Andrey Shinkevich via
  2020-11-27 13:35 ` [PATCH v3 4/5] iotests: 129 don't check backup "busy" Andrey Shinkevich via
  2020-11-27 13:35 ` [PATCH v3 5/5] monitor: increase amount of data for monitor to read Andrey Shinkevich via
  4 siblings, 0 replies; 14+ messages in thread
From: Andrey Shinkevich via @ 2020-11-27 13:35 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, kwolf, mreitz, mdroth, thuth, lvivier, armbru,
	dgilbert, pbonzini, den, vsementsov, andrey.shinkevich

We are going to allow the QMP monitor to read more than one byte at a
time from the input channel to improve performance. With the OOB
capability disabled, the monitor queues at most one QMP command. That
is done for backward compatibility, as stated in the comment before
pushing a command onto the queue. To keep that behaviour intact, the
monitor has to track the end of a single QMP command, which lets the
dispatcher handle the command and send a response to the client in time.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 include/qapi/qmp/json-parser.h |  5 +++--
 monitor/qmp.c                  | 18 ++++++++++++++++--
 qga/main.c                     |  2 +-
 qobject/json-lexer.c           | 30 +++++++++++++++++++++---------
 qobject/json-parser-int.h      |  8 +++++---
 qobject/json-streamer.c        | 15 ++++++++-------
 qobject/qjson.c                |  2 +-
 tests/qtest/libqtest.c         |  2 +-
 8 files changed, 56 insertions(+), 26 deletions(-)

diff --git a/include/qapi/qmp/json-parser.h b/include/qapi/qmp/json-parser.h
index 7345a9b..039addb 100644
--- a/include/qapi/qmp/json-parser.h
+++ b/include/qapi/qmp/json-parser.h
@@ -36,8 +36,9 @@ void json_message_parser_init(JSONMessageParser *parser,
                                            Error *err),
                               void *opaque, va_list *ap);
 
-void json_message_parser_feed(JSONMessageParser *parser,
-                             const char *buffer, size_t size);
+size_t  json_message_parser_feed(JSONMessageParser *parser,
+                                 const char *buffer, size_t size,
+                                 bool track_qmp);
 
 void json_message_parser_flush(JSONMessageParser *parser);
 
diff --git a/monitor/qmp.c b/monitor/qmp.c
index a86ed35..0b39c62 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -367,8 +367,22 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
 static void monitor_qmp_read(void *opaque, const uint8_t *buf, int size)
 {
     MonitorQMP *mon = opaque;
-
-    json_message_parser_feed(&mon->parser, (const char *) buf, size);
+    char *cursor = (char *) buf;
+    size_t len;
+
+    while (size > 0) {
+        len = json_message_parser_feed(&mon->parser, (const char *) cursor,
+                                       size, true);
+        cursor += len;
+        size -= len;
+
+        if (size > 0) {
+            /* Let the dispatcher process the QMP command */
+            while (qatomic_mb_read(&mon->common.suspend_cnt)) {
+                g_usleep(20);
+            }
+        }
+    }
 }
 
 static QDict *qmp_greeting(MonitorQMP *mon)
diff --git a/qga/main.c b/qga/main.c
index dea6a3a..16de642 100644
--- a/qga/main.c
+++ b/qga/main.c
@@ -605,7 +605,7 @@ static gboolean channel_event_cb(GIOCondition condition, gpointer data)
     case G_IO_STATUS_NORMAL:
         buf[count] = 0;
         g_debug("read data, count: %d, data: %s", (int)count, buf);
-        json_message_parser_feed(&s->parser, (char *)buf, (int)count);
+        json_message_parser_feed(&s->parser, (char *)buf, (int)count, false);
         break;
     case G_IO_STATUS_EOF:
         g_debug("received EOF");
diff --git a/qobject/json-lexer.c b/qobject/json-lexer.c
index 632320d..1fefbae 100644
--- a/qobject/json-lexer.c
+++ b/qobject/json-lexer.c
@@ -280,10 +280,11 @@ void json_lexer_init(JSONLexer *lexer, bool enable_interpolation)
     lexer->x = lexer->y = 0;
 }
 
-static void json_lexer_feed_char(JSONLexer *lexer, char ch, bool flush)
+static JSONTokenType json_lexer_feed_char(JSONLexer *lexer, char ch, bool flush)
 {
     int new_state;
     bool char_consumed = false;
+    JSONTokenType ret = JSON_ERROR; /* no token processed yet */
 
     lexer->x++;
     if (ch == '\n') {
@@ -310,16 +311,16 @@ static void json_lexer_feed_char(JSONLexer *lexer, char ch, bool flush)
         case JSON_FLOAT:
         case JSON_KEYWORD:
         case JSON_STRING:
-            json_message_process_token(lexer, lexer->token, new_state,
-                                       lexer->x, lexer->y);
+            ret = json_message_process_token(lexer, lexer->token, new_state,
+                                             lexer->x, lexer->y);
             /* fall through */
         case IN_START:
             g_string_truncate(lexer->token, 0);
             new_state = lexer->start_state;
             break;
         case JSON_ERROR:
-            json_message_process_token(lexer, lexer->token, JSON_ERROR,
-                                       lexer->x, lexer->y);
+            ret = json_message_process_token(lexer, lexer->token, JSON_ERROR,
+                                             lexer->x, lexer->y);
             new_state = IN_RECOVERY;
             /* fall through */
         case IN_RECOVERY:
@@ -335,20 +336,31 @@ static void json_lexer_feed_char(JSONLexer *lexer, char ch, bool flush)
      * this is a security consideration.
      */
     if (lexer->token->len > MAX_TOKEN_SIZE) {
-        json_message_process_token(lexer, lexer->token, lexer->state,
-                                   lexer->x, lexer->y);
+        ret = json_message_process_token(lexer, lexer->token, lexer->state,
+                                         lexer->x, lexer->y);
         g_string_truncate(lexer->token, 0);
         lexer->state = lexer->start_state;
     }
+    return ret;
 }
 
-void json_lexer_feed(JSONLexer *lexer, const char *buffer, size_t size)
+/*
+ * Return the number of characters consumed up to the end of a QMP command,
+ * or the buffer size if no command end was found or tracking is disabled.
+ */
+size_t json_lexer_feed(JSONLexer *lexer, const char *buffer, size_t size,
+                       bool track)
 {
     size_t i;
+    JSONTokenType type = JSON_ERROR;
 
     for (i = 0; i < size; i++) {
-        json_lexer_feed_char(lexer, buffer[i], false);
+        if ((type == JSON_QMP_CMD_END) && track) {
+            break;
+        }
+        type = json_lexer_feed_char(lexer, buffer[i], false);
     }
+    return i;
 }
 
 void json_lexer_flush(JSONLexer *lexer)
diff --git a/qobject/json-parser-int.h b/qobject/json-parser-int.h
index 16a25d0..904555b 100644
--- a/qobject/json-parser-int.h
+++ b/qobject/json-parser-int.h
@@ -31,6 +31,7 @@ typedef enum json_token_type {
     JSON_KEYWORD,
     JSON_STRING,
     JSON_INTERP,
+    JSON_QMP_CMD_END,
     JSON_END_OF_INPUT,
     JSON_MAX = JSON_END_OF_INPUT
 } JSONTokenType;
@@ -39,13 +40,14 @@ typedef struct JSONToken JSONToken;
 
 /* json-lexer.c */
 void json_lexer_init(JSONLexer *lexer, bool enable_interpolation);
-void json_lexer_feed(JSONLexer *lexer, const char *buffer, size_t size);
+size_t json_lexer_feed(JSONLexer *lexer, const char *buffer, size_t size,
+                       bool track);
 void json_lexer_flush(JSONLexer *lexer);
 void json_lexer_destroy(JSONLexer *lexer);
 
 /* json-streamer.c */
-void json_message_process_token(JSONLexer *lexer, GString *input,
-                                JSONTokenType type, int x, int y);
+JSONTokenType json_message_process_token(JSONLexer *lexer, GString *input,
+                                         JSONTokenType type, int x, int y);
 
 /* json-parser.c */
 JSONToken *json_token(JSONTokenType type, int x, int y, GString *tokstr);
diff --git a/qobject/json-streamer.c b/qobject/json-streamer.c
index b93d97b..fe33303 100644
--- a/qobject/json-streamer.c
+++ b/qobject/json-streamer.c
@@ -28,8 +28,8 @@ static void json_message_free_tokens(JSONMessageParser *parser)
     }
 }
 
-void json_message_process_token(JSONLexer *lexer, GString *input,
-                                JSONTokenType type, int x, int y)
+JSONTokenType json_message_process_token(JSONLexer *lexer, GString *input,
+                                         JSONTokenType type, int x, int y)
 {
     JSONMessageParser *parser = container_of(lexer, JSONMessageParser, lexer);
     QObject *json = NULL;
@@ -54,7 +54,7 @@ void json_message_process_token(JSONLexer *lexer, GString *input,
         goto out_emit;
     case JSON_END_OF_INPUT:
         if (g_queue_is_empty(&parser->tokens)) {
-            return;
+            return type;
         }
         json = json_parser_parse(&parser->tokens, parser->ap, &err);
         goto out_emit;
@@ -86,7 +86,7 @@ void json_message_process_token(JSONLexer *lexer, GString *input,
 
     if ((parser->brace_count > 0 || parser->bracket_count > 0)
         && parser->brace_count >= 0 && parser->bracket_count >= 0) {
-        return;
+        return type;
     }
 
     json = json_parser_parse(&parser->tokens, parser->ap, &err);
@@ -97,6 +97,7 @@ out_emit:
     json_message_free_tokens(parser);
     parser->token_size = 0;
     parser->emit(parser->opaque, json, err);
+    return JSON_QMP_CMD_END;
 }
 
 void json_message_parser_init(JSONMessageParser *parser,
@@ -115,10 +116,10 @@ void json_message_parser_init(JSONMessageParser *parser,
     json_lexer_init(&parser->lexer, !!ap);
 }
 
-void json_message_parser_feed(JSONMessageParser *parser,
-                             const char *buffer, size_t size)
+size_t json_message_parser_feed(JSONMessageParser *parser,
+                                const char *buffer, size_t size, bool track_qmp)
 {
-    json_lexer_feed(&parser->lexer, buffer, size);
+    return json_lexer_feed(&parser->lexer, buffer, size, track_qmp);
 }
 
 void json_message_parser_flush(JSONMessageParser *parser)
diff --git a/qobject/qjson.c b/qobject/qjson.c
index f1f2c69..f85a1ff 100644
--- a/qobject/qjson.c
+++ b/qobject/qjson.c
@@ -66,7 +66,7 @@ static QObject *qobject_from_jsonv(const char *string, va_list *ap,
     JSONParsingState state = {};
 
     json_message_parser_init(&state.parser, consume_json, &state, ap);
-    json_message_parser_feed(&state.parser, string, strlen(string));
+    json_message_parser_feed(&state.parser, string, strlen(string), false);
     json_message_parser_flush(&state.parser);
     json_message_parser_destroy(&state.parser);
 
diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
index e49f3a1..7e82d9f 100644
--- a/tests/qtest/libqtest.c
+++ b/tests/qtest/libqtest.c
@@ -611,7 +611,7 @@ QDict *qmp_fd_receive(int fd)
         if (log) {
             len = write(2, &c, 1);
         }
-        json_message_parser_feed(&qmp.parser, &c, 1);
+        json_message_parser_feed(&qmp.parser, &c, 1, false);
     }
     json_message_parser_destroy(&qmp.parser);
 
-- 
1.8.3.1
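
As background for the JSON_QMP_CMD_END change above: the streamer treats a
message as complete when the brace/bracket nesting it tracks drops back to
zero, and the new return value simply surfaces that point to the caller so it
can stop feeding bytes. A minimal standalone sketch of the framing idea
(hypothetical names; unlike the real lexer it ignores strings, escapes and
error recovery):

#include <stdio.h>
#include <stddef.h>

/*
 * Toy framer: return the number of bytes forming the first complete
 * top-level {...} or [...] group, or 0 if the buffer ends before the
 * group closes (i.e. wait for more input).
 */
static size_t first_message_len(const char *buf, size_t size)
{
    int depth = 0;

    for (size_t i = 0; i < size; i++) {
        if (buf[i] == '{' || buf[i] == '[') {
            depth++;
        } else if (buf[i] == '}' || buf[i] == ']') {
            if (--depth == 0) {
                return i + 1;   /* one complete command consumed */
            }
        }
    }
    return 0;
}

int main(void)
{
    const char buf[] = "{\"execute\":\"stop\"}{\"execute\":\"cont\"}";
    size_t len = first_message_len(buf, sizeof(buf) - 1);

    printf("first command: %.*s (%zu bytes)\n", (int)len, buf, len);
    return 0;
}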




* [PATCH v3 4/5] iotests: 129 don't check backup "busy"
  2020-11-27 13:35 [PATCH v3 0/5] Increase amount of data for monitor to read Andrey Shinkevich via
                   ` (2 preceding siblings ...)
  2020-11-27 13:35 ` [PATCH v3 3/5] monitor: let QMP monitor track JSON message content Andrey Shinkevich via
@ 2020-11-27 13:35 ` Andrey Shinkevich via
  2021-03-02 13:45   ` Markus Armbruster
  2020-11-27 13:35 ` [PATCH v3 5/5] monitor: increase amount of data for monitor to read Andrey Shinkevich via
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Shinkevich via @ 2020-11-27 13:35 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, kwolf, mreitz, mdroth, thuth, lvivier, armbru,
	dgilbert, pbonzini, den, vsementsov, andrey.shinkevich

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Busy is racy: the job has its "pause points" when it is not busy. Drop
this check.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/129 | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index 0e13244..3c22f64 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -67,7 +67,6 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
         result = self.vm.qmp("stop")
         self.assert_qmp(result, 'return', {})
         result = self.vm.qmp("query-block-jobs")
-        self.assert_qmp(result, 'return[0]/busy', True)
         self.assert_qmp(result, 'return[0]/ready', False)
 
     def test_drive_mirror(self):
-- 
1.8.3.1




* [PATCH v3 5/5] monitor: increase amount of data for monitor to read
  2020-11-27 13:35 [PATCH v3 0/5] Increase amount of data for monitor to read Andrey Shinkevich via
                   ` (3 preceding siblings ...)
  2020-11-27 13:35 ` [PATCH v3 4/5] iotests: 129 don't check backup "busy" Andrey Shinkevich via
@ 2020-11-27 13:35 ` Andrey Shinkevich via
  4 siblings, 0 replies; 14+ messages in thread
From: Andrey Shinkevich via @ 2020-11-27 13:35 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, kwolf, mreitz, mdroth, thuth, lvivier, armbru,
	dgilbert, pbonzini, den, vsementsov, andrey.shinkevich

QMP and HMP monitors read one byte at a time from the socket or stdin,
which is very inefficient. With 100+ VMs on the host, this results in
multiple extra system calls and CPU overuse.
This patch increases the amount of data read at once to up to 4096
bytes, which matches the buffer size at the channel level.

Suggested-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 monitor/monitor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/monitor/monitor.c b/monitor/monitor.c
index 84222cd..43d2d3b 100644
--- a/monitor/monitor.c
+++ b/monitor/monitor.c
@@ -566,7 +566,7 @@ int monitor_can_read(void *opaque)
 {
     Monitor *mon = opaque;
 
-    return !qatomic_mb_read(&mon->suspend_cnt);
+    return !qatomic_mb_read(&mon->suspend_cnt) ? CHR_READ_BUF_LEN : 0;
 }
 
 void monitor_list_append(Monitor *mon)
-- 
1.8.3.1
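
A note on the part of the check this patch keeps: returning 0 while
suspend_cnt is non-zero is what provides backpressure, because the chardev
layer stops reading until monitor_resume() lets the callback report capacity
again. A tiny standalone model of that gating (hypothetical names, not QEMU
code):

#include <stdio.h>

#define READ_BUF_LEN 4096

struct toy_monitor {
    int suspend_cnt;    /* > 0 means "do not feed me more input" */
};

/* Same shape as the patched callback: a full buffer or nothing. */
static int toy_can_read(const struct toy_monitor *mon)
{
    return mon->suspend_cnt == 0 ? READ_BUF_LEN : 0;
}

int main(void)
{
    struct toy_monitor mon = { .suspend_cnt = 0 };

    printf("running:   can_read = %d\n", toy_can_read(&mon));
    mon.suspend_cnt++;    /* request queue filled up */
    printf("suspended: can_read = %d\n", toy_can_read(&mon));
    mon.suspend_cnt--;    /* dispatcher drained the queue */
    printf("resumed:   can_read = %d\n", toy_can_read(&mon));
    return 0;
}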




* Re: [PATCH v3 1/5] monitor: change function obsolete name in comments
  2020-11-27 13:35 ` [PATCH v3 1/5] monitor: change function obsolete name in comments Andrey Shinkevich via
@ 2021-03-02 13:45   ` Markus Armbruster
  0 siblings, 0 replies; 14+ messages in thread
From: Markus Armbruster @ 2021-03-02 13:45 UTC (permalink / raw)
  To: Andrey Shinkevich via
  Cc: kwolf, lvivier, thuth, vsementsov, qemu-block, den, mdroth,
	mreitz, pbonzini, Andrey Shinkevich, dgilbert

Andrey Shinkevich via <qemu-devel@nongnu.org> writes:

> The function name monitor_qmp_bh_dispatcher() has been changed to
> monitor_qmp_dispatcher_co() since the commit 9ce44e2c. Let's amend the
> comments.
>
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> ---
>  monitor/qmp.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/monitor/qmp.c b/monitor/qmp.c
> index b42f8c6..7169366 100644
> --- a/monitor/qmp.c
> +++ b/monitor/qmp.c
> @@ -80,7 +80,7 @@ static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>      qemu_mutex_lock(&mon->qmp_queue_lock);
>  
>      /*
> -     * Same condition as in monitor_qmp_bh_dispatcher(), but before
> +     * Same condition as in monitor_qmp_dispatcher_co(), but before
>       * removing an element from the queue (hence no `- 1`).
>       * Also, the queue should not be empty either, otherwise the
>       * monitor hasn't been suspended yet (or was already resumed).
> @@ -343,7 +343,7 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
>  
>      /*
>       * Suspend the monitor when we can't queue more requests after
> -     * this one.  Dequeuing in monitor_qmp_bh_dispatcher() or
> +     * this one.  Dequeuing in monitor_qmp_dispatcher_co() or
>       * monitor_qmp_cleanup_queue_and_resume() will resume it.
>       * Note that when OOB is disabled, we queue at most one command,
>       * for backward compatibility.

The same change has since made it to master as commit 395a95080a "qmp:
Fix up comments after commit 9ce44e2ce2".  I should have picked your
patch instead, but I wasn't aware of it then, because I had put your
series in my review queue without looking closely.

It's been stuck in my queue for way too long.  Reviewing non-trivial
monitor patches is slow and exhausting work for me, and other,
non-monitor patches have kept crowding out your work.  My apologies!




* Re: [PATCH v3 4/5] iotests: 129 don't check backup "busy"
  2020-11-27 13:35 ` [PATCH v3 4/5] iotests: 129 don't check backup "busy" Andrey Shinkevich via
@ 2021-03-02 13:45   ` Markus Armbruster
  0 siblings, 0 replies; 14+ messages in thread
From: Markus Armbruster @ 2021-03-02 13:45 UTC (permalink / raw)
  To: Andrey Shinkevich via
  Cc: kwolf, lvivier, thuth, vsementsov, qemu-block, den, mdroth,
	mreitz, pbonzini, Andrey Shinkevich, dgilbert

Andrey Shinkevich via <qemu-devel@nongnu.org> writes:

> From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>
> Busy is racy, job has it's "pause-points" when it's not busy. Drop this
> check.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Max Reitz <mreitz@redhat.com>
> ---
>  tests/qemu-iotests/129 | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
> index 0e13244..3c22f64 100755
> --- a/tests/qemu-iotests/129
> +++ b/tests/qemu-iotests/129
> @@ -67,7 +67,6 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
>          result = self.vm.qmp("stop")
>          self.assert_qmp(result, 'return', {})
>          result = self.vm.qmp("query-block-jobs")
> -        self.assert_qmp(result, 'return[0]/busy', True)
>          self.assert_qmp(result, 'return[0]/ready', False)
>  
>      def test_drive_mirror(self):

The same change has since made it to master as commit f9a6256b48
"iotests/129: Do not check @busy".




* Re: [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event
  2020-11-27 13:35 ` [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event Andrey Shinkevich via
@ 2021-03-02 13:53   ` Markus Armbruster
  2021-03-02 15:25     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 14+ messages in thread
From: Markus Armbruster @ 2021-03-02 13:53 UTC (permalink / raw)
  To: Andrey Shinkevich via
  Cc: kwolf, lvivier, thuth, vsementsov, qemu-block, den, armbru,
	mdroth, mreitz, pbonzini, Andrey Shinkevich, dgilbert

Andrey Shinkevich via <qemu-devel@nongnu.org> writes:

> When CHR_EVENT_CLOSED comes, the QMP requests queue may still contain
> unprocessed commands. It can happen with QMP capability OOB enabled.
> Let the dispatcher complete handling requests rest in the monitor
> queue.
>
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> ---
>  monitor/qmp.c | 46 +++++++++++++++++++++-------------------------
>  1 file changed, 21 insertions(+), 25 deletions(-)
>
> diff --git a/monitor/qmp.c b/monitor/qmp.c
> index 7169366..a86ed35 100644
> --- a/monitor/qmp.c
> +++ b/monitor/qmp.c
> @@ -75,36 +75,32 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
>      }
>  }
>  
> -static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
> +/*
> + * Let unprocessed QMP commands be handled.
> + */
> +static void monitor_qmp_drain_queue(MonitorQMP *mon)
>  {
> -    qemu_mutex_lock(&mon->qmp_queue_lock);
> +    bool q_is_empty = false;
>  
> -    /*
> -     * Same condition as in monitor_qmp_dispatcher_co(), but before
> -     * removing an element from the queue (hence no `- 1`).
> -     * Also, the queue should not be empty either, otherwise the
> -     * monitor hasn't been suspended yet (or was already resumed).
> -     */
> -    bool need_resume = (!qmp_oob_enabled(mon) ||
> -        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
> -        && !g_queue_is_empty(mon->qmp_requests);
> +    while (!q_is_empty) {
> +        qemu_mutex_lock(&mon->qmp_queue_lock);
> +        q_is_empty = g_queue_is_empty(mon->qmp_requests);
> +        qemu_mutex_unlock(&mon->qmp_queue_lock);
>  
> -    monitor_qmp_cleanup_req_queue_locked(mon);
> +        if (!q_is_empty) {
> +            if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
> +                /* Kick the dispatcher coroutine */
> +                aio_co_wake(qmp_dispatcher_co);
> +            } else {
> +                /* Let the dispatcher do its job for a while */
> +                g_usleep(40);
> +            }
> +        }
> +    }
>  
> -    if (need_resume) {
> -        /*
> -         * handle_qmp_command() suspended the monitor because the
> -         * request queue filled up, to be resumed when the queue has
> -         * space again.  We just emptied it; resume the monitor.
> -         *
> -         * Without this, the monitor would remain suspended forever
> -         * when we get here while the monitor is suspended.  An
> -         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
> -         */
> +    if (qatomic_mb_read(&mon->common.suspend_cnt)) {
>          monitor_resume(&mon->common);
>      }
> -
> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
>  }
>  
>  void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
> @@ -418,7 +414,7 @@ static void monitor_qmp_event(void *opaque, QEMUChrEvent event)
>           * stdio, it's possible that stdout is still open when stdin
>           * is closed.
>           */
> -        monitor_qmp_cleanup_queue_and_resume(mon);
> +        monitor_qmp_drain_queue(mon);
>          json_message_parser_destroy(&mon->parser);
>          json_message_parser_init(&mon->parser, handle_qmp_command,
>                                   mon, NULL);

Before the patch: we call monitor_qmp_cleanup_queue_and_resume() to
throw away the contents of the request queue, and resume the monitor if
suspended.

Afterwards: we call monitor_qmp_drain_queue() to wait for the request
queue to drain.  I think.  Before we discuss the how, I have a question
the commit message should answer, but doesn't: why?




* Re: [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event
  2021-03-02 13:53   ` Markus Armbruster
@ 2021-03-02 15:25     ` Vladimir Sementsov-Ogievskiy
  2021-03-02 16:32       ` Denis V. Lunev
  2021-03-05 13:41       ` Markus Armbruster
  0 siblings, 2 replies; 14+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-02 15:25 UTC (permalink / raw)
  To: Markus Armbruster, Andrey Shinkevich via
  Cc: qemu-block, Andrey Shinkevich, kwolf, mreitz, mdroth, thuth,
	lvivier, dgilbert, pbonzini, den

02.03.2021 16:53, Markus Armbruster wrote:
> Andrey Shinkevich via <qemu-devel@nongnu.org> writes:
> 
>> When CHR_EVENT_CLOSED comes, the QMP requests queue may still contain
>> unprocessed commands. It can happen with QMP capability OOB enabled.
>> Let the dispatcher complete handling requests rest in the monitor
>> queue.
>>
>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>> ---
>>   monitor/qmp.c | 46 +++++++++++++++++++++-------------------------
>>   1 file changed, 21 insertions(+), 25 deletions(-)
>>
>> diff --git a/monitor/qmp.c b/monitor/qmp.c
>> index 7169366..a86ed35 100644
>> --- a/monitor/qmp.c
>> +++ b/monitor/qmp.c
>> @@ -75,36 +75,32 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
>>       }
>>   }
>>   
>> -static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>> +/*
>> + * Let unprocessed QMP commands be handled.
>> + */
>> +static void monitor_qmp_drain_queue(MonitorQMP *mon)
>>   {
>> -    qemu_mutex_lock(&mon->qmp_queue_lock);
>> +    bool q_is_empty = false;
>>   
>> -    /*
>> -     * Same condition as in monitor_qmp_dispatcher_co(), but before
>> -     * removing an element from the queue (hence no `- 1`).
>> -     * Also, the queue should not be empty either, otherwise the
>> -     * monitor hasn't been suspended yet (or was already resumed).
>> -     */
>> -    bool need_resume = (!qmp_oob_enabled(mon) ||
>> -        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
>> -        && !g_queue_is_empty(mon->qmp_requests);
>> +    while (!q_is_empty) {
>> +        qemu_mutex_lock(&mon->qmp_queue_lock);
>> +        q_is_empty = g_queue_is_empty(mon->qmp_requests);
>> +        qemu_mutex_unlock(&mon->qmp_queue_lock);
>>   
>> -    monitor_qmp_cleanup_req_queue_locked(mon);
>> +        if (!q_is_empty) {
>> +            if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
>> +                /* Kick the dispatcher coroutine */
>> +                aio_co_wake(qmp_dispatcher_co);
>> +            } else {
>> +                /* Let the dispatcher do its job for a while */
>> +                g_usleep(40);
>> +            }
>> +        }
>> +    }
>>   
>> -    if (need_resume) {
>> -        /*
>> -         * handle_qmp_command() suspended the monitor because the
>> -         * request queue filled up, to be resumed when the queue has
>> -         * space again.  We just emptied it; resume the monitor.
>> -         *
>> -         * Without this, the monitor would remain suspended forever
>> -         * when we get here while the monitor is suspended.  An
>> -         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
>> -         */
>> +    if (qatomic_mb_read(&mon->common.suspend_cnt)) {
>>           monitor_resume(&mon->common);
>>       }
>> -
>> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
>>   }
>>   
>>   void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
>> @@ -418,7 +414,7 @@ static void monitor_qmp_event(void *opaque, QEMUChrEvent event)
>>            * stdio, it's possible that stdout is still open when stdin
>>            * is closed.
>>            */
>> -        monitor_qmp_cleanup_queue_and_resume(mon);
>> +        monitor_qmp_drain_queue(mon);
>>           json_message_parser_destroy(&mon->parser);
>>           json_message_parser_init(&mon->parser, handle_qmp_command,
>>                                    mon, NULL);
> 
> Before the patch: we call monitor_qmp_cleanup_queue_and_resume() to
> throw away the contents of the request queue, and resume the monitor if
> suspended.
> 
> Afterwards: we call monitor_qmp_drain_queue() to wait for the request
> queue to drain.  I think.  Before we discuss the how, I have a question
> the commit message should answer, but doesn't: why?
> 

Hi!

Andrey is not at Virtuozzo anymore, and nobody is actually working on this at the moment. Honestly, I don't believe the feature should be this difficult.

Actually, we have had the following patch in Virtuozzo 7 (RHEL7-based) for years, and it just works without any problems:

--- a/monitor.c
+++ b/monitor.c
@@ -4013,7 +4013,7 @@ static int monitor_can_read(void *opaque)
  {
      Monitor *mon = opaque;
  
-    return !atomic_mb_read(&mon->suspend_cnt);
+    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
  }


And in Vz8 (RHEL8-based), it looks like this (to avoid the assertion in handle_qmp_command()):

--- a/include/monitor/monitor.h
+++ b/include/monitor/monitor.h
@@ -9,7 +9,7 @@ extern __thread Monitor *cur_mon;
  typedef struct MonitorHMP MonitorHMP;
  typedef struct MonitorOptions MonitorOptions;
  
-#define QMP_REQ_QUEUE_LEN_MAX 8
+#define QMP_REQ_QUEUE_LEN_MAX 4096
  
  extern QemuOptsList qemu_mon_opts;
  
diff --git a/monitor/monitor.c b/monitor/monitor.c
index b385a3d569..a124d010f3 100644
--- a/monitor/monitor.c
+++ b/monitor/monitor.c
@@ -501,7 +501,7 @@ int monitor_can_read(void *opaque)
  {
      Monitor *mon = opaque;
  
-    return !atomic_mb_read(&mon->suspend_cnt);
+    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
  }


There are some theoretical risks of overflowing... But it just works. Still, this is probably not good for upstream. And I'm not sure how it would work with OOB.
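
To put a rough number on that overflow risk: suspending the monitor only
stops further reads, it does not stop the parser from emitting the commands
already sitting in a 4096-byte buffer, so a single read callback can push far
more requests than the upstream queue limit of 8, which is presumably why the
Vz8 variant also raises QMP_REQ_QUEUE_LEN_MAX. A back-of-the-envelope sketch
(sizes are illustrative only):

#include <stdio.h>

int main(void)
{
    /* Illustrative sizes only. */
    int read_buf_len = 4096;                       /* bytes handed over per read */
    int cmd_len = sizeof("{\"execute\":\"query-status\"}") - 1;
    int queue_len_max = 8;                         /* upstream QMP_REQ_QUEUE_LEN_MAX */

    printf("up to %d commands per read vs. a queue limit of %d\n",
           read_buf_len / cmd_len, queue_len_max);
    return 0;
}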


-- 
Best regards,
Vladimir



* Re: [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event
  2021-03-02 15:25     ` Vladimir Sementsov-Ogievskiy
@ 2021-03-02 16:32       ` Denis V. Lunev
  2021-03-02 17:02         ` Vladimir Sementsov-Ogievskiy
  2021-03-05 13:41       ` Markus Armbruster
  1 sibling, 1 reply; 14+ messages in thread
From: Denis V. Lunev @ 2021-03-02 16:32 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, Markus Armbruster, Andrey Shinkevich via
  Cc: kwolf, lvivier, thuth, qemu-block, mdroth, mreitz, pbonzini,
	Andrey Shinkevich, dgilbert

On 3/2/21 6:25 PM, Vladimir Sementsov-Ogievskiy wrote:
> 02.03.2021 16:53, Markus Armbruster wrote:
>> Andrey Shinkevich via <qemu-devel@nongnu.org> writes:
>>
>>> When CHR_EVENT_CLOSED comes, the QMP requests queue may still contain
>>> unprocessed commands. It can happen with QMP capability OOB enabled.
>>> Let the dispatcher complete handling requests rest in the monitor
>>> queue.
>>>
>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>> ---
>>>   monitor/qmp.c | 46 +++++++++++++++++++++-------------------------
>>>   1 file changed, 21 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/monitor/qmp.c b/monitor/qmp.c
>>> index 7169366..a86ed35 100644
>>> --- a/monitor/qmp.c
>>> +++ b/monitor/qmp.c
>>> @@ -75,36 +75,32 @@ static void
>>> monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
>>>       }
>>>   }
>>>   -static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>>> +/*
>>> + * Let unprocessed QMP commands be handled.
>>> + */
>>> +static void monitor_qmp_drain_queue(MonitorQMP *mon)
>>>   {
>>> -    qemu_mutex_lock(&mon->qmp_queue_lock);
>>> +    bool q_is_empty = false;
>>>   -    /*
>>> -     * Same condition as in monitor_qmp_dispatcher_co(), but before
>>> -     * removing an element from the queue (hence no `- 1`).
>>> -     * Also, the queue should not be empty either, otherwise the
>>> -     * monitor hasn't been suspended yet (or was already resumed).
>>> -     */
>>> -    bool need_resume = (!qmp_oob_enabled(mon) ||
>>> -        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
>>> -        && !g_queue_is_empty(mon->qmp_requests);
>>> +    while (!q_is_empty) {
>>> +        qemu_mutex_lock(&mon->qmp_queue_lock);
>>> +        q_is_empty = g_queue_is_empty(mon->qmp_requests);
>>> +        qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>   -    monitor_qmp_cleanup_req_queue_locked(mon);
>>> +        if (!q_is_empty) {
>>> +            if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
>>> +                /* Kick the dispatcher coroutine */
>>> +                aio_co_wake(qmp_dispatcher_co);
>>> +            } else {
>>> +                /* Let the dispatcher do its job for a while */
>>> +                g_usleep(40);
>>> +            }
>>> +        }
>>> +    }
>>>   -    if (need_resume) {
>>> -        /*
>>> -         * handle_qmp_command() suspended the monitor because the
>>> -         * request queue filled up, to be resumed when the queue has
>>> -         * space again.  We just emptied it; resume the monitor.
>>> -         *
>>> -         * Without this, the monitor would remain suspended forever
>>> -         * when we get here while the monitor is suspended.  An
>>> -         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
>>> -         */
>>> +    if (qatomic_mb_read(&mon->common.suspend_cnt)) {
>>>           monitor_resume(&mon->common);
>>>       }
>>> -
>>> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>   }
>>>     void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
>>> @@ -418,7 +414,7 @@ static void monitor_qmp_event(void *opaque,
>>> QEMUChrEvent event)
>>>            * stdio, it's possible that stdout is still open when stdin
>>>            * is closed.
>>>            */
>>> -        monitor_qmp_cleanup_queue_and_resume(mon);
>>> +        monitor_qmp_drain_queue(mon);
>>>           json_message_parser_destroy(&mon->parser);
>>>           json_message_parser_init(&mon->parser, handle_qmp_command,
>>>                                    mon, NULL);
>>
>> Before the patch: we call monitor_qmp_cleanup_queue_and_resume() to
>> throw away the contents of the request queue, and resume the monitor if
>> suspended.
>>
>> Afterwards: we call monitor_qmp_drain_queue() to wait for the request
>> queue to drain.  I think.  Before we discuss the how, I have a question
>> the commit message should answer, but doesn't: why?
>>
>
> Hi!
>
> Andrey is not in Virtuozzo now, and nobody doing this work actually..
> Honestly, I don't believe that the feature should be so difficult.
>
> Actually, we have the following patch in Virtuozzo 7 (Rhel7 based) for
> years, and it just works without any problems:
>
> --- a/monitor.c
> +++ b/monitor.c
> @@ -4013,7 +4013,7 @@ static int monitor_can_read(void *opaque)
>  {
>      Monitor *mon = opaque;
>  
> -    return !atomic_mb_read(&mon->suspend_cnt);
> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>  }
>
>
> And in Vz8 (Rhel8 based), it looks like (to avoid assertion in
> handle_qmp_command()):
>
> --- a/include/monitor/monitor.h
> +++ b/include/monitor/monitor.h
> @@ -9,7 +9,7 @@ extern __thread Monitor *cur_mon;
>  typedef struct MonitorHMP MonitorHMP;
>  typedef struct MonitorOptions MonitorOptions;
>  
> -#define QMP_REQ_QUEUE_LEN_MAX 8
> +#define QMP_REQ_QUEUE_LEN_MAX 4096
>  
>  extern QemuOptsList qemu_mon_opts;
>  
> diff --git a/monitor/monitor.c b/monitor/monitor.c
> index b385a3d569..a124d010f3 100644
> --- a/monitor/monitor.c
> +++ b/monitor/monitor.c
> @@ -501,7 +501,7 @@ int monitor_can_read(void *opaque)
>  {
>      Monitor *mon = opaque;
>  
> -    return !atomic_mb_read(&mon->suspend_cnt);
> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>  }
>
>
> There are some theoretical risks of overflowing... But it just works.
> Still this probably not good for upstream. And I'm not sure how would
> it work with OOB..
>
>
I believe that this piece was done to pass the unit tests.
I am unsure at the moment which one would fail with
the queue length increase.

At least this is my gut feeling.

Den



* Re: [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event
  2021-03-02 16:32       ` Denis V. Lunev
@ 2021-03-02 17:02         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 14+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-02 17:02 UTC (permalink / raw)
  To: Denis V. Lunev, Markus Armbruster, Andrey Shinkevich via
  Cc: qemu-block, Andrey Shinkevich, kwolf, mreitz, mdroth, thuth,
	lvivier, dgilbert, pbonzini

02.03.2021 19:32, Denis V. Lunev wrote:
> On 3/2/21 6:25 PM, Vladimir Sementsov-Ogievskiy wrote:
>> 02.03.2021 16:53, Markus Armbruster wrote:
>>> Andrey Shinkevich via <qemu-devel@nongnu.org> writes:
>>>
>>>> When CHR_EVENT_CLOSED comes, the QMP requests queue may still contain
>>>> unprocessed commands. It can happen with QMP capability OOB enabled.
>>>> Let the dispatcher complete handling requests rest in the monitor
>>>> queue.
>>>>
>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>> ---
>>>>    monitor/qmp.c | 46 +++++++++++++++++++++-------------------------
>>>>    1 file changed, 21 insertions(+), 25 deletions(-)
>>>>
>>>> diff --git a/monitor/qmp.c b/monitor/qmp.c
>>>> index 7169366..a86ed35 100644
>>>> --- a/monitor/qmp.c
>>>> +++ b/monitor/qmp.c
>>>> @@ -75,36 +75,32 @@ static void
>>>> monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
>>>>        }
>>>>    }
>>>>    -static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>>>> +/*
>>>> + * Let unprocessed QMP commands be handled.
>>>> + */
>>>> +static void monitor_qmp_drain_queue(MonitorQMP *mon)
>>>>    {
>>>> -    qemu_mutex_lock(&mon->qmp_queue_lock);
>>>> +    bool q_is_empty = false;
>>>>    -    /*
>>>> -     * Same condition as in monitor_qmp_dispatcher_co(), but before
>>>> -     * removing an element from the queue (hence no `- 1`).
>>>> -     * Also, the queue should not be empty either, otherwise the
>>>> -     * monitor hasn't been suspended yet (or was already resumed).
>>>> -     */
>>>> -    bool need_resume = (!qmp_oob_enabled(mon) ||
>>>> -        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
>>>> -        && !g_queue_is_empty(mon->qmp_requests);
>>>> +    while (!q_is_empty) {
>>>> +        qemu_mutex_lock(&mon->qmp_queue_lock);
>>>> +        q_is_empty = g_queue_is_empty(mon->qmp_requests);
>>>> +        qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>>    -    monitor_qmp_cleanup_req_queue_locked(mon);
>>>> +        if (!q_is_empty) {
>>>> +            if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
>>>> +                /* Kick the dispatcher coroutine */
>>>> +                aio_co_wake(qmp_dispatcher_co);
>>>> +            } else {
>>>> +                /* Let the dispatcher do its job for a while */
>>>> +                g_usleep(40);
>>>> +            }
>>>> +        }
>>>> +    }
>>>>    -    if (need_resume) {
>>>> -        /*
>>>> -         * handle_qmp_command() suspended the monitor because the
>>>> -         * request queue filled up, to be resumed when the queue has
>>>> -         * space again.  We just emptied it; resume the monitor.
>>>> -         *
>>>> -         * Without this, the monitor would remain suspended forever
>>>> -         * when we get here while the monitor is suspended.  An
>>>> -         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
>>>> -         */
>>>> +    if (qatomic_mb_read(&mon->common.suspend_cnt)) {
>>>>            monitor_resume(&mon->common);
>>>>        }
>>>> -
>>>> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>>    }
>>>>      void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
>>>> @@ -418,7 +414,7 @@ static void monitor_qmp_event(void *opaque,
>>>> QEMUChrEvent event)
>>>>             * stdio, it's possible that stdout is still open when stdin
>>>>             * is closed.
>>>>             */
>>>> -        monitor_qmp_cleanup_queue_and_resume(mon);
>>>> +        monitor_qmp_drain_queue(mon);
>>>>            json_message_parser_destroy(&mon->parser);
>>>>            json_message_parser_init(&mon->parser, handle_qmp_command,
>>>>                                     mon, NULL);
>>>
>>> Before the patch: we call monitor_qmp_cleanup_queue_and_resume() to
>>> throw away the contents of the request queue, and resume the monitor if
>>> suspended.
>>>
>>> Afterwards: we call monitor_qmp_drain_queue() to wait for the request
>>> queue to drain.  I think.  Before we discuss the how, I have a question
>>> the commit message should answer, but doesn't: why?
>>>
>>
>> Hi!
>>
>> Andrey is not in Virtuozzo now, and nobody doing this work actually..
>> Honestly, I don't believe that the feature should be so difficult.
>>
>> Actually, we have the following patch in Virtuozzo 7 (Rhel7 based) for
>> years, and it just works without any problems:
>>
>> --- a/monitor.c
>> +++ b/monitor.c
>> @@ -4013,7 +4013,7 @@ static int monitor_can_read(void *opaque)
>>   {
>>       Monitor *mon = opaque;
>>   
>> -    return !atomic_mb_read(&mon->suspend_cnt);
>> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>>   }
>>
>>
>> And in Vz8 (Rhel8 based), it looks like (to avoid assertion in
>> handle_qmp_command()):
>>
>> --- a/include/monitor/monitor.h
>> +++ b/include/monitor/monitor.h
>> @@ -9,7 +9,7 @@ extern __thread Monitor *cur_mon;
>>   typedef struct MonitorHMP MonitorHMP;
>>   typedef struct MonitorOptions MonitorOptions;
>>   
>> -#define QMP_REQ_QUEUE_LEN_MAX 8
>> +#define QMP_REQ_QUEUE_LEN_MAX 4096
>>   
>>   extern QemuOptsList qemu_mon_opts;
>>   
>> diff --git a/monitor/monitor.c b/monitor/monitor.c
>> index b385a3d569..a124d010f3 100644
>> --- a/monitor/monitor.c
>> +++ b/monitor/monitor.c
>> @@ -501,7 +501,7 @@ int monitor_can_read(void *opaque)
>>   {
>>       Monitor *mon = opaque;
>>   
>> -    return !atomic_mb_read(&mon->suspend_cnt);
>> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>>   }
>>
>>
>> There are some theoretical risks of overflowing... But it just works.
>> Still this probably not good for upstream. And I'm not sure how would
>> it work with OOB..
>>
>>
> I believe that this piece has been done to pass unit tests.
> I am unsure at the moment which one will failed with
> the queue length increase.
> 
> At least this is my gut feeling.
> 


Tests are passing. Actually, the most relevant thread is:

   https://patchew.org/QEMU/20190610105906.28524-1-dplotnikov@virtuozzo.com/

I'll ping it

-- 
Best regards,
Vladimir



* Re: [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event
  2021-03-02 15:25     ` Vladimir Sementsov-Ogievskiy
  2021-03-02 16:32       ` Denis V. Lunev
@ 2021-03-05 13:41       ` Markus Armbruster
  2021-03-05 14:01         ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 14+ messages in thread
From: Markus Armbruster @ 2021-03-05 13:41 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, lvivier, thuth, qemu-block, den, mdroth,
	Andrey Shinkevich via, pbonzini, Andrey Shinkevich, mreitz,
	dgilbert

Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> writes:

> 02.03.2021 16:53, Markus Armbruster wrote:
>> Andrey Shinkevich via <qemu-devel@nongnu.org> writes:
>> 
>>> When CHR_EVENT_CLOSED comes, the QMP requests queue may still contain
>>> unprocessed commands. It can happen with QMP capability OOB enabled.
>>> Let the dispatcher complete handling requests rest in the monitor
>>> queue.
>>>
>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>> ---
>>>   monitor/qmp.c | 46 +++++++++++++++++++++-------------------------
>>>   1 file changed, 21 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/monitor/qmp.c b/monitor/qmp.c
>>> index 7169366..a86ed35 100644
>>> --- a/monitor/qmp.c
>>> +++ b/monitor/qmp.c
>>> @@ -75,36 +75,32 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
>>>       }
>>>   }
>>>   
>>> -static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>>> +/*
>>> + * Let unprocessed QMP commands be handled.
>>> + */
>>> +static void monitor_qmp_drain_queue(MonitorQMP *mon)
>>>   {
>>> -    qemu_mutex_lock(&mon->qmp_queue_lock);
>>> +    bool q_is_empty = false;
>>>   
>>> -    /*
>>> -     * Same condition as in monitor_qmp_dispatcher_co(), but before
>>> -     * removing an element from the queue (hence no `- 1`).
>>> -     * Also, the queue should not be empty either, otherwise the
>>> -     * monitor hasn't been suspended yet (or was already resumed).
>>> -     */
>>> -    bool need_resume = (!qmp_oob_enabled(mon) ||
>>> -        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
>>> -        && !g_queue_is_empty(mon->qmp_requests);
>>> +    while (!q_is_empty) {
>>> +        qemu_mutex_lock(&mon->qmp_queue_lock);
>>> +        q_is_empty = g_queue_is_empty(mon->qmp_requests);
>>> +        qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>   
>>> -    monitor_qmp_cleanup_req_queue_locked(mon);
>>> +        if (!q_is_empty) {
>>> +            if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
>>> +                /* Kick the dispatcher coroutine */
>>> +                aio_co_wake(qmp_dispatcher_co);
>>> +            } else {
>>> +                /* Let the dispatcher do its job for a while */
>>> +                g_usleep(40);
>>> +            }
>>> +        }
>>> +    }
>>>   
>>> -    if (need_resume) {
>>> -        /*
>>> -         * handle_qmp_command() suspended the monitor because the
>>> -         * request queue filled up, to be resumed when the queue has
>>> -         * space again.  We just emptied it; resume the monitor.
>>> -         *
>>> -         * Without this, the monitor would remain suspended forever
>>> -         * when we get here while the monitor is suspended.  An
>>> -         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
>>> -         */
>>> +    if (qatomic_mb_read(&mon->common.suspend_cnt)) {
>>>           monitor_resume(&mon->common);
>>>       }
>>> -
>>> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>   }
>>>   
>>>   void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
>>> @@ -418,7 +414,7 @@ static void monitor_qmp_event(void *opaque, QEMUChrEvent event)
>>>            * stdio, it's possible that stdout is still open when stdin
>>>            * is closed.
>>>            */
>>> -        monitor_qmp_cleanup_queue_and_resume(mon);
>>> +        monitor_qmp_drain_queue(mon);
>>>           json_message_parser_destroy(&mon->parser);
>>>           json_message_parser_init(&mon->parser, handle_qmp_command,
>>>                                    mon, NULL);
>> 
>> Before the patch: we call monitor_qmp_cleanup_queue_and_resume() to
>> throw away the contents of the request queue, and resume the monitor if
>> suspended.
>> 
>> Afterwards: we call monitor_qmp_drain_queue() to wait for the request
>> queue to drain.  I think.  Before we discuss the how, I have a question
>> the commit message should answer, but doesn't: why?
>> 
>
> Hi!
>
> Andrey is not in Virtuozzo now, and nobody doing this work actually.. Honestly, I don't believe that the feature should be so difficult.
>
> Actually, we have the following patch in Virtuozzo 7 (Rhel7 based) for years, and it just works without any problems:

I appreciate your repeated efforts to get your downstream patch
upstream.

> --- a/monitor.c
> +++ b/monitor.c
> @@ -4013,7 +4013,7 @@ static int monitor_can_read(void *opaque)
>   {
>       Monitor *mon = opaque;
>   
> -    return !atomic_mb_read(&mon->suspend_cnt);
> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>   }
>
>
> And in Vz8 (Rhel8 based), it looks like (to avoid assertion in handle_qmp_command()):
>
> --- a/include/monitor/monitor.h
> +++ b/include/monitor/monitor.h
> @@ -9,7 +9,7 @@ extern __thread Monitor *cur_mon;
>   typedef struct MonitorHMP MonitorHMP;
>   typedef struct MonitorOptions MonitorOptions;
>   
> -#define QMP_REQ_QUEUE_LEN_MAX 8
> +#define QMP_REQ_QUEUE_LEN_MAX 4096
>   
>   extern QemuOptsList qemu_mon_opts;
>   
>
> diff --git a/monitor/monitor.c b/monitor/monitor.c
> index b385a3d569..a124d010f3 100644
> --- a/monitor/monitor.c
> +++ b/monitor/monitor.c
> @@ -501,7 +501,7 @@ int monitor_can_read(void *opaque)
>   {
>       Monitor *mon = opaque;
>   
> -    return !atomic_mb_read(&mon->suspend_cnt);
> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>   }
>
>
> There are some theoretical risks of overflowing... But it just works. Still this probably not good for upstream. And I'm not sure how would it work with OOB..

This is exactly what makes the feature difficult: we need to think
through the ramifications taking OOB and coroutines into account.

So far, the feature has been important enough to post patches, but not
important enough to accompany them with a "think through".

Sometimes, maintainers are willing and able to do some of the patch
submitter's work for them.  I haven't been able to do that for this
feature.  I'll need more help, I'm afraid.




* Re: [PATCH v3 2/5] monitor: drain requests queue with 'channel closed' event
  2021-03-05 13:41       ` Markus Armbruster
@ 2021-03-05 14:01         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 14+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-05 14:01 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Andrey Shinkevich via, qemu-block, Andrey Shinkevich, kwolf,
	mreitz, mdroth, thuth, lvivier, dgilbert, pbonzini, den

05.03.2021 16:41, Markus Armbruster wrote:
> Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> writes:
> 
>> 02.03.2021 16:53, Markus Armbruster wrote:
>>> Andrey Shinkevich via <qemu-devel@nongnu.org> writes:
>>>
>>>> When CHR_EVENT_CLOSED comes, the QMP requests queue may still contain
>>>> unprocessed commands. It can happen with QMP capability OOB enabled.
>>>> Let the dispatcher complete handling requests rest in the monitor
>>>> queue.
>>>>
>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>> ---
>>>>    monitor/qmp.c | 46 +++++++++++++++++++++-------------------------
>>>>    1 file changed, 21 insertions(+), 25 deletions(-)
>>>>
>>>> diff --git a/monitor/qmp.c b/monitor/qmp.c
>>>> index 7169366..a86ed35 100644
>>>> --- a/monitor/qmp.c
>>>> +++ b/monitor/qmp.c
>>>> @@ -75,36 +75,32 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
>>>>        }
>>>>    }
>>>>    
>>>> -static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>>>> +/*
>>>> + * Let unprocessed QMP commands be handled.
>>>> + */
>>>> +static void monitor_qmp_drain_queue(MonitorQMP *mon)
>>>>    {
>>>> -    qemu_mutex_lock(&mon->qmp_queue_lock);
>>>> +    bool q_is_empty = false;
>>>>    
>>>> -    /*
>>>> -     * Same condition as in monitor_qmp_dispatcher_co(), but before
>>>> -     * removing an element from the queue (hence no `- 1`).
>>>> -     * Also, the queue should not be empty either, otherwise the
>>>> -     * monitor hasn't been suspended yet (or was already resumed).
>>>> -     */
>>>> -    bool need_resume = (!qmp_oob_enabled(mon) ||
>>>> -        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
>>>> -        && !g_queue_is_empty(mon->qmp_requests);
>>>> +    while (!q_is_empty) {
>>>> +        qemu_mutex_lock(&mon->qmp_queue_lock);
>>>> +        q_is_empty = g_queue_is_empty(mon->qmp_requests);
>>>> +        qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>>    
>>>> -    monitor_qmp_cleanup_req_queue_locked(mon);
>>>> +        if (!q_is_empty) {
>>>> +            if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
>>>> +                /* Kick the dispatcher coroutine */
>>>> +                aio_co_wake(qmp_dispatcher_co);
>>>> +            } else {
>>>> +                /* Let the dispatcher do its job for a while */
>>>> +                g_usleep(40);
>>>> +            }
>>>> +        }
>>>> +    }
>>>>    
>>>> -    if (need_resume) {
>>>> -        /*
>>>> -         * handle_qmp_command() suspended the monitor because the
>>>> -         * request queue filled up, to be resumed when the queue has
>>>> -         * space again.  We just emptied it; resume the monitor.
>>>> -         *
>>>> -         * Without this, the monitor would remain suspended forever
>>>> -         * when we get here while the monitor is suspended.  An
>>>> -         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
>>>> -         */
>>>> +    if (qatomic_mb_read(&mon->common.suspend_cnt)) {
>>>>            monitor_resume(&mon->common);
>>>>        }
>>>> -
>>>> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
>>>>    }
>>>>    
>>>>    void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
>>>> @@ -418,7 +414,7 @@ static void monitor_qmp_event(void *opaque, QEMUChrEvent event)
>>>>             * stdio, it's possible that stdout is still open when stdin
>>>>             * is closed.
>>>>             */
>>>> -        monitor_qmp_cleanup_queue_and_resume(mon);
>>>> +        monitor_qmp_drain_queue(mon);
>>>>            json_message_parser_destroy(&mon->parser);
>>>>            json_message_parser_init(&mon->parser, handle_qmp_command,
>>>>                                     mon, NULL);
>>>
>>> Before the patch: we call monitor_qmp_cleanup_queue_and_resume() to
>>> throw away the contents of the request queue, and resume the monitor if
>>> suspended.
>>>
>>> Afterwards: we call monitor_qmp_drain_queue() to wait for the request
>>> queue to drain.  I think.  Before we discuss the how, I have a question
>>> the commit message should answer, but doesn't: why?
>>>
>>
>> Hi!
>>
>> Andrey is not in Virtuozzo now, and nobody doing this work actually.. Honestly, I don't believe that the feature should be so difficult.
>>
>> Actually, we have the following patch in Virtuozzo 7 (Rhel7 based) for years, and it just works without any problems:
> 
> I appreciate your repeated efforts to get your downstream patch
> upstream.
> 
>> --- a/monitor.c
>> +++ b/monitor.c
>> @@ -4013,7 +4013,7 @@ static int monitor_can_read(void *opaque)
>>    {
>>        Monitor *mon = opaque;
>>    
>> -    return !atomic_mb_read(&mon->suspend_cnt);
>> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>>    }
>>
>>
>> And in Vz8 (Rhel8 based), it looks like (to avoid assertion in handle_qmp_command()):
>>
>> --- a/include/monitor/monitor.h
>> +++ b/include/monitor/monitor.h
>> @@ -9,7 +9,7 @@ extern __thread Monitor *cur_mon;
>>    typedef struct MonitorHMP MonitorHMP;
>>    typedef struct MonitorOptions MonitorOptions;
>>    
>> -#define QMP_REQ_QUEUE_LEN_MAX 8
>> +#define QMP_REQ_QUEUE_LEN_MAX 4096
>>    
>>    extern QemuOptsList qemu_mon_opts;
>>    
>>
>> diff --git a/monitor/monitor.c b/monitor/monitor.c
>> index b385a3d569..a124d010f3 100644
>> --- a/monitor/monitor.c
>> +++ b/monitor/monitor.c
>> @@ -501,7 +501,7 @@ int monitor_can_read(void *opaque)
>>    {
>>        Monitor *mon = opaque;
>>    
>> -    return !atomic_mb_read(&mon->suspend_cnt);
>> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>>    }
>>
>>
>> There are some theoretical risks of overflowing... But it just works. Still this probably not good for upstream. And I'm not sure how would it work with OOB..
> 
> This is exactly what makes the feature difficult: we need to think
> through the ramifications taking OOB and coroutines into account.
> 
> So far, the feature has been important enough to post patches, but not
> important enough to accompany them with a "think through".
> 
> Sometimes, maintainers are willing and able to do some of the patch
> submitter's work for them.  I haven't been able to do that for this
> feature.  I'll need more help, I'm afraid.
> 

OK, I understand. So, when we are ready, we'll come back with a fresh new series and a good explanation.

-- 
Best regards,
Vladimir



