All of lore.kernel.org
* [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter
@ 2023-09-18 16:16 Stefan Hajnoczi
  2023-09-18 16:16 ` [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type Stefan Hajnoczi
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2023-09-18 16:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Stefan Hajnoczi, Eric Blake, Kevin Wolf, Michael S. Tsirkin,
	Hanna Reitz

virtio-blk and virtio-scsi devices need a way to specify the mapping between
IOThreads and virtqueues. At the moment all virtqueues are assigned to a single
IOThread or the main loop. This single thread can be a CPU bottleneck, so it is
necessary to allow finer-grained assignment to spread the load. With this
series applied, "pidstat -t 1" shows that guests with -smp 2 or higher are able
to exploit multiple IOThreads.

The command-line syntax for the new iothread-vq-mapping property
introduced by this series is as follows:

  --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'

IOThreads are specified by name and virtqueues are specified by 0-based
index.

It will be common to simply assign virtqueues round-robin across a set
of IOThreads. A convenient syntax that does not require specifying
individual virtqueue indices is available:

  --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
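For concreteness, a complete invocation using the round-robin form could
look like the following. This is an illustrative sketch, not taken from
the series: the object ids, disk path, and num-queues value are
hypothetical.

```shell
# Hypothetical example: create two IOThreads and spread a 4-queue
# virtio-blk device's virtqueues across them round-robin.
qemu-system-x86_64 \
    --object iothread,id=iothread0 \
    --object iothread,id=iothread1 \
    --blockdev '{"driver":"file","filename":"disk.img","node-name":"file0"}' \
    --device '{"driver":"virtio-blk-pci","drive":"file0","num-queues":4,"iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"}]}'
```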

There is no way to reassign virtqueues at runtime, and I expect that to
be a very rare requirement.

Note that JSON --device syntax is required for the iothread-vq-mapping
parameter because it's non-scalar.

Based-on: 20230912231037.826804-1-stefanha@redhat.com ("[PATCH v3 0/5] block-backend: process I/O in the current AioContext")

Stefan Hajnoczi (2):
  qdev: add IOThreadVirtQueueMappingList property type
  virtio-blk: add iothread-vq-mapping parameter

 qapi/virtio.json                    |  30 +++++
 hw/block/dataplane/virtio-blk.h     |   3 +
 include/hw/qdev-properties-system.h |   4 +
 include/hw/virtio/virtio-blk.h      |   2 +
 hw/block/dataplane/virtio-blk.c     | 163 +++++++++++++++++++++-------
 hw/block/virtio-blk.c               |  92 ++++++++++++++--
 hw/core/qdev-properties-system.c    |  47 ++++++++
 7 files changed, 287 insertions(+), 54 deletions(-)

-- 
2.41.0



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-09-18 16:16 [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
@ 2023-09-18 16:16 ` Stefan Hajnoczi
  2023-10-14  7:35   ` Markus Armbruster
  2023-12-11 13:56   ` Kevin Wolf
  2023-09-18 16:16 ` [PATCH v2 2/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2023-09-18 16:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Stefan Hajnoczi, Eric Blake, Kevin Wolf, Michael S. Tsirkin,
	Hanna Reitz

virtio-blk and virtio-scsi devices will need a way to specify the
mapping between IOThreads and virtqueues. At the moment all virtqueues
are assigned to a single IOThread or the main loop. This single thread
can be a CPU bottleneck, so it is necessary to allow finer-grained
assignment to spread the load.

Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
parameter that maps virtqueues to IOThreads. The command-line syntax for
this new property is as follows:

  --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'

IOThreads are specified by name and virtqueues are specified by 0-based
index.

It will be common to simply assign virtqueues round-robin across a set
of IOThreads. A convenient syntax that does not require specifying
individual virtqueue indices is available:

  --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 qapi/virtio.json                    | 30 ++++++++++++++++++
 include/hw/qdev-properties-system.h |  4 +++
 hw/core/qdev-properties-system.c    | 47 +++++++++++++++++++++++++++++
 3 files changed, 81 insertions(+)

diff --git a/qapi/virtio.json b/qapi/virtio.json
index e6dcee7b83..cb341ae596 100644
--- a/qapi/virtio.json
+++ b/qapi/virtio.json
@@ -928,3 +928,33 @@
   'data': { 'path': 'str', 'queue': 'uint16', '*index': 'uint16' },
   'returns': 'VirtioQueueElement',
   'features': [ 'unstable' ] }
+
+##
+# @IOThreadVirtQueueMapping:
+#
+# Describes the subset of virtqueues assigned to an IOThread.
+#
+# @iothread: the id of IOThread object
+# @vqs: an optional array of virtqueue indices that will be handled by this
+#       IOThread. When absent, virtqueues are assigned round-robin across all
+#       IOThreadVirtQueueMappings provided. Either all
+#       IOThreadVirtQueueMappings must have @vqs or none of them must have it.
+#
+# Since: 8.2
+#
+##
+
+{ 'struct': 'IOThreadVirtQueueMapping',
+  'data': { 'iothread': 'str', '*vqs': ['uint16'] } }
+
+##
+# @IOThreadVirtQueueMappings:
+#
+# IOThreadVirtQueueMapping list. This struct is not actually used but the
+# IOThreadVirtQueueMappingList type it generates is!
+#
+# Since: 8.2
+##
+
+{ 'struct': 'IOThreadVirtQueueMappings',
+  'data': { 'mappings': ['IOThreadVirtQueueMapping'] } }
diff --git a/include/hw/qdev-properties-system.h b/include/hw/qdev-properties-system.h
index 0ac327ae60..c526e502c8 100644
--- a/include/hw/qdev-properties-system.h
+++ b/include/hw/qdev-properties-system.h
@@ -22,6 +22,7 @@ extern const PropertyInfo qdev_prop_audiodev;
 extern const PropertyInfo qdev_prop_off_auto_pcibar;
 extern const PropertyInfo qdev_prop_pcie_link_speed;
 extern const PropertyInfo qdev_prop_pcie_link_width;
+extern const PropertyInfo qdev_prop_iothread_vq_mapping_list;
 
 #define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
     DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
@@ -73,5 +74,8 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
 #define DEFINE_PROP_UUID_NODEFAULT(_name, _state, _field) \
     DEFINE_PROP(_name, _state, _field, qdev_prop_uuid, QemuUUID)
 
+#define DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST(_name, _state, _field) \
+    DEFINE_PROP(_name, _state, _field, qdev_prop_iothread_vq_mapping_list, \
+                IOThreadVirtQueueMappingList *)
 
 #endif
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 6d5d43eda2..831796e106 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -18,6 +18,7 @@
 #include "qapi/qapi-types-block.h"
 #include "qapi/qapi-types-machine.h"
 #include "qapi/qapi-types-migration.h"
+#include "qapi/qapi-visit-virtio.h"
 #include "qapi/qmp/qerror.h"
 #include "qemu/ctype.h"
 #include "qemu/cutils.h"
@@ -1147,3 +1148,49 @@ const PropertyInfo qdev_prop_uuid = {
     .set   = set_uuid,
     .set_default_value = set_default_uuid_auto,
 };
+
+/* --- IOThreadVirtQueueMappingList --- */
+
+static void get_iothread_vq_mapping_list(Object *obj, Visitor *v,
+        const char *name, void *opaque, Error **errp)
+{
+    IOThreadVirtQueueMappingList **prop_ptr =
+        object_field_prop_ptr(obj, opaque);
+    IOThreadVirtQueueMappingList *list = *prop_ptr;
+
+    visit_type_IOThreadVirtQueueMappingList(v, name, &list, errp);
+}
+
+static void set_iothread_vq_mapping_list(Object *obj, Visitor *v,
+        const char *name, void *opaque, Error **errp)
+{
+    IOThreadVirtQueueMappingList **prop_ptr =
+        object_field_prop_ptr(obj, opaque);
+    IOThreadVirtQueueMappingList *list;
+
+    if (!visit_type_IOThreadVirtQueueMappingList(v, name, &list, errp)) {
+        return;
+    }
+
+    qapi_free_IOThreadVirtQueueMappingList(*prop_ptr);
+    *prop_ptr = list;
+}
+
+static void release_iothread_vq_mapping_list(Object *obj,
+        const char *name, void *opaque)
+{
+    IOThreadVirtQueueMappingList **prop_ptr =
+        object_field_prop_ptr(obj, opaque);
+
+    qapi_free_IOThreadVirtQueueMappingList(*prop_ptr);
+    *prop_ptr = NULL;
+}
+
+const PropertyInfo qdev_prop_iothread_vq_mapping_list = {
+    .name = "IOThreadVirtQueueMappingList",
+    .description = "IOThread virtqueue mapping list [{\"iothread\":\"<id>\", "
+                   "\"vqs\":[1,2,3,...]},...]",
+    .get = get_iothread_vq_mapping_list,
+    .set = set_iothread_vq_mapping_list,
+    .release = release_iothread_vq_mapping_list,
+};
-- 
2.41.0
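The round-robin assignment described in the commit message above can be
sketched in Python (purely illustrative; the actual implementation is
the C apply_vq_mapping() loop added in patch 2):

```python
def assign_round_robin(iothreads, num_queues):
    """Map virtqueue index -> IOThread name, round-robin.

    Mirrors the semantics described above: IOThread i in the mapping
    list takes vqs i, i + N, i + 2N, ... where N is the number of
    IOThreads listed.
    """
    mapping = {}
    for i, name in enumerate(iothreads):
        for vq in range(i, num_queues, len(iothreads)):
            mapping[vq] = name
    return mapping

# 4 virtqueues spread across 2 IOThreads:
# {0: 'iothread0', 1: 'iothread1', 2: 'iothread0', 3: 'iothread1'}
print(assign_round_robin(["iothread0", "iothread1"], 4))
```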




* [PATCH v2 2/2] virtio-blk: add iothread-vq-mapping parameter
  2023-09-18 16:16 [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
  2023-09-18 16:16 ` [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type Stefan Hajnoczi
@ 2023-09-18 16:16 ` Stefan Hajnoczi
  2023-10-03 13:12 ` [PATCH v2 0/2] " Michael S. Tsirkin
  2023-11-02 14:10 ` Kevin Wolf
  3 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2023-09-18 16:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Stefan Hajnoczi, Eric Blake, Kevin Wolf, Michael S. Tsirkin,
	Hanna Reitz

Add the iothread-vq-mapping parameter to assign virtqueues to IOThreads.
Store the vq:AioContext mapping in the new struct
VirtIOBlockDataPlane->vq_aio_context[] field and refactor the code to
use the per-vq AioContext instead of the BlockDriverState's AioContext.

Reimplement --device virtio-blk-pci,iothread= and non-IOThread mode by
assigning all virtqueues to the IOThread and main loop's AioContext in
vq_aio_context[], respectively.

The comment in struct VirtIOBlockDataPlane about EventNotifiers is
stale. Remove it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.h |   3 +
 include/hw/virtio/virtio-blk.h  |   2 +
 hw/block/dataplane/virtio-blk.c | 163 ++++++++++++++++++++++++--------
 hw/block/virtio-blk.c           |  92 +++++++++++++++---
 4 files changed, 206 insertions(+), 54 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.h b/hw/block/dataplane/virtio-blk.h
index 5e18bb99ae..1a806fe447 100644
--- a/hw/block/dataplane/virtio-blk.h
+++ b/hw/block/dataplane/virtio-blk.h
@@ -28,4 +28,7 @@ void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq);
 int virtio_blk_data_plane_start(VirtIODevice *vdev);
 void virtio_blk_data_plane_stop(VirtIODevice *vdev);
 
+void virtio_blk_data_plane_detach(VirtIOBlockDataPlane *s);
+void virtio_blk_data_plane_attach(VirtIOBlockDataPlane *s);
+
 #endif /* HW_DATAPLANE_VIRTIO_BLK_H */
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 9881009c22..5e4091e4da 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -21,6 +21,7 @@
 #include "sysemu/block-backend.h"
 #include "sysemu/block-ram-registrar.h"
 #include "qom/object.h"
+#include "qapi/qapi-types-virtio.h"
 
 #define TYPE_VIRTIO_BLK "virtio-blk-device"
 OBJECT_DECLARE_SIMPLE_TYPE(VirtIOBlock, VIRTIO_BLK)
@@ -37,6 +38,7 @@ struct VirtIOBlkConf
 {
     BlockConf conf;
     IOThread *iothread;
+    IOThreadVirtQueueMappingList *iothread_vq_mapping_list;
     char *serial;
     uint32_t request_merging;
     uint16_t num_queues;
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index f83bb0f116..7c933ed800 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -32,13 +32,11 @@ struct VirtIOBlockDataPlane {
     VirtIOBlkConf *conf;
     VirtIODevice *vdev;
 
-    /* Note that these EventNotifiers are assigned by value.  This is
-     * fine as long as you do not call event_notifier_cleanup on them
-     * (because you don't own the file descriptor or handle; you just
-     * use it).
+    /*
+     * The AioContext for each virtqueue. The BlockDriverState will use the
+     * first element as its AioContext.
      */
-    IOThread *iothread;
-    AioContext *ctx;
+    AioContext **vq_aio_context;
 };
 
 /* Raise an interrupt to signal guest, if necessary */
@@ -47,6 +45,45 @@ void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq)
     virtio_notify_irqfd(s->vdev, vq);
 }
 
+/* Generate vq:AioContext mappings from a validated iothread-vq-mapping list */
+static void
+apply_vq_mapping(IOThreadVirtQueueMappingList *iothread_vq_mapping_list,
+                 AioContext **vq_aio_context, uint16_t num_queues)
+{
+    IOThreadVirtQueueMappingList *node;
+    size_t num_iothreads = 0;
+    size_t cur_iothread = 0;
+
+    for (node = iothread_vq_mapping_list; node; node = node->next) {
+        num_iothreads++;
+    }
+
+    for (node = iothread_vq_mapping_list; node; node = node->next) {
+        IOThread *iothread = iothread_by_id(node->value->iothread);
+        AioContext *ctx = iothread_get_aio_context(iothread);
+
+        /* Released in virtio_blk_data_plane_destroy() */
+        object_ref(OBJECT(iothread));
+
+        if (node->value->vqs) {
+            uint16List *vq;
+
+            /* Explicit vq:IOThread assignment */
+            for (vq = node->value->vqs; vq; vq = vq->next) {
+                vq_aio_context[vq->value] = ctx;
+            }
+        } else {
+            /* Round-robin vq:IOThread assignment */
+            for (unsigned i = cur_iothread; i < num_queues;
+                 i += num_iothreads) {
+                vq_aio_context[i] = ctx;
+            }
+        }
+
+        cur_iothread++;
+    }
+}
+
 /* Context: QEMU global mutex held */
 bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
                                   VirtIOBlockDataPlane **dataplane,
@@ -58,7 +95,7 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
 
     *dataplane = NULL;
 
-    if (conf->iothread) {
+    if (conf->iothread || conf->iothread_vq_mapping_list) {
         if (!k->set_guest_notifiers || !k->ioeventfd_assign) {
             error_setg(errp,
                        "device is incompatible with iothread "
@@ -86,13 +123,24 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     s = g_new0(VirtIOBlockDataPlane, 1);
     s->vdev = vdev;
     s->conf = conf;
+    s->vq_aio_context = g_new(AioContext *, conf->num_queues);
 
-    if (conf->iothread) {
-        s->iothread = conf->iothread;
-        object_ref(OBJECT(s->iothread));
-        s->ctx = iothread_get_aio_context(s->iothread);
+    if (conf->iothread_vq_mapping_list) {
+        apply_vq_mapping(conf->iothread_vq_mapping_list, s->vq_aio_context,
+                         conf->num_queues);
+    } else if (conf->iothread) {
+        AioContext *ctx = iothread_get_aio_context(conf->iothread);
+        for (unsigned i = 0; i < conf->num_queues; i++) {
+            s->vq_aio_context[i] = ctx;
+        }
+
+        /* Released in virtio_blk_data_plane_destroy() */
+        object_ref(OBJECT(conf->iothread));
     } else {
-        s->ctx = qemu_get_aio_context();
+        AioContext *ctx = qemu_get_aio_context();
+        for (unsigned i = 0; i < conf->num_queues; i++) {
+            s->vq_aio_context[i] = ctx;
+        }
     }
 
     *dataplane = s;
@@ -104,6 +152,7 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
 void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
 {
     VirtIOBlock *vblk;
+    VirtIOBlkConf *conf = s->conf;
 
     if (!s) {
         return;
@@ -111,9 +160,21 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
 
     vblk = VIRTIO_BLK(s->vdev);
     assert(!vblk->dataplane_started);
-    if (s->iothread) {
-        object_unref(OBJECT(s->iothread));
+
+    if (conf->iothread_vq_mapping_list) {
+        IOThreadVirtQueueMappingList *node;
+
+        for (node = conf->iothread_vq_mapping_list; node; node = node->next) {
+            IOThread *iothread = iothread_by_id(node->value->iothread);
+            object_unref(OBJECT(iothread));
+        }
     }
+
+    if (conf->iothread) {
+        object_unref(OBJECT(conf->iothread));
+    }
+
+    g_free(s->vq_aio_context);
     g_free(s);
 }
 
@@ -180,20 +241,14 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
 
     old_context = blk_get_aio_context(s->conf->conf.blk);
     aio_context_acquire(old_context);
-    r = blk_set_aio_context(s->conf->conf.blk, s->ctx, &local_err);
+    r = blk_set_aio_context(s->conf->conf.blk, s->vq_aio_context[0],
+                            &local_err);
     aio_context_release(old_context);
     if (r < 0) {
         error_report_err(local_err);
         goto fail_aio_context;
     }
 
-    /* Kick right away to begin processing requests already in vring */
-    for (i = 0; i < nvqs; i++) {
-        VirtQueue *vq = virtio_get_queue(s->vdev, i);
-
-        event_notifier_set(virtio_queue_get_host_notifier(vq));
-    }
-
     /*
      * These fields must be visible to the IOThread when it processes the
      * virtqueue, otherwise it will think dataplane has not started yet.
@@ -208,13 +263,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
 
     /* Get this show started by hooking up our callbacks */
     if (!blk_in_drain(s->conf->conf.blk)) {
-        aio_context_acquire(s->ctx);
         for (i = 0; i < nvqs; i++) {
             VirtQueue *vq = virtio_get_queue(s->vdev, i);
+            AioContext *ctx = s->vq_aio_context[i];
 
-            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+            /* Kick right away to begin processing requests already in vring */
+            event_notifier_set(virtio_queue_get_host_notifier(vq));
+
+            virtio_queue_aio_attach_host_notifier(vq, ctx);
         }
-        aio_context_release(s->ctx);
     }
     return 0;
 
@@ -242,23 +299,18 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
  *
  * Context: BH in IOThread
  */
-static void virtio_blk_data_plane_stop_bh(void *opaque)
+static void virtio_blk_data_plane_stop_vq_bh(void *opaque)
 {
-    VirtIOBlockDataPlane *s = opaque;
-    unsigned i;
+    VirtQueue *vq = opaque;
+    EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
 
-    for (i = 0; i < s->conf->num_queues; i++) {
-        VirtQueue *vq = virtio_get_queue(s->vdev, i);
-        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
+    virtio_queue_aio_detach_host_notifier(vq, qemu_get_current_aio_context());
 
-        virtio_queue_aio_detach_host_notifier(vq, s->ctx);
-
-        /*
-         * Test and clear notifier after disabling event, in case poll callback
-         * didn't have time to run.
-         */
-        virtio_queue_host_notifier_read(host_notifier);
-    }
+    /*
+     * Test and clear notifier after disabling event, in case poll callback
+     * didn't have time to run.
+     */
+    virtio_queue_host_notifier_read(host_notifier);
 }
 
 /* Context: QEMU global mutex held */
@@ -285,7 +337,14 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     trace_virtio_blk_data_plane_stop(s);
 
     if (!blk_in_drain(s->conf->conf.blk)) {
-        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+        for (i = 0; i < nvqs; i++) {
+            VirtQueue *vq = virtio_get_queue(s->vdev, i);
+            AioContext *ctx = s->vq_aio_context[i];
+
+            aio_context_acquire(ctx);
+            aio_wait_bh_oneshot(ctx, virtio_blk_data_plane_stop_vq_bh, vq);
+            aio_context_release(ctx);
+        }
     }
 
     /*
@@ -314,7 +373,7 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
      */
     vblk->dataplane_started = false;
 
-    aio_context_acquire(s->ctx);
+    aio_context_acquire(s->vq_aio_context[0]);
 
     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
     blk_drain(s->conf->conf.blk);
@@ -325,10 +384,30 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
      */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context(), NULL);
 
-    aio_context_release(s->ctx);
+    aio_context_release(s->vq_aio_context[0]);
 
     /* Clean up guest notifier (irq) */
     k->set_guest_notifiers(qbus->parent, nvqs, false);
 
     s->stopping = false;
 }
+
+void virtio_blk_data_plane_detach(VirtIOBlockDataPlane *s)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(s->vdev);
+
+    for (uint16_t i = 0; i < s->conf->num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, s->vq_aio_context[i]);
+    }
+}
+
+void virtio_blk_data_plane_attach(VirtIOBlockDataPlane *s)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(s->vdev);
+
+    for (uint16_t i = 0; i < s->conf->num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, s->vq_aio_context[i]);
+    }
+}
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 5735a51e3b..a16c404469 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1150,6 +1150,7 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
             return;
         }
     }
+
     virtio_blk_handle_vq(s, vq);
 }
 
@@ -1475,6 +1476,68 @@ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f,
     return 0;
 }
 
+static bool
+validate_iothread_vq_mapping_list(IOThreadVirtQueueMappingList *list,
+        uint16_t num_queues, Error **errp)
+{
+    g_autofree unsigned long *vqs = bitmap_new(num_queues);
+    g_autoptr(GHashTable) iothreads =
+        g_hash_table_new(g_str_hash, g_str_equal);
+
+    for (IOThreadVirtQueueMappingList *node = list; node; node = node->next) {
+        const char *name = node->value->iothread;
+        uint16List *vq;
+
+        if (!iothread_by_id(name)) {
+            error_setg(errp, "IOThread \"%s\" object does not exist", name);
+            return false;
+        }
+
+        if (!g_hash_table_add(iothreads, (gpointer)name)) {
+            error_setg(errp,
+                    "duplicate IOThread name \"%s\" in iothread-vq-mapping",
+                    name);
+            return false;
+        }
+
+        if (node != list) {
+            if (!!node->value->vqs != !!list->value->vqs) {
+                error_setg(errp, "either all items in iothread-vq-mapping "
+                                 "must have vqs or none of them must have it");
+                return false;
+            }
+        }
+
+        for (vq = node->value->vqs; vq; vq = vq->next) {
+            if (vq->value >= num_queues) {
+                error_setg(errp, "vq index %u for IOThread \"%s\" must be "
+                        "less than num_queues %u in iothread-vq-mapping",
+                        vq->value, name, num_queues);
+                return false;
+            }
+
+            if (test_and_set_bit(vq->value, vqs)) {
+                error_setg(errp, "cannot assign vq %u to IOThread \"%s\" "
+                        "because it is already assigned", vq->value, name);
+                return false;
+            }
+        }
+    }
+
+    if (list->value->vqs) {
+        for (uint16_t i = 0; i < num_queues; i++) {
+            if (!test_bit(i, vqs)) {
+                error_setg(errp,
+                        "missing vq %u IOThread assignment in iothread-vq-mapping",
+                        i);
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
 static void virtio_resize_cb(void *opaque)
 {
     VirtIODevice *vdev = opaque;
@@ -1499,34 +1562,24 @@ static void virtio_blk_resize(void *opaque)
 static void virtio_blk_drained_begin(void *opaque)
 {
     VirtIOBlock *s = opaque;
-    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
-    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
 
     if (!s->dataplane || !s->dataplane_started) {
         return;
     }
 
-    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
-        VirtQueue *vq = virtio_get_queue(vdev, i);
-        virtio_queue_aio_detach_host_notifier(vq, ctx);
-    }
+    virtio_blk_data_plane_detach(s->dataplane);
 }
 
 /* Resume virtqueue ioeventfd processing after drain */
 static void virtio_blk_drained_end(void *opaque)
 {
     VirtIOBlock *s = opaque;
-    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
-    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
 
     if (!s->dataplane || !s->dataplane_started) {
         return;
     }
 
-    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
-        VirtQueue *vq = virtio_get_queue(vdev, i);
-        virtio_queue_aio_attach_host_notifier(vq, ctx);
-    }
+    virtio_blk_data_plane_attach(s->dataplane);
 }
 
 static const BlockDevOps virtio_block_ops = {
@@ -1612,6 +1665,19 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (conf->iothread_vq_mapping_list) {
+        if (conf->iothread) {
+            error_setg(errp, "iothread and iothread-vq-mapping properties "
+                             "cannot be set at the same time");
+            return;
+        }
+
+        if (!validate_iothread_vq_mapping_list(conf->iothread_vq_mapping_list,
+                                               conf->num_queues, errp)) {
+            return;
+        }
+    }
+
     s->config_size = virtio_get_config_size(&virtio_blk_cfg_size_params,
                                             s->host_features);
     virtio_init(vdev, VIRTIO_ID_BLOCK, s->config_size);
@@ -1714,6 +1780,8 @@ static Property virtio_blk_properties[] = {
     DEFINE_PROP_BOOL("seg-max-adjust", VirtIOBlock, conf.seg_max_adjust, true),
     DEFINE_PROP_LINK("iothread", VirtIOBlock, conf.iothread, TYPE_IOTHREAD,
                      IOThread *),
+    DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST("iothread-vq-mapping", VirtIOBlock,
+                                         conf.iothread_vq_mapping_list),
     DEFINE_PROP_BIT64("discard", VirtIOBlock, host_features,
                       VIRTIO_BLK_F_DISCARD, true),
     DEFINE_PROP_BOOL("report-discard-granularity", VirtIOBlock,
-- 
2.41.0
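The rules enforced by validate_iothread_vq_mapping_list() in this patch
can be summarized with a short Python sketch (illustrative only: error
strings are simplified and the check that each named IOThread object
actually exists is omitted, since that requires QEMU's object model):

```python
def validate_mapping(mapping, num_queues):
    """Return an error string, or None if the mapping list is valid.

    mapping is a list of dicts: {"iothread": str, "vqs": [int, ...]?}.
    """
    seen_threads = set()
    seen_vqs = set()
    explicit = "vqs" in mapping[0]  # all entries must match the first

    for entry in mapping:
        name = entry["iothread"]
        if name in seen_threads:
            return f"duplicate IOThread {name}"
        seen_threads.add(name)

        if ("vqs" in entry) != explicit:
            return "either all items must have vqs or none of them"

        for vq in entry.get("vqs", []):
            if vq >= num_queues:
                return f"vq index {vq} must be less than {num_queues}"
            if vq in seen_vqs:
                return f"vq {vq} is already assigned"
            seen_vqs.add(vq)

    # With explicit assignment, every vq must be covered exactly once
    if explicit and seen_vqs != set(range(num_queues)):
        return "missing vq assignment"
    return None
```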




* Re: [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter
  2023-09-18 16:16 [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
  2023-09-18 16:16 ` [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type Stefan Hajnoczi
  2023-09-18 16:16 ` [PATCH v2 2/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
@ 2023-10-03 13:12 ` Michael S. Tsirkin
  2023-11-02 14:10 ` Kevin Wolf
  3 siblings, 0 replies; 17+ messages in thread
From: Michael S. Tsirkin @ 2023-10-03 13:12 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Kevin Wolf, Hanna Reitz

On Mon, Sep 18, 2023 at 12:16:02PM -0400, Stefan Hajnoczi wrote:
> virtio-blk and virtio-scsi devices need a way to specify the mapping between
> IOThreads and virtqueues. At the moment all virtqueues are assigned to a single
> IOThread or the main loop. This single thread can be a CPU bottleneck, so it is
> necessary to allow finer-grained assignment to spread the load. With this
> series applied, "pidstat -t 1" shows that guests with -smp 2 or higher are able
> to exploit multiple IOThreads.
> 
> The command-line syntax for the new iothread-vq-mapping
> property introduced by this series is as follows:
> 
>   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'
> 
> IOThreads are specified by name and virtqueues are specified by 0-based
> index.
> 
> It will be common to simply assign virtqueues round-robin across a set
> of IOThreads. A convenient syntax that does not require specifying
> individual virtqueue indices is available:
> 
>   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
> 
> There is no way to reassign virtqueues at runtime and I expect that to be a
> very rare requirement.
> 
> Note that JSON --device syntax is required for the iothread-vq-mapping
> parameter because it's non-scalar.
> 
> Based-on: 20230912231037.826804-1-stefanha@redhat.com ("[PATCH v3 0/5] block-backend: process I/O in the current AioContext")


Acked-by: Michael S. Tsirkin <mst@redhat.com>

More of a block thingy so pls use that tree.

> Stefan Hajnoczi (2):
>   qdev: add IOThreadVirtQueueMappingList property type
>   virtio-blk: add iothread-vq-mapping parameter
> 
>  qapi/virtio.json                    |  30 +++++
>  hw/block/dataplane/virtio-blk.h     |   3 +
>  include/hw/qdev-properties-system.h |   4 +
>  include/hw/virtio/virtio-blk.h      |   2 +
>  hw/block/dataplane/virtio-blk.c     | 163 +++++++++++++++++++++-------
>  hw/block/virtio-blk.c               |  92 ++++++++++++++--
>  hw/core/qdev-properties-system.c    |  47 ++++++++
>  7 files changed, 287 insertions(+), 54 deletions(-)
> 
> -- 
> 2.41.0




* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-09-18 16:16 ` [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type Stefan Hajnoczi
@ 2023-10-14  7:35   ` Markus Armbruster
  2023-12-19 15:13     ` Stefan Hajnoczi
  2023-12-11 13:56   ` Kevin Wolf
  1 sibling, 1 reply; 17+ messages in thread
From: Markus Armbruster @ 2023-10-14  7:35 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Michal Privoznik, Paolo Bonzini, Eduardo Habkost,
	qemu-block, Daniel P. Berrangé,
	Eric Blake, Kevin Wolf, Michael S. Tsirkin, Hanna Reitz

Stefan Hajnoczi <stefanha@redhat.com> writes:

> virtio-blk and virtio-scsi devices will need a way to specify the
> mapping between IOThreads and virtqueues. At the moment all virtqueues
> are assigned to a single IOThread or the main loop. This single thread
> can be a CPU bottleneck, so it is necessary to allow finer-grained
> assignment to spread the load.
>
> Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
> parameter that maps virtqueues to IOThreads. The command-line syntax for
> this new property is as follows:
>
>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'
>
> IOThreads are specified by name and virtqueues are specified by 0-based
> index.
>
> It will be common to simply assign virtqueues round-robin across a set
> of IOThreads. A convenient syntax that does not require specifying
> individual virtqueue indices is available:
>
>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  qapi/virtio.json                    | 30 ++++++++++++++++++
>  include/hw/qdev-properties-system.h |  4 +++
>  hw/core/qdev-properties-system.c    | 47 +++++++++++++++++++++++++++++
>  3 files changed, 81 insertions(+)
>
> diff --git a/qapi/virtio.json b/qapi/virtio.json
> index e6dcee7b83..cb341ae596 100644
> --- a/qapi/virtio.json
> +++ b/qapi/virtio.json
> @@ -928,3 +928,33 @@
>    'data': { 'path': 'str', 'queue': 'uint16', '*index': 'uint16' },
>    'returns': 'VirtioQueueElement',
>    'features': [ 'unstable' ] }
> +
> +##
> +# @IOThreadVirtQueueMapping:
> +#
> +# Describes the subset of virtqueues assigned to an IOThread.
> +#
> +# @iothread: the id of IOThread object
> +# @vqs: an optional array of virtqueue indices that will be handled by this
> +#       IOThread. When absent, virtqueues are assigned round-robin across all
> +#       IOThreadVirtQueueMappings provided. Either all
> +#       IOThreadVirtQueueMappings must have @vqs or none of them must have it.
> +#
> +# Since: 8.2
> +#
> +##

Please format like

   ##
   # @IOThreadVirtQueueMapping:
   #
   # Describes the subset of virtqueues assigned to an IOThread.
   #
   # @iothread: the id of IOThread object
   #
   # @vqs: an optional array of virtqueue indices that will be handled by
   #     this IOThread.  When absent, virtqueues are assigned round-robin
   #     across all IOThreadVirtQueueMappings provided.  Either all
   #     IOThreadVirtQueueMappings must have @vqs or none of them must
   #     have it.
   #
   # Since: 8.2
   ##

to blend in with recent commit a937b6aa739 (qapi: Reformat doc comments
to conform to current conventions).

> +
> +{ 'struct': 'IOThreadVirtQueueMapping',
> +  'data': { 'iothread': 'str', '*vqs': ['uint16'] } }
> +
> +##
> +# @IOThreadVirtQueueMappings:
> +#
> +# IOThreadVirtQueueMapping list. This struct is not actually used but the
> +# IOThreadVirtQueueMappingList type it generates is!

Two spaces between sentences for consistency, please.

Doc comments are QMP reference documentation for users.  Does this
paragraph belong there?

> +#
> +# Since: 8.2
> +##
> +
> +{ 'struct': 'IOThreadVirtQueueMappings',
> +  'data': { 'mappings': ['IOThreadVirtQueueMapping'] } }
> diff --git a/include/hw/qdev-properties-system.h b/include/hw/qdev-properties-system.h
> index 0ac327ae60..c526e502c8 100644
> --- a/include/hw/qdev-properties-system.h
> +++ b/include/hw/qdev-properties-system.h
> @@ -22,6 +22,7 @@ extern const PropertyInfo qdev_prop_audiodev;
>  extern const PropertyInfo qdev_prop_off_auto_pcibar;
>  extern const PropertyInfo qdev_prop_pcie_link_speed;
>  extern const PropertyInfo qdev_prop_pcie_link_width;
> +extern const PropertyInfo qdev_prop_iothread_vq_mapping_list;
>  
>  #define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
>      DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
> @@ -73,5 +74,8 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
>  #define DEFINE_PROP_UUID_NODEFAULT(_name, _state, _field) \
>      DEFINE_PROP(_name, _state, _field, qdev_prop_uuid, QemuUUID)
>  
> +#define DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST(_name, _state, _field) \
> +    DEFINE_PROP(_name, _state, _field, qdev_prop_iothread_vq_mapping_list, \
> +                IOThreadVirtQueueMappingList *)
>  
>  #endif
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index 6d5d43eda2..831796e106 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -18,6 +18,7 @@
>  #include "qapi/qapi-types-block.h"
>  #include "qapi/qapi-types-machine.h"
>  #include "qapi/qapi-types-migration.h"
> +#include "qapi/qapi-visit-virtio.h"
>  #include "qapi/qmp/qerror.h"
>  #include "qemu/ctype.h"
>  #include "qemu/cutils.h"
> @@ -1147,3 +1148,49 @@ const PropertyInfo qdev_prop_uuid = {
>      .set   = set_uuid,
>      .set_default_value = set_default_uuid_auto,
>  };
> +
> +/* --- IOThreadVirtQueueMappingList --- */
> +
> +static void get_iothread_vq_mapping_list(Object *obj, Visitor *v,
> +        const char *name, void *opaque, Error **errp)
> +{
> +    IOThreadVirtQueueMappingList **prop_ptr =
> +        object_field_prop_ptr(obj, opaque);
> +    IOThreadVirtQueueMappingList *list = *prop_ptr;
> +
> +    visit_type_IOThreadVirtQueueMappingList(v, name, &list, errp);

@list is a copy of the property value.  Why do you need to pass a copy
to visit_type_IOThreadVirtQueueMappingList()?

> +}
> +
> +static void set_iothread_vq_mapping_list(Object *obj, Visitor *v,
> +        const char *name, void *opaque, Error **errp)
> +{
> +    IOThreadVirtQueueMappingList **prop_ptr =
> +        object_field_prop_ptr(obj, opaque);
> +    IOThreadVirtQueueMappingList *list;
> +
> +    if (!visit_type_IOThreadVirtQueueMappingList(v, name, &list, errp)) {
> +        return;
> +    }
> +
> +    qapi_free_IOThreadVirtQueueMappingList(*prop_ptr);
> +    *prop_ptr = list;
> +}
> +
> +static void release_iothread_vq_mapping_list(Object *obj,
> +        const char *name, void *opaque)
> +{
> +    IOThreadVirtQueueMappingList **prop_ptr =
> +        object_field_prop_ptr(obj, opaque);
> +
> +    qapi_free_IOThreadVirtQueueMappingList(*prop_ptr);
> +    *prop_ptr = NULL;
> +}
> +
> +const PropertyInfo qdev_prop_iothread_vq_mapping_list = {
> +    .name = "IOThreadVirtQueueMappingList",
> +    .description = "IOThread virtqueue mapping list [{\"iothread\":\"<id>\", "
> +                   "\"vqs\":[1,2,3,...]},...]",
> +    .get = get_iothread_vq_mapping_list,
> +    .set = set_iothread_vq_mapping_list,
> +    .release = release_iothread_vq_mapping_list,
> +};



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter
  2023-09-18 16:16 [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2023-10-03 13:12 ` [PATCH v2 0/2] " Michael S. Tsirkin
@ 2023-11-02 14:10 ` Kevin Wolf
  2023-11-07  3:00   ` Stefan Hajnoczi
  2023-11-07  3:29   ` Stefan Hajnoczi
  3 siblings, 2 replies; 17+ messages in thread
From: Kevin Wolf @ 2023-11-02 14:10 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz

Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> virtio-blk and virtio-scsi devices need a way to specify the mapping between
> IOThreads and virtqueues. At the moment all virtqueues are assigned to a single
> IOThread or the main loop. This single thread can be a CPU bottleneck, so it is
> necessary to allow finer-grained assignment to spread the load. With this
> series applied, "pidstat -t 1" shows that guests with -smp 2 or higher are able
> to exploit multiple IOThreads.
> 
> This series introduces command-line syntax for the new iothread-vq-mapping
> property. The syntax is as follows:
> 
>   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'
> 
> IOThreads are specified by name and virtqueues are specified by 0-based
> index.
> 
> It will be common to simply assign virtqueues round-robin across a set
> of IOThreads. A convenient syntax that does not require specifying
> individual virtqueue indices is available:
> 
>   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
> 
> There is no way to reassign virtqueues at runtime and I expect that to be a
> very rare requirement.
> 
> Note that JSON --device syntax is required for the iothread-vq-mapping
> parameter because it's non-scalar.
> 
> Based-on: 20230912231037.826804-1-stefanha@redhat.com ("[PATCH v3 0/5] block-backend: process I/O in the current AioContext")

Does this strictly depend on patch 5/5 of that series, or would it just
be a missed opportunity for optimisation by unnecessarily running some
requests from a different thread?

I suspect it does depend on the other virtio-blk series, though:

[PATCH 0/4] virtio-blk: prepare for the multi-queue block layer
https://patchew.org/QEMU/20230914140101.1065008-1-stefanha@redhat.com/

Is this right?

Given that soft freeze is early next week, maybe we should try to merge
just the bare minimum of strictly necessary dependencies.

Kevin




* Re: [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter
  2023-11-02 14:10 ` Kevin Wolf
@ 2023-11-07  3:00   ` Stefan Hajnoczi
  2023-11-07 10:20     ` Kevin Wolf
  2023-11-07  3:29   ` Stefan Hajnoczi
  1 sibling, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2023-11-07  3:00 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-devel, Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz


On Thu, Nov 02, 2023 at 03:10:52PM +0100, Kevin Wolf wrote:
> Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> > virtio-blk and virtio-scsi devices need a way to specify the mapping between
> > IOThreads and virtqueues. At the moment all virtqueues are assigned to a single
> > IOThread or the main loop. This single thread can be a CPU bottleneck, so it is
> > necessary to allow finer-grained assignment to spread the load. With this
> > series applied, "pidstat -t 1" shows that guests with -smp 2 or higher are able
> > to exploit multiple IOThreads.
> > 
> > This series introduces command-line syntax for the new iothread-vq-mapping
> > property. The syntax is as follows:
> > 
> >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'
> > 
> > IOThreads are specified by name and virtqueues are specified by 0-based
> > index.
> > 
> > It will be common to simply assign virtqueues round-robin across a set
> > of IOThreads. A convenient syntax that does not require specifying
> > individual virtqueue indices is available:
> > 
> >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
> > 
> > There is no way to reassign virtqueues at runtime and I expect that to be a
> > very rare requirement.
> > 
> > Note that JSON --device syntax is required for the iothread-vq-mapping
> > parameter because it's non-scalar.
> > 
> > Based-on: 20230912231037.826804-1-stefanha@redhat.com ("[PATCH v3 0/5] block-backend: process I/O in the current AioContext")
> 
> Does this strictly depend on patch 5/5 of that series, or would it just
> be a missed opportunity for optimisation by unnecessarily running some
> requests from a different thread?

"[PATCH v3 5/5] block-coroutine-wrapper: use
qemu_get_current_aio_context()" is necessary so that
virtio_blk_sect_range_ok -> blk_get_geometry -> blk_nb_sectors ->
bdrv_refresh_total_sectors -> bdrv_poll_co can be called without holding
the AioContext lock.

That case only happens when the BlockDriverState is a file-posix host
CD-ROM or a file-win32 host_device. Most users will never hit this
problem, but it would be unsafe to proceed merging code without this
patch.

> I suspect it does depend on the other virtio-blk series, though:
> 
> [PATCH 0/4] virtio-blk: prepare for the multi-queue block layer
> https://patchew.org/QEMU/20230914140101.1065008-1-stefanha@redhat.com/
> 
> Is this right?

Yes, it depends on "[PATCH 0/4] virtio-blk: prepare for the multi-queue
block layer" so that every AioContext is able to handle Linux AIO or
io_uring I/O and to stop using the AioContext lock in the virtio-blk I/O
code path.

Stefan

> 
> Given that soft freeze is early next week, maybe we should try to merge
> just the bare minimum of strictly necessary dependencies.
> 
> Kevin
> 



* Re: [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter
  2023-11-02 14:10 ` Kevin Wolf
  2023-11-07  3:00   ` Stefan Hajnoczi
@ 2023-11-07  3:29   ` Stefan Hajnoczi
  1 sibling, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2023-11-07  3:29 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-devel, Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz


On Thu, Nov 02, 2023 at 03:10:52PM +0100, Kevin Wolf wrote:
> Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> > virtio-blk and virtio-scsi devices need a way to specify the mapping between
> > IOThreads and virtqueues. At the moment all virtqueues are assigned to a single
> > IOThread or the main loop. This single thread can be a CPU bottleneck, so it is
> > necessary to allow finer-grained assignment to spread the load. With this
> > series applied, "pidstat -t 1" shows that guests with -smp 2 or higher are able
> > to exploit multiple IOThreads.
> > 
> > This series introduces command-line syntax for the new iothread-vq-mapping
> > property. The syntax is as follows:
> > 
> >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'
> > 
> > IOThreads are specified by name and virtqueues are specified by 0-based
> > index.
> > 
> > It will be common to simply assign virtqueues round-robin across a set
> > of IOThreads. A convenient syntax that does not require specifying
> > individual virtqueue indices is available:
> > 
> >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
> > 
> > There is no way to reassign virtqueues at runtime and I expect that to be a
> > very rare requirement.
> > 
> > Note that JSON --device syntax is required for the iothread-vq-mapping
> > parameter because it's non-scalar.
> > 
> > Based-on: 20230912231037.826804-1-stefanha@redhat.com ("[PATCH v3 0/5] block-backend: process I/O in the current AioContext")
> 
> Does this strictly depend on patch 5/5 of that series, or would it just
> be a missed opportunity for optimisation by unnecessarily running some
> requests from a different thread?

I looked at the issue with PATCH 5/5 more and didn't find a minimal
solution that I can implement today for soft freeze. There is too much
inconsistency among blk_remove_bs() callers in whether or not the
AioContext is acquired:

block/block-backend.c:        blk_remove_bs(blk); <- blk_unref (can't tell if AioContext is acquired)
block/block-backend.c:            blk_remove_bs(blk); (acquired)
block/monitor/block-hmp-cmds.c:        blk_remove_bs(blk); (acquired)
block/qapi-sysemu.c:    blk_remove_bs(blk); (acquired)
block/qapi-sysemu.c:            blk_remove_bs(blk); (not acquired)
qemu-nbd.c:        blk_remove_bs(blk); (not acquired)
tests/unit/test-block-iothread.c:    blk_remove_bs(blk); (acquired)
tests/unit/test-blockjob.c:    blk_remove_bs(blk); (sometimes acquired, sometimes not)

They usually get away with it because BDRV_WAIT_WHILE() only unlocks the
AioContext when the BlockDriverState's AioContext is not the current
thread's home context. This means main loop code works when the
AioContext is not acquired as long as the BDS AioContext is the main
loop AioContext.

The solution I have confidence in is to stop using the AioContext lock,
but it will take more time to refactor the SCSI layer (the last real
user of the AioContext lock).

I'm afraid iothread-vq-mapping can't make it into QEMU 8.2.

Stefan



* Re: [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter
  2023-11-07  3:00   ` Stefan Hajnoczi
@ 2023-11-07 10:20     ` Kevin Wolf
  0 siblings, 0 replies; 17+ messages in thread
From: Kevin Wolf @ 2023-11-07 10:20 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz


Am 07.11.2023 um 04:00 hat Stefan Hajnoczi geschrieben:
> On Thu, Nov 02, 2023 at 03:10:52PM +0100, Kevin Wolf wrote:
> > Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> > > virtio-blk and virtio-scsi devices need a way to specify the mapping between
> > > IOThreads and virtqueues. At the moment all virtqueues are assigned to a single
> > > IOThread or the main loop. This single thread can be a CPU bottleneck, so it is
> > > necessary to allow finer-grained assignment to spread the load. With this
> > > series applied, "pidstat -t 1" shows that guests with -smp 2 or higher are able
> > > to exploit multiple IOThreads.
> > > 
> > > This series introduces command-line syntax for the new iothread-vq-mapping
> > > property. The syntax is as follows:
> > > 
> > >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'
> > > 
> > > IOThreads are specified by name and virtqueues are specified by 0-based
> > > index.
> > > 
> > > It will be common to simply assign virtqueues round-robin across a set
> > > of IOThreads. A convenient syntax that does not require specifying
> > > individual virtqueue indices is available:
> > > 
> > >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
> > > 
> > > There is no way to reassign virtqueues at runtime and I expect that to be a
> > > very rare requirement.
> > > 
> > > Note that JSON --device syntax is required for the iothread-vq-mapping
> > > parameter because it's non-scalar.
> > > 
> > > Based-on: 20230912231037.826804-1-stefanha@redhat.com ("[PATCH v3 0/5] block-backend: process I/O in the current AioContext")
> > 
> > Does this strictly depend on patch 5/5 of that series, or would it just
> > be a missed opportunity for optimisation by unnecessarily running some
> > requests from a different thread?
> 
> "[PATCH v3 5/5] block-coroutine-wrapper: use
> qemu_get_current_aio_context()" is necessary so that
> virtio_blk_sect_range_ok -> blk_get_geometry -> blk_nb_sectors ->
> bdrv_refresh_total_sectors -> bdrv_poll_co can be called without holding
> the AioContext lock.

Ooh, so we only have the whole problem because bdrv_poll_co() wants to
temporarily unlock an AioContext that we don't even hold? That's a real
shame, but I understand now why we need the patch.

> That case only happens when the BlockDriverState is a file-posix host
> CD-ROM or a file-win32 host_device. Most users will never hit this
> problem, but it would be unsafe to proceed merging code without this
> patch.

Yes, I agree.

Kevin



* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-09-18 16:16 ` [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type Stefan Hajnoczi
  2023-10-14  7:35   ` Markus Armbruster
@ 2023-12-11 13:56   ` Kevin Wolf
  2023-12-11 15:32     ` Markus Armbruster
  1 sibling, 1 reply; 17+ messages in thread
From: Kevin Wolf @ 2023-12-11 13:56 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Markus Armbruster, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz, aliang, qinwang

Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> virtio-blk and virtio-scsi devices will need a way to specify the
> mapping between IOThreads and virtqueues. At the moment all virtqueues
> are assigned to a single IOThread or the main loop. This single thread
> can be a CPU bottleneck, so it is necessary to allow finer-grained
> assignment to spread the load.
> 
> Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
> parameter that maps virtqueues to IOThreads. The command-line syntax for
> this new property is as follows:
> 
>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'
> 
> IOThreads are specified by name and virtqueues are specified by 0-based
> index.
> 
> It will be common to simply assign virtqueues round-robin across a set
> of IOThreads. A convenient syntax that does not require specifying
> individual virtqueue indices is available:
> 
>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

When testing this, Qing Wang noticed that "info qtree" crashes. This is
because the string output visitor doesn't support structs. I suppose
IOThreadVirtQueueMapping is the first struct type that is used in a qdev
property type.

So we'll probably have to add some kind of struct support to the string
output visitor before we can apply this. Even if it's as stupid as just
printing "<struct IOThreadVirtQueueMapping>" without actually displaying
the value.

Kevin




* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-12-11 13:56   ` Kevin Wolf
@ 2023-12-11 15:32     ` Markus Armbruster
  2023-12-11 21:15       ` Stefan Hajnoczi
  2023-12-12  9:59       ` Kevin Wolf
  0 siblings, 2 replies; 17+ messages in thread
From: Markus Armbruster @ 2023-12-11 15:32 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Stefan Hajnoczi, qemu-devel, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz, aliang, qinwang

Kevin Wolf <kwolf@redhat.com> writes:

> Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
>> virtio-blk and virtio-scsi devices will need a way to specify the
>> mapping between IOThreads and virtqueues. At the moment all virtqueues
>> are assigned to a single IOThread or the main loop. This single thread
>> can be a CPU bottleneck, so it is necessary to allow finer-grained
>> assignment to spread the load.
>> 
>> Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
>> parameter that maps virtqueues to IOThreads. The command-line syntax for
>> this new property is as follows:
>> 
>>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'
>> 
>> IOThreads are specified by name and virtqueues are specified by 0-based
>> index.
>> 
>> It will be common to simply assign virtqueues round-robin across a set
>> of IOThreads. A convenient syntax that does not require specifying
>> individual virtqueue indices is available:
>> 
>>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'
>> 
>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> When testing this, Qing Wang noticed that "info qtree" crashes. This is
> because the string output visitor doesn't support structs. I suppose
> IOThreadVirtQueueMapping is the first struct type that is used in a qdev
> property type.
>
> So we'll probably have to add some kind of struct support to the string
> output visitor before we can apply this. Even if it's as stupid as just
> printing "<struct IOThreadVirtQueueMapping>" without actually displaying
> the value.

The string visitors have been nothing but trouble.

For input, we can now use keyval_parse() and the QObject input visitor
instead.  Comes with restrictions, but I'd argue it's a more solid base
than the string input visitor.

Perhaps we can do something similar for output: create a suitable
formatter for use with the QObject output visitor, replacing the
string output visitor.
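As a rough model of the keyval approach (toy Python, not QEMU's actual keyval_parse() implementation): dotted keys become nested structure that a visitor can then walk:

```python
def keyval_to_dict(s):
    """Toy model of keyval parsing: dotted keys become nested dicts.

    "a.0.b=x,a.1.b=y" -> {"a": {"0": {"b": "x"}, "1": {"b": "y"}}}
    List elements stay keyed by index strings here; a later visitor
    pass would turn them into real lists and typed values.
    """
    root = {}
    for pair in s.split(","):
        key, _, value = pair.partition("=")
        parts = key.split(".")
        node = root
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return root
```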




* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-12-11 15:32     ` Markus Armbruster
@ 2023-12-11 21:15       ` Stefan Hajnoczi
  2023-12-19  7:43         ` Markus Armbruster
  2023-12-12  9:59       ` Kevin Wolf
  1 sibling, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2023-12-11 21:15 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Kevin Wolf, qemu-devel, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz, aliang, qinwang


On Mon, Dec 11, 2023 at 04:32:06PM +0100, Markus Armbruster wrote:
> Kevin Wolf <kwolf@redhat.com> writes:
> 
> > Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> >> virtio-blk and virtio-scsi devices will need a way to specify the
> >> mapping between IOThreads and virtqueues. At the moment all virtqueues
> >> are assigned to a single IOThread or the main loop. This single thread
> >> can be a CPU bottleneck, so it is necessary to allow finer-grained
> >> assignment to spread the load.
> >> 
> >> Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
> >> parameter that maps virtqueues to IOThreads. The command-line syntax for
> >> this new property is as follows:
> >> 
> >>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'
> >> 
> >> IOThreads are specified by name and virtqueues are specified by 0-based
> >> index.
> >> 
> >> It will be common to simply assign virtqueues round-robin across a set
> >> of IOThreads. A convenient syntax that does not require specifying
> >> individual virtqueue indices is available:
> >> 
> >>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'
> >> 
> >> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> >
> > When testing this, Qing Wang noticed that "info qtree" crashes. This is
> > because the string output visitor doesn't support structs. I suppose
> > IOThreadVirtQueueMapping is the first struct type that is used in a qdev
> > property type.
> >
> > So we'll probably have to add some kind of struct support to the string
> > output visitor before we can apply this. Even if it's as stupid as just
> > printing "<struct IOThreadVirtQueueMapping>" without actually displaying
> > the value.
> 
> The string visitors have been nothing but trouble.
> 
> For input, we can now use keyval_parse() and the QObject input visitor
> instead.  Comes with restrictions, but I'd argue it's a more solid base
> than the string input visitor.
> 
> Perhaps we can do something similar for output: create a suitable
> formatter for use with the QObject output visitor, replacing the
> string output visitor.

I sent an initial patch that just shows "<omitted>" but would like to
work on a proper solution with your input.

From what I've seen StringOutputVisitor is used in several places in
QEMU. "info qtree" calls it through object_property_print() to print
individual qdev properties. I don't understand the requirements of the
other callers, but object_property_print() wants to return a single
string without newlines.

QObjectOutputVisitor produces a QObject. That could be formatted as
JSON using qobject_to_json_pretty()?

The pretty JSON would contain newlines and existing callers such as
"info qtree" may need to scan the output and insert indentation.

The goal would be to keep the output unchanged as far as possible and to
emit JSON for structs and lists.
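
A toy model of that rule (plain Python, not the QEMU API; the function name is invented, and it uses compact rather than pretty JSON as one way to sidestep the newline issue):

```python
import json

def format_prop(value):
    """Format one property value for a qtree-style listing.

    Scalars keep their existing plain form, so current output does
    not change; structs and lists become compact single-line JSON.
    """
    if isinstance(value, (dict, list)):
        return json.dumps(value, separators=(",", ":"))
    return str(value)
```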

What do you think about this approach?

Stefan



* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-12-11 15:32     ` Markus Armbruster
  2023-12-11 21:15       ` Stefan Hajnoczi
@ 2023-12-12  9:59       ` Kevin Wolf
  1 sibling, 0 replies; 17+ messages in thread
From: Kevin Wolf @ 2023-12-12  9:59 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Stefan Hajnoczi, qemu-devel, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz, aliang, qinwang

Am 11.12.2023 um 16:32 hat Markus Armbruster geschrieben:
> Kevin Wolf <kwolf@redhat.com> writes:
> 
> > Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> >> virtio-blk and virtio-scsi devices will need a way to specify the
> >> mapping between IOThreads and virtqueues. At the moment all virtqueues
> >> are assigned to a single IOThread or the main loop. This single thread
> >> can be a CPU bottleneck, so it is necessary to allow finer-grained
> >> assignment to spread the load.
> >> 
> >> Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
> >> parameter that maps virtqueues to IOThreads. The command-line syntax for
> >> this new property is as follows:
> >> 
> >>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'
> >> 
> >> IOThreads are specified by name and virtqueues are specified by 0-based
> >> index.
> >> 
> >> It will be common to simply assign virtqueues round-robin across a set
> >> of IOThreads. A convenient syntax that does not require specifying
> >> individual virtqueue indices is available:
> >> 
> >>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'
> >> 
> >> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> >
> > When testing this, Qing Wang noticed that "info qtree" crashes. This is
> > because the string output visitor doesn't support structs. I suppose
> > IOThreadVirtQueueMapping is the first struct type that is used in a qdev
> > property type.
> >
> > So we'll probably have to add some kind of struct support to the string
> > output visitor before we can apply this. Even if it's as stupid as just
> > printing "<struct IOThreadVirtQueueMapping>" without actually displaying
> > the value.
> 
> The string visitors have been nothing but trouble.
> 
> For input, we can now use keyval_parse() and the QObject input visitor
> instead.  Comes with restrictions, but I'd argue it's a more solid base
> than the string input visitor.
> 
> Perhaps we can do something similar for output: create a suitable
> formatter for use with the QObject output visitor, replacing the
> string output visitor.

dump_qobject() from block/qapi.c?

But I wouldn't like to block this series for a major rework of "info
qtree", so I think we need to make sure that the string output visitor
doesn't crash at least.

Kevin




* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-12-11 21:15       ` Stefan Hajnoczi
@ 2023-12-19  7:43         ` Markus Armbruster
  0 siblings, 0 replies; 17+ messages in thread
From: Markus Armbruster @ 2023-12-19  7:43 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, qemu-devel, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz, aliang, qinwang

Stefan Hajnoczi <stefanha@redhat.com> writes:

> On Mon, Dec 11, 2023 at 04:32:06PM +0100, Markus Armbruster wrote:
>> Kevin Wolf <kwolf@redhat.com> writes:
>> 
>> > Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
>> >> virtio-blk and virtio-scsi devices will need a way to specify the
>> >> mapping between IOThreads and virtqueues. At the moment all virtqueues
>> >> are assigned to a single IOThread or the main loop. This single thread
>> >> can be a CPU bottleneck, so it is necessary to allow finer-grained
>> >> assignment to spread the load.
>> >> 
>> >> Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
>> >> parameter that maps virtqueues to IOThreads. The command-line syntax for
>> >> this new property is as follows:
>> >> 
>> >>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'
>> >> 
>> >> IOThreads are specified by name and virtqueues are specified by 0-based
>> >> index.
>> >> 
>> >> It will be common to simply assign virtqueues round-robin across a set
>> >> of IOThreads. A convenient syntax that does not require specifying
>> >> individual virtqueue indices is available:
>> >> 
>> >>   --device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'
>> >> 
>> >> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> >
>> > When testing this, Qing Wang noticed that "info qtree" crashes. This is
>> > because the string output visitor doesn't support structs. I suppose
>> > IOThreadVirtQueueMapping is the first struct type that is used in a qdev
>> > property type.
>> >
>> > So we'll probably have to add some kind of struct support to the string
>> > output visitor before we can apply this. Even if it's as stupid as just
>> > printing "<struct IOThreadVirtQueueMapping>" without actually displaying
>> > the value.
>> 
>> The string visitors have been nothing but trouble.
>> 
>> For input, we can now use keyval_parse() and the QObject input visitor
>> instead.  Comes with restrictions, but I'd argue it's a more solid base
>> than the string input visitor.
>> 
>> Perhaps we can do something similar for output: create a suitable
>> formatter for use with the QObject output visitor, replacing the
>> string output visitor.
>
> I sent an initial patch that just shows "<omitted>" but would like to
> work on a proper solution with your input.
>
> From what I've seen StringOutputVisitor is used in several places in
> QEMU. "info qtree" calls it through object_property_print() to print
> individual qdev properties. I don't understand the requirements of the
> other callers, but object_property_print() wants to return a single
> string without newlines.

string_output_visitor_new():

* hmp_info_migrate(): format a list of integers, then print it like

        monitor_printf(mon, "postcopy vcpu blocktime: %s\n", str);

  One element per vCPU; can produce a long line.

* netfilter_print_info(): format the property values of a
  NetFilterState, then print each like

        monitor_printf(mon, ",%s=%s", prop->name, str);

* object_property_print(): format a property value of an object, return
  the string.

  Function is misnamed.  object_property_format() or
  object_property_to_string() would be better.

  Just one caller: qdev_print_props(), helper for hmp_info_qtree().
  Prints the string like

        qdev_printf("%s = %s\n", props->name,
                    *value ? value : "<null>");

  where qdev_printf() is a macro wrapping monitor_printf().

  This one passes human=true, unlike the others.  More on that below.

* hmp_info_memdev(): format a list of integers, then print it like

        monitor_printf(mon, "  policy: %s\n",
                       HostMemPolicy_str(m->value->policy));

  One element per "host node", whatever that may be; might produce a
  long line.

* Tests; not relevant here.

hmp_info_migrate() and hmp_info_memdev() use the visitor as a (somewhat
cumbersome) helper for printing uint32List and uint16List, respectively.
Could do without.

The other two display all properties in HMP.  Both kind of assume the
string visitor produces no newlines.  I think we could instead use the
QObject output visitor, then format the QObject in human-readable form.
Might be less efficient, because we create a temporary QObject.  Perhaps
factor out a single helper first.

string_input_visitor_new(), for good measure:

* hmp_migrate_set_parameter(): parse a uint8_t, uint32, size_t, bool,
  str, or QAPI enum from a string.

* object_property_parse(): parse a property value from a string, and
  assign it to the property.

  Calling this object_property_set_from_string() would be better.

  Callers:

  - object_apply_global_props(): applying compatibility properties
    (defined in C) and defaults set with -global (given by user).

  - object_set_propv(): helper for convenience functions to set multiple
    properties in C.

  - hmp_qom_set(): set the property value when not JSON (-j is off).

  - object_parse_property_opt(), for accelerator_set_property(), which
    processes the argument of -accel.

  - x86_cpu_apply_props() and x86_cpu_apply_version_props(): apply
    properties defined in C.

  The property settings defined in C could just as well use QObject
  input visitor instead.  Might be less efficient, because we create a
  temporary QObject.

  The property settings that come from the user are ABI.

  Reminder: supports only scalars and lists of small integers.
  Supporting structured values containing strings would involve creating
  syntax that's basically reinventing JSON.

  I think qom-set demonstrates how to support structured values: just
  use JSON.

> QObjectOutputVisitor produces a QObject. That could be formatted as
> JSON using qobject_to_json_pretty()?
>
> The pretty JSON would contain newlines and existing callers such as
> "info qtree" may need to scan the output and insert indentation.

With pretty=false, the JSON formatter produces a string without
newlines.  With pretty=true, it adds newlines and indentation.

Instead of scanning the formatted string to change its indentation, we
could perhaps pass indentation instructions to the JSON formatter.
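Sketched with Python's json module standing in for our JSON formatter (the base-indent helper is the hypothetical part):

```python
import json

value = {"iothread": "iothread0", "vqs": [0, 1, 2]}

compact = json.dumps(value, separators=(",", ":"))  # pretty=false analogue
pretty = json.dumps(value, indent=4)                # pretty=true analogue

# "Indentation instructions": shift the whole pretty dump to a base
# indentation up front, instead of re-scanning the formatted string
# afterwards to fix it up.
def dumps_indented(obj, base):
    pad = " " * base
    return "\n".join(pad + line
                     for line in json.dumps(obj, indent=4).splitlines())

print(dumps_indented(value, 8))
```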

> The goal would be to keep the output unchanged as far as possible and to
> emit JSON for structs and lists.

That's basically JSON, except we print 'str' values without quotes and
escapes (resulting in ambiguity when they contain {[]}, and even more
fun when they contain control characters), plus a few more largely
gratuitous differences.

With human=true (advertised as "a bit easier for humans to read"), we
get strings with quotes (still no escapes), and a tweaked set of
gratuitous differences.

> What do you think about this approach?

Format as JSON and call it a day?

I figure the only thing of value we'd lose is displaying integers both
decimal and hex.  In some, but not all places.
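(What the dual display amounts to, sketched generically:)

```python
# Format an integer both decimal and hex, the way the human-friendly
# output does in some places (generic sketch, not QEMU's actual code).
def human_int(val):
    return "%d (%#x)" % (val, val)

print(human_int(1048576))
```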



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-10-14  7:35   ` Markus Armbruster
@ 2023-12-19 15:13     ` Stefan Hajnoczi
  2023-12-19 16:08       ` Kevin Wolf
  0 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2023-12-19 15:13 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Michal Privoznik, Paolo Bonzini, Eduardo Habkost,
	qemu-block, Daniel P. Berrangé,
	Eric Blake, Kevin Wolf, Michael S. Tsirkin, Hanna Reitz


On Sat, Oct 14, 2023 at 09:35:14AM +0200, Markus Armbruster wrote:
> Stefan Hajnoczi <stefanha@redhat.com> writes:
> > +##
> > +# @IOThreadVirtQueueMappings:
> > +#
> > +# IOThreadVirtQueueMapping list. This struct is not actually used but the
> > +# IOThreadVirtQueueMappingList type it generates is!
> 
> Two spaces between sentences for consistency, please.
> 
> Doc comments are QMP reference documentation for users.  Does this
> paragraph belong there?

Someone might wonder why a type exists that is never used, so I think
including this in the documentation is acceptable.

My comment is mostly intended to stop someone from removing this
"unused" type from the schema though. If there is a better way to do
that I can do that instead.

Stefan



* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-12-19 15:13     ` Stefan Hajnoczi
@ 2023-12-19 16:08       ` Kevin Wolf
  2023-12-20  7:39         ` Markus Armbruster
  0 siblings, 1 reply; 17+ messages in thread
From: Kevin Wolf @ 2023-12-19 16:08 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Markus Armbruster, qemu-devel, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz


Am 19.12.2023 um 16:13 hat Stefan Hajnoczi geschrieben:
> On Sat, Oct 14, 2023 at 09:35:14AM +0200, Markus Armbruster wrote:
> > Stefan Hajnoczi <stefanha@redhat.com> writes:
> > > +##
> > > +# @IOThreadVirtQueueMappings:
> > > +#
> > > +# IOThreadVirtQueueMapping list. This struct is not actually used but the
> > > +# IOThreadVirtQueueMappingList type it generates is!
> > 
> > Two spaces between sentences for consistency, please.
> > 
> > Doc comments are QMP reference documentation for users.  Does this
> > paragraph belong there?
> 
> Someone might wonder why a type exists that is never used, so I think
> including this in the documentation is acceptable.

I seem to remember that we had a similar remark elsewhere, but maybe it
doesn't exist any more today?

> My comment is mostly intended to stop someone from removing this
> "unused" type from the schema though. If there is a better way to do
> that I can do that instead.

Won't the QAPI generator or the compiler stop them soon enough?

Kevin



* Re: [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type
  2023-12-19 16:08       ` Kevin Wolf
@ 2023-12-20  7:39         ` Markus Armbruster
  0 siblings, 0 replies; 17+ messages in thread
From: Markus Armbruster @ 2023-12-20  7:39 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Stefan Hajnoczi, qemu-devel, Michal Privoznik, Paolo Bonzini,
	Eduardo Habkost, qemu-block, Daniel P. Berrangé,
	Eric Blake, Michael S. Tsirkin, Hanna Reitz

Kevin Wolf <kwolf@redhat.com> writes:

> Am 19.12.2023 um 16:13 hat Stefan Hajnoczi geschrieben:
>> On Sat, Oct 14, 2023 at 09:35:14AM +0200, Markus Armbruster wrote:
>> > Stefan Hajnoczi <stefanha@redhat.com> writes:
>> > > +##
>> > > +# @IOThreadVirtQueueMappings:
>> > > +#
>> > > +# IOThreadVirtQueueMapping list. This struct is not actually used but the
>> > > +# IOThreadVirtQueueMappingList type it generates is!
>> > 
>> > Two spaces between sentences for consistency, please.
>> > 
>> > Doc comments are QMP reference documentation for users.  Does this
>> > paragraph belong there?
>> 
>> Someone might wonder why a type exists that is never used, so I think
>> including this in the documentation is acceptable.
>
> I seem to remember that we had a similar remark elsewhere, but maybe it
> doesn't exist any more today?

Working up more context...  alright, now I see.

When the QAPI schema defines a type 'T', the QAPI generator generates
code and documentation for it whether it is used in the schema or not.
We occasionally use this to generate types with QAPI goodies for purely
internal use.  In other words, the generator assumes there is a use of T
in C code.

However, the QAPI generator generates code for ['T'] only if there's a
use of ['T'] in the schema.  This is because it's actually needed only
for about one in seven types.  See commit 9f08c8ec738 (qapi: Lazy
creation of array types).

Once in a blue moon, ['T'] is only used in C code.  The QAPI generator
won't generate code for it then, and the C code won't compile.  Then we
have to add a dummy use to the schema to force the array into existence.
Not exactly elegant, but it works.

I can see two blue moons in the schema before this patch.

In qapi/block-core.json:

    ##
    # @DummyBlockCoreForceArrays:
    #
    # Not used by QMP; hack to let us use BlockGraphInfoList internally
    #
    # Since: 8.0
    ##
    { 'struct': 'DummyBlockCoreForceArrays',
      'data': { 'unused-block-graph-info': ['BlockGraphInfo'] } }

It's called Dummy<NameOfModule>ForceArrays, it's at the end of the file,
and it collects all the arrays being forced into existence.

In qapi/machine.json:

    ##
    # @DummyForceArrays:
    #
    # Not used by QMP; hack to let us use X86CPUFeatureWordInfoList
    # internally
    #
    # Since: 2.5
    ##
    { 'struct': 'DummyForceArrays',
      'data': { 'unused': ['X86CPUFeatureWordInfo'] } }

Less good: it has a clash-prone name, and it's not at the end of the
file.  It was the first use of this trick, and we still had a single
schema file back then.

I recommend you do yours just like the first one.
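Applied to this series, that would mean something like the following at the end of whichever schema module defines IOThreadVirtQueueMapping (struct and member names here are just my suggestion, following the Dummy<NameOfModule>ForceArrays pattern):

```
##
# @DummyVirtioForceArrays:
#
# Not used by QMP; hack to let us use IOThreadVirtQueueMappingList
# internally
##
{ 'struct': 'DummyVirtioForceArrays',
  'data': { 'unused-iothread-vq-mapping': ['IOThreadVirtQueueMapping'] } }
```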

>> My comment is mostly intended to stop someone from removing this
>> "unused" type from the schema though. If there is a better way to do
>> that I can do that instead.
>
> Won't the QAPI generator or the compiler stop them soon enough?

The compiler would.




end of thread, other threads:[~2023-12-20  7:40 UTC | newest]

Thread overview: 17+ messages
2023-09-18 16:16 [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
2023-09-18 16:16 ` [PATCH v2 1/2] qdev: add IOThreadVirtQueueMappingList property type Stefan Hajnoczi
2023-10-14  7:35   ` Markus Armbruster
2023-12-19 15:13     ` Stefan Hajnoczi
2023-12-19 16:08       ` Kevin Wolf
2023-12-20  7:39         ` Markus Armbruster
2023-12-11 13:56   ` Kevin Wolf
2023-12-11 15:32     ` Markus Armbruster
2023-12-11 21:15       ` Stefan Hajnoczi
2023-12-19  7:43         ` Markus Armbruster
2023-12-12  9:59       ` Kevin Wolf
2023-09-18 16:16 ` [PATCH v2 2/2] virtio-blk: add iothread-vq-mapping parameter Stefan Hajnoczi
2023-10-03 13:12 ` [PATCH v2 0/2] " Michael S. Tsirkin
2023-11-02 14:10 ` Kevin Wolf
2023-11-07  3:00   ` Stefan Hajnoczi
2023-11-07 10:20     ` Kevin Wolf
2023-11-07  3:29   ` Stefan Hajnoczi
