* [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
@ 2017-07-10  7:20 Fam Zheng
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 1/5] aio: Wrap poll parameters into AioContextPollParams Fam Zheng
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: Fam Zheng @ 2017-07-10  7:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: pbonzini, Fam Zheng, Stefan Hajnoczi

Last time we looked at "-object iothread,spawns=N", but it was a bit abusive.
A dedicated "iothread-group" class is cleaner from the interface point of view.
This series does that.

It has the same set of poll parameters as the existing "iothread" object, plus
a "size" option to specify how many threads to start. Using iothread-group
doesn't require the user to explicitly create the contained IOThreads. The
IOThreads are created by the group object.
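
For comparison with "-object iothread", a guest using this series could be
started along these lines (a sketch: the iothread-group object and its size
option come from this series, while the rest of the command line, including
the virtio-blk-pci spelling and the placeholder disk.img, is only
illustrative):

```shell
# Hypothetical invocation: one iothread-group of 4 threads backs a
# virtio-blk device; disk.img is a placeholder image path.
qemu-system-x86_64 \
    -object iothread-group,id=iog0,size=4,poll-max-ns=32768 \
    -drive if=none,id=drive0,file=disk.img,format=raw \
    -device virtio-blk-pci,drive=drive0,iothread-group=iog0
```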

Internally, the IOThreads share one AioContext. This makes it easier to adapt
the current data plane code (see the last patch), but it is an implementation
detail and will change depending on the block layer multiqueue needs.

TODO:

- qmp_query_iothread_groups, in addition to proper QOM @child property from
  IOThreadGroup to its IOThread instances.
- Add virtio-scsi.
- Variant of iothread_stop_all().

Fam Zheng (5):
  aio: Wrap poll parameters into AioContextPollParams
  iothread: Don't error on windows
  iothread: Extract iothread_start
  Introduce iothread-group
  virtio-blk: Add iothread-group property

 Makefile.objs                   |   2 +-
 hw/block/dataplane/virtio-blk.c |  18 ++--
 hw/block/virtio-blk.c           |   6 ++
 include/block/aio.h             |  18 ++--
 include/hw/virtio/virtio-blk.h  |   2 +
 include/sysemu/iothread.h       |  35 ++++++-
 iothread-group.c                | 210 ++++++++++++++++++++++++++++++++++++++++
 iothread.c                      |  97 +++++++++----------
 util/aio-posix.c                |  10 +-
 util/aio-win32.c                |   8 +-
 10 files changed, 328 insertions(+), 78 deletions(-)
 create mode 100644 iothread-group.c

-- 
2.9.4

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [Qemu-devel] [PATCH RFC 1/5] aio: Wrap poll parameters into AioContextPollParams
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
@ 2017-07-10  7:20 ` Fam Zheng
  2017-07-11 13:34   ` Stefan Hajnoczi
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 2/5] iothread: Don't error on windows Fam Zheng
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Fam Zheng @ 2017-07-10  7:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: pbonzini, Fam Zheng, Stefan Hajnoczi

The same set of parameters will also be wanted by the coming iothread-group
object; introduce a structure to slightly reduce the code duplication.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 include/block/aio.h       | 18 ++++++++++++------
 include/sysemu/iothread.h |  5 +----
 iothread.c                | 24 +++++++++---------------
 util/aio-posix.c          | 10 +++++-----
 util/aio-win32.c          |  4 ++--
 5 files changed, 29 insertions(+), 32 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index e9aeeae..fcf1faf 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -551,17 +551,23 @@ static inline bool aio_context_in_iothread(AioContext *ctx)
  */
 void aio_context_setup(AioContext *ctx);
 
+typedef struct {
+    /* how long to busy poll for, in nanoseconds. 0 means don't poll */
+    int64_t max_ns;
+    /* polling time growth factor */
+    int64_t grow;
+    /* polling time shrink factor */
+    int64_t shrink;
+} AioContextPollParams;
+
 /**
  * aio_context_set_poll_params:
  * @ctx: the aio context
- * @max_ns: how long to busy poll for, in nanoseconds
- * @grow: polling time growth factor
- * @shrink: polling time shrink factor
+ * @params: the new params to update to
  *
- * Poll mode can be disabled by setting poll_max_ns to 0.
+ * Poll mode can be disabled by setting params.max_ns to 0.
  */
-void aio_context_set_poll_params(AioContext *ctx, int64_t max_ns,
-                                 int64_t grow, int64_t shrink,
+void aio_context_set_poll_params(AioContext *ctx, AioContextPollParams params,
                                  Error **errp);
 
 #endif
diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index e6da1a4..eecfc19 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -29,10 +29,7 @@ typedef struct {
     bool stopping;
     int thread_id;
 
-    /* AioContext poll parameters */
-    int64_t poll_max_ns;
-    int64_t poll_grow;
-    int64_t poll_shrink;
+    AioContextPollParams poll_params;
 } IOThread;
 
 #define IOTHREAD(obj) \
diff --git a/iothread.c b/iothread.c
index beeb870..f5a01bb 100644
--- a/iothread.c
+++ b/iothread.c
@@ -81,7 +81,7 @@ static void iothread_instance_init(Object *obj)
 {
     IOThread *iothread = IOTHREAD(obj);
 
-    iothread->poll_max_ns = IOTHREAD_POLL_MAX_NS_DEFAULT;
+    iothread->poll_params.max_ns = IOTHREAD_POLL_MAX_NS_DEFAULT;
 }
 
 static void iothread_instance_finalize(Object *obj)
@@ -111,10 +111,7 @@ static void iothread_complete(UserCreatable *obj, Error **errp)
         return;
     }
 
-    aio_context_set_poll_params(iothread->ctx,
-                                iothread->poll_max_ns,
-                                iothread->poll_grow,
-                                iothread->poll_shrink,
+    aio_context_set_poll_params(iothread->ctx, iothread->poll_params,
                                 &local_error);
     if (local_error) {
         error_propagate(errp, local_error);
@@ -151,13 +148,13 @@ typedef struct {
 } PollParamInfo;
 
 static PollParamInfo poll_max_ns_info = {
-    "poll-max-ns", offsetof(IOThread, poll_max_ns),
+    "poll-max-ns", offsetof(IOThread, poll_params.max_ns),
 };
 static PollParamInfo poll_grow_info = {
-    "poll-grow", offsetof(IOThread, poll_grow),
+    "poll-grow", offsetof(IOThread, poll_params.grow),
 };
 static PollParamInfo poll_shrink_info = {
-    "poll-shrink", offsetof(IOThread, poll_shrink),
+    "poll-shrink", offsetof(IOThread, poll_params.shrink),
 };
 
 static void iothread_get_poll_param(Object *obj, Visitor *v,
@@ -193,10 +190,7 @@ static void iothread_set_poll_param(Object *obj, Visitor *v,
     *field = value;
 
     if (iothread->ctx) {
-        aio_context_set_poll_params(iothread->ctx,
-                                    iothread->poll_max_ns,
-                                    iothread->poll_grow,
-                                    iothread->poll_shrink,
+        aio_context_set_poll_params(iothread->ctx, iothread->poll_params,
                                     &local_err);
     }
 
@@ -268,9 +262,9 @@ static int query_one_iothread(Object *object, void *opaque)
     info = g_new0(IOThreadInfo, 1);
     info->id = iothread_get_id(iothread);
     info->thread_id = iothread->thread_id;
-    info->poll_max_ns = iothread->poll_max_ns;
-    info->poll_grow = iothread->poll_grow;
-    info->poll_shrink = iothread->poll_shrink;
+    info->poll_max_ns = iothread->poll_params.max_ns;
+    info->poll_grow = iothread->poll_params.grow;
+    info->poll_shrink = iothread->poll_params.shrink;
 
     elem = g_new0(IOThreadInfoList, 1);
     elem->value = info;
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 2d51239..1db8f3c 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -713,16 +713,16 @@ void aio_context_setup(AioContext *ctx)
 #endif
 }
 
-void aio_context_set_poll_params(AioContext *ctx, int64_t max_ns,
-                                 int64_t grow, int64_t shrink, Error **errp)
+void aio_context_set_poll_params(AioContext *ctx, AioContextPollParams params,
+                                 Error **errp)
 {
     /* No thread synchronization here, it doesn't matter if an incorrect value
      * is used once.
      */
-    ctx->poll_max_ns = max_ns;
+    ctx->poll_max_ns = params.max_ns;
     ctx->poll_ns = 0;
-    ctx->poll_grow = grow;
-    ctx->poll_shrink = shrink;
+    ctx->poll_grow = params.grow;
+    ctx->poll_shrink = params.shrink;
 
     aio_notify(ctx);
 }
diff --git a/util/aio-win32.c b/util/aio-win32.c
index bca496a..d8a1b20 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -400,8 +400,8 @@ void aio_context_setup(AioContext *ctx)
 {
 }
 
-void aio_context_set_poll_params(AioContext *ctx, int64_t max_ns,
-                                 int64_t grow, int64_t shrink, Error **errp)
+void aio_context_set_poll_params(AioContext *ctx, AioContextPollParams params,
+                                 Error **errp)
 {
     error_setg(errp, "AioContext polling is not implemented on Windows");
 }
-- 
2.9.4


* [Qemu-devel] [PATCH RFC 2/5] iothread: Don't error on windows
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 1/5] aio: Wrap poll parameters into AioContextPollParams Fam Zheng
@ 2017-07-10  7:20 ` Fam Zheng
  2017-07-11 13:35   ` Stefan Hajnoczi
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 3/5] iothread: Extract iothread_start Fam Zheng
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Fam Zheng @ 2017-07-10  7:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: pbonzini, Fam Zheng, Stefan Hajnoczi

aio_context_set_poll_params() is called unconditionally from the iothread
initialization code, and the contract is that polling is disabled when
max_ns == 0; otherwise Windows reports an error. With the current non-zero
default, any "-object iothread" option always makes win32 builds exit,
which is not nice.

Default to zero on Windows and keep going.

(While we are at it, move the definition to the header, because it will
be shared with iothread-group.)

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 include/sysemu/iothread.h | 12 ++++++++++++
 iothread.c                |  6 ------
 util/aio-win32.c          |  4 +++-
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index eecfc19..37f033b 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -39,4 +39,16 @@ char *iothread_get_id(IOThread *iothread);
 AioContext *iothread_get_aio_context(IOThread *iothread);
 void iothread_stop_all(void);
 
+#ifndef _WIN32
+/* Benchmark results from 2016 on NVMe SSD drives show max polling times around
+ * 16-32 microseconds yield IOPS improvements for both iodepth=1 and iodepth=32
+ * workloads.
+ */
+#define IOTHREAD_POLL_MAX_NS_DEFAULT 32768ULL
+#else
+/* Our aio implementation on Windows doesn't support polling, don't enable it
+ * by default. */
+#define IOTHREAD_POLL_MAX_NS_DEFAULT 0
+#endif
+
 #endif /* IOTHREAD_H */
diff --git a/iothread.c b/iothread.c
index f5a01bb..ce56724 100644
--- a/iothread.c
+++ b/iothread.c
@@ -30,12 +30,6 @@ typedef ObjectClass IOThreadClass;
 #define IOTHREAD_CLASS(klass) \
    OBJECT_CLASS_CHECK(IOThreadClass, klass, TYPE_IOTHREAD)
 
-/* Benchmark results from 2016 on NVMe SSD drives show max polling times around
- * 16-32 microseconds yield IOPS improvements for both iodepth=1 and iodepth=32
- * workloads.
- */
-#define IOTHREAD_POLL_MAX_NS_DEFAULT 32768ULL
-
 static __thread IOThread *my_iothread;
 
 AioContext *qemu_get_current_aio_context(void)
diff --git a/util/aio-win32.c b/util/aio-win32.c
index d8a1b20..3d2de11 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -403,5 +403,7 @@ void aio_context_setup(AioContext *ctx)
 void aio_context_set_poll_params(AioContext *ctx, AioContextPollParams params,
                                  Error **errp)
 {
-    error_setg(errp, "AioContext polling is not implemented on Windows");
+    if (params.max_ns) {
+        error_setg(errp, "AioContext polling is not implemented on Windows");
+    }
 }
-- 
2.9.4


* [Qemu-devel] [PATCH RFC 3/5] iothread: Extract iothread_start
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 1/5] aio: Wrap poll parameters into AioContextPollParams Fam Zheng
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 2/5] iothread: Don't error on windows Fam Zheng
@ 2017-07-10  7:20 ` Fam Zheng
  2017-07-11 13:37   ` Stefan Hajnoczi
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 4/5] Introduce iothread-group Fam Zheng
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Fam Zheng @ 2017-07-10  7:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: pbonzini, Fam Zheng, Stefan Hajnoczi

This is the "create and start the thread" part of iothread spawning.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 include/sysemu/iothread.h |  1 +
 iothread.c                | 33 +++++++++++++++++++--------------
 2 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index 37f033b..a3a03e8 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -36,6 +36,7 @@ typedef struct {
    OBJECT_CHECK(IOThread, obj, TYPE_IOTHREAD)
 
 char *iothread_get_id(IOThread *iothread);
+void iothread_start(IOThread *iothread, const char *thread_name, Error **errp);
 AioContext *iothread_get_aio_context(IOThread *iothread);
 void iothread_stop_all(void);
 
diff --git a/iothread.c b/iothread.c
index ce56724..8d703db 100644
--- a/iothread.c
+++ b/iothread.c
@@ -76,6 +76,10 @@ static void iothread_instance_init(Object *obj)
     IOThread *iothread = IOTHREAD(obj);
 
     iothread->poll_params.max_ns = IOTHREAD_POLL_MAX_NS_DEFAULT;
+    iothread->stopping = false;
+    iothread->thread_id = -1;
+    qemu_mutex_init(&iothread->init_done_lock);
+    qemu_cond_init(&iothread->init_done_cond);
 }
 
 static void iothread_instance_finalize(Object *obj)
@@ -91,14 +95,26 @@ static void iothread_instance_finalize(Object *obj)
     aio_context_unref(iothread->ctx);
 }
 
+void iothread_start(IOThread *iothread, const char *thread_name, Error **errp)
+{
+    qemu_thread_create(&iothread->thread, thread_name, iothread_run,
+                       iothread, QEMU_THREAD_JOINABLE);
+
+    /* Wait for initialization to complete */
+    qemu_mutex_lock(&iothread->init_done_lock);
+    while (iothread->thread_id == -1) {
+        qemu_cond_wait(&iothread->init_done_cond,
+                       &iothread->init_done_lock);
+    }
+    qemu_mutex_unlock(&iothread->init_done_lock);
+}
+
 static void iothread_complete(UserCreatable *obj, Error **errp)
 {
     Error *local_error = NULL;
     IOThread *iothread = IOTHREAD(obj);
     char *name, *thread_name;
 
-    iothread->stopping = false;
-    iothread->thread_id = -1;
     iothread->ctx = aio_context_new(&local_error);
     if (!iothread->ctx) {
         error_propagate(errp, local_error);
@@ -114,26 +130,15 @@ static void iothread_complete(UserCreatable *obj, Error **errp)
         return;
     }
 
-    qemu_mutex_init(&iothread->init_done_lock);
-    qemu_cond_init(&iothread->init_done_cond);
 
     /* This assumes we are called from a thread with useful CPU affinity for us
      * to inherit.
      */
     name = object_get_canonical_path_component(OBJECT(obj));
     thread_name = g_strdup_printf("IO %s", name);
-    qemu_thread_create(&iothread->thread, thread_name, iothread_run,
-                       iothread, QEMU_THREAD_JOINABLE);
+    iothread_start(iothread, thread_name, errp);
     g_free(thread_name);
     g_free(name);
-
-    /* Wait for initialization to complete */
-    qemu_mutex_lock(&iothread->init_done_lock);
-    while (iothread->thread_id == -1) {
-        qemu_cond_wait(&iothread->init_done_cond,
-                       &iothread->init_done_lock);
-    }
-    qemu_mutex_unlock(&iothread->init_done_lock);
 }
 
 typedef struct {
-- 
2.9.4


* [Qemu-devel] [PATCH RFC 4/5] Introduce iothread-group
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
                   ` (2 preceding siblings ...)
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 3/5] iothread: Extract iothread_start Fam Zheng
@ 2017-07-10  7:20 ` Fam Zheng
  2017-07-11 13:48   ` Stefan Hajnoczi
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 5/5] virtio-blk: Add iothread-group property Fam Zheng
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Fam Zheng @ 2017-07-10  7:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: pbonzini, Fam Zheng, Stefan Hajnoczi

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 Makefile.objs             |   2 +-
 include/sysemu/iothread.h |  17 ++++
 iothread-group.c          | 210 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 228 insertions(+), 1 deletion(-)
 create mode 100644 iothread-group.c

diff --git a/Makefile.objs b/Makefile.objs
index b2e6322..b7ee479 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -40,7 +40,7 @@ io-obj-y = io/
 
 ifeq ($(CONFIG_SOFTMMU),y)
 common-obj-y = blockdev.o blockdev-nbd.o block/
-common-obj-y += iothread.o
+common-obj-y += iothread.o iothread-group.o
 common-obj-y += net/
 common-obj-y += qdev-monitor.o device-hotplug.o
 common-obj-$(CONFIG_WIN32) += os-win32.o
diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index a3a03e8..0cb28c0 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -32,12 +32,29 @@ typedef struct {
     AioContextPollParams poll_params;
 } IOThread;
 
+#define TYPE_IOTHREAD_GROUP "iothread-group"
+
+typedef struct {
+    Object parent_obj;
+
+    int running_threads;
+    AioContext *ctx;
+    AioContextPollParams poll_params;
+    /* Number of iothreads */
+    int64_t size;
+    IOThread **iothreads;
+} IOThreadGroup;
+
 #define IOTHREAD(obj) \
    OBJECT_CHECK(IOThread, obj, TYPE_IOTHREAD)
 
+#define IOTHREAD_GROUP(obj) \
+   OBJECT_CHECK(IOThreadGroup, obj, TYPE_IOTHREAD_GROUP)
+
 char *iothread_get_id(IOThread *iothread);
 void iothread_start(IOThread *iothread, const char *thread_name, Error **errp);
 AioContext *iothread_get_aio_context(IOThread *iothread);
+AioContext *iothread_group_get_aio_context(IOThreadGroup *group);
 void iothread_stop_all(void);
 
 #ifndef _WIN32
diff --git a/iothread-group.c b/iothread-group.c
new file mode 100644
index 0000000..163ca2b
--- /dev/null
+++ b/iothread-group.c
@@ -0,0 +1,210 @@
+/*
+ * Event loop thread group
+ *
+ * Copyright Red Hat Inc., 2013, 2017
+ *
+ * Authors:
+ *  Stefan Hajnoczi   <stefanha@redhat.com>
+ *  Fam Zheng         <famz@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qom/object.h"
+#include "qom/object_interfaces.h"
+#include "qemu/module.h"
+#include "block/aio.h"
+#include "block/block.h"
+#include "sysemu/iothread.h"
+#include "qmp-commands.h"
+#include "qemu/error-report.h"
+#include "qemu/rcu.h"
+#include "qemu/main-loop.h"
+
+static int iothread_group_stop(Object *object, void *opaque)
+{
+    /* XXX: stop each iothread */
+    return 0;
+}
+
+static void iothread_group_instance_init(Object *obj)
+{
+    IOThreadGroup *group = IOTHREAD_GROUP(obj);
+
+    group->poll_params.max_ns = IOTHREAD_POLL_MAX_NS_DEFAULT;
+    group->size = 1;
+}
+
+static void iothread_group_instance_finalize(Object *obj)
+{
+    IOThreadGroup *group = IOTHREAD_GROUP(obj);
+
+    iothread_group_stop(obj, NULL);
+    if (!group->ctx) {
+        return;
+    }
+    aio_context_unref(group->ctx);
+}
+
+static void iothread_group_complete(UserCreatable *obj, Error **errp)
+{
+    Error *local_err = NULL;
+    IOThreadGroup *group = IOTHREAD_GROUP(obj);
+    char *name, *thread_name;
+    int i;
+
+    group->ctx = aio_context_new(&local_err);
+    if (!group->ctx) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    aio_context_set_poll_params(group->ctx, group->poll_params,
+                                &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        aio_context_unref(group->ctx);
+        group->ctx = NULL;
+        return;
+    }
+
+    /* This assumes we are called from a thread with useful CPU affinity for us
+     * to inherit.
+     */
+    name = object_get_canonical_path_component(OBJECT(obj));
+    group->iothreads = g_new0(IOThread *, group->size);
+    for (i = 0; i < group->size; ++i) {
+        IOThread *iothread = (IOThread *)object_new(TYPE_IOTHREAD);
+
+        /* We've already set up the shared aio context so this should do
+         * nothing, but better be consistent. */
+        iothread->poll_params = group->poll_params;
+        iothread->ctx = group->ctx;
+
+        thread_name = g_strdup_printf("IO %s[%d]", name, i);
+        iothread_start(iothread, thread_name, &local_err);
+        g_free(thread_name);
+        if (local_err) {
+            object_unref(OBJECT(iothread));
+            break;
+        }
+        group->iothreads[i] = iothread;
+    }
+    g_free(name);
+    if (local_err) {
+        error_propagate(errp, local_err);
+    }
+}
+
+typedef struct {
+    const char *name;
+    ptrdiff_t offset; /* field's byte offset in IOThreadGroup struct */
+} PropInfo;
+
+static PropInfo size_info = {
+    "size", offsetof(IOThreadGroup, size),
+};
+static PropInfo poll_max_ns_info = {
+    "poll-max-ns", offsetof(IOThreadGroup, poll_params.max_ns),
+};
+static PropInfo poll_grow_info = {
+    "poll-grow", offsetof(IOThreadGroup, poll_params.grow),
+};
+static PropInfo poll_shrink_info = {
+    "poll-shrink", offsetof(IOThreadGroup, poll_params.shrink),
+};
+
+static void iothread_group_get_prop(Object *obj, Visitor *v,
+                                    const char *name, void *opaque,
+                                    Error **errp)
+{
+    IOThreadGroup *group = IOTHREAD_GROUP(obj);
+    PropInfo *info = opaque;
+    int64_t *field = (void *)group + info->offset;
+
+    visit_type_int64(v, name, field, errp);
+}
+
+static void iothread_group_set_prop(Object *obj, Visitor *v,
+                                    const char *name, void *opaque,
+                                    Error **errp)
+{
+    IOThreadGroup *group = IOTHREAD_GROUP(obj);
+    PropInfo *info = opaque;
+    int64_t *field = (void *)group + info->offset;
+    Error *local_err = NULL;
+    int64_t value;
+
+    visit_type_int64(v, name, &value, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    if (value < 0) {
+        error_setg(&local_err, "%s value must be in range [0, %"PRId64"]",
+                   info->name, INT64_MAX);
+        goto out;
+    }
+
+    *field = value;
+
+    if (group->ctx) {
+        aio_context_set_poll_params(group->ctx,
+                                    group->poll_params,
+                                    &local_err);
+    }
+
+out:
+    error_propagate(errp, local_err);
+}
+
+static void iothread_group_class_init(ObjectClass *klass, void *class_data)
+{
+    UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
+    ucc->complete = iothread_group_complete;
+
+    object_class_property_add(klass, "size", "int",
+                              iothread_group_get_prop,
+                              iothread_group_set_prop,
+                              NULL, &size_info, &error_abort);
+    object_class_property_add(klass, "poll-max-ns", "int",
+                              iothread_group_get_prop,
+                              iothread_group_set_prop,
+                              NULL, &poll_max_ns_info, &error_abort);
+    object_class_property_add(klass, "poll-grow", "int",
+                              iothread_group_get_prop,
+                              iothread_group_set_prop,
+                              NULL, &poll_grow_info, &error_abort);
+    object_class_property_add(klass, "poll-shrink", "int",
+                              iothread_group_get_prop,
+                              iothread_group_set_prop,
+                              NULL, &poll_shrink_info, &error_abort);
+}
+
+static const TypeInfo iothread_group_info = {
+    .name = TYPE_IOTHREAD_GROUP,
+    .parent = TYPE_OBJECT,
+    .class_init = iothread_group_class_init,
+    .instance_size = sizeof(IOThreadGroup),
+    .instance_init = iothread_group_instance_init,
+    .instance_finalize = iothread_group_instance_finalize,
+    .interfaces = (InterfaceInfo[]) {
+        {TYPE_USER_CREATABLE},
+        {}
+    },
+};
+
+static void iothread_group_register_types(void)
+{
+    type_register_static(&iothread_group_info);
+}
+
+type_init(iothread_group_register_types)
+
+AioContext *iothread_group_get_aio_context(IOThreadGroup *group)
+{
+    return group->ctx;
+}
-- 
2.9.4


* [Qemu-devel] [PATCH RFC 5/5] virtio-blk: Add iothread-group property
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
                   ` (3 preceding siblings ...)
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 4/5] Introduce iothread-group Fam Zheng
@ 2017-07-10  7:20 ` Fam Zheng
  2017-07-10  7:46 ` [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Fam Zheng @ 2017-07-10  7:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: pbonzini, Fam Zheng, Stefan Hajnoczi

Do I/O on the IOThreadGroup's AioContext. This property is mutually
exclusive with the iothread property.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 18 +++++++++++-------
 hw/block/virtio-blk.c           |  6 ++++++
 include/hw/virtio/virtio-blk.h  |  2 ++
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 6fdc6f6..4b76f8b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -40,7 +40,6 @@ struct VirtIOBlockDataPlane {
      * (because you don't own the file descriptor or handle; you just
      * use it).
      */
-    IOThread *iothread;
     AioContext *ctx;
 };
 
@@ -86,7 +85,7 @@ void virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
 
     *dataplane = NULL;
 
-    if (conf->iothread) {
+    if (conf->iothread || conf->iothread_group) {
         if (!k->set_guest_notifiers || !k->ioeventfd_assign) {
             error_setg(errp,
                        "device is incompatible with iothread "
@@ -116,9 +115,11 @@ void virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     s->conf = conf;
 
     if (conf->iothread) {
-        s->iothread = IOTHREAD(conf->iothread);
-        object_ref(OBJECT(s->iothread));
-        s->ctx = iothread_get_aio_context(s->iothread);
+        object_ref(conf->iothread);
+        s->ctx = iothread_get_aio_context(IOTHREAD(conf->iothread));
+    } else if (conf->iothread_group) {
+        object_ref(conf->iothread_group);
+        s->ctx = iothread_group_get_aio_context(IOTHREAD_GROUP(conf->iothread_group));
     } else {
         s->ctx = qemu_get_aio_context();
     }
@@ -141,8 +142,11 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     assert(!vblk->dataplane_started);
     g_free(s->batch_notify_vqs);
     qemu_bh_delete(s->bh);
-    if (s->iothread) {
-        object_unref(OBJECT(s->iothread));
+    if (s->conf->iothread) {
+        object_unref(s->conf->iothread);
+    }
+    if (s->conf->iothread_group) {
+        object_unref(s->conf->iothread_group);
     }
     g_free(s);
 }
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 8146306..cfd847b 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -918,6 +918,10 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "drive property not set");
         return;
     }
+    if (conf->iothread && conf->iothread_group) {
+        error_setg(errp, "iothread and iothread-group cannot be used together");
+        return;
+    }
     if (!blk_is_inserted(conf->conf.blk)) {
         error_setg(errp, "Device needs media, but drive is empty");
         return;
@@ -1011,6 +1015,8 @@ static Property virtio_blk_properties[] = {
                     true),
     DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
     DEFINE_PROP_LINK("iothread", VirtIOBlock, conf.iothread, TYPE_IOTHREAD),
+    DEFINE_PROP_LINK("iothread-group", VirtIOBlock, conf.iothread_group,
+                     TYPE_IOTHREAD_GROUP),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 2452074..16e72e0 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -35,6 +35,8 @@ struct VirtIOBlkConf
     BlockConf conf;
     /* IOThread pointer to be filled by link property */
     Object *iothread;
+    /* IOThreadGroup pointer to be filled by link property */
+    Object *iothread_group;
     char *serial;
     uint32_t scsi;
     uint32_t config_wce;
-- 
2.9.4


* Re: [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
                   ` (4 preceding siblings ...)
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 5/5] virtio-blk: Add iothread-group property Fam Zheng
@ 2017-07-10  7:46 ` Fam Zheng
  2017-07-11 14:15 ` Stefan Hajnoczi
  2017-07-11 14:58 ` Stefan Hajnoczi
  7 siblings, 0 replies; 18+ messages in thread
From: Fam Zheng @ 2017-07-10  7:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: pbonzini, Stefan Hajnoczi

On Mon, 07/10 15:20, Fam Zheng wrote:
> Last time we looked at "-object iothread,spawns=N", but it was a bit abusive.
> A dedicated "iothread-group" class is cleaner from the interface point of view.
> This series does that.
> 
> It has the same set of poll parameters as the existing "iothread" object, plus
> a "size" option to specify how many threads to start. Using iothread-group
> doesn't require the user to explicitly create the contained IOThreads. The
> IOThreads are created by the group object.
> 
> Internally, the IOThreads share one AioContext. This makes it easier to
> adapt the current data plane code (see the last patch), but it is an
> implementation detail and will change depending on the block layer
> multiqueue needs.

I forgot to mention that the last patch depends on

    [PATCH v3 00/20] qdev: Introduce DEFINE_PROP_LINK

Fam


* Re: [Qemu-devel] [PATCH RFC 1/5] aio: Wrap poll parameters into AioContextPollParams
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 1/5] aio: Wrap poll parameters into AioContextPollParams Fam Zheng
@ 2017-07-11 13:34   ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-11 13:34 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, pbonzini


On Mon, Jul 10, 2017 at 03:20:23PM +0800, Fam Zheng wrote:
> The same set of parameters will also be wanted by the coming iothread-group
> object; introduce a structure to slightly reduce the code duplication.
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  include/block/aio.h       | 18 ++++++++++++------
>  include/sysemu/iothread.h |  5 +----
>  iothread.c                | 24 +++++++++---------------
>  util/aio-posix.c          | 10 +++++-----
>  util/aio-win32.c          |  4 ++--
>  5 files changed, 29 insertions(+), 32 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [Qemu-devel] [PATCH RFC 2/5] iothread: Don't error on windows
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 2/5] iothread: Don't error on windows Fam Zheng
@ 2017-07-11 13:35   ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-11 13:35 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, pbonzini


On Mon, Jul 10, 2017 at 03:20:24PM +0800, Fam Zheng wrote:
> aio_context_set_poll_params() is called unconditionally from the iothread
> initialization code, and the contract is that polling is disabled when
> max_ns == 0; otherwise Windows reports an error. With the current default
> being non-zero, win32 will always exit on an "-object iothread" option,
> which is not nice.
> 
> Default to zero on Windows and keep going.
> 
> (While we are at it, move the definition to the header, because it will
> be shared with iothread-group.)
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  include/sysemu/iothread.h | 12 ++++++++++++
>  iothread.c                |  6 ------
>  util/aio-win32.c          |  4 +++-
>  3 files changed, 15 insertions(+), 7 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


* Re: [Qemu-devel] [PATCH RFC 3/5] iothread: Extract iothread_start
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 3/5] iothread: Extract iothread_start Fam Zheng
@ 2017-07-11 13:37   ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-11 13:37 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, pbonzini


On Mon, Jul 10, 2017 at 03:20:25PM +0800, Fam Zheng wrote:
> This is the "create and start the thread" part of iothread spawning.
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  include/sysemu/iothread.h |  1 +
>  iothread.c                | 33 +++++++++++++++++++--------------
>  2 files changed, 20 insertions(+), 14 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


* Re: [Qemu-devel] [PATCH RFC 4/5] Introduce iothread-group
  2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 4/5] Introduce iothread-group Fam Zheng
@ 2017-07-11 13:48   ` Stefan Hajnoczi
  2017-07-12  8:44     ` Fam Zheng
  0 siblings, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-11 13:48 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, pbonzini


On Mon, Jul 10, 2017 at 03:20:26PM +0800, Fam Zheng wrote:
> +static void iothread_group_complete(UserCreatable *obj, Error **errp)
> +{
> +    Error *local_err = NULL;
> +    IOThreadGroup *group = IOTHREAD_GROUP(obj);
> +    char *name, *thread_name;
> +    int i;
> +
> +    group->ctx = aio_context_new(&local_err);
> +    if (!group->ctx) {
> +        error_propagate(errp, local_err);
> +        return;
> +    }
> +
> +    aio_context_set_poll_params(group->ctx, group->poll_params,
> +                                &local_err);
> +    if (local_err) {
> +        error_propagate(errp, local_err);
> +        aio_context_unref(group->ctx);
> +        group->ctx = NULL;
> +        return;
> +    }
> +
> +    /* This assumes we are called from a thread with useful CPU affinity for us
> +     * to inherit.
> +     */
> +    name = object_get_canonical_path_component(OBJECT(obj));
> +    group->iothreads = g_new0(IOThread *, group->size);
> +    for (i = 0; i < group->size; ++i) {

group->size > INT_MAX must be rejected.  The error message below is
wrong for this property:

  if (value < 0) {
    error_setg(&local_err, "%s value must be in range [0, %"PRId64"]",
               info->name, INT64_MAX);


* Re: [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
                   ` (5 preceding siblings ...)
  2017-07-10  7:46 ` [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
@ 2017-07-11 14:15 ` Stefan Hajnoczi
  2017-07-11 15:14   ` Fam Zheng
  2017-07-11 14:58 ` Stefan Hajnoczi
  7 siblings, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-11 14:15 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, pbonzini, Eric Blake


On Mon, Jul 10, 2017 at 03:20:22PM +0800, Fam Zheng wrote:
> Last time we've looked at "-object iothread,spawns=N" but it was a bit abusive.
> A dedicated "iothread-group" class is cleaner from the interface point of view.
> This series does that.
> 
> It has the same set of poll parameters as the existing "iothread" object, plus
> a "size" option to specify how many threads to start. Using iothread-group
> doesn't require the user to explicitly create the contained IOThreads. The
> IOThreads are created by the group object.
> 
> Internally, IOThreads share one AioContext.  This is to make it easier to adapt
> this to the current data plane code (see the last patch). But it is an
> implementation detail, and will change depending on the block layer multiqueue
> needs.
> 
> TODO:
> 
> - qmp_query_iothread_groups, in addition to proper QOM @child property from
>   IOThreadGroup to its IOThread instances.
> - Add virtio-scsi.
> - Variant of iothread_stop_all().
> 
> Fam Zheng (5):
>   aio: Wrap poll parameters into AioContextPollParams
>   iothread: Don't error on windows
>   iothread: Extract iothread_start
>   Introduce iothread-group
>   virtio-blk: Add iothread-group property

From your TODO note above it looks like you plan to duplicate IOThread
interfaces for IOThreadGroup?  This means existing query-iothreads users
no longer see the full view of all IOThreads.

I think it would be cleaner to define and query IOThreads like they are
today but change virtio-blk/virtio-scsi to accept a list of IOThreads.
That way existing management tool functionality can be used and the only
tweak is that devices can now be added to multiple IOThreads.

It would be nice to express the group relationship in QOM instead of
open coding a new group object.  The majority of the RFC code creates a
child/link list relationship that QOM should support for any type, not
just IOThread.

Adding Eric Blake so we can discuss QMP requirements further.


* Re: [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
  2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
                   ` (6 preceding siblings ...)
  2017-07-11 14:15 ` Stefan Hajnoczi
@ 2017-07-11 14:58 ` Stefan Hajnoczi
  2017-07-12 11:06   ` Fam Zheng
  7 siblings, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-11 14:58 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, pbonzini


On Mon, Jul 10, 2017 at 03:20:22PM +0800, Fam Zheng wrote:
> Last time we've looked at "-object iothread,spawns=N" but it was a bit abusive.
> A dedicated "iothread-group" class is cleaner from the interface point of view.
> This series does that.
> 
> It has the same set of poll parameters as the existing "iothread" object, plus
> a "size" option to specify how many threads to start. Using iothread-group
> doesn't require the user to explicitly create the contained IOThreads. The
> IOThreads are created by the group object.
> 
> Internally, IOThreads share one AioContext.  This is to make it easier to adapt
> this to the current data plane code (see the last patch). But it is an
> implementation detail, and will change depending on the block layer multiqueue
> needs.
> 
> TODO:
> 
> - qmp_query_iothread_groups, in addition to proper QOM @child property from
>   IOThreadGroup to its IOThread instances.
> - Add virtio-scsi.
> - Variant of iothread_stop_all().
> 
> Fam Zheng (5):
>   aio: Wrap poll parameters into AioContextPollParams
>   iothread: Don't error on windows
>   iothread: Extract iothread_start
>   Introduce iothread-group
>   virtio-blk: Add iothread-group property
> 
>  Makefile.objs                   |   2 +-
>  hw/block/dataplane/virtio-blk.c |  18 ++--
>  hw/block/virtio-blk.c           |   6 ++
>  include/block/aio.h             |  18 ++--
>  include/hw/virtio/virtio-blk.h  |   2 +
>  include/sysemu/iothread.h       |  35 ++++++-
>  iothread-group.c                | 210 ++++++++++++++++++++++++++++++++++++++++
>  iothread.c                      |  97 +++++++++----------
>  util/aio-posix.c                |  10 +-
>  util/aio-win32.c                |   8 +-
>  10 files changed, 328 insertions(+), 78 deletions(-)
>  create mode 100644 iothread-group.c

I reviewed the QOM "foo[*]" array syntax but it's very limited.  Basically
all it does is append the new property as foo[0], foo[1], ... (i.e. it
allocates an index).  AFAICT there is no way to specify a true array
property; arrays are really just individual properties.

Daniel Berrange has a patch for non-scalar properties:
"[PATCH v14 15/21] qom: support non-scalar properties with -object"
https://patchwork.kernel.org/patch/9358503/

I think the challenge with existing "[*]" or pre-allocated "foo[0]",
"foo[1]", ... is that they don't support variable-sized arrays via QAPI.
With that missing piece solved it would be possible to say:

  -device virtio-blk-pci,iothread.0=iothread0,iothread.1=iothread1

Or maybe:

  -device virtio-blk-pci,iothread[0]=iothread0,iothread[1]=iothread1

When the virtio-blk-pci object is realized it can loop over iothread[*]
properties to set them up (if necessary).

Your current RFC series uses a single AioContext in IOThreadGroup.  I
think something similar can be achieved with an iothread property:

  -object iothread,id=iothread1,share-event-loop-with=iothread0

The reason I am suggesting this approach is to keep IOThread as the
first-class object.  Existing QMP APIs continue to apply to all objects.
We avoid introducing an ad-hoc group object just for IOThread.

Stefan


* Re: [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
  2017-07-11 14:15 ` Stefan Hajnoczi
@ 2017-07-11 15:14   ` Fam Zheng
  2017-07-12 10:40     ` Stefan Hajnoczi
  0 siblings, 1 reply; 18+ messages in thread
From: Fam Zheng @ 2017-07-11 15:14 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: pbonzini, qemu-devel

On Tue, 07/11 15:15, Stefan Hajnoczi wrote:
> On Mon, Jul 10, 2017 at 03:20:22PM +0800, Fam Zheng wrote:
> > Last time we've looked at "-object iothread,spawns=N" but it was a bit abusive.
> > A dedicated "iothread-group" class is cleaner from the interface point of view.
> > This series does that.
> > 
> > It has the same set of poll parameters as the existing "iothread" object, plus
> > a "size" option to specify how many threads to start. Using iothread-group
> > doesn't require the user to explicitly create the contained IOThreads. The
> > IOThreads are created by the group object.
> > 
> > Internally, IOThreads share one AioContext.  This is to make it easier to adapt
> > this to the current data plane code (see the last patch). But it is an
> > implementation detail, and will change depending on the block layer multiqueue
> > needs.
> > 
> > TODO:
> > 
> > - qmp_query_iothread_groups, in addition to proper QOM @child property from
> >   IOThreadGroup to its IOThread instances.
> > - Add virtio-scsi.
> > - Variant of iothread_stop_all().
> > 
> > Fam Zheng (5):
> >   aio: Wrap poll parameters into AioContextPollParams
> >   iothread: Don't error on windows
> >   iothread: Extract iothread_start
> >   Introduce iothread-group
> >   virtio-blk: Add iothread-group property
> 
> From your TODO note above it looks like you plan to duplicate IOThread
> interfaces for IOThreadGroup?  This means existing query-iothreads users
> no longer see the full view of all IOThreads.
> 
> I think it would be cleaner to define and query IOThreads like they are
> today but change virtio-blk/virtio-scsi to accept a list of IOThreads.

That way the groups are formed implicitly, and I'm not sure whether that is
better for users/tools to manage in the long run. Consider this syntax:

    -object iothread,id=iot0 \
    -object iothread,id=iot1 \
    -object iothread,id=iot2 \
    -device virtio-blk-pci,id=vblk0,iothread.0=iot0,iothread.1=iot1 \
    -device virtio-blk-pci,id=vblk1,iothread.0=iot1,iothread.1=iot2

where vblk0 uses iot0 and iot1, and vblk1 uses iot1 and iot2. There is an
intersection between the two groups. IMO it is less clean than the rule
set by an explicit syntax:

    -object iothread-group,id=iotg0,size=4 \
    -object iothread-group,id=iotg1,size=4 \
    -device virtio-blk-pci,id=vblk0,iothread-group=iotg0 \
    -device virtio-blk-pci,id=vblk1,iothread-group=iotg1 \


Also I have no idea how easy it is to add a "list of links" qdev property. I
remember there was some related work in progress, but I've lost the pointers.

> That way existing management tool functionality can be used and the only
> tweak is that devices can now be added to multiple IOThreads.

Another way could be to still include any IOThreads created by IOThreadGroup in
"query-iothreads" output, and add a "group name" property so users know the
groupings.

> 
> It would be nice to express the group relationship in QOM instead of
> open coding a new group object.  The majority of the RFC code creates a
> child/link list relationship that QOM should support for any type, not
> just IOThread.

Sounds fine, but I'm not sure what exactly you have in mind (I think it is
extending QOM). Can you elaborate?

Fam


* Re: [Qemu-devel] [PATCH RFC 4/5] Introduce iothread-group
  2017-07-11 13:48   ` Stefan Hajnoczi
@ 2017-07-12  8:44     ` Fam Zheng
  0 siblings, 0 replies; 18+ messages in thread
From: Fam Zheng @ 2017-07-12  8:44 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: pbonzini, qemu-devel

On Tue, 07/11 14:48, Stefan Hajnoczi wrote:
> > +    for (i = 0; i < group->size; ++i) {
> 
> group->size > INT_MAX must be rejected.  The error message below is
> wrong for this property:
> 
>   if (value < 0) {
>     error_setg(&local_err, "%s value must be in range [0, %"PRId64"]",
>                info->name, INT64_MAX);

Yes, I can add a PropInfo.max field so "size" can have a different max.

Fam


* Re: [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
  2017-07-11 15:14   ` Fam Zheng
@ 2017-07-12 10:40     ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-12 10:40 UTC (permalink / raw)
  To: Fam Zheng; +Cc: pbonzini, qemu-devel


On Tue, Jul 11, 2017 at 11:14:21PM +0800, Fam Zheng wrote:
> On Tue, 07/11 15:15, Stefan Hajnoczi wrote:
> > On Mon, Jul 10, 2017 at 03:20:22PM +0800, Fam Zheng wrote:
> > > Last time we've looked at "-object iothread,spawns=N" but it was a bit abusive.
> > > A dedicated "iothread-group" class is cleaner from the interface point of view.
> > > This series does that.
> > > 
> > > It has the same set of poll parameters as the existing "iothread" object, plus
> > > a "size" option to specify how many threads to start. Using iothread-group
> > > doesn't require the user to explicitly create the contained IOThreads. The
> > > IOThreads are created by the group object.
> > > 
> > > Internally, IOThreads share one AioContext.  This is to make it easier to adapt
> > > this to the current data plane code (see the last patch). But it is an
> > > implementation detail, and will change depending on the block layer multiqueue
> > > needs.
> > > 
> > > TODO:
> > > 
> > > - qmp_query_iothread_groups, in addition to proper QOM @child property from
> > >   IOThreadGroup to its IOThread instances.
> > > - Add virtio-scsi.
> > > - Variant of iothread_stop_all().
> > > 
> > > Fam Zheng (5):
> > >   aio: Wrap poll parameters into AioContextPollParams
> > >   iothread: Don't error on windows
> > >   iothread: Extract iothread_start
> > >   Introduce iothread-group
> > >   virtio-blk: Add iothread-group property
> > 
> > From your TODO note above it looks like you plan to duplicate IOThread
> > interfaces for IOThreadGroup?  This means existing query-iothreads users
> > no longer see the full view of all IOThreads.
> > 
> > I think it would be cleaner to define and query IOThreads like they are
> > today but change virtio-blk/virtio-scsi to accept a list of IOThreads.
> 
> That way the groups are formed passively and I'm not sure if it is better for
> users/tools to manage in the long run. Consider this syntax:
> 
>     -object iothread,id=iot0 \
>     -object iothread,id=iot1 \
>     -object iothread,id=iot2 \
>     -device virtio-blk-pci,id=vblk0,iothread.0=iot0,iothread.1=iot1 \
>     -device virtio-blk-pci,id=vblk1,iothread.0=iot1,iothread.1=iot2
> 
> where vblk0 uses iot0 and iot1 and vblk1 uses iot1 and iot2. There is a
> intersection between the two groups. IMO it is less clean compared to the
> rule set by an explicit syntax:
> 
>     -object iothread-group,id=iotg0,size=4 \
>     -object iothread-group,id=iotg1,size=4 \
>     -device virtio-blk-pci,id=vblk0,iothread-group=iotg0 \
>     -device virtio-blk-pci,id=vblk1,iothread-group=iotg1 \
> 
> 
> Also I have not idea how easy it is to add a "list of links" qdev property. I
> remember there was some related work in progress, but I've lost the pointers.
> 
> > That way existing management tool functionality can be used and the only
> > tweak is that devices can now be added to multiple IOThreads.
> 
> Another way could be to still include any IOThreads created by IOThreadGroup in
> "query-iothreads" output, and add a "group name" property so users know the
> groupings.
> 
> > 
> > It would be nice to express the group relationship in QOM instead of
> > open coding a new group object.  The majority of the RFC code creates a
> > child/link list relationship that QOM should support for any type, not
> > just IOThread.
> 
> Sounds fine, but I'm not sure what exactly you have in mind (I think it is
> extending QOM). Can you elaborate?

I thought about it after sending this email.  Here is a follow-up that
gets into the QOM details:

https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg02799.html

Stefan


* Re: [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
  2017-07-11 14:58 ` Stefan Hajnoczi
@ 2017-07-12 11:06   ` Fam Zheng
  2017-07-14 13:18     ` Stefan Hajnoczi
  0 siblings, 1 reply; 18+ messages in thread
From: Fam Zheng @ 2017-07-12 11:06 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel, pbonzini

On Tue, 07/11 15:58, Stefan Hajnoczi wrote:
> On Mon, Jul 10, 2017 at 03:20:22PM +0800, Fam Zheng wrote:
> > Last time we've looked at "-object iothread,spawns=N" but it was a bit abusive.
> > A dedicated "iothread-group" class is cleaner from the interface point of view.
> > This series does that.
> > 
> > It has the same set of poll parameters as the existing "iothread" object, plus
> > a "size" option to specify how many threads to start. Using iothread-group
> > doesn't require the user to explicitly create the contained IOThreads. The
> > IOThreads are created by the group object.
> > 
> > Internally, IOThreads share one AioContext.  This is to make it easier to adapt
> > this to the current data plane code (see the last patch). But it is an
> > implementation detail, and will change depending on the block layer multiqueue
> > needs.
> > 
> > TODO:
> > 
> > - qmp_query_iothread_groups, in addition to proper QOM @child property from
> >   IOThreadGroup to its IOThread instances.
> > - Add virtio-scsi.
> > - Variant of iothread_stop_all().
> > 
> > Fam Zheng (5):
> >   aio: Wrap poll parameters into AioContextPollParams
> >   iothread: Don't error on windows
> >   iothread: Extract iothread_start
> >   Introduce iothread-group
> >   virtio-blk: Add iothread-group property
> > 
> >  Makefile.objs                   |   2 +-
> >  hw/block/dataplane/virtio-blk.c |  18 ++--
> >  hw/block/virtio-blk.c           |   6 ++
> >  include/block/aio.h             |  18 ++--
> >  include/hw/virtio/virtio-blk.h  |   2 +
> >  include/sysemu/iothread.h       |  35 ++++++-
> >  iothread-group.c                | 210 ++++++++++++++++++++++++++++++++++++++++
> >  iothread.c                      |  97 +++++++++----------
> >  util/aio-posix.c                |  10 +-
> >  util/aio-win32.c                |   8 +-
> >  10 files changed, 328 insertions(+), 78 deletions(-)
> >  create mode 100644 iothread-group.c
> 
> I reviewed QOM "foo[*]" array syntax but it's very limited.  Basically
> all it does it append the new property to foo[0], foo[1], ... (i.e. it
> allocates an index).  AFAICT there is no way to specify an array
> property and really arrays are just individual properties.
> 
> Daniel Berrange has a patch for non-scalar properties:
> "[PATCH v14 15/21] qom: support non-scalar properties with -object"
> https://patchwork.kernel.org/patch/9358503/
> 
> I think the challenge with existing "[*]" or pre-allocated "foo[0]",
> "foo[1]", ... is that they don't support variable-sized arrays via QAPI.

Plus I'm not a fan of a syntax that spells out indexes explicitly. It makes
me feel like a slave of a poorly coded programme. With it we'll avoid open
coding an iothread group class in QEMU, but we'll ask the user to type out
group configurations by hand.

> With that missing piece solved it would be possible to say:
> 
>   -device virtio-blk-pci,iothread.0=iothread0,iothread.1=iothread1
> 
> Or maybe:
> 
>   -device virtio-blk-pci,iothread[0]=iothread0,iothread[1]=iothread1

Understood. As I said in the other message, I'm not quite convinced by the
"arbitrary grouping" API this syntax makes possible. The simplest cases are
probably okay, but I'm afraid that as the number of virtio-blk-pci devices
and iothreads grows, the configuration can get complicated to manage.

> 
> When the virtio-blk-pci object is realized it can loop over iothread[*]
> properties to set them up (if necessary).
> 
> Your current RFC series uses a single AioContext in IOThreadGroup.  I
> think something similar can be achieved with an iothread property:
> 
>   -object iothread,id=iothread1,share-event-loop-with=iothread0
> 
> The reason I am suggesting this approach is to keep IOThread as the
> first-class object.  Existing QMP APIs continue to apply to all objects.
> We avoid introducing an ad-hoc group object just for IOThread.

Another possibility is to introduce 1) a "TYPE_GROUPABLE" InterfaceInfo that
will add a ".group" str property to IOThread, and build a generic group object
out of the objects sharing the same .group name; 2) a qdev prop
DEFINE_PROP_LINK_TO_GROUP() that can be used to reference a QOM object group
from virtio-blk/scsi. That way it is not specific to IOThread. (Since I am
not a QOM expert, this is basically brainstorming.)

Fam


* Re: [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group"
  2017-07-12 11:06   ` Fam Zheng
@ 2017-07-14 13:18     ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2017-07-14 13:18 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel, pbonzini


On Wed, Jul 12, 2017 at 07:06:33PM +0800, Fam Zheng wrote:
> On Tue, 07/11 15:58, Stefan Hajnoczi wrote:
> > On Mon, Jul 10, 2017 at 03:20:22PM +0800, Fam Zheng wrote:
> > > Last time we've looked at "-object iothread,spawns=N" but it was a bit abusive.
> > > A dedicated "iothread-group" class is cleaner from the interface point of view.
> > > This series does that.
> > > 
> > > It has the same set of poll parameters as the existing "iothread" object, plus
> > > a "size" option to specify how many threads to start. Using iothread-group
> > > doesn't require the user to explicitly create the contained IOThreads. The
> > > IOThreads are created by the group object.
> > > 
> > > Internally, IOThreads share one AioContext.  This is to make it easier to adapt
> > > this to the current data plane code (see the last patch). But it is an
> > > implementation detail, and will change depending on the block layer multiqueue
> > > needs.
> > > 
> > > TODO:
> > > 
> > > - qmp_query_iothread_groups, in addition to proper QOM @child property from
> > >   IOThreadGroup to its IOThread instances.
> > > - Add virtio-scsi.
> > > - Variant of iothread_stop_all().
> > > 
> > > Fam Zheng (5):
> > >   aio: Wrap poll parameters into AioContextPollParams
> > >   iothread: Don't error on windows
> > >   iothread: Extract iothread_start
> > >   Introduce iothread-group
> > >   virtio-blk: Add iothread-group property
> > > 
> > >  Makefile.objs                   |   2 +-
> > >  hw/block/dataplane/virtio-blk.c |  18 ++--
> > >  hw/block/virtio-blk.c           |   6 ++
> > >  include/block/aio.h             |  18 ++--
> > >  include/hw/virtio/virtio-blk.h  |   2 +
> > >  include/sysemu/iothread.h       |  35 ++++++-
> > >  iothread-group.c                | 210 ++++++++++++++++++++++++++++++++++++++++
> > >  iothread.c                      |  97 +++++++++----------
> > >  util/aio-posix.c                |  10 +-
> > >  util/aio-win32.c                |   8 +-
> > >  10 files changed, 328 insertions(+), 78 deletions(-)
> > >  create mode 100644 iothread-group.c
> > 
> > I reviewed QOM "foo[*]" array syntax but it's very limited.  Basically
> > all it does it append the new property to foo[0], foo[1], ... (i.e. it
> > allocates an index).  AFAICT there is no way to specify an array
> > property and really arrays are just individual properties.
> > 
> > Daniel Berrange has a patch for non-scalar properties:
> > "[PATCH v14 15/21] qom: support non-scalar properties with -object"
> > https://patchwork.kernel.org/patch/9358503/
> > 
> > I think the challenge with existing "[*]" or pre-allocated "foo[0]",
> > "foo[1]", ... is that they don't support variable-sized arrays via QAPI.
> 
> Plus I'm not a fan of such a "spelling out indexes explicitly" syntax. It makes
> me feel like a slave of a poorly coded programme. With it we'll avoid open
> coding an iothread group class in QEMU, but will ask the user to "open type"
> group configurations.

Indexes are necessary if the user wants to remove, swap, etc. elements
later, so they do serve a purpose.  Sometimes they might not be needed.

> > When the virtio-blk-pci object is realized it can loop over iothread[*]
> > properties to set them up (if necessary).
> > 
> > Your current RFC series uses a single AioContext in IOThreadGroup.  I
> > think something similar can be achieved with an iothread property:
> > 
> >   -object iothread,id=iothread1,share-event-loop-with=iothread0
> > 
> > The reason I am suggesting this approach is to keep IOThread as the
> > first-class object.  Existing QMP APIs continue to apply to all objects.
> > We avoid introducing an ad-hoc group object just for IOThread.
> 
> Another possibility is to introduce 1) a "TYPE_GROUPABLE" InterfaceInfo that
> will add a ".group" str property to IOThread, and build a generic group object
> out of the objects sharing the same .group name; 2) a qdev prop
> DEFINE_PROP_LINK_TO_GROUP() that can be used to reference a QOM object group
> from virtio-blk/scsi. That way it is not specific to IOThread. (Since I am not
> QOM expert, this is basically brainstorming.)

What you describe sounds like qom/container.c except QOM links are used
instead of a QOM child relationship.

No new QOM interface is necessary in order to do this - the objects in a
group don't have to be aware that they are being grouped.

Stefan


end of thread, other threads:[~2017-07-14 13:18 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-10  7:20 [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 1/5] aio: Wrap poll parameters into AioContextPollParams Fam Zheng
2017-07-11 13:34   ` Stefan Hajnoczi
2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 2/5] iothread: Don't error on windows Fam Zheng
2017-07-11 13:35   ` Stefan Hajnoczi
2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 3/5] iothread: Extract iothread_start Fam Zheng
2017-07-11 13:37   ` Stefan Hajnoczi
2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 4/5] Introduce iothread-group Fam Zheng
2017-07-11 13:48   ` Stefan Hajnoczi
2017-07-12  8:44     ` Fam Zheng
2017-07-10  7:20 ` [Qemu-devel] [PATCH RFC 5/5] virtio-blk: Add iothread-group property Fam Zheng
2017-07-10  7:46 ` [Qemu-devel] [PATCH RFC 0/5] Introduce "-object iothread-group" Fam Zheng
2017-07-11 14:15 ` Stefan Hajnoczi
2017-07-11 15:14   ` Fam Zheng
2017-07-12 10:40     ` Stefan Hajnoczi
2017-07-11 14:58 ` Stefan Hajnoczi
2017-07-12 11:06   ` Fam Zheng
2017-07-14 13:18     ` Stefan Hajnoczi
