* [Qemu-devel] [PATCH V7 00/16] Support more virtio queues
@ 2015-04-23  6:21 Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 01/16] virtio-net: fix the upper bound when trying to delete queues Jason Wang
                   ` (17 more replies)
  0 siblings, 18 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, mst, Amit Shah, Jason Wang, Alexander Graf,
	Keith Busch, Christian Borntraeger, qemu-ppc, Stefan Hajnoczi,
	Cornelia Huck, Paolo Bonzini, Richard Henderson

We currently limit the maximum number of virtio queues to 64. This is
not sufficient for multiqueue devices (e.g. recent Linux supports up
to 256 tap queues), so this series lets virtio support more queues.

Not much work is needed, except:

- Introduce a transport specific queue limit. This makes it simple to
  increase the limit for one transport without harming the rest.
- Let each virtio transport use its specific limit.
- Speed up MSI-X masking and unmasking through a per-vector queue
  list, and increase the maximum number of MSI-X vectors supported by
  qemu.
- With the above enhancements, increase the maximum number of
  virtqueues supported by PCI from 64 to 1024.
- Keep the legacy virtio queue limit for compatibility.
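The per-vector queue list idea can be sketched as follows. This is a
simplified, hypothetical model for illustration only; the field and
function names (`vector_queues`, `set_vector`, `mask_vector`) are
invented and differ from the actual QEMU code:

```c
#include <assert.h>
#include <stddef.h>

#define QUEUE_MAX  1024
#define VECTOR_MAX 256

/* Each MSI-X vector keeps an intrusive list of the virtqueues
 * that use it. */
typedef struct VirtQueue {
    unsigned vector;
    struct VirtQueue *next_on_vector;
    int masked;
} VirtQueue;

typedef struct {
    VirtQueue vq[QUEUE_MAX];
    VirtQueue *vector_queues[VECTOR_MAX];
} VirtIODeviceSketch;

/* Attach queue i to its vector's list. */
static void set_vector(VirtIODeviceSketch *d, int i, unsigned vector)
{
    VirtQueue *q = &d->vq[i];
    q->vector = vector;
    q->next_on_vector = d->vector_queues[vector];
    d->vector_queues[vector] = q;
}

/* Masking a vector walks only the queues on its list instead of
 * scanning all QUEUE_MAX queues. */
static void mask_vector(VirtIODeviceSketch *d, unsigned vector)
{
    for (VirtQueue *q = d->vector_queues[vector]; q != NULL;
         q = q->next_on_vector) {
        q->masked = 1;
    }
}
```

With 1024 possible queues but typically only a handful per vector,
this turns each mask/unmask from an O(QUEUE_MAX) scan into a walk of
the few queues actually bound to that vector.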

With this series, we can support up to 256 queues. Since x86 only
allows about 240 interrupt vectors for MSI-X, the current Linux driver
can only give about 80 queue pairs their own private MSI-X interrupt
vectors. With IRQ sharing between queue pairs (RFC posted at
https://lkml.org/lkml/2014/12/25/169), the Linux driver can give up to
about 186 queue pairs their own private MSI-X interrupt vectors.

Tested with stress/migration tests on virtio-pci, compile tests on
other targets, and make check on s390x-softmmu and ppc64-softmmu.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Amit Shah <amit.shah@redhat.com>
Cc: qemu-ppc@nongnu.org

Please review.

Thanks

Changes from V6:
- Tweak some commit logs
- Replace the VIRTIO_PCI_QUEUE_MAX that was missed in ccw with a bus
  specific limit
- Bump the pci queue limit from 513 to 1024

Changes from V5:
- Rebase to HEAD
- Drop the patch that introduced a helper to get the vq index, since
  one already exists
- Don't try to solve the migration compatibility issue for invalid
  configurations like vectors > 128 with only 64 virtio queues. This
  is done by letting msix calculate the correct bar size and dropping
  patch 18, which introduced auto_msix_bar_size.
- Clean up the bar size and pba offset calculation code
- Add a comment to explain the new virtio pci queue limit number 513

Changes from V4:
- Drop the patch of bus limitation validation and send it as an
  independent patch
- Don't let virtio-pci calculate the MSI-X bar size; instead, tell the
  msix code whether or not to use the legacy bar size and let it do
  the calculation
- Make sure the MSI-X bar size is at least 4K
- Fix subject typos in patch 8
- Mention adapter routes changes in the commit log of patch 9
- Only use the vector to queue list for virtio-pci to avoid possible
  unnecessary overhead on other transports.

Changes from V3:
- rebase to master and target to 2.4
- handling compat issues for spapr
- fixes for hmp command completion when we have a nic with 256 queues
- using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue in
  virtio-ccw
- fix compile issues for ppc64-softmmu
- don't export VIRTIO_CCW_QUEUE_MAX; introduce a gsi limit and
  inherit it in ccw
- use transport specific queue limit in virtio-serial
- correct the stale comment for AdapterRoutes and move it to ccw patch
- replace 128 with queue_max + 64 and add a comment for this in
  virtio_ccw_notify()

Changes from V2:
- move transport specific limits to their implementations (the
  exception is VIRTIO_CCW_QUEUE_MAX, since I failed to find a common
  header file other than virtio.h)
- use virtio_get_queue_max() if possible in virtio.c
- AdapterRoutes should use the ccw limit
- introduce a vector to queues mapping for virtio devices and speed up
  MSI-X masking and unmasking through it

Changes from V1:
- add a validation against the bus limitation
- switch to use a bus specific queue limit instead of a global one,
  this will allow us to just increase the limit of one transport
  without disturbing others.
- only increase the queue limit of virtio-pci
- limit the maximum number of virtio queues to 64 for legacy machine
  types

Jason Wang (16):
  virtio-net: fix the upper bound when trying to delete queues
  pc: add 2.4 machine types
  spapr: add machine type specific instance init function
  ppc: spapr: add 2.4 machine type
  monitor: replace the magic number 255 with MAX_QUEUE_NUM
  monitor: check return value of qemu_find_net_clients_except()
  virtio-ccw: using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue
  virtio: introduce bus specific queue limit
  virtio-ccw: introduce ccw specific queue limit
  virtio-s390: switch to bus specific queue limit
  virtio-mmio: switch to bus specific queue limit
  virtio-pci: switch to use bus specific queue limit
  virtio: introduce vector to virtqueues mapping
  virtio-pci: speedup MSI-X masking and unmasking
  virtio-pci: increase the maximum number of virtqueues to 513
  pci: remove hard-coded bar size in msix_init_exclusive_bar()

 hw/char/virtio-serial-bus.c    |  2 +-
 hw/i386/pc_piix.c              | 34 ++++++++++++++++---
 hw/i386/pc_q35.c               | 31 ++++++++++++++++--
 hw/net/virtio-net.c            |  6 ++--
 hw/pci/msix.c                  | 30 ++++++++++-------
 hw/ppc/spapr.c                 | 59 +++++++++++++++++++++++++++++++--
 hw/s390x/s390-virtio-bus.c     |  7 ++--
 hw/s390x/s390-virtio-ccw.c     |  7 ++--
 hw/s390x/virtio-ccw.c          | 22 ++++++++-----
 hw/scsi/virtio-scsi.c          |  4 +--
 hw/virtio/virtio-mmio.c        |  7 ++--
 hw/virtio/virtio-pci.c         | 65 +++++++++++++++++++++++--------------
 hw/virtio/virtio.c             | 74 +++++++++++++++++++++++++++++++++---------
 include/hw/s390x/s390_flic.h   |  4 ++-
 include/hw/virtio/virtio-bus.h |  2 ++
 include/hw/virtio/virtio.h     |  6 ++--
 monitor.c                      | 25 +++++++-------
 17 files changed, 291 insertions(+), 94 deletions(-)

-- 
2.1.0

^ permalink raw reply	[flat|nested] 47+ messages in thread

* [Qemu-devel] [PATCH V7 01/16] virtio-net: fix the upper bound when trying to delete queues
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 02/16] pc: add 2.4 machine types Jason Wang
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, qemu-stable, mst

Virtqueues are indexed from zero, so don't delete the virtqueue whose
index is n->max_queues * 2 + 1.
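The off-by-one can be seen with a small sketch. As the commit message
says, a device with max_queues queue pairs plus one control queue has
max_queues * 2 + 1 virtqueues, indexed 0 .. max_queues * 2; the helper
names below are invented for illustration:

```c
#include <assert.h>

/* Return the last virtqueue index each loop bound would touch when
 * deleting the queues above index 1 (the first pair is kept). */
static int last_deleted_old(int max_queues)
{
    int last = -1;
    for (int i = 2; i <= max_queues * 2 + 1; i++) { /* buggy bound */
        last = i;
    }
    return last;
}

static int last_deleted_new(int max_queues)
{
    int last = -1;
    for (int i = 2; i < max_queues * 2 + 1; i++) { /* fixed bound */
        last = i;
    }
    return last;
}
```

For max_queues = 2 the highest valid index is 4; the old bound also
touched index 5, one past the end.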

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/virtio-net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 59f76bc..b6fac9c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1309,7 +1309,7 @@ static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
 
     n->multiqueue = multiqueue;
 
-    for (i = 2; i <= n->max_queues * 2 + 1; i++) {
+    for (i = 2; i < n->max_queues * 2 + 1; i++) {
         virtio_del_queue(vdev, i);
     }
 
-- 
2.1.0


* [Qemu-devel] [PATCH V7 02/16] pc: add 2.4 machine types
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 01/16] virtio-net: fix the upper bound when trying to delete queues Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-27 11:03   ` Michael S. Tsirkin
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 03/16] spapr: add machine type specific instance init function Jason Wang
                   ` (15 subsequent siblings)
  17 siblings, 1 reply; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Jason Wang, Richard Henderson, mst

The following patches will limit these behaviors to legacy machine
types:

- the maximum number of virtqueues for virtio-pci is limited to 64
- the auto MSI-X bar size for virtio-net-pci is disabled by default

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/i386/pc_piix.c | 29 +++++++++++++++++++++++++----
 hw/i386/pc_q35.c  | 26 +++++++++++++++++++++++---
 2 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 1fe7bfb..212e263 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -310,8 +310,13 @@ static void pc_init_pci(MachineState *machine)
     pc_init1(machine, 1, 1);
 }
 
+static void pc_compat_2_3(MachineState *machine)
+{
+}
+
 static void pc_compat_2_2(MachineState *machine)
 {
+    pc_compat_2_3(machine);
     rsdp_in_ram = false;
     x86_cpu_compat_set_features("kvm64", FEAT_1_EDX, 0, CPUID_VME);
     x86_cpu_compat_set_features("kvm32", FEAT_1_EDX, 0, CPUID_VME);
@@ -413,6 +418,12 @@ static void pc_compat_1_2(MachineState *machine)
     x86_cpu_compat_kvm_no_autoenable(FEAT_KVM, 1 << KVM_FEATURE_PV_EOI);
 }
 
+static void pc_init_pci_2_3(MachineState *machine)
+{
+    pc_compat_2_3(machine);
+    pc_init_pci(machine);
+}
+
 static void pc_init_pci_2_2(MachineState *machine)
 {
     pc_compat_2_2(machine);
@@ -512,19 +523,28 @@ static void pc_xen_hvm_init(MachineState *machine)
     .desc = "Standard PC (i440FX + PIIX, 1996)", \
     .hot_add_cpu = pc_hot_add_cpu
 
-#define PC_I440FX_2_3_MACHINE_OPTIONS                           \
+#define PC_I440FX_2_4_MACHINE_OPTIONS                           \
     PC_I440FX_MACHINE_OPTIONS,                                  \
     .default_machine_opts = "firmware=bios-256k.bin",           \
     .default_display = "std"
 
-static QEMUMachine pc_i440fx_machine_v2_3 = {
-    PC_I440FX_2_3_MACHINE_OPTIONS,
-    .name = "pc-i440fx-2.3",
+
+static QEMUMachine pc_i440fx_machine_v2_4 = {
+    PC_I440FX_2_4_MACHINE_OPTIONS,
+    .name = "pc-i440fx-2.4",
     .alias = "pc",
     .init = pc_init_pci,
     .is_default = 1,
 };
 
+#define PC_I440FX_2_3_MACHINE_OPTIONS PC_I440FX_2_4_MACHINE_OPTIONS
+
+static QEMUMachine pc_i440fx_machine_v2_3 = {
+    PC_I440FX_2_3_MACHINE_OPTIONS,
+    .name = "pc-i440fx-2.3",
+    .init = pc_init_pci_2_3,
+};
+
 #define PC_I440FX_2_2_MACHINE_OPTIONS PC_I440FX_2_3_MACHINE_OPTIONS
 
 static QEMUMachine pc_i440fx_machine_v2_2 = {
@@ -970,6 +990,7 @@ static QEMUMachine xenfv_machine = {
 
 static void pc_machine_init(void)
 {
+    qemu_register_pc_machine(&pc_i440fx_machine_v2_4);
     qemu_register_pc_machine(&pc_i440fx_machine_v2_3);
     qemu_register_pc_machine(&pc_i440fx_machine_v2_2);
     qemu_register_pc_machine(&pc_i440fx_machine_v2_1);
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index dcc17c0..e67f2de 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -289,8 +289,13 @@ static void pc_q35_init(MachineState *machine)
     }
 }
 
+static void pc_compat_2_3(MachineState *machine)
+{
+}
+
 static void pc_compat_2_2(MachineState *machine)
 {
+    pc_compat_2_3(machine);
     rsdp_in_ram = false;
     x86_cpu_compat_set_features("kvm64", FEAT_1_EDX, 0, CPUID_VME);
     x86_cpu_compat_set_features("kvm32", FEAT_1_EDX, 0, CPUID_VME);
@@ -361,6 +366,12 @@ static void pc_compat_1_4(MachineState *machine)
     x86_cpu_compat_set_features("Westmere", FEAT_1_ECX, 0, CPUID_EXT_PCLMULQDQ);
 }
 
+static void pc_q35_init_2_3(MachineState *machine)
+{
+    pc_compat_2_3(machine);
+    pc_q35_init(machine);
+}
+
 static void pc_q35_init_2_2(MachineState *machine)
 {
     pc_compat_2_2(machine);
@@ -410,16 +421,24 @@ static void pc_q35_init_1_4(MachineState *machine)
     .hot_add_cpu = pc_hot_add_cpu, \
     .units_per_default_bus = 1
 
-#define PC_Q35_2_3_MACHINE_OPTIONS                      \
+#define PC_Q35_2_4_MACHINE_OPTIONS                      \
     PC_Q35_MACHINE_OPTIONS,                             \
     .default_machine_opts = "firmware=bios-256k.bin",   \
     .default_display = "std"
 
+static QEMUMachine pc_q35_machine_v2_4 = {
+    PC_Q35_2_4_MACHINE_OPTIONS,
+    .name = "pc-q35-2.4",
+    .alias = "q35",
+    .init = pc_q35_init,
+};
+
+#define PC_Q35_2_3_MACHINE_OPTIONS PC_Q35_2_4_MACHINE_OPTIONS
+
 static QEMUMachine pc_q35_machine_v2_3 = {
     PC_Q35_2_3_MACHINE_OPTIONS,
     .name = "pc-q35-2.3",
-    .alias = "q35",
-    .init = pc_q35_init,
+    .init = pc_q35_init_2_3,
 };
 
 #define PC_Q35_2_2_MACHINE_OPTIONS PC_Q35_2_3_MACHINE_OPTIONS
@@ -506,6 +525,7 @@ static QEMUMachine pc_q35_machine_v1_4 = {
 
 static void pc_q35_machine_init(void)
 {
+    qemu_register_pc_machine(&pc_q35_machine_v2_4);
     qemu_register_pc_machine(&pc_q35_machine_v2_3);
     qemu_register_pc_machine(&pc_q35_machine_v2_2);
     qemu_register_pc_machine(&pc_q35_machine_v2_1);
-- 
2.1.0


* [Qemu-devel] [PATCH V7 03/16] spapr: add machine type specific instance init function
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 01/16] virtio-net: fix the upper bound when trying to delete queues Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 02/16] pc: add 2.4 machine types Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type Jason Wang
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, qemu-ppc, Alexander Graf, mst

This patch adds machine type specific instance initialization
functions. They will be used by the following patches to set compat
class properties for legacy machine types.
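The cascading compat pattern used here can be sketched in a few lines
(illustrative names; the real functions take an Object * and set QOM
properties): each older machine type's compat function first applies
the newer one's settings, so compat properties accumulate down the
version chain.

```c
#include <assert.h>

static int props_2_2, props_2_1;

static void compat_2_2_sketch(void)
{
    props_2_2 = 1; /* settings needed by 2.2 and everything older */
}

static void compat_2_1_sketch(void)
{
    compat_2_2_sketch(); /* inherit the 2.2 settings first */
    props_2_1 = 1;       /* then add the 2.1-only ones */
}
```

Running the oldest compat function thus applies every newer compat
setting as well, without duplicating them per machine type.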

Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/ppc/spapr.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 61ddc79..9e676ef 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1825,6 +1825,27 @@ static const TypeInfo spapr_machine_info = {
 #define SPAPR_COMPAT_2_1 \
         SPAPR_COMPAT_2_2
 
+static void spapr_compat_2_2(Object *obj)
+{
+}
+
+static void spapr_compat_2_1(Object *obj)
+{
+    spapr_compat_2_2(obj);
+}
+
+static void spapr_machine_2_2_instance_init(Object *obj)
+{
+    spapr_compat_2_2(obj);
+    spapr_machine_initfn(obj);
+}
+
+static void spapr_machine_2_1_instance_init(Object *obj)
+{
+    spapr_compat_2_1(obj);
+    spapr_machine_initfn(obj);
+}
+
 static void spapr_machine_2_1_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
@@ -1843,6 +1864,7 @@ static const TypeInfo spapr_machine_2_1_info = {
     .name          = TYPE_SPAPR_MACHINE "2.1",
     .parent        = TYPE_SPAPR_MACHINE,
     .class_init    = spapr_machine_2_1_class_init,
+    .instance_init = spapr_machine_2_1_instance_init,
 };
 
 static void spapr_machine_2_2_class_init(ObjectClass *oc, void *data)
@@ -1862,6 +1884,7 @@ static const TypeInfo spapr_machine_2_2_info = {
     .name          = TYPE_SPAPR_MACHINE "2.2",
     .parent        = TYPE_SPAPR_MACHINE,
     .class_init    = spapr_machine_2_2_class_init,
+    .instance_init = spapr_machine_2_2_instance_init,
 };
 
 static void spapr_machine_2_3_class_init(ObjectClass *oc, void *data)
-- 
2.1.0


* [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (2 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 03/16] spapr: add machine type specific instance init function Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-27 11:03   ` Michael S. Tsirkin
  2015-04-27 13:14   ` Alexander Graf
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 05/16] monitor: replace the magic number 255 with MAX_QUEUE_NUM Jason Wang
                   ` (13 subsequent siblings)
  17 siblings, 2 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, qemu-ppc, Alexander Graf, mst

The following patches will limit this behavior to legacy machine
types:

- the maximum number of virtqueues for virtio-pci is limited to 64

Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/ppc/spapr.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 9e676ef..8e43aa2 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1825,8 +1825,13 @@ static const TypeInfo spapr_machine_info = {
 #define SPAPR_COMPAT_2_1 \
         SPAPR_COMPAT_2_2
 
+static void spapr_compat_2_3(Object *obj)
+{
+}
+
 static void spapr_compat_2_2(Object *obj)
 {
+    spapr_compat_2_3(obj);
 }
 
 static void spapr_compat_2_1(Object *obj)
@@ -1834,6 +1839,12 @@ static void spapr_compat_2_1(Object *obj)
     spapr_compat_2_2(obj);
 }
 
+static void spapr_machine_2_3_instance_init(Object *obj)
+{
+    spapr_compat_2_3(obj);
+    spapr_machine_initfn(obj);
+}
+
 static void spapr_machine_2_2_instance_init(Object *obj)
 {
     spapr_compat_2_2(obj);
@@ -1893,14 +1904,29 @@ static void spapr_machine_2_3_class_init(ObjectClass *oc, void *data)
 
     mc->name = "pseries-2.3";
     mc->desc = "pSeries Logical Partition (PAPR compliant) v2.3";
-    mc->alias = "pseries";
-    mc->is_default = 1;
 }
 
 static const TypeInfo spapr_machine_2_3_info = {
     .name          = TYPE_SPAPR_MACHINE "2.3",
     .parent        = TYPE_SPAPR_MACHINE,
     .class_init    = spapr_machine_2_3_class_init,
+    .instance_init = spapr_machine_2_3_instance_init,
+};
+
+static void spapr_machine_2_4_class_init(ObjectClass *oc, void *data)
+{
+    MachineClass *mc = MACHINE_CLASS(oc);
+
+    mc->name = "pseries-2.4";
+    mc->desc = "pSeries Logical Partition (PAPR compliant) v2.4";
+    mc->alias = "pseries";
+    mc->is_default = 1;
+}
+
+static const TypeInfo spapr_machine_2_4_info = {
+    .name          = TYPE_SPAPR_MACHINE "2.4",
+    .parent        = TYPE_SPAPR_MACHINE,
+    .class_init    = spapr_machine_2_4_class_init,
 };
 
 static void spapr_machine_register_types(void)
@@ -1909,6 +1935,7 @@ static void spapr_machine_register_types(void)
     type_register_static(&spapr_machine_2_1_info);
     type_register_static(&spapr_machine_2_2_info);
     type_register_static(&spapr_machine_2_3_info);
+    type_register_static(&spapr_machine_2_4_info);
 }
 
 type_init(spapr_machine_register_types)
-- 
2.1.0


* [Qemu-devel] [PATCH V7 05/16] monitor: replace the magic number 255 with MAX_QUEUE_NUM
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (3 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 06/16] monitor: check return value of qemu_find_net_clients_except() Jason Wang
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, Luiz Capitulino, mst

This patch replaces the magic number 255 with MAX_QUEUE_NUM, the
maximum number of queues supported by a nic.

Cc: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 monitor.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/monitor.c b/monitor.c
index 68873ec..a039edf 100644
--- a/monitor.c
+++ b/monitor.c
@@ -4472,10 +4472,11 @@ void set_link_completion(ReadLineState *rs, int nb_args, const char *str)
     len = strlen(str);
     readline_set_completion_index(rs, len);
     if (nb_args == 2) {
-        NetClientState *ncs[255];
+        NetClientState *ncs[MAX_QUEUE_NUM];
         int count, i;
         count = qemu_find_net_clients_except(NULL, ncs,
-                                             NET_CLIENT_OPTIONS_KIND_NONE, 255);
+                                             NET_CLIENT_OPTIONS_KIND_NONE,
+                                             MAX_QUEUE_NUM);
         for (i = 0; i < count; i++) {
             const char *name = ncs[i]->name;
             if (!strncmp(str, name, len)) {
@@ -4491,7 +4492,7 @@ void set_link_completion(ReadLineState *rs, int nb_args, const char *str)
 void netdev_del_completion(ReadLineState *rs, int nb_args, const char *str)
 {
     int len, count, i;
-    NetClientState *ncs[255];
+    NetClientState *ncs[MAX_QUEUE_NUM];
 
     if (nb_args != 2) {
         return;
@@ -4500,7 +4501,7 @@ void netdev_del_completion(ReadLineState *rs, int nb_args, const char *str)
     len = strlen(str);
     readline_set_completion_index(rs, len);
     count = qemu_find_net_clients_except(NULL, ncs, NET_CLIENT_OPTIONS_KIND_NIC,
-                                         255);
+                                         MAX_QUEUE_NUM);
     for (i = 0; i < count; i++) {
         QemuOpts *opts;
         const char *name = ncs[i]->name;
@@ -4566,14 +4567,15 @@ void host_net_add_completion(ReadLineState *rs, int nb_args, const char *str)
 
 void host_net_remove_completion(ReadLineState *rs, int nb_args, const char *str)
 {
-    NetClientState *ncs[255];
+    NetClientState *ncs[MAX_QUEUE_NUM];
     int count, i, len;
 
     len = strlen(str);
     readline_set_completion_index(rs, len);
     if (nb_args == 2) {
         count = qemu_find_net_clients_except(NULL, ncs,
-                                             NET_CLIENT_OPTIONS_KIND_NONE, 255);
+                                             NET_CLIENT_OPTIONS_KIND_NONE,
+                                             MAX_QUEUE_NUM);
         for (i = 0; i < count; i++) {
             int id;
             char name[16];
@@ -4589,7 +4591,8 @@ void host_net_remove_completion(ReadLineState *rs, int nb_args, const char *str)
         return;
     } else if (nb_args == 3) {
         count = qemu_find_net_clients_except(NULL, ncs,
-                                             NET_CLIENT_OPTIONS_KIND_NIC, 255);
+                                             NET_CLIENT_OPTIONS_KIND_NIC,
+                                             MAX_QUEUE_NUM);
         for (i = 0; i < count; i++) {
             int id;
             const char *name;
-- 
2.1.0


* [Qemu-devel] [PATCH V7 06/16] monitor: check return value of qemu_find_net_clients_except()
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (4 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 05/16] monitor: replace the magic number 255 with MAX_QUEUE_NUM Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 07/16] virtio-ccw: using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue Jason Wang
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, Luiz Capitulino, mst

qemu_find_net_clients_except() may return a value greater than the
size of the array we provided, so we should check this value before
using it; otherwise it may cause unexpected memory access.

This patch fixes net related command completion when we have a
virtio-net nic with more than 255 queues.
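The hazard and the MIN() clamp can be illustrated with a hypothetical
stand-in for qemu_find_net_clients_except() (the name find_clients and
its behavior below are invented for the sketch): the function stores
at most out_size entries but returns the total number of matches,
which may be larger, so callers must clamp before indexing.

```c
#include <assert.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Stores at most out_size fake client ids, but reports the total
 * number of matches, possibly more than were stored. */
static int find_clients(int matches, int *out, int out_size)
{
    for (int i = 0; i < MIN(matches, out_size); i++) {
        out[i] = i;
    }
    return matches;
}
```

Iterating with `i < count` after such a call reads past the array when
count exceeds out_size; `i < MIN(count, out_size)` stays in bounds.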

Cc: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 monitor.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/monitor.c b/monitor.c
index a039edf..2b5643d 100644
--- a/monitor.c
+++ b/monitor.c
@@ -4477,7 +4477,7 @@ void set_link_completion(ReadLineState *rs, int nb_args, const char *str)
         count = qemu_find_net_clients_except(NULL, ncs,
                                              NET_CLIENT_OPTIONS_KIND_NONE,
                                              MAX_QUEUE_NUM);
-        for (i = 0; i < count; i++) {
+        for (i = 0; i < MIN(count, MAX_QUEUE_NUM); i++) {
             const char *name = ncs[i]->name;
             if (!strncmp(str, name, len)) {
                 readline_add_completion(rs, name);
@@ -4502,7 +4502,7 @@ void netdev_del_completion(ReadLineState *rs, int nb_args, const char *str)
     readline_set_completion_index(rs, len);
     count = qemu_find_net_clients_except(NULL, ncs, NET_CLIENT_OPTIONS_KIND_NIC,
                                          MAX_QUEUE_NUM);
-    for (i = 0; i < count; i++) {
+    for (i = 0; i < MIN(count, MAX_QUEUE_NUM); i++) {
         QemuOpts *opts;
         const char *name = ncs[i]->name;
         if (strncmp(str, name, len)) {
@@ -4576,7 +4576,7 @@ void host_net_remove_completion(ReadLineState *rs, int nb_args, const char *str)
         count = qemu_find_net_clients_except(NULL, ncs,
                                              NET_CLIENT_OPTIONS_KIND_NONE,
                                              MAX_QUEUE_NUM);
-        for (i = 0; i < count; i++) {
+        for (i = 0; i < MIN(count, MAX_QUEUE_NUM); i++) {
             int id;
             char name[16];
 
@@ -4593,7 +4593,7 @@ void host_net_remove_completion(ReadLineState *rs, int nb_args, const char *str)
         count = qemu_find_net_clients_except(NULL, ncs,
                                              NET_CLIENT_OPTIONS_KIND_NIC,
                                              MAX_QUEUE_NUM);
-        for (i = 0; i < count; i++) {
+        for (i = 0; i < MIN(count, MAX_QUEUE_NUM); i++) {
             int id;
             const char *name;
 
-- 
2.1.0


* [Qemu-devel] [PATCH V7 07/16] virtio-ccw: using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (5 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 06/16] monitor: check return value of qemu_find_net_clients_except() Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit Jason Wang
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, Jason Wang, Alexander Graf, Christian Borntraeger,
	Cornelia Huck, Richard Henderson

It's a bad idea to use vector 0 for an invalid virtqueue, so this
patch switches to using VIRTIO_NO_VECTOR instead.

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
CC: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
---
 hw/s390x/virtio-ccw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
index d32ecaf..0434f56 100644
--- a/hw/s390x/virtio-ccw.c
+++ b/hw/s390x/virtio-ccw.c
@@ -281,7 +281,7 @@ static int virtio_ccw_set_vqs(SubchDev *sch, uint64_t addr, uint32_t align,
 
     virtio_queue_set_addr(vdev, index, addr);
     if (!addr) {
-        virtio_queue_set_vector(vdev, index, 0);
+        virtio_queue_set_vector(vdev, index, VIRTIO_NO_VECTOR);
     } else {
         /* Fail if we don't have a big enough queue. */
         /* TODO: Add interface to handle vring.num changing */
-- 
2.1.0


* [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (6 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 07/16] virtio-ccw: using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-27 11:05   ` Michael S. Tsirkin
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 09/16] virtio-ccw: introduce ccw " Jason Wang
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, Jason Wang, Alexander Graf, Christian Borntraeger,
	Cornelia Huck, Paolo Bonzini, Richard Henderson

This patch introduces a bus specific queue limit. It will be useful
for increasing the limit of one bus without disturbing the other
buses.
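The shape of the change can be sketched as follows (type and helper
names here are simplified stand-ins, and the numeric limits are only
illustrative of where the series ends up): each transport's bus class
carries its own queue limit, queried through a helper instead of a
single global VIRTIO_PCI_QUEUE_MAX macro.

```c
#include <assert.h>

/* Per-bus class data: each transport declares its own limit. */
typedef struct {
    unsigned queue_max;
} VirtioBusClassSketch;

static unsigned queue_max_of(const VirtioBusClassSketch *k)
{
    return k->queue_max;
}
```

Devices then validate their queue counts against the limit of the bus
they are plugged into, so raising the PCI limit leaves ccw, s390, and
mmio untouched.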

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
---
 hw/char/virtio-serial-bus.c    |  2 +-
 hw/net/virtio-net.c            |  4 ++--
 hw/s390x/s390-virtio-bus.c     |  1 +
 hw/s390x/virtio-ccw.c          |  1 +
 hw/scsi/virtio-scsi.c          |  4 ++--
 hw/virtio/virtio-mmio.c        |  1 +
 hw/virtio/virtio-pci.c         |  1 +
 hw/virtio/virtio.c             | 40 +++++++++++++++++++++++++---------------
 include/hw/virtio/virtio-bus.h |  1 +
 include/hw/virtio/virtio.h     |  1 +
 10 files changed, 36 insertions(+), 20 deletions(-)

diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
index e336bdb..0694831 100644
--- a/hw/char/virtio-serial-bus.c
+++ b/hw/char/virtio-serial-bus.c
@@ -973,7 +973,7 @@ static void virtio_serial_device_realize(DeviceState *dev, Error **errp)
     }
 
     /* Each port takes 2 queues, and one pair is for the control queue */
-    max_supported_ports = VIRTIO_PCI_QUEUE_MAX / 2 - 1;
+    max_supported_ports = virtio_get_queue_max(vdev) / 2 - 1;
 
     if (vser->serial.max_virtserial_ports > max_supported_ports) {
         error_setg(errp, "maximum ports supported: %u", max_supported_ports);
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index b6fac9c..bf286f5 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1588,10 +1588,10 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);
 
     n->max_queues = MAX(n->nic_conf.peers.queues, 1);
-    if (n->max_queues * 2 + 1 > VIRTIO_PCI_QUEUE_MAX) {
+    if (n->max_queues * 2 + 1 > virtio_get_queue_max(vdev)) {
         error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
                    "must be a postive integer less than %d.",
-                   n->max_queues, (VIRTIO_PCI_QUEUE_MAX - 1) / 2);
+                   n->max_queues, (virtio_get_queue_max(vdev) - 1) / 2);
         virtio_cleanup(vdev);
         return;
     }
diff --git a/hw/s390x/s390-virtio-bus.c b/hw/s390x/s390-virtio-bus.c
index 047c963..2b41e32 100644
--- a/hw/s390x/s390-virtio-bus.c
+++ b/hw/s390x/s390-virtio-bus.c
@@ -715,6 +715,7 @@ static void virtio_s390_bus_class_init(ObjectClass *klass, void *data)
     bus_class->max_dev = 1;
     k->notify = virtio_s390_notify;
     k->get_features = virtio_s390_get_features;
+    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
 }
 
 static const TypeInfo virtio_s390_bus_info = {
diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
index 0434f56..590eed5 100644
--- a/hw/s390x/virtio-ccw.c
+++ b/hw/s390x/virtio-ccw.c
@@ -1715,6 +1715,7 @@ static void virtio_ccw_bus_class_init(ObjectClass *klass, void *data)
     k->load_queue = virtio_ccw_load_queue;
     k->save_config = virtio_ccw_save_config;
     k->load_config = virtio_ccw_load_config;
+    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
 }
 
 static const TypeInfo virtio_ccw_bus_info = {
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index c9bea06..fbdde2b 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -826,10 +826,10 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
                 sizeof(VirtIOSCSIConfig));
 
     if (s->conf.num_queues == 0 ||
-            s->conf.num_queues > VIRTIO_PCI_QUEUE_MAX - 2) {
+            s->conf.num_queues > virtio_get_queue_max(vdev) - 2) {
         error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
                          "must be a positive integer less than %d.",
-                   s->conf.num_queues, VIRTIO_PCI_QUEUE_MAX - 2);
+                   s->conf.num_queues, virtio_get_queue_max(vdev) - 2);
         virtio_cleanup(vdev);
         return;
     }
diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
index 10123f3..2ae6942 100644
--- a/hw/virtio/virtio-mmio.c
+++ b/hw/virtio/virtio-mmio.c
@@ -403,6 +403,7 @@ static void virtio_mmio_bus_class_init(ObjectClass *klass, void *data)
     k->device_plugged = virtio_mmio_device_plugged;
     k->has_variable_vring_alignment = true;
     bus_class->max_dev = 1;
+    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
 }
 
 static const TypeInfo virtio_mmio_bus_info = {
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index c7c3f72..075b13b 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1498,6 +1498,7 @@ static void virtio_pci_bus_class_init(ObjectClass *klass, void *data)
     k->vmstate_change = virtio_pci_vmstate_change;
     k->device_plugged = virtio_pci_device_plugged;
     k->device_unplugged = virtio_pci_device_unplugged;
+    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
 }
 
 static const TypeInfo virtio_pci_bus_info = {
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 17c1260..bbb224f 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -541,6 +541,14 @@ void virtio_update_irq(VirtIODevice *vdev)
     virtio_notify_vector(vdev, VIRTIO_NO_VECTOR);
 }
 
+int virtio_get_queue_max(VirtIODevice *vdev)
+{
+    BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+
+    return k->queue_max;
+}
+
 void virtio_set_status(VirtIODevice *vdev, uint8_t val)
 {
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
@@ -599,7 +607,7 @@ void virtio_reset(void *opaque)
     vdev->config_vector = VIRTIO_NO_VECTOR;
     virtio_notify_vector(vdev, vdev->config_vector);
 
-    for(i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
+    for (i = 0; i < virtio_get_queue_max(vdev); i++) {
         vdev->vq[i].vring.desc = 0;
         vdev->vq[i].vring.avail = 0;
         vdev->vq[i].vring.used = 0;
@@ -738,7 +746,8 @@ int virtio_queue_get_num(VirtIODevice *vdev, int n)
 int virtio_queue_get_id(VirtQueue *vq)
 {
     VirtIODevice *vdev = vq->vdev;
-    assert(vq >= &vdev->vq[0] && vq < &vdev->vq[VIRTIO_PCI_QUEUE_MAX]);
+
+    assert(vq >= &vdev->vq[0] && vq < &vdev->vq[virtio_get_queue_max(vdev)]);
     return vq - &vdev->vq[0];
 }
 
@@ -774,28 +783,29 @@ void virtio_queue_notify(VirtIODevice *vdev, int n)
 
 uint16_t virtio_queue_vector(VirtIODevice *vdev, int n)
 {
-    return n < VIRTIO_PCI_QUEUE_MAX ? vdev->vq[n].vector :
+    return n < virtio_get_queue_max(vdev) ? vdev->vq[n].vector :
         VIRTIO_NO_VECTOR;
 }
 
 void virtio_queue_set_vector(VirtIODevice *vdev, int n, uint16_t vector)
 {
-    if (n < VIRTIO_PCI_QUEUE_MAX)
+    if (n < virtio_get_queue_max(vdev))
         vdev->vq[n].vector = vector;
 }
 
 VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
                             void (*handle_output)(VirtIODevice *, VirtQueue *))
 {
-    int i;
+    int i, queue_max = virtio_get_queue_max(vdev);
 
-    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
+    for (i = 0; i < queue_max; i++) {
         if (vdev->vq[i].vring.num == 0)
             break;
     }
 
-    if (i == VIRTIO_PCI_QUEUE_MAX || queue_size > VIRTQUEUE_MAX_SIZE)
+    if (i == queue_max || queue_size > VIRTQUEUE_MAX_SIZE) {
         abort();
+    }
 
     vdev->vq[i].vring.num = queue_size;
     vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
@@ -806,7 +816,7 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
 
 void virtio_del_queue(VirtIODevice *vdev, int n)
 {
-    if (n < 0 || n >= VIRTIO_PCI_QUEUE_MAX) {
+    if (n < 0 || n >= virtio_get_queue_max(vdev)) {
         abort();
     }
 
@@ -916,14 +926,14 @@ void virtio_save(VirtIODevice *vdev, QEMUFile *f)
     qemu_put_be32(f, vdev->config_len);
     qemu_put_buffer(f, vdev->config, vdev->config_len);
 
-    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
+    for (i = 0; i < k->queue_max; i++) {
         if (vdev->vq[i].vring.num == 0)
             break;
     }
 
     qemu_put_be32(f, i);
 
-    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
+    for (i = 0; i < k->queue_max; i++) {
         if (vdev->vq[i].vring.num == 0)
             break;
 
@@ -988,7 +998,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
     qemu_get_8s(f, &vdev->status);
     qemu_get_8s(f, &vdev->isr);
     qemu_get_be16s(f, &vdev->queue_sel);
-    if (vdev->queue_sel >= VIRTIO_PCI_QUEUE_MAX) {
+    if (vdev->queue_sel >= k->queue_max) {
         return -1;
     }
     qemu_get_be32s(f, &features);
@@ -1015,7 +1025,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
 
     num = qemu_get_be32(f);
 
-    if (num > VIRTIO_PCI_QUEUE_MAX) {
+    if (num > k->queue_max) {
         error_report("Invalid number of PCI queues: 0x%x", num);
         return -1;
     }
@@ -1125,15 +1135,15 @@ void virtio_instance_init_common(Object *proxy_obj, void *data,
 void virtio_init(VirtIODevice *vdev, const char *name,
                  uint16_t device_id, size_t config_size)
 {
-    int i;
+    int i, queue_max = virtio_get_queue_max(vdev);
     vdev->device_id = device_id;
     vdev->status = 0;
     vdev->isr = 0;
     vdev->queue_sel = 0;
     vdev->config_vector = VIRTIO_NO_VECTOR;
-    vdev->vq = g_malloc0(sizeof(VirtQueue) * VIRTIO_PCI_QUEUE_MAX);
+    vdev->vq = g_malloc0(sizeof(VirtQueue) * queue_max);
     vdev->vm_running = runstate_is_running();
-    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
+    for (i = 0; i < queue_max; i++) {
         vdev->vq[i].vector = VIRTIO_NO_VECTOR;
         vdev->vq[i].vdev = vdev;
         vdev->vq[i].queue_index = i;
diff --git a/include/hw/virtio/virtio-bus.h b/include/hw/virtio/virtio-bus.h
index 0d2e7b4..4da8022 100644
--- a/include/hw/virtio/virtio-bus.h
+++ b/include/hw/virtio/virtio-bus.h
@@ -68,6 +68,7 @@ typedef struct VirtioBusClass {
      * Note that changing this will break migration for this transport.
      */
     bool has_variable_vring_alignment;
+    uint16_t queue_max;
 } VirtioBusClass;
 
 struct VirtioBusState {
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index d95f8b6..91fd673 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -179,6 +179,7 @@ void virtio_queue_notify(VirtIODevice *vdev, int n);
 uint16_t virtio_queue_vector(VirtIODevice *vdev, int n);
 void virtio_queue_set_vector(VirtIODevice *vdev, int n, uint16_t vector);
 void virtio_set_status(VirtIODevice *vdev, uint8_t val);
+int virtio_get_queue_max(VirtIODevice *vdev);
 void virtio_reset(void *opaque);
 void virtio_update_irq(VirtIODevice *vdev);
 int virtio_set_features(VirtIODevice *vdev, uint32_t val);
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [PATCH V7 09/16] virtio-ccw: introduce ccw specific queue limit
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (7 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23 10:59   ` Cornelia Huck
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 10/16] virtio-s390: switch to bus " Jason Wang
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, Jason Wang, Alexander Graf, Christian Borntraeger,
	Cornelia Huck, Richard Henderson

Instead of depending on a macro, use a bus-specific limit. Also make
it clear that the number of gsis per I/O adapter does not directly
depend on the number of virtio queues, but rather the other way
around.

Cc: Alexander Graf <agraf@suse.de>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/s390x/s390-virtio-ccw.c   |  7 +++++--
 hw/s390x/virtio-ccw.c        | 21 +++++++++++++--------
 include/hw/s390x/s390_flic.h |  4 +++-
 3 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index afb539a..96b84ab 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -55,6 +55,7 @@ void io_subsystem_reset(void)
 
 static int virtio_ccw_hcall_notify(const uint64_t *args)
 {
+    VirtIODevice *vdev;
     uint64_t subch_id = args[0];
     uint64_t queue = args[1];
     SubchDev *sch;
@@ -67,10 +68,12 @@ static int virtio_ccw_hcall_notify(const uint64_t *args)
     if (!sch || !css_subch_visible(sch)) {
         return -EINVAL;
     }
-    if (queue >= VIRTIO_PCI_QUEUE_MAX) {
+
+    vdev = virtio_ccw_get_vdev(sch);
+    if (queue >= virtio_get_queue_max(vdev)) {
         return -EINVAL;
     }
-    virtio_queue_notify(virtio_ccw_get_vdev(sch), queue);
+    virtio_queue_notify(vdev, queue);
     return 0;
 
 }
diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
index 590eed5..5619eb4 100644
--- a/hw/s390x/virtio-ccw.c
+++ b/hw/s390x/virtio-ccw.c
@@ -32,6 +32,8 @@
 static QTAILQ_HEAD(, IndAddr) indicator_addresses =
     QTAILQ_HEAD_INITIALIZER(indicator_addresses);
 
+#define VIRTIO_CCW_QUEUE_MAX ADAPTER_ROUTES_MAX_GSI
+
 static IndAddr *get_indicator(hwaddr ind_addr, int len)
 {
     IndAddr *indicator;
@@ -170,7 +172,7 @@ static void virtio_ccw_start_ioeventfd(VirtioCcwDevice *dev)
         return;
     }
     vdev = virtio_bus_get_device(&dev->bus);
-    for (n = 0; n < VIRTIO_PCI_QUEUE_MAX; n++) {
+    for (n = 0; n < virtio_get_queue_max(vdev); n++) {
         if (!virtio_queue_get_num(vdev, n)) {
             continue;
         }
@@ -205,7 +207,7 @@ static void virtio_ccw_stop_ioeventfd(VirtioCcwDevice *dev)
         return;
     }
     vdev = virtio_bus_get_device(&dev->bus);
-    for (n = 0; n < VIRTIO_PCI_QUEUE_MAX; n++) {
+    for (n = 0; n < virtio_get_queue_max(vdev); n++) {
         if (!virtio_queue_get_num(vdev, n)) {
             continue;
         }
@@ -265,8 +267,9 @@ static int virtio_ccw_set_vqs(SubchDev *sch, uint64_t addr, uint32_t align,
                               uint16_t index, uint16_t num)
 {
     VirtIODevice *vdev = virtio_ccw_get_vdev(sch);
+    int queue_max = virtio_get_queue_max(vdev);
 
-    if (index >= VIRTIO_PCI_QUEUE_MAX) {
+    if (index >= queue_max) {
         return -EINVAL;
     }
 
@@ -291,7 +294,7 @@ static int virtio_ccw_set_vqs(SubchDev *sch, uint64_t addr, uint32_t align,
         virtio_queue_set_vector(vdev, index, index);
     }
     /* tell notify handler in case of config change */
-    vdev->config_vector = VIRTIO_PCI_QUEUE_MAX;
+    vdev->config_vector = queue_max;
     return 0;
 }
 
@@ -549,7 +552,7 @@ static int virtio_ccw_cb(SubchDev *sch, CCW1 ccw)
             ret = -EFAULT;
         } else {
             vq_config.index = lduw_be_phys(&address_space_memory, ccw.cda);
-            if (vq_config.index >= VIRTIO_PCI_QUEUE_MAX) {
+            if (vq_config.index >= virtio_get_queue_max(vdev)) {
                 ret = -EINVAL;
                 break;
             }
@@ -1040,14 +1043,16 @@ static uint8_t virtio_set_ind_atomic(SubchDev *sch, uint64_t ind_loc,
 static void virtio_ccw_notify(DeviceState *d, uint16_t vector)
 {
     VirtioCcwDevice *dev = to_virtio_ccw_dev_fast(d);
+    VirtIODevice *vdev = virtio_bus_get_device(&dev->bus);
     SubchDev *sch = dev->sch;
     uint64_t indicators;
 
-    if (vector >= 128) {
+    /* queue indicators + secondary indicators */
+    if (vector >= virtio_get_queue_max(vdev) + 64) {
         return;
     }
 
-    if (vector < VIRTIO_PCI_QUEUE_MAX) {
+    if (vector < virtio_get_queue_max(vdev)) {
         if (!dev->indicators) {
             return;
         }
@@ -1715,7 +1720,7 @@ static void virtio_ccw_bus_class_init(ObjectClass *klass, void *data)
     k->load_queue = virtio_ccw_load_queue;
     k->save_config = virtio_ccw_save_config;
     k->load_config = virtio_ccw_load_config;
-    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
+    k->queue_max = VIRTIO_CCW_QUEUE_MAX;
 }
 
 static const TypeInfo virtio_ccw_bus_info = {
diff --git a/include/hw/s390x/s390_flic.h b/include/hw/s390x/s390_flic.h
index 489d73b..e1b0389 100644
--- a/include/hw/s390x/s390_flic.h
+++ b/include/hw/s390x/s390_flic.h
@@ -17,10 +17,12 @@
 #include "hw/s390x/adapter.h"
 #include "hw/virtio/virtio.h"
 
+#define ADAPTER_ROUTES_MAX_GSI 64
+
 typedef struct AdapterRoutes {
     AdapterInfo adapter;
     int num_routes;
-    int gsi[VIRTIO_PCI_QUEUE_MAX];
+    int gsi[ADAPTER_ROUTES_MAX_GSI];
 } AdapterRoutes;
 
 #define TYPE_S390_FLIC_COMMON "s390-flic"
-- 
2.1.0



* [Qemu-devel] [PATCH V7 10/16] virtio-s390: switch to bus specific queue limit
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (8 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 09/16] virtio-ccw: introduce ccw " Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 11/16] virtio-mmio: " Jason Wang
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, Jason Wang, Alexander Graf, Christian Borntraeger,
	Cornelia Huck, Richard Henderson

Instead of depending on a macro, switch to a bus-specific queue
limit.

Cc: Alexander Graf <agraf@suse.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
---
 hw/s390x/s390-virtio-bus.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/s390x/s390-virtio-bus.c b/hw/s390x/s390-virtio-bus.c
index 2b41e32..9d6028d 100644
--- a/hw/s390x/s390-virtio-bus.c
+++ b/hw/s390x/s390-virtio-bus.c
@@ -45,6 +45,8 @@
     do { } while (0)
 #endif
 
+#define VIRTIO_S390_QUEUE_MAX 64
+
 static void virtio_s390_bus_new(VirtioBusState *bus, size_t bus_size,
                                 VirtIOS390Device *dev);
 
@@ -344,7 +346,7 @@ static ram_addr_t s390_virtio_device_num_vq(VirtIOS390Device *dev)
     VirtIODevice *vdev = dev->vdev;
     int num_vq;
 
-    for (num_vq = 0; num_vq < VIRTIO_PCI_QUEUE_MAX; num_vq++) {
+    for (num_vq = 0; num_vq < virtio_get_queue_max(vdev); num_vq++) {
         if (!virtio_queue_get_num(vdev, num_vq)) {
             break;
         }
@@ -446,7 +448,7 @@ VirtIOS390Device *s390_virtio_bus_find_vring(VirtIOS390Bus *bus,
     QTAILQ_FOREACH(kid, &bus->bus.children, sibling) {
         VirtIOS390Device *dev = (VirtIOS390Device *)kid->child;
 
-        for(i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
+        for (i = 0; i < virtio_get_queue_max(dev->vdev); i++) {
             if (!virtio_queue_get_addr(dev->vdev, i))
                 break;
             if (virtio_queue_get_addr(dev->vdev, i) == mem) {
@@ -715,7 +717,7 @@ static void virtio_s390_bus_class_init(ObjectClass *klass, void *data)
     bus_class->max_dev = 1;
     k->notify = virtio_s390_notify;
     k->get_features = virtio_s390_get_features;
-    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
+    k->queue_max = VIRTIO_S390_QUEUE_MAX;
 }
 
 static const TypeInfo virtio_s390_bus_info = {
-- 
2.1.0


* [Qemu-devel] [PATCH V7 11/16] virtio-mmio: switch to bus specific queue limit
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (9 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 10/16] virtio-s390: switch to bus " Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 12/16] virtio-pci: switch to use " Jason Wang
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, mst

Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/virtio-mmio.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
index 2ae6942..dbd44b6 100644
--- a/hw/virtio/virtio-mmio.c
+++ b/hw/virtio/virtio-mmio.c
@@ -34,6 +34,8 @@ do { printf("virtio_mmio: " fmt , ## __VA_ARGS__); } while (0)
 #define DPRINTF(fmt, ...) do {} while (0)
 #endif
 
+#define VIRTIO_MMIO_QUEUE_MAX 64
+
 /* QOM macros */
 /* virtio-mmio-bus */
 #define TYPE_VIRTIO_MMIO_BUS "virtio-mmio-bus"
@@ -237,7 +239,7 @@ static void virtio_mmio_write(void *opaque, hwaddr offset, uint64_t value,
                 proxy->guest_page_shift);
         break;
     case VIRTIO_MMIO_QUEUESEL:
-        if (value < VIRTIO_PCI_QUEUE_MAX) {
+        if (value < virtio_get_queue_max(vdev)) {
             vdev->queue_sel = value;
         }
         break;
@@ -257,7 +259,7 @@ static void virtio_mmio_write(void *opaque, hwaddr offset, uint64_t value,
         }
         break;
     case VIRTIO_MMIO_QUEUENOTIFY:
-        if (value < VIRTIO_PCI_QUEUE_MAX) {
+        if (value < virtio_get_queue_max(vdev)) {
             virtio_queue_notify(vdev, value);
         }
         break;
@@ -403,7 +405,7 @@ static void virtio_mmio_bus_class_init(ObjectClass *klass, void *data)
     k->device_plugged = virtio_mmio_device_plugged;
     k->has_variable_vring_alignment = true;
     bus_class->max_dev = 1;
-    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
+    k->queue_max = VIRTIO_MMIO_QUEUE_MAX;
 }
 
 static const TypeInfo virtio_mmio_bus_info = {
-- 
2.1.0


* [Qemu-devel] [PATCH V7 12/16] virtio-pci: switch to use bus specific queue limit
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (10 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 11/16] virtio-mmio: " Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 13/16] virtio: introduce vector to virtqueues mapping Jason Wang
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, mst

Instead of depending on a macro, switch to a bus-specific queue
limit.

Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/virtio-pci.c     | 12 +++++++-----
 include/hw/virtio/virtio.h |  2 --
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 075b13b..e556919 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -42,6 +42,8 @@
  * configuration space */
 #define VIRTIO_PCI_CONFIG_SIZE(dev)     VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
 
+#define VIRTIO_PCI_QUEUE_MAX 64
+
 static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
                                VirtIOPCIProxy *dev);
 
@@ -171,7 +173,7 @@ static void virtio_pci_start_ioeventfd(VirtIOPCIProxy *proxy)
         return;
     }
 
-    for (n = 0; n < VIRTIO_PCI_QUEUE_MAX; n++) {
+    for (n = 0; n < virtio_get_queue_max(vdev); n++) {
         if (!virtio_queue_get_num(vdev, n)) {
             continue;
         }
@@ -207,7 +209,7 @@ static void virtio_pci_stop_ioeventfd(VirtIOPCIProxy *proxy)
         return;
     }
 
-    for (n = 0; n < VIRTIO_PCI_QUEUE_MAX; n++) {
+    for (n = 0; n < virtio_get_queue_max(vdev); n++) {
         if (!virtio_queue_get_num(vdev, n)) {
             continue;
         }
@@ -243,11 +245,11 @@ static void virtio_ioport_write(void *opaque, uint32_t addr, uint32_t val)
             virtio_queue_set_addr(vdev, vdev->queue_sel, pa);
         break;
     case VIRTIO_PCI_QUEUE_SEL:
-        if (val < VIRTIO_PCI_QUEUE_MAX)
+        if (val < virtio_get_queue_max(vdev))
             vdev->queue_sel = val;
         break;
     case VIRTIO_PCI_QUEUE_NOTIFY:
-        if (val < VIRTIO_PCI_QUEUE_MAX) {
+        if (val < virtio_get_queue_max(vdev)) {
             virtio_queue_notify(vdev, val);
         }
         break;
@@ -748,7 +750,7 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
     bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
         kvm_msi_via_irqfd_enabled();
 
-    nvqs = MIN(nvqs, VIRTIO_PCI_QUEUE_MAX);
+    nvqs = MIN(nvqs, virtio_get_queue_max(vdev));
 
     /* When deassigning, pass a consistent nvqs value
      * to avoid leaking notifiers.
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 91fd673..7ff40ac 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -48,8 +48,6 @@ typedef struct VirtQueueElement
     struct iovec out_sg[VIRTQUEUE_MAX_SIZE];
 } VirtQueueElement;
 
-#define VIRTIO_PCI_QUEUE_MAX 64
-
 #define VIRTIO_NO_VECTOR 0xffff
 
 #define TYPE_VIRTIO_DEVICE "virtio-device"
-- 
2.1.0


* [Qemu-devel] [PATCH V7 13/16] virtio: introduce vector to virtqueues mapping
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (11 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 12/16] virtio-pci: switch to use " Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 14/16] virtio-pci: speedup MSI-X masking and unmasking Jason Wang
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, mst

Currently we traverse all virtqueues to find the subset that uses a
specific vector. This is suboptimal once we support hundreds or even
thousands of virtqueues. So this patch introduces a method that a
transport can use to get all virtqueues sharing a given vector. This
is done through QLISTs, with the number of QLISTs queried through a
transport-specific method. When the guest sets a vector, the
virtqueue is linked into the corresponding list; helpers for
traversing the list are also introduced.

The first user will be virtio pci which will use this to speed up
MSI-X masking and unmasking handling.

Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/virtio-pci.c         |  8 ++++++++
 hw/virtio/virtio.c             | 36 ++++++++++++++++++++++++++++++++++--
 include/hw/virtio/virtio-bus.h |  1 +
 include/hw/virtio/virtio.h     |  3 +++
 4 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index e556919..c38f33f 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -910,6 +910,13 @@ static const TypeInfo virtio_9p_pci_info = {
  * virtio-pci: This is the PCIDevice which has a virtio-pci-bus.
  */
 
+static int virtio_pci_query_nvectors(DeviceState *d)
+{
+    VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
+
+    return proxy->nvectors;
+}
+
 /* This is called by virtio-bus just after the device is plugged. */
 static void virtio_pci_device_plugged(DeviceState *d)
 {
@@ -1500,6 +1507,7 @@ static void virtio_pci_bus_class_init(ObjectClass *klass, void *data)
     k->vmstate_change = virtio_pci_vmstate_change;
     k->device_plugged = virtio_pci_device_plugged;
     k->device_unplugged = virtio_pci_device_unplugged;
+    k->query_nvectors = virtio_pci_query_nvectors;
     k->queue_max = VIRTIO_PCI_QUEUE_MAX;
 }
 
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index bbb224f..159e5c6 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -89,6 +89,7 @@ struct VirtQueue
     VirtIODevice *vdev;
     EventNotifier guest_notifier;
     EventNotifier host_notifier;
+    QLIST_ENTRY(VirtQueue) node;
 };
 
 /* virt queue functions */
@@ -613,7 +614,7 @@ void virtio_reset(void *opaque)
         vdev->vq[i].vring.used = 0;
         vdev->vq[i].last_avail_idx = 0;
         vdev->vq[i].pa = 0;
-        vdev->vq[i].vector = VIRTIO_NO_VECTOR;
+        virtio_queue_set_vector(vdev, i, VIRTIO_NO_VECTOR);
         vdev->vq[i].signalled_used = 0;
         vdev->vq[i].signalled_used_valid = false;
         vdev->vq[i].notification = true;
@@ -738,6 +739,16 @@ void virtio_queue_set_num(VirtIODevice *vdev, int n, int num)
     virtqueue_init(&vdev->vq[n]);
 }
 
+VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector)
+{
+    return QLIST_FIRST(&vdev->vector_queues[vector]);
+}
+
+VirtQueue *virtio_vector_next_queue(VirtQueue *vq)
+{
+    return QLIST_NEXT(vq, node);
+}
+
 int virtio_queue_get_num(VirtIODevice *vdev, int n)
 {
     return vdev->vq[n].vring.num;
@@ -789,8 +800,19 @@ uint16_t virtio_queue_vector(VirtIODevice *vdev, int n)
 
 void virtio_queue_set_vector(VirtIODevice *vdev, int n, uint16_t vector)
 {
-    if (n < virtio_get_queue_max(vdev))
+    VirtQueue *vq = &vdev->vq[n];
+
+    if (n < virtio_get_queue_max(vdev)) {
+        if (vdev->vector_queues &&
+            vdev->vq[n].vector != VIRTIO_NO_VECTOR) {
+            QLIST_REMOVE(vq, node);
+        }
         vdev->vq[n].vector = vector;
+        if (vdev->vector_queues &&
+            vector != VIRTIO_NO_VECTOR) {
+            QLIST_INSERT_HEAD(&vdev->vector_queues[vector], vq, node);
+        }
+    }
 }
 
 VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
@@ -1098,6 +1120,7 @@ void virtio_cleanup(VirtIODevice *vdev)
     qemu_del_vm_change_state_handler(vdev->vmstate);
     g_free(vdev->config);
     g_free(vdev->vq);
+    g_free(vdev->vector_queues);
 }
 
 static void virtio_vmstate_change(void *opaque, int running, RunState state)
@@ -1135,7 +1158,16 @@ void virtio_instance_init_common(Object *proxy_obj, void *data,
 void virtio_init(VirtIODevice *vdev, const char *name,
                  uint16_t device_id, size_t config_size)
 {
+    BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
     int i, queue_max = virtio_get_queue_max(vdev);
+    int nvectors = k->query_nvectors ? k->query_nvectors(qbus->parent) : 0;
+
+    if (nvectors) {
+        vdev->vector_queues =
+            g_malloc0(sizeof(*vdev->vector_queues) * nvectors);
+    }
+
     vdev->device_id = device_id;
     vdev->status = 0;
     vdev->isr = 0;
diff --git a/include/hw/virtio/virtio-bus.h b/include/hw/virtio/virtio-bus.h
index 4da8022..12386d5 100644
--- a/include/hw/virtio/virtio-bus.h
+++ b/include/hw/virtio/virtio-bus.h
@@ -62,6 +62,7 @@ typedef struct VirtioBusClass {
      * This is called by virtio-bus just before the device is unplugged.
      */
     void (*device_unplugged)(DeviceState *d);
+    int (*query_nvectors)(DeviceState *d);
     /*
      * Does the transport have variable vring alignment?
      * (ie can it ever call virtio_queue_set_align()?)
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 7ff40ac..e3adb1d 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -82,6 +82,7 @@ struct VirtIODevice
     VMChangeStateEntry *vmstate;
     char *bus_name;
     uint8_t device_endian;
+    QLIST_HEAD(, VirtQueue) * vector_queues;
 };
 
 typedef struct VirtioDeviceClass {
@@ -217,6 +218,8 @@ void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                bool set_handler);
 void virtio_queue_notify_vq(VirtQueue *vq);
 void virtio_irq(VirtQueue *vq);
+VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
+VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
 
 static inline void virtio_add_feature(uint32_t *features, unsigned int fbit)
 {
-- 
2.1.0


* [Qemu-devel] [PATCH V7 14/16] virtio-pci: speedup MSI-X masking and unmasking
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (12 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 13/16] virtio: introduce vector to virtqueues mapping Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513 Jason Wang
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jason Wang, mst

This patch speeds up MSI-X masking and unmasking through the mapping
between vectors and queues. With this patch there is no need to go
through all possible virtqueues, which helps reduce the time spent
masking or unmasking a single vector when hundreds or even thousands
of virtqueues are supported.

Tested with an 80 queue pair virtio-net-pci device, changing the smp
affinity in the background while running netperf:

Before the patch:
5711.70 Gbits/sec
After the patch:
6830.98 Gbits/sec

About a 19.6% improvement in throughput.

Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/virtio-pci.c | 40 +++++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index c38f33f..7d01500 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -632,28 +632,30 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
 {
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    int ret, queue_no;
+    VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    int ret, index, unmasked = 0;
 
-    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
+    while (vq) {
+        index = virtio_get_queue_index(vq);
+        if (!virtio_queue_get_num(vdev, index)) {
             break;
         }
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        ret = virtio_pci_vq_vector_unmask(proxy, queue_no, vector, msg);
+        ret = virtio_pci_vq_vector_unmask(proxy, index, vector, msg);
         if (ret < 0) {
             goto undo;
         }
+        vq = virtio_vector_next_queue(vq);
+        ++unmasked;
     }
+
     return 0;
 
 undo:
-    while (--queue_no >= 0) {
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        virtio_pci_vq_vector_mask(proxy, queue_no, vector);
+    vq = virtio_vector_first_queue(vdev, vector);
+    while (vq && --unmasked >= 0) {
+        index = virtio_get_queue_index(vq);
+        virtio_pci_vq_vector_mask(proxy, index, vector);
+        vq = virtio_vector_next_queue(vq);
     }
     return ret;
 }
@@ -662,16 +664,16 @@ static void virtio_pci_vector_mask(PCIDevice *dev, unsigned vector)
 {
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    int queue_no;
+    VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    int index;
 
-    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
+    while (vq) {
+        index = virtio_get_queue_index(vq);
+        if (!virtio_queue_get_num(vdev, index)) {
             break;
         }
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        virtio_pci_vq_vector_mask(proxy, queue_no, vector);
+        virtio_pci_vq_vector_mask(proxy, index, vector);
+        vq = virtio_vector_next_queue(vq);
     }
 }
 
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (13 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 14/16] virtio-pci: speedup MSI-X masking and unmasking Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23 11:24   ` Cornelia Huck
                     ` (2 more replies)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 16/16] pci: remove hard-coded bar size in msix_init_exclusive_bar() Jason Wang
                   ` (2 subsequent siblings)
  17 siblings, 3 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, Jason Wang, Alexander Graf, qemu-ppc, Paolo Bonzini,
	Richard Henderson

This patch increases the maximum number of virtqueues for pci from 64
to 513. This will allow booting a virtio-net-pci device with 256 queue
pairs on recent Linux hosts (which support up to 256 tuntap queue pairs).

To keep migration compatibility, 64 is kept for legacy machine
types. This is because qemu in fact allows the guest to probe the
limit of virtqueues by trying to set queue_sel, so for legacy machine
types we must make sure that setting queue_sel to more than 63 has no
effect.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/i386/pc_piix.c      | 5 +++++
 hw/i386/pc_q35.c       | 5 +++++
 hw/ppc/spapr.c         | 5 +++++
 hw/virtio/virtio-pci.c | 6 +++++-
 4 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 212e263..6e098ce 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -49,6 +49,7 @@
 #include "hw/acpi/acpi.h"
 #include "cpu.h"
 #include "qemu/error-report.h"
+#include "hw/virtio/virtio-bus.h"
 #ifdef CONFIG_XEN
 #  include <xen/hvm/hvm_info_table.h>
 #endif
@@ -312,6 +313,10 @@ static void pc_init_pci(MachineState *machine)
 
 static void pc_compat_2_3(MachineState *machine)
 {
+    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
+    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
+
+    k->queue_max = 64;
 }
 
 static void pc_compat_2_2(MachineState *machine)
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index e67f2de..ff7c414 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -45,6 +45,7 @@
 #include "hw/usb.h"
 #include "hw/cpu/icc_bus.h"
 #include "qemu/error-report.h"
+#include "include/hw/virtio/virtio-bus.h"
 
 /* ICH9 AHCI has 6 ports */
 #define MAX_SATA_PORTS     6
@@ -291,6 +292,10 @@ static void pc_q35_init(MachineState *machine)
 
 static void pc_compat_2_3(MachineState *machine)
 {
+    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
+    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
+
+    k->queue_max = 64;
 }
 
 static void pc_compat_2_2(MachineState *machine)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 8e43aa2..ee8f6a3 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -50,6 +50,7 @@
 #include "hw/pci/pci.h"
 #include "hw/scsi/scsi.h"
 #include "hw/virtio/virtio-scsi.h"
+#include "hw/virtio/virtio-bus.h"
 
 #include "exec/address-spaces.h"
 #include "hw/usb.h"
@@ -1827,6 +1828,10 @@ static const TypeInfo spapr_machine_info = {
 
 static void spapr_compat_2_3(Object *obj)
 {
+    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
+    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
+
+    k->queue_max = 64;
 }
 
 static void spapr_compat_2_2(Object *obj)
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 7d01500..c510cb7 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -42,7 +42,11 @@
  * configuration space */
 #define VIRTIO_PCI_CONFIG_SIZE(dev)     VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
 
-#define VIRTIO_PCI_QUEUE_MAX 64
+/* The number was chosen to be greater than both the maximum number of
+ * vcpus supported by the host and the maximum number of tuntap queues
+ * supported by the host, and also leaves some space for the future.
+ */
+#define VIRTIO_PCI_QUEUE_MAX 1024
 
 static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
                                VirtIOPCIProxy *dev);
-- 
2.1.0

* [Qemu-devel] [PATCH V7 16/16] pci: remove hard-coded bar size in msix_init_exclusive_bar()
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (14 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513 Jason Wang
@ 2015-04-23  6:21 ` Jason Wang
  2015-04-23 11:27 ` [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Cornelia Huck
  2015-04-27 19:06 ` Michael S. Tsirkin
  17 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-23  6:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Keith Busch, Kevin Wolf, Jason Wang, Stefan Hajnoczi, mst

This patch lets msix_init_exclusive_bar() calculate the BAR and PBA
size based on the number of MSI-X vectors instead of using a
hard-coded limit of 4096. This is needed to allow a device to have
more than 128 MSI-X vectors. To keep migration compatibility, 4096 is
still used as the BAR size and 2048 as the PBA offset when there are
128 vectors or fewer.

Note: we don't care about using more than 128 vectors for legacy
machine types. Since the queue max is limited to 64 there, vectors >=
65 are meaningless anyway.

Virtio devices will be the first users of this.

Cc: Keith Busch <keith.busch@intel.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/pci/msix.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/hw/pci/msix.c b/hw/pci/msix.c
index 24de260..f8748cf 100644
--- a/hw/pci/msix.c
+++ b/hw/pci/msix.c
@@ -295,29 +295,37 @@ int msix_init_exclusive_bar(PCIDevice *dev, unsigned short nentries,
 {
     int ret;
     char *name;
+    uint32_t bar_size = 4096;
+    uint32_t bar_pba_offset = bar_size / 2;
+    uint32_t bar_pba_size = (nentries / 8 + 1) * 8;
 
     /*
      * Migration compatibility dictates that this remains a 4k
      * BAR with the vector table in the lower half and PBA in
-     * the upper half.  Do not use these elsewhere!
+     * the upper half for nentries lower than or equal to 128.
+     * No need to care about using more than 65 entries for legacy
+     * machine types, which have at most 64 queues.
      */
-#define MSIX_EXCLUSIVE_BAR_SIZE 4096
-#define MSIX_EXCLUSIVE_BAR_TABLE_OFFSET 0
-#define MSIX_EXCLUSIVE_BAR_PBA_OFFSET (MSIX_EXCLUSIVE_BAR_SIZE / 2)
-#define MSIX_EXCLUSIVE_CAP_OFFSET 0
+    if (nentries * PCI_MSIX_ENTRY_SIZE > bar_pba_offset) {
+        bar_pba_offset = nentries * PCI_MSIX_ENTRY_SIZE;
+    }
 
-    if (nentries * PCI_MSIX_ENTRY_SIZE > MSIX_EXCLUSIVE_BAR_PBA_OFFSET) {
-        return -EINVAL;
+    if (bar_pba_offset + bar_pba_size > 4096) {
+        bar_size = bar_pba_offset + bar_pba_size;
+    }
+
+    if (bar_size & (bar_size - 1)) {
+        bar_size = 1 << qemu_fls(bar_size);
     }
 
     name = g_strdup_printf("%s-msix", dev->name);
-    memory_region_init(&dev->msix_exclusive_bar, OBJECT(dev), name, MSIX_EXCLUSIVE_BAR_SIZE);
+    memory_region_init(&dev->msix_exclusive_bar, OBJECT(dev), name, bar_size);
     g_free(name);
 
     ret = msix_init(dev, nentries, &dev->msix_exclusive_bar, bar_nr,
-                    MSIX_EXCLUSIVE_BAR_TABLE_OFFSET, &dev->msix_exclusive_bar,
-                    bar_nr, MSIX_EXCLUSIVE_BAR_PBA_OFFSET,
-                    MSIX_EXCLUSIVE_CAP_OFFSET);
+                    0, &dev->msix_exclusive_bar,
+                    bar_nr, bar_pba_offset,
+                    0);
     if (ret) {
         return ret;
     }
-- 
2.1.0

* Re: [Qemu-devel] [PATCH V7 09/16] virtio-ccw: introduce ccw specific queue limit
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 09/16] virtio-ccw: introduce ccw " Jason Wang
@ 2015-04-23 10:59   ` Cornelia Huck
  0 siblings, 0 replies; 47+ messages in thread
From: Cornelia Huck @ 2015-04-23 10:59 UTC (permalink / raw)
  To: Jason Wang
  Cc: Alexander Graf, Richard Henderson, qemu-devel,
	Christian Borntraeger, mst

On Thu, 23 Apr 2015 14:21:42 +0800
Jason Wang <jasowang@redhat.com> wrote:

> Instead of depending on a macro, use a bus-specific limit. Also make
> it clear that the number of gsis per I/O adapter is not directly
> depending on the number of virtio queues, but rather the other way
> around.
> 
> Cc: Alexander Graf <agraf@suse.de>
> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  hw/s390x/s390-virtio-ccw.c   |  7 +++++--
>  hw/s390x/virtio-ccw.c        | 21 +++++++++++++--------
>  include/hw/s390x/s390_flic.h |  4 +++-
>  3 files changed, 21 insertions(+), 11 deletions(-)

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>

* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513 Jason Wang
@ 2015-04-23 11:24   ` Cornelia Huck
  2015-04-28  3:05     ` Jason Wang
  2015-04-27 11:02   ` Michael S. Tsirkin
  2015-05-14 18:54   ` Eduardo Habkost
  2 siblings, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2015-04-23 11:24 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, Alexander Graf, qemu-devel, qemu-ppc, Paolo Bonzini,
	Richard Henderson

On Thu, 23 Apr 2015 14:21:48 +0800
Jason Wang <jasowang@redhat.com> wrote:

> This patch increases the maximum number of virtqueues for pci from 64
> to 513. This will allow booting a virtio-net-pci device with 256 queue
     ^^^

> pairs on recent Linux host (which supports up to 256 tuntap queue pairs).
> 
> To keep migration compatibility, 64 was kept for legacy machine
> types. This is because qemu in fact allows guest to probe the limit of
> virtqueues through trying to set queue_sel. So for legacy machine
> type, we should make sure setting queue_sel to more than 63 won't
> make effect.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: qemu-ppc@nongnu.org
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  hw/i386/pc_piix.c      | 5 +++++
>  hw/i386/pc_q35.c       | 5 +++++
>  hw/ppc/spapr.c         | 5 +++++
>  hw/virtio/virtio-pci.c | 6 +++++-
>  4 files changed, 20 insertions(+), 1 deletion(-)
> 
(...)
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 7d01500..c510cb7 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -42,7 +42,11 @@
>   * configuration space */
>  #define VIRTIO_PCI_CONFIG_SIZE(dev)     VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
> 
> -#define VIRTIO_PCI_QUEUE_MAX 64
> +/* The number was chose to be greater than both the the number of max
> + * vcpus supported by host and the number of max tuntap queues support
> + * by host and also leave some spaces for future.
> + */
> +#define VIRTIO_PCI_QUEUE_MAX 1024
                                ^^^^

I think you need to fixup subject and patch description :)

> 
>  static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>                                 VirtIOPCIProxy *dev);

* Re: [Qemu-devel] [PATCH V7 00/16] Support more virtio queues
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (15 preceding siblings ...)
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 16/16] pci: remove hard-coded bar size in msix_init_exclusive_bar() Jason Wang
@ 2015-04-23 11:27 ` Cornelia Huck
  2015-04-28  3:14   ` Jason Wang
  2015-04-27 19:06 ` Michael S. Tsirkin
  17 siblings, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2015-04-23 11:27 UTC (permalink / raw)
  To: Jason Wang
  Cc: Kevin Wolf, mst, Alexander Graf, qemu-devel, Keith Busch,
	Christian Borntraeger, qemu-ppc, Stefan Hajnoczi, Amit Shah,
	Paolo Bonzini, Richard Henderson

On Thu, 23 Apr 2015 14:21:33 +0800
Jason Wang <jasowang@redhat.com> wrote:

> Stress/migration test on virtio-pci, compile test on other
> targets. And make check on s390x-softmmu and ppc64-softmmu.

This passes my smoke tests on the s390-ccw-virtio and s390-virtio
machines.

* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513 Jason Wang
  2015-04-23 11:24   ` Cornelia Huck
@ 2015-04-27 11:02   ` Michael S. Tsirkin
  2015-04-28  3:12     ` Jason Wang
  2015-05-14 18:54   ` Eduardo Habkost
  2 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-27 11:02 UTC (permalink / raw)
  To: Jason Wang
  Cc: Paolo Bonzini, Alexander Graf, qemu-ppc, qemu-devel, Richard Henderson

On Thu, Apr 23, 2015 at 02:21:48PM +0800, Jason Wang wrote:
> This patch increases the maximum number of virtqueues for pci from 64
> to 513. This will allow booting a virtio-net-pci device with 256 queue
> pairs on recent Linux host (which supports up to 256 tuntap queue pairs).
> 
> To keep migration compatibility, 64 was kept for legacy machine
> types. This is because qemu in fact allows guest to probe the limit of
> virtqueues through trying to set queue_sel. So for legacy machine
> type, we should make sure setting queue_sel to more than 63 won't
> make effect.

This isn't a documented interface, and no guest that I know of does
this.  Accordingly, I think we should drop everything except the
hw/virtio/virtio-pci.c change.

> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: qemu-ppc@nongnu.org
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  hw/i386/pc_piix.c      | 5 +++++
>  hw/i386/pc_q35.c       | 5 +++++
>  hw/ppc/spapr.c         | 5 +++++
>  hw/virtio/virtio-pci.c | 6 +++++-
>  4 files changed, 20 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 212e263..6e098ce 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -49,6 +49,7 @@
>  #include "hw/acpi/acpi.h"
>  #include "cpu.h"
>  #include "qemu/error-report.h"
> +#include "hw/virtio/virtio-bus.h"
>  #ifdef CONFIG_XEN
>  #  include <xen/hvm/hvm_info_table.h>
>  #endif
> @@ -312,6 +313,10 @@ static void pc_init_pci(MachineState *machine)
>  
>  static void pc_compat_2_3(MachineState *machine)
>  {
> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> +
> +    k->queue_max = 64;
>  }
>  
>  static void pc_compat_2_2(MachineState *machine)
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index e67f2de..ff7c414 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -45,6 +45,7 @@
>  #include "hw/usb.h"
>  #include "hw/cpu/icc_bus.h"
>  #include "qemu/error-report.h"
> +#include "include/hw/virtio/virtio-bus.h"
>  
>  /* ICH9 AHCI has 6 ports */
>  #define MAX_SATA_PORTS     6
> @@ -291,6 +292,10 @@ static void pc_q35_init(MachineState *machine)
>  
>  static void pc_compat_2_3(MachineState *machine)
>  {
> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> +
> +    k->queue_max = 64;
>  }
>  
>  static void pc_compat_2_2(MachineState *machine)
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 8e43aa2..ee8f6a3 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -50,6 +50,7 @@
>  #include "hw/pci/pci.h"
>  #include "hw/scsi/scsi.h"
>  #include "hw/virtio/virtio-scsi.h"
> +#include "hw/virtio/virtio-bus.h"
>  
>  #include "exec/address-spaces.h"
>  #include "hw/usb.h"
> @@ -1827,6 +1828,10 @@ static const TypeInfo spapr_machine_info = {
>  
>  static void spapr_compat_2_3(Object *obj)
>  {
> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> +
> +    k->queue_max = 64;
>  }
>  
>  static void spapr_compat_2_2(Object *obj)
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 7d01500..c510cb7 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -42,7 +42,11 @@
>   * configuration space */
>  #define VIRTIO_PCI_CONFIG_SIZE(dev)     VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>  
> -#define VIRTIO_PCI_QUEUE_MAX 64
> +/* The number was chose to be greater than both the the number of max
> + * vcpus supported by host and the number of max tuntap queues support
> + * by host and also leave some spaces for future.
> + */
> +#define VIRTIO_PCI_QUEUE_MAX 1024
>  
>  static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>                                 VirtIOPCIProxy *dev);
> -- 
> 2.1.0

* Re: [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type Jason Wang
@ 2015-04-27 11:03   ` Michael S. Tsirkin
  2015-04-27 13:14   ` Alexander Graf
  1 sibling, 0 replies; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-27 11:03 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-ppc, qemu-devel, Alexander Graf

On Thu, Apr 23, 2015 at 02:21:37PM +0800, Jason Wang wrote:
> The following patches will limit the following things to legacy
> machine type:
> 
> - maximum number of virtqueues for virtio-pci were limited to 64
> 
> Cc: Alexander Graf <agraf@suse.de>
> Cc: qemu-ppc@nongnu.org
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

This won't be needed if as I suggest we just increase this
value everywhere.

> ---
>  hw/ppc/spapr.c | 31 +++++++++++++++++++++++++++++--
>  1 file changed, 29 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 9e676ef..8e43aa2 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1825,8 +1825,13 @@ static const TypeInfo spapr_machine_info = {
>  #define SPAPR_COMPAT_2_1 \
>          SPAPR_COMPAT_2_2
>  
> +static void spapr_compat_2_3(Object *obj)
> +{
> +}
> +
>  static void spapr_compat_2_2(Object *obj)
>  {
> +    spapr_compat_2_3(obj);
>  }
>  
>  static void spapr_compat_2_1(Object *obj)
> @@ -1834,6 +1839,12 @@ static void spapr_compat_2_1(Object *obj)
>      spapr_compat_2_2(obj);
>  }
>  
> +static void spapr_machine_2_3_instance_init(Object *obj)
> +{
> +    spapr_compat_2_3(obj);
> +    spapr_machine_initfn(obj);
> +}
> +
>  static void spapr_machine_2_2_instance_init(Object *obj)
>  {
>      spapr_compat_2_2(obj);
> @@ -1893,14 +1904,29 @@ static void spapr_machine_2_3_class_init(ObjectClass *oc, void *data)
>  
>      mc->name = "pseries-2.3";
>      mc->desc = "pSeries Logical Partition (PAPR compliant) v2.3";
> -    mc->alias = "pseries";
> -    mc->is_default = 1;
>  }
>  
>  static const TypeInfo spapr_machine_2_3_info = {
>      .name          = TYPE_SPAPR_MACHINE "2.3",
>      .parent        = TYPE_SPAPR_MACHINE,
>      .class_init    = spapr_machine_2_3_class_init,
> +    .instance_init = spapr_machine_2_3_instance_init,
> +};
> +
> +static void spapr_machine_2_4_class_init(ObjectClass *oc, void *data)
> +{
> +    MachineClass *mc = MACHINE_CLASS(oc);
> +
> +    mc->name = "pseries-2.4";
> +    mc->desc = "pSeries Logical Partition (PAPR compliant) v2.4";
> +    mc->alias = "pseries";
> +    mc->is_default = 1;
> +}
> +
> +static const TypeInfo spapr_machine_2_4_info = {
> +    .name          = TYPE_SPAPR_MACHINE "2.4",
> +    .parent        = TYPE_SPAPR_MACHINE,
> +    .class_init    = spapr_machine_2_4_class_init,
>  };
>  
>  static void spapr_machine_register_types(void)
> @@ -1909,6 +1935,7 @@ static void spapr_machine_register_types(void)
>      type_register_static(&spapr_machine_2_1_info);
>      type_register_static(&spapr_machine_2_2_info);
>      type_register_static(&spapr_machine_2_3_info);
> +    type_register_static(&spapr_machine_2_4_info);
>  }
>  
>  type_init(spapr_machine_register_types)
> -- 
> 2.1.0

* Re: [Qemu-devel] [PATCH V7 02/16] pc: add 2.4 machine types
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 02/16] pc: add 2.4 machine types Jason Wang
@ 2015-04-27 11:03   ` Michael S. Tsirkin
  2015-04-28  3:12     ` Jason Wang
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-27 11:03 UTC (permalink / raw)
  To: Jason Wang; +Cc: Paolo Bonzini, qemu-devel, Richard Henderson

On Thu, Apr 23, 2015 at 02:21:35PM +0800, Jason Wang wrote:
> The following patches will limit the following things to legacy
> machine type:
> 
> - maximum number of virtqueues for virtio-pci were limited to 64
> - auto msix bar size for virtio-net-pci were disabled by default

We dropped the auto size chunk so the commit log is
slightly wrong.

> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  hw/i386/pc_piix.c | 29 +++++++++++++++++++++++++----
>  hw/i386/pc_q35.c  | 26 +++++++++++++++++++++++---
>  2 files changed, 48 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 1fe7bfb..212e263 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -310,8 +310,13 @@ static void pc_init_pci(MachineState *machine)
>      pc_init1(machine, 1, 1);
>  }
>  
> +static void pc_compat_2_3(MachineState *machine)
> +{
> +}
> +
>  static void pc_compat_2_2(MachineState *machine)
>  {
> +    pc_compat_2_3(machine);
>      rsdp_in_ram = false;
>      x86_cpu_compat_set_features("kvm64", FEAT_1_EDX, 0, CPUID_VME);
>      x86_cpu_compat_set_features("kvm32", FEAT_1_EDX, 0, CPUID_VME);
> @@ -413,6 +418,12 @@ static void pc_compat_1_2(MachineState *machine)
>      x86_cpu_compat_kvm_no_autoenable(FEAT_KVM, 1 << KVM_FEATURE_PV_EOI);
>  }
>  
> +static void pc_init_pci_2_3(MachineState *machine)
> +{
> +    pc_compat_2_3(machine);
> +    pc_init_pci(machine);
> +}
> +
>  static void pc_init_pci_2_2(MachineState *machine)
>  {
>      pc_compat_2_2(machine);
> @@ -512,19 +523,28 @@ static void pc_xen_hvm_init(MachineState *machine)
>      .desc = "Standard PC (i440FX + PIIX, 1996)", \
>      .hot_add_cpu = pc_hot_add_cpu
>  
> -#define PC_I440FX_2_3_MACHINE_OPTIONS                           \
> +#define PC_I440FX_2_4_MACHINE_OPTIONS                           \
>      PC_I440FX_MACHINE_OPTIONS,                                  \
>      .default_machine_opts = "firmware=bios-256k.bin",           \
>      .default_display = "std"
>  
> -static QEMUMachine pc_i440fx_machine_v2_3 = {
> -    PC_I440FX_2_3_MACHINE_OPTIONS,
> -    .name = "pc-i440fx-2.3",
> +
> +static QEMUMachine pc_i440fx_machine_v2_4 = {
> +    PC_I440FX_2_4_MACHINE_OPTIONS,
> +    .name = "pc-i440fx-2.4",
>      .alias = "pc",
>      .init = pc_init_pci,
>      .is_default = 1,
>  };
>  
> +#define PC_I440FX_2_3_MACHINE_OPTIONS PC_I440FX_2_4_MACHINE_OPTIONS
> +
> +static QEMUMachine pc_i440fx_machine_v2_3 = {
> +    PC_I440FX_2_3_MACHINE_OPTIONS,
> +    .name = "pc-i440fx-2.3",
> +    .init = pc_init_pci_2_3,
> +};
> +
>  #define PC_I440FX_2_2_MACHINE_OPTIONS PC_I440FX_2_3_MACHINE_OPTIONS
>  
>  static QEMUMachine pc_i440fx_machine_v2_2 = {
> @@ -970,6 +990,7 @@ static QEMUMachine xenfv_machine = {
>  
>  static void pc_machine_init(void)
>  {
> +    qemu_register_pc_machine(&pc_i440fx_machine_v2_4);
>      qemu_register_pc_machine(&pc_i440fx_machine_v2_3);
>      qemu_register_pc_machine(&pc_i440fx_machine_v2_2);
>      qemu_register_pc_machine(&pc_i440fx_machine_v2_1);
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index dcc17c0..e67f2de 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -289,8 +289,13 @@ static void pc_q35_init(MachineState *machine)
>      }
>  }
>  
> +static void pc_compat_2_3(MachineState *machine)
> +{
> +}
> +
>  static void pc_compat_2_2(MachineState *machine)
>  {
> +    pc_compat_2_3(machine);
>      rsdp_in_ram = false;
>      x86_cpu_compat_set_features("kvm64", FEAT_1_EDX, 0, CPUID_VME);
>      x86_cpu_compat_set_features("kvm32", FEAT_1_EDX, 0, CPUID_VME);
> @@ -361,6 +366,12 @@ static void pc_compat_1_4(MachineState *machine)
>      x86_cpu_compat_set_features("Westmere", FEAT_1_ECX, 0, CPUID_EXT_PCLMULQDQ);
>  }
>  
> +static void pc_q35_init_2_3(MachineState *machine)
> +{
> +    pc_compat_2_3(machine);
> +    pc_q35_init(machine);
> +}
> +
>  static void pc_q35_init_2_2(MachineState *machine)
>  {
>      pc_compat_2_2(machine);
> @@ -410,16 +421,24 @@ static void pc_q35_init_1_4(MachineState *machine)
>      .hot_add_cpu = pc_hot_add_cpu, \
>      .units_per_default_bus = 1
>  
> -#define PC_Q35_2_3_MACHINE_OPTIONS                      \
> +#define PC_Q35_2_4_MACHINE_OPTIONS                      \
>      PC_Q35_MACHINE_OPTIONS,                             \
>      .default_machine_opts = "firmware=bios-256k.bin",   \
>      .default_display = "std"
>  
> +static QEMUMachine pc_q35_machine_v2_4 = {
> +    PC_Q35_2_4_MACHINE_OPTIONS,
> +    .name = "pc-q35-2.4",
> +    .alias = "q35",
> +    .init = pc_q35_init,
> +};
> +
> +#define PC_Q35_2_3_MACHINE_OPTIONS PC_Q35_2_4_MACHINE_OPTIONS
> +
>  static QEMUMachine pc_q35_machine_v2_3 = {
>      PC_Q35_2_3_MACHINE_OPTIONS,
>      .name = "pc-q35-2.3",
> -    .alias = "q35",
> -    .init = pc_q35_init,
> +    .init = pc_q35_init_2_3,
>  };
>  
>  #define PC_Q35_2_2_MACHINE_OPTIONS PC_Q35_2_3_MACHINE_OPTIONS
> @@ -506,6 +525,7 @@ static QEMUMachine pc_q35_machine_v1_4 = {
>  
>  static void pc_q35_machine_init(void)
>  {
> +    qemu_register_pc_machine(&pc_q35_machine_v2_4);
>      qemu_register_pc_machine(&pc_q35_machine_v2_3);
>      qemu_register_pc_machine(&pc_q35_machine_v2_2);
>      qemu_register_pc_machine(&pc_q35_machine_v2_1);
> -- 
> 2.1.0

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit Jason Wang
@ 2015-04-27 11:05   ` Michael S. Tsirkin
  2015-04-28  3:14     ` Jason Wang
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-27 11:05 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Alexander Graf, Christian Borntraeger, Cornelia Huck,
	Paolo Bonzini, Richard Henderson

On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> This patch introduces a bus specific queue limitation. It will be
> useful for increasing the limit for one of the bus without disturbing
> other buses.
> 
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>

Is this still needed if you drop the attempt to
keep the limit around for old machine types?

> ---
>  hw/char/virtio-serial-bus.c    |  2 +-
>  hw/net/virtio-net.c            |  4 ++--
>  hw/s390x/s390-virtio-bus.c     |  1 +
>  hw/s390x/virtio-ccw.c          |  1 +
>  hw/scsi/virtio-scsi.c          |  4 ++--
>  hw/virtio/virtio-mmio.c        |  1 +
>  hw/virtio/virtio-pci.c         |  1 +
>  hw/virtio/virtio.c             | 40 +++++++++++++++++++++++++---------------
>  include/hw/virtio/virtio-bus.h |  1 +
>  include/hw/virtio/virtio.h     |  1 +
>  10 files changed, 36 insertions(+), 20 deletions(-)
> 
> diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
> index e336bdb..0694831 100644
> --- a/hw/char/virtio-serial-bus.c
> +++ b/hw/char/virtio-serial-bus.c
> @@ -973,7 +973,7 @@ static void virtio_serial_device_realize(DeviceState *dev, Error **errp)
>      }
>  
>      /* Each port takes 2 queues, and one pair is for the control queue */
> -    max_supported_ports = VIRTIO_PCI_QUEUE_MAX / 2 - 1;
> +    max_supported_ports = virtio_get_queue_max(vdev) / 2 - 1;
>  
>      if (vser->serial.max_virtserial_ports > max_supported_ports) {
>          error_setg(errp, "maximum ports supported: %u", max_supported_ports);
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index b6fac9c..bf286f5 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -1588,10 +1588,10 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>      virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);
>  
>      n->max_queues = MAX(n->nic_conf.peers.queues, 1);
> -    if (n->max_queues * 2 + 1 > VIRTIO_PCI_QUEUE_MAX) {
> +    if (n->max_queues * 2 + 1 > virtio_get_queue_max(vdev)) {
>          error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
>                     "must be a postive integer less than %d.",
> -                   n->max_queues, (VIRTIO_PCI_QUEUE_MAX - 1) / 2);
> +                   n->max_queues, (virtio_get_queue_max(vdev) - 1) / 2);
>          virtio_cleanup(vdev);
>          return;
>      }
> diff --git a/hw/s390x/s390-virtio-bus.c b/hw/s390x/s390-virtio-bus.c
> index 047c963..2b41e32 100644
> --- a/hw/s390x/s390-virtio-bus.c
> +++ b/hw/s390x/s390-virtio-bus.c
> @@ -715,6 +715,7 @@ static void virtio_s390_bus_class_init(ObjectClass *klass, void *data)
>      bus_class->max_dev = 1;
>      k->notify = virtio_s390_notify;
>      k->get_features = virtio_s390_get_features;
> +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>  }
>  
>  static const TypeInfo virtio_s390_bus_info = {
> diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
> index 0434f56..590eed5 100644
> --- a/hw/s390x/virtio-ccw.c
> +++ b/hw/s390x/virtio-ccw.c
> @@ -1715,6 +1715,7 @@ static void virtio_ccw_bus_class_init(ObjectClass *klass, void *data)
>      k->load_queue = virtio_ccw_load_queue;
>      k->save_config = virtio_ccw_save_config;
>      k->load_config = virtio_ccw_load_config;
> +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>  }
>  
>  static const TypeInfo virtio_ccw_bus_info = {
> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index c9bea06..fbdde2b 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -826,10 +826,10 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
>                  sizeof(VirtIOSCSIConfig));
>  
>      if (s->conf.num_queues == 0 ||
> -            s->conf.num_queues > VIRTIO_PCI_QUEUE_MAX - 2) {
> +            s->conf.num_queues > virtio_get_queue_max(vdev) - 2) {
>          error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
>                           "must be a positive integer less than %d.",
> -                   s->conf.num_queues, VIRTIO_PCI_QUEUE_MAX - 2);
> +                   s->conf.num_queues, virtio_get_queue_max(vdev) - 2);
>          virtio_cleanup(vdev);
>          return;
>      }
> diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
> index 10123f3..2ae6942 100644
> --- a/hw/virtio/virtio-mmio.c
> +++ b/hw/virtio/virtio-mmio.c
> @@ -403,6 +403,7 @@ static void virtio_mmio_bus_class_init(ObjectClass *klass, void *data)
>      k->device_plugged = virtio_mmio_device_plugged;
>      k->has_variable_vring_alignment = true;
>      bus_class->max_dev = 1;
> +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>  }
>  
>  static const TypeInfo virtio_mmio_bus_info = {
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index c7c3f72..075b13b 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -1498,6 +1498,7 @@ static void virtio_pci_bus_class_init(ObjectClass *klass, void *data)
>      k->vmstate_change = virtio_pci_vmstate_change;
>      k->device_plugged = virtio_pci_device_plugged;
>      k->device_unplugged = virtio_pci_device_unplugged;
> +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>  }
>  
>  static const TypeInfo virtio_pci_bus_info = {
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 17c1260..bbb224f 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -541,6 +541,14 @@ void virtio_update_irq(VirtIODevice *vdev)
>      virtio_notify_vector(vdev, VIRTIO_NO_VECTOR);
>  }
>  
> +int virtio_get_queue_max(VirtIODevice *vdev)
> +{
> +    BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
> +    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> +
> +    return k->queue_max;
> +}
> +
>  void virtio_set_status(VirtIODevice *vdev, uint8_t val)
>  {
>      VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> @@ -599,7 +607,7 @@ void virtio_reset(void *opaque)
>      vdev->config_vector = VIRTIO_NO_VECTOR;
>      virtio_notify_vector(vdev, vdev->config_vector);
>  
> -    for(i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
> +    for (i = 0; i < virtio_get_queue_max(vdev); i++) {
>          vdev->vq[i].vring.desc = 0;
>          vdev->vq[i].vring.avail = 0;
>          vdev->vq[i].vring.used = 0;
> @@ -738,7 +746,8 @@ int virtio_queue_get_num(VirtIODevice *vdev, int n)
>  int virtio_queue_get_id(VirtQueue *vq)
>  {
>      VirtIODevice *vdev = vq->vdev;
> -    assert(vq >= &vdev->vq[0] && vq < &vdev->vq[VIRTIO_PCI_QUEUE_MAX]);
> +
> +    assert(vq >= &vdev->vq[0] && vq < &vdev->vq[virtio_get_queue_max(vdev)]);
>      return vq - &vdev->vq[0];
>  }
>  
> @@ -774,28 +783,29 @@ void virtio_queue_notify(VirtIODevice *vdev, int n)
>  
>  uint16_t virtio_queue_vector(VirtIODevice *vdev, int n)
>  {
> -    return n < VIRTIO_PCI_QUEUE_MAX ? vdev->vq[n].vector :
> +    return n < virtio_get_queue_max(vdev) ? vdev->vq[n].vector :
>          VIRTIO_NO_VECTOR;
>  }
>  
>  void virtio_queue_set_vector(VirtIODevice *vdev, int n, uint16_t vector)
>  {
> -    if (n < VIRTIO_PCI_QUEUE_MAX)
> +    if (n < virtio_get_queue_max(vdev))
>          vdev->vq[n].vector = vector;
>  }
>  
>  VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>                              void (*handle_output)(VirtIODevice *, VirtQueue *))
>  {
> -    int i;
> +    int i, queue_max = virtio_get_queue_max(vdev);
>  
> -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
> +    for (i = 0; i < queue_max; i++) {
>          if (vdev->vq[i].vring.num == 0)
>              break;
>      }
>  
> -    if (i == VIRTIO_PCI_QUEUE_MAX || queue_size > VIRTQUEUE_MAX_SIZE)
> +    if (i == queue_max || queue_size > VIRTQUEUE_MAX_SIZE) {
>          abort();
> +    }
>  
>      vdev->vq[i].vring.num = queue_size;
>      vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
> @@ -806,7 +816,7 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>  
>  void virtio_del_queue(VirtIODevice *vdev, int n)
>  {
> -    if (n < 0 || n >= VIRTIO_PCI_QUEUE_MAX) {
> +    if (n < 0 || n >= virtio_get_queue_max(vdev)) {
>          abort();
>      }
>  
> @@ -916,14 +926,14 @@ void virtio_save(VirtIODevice *vdev, QEMUFile *f)
>      qemu_put_be32(f, vdev->config_len);
>      qemu_put_buffer(f, vdev->config, vdev->config_len);
>  
> -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
> +    for (i = 0; i < k->queue_max; i++) {
>          if (vdev->vq[i].vring.num == 0)
>              break;
>      }
>  
>      qemu_put_be32(f, i);
>  
> -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
> +    for (i = 0; i < k->queue_max; i++) {
>          if (vdev->vq[i].vring.num == 0)
>              break;
>  
> @@ -988,7 +998,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
>      qemu_get_8s(f, &vdev->status);
>      qemu_get_8s(f, &vdev->isr);
>      qemu_get_be16s(f, &vdev->queue_sel);
> -    if (vdev->queue_sel >= VIRTIO_PCI_QUEUE_MAX) {
> +    if (vdev->queue_sel >= k->queue_max) {
>          return -1;
>      }
>      qemu_get_be32s(f, &features);
> @@ -1015,7 +1025,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
>  
>      num = qemu_get_be32(f);
>  
> -    if (num > VIRTIO_PCI_QUEUE_MAX) {
> +    if (num > k->queue_max) {
>          error_report("Invalid number of PCI queues: 0x%x", num);
>          return -1;
>      }
> @@ -1125,15 +1135,15 @@ void virtio_instance_init_common(Object *proxy_obj, void *data,
>  void virtio_init(VirtIODevice *vdev, const char *name,
>                   uint16_t device_id, size_t config_size)
>  {
> -    int i;
> +    int i, queue_max = virtio_get_queue_max(vdev);
>      vdev->device_id = device_id;
>      vdev->status = 0;
>      vdev->isr = 0;
>      vdev->queue_sel = 0;
>      vdev->config_vector = VIRTIO_NO_VECTOR;
> -    vdev->vq = g_malloc0(sizeof(VirtQueue) * VIRTIO_PCI_QUEUE_MAX);
> +    vdev->vq = g_malloc0(sizeof(VirtQueue) * queue_max);
>      vdev->vm_running = runstate_is_running();
> -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
> +    for (i = 0; i < queue_max; i++) {
>          vdev->vq[i].vector = VIRTIO_NO_VECTOR;
>          vdev->vq[i].vdev = vdev;
>          vdev->vq[i].queue_index = i;
> diff --git a/include/hw/virtio/virtio-bus.h b/include/hw/virtio/virtio-bus.h
> index 0d2e7b4..4da8022 100644
> --- a/include/hw/virtio/virtio-bus.h
> +++ b/include/hw/virtio/virtio-bus.h
> @@ -68,6 +68,7 @@ typedef struct VirtioBusClass {
>       * Note that changing this will break migration for this transport.
>       */
>      bool has_variable_vring_alignment;
> +    uint16_t queue_max;
>  } VirtioBusClass;
>  
>  struct VirtioBusState {
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index d95f8b6..91fd673 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -179,6 +179,7 @@ void virtio_queue_notify(VirtIODevice *vdev, int n);
>  uint16_t virtio_queue_vector(VirtIODevice *vdev, int n);
>  void virtio_queue_set_vector(VirtIODevice *vdev, int n, uint16_t vector);
>  void virtio_set_status(VirtIODevice *vdev, uint8_t val);
> +int virtio_get_queue_max(VirtIODevice *vdev);
>  void virtio_reset(void *opaque);
>  void virtio_update_irq(VirtIODevice *vdev);
>  int virtio_set_features(VirtIODevice *vdev, uint32_t val);
> -- 
> 2.1.0

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type Jason Wang
  2015-04-27 11:03   ` Michael S. Tsirkin
@ 2015-04-27 13:14   ` Alexander Graf
  1 sibling, 0 replies; 47+ messages in thread
From: Alexander Graf @ 2015-04-27 13:14 UTC (permalink / raw)
  To: Jason Wang, qemu-devel; +Cc: David Gibson, qemu-ppc, mst

On 04/23/2015 08:21 AM, Jason Wang wrote:
> The following patches will limit the following things to legacy
> machine type:
>
> - maximum number of virtqueues for virtio-pci were limited to 64
>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: qemu-ppc@nongnu.org
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>

Reviewed-by: Alexander Graf <agraf@suse.de>


* Re: [Qemu-devel] [PATCH V7 00/16] Support more virtio queues
  2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
                   ` (16 preceding siblings ...)
  2015-04-23 11:27 ` [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Cornelia Huck
@ 2015-04-27 19:06 ` Michael S. Tsirkin
  17 siblings, 0 replies; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-27 19:06 UTC (permalink / raw)
  To: Jason Wang
  Cc: Kevin Wolf, Amit Shah, Alexander Graf, qemu-devel, Keith Busch,
	Christian Borntraeger, qemu-ppc, Stefan Hajnoczi, Cornelia Huck,
	Paolo Bonzini, Richard Henderson

On Thu, Apr 23, 2015 at 02:21:33PM +0800, Jason Wang wrote:
> We currently limit the max virtio queues to 64. This is not sufficient
> to support multiqueue devices (e.g. recent Linux supports up to 256
> tap queues). So this series tries to let virtio support more
> queues.

I put some of these in my tree.
Pls check it out to make sure I didn't make mistakes,
and send patches on top.


> Not much work needs to be done, except:
> 
> - Introducing a transport-specific queue limit. This is used to
>   simplify increasing the limit for one transport without
>   harming the rest.
> - Letting each virtio transport use its own specific limit.
> - Speeding up MSI-X masking and unmasking through a per-vector queue
>   list, and increasing the maximum MSI-X vectors supported by qemu.
> - With the above enhancements, increasing the maximum number of
>   virtqueues supported by PCI from 64 to 1024.
> - Keeping the legacy virtio queue limit for compatibility.
> 
> With this series, we can support up to 256 queues. Since x86 can only
> allow about 240 interrupt vectors for MSI-X, the current Linux driver can
> only have about 80 queue pairs with their own private MSI-X interrupt
> vectors. With IRQ sharing between queue pairs (RFC posted at
> https://lkml.org/lkml/2014/12/25/169), the Linux driver can have up
> to about 186 queue pairs with their own private MSI-X interrupt vectors.
> 
> Stress/migration tests were done on virtio-pci, compile tests on other
> targets, and make check on s390x-softmmu and ppc64-softmmu.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: Keith Busch <keith.busch@intel.com>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> Cc: Amit Shah <amit.shah@redhat.com>
> Cc: qemu-ppc@nongnu.org
> 
> Please review
> 
> Thanks
> 
> Changes from V6:
> - Tweak some commit logs
> - Replace the missed VIRTIO_PCI_MAX in ccw with a bus specific limit
> - Bump the pci queue limit from 513 to 1024
> 
> Changes from V5:
> - Rebase to HEAD
> - Drop the patch that introduces a helper to get the vq index since
>   there is already one
> - Don't try to solve the migration compatibility issue for invalid
>   configurations like vectors > 128 with only 64 virtio queues. This is
>   done by letting the msix code calculate the correct bar size and
>   dropping patch 18 that introduced the auto_msix_bar_size.
> - Clean up the bar size and pba offset calculation code
> - Add a comment to explain the new virtio pci queue limit number 513
> 
> Changes from V4:
> - Drop the patch of bus limitation validation and send it as an
>   independent patch
> - Don't let the virtio-pci to calculate the MSI-X bar size, instead,
>   tell the msix code whether or not to use legacy bar size and let it
>   to do this
> - Make sure the MSI-X bar size is at least 4K
> - Subject typos fixes for patch 8
> - Mention adapter routes changes in the commit log of patch 9
> - Only use the vector to queue list for virtio-pci to avoid the
>   unnecessary possible overhead on other transport.
> 
> Changes from V3:
> - rebase to master and target to 2.4
> - handling compat issues for spapr
> - fixes for hmp command completion when we have a nic with 256 queues
> - using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue in
>   virtio-ccw
> - fix compile issues for ppc64-softmmu
> - don't export VIRTIO_CCW_QUEUE_MAX by introducing a gsi limit and
>   inherit in ccw
> - use transport specific queue limit in virtio-serial
> - correct the stale comment for AdapterRoutes and move it to ccw patch
> - replace 128 with queue_max + 64 and add a comment for this in
>   virtio_ccw_notify()
> 
> Changes from V2:
> - move transport-specific limits to their implementations. (What is
>   left is VIRTIO_CCW_QUEUE_MAX since I failed to find a common header
>   file other than virtio.h)
> - use virtio_get_queue_max() if possible in virtio.c
> - AdapterRoutes should use the ccw limit
> - introduce a vector-to-queue mapping for virtio devices and speed up
>   the MSI-X masking and unmasking through this.
> 
> Changes from V1:
> - add a validation against the bus limitation
> - switch to use a bus specific queue limit instead of a global one,
>   this will allow us to just increase the limit of one transport
>   without disturbing others.
> - only increase the queue limit of virtio-pci
> - limit the maximum number of virtio queues to 64 for legacy machine
>   types
> 
> Jason Wang (16):
>   virtio-net: fix the upper bound when trying to delete queues
>   pc: add 2.4 machine types
>   spapr: add machine type specific instance init function
>   ppc: spapr: add 2.4 machine type
>   monitor: replace the magic number 255 with MAX_QUEUE_NUM
>   monitor: check return value of qemu_find_net_clients_except()
>   virtio-ccw: using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue
>   virtio: introduce bus specific queue limit
>   virtio-ccw: introduce ccw specific queue limit
>   virtio-s390: switch to bus specific queue limit
>   virtio-mmio: switch to bus specific queue limit
>   virtio-pci: switch to use bus specific queue limit
>   virtio: introduce vector to virtqueues mapping
>   virtio-pci: speedup MSI-X masking and unmasking
>   virtio-pci: increase the maximum number of virtqueues to 513
>   pci: remove hard-coded bar size in msix_init_exclusive_bar()
> 
>  hw/char/virtio-serial-bus.c    |  2 +-
>  hw/i386/pc_piix.c              | 34 ++++++++++++++++---
>  hw/i386/pc_q35.c               | 31 ++++++++++++++++--
>  hw/net/virtio-net.c            |  6 ++--
>  hw/pci/msix.c                  | 30 ++++++++++-------
>  hw/ppc/spapr.c                 | 59 +++++++++++++++++++++++++++++++--
>  hw/s390x/s390-virtio-bus.c     |  7 ++--
>  hw/s390x/s390-virtio-ccw.c     |  7 ++--
>  hw/s390x/virtio-ccw.c          | 22 ++++++++-----
>  hw/scsi/virtio-scsi.c          |  4 +--
>  hw/virtio/virtio-mmio.c        |  7 ++--
>  hw/virtio/virtio-pci.c         | 65 +++++++++++++++++++++++--------------
>  hw/virtio/virtio.c             | 74 +++++++++++++++++++++++++++++++++---------
>  include/hw/s390x/s390_flic.h   |  4 ++-
>  include/hw/virtio/virtio-bus.h |  2 ++
>  include/hw/virtio/virtio.h     |  6 ++--
>  monitor.c                      | 25 +++++++-------
>  17 files changed, 291 insertions(+), 94 deletions(-)
> 
> -- 
> 2.1.0


* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-23 11:24   ` Cornelia Huck
@ 2015-04-28  3:05     ` Jason Wang
  0 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-28  3:05 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: mst, Alexander Graf, qemu-devel, qemu-ppc, Paolo Bonzini,
	Richard Henderson



On Thu, Apr 23, 2015 at 7:24 PM, Cornelia Huck <cornelia.huck@de.ibm.com> wrote:
> On Thu, 23 Apr 2015 14:21:48 +0800
> Jason Wang <jasowang@redhat.com> wrote:
> 
>>  This patch increases the maximum number of virtqueues for pci from 
>> 64
>>  to 513. This will allow booting a virtio-net-pci device with 256 
>> queue
>      ^^^
> 
>>  pairs on recent Linux host (which supports up to 256 tuntap queue 
>> pairs).
>>  
>>  To keep migration compatibility, 64 was kept for legacy machine
>>  types. This is because qemu in fact allows guest to probe the limit 
>> of
>>  virtqueues through trying to set queue_sel. So for legacy machine
>>  type, we should make sure setting queue_sel to more than 63 won't
>>  make effect.
>>  
>>  Cc: Paolo Bonzini <pbonzini@redhat.com>
>>  Cc: Richard Henderson <rth@twiddle.net>
>>  Cc: Michael S. Tsirkin <mst@redhat.com>
>>  Cc: Alexander Graf <agraf@suse.de>
>>  Cc: qemu-ppc@nongnu.org
>>  Signed-off-by: Jason Wang <jasowang@redhat.com>
>>  ---
>>   hw/i386/pc_piix.c      | 5 +++++
>>   hw/i386/pc_q35.c       | 5 +++++
>>   hw/ppc/spapr.c         | 5 +++++
>>   hw/virtio/virtio-pci.c | 6 +++++-
>>   4 files changed, 20 insertions(+), 1 deletion(-)
>>  
> (...)
>>  diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>>  index 7d01500..c510cb7 100644
>>  --- a/hw/virtio/virtio-pci.c
>>  +++ b/hw/virtio/virtio-pci.c
>>  @@ -42,7 +42,11 @@
>>    * configuration space */
>>   #define VIRTIO_PCI_CONFIG_SIZE(dev)     VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>>  
>>  -#define VIRTIO_PCI_QUEUE_MAX 64
>>  +/* The number was chose to be greater than both the the number of max
>>  + * vcpus supported by host and the number of max tuntap queues support
>>  + * by host and also leave some spaces for future.
>>  + */
>>  +#define VIRTIO_PCI_QUEUE_MAX 1024
>                                 ^^^^
> 
> I think you need to fixup subject and patch description :)

Yes, will fix this.

Thanks
> 
>>  
>>   static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>>                                  VirtIOPCIProxy *dev);
> 


* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-27 11:02   ` Michael S. Tsirkin
@ 2015-04-28  3:12     ` Jason Wang
  2015-04-28  7:17       ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Jason Wang @ 2015-04-28  3:12 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Paolo Bonzini, Alexander Graf, qemu-ppc, qemu-devel, Richard Henderson



On Mon, Apr 27, 2015 at 7:02 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Thu, Apr 23, 2015 at 02:21:48PM +0800, Jason Wang wrote:
>>  This patch increases the maximum number of virtqueues for pci from 64
>>  to 513. This will allow booting a virtio-net-pci device with 256 queue
>>  pairs on recent Linux host (which supports up to 256 tuntap queue
>>  pairs).
>>  
>>  To keep migration compatibility, 64 was kept for legacy machine
>>  types. This is because qemu in fact allows guest to probe the limit of
>>  virtqueues through trying to set queue_sel. So for legacy machine
>>  type, we should make sure setting queue_sel to more than 63 won't
>>  make effect.
> 
> This isn't a documented interface, and no guest that I know of does
> this.  Accordingly, I think we should drop everything except the
> hw/virtio/virtio-pci.c change.

We leave a chance for the guest to use such undocumented behavior, so 
technically we'd better keep it; it may be too late for us to fix if 
we find such a guest in the future. And considering that keeping this 
compatibility is really not hard, I suggest including it.
  
> 
> 
>>  Cc: Paolo Bonzini <pbonzini@redhat.com>
>>  Cc: Richard Henderson <rth@twiddle.net>
>>  Cc: Michael S. Tsirkin <mst@redhat.com>
>>  Cc: Alexander Graf <agraf@suse.de>
>>  Cc: qemu-ppc@nongnu.org
>>  Signed-off-by: Jason Wang <jasowang@redhat.com>
>>  ---
>>   hw/i386/pc_piix.c      | 5 +++++
>>   hw/i386/pc_q35.c       | 5 +++++
>>   hw/ppc/spapr.c         | 5 +++++
>>   hw/virtio/virtio-pci.c | 6 +++++-
>>   4 files changed, 20 insertions(+), 1 deletion(-)
>>  
>>  diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>  index 212e263..6e098ce 100644
>>  --- a/hw/i386/pc_piix.c
>>  +++ b/hw/i386/pc_piix.c
>>  @@ -49,6 +49,7 @@
>>   #include "hw/acpi/acpi.h"
>>   #include "cpu.h"
>>   #include "qemu/error-report.h"
>>  +#include "hw/virtio/virtio-bus.h"
>>   #ifdef CONFIG_XEN
>>   #  include <xen/hvm/hvm_info_table.h>
>>   #endif
>>  @@ -312,6 +313,10 @@ static void pc_init_pci(MachineState *machine)
>>   
>>   static void pc_compat_2_3(MachineState *machine)
>>   {
>>  +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
>>  +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
>>  +
>>  +    k->queue_max = 64;
>>   }
>>   
>>   static void pc_compat_2_2(MachineState *machine)
>>  diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
>>  index e67f2de..ff7c414 100644
>>  --- a/hw/i386/pc_q35.c
>>  +++ b/hw/i386/pc_q35.c
>>  @@ -45,6 +45,7 @@
>>   #include "hw/usb.h"
>>   #include "hw/cpu/icc_bus.h"
>>   #include "qemu/error-report.h"
>>  +#include "include/hw/virtio/virtio-bus.h"
>>   
>>   /* ICH9 AHCI has 6 ports */
>>   #define MAX_SATA_PORTS     6
>>  @@ -291,6 +292,10 @@ static void pc_q35_init(MachineState *machine)
>>   
>>   static void pc_compat_2_3(MachineState *machine)
>>   {
>>  +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
>>  +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
>>  +
>>  +    k->queue_max = 64;
>>   }
>>   
>>   static void pc_compat_2_2(MachineState *machine)
>>  diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>>  index 8e43aa2..ee8f6a3 100644
>>  --- a/hw/ppc/spapr.c
>>  +++ b/hw/ppc/spapr.c
>>  @@ -50,6 +50,7 @@
>>   #include "hw/pci/pci.h"
>>   #include "hw/scsi/scsi.h"
>>   #include "hw/virtio/virtio-scsi.h"
>>  +#include "hw/virtio/virtio-bus.h"
>>   
>>   #include "exec/address-spaces.h"
>>   #include "hw/usb.h"
>>  @@ -1827,6 +1828,10 @@ static const TypeInfo spapr_machine_info = {
>>   
>>   static void spapr_compat_2_3(Object *obj)
>>   {
>>  +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
>>  +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
>>  +
>>  +    k->queue_max = 64;
>>   }
>>   
>>   static void spapr_compat_2_2(Object *obj)
>>  diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>>  index 7d01500..c510cb7 100644
>>  --- a/hw/virtio/virtio-pci.c
>>  +++ b/hw/virtio/virtio-pci.c
>>  @@ -42,7 +42,11 @@
>>    * configuration space */
>>   #define VIRTIO_PCI_CONFIG_SIZE(dev)     VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>>   
>>  -#define VIRTIO_PCI_QUEUE_MAX 64
>>  +/* The number was chose to be greater than both the the number of max
>>  + * vcpus supported by host and the number of max tuntap queues support
>>  + * by host and also leave some spaces for future.
>>  + */
>>  +#define VIRTIO_PCI_QUEUE_MAX 1024
>>   
>>   static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>>                                  VirtIOPCIProxy *dev);
>>  -- 
>>  2.1.0


* Re: [Qemu-devel] [PATCH V7 02/16] pc: add 2.4 machine types
  2015-04-27 11:03   ` Michael S. Tsirkin
@ 2015-04-28  3:12     ` Jason Wang
  0 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-28  3:12 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Paolo Bonzini, qemu-devel, Richard Henderson



On Mon, Apr 27, 2015 at 7:03 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Thu, Apr 23, 2015 at 02:21:35PM +0800, Jason Wang wrote:
>>  The following patches will limit the following things to legacy
>>  machine type:
>>  
>>  - maximum number of virtqueues for virtio-pci were limited to 64
>>  - auto msix bar size for virtio-net-pci were disabled by default
> 
> We dropped the auto size chunk so the commit log is
> slightly wrong.

Yes.
> 
>>  Cc: Paolo Bonzini <pbonzini@redhat.com>
>>  Cc: Richard Henderson <rth@twiddle.net>
>>  Cc: Michael S. Tsirkin <mst@redhat.com>
>>  Signed-off-by: Jason Wang <jasowang@redhat.com>
>>  ---
>>   hw/i386/pc_piix.c | 29 +++++++++++++++++++++++++----
>>   hw/i386/pc_q35.c  | 26 +++++++++++++++++++++++---
>>   2 files changed, 48 insertions(+), 7 deletions(-)
>>  
>>  diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>  index 1fe7bfb..212e263 100644
>>  --- a/hw/i386/pc_piix.c
>>  +++ b/hw/i386/pc_piix.c
>>  @@ -310,8 +310,13 @@ static void pc_init_pci(MachineState *machine)
>>       pc_init1(machine, 1, 1);
>>   }
>>   
>>  +static void pc_compat_2_3(MachineState *machine)
>>  +{
>>  +}
>>  +
>>   static void pc_compat_2_2(MachineState *machine)
>>   {
>>  +    pc_compat_2_3(machine);
>>       rsdp_in_ram = false;
>>       x86_cpu_compat_set_features("kvm64", FEAT_1_EDX, 0, CPUID_VME);
>>       x86_cpu_compat_set_features("kvm32", FEAT_1_EDX, 0, CPUID_VME);
>>  @@ -413,6 +418,12 @@ static void pc_compat_1_2(MachineState *machine)
>>       x86_cpu_compat_kvm_no_autoenable(FEAT_KVM, 1 << KVM_FEATURE_PV_EOI);
>>   }
>>   
>>  +static void pc_init_pci_2_3(MachineState *machine)
>>  +{
>>  +    pc_compat_2_3(machine);
>>  +    pc_init_pci(machine);
>>  +}
>>  +
>>   static void pc_init_pci_2_2(MachineState *machine)
>>   {
>>       pc_compat_2_2(machine);
>>  @@ -512,19 +523,28 @@ static void pc_xen_hvm_init(MachineState *machine)
>>       .desc = "Standard PC (i440FX + PIIX, 1996)", \
>>       .hot_add_cpu = pc_hot_add_cpu
>>   
>>  -#define PC_I440FX_2_3_MACHINE_OPTIONS                           \
>>  +#define PC_I440FX_2_4_MACHINE_OPTIONS                           \
>>       PC_I440FX_MACHINE_OPTIONS,                                  \
>>       .default_machine_opts = "firmware=bios-256k.bin",           \
>>       .default_display = "std"
>>   
>>  -static QEMUMachine pc_i440fx_machine_v2_3 = {
>>  -    PC_I440FX_2_3_MACHINE_OPTIONS,
>>  -    .name = "pc-i440fx-2.3",
>>  +
>>  +static QEMUMachine pc_i440fx_machine_v2_4 = {
>>  +    PC_I440FX_2_4_MACHINE_OPTIONS,
>>  +    .name = "pc-i440fx-2.4",
>>       .alias = "pc",
>>       .init = pc_init_pci,
>>       .is_default = 1,
>>   };
>>   
>>  +#define PC_I440FX_2_3_MACHINE_OPTIONS PC_I440FX_2_4_MACHINE_OPTIONS
>>  +
>>  +static QEMUMachine pc_i440fx_machine_v2_3 = {
>>  +    PC_I440FX_2_3_MACHINE_OPTIONS,
>>  +    .name = "pc-i440fx-2.3",
>>  +    .init = pc_init_pci_2_3,
>>  +};
>>  +
>>   #define PC_I440FX_2_2_MACHINE_OPTIONS PC_I440FX_2_3_MACHINE_OPTIONS
>>   
>>   static QEMUMachine pc_i440fx_machine_v2_2 = {
>>  @@ -970,6 +990,7 @@ static QEMUMachine xenfv_machine = {
>>   
>>   static void pc_machine_init(void)
>>   {
>>  +    qemu_register_pc_machine(&pc_i440fx_machine_v2_4);
>>       qemu_register_pc_machine(&pc_i440fx_machine_v2_3);
>>       qemu_register_pc_machine(&pc_i440fx_machine_v2_2);
>>       qemu_register_pc_machine(&pc_i440fx_machine_v2_1);
>>  diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
>>  index dcc17c0..e67f2de 100644
>>  --- a/hw/i386/pc_q35.c
>>  +++ b/hw/i386/pc_q35.c
>>  @@ -289,8 +289,13 @@ static void pc_q35_init(MachineState *machine)
>>       }
>>   }
>>   
>>  +static void pc_compat_2_3(MachineState *machine)
>>  +{
>>  +}
>>  +
>>   static void pc_compat_2_2(MachineState *machine)
>>   {
>>  +    pc_compat_2_3(machine);
>>       rsdp_in_ram = false;
>>       x86_cpu_compat_set_features("kvm64", FEAT_1_EDX, 0, CPUID_VME);
>>       x86_cpu_compat_set_features("kvm32", FEAT_1_EDX, 0, CPUID_VME);
>>  @@ -361,6 +366,12 @@ static void pc_compat_1_4(MachineState *machine)
>>       x86_cpu_compat_set_features("Westmere", FEAT_1_ECX, 0, CPUID_EXT_PCLMULQDQ);
>>   }
>>   
>>  +static void pc_q35_init_2_3(MachineState *machine)
>>  +{
>>  +    pc_compat_2_3(machine);
>>  +    pc_q35_init(machine);
>>  +}
>>  +
>>   static void pc_q35_init_2_2(MachineState *machine)
>>   {
>>       pc_compat_2_2(machine);
>>  @@ -410,16 +421,24 @@ static void pc_q35_init_1_4(MachineState *machine)
>>       .hot_add_cpu = pc_hot_add_cpu, \
>>       .units_per_default_bus = 1
>>   
>>  -#define PC_Q35_2_3_MACHINE_OPTIONS                      \
>>  +#define PC_Q35_2_4_MACHINE_OPTIONS                      \
>>       PC_Q35_MACHINE_OPTIONS,                             \
>>       .default_machine_opts = "firmware=bios-256k.bin",   \
>>       .default_display = "std"
>>   
>>  +static QEMUMachine pc_q35_machine_v2_4 = {
>>  +    PC_Q35_2_4_MACHINE_OPTIONS,
>>  +    .name = "pc-q35-2.4",
>>  +    .alias = "q35",
>>  +    .init = pc_q35_init,
>>  +};
>>  +
>>  +#define PC_Q35_2_3_MACHINE_OPTIONS PC_Q35_2_4_MACHINE_OPTIONS
>>  +
>>   static QEMUMachine pc_q35_machine_v2_3 = {
>>       PC_Q35_2_3_MACHINE_OPTIONS,
>>       .name = "pc-q35-2.3",
>>  -    .alias = "q35",
>>  -    .init = pc_q35_init,
>>  +    .init = pc_q35_init_2_3,
>>   };
>>   
>>   #define PC_Q35_2_2_MACHINE_OPTIONS PC_Q35_2_3_MACHINE_OPTIONS
>>  @@ -506,6 +525,7 @@ static QEMUMachine pc_q35_machine_v1_4 = {
>>   
>>   static void pc_q35_machine_init(void)
>>   {
>>  +    qemu_register_pc_machine(&pc_q35_machine_v2_4);
>>       qemu_register_pc_machine(&pc_q35_machine_v2_3);
>>       qemu_register_pc_machine(&pc_q35_machine_v2_2);
>>       qemu_register_pc_machine(&pc_q35_machine_v2_1);
>>  -- 
>>  2.1.0
> 


* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-27 11:05   ` Michael S. Tsirkin
@ 2015-04-28  3:14     ` Jason Wang
  2015-04-28  5:13       ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Jason Wang @ 2015-04-28  3:14 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, Alexander Graf, Christian Borntraeger, Cornelia Huck,
	Paolo Bonzini, Richard Henderson



On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
>>  This patch introduces a bus specific queue limitation. It will be
>>  useful for increasing the limit for one of the bus without 
>> disturbing
>>  other buses.
>>  
>>  Cc: Michael S. Tsirkin <mst@redhat.com>
>>  Cc: Alexander Graf <agraf@suse.de>
>>  Cc: Richard Henderson <rth@twiddle.net>
>>  Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
>>  Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>>  Cc: Paolo Bonzini <pbonzini@redhat.com>
>>  Signed-off-by: Jason Wang <jasowang@redhat.com>
>>  Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> 
> Is this still needed if you drop the attempt to
> keep the limit around for old machine types?

If we agree to drop it, we probably need a transport-specific macro.

> 
> 
>>  ---
>>   hw/char/virtio-serial-bus.c    |  2 +-
>>   hw/net/virtio-net.c            |  4 ++--
>>   hw/s390x/s390-virtio-bus.c     |  1 +
>>   hw/s390x/virtio-ccw.c          |  1 +
>>   hw/scsi/virtio-scsi.c          |  4 ++--
>>   hw/virtio/virtio-mmio.c        |  1 +
>>   hw/virtio/virtio-pci.c         |  1 +
>>   hw/virtio/virtio.c             | 40 +++++++++++++++++++++++---------------
>>   include/hw/virtio/virtio-bus.h |  1 +
>>   include/hw/virtio/virtio.h     |  1 +
>>   10 files changed, 36 insertions(+), 20 deletions(-)
>>  
>>  diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
>>  index e336bdb..0694831 100644
>>  --- a/hw/char/virtio-serial-bus.c
>>  +++ b/hw/char/virtio-serial-bus.c
>>  @@ -973,7 +973,7 @@ static void virtio_serial_device_realize(DeviceState *dev, Error **errp)
>>       }
>>   
>>       /* Each port takes 2 queues, and one pair is for the control queue */
>>  -    max_supported_ports = VIRTIO_PCI_QUEUE_MAX / 2 - 1;
>>  +    max_supported_ports = virtio_get_queue_max(vdev) / 2 - 1;
>>   
>>       if (vser->serial.max_virtserial_ports > max_supported_ports) {
>>           error_setg(errp, "maximum ports supported: %u", max_supported_ports);
>>  diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>  index b6fac9c..bf286f5 100644
>>  --- a/hw/net/virtio-net.c
>>  +++ b/hw/net/virtio-net.c
>>  @@ -1588,10 +1588,10 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>>       virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);
>>   
>>       n->max_queues = MAX(n->nic_conf.peers.queues, 1);
>>  -    if (n->max_queues * 2 + 1 > VIRTIO_PCI_QUEUE_MAX) {
>>  +    if (n->max_queues * 2 + 1 > virtio_get_queue_max(vdev)) {
>>           error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
>>                      "must be a postive integer less than %d.",
>>  -                   n->max_queues, (VIRTIO_PCI_QUEUE_MAX - 1) / 2);
>>  +                   n->max_queues, (virtio_get_queue_max(vdev) - 1) / 2);
>>           virtio_cleanup(vdev);
>>           return;
>>       }
>>  diff --git a/hw/s390x/s390-virtio-bus.c b/hw/s390x/s390-virtio-bus.c
>>  index 047c963..2b41e32 100644
>>  --- a/hw/s390x/s390-virtio-bus.c
>>  +++ b/hw/s390x/s390-virtio-bus.c
>>  @@ -715,6 +715,7 @@ static void virtio_s390_bus_class_init(ObjectClass *klass, void *data)
>>       bus_class->max_dev = 1;
>>       k->notify = virtio_s390_notify;
>>       k->get_features = virtio_s390_get_features;
>>  +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>>   }
>>   
>>   static const TypeInfo virtio_s390_bus_info = {
>>  diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
>>  index 0434f56..590eed5 100644
>>  --- a/hw/s390x/virtio-ccw.c
>>  +++ b/hw/s390x/virtio-ccw.c
>>  @@ -1715,6 +1715,7 @@ static void virtio_ccw_bus_class_init(ObjectClass *klass, void *data)
>>       k->load_queue = virtio_ccw_load_queue;
>>       k->save_config = virtio_ccw_save_config;
>>       k->load_config = virtio_ccw_load_config;
>>  +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>>   }
>>   
>>   static const TypeInfo virtio_ccw_bus_info = {
>>  diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
>>  index c9bea06..fbdde2b 100644
>>  --- a/hw/scsi/virtio-scsi.c
>>  +++ b/hw/scsi/virtio-scsi.c
>>  @@ -826,10 +826,10 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
>>                   sizeof(VirtIOSCSIConfig));
>>   
>>       if (s->conf.num_queues == 0 ||
>>  -            s->conf.num_queues > VIRTIO_PCI_QUEUE_MAX - 2) {
>>  +            s->conf.num_queues > virtio_get_queue_max(vdev) - 2) {
>>           error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
>>                            "must be a positive integer less than %d.",
>>  -                   s->conf.num_queues, VIRTIO_PCI_QUEUE_MAX - 2);
>>  +                   s->conf.num_queues, virtio_get_queue_max(vdev) - 2);
>>           virtio_cleanup(vdev);
>>           return;
>>       }
>>  diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
>>  index 10123f3..2ae6942 100644
>>  --- a/hw/virtio/virtio-mmio.c
>>  +++ b/hw/virtio/virtio-mmio.c
>>  @@ -403,6 +403,7 @@ static void virtio_mmio_bus_class_init(ObjectClass *klass, void *data)
>>       k->device_plugged = virtio_mmio_device_plugged;
>>       k->has_variable_vring_alignment = true;
>>       bus_class->max_dev = 1;
>>  +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>>   }
>>   
>>   static const TypeInfo virtio_mmio_bus_info = {
>>  diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>>  index c7c3f72..075b13b 100644
>>  --- a/hw/virtio/virtio-pci.c
>>  +++ b/hw/virtio/virtio-pci.c
>>  @@ -1498,6 +1498,7 @@ static void virtio_pci_bus_class_init(ObjectClass *klass, void *data)
>>       k->vmstate_change = virtio_pci_vmstate_change;
>>       k->device_plugged = virtio_pci_device_plugged;
>>       k->device_unplugged = virtio_pci_device_unplugged;
>>  +    k->queue_max = VIRTIO_PCI_QUEUE_MAX;
>>   }
>>   
>>   static const TypeInfo virtio_pci_bus_info = {
>>  diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>>  index 17c1260..bbb224f 100644
>>  --- a/hw/virtio/virtio.c
>>  +++ b/hw/virtio/virtio.c
>>  @@ -541,6 +541,14 @@ void virtio_update_irq(VirtIODevice *vdev)
>>       virtio_notify_vector(vdev, VIRTIO_NO_VECTOR);
>>   }
>>   
>>  +int virtio_get_queue_max(VirtIODevice *vdev)
>>  +{
>>  +    BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
>>  +    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
>>  +
>>  +    return k->queue_max;
>>  +}
>>  +
>>   void virtio_set_status(VirtIODevice *vdev, uint8_t val)
>>   {
>>       VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
>>  @@ -599,7 +607,7 @@ void virtio_reset(void *opaque)
>>       vdev->config_vector = VIRTIO_NO_VECTOR;
>>       virtio_notify_vector(vdev, vdev->config_vector);
>>   
>>  -    for(i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
>>  +    for (i = 0; i < virtio_get_queue_max(vdev); i++) {
>>           vdev->vq[i].vring.desc = 0;
>>           vdev->vq[i].vring.avail = 0;
>>           vdev->vq[i].vring.used = 0;
>>  @@ -738,7 +746,8 @@ int virtio_queue_get_num(VirtIODevice *vdev, int n)
>>   int virtio_queue_get_id(VirtQueue *vq)
>>   {
>>       VirtIODevice *vdev = vq->vdev;
>>  -    assert(vq >= &vdev->vq[0] && vq < &vdev->vq[VIRTIO_PCI_QUEUE_MAX]);
>>  +
>>  +    assert(vq >= &vdev->vq[0] && vq < &vdev->vq[virtio_get_queue_max(vdev)]);
>>       return vq - &vdev->vq[0];
>>   }
>>   
>>  @@ -774,28 +783,29 @@ void virtio_queue_notify(VirtIODevice *vdev, int n)
>>   
>>   uint16_t virtio_queue_vector(VirtIODevice *vdev, int n)
>>   {
>>  -    return n < VIRTIO_PCI_QUEUE_MAX ? vdev->vq[n].vector :
>>  +    return n < virtio_get_queue_max(vdev) ? vdev->vq[n].vector :
>>           VIRTIO_NO_VECTOR;
>>   }
>>   
>>   void virtio_queue_set_vector(VirtIODevice *vdev, int n, uint16_t vector)
>>   {
>>  -    if (n < VIRTIO_PCI_QUEUE_MAX)
>>  +    if (n < virtio_get_queue_max(vdev))
>>           vdev->vq[n].vector = vector;
>>   }
>>   
>>   VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>>                               void (*handle_output)(VirtIODevice *, VirtQueue *))
>>   {
>>  -    int i;
>>  +    int i, queue_max = virtio_get_queue_max(vdev);
>>   
>>  -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
>>  +    for (i = 0; i < queue_max; i++) {
>>           if (vdev->vq[i].vring.num == 0)
>>               break;
>>       }
>>   
>>  -    if (i == VIRTIO_PCI_QUEUE_MAX || queue_size > VIRTQUEUE_MAX_SIZE)
>>  +    if (i == queue_max || queue_size > VIRTQUEUE_MAX_SIZE) {
>>           abort();
>>  +    }
>>   
>>       vdev->vq[i].vring.num = queue_size;
>>       vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
>>  @@ -806,7 +816,7 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>>   
>>   void virtio_del_queue(VirtIODevice *vdev, int n)
>>   {
>>  -    if (n < 0 || n >= VIRTIO_PCI_QUEUE_MAX) {
>>  +    if (n < 0 || n >= virtio_get_queue_max(vdev)) {
>>           abort();
>>       }
>>   
>>  @@ -916,14 +926,14 @@ void virtio_save(VirtIODevice *vdev, QEMUFile *f)
>>       qemu_put_be32(f, vdev->config_len);
>>       qemu_put_buffer(f, vdev->config, vdev->config_len);
>>   
>>  -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
>>  +    for (i = 0; i < k->queue_max; i++) {
>>           if (vdev->vq[i].vring.num == 0)
>>               break;
>>       }
>>   
>>       qemu_put_be32(f, i);
>>   
>>  -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
>>  +    for (i = 0; i < k->queue_max; i++) {
>>           if (vdev->vq[i].vring.num == 0)
>>               break;
>>   
>>  @@ -988,7 +998,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
>>       qemu_get_8s(f, &vdev->status);
>>       qemu_get_8s(f, &vdev->isr);
>>       qemu_get_be16s(f, &vdev->queue_sel);
>>  -    if (vdev->queue_sel >= VIRTIO_PCI_QUEUE_MAX) {
>>  +    if (vdev->queue_sel >= k->queue_max) {
>>           return -1;
>>       }
>>       qemu_get_be32s(f, &features);
>>  @@ -1015,7 +1025,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
>>   
>>       num = qemu_get_be32(f);
>>   
>>  -    if (num > VIRTIO_PCI_QUEUE_MAX) {
>>  +    if (num > k->queue_max) {
>>           error_report("Invalid number of PCI queues: 0x%x", num);
>>           return -1;
>>       }
>>  @@ -1125,15 +1135,15 @@ void virtio_instance_init_common(Object *proxy_obj, void *data,
>>   void virtio_init(VirtIODevice *vdev, const char *name,
>>                    uint16_t device_id, size_t config_size)
>>   {
>>  -    int i;
>>  +    int i, queue_max = virtio_get_queue_max(vdev);
>>       vdev->device_id = device_id;
>>       vdev->status = 0;
>>       vdev->isr = 0;
>>       vdev->queue_sel = 0;
>>       vdev->config_vector = VIRTIO_NO_VECTOR;
>>  -    vdev->vq = g_malloc0(sizeof(VirtQueue) * VIRTIO_PCI_QUEUE_MAX);
>>  +    vdev->vq = g_malloc0(sizeof(VirtQueue) * queue_max);
>>       vdev->vm_running = runstate_is_running();
>>  -    for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
>>  +    for (i = 0; i < queue_max; i++) {
>>           vdev->vq[i].vector = VIRTIO_NO_VECTOR;
>>           vdev->vq[i].vdev = vdev;
>>           vdev->vq[i].queue_index = i;
>>  diff --git a/include/hw/virtio/virtio-bus.h b/include/hw/virtio/virtio-bus.h
>>  index 0d2e7b4..4da8022 100644
>>  --- a/include/hw/virtio/virtio-bus.h
>>  +++ b/include/hw/virtio/virtio-bus.h
>>  @@ -68,6 +68,7 @@ typedef struct VirtioBusClass {
>>        * Note that changing this will break migration for this transport.
>>        */
>>       bool has_variable_vring_alignment;
>>  +    uint16_t queue_max;
>>   } VirtioBusClass;
>>   
>>   struct VirtioBusState {
>>  diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
>>  index d95f8b6..91fd673 100644
>>  --- a/include/hw/virtio/virtio.h
>>  +++ b/include/hw/virtio/virtio.h
>>  @@ -179,6 +179,7 @@ void virtio_queue_notify(VirtIODevice *vdev, int n);
>>   uint16_t virtio_queue_vector(VirtIODevice *vdev, int n);
>>   void virtio_queue_set_vector(VirtIODevice *vdev, int n, uint16_t vector);
>>   void virtio_set_status(VirtIODevice *vdev, uint8_t val);
>>  +int virtio_get_queue_max(VirtIODevice *vdev);
>>   void virtio_reset(void *opaque);
>>   void virtio_update_irq(VirtIODevice *vdev);
>>   int virtio_set_features(VirtIODevice *vdev, uint32_t val);
>>  -- 
>>  2.1.0
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 00/16] Support more virtio queues
  2015-04-23 11:27 ` [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Cornelia Huck
@ 2015-04-28  3:14   ` Jason Wang
  0 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-04-28  3:14 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Kevin Wolf, mst, Alexander Graf, qemu-devel, Keith Busch,
	Christian Borntraeger, qemu-ppc, Stefan Hajnoczi, Amit Shah,
	Paolo Bonzini, Richard Henderson



On Thu, Apr 23, 2015 at 7:27 PM, Cornelia Huck <cornelia.huck@de.ibm.com> wrote:
> On Thu, 23 Apr 2015 14:21:33 +0800
> Jason Wang <jasowang@redhat.com> wrote:
> 
>>  Stress/migration test on virtio-pci, compile test on other
>>  targets. And make check on s390x-softmmu and ppc64-softmmu.
> 
> This passes my smoke tests on the s390-ccw-virtio and s390-virtio
> machines.

Thanks a lot for the testing.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28  3:14     ` Jason Wang
@ 2015-04-28  5:13       ` Michael S. Tsirkin
  2015-04-28  6:13         ` Jason Wang
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-28  5:13 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Alexander Graf, Christian Borntraeger, Cornelia Huck,
	Paolo Bonzini, Richard Henderson

On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> 
> 
> On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> >> This patch introduces a bus specific queue limitation. It will be
> >> useful for increasing the limit for one of the buses without disturbing
> >> other buses.
> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> >> Cc: Alexander Graf <agraf@suse.de>
> >> Cc: Richard Henderson <rth@twiddle.net>
> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> >
> >Is this still needed if you drop the attempt to
> >keep the limit around for old machine types?
> 
> If we agree to drop, we probably need transport specific macro.

You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
Fine, why not.

> >
> >> [...]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28  5:13       ` Michael S. Tsirkin
@ 2015-04-28  6:13         ` Jason Wang
  2015-04-28  7:14           ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Jason Wang @ 2015-04-28  6:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, Alexander Graf, Christian Borntraeger, Cornelia Huck,
	Paolo Bonzini, Richard Henderson



On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
>>  
>>  
>>  On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>>  >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
>>  >> This patch introduces a bus specific queue limitation. It will be
>>  >> useful for increasing the limit for one of the buses without
>>  >> disturbing other buses.
>>  >> Cc: Michael S. Tsirkin <mst@redhat.com>
>>  >> Cc: Alexander Graf <agraf@suse.de>
>>  >> Cc: Richard Henderson <rth@twiddle.net>
>>  >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
>>  >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>>  >> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>  >> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>  >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
>>  >
>>  >Is this still needed if you drop the attempt to
>>  >keep the limit around for old machine types?
>>  
>>  If we agree to drop, we probably need transport specific macro.
> 
> You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> Fine, why not.

I mean keeping VIRTIO_PCI_QUEUE_MAX for PCI only and just increasing the 
PCI limit, and introducing e.g. VIRTIO_CCW_QUEUE_MAX for ccw, kept at 
64. Since, to my understanding, it's not safe to increase the limit for 
all other transports, which was pointed out by Cornelia in V1: 
http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28  6:13         ` Jason Wang
@ 2015-04-28  7:14           ` Michael S. Tsirkin
  2015-04-28  8:04             ` Cornelia Huck
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-28  7:14 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Alexander Graf, Christian Borntraeger, Cornelia Huck,
	Paolo Bonzini, Richard Henderson

On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> 
> 
> On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> >><mst@redhat.com> wrote:
> >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> >> >> This patch introduces a bus specific queue limitation. It will be
> >> >> useful for increasing the limit for one of the buses without
> >> >> disturbing other buses.
> >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> >> >> Cc: Alexander Graf <agraf@suse.de>
> >> >> Cc: Richard Henderson <rth@twiddle.net>
> >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> >> >
> >> >Is this still needed if you drop the attempt to
> >> >keep the limit around for old machine types?
> >> If we agree to drop, we probably need transport specific macro.
> >
> >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> >Fine, why not.
> 
> I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> Since to my understanding, it's not safe to increase the limit for all other
> transports which was pointed out by Cornelia in V1:
> http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.

I think all you need is to add a check to CCW_CMD_SET_IND:
limit it to 64 for legacy interrupts only.

-- 
MST

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-28  3:12     ` Jason Wang
@ 2015-04-28  7:17       ` Michael S. Tsirkin
  2015-05-13  7:47         ` Jason Wang
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-28  7:17 UTC (permalink / raw)
  To: Jason Wang
  Cc: Paolo Bonzini, Alexander Graf, qemu-ppc, qemu-devel, Richard Henderson

On Tue, Apr 28, 2015 at 11:12:10AM +0800, Jason Wang wrote:
> 
> 
> On Mon, Apr 27, 2015 at 7:02 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >On Thu, Apr 23, 2015 at 02:21:48PM +0800, Jason Wang wrote:
> >> This patch increases the maximum number of virtqueues for pci from 64
> >> to 513. This will allow booting a virtio-net-pci device with 256 queue
>> pairs on recent Linux host (which supports up to 256 tuntap queue pairs).
> >> To keep migration compatibility, 64 was kept for legacy machine
> >> types. This is because qemu in fact allows guest to probe the limit of
> >> virtqueues through trying to set queue_sel. So for legacy machine
>> type, we should make sure setting queue_sel to more than 63 won't
>> take effect.
> >
> >This isn't a documented interface, and no guest that I know of does
> >this.  Accordingly, I think we should drop everything except the
> >hw/virtio/virtio-pci.c change.
> 
We leave a chance for the guest to use such undocumented behavior, so
technically we'd better keep it, and it may be too late to fix if we
find such a guest in the future. And considering that keeping this
compatibility is really not hard, I suggest including it.

Reminds me of  https://xkcd.com/1172/
We don't do this kind of thing.

> >
> >
> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >> Cc: Richard Henderson <rth@twiddle.net>
> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> >> Cc: Alexander Graf <agraf@suse.de>
> >> Cc: qemu-ppc@nongnu.org
> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> >> ---
> >>  hw/i386/pc_piix.c      | 5 +++++
> >>  hw/i386/pc_q35.c       | 5 +++++
> >>  hw/ppc/spapr.c         | 5 +++++
> >>  hw/virtio/virtio-pci.c | 6 +++++-
> >>  4 files changed, 20 insertions(+), 1 deletion(-)
> >> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> >> index 212e263..6e098ce 100644
> >> --- a/hw/i386/pc_piix.c
> >> +++ b/hw/i386/pc_piix.c
> >> @@ -49,6 +49,7 @@
> >>  #include "hw/acpi/acpi.h"
> >>  #include "cpu.h"
> >>  #include "qemu/error-report.h"
> >> +#include "hw/virtio/virtio-bus.h"
> >>  #ifdef CONFIG_XEN
> >>  #  include <xen/hvm/hvm_info_table.h>
> >>  #endif
> >> @@ -312,6 +313,10 @@ static void pc_init_pci(MachineState *machine)
> >>  static void pc_compat_2_3(MachineState *machine)
> >>  {
> >> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> >> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> >> +
> >> +    k->queue_max = 64;
> >>  }
> >>  static void pc_compat_2_2(MachineState *machine)
> >> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> >> index e67f2de..ff7c414 100644
> >> --- a/hw/i386/pc_q35.c
> >> +++ b/hw/i386/pc_q35.c
> >> @@ -45,6 +45,7 @@
> >>  #include "hw/usb.h"
> >>  #include "hw/cpu/icc_bus.h"
> >>  #include "qemu/error-report.h"
> >> +#include "include/hw/virtio/virtio-bus.h"
> >>  /* ICH9 AHCI has 6 ports */
> >>  #define MAX_SATA_PORTS     6
> >> @@ -291,6 +292,10 @@ static void pc_q35_init(MachineState *machine)
> >>  static void pc_compat_2_3(MachineState *machine)
> >>  {
> >> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> >> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> >> +
> >> +    k->queue_max = 64;
> >>  }
> >>  static void pc_compat_2_2(MachineState *machine)
> >> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> >> index 8e43aa2..ee8f6a3 100644
> >> --- a/hw/ppc/spapr.c
> >> +++ b/hw/ppc/spapr.c
> >> @@ -50,6 +50,7 @@
> >>  #include "hw/pci/pci.h"
> >>  #include "hw/scsi/scsi.h"
> >>  #include "hw/virtio/virtio-scsi.h"
> >> +#include "hw/virtio/virtio-bus.h"
> >>  #include "exec/address-spaces.h"
> >>  #include "hw/usb.h"
> >> @@ -1827,6 +1828,10 @@ static const TypeInfo spapr_machine_info = {
> >>  static void spapr_compat_2_3(Object *obj)
> >>  {
> >> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> >> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> >> +
> >> +    k->queue_max = 64;
> >>  }
> >>  static void spapr_compat_2_2(Object *obj)
> >> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> >> index 7d01500..c510cb7 100644
> >> --- a/hw/virtio/virtio-pci.c
> >> +++ b/hw/virtio/virtio-pci.c
> >> @@ -42,7 +42,11 @@
> >>   * configuration space */
> >>  #define VIRTIO_PCI_CONFIG_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
> >> -#define VIRTIO_PCI_QUEUE_MAX 64
> >> +/* The number was chosen to be greater than both the maximum number of
> >> + * vcpus supported by the host and the maximum number of tuntap queues
> >> + * supported by the host, and also leaves some room for the future.
> >> + */
> >> +#define VIRTIO_PCI_QUEUE_MAX 1024
> >>  static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
> >>                                 VirtIOPCIProxy *dev);
> >> --  2.1.0

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28  7:14           ` Michael S. Tsirkin
@ 2015-04-28  8:04             ` Cornelia Huck
  2015-04-28  8:16               ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2015-04-28  8:04 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, 28 Apr 2015 09:14:07 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > 
> > 
> > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > >><mst@redhat.com> wrote:
> > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > >> >> This patch introduces a bus specific queue limitation. It will be
> > >> >> useful for increasing the limit for one of the bus without
> > >>disturbing
> > >> >> other buses.
> > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > >> >> Cc: Alexander Graf <agraf@suse.de>
> > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > >> >
> > >> >Is this still needed if you drop the attempt to
> > >> >keep the limit around for old machine types?
> > >> If we agree to drop, we probably need transport specific macro.
> > >
> > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > >Fine, why not.
> > 
> > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > Since to my understanding, it's not safe to increase the limit for all other
> > transports which was pointed out by Cornelia in V1:
> > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> 
> I think all you need is add a check to CCW_CMD_SET_IND:
> limit to 64 for legacy interrupts only.

It isn't that easy.

What is easy is to add a check to the guest driver that fails setup for
devices with more than 64 queues not using adapter interrupts.

On the host side, we're lacking information when interpreting
CCW_CMD_SET_IND (the command does not contain a queue count, and the
actual number of virtqueues is not readily available.) We also can't
fence off when setting up the vqs, as this happens before we know which
kind of indicators the guest wants to use.

More importantly, we haven't even speced what we want to do in this
case. Do we want to reject SET_IND for devices with more than 64
queues? (Probably yes.)

All this involves more work, and I'd prefer to do Jason's changes
instead as this gives us some more time to figure this out properly.

And we haven't even considered s390-virtio yet, which I really want to
touch as little as possible :)

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28  8:04             ` Cornelia Huck
@ 2015-04-28  8:16               ` Michael S. Tsirkin
  2015-04-28 10:40                 ` Cornelia Huck
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-28  8:16 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
> On Tue, 28 Apr 2015 09:14:07 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > > 
> > > 
> > > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > > >><mst@redhat.com> wrote:
> > > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > > >> >> This patch introduces a bus specific queue limitation. It will be
> > > >> >> useful for increasing the limit for one of the bus without
> > > >>disturbing
> > > >> >> other buses.
> > > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > > >> >> Cc: Alexander Graf <agraf@suse.de>
> > > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > >> >
> > > >> >Is this still needed if you drop the attempt to
> > > >> >keep the limit around for old machine types?
> > > >> If we agree to drop, we probably need transport specific macro.
> > > >
> > > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > > >Fine, why not.
> > > 
> > > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > > Since to my understanding, it's not safe to increase the limit for all other
> > > transports which was pointed out by Cornelia in V1:
> > > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> > 
> > I think all you need is add a check to CCW_CMD_SET_IND:
> > limit to 64 for legacy interrupts only.
> 
> It isn't that easy.
> 
> What is easy is to add a check to the guest driver that fails setup for
> devices with more than 64 queues not using adapter interrupts.
> 
> On the host side, we're lacking information when interpreting
> CCW_CMD_SET_IND (the command does not contain a queue count, and the
> actual number of virtqueues is not readily available.)

Why isn't it available? All devices call virtio_add_queue
as appropriate. Just fail legacy adaptors.

> We also can't
> fence off when setting up the vqs, as this happens before we know which
> kind of indicators the guest wants to use.
> 
> More importantly, we haven't even speced what we want to do in this
> case. Do we want to reject SET_IND for devices with more than 64
> queues? (Probably yes.)
> 
> All this involves more work, and I'd prefer to do Jason's changes
> instead as this gives us some more time to figure this out properly.
> 
> And we haven't even considered s390-virtio yet, which I really want to
> touch as little as possible :)

Well this patch does touch it anyway :)
For s390 just check and fail at init if you like.

-- 
MST

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28  8:16               ` Michael S. Tsirkin
@ 2015-04-28 10:40                 ` Cornelia Huck
  2015-04-28 10:55                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2015-04-28 10:40 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, 28 Apr 2015 10:16:04 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
> > On Tue, 28 Apr 2015 09:14:07 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > > > 
> > > > 
> > > > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > > > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > > > >><mst@redhat.com> wrote:
> > > > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > > > >> >> This patch introduces a bus specific queue limitation. It will be
> > > > >> >> useful for increasing the limit for one of the bus without
> > > > >>disturbing
> > > > >> >> other buses.
> > > > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > >> >> Cc: Alexander Graf <agraf@suse.de>
> > > > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > > > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > > > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > >> >
> > > > >> >Is this still needed if you drop the attempt to
> > > > >> >keep the limit around for old machine types?
> > > > >> If we agree to drop, we probably need transport specific macro.
> > > > >
> > > > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > > > >Fine, why not.
> > > > 
> > > > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > > > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > > > Since to my understanding, it's not safe to increase the limit for all other
> > > > transports which was pointed out by Cornelia in V1:
> > > > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> > > 
> > > I think all you need is add a check to CCW_CMD_SET_IND:
> > > limit to 64 for legacy interrupts only.
> > 
> > It isn't that easy.
> > 
> > What is easy is to add a check to the guest driver that fails setup for
> > devices with more than 64 queues not using adapter interrupts.
> > 
> > On the host side, we're lacking information when interpreting
> > CCW_CMD_SET_IND (the command does not contain a queue count, and the
> > actual number of virtqueues is not readily available.)
> 
> Why isn't it available? All devices call virtio_add_queue
> as appropriate. Just fail legacy adaptors.

Because we don't know what the guest is going to use? It is free to
use per-subchannel indicators, even if it is operating in virtio-1 mode.

> 
> > We also can't
> > fence off when setting up the vqs, as this happens before we know which
> > kind of indicators the guest wants to use.
> > 
> > More importantly, we haven't even speced what we want to do in this
> > case. Do we want to reject SET_IND for devices with more than 64
> > queues? (Probably yes.)
> > 
> > All this involves more work, and I'd prefer to do Jason's changes
> > instead as this gives us some more time to figure this out properly.
> > 
> > And we haven't even considered s390-virtio yet, which I really want to
> > touch as little as possible :)
> 
> Well this patch does touch it anyway :)

But only small, self-evident changes.

> For s390 just check and fail at init if you like.

What about devices that may change their number of queues? I'd really
prefer large queue numbers to be fenced off in the individual
devices, and for that they need to be able to grab a transport-specific
queue limit.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28 10:40                 ` Cornelia Huck
@ 2015-04-28 10:55                   ` Michael S. Tsirkin
  2015-04-28 11:39                     ` Cornelia Huck
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-28 10:55 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, Apr 28, 2015 at 12:40:07PM +0200, Cornelia Huck wrote:
> On Tue, 28 Apr 2015 10:16:04 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
> > > On Tue, 28 Apr 2015 09:14:07 +0200
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > 
> > > > On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > > > > 
> > > > > 
> > > > > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > > > > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > > > > >><mst@redhat.com> wrote:
> > > > > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > > > > >> >> This patch introduces a bus specific queue limitation. It will be
> > > > > >> >> useful for increasing the limit for one of the bus without
> > > > > >>disturbing
> > > > > >> >> other buses.
> > > > > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > > >> >> Cc: Alexander Graf <agraf@suse.de>
> > > > > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > > > > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > > > > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > > > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > >> >
> > > > > >> >Is this still needed if you drop the attempt to
> > > > > >> >keep the limit around for old machine types?
> > > > > >> If we agree to drop, we probably need transport specific macro.
> > > > > >
> > > > > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > > > > >Fine, why not.
> > > > > 
> > > > > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > > > > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > > > > Since to my understanding, it's not safe to increase the limit for all other
> > > > > transports which was pointed out by Cornelia in V1:
> > > > > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> > > > 
> > > > I think all you need is add a check to CCW_CMD_SET_IND:
> > > > limit to 64 for legacy interrupts only.
> > > 
> > > It isn't that easy.
> > > 
> > > What is easy is to add a check to the guest driver that fails setup for
> > > devices with more than 64 queues not using adapter interrupts.
> > > 
> > > On the host side, we're lacking information when interpreting
> > > CCW_CMD_SET_IND (the command does not contain a queue count, and the
> > > actual number of virtqueues is not readily available.)
> > 
> > Why isn't it available? All devices call virtio_add_queue
> > as appropriate. Just fail legacy adaptors.
> 
> Because we don't know what the guest is going to use? It is free to
> use per-subchannel indicators, even if it is operating in virtio-1 mode.
> > 
> > > We also can't
> > > fence off when setting up the vqs, as this happens before we know which
> > > kind of indicators the guest wants to use.
> > > 
> > > More importantly, we haven't even speced what we want to do in this
> > > case. Do we want to reject SET_IND for devices with more than 64
> > > queues? (Probably yes.)
> > > 
> > > All this involves more work, and I'd prefer to do Jason's changes
> > > instead as this gives us some more time to figure this out properly.
> > > 
> > > And we haven't even considered s390-virtio yet, which I really want to
> > > touch as little as possible :)
> > 
> > Well this patch does touch it anyway :)
> 
> But only small, self-evident changes.
> 

Sorry, I don't see what you are trying to say.
There's no chance legacy interrupts work with > 64 queues.
Guests should have validated the # of queues, and not
attempted to use >64 queues. Looks like there's no
such validation in guest, right?

Solution - don't specify this configuration with legacy guests.

Modern guests work, so there's value in supporting such a
configuration in QEMU; I don't see why we must deny it.

> > For s390 just check and fail at init if you like.
> 
> What about devices that may change their number of queues? I'd really
> prefer large queue numbers to be fenced off in the individual
> devices, and for that they need to be able to grab a transport-specific
> queue limit.

This is why I don't want bus specific limits in core,
it just makes it too easy to sweep dirt under the carpet.
s390 is legacy - fine, but don't perpetuate the issue
in devices.

-- 
MST

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28 10:55                   ` Michael S. Tsirkin
@ 2015-04-28 11:39                     ` Cornelia Huck
  2015-04-28 12:47                       ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2015-04-28 11:39 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, 28 Apr 2015 12:55:40 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Apr 28, 2015 at 12:40:07PM +0200, Cornelia Huck wrote:
> > On Tue, 28 Apr 2015 10:16:04 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
> > > > On Tue, 28 Apr 2015 09:14:07 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > 
> > > > > On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > > > > > 
> > > > > > 
> > > > > > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > > > > > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > > > > > >><mst@redhat.com> wrote:
> > > > > > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > > > > > >> >> This patch introduces a bus specific queue limitation. It will be
> > > > > > >> >> useful for increasing the limit for one of the bus without
> > > > > > >>disturbing
> > > > > > >> >> other buses.
> > > > > > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > > > >> >> Cc: Alexander Graf <agraf@suse.de>
> > > > > > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > > > > > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > > > > > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > > > > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > > > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > >> >
> > > > > > >> >Is this still needed if you drop the attempt to
> > > > > > >> >keep the limit around for old machine types?
> > > > > > >> If we agree to drop, we probably need transport specific macro.
> > > > > > >
> > > > > > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > > > > > >Fine, why not.
> > > > > > 
> > > > > > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > > > > > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > > > > > Since to my understanding, it's not safe to increase the limit for all other
> > > > > > transports which was pointed out by Cornelia in V1:
> > > > > > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> > > > > 
> > > > > I think all you need is add a check to CCW_CMD_SET_IND:
> > > > > limit to 64 for legacy interrupts only.
> > > > 
> > > > It isn't that easy.
> > > > 
> > > > What is easy is to add a check to the guest driver that fails setup for
> > > > devices with more than 64 queues not using adapter interrupts.
> > > > 
> > > > On the host side, we're lacking information when interpreting
> > > > CCW_CMD_SET_IND (the command does not contain a queue count, and the
> > > > actual number of virtqueues is not readily available.)
> > > 
> > > Why isn't it available? All devices call virtio_add_queue
> > > as appropriate. Just fail legacy adaptors.
> > 
> > Because we don't know what the guest is going to use? It is free to
> > use per-subchannel indicators, even if it is operating in virtio-1 mode.
> > > 
> > > > We also can't
> > > > fence off when setting up the vqs, as this happens before we know which
> > > > kind of indicators the guest wants to use.
> > > > 
> > > > More importantly, we haven't even speced what we want to do in this
> > > > case. Do we want to reject SET_IND for devices with more than 64
> > > > queues? (Probably yes.)
> > > > 
> > > > All this involves more work, and I'd prefer to do Jason's changes
> > > > instead as this gives us some more time to figure this out properly.
> > > > 
> > > > And we haven't even considered s390-virtio yet, which I really want to
> > > > touch as little as possible :)
> > > 
> > > Well this patch does touch it anyway :)
> > 
> > But only small, self-evident changes.
> > 
> 
> Sorry, I don't see what you are trying to say.
> There's no chance legacy interrupts work with > 64 queues.
> Guests should have validated the # of queues, and not
> attempted to use >64 queues. Looks like there's no
> such validation in guest, right?

I have no idea whether > 64 queues would work with s390-virtio - it
might well work, but I'm not willing to expend any effort on verifying
that.

> 
> Solution - don't specify this configuration with legacy guests.
> 
> Modern guests work so there's value in supporting such
> configuration in QEMU, I don't see why we must deny it in QEMU.

What is "legacy guest" in your context? A guest running with the legacy
transport or a guest using ccw but not virtio-1? A ccw guest using
adapter interrupts but not virtio-1 should be fine.

> 
> > > For s390 just check and fail at init if you like.
> > 
> > What about devices that may change their number of queues? I'd really
> > prefer large queue numbers to be fenced off in the individual
> > devices, and for that they need to be able to grab a transport-specific
> > queue limit.
> 
> This is why I don't want bus specific limits in core,
> it just makes it too easy to sweep dirt under the carpet.
> s390 is legacy - fine, but don't perpetuate the issue
> in devices.

What is "swept under the carpet" here? A device can have min(max queues
from transport, max queues from device type) queues. I think it's
easier to refuse instantiating with too many queues per device type (as
most will be fine with 64 queues), so I don't want that code in the
transport (beyond making the limit available).

For s390 I'd like in the end:
- s390-virtio: legacy - keep it working as best-can-do, so I'd prefer
  to keep it at 64 queues, even if more might work
- virtio-ccw, devices in legacy or virtio-1 mode: works with adapter
  interrupts, so let's fence off setting per-subchannel indicators if a
  device has more than 64 queues (needs work and a well thought-out
  rejection mechanism)

That's _in the end_: I'd like to keep ccw at 64 queues _for now_ so
that we don't have a rushed interface change - and at the same time, I
don't want to hold off pci. Makes sense?

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28 11:39                     ` Cornelia Huck
@ 2015-04-28 12:47                       ` Michael S. Tsirkin
  2015-04-28 13:33                         ` Cornelia Huck
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-28 12:47 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, Apr 28, 2015 at 01:39:51PM +0200, Cornelia Huck wrote:
> On Tue, 28 Apr 2015 12:55:40 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Tue, Apr 28, 2015 at 12:40:07PM +0200, Cornelia Huck wrote:
> > > On Tue, 28 Apr 2015 10:16:04 +0200
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > 
> > > > On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
> > > > > On Tue, 28 Apr 2015 09:14:07 +0200
> > > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > > 
> > > > > > On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > > > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > > > > > > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > > > > > > >><mst@redhat.com> wrote:
> > > > > > > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > > > > > > >> >> This patch introduces a bus specific queue limitation. It will be
> > > > > > > >> >> useful for increasing the limit for one of the bus without
> > > > > > > >>disturbing
> > > > > > > >> >> other buses.
> > > > > > > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > >> >> Cc: Alexander Graf <agraf@suse.de>
> > > > > > > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > > > > > > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > > > > > > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > > > > > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > > > > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > > >> >
> > > > > > > >> >Is this still needed if you drop the attempt to
> > > > > > > >> >keep the limit around for old machine types?
> > > > > > > >> If we agree to drop, we probably need transport specific macro.
> > > > > > > >
> > > > > > > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > > > > > > >Fine, why not.
> > > > > > > 
> > > > > > > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > > > > > > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > > > > > > Since to my understanding, it's not safe to increase the limit for all other
> > > > > > > transports which was pointed out by Cornelia in V1:
> > > > > > > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> > > > > > 
> > > > > > I think all you need is add a check to CCW_CMD_SET_IND:
> > > > > > limit to 64 for legacy interrupts only.
> > > > > 
> > > > > It isn't that easy.
> > > > > 
> > > > > What is easy is to add a check to the guest driver that fails setup for
> > > > > devices with more than 64 queues not using adapter interrupts.
> > > > > 
> > > > > On the host side, we're lacking information when interpreting
> > > > > CCW_CMD_SET_IND (the command does not contain a queue count, and the
> > > > > actual number of virtqueues is not readily available.)
> > > > 
> > > > Why isn't it available? All devices call virtio_add_queue
> > > > as appropriate. Just fail legacy adaptors.
> > > 
> > > Because we don't know what the guest is going to use? It is free to
> > > use per-subchannel indicators, even if it is operating in virtio-1 mode.
> > > > 
> > > > > We also can't
> > > > > fence off when setting up the vqs, as this happens before we know which
> > > > > kind of indicators the guest wants to use.
> > > > > 
> > > > > More importantly, we haven't even speced what we want to do in this
> > > > > case. Do we want to reject SET_IND for devices with more than 64
> > > > > queues? (Probably yes.)
> > > > > 
> > > > > All this involves more work, and I'd prefer to do Jason's changes
> > > > > instead as this gives us some more time to figure this out properly.
> > > > > 
> > > > > And we haven't even considered s390-virtio yet, which I really want to
> > > > > touch as little as possible :)
> > > > 
> > > > Well this patch does touch it anyway :)
> > > 
> > > But only small, self-evident changes.
> > > 
> > 
> > Sorry, I don't see what you are trying to say.
> > There's no chance legacy interrupts work with > 64 queues.
> > Guests should have validated the # of queues, and not
> > attempted to use >64 queues. Looks like there's no
> > such validation in guest, right?
> 
> I have no idea whether > 64 queues would work with s390-virtio - it
> might well work, but I'm not willing to expend any effort on verifying
> that.

Well this doesn't mean we won't make any changes, ever,
just so we can reduce verification costs.
Let's make the change everywhere, if we see issues
we'll backtrack.

> > 
> > Solution - don't specify this configuration with legacy guests.
> > 
> > Modern guests work, so there's value in supporting such a
> > configuration in QEMU; I don't see why we must deny it.
> 
> What is "legacy guest" in your context? A guest running with the legacy
> transport or a guest using ccw but not virtio-1? A ccw guest using
> adapter interrupts but not virtio-1 should be fine.

A guest not using adapter interrupts.

> > 
> > > > For s390 just check and fail at init if you like.
> > > 
> > > What about devices that may change their number of queues? I'd really
> > > prefer large queue numbers to be fenced off in the individual
> > > devices, and for that they need to be able to grab a transport-specific
> > > queue limit.
> > 
> > This is why I don't want bus specific limits in core,
> > it just makes it too easy to sweep dirt under the carpet.
> > s390 is legacy - fine, but don't perpetuate the issue
> > in devices.
> 
> What is "swept under the carpet" here? A device can have min(max queues
> from transport, max queues from device type) queues. I think it's
> easier to refuse instantiating with too many queues per device type (as
> most will be fine with 64 queues), so I don't want that code in the
> transport (beyond making the limit available).
> 
> For s390 I'd like in the end:
> - s390-virtio: legacy - keep it working as best-can-do, so I'd prefer
>   to keep it at 64 queues, even if more might work
> - virtio-ccw, devices in legacy or virtio-1 mode: works with adapter
>   interrupts, so let's fence off setting per-subchannel indicators if a
>   device has more than 64 queues (needs work and a well thought-out
>   rejection mechanism)
> 
> That's _in the end_: I'd like to keep ccw at 64 queues _for now_ so
> that we don't have a rushed interface change - and at the same time, I
> don't want to hold off pci. Makes sense?

If you want to fail configurations with > 64 queues in ccw or s390,
that's fine by me. I don't want work arounds for these bugs in virtio
core though. So transports should not have a say in how many queues can
be supported, but they can fail configurations they can't support if
they want to.

-- 
MST

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28 12:47                       ` Michael S. Tsirkin
@ 2015-04-28 13:33                         ` Cornelia Huck
  2015-04-28 14:40                           ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Cornelia Huck @ 2015-04-28 13:33 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, 28 Apr 2015 14:47:11 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Apr 28, 2015 at 01:39:51PM +0200, Cornelia Huck wrote:
> > On Tue, 28 Apr 2015 12:55:40 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Tue, Apr 28, 2015 at 12:40:07PM +0200, Cornelia Huck wrote:
> > > > On Tue, 28 Apr 2015 10:16:04 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > 
> > > > > On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
> > > > > > On Tue, 28 Apr 2015 09:14:07 +0200
> > > > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > > > 
> > > > > > > On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > > > > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > > > > > > > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > > > > > > > >><mst@redhat.com> wrote:
> > > > > > > > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > > > > > > > >> >> This patch introduces a bus specific queue limitation. It will be
> > > > > > > > >> >> useful for increasing the limit for one of the buses without
> > > > > > > > >>disturbing
> > > > > > > > >> >> other buses.
> > > > > > > > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > >> >> Cc: Alexander Graf <agraf@suse.de>
> > > > > > > > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > > > > > > > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > > > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > > > > > > > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > > > > > > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > > > > > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > > > >> >
> > > > > > > > >> >Is this still needed if you drop the attempt to
> > > > > > > > >> >keep the limit around for old machine types?
> > > > > > > > >> If we agree to drop, we probably need transport specific macro.
> > > > > > > > >
> > > > > > > > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > > > > > > > >Fine, why not.
> > > > > > > > 
> > > > > > > > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > > > > > > > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > > > > > > > Since to my understanding, it's not safe to increase the limit for all other
> > > > > > > > transports which was pointed out by Cornelia in V1:
> > > > > > > > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> > > > > > > 
> > > > > > > I think all you need is to add a check to CCW_CMD_SET_IND:
> > > > > > > limit to 64 for legacy interrupts only.
> > > > > > 
> > > > > > It isn't that easy.
> > > > > > 
> > > > > > What is easy is to add a check to the guest driver that fails setup for
> > > > > > devices with more than 64 queues not using adapter interrupts.
> > > > > > 
> > > > > > On the host side, we're lacking information when interpreting
> > > > > > CCW_CMD_SET_IND (the command does not contain a queue count, and the
> > > > > > actual number of virtqueues is not readily available.)
> > > > > 
> > > > > Why isn't it available? All devices call virtio_add_queue
> > > > > as appropriate. Just fail legacy adaptors.
> > > > 
> > > > Because we don't know what the guest is going to use? It is free to
> > > > use per-subchannel indicators, even if it is operating in virtio-1 mode.
> > > > > 
> > > > > > We also can't
> > > > > > fence off when setting up the vqs, as this happens before we know which
> > > > > > kind of indicators the guest wants to use.
> > > > > > 
> > > > > > More importantly, we haven't even speced what we want to do in this
> > > > > > case. Do we want to reject SET_IND for devices with more than 64
> > > > > > queues? (Probably yes.)
> > > > > > 
> > > > > > All this involves more work, and I'd prefer to do Jason's changes
> > > > > > instead as this gives us some more time to figure this out properly.
> > > > > > 
> > > > > > And we haven't even considered s390-virtio yet, which I really want to
> > > > > > touch as little as possible :)
> > > > > 
> > > > > Well this patch does touch it anyway :)
> > > > 
> > > > But only small, self-evident changes.
> > > > 
> > > 
> > > Sorry, I don't see what you are trying to say.
> > > There's no chance legacy interrupts work with > 64 queues.
> > > Guests should have validated the # of queues, and not
> > > attempted to use >64 queues. Looks like there's no
> > > such validation in guest, right?
> > 
> > I have no idea whether > 64 queues would work with s390-virtio - it
> > might well work, but I'm not willing to expend any effort on verifying
> > that.
> 
> Well this doesn't mean we won't make any changes, ever,
> just so we can reduce verification costs.
> Let's make the change everywhere, if we see issues
> we'll backtrack.

I don't like knowingly breaking things. And I know
that some virtio-ccw setups will break.

> 
> > > 
> > > Solution - don't specify this configuration with legacy guests.
> > > 
> > > Modern guests work so there's value in supporting such
> > > configuration in QEMU, I don't see why we must deny it in QEMU.
> > 
> > What is "legacy guest" in your context? A guest running with the legacy
> > transport or a guest using ccw but not virtio-1? A ccw guest using
> > adapter interrupts but not virtio-1 should be fine.
> 
> A guest not using adapter interrupts.

There's nothing about that that's per-guest. It is a choice per-device.
In fact, the Linux guest driver falls back to classic interrupts if it
fails to set up adapter interrupts for a device - and this might happen
for large guests when the host adapter routing table is full.

> 
> > > 
> > > > > For s390 just check and fail at init if you like.
> > > > 
> > > > What about devices that may change their number of queues? I'd really
> > > > > prefer large queue numbers to be fenced off in the individual
> > > > devices, and for that they need to be able to grab a transport-specific
> > > > queue limit.
> > > 
> > > This is why I don't want bus specific limits in core,
> > > it just makes it too easy to sweep dirt under the carpet.
> > > s390 is legacy - fine, but don't perpetuate the issue
> > > in devices.
> > 
> > What is "swept under the carpet" here? A device can have min(max queues
> > from transport, max queues from device type) queues. I think it's
> > easier to refuse instantiating with too many queues per device type (as
> > most will be fine with 64 queues), so I don't want that code in the
> > transport (beyond making the limit available).
> > 
> > For s390 I'd like in the end:
> > - s390-virtio: legacy - keep it working as best-can-do, so I'd prefer
> >   to keep it at 64 queues, even if more might work
> > - virtio-ccw, devices in legacy or virtio-1 mode: works with adapter
> >   interrupts, so let's fence off setting per-subchannel indicators if a
> >   device has more than 64 queues (needs work and a well thought-out
> >   rejection mechanism)
> > 
> > That's _in the end_: I'd like to keep ccw at 64 queues _for now_ so
> > that we don't have a rushed interface change - and at the same time, I
> > don't want to hold off pci. Makes sense?
> 
> If you want to fail configurations with > 64 queues in ccw or s390,
> that's fine by me. I don't want work arounds for these bugs in virtio
> core though. So transports should not have a say in how many queues can
> be supported, but they can fail configurations they can't support if
> they want to.

Eh, isn't that a contradiction? Failing a configuration means that the
transport does indeed have a say?


* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28 13:33                         ` Cornelia Huck
@ 2015-04-28 14:40                           ` Michael S. Tsirkin
  2015-05-13  7:51                             ` Jason Wang
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-04-28 14:40 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Jason Wang, qemu-devel, Alexander Graf, Christian Borntraeger,
	Paolo Bonzini, Richard Henderson

On Tue, Apr 28, 2015 at 03:33:37PM +0200, Cornelia Huck wrote:
> On Tue, 28 Apr 2015 14:47:11 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Tue, Apr 28, 2015 at 01:39:51PM +0200, Cornelia Huck wrote:
> > > On Tue, 28 Apr 2015 12:55:40 +0200
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > 
> > > > On Tue, Apr 28, 2015 at 12:40:07PM +0200, Cornelia Huck wrote:
> > > > > On Tue, 28 Apr 2015 10:16:04 +0200
> > > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > > 
> > > > > > On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
> > > > > > > On Tue, 28 Apr 2015 09:14:07 +0200
> > > > > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > > > > 
> > > > > > > > On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > > > > > >On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
> > > > > > > > > >>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
> > > > > > > > > >><mst@redhat.com> wrote:
> > > > > > > > > >> >On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
> > > > > > > > > >> >> This patch introduces a bus specific queue limitation. It will be
> > > > > > > > > >> >> useful for increasing the limit for one of the buses without
> > > > > > > > > >>disturbing
> > > > > > > > > >> >> other buses.
> > > > > > > > > >> >> Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > > >> >> Cc: Alexander Graf <agraf@suse.de>
> > > > > > > > > >> >> Cc: Richard Henderson <rth@twiddle.net>
> > > > > > > > > >> >> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > > > > >> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> > > > > > > > > >> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > > > > > > > >> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > > > > > > >> >> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > > > > > > > >> >
> > > > > > > > > >> >Is this still needed if you drop the attempt to
> > > > > > > > > >> >keep the limit around for old machine types?
> > > > > > > > > >> If we agree to drop, we probably need transport specific macro.
> > > > > > > > > >
> > > > > > > > > >You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
> > > > > > > > > >Fine, why not.
> > > > > > > > > 
> > > > > > > > > I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
> > > > > > > > > limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
> > > > > > > > > Since to my understanding, it's not safe to increase the limit for all other
> > > > > > > > > transports which was pointed out by Cornelia in V1:
> > > > > > > > > http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
> > > > > > > > 
> > > > > > > > I think all you need is to add a check to CCW_CMD_SET_IND:
> > > > > > > > limit to 64 for legacy interrupts only.
> > > > > > > 
> > > > > > > It isn't that easy.
> > > > > > > 
> > > > > > > What is easy is to add a check to the guest driver that fails setup for
> > > > > > > devices with more than 64 queues not using adapter interrupts.
> > > > > > > 
> > > > > > > On the host side, we're lacking information when interpreting
> > > > > > > CCW_CMD_SET_IND (the command does not contain a queue count, and the
> > > > > > > actual number of virtqueues is not readily available.)
> > > > > > 
> > > > > > Why isn't it available? All devices call virtio_add_queue
> > > > > > as appropriate. Just fail legacy adaptors.
> > > > > 
> > > > > Because we don't know what the guest is going to use? It is free to
> > > > > use per-subchannel indicators, even if it is operating in virtio-1 mode.
> > > > > > 
> > > > > > > We also can't
> > > > > > > fence off when setting up the vqs, as this happens before we know which
> > > > > > > kind of indicators the guest wants to use.
> > > > > > > 
> > > > > > > More importantly, we haven't even speced what we want to do in this
> > > > > > > case. Do we want to reject SET_IND for devices with more than 64
> > > > > > > queues? (Probably yes.)
> > > > > > > 
> > > > > > > All this involves more work, and I'd prefer to do Jason's changes
> > > > > > > instead as this gives us some more time to figure this out properly.
> > > > > > > 
> > > > > > > And we haven't even considered s390-virtio yet, which I really want to
> > > > > > > touch as little as possible :)
> > > > > > 
> > > > > > Well this patch does touch it anyway :)
> > > > > 
> > > > > But only small, self-evident changes.
> > > > > 
> > > > 
> > > > Sorry, I don't see what you are trying to say.
> > > > There's no chance legacy interrupts work with > 64 queues.
> > > > Guests should have validated the # of queues, and not
> > > > attempted to use >64 queues. Looks like there's no
> > > > such validation in guest, right?
> > > 
> > > I have no idea whether > 64 queues would work with s390-virtio - it
> > > might well work, but I'm not willing to expend any effort on verifying
> > > that.
> > 
> > Well this doesn't mean we won't make any changes, ever,
> > just so we can reduce verification costs.
> > Let's make the change everywhere, if we see issues
> > we'll backtrack.
> 
> I don't like knowingly breaking things. And I know
> that some virtio-ccw setups will break.
> 
> > 
> > > > 
> > > > Solution - don't specify this configuration with legacy guests.
> > > > 
> > > > Modern guests work so there's value in supporting such
> > > > configuration in QEMU, I don't see why we must deny it in QEMU.
> > > 
> > > What is "legacy guest" in your context? A guest running with the legacy
> > > transport or a guest using ccw but not virtio-1? A ccw guest using
> > > adapter interrupts but not virtio-1 should be fine.
> > 
> > A guest not using adapter interrupts.
> 
> There's nothing about that that's per-guest. It is a choice per-device.
> In fact, the Linux guest driver falls back to classic interrupts if it
> fails to set up adapter interrupts for a device - and this might happen
> for large guests when the host adapter routing table is full.
> 
> > 
> > > > 
> > > > > > For s390 just check and fail at init if you like.
> > > > > 
> > > > > What about devices that may change their number of queues? I'd really
> > > > > > prefer large queue numbers to be fenced off in the individual
> > > > > devices, and for that they need to be able to grab a transport-specific
> > > > > queue limit.
> > > > 
> > > > This is why I don't want bus specific limits in core,
> > > > it just makes it too easy to sweep dirt under the carpet.
> > > > s390 is legacy - fine, but don't perpetuate the issue
> > > > in devices.
> > > 
> > > What is "swept under the carpet" here? A device can have min(max queues
> > > from transport, max queues from device type) queues. I think it's
> > > easier to refuse instantiating with too many queues per device type (as
> > > most will be fine with 64 queues), so I don't want that code in the
> > > transport (beyond making the limit available).
> > > 
> > > For s390 I'd like in the end:
> > > - s390-virtio: legacy - keep it working as best-can-do, so I'd prefer
> > >   to keep it at 64 queues, even if more might work
> > > - virtio-ccw, devices in legacy or virtio-1 mode: works with adapter
> > >   interrupts, so let's fence off setting per-subchannel indicators if a
> > >   device has more than 64 queues (needs work and a well thought-out
> > >   rejection mechanism)
> > > 
> > > That's _in the end_: I'd like to keep ccw at 64 queues _for now_ so
> > > that we don't have a rushed interface change - and at the same time, I
> > > don't want to hold off pci. Makes sense?
> > 
> > If you want to fail configurations with > 64 queues in ccw or s390,
> > that's fine by me. I don't want work arounds for these bugs in virtio
> > core though. So transports should not have a say in how many queues can
> > be supported, but they can fail configurations they can't support if
> > they want to.
> 
> Eh, isn't that a contradiction? Failing a configuration means that the
> transport does indeed have a say?

I'm fine with a general capability that lets the transport check the
device and fail init, for whatever reason.
E.g. can we teach the plugged callback to fail?
I don't want to mess up the core with knowledge about specific
transport bugs such as random limits on the # of queues.

-- 
MST


* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-28  7:17       ` Michael S. Tsirkin
@ 2015-05-13  7:47         ` Jason Wang
  2015-05-13  8:16           ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Jason Wang @ 2015-05-13  7:47 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Paolo Bonzini, Alexander Graf, qemu-ppc, qemu-devel, Richard Henderson



On 04/28/2015 03:17 PM, Michael S. Tsirkin wrote:
> On Tue, Apr 28, 2015 at 11:12:10AM +0800, Jason Wang wrote:
>> > 
>> > 
>> > On Mon, Apr 27, 2015 at 7:02 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>>> > >On Thu, Apr 23, 2015 at 02:21:48PM +0800, Jason Wang wrote:
>>>> > >> This patch increases the maximum number of virtqueues for pci from 64
>>>> > >> to 513. This will allow booting a virtio-net-pci device with 256 queue
>>>> > >> pairs on recent Linux host (which supports up to 256 tuntap queue
>>>> > >>pairs).
>>>> > >> To keep migration compatibility, 64 was kept for legacy machine
>>>> > >> types. This is because qemu in fact allows guest to probe the limit of
>>>> > >> virtqueues through trying to set queue_sel. So for legacy machine
>>>> > >> type, we should make sure setting queue_sel to more than 63 won't
> >>>> > >> take effect.
>>> > >
>>> > >This isn't a documented interface, and no guest that I know of does
>>> > >this.  Accordingly, I think we should drop everything except the
>>> > >hw/virtio/virtio-pci.c change.
>> > 
>> > We leave a chance for guest to use such undocumented behavior, so
> >> > technically we'd better keep it, and it may be too late for us to fix if we
> >> > find such a guest in the future. And considering that keeping this
> >> > compatibility is really not hard, I suggest including this.
> Reminds me of  https://xkcd.com/1172/
> We don't do this kind of thing.

Ok, but let's consider the management side:

If we don't do this, consider src has qemu 2.4 and dst has qemu 2.3.
Then libvirt can create a 2.3 machine on src with more than 64 queues.
What happens if it wants to migrate to dst? I believe we don't want to
teach libvirt about the queue limit for each machine type?


* Re: [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit
  2015-04-28 14:40                           ` Michael S. Tsirkin
@ 2015-05-13  7:51                             ` Jason Wang
  0 siblings, 0 replies; 47+ messages in thread
From: Jason Wang @ 2015-05-13  7:51 UTC (permalink / raw)
  To: Michael S. Tsirkin, Cornelia Huck
  Cc: Christian Borntraeger, Richard Henderson, Paolo Bonzini,
	qemu-devel, Alexander Graf



On 04/28/2015 10:40 PM, Michael S. Tsirkin wrote:
> On Tue, Apr 28, 2015 at 03:33:37PM +0200, Cornelia Huck wrote:
>> On Tue, 28 Apr 2015 14:47:11 +0200
>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>>
>>> On Tue, Apr 28, 2015 at 01:39:51PM +0200, Cornelia Huck wrote:
>>>> On Tue, 28 Apr 2015 12:55:40 +0200
>>>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>>>>
>>>>> On Tue, Apr 28, 2015 at 12:40:07PM +0200, Cornelia Huck wrote:
>>>>>> On Tue, 28 Apr 2015 10:16:04 +0200
>>>>>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>>>>>>
>>>>>>> On Tue, Apr 28, 2015 at 10:04:15AM +0200, Cornelia Huck wrote:
>>>>>>>> On Tue, 28 Apr 2015 09:14:07 +0200
>>>>>>>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>>>>>>>>
>>>>>>>>> On Tue, Apr 28, 2015 at 02:13:20PM +0800, Jason Wang wrote:
>>>>>>>>>>
>>>>>>>>>> On Tue, Apr 28, 2015 at 1:13 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>>>>>>>>>>> On Tue, Apr 28, 2015 at 11:14:04AM +0800, Jason Wang wrote:
>>>>>>>>>>>>     On Mon, Apr 27, 2015 at 7:05 PM, Michael S. Tsirkin
>>>>>>>>>>>> <mst@redhat.com> wrote:
>>>>>>>>>>>>> On Thu, Apr 23, 2015 at 02:21:41PM +0800, Jason Wang wrote:
>>>>>>>>>>>>>> This patch introduces a bus specific queue limitation. It will be
>>>>>>>>>>>>>> useful for increasing the limit for one of the buses without
>>>>>>>>>>>> disturbing
>>>>>>>>>>>>>> other buses.
>>>>>>>>>>>>>> Cc: Michael S. Tsirkin <mst@redhat.com>
>>>>>>>>>>>>>> Cc: Alexander Graf <agraf@suse.de>
>>>>>>>>>>>>>> Cc: Richard Henderson <rth@twiddle.net>
>>>>>>>>>>>>>> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
>>>>>>>>>>>>>> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>>>>>>>>>>>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>>>>>>>>>>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>>>>>>>>>>>>> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
>>>>>>>>>>>>> Is this still needed if you drop the attempt to
>>>>>>>>>>>>> keep the limit around for old machine types?
>>>>>>>>>>>> If we agree to drop, we probably need transport specific macro.
>>>>>>>>>>> You mean just rename VIRTIO_PCI_QUEUE_MAX to VIRTIO_QUEUE_MAX?
>>>>>>>>>>> Fine, why not.
>>>>>>>>>> I mean keeping VIRTIO_PCI_QUEUE_MAX for pci only and just increase pci
>>>>>>>>>> limit. And introduce e.g VIRTIO_PCI_QUEUE_CCW for ccw and keep it as 64.
>>>>>>>>>> Since to my understanding, it's not safe to increase the limit for all other
>>>>>>>>>> transports which was pointed out by Cornelia in V1:
>>>>>>>>>> http://permalink.gmane.org/gmane.comp.emulators.qemu/318245.
>>>>>>>>> I think all you need is to add a check to CCW_CMD_SET_IND:
>>>>>>>>> limit to 64 for legacy interrupts only.
>>>>>>>> It isn't that easy.
>>>>>>>>
>>>>>>>> What is easy is to add a check to the guest driver that fails setup for
>>>>>>>> devices with more than 64 queues not using adapter interrupts.
>>>>>>>>
>>>>>>>> On the host side, we're lacking information when interpreting
>>>>>>>> CCW_CMD_SET_IND (the command does not contain a queue count, and the
>>>>>>>> actual number of virtqueues is not readily available.)
>>>>>>> Why isn't it available? All devices call virtio_add_queue
>>>>>>> as appropriate. Just fail legacy adaptors.
>>>>>> Because we don't know what the guest is going to use? It is free to
>>>>>> use per-subchannel indicators, even if it is operating in virtio-1 mode.
>>>>>>>> We also can't
>>>>>>>> fence off when setting up the vqs, as this happens before we know which
>>>>>>>> kind of indicators the guest wants to use.
>>>>>>>>
>>>>>>>> More importantly, we haven't even speced what we want to do in this
>>>>>>>> case. Do we want to reject SET_IND for devices with more than 64
>>>>>>>> queues? (Probably yes.)
>>>>>>>>
>>>>>>>> All this involves more work, and I'd prefer to do Jason's changes
>>>>>>>> instead as this gives us some more time to figure this out properly.
>>>>>>>>
>>>>>>>> And we haven't even considered s390-virtio yet, which I really want to
>>>>>>>> touch as little as possible :)
>>>>>>> Well this patch does touch it anyway :)
>>>>>> But only small, self-evident changes.
>>>>>>
>>>>> Sorry, I don't see what you are trying to say.
>>>>> There's no chance legacy interrupts work with > 64 queues.
>>>>> Guests should have validated the # of queues, and not
>>>>> attempted to use >64 queues. Looks like there's no
>>>>> such validation in guest, right?
>>>> I have no idea whether > 64 queues would work with s390-virtio - it
>>>> might well work, but I'm not willing to expend any effort on verifying
>>>> that.
>>> Well this doesn't mean we won't make any changes, ever,
>>> just so we can reduce verification costs.
>>> Let's make the change everywhere, if we see issues
>>> we'll backtrack.
>> I don't like knowingly breaking things. And I know
>> that some virtio-ccw setups will break.
>>
>>>>> Solution - don't specify this configuration with legacy guests.
>>>>>
>>>>> Modern guests work so there's value in supporting such
>>>>> configuration in QEMU, I don't see why we must deny it in QEMU.
>>>> What is "legacy guest" in your context? A guest running with the legacy
>>>> transport or a guest using ccw but not virtio-1? A ccw guest using
>>>> adapter interrupts but not virtio-1 should be fine.
>>> A guest not using adapter interrupts.
>> There's nothing about that that's per-guest. It is a choice per-device.
>> In fact, the Linux guest driver falls back to classic interrupts if it
>> fails to set up adapter interrupts for a device - and this might happen
>> for large guests when the host adapter routing table is full.
>>
>>>>>>> For s390 just check and fail at init if you like.
>>>>>> What about devices that may change their number of queues? I'd really
>>>>>> prefer large queue numbers to be fenced off in the individual
>>>>>> devices, and for that they need to be able to grab a transport-specific
>>>>>> queue limit.
>>>>> This is why I don't want bus specific limits in core,
>>>>> it just makes it too easy to sweep dirt under the carpet.
>>>>> s390 is legacy - fine, but don't perpetuate the issue
>>>>> in devices.
>>>> What is "swept under the carpet" here? A device can have min(max queues
>>>> from transport, max queues from device type) queues. I think it's
>>>> easier to refuse instantiating with too many queues per device type (as
>>>> most will be fine with 64 queues), so I don't want that code in the
>>>> transport (beyond making the limit available).
>>>>
>>>> For s390 I'd like in the end:
>>>> - s390-virtio: legacy - keep it working as best-can-do, so I'd prefer
>>>>   to keep it at 64 queues, even if more might work
>>>> - virtio-ccw, devices in legacy or virtio-1 mode: works with adapter
>>>>   interrupts, so let's fence off setting per-subchannel indicators if a
>>>>   device has more than 64 queues (needs work and a well thought-out
>>>>   rejection mechanism)
>>>>
>>>> That's _in the end_: I'd like to keep ccw at 64 queues _for now_ so
>>>> that we don't have a rushed interface change - and at the same time, I
>>>> don't want to hold off pci. Makes sense?
>>> If you want to fail configurations with > 64 queues in ccw or s390,
>>> that's fine by me. I don't want work arounds for these bugs in virtio
>>> core though. So transports should not have a say in how many queues can
>>> be supported, but they can fail configurations they can't support if
>>> they want to.
>> Eh, isn't that a contradiction? Failing a configuration means that the
>> transport does indeed have a say?
> I'm fine with a general capability that lets the transport check the
> device and fail init, for whatever reason.
> E.g. can we teach the plugged callback to fail?

Looks like we can (and for s390, we need to add a callback just for
checking this). That just moves the transport specific limit to
k->device_plugged (my patch checks k->queue_max). I don't see an
obvious difference.


* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-05-13  7:47         ` Jason Wang
@ 2015-05-13  8:16           ` Michael S. Tsirkin
  0 siblings, 0 replies; 47+ messages in thread
From: Michael S. Tsirkin @ 2015-05-13  8:16 UTC (permalink / raw)
  To: Jason Wang
  Cc: Paolo Bonzini, Alexander Graf, qemu-ppc, qemu-devel, Richard Henderson

On Wed, May 13, 2015 at 03:47:51PM +0800, Jason Wang wrote:
> 
> 
> On 04/28/2015 03:17 PM, Michael S. Tsirkin wrote:
> > On Tue, Apr 28, 2015 at 11:12:10AM +0800, Jason Wang wrote:
> >> > 
> >> > 
> >> > On Mon, Apr 27, 2015 at 7:02 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >>> > >On Thu, Apr 23, 2015 at 02:21:48PM +0800, Jason Wang wrote:
> >>>> > >> This patch increases the maximum number of virtqueues for pci from 64
> >>>> > >> to 513. This will allow booting a virtio-net-pci device with 256 queue
> >>>> > >> pairs on recent Linux host (which supports up to 256 tuntap queue
> >>>> > >>pairs).
> >>>> > >> To keep migration compatibility, 64 was kept for legacy machine
> >>>> > >> types. This is because qemu in fact allows guest to probe the limit of
> >>>> > >> virtqueues through trying to set queue_sel. So for legacy machine
> >>>> > >> type, we should make sure setting queue_sel to more than 63 won't
> >>>> > >> take effect.
> >>> > >
> >>> > >This isn't a documented interface, and no guest that I know of does
> >>> > >this.  Accordingly, I think we should drop everything except the
> >>> > >hw/virtio/virtio-pci.c change.
> >> > 
> >> > We leave a chance for guest to use such undocumented behavior, so
> >> > technically we'd better keep it, and it may be too late for us to fix if we
> >> > find such a guest in the future. And considering that keeping this
> >> > compatibility is really not hard, I suggest including this.
> > Reminds me of  https://xkcd.com/1172/
> > We don't do this kind of thing.
> 
> Ok, but let's consider the management side:
> 
> If we don't do this, consider src has qemu 2.4 and dst has qemu 2.3.
> Then libvirt can create a 2.3 machine on src with more than 64 queues.
> What happens if it wants to migrate to dst?
> I believe we don't want to
> teach libvirt about the queue limit for each machine type?

The basic requirement for migration is to supply an identical
configuration on both sides. If you don't, migration won't
work, and that's expected.

-- 
MST


* Re: [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513
  2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513 Jason Wang
  2015-04-23 11:24   ` Cornelia Huck
  2015-04-27 11:02   ` Michael S. Tsirkin
@ 2015-05-14 18:54   ` Eduardo Habkost
  2 siblings, 0 replies; 47+ messages in thread
From: Eduardo Habkost @ 2015-05-14 18:54 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, Alexander Graf, qemu-devel, qemu-ppc, Paolo Bonzini,
	Richard Henderson

On Thu, Apr 23, 2015 at 02:21:48PM +0800, Jason Wang wrote:
> This patch increases the maximum number of virtqueues for pci from 64
> to 513. This will allow booting a virtio-net-pci device with 256 queue
> pairs on recent Linux host (which supports up to 256 tuntap queue pairs).
> 
> To keep migration compatibility, 64 was kept for legacy machine
> types. This is because qemu in fact allows guest to probe the limit of
> virtqueues through trying to set queue_sel. So for legacy machine
> type, we should make sure setting queue_sel to more than 63 won't
> take effect.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: qemu-ppc@nongnu.org
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  hw/i386/pc_piix.c      | 5 +++++
>  hw/i386/pc_q35.c       | 5 +++++
>  hw/ppc/spapr.c         | 5 +++++
>  hw/virtio/virtio-pci.c | 6 +++++-
>  4 files changed, 20 insertions(+), 1 deletion(-)
> 
[...]
>  static void pc_compat_2_3(MachineState *machine)
>  {
> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> +
> +    k->queue_max = 64;
>  }
[...]
>  static void pc_compat_2_3(MachineState *machine)
>  {
> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> +
> +    k->queue_max = 64;
>  }
[...]
>  static void spapr_compat_2_3(Object *obj)
>  {
> +    ObjectClass *klass = object_class_by_name("virtio-pci-bus");
> +    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
> +
> +    k->queue_max = 64;
>  }

If you use compat_props for this, you will just need to add it to
HW_COMPAT_2_3 without duplicating the same compat code on all machines.
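Eduardo's suggestion could look roughly like the sketch below. Note the assumption: compat_props operates on qdev properties, so this presumes `queue_max` is registered as a property named `"queue_max"` on the `virtio-pci-bus` object, which the posted patch does not yet do (there it is only a `VirtioBusClass` field).

```c
/* Sketch only: assumes "queue_max" is exposed as a qdev property on
 * virtio-pci-bus. A single HW_COMPAT_2_3 entry would then replace the
 * duplicated pc_compat_2_3()/spapr_compat_2_3() hunks quoted above. */
#define HW_COMPAT_2_3 \
    {\
        .driver   = "virtio-pci-bus",\
        .property = "queue_max",\
        .value    = "64",\
    },
```

Every machine type that includes HW_COMPAT_2_3 in its compat_props would then pick up the legacy limit automatically, with no per-machine init code.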

-- 
Eduardo


end of thread, other threads:[~2015-05-14 18:54 UTC | newest]

Thread overview: 47+ messages
-- links below jump to the message on this page --
2015-04-23  6:21 [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 01/16] virtio-net: fix the upper bound when trying to delete queues Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 02/16] pc: add 2.4 machine types Jason Wang
2015-04-27 11:03   ` Michael S. Tsirkin
2015-04-28  3:12     ` Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 03/16] spapr: add machine type specific instance init function Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 04/16] ppc: spapr: add 2.4 machine type Jason Wang
2015-04-27 11:03   ` Michael S. Tsirkin
2015-04-27 13:14   ` Alexander Graf
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 05/16] monitor: replace the magic number 255 with MAX_QUEUE_NUM Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 06/16] monitor: check return value of qemu_find_net_clients_except() Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 07/16] virtio-ccw: using VIRTIO_NO_VECTOR instead of 0 for invalid virtqueue Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 08/16] virtio: introduce bus specific queue limit Jason Wang
2015-04-27 11:05   ` Michael S. Tsirkin
2015-04-28  3:14     ` Jason Wang
2015-04-28  5:13       ` Michael S. Tsirkin
2015-04-28  6:13         ` Jason Wang
2015-04-28  7:14           ` Michael S. Tsirkin
2015-04-28  8:04             ` Cornelia Huck
2015-04-28  8:16               ` Michael S. Tsirkin
2015-04-28 10:40                 ` Cornelia Huck
2015-04-28 10:55                   ` Michael S. Tsirkin
2015-04-28 11:39                     ` Cornelia Huck
2015-04-28 12:47                       ` Michael S. Tsirkin
2015-04-28 13:33                         ` Cornelia Huck
2015-04-28 14:40                           ` Michael S. Tsirkin
2015-05-13  7:51                             ` Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 09/16] virtio-ccw: introduce ccw " Jason Wang
2015-04-23 10:59   ` Cornelia Huck
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 10/16] virtio-s390: switch to bus " Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 11/16] virtio-mmio: " Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 12/16] virtio-pci: switch to use " Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 13/16] virtio: introduce vector to virtqueues mapping Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 14/16] virtio-pci: speedup MSI-X masking and unmasking Jason Wang
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 15/16] virtio-pci: increase the maximum number of virtqueues to 513 Jason Wang
2015-04-23 11:24   ` Cornelia Huck
2015-04-28  3:05     ` Jason Wang
2015-04-27 11:02   ` Michael S. Tsirkin
2015-04-28  3:12     ` Jason Wang
2015-04-28  7:17       ` Michael S. Tsirkin
2015-05-13  7:47         ` Jason Wang
2015-05-13  8:16           ` Michael S. Tsirkin
2015-05-14 18:54   ` Eduardo Habkost
2015-04-23  6:21 ` [Qemu-devel] [PATCH V7 16/16] pci: remove hard-coded bar size in msix_init_exclusive_bar() Jason Wang
2015-04-23 11:27 ` [Qemu-devel] [PATCH V7 00/16] Support more virtio queues Cornelia Huck
2015-04-28  3:14   ` Jason Wang
2015-04-27 19:06 ` Michael S. Tsirkin
