* [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block
@ 2017-12-15 15:02 Denis V. Lunev
  2017-12-15 15:02 ` [Qemu-devel] [PATCH 1/2] pc, q35: add 2.12 machine types Denis V. Lunev
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Denis V. Lunev @ 2017-12-15 15:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Denis V. Lunev, Michael S. Tsirkin, Stefan Hajnoczi, Kevin Wolf,
	Max Reitz, Paolo Bonzini, Richard Henderson, Eduardo Habkost

v2->v3
- added 2.12 machine types
- added compat properties for 2.11 machine type

v1->v2:
- added max_segments property for virtblock device

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Eduardo Habkost <ehabkost@redhat.com>


* [Qemu-devel] [PATCH 1/2] pc, q35: add 2.12 machine types
  2017-12-15 15:02 [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
@ 2017-12-15 15:02 ` Denis V. Lunev
  2017-12-18 13:54   ` Christian Borntraeger
  2017-12-15 15:02 ` [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Denis V. Lunev @ 2017-12-15 15:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Denis V. Lunev, Michael S. Tsirkin, Stefan Hajnoczi, Kevin Wolf,
	Max Reitz, Paolo Bonzini, Richard Henderson, Eduardo Habkost

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Eduardo Habkost <ehabkost@redhat.com>
---
 include/hw/compat.h  |  2 ++
 include/hw/i386/pc.h |  3 +++
 hw/i386/pc_piix.c    | 13 ++++++++++++-
 hw/i386/pc_q35.c     | 12 +++++++++++-
 4 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/include/hw/compat.h b/include/hw/compat.h
index cf389b4..026fee9 100644
--- a/include/hw/compat.h
+++ b/include/hw/compat.h
@@ -1,6 +1,8 @@
 #ifndef HW_COMPAT_H
 #define HW_COMPAT_H
 
+#define HW_COMPAT_2_11 \
+
 #define HW_COMPAT_2_10 \
     {\
         .driver   = "virtio-mouse-device",\
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index ef438bd..e08c492 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -369,6 +369,9 @@ int e820_add_entry(uint64_t, uint64_t, uint32_t);
 int e820_get_num_entries(void);
 bool e820_get_entry(int, uint32_t, uint64_t *, uint64_t *);
 
+#define PC_COMPAT_2_11 \
+    HW_COMPAT_2_11 \
+
 #define PC_COMPAT_2_10 \
     HW_COMPAT_2_10 \
     {\
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 5e47528..25380b0 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -430,13 +430,24 @@ static void pc_i440fx_machine_options(MachineClass *m)
     m->default_display = "std";
 }
 
-static void pc_i440fx_2_11_machine_options(MachineClass *m)
+static void pc_i440fx_2_12_machine_options(MachineClass *m)
 {
     pc_i440fx_machine_options(m);
     m->alias = "pc";
     m->is_default = 1;
 }
 
+DEFINE_I440FX_MACHINE(v2_12, "pc-i440fx-2.12", NULL,
+                      pc_i440fx_2_12_machine_options);
+
+static void pc_i440fx_2_11_machine_options(MachineClass *m)
+{
+    pc_i440fx_2_12_machine_options(m);
+    m->is_default = 0;
+    m->alias = NULL;
+    SET_MACHINE_COMPAT(m, PC_COMPAT_2_11);
+}
+
 DEFINE_I440FX_MACHINE(v2_11, "pc-i440fx-2.11", NULL,
                       pc_i440fx_2_11_machine_options);
 
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index d606004..a9b9208 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -303,12 +303,22 @@ static void pc_q35_machine_options(MachineClass *m)
     m->max_cpus = 288;
 }
 
-static void pc_q35_2_11_machine_options(MachineClass *m)
+static void pc_q35_2_12_machine_options(MachineClass *m)
 {
     pc_q35_machine_options(m);
     m->alias = "q35";
 }
 
+DEFINE_Q35_MACHINE(v2_12, "pc-q35-2.12", NULL,
+                   pc_q35_2_12_machine_options);
+
+static void pc_q35_2_11_machine_options(MachineClass *m)
+{
+    pc_q35_2_12_machine_options(m);
+    m->alias = NULL;
+    SET_MACHINE_COMPAT(m, PC_COMPAT_2_11);
+}
+
 DEFINE_Q35_MACHINE(v2_11, "pc-q35-2.11", NULL,
                    pc_q35_2_11_machine_options);
 
-- 
2.7.4
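
For context: the HW_COMPAT_*/PC_COMPAT_* macros above expand to
GlobalProperty initializers, and SET_MACHINE_COMPAT() registers them on
the machine class so that forced defaults apply only to older machine
types. A minimal sketch of the effect (a simplified illustration of
QEMU's compat machinery, not code from this patch; the virtio-scsi
entry shown is the one added in patch 2):

    /* Each compat entry pins a device property on an older machine type. */
    static GlobalProperty compat_2_11[] = {
        {
            .driver   = "virtio-scsi-device",
            .property = "max_segments",
            .value    = "126",
        },
    };
    /* SET_MACHINE_COMPAT(m, PC_COMPAT_2_11) attaches entries like these,
     * so "-M pc-i440fx-2.11" keeps the pre-2.12 guest-visible defaults
     * while "-M pc-i440fx-2.12" picks up the new ones. */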


* [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-15 15:02 [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
  2017-12-15 15:02 ` [Qemu-devel] [PATCH 1/2] pc, q35: add 2.12 machine types Denis V. Lunev
@ 2017-12-15 15:02 ` Denis V. Lunev
  2017-12-18 13:38   ` Stefan Hajnoczi
  2017-12-18 13:38 ` [Qemu-devel] [PATCH v3 0/2] " Stefan Hajnoczi
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Denis V. Lunev @ 2017-12-15 15:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Denis V. Lunev, Michael S. Tsirkin, Stefan Hajnoczi, Kevin Wolf,
	Max Reitz, Paolo Bonzini, Richard Henderson, Eduardo Habkost

Linux guests submit IO requests no longer than PAGE_SIZE * max_seg,
where max_seg is the field reported by the SCSI controller. Thus a
typical sequential read of 1 MB size results in the following IO
pattern from the guest:
  8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
  8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
  8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
  8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
  8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
  8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
The IO was generated by
  dd if=/dev/sda of=/dev/null bs=1024 iflag=direct

This effectively means that on rotational disks we will observe 3 IOPS
for every 2 MB processed. This definitely hurts both guest and host IO
performance.
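
To make the limit concrete: with the old seg_max of 128 - 2 = 126 and
4 KB pages, a single request is capped at 126 * 4096 = 516096 bytes,
i.e. 1008 512-byte sectors - exactly the split visible in the trace
above. A quick sketch of the arithmetic (variable names are
illustrative, not from the patch):

    unsigned seg_max = 128 - 2;                /* old QEMU default      */
    size_t max_bytes = (size_t)seg_max * 4096; /* 516096 bytes          */
    unsigned max_sectors = max_bytes / 512;    /* 1008 sectors, as seen
                                                  in the blktrace above */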

The cure is relatively simple - we should report the longer scatter-gather
capability of the SCSI controller. Fortunately the situation here is very
good: the VirtIO transport layer can accommodate 1024 items in one request
while we are using only 128, and this has been the case since almost the
very beginning. 2 items are dedicated to request metadata, thus we
should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.

The following pattern is observed after the patch:
  8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
  8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
  8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
  8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
which is much better.

The dark side of this patch is that we are tweaking a guest-visible
parameter, though this should be relatively safe as the above transport
layer support has been present in QEMU/host Linux for a very long time.
The patch adds a configurable property for VirtIO SCSI with a new default,
and a hardcoded value for VirtBlock, which does not provide a good
framework for configuration.
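
With the property in place, the value can also be tuned per device on
the command line, e.g. (illustrative invocations; this assumes the
virtio PCI proxies alias the child device's properties as usual):

  -device virtio-scsi-pci,max_segments=126
  -device virtio-blk-pci,max_segments=126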

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Eduardo Habkost <ehabkost@redhat.com>
---
 include/hw/compat.h             | 17 +++++++++++++++++
 include/hw/virtio/virtio-blk.h  |  1 +
 include/hw/virtio/virtio-scsi.h |  1 +
 hw/block/virtio-blk.c           |  4 +++-
 hw/scsi/vhost-scsi.c            |  2 ++
 hw/scsi/vhost-user-scsi.c       |  2 ++
 hw/scsi/virtio-scsi.c           |  4 +++-
 7 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/include/hw/compat.h b/include/hw/compat.h
index 026fee9..b9be5d7 100644
--- a/include/hw/compat.h
+++ b/include/hw/compat.h
@@ -2,6 +2,23 @@
 #define HW_COMPAT_H
 
 #define HW_COMPAT_2_11 \
+    {\
+        .driver   = "virtio-blk-device",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },{\
+        .driver   = "vhost-scsi",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },{\
+        .driver   = "vhost-user-scsi",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },{\
+        .driver   = "virtio-scsi-device",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },
 
 #define HW_COMPAT_2_10 \
     {\
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index d3c8a6f..0aa83a3 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -39,6 +39,7 @@ struct VirtIOBlkConf
     uint32_t config_wce;
     uint32_t request_merging;
     uint16_t num_queues;
+    uint32_t max_segments;
 };
 
 struct VirtIOBlockDataPlane;
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 4c0bcdb..1e5805e 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -49,6 +49,7 @@ struct VirtIOSCSIConf {
     uint32_t num_queues;
     uint32_t virtqueue_size;
     uint32_t max_sectors;
+    uint32_t max_segments;
     uint32_t cmd_per_lun;
 #ifdef CONFIG_VHOST_SCSI
     char *vhostfd;
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 05d1440..99da3b6 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -736,7 +736,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
     blk_get_geometry(s->blk, &capacity);
     memset(&blkcfg, 0, sizeof(blkcfg));
     virtio_stq_p(vdev, &blkcfg.capacity, capacity);
-    virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
+    virtio_stl_p(vdev, &blkcfg.seg_max, s->conf.max_segments);
     virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
     virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
     virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
@@ -1014,6 +1014,8 @@ static Property virtio_blk_properties[] = {
     DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
     DEFINE_PROP_LINK("iothread", VirtIOBlock, conf.iothread, TYPE_IOTHREAD,
                      IOThread *),
+    DEFINE_PROP_UINT32("max_segments", VirtIOBlock, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/vhost-scsi.c b/hw/scsi/vhost-scsi.c
index 9c1bea8..f93eac6 100644
--- a/hw/scsi/vhost-scsi.c
+++ b/hw/scsi/vhost-scsi.c
@@ -238,6 +238,8 @@ static Property vhost_scsi_properties[] = {
     DEFINE_PROP_UINT32("max_sectors", VirtIOSCSICommon, conf.max_sectors,
                        0xFFFF),
     DEFINE_PROP_UINT32("cmd_per_lun", VirtIOSCSICommon, conf.cmd_per_lun, 128),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index f7561e2..8b02ab1 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -146,6 +146,8 @@ static Property vhost_user_scsi_properties[] = {
     DEFINE_PROP_BIT64("param_change", VHostUserSCSI, host_features,
                                                      VIRTIO_SCSI_F_CHANGE,
                                                      true),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 3aa9971..5404dde 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -644,7 +644,7 @@ static void virtio_scsi_get_config(VirtIODevice *vdev,
     VirtIOSCSICommon *s = VIRTIO_SCSI_COMMON(vdev);
 
     virtio_stl_p(vdev, &scsiconf->num_queues, s->conf.num_queues);
-    virtio_stl_p(vdev, &scsiconf->seg_max, 128 - 2);
+    virtio_stl_p(vdev, &scsiconf->seg_max, s->conf.max_segments);
     virtio_stl_p(vdev, &scsiconf->max_sectors, s->conf.max_sectors);
     virtio_stl_p(vdev, &scsiconf->cmd_per_lun, s->conf.cmd_per_lun);
     virtio_stl_p(vdev, &scsiconf->event_info_size, sizeof(VirtIOSCSIEvent));
@@ -929,6 +929,8 @@ static Property virtio_scsi_properties[] = {
                                                 VIRTIO_SCSI_F_CHANGE, true),
     DEFINE_PROP_LINK("iothread", VirtIOSCSI, parent_obj.conf.iothread,
                      TYPE_IOTHREAD, IOThread *),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSI, parent_obj.conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
 
-- 
2.7.4


* Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-15 15:02 ` [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
@ 2017-12-18 13:38   ` Stefan Hajnoczi
  2017-12-18 16:16     ` Harris, James R
  0 siblings, 1 reply; 15+ messages in thread
From: Stefan Hajnoczi @ 2017-12-18 13:38 UTC (permalink / raw)
  To: Denis V. Lunev
  Cc: qemu-devel, Michael S. Tsirkin, Kevin Wolf, Max Reitz,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost,
	Felipe Franciosi, james.r.harris


On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg,
> where max_seg is the field reported by the SCSI controller. Thus a
> typical sequential read of 1 MB size results in the following IO
> pattern from the guest:
>   8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
>   8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
>   8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
>   8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
>   8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
>   8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
> The IO was generated by
>   dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
> 
> This effectively means that on rotational disks we will observe 3 IOPS
> for every 2 MB processed. This definitely hurts both guest and host IO
> performance.
> 
> The cure is relatively simple - we should report the longer scatter-gather
> capability of the SCSI controller. Fortunately the situation here is very
> good: the VirtIO transport layer can accommodate 1024 items in one request
> while we are using only 128, and this has been the case since almost the
> very beginning. 2 items are dedicated to request metadata, thus we
> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
> 
> The following pattern is observed after the patch:
>   8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
>   8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
>   8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
>   8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
> which is much better.
> 
> The dark side of this patch is that we are tweaking a guest-visible
> parameter, though this should be relatively safe as the above transport
> layer support has been present in QEMU/host Linux for a very long time.
> The patch adds a configurable property for VirtIO SCSI with a new default,
> and a hardcoded value for VirtBlock, which does not provide a good
> framework for configuration.
> 
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: "Michael S. Tsirkin" <mst@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Richard Henderson <rth@twiddle.net>
> CC: Eduardo Habkost <ehabkost@redhat.com>
> ---
>  include/hw/compat.h             | 17 +++++++++++++++++
>  include/hw/virtio/virtio-blk.h  |  1 +
>  include/hw/virtio/virtio-scsi.h |  1 +
>  hw/block/virtio-blk.c           |  4 +++-
>  hw/scsi/vhost-scsi.c            |  2 ++
>  hw/scsi/vhost-user-scsi.c       |  2 ++
>  hw/scsi/virtio-scsi.c           |  4 +++-
>  7 files changed, 29 insertions(+), 2 deletions(-)
> 
> diff --git a/include/hw/compat.h b/include/hw/compat.h
> index 026fee9..b9be5d7 100644
> --- a/include/hw/compat.h
> +++ b/include/hw/compat.h
> @@ -2,6 +2,23 @@
>  #define HW_COMPAT_H
>  
>  #define HW_COMPAT_2_11 \
> +    {\
> +        .driver   = "virtio-blk-device",\
> +        .property = "max_segments",\
> +        .value    = "126",\
> +    },{\
> +        .driver   = "vhost-scsi",\
> +        .property = "max_segments",\
> +        .value    = "126",\
> +    },{\
> +        .driver   = "vhost-user-scsi",\
> +        .property = "max_segments",\
> +        .value    = "126",\

Existing vhost-user-scsi slave programs might not expect up to 1022
segments.  Hopefully we can get away with this change since there are
relatively few vhost-user-scsi slave programs.

CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.



* Re: [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-15 15:02 [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
  2017-12-15 15:02 ` [Qemu-devel] [PATCH 1/2] pc, q35: add 2.12 machine types Denis V. Lunev
  2017-12-15 15:02 ` [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
@ 2017-12-18 13:38 ` Stefan Hajnoczi
  2017-12-19 12:45 ` Denis V. Lunev
  2017-12-20  4:23 ` Michael S. Tsirkin
  4 siblings, 0 replies; 15+ messages in thread
From: Stefan Hajnoczi @ 2017-12-18 13:38 UTC (permalink / raw)
  To: Denis V. Lunev
  Cc: qemu-devel, Michael S. Tsirkin, Kevin Wolf, Max Reitz,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost


On Fri, Dec 15, 2017 at 06:02:48PM +0300, Denis V. Lunev wrote:
> v2->v3
> - added 2.12 machine types
> - added compat properties for 2.11 machine type
> 
> v1->v2:
> - added max_segments property for virtblock device
> 
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: "Michael S. Tsirkin" <mst@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Richard Henderson <rth@twiddle.net>
> CC: Eduardo Habkost <ehabkost@redhat.com>
> 

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [Qemu-devel] [PATCH 1/2] pc, q35: add 2.12 machine types
  2017-12-15 15:02 ` [Qemu-devel] [PATCH 1/2] pc, q35: add 2.12 machine types Denis V. Lunev
@ 2017-12-18 13:54   ` Christian Borntraeger
  0 siblings, 0 replies; 15+ messages in thread
From: Christian Borntraeger @ 2017-12-18 13:54 UTC (permalink / raw)
  To: Denis V. Lunev, qemu-devel
  Cc: Kevin Wolf, Eduardo Habkost, Michael S. Tsirkin, Max Reitz,
	Stefan Hajnoczi, Paolo Bonzini, Richard Henderson, Peter Maydell

On 12/15/2017 04:02 PM, Denis V. Lunev wrote:

>  include/hw/compat.h  |  2 ++
>  include/hw/i386/pc.h |  3 +++
>  hw/i386/pc_piix.c    | 13 ++++++++++++-
>  hw/i386/pc_q35.c     | 12 +++++++++++-
>  4 files changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/include/hw/compat.h b/include/hw/compat.h
> index cf389b4..026fee9 100644
> --- a/include/hw/compat.h
> +++ b/include/hw/compat.h
> @@ -1,6 +1,8 @@
>  #ifndef HW_COMPAT_H
>  #define HW_COMPAT_H
> 
> +#define HW_COMPAT_2_11 \
> +
>  #define HW_COMPAT_2_10 \
>      {\

FWIW, arm, s390 and power also include HW_COMPAT_* for their machines.
As it happens, s390 and power already introduced HW_COMPAT_2_11 last
week, so these changes should be fine.

Peter, maybe ARM should add a 2.11 compat machine (and a switch to 2.12)
sooner than usual?


* Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-18 13:38   ` Stefan Hajnoczi
@ 2017-12-18 16:16     ` Harris, James R
  2017-12-18 19:35       ` Felipe Franciosi
  0 siblings, 1 reply; 15+ messages in thread
From: Harris, James R @ 2017-12-18 16:16 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Denis V. Lunev, qemu-devel, Michael S. Tsirkin, Kevin Wolf,
	Max Reitz, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
	Felipe Franciosi, Liu, Changpeng


> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> 
> On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg,
>> where max_seg is the field reported by the SCSI controller. Thus a
>> typical sequential read of 1 MB size results in the following IO
>> pattern from the guest:
>>  8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
>>  8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
>>  8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
>>  8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
>>  8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
>>  8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
>> The IO was generated by
>>  dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
>> 
>> This effectively means that on rotational disks we will observe 3 IOPS
>> for every 2 MB processed. This definitely hurts both guest and host IO
>> performance.
>> 
>> The cure is relatively simple - we should report the longer scatter-gather
>> capability of the SCSI controller. Fortunately the situation here is very
>> good: the VirtIO transport layer can accommodate 1024 items in one request
>> while we are using only 128, and this has been the case since almost the
>> very beginning. 2 items are dedicated to request metadata, thus we
>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
>> 
>> The following pattern is observed after the patch:
>>  8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
>>  8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
>>  8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
>>  8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
>> which is much better.
>> 
>> The dark side of this patch is that we are tweaking a guest-visible
>> parameter, though this should be relatively safe as the above transport
>> layer support has been present in QEMU/host Linux for a very long time.
>> The patch adds a configurable property for VirtIO SCSI with a new default,
>> and a hardcoded value for VirtBlock, which does not provide a good
>> framework for configuration.
>> 
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: "Michael S. Tsirkin" <mst@redhat.com>
>> CC: Stefan Hajnoczi <stefanha@redhat.com>
>> CC: Kevin Wolf <kwolf@redhat.com>
>> CC: Max Reitz <mreitz@redhat.com>
>> CC: Paolo Bonzini <pbonzini@redhat.com>
>> CC: Richard Henderson <rth@twiddle.net>
>> CC: Eduardo Habkost <ehabkost@redhat.com>
>> ---
>> include/hw/compat.h             | 17 +++++++++++++++++
>> include/hw/virtio/virtio-blk.h  |  1 +
>> include/hw/virtio/virtio-scsi.h |  1 +
>> hw/block/virtio-blk.c           |  4 +++-
>> hw/scsi/vhost-scsi.c            |  2 ++
>> hw/scsi/vhost-user-scsi.c       |  2 ++
>> hw/scsi/virtio-scsi.c           |  4 +++-
>> 7 files changed, 29 insertions(+), 2 deletions(-)
>> 
>> diff --git a/include/hw/compat.h b/include/hw/compat.h
>> index 026fee9..b9be5d7 100644
>> --- a/include/hw/compat.h
>> +++ b/include/hw/compat.h
>> @@ -2,6 +2,23 @@
>> #define HW_COMPAT_H
>> 
>> #define HW_COMPAT_2_11 \
>> +    {\
>> +        .driver   = "virtio-blk-device",\
>> +        .property = "max_segments",\
>> +        .value    = "126",\
>> +    },{\
>> +        .driver   = "vhost-scsi",\
>> +        .property = "max_segments",\
>> +        .value    = "126",\
>> +    },{\
>> +        .driver   = "vhost-user-scsi",\
>> +        .property = "max_segments",\
>> +        .value    = "126",\
> 
> Existing vhost-user-scsi slave programs might not expect up to 1022
> segments.  Hopefully we can get away with this change since there are
> relatively few vhost-user-scsi slave programs.
> 
> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.

SPDK vhost-user targets only expect max 128 segments.  They also pre-allocate I/O task structures when QEMU connects to the vhost-user device.

Supporting up to 1022 segments would result in significantly higher memory usage, reduction in I/O queue depth processed by the vhost-user target, or having to dynamically allocate I/O task structures - none of which are ideal.
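
For a rough sense of scale (illustrative figures, assuming one 16-byte struct iovec per segment and the 128 pre-allocated tasks per queue mentioned above):

    128 tasks * 1022 iovecs * 16 B  ~= 2.0 MiB per queue
    128 tasks *  126 iovecs * 16 B  ~= 252 KiB per queue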

What if this was just bumped from 126 to 128?  I guess I’m trying to understand the level of guest and host I/O performance that is gained with this patch.  One I/O per 512KB vs. one I/O per 4MB - we are still only talking about a few hundred IO/s difference.

-Jim




* Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-18 16:16     ` Harris, James R
@ 2017-12-18 19:35       ` Felipe Franciosi
  2017-12-18 19:42         ` Denis V. Lunev
  2017-12-20  4:16         ` Michael S. Tsirkin
  0 siblings, 2 replies; 15+ messages in thread
From: Felipe Franciosi @ 2017-12-18 19:35 UTC (permalink / raw)
  To: Harris, James R, Stefan Hajnoczi, Denis V. Lunev
  Cc: qemu-devel, Michael S. Tsirkin, Kevin Wolf, Max Reitz,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost, Liu,
	Changpeng


> On 18 Dec 2017, at 16:16, Harris, James R <james.r.harris@intel.com> wrote:
> 
> 
>> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> 
>> On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
>>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg,
>>> where max_seg is the field reported by the SCSI controller. Thus a
>>> typical sequential read of 1 MB size results in the following IO
>>> pattern from the guest:
>>> 8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
>>> 8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
>>> 8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
>>> 8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
>>> 8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
>>> 8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
>>> The IO was generated by
>>> dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
>>> 
>>> This effectively means that on rotational disks we will observe 3 IOPS
>>> for every 2 MB processed. This definitely hurts both guest and host IO
>>> performance.
>>> 
>>> The cure is relatively simple - we should report the longer scatter-gather
>>> capability of the SCSI controller. Fortunately the situation here is very
>>> good: the VirtIO transport layer can accommodate 1024 items in one request
>>> while we are using only 128, and this has been the case since almost the
>>> very beginning. 2 items are dedicated to request metadata, thus we
>>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
>>> 
>>> The following pattern is observed after the patch:
>>> 8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
>>> 8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
>>> 8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
>>> 8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
>>> which is much better.
>>> 
>>> The dark side of this patch is that we are tweaking a guest-visible
>>> parameter, though this should be relatively safe as the above transport
>>> layer support has been present in QEMU/host Linux for a very long time.
>>> The patch adds a configurable property for VirtIO SCSI with a new default,
>>> and a hardcoded value for VirtBlock, which does not provide a good
>>> framework for configuration.
>>> 
>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>> CC: "Michael S. Tsirkin" <mst@redhat.com>
>>> CC: Stefan Hajnoczi <stefanha@redhat.com>
>>> CC: Kevin Wolf <kwolf@redhat.com>
>>> CC: Max Reitz <mreitz@redhat.com>
>>> CC: Paolo Bonzini <pbonzini@redhat.com>
>>> CC: Richard Henderson <rth@twiddle.net>
>>> CC: Eduardo Habkost <ehabkost@redhat.com>
>>> ---
>>> include/hw/compat.h             | 17 +++++++++++++++++
>>> include/hw/virtio/virtio-blk.h  |  1 +
>>> include/hw/virtio/virtio-scsi.h |  1 +
>>> hw/block/virtio-blk.c           |  4 +++-
>>> hw/scsi/vhost-scsi.c            |  2 ++
>>> hw/scsi/vhost-user-scsi.c       |  2 ++
>>> hw/scsi/virtio-scsi.c           |  4 +++-
>>> 7 files changed, 29 insertions(+), 2 deletions(-)
>>> 
>>> diff --git a/include/hw/compat.h b/include/hw/compat.h
>>> index 026fee9..b9be5d7 100644
>>> --- a/include/hw/compat.h
>>> +++ b/include/hw/compat.h
>>> @@ -2,6 +2,23 @@
>>> #define HW_COMPAT_H
>>> 
>>> #define HW_COMPAT_2_11 \
>>> +    {\
>>> +        .driver   = "virtio-blk-device",\
>>> +        .property = "max_segments",\
>>> +        .value    = "126",\
>>> +    },{\
>>> +        .driver   = "vhost-scsi",\
>>> +        .property = "max_segments",\
>>> +        .value    = "126",\
>>> +    },{\
>>> +        .driver   = "vhost-user-scsi",\
>>> +        .property = "max_segments",\
>>> +        .value    = "126",\
>> 
>> Existing vhost-user-scsi slave programs might not expect up to 1022
>> segments.  Hopefully we can get away with this change since there are
>> relatively few vhost-user-scsi slave programs.
>> 
>> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
> 
> SPDK vhost-user targets only expect max 128 segments.  They also pre-allocate I/O task structures when QEMU connects to the vhost-user device.
> 
> Supporting up to 1022 segments would result in significantly higher memory usage, reduction in I/O queue depth processed by the vhost-user target, or having to dynamically allocate I/O task structures - none of which are ideal.
> 
> What if this was just bumped from 126 to 128?  I guess I’m trying to understand the level of guest and host I/O performance that is gained with this patch.  One I/O per 512KB vs. one I/O per 4MB - we are still only talking about a few hundred IO/s difference.

SeaBIOS also makes the assumption that the queue size is not bigger than 128 elements.
https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23
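
For reference, the assumption there is a compile-time bound roughly of
the following form (quoted from memory, details may differ):

    #define MAX_QUEUE_NUM      (128)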

Perhaps a better approach is to make the value configurable (i.e. add the "max_segments" property), but set the default to 128-2. In addition to what Jim pointed out, I think there may be other legacy front-end drivers which assume the ring will be at most 128 entries in size.

With that, hypervisors can choose to bump the value higher if it's known to be safe for their host+guest configuration.

Cheers,
Felipe

> 
> -Jim
> 
> 



* Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-18 19:35       ` Felipe Franciosi
@ 2017-12-18 19:42         ` Denis V. Lunev
  2017-12-19  8:57           ` Roman Kagan
  2017-12-20  4:16         ` Michael S. Tsirkin
  1 sibling, 1 reply; 15+ messages in thread
From: Denis V. Lunev @ 2017-12-18 19:42 UTC (permalink / raw)
  To: Felipe Franciosi, Harris, James R, Stefan Hajnoczi
  Cc: qemu-devel, Michael S. Tsirkin, Kevin Wolf, Max Reitz,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost, Liu,
	Changpeng

On 12/18/2017 10:35 PM, Felipe Franciosi wrote:
>> On 18 Dec 2017, at 16:16, Harris, James R <james.r.harris@intel.com> wrote:
>>
>>
>>> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>>
>>> On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
>>>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg,
>>>> where max_seg is the field reported by the SCSI controller. Thus a
>>>> typical sequential read of 1 MB size results in the following IO
>>>> pattern from the guest:
>>>> 8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
>>>> 8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
>>>> 8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
>>>> 8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
>>>> 8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
>>>> 8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
>>>> The IO was generated by
>>>> dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
>>>>
>>>> This effectively means that on rotational disks we will observe 3 IOPS
>>>> for every 2 MB processed. This definitely hurts both guest and host IO
>>>> performance.
>>>>
>>>> The cure is relatively simple - we should report the longer scatter-gather
>>>> capability of the SCSI controller. Fortunately the situation here is very
>>>> good: the VirtIO transport layer can accommodate 1024 items in one request
>>>> while we are using only 128, and this has been the case since almost the
>>>> very beginning. 2 items are dedicated to request metadata, thus we
>>>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
>>>>
>>>> The following pattern is observed after the patch:
>>>> 8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
>>>> 8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
>>>> 8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
>>>> 8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
>>>> which is much better.
>>>>
>>>> The dark side of this patch is that we are tweaking a guest-visible
>>>> parameter, though this should be relatively safe as the above transport
>>>> layer support has been present in QEMU/host Linux for a very long time.
>>>> The patch adds a configurable property for VirtIO SCSI with a new default,
>>>> and a hardcoded value for VirtBlock, which does not provide a good
>>>> framework for configuration.
>>>>
>>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>>> CC: "Michael S. Tsirkin" <mst@redhat.com>
>>>> CC: Stefan Hajnoczi <stefanha@redhat.com>
>>>> CC: Kevin Wolf <kwolf@redhat.com>
>>>> CC: Max Reitz <mreitz@redhat.com>
>>>> CC: Paolo Bonzini <pbonzini@redhat.com>
>>>> CC: Richard Henderson <rth@twiddle.net>
>>>> CC: Eduardo Habkost <ehabkost@redhat.com>
>>>> ---
>>>> include/hw/compat.h             | 17 +++++++++++++++++
>>>> include/hw/virtio/virtio-blk.h  |  1 +
>>>> include/hw/virtio/virtio-scsi.h |  1 +
>>>> hw/block/virtio-blk.c           |  4 +++-
>>>> hw/scsi/vhost-scsi.c            |  2 ++
>>>> hw/scsi/vhost-user-scsi.c       |  2 ++
>>>> hw/scsi/virtio-scsi.c           |  4 +++-
>>>> 7 files changed, 29 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/include/hw/compat.h b/include/hw/compat.h
>>>> index 026fee9..b9be5d7 100644
>>>> --- a/include/hw/compat.h
>>>> +++ b/include/hw/compat.h
>>>> @@ -2,6 +2,23 @@
>>>> #define HW_COMPAT_H
>>>>
>>>> #define HW_COMPAT_2_11 \
>>>> +    {\
>>>> +        .driver   = "virtio-blk-device",\
>>>> +        .property = "max_segments",\
>>>> +        .value    = "126",\
>>>> +    },{\
>>>> +        .driver   = "vhost-scsi",\
>>>> +        .property = "max_segments",\
>>>> +        .value    = "126",\
>>>> +    },{\
>>>> +        .driver   = "vhost-user-scsi",\
>>>> +        .property = "max_segments",\
>>>> +        .value    = "126",\
>>> Existing vhost-user-scsi slave programs might not expect up to 1022
>>> segments.  Hopefully we can get away with this change since there are
>>> relatively few vhost-user-scsi slave programs.
>>>
>>> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
>> SPDK vhost-user targets only expect max 128 segments.  They also pre-allocate I/O task structures when QEMU connects to the vhost-user device.
>>
>> Supporting up to 1022 segments would result in significantly higher memory usage, reduction in I/O queue depth processed by the vhost-user target, or having to dynamically allocate I/O task structures - none of which are ideal.
>>
>> What if this was just bumped from 126 to 128?  I guess I’m trying to understand the level of guest and host I/O performance that is gained with this patch.  One I/O per 512KB vs. one I/O per 4MB - we are still only talking about a few hundred IO/s difference.
> SeaBIOS also makes the assumption that the queue size is not bigger than 128 elements.
> https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23
>
> Perhaps a better approach is to make the value configurable (i.e. add the "max_segments" property), but set the default to 128-2. In addition to what Jim pointed out, I think there may be other legacy front-end drivers which assume the ring will be at most 128 entries in size.
>
> With that, hypervisors can choose to bump the value higher if it's known to be safe for their host+guest configuration.

This should not be a problem at all IMHO. The guest is not obliged
to use messages of the entire possible size. The guest initiates a
request with 128 elements. Fine. QEMU is ready for this.

Den


* Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-18 19:42         ` Denis V. Lunev
@ 2017-12-19  8:57           ` Roman Kagan
  2017-12-19  9:59             ` Liu, Changpeng
  0 siblings, 1 reply; 15+ messages in thread
From: Roman Kagan @ 2017-12-19  8:57 UTC (permalink / raw)
  To: Denis V. Lunev
  Cc: Felipe Franciosi, Harris, James R, Stefan Hajnoczi, Kevin Wolf,
	Eduardo Habkost, Michael S. Tsirkin, qemu-devel, Max Reitz,
	Paolo Bonzini, Liu,	Changpeng, Richard Henderson

On Mon, Dec 18, 2017 at 10:42:35PM +0300, Denis V. Lunev wrote:
> On 12/18/2017 10:35 PM, Felipe Franciosi wrote:
> >> On 18 Dec 2017, at 16:16, Harris, James R <james.r.harris@intel.com> wrote:
> >>> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >>> On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
> >>>> diff --git a/include/hw/compat.h b/include/hw/compat.h
> >>>> index 026fee9..b9be5d7 100644
> >>>> --- a/include/hw/compat.h
> >>>> +++ b/include/hw/compat.h
> >>>> @@ -2,6 +2,23 @@
> >>>> #define HW_COMPAT_H
> >>>>
> >>>> #define HW_COMPAT_2_11 \
> >>>> +    {\
> >>>> +        .driver   = "virtio-blk-device",\
> >>>> +        .property = "max_segments",\
> >>>> +        .value    = "126",\
> >>>> +    },{\
> >>>> +        .driver   = "vhost-scsi",\
> >>>> +        .property = "max_segments",\
> >>>> +        .value    = "126",\
> >>>> +    },{\
> >>>> +        .driver   = "vhost-user-scsi",\
> >>>> +        .property = "max_segments",\
> >>>> +        .value    = "126",\
> >>> Existing vhost-user-scsi slave programs might not expect up to 1022
> >>> segments.  Hopefully we can get away with this change since there are
> >>> relatively few vhost-user-scsi slave programs.
> >>>
> >>> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
> >> SPDK vhost-user targets only expect max 128 segments.  They also pre-allocate I/O task structures when QEMU connects to the vhost-user device.
> >>
> >> Supporting up to 1022 segments would result in significantly higher memory usage, reduction in I/O queue depth processed by the vhost-user target, or having to dynamically allocate I/O task structures - none of which are ideal.
> >>
> >> What if this was just bumped from 126 to 128?  I guess I’m trying to understand the level of guest and host I/O performance that is gained with this patch.  One I/O per 512KB vs. one I/O per 4MB - we are still only talking about a few hundred IO/s difference.
> > SeaBIOS also makes the assumption that the queue size is not bigger than 128 elements.
> > https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23
> >
> > Perhaps a better approach is to make the value configurable (i.e. add the "max_segments" property), but set the default to 128-2. In addition to what Jim pointed out, I think there may be other legacy front-end drivers which assume the ring will be at most 128 entries in size.
> >
> > With that, hypervisors can choose to bump the value higher if it's known to be safe for their host+guest configuration.
> 
> This should not be a problem at all IMHO. The guest is not obliged
> to use messages of the entire possible size. The guest initiates a
> request with 128 elements. Fine. QEMU is ready for this.

QEMU is, but vhost-user slaves may not be.  And there seems to be no
vhost-user protocol message type that would allow negotiating this
value between the master and the slave.

So apparently the default for vhost-user-scsi has to stay the same in
order not to break existing slaves.  I guess having it tunable via a
property may still turn out useful.

Roman.


* Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-19  8:57           ` Roman Kagan
@ 2017-12-19  9:59             ` Liu, Changpeng
  0 siblings, 0 replies; 15+ messages in thread
From: Liu, Changpeng @ 2017-12-19  9:59 UTC (permalink / raw)
  To: Roman Kagan, Denis V. Lunev
  Cc: Felipe Franciosi, Harris, James R, Stefan Hajnoczi, Kevin Wolf,
	Eduardo Habkost, Michael S. Tsirkin, qemu-devel, Max Reitz,
	Paolo Bonzini, Richard Henderson



> -----Original Message-----
> From: Roman Kagan [mailto:rkagan@virtuozzo.com]
> Sent: Tuesday, December 19, 2017 4:58 PM
> To: Denis V. Lunev <den@openvz.org>
> Cc: Felipe Franciosi <felipe@nutanix.com>; Harris, James R
> <james.r.harris@intel.com>; Stefan Hajnoczi <stefanha@redhat.com>; Kevin Wolf
> <kwolf@redhat.com>; Eduardo Habkost <ehabkost@redhat.com>; Michael S.
> Tsirkin <mst@redhat.com>; qemu-devel@nongnu.org; Max Reitz
> <mreitz@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; Liu, Changpeng
> <changpeng.liu@intel.com>; Richard Henderson <rth@twiddle.net>
> Subject: Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio
> SCSI/block
> 
> On Mon, Dec 18, 2017 at 10:42:35PM +0300, Denis V. Lunev wrote:
> > On 12/18/2017 10:35 PM, Felipe Franciosi wrote:
> > >> On 18 Dec 2017, at 16:16, Harris, James R <james.r.harris@intel.com> wrote:
> > >>> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > >>> On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
> > >>>> diff --git a/include/hw/compat.h b/include/hw/compat.h
> > >>>> index 026fee9..b9be5d7 100644
> > >>>> --- a/include/hw/compat.h
> > >>>> +++ b/include/hw/compat.h
> > >>>> @@ -2,6 +2,23 @@
> > >>>> #define HW_COMPAT_H
> > >>>>
> > >>>> #define HW_COMPAT_2_11 \
> > >>>> +    {\
> > >>>> +        .driver   = "virtio-blk-device",\
> > >>>> +        .property = "max_segments",\
> > >>>> +        .value    = "126",\
> > >>>> +    },{\
> > >>>> +        .driver   = "vhost-scsi",\
> > >>>> +        .property = "max_segments",\
> > >>>> +        .value    = "126",\
> > >>>> +    },{\
> > >>>> +        .driver   = "vhost-user-scsi",\
> > >>>> +        .property = "max_segments",\
> > >>>> +        .value    = "126",\
> > >>> Existing vhost-user-scsi slave programs might not expect up to 1022
> > >>> segments.  Hopefully we can get away with this change since there are
> > >>> relatively few vhost-user-scsi slave programs.
> > >>>
> > >>> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
> > >> SPDK vhost-user targets only expect max 128 segments.  They also pre-allocate
> I/O task structures when QEMU connects to the vhost-user device.
> > >>
> > >> Supporting up to 1022 segments would result in significantly higher memory
> usage, reduction in I/O queue depth processed by the vhost-user target, or having
> to dynamically allocate I/O task structures - none of which are ideal.
> > >>
> > >> What if this was just bumped from 126 to 128?  I guess I’m trying to
> understand the level of guest and host I/O performance that is gained with this
> patch.  One I/O per 512KB vs. one I/O per 4MB - we are still only talking about a
> few hundred IO/s difference.
> > > SeaBIOS also makes the assumption that the queue size is not bigger than 128
> elements.
> > > https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23
> > >
> > > Perhaps a better approach is to make the value configurable (i.e. add the
> "max_segments" property), but set the default to 128-2. In addition to what Jim
> pointed out, I think there may be other legacy front-end drivers which assume
> the ring will be at most 128 entries in size.
> > >
> > > With that, hypervisors can choose to bump the value higher if it's known to be
> safe for their host+guest configuration.
> >
> > This should not be a problem at all IMHO. The guest is not obliged
> > to use messages of the entire possible size. The guest initiates a
> > request with 128 elements. Fine. QEMU is ready for this.
> 
> QEMU is, but vhost-user slaves may not be.  And there seems to be no
> vhost-user protocol message type that would allow negotiating this
> value between the master and the slave.
> 
> So apparently the default for vhost-user-scsi has to stay the same in
> order not to break existing slaves.  I guess having it tunable via a
> property may still turn out useful.
Actually I wrote a new patch set recently to support a vhost-user-blk host device,
and added 2 extra vhost-user messages, GET_CONFIG/SET_CONFIG, which let the host device
get those parameters from the vhost-user slave target. The newly added messages can read
the virtio device's configuration space from the slave target, so vhost-user-scsi may
use that as well.
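
For orientation, the parameter in question lives in the virtio config
space that such messages would carry; for virtio-blk it is the seg_max
field (layout paraphrased from the virtio spec / Linux uapi headers):

    struct virtio_blk_config {
        uint64_t capacity;    /* device size, in 512-byte sectors    */
        uint32_t size_max;    /* max size of a single segment        */
        uint32_t seg_max;     /* max number of segments per request  */
        /* ... */
    };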

> 
> Roman.


* Re: [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-15 15:02 [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
                   ` (2 preceding siblings ...)
  2017-12-18 13:38 ` [Qemu-devel] [PATCH v3 0/2] " Stefan Hajnoczi
@ 2017-12-19 12:45 ` Denis V. Lunev
  2017-12-20  4:17   ` Michael S. Tsirkin
  2017-12-20  4:23 ` Michael S. Tsirkin
  4 siblings, 1 reply; 15+ messages in thread
From: Denis V. Lunev @ 2017-12-19 12:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, Kevin Wolf, Max Reitz,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost

On 12/15/2017 06:02 PM, Denis V. Lunev wrote:
> v2->v3
> - added 2.12 machine types
> - added compat properties for 2.11 machine type
>
> v1->v2:
> - added max_segments property for virtblock device
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: "Michael S. Tsirkin" <mst@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Richard Henderson <rth@twiddle.net>
> CC: Eduardo Habkost <ehabkost@redhat.com>
>
the patch appears to be problematic.

We observe the following crashes under heavy load

    [    2.348177] kernel BUG at drivers/virtio/virtio_ring.c:160!
    [    2.349382] invalid opcode: 0000 [#1] SMP 
    [    2.350448] Modules linked in: xfs libcrc32c sr_mod cdrom sd_mod crc_t10dif crct10dif_generic virtio_scsi virtio_console virtio_net ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw bochs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm virtio_pci virtio_ring virtio i2c_core ata_piix libata floppy dm_mirror dm_region_hash dm_log dm_mod
    [    2.357149] CPU: 1 PID: 399 Comm: mount Not tainted 3.10.0-514.26.2.el7.x86_64 #1
    [    2.358569] Hardware name: Virtuozzo KVM, BIOS 1.10.2-3.1.vz7.2 04/01/2014
    [    2.359967] task: ffff8800362f4e70 ti: ffff880035b00000 task.ti: ffff880035b00000
    [    2.361443] RIP: 0010:[<ffffffffa00b4ae0>]  [<ffffffffa00b4ae0>] virtqueue_add_sgs+0x370/0x3c0 [virtio_ring]
    [    2.363171] RSP: 0018:ffff880035b03760  EFLAGS: 00010002
    [    2.364419] RAX: ffff8800359b8800 RBX: 0000000000000082 RCX: 0000000000000003
    [    2.365866] RDX: ffffea0000d9b7c2 RSI: ffff880035b037e0 RDI: ffff8800783dcfe0
    [    2.367325] RBP: ffff880035b037b8 R08: ffff88003679d3c0 R09: 0000000000000020
    [    2.368766] R10: ffff8800359c08c0 R11: ffff8800359c08c0 R12: ffff8800787a4948
    [    2.370232] R13: ffff880035b037f8 R14: ffff880035b037f8 R15: 0000000000000020
    [    2.371681] FS:  00007f38a0887880(0000) GS:ffff88007fd00000(0000) knlGS:0000000000000000
    [    2.373233] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [    2.374529] CR2: 00007f2090d276f8 CR3: 0000000036371000 CR4: 00000000000406e0
    [    2.375982] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [    2.377462] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [    2.378913] Stack:
    [    2.379846]  ffff88003679d3c0 0000000100000000 ffff88003582d800 ffff880035b037e0
    [    2.381389]  0000000335b03820 ffff8800359b8800 ffff88003679d3c0 ffff8800787a4948
    [    2.382905]  ffff8800359c0998 ffff8800359b8800 000000000000006c ffff880035b03870
    [    2.384449] Call Trace:
    [    2.385420]  [<ffffffffa019a631>] virtscsi_kick_cmd+0x161/0x280 [virtio_scsi]
    [    2.386874]  [<ffffffff81183e49>] ? mempool_alloc+0x69/0x170
    [    2.388189]  [<ffffffffa019a87f>] virtscsi_queuecommand+0x12f/0x230 [virtio_scsi]
    [    2.389702]  [<ffffffffa019aa57>] virtscsi_queuecommand_single+0x37/0x40 [virtio_scsi]
    [    2.391217]  [<ffffffff8145269a>] scsi_dispatch_cmd+0xaa/0x230
    [    2.392537]  [<ffffffff8145b7a1>] scsi_request_fn+0x501/0x770
    [    2.393832]  [<ffffffff812eb9c3>] __blk_run_queue+0x33/0x40
    [    2.395103]  [<ffffffff812eba7a>] queue_unplugged+0x2a/0xa0
    [    2.396380]  [<ffffffff812f08d8>] blk_flush_plug_list+0x1d8/0x230
    [    2.397673]  [<ffffffff812f0ce4>] blk_finish_plug+0x14/0x40
    [    2.398935]  [<ffffffffa0226a84>] _xfs_buf_ioapply+0x334/0x460 [xfs]
    [    2.400286]  [<ffffffffa0250378>] ? xlog_bread_noalign+0xa8/0xe0 [xfs]
    [    2.401631]  [<ffffffffa022872d>] xfs_buf_submit_wait+0x5d/0x1d0 [xfs]
    [    2.402960]  [<ffffffffa0250378>] xlog_bread_noalign+0xa8/0xe0 [xfs]
    [    2.404306]  [<ffffffffa0251023>] xlog_bread+0x23/0x50 [xfs]
    [    2.405537]  [<ffffffffa0255f71>] xlog_find_verify_cycle+0xf1/0x1b0 [xfs]
    [    2.406885]  [<ffffffffa0256541>] xlog_find_head+0x2f1/0x3e0 [xfs]
    [    2.408175]  [<ffffffffa0256673>] xlog_find_tail+0x43/0x2f0 [xfs]
    [    2.409432]  [<ffffffff810c52b4>] ? try_to_wake_up+0x174/0x340
    [    2.410673]  [<ffffffffa025694d>] xlog_recover+0x2d/0x190 [xfs]
    [    2.411927]  [<ffffffffa0257bbb>] ? xfs_trans_ail_init+0xab/0xd0 [xfs]
    [    2.413246]  [<ffffffffa02498da>] xfs_log_mount+0xea/0x2e0 [xfs]
    [    2.414490]  [<ffffffffa0240138>] xfs_mountfs+0x518/0x8b0 [xfs]
    [    2.415714]  [<ffffffffa022e400>] ? xfs_filestream_get_parent+0x80/0x80 [xfs]
    [    2.417100]  [<ffffffffa0241009>] ? xfs_mru_cache_create+0x129/0x190 [xfs]
    [    2.419226]  [<ffffffffa02435e3>] xfs_fs_fill_super+0x3b3/0x4d0 [xfs]
    [    2.420473]  [<ffffffff81202400>] mount_bdev+0x1b0/0x1f0
    [    2.421575]  [<ffffffffa0243230>] ? xfs_parseargs+0xbe0/0xbe0 [xfs]
    [    2.422766]  [<ffffffffa02419a5>] xfs_fs_mount+0x15/0x20 [xfs]
    [    2.423903]  [<ffffffff81202b99>] mount_fs+0x39/0x1b0
    [    2.424955]  [<ffffffff811a5415>] ? __alloc_percpu+0x15/0x20
    [    2.426054]  [<ffffffff8121e91f>] vfs_kern_mount+0x5f/0xf0
    [    2.427147]  [<ffffffff81220e7e>] do_mount+0x24e/0xaa0
    [    2.428170]  [<ffffffff8119f8eb>] ? strndup_user+0x4b/0xa0
    [    2.429226]  [<ffffffff81221766>] SyS_mount+0x96/0xf0
    [    2.430242]  [<ffffffff81697809>] system_call_fastpath+0x16/0x1b
    [    2.431351] Code: 5c e9 69 ff ff ff 31 db e9 17 fd ff ff 89 da 48 c7 c6 98 63 0b a0 48 c7 c7 a0 70 0b a0 31 c0 e8 a7 7f 28 e1 e9 d5 fd ff ff 0f 0b <0f> 0b 8b 55 c8 48 89 d9 48 c7 c6 c0 62 0b a0 48 c7 c7 78 70 0b 

The problem is presumed to be gone in the very latest 4.14 kernel.
We believe that the problem is fixed by

commit 44ed8089e991a60d614abe0ee4b9057a28b364e4
Author: Richard W.M. Jones <rjones@redhat.com>
Date:   Thu Aug 10 17:56:51 2017 +0100

    scsi: virtio: Reduce BUG if total_sg > virtqueue size to WARN.
    
    If using indirect descriptors, you can make the total_sg as large as you
    want.  If not, BUG is too serious because the function later returns
    -ENOSPC.
    
    Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
    Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
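
The kernel-side change is, roughly, dropping the hard assertion in
virtqueue_add() in favour of a warning when indirect descriptors are
not in use (paraphrased from that commit, not the exact diff):

    /* Before: a hard crash even when the request could simply fail. */
    BUG_ON(total_sg > vq->vring.num);

    /* After: only warn when not using indirect descriptors; with
     * indirect descriptors a large total_sg is legal, and otherwise
     * the function can return -ENOSPC instead of killing the guest. */
    WARN_ON_ONCE(total_sg > vq->vring.num && !vq->indirect);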

Thus I am going to add the property, but with a default of 126 :(

Den


* Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-18 19:35       ` Felipe Franciosi
  2017-12-18 19:42         ` Denis V. Lunev
@ 2017-12-20  4:16         ` Michael S. Tsirkin
  1 sibling, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2017-12-20  4:16 UTC (permalink / raw)
  To: Felipe Franciosi
  Cc: Harris, James R, Stefan Hajnoczi, Denis V. Lunev, qemu-devel,
	Kevin Wolf, Max Reitz, Paolo Bonzini, Richard Henderson,
	Eduardo Habkost, Liu, Changpeng

On Mon, Dec 18, 2017 at 07:35:48PM +0000, Felipe Franciosi wrote:
> >> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
> > 
> > SPDK vhost-user targets only expect max 128 segments.  They also pre-allocate I/O task structures when QEMU connects to the vhost-user device.
> > 
> > Supporting up to 1022 segments would result in significantly higher memory usage, reduction in I/O queue depth processed by the vhost-user target, or having to dynamically allocate I/O task structures - none of which are ideal.
> > 
> > What if this was just bumped from 126 to 128?  I guess I’m trying to understand the level of guest and host I/O performance that is gained with this patch.  One I/O per 512KB vs. one I/O per 4MB - we are still only talking about a few hundred IO/s difference.
> 
> SeaBIOS also makes the assumption that the queue size is not bigger than 128 elements.
> https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23

And what happens if it's bigger? Looks like a bug to me.


> Perhaps a better approach is to make the value configurable (i.e. add the "max_segments" property), but set the default to 128-2. In addition to what Jim pointed out, I think there may be other legacy front-end drivers which assume the ring will be at most 128 entries in size.
> 
> With that, hypervisors can choose to bump the value higher if it's known to be safe for their host+guest configuration.
> 
> Cheers,
> Felipe

For 1.0, guests can just downgrade to 128 if they want to save memory.
So it might make sense to gate this change on 1.0 being enabled by the guest.
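
A sketch of such gating on the device side (illustrative only; the
feature check helper exists in QEMU, the surrounding logic is
hypothetical):

    uint32_t seg_max = 128 - 2; /* conservative legacy default */
    if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
        /* 1.0 guests can negotiate a smaller ring if memory matters. */
        seg_max = VIRTQUEUE_MAX_SIZE - 2;
    }
    virtio_stl_p(vdev, &blkcfg.seg_max, seg_max);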


> > 
> > -Jim
> > 
> > 
> 


* Re: [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-19 12:45 ` Denis V. Lunev
@ 2017-12-20  4:17   ` Michael S. Tsirkin
  0 siblings, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2017-12-20  4:17 UTC (permalink / raw)
  To: Denis V. Lunev
  Cc: qemu-devel, Stefan Hajnoczi, Kevin Wolf, Max Reitz,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost

On Tue, Dec 19, 2017 at 03:45:52PM +0300, Denis V. Lunev wrote:
> On 12/15/2017 06:02 PM, Denis V. Lunev wrote:
> > v2->v3
> > - added 2.12 machine types
> > - added compat properties for 2.11 machine type
> >
> > v1->v2:
> > - added max_segments property for virtblock device
> >
> > Signed-off-by: Denis V. Lunev <den@openvz.org>
> > CC: "Michael S. Tsirkin" <mst@redhat.com>
> > CC: Stefan Hajnoczi <stefanha@redhat.com>
> > CC: Kevin Wolf <kwolf@redhat.com>
> > CC: Max Reitz <mreitz@redhat.com>
> > CC: Paolo Bonzini <pbonzini@redhat.com>
> > CC: Richard Henderson <rth@twiddle.net>
> > CC: Eduardo Habkost <ehabkost@redhat.com>
> >
> the patch appears to be problematic.
> 
> We observe the following crashes under heavy load
> 
>     [    2.348177] kernel BUG at drivers/virtio/virtio_ring.c:160!
>     [    2.349382] invalid opcode: 0000 [#1] SMP 
>     [    2.350448] Modules linked in: xfs libcrc32c sr_mod cdrom sd_mod crc_t10dif crct10dif_generic virtio_scsi virtio_console virtio_net ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw bochs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm virtio_pci virtio_ring virtio i2c_core ata_piix libata floppy dm_mirror dm_region_hash dm_log dm_mod
>     [    2.357149] CPU: 1 PID: 399 Comm: mount Not tainted 3.10.0-514.26.2.el7.x86_64 #1
>     [    2.358569] Hardware name: Virtuozzo KVM, BIOS 1.10.2-3.1.vz7.2 04/01/2014
>     [    2.359967] task: ffff8800362f4e70 ti: ffff880035b00000 task.ti: ffff880035b00000
>     [    2.361443] RIP: 0010:[<ffffffffa00b4ae0>]  [<ffffffffa00b4ae0>] virtqueue_add_sgs+0x370/0x3c0 [virtio_ring]
>     [    2.363171] RSP: 0018:ffff880035b03760  EFLAGS: 00010002
>     [    2.364419] RAX: ffff8800359b8800 RBX: 0000000000000082 RCX: 0000000000000003
>     [    2.365866] RDX: ffffea0000d9b7c2 RSI: ffff880035b037e0 RDI: ffff8800783dcfe0
>     [    2.367325] RBP: ffff880035b037b8 R08: ffff88003679d3c0 R09: 0000000000000020
>     [    2.368766] R10: ffff8800359c08c0 R11: ffff8800359c08c0 R12: ffff8800787a4948
>     [    2.370232] R13: ffff880035b037f8 R14: ffff880035b037f8 R15: 0000000000000020
>     [    2.371681] FS:  00007f38a0887880(0000) GS:ffff88007fd00000(0000) knlGS:0000000000000000
>     [    2.373233] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>     [    2.374529] CR2: 00007f2090d276f8 CR3: 0000000036371000 CR4: 00000000000406e0
>     [    2.375982] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>     [    2.377462] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>     [    2.378913] Stack:
>     [    2.379846]  ffff88003679d3c0 0000000100000000 ffff88003582d800 ffff880035b037e0
>     [    2.381389]  0000000335b03820 ffff8800359b8800 ffff88003679d3c0 ffff8800787a4948
>     [    2.382905]  ffff8800359c0998 ffff8800359b8800 000000000000006c ffff880035b03870
>     [    2.384449] Call Trace:
>     [    2.385420]  [<ffffffffa019a631>] virtscsi_kick_cmd+0x161/0x280 [virtio_scsi]
>     [    2.386874]  [<ffffffff81183e49>] ? mempool_alloc+0x69/0x170
>     [    2.388189]  [<ffffffffa019a87f>] virtscsi_queuecommand+0x12f/0x230 [virtio_scsi]
>     [    2.389702]  [<ffffffffa019aa57>] virtscsi_queuecommand_single+0x37/0x40 [virtio_scsi]
>     [    2.391217]  [<ffffffff8145269a>] scsi_dispatch_cmd+0xaa/0x230
>     [    2.392537]  [<ffffffff8145b7a1>] scsi_request_fn+0x501/0x770
>     [    2.393832]  [<ffffffff812eb9c3>] __blk_run_queue+0x33/0x40
>     [    2.395103]  [<ffffffff812eba7a>] queue_unplugged+0x2a/0xa0
>     [    2.396380]  [<ffffffff812f08d8>] blk_flush_plug_list+0x1d8/0x230
>     [    2.397673]  [<ffffffff812f0ce4>] blk_finish_plug+0x14/0x40
>     [    2.398935]  [<ffffffffa0226a84>] _xfs_buf_ioapply+0x334/0x460 [xfs]
>     [    2.400286]  [<ffffffffa0250378>] ? xlog_bread_noalign+0xa8/0xe0 [xfs]
>     [    2.401631]  [<ffffffffa022872d>] xfs_buf_submit_wait+0x5d/0x1d0 [xfs]
>     [    2.402960]  [<ffffffffa0250378>] xlog_bread_noalign+0xa8/0xe0 [xfs]
>     [    2.404306]  [<ffffffffa0251023>] xlog_bread+0x23/0x50 [xfs]
>     [    2.405537]  [<ffffffffa0255f71>] xlog_find_verify_cycle+0xf1/0x1b0 [xfs]
>     [    2.406885]  [<ffffffffa0256541>] xlog_find_head+0x2f1/0x3e0 [xfs]
>     [    2.408175]  [<ffffffffa0256673>] xlog_find_tail+0x43/0x2f0 [xfs]
>     [    2.409432]  [<ffffffff810c52b4>] ? try_to_wake_up+0x174/0x340
>     [    2.410673]  [<ffffffffa025694d>] xlog_recover+0x2d/0x190 [xfs]
>     [    2.411927]  [<ffffffffa0257bbb>] ? xfs_trans_ail_init+0xab/0xd0 [xfs]
>     [    2.413246]  [<ffffffffa02498da>] xfs_log_mount+0xea/0x2e0 [xfs]
>     [    2.414490]  [<ffffffffa0240138>] xfs_mountfs+0x518/0x8b0 [xfs]
>     [    2.415714]  [<ffffffffa022e400>] ? xfs_filestream_get_parent+0x80/0x80 [xfs]
>     [    2.417100]  [<ffffffffa0241009>] ? xfs_mru_cache_create+0x129/0x190 [xfs]
>     [    2.419226]  [<ffffffffa02435e3>] xfs_fs_fill_super+0x3b3/0x4d0 [xfs]
>     [    2.420473]  [<ffffffff81202400>] mount_bdev+0x1b0/0x1f0
>     [    2.421575]  [<ffffffffa0243230>] ? xfs_parseargs+0xbe0/0xbe0 [xfs]
>     [    2.422766]  [<ffffffffa02419a5>] xfs_fs_mount+0x15/0x20 [xfs]
>     [    2.423903]  [<ffffffff81202b99>] mount_fs+0x39/0x1b0
>     [    2.424955]  [<ffffffff811a5415>] ? __alloc_percpu+0x15/0x20
>     [    2.426054]  [<ffffffff8121e91f>] vfs_kern_mount+0x5f/0xf0
>     [    2.427147]  [<ffffffff81220e7e>] do_mount+0x24e/0xaa0
>     [    2.428170]  [<ffffffff8119f8eb>] ? strndup_user+0x4b/0xa0
>     [    2.429226]  [<ffffffff81221766>] SyS_mount+0x96/0xf0
>     [    2.430242]  [<ffffffff81697809>] system_call_fastpath+0x16/0x1b
>     [    2.431351] Code: 5c e9 69 ff ff ff 31 db e9 17 fd ff ff 89 da 48 c7 c6 98 63 0b a0 48 c7 c7 a0 70 0b a0 31 c0 e8 a7 7f 28 e1 e9 d5 fd ff ff 0f 0b <0f> 0b 8b 55 c8 48 89 d9 48 c7 c6 c0 62 0b a0 48 c7 c7 78 70 0b 
> 
> The problem seems to be gone in the very latest 4.14 kernel.
> We believe it is fixed by
> 
> commit 44ed8089e991a60d614abe0ee4b9057a28b364e4
> Author: Richard W.M. Jones <rjones@redhat.com>
> Date:   Thu Aug 10 17:56:51 2017 +0100
> 
>     scsi: virtio: Reduce BUG if total_sg > virtqueue size to WARN.
>     
>     If using indirect descriptors, you can make the total_sg as large as you
>     want.  If not, BUG is too serious because the function later returns
>     -ENOSPC.
>     
>     Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
>     Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>     Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
> 
> Thus I am going to add the property, but with a default of 126 :(
> 
> Den
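
For reference, the check Denis quotes above changed roughly like this
(paraphrased from drivers/virtio/virtio_ring.c, not a verbatim quote
of the commit):

    /* Before: a request with more SG entries than the ring has
     * descriptors brought the guest down. */
    BUG_ON(total_sg > vq->vring.num);

    /* After commit 44ed8089e991: only warn, and only when indirect
     * descriptors cannot absorb the oversized request; the function
     * then returns -ENOSPC instead of crashing. */
    WARN_ON_ONCE(total_sg > vq->vring.num && !vq->indirect);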

About that, Paolo, you promised to propose a spec patch to
relax the requirement.

-- 
MST

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block
  2017-12-15 15:02 [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
                   ` (3 preceding siblings ...)
  2017-12-19 12:45 ` Denis V. Lunev
@ 2017-12-20  4:23 ` Michael S. Tsirkin
  4 siblings, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2017-12-20  4:23 UTC (permalink / raw)
  To: Denis V. Lunev
  Cc: qemu-devel, Stefan Hajnoczi, Kevin Wolf, Max Reitz,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost, kraxel

On Fri, Dec 15, 2017 at 06:02:48PM +0300, Denis V. Lunev wrote:
> v2->v3
> - added 2.12 machine types
> - added compat properties for 2.11 machine type
> 
> v1->v2:
> - added max_segments property for virtblock device

I'm not applying this for now.

It seems too easy to create illegal configurations with it,
e.g. where max_segments > queue size.

1022 also seems too aggressive - e.g. if a couple of segments
cross page boundaries, we'll exceed the host's iov length limit
(IOV_MAX, typically 1024 on Linux). Around 500 seems more prudent.
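
A sketch of the kind of guard that would prevent such configurations
at realize time, QEMU-style (field names assumed for illustration,
not taken from the patch):

    /* The request and response headers consume descriptors too, so
     * max_segments must leave room for them in the ring. */
    if (conf->max_segments > conf->queue_size - 2) {
        error_setg(errp, "max_segments (%u) too large for queue size %u",
                   conf->max_segments, conf->queue_size);
        return;
    }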

Gerd, could you please also take a look at whether SeaBIOS is
smart enough to downgrade if the queue size is too big?

> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: "Michael S. Tsirkin" <mst@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Richard Henderson <rth@twiddle.net>
> CC: Eduardo Habkost <ehabkost@redhat.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2017-12-20  4:23 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-15 15:02 [Qemu-devel] [PATCH v3 0/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
2017-12-15 15:02 ` [Qemu-devel] [PATCH 1/2] pc, q35: add 2.12 machine types Denis V. Lunev
2017-12-18 13:54   ` Christian Borntraeger
2017-12-15 15:02 ` [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block Denis V. Lunev
2017-12-18 13:38   ` Stefan Hajnoczi
2017-12-18 16:16     ` Harris, James R
2017-12-18 19:35       ` Felipe Franciosi
2017-12-18 19:42         ` Denis V. Lunev
2017-12-19  8:57           ` Roman Kagan
2017-12-19  9:59             ` Liu, Changpeng
2017-12-20  4:16         ` Michael S. Tsirkin
2017-12-18 13:38 ` [Qemu-devel] [PATCH v3 0/2] " Stefan Hajnoczi
2017-12-19 12:45 ` Denis V. Lunev
2017-12-20  4:17   ` Michael S. Tsirkin
2017-12-20  4:23 ` Michael S. Tsirkin
