* [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Stefan Hajnoczi @ 2016-11-16 21:53 UTC
  To: qemu-devel
  Cc: zhunxun, Fam Zheng, Christian Borntraeger, Paolo Bonzini,
	Michael S. Tsirkin, Kevin Wolf, Stefan Hajnoczi

Disabling notifications during virtqueue processing reduces the number of
exits.  The virtio-net device already uses virtio_queue_set_notification() but
virtio-blk and virtio-scsi do not.

The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:

  (host)$ qemu-system-x86_64 \
              -enable-kvm -m 1024 -cpu host \
              -drive if=virtio,id=drive0,file=f24.img,format=raw,\
                     cache=none,aio=native
  (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
  (host)$ sudo perf record -a -e kvm:kvm_fast_mmio

Number of kvm_fast_mmio events:
Unpatched: 685k
Patched: 592k (-14%, lower is better)
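
For reference, the fio run corresponds roughly to the following job file
(a sketch: the ioengine, block size, runtime, and filename are
assumptions; only numjobs, iodepth, direct, and randread are given
above):

  [randread]
  ioengine=libaio
  rw=randread
  direct=1
  bs=4k
  iodepth=8
  numjobs=4
  runtime=30
  time_based=1
  filename=/dev/vda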

Note that a workload with iodepth=1 and a single thread will not benefit - this
is a batching optimization.  The effect should be strongest with large iodepth
and multiple threads submitting I/O.  The guest I/O scheduler also affects the
optimization.
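
In sketch form, the pattern that patches 2 and 3 apply is the following
(simplified; process_one() is a placeholder for the device-specific
request handling):

  VirtQueueElement *elem;

  /* Drain the ring with notifications disabled, then re-enable them and
   * re-check so a request that raced with re-enabling is not lost. */
  do {
      virtio_queue_set_notification(vq, 0);      /* guest may skip the kick */

      while ((elem = virtqueue_pop(vq, sizeof(*elem)))) {
          process_one(elem);                     /* device-specific handling */
      }

      virtio_queue_set_notification(vq, 1);      /* re-arm notifications */
  } while (!virtio_queue_empty(vq));             /* close the race window */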

Stefan Hajnoczi (3):
  virtio: add missing vdev->broken check
  virtio-blk: suppress virtqueue kick during processing
  virtio-scsi: suppress virtqueue kick during processing

 hw/block/virtio-blk.c | 18 ++++++++++++------
 hw/scsi/virtio-scsi.c | 36 +++++++++++++++++++++---------------
 hw/virtio/virtio.c    |  4 ++++
 3 files changed, 37 insertions(+), 21 deletions(-)

-- 
2.7.4

* [Qemu-devel] [PATCH 1/3] virtio: add missing vdev->broken check
From: Stefan Hajnoczi @ 2016-11-16 21:53 UTC
  To: qemu-devel
  Cc: zhunxun, Fam Zheng, Christian Borntraeger, Paolo Bonzini,
	Michael S. Tsirkin, Kevin Wolf, Stefan Hajnoczi

virtio_queue_notify_vq() checks vdev->broken before invoking the
handler; virtio_queue_notify_aio_vq() should too.
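
For reference, virtio_queue_notify_vq() looks roughly like this (a
simplified sketch of hw/virtio/virtio.c, not the verbatim code):

  static void virtio_queue_notify_vq(VirtQueue *vq)
  {
      if (vq->vring.desc && vq->handle_output) {
          VirtIODevice *vdev = vq->vdev;

          /* This is the guard that virtio_queue_notify_aio_vq() lacks */
          if (unlikely(vdev->broken)) {
              return;
          }

          trace_virtio_queue_notify(vdev, vq - vdev->vq, vq);
          vq->handle_output(vdev, vq);
      }
  }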

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 55a00cd..a4759bd 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1239,6 +1239,10 @@ static void virtio_queue_notify_aio_vq(VirtQueue *vq)
     if (vq->vring.desc && vq->handle_aio_output) {
         VirtIODevice *vdev = vq->vdev;
 
+        if (unlikely(vq->vdev->broken)) {
+            return;
+        }
+
         trace_virtio_queue_notify(vdev, vq - vdev->vq, vq);
         vq->handle_aio_output(vdev, vq);
     }
-- 
2.7.4

* [Qemu-devel] [PATCH 2/3] virtio-blk: suppress virtqueue kick during processing
From: Stefan Hajnoczi @ 2016-11-16 21:53 UTC
  To: qemu-devel
  Cc: zhunxun, Fam Zheng, Christian Borntraeger, Paolo Bonzini,
	Michael S. Tsirkin, Kevin Wolf, Stefan Hajnoczi

The guest does not need to kick the virtqueue while we are processing
it.  This reduces the number of vmexits during periods of heavy I/O.
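
For context, virtio_queue_set_notification(vq, 0) is a hint to the guest
that the kick can be skipped.  On a legacy split ring (without
VIRTIO_RING_F_EVENT_IDX) it boils down to a flag in the used ring;
roughly (a simplified sketch, not the exact QEMU code):

  /* Assumes <linux/virtio_ring.h> definitions.  The device sets
   * VRING_USED_F_NO_NOTIFY in the used ring's flags; the driver checks
   * it before kicking and may then skip the MMIO/PIO write.  It is only
   * a hint, so a spurious kick must still be tolerated. */
  static void vring_notify_disable(struct vring *vr)
  {
      vr->used->flags |= VRING_USED_F_NO_NOTIFY;
  }

  static void vring_notify_enable(struct vring *vr)
  {
      vr->used->flags &= ~VRING_USED_F_NO_NOTIFY;
  }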

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/virtio-blk.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 0c5fd27..50bb0cb 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -588,13 +588,19 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
 
     blk_io_plug(s->blk);
 
-    while ((req = virtio_blk_get_request(s, vq))) {
-        if (virtio_blk_handle_request(req, &mrb)) {
-            virtqueue_detach_element(req->vq, &req->elem, 0);
-            virtio_blk_free_request(req);
-            break;
+    do {
+        virtio_queue_set_notification(vq, 0);
+
+        while ((req = virtio_blk_get_request(s, vq))) {
+            if (virtio_blk_handle_request(req, &mrb)) {
+                virtqueue_detach_element(req->vq, &req->elem, 0);
+                virtio_blk_free_request(req);
+                break;
+            }
         }
-    }
+
+        virtio_queue_set_notification(vq, 1);
+    } while (!virtio_queue_empty(vq));
 
     if (mrb.num_reqs) {
         virtio_blk_submit_multireq(s->blk, &mrb);
-- 
2.7.4

* [Qemu-devel] [PATCH 3/3] virtio-scsi: suppress virtqueue kick during processing
From: Stefan Hajnoczi @ 2016-11-16 21:53 UTC
  To: qemu-devel
  Cc: zhunxun, Fam Zheng, Christian Borntraeger, Paolo Bonzini,
	Michael S. Tsirkin, Kevin Wolf, Stefan Hajnoczi

The guest does not need to kick the virtqueue while we are processing
it.  This reduces the number of vmexits during periods of heavy I/O.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/virtio-scsi.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 3e5ae6a..4d23a78 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -578,26 +578,32 @@ static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
 void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
 {
     VirtIOSCSIReq *req, *next;
-    int ret;
+    int ret = 0;
 
     QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
 
-    while ((req = virtio_scsi_pop_req(s, vq))) {
-        ret = virtio_scsi_handle_cmd_req_prepare(s, req);
-        if (!ret) {
-            QTAILQ_INSERT_TAIL(&reqs, req, next);
-        } else if (ret == -EINVAL) {
-            /* The device is broken and shouldn't process any request */
-            while (!QTAILQ_EMPTY(&reqs)) {
-                req = QTAILQ_FIRST(&reqs);
-                QTAILQ_REMOVE(&reqs, req, next);
-                blk_io_unplug(req->sreq->dev->conf.blk);
-                scsi_req_unref(req->sreq);
-                virtqueue_detach_element(req->vq, &req->elem, 0);
-                virtio_scsi_free_req(req);
+    do {
+        virtio_queue_set_notification(vq, 0);
+
+        while ((req = virtio_scsi_pop_req(s, vq))) {
+            ret = virtio_scsi_handle_cmd_req_prepare(s, req);
+            if (!ret) {
+                QTAILQ_INSERT_TAIL(&reqs, req, next);
+            } else if (ret == -EINVAL) {
+                /* The device is broken and shouldn't process any request */
+                while (!QTAILQ_EMPTY(&reqs)) {
+                    req = QTAILQ_FIRST(&reqs);
+                    QTAILQ_REMOVE(&reqs, req, next);
+                    blk_io_unplug(req->sreq->dev->conf.blk);
+                    scsi_req_unref(req->sreq);
+                    virtqueue_detach_element(req->vq, &req->elem, 0);
+                    virtio_scsi_free_req(req);
+                }
             }
         }
-    }
+
+        virtio_queue_set_notification(vq, 1);
+    } while (ret != -EINVAL && !virtio_queue_empty(vq));
 
     QTAILQ_FOREACH_SAFE(req, &reqs, next, next) {
         virtio_scsi_handle_cmd_req_submit(s, req);
-- 
2.7.4

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Michael S. Tsirkin @ 2016-11-16 22:17 UTC
  To: Stefan Hajnoczi
  Cc: qemu-devel, zhunxun, Fam Zheng, Christian Borntraeger,
	Paolo Bonzini, Kevin Wolf

On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> Disabling notifications during virtqueue processing reduces the number of
> exits.  The virtio-net device already uses virtio_queue_set_notification() but
> virtio-blk and virtio-scsi do not.
> 
> The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> 
>   (host)$ qemu-system-x86_64 \
>               -enable-kvm -m 1024 -cpu host \
>               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
>                      cache=none,aio=native
>   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
>   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> 
> Number of kvm_fast_mmio events:
> Unpatched: 685k
> Patched: 592k (-14%, lower is better)

Any chance to see a gain in actual benchmark numbers?
This is important to make sure we are not just
shifting overhead around.


> Note that a workload with iodepth=1 and a single thread will not benefit - this
> is a batching optimization.  The effect should be strongest with large iodepth
> and multiple threads submitting I/O.  The guest I/O scheduler also affects the
> optimization.
> 
> Stefan Hajnoczi (3):
>   virtio: add missing vdev->broken check
>   virtio-blk: suppress virtqueue kick during processing
>   virtio-scsi: suppress virtqueue kick during processing
> 
>  hw/block/virtio-blk.c | 18 ++++++++++++------
>  hw/scsi/virtio-scsi.c | 36 +++++++++++++++++++++---------------
>  hw/virtio/virtio.c    |  4 ++++
>  3 files changed, 37 insertions(+), 21 deletions(-)
> 
> -- 
> 2.7.4

* Re: [Qemu-devel] [PATCH 1/3] virtio: add missing vdev->broken check
From: Cornelia Huck @ 2016-11-17  8:31 UTC
  To: Stefan Hajnoczi
  Cc: qemu-devel, Kevin Wolf, zhunxun, Fam Zheng, Michael S. Tsirkin,
	Christian Borntraeger, Paolo Bonzini

On Wed, 16 Nov 2016 21:53:07 +0000
Stefan Hajnoczi <stefanha@redhat.com> wrote:

> virtio_queue_notify_vq() checks vdev->broken before invoking the
> handler; virtio_queue_notify_aio_vq() should too.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/virtio/virtio.c | 4 ++++
>  1 file changed, 4 insertions(+)

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>

I think this makes sense as a fix independent of your other patches.

* Re: [Qemu-devel] [PATCH 1/3] virtio: add missing vdev->broken check
From: Stefan Hajnoczi @ 2016-11-17 10:58 UTC
  To: Cornelia Huck
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng,
	Michael S. Tsirkin, qemu-devel, Christian Borntraeger,
	Paolo Bonzini

On Thu, Nov 17, 2016 at 09:31:18AM +0100, Cornelia Huck wrote:
> On Wed, 16 Nov 2016 21:53:07 +0000
> Stefan Hajnoczi <stefanha@redhat.com> wrote:
> 
> > virtio_queue_notify_vq() checks vdev->broken before invoking the
> > handler; virtio_queue_notify_aio_vq() should too.
> > 
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  hw/virtio/virtio.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> 
> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> 
> I think this makes sense as a fix independent of your other patches.

I'm not aware of an actual bug caused by this.  virtqueue_pop()
returns NULL when the device is broken.
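
The guard sits at the top of the pop path; roughly (a simplified sketch
of hw/virtio/virtio.c, not the verbatim code):

  void *virtqueue_pop(VirtQueue *vq, size_t sz)
  {
      if (unlikely(vq->vdev->broken)) {
          return NULL;   /* a broken device yields no elements */
      }
      /* ... normal descriptor ring processing ... */
  }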

This is more for consistency than correctness, so I didn't add
qemu-stable, -rc1, or send it as a separate fix.

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Christian Borntraeger @ 2016-11-17 11:01 UTC
  To: Stefan Hajnoczi, qemu-devel
  Cc: zhunxun, Fam Zheng, Paolo Bonzini, Michael S. Tsirkin, Kevin Wolf

On 11/16/2016 10:53 PM, Stefan Hajnoczi wrote:
> Disabling notifications during virtqueue processing reduces the number of
> exits.  The virtio-net device already uses virtio_queue_set_notification() but
> virtio-blk and virtio-scsi do not.
> 
> The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> 
>   (host)$ qemu-system-x86_64 \
>               -enable-kvm -m 1024 -cpu host \
>               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
>                      cache=none,aio=native
>   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
>   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> 
> Number of kvm_fast_mmio events:
> Unpatched: 685k
> Patched: 592k (-14%, lower is better)
> 
> Note that a workload with iodepth=1 and a single thread will not benefit - this
> is a batching optimization.  The effect should be strongest with large iodepth
> and multiple threads submitting I/O.  The guest I/O scheduler also affects the
> optimization.

I have trouble seeing any difference in terms of performance or CPU load (other than
a reduced number of kicks).
I was expecting some benefit from reducing the spinlock hold times in virtio-blk,
but this needs some more setup to actually find the sweet spot.

Maybe it will show its benefit with the polling thing?
> 
> Stefan Hajnoczi (3):
>   virtio: add missing vdev->broken check
>   virtio-blk: suppress virtqueue kick during processing
>   virtio-scsi: suppress virtqueue kick during processing
> 
>  hw/block/virtio-blk.c | 18 ++++++++++++------
>  hw/scsi/virtio-scsi.c | 36 +++++++++++++++++++++---------------
>  hw/virtio/virtio.c    |  4 ++++
>  3 files changed, 37 insertions(+), 21 deletions(-)
> 

* Re: [Qemu-devel] [PATCH 1/3] virtio: add missing vdev->broken check
From: Cornelia Huck @ 2016-11-17 12:24 UTC
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng,
	Michael S. Tsirkin, qemu-devel, Christian Borntraeger,
	Paolo Bonzini

On Thu, 17 Nov 2016 10:58:30 +0000
Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Thu, Nov 17, 2016 at 09:31:18AM +0100, Cornelia Huck wrote:
> > On Wed, 16 Nov 2016 21:53:07 +0000
> > Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > 
> > > virtio_queue_notify_vq() checks vdev->broken before invoking the
> > > handler; virtio_queue_notify_aio_vq() should too.
> > > 
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > ---
> > >  hw/virtio/virtio.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > 
> > Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > 
> > I think this makes sense as a fix independent of your other patches.
> 
> > I'm not aware of an actual bug caused by this.  virtqueue_pop()
> returns NULL when the device is broken.
> 
> This is more for consistency than correctness, so I didn't add
> qemu-stable, -rc1, or send it as a separate fix.

Fair enough. It was more "we should add this patch even if we don't use
the other patches in this series".

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Stefan Hajnoczi @ 2016-11-17 13:27 UTC
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng, qemu-devel,
	Christian Borntraeger, Paolo Bonzini

On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > Disabling notifications during virtqueue processing reduces the number of
> > exits.  The virtio-net device already uses virtio_queue_set_notification() but
> > virtio-blk and virtio-scsi do not.
> > 
> > The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> > 
> >   (host)$ qemu-system-x86_64 \
> >               -enable-kvm -m 1024 -cpu host \
> >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> >                      cache=none,aio=native
> >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > 
> > Number of kvm_fast_mmio events:
> > Unpatched: 685k
> > Patched: 592k (-14%, lower is better)
> 
> Any chance to see a gain in actual benchmark numbers?
> This is important to make sure we are not just
> shifting overhead around.

Good idea.  I reran this morning without any tracing and compared
against bare metal.

Total reads for a 30-second 4 KB random read benchmark with 4 processes
x iodepth=8:

Bare metal: 26440 MB
Unpatched:  19799 MB
Patched:    21252 MB

Patched vs Unpatched: +7% improvement
Patched vs Bare metal: 20% virtualization overhead

The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a Samsung
MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.

Seems like a worthwhile improvement to me.

Stefan

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Michael S. Tsirkin @ 2016-11-17 17:38 UTC
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng, qemu-devel,
	Christian Borntraeger, Paolo Bonzini

On Thu, Nov 17, 2016 at 01:27:49PM +0000, Stefan Hajnoczi wrote:
> On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> > On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > > Disabling notifications during virtqueue processing reduces the number of
> > > exits.  The virtio-net device already uses virtio_queue_set_notification() but
> > > virtio-blk and virtio-scsi do not.
> > > 
> > > The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> > > 
> > >   (host)$ qemu-system-x86_64 \
> > >               -enable-kvm -m 1024 -cpu host \
> > >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> > >                      cache=none,aio=native
> > >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> > >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > > 
> > > Number of kvm_fast_mmio events:
> > > Unpatched: 685k
> > > Patched: 592k (-14%, lower is better)
> > 
> > Any chance to see a gain in actual benchmark numbers?
> > This is important to make sure we are not just
> > shifting overhead around.
> 
> Good idea.  I reran this morning without any tracing and compared
> against bare metal.
> 
> Total reads for a 30-second 4 KB random read benchmark with 4 processes
> x iodepth=8:
> 
> Bare metal: 26440 MB
> Unpatched:  19799 MB
> Patched:    21252 MB
> 
> Patched vs Unpatched: +7% improvement
> Patched vs Bare metal: 20% virtualization overhead
> 
> The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a Samsung
> MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.
> 
> Seems like a worthwhile improvement to me.
> 
> Stefan

Sure. Pls remember to ping or re-post after the release.

* Re: [Qemu-devel] [PATCH 1/3] virtio: add missing vdev->broken check
From: Stefan Hajnoczi @ 2016-11-18 10:55 UTC
  To: Cornelia Huck
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng,
	Michael S. Tsirkin, qemu-devel, Christian Borntraeger,
	Paolo Bonzini

On Thu, Nov 17, 2016 at 01:24:47PM +0100, Cornelia Huck wrote:
> On Thu, 17 Nov 2016 10:58:30 +0000
> Stefan Hajnoczi <stefanha@gmail.com> wrote:
> 
> > On Thu, Nov 17, 2016 at 09:31:18AM +0100, Cornelia Huck wrote:
> > > On Wed, 16 Nov 2016 21:53:07 +0000
> > > Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > > 
> > > > virtio_queue_notify_vq() checks vdev->broken before invoking the
> > > > handler; virtio_queue_notify_aio_vq() should too.
> > > > 
> > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > ---
> > > >  hw/virtio/virtio.c | 4 ++++
> > > >  1 file changed, 4 insertions(+)
> > > 
> > > Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > > 
> > > I think this makes sense as a fix independent of your other patches.
> > 
> > I'm not aware of an actual bug caused by this.  virtqueue_pop()
> > returns NULL when the device is broken.
> > 
> > This is more for consistency than correctness, so I didn't add
> > qemu-stable, -rc1, or send it as a separate fix.
> 
> Fair enough. It was more "we should add this patch even if we don't use
> the other patches in this series".

Okay.

Stefan

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Stefan Hajnoczi @ 2016-11-18 10:58 UTC
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng, qemu-devel,
	Christian Borntraeger, Paolo Bonzini

On Thu, Nov 17, 2016 at 07:38:45PM +0200, Michael S. Tsirkin wrote:
> On Thu, Nov 17, 2016 at 01:27:49PM +0000, Stefan Hajnoczi wrote:
> > On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > > > Disabling notifications during virtqueue processing reduces the number of
> > > > exits.  The virtio-net device already uses virtio_queue_set_notification() but
> > > > virtio-blk and virtio-scsi do not.
> > > > 
> > > > The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> > > > 
> > > >   (host)$ qemu-system-x86_64 \
> > > >               -enable-kvm -m 1024 -cpu host \
> > > >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> > > >                      cache=none,aio=native
> > > >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> > > >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > > > 
> > > > Number of kvm_fast_mmio events:
> > > > Unpatched: 685k
> > > > Patched: 592k (-14%, lower is better)
> > > 
> > > Any chance to see a gain in actual benchmark numbers?
> > > This is important to make sure we are not just
> > > shifting overhead around.
> > 
> > Good idea.  I reran this morning without any tracing and compared
> > against bare metal.
> > 
> > Total reads for a 30-second 4 KB random read benchmark with 4 processes
> > x iodepth=8:
> > 
> > Bare metal: 26440 MB
> > Unpatched:  19799 MB
> > Patched:    21252 MB
> > 
> > Patched vs Unpatched: +7% improvement
> > Patched vs Bare metal: 20% virtualization overhead
> > 
> > The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a Samsung
> > MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.
> > 
> > Seems like a worthwhile improvement to me.
> > 
> > Stefan
> 
> Sure. Pls remember to ping or re-post after the release.

How about a -next tree?

I've found that useful for block, net, and tracing in the past.  Most of
the time it means patch authors can rest assured their patches will be
merged without further action.  It allows development of features that
depend on out-of-tree patches.

Stefan

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Stefan Hajnoczi @ 2016-11-18 11:02 UTC
  To: Christian Borntraeger
  Cc: Stefan Hajnoczi, qemu-devel, Kevin Wolf, Paolo Bonzini, zhunxun,
	Fam Zheng, Michael S. Tsirkin

On Thu, Nov 17, 2016 at 12:01:30PM +0100, Christian Borntraeger wrote:
> On 11/16/2016 10:53 PM, Stefan Hajnoczi wrote:
> > Disabling notifications during virtqueue processing reduces the number of
> > exits.  The virtio-net device already uses virtio_queue_set_notification() but
> > virtio-blk and virtio-scsi do not.
> > 
> > The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> > 
> >   (host)$ qemu-system-x86_64 \
> >               -enable-kvm -m 1024 -cpu host \
> >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> >                      cache=none,aio=native
> >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > 
> > Number of kvm_fast_mmio events:
> > Unpatched: 685k
> > Patched: 592k (-14%, lower is better)
> > 
> > Note that a workload with iodepth=1 and a single thread will not benefit - this
> > is a batching optimization.  The effect should be strongest with large iodepth
> > and multiple threads submitting I/O.  The guest I/O scheduler also affects the
> > optimization.
> 
> I have trouble seeing any difference in terms of performance or CPU load (other than
> a reduced number of kicks).
> I was expecting some benefit from reducing the spinlock hold times in virtio-blk,
> but this needs some more setup to actually find the sweet spot.

Are you testing on s390 with ccw?  I'm not familiar with the performance
characteristics of the kick under ccw.

> Maybe it will show its benefit with the polling thing?

Yes, I hope it will benefit polling.  I'll build patches for polling on
top of this.

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Christian Borntraeger @ 2016-11-18 11:36 UTC
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, qemu-devel, Kevin Wolf, Paolo Bonzini, zhunxun,
	Fam Zheng, Michael S. Tsirkin

On 11/18/2016 12:02 PM, Stefan Hajnoczi wrote:
> On Thu, Nov 17, 2016 at 12:01:30PM +0100, Christian Borntraeger wrote:
>> On 11/16/2016 10:53 PM, Stefan Hajnoczi wrote:
>>> Disabling notifications during virtqueue processing reduces the number of
>>> exits.  The virtio-net device already uses virtio_queue_set_notification() but
>>> virtio-blk and virtio-scsi do not.
>>>
>>> The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
>>>
>>>   (host)$ qemu-system-x86_64 \
>>>               -enable-kvm -m 1024 -cpu host \
>>>               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
>>>                      cache=none,aio=native
>>>   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
>>>   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
>>>
>>> Number of kvm_fast_mmio events:
>>> Unpatched: 685k
>>> Patched: 592k (-14%, lower is better)
>>>
>>> Note that a workload with iodepth=1 and a single thread will not benefit - this
>>> is a batching optimization.  The effect should be strongest with large iodepth
>>> and multiple threads submitting I/O.  The guest I/O scheduler also affects the
>>> optimization.
>>
>> I have trouble seeing any difference in terms of performance or CPU load (other than
>> a reduced number of kicks).
>> I was expecting some benefit from reducing the spinlock hold times in virtio-blk,
>> but this needs some more setup to actually find the sweet spot.
> 
> Are you testing on s390 with ccw?  

Yes

> I'm not familiar with the performance
> characteristics of the kick under ccw.

The kick is a diagnose instruction that exits the guest into the host kernel.
In the host kernel it will notify an eventfd and return to the guest,
so in essence it should be the same as x86. I was using host ramdisks, which
may have affected the performance.

> 
>> Maybe it will show its benefit with the polling thing?
> 
> Yes, I hope it will benefit polling.  I'll build patches for polling on
> top of this.
> 

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Michael S. Tsirkin @ 2016-11-18 14:21 UTC
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng, qemu-devel,
	Christian Borntraeger, Paolo Bonzini

On Fri, Nov 18, 2016 at 10:58:47AM +0000, Stefan Hajnoczi wrote:
> On Thu, Nov 17, 2016 at 07:38:45PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 17, 2016 at 01:27:49PM +0000, Stefan Hajnoczi wrote:
> > > On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> > > > On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > > > > Disabling notifications during virtqueue processing reduces the number of
> > > > > exits.  The virtio-net device already uses virtio_queue_set_notification() but
> > > > > virtio-blk and virtio-scsi do not.
> > > > > 
> > > > > The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> > > > > 
> > > > >   (host)$ qemu-system-x86_64 \
> > > > >               -enable-kvm -m 1024 -cpu host \
> > > > >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> > > > >                      cache=none,aio=native
> > > > >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> > > > >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > > > > 
> > > > > Number of kvm_fast_mmio events:
> > > > > Unpatched: 685k
> > > > > Patched: 592k (-14%, lower is better)
> > > > 
> > > > Any chance to see a gain in actual benchmark numbers?
> > > > This is important to make sure we are not just
> > > > shifting overhead around.
> > > 
> > > Good idea.  I reran this morning without any tracing and compared
> > > against bare metal.
> > > 
> > > Total reads for a 30-second 4 KB random read benchmark with 4 processes
> > > x iodepth=8:
> > > 
> > > Bare metal: 26440 MB
> > > Unpatched:  19799 MB
> > > Patched:    21252 MB
> > > 
> > > Patched vs Unpatched: +7% improvement
> > > Patched vs Bare metal: 20% virtualization overhead
> > > 
> > > The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a Samsung
> > > MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.
> > > 
> > > Seems like a worthwhile improvement to me.
> > > 
> > > Stefan
> > 
> > Sure. Pls remember to ping or re-post after the release.
> 
> How about a -next tree?

-next would make sense if we did Linus-style short merge
cycles followed by a long stabilization period.

With current QEMU style -next seems counter-productive: we do freezes
precisely so people focus on stabilization, but with -next everyone
except maintainers just keeps going as usual, and maintainers must
handle double the load.

> I've found that useful for block, net, and tracing in the past.  Most of
> the time it means patch authors can rest assured their patches will be
> merged without further action.  It allows development of features that
> depend on out-of-tree patches.
> 
> Stefan

Less work for authors, more work for me ... I'd rather distribute the load.

-- 
MST

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Stefan Hajnoczi @ 2016-11-18 15:20 UTC
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng, qemu-devel,
	Christian Borntraeger, Paolo Bonzini

On Fri, Nov 18, 2016 at 04:21:33PM +0200, Michael S. Tsirkin wrote:
> On Fri, Nov 18, 2016 at 10:58:47AM +0000, Stefan Hajnoczi wrote:
> > On Thu, Nov 17, 2016 at 07:38:45PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Nov 17, 2016 at 01:27:49PM +0000, Stefan Hajnoczi wrote:
> > > > On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> > > > > On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > > > > > Disabling notifications during virtqueue processing reduces the number of
> > > > > > exits.  The virtio-net device already uses virtio_queue_set_notification() but
> > > > > > virtio-blk and virtio-scsi do not.
> > > > > > 
> > > > > > The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> > > > > > 
> > > > > >   (host)$ qemu-system-x86_64 \
> > > > > >               -enable-kvm -m 1024 -cpu host \
> > > > > >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> > > > > >                      cache=none,aio=native
> > > > > >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> > > > > >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > > > > > 
> > > > > > Number of kvm_fast_mmio events:
> > > > > > Unpatched: 685k
> > > > > > Patched: 592k (-14%, lower is better)
> > > > > 
> > > > > Any chance to see a gain in actual benchmark numbers?
> > > > > This is important to make sure we are not just
> > > > > shifting overhead around.
> > > > 
> > > > Good idea.  I reran this morning without any tracing and compared
> > > > against bare metal.
> > > > 
> > > > Total reads for a 30-second 4 KB random read benchmark with 4 processes
> > > > x iodepth=8:
> > > > 
> > > > Bare metal: 26440 MB
> > > > Unpatched:  19799 MB
> > > > Patched:    21252 MB
> > > > 
> > > > Patched vs Unpatched: +7% improvement
> > > > Patched vs Bare metal: 20% virtualization overhead
> > > > 
> > > > The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a Samsung
> > > > MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.
> > > > 
> > > > Seems like a worthwhile improvement to me.
> > > > 
> > > > Stefan
> > > 
> > > Sure. Pls remember to ping or re-post after the release.
> > 
> > How about a -next tree?
> 
> -next would make sense if we did Linus-style short merge
> cycles followed by a long stabilization period.
> 
> With current QEMU style -next seems counter-productive: we do freezes
> precisely so people focus on stabilization, but with -next everyone
> except maintainers just keeps going as usual, and maintainers must
> handle double the load.
> 
> > I've found that useful for block, net, and tracing in the past.  Most of
> > the time it means patch authors can rest assured their patches will be
> > merged without further action.  It allows development of features that
> > depend on out-of-tree patches.
> > 
> > Stefan
> 
> Less work for authors, more work for me ... I'd rather distribute the load.

Okay.

Stefan

* Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
From: Stefan Hajnoczi @ 2017-01-04 13:51 UTC
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Kevin Wolf, zhunxun, Fam Zheng, qemu-devel,
	Christian Borntraeger, Paolo Bonzini

On Thu, Nov 17, 2016 at 07:38:45PM +0200, Michael S. Tsirkin wrote:
> On Thu, Nov 17, 2016 at 01:27:49PM +0000, Stefan Hajnoczi wrote:
> > On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > > > Disabling notifications during virtqueue processing reduces the number of
> > > > exits.  The virtio-net device already uses virtio_queue_set_notification() but
> > > > virtio-blk and virtio-scsi do not.
> > > > 
> > > > The following benchmark shows a 14% reduction in virtio-blk-pci MMIO exits:
> > > > 
> > > >   (host)$ qemu-system-x86_64 \
> > > >               -enable-kvm -m 1024 -cpu host \
> > > >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> > > >                      cache=none,aio=native
> > > >   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> > > >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> > > > 
> > > > Number of kvm_fast_mmio events:
> > > > Unpatched: 685k
> > > > Patched: 592k (-14%, lower is better)
> > > 
> > > Any chance to see a gain in actual benchmark numbers?
> > > This is important to make sure we are not just
> > > shifting overhead around.
> > 
> > Good idea.  I reran this morning without any tracing and compared
> > against bare metal.
> > 
> > Total reads for a 30-second 4 KB random read benchmark with 4 processes
> > x iodepth=8:
> > 
> > Bare metal: 26440 MB
> > Unpatched:  19799 MB
> > Patched:    21252 MB
> > 
> > Patched vs Unpatched: +7% improvement
> > Patched vs Bare metal: 20% virtualization overhead
> > 
> > The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a Samsung
> > MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.
> > 
> > Seems like a worthwhile improvement to me.
> > 
> > Stefan
> 
> Sure. Pls remember to ping or re-post after the release.

The block pull request I just sent with AioContext polling includes
these patches.  Just noticed this and remembered this email thread.
Feel free to NACK the pull request if you wish to discuss this series
again: <20170104133414.6524-1-stefanha@redhat.com>

Stefan
