From: Khoa Huynh
Subject: Re: [RFC v9 00/27] virtio: virtio-blk data plane
Date: Wed, 18 Jul 2012 11:18:29 -0500
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>, Anthony Liguori, Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>, kvm@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Asias He <asias@redhat.com>

"Michael S. Tsirkin" <mst@redhat.com> wrote on 07/18/2012 10:43:23 AM:

> From: "Michael S. Tsirkin" <mst@redhat.com>

> To: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>,
> Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, Anthony Liguori/Austin/IBM@IBMUS,
> Kevin Wolf <kwolf@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
> Asias He <asias@redhat.com>, Khoa Huynh/Austin/IBM@IBMUS

> Date: 07/18/2012 10:46 AM
> Subject: Re: [RFC v9 00/27] virtio: virtio-blk data plane
>
> On Wed, Jul 18, 2012 at 04:07:27PM +0100, Stefan Hajnoczi wrote:
> > This series implements a dedicated thread for virtio-blk processing using
> > Linux AIO for raw image files only.  It is based on qemu-kvm.git a0bc8c3 and
> > somewhat old but I wanted to share it on the list since it has been mentioned
> > on mailing lists and IRC recently.
> >
> > These patches can be used for benchmarking and discussion about how to
> > improve block performance.  Paolo Bonzini has also worked in this area and
> > might want to share his patches.
> >
> > The basic approach is:
> > 1. Each virtio-blk device has a thread dedicated to handling ioeventfd
> >    signalling when the guest kicks the virtqueue.
> > 2. Requests are processed without going through the QEMU block layer using
> >    Linux AIO directly.
> > 3. Completion interrupts are injected via ioctl from the dedicated thread.
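
To make the three steps above concrete for anyone skimming the thread, here is a
minimal, self-contained sketch of what the dedicated thread does: wait on an
eventfd standing in for the ioeventfd kick, submit the I/O with Linux AIO, and
pick up the completion.  This is illustrative only, not code from this series;
the image path, the 4 KB request size, and the printf standing in for the real
completion-interrupt ioctl are all placeholders.  It should build with
"gcc -o dataplane-sketch dataplane-sketch.c -laio".

/* dataplane-sketch.c -- illustrative only; assumes libaio and a filesystem
 * that supports O_DIRECT, and keeps error handling to a minimum. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <raw-image>\n", argv[0]);
        return 1;
    }

    /* Step 1: the fd that the guest's virtqueue kick (ioeventfd) would be
     * wired to; here we simply kick it ourselves once. */
    int kick_fd = eventfd(0, 0);
    if (kick_fd < 0) {
        perror("eventfd");
        return 1;
    }

    /* Raw image opened with O_DIRECT so I/O bypasses the host page cache. */
    int img_fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (img_fd < 0) {
        perror("open");
        return 1;
    }

    /* Step 2: a Linux AIO context used directly, not the QEMU block layer. */
    io_context_t ctx = 0;
    if (io_setup(64, &ctx) < 0) {
        fprintf(stderr, "io_setup failed\n");
        return 1;
    }

    /* O_DIRECT requires block-aligned buffers, offsets, and lengths. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) {
        return 1;
    }

    /* Simulate a single guest kick; in QEMU, KVM signals this eventfd when
     * the guest notifies the virtqueue. */
    uint64_t val = 1, kicks;
    if (write(kick_fd, &val, sizeof(val)) != sizeof(val)) {
        return 1;
    }
    if (read(kick_fd, &kicks, sizeof(kicks)) != sizeof(kicks)) {
        return 1;   /* this read blocks until a kick arrives */
    }

    /* The real data plane would parse the request out of the vring here;
     * we just read the first 4 KB of the image. */
    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, img_fd, buf, 4096, 0);
    if (io_submit(ctx, 1, cbs) != 1) {
        fprintf(stderr, "io_submit failed\n");
        return 1;
    }

    struct io_event ev;
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) {   /* wait for completion */
        fprintf(stderr, "io_getevents failed\n");
        return 1;
    }

    /* Step 3: the real thread injects a completion interrupt into the guest
     * via an ioctl; here we only report the result. */
    printf("request completed, res=%ld\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(img_fd);
    close(kick_fd);
    return 0;
}
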
> >
> > The series also contains request merging as a bdrv_aio_multiwrite() equivalent.
> > This was only to get a comparison against the QEMU block layer and I would
> > drop it for other types of analysis.
> >
> > The effect of this series is that O_DIRECT Linux AIO on raw files can bypass
> > the QEMU global mutex and block layer.  This means higher performance.
>
> Do you have any numbers at all?


Yes, we do have a lot of data for this data-plane patch set.  I can send you
detailed charts if you like, but generally, we run into a performance bottleneck
with the existing QEMU due to the QEMU global mutex and thus could only get
to about 140,000 IOPS for a single guest (at least on my setup).  With this
data-plane patch set, we bypass this bottleneck and have been able to achieve
more than 600,000 IOPS for a single guest, and an aggregate of 1.33 million IOPS
with 4 guests on a single host.

Just for reference, VMware has claimed that they could get 300,000 IOPS for a
single VM and 1 million IOPS with 6 VMs on a single vSphere 5.0 host.  So we
definitely need something like this for KVM to be competitive with VMware and
other hypervisors.  Of course, this would also help satisfy the high I/O rate
requirements of big data and other data-intensive applications and benchmarks
running on KVM.

Thanks,
-Khoa

>
> > A cleaned up version of this approach could be added to QEMU as a raw O_DIRECT
> > Linux AIO fast path.  Image file formats, protocols, and other block layer
> > features are not supported by virtio-blk-data-plane.
> >
> > Git repo:
> > http://repo.or.cz/w/qemu-kvm/stefanha.git/shortlog/refs/heads/virtio-blk-data-plane
> >
> > Stefan Hajnoczi (27):
> >   virtio-blk: Remove virtqueue request handling code
> >   virtio-blk: Set up host notifier for data plane
> >   virtio-blk: Data plane thread event loop
> >   virtio-blk: Map vring
> >   virtio-blk: Do cheapest possible memory mapping
> >   virtio-blk: Take PCI memory range into account
> >   virtio-blk: Put dataplane code into its own directory
> >   virtio-blk: Read requests from the vring
> >   virtio-blk: Add Linux AIO queue
> >   virtio-blk: Stop data plane thread cleanly
> >   virtio-blk: Indirect vring and flush support
> >   virtio-blk: Add workaround for BUG_ON() dependency in virtio_ring.h
> >   virtio-blk: Increase max requests for indirect vring
> >   virtio-blk: Use pthreads instead of qemu-thread
> >   notifier: Add a function to set the notifier
> >   virtio-blk: Kick data plane thread using event notifier set
> >   virtio-blk: Use guest notifier to raise interrupts
> >   virtio-blk: Call ioctl() directly instead of irqfd
> >   virtio-blk: Disable guest->host notifies while processing vring
> >   virtio-blk: Add ioscheduler to detect mergable requests
> >   virtio-blk: Add basic request merging
> >   virtio-blk: Fix request merging
> >   virtio-blk: Stub out SCSI commands
> >   virtio-blk: fix incorrect length
> >   msix: fix irqchip breakage in msix_try_notify_from_thread()
> >   msix: use upstream kvm_irqchip_set_irq()
> >   virtio-blk: add EVENT_IDX support to dataplane
> >
> >  event_notifier.c          |    7 +
> >  event_notifier.h          |    1 +
> >  hw/dataplane/event-poll.h |  116 +++++++
> >  hw/dataplane/ioq.h        |  128 ++++++++
> >  hw/dataplane/iosched.h    |   97 ++++++
> >  hw/dataplane/vring.h      |  334 ++++++++++++++++++++
> >  hw/msix.c                 |   15 +
> >  hw/msix.h                 |    1 +
> >  hw/virtio-blk.c           |  753 ++++++++++++++++++++------------------------
> >  hw/virtio-pci.c           |    8 +
> >  hw/virtio.c               |    9 +
> >  hw/virtio.h               |    3 +
> >  12 files changed, 1074 insertions(+), 398 deletions(-)
> >  create mode 100644 hw/dataplane/event-poll.h
> >  create mode 100644 hw/dataplane/ioq.h
> >  create mode 100644 hw/dataplane/iosched.h
> >  create mode 100644 hw/dataplane/vring.h
> >
> > --
> > 1.7.10.4
>