* Accelerating non-standard disk types
@ 2022-05-16 17:38 Raphael Norwitz
  2022-05-17 13:53 ` Paolo Bonzini
  2022-05-17 15:29 ` Stefan Hajnoczi
  0 siblings, 2 replies; 8+ messages in thread
From: Raphael Norwitz @ 2022-05-16 17:38 UTC
  To: stefanha
  Cc: qemu-devel, John Levon, Thanos Makatos, Swapnil Ingle,
	alexis.lescout, Felipe Franciosi, mst

Hey Stefan,

We've been thinking about ways to accelerate other disk types, such as
SATA and IDE, with existing and more performant backends such as SPDK,
rather than translating to SCSI and using QEMU's iSCSI driver. We
think there are some options worth exploring:

[1] Keep using the SCSI translation in QEMU but back vDisks with a
vhost-user-scsi or vhost-user-blk backend device.
[2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
client?).
[3] We've also been looking at your libblkio library. From your
description in
https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
sounds like it could well play a role here, and possibly provide the
necessary abstractions to back I/O from these emulated disks to any
backends we may want?
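
To make [1] concrete: today's vhost-user-blk wiring for virtio-blk
devices, which the IDE/SATA emulation would sit in front of, looks
roughly like this (illustrative only - the socket path, memory size and
queue count are placeholders):

  # The backend (e.g. SPDK) exposes a vhost-user socket and QEMU
  # connects to it. Guest RAM must be shared so the backend can
  # read/write I/O buffers directly.
  qemu-system-x86_64 \
    -object memory-backend-memfd,id=mem0,size=4G,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=vu-blk0,path=/var/tmp/vhost.0 \
    -device vhost-user-blk-pci,chardev=vu-blk0,num-queues=4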

We are planning to start a review of these options internally to survey
tradeoffs, potential timelines and practicality for these approaches. We
were also considering putting a submission together for KVM forum
describing our findings. Would you see any value in that?

Thanks,
Raphael


* Re: Accelerating non-standard disk types
  2022-05-16 17:38 Accelerating non-standard disk types Raphael Norwitz
@ 2022-05-17 13:53 ` Paolo Bonzini
  2022-05-19 18:39   ` Raphael Norwitz
  2022-05-17 15:29 ` Stefan Hajnoczi
  1 sibling, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2022-05-17 13:53 UTC
  To: Raphael Norwitz, stefanha
  Cc: qemu-devel, John Levon, Thanos Makatos, Swapnil Ingle,
	alexis.lescout, Felipe Franciosi, mst

On 5/16/22 19:38, Raphael Norwitz wrote:
> [1] Keep using the SCSI translation in QEMU but back vDisks with a
> vhost-user-scsi or vhost-user-blk backend device.
> [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> client?).
> [3] We've also been looking at your libblkio library. From your
> description in
> https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> sounds like it could well play a role here, and possibly provide the
> necessary abstractions to back I/O from these emulated disks to any
> backends we may want?

First of all: have you benchmarked it?  How much time is spent on MMIO 
vs. disk I/O?

Of the options above, the most interesting to me is to implement a
vhost-user-blk/vhost-user-scsi backend in QEMU, similar to the NVMe one,
that would translate I/O submissions to virtqueues (including polling
and the like) and could be used with SATA.
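
Concretely, the backend's job would be to turn each emulated request
into the standard virtio-blk three-descriptor chain. A self-contained
sketch of just that translation step (header layout per the virtio
spec; endianness handling and the actual virtqueue plumbing omitted):

  #include <stddef.h>
  #include <stdint.h>
  #include <sys/uio.h>

  #define VIRTIO_BLK_T_IN  0   /* read */
  #define VIRTIO_BLK_T_OUT 1   /* write */

  struct virtio_blk_outhdr {   /* as in linux/virtio_blk.h */
      uint32_t type;
      uint32_t ioprio;
      uint64_t sector;         /* always in 512-byte units */
  };

  /* Chain for a read: [0] header (driver->device), [1] data buffer
   * (device->driver), [2] one status byte (device->driver). A real
   * backend would put these iovecs into virtqueue descriptors and
   * kick the vhost-user peer. */
  static int build_read_chain(struct virtio_blk_outhdr *hdr,
                              uint64_t lba, void *buf, size_t len,
                              uint8_t *status, struct iovec iov[3])
  {
      hdr->type = VIRTIO_BLK_T_IN;
      hdr->ioprio = 0;
      hdr->sector = lba;
      iov[0] = (struct iovec){ hdr, sizeof(*hdr) };
      iov[1] = (struct iovec){ buf, len };
      iov[2] = (struct iovec){ status, 1 };
      return 3;
  }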

For IDE specifically, I'm not sure how much it can be sped up since it
has only 1 in-flight operation.  I think using KVM coalesced I/O could
provide an interesting boost (assuming instant or near-instant reply
from the backend).  If what you're after is not really performance,
however, but rather having a single "connection" to your back end,
vhost-user is certainly an option.
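
Registering a coalesced zone is a single ioctl on the VM fd. A minimal
sketch, without error handling (note that coalescing only helps
writes; reads need an immediate answer and still exit):

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Coalesce guest PIO writes to the IDE primary command block
   * (ports 0x1f0-0x1f7): they are logged into a ring buffer and
   * handled in batches instead of taking one exit per access.
   * Requires KVM_CAP_COALESCED_PIO. */
  int coalesce_ide_ports(int vm_fd)
  {
      struct kvm_coalesced_mmio_zone zone = {
          .addr = 0x1f0,
          .size = 8,
          .pio  = 1,    /* I/O port range rather than MMIO */
      };
      return ioctl(vm_fd, KVM_REGISTER_COALESCED_MMIO, &zone);
  }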

Paolo



* Re: Accelerating non-standard disk types
  2022-05-16 17:38 Accelerating non-standard disk types Raphael Norwitz
  2022-05-17 13:53 ` Paolo Bonzini
@ 2022-05-17 15:29 ` Stefan Hajnoczi
  2022-05-19 18:34   ` Raphael Norwitz
  1 sibling, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2022-05-17 15:29 UTC
  To: Raphael Norwitz
  Cc: qemu-devel, John Levon, Thanos Makatos, Swapnil Ingle,
	alexis.lescout, Felipe Franciosi, mst

On Mon, May 16, 2022 at 05:38:31PM +0000, Raphael Norwitz wrote:
> Hey Stefan,
> 
> We've been thinking about ways to accelerate other disk types, such as
> SATA and IDE, with existing and more performant backends such as SPDK,
> rather than translating to SCSI and using QEMU's iSCSI driver. We
> think there are some options worth exploring:
> 
> [1] Keep using the SCSI translation in QEMU but back vDisks with a
> vhost-user-scsi or vhost-user-blk backend device.

If I understand correctly the idea is to have a QEMU Block Driver that
connects to SPDK using vhost-user-scsi/blk?

> [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> client?).

This is definitely the option with the lowest overhead. I'm not sure if
implementing SATA and IDE emulation in SPDK is worth the effort for
saving the last few cycles.

> [3] We've also been looking at your libblkio library. From your
> description in
> https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> sounds like it could well play a role here, and possibly provide the
> necessary abstractions to back I/O from these emulated disks to any
> backends we may want?

Kevin Wolf has contributed a vhost-user-blk driver for libblkio. This
lets you achieve #1 using QEMU's libblkio Block Driver. The guest still
sees IDE or SATA but instead of translating to iSCSI the I/O requests
are sent over vhost-user-blk.
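
From the application side it looks roughly like this - a minimal
sketch based on my reading of the current libblkio API (error checking
omitted; the library is still young, so names and properties may
change):

  #include <stdio.h>
  #include <blkio.h>

  int main(void)
  {
      struct blkio *b;
      struct blkioq *q;
      struct blkio_completion c;
      struct blkio_mem_region mem;

      blkio_create("virtio-blk-vhost-user", &b);
      blkio_set_str(b, "path", "/var/tmp/vhost.0"); /* backend socket */
      blkio_connect(b);
      blkio_start(b);

      /* vhost-user needs shared buffers, so allocate via libblkio. */
      blkio_alloc_mem_region(b, &mem, 4096);
      blkio_map_mem_region(b, &mem);

      q = blkio_get_queue(b, 0);
      blkioq_read(q, 0 /* offset */, mem.addr, 4096, NULL, 0);
      blkioq_do_io(q, &c, 1, 1, NULL);  /* block for one completion */

      printf("read returned %d\n", c.ret);
      blkio_destroy(&b);
      return 0;
  }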

I suggest joining the libblkio chat and we can discuss how to set this
up (the QEMU libblkio BlockDriver is not yet in qemu.git):
https://matrix.to/#/#libblkio:matrix.org

> We are planning to start a review of these options internally to survey
> tradeoffs, potential timelines and practicality for these approaches. We
> were also considering putting a submission together for KVM forum
> describing our findings. Would you see any value in that?

I think it's always interesting to see performance results. I wonder if
you have more cutting-edge optimizations or performance results you want
to share at KVM Forum because IDE and SATA are more legacy/niche
nowadays?

Stefan


* Re: Accelerating non-standard disk types
  2022-05-17 15:29 ` Stefan Hajnoczi
@ 2022-05-19 18:34   ` Raphael Norwitz
  0 siblings, 0 replies; 8+ messages in thread
From: Raphael Norwitz @ 2022-05-19 18:34 UTC
  To: Stefan Hajnoczi
  Cc: Raphael Norwitz, qemu-devel, John Levon, Thanos Makatos,
	Swapnil Ingle, alexis.lescout, Felipe Franciosi, mst

On Tue, May 17, 2022 at 04:29:17PM +0100, Stefan Hajnoczi wrote:
> On Mon, May 16, 2022 at 05:38:31PM +0000, Raphael Norwitz wrote:
> > Hey Stefan,
> > 
> > We've been thinking about ways to accelerate other disk types, such as
> > SATA and IDE, with existing and more performant backends such as SPDK,
> > rather than translating to SCSI and using QEMU's iSCSI driver. We
> > think there are some options worth exploring:
> > 
> > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > vhost-user-scsi or vhost-user-blk backend device.
> 
> If I understand correctly the idea is to have a QEMU Block Driver that
> connects to SPDK using vhost-user-scsi/blk?
>

Yes - the idea would be to introduce logic to translate SATA/IDE to SCSI
or block requests and send them via vhost-user-{scsi/blk} to SPDK or any
other vhost-user backend. Our thought is that this is doable today
whereas we may have to wait for QEMU to formally adopt libblkio before
proceeding with [3], and depending on timelines it may make sense to
implement [1] and then switch over to [3] later. Thoughts?
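
To make the translation step concrete: an IDE READ SECTORS would map
to something like a SCSI READ(10). A sketch of just the CDB-building
part, which is roughly what such a layer would do per request:

  #include <stdint.h>
  #include <string.h>

  /* READ(10): 32-bit LBA and 16-bit transfer length, both stored
   * big-endian in the CDB. */
  static void build_read10(uint8_t cdb[10], uint32_t lba,
                           uint16_t nsectors)
  {
      memset(cdb, 0, 10);
      cdb[0] = 0x28;          /* READ(10) opcode */
      cdb[2] = lba >> 24;
      cdb[3] = lba >> 16;
      cdb[4] = lba >> 8;
      cdb[5] = lba;
      cdb[7] = nsectors >> 8; /* transfer length in blocks */
      cdb[8] = nsectors;
  }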

> > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > client?).
> 
> This is definitely the option with the lowest overhead. I'm not sure if
> implementing SATA and IDE emulation in SPDK is worth the effort for
> saving the last few cycles.
>

Agreed - it’s probably not worth exploring because of the amount of work
involved. One argument in its favor is that it may be better for
security in the multiprocess QEMU world, but to me that does not seem
strong enough to justify the effort, so I suggest we drop option [2].

> > [3] We've also been looking at your libblkio library. From your
> > description in
> > https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> > sounds like it could well play a role here, and possibly provide the
> > necessary abstractions to back I/O from these emulated disks to any
> > backends we may want?
> 
> Kevin Wolf has contributed a vhost-user-blk driver for libblkio. This
> lets you achieve #1 using QEMU's libblkio Block Driver. The guest still
> sees IDE or SATA but instead of translating to iSCSI the I/O requests
> are sent over vhost-user-blk.
> 
> I suggest joining the libblkio chat and we can discuss how to set this
> up (the QEMU libblkio BlockDriver is not yet in qemu.git):
> https://matrix.to/#/#libblkio:matrix.org

Great - I have joined and will follow up there.

> 
> > We are planning to start a review of these options internally to survey
> > tradeoffs, potential timelines and practicality for these approaches. We
> > were also considering putting a submission together for KVM forum
> > describing our findings. Would you see any value in that?
> 
> I think it's always interesting to see performance results. I wonder if
> you have more cutting-edge optimizations or performance results you want
> to share at KVM Forum because IDE and SATA are more legacy/niche
> nowadays?
>

I realize I over-emphasized performance in my question - our larger goal
here is to align the data path for all disk types. We have some hope
that SATA can be sped up a bit, but it’s entirely possible that the MMIO
overhead will far outweigh any disk I/O improvements. Our thought was to
present a “Roadmap for offloading alternate disk types”, but given your
and Paolo’s responses it seems there isn’t enough material to warrant a
KVM Forum talk, and we should rather invest the time in prototyping and
evaluating solutions.

> Stefan



* Re: Accelerating non-standard disk types
  2022-05-17 13:53 ` Paolo Bonzini
@ 2022-05-19 18:39   ` Raphael Norwitz
  2022-05-25 16:00     ` Stefan Hajnoczi
  0 siblings, 1 reply; 8+ messages in thread
From: Raphael Norwitz @ 2022-05-19 18:39 UTC
  To: Paolo Bonzini
  Cc: Raphael Norwitz, stefanha, qemu-devel, John Levon,
	Thanos Makatos, Swapnil Ingle, Alexis Lescouet, Felipe Franciosi,
	mst

On Tue, May 17, 2022 at 03:53:52PM +0200, Paolo Bonzini wrote:
> On 5/16/22 19:38, Raphael Norwitz wrote:
> > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > vhost-user-scsi or vhost-user-blk backend device.
> > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > client?).
> > [3] We've also been looking at your libblkio library. From your
> > description in
> > https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> > sounds like it could well play a role here, and possibly provide the
> > necessary abstractions to back I/O from these emulated disks to any
> > backends we may want?
> 
> First of all: have you benchmarked it?  How much time is spent on MMIO vs.
> disk I/O?
>

Good point - we haven’t benchmarked the emulation, exit and translation
overheads - it is very possible speeding up disk I/O may not have a huge
impact. We would definitely benchmark this before exploring any of the
options seriously, but as you rightly note, performance is not the only
motivation here.

> Of the options above, the most interesting to me is to implement a
> vhost-user-blk/vhost-user-scsi backend in QEMU, similar to the NVMe one,
> that would translate I/O submissions to virtqueues (including polling
> and the like) and could be used with SATA.
>

We were certainly eyeing [1] as the most viable in the immediate future.
That said, since a vhost-user-blk driver has been added to libblkio, [3]
also sounds like a strong option. Do you see any long term benefit to
translating SATA/IDE submissions to virtqueues in a world where libblkio
is to be adopted?

> For IDE specifically, I'm not sure how much it can be sped up since it
> has only 1 in-flight operation.  I think using KVM coalesced I/O could
> provide an interesting boost (assuming instant or near-instant reply
> from the backend).  If what you're after is not really performance,
> however, but rather having a single "connection" to your back end,
> vhost-user is certainly an option.
> 

Interesting - I will take a look at KVM coalesced I/O.

You’re totally right though, performance is not our main interest for
these disk types. I should have emphasized offload rather than
acceleration and performance. We would prefer to QA and support as few
data paths as possible, and a vhost-user offload mechanism would allow
us to use the same path for all I/O. I imagine other QEMU users who
offload to backends like SPDK and use SATA/IDE disk types may feel
similarly?

> Paolo


* Re: Accelerating non-standard disk types
  2022-05-19 18:39   ` Raphael Norwitz
@ 2022-05-25 16:00     ` Stefan Hajnoczi
  2022-05-31  3:06       ` Raphael Norwitz
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2022-05-25 16:00 UTC
  To: Raphael Norwitz
  Cc: Paolo Bonzini, qemu-devel, John Levon, Thanos Makatos,
	Swapnil Ingle, Alexis Lescouet, Felipe Franciosi, mst

On Thu, May 19, 2022 at 06:39:39PM +0000, Raphael Norwitz wrote:
> On Tue, May 17, 2022 at 03:53:52PM +0200, Paolo Bonzini wrote:
> > On 5/16/22 19:38, Raphael Norwitz wrote:
> > > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > > vhost-user-scsi or vhost-user-blk backend device.
> > > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > > client?).
> > > [3] We've also been looking at your libblkio library. From your
> > > description in
> > > https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> > > sounds like it could well play a role here, and possibly provide the
> > > necessary abstractions to back I/O from these emulated disks to any
> > > backends we may want?
> > 
> > First of all: have you benchmarked it?  How much time is spent on MMIO vs.
> > disk I/O?
> >
> 
> Good point - we haven’t benchmarked the emulation, exit and translation
> overheads - it is very possible speeding up disk I/O may not have a huge
> impact. We would definitely benchmark this before exploring any of the
> options seriously, but as you rightly note, performance is not the only
> motivation here.
> 
> > Of the options above, the most interesting to me is to implement a
> > vhost-user-blk/vhost-user-scsi backend in QEMU, similar to the NVMe one,
> > that would translate I/O submissions to virtqueues (including polling
> > and the like) and could be used with SATA.
> >
> 
> We were certainly eyeing [1] as the most viable in the immediate future.
> That said, since a vhost-user-blk driver has been added to libblkio, [3]
> also sounds like a strong option. Do you see any long term benefit to
> translating SATA/IDE submissions to virtqueues in a world where libblkio
> is to be adopted?
>
> > For IDE specifically, I'm not sure how much it can be sped up since it
> > has only 1 in-flight operation.  I think using KVM coalesced I/O could
> > provide an interesting boost (assuming instant or near-instant reply
> > from the backend).  If what you're after is not really performance,
> > however, but rather having a single "connection" to your back end,
> > vhost-user is certainly an option.
> > 
> 
> Interesting - I will take a look at KVM coalesced I/O.
> 
> You’re totally right though, performance is not our main interest for
> these disk types. I should have emphasized offload rather than
> acceleration and performance. We would prefer to QA and support as few
> data paths as possible, and a vhost-user offload mechanism would allow
> us to use the same path for all I/O. I imagine other QEMU users who
> offload to backends like SPDK and use SATA/IDE disk types may feel
> similarly?

It's nice to have a single target (e.g. vhost-user-blk in SPDK) that
handles all disk I/O. On the other hand, QEMU would still have the
IDE/SATA emulation and libblkio vhost-user-blk driver, so in the end it
may not reduce the amount of code that you need to support.

Stefan


* Re: Accelerating non-standard disk types
  2022-05-25 16:00     ` Stefan Hajnoczi
@ 2022-05-31  3:06       ` Raphael Norwitz
  2022-06-01 13:06         ` Stefan Hajnoczi
  0 siblings, 1 reply; 8+ messages in thread
From: Raphael Norwitz @ 2022-05-31  3:06 UTC
  To: Stefan Hajnoczi
  Cc: Raphael Norwitz, Paolo Bonzini, qemu-devel, John Levon,
	Thanos Makatos, Swapnil Ingle, Alexis Lescouet, Felipe Franciosi,
	mst

On Wed, May 25, 2022 at 05:00:04PM +0100, Stefan Hajnoczi wrote:
> On Thu, May 19, 2022 at 06:39:39PM +0000, Raphael Norwitz wrote:
> > On Tue, May 17, 2022 at 03:53:52PM +0200, Paolo Bonzini wrote:
> > > On 5/16/22 19:38, Raphael Norwitz wrote:
> > > > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > > > vhost-user-scsi or vhost-user-blk backend device.
> > > > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > > > client?).
> > > > [3] We've also been looking at your libblkio library. From your
> > > > description in
> > > > https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> > > > sounds like it could well play a role here, and possibly provide the
> > > > necessary abstractions to back I/O from these emulated disks to any
> > > > backends we may want?
> > > 
> > > First of all: have you benchmarked it?  How much time is spent on MMIO vs.
> > > disk I/O?
> > >
> > 
> > Good point - we haven’t benchmarked the emulation, exit and translation
> > overheads - it is very possible speeding up disk I/O may not have a huge
> > impact. We would definitely benchmark this before exploring any of the
> > options seriously, but as you rightly note, performance is not the only
> > motivation here.
> > 
> > > Of the options above, the most interesting to me is to implement a
> > > vhost-user-blk/vhost-user-scsi backend in QEMU, similar to the NVMe one,
> > > that would translate I/O submissions to virtqueues (including polling
> > > and the like) and could be used with SATA.
> > >
> > 
> > We were certainly eyeing [1] as the most viable in the immediate future.
> > That said, since a vhost-user-blk driver has been added to libblkio, [3]
> > also sounds like a strong option. Do you see any long term benefit to
> > translating SATA/IDE submissions to virtqueues in a world where libblkio
> > is to be adopted?
> >
> > > For IDE specifically, I'm not sure how much it can be sped up since it
> > > has only 1 in-flight operation.  I think using KVM coalesced I/O could
> > > provide an interesting boost (assuming instant or near-instant reply
> > > from the backend).  If what you're after is not really performance,
> > > however, but rather having a single "connection" to your back end,
> > > vhost-user is certainly an option.
> > > 
> > 
> > Interesting - I will take a look at KVM coalesced I/O.
> > 
> > You’re totally right though, performance is not our main interest for
> > these disk types. I should have emphasized offload rather than
> > acceleration and performance. We would prefer to QA and support as few
> > data paths as possible, and a vhost-user offload mechanism would allow
> > us to use the same path for all I/O. I imagine other QEMU users who
> > offload to backends like SPDK and use SATA/IDE disk types may feel
> > similarly?
> 
> It's nice to have a single target (e.g. vhost-user-blk in SPDK) that
> handles all disk I/O. On the other hand, QEMU would still have the
> IDE/SATA emulation and libblkio vhost-user-blk driver, so in the end it
> may not reduce the amount of code that you need to support.
> 

Apologies for the late reply - I was on PTO.

For us it’s not so much about the overall LOC we support. We have our
own iSCSI client implementation with embedded business logic which we
use for SCSI disks. Continuing to support SATA and IDE disks outside
that implementation has been really troublesome, so even if it means
more LOC, we would really like to unify our data path, at least at the
iSCSI layer.

While the overall code may not be reduced so much for many others today,
it may make a significant difference in the future. I can imagine some
QEMU users may want to deprecate (or not implement) iSCSI target support
in favor of NVMe over fabrics and still support these disk types. Being
able to offload the transport layer via vhost-user-blk (either with some
added logic on top of the existing SCSI translation layer or with
libblkio) would make this easy.

Does that sound reasonable?

> Stefan



* Re: Accelerating non-standard disk types
  2022-05-31  3:06       ` Raphael Norwitz
@ 2022-06-01 13:06         ` Stefan Hajnoczi
  0 siblings, 0 replies; 8+ messages in thread
From: Stefan Hajnoczi @ 2022-06-01 13:06 UTC
  To: Raphael Norwitz
  Cc: Paolo Bonzini, qemu-devel, John Levon, Thanos Makatos,
	Swapnil Ingle, Alexis Lescouet, Felipe Franciosi, mst

On Tue, May 31, 2022 at 03:06:20AM +0000, Raphael Norwitz wrote:
> On Wed, May 25, 2022 at 05:00:04PM +0100, Stefan Hajnoczi wrote:
> > On Thu, May 19, 2022 at 06:39:39PM +0000, Raphael Norwitz wrote:
> > > On Tue, May 17, 2022 at 03:53:52PM +0200, Paolo Bonzini wrote:
> > > > On 5/16/22 19:38, Raphael Norwitz wrote:
> > > > > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > > > > vhost-user-scsi or vhost-user-blk backend device.
> > > > > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > > > > client?).
> > > > > [3] We've also been looking at your libblkio library. From your
> > > > > description in
> > > > > https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> > > > > sounds like it could well play a role here, and possibly provide the
> > > > > necessary abstractions to back I/O from these emulated disks to any
> > > > > backends we may want?
> > > > 
> > > > First of all: have you benchmarked it?  How much time is spent on MMIO vs.
> > > > disk I/O?
> > > >
> > > 
> > > Good point - we haven’t benchmarked the emulation, exit and translation
> > > overheads - it is very possible speeding up disk I/O may not have a huge
> > > impact. We would definitely benchmark this before exploring any of the
> > > options seriously, but as you rightly note, performance is not the only
> > > motivation here.
> > > 
> > > > Of the options above, the most interesting to me is to implement a
> > > > vhost-user-blk/vhost-user-scsi backend in QEMU, similar to the NVMe one,
> > > > that would translate I/O submissions to virtqueues (including polling
> > > > and the like) and could be used with SATA.
> > > >
> > > 
> > > We were certainly eyeing [1] as the most viable in the immediate future.
> > > That said, since a vhost-user-blk driver has been added to libblkio, [3]
> > > also sounds like a strong option. Do you see any long term benefit to
> > > translating SATA/IDE submissions to virtqueues in a world where libblkio
> > > is to be adopted?
> > >
> > > > For IDE specifically, I'm not sure how much it can be sped up since it
> > > > has only 1 in-flight operation.  I think using KVM coalesced I/O could
> > > > provide an interesting boost (assuming instant or near-instant reply
> > > > from the backend).  If what you're after is not really performance,
> > > > however, but rather having a single "connection" to your back end,
> > > > vhost-user is certainly an option.
> > > > 
> > > 
> > > Interesting - I will take a look at KVM coalesced I/O.
> > > 
> > > You’re totally right though, performance is not our main interest for
> > > these disk types. I should have emphasized offload rather than
> > > acceleration and performance. We would prefer to QA and support as few
> > > data paths as possible, and a vhost-user offload mechanism would allow
> > > us to use the same path for all I/O. I imagine other QEMU users who
> > > offload to backends like SPDK and use SATA/IDE disk types may feel
> > > similarly?
> > 
> > It's nice to have a single target (e.g. vhost-user-blk in SPDK) that
> > handles all disk I/O. On the other hand, QEMU would still have the
> > IDE/SATA emulation and libblkio vhost-user-blk driver, so in the end it
> > may not reduce the amount of code that you need to support.
> > 
> 
> Apologies for the late reply - I was on PTO.
> 
> For us it’s not so much about the overall LOC we support. We have our
> own iSCSI client implementation with embedded business logic which we
> use for SCSI disks. Continuing to support SATA and IDE disks outside
> that implementation has been really troublesome, so even if it means
> more LOC, we would really like to unify our data path, at least at the
> iSCSI layer.
> 
> While the overall code may not be reduced so much for many others today,
> it may make a significant difference in the future. I can imagine some
> QEMU users may want to deprecate (or not implement) iSCSI target support
> in favor of NVMe over fabrics and still support these disk types. Being
> able to offload the transport layer via vhost-user-blk (either with some
> added logic on top of the existing SCSI translation layer or with
> libblkio) would make this easy.
> 
> Does that sound reasonable?

Yes.

Stefan

