spdk.lists.linux.dev archive mirror
* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 13:54 zbhhbz
  0 siblings, 0 replies; 13+ messages in thread
From: zbhhbz @ 2022-03-08 13:54 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5499 bytes --]




OK, thanks.
One more follow-up question:
doesn't this lead to a double DMA of the data? One from the guest to the DRAM region and one from the DRAM region to the device buffer.











At 2022-03-08 21:31:44, "Thanos Makatos" <thanos.makatos(a)nutanix.com> wrote:
>> -----Original Message-----
>> From: zbhhbz <zbhhbz(a)yeah.net>
>> Sent: 08 March 2022 13:20
>> To: Storage Performance Development Kit <spdk(a)lists.01.org>
>> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>> 
>> 
>> 
>> 
>> thanks, this helps a lot.
>> 
>> 
>> about the vfio-user,
>> 
>> 
>> I understand that in the vfio-user,
>> the guestOS can issue DMA read/write to a "pcie space " of a virtual device,
>> but I'm confused:
>> 1. does the guestOS issue DMA read/write to region on actual physical device
>> or just a DRAM region?
>
>A DRAM region.
>
>>      if the guestOS directly access the physical device(DMA,IOMMU), where
>> does spdk stands?
>> 2. why do vfio-user need a socket, what kind of data does the socket carries?
>
>The vfio-user protocol allows a device to be emulated outside QEMU (the vfio-user client), in a separate process (the vfio-user server, SPDK running the nvmf/vfio-user target in our case). The UNIX domain socket is used between QEMU and SPDK for initial device setup, virtual IRQs, and other infrequent operations.
>
>> 3. Does the vfio-user look like the vhost-user except for direct DMA access
>> instead of shared memory communication?
>
>vfio-user allows any kind of device to be emulated in a separate process (even non-PCI), while vhost-user is mainly for VirtIO devices.
>
>> 
>> 
>> thanks
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> At 2022-03-08 17:40:12, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
>> wrote:
>> >> -----Original Message-----
>> >> From: zbhhbz <zbhhbz(a)yeah.net>
>> >> Sent: 08 March 2022 09:29
>> >> To: spdk <spdk(a)lists.01.org>
>> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>> >>
>> >> thanks, follow up questions:
>> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev), i cant'
>> >> have access to the nvme feature in guest(shadow door bell). is that
>> correct?
>> >
>> >Correct.
>> >
>> >> 2. in the vfio-user solution, does the interrupt sending from the
>> ssd(nvme)
>> >> go through the qemu/kvm? or it go straight to the guest kernel?
>> >
>> >It doesn't go to QEMU/KVM. It depends on how you've set it up in SPDK: it
>> can either go the host kernel or to SPDK.
>> >
>> >> 3. when will the vfio-user be available? does kvm have same delima here?
>> >
>> >vfio-user is under review in QEMU, so we can't predict when it will be
>> accepted upstream. This doesn't mean you can't use it though, have a look
>> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
>> >
>> >>
>> >>
>> >> thank you very much!
>> >>
>> >>
>> >>
>> >>
>> >> ---- 回复的原邮件 ----
>> >> | 发件人 | Liu, Changpeng<changpeng.liu(a)intel.com> |
>> >> | 日期 | 2022年03月08日 16:33 |
>> >> | 收件人 | Storage Performance Development Kit<spdk(a)lists.01.org> |
>> >> | 抄送至 | |
>> >> | 主题 | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
>> >> Previously SPDK extended QEMU with a separate driver to enable vhost-
>> >> nvme, this driver
>> >> was not accepted by QEMU, so now we support emulated NVMe with a
>> new
>> >> solution
>> >> "vfio-user", again the driver for supporting this is still under code review
>> of
>> >> QEMU community,
>> >> but SPDK already supports this.
>> >>
>> >> > -----Original Message-----
>> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
>> >> > Sent: Tuesday, March 8, 2022 4:29 PM
>> >> > To: spdk(a)lists.01.org
>> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
>> >> >
>> >> > Could someone help me understand the difference between vhost-
>> nvme
>> >> and
>> >> > vhost-blk?
>> >> > The online doc only shows there is vhost-user-blk-pci, why not vhost-
>> user-
>> >> nvme-
>> >> > pci?
>> >> > There is little doc fined in github/spdk and the qemu itself doesn't help
>> >> either
>> >> > Thanks!
>> >> > _______________________________________________
>> >> > SPDK mailing list -- spdk(a)lists.01.org
>> >> > To unsubscribe send an email to spdk-leave(a)lists.01.org
>> >> _______________________________________________
>> >> SPDK mailing list -- spdk(a)lists.01.org
>> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
>> >> _______________________________________________
>> >> SPDK mailing list -- spdk(a)lists.01.org
>> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
>> >_______________________________________________
>> >SPDK mailing list -- spdk(a)lists.01.org
>> >To unsubscribe send an email to spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org
>> To unsubscribe send an email to spdk-leave(a)lists.01.org
>_______________________________________________
>SPDK mailing list -- spdk(a)lists.01.org
>To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 17:54 zbhhbz
  0 siblings, 0 replies; 13+ messages in thread
From: zbhhbz @ 2022-03-08 17:54 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 13406 bytes --]

Thank you very much, I understand now.



---- Original message ----
| From | Walker, Benjamin<benjamin.walker(a)intel.com> |
| Date | 2022-03-09 00:45 |
| To | Storage Performance Development Kit<spdk(a)lists.01.org> |
| Cc | |
| Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
To clarify the flow of a request for vfio-user:

1) At start-up time the guest shares its memory by passing an fd over the socket to the SPDK backing process, which SPDK maps.
2) The guest then issues an NVMe read request into the "virtual" NVMe submission queue and rings the (shadow) doorbell
3) The SPDK side sees the doorbell write, reads the requests that were placed into the virtual queue, and then constructs bdev I/O requests to its back-end, which could be any kind of device.
4) If the backing bdev happens to be a local NVMe SSD, the bdev I/O request is translated into an NVMe request that's put onto the NVMe device's submission queue and the doorbell is rung.
5) When SPDK detects a completion from the backing device (by polling the CQ if it's an NVMe device) it passes the completion back up the bdev stack and into the nvmf target which places a CQE on the virtual completion queue
6) After writing completion queue entries to the queue, SPDK will generate an interrupt by kicking a pre-created file descriptor that represents the interrupt vector (if interrupts are enabled).
7) QEMU wakes up on the fd kick and emulates an interrupt into the guest

During this whole process, the data is never copied or moved - only the request structure. The backing device ends up doing DMA directly to guest memory (that's been mapped into the SPDK process). If the guest is using "virtual" NVMe queues that do not have interrupts enabled, the SPDK process will not generate interrupts. So if the guest is polling, no VM_EXITs are generated at all during the entire process.

Note that the flow here is mostly the same as for vhost-user. The main difference is that with vfio-user we're free to emulate any device type, so we've chosen to emulate NVMe. That means the guest can run its standard NVMe driver rather than requiring virtio drivers.

> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: Tuesday, March 8, 2022 7:46 AM
> To: spdk <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>
> OK,thanks,
> I'm also curious about how the spdk interrupt the guestOS does the interrupt go
> through the qemu/kvm or does it go straight to the guestOS?
> 1.in Vhost-user the guestOS is in poll mode, so it should wait for an interrupt
> from the nic.Will the interrupt from the nic go through qemu/kvm first?
> 2.in vfio-user , since the guestOS has direct access to the "pcie address"(DRAM),
> can the spdk interrupt the guestOS directly? (a nvme interrupt)
>
>
>
> ---- Original message ----
> | From | Thanos Makatos<thanos.makatos(a)nutanix.com> |
> | Date | 2022-03-08 22:19 |
> | To | Storage Performance Development Kit<spdk(a)lists.01.org> |
> | Cc | |
> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
>
> > -----Original Message-----
> > From: zbhhbz <zbhhbz(a)yeah.net>
> > Sent: 08 March 2022 14:16
> > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > what i meat is that :
> > in vhost-user:
> > 1. the guestOS should put an (vhost) request in the virtqueue 2. then
> > the spdk polling discover this request 3. spdk should put an nvme
> > request  in the actual device and knock the door bell.
> > this is two data(not the actual data but the request struct itself).
> >
> >
> > in vfio-user:
> > 1. the guestOS put an (nvme) request struct in the DRAM region 2. the
> > spdk discovers this and then what ? still needs to inform the nvme
> > physical device  right?
> > this is still two data copy(DMA) in the manner of nvme request struct.
>
> You're right, it does need to create a new request an put in the queue that can
> be seen by the physical controller. However, I believe the read and write payload
> doesn't require this additional step.
>
> >
> >
> >
> >
> >
> >
> >
> >
> > At 2022-03-08 22:02:48, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
> > wrote:
> > >> -----Original Message-----
> > >> From: Liu, Xiaodong <xiaodong.liu(a)intel.com>
> > >> Sent: 08 March 2022 14:00
> > >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> > >>
> > >> No, Vhost-user will just do once data DMA from Guest DRAM region to
> > >> device buffer.
> > >
> > >Same for vfio-user, the guest has shared its memory to SPDK so the
> > >physical
> > device can access that memory directly.
> > >
> > >>
> > >> -----Original Message-----
> > >> From: zbhhbz <zbhhbz(a)yeah.net>
> > >> Sent: Tuesday, March 8, 2022 9:54 PM
> > >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> > >>
> > >>
> > >>
> > >>
> > >> ok, thanks,
> > >> one more follow up question:
> > >> isn't this leads to double DMA of data? one from guest to DRAM
> > >> region
> > and
> > >> one from DRAM region to device buffer.
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> At 2022-03-08 21:31:44, "Thanos Makatos"
> > <thanos.makatos(a)nutanix.com>
> > >> wrote:
> > >> >> -----Original Message-----
> > >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> > >> >> Sent: 08 March 2022 13:20
> > >> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > >> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-
> > blk
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >> thanks, this helps a lot.
> > >> >>
> > >> >>
> > >> >> about the vfio-user,
> > >> >>
> > >> >>
> > >> >> I understand that in the vfio-user, the guestOS can issue DMA
> > >> >> read/write to a "pcie space " of a virtual device, but I'm
> > >> >> confused:
> > >> >> 1. does the guestOS issue DMA read/write to region on actual
> > >> >> physical device or just a DRAM region?
> > >> >
> > >> >A DRAM region.
> > >> >
> > >> >>      if the guestOS directly access the physical
> > >> >> device(DMA,IOMMU), where does spdk stands?
> > >> >> 2. why do vfio-user need a socket, what kind of data does the
> > >> >> socket
> > >> carries?
> > >> >
> > >> >The vfio-user protocol allows a device to be emulated outside QEMU
> > (the
> > >> vfio-user client), in a separate process (the vfio-user server,
> > >> SPDK running the nvmf/vfio-user target in our case). The UNIX
> > >> domain socket is used between QEMU and SPDK for initial device
> > >> setup, virtual IRQs, and other infrequent operations.
> > >> >
> > >> >> 3. Does the vfio-user look like the vhost-user except for direct
> > >> >> DMA access instead of shared memory communication?
> > >> >
> > >> >vfio-user allows any kind of device to be emulated in a separate
> > >> >process
> > >> (even non-PCI), while vhost-user is mainly for VirtIO devices.
> > >> >
> > >> >>
> > >> >>
> > >> >> thanks
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >> At 2022-03-08 17:40:12, "Thanos Makatos"
> > >> <thanos.makatos(a)nutanix.com>
> > >> >> wrote:
> > >> >> >> -----Original Message-----
> > >> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> > >> >> >> Sent: 08 March 2022 09:29
> > >> >> >> To: spdk <spdk(a)lists.01.org>
> > >> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and
> > >> >> >> vhost-blk
> > >> >> >>
> > >> >> >> thanks, follow up questions:
> > >> >> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme
> > >> >> >> ssd(bdev),
> > i
> > >> cant'
> > >> >> >> have access to the nvme feature in guest(shadow door bell).
> > >> >> >> is that
> > >> >> correct?
> > >> >> >
> > >> >> >Correct.
> > >> >> >
> > >> >> >> 2. in the vfio-user solution, does the interrupt sending from
> > >> >> >> the
> > >> >> ssd(nvme)
> > >> >> >> go through the qemu/kvm? or it go straight to the guest kernel?
> > >> >> >
> > >> >> >It doesn't go to QEMU/KVM. It depends on how you've set it up
> > >> >> >in
> > >> >> >SPDK: it
> > >> >> can either go the host kernel or to SPDK.
> > >> >> >
> > >> >> >> 3. when will the vfio-user be available? does kvm have same
> > >> >> >> delima
> > >> here?
> > >> >> >
> > >> >> >vfio-user is under review in QEMU, so we can't predict when it
> > >> >> >will be
> > >> >> accepted upstream. This doesn't mean you can't use it though,
> > >> >> have a look
> > >> >> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
> > >> >> >
> > >> >> >>
> > >> >> >>
> > >> >> >> thank you very much!
> > >> >> >>
> > >> >> >>
> > >> >> >>
> > >> >> >>
> > >> >> >> ---- 回复的原邮件 ----
> > >> >> >> | 发件人 | Liu, Changpeng<changpeng.liu(a)intel.com> |
> > >> >> >> | 日期 | 2022年03月08日 16:33 |
> > >> >> >> | 收件人 | Storage Performance Development
> > Kit<spdk(a)lists.01.org>
> > >> |
> > >> >> >> | 抄送至 | |
> > >> >> >> | 主题 | [SPDK] Re: The difference between vhost-nvme and
> > vhost-
> > >> blk
> > >> >> >> | |
> > >> >> >> Previously SPDK extended QEMU with a separate driver to
> > >> >> >> enable
> > >> >> >> vhost- nvme, this driver was not accepted by QEMU, so now we
> > >> >> >> support emulated NVMe with a
> > >> >> new
> > >> >> >> solution
> > >> >> >> "vfio-user", again the driver for supporting this is still
> > >> >> >> under code review
> > >> >> of
> > >> >> >> QEMU community,
> > >> >> >> but SPDK already supports this.
> > >> >> >>
> > >> >> >> > -----Original Message-----
> > >> >> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> > >> >> >> > Sent: Tuesday, March 8, 2022 4:29 PM
> > >> >> >> > To: spdk(a)lists.01.org
> > >> >> >> > Subject: [SPDK] The difference between vhost-nvme and
> > >> >> >> > vhost-
> > blk
> > >> >> >> >
> > >> >> >> > Could someone help me understand the difference between
> > vhost-
> > >> >> nvme
> > >> >> >> and
> > >> >> >> > vhost-blk?
> > >> >> >> > The online doc only shows there is vhost-user-blk-pci, why
> > >> >> >> > not
> > >> >> >> > vhost-
> > >> >> user-
> > >> >> >> nvme-
> > >> >> >> > pci?
> > >> >> >> > There is little doc fined in github/spdk and the qemu
> > >> >> >> > itself doesn't help
> > >> >> >> either
> > >> >> >> > Thanks!
> > >> >> >> > _______________________________________________
> > >> >> >> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send
> > >> >> >> > an email to spdk-leave(a)lists.01.org
> > >> >> >> _______________________________________________
> > >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> >> email to spdk-leave(a)lists.01.org
> > >> >> >> _______________________________________________
> > >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> >> email to spdk-leave(a)lists.01.org
> > >> >> >_______________________________________________
> > >> >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> >email to spdk-leave(a)lists.01.org
> > >> >> _______________________________________________
> > >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> email to spdk-leave(a)lists.01.org
> > >> >_______________________________________________
> > >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >email to spdk-leave(a)lists.01.org
> > >> _______________________________________________
> > >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> > >> to spdk-leave(a)lists.01.org
> > >> _______________________________________________
> > >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> > >> to spdk-leave(a)lists.01.org
> > >_______________________________________________
> > >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> > >to spdk-leave(a)lists.01.org
> > _______________________________________________
> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email to
> > spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 16:45 Walker, Benjamin
  0 siblings, 0 replies; 13+ messages in thread
From: Walker, Benjamin @ 2022-03-08 16:45 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 12923 bytes --]

To clarify the flow of a request for vfio-user:

1) At start-up time the guest shares its memory by passing an fd over the socket to the SPDK backing process, which SPDK maps.
2) The guest then issues an NVMe read request into the "virtual" NVMe submission queue and rings the (shadow) doorbell
3) The SPDK side sees the doorbell write, reads the requests that were placed into the virtual queue, and then constructs bdev I/O requests to its back-end, which could be any kind of device.
4) If the backing bdev happens to be a local NVMe SSD, the bdev I/O request is translated into an NVMe request that's put onto the NVMe device's submission queue and the doorbell is rung.
5) When SPDK detects a completion from the backing device (by polling the CQ if it's an NVMe device) it passes the completion back up the bdev stack and into the nvmf target which places a CQE on the virtual completion queue
6) After writing completion queue entries to the queue, SPDK will generate an interrupt by kicking a pre-created file descriptor that represents the interrupt vector (if interrupts are enabled).
7) QEMU wakes up on the fd kick and emulates an interrupt into the guest

During this whole process, the data is never copied or moved - only the request structure. The backing device ends up doing DMA directly to guest memory (that's been mapped into the SPDK process). If the guest is using "virtual" NVMe queues that do not have interrupts enabled, the SPDK process will not generate interrupts. So if the guest is polling, no VM_EXITs are generated at all during the entire process.

Note that the flow here is mostly the same as for vhost-user. The main difference is that with vfio-user we're free to emulate any device type, so we've chosen to emulate NVMe. That means the guest can run its standard NVMe driver rather than requiring virtio drivers.
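
To make steps 2-7 concrete, here is a minimal C sketch of the idea. The names and layout (virt_queue, submit_to_bdev, irq_eventfd) are purely illustrative and are not SPDK's actual vfio-user code; CQ phase bits, queue wrap-around details and error handling are omitted.

    /*
     * Illustrative sketch only -- NOT SPDK's real vfio-user implementation.
     */
    #include <stdint.h>
    #include <unistd.h>

    struct nvme_sqe { uint8_t raw[64]; };   /* 64-byte NVMe submission queue entry */
    struct nvme_cqe { uint8_t raw[16]; };   /* 16-byte NVMe completion queue entry */

    struct virt_queue {
        volatile uint32_t *shadow_db;  /* shadow doorbell living in shared guest memory */
        uint32_t last_tail;            /* last SQ tail value SPDK has consumed */
        uint32_t size;
        struct nvme_sqe *sq;           /* SQ ring, in guest memory mapped into SPDK */
        struct nvme_cqe *cq;           /* CQ ring, in guest memory mapped into SPDK */
        uint32_t cq_tail;
        int irq_eventfd;               /* fd pre-shared with QEMU for this IRQ vector */
        int irq_enabled;
    };

    /* Hypothetical hand-off to the bdev layer (steps 3-4); not a real SPDK symbol. */
    extern void submit_to_bdev(struct virt_queue *vq, struct nvme_sqe *sqe);

    /* Run from SPDK's poller: notice the doorbell write and drain new submissions. */
    static void poll_virtual_sq(struct virt_queue *vq)
    {
        uint32_t tail = *vq->shadow_db;                 /* step 3: see the doorbell write */
        while (vq->last_tail != tail) {
            submit_to_bdev(vq, &vq->sq[vq->last_tail]); /* steps 3-4: translate to bdev I/O */
            vq->last_tail = (vq->last_tail + 1) % vq->size;
        }
    }

    /* Called when the backing bdev completes: post the CQE, then maybe interrupt. */
    static void complete_to_guest(struct virt_queue *vq, const struct nvme_cqe *cqe)
    {
        vq->cq[vq->cq_tail] = *cqe;                     /* step 5: CQE into guest memory */
        vq->cq_tail = (vq->cq_tail + 1) % vq->size;
        if (vq->irq_enabled) {
            uint64_t one = 1;
            /* step 6: kick the eventfd; QEMU/KVM injects the interrupt (step 7) */
            if (write(vq->irq_eventfd, &one, sizeof(one)) < 0) {
                /* best-effort kick; error handling omitted in this sketch */
            }
        }
    }

The point to notice is that both rings live in guest memory that SPDK has mapped, so only the 64-byte commands and 16-byte completions are ever touched by SPDK; the I/O payload is referenced in place.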

> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: Tuesday, March 8, 2022 7:46 AM
> To: spdk <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> 
> OK,thanks,
> I'm also curious about how the spdk interrupt the guestOS does the interrupt go
> through the qemu/kvm or does it go straight to the guestOS?
> 1.in Vhost-user the guestOS is in poll mode, so it should wait for an interrupt
> from the nic.Will the interrupt from the nic go through qemu/kvm first?
> 2.in vfio-user , since the guestOS has direct access to the "pcie address"(DRAM),
> can the spdk interrupt the guestOS directly? (a nvme interrupt)
> 
> 
> 
> ---- Original message ----
> | From | Thanos Makatos<thanos.makatos(a)nutanix.com> |
> | Date | 2022-03-08 22:19 |
> | To | Storage Performance Development Kit<spdk(a)lists.01.org> |
> | Cc | |
> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
> 
> > -----Original Message-----
> > From: zbhhbz <zbhhbz(a)yeah.net>
> > Sent: 08 March 2022 14:16
> > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > what i meat is that :
> > in vhost-user:
> > 1. the guestOS should put an (vhost) request in the virtqueue 2. then
> > the spdk polling discover this request 3. spdk should put an nvme
> > request  in the actual device and knock the door bell.
> > this is two data(not the actual data but the request struct itself).
> >
> >
> > in vfio-user:
> > 1. the guestOS put an (nvme) request struct in the DRAM region 2. the
> > spdk discovers this and then what ? still needs to inform the nvme
> > physical device  right?
> > this is still two data copy(DMA) in the manner of nvme request struct.
> 
> You're right, it does need to create a new request an put in the queue that can
> be seen by the physical controller. However, I believe the read and write payload
> doesn't require this additional step.
> 
> >
> >
> >
> >
> >
> >
> >
> >
> > At 2022-03-08 22:02:48, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
> > wrote:
> > >> -----Original Message-----
> > >> From: Liu, Xiaodong <xiaodong.liu(a)intel.com>
> > >> Sent: 08 March 2022 14:00
> > >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> > >>
> > >> No, Vhost-user will just do once data DMA from Guest DRAM region to
> > >> device buffer.
> > >
> > >Same for vfio-user, the guest has shared its memory to SPDK so the
> > >physical
> > device can access that memory directly.
> > >
> > >>
> > >> -----Original Message-----
> > >> From: zbhhbz <zbhhbz(a)yeah.net>
> > >> Sent: Tuesday, March 8, 2022 9:54 PM
> > >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> > >>
> > >>
> > >>
> > >>
> > >> ok, thanks,
> > >> one more follow up question:
> > >> isn't this leads to double DMA of data? one from guest to DRAM
> > >> region
> > and
> > >> one from DRAM region to device buffer.
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> At 2022-03-08 21:31:44, "Thanos Makatos"
> > <thanos.makatos(a)nutanix.com>
> > >> wrote:
> > >> >> -----Original Message-----
> > >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> > >> >> Sent: 08 March 2022 13:20
> > >> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > >> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-
> > blk
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >> thanks, this helps a lot.
> > >> >>
> > >> >>
> > >> >> about the vfio-user,
> > >> >>
> > >> >>
> > >> >> I understand that in the vfio-user, the guestOS can issue DMA
> > >> >> read/write to a "pcie space " of a virtual device, but I'm
> > >> >> confused:
> > >> >> 1. does the guestOS issue DMA read/write to region on actual
> > >> >> physical device or just a DRAM region?
> > >> >
> > >> >A DRAM region.
> > >> >
> > >> >>      if the guestOS directly access the physical
> > >> >> device(DMA,IOMMU), where does spdk stands?
> > >> >> 2. why do vfio-user need a socket, what kind of data does the
> > >> >> socket
> > >> carries?
> > >> >
> > >> >The vfio-user protocol allows a device to be emulated outside QEMU
> > (the
> > >> vfio-user client), in a separate process (the vfio-user server,
> > >> SPDK running the nvmf/vfio-user target in our case). The UNIX
> > >> domain socket is used between QEMU and SPDK for initial device
> > >> setup, virtual IRQs, and other infrequent operations.
> > >> >
> > >> >> 3. Does the vfio-user look like the vhost-user except for direct
> > >> >> DMA access instead of shared memory communication?
> > >> >
> > >> >vfio-user allows any kind of device to be emulated in a separate
> > >> >process
> > >> (even non-PCI), while vhost-user is mainly for VirtIO devices.
> > >> >
> > >> >>
> > >> >>
> > >> >> thanks
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >>
> > >> >> At 2022-03-08 17:40:12, "Thanos Makatos"
> > >> <thanos.makatos(a)nutanix.com>
> > >> >> wrote:
> > >> >> >> -----Original Message-----
> > >> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> > >> >> >> Sent: 08 March 2022 09:29
> > >> >> >> To: spdk <spdk(a)lists.01.org>
> > >> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and
> > >> >> >> vhost-blk
> > >> >> >>
> > >> >> >> thanks, follow up questions:
> > >> >> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme
> > >> >> >> ssd(bdev),
> > i
> > >> cant'
> > >> >> >> have access to the nvme feature in guest(shadow door bell).
> > >> >> >> is that
> > >> >> correct?
> > >> >> >
> > >> >> >Correct.
> > >> >> >
> > >> >> >> 2. in the vfio-user solution, does the interrupt sending from
> > >> >> >> the
> > >> >> ssd(nvme)
> > >> >> >> go through the qemu/kvm? or it go straight to the guest kernel?
> > >> >> >
> > >> >> >It doesn't go to QEMU/KVM. It depends on how you've set it up
> > >> >> >in
> > >> >> >SPDK: it
> > >> >> can either go the host kernel or to SPDK.
> > >> >> >
> > >> >> >> 3. when will the vfio-user be available? does kvm have same
> > >> >> >> delima
> > >> here?
> > >> >> >
> > >> >> >vfio-user is under review in QEMU, so we can't predict when it
> > >> >> >will be
> > >> >> accepted upstream. This doesn't mean you can't use it though,
> > >> >> have a look
> > >> >> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
> > >> >> >
> > >> >> >>
> > >> >> >>
> > >> >> >> thank you very much!
> > >> >> >>
> > >> >> >>
> > >> >> >>
> > >> >> >>
> > >> >> >> ---- 回复的原邮件 ----
> > >> >> >> | 发件人 | Liu, Changpeng<changpeng.liu(a)intel.com> |
> > >> >> >> | 日期 | 2022年03月08日 16:33 |
> > >> >> >> | 收件人 | Storage Performance Development
> > Kit<spdk(a)lists.01.org>
> > >> |
> > >> >> >> | 抄送至 | |
> > >> >> >> | 主题 | [SPDK] Re: The difference between vhost-nvme and
> > vhost-
> > >> blk
> > >> >> >> | |
> > >> >> >> Previously SPDK extended QEMU with a separate driver to
> > >> >> >> enable
> > >> >> >> vhost- nvme, this driver was not accepted by QEMU, so now we
> > >> >> >> support emulated NVMe with a
> > >> >> new
> > >> >> >> solution
> > >> >> >> "vfio-user", again the driver for supporting this is still
> > >> >> >> under code review
> > >> >> of
> > >> >> >> QEMU community,
> > >> >> >> but SPDK already supports this.
> > >> >> >>
> > >> >> >> > -----Original Message-----
> > >> >> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> > >> >> >> > Sent: Tuesday, March 8, 2022 4:29 PM
> > >> >> >> > To: spdk(a)lists.01.org
> > >> >> >> > Subject: [SPDK] The difference between vhost-nvme and
> > >> >> >> > vhost-
> > blk
> > >> >> >> >
> > >> >> >> > Could someone help me understand the difference between
> > vhost-
> > >> >> nvme
> > >> >> >> and
> > >> >> >> > vhost-blk?
> > >> >> >> > The online doc only shows there is vhost-user-blk-pci, why
> > >> >> >> > not
> > >> >> >> > vhost-
> > >> >> user-
> > >> >> >> nvme-
> > >> >> >> > pci?
> > >> >> >> > There is little doc fined in github/spdk and the qemu
> > >> >> >> > itself doesn't help
> > >> >> >> either
> > >> >> >> > Thanks!
> > >> >> >> > _______________________________________________
> > >> >> >> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send
> > >> >> >> > an email to spdk-leave(a)lists.01.org
> > >> >> >> _______________________________________________
> > >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> >> email to spdk-leave(a)lists.01.org
> > >> >> >> _______________________________________________
> > >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> >> email to spdk-leave(a)lists.01.org
> > >> >> >_______________________________________________
> > >> >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> >email to spdk-leave(a)lists.01.org
> > >> >> _______________________________________________
> > >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >> email to spdk-leave(a)lists.01.org
> > >> >_______________________________________________
> > >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> > >> >email to spdk-leave(a)lists.01.org
> > >> _______________________________________________
> > >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> > >> to spdk-leave(a)lists.01.org
> > >> _______________________________________________
> > >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> > >> to spdk-leave(a)lists.01.org
> > >_______________________________________________
> > >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> > >to spdk-leave(a)lists.01.org
> > _______________________________________________
> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email to
> > spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 14:46 zbhhbz
  0 siblings, 0 replies; 13+ messages in thread
From: zbhhbz @ 2022-03-08 14:46 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9971 bytes --]

OK, thanks.
I'm also curious about how SPDK interrupts the guest OS:
does the interrupt go through QEMU/KVM,
or does it go straight to the guest OS?
1. In vhost-user the guest OS is in poll mode,
so it should wait for an interrupt from the NIC. Will the interrupt from the NIC go through QEMU/KVM first?
2. In vfio-user, since the guest OS has direct access to the "PCIe address" (DRAM), can SPDK interrupt the guest OS directly? (an NVMe interrupt)



---- Original message ----
| From | Thanos Makatos<thanos.makatos(a)nutanix.com> |
| Date | 2022-03-08 22:19 |
| To | Storage Performance Development Kit<spdk(a)lists.01.org> |
| Cc | |
| Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |

> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: 08 March 2022 14:16
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>
>
>
>
>
>
>
>
>
>
> what i meat is that :
> in vhost-user:
> 1. the guestOS should put an (vhost) request in the virtqueue
> 2. then the spdk polling discover this request
> 3. spdk should put an nvme request  in the actual device and knock the door
> bell.
> this is two data(not the actual data but the request struct itself).
>
>
> in vfio-user:
> 1. the guestOS put an (nvme) request struct in the DRAM region
> 2. the spdk discovers this and then what ? still needs to inform the nvme
> physical device  right?
> this is still two data copy(DMA) in the manner of nvme request struct.

You're right, it does need to create a new request and put it in a queue that can be seen by the physical controller. However, I believe the read and write payloads don't require this additional step.

>
>
>
>
>
>
>
>
> At 2022-03-08 22:02:48, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
> wrote:
> >> -----Original Message-----
> >> From: Liu, Xiaodong <xiaodong.liu(a)intel.com>
> >> Sent: 08 March 2022 14:00
> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >> No, Vhost-user will just do once data DMA from Guest DRAM region to
> >> device buffer.
> >
> >Same for vfio-user, the guest has shared its memory to SPDK so the physical
> device can access that memory directly.
> >
> >>
> >> -----Original Message-----
> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> Sent: Tuesday, March 8, 2022 9:54 PM
> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >>
> >>
> >>
> >> ok, thanks,
> >> one more follow up question:
> >> isn't this leads to double DMA of data? one from guest to DRAM region
> and
> >> one from DRAM region to device buffer.
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> At 2022-03-08 21:31:44, "Thanos Makatos"
> <thanos.makatos(a)nutanix.com>
> >> wrote:
> >> >> -----Original Message-----
> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> >> Sent: 08 March 2022 13:20
> >> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-
> blk
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> thanks, this helps a lot.
> >> >>
> >> >>
> >> >> about the vfio-user,
> >> >>
> >> >>
> >> >> I understand that in the vfio-user,
> >> >> the guestOS can issue DMA read/write to a "pcie space " of a virtual
> >> >> device, but I'm confused:
> >> >> 1. does the guestOS issue DMA read/write to region on actual physical
> >> >> device or just a DRAM region?
> >> >
> >> >A DRAM region.
> >> >
> >> >>      if the guestOS directly access the physical device(DMA,IOMMU),
> >> >> where does spdk stands?
> >> >> 2. why do vfio-user need a socket, what kind of data does the socket
> >> carries?
> >> >
> >> >The vfio-user protocol allows a device to be emulated outside QEMU
> (the
> >> vfio-user client), in a separate process (the vfio-user server, SPDK running
> >> the nvmf/vfio-user target in our case). The UNIX domain socket is used
> >> between QEMU and SPDK for initial device setup, virtual IRQs, and other
> >> infrequent operations.
> >> >
> >> >> 3. Does the vfio-user look like the vhost-user except for direct DMA
> >> >> access instead of shared memory communication?
> >> >
> >> >vfio-user allows any kind of device to be emulated in a separate process
> >> (even non-PCI), while vhost-user is mainly for VirtIO devices.
> >> >
> >> >>
> >> >>
> >> >> thanks
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> At 2022-03-08 17:40:12, "Thanos Makatos"
> >> <thanos.makatos(a)nutanix.com>
> >> >> wrote:
> >> >> >> -----Original Message-----
> >> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> >> >> Sent: 08 March 2022 09:29
> >> >> >> To: spdk <spdk(a)lists.01.org>
> >> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and
> >> >> >> vhost-blk
> >> >> >>
> >> >> >> thanks, follow up questions:
> >> >> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev),
> i
> >> cant'
> >> >> >> have access to the nvme feature in guest(shadow door bell). is
> >> >> >> that
> >> >> correct?
> >> >> >
> >> >> >Correct.
> >> >> >
> >> >> >> 2. in the vfio-user solution, does the interrupt sending from the
> >> >> ssd(nvme)
> >> >> >> go through the qemu/kvm? or it go straight to the guest kernel?
> >> >> >
> >> >> >It doesn't go to QEMU/KVM. It depends on how you've set it up in
> >> >> >SPDK: it
> >> >> can either go the host kernel or to SPDK.
> >> >> >
> >> >> >> 3. when will the vfio-user be available? does kvm have same delima
> >> here?
> >> >> >
> >> >> >vfio-user is under review in QEMU, so we can't predict when it will
> >> >> >be
> >> >> accepted upstream. This doesn't mean you can't use it though, have a
> >> >> look
> >> >> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
> >> >> >
> >> >> >>
> >> >> >>
> >> >> >> thank you very much!
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> ---- 回复的原邮件 ----
> >> >> >> | 发件人 | Liu, Changpeng<changpeng.liu(a)intel.com> |
> >> >> >> | 日期 | 2022年03月08日 16:33 |
> >> >> >> | 收件人 | Storage Performance Development
> Kit<spdk(a)lists.01.org>
> >> |
> >> >> >> | 抄送至 | |
> >> >> >> | 主题 | [SPDK] Re: The difference between vhost-nvme and
> vhost-
> >> blk
> >> >> >> | |
> >> >> >> Previously SPDK extended QEMU with a separate driver to enable
> >> >> >> vhost- nvme, this driver was not accepted by QEMU, so now we
> >> >> >> support emulated NVMe with a
> >> >> new
> >> >> >> solution
> >> >> >> "vfio-user", again the driver for supporting this is still under
> >> >> >> code review
> >> >> of
> >> >> >> QEMU community,
> >> >> >> but SPDK already supports this.
> >> >> >>
> >> >> >> > -----Original Message-----
> >> >> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> >> >> >> > Sent: Tuesday, March 8, 2022 4:29 PM
> >> >> >> > To: spdk(a)lists.01.org
> >> >> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-
> blk
> >> >> >> >
> >> >> >> > Could someone help me understand the difference between
> vhost-
> >> >> nvme
> >> >> >> and
> >> >> >> > vhost-blk?
> >> >> >> > The online doc only shows there is vhost-user-blk-pci, why not
> >> >> >> > vhost-
> >> >> user-
> >> >> >> nvme-
> >> >> >> > pci?
> >> >> >> > There is little doc fined in github/spdk and the qemu itself
> >> >> >> > doesn't help
> >> >> >> either
> >> >> >> > Thanks!
> >> >> >> > _______________________________________________
> >> >> >> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> >> > email to spdk-leave(a)lists.01.org
> >> >> >> _______________________________________________
> >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> >> email to spdk-leave(a)lists.01.org
> >> >> >> _______________________________________________
> >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> >> email to spdk-leave(a)lists.01.org
> >> >> >_______________________________________________
> >> >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> >> >> >to spdk-leave(a)lists.01.org
> >> >> _______________________________________________
> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> >> >> to spdk-leave(a)lists.01.org
> >> >_______________________________________________
> >> >SPDK mailing list -- spdk(a)lists.01.org
> >> >To unsubscribe send an email to spdk-leave(a)lists.01.org
> >> _______________________________________________
> >> SPDK mailing list -- spdk(a)lists.01.org
> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
> >> _______________________________________________
> >> SPDK mailing list -- spdk(a)lists.01.org
> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
> >_______________________________________________
> >SPDK mailing list -- spdk(a)lists.01.org
> >To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 14:19 Thanos Makatos
  0 siblings, 0 replies; 13+ messages in thread
From: Thanos Makatos @ 2022-03-08 14:19 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9083 bytes --]


> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: 08 March 2022 14:16
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> what i meat is that :
> in vhost-user:
> 1. the guestOS should put an (vhost) request in the virtqueue
> 2. then the spdk polling discover this request
> 3. spdk should put an nvme request  in the actual device and knock the door
> bell.
> this is two data(not the actual data but the request struct itself).
> 
> 
> in vfio-user:
> 1. the guestOS put an (nvme) request struct in the DRAM region
> 2. the spdk discovers this and then what ? still needs to inform the nvme
> physical device  right?
> this is still two data copy(DMA) in the manner of nvme request struct.

You're right, it does need to create a new request and put it in a queue that can be seen by the physical controller. However, I believe the read and write payloads don't require this additional step.
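
To illustrate that distinction (with hypothetical names and a heavily simplified command layout, not SPDK's real structures): the command itself is re-built for the physical controller's queue, but the buffer address it carries still points at guest memory that is already mapped, so the payload itself is never copied.

    /* Illustrative-only sketch; not SPDK code. */
    #include <stdint.h>

    struct nvme_cmd {                /* heavily simplified; a real SQE is 64 bytes */
        uint8_t  opcode;
        uint64_t prp1;               /* address of the data buffer */
        uint64_t slba;
        uint16_t nlb;
    };

    /* Hypothetical translation, set up once at start-up when guest memory is mapped. */
    extern void *guest_addr_to_local(uint64_t guest_addr);

    static void forward_request(const struct nvme_cmd *guest_sqe, struct nvme_cmd *phys_sqe)
    {
        *phys_sqe = *guest_sqe;      /* the second copy of the request struct ...        */

        /* ... but the payload stays put: resolve the guest address once and let the
           physical device DMA straight to/from that same buffer.                        */
        void *payload = guest_addr_to_local(guest_sqe->prp1);
        (void)payload;               /* handed to the backing device, never memcpy'd     */
    }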

> 
> 
> 
> 
> 
> 
> 
> 
> At 2022-03-08 22:02:48, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
> wrote:
> >> -----Original Message-----
> >> From: Liu, Xiaodong <xiaodong.liu(a)intel.com>
> >> Sent: 08 March 2022 14:00
> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >> No, Vhost-user will just do once data DMA from Guest DRAM region to
> >> device buffer.
> >
> >Same for vfio-user, the guest has shared its memory to SPDK so the physical
> device can access that memory directly.
> >
> >>
> >> -----Original Message-----
> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> Sent: Tuesday, March 8, 2022 9:54 PM
> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >>
> >>
> >>
> >> ok, thanks,
> >> one more follow up question:
> >> isn't this leads to double DMA of data? one from guest to DRAM region
> and
> >> one from DRAM region to device buffer.
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> At 2022-03-08 21:31:44, "Thanos Makatos"
> <thanos.makatos(a)nutanix.com>
> >> wrote:
> >> >> -----Original Message-----
> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> >> Sent: 08 March 2022 13:20
> >> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-
> blk
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> thanks, this helps a lot.
> >> >>
> >> >>
> >> >> about the vfio-user,
> >> >>
> >> >>
> >> >> I understand that in the vfio-user,
> >> >> the guestOS can issue DMA read/write to a "pcie space " of a virtual
> >> >> device, but I'm confused:
> >> >> 1. does the guestOS issue DMA read/write to region on actual physical
> >> >> device or just a DRAM region?
> >> >
> >> >A DRAM region.
> >> >
> >> >>      if the guestOS directly access the physical device(DMA,IOMMU),
> >> >> where does spdk stands?
> >> >> 2. why do vfio-user need a socket, what kind of data does the socket
> >> carries?
> >> >
> >> >The vfio-user protocol allows a device to be emulated outside QEMU
> (the
> >> vfio-user client), in a separate process (the vfio-user server, SPDK running
> >> the nvmf/vfio-user target in our case). The UNIX domain socket is used
> >> between QEMU and SPDK for initial device setup, virtual IRQs, and other
> >> infrequent operations.
> >> >
> >> >> 3. Does the vfio-user look like the vhost-user except for direct DMA
> >> >> access instead of shared memory communication?
> >> >
> >> >vfio-user allows any kind of device to be emulated in a separate process
> >> (even non-PCI), while vhost-user is mainly for VirtIO devices.
> >> >
> >> >>
> >> >>
> >> >> thanks
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> At 2022-03-08 17:40:12, "Thanos Makatos"
> >> <thanos.makatos(a)nutanix.com>
> >> >> wrote:
> >> >> >> -----Original Message-----
> >> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> >> >> Sent: 08 March 2022 09:29
> >> >> >> To: spdk <spdk(a)lists.01.org>
> >> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and
> >> >> >> vhost-blk
> >> >> >>
> >> >> >> thanks, follow up questions:
> >> >> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev),
> i
> >> cant'
> >> >> >> have access to the nvme feature in guest(shadow door bell). is
> >> >> >> that
> >> >> correct?
> >> >> >
> >> >> >Correct.
> >> >> >
> >> >> >> 2. in the vfio-user solution, does the interrupt sending from the
> >> >> ssd(nvme)
> >> >> >> go through the qemu/kvm? or it go straight to the guest kernel?
> >> >> >
> >> >> >It doesn't go to QEMU/KVM. It depends on how you've set it up in
> >> >> >SPDK: it
> >> >> can either go the host kernel or to SPDK.
> >> >> >
> >> >> >> 3. when will the vfio-user be available? does kvm have same delima
> >> here?
> >> >> >
> >> >> >vfio-user is under review in QEMU, so we can't predict when it will
> >> >> >be
> >> >> accepted upstream. This doesn't mean you can't use it though, have a
> >> >> look
> >> >> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
> >> >> >
> >> >> >>
> >> >> >>
> >> >> >> thank you very much!
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> ---- 回复的原邮件 ----
> >> >> >> | 发件人 | Liu, Changpeng<changpeng.liu(a)intel.com> |
> >> >> >> | 日期 | 2022年03月08日 16:33 |
> >> >> >> | 收件人 | Storage Performance Development
> Kit<spdk(a)lists.01.org>
> >> |
> >> >> >> | 抄送至 | |
> >> >> >> | 主题 | [SPDK] Re: The difference between vhost-nvme and
> vhost-
> >> blk
> >> >> >> | |
> >> >> >> Previously SPDK extended QEMU with a separate driver to enable
> >> >> >> vhost- nvme, this driver was not accepted by QEMU, so now we
> >> >> >> support emulated NVMe with a
> >> >> new
> >> >> >> solution
> >> >> >> "vfio-user", again the driver for supporting this is still under
> >> >> >> code review
> >> >> of
> >> >> >> QEMU community,
> >> >> >> but SPDK already supports this.
> >> >> >>
> >> >> >> > -----Original Message-----
> >> >> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> >> >> >> > Sent: Tuesday, March 8, 2022 4:29 PM
> >> >> >> > To: spdk(a)lists.01.org
> >> >> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-
> blk
> >> >> >> >
> >> >> >> > Could someone help me understand the difference between
> vhost-
> >> >> nvme
> >> >> >> and
> >> >> >> > vhost-blk?
> >> >> >> > The online doc only shows there is vhost-user-blk-pci, why not
> >> >> >> > vhost-
> >> >> user-
> >> >> >> nvme-
> >> >> >> > pci?
> >> >> >> > There is little doc fined in github/spdk and the qemu itself
> >> >> >> > doesn't help
> >> >> >> either
> >> >> >> > Thanks!
> >> >> >> > _______________________________________________
> >> >> >> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> >> > email to spdk-leave(a)lists.01.org
> >> >> >> _______________________________________________
> >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> >> email to spdk-leave(a)lists.01.org
> >> >> >> _______________________________________________
> >> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> >> email to spdk-leave(a)lists.01.org
> >> >> >_______________________________________________
> >> >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> >> >> >to spdk-leave(a)lists.01.org
> >> >> _______________________________________________
> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> >> >> to spdk-leave(a)lists.01.org
> >> >_______________________________________________
> >> >SPDK mailing list -- spdk(a)lists.01.org
> >> >To unsubscribe send an email to spdk-leave(a)lists.01.org
> >> _______________________________________________
> >> SPDK mailing list -- spdk(a)lists.01.org
> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
> >> _______________________________________________
> >> SPDK mailing list -- spdk(a)lists.01.org
> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
> >_______________________________________________
> >SPDK mailing list -- spdk(a)lists.01.org
> >To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 14:15 zbhhbz
  0 siblings, 0 replies; 13+ messages in thread
From: zbhhbz @ 2022-03-08 14:15 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 7976 bytes --]










What I meant is:
in vhost-user:
1. the guest OS puts a (vhost) request in the virtqueue
2. then SPDK's polling discovers this request
3. SPDK puts an NVMe request in the actual device's queue and knocks the doorbell.
That is two copies (not of the actual data, but of the request struct itself).


in vfio-user:
1. the guest OS puts an (NVMe) request struct in the DRAM region
2. SPDK discovers this, and then what? It still needs to inform the physical NVMe device, right?
So this is still two copies (DMA) of the NVMe request struct.








At 2022-03-08 22:02:48, "Thanos Makatos" <thanos.makatos(a)nutanix.com> wrote:
>> -----Original Message-----
>> From: Liu, Xiaodong <xiaodong.liu(a)intel.com>
>> Sent: 08 March 2022 14:00
>> To: Storage Performance Development Kit <spdk(a)lists.01.org>
>> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>> 
>> No, Vhost-user will just do once data DMA from Guest DRAM region to
>> device buffer.
>
>Same for vfio-user, the guest has shared its memory to SPDK so the physical device can access that memory directly.
>
>> 
>> -----Original Message-----
>> From: zbhhbz <zbhhbz(a)yeah.net>
>> Sent: Tuesday, March 8, 2022 9:54 PM
>> To: Storage Performance Development Kit <spdk(a)lists.01.org>
>> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>> 
>> 
>> 
>> 
>> ok, thanks,
>> one more follow up question:
>> isn't this leads to double DMA of data? one from guest to DRAM region and
>> one from DRAM region to device buffer.
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> At 2022-03-08 21:31:44, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
>> wrote:
>> >> -----Original Message-----
>> >> From: zbhhbz <zbhhbz(a)yeah.net>
>> >> Sent: 08 March 2022 13:20
>> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
>> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>> >>
>> >>
>> >>
>> >>
>> >> thanks, this helps a lot.
>> >>
>> >>
>> >> about the vfio-user,
>> >>
>> >>
>> >> I understand that in the vfio-user,
>> >> the guestOS can issue DMA read/write to a "pcie space " of a virtual
>> >> device, but I'm confused:
>> >> 1. does the guestOS issue DMA read/write to region on actual physical
>> >> device or just a DRAM region?
>> >
>> >A DRAM region.
>> >
>> >>      if the guestOS directly access the physical device(DMA,IOMMU),
>> >> where does spdk stands?
>> >> 2. why do vfio-user need a socket, what kind of data does the socket
>> carries?
>> >
>> >The vfio-user protocol allows a device to be emulated outside QEMU (the
>> vfio-user client), in a separate process (the vfio-user server, SPDK running
>> the nvmf/vfio-user target in our case). The UNIX domain socket is used
>> between QEMU and SPDK for initial device setup, virtual IRQs, and other
>> infrequent operations.
>> >
>> >> 3. Does the vfio-user look like the vhost-user except for direct DMA
>> >> access instead of shared memory communication?
>> >
>> >vfio-user allows any kind of device to be emulated in a separate process
>> (even non-PCI), while vhost-user is mainly for VirtIO devices.
>> >
>> >>
>> >>
>> >> thanks
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> At 2022-03-08 17:40:12, "Thanos Makatos"
>> <thanos.makatos(a)nutanix.com>
>> >> wrote:
>> >> >> -----Original Message-----
>> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
>> >> >> Sent: 08 March 2022 09:29
>> >> >> To: spdk <spdk(a)lists.01.org>
>> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and
>> >> >> vhost-blk
>> >> >>
>> >> >> thanks, follow up questions:
>> >> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev), i
>> cant'
>> >> >> have access to the nvme feature in guest(shadow door bell). is
>> >> >> that
>> >> correct?
>> >> >
>> >> >Correct.
>> >> >
>> >> >> 2. in the vfio-user solution, does the interrupt sending from the
>> >> ssd(nvme)
>> >> >> go through the qemu/kvm? or it go straight to the guest kernel?
>> >> >
>> >> >It doesn't go to QEMU/KVM. It depends on how you've set it up in
>> >> >SPDK: it
>> >> can either go the host kernel or to SPDK.
>> >> >
>> >> >> 3. when will the vfio-user be available? does kvm have same delima
>> here?
>> >> >
>> >> >vfio-user is under review in QEMU, so we can't predict when it will
>> >> >be
>> >> accepted upstream. This doesn't mean you can't use it though, have a
>> >> look
>> >> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
>> >> >
>> >> >>
>> >> >>
>> >> >> thank you very much!
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> ---- 回复的原邮件 ----
>> >> >> | 发件人 | Liu, Changpeng<changpeng.liu(a)intel.com> |
>> >> >> | 日期 | 2022年03月08日 16:33 |
>> >> >> | 收件人 | Storage Performance Development Kit<spdk(a)lists.01.org>
>> |
>> >> >> | 抄送至 | |
>> >> >> | 主题 | [SPDK] Re: The difference between vhost-nvme and vhost-
>> blk
>> >> >> | |
>> >> >> Previously SPDK extended QEMU with a separate driver to enable
>> >> >> vhost- nvme, this driver was not accepted by QEMU, so now we
>> >> >> support emulated NVMe with a
>> >> new
>> >> >> solution
>> >> >> "vfio-user", again the driver for supporting this is still under
>> >> >> code review
>> >> of
>> >> >> QEMU community,
>> >> >> but SPDK already supports this.
>> >> >>
>> >> >> > -----Original Message-----
>> >> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
>> >> >> > Sent: Tuesday, March 8, 2022 4:29 PM
>> >> >> > To: spdk(a)lists.01.org
>> >> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
>> >> >> >
>> >> >> > Could someone help me understand the difference between vhost-
>> >> nvme
>> >> >> and
>> >> >> > vhost-blk?
>> >> >> > The online doc only shows there is vhost-user-blk-pci, why not
>> >> >> > vhost-
>> >> user-
>> >> >> nvme-
>> >> >> > pci?
>> >> >> > There is little doc fined in github/spdk and the qemu itself
>> >> >> > doesn't help
>> >> >> either
>> >> >> > Thanks!
>> >> >> > _______________________________________________
>> >> >> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
>> >> >> > email to spdk-leave(a)lists.01.org
>> >> >> _______________________________________________
>> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
>> >> >> email to spdk-leave(a)lists.01.org
>> >> >> _______________________________________________
>> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
>> >> >> email to spdk-leave(a)lists.01.org
>> >> >_______________________________________________
>> >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
>> >> >to spdk-leave(a)lists.01.org
>> >> _______________________________________________
>> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
>> >> to spdk-leave(a)lists.01.org
>> >_______________________________________________
>> >SPDK mailing list -- spdk(a)lists.01.org
>> >To unsubscribe send an email to spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org
>> To unsubscribe send an email to spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org
>> To unsubscribe send an email to spdk-leave(a)lists.01.org
>_______________________________________________
>SPDK mailing list -- spdk(a)lists.01.org
>To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 14:02 Thanos Makatos
  0 siblings, 0 replies; 13+ messages in thread
From: Thanos Makatos @ 2022-03-08 14:02 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6941 bytes --]

> -----Original Message-----
> From: Liu, Xiaodong <xiaodong.liu(a)intel.com>
> Sent: 08 March 2022 14:00
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> 
> No, vhost-user does just one data DMA, from the guest DRAM region to the
> device buffer.

The same holds for vfio-user: the guest has shared its memory with SPDK, so the physical device can access that memory directly.
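
For reference, here is a minimal sketch of the QEMU memory setup that makes this single-DMA path possible (an illustrative example, not from the original mail; the size and hugepage path are assumptions):

    qemu-system-x86_64 -m 4G \
        -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        ...

Because the guest RAM is a shared, file-backed mapping (share=on), SPDK can mmap() the very same pages and hand their addresses to the physical NVMe controller, so the data moves with one DMA and no staging copy.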

> 
> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: Tuesday, March 8, 2022 9:54 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> 
> 
> 
> 
> ok, thanks,
> one more follow up question:
> isn't this leads to double DMA of data? one from guest to DRAM region and
> one from DRAM region to device buffer.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At 2022-03-08 21:31:44, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
> wrote:
> >> -----Original Message-----
> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> Sent: 08 March 2022 13:20
> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >>
> >>
> >>
> >> thanks, this helps a lot.
> >>
> >>
> >> about the vfio-user,
> >>
> >>
> >> I understand that in the vfio-user,
> >> the guestOS can issue DMA read/write to a "pcie space " of a virtual
> >> device, but I'm confused:
> >> 1. does the guestOS issue DMA read/write to region on actual physical
> >> device or just a DRAM region?
> >
> >A DRAM region.
> >
> >>      if the guestOS directly access the physical device(DMA,IOMMU),
> >> where does spdk stands?
> >> 2. why do vfio-user need a socket, what kind of data does the socket
> carries?
> >
> >The vfio-user protocol allows a device to be emulated outside QEMU (the
> vfio-user client), in a separate process (the vfio-user server, SPDK running
> the nvmf/vfio-user target in our case). The UNIX domain socket is used
> between QEMU and SPDK for initial device setup, virtual IRQs, and other
> infrequent operations.
> >
> >> 3. Does the vfio-user look like the vhost-user except for direct DMA
> >> access instead of shared memory communication?
> >
> >vfio-user allows any kind of device to be emulated in a separate process
> (even non-PCI), while vhost-user is mainly for VirtIO devices.
> >
> >>
> >>
> >> thanks
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> At 2022-03-08 17:40:12, "Thanos Makatos"
> <thanos.makatos(a)nutanix.com>
> >> wrote:
> >> >> -----Original Message-----
> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> >> Sent: 08 March 2022 09:29
> >> >> To: spdk <spdk(a)lists.01.org>
> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and
> >> >> vhost-blk
> >> >>
> >> >> thanks, follow up questions:
> >> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev), i
> cant'
> >> >> have access to the nvme feature in guest(shadow door bell). is
> >> >> that
> >> correct?
> >> >
> >> >Correct.
> >> >
> >> >> 2. in the vfio-user solution, does the interrupt sending from the
> >> ssd(nvme)
> >> >> go through the qemu/kvm? or it go straight to the guest kernel?
> >> >
> >> >It doesn't go to QEMU/KVM. It depends on how you've set it up in
> >> >SPDK: it
> >> can either go the host kernel or to SPDK.
> >> >
> >> >> 3. when will the vfio-user be available? does kvm have same delima
> here?
> >> >
> >> >vfio-user is under review in QEMU, so we can't predict when it will
> >> >be
> >> accepted upstream. This doesn't mean you can't use it though, have a
> >> look
> >> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
> >> >
> >> >>
> >> >>
> >> >> thank you very much!
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> ---- Original Message ----
> >> >> | From | Liu, Changpeng<changpeng.liu(a)intel.com> |
> >> >> | Date | 2022-03-08 16:33 |
> >> >> | To | Storage Performance Development Kit<spdk(a)lists.01.org> |
> >> >> | Cc | |
> >> >> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
> >> >> | |
> >> >> Previously SPDK extended QEMU with a separate driver to enable
> >> >> vhost- nvme, this driver was not accepted by QEMU, so now we
> >> >> support emulated NVMe with a
> >> new
> >> >> solution
> >> >> "vfio-user", again the driver for supporting this is still under
> >> >> code review
> >> of
> >> >> QEMU community,
> >> >> but SPDK already supports this.
> >> >>
> >> >> > -----Original Message-----
> >> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> >> >> > Sent: Tuesday, March 8, 2022 4:29 PM
> >> >> > To: spdk(a)lists.01.org
> >> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
> >> >> >
> >> >> > Could someone help me understand the difference between vhost-
> >> nvme
> >> >> and
> >> >> > vhost-blk?
> >> >> > The online doc only shows there is vhost-user-blk-pci, why not
> >> >> > vhost-
> >> user-
> >> >> nvme-
> >> >> > pci?
> >> >> > There is little doc fined in github/spdk and the qemu itself
> >> >> > doesn't help
> >> >> either
> >> >> > Thanks!
> >> >> > _______________________________________________
> >> >> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> > email to spdk-leave(a)lists.01.org
> >> >> _______________________________________________
> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> email to spdk-leave(a)lists.01.org
> >> >> _______________________________________________
> >> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an
> >> >> email to spdk-leave(a)lists.01.org
> >> >_______________________________________________
> >> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> >> >to spdk-leave(a)lists.01.org
> >> _______________________________________________
> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> >> to spdk-leave(a)lists.01.org
> >_______________________________________________
> >SPDK mailing list -- spdk(a)lists.01.org
> >To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 14:00 Liu, Xiaodong
  0 siblings, 0 replies; 13+ messages in thread
From: Liu, Xiaodong @ 2022-03-08 14:00 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6053 bytes --]

No, vhost-user does just one data DMA, from the guest DRAM region to the device buffer.

-----Original Message-----
From: zbhhbz <zbhhbz(a)yeah.net> 
Sent: Tuesday, March 8, 2022 9:54 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk




OK, thanks.
One more follow-up question:
Doesn't this lead to a double DMA of the data? One from the guest to the DRAM region and one from the DRAM region to the device buffer?











At 2022-03-08 21:31:44, "Thanos Makatos" <thanos.makatos(a)nutanix.com> wrote:
>> -----Original Message-----
>> From: zbhhbz <zbhhbz(a)yeah.net>
>> Sent: 08 March 2022 13:20
>> To: Storage Performance Development Kit <spdk(a)lists.01.org>
>> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>> 
>> 
>> 
>> 
>> thanks, this helps a lot.
>> 
>> 
>> about the vfio-user,
>> 
>> 
>> I understand that in the vfio-user,
>> the guestOS can issue DMA read/write to a "pcie space " of a virtual 
>> device, but I'm confused:
>> 1. does the guestOS issue DMA read/write to region on actual physical 
>> device or just a DRAM region?
>
>A DRAM region.
>
>>      if the guestOS directly access the physical device(DMA,IOMMU), 
>> where does spdk stands?
>> 2. why do vfio-user need a socket, what kind of data does the socket carries?
>
>The vfio-user protocol allows a device to be emulated outside QEMU (the vfio-user client), in a separate process (the vfio-user server, SPDK running the nvmf/vfio-user target in our case). The UNIX domain socket is used between QEMU and SPDK for initial device setup, virtual IRQs, and other infrequent operations.
>
>> 3. Does the vfio-user look like the vhost-user except for direct DMA 
>> access instead of shared memory communication?
>
>vfio-user allows any kind of device to be emulated in a separate process (even non-PCI), while vhost-user is mainly for VirtIO devices.
>
>> 
>> 
>> thanks
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> At 2022-03-08 17:40:12, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
>> wrote:
>> >> -----Original Message-----
>> >> From: zbhhbz <zbhhbz(a)yeah.net>
>> >> Sent: 08 March 2022 09:29
>> >> To: spdk <spdk(a)lists.01.org>
>> >> Subject: [SPDK] Re: The difference between vhost-nvme and 
>> >> vhost-blk
>> >>
>> >> thanks, follow up questions:
>> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev), i cant'
>> >> have access to the nvme feature in guest(shadow door bell). is 
>> >> that
>> correct?
>> >
>> >Correct.
>> >
>> >> 2. in the vfio-user solution, does the interrupt sending from the
>> ssd(nvme)
>> >> go through the qemu/kvm? or it go straight to the guest kernel?
>> >
>> >It doesn't go to QEMU/KVM. It depends on how you've set it up in 
>> >SPDK: it
>> can either go the host kernel or to SPDK.
>> >
>> >> 3. when will the vfio-user be available? does kvm have same delima here?
>> >
>> >vfio-user is under review in QEMU, so we can't predict when it will 
>> >be
>> accepted upstream. This doesn't mean you can't use it though, have a 
>> look
>> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
>> >
>> >>
>> >>
>> >> thank you very much!
>> >>
>> >>
>> >>
>> >>
>> >> ---- Original Message ----
>> >> | From | Liu, Changpeng<changpeng.liu(a)intel.com> |
>> >> | Date | 2022-03-08 16:33 |
>> >> | To | Storage Performance Development Kit<spdk(a)lists.01.org> |
>> >> | Cc | |
>> >> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
>> >> | |
>> >> Previously SPDK extended QEMU with a separate driver to enable 
>> >> vhost- nvme, this driver was not accepted by QEMU, so now we 
>> >> support emulated NVMe with a
>> new
>> >> solution
>> >> "vfio-user", again the driver for supporting this is still under 
>> >> code review
>> of
>> >> QEMU community,
>> >> but SPDK already supports this.
>> >>
>> >> > -----Original Message-----
>> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
>> >> > Sent: Tuesday, March 8, 2022 4:29 PM
>> >> > To: spdk(a)lists.01.org
>> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
>> >> >
>> >> > Could someone help me understand the difference between vhost-
>> nvme
>> >> and
>> >> > vhost-blk?
>> >> > The online doc only shows there is vhost-user-blk-pci, why not 
>> >> > vhost-
>> user-
>> >> nvme-
>> >> > pci?
>> >> > There is little doc fined in github/spdk and the qemu itself 
>> >> > doesn't help
>> >> either
>> >> > Thanks!
>> >> > _______________________________________________
>> >> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an 
>> >> > email to spdk-leave(a)lists.01.org
>> >> _______________________________________________
>> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an 
>> >> email to spdk-leave(a)lists.01.org 
>> >> _______________________________________________
>> >> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an 
>> >> email to spdk-leave(a)lists.01.org
>> >_______________________________________________
>> >SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email 
>> >to spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email 
>> to spdk-leave(a)lists.01.org
>_______________________________________________
>SPDK mailing list -- spdk(a)lists.01.org
>To unsubscribe send an email to spdk-leave(a)lists.01.org
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 13:31 Thanos Makatos
  0 siblings, 0 replies; 13+ messages in thread
From: Thanos Makatos @ 2022-03-08 13:31 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4941 bytes --]

> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: 08 March 2022 13:20
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> 
> 
> 
> 
> thanks, this helps a lot.
> 
> 
> about the vfio-user,
> 
> 
> I understand that with vfio-user,
> the guest OS can issue DMA reads/writes to a "PCIe space" of a virtual device,
> but I'm confused:
> 1. Does the guest OS issue DMA reads/writes to a region on the actual physical device
> or just to a DRAM region?

A DRAM region.

>      If the guest OS directly accesses the physical device (DMA, IOMMU), where
> does SPDK stand?
> 2. Why does vfio-user need a socket, and what kind of data does the socket carry?

The vfio-user protocol allows a device to be emulated outside QEMU (the vfio-user client), in a separate process (the vfio-user server, SPDK running the nvmf/vfio-user target in our case). The UNIX domain socket is used between QEMU and SPDK for initial device setup, virtual IRQs, and other infrequent operations.

> 3. Does vfio-user look like vhost-user, except with direct DMA access
> instead of shared-memory communication?

vfio-user allows any kind of device to be emulated in a separate process (even non-PCI), while vhost-user is mainly for VirtIO devices.
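
To make the two ends of that socket concrete, here is a rough sketch (illustrative only, not from the original mail; the RPC names follow the SPDK documentation of that period, and the vfio-user-pci option comes from the out-of-tree QEMU patches, so exact spellings and the socket file name may differ):

    # SPDK side (vfio-user server): expose an NVMe subsystem over a VFIOUSER listener
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    # QEMU side (vfio-user client): attach to the socket SPDK created in that directory
    -device vfio-user-pci,socket=/var/run/vfio-user/cntrl

Only control traffic crosses the socket; the I/O data stays in the guest memory that is shared with SPDK.
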

> 
> 
> thanks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At 2022-03-08 17:40:12, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
> wrote:
> >> -----Original Message-----
> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> Sent: 08 March 2022 09:29
> >> To: spdk <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >> thanks, follow up questions:
> >> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev), i cant'
> >> have access to the nvme feature in guest(shadow door bell). is that
> correct?
> >
> >Correct.
> >
> >> 2. in the vfio-user solution, does the interrupt sending from the
> ssd(nvme)
> >> go through the qemu/kvm? or it go straight to the guest kernel?
> >
> >It doesn't go to QEMU/KVM. It depends on how you've set it up in SPDK: it
> can either go the host kernel or to SPDK.
> >
> >> 3. when will the vfio-user be available? does kvm have same delima here?
> >
> >vfio-user is under review in QEMU, so we can't predict when it will be
> accepted upstream. This doesn't mean you can't use it though, have a look
> here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
> >
> >>
> >>
> >> thank you very much!
> >>
> >>
> >>
> >>
> >> ---- Original Message ----
> >> | From | Liu, Changpeng<changpeng.liu(a)intel.com> |
> >> | Date | 2022-03-08 16:33 |
> >> | To | Storage Performance Development Kit<spdk(a)lists.01.org> |
> >> | Cc | |
> >> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
> >> Previously SPDK extended QEMU with a separate driver to enable vhost-
> >> nvme, this driver
> >> was not accepted by QEMU, so now we support emulated NVMe with a
> new
> >> solution
> >> "vfio-user", again the driver for supporting this is still under code review
> of
> >> QEMU community,
> >> but SPDK already supports this.
> >>
> >> > -----Original Message-----
> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> >> > Sent: Tuesday, March 8, 2022 4:29 PM
> >> > To: spdk(a)lists.01.org
> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
> >> >
> >> > Could someone help me understand the difference between vhost-
> nvme
> >> and
> >> > vhost-blk?
> >> > The online doc only shows there is vhost-user-blk-pci, why not vhost-
> user-
> >> nvme-
> >> > pci?
> >> > There is little doc fined in github/spdk and the qemu itself doesn't help
> >> either
> >> > Thanks!
> >> > _______________________________________________
> >> > SPDK mailing list -- spdk(a)lists.01.org
> >> > To unsubscribe send an email to spdk-leave(a)lists.01.org
> >> _______________________________________________
> >> SPDK mailing list -- spdk(a)lists.01.org
> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
> >> _______________________________________________
> >> SPDK mailing list -- spdk(a)lists.01.org
> >> To unsubscribe send an email to spdk-leave(a)lists.01.org
> >_______________________________________________
> >SPDK mailing list -- spdk(a)lists.01.org
> >To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08 13:19 zbhhbz
  0 siblings, 0 replies; 13+ messages in thread
From: zbhhbz @ 2022-03-08 13:19 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3567 bytes --]




thanks, this helps a lot.


about the vfio-user,


I understand that with vfio-user,
the guest OS can issue DMA reads/writes to a "PCIe space" of a virtual device,
but I'm confused:
1. Does the guest OS issue DMA reads/writes to a region on the actual physical device or just to a DRAM region?
     If the guest OS directly accesses the physical device (DMA, IOMMU), where does SPDK stand?
2. Why does vfio-user need a socket, and what kind of data does the socket carry?
3. Does vfio-user look like vhost-user, except with direct DMA access instead of shared-memory communication?


thanks














At 2022-03-08 17:40:12, "Thanos Makatos" <thanos.makatos(a)nutanix.com> wrote:
>> -----Original Message-----
>> From: zbhhbz <zbhhbz(a)yeah.net>
>> Sent: 08 March 2022 09:29
>> To: spdk <spdk(a)lists.01.org>
>> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>> 
>> thanks, follow up questions:
>> 1. if I still use Vhost-user-blk-pci in qemu and with nvme ssd(bdev), i cant'
>> have access to the nvme feature in guest(shadow door bell). is that correct?
>
>Correct.
>
>> 2. in the vfio-user solution, does the interrupt sending from the ssd(nvme)
>> go through the qemu/kvm? or it go straight to the guest kernel?
>
>It doesn't go to QEMU/KVM. It depends on how you've set it up in SPDK: it can either go the host kernel or to SPDK.
>
>> 3. when will the vfio-user be available? does kvm have same delima here?
>
>vfio-user is under review in QEMU, so we can't predict when it will be accepted upstream. This doesn't mean you can't use it though, have a look here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
>
>> 
>> 
>> thank you very much!
>> 
>> 
>> 
>> 
>> ---- Original Message ----
>> | From | Liu, Changpeng<changpeng.liu(a)intel.com> |
>> | Date | 2022-03-08 16:33 |
>> | To | Storage Performance Development Kit<spdk(a)lists.01.org> |
>> | Cc | |
>> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
>> Previously SPDK extended QEMU with a separate driver to enable vhost-
>> nvme, this driver
>> was not accepted by QEMU, so now we support emulated NVMe with a new
>> solution
>> "vfio-user", again the driver for supporting this is still under code review of
>> QEMU community,
>> but SPDK already supports this.
>> 
>> > -----Original Message-----
>> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
>> > Sent: Tuesday, March 8, 2022 4:29 PM
>> > To: spdk(a)lists.01.org
>> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
>> >
>> > Could someone help me understand the difference between vhost-nvme
>> and
>> > vhost-blk?
>> > The online doc only shows there is vhost-user-blk-pci, why not vhost-user-
>> nvme-
>> > pci?
>> > There is little doc fined in github/spdk and the qemu itself doesn't help
>> either
>> > Thanks!
>> > _______________________________________________
>> > SPDK mailing list -- spdk(a)lists.01.org
>> > To unsubscribe send an email to spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org
>> To unsubscribe send an email to spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org
>> To unsubscribe send an email to spdk-leave(a)lists.01.org
>_______________________________________________
>SPDK mailing list -- spdk(a)lists.01.org
>To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08  9:40 Thanos Makatos
  0 siblings, 0 replies; 13+ messages in thread
From: Thanos Makatos @ 2022-03-08  9:40 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2641 bytes --]

> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: 08 March 2022 09:29
> To: spdk <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> 
> Thanks. Follow-up questions:
> 1. If I still use vhost-user-blk-pci in QEMU with an NVMe SSD (bdev), I can't
> access the NVMe features (shadow doorbell) in the guest. Is that correct?

Correct.

> 2. In the vfio-user solution, does the interrupt sent from the SSD (NVMe)
> go through QEMU/KVM, or does it go straight to the guest kernel?

It doesn't go to QEMU/KVM. It depends on how you've set it up in SPDK: it can either go to the host kernel or to SPDK.

> 3. When will vfio-user be available? Does KVM have the same dilemma here?

vfio-user is under review in QEMU, so we can't predict when it will be accepted upstream. This doesn't mean you can't use it, though; have a look here: https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
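
If you want to try it before it is merged, the rough shape is as follows (illustrative only; the authoritative steps are in the spdk.md linked above, and the paths and flags here are assumptions that may have changed):

    # Build SPDK with vfio-user support and start the NVMe-oF target
    ./configure --with-vfio-user && make
    build/bin/nvmf_tgt &

    # Then create a subsystem with a VFIOUSER listener via the RPCs described in that doc,
    # and start the patched QEMU with -device vfio-user-pci pointed at the socket SPDK creates.

The guest then sees a real NVMe controller, including NVMe-only features such as the shadow doorbell that vhost-user-blk cannot expose.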

> 
> 
> thank you very much!
> 
> 
> 
> 
> ---- Original Message ----
> | From | Liu, Changpeng<changpeng.liu(a)intel.com> |
> | Date | 2022-03-08 16:33 |
> | To | Storage Performance Development Kit<spdk(a)lists.01.org> |
> | Cc | |
> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
> Previously SPDK extended QEMU with a separate driver to enable vhost-
> nvme, this driver
> was not accepted by QEMU, so now we support emulated NVMe with a new
> solution
> "vfio-user", again the driver for supporting this is still under code review of
> QEMU community,
> but SPDK already supports this.
> 
> > -----Original Message-----
> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> > Sent: Tuesday, March 8, 2022 4:29 PM
> > To: spdk(a)lists.01.org
> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
> >
> > Could someone help me understand the difference between vhost-nvme
> and
> > vhost-blk?
> > The online doc only shows there is vhost-user-blk-pci, why not vhost-user-
> nvme-
> > pci?
> > There is little doc fined in github/spdk and the qemu itself doesn't help
> either
> > Thanks!
> > _______________________________________________
> > SPDK mailing list -- spdk(a)lists.01.org
> > To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08  9:29 zbhhbz
  0 siblings, 0 replies; 13+ messages in thread
From: zbhhbz @ 2022-03-08  9:29 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1819 bytes --]

Thanks. Follow-up questions:
1. If I still use vhost-user-blk-pci in QEMU with an NVMe SSD (bdev), I can't access the NVMe features (shadow doorbell) in the guest. Is that correct?
2. In the vfio-user solution, does the interrupt sent from the SSD (NVMe) go through QEMU/KVM, or does it go straight to the guest kernel?
3. When will vfio-user be available? Does KVM have the same dilemma here?


thank you very much!




---- Original Message ----
| From | Liu, Changpeng<changpeng.liu(a)intel.com> |
| Date | 2022-03-08 16:33 |
| To | Storage Performance Development Kit<spdk(a)lists.01.org> |
| Cc | |
| Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
Previously SPDK extended QEMU with a separate driver to enable vhost-nvme, this driver
was not accepted by QEMU, so now we support emulated NVMe with a new solution
"vfio-user", again the driver for supporting this is still under code review of QEMU community,
but SPDK already supports this.

> -----Original Message-----
> From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> Sent: Tuesday, March 8, 2022 4:29 PM
> To: spdk(a)lists.01.org
> Subject: [SPDK] The difference between vhost-nvme and vhost-blk
>
> Could someone help me understand the difference between vhost-nvme and
> vhost-blk?
> The online doc only shows there is vhost-user-blk-pci, why not vhost-user-nvme-
> pci?
> There is little doc fined in github/spdk and the qemu itself doesn't help either
> Thanks!
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [SPDK] Re: The difference between vhost-nvme and vhost-blk
@ 2022-03-08  8:33 Liu, Changpeng
  0 siblings, 0 replies; 13+ messages in thread
From: Liu, Changpeng @ 2022-03-08  8:33 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 942 bytes --]

Previously SPDK extended QEMU with a separate driver to enable vhost-nvme, but that driver
was not accepted by QEMU. We now support emulated NVMe with a new solution,
"vfio-user"; the QEMU driver for it is still under code review in the QEMU community,
but SPDK already supports it.
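
For comparison, the vhost-user-blk path that QEMU already ships looks roughly like this (an illustrative sketch, not from the original mail; the bdev name, sizes, and socket paths are examples):

    # SPDK side: export a bdev through a vhost-user-blk controller
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py vhost_create_blk_controller vhost.0 Malloc0   # socket at /var/tmp/vhost.0 by default

    # QEMU side: share guest memory and attach the device
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on -numa node,memdev=mem0
    -chardev socket,id=char0,path=/var/tmp/vhost.0
    -device vhost-user-blk-pci,chardev=char0

In this case the guest sees a virtio-blk device rather than an NVMe controller, which is why NVMe-specific features are not visible through it.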

> -----Original Message-----
> From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> Sent: Tuesday, March 8, 2022 4:29 PM
> To: spdk(a)lists.01.org
> Subject: [SPDK] The difference between vhost-nvme and vhost-blk
> 
> Could someone help me understand the difference between vhost-nvme and
> vhost-blk?
> The online doc only shows there is vhost-user-blk-pci, why not vhost-user-nvme-
> pci?
> There is little documentation to be found in github/spdk, and QEMU itself doesn't help either.
> Thanks!
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2022-03-08 17:54 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-08 13:54 [SPDK] Re: The difference between vhost-nvme and vhost-blk zbhhbz
  -- strict thread matches above, loose matches on Subject: below --
2022-03-08 17:54 zbhhbz
2022-03-08 16:45 Walker, Benjamin
2022-03-08 14:46 zbhhbz
2022-03-08 14:19 Thanos Makatos
2022-03-08 14:15 zbhhbz
2022-03-08 14:02 Thanos Makatos
2022-03-08 14:00 Liu, Xiaodong
2022-03-08 13:31 Thanos Makatos
2022-03-08 13:19 zbhhbz
2022-03-08  9:40 Thanos Makatos
2022-03-08  9:29 zbhhbz
2022-03-08  8:33 Liu, Changpeng

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).