* [SPDK] SPDK Euro Meeting 11/12: vhost-user target with vvu support
From: Nikos Dragazis @ 2018-12-13  0:22 UTC
  To: spdk


Hi all,

Yesterday, in the conference meeting, I had the chance to talk about the
work I am doing here at Arrikto Inc. Let me give an overview for those
who missed it.

I am working on the SPDK vhost target. I am trying to extend the
vhost-user transport mechanism to enable deploying the SPDK vhost target
into a dedicated storage appliance VM instead of host user space. My end
goal is to have a setup where a storage appliance VM offers emulated
storage devices to compute VMs.

To make this more clear, the topology looks like this:
https://www.dropbox.com/s/gdskob7lgtlwlio/spdk_vhost_vvu_support.svg?dl=0

The code is here:
https://github.com/ndragazis/spdk

I think this is important for two reasons:
- in a cloud environment, security really matters. Running the SPDK
  vhost target inside a VM instead of in host user space is definitely
  better in terms of security.
- this will enable cloud users to create their own storage devices for
  their compute VMs. This was not possible with the previous topology,
  because running the SPDK vhost target in host user space could only be
  done by the cloud provider. With this topology, users can create their
  own custom storage devices because they can run the SPDK vhost app
  themselves.

Getting into more detail about how it works:
Moving the vhost target from host user space to a VM creates three
issues with the vhost-user transport mechanism that need to be solved.
1. We need a way for the vhost-user messages to reach the SPDK vhost
   target.
2. We need a mechanism so that the SPDK vhost target can access the
   compute VM’s file-backed memory.
3. We need a way for the SPDK vhost target to interrupt the compute VM.

These are all solved by a special virtio device called
“virtio-vhost-user”. This device was created by Stefan Hajnoczi and is
described here:
https://wiki.qemu.org/Features/VirtioVhostUser

This device solves the above problems as follows:
1. it reads the messages from the unix socket and passes them into a
   virtqueue. A user space driver in SPDK receives the messages from the
   virtqueue. The received messages are then passed to the SPDK
   vhost-user message handler.
2. it maps the vhost memory regions sent by the master with message
   VHOST_USER_SET_MEM_TABLE. The vvu device exposes those regions to the
   guest as a PCI memory region.
3. it intercepts the VHOST_USER_SET_VRING_CALL messages and saves the
   callfds for the virtqueues. For each virtqueue, it exposes a doorbell
   to the guest. When this doorbell is kicked from the SPDK vvu driver,
   the device kicks the corresponding callfd.

Changes in the API:
Currently, the rte_vhost library provides both transports, the one I
added and the pre-existing one. The transport is selected with a new
command-line option, “-T”, when running the vhost app. This option can
take two values, “vvu” or “unix”, which correspond to the two
transports. When the vvu transport is used, the “-S” option has to be
the PCI address of the virtio-vhost-user device. When the unix transport
is used, the “-S” option is the directory path where the unix socket
will be created.
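
For illustration, invoking the vhost app with each transport would look
roughly like this (the vvu invocation matches step 3 of the guide below;
the unix socket directory is just an example path):

$ sudo app/vhost/vhost -T vvu -S "0000:00:07.0" -m 0x3   # vvu transport
$ sudo app/vhost/vhost -T unix -S /var/tmp -m 0x3        # unix transport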

Step-by-step guide to test it yourself:

SPDK version: https://github.com/ndragazis/spdk
QEMU version: https://github.com/stefanha/qemu/tree/virtio-vhost-user

1. Compile QEMU and SPDK:
$ git clone -b virtio-vhost-user https://github.com/stefanha/qemu
$ (cd qemu && ./configure --target-list=x86_64-softmmu && make)

$ git clone https://github.com/ndragazis/spdk.git
$ cd spdk
$ git submodule update --init
$ ./configure
$ make

2. Launch the Storage Appliance VM:
$ ./qemu/x86_64-softmmu/qemu-system-x86_64 \
  -machine q35,accel=kvm -cpu host -smp 2 -m 4G \
  -drive if=none,file=image.qcow2,format=qcow2,id=bootdisk \
  -device virtio-blk-pci,drive=bootdisk,id=virtio-disk1,bootindex=0,addr=04.0 \
  -device virtio-scsi-pci,id=scsi0,addr=05.0 \
  -drive file=scsi_disk.qcow2,if=none,format=qcow2,id=scsi_disk \
  -device scsi-hd,drive=scsi_disk,bus=scsi0.0,channel=0,scsi-id=0,lun=0 \
  -drive file=nvme_disk.qcow2,if=none,format=qcow2,id=nvme_disk \
  -device nvme,drive=nvme_disk,serial=1,addr=06.0 \
  -chardev socket,id=chardev0,path=vhost-user.sock,server,nowait \
  -device virtio-vhost-user-pci,chardev=chardev0,addr=07.0
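
As an optional sanity check, once the guest has booted you can confirm
that the virtio-vhost-user device is visible at the PCI address implied
by addr=07.0 above (the same address is used in step 3):

$ lspci -s 00:07.0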

The SPDK code needs to be accessible to the guest in the Storage
Appliance VM. A simple solution would be mounting the corresponding host
directory with sshfs, but it’s up to you.
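
For example, assuming the SPDK tree lives in /path/to/spdk on the host
and the host is reachable from the guest at 192.168.122.1 (both of these
are placeholders), you could run the following inside the Storage
Appliance VM:

$ mkdir -p ~/spdk
$ sshfs user@192.168.122.1:/path/to/spdk ~/spdk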

3. Create the SPDK vhost SCSI target inside the Storage Appliance VM:
$ sudo modprobe vfio enable_unsafe_noiommu_mode=1
$ sudo modprobe vfio-pci
$ cd spdk
$ sudo scripts/setup.sh
$ sudo app/vhost/vhost -S "0000:00:07.0" -T "vvu" -m 0x3 &
$ sudo scripts/rpc.py construct_vhost_scsi_controller --cpumask 0x1 vhost.0
$ sudo scripts/rpc.py construct_virtio_pci_scsi_bdev 0000:00:05.0 VirtioScsi0
$ sudo scripts/rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:00:06.0
$ sudo scripts/rpc.py construct_malloc_bdev 64 512 -b Malloc0
$ sudo scripts/rpc.py add_vhost_scsi_lun vhost.0 0 VirtioScsi0t0
$ sudo scripts/rpc.py add_vhost_scsi_lun vhost.0 1 NVMe1n1
$ sudo scripts/rpc.py add_vhost_scsi_lun vhost.0 2 Malloc0
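
Optionally, you can double-check what was created before moving on
(assuming the get_bdevs and get_vhost_controllers RPCs available in this
SPDK version):

$ sudo scripts/rpc.py get_bdevs
$ sudo scripts/rpc.py get_vhost_controllers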

4. Launch the Compute VM:
$ ./qemu/x86_64-softmmu/qemu-system-x86_64 \
  -M accel=kvm -cpu host -m 1G \
  -object memory-backend-file,id=mem0,mem-path=/dev/shm/ivshmem,size=1G,share=on \
  -numa node,memdev=mem0 \
  -drive if=virtio,file=image.qcow2,format=qcow2 \
  -chardev socket,id=chardev0,path=vhost-user.sock \
  -device vhost-user-scsi-pci,chardev=chardev0

5. Ensure that the virtio-scsi HBA and the associated SCSI targets are
visible in the Compute VM:
$ lsscsi
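
If you want a quick I/O smoke test as well, you can read from one of the
new disks with dd. The /dev/sdX name below is a placeholder; use
whichever device node lsscsi reports for the vhost SCSI LUNs:

$ sudo dd if=/dev/sdX of=/dev/null bs=1M count=64 iflag=direct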

I will submit the code for review on GerritHub soon. Hopefully, we can
get this upstream with your help!

Thanks,
Nikos



* Re: [SPDK] SPDK Euro Meeting 11/12: vhost-user target with vvu support
From: wuzhouhui @ 2019-09-28  2:06 UTC
  To: spdk



> -----Original Messages-----
> From: "Nikos Dragazis" <ndragazis(a)outlook.com.gr>
> Sent Time: 2018-12-13 08:22:42 (Thursday)
> To: "spdk(a)lists.01.org" <spdk(a)lists.01.org>
> Cc: 
> Subject: [CASS SPAM][SPDK] SPDK Euro Meeting 11/12: vhost-user target with vvu support
> 
> Hi all,
> 
> yesterday in the conference meeting I had the chance to talk about the
> work I am doing here in Arrikto Inc. Let me give an overview for those
> who missed it.
> 
> I am working on the SPDK vhost target. I am trying to extend the
> vhost-user transport mechanism to enable deploying the SPDK vhost target
> into a dedicated storage appliance VM instead of host user space. My end
> goal is to have a setup where a storage appliance VM offers emulated
> storage devices to compute VMs.
> 
> To make this more clear, the topology looks like this:
> https://www.dropbox.com/s/gdskob7lgtlwlio/spdk_vhost_vvu_support.svg?dl=0

Eh, it's difficult for Chinese users to access Dropbox. Could you put it
on a Chinese-accessible site, like GitHub?


* Re: [SPDK] SPDK Euro Meeting 11/12: vhost-user target with vvu support
From: Nikos Dragazis @ 2018-12-18 17:22 UTC
  To: spdk


On 17/12/18 12:30 p.m., Wei Wang wrote:

> It seems Nikos wasn't included in the email, have him cc-ed
>
> Agree with Stefan that virtio-vhost-user is a good inter-VM communication solution.
> With the previous Vhost-PCI PoC, the 2 VM networking communication (64B packet)
> throughput is around 1.6x larger than the exiting OVS based solution. The data path of
> virtio-vhost-user remains the same as Vhost-PCI, so we would also expect it to have a
> similar good performance.
>
> Btw, could you guys share your usage of this inter-VM communication solution in the
> storage domain?

Sure. You can check the following link for a detailed description:
https://github.com/ndragazis/ndragazis.github.io/blob/master/spdk.md

In a nutshell, we want to have a Storage Appliance VM that implements
virtual storage devices and exposes them to Compute VMs over
virtio-vhost-user. We run SPDK inside the Storage Appliance VM for this
purpose. We use the existing SPDK vhost target, which allows creating
virtio-blk, virtio-scsi and NVMe devices. Check out the SPDK docs for
more information about the SPDK vhost target here:
https://spdk.io/doc/vhost.html
https://spdk.io/doc/vhost_processing.html

I hope that makes it clearer.

Best regards,
Nikos

>
> Best,
> Wei



* Re: [SPDK] SPDK Euro Meeting 11/12: vhost-user target with vvu support
From: Nikos Dragazis @ 2018-12-18 15:36 UTC
  To: spdk


On 13/12/18 3:27 a.m., wuzhouhui wrote:

> Eh, it's difficult for Chinese to access Dropbox, could you put it in
> Chinese-accessible site, like GitHub?

Hi,

I am sorry for the delayed response. I didn't know you guys don't have
access to Dropbox.

You can find the image on my GitHub site here:
https://github.com/ndragazis/ndragazis.github.io/blob/master/spdk.md#topology

Best regards,
Nikos



* Re: [SPDK] SPDK Euro Meeting 11/12: vhost-user target with vvu support
From: Wei Wang @ 2018-12-17 10:30 UTC
  To: spdk


On 12/14/2018 07:00 PM, Stefan Hajnoczi wrote:
> On Thu, Dec 13, 2018 at 12:22 AM Nikos Dragazis
> <ndragazis(a)outlook.com.gr> wrote:
>> These are all solved by a special virtio device called
>> “virtio-vhost-user”. This device was created by Stefan Hajnoczi and is
>> described here:
>> https://wiki.qemu.org/Features/VirtioVhostUser
> Hi Nikos,
> Nice that you're pushing virtio-vhost-user.  I'm focussed on other
> projects and probably won't resume virtio-vhost-user work any time
> soon.  It was done as part of the vhost-pci effort that Wei Wang from
> Intel was pursuing, so I've CCed them.
>
> Although virtio-vhost-user may have been stalled, I still think it's a
> good solution for device emulation inside VMs.
>
> With some effort virtio-vhost-user can get upstream into the VIRTIO
> specification, QEMU, and SPDK.  A starting point would be to resend
> the VIRTIO spec, QEMU, SPDK patches.  The VIRTIO spec is here:
> https://stefanha.github.io/virtio/vhost-user-slave.html#x1-2830007
>
> If I remember correctly, most of the remaining work was in SPDK/DPDK,
> where the vhost-user library may need refactoring.  I have CCed
> Dariusz, who was working on a general rte_vhost library overhaul and
> had already looked at virtio-vhost-user.
>

It seems Nikos wasn't included in the email, so I have him CCed.

Agree with Stefan that virtio-vhost-user is a good inter-VM
communication solution. With the previous Vhost-PCI PoC, the 2-VM
networking communication throughput (64B packets) was around 1.6x
higher than the existing OVS-based solution. The data path of
virtio-vhost-user remains the same as Vhost-PCI, so we would also
expect it to have similarly good performance.

Btw, could you guys share your usage of this inter-VM communication
solution in the storage domain?

Best,
Wei


* Re: [SPDK] SPDK Euro Meeting 11/12: vhost-user target with vvu support
From: Stefan Hajnoczi @ 2018-12-14 11:00 UTC
  To: spdk


On Thu, Dec 13, 2018 at 12:22 AM Nikos Dragazis
<ndragazis(a)outlook.com.gr> wrote:
> These are all solved by a special virtio device called
> “virtio-vhost-user”. This device was created by Stefan Hajnoczi and is
> described here:
> https://wiki.qemu.org/Features/VirtioVhostUser

Hi Nikos,
Nice that you're pushing virtio-vhost-user.  I'm focussed on other
projects and probably won't resume virtio-vhost-user work any time
soon.  It was done as part of the vhost-pci effort that Wei Wang from
Intel was pursuing, so I've CCed them.

Although virtio-vhost-user may have been stalled, I still think it's a
good solution for device emulation inside VMs.

With some effort virtio-vhost-user can get upstream into the VIRTIO
specification, QEMU, and SPDK.  A starting point would be to resend
the VIRTIO spec, QEMU, SPDK patches.  The VIRTIO spec is here:
https://stefanha.github.io/virtio/vhost-user-slave.html#x1-2830007

If I remember correctly, most of the remaining work was in SPDK/DPDK,
where the vhost-user library may need refactoring.  I have CCed
Dariusz, who was working on a general rte_vhost library overhaul and
had already looked at virtio-vhost-user.

Stefan

