Hi all:
      I just created an NVMe bdev and a vhost-scsi controller that QEMU can access, but errors occur when I/O is issued from the VM.
      Here are my steps for the SPDK configuration.

Host OS: Ubuntu 18.04, kernel 4.15.0-30
Guest OS: Ubuntu 18.04
QEMU: 2.12.0
SPDK: v18.07

1)  sudo HUGEMEM=4096 scripts/setup.sh

0000:05:00.0 (8086 2522): nvme -> vfio-pci

Current user memlock limit: 4116 MB

This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as current user.
To change this, please adjust limits.conf memlock limit for current user.
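
(Not part of my original steps, but a quick way to confirm the reservation on the host:)

grep Huge /proc/meminfo    # HugePages_Total/HugePages_Free should reflect the 4096 MB reserved above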

2) sudo ./app/vhost/vhost -S /var/tmp -m 0x3 &

[ DPDK EAL parameters: vhost -c 0x3 -m 1024 --legacy-mem --file-prefix=spdk_pid1921 ]
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/spdk_pid1921/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 530:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 718:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 1 on socket 0
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
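
(Also not in my original steps; a sanity check that the target's RPC socket is up, assuming the default /var/tmp/spdk.sock:)

sudo ./scripts/rpc.py get_rpc_methods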

3) sudo ./scripts/rpc.py construct_nvme_bdev -b Nvme0 -t PCIe -a 0000:05:00.0
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:2522 spdk_nvme
EAL:   using IOMMU type 1 (Type 1)
Nvme0n1
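
(To confirm the new bdev is registered at this point:)

sudo ./scripts/rpc.py get_bdevs    # should list Nvme0n1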

4) sudo ./scripts/rpc.py construct_vhost_scsi_controller --cpumask 0x1 vhost.0
5) sudo ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Nvme0n1
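
(Similarly, the controller and its vhost-user socket can be verified; get_vhost_controllers is the RPC name in v18.07, if I remember correctly:)

sudo ./scripts/rpc.py get_vhost_controllers
ls -l /var/tmp/vhost.0    # the socket QEMU connects to in step 6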
6) start QEMU (the taskset CPU list was dropped from my paste; <cpu-list> below is a placeholder):
taskset -c <cpu-list> qemu-system-x86_64 -enable-kvm -m 1G \
        -name bread,debug-threads=on \
        -daemonize \
        -pidfile /var/log/bread.pid \
        -cpu host \
        -smp 4,sockets=1,cores=4,threads=1 \
        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        -drive file=../ubuntu.img,media=disk,cache=unsafe,aio=threads,format=qcow2 \
        -chardev socket,id=char0,path=/var/tmp/vhost.0 \
        -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
        -machine usb=on \
        -device usb-tablet \
        -device usb-mouse \
        -device usb-kbd \
        -vnc :2 \
        -net nic,model=virtio \
        -net user,hostfwd=tcp::2222-:22
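
One thing I am not sure about: vhost-user needs the guest RAM to be shared hugepage memory, and I do have share=on, but the pages are not preallocated. A variant I am considering (prealloc=yes is just my guess; I have not confirmed it matters here):

        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on,prealloc=yes \
        -numa node,memdev=mem0 \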

Then, when I run fio against the vhost NVMe disk in the guest VM, I get the following error messages on the host console.
===========================================================================
nvme_pcie.c:1706:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x7f8fed64d000) failed
nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:32
nvme_qpair.c: 306:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
bdev_nvme.c:1521:bdev_nvme_queue_cmd: *ERROR*: readv failed: rc = -22
nvme_pcie.c:1706:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x7f8fed64d000) failed
nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:8
nvme_qpair.c: 306:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
bdev_nvme.c:1521:bdev_nvme_queue_cmd: *ERROR*: readv failed: rc = -22
[the same four lines repeat for every I/O; vtophys() always fails on the same address 0x7f8fed64d000]
=========================================================================== 
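
For reference, the fio job in the guest was a simple read test roughly like this (reconstructed from memory, so the exact parameters may differ):

sudo fio --name=test --filename=/dev/sdb --ioengine=libaio --direct=1 \
    --rw=read --bs=16k --iodepth=32 --runtime=30 --time_based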

I used lsblk to check the block device information in the guest, and the NVMe disk shows up as sdb.
>lsblk --output "NAME,KNAME,MODEL,HCTL,SIZE,VENDOR,SUBSYSTEMS"
===========================================================================   
NAME   KNAME  MODEL            HCTL         SIZE VENDOR   SUBSYSTEMS
fd0    fd0                                    4K          block:platform
loop0  loop0                               12.2M          block
loop1  loop1                               86.6M          block
loop2  loop2                                1.6M          block
loop3  loop3                                3.3M          block
loop4  loop4                                 21M          block
loop5  loop5                                2.3M          block
loop6  loop6                                 13M          block
loop7  loop7                                3.7M          block
loop8  loop8                                2.3M          block
loop9  loop9                               86.9M          block
loop10 loop10                              34.7M          block
loop11 loop11                                87M          block
loop12 loop12                             140.9M          block
loop13 loop13                                13M          block
loop14 loop14                               140M          block
loop15 loop15                             139.5M          block
loop16 loop16                               3.7M          block
loop17 loop17                              14.5M          block
sda    sda    QEMU HARDDISK    0:0:0:0       32G ATA      block:scsi:pci
  sda1 sda1                                  32G          block:scsi:pci
sdb    sdb    NVMe disk        2:0:0:0     27.3G INTEL    block:scsi:virtio:pci
sr0    sr0    QEMU DVD-ROM     1:0:0:0     1024M QEMU     block:scsi:pci
===========================================================================   

From the log, my reading is that vtophys() cannot translate the guest buffer address 0x7f8fed64d000, so SPDK cannot build the PRP list and the READ is rejected with rc = -22 (-EINVAL). Can anyone help me figure out why the guest memory is not being registered with SPDK, or suggest how to debug this further?

Thanks.
Adam Chang