From: Stojaczyk, DariuszX
Subject: Re: [SPDK] Error when issue IO in QEMU to vhost scsi NVMe
Date: Thu, 09 Aug 2018 08:07:33 +0000
To: spdk@lists.01.org

Can you provide a full vhost log?

D.

> -----Original Message-----
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Adam Chang
> Sent: Thursday, August 9, 2018 4:05 AM
> To: spdk(a)lists.01.org
> Subject: [SPDK] Error when issue IO in QEMU to vhost scsi NVMe
>
> Hi all:
> I created an NVMe bdev and a vhost-scsi controller that can be accessed by
> QEMU, but errors occur when I/O is issued from the VM.
> Here are my steps for the SPDK configuration:
>
> Host OS: Ubuntu 18.04, kernel 4.15.0-30
> Guest OS: Ubuntu 18.04
> QEMU: 2.12.0
> SPDK: v18.07
>
> 1) sudo HUGEMEM=4096 scripts/setup.sh
>
> 0000:05:00.0 (8086 2522): nvme -> vfio-pci
>
> Current user memlock limit: 4116 MB
>
> This is the maximum amount of memory you will be
> able to use with DPDK and VFIO if run as current user.
> To change this, please adjust limits.conf memlock limit for current user.
>
> 2) sudo ./app/vhost/vhost -S /var/tmp -m 0x3 &
>
> [ DPDK EAL parameters: vhost -c 0x3 -m 1024 --legacy-mem --file-prefix=spdk_pid1921 ]
> EAL: Detected 12 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/spdk_pid1921/mp_socket
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> app.c: 530:spdk_app_start: *NOTICE*: Total cores available: 2
> reactor.c: 718:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
> reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 1 on socket 0
> reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
>
> 3) sudo ./scripts/rpc.py construct_vhost_scsi_controller --cpumask 0x1 vhost.0
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL: probe driver: 8086:2522 spdk_nvme
> EAL: using IOMMU type 1 (Type 1)
> Nvme0n1
>
> 4) sudo ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Nvme0n1
> 5) start QEMU:
> taskset qemu-system-x86_64 -enable-kvm -m 1G \
>   -name bread,debug-threads=on \
>   -daemonize \
>   -pidfile /var/log/bread.pid \
>   -cpu host \
>   -smp 4,sockets=1,cores=4,threads=1 \
>   -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on -numa node,memdev=mem0 \
>   -drive file=../ubuntu.img,media=disk,cache=unsafe,aio=threads,format=qcow2 \
>   -chardev socket,id=char0,path=/var/tmp/vhost.0 \
>   -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
>   -machine usb=on \
>   -device usb-tablet \
>   -device usb-mouse \
>   -device usb-kbd \
>   -vnc :2 \
>   -net nic,model=virtio \
>   -net user,hostfwd=tcp::2222-:22
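>
> The fio job used in the test below is not quoted in the report; a minimal
> guest-side run against the vhost-scsi disk (assuming it shows up as
> /dev/sdb, as in the lsblk output further down) would be something like:
>
> sudo fio --name=vhost-test --filename=/dev/sdb --direct=1 --ioengine=libaio \
>          --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based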
>
> Then, when I used fio to test the vhost NVMe disk in the guest VM, I got
> the following error messages on the host console:
>
> ===========================================================================
> nvme_pcie.c:1706:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x7f8fed64d000) failed
> nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:32
> nvme_qpair.c: 306:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
> bdev_nvme.c:1521:bdev_nvme_queue_cmd: *ERROR*: readv failed: rc = -22
> [the same four lines repeat for each subsequent read, first with len:32 and
> then with len:8, always failing on vtophys(0x7f8fed64d000), e.g.:]
> nvme_pcie.c:1706:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x7f8fed64d000) failed
> nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:8
> nvme_qpair.c: 306:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
> bdev_nvme.c:1521:bdev_nvme_queue_cmd: *ERROR*: readv failed: rc = -22
> ===========================================================================
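>
> The "vtophys(...) failed" message means the vhost target could not translate
> that guest buffer address into a physical address. One quick host-side sanity
> check while the VM is running is whether the hugepages backing the guest
> memory are still allocated and mapped, e.g.:
>
> grep Huge /proc/meminfo
> ls -l /dev/hugepages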
>
> I used lsblk to check the block device information in the guest, and could
> see the NVMe disk as sdb:
>
> lsblk --output "NAME,KNAME,MODEL,HCTL,SIZE,VENDOR,SUBSYSTEMS"
> ===========================================================================
>
> NAME   KNAME  MODEL          HCTL      SIZE    VENDOR  SUBSYSTEMS
> fd0    fd0                             4K              block:platform
> loop0  loop0                           12.2M           block
> loop1  loop1                           86.6M           block
> loop2  loop2                           1.6M            block
> loop3  loop3                           3.3M            block
> loop4  loop4                           21M             block
> loop5  loop5                           2.3M            block
> loop6  loop6                           13M             block
> loop7  loop7                           3.7M            block
> loop8  loop8                           2.3M            block
> loop9  loop9                           86.9M           block
> loop10 loop10                          34.7M           block
> loop11 loop11                          87M             block
> loop12 loop12                          140.9M          block
> loop13 loop13                          13M             block
> loop14 loop14                          140M            block
> loop15 loop15                          139.5M          block
> loop16 loop16                          3.7M            block
> loop17 loop17                          14.5M           block
> sda    sda    QEMU HARDDISK  0:0:0:0   32G     ATA     block:scsi:pci
> sda1   sda1                            32G             block:scsi:pci
> sdb    sdb    NVMe disk      2:0:0:0   27.3G   INTEL   block:scsi:virtio:pci
> sr0    sr0    QEMU DVD-ROM   1:0:0:0   1024M   QEMU    block:scsi:pci
> ===========================================================================
>
> Can anyone give me some help on how to solve this problem?
>
> Thanks.
> Adam Chang