From: Wodkowski, PawelX <pawelx.wodkowski at intel.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] Error when issue IO in QEMU to vhost scsi NVMe
Date: Thu, 09 Aug 2018 06:20:33 +0000
Message-ID: <F6F2A6264E145F47A18AB6DF8E87425D7033F4D8@IRSMSX102.ger.corp.intel.com>
In-Reply-To: CANvoUxjsa3hA0nE_zkVuzhc8xdGxqUFUmR2P24zcmc8BcOsnGA@mail.gmail.com


I think you need to add

-numa node,memdev=mem0

to the QEMU command-line options.

Also consider adding 'prealloc=yes,host-nodes=0,policy=bind' to the '-object' option.
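
For example, the memory backend options would then look roughly like this (a sketch only; the size and hugepage path simply mirror the command quoted below):

-object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind \
-numa node,memdev=mem0 \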

Thanks.



From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Adam Chang
Sent: Thursday, August 9, 2018 4:05 AM
To: spdk(a)lists.01.org
Subject: [SPDK] Error when issue IO in QEMU to vhost scsi NVMe

Hi all:
      I just created an NVMe bdev and a vhost-scsi controller that can be accessed by QEMU, but an error occurs when IO is issued from the VM.
      Here are my steps for the SPDK configuration:

Host OS: Ubuntu 18.04, Kernel 4.15.0-30
Guest OS: Ubuntu 18.04
QEMU: 2.12.0
SPDK: v18.07

1)  sudo HUGEMEM=4096 scripts/setup.sh

0000:05:00.0 (8086 2522): nvme -> vfio-pci

Current user memlock limit: 4116 MB

This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as current user.
To change this, please adjust limits.conf memlock limit for current user.
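
(For reference, the memlock limit mentioned above is usually raised with entries along these lines in /etc/security/limits.conf; the user name and values here are only an assumption:)

# /etc/security/limits.conf - example entries, values are an assumption
<user>   soft   memlock   unlimited
<user>   hard   memlock   unlimited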

2) sudo ./app/vhost/vhost -S /var/tmp -m 0x3 &

[ DPDK EAL parameters: vhost -c 0x3 -m 1024 --legacy-mem --file-prefix=spdk_pid1921 ]
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/spdk_pid1921/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 530:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 718:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 1 on socket 0
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0

3) sudo ./scripts/rpc.py construct_vhost_scsi_controller --cpumask 0x1 vhost.0
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:2522 spdk_nvme
EAL:   using IOMMU type 1 (Type 1)
Nvme0n1

4) sudo ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Nvme0n1
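
(Note: the Nvme0n1 bdev used above would typically have been created first with an NVMe bdev RPC; a sketch, assuming the v18.07 rpc.py naming and the PCI address shown in step 1:

sudo ./scripts/rpc.py construct_nvme_bdev -b Nvme0 -t PCIe -a 0000:05:00.0)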
5) start qemu:
taskset qemu-system-x86_64 -enable-kvm -m 1G \
        -name bread,debug-threads=on \
        -daemonize \
        -pidfile /var/log/bread.pid \
        -cpu host \
        -smp 4,sockets=1,cores=4,threads=1 \
        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        -drive file=../ubuntu.img,media=disk,cache=unsafe,aio=threads,format=qcow2 \
        -chardev socket,id=char0,path=/var/tmp/vhost.0 \
        -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
        -machine usb=on \
        -device usb-tablet \
        -device usb-mouse \
        -device usb-kbd \
        -vnc :2 \
        -net nic,model=virtio \
        -net user,hostfwd=tcp::2222-:22

Then, when I use fio to test the vhost NVMe disk in the guest VM, I get the following error messages in the host console:
===========================================================================
nvme_pcie.c:1706:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x7f8fed64d000) failed
nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:32
nvme_qpair.c: 306:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
bdev_nvme.c:1521:bdev_nvme_queue_cmd: *ERROR*: readv failed: rc = -22
nvme_pcie.c:1706:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x7f8fed64d000) failed
nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:8
nvme_qpair.c: 306:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
bdev_nvme.c:1521:bdev_nvme_queue_cmd: *ERROR*: readv failed: rc = -22
[... the same four-line vtophys/READ/INVALID FIELD/readv block repeats for every failed IO, first with len:32 and then with len:8 ...]
===========================================================================
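
For reference, the fio run in the guest was along these lines (a sketch only; the exact job parameters were not given, and /dev/sdb is taken from the lsblk output below):

sudo fio --name=randread --filename=/dev/sdb --direct=1 --ioengine=libaio \
         --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based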

I used lsblk to check the block device information in the guest, and I can see the NVMe disk as sdb.
>lsblk --output "NAME,KNAME,MODEL,HCTL,SIZE,VENDOR,SUBSYSTEMS"
===========================================================================
NAME   KNAME  MODEL            HCTL         SIZE VENDOR   SUBSYSTEMS
fd0    fd0                                    4K          block:platform
loop0  loop0                               12.2M          block
loop1  loop1                               86.6M          block
loop2  loop2                                1.6M          block
loop3  loop3                                3.3M          block
loop4  loop4                                 21M          block
loop5  loop5                                2.3M          block
loop6  loop6                                 13M          block
loop7  loop7                                3.7M          block
loop8  loop8                                2.3M          block
loop9  loop9                               86.9M          block
loop10 loop10                              34.7M          block
loop11 loop11                                87M          block
loop12 loop12                             140.9M          block
loop13 loop13                                13M          block
loop14 loop14                               140M          block
loop15 loop15                             139.5M          block
loop16 loop16                               3.7M          block
loop17 loop17                              14.5M          block
sda    sda    QEMU HARDDISK    0:0:0:0       32G ATA      block:scsi:pci
  sda1 sda1                                  32G          block:scsi:pci
sdb    sdb    NVMe disk        2:0:0:0     27.3G INTEL    block:scsi:virtio:pci
sr0    sr0    QEMU DVD-ROM     1:0:0:0     1024M QEMU     block:scsi:pci
===========================================================================

Can anyone help me figure out how to solve this problem?

Thanks.
Adam Chang


Thread overview: 9+ messages
2018-08-09  6:20 Wodkowski, PawelX [this message]
2018-08-10  8:54 [SPDK] Error when issue IO in QEMU to vhost scsi NVMe Adam Chang
2018-08-10  5:14 Stojaczyk, DariuszX
2018-08-10  5:01 Adam Chang
2018-08-09 13:55 Stojaczyk, DariuszX
2018-08-09 12:42 Wodkowski, PawelX
2018-08-09 10:56 Adam Chang
2018-08-09  8:07 Stojaczyk, DariuszX
2018-08-09  2:04 Adam Chang
