* Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
@ 2017-06-13  7:44 Wodkowski, PawelX
  0 siblings, 0 replies; 7+ messages in thread
From: Wodkowski, PawelX @ 2017-06-13  7:44 UTC (permalink / raw)
  To: spdk


I see bsrange is 1k to 512k; is the NVMe formatted with a 512b block size here?

Which commit did you use for this test?
filename=/mnt/ssdtest1 - is this a directory on a mounted filesystem?
Can you send us the dmesg from the failure?
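
For reference, one way to check the formatted LBA size is nvme-cli (a sketch; it assumes the drive enumerates as /dev/nvme0n1):

# human-readable namespace info; the "(in use)" LBA Format line shows the data size
nvme id-ns /dev/nvme0n1 -H | grep "LBA Format"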

Paweł

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Liu, Changpeng
Sent: Tuesday, June 13, 2017 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions

Thanks Xun.

We'll take a look at the issue.

From the error log, it looks like SPDK received a SCSI task management command; the VM most likely sent it because some commands timed out.
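
For context, the guest-side SCSI command timer is what escalates a stuck command into a task management request; it can be inspected from inside the guest (a sketch; it assumes the vhost disk enumerates as /dev/sda):

# per-device SCSI command timeout in seconds (commonly 30 by default)
cat /sys/block/sda/device/timeout
# raising it temporarily can help rule out slow-IO timeouts while debugging
echo 60 > /sys/block/sda/device/timeout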


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of nixun_992(a)sina.com
Sent: Monday, June 12, 2017 3:40 PM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions

 Hi, all:

   SPDK can't pass the fio test after 2 hours of testing, and it can pass the same test if we use the version from before Mar 29.

 The error message is the following:

nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:202 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:151 nsid:1 lba:1310481664 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:151 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1312030816 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:253 nsid:1 lba:998926248 len:88
ABORTED - BY REQUEST (00/07) sqid:1 cid:253 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1049582336 len:176
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:169 nsid:1 lba:1109679488 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:169 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:958884728 len:136
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:152 nsid:1 lba:1018345728 len:240
ABORTED - BY REQUEST (00/07) sqid:1 cid:152 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:234 nsid:1 lba:898096896 len:8
ABORTED - BY REQUEST (00/07) sqid:1 cid:234 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:991125248 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:609149952 len:64
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0


=========================

Our vhost conf is the following:

# The Split virtual block device slices block devices into multiple smaller bdevs.
[Split]
  # Syntax:
  #   Split <bdev> <count> [<size_in_megabytes>]
  #
  # Split Nvme1n1 into two equally-sized portions, Nvme1n1p0 and Nvme1n1p1
  Split Nvme0n1 4 200000

  # Split Malloc2 into eight 1-megabyte portions, Malloc2p0 ... Malloc2p7,
  # leaving the rest of the device inaccessible
  #Split Malloc2 8 1
[VhostScsi0]
    Dev 0 Nvme0n1p0
[VhostScsi1]
    Dev 0 Nvme0n1p1
[VhostScsi2]
    Dev 0 Nvme0n1p2
[VhostScsi3]
    Dev 0 Nvme0n1p3
The fio script is the following:

[global]
filename=/mnt/ssdtest1
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5

# direct rand read
[rand-read]
bsrange=1k-512k
#direct=1
rw=randread
runtime=10000
stonewall

# direct seq read
[seq-read]
bsrange=1k-512k
direct=1
rw=read
runtime=10000
stonewall

# direct rand write
[rand-write]
bsrange=1k-512k
direct=1
rw=randwrite
runtime=10000
stonewall

# direct seq write
[seq-write]
bsrange=1k-512k
direct=1
rw=write
runtime=10000


* Re: [SPDK] spdk can't pass fio_test_if_4_clients_testing_with_4_split_partitions
@ 2017-06-16  7:23 Wodkowski, PawelX
  0 siblings, 0 replies; 7+ messages in thread
From: Wodkowski, PawelX @ 2017-06-16  7:23 UTC (permalink / raw)
  To: spdk


Because one physical block is the minimum unit you can read or write. If you need 1k direct IO, you have to format the underlying NVMe with a 512b sector size; then you can do direct IO of any N x 512b size. In buffered (non-direct) mode you can use whatever IO size you want, and the kernel will handle it for you.

Going back to bsrange: 4k-512k will force fio to produce IO that is N x 4k, and that case must work with SPDK vhost.
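
To illustrate the limits involved, a sketch (device names are assumptions: the guest disk is taken to be /dev/sda, the host NVMe namespace /dev/nvme0n1, and the 512b LBA format index should be confirmed with 'nvme id-ns' first):

# in the guest: direct IO must be a multiple of these values
cat /sys/block/sda/queue/hw_sector_size
cat /sys/block/sda/queue/minimum_io_size
# on the host (destructive, erases data): reformat the namespace with a 512b LBA format
nvme format /dev/nvme0n1 --lbaf=0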

Pawel

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of nixun_992(a)sina.com
Sent: Friday, June 16, 2017 5:28 AM
To: spdk <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio_test_if_4_clients_testing_with_4_split_partitions

Why is there such a limitation (size >= 4k)? In my opinion, the guest kernel should not impose any limitation; SPDK vhost should handle it.

Thanks,
Xun

---------------------------
Try bsrange starting from 4k (not 1k). For direct IO you should not send IO smaller than minimum_io_size/hw_sector_size. Also, can you send the qemu and vhost launch commands? The commit IDs of the working and non-working versions would also help us, because we don't know what the "old version" is.

If the resets are triggered from the guest OS, you should see some failures in the guest dmesg.

Pawel

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of nixun_992(a)sina.com
Sent: Tuesday, June 13, 2017 10:48 AM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions


Hi, Pawel & Changpeng:

    No, not for 512b size; I just specified random IO sizes to stress the SPDK vhost program. As for /mnt/ssdtest1: the target disk appears as /dev/sda, I mount /dev/sda1 on /mnt, and ssdtest1 is the test file for the fio run.
    My guest OS is CentOS 7u1, and dmesg does not show much of a problem. The main problem is that SPDK keeps resetting the controller; I'm not sure why this happens, and I didn't see it with the old version.

Thanks,
Xun
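
A minimal sketch of the guest-side setup described above (assuming the vhost disk enumerates as /dev/sda with a single partition carrying a filesystem):

mount /dev/sda1 /mnt
fio ssdtest.fio    # the job file shown earlier, with filename=/mnt/ssdtest1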

[snip: earlier quoted messages and error log, identical to those above]


* Re: [SPDK] spdk can't pass fio_test_if_4_clients_testing_with_4_split_partitions
@ 2017-06-16  3:27 nixun_992
  0 siblings, 0 replies; 7+ messages in thread
From: nixun_992 @ 2017-06-16  3:27 UTC (permalink / raw)
  To: spdk


Why is there such a limitation (size >= 4k)? In my opinion, the guest kernel should not impose any limitation; SPDK vhost should handle it.

Thanks,
Xun

---------------------------
Try bsrange starting from 4k (not 1k). For direct IO you should not send IO smaller than minimum_io_size/hw_sector_size. Also, can you send the qemu and vhost launch commands? The commit IDs of the working and non-working versions would also help us, because we don't know what the "old version" is.

If the resets are triggered from the guest OS, you should see some failures in the guest dmesg.

Pawel
 



From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of nixun_992(a)sina.com
Sent: Tuesday, June 13, 2017 10:48 AM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions

Hi, Pawel & Changpeng:

    No, not for 512b size; I just specified random IO sizes to stress the SPDK vhost program. As for /mnt/ssdtest1: the target disk appears as /dev/sda, I mount /dev/sda1 on /mnt, and ssdtest1 is the test file for the fio run.
    My guest OS is CentOS 7u1, and dmesg does not show much of a problem. The main problem is that SPDK keeps resetting the controller; I'm not sure why this happens, and I didn't see it with the old version.

Thanks,
Xun






 

[snip: earlier quoted messages and error log, identical to those above]


* Re: [SPDK] spdk can't pass fio test_if_4_clients_testing_with_4_split_partitions
@ 2017-06-16  0:15 Liu, Changpeng
  0 siblings, 0 replies; 7+ messages in thread
From: Liu, Changpeng @ 2017-06-16  0:15 UTC (permalink / raw)
  To: spdk


Thanks, Xun. We have reproduced this issue in our local environment and are debugging it; we will update you once we have root-caused the issue.


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of nixun_992(a)sina.com
Sent: Thursday, June 15, 2017 5:15 PM
To: spdk <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio test_if_4_clients_testing_with_4_split_partitions

Hi, Pawel & Karol:

    The ssd test file is generated by fio; I just specify the size. I will test bsrange 4k-512k with the upstream code and give you feedback.

Thanks,
Xun

---------------------------------------------------------
Hi Xun,
As Pawel mentioned, could you please try running the same config but with bsrange 4k-512k?

Also, could you tell us what the contents of the /mnt/ssdtest1 file are before you start the test?
Do you generate its contents before fio starts, or is it empty?

Karol
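
One way to make the file contents deterministic is to lay the file down with an explicit write pass before the verify jobs run; a sketch using fio itself (job name and parameters are illustrative):

# sequential 1M writes to create /mnt/ssdtest1 at full size before the test
fio --name=lay-file --filename=/mnt/ssdtest1 --size=100G --bs=1M --rw=write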

[snip: earlier quoted messages and error log, identical to those above]


* Re: [SPDK] spdk can't pass fio test_if_4_clients_testing_with_4_split_partitions
@ 2017-06-15  9:15 nixun_992
  0 siblings, 0 replies; 7+ messages in thread
From: nixun_992 @ 2017-06-15  9:15 UTC (permalink / raw)
  To: spdk


Hi, Pawel & Karol:

    The ssd test file is generated by fio; I just specify the size. I will test bsrange 4k-512k with the upstream code and give you feedback.

Thanks,
Xun

---------------------------------------------------------




Hi Xun,
As Pawel mentioned, could you please try running the same config but with bsrange 4k-512k?

Also, could you tell us what the contents of the /mnt/ssdtest1 file are before you start the test?
Do you generate its contents before fio starts, or is it empty?

Karol
 


[snip: earlier quoted messages and error log, identical to those above]


* Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
@ 2017-06-13  7:18 Liu, Changpeng
  0 siblings, 0 replies; 7+ messages in thread
From: Liu, Changpeng @ 2017-06-13  7:18 UTC (permalink / raw)
  To: spdk


Thanks Xun.

We'll take a look at the issue.

From the error log, it looks like SPDK received a SCSI task management command; the VM most likely sent it because some commands timed out.


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of nixun_992(a)sina.com
Sent: Monday, June 12, 2017 3:40 PM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions

[snip: original message quoted in full; it is reproduced as the last message in this thread]


* [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
@ 2017-06-12  7:39 nixun_992
  0 siblings, 0 replies; 7+ messages in thread
From: nixun_992 @ 2017-06-12  7:39 UTC (permalink / raw)
  To: spdk


 Hi, all:

   SPDK can't pass the fio test after 2 hours of testing, and it can pass the same test if we use the version from before Mar 29.

 The error message is the following:
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:202 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:151 nsid:1 lba:1310481664 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:151 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1312030816 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:253 nsid:1 lba:998926248 len:88
ABORTED - BY REQUEST (00/07) sqid:1 cid:253 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1049582336 len:176
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:169 nsid:1 lba:1109679488 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:169 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:958884728 len:136
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:152 nsid:1 lba:1018345728 len:240
ABORTED - BY REQUEST (00/07) sqid:1 cid:152 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:234 nsid:1 lba:898096896 len:8
ABORTED - BY REQUEST (00/07) sqid:1 cid:234 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:991125248 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:609149952 len:64
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0


=========================

Our vhost conf is the following:

# The Split virtual block device slices block devices into multiple smaller bdevs.
[Split]
  # Syntax:
  #   Split <bdev> <count> [<size_in_megabytes>]
  #
  # Split Nvme1n1 into two equally-sized portions, Nvme1n1p0 and Nvme1n1p1
  Split Nvme0n1 4 200000

  # Split Malloc2 into eight 1-megabyte portions, Malloc2p0 ... Malloc2p7,
  # leaving the rest of the device inaccessible
  #Split Malloc2 8 1
[VhostScsi0]
    Dev 0 Nvme0n1p0
[VhostScsi1]
    Dev 0 Nvme0n1p1
[VhostScsi2]
    Dev 0 Nvme0n1p2
[VhostScsi3]
    Dev 0 Nvme0n1p3

The fio script is the following:

[global]
filename=/mnt/ssdtest1
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5

# direct rand read
[rand-read]
bsrange=1k-512k
#direct=1
rw=randread
runtime=10000
stonewall

# direct seq read
[seq-read]
bsrange=1k-512k
direct=1
rw=read
runtime=10000
stonewall

# direct rand write
[rand-write]
bsrange=1k-512k
direct=1
rw=randwrite
runtime=10000
stonewall

# direct seq write
[seq-write]
bsrange=1k-512k
direct=1
rw=write
runtime=10000


Thread overview: 7 messages
2017-06-13  7:44 [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions Wodkowski, PawelX
  -- strict thread matches above, loose matches on Subject: below --
2017-06-16  7:23 [SPDK] spdk can't pass fio_test_if_4_clients_testing_with_4_split_partitions Wodkowski, PawelX
2017-06-16  3:27 nixun_992
2017-06-16  0:15 [SPDK] spdk can't pass fio test_if_4_clients_testing_with_4_split_partitions Liu, Changpeng
2017-06-15  9:15 nixun_992
2017-06-13  7:18 [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions Liu, Changpeng
2017-06-12  7:39 nixun_992
