From: Luse, Paul E <paul.e.luse at intel.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] SPDK perf starting I/O failed
Date: Fri, 18 Aug 2017 17:42:21 +0000 [thread overview]
Message-ID: <82C9F782B054C94B9FC04A331649C77A68F1E10B@fmsmsx104.amr.corp.intel.com> (raw)
In-Reply-To: 704F567C-0506-4C84-A7E5-1F36FC2E6D54@intel.com
Oza/Jim,
FWIW I tried to repro this. The SSD I have has Max Queue Entries of 4096, so I can't run the exact same cmd below, but if I change the queue depth only, the highest I can go (powers of 2) is 256. Anything beyond that fails (in GDB I can see it's failing with ENOMEM as the rc in submit_single_io()).
FYI, here’s my passing cmd line:
sudo ./perf -r 'trtype:PCIe traddr:0000:06:00.0' -q 256 -s 2048 -w read -t 5 -c 0x1 -d 6144
And failing:
sudo ./perf -r 'trtype:PCIe traddr:0000:06:00.0' -q 512 -s 2048 -w read -t 5 -c 0x1 -d 6144
Also, I’m on master and have 32GB RAM in my system. Let me know if there’s anything I can do on this to help
Thx
Paul
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Friday, August 18, 2017 8:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] SPDK perf starting I/O failed
Hi Oza,
What is CAP.MQES on the device you are testing?
You can get this by running:
examples/nvme/identify/identify | grep “Maximum Queue Entries”
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Oza Oza <oza.oza(a)broadcom.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, August 17, 2017 at 11:12 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK perf starting I/O failed
Hi All,
The SPDK perf test fails when the queue size is greater than 8187.
Test procedure:
1. Increase the number of huge pages to 4096, which reserves 4096 * 2048 KB = 8 GB of huge page memory:
$ echo 4096 > /proc/sys/vm/nr_hugepages
2. Run the perf test with queue size 8188 (or above) and a DPDK memory allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1
THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed
Regards,
Oza.