* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-21  2:32 Oza Oza
  0 siblings, 0 replies; 10+ messages in thread
From: Oza Oza @ 2017-08-21  2:32 UTC (permalink / raw)
  To: spdk


That’s great, Lance,

If possible, can you please post the patch, because I have to backport it to an
older version of SPDK.



Regards,

Oza.



From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
Sent: Saturday, August 19, 2017 8:40 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] SPDK perf starting I/O failed





Thanks Paul.  I’m trying to get on a system this morning where we have an
NVMoF configuration so that I can sanity-check that my fix operates okay in
that environment too.



--

Lance Hartmann
lance.hartmann(a)oracle.com



On Aug 19, 2017, at 10:02 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:



Excellent! Reach out if you have any problems getting a patch going….



Thx

Paul



From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
Sent: Saturday, August 19, 2017 7:50 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] SPDK perf starting I/O failed



I have a fix for this.  I haven’t tested it yet with NVMoF, but have it
working for NVMe.

--
Lance Hartmann
lance.hartmann(a)oracle.com



On Aug 18, 2017, at 9:12 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:



Hi Oza,



If nobody else resolves this soon, I’ll try and repro on my end here in a
few hours.  Have some things to wrap up first



Thx

Paul



From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Oza Oza
Sent: Thursday, August 17, 2017 11:13 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK perf starting I/O failed



Hi All,



The SPDK perf test fails when the queue size is more than 8187.



Test procedure:
1. Increase the number of huge pages to 4096 - that means total huge page
memory reserved is 4096 * 2048 KB, i.e. 8 GB:
$echo 4096 >/proc/sys/vm/nr_hugepages
2. Run the perf test with queue size as 8188 (or above) and DPDK memory
allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q
8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request
(it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size as 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q
8187 -s 2048 -w read -d 6144 -t 30 -c 0x1
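
(For context, a minimal sketch of where a queue size like 8188 lands in
the SPDK NVMe API - illustrative only, and assuming a tree that exposes
spdk_nvme_io_qpair_opts; the device still clamps the hardware queue to
CAP.MQES + 1 entries:)

#include <stdint.h>
#include "spdk/nvme.h"

/* Illustrative only: request an I/O qpair sized for a deep queue.
 * perf's -q value ultimately has to fit within limits like these. */
static struct spdk_nvme_qpair *
alloc_deep_qpair(struct spdk_nvme_ctrlr *ctrlr, uint32_t qdepth)
{
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts,
                                                  sizeof(opts));
        opts.io_queue_size = qdepth;  /* e.g. 8188, as in the failing run */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}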



THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe
traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in
socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed





Regards,

Oza.



* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-21 18:35 Luse, Paul E
  0 siblings, 0 replies; 10+ messages in thread
From: Luse, Paul E @ 2017-08-21 18:35 UTC (permalink / raw)
  To: spdk


That’s a great question, as some communities use the dist list for other purposes.  We submit all code changes through GerritHub; no reviews are done via email.  Also, the preferred method of communication, if possible, is IRC, but the dist list works as well (obviously).  If you push a new patch that was asked about on the list, like here, feel free to share the link…

Oza, here’s Lance’s patch: https://review.gerrithub.io/#/c/374895/

Thanks Lance!!

-Paul
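
As a rough sketch of the two limits involved (not the patch itself, and
assuming an SPDK tree whose spdk_nvme_io_qpair_opts carries an
io_queue_requests field - older releases, hence the backport, may not):
the hardware queue depth and the driver's per-qpair pool of request
objects are sized separately, and submissions beyond the pool fail with
-ENOMEM even when the hardware queue has room.

#include <stdint.h>
#include "spdk/nvme.h"

/* Sketch only - not the contents of the patch linked above. */
static struct spdk_nvme_qpair *
alloc_qpair_with_requests(struct spdk_nvme_ctrlr *ctrlr, uint32_t qdepth)
{
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts,
                                                  sizeof(opts));
        opts.io_queue_size = qdepth;      /* hardware queue depth */
        opts.io_queue_requests = qdepth;  /* keep pool >= queue depth */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}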

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
Sent: Monday, August 21, 2017 8:41 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] SPDK perf starting I/O failed


Post the patch?  You mean directly to this, the SPDK email list?  Please bear with me.  I did push my change, but this is my first time using GerritHub so I’m still getting my “SPDK legs” under me process-wise ;-)

thanks,
--
Lance Hartmann
lance.hartmann(a)oracle.com

On Aug 20, 2017, at 9:32 PM, Oza Oza <oza.oza(a)broadcom.com> wrote:

That’s great, Lance,
If possible, can you please post the patch, because I have to backport it to an older version of SPDK.

Regards,
Oza.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
Sent: Saturday, August 19, 2017 8:40 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] SPDK perf starting I/O failed


Thanks Paul.  I’m trying to get on a system this morning where we have an NVMoF configuration so that I can sanity-check that my fix operates okay in that environment too.

--
Lance Hartmann
lance.hartmann(a)oracle.com

On Aug 19, 2017, at 10:02 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:

Excellent! Reach out if you have any problems getting a patch going….

Thx
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
Sent: Saturday, August 19, 2017 7:50 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] SPDK perf starting I/O failed

I have a fix for this.  I haven’t tested it yet with NVMoF, but have it working for NVMe.
--
Lance Hartmann
lance.hartmann(a)oracle.com

On Aug 18, 2017, at 9:12 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:

Hi Oza,

If nobody else resolves this soon, I’ll try and repro on my end here in a few hours.  Have some things to wrap up first

Thx
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Oza Oza
Sent: Thursday, August 17, 2017 11:13 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK perf starting I/O failed

Hi All,

The SPDK perf test fails when the queue size is more than 8187.

Test procedure:
1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
$echo 4096 >/proc/sys/vm/nr_hugepages
2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size as 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1

THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed


Regards,
Oza.



* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-21 15:40 Lance Hartmann ORACLE
  0 siblings, 0 replies; 10+ messages in thread
From: Lance Hartmann ORACLE @ 2017-08-21 15:40 UTC (permalink / raw)
  To: spdk



Post the patch?  You mean directly to this, the SPDK email list?  Please bear with me.  I did push my change, but this is my first time using GerritHub so I’m still getting my “SPDK legs” under me process-wise ;-)

thanks,
--
Lance Hartmann
lance.hartmann(a)oracle.com


> On Aug 20, 2017, at 9:32 PM, Oza Oza <oza.oza(a)broadcom.com> wrote:
> 
> That’s great, Lance,
> If possible, can you please post the patch, because I have to backport it to an older version of SPDK.
>  
> Regards,
> Oza.
>  
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
> Sent: Saturday, August 19, 2017 8:40 PM
> To: Storage Performance Development Kit
> Subject: Re: [SPDK] SPDK perf starting I/O failed
>  
>  
> Thanks Paul.  I’m trying to get on a system this morning where we have an NVMoF configuration so that I can sanity-check that my fix operates okay in that environment too.
>  
> --
> Lance Hartmann
> lance.hartmann(a)oracle.com
>  
>> On Aug 19, 2017, at 10:02 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:
>>  
>> Excellent! Reach out if you have any problems getting a patch going….
>>  
>> Thx
>> Paul
>> 
>> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
>> Sent: Saturday, August 19, 2017 7:50 AM
>> To: Storage Performance Development Kit <spdk(a)lists.01.org>
>> Subject: Re: [SPDK] SPDK perf starting I/O failed
>>  
>> I have a fix for this.  I haven’t tested it yet with NVMoF, but have it working for NVMe.
>> --
>> Lance Hartmann
>> lance.hartmann(a)oracle.com
>>  
>>> On Aug 18, 2017, at 9:12 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:
>>>  
>>> Hi Oza,
>>>  
>>> If nobody else resolves this soon, I’ll try and repro on my end here in a few hours.  Have some things to wrap up first
>>>  
>>> Thx
>>> Paul
>>>  
>>> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Oza Oza
>>> Sent: Thursday, August 17, 2017 11:13 PM
>>> To: Storage Performance Development Kit <spdk(a)lists.01.org>
>>> Subject: [SPDK] SPDK perf starting I/O failed
>>>  
>>> Hi All,
>>>  
>>> The SPDK perf test fails when the queue size is more than 8187.
>>>  
>>> Test procedure: 
>>> 1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
>>> $echo 4096 >/proc/sys/vm/nr_hugepages
>>> 2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
>>> /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
>>> 3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
>>> 4. Observe that the application also hangs.
>>> 5. Run the same test with queue size as 8187 and observe that the test passes:
>>> /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1
>>>  
>>> THE COMMAND OUTPUT IS: 
>>> root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1 
>>> Starting DPDK 16.11.1 initialization... 
>>> [ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ] 
>>> EAL: Detected 8 lcore(s) 
>>> EAL: Probing VFIO support... 
>>> EAL: VFIO support initialized 
>>> EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0 
>>> Initializing NVMe Controllers 
>>> EAL: PCI device 0001:01:00.0 on NUMA socket 0 
>>> EAL: probe driver: 8086:953 spdk_nvme 
>>> EAL: using IOMMU type 1 (Type 1) 
>>> [151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402) 
>>> [151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0 
>>> Attaching to NVMe Controller at 0001:01:00.0 [8086:0953] 
>>> Attached to NVMe Controller at 0001:01:00.0 [8086:0953] 
>>> Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0 
>>> Initialization complete. Launching workers. 
>>> Starting thread on core 0 
>>> starting I/O failed 
>>>  
>>>  
>>> Regards,
>>> Oza.



* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-19 15:10 Lance Hartmann ORACLE
  0 siblings, 0 replies; 10+ messages in thread
From: Lance Hartmann ORACLE @ 2017-08-19 15:10 UTC (permalink / raw)
  To: spdk



Thanks Paul.  I’m trying to get on a system this morning where we have an NVMoF configuration so that I can sanity-check that my fix operates okay in that environment too.

--
Lance Hartmann
lance.hartmann(a)oracle.com


> On Aug 19, 2017, at 10:02 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:
> 
> Excellent! Reach out if you have any problems getting a patch going….
>  
> Thx
> Paul
> 
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
> Sent: Saturday, August 19, 2017 7:50 AM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] SPDK perf starting I/O failed
>  
> I have a fix for this.  I haven’t tested it yet with NVMoF, but have it working for NVMe.
> --
> Lance Hartmann
> lance.hartmann(a)oracle.com
>  
> On Aug 18, 2017, at 9:12 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:
>  
> Hi Oza,
>  
> If nobody else resolves this soon, I’ll try and repro on my end here in a few hours.  Have some things to wrap up first
>  
> Thx
> Paul
>  
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Oza Oza
> Sent: Thursday, August 17, 2017 11:13 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] SPDK perf starting I/O failed
>  
> Hi All,
>  
> The SPDK perf test fails when the queue size is more than 8187.
>  
> Test procedure: 
> 1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
> $echo 4096 >/proc/sys/vm/nr_hugepages
> 2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
> /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
> 3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
> 4. Observe that the application also hangs.
> 5. Run the same test with queue size as 8187 and observe that the test passes:
> /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1
>  
> THE COMMAND OUTPUT IS: 
> root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1 
> Starting DPDK 16.11.1 initialization... 
> [ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ] 
> EAL: Detected 8 lcore(s) 
> EAL: Probing VFIO support... 
> EAL: VFIO support initialized 
> EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0 
> Initializing NVMe Controllers 
> EAL: PCI device 0001:01:00.0 on NUMA socket 0 
> EAL: probe driver: 8086:953 spdk_nvme 
> EAL: using IOMMU type 1 (Type 1) 
> [151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402) 
> [151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0 
> Attaching to NVMe Controller at 0001:01:00.0 [8086:0953] 
> Attached to NVMe Controller at 0001:01:00.0 [8086:0953] 
> Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0 
> Initialization complete. Launching workers. 
> Starting thread on core 0 
> starting I/O failed 
>  
>  
> Regards,
> Oza.


* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-19 15:02 Luse, Paul E
  0 siblings, 0 replies; 10+ messages in thread
From: Luse, Paul E @ 2017-08-19 15:02 UTC (permalink / raw)
  To: spdk


Excellent! Reach out if you have any problems getting a patch going….

Thx
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Lance Hartmann ORACLE
Sent: Saturday, August 19, 2017 7:50 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] SPDK perf starting I/O failed

I have a fix for this.  I haven’t tested it yet with NVMoF, but have it working for NVMe.
--
Lance Hartmann
lance.hartmann(a)oracle.com

On Aug 18, 2017, at 9:12 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:

Hi Oza,

If nobody else resolves this soon, I’ll try and repro on my end here in a few hours.  Have some things to wrap up first

Thx
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Oza Oza
Sent: Thursday, August 17, 2017 11:13 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK perf starting I/O failed

Hi All,

The SPDK perf test fails when the queue size is more than 8187.

Test procedure:
1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
$echo 4096 >/proc/sys/vm/nr_hugepages
2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size as 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1

THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed


Regards,
Oza.



* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-19 14:50 Lance Hartmann ORACLE
  0 siblings, 0 replies; 10+ messages in thread
From: Lance Hartmann ORACLE @ 2017-08-19 14:50 UTC (permalink / raw)
  To: spdk


I have a fix for this.  I haven’t tested it yet with NVMoF, but have it working for NVMe.
--
Lance Hartmann
lance.hartmann(a)oracle.com


> On Aug 18, 2017, at 9:12 AM, Luse, Paul E <paul.e.luse(a)intel.com> wrote:
> 
> Hi Oza,
>  
> If nobody else resolves this soon, I’ll try and repro on my end here in a few hours.  Have some things to wrap up first
>  
> Thx
> Paul
> 
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Oza Oza
> Sent: Thursday, August 17, 2017 11:13 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] SPDK perf starting I/O failed
>  
> Hi All,
>  
> The SPDK perf test fails when the queue size is more than 8187.
>  
> Test procedure: 
> 1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
> $echo 4096 >/proc/sys/vm/nr_hugepages
> 2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
> /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
> 3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
> 4. Observe that the application also hangs.
> 5. Run the same test with queue size as 8187 and observe that the test passes:
> /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1
>  
> THE COMMAND OUTPUT IS: 
> root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1 
> Starting DPDK 16.11.1 initialization... 
> [ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ] 
> EAL: Detected 8 lcore(s) 
> EAL: Probing VFIO support... 
> EAL: VFIO support initialized 
> EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0 
> Initializing NVMe Controllers 
> EAL: PCI device 0001:01:00.0 on NUMA socket 0 
> EAL: probe driver: 8086:953 spdk_nvme 
> EAL: using IOMMU type 1 (Type 1) 
> [151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402) 
> [151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0 
> Attaching to NVMe Controller at 0001:01:00.0 [8086:0953] 
> Attached to NVMe Controller at 0001:01:00.0 [8086:0953] 
> Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0 
> Initialization complete. Launching workers. 
> Starting thread on core 0 
> starting I/O failed 
>  
>  
> Regards,
> Oza.



* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-18 17:42 Luse, Paul E
  0 siblings, 0 replies; 10+ messages in thread
From: Luse, Paul E @ 2017-08-18 17:42 UTC (permalink / raw)
  To: spdk


Oza/Jim,

FWIW I tried to repro this.  I have Max Queue Entries of 4096 with the SSD that I have, so I can’t run the same exact cmd below, but if I change the Q depth only, the highest I can go (powers of 2) is 256.  Anything beyond that fails (in GDB I can see it’s failing with ENOMEM as the rc in submit_single_io()).

FYI, here’s my passing cmd line:
sudo ./perf -r 'trtype:PCIe traddr:0000:06:00.0' -q 256 -s 2048 -w read -t 5 -c 0x1 -d 6144

And failing:
sudo ./perf -r 'trtype:PCIe traddr:0000:06:00.0' -q 512 -s 2048 -w read -t 5 -c 0x1 -d 6144

Also, I’m on master and have 32GB RAM in my system.  Let me know if there’s anything I can do on this to help

Thx
Paul
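
For reference, a trimmed sketch of the failure path described above
(paraphrased, not the exact perf source): spdk_nvme_ns_cmd_read()
returns a negated errno instead of queueing when no request objects are
free, and perf reports that as "starting I/O failed".

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback; the real tool does bookkeeping and resubmits. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
        (void)ctx;
        (void)cpl;
}

static int
submit_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                void *buf, uint64_t lba, uint32_t lba_count)
{
        int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, lba_count,
                                       io_complete, NULL, 0);

        if (rc != 0) {
                /* -ENOMEM here means the qpair ran out of request
                 * objects, not that the device rejected the I/O. */
                fprintf(stderr, "starting I/O failed (rc=%d)\n", rc);
        }
        return rc;
}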

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Friday, August 18, 2017 8:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] SPDK perf starting I/O failed

Hi Oza,

What is CAP.MQES on the device you are testing?

You can get this by running:

examples/nvme/identify/identify | grep "Maximum Queue Entries"

-Jim


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Oza Oza <oza.oza(a)broadcom.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, August 17, 2017 at 11:12 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK perf starting I/O failed

Hi All,

The SPDK perf test fails when the queue size is more than 8187.

Test procedure:
1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
$echo 4096 >/proc/sys/vm/nr_hugepages
2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size as 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1

THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed


Regards,
Oza.


* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-18 15:23 Harris, James R
  0 siblings, 0 replies; 10+ messages in thread
From: Harris, James R @ 2017-08-18 15:23 UTC (permalink / raw)
  To: spdk


Hi Oza,

What is CAP.MQES on the device you are testing?

You can get this by running:

examples/nvme/identify/identify | grep "Maximum Queue Entries"

-Jim
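
For reference, a minimal sketch of reading that limit through the
public API (CAP.MQES is zero-based, so the largest legal queue has
MQES + 1 entries; a -q value above that cannot fit in one hardware
queue):

#include <stdio.h>
#include "spdk/nvme.h"

static void
print_max_queue_entries(struct spdk_nvme_ctrlr *ctrlr)
{
        union spdk_nvme_cap_register cap =
                spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* Mirrors the "Maximum Queue Entries" line from identify. */
        printf("Maximum Queue Entries: %u\n",
               (unsigned int)cap.bits.mqes + 1u);
}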


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Oza Oza <oza.oza(a)broadcom.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, August 17, 2017 at 11:12 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK perf starting I/O failed

Hi All,

The SPDK perf test fails when the queue size is more than 8187.

Test procedure:
1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
$echo 4096 >/proc/sys/vm/nr_hugepages
2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size as 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1

THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed


Regards,
Oza.


* Re: [SPDK] SPDK perf starting I/O failed
@ 2017-08-18 14:12 Luse, Paul E
  0 siblings, 0 replies; 10+ messages in thread
From: Luse, Paul E @ 2017-08-18 14:12 UTC (permalink / raw)
  To: spdk


Hi Oza,

If nobody else resolves this soon, I’ll try and repro on my end here in a few hours.  Have some things to wrap up first

Thx
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Oza Oza
Sent: Thursday, August 17, 2017 11:13 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK perf starting I/O failed

Hi All,

The SPDK perf test fails when the queue size is more than 8187.

Test procedure:
1. Increase the number of huge pages to 4096 - that means total huge page memory reserved is 4096 * 2048 KB, i.e. 8 GB:
$echo 4096 >/proc/sys/vm/nr_hugepages
2. Run the perf test with queue size as 8188 (or above) and DPDK memory allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request (it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size as 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8187 -s 2048 -w read -d 6144 -t 30 -c 0x1

THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed


Regards,
Oza.


* [SPDK] SPDK perf starting I/O failed
@ 2017-08-18  6:12 Oza Oza
  0 siblings, 0 replies; 10+ messages in thread
From: Oza Oza @ 2017-08-18  6:12 UTC (permalink / raw)
  To: spdk


Hi All,



The SPDK perf test fails when the queue size is more than 8187.



Test procedure:
1. Increase the number of huge pages to 4096 - that means total huge page
memory reserved is 4096 * 2048 KB, i.e. 8 GB:
$echo 4096 >/proc/sys/vm/nr_hugepages
2. Run the perf test with queue size as 8188 (or above) and DPDK memory
allocation of around 6 GB:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q
8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
3. Observe that the test fails at "spdk_nvme_ns_cmd_read" for one request
(it seems 8187 requests succeed; any number above that fails).
4. Observe that the application also hangs.
5. Run the same test with queue size as 8187 and observe that the test passes:
/usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe traddr:0001:01:00.0' -q
8187 -s 2048 -w read -d 6144 -t 30 -c 0x1



THE COMMAND OUTPUT IS:
root(a)bcm958742t:~# /usr/share/spdk/examples/nvme/perf -r 'trtype:PCIe
traddr:0001:01:00.0' -q 8188 -s 2048 -w read -d 6144 -t 30 -c 0x1
Starting DPDK 16.11.1 initialization...
[ DPDK EAL parameters: perf -c 1 -m 6144 --file-prefix=spdk_pid10557 ]
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in
socket_id 0
Initializing NVMe Controllers
EAL: PCI device 0001:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
[151516.159960] vfio-pci 0001:01:00.0: enabling device (0400 -> 0402)
[151516.272405] vfio_ecap_init: 0001:01:00.0 hiding ecap 0x19(a)0x2a0
Attaching to NVMe Controller at 0001:01:00.0 [8086:0953]
Attached to NVMe Controller at 0001:01:00.0 [8086:0953]
Associating INTEL SSDPEDMW400G4 (CVCQ6433008P400AGN ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
starting I/O failed





Regards,

Oza.


Thread overview: 10 messages
2017-08-21  2:32 [SPDK] SPDK perf starting I/O failed Oza Oza
2017-08-21 18:35 Luse, Paul E
2017-08-21 15:40 Lance Hartmann ORACLE
2017-08-19 15:10 Lance Hartmann ORACLE
2017-08-19 15:02 Luse, Paul E
2017-08-19 14:50 Lance Hartmann ORACLE
2017-08-18 17:42 Luse, Paul E
2017-08-18 15:23 Harris, James R
2017-08-18 14:12 Luse, Paul E
2017-08-18  6:12 Oza Oza
