* [SPDK] Number of NVMe devices per core
@ 2018-02-15 13:46 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-15 13:46 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 276 bytes --]

Hi All,
I ran 'perf' with different settings to see how to get the most out of the NVMe disks.
I see that performance degrades once the CPU mask puts more than 2 devices per
core. Is that expected? What is the theoretical (or empirical) limit of devices one
core can handle?
Sincerely,
Ernest



* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-18 12:24 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-18 12:24 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 14071 bytes --]

OK, about the configuration: it does indeed share the 8 drives between 2 NUMA nodes
(Intel S2600CW motherboard). Turbo Boost and SpeedStep were disabled, so I
enabled them. Queue depth was set to 128. I added one test that sets the CPU mask
so the NVMes are assigned to their corresponding NUMA nodes. I successfully
reached the maximum IOPS possible - 4.3M with two CPUs. Interestingly,
when I run all tests on NUMA node 0, which implies copying data from node to
node, I reach the same IOPS. However, I am concerned about the average latency I
see, which hovers around 200us - quite high, I think. Any ideas?
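
A rough sanity check on that latency, assuming -q 128 per device as described above: by
Little's law the average latency is roughly the number of outstanding I/Os divided by the
throughput, i.e. about 128 / ~545K IOPS ~ 235us per device, or equivalently (8 x 128) / 4.36M
IOPS ~ 235us overall. So an average around 200-240us is about what a queue depth of 128
implies at this throughput, and lowering -q should trade some IOPS for lower average latency.
(For reference, the -c 40001 mask in the attached results is 0x40001, i.e. bits 0 and 18 -
one core on each socket with HT off.)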

On Thu, Feb 15, 2018 at 7:55 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
wrote:

> After looking at your perf command line more closely, it looks like your
> system is hooked up with 8 NVMe drives (4 on each socket). It really
> depends on your platform whether it allows you to connect 8 drives on the same
> socket or not - basically, whether it has enough PCIe slots to let you
> hook up 8 drives on the same socket.
>
>
>
> As far as I remember, I think our 3-3.5M IOPS number was achieved on the
> platform which had all the drives connected to the same socket. Our
> recommendation would be to see if you can enable Turbo to clock the CPU to
> a higher frequency and run with -q 128.
>
>
>
> The E5-2697 v4 can go as high as 3.6GHz with Turbo, I believe. I think that should
> definitely help you get more IOPS/core.
>
>
>
> Thanks,
>
> Vishal
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 10:42 AM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> Vishal, I wasn't multiplying 128 by 8; I was just checking how queue depth
> would affect the performance. I don't remember all the values I've tried, but I
> started from something low, like 32; however, I didn't see any significant
> perf impact (it could be considered measurement error), so I left it at 1024.
> Will try longer runs and check the socket assignment as soon as I get back
> to the office.
>
> When I see a BDF of 80+, does that mean I have to assign a core mask (for an 18-core
> CPU) covering cores 0 and 18? BTW, how do I assign all NVMe devices to the same NUMA node?
> Otherwise, how come the aforementioned Intel presentation mentions 8 NVMe devices on a
> single core? They all have to be assigned to the same socket to get max perf,
> right?
>
>
>
> On Thu, Feb 15, 2018 at 6:54 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
> wrote:
>
> Thanks Ernest!
>
>
>
> Can you try -q 128 (or 256) instead, and set -t to something like 120 (2
> minutes)? I think perf applies the -q parameter value to each NVMe drive, so
> you don't have to multiply 128 by 8.
>
> There might be different ways to check the CPU socket connection of a
> particular drive. The quickest would be to check using "lspci | grep -i Non"
> and look at the BDF of each of your NVMe drives. Anything greater than 80 in
> the BDF, e.g. "80:00.0", would mean it is connected to socket 1.
>
>
>
> Thanks,
>
> Vishal
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 9:46 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> >> What is queue depth you are using while running perf benchmark?
>
> >> Can you share your exact perf command line?
>
> echo 8dev x 1CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
>
> echo ===============================================================================
>
> echo 8dev x 2CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 3
>
> echo ===============================================================================
>
> echo 8dev x 3CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 7
>
> echo ===============================================================================
>
> echo 8dev x 4CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c F
>
> >> What is the model of your NVMe drives?
>
> INTEL SSDPE2MD800G4
>
> >> Do you know which CPU socket are your NVMes connected?
>
> Nope, not sure how to check
>
>
>
> Regarding the Turbo setting, I will check in the BIOS; however, I'm out of the office for
> the weekend. Will follow up as soon as I'm back.
>
>
>
> In addition, I'm attaching the full run of my test script; it may provide
> additional info.
>
>
>
> Sincerely,
>
> Ernest
>
>
>
>
>
> On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
> wrote:
>
> Hi Zed,
>
>
>
> It is good that you are able to scale performance with the number of cores. Our
> testing has shown that we can get close to 3M IOPS/core using the perf
> benchmark. A few questions for you regarding the configuration:
>
>
>
> What queue depth are you using while running the perf benchmark? Can you
> share your exact perf command line? What is the model of your NVMe drives?
> Do you know which CPU socket your NVMes are connected to? You should try running
> perf with the core mask set to the same socket the NVMes are connected to. This would
> help avoid any cross-socket traffic.
>
>
>
> Also, in order to achieve max performance per core, we generally enable Turbo
> to allow the CPU to run at its max frequency.
>
>
>
> Thanks,
>
> Vishal
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 8:52 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
>
> AFAIR, to get a debug build I would have to pass an extra command line argument to
> configure; I just ran ./configure and make.
>
>
>
>
>
> On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will
> confirm.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 8:26 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] Number of NVMe devices per core
>
>
>
> Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
>
> 8 x Intel® SSD DC P3700 Series
>
> 16 x 16 DIMM DDR4  2133 MHz
>
> Ubuntu 17.10
>
> 4.13.0-32-generic
>
> SPDK version - cloned today
>
>
>
> The hardware is quite close to the one in the presentation. Now, the results I get
> with perf for 1, 2 and 4 cores:
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
>
> Device Information                                     :       IOPS       MB/s    Average        min        max
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
> ========================================================
> Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
>
> Device Information                                     :       IOPS       MB/s    Average        min        max
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
> ========================================================
> Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
>
> Device Information                                     :       IOPS       MB/s    Average        min        max
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
> ========================================================
> Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29
>
>
>
> Any idea what could possibly be going wrong here?
>
>
>
>
>
> On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Ernest,
>
>
>
> The answer depends on many different factors – the IOPs limit per device,
> CPU core frequency, Turbo, etc.  As a frame of reference, the team here at
> Intel has measured over 3M IO/s on a single Intel Xeon core[1].
>
>
>
> -Jim
>
>
>
> [1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 6:46 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] Number of NVMe devices per core
>
>
>
> Hi All,
>
> I ran 'perf' with different settings to see how to get the most out of the NVMe disks.
> I see that performance degrades once the CPU mask puts more than 2 devices per
> core. Is that expected? What is the theoretical (or empirical) limit of devices one
> core can handle?
>
> Sincerely,
>
> Ernest
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>


[-- Attachment #3: TestResults.txt --]
[-- Type: text/plain, Size: 44430 bytes --]

===============================================================================
Device 1
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2875 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  728387.54    2845.26     175.70       4.17    5135.50
========================================================
Total                                                  :  728387.54    2845.26     175.70       4.17    5135.50

===============================================================================
Device 2
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2882 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  569124.12    2223.14     224.87       4.18    7264.36
========================================================
Total                                                  :  569124.12    2223.14     224.87       4.18    7264.36

===============================================================================
Device 3
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2884 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  526707.85    2057.45     242.98       4.21    5283.93
========================================================
Total                                                  :  526707.85    2057.45     242.98       4.21    5283.93

===============================================================================
Device 4
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2888 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  601264.88    2348.69     212.85       4.12    5472.76
========================================================
Total                                                  :  601264.88    2348.69     212.85       4.12    5472.76

===============================================================================
Device 5
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2890 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  462536.88    1806.78     276.70       5.04    4613.92
========================================================
Total                                                  :  462536.88    1806.78     276.70       5.04    4613.92

===============================================================================
Device 6
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2905 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  468748.97    1831.05     273.03       5.14    4239.06
========================================================
Total                                                  :  468748.97    1831.05     273.03       5.14    4239.06

===============================================================================
Device 7
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2907 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  465251.64    1817.39     275.09       4.97    8475.08
========================================================
Total                                                  :  465251.64    1817.39     275.09       4.97    8475.08

===============================================================================
Device 8
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2911 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  462405.63    1806.27     276.78       4.94    6961.70
========================================================
Total                                                  :  462405.63    1806.27     276.78       4.94    6961.70

===============================================================================
1dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2913 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  728655.46    2846.31     175.64       4.14    7524.85
========================================================
Total                                                  :  728655.46    2846.31     175.64       4.14    7524.85

===============================================================================
2dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2927 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  736634.45    2877.48     173.71       4.10    7175.67
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  569104.84    2223.07     224.86       4.12    7594.83
========================================================
Total                                                  : 1305739.29    5100.54     196.00       4.10    7594.83

===============================================================================
3dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2929 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  735808.56    2874.25     173.90       4.13    7021.71
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  582321.62    2274.69     219.75       4.03    5021.42
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  524099.80    2047.26     244.17       4.12   10247.68
========================================================
Total                                                  : 1842229.98    7196.21     208.39       4.03   10247.68

===============================================================================
4dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2931 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  737012.27    2878.95     173.61       4.21    7704.28
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  577374.43    2255.37     221.63       4.16    7152.40
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  533526.72    2084.09     239.87       4.16   10256.55
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  599666.88    2342.45     213.42       4.14   15688.40
========================================================
Total                                                  : 2447580.29    9560.86     209.14       4.14   15688.40

===============================================================================
5dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2933 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  591454.54    2310.37     216.37       4.36    7569.10
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  535658.18    2092.41     238.91       4.40    7697.29
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  485962.27    1898.29     263.36       4.33   10223.57
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  564529.10    2205.19     226.71       4.62   15365.17
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  422215.38    1649.28     303.14      10.80  740522.02
========================================================
Total                                                  : 2599819.47   10155.54     246.13       4.33  740522.02

===============================================================================
6dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2935 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  454832.10    1776.69     281.37      16.25    3388.27
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  452339.94    1766.95     282.92       6.00    7451.45
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437310.78    1708.25     292.66       7.66   12252.38
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  453415.11    1771.15     282.28       7.66   15312.76
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  394598.51    1541.40     324.35      19.21   20694.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  390486.51    1525.34     327.79      17.95   26146.91
========================================================
Total                                                  : 2582982.94   10089.78     297.29       6.00   26146.91

===============================================================================
7dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2937 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  368750.78    1440.43     347.05      24.00    3209.46
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  368288.13    1438.63     347.50      15.18    5060.62
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  366648.41    1432.22     349.07      14.84   10006.02
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  368337.24    1438.82     347.48       9.95   15339.78
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  349032.94    1363.41     366.71      18.95   20903.56
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  351129.60    1371.60     364.53      23.02   26334.30
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  344667.17    1346.36     371.39      21.80  664703.91
========================================================
Total                                                  : 2516854.27    9831.46     355.97       9.95  664703.91

===============================================================================
8dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2949 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  309542.42    1209.15     413.45       8.05    7120.38
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  309121.20    1207.50     414.02       7.05    7453.46
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  308895.82    1206.62     414.34      13.62   12477.65
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  309466.99    1208.86     413.59      10.71   16410.33
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  302408.13    1181.28     423.25      22.89   21233.45
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  302810.27    1182.85     422.71      21.65   27049.97
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  302309.61    1180.90     423.42      20.80   32172.09
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  299791.93    1171.06     427.00      13.28  334599.30
========================================================
Total                                                  : 2444346.38    9548.23     418.91       7.05  334599.30

===============================================================================
8dev x 2CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid2951 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 1
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  578265.37    2258.85     221.28       4.14    4151.52
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  608566.95    2377.21     210.28       4.15   12852.53
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  476094.46    1859.74     268.82       4.55   24100.99
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  472754.30    1846.70     270.74       5.44   34695.07
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  737286.53    2880.03     173.55       4.17    3536.07
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  527378.97    2060.07     242.65       4.13    7800.35
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  478394.73    1868.73     267.51       4.71   15022.28
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  484177.85    1891.32     264.33       4.66   26064.85
========================================================
Total                                                  : 4362919.17   17042.65     234.66       4.13   34695.07

===============================================================================
8dev x 3CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 7 --file-prefix=spdk_pid2955 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 1
Initialization complete. Launching workers.
Starting thread on core 2
Starting thread on core 1
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 2:  575634.16    2248.57     222.28       4.13    7090.11
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 2:  479946.34    1874.79     266.63       4.81    4645.92
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 2:  472166.12    1844.40     271.04       4.98   15403.37
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 1:  732872.30    2862.78     174.59       4.12    7133.99
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  607845.54    2374.40     210.54       4.13   14782.42
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 1:  478157.84    1867.80     267.67       4.54   25678.04
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  533111.38    2082.47     240.03       4.11    3715.65
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  481925.20    1882.52     265.54       4.71    8018.88
========================================================
Total                                                  : 4361658.88   17037.73     234.72       4.11   25678.04

===============================================================================
8dev x 4CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid2960 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 3
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 3
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 3
Starting thread on core 2
Starting thread on core 1
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  605859.19    2366.64     211.21       4.09    7734.48
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  470791.39    1839.03     271.86       4.78   30396.26
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  529520.31    2068.44     241.67       4.15    7163.76
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  478007.09    1867.22     267.76       4.79   30249.49
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  579702.43    2264.46     220.74       4.13    6301.96
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  478600.37    1869.53     267.42       4.83   26259.14
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  734452.52    2868.96     174.20       4.14    3272.76
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  478549.44    1869.33     267.40       4.72    7784.02
========================================================
Total                                                  : 4355482.74   17013.60     235.05       4.09   30396.26

===============================================================================
8dev x 2CPUs x 2 NUMA nodes
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 40001 --file-prefix=spdk_pid2965 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 18
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 18
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 18
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 18
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 18
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 18:  573138.14    2238.82     223.25       4.30    7083.35
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 18:  606201.15    2367.97     211.06       4.28    7733.87
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 18:  477412.78    1864.89     268.06       4.45   11673.91
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 18:  473107.08    1848.07     270.53       4.80   16989.10
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  733883.44    2866.73     174.35       4.20    7144.70
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  532627.19    2080.57     240.26       4.14    4114.67
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  483222.78    1887.59     264.83       4.75    9103.54
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  480046.30    1875.18     266.58       5.27   16885.46
========================================================
Total                                                  : 4359638.86   17029.84     234.82       4.14   16989.10
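
(For reference, the ~235 us average shown here is roughly what Little's law predicts at these rates: each drive is doing about 545K IOPS, and 0.000235 s x 545,000 IOPS works out to approximately 128 I/Os in flight per drive, so at a per-drive queue depth around 128 most of the measured latency is time spent waiting in the submission queue rather than device service time.)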


[-- Attachment #4: NVMeTest.sh --]
[-- Type: application/x-sh, Size: 5979 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 18:24 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-15 18:24 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 13693 bytes --]

To answer all these questions I need physical access to the machine; I
will check as soon as I get to the office.
Is there any useful utility that can check this for me instead of digging
into the vendor's spec?

On Thu, Feb 15, 2018 at 7:55 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
wrote:

> After looking at your perf command line more closely, It looks like your
> system is hooked up with 8 nvme drives (4 on each socket). It really
> depends on your platform, if it allows you to connect 8 drives on the same
> socket or not. Basically does it have enough PCie slots for you to allow
> hook up 8 drives on same socket.
>
>
>
> As far as I remember I think our 3-3.5M IOPS # was achieved on the
> platform which has all the drives connected to same socket. Our
> recommendation would be to see if you can enable Turbo to clock the CPU to
> higher frequency and run with –q 128.
>
>
>
> E5- 2697v4 can go as high as 3.6GHz @ Turbo I believe. I think that should
> definitely help you get more IOPs/core.
>
>
>
> Thanks,
>
> Vishal
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 10:42 AM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> Vishal, I wast multiplying 128 by 8, just was checking how queue depth
> would affect the performance, I dont remember all values I've tried, but I
> started from something low, like 32, however I didnt see any significant
> perf. impact, could be considered as measurement error, so I left it 1024.
> Will try longer runs and check the socket assignment as soon as I get back
> to office.
>
> When I see the BDF 80+, means I have to assign mask (for 18 core CPU) 0
> and 18? BTW, how do I assign all NVMe devices to the same NUMA node?
> otherwise, how come the aforementioned intel presentation mention 8 NVMe on
> single core? They all have be assigned to the same socket to get max. perf.
> right?
>
>
>
> On Thu, Feb 15, 2018 at 6:54 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
> wrote:
>
> Thanks Ernest!
>
>
>
> Can you try –q 128 (or 256) instead and –t to something like 120 (2
> minutes)? I think perf applies –q parameter value to each nvme drive. So
> you don’t have to multiply 128 by 8.
>
> There might be different ways to check CPU socket connection of a
> particular drive. Quickest would be to check using “lspci | grep –i Non”
> and look for BDF for each of your NVMe drive. Anything greater than 80 in
> the BDF eg: “80:00.0” would mean it is connected to socket 1.
>
>
>
> Thanks,
>
> Vishal
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 9:46 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> >> What is queue depth you are using while running perf benchmark?
>
> >> Can you share your exact perf command line?
>
> echo 8dev x 1CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
>
> echo ============================================================
> ===================
>
> echo 8dev x 2CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 3
>
> echo ============================================================
> ===================
>
> echo 8dev x 3CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 7
>
> echo ============================================================
> ===================
>
> echo 8dev x 4CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c F
>
> >> What is the model of your NVMe drives?
>
> INTEL SSDPE2MD800G4
>
> >> Do you know which CPU socket are your NVMes connected?
>
> Nope, not sure how to check
>
>
>
> Regarding Turbo setting, will check in BIOS, however I'm out of office for
> the weekend. Will get back as soon as I get back.
>
>
>
> In addition, I'm attaching the full run of my test script, it may provide
> additional info.
>
>
>
> Sincerely,
>
> Ernest
>
>
>
>
>
> On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
> wrote:
>
> Hi Zed,
>
>
>
> It is good that you are able to scale performance with # of cores… Our
> testing has showed that we can get close to 3M IOPs/Core using perf
> benchmark. Few questions I have for you regarding the configuration.
>
>
>
> What is queue depth you are using while running perf benchmark? Can you
> share your exact perf command line? What is the model of your NVMe drives?
> Do you know which CPU socket are your NVMes connected? Should try and run
> perf core mask to the same socket where NVMes are connected. This would
> help avoid any cross-socket traffic.
>
>
>
> Also, In order to achieve max performance/core, we generally enable Turbo
> to allow CPU to run at max frequency.
>
>
>
> Thanks,
>
> Vishal
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 8:52 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
>
> AFAIR to get debug build I have to add some command line argument to
> configure, I just ran ./configure and make
>
>
>
>
>
> On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will
> confirm.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 8:26 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] Number of NVMe devices per core
>
>
>
> Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
>
> 8 x Intel® SSD DC P3700 Series
>
> 16 x 16 DIMM DDR4  2133 MHz
>
> Ubuntu 17.10
>
> 4.13.0-32-generic
>
> SPDK version - cloned today
>
>
>
> The hardware quite close to the one in the presentation. now results I get
> with perf for 1, 2 and 4 cores
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20
>  807.83    4951.17    1088.65   28467.64
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20
>  807.83    4951.71    4392.74   25127.60
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20
>  807.83    4952.46    4379.57   21966.62
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20
>  807.83    4953.35    4379.25   23165.75
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20
>  807.83    4954.31    4357.84   30302.48
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20
>  807.83    4955.33    4366.51   37664.17
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20
>  807.83    4956.35    4357.68   45025.49
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20
>  807.83    4957.39    3205.82   52456.83
>
> ========================================================
>
> Total                                                  : 1654425.60
> 6462.60    4954.01    1088.65   52456.83
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00
> 1713.25    2334.48    1363.58   16128.43
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00
> 1713.25    2335.47    2073.78   14921.21
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00
> 1713.25    2336.68    2038.52   29808.68
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00
> 1713.25    2337.99    1827.81   44872.18
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60
> 1710.85    2337.78    1409.41   15688.80
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60
> 1710.85    2338.77    2034.40   14879.24
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60
> 1710.85    2339.98    2037.44   29795.93
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60
> 1710.85    2341.29    2037.46   44764.63
>
> ========================================================
>
> Total                                                  : 3506278.40
>  13696.40    2337.80    1363.58   44872.18
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65
> 2401.62    1665.48     827.93   13076.89
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05
> 1880.96    2128.57    1085.12   37295.10
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10
> 2117.86    1888.83     920.16   19435.55
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25
> 1907.48    2099.24    1070.72   42946.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45
> 2286.76    1749.10     897.17   10901.34
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90
> 1909.18    2097.28    1126.79   43355.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50
> 2900.10    1379.04     706.27   10713.54
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80
> 1909.14    2096.44    1136.78   31606.15
>
> ========================================================
>
> Total                                                  : 4432152.70
>  17313.10    1849.11     706.27   43355.29
>
>
>
> Any idea what possibly could go wrong here?
>
>
>
>
>
> On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Ernest,
>
>
>
> The answer depends on many different factors – the IOPs limit per device,
> CPU core frequency, Turbo, etc.  As a frame of reference, the team here at
> Intel has measured over 3M IO/s on a single Intel Xeon core[1].
>
>
>
> -Jim
>
>
>
> [1] https://www.flashmemorysummit.com/English/Collaterals/
> Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 6:46 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] Number of NVMe devices per core
>
>
>
> Hi All,
>
> I ran 'perf' with different settings to see how to get most of NVMe disks.
> I see that performance degrades once cpu mask puts more than 2 devises per
> core. Is it OK? what is the theoretical (or empirical) limit of devices one
> core can handle?
>
> Sincerely,
>
> Ernest
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 33996 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 18:22 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-15 18:22 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 13703 bytes --]

Ok, as I thought, it is vendor specific. Interesting what is going to
happen on a dual-socket system with 12 or even 24 NVMe drives; my guess
is they will be split between the two NUMA nodes, meaning that even if
one core can take care of 8 drives it will take 4 cores (two on each
node) for 12 NVMe. That is not too bright from a CPU utilization
perspective, especially if it becomes 8 cores for 24 NVMe drives, which
is more than 10% of the computation power.

On Thu, Feb 15, 2018 at 7:52 PM, Verkamp, Daniel <daniel.verkamp(a)intel.com>
wrote:

> Hi Ernest,
>
>
>
> You can use the SPDK setup.sh script to reliably determine which NUMA node
> each NVMe device is on:
>
>
>
>   ./scripts/setup.sh status
>
>
>
> The bus number is assigned by your system’s firmware, and the “80 or
> higher” heuristic isn’t necessarily true on all systems.
>
>
>
> NUMA node of PCIe-attached NVMe devices is determined by which physical
> PCIe slot they’re plugged into and how that slot is routed on your system
> board; there is no way to change this in software.
>
>
>
> Thanks,
>
> -- Daniel
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 10:42 AM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> Vishal, I wast multiplying 128 by 8, just was checking how queue depth
> would affect the performance, I dont remember all values I've tried, but I
> started from something low, like 32, however I didnt see any significant
> perf. impact, could be considered as measurement error, so I left it 1024.
> Will try longer runs and check the socket assignment as soon as I get back
> to office.
>
> When I see the BDF 80+, means I have to assign mask (for 18 core CPU) 0
> and 18? BTW, how do I assign all NVMe devices to the same NUMA node?
> otherwise, how come the aforementioned intel presentation mention 8 NVMe on
> single core? They all have be assigned to the same socket to get max. perf.
> right?
>
>
>
> On Thu, Feb 15, 2018 at 6:54 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
> wrote:
>
> Thanks Ernest!
>
>
>
> Can you try –q 128 (or 256) instead and –t to something like 120 (2
> minutes)? I think perf applies –q parameter value to each nvme drive. So
> you don’t have to multiply 128 by 8.
>
> There might be different ways to check CPU socket connection of a
> particular drive. Quickest would be to check using “lspci | grep –i Non”
> and look for BDF for each of your NVMe drive. Anything greater than 80 in
> the BDF eg: “80:00.0” would mean it is connected to socket 1.
>
>
>
> Thanks,
>
> Vishal
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 9:46 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> >> What is queue depth you are using while running perf benchmark?
>
> >> Can you share your exact perf command line?
>
> echo 8dev x 1CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
>
> echo ============================================================
> ===================
>
> echo 8dev x 2CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 3
>
> echo ============================================================
> ===================
>
> echo 8dev x 3CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 7
>
> echo ============================================================
> ===================
>
> echo 8dev x 4CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c F
>
> >> What is the model of your NVMe drives?
>
> INTEL SSDPE2MD800G4
>
> >> Do you know which CPU socket are your NVMes connected?
>
> Nope, not sure how to check
>
>
>
> Regarding Turbo setting, will check in BIOS, however I'm out of office for
> the weekend. Will get back as soon as I get back.
>
>
>
> In addition, I'm attaching the full run of my test script, it may provide
> additional info.
>
>
>
> Sincerely,
>
> Ernest
>
>
>
>
>
> On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
> wrote:
>
> Hi Zed,
>
>
>
> It is good that you are able to scale performance with # of cores… Our
> testing has showed that we can get close to 3M IOPs/Core using perf
> benchmark. Few questions I have for you regarding the configuration.
>
>
>
> What is queue depth you are using while running perf benchmark? Can you
> share your exact perf command line? What is the model of your NVMe drives?
> Do you know which CPU socket are your NVMes connected? Should try and run
> perf core mask to the same socket where NVMes are connected. This would
> help avoid any cross-socket traffic.
>
>
>
> Also, In order to achieve max performance/core, we generally enable Turbo
> to allow CPU to run at max frequency.
>
>
>
> Thanks,
>
> Vishal
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 8:52 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
>
> AFAIR to get debug build I have to add some command line argument to
> configure, I just ran ./configure and make
>
>
>
>
>
> On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will
> confirm.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 8:26 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] Number of NVMe devices per core
>
>
>
> Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
>
> 8 x Intel® SSD DC P3700 Series
>
> 16 x 16 DIMM DDR4  2133 MHz
>
> Ubuntu 17.10
>
> 4.13.0-32-generic
>
> SPDK version - cloned today
>
>
>
> The hardware quite close to the one in the presentation. now results I get
> with perf for 1, 2 and 4 cores
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20
>  807.83    4951.17    1088.65   28467.64
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20
>  807.83    4951.71    4392.74   25127.60
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20
>  807.83    4952.46    4379.57   21966.62
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20
>  807.83    4953.35    4379.25   23165.75
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20
>  807.83    4954.31    4357.84   30302.48
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20
>  807.83    4955.33    4366.51   37664.17
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20
>  807.83    4956.35    4357.68   45025.49
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20
>  807.83    4957.39    3205.82   52456.83
>
> ========================================================
>
> Total                                                  : 1654425.60
> 6462.60    4954.01    1088.65   52456.83
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00
> 1713.25    2334.48    1363.58   16128.43
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00
> 1713.25    2335.47    2073.78   14921.21
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00
> 1713.25    2336.68    2038.52   29808.68
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00
> 1713.25    2337.99    1827.81   44872.18
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60
> 1710.85    2337.78    1409.41   15688.80
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60
> 1710.85    2338.77    2034.40   14879.24
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60
> 1710.85    2339.98    2037.44   29795.93
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60
> 1710.85    2341.29    2037.46   44764.63
>
> ========================================================
>
> Total                                                  : 3506278.40
>  13696.40    2337.80    1363.58   44872.18
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65
> 2401.62    1665.48     827.93   13076.89
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05
> 1880.96    2128.57    1085.12   37295.10
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10
> 2117.86    1888.83     920.16   19435.55
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25
> 1907.48    2099.24    1070.72   42946.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45
> 2286.76    1749.10     897.17   10901.34
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90
> 1909.18    2097.28    1126.79   43355.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50
> 2900.10    1379.04     706.27   10713.54
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80
> 1909.14    2096.44    1136.78   31606.15
>
> ========================================================
>
> Total                                                  : 4432152.70
>  17313.10    1849.11     706.27   43355.29
>
>
>
> Any idea what possibly could go wrong here?
>
>
>
>
>
> On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Ernest,
>
>
>
> The answer depends on many different factors – the IOPs limit per device,
> CPU core frequency, Turbo, etc.  As a frame of reference, the team here at
> Intel has measured over 3M IO/s on a single Intel Xeon core[1].
>
>
>
> -Jim
>
>
>
> [1] https://www.flashmemorysummit.com/English/Collaterals/
> Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 6:46 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] Number of NVMe devices per core
>
>
>
> Hi All,
>
> I ran 'perf' with different settings to see how to get most of NVMe disks.
> I see that performance degrades once cpu mask puts more than 2 devises per
> core. Is it OK? what is the theoretical (or empirical) limit of devices one
> core can handle?
>
> Sincerely,
>
> Ernest
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 34648 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 17:55 Verma, Vishal4
  0 siblings, 0 replies; 14+ messages in thread
From: Verma, Vishal4 @ 2018-02-15 17:55 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 12808 bytes --]

After looking at your perf command line more closely, it looks like your system is hooked up with 8 NVMe drives (4 on each socket). It really depends on your platform whether it allows you to connect 8 drives on the same socket or not; basically, does it have enough PCIe slots to let you hook up 8 drives on the same socket.

As far as I remember, our 3-3.5M IOPS number was achieved on a platform that had all the drives connected to the same socket. Our recommendation would be to see if you can enable Turbo to clock the CPU to a higher frequency and run with –q 128.

The E5-2697 v4 can go as high as 3.6GHz at Turbo, I believe. That should definitely help you get more IOPS per core.
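
For illustration, a socket-0-only run along those lines might look like the following (the drive addresses are the socket-0 ones from your script; the core mask and runtime here are only examples):

    /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 \
        -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' \
        -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' \
        -q 128 -t 120 -c 3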

Thanks,
Vishal

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 10:42 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Number of NVMe devices per core

Vishal, I wast multiplying 128 by 8, just was checking how queue depth would affect the performance, I dont remember all values I've tried, but I started from something low, like 32, however I didnt see any significant perf. impact, could be considered as measurement error, so I left it 1024. Will try longer runs and check the socket assignment as soon as I get back to office.
When I see the BDF 80+, means I have to assign mask (for 18 core CPU) 0 and 18? BTW, how do I assign all NVMe devices to the same NUMA node? otherwise, how come the aforementioned intel presentation mention 8 NVMe on single core? They all have be assigned to the same socket to get max. perf. right?

On Thu, Feb 15, 2018 at 6:54 PM, Verma, Vishal4 <vishal4.verma(a)intel.com<mailto:vishal4.verma(a)intel.com>> wrote:
Thanks Ernest!

Can you try –q 128 (or 256) instead and –t to something like 120 (2 minutes)? I think perf applies –q parameter value to each nvme drive. So you don’t have to multiply 128 by 8.
There might be different ways to check CPU socket connection of a particular drive. Quickest would be to check using “lspci | grep –i Non” and look for BDF for each of your NVMe drive. Anything greater than 80 in the BDF eg: “80:00.0” would mean it is connected to socket 1.

Thanks,
Vishal
From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 9:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

>> What is queue depth you are using while running perf benchmark?
>> Can you share your exact perf command line?
echo 8dev x 1CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
echo ===============================================================================
echo 8dev x 2CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 3
echo ===============================================================================
echo 8dev x 3CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 7
echo ===============================================================================
echo 8dev x 4CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c F
>> What is the model of your NVMe drives?
INTEL SSDPE2MD800G4
>> Do you know which CPU socket are your NVMes connected?
Nope, not sure how to check

Regarding Turbo setting, will check in BIOS, however I'm out of office for the weekend. Will get back as soon as I get back.

In addition, I'm attaching the full run of my test script, it may provide additional info.

Sincerely,
Ernest


On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com<mailto:vishal4.verma(a)intel.com>> wrote:
Hi Zed,

It is good that you are able to scale performance with # of cores… Our testing has showed that we can get close to 3M IOPs/Core using perf benchmark. Few questions I have for you regarding the configuration.

What is queue depth you are using while running perf benchmark? Can you share your exact perf command line? What is the model of your NVMe drives? Do you know which CPU socket are your NVMes connected? Should try and run perf core mask to the same socket where NVMes are connected. This would help avoid any cross-socket traffic.

Also, In order to achieve max performance/core, we generally enable Turbo to allow CPU to run at max frequency.

Thanks,
Vishal

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 8:52 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
AFAIR to get debug build I have to add some command line argument to configure, I just ran ./configure and make


On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will confirm.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 8:26 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
8 x Intel® SSD DC P3700 Series
16 x 16 DIMM DDR4  2133 MHz
Ubuntu 17.10
4.13.0-32-generic
SPDK version - cloned today

The hardware quite close to the one in the presentation. now results I get with perf for 1, 2 and 4 cores

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
========================================================
Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
========================================================
Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
========================================================
Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29

Any idea what possibly could go wrong here?


On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Ernest,

The answer depends on many different factors – the IOPs limit per device, CPU core frequency, Turbo, etc.  As a frame of reference, the team here at Intel has measured over 3M IO/s on a single Intel Xeon core[1].

-Jim

[1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 6:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Number of NVMe devices per core

Hi All,
I ran 'perf' with different settings to see how to get most of NVMe disks. I see that performance degrades once cpu mask puts more than 2 devises per core. Is it OK? what is the theoretical (or empirical) limit of devices one core can handle?
Sincerely,
Ernest

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 43698 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 17:52 Verkamp, Daniel
  0 siblings, 0 replies; 14+ messages in thread
From: Verkamp, Daniel @ 2018-02-15 17:52 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 12604 bytes --]

Hi Ernest,

You can use the SPDK setup.sh script to reliably determine which NUMA node each NVMe device is on:

  ./scripts/setup.sh status

The bus number is assigned by your system’s firmware, and the “80 or higher” heuristic isn’t necessarily true on all systems.

NUMA node of PCIe-attached NVMe devices is determined by which physical PCIe slot they’re plugged into and how that slot is routed on your system board; there is no way to change this in software.
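
If you want to cross-check without the SPDK scripts, the same information is visible through sysfs. A rough sketch (PCI class 0108 is NVM Express; a numa_node value of -1 means the platform did not report one):

    # print the NUMA node for every NVMe-class PCI function, regardless of bound driver
    for bdf in $(lspci -D -d ::0108 | awk '{print $1}'); do
        echo "$bdf -> NUMA node $(cat /sys/bus/pci/devices/$bdf/numa_node)"
    done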

Thanks,
-- Daniel

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 10:42 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Number of NVMe devices per core

Vishal, I wast multiplying 128 by 8, just was checking how queue depth would affect the performance, I dont remember all values I've tried, but I started from something low, like 32, however I didnt see any significant perf. impact, could be considered as measurement error, so I left it 1024. Will try longer runs and check the socket assignment as soon as I get back to office.
When I see the BDF 80+, means I have to assign mask (for 18 core CPU) 0 and 18? BTW, how do I assign all NVMe devices to the same NUMA node? otherwise, how come the aforementioned intel presentation mention 8 NVMe on single core? They all have be assigned to the same socket to get max. perf. right?

On Thu, Feb 15, 2018 at 6:54 PM, Verma, Vishal4 <vishal4.verma(a)intel.com<mailto:vishal4.verma(a)intel.com>> wrote:
Thanks Ernest!

Can you try –q 128 (or 256) instead and –t to something like 120 (2 minutes)? I think perf applies –q parameter value to each nvme drive. So you don’t have to multiply 128 by 8.
There might be different ways to check CPU socket connection of a particular drive. Quickest would be to check using “lspci | grep –i Non” and look for BDF for each of your NVMe drive. Anything greater than 80 in the BDF eg: “80:00.0” would mean it is connected to socket 1.

Thanks,
Vishal
From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 9:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

>> What is queue depth you are using while running perf benchmark?
>> Can you share your exact perf command line?
echo 8dev x 1CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
echo ===============================================================================
echo 8dev x 2CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 3
echo ===============================================================================
echo 8dev x 3CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 7
echo ===============================================================================
echo 8dev x 4CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c F
>> What is the model of your NVMe drives?
INTEL SSDPE2MD800G4
>> Do you know which CPU socket are your NVMes connected?
Nope, not sure how to check

Regarding Turbo setting, will check in BIOS, however I'm out of office for the weekend. Will get back as soon as I get back.

In addition, I'm attaching the full run of my test script, it may provide additional info.

Sincerely,
Ernest


On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com<mailto:vishal4.verma(a)intel.com>> wrote:
Hi Zed,

It is good that you are able to scale performance with # of cores… Our testing has showed that we can get close to 3M IOPs/Core using perf benchmark. Few questions I have for you regarding the configuration.

What is queue depth you are using while running perf benchmark? Can you share your exact perf command line? What is the model of your NVMe drives? Do you know which CPU socket are your NVMes connected? Should try and run perf core mask to the same socket where NVMes are connected. This would help avoid any cross-socket traffic.

Also, In order to achieve max performance/core, we generally enable Turbo to allow CPU to run at max frequency.

Thanks,
Vishal

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 8:52 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
AFAIR to get debug build I have to add some command line argument to configure, I just ran ./configure and make


On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will confirm.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 8:26 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
8 x Intel® SSD DC P3700 Series
16 x 16 DIMM DDR4  2133 MHz
Ubuntu 17.10
4.13.0-32-generic
SPDK version - cloned today

The hardware quite close to the one in the presentation. now results I get with perf for 1, 2 and 4 cores

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
========================================================
Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
========================================================
Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
========================================================
Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29

Any idea what possibly could go wrong here?


On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Ernest,

The answer depends on many different factors – the IOPs limit per device, CPU core frequency, Turbo, etc.  As a frame of reference, the team here at Intel has measured over 3M IO/s on a single Intel Xeon core[1].

-Jim

[1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 6:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Number of NVMe devices per core

Hi All,
I ran 'perf' with different settings to see how to get most of NVMe disks. I see that performance degrades once cpu mask puts more than 2 devises per core. Is it OK? what is the theoretical (or empirical) limit of devices one core can handle?
Sincerely,
Ernest

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 44375 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 17:42 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-15 17:42 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 12150 bytes --]

Vishal, I wasn't multiplying 128 by 8; I was just checking how queue depth
would affect the performance. I don't remember all the values I tried, but I
started from something low, like 32. However, I didn't see any significant
perf impact beyond what could be measurement error, so I left it at 1024.
I will try longer runs and check the socket assignment as soon as I get back
to the office.
When I see BDFs of 80 or higher, does that mean I have to set a mask (for an
18-core CPU) covering cores 0 and 18? BTW, how do I assign all NVMe devices
to the same NUMA node? Otherwise, how come the aforementioned Intel
presentation mentions 8 NVMe on a single core? They all have to be attached
to the same socket to get max performance, right?
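
For what it is worth, on an 18-core-per-socket CPU with HT off and the usual 0-17 / 18-35 lcore numbering (worth confirming with lscpu), one core per socket would be lcores 0 and 18, i.e. a mask of (1 << 0) | (1 << 18):

    # assuming lcores 0-17 sit on socket 0 and lcores 18-35 on socket 1
    printf '0x%x\n' $(( (1 << 0) | (1 << 18) ))    # prints 0x40001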

On Thu, Feb 15, 2018 at 6:54 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
wrote:

> Thanks Ernest!
>
>
>
> Can you try –q 128 (or 256) instead and –t to something like 120 (2
> minutes)? I think perf applies –q parameter value to each nvme drive. So
> you don’t have to multiply 128 by 8.
>
> There might be different ways to check CPU socket connection of a
> particular drive. Quickest would be to check using “lspci | grep –i Non”
> and look for BDF for each of your NVMe drive. Anything greater than 80 in
> the BDF eg: “80:00.0” would mean it is connected to socket 1.
>
>
>
> Thanks,
>
> Vishal
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 9:46 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> >> What is queue depth you are using while running perf benchmark?
>
> >> Can you share your exact perf command line?
>
> echo 8dev x 1CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
>
> echo ============================================================
> ===================
>
> echo 8dev x 2CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 3
>
> echo ============================================================
> ===================
>
> echo 8dev x 3CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c 7
>
> echo ============================================================
> ===================
>
> echo 8dev x 4CPUs
>
> /spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r
> 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r
> 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r
> 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r
> 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
> -c F
>
> >> What is the model of your NVMe drives?
>
> INTEL SSDPE2MD800G4
>
> >> Do you know which CPU socket are your NVMes connected?
>
> Nope, not sure how to check
>
>
>
> Regarding Turbo setting, will check in BIOS, however I'm out of office for
> the weekend. Will get back as soon as I get back.
>
>
>
> In addition, I'm attaching the full run of my test script, it may provide
> additional info.
>
>
>
> Sincerely,
>
> Ernest
>
>
>
>
>
> On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
> wrote:
>
> Hi Zed,
>
>
>
> It is good that you are able to scale performance with # of cores… Our
> testing has showed that we can get close to 3M IOPs/Core using perf
> benchmark. Few questions I have for you regarding the configuration.
>
>
>
> What is queue depth you are using while running perf benchmark? Can you
> share your exact perf command line? What is the model of your NVMe drives?
> Do you know which CPU socket are your NVMes connected? Should try and run
> perf core mask to the same socket where NVMes are connected. This would
> help avoid any cross-socket traffic.
>
>
>
> Also, In order to achieve max performance/core, we generally enable Turbo
> to allow CPU to run at max frequency.
>
>
>
> Thanks,
>
> Vishal
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 8:52 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
>
> AFAIR to get debug build I have to add some command line argument to
> configure, I just ran ./configure and make
>
>
>
>
>
> On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will
> confirm.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 8:26 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] Number of NVMe devices per core
>
>
>
> Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
>
> 8 x Intel® SSD DC P3700 Series
>
> 16 x 16 DIMM DDR4  2133 MHz
>
> Ubuntu 17.10
>
> 4.13.0-32-generic
>
> SPDK version - cloned today
>
>
>
> The hardware quite close to the one in the presentation. now results I get
> with perf for 1, 2 and 4 cores
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20
>  807.83    4951.17    1088.65   28467.64
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20
>  807.83    4951.71    4392.74   25127.60
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20
>  807.83    4952.46    4379.57   21966.62
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20
>  807.83    4953.35    4379.25   23165.75
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20
>  807.83    4954.31    4357.84   30302.48
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20
>  807.83    4955.33    4366.51   37664.17
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20
>  807.83    4956.35    4357.68   45025.49
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20
>  807.83    4957.39    3205.82   52456.83
>
> ========================================================
>
> Total                                                  : 1654425.60
> 6462.60    4954.01    1088.65   52456.83
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00
> 1713.25    2334.48    1363.58   16128.43
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00
> 1713.25    2335.47    2073.78   14921.21
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00
> 1713.25    2336.68    2038.52   29808.68
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00
> 1713.25    2337.99    1827.81   44872.18
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60
> 1710.85    2337.78    1409.41   15688.80
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60
> 1710.85    2338.77    2034.40   14879.24
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60
> 1710.85    2339.98    2037.44   29795.93
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60
> 1710.85    2341.29    2037.46   44764.63
>
> ========================================================
>
> Total                                                  : 3506278.40
>  13696.40    2337.80    1363.58   44872.18
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65
> 2401.62    1665.48     827.93   13076.89
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05
> 1880.96    2128.57    1085.12   37295.10
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10
> 2117.86    1888.83     920.16   19435.55
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25
> 1907.48    2099.24    1070.72   42946.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45
> 2286.76    1749.10     897.17   10901.34
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90
> 1909.18    2097.28    1126.79   43355.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50
> 2900.10    1379.04     706.27   10713.54
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80
> 1909.14    2096.44    1136.78   31606.15
>
> ========================================================
>
> Total                                                  : 4432152.70
>  17313.10    1849.11     706.27   43355.29
>
>
>
> Any idea what possibly could go wrong here?
>
>
>
>
>
> On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Ernest,
>
>
>
> The answer depends on many different factors – the IOPs limit per device,
> CPU core frequency, Turbo, etc.  As a frame of reference, the team here at
> Intel has measured over 3M IO/s on a single Intel Xeon core[1].
>
>
>
> -Jim
>
>
>
> [1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 6:46 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] Number of NVMe devices per core
>
>
>
> Hi All,
>
> I ran 'perf' with different settings to see how to get most of NVMe disks.
> I see that performance degrades once cpu mask puts more than 2 devises per
> core. Is it OK? what is the theoretical (or empirical) limit of devices one
> core can handle?
>
> Sincerely,
>
> Ernest
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 30104 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 16:54 Verma, Vishal4
  0 siblings, 0 replies; 14+ messages in thread
From: Verma, Vishal4 @ 2018-02-15 16:54 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 10830 bytes --]

Thanks Ernest!

Can you try -q 128 (or 256) instead and set -t to something like 120 (2 minutes)? I think perf applies the -q parameter value to each NVMe drive, so you don't have to multiply 128 by 8.
There might be different ways to check the CPU socket connection of a particular drive. The quickest would be to run "lspci | grep -i Non" and look at the BDF of each of your NVMe drives. A bus number of 0x80 or higher in the BDF, e.g. "80:00.0", means the drive is connected to socket 1.
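For example (untested on your box; the BDFs below are the ones from your command
line, and numa_node comes straight from sysfs, where -1 means the platform did
not report it):

lspci | grep -i Non                                 # lists the NVMe controllers with their BDFs
cat /sys/bus/pci/devices/0000:03:00.0/numa_node     # expect 0 (socket 0)
cat /sys/bus/pci/devices/0000:83:00.0/numa_node     # expect 1 (socket 1)
lscpu | grep 'NUMA node'                            # shows which cores belong to each node

Then pick a perf core mask (-c) that only covers cores from the matching node.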

Thanks,
Vishal
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 9:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Number of NVMe devices per core

>> What is queue depth you are using while running perf benchmark?
>> Can you share your exact perf command line?
echo 8dev x 1CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
echo ===============================================================================
echo 8dev x 2CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 3
echo ===============================================================================
echo 8dev x 3CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 7
echo ===============================================================================
echo 8dev x 4CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c F
>> What is the model of your NVMe drives?
INTEL SSDPE2MD800G4
>> Do you know which CPU socket are your NVMes connected?
Nope, not sure how to check

Regarding Turbo setting, will check in BIOS, however I'm out of office for the weekend. Will get back as soon as I get back.

In addition, I'm attaching the full run of my test script, it may provide additional info.

Sincerely,
Ernest


On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com<mailto:vishal4.verma(a)intel.com>> wrote:
Hi Zed,

It is good that you are able to scale performance with # of cores… Our testing has showed that we can get close to 3M IOPs/Core using perf benchmark. Few questions I have for you regarding the configuration.

What is queue depth you are using while running perf benchmark? Can you share your exact perf command line? What is the model of your NVMe drives? Do you know which CPU socket are your NVMes connected? Should try and run perf core mask to the same socket where NVMes are connected. This would help avoid any cross-socket traffic.

Also, In order to achieve max performance/core, we generally enable Turbo to allow CPU to run at max frequency.

Thanks,
Vishal

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 8:52 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
AFAIR to get debug build I have to add some command line argument to configure, I just ran ./configure and make


On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will confirm.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 8:26 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
8 x Intel® SSD DC P3700 Series
16 x 16 DIMM DDR4  2133 MHz
Ubuntu 17.10
4.13.0-32-generic
SPDK version - cloned today

The hardware quite close to the one in the presentation. now results I get with perf for 1, 2 and 4 cores

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
========================================================
Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
========================================================
Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
========================================================
Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29

Any idea what possibly could go wrong here?


On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Ernest,

The answer depends on many different factors – the IOPs limit per device, CPU core frequency, Turbo, etc.  As a frame of reference, the team here at Intel has measured over 3M IO/s on a single Intel Xeon core[1].

-Jim

[1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 6:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Number of NVMe devices per core

Hi All,
I ran 'perf' with different settings to see how to get most of NVMe disks. I see that performance degrades once cpu mask puts more than 2 devises per core. Is it OK? what is the theoretical (or empirical) limit of devices one core can handle?
Sincerely,
Ernest

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk




[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 37236 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 16:45 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-15 16:45 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 10217 bytes --]

>> What is queue depth you are using while running perf benchmark?
>> Can you share your exact perf command line?
echo 8dev x 1CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20
echo ===============================================================================
echo 8dev x 2CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 3
echo ===============================================================================
echo 8dev x 3CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c 7
echo ===============================================================================
echo 8dev x 4CPUs
/spdk_test/spdk/examples/nvme/perf/perf -w randread -s 4096 -r 'trtype:PCIe traddr:03:00.0' -r 'trtype:PCIe traddr:04:00.0' -r 'trtype:PCIe traddr:05:00.0' -r 'trtype:PCIe traddr:06:00.0' -r 'trtype:PCIe traddr:83:00.0' -r 'trtype:PCIe traddr:84:00.0' -r 'trtype:PCIe traddr:85:00.0' -r 'trtype:PCIe traddr:86:00.0' -q 1024 -t 20 -c F
>> What is the model of your NVMe drives?
INTEL SSDPE2MD800G4
>> Do you know which CPU socket are your NVMes connected?
Nope, not sure how to check

Regarding the Turbo setting, I will check in the BIOS; however, I'm out of the
office for the weekend, so I will follow up as soon as I'm back.
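If it saves a reboot, I think the current Turbo state can also be read from
Linux directly - untested sketch, assuming the intel_pstate driver is the one
in use on this kernel:

cat /sys/devices/system/cpu/intel_pstate/no_turbo   # 0 = Turbo enabled, 1 = Turbo disabled
grep MHz /proc/cpuinfo | sort -u | tail -3          # spot-check actual clocks while perf is running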

In addition, I'm attaching the full run of my test script; it may provide
additional info.

Sincerely,
Ernest


On Thu, Feb 15, 2018 at 6:17 PM, Verma, Vishal4 <vishal4.verma(a)intel.com>
wrote:

> Hi Zed,
>
>
>
> It is good that you are able to scale performance with # of cores… Our
> testing has showed that we can get close to 3M IOPs/Core using perf
> benchmark. Few questions I have for you regarding the configuration.
>
>
>
> What is queue depth you are using while running perf benchmark? Can you
> share your exact perf command line? What is the model of your NVMe drives?
> Do you know which CPU socket are your NVMes connected? Should try and run
> perf core mask to the same socket where NVMes are connected. This would
> help avoid any cross-socket traffic.
>
>
>
> Also, In order to achieve max performance/core, we generally enable Turbo
> to allow CPU to run at max frequency.
>
>
>
> Thanks,
>
> Vishal
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Ernest Zed
> *Sent:* Thursday, February 15, 2018 8:52 AM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Number of NVMe devices per core
>
>
>
> cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
>
> AFAIR to get debug build I have to add some command line argument to
> configure, I just ran ./configure and make
>
>
>
>
>
> On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will
> confirm.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 8:26 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] Number of NVMe devices per core
>
>
>
> Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
>
> 8 x Intel® SSD DC P3700 Series
>
> 16 x 16 DIMM DDR4  2133 MHz
>
> Ubuntu 17.10
>
> 4.13.0-32-generic
>
> SPDK version - cloned today
>
>
>
> The hardware quite close to the one in the presentation. now results I get
> with perf for 1, 2 and 4 cores
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20
>  807.83    4951.17    1088.65   28467.64
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20
>  807.83    4951.71    4392.74   25127.60
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20
>  807.83    4952.46    4379.57   21966.62
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20
>  807.83    4953.35    4379.25   23165.75
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20
>  807.83    4954.31    4357.84   30302.48
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20
>  807.83    4955.33    4366.51   37664.17
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20
>  807.83    4956.35    4357.68   45025.49
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20
>  807.83    4957.39    3205.82   52456.83
>
> ========================================================
>
> Total                                                  : 1654425.60
> 6462.60    4954.01    1088.65   52456.83
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00
> 1713.25    2334.48    1363.58   16128.43
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00
> 1713.25    2335.47    2073.78   14921.21
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00
> 1713.25    2336.68    2038.52   29808.68
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00
> 1713.25    2337.99    1827.81   44872.18
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60
> 1710.85    2337.78    1409.41   15688.80
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60
> 1710.85    2338.77    2034.40   14879.24
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60
> 1710.85    2339.98    2037.44   29795.93
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60
> 1710.85    2341.29    2037.46   44764.63
>
> ========================================================
>
> Total                                                  : 3506278.40
>  13696.40    2337.80    1363.58   44872.18
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
>
> Device Information                                     :       IOPS
>  MB/s    Average        min        max
>
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65
> 2401.62    1665.48     827.93   13076.89
>
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05
> 1880.96    2128.57    1085.12   37295.10
>
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10
> 2117.86    1888.83     920.16   19435.55
>
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25
> 1907.48    2099.24    1070.72   42946.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45
> 2286.76    1749.10     897.17   10901.34
>
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90
> 1909.18    2097.28    1126.79   43355.29
>
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50
> 2900.10    1379.04     706.27   10713.54
>
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80
> 1909.14    2096.44    1136.78   31606.15
>
> ========================================================
>
> Total                                                  : 4432152.70
>  17313.10    1849.11     706.27   43355.29
>
>
>
> Any idea what possibly could go wrong here?
>
>
>
>
>
> On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Ernest,
>
>
>
> The answer depends on many different factors – the IOPs limit per device,
> CPU core frequency, Turbo, etc.  As a frame of reference, the team here at
> Intel has measured over 3M IO/s on a single Intel Xeon core[1].
>
>
>
> -Jim
>
>
>
> [1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 6:46 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] Number of NVMe devices per core
>
>
>
> Hi All,
>
> I ran 'perf' with different settings to see how to get most of NVMe disks.
> I see that performance degrades once cpu mask puts more than 2 devises per
> core. Is it OK? what is the theoretical (or empirical) limit of devices one
> core can handle?
>
> Sincerely,
>
> Ernest
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 34982 bytes --]

[-- Attachment #3: TestResults.txt --]
[-- Type: text/plain, Size: 40569 bytes --]

===============================================================================
Device 1
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2928 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  733608.70    2865.66    1395.70     540.33    4398.25
========================================================
Total                                                  :  733608.70    2865.66    1395.70     540.33    4398.25

===============================================================================
Device 2
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2930 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  578815.55    2261.00    1768.97     556.10    5022.17
========================================================
Total                                                  :  578815.55    2261.00    1768.97     556.10    5022.17

===============================================================================
Device 3
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2933 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  534542.75    2088.06    1915.50     538.60    5003.96
========================================================
Total                                                  :  534542.75    2088.06    1915.50     538.60    5003.96

===============================================================================
Device 4
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2935 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  608158.55    2375.62    1683.63     545.43    4800.61
========================================================
Total                                                  :  608158.55    2375.62    1683.63     545.43    4800.61

===============================================================================
Device 5
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2937 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  480095.65    1875.37    2132.58     664.05    5315.21
========================================================
Total                                                  :  480095.65    1875.37    2132.58     664.05    5315.21

===============================================================================
Device 6
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2939 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  462862.15    1808.06    2212.08     756.84    6528.31
========================================================
Total                                                  :  462862.15    1808.06    2212.08     756.84    6528.31

===============================================================================
Device 7
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2987 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  479816.85    1874.28    2133.81     599.33    5321.08
========================================================
Total                                                  :  479816.85    1874.28    2133.81     599.33    5321.08

===============================================================================
Device 8
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2989 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  480037.85    1875.15    2132.83     847.28    5138.84
========================================================
Total                                                  :  480037.85    1875.15    2132.83     847.28    5138.84

===============================================================================
1dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid2991 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  733404.95    2864.86    1396.09     540.11    4292.34
========================================================
Total                                                  :  733404.95    2864.86    1396.09     540.11    4292.34

===============================================================================
2dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3072 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  741116.90    2894.99    1381.53     741.78    5693.58
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  577311.90    2255.12    1773.66    1002.65    6291.19
========================================================
Total                                                  : 1318428.80    5150.11    1553.24     741.78    6291.19

===============================================================================
3dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3076 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  715040.00    2793.12    1431.91     533.60    9398.39
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  585143.00    2285.71    1749.93    1216.29    6577.43
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  534818.50    2089.13    1914.89    1290.84   12921.03
========================================================
Total                                                  : 1835001.50    7167.97    1674.09     533.60   12921.03

===============================================================================
4dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3079 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  461408.00    1802.38    2219.06     639.46   12399.16
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  461408.00    1802.38    2219.27    1942.65    9276.61
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  461408.00    1802.38    2219.64    1925.55   13349.01
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  461408.00    1802.38    2220.10    1793.16   20247.16
========================================================
Total                                                  : 1845632.00    7209.50    2219.52     639.46   20247.16

===============================================================================
5dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3081 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  360032.00    1406.38    2843.94     713.42   15866.43
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  360032.00    1406.38    2844.21    2528.79   12719.48
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  360032.00    1406.38    2844.67    2490.77   14160.41
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  360032.00    1406.38    2845.24    2528.38   20665.78
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  360032.00    1406.38    2845.84    2379.60   28257.11
========================================================
Total                                                  : 1800160.00    7031.88    2844.78     713.42   28257.11

===============================================================================
6dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3084 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  294438.40    1150.15    3477.51     844.06   19471.68
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  294438.40    1150.15    3477.91    2513.40   16258.59
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  294438.40    1150.15    3478.44    2480.76   14953.71
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  294438.40    1150.15    3479.09    2521.02   21558.27
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  294438.40    1150.15    3479.80    2470.50   28825.53
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  273435.35    1068.11    3747.92    2686.09  642330.64
========================================================
Total                                                  : 1745627.35    6818.86    3520.74     844.06  642330.64

===============================================================================
7dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3098 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  238598.40     932.02    4291.35    1002.89   23916.44
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  238598.40     932.02    4291.83    3808.52   20671.16
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  238598.40     932.02    4292.51    3786.46   17538.04
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  238598.40     932.02    4293.32    3810.25   22618.23
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  238598.40     932.02    4294.19    3757.51   29833.37
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  238598.40     932.02    4295.11    3762.27   37277.14
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  238598.40     932.02    4296.04    3112.73   44742.86
========================================================
Total                                                  : 1670188.80    6524.17    4293.48    1002.89   44742.86

===============================================================================
8dev x 1CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
========================================================
Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83

===============================================================================
8dev x 2CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 1
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
========================================================
Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18

===============================================================================
8dev x 3CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 7 --file-prefix=spdk_pid3201 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 1
Initialization complete. Launching workers.
Starting thread on core 2
Starting thread on core 1
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 2:  584665.45    2283.85    1751.27    1156.54   11274.66
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 2:  489513.00    1912.16    2093.03    1539.57   22099.22
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 2:  482778.20    1885.85    2123.45    1558.85   37274.15
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 1:  714668.80    2791.68    1432.99    1056.55   13138.53
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  615049.25    2402.54    1666.19    1197.61   28068.11
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 1:  488152.35    1906.85    2100.52    1375.03   42836.50
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  540964.70    2113.14    1892.73     989.24    7358.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  488261.10    1907.27    2097.26    1221.23    8356.11
========================================================
Total                                                  : 4404052.85   17203.33    1860.97     989.24   42836.50

===============================================================================
8dev x 4CPUs
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
EAL: Probing VFIO support...
Initializing NVMe Controllers
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:03:00.0 [8086:0953]
Attached to NVMe Controller at 0000:03:00.0 [8086:0953]
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:04:00.0 [8086:0953]
Attached to NVMe Controller at 0000:04:00.0 [8086:0953]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:05:00.0 [8086:0953]
Attached to NVMe Controller at 0000:05:00.0 [8086:0953]
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:06:00.0 [8086:0953]
Attached to NVMe Controller at 0000:06:00.0 [8086:0953]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:83:00.0 [8086:0953]
Attached to NVMe Controller at 0000:83:00.0 [8086:0953]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:84:00.0 [8086:0953]
Attached to NVMe Controller at 0000:84:00.0 [8086:0953]
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:85:00.0 [8086:0953]
Attached to NVMe Controller at 0000:85:00.0 [8086:0953]
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 spdk_nvme
Attaching to NVMe Controller at 0000:86:00.0 [8086:0953]
Attached to NVMe Controller at 0000:86:00.0 [8086:0953]
Associating INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) with lcore 3
Associating INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) with lcore 0
Associating INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) with lcore 3
Associating INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) with lcore 2
Associating INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) with lcore 1
Associating INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) with lcore 0
Initialization complete. Launching workers.
Starting thread on core 3
Starting thread on core 2
Starting thread on core 1
Starting thread on core 0
========================================================
                                                                                            Latency(us)
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
========================================================
Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29
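
A note on the average latencies in the runs above: perf keeps a fixed number of I/Os outstanding per device, so by Little's law the average latency is roughly (queue depth x number of devices) / total IOPS. Assuming a queue depth of 1024 per device (an assumption; the depth is not printed in this output), the 8-device single-core run works out to

    1024 * 8 / 1654425 IO/s  ~= 4952 us

and the 8-device two-core run to 8192 / 3506278 ~= 2336 us, both very close to the reported averages. The large averages mostly reflect how many I/Os are kept in flight, not slow devices.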


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 16:17 Verma, Vishal4
  0 siblings, 0 replies; 14+ messages in thread
From: Verma, Vishal4 @ 2018-02-15 16:17 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 7548 bytes --]

Hi Zed,

It is good that you are able to scale performance with the number of cores. Our testing has shown that we can get close to 3M IOPS/core using the perf benchmark. A few questions about your configuration:

What queue depth are you using while running the perf benchmark? Can you share your exact perf command line? What is the model of your NVMe drives? Do you know which CPU socket your NVMes are connected to? You should try restricting the perf core mask to the same socket the NVMes are connected to; this would help avoid any cross-socket traffic.

Also, in order to achieve max performance per core, we generally enable Turbo to allow the CPU to run at its maximum frequency.
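
For reference, a minimal sketch of checking drive-to-socket placement and then pinning perf to cores on that socket (the PCI addresses and core mask are only examples; the perf path and the -q/-o/-w/-t/-c options should be double-checked against your SPDK tree):

    # NUMA node each NVMe controller hangs off (-1 means the platform did not report one)
    cat /sys/bus/pci/devices/0000:03:00.0/numa_node
    cat /sys/bus/pci/devices/0000:83:00.0/numa_node

    # CPU-to-node mapping
    numactl --hardware

    # Illustrative run: 4K random reads, queue depth 128 per device, 120 s, four cores on socket 0
    sudo ./examples/nvme/perf/perf -q 128 -o 4096 -w randread -t 120 -c 0xF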

Thanks,
Vishal

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Ernest Zed
Sent: Thursday, February 15, 2018 8:52 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Number of NVMe devices per core

cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
AFAIR, to get a debug build I would have to pass an extra command-line argument to configure; I just ran ./configure and make.


On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will confirm.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 8:26 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Number of NVMe devices per core

Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
8 x Intel® SSD DC P3700 Series
16 x 16 DIMM DDR4  2133 MHz
Ubuntu 17.10
4.13.0-32-generic
SPDK version - cloned today

The hardware is quite close to the one in the presentation. Now, the results I get with perf for 1, 2 and 4 cores:

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
========================================================
Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
========================================================
Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
========================================================
Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29

Any idea what could possibly be going wrong here?


On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Ernest,

The answer depends on many different factors – the IOPs limit per device, CPU core frequency, Turbo, etc.  As a frame of reference, the team here at Intel has measured over 3M IO/s on a single Intel Xeon core[1].

-Jim

[1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 6:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Number of NVMe devices per core

Hi All,
I ran 'perf' with different settings to see how to get most of NVMe disks. I see that performance degrades once cpu mask puts more than 2 devises per core. Is it OK? what is the theoretical (or empirical) limit of devices one core can handle?
Sincerely,
Ernest

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 26696 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 15:51 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-15 15:51 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6715 bytes --]

cat CONFIG.local returns CONFIG_DPDK_DIR?=/spdk_test/spdk/dpdk/build
AFAIR, to get a debug build I would have to pass an extra command-line argument to
configure; I just ran ./configure and make.
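
For what it's worth, a minimal sketch of the two build variants (flag name taken from the SPDK configure script; ./configure --help is the authoritative reference):

    # release build (what was used above)
    ./configure
    make

    # debug build -- would normally leave a CONFIG_DEBUG (or similarly named) line in CONFIG.local
    ./configure --enable-debug
    make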


On Thu, Feb 15, 2018 at 5:35 PM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will
> confirm.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 8:26 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] Number of NVMe devices per core
>
>
>
> Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
>
> 8 x Intel® SSD DC P3700 Series
>
> 16 x 16 DIMM DDR4  2133 MHz
>
> Ubuntu 17.10
>
> 4.13.0-32-generic
>
> SPDK version - cloned today
>
>
>
> The hardware is quite close to the one in the presentation. Now, the results I
> get with perf for 1, 2 and 4 cores:
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
>
> Device Information                                     :       IOPS       MB/s    Average        min        max
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
> ========================================================
> Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
>
> Device Information                                     :       IOPS       MB/s    Average        min        max
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
> ========================================================
> Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18
>
>
>
> Starting DPDK 17.11.0 initialization...
>
> [ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
>
> Device Information                                     :       IOPS       MB/s    Average        min        max
> INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
> INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
> INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
> INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
> INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
> INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
> INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
> INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
> ========================================================
> Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29
>
>
>
> Any idea what could possibly be going wrong here?
>
>
>
>
>
> On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Ernest,
>
>
>
> The answer depends on many different factors – the IOPs limit per device,
> CPU core frequency, Turbo, etc.  As a frame of reference, the team here at
> Intel has measured over 3M IO/s on a single Intel Xeon core[1].
>
>
>
> -Jim
>
>
>
> [1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 6:46 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] Number of NVMe devices per core
>
>
>
> Hi All,
>
> I ran 'perf' with different settings to see how to get most of NVMe disks.
> I see that performance degrades once cpu mask puts more than 2 devises per
> core. Is it OK? what is the theoretical (or empirical) limit of devices one
> core can handle?
>
> Sincerely,
>
> Ernest
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 15627 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 15:35 Harris, James R
  0 siblings, 0 replies; 14+ messages in thread
From: Harris, James R @ 2018-02-15 15:35 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6025 bytes --]

Can you confirm you aren’t using a debug build?  ‘cat CONFIG.local’ will confirm.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, February 15, 2018 at 8:26 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Number of NVMe devices per core

Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
8 x Intel® SSD DC P3700 Series
16 x 16 DIMM DDR4  2133 MHz
Ubuntu 17.10
4.13.0-32-generic
SPDK version - cloned today

The hardware is quite close to the one in the presentation. Now, the results I get with perf for 1, 2 and 4 cores:

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
========================================================
Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
========================================================
Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
========================================================
Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29

Any idea what could possibly be going wrong here?


On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Ernest,

The answer depends on many different factors – the IOPs limit per device, CPU core frequency, Turbo, etc.  As a frame of reference, the team here at Intel has measured over 3M IO/s on a single Intel Xeon core[1].

-Jim

[1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com<mailto:kreuzerkrieg(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, February 15, 2018 at 6:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Number of NVMe devices per core

Hi All,
I ran 'perf' with different settings to see how to get most of NVMe disks. I see that performance degrades once cpu mask puts more than 2 devises per core. Is it OK? what is the theoretical (or empirical) limit of devices one core can handle?
Sincerely,
Ernest

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 18671 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 15:26 Ernest Zed
  0 siblings, 0 replies; 14+ messages in thread
From: Ernest Zed @ 2018-02-15 15:26 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5472 bytes --]

Dual socket Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz, HT off
8 x Intel® SSD DC P3700 Series
16 x 16 DIMM DDR4  2133 MHz
Ubuntu 17.10
4.13.0-32-generic
SPDK version - cloned today

The hardware is quite close to the one in the presentation. Now, the results I get
with perf for 1, 2 and 4 cores:

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 0x1 --file-prefix=spdk_pid3180 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  206803.20     807.83    4951.17    1088.65   28467.64
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 0:  206803.20     807.83    4951.71    4392.74   25127.60
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  206803.20     807.83    4952.46    4379.57   21966.62
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 0:  206803.20     807.83    4953.35    4379.25   23165.75
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  206803.20     807.83    4954.31    4357.84   30302.48
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 0:  206803.20     807.83    4955.33    4366.51   37664.17
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  206803.20     807.83    4956.35    4357.68   45025.49
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 0:  206803.20     807.83    4957.39    3205.82   52456.83
========================================================
Total                                                  : 1654425.60    6462.60    4954.01    1088.65   52456.83

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c 3 --file-prefix=spdk_pid3192 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  438592.00    1713.25    2334.48    1363.58   16128.43
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 1:  438592.00    1713.25    2335.47    2073.78   14921.21
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  438592.00    1713.25    2336.68    2038.52   29808.68
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 1:  438592.00    1713.25    2337.99    1827.81   44872.18
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  437977.60    1710.85    2337.78    1409.41   15688.80
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 0:  437977.60    1710.85    2338.77    2034.40   14879.24
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  437977.60    1710.85    2339.98    2037.44   29795.93
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 0:  437977.60    1710.85    2341.29    2037.46   44764.63
========================================================
Total                                                  : 3506278.40   13696.40    2337.80    1363.58   44872.18

Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: perf -c F --file-prefix=spdk_pid3205 ]
Device Information                                     :       IOPS       MB/s    Average        min        max
INTEL SSDPE2MD800G4  (CVFT5471002K800HGN  ) from core 3:  614814.65    2401.62    1665.48     827.93   13076.89
INTEL SSDPE2MD800G4  (CVFT54710015800HGN  ) from core 3:  481525.05    1880.96    2128.57    1085.12   37295.10
INTEL SSDPE2MD800G4  (CVFT5471001U800HGN  ) from core 2:  542172.10    2117.86    1888.83     920.16   19435.55
INTEL SSDPE2MD800G4  (CVFT5471002H800HGN  ) from core 2:  488315.25    1907.48    2099.24    1070.72   42946.29
INTEL SSDPE2MD800G4  (CVFT5471002S800HGN  ) from core 1:  585409.45    2286.76    1749.10     897.17   10901.34
INTEL SSDPE2MD800G4  (CVFT5471001L800HGN  ) from core 1:  488748.90    1909.18    2097.28    1126.79   43355.29
INTEL SSDPE2MD800G4  (CVFT5471002L800HGN  ) from core 0:  742426.50    2900.10    1379.04     706.27   10713.54
INTEL SSDPE2MD800G4  (CVFT54710002800HGN  ) from core 0:  488740.80    1909.14    2096.44    1136.78   31606.15
========================================================
Total                                                  : 4432152.70   17313.10    1849.11     706.27   43355.29

Any idea what could possibly be going wrong here?


On Thu, Feb 15, 2018 at 4:50 PM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Hi Ernest,
>
>
>
> The answer depends on many different factors – the IOPs limit per device,
> CPU core frequency, Turbo, etc.  As a frame of reference, the team here at
> Intel has measured over 3M IO/s on a single Intel Xeon core[1].
>
>
>
> -Jim
>
>
>
> [1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <
> kreuzerkrieg(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, February 15, 2018 at 6:46 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] Number of NVMe devices per core
>
>
>
> Hi All,
>
> I ran 'perf' with different settings to see how to get most of NVMe disks.
> I see that performance degrades once cpu mask puts more than 2 devises per
> core. Is it OK? what is the theoretical (or empirical) limit of devices one
> core can handle?
>
> Sincerely,
>
> Ernest
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 9072 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [SPDK] Number of NVMe devices per core
@ 2018-02-15 14:50 Harris, James R
  0 siblings, 0 replies; 14+ messages in thread
From: Harris, James R @ 2018-02-15 14:50 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 956 bytes --]

Hi Ernest,

The answer depends on many different factors – the IOPs limit per device, CPU core frequency, Turbo, etc.  As a frame of reference, the team here at Intel has measured over 3M IO/s on a single Intel Xeon core[1].
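
As a rough back-of-the-envelope illustration (the per-drive ceiling below is an assumed round number, not a figure from this thread): if each drive tops out at about 450K 4K random-read IOPS, then roughly 3,000,000 / 450,000 ~= 6-7 drives are needed to keep one core busy at that 3M IO/s level, so the practical devices-per-core number is set by whichever saturates first, the core or the sum of the drives.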

-Jim

[1] https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160809_FA12_P3_Prepalli.pdf - slide 63


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Ernest Zed <kreuzerkrieg(a)gmail.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, February 15, 2018 at 6:46 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Number of NVMe devices per core

Hi All,
I ran 'perf' with different settings to see how to get most of NVMe disks. I see that performance degrades once cpu mask puts more than 2 devises per core. Is it OK? what is the theoretical (or empirical) limit of devices one core can handle?
Sincerely,
Ernest

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 4259 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2018-02-18 12:24 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-15 13:46 [SPDK] Number of NVMe devices per core Ernest Zed
2018-02-15 14:50 Harris, James R
2018-02-15 15:26 Ernest Zed
2018-02-15 15:35 Harris, James R
2018-02-15 15:51 Ernest Zed
2018-02-15 16:17 Verma, Vishal4
2018-02-15 16:45 Ernest Zed
2018-02-15 16:54 Verma, Vishal4
2018-02-15 17:42 Ernest Zed
2018-02-15 17:52 Verkamp, Daniel
2018-02-15 17:55 Verma, Vishal4
2018-02-15 18:22 Ernest Zed
2018-02-15 18:24 Ernest Zed
2018-02-18 12:24 Ernest Zed
