* Re: [SPDK] ioat performance questions
@ 2017-12-06 15:20 Harris, James R
  0 siblings, 0 replies; 11+ messages in thread
From: Harris, James R @ 2017-12-06 15:20 UTC (permalink / raw)
  To: spdk


This 5GB/s limitation only affects the IOAT DMA engines. It does not affect DMA engines that may exist in other PCIe devices. DMA engines in other PCIe devices are subject to different limitations, including the width and speed of their PCIe links.
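
For a rough sense of those per-device ceilings: the theoretical bandwidth of a PCIe link follows directly from its generation and lane count. A minimal sketch of the arithmetic (the per-lane rates are the standard PCIe figures; the x8 Gen3 link is only an example):

/* Rough theoretical PCIe link bandwidth: lanes * usable per-lane rate.
 * Per-lane rates after encoding overhead:
 *   Gen1 (2.5 GT/s, 8b/10b)    ~250 MB/s
 *   Gen2 (5.0 GT/s, 8b/10b)    ~500 MB/s
 *   Gen3 (8.0 GT/s, 128b/130b) ~985 MB/s
 */
#include <stdio.h>

int main(void)
{
    const double per_lane_mb_s[] = { 250.0, 500.0, 985.0 };  /* Gen1..Gen3 */
    int gen = 3, lanes = 8;                                  /* example: x8 Gen3 */

    printf("x%d Gen%d link: ~%.1f GB/s theoretical\n",
           lanes, gen, lanes * per_lane_mb_s[gen - 1] / 1000.0);
    return 0;
}

For an x8 Gen3 device that works out to roughly 7.9 GB/s before protocol overhead, which is why a single add-in card can have a different ceiling than the ~5 GB/s ioat pipe.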

-Jim


From: "huangqingxin(a)ruijie.com.cn" <huangqingxin(a)ruijie.com.cn>
Date: Wednesday, December 6, 2017 at 7:54 AM
To: James Harris <james.r.harris(a)intel.com>, "spdk(a)lists.01.org" <spdk(a)lists.01.org>, Nathan Marushak <nathan.marushak(a)intel.com>, Paul E Luse <paul.e.luse(a)intel.com>
Subject: Re: Re: [SPDK] ioat performance questions


Hi, Jim

Thank you for the reply. It helps me better understand IOAT. But I have another question about DMA. Will the `first-party` type of DMA be limited by the same hardware "pipe"?
For example, with the many PCIe devices listed below, when these devices use DMA to communicate, is the throughput determined by the "pipe" or by the bus itself?

[root(a)localhost ntb]# lspci | grep Intel
00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02)
00:01.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02)
00:02.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:02.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:03.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:03.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:05.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management (rev 02)

From: Harris, James R<mailto:james.r.harris(a)intel.com>
Date: 2017-12-05 00:05
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Marushak, Nathan<mailto:nathan.marushak(a)intel.com>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Hi,

5GB/s is the expected aggregate throughput for all of the ioat channels on a single Intel Xeon CPU socket.  All of the channels on one CPU socket share the same hardware “pipe”, so using additional channels from that socket will not increase the overall throughput.

Note that the ioat channels on the recently released Intel Xeon Scalable processors share this same shared bandwidth architecture, but with an aggregate throughput closer to 10GB/s per CPU socket.

In the Intel specs, ioat is referred to as Quickdata, so searching on “intel quickdata specification” finds some relevant public links.  Section 3.4 in https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-1600-2600-vol-2-datasheet.pdf has a lot of details on the register definitions.
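
For anyone who would rather drive the channels directly than go through ioat_kperf, below is a minimal, untested sketch against SPDK's userspace ioat API (spdk_ioat_probe / spdk_ioat_submit_copy / spdk_ioat_process_events from include/spdk/ioat.h). Error handling is trimmed, and a real benchmark would claim every channel and keep a deep queue of copies in flight on each:

/* Minimal sketch (untested): claim one ioat channel and do one 4 KiB copy.
 * Build against SPDK's env and ioat libraries. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/ioat.h"

#define XFER_SIZE 4096

static struct spdk_ioat_chan *g_chan;
static volatile bool g_done;

/* Called for each discovered channel; return true to attach. */
static bool probe_cb(void *cb_ctx, struct spdk_pci_device *dev)
{
    return g_chan == NULL;          /* attach to the first channel only */
}

static void attach_cb(void *cb_ctx, struct spdk_pci_device *dev,
                      struct spdk_ioat_chan *chan)
{
    g_chan = chan;
}

static void copy_done(void *arg)
{
    g_done = true;
}

int main(void)
{
    struct spdk_env_opts opts;
    void *src, *dst;

    spdk_env_opts_init(&opts);
    opts.name = "ioat_sketch";
    spdk_env_init(&opts);

    if (spdk_ioat_probe(NULL, probe_cb, attach_cb) != 0 || g_chan == NULL) {
        fprintf(stderr, "no ioat channel found (run setup.sh, run as root)\n");
        return 1;
    }

    src = spdk_dma_zmalloc(XFER_SIZE, 64, NULL);
    dst = spdk_dma_zmalloc(XFER_SIZE, 64, NULL);

    /* One copy; ioat_kperf keeps queue_depth (e.g. 256) of these in flight. */
    spdk_ioat_submit_copy(g_chan, NULL, copy_done, dst, src, XFER_SIZE);
    while (!g_done) {
        spdk_ioat_process_events(g_chan);   /* poll for completions */
    }

    spdk_dma_free(src);
    spdk_dma_free(dst);
    spdk_ioat_detach(g_chan);
    return 0;
}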

Thanks,

-Jim

P.S. Hey Paul – you need to run ioat_kperf as root, in addition to making sure that ioat channels are assigned to the kernel ioat driver.



From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "huangqingxin(a)ruijie.com.cn" <huangqingxin(a)ruijie.com.cn>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Monday, December 4, 2017 at 8:59 AM
To: Nathan Marushak <nathan.marushak(a)intel.com>, "spdk(a)lists.01.org" <spdk(a)lists.01.org>, Paul E Luse <paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions

Hi Nathan

Thanks. Where can I find a specification for the DMA engine? And why does the average per-channel bandwidth go down as the number of channels grows?

From: Marushak, Nathan<mailto:nathan.marushak(a)intel.com>
Date: 2017-12-04 23:53
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com>; spdk(a)lists.01.org
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.
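
If it is unclear which driver currently owns a channel, the binding can be checked straight from sysfs; here is a small sketch (plain Linux, no SPDK; the BDF is only an example, though the ioat channels on these Xeons typically sit at 00:04.x):

/* Sketch: print which kernel driver a PCI device is bound to, by
 * resolving the /sys/bus/pci/devices/<BDF>/driver symlink. */
#include <stdio.h>
#include <limits.h>
#include <unistd.h>

int main(void)
{
    const char *bdf = "0000:00:04.0";   /* example ioat channel BDF */
    char path[PATH_MAX], target[PATH_MAX];
    ssize_t n;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/driver", bdf);
    n = readlink(path, target, sizeof(target) - 1);
    if (n < 0) {
        printf("%s: not bound to any driver\n", bdf);
        return 0;
    }
    target[n] = '\0';
    /* The target ends in the driver name: ioatdma (kernel) after a reset,
     * or vfio-pci / uio_pci_generic after setup.sh. */
    printf("%s -> %s\n", bdf, target);
    return 0;
}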


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-07 13:02 Marushak, Nathan
  0 siblings, 0 replies; 11+ messages in thread
From: Marushak, Nathan @ 2017-12-07 13:02 UTC (permalink / raw)
  To: spdk


Hi Frank,

While your question is very straightforward, the answer isn't, because "it depends". IOAT is just one mechanism for data movement. A core, a NIC, or other add-in cards can also move data from one place to another. So, depending on how an application is architected and the design choices one makes, the overall throughput can be limited by memory, the NIC, ioat, etc.

I realize this doesn't provide the answer you're looking for...well, perhaps it does in a general sense: no, you can't draw that conclusion from the information in this email.

Hope this helps.

Thanks,
Nate

On Dec 7, 2017, at 1:40 AM, Huang Frank <kinzent(a)hotmail.com<mailto:kinzent(a)hotmail.com>> wrote:

Hi, Jim

Can I conclude that the upper limit of a server's throughput will be set by IOAT, if other conditions (network, etc.) are ideal?

________________________________
kinzent(a)hotmail.com<mailto:kinzent(a)hotmail.com>

From: Harris, James R<mailto:james.r.harris(a)intel.com>
Date: 2017-12-06 23:20
To: huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>; spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>; Marushak, Nathan<mailto:nathan.marushak(a)intel.com>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
This 5GB/s limitation only affects the IOAT DMA engines. It does not affect DMA engines that may exist in other PCIe devices. DMA engines in other PCIe devices are subject to different limitations, including the width and speed of their PCIe links.

-Jim


From: "huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>" <huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>>
Date: Wednesday, December 6, 2017 at 7:54 AM
To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>, "spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>" <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, Nathan Marushak <nathan.marushak(a)intel.com<mailto:nathan.marushak(a)intel.com>>, Paul E Luse <paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>>
Subject: Re: Re: [SPDK] ioat performance questions


Hi, Jim

Thank you for the reply. It helps me better understand IOAT. But I have another question about DMA. Will the `first-party` type of DMA be limited by the same hardware "pipe"?
For example, with the many PCIe devices listed below, when these devices use DMA to communicate, is the throughput determined by the "pipe" or by the bus itself?

[root(a)localhost ntb]# lspci | grep Intel
00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02)
00:01.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02)
00:02.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:02.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:03.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:03.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:05.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management (rev 02)

From: Harris, James R<mailto:james.r.harris(a)intel.com>
Date: 2017-12-05 00:05
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Marushak, Nathan<mailto:nathan.marushak(a)intel.com>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Hi,

5GB/s is the expected aggregate throughput for all of the ioat channels on a single Intel Xeon CPU socket.  All of the channels on one CPU socket share the same hardware “pipe”, so using additional channels from that socket will not increase the overall throughput.

Note that the ioat channels on the recently released Intel Xeon Scalable processors share this same shared bandwidth architecture, but with an aggregate throughput closer to 10GB/s per CPU socket.

In the Intel specs, ioat is referred to as Quickdata, so searching on “intel quickdata specification” finds some relevant public links.  Section 3.4 in https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-1600-2600-vol-2-datasheet.pdf has a lot of details on the register definitions.

Thanks,

-Jim

P.S. Hey Paul – you need to run ioat_kperf as root, in addition to making sure that ioat channels are assigned to the kernel ioat driver.



From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of "huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>" <huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, December 4, 2017 at 8:59 AM
To: Nathan Marushak <nathan.marushak(a)intel.com<mailto:nathan.marushak(a)intel.com>>, "spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>" <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, Paul E Luse <paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>>
Subject: Re: [SPDK] ioat performance questions

Hi Nathan

Thanks. Where can I find a specification for the DMA engine? And why does the average per-channel bandwidth go down as the number of channels grows?

From: Marushak, Nathan<mailto:nathan.marushak(a)intel.com>
Date: 2017-12-04 23:53
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>>; spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-07  8:40 Huang Frank
  0 siblings, 0 replies; 11+ messages in thread
From: Huang Frank @ 2017-12-07  8:40 UTC (permalink / raw)
  To: spdk


Hi, Jim

Can I conclude that the upper limit of a server's throughput will be set by IOAT, if other conditions (network, etc.) are ideal?

________________________________
kinzent(a)hotmail.com

From: Harris, James R<mailto:james.r.harris(a)intel.com>
Date: 2017-12-06 23:20
To: huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>; spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>; Marushak, Nathan<mailto:nathan.marushak(a)intel.com>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
This 5GB/s limitation only affects the IOAT DMA engines. It does not affect DMA engines that may exist in other PCIe devices. DMA engines in other PCIe devices are subject to different limitations, including the width and speed of their PCIe links.

-Jim


From: "huangqingxin(a)ruijie.com.cn" <huangqingxin(a)ruijie.com.cn>
Date: Wednesday, December 6, 2017 at 7:54 AM
To: James Harris <james.r.harris(a)intel.com>, "spdk(a)lists.01.org" <spdk(a)lists.01.org>, Nathan Marushak <nathan.marushak(a)intel.com>, Paul E Luse <paul.e.luse(a)intel.com>
Subject: Re: Re: [SPDK] ioat performance questions


Hi, Jim

Thank you for the reply. It helps me better understand IOAT. But I have another question about DMA. Will the `first-party` type of DMA be limited by the same hardware "pipe"?
For example, with the many PCIe devices listed below, when these devices use DMA to communicate, is the throughput determined by the "pipe" or by the bus itself?

[root(a)localhost ntb]# lspci | grep Intel
00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02)
00:01.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02)
00:02.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:02.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:03.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:03.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:05.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management (rev 02)

From: Harris, James R<mailto:james.r.harris(a)intel.com>
Date: 2017-12-05 00:05
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Marushak, Nathan<mailto:nathan.marushak(a)intel.com>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Hi,

5GB/s is the expected aggregate throughput for all of the ioat channels on a single Intel Xeon CPU socket.  All of the channels on one CPU socket share the same hardware “pipe”, so using additional channels from that socket will not increase the overall throughput.

Note that the ioat channels on the recently released Intel Xeon Scalable processors share this same shared bandwidth architecture, but with an aggregate throughput closer to 10GB/s per CPU socket.

In the Intel specs, ioat is referred to as Quickdata, so searching on “intel quickdata specification” finds some relevant public links.  Section 3.4 in https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-1600-2600-vol-2-datasheet.pdf has a lot of details on the register definitions.

Thanks,

-Jim

P.S. Hey Paul – you need to run ioat_kperf as root, in addition to making sure that ioat channels are assigned to the kernel ioat driver.



From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "huangqingxin(a)ruijie.com.cn" <huangqingxin(a)ruijie.com.cn>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Monday, December 4, 2017 at 8:59 AM
To: Nathan Marushak <nathan.marushak(a)intel.com>, "spdk(a)lists.01.org" <spdk(a)lists.01.org>, Paul E Luse <paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions

Hi Nathan

Thanks. Where can I find a specification for the DMA engine? And why does the average per-channel bandwidth go down as the number of channels grows?

From: Marushak, Nathan<mailto:nathan.marushak(a)intel.com>
Date: 2017-12-04 23:53
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com>; spdk(a)lists.01.org
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-06 14:54 huangqingxin
  0 siblings, 0 replies; 11+ messages in thread
From: huangqingxin @ 2017-12-06 14:54 UTC (permalink / raw)
  To: spdk



Hi, Jim

Thank you for the reply. It helps me better understand IOAT. But I have another question about DMA. Will the `first-party` type of DMA be limited by the same hardware "pipe"?
For example, with the many PCIe devices listed below, when these devices use DMA to communicate, is the throughput determined by the "pipe" or by the bus itself?

[root(a)localhost ntb]# lspci | grep Intel
00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02)
00:01.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02)
00:02.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:02.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:03.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:03.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:05.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management (rev 02)

From: Harris, James R<mailto:james.r.harris(a)intel.com>
Date: 2017-12-05 00:05
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Marushak, Nathan<mailto:nathan.marushak(a)intel.com>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Hi,

5GB/s is the expected aggregate throughput for all of the ioat channels on a single Intel Xeon CPU socket.  All of the channels on one CPU socket share the same hardware “pipe”, so using additional channels from that socket will not increase the overall throughput.

Note that the ioat channels on the recently released Intel Xeon Scalable processors share this same shared bandwidth architecture, but with an aggregate throughput closer to 10GB/s per CPU socket.

In the Intel specs, ioat is referred to as Quickdata, so searching on “intel quickdata specification” finds some relevant public links.  Section 3.4 in https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-1600-2600-vol-2-datasheet.pdf has a lot of details on the register definitions.

Thanks,

-Jim

P.S. Hey Paul – you need to run ioat_kperf as root, in addition to making sure that ioat channels are assigned to the kernel ioat driver.



From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "huangqingxin(a)ruijie.com.cn" <huangqingxin(a)ruijie.com.cn>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Monday, December 4, 2017 at 8:59 AM
To: Nathan Marushak <nathan.marushak(a)intel.com>, "spdk(a)lists.01.org" <spdk(a)lists.01.org>, Paul E Luse <paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions

Hi Nathan

Thanks. Where can I find a specification for the DMA engine? And why does the average per-channel bandwidth go down as the number of channels grows?

From: Marushak, Nathan<mailto:nathan.marushak(a)intel.com>
Date: 2017-12-04 23:53
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com>; spdk(a)lists.01.org
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-04 17:03 Luse, Paul E
  0 siblings, 0 replies; 11+ messages in thread
From: Luse, Paul E @ 2017-12-04 17:03 UTC (permalink / raw)
  To: spdk


Ah, thanks Jim, I wasn’t running with sudo. FYI, here’s my output FWIW; it jibes with what both Nate and Jim have said:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ sudo ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 128, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . . .
Channel 0 Bandwidth 584 MiB/s
Channel 1 Bandwidth 584 MiB/s
Channel 2 Bandwidth 584 MiB/s
Channel 3 Bandwidth 584 MiB/s
Channel 4 Bandwidth 584 MiB/s
Channel 5 Bandwidth 584 MiB/s
Channel 6 Bandwidth 584 MiB/s
Channel 7 Bandwidth 584 MiB/s
Total Channel Bandwidth: 4904 MiB/s
Average Bandwidth Per Channel: 584 MiB/s
peluse(a)pels-64:~/spdk/examples/ioat/kperf$ sudo ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 128, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1258 MiB/s
Channel 1 Bandwidth 1258 MiB/s
Channel 2 Bandwidth 1260 MiB/s
Channel 3 Bandwidth 1255 MiB/s
Total Channel Bandwidth: 5266 MiB/s
Average Bandwidth Per Channel: 1255 MiB/s
peluse(a)pels-64:~/spdk/examples/ioat/kperf$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                72
On-line CPU(s) list:   0-71
Thread(s) per core:    2
Core(s) per socket:    18
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Stepping:              2
CPU MHz:               1201.121
CPU max MHz:           3600.0000
CPU min MHz:           1200.0000
BogoMIPS:              4591.78
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0-17,36-53
NUMA node1 CPU(s):     18-35,54-71
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts

From: Harris, James R
Sent: Monday, December 4, 2017 9:06 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; Marushak, Nathan <nathan.marushak(a)intel.com>; Luse, Paul E <paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions

Hi,

5GB/s is the expected aggregate throughput for all of the ioat channels on a single Intel Xeon CPU socket.  All of the channels on one CPU socket share the same hardware “pipe”, so using additional channels from that socket will not increase the overall throughput.

Note that the ioat channels on the recently released Intel Xeon Scalable processors share this same shared bandwidth architecture, but with an aggregate throughput closer to 10GB/s per CPU socket.

In the Intel specs, ioat is referred to as Quickdata, so searching on “intel quickdata specification” finds some relevant public links.  Section 3.4 in https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-1600-2600-vol-2-datasheet.pdf has a lot of details on the register definitions.

Thanks,

-Jim

P.S. Hey Paul – you need to run ioat_kperf as root, in addition to making sure that ioat channels are assigned to the kernel ioat driver.



From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of "huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>" <huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, December 4, 2017 at 8:59 AM
To: Nathan Marushak <nathan.marushak(a)intel.com<mailto:nathan.marushak(a)intel.com>>, "spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>" <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, Paul E Luse <paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>>
Subject: Re: [SPDK] ioat performance questions

Hi Nathan

Thanks. Where can I find a specification for the DMA engine? And why does the average per-channel bandwidth go down as the number of channels grows?

From: Marushak, Nathan<mailto:nathan.marushak(a)intel.com>
Date: 2017-12-04 23:53
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>>; spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-04 16:05 Harris, James R
  0 siblings, 0 replies; 11+ messages in thread
From: Harris, James R @ 2017-12-04 16:05 UTC (permalink / raw)
  To: spdk


Hi,

5GB/s is the expected aggregate throughput for all of the ioat channels on a single Intel Xeon CPU socket.  All of the channels on one CPU socket share the same hardware “pipe”, so using additional channels from that socket will not increase the overall throughput.

Note that the ioat channels on the recently released Intel Xeon Scalable processors share this same shared bandwidth architecture, but with an aggregate throughput closer to 10GB/s per CPU socket.

In the Intel specs, ioat is referred to as Quickdata, so searching on “intel quickdata specification” finds some relevant public links.  Section 3.4 in https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-1600-2600-vol-2-datasheet.pdf has a lot of details on the register definitions.

Thanks,

-Jim

P.S. Hey Paul – you need to run ioat_kperf as root, in addition to making sure that ioat channels are assigned to the kernel ioat driver.



From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "huangqingxin(a)ruijie.com.cn" <huangqingxin(a)ruijie.com.cn>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Monday, December 4, 2017 at 8:59 AM
To: Nathan Marushak <nathan.marushak(a)intel.com>, "spdk(a)lists.01.org" <spdk(a)lists.01.org>, Paul E Luse <paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions

Hi Nathan

Thanks. Where can I find a specification for the DMA engine? And why does the average per-channel bandwidth go down as the number of channels grows?

From: Marushak, Nathan<mailto:nathan.marushak(a)intel.com>
Date: 2017-12-04 23:53
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com>; spdk(a)lists.01.org
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-04 15:59 huangqingxin
  0 siblings, 0 replies; 11+ messages in thread
From: huangqingxin @ 2017-12-04 15:59 UTC (permalink / raw)
  To: spdk


Hi Nathan

Thanks. Where can I find a specification for the DMA engine? And why does the average per-channel bandwidth go down as the number of channels grows?

From: Marushak, Nathan<mailto:nathan.marushak(a)intel.com>
Date: 2017-12-04 23:53
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>; Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Subject: Re: [SPDK] ioat performance questions
Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com>; spdk(a)lists.01.org
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-04 15:53 Marushak, Nathan
  0 siblings, 0 replies; 11+ messages in thread
From: Marushak, Nathan @ 2017-12-04 15:53 UTC (permalink / raw)
  To: spdk


Depending on the platform you are using, 5 GB/s is likely the expected throughput.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 04, 2017 8:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com>; spdk(a)lists.01.org
Subject: Re: [SPDK] ioat performance questions

hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn<mailto:huangqingxin(a)ruijie.com.cn>
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-04 14:59 huangqingxin
  0 siblings, 0 replies; 11+ messages in thread
From: huangqingxin @ 2017-12-04 14:59 UTC (permalink / raw)
  To: spdk


hi, Paul

Thank you!
If you have run ./scripts/setup.sh, the DMA channels will be unbound from the kernel driver, which causes the "No DMA channels or Devices found" error.
Have you tried releasing the DMA channels from vfio? You can run `./scripts/setup.sh reset`.


From: Luse, Paul E<mailto:paul.e.luse(a)intel.com>
Date: 2017-12-04 22:19
To: Storage Performance Development Kit<mailto:spdk(a)lists.01.org>
Subject: Re: [SPDK] ioat performance questions
I’m sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* Re: [SPDK] ioat performance questions
@ 2017-12-04 14:19 Luse, Paul E
  0 siblings, 0 replies; 11+ messages in thread
From: Luse, Paul E @ 2017-12-04 14:19 UTC (permalink / raw)
  To: spdk


I'm sure someone else can help. I at least tried to repro your results as another data point, but even after following the directions on
https://github.com/spdk/spdk/tree/master/examples/ioat/kperf I get:

peluse(a)pels-64:~/spdk/examples/ioat/kperf$ ./ioat_kperf -n 8
Cannot set dma channels

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of huangqingxin(a)ruijie.com.cn
Sent: Monday, December 4, 2017 6:38 AM
To: spdk(a)lists.01.org
Subject: [SPDK] ioat performance questions

hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


* [SPDK] ioat performance questions
@ 2017-12-04 13:38 huangqingxin
  0 siblings, 0 replies; 11+ messages in thread
From: huangqingxin @ 2017-12-04 13:38 UTC (permalink / raw)
  To: spdk


hi,

When I run the ioat_kperf tool provided by SPDK, I get this result.

[root(a)localhost kperf]# ./ioat_kperf -n 8
Total 8 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . . . . .
Channel 0 Bandwidth 661 MiB/s
Channel 1 Bandwidth 660 MiB/s
Channel 2 Bandwidth 661 MiB/s
Channel 3 Bandwidth 661 MiB/s
Channel 4 Bandwidth 661 MiB/s
Channel 5 Bandwidth 661 MiB/s
Channel 6 Bandwidth 661 MiB/s
Channel 7 Bandwidth 661 MiB/s
Total Channel Bandwidth: 5544 MiB/s
Average Bandwidth Per Channel: 660 MiB/s
[root(a)localhost kperf]# ./ioat_kperf -n 4
Total 4 Channels, Queue_Depth 256, Transfer Size 4096 Bytes, Total Transfer Size 4 GB
Running I/O . . . . .
Channel 0 Bandwidth 1319 MiB/s
Channel 1 Bandwidth 1322 MiB/s
Channel 2 Bandwidth 1319 MiB/s
Channel 3 Bandwidth 1318 MiB/s
Total Channel Bandwidth: 5530 MiB/s
Average Bandwidth Per Channel: 1318 MiB/s
[root(a)localhost kperf]#

[root(a)localhost kperf]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23

I found that the `Total Channel Bandwidth` does not increase with more channels. What is the limitation? Can ioat DMA on E5 v3 only reach around 5 GB/s?

Any help will be appreciated!


Thread overview: 11+ messages
2017-12-06 15:20 [SPDK] ioat performance questions Harris, James R
  -- strict thread matches above, loose matches on Subject: below --
2017-12-07 13:02 Marushak, Nathan
2017-12-07  8:40 Huang Frank
2017-12-06 14:54 huangqingxin
2017-12-04 17:03 Luse, Paul E
2017-12-04 16:05 Harris, James R
2017-12-04 15:59 huangqingxin
2017-12-04 15:53 Marushak, Nathan
2017-12-04 14:59 huangqingxin
2017-12-04 14:19 Luse, Paul E
2017-12-04 13:38 huangqingxin
