* Re: [SPDK] SPDK performance benchmarking guidelines and using FIO_plugin
@ 2017-04-04 15:49 Harris, James R
  0 siblings, 0 replies; 6+ messages in thread
From: Harris, James R @ 2017-04-04 15:49 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4126 bytes --]


> On Apr 4, 2017, at 4:25 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:
> 
> Hi Jim, 
> 
> Thank you for the pointers on the workloads. Are these tests (throughput and latency) run on a single drive or on a set of drives?

Hi Kiran,

This will depend on what you are trying to measure.  Some throughput tests measure how many I/O can be performed using a single Intel Xeon CPU core, in which case we test with multiple SSDs to max out the CPU core.  Other tests such as QD=1 latency tests are usually done using a single drive.
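
For illustration only, here is a minimal job-file sketch of the QD=1 latency case.  It is not taken from any mail in this thread: the PCIe address and runtime are placeholders, and the filename format simply mirrors the job file attached later in the thread.  For a throughput test, fio can normally take several devices as colon-separated filenames (presumably why the PCIe address is written with dots rather than colons here) together with a much higher iodepth.

[global]
# required by the SPDK fio plugin; see the note about thread=1 below
thread=1
# path to the SPDK plugin binary, as in the attached job file
ioengine=./fio_plugin
direct=1
time_based
# placeholder runtime in seconds
runtime=60
bs=4k
rw=randread

[latency-qd1]
# placeholder PCIe address, namespace 1 (format taken from the attached job file)
filename=0000.03.00.0/1
iodepth=1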

> Attached is the FIO job file I have been using, which still produces the error.

Please add thread=1 to your job file.  A patch will be pushed shortly that documents this requirement for the SPDK plugin and enforces it at runtime with a more suitable error message.
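
Concretely, that means adding one line to the [global] section of the attached seq_read.4KiB_csi.fio; everything else in the file stays as it is.  A sketch of the start of the file with the change applied:

[global]
# new: run the fio job as a thread rather than a forked process
thread=1
time_based
size=700g
runtime=30
filename=0000.03.00.0/1
ioengine=./fio_plugin
# ... remaining [global] options and the [job00-sr4k-1q] section unchanged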

-Jim

> 
> 
> Thank you
> Kiran
> 
>> On 31-Mar-2017, at 12:09 AM, Harris, James R <james.r.harris(a)intel.com> wrote:
>> 
>>> 
>>> On Mar 30, 2017, at 5:43 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:
>>> 
>>> Hi All,
>>> 
>>> 
>>> I have a two-part question about performance benchmarking of SPDK with the fio_plugin.
>>> 
>>> 1. Is there a reference document specifying the workload types, queue depths, and block sizes used for benchmarking the SPDK performance numbers?
>>>     Is any OS-level performance tuning required? It would be great to get some insight into the performance testbed used.
>>> 
>>> Note: 
>>>      I did find the DPDK performance optimisation guidelines at https://lists.01.org/pipermail/spdk/2016-June/000035.html, which are useful.
>>> 
>> 
>> Hi Kiran,
>> 
>> Most of the tests we run for driver performance comparisons are with 4KB random reads.  This puts the most stress on the driver, since on NAND SSDs, random read performance is typically much higher than random write performance.  For throughput tests, queue depth is typically tested at 128.  For latency tests, queue depth is typically tested at 1.
>> 
>>> 
>>> 2. I am trying the fio_plugin for benchmarking; the jobs complete with the following error:
>>> 
>>> “nvme_pcie.c: 996:nvme_pcie_qpair_complete_pending_admin_request: ***ERROR*** the active process (pid 6700) is not found for this controller.”
>>> 
>>> I found that nvme_pcie_qpair_complete_pending_admin_request() checks whether the process exists; this is where the error message comes from.
>>> I am not sure how this process is getting killed before completion. No other operation is being performed on this system apart from running the fio plugin.
>>> Has a similar issue been seen in the past? Any pointers on getting around this error would help.
>> 
>> Could you post the contents of your fio job file?  Note that the SPDK fio plugin is currently limited to a single job.  I would expect to see an error like this when specifying multiple jobs without -thread (meaning a separate process per job).
>> 
>> Thanks,
>> 
>> -Jim
>> 
>>> 
>>> Below are my setup details:
>>> 
>>> OS: Fedora 25 (Server Edition)
>>> Kernel version: 4.8.6-300
>>> DPDK version: 6.11
>>> 
>>> I have attached a single 745 GB Intel NVMe drive, on which I am running FIO.
>>> 
>>> The workarounds below were tried and the issue still persists; I am not sure how to get around this.
>>> 
>>> Workarounds:
>>> 
>>> 1. Tried different workloads in FIO.
>>> 2. Detached the NVMe drive and attached a new NVMe drive.
>>> 3. Re-installed DPDK, SPDK, and FIO.
>>> 
>>> Note:
>>> The following links were used to install and set up SPDK and the FIO plugin:
>>> https://github.com/spdk/spdk —> SPDK 
>>> https://github.com/spdk/spdk/tree/master/examples/nvme/fio_plugin —> FIO_plugin
>>> 
>>> 
>>> Thank you
>>> Kiran
>>> _______________________________________________
>>> SPDK mailing list
>>> SPDK(a)lists.01.org
>>> https://lists.01.org/mailman/listinfo/spdk
>> 
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
> 
> <seq_read.4KiB_csi.fio>



* Re: [SPDK] SPDK performance benchmarking guidelines and using FIO_plugin
@ 2017-04-05 10:20 Kiran Dikshit
  0 siblings, 0 replies; 6+ messages in thread
From: Kiran Dikshit @ 2017-04-05 10:20 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4571 bytes --]

Hi Jim, 

Thank you. Adding the thread parameter did the trick; I can now see the job completing without errors. 
So going forward we need the thread parameter in our job files, set to 1. 

- Kiran

> On 04-Apr-2017, at 9:19 PM, Harris, James R <james.r.harris(a)intel.com> wrote:
> 
>> 
>> On Apr 4, 2017, at 4:25 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:
>> 
>> Hi Jim, 
>> 
>> Thank you for the pointers on the workloads. Are these tests (throughput and latency) run on a single drive or on a set of drives?
> 
> Hi Kiran,
> 
> This will depend on what you are trying to measure.  Some throughput tests measure how many I/O can be performed using a single Intel Xeon CPU core, in which case we test with multiple SSDs to max out the CPU core.  Other tests such as QD=1 latency tests are usually done using a single drive.
> 
>> Attached is the FIO job file I have been using, which still produces the error.
> 
> Please add thread=1 to your job file.  A patch will be pushed shortly that documents this requirement for the SPDK plugin and enforces it at runtime with a more suitable error message.
> 
> -Jim
> 
>> 
>> 
>> Thank you
>> Kiran
>> 
>>> On 31-Mar-2017, at 12:09 AM, Harris, James R <james.r.harris(a)intel.com> wrote:
>>> 
>>>> 
>>>> On Mar 30, 2017, at 5:43 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:
>>>> 
>>>> Hi All,
>>>> 
>>>> 
>>>> I have a two-part question about performance benchmarking of SPDK with the fio_plugin.
>>>> 
>>>> 1. Is there a reference document specifying the workload types, queue depths, and block sizes used for benchmarking the SPDK performance numbers?
>>>>    Is any OS-level performance tuning required? It would be great to get some insight into the performance testbed used.
>>>> 
>>>> Note: 
>>>>     I did find the DPDK performance optimisation guidelines at https://lists.01.org/pipermail/spdk/2016-June/000035.html, which are useful.
>>>> 
>>> 
>>> Hi Kiran,
>>> 
>>> Most of the tests we run for driver performance comparisons are with 4KB random reads.  This puts the most stress on the driver, since on NAND SSDs, random read performance is typically much higher than random write performance.  For throughput tests, queue depth is typically tested at 128.  For latency tests, queue depth is typically tested at 1.
>>> 
>>>> 
>>>> 2. I am trying the fio_plugin for benchmarking; the jobs complete with the following error:
>>>> 
>>>> “nvme_pcie.c: 996:nvme_pcie_qpair_complete_pending_admin_request: ***ERROR*** the active process (pid 6700) is not found for this controller.”
>>>> 
>>>> I found that nvme_pcie_qpair_complete_pending_admin_request() checks whether the process exists; this is where the error message comes from.
>>>> I am not sure how this process is getting killed before completion. No other operation is being performed on this system apart from running the fio plugin.
>>>> Has a similar issue been seen in the past? Any pointers on getting around this error would help.
>>> 
>>> Could you post the contents of your fio job file?  Note that the SPDK fio plugin is currently limited to a single job.  I would expect to see an error like this when specifying multiple jobs without -thread (meaning a separate process per job).
>>> 
>>> Thanks,
>>> 
>>> -Jim
>>> 
>>>> 
>>>> Below are my setup details:
>>>> 
>>>> OS: Fedora 25 (Server Edition)
>>>> Kernel version: 4.8.6-300
>>>> DPDK version: 6.11
>>>> 
>>>> I have attached a single 745 GB Intel NVMe drive, on which I am running FIO.
>>>> 
>>>> The workarounds below were tried and the issue still persists; I am not sure how to get around this.
>>>> 
>>>> Workarounds:
>>>> 
>>>> 1. Tried different workloads in FIO.
>>>> 2. Detached the NVMe drive and attached a new NVMe drive.
>>>> 3. Re-installed DPDK, SPDK, and FIO.
>>>> 
>>>> Note:
>>>> The following links were used to install and set up SPDK and the FIO plugin:
>>>> https://github.com/spdk/spdk —> SPDK 
>>>> https://github.com/spdk/spdk/tree/master/examples/nvme/fio_plugin —> FIO_plugin
>>>> 
>>>> 
>>>> Thank you
>>>> Kiran
>>>> _______________________________________________
>>>> SPDK mailing list
>>>> SPDK(a)lists.01.org
>>>> https://lists.01.org/mailman/listinfo/spdk
>>> 
>>> _______________________________________________
>>> SPDK mailing list
>>> SPDK(a)lists.01.org
>>> https://lists.01.org/mailman/listinfo/spdk
>> 
>> <seq_read.4KiB_csi.fio>




* Re: [SPDK] SPDK performance benchmarking guidelines and using FIO_plugin
@ 2017-04-04 11:25 Kiran Dikshit
  0 siblings, 0 replies; 6+ messages in thread
From: Kiran Dikshit @ 2017-04-04 11:25 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3755 bytes --]

Hi Jim, 

Thank you for the pointers on the workloads. Are these tests (throughput and latency) run on a single drive or on a set of drives?

Attached is the FIO job file I have been using, which still produces the error.



Thank you
Kiran

> On 31-Mar-2017, at 12:09 AM, Harris, James R <james.r.harris(a)intel.com> wrote:
> 
>> 
>> On Mar 30, 2017, at 5:43 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:
>> 
>> Hi All,
>> 
>> 
>> I have a two-part question about performance benchmarking of SPDK with the fio_plugin.
>> 
>> 1. Is there a reference document specifying the workload types, queue depths, and block sizes used for benchmarking the SPDK performance numbers?
>>     Is any OS-level performance tuning required? It would be great to get some insight into the performance testbed used.
>> 
>> Note: 
>>      I did find the DPDK performance optimisation guidelines at https://lists.01.org/pipermail/spdk/2016-June/000035.html, which are useful.
>> 
> 
> Hi Kiran,
> 
> Most of the tests we run for driver performance comparisons are with 4KB random reads.  This puts the most stress on the driver, since on NAND SSDs, random read performance is typically much higher than random write performance.  For throughput tests, queue depth is typically tested at 128.  For latency tests, queue depth is typically tested at 1.
> 
>> 
>> 2. I am trying the fio_plugin for benchmarking; the jobs complete with the following error:
>> 
>> “nvme_pcie.c: 996:nvme_pcie_qpair_complete_pending_admin_request: ***ERROR*** the active process (pid 6700) is not found for this controller.”
>> 
>> I found that nvme_pcie_qpair_complete_pending_admin_request() checks whether the process exists; this is where the error message comes from.
>> I am not sure how this process is getting killed before completion. No other operation is being performed on this system apart from running the fio plugin.
>> Has a similar issue been seen in the past? Any pointers on getting around this error would help.
> 
> Could you post the contents of your fio job file?  Note that the SPDK fio plugin is currently limited to a single job.  I would expect to see an error like this when specifying multiple jobs without -thread (meaning a separate process per job).
> 
> Thanks,
> 
> -Jim
> 
>> 
>> Below are my setup details:
>> 
>> OS: Fedora 25 (Server Edition)
>> Kernel version: 4.8.6-300
>> DPDK version: 6.11
>> 
>> I have attached a single 745 GB Intel NVMe drive, on which I am running FIO.
>> 
>> The workarounds below were tried and the issue still persists; I am not sure how to get around this.
>> 
>> Workarounds:
>> 
>> 1. Tried different workloads in FIO.
>> 2. Detached the NVMe drive and attached a new NVMe drive.
>> 3. Re-installed DPDK, SPDK, and FIO.
>> 
>> Note:
>> The following links were used to install and set up SPDK and the FIO plugin:
>> https://github.com/spdk/spdk —> SPDK 
>> https://github.com/spdk/spdk/tree/master/examples/nvme/fio_plugin —> FIO_plugin
>> 
>> 
>> Thank you
>> Kiran
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #3: seq_read.4KiB_csi.fio --]
[-- Type: application/octet-stream, Size: 305 bytes --]

[global]
time_based
size=700g
runtime=30
filename=0000.03.00.0/1
ioengine=./fio_plugin
randrepeat=0
direct=1
invalidate=1
verify=0
verify_fatal=0
ramp_time=5
blocksize=4k
zero_buffers
rw=read
numjobs=1
bwavgtime=1000
iopsavgtime=1000
log_avg_msec=1000
group_reporting

[job00-sr4k-1q]
stonewall
iodepth=1



* Re: [SPDK] SPDK performance benchmarking guidelines and using FIO_plugin
@ 2017-03-31  5:40 Kiran Dikshit
  0 siblings, 0 replies; 6+ messages in thread
From: Kiran Dikshit @ 2017-03-31  5:40 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3753 bytes --]

Hi Jim, 

Thank you for the pointers on the workloads. Are these tests (throughput and latency) run on a single drive or on a set of drives?

Attached is the FIO job file I have been using, which still produces the error.


Thank you
Kiran

> On 31-Mar-2017, at 12:09 AM, Harris, James R <james.r.harris(a)intel.com> wrote:
> 
>> 
>> On Mar 30, 2017, at 5:43 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:
>> 
>> Hi All,
>> 
>> 
>> I have a two-part question about performance benchmarking of SPDK with the fio_plugin.
>> 
>> 1. Is there a reference document specifying the workload types, queue depths, and block sizes used for benchmarking the SPDK performance numbers?
>>     Is any OS-level performance tuning required? It would be great to get some insight into the performance testbed used.
>> 
>> Note: 
>>      I did find the DPDK performance optimisation guidelines at https://lists.01.org/pipermail/spdk/2016-June/000035.html, which are useful.
>> 
> 
> Hi Kiran,
> 
> Most of the tests we run for driver performance comparisons are with 4KB random reads.  This puts the most stress on the driver, since on NAND SSDs, random read performance is typically much higher than random write performance.  For throughput tests, queue depth is typically tested at 128.  For latency tests, queue depth is typically tested at 1.
> 
>> 
>> 2. I am trying the fio_plugin for benchmarking; the jobs complete with the following error:
>> 
>> “nvme_pcie.c: 996:nvme_pcie_qpair_complete_pending_admin_request: ***ERROR*** the active process (pid 6700) is not found for this controller.”
>> 
>> I found that nvme_pcie_qpair_complete_pending_admin_request() checks whether the process exists; this is where the error message comes from.
>> I am not sure how this process is getting killed before completion. No other operation is being performed on this system apart from running the fio plugin.
>> Has a similar issue been seen in the past? Any pointers on getting around this error would help.
> 
> Could you post the contents of your fio job file?  Note that the SPDK fio plugin is currently limited to a single job.  I would expect to see an error like this when specifying multiple jobs without -thread (meaning a separate process per job).
> 
> Thanks,
> 
> -Jim
> 
>> 
>> Below are my setup details:
>> 
>> OS: Fedora 25 (Server Edition)
>> Kernel version: 4.8.6-300
>> DPDK version: 6.11
>> 
>> I have attached a single 745 GB Intel NVMe drive, on which I am running FIO.
>> 
>> The workarounds below were tried and the issue still persists; I am not sure how to get around this.
>> 
>> Workarounds:
>> 
>> 1. Tried different workloads in FIO.
>> 2. Detached the NVMe drive and attached a new NVMe drive.
>> 3. Re-installed DPDK, SPDK, and FIO.
>> 
>> Note:
>> The following links were used to install and set up SPDK and the FIO plugin:
>> https://github.com/spdk/spdk —> SPDK 
>> https://github.com/spdk/spdk/tree/master/examples/nvme/fio_plugin —> FIO_plugin
>> 
>> 
>> Thank you
>> Kiran
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #3: seq_read.4KiB_csi.fio --]
[-- Type: application/octet-stream, Size: 305 bytes --]

[global]
time_based
size=700g
runtime=30
filename=0000.03.00.0/1
ioengine=./fio_plugin
randrepeat=0
direct=1
invalidate=1
verify=0
verify_fatal=0
ramp_time=5
blocksize=4k
zero_buffers
rw=read
numjobs=1
bwavgtime=1000
iopsavgtime=1000
log_avg_msec=1000
group_reporting

[job00-sr4k-1q]
stonewall
iodepth=1



* Re: [SPDK] SPDK performance benchmarking guidelines and using FIO_plugin
@ 2017-03-30 18:39 Harris, James R
  0 siblings, 0 replies; 6+ messages in thread
From: Harris, James R @ 2017-03-30 18:39 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2784 bytes --]


On Mar 30, 2017, at 5:43 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:

Hi All,


I have a two-part question about performance benchmarking of SPDK with the fio_plugin.

1. Is there a reference document specifying the workload types, queue depths, and block sizes used for benchmarking the SPDK performance numbers?
    Is any OS-level performance tuning required? It would be great to get some insight into the performance testbed used.

Note:
     I did find the DPDK performance optimisation guidelines at https://lists.01.org/pipermail/spdk/2016-June/000035.html, which are useful.


Hi Kiran,

Most of the tests we run for driver performance comparisons are with 4KB random reads.  This puts the most stress on the driver, since on NAND SSDs, random read performance is typically much higher than random write performance.  For throughput tests, queue depth is typically tested at 128.  For latency tests, queue depth is typically tested at 1.
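
As a rough sketch only (the device address and runtime below are placeholders, and the filename format follows the job file attached elsewhere in this thread), a 4KB random-read throughput job at queue depth 128 could look like this; the same file with iodepth=1 covers the latency case:

[global]
# required by the SPDK fio plugin; see the thread=1 discussion elsewhere in this thread
thread=1
ioengine=./fio_plugin
direct=1
time_based
# placeholder runtime in seconds
runtime=60
bs=4k
rw=randread

[throughput-qd128]
# placeholder PCIe address, namespace 1
filename=0000.03.00.0/1
iodepth=128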


2. I am trying the fio_plugin for benchmarking; the jobs complete with the following error:

“nvme_pcie.c: 996:nvme_pcie_qpair_complete_pending_admin_request: ***ERROR*** the active process (pid 6700) is not found for this controller.”

I found that nvme_pcie_qpair_complete_pending_admin_request() checks whether the process exists; this is where the error message comes from.
I am not sure how this process is getting killed before completion. No other operation is being performed on this system apart from running the fio plugin.
Has a similar issue been seen in the past? Any pointers on getting around this error would help.

Could you post the contents of your fio job file?  Note that the SPDK fio plugin is currently limited to a single job.  I would expect to see an error like this when specifying multiple jobs without -thread (meaning a separate process per job).
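
To make that concrete, here is a sketch of the kind of job file that would run into this (the values are illustrative, not taken from your attachment).  With two job sections and no 'thread' option, fio forks a separate process per job section, which the SPDK plugin does not support.

[global]
ioengine=./fio_plugin
# placeholder PCIe address, namespace 1
filename=0000.03.00.0/1
bs=4k
rw=randread
iodepth=32

# two job sections: without 'thread', each becomes its own forked process
[first-job]
[second-job]

# Supported form: keep a single job section and add thread=1 to [global].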

Thanks,

-Jim


Below are my setup details:

OS: Fedora 25 (Server Edition)
Kernel version: 4.8.6-300
DPDK version: 6.11

I have attached a single 745 GB Intel NVMe drive, on which I am running FIO.

The workarounds below were tried and the issue still persists; I am not sure how to get around this.

Workarounds:

1. Tried different workloads in FIO.
2. Detached the NVMe drive and attached a new NVMe drive.
3. Re-installed DPDK, SPDK, and FIO.

Note:
The following links were used to install and set up SPDK and the FIO plugin:
https://github.com/spdk/spdk —> SPDK
https://github.com/spdk/spdk/tree/master/examples/nvme/fio_plugin —> FIO_plugin


Thank you
Kiran
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk




* [SPDK] SPDK performance benchmarking guidelines and using FIO_plugin
@ 2017-03-30 12:43 Kiran Dikshit
  0 siblings, 0 replies; 6+ messages in thread
From: Kiran Dikshit @ 2017-03-30 12:43 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2038 bytes --]

Hi All,


I have a two-part question about performance benchmarking of SPDK with the fio_plugin. 

1. Is there a reference document specifying the workload types, queue depths, and block sizes used for benchmarking the SPDK performance numbers?
    Is any OS-level performance tuning required? It would be great to get some insight into the performance testbed used.

	Note: 
	     I did find the DPDK performance optimisation guidelines at https://lists.01.org/pipermail/spdk/2016-June/000035.html, which are useful.


2. I am trying the fio_plugin for benchmarking; the jobs complete with the following error:

“nvme_pcie.c: 996:nvme_pcie_qpair_complete_pending_admin_request: ***ERROR*** the active process (pid 6700) is not found for this controller.”

I found that nvme_pcie_qpair_complete_pending_admin_request() checks whether the process exists; this is where the error message comes from.
I am not sure how this process is getting killed before completion. No other operation is being performed on this system apart from running the fio plugin.
Has a similar issue been seen in the past? Any pointers on getting around this error would help.

Below are my setup details:

OS: Fedora 25 (Server Edition)
Kernel version: 4.8.6-300
DPDK version: 6.11

I have attached a single 745 GB Intel NVMe drive, on which I am running FIO.

The workarounds below were tried and the issue still persists; I am not sure how to get around this.

Workarounds:

1. Tried different workloads in FIO.
2. Detached the NVMe drive and attached a new NVMe drive.
3. Re-installed DPDK, SPDK, and FIO.

Note:
	The following links were used to install and set up SPDK and the FIO plugin:
	https://github.com/spdk/spdk —> SPDK
	https://github.com/spdk/spdk/tree/master/examples/nvme/fio_plugin —> FIO_plugin


Thank you
Kiran



Thread overview: 6+ messages
2017-04-04 15:49 [SPDK] SPDK performance benchmarking guidelines and using FIO_plugin Harris, James R
2017-04-05 10:20 Kiran Dikshit
2017-04-04 11:25 Kiran Dikshit
2017-03-31  5:40 Kiran Dikshit
2017-03-30 18:39 Harris, James R
2017-03-30 12:43 Kiran Dikshit
