* [SPDK] FIO and Spdkperf are capped lower than expected when using SPDK NVMf-tgt
@ 2018-03-14  1:05 
  0 siblings, 0 replies; 4+ messages in thread
From:  @ 2018-03-14  1:05 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1280 bytes --]

Hi All,

We have evaluated the performance of the SPDK NVMf target (NVMf-tgt).
We have observed that performance tops out at about 1.4M IOPS when using the SPDK NVMf-tgt with 4 drives, regardless of any change we make.

- The number of drives is always 4.
- Drives: there is no difference in peak performance between malloc bdevs and NVMe bdevs (based on the P3700).
- SPDK NVMf-tgt: there is no difference in peak performance between 4 NVMf-tgt applications and 1 NVMf-tgt application.
- Benchmark: there is no difference in peak performance between FIO and SPDK perf.
- Transport: there is no difference in peak performance between PCIe and RDMA.
- Number of cores: reducing the core count hurt performance, but adding more cores than needed did not change it.

We have observed improved core-utilization efficiency by changing the configuration.
However, the peak performance has not been affected.

On the other hand, when we use local drives, we have not observed such a clear cap for direct NVMe, NVMe bdevs, or malloc bdevs.
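One way to read these observations is as a classic bottleneck pattern: the end-to-end IOPS is set by the tightest per-component limit, so tuning anything that is not the bottleneck does not move the cap, and a local run (with no fabric in the path) shows no such ceiling. A toy sketch of that reasoning in Python, with entirely made-up numbers (none of these are measurements from this thread):

def path_iops_limit(component_limits):
    """Return the smallest per-component IOPS limit, i.e. the bottleneck."""
    return min(component_limits.values())

local_path = {                 # initiator and drives in the same box, no fabric
    "drives_x4": 1_800_000,    # hypothetical aggregate drive limit
    "cpu_cores": 2_500_000,    # hypothetical CPU limit
}

nvmf_path = {**local_path, "rdma_nic": 1_400_000}   # same box plus a fabric limit

print(path_iops_limit(local_path))   # 1800000 -> capped by the drives
print(path_iops_limit(nvmf_path))    # 1400000 -> capped by the NIC, not the drives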

Is this behavior already known?

I have not investigated anything in detail yet, but I wanted to share this with you first.
I would be happy to receive any tips about configuration settings, or any other insight.

Thanks,
Shuhei


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 1932 bytes --]

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [SPDK] FIO and Spdkperf are capped lower than expected when using SPDK NVMf-tgt
@ 2018-03-14  4:30 
  0 siblings, 0 replies; 4+ messages in thread
From:  @ 2018-03-14  4:30 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3769 bytes --]

Hi Vishal



Thank you so much for your reply, and sorry for the delay in responding.



First, I am sorry that I posted in a hurry without enough thought, because this request came from our product development team.



We did not understand that data goes through the RDMA NIC when the probe is done over the RDMA network, even if the initiator and target are on the same server.

We also did not know the bandwidth limit of the RDMA NIC, because until now we did not have enough drives to saturate it.

We got four drives very recently.



Now I believe that this cap comes from the RDMA NIC.



We are using Mellanox ConnectX-4 NICs. We use just two servers, so we do not use a switch for this evaluation.
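As a rough sanity check (a back-of-the-envelope sketch only: the I/O size is assumed to be 4 KiB, which is not stated in this thread, and only payload bytes are counted, ignoring NVMe-oF and RDMA protocol overhead), the cap works out to roughly 46 Gbit/s on the wire:

iops = 1_400_000
io_size_bytes = 4 * 1024              # assumed; the thread does not state the I/O size

gbit_per_s = iops * io_size_bytes * 8 / 1e9
print(f"{gbit_per_s:.1f} Gbit/s")     # ~45.9 Gbit/s of payload, before protocol overhead

# ConnectX-4 adapters ship at several port speeds (e.g. 25/40/50/100 Gb),
# so whether ~46 Gbit/s saturates the link depends on the exact model and cabling.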



Apart from this misunderstanding, we have seen very good results with SPDK so far.

We will also be able to get a newer RDMA NIC (ConnectX-5) soon.



Thank you again,

Shuhei

________________________________
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Verma, Vishal4 <vishal4.verma(a)intel.com>
Sent: Wednesday, March 14, 2018 11:27
To: Storage Performance Development Kit
Subject: [!]Re: [SPDK] FIO and Spdkperf are capped lower than expected when using SPDK NVMf-tgt



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 18276 bytes --]

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [SPDK] FIO and Spdkperf are capped lower than expected when using SPDK NVMf-tgt
@ 2018-03-14  2:27 Verma, Vishal4
  0 siblings, 0 replies; 4+ messages in thread
From: Verma, Vishal4 @ 2018-03-14  2:27 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2506 bytes --]

Hi Shuhei,

Thanks for sharing your findings.

Most likely the 1.4M IOPS cap is coming from the RDMA NIC you are using. Can you please share some details about your network configuration, i.e. how much network bandwidth, which RDMA NIC vendor, and whether there is any switch in between?

Note: for locally attached NVMe there is no network fabric in the picture, so the local I/O case is expected to perform at whatever the NVMe drives are capable of (with proper configuration).
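For a quick comparison on the drive side (a sketch only; the per-drive figure below is an assumed ballpark for 4 KiB random reads on P3700-class drives and varies by capacity, it is not a number reported in this thread):

per_drive_4k_randread_iops = 450_000   # assumed ballpark for a P3700; varies by capacity
drives = 4
observed_cap = 1_400_000

aggregate = per_drive_4k_randread_iops * drives
print(aggregate)                          # ~1.8M IOPS the drives could deliver locally
print(f"{observed_cap / aggregate:.0%}")  # observed cap is ~78% of that, pointing at
                                          # the fabric/NIC rather than the drives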

Thanks,
Vishal

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of 松本周平 / MATSUMOTO,SHUUHEI
Sent: Tuesday, March 13, 2018 7:22 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] FIO and Spdkperf are capped lower than expected when using SPDK NVMf-tgt


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 9780 bytes --]

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [SPDK] FIO and Spdkperf are capped lower than expected when using SPDK NVMf-tgt
@ 2018-03-14  2:21 
  0 siblings, 0 replies; 4+ messages in thread
From:  @ 2018-03-14  2:21 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1764 bytes --]

Hi All,

I want to add information.

We have used the SPDK user-space initiator, and when we use the NVMe driver with RDMA, the NVMf target is SPDK.

Thanks,
Shuhei



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 2566 bytes --]

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2018-03-14  4:30 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-14  1:05 [SPDK] FIO and Spdkperf are capped lower than expected when using SPDK NVMf-tgt 
2018-03-14  2:21 
2018-03-14  2:27 Verma, Vishal4
2018-03-14  4:30 
