* Re: [SPDK] DPDK Integration
@ 2016-12-21  7:34 Yang, Ziye
  0 siblings, 0 replies; 18+ messages in thread
From: Yang, Ziye @ 2016-12-21  7:34 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6732 bytes --]

Hi Sandeep,

If the data you give is correct:

Kernel NVMf target: ~700K IOPS, 12 cores, 30% CPU utilization   =>  IOPS per utilized core = 700/(12*0.3)
SPDK NVMf target:   ~300K IOPS, one core,  7% CPU utilization   =>  IOPS per utilized core = 300/(1*0.07)

IOPS-per-core comparison: SPDK/Kernel = (300/(1*0.07)) / (700/(12*0.3)) ≈ 22X
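Worked out explicitly, the normalization is IOPS divided by (cores * utilization): roughly 194K IOPS per fully-utilized core for the kernel target versus roughly 4286K for SPDK, which is where the ~22X comes from. A minimal C sketch of that arithmetic, using only the IOPS and utilization figures quoted above:

========
#include <stdio.h>

/* Normalize throughput to IOPS per fully-utilized core:
 * iops_per_core = total_iops / (cores * utilization) */
static double iops_per_core(double kiops, double cores, double utilization)
{
    return kiops / (cores * utilization);
}

int main(void)
{
    double kernel = iops_per_core(700.0, 12.0, 0.30); /* ~194.4 KIOPS/core */
    double spdk   = iops_per_core(300.0,  1.0, 0.07); /* ~4285.7 KIOPS/core */

    printf("kernel: %.1f KIOPS/core, SPDK: %.1f KIOPS/core, ratio: %.1fx\n",
           kernel, spdk, spdk / kernel); /* prints a ratio of ~22x */
    return 0;
}
========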

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 20, 2016 12:03 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

I am using 12 cores for the Linux NVMf target and I observed roughly 30% CPU utilization in this case.
In the case of SPDK, with one core, I am observing ~7% CPU utilization.

As per my understanding, SPDK with one core can produce similar IOPS to Linux NVMf with 11 cores. Please let me know if my interpretation is wrong.

Thanks,
Sandeep

On Mon, Dec 19, 2016 at 10:09 AM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
Hi Sandeep,

How many cores are used by Linux NVMf target?

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 4:23 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

We are connecting only one SSD with a single partition for both Linux NVMf and SPDK on the target machine.

Thanks,
Sandeep

On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
Hi Sandeep,

How many NVMe SSDs are used for the kernel NVMf tgt and the SPDK NVMf tgt? Also, for the kernel NVMf target, how many NVMe partitions are exported?

Thanks.


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 3:07 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Nathan,

The video shows that SPDK's IOPS numbers are on par with the Linux NVMf driver but with fewer CPUs. However, I am observing different performance numbers with Linux NVMf and SPDK.
On my setup with a Mellanox 40G NIC and a Samsung SSD, Linux NVMf achieves ~700K IOPS, but on the same setup SPDK achieves ~300K IOPS. Please let me know whether I am missing any configuration tweaks on top of the default configuration provided.



Thanks,
Sandeep

On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <nathan.marushak(a)intel.com> wrote:
You can watch the storage tech field day: http://techfieldday.com/appearance/intel-presents-at-storage-field-day-11/

Around the 36-minute mark. This doesn't cover different IO sizes, however.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 6:56 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

What are the performance numbers observed using the SPDK NVMf target with different block sizes and I/O depths? Any link showing these results is fine.

Thanks,
Sandeep

On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
We use the ib_verbs interface, and currently DPDK's user-space drivers cannot support that interface.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Andrey Kuzmin
Sent: Tuesday, December 13, 2016 9:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration


On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:
Hi,

For the NVMf target we use the kernel's mlx4 driver; we will not use DPDK's mlx4 driver.

Just so I understand: is there any specific reason for not using the DPDK driver(s)?

Regards,
Andrey

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with a Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified the SPDK makefile to link the DPDK mlx4 NIC driver as shown below.

========
diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk
+++ b/lib/env_dpdk/env.mk
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


With these changes, the mlx4 DPDK driver probe is executed when running the nvmf_tgt command, but the Tx and Rx routines are not executed during I/O transfers using FIO. Please let me know if any additional programming is needed to hook the NIC Tx and Rx into SPDK.
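For context, a DPDK poll-mode driver only transfers packets when application code drives the ethdev burst API; as noted elsewhere in this thread, the SPDK NVMf target uses the ib_verbs interface instead, so it never calls into the mlx4 PMD even though the probe succeeds. Below is a purely hypothetical sketch of the kind of poll loop that would be needed to exercise a PMD (it uses the standard rte_eth_rx_burst/rte_eth_tx_burst calls; nothing like this exists in the NVMf target):

========
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Hypothetical poll loop: the SPDK NVMf target contains no such loop,
 * which is why the mlx4 PMD probes but its Tx/Rx routines never run. */
static void poll_port(uint16_t port_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint16_t i, nb_rx, nb_tx;

    for (;;) {
        /* Receive up to BURST_SIZE packets from queue 0. */
        nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        /* Echo the packets back out on the same port and queue. */
        nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

        /* Free any packets the Tx queue could not accept. */
        for (i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
}
========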


Thanks,
Sandeep
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk
--

Regards,
Andrey

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 32133 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-23  7:25 Liu, Changpeng
  0 siblings, 0 replies; 18+ messages in thread
From: Liu, Changpeng @ 2016-12-23  7:25 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8199 bytes --]

Hi Sandeep,

I would suggest you keep the parameter AcceptorPollRate at 10000.

And you can try the following configuration on your system to see whether you get a performance improvement.

ReactorMask 0x00FF
AcceptorPollRate 100000

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 1
  Mode Direct
  Listen RDMA 15.15.15.1:4420
  #Host nqn.2016-06.io.spdk:init
  NVMe 0000:0b:00.0
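For reference, ReactorMask is a hexadecimal bitmask of logical cores (it ends up as the DPDK -c coremask, per the comments in the nvmf.conf.in quoted later on this page), so 0x00FF enables reactors on cores 0-7 and "Core 1" pins the subsystem to one of them. A small sketch showing how such a mask maps to core IDs, assuming only that bit N of the mask selects logical core N:

========
#include <stdio.h>

/* Print the logical core IDs selected by a reactor/core mask,
 * assuming bit N of the mask enables logical core N. */
static void print_cores(unsigned long mask)
{
    unsigned int core;

    printf("0x%04lX ->", mask);
    for (core = 0; core < 8 * sizeof(mask); core++) {
        if (mask & (1UL << core))
            printf(" %u", core);
    }
    printf("\n");
}

int main(void)
{
    print_cores(0x00FF); /* cores 0-7, the suggested ReactorMask */
    print_cores(0x0002); /* core 1 only, matching "Core 1" above */
    return 0;
}
========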


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Friday, December 23, 2016 3:00 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Ben,

Attached is my Configuration file.

Thanks,
Sandeep

On Wed, Dec 21, 2016 at 8:37 PM, Walker, Benjamin <benjamin.walker(a)intel.com<mailto:benjamin.walker(a)intel.com>> wrote:
Sandeep,

Can you send the nvme.conf file that you are using for the target? Also, can you tell us which CPU you are using?

Thanks,
Ben


-------- Original message --------
From: "Yang, Ziye" <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>>
Date: 12/21/16 12:35 AM (GMT-07:00)
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi  Sandeep,

If the data you give is correct:

Kernel NVMf target: ~700K IOPS, 12 cores, 30% CPU utilization   =>  IOPS per utilized core = 700/(12*0.3)
SPDK NVMf target:   ~300K IOPS, one core,  7% CPU utilization   =>  IOPS per utilized core = 300/(1*0.07)

IOPS-per-core comparison: SPDK/Kernel = (300/(1*0.07)) / (700/(12*0.3)) ≈ 22X

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 20, 2016 12:03 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

I am using 12 cores for Linux Nvmf Target and i observed almost ~30% of CPU utilization in this case.
In case of SPDK, with one core, i am observing ~7% of CPU utilization.

As per my understanding, SPDK with one core, can produce  similar IOPs compared to  Linux NVMf with 11 cores. Please let me know if my interpretation is wrong.

Thanks,
Sandeep

On Mon, Dec 19, 2016 at 10:09 AM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi Sandeep,

How many cores are used by Linux NVMf target?

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 4:23 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

We are connecting only 1 SSD with one partition  for both Linux NVMf and SPDK on target machine.

Thanks,
Sandeep

On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi  Sandeep,

How many NVMe SSDs used  for Kernel NVMf tgt and SPDK NVMf tgt?  Also for kernel NVMf target, how many NVMe partitions are exported?

Thanks.


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 3:07 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Nathan,

The video shows that SPDK IOPs numbers will be at par with Linux Nvmf driver but with lesser CPUs. But, I am  observing different performance numbers with Linux NVMf and SPDK.
on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i am missing any configuration tweaks on top of default configuration provided.



Thanks,
Sandeep

On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <nathan.marushak(a)intel.com<mailto:nathan.marushak(a)intel.com>> wrote:
You can watch the storage tech field day: http://techfieldday.com/appearance/intel-presents-at-storage-field-day-11/

Around the 36 minute mark.  This doesn’t have different IO sizes however.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 6:56 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

What is the performance numbers observed using SPDK nvmf target with different blockSize and iodepths? Any link showing these results is fine.

Thanks,
Sandeep

On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
We use ib_verbs interface and currently DPDK’s user space driver cannot support those interface.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Andrey Kuzmin
Sent: Tuesday, December 13, 2016 9:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration


On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi,

For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4 driver.

Just for me to understand, any specific reason for not using DPDK driver(s)?

Regards,
Andrey

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to link  dpdk mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk<http://env.mk> b/lib/env_dpdk/env.mk<http://env.mk>
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk<http://env.mk>
+++ b/lib/env_dpdk/env.mk<http://env.mk>
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


with these changes, mlx4 DPDK driver probe is executed while executing the nvmf_tgt command, but, Tx and Rx routines are not executed while i/o transfer using FIO. Please let me know if there is any additional programming needed to hook NIC Tx and Rx to SPDK.


Thanks,
Sandeep
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk
--

Regards,
Andrey

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 95104 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-23  7:00 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-23  7:00 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 7625 bytes --]

Hi Ben,

Attached is my Configuration file.

Thanks,
Sandeep

On Wed, Dec 21, 2016 at 8:37 PM, Walker, Benjamin <benjamin.walker(a)intel.com
> wrote:

> Sandeep,
>
> Can you send the nvme.conf file that you are using for the target? Also,
> can you tell us which CPU you are using?
>
> Thanks,
> Ben
>
>
> -------- Original message --------
> From: "Yang, Ziye" <ziye.yang(a)intel.com>
> Date: 12/21/16 12:35 AM (GMT-07:00)
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] DPDK Integration
>
> Hi  Sandeep,
>
>
>
> If the data you give is correct:
>
>
>
> Kernel NVMf target: ~700K IOPS, 12 cores, 30% CPU utilization
> =>  IOPS per utilized core = 700/(12*0.3)
>
> SPDK NVMf target: ~300K IOPS, one core, 7% CPU utilization
> =>  IOPS per utilized core = 300/(1*0.07)
>
>
>
> IOPS-per-core comparison: SPDK/Kernel
> = (300/(1*0.07)) / (700/(12*0.3)) ≈ 22X
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 20, 2016 12:03 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> I am using 12 cores for Linux Nvmf Target and i observed almost ~30% of
> CPU utilization in this case.
>
> In case of SPDK, with one core, i am observing ~7% of CPU utilization.
>
>
>
> As per my understanding, SPDK with one core, can produce  similar IOPs
> compared to  Linux NVMf with 11 cores. Please let me know if my
> interpretation is wrong.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Mon, Dec 19, 2016 at 10:09 AM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi Sandeep,
>
>
>
> How many cores are used by Linux NVMf target?
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Thursday, December 15, 2016 4:23 PM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> We are connecting only 1 SSD with one partition  for both Linux NVMf and
> SPDK on target machine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi  Sandeep,
>
>
>
> How many NVMe SSDs used  for Kernel NVMf tgt and SPDK NVMf tgt?  Also for
> kernel NVMf target, how many NVMe partitions are exported?
>
>
>
> Thanks.
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Thursday, December 15, 2016 3:07 PM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Nathan,
>
>
>
> The video shows that SPDK IOPs numbers will be at par with Linux Nvmf
> driver but with lesser CPUs. But, I am  observing different performance
> numbers with Linux NVMf and SPDK.
>
> on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K
> IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i
> am missing any configuration tweaks on top of default configuration
> provided.
>
>
>
>
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <
> nathan.marushak(a)intel.com> wrote:
>
> You can watch the storage tech field day: http://techfieldday.com/
> appearance/intel-presents-at-storage-field-day-11/
>
>
>
> Around the 36 minute mark.  This doesn’t have different IO sizes however.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 6:56 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> What is the performance numbers observed using SPDK nvmf target with
> different blockSize and iodepths? Any link showing these results is fine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> We use ib_verbs interface and currently DPDK’s user space driver cannot
> support those interface.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Andrey
> Kuzmin
> *Sent:* Tuesday, December 13, 2016 9:43 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
>
>
> On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>
>
>
> Just for me to understand, any specific reason for not using DPDK
> driver(s)?
>
>
>
> Regards,
>
> Andrey
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a
> \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
> \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
> --
>
> Regards,
> Andrey
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 22738 bytes --]

[-- Attachment #3: nvmf.conf.in --]
[-- Type: application/octet-stream, Size: 4145 bytes --]

# NVMf Target Configuration File
#
# Please write all parameters using ASCII.
# The parameter must be quoted if it includes whitespace.
#
# Configuration syntax:
# Leading whitespace is ignored.
# Lines starting with '#' are comments.
# Lines ending with '\' are concatenated with the next line.
# Bracketed ([]) names define sections

[Global]
  # Users can restrict work items to only run on certain cores by
  #  specifying a ReactorMask.  The default ReactorMask is defined by the
  #  -c option in the 'ealargs' setting at the beginning of nvmf_tgt.c.
  #ReactorMask 0x00FF

  # Tracepoint group mask for spdk trace buffers
  # Default: 0x0 (all tracepoint groups disabled)
  # Set to 0xFFFFFFFFFFFFFFFF to enable all tracepoint groups.
  #TpointGroupMask 0x0

  # syslog facility
  LogFacility "local7"

[Rpc]
  # Defines whether to enable configuration via RPC.
  # Default is disabled.  Note that the RPC interface is not
  # authenticated, so users should be careful about enabling
  # RPC in non-trusted environments.
  Enable No

# Users may change this section to create a different number or size of
#  malloc LUNs.
# This will generate 8 LUNs with a malloc-allocated backend.
# Each LUN will be size 64MB and these will be named
# Malloc0 through Malloc7.  Not all LUNs defined here are necessarily
#  used below.
[Malloc]
  NumberOfLuns 8
  LunSizeInMB 64

# Define NVMf protocol global options
[Nvmf]
  # Set the maximum number of submission and completion queues per session.
  # Setting this to '8', for example, allows for 8 submission and 8 completion queues
  # per session.
  MaxQueuesPerSession 12

  # Set the maximum number of outstanding I/O per queue.
  #MaxQueueDepth 128
  MaxQueueDepth 129

  # Set the maximum in-capsule data size. Must be a multiple of 16.
  #InCapsuleDataSize 4096

  # Set the maximum I/O size. Must be a multiple of 4096.
  #MaxIOSize 131072

  # Set the global acceptor lcore ID, lcores are numbered starting at 0.
  #AcceptorCore 0

  # Set how often the acceptor polls for incoming connections. The acceptor is also
  # responsible for polling existing connections that have gone idle. 0 means continuously
  # poll. Units in microseconds.
  #AcceptorPollRate 10000
  AcceptorPollRate 0

  # Registers the application to receive timeout callback and to reset the controller.
  ResetControllerOnTimeout Yes
  # Timeout value.
  NvmeTimeoutValue 30

# Define an NVMf Subsystem.
# - NQN is required and must be unique.
# - Core may be set or not. If set, the specified subsystem will run on
#   it, otherwise each subsystem will use a round-robin method to allocate
#   core from available cores,  lcores are numbered starting at 0.
# - Mode may be either "Direct" or "Virtual". Direct means that physical
#   devices attached to the target will be presented to hosts as if they
#   were directly attached to the host. No software emulation or command
#   validation is performed. Virtual means that an NVMe controller is
#   emulated in software and the namespaces it contains map to block devices
#   on the target system. These block devices do not need to be NVMe devices.
#   Only Direct mode is currently supported.
# - Between 1 and 255 Listen directives are allowed. This defines
#   the addresses on which new connections may be accepted. The format
#   is Listen <type> <address> where type currently can only be RDMA.
# - Between 0 and 255 Host directives are allowed. This defines the
#   NQNs of allowed hosts. If no Host directive is specified, all hosts
#   are allowed to connect.
# - Exactly 1 NVMe directive specifying an NVMe device by PCI BDF. The
#   PCI domain:bus:device.function can be replaced by "*" to indicate
#   any PCI device.

# Direct controller
[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Mode Direct
  Listen RDMA 15.15.15.1:4420
  #Host nqn.2016-06.io.spdk:init
  NVMe 0000:0b:00.0

# Multiple subsystems are allowed.
# Virtual controller
#[Subsystem2]
#  NQN nqn.2016-06.io.spdk:cnode2
#  Core 0
#  Mode Virtual
#  Listen RDMA 192.168.2.21:4420
#  Host nqn.2016-06.io.spdk:init
#  SN SPDK00000000000001
#  Namespace Malloc0
#  Namespace Malloc1

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-23  6:58 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-23  6:58 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 7368 bytes --]

Hi Yang,

Can you please let me know how we can configure SPDK to use multiple
CPUs, so that I can compare performance with an equal number of CPUs
in both the SPDK and Linux NVMf environments?


Thanks,
Sandeep

On Wed, Dec 21, 2016 at 1:04 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi  Sandeep,
>
>
>
> If the data you give is correct:
>
>
>
> Kernel NVMf target: ~700K IOPS, 12 cores, 30% CPU utilization
> =>  IOPS per utilized core = 700/(12*0.3)
>
> SPDK NVMf target: ~300K IOPS, one core, 7% CPU utilization
> =>  IOPS per utilized core = 300/(1*0.07)
>
>
>
> IOPS-per-core comparison: SPDK/Kernel
> = (300/(1*0.07)) / (700/(12*0.3)) ≈ 22X
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 20, 2016 12:03 AM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> I am using 12 cores for Linux Nvmf Target and i observed almost ~30% of
> CPU utilization in this case.
>
> In case of SPDK, with one core, i am observing ~7% of CPU utilization.
>
>
>
> As per my understanding, SPDK with one core, can produce  similar IOPs
> compared to  Linux NVMf with 11 cores. Please let me know if my
> interpretation is wrong.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Mon, Dec 19, 2016 at 10:09 AM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi Sandeep,
>
>
>
> How many cores are used by Linux NVMf target?
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Thursday, December 15, 2016 4:23 PM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> We are connecting only 1 SSD with one partition  for both Linux NVMf and
> SPDK on target machine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi  Sandeep,
>
>
>
> How many NVMe SSDs used  for Kernel NVMf tgt and SPDK NVMf tgt?  Also for
> kernel NVMf target, how many NVMe partitions are exported?
>
>
>
> Thanks.
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Thursday, December 15, 2016 3:07 PM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Nathan,
>
>
>
> The video shows that SPDK IOPs numbers will be at par with Linux Nvmf
> driver but with lesser CPUs. But, I am  observing different performance
> numbers with Linux NVMf and SPDK.
>
> on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K
> IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i
> am missing any configuration tweaks on top of default configuration
> provided.
>
>
>
>
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <
> nathan.marushak(a)intel.com> wrote:
>
> You can watch the storage tech field day: http://techfieldday.com/
> appearance/intel-presents-at-storage-field-day-11/
>
>
>
> Around the 36 minute mark.  This doesn’t have different IO sizes however.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 6:56 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> What is the performance numbers observed using SPDK nvmf target with
> different blockSize and iodepths? Any link showing these results is fine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> We use ib_verbs interface and currently DPDK’s user space driver cannot
> support those interface.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Andrey
> Kuzmin
> *Sent:* Tuesday, December 13, 2016 9:43 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
>
>
> On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>
>
>
> Just for me to understand, any specific reason for not using DPDK
> driver(s)?
>
>
>
> Regards,
>
> Andrey
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a
> \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
> \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
> --
>
> Regards,
> Andrey
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 24267 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-21 15:07 Walker, Benjamin
  0 siblings, 0 replies; 18+ messages in thread
From: Walker, Benjamin @ 2016-12-21 15:07 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 7099 bytes --]

Sandeep,

Can you send the nvme.conf file that you are using for the target? Also, can you tell us which CPU you are using?

Thanks,
Ben


-------- Original message --------
From: "Yang, Ziye" <ziye.yang(a)intel.com>
Date: 12/21/16 12:35 AM (GMT-07:00)
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi  Sandeep,

If the data you give is correct:

Kernel NVMf target: ~700K IOPS, 12 cores, 30% CPU utilization   =>  IOPS per utilized core = 700/(12*0.3)
SPDK NVMf target:   ~300K IOPS, one core,  7% CPU utilization   =>  IOPS per utilized core = 300/(1*0.07)

IOPS-per-core comparison: SPDK/Kernel = (300/(1*0.07)) / (700/(12*0.3)) ≈ 22X

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 20, 2016 12:03 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

I am using 12 cores for Linux Nvmf Target and i observed almost ~30% of CPU utilization in this case.
In case of SPDK, with one core, i am observing ~7% of CPU utilization.

As per my understanding, SPDK with one core, can produce  similar IOPs compared to  Linux NVMf with 11 cores. Please let me know if my interpretation is wrong.

Thanks,
Sandeep

On Mon, Dec 19, 2016 at 10:09 AM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi Sandeep,

How many cores are used by Linux NVMf target?

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 4:23 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

We are connecting only 1 SSD with one partition  for both Linux NVMf and SPDK on target machine.

Thanks,
Sandeep

On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi  Sandeep,

How many NVMe SSDs used  for Kernel NVMf tgt and SPDK NVMf tgt?  Also for kernel NVMf target, how many NVMe partitions are exported?

Thanks.


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 3:07 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Nathan,

The video shows that SPDK IOPs numbers will be at par with Linux Nvmf driver but with lesser CPUs. But, I am  observing different performance numbers with Linux NVMf and SPDK.
on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i am missing any configuration tweaks on top of default configuration provided.



Thanks,
Sandeep

On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <nathan.marushak(a)intel.com<mailto:nathan.marushak(a)intel.com>> wrote:
You can watch the storage tech field day: http://techfieldday.com/appearance/intel-presents-at-storage-field-day-11/

Around the 36 minute mark.  This doesn’t have different IO sizes however.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 6:56 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

What is the performance numbers observed using SPDK nvmf target with different blockSize and iodepths? Any link showing these results is fine.

Thanks,
Sandeep

On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
We use ib_verbs interface and currently DPDK’s user space driver cannot support those interface.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Andrey Kuzmin
Sent: Tuesday, December 13, 2016 9:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration


On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi,

For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4 driver.

Just for me to understand, any specific reason for not using DPDK driver(s)?

Regards,
Andrey

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to link  dpdk mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk<http://env.mk> b/lib/env_dpdk/env.mk<http://env.mk>
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk<http://env.mk>
+++ b/lib/env_dpdk/env.mk<http://env.mk>
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


with these changes, mlx4 DPDK driver probe is executed while executing the nvmf_tgt command, but, Tx and Rx routines are not executed while i/o transfer using FIO. Please let me know if there is any additional programming needed to hook NIC Tx and Rx to SPDK.


Thanks,
Sandeep
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk
--

Regards,
Andrey

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 23823 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-19 16:02 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-19 16:02 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6237 bytes --]

Hi Yang,

I am using 12 cores for the Linux NVMf target and I observed roughly 30%
CPU utilization in this case.
In the case of SPDK, with one core, I am observing ~7% CPU utilization.

As per my understanding, SPDK with one core can produce similar IOPS to
Linux NVMf with 11 cores. Please let me know if my interpretation is
wrong.

Thanks,
Sandeep

On Mon, Dec 19, 2016 at 10:09 AM, Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi Sandeep,
>
>
>
> How many cores are used by Linux NVMf target?
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Thursday, December 15, 2016 4:23 PM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> We are connecting only 1 SSD with one partition  for both Linux NVMf and
> SPDK on target machine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi  Sandeep,
>
>
>
> How many NVMe SSDs used  for Kernel NVMf tgt and SPDK NVMf tgt?  Also for
> kernel NVMf target, how many NVMe partitions are exported?
>
>
>
> Thanks.
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Thursday, December 15, 2016 3:07 PM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Nathan,
>
>
>
> The video shows that SPDK IOPs numbers will be at par with Linux Nvmf
> driver but with lesser CPUs. But, I am  observing different performance
> numbers with Linux NVMf and SPDK.
>
> on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K
> IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i
> am missing any configuration tweaks on top of default configuration
> provided.
>
>
>
>
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <
> nathan.marushak(a)intel.com> wrote:
>
> You can watch the storage tech field day: http://techfieldday.com/
> appearance/intel-presents-at-storage-field-day-11/
>
>
>
> Around the 36 minute mark.  This doesn’t have different IO sizes however.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 6:56 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> What is the performance numbers observed using SPDK nvmf target with
> different blockSize and iodepths? Any link showing these results is fine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> We use ib_verbs interface and currently DPDK’s user space driver cannot
> support those interface.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Andrey
> Kuzmin
> *Sent:* Tuesday, December 13, 2016 9:43 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
>
>
> On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>
>
>
> Just for me to understand, any specific reason for not using DPDK
> driver(s)?
>
>
>
> Regards,
>
> Andrey
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a
> \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
> \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
> --
>
> Regards,
> Andrey
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 20210 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-19  4:39 Yang, Ziye
  0 siblings, 0 replies; 18+ messages in thread
From: Yang, Ziye @ 2016-12-19  4:39 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5460 bytes --]

Hi Sandeep,

How many cores are used by Linux NVMf target?

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 4:23 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

We are connecting only 1 SSD with one partition  for both Linux NVMf and SPDK on target machine.

Thanks,
Sandeep

On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi  Sandeep,

How many NVMe SSDs used  for Kernel NVMf tgt and SPDK NVMf tgt?  Also for kernel NVMf target, how many NVMe partitions are exported?

Thanks.


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 3:07 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Nathan,

The video shows that SPDK IOPs numbers will be at par with Linux Nvmf driver but with lesser CPUs. But, I am  observing different performance numbers with Linux NVMf and SPDK.
on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i am missing any configuration tweaks on top of default configuration provided.



Thanks,
Sandeep

On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <nathan.marushak(a)intel.com<mailto:nathan.marushak(a)intel.com>> wrote:
You can watch the storage tech field day: http://techfieldday.com/appearance/intel-presents-at-storage-field-day-11/

Around the 36 minute mark.  This doesn’t have different IO sizes however.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 6:56 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

What is the performance numbers observed using SPDK nvmf target with different blockSize and iodepths? Any link showing these results is fine.

Thanks,
Sandeep

On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
We use ib_verbs interface and currently DPDK’s user space driver cannot support those interface.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Andrey Kuzmin
Sent: Tuesday, December 13, 2016 9:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration


On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi,

For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4 driver.

Just for me to understand, any specific reason for not using DPDK driver(s)?

Regards,
Andrey

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to link  dpdk mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk<http://env.mk> b/lib/env_dpdk/env.mk<http://env.mk>
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk<http://env.mk>
+++ b/lib/env_dpdk/env.mk<http://env.mk>
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


with these changes, mlx4 DPDK driver probe is executed while executing the nvmf_tgt command, but, Tx and Rx routines are not executed while i/o transfer using FIO. Please let me know if there is any additional programming needed to hook NIC Tx and Rx to SPDK.


Thanks,
Sandeep
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk
--

Regards,
Andrey

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 26591 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-15  8:22 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-15  8:22 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5235 bytes --]

Hi Yang,

We are connecting only one SSD with a single partition for both Linux
NVMf and SPDK on the target machine.

Thanks,
Sandeep

On Thu, Dec 15, 2016 at 12:52 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi  Sandeep,
>
>
>
> How many NVMe SSDs used  for Kernel NVMf tgt and SPDK NVMf tgt?  Also for
> kernel NVMf target, how many NVMe partitions are exported?
>
>
>
> Thanks.
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Thursday, December 15, 2016 3:07 PM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Nathan,
>
>
>
> The video shows that SPDK IOPs numbers will be at par with Linux Nvmf
> driver but with lesser CPUs. But, I am  observing different performance
> numbers with Linux NVMf and SPDK.
>
> on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K
> IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i
> am missing any configuration tweaks on top of default configuration
> provided.
>
>
>
>
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <
> nathan.marushak(a)intel.com> wrote:
>
> You can watch the storage tech field day: http://techfieldday.com/
> appearance/intel-presents-at-storage-field-day-11/
>
>
>
> Around the 36 minute mark.  This doesn’t have different IO sizes however.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 6:56 AM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> What is the performance numbers observed using SPDK nvmf target with
> different blockSize and iodepths? Any link showing these results is fine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> We use ib_verbs interface and currently DPDK’s user space driver cannot
> support those interface.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Andrey
> Kuzmin
> *Sent:* Tuesday, December 13, 2016 9:43 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
>
>
> On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>
>
>
> Just for me to understand, any specific reason for not using DPDK
> driver(s)?
>
>
>
> Regards,
>
> Andrey
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a
> \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
> \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
> --
>
> Regards,
> Andrey
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 16803 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-15  7:22 Yang, Ziye
  0 siblings, 0 replies; 18+ messages in thread
From: Yang, Ziye @ 2016-12-15  7:22 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4690 bytes --]

Hi  Sandeep,

How many NVMe SSDs are used for the kernel NVMf target and the SPDK NVMf target?  Also, for the kernel NVMf target, how many NVMe partitions are exported?

Thanks.


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Thursday, December 15, 2016 3:07 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Nathan,

The video shows that SPDK IOPs numbers will be at par with Linux Nvmf driver but with lesser CPUs. But, I am  observing different performance numbers with Linux NVMf and SPDK.
on my Setup with Mellanox 40G NIC, and Samsung SSD, Linux NVMf has ~700K IOPs, but on same setup, SPDK has ~300K IOPs. Please let me know whether i am missing any configuration tweaks on top of default configuration provided.



Thanks,
Sandeep

On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <nathan.marushak(a)intel.com<mailto:nathan.marushak(a)intel.com>> wrote:
You can watch the storage tech field day: http://techfieldday.com/appearance/intel-presents-at-storage-field-day-11/

Around the 36 minute mark.  This doesn’t have different IO sizes however.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 6:56 AM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

What is the performance numbers observed using SPDK nvmf target with different blockSize and iodepths? Any link showing these results is fine.

Thanks,
Sandeep

On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
We use ib_verbs interface and currently DPDK’s user space driver cannot support those interface.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Andrey Kuzmin
Sent: Tuesday, December 13, 2016 9:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration


On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi,

For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4 driver.

Just for me to understand, any specific reason for not using DPDK driver(s)?

Regards,
Andrey

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to link  dpdk mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk
+++ b/lib/env_dpdk/env.mk
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


with these changes, mlx4 DPDK driver probe is executed while executing the nvmf_tgt command, but, Tx and Rx routines are not executed while i/o transfer using FIO. Please let me know if there is any additional programming needed to hook NIC Tx and Rx to SPDK.


Thanks,
Sandeep
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk
--

Regards,
Andrey

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 22073 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-15  7:07 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-15  7:07 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4355 bytes --]

Hi Nathan,

The video shows that SPDK IOPS numbers will be on par with the Linux NVMf
driver but with fewer CPUs. However, I am observing different performance
numbers with Linux NVMf and SPDK.
On my setup with a Mellanox 40G NIC and a Samsung SSD, Linux NVMf reaches ~700K
IOPS, but on the same setup SPDK reaches ~300K IOPS. Please let me know whether I
am missing any configuration tweaks on top of the default configuration
provided.



Thanks,
Sandeep

On Tue, Dec 13, 2016 at 8:59 PM, Marushak, Nathan <nathan.marushak(a)intel.com
> wrote:

> You can watch the storage tech field day: http://techfieldday.com/
> appearance/intel-presents-at-storage-field-day-11/
>
>
>
> Around the 36 minute mark.  This doesn’t have different IO sizes however.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 6:56 AM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
> Hi Yang,
>
>
>
> What is the performance numbers observed using SPDK nvmf target with
> different blockSize and iodepths? Any link showing these results is fine.
>
>
> Thanks,
>
> Sandeep
>
>
>
> On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> We use ib_verbs interface and currently DPDK’s user space driver cannot
> support those interface.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Andrey
> Kuzmin
> *Sent:* Tuesday, December 13, 2016 9:43 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
>
>
> On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>
>
>
> Just for me to understand, any specific reason for not using DPDK
> driver(s)?
>
>
>
> Regards,
>
> Andrey
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a
> \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
> \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
> --
>
> Regards,
> Andrey
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 13421 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-13 16:48 Daniel Verkamp
  0 siblings, 0 replies; 18+ messages in thread
From: Daniel Verkamp @ 2016-12-13 16:48 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1145 bytes --]

On 12/13/2016 05:21 AM, sandeep dhanvada wrote:
> Hi Yang,
> 
> Thanks for the response. Is there any configuration changes to
> default config file to get optimal performance. I am observing ~340K
> IOPs using FIO from Initiator(host) for randread/4 jobs/16
> io-depth/4K blockSize/. If i change the number of jobs from 4 to 12,
> performance drastically falls down to ~10-12K IOPs. I was thinking
> that DPDK integration was the bottleneck for performance.
> 
> in <SPDK_PATH>/etc/spdk/nvmf.conf.in <http://nvmf.conf.in>, I
> modified only "AcceptorPollRate" parameter and configured the value
> as 0. Please let me know if we need to add/modify any other
> parameters.
> 
> 
> Thanks, Sandeep

Hi Sandeep,

Configuring AcceptorPollRate to 0 means that the code that checks for new
connections will execute on every iteration of the main reactor thread
with no delay between checks.  If you have subsystems configured to run
on the same core as the acceptor, this will most likely cause significant
performance degradation.  The recommended value for AcceptorPollRate is
at least 10000 microseconds (10 ms).
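
For example, a minimal sketch of the relevant nvmf.conf.in section with the
recommended value would look like this (assuming the [Nvmf] section used by
the bundled example config; the exact section name and defaults may differ
between SPDK releases):

  [Nvmf]
    # Poll the acceptor for new connections every 10000 microseconds (10 ms)
    AcceptorPollRate 10000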

Thanks,
-- Daniel

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-13 15:29 Marushak, Nathan
  0 siblings, 0 replies; 18+ messages in thread
From: Marushak, Nathan @ 2016-12-13 15:29 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3501 bytes --]

You can watch the storage tech field day: http://techfieldday.com/appearance/intel-presents-at-storage-field-day-11/

Around the 36-minute mark.  It doesn’t cover different IO sizes, however.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 6:56 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration

Hi Yang,

What is the performance numbers observed using SPDK nvmf target with different blockSize and iodepths? Any link showing these results is fine.

Thanks,
Sandeep

On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
We use ib_verbs interface and currently DPDK’s user space driver cannot support those interface.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Andrey Kuzmin
Sent: Tuesday, December 13, 2016 9:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] DPDK Integration


On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi,

For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4 driver.

Just for me to understand, any specific reason for not using DPDK driver(s)?

Regards,
Andrey

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to link  dpdk mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk
+++ b/lib/env_dpdk/env.mk
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


with these changes, mlx4 DPDK driver probe is executed while executing the nvmf_tgt command, but, Tx and Rx routines are not executed while i/o transfer using FIO. Please let me know if there is any additional programming needed to hook NIC Tx and Rx to SPDK.


Thanks,
Sandeep
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk
--

Regards,
Andrey

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 17148 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-13 13:56 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-13 13:56 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3151 bytes --]

Hi Yang,

What are the performance numbers observed using the SPDK NVMf target with
different block sizes and I/O depths? Any link showing these results is fine.

Thanks,
Sandeep

On Tue, Dec 13, 2016 at 7:20 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:

> We use ib_verbs interface and currently DPDK’s user space driver cannot
> support those interface.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Andrey
> Kuzmin
> *Sent:* Tuesday, December 13, 2016 9:43 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] DPDK Integration
>
>
>
>
>
> On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:
>
> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>
>
>
> Just for me to understand, any specific reason for not using DPDK
> driver(s)?
>
>
>
> Regards,
>
> Andrey
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a
> \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
> \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
> --
>
> Regards,
> Andrey
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 9552 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-13 13:50 Yang, Ziye
  0 siblings, 0 replies; 18+ messages in thread
From: Yang, Ziye @ 2016-12-13 13:50 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2565 bytes --]

We use the ib_verbs interface, and currently DPDK’s user-space drivers do not support that interface.
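
In other words, for the NVMf target the Mellanox NIC is driven through the
kernel mlx4/RDMA stack rather than a DPDK PMD. As a rough sanity check that
the verbs device is visible (assuming the libibverbs utilities are installed;
the device name mlx4_0 is only an example), something like the following can
be used:

  lsmod | grep mlx4          # mlx4_core / mlx4_ib kernel modules are loaded
  ibv_devices                # lists RDMA devices, e.g. mlx4_0
  ibv_devinfo -d mlx4_0      # port state should show PORT_ACTIVE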

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Andrey Kuzmin
Sent: Tuesday, December 13, 2016 9:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] DPDK Integration


On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com<mailto:ziye.yang(a)intel.com>> wrote:
Hi,

For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4 driver.

Just for me to understand, any specific reason for not using DPDK driver(s)?

Regards,
Andrey

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to link  dpdk mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk
+++ b/lib/env_dpdk/env.mk
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


with these changes, mlx4 DPDK driver probe is executed while executing the nvmf_tgt command, but, Tx and Rx routines are not executed while i/o transfer using FIO. Please let me know if there is any additional programming needed to hook NIC Tx and Rx to SPDK.


Thanks,
Sandeep
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk
--

Regards,
Andrey

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 12990 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-13 13:42 Andrey Kuzmin
  0 siblings, 0 replies; 18+ messages in thread
From: Andrey Kuzmin @ 2016-12-13 13:42 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2339 bytes --]

On Tue, Dec 13, 2016, 15:02 Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>

Just for me to understand, any specific reason for not using DPDK driver(s)?

Regards,
Andrey

>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a
> $(DPDK_ABS_DIR)/lib/librte_mempool.a \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a
> $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
-- 

Regards,
Andrey

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 10239 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-13 12:21 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-13 12:21 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2835 bytes --]

Hi Yang,

Thanks for the response. Are there any configuration changes to the default
config file needed to get optimal performance? I am observing ~340K IOPS using FIO
from the initiator (host) for randread, 4 jobs, 16 I/O depth, 4K block size.
If I change the number of jobs from 4 to 12, performance falls drastically
to ~10-12K IOPS. I was thinking that the DPDK integration was the
bottleneck for performance.
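
(For reference, the FIO job described above corresponds roughly to the command
line below; the ioengine and the NVMe-oF device path on the initiator are
assumptions for illustration.)

  fio --name=randread --rw=randread --bs=4k --iodepth=16 --numjobs=4 \
      --ioengine=libaio --direct=1 --time_based --runtime=60 \
      --filename=/dev/nvme0n1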

In <SPDK_PATH>/etc/spdk/nvmf.conf.in, I modified only the "AcceptorPollRate"
parameter and configured the value as 0. Please let me know if we need to
add or modify any other parameters.


Thanks,
Sandeep

On Tue, Dec 13, 2016 at 5:32 PM, Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi,
>
>
>
> For  NVMf target, we use kernel’s mlx4 driver, we will not use dpdk’s mlx4
> driver.
>
>
>
> Thanks.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *sandeep
> dhanvada
> *Sent:* Tuesday, December 13, 2016 7:51 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] DPDK Integration
>
>
>
> Hi,
>
>
>
> I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed
> Mellanox OFED using the command
>
> "./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to
> link  dpdk mlx4 NIC driver as below.
>
>
>
> ========
>
> diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
>
> index 41fb18a..7ab1e8f 100644
>
> --- a/lib/env_dpdk/env.mk
>
> +++ b/lib/env_dpdk/env.mk
>
> @@ -55,7 +55,9 @@ else
>
>  DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
>
>  endif
>
>  DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a
> \
>
> -          $(DPDK_ABS_DIR)/lib/librte_ring.a
>
> +          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
> \
>
> +           $(DPDK_ABS_DIR)/lib/librte_ethdev.a
> $(DPDK_ABS_DIR)/lib/librte_net.a \
>
> +           $(DPDK_ABS_DIR)/lib/librte_mbuf.a
>
>
>
>  # librte_malloc was removed after DPDK 2.1.  Link this library
> conditionally based on its
>
>  #  existence to maintain backward compatibility.
>
> @@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
>
>  endif
>
>
>
>  ifeq ($(OS),Linux)
>
> -DPDK_LIB += -ldl
>
> +DPDK_LIB += -ldl -libverbs
>
>  endif
>
>  ifeq ($(OS),FreeBSD)
>
>  DPDK_LIB += -lexecinfo
>
>
>
> ========
>
>
>
>
>
> with these changes, mlx4 DPDK driver probe is executed while executing the
> nvmf_tgt command, but, Tx and Rx routines are not executed while i/o
> transfer using FIO. Please let me know if there is any additional
> programming needed to hook NIC Tx and Rx to SPDK.
>
>
>
>
> Thanks,
>
> Sandeep
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 6832 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] DPDK Integration
@ 2016-12-13 12:02 Yang, Ziye
  0 siblings, 0 replies; 18+ messages in thread
From: Yang, Ziye @ 2016-12-13 12:02 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1790 bytes --]

Hi,

For the NVMf target, we use the kernel’s mlx4 driver; we will not use DPDK’s mlx4 driver.

Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of sandeep dhanvada
Sent: Tuesday, December 13, 2016 7:51 PM
To: spdk(a)lists.01.org
Subject: [SPDK] DPDK Integration

Hi,

I am trying to test SPDK with Mellanox 40G NIC (mlx4). I installed Mellanox OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified SPDK makefile to link  dpdk mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk
+++ b/lib/env_dpdk/env.mk
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


with these changes, mlx4 DPDK driver probe is executed while executing the nvmf_tgt command, but, Tx and Rx routines are not executed while i/o transfer using FIO. Please let me know if there is any additional programming needed to hook NIC Tx and Rx to SPDK.


Thanks,
Sandeep

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 7028 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [SPDK] DPDK Integration
@ 2016-12-13 11:50 sandeep dhanvada
  0 siblings, 0 replies; 18+ messages in thread
From: sandeep dhanvada @ 2016-12-13 11:50 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1449 bytes --]

Hi,

I am trying to test SPDK with a Mellanox 40G NIC (mlx4). I installed Mellanox
OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma" and modified the SPDK makefile to
link the DPDK mlx4 NIC driver as below.

========
diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
index 41fb18a..7ab1e8f 100644
--- a/lib/env_dpdk/env.mk
+++ b/lib/env_dpdk/env.mk
@@ -55,7 +55,9 @@ else
 DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
 endif
 DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a $(DPDK_ABS_DIR)/lib/librte_mempool.a \
-          $(DPDK_ABS_DIR)/lib/librte_ring.a
+          $(DPDK_ABS_DIR)/lib/librte_ring.a $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a \
+           $(DPDK_ABS_DIR)/lib/librte_ethdev.a $(DPDK_ABS_DIR)/lib/librte_net.a \
+           $(DPDK_ABS_DIR)/lib/librte_mbuf.a

 # librte_malloc was removed after DPDK 2.1.  Link this library conditionally based on its
 #  existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
 endif

 ifeq ($(OS),Linux)
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
 endif
 ifeq ($(OS),FreeBSD)
 DPDK_LIB += -lexecinfo

========


With these changes, the mlx4 DPDK driver probe is executed when running the
nvmf_tgt command, but the Tx and Rx routines are not executed during I/O
transfers using FIO. Please let me know if there is any additional
programming needed to hook the NIC Tx and Rx paths into SPDK.


Thanks,
Sandeep

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 2052 bytes --]

^ permalink raw reply related	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2016-12-23  7:25 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-21  7:34 [SPDK] DPDK Integration Yang, Ziye
  -- strict thread matches above, loose matches on Subject: below --
2016-12-23  7:25 Liu, Changpeng
2016-12-23  7:00 sandeep dhanvada
2016-12-23  6:58 sandeep dhanvada
2016-12-21 15:07 Walker, Benjamin
2016-12-19 16:02 sandeep dhanvada
2016-12-19  4:39 Yang, Ziye
2016-12-15  8:22 sandeep dhanvada
2016-12-15  7:22 Yang, Ziye
2016-12-15  7:07 sandeep dhanvada
2016-12-13 16:48 Daniel Verkamp
2016-12-13 15:29 Marushak, Nathan
2016-12-13 13:56 sandeep dhanvada
2016-12-13 13:50 Yang, Ziye
2016-12-13 13:42 Andrey Kuzmin
2016-12-13 12:21 sandeep dhanvada
2016-12-13 12:02 Yang, Ziye
2016-12-13 11:50 sandeep dhanvada

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.