All of lore.kernel.org
 help / color / mirror / Atom feed
* Random packet drops with ip_pipeline on R730.
@ 2015-09-08  4:55 husainee
  2015-09-08 13:02 ` Dumitrescu, Cristian
  0 siblings, 1 reply; 6+ messages in thread
From: husainee @ 2015-09-08  4:55 UTC (permalink / raw)
  To: dev

Hi

I am using a Dell R730 with dual sockets. The processor in each socket is
an Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz, 6 cores.
The CPU layout has socket 0 with cores 0,2,4,6,8,10 and socket 1 with
cores 1,3,5,7,9,11.
The NIC is an i350.

Cores 2-11 are isolated using the isolcpus kernel parameter. We are
running the ip_pipeline application with only the Master, RX and TX threads
(Flow and Route have been removed from the cfg file). The threads run as
follows:

- Master on CPU core 2
- RX on CPU core 4
- TX on CPU core 6

64-byte packets are sent from an Ixia at different speeds, but we are
seeing random packet drops. The same exercise was done on cores 3, 5 and 7,
and the results are the same.

We tried the l2fwd app and it works fine with no packet drops.

Hugepages are configured as 1024 x 2M per socket.
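For reference, if the application is launched with an EAL hex core mask, the mask covering the Master/RX/TX cores above can be computed like this (a sketch; the exact EAL flags depend on the DPDK version and how the app is wrapped):

```python
# Compute a DPDK EAL-style hex core mask (-c) from a list of CPU core IDs.
def core_mask(cores):
    mask = 0
    for c in cores:
        mask |= 1 << c  # set the bit for each core
    return hex(mask)

# Master on core 2, RX on core 4, TX on core 6
print(core_mask([2, 4, 6]))  # 0x54
```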


Can anyone suggest what could be the reason for these random packet drops?

regards
husainee

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Random packet drops with ip_pipeline on R730.
  2015-09-08  4:55 Random packet drops with ip_pipeline on R730 husainee
@ 2015-09-08 13:02 ` Dumitrescu, Cristian
  2015-09-09  9:47   ` husainee
  0 siblings, 1 reply; 6+ messages in thread
From: Dumitrescu, Cristian @ 2015-09-08 13:02 UTC (permalink / raw)
  To: husainee, dev

Hi Husainee,

Can you please explain what you mean by random packet drops? What percentage of the input packets gets dropped, does it take place on every run, does the number of dropped packets vary from run to run, etc.?

Are you also able to reproduce this issue with other NICs, e.g. 10GbE NIC?

Can you share your config file?

Can you please double-check the low-level NIC settings between the two applications, i.e. the settings in the structures link_params_default, default_hwq_in_params and default_hwq_out_params from the ip_pipeline file config_parse.c vs. their equivalents from l2fwd? The only thing I can think of right now is that maybe one of the low-level threshold values for the Ethernet link is not tuned for your 1GbE NIC.

Regards,
Cristian

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of husainee
> Sent: Tuesday, September 8, 2015 7:56 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Random packet drops with ip_pipeline on R730.
> 
> Hi
> 
> I am using a Dell R730 with dual sockets. The processor in each socket is
> an Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz, 6 cores.
> The CPU layout has socket 0 with cores 0,2,4,6,8,10 and socket 1 with
> cores 1,3,5,7,9,11.
> The NIC is an i350.
> 
> Cores 2-11 are isolated using the isolcpus kernel parameter. We are
> running the ip_pipeline application with only the Master, RX and TX threads
> (Flow and Route have been removed from the cfg file). The threads run as
> follows:
> 
> - Master on CPU core 2
> - RX on CPU core 4
> - TX on CPU core 6
> 
> 64-byte packets are sent from an Ixia at different speeds, but we are
> seeing random packet drops. The same exercise was done on cores 3, 5 and 7,
> and the results are the same.
> 
> We tried the l2fwd app and it works fine with no packet drops.
> 
> Hugepages are configured as 1024 x 2M per socket.
> 
> 
> Can anyone suggest what could be the reason for these random packet
> drops.
> 
> regards
> husainee


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Random packet drops with ip_pipeline on R730.
  2015-09-08 13:02 ` Dumitrescu, Cristian
@ 2015-09-09  9:47   ` husainee
  2015-09-09 11:39     ` Dumitrescu, Cristian
  0 siblings, 1 reply; 6+ messages in thread
From: husainee @ 2015-09-09  9:47 UTC (permalink / raw)
  To: Dumitrescu, Cristian, dev

[-- Attachment #1: Type: text/plain, Size: 3185 bytes --]

Hi Cristian
PFA the config file.

I am sending packets from port0 and receiving on port1.

By random packet drops I mean that the number of packets dropped is not
the same on every run. Here are some results:

Frame sent rate 1488095.2 fps, 64-byte packets (100% of 1000Mbps)
Run1- 0.0098% (20-22 million packets)
Run2- 0.021% (20-22 million packets)
Run3- 0.0091% (20-22 million packets)

Frame rate 744047.62 fps, 64-byte packets (50% of 1000Mbps)
Run1- 0.0047% (20-22 million packets)
Run2- 0.0040% (20-22 million packets)
Run3- 0.0040% (20-22 million packets)

Frame rate 148809.52 fps, 64-byte packets (10% of 1000Mbps)
Run1- 0 (20-22 million packets)
Run2- 0 (20-22 million packets)
Run3- 0 (20-22 million packets)
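As a sanity check, the frame rates above match the theoretical 64-byte line rate for 1GbE, where each frame carries 20 extra bytes on the wire (7 preamble + 1 SFD + 12 inter-frame gap); a quick sketch:

```python
# Theoretical frames per second at a given link rate and frame size.
def line_rate_fps(link_bps, frame_bytes, overhead_bytes=20):
    # overhead_bytes = 7 preamble + 1 SFD + 12 inter-frame gap
    return link_bps / ((frame_bytes + overhead_bytes) * 8)

print(round(line_rate_fps(1_000_000_000, 64), 1))  # 1488095.2 fps

# A 0.0098% drop rate over ~21 million packets is roughly:
print(round(21_000_000 * 0.0098 / 100))  # ~2058 packets
```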



Following are the HW NIC setting differences between ip_pipeline and the l2fwd app:

parameter                ip_pipeline   l2fwd
jumbo frame              1             0
hw_ip_checksum           1             0
rx_conf.wthresh          4             0
rx_conf.rx_free_thresh   64            32
tx_conf.pthresh          36            32
burst size               64            32

We tried making the ip_pipeline settings the same as l2fwd's, but the
results did not change.

I have not tried with 10GbE; I do not have 10GbE test equipment.



regards
husainee




On 09/08/2015 06:32 PM, Dumitrescu, Cristian wrote:
> Hi Husainee,
>
> Can you please explain what do you mean by random packet drops? What percentage of the input packets get dropped, does it take place on every run, does the number of dropped packets vary on every run, etc?
>
> Are you also able to reproduce this issue with other NICs, e.g. 10GbE NIC?
>
> Can you share your config file?
>
> Can you please double check the low level NIC settings between the two applications, i.e. the settings in structures link_params_default, default_hwq_in_params, default_hwq_out_params from ip_pipeline file config_parse.c vs. their equivalents from l2fwd? The only thing I can think of right now is maybe one of the low level threshold values for the Ethernet link is not tuned for your 1GbE NIC.
>
> Regards,
> Cristian
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of husainee
>> Sent: Tuesday, September 8, 2015 7:56 AM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] Random packet drops with ip_pipeline on R730.
>>
>> Hi
>>
>> I am using a Dell R730 with dual sockets. The processor in each socket is
>> an Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz, 6 cores.
>> The CPU layout has socket 0 with cores 0,2,4,6,8,10 and socket 1 with
>> cores 1,3,5,7,9,11.
>> The NIC is an i350.
>>
>> Cores 2-11 are isolated using the isolcpus kernel parameter. We are
>> running the ip_pipeline application with only the Master, RX and TX threads
>> (Flow and Route have been removed from the cfg file). The threads run as
>> follows:
>>
>> - Master on CPU core 2
>> - RX on CPU core 4
>> - TX on CPU core 6
>>
>> 64-byte packets are sent from an Ixia at different speeds, but we are
>> seeing random packet drops. The same exercise was done on cores 3, 5 and 7,
>> and the results are the same.
>>
>> We tried the l2fwd app and it works fine with no packet drops.
>>
>> Hugepages are configured as 1024 x 2M per socket.
>>
>>
>> Can anyone suggest what could be the reason for these random packet
>> drops.
>>
>> regards
>> husainee




[-- Attachment #2: ip_pipeline.cfg --]
[-- Type: text/plain, Size: 2054 bytes --]

;   BSD LICENSE
;
;   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
;   All rights reserved.
;
;   Redistribution and use in source and binary forms, with or without
;   modification, are permitted provided that the following conditions
;   are met:
;
;     * Redistributions of source code must retain the above copyright
;       notice, this list of conditions and the following disclaimer.
;     * Redistributions in binary form must reproduce the above copyright
;       notice, this list of conditions and the following disclaimer in
;       the documentation and/or other materials provided with the
;       distribution.
;     * Neither the name of Intel Corporation nor the names of its
;       contributors may be used to endorse or promote products derived
;       from this software without specific prior written permission.
;
;   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
;   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
;   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
;   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
;   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
;   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
;   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
;   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
;   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
;   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
;   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

; Core configuration
[core 0]
type = MASTER
queues in  = 5 -1 -1 -1 -1 -1 -1 -1
queues out = 4 -1 -1  -1 -1 -1 -1 -1

[core 1]
type = RX
queues in  = -1 -1 -1 -1 -1 -1 -1 4
queues out =  0  1  2  3 -1 -1 -1 5

;[core 2]
;type = FC
;queues in  =  0  1  2  3 -1 -1 -1 9
;queues out =  4  5  6  7 -1 -1 -1 11


[core 2]
type = TX
queues in  =  1 0 2 3 -1 -1 -1 -1
queues out = -1 -1 -1 -1 -1 -1 -1 -1

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Random packet drops with ip_pipeline on R730.
  2015-09-09  9:47   ` husainee
@ 2015-09-09 11:39     ` Dumitrescu, Cristian
  2015-09-09 12:15       ` husainee
  0 siblings, 1 reply; 6+ messages in thread
From: Dumitrescu, Cristian @ 2015-09-09 11:39 UTC (permalink / raw)
  To: husainee, dev

Hi Husainee,

Looking at your config file, it looks like you are using an old DPDK release prior to 2.1. Can you please try the same simple test in your environment with the latest DPDK 2.1 release?

We did a lot of work on the ip_pipeline application in DPDK release 2.1; we basically rewrote large parts of it, including the parser, checks, run-time, library of pipelines, etc. The format of the config file has been improved a lot, and you should be able to adapt your config file to the latest syntax very quickly.

Btw, your config file is not really equivalent to l2fwd, as you are using two CPU cores connected through software rings rather than a single core, as l2fwd does.

Here is an equivalent DPDK 2.1 config file using two cores connected through software rings (port 0 -> port 1, port 1-> port 0, port 2 -> port 3, port 3 -> port2):

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = SWQ0 SWQ1 SWQ2 SWQ3

[PIPELINE2]
type = PASS-THROUGH
core = 2; you can also place PIPELINE2 on same core as PIPELINE1: core = 1
pktq_in = SWQ1 SWQ0 SWQ3 SWQ2
pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0

Here is a config file doing similar processing with a single core, a configuration closer to l2fwd (port 0 -> port 1, port 1 -> port 0, port 2 -> port 3, port 3 -> port 2):

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = TXQ1.0 TXQ0.0 TXQ3.0 TXQ2.0

Regards,
Cristian

From: husainee [mailto:husainee.plumber@nevisnetworks.com]
Sent: Wednesday, September 9, 2015 12:47 PM
To: Dumitrescu, Cristian; dev@dpdk.org
Cc: Cao, Waterman
Subject: Re: [dpdk-dev] Random packet drops with ip_pipeline on R730.

[...]

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Random packet drops with ip_pipeline on R730.
  2015-09-09 11:39     ` Dumitrescu, Cristian
@ 2015-09-09 12:15       ` husainee
  2015-09-09 16:31         ` Dumitrescu, Cristian
  0 siblings, 1 reply; 6+ messages in thread
From: husainee @ 2015-09-09 12:15 UTC (permalink / raw)
  To: Dumitrescu, Cristian, dev

Hi Cristian

I am using the 2.0 release. I will try with 2.1 and revert.

For additional information, I tried the same 2.0 ip_pipeline
application on a desktop system with a single-socket Intel(R)
Core(TM) i5-4440 CPU @ 3.10GHz, 4 cores. The NIC is the same i350.

On this machine I am sending packets on 4 ports at 1Gbps full duplex and
I get 4Gbps throughput with no drops.

The difference between the two systems is the processor (speed, cores) and
the number of sockets. Is the processor speed reducing the performance of
DPDK drastically from 4Gbps to something under 0.5Gbps? This is confusing!
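For scale, the two setups can be compared in packets per second (64-byte frames, with 20 bytes of per-frame wire overhead assumed):

```python
# Aggregate packet rate in Mpps for small frames on 1GbE links.
def mpps(link_bps, frame_bytes=64, overhead=20):
    return link_bps / ((frame_bytes + overhead) * 8) / 1e6

print(round(4 * mpps(1e9), 2))  # 5.95 Mpps: desktop, 4 ports, no drops
print(round(mpps(1e9), 2))      # 1.49 Mpps: R730, 1 port, drops seen
```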

regards
husainee


On 09/09/2015 05:09 PM, Dumitrescu, Cristian wrote:
> [...]

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Random packet drops with ip_pipeline on R730.
  2015-09-09 12:15       ` husainee
@ 2015-09-09 16:31         ` Dumitrescu, Cristian
  0 siblings, 0 replies; 6+ messages in thread
From: Dumitrescu, Cristian @ 2015-09-09 16:31 UTC (permalink / raw)
  To: husainee, dev

Hi Husainee,

Yes, please try release 2.1 and do come back to us with your findings. Based on your findings so far, though, it looks like this is not a SW issue with the ip_pipeline application (from the 2.0 release).

The packet I/O rate you are using is a few Mpps, which is low enough to be sustained by a 1.6 GHz or a 3.1 GHz CPU, so I don't think the CPU is the issue. But some other HW things might be: how many PCIe lanes are routed to each of the NICs, are they PCIe Gen2 or Gen1, are the PCIe slots used by the NICs on the same CPU socket as the CPU core(s) you're using for packet forwarding, etc.? I think you got it right: the fastest way to debug this issue is to try out multiple CPUs and NICs.
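To put a number on "low enough": dividing the core frequency by the packet rate gives the per-packet cycle budget, which is ample for simple forwarding on both CPUs (a back-of-the-envelope sketch, assuming a single core handles the 1.49 Mpps stream):

```python
# Per-packet CPU cycle budget = core frequency / packet rate.
def cycles_per_packet(core_hz, pps):
    return core_hz / pps

print(round(cycles_per_packet(1.6e9, 1_488_095)))  # ~1075 cycles on the E5-2603 v3
print(round(cycles_per_packet(3.1e9, 1_488_095)))  # ~2083 cycles on the i5-4440
```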

Regards,
Cristian

From: husainee [mailto:husainee.plumber@nevisnetworks.com]
Sent: Wednesday, September 9, 2015 3:16 PM
To: Dumitrescu, Cristian; dev@dpdk.org
Cc: Cao, Waterman
Subject: Re: [dpdk-dev] Random packet drops with ip_pipeline on R730.

[...]

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2015-09-09 16:32 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-08  4:55 Random packet drops with ip_pipeline on R730 husainee
2015-09-08 13:02 ` Dumitrescu, Cristian
2015-09-09  9:47   ` husainee
2015-09-09 11:39     ` Dumitrescu, Cristian
2015-09-09 12:15       ` husainee
2015-09-09 16:31         ` Dumitrescu, Cristian

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.