* FEC performance degradation on iMX28 with forced link media
@ 2013-11-22 12:40 Hector Palacios
2013-11-24 4:40 ` Marek Vasut
0 siblings, 1 reply; 13+ messages in thread
From: Hector Palacios @ 2013-11-22 12:40 UTC (permalink / raw)
To: netdev; +Cc: fabio.estevam, Marek Vasut
Hello,
When forcing the Ethernet PHY link media with ethtool/mii-tool on the i.MX28, I've seen
significant performance degradation as the packet size increases.
On the target:
# mii-tool eth0 -F 10baseT-FD
# netpipe
On the host:
# netpipe -h <target-ip> -n 1
...
44: 1024 bytes 1 times --> 6.56 Mbps in 1191.00 usec
45: 1027 bytes 1 times --> 6.56 Mbps in 1193.52 usec
46: 1533 bytes 1 times --> 0.60 Mbps in 19600.54 usec
47: 1536 bytes 1 times --> 0.46 Mbps in 25262.52 usec
48: 1539 bytes 1 times --> 0.57 Mbps in 20745.54 usec
49: 2045 bytes 1 times --> 0.74 Mbps in 20971.95 usec
...
On loop 46, as the packet size exceeds the MTU (1500), performance falls from 6.56 Mbps
to 0.60 Mbps.
Going back to 100baseTX-FD, but still forced (autonegotiation off), the same occurs:
On the target:
# mii-tool eth0 -F 100baseTx-FD
# netpipe
On the host:
# netpipe -h <target-ip> -n 1
...
58: 6141 bytes 1 times --> 39.74 Mbps in 1179.03 usec
59: 6144 bytes 1 times --> 41.83 Mbps in 1120.51 usec
60: 6147 bytes 1 times --> 41.39 Mbps in 1133.03 usec
61: 8189 bytes 1 times --> 6.36 Mbps in 9823.94 usec
62: 8192 bytes 1 times --> 6.56 Mbps in 9521.46 usec
63: 8195 bytes 1 times --> 6.56 Mbps in 9532.99 usec
...
only this time it happens with a larger packet size (8189 bytes).
With autonegotiation on, performance is fine and does not show these drops.
I've reproduced this on the mx28evk board, but it also happens on my own hardware, with
a different PHY, on v3.10.
I also tried an old v2.6.35 kernel and the issue was reproducible there as well, though
it happened at larger packet sizes than on v3.10:
...
75: 32771 bytes 1 times --> 49.64 Mbps in 5036.50 usec
76: 49149 bytes 1 times --> 46.18 Mbps in 8120.48 usec
77: 49152 bytes 1 times --> 43.30 Mbps in 8660.46 usec
78: 49155 bytes 1 times --> 40.10 Mbps in 9351.46 usec
79: 65533 bytes 1 times --> 2.03 Mbps in 246061.04 usec
80: 65536 bytes 1 times --> 2.21 Mbps in 226516.50 usec
81: 65539 bytes 1 times --> 1.45 Mbps in 344196.46 usec
...
Could there be any issue with packet fragmentation?
I tried the same on imx6sabresd, but there the issue is not reproducible. I don't know
if the higher CPU frequency might be hiding the problem, though.
Any idea what could make the difference between forcing media and autonegotiation?
Best regards,
--
Hector Palacios
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: FEC performance degradation on iMX28 with forced link media
2013-11-22 12:40 FEC performance degradation on iMX28 with forced link media Hector Palacios
@ 2013-11-24 4:40 ` Marek Vasut
2013-11-25 8:56 ` Hector Palacios
0 siblings, 1 reply; 13+ messages in thread
From: Marek Vasut @ 2013-11-24 4:40 UTC (permalink / raw)
To: Hector Palacios; +Cc: netdev, fabio.estevam
Hi Hector,
> Hello,
>
> When forcing the Ethernet PHY link media with ethtool/mii-tool on the
> i.MX28 I've seen important performance degradation as the packet size
> increases.
>
> On the target:
> # mii-tool eth0 -F 10baseT-FD
> # netpipe
>
> On the host:
> # netpipe -h <target-ip> -n 1
> ...
> 44: 1024 bytes 1 times --> 6.56 Mbps in 1191.00 usec
> 45: 1027 bytes 1 times --> 6.56 Mbps in 1193.52 usec
> 46: 1533 bytes 1 times --> 0.60 Mbps in 19600.54 usec
> 47: 1536 bytes 1 times --> 0.46 Mbps in 25262.52 usec
> 48: 1539 bytes 1 times --> 0.57 Mbps in 20745.54 usec
> 49: 2045 bytes 1 times --> 0.74 Mbps in 20971.95 usec
> ...
> On loop 46, as the packet size exceeds the MTU (1500) performance falls
> from 6.56Mbps to 0.60Mbps.
>
> Going back to 100baseTX-FD, but still forced (autonegotiation off), the
> same occurs: On the target:
> # mii-tool eth0 -F 100baseTx-FD
> # netpipe
>
> On the host:
> # netpipe -h <target-ip> -n 1
> ...
> 58: 6141 bytes 1 times --> 39.74 Mbps in 1179.03 usec
> 59: 6144 bytes 1 times --> 41.83 Mbps in 1120.51 usec
> 60: 6147 bytes 1 times --> 41.39 Mbps in 1133.03 usec
> 61: 8189 bytes 1 times --> 6.36 Mbps in 9823.94 usec
> 62: 8192 bytes 1 times --> 6.56 Mbps in 9521.46 usec
> 63: 8195 bytes 1 times --> 6.56 Mbps in 9532.99 usec
> ...
> only this time it happens with a larger packet size (8189 bytes).
>
> With autonegotiation on, performance is ok and does not suffer these drops.
>
> I've reproduced this on the mx28evk board but it also happens in my
> hardware, with different PHY on v3.10.
> I also tried on an old v2.6.35 kernel and the issue was reproducible as
> well, though it happened with larger packet sizes than it happens with
> v3.10:
> ...
> 75: 32771 bytes 1 times --> 49.64 Mbps in 5036.50 usec
> 76: 49149 bytes 1 times --> 46.18 Mbps in 8120.48 usec
> 77: 49152 bytes 1 times --> 43.30 Mbps in 8660.46 usec
> 78: 49155 bytes 1 times --> 40.10 Mbps in 9351.46 usec
> 79: 65533 bytes 1 times --> 2.03 Mbps in 246061.04 usec
> 80: 65536 bytes 1 times --> 2.21 Mbps in 226516.50 usec
> 81: 65539 bytes 1 times --> 1.45 Mbps in 344196.46 usec
> ...
>
> Could there be any issue with packet fragmentation?
> I tried the same on imx6sabresd but here the issue is not reproducible. I
> don't know if the higher CPU frequency might be hiding the problem,
> though.
>
> Any idea about what can make the difference between forcing media vs
> autonegotiation?
Let me ask; this might be unrelated, but I will still go ahead. Do you also
observe packet loss? You can check with iperf:
On host machine (PC): iperf -u -s -l 4M -i 60
On target: iperf -u -c <hostip> -t 3600 -B 100M -i 60
Best regards,
Marek Vasut
* Re: FEC performance degradation on iMX28 with forced link media
2013-11-24 4:40 ` Marek Vasut
@ 2013-11-25 8:56 ` Hector Palacios
2013-12-18 16:43 ` FEC performance degradation with certain packet sizes Hector Palacios
0 siblings, 1 reply; 13+ messages in thread
From: Hector Palacios @ 2013-11-25 8:56 UTC (permalink / raw)
To: Marek Vasut; +Cc: netdev, fabio.estevam
On 11/24/2013 05:40 AM, Marek Vasut wrote:
> Hi Hector,
>
>> Hello,
>>
>> When forcing the Ethernet PHY link media with ethtool/mii-tool on the
>> i.MX28 I've seen important performance degradation as the packet size
>> increases.
>>
>> On the target:
>> # mii-tool eth0 -F 10baseT-FD
>> # netpipe
>>
>> On the host:
>> # netpipe -h <target-ip> -n 1
>> ...
>> 44: 1024 bytes 1 times --> 6.56 Mbps in 1191.00 usec
>> 45: 1027 bytes 1 times --> 6.56 Mbps in 1193.52 usec
>> 46: 1533 bytes 1 times --> 0.60 Mbps in 19600.54 usec
>> 47: 1536 bytes 1 times --> 0.46 Mbps in 25262.52 usec
>> 48: 1539 bytes 1 times --> 0.57 Mbps in 20745.54 usec
>> 49: 2045 bytes 1 times --> 0.74 Mbps in 20971.95 usec
>> ...
>> On loop 46, as the packet size exceeds the MTU (1500) performance falls
>> from 6.56Mbps to 0.60Mbps.
>>
>> Going back to 100baseTX-FD, but still forced (autonegotiation off), the
>> same occurs: On the target:
>> # mii-tool eth0 -F 100baseTx-FD
>> # netpipe
>>
>> On the host:
>> # netpipe -h <target-ip> -n 1
>> ...
>> 58: 6141 bytes 1 times --> 39.74 Mbps in 1179.03 usec
>> 59: 6144 bytes 1 times --> 41.83 Mbps in 1120.51 usec
>> 60: 6147 bytes 1 times --> 41.39 Mbps in 1133.03 usec
>> 61: 8189 bytes 1 times --> 6.36 Mbps in 9823.94 usec
>> 62: 8192 bytes 1 times --> 6.56 Mbps in 9521.46 usec
>> 63: 8195 bytes 1 times --> 6.56 Mbps in 9532.99 usec
>> ...
>> only this time it happens with a larger packet size (8189 bytes).
>>
>> With autonegotiation on, performance is ok and does not suffer these drops.
>>
>> I've reproduced this on the mx28evk board but it also happens in my
>> hardware, with different PHY on v3.10.
>> I also tried on an old v2.6.35 kernel and the issue was reproducible as
>> well, though it happened with larger packet sizes than it happens with
>> v3.10:
>> ...
>> 75: 32771 bytes 1 times --> 49.64 Mbps in 5036.50 usec
>> 76: 49149 bytes 1 times --> 46.18 Mbps in 8120.48 usec
>> 77: 49152 bytes 1 times --> 43.30 Mbps in 8660.46 usec
>> 78: 49155 bytes 1 times --> 40.10 Mbps in 9351.46 usec
>> 79: 65533 bytes 1 times --> 2.03 Mbps in 246061.04 usec
>> 80: 65536 bytes 1 times --> 2.21 Mbps in 226516.50 usec
>> 81: 65539 bytes 1 times --> 1.45 Mbps in 344196.46 usec
>> ...
>>
>> Could there be any issue with packet fragmentation?
>> I tried the same on imx6sabresd but here the issue is not reproducible. I
>> don't know if the higher CPU frequency might be hiding the problem,
>> though.
>>
>> Any idea about what can make the difference between forcing media vs
>> autonegotiation?
>
> Let me ask, this might be unrelated, but I will still go ahead. Do you also
> observe packetloss? You can check with iperf:
>
> On host machine (PC): iperf -u -s -l 4M -i 60
> On target: iperf -u -c <hostip> -t 3600 -B 100M -i 60
Yes, with forced 100baseTX-FD there is a small amount of packet loss:
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-60.0 sec 339 MBytes 47.4 Mbits/sec 0.075 ms 61/242070 (0.025%)
[ 3] 60.0-120.0 sec 339 MBytes 47.4 Mbits/sec 0.209 ms 45/242122 (0.019%)
[ 3] 120.0-180.0 sec 339 MBytes 47.5 Mbits/sec 0.084 ms 70/242237 (0.029%)
[ 3] 180.0-240.0 sec 339 MBytes 47.4 Mbits/sec 0.030 ms 80/241993 (0.033%)
[ 3] 240.0-300.0 sec 340 MBytes 47.5 Mbits/sec 0.042 ms 111/242363 (0.046%)
[ 3] 300.0-360.0 sec 339 MBytes 47.4 Mbits/sec 0.038 ms 93/241972 (0.038%)
[ 3] 360.0-420.0 sec 339 MBytes 47.5 Mbits/sec 0.030 ms 78/242214 (0.032%)
[ 3] 420.0-480.0 sec 339 MBytes 47.4 Mbits/sec 0.090 ms 77/241980 (0.032%)
[ 3] 480.0-540.0 sec 339 MBytes 47.4 Mbits/sec 0.025 ms 125/242058 (0.052%)
With autonegotiated 100baseTX-FD, there is none:
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-60.0 sec 336 MBytes 47.0 Mbits/sec 0.038 ms 0/239673 (0%)
[ 3] 60.0-120.0 sec 337 MBytes 47.1 Mbits/sec 0.078 ms 0/240353 (0%)
[ 3] 120.0-180.0 sec 337 MBytes 47.1 Mbits/sec 0.047 ms 0/240054 (0%)
[ 3] 180.0-240.0 sec 337 MBytes 47.1 Mbits/sec 0.038 ms 0/240195 (0%)
[ 3] 240.0-300.0 sec 337 MBytes 47.1 Mbits/sec 0.038 ms 0/240109 (0%)
[ 3] 300.0-360.0 sec 337 MBytes 47.1 Mbits/sec 0.035 ms 0/240101 (0%)
[ 3] 360.0-420.0 sec 337 MBytes 47.0 Mbits/sec 0.031 ms 0/240032 (0%)
[ 3] 420.0-480.0 sec 336 MBytes 47.0 Mbits/sec 0.036 ms 0/239912 (0%)
Best regards,
--
Hector Palacios
* Re: FEC performance degradation with certain packet sizes
2013-11-25 8:56 ` Hector Palacios
@ 2013-12-18 16:43 ` Hector Palacios
2013-12-18 17:38 ` Eric Dumazet
2013-12-20 3:35 ` fugang.duan
0 siblings, 2 replies; 13+ messages in thread
From: Hector Palacios @ 2013-12-18 16:43 UTC (permalink / raw)
To: Marek Vasut, netdev
Cc: fabio.estevam, shawn.guo, l.stach, Frank Li, fugang.duan,
bhutchings, davem
Hello,
I'm resending this thread (with a reworded subject) and additional people on CC.
I found that the issue also happens with an auto-negotiated link and is reproducible on the
i.MX6 as well as on the i.MX28. It looks like a problem with the FEC driver.
Steps to reproduce:
On the target:
netpipe
On the host:
netpipe -h <target_ip> -n 5
At certain packet sizes (always starting at 1533 bytes), performance drops
dramatically:
On i.MX28:
[...]
42: 771 bytes 5 times --> 19.78 Mbps in 297.41 usec
43: 1021 bytes 5 times --> 23.29 Mbps in 334.41 usec
44: 1024 bytes 5 times --> 23.61 Mbps in 330.90 usec
45: 1027 bytes 5 times --> 23.43 Mbps in 334.41 usec
46: 1533 bytes 5 times --> 0.13 Mbps in 88817.49 usec
47: 1536 bytes 5 times --> 0.06 Mbps in 189914.91 usec
48: 1539 bytes 5 times --> 0.06 Mbps in 204917.19 usec
49: 2045 bytes 5 times --> 0.07 Mbps in 210931.79 usec
50: 2048 bytes 5 times --> 0.07 Mbps in 210919.10 usec
51: 2051 bytes 5 times --> 0.07 Mbps in 212915.71 usec
52: 3069 bytes 5 times --> 35.42 Mbps in 661.01 usec
53: 3072 bytes 5 times --> 35.57 Mbps in 659.00 usec
54: 3075 bytes 5 times --> 35.42 Mbps in 662.29 usec
55: 4093 bytes 5 times --> 40.03 Mbps in 780.19 usec
56: 4096 bytes 5 times --> 40.75 Mbps in 766.79 usec
57: 4099 bytes 5 times --> 40.64 Mbps in 769.49 usec
58: 6141 bytes 5 times --> 3.08 Mbps in 15187.90 usec
59: 6144 bytes 5 times --> 2.94 Mbps in 15928.19 usec
60: 6147 bytes 5 times --> 5.57 Mbps in 8418.91 usec
61: 8189 bytes 5 times --> 1.34 Mbps in 46574.90 usec
62: 8192 bytes 5 times --> 2.17 Mbps in 28781.99 usec
63: 8195 bytes 5 times --> 1.36 Mbps in 45923.69 usec
64: 12285 bytes 5 times --> 51.78 Mbps in 1810.21 usec
65: 12288 bytes 5 times --> 50.46 Mbps in 1857.81 usec
66: 12291 bytes 5 times --> 54.01 Mbps in 1736.21 usec
67: 16381 bytes 5 times --> 55.86 Mbps in 2237.50 usec
68: 16384 bytes 5 times --> 56.93 Mbps in 2195.79 usec
69: 16387 bytes 5 times --> 35.62 Mbps in 3509.60 usec
70: 24573 bytes 5 times --> 7.19 Mbps in 26075.60 usec
71: 24576 bytes 5 times --> 58.36 Mbps in 3212.59 usec
72: 24579 bytes 5 times --> 7.92 Mbps in 23678.90 usec
73: 32765 bytes 5 times --> 58.14 Mbps in 4299.79 usec
74: 32768 bytes 5 times --> 5.34 Mbps in 46810.20 usec
75: 32771 bytes 5 times --> 41.51 Mbps in 6023.21 usec
76: 49149 bytes 5 times --> 49.62 Mbps in 7557.20 usec
77: 49152 bytes 5 times --> 48.82 Mbps in 7681.11 usec
On i.MX6:
[...]
42: 771 bytes 5 times --> 16.21 Mbps in 362.91 usec
43: 1021 bytes 5 times --> 17.97 Mbps in 433.51 usec
44: 1024 bytes 5 times --> 18.19 Mbps in 429.40 usec
45: 1027 bytes 5 times --> 18.16 Mbps in 431.41 usec
46: 1533 bytes 5 times --> 2.35 Mbps in 4970.11 usec
47: 1536 bytes 5 times --> 2.36 Mbps in 4959.91 usec
48: 1539 bytes 5 times --> 2.37 Mbps in 4959.20 usec
49: 2045 bytes 5 times --> 3.14 Mbps in 4972.31 usec
50: 2048 bytes 5 times --> 3.15 Mbps in 4959.50 usec
51: 2051 bytes 5 times --> 3.15 Mbps in 4960.01 usec
52: 3069 bytes 5 times --> 4.70 Mbps in 4984.19 usec
53: 3072 bytes 5 times --> 4.73 Mbps in 4960.10 usec
54: 3075 bytes 5 times --> 4.73 Mbps in 4957.81 usec
55: 4093 bytes 5 times --> 6.29 Mbps in 4966.71 usec
56: 4096 bytes 5 times --> 6.30 Mbps in 4962.00 usec
57: 4099 bytes 5 times --> 6.31 Mbps in 4957.71 usec
58: 6141 bytes 5 times --> 49.25 Mbps in 951.40 usec
59: 6144 bytes 5 times --> 49.23 Mbps in 952.21 usec
60: 6147 bytes 5 times --> 49.18 Mbps in 953.69 usec
Does anyone have any clue about where the problem might be?
Best regards,
--
Hector Palacios
On 11/25/2013 09:56 AM, Hector Palacios wrote:
> On 11/24/2013 05:40 AM, Marek Vasut wrote:
>> Hi Hector,
>>
>>> Hello,
>>>
>>> When forcing the Ethernet PHY link media with ethtool/mii-tool on the
>>> i.MX28 I've seen important performance degradation as the packet size
>>> increases.
>>>
>>> On the target:
>>> # mii-tool eth0 -F 10baseT-FD
>>> # netpipe
>>>
>>> On the host:
>>> # netpipe -h <target-ip> -n 1
>>> ...
>>> 44: 1024 bytes 1 times --> 6.56 Mbps in 1191.00 usec
>>> 45: 1027 bytes 1 times --> 6.56 Mbps in 1193.52 usec
>>> 46: 1533 bytes 1 times --> 0.60 Mbps in 19600.54 usec
>>> 47: 1536 bytes 1 times --> 0.46 Mbps in 25262.52 usec
>>> 48: 1539 bytes 1 times --> 0.57 Mbps in 20745.54 usec
>>> 49: 2045 bytes 1 times --> 0.74 Mbps in 20971.95 usec
>>> ...
>>> On loop 46, as the packet size exceeds the MTU (1500) performance falls
>>> from 6.56Mbps to 0.60Mbps.
>>>
>>> Going back to 100baseTX-FD, but still forced (autonegotiation off), the
>>> same occurs: On the target:
>>> # mii-tool eth0 -F 100baseTx-FD
>>> # netpipe
>>>
>>> On the host:
>>> # netpipe -h <target-ip> -n 1
>>> ...
>>> 58: 6141 bytes 1 times --> 39.74 Mbps in 1179.03 usec
>>> 59: 6144 bytes 1 times --> 41.83 Mbps in 1120.51 usec
>>> 60: 6147 bytes 1 times --> 41.39 Mbps in 1133.03 usec
>>> 61: 8189 bytes 1 times --> 6.36 Mbps in 9823.94 usec
>>> 62: 8192 bytes 1 times --> 6.56 Mbps in 9521.46 usec
>>> 63: 8195 bytes 1 times --> 6.56 Mbps in 9532.99 usec
>>> ...
>>> only this time it happens with a larger packet size (8189 bytes).
>>>
>>> With autonegotiation on, performance is ok and does not suffer these drops.
>>>
>>> I've reproduced this on the mx28evk board but it also happens in my
>>> hardware, with different PHY on v3.10.
>>> I also tried on an old v2.6.35 kernel and the issue was reproducible as
>>> well, though it happened with larger packet sizes than it happens with
>>> v3.10:
>>> ...
>>> 75: 32771 bytes 1 times --> 49.64 Mbps in 5036.50 usec
>>> 76: 49149 bytes 1 times --> 46.18 Mbps in 8120.48 usec
>>> 77: 49152 bytes 1 times --> 43.30 Mbps in 8660.46 usec
>>> 78: 49155 bytes 1 times --> 40.10 Mbps in 9351.46 usec
>>> 79: 65533 bytes 1 times --> 2.03 Mbps in 246061.04 usec
>>> 80: 65536 bytes 1 times --> 2.21 Mbps in 226516.50 usec
>>> 81: 65539 bytes 1 times --> 1.45 Mbps in 344196.46 usec
>>> ...
>>>
>>> Could there be any issue with packet fragmentation?
>>> I tried the same on imx6sabresd but here the issue is not reproducible. I
>>> don't know if the higher CPU frequency might be hiding the problem,
>>> though.
>>>
>>> Any idea about what can make the difference between forcing media vs
>>> autonegotiation?
>>
>> Let me ask, this might be unrelated, but I will still go ahead. Do you also
>> observe packetloss? You can check with iperf:
>>
>> On host machine (PC): iperf -u -s -l 4M -i 60
>> On target: iperf -u -c <hostip> -t 3600 -B 100M -i 60
>
> Yes, with forced 100baseTX-FD there is a small packet loss:
>
> [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
> [ 3] 0.0-60.0 sec 339 MBytes 47.4 Mbits/sec 0.075 ms 61/242070 (0.025%)
> [ 3] 60.0-120.0 sec 339 MBytes 47.4 Mbits/sec 0.209 ms 45/242122 (0.019%)
> [ 3] 120.0-180.0 sec 339 MBytes 47.5 Mbits/sec 0.084 ms 70/242237 (0.029%)
> [ 3] 180.0-240.0 sec 339 MBytes 47.4 Mbits/sec 0.030 ms 80/241993 (0.033%)
> [ 3] 240.0-300.0 sec 340 MBytes 47.5 Mbits/sec 0.042 ms 111/242363 (0.046%)
> [ 3] 300.0-360.0 sec 339 MBytes 47.4 Mbits/sec 0.038 ms 93/241972 (0.038%)
> [ 3] 360.0-420.0 sec 339 MBytes 47.5 Mbits/sec 0.030 ms 78/242214 (0.032%)
> [ 3] 420.0-480.0 sec 339 MBytes 47.4 Mbits/sec 0.090 ms 77/241980 (0.032%)
> [ 3] 480.0-540.0 sec 339 MBytes 47.4 Mbits/sec 0.025 ms 125/242058 (0.052%)
>
> With autonegotiated 100baseTX-FD, there is not:
>
> [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
> [ 3] 0.0-60.0 sec 336 MBytes 47.0 Mbits/sec 0.038 ms 0/239673 (0%)
> [ 3] 60.0-120.0 sec 337 MBytes 47.1 Mbits/sec 0.078 ms 0/240353 (0%)
> [ 3] 120.0-180.0 sec 337 MBytes 47.1 Mbits/sec 0.047 ms 0/240054 (0%)
> [ 3] 180.0-240.0 sec 337 MBytes 47.1 Mbits/sec 0.038 ms 0/240195 (0%)
> [ 3] 240.0-300.0 sec 337 MBytes 47.1 Mbits/sec 0.038 ms 0/240109 (0%)
> [ 3] 300.0-360.0 sec 337 MBytes 47.1 Mbits/sec 0.035 ms 0/240101 (0%)
> [ 3] 360.0-420.0 sec 337 MBytes 47.0 Mbits/sec 0.031 ms 0/240032 (0%)
> [ 3] 420.0-480.0 sec 336 MBytes 47.0 Mbits/sec 0.036 ms 0/239912 (0%)
>
>
> Best regards,
> --
> Hector Palacios
* Re: FEC performance degradation with certain packet sizes
2013-12-18 16:43 ` FEC performance degradation with certain packet sizes Hector Palacios
@ 2013-12-18 17:38 ` Eric Dumazet
2013-12-19 2:44 ` fugang.duan
2013-12-20 3:35 ` fugang.duan
1 sibling, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2013-12-18 17:38 UTC (permalink / raw)
To: Hector Palacios
Cc: Marek Vasut, netdev, fabio.estevam, shawn.guo, l.stach, Frank Li,
fugang.duan, bhutchings, davem
On Wed, 2013-12-18 at 17:43 +0100, Hector Palacios wrote:
> Hello,
>
> I'm resending this thread (reworded the subject) with additional people on CC.
> I found the issue happens also with auto-negotiated link and is reproducible on the
> i.MX6 as well as on the i.MX28. It looks like a problem with the fec driver.
>
> Steps to reproduce:
> On the target:
> netpipe
> On the host:
> netpipe -h <target_ip> -n 5
>
> At certain packet sizes (starting always at 1533 bytes), the performance drops
> dramatically:
>
> On i.MX28:
> [...]
> 42: 771 bytes 5 times --> 19.78 Mbps in 297.41 usec
> 43: 1021 bytes 5 times --> 23.29 Mbps in 334.41 usec
> 44: 1024 bytes 5 times --> 23.61 Mbps in 330.90 usec
> 45: 1027 bytes 5 times --> 23.43 Mbps in 334.41 usec
> 46: 1533 bytes 5 times --> 0.13 Mbps in 88817.49 usec
> 47: 1536 bytes 5 times --> 0.06 Mbps in 189914.91 usec
> 48: 1539 bytes 5 times --> 0.06 Mbps in 204917.19 usec
> 49: 2045 bytes 5 times --> 0.07 Mbps in 210931.79 usec
> 50: 2048 bytes 5 times --> 0.07 Mbps in 210919.10 usec
> 51: 2051 bytes 5 times --> 0.07 Mbps in 212915.71 usec
> 52: 3069 bytes 5 times --> 35.42 Mbps in 661.01 usec
> 53: 3072 bytes 5 times --> 35.57 Mbps in 659.00 usec
> 54: 3075 bytes 5 times --> 35.42 Mbps in 662.29 usec
> 55: 4093 bytes 5 times --> 40.03 Mbps in 780.19 usec
> 56: 4096 bytes 5 times --> 40.75 Mbps in 766.79 usec
> 57: 4099 bytes 5 times --> 40.64 Mbps in 769.49 usec
> 58: 6141 bytes 5 times --> 3.08 Mbps in 15187.90 usec
> 59: 6144 bytes 5 times --> 2.94 Mbps in 15928.19 usec
> 60: 6147 bytes 5 times --> 5.57 Mbps in 8418.91 usec
> 61: 8189 bytes 5 times --> 1.34 Mbps in 46574.90 usec
> 62: 8192 bytes 5 times --> 2.17 Mbps in 28781.99 usec
> 63: 8195 bytes 5 times --> 1.36 Mbps in 45923.69 usec
> 64: 12285 bytes 5 times --> 51.78 Mbps in 1810.21 usec
> 65: 12288 bytes 5 times --> 50.46 Mbps in 1857.81 usec
> 66: 12291 bytes 5 times --> 54.01 Mbps in 1736.21 usec
> 67: 16381 bytes 5 times --> 55.86 Mbps in 2237.50 usec
> 68: 16384 bytes 5 times --> 56.93 Mbps in 2195.79 usec
> 69: 16387 bytes 5 times --> 35.62 Mbps in 3509.60 usec
> 70: 24573 bytes 5 times --> 7.19 Mbps in 26075.60 usec
> 71: 24576 bytes 5 times --> 58.36 Mbps in 3212.59 usec
> 72: 24579 bytes 5 times --> 7.92 Mbps in 23678.90 usec
> 73: 32765 bytes 5 times --> 58.14 Mbps in 4299.79 usec
> 74: 32768 bytes 5 times --> 5.34 Mbps in 46810.20 usec
> 75: 32771 bytes 5 times --> 41.51 Mbps in 6023.21 usec
> 76: 49149 bytes 5 times --> 49.62 Mbps in 7557.20 usec
> 77: 49152 bytes 5 times --> 48.82 Mbps in 7681.11 usec
>
> On i.MX6:
> [...]
> 42: 771 bytes 5 times --> 16.21 Mbps in 362.91 usec
> 43: 1021 bytes 5 times --> 17.97 Mbps in 433.51 usec
> 44: 1024 bytes 5 times --> 18.19 Mbps in 429.40 usec
> 45: 1027 bytes 5 times --> 18.16 Mbps in 431.41 usec
> 46: 1533 bytes 5 times --> 2.35 Mbps in 4970.11 usec
> 47: 1536 bytes 5 times --> 2.36 Mbps in 4959.91 usec
> 48: 1539 bytes 5 times --> 2.37 Mbps in 4959.20 usec
> 49: 2045 bytes 5 times --> 3.14 Mbps in 4972.31 usec
> 50: 2048 bytes 5 times --> 3.15 Mbps in 4959.50 usec
> 51: 2051 bytes 5 times --> 3.15 Mbps in 4960.01 usec
> 52: 3069 bytes 5 times --> 4.70 Mbps in 4984.19 usec
> 53: 3072 bytes 5 times --> 4.73 Mbps in 4960.10 usec
> 54: 3075 bytes 5 times --> 4.73 Mbps in 4957.81 usec
> 55: 4093 bytes 5 times --> 6.29 Mbps in 4966.71 usec
> 56: 4096 bytes 5 times --> 6.30 Mbps in 4962.00 usec
> 57: 4099 bytes 5 times --> 6.31 Mbps in 4957.71 usec
> 58: 6141 bytes 5 times --> 49.25 Mbps in 951.40 usec
> 59: 6144 bytes 5 times --> 49.23 Mbps in 952.21 usec
> 60: 6147 bytes 5 times --> 49.18 Mbps in 953.69 usec
>
> Does anyone have any clue about where the problem might be?
What is the driver in use?
Have you tried disabling TSO/GSO?
ethtool -k eth0
ethtool -K eth0 tso off gso off
* RE: FEC performance degradation with certain packet sizes
2013-12-18 17:38 ` Eric Dumazet
@ 2013-12-19 2:44 ` fugang.duan
2013-12-19 23:04 ` Eric Dumazet
0 siblings, 1 reply; 13+ messages in thread
From: fugang.duan @ 2013-12-19 2:44 UTC (permalink / raw)
To: Eric Dumazet, Hector Palacios
Cc: Marek Vasut, netdev, Fabio.Estevam, shawn.guo, l.stach, Frank.Li,
bhutchings, davem
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Thursday, December 19, 2013 1:39 AM
>To: Hector Palacios
>Cc: Marek Vasut; netdev@vger.kernel.org; Estevam Fabio-R49496;
>shawn.guo@linaro.org; l.stach@pengutronix.de; Li Frank-B20596; Duan Fugang-
>B38611; bhutchings@solarflare.com; davem@davemloft.net
>Subject: Re: FEC performance degradation with certain packet sizes
>
>On Wed, 2013-12-18 at 17:43 +0100, Hector Palacios wrote:
>> Hello,
>>
>> I'm resending this thread (reworded the subject) with additional people on CC.
>> I found the issue happens also with auto-negotiated link and is reproducible
>on the
>> i.MX6 as well as on the i.MX28. It looks like a problem with the fec driver.
>>
>> Steps to reproduce:
>> On the target:
>> netpipe
>> On the host:
>> netpipe -h <target_ip> -n 5
>>
>> At certain packet sizes (starting always at 1533 bytes), the performance
>drops
>> dramatically:
>>
>> On i.MX28:
>> [...]
>> 42: 771 bytes 5 times --> 19.78 Mbps in 297.41 usec
>> 43: 1021 bytes 5 times --> 23.29 Mbps in 334.41 usec
>> 44: 1024 bytes 5 times --> 23.61 Mbps in 330.90 usec
>> 45: 1027 bytes 5 times --> 23.43 Mbps in 334.41 usec
>> 46: 1533 bytes 5 times --> 0.13 Mbps in 88817.49 usec
>> 47: 1536 bytes 5 times --> 0.06 Mbps in 189914.91 usec
>> 48: 1539 bytes 5 times --> 0.06 Mbps in 204917.19 usec
>> 49: 2045 bytes 5 times --> 0.07 Mbps in 210931.79 usec
>> 50: 2048 bytes 5 times --> 0.07 Mbps in 210919.10 usec
>> 51: 2051 bytes 5 times --> 0.07 Mbps in 212915.71 usec
>> 52: 3069 bytes 5 times --> 35.42 Mbps in 661.01 usec
>> 53: 3072 bytes 5 times --> 35.57 Mbps in 659.00 usec
>> 54: 3075 bytes 5 times --> 35.42 Mbps in 662.29 usec
>> 55: 4093 bytes 5 times --> 40.03 Mbps in 780.19 usec
>> 56: 4096 bytes 5 times --> 40.75 Mbps in 766.79 usec
>> 57: 4099 bytes 5 times --> 40.64 Mbps in 769.49 usec
>> 58: 6141 bytes 5 times --> 3.08 Mbps in 15187.90 usec
>> 59: 6144 bytes 5 times --> 2.94 Mbps in 15928.19 usec
>> 60: 6147 bytes 5 times --> 5.57 Mbps in 8418.91 usec
>> 61: 8189 bytes 5 times --> 1.34 Mbps in 46574.90 usec
>> 62: 8192 bytes 5 times --> 2.17 Mbps in 28781.99 usec
>> 63: 8195 bytes 5 times --> 1.36 Mbps in 45923.69 usec
>> 64: 12285 bytes 5 times --> 51.78 Mbps in 1810.21 usec
>> 65: 12288 bytes 5 times --> 50.46 Mbps in 1857.81 usec
>> 66: 12291 bytes 5 times --> 54.01 Mbps in 1736.21 usec
>> 67: 16381 bytes 5 times --> 55.86 Mbps in 2237.50 usec
>> 68: 16384 bytes 5 times --> 56.93 Mbps in 2195.79 usec
>> 69: 16387 bytes 5 times --> 35.62 Mbps in 3509.60 usec
>> 70: 24573 bytes 5 times --> 7.19 Mbps in 26075.60 usec
>> 71: 24576 bytes 5 times --> 58.36 Mbps in 3212.59 usec
>> 72: 24579 bytes 5 times --> 7.92 Mbps in 23678.90 usec
>> 73: 32765 bytes 5 times --> 58.14 Mbps in 4299.79 usec
>> 74: 32768 bytes 5 times --> 5.34 Mbps in 46810.20 usec
>> 75: 32771 bytes 5 times --> 41.51 Mbps in 6023.21 usec
>> 76: 49149 bytes 5 times --> 49.62 Mbps in 7557.20 usec
>> 77: 49152 bytes 5 times --> 48.82 Mbps in 7681.11 usec
>>
>> On i.MX6:
>> [...]
>> 42: 771 bytes 5 times --> 16.21 Mbps in 362.91 usec
>> 43: 1021 bytes 5 times --> 17.97 Mbps in 433.51 usec
>> 44: 1024 bytes 5 times --> 18.19 Mbps in 429.40 usec
>> 45: 1027 bytes 5 times --> 18.16 Mbps in 431.41 usec
>> 46: 1533 bytes 5 times --> 2.35 Mbps in 4970.11 usec
>> 47: 1536 bytes 5 times --> 2.36 Mbps in 4959.91 usec
>> 48: 1539 bytes 5 times --> 2.37 Mbps in 4959.20 usec
>> 49: 2045 bytes 5 times --> 3.14 Mbps in 4972.31 usec
>> 50: 2048 bytes 5 times --> 3.15 Mbps in 4959.50 usec
>> 51: 2051 bytes 5 times --> 3.15 Mbps in 4960.01 usec
>> 52: 3069 bytes 5 times --> 4.70 Mbps in 4984.19 usec
>> 53: 3072 bytes 5 times --> 4.73 Mbps in 4960.10 usec
>> 54: 3075 bytes 5 times --> 4.73 Mbps in 4957.81 usec
>> 55: 4093 bytes 5 times --> 6.29 Mbps in 4966.71 usec
>> 56: 4096 bytes 5 times --> 6.30 Mbps in 4962.00 usec
>> 57: 4099 bytes 5 times --> 6.31 Mbps in 4957.71 usec
>> 58: 6141 bytes 5 times --> 49.25 Mbps in 951.40 usec
>> 59: 6144 bytes 5 times --> 49.23 Mbps in 952.21 usec
>> 60: 6147 bytes 5 times --> 49.18 Mbps in 953.69 usec
>>
>> Does anyone have any clue about where the problem might be?
>
>What is the driver in use ?
>
>Have you tried disabling tso/gso ?
>
>ethtool -k eth0
>
>ethtool -K eth0 tso off gso off
>
The ENET IP doesn't support the TSO feature.
I will reproduce the issue on the imx6q/dl SD platforms and analyze it.
In our previous tests, we didn't use the netpipe tool to test Ethernet performance.
Thanks,
Andy
* RE: FEC performance degradation with certain packet sizes
2013-12-19 2:44 ` fugang.duan
@ 2013-12-19 23:04 ` Eric Dumazet
2013-12-20 0:18 ` Shawn Guo
0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2013-12-19 23:04 UTC (permalink / raw)
To: fugang.duan
Cc: Hector Palacios, Marek Vasut, netdev, Fabio.Estevam, shawn.guo,
l.stach, Frank.Li, bhutchings, davem
On Thu, 2013-12-19 at 02:44 +0000, fugang.duan@freescale.com wrote:
> Enet IP don't support tso feature.
>
> I will reproduce the issue in imx6q/dl sd platform, and analyze the issue.
> Previous test, we don't use netpipe tool test ethernet performance.
drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c seems broken...
If tx_skb_align_workaround() returns NULL, returning NETDEV_TX_BUSY
is the worst possible thing to do.
The packet should be dropped instead.
Probably not related to your issue.
* Re: FEC performance degradation with certain packet sizes
2013-12-19 23:04 ` Eric Dumazet
@ 2013-12-20 0:18 ` Shawn Guo
0 siblings, 0 replies; 13+ messages in thread
From: Shawn Guo @ 2013-12-20 0:18 UTC (permalink / raw)
To: Eric Dumazet
Cc: fugang.duan, Hector Palacios, Marek Vasut, netdev, Fabio.Estevam,
l.stach, Frank.Li, bhutchings, davem
On Thu, Dec 19, 2013 at 03:04:59PM -0800, Eric Dumazet wrote:
> On Thu, 2013-12-19 at 02:44 +0000, fugang.duan@freescale.com wrote:
>
> > Enet IP don't support tso feature.
> >
> > I will reproduce the issue in imx6q/dl sd platform, and analyze the issue.
> > Previous test, we don't use netpipe tool test ethernet performance.
>
> drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c seems broken...
The FEC driver used on i.MX is drivers/net/ethernet/freescale/fec_main.c.
Shawn
>
> If tx_skb_align_workaround() returns NULL, returning NETDEV_TX_BUSY
> is the worst possible thing to do.
>
> Packet should be dropped instead.
>
> Probably not related to your issue.
>
>
* RE: FEC performance degradation with certain packet sizes
2013-12-18 16:43 ` FEC performance degradation with certain packet sizes Hector Palacios
2013-12-18 17:38 ` Eric Dumazet
@ 2013-12-20 3:35 ` fugang.duan
2013-12-20 15:01 ` Hector Palacios
1 sibling, 1 reply; 13+ messages in thread
From: fugang.duan @ 2013-12-20 3:35 UTC (permalink / raw)
To: Hector Palacios, Marek Vasut, netdev
Cc: Fabio.Estevam, shawn.guo, l.stach, Frank.Li, bhutchings, davem
From: Hector Palacios <hector.palacios@digi.com>
Date: Thursday, December 19, 2013 12:44 AM
>To: Marek Vasut; netdev@vger.kernel.org
>Cc: Estevam Fabio-R49496; shawn.guo@linaro.org; l.stach@pengutronix.de; Li
>Frank-B20596; Duan Fugang-B38611; bhutchings@solarflare.com;
>davem@davemloft.net
>Subject: Re: FEC performance degradation with certain packet sizes
>
>Hello,
>
>I'm resending this thread (reworded the subject) with additional people on CC.
>I found the issue happens also with auto-negotiated link and is reproducible on
>the
>i.MX6 as well as on the i.MX28. It looks like a problem with the fec driver.
>
>Steps to reproduce:
>On the target:
> netpipe
>On the host:
> netpipe -h <target_ip> -n 5
>
>At certain packet sizes (starting always at 1533 bytes), the performance drops
>dramatically:
>
>On i.MX28:
>[...]
> 42: 771 bytes 5 times --> 19.78 Mbps in 297.41 usec
> 43: 1021 bytes 5 times --> 23.29 Mbps in 334.41 usec
> 44: 1024 bytes 5 times --> 23.61 Mbps in 330.90 usec
> 45: 1027 bytes 5 times --> 23.43 Mbps in 334.41 usec
> 46: 1533 bytes 5 times --> 0.13 Mbps in 88817.49 usec
> 47: 1536 bytes 5 times --> 0.06 Mbps in 189914.91 usec
> 48: 1539 bytes 5 times --> 0.06 Mbps in 204917.19 usec
> 49: 2045 bytes 5 times --> 0.07 Mbps in 210931.79 usec
> 50: 2048 bytes 5 times --> 0.07 Mbps in 210919.10 usec
> 51: 2051 bytes 5 times --> 0.07 Mbps in 212915.71 usec
> 52: 3069 bytes 5 times --> 35.42 Mbps in 661.01 usec
> 53: 3072 bytes 5 times --> 35.57 Mbps in 659.00 usec
> 54: 3075 bytes 5 times --> 35.42 Mbps in 662.29 usec
> 55: 4093 bytes 5 times --> 40.03 Mbps in 780.19 usec
> 56: 4096 bytes 5 times --> 40.75 Mbps in 766.79 usec
> 57: 4099 bytes 5 times --> 40.64 Mbps in 769.49 usec
> 58: 6141 bytes 5 times --> 3.08 Mbps in 15187.90 usec
> 59: 6144 bytes 5 times --> 2.94 Mbps in 15928.19 usec
> 60: 6147 bytes 5 times --> 5.57 Mbps in 8418.91 usec
> 61: 8189 bytes 5 times --> 1.34 Mbps in 46574.90 usec
> 62: 8192 bytes 5 times --> 2.17 Mbps in 28781.99 usec
> 63: 8195 bytes 5 times --> 1.36 Mbps in 45923.69 usec
> 64: 12285 bytes 5 times --> 51.78 Mbps in 1810.21 usec
> 65: 12288 bytes 5 times --> 50.46 Mbps in 1857.81 usec
> 66: 12291 bytes 5 times --> 54.01 Mbps in 1736.21 usec
> 67: 16381 bytes 5 times --> 55.86 Mbps in 2237.50 usec
> 68: 16384 bytes 5 times --> 56.93 Mbps in 2195.79 usec
> 69: 16387 bytes 5 times --> 35.62 Mbps in 3509.60 usec
> 70: 24573 bytes 5 times --> 7.19 Mbps in 26075.60 usec
> 71: 24576 bytes 5 times --> 58.36 Mbps in 3212.59 usec
> 72: 24579 bytes 5 times --> 7.92 Mbps in 23678.90 usec
> 73: 32765 bytes 5 times --> 58.14 Mbps in 4299.79 usec
> 74: 32768 bytes 5 times --> 5.34 Mbps in 46810.20 usec
> 75: 32771 bytes 5 times --> 41.51 Mbps in 6023.21 usec
> 76: 49149 bytes 5 times --> 49.62 Mbps in 7557.20 usec
> 77: 49152 bytes 5 times --> 48.82 Mbps in 7681.11 usec
>
>On i.MX6:
>[...]
> 42: 771 bytes 5 times --> 16.21 Mbps in 362.91 usec
> 43: 1021 bytes 5 times --> 17.97 Mbps in 433.51 usec
> 44: 1024 bytes 5 times --> 18.19 Mbps in 429.40 usec
> 45: 1027 bytes 5 times --> 18.16 Mbps in 431.41 usec
> 46: 1533 bytes 5 times --> 2.35 Mbps in 4970.11 usec
> 47: 1536 bytes 5 times --> 2.36 Mbps in 4959.91 usec
> 48: 1539 bytes 5 times --> 2.37 Mbps in 4959.20 usec
> 49: 2045 bytes 5 times --> 3.14 Mbps in 4972.31 usec
> 50: 2048 bytes 5 times --> 3.15 Mbps in 4959.50 usec
> 51: 2051 bytes 5 times --> 3.15 Mbps in 4960.01 usec
> 52: 3069 bytes 5 times --> 4.70 Mbps in 4984.19 usec
> 53: 3072 bytes 5 times --> 4.73 Mbps in 4960.10 usec
> 54: 3075 bytes 5 times --> 4.73 Mbps in 4957.81 usec
> 55: 4093 bytes 5 times --> 6.29 Mbps in 4966.71 usec
> 56: 4096 bytes 5 times --> 6.30 Mbps in 4962.00 usec
> 57: 4099 bytes 5 times --> 6.31 Mbps in 4957.71 usec
> 58: 6141 bytes 5 times --> 49.25 Mbps in 951.40 usec
> 59: 6144 bytes 5 times --> 49.23 Mbps in 952.21 usec
> 60: 6147 bytes 5 times --> 49.18 Mbps in 953.69 usec
>
>Does anyone have any clue about where the problem might be?
>
>Best regards,
>--
>Hector Palacios
>
I can reproduce the issue on the imx6q/dl platform with the Freescale internal kernel tree.
This issue must be related to cpufreq; when the scaling_governor is set to performance:
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
and the NPtcp test is then run, the result is as below:
24: 99 bytes 5 times --> 9.89 Mbps in 76.40 usec
25: 125 bytes 5 times --> 12.10 Mbps in 78.80 usec
26: 128 bytes 5 times --> 12.27 Mbps in 79.60 usec
27: 131 bytes 5 times --> 12.80 Mbps in 78.10 usec
28: 189 bytes 5 times --> 18.00 Mbps in 80.10 usec
29: 192 bytes 5 times --> 18.31 Mbps in 80.00 usec
30: 195 bytes 5 times --> 18.41 Mbps in 80.80 usec
31: 253 bytes 5 times --> 23.34 Mbps in 82.70 usec
32: 256 bytes 5 times --> 23.91 Mbps in 81.70 usec
33: 259 bytes 5 times --> 24.19 Mbps in 81.70 usec
34: 381 bytes 5 times --> 33.18 Mbps in 87.60 usec
35: 384 bytes 5 times --> 33.87 Mbps in 86.50 usec
36: 387 bytes 5 times --> 34.41 Mbps in 85.80 usec
37: 509 bytes 5 times --> 42.72 Mbps in 90.90 usec
38: 512 bytes 5 times --> 42.60 Mbps in 91.70 usec
39: 515 bytes 5 times --> 42.80 Mbps in 91.80 usec
40: 765 bytes 5 times --> 56.45 Mbps in 103.40 usec
41: 768 bytes 5 times --> 57.11 Mbps in 102.60 usec
42: 771 bytes 5 times --> 57.22 Mbps in 102.80 usec
43: 1021 bytes 5 times --> 70.69 Mbps in 110.20 usec
44: 1024 bytes 5 times --> 70.70 Mbps in 110.50 usec
45: 1027 bytes 5 times --> 69.59 Mbps in 112.60 usec
46: 1533 bytes 5 times --> 73.56 Mbps in 159.00 usec
47: 1536 bytes 5 times --> 72.92 Mbps in 160.70 usec
48: 1539 bytes 5 times --> 73.80 Mbps in 159.10 usec
49: 2045 bytes 5 times --> 93.59 Mbps in 166.70 usec
50: 2048 bytes 5 times --> 94.07 Mbps in 166.10 usec
51: 2051 bytes 5 times --> 92.92 Mbps in 168.40 usec
52: 3069 bytes 5 times --> 123.43 Mbps in 189.70 usec
53: 3072 bytes 5 times --> 123.68 Mbps in 189.50 usec
Thanks,
Andy
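The echo above only changes cpu0's governor; on a multi-core imx6q/dl the other cores keep their old policy. A small helper that walks every core can be sketched as below. This is only a sketch assuming the standard cpufreq sysfs layout; `set_governor` is a helper name invented here, not part of any standard tool.

```shell
# set_governor GOV [SYSFS_ROOT]
# Apply GOV to every writable scaling_governor under the given sysfs
# root (default /sys). The optional root argument exists only so the
# helper can be exercised against a fake tree without being root.
set_governor() {
    gov_val="$1"
    root="${2:-/sys}"
    for f in "$root"/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
        if [ -w "$f" ]; then
            echo "$gov_val" > "$f"
        fi
    done
}
```

Typical use on the board (needs root): `set_governor performance`.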
* Re: FEC performance degradation with certain packet sizes
2013-12-20 3:35 ` fugang.duan
@ 2013-12-20 15:01 ` Hector Palacios
2013-12-23 1:08 ` fugang.duan
2013-12-23 2:52 ` fugang.duan
0 siblings, 2 replies; 13+ messages in thread
From: Hector Palacios @ 2013-12-20 15:01 UTC (permalink / raw)
To: fugang.duan, Marek Vasut, netdev
Cc: Fabio.Estevam, shawn.guo, l.stach, Frank.Li, bhutchings, davem
Dear Andy,
On 12/20/2013 04:35 AM, fugang.duan@freescale.com wrote:
> [...]
>
> I can reproduce the issue on imx6q/dl platform with freescale internal kernel tree.
>
> This issue must be related to cpufreq, when set scaling_governor to performance:
> echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
>
> And then do NPtcp test, the result as below:
>
> 24: 99 bytes 5 times --> 9.89 Mbps in 76.40 usec
> 25: 125 bytes 5 times --> 12.10 Mbps in 78.80 usec
> 26: 128 bytes 5 times --> 12.27 Mbps in 79.60 usec
> 27: 131 bytes 5 times --> 12.80 Mbps in 78.10 usec
> 28: 189 bytes 5 times --> 18.00 Mbps in 80.10 usec
> 29: 192 bytes 5 times --> 18.31 Mbps in 80.00 usec
> 30: 195 bytes 5 times --> 18.41 Mbps in 80.80 usec
> 31: 253 bytes 5 times --> 23.34 Mbps in 82.70 usec
> 32: 256 bytes 5 times --> 23.91 Mbps in 81.70 usec
> 33: 259 bytes 5 times --> 24.19 Mbps in 81.70 usec
> 34: 381 bytes 5 times --> 33.18 Mbps in 87.60 usec
> 35: 384 bytes 5 times --> 33.87 Mbps in 86.50 usec
> 36: 387 bytes 5 times --> 34.41 Mbps in 85.80 usec
> 37: 509 bytes 5 times --> 42.72 Mbps in 90.90 usec
> 38: 512 bytes 5 times --> 42.60 Mbps in 91.70 usec
> 39: 515 bytes 5 times --> 42.80 Mbps in 91.80 usec
> 40: 765 bytes 5 times --> 56.45 Mbps in 103.40 usec
> 41: 768 bytes 5 times --> 57.11 Mbps in 102.60 usec
> 42: 771 bytes 5 times --> 57.22 Mbps in 102.80 usec
> 43: 1021 bytes 5 times --> 70.69 Mbps in 110.20 usec
> 44: 1024 bytes 5 times --> 70.70 Mbps in 110.50 usec
> 45: 1027 bytes 5 times --> 69.59 Mbps in 112.60 usec
> 46: 1533 bytes 5 times --> 73.56 Mbps in 159.00 usec
> 47: 1536 bytes 5 times --> 72.92 Mbps in 160.70 usec
> 48: 1539 bytes 5 times --> 73.80 Mbps in 159.10 usec
> 49: 2045 bytes 5 times --> 93.59 Mbps in 166.70 usec
> 50: 2048 bytes 5 times --> 94.07 Mbps in 166.10 usec
> 51: 2051 bytes 5 times --> 92.92 Mbps in 168.40 usec
> 52: 3069 bytes 5 times --> 123.43 Mbps in 189.70 usec
> 53: 3072 bytes 5 times --> 123.68 Mbps in 189.50 usec
You are right. Unfortunately, this does not work on the i.MX28 (at least for me).
Couldn't it be that cpufreq is masking the problem on the i.MX6?
Best regards,
--
Hector Palacios
* RE: FEC performance degradation with certain packet sizes
2013-12-20 15:01 ` Hector Palacios
@ 2013-12-23 1:08 ` fugang.duan
2013-12-23 2:52 ` fugang.duan
1 sibling, 0 replies; 13+ messages in thread
From: fugang.duan @ 2013-12-23 1:08 UTC (permalink / raw)
To: Hector Palacios, Marek Vasut, netdev
Cc: Fabio.Estevam, shawn.guo, l.stach, Frank.Li, bhutchings, davem
From: Hector Palacios <hector.palacios@digi.com>
Date: Friday, December 20, 2013 11:02 PM
>To: Duan Fugang-B38611; Marek Vasut; netdev@vger.kernel.org
>Cc: Estevam Fabio-R49496; shawn.guo@linaro.org; l.stach@pengutronix.de; Li
>Frank-B20596; bhutchings@solarflare.com; davem@davemloft.net
>Subject: Re: FEC performance degradation with certain packet sizes
>
>Dear Andy,
>
>On 12/20/2013 04:35 AM, fugang.duan@freescale.com wrote:
>> [...]
>>
>> I can reproduce the issue on imx6q/dl platform with freescale internal kernel
>tree.
>>
>> This issue must be related to cpufreq, when set scaling_governor to
>performance:
>> echo performance >
>> /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
>>
>> And then do NPtcp test, the result as below:
>>
>> 24: 99 bytes 5 times --> 9.89 Mbps in 76.40 usec
>> 25: 125 bytes 5 times --> 12.10 Mbps in 78.80 usec
>> 26: 128 bytes 5 times --> 12.27 Mbps in 79.60 usec
>> 27: 131 bytes 5 times --> 12.80 Mbps in 78.10 usec
>> 28: 189 bytes 5 times --> 18.00 Mbps in 80.10 usec
>> 29: 192 bytes 5 times --> 18.31 Mbps in 80.00 usec
>> 30: 195 bytes 5 times --> 18.41 Mbps in 80.80 usec
>> 31: 253 bytes 5 times --> 23.34 Mbps in 82.70 usec
>> 32: 256 bytes 5 times --> 23.91 Mbps in 81.70 usec
>> 33: 259 bytes 5 times --> 24.19 Mbps in 81.70 usec
>> 34: 381 bytes 5 times --> 33.18 Mbps in 87.60 usec
>> 35: 384 bytes 5 times --> 33.87 Mbps in 86.50 usec
>> 36: 387 bytes 5 times --> 34.41 Mbps in 85.80 usec
>> 37: 509 bytes 5 times --> 42.72 Mbps in 90.90 usec
>> 38: 512 bytes 5 times --> 42.60 Mbps in 91.70 usec
>> 39: 515 bytes 5 times --> 42.80 Mbps in 91.80 usec
>> 40: 765 bytes 5 times --> 56.45 Mbps in 103.40 usec
>> 41: 768 bytes 5 times --> 57.11 Mbps in 102.60 usec
>> 42: 771 bytes 5 times --> 57.22 Mbps in 102.80 usec
>> 43: 1021 bytes 5 times --> 70.69 Mbps in 110.20 usec
>> 44: 1024 bytes 5 times --> 70.70 Mbps in 110.50 usec
>> 45: 1027 bytes 5 times --> 69.59 Mbps in 112.60 usec
>> 46: 1533 bytes 5 times --> 73.56 Mbps in 159.00 usec
>> 47: 1536 bytes 5 times --> 72.92 Mbps in 160.70 usec
>> 48: 1539 bytes 5 times --> 73.80 Mbps in 159.10 usec
>> 49: 2045 bytes 5 times --> 93.59 Mbps in 166.70 usec
>> 50: 2048 bytes 5 times --> 94.07 Mbps in 166.10 usec
>> 51: 2051 bytes 5 times --> 92.92 Mbps in 168.40 usec
>> 52: 3069 bytes 5 times --> 123.43 Mbps in 189.70 usec
>> 53: 3072 bytes 5 times --> 123.68 Mbps in 189.50 usec
>
>You are right. Unfortunately, this does not work on i.MX28 (at least for me).
>Couldn't it be that the cpufreq is masking the problem on the i.MX6?
>
>Best regards,
>--
>Hector Palacios
>
I will test it on the imx28 platform and then analyze the result.
Thanks,
Andy
* RE: FEC performance degradation with certain packet sizes
2013-12-20 15:01 ` Hector Palacios
2013-12-23 1:08 ` fugang.duan
@ 2013-12-23 2:52 ` fugang.duan
2014-01-21 17:49 ` Marek Vasut
1 sibling, 1 reply; 13+ messages in thread
From: fugang.duan @ 2013-12-23 2:52 UTC (permalink / raw)
To: Hector Palacios, Marek Vasut, netdev
Cc: Fabio.Estevam, shawn.guo, l.stach, Frank.Li, bhutchings, davem
From: Hector Palacios <hector.palacios@digi.com>
Sent: Friday, December 20, 2013 11:02 PM
>To: Duan Fugang-B38611; Marek Vasut; netdev@vger.kernel.org
>Cc: Estevam Fabio-R49496; shawn.guo@linaro.org; l.stach@pengutronix.de; Li
>Frank-B20596; bhutchings@solarflare.com; davem@davemloft.net
>Subject: Re: FEC performance degradation with certain packet sizes
>
>Dear Andy,
>
>On 12/20/2013 04:35 AM, fugang.duan@freescale.com wrote:
>> [...]
>>
>> I can reproduce the issue on imx6q/dl platform with freescale internal kernel
>tree.
>>
>> This issue must be related to cpufreq, when set scaling_governor to
>performance:
>> echo performance >
>> /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
>>
>> And then do NPtcp test, the result as below:
>>
>> 24: 99 bytes 5 times --> 9.89 Mbps in 76.40 usec
>> 25: 125 bytes 5 times --> 12.10 Mbps in 78.80 usec
>> 26: 128 bytes 5 times --> 12.27 Mbps in 79.60 usec
>> 27: 131 bytes 5 times --> 12.80 Mbps in 78.10 usec
>> 28: 189 bytes 5 times --> 18.00 Mbps in 80.10 usec
>> 29: 192 bytes 5 times --> 18.31 Mbps in 80.00 usec
>> 30: 195 bytes 5 times --> 18.41 Mbps in 80.80 usec
>> 31: 253 bytes 5 times --> 23.34 Mbps in 82.70 usec
>> 32: 256 bytes 5 times --> 23.91 Mbps in 81.70 usec
>> 33: 259 bytes 5 times --> 24.19 Mbps in 81.70 usec
>> 34: 381 bytes 5 times --> 33.18 Mbps in 87.60 usec
>> 35: 384 bytes 5 times --> 33.87 Mbps in 86.50 usec
>> 36: 387 bytes 5 times --> 34.41 Mbps in 85.80 usec
>> 37: 509 bytes 5 times --> 42.72 Mbps in 90.90 usec
>> 38: 512 bytes 5 times --> 42.60 Mbps in 91.70 usec
>> 39: 515 bytes 5 times --> 42.80 Mbps in 91.80 usec
>> 40: 765 bytes 5 times --> 56.45 Mbps in 103.40 usec
>> 41: 768 bytes 5 times --> 57.11 Mbps in 102.60 usec
>> 42: 771 bytes 5 times --> 57.22 Mbps in 102.80 usec
>> 43: 1021 bytes 5 times --> 70.69 Mbps in 110.20 usec
>> 44: 1024 bytes 5 times --> 70.70 Mbps in 110.50 usec
>> 45: 1027 bytes 5 times --> 69.59 Mbps in 112.60 usec
>> 46: 1533 bytes 5 times --> 73.56 Mbps in 159.00 usec
>> 47: 1536 bytes 5 times --> 72.92 Mbps in 160.70 usec
>> 48: 1539 bytes 5 times --> 73.80 Mbps in 159.10 usec
>> 49: 2045 bytes 5 times --> 93.59 Mbps in 166.70 usec
>> 50: 2048 bytes 5 times --> 94.07 Mbps in 166.10 usec
>> 51: 2051 bytes 5 times --> 92.92 Mbps in 168.40 usec
>> 52: 3069 bytes 5 times --> 123.43 Mbps in 189.70 usec
>> 53: 3072 bytes 5 times --> 123.68 Mbps in 189.50 usec
>
>You are right. Unfortunately, this does not work on i.MX28 (at least for me).
>Couldn't it be that the cpufreq is masking the problem on the i.MX6?
>
>Best regards,
>--
>Hector Palacios
>
I reproduced the issue on the imx28 evk platform; imx28 has no specific cpufreq driver.
In kernel 3.13, the ethernet driver is almost the same for imx28 and imx6 since they use
the same ENET IP, though the imx6 ENET IP has some evolutions.
I don't know the cause yet. When I am free, I will dig into it.
Thanks,
Andy
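While the cause is being dug out, comparing runs between the two platforms is easier if the anomalous sizes are extracted automatically instead of eyeballed. A short, illustrative filter over NPtcp output is sketched below; `find_drops` and the 5x threshold are choices made here, not part of NetPIPE, and the line format assumed is the one shown throughout this thread.

```shell
# find_drops [FILE...]
# Print NPtcp result lines whose throughput falls below 1/5 of the
# previous packet size's throughput. Assumes lines of the form:
#   " 46: 1533 bytes 5 times --> 0.13 Mbps in 88817.49 usec"
find_drops() {
    awk '/bytes .* times .*Mbps/ {
        mbps = $7          # throughput column in NPtcp output
        if (prev != "" && mbps * 5 < prev)
            print $2, "bytes:", mbps, "Mbps (was", prev, "Mbps)"
        prev = mbps
    }' "$@"
}
```

Feeding it the i.MX28 log above flags 1533 bytes, where throughput collapses from 23.43 Mbps to 0.13 Mbps.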
* Re: FEC performance degradation with certain packet sizes
2013-12-23 2:52 ` fugang.duan
@ 2014-01-21 17:49 ` Marek Vasut
0 siblings, 0 replies; 13+ messages in thread
From: Marek Vasut @ 2014-01-21 17:49 UTC (permalink / raw)
To: fugang.duan
Cc: Hector Palacios, netdev, Fabio.Estevam, shawn.guo, l.stach,
Frank.Li, bhutchings, davem
On Monday, December 23, 2013 at 03:52:20 AM, fugang.duan@freescale.com wrote:
> From: Hector Palacios <hector.palacios@digi.com>
> Sent: Friday, December 20, 2013 11:02 PM
>
> >To: Duan Fugang-B38611; Marek Vasut; netdev@vger.kernel.org
> >Cc: Estevam Fabio-R49496; shawn.guo@linaro.org; l.stach@pengutronix.de; Li
> >Frank-B20596; bhutchings@solarflare.com; davem@davemloft.net
> >Subject: Re: FEC performance degradation with certain packet sizes
> >
> >Dear Andy,
> >
> >On 12/20/2013 04:35 AM, fugang.duan@freescale.com wrote:
> >> [...]
> >>
> >> I can reproduce the issue on imx6q/dl platform with freescale internal
> >> kernel
> >
> >tree.
> >
> >> This issue must be related to cpufreq, when set scaling_governor to
> >
> >performance:
> >> echo performance >
> >> /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
> >>
> >> And then do NPtcp test, the result as below:
> >> 24: 99 bytes 5 times --> 9.89 Mbps in 76.40 usec
> >> 25: 125 bytes 5 times --> 12.10 Mbps in 78.80 usec
> >> 26: 128 bytes 5 times --> 12.27 Mbps in 79.60 usec
> >> 27: 131 bytes 5 times --> 12.80 Mbps in 78.10 usec
> >> 28: 189 bytes 5 times --> 18.00 Mbps in 80.10 usec
> >> 29: 192 bytes 5 times --> 18.31 Mbps in 80.00 usec
> >> 30: 195 bytes 5 times --> 18.41 Mbps in 80.80 usec
> >> 31: 253 bytes 5 times --> 23.34 Mbps in 82.70 usec
> >> 32: 256 bytes 5 times --> 23.91 Mbps in 81.70 usec
> >> 33: 259 bytes 5 times --> 24.19 Mbps in 81.70 usec
> >> 34: 381 bytes 5 times --> 33.18 Mbps in 87.60 usec
> >> 35: 384 bytes 5 times --> 33.87 Mbps in 86.50 usec
> >> 36: 387 bytes 5 times --> 34.41 Mbps in 85.80 usec
> >> 37: 509 bytes 5 times --> 42.72 Mbps in 90.90 usec
> >> 38: 512 bytes 5 times --> 42.60 Mbps in 91.70 usec
> >> 39: 515 bytes 5 times --> 42.80 Mbps in 91.80 usec
> >> 40: 765 bytes 5 times --> 56.45 Mbps in 103.40 usec
> >> 41: 768 bytes 5 times --> 57.11 Mbps in 102.60 usec
> >> 42: 771 bytes 5 times --> 57.22 Mbps in 102.80 usec
> >> 43: 1021 bytes 5 times --> 70.69 Mbps in 110.20 usec
> >> 44: 1024 bytes 5 times --> 70.70 Mbps in 110.50 usec
> >> 45: 1027 bytes 5 times --> 69.59 Mbps in 112.60 usec
> >> 46: 1533 bytes 5 times --> 73.56 Mbps in 159.00 usec
> >> 47: 1536 bytes 5 times --> 72.92 Mbps in 160.70 usec
> >> 48: 1539 bytes 5 times --> 73.80 Mbps in 159.10 usec
> >> 49: 2045 bytes 5 times --> 93.59 Mbps in 166.70 usec
> >> 50: 2048 bytes 5 times --> 94.07 Mbps in 166.10 usec
> >> 51: 2051 bytes 5 times --> 92.92 Mbps in 168.40 usec
> >> 52: 3069 bytes 5 times --> 123.43 Mbps in 189.70 usec
> >> 53: 3072 bytes 5 times --> 123.68 Mbps in 189.50 usec
> >
> >You are right. Unfortunately, this does not work on i.MX28 (at least for
> >me). Couldn't it be that the cpufreq is masking the problem on the i.MX6?
> >
> >Best regards,
> >--
> >Hector Palacios
>
> I reproduce the issue on imx28 evk platform, imx28 has no specific cpufreq
> driver. In kernel 3.13, the ethernet driver is almost the same for imx28
> and imx6 since they use the Same enet IP, but imx6 enet IP have some
> evolution.
>
> Now I don't know the cause. When I am free, I will dig out it.
Hi! Is there any progress on this issue? Did you manage to find anything out,
please?
Best regards,
Marek Vasut
end of thread, other threads:[~2014-01-21 17:49 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
2013-11-22 12:40 FEC performance degradation on iMX28 with forced link media Hector Palacios
2013-11-24 4:40 ` Marek Vasut
2013-11-25 8:56 ` Hector Palacios
2013-12-18 16:43 ` FEC performance degradation with certain packet sizes Hector Palacios
2013-12-18 17:38 ` Eric Dumazet
2013-12-19 2:44 ` fugang.duan
2013-12-19 23:04 ` Eric Dumazet
2013-12-20 0:18 ` Shawn Guo
2013-12-20 3:35 ` fugang.duan
2013-12-20 15:01 ` Hector Palacios
2013-12-23 1:08 ` fugang.duan
2013-12-23 2:52 ` fugang.duan
2014-01-21 17:49 ` Marek Vasut