* MPC8313e RDB rev A4 and rev C network throughput
From: RONETIX - Asen Dimov @ 2009-12-21 22:27 UTC
To: linuxppc-dev
Hello all,
I have run some network throughput tests on the MPC8313e RDB revA4 and
revC.
Some have mentioned that the CSB (Coherent System Bus) frequency or an
untuned TCP/IP stack could reduce network throughput.
** Test results
- on MPC8313e RDB revA4 with kernel 2.6.20 and U-Boot 1.1.6 built with
ltib-mpc8313erdb-20070824
    iperf -c 172.16.0.1 -l 2m -w 256k
  throughput: 510 Mbps
- on MPC8313e RDB revA4 with kernel 2.6.23 and U-Boot 1.3.0 built with
ltib-mpc8313erdb-20081222
    iperf -c 172.16.0.1 -l 2m -w 256k
  throughput: 510 Mbps
- on MPC8313e RDB revC with kernel 2.6.23 (same U-Boot, kernel and
rootfs as on revA4; only the dtb differs)
    iperf -c 172.16.0.1 -l 2m -w 256k
  throughput: 360 Mbps
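For scale, the revC result is roughly a 29% drop relative to revA4; the arithmetic, as a quick sketch:

```python
# Measured iperf throughputs from the runs above (Mbps).
rev_a4 = 510.0
rev_c = 360.0

# Relative drop of revC versus revA4, in percent.
drop_pct = (rev_a4 - rev_c) / rev_a4 * 100
print(round(drop_pct, 1))  # 29.4
```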
Has anyone made similar measurements? Any ideas why the MPC8313e RDB revC
gives worse throughput than revA4?
** Notes
*The PC (CPU: Intel(R) Core(TM)2 Duo E8400 @ 3.00 GHz;
RAM: 2x2 GB DDR2 @ 800 MHz;
NIC: R8168B PCI Express Gigabit Ethernet controller, driver
8.014.00-NAPI;
OS: Fedora release 10 (Cambridge), kernel
2.6.27.38-170.2.113.fc10.i686.PAE #1 SMP)
*Commands to set up the PC
ethtool -s eth0 autoneg off speed 1000 duplex full
ifconfig eth1 172.16.0.1/12 mtu 6000 txqueuelen 10000
echo 131071 > /proc/sys/net/core/rmem_max
echo 131071 > /proc/sys/net/core/wmem_max
echo "4096 1048576 8388608" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 1048576 8388608" > /proc/sys/net/ipv4/tcp_wmem
iperf -s -l 2m -w 70k
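As a sanity check on the window sizes above, the bandwidth-delay product for a gigabit link can be estimated. A minimal sketch, where the 0.5 ms RTT is an assumed figure for a direct cable, not a measured one:

```python
def bdp_bytes(rate_bits_per_s: float, rtt_s: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to keep the link full."""
    return int(rate_bits_per_s * rtt_s / 8)

# Assumed: 1 Gbit/s link, 0.5 ms round-trip time (hypothetical LAN value).
needed = bdp_bytes(1e9, 0.0005)
print(needed)                 # 62500
print(needed <= 256 * 1024)   # True: the 256 KB client window covers it
```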
*The MPC8313e RDB boards (CPU: 333 MHz; CSB: 166 MHz), revA4 and revC
(using the PHY, not the switch)
*Commands to set up the board before running iperf
ifconfig eth1 172.16.0.1/12 mtu 6000 txqueuelen 10000
#The PC LAN card is set to advertise 1000 Mbps only, so the
#board switches to 1000 Mbps too.
echo 131071 > /proc/sys/net/core/rmem_max
echo 131071 > /proc/sys/net/core/wmem_max
echo "4096 1048576 8388608" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 1048576 8388608" > /proc/sys/net/ipv4/tcp_wmem
Regards,
Asen
* RE: MPC8313e RDB rev A4 and rev C network throughput
From: Liu Dave-R63238 @ 2009-12-22 3:47 UTC
To: RONETIX - Asen Dimov, linuxppc-dev
One possible cause is that the two boards have different RCWs (Reset
Configuration Words), so the core/CSB frequencies differ.
* Re: MPC8313e RDB rev A4 and rev C network throughput
From: RONETIX - Asen Dimov @ 2009-12-22 9:19 UTC
To: linuxppc-dev; +Cc: Liu Dave-R63238
Hi,
I checked the RCWs (RCWLR and RCWHR) and they are the same (read from
NOR); the S3 and S4 switches are set the same too.
I will take a look at the priority levels on the CSB.
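If the RCWs match, the derived clocks can still be cross-checked by hand: on the MPC83xx family the CSB clock is the system input clock multiplied by the SPMF field of RCWLR. A minimal sketch of that relationship; the SPMF bit position and the example clock values here are assumptions to verify against the MPC8313E reference manual:

```python
def spmf_from_rcwlr(rcwlr: int) -> int:
    """Extract SPMF from RCWLR; the field position is an assumption, check the RM."""
    return (rcwlr >> 24) & 0xF

def csb_mhz(sys_clk_mhz: float, spmf: int) -> float:
    """CSB clock = system input clock x system PLL multiplication factor."""
    return sys_clk_mhz * spmf

# Hypothetical example: a 33.33 MHz input with SPMF = 5 lands near the
# 166 MHz CSB quoted for the RDB boards.
print(csb_mhz(33.33, spmf_from_rcwlr(0x05000000)))  # ~166.65 MHz
```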
Regards,
Asen
Liu Dave-R63238 wrote:
> One possible cause is that the two boards have different RCWs, so the
> core/CSB frequencies differ.