netdev.vger.kernel.org archive mirror
From: "fugang.duan@freescale.com" <fugang.duan@freescale.com>
To: Hector Palacios <hector.palacios@digi.com>,
	Marek Vasut <marex@denx.de>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Cc: "Fabio.Estevam@freescale.com" <Fabio.Estevam@freescale.com>,
	"shawn.guo@linaro.org" <shawn.guo@linaro.org>,
	"l.stach@pengutronix.de" <l.stach@pengutronix.de>,
	"Frank.Li@freescale.com" <Frank.Li@freescale.com>,
	"bhutchings@solarflare.com" <bhutchings@solarflare.com>,
	"davem@davemloft.net" <davem@davemloft.net>
Subject: RE: FEC performance degradation with certain packet sizes
Date: Mon, 23 Dec 2013 01:08:05 +0000	[thread overview]
Message-ID: <4fcefc0e43a5448292b7b40f32c950de@BLUPR03MB373.namprd03.prod.outlook.com> (raw)
In-Reply-To: <52B45BD2.2060306@digi.com>

From: Hector Palacios <hector.palacios@digi.com>
Date: Friday, December 20, 2013 11:02 PM

>To: Duan Fugang-B38611; Marek Vasut; netdev@vger.kernel.org
>Cc: Estevam Fabio-R49496; shawn.guo@linaro.org; l.stach@pengutronix.de; Li
>Frank-B20596; bhutchings@solarflare.com; davem@davemloft.net
>Subject: Re: FEC performance degradation with certain packet sizes
>
>Dear Andy,
>
>On 12/20/2013 04:35 AM, fugang.duan@freescale.com wrote:
>> [...]
>>
>> I can reproduce the issue on the imx6q/dl platform with the Freescale
>> internal kernel tree.
>>
>> This issue must be related to cpufreq. When scaling_governor is set to
>> performance:
>>
>>   echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
>>
>> and then running the NPtcp test gives the results below:
>>
>>   24:      99 bytes      5 times -->      9.89 Mbps in      76.40 usec
>>   25:     125 bytes      5 times -->     12.10 Mbps in      78.80 usec
>>   26:     128 bytes      5 times -->     12.27 Mbps in      79.60 usec
>>   27:     131 bytes      5 times -->     12.80 Mbps in      78.10 usec
>>   28:     189 bytes      5 times -->     18.00 Mbps in      80.10 usec
>>   29:     192 bytes      5 times -->     18.31 Mbps in      80.00 usec
>>   30:     195 bytes      5 times -->     18.41 Mbps in      80.80 usec
>>   31:     253 bytes      5 times -->     23.34 Mbps in      82.70 usec
>>   32:     256 bytes      5 times -->     23.91 Mbps in      81.70 usec
>>   33:     259 bytes      5 times -->     24.19 Mbps in      81.70 usec
>>   34:     381 bytes      5 times -->     33.18 Mbps in      87.60 usec
>>   35:     384 bytes      5 times -->     33.87 Mbps in      86.50 usec
>>   36:     387 bytes      5 times -->     34.41 Mbps in      85.80 usec
>>   37:     509 bytes      5 times -->     42.72 Mbps in      90.90 usec
>>   38:     512 bytes      5 times -->     42.60 Mbps in      91.70 usec
>>   39:     515 bytes      5 times -->     42.80 Mbps in      91.80 usec
>>   40:     765 bytes      5 times -->     56.45 Mbps in     103.40 usec
>>   41:     768 bytes      5 times -->     57.11 Mbps in     102.60 usec
>>   42:     771 bytes      5 times -->     57.22 Mbps in     102.80 usec
>>   43:    1021 bytes      5 times -->     70.69 Mbps in     110.20 usec
>>   44:    1024 bytes      5 times -->     70.70 Mbps in     110.50 usec
>>   45:    1027 bytes      5 times -->     69.59 Mbps in     112.60 usec
>>   46:    1533 bytes      5 times -->     73.56 Mbps in     159.00 usec
>>   47:    1536 bytes      5 times -->     72.92 Mbps in     160.70 usec
>>   48:    1539 bytes      5 times -->     73.80 Mbps in     159.10 usec
>>   49:    2045 bytes      5 times -->     93.59 Mbps in     166.70 usec
>>   50:    2048 bytes      5 times -->     94.07 Mbps in     166.10 usec
>>   51:    2051 bytes      5 times -->     92.92 Mbps in     168.40 usec
>>   52:    3069 bytes      5 times -->    123.43 Mbps in     189.70 usec
>>   53:    3072 bytes      5 times -->    123.68 Mbps in     189.50 usec
>
>You are right. Unfortunately, this does not work on i.MX28 (at least for me).
>Couldn't it be that cpufreq is masking the problem on the i.MX6?
>
>Best regards,
>--
>Hector Palacios
>
I will test it on the i.MX28 platform and then analyze the result.

Thanks,
Andy
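
For reference, the test setup quoted above boils down to the following. This is
only a minimal sketch: it assumes the sysfs cpufreq interface is available and
that NetPIPE's NPtcp binary is installed on both boards; the hostname
"imx-peer" is a placeholder, not a name from the thread.

    # Pin every online CPU to the performance governor so frequency
    # scaling does not add latency between small-packet exchanges.
    for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$gov"
    done

    # Start the NetPIPE receiver on the remote board first:
    #     NPtcp
    # then run the transmitter on the board under test:
    NPtcp -h imx-peer

On a single-core part such as the i.MX28 the loop only touches cpu0, which
matches the command quoted above.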

Thread overview: 13+ messages
2013-11-22 12:40 FEC performance degradation on iMX28 with forced link media Hector Palacios
2013-11-24  4:40 ` Marek Vasut
2013-11-25  8:56   ` Hector Palacios
2013-12-18 16:43     ` FEC performance degradation with certain packet sizes Hector Palacios
2013-12-18 17:38       ` Eric Dumazet
2013-12-19  2:44         ` fugang.duan
2013-12-19 23:04           ` Eric Dumazet
2013-12-20  0:18             ` Shawn Guo
2013-12-20  3:35       ` fugang.duan
2013-12-20 15:01         ` Hector Palacios
2013-12-23  1:08           ` fugang.duan [this message]
2013-12-23  2:52           ` fugang.duan
2014-01-21 17:49             ` Marek Vasut
