From: "Ming Liu" <eemingliu@hotmail.com>
To: Rick.Moleres@xilinx.com
Cc: linuxppc-embedded@ozlabs.org
Subject: RE: Speed of plb_temac 3.00 on ML403
Date: Sun, 11 Feb 2007 13:37:11 +0000
Message-ID: <BAY110-F2669499909689C5DC2B93B2920@phx.gbl>
In-Reply-To: <20070209160123.BFF2BA30080@mail83-dub.bigfish.com>

Dear Rick,
First of all, thank you so much for your detailed reply. It is really 
helpful for solving my problems.

From the test summary you listed, I can see that we have similar systems, 
except that you are using MontaVista while I am using the mainline 
open-source kernel. I have also enabled all the features that could improve 
performance, including DRE, checksum offload, SGDMA, etc.

>- Are Checksum offload, SGDMA, and DRE enabled in the plb_temac?
Yes. All features are enabled.

>- Are you using the TCP_SENDFILE option of netperf?  Your UDP numbers are
>similar already to what we saw in Linux 2.6, and your TCP numbers are
>similar to what we saw *without* the sendfile option.

Probably not. I did not realize until now how important the sendfile 
option is for performance. At first I thought it would achieve the same 
performance as TCP_STREAM, until I read the article explaining how to use 
sendfile() to optimize data transfer. I will try it soon.
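
For what it's worth, my understanding is that this test is selected with 
netperf -t TCP_SENDFILE, with -F naming the file to transmit; please 
correct me if I have the invocation wrong. Below is a minimal sketch, as I 
understand it, of sendfile()-based zero-copy transmission on Linux; the 
file path, address, and port are placeholders only:

/* Minimal sketch of a zero-copy sender using sendfile(2) on Linux.
 * The file path, address, and port below are placeholders only. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = open("/tmp/testfile", O_RDONLY);        /* file to transmit */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) {
        perror("open/fstat");
        return 1;
    }

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5001);                     /* placeholder port */
    peer.sin_addr.s_addr = inet_addr("192.168.0.1"); /* placeholder host */
    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        return 1;
    }

    /* sendfile() moves data from the page cache straight to the socket,
     * skipping the user-space copy that read()/write() would require. */
    off_t off = 0;
    while (off < st.st_size) {
        ssize_t n = sendfile(sock, fd, &off, st.st_size - off);
        if (n <= 0) {
            perror("sendfile");
            break;
        }
    }

    close(sock);
    close(fd);
    return 0;
}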

After reading some articles on performance improvement, a few more 
questions have come up. I would appreciate it very much if you could 
clarify them for me.

>1. Results are from PLB_TEMAC, not GSRD.  You would likely see similar
>throughput rates with GSRD and Linux.

Problem 1: From the GSRD website, I know that it uses a different 
structure from PLB_TEMAC, where a multi-port memory controller and a DMA 
engine are added to relieve the CPU from moving data between memory and 
the TEMAC. So can GSRD achieve a higher performance than PLB_TEMAC, or 
only a similar performance, as you said above? If their performance is 
similar, what is the advantage of GSRD? Could you please explain some of 
the differences between these two structures?

>2. Assuming you have everything tuned for SGDMA based on previous emails,
>I would suspect the bottleneck is the 300MHz CPU *when* running Linux.  In
>Linux 2.6 we've not spent any time trying to tune the TCP/Ethernet
>parameters on the target board or the host, so there could be some
>optimizations that can be done at that level.  In the exact same system we
>can achieve over 800Mbps using the Treck TCP/IP stack, and with VxWorks it
>was over 600Mbps.
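
On the tuning point: one simple thing I will try on both the board and the 
host is enlarging the socket buffers before running the tests. Below is a 
minimal sketch, assuming Linux; the 256 KB size is only a guess rather 
than a tuned value, and I understand the kernel also caps these buffers 
through the net.core.rmem_max and net.core.wmem_max sysctls, which may 
need raising as well:

/* Minimal sketch of enlarging socket buffers before a throughput
 * test on Linux; 256 KB is only a guess, not a tuned value. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int size = 256 * 1024;

    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) < 0)
        perror("SO_SNDBUF");
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0)
        perror("SO_RCVBUF");

    /* ... connect and run the transfer as usual ... */
    close(sock);
    return 0;
}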

Problem 2: I read XAPP546, "High Performance TCP/IP on Xilinx FPGA Devices 
Using the Treck Embedded TCP/IP Stack." I notice that the Treck stack's 
features include zero-copy send and receive, jumbo-frame support, checksum 
offload, etc., which allow a much higher performance than a stack without 
them. However, the Xilinx TEMAC core V3.00 supports all of these features 
as well: zero-copy is available through sendfile() when using netperf, 
jumbo frames are supported, and checksum offload and DRE are handled by 
the hardware. So does this mean I can achieve a similarly high performance 
with PLB_TEMAC V3.00 and without the Treck TCP/IP stack? In other words, 
if all the features of the Treck stack are already covered by the 
PLB_TEMAC core, what is the benefit of the Treck stack?
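
Related to jumbo frames: to make sure they are actually in effect during 
my tests, I plan to raise the interface MTU, roughly as in the sketch 
below; the interface name "eth0" and the MTU value 8982 are placeholders, 
since the usable maximum depends on how the TEMAC is configured:

/* Minimal sketch of raising an interface MTU for jumbo frames on
 * Linux; "eth0" and 8982 are placeholders for my setup. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket for ioctl */
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder interface */
    ifr.ifr_mtu = 8982;                          /* placeholder jumbo MTU */
    if (ioctl(sock, SIOCSIFMTU, &ifr) < 0)
        perror("SIOCSIFMTU");

    close(sock);
    return 0;
}

(This needs root, and of course the host NIC and any switch in between 
must accept the same frame size, otherwise the larger frames will simply 
be dropped.)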

Maybe my questions are a bit naive, but I am really confused about these 
points, so I would be grateful if you could explain them. Thanks a lot.

BR
Ming

