From: Scott Wood <scottwood@freescale.com>
To: David Laight <David.Laight@ACULAB.COM>
Cc: linuxppc-dev@ozlabs.org, Felix Radensky <felix@embedded-sol.com>,
	"Ira W. Snyder" <iws@ovro.caltech.edu>
Subject: Re: FSL DMA engine transfer to PCI memory
Date: Tue, 25 Jan 2011 13:57:06 -0600	[thread overview]
Message-ID: <20110125135706.45f351a2@udp111988uds.am.freescale.net> (raw)
In-Reply-To: <AE90C24D6B3A694183C094C60CF0A2F6D8AC29@saturn3.aculab.com>

On Tue, 25 Jan 2011 16:34:49 +0000
David Laight <David.Laight@ACULAB.COM> wrote:

> > > >>>> custom board based on P2020 running linux-2.6.35. The PCI
> > > >>>> device is an Altera FPGA, connected directly to the SoC PCI-E
> > > >>>> controller.
> 
> > This sounds like your FPGA doesn't handle burst mode accesses
> > correctly.  A logic analyzer will help you prove it.
> 
> He is doing PCIe, not PCI.
> A PCIe transfer is an HDLC packet pair, one containing the
> request, the other the response.
> In order to get any significant throughput, the HDLC packet(s)
> have to contain all the data (e.g. 128 bytes).
> On the ppc we used, that means you have to use the DMA
> controller inside the PCIe interface block.

What was the ppc you used?

On 85xx/QorIQ-family chips such as P2020, there is no DMA controller
inside the PCIe controller itself (or are you talking about bus
mastering by the PCIe device[1]?  "interface" is a bit ambiguous),
though it was considered part of the PCI controller on 82xx.

The DMA engine and PCIe are both on OCeaN, so the traffic does not need
to pass through the e500 Coherency Module.  My understanding -- for
what it's worth, coming from a software person :-) -- is that you should
be able to get large transfer chunks using the DMA engine.
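
FWIW, the way I'd expect this to look from the software side is the
generic dmaengine API, which fsldma plugs into.  Completely untested
sketch, with most error handling trimmed, assuming the FPGA's memory
sits in BAR0 of "pdev" and "buf" is a kmalloc'd source buffer:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int fpga_dma_push(struct pci_dev *pdev, void *buf, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cap_mask_t mask;
	dma_cookie_t cookie;
	dma_addr_t src, dst;
	int ret = 0;

	/* Grab any channel that can do memcpy (fsldma, here). */
	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	/* Map the RAM side for the DMA device. */
	src = dma_map_single(chan->device->dev, buf, len, DMA_TO_DEVICE);

	/*
	 * On 85xx the outbound PCIe window sits in the same physical
	 * address map the DMA engine sees, so the BAR's resource start
	 * should be usable directly as the destination.
	 */
	dst = pci_resource_start(pdev, 0);

	tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
						  DMA_CTRL_ACK);
	if (!tx) {
		ret = -ENOMEM;
		goto out;
	}

	cookie = tx->tx_submit(tx);
	dma_async_issue_pending(chan);

	/* Poll for completion; a real driver would use a callback. */
	while (dma_async_is_tx_complete(chan, cookie, NULL, NULL) ==
	       DMA_IN_PROGRESS)
		cpu_relax();

out:
	dma_unmap_single(chan->device->dev, src, len, DMA_TO_DEVICE);
	dma_release_channel(chan);
	return ret;
}

Whether that actually produces large payloads on the wire is the part
to measure, per below.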

I suggest getting things working, and then seeing whether the
performance is acceptable.

> The generic DMA controller can't even generate 64-bit
> cycles into the ppc's PCIe engine.

Could you elaborate?

-Scott

[1] To the original poster, is there any reason you're not doing bus
mastering from the PCIe device, assuming you control the content of
the FPGA?
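
If you do control it, the host side of bus mastering is small: hand
the device a coherent buffer and poke its control registers.  The
register offsets below are invented for illustration and depend
entirely on your FPGA design:

#define FPGA_REG_DMA_ADDR	0x00	/* bus address of host buffer */
#define FPGA_REG_DMA_LEN	0x04	/* transfer length in bytes */
#define FPGA_REG_DMA_START	0x08	/* write 1 to start */

static int fpga_master_setup(struct pci_dev *pdev, size_t len)
{
	void __iomem *regs;
	dma_addr_t bus_addr;
	void *vaddr;
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;
	pci_set_master(pdev);		/* let the FPGA master the bus */

	regs = pci_ioremap_bar(pdev, 0);
	if (!regs)
		return -ENOMEM;

	/* Buffer the FPGA's master engine will write into. */
	vaddr = dma_alloc_coherent(&pdev->dev, len, &bus_addr, GFP_KERNEL);
	if (!vaddr) {
		iounmap(regs);
		return -ENOMEM;
	}

	/* A real driver would keep regs/vaddr in its private state. */
	iowrite32(bus_addr, regs + FPGA_REG_DMA_ADDR);
	iowrite32(len, regs + FPGA_REG_DMA_LEN);
	iowrite32(1, regs + FPGA_REG_DMA_START);
	return 0;
}

The device-side engine can then issue full-size write TLPs against
the buffer, which is usually how you get PCIe throughput.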
