linux-pci.vger.kernel.org archive mirror
From: Vidya Sagar <vidyas@nvidia.com>
To: Rob Herring <robh@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Andrew Murray <amurray@thegoodpenguin.co.uk>,
	Jingoo Han <jingoohan1@gmail.com>,
	Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
	Krishna Thota <kthota@nvidia.com>,
	"Manikanta Maddireddy" <mmaddireddy@nvidia.com>,
	Thierry Reding <treding@nvidia.com>,
	Jonathan Hunter <jonathanh@nvidia.com>,
	PCI <linux-pci@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	linux-tegra <linux-tegra@vger.kernel.org>
Subject: Re: Device driver location for the PCIe root port's DMA engine
Date: Wed, 14 Apr 2021 00:14:20 +0530	[thread overview]
Message-ID: <ba59db08-3c1c-506b-366c-3fe94ab97dfd@nvidia.com> (raw)
In-Reply-To: <CAL_JsqLJn+6affnTbF7qS3QUe=trACwKm7rPfJNLL0fF2aMydg@mail.gmail.com>



On 4/13/2021 11:43 PM, Rob Herring wrote:
> 
> 
> On Mon, Apr 12, 2021 at 12:01 PM Vidya Sagar <vidyas@nvidia.com> wrote:
>>
>> Hi
>> I'm starting this mail to seek advice on the best approach to be taken
>> to add support for the driver of the PCIe root port's DMA engine.
>> To give some background, Tegra194's PCIe IPs are dual-mode, i.e. they
>> work either in root port (RP) mode or in endpoint (EP) mode based on
>> boot-time configuration.
>> Since the PCIe hardware IP as such is the same for both modes, the DMA
>> engine sub-system of the IP is also available in both modes of
>> operation.
>> Typically, the DMA engine is seen only in the endpoint mode, and that
>> DMA engine’s configuration registers are made available to the host
>> through one of its BARs.
>> In our situation, where the DMA engine is part of the root port, it
>> isn't a typical general-purpose DMA engine: it can't have both the
>> source and the destination address targeting remote memory.
>> For a write operation, the RP's DMA engine always fetches data from
>> local memory (the source) and writes it to remote memory over the PCIe
>> link (the destination being an endpoint's BAR), whereas for a read
>> operation, it always fetches data from remote memory over the PCIe
>> link (the source) and writes it to local memory.
>>
>> I see that there are at least two ways we can have a driver for this DMA
>> engine.
>> a) DMA engine driver as one of the port service drivers
>>          Since the DMA engine is a part of the root port hardware itself
>> (although it is not part of the standard capabilities of the root port),
>> it is one of the options to have the driver for the DMA engine go as one
>> of the port service drivers (along with AER, PME, hot-plug, etc...).
>> Based on Vendor-ID and Device-ID matching at runtime, it either gets
>> bound/enabled (as in the case of Tegra194) or it doesn't.
>> b) DMA engine driver as a platform driver
>>          The DMA engine hardware can be described as a sub-node under the PCIe
>> controller's node in the device tree and a separate platform driver can
>> be written to work with it.
> 
> DT expects PCI bridge child nodes to be a PCI device. We've already
> broken that with the interrupt controller child nodes, but I don't
> really want to add more.
Understood. Is there any other way of specifying the DMA functionality 
other than as a child node, so that it is in line with the DT 
framework's expectations?
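[For comparison, the option (b) shape that Rob is pushing back on would look something like the fragment below. The node and property names are purely illustrative, not a proposed binding.]

```dts
pcie@14180000 {
	compatible = "nvidia,tegra194-pcie";
	/* ... usual root-port properties ... */

	/* Hypothetical sub-node describing the RP's DMA engine; this is
	 * exactly the kind of non-PCI-device child node DT discourages. */
	dma-engine {
		compatible = "nvidia,tegra194-pcie-dma";
	};
};
```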

> 
>> I’m inclined to have the DMA engine driver as a port service driver as
>> it makes it cleaner and also in line with the design philosophy (the way
>> I understood it) of the port service drivers.
>> Please let me know your thoughts on this.
> 
> What is the actual usecase and benefit for using the DMA engine with
> the RP? The only one I've come up with is the hardware designers think
> having DMA is better than not having DMA so they include that option
> on the DWC controller.
In a Tegra194-to-Tegra194 configuration (one Tegra194 as RP and the 
other as EP), better performance is expected when the DMA engines on 
both sides are used to push (write) the data across, instead of using 
only the EP's DMA engine for both push (write) and pull (read).

> 
> Rob
> 

