From: Vidya Sagar <vidyas@nvidia.com>
To: Logan Gunthorpe <logang@deltatee.com>,
	Bjorn Helgaas <helgaas@kernel.org>,
	Tim Harvey <tharvey@gateworks.com>,
	Kishon Vijay Abraham I <kishon@ti.com>,
	Jon Mason <jdmason@kudzu.us>
Cc: <linux-pci@vger.kernel.org>
Subject: Re: PCIe endpoint network driver?
Date: Fri, 28 May 2021 12:54:48 +0530
Message-ID: <7397b5cb-d955-aa4d-6784-ae95cc26c6fd@nvidia.com>
In-Reply-To: <4268cd10-37b7-443f-2c77-d5421c2574e0@deltatee.com>

I'm not sure whether it is okay to point at non-upstream kernel code here,
but Tegra194 has this implemented (though not in an optimized way) for a
Tegra194(RP) <-> Tegra194(EP) configuration.
It uses Tegra194's proprietary syncpoint shim hardware to generate
interrupts in the RP-to-EP direction (FWIW, regular MSIs are used in the
EP-to-RP direction).
The syncpoint shim is mapped into a portion of a BAR during
initialization, and when the RP writes to that BAR region, the hardware
raises an interrupt to the EP's local CPU.
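
To make the doorbell mechanism concrete, here is a rough RP-side sketch of
what such a write amounts to; the BAR index, offset, and helper names are
made up for illustration and are not taken from the actual tegra_vnet code:

#include <linux/io.h>
#include <linux/pci.h>

#define TVNET_DB_BAR	0	/* hypothetical BAR backing the syncpoint shim */
#define TVNET_DB_OFFSET	0x0	/* hypothetical doorbell offset within that BAR */

static void __iomem *tvnet_db_base;

/* Map the doorbell BAR once, e.g. from the RP-side driver's probe(). */
static int tvnet_map_doorbell(struct pci_dev *pdev)
{
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	tvnet_db_base = pcim_iomap(pdev, TVNET_DB_BAR, 0);
	if (!tvnet_db_base)
		return -ENOMEM;

	return 0;
}

/* A posted MMIO write into the shim-backed BAR raises the IRQ on the EP's CPU. */
static void tvnet_ring_doorbell(void)
{
	writel(1, tvnet_db_base + TVNET_DB_OFFSET);
}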

You can take a look at the code here:
EPF driver (on EP system):
https://nv-tegra.nvidia.com/gitweb/?p=linux-nvidia.git;a=blob;f=drivers/pci/endpoint/functions/pci-epf-tegra-vnet.c;h=f55790f8c569368ad6012aeb9726b9a6c08c5304;hb=6dc57fec39c444e4c4448be61ddd19c55693daf1

EP's device driver (on RP system):
https://nv-tegra.nvidia.com/gitweb/?p=linux-nvidia.git;a=blob;f=drivers/net/ethernet/nvidia/pcie/tegra_vnet.c;h=af74baae1452fea25c3c5292a36a4cd1d8f22e50;hb=6dc57fec39c444e4c4448be61ddd19c55693daf1
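
For the EP-to-RP direction, the EPF driver simply raises a regular MSI
through the endpoint framework, roughly like this (written against the epc
API as it looked around v5.12; the helper name and vector number are
illustrative only):

#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

/* Illustrative helper: notify the host (RP) by raising MSI vector 1. */
static void tvnet_ep_notify_host(struct pci_epf *epf)
{
	pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
}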

As I mentioned, this is not an optimized implementation and we have yet to
upstream it (so it may not be of upstream code quality).
We see around 5 Gbps of throughput with it.

- Vidya Sagar


On 5/26/2021 10:31 PM, Logan Gunthorpe wrote:
> 
> 
> On 2021-05-26 10:28 a.m., Bjorn Helgaas wrote:
>> [+to Kishon, Jon, Logan, who might have more insight]
>>
>> On Wed, May 26, 2021 at 08:44:59AM -0700, Tim Harvey wrote:
>>> Greetings,
>>>
>>> Is there an existing driver to implement a network interface
>>> controller via a PCIe endpoint? I'm envisioning a system with a PCIe
>>> master and multiple endpoints that all have a network interface to
>>> communicate with each other.
> 
> That sounds awfully similar to NTB. See ntb_netdev and ntb_transport.
> 
> Though IMO NTB has proven to be a poor solution to the problem. Modern
> network cards with RDMA are pretty much superior in every way.
> 
> Logan
> 


Thread overview: 4+ messages
2021-05-26 15:44 PCIe endpoint network driver? Tim Harvey
2021-05-26 16:28 ` Bjorn Helgaas
2021-05-26 17:01   ` Logan Gunthorpe
2021-05-28  7:24     ` Vidya Sagar [this message]
