Linux-PCI Archive on lore.kernel.org
From: Lucas Stach <dev@lynxeye.de>
To: Vidya Sagar <vidyas@nvidia.com>,
	bhelgaas@google.com, lorenzo.pieralisi@arm.com,
	treding@nvidia.com, swarren@nvidia.com, mperttunen@nvidia.com,
	jonathanh@nvidia.com
Cc: linux-tegra@vger.kernel.org, linux-pci@vger.kernel.org,
	kthota@nvidia.com, mmaddireddy@nvidia.com
Subject: Re: [PATCH] PCI: tegra: Do not allocate MSI target memory
Date: Sat, 02 Mar 2019 15:22:38 +0100
Message-ID: <91fbda5d299b29d8516cfdd7c1f72a4b34797e82.camel@lynxeye.de>
In-Reply-To: <6ee0919a-504f-ae76-5995-a9eae505ef90@nvidia.com>

Am Samstag, den 02.03.2019, 08:20 +0530 schrieb Vidya Sagar:
> On 3/1/2019 8:26 PM, Lucas Stach wrote:
> > Am Freitag, den 01.03.2019, 08:45 +0530 schrieb Vidya Sagar:
> > > On 3/1/2019 12:32 AM, Lucas Stach wrote:
> > > > Am Donnerstag, den 28.02.2019, 20:30 +0530 schrieb Vidya Sagar:
> > > > > The PCI host bridge found on Tegra SoCs doesn't require the MSI target
> > > > > address to be backed by physical system memory. Writes are intercepted
> > > > > within the controller and never make it to the memory pointed to.
> > > > > 
> > > > > Since no actual system memory is required, remove the allocation of a
> > > > > single page and hardcode the MSI target address with a special address
> > > > > on a per-SoC basis. Ideally this would be an address to an MMIO memory
> > > > > region (such as where the controller's registers are located). However,
> > > > > those addresses don't work reliably across all Tegra generations. The
> > > > > only set of addresses that work consistently are those that point to
> > > > > external memory.
> > > > > 
> > > > > This is not ideal, since those addresses could technically be used for
> > > > > DMA and hence be confusing. However, the first page of external memory
> > > > > is unlikely to be used and special enough to avoid confusion.
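
(For illustration only: a minimal sketch of the change the patch description
proposes, i.e. dropping the page allocation and programming a fixed per-SoC
MSI target address. The types, helpers and register offsets below are
placeholders in the general shape of the Tegra PCIe driver, not code taken
from pci-tegra.c or from the patch itself.)

#include <linux/io.h>
#include <linux/kernel.h>

/* Placeholder register offsets for the controller's MSI target BAR. */
#define AFI_MSI_BAR_LO	0x5c
#define AFI_MSI_BAR_HI	0x60

struct tegra_pcie_sketch {
	void __iomem *afi;	/* controller (AFI) register window */
	u64 msi_target_addr;	/* fixed, per-SoC MSI target address */
};

static void tegra_pcie_enable_msi_sketch(struct tegra_pcie_sketch *pcie)
{
	/*
	 * Instead of allocating a page and using its physical address, a
	 * per-SoC constant is programmed. MSI writes are decoded inside the
	 * controller and never reach memory, so no backing page is needed.
	 */
	writel(lower_32_bits(pcie->msi_target_addr),
	       pcie->afi + AFI_MSI_BAR_LO);
	writel(upper_32_bits(pcie->msi_target_addr),
	       pcie->afi + AFI_MSI_BAR_HI);
}
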
> > > > So you are trading a slight memory waste of a single page against a
> > > > sporadic (and probably hard to debug) DMA failure if any device happens
> > > > to initiate DMA to the first page of physical memory? That does not
> > > > sound like a good deal...
> > > > 
> > > > Also why would the first page of external memory be unlikely to be
> > > > used?
> > > > 
> > > > Regards,
> > > > Lucas
> > > We are not wasting a single page of memory here, and if any device's DMA
> > > tries to access that address, the access will still go through. It's just
> > > that we are using the same address as the MSI target (note that MSI writes
> > > don't go beyond the PCIe IP, as they are decoded at the PCIe IP level
> > > itself and only an interrupt goes to the CPU), which might be a bit
> > > confusing since the same address serves both as normal memory and as the
> > > MSI target address. Since there can never be any issue with this, would
> > > you suggest removing the last paragraph from the commit description?
> > How does the core distinguish between a normal DMA memory write and an
> > MSI? If I remember the PCIe spec correctly, there aren't any
> > differences between the two besides the target address.
> > 
> > So if you now set a non-reserved region of memory to decode as an MSI at
> > the PCIe host controller level, wouldn't this lead to normal DMA
> > transactions to this address being wrongfully turned into an MSI and
> > the write not reaching the targeted location?
> > 
> > Regards,
> > Lucas
> 
> You are correct that the core cannot distinguish between a normal DMA memory
> write and an MSI. In that case, the only way I see is to allocate memory
> using dma_alloc_coherent() and use the IOVA as the MSI target address. That
> way, a page gets reserved (and in a way wasted, as the MSI writes don't
> really make it to RAM) and there won't be any address overlaps with normal
> DMA writes. I'll push a patch for it.
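
(Purely as an illustration of the dma_alloc_coherent() based setup suggested
above; the structure and field names are placeholders, not the actual
pci-tegra.c code.)

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

struct tegra_msi_sketch {
	void *virt;		/* CPU address of the reserved page (never written) */
	dma_addr_t target;	/* IOVA to program as the MSI target address */
};

static int tegra_pcie_msi_alloc_sketch(struct device *dev,
				       struct tegra_msi_sketch *msi)
{
	/*
	 * Reserve one page through the DMA API. The page itself is never
	 * written, since MSI writes are decoded inside the controller, but
	 * holding the reservation guarantees this IOVA can never be handed
	 * out for any other DMA mapping.
	 */
	msi->virt = dma_alloc_coherent(dev, PAGE_SIZE, &msi->target,
				       GFP_KERNEL);
	if (!msi->virt)
		return -ENOMEM;

	/* ... program msi->target into the controller's MSI target BAR ... */

	return 0;
}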

At that point it's no longer different from the current code, which
simply reserves a single page of memory and uses its address as the MSI
target address.

So in conclusion this change should just be dropped.

Regards,
Lucas



Thread overview: 8+ messages
2019-02-28 15:00 Vidya Sagar
2019-02-28 19:02 ` Lucas Stach
2019-03-01  3:15   ` Vidya Sagar
2019-03-01 14:56     ` Lucas Stach
2019-03-02  2:50       ` Vidya Sagar
2019-03-02 14:22         ` Lucas Stach [this message]
2019-03-04  5:38           ` Vidya Sagar
2019-03-04  8:37             ` Thierry Reding
