Linux-PCI Archive on lore.kernel.org
From: Marc Zyngier <maz@kernel.org>
To: Alan Mikhak <alan.mikhak@sifive.com>
Cc: Ard Biesheuvel <ardb@kernel.org>,
	Joao Pinto <Joao.Pinto@synopsys.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Graeme Gregory <graeme.gregory@linaro.org>,
	Jingoo Han <jingoohan1@gmail.com>,
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	linux-pci <linux-pci@vger.kernel.org>,
	Leif Lindholm <leif@nuviainc.com>
Subject: Re: [PATCH 2/3] pci: designware: add separate driver for the MSI part of the RC
Date: Wed, 19 Feb 2020 08:11:48 +0000
Message-ID: <20200219081148.5307e30a@why> (raw)
In-Reply-To: <CABEDWGxDz6njYYQN879XnGmY2vxOKvbygeg=9nBK54U6WP8_ug@mail.gmail.com>

On Tue, 18 Feb 2020 11:09:10 -0800
Alan Mikhak <alan.mikhak@sifive.com> wrote:

> On Sat, Feb 15, 2020 at 2:36 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Sat, 15 Feb 2020 09:35:56 +0000,
> > Ard Biesheuvel <ardb@kernel.org> wrote:  
> > >
> > > (updated some email addresses in cc, including my own)
> > >
> > > On Sat, 15 Feb 2020 at 01:54, Alan Mikhak <alan.mikhak@sifive.com> wrote:  
> > > >
> > > > Hi,
> > > >
> > > > What is the right approach for adding MSI support for the generic
> > > > Linux PCI host driver?
> > > >
> > > > I came across this patch which seems to address a similar
> > > > situation. It seems to have been dropped in v3 of the patchset
> > > > with the explanation "drop MSI patch [for now], since it
> > > > turns out we may not need it".
> > > >
> > > > [PATCH 2/3] pci: designware: add separate driver for the MSI part of the RC
> > > > https://lore.kernel.org/linux-pci/20170821192907.8695-3-ard.biesheuvel@linaro.org/
> > > >
> > > > [PATCH v2 2/3] pci: designware: add separate driver for the MSI part of the RC
> > > > https://lore.kernel.org/linux-pci/20170824184321.19432-3-ard.biesheuvel@linaro.org/
> > > >
> > > > [PATCH v3 0/2] pci: add support for firmware initialized designware RCs
> > > > https://lore.kernel.org/linux-pci/20170828180437.2646-1-ard.biesheuvel@linaro.org/
> > > >  
> > >
> > > For the platform in question, it turned out that we could use the MSI
> > > block of the core's GIC interrupt controller directly, which is a much
> > > better solution.
> > >
> > > In general, turning MSIs into wired interrupts is not a great idea,
> > > since the whole point of MSIs is that they are sufficiently similar to
> > > other DMA transactions to ensure that the interrupt won't arrive
> > > before the related memory transactions have completed.
> > >
> > > If your interrupt controller does not have this capability, then yes,
> > > you are stuck with this little widget that decodes an inbound write to
> > > a magic address and turns it into a wired interrupt.  
> >
> > I can only second this. It is much better to have a generic block
> > implementing MSI *in a non multiplexed way*, for multiple reasons:
> >
> > - the interrupt vs DMA race that Ard mentions above,
> >
> > - MSIs are very often used to describe the state of per-CPU queues. If
> >   you multiplex MSIs behind a single multiplexing interrupt, it is
> >   always the same CPU that gets interrupted, and you don't benefit
> >   from having multiple queues at all.
> >
> > Even if you have to implement the support as a bunch of wired
> > interrupts, there is still a lot of value in keeping a 1:1 mapping
> > between MSIs and wires.
> >
> > Thanks,
> >
> >         M.
> >
> > --
> > Jazz is not dead, it just smells funny.  
> 
> Ard and Marc, thanks for your comments. I will take a look at the code
> related to the MSI block of the GIC interrupt controller for reference.

GICv2m or GICv3 MBI are probably your best bets. Don't get anywhere near
the GICv3 ITS, there lies madness. ;-)

> I am looking into supporting MSI in a non-multiplexed way when using
> ECAM and the generic Linux PCI host driver when Linux is booted
> from U-Boot.

I don't really get the relationship between ECAM and MSIs. They should
be fairly independent, unless it has to do with allowing the MSI
doorbell to be reached from the PCIe endpoint.

> Specifically, what is the right approach for sharing the physical
> address of the MSI data block used in Linux with U-Boot?
> 
> I imagine the Linux driver for MSI interrupt controller allocates
> some DMA-able memory for use as the MSI data block. The
> U-Boot PCIe driver would program an inbound ATU region to
> map mem writes from endpoint devices to that MSI data block
> before booting Linux.

The "MSI block" is really a piece of HW, not memory. So whatever you
have to program in the PCIe RC must allow an endpoint to reach that
device with a 32-bit write.

	M.
-- 
Jazz is not dead. It just smells funny...

Thread overview: 20+ messages
2017-08-21 19:29 [PATCH 0/3] pci: add support for firmware initialized designware RCs Ard Biesheuvel
2017-08-21 19:29 ` [PATCH 2/3] pci: designware: add separate driver for the MSI part of the RC Ard Biesheuvel
2017-08-24 16:42   ` Bjorn Helgaas
2017-08-24 16:43     ` Ard Biesheuvel
2017-08-24 16:48   ` Robin Murphy
2017-08-24 16:50     ` Ard Biesheuvel
2020-02-15  0:54   ` Alan Mikhak
2020-02-15  9:35     ` Ard Biesheuvel
2020-02-15 10:36       ` Marc Zyngier
2020-02-18 19:09         ` Alan Mikhak
2020-02-19  8:11           ` Marc Zyngier [this message]
2020-02-19  8:17             ` Ard Biesheuvel
2020-02-19 20:24               ` Alan Mikhak
2020-02-19 21:06                 ` Ard Biesheuvel
2020-02-19 21:35                   ` Alan Mikhak
2017-08-21 19:29 ` [PATCH 3/3] dt-bindings: designware: add binding for Designware PCIe in ECAM mode Ard Biesheuvel
2017-08-24 16:46 ` [PATCH 0/3] pci: add support for firmware initialized designware RCs Bjorn Helgaas
2017-08-31 19:21   ` Ard Biesheuvel
2017-08-21 19:29 [PATCH 1/3] pci: designware: add driver for DWC controller in ECAM shift mode Ard Biesheuvel
2017-08-24 16:24 ` kbuild test robot
2017-08-24 16:25 ` kbuild test robot
