From: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
To: Chen Baozi <chenbaozi@phytium.com.cn>
Cc: Marc Zyngier <maz@kernel.org>, Guohanjun <guohanjun@huawei.com>,
	linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
	linux-pci@vger.kernel.org, Bjorn Helgaas <bhelgaas@google.com>,
	Ard Biesheuvel <ardb@kernel.org>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC PATCH V2] acpi/irq: Add stacked IRQ domain support to PCI interrupt link
Date: Thu, 19 Nov 2020 10:37:20 +0000	[thread overview]
Message-ID: <20201119103720.GA19138@e121166-lin.cambridge.arm.com> (raw)
In-Reply-To: <17EDC3AB-FF11-4624-912E-95832DB20804@phytium.com.cn>

On Wed, Nov 18, 2020 at 10:05:29PM +0800, Chen Baozi wrote:
> Hi Lorenzo,
> 
> > On Nov 18, 2020, at 5:51 PM, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
> > 
> > On Tue, Nov 17, 2020 at 09:42:14PM +0800, Chen Baozi wrote:
> >> Some PCIe designs require software to perform extra acknowledgements
> >> for legacy INTx interrupts. If the driver is written only for device
> >> tree, things are simple: a new driver can be written under
> >> drivers/pci/controller/, with the PCIe host's DT node written like:
> >> 
> >>  pcie {
> >>    ...
> >>    interrupt-map = <0 0 0  1  &pcie_intc 0>,
> >>                    <0 0 0  2  &pcie_intc 1>,
> >>                    <0 0 0  3  &pcie_intc 2>,
> >>                    <0 0 0  4  &pcie_intc 3>;
> >> 
> >>    pcie_intc: legacy-interrupt-controller {
> >>      interrupt-controller;
> >>      #interrupt-cells = <1>;
> >>      interrupt-parent = <&gic>;
> >>      interrupts = <0 226 4>;
> >>    };
> >>  };
> >> 
> >> Similar designs can currently be found in the Aardvark, MediaTek Gen2
> >> and Socionext UniPhier PCIe controllers. Essentially, those designs
> >> are supported by inserting an extra interrupt controller between the
> >> PCIe host and the GIC and parsing the topology in a DT-based PCI
> >> controller driver. As we turn to ACPI, all PCIe hosts are described
> >> with the same "PNP0A03" ID and share drivers/acpi/pci_root.c, so it
> >> becomes a problem to make this kind of PCI INTx work under ACPI.
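
For concreteness, here is a minimal sketch of the extra interrupt
controller described above, as a DT-based driver would implement it:
the single GIC SPI (the interrupts = <0 226 4> above) is demuxed into
the four INTx lines by a chained handler and a small linear IRQ domain.
All foo_* names and register offsets are hypothetical; this is not code
from any of the drivers named above, only an illustration of the
pattern.

  /* Hypothetical chained INTx demux, kernel-style C (illustrative only). */
  #include <linux/io.h>
  #include <linux/irq.h>
  #include <linux/irqchip/chained_irq.h>
  #include <linux/irqdomain.h>
  #include <linux/of.h>
  #include <linux/pci.h>

  #define FOO_INTX_STATUS 0x0 /* hypothetical per-INTx pending bits */

  struct foo_pcie {
      void __iomem *base;
      struct irq_domain *intx_domain;
  };

  /* Runs for the single GIC SPI; fans out to whichever INTx is pending. */
  static void foo_pcie_intx_handler(struct irq_desc *desc)
  {
      struct foo_pcie *pcie = irq_desc_get_handler_data(desc);
      struct irq_chip *chip = irq_desc_get_chip(desc);
      unsigned long status;
      int bit;

      chained_irq_enter(chip, desc);

      status = readl(pcie->base + FOO_INTX_STATUS) & GENMASK(3, 0);
      for_each_set_bit(bit, &status, PCI_NUM_INTX)
          generic_handle_irq(irq_find_mapping(pcie->intx_domain, bit));

      chained_irq_exit(chip, desc);
  }

  static int foo_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
                               irq_hw_number_t hwirq)
  {
      irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
      irq_set_chip_data(irq, domain->host_data);
      return 0;
  }

  static const struct irq_domain_ops foo_intx_domain_ops = {
      .map   = foo_pcie_intx_map,
      .xlate = irq_domain_xlate_onecell,
  };

  /* Probe-time wiring (error handling trimmed for brevity). */
  static int foo_pcie_init_intx(struct foo_pcie *pcie,
                                struct device_node *intc_node,
                                int parent_irq)
  {
      pcie->intx_domain = irq_domain_add_linear(intc_node, PCI_NUM_INTX,
                                                &foo_intx_domain_ops, pcie);
      if (!pcie->intx_domain)
          return -ENOMEM;

      irq_set_chained_handler_and_data(parent_irq,
                                       foo_pcie_intx_handler, pcie);
      return 0;
  }

In the DT world this wiring lives entirely in the controller driver,
which is exactly what has no per-controller home in the shared PNP0A03
path under ACPI.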
> > 
> > In this respect, this patch is a minor detail. The major detail is how
> > those host controllers are going to probe and initialize with ACPI, and
> > I am against merging this patch standalone, with no user, before
> > understanding what you really want to do with those host controller
> > drivers in the ACPI world.
> > 
> > Side note, there is ongoing work for a generic interrupt MUX:
> > 
> > https://bugzilla.tianocore.org/show_bug.cgi?id=2995
> > 
> > If we ever come to support those MUXes with ACPI, that must be the
> > starting point; the binding above can be your first "user".
> > 
> > I still have reservations about bootstrapping, with ACPI, the host
> > controllers you mentioned on platforms with no firmware support
> > whatsoever for PCI initialization (e.g. address decoders, link
> > bring-up, etc.; the ACPI host bridge model relies on FW to carry
> > out that initialization). I would like to see the whole picture
> > first.
> 
> Frankly, I’m also waiting for my first “user” to be announced, so
> that I can make the whole picture clearer. That is why I marked this
> patch as an RFC.

AFAIK none of the host controllers requiring this IRQ muxing ships
with a firmware stack that can bootstrap them with ACPI.

That's why I think this patch is a thought exercise; there is not much
to talk about.

> Yes. I admit it is a little weird to add another interrupt controller
> between the GIC and the INTx device. But if it is not only about
> initialization but also about hooking into the INTx processing (e.g.,
> introducing an extra ack operation...), it seems we cannot rely on FW
> alone. I later looked for a FW solution that would avoid introducing a
> new driver, but failed... I’m happy to be corrected if there is a pure
> FW solution.
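
In driver terms, the extra ack Chen describes would surface as an
.irq_ack callback on the INTx irq_chip; handle_level_irq invokes it
during its mask-and-ack step. A minimal sketch, building on the
foo_pcie struct and includes from the earlier sketch: FOO_INTX_MASK
and the write-one-to-clear semantics of FOO_INTX_STATUS are
assumptions, not taken from any real device.

  #define FOO_INTX_MASK 0x4 /* hypothetical per-INTx mask bits */

  static void foo_pcie_intx_ack(struct irq_data *d)
  {
      struct foo_pcie *pcie = irq_data_get_irq_chip_data(d);

      /* Clear the per-INTx latch so the level can be re-sampled. */
      writel(BIT(irqd_to_hwirq(d)), pcie->base + FOO_INTX_STATUS);
  }

  static void foo_pcie_intx_mask(struct irq_data *d)
  {
      struct foo_pcie *pcie = irq_data_get_irq_chip_data(d);
      u32 val = readl(pcie->base + FOO_INTX_MASK);

      writel(val | BIT(irqd_to_hwirq(d)), pcie->base + FOO_INTX_MASK);
  }

  static void foo_pcie_intx_unmask(struct irq_data *d)
  {
      struct foo_pcie *pcie = irq_data_get_irq_chip_data(d);
      u32 val = readl(pcie->base + FOO_INTX_MASK);

      writel(val & ~BIT(irqd_to_hwirq(d)), pcie->base + FOO_INTX_MASK);
  }

  static struct irq_chip foo_pcie_intx_chip = {
      .name       = "foo-intx",
      .irq_ack    = foo_pcie_intx_ack,
      .irq_mask   = foo_pcie_intx_mask,
      .irq_unmask = foo_pcie_intx_unmask,
  };

  /* The .map callback from the earlier sketch would then use this chip
   * with handle_level_irq instead of dummy_irq_chip/handle_simple_irq:
   *
   *   irq_set_chip_and_handler(irq, &foo_pcie_intx_chip, handle_level_irq);
   */

handle_level_irq's mask-and-ack step is what delivers the extra
acknowledgement to the hardware, and it is this per-controller hook
that the shared PNP0A03 path cannot express today.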

I did not say that FW can solve the INTx muxing. I said that, before
we look at solving the PCI INTx muxing issue, there must be firmware
in place to probe the host controllers that require this muxing (i.e.
to allow probing them with ACPI).

We should not solve issues that don't exist ;-)

Thanks,
Lorenzo
