From: "Pali Rohár" <pali@kernel.org>
To: Bjorn Helgaas <bhelgaas@google.com>
Cc: "Thomas Petazzoni" <thomas.petazzoni@bootlin.com>,
"Lorenzo Pieralisi" <lorenzo.pieralisi@arm.com>,
"Rob Herring" <robh@kernel.org>,
"Krzysztof Wilczyński" <kw@linux.com>,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
Date: Thu, 23 Jun 2022 15:10:29 +0200
Message-ID: <20220623131029.e4tyegjmvhy5xxxw@pali>
In-Reply-To: <20220524122817.7199-1-pali@kernel.org>
On Tuesday 24 May 2022 14:28:17 Pali Rohár wrote:
> Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> chained IRQ handler"") for the pci-aardvark driver, use devm_request_irq()
> instead of a chained IRQ handler in the pci-mvebu.c driver.
>
> This change fixes affinity support and allows pinning interrupts from
> different PCIe controllers to different CPU cores.
>
> Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> Signed-off-by: Pali Rohár <pali@kernel.org>
> ---
PING?
> Hello Bjorn! This is basically the same issue as for pci-aardvark.c:
> https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
>
> I tested this patch with pci=nomsi on the kernel command line (to force the
> kernel to use legacy INTx instead of MSI) on an A385 board, and verified
> that I can set the affinity of every mvebu PCIe controller to a different
> CPU via its /proc/irq/XX/smp_affinity file, so that legacy interrupts from
> different cards/controllers were handled by different CPUs.
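
For reference, the pinning in the test above is nothing more than writing a
hex CPU mask into /proc/irq/XX/smp_affinity. A minimal userspace sketch (not
part of the patch; the IRQ number 42 and the mask 0x2 = CPU1 below are
placeholders):

/*
 * Pin one legacy INTx IRQ to a single CPU by writing a hex CPU mask to
 * /proc/irq/<irq>/smp_affinity. Must be run as root on a kernel where
 * that IRQ is actually present.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *path = "/proc/irq/42/smp_affinity";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}

	/* Bit 1 set -> route this IRQ to CPU1 only. */
	if (fprintf(f, "%x\n", 0x2) < 0 || fclose(f) != 0) {
		perror(path);
		return EXIT_FAILURE;
	}

	return EXIT_SUCCESS;
}

Watching /proc/interrupts afterwards shows the per-CPU counter for that IRQ
incrementing only on the selected CPU.
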
>
> I think this is important on Armada XP platforms, which have many
> independent PCIe controllers (IIRC up to 10) and multiple CPU cores (up to 4).
> ---
> drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> 1 file changed, 17 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> index 8f76d4bda356..de67ea39fea5 100644
> --- a/drivers/pci/controller/pci-mvebu.c
> +++ b/drivers/pci/controller/pci-mvebu.c
> @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> return 0;
> }
>
> -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> {
> - struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> - struct irq_chip *chip = irq_desc_get_chip(desc);
> + struct mvebu_pcie_port *port = arg;
> struct device *dev = &port->pcie->pdev->dev;
> u32 cause, unmask, status;
> int i;
>
> - chained_irq_enter(chip, desc);
> -
> cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> status = cause & unmask;
> @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> }
>
> - chained_irq_exit(chip, desc);
> + return status ? IRQ_HANDLED : IRQ_NONE;
> }
>
> static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> mvebu_pcie_powerdown(port);
> continue;
> }
> - irq_set_chained_handler_and_data(irq,
> - mvebu_pcie_irq_handler,
> - port);
> +
> + ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> + IRQF_SHARED | IRQF_NO_THREAD,
> + port->name, port);
> + if (ret) {
> + dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> + port->name, ret);
> + irq_domain_remove(port->intx_irq_domain);
> + pci_bridge_emul_cleanup(&port->bridge);
> + devm_iounmap(dev, port->base);
> + port->base = NULL;
> + mvebu_pcie_powerdown(port);
> + continue;
> + }
> }
>
> /*
> @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>
> for (i = 0; i < pcie->nports; i++) {
> struct mvebu_pcie_port *port = &pcie->ports[i];
> - int irq = port->intx_irq;
>
> if (!port->base)
> continue;
> @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> /* Clear all interrupt causes. */
> mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
>
> - if (irq > 0)
> - irq_set_chained_handler_and_data(irq, NULL, NULL);
> -
> /* Remove IRQ domains. */
> if (port->intx_irq_domain)
> irq_domain_remove(port->intx_irq_domain);
> --
> 2.20.1
>