From: Rob Herring <robh@kernel.org>
To: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Andy Gross <agross@kernel.org>,
Bjorn Andersson <bjorn.andersson@linaro.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
Jingoo Han <jingoohan1@gmail.com>,
Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
Bjorn Helgaas <bhelgaas@google.com>,
Stanimir Varbanov <svarbanov@mm-sol.com>,
Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>,
Vinod Koul <vkoul@kernel.org>,
linux-arm-msm@vger.kernel.org, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org
Subject: Re: [PATCH v12 6/8] PCI: dwc: Implement special ISR handler for split MSI IRQ setup
Date: Thu, 26 May 2022 13:42:28 -0500 [thread overview]
Message-ID: <20220526184228.GF54904-robh@kernel.org> (raw)
In-Reply-To: <20220523181836.2019180-7-dmitry.baryshkov@linaro.org>
On Mon, May 23, 2022 at 09:18:34PM +0300, Dmitry Baryshkov wrote:
> If the PCIe DWC controller uses split MSI IRQs for reporting MSI
> vectors, it is possible to detect which group triggered the interrupt.
> Provide an optimized version of the MSI ISR handler that handles just a
> single MSI group instead of handling all of them.
A lot more complexity to save 7 register reads...
>
> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> ---
> .../pci/controller/dwc/pcie-designware-host.c | 86 ++++++++++++++-----
> 1 file changed, 65 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 98a57249ecaf..2b2de517301a 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -52,34 +52,42 @@ static struct msi_domain_info dw_pcie_msi_domain_info = {
> .chip = &dw_pcie_msi_irq_chip,
> };
>
> +static inline irqreturn_t dw_handle_single_msi_group(struct pcie_port *pp, int i)
> +{
> + int pos;
> + unsigned long val;
> + u32 status;
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> + status = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
> + (i * MSI_REG_CTRL_BLOCK_SIZE));
> + if (!status)
> + return IRQ_NONE;
> +
> + val = status;
> + pos = 0;
> + while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
> + pos)) != MAX_MSI_IRQS_PER_CTRL) {
for_each_set_bit() doesn't work here?
> + generic_handle_domain_irq(pp->irq_domain,
> + (i * MAX_MSI_IRQS_PER_CTRL) +
> + pos);
> + pos++;
> + }
> +
> + return IRQ_HANDLED;
> +}
> +
> /* MSI int handler */
> irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
> {
> - int i, pos;
> - unsigned long val;
> - u32 status, num_ctrls;
> + int i;
> + u32 num_ctrls;
> irqreturn_t ret = IRQ_NONE;
> - struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>
> num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
>
> - for (i = 0; i < num_ctrls; i++) {
> - status = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
> - (i * MSI_REG_CTRL_BLOCK_SIZE));
> - if (!status)
> - continue;
> -
> - ret = IRQ_HANDLED;
> - val = status;
> - pos = 0;
> - while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
> - pos)) != MAX_MSI_IRQS_PER_CTRL) {
> - generic_handle_domain_irq(pp->irq_domain,
> - (i * MAX_MSI_IRQS_PER_CTRL) +
> - pos);
> - pos++;
> - }
> - }
> + for (i = 0; i < num_ctrls; i++)
> + ret |= dw_handle_single_msi_group(pp, i);
>
> return ret;
> }
> @@ -98,6 +106,38 @@ static void dw_chained_msi_isr(struct irq_desc *desc)
> chained_irq_exit(chip, desc);
> }
>
> +static void dw_split_msi_isr(struct irq_desc *desc)
> +{
> + struct irq_chip *chip = irq_desc_get_chip(desc);
> + int irq = irq_desc_get_irq(desc);
> + struct pcie_port *pp;
> + int i;
> + u32 num_ctrls;
> + struct dw_pcie *pci;
> +
> + chained_irq_enter(chip, desc);
> +
> + pp = irq_desc_get_handler_data(desc);
> + pci = to_dw_pcie_from_pp(pp);
> +
> + /*
> + * Unlike generic dw_handle_msi_irq(), we can determine which group of
> + * MSIs triggered the IRQ, so process just that group.
> + */
> + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
> +
> + for (i = 0; i < num_ctrls; i++) {
> + if (pp->msi_irq[i] == irq) {
> + dw_handle_single_msi_group(pp, i);
> + break;
> + }
> + }
> +
> + WARN_ON_ONCE(i == num_ctrls);
> +
> + chained_irq_exit(chip, desc);
> +}
> +
> static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
> {
> struct pcie_port *pp = irq_data_get_irq_chip_data(d);
> @@ -336,6 +376,7 @@ static int dw_pcie_msi_host_init(struct pcie_port *pp)
> struct platform_device *pdev = to_platform_device(dev);
> int ret;
> u32 ctrl, num_ctrls;
> + bool has_split_msi_irq = false;
>
> for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
> pp->irq_mask[ctrl] = ~0;
> @@ -344,6 +385,8 @@ static int dw_pcie_msi_host_init(struct pcie_port *pp)
> ret = dw_pcie_parse_split_msi_irq(pp);
> if (ret < 0 && ret != -ENXIO)
> return ret;
> + else if (!ret)
> + has_split_msi_irq = true;
> }
>
> if (!pp->num_vectors)
> @@ -372,6 +415,7 @@ static int dw_pcie_msi_host_init(struct pcie_port *pp)
> for (ctrl = 0; ctrl < num_ctrls; ctrl++)
> if (pp->msi_irq[ctrl] > 0)
> irq_set_chained_handler_and_data(pp->msi_irq[ctrl],
> + has_split_msi_irq ? dw_split_msi_isr :
> dw_chained_msi_isr,
> pp);
>
> --
> 2.35.1
>