From: Johan Hovold <johan@kernel.org>
To: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Andy Gross <agross@kernel.org>,
Bjorn Andersson <bjorn.andersson@linaro.org>,
Rob Herring <robh+dt@kernel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
Jingoo Han <jingoohan1@gmail.com>,
Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
Bjorn Helgaas <bhelgaas@google.com>,
Stanimir Varbanov <svarbanov@mm-sol.com>,
Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>,
Vinod Koul <vkoul@kernel.org>,
linux-arm-msm@vger.kernel.org, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org
Subject: Re: [PATCH v12 5/8] PCI: dwc: Handle MSIs routed to multiple GIC interrupts
Date: Thu, 2 Jun 2022 16:18:24 +0200
Message-ID: <YpjGsOT2y2IDTHAU@hovoldconsulting.com>
In-Reply-To: <20220523181836.2019180-6-dmitry.baryshkov@linaro.org>

On Mon, May 23, 2022 at 09:18:33PM +0300, Dmitry Baryshkov wrote:
> On some Qualcomm platforms, each group of 32 MSI vectors is routed to a
> separate GIC interrupt. Implement support for such configurations by
> parsing the "msi0" ... "msiN" interrupts and attaching them to the
> chained handler.
>
> Note that if the DT doesn't list an array of MSI interrupts and instead
> uses a single "msi" IRQ, the driver will limit the number of supported
> MSI vectors accordingly (to 32).
>
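As context for the naming scheme: a controller with split MSI routing
would describe one interrupt per group of 32 vectors, while existing DTs
keep a single "msi" entry. A rough illustration (the node name, GIC SPI
numbers, and group count below are made up, not taken from a real board):

```dts
pcie@1c00000 {
	/* Split scheme: one GIC interrupt per group of 32 MSI vectors. */
	interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>;
	interrupt-names = "msi0", "msi1";

	/* Legacy scheme (fallback, limited to 32 vectors):
	 * interrupt-names = "msi";
	 */
};
```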
> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> ---
> .../pci/controller/dwc/pcie-designware-host.c | 61 +++++++++++++++++--
> 1 file changed, 57 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index a076abe6611c..98a57249ecaf 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -288,6 +288,47 @@ static void dw_pcie_msi_init(struct pcie_port *pp)
> dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, upper_32_bits(msi_target));
> }
>
> +static const char * const split_msi_names[] = {
> + "msi0", "msi1", "msi2", "msi3",
> + "msi4", "msi5", "msi6", "msi7",
> +};
> +
> +static int dw_pcie_parse_split_msi_irq(struct pcie_port *pp)
> +{
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> + struct device *dev = pci->dev;
> + struct platform_device *pdev = to_platform_device(dev);
> + int irq;
> + u32 ctrl, max_vectors;
> +
> + /* Parse as many IRQs as described in the devicetree. */
> + for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++) {
> + irq = platform_get_irq_byname_optional(pdev, split_msi_names[ctrl]);
> + if (irq == -ENXIO)
> + break;
> + if (irq < 0)
> + return dev_err_probe(dev, irq,
> + "Failed to parse MSI IRQ '%s'\n",
> + split_msi_names[ctrl]);
> +
> + pp->msi_irq[ctrl] = irq;
> + }
> +
> + /* If there were no "msiN" IRQs at all, fallback to the standard "msi" IRQ. */
> + if (ctrl == 0)
> + return -ENXIO;
> +
> + max_vectors = ctrl * MAX_MSI_IRQS_PER_CTRL;
> + if (pp->num_vectors > max_vectors) {
> + dev_warn(dev, "Exceeding number of MSI vectors, limiting to %d\n", max_vectors);
Please use %u here since max_vectors is unsigned (u32).

And perhaps break the line after the last comma to keep it under 80
columns?
> + pp->num_vectors = max_vectors;
> + }
> + if (!pp->num_vectors)
> + pp->num_vectors = max_vectors;
> +
> + return 0;
> +}
> +
> static int dw_pcie_msi_host_init(struct pcie_port *pp)
> {
> struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> @@ -296,21 +337,32 @@ static int dw_pcie_msi_host_init(struct pcie_port *pp)
> int ret;
> u32 ctrl, num_ctrls;
>
> - num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
> - for (ctrl = 0; ctrl < num_ctrls; ctrl++)
> + for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
> pp->irq_mask[ctrl] = ~0;
>
> + if (!pp->msi_irq[0]) {
> + ret = dw_pcie_parse_split_msi_irq(pp);
> + if (ret < 0 && ret != -ENXIO)
> + return ret;
> + }
> +
> + if (!pp->num_vectors)
> + pp->num_vectors = MSI_DEF_NUM_VECTORS;
> + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
> +
> if (!pp->msi_irq[0]) {
> int irq = platform_get_irq_byname_optional(pdev, "msi");
>
> if (irq < 0) {
> irq = platform_get_irq(pdev, 0);
> if (irq < 0)
> - return irq;
> + return dev_err_probe(dev, irq, "Failed to parse MSI irq\n");
> }
> pp->msi_irq[0] = irq;
> }
>
> + dev_dbg(dev, "Using %d MSI vectors\n", pp->num_vectors);
> +
> pp->msi_irq_chip = &dw_pci_msi_bottom_irq_chip;
>
> ret = dw_pcie_allocate_domains(pp);
> @@ -407,7 +459,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
> of_property_read_bool(np, "msi-parent") ||
> of_property_read_bool(np, "msi-map"));
>
> - if (!pp->num_vectors) {
> + /* for the has_msi_ctrl the default assignment is handled inside dw_pcie_msi_host_init() */
Add the missing "case" after "has_msi_ctrl".
s/inside/in/
Please make this a multiline comment split at < 80 chars.
And follow the comment style of the driver and start with a capital
letter.
> + if (!pp->has_msi_ctrl && !pp->num_vectors) {
> pp->num_vectors = MSI_DEF_NUM_VECTORS;
> } else if (pp->num_vectors > MAX_MSI_IRQS) {
> dev_err(dev, "Invalid number of vectors\n");
Looks good now otherwise.
But please consider Rob's suggestion for generating the interrupt names.
Reviewed-by: Johan Hovold <johan+linaro@kernel.org>