From: David Daney <ddaney@caviumnetworks.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Linus Walleij <linus.walleij@linaro.org>
Subject: irq domain hierarchy vs. chaining w/ PCI MSI-X...
Date: Thu, 12 Jan 2017 14:35:58 -0800 [thread overview]
Message-ID: <dd246f88-0e27-b27e-fc42-6e193a91da3e@caviumnetworks.com> (raw)
Hi Thomas,
I am trying to figure out how to handle this situation:
                   handle_level_irq()
                  +-----------------+               handle_fasteoi_irq()
                  |  PCIe hosted    |                 +-----------+      +-----+
  --level_gpio--->|  GPIO to MSI-X  |--MSI_message--->| gicv3-ITS |----->| CPU |
                  |  widget         |             ^   +-----------+      +-----+
                  +-----------------+             |
                                                  |
                  +-------------------+           |
                  | other PCIe device |--MSI_message
                  +-------------------+
The question is how to structure the interrupt handling. My initial
attempt was a chaining arrangement where the GPIO driver does
request_irq() for the appropriate MSI-X vector, and the handler calls
back into the irq system like this:
static irqreturn_t thunderx_gpio_chain_handler(int irq, void *dev)
{
	struct thunderx_irqdev *irqdev = dev;
	int chained_irq;
	int ret;

	chained_irq = irq_find_mapping(irqdev->gpio->chip.irqdomain,
				       irqdev->line);
	if (!chained_irq)
		return IRQ_NONE;

	ret = generic_handle_irq(chained_irq);

	return ret ? IRQ_NONE : IRQ_HANDLED;
}
This gets the proper GPIO irq_chip functions called to manage the
level-triggering semantics.
The drawback of this approach is that two irqs are then associated with
each GPIO line (the base MSI-X and the chained GPIO). Also, there can
be up to 80-100 of these widgets, so we can potentially consume twice
that many irq numbers.
It was suggested by Linus Walleij that using an irq domain hierarchy
might be a better idea. However, I cannot figure out how this might
work. The gicv3-ITS needs to use handle_fasteoi_irq(), and we need
handle_level_irq() for the GPIO-level lines. Getting the proper
irq_chip functions called in a hierarchical configuration doesn't seem
doable given the heterogeneous flow handlers.
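For concreteness, the hierarchical arrangement I looked at would have the GPIO domain stacked on the MSI parent, with an .alloc callback along these lines (a sketch only, assuming the standard irqdomain hierarchy API; the thunderx_* names are hypothetical):

	/* Sketch: .alloc for a GPIO child domain stacked on the MSI
	 * parent domain.  thunderx_gpio_irq_chip is hypothetical. */
	static int thunderx_gpio_domain_alloc(struct irq_domain *d,
					      unsigned int virq,
					      unsigned int nr_irqs,
					      void *arg)
	{
		struct irq_fwspec *fwspec = arg;
		irq_hw_number_t hwirq = fwspec->param[0];
		int ret;

		/* Allocate the backing MSI-X vector from the parent
		 * (ITS) domain. */
		ret = irq_domain_alloc_irqs_parent(d, virq, nr_irqs, arg);
		if (ret)
			return ret;

		/* Attach the GPIO irq_chip at this level of the
		 * hierarchy. */
		return irq_domain_set_hwirq_and_chip(d, virq, hwirq,
						     &thunderx_gpio_irq_chip,
						     NULL);
	}

The open question is which flow handler ends up on the resulting irq descriptor, since the parent wants handle_fasteoi_irq() and the GPIO level wants handle_level_irq().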
Can you think of a better way of structuring this than chaining from the
MSI-X handler as I outlined above?
Thanks in advance for any insight,
David Daney
Thread overview: 8+ messages
2017-01-12 22:35 David Daney [this message]
2017-01-13 15:41 ` irq domain hierarchy vs. chaining w/ PCI MSI-X Linus Walleij
2017-01-13 16:15 ` Marc Zyngier
2017-01-13 17:37 ` David Daney
2017-01-13 18:45 ` Marc Zyngier
2017-01-13 19:40 ` David Daney
2017-01-30 13:32 ` Thomas Gleixner
2017-01-30 17:55 ` David Daney