From: Mason <slash.tmp@free.fr>
To: Marc Zyngier <marc.zyngier@arm.com>, Thomas Gleixner <tglx@linutronix.de>
Cc: Bjorn Helgaas <helgaas@kernel.org>, Robin Murphy <robin.murphy@arm.com>,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Liviu Dudau <liviu.dudau@arm.com>,
 David Laight <david.laight@aculab.com>, linux-pci <linux-pci@vger.kernel.org>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 Thibaud Cornic <thibaud_cornic@sigmadesigns.com>,
 Phuong Nguyen <phuong_nguyen@sigmadesigns.com>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH v0.2] PCI: Add support for tango PCIe host bridge
Date: Tue, 11 Apr 2017 18:26:23 +0200
Message-ID: <2b5eef4c-32f2-54f1-ca2f-f9426e68fb2c@free.fr> (raw)
In-Reply-To: <5f81730d-fbe3-1f4c-de34-09bbfb893ee1@arm.com>

On 11/04/2017 17:49, Marc Zyngier wrote:
> On 11/04/17 16:13, Mason wrote:
>> On 27/03/2017 19:09, Marc Zyngier wrote:
>>
>>> Here's what your system looks like:
>>>
>>> PCI-EP -------> MSI Controller ------> INTC
>>>         MSI IRQ
>>>
>>> A PCI MSI is always edge. No ifs, no buts. That's what it is, and
>>> nothing else. Now, your MSI controller signals its output using a
>>> level interrupt, since you need to whack it on the head so that it
>>> lowers its line.
>>>
>>> There is not a single trigger, because there is not a single interrupt.
>>
>> Hello Marc,
>>
>> I was hoping you or Thomas might help clear some confusion
>> in my mind around IRQ domains (struct irq_domain).
>>
>> I have read https://www.kernel.org/doc/Documentation/IRQ-domain.txt
>>
>> IIUC, there should be one IRQ domain per IRQ controller.
>>
>> I have this MSI controller handling 256 interrupts, so I should
>> have *one* domain for all possible MSIs. Yet the Altera driver
>> registers *two* domains (msi_domain and inner_domain).
>>
>> Could I make everything work with a single IRQ domain?
>
> No, because you have two irqchips. One that deals with the HW, and the
> other that deals with the MSIs and how they are presented to the kernel,
> depending on the bus (PCI or something else). The fact that it doesn't
> really drive any HW doesn't make it irrelevant.

The example given in IRQ-domain.txt is

  Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU

with an irq_domain for each interrupt controller.

On my system I have:

  PCI-EP -> MSI controller -> System INTC -> GIC -> CPU

The driver for the System INTC is drivers/irqchip/irq-tango.c
I think it has only one domain. For the GIC, in
drivers/irqchip/irq-gic.c, I see a call to irq_domain_create_linear().

Is the handling of MSIs different, and is that why we need two domains?
(Sorry, I did not understand that part well.)

When I looked at drivers/pci/host/pci-hyperv.c, they seem to have a
single pci_msi_create_irq_domain() call, with no call to domain_add or
domain_create. And they have a single struct irq_chip.

> You don't need to tell it anything about the number of interrupts you
> manage. As for your private structure, you've already given it to your
> low level domain, and there is no need to propagate it any further.

My main issue is that, in the ack callback, I was in the "wrong" domain,
in that d->hwirq was not the MSI number. That is why I thought I needed
a single irq_domain.

Is there a function to map a virq to its hwirq in a given domain?

Regards.
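For readers following the thread: the two-domain pattern Marc describes can be sketched roughly as below. This is loosely modeled on the Altera MSI driver mentioned above (drivers/pci/host/pcie-altera-msi.c in that era's tree); the struct names, MSI_MAX, msi_domain_ops, and msi_domain_info are illustrative placeholders, not a working driver:

```
/* Sketch only: an inner domain that maps hwirq <-> virq and drives the
 * actual MSI hardware, and an outer MSI domain that merely presents
 * those interrupts to the PCI core.
 */
static int tango_allocate_domains(struct tango_msi *msi)
{
	struct fwnode_handle *fwnode = of_node_to_fwnode(msi->dev->of_node);

	/* Inner domain: backed by the real irqchip that pokes the HW */
	msi->inner_domain = irq_domain_add_linear(NULL, MSI_MAX,
						  &msi_domain_ops, msi);
	if (!msi->inner_domain)
		return -ENOMEM;

	/* Outer domain: PCI-MSI presentation layer, no HW behind it */
	msi->msi_domain = pci_msi_create_irq_domain(fwnode, &msi_domain_info,
						    msi->inner_domain);
	if (!msi->msi_domain) {
		irq_domain_remove(msi->inner_domain);
		return -ENOMEM;
	}
	return 0;
}
```

The point of the split is exactly what Marc says: the outer domain's irqchip "doesn't really drive any HW" but still exists to handle the bus-specific (PCI) view of the interrupt.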
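On the closing question (mapping a virq to a hwirq in a given domain): in a hierarchical setup each level of the hierarchy has its own struct irq_data for the same virq, each with its own hwirq, which is why d->hwirq in one level's ack callback is not the MSI number. A sketch of the lookup, assuming CONFIG_IRQ_DOMAIN_HIERARCHY and using irq_domain_get_irq_data() from kernel/irq/irqdomain.c (not compilable standalone):

```
/* Sketch: retrieve the irq_data -- and thus the hwirq -- that a
 * specific domain in the hierarchy associates with this virq.
 */
static irq_hw_number_t hwirq_in_domain(struct irq_domain *domain,
				       unsigned int virq)
{
	struct irq_data *d = irq_domain_get_irq_data(domain, virq);

	return d ? d->hwirq : 0;
}
```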