* Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
@ 2021-08-05 17:09 Reed, Scott
  2021-08-17 13:36 ` Reed, Scott
  0 siblings, 1 reply; 8+ messages in thread
From: Reed, Scott @ 2021-08-05 17:09 UTC (permalink / raw)
  To: xenomai

Hello,

About a year ago, I ported an Ethernet driver we use to be an RTnet
driver. In general everything is working, but we have run into a
recent problem which looks like it might be caused by high
Ethernet receive interrupt latencies.

The Ethernet interrupts come in as PCIe MSI interrupts, and I
implemented the potentially creative but bad solution of
modifying the PCIe driver to distribute the MSI interrupts
to the I-pipe by replacing the call to "generic_handle_irq" with
a call to "ipipe_handle_demuxed_irq".
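
Concretely, the modification boils down to this in the MSI demux loop
of dw_handle_msi_irq() in the DesignWare host driver (a sketch against
Linux 4.14; exact context may differ):

	irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
	/* was: generic_handle_irq(irq); */
	ipipe_handle_demuxed_irq(irq);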

I am thinking that this was maybe not the best approach and that the
correct solution would be to modify the PCIe driver so that the
original MSI interrupt is an RTDM interrupt, since if it is not,
this could be causing the high interrupt latencies.

Is my thinking in the right direction or completely off?

Background information
--------------------------------
Processor: i.MX6 Quad
Linux 4.14.62
Xenomai 3.0.7

Thanks in advance for any advice.

Regards,

Scott




* RE: Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
  2021-08-05 17:09 Question regarding RTnet Ethernet Driver with PCIe MSI interrupts Reed, Scott
@ 2021-08-17 13:36 ` Reed, Scott
  2021-08-17 20:01   ` Jeroen Van den Keybus
  0 siblings, 1 reply; 8+ messages in thread
From: Reed, Scott @ 2021-08-17 13:36 UTC (permalink / raw)
  To: xenomai



> -----Original Message-----
> From: Xenomai <xenomai-bounces@xenomai.org> On Behalf Of Reed, Scott
> via Xenomai
> Sent: Thursday, August 5, 2021 19:09
> To: xenomai@xenomai.org
> Subject: [EXTERNAL] Question regarding RTnet Ethernet Driver with PCIe MSI
> interrupts
>
> Hello,
>
> About a year ago, I ported an Ethernet driver we use to be an RTnet
> driver. In general everything is working, but we have run into a
> recent problem which looks like it might be caused by high
> Ethernet receive interrupt latencies.
>
> The Ethernet interrupts come in as PCIe MSI interrupts, and I
> implemented the potentially creative but bad solution of
> modifying the PCIe driver to distribute the MSI interrupts
> to the I-pipe by replacing the call to "generic_handle_irq" with
> a call to "ipipe_handle_demuxed_irq".
>
> I am thinking that this was maybe not the best approach and that the
> correct solution would be to modify the PCIe driver so that the
> original MSI interrupt is an RTDM interrupt, since if it is not,
> this could be causing the high interrupt latencies.
>
> Is my thinking in the right direction or completely off?

I did not receive any feedback from the forum (maybe I made
a mistake in my post as it was my first post), but in case someone
else runs across this thread and has a similar issue...

I modified the PCIe driver so that the MSI interrupt is an
RTDM interrupt (i.e. registered the interrupt with rtdm_irq_request()
as opposed to devm_request_irq()), and this solved my problem
of seeing latencies (i.e. delays) in the handling of the Ethernet receive
interrupts (which are packaged in the PCIe MSI interrupt).
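
In code, the registration change is roughly the following (a sketch;
the handler itself must also be converted to the rtdm_irq_handler_t
signature):

	static rtdm_irq_t pcie_msi_irq_handle;

	/* was: devm_request_irq(dev, pp->msi_irq, imx6_pcie_msi_handler,
	 *                       IRQF_SHARED | IRQF_NO_THREAD,
	 *                       "mx6-pcie-msi", imx6_pcie); */
	ret = rtdm_irq_request(&pcie_msi_irq_handle, pp->msi_irq,
			       imx6_pcie_msi_handler_rtdm,
			       RTDM_IRQTYPE_SHARED, "mx6-pcie-msi", imx6_pcie);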

Since my Ethernet receive interrupts (and transmit interrupts) are also
registered as RTDM interrupts, I needed to keep my original PCIe driver
modification so that MSI interrupts are distributed in dw_handle_msi_irq()
using "ipipe_handle_demuxed_irq" instead of "generic_handle_irq".

I assume it would probably also work to register the Ethernet interrupts
as "normal Linux" interrupts, in which case this modification would not
be necessary, but I did not verify this and prefer to have the Ethernet
interrupts as RTDM interrupts so the driver is correct whether or not
it is "behind" a PCIe MSI interrupt.

If anyone would like more details, I can gladly provide them.

Regards,

Scott

>
> Background information
> --------------------------------
> Processor: i.MX6 Quad
> Linux 4.14.62
> Xenomai 3.0.7
>
> Thanks in advance for any advice.
>
> Regards,
>
> Scott


* Re: Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
  2021-08-17 13:36 ` Reed, Scott
@ 2021-08-17 20:01   ` Jeroen Van den Keybus
  2021-08-18  6:21     ` Jan Kiszka
  0 siblings, 1 reply; 8+ messages in thread
From: Jeroen Van den Keybus @ 2021-08-17 20:01 UTC (permalink / raw)
  To: Reed, Scott; +Cc: xenomai

FWIW, in the past I have worked with e1000 NICs in RTnet that registered
their MSI interrupt with the kernel.

I do not recall that I had to do anything out of the ordinary.


Op di 17 aug. 2021 om 15:37 schreef Reed, Scott via Xenomai <
xenomai@xenomai.org>:

>
>
> > -----Original Message-----
> > From: Xenomai <xenomai-bounces@xenomai.org> On Behalf Of Reed, Scott
> > via Xenomai
> > Sent: Thursday, August 5, 2021 19:09
> > To: xenomai@xenomai.org
> > Subject: [EXTERNAL] Question regarding RTnet Ethernet Driver with PCIe
> MSI
> > interrupts
> >
> > Hello,
> >
> > About a year ago, I ported an Ethernet driver we use to be an RTnet
> > driver. In general everything is working, but we have run into a
> > recent problem which looks like it might be caused by high
> > Ethernet receive interrupt latencies.
> >
> > The Ethernet interrupts come in as PCIe MSI interrupts, and I
> > implemented the potentially creative but bad solution of
> > modifying the PCIe driver to distribute the MSI interrupts
> > to the I-pipe by replacing the call to "generic_handle_irq" with
> > a call to "ipipe_handle_demuxed_irq".
> >
> > I am thinking that this was maybe not the best approach and that the
> > correct solution would be to modify the PCIe driver so that the
> > original MSI interrupt is an RTDM interrupt, since if it is not,
> > this could be causing the high interrupt latencies.
> >
> > Is my thinking in the right direction or completely off?
>
> I did not receive any feedback from the forum (maybe I made
> a mistake in my post as it was my first post), but in case someone
> else runs across this thread and has a similar issue...
>
> I modified the PCIe driver so that the MSI interrupt is an
> RTDM interrupt (i.e. registered the interrupt with rtdm_irq_request()
> as opposed to devm_request_irq()), and this solved my problem
> of seeing latencies (i.e. delays) in the handling of the Ethernet receive
> interrupts (which are packaged in the PCIe MSI interrupt).
>
> Since my Ethernet receive interrupts (and transmit interrupts) are also
> registered as RTDM interrupts, I needed to keep my original PCIe driver
> modification so that MSI interrupts are distributed in dw_handle_msi_irq()
> using "ipipe_handle_demuxed_irq" instead of "generic_handle_irq".
>
> I assume it would probably also work to register the Ethernet interrupts
> as "normal Linux" interrupts, in which case this modification would not
> be necessary, but I did not verify this and prefer to have the Ethernet
> interrupts as RTDM interrupts so the driver is correct whether or not
> it is "behind" a PCIe MSI interrupt.
>
> If anyone would like more details, I can gladly provide them.
>
> Regards,
>
> Scott
>
> >
> > Background information
> > --------------------------------
> > Processor: i.MX6 Quad
> > Linux 4.14.62
> > Xenomai 3.0.7
> >
> > Thanks in advance for any advice.
> >
> > Regards,
> >
> > Scott
>

* Re: Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
  2021-08-17 20:01   ` Jeroen Van den Keybus
@ 2021-08-18  6:21     ` Jan Kiszka
  2021-08-18 10:15       ` [EXTERNAL] " Reed, Scott
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Kiszka @ 2021-08-18  6:21 UTC (permalink / raw)
  To: Jeroen Van den Keybus, Reed, Scott; +Cc: xenomai

On 17.08.21 22:01, Jeroen Van den Keybus via Xenomai wrote:
> FWIW, in the past I have worked with e1000 NICs in RTnet that registered
> their MSI interrupt with the kernel.
> 
> I do not recall that I had to do anything out of the ordinary.
> 

The main difference was likely that you were on x86 while Scott is
dealing with an i.MX6-based SoC.

I'm not familiar with the details of MSI routing on that target. If I
understand arch/arm/boot/dts/imx6qdl.dtsi correctly from a quick look,
all MSIs will be folded by the PCI host controller into a single SPI
(120), the only interrupt source that old SoC understands. In that case,
you need to make sure that this SPI is handled by I-pipe and, indeed,
treated as a muxing IRQ. But I have no idea how the demuxing will happen
in detail. You likely need to study and then patch the PCI host
controller driver.
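
From a quick look, the chain in the 4.14 driver is roughly this
(simplified sketch of the stock code):

	/* Parent handler for the single SPI all MSIs are folded into. */
	static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
	{
		struct imx6_pcie *imx6_pcie = arg;
		struct dw_pcie *pci = imx6_pcie->pci;
		struct pcie_port *pp = &pci->pp;

		/* Reads the MSI status registers and dispatches the per-MSI
		 * virtual IRQs looked up via irq_find_mapping(). */
		return dw_handle_msi_irq(pp);
	}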

Jan

> 
> Op di 17 aug. 2021 om 15:37 schreef Reed, Scott via Xenomai <
> xenomai@xenomai.org>:
> 
>>
>>
>>> -----Original Message-----
>>> From: Xenomai <xenomai-bounces@xenomai.org> On Behalf Of Reed, Scott
>>> via Xenomai
>>> Sent: Thursday, August 5, 2021 19:09
>>> To: xenomai@xenomai.org
>>> Subject: [EXTERNAL] Question regarding RTnet Ethernet Driver with PCIe
>> MSI
>>> interrupts
>>>
>>> Hello,
>>>
>>> About a year ago, I ported an Ethernet driver we use to be an RTnet
>>> driver. In general everything is working, but we have run into a
>>> recent problem which looks like it might be caused by high
>>> Ethernet receive interrupt latencies.
>>>
>>> The Ethernet interrupts come in as PCIe MSI interrupts, and I
>>> implemented the potentially creative but bad solution of
>>> modifying the PCIe driver to distribute the MSI interrupts
>>> to the I-pipe by replacing the call to "generic_handle_irq" with
>>> a call to "ipipe_handle_demuxed_irq".
>>>
>>> I am thinking that this was maybe not the best approach and that the
>>> correct solution would be to modify the PCIe driver so that the
>>> original MSI interrupt is an RTDM interrupt, since if it is not,
>>> this could be causing the high interrupt latencies.
>>>
>>> Is my thinking in the right direction or completely off?
>>
>> I did not receive any feedback from the forum (maybe I made
>> a mistake in my post as it was my first post), but in case someone
>> else runs across this thread and has a similar issue...
>>
>> I modified the PCIe driver so that the MSI interrupt is an RTDM
>> interrupt (i.e. registered the interrupt with rtdm_irq_request()
>> as opposed to devm_request_irq()), and this solved my problem
>> of seeing latencies (i.e. delays) in the handling of the Ethernet receive
>> interrupts (which are packaged in the PCIe MSI interrupt).
>>
>> Since my Ethernet receive interrupts (and transmit interrupts) are also
>> registered as RTDM interrupts, I needed to keep my original PCIe driver
>> modification so that MSI interrupts are distributed in dw_handle_msi_irq()
>> using "ipipe_handle_demuxed_irq" instead of "generic_handle_irq".
>>
>> I assume it would probably also work to register the Ethernet interrupts
>> as "normal Linux" interrupts, in which case this modification would not
>> be necessary, but I did not verify this and prefer to have the Ethernet
>> interrupts as RTDM interrupts so the driver is correct whether or not
>> it is "behind" a PCIe MSI interrupt.
>>
>> If anyone would like more details, I can gladly provide them.
>>
>> Regards,
>>
>> Scott
>>
>>>
>>> Background information
>>> --------------------------------
>>> Processor: i.MX6 Quad
>>> Linux 4.14.62
>>> Xenomai 3.0.7
>>>
>>> Thanks in advance for any advice.
>>>
>>> Regards,
>>>
>>> Scott
>>

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux



* RE: [EXTERNAL] Re: Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
  2021-08-18  6:21     ` Jan Kiszka
@ 2021-08-18 10:15       ` Reed, Scott
  2021-08-18 11:49         ` Jan Kiszka
  0 siblings, 1 reply; 8+ messages in thread
From: Reed, Scott @ 2021-08-18 10:15 UTC (permalink / raw)
  To: xenomai

> -----Original Message-----
> From: Jan Kiszka <jan.kiszka@siemens.com>
> Sent: Wednesday, August 18, 2021 08:22
> To: Jeroen Van den Keybus <jeroen.vandenkeybus@gmail.com>; Reed,
> Scott <sreed@moog.com>
> Cc: xenomai@xenomai.org
> Subject: [EXTERNAL] Re: Question regarding RTnet Ethernet Driver with PCIe
> MSI interrupts
>
> On 17.08.21 22:01, Jeroen Van den Keybus via Xenomai wrote:
> > FWIW, in the past I have worked with e1000 NICs in RTnet that
> > registered their MSI interrupt with the kernel.
> >
> > I do not recall that I had to do anything out of the ordinary.
> >
>
> The main difference was likely that you were on x86 while Scott is dealing
> with an i.MX6-based SoC.
>
> I'm not familiar with the details of MSI routing on that target. If I
> understand arch/arm/boot/dts/imx6qdl.dtsi correctly from a quick look,
> all MSIs will be folded by the PCI host controller into a single SPI
> (120), the only interrupt source that old SoC understands. In that case,
> you need to make sure that this SPI is handled by I-pipe and, indeed,
> treated as a muxing IRQ. But I have no idea how the demuxing will happen
> in detail. You likely need to study and then patch the PCI host
> controller driver.
>
> Jan

That is correct; I am working with an i.MX6-based SoC.

For anyone who might face similar issues, here is my patch to the
Linux 4.14.62 PCI host controller driver (with the Xenomai 3.0.7 patch
already applied).

Regards,

Scott

Subject: [PATCH] Modify PCI MSI interrupts for Xenomai RTDM

Since we have Xenomai RTDM drivers which are "tied" to PCI MSI
interrupts (e.g. the Altera TSE driver), we need to make the PCI MSI
interrupt itself an RTDM interrupt to avoid latency when the PCI MSI
interrupt is serviced.

For the same reason, we need to first pass the MSI interrupt to the
I-pipe handler. If there is no RTDM driver for the interrupt, the
I-pipe handler then calls generic_handle_irq.
---
 drivers/pci/dwc/pci-imx6.c             | 17 ++++++++++-------
 drivers/pci/dwc/pcie-designware-host.c |  2 +-
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c
index b7348353..65eb9ba5 100644
--- a/drivers/pci/dwc/pci-imx6.c
+++ b/drivers/pci/dwc/pci-imx6.c
@@ -33,6 +33,10 @@

 #include "pcie-designware.h"

+#include <xenomai/rtdm/driver.h>
+
+static rtdm_irq_t pcie_msi_irq_handle;
+
 #define to_imx6_pcie(x)dev_get_drvdata((x)->dev)

 enum imx6_pcie_variants {
@@ -545,13 +549,14 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
 return -EINVAL;
 }

-static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
+static int imx6_pcie_msi_handler_rtdm(rtdm_irq_t *irq_handle)
 {
-struct imx6_pcie *imx6_pcie = arg;
+struct imx6_pcie *imx6_pcie = rtdm_irq_get_arg(irq_handle, struct imx6_pcie);
 struct dw_pcie *pci = imx6_pcie->pci;
 struct pcie_port *pp = &pci->pp;

-return dw_handle_msi_irq(pp);
+dw_handle_msi_irq(pp);
+return RTDM_IRQ_HANDLED;
 }

 static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
@@ -678,10 +683,8 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
 return -ENODEV;
 }

-ret = devm_request_irq(dev, pp->msi_irq,
-       imx6_pcie_msi_handler,
-       IRQF_SHARED | IRQF_NO_THREAD,
-       "mx6-pcie-msi", imx6_pcie);
+ret = rtdm_irq_request(&pcie_msi_irq_handle, pp->msi_irq, imx6_pcie_msi_handler_rtdm,
+RTDM_IRQTYPE_SHARED, "mx6-pcie-msi", imx6_pcie);
 if (ret) {
 dev_err(dev, "failed to request MSI irq\n");
 return ret;
diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
index a5b8dd07..d0af2cfa 100644
--- a/drivers/pci/dwc/pcie-designware-host.c
+++ b/drivers/pci/dwc/pcie-designware-host.c
@@ -72,7 +72,7 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
 while ((pos = find_next_bit((unsigned long *) &val, 32,
     pos)) != 32) {
 irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
-generic_handle_irq(irq);
+ipipe_handle_demuxed_irq(irq);
 dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
     4, 1 << pos);
 pos++;
--
2.25.1

>
> >
> > Op di 17 aug. 2021 om 15:37 schreef Reed, Scott via Xenomai <
> > xenomai@xenomai.org>:
> >
> >>> -----Original Message-----
> >>> From: Xenomai <xenomai-bounces@xenomai.org> On Behalf Of Reed,
> Scott
> >>> via Xenomai
> >>> Sent: Thursday, August 5, 2021 19:09
> >>> To: xenomai@xenomai.org
> >>> Subject: [EXTERNAL] Question regarding RTnet Ethernet Driver with
> >>> PCIe
> >> MSI
> >>> interrupts
> >>>
> >>> Hello,
> >>>
> >>> About a year ago, I ported an Ethernet driver we use to be an RTnet
> >>> driver. In general everything is working, but we have run into a
> >>> recent problem which looks like it might be caused by high
> >>> Ethernet receive interrupt latencies.
> >>>
> >>> The Ethernet interrupts come in as PCIe MSI interrupts, and I
> >>> implemented the potentially creative but bad solution of modifying
> >>> the PCIe driver to distribute the MSI interrupts to the I-pipe by
> >>> replacing the call to "generic_handle_irq" with a call to
> >>> "ipipe_handle_demuxed_irq".
> >>>
> >>> I am thinking that this was maybe not the best approach and that the
> >>> correct solution would be to modify the PCIe driver so that the
> >>> original MSI interrupt is an RTDM interrupt, since if it is not,
> >>> this could be causing the high interrupt latencies.
> >>>
> >>> Is my thinking in the right direction or completely off?
> >>
> >> I did not receive any feedback from the forum (maybe I made a mistake
> >> in my post as it was my first post), but in case someone else runs
> >> across this thread and has a similar issue...
> >>
> >> I modified the PCIe driver so that the MSI interrupt is an RTDM
> >> interrupt (i.e. registered the interrupt with rtdm_irq_request() as
> >> opposed to devm_request_irq()), and this solved my problem of
> >> seeing latencies (i.e. delays) in the handling of the Ethernet
> >> receive interrupts (which are packaged in the PCIe MSI interrupt).
> >>
> >> Since my Ethernet receive interrupts (and transmit interrupts) are
> >> also registered as RTDM interrupts, I needed to keep my original PCIe
> >> driver modification so that MSI interrupts are distributed in
> >> dw_handle_msi_irq() using "ipipe_handle_demuxed_irq" instead of
> >> "generic_handle_irq".
> >>
> >> I assume it would probably also work to register the Ethernet
> >> interrupts as "normal Linux" interrupts, in which case this
> >> modification would not be necessary, but I did not verify this and
> >> prefer to have the Ethernet interrupts as RTDM interrupts so the
> >> driver is correct whether or not it is "behind" a PCIe MSI interrupt.
> >>
> >> If anyone would like more details, I can gladly provide them.
> >>
> >> Regards,
> >>
> >> Scott
> >>
> >>>
> >>> Background information
> >>> --------------------------------
> >>> Processor: i.MX6 Quad
> >>> Linux 4.14.62
> >>> Xenomai 3.0.7
> >>>
> >>> Thanks in advance for any advice.
> >>>
> >>> Regards,
> >>>
> >>> Scott
> >>
> >>
>
> --
> Siemens AG, T RDA IOT
> Corporate Competence Center Embedded Linux


* Re: [EXTERNAL] Re: Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
  2021-08-18 10:15       ` [EXTERNAL] " Reed, Scott
@ 2021-08-18 11:49         ` Jan Kiszka
       [not found]           ` <98779fb158c14f0d968be1f9441aad45@debo1svwi003091.corp.mooginc.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Kiszka @ 2021-08-18 11:49 UTC (permalink / raw)
  To: Reed, Scott, xenomai

On 18.08.21 12:15, Reed, Scott via Xenomai wrote:
>> -----Original Message-----
>> From: Jan Kiszka <jan.kiszka@siemens.com>
>> Sent: Wednesday, August 18, 2021 08:22
>> To: Jeroen Van den Keybus <jeroen.vandenkeybus@gmail.com>; Reed,
>> Scott <sreed@moog.com>
>> Cc: xenomai@xenomai.org
>> Subject: [EXTERNAL] Re: Question regarding RTnet Ethernet Driver with PCIe
>> MSI interrupts
>>
>> On 17.08.21 22:01, Jeroen Van den Keybus via Xenomai wrote:
>>> FWIW, in the past I have worked with e1000 NICs in RTnet that
>>> registered their MSI interrupt with the kernel.
>>>
>>> I do not recall that I had to do anything out of the ordinary.
>>>
>>
>> The main difference was likely that you were on x86 while Scott is dealing
>> with an i.MX6-based SoC.
>>
>> I'm not familiar with the details of MSI routing on that target. If I
>> understand arch/arm/boot/dts/imx6qdl.dtsi correctly from a quick look,
>> all MSIs will be folded by the PCI host controller into a single SPI
>> (120), the only interrupt source that old SoC understands. In that case,
>> you need to make sure that this SPI is handled by I-pipe and, indeed,
>> treated as a muxing IRQ. But I have no idea how the demuxing will happen
>> in detail. You likely need to study and then patch the PCI host
>> controller driver.
>>
>> Jan
> 
> That is correct; I am working with an i.MX6-based SoC.
> 
> For anyone who might face similar issues, here is my patch to the
> Linux 4.14.62 PCI host controller driver (with the Xenomai 3.0.7 patch
> already applied).
> 

Patch got mangled, as you can see below. Seems your client or server
decided to have tabs for lunch today.

> Regards,
> 
> Scott
> 
> Subject: [PATCH] Modify PCI MSI interrupts for Xenomai RTDM
> 
> Since we have Xenomai RTDM drivers which are "tied" to PCI MSI
> interrupts (e.g. the Altera TSE driver), we need to make the PCI MSI
> interrupt itself an RTDM interrupt to avoid latency when the PCI MSI
> interrupt is serviced.
> 
> For the same reason, we need to first pass the MSI interrupt to the
> I-pipe handler. If there is no RTDM driver for the interrupt, the
> I-pipe handler then calls generic_handle_irq.
> ---
>  drivers/pci/dwc/pci-imx6.c             | 17 ++++++++++-------
>  drivers/pci/dwc/pcie-designware-host.c |  2 +-
>  2 files changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c
> index b7348353..65eb9ba5 100644
> --- a/drivers/pci/dwc/pci-imx6.c
> +++ b/drivers/pci/dwc/pci-imx6.c
> @@ -33,6 +33,10 @@
> 
>  #include "pcie-designware.h"
> 
> +#include <xenomai/rtdm/driver.h>
> +
> +static rtdm_irq_t pcie_msi_irq_handle;
> +
>  #define to_imx6_pcie(x)dev_get_drvdata((x)->dev)
> 
>  enum imx6_pcie_variants {
> @@ -545,13 +549,14 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
>  return -EINVAL;
>  }
> 
> -static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
> +static int imx6_pcie_msi_handler_rtdm(rtdm_irq_t *irq_handle)
>  {
> -struct imx6_pcie *imx6_pcie = arg;
> +struct imx6_pcie *imx6_pcie = rtdm_irq_get_arg(irq_handle, struct imx6_pcie);
>  struct dw_pcie *pci = imx6_pcie->pci;
>  struct pcie_port *pp = &pci->pp;
> 
> -return dw_handle_msi_irq(pp);
> +dw_handle_msi_irq(pp);
> +return RTDM_IRQ_HANDLED;

Nasty - if dw_handle_msi_irq decides to return UNHANDLED...

>  }
> 
>  static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
> @@ -678,10 +683,8 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
>  return -ENODEV;
>  }
> 
> -ret = devm_request_irq(dev, pp->msi_irq,
> -       imx6_pcie_msi_handler,
> -       IRQF_SHARED | IRQF_NO_THREAD,
> -       "mx6-pcie-msi", imx6_pcie);
> +ret = rtdm_irq_request(&pcie_msi_irq_handle, pp->msi_irq, imx6_pcie_msi_handler_rtdm,
> +RTDM_IRQTYPE_SHARED, "mx6-pcie-msi", imx6_pcie);
>  if (ret) {
>  dev_err(dev, "failed to request MSI irq\n");
>  return ret;
> diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
> index a5b8dd07..d0af2cfa 100644
> --- a/drivers/pci/dwc/pcie-designware-host.c
> +++ b/drivers/pci/dwc/pcie-designware-host.c
> @@ -72,7 +72,7 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>  while ((pos = find_next_bit((unsigned long *) &val, 32,
>      pos)) != 32) {
>  irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
> -generic_handle_irq(irq);
> +ipipe_handle_demuxed_irq(irq);
>  dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
>      4, 1 << pos);
>  pos++;

This function and likely more will no run in primary domain. But, if you
are unlucky, it may still use Linux locking or other incompatible
calls. Did you check, e.g. by enabling I-pipe debugging or by careful
code inspection, that there is no such call remaining? It's a classic
trigger for lockups etc.

Jan

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux



* Re: Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
       [not found]           ` <98779fb158c14f0d968be1f9441aad45@debo1svwi003091.corp.mooginc.com>
@ 2021-08-23  8:59             ` Scott Reed
  2021-08-23 10:27               ` Jan Kiszka
  0 siblings, 1 reply; 8+ messages in thread
From: Scott Reed @ 2021-08-23  8:59 UTC (permalink / raw)
  To: xenomai

> -----Original Message-----
> From: Jan Kiszka <jan.kiszka@siemens.com>
> 
> On 18.08.21 12:15, Reed, Scott via Xenomai wrote:
>>> -----Original Message-----
>>> From: Jan Kiszka <jan.kiszka@siemens.com>
>>>
>>> On 17.08.21 22:01, Jeroen Van den Keybus via Xenomai wrote:
>>>> FWIW, in the past I have worked with e1000 NICs in RTnet that
>>>> registered their MSI interrupt with the kernel.
>>>>
>>>> I do not recall that I had to do anything out of the ordinary.
>>>>
>>>
>>> The main difference was likely that you were on x86 while Scott is dealing
>>> with an i.MX6-based SoC.
>>>
>>> I'm not familiar with the details of MSI routing on that target. If I
>>> understand arch/arm/boot/dts/imx6qdl.dtsi correctly from a quick look,
>>> all MSIs will be folded by the PCI host controller into a single SPI
>>> (120), the only interrupt source that old SoC understands. In that case,
>>> you need to make sure that this SPI is handled by I-pipe and, indeed,
>>> treated as a muxing IRQ. But I have no idea how the demuxing will happen
>>> in detail. You likely need to study and then patch the PCI host
>>> controller driver.
>>>
>>> Jan
>>
>> That is correct; I am working with an i.MX6-based SoC.
>>
>> For anyone who might face similar issues, here is my patch to the
>> Linux 4.14.62 PCI host controller driver (with the Xenomai 3.0.7 patch
>> already applied).
>>
> 
> Patch got mangled, as you can see below. Seems your client or server
> decided to have tabs for lunch today.
> 
Sorry about that! I have incorporated your suggestions from below
and re-posted the patch, hopefully this time not mangled.
>> Regards,
>>
>> Scott
>>
>> Subject: [PATCH] Modify PCI MSI interrupts for Xenomai RTDM
>>
>> Since we have Xenomai RTDM drivers which are "tied" to PCI MSI
>> interrupts (e.g. the Altera TSE driver), we need to make the PCI MSI
>> interrupt itself an RTDM interrupt to avoid latency when the PCI MSI
>> interrupt is serviced.
>>
>> For the same reason, we need to first pass the MSI interrupt to the
>> I-pipe handler. If there is no RTDM driver for the interrupt, the
>> I-pipe handler then calls generic_handle_irq.
>> ---
>>   drivers/pci/dwc/pci-imx6.c             | 17 ++++++++++-------
>>   drivers/pci/dwc/pcie-designware-host.c |  2 +-
>>   2 files changed, 11 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c
>> index b7348353..65eb9ba5 100644
>> --- a/drivers/pci/dwc/pci-imx6.c
>> +++ b/drivers/pci/dwc/pci-imx6.c
>> @@ -33,6 +33,10 @@
>>
>>   #include "pcie-designware.h"
>>
>> +#include <xenomai/rtdm/driver.h>
>> +
>> +static rtdm_irq_t pcie_msi_irq_handle;
>> +
>>   #define to_imx6_pcie(x)dev_get_drvdata((x)->dev)
>>
>>   enum imx6_pcie_variants {
>> @@ -545,13 +549,14 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
>>   return -EINVAL;
>>   }
>>
>> -static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
>> +static int imx6_pcie_msi_handler_rtdm(rtdm_irq_t *irq_handle)
>>   {
>> -struct imx6_pcie *imx6_pcie = arg;
>> +struct imx6_pcie *imx6_pcie = rtdm_irq_get_arg(irq_handle, struct imx6_pcie);
>>   struct dw_pcie *pci = imx6_pcie->pci;
>>   struct pcie_port *pp = &pci->pp;
>>
>> -return dw_handle_msi_irq(pp);
>> +dw_handle_msi_irq(pp);
>> +return RTDM_IRQ_HANDLED;
> 
> Nasty - if dw_handle_msi_irq decides to return UNHANDLED...
> 
Yes, I was lazy, since I (incorrectly) assumed dw_handle_msi_irq always
returns HANDLED and therefore did not process the return value.
This is handled in the new patch posted below.
>>   }
>>
>>   static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
>> @@ -678,10 +683,8 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
>>   return -ENODEV;
>>   }
>>
>> -ret = devm_request_irq(dev, pp->msi_irq,
>> -       imx6_pcie_msi_handler,
>> -       IRQF_SHARED | IRQF_NO_THREAD,
>> -       "mx6-pcie-msi", imx6_pcie);
>> +ret = rtdm_irq_request(&pcie_msi_irq_handle, pp->msi_irq, imx6_pcie_msi_handler_rtdm,
>> +RTDM_IRQTYPE_SHARED, "mx6-pcie-msi", imx6_pcie);
>>   if (ret) {
>>   dev_err(dev, "failed to request MSI irq\n");
>>   return ret;
>> diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
>> index a5b8dd07..d0af2cfa 100644
>> --- a/drivers/pci/dwc/pcie-designware-host.c
>> +++ b/drivers/pci/dwc/pcie-designware-host.c
>> @@ -72,7 +72,7 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>>   while ((pos = find_next_bit((unsigned long *) &val, 32,
>>       pos)) != 32) {
>>   irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
>> -generic_handle_irq(irq);
>> +ipipe_handle_demuxed_irq(irq);
>>   dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
>>       4, 1 << pos);
>>   pos++;
> 
> This function and likely more will no run in primary domain. But, if you
> are unlucky, it may still use Linux locking or other incompatible
> calls. Did you check, e.g. by enabling I-pipe debugging or by careful
> code inspection, that there is no such call remaining? It's a classic
> trigger for lockups etc.
I assume your comment above should read "will now run" rather than "will no run".
Thanks for the feedback!

Yes, I did run with I-pipe debugging and other debugging options enabled 
and saw no issues. The configuration options I turned on are
  IPIPE_DEBUG
  IPIPE_DEBUG_CONTEXT
  IPIPE_DEBUG_INTERNAL
  XENO_OPT_DEBUG_COBALT
  XENO_OPT_DEBUG_CONTEXT

Is this enough to be confident that there are no Linux locking or other
incompatible calls or is a careful code inspection still necessary?

If careful code inspection is still necessary, can you give me some
more guidance on what I should be looking for and how to generally
mitigate problems if they exist?

In addition to the Ethernet Rx/Tx PCI MSI interrupts which are
distributed to an RTDM (RTnet) Ethernet driver, we have other PCI MSI
interrupts which are distributed to non-RTDM drivers. My understanding
is that the non-RTDM-based PCI MSI interrupts are handled by the
root domain (i.e. the Linux kernel) and therefore do not need to be
checked for Linux locking or other incompatible calls. Is that correct?

I did inspect the PCI driver code which dispatches the MSI interrupt
and saw that dw_handle_msi_irq looks like it calls rcu_read_lock/
rcu_read_unlock via the irq_find_mapping call. Is this problematic?

Thanks for your help, feedback, and guidance!

Regards,

Scott

Updated Patch:
Subject: [PATCH] Modify PCI MSI interrupts for Xenomai RTDM

Since we have Xenomai RTDM drivers which are "tied" to PCI MSI
interrupts (e.g. the Altera TSE driver), we need to make the PCI MSI
interrupt itself an RTDM interrupt to avoid latency when the PCI MSI
interrupt is serviced.

For the same reason, we need to first pass the MSI interrupt to the
I-pipe handler. If there is no RTDM driver for the interrupt, the
I-pipe handler then calls generic_handle_irq.
---
 drivers/pci/dwc/pci-imx6.c             | 33 ++++++++++++++++++++------
 drivers/pci/dwc/pcie-designware-host.c |  2 +-
 2 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c
index b7348353..bd4f1d14 100644
--- a/drivers/pci/dwc/pci-imx6.c
+++ b/drivers/pci/dwc/pci-imx6.c
@@ -33,6 +33,10 @@
 
 #include "pcie-designware.h"
 
+#include <xenomai/rtdm/driver.h>
+
+static rtdm_irq_t pcie_msi_irq_handle;
+
 #define to_imx6_pcie(x)	dev_get_drvdata((x)->dev)
 
 enum imx6_pcie_variants {
@@ -545,13 +549,30 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
 	return -EINVAL;
 }
 
-static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
+static int imx6_pcie_msi_handler_rtdm(rtdm_irq_t *irq_handle)
 {
-	struct imx6_pcie *imx6_pcie = arg;
+	struct imx6_pcie *imx6_pcie = rtdm_irq_get_arg(irq_handle, struct imx6_pcie);
 	struct dw_pcie *pci = imx6_pcie->pci;
 	struct pcie_port *pp = &pci->pp;
+	int ret = RTDM_IRQ_NONE;
+
+	switch (dw_handle_msi_irq(pp)) {
+	case IRQ_NONE:
+		ret = RTDM_IRQ_NONE;
+		break;
+	case IRQ_HANDLED:
+		ret = RTDM_IRQ_HANDLED;
+		break;
+	case IRQ_WAKE_THREAD:
+		// This return value should never occur, but to be safe map to RTDM_IRQ_NONE
+		ret = RTDM_IRQ_NONE;
+		break;
+	default:
+		ret = RTDM_IRQ_NONE;
+		break;
+	}
 
-	return dw_handle_msi_irq(pp);
+	return ret;
 }
 
 static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
@@ -678,10 +699,8 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
 			return -ENODEV;
 		}
 
-		ret = devm_request_irq(dev, pp->msi_irq,
-				       imx6_pcie_msi_handler,
-				       IRQF_SHARED | IRQF_NO_THREAD,
-				       "mx6-pcie-msi", imx6_pcie);
+		ret = rtdm_irq_request(&pcie_msi_irq_handle, pp->msi_irq, imx6_pcie_msi_handler_rtdm,
+				RTDM_IRQTYPE_SHARED, "mx6-pcie-msi", imx6_pcie);
 		if (ret) {
 			dev_err(dev, "failed to request MSI irq\n");
 			return ret;
diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
index a5b8dd07..d0af2cfa 100644
--- a/drivers/pci/dwc/pcie-designware-host.c
+++ b/drivers/pci/dwc/pcie-designware-host.c
@@ -72,7 +72,7 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
 		while ((pos = find_next_bit((unsigned long *) &val, 32,
 					    pos)) != 32) {
 			irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
-			generic_handle_irq(irq);
+			ipipe_handle_demuxed_irq(irq);
 			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
 					    4, 1 << pos);
 			pos++;
-- 
2.25.1



> 
> Jan
> 
> --
> Siemens AG, T RDA IOT
> Corporate Competence Center Embedded Linux




* Re: Question regarding RTnet Ethernet Driver with PCIe MSI interrupts
  2021-08-23  8:59             ` Scott Reed
@ 2021-08-23 10:27               ` Jan Kiszka
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Kiszka @ 2021-08-23 10:27 UTC (permalink / raw)
  To: Scott Reed, xenomai

On 23.08.21 10:59, Scott Reed via Xenomai wrote:
>> -----Original Message-----
>> From: Jan Kiszka <jan.kiszka@siemens.com>
>>
>> On 18.08.21 12:15, Reed, Scott via Xenomai wrote:
>>>> -----Original Message-----
>>>> From: Jan Kiszka <jan.kiszka@siemens.com>
>>>>
>>>> On 17.08.21 22:01, Jeroen Van den Keybus via Xenomai wrote:
>>>>> FWIW, in the past I have worked with e1000 NICs in RTnet that
>>>>> registered their MSI interrupt with the kernel.
>>>>>
>>>>> I do not recall that I had to do anything out of the ordinary.
>>>>>
>>>>
>>>> The main difference was likely that you were on x86 while Scott is dealing
>>>> with an i.MX6-based SoC.
>>>>
>>>> I'm not familiar with the details of MSI routing on that target. If I
>>>> understand arch/arm/boot/dts/imx6qdl.dtsi correctly from a quick look,
>>>> all MSIs will be folded by the PCI host controller into a single SPI
>>>> (120), the only interrupt source that old SoC understands. In that case,
>>>> you need to make sure that this SPI is handled by I-pipe and, indeed,
>>>> treated as a muxing IRQ. But I have no idea how the demuxing will happen
>>>> in detail. You likely need to study and then patch the PCI host
>>>> controller driver.
>>>>
>>>> Jan
>>>
>>> That is correct; I am working with an i.MX6-based SoC.
>>>
>>> For anyone who might face similar issues, here is my patch to the
>>> Linux 4.14.62 PCI host controller driver (with the Xenomai 3.0.7 patch
>>> already applied).
>>>
>>
>> Patch got mangled, as you can see below. Seems your client or server
>> decided to have tabs for lunch today.
>>
> Sorry about that! I have incorporated your suggestions from below
> and re-posted the patch, hopefully this time not mangled.
>>> Regards,
>>>
>>> Scott
>>>
>>> Subject: [PATCH] Modify PCI MSI interrupts for Xenomai RTDM
>>>
>>> Since we have Xenomai RTDM drivers which are "tied" to PCI MSI
>>> interrupts (e.g. the Altera TSE driver), we need to make the PCI MSI
>>> interrupt itself an RTDM interrupt to avoid latency when the PCI MSI
>>> interrupt is serviced.
>>>
>>> For the same reason, we need to first pass the MSI interrupt to the
>>> I-pipe handler. If there is no RTDM driver for the interrupt, the
>>> I-pipe handler then calls generic_handle_irq.
>>> ---
>>>   drivers/pci/dwc/pci-imx6.c             | 17 ++++++++++-------
>>>   drivers/pci/dwc/pcie-designware-host.c |  2 +-
>>>   2 files changed, 11 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c
>>> index b7348353..65eb9ba5 100644
>>> --- a/drivers/pci/dwc/pci-imx6.c
>>> +++ b/drivers/pci/dwc/pci-imx6.c
>>> @@ -33,6 +33,10 @@
>>>
>>>   #include "pcie-designware.h"
>>>
>>> +#include <xenomai/rtdm/driver.h>
>>> +
>>> +static rtdm_irq_t pcie_msi_irq_handle;
>>> +
>>>   #define to_imx6_pcie(x)dev_get_drvdata((x)->dev)
>>>
>>>   enum imx6_pcie_variants {
>>> @@ -545,13 +549,14 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
>>>   return -EINVAL;
>>>   }
>>>
>>> -static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
>>> +static int imx6_pcie_msi_handler_rtdm(rtdm_irq_t *irq_handle)
>>>   {
>>> -struct imx6_pcie *imx6_pcie = arg;
>>> +struct imx6_pcie *imx6_pcie = rtdm_irq_get_arg(irq_handle, struct imx6_pcie);
>>>   struct dw_pcie *pci = imx6_pcie->pci;
>>>   struct pcie_port *pp = &pci->pp;
>>>
>>> -return dw_handle_msi_irq(pp);
>>> +dw_handle_msi_irq(pp);
>>> +return RTDM_IRQ_HANDLED;
>>
>> Nasty - if dw_handle_msi_irq decides to return UNHANDLED...
>>
> Yes, I was lazy, since I (incorrectly) assumed dw_handle_msi_irq always
> returns HANDLED and therefore did not process the return value.
> This is handled in the new patch posted below.
>>>   }
>>>
>>>   static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
>>> @@ -678,10 +683,8 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
>>>   return -ENODEV;
>>>   }
>>>
>>> -ret = devm_request_irq(dev, pp->msi_irq,
>>> -       imx6_pcie_msi_handler,
>>> -       IRQF_SHARED | IRQF_NO_THREAD,
>>> -       "mx6-pcie-msi", imx6_pcie);
>>> +ret = rtdm_irq_request(&pcie_msi_irq_handle, pp->msi_irq, imx6_pcie_msi_handler_rtdm,
>>> +RTDM_IRQTYPE_SHARED, "mx6-pcie-msi", imx6_pcie);
>>>   if (ret) {
>>>   dev_err(dev, "failed to request MSI irq\n");
>>>   return ret;
>>> diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
>>> index a5b8dd07..d0af2cfa 100644
>>> --- a/drivers/pci/dwc/pcie-designware-host.c
>>> +++ b/drivers/pci/dwc/pcie-designware-host.c
>>> @@ -72,7 +72,7 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>>>   while ((pos = find_next_bit((unsigned long *) &val, 32,
>>>       pos)) != 32) {
>>>   irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
>>> -generic_handle_irq(irq);
>>> +ipipe_handle_demuxed_irq(irq);
>>>   dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
>>>       4, 1 << pos);
>>>   pos++;
>>
>> This function and likely more will no run in primary domain. But, if you
>> are unlucky, it may still use Linux locking or other incompatible
>> calls. Did you check, e.g. by enabling I-pipe debugging or by careful
>> code inspection, that there is no such call remaining? It's a classic
>> trigger for lockups etc.
> I assume your comment above should read "will now run" rather than "will no run".

Yeah, my w-key must have been broken.

> Thanks for the feedback!
> 
> Yes, I did run with I-pipe debugging and other debugging options enabled 
> and saw no issues. The configuration options I turned on are
>   IPIPE_DEBUG
>   IPIPE_DEBUG_CONTEXT
>   IPIPE_DEBUG_INTERNAL
>   XENO_OPT_DEBUG_COBALT
>   XENO_OPT_DEBUG_CONTEXT
> 
> Is this enough to be confident that there are no Linux locking or other
> incompatible calls or is a careful code inspection still necessary?

DEBUG_CONTEXT will catch a broad range of potential issues already,
but only those where the "right" (problematic) path is actually taken
during the test.

> 
> If careful code inspection is still necessary, can you give me some
> more guidance on what I should be looking for and how to generally
> mitigate problems if they exist?

Well, you generally need to dive into each function called from the
converted IRQ handler, check which locks they may take and possibly
recurse deeper.
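
For illustration, with hypothetical lock and function names, the
classic pattern to look out for and the RTDM-safe alternative (a
sketch, not drop-in code):

	static DEFINE_SPINLOCK(some_linux_lock);
	static rtdm_lock_t some_rt_lock;	/* init with rtdm_lock_init() */

	static int my_rtdm_handler(rtdm_irq_t *irq_handle)
	{
		rtdm_lockctx_t ctx;

		/* BAD from the primary domain: the lock may currently be
		 * held by preempted root-domain (Linux) code, so the
		 * real-time core can spin forever -> lockup. */
		spin_lock(&some_linux_lock);
		spin_unlock(&some_linux_lock);

		/* OK: RTDM spinlock, safe to take from the primary domain. */
		rtdm_lock_get_irqsave(&some_rt_lock, ctx);
		rtdm_lock_put_irqrestore(&some_rt_lock, ctx);

		return RTDM_IRQ_HANDLED;
	}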

> 
> In addition to the Ethernet Rx/Tx PCI MSI interrupts which are
> distributed to an RTDM (RTnet) Ethernet driver, we have other PCI MSI
> interrupts which are distributed to non-RTDM drivers. My understanding
> is that the non-RTDM-based PCI MSI interrupts are handled by the
> root domain (i.e. the Linux kernel) and therefore do not need to be
> checked for Linux locking or other incompatible calls. Is that correct?

Right, handlers registered by Linux against the Linux domain will
continue to run in that context.

> 
> I did inspect the PCI driver code which dispatches the MSI interrupt
> and saw that dw_handle_msi_irq looks like it calls rcu_read_lock/
> rcu_read_unlock via the irq_find_mapping call. Is this problematic?

Conceptually, in any case. Practically, it may depend on the RCU
configuration of the kernel, e.g. what actually is checked and what may
happen on unlock. I've just dropped the idea of using a kernel call with
rcu_read_lock internally over Xenomai [1] because I-pipe is not fully
compatible with that. Dovetail (5.10+) is by now, though.

Interestingly, there is also other I-pipe-converted code that calls
irq_find_mapping. Maybe there are sleeping issues there...

Jan

[1] https://xenomai.org/pipermail/xenomai/2021-August/046166.html

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux



end of thread

Thread overview: 8+ messages
2021-08-05 17:09 Question regarding RTnet Ethernet Driver with PCIe MSI interrupts Reed, Scott
2021-08-17 13:36 ` Reed, Scott
2021-08-17 20:01   ` Jeroen Van den Keybus
2021-08-18  6:21     ` Jan Kiszka
2021-08-18 10:15       ` [EXTERNAL] " Reed, Scott
2021-08-18 11:49         ` Jan Kiszka
     [not found]           ` <98779fb158c14f0d968be1f9441aad45@debo1svwi003091.corp.mooginc.com>
2021-08-23  8:59             ` Scott Reed
2021-08-23 10:27               ` Jan Kiszka
