linux-kernel.vger.kernel.org archive mirror
* PCIe MSI address is not written at pci_enable_msi_range call
@ 2016-07-11  2:32 Bharat Kumar Gogada
  2016-07-11  8:47 ` Marc Zyngier
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-11  2:32 UTC (permalink / raw)
  To: linux-pci, linux-kernel; +Cc: marc.zyngier, Arnd Bergmann, Bjorn Helgaas

Hi,

I have a query.
I see that when we use PCI_MSI_IRQ_DOMAIN to handle MSIs, the MSI address is not
written into the endpoint's PCI_MSI_ADDRESS_LO/HI registers at the pci_enable_msi_range call.

Instead, it is written at the time the endpoint driver requests the irq.

Can anyone tell me why it is handled in this manner?

Please correct me if my observation is wrong.

Thanks & Regards,
Bharat


This email and any attachments are intended for the sole use of the named recipient(s) and contain(s) confidential information that may be proprietary, privileged or copyrighted under applicable law. If you are not the intended recipient, do not read, copy, or forward this email message or any attachments. Delete this email message and any attachments immediately.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-11  2:32 PCIe MSI address is not written at pci_enable_msi_range call Bharat Kumar Gogada
@ 2016-07-11  8:47 ` Marc Zyngier
  2016-07-11  9:33   ` Bharat Kumar Gogada
  0 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-11  8:47 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel; +Cc: Arnd Bergmann, Bjorn Helgaas

On 11/07/16 03:32, Bharat Kumar Gogada wrote:
> Hi,
> 
> I have a query.
> I see that when we use PCI_MSI_IRQ_DOMAIN to handle MSI's, MSI address is not being
> written in to end point's PCI_MSI_ADDRESS_LO/HI at the call pci_enable_msi_range.
> 
> Instead it is being written at the time end point requests irq.
> 
> Can any one tell the reason why is it handled in this manner ?

Because there is no real need to do it earlier, and in some cases you
cannot allocate MSIs at that stage. pci_enable_msi_range only works out
how many vectors are required. At least one MSI controller (the GICv3 ITS)
needs to know how many vectors are required before they can be provided
to the endpoint.
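For illustration, this two-phase flow can be sketched as a tiny userspace model (all names are invented; this is not the real kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the two-phase MSI setup: the "enable" step only
 * negotiates a vector count, while the message address/data are
 * written later, when the interrupt is activated (request_irq time).
 */
struct fake_ep {
	int  nvec;         /* vectors granted at enable time     */
	bool msg_written;  /* has the MSI address/data been set? */
};

/* Phase 1: work out how many vectors are required, nothing more. */
static int fake_enable_msi_range(struct fake_ep *ep, int minvec, int maxvec)
{
	ep->nvec = (maxvec >= minvec) ? maxvec : -1;
	ep->msg_written = false;  /* address/data intentionally untouched */
	return ep->nvec;
}

/* Phase 2: activation composes and writes the MSI message. */
static void fake_request_irq(struct fake_ep *ep)
{
	ep->msg_written = true;
}

static bool written_after_enable(void)
{
	struct fake_ep ep = { 0 };

	fake_enable_msi_range(&ep, 1, 4);
	return ep.msg_written;  /* false: the gap being discussed */
}

static bool written_after_request(void)
{
	struct fake_ep ep = { 0 };

	fake_enable_msi_range(&ep, 1, 4);
	fake_request_irq(&ep);
	return ep.msg_written;
}
```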

Do you see any issue with this?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-11  8:47 ` Marc Zyngier
@ 2016-07-11  9:33   ` Bharat Kumar Gogada
  2016-07-11 10:21     ` Marc Zyngier
  2016-07-12 15:56     ` Marc Zyngier
  0 siblings, 2 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-11  9:33 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

Hi Marc,

Thanks for the reply.

From the PCIe spec:
MSI Enable Bit:
If 1 and the MSI-X Enable bit in the MSI-X Message
Control register (see Section 6.8.2.3) is 0, the
function is permitted to use MSI to request service
and is prohibited from using its INTx# pin.

From the endpoint's perspective, MSI Enable = 1 indicates that MSI can be used, which means the MSI address and data fields are available/programmed.

In our SoC, whenever MSI Enable goes from 0 --> 1 the hardware latches onto the MSI address and MSI data values.

With the current MSI implementation in the kernel, our SoC latches on to incorrect address and data values, as the address/data
are updated much later than the MSI Enable bit.
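The latching behavior described above can be modeled in a few lines of purely illustrative userspace C (the address value is an arbitrary stand-in):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of an endpoint that samples the MSI address only on the
 * MSI Enable 0 -> 1 transition, as described for this SoC. */
struct latch_ep {
	bool     enable;
	uint64_t addr;     /* PCI_MSI_ADDRESS_LO/HI as seen in config space */
	uint64_t latched;  /* the value the SoC logic actually uses         */
};

static void write_msi_enable(struct latch_ep *ep, bool on)
{
	if (!ep->enable && on)
		ep->latched = ep->addr;  /* latch only on 0 -> 1 */
	ep->enable = on;
}

/* Current kernel ordering: enable first, program the address later. */
static uint64_t stale_latch_demo(void)
{
	struct latch_ep ep = { 0 };

	write_msi_enable(&ep, true);  /* latches addr == 0        */
	ep.addr = 0xfee00000;         /* arbitrary example target */
	return ep.latched;            /* still 0: stale value     */
}
```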

Thanks & Regards,
Bharat

> -----Original Message-----
> From: Marc Zyngier [mailto:marc.zyngier@arm.com]
> Sent: Monday, July 11, 2016 2:18 PM
> To: Bharat Kumar Gogada <bharatku@xilinx.com>; linux-
> pci@vger.kernel.org; linux-kernel@vger.kernel.org
> Cc: Arnd Bergmann <arnd@arndb.de>; Bjorn Helgaas
> <bhelgaas@google.com>
> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>
> On 11/07/16 03:32, Bharat Kumar Gogada wrote:
> > Hi,
> >
> > I have a query.
> > I see that when we use PCI_MSI_IRQ_DOMAIN to handle MSI's, MSI
> address is not being
> > written in to end point's PCI_MSI_ADDRESS_LO/HI at the call
> pci_enable_msi_range.
> >
> > Instead it is being written at the time end point requests irq.
> >
> > Can any one tell the reason why is it handled in this manner ?
>
> Because there is no real need to do it earlier, and in some case you
> cannot allocate MSIs at that stage. pci_enable_msi_range only works out
> how many vectors are required. At least one MSI controller (GICv3 ITS)
> needs to know how many vectors are required before they can be provided
> to the end-point.
>
> Do you see any issue with this?
>
> Thanks,
>
>       M.
> --
> Jazz is not dead. It just smells funny...



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-11  9:33   ` Bharat Kumar Gogada
@ 2016-07-11 10:21     ` Marc Zyngier
  2016-07-11 10:51       ` Bharat Kumar Gogada
  2016-07-12 15:56     ` Marc Zyngier
  1 sibling, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-11 10:21 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

[Please don't top-post]

On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> Hi Marc,
> 
> Thanks for the reply.
> 
> From PCIe Spec:
> MSI Enable Bit:
> If 1 and the MSI-X Enable bit in the MSI-X Message
> Control register (see Section 6.8.2.3) is 0, the
> function is permitted to use MSI to request service
> and is prohibited from using its INTx# pin.
> 
> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used which means MSI address and data fields are available/programmed.
> 
> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware latches onto MSI address and MSI data values.
> 
> With current MSI implementation in kernel, our SoC is latching on to incorrect address and data values, as address/data
> are updated much later than MSI Enable bit.

Interesting. It looks like we're doing something wrong in the MSI flow.
Can you confirm that this is limited to MSI and doesn't affect MSI-X?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-11 10:21     ` Marc Zyngier
@ 2016-07-11 10:51       ` Bharat Kumar Gogada
  2016-07-11 15:50         ` Marc Zyngier
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-11 10:51 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

> > Hi Marc,
> >
> > Thanks for the reply.
> >
> > From PCIe Spec:
> > MSI Enable Bit:
> > If 1 and the MSI-X Enable bit in the MSI-X Message
> > Control register (see Section 6.8.2.3) is 0, the
> > function is permitted to use MSI to request service
> > and is prohibited from using its INTx# pin.
> >
> > From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
> which means MSI address and data fields are available/programmed.
> >
> > In our SoC whenever MSI Enable goes from 0 --> 1 the hardware latches
> onto MSI address and MSI data values.
> >
> > With current MSI implementation in kernel, our SoC is latching on to
> incorrect address and data values, as address/data
> > are updated much later than MSI Enable bit.
>
> Interesting. It looks like we're doing something wrong in the MSI flow.
> Can you confirm that this is limited to MSI and doesn't affect MSI-X?
>

I think it's the same issue irrespective of MSI or MSI-X, as we are enabling these interrupts before providing
the vectors.

So we always have a window in which MSI/MSI-X Enable is 1 but the software driver has not yet registered the irq, and the endpoint
may raise an interrupt (perhaps due to an error) at this point in time.

Thanks & Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-11 10:51       ` Bharat Kumar Gogada
@ 2016-07-11 15:50         ` Marc Zyngier
  2016-07-12  9:11           ` Bharat Kumar Gogada
  0 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-11 15:50 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

On 11/07/16 11:51, Bharat Kumar Gogada wrote:
>>> Hi Marc,
>>>
>>> Thanks for the reply.
>>>
>>> From PCIe Spec:
>>> MSI Enable Bit:
>>> If 1 and the MSI-X Enable bit in the MSI-X Message
>>> Control register (see Section 6.8.2.3) is 0, the
>>> function is permitted to use MSI to request service
>>> and is prohibited from using its INTx# pin.
>>>
>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
>> which means MSI address and data fields are available/programmed.
>>>
>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware latches
>> onto MSI address and MSI data values.
>>>
>>> With current MSI implementation in kernel, our SoC is latching on to
>> incorrect address and data values, as address/data
>>> are updated much later than MSI Enable bit.
>>
>> Interesting. It looks like we're doing something wrong in the MSI flow.
>> Can you confirm that this is limited to MSI and doesn't affect MSI-X?
>>
> I think it's the same issue irrespective of MSI or MSI-X as we are
> enabling these interrupts before providing the  vectors.
> 
> So we always have a hole when MSI/MSI-X is 1, and software driver has
> not registered the irq, and End Point may raise an interrupt (may be
> due to error) in this point of time.

Looking at the MSI-X part of the code, there is this:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/msi.c#n764

which hints that it may not be possible to do otherwise. Damned if you
do, damned if you don't.
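The approach the linked MSI-X code hints at — enable the function with everything masked, program the entries, then unmask — can be sketched as a toy userspace model (field names are invented; this is not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of MSI-X bring-up with the function mask held while the
 * vector table is programmed. Purely illustrative. */
struct msix_ep {
	bool enabled;
	bool maskall;
	bool entry_programmed;
	int  spurious;  /* messages sent from an unprogrammed entry */
};

/* The device tries to signal an interrupt (e.g. an early error). */
static void device_raise(struct msix_ep *ep)
{
	if (ep->enabled && !ep->maskall && !ep->entry_programmed)
		ep->spurious++;
}

static int masked_bringup(void)
{
	struct msix_ep ep = { 0 };

	ep.enabled = true;
	ep.maskall = true;          /* enable and mask-all set together */
	device_raise(&ep);          /* early interrupt: safely masked   */
	ep.entry_programmed = true; /* program the vector table         */
	ep.maskall = false;         /* unmask only once ready           */
	return ep.spurious;
}

static int unmasked_bringup(void)
{
	struct msix_ep ep = { 0 };

	ep.enabled = true;  /* no mask-all protection       */
	device_raise(&ep);  /* stale/garbage message lands  */
	return ep.spurious;
}
```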

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-11 15:50         ` Marc Zyngier
@ 2016-07-12  9:11           ` Bharat Kumar Gogada
  2016-07-12 14:28             ` Marc Zyngier
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-12  9:11 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>
> On 11/07/16 11:51, Bharat Kumar Gogada wrote:
> >>> Hi Marc,
> >>>
> >>> Thanks for the reply.
> >>>
> >>> From PCIe Spec:
> >>> MSI Enable Bit:
> >>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
> >>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
> >>> request service and is prohibited from using its INTx# pin.
> >>>
> >>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
> >> which means MSI address and data fields are available/programmed.
> >>>
> >>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
> >>> latches
> >> onto MSI address and MSI data values.
> >>>
> >>> With current MSI implementation in kernel, our SoC is latching on to
> >> incorrect address and data values, as address/data
> >>> are updated much later than MSI Enable bit.
> >>
> >> Interesting. It looks like we're doing something wrong in the MSI flow.
> >> Can you confirm that this is limited to MSI and doesn't affect MSI-X?
> >>
> > I think it's the same issue irrespective of MSI or MSI-X as we are
> > enabling these interrupts before providing the  vectors.
> >
> > So we always have a hole when MSI/MSI-X is 1, and software driver has
> > not registered the irq, and End Point may raise an interrupt (may be
> > due to error) in this point of time.
>
> Looking at the MSI-X part of the code, there is this:
>
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/pci
> /msi.c#n764
>
> which hints that it may not be possible to do otherwise. Damned if you do,
> damned if you don't.
>
MSI-X might not have this problem then; how do we resolve the issue with MSI?

Thanks & Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-12  9:11           ` Bharat Kumar Gogada
@ 2016-07-12 14:28             ` Marc Zyngier
  0 siblings, 0 replies; 22+ messages in thread
From: Marc Zyngier @ 2016-07-12 14:28 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel; +Cc: Arnd Bergmann, Bjorn Helgaas

On 12/07/16 10:11, Bharat Kumar Gogada wrote:
>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>>
>> On 11/07/16 11:51, Bharat Kumar Gogada wrote:
>>>>> Hi Marc,
>>>>>
>>>>> Thanks for the reply.
>>>>>
>>>>> From PCIe Spec:
>>>>> MSI Enable Bit:
>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
>>>>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
>>>>> request service and is prohibited from using its INTx# pin.
>>>>>
>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
>>>> which means MSI address and data fields are available/programmed.
>>>>>
>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
>>>>> latches
>>>> onto MSI address and MSI data values.
>>>>>
>>>>> With current MSI implementation in kernel, our SoC is latching on to
>>>> incorrect address and data values, as address/data
>>>>> are updated much later than MSI Enable bit.
>>>>
>>>> Interesting. It looks like we're doing something wrong in the MSI flow.
>>>> Can you confirm that this is limited to MSI and doesn't affect MSI-X?
>>>>
>>> I think it's the same issue irrespective of MSI or MSI-X as we are
>>> enabling these interrupts before providing the  vectors.
>>>
>>> So we always have a hole when MSI/MSI-X is 1, and software driver has
>>> not registered the irq, and End Point may raise an interrupt (may be
>>> due to error) in this point of time.
>>
>> Looking at the MSI-X part of the code, there is this:
>>
>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/pci
>> /msi.c#n764
>>
>> which hints that it may not be possible to do otherwise. Damned if you do,
>> damned if you don't.
>>
> MSI-X might not have problem then, how to resolve the issue with MSI ?

Can you give this patch a go and let me know if that works for you?

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index a080f44..565e2a4 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1277,6 +1277,8 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 	if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS)
 		pci_msi_domain_update_chip_ops(info);
 
+	info->flags |= MSI_FLAG_ACTIVATE_EARLY;
+
 	domain = msi_create_irq_domain(fwnode, info, parent);
 	if (!domain)
 		return NULL;
diff --git a/include/linux/msi.h b/include/linux/msi.h
index 8b425c6..513b7c7 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -270,6 +270,8 @@ enum {
 	MSI_FLAG_MULTI_PCI_MSI		= (1 << 3),
 	/* Support PCI MSIX interrupts */
 	MSI_FLAG_PCI_MSIX		= (1 << 4),
+	/* Needs early activate, required for PCI */
+	MSI_FLAG_ACTIVATE_EARLY		= (1 << 5),
 };
 
 int msi_domain_set_affinity(struct irq_data *data, const struct cpumask *mask,
diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
index 38e89ce..4ed2cca 100644
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -361,6 +361,13 @@ int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
 		else
 			dev_dbg(dev, "irq [%d-%d] for MSI\n",
 				virq, virq + desc->nvec_used - 1);
+
+		if (info->flags & MSI_FLAG_ACTIVATE_EARLY) {
+			struct irq_data *irq_data;
+
+			irq_data = irq_domain_get_irq_data(domain, desc->irq);
+			irq_domain_activate_irq(irq_data);
+		}
 	}
 
 	return 0;


Thanks,

	M.

-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-11  9:33   ` Bharat Kumar Gogada
  2016-07-11 10:21     ` Marc Zyngier
@ 2016-07-12 15:56     ` Marc Zyngier
  2016-07-13  6:22       ` Bharat Kumar Gogada
  1 sibling, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-12 15:56 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel; +Cc: Arnd Bergmann, Bjorn Helgaas

On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> Hi Marc,
> 
> Thanks for the reply.
> 
> From PCIe Spec:
> MSI Enable Bit:
> If 1 and the MSI-X Enable bit in the MSI-X Message
> Control register (see Section 6.8.2.3) is 0, the
> function is permitted to use MSI to request service
> and is prohibited from using its INTx# pin.
> 
> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used which means MSI address and data fields are available/programmed.
> 
> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware latches onto MSI address and MSI data values.
> 
> With current MSI implementation in kernel, our SoC is latching on to incorrect address and data values, as address/data
> are updated much later than MSI Enable bit.

As a side question, how does setting the affinity work on this endpoint
if it involves changing the address programmed in the MSI registers?
Do you expect the enable bit to be toggled around the write?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-12 15:56     ` Marc Zyngier
@ 2016-07-13  6:22       ` Bharat Kumar Gogada
  2016-07-13  8:16         ` Marc Zyngier
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13  6:22 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>
> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> > Hi Marc,
> >
> > Thanks for the reply.
> >
> > From PCIe Spec:
> > MSI Enable Bit:
> > If 1 and the MSI-X Enable bit in the MSI-X Message Control register
> > (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
> > request service and is prohibited from using its INTx# pin.
> >
> > From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
> which means MSI address and data fields are available/programmed.
> >
> > In our SoC whenever MSI Enable goes from 0 --> 1 the hardware latches
> onto MSI address and MSI data values.
> >
> > With current MSI implementation in kernel, our SoC is latching on to
> > incorrect address and data values, as address/data are updated much later
> than MSI Enable bit.
>
> As a side question, how does setting the affinity work on this end-point if this
> involves changing the address programmed in the MSI registers?
> Do you expect the enabled bit to be toggled to around the write?
>

Yes.
Also, if anybody changed the MSI address in between, wouldn't it cause a race condition?

Thanks & Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  6:22       ` Bharat Kumar Gogada
@ 2016-07-13  8:16         ` Marc Zyngier
  2016-07-13  8:33           ` Bharat Kumar Gogada
  0 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-13  8:16 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

On 13/07/16 07:22, Bharat Kumar Gogada wrote:
>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>>
>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
>>> Hi Marc,
>>>
>>> Thanks for the reply.
>>>
>>> From PCIe Spec:
>>> MSI Enable Bit:
>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
>>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
>>> request service and is prohibited from using its INTx# pin.
>>>
>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
>> which means MSI address and data fields are available/programmed.
>>>
>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware latches
>> onto MSI address and MSI data values.
>>>
>>> With current MSI implementation in kernel, our SoC is latching on to
>>> incorrect address and data values, as address/data are updated much later
>> than MSI Enable bit.
>>
>> As a side question, how does setting the affinity work on this end-point if this
>> involves changing the address programmed in the MSI registers?
>> Do you expect the enabled bit to be toggled to around the write?
>>
> 
> Yes,

Well, that's pretty annoying, as this will not work either. But maybe
your MSI controller has a single doorbell? You haven't mentioned which
HW that is...

> Would anybody change MSI address in between wouldn't it cause race condition ?

Changing the affinity of an interrupt is always racy, and the kernel
deals with it.
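For completeness, the enable-toggle scheme confirmed above (re-latching the address on each 0 -> 1 transition) can be modeled like this (illustrative values only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model: retargeting the MSI by toggling MSI Enable around the
 * address rewrite, so the 0 -> 1 edge re-latches the new value. */
struct ep {
	bool     enable;
	uint64_t addr;
	uint64_t latched;
};

static void set_enable(struct ep *e, bool on)
{
	if (!e->enable && on)
		e->latched = e->addr;
	e->enable = on;
}

static uint64_t retarget(void)
{
	struct ep e = { 0 };

	e.addr = 0x1000;       /* arbitrary initial doorbell */
	set_enable(&e, true);  /* latches 0x1000             */

	/* Affinity change: the window between disable and re-enable is
	 * where the race discussed above lives. */
	set_enable(&e, false);
	e.addr = 0x2000;       /* arbitrary new doorbell */
	set_enable(&e, true);  /* re-latches 0x2000      */

	return e.latched;
}
```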

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  8:16         ` Marc Zyngier
@ 2016-07-13  8:33           ` Bharat Kumar Gogada
  2016-07-13  8:37             ` Marc Zyngier
  2016-07-20 12:19             ` Marc Zyngier
  0 siblings, 2 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13  8:33 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>
> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
> >> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
> >> call
> >>
> >> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> >>> Hi Marc,
> >>>
> >>> Thanks for the reply.
> >>>
> >>> From PCIe Spec:
> >>> MSI Enable Bit:
> >>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
> >>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
> >>> request service and is prohibited from using its INTx# pin.
> >>>
> >>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
> >> which means MSI address and data fields are available/programmed.
> >>>
> >>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
> >>> latches
> >> onto MSI address and MSI data values.
> >>>
> >>> With current MSI implementation in kernel, our SoC is latching on to
> >>> incorrect address and data values, as address/data are updated much
> >>> later
> >> than MSI Enable bit.
> >>
> >> As a side question, how does setting the affinity work on this
> >> end-point if this involves changing the address programmed in the MSI
> registers?
> >> Do you expect the enabled bit to be toggled to around the write?
> >>
> >
> > Yes,
>
> Well, that's pretty annoying, as this will not work either. But maybe your MSI
> controller has a single doorbell? You haven't mentioned which HW that is...
>
The MSI address/data registers are located in config space. In our SoC, for the logic behind PCIe
to become aware of a new address/data pair, the MSI Enable transition (0 to 1) is used.
The logic cannot keep polling these registers in configuration space, as that would consume power.

So the logic uses the transition of MSI Enable to latch on to the address/data.

> > Would anybody change MSI address in between wouldn't it cause race
> condition ?
>
> Changing the affinity of an interrupt is always racy, and the kernel deals with
> it.
>

Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  8:33           ` Bharat Kumar Gogada
@ 2016-07-13  8:37             ` Marc Zyngier
  2016-07-13  9:10               ` Bharat Kumar Gogada
  2016-07-20 12:19             ` Marc Zyngier
  1 sibling, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-13  8:37 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

On 13/07/16 09:33, Bharat Kumar Gogada wrote:
>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>>
>> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
>>>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
>>>> call
>>>>
>>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
>>>>> Hi Marc,
>>>>>
>>>>> Thanks for the reply.
>>>>>
>>>>> From PCIe Spec:
>>>>> MSI Enable Bit:
>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
>>>>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
>>>>> request service and is prohibited from using its INTx# pin.
>>>>>
>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
>>>> which means MSI address and data fields are available/programmed.
>>>>>
>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
>>>>> latches
>>>> onto MSI address and MSI data values.
>>>>>
>>>>> With current MSI implementation in kernel, our SoC is latching on to
>>>>> incorrect address and data values, as address/data are updated much
>>>>> later
>>>> than MSI Enable bit.
>>>>
>>>> As a side question, how does setting the affinity work on this
>>>> end-point if this involves changing the address programmed in the MSI
>> registers?
>>>> Do you expect the enabled bit to be toggled to around the write?
>>>>
>>>
>>> Yes,
>>
>> Well, that's pretty annoying, as this will not work either. But maybe your MSI
>> controller has a single doorbell? You haven't mentioned which HW that is...
>>
> The MSI address/data is located in config space, in our SoC for the logic behind PCIe
> to become aware of new address/data  MSI enable transition is used (0 to 1).
> The logic cannot keep polling these registers in configuration space as it would consume power.
> 
> So the logic uses the transition in MSI enable to latch on to address/data.

I understand the "why". I'm just wondering if your SoC needs to have
the MSI address changed when changing the affinity of the MSI? What MSI
controller are you using? Is it in mainline?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  8:37             ` Marc Zyngier
@ 2016-07-13  9:10               ` Bharat Kumar Gogada
  2016-07-13  9:19                 ` Marc Zyngier
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13  9:10 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>
> On 13/07/16 09:33, Bharat Kumar Gogada wrote:
> >> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
> >>
> >> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
> >>>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
> >>>> call
> >>>>
> >>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> >>>>> Hi Marc,
> >>>>>
> >>>>> Thanks for the reply.
> >>>>>
> >>>>> From PCIe Spec:
> >>>>> MSI Enable Bit:
> >>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
> >>>>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
> >>>>> request service and is prohibited from using its INTx# pin.
> >>>>>
> >>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
> >>>> which means MSI address and data fields are available/programmed.
> >>>>>
> >>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
> >>>>> latches
> >>>> onto MSI address and MSI data values.
> >>>>>
> >>>>> With current MSI implementation in kernel, our SoC is latching on to
> >>>>> incorrect address and data values, as address/data are updated much
> >>>>> later
> >>>> than MSI Enable bit.
> >>>>
> >>>> As a side question, how does setting the affinity work on this
> >>>> end-point if this involves changing the address programmed in the MSI
> >> registers?
> >>>> Do you expect the enabled bit to be toggled to around the write?
> >>>>
> >>>
> >>> Yes,
> >>
> >> Well, that's pretty annoying, as this will not work either. But maybe your
> MSI
> >> controller has a single doorbell? You haven't mentioned which HW that
> is...
> >>
> > The MSI address/data is located in config space, in our SoC for the logic
> behind PCIe
> > to become aware of new address/data  MSI enable transition is used (0 to
> 1).
> > The logic cannot keep polling these registers in configuration space as it
> would consume power.
> >
> > So the logic uses the transition in MSI enable to latch on to address/data.
>
> I understand the "why". I'm just wondering if your SoC needs to have
> the MSI address changed when changing the affinity of the MSI? What MSI
> controller are you using? Is it in mainline?
>
Can you please give more information on MSI affinity?
For CPU affinity of interrupts we would use MSI-X.

We are using a GIC-400 (GICv2).

Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  9:10               ` Bharat Kumar Gogada
@ 2016-07-13  9:19                 ` Marc Zyngier
  2016-07-13  9:36                   ` Bharat Kumar Gogada
  0 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-13  9:19 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

On 13/07/16 10:10, Bharat Kumar Gogada wrote:
>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>>
>> On 13/07/16 09:33, Bharat Kumar Gogada wrote:
>>>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>>>>
>>>> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
>>>>>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
>>>>>> call
>>>>>>
>>>>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
>>>>>>> Hi Marc,
>>>>>>>
>>>>>>> Thanks for the reply.
>>>>>>>
>>>>>>> From PCIe Spec:
>>>>>>> MSI Enable Bit:
>>>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
>>>>>>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
>>>>>>> request service and is prohibited from using its INTx# pin.
>>>>>>>
>>>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
>>>>>> which means MSI address and data fields are available/programmed.
>>>>>>>
>>>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
>>>>>>> latches
>>>>>> onto MSI address and MSI data values.
>>>>>>>
>>>>>>> With current MSI implementation in kernel, our SoC is latching on to
>>>>>>> incorrect address and data values, as address/data are updated much
>>>>>>> later
>>>>>> than MSI Enable bit.
>>>>>>
>>>>>> As a side question, how does setting the affinity work on this
>>>>>> end-point if this involves changing the address programmed in the MSI
>>>> registers?
>>>>>> Do you expect the enabled bit to be toggled to around the write?
>>>>>>
>>>>>
>>>>> Yes,
>>>>
>>>> Well, that's pretty annoying, as this will not work either. But maybe your
>> MSI
>>>> controller has a single doorbell? You haven't mentioned which HW that
>> is...
>>>>
>>> The MSI address/data is located in config space, in our SoC for the logic
>> behind PCIe
>>> to become aware of new address/data  MSI enable transition is used (0 to
>> 1).
>>> The logic cannot keep polling these registers in configuration space as it
>> would consume power.
>>>
>>> So the logic uses the transition in MSI enable to latch on to address/data.
>>
>> I understand the "why". I'm just wondering if your SoC needs to have
>> the MSI address changed when changing the affinity of the MSI? What MSI
>> controller are you using? Is it in mainline?
>>
> Can you please give more information on MSI affinity ?
> For cpu affinity for interrupts we would use MSI-X.
> 
> We are using GIC 400 v2.

None of that is relevant. GIC400 doesn't have the faintest notion of
what an MSI is, and MSI-X vs MSI is an end-point property.

Please answer these questions: does your MSI controller have a unique
doorbell, or multiple doorbells? Does it use wired interrupts (SPIs)
connected to the GIC? Is the support code for this MSI controller in
mainline or not?

I'm trying to work out what I can do to help you.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  9:19                 ` Marc Zyngier
@ 2016-07-13  9:36                   ` Bharat Kumar Gogada
  2016-07-13  9:40                     ` Marc Zyngier
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13  9:36 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>
> On 13/07/16 10:10, Bharat Kumar Gogada wrote:
> >> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
> >> call
> >>
> >> On 13/07/16 09:33, Bharat Kumar Gogada wrote:
> >>>> Subject: Re: PCIe MSI address is not written at
> >>>> pci_enable_msi_range call
> >>>>
> >>>> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
> >>>>>> Subject: Re: PCIe MSI address is not written at
> >>>>>> pci_enable_msi_range call
> >>>>>>
> >>>>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> >>>>>>> Hi Marc,
> >>>>>>>
> >>>>>>> Thanks for the reply.
> >>>>>>>
> >>>>>>> From PCIe Spec:
> >>>>>>> MSI Enable Bit:
> >>>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control
> >>>>>>> register (see Section 6.8.2.3) is 0, the function is permitted
> >>>>>>> to use MSI to request service and is prohibited from using its INTx#
> pin.
> >>>>>>>
> >>>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be
> >>>>>>> used
> >>>>>> which means MSI address and data fields are available/programmed.
> >>>>>>>
> >>>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
> >>>>>>> latches
> >>>>>> onto MSI address and MSI data values.
> >>>>>>>
> >>>>>>> With current MSI implementation in kernel, our SoC is latching
> >>>>>>> on to incorrect address and data values, as address/data are
> >>>>>>> updated much later
> >>>>>> than MSI Enable bit.
> >>>>>>
> >>>>>> As a side question, how does setting the affinity work on this
> >>>>>> end-point if this involves changing the address programmed in the
> >>>>>> MSI
> >>>> registers?
> >>>>>> Do you expect the enabled bit to be toggled to around the write?
> >>>>>>
> >>>>>
> >>>>> Yes,
> >>>>
> >>>> Well, that's pretty annoying, as this will not work either. But
> >>>> maybe your
> >> MSI
> >>>> controller has a single doorbell? You haven't mentioned which HW
> >>>> that
> >> is...
> >>>>
> >>> The MSI address/data is located in config space, in our SoC for the
> >>> logic
> >> behind PCIe
> >>> to become aware of new address/data  MSI enable transition is used
> >>> (0 to
> >> 1).
> >>> The logic cannot keep polling these registers in configuration space
> >>> as it
> >> would consume power.
> >>>
> >>> So the logic uses the transition in MSI enable to latch on to address/data.
> >>
> >> I understand the "why". I'm just wondering if your SoC needs to have
> >> the MSI address changed when changing the affinity of the MSI? What
> >> MSI controller are you using? Is it in mainline?
> >>
> > Can you please give more information on MSI affinity ?
> > For cpu affinity for interrupts we would use MSI-X.
> >
> > We are using GIC 400 v2.
>
> None of that is relevant. GIC400 doesn't have the faintest notion of what an
> MSI is, and MSI-X vs MSI is an end-point property.
>
> Please answer these questions: does your MSI controller have a unique
> doorbell, or multiple doorbells? Does it use wired interrupts (SPIs) connected
> to the GIC? Is the support code for this MSI controller in mainline or not?
>

It has a single doorbell.
The MSI decoding is part of our PCIe bridge, which raises an SPI to the GIC.
Our root port driver is in mainline: drivers/pci/host/pcie-xilinx-nwl.c
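[Editorial note: the single-doorbell scheme described here can be sketched as follows. This is an illustrative Python model, not code from pcie-xilinx-nwl.c; the doorbell address and function names are invented for the sketch.]

```python
# Model of a single-doorbell MSI controller: every MSI from every endpoint
# targets one fixed address, and only the data payload (the hwirq number)
# distinguishes interrupts. The doorbell raises one wired SPI to the GIC.

DOORBELL_ADDR = 0xFE440000  # hypothetical fixed doorbell address


def compose_msi_msg(hwirq):
    """Compose the address/data pair programmed into an endpoint's
    MSI capability. The address never varies per interrupt."""
    return {"address": DOORBELL_ADDR, "data": hwirq}


def set_affinity(msg, cpu):
    """With a single doorbell feeding one SPI, affinity is applied at
    the GIC on that SPI, not by rewriting the MSI address. The message
    is untouched, so the endpoint's MSI registers never need a second
    write (and the enable bit never needs re-toggling)."""
    return msg
```

Because `set_affinity` leaves the message alone, a controller of this shape side-steps the question of re-latching the address on affinity changes.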

Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  9:36                   ` Bharat Kumar Gogada
@ 2016-07-13  9:40                     ` Marc Zyngier
  2016-07-13 15:34                       ` Bharat Kumar Gogada
  0 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-13  9:40 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

On 13/07/16 10:36, Bharat Kumar Gogada wrote:
>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>>
>> On 13/07/16 10:10, Bharat Kumar Gogada wrote:
>>>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
>>>> call
>>>>
>>>> On 13/07/16 09:33, Bharat Kumar Gogada wrote:
>>>>>> Subject: Re: PCIe MSI address is not written at
>>>>>> pci_enable_msi_range call
>>>>>>
>>>>>> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
>>>>>>>> Subject: Re: PCIe MSI address is not written at
>>>>>>>> pci_enable_msi_range call
>>>>>>>>
>>>>>>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
>>>>>>>>> Hi Marc,
>>>>>>>>>
>>>>>>>>> Thanks for the reply.
>>>>>>>>>
>>>>>>>>> From PCIe Spec:
>>>>>>>>> MSI Enable Bit:
>>>>>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control
>>>>>>>>> register (see Section 6.8.2.3) is 0, the function is permitted
>>>>>>>>> to use MSI to request service and is prohibited from using its INTx#
>> pin.
>>>>>>>>>
>>>>>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be
>>>>>>>>> used
>>>>>>>> which means MSI address and data fields are available/programmed.
>>>>>>>>>
>>>>>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
>>>>>>>>> latches
>>>>>>>> onto MSI address and MSI data values.
>>>>>>>>>
>>>>>>>>> With current MSI implementation in kernel, our SoC is latching
>>>>>>>>> on to incorrect address and data values, as address/data are
>>>>>>>>> updated much later
>>>>>>>> than MSI Enable bit.
>>>>>>>>
>>>>>>>> As a side question, how does setting the affinity work on this
>>>>>>>> end-point if this involves changing the address programmed in the
>>>>>>>> MSI
>>>>>> registers?
>>>>>>>> Do you expect the enabled bit to be toggled to around the write?
>>>>>>>>
>>>>>>>
>>>>>>> Yes,
>>>>>>
>>>>>> Well, that's pretty annoying, as this will not work either. But
>>>>>> maybe your
>>>> MSI
>>>>>> controller has a single doorbell? You haven't mentioned which HW
>>>>>> that
>>>> is...
>>>>>>
>>>>> The MSI address/data is located in config space, in our SoC for the
>>>>> logic
>>>> behind PCIe
>>>>> to become aware of new address/data  MSI enable transition is used
>>>>> (0 to
>>>> 1).
>>>>> The logic cannot keep polling these registers in configuration space
>>>>> as it
>>>> would consume power.
>>>>>
>>>>> So the logic uses the transition in MSI enable to latch on to address/data.
>>>>
>>>> I understand the "why". I'm just wondering if your SoC needs to have
>>>> the MSI address changed when changing the affinity of the MSI? What
>>>> MSI controller are you using? Is it in mainline?
>>>>
>>> Can you please give more information on MSI affinity ?
>>> For cpu affinity for interrupts we would use MSI-X.
>>>
>>> We are using GIC 400 v2.
>>
>> None of that is relevant. GIC400 doesn't have the faintest notion of what an
>> MSI is, and MSI-X vs MSI is an end-point property.
>>
>> Please answer these questions: does your MSI controller have a unique
>> doorbell, or multiple doorbells? Does it use wired interrupts (SPIs) connected
>> to the GIC? Is the support code for this MSI controller in mainline or not?
>>
> 
> It has single doorbell.
> The MSI decoding is part of our PCIe bridge, and it has SPI to GIC.
> Our root driver is in mainline drivers/pci/host/pcie-xilinx-nwl.c

OK, so you're not affected by this affinity-setting issue. Please let me
know whether the patch I sent yesterday improves things for you once you
have a chance to test it.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  9:40                     ` Marc Zyngier
@ 2016-07-13 15:34                       ` Bharat Kumar Gogada
  2016-07-13 15:39                         ` Marc Zyngier
  0 siblings, 1 reply; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-13 15:34 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

> On 13/07/16 10:36, Bharat Kumar Gogada wrote:
> >> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
> >> call
> >>
> >> On 13/07/16 10:10, Bharat Kumar Gogada wrote:
> >>>> Subject: Re: PCIe MSI address is not written at
> >>>> pci_enable_msi_range call
> >>>>
> >>>> On 13/07/16 09:33, Bharat Kumar Gogada wrote:
> >>>>>> Subject: Re: PCIe MSI address is not written at
> >>>>>> pci_enable_msi_range call
> >>>>>>
> >>>>>> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
> >>>>>>>> Subject: Re: PCIe MSI address is not written at
> >>>>>>>> pci_enable_msi_range call
> >>>>>>>>
> >>>>>>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> >>>>>>>>> Hi Marc,
> >>>>>>>>>
> >>>>>>>>> Thanks for the reply.
> >>>>>>>>>
> >>>>>>>>> From PCIe Spec:
> >>>>>>>>> MSI Enable Bit:
> >>>>>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control
> >>>>>>>>> register (see Section 6.8.2.3) is 0, the function is permitted
> >>>>>>>>> to use MSI to request service and is prohibited from using its
> >>>>>>>>> INTx#
> >> pin.
> >>>>>>>>>
> >>>>>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be
> >>>>>>>>> used
> >>>>>>>> which means MSI address and data fields are
> available/programmed.
> >>>>>>>>>
> >>>>>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
> >>>>>>>>> latches
> >>>>>>>> onto MSI address and MSI data values.
> >>>>>>>>>
> >>>>>>>>> With current MSI implementation in kernel, our SoC is latching
> >>>>>>>>> on to incorrect address and data values, as address/data are
> >>>>>>>>> updated much later
> >>>>>>>> than MSI Enable bit.
> >>>>>>>>
> >>>>>>>> As a side question, how does setting the affinity work on this
> >>>>>>>> end-point if this involves changing the address programmed in
> >>>>>>>> the MSI
> >>>>>> registers?
> >>>>>>>> Do you expect the enabled bit to be toggled to around the write?
> >>>>>>>>
> >>>>>>>
> >>>>>>> Yes,
> >>>>>>
> >>>>>> Well, that's pretty annoying, as this will not work either. But
> >>>>>> maybe your
> >>>> MSI
> >>>>>> controller has a single doorbell? You haven't mentioned which HW
> >>>>>> that
> >>>> is...
> >>>>>>
> >>>>> The MSI address/data is located in config space, in our SoC for
> >>>>> the logic
> >>>> behind PCIe
> >>>>> to become aware of new address/data  MSI enable transition is used
> >>>>> (0 to
> >>>> 1).
> >>>>> The logic cannot keep polling these registers in configuration
> >>>>> space as it
> >>>> would consume power.
> >>>>>
> >>>>> So the logic uses the transition in MSI enable to latch on to
> address/data.
> >>>>
> >>>> I understand the "why". I'm just wondering if your SoC needs to
> >>>> have the MSI address changed when changing the affinity of the MSI?
> >>>> What MSI controller are you using? Is it in mainline?
> >>>>
> >>> Can you please give more information on MSI affinity ?
> >>> For cpu affinity for interrupts we would use MSI-X.
> >>>
> >>> We are using GIC 400 v2.
> >>
> >> None of that is relevant. GIC400 doesn't have the faintest notion of
> >> what an MSI is, and MSI-X vs MSI is an end-point property.
> >>
> >> Please answer these questions: does your MSI controller have a unique
> >> doorbell, or multiple doorbells? Does it use wired interrupts (SPIs)
> >> connected to the GIC? Is the support code for this MSI controller in
> mainline or not?
> >>
> >
> > It has single doorbell.
> > The MSI decoding is part of our PCIe bridge, and it has SPI to GIC.
> > Our root driver is in mainline drivers/pci/host/pcie-xilinx-nwl.c
>
> OK, so you're not affected by this affinity setting issue. Please let me know if
> the patch I sent yesterday improve things for you once you have a chance to
> test it.
>
Hi Marc,

I tested with the patch you provided, and it is now working for us.

Can you please point me to any documentation on MSI affinity? Until now we
had only come across affinity in the context of MSI-X; I will explore it further.

Thanks for your help.

Regards,
Bharat




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13 15:34                       ` Bharat Kumar Gogada
@ 2016-07-13 15:39                         ` Marc Zyngier
  0 siblings, 0 replies; 22+ messages in thread
From: Marc Zyngier @ 2016-07-13 15:39 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter

On 13/07/16 16:34, Bharat Kumar Gogada wrote:
>> On 13/07/16 10:36, Bharat Kumar Gogada wrote:
>>>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
>>>> call
>>>>
>>>> On 13/07/16 10:10, Bharat Kumar Gogada wrote:
>>>>>> Subject: Re: PCIe MSI address is not written at
>>>>>> pci_enable_msi_range call
>>>>>>
>>>>>> On 13/07/16 09:33, Bharat Kumar Gogada wrote:
>>>>>>>> Subject: Re: PCIe MSI address is not written at
>>>>>>>> pci_enable_msi_range call
>>>>>>>>
>>>>>>>> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
>>>>>>>>>> Subject: Re: PCIe MSI address is not written at
>>>>>>>>>> pci_enable_msi_range call
>>>>>>>>>>
>>>>>>>>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
>>>>>>>>>>> Hi Marc,
>>>>>>>>>>>
>>>>>>>>>>> Thanks for the reply.
>>>>>>>>>>>
>>>>>>>>>>> From PCIe Spec:
>>>>>>>>>>> MSI Enable Bit:
>>>>>>>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control
>>>>>>>>>>> register (see Section 6.8.2.3) is 0, the function is permitted
>>>>>>>>>>> to use MSI to request service and is prohibited from using its
>>>>>>>>>>> INTx#
>>>> pin.
>>>>>>>>>>>
>>>>>>>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be
>>>>>>>>>>> used
>>>>>>>>>> which means MSI address and data fields are
>> available/programmed.
>>>>>>>>>>>
>>>>>>>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
>>>>>>>>>>> latches
>>>>>>>>>> onto MSI address and MSI data values.
>>>>>>>>>>>
>>>>>>>>>>> With current MSI implementation in kernel, our SoC is latching
>>>>>>>>>>> on to incorrect address and data values, as address/data are
>>>>>>>>>>> updated much later
>>>>>>>>>> than MSI Enable bit.
>>>>>>>>>>
>>>>>>>>>> As a side question, how does setting the affinity work on this
>>>>>>>>>> end-point if this involves changing the address programmed in
>>>>>>>>>> the MSI
>>>>>>>> registers?
>>>>>>>>>> Do you expect the enabled bit to be toggled to around the write?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Yes,
>>>>>>>>
>>>>>>>> Well, that's pretty annoying, as this will not work either. But
>>>>>>>> maybe your
>>>>>> MSI
>>>>>>>> controller has a single doorbell? You haven't mentioned which HW
>>>>>>>> that
>>>>>> is...
>>>>>>>>
>>>>>>> The MSI address/data is located in config space, in our SoC for
>>>>>>> the logic
>>>>>> behind PCIe
>>>>>>> to become aware of new address/data  MSI enable transition is used
>>>>>>> (0 to
>>>>>> 1).
>>>>>>> The logic cannot keep polling these registers in configuration
>>>>>>> space as it
>>>>>> would consume power.
>>>>>>>
>>>>>>> So the logic uses the transition in MSI enable to latch on to
>> address/data.
>>>>>>
>>>>>> I understand the "why". I'm just wondering if your SoC needs to
>>>>>> have the MSI address changed when changing the affinity of the MSI?
>>>>>> What MSI controller are you using? Is it in mainline?
>>>>>>
>>>>> Can you please give more information on MSI affinity ?
>>>>> For cpu affinity for interrupts we would use MSI-X.
>>>>>
>>>>> We are using GIC 400 v2.
>>>>
>>>> None of that is relevant. GIC400 doesn't have the faintest notion of
>>>> what an MSI is, and MSI-X vs MSI is an end-point property.
>>>>
>>>> Please answer these questions: does your MSI controller have a unique
>>>> doorbell, or multiple doorbells? Does it use wired interrupts (SPIs)
>>>> connected to the GIC? Is the support code for this MSI controller in
>> mainline or not?
>>>>
>>>
>>> It has single doorbell.
>>> The MSI decoding is part of our PCIe bridge, and it has SPI to GIC.
>>> Our root driver is in mainline drivers/pci/host/pcie-xilinx-nwl.c
>>
>> OK, so you're not affected by this affinity setting issue. Please let me know if
>> the patch I sent yesterday improve things for you once you have a chance to
>> test it.
>>
> Hi Marc,
> 
> I tested with the patch you provided, now it is working for us.

Thanks, I'll repost this as a proper patch with your Tested-by.

> Can you please point to any doc related to affinity in MSI, until now we
> came across affinity for MSI-X. I will explore more on it.

I don't have anything at hand, but simply look at how MSI (and MSI-X) is
implemented on x86, for example: each CPU has its own doorbell, and
changing the affinity of a MSI is done by changing the target address of
that interrupt. And it doesn't seem that the kernel switches the Enable
bit off and on for those.
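
[Editorial note: the x86 scheme described above can be sketched as a model. This is illustrative only; the address layout below is a simplification of an x86-like per-CPU doorbell encoding, and all names are invented.]

```python
# Multi-doorbell model: each CPU has its own doorbell address, so
# retargeting an MSI means rewriting the address in the endpoint's MSI
# capability. MSI Enable is never toggled in the process.

def doorbell_for_cpu(cpu):
    # Simplified per-CPU doorbell encoding (x86-like, illustrative).
    return 0xFEE00000 | (cpu << 12)


class MsiCapability:
    """Minimal stand-in for the MSI capability registers in config space."""
    def __init__(self):
        self.enable = 1   # already enabled; stays enabled throughout
        self.address = doorbell_for_cpu(0)
        self.data = 0


def set_affinity(cap, vector, cpu):
    """Steer the interrupt to another CPU by rewriting address/data only."""
    cap.address = doorbell_for_cpu(cpu)
    cap.data = vector
    # cap.enable is deliberately left alone -- no 0->1 transition occurs,
    # which is exactly what breaks hardware that latches on that edge.
```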

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-13  8:33           ` Bharat Kumar Gogada
  2016-07-13  8:37             ` Marc Zyngier
@ 2016-07-20 12:19             ` Marc Zyngier
  2016-07-27 11:14               ` Bharat Kumar Gogada
  1 sibling, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2016-07-20 12:19 UTC (permalink / raw)
  To: Bharat Kumar Gogada, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter, Thomas Gleixner

+tglx

On 13/07/16 09:33, Bharat Kumar Gogada wrote:
>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>>
>> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
>>>> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
>>>> call
>>>>
>>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
>>>>> Hi Marc,
>>>>>
>>>>> Thanks for the reply.
>>>>>
>>>>> From PCIe Spec:
>>>>> MSI Enable Bit:
>>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control register
>>>>> (see Section 6.8.2.3) is 0, the function is permitted to use MSI to
>>>>> request service and is prohibited from using its INTx# pin.
>>>>>
>>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be used
>>>> which means MSI address and data fields are available/programmed.
>>>>>
>>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
>>>>> latches
>>>> onto MSI address and MSI data values.
>>>>>
>>>>> With current MSI implementation in kernel, our SoC is latching on to
>>>>> incorrect address and data values, as address/data are updated much
>>>>> later
>>>> than MSI Enable bit.
>>>>
>>>> As a side question, how does setting the affinity work on this
>>>> end-point if this involves changing the address programmed in the MSI
>> registers?
>>>> Do you expect the enabled bit to be toggled to around the write?
>>>>
>>>
>>> Yes,
>>
>> Well, that's pretty annoying, as this will not work either. But maybe your MSI
>> controller has a single doorbell? You haven't mentioned which HW that is...
>>
> The MSI address/data is located in config space, in our SoC for the logic behind PCIe
> to become aware of new address/data  MSI enable transition is used (0 to 1).
> The logic cannot keep polling these registers in configuration space as it would consume power.
> 
> So the logic uses the transition in MSI enable to latch on to address/data.
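
[Editorial note: the latch-on-enable behaviour quoted above, and why the kernel's write ordering matters to it, can be modelled like this. A hedged sketch; the class and register names are invented, not taken from any driver.]

```python
class LatchingMsiLogic:
    """Models SoC logic that samples MSI address/data from config space
    only on the 0 -> 1 transition of the MSI Enable bit."""

    def __init__(self):
        self.cfg = {"enable": 0, "addr": 0x0, "data": 0x0}
        self.latched = None  # what the interrupt logic will actually use

    def cfg_write(self, reg, val):
        rising = (reg == "enable" and self.cfg["enable"] == 0 and val == 1)
        self.cfg[reg] = val
        if rising:
            self.latched = (self.cfg["addr"], self.cfg["data"])


def program_msi(dev, addr, data, enable_first):
    """enable_first=True mimics the ordering the thread complains about:
    the enable bit is set before the message is written, so the hardware
    latches the stale (zero) address/data."""
    if enable_first:
        writes = [("enable", 1), ("addr", addr), ("data", data)]
    else:
        writes = [("addr", addr), ("data", data), ("enable", 1)]
    for reg, val in writes:
        dev.cfg_write(reg, val)
    return dev.latched
```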

A couple of additional questions:

Does your HW support MSI masking? And if it does, does it resample the
address/data on unmask?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: PCIe MSI address is not written at pci_enable_msi_range call
  2016-07-20 12:19             ` Marc Zyngier
@ 2016-07-27 11:14               ` Bharat Kumar Gogada
  0 siblings, 0 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-27 11:14 UTC (permalink / raw)
  To: Marc Zyngier, linux-pci, linux-kernel
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter, Thomas Gleixner

> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range call
>
> +tglx
>
> On 13/07/16 09:33, Bharat Kumar Gogada wrote:
> >> Subject: Re: PCIe MSI address is not written at pci_enable_msi_range
> >> call
> >>
> >> On 13/07/16 07:22, Bharat Kumar Gogada wrote:
> >>>> Subject: Re: PCIe MSI address is not written at
> >>>> pci_enable_msi_range call
> >>>>
> >>>> On 11/07/16 10:33, Bharat Kumar Gogada wrote:
> >>>>> Hi Marc,
> >>>>>
> >>>>> Thanks for the reply.
> >>>>>
> >>>>> From PCIe Spec:
> >>>>> MSI Enable Bit:
> >>>>> If 1 and the MSI-X Enable bit in the MSI-X Message Control
> >>>>> register (see Section 6.8.2.3) is 0, the function is permitted to
> >>>>> use MSI to request service and is prohibited from using its INTx# pin.
> >>>>>
> >>>>> From Endpoint perspective, MSI Enable = 1 indicates MSI can be
> >>>>> used
> >>>> which means MSI address and data fields are available/programmed.
> >>>>>
> >>>>> In our SoC whenever MSI Enable goes from 0 --> 1 the hardware
> >>>>> latches
> >>>> onto MSI address and MSI data values.
> >>>>>
> >>>>> With current MSI implementation in kernel, our SoC is latching on
> >>>>> to incorrect address and data values, as address/data are updated
> >>>>> much later
> >>>> than MSI Enable bit.
> >>>>
> >>>> As a side question, how does setting the affinity work on this
> >>>> end-point if this involves changing the address programmed in the
> >>>> MSI
> >> registers?
> >>>> Do you expect the enabled bit to be toggled to around the write?
> >>>>
> >>>
> >>> Yes,
> >>
> >> Well, that's pretty annoying, as this will not work either. But maybe
> >> your MSI controller has a single doorbell? You haven't mentioned which
> HW that is...
> >>
> > The MSI address/data is located in config space, in our SoC for the
> > logic behind PCIe to become aware of new address/data  MSI enable
> transition is used (0 to 1).
> > The logic cannot keep polling these registers in configuration space as it
> would consume power.
> >
> > So the logic uses the transition in MSI enable to latch on to address/data.
>
> A couple of additional questions:
>
> Does your HW support MSI masking? And if it does, does it resample the
> address/data on unmask?
>
No, we do not support masking.

Regards,
Bharat




^ permalink raw reply	[flat|nested] 22+ messages in thread

* PCIe MSI address is not written at pci_enable_msi_range call
@ 2016-07-11  2:37 Bharat Kumar Gogada
  0 siblings, 0 replies; 22+ messages in thread
From: Bharat Kumar Gogada @ 2016-07-11  2:37 UTC (permalink / raw)
  To: linux-kernel, linux-pci
  Cc: Arnd Bergmann, Bjorn Helgaas, nofooter, marc.zyngier

Hi,

I have a query.
I see that when we use PCI_MSI_IRQ_DOMAIN to handle MSI's, MSI address is not being
written in to end point's PCI_MSI_ADDRESS_LO/HI at the call pci_enable_msi_range.

Instead it is being written at the time end point requests irq.

Can any one tell the reason why is it handled in this manner ?

Correct me If my observation is wrong.

Thanks & Regards,
Bharat



^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2016-07-27 11:14 UTC | newest]

Thread overview: 22+ messages
-- links below jump to the message on this page --
2016-07-11  2:32 PCIe MSI address is not written at pci_enable_msi_range call Bharat Kumar Gogada
2016-07-11  8:47 ` Marc Zyngier
2016-07-11  9:33   ` Bharat Kumar Gogada
2016-07-11 10:21     ` Marc Zyngier
2016-07-11 10:51       ` Bharat Kumar Gogada
2016-07-11 15:50         ` Marc Zyngier
2016-07-12  9:11           ` Bharat Kumar Gogada
2016-07-12 14:28             ` Marc Zyngier
2016-07-12 15:56     ` Marc Zyngier
2016-07-13  6:22       ` Bharat Kumar Gogada
2016-07-13  8:16         ` Marc Zyngier
2016-07-13  8:33           ` Bharat Kumar Gogada
2016-07-13  8:37             ` Marc Zyngier
2016-07-13  9:10               ` Bharat Kumar Gogada
2016-07-13  9:19                 ` Marc Zyngier
2016-07-13  9:36                   ` Bharat Kumar Gogada
2016-07-13  9:40                     ` Marc Zyngier
2016-07-13 15:34                       ` Bharat Kumar Gogada
2016-07-13 15:39                         ` Marc Zyngier
2016-07-20 12:19             ` Marc Zyngier
2016-07-27 11:14               ` Bharat Kumar Gogada
2016-07-11  2:37 Bharat Kumar Gogada
