* Why two irq chips for MSI
From: valmiki @ 2018-03-21 17:12 UTC
  To: linux-pci, Linux Kernel Mailing List; +Cc: Bjorn Helgaas, Marc Zyngier

Hi,

In most of the Root Port (RP) drivers, why are two irq chips being used 
for MSI?

One is set up via irq_domain_set_info() (implementing the 
irq_compose_msi_msg and irq_set_affinity methods) and another is 
registered through struct msi_domain_info (implementing the 
irq_mask/irq_unmask methods).
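
For reference, the pattern I am looking at is roughly this (an
illustrative sketch, not copied from any specific driver; all the rp_*
names are made up):

#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>

/* Chip #1: attached to each virq from the domain's .alloc callback */
static void rp_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
	/* doorbell address/payload programming would go here */
	msg->address_lo = 0;
	msg->address_hi = 0;
	msg->data = d->hwirq;
}

static int rp_set_affinity(struct irq_data *d, const struct cpumask *mask,
			   bool force)
{
	return -EINVAL;	/* often a stub in the drivers I have seen */
}

static struct irq_chip rp_bottom_chip = {
	.name			= "RP-MSI",
	.irq_compose_msi_msg	= rp_compose_msi_msg,
	.irq_set_affinity	= rp_set_affinity,
};

static int rp_domain_alloc(struct irq_domain *domain, unsigned int virq,
			   unsigned int nr_irqs, void *args)
{
	irq_hw_number_t hwirq = 0;	/* hwirq allocation elided */

	irq_domain_set_info(domain, virq, hwirq, &rp_bottom_chip,
			    domain->host_data, handle_simple_irq, NULL, NULL);
	return 0;
}

/* Chip #2: handed to the PCI/MSI layer through struct msi_domain_info */
static struct irq_chip rp_top_chip = {
	.name		= "RP-PCI-MSI",
	.irq_mask	= pci_msi_mask_irq,
	.irq_unmask	= pci_msi_unmask_irq,
};

static struct msi_domain_info rp_msi_domain_info = {
	.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS,
	.chip	= &rp_top_chip,
};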

When will each chip be used with respect to the virq?

Thanks,
Valmiki


* Re: Why two irq chips for MSI
From: Marc Zyngier @ 2018-03-21 17:25 UTC
  To: valmiki, linux-pci, Linux Kernel Mailing List; +Cc: Bjorn Helgaas

On 21/03/18 17:12, valmiki wrote:
> Hi,
> 
> In most of the Root Port (RP) drivers, why are two irq chips being 
> used for MSI?
> 
> One is set up via irq_domain_set_info() (implementing the
> irq_compose_msi_msg and irq_set_affinity methods) and another is
> registered through struct msi_domain_info (implementing the
> irq_mask/irq_unmask methods).
> 
> When will each chip be used with respect to the virq?

A simple way to think of it is that you have two pieces of HW involved:
an end-point that generates an interrupt, and a controller that receives it.

Transpose this to the kernel view of things: one chip implements the PCI
MSI, with the PCI semantics attached to it (how to program the
payload/doorbell into the end-point, for example). The other implements
the MSI controller part of it, talking to the HW that deals with the
interrupt.

Does it make sense? Admittedly, this is not always that simple, but
that's the general approach.
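
Concretely, for a given virq the callbacks split roughly like this (a
sketch; the exact names vary from driver to driver):

	irq_mask/irq_unmask            -> chip from struct msi_domain_info
	                                  (PCI MSI semantics, end-point side)
	irq_compose_msi_msg/
	irq_set_affinity               -> chip set via irq_domain_set_info()
	                                  (MSI controller side, programs the
	                                   doorbell and talks to the HW)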

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: Why two irq chips for MSI
From: valmiki @ 2018-03-21 17:38 UTC
  To: Marc Zyngier, linux-pci, Linux Kernel Mailing List; +Cc: Bjorn Helgaas

> On 21/03/18 17:12, valmiki wrote:
>> [...]
>
> A simple way to think of it is that you have two pieces of HW involved:
> an end-point that generates an interrupt, and a controller that receives it.
>
> Transpose this to the kernel view of things: one chip implements the PCI
> MSI, with the PCI semantics attached to it (how to program the
> payload/doorbell into the end-point, for example). The other implements
> the MSI controller part of it, talking to the HW that deals with the
> interrupt.
>
> Does it make sense? Admittedly, this is not always that simple, but
> that's the general approach.
>
Thanks Marc. Yes, I have a good picture now.
So the one which implements the PCI semantics has irq_set_affinity, which 
is invoked at request_irq time. Why do most of the drivers implement it 
as a dummy that just returns 0?
Does setting the affinity of an MSI need any support from the GIC?
Can setting affinity be achieved only with hardware support?

Valmiki


* Re: Why two irq chips for MSI
From: Marc Zyngier @ 2018-03-21 18:05 UTC
  To: valmiki, linux-pci, Linux Kernel Mailing List; +Cc: Bjorn Helgaas

On 21/03/18 17:38, valmiki wrote:
> [...]
> Thanks Marc. Yes, I have a good picture now.
> So the one which implements the PCI semantics has irq_set_affinity,
> which is invoked at request_irq time. Why do most of the drivers
> implement it as a dummy that just returns 0?

It depends.

In general, the irqchip implementing set_affinity is the one talking to
the MSI controller directly.

A lot of drivers don't implement it because they are multiplexing a
number of MSIs on a single interrupt. The consequence is that you cannot
change the affinity of a single MSI without affecting them all.
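
With such a mux, the set_affinity callback cannot do anything meaningful
for an individual MSI. A hedged sketch of what you typically end up with
(illustrative, not lifted from a real driver):

static int muxed_msi_set_affinity(struct irq_data *d,
				  const struct cpumask *mask, bool force)
{
	/*
	 * All the MSIs funnel into one wired interrupt, so moving a
	 * single MSI to another CPU is impossible: either refuse, or
	 * silently pretend it worked ("return 0", as you observed).
	 */
	return -EINVAL;
}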

HW that is built this way makes it impossible to implement one of the
main features of MSIs, which is to have per-CPU interrupts. Oh well. They
probably also miss out on the "MSI as a memory barrier" semantics...

Decent HW doesn't do any multiplexing, and thus can very easily
implement set_affinity.

The other model (which x86 is using, for example), is to have one
doorbell per CPU. In this model, changing affinity is just a matter of
changing the doorbell address.
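
A hedged sketch of set_affinity in that per-CPU doorbell model (the
doorbell_phys_for_cpu() helper is hypothetical; the other calls are the
generic kernel MSI helpers):

static int percpu_doorbell_set_affinity(struct irq_data *d,
					const struct cpumask *mask, bool force)
{
	unsigned int cpu = cpumask_first_and(mask, cpu_online_mask);
	struct msi_msg msg;

	if (cpu >= nr_cpu_ids)
		return -EINVAL;

	/* retarget the end-point at the new CPU's doorbell address */
	get_cached_msi_msg(d->irq, &msg);
	msg.address_lo = lower_32_bits(doorbell_phys_for_cpu(cpu));
	msg.address_hi = upper_32_bits(doorbell_phys_for_cpu(cpu));
	pci_write_msi_msg(d->irq, &msg);

	irq_data_update_effective_affinity(d, cpumask_of(cpu));
	return IRQ_SET_MASK_OK_DONE;
}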

> Does setting the affinity of an MSI need any support from the GIC?

Absolutely. If using GICv2m, you need to change the affinity at the
distributor level. With GICv3 and the ITS, you need to emit a MOVI
command to target another redistributor.
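
Note that the PCI-side chip usually doesn't do any of that itself; it
just passes the request down the domain hierarchy and lets the GIC-side
driver perform the actual move. A sketched (illustrative) pattern:

static struct irq_chip rp_pci_msi_chip = {
	.name			= "RP-PCI-MSI",
	.irq_mask		= pci_msi_mask_irq,
	.irq_unmask		= pci_msi_unmask_irq,
	/* hand affinity changes to the parent (GICv2m/ITS) domain */
	.irq_set_affinity	= irq_chip_set_affinity_parent,
};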

> Can setting affinity be achieved only with hardware support?

See above. It depends on which signalling model you're using, and how
well it has been implemented.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
