linuxppc-dev.lists.ozlabs.org archive mirror
* Ethernet over PCIe driver for Inter-Processor Communication
@ 2013-08-22 15:34 Saravanan S
  2013-08-22 21:38 ` Scott Wood
  2013-08-22 21:43 ` David Hawkins
  0 siblings, 2 replies; 10+ messages in thread
From: Saravanan S @ 2013-08-22 15:34 UTC (permalink / raw)
  To: linuxppc-dev


Hi All,
          I have a custom board with four MPC8640 nodes connected over a
transparent PCI Express switch. In this configuration one node is
configured as the host (Root Complex) and the others as agents (End
Points), so the legacy PCI software works fine. However, the mainline
kernel lacks any standard support for inter-processor communication over
PCI. I am in the process of developing an Ethernet-over-PCI driver along
the lines of rionet. However, I am facing the following problems.

a) I can generate MSI interrupts from End Point to Root Complex over PCI,
but not vice versa. However, I need a method to interrupt the End Point
from the Root Complex to complete my driver. The only previous reference
I can find is this post:
http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg25765.html
However, this uses doorbells, which I think may not be possible on the MPC8640.

b) I also need a method to interrupt an End Point from another End Point.
Is that possible?

Any pointers on this issue and guidance on this driver development would be
helpful.

Regards,
S.Saravanan


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-22 15:34 Ethernet over PCIe driver for Inter-Processor Communication Saravanan S
@ 2013-08-22 21:38 ` Scott Wood
  2013-08-22 21:43 ` David Hawkins
  1 sibling, 0 replies; 10+ messages in thread
From: Scott Wood @ 2013-08-22 21:38 UTC (permalink / raw)
  To: Saravanan S; +Cc: linuxppc-dev

On Thu, 2013-08-22 at 21:04 +0530, Saravanan S wrote:

> a) I can generate MSI interrupts from End Point to Root Complex over
> PCI, but not vice versa. However, I need a method to interrupt
> the End Point from the Root Complex to complete my driver. The only
> previous reference I can find is this post:
> http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg25765.html
> However, this uses doorbells, which I think may not be possible on the MPC8640.

You should be able to write to the destination's MPIC registers to send
a doorbell.
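Scott's suggestion boils down to a single memory-mapped write. A toy C
sketch of the idea (an ordinary array stands in for the ioremap'd PCI
window, and the register index is an illustrative assumption, not the
documented MPC8641 offset):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated view of an EP's memory-mapped MPIC message registers.
 * In a real driver this would be a pointer obtained via ioremap()
 * over the PCI window covering the EP's register space; the index
 * below is a placeholder, not the documented MPC8641 offset. */
#define MSGR0_IDX 0                      /* message register 0 */

static volatile uint32_t fake_mpic_window[4];

/* RC side: "sending a doorbell" is one posted 32-bit write. */
static inline void send_doorbell(volatile uint32_t *mpic, uint32_t cookie)
{
    mpic[MSGR0_IDX] = cookie;  /* in hardware, raises a message interrupt */
}

/* EP side: the interrupt handler reads the register to see what arrived. */
static inline uint32_t read_doorbell(volatile uint32_t *mpic)
{
    return mpic[MSGR0_IDX];
}
```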

-Scott


* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-22 15:34 Ethernet over PCIe driver for Inter-Processor Communication Saravanan S
  2013-08-22 21:38 ` Scott Wood
@ 2013-08-22 21:43 ` David Hawkins
  2013-08-22 22:29   ` Ira W. Snyder
  1 sibling, 1 reply; 10+ messages in thread
From: David Hawkins @ 2013-08-22 21:43 UTC (permalink / raw)
  To: Saravanan S; +Cc: linuxppc-dev, Ira W. Snyder

Hi S.Saravanan,

> I have a custom board with four MPC8640 nodes connected over
> a transparent PCI Express switch. In this configuration one node is
> configured as the host (Root Complex) and the others as agents (End
> Points), so the legacy PCI software works fine. However, the mainline
> kernel lacks any standard support for inter-processor communication
> over PCI. I am in the process of developing an Ethernet-over-PCI
> driver along the lines of rionet. However, I am facing the following problems.
>
> a)   I can generate MSI interrupts from End Point to Root Complex over
> PCI . But the vice-versa is not possible . However i need a method to
> interrupt the End Point from the Root Complex to complete my driver.

A root complex would normally interrupt a device via a PCIe write
to a register in a BAR on the end-point (or in extended configuration
space registers, depending on the hardware implementation).

> Only previous references I can find are this post
> http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg25765.html
> However this uses doorbells and I think may not be possible in MPC8640.

PCIe drivers need some way to interrupt the processor, so there must
be an option somewhere ... for example, what are the message register
interrupts intended for? See p. 479 of

http://cache.freescale.com/files/32bit/doc/ref_manual/MPC8641DRM.pdf

(Ira and I have not used the MPC8640, so we are not familiar with
its reference manual.)

> Any pointers on this issue and guidance on this driver development would
> be helpful .

We use the Ethernet-over-PCI driver that Ira developed. Our next boards
will use an MPC8308, but we don't currently have any in a PCIe device
form-factor (just the MPC8308RDB), so he has not ported it to PCIe.

Feel free to discuss your ideas for your PCIe driver (e.g., why start
with rionet rather than Ira's driver), either on-list, or email Ira
and myself directly.

Cheers,
Dave


* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-22 21:43 ` David Hawkins
@ 2013-08-22 22:29   ` Ira W. Snyder
  2013-08-25 15:20     ` Saravanan S
  0 siblings, 1 reply; 10+ messages in thread
From: Ira W. Snyder @ 2013-08-22 22:29 UTC (permalink / raw)
  To: David Hawkins; +Cc: Saravanan S, linuxppc-dev

On Thu, Aug 22, 2013 at 02:43:38PM -0700, David Hawkins wrote:
> Hi S.Saravanan,
> 
> > I have a custom board  with four MPC8640 nodes connected over
> > a transparent PCI express switch . In this configuration one node is
> > configured as host(Root Complex) and others as agents(End Point). Thus
> > the legacy PCI software works fine . However the mainline kernel lacks
> > any standard support for Inter-processor communication over PCI. I am
> > in the process of developing an Ethernet over  PCI driver for the same
> > on the lines of rionet . However I am facing the following problems.
> >
> > a)   I can generate MSI interrupts from End Point to Root Complex over
> > PCI . But the vice-versa is not possible . However i need a method to
> > interrupt the End Point from the Root Complex to complete my driver.
> 
> A root complex would normally interrupt a device via a PCIe write
> to a register in a BAR on the end-point (or in extended configuration
> space registers, depending on the hardware implementation).
> 
> > Only previous references I can find are this post
> > http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg25765.html
> > However this uses doorbells and I think may not be possible in MPC8640.
> 
> PCIe drivers need some way to interrupt the processor, so there must
> be an option somewhere ... for example, what are the message register
> interrupts intended for? See p479
> 
> http://cache.freescale.com/files/32bit/doc/ref_manual/MPC8641DRM.pdf
> 
> (Ira and myself have not used the MPC8640 so are not familiar with
> its user manual).
> 
> > Any pointers on this issue and guidance on this driver development would
> > be helpful .
> 
> We use the Ethernet-over-PCI driver that Ira developed. Our next boards
> will use an MPC8308, but we don't currently have any in a PCIe device
> form-factor (just the MPC8308RDB), so he has not ported it to PCIe.
> 
> Feel free to discuss your ideas for your PCIe driver (eg., why start
> with rionet rather than Ira's driver), either on-list, or email Ira
> and myself directly.
> 

One further note. You might want to look at rproc/rpmsg and their virtio
driver support. That seems to be where the Linux world is moving for
inter-processor communications. See for example the ARM CPUs interfacing
with DSPs.

Ira


* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-22 22:29   ` Ira W. Snyder
@ 2013-08-25 15:20     ` Saravanan S
  2013-08-25 22:38       ` David Hawkins
  0 siblings, 1 reply; 10+ messages in thread
From: Saravanan S @ 2013-08-25 15:20 UTC (permalink / raw)
  To: Ira W. Snyder, scottwood; +Cc: linuxppc-dev, David Hawkins


Hi All,
         First of all, thank you all for taking the time to reply.



On Fri, Aug 23, 2013 at 3:59 AM, Ira W. Snyder <iws@ovro.caltech.edu> wrote:

> On Thu, Aug 22, 2013 at 02:43:38PM -0700, David Hawkins wrote:
> > Hi S.Saravanan,
> >
> > > I have a custom board  with four MPC8640 nodes connected over
> > > a transparent PCI express switch . In this configuration one node is
> > > configured as host(Root Complex) and others as agents(End Point). Thus
> > > the legacy PCI software works fine . However the mainline kernel lacks
> > > any standard support for Inter-processor communication over PCI. I am
> > > in the process of developing an Ethernet over  PCI driver for the same
> > > on the lines of rionet . However I am facing the following problems.
> > >
> > > a)   I can generate MSI interrupts from End Point to Root Complex over
> > > PCI . But the vice-versa is not possible . However i need a method to
> > > interrupt the End Point from the Root Complex to complete my driver.
> >
> > Root complex's would normally interrupt a device via a PCIe write
> > to a register in a BAR on the end-point (or in extended configuration
> > space registers depending on the hardware implementation).
>
The MPC8640 End Point implements only the Type 0 header (Page 1116). The
header implements five BARs (Page 1165).


> > > Only previous references I can find are this post
> > >
> http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg25765.html
> > > However this uses doorbells and I think may not be possible in MPC8640.
> >
> > PCIe drivers need some way to interrupt the processor, so there must
> > be an option somewhere ... for example, what are the message register
> > interrupts intended for? See p479
> >
> > http://cache.freescale.com/files/32bit/doc/ref_manual/MPC8641DRM.pdf
> >
> > (Ira and myself have not used the MPC8640 so are not familiar with
> > its user manual).
>

   Message registers are for interrupting the processor: a write to them
sends an interrupt to the processor. Actually, the message registers are
used by the RC to enable interrupts to the processor when an EP sends an
MSI transaction to the RC. In the RC driver I register separately for the
MSI interrupts from all three EPs.

 To access them in the EP from the RC I will have to set up an inbound
window mapping the PIC register space in the EP to the PCI memory space
assigned to it. An inbound window maps a PCI address on the bus, received
by the PCIe controller, to a platform address. I will try that and let you
know.

> >
> > > Any pointers on this issue and guidance on this driver development
> would
> > > be helpful .
> >
> > We use the Ethernet-over-PCI driver that Ira developed. Our next boards
> > will use an MPC8308, but we don't currently have any in a PCIe device
> > form-factor (just the MPC8038RDB), so he has not ported it to PCIe.
> >
> > Feel free to discuss your ideas for your PCIe driver (eg., why start
> > with rionet rather than Ira's driver), either on-list, or email Ira
> > and myself directly
>

To be frank, there was no particular reason for starting with rionet.
Maybe because our board also has an SRIO interface and we are using the
rionet driver successfully. I looked at Ira's driver later. I will study
that as well and try to come back with a skeleton for my driver.


>
> One further note. You might want to look at rproc/rpmsg and their virtio
> driver support. That seems to be where the Linux world is moving for
> inter-processor communications. See for example the ARM CPUs interfacing
> with DSPs.
>
> Ira
>

I will study that, as I am not familiar with virtio.

Regards,

S.Saravanan



* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-25 15:20     ` Saravanan S
@ 2013-08-25 22:38       ` David Hawkins
  2013-08-30 17:37         ` Saravanan S
  0 siblings, 1 reply; 10+ messages in thread
From: David Hawkins @ 2013-08-25 22:38 UTC (permalink / raw)
  To: Saravanan S; +Cc: linuxppc-dev, Ira W. Snyder

Hi S.Saravanan,

>> A root complex would normally interrupt a device via a PCIe write
>> to a register in a BAR on the end-point (or in extended configuration
>> space registers, depending on the hardware implementation).
>
> MPC8640 End point implements only the Type 0 header (Page 1116) . The
> header implements five BARs (Page 1165).

One of those BARs typically provides access to the PowerPC memory-mapped
registers (or at least a 1MB window onto those registers).
This is how your root complex can write to some form of messaging
register.

>> PCIe drivers need some way to interrupt the processor, so there must
>> be an option somewhere ... for example, what are the message register
>> interrupts intended for? See p479
>>
>> http://cache.freescale.com/files/32bit/doc/ref_manual/MPC8641DRM.pdf
>>
>> (Ira and myself have not used the MPC8640 so are not familiar with
>> its user manual).
>
> Message registers are for interrupting the processor. A write to
> them sends an interrupt to the processor.  Actually message registers
> are used by the RC to enable interrupts to the processor when an EP
> sends an MSI transaction to RC. In RC driver i register separately for
> the msi interrupts from all three EPs.

This is pretty much what you are looking for then, right?

The end-points interrupt the root-complex using PCIe MSI interrupts,
whereas the root-complex interrupts an end-point by writing directly
to its message interrupt register.

> To access them in the EP from the RC  i will have to set an inbound
> window mapping the PIC register space in the EP to the PCI mem space
> assigned to it . An inbound window maps a PCI address on the bus
> received by the PCIe controller to a platform address. I will try that
> and let u know .

Right, as I comment above, one of the BARs typically exposes the PowerPC
internal registers.

>> Feel free to discuss your ideas for your PCIe driver (eg., why start
>> with rionet rather than Ira's driver), either on-list, or email Ira
>> and myself directly
>
> To be frank with you there was no particular reason in starting with
> rionet. Maybe because our board also had SRIO interface and we are using
> rionet driver successfully. I had looked at Ira's driver later. I will
> study that also and try   to come back with a skeleton for my driver.

It's always a good idea to discuss different options, and to stub out
drivers or create minimal (but functional) drivers. That way you'll
be able to see how similar your new driver is to other drivers, and
you'll quickly discover if there is a hardware feature in the
existing driver that you cannot emulate (e.g., some SRIO feature
used by the rionet driver).

>> One further note. You might want to look at rproc/rpmsg and their virtio
>> driver support. That seems to be where the Linux world is moving for
>> inter-processor communications. See for example the ARM CPUs interfacing
>> with DSPs.
>
> I will study that as i am not familiar with virtio .

Follow Ira's advice. Talk to the guys working on virtio, tell them what
you are trying to do. They'll likely have good advice for you.

Good luck!

Cheers,
Dave


* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-25 22:38       ` David Hawkins
@ 2013-08-30 17:37         ` Saravanan S
  2013-08-30 18:06           ` David Hawkins
  0 siblings, 1 reply; 10+ messages in thread
From: Saravanan S @ 2013-08-30 17:37 UTC (permalink / raw)
  To: David Hawkins, Michael George, naishab; +Cc: linuxppc-dev, Ira W. Snyder


Hi All,


On Mon, Aug 26, 2013 at 4:08 AM, David Hawkins <dwh@ovro.caltech.edu> wrote:

> Hi S.Saravanan,
>
>
>  Root complex's would normally interrupt a device via a PCIe write
>>> to a register in a BAR on the end-point (or in extended configuration
>>> space registers depending on the hardware implementation).
>>>
>>
>> MPC8640 End point implements only the Type 0 header (Page 1116) . The
>> header implements five BARs (Page 1165).
>>
>
> One of those BARs typically provides access to the PowerPC memory
> mapped registers (or at least a 1MB window onto those registers).
> This is how your root complex can write to some form of messaging
> register.
>
>  PCIe drivers need some way to interrupt the processor, so there must
>>> be an option somewhere ... for example, what are the message register
>>> interrupts intended for? See p479
>>>
>>> http://cache.freescale.com/files/32bit/doc/ref_manual/MPC8641DRM.pdf
>>>
>>> (Ira and myself have not used the MPC8640 so are not familiar with
>>> its user manual).
>>>
>>
>> Message registers are for interrupting the processor. A write to
>> them sends an interrupt to the processor.  Actually message registers
>>
>> are used by the RC to enable interrupts to the processor when an EP
>> sends an MSI transaction to RC. In RC driver i register separately for
>>
>> the msi interrupts from all three EPs.
>>
>
> This is pretty much what you are looking for then right?



I successfully mapped the Programmable Interrupt Controller registers in
the EP to the PCI space. Thus I can now write the shared message interrupt
registers in the EP from the RC over PCI. But I am facing the following
problems now.

1) In my driver at the EP, to register for this interrupt I need to know
the hardware IRQ number, but I can't find any interrupt number assigned by
the PIC for the message interrupt sources (Page 451, MPC8641D RM).
2) Otherwise I need to get the virtual IRQ number assigned by the kernel
corresponding to the message interrupt. I am unable to find a method to
get this either.

In the RC side driver I get the virtual IRQ number after calling
pci_enable_msi(), which is straightforward.
I studied the RC code which sets up shared message interrupts (Page 481,
MPC manual) for PCI MSI interrupts. When MSI is enabled,
arch_setup_msi_irqs() is called, leading to fsl_setup_msi_irqs() (
http://lxr.free-electrons.com/source/arch/powerpc/sysdev/fsl_msi.c?v=3.7#L151
). In this function the virtual IRQ number is obtained as below:

virq = irq_create_mapping(msi_data->irqhost, hwirq);

In the above function the hardware IRQ number is the same as the value
written into the Shared Message Signaled Interrupt Index Register (Page
482), which is strange. Further, these functions are called in the RC
during PCI probing at boot time or when pci_enable_msi() is called; thus
there is always a PCI slave device context. However, I need to do this in
the EP, which has no PCI probing nor any PCI device reference whatsoever,
as it is a slave. Is this approach right?
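The contract that irq_create_mapping() provides here can be pictured with
a toy mapping table (a stand-in for the kernel's irq_domain machinery,
not the real API): each hardware source gets a stable virtual IRQ number.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for an irq_domain: toy_irq_create_mapping(hwirq) hands
 * out a stable virtual IRQ number for each hardware source. The real
 * kernel structures are richer; this only mirrors the contract under
 * discussion: same hwirq -> same virq, distinct hwirqs -> distinct
 * virqs. */
#define MAX_HWIRQ 32
static int virq_of[MAX_HWIRQ];   /* 0 = not yet mapped      */
static int next_virq = 16;       /* arbitrary starting virq */

static int toy_irq_create_mapping(unsigned int hwirq)
{
    if (hwirq >= MAX_HWIRQ)
        return -1;               /* out of range for this domain */
    if (virq_of[hwirq] == 0)
        virq_of[hwirq] = next_virq++;  /* first request: allocate */
    return virq_of[hwirq];       /* repeat requests: same answer */
}
```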

> The end-points interrupt the root-complex using PCIe MSI interrupts,
> whereas the root-complex interrupts an end-point by writing directly
> to its message interrupt register.
>
>
>  To access them in the EP from the RC  i will have to set an inbound
>> window mapping the PIC register space in the EP to the PCI mem space
>> assigned to it . An inbound window maps a PCI address on the bus
>> received by the PCIe controller to a platform address. I will try that
>> and let u know .
>>
>
> Right, as I comment above, one of the BARs typically exposes the PowerPC
> internal registers.
>
>
>  Feel free to discuss your ideas for your PCIe driver (eg., why start
>>> with rionet rather than Ira's driver), either on-list, or email Ira
>>> and myself directly
>>>
>>
>> To be frank with you there was no particular reason in starting with
>> rionet. Maybe because our board also had SRIO interface and we are using
>> rionet driver successfully. I had looked at Ira's driver later. I will
>> study that also and try   to come back with a skeleton for my driver.
>>
>
> Its always a good idea to discuss different options, and to stub out
> drivers or create minimal (but functional) drivers. That way you'll
> be able to see how similar your new driver is to other drivers, and
> you'll quickly discover if there is a hardware feature in the
> existing driver that you cannot emulate (eg., some SRIO feature
> used by the rionet driver).
>

Right now I am trying a very primitive driver just to check the feasibility
of bi-directional communication between the RC and the EP. Once this is
established, I will be in a better position to get inputs on making it a
more effective one.


>  One further note. You might want to look at rproc/rpmsg and their virtio
>>> driver support. That seems to be where the Linux world is moving for
>>> inter-processor communications. See for example the ARM CPUs interfacing
>>> with DSPs.
>>>
>>
>> I will study that as i am not familiar with virtio .
>>
>
> Follow Ira's advice. Talk to the guys working on virtio, tell them what
> you are trying to do. They'll likely have good advice for you.
>
> Good luck!
>
> Cheers,
> Dave
>
>
>
Warm Regards,

S.Saravanan



* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-30 17:37         ` Saravanan S
@ 2013-08-30 18:06           ` David Hawkins
  2013-09-04 18:34             ` Saravanan S
  0 siblings, 1 reply; 10+ messages in thread
From: David Hawkins @ 2013-08-30 18:06 UTC (permalink / raw)
  To: Saravanan S; +Cc: naishab, linuxppc-dev, Michael George, Ira W. Snyder

Hi S.Saravanan,
>
> I successfully  mapped the Programmable Interrupt Controller registers
> in the EP to the PCI space. Thus now I can write the shared message
> interrupt registers in the EP from the RC over PCI.

Excellent.

> But I am facing the following problems now.
>
> 1) In my driver at the EP, to register for this interrupt I need to know
> the hardware IRQ number, but I can't find any interrupt number assigned by
> the PIC for the message interrupt sources (Page 451, MPC8641D RM).
> 2) Otherwise I need to get the virtual IRQ number assigned by the kernel
> corresponding to the message interrupt. I am unable to find a method to
> get this either.

I recall having to ask a similar question when trying to map a
GPIO interrupt into a Linux interrupt number. I forget the
convention (I'm "the hardware guy"). It may be a device tree
thing, or an offset, I'll let someone more knowledgeable comment.

> In the RC side driver i get the virtual irq number after calling
> pci_enable_msi() which is straightforward.
> I studied the RC code which sets up shared message interrupts (Page 481,
> MPC manual)  for PCI MSI interrupts . When  msi is enabled the
> "arch_setup_msi_irqs()" is called leading to the fsl_setup_msi_irqs()
> (http://lxr.free-electrons.com/source/arch/powerpc/sysdev/fsl_msi.c?v=3.7#L151)
> . In this function the virtual irq no is obtained as below:
>
> /virq = irq_create_mapping(msi_data->irqhost, hwirq);/
>
> In the above function the hardware irq number is same as the value
> written into the  Shared Message Signaled Interrupt Index Register (Page
> 482) which is strange. Further, these functions are called in the RC
> during PCI probing at boot time or when pci_enable_msi() is called; thus
> there is always a PCI slave device context. However, I need to do this
> in the EP, which has no PCI probing nor any PCI device reference
> whatsoever, as it is a slave. Is this approach right?

I'm not sure.

You'll have two drivers:
  * The root-complex.
    This is a standard PCIe driver, so you'll just follow convention
    there.
  * The end-point driver.
    This driver needs to use the PCIe bus, but it's not responsible
    for the PCIe bus in the way a root-complex is. The driver needs
    to know what the root-complex is interrupting it for, e.g.,
    "transmitter empty" (I've read your last message) or "receiver
    ready" (there is a message from me, waiting for you).
    So you need at least two unique interrupts or messages from the
    root-complex to the end-point.
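The "at least two unique interrupts or messages" point can be sketched as
a doorbell value that encodes the reason, dispatched in the EP's handler
(the message codes and counters here are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Invented doorbell message codes: the RC writes one of these into the
 * EP's message register so the EP handler knows why it was interrupted. */
enum doorbell_msg {
    DB_TX_EMPTY = 1,  /* "I've read your last message"        */
    DB_RX_READY = 2,  /* "there is a message waiting for you" */
};

static int tx_done_events, rx_ready_events;

/* EP-side handler body: dispatch on the code read from the register. */
static void ep_doorbell_handler(uint32_t msg)
{
    switch (msg) {
    case DB_TX_EMPTY: tx_done_events++; break;
    case DB_RX_READY: rx_ready_events++; break;
    default: break;   /* unknown doorbell: ignore */
    }
}
```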

>> Its always a good idea to discuss different options, and to stub out
>> drivers or create minimal (but functional) drivers. That way you'll
>> be able to see how similar your new driver is to other drivers, and
>> you'll quickly discover if there is a hardware feature in the
>> existing driver that you cannot emulate (eg., some SRIO feature
>> used by the rionet driver).
>
> Right now I am trying a very primitive driver just to check the
> feasibility of bi-directional communication between the RC and the EP.
> Once this is established  I will be in a better position to get inputs
> on making it a more effective one.

You're on the right track. When I looked at using the messaging
registers on the PLX PCI device, I started by simply creating
what was effectively a serial port (one char at a time).
Section 4 explains the interlocking required between the two processors:

http://www.ovro.caltech.edu/~dwh/correlator/pdf/cobra_driver.pdf

The mailbox/interrupt registers are effectively being used to
implement a mutex between the two processors.
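The mailbox-as-mutex interlock described here amounts to a one-slot
mailbox: the sender may write only while the flag is clear, and the
receiver clears it after consuming. A single-threaded simulation (not
driver code; the names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* One-slot mailbox: 'full' plays the role of the mailbox/interrupt
 * register that mutually excludes the two processors. Both sides run
 * in one thread here, purely to show the handshake ordering. */
struct mailbox {
    volatile int full;        /* 1 = data waiting for the receiver */
    volatile uint8_t data;    /* one char at a time, as in the doc */
};

static int mbox_send(struct mailbox *m, uint8_t c)
{
    if (m->full)
        return 0;             /* receiver hasn't consumed yet: busy */
    m->data = c;
    m->full = 1;              /* in hardware this write raises the irq */
    return 1;
}

static int mbox_recv(struct mailbox *m, uint8_t *c)
{
    if (!m->full)
        return 0;             /* nothing pending */
    *c = m->data;
    m->full = 0;              /* ack: sender may write again */
    return 1;
}
```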

I think at one point Ira took similar code to this and hooked
it into the actual serial layer, so that you had a tty over
PCI. You could always start with a simplification like that too.

Cheers,
Dave


* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-08-30 18:06           ` David Hawkins
@ 2013-09-04 18:34             ` Saravanan S
  2013-09-04 19:28               ` David Hawkins
  0 siblings, 1 reply; 10+ messages in thread
From: Saravanan S @ 2013-09-04 18:34 UTC (permalink / raw)
  To: David Hawkins; +Cc: naishab, linuxppc-dev, Michael George, Ira W. Snyder


Hi All,


On Fri, Aug 30, 2013 at 11:36 PM, David Hawkins <dwh@ovro.caltech.edu> wrote:

> Hi S.Saravanan,
>
>>
>> I successfully  mapped the Programmable Interrupt Controller registers
>> in the EP to the PCI space. Thus now I can write the shared message
>>
>> interrupt registers in the EP from the RC over PCI.
>>
>
> Excellent.
>
>
>  But I am facing the following problems now.
>>
>> 1) In my driver at EP, to register for this interrupt I need to know the
>> hardware irq number but I can't find any interrupt number assigned  by
>> the PIC for the messages interrupt sources(Page 451 , MPC8641DRM manual).
>> 2) Otherwise i need to get the virtual irq number assigned by kernel
>> corresponding to the message interrupt . I am unable to find a method to
>> get this also.
>>
>
> I recall having to ask a similar question when trying to map a
> GPIO interrupt into a Linux interrupt number. I forget the
> convention (I'm "the hardware guy"). It may be a device tree
> thing, or an offset, I'll let someone more knowledgeable comment.
>
>  In the RC side driver i get the virtual irq number after calling
>> pci_enable_msi() which is straightforward.
>> I studied the RC code which sets up shared message interrupts (Page 481,
>> MPC manual)  for PCI MSI interrupts . When  msi is enabled the
>> "arch_setup_msi_irqs()" is called leading to the fsl_setup_msi_irqs()
>> (http://lxr.free-electrons.com/source/arch/powerpc/sysdev/fsl_msi.c?v=3.7#L151)
>> . In this function the virtual irq no is obtained as below:
>>
>> virq = irq_create_mapping(msi_data->irqhost, hwirq);
>>
>>
>> In the above function the hardware irq number is same as the value
>> written into the  Shared Message Signaled Interrupt Index Register (Page
>> 482) which is strange. Further these functions are called in the RC
>> during pci_probe at boot time or when pci_enable_msi() is called . Thus
>> there is a always a PCI slave device context to it. However I  require
>> to do it in the EP which has no pci probing nor any  pci device
>> reference whatsoever as it a slave. Is this approach right  ?
>>
>
> I'm not sure.
>
> You'll have two drivers;
>  * The root-complex.
>    This is a standard PCIe driver, so you'll just follow convention
>    there
>  * The end-point driver.
>    This driver needs to use the PCIe bus, but its not responsible
>    for the PCIe bus in the way a root-complex is. The driver needs
>    to know what the root-complex is interrupting it for, eg.,
>    "transmitter empty" (I've read your last message) or "receiver
>    ready" (there is a message from me, waiting for you).
>    So you need at least two unique interrupts or messages from the
>    root-complex to the end-point.
>

I am happy to inform you that I finally found a way to register for the
interrupts from RC to EP. Now I have made simple root and end point
network drivers for two MPC8640 nodes, which are up and running, and I
could successfully ping across them. The basic flow is as follows.

 *Root Complex Driver*:
   1. It discovers the EP processor node and gets its base addresses (BAR 1
and BAR 2).
   2. It sets up a single inbound window mapping a portion of its RAM to PCI
space (to allow inbound memory writes from the EP).
   3. It enables the MSI interrupt for the EP and registers an interrupt
handler for it (to receive interrupts from the EP; note this is the
conventional PCI method).
   4. On receiving a transmit request from the kernel it initiates a DMA
memory copy of the packet (in the socket buffer) to the EP memory through
BAR 1. After the DMA finishes it sends an interrupt to the EP by writing to
its message interrupt register mapped in BAR 2.
   5. On reception of a packet (from the EP) the MSI interrupt handler is
called; it copies the packet from RAM to a socket buffer and passes it to
the kernel.
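Steps 4 and 5 above boil down to "copy, then doorbell". A toy
single-address-space model, with memcpy standing in for the DMA engine
and plain variables standing in for the BAR 1 and BAR 2 windows (all
names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of the RC transmit path: both "RC RAM" and the "EP RAM
 * behind BAR 1" live in one address space here, and memcpy stands in
 * for the DMA engine. The doorbell write models step 4's interrupt. */
static uint8_t ep_rx_buf[64];          /* EP memory reached via BAR 1   */
static volatile uint32_t ep_doorbell;  /* message reg reached via BAR 2 */

static void rc_transmit(const uint8_t *skb, size_t len)
{
    memcpy(ep_rx_buf, skb, len);  /* "DMA" the packet into EP memory */
    ep_doorbell = (uint32_t)len;  /* then ring the EP's doorbell     */
}
```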

*End Point Driver*:
1. It sets up the internal MSI interrupt structure and registers an
interrupt handler (to receive interrupts from the RC; note this is not done
by default in the kernel, as the EP is a slave, so it is added in the
driver).
2. It sets up two inbound windows:
    i) BAR 1 maps to a RAM area (to allow inbound memory writes from the RC).
    ii) BAR 2 maps to the PIC register area (to allow inbound message
interrupt register writes from the RC).
3. It sets up one outbound window to map a local address range to the PCI
address of the RC (to allow outbound memory writes to the RC RAM space).
4. On receiving a transmit request from the kernel it initiates a DMA memory
copy of the packet (in the socket buffer) to the RC memory through the
outbound window. After the DMA finishes it sends an interrupt to the RC
through a conventional PCI MSI transaction.
5. On reception of a packet (from the RC) the message interrupt handler is
called; it copies the packet from RAM to a socket buffer and passes it to
the kernel.

So basically a bidirectional communication channel has been established,
but the driver is not ready for performance checks yet. I am working on it
now and will report any improvements obtained in this regard.


> Its always a good idea to discuss different options, and to stub out
>>> drivers or create minimal (but functional) drivers. That way you'll
>>> be able to see how similar your new driver is to other drivers, and
>>> you'll quickly discover if there is a hardware feature in the
>>> existing driver that you cannot emulate (eg., some SRIO feature
>>> used by the rionet driver).
>>>
>>
>> Right now I am trying a very primitive driver just to check the
>> feasibility of bi-directional communication between the RC and the EP.
>> Once this is established  I will be in a better position to get inputs
>> on making it a more effective one.
>>
>
> You're on the right track. When I looked at using the messaging
> registers on the PLX PCI device, I started by simply creating
> what was effectively a serial port (one char at a time).
> Section 4 explains the interlocking required between two processors
>
> http://www.ovro.caltech.edu/~**dwh/correlator/pdf/cobra_**driver.pdf<http://www.ovro.caltech.edu/~dwh/correlator/pdf/cobra_driver.pdf>
>
Thank you for this document. It was very helpful in understanding the basics
of host-target communication and the implementation of a virtual driver for
the same.


> The mailbox/interrupt registers are effectively being used to
> implement a mutex between the two processors.
>
> I think at one point Ira took similar code to this and hooked
> it into the actual serial layer, so that you had a tty over
> PCI. You could always start with a simplification like that too.
>
> Cheers,
> Dave
>
>

Regards,
S.Saravanan

[-- Attachment #2: Type: text/html, Size: 8771 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Ethernet over PCIe driver for Inter-Processor Communication
  2013-09-04 18:34             ` Saravanan S
@ 2013-09-04 19:28               ` David Hawkins
  0 siblings, 0 replies; 10+ messages in thread
From: David Hawkins @ 2013-09-04 19:28 UTC (permalink / raw)
  To: Saravanan S; +Cc: naishab, linuxppc-dev, Michael George, Ira W. Snyder

Hi S.Saravanan,

>> You'll have two drivers;
>>  * The root-complex.
>>    This is a standard PCIe driver, so you'll just follow convention
>>    there
>>  * The end-point driver.
>>    This driver needs to use the PCIe bus, but its not responsible
>>    for the PCIe bus in the way a root-complex is. The driver needs
>>    to know what the root-complex is interrupting it for, eg.,
>>    "transmitter empty" (I've read your last message) or "receiver
>>    ready" (there is a message from me, waiting for you).
>>    So you need at least two unique interrupts or messages from the
>>    root-complex to the end-point.
>
> I am happy to inform you that I finally found a way to register for the
> interrupts from RC to EP. Now I have made a simple root and end point
> network driver for two MPC8640 nodes  that are now up and running and I
> could successfully ping across them.

That is awesome! :)

> The basic flow is as follows.
>
> Root Complex Driver:
>     1. It discovers the EP processor node and gets its base
> addresses.(BAR 1 and BAR 2)
>     2. It sets a single inbound window mapping a portion of its RAM to
> PCI space.(This is to allow inbound memory writes from EP).
>     3.It enables the MSI interrupt for the EP and registers an interrupt
> handler for the same.(To receive interrupts from EP. Note this is
> conventional PCI method)
>     4.  On receiving a transmit request from kernel it initiates a DMA
> memory copy of the packet(in the socket buffer) to the EP memory through
> BAR 1. After DMA finishes it sends an interrupt to EP by writing to its
> msi register mapped in BAR2.
>     5 . On reception of a packet(from EP) the msi interrupt  handler  is
> called and it copies the packet in RAM to a socket buffer and passes it
> to the kernel.
>
> End Point Driver:
>
> 1. It sets up the internal msi interrupt structure and registers an
> interrupt handler.(To receive interrupts from RC. Note this is not done
> by default in kernel as it is a slave and thus is added in the driver.)
> 2. It sets two inbound windows
>      i) BAR1 maps to RAM area.(To allow inbound memory write from RC)
>      ii) BAR2 is mapped to PIC register area.(To allow inbound message
> interrupt register write from RC)
> 3. It sets up one outbound window to map its local address to PCI
> address of RC .(To allow outbound memory write to RC RAM space).
> 4. On receiving a transmit request from kernel it initiates a DMA memory
> copy of the packet(in the socket buffer) to the RC memory through the
> outbound window. After DMA finishes it sends an interrupt to RC through
> the conventional PCI MSI transaction.
> 5. On reception of a packet(from RC) the msi interrupt  handler  is
> called and it copies the packet in RAM to a socket buffer and passes it
> to the kernel.
>
> So basically a bidirectional communication channel  has been established
> but the driver is not ready for performance checks yet. I am working on
> it now. I will report any improvements obtained in this regard.

Now that you have processor-to-processor communications working,
it would be useful to figure out an architecture for the driver
that will make it acceptable to the community at large.

For example, can you make this driver work from U-Boot too?
E.g., can your driver support a root-complex running Linux and
end-points running U-Boot that fetch their kernel via the
PCIe network, then boot Linux and switch over to the Linux
version of the PCIe network driver?

This is what Ira has done with the PCInet driver, and it allows
us to have an x86 PCI host CPU that then boots multiple
MPC8349EA PowerPC peripheral CPUs.

Ira had discussions with various kernel developers, and I believe
the general feedback was "Can this be made to work with virtio?".
Ira can comment more on that.
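For what it's worth, the core of the virtio suggestion is to replace the
one-packet-per-interrupt doorbell with a descriptor ring in shared,
PCI-visible memory, so that one interrupt can cover a batch of packets. The
toy single-producer/single-consumer ring below illustrates the shape of that
idea; it is not the real virtio ABI, and all names are made up for the
example.

```c
#include <assert.h>
#include <stdint.h>

/* Toy descriptor ring: one side posts buffer descriptors into a ring
 * that would live in shared (PCI-visible) memory; the other side
 * consumes them at its own pace.  Free-running 32-bit indices; the
 * ring size must be a power of two for the modular arithmetic. */

#define RING_SIZE 8

struct desc {
    uint64_t addr;             /* PCI address of the packet buffer */
    uint32_t len;
};

struct ring {
    struct desc d[RING_SIZE];
    uint32_t head;             /* producer index */
    uint32_t tail;             /* consumer index */
};

static int ring_full(const struct ring *r)
{
    return r->head - r->tail == RING_SIZE;
}

/* Producer: publish one descriptor.  Real code needs a write barrier
 * between filling the slot and advancing head. */
int ring_post(struct ring *r, uint64_t addr, uint32_t len)
{
    if (ring_full(r))
        return -1;
    r->d[r->head % RING_SIZE] = (struct desc){ .addr = addr, .len = len };
    r->head++;
    return 0;
}

/* Consumer: take the oldest descriptor, or report the ring empty. */
int ring_consume(struct ring *r, struct desc *out)
{
    if (r->tail == r->head)
        return -1;             /* empty */
    *out = r->d[r->tail % RING_SIZE];
    r->tail++;
    return 0;
}
```

A real virtqueue additionally separates available and used rings and
interposes memory barriers between filling a descriptor and publishing the
index; the model glosses over both.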

>> You're on the right track. When I looked at using the messaging
>> registers on the PLX PCI device, I started by simply creating
>> what was effectively a serial port (one char at a time).
>> Section 4 explains the interlocking required between two processors
>>
>> <http://www.ovro.caltech.edu/~dwh/correlator/pdf/cobra_driver.pdf>
>
> Thank You for this document . Was very helpful in understanding the
> basics of a Host Target Communication and implementation of a virtual
> driver for the same.

I'm glad to hear it helped.

Cheers,
Dave

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2013-09-04 19:28 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-22 15:34 Ethernet over PCIe driver for Inter-Processor Communication Saravanan S
2013-08-22 21:38 ` Scott Wood
2013-08-22 21:43 ` David Hawkins
2013-08-22 22:29   ` Ira W. Snyder
2013-08-25 15:20     ` Saravanan S
2013-08-25 22:38       ` David Hawkins
2013-08-30 17:37         ` Saravanan S
2013-08-30 18:06           ` David Hawkins
2013-09-04 18:34             ` Saravanan S
2013-09-04 19:28               ` David Hawkins
