* [RFC] Inter-processor Mailboxes Drivers
@ 2011-02-11 21:19 ` Meador Inge
  0 siblings, 0 replies; 38+ messages in thread
From: Meador Inge @ 2011-02-11 21:19 UTC (permalink / raw)
  To: linuxppc-dev, linux-arm-kernel
  Cc: openmcapi-dev, Blanchard, Hollis, Hiroshi DOYU

Hi All,

I am currently working on building AMP systems using OpenMCAPI
(https://bitbucket.org/hollisb/openmcapi/wiki/Home) as the
inter-processor communication mechanism.  With OpenMCAPI we, of course,
need a way to send messages to various cores.  On some Freescale PPC
platforms (e.g. P1022DS, MPC8572DS), we have been using message
registers to do this work.  Recently, I was looking at the OMAP4
mailboxes to gear up for moving to ARM-based platforms.

With that, I noticed 'arch/arm/plat-omap/mailbox.c'.  This is very
specific to the OMAP4 boards.  I am looking at designing a new set of
drivers to expose a mailbox service to userspace that will be used
for inter-processor communication.  This would entail the traditional
generic/specific driver split:

     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
        for the MPIC message registers on Power and OMAP4 mailboxes, for
        example.
     2. A higher-level driver under '.../drivers/mailbox/*' that the
        pieces in (1) would register with.  This piece would expose the
        main kernel API.
     3. Userspace interfaces for accessing the mailboxes.  A
        '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example
        (see the sketch after this list).
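
A minimal sketch of how a userspace client might use such a character
device follows.  This is purely illustrative; the device node name, the
fixed 32-bit message size, and the plain read()/write() semantics are
assumptions, not an existing interface:

/* Hypothetical userspace usage of a /dev/mailboxN node: send one 32-bit
 * message to the remote core and block for a single reply. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint32_t msg = 0x1, reply;
        int fd = open("/dev/mailbox0", O_RDWR);         /* hypothetical node */

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, &msg, sizeof(msg)) != sizeof(msg))        /* kick remote core */
                perror("write");
        else if (read(fd, &reply, sizeof(reply)) == sizeof(reply))  /* wait for answer */
                printf("remote core answered: 0x%08x\n", (unsigned)reply);
        close(fd);
        return 0;
}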

Now I have the following questions:

     1. Do others see value in this?
     2. Does something like this already exist?
     3. Is someone else already working on this?

Any feedback will be greatly appreciated.

-- 
Meador Inge     | meador_inge AT mentor.com
Mentor Embedded | http://www.mentor.com/embedded-software

^ permalink raw reply	[flat|nested] 38+ messages in thread

* [RFC] Inter-processor Mailboxes Drivers
@ 2011-02-11 21:19 ` Meador Inge
  0 siblings, 0 replies; 38+ messages in thread
From: Meador Inge @ 2011-02-11 21:19 UTC (permalink / raw)
  To: linux-arm-kernel

Hi All,

I am currently working on building AMP systems using OpenMCAPI
(https://bitbucket.org/hollisb/openmcapi/wiki/Home) as the
inter-processor communication mechanism.  With OpenMCAPI we, of course,
need a way to send messages to various cores.  On some Freescale PPC
platforms (e.g. P1022DS, MPC8572DS), we have been using message
registers to do this work.  Recently, I was looking at the OMAP4
mailboxes to gear up for moving into ARM based platforms.

With that, I noticed 'arch/arm/plat-omap/mailbox.c'.  This is very
specific to the OMAP4 boards.  I am looking at designing a new set of
drivers to expose a mailbox service to userspace that will be used
for inter-processor communication.  This would entail the traditional
generic/specific driver split:

     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
        for the MPIC message registers on Power and OMAP4 mailboxes, for
        example.
     2. A higher level driver under '.../drivers/mailbox/*'.  That the
        pieces in (1) would register with.  This piece would expose the
        main kernel API.
     3. Userspace interfaces for accessing the mailboxes.  A
        '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.

Now I have the following questions:

     1. Do others see value in this?
     2. Does something like this already exist?
     3. Is someone else already working on this?

Any feedback will be greatly appreciated.

-- 
Meador Inge     | meador_inge AT mentor.com
Mentor Embedded | http://www.mentor.com/embedded-software

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-11 21:19 ` Meador Inge
@ 2011-02-12  6:28   ` Sundar
  -1 siblings, 0 replies; 38+ messages in thread
From: Sundar @ 2011-02-12  6:28 UTC (permalink / raw)
  To: Meador Inge
  Cc: Linus WALLEIJ, Hiroshi DOYU, Blanchard, Hollis, openmcapi-dev,
	linuxppc-dev, linux-arm-kernel

Hi,

On Sat, Feb 12, 2011 at 2:49 AM, Meador Inge <meador_inge@mentor.com> wrote:
>
>     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>        for the MPIC message registers on Power and OMAP4 mailboxes, for
>        example.

Yes; this can help.

>     2. A higher level driver under '.../drivers/mailbox/*'.  That the
>        pieces in (1) would register with.  This piece would expose the
>        main kernel API.

A lot of mailboxes are quite platform specific in how they communicate
with the main CPU, and it probably depends on the mailbox too; you can
find both polled and interrupt-driven mailboxes on the same platform.
The APIs should probably be generic enough to operate in either context.

> Now I have the following questions:
>
>     1. Do others see value in this?

At least I would like this; I wanted to generalize such mailbox IPCs
right from the day when I was working on one, but couldn't really
work on that.

>     2. Does something like this already exist?

Not generic as you say; but apart from the OMAP platforms,
you could refer to arch/arm/mach-ux500/prcmu for a mailbox based
IPC on the U8500 platform.

>     3. Is someone else already working on this?

Not sure of that either :), but I am CCing Linus W, the maintainer
of U8500, in case he thinks it is a good idea to come up with a mailbox
IPC framework.

Cheers!
-- 
---------
The views expressed in this email are personal and do not necessarily
reflect my employer's.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-12  6:28   ` Sundar
@ 2011-02-13 21:16     ` Linus Walleij
  -1 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2011-02-13 21:16 UTC (permalink / raw)
  To: Sundar
  Cc: Meador Inge, Hiroshi DOYU, Blanchard, Hollis, openmcapi-dev,
	linuxppc-dev, linux-arm-kernel

2011/2/12 Sundar <sunder.svit@gmail.com>:

> At least I would like this; I wanted to generalize such mailbox IPCs
> right from the day when I was working on one, but couldn't really
> work on that.
>
>>     2. Does something like this already exist?
>
> Not generic as you say; but apart from the OMAP platforms,
> you could refer to arch/arm/mach-ux500/prcmu for a mailbox based
> IPC on the U8500 platform.

We also have this thing:
arch/arm/mach-ux500/mbox-db5500.c

It's another mailbox driver, this one talks to the modem in the
U5500 (basically a physical transport for the CAIF protocol).
(For the U8500 I think modem IPC is instead handled with
a high-speed hardware FIFO, a bit different.)

>>     3. Is someone else already working on this?
>
> Not sure of that too :), but I am CCing Linus W, the maintainer
> of U8500 if he thinks it is a good idea to come up with a mailbox IPC
> framework

I don't know too much about the subject actually; I've not been
deeply into any such code. I don't think anyone is working
on something general from ST-Ericsson or Linaro.

Recently I saw that Texas Instruments are posting a "hardware
spinlock" framework though; that would be in a related vein,
but I think it's for shared data structures (control path) rather
than buffer passing (data path). I'm guessing it works such that
one CPU gets to spin waiting for another one to release the lock.

Given that we may get a framework for hardware spinlocks,
and that we don't want to stockpile drivers into arch/*
or drivers/misc/*, I would say it's intuitively a good idea,
but the question is what data types you would pass in.
In arch/arm/mach-ux500/include/mach/mbox-db5500.h
we have a struct like this:

/**
  * struct mbox - Mailbox instance struct
  * @list:              Linked list head.
  * @pdev:              Pointer to device struct.
  * @cb:                Callback function. Will be called
  *                     when new data is received.
  * @client_data:       Clients private data. Will be sent back
  *                     in the callback function.
  * @virtbase_peer:     Virtual address for outgoing mailbox.
  * @virtbase_local:    Virtual address for incoming mailbox.
  * @buffer:            Then internal queue for outgoing messages.
  * @name:              Name of this mailbox.
  * @buffer_available:  Completion variable to achieve "blocking send".
  *                     This variable will be signaled when there is
  *                     internal buffer space available.
  * @client_blocked:    To keep track if any client is currently
  *                     blocked.
  * @lock:              Spinlock to protect this mailbox instance.
  * @write_index:       Index in internal buffer to write to.
  * @read_index:        Index in internal buffer to read from.
  * @allocated:         Indicates whether this particular mailbox
  *                     id has been allocated by someone.
  */
struct mbox {
        struct list_head list;
        struct platform_device *pdev;
        mbox_recv_cb_t *cb;
        void *client_data;
        void __iomem *virtbase_peer;
        void __iomem *virtbase_local;
        u32 buffer[MBOX_BUF_SIZE];
        char name[MBOX_NAME_SIZE];
        struct completion buffer_available;
        u8 client_blocked;
        spinlock_t lock;
        u8 write_index;
        u8 read_index;
        bool allocated;
};

Compare OMAPs mailboxes in
arch/arm/plat-omap/include/plat/mailbox.h:

typedef u32 mbox_msg_t;

struct omap_mbox_ops {
        omap_mbox_type_t        type;
        int             (*startup)(struct omap_mbox *mbox);
        void            (*shutdown)(struct omap_mbox *mbox);
        /* fifo */
        mbox_msg_t      (*fifo_read)(struct omap_mbox *mbox);
        void            (*fifo_write)(struct omap_mbox *mbox, mbox_msg_t msg);
        int             (*fifo_empty)(struct omap_mbox *mbox);
        int             (*fifo_full)(struct omap_mbox *mbox);
        /* irq */
        void            (*enable_irq)(struct omap_mbox *mbox,
                                                omap_mbox_irq_t irq);
        void            (*disable_irq)(struct omap_mbox *mbox,
                                                omap_mbox_irq_t irq);
        void            (*ack_irq)(struct omap_mbox *mbox, omap_mbox_irq_t irq);
        int             (*is_irq)(struct omap_mbox *mbox, omap_mbox_irq_t irq);
        /* ctx */
        void            (*save_ctx)(struct omap_mbox *mbox);
        void            (*restore_ctx)(struct omap_mbox *mbox);
};

struct omap_mbox_queue {
        spinlock_t              lock;
        struct kfifo            fifo;
        struct work_struct      work;
        struct tasklet_struct   tasklet;
        struct omap_mbox        *mbox;
        bool full;
};

struct omap_mbox {
        char                    *name;
        unsigned int            irq;
        struct omap_mbox_queue  *txq, *rxq;
        struct omap_mbox_ops    *ops;
        struct device           *dev;
        void                    *priv;
        int                     use_count;
        struct blocking_notifier_head   notifier;
};

Some of this may be generalized? I dunno, they look quite
different, but maybe the queueing etc. can actually be made general
enough to form a framework.
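
Purely as a thought experiment, a common denominator of the two might
look something like the sketch below.  None of these names exist in any
tree; they are assumptions meant to show what a hardware-specific
backend would have to provide to a generic layer:

/* Hypothetical generic mailbox backend interface -- an illustration,
 * not an existing API.  Each platform driver (MPIC message registers,
 * OMAP mailbox, ux500 mbox, ...) would fill in the ops and register an
 * instance with the generic layer. */
#include <linux/types.h>

struct ipc_mbox;

struct ipc_mbox_ops {
        int     (*startup)(struct ipc_mbox *mbox);
        void    (*shutdown)(struct ipc_mbox *mbox);
        int     (*send)(struct ipc_mbox *mbox, u32 msg); /* non-blocking */
        bool    (*can_send)(struct ipc_mbox *mbox);      /* room in H/W FIFO? */
        void    (*enable_irq)(struct ipc_mbox *mbox);
        void    (*disable_irq)(struct ipc_mbox *mbox);
};

struct ipc_mbox {
        const char              *name;
        struct ipc_mbox_ops     *ops;
        /* Delivered from the backend's IRQ handler to the client. */
        void (*rx_cb)(struct ipc_mbox *mbox, u32 msg, void *data);
        void                    *cb_data;
        void                    *priv;          /* backend private data */
};

/* Hypothetical entry points exported by the generic layer. */
int ipc_mbox_register(struct ipc_mbox *mbox);
void ipc_mbox_unregister(struct ipc_mbox *mbox);
int ipc_mbox_send(struct ipc_mbox *mbox, u32 msg); /* queue + kick backend */

Queueing, blocking sends and the character device could then live once
in the generic layer, while each backend only touches its own registers.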

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-11 21:19 ` Meador Inge
@ 2011-02-13 21:24   ` Linus Walleij
  -1 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2011-02-13 21:24 UTC (permalink / raw)
  To: Meador Inge
  Cc: openmcapi-dev, Blanchard, Hollis, Hiroshi DOYU, linuxppc-dev,
	linux-arm-kernel

2011/2/11 Meador Inge <meador_inge@mentor.com>:

> This would entail the traditional
> generic/specific driver split:
>
>     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>        for the MPIC message registers on Power and OMAP4 mailboxes, for
>        example.

Having any drivers under arch/* is no good tradition IMO.
Better to move the whole shebang down to drivers/mailbox so
that the subsystem maintainer gets the complete overview
of her/his driver family.

>     2. A higher level driver under '.../drivers/mailbox/*'.  That the
>        pieces in (1) would register with.  This piece would expose the
>        main kernel API.

Cool...

>     3. Userspace interfaces for accessing the mailboxes.  A
>        '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.

What kind of business does userspace have with directly using
mailboxes? Enlighten me so I get it... in our system these are
used by protocols, such as net/caif/* thru drivers/net/caif/*, and
we have similar kernelspace functionality for Phonet.

CAIF and Phonet, on the other hand, have custom openings
down to the thing that exists on the other end of the mailbox.
Most of these systems tend to talk some funny protocol that
is often better handled by the kernel than by any userspace.

So is this for the situation where you have no intermediate
protocol between your userspace and the other CPU's
subsystem? Or are you thinking about handling that
protocol in userspace? That is generally not such a good idea
for efficiency reasons.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-11 21:19 ` Meador Inge
@ 2011-02-14  7:21   ` Hiroshi DOYU
  -1 siblings, 0 replies; 38+ messages in thread
From: Hiroshi DOYU @ 2011-02-14  7:21 UTC (permalink / raw)
  To: meador_inge
  Cc: openmcapi-dev, Hollis_Blanchard, linuxppc-dev, linux-arm-kernel

Hi Meador,

From: ext Meador Inge <meador_inge@mentor.com>
Subject: [RFC] Inter-processor Mailboxes Drivers
Date: Fri, 11 Feb 2011 15:19:51 -0600

> Hi All,
> 
> I am currently working on building AMP systems using OpenMCAPI
> (https://bitbucket.org/hollisb/openmcapi/wiki/Home) as the
> inter-processor communication mechanism.  With OpenMCAPI we, of
> course,
> need a way to send messages to various cores.  On some Freescale PPC
> platforms (e.g. P1022DS, MPC8572DS), we have been using message
> registers to do this work.  Recently, I was looking at the OMAP4
> mailboxes to gear up for moving into ARM based platforms.
> 
> With that, I noticed 'arch/arm/plat-omap/mailbox.c'.  This is very
> specific to the OMAP4 boards.  I am looking at designing a new set of
> drivers to expose a mailbox service to userspace that will be used
> for inter-processor communication.  This would entail the traditional
> generic/specific driver split:
> 
>     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>        for the MPIC message registers on Power and OMAP4 mailboxes, for
>        example.
>     2. A higher level driver under '.../drivers/mailbox/*'.  That the
>        pieces in (1) would register with.  This piece would expose the
>        main kernel API.
>     3. Userspace interfaces for accessing the mailboxes.  A
>        '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
> 
> Now I have the following questions:
> 
>     1. Do others see value in this?

Yes.

I discussed this with TI (Hari) a long time ago, but it didn't proceed
for some reason, not a technical one.

>     2. Does something like this already exist?
>     3. Is someone else already working on this?
> 
> Any feedback will be greatly appreciated.

Now the basic concept can be divided into 3 parts: (1) H/W dependent,
(2) generic driver, (3) character device interface.  I guess that it
might be good to have one _pseudo_ H/W instance, to make verification
easier without real H/W and firmware, across multiple platforms.
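
As an illustration of that idea, a pseudo backend could simply loop a
transmitted word straight back as a received one.  The ipc_mbox names
below are the hypothetical API sketched earlier in this thread, not an
existing interface:

/* Hypothetical "loopback" mailbox backend for exercising the generic
 * layer and the character device without any real hardware. */
#include <linux/init.h>
#include <linux/types.h>

static int loopback_send(struct ipc_mbox *mbox, u32 msg)
{
        /* No hardware: immediately "receive" what was just sent. */
        if (mbox->rx_cb)
                mbox->rx_cb(mbox, msg, mbox->cb_data);
        return 0;
}

static bool loopback_can_send(struct ipc_mbox *mbox)
{
        return true;            /* never full */
}

static struct ipc_mbox_ops loopback_ops = {
        .send     = loopback_send,
        .can_send = loopback_can_send,
};

static struct ipc_mbox loopback_mbox = {
        .name = "mailbox-loopback",
        .ops  = &loopback_ops,
};

static int __init loopback_mbox_init(void)
{
        return ipc_mbox_register(&loopback_mbox);
}

Anything that works against the generic API (including the userspace
nodes) could then be verified against this instance first.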

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-13 21:16     ` Linus Walleij
@ 2011-02-14  7:32       ` Hiroshi DOYU
  -1 siblings, 0 replies; 38+ messages in thread
From: Hiroshi DOYU @ 2011-02-14  7:32 UTC (permalink / raw)
  To: linus.walleij
  Cc: meador_inge, Hollis_Blanchard, sunder.svit, openmcapi-dev,
	linuxppc-dev, linux-arm-kernel

From: ext Linus Walleij <linus.walleij@linaro.org>
Subject: Re: [RFC] Inter-processor Mailboxes Drivers
Date: Sun, 13 Feb 2011 22:16:12 +0100

> 2011/2/12 Sundar <sunder.svit@gmail.com>:
>
>> At least I would like this; I wanted to generalize such mailbox IPCs
>> right from the day when I was working on one, but couldn't really
>> work on that.
>>
>>>     2. Does something like this already exist?
>>
>> Not generic as you say; but apart from the OMAP platforms,
>> you could refer to arch/arm/mach-ux500/prcmu for a mailbox based
>> IPC on the U8500 platform.
>
> We also have this thing:
> arch/arm/mach-ux500/mbox-db5500.c
>
> It's another mailbox driver, this one talks to the modem in the
> U5500 (basically a physical transport for the CAIF protocol).
> (For the U8500 I think modem IPC is instead handled with
> a high-speed hardware FIFO, a bit different.)
>
>>>     3. Is someone else already working on this?
>>
>> Not sure of that too :), but I am CCing Linus W, the maintainer
>> of U8500 if he thinks it is a good idea to come up with a mailbox IPC
>> framework
>
> I don't know too much about the subject actually, I've not been
> deeply into any such code. I don't think anyone is working
> on something general from ST-Ericsson or Linaro.
>
> Recently I saw that Texas Instruments are posting a "hardware
> spinlock" framework though, this would be on a related tone,
> but I think it's for shared data structures (control path) rather
> than buffer passing (data path). I'm guessing this works like
> that one CPU gets to spin waiting for another one to release
> the lock.
>
> Given that we may have a framework for hardware spinlock
> and that we don't want to stockpile drivers into arch/*
> or drivers/misc/* I would say it's intuitively a good idea,
> but the question is what data types you would pass in?
> In arch/arm/mach-ux500/include/mach/mbox-db5500.h
> we have a struct like this:
>
> /**
>   * struct mbox - Mailbox instance struct
>   * @list:              Linked list head.
>   * @pdev:              Pointer to device struct.
>   * @cb:                Callback function. Will be called
>   *                     when new data is received.
>   * @client_data:       Clients private data. Will be sent back
>   *                     in the callback function.
>   * @virtbase_peer:     Virtual address for outgoing mailbox.
>   * @virtbase_local:    Virtual address for incoming mailbox.
>   * @buffer:            Then internal queue for outgoing messages.
>   * @name:              Name of this mailbox.
>   * @buffer_available:  Completion variable to achieve "blocking send".
>   *                     This variable will be signaled when there is
>   *                     internal buffer space available.
>   * @client_blocked:    To keep track if any client is currently
>   *                     blocked.
>   * @lock:              Spinlock to protect this mailbox instance.
>   * @write_index:       Index in internal buffer to write to.
>   * @read_index:        Index in internal buffer to read from.
>   * @allocated:         Indicates whether this particular mailbox
>   *                     id has been allocated by someone.
>   */
> struct mbox {
>         struct list_head list;
>         struct platform_device *pdev;
>         mbox_recv_cb_t *cb;
>         void *client_data;
>         void __iomem *virtbase_peer;
>         void __iomem *virtbase_local;
>         u32 buffer[MBOX_BUF_SIZE];
>         char name[MBOX_NAME_SIZE];
>         struct completion buffer_available;
>         u8 client_blocked;
>         spinlock_t lock;
>         u8 write_index;
>         u8 read_index;
>         bool allocated;
> };
>
> Compare OMAPs mailboxes in
> arch/arm/plat-omap/include/plat/mailbox.h:
>
> typedef u32 mbox_msg_t;
>
> struct omap_mbox_ops {
>         omap_mbox_type_t        type;
>         int             (*startup)(struct omap_mbox *mbox);
>         void            (*shutdown)(struct omap_mbox *mbox);
>         /* fifo */
>         mbox_msg_t      (*fifo_read)(struct omap_mbox *mbox);
>         void            (*fifo_write)(struct omap_mbox *mbox, mbox_msg_t msg);
>         int             (*fifo_empty)(struct omap_mbox *mbox);
>         int             (*fifo_full)(struct omap_mbox *mbox);
>         /* irq */
>         void            (*enable_irq)(struct omap_mbox *mbox,
>                                                 omap_mbox_irq_t irq);
>         void            (*disable_irq)(struct omap_mbox *mbox,
>                                                 omap_mbox_irq_t irq);
>         void            (*ack_irq)(struct omap_mbox *mbox, omap_mbox_irq_t irq);
>         int             (*is_irq)(struct omap_mbox *mbox, omap_mbox_irq_t irq);
>         /* ctx */
>         void            (*save_ctx)(struct omap_mbox *mbox);
>         void            (*restore_ctx)(struct omap_mbox *mbox);
> };
>
> struct omap_mbox_queue {
>         spinlock_t              lock;
>         struct kfifo            fifo;
>         struct work_struct      work;
>         struct tasklet_struct   tasklet;
>         struct omap_mbox        *mbox;
>         bool full;
> };
>
> struct omap_mbox {
>         char                    *name;
>         unsigned int            irq;
>         struct omap_mbox_queue  *txq, *rxq;
>         struct omap_mbox_ops    *ops;
>         struct device           *dev;
>         void                    *priv;
>         int                     use_count;
>         struct blocking_notifier_head   notifier;
> };
>
> Some of this may be generalized? I dunno, they look quite
> different but maybe queueing etc can actually be made general
> enough to form a framework.

OMAP mailbox is the interrupt driven 32bit unit H/W FIFO to other
cores.

"struct omap_mbox_ops" was provided mainly to absorb the difference
between OMAP1 and OMAP2+ from H/W POV. So generally the layer could
be:

	-----------------------
	character device driver
	-----------------------
	generic mailbox driver
	-----------------------
	   H/W registration
	-----------------------

In the OMAP case, in addition to the above, it could exceptionally be:

	-----------------------
	character device driver
	-----------------------
	generic mailbox driver
	-----------------------
	   H/W registration
	-----------------------
	   OMAP 1  | OMAP2+
	-----------------------

So "character device driver"(interface) and "generic mailbox
driver"(queuing) may be able to abstructed/generalized.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14  7:32       ` Hiroshi DOYU
@ 2011-02-14  8:39         ` Linus Walleij
  -1 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2011-02-14  8:39 UTC (permalink / raw)
  To: Hiroshi DOYU
  Cc: meador_inge, Hollis_Blanchard, sunder.svit, openmcapi-dev,
	linuxppc-dev, linux-arm-kernel

On Mon, Feb 14, 2011 at 8:32 AM, Hiroshi DOYU <Hiroshi.DOYU@nokia.com> wrote:

> OMAP mailbox is the interrupt driven 32bit unit H/W FIFO to other
> cores.

How is it used? Is it a low-traffic (like single 32bit words etc) signal
control-path link while the actual high-throughput data-path is done
with shared memory? (That is how the db5500 mbox works anyways.)

Linus Walleij

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14  8:39         ` Linus Walleij
@ 2011-02-14  8:55           ` Hiroshi DOYU
  -1 siblings, 0 replies; 38+ messages in thread
From: Hiroshi DOYU @ 2011-02-14  8:55 UTC (permalink / raw)
  To: linus.walleij
  Cc: meador_inge, Hollis_Blanchard, sunder.svit, openmcapi-dev,
	linuxppc-dev, linux-arm-kernel

From: ext Linus Walleij <linus.walleij@linaro.org>
Subject: Re: [RFC] Inter-processor Mailboxes Drivers
Date: Mon, 14 Feb 2011 09:39:32 +0100

> On Mon, Feb 14, 2011 at 8:32 AM, Hiroshi DOYU <Hiroshi.DOYU@nokia.com> wrote:
> 
>> OMAP mailbox is the interrupt driven 32bit unit H/W FIFO to other
>> cores.
> 
> How is it used? Is it a low-traffic (like single 32bit words etc) signal
> control-path link while the actual high-throughput data-path is done
> with shared memory? (That is how the db5500 mbox works anyways.)

Yes, maybe quite similar.

The mailbox is not a single 32-bit register but a 32-bit x 4 (or 8?)
slot FIFO, IIRC, and it is mainly used for notification between cores.
The bulk of the data is transferred via shared memory, which has been
mapped onto the virtual address space of the other core in advance.

For example, a typical usage with the DSP, mp3 decoding:

1, ARM maps 2 shared memory areas (input/output) onto the DSP virtual
   address space.
2, ARM fills mp3 data in the input buffer.
3, ARM notifies the DSP that data is ready in the input buffer.
4, DSP decodes the input data and puts the output data in the output buffer.
5, DSP notifies that the output buffer is ready.

Roughly something like the above. DSP S/W is multi-tasking, though.
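
A very rough ARM-side skeleton of that handshake, using the hypothetical
ipc_mbox API from earlier in the thread; the message values and the
already-mapped shared buffers are assumptions made up for illustration:

/* Hypothetical ARM-side flow for one decode round (steps 2, 3 and 5). */
#include <linux/completion.h>
#include <linux/string.h>
#include <linux/types.h>

#define MSG_INPUT_READY   0x1   /* made-up message codes */
#define MSG_OUTPUT_READY  0x2

static void *shared_input;      /* assumed mapped in step 1 */
static DECLARE_COMPLETION(output_ready);

/* Registered as the mailbox RX callback. */
static void decode_rx_cb(struct ipc_mbox *mbox, u32 msg, void *data)
{
        if (msg == MSG_OUTPUT_READY)    /* step 5: DSP is done */
                complete(&output_ready);
}

static int decode_one_buffer(struct ipc_mbox *mbox, const void *in, size_t len)
{
        memcpy(shared_input, in, len);          /* step 2 */
        ipc_mbox_send(mbox, MSG_INPUT_READY);   /* step 3: kick the DSP */
        wait_for_completion(&output_ready);     /* step 4 runs on the DSP */
        return 0;                               /* output buffer now valid */
}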

Does db5500 use IOMMU for mapping shared memories?

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14  8:55           ` Hiroshi DOYU
@ 2011-02-14  9:00             ` Linus Walleij
  -1 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2011-02-14  9:00 UTC (permalink / raw)
  To: Hiroshi DOYU
  Cc: meador_inge, Hollis_Blanchard, sunder.svit, openmcapi-dev,
	linuxppc-dev, linux-arm-kernel

On Mon, Feb 14, 2011 at 9:55 AM, Hiroshi DOYU <Hiroshi.DOYU@nokia.com> wrote:

> Does db5500 use IOMMU for mapping shared memories?

Nope, it's a fixed physical allocation from the modem side
of the world.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14  8:55           ` Hiroshi DOYU
@ 2011-02-14  9:06             ` Hiroshi DOYU
  -1 siblings, 0 replies; 38+ messages in thread
From: Hiroshi DOYU @ 2011-02-14  9:06 UTC (permalink / raw)
  To: meador_inge, linus.walleij
  Cc: openmcapi-dev, sunder.svit, Hollis_Blanchard, linuxppc-dev,
	linux-arm-kernel

From: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
Subject: Re: [RFC] Inter-processor Mailboxes Drivers
Date: Mon, 14 Feb 2011 10:55:53 +0200 (EET)

> From: ext Linus Walleij <linus.walleij@linaro.org>
> Subject: Re: [RFC] Inter-processor Mailboxes Drivers
> Date: Mon, 14 Feb 2011 09:39:32 +0100
> 
>> On Mon, Feb 14, 2011 at 8:32 AM, Hiroshi DOYU <Hiroshi.DOYU@nokia.com> wrote:
>> 
>>> OMAP mailbox is the interrupt driven 32bit unit H/W FIFO to other
>>> cores.
>> 
>> How is it used? Is it a low-traffic (like single 32bit words etc) signal
>> control-path link while the actual high-throughput data-path is done
>> with shared memory? (That is how the db5500 mbox works anyways.)
> 
> Yes, maybe quite similar.
> 
> mailbox is not single 32 bit but is 32 bit x 4(or 8?) slots fifo, IIRC,
> and mainly it is used as notification between cores. And big amount of
> data is transferred with shared memory, which has been mapped onto the
> virtual address space of the other side of core, in advance.
> 
> For example, typical usage of DSP, mp3 decoding,
> 
> 1, ARM maps 2 shared memory area(input/output) onto DSP virtual
>    address space.
> 2, ARM fills mp3 data in input buffer.
> 3, ARM notifies DSP that data is ready in input buffer.
> 4, DSP decodes input data and put output data on output buffer.
> 5, DSP notifies that output buffer is ready.
> 
> Roughly something like the above. DSP S/W is multi-tasking, though.

Here, the ARM-side process talks to the DSP-side process in their own
way, and there are also other cores, talking their own protocols.  So I
think that, at least, the protocol part should be pluggable, although it
doesn't always have to be in userland.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-11 21:19 ` Meador Inge
@ 2011-02-14 10:01   ` Jamie Iles
  -1 siblings, 0 replies; 38+ messages in thread
From: Jamie Iles @ 2011-02-14 10:01 UTC (permalink / raw)
  To: Meador Inge
  Cc: openmcapi-dev, Blanchard, Hollis, Hiroshi DOYU, linuxppc-dev,
	linux-arm-kernel

On Fri, Feb 11, 2011 at 03:19:51PM -0600, Meador Inge wrote:
>     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>        for the MPIC message registers on Power and OMAP4 mailboxes, for
>        example.
>     2. A higher level driver under '.../drivers/mailbox/*'.  That the
>        pieces in (1) would register with.  This piece would expose the
>        main kernel API.
>     3. Userspace interfaces for accessing the mailboxes.  A
>        '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.

How about using virtio for all of this and having the mailbox as a 
notification/message passing driver for the virtio backend?  There are 
already virtio console and network drivers that could be useful for the 
userspace part of it.  drivers/virtio/virtio_ring.c might be a good 
starting point if you thought there was some mileage in this approach.

Jamie

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14 10:01   ` Jamie Iles
@ 2011-02-14 10:03     ` Ohad Ben-Cohen
  -1 siblings, 0 replies; 38+ messages in thread
From: Ohad Ben-Cohen @ 2011-02-14 10:03 UTC (permalink / raw)
  To: Jamie Iles
  Cc: Meador Inge, Hiroshi DOYU, Blanchard, Hollis, openmcapi-dev,
	linuxppc-dev, linux-arm-kernel

On Mon, Feb 14, 2011 at 12:01 PM, Jamie Iles <jamie@jamieiles.com> wrote:
> On Fri, Feb 11, 2011 at 03:19:51PM -0600, Meador Inge wrote:
>>     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>>        for the MPIC message registers on Power and OMAP4 mailboxes, for
>>        example.
>>     2. A higher level driver under '.../drivers/mailbox/*'.  That the
>>        pieces in (1) would register with.  This piece would expose the
>>        main kernel API.
>>     3. Userspace interfaces for accessing the mailboxes.  A
>>        '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
>
> How about using virtio for all of this and having the mailbox as a
> notification/message passing driver for the virtio backend?

This is exactly what we are doing now, and it looks promising.  Expect
patches soon.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14 10:03     ` Ohad Ben-Cohen
@ 2011-02-14 16:53       ` Ira W. Snyder
  -1 siblings, 0 replies; 38+ messages in thread
From: Ira W. Snyder @ 2011-02-14 16:53 UTC (permalink / raw)
  To: Ohad Ben-Cohen
  Cc: Meador Inge, Hiroshi DOYU, Blanchard, Hollis, openmcapi-dev,
	Jamie Iles, linuxppc-dev, linux-arm-kernel

On Mon, Feb 14, 2011 at 12:03:59PM +0200, Ohad Ben-Cohen wrote:
> On Mon, Feb 14, 2011 at 12:01 PM, Jamie Iles <jamie@jamieiles.com> wrote:
> > On Fri, Feb 11, 2011 at 03:19:51PM -0600, Meador Inge wrote:
> >>     1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
> >>        for the MPIC message registers on Power and OMAP4 mailboxes, for
> >>        example.
> >>     2. A higher level driver under '.../drivers/mailbox/*'.  That the
> >>        pieces in (1) would register with.  This piece would expose the
> >>        main kernel API.
> >>     3. Userspace interfaces for accessing the mailboxes.  A
> >>        '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
> >
> > How about using virtio for all of this and having the mailbox as a
> > notification/message passing driver for the virtio backend?
> 
> This is exactly what we are doing now, and it looks promising. expect
> patches soon.

I'll be happy to examine the feasibility of doing a port to mpc83xx as
soon as I see the code. :-) I have been using the message registers to
create a software "network card" over PCI (between a host system and an
mpc83xx in a PCI slot). I have wanted to use virtio for this task for a
long time.

I think a uniform interface for the mailbox registers would be a very
useful API.

Thanks,
Ira

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-13 21:24   ` Linus Walleij
@ 2011-02-14 23:05     ` Blanchard, Hollis
  -1 siblings, 0 replies; 38+ messages in thread
From: Blanchard, Hollis @ 2011-02-14 23:05 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Inge, Meador, openmcapi-dev, Hiroshi DOYU, linuxppc-dev,
	linux-arm-kernel

On 02/13/2011 01:24 PM, Linus Walleij wrote:
>> >      3. Userspace interfaces for accessing the mailboxes.  A
>> >         '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
> What kind of business does userspace have with directly using
> mailboxes? Enlighten me so I get it... in our system these are
> used by protocols, such as net/caif/* thru drivers/net/caif/*, and
> we have similar kernelspace functionality for Phonet.
>
> CAIF and Phonet on the other hand, have custom openings
> down to the thing that exists on the other end of the mailbox.
> Most of these systems tend to talk some funny protocol that
> is often better handled by the kernel than by any userspace.
>
> So is this for the situation when you have no intermediate
> protocol between your userspace and the other CPU's
> subsystem? Or are you thinking about handling that
> protocol in userspace? That is generally not such a good idea
> for efficiency reasons.
OpenMCAPI (http://openmcapi.org) implements the MCAPI specification,
which is a simple application-level communication API that uses shared
memory. The API could be layered over any protocol, but was more or less
designed for simple shared-memory systems, e.g. fixed topology, no
retransmission, etc.

Currently, we implement almost all of this as a shared library, plus a
very small kernel driver. The only requirements on the kernel are to
allow userspace to map the shared memory area, and provide an IPI
mechanism (and allow the process to sleep while waiting). Applications
sync with each other using normal atomic memory operations.
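
To make that concrete, here is a minimal sketch of what the userspace
side of such a node could look like; the device name and the two ioctls
below are purely hypothetical, not the actual OpenMCAPI interface:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical ioctls -- not the actual OpenMCAPI interface. */
#define MCAPI_IOC_KICK  _IO('m', 0)     /* raise an IPI toward the remote core */
#define MCAPI_IOC_WAIT  _IO('m', 1)     /* sleep until the remote core kicks us */

int main(void)
{
        uint32_t *shm;
        int fd;

        fd = open("/dev/mcapi0", O_RDWR);       /* hypothetical device node */
        if (fd < 0)
                return 1;

        /* The driver only has to expose the shared region... */
        shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (shm == MAP_FAILED)
                return 1;

        /* ...while applications synchronize with plain atomic operations. */
        __sync_fetch_and_add(&shm[0], 1);
        ioctl(fd, MCAPI_IOC_KICK);              /* notify the other side */
        ioctl(fd, MCAPI_IOC_WAIT);              /* sleep until it answers */

        munmap(shm, 4096);
        close(fd);
        return 0;
}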

We're now trying to optimize the transfer of scalars on platforms that
provide mailboxes (beyond simple IPIs), which is why we're looking at
defining a user-facing API to such hardware.

I'll add that we haven't done serious optimization yet, but the numbers
we do have seem reasonable. What are the "efficiency" issues you're
worried about?

Hollis Blanchard
Mentor Graphics, Embedded Systems Division

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14 10:01   ` Jamie Iles
@ 2011-02-15 21:58     ` Meador Inge
  -1 siblings, 0 replies; 38+ messages in thread
From: Meador Inge @ 2011-02-15 21:58 UTC (permalink / raw)
  To: Jamie Iles
  Cc: openmcapi-dev, Blanchard, Hollis, Hiroshi DOYU, linuxppc-dev,
	linux-arm-kernel

On 02/14/2011 04:01 AM, Jamie Iles wrote:
> On Fri, Feb 11, 2011 at 03:19:51PM -0600, Meador Inge wrote:
>>      1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>>         for the MPIC message registers on Power and OMAP4 mailboxes, for
>>         example.
>>      2. A higher level driver under '.../drivers/mailbox/*'.  That the
>>         pieces in (1) would register with.  This piece would expose the
>>         main kernel API.
>>      3. Userspace interfaces for accessing the mailboxes.  A
>>         '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
>
> How about using virtio for all of this and having the mailbox as a
> notification/message passing driver for the virtio backend?  There are
> already virtio console and network drivers that could be useful for the
> userspace part of it.  drivers/virtio/virtio_ring.c might be a good
> starting point if you thought there was some mileage in this approach.

To be honest, I am not that familiar with 'virtio', but I will take a 
look.  Thanks for the pointer.  Maybe Hollis can speak to this idea more.

> Jamie
>


-- 
Meador Inge     | meador_inge AT mentor.com
Mentor Embedded | http://www.mentor.com/embedded-software

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-15 21:58     ` Meador Inge
@ 2011-02-15 23:38       ` Blanchard, Hollis
  -1 siblings, 0 replies; 38+ messages in thread
From: Blanchard, Hollis @ 2011-02-15 23:38 UTC (permalink / raw)
  To: Inge, Meador
  Cc: openmcapi-dev, Jamie Iles, Hiroshi DOYU, linuxppc-dev, linux-arm-kernel

On 02/15/2011 01:58 PM, Meador Inge wrote:
> On 02/14/2011 04:01 AM, Jamie Iles wrote:
>> On Fri, Feb 11, 2011 at 03:19:51PM -0600, Meador Inge wrote:
>>>      1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>>>         for the MPIC message registers on Power and OMAP4 mailboxes, for
>>>         example.
>>>      2. A higher level driver under '.../drivers/mailbox/*'.  That the
>>>         pieces in (1) would register with.  This piece would expose the
>>>         main kernel API.
>>>      3. Userspace interfaces for accessing the mailboxes.  A
>>>         '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
>>
>> How about using virtio for all of this and having the mailbox as a
>> notification/message passing driver for the virtio backend?  There are
>> already virtio console and network drivers that could be useful for the
>> userspace part of it.  drivers/virtio/virtio_ring.c might be a good
>> starting point if you thought there was some mileage in this approach.
>
> To be honest, I am not that familiar with 'virtio', but I will take a
> look.  Thanks for the pointer.  Maybe Hollis can speak to this idea more.
My opinion is that virtio is (over?) complicated.

I've looked into it in the past, and I'm definitely open to using it if
somebody can demonstrate how easy it is, but adopting it wouldn't have
helped OpenMCAPI with our use cases, and would have incurred extra pain,
so we didn't.

Hollis Blanchard
Mentor Graphics, Embedded Systems Division

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-15 23:38       ` Blanchard, Hollis
@ 2011-02-16  6:22         ` Hiroshi DOYU
  -1 siblings, 0 replies; 38+ messages in thread
From: Hiroshi DOYU @ 2011-02-16  6:22 UTC (permalink / raw)
  To: Hollis_Blanchard
  Cc: meador_inge, jamie, linuxppc-dev, openmcapi-dev, linux-arm-kernel

From: "ext Blanchard, Hollis" <Hollis_Blanchard@mentor.com>
Subject: Re: [RFC] Inter-processor Mailboxes Drivers
Date: Tue, 15 Feb 2011 15:38:25 -0800

> On 02/15/2011 01:58 PM, Meador Inge wrote:
>> On 02/14/2011 04:01 AM, Jamie Iles wrote:
>>> On Fri, Feb 11, 2011 at 03:19:51PM -0600, Meador Inge wrote:
>>>>      1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>>>>         for the MPIC message registers on Power and OMAP4 mailboxes, 
>>>> for
>>>>         example.
>>>>      2. A higher level driver under '.../drivers/mailbox/*'.  That the
>>>>         pieces in (1) would register with.  This piece would expose the
>>>>         main kernel API.
>>>>      3. Userspace interfaces for accessing the mailboxes.  A
>>>>         '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
>>>
>>> How about using virtio for all of this and having the mailbox as a
>>> notification/message passing driver for the virtio backend?  There are
>>> already virtio console and network drivers that could be useful for the
>>> userspace part of it.  drivers/virtio/virtio_ring.c might be a good
>>> starting point if you thought there was some mileage in this approach.
>>
>> To be honest, I am not that familiar with 'virtio', but I will take a 
>> look.  Thanks for the pointer.  Maybe Hollis can speak to this idea more.
> My opinion is that virtio is (over?) complicated.

Considering the case of the OMAP mailbox H/W, which is just a simple
one-way 4-slot x 32-bit H/W FIFO, I also think that virtio may be a bit
too much...
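
Just to give a feel for it, a rough sketch of pushing one word into that
kind of FIFO; the register offsets below are placeholders, not the exact
OMAP4 layout:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>

/* Placeholder offsets, not the real OMAP4 mailbox register map. */
#define MBOX_MESSAGE(m)         (0x40 + (m) * 4)
#define MBOX_FIFOSTATUS(m)      (0x80 + (m) * 4)

/* Push one 32-bit word into mailbox 'm', or -EBUSY if the FIFO is full. */
static int mbox_put(void __iomem *base, int m, u32 msg)
{
        if (readl(base + MBOX_FIFOSTATUS(m)) & 1)
                return -EBUSY;
        writel(msg, base + MBOX_MESSAGE(m));
        return 0;
}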

> I've looked into it in the past, and I'm definitely open to using it if 
> somebody can demonstrate how easy it is, but adopting it wouldn't have 
> helped OpenMCAPI with our use cases, and would have incurred extra pain, 
> so we didn't.
> 
> Hollis Blanchard
> Mentor Graphics, Embedded Systems Division
> 

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [openmcapi-dev] Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-16  6:22         ` Hiroshi DOYU
@ 2011-02-16 20:22           ` Blanchard, Hollis
  -1 siblings, 0 replies; 38+ messages in thread
From: Blanchard, Hollis @ 2011-02-16 20:22 UTC (permalink / raw)
  To: Hiroshi DOYU
  Cc: Inge, Meador, jamie, linuxppc-dev, openmcapi-dev, linux-arm-kernel

On 02/15/2011 10:22 PM, Hiroshi DOYU wrote:
> From: "ext Blanchard, Hollis"<Hollis_Blanchard@mentor.com>
> Subject: Re: [RFC] Inter-processor Mailboxes Drivers
> Date: Tue, 15 Feb 2011 15:38:25 -0800
>
>> On 02/15/2011 01:58 PM, Meador Inge wrote:
>>> On 02/14/2011 04:01 AM, Jamie Iles wrote:
>>>> On Fri, Feb 11, 2011 at 03:19:51PM -0600, Meador Inge wrote:
>>>>>       1. Hardware specific bits somewhere under '.../arch/*'.  Drivers
>>>>>          for the MPIC message registers on Power and OMAP4 mailboxes, for
>>>>>          example.
>>>>>       2. A higher level driver under '.../drivers/mailbox/*'.  That the
>>>>>          pieces in (1) would register with.  This piece would expose the
>>>>>          main kernel API.
>>>>>       3. Userspace interfaces for accessing the mailboxes.  A
>>>>>          '/dev/mailbox1', '/dev/mailbox2', etc... mapping, for example.
>>>> How about using virtio for all of this and having the mailbox as a
>>>> notification/message passing driver for the virtio backend?  There are
>>>> already virtio console and network drivers that could be useful for the
>>>> userspace part of it.  drivers/virtio/virtio_ring.c might be a good
>>>> starting point if you thought there was some mileage in this approach.
>>> To be honest, I am not that familiar with 'virtio', but I will take a
>>> look.  Thanks for the pointer.  Maybe Hollis can speak to this idea more.
>> My opinion is that virtio is (over?) complicated.
> Considering the case of omap mailbox H/W, it is just a simple one way
> 4 slot x 32bit H/W FIFO, I also may think that this may be a bit too
> much...
I think the proposal is to implement a virtio link using the OMAP
mailboxes as the interrupt mechanism, and shared memory to carry the
data and descriptor rings.
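
Roughly, the glue would look something like this (the names and the kick
register below are illustrative only, not an actual driver):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/virtio.h>
#include <linux/virtio_ring.h>

/* Illustrative only -- this struct and the kick register are made up. */
struct mbox_virtio_link {
        struct virtqueue *vq;           /* vring living in shared memory */
        void __iomem *mbox_kick;        /* hypothetical mailbox register */
};

/* virtio wants to notify the remote side: a single mailbox write. */
static void mbox_virtio_notify(struct virtqueue *vq)
{
        struct mbox_virtio_link *link = vq->priv;

        writel(1, link->mbox_kick);
}

/* Mailbox interrupt: the remote side touched the rings, let virtio run. */
static irqreturn_t mbox_virtio_interrupt(int irq, void *data)
{
        struct mbox_virtio_link *link = data;

        return vring_interrupt(irq, link->vq);
}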

Hollis Blanchard
Mentor Graphics, Embedded Systems Division

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC] Inter-processor Mailboxes Drivers
  2011-02-14 23:05     ` Blanchard, Hollis
@ 2011-02-16 21:54       ` Linus Walleij
  -1 siblings, 0 replies; 38+ messages in thread
From: Linus Walleij @ 2011-02-16 21:54 UTC (permalink / raw)
  To: Blanchard, Hollis
  Cc: Inge, Meador, openmcapi-dev, Hiroshi DOYU, linuxppc-dev,
	linux-arm-kernel

2011/2/15 Blanchard, Hollis <Hollis_Blanchard@mentor.com>:

> OpenMCAPI (http://openmcapi.org) implements the MCAPI specification,
> which is a simple application-level communication API that uses shared
> memory. The API could be layered over any protocol, but was more or less
> designed for simple shared-memory systems, e.g. fixed topology, no
> retransmission, etc.

Cool...

> Currently, we implement almost all of this as a shared library, plus a
> very small kernel driver. The only requirements on the kernel are to
> allow userspace to map the shared memory area, and provide an IPI
> mechanism (and allow the process to sleep while waiting). Applications
> sync with each other using normal atomic memory operations.

Can't this really small kernel driver take care of the mailbox
business as well?

It seems a bit backward if you have, say, /dev/mcapi0, /dev/mcapi1,
etc. (or however you expose this to userspace) and /dev/mailbox0,
/dev/mailbox1, etc. on top of that. Wouldn't one device node per
communication channel be nicer? Then you would have some ioctl() on
the /dev/mcapi0 node to trigger the transport and need not worry
that it's a mailbox doing the sync.
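
For what it's worth, the kernel half of such a node can stay very small.
A rough sketch of the shape I mean (the ioctl numbers and the doorbell
register are made up, illustration only):

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/ioctl.h>
#include <linux/sched.h>
#include <linux/wait.h>

#define MCAPI_IOC_KICK  _IO('m', 0)     /* hypothetical */
#define MCAPI_IOC_WAIT  _IO('m', 1)     /* hypothetical */

static DECLARE_WAIT_QUEUE_HEAD(mcapi_wq);
static int mcapi_pending;
static void __iomem *doorbell;          /* hypothetical IPI register */

static irqreturn_t mcapi_irq(int irq, void *data)
{
        mcapi_pending = 1;
        wake_up_interruptible(&mcapi_wq);
        return IRQ_HANDLED;
}

static long mcapi_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
        switch (cmd) {
        case MCAPI_IOC_KICK:
                writel(1, doorbell);    /* kick the remote core */
                return 0;
        case MCAPI_IOC_WAIT:
                if (wait_event_interruptible(mcapi_wq, mcapi_pending))
                        return -ERESTARTSYS;
                mcapi_pending = 0;      /* a real driver needs locking here */
                return 0;
        default:
                return -ENOTTY;
        }
}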

What I'm after is that whatever datapath you have should include
the control mechanism; as it stands, it's like you're opening two
interfaces into the kernel, one for mapping in data pages and one
for synchronizing the transfers. Or am I getting things wrong?

I think nominally all mailbox users would be in-kernel, like the
MCAPI driver, so they don't need a userspace interface. To me it
feels like having, say, /dev/mutex0, /dev/mutex1 for some other
shared memory opening into the kernel (such as the framebuffer),
and that would look a bit funny.

> I'll add that we haven't done serious optimization yet, but the numbers
> we do have seem reasonable. What are the "efficiency" issues you're
> worried about?

For huge data flows I think you may get into trouble, needing things
like queueing, descriptor pools etc. But if you're convinced this will
work, do go ahead.

Linus Walleij

^ permalink raw reply	[flat|nested] 38+ messages in thread

Thread overview: 38+ messages
2011-02-11 21:19 [RFC] Inter-processor Mailboxes Drivers Meador Inge
2011-02-12  6:28 ` Sundar
2011-02-13 21:16   ` Linus Walleij
2011-02-14  7:32     ` Hiroshi DOYU
2011-02-14  8:39       ` Linus Walleij
2011-02-14  8:55         ` Hiroshi DOYU
2011-02-14  9:00           ` Linus Walleij
2011-02-14  9:06           ` Hiroshi DOYU
2011-02-13 21:24 ` Linus Walleij
2011-02-14 23:05   ` Blanchard, Hollis
2011-02-16 21:54     ` Linus Walleij
2011-02-14  7:21 ` Hiroshi DOYU
2011-02-14 10:01 ` Jamie Iles
2011-02-14 10:03   ` Ohad Ben-Cohen
2011-02-14 16:53     ` Ira W. Snyder
2011-02-15 21:58   ` Meador Inge
2011-02-15 23:38     ` Blanchard, Hollis
2011-02-16  6:22       ` Hiroshi DOYU
2011-02-16 20:22         ` [openmcapi-dev] " Blanchard, Hollis
