From: Greg KH <gregkh@linuxfoundation.org>
To: Daniele Alessandrelli <daniele.alessandrelli@linux.intel.com>
Cc: mgross@linux.intel.com, markgross@kernel.org, arnd@arndb.de,
	bp@suse.de, damien.lemoal@wdc.com, dragan.cvetic@xilinx.com,
	corbet@lwn.net, leonard.crestez@nxp.com,
	palmerdabbelt@google.com, paul.walmsley@sifive.com,
	peng.fan@nxp.com, robh+dt@kernel.org, shawnguo@kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 03/22] keembay-ipc: Add Keem Bay IPC module
Date: Tue, 8 Dec 2020 20:48:01 +0100
Message-ID: <X8/YceIsM/Akt/E/@kroah.com>
In-Reply-To: <bcf8bad08a5f586093a151126aba2127eee44c02.camel@linux.intel.com>

On Tue, Dec 08, 2020 at 06:59:09PM +0000, Daniele Alessandrelli wrote:
> Hi Greg,
> 
> Thanks for your feedback.
> 
> On Wed, 2020-12-02 at 07:19 +0100, Greg KH wrote:
> > On Tue, Dec 01, 2020 at 02:34:52PM -0800, mgross@linux.intel.com wrote:
> > > From: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
> > > 
> > > On the Intel Movidius SoC code-named Keem Bay, communication between
> > > the Computing Sub-System (CSS), i.e., the CPU, and the Multimedia
> > > Sub-System (MSS), i.e., the VPU, is enabled by the Keem Bay
> > > Inter-Processor Communication (IPC) mechanism.
> > > 
> > > Add the driver for using Keem Bay IPC from within the Linux Kernel.
> > > 
> > > Keem Bay IPC uses the following terminology:
> > > 
> > > - Node:    A processing entity that can use the IPC to communicate;
> > > 	   currently, we just have two nodes, CPU (CSS) and VPU (MSS).
> > > 
> > > - Link:    Two nodes that can communicate over IPC form an IPC link
> > > 	   (currently, we just have one link, the one between the CPU
> > > 	   and VPU).
> > > 
> > > - Channel: An IPC link can provide multiple IPC channels. IPC channels
> > > 	   allow communication multiplexing, i.e., the same IPC link can
> > > 	   be used by different applications for different
> > > 	   communications. Each channel is identified by a channel ID,
> > > 	   which must be unique within a single IPC link. Channels are
> > > 	   divided into two categories: High-Speed (HS) channels and
> > > 	   General-Purpose (GP) channels. HS channels have higher
> > > 	   priority than GP channels.
> > > 
> > > The Keem Bay IPC mechanism is based on shared memory and hardware FIFOs.
> > > Both the CPU and the VPU have their own hardware FIFO. When the CPU
> > > wants to send an IPC message to the VPU, it writes to the VPU FIFO (MSS
> > > FIFO); similarly, when MSS wants to send an IPC message to the CPU, it
> > > writes to the CPU FIFO (CSS FIFO).
> > > 
> > > A FIFO entry is simply a pointer to an IPC buffer (aka IPC header)
> > > stored in a portion of memory shared between the CPU and the VPU.
> > > Specifically, the FIFO entry contains the (VPU) physical address of the
> > > IPC buffer being transferred.
> > > 
> > > In turn, the IPC buffer contains the (VPU) physical address of the
> > > payload (which must be located in shared memory too) as well as other
> > > information (payload size, IPC channel ID, etc.).
> > > 
> > > Each IPC node instantiates a pool of IPC buffers from its own IPC buffer
> > > memory region. When instantiated, IPC buffers are marked as free. When
> > > the node needs to send an IPC message, it gets the first free buffer it
> > > finds (from its own pool), marks it as allocated (used), and puts its
> > > physical address into the IPC FIFO of the destination node. The
> > > destination node (which is notified by an interrupt when there are
> > > entries pending in its FIFO) extracts the physical address from the
> > > FIFO and processes the IPC buffer, marking it as free once done (so
> > > that the sender can reuse the buffer).
> > 
> > Any reason you can't use the dmabuf interface for these memory buffers
> > you are creating and having to manage "by hand"?  I thought that was
> > what the kernel was wanting to unify on such that individual
> > drivers/subsystems didn't have to do this on their own.
> 
> My understanding is that the dmabuf interface is used to share DMA
> buffers across different drivers, while these buffers are used only
> internally by the IPC driver (and exchanged only with the VPU
> firmware). They are basically packet headers sent to the VPU.

There's no reason you couldn't use these to share your buffers
"internally" as well, because you have the same lifetime rules and
accounting and all other sorts of things you have to handle, right?  Why
rewrite something like this when you should take advantage of common
code instead?

thanks,

greg k-h


Thread overview: 67+ messages
2020-12-01 22:34 [PATCH 00/22] Intel Vision Processing Unit base enabling part 1 mgross
2020-12-01 22:34 ` [PATCH 01/22] Add Vision Processing Unit (VPU) documentation mgross
2020-12-18 23:30   ` Randy Dunlap
2021-01-07 20:16     ` mark gross
2020-12-01 22:34 ` [PATCH 02/22] dt-bindings: Add bindings for Keem Bay IPC driver mgross
2020-12-07 16:01   ` Rob Herring
2020-12-07 18:22     ` mark gross
2020-12-07 18:42     ` Daniele Alessandrelli
2020-12-07 20:31       ` Jassi Brar
2020-12-09  0:12         ` mark gross
2020-12-09 16:49           ` Jassi Brar
2020-12-09 17:33       ` Rob Herring
2020-12-01 22:34 ` [PATCH 03/22] keembay-ipc: Add Keem Bay IPC module mgross
2020-12-02  6:16   ` Greg KH
2020-12-02 17:42     ` mark gross
2020-12-02 19:01       ` Greg KH
2020-12-05  3:35         ` mark gross
2020-12-05  8:40           ` Greg KH
2020-12-05 16:37             ` Gross, Mark
2020-12-07  1:36               ` mark gross
2020-12-02  6:19   ` Greg KH
2020-12-08 18:59     ` Daniele Alessandrelli
2020-12-08 19:48       ` Greg KH [this message]
2020-12-10 18:38         ` Daniele Alessandrelli
2020-12-11  8:22           ` Greg KH
2020-12-11 17:26             ` Gross, Mark
2020-12-01 22:34 ` [PATCH 04/22] dt-bindings: Add bindings for Keem Bay VPU IPC driver mgross
2020-12-07 15:57   ` Rob Herring
2020-12-07 21:28     ` mark gross
2020-12-01 22:34 ` [PATCH 05/22] keembay-vpu-ipc: Add Keem Bay VPU IPC module mgross
2020-12-01 22:34 ` [PATCH 06/22] misc: xlink-pcie: Add documentation for XLink PCIe driver mgross
2020-12-18 22:59   ` Randy Dunlap
2020-12-19  0:07     ` mark gross
2020-12-01 22:34 ` [PATCH 07/22] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host mgross
2020-12-01 22:34 ` [PATCH 08/22] misc: xlink-pcie: lh: Add PCIe EP DMA functionality mgross
2020-12-01 22:34 ` [PATCH 09/22] misc: xlink-pcie: lh: Add core communication logic mgross
2020-12-02  6:18   ` Greg KH
2020-12-02 16:46     ` Thokala, Srikanth
2020-12-01 22:34 ` [PATCH 10/22] misc: xlink-pcie: lh: Prepare changes for adding remote host driver mgross
2020-12-01 22:35 ` [PATCH 11/22] misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host mgross
2020-12-01 22:35 ` [PATCH 12/22] misc: xlink-pcie: rh: Add core communication logic mgross
2020-12-01 22:35 ` [PATCH 13/22] misc: xlink-pcie: Add XLink API interface mgross
2020-12-01 22:35 ` [PATCH 14/22] misc: xlink-pcie: Add asynchronous event notification support for XLink mgross
2020-12-01 22:35 ` [PATCH 15/22] xlink-ipc: Add xlink ipc device tree bindings mgross
2020-12-07 15:58   ` Rob Herring
2020-12-07 21:41     ` mark gross
2020-12-01 22:35 ` [PATCH 16/22] xlink-ipc: Add xlink ipc driver mgross
2020-12-07  2:32   ` Joe Perches
2020-12-11 11:33     ` Kelly, Seamus
2020-12-11 12:14       ` gregkh
2020-12-11 17:12         ` Gross, Mark
2020-12-07 19:53   ` Randy Dunlap
2020-12-01 22:35 ` [PATCH 17/22] xlink-core: Add xlink core device tree bindings mgross
2020-12-07 16:02   ` Rob Herring
2020-12-01 22:35 ` [PATCH 18/22] xlink-core: Add xlink core driver xLink mgross
2020-12-07 21:50   ` Randy Dunlap
2020-12-01 22:35 ` [PATCH 19/22] xlink-core: Enable xlink protocol over pcie mgross
2020-12-01 22:35 ` [PATCH 20/22] xlink-core: Enable VPU IP management and runtime control mgross
2020-12-01 22:35 ` [PATCH 21/22] xlink-core: add async channel and events mgross
2020-12-07  2:55   ` Joe Perches
2020-12-11 11:34     ` Kelly, Seamus
2020-12-01 22:35 ` [PATCH 22/22] xlink-core: factorize xlink_ioctl function by creating sub-functions for each ioctl command mgross
2020-12-07  3:05   ` Joe Perches
2020-12-07 21:59     ` mark gross
2020-12-09  8:30   ` Joe Perches
2020-12-11 11:36     ` Kelly, Seamus
