* [PATCH v2 00/34] Intel Vision Processing base enabling
@ 2021-01-08 21:25 mgross
  2021-01-08 21:25 ` [PATCH v2 01/34] Add Vision Processing Unit (VPU) documentation mgross
                   ` (33 more replies)
  0 siblings, 34 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel

From: Mark Gross <mgross@linux.intel.com>

Second try; the first attempt failed to land on lore.

The Intel Vision Processing Unit (VPU) is an IP block that is appearing for the
first time as part of the Keem Bay SoC.  Keem Bay is a quad-core Arm Cortex-A53
SoC.  It is designed to be used as a standalone SoC as well as in a PCIe Vision
Processing accelerator add-in card.

This is version 2 of the earlier patch set:
Subject: [PATCH 00/22] Intel Vision Processing Unit base enabling part 1
Date: Tue,  1 Dec 2020 14:34:49 -0800

It also includes all the part 2 patches, along with updates addressing feedback
from the earlier posting.
The most notable changes are:
* added a VPU IPC mailbox driver
* reworked the Keem Bay IPC driver to use that mailbox
* folded a refactoring of the xlink core patch into the xlink core set
* documentation updates
* corrected MAINTAINERS file entries WRT supported vs. maintained use
* added associated drivers:
	* tsens -- thermal management
	* HDDL -- device management
	* Xlink SMBus
	* VPU manager -- user-mode interface


Thanks for looking at these and providing feedback.

--mark

C, Udhayakumar (8):
  dt-bindings: misc: intel_tsens: Add tsens thermal bindings
    documentation
  misc: Tsens ARM host thermal driver.
  misc: Intel tsens IA host driver.
  Intel tsens i2c slave driver.
  misc:intel_tsens: Intel Keem Bay tsens driver.
  dt-bindings: misc: hddl_dev: Add hddl device management documentation
  misc: Hddl device management for local host
  misc: HDDL device management for IA host

Daniele Alessandrelli (4):
  dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings
  mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox
  dt-bindings: Add bindings for Keem Bay IPC driver
  keembay-ipc: Add Keem Bay IPC module

Li, Tingqian (2):
  dt-bindings: misc: Add Keem Bay vpumgr
  misc: Add Keem Bay VPU manager

Paul Murphy (2):
  dt-bindings: Add bindings for Keem Bay VPU IPC driver
  keembay-vpu-ipc: Add Keem Bay VPU IPC module

Ramya P Karanth (1):
  Intel Keem Bay XLink SMBus driver

Seamus Kelly (7):
  xlink-ipc: Add xlink ipc device tree bindings
  xlink-ipc: Add xlink ipc driver
  xlink-core: Add xlink core device tree bindings
  xlink-core: Add xlink core driver xLink
  xlink-core: Enable xlink protocol over pcie
  xlink-core: Enable VPU IP management and runtime control
  xlink-core: add async channel and events

Srikanth Thokala (9):
  misc: xlink-pcie: Add documentation for XLink PCIe driver
  misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
  misc: xlink-pcie: lh: Add PCIe EP DMA functionality
  misc: xlink-pcie: lh: Add core communication logic
  misc: xlink-pcie: lh: Prepare changes for adding remote host driver
  misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host
  misc: xlink-pcie: rh: Add core communication logic
  misc: xlink-pcie: Add XLink API interface
  misc: xlink-pcie: Add asynchronous event notification support for
    XLink

mark gross (1):
  Add Vision Processing Unit (VPU) documentation.

 .../mailbox/intel,vpu-ipc-mailbox.yaml        |   69 +
 .../bindings/misc/intel,hddl-client.yaml      |  114 +
 .../bindings/misc/intel,intel-tsens.yaml      |  122 +
 .../bindings/misc/intel,keembay-vpu-mgr.yaml  |   48 +
 .../misc/intel,keembay-xlink-ipc.yaml         |   49 +
 .../bindings/misc/intel,keembay-xlink.yaml    |   27 +
 .../bindings/soc/intel/intel,keembay-ipc.yaml |   45 +
 .../soc/intel/intel,keembay-vpu-ipc.yaml      |  153 ++
 Documentation/hwmon/index.rst                 |    2 +
 Documentation/hwmon/intel_tsens_host.rst      |   71 +
 Documentation/hwmon/intel_tsens_sensor.rst    |   67 +
 Documentation/i2c/busses/index.rst            |    1 +
 .../i2c/busses/intel-xlink-smbus.rst          |   71 +
 Documentation/index.rst                       |    1 +
 .../misc-devices/hddl_device_client.rst       |  212 ++
 .../misc-devices/hddl_device_server.rst       |  205 ++
 Documentation/misc-devices/index.rst          |    2 +
 Documentation/vpu/index.rst                   |   20 +
 Documentation/vpu/vpu-stack-overview.rst      |  270 +++
 Documentation/vpu/xlink-core.rst              |   81 +
 Documentation/vpu/xlink-ipc.rst               |   51 +
 Documentation/vpu/xlink-pcie.rst              |   90 +
 MAINTAINERS                                   |   54 +
 drivers/mailbox/Kconfig                       |   11 +
 drivers/mailbox/Makefile                      |    2 +
 drivers/mailbox/vpu-ipc-mailbox.c             |  297 +++
 drivers/misc/Kconfig                          |    7 +
 drivers/misc/Makefile                         |    7 +
 drivers/misc/hddl_device/Kconfig              |   26 +
 drivers/misc/hddl_device/Makefile             |    7 +
 drivers/misc/hddl_device/hddl_device.c        |  565 +++++
 drivers/misc/hddl_device/hddl_device_lh.c     |  764 +++++++
 drivers/misc/hddl_device/hddl_device_rh.c     |  837 +++++++
 drivers/misc/hddl_device/hddl_device_util.h   |   52 +
 drivers/misc/intel_tsens/Kconfig              |   54 +
 drivers/misc/intel_tsens/Makefile             |   10 +
 drivers/misc/intel_tsens/intel_tsens_host.c   |  351 +++
 drivers/misc/intel_tsens/intel_tsens_i2c.c    |  119 +
 .../misc/intel_tsens/intel_tsens_thermal.c    |  651 ++++++
 .../misc/intel_tsens/intel_tsens_thermal.h    |   38 +
 drivers/misc/intel_tsens/keembay_thermal.c    |  169 ++
 drivers/misc/intel_tsens/keembay_tsens.h      |  366 +++
 drivers/misc/vpumgr/Kconfig                   |    9 +
 drivers/misc/vpumgr/Makefile                  |    3 +
 drivers/misc/vpumgr/vpu_common.h              |   31 +
 drivers/misc/vpumgr/vpu_mgr.c                 |  370 +++
 drivers/misc/vpumgr/vpu_smm.c                 |  554 +++++
 drivers/misc/vpumgr/vpu_smm.h                 |   30 +
 drivers/misc/vpumgr/vpu_vcm.c                 |  585 +++++
 drivers/misc/vpumgr/vpu_vcm.h                 |   84 +
 drivers/misc/xlink-core/Kconfig               |   33 +
 drivers/misc/xlink-core/Makefile              |    5 +
 drivers/misc/xlink-core/xlink-core.c          | 1331 +++++++++++
 drivers/misc/xlink-core/xlink-core.h          |   25 +
 drivers/misc/xlink-core/xlink-defs.h          |  181 ++
 drivers/misc/xlink-core/xlink-dispatcher.c    |  436 ++++
 drivers/misc/xlink-core/xlink-dispatcher.h    |   26 +
 drivers/misc/xlink-core/xlink-ioctl.c         |  554 +++++
 drivers/misc/xlink-core/xlink-ioctl.h         |   36 +
 drivers/misc/xlink-core/xlink-multiplexer.c   | 1164 ++++++++++
 drivers/misc/xlink-core/xlink-multiplexer.h   |   35 +
 drivers/misc/xlink-core/xlink-platform.c      |  273 +++
 drivers/misc/xlink-core/xlink-platform.h      |   65 +
 drivers/misc/xlink-ipc/Kconfig                |    7 +
 drivers/misc/xlink-ipc/Makefile               |    4 +
 drivers/misc/xlink-ipc/xlink-ipc.c            |  878 +++++++
 drivers/misc/xlink-pcie/Kconfig               |   20 +
 drivers/misc/xlink-pcie/Makefile              |    2 +
 drivers/misc/xlink-pcie/common/core.h         |  247 ++
 drivers/misc/xlink-pcie/common/interface.c    |  126 +
 drivers/misc/xlink-pcie/common/util.c         |  375 +++
 drivers/misc/xlink-pcie/common/util.h         |   70 +
 drivers/misc/xlink-pcie/common/xpcie.h        |  102 +
 drivers/misc/xlink-pcie/local_host/Makefile   |    6 +
 drivers/misc/xlink-pcie/local_host/core.c     |  819 +++++++
 drivers/misc/xlink-pcie/local_host/dma.c      |  577 +++++
 drivers/misc/xlink-pcie/local_host/epf.c      |  522 +++++
 drivers/misc/xlink-pcie/local_host/epf.h      |  103 +
 drivers/misc/xlink-pcie/remote_host/Makefile  |    6 +
 drivers/misc/xlink-pcie/remote_host/core.c    |  623 +++++
 drivers/misc/xlink-pcie/remote_host/main.c    |   95 +
 drivers/misc/xlink-pcie/remote_host/pci.c     |  525 +++++
 drivers/misc/xlink-pcie/remote_host/pci.h     |   67 +
 drivers/misc/xlink-smbus/Kconfig              |   26 +
 drivers/misc/xlink-smbus/Makefile             |    5 +
 drivers/misc/xlink-smbus/xlink-smbus.c        |  467 ++++
 drivers/soc/Kconfig                           |    1 +
 drivers/soc/Makefile                          |    1 +
 drivers/soc/intel/Kconfig                     |   33 +
 drivers/soc/intel/Makefile                    |    5 +
 drivers/soc/intel/keembay-ipc.c               | 1364 +++++++++++
 drivers/soc/intel/keembay-vpu-ipc.c           | 2036 +++++++++++++++++
 include/linux/hddl_device.h                   |  153 ++
 include/linux/intel_tsens_host.h              |   34 +
 include/linux/soc/intel/keembay-ipc.h         |   30 +
 include/linux/soc/intel/keembay-vpu-ipc.h     |   62 +
 include/linux/xlink-ipc.h                     |   48 +
 include/linux/xlink.h                         |  146 ++
 include/linux/xlink_drv_inf.h                 |   72 +
 include/uapi/misc/vpumgr.h                    |   64 +
 include/uapi/misc/xlink_uapi.h                |  145 ++
 101 files changed, 21854 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
 create mode 100644 Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
 create mode 100644 Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
 create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
 create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml
 create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
 create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
 create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
 create mode 100644 Documentation/hwmon/intel_tsens_host.rst
 create mode 100644 Documentation/hwmon/intel_tsens_sensor.rst
 create mode 100644 Documentation/i2c/busses/intel-xlink-smbus.rst
 create mode 100644 Documentation/misc-devices/hddl_device_client.rst
 create mode 100644 Documentation/misc-devices/hddl_device_server.rst
 create mode 100644 Documentation/vpu/index.rst
 create mode 100644 Documentation/vpu/vpu-stack-overview.rst
 create mode 100644 Documentation/vpu/xlink-core.rst
 create mode 100644 Documentation/vpu/xlink-ipc.rst
 create mode 100644 Documentation/vpu/xlink-pcie.rst
 create mode 100644 drivers/mailbox/vpu-ipc-mailbox.c
 create mode 100644 drivers/misc/hddl_device/Kconfig
 create mode 100644 drivers/misc/hddl_device/Makefile
 create mode 100644 drivers/misc/hddl_device/hddl_device.c
 create mode 100644 drivers/misc/hddl_device/hddl_device_lh.c
 create mode 100644 drivers/misc/hddl_device/hddl_device_rh.c
 create mode 100644 drivers/misc/hddl_device/hddl_device_util.h
 create mode 100644 drivers/misc/intel_tsens/Kconfig
 create mode 100644 drivers/misc/intel_tsens/Makefile
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_host.c
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_i2c.c
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.c
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.h
 create mode 100644 drivers/misc/intel_tsens/keembay_thermal.c
 create mode 100644 drivers/misc/intel_tsens/keembay_tsens.h
 create mode 100644 drivers/misc/vpumgr/Kconfig
 create mode 100644 drivers/misc/vpumgr/Makefile
 create mode 100644 drivers/misc/vpumgr/vpu_common.h
 create mode 100644 drivers/misc/vpumgr/vpu_mgr.c
 create mode 100644 drivers/misc/vpumgr/vpu_smm.c
 create mode 100644 drivers/misc/vpumgr/vpu_smm.h
 create mode 100644 drivers/misc/vpumgr/vpu_vcm.c
 create mode 100644 drivers/misc/vpumgr/vpu_vcm.h
 create mode 100644 drivers/misc/xlink-core/Kconfig
 create mode 100644 drivers/misc/xlink-core/Makefile
 create mode 100644 drivers/misc/xlink-core/xlink-core.c
 create mode 100644 drivers/misc/xlink-core/xlink-core.h
 create mode 100644 drivers/misc/xlink-core/xlink-defs.h
 create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.c
 create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.h
 create mode 100644 drivers/misc/xlink-core/xlink-ioctl.c
 create mode 100644 drivers/misc/xlink-core/xlink-ioctl.h
 create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.c
 create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.h
 create mode 100644 drivers/misc/xlink-core/xlink-platform.c
 create mode 100644 drivers/misc/xlink-core/xlink-platform.h
 create mode 100644 drivers/misc/xlink-ipc/Kconfig
 create mode 100644 drivers/misc/xlink-ipc/Makefile
 create mode 100644 drivers/misc/xlink-ipc/xlink-ipc.c
 create mode 100644 drivers/misc/xlink-pcie/Kconfig
 create mode 100644 drivers/misc/xlink-pcie/Makefile
 create mode 100644 drivers/misc/xlink-pcie/common/core.h
 create mode 100644 drivers/misc/xlink-pcie/common/interface.c
 create mode 100644 drivers/misc/xlink-pcie/common/util.c
 create mode 100644 drivers/misc/xlink-pcie/common/util.h
 create mode 100644 drivers/misc/xlink-pcie/common/xpcie.h
 create mode 100644 drivers/misc/xlink-pcie/local_host/Makefile
 create mode 100644 drivers/misc/xlink-pcie/local_host/core.c
 create mode 100644 drivers/misc/xlink-pcie/local_host/dma.c
 create mode 100644 drivers/misc/xlink-pcie/local_host/epf.c
 create mode 100644 drivers/misc/xlink-pcie/local_host/epf.h
 create mode 100644 drivers/misc/xlink-pcie/remote_host/Makefile
 create mode 100644 drivers/misc/xlink-pcie/remote_host/core.c
 create mode 100644 drivers/misc/xlink-pcie/remote_host/main.c
 create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.c
 create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.h
 create mode 100644 drivers/misc/xlink-smbus/Kconfig
 create mode 100644 drivers/misc/xlink-smbus/Makefile
 create mode 100644 drivers/misc/xlink-smbus/xlink-smbus.c
 create mode 100644 drivers/soc/intel/Kconfig
 create mode 100644 drivers/soc/intel/Makefile
 create mode 100644 drivers/soc/intel/keembay-ipc.c
 create mode 100644 drivers/soc/intel/keembay-vpu-ipc.c
 create mode 100644 include/linux/hddl_device.h
 create mode 100644 include/linux/intel_tsens_host.h
 create mode 100644 include/linux/soc/intel/keembay-ipc.h
 create mode 100644 include/linux/soc/intel/keembay-vpu-ipc.h
 create mode 100644 include/linux/xlink-ipc.h
 create mode 100644 include/linux/xlink.h
 create mode 100644 include/linux/xlink_drv_inf.h
 create mode 100644 include/uapi/misc/vpumgr.h
 create mode 100644 include/uapi/misc/xlink_uapi.h

-- 
2.17.1


* [PATCH v2 01/34] Add Vision Processing Unit (VPU) documentation.
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 02/34] dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings mgross
                   ` (32 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel

From: mark gross <mgross@linux.intel.com>

The Intel VPU needs a complicated SW stack to make it work.  Add a
directory to hold VPU-related documentation, including an architectural
overview of the SW stack that these patches implement.

Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 Documentation/index.rst                  |   1 +
 Documentation/vpu/index.rst              |  16 ++
 Documentation/vpu/vpu-stack-overview.rst | 270 +++++++++++++++++++++++
 3 files changed, 287 insertions(+)
 create mode 100644 Documentation/vpu/index.rst
 create mode 100644 Documentation/vpu/vpu-stack-overview.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index 5888e8a7272f..81a02f2af939 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -137,6 +137,7 @@ needed).
    misc-devices/index
    scheduler/index
    mhi/index
+   vpu/index
 
 Architecture-agnostic documentation
 -----------------------------------
diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
new file mode 100644
index 000000000000..7e290e048910
--- /dev/null
+++ b/Documentation/vpu/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+============================================
+Vision Processing Unit (VPU) Documentation
+============================================
+
+This documentation contains information for the Intel VPU stack.
+
+.. class:: toc-title
+
+	   Table of contents
+
+.. toctree::
+   :maxdepth: 2
+
+   vpu-stack-overview
diff --git a/Documentation/vpu/vpu-stack-overview.rst b/Documentation/vpu/vpu-stack-overview.rst
new file mode 100644
index 000000000000..1fe9ce423177
--- /dev/null
+++ b/Documentation/vpu/vpu-stack-overview.rst
@@ -0,0 +1,270 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+======================
+Intel VPU architecture
+======================
+
+Overview
+========
+
+Intel's Movidius group has developed a roadmap of Vision Processing Unit (VPU)
+products, starting with Keem Bay (KMB).  The hardware configurations the VPU
+can support include:
+
+1. Standalone smart camera that does local Computer Vision (CV) processing in
+   camera.
+2. Standalone appliance or single-board computer connected to a network and
+   tethered cameras, doing local CV processing.
+3. Embedded in a USB dongle or M.2 module as a CV accelerator.
+4. Multiple VPU-enabled SoCs on a PCIe card as a CV accelerator in a larger IA
+   box or server.
+
+Keem Bay is the first instance of this family of products. This document
+provides an architectural overview of the software stack supporting the VPU
+enabled products.
+
+Keem Bay (KMB) is a Computer Vision AI processing SoC based on an ARM A53 CPU
+that provides Edge neural network acceleration (inference) and includes Vision
+Processing Unit (VPU) hardware.  The ARM CPU SubSystem (CPUSS) interfaces
+locally to the VPU and enables integration/interfacing with a remote host over
+PCIe, USB or Ethernet interfaces.  The interface between the CPUSS and the VPU
+is implemented with hardware FIFOs (Control) and coherent memory mapping (Data)
+such that zero-copy processing can happen within the VPU.
+
+KMB can be used in all four of the above classes of designs.
+
+We refer to the 'local host' as the ARM part of the SoC, and to the 'remote
+host' as the IA system hosting the KMB device(s).  The KMB SoC boots from eMMC
+via U-Boot, using a standard ARM Linux device tree interface, and is expected
+to fully boot within hundreds of milliseconds.  There is also support for
+downloading the kernel and root file system image from a remote host.
+
+The eMMC can be updated with the standard Mender update process.
+See https://github.com/mendersoftware/mender
+
+The VPU is started and controlled from the A53 local host.  Its firmware image
+is loaded using the driver firmware helper kernel APIs.
+
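+A hedged sketch of that load step, using the standard firmware helper API,
+follows; the firmware file name below is a placeholder, not necessarily the
+name used by the driver::
+
+    #include <linux/device.h>
+    #include <linux/firmware.h>
+
+    /* Illustrative only; "intel/keembay-vpu.bin" is a placeholder name. */
+    static int example_load_vpu_fw(struct device *dev)
+    {
+        const struct firmware *fw;
+        int ret;
+
+        ret = request_firmware(&fw, "intel/keembay-vpu.bin", dev);
+        if (ret)
+            return ret;
+
+        /* ...copy fw->data (fw->size bytes) to VPU memory and boot it... */
+
+        release_firmware(fw);
+        return 0;
+    }
+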
+The VPU IP firmware payload consists of a SPARC ISA RTEMS bootloader and/or
+application binary.
+
+The interface allowing (remote or local) host clients to access VPU IP
+capabilities is realized through an abstracted programming model, which
+provides Remote Proxy APIs for a host CPU application to dynamically create and
+execute CV and NN workloads on the VPU. All frameworks exposed through
+programming model’s APIs are contained in the pre-compiled standard firmware
+image.
+
+There is a significant software stack built up to support KMB and the use
+cases. The rest of this documentation provides an overview of the components
+of the stack.
+
+Keem Bay IPC
+============
+
+This driver directly interfaces with the KMB hardware FIFOs to provide
+zero-copy processing from the VPU.  It implements the lowest-level protocol for
+interacting with the VPU.
+
+The Keem Bay IPC mechanism is based on shared memory and hardware FIFOs.
+Specifically there are:
+
+* Two 128-entry hardware FIFOs, one for the CPU and one for the VPU.
+* Two shared memory regions, used as memory pool for allocating IPC buffers.
+
+An IPC channel is a software abstraction allowing communication multiplexing,
+so that multiple applications / users can concurrently communicate with the
+VPU.  IPC channels are conceptually similar to socket ports.
+
+There are a total of 1024 channels, each one identified by a channel ID,
+ranging from 0 to 1023.
+
+Channels are divided into two categories:
+
+* High-Speed (HS) channels, having IDs in the 0-9 range.
+* General-Purpose (GP) channels, having IDs in the 10-1023 range.
+
+HS channels have higher priority over GP channels and can be used by
+applications requiring higher throughput or lower latency.
+
+Since all the channels share the same hardware resources (i.e., the hardware
+FIFOs and the IPC memory pools), the Keem Bay IPC driver uses software queues
+to give a higher priority to HS channels.
+
+The driver supports a build-time configurable number of communication channels
+defined in a so-called Channel Mapping Table.
+
+An IPC channel is full duplex: a pending operation from a certain channel does
+not block other operations on the same channel, regardless of their operation
+mode (blocking or non-blocking).
+
+The operation mode is individually selectable for each channel, per operation
+direction (read or write).  All operations for that direction comply with that
+selection.
+
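+A minimal usage sketch for a kernel-space client follows; the function and
+constant names are assumptions for illustration only (the actual kernel API is
+declared in include/linux/soc/intel/keembay-ipc.h)::
+
+    #include <linux/device.h>
+    #include <linux/soc/intel/keembay-ipc.h>
+
+    /* Hypothetical names/signatures; check keembay-ipc.h for the real API. */
+    static int example_send(struct device *dev, u32 payload_vpu_addr,
+                            size_t size)
+    {
+        const u16 chan = 10;  /* first General-Purpose channel */
+        int ret;
+
+        ret = intel_keembay_ipc_open_channel(dev, KMB_IPC_NODE_LEON_MSS, chan);
+        if (ret)
+            return ret;
+
+        /* The payload must already live in the shared IPC memory region. */
+        ret = intel_keembay_ipc_send(dev, KMB_IPC_NODE_LEON_MSS, chan,
+                                     payload_vpu_addr, size);
+
+        intel_keembay_ipc_close_channel(dev, KMB_IPC_NODE_LEON_MSS, chan);
+        return ret;
+    }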
+
+Keem Bay-VPU-IPC
+================
+
+This is the MMIO driver of the VPU IP block inside the SoC.  It is a control
+driver mapping IPC channel communication to Xlink virtual channels.
+
+This driver provides the following functionality to other drivers in the
+communication stack:
+
+* VPU IP execution control (firmware load, start, reset)
+* VPU IP event notifications (device connected, device disconnected, WDT event)
+* VPU IP device status query (OFF, BUSY, READY, ERROR, RECOVERY)
+* Communication via the IPC protocol (wrapping the Keem Bay IPC driver and
+  exposing it to higher level Xlink layer)
+
+In addition to the above, the driver exposes SoC information (like stepping,
+device ID, etc.) to user-space via sysfs.
+
+This driver depends on the 'Keem Bay IPC' driver, which enables the Keem Bay
+IPC communication protocol.
+
+The driver uses the Firmware API to load the VPU firmware from user-space.
+
+Xlink-IPC
+=========
+This component implements the IPC specific Xlink protocol. It maps channel
+IDs to hardware FIFO entries, using the Keem Bay VPU IPC driver.
+
+Some of the main functions this driver provides:
+
+* establishing a connection with an IPC device
+* obtaining a list with the available devices
+* obtaining the status for a device
+* booting a device
+* resetting a device
+* opening and closing channels
+* issuing read and write operations
+
+Xlink-core
+==========
+
+This component implements an abstracted set of control and communication APIs
+based on channel identification. It is intended to support VPU technology both
+at SoC level as well as at IP level, over multiple interfaces.
+
+It provides symmetrical services, where the producer and the consumer have
+the same privileges.
+
+Xlink driver has the ability to abstract several types of communication
+channels underneath, allowing the usage of different interfaces with the same
+function calls.
+
+Xlink services are available to both kernel and user space clients and include:
+
+* interface abstract control and communication API
+* multi device support
+* concurrent communication across 4096 communication channels (from 0 to
+  0xFFF), with customizable properties
+* full duplex channels with multiprocess and multithread support
+* channel IDs can be mapped to desired physical interface (PCIe, USB, ETH, IPC)
+  via a Channel Mapping Table
+* asynchronous fast passthrough mode: remote host data packets are directly
+  dispatched, by interrupt handlers running on the local host, to IPC calls for
+  low overhead
+* channel handshaking mechanism for peer to peer communication, without the
+  need of static channel preallocation
+* channel resource management
+* asynchronous data and device notifications to subscribers
+
+Xlink transports: PCIe, USB, ETH and IPC.
+
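+A hedged sketch of a kernel-space Xlink client follows; identifiers such as
+xlink_connect(), xlink_open_channel() and RXB_TXB are assumptions based on this
+overview (the actual KAPI is declared in include/linux/xlink.h)::
+
+    #include <linux/xlink.h>
+
+    /* Illustrative only: names, modes and return codes are assumptions. */
+    static int example_xlink_tx(struct xlink_handle *devh, u16 chan,
+                                u8 *msg, u32 size)
+    {
+        int rc;
+
+        rc = xlink_connect(devh);
+        if (rc)
+            return rc;
+
+        /* Assumed arguments: blocking mode, 4 KiB data size, 1000 ms timeout. */
+        rc = xlink_open_channel(devh, chan, RXB_TXB, 4096, 1000);
+        if (!rc) {
+            rc = xlink_write_data(devh, chan, msg, size);
+            xlink_close_channel(devh, chan);
+        }
+
+        xlink_disconnect(devh);
+        return rc;
+    }
+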
+XLink-PCIe
+==========
+This is an endpoint driver that maps Xlink channel IDs to PCIe channels.
+
+This component ensures (remote)host-to-(local)host communication, and VPU IP
+communication via an asynchronous passthrough mode, where PCIe data loads are
+directly dispatched to Xlink-IPC.
+
+The component builds and advertises Device IDs that are used by the local host
+application in multi-device scenarios.
+
+XLink-USB
+==========
+This is an endpoint driver that maps Xlink channel IDs to bidirectional
+USB endpoints and supports the CDC USB class protocol.  More than one Xlink
+channel can be mapped to a single USB endpoint.
+
+This component provides host-to-host communication, as well as asynchronous
+passthrough communication, where USB transfer packets are directly dispatched
+to Xlink-IPC.
+
+The component builds and advertises Device IDs that are used by the local host
+application in multi-device scenarios.
+
+XLink-ETH
+=========
+
+This is an endpoint driver that maps Xlink channel IDs to Ethernet
+sockets.
+
+This component provides host-to-host communication, as well as asynchronous
+passthrough communication, where Ethernet data loads are directly dispatched to
+Xlink-IPC.
+
+The component builds and advertises Device IDs that are used by the local host
+application in multi-device scenarios.
+
+Assorted drivers that depend on this stack:
+
+Xlink-SMB
+=========
+The Intel Edge.AI Computer Vision platforms have to be monitored using platform
+devices like sensors, fan controllers, IO expanders, etc.  Some of these
+devices are memory mapped and some are I2C-based.  None of these devices is
+directly accessible to the host.
+
+The host here refers to the server to which the vision accelerators are
+connected over the PCIe interface.  The host needs to take consolidated action
+based on the parameters of these platform devices.  In general, most of the
+standard devices (sensors, fan controllers, IO expanders, etc.) are I2C/SMBus
+based and are used to provide the status of the accelerator.  Standard drivers
+for these devices are available, based on I2C/SMBus APIs.
+
+Instead of changing the sensor drivers to adapt to the PCIe interface, a
+generic I2C adapter, "Xlink-SMBus", which uses Xlink as the physical medium
+underneath, is used.  With Xlink-SMBus, the drivers for the platform devices
+don't need to undergo any interface change.
+
+TSEN
+====
+
+Thermal sensor driver for exporting thermal events to the local Arm64 host as
+well as to the remote X86 host if in the PCIe add-in CV accelerator
+configuration.
+
+The driver receives the junction temperature from different heating points
+inside the SoC.  It receives the temperature over an SMBus connection and
+forwards it over Xlink-SMBus when in a remote-host configuration.
+
+In Keem Bay, the four thermal junction temperature points are the Media
+Subsystem (mss), the Neural Network subsystem (nce), the Compute subsystem
+(cse) and the SoC (maximum of mss, nce and cse).
+
+HDDL
+====
+
+- Exports details of the temperature sensors, current sensors and fan
+  controllers present in Intel Edge.AI Computer Vision platforms to the IA
+  host.
+- Enables time sync of the Intel Edge.AI Computer Vision platform with the IA
+  host.
+- Handles device connect and disconnect events.
+- Receives slave addresses from the IA host for memory-mapped thermal sensors
+  present in the SoC (Documentation/hwmon/intel_tsens_sensor.rst).
+- Registers I2C slave devices for slaves present in the Intel Edge.AI Computer
+  Vision platform.
+
+
+VPUMGR (VPU Manager)
+====================
+
+This driver bridges the firmware on the VPU side and applications in CPU
+user space.  It assists the VPU-side firmware in serving multiple user-space
+application processes on the CPU side concurrently, while also performing the
+necessary data buffer management on behalf of the VPU IP.
-- 
2.17.1



* [PATCH v2 02/34] dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
  2021-01-08 21:25 ` [PATCH v2 01/34] Add Vision Processing Unit (VPU) documentation mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox mgross
                   ` (31 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Daniele Alessandrelli

From: Daniele Alessandrelli <daniele.alessandrelli@intel.com>

Add bindings for the Intel VPU IPC mailbox driver.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
---
 .../mailbox/intel,vpu-ipc-mailbox.yaml        | 69 +++++++++++++++++++
 MAINTAINERS                                   |  6 ++
 2 files changed, 75 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml

diff --git a/Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml b/Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
new file mode 100644
index 000000000000..923a6d619a64
--- /dev/null
+++ b/Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
@@ -0,0 +1,69 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright (c) 2020 Intel Corporation
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/mailbox/intel,vpu-ipc-mailbox.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel VPU IPC mailbox
+
+maintainers:
+  - Daniele Alessandrelli <daniele.alessandrelli@intel.com>
+
+description: |
+  Intel VPU SoCs like Keem Bay have hardware FIFOs to enable Inter-Processor
+  Communication (IPC) between the CPU and the VPU.
+
+  Specifically, there is one HW FIFO for the CPU (aka Application Processor -
+  AP) and one for the VPU. Each FIFO can hold 128 entries of 32 bits each. A
+  "FIFO-not-empty" interrupt is raised every time there is at least a message
+  in the FIFO. The CPU FIFO raises interrupts to the CPU, while the VPU FIFO
+  raises interrupts to VPU. When the CPU wants to send a message to the VPU it
+  writes to the VPU FIFO, similarly, when the VPU want to send a message to the
+  CPU, it writes to the CPU FIFO.
+
+  Refer to ./mailbox.txt for generic information about mailbox device-tree
+  bindings.
+
+properties:
+  compatible:
+    const: intel,vpu-ipc-mailbox
+
+  reg:
+    items:
+      - description: The CPU FIFO registers
+      - description: The VPU FIFO registers
+
+  reg-names:
+    items:
+      - const: cpu_fifo
+      - const: vpu_fifo
+
+  interrupts:
+    items:
+      - description: CPU FIFO-not-empty interrupt
+
+  "#mbox-cells":
+    const: 1
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - interrupts
+  - "#mbox-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    vpu_ipc_mailbox@203300f0 {
+        compatible = "intel,vpu-ipc-mailbox";
+        #mbox-cells = <1>;
+        reg = <0x203300f0 0x310>,
+              <0x208200f0 0x310>;
+        reg-names = "cpu_fifo", "vpu_fifo";
+        interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+    };
diff --git a/MAINTAINERS b/MAINTAINERS
index 6eff4f720c72..c0fb04969916 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9183,6 +9183,12 @@ L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/intel-vbtn.c
 
+INTEL VPU IPC MAILBOX
+M:	Daniele Alessandrelli <daniele.alessandrelli@intel.com>
+M:	Mark Gross <mgross@linux.intel.com>
+S:	Supported
+F:	Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
+
 INTEL WIRELESS 3945ABG/BG, 4965AGN (iwlegacy)
 M:	Stanislaw Gruszka <stf_xl@wp.pl>
 L:	linux-wireless@vger.kernel.org
-- 
2.17.1



* [PATCH v2 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
  2021-01-08 21:25 ` [PATCH v2 01/34] Add Vision Processing Unit (VPU) documentation mgross
  2021-01-08 21:25 ` [PATCH v2 02/34] dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 04/34] dt-bindings: Add bindings for Keem Bay IPC driver mgross
                   ` (30 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Daniele Alessandrelli

From: Daniele Alessandrelli <daniele.alessandrelli@intel.com>

Add mailbox controller enabling inter-processor communication (IPC)
between the CPU (aka, the Application Processor - AP) and the VPU on
Intel Movidius SoCs like Keem Bay.

The controller uses HW FIFOs to enable such communication. Specifically,
there are two FIFOs, one for the CPU and one for the VPU. Each FIFO can
hold 128 entries (messages) of 32 bits each (but only 26 bits are actually
usable, since the 6 least-significant bits are reserved).

When the Linux kernel on the AP needs to send messages to the VPU
firmware, it writes them to the VPU FIFO; similarly, when the VPU
firmware needs to send messages to the AP, it writes them to the CPU
FIFO.

The AP is notified of pending messages in the CPU FIFO by means of the
'FIFO-not-empty' interrupt, which is generated by the CPU FIFO while not
empty. This interrupt is cleared automatically once all messages have
been read from the FIFO (i.e., the FIFO has been emptied).

The hardware doesn't provide a TX done IRQ (i.e., an IRQ that allows
the VPU firmware to notify the AP that the message put into the VPU FIFO
has been received); however the AP can ensure that the message has been
successfully put into the VPU FIFO (and therefore transmitted) by
checking the VPU FIFO status register to ensure that writing the message
didn't cause the FIFO to overflow.

Therefore, the mailbox controller is configured as capable of tx_done
IRQs and a tasklet is used to simulate the tx_done IRQ. The tasklet is
activated by send_data() right after the message has been put into the
VPU FIFO and the VPU FIFO status register has been checked. If an
overflow is reported by the status register, the tasklet passes -EBUSY
to mbox_chan_txdone(), to notify the mailbox client of the failed TX.

The client should therefore register a tx_done() callback to properly
handle failed transmissions.

Note: the 'txdone_poll' mechanism cannot be used because it doesn't
provide a way to report a failed transmission.
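
As a rough illustration, a client of this controller could look like the
sketch below; only the mailbox client API (mbox_request_channel(),
mbox_send_message(), the tx_done()/rx_callback() hooks) is standard, while
the surrounding structure is an assumption:

  #include <linux/device.h>
  #include <linux/mailbox_client.h>

  /* Hedged sketch of a client handling the simulated tx_done IRQ. */
  static void example_tx_done(struct mbox_client *cl, void *msg, int r)
  {
          /* r is -EBUSY when send_data() detected a VPU FIFO overflow. */
          if (r)
                  dev_warn(cl->dev, "TX of entry 0x%08x failed: %d\n",
                           *(u32 *)msg, r);
  }

  static struct mbox_chan *example_request_vpu_chan(struct device *dev,
                                                    struct mbox_client *cl)
  {
          cl->dev = dev;
          cl->tx_done = example_tx_done; /* reports the simulated TX result */
          cl->tx_block = false;          /* don't block waiting for tx_done */

          return mbox_request_channel(cl, 0);
  }

A message is then queued with mbox_send_message() and its outcome is reported
asynchronously through the tx_done() callback above.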

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
---
 MAINTAINERS                       |   1 +
 drivers/mailbox/Kconfig           |  11 ++
 drivers/mailbox/Makefile          |   2 +
 drivers/mailbox/vpu-ipc-mailbox.c | 297 ++++++++++++++++++++++++++++++
 4 files changed, 311 insertions(+)
 create mode 100644 drivers/mailbox/vpu-ipc-mailbox.c

diff --git a/MAINTAINERS b/MAINTAINERS
index c0fb04969916..496baaf0c754 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9188,6 +9188,7 @@ M:	Daniele Alessandrelli <daniele.alessandrelli@intel.com>
 M:	Mark Gross <mgross@linux.intel.com>
 S:	Supported
 F:	Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
+F:	drivers/mailbox/vpu-ipc-mailbox.c
 
 INTEL WIRELESS 3945ABG/BG, 4965AGN (iwlegacy)
 M:	Stanislaw Gruszka <stf_xl@wp.pl>
diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
index f4abe3529acd..cb50b541a5c6 100644
--- a/drivers/mailbox/Kconfig
+++ b/drivers/mailbox/Kconfig
@@ -29,6 +29,17 @@ config IMX_MBOX
 	help
 	  Mailbox implementation for i.MX Messaging Unit (MU).
 
+config INTEL_VPU_IPC_MBOX
+	tristate "Intel VPU IPC Mailbox"
+	depends on HAS_IOMEM
+	depends on OF || COMPILE_TEST
+	help
+	  Mailbox implementation for enabling inter-processor communication
+	  between application processors and Intel VPUs.
+
+	  Say Y or M here if you are building for an SoC equipped with an Intel
+	  VPU. If M is selected, the module will be called vpu-ipc-mailbox.
+
 config PLATFORM_MHU
 	tristate "Platform MHU Mailbox"
 	depends on OF
diff --git a/drivers/mailbox/Makefile b/drivers/mailbox/Makefile
index 7194fa92c787..68768bb2ee43 100644
--- a/drivers/mailbox/Makefile
+++ b/drivers/mailbox/Makefile
@@ -56,3 +56,5 @@ obj-$(CONFIG_SUN6I_MSGBOX)	+= sun6i-msgbox.o
 obj-$(CONFIG_SPRD_MBOX)		+= sprd-mailbox.o
 
 obj-$(CONFIG_QCOM_IPCC)		+= qcom-ipcc.o
+
+obj-$(CONFIG_INTEL_VPU_IPC_MBOX)	+= vpu-ipc-mailbox.o
diff --git a/drivers/mailbox/vpu-ipc-mailbox.c b/drivers/mailbox/vpu-ipc-mailbox.c
new file mode 100644
index 000000000000..ad161a7bbabb
--- /dev/null
+++ b/drivers/mailbox/vpu-ipc-mailbox.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel VPU IPC mailbox driver.
+ *
+ * Copyright (c) 2020-2021 Intel Corporation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/mailbox_controller.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+
+/*
+ * The IPC FIFO registers (offsets to the base address defined in device tree).
+ */
+
+/*
+ * TIM_IPC_FIFO - Write a 32-bit entry to FIFO.
+ *
+ * The entry to be put in the FIFO must be written to this register.
+ *
+ * NOTE: the 6 least-significant bits are reserved for the writing processor
+ * to include its processor ID, 0 <= x <= 62, so it can determine if the entry
+ * was written correctly by checking the appropriate bit of register
+ * TIM_IPC_FIFO_OF_FLAG[n].
+ *
+ * Internally, the hardware increments FIFO write pointer and fill level.
+ *
+ */
+#define IPC_FIFO		0x00
+
+/* The last 6 bits of an IPC entry are reserved. */
+#define IPC_FIFO_ENTRY_RSVD_MASK	0x3f
+
+/*
+ * IPC_FIFO_ATM - Read from FIFO using ATM mode.
+ *
+ * If the FIFO is empty, reading this register returns 0xFFFFFFFF; otherwise it
+ * returns the value from the FIFO with the 6 least-significant bits set to 0.
+ *
+ * Internally, the hardware increments FIFO read pointer and decrements fill
+ * level.
+ */
+#define IPC_FIFO_ATM		0x04
+#define IPC_FIFO_EMPTY		0xFFFFFFFF
+
+/*
+ * TIM_IPC_FIFO_OF_FLAG[n] - IPC FIFO overflow status for processor IDs 0-62.
+ *
+ * Read:
+ *
+ * A processor can check that its writes to the IPC FIFO were successful by
+ * reading the value of TIM_IPC_FIFO_OF_FLAG0 or TIM_IPC_FIFO_OF_FLAG1
+ * (depending on its processor ID).
+ *
+ * Bit x, 0 <= x <= 31, of TIM_IPC_FIFO_OF_FLAG0 is set high if a write
+ * to TIM_IPC_FIFO by processor ID x failed because the FIFO was full.
+ *
+ * Bit x, 0 <= x <= 30, of TIM_IPC_FIFO_OF_FLAG1 is set high if a write
+ * to TIM_IPC_FIFO by processor ID x+32 failed because the FIFO was
+ * full.
+ *
+ * Processors are identified by the 6 least-significant bits of words
+ * written to TIM_IPC_FIFO, i.e. x = TIM_IPC_FIFO[5:0].
+ * Processor ID = 0x3F is reserved to indicate a read of an empty FIFO
+ * has occurred.
+ *
+ * Write:
+ *
+ * Writing 1 to bit position x of TIM_IPC_FIFO_OF_FLAG0 clears the
+ * overflow flag corresponding to processor ID x.  Writing 1 to bit
+ * position x of TIM_IPC_FIFO_OF_FLAG1 clears the overflow flag
+ * corresponding to processor ID x+32.
+ *
+ * Writing 0 to any bit position has no effect.
+ */
+#define IPC_FIFO_OF_FLAG0	0x10
+#define IPC_FIFO_OF_FLAG1	0x14
+
+/* The processor ID of the CPU. */
+#define IPC_FIFO_ID_CPU		0
+
+/**
+ * struct vpu_ipc_mbox - Intel VPU IPC mailbox controller.
+ * @mbox:		Mailbox controller.
+ * @mbox_chan:		The only channel supported by this controller.
+ * @cpu_fifo_base:	Base address of CPU FIFO registers.
+ * @vpu_fifo_base:	Base address of VPU FIFO registers.
+ * @txdone_tasklet:	Tasklet calling mbox_chan_txdone(). It's activated by
+ *			the send_data() function, after the VPU FIFO has been
+ *			written; a tasklet is used because send_data() cannot
+ *			call mbox_chan_txdone() directly.
+ * @txdone_result:	The result of the last TX. It's set by the send_data()
+ *			function before activating the txdone_tasklet.
+ */
+struct vpu_ipc_mbox {
+	struct mbox_controller	mbox;
+	struct mbox_chan	mbox_chan;
+	void __iomem		*cpu_fifo_base;
+	void __iomem		*vpu_fifo_base;
+	struct tasklet_struct	txdone_tasklet;
+	int			txdone_result;
+};
+
+/* The IRQ handler servicing 'FIFO-not-empty' IRQs coming from the CPU FIFO. */
+static irqreturn_t vpu_ipc_mailbox_irq_handler(int irq, void *data)
+{
+	struct vpu_ipc_mbox *vpu_ipc_mbox = data;
+	u32 entry;
+
+	/* Extract and process one entry from CPU FIFO. */
+	entry = ioread32(vpu_ipc_mbox->cpu_fifo_base + IPC_FIFO_ATM);
+	if (unlikely(entry == IPC_FIFO_EMPTY))
+		return IRQ_NONE;
+
+	/* Notify mailbox client of new data. */
+	mbox_chan_received_data(&vpu_ipc_mbox->mbox_chan, (void *)&entry);
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * The function implementing the txdone_tasklet.
+ *
+ * It calls mbox_chan_txdone() passing as arguments the only channel we have
+ * and the result of the last TX (as stored in the vpu_ipc_mbox struct).
+ */
+static void txdone_tasklet_func(unsigned long vpu_ipc_mbox_ptr)
+{
+	struct vpu_ipc_mbox *vpu_ipc_mbox = (void *)vpu_ipc_mbox_ptr;
+
+	/* Notify client that tx is completed and pass proper result code. */
+	mbox_chan_txdone(&vpu_ipc_mbox->mbox_chan, vpu_ipc_mbox->txdone_result);
+}
+
+/*
+ * Mailbox controller 'send_data()' function.
+ *
+ * This function tries to put 'data' into the VPU FIFO. This is done by
+ * writing to the IPC_FIFO VPU register and then checking whether we overflowed
+ * the FIFO (by reading the IPC_FIFO_OF_FLAG0 VPU register).
+ *
+ * If we overflowed the FIFO, the TX has failed and we notify the mailbox
+ * client by passing -EBUSY to mbox_chan_txdone(); otherwise the TX succeeded
+ * (we pass 0 to mbox_chan_txdone()). Note: mbox_chan_txdone() cannot be called
+ * directly (since that would cause a deadlock), therefore a tasklet is used to
+ * defer the call.
+ *
+ * 'data' is meant to be a 32-bit unsigned integer with the 6 least-significant
+ * bits set to 0.
+ */
+static int vpu_ipc_mailbox_send_data(struct mbox_chan *chan, void *data)
+{
+	struct vpu_ipc_mbox *vpu_ipc_mbox = chan->con_priv;
+	u32 entry, overflow;
+
+	entry = *((u32 *)data);
+
+	/* Ensure last 6-bits of entry are not used. */
+	if (unlikely(entry & IPC_FIFO_ENTRY_RSVD_MASK)) {
+		vpu_ipc_mbox->txdone_result = -EINVAL;
+		goto exit;
+	}
+
+	/* Add processor ID to entry. */
+	entry |= IPC_FIFO_ID_CPU & IPC_FIFO_ENTRY_RSVD_MASK;
+
+	/* Write entry to VPU FIFO. */
+	iowrite32(entry, vpu_ipc_mbox->vpu_fifo_base + IPC_FIFO);
+
+	/* Check if we overflew the VPU FIFO. */
+	overflow = ioread32(vpu_ipc_mbox->vpu_fifo_base + IPC_FIFO_OF_FLAG0) &
+		   BIT(IPC_FIFO_ID_CPU);
+	if (unlikely(overflow)) {
+		/* Reset overflow register. */
+		iowrite32(BIT(IPC_FIFO_ID_CPU),
+			  vpu_ipc_mbox->vpu_fifo_base + IPC_FIFO_OF_FLAG0);
+		vpu_ipc_mbox->txdone_result = -EBUSY;
+		goto exit;
+	}
+	vpu_ipc_mbox->txdone_result = 0;
+
+exit:
+	/* Schedule tasklet to call mbox_chan_txdone(). */
+	tasklet_schedule(&vpu_ipc_mbox->txdone_tasklet);
+
+	return 0;
+}
+
+/* The mailbox channel ops for this controller. */
+static const struct mbox_chan_ops vpu_ipc_mbox_chan_ops = {
+	.send_data = vpu_ipc_mailbox_send_data,
+};
+
+static int vpu_ipc_mailbox_probe(struct platform_device *pdev)
+{
+	struct vpu_ipc_mbox *vpu_ipc_mbox;
+	struct device *dev = &pdev->dev;
+	void __iomem *base;
+	int irq;
+	int rc;
+
+	vpu_ipc_mbox = devm_kzalloc(dev, sizeof(*vpu_ipc_mbox), GFP_KERNEL);
+	if (!vpu_ipc_mbox)
+		return -ENOMEM;
+
+	/* Map CPU FIFO registers. */
+	base = devm_platform_ioremap_resource_byname(pdev, "cpu_fifo");
+	if (IS_ERR(base)) {
+		dev_err(dev, "Failed to ioremap CPU FIFO registers\n");
+		return PTR_ERR(base);
+	}
+	vpu_ipc_mbox->cpu_fifo_base = base;
+
+	/* MAP VPU FIFO registers. */
+	base = devm_platform_ioremap_resource_byname(pdev, "vpu_fifo");
+	if (IS_ERR(base)) {
+		dev_err(dev, "Failed to ioremap VPU FIFO registers\n");
+		return PTR_ERR(base);
+	}
+	vpu_ipc_mbox->vpu_fifo_base = base;
+
+	/* Initialize mailbox channels. */
+	vpu_ipc_mbox->mbox_chan.con_priv = vpu_ipc_mbox;
+
+	/* Initialize mailbox controller. */
+	vpu_ipc_mbox->mbox.dev = dev;
+	vpu_ipc_mbox->mbox.ops = &vpu_ipc_mbox_chan_ops;
+	vpu_ipc_mbox->mbox.chans = &vpu_ipc_mbox->mbox_chan;
+	vpu_ipc_mbox->mbox.num_chans = 1;
+	/*
+	 * Set txdone_irq; we don't have a HW IRQ, but we use a txdone tasklet
+	 * to simulate it.
+	 */
+	vpu_ipc_mbox->mbox.txdone_irq = true;
+
+	/* Init TX done tasklet. */
+	tasklet_init(&vpu_ipc_mbox->txdone_tasklet, txdone_tasklet_func,
+		     (uintptr_t)vpu_ipc_mbox);
+
+	rc = devm_mbox_controller_register(dev, &vpu_ipc_mbox->mbox);
+	if (rc) {
+		dev_err(&pdev->dev, "Failed to register VPU IPC controller\n");
+		return rc;
+	}
+
+	/* Register interrupt handler for CPU FIFO. */
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0)
+		return irq;
+	rc = devm_request_irq(dev, irq, vpu_ipc_mailbox_irq_handler, 0,
+			      dev_name(dev), vpu_ipc_mbox);
+	if (rc)
+		return rc;
+
+	platform_set_drvdata(pdev, vpu_ipc_mbox);
+
+	return 0;
+}
+
+static int vpu_ipc_mailbox_remove(struct platform_device *pdev)
+{
+	struct vpu_ipc_mbox *vpu_ipc_mbox = platform_get_drvdata(pdev);
+
+	/*
+	 * Just kill the tasklet as iomem and irq have been requested with
+	 * devm_* functions and, therefore, are freed automatically.
+	 */
+	tasklet_kill(&vpu_ipc_mbox->txdone_tasklet);
+
+	return 0;
+}
+
+static const struct of_device_id vpu_ipc_mailbox_of_match[] = {
+	{
+		.compatible = "intel,vpu-ipc-mailbox",
+	},
+	{}
+};
+
+static struct platform_driver vpu_ipc_mailbox_driver = {
+	.driver = {
+			.name = "vpu-ipc-mailbox",
+			.of_match_table = vpu_ipc_mailbox_of_match,
+		},
+	.probe = vpu_ipc_mailbox_probe,
+	.remove = vpu_ipc_mailbox_remove,
+};
+module_platform_driver(vpu_ipc_mailbox_driver);
+
+MODULE_DESCRIPTION("Intel VPU IPC mailbox driver");
+MODULE_AUTHOR("Daniele Alessandrelli <daniele.alessandrelli@intel.com>");
+MODULE_LICENSE("GPL");
-- 
2.17.1



* [PATCH v2 04/34] dt-bindings: Add bindings for Keem Bay IPC driver
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (2 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 05/34] keembay-ipc: Add Keem Bay IPC module mgross
                   ` (29 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Daniele Alessandrelli, devicetree

From: Daniele Alessandrelli <daniele.alessandrelli@intel.com>

Add DT binding documentation for the Intel Keem Bay IPC driver, which
enables communication between the Computing Sub-System (CSS) and the
Multimedia Sub-System (MSS) of the Intel Movidius SoC code named Keem
Bay.

Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
---
 .../bindings/soc/intel/intel,keembay-ipc.yaml | 45 +++++++++++++++++++
 1 file changed, 45 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml

diff --git a/Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml b/Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
new file mode 100644
index 000000000000..586fe73f4cd4
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright (C) 2020 Intel Corporation
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/soc/intel/intel,keembay-ipc.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Keem Bay IPC
+
+maintainers:
+  - Daniele Alessandrelli <daniele.alessandrelli@intel.com>
+
+description:
+  The Keem Bay IPC driver enables Inter-Processor Communication (IPC) with the
+  Vision Processing Unit (VPU) embedded in the Intel Movidius SoC code named
+  Keem Bay.
+
+properties:
+  compatible:
+    const: intel,keembay-ipc
+
+  memory-region:
+    items:
+      - description:
+          Reserved memory region used by the CPU to allocate IPC packets.
+      - description:
+          Reserved memory region used by the VPU to allocate IPC packets.
+
+  mboxes:
+    description: VPU IPC Mailbox.
+
+required:
+  - compatible
+  - memory-region
+  - mboxes
+
+additionalProperties: false
+
+examples:
+  - |
+    ipc {
+          compatible = "intel,keembay-ipc";
+          memory-region = <&ipc_cpu_reserved>, <&ipc_vpu_reserved>;
+          mboxes = <&vpu_ipc_mbox 0>;
+    };
-- 
2.17.1



* [PATCH v2 05/34] keembay-ipc: Add Keem Bay IPC module
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (3 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 04/34] dt-bindings: Add bindings for Keem Bay IPC driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver mgross
                   ` (28 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Daniele Alessandrelli

From: Daniele Alessandrelli <daniele.alessandrelli@intel.com>

On the Intel Movidius SoC code named Keem Bay, communication between the
Application Processor (AP) and the VPU is enabled by the Keem Bay
Inter-Processor Communication (IPC) mechanism.

Add the driver for using Keem Bay IPC from within the Linux Kernel.

The IPC uses the following terminology:

- Node:    A processing entity that can use the IPC to communicate
           (currently, we just have two nodes, the AP and the VPU).

- Link:    Two nodes that can communicate over IPC form an IPC link
           (currently, we just have one link, the one formed by the AP
           and the VPU).

- Channel: An IPC link can provide multiple IPC channels. IPC channels
           allow communication multiplexing, i.e., the same IPC link can
           be used by different applications for different
           communications. Each channel is identified by a channel ID,
           which must be unique within a single IPC link. Channels are
           divided into two categories, High-Speed (HS) channels and
           General-Purpose (GP) channels. HS channels have higher
           priority over GP channels.

The Keem Bay IPC mechanism is built on top of the VPU IPC mailbox, which
allows the AP and the VPU to exchange 32-bit messages. Specifically, the
IPC uses shared memory (shared between the AP and the VPU) to allocate
IPC packets and then exchanges them using the VPU IPC mailbox (the
32-bit physical address of the packet is passed as a message to the VPU
IPC mailbox).

IPC packets have a fixed structure containing the (VPU) physical address
of the payload (which must be located in shared memory too) as well as
other information (payload size, IPC channel ID, etc.).

Each IPC node (i.e., both the AP and the VPU) has its own reserved
memory region (in shared memory) from which it instantiates its own pool
of IPC packets.  When instantiated, IPC packets are marked as free. When
the node needs to send an IPC message, it gets the first free packet it
finds (from its own pool), marks it as allocated (used), and transfers
its physical address to the destination node using the VPU IPC mailbox.
The destination node uses the received physical address to access the
IPC packet, processes it, and, once done with it, marks it as free
(so that the sender can reuse it).
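
For illustration only, a packet descriptor along these lines could look
roughly like the sketch below; the field names and widths are assumptions,
not the actual layout used by the driver (only the 64-byte alignment matches
the driver's KMB_IPC_ALIGNMENT):

  #include <linux/types.h>

  /* Illustrative sketch; the real packet layout is defined by the driver. */
  struct example_ipc_packet {
          u32 status;      /* free / allocated marker */
          u16 channel;     /* IPC channel ID (0-1023) */
          u16 src_node;    /* sending node (AP or VPU) */
          u32 data_paddr;  /* VPU physical address of the payload */
          u32 data_size;   /* payload size in bytes */
  };                       /* packets and payloads are 64-byte aligned */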

Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: Shawn Guo <shawnguo@kernel.org>
Cc: Leonard Crestez <leonard.crestez@nxp.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
---
 MAINTAINERS                           |    8 +
 drivers/soc/Kconfig                   |    1 +
 drivers/soc/Makefile                  |    1 +
 drivers/soc/intel/Kconfig             |   18 +
 drivers/soc/intel/Makefile            |    4 +
 drivers/soc/intel/keembay-ipc.c       | 1364 +++++++++++++++++++++++++
 include/linux/soc/intel/keembay-ipc.h |   30 +
 7 files changed, 1426 insertions(+)
 create mode 100644 drivers/soc/intel/Kconfig
 create mode 100644 drivers/soc/intel/Makefile
 create mode 100644 drivers/soc/intel/keembay-ipc.c
 create mode 100644 include/linux/soc/intel/keembay-ipc.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 496baaf0c754..422047edbf0c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9062,6 +9062,14 @@ F:	drivers/crypto/keembay/keembay-ocs-aes-core.c
 F:	drivers/crypto/keembay/ocs-aes.c
 F:	drivers/crypto/keembay/ocs-aes.h
 
+INTEL KEEM BAY IPC DRIVER
+M:	Daniele Alessandrelli <daniele.alessandrelli@intel.com>
+M:	Mark Gross <mgross@linux.intel.com>
+S:	Supported
+F:	Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
+F:	drivers/soc/intel/keembay-ipc.c
+F:	include/linux/soc/intel/keembay-ipc.h
+
 INTEL MANAGEMENT ENGINE (mei)
 M:	Tomas Winkler <tomas.winkler@intel.com>
 L:	linux-kernel@vger.kernel.org
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index d097d070f579..b9d69a1eedc7 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -8,6 +8,7 @@ source "drivers/soc/atmel/Kconfig"
 source "drivers/soc/bcm/Kconfig"
 source "drivers/soc/fsl/Kconfig"
 source "drivers/soc/imx/Kconfig"
+source "drivers/soc/intel/Kconfig"
 source "drivers/soc/ixp4xx/Kconfig"
 source "drivers/soc/litex/Kconfig"
 source "drivers/soc/mediatek/Kconfig"
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 699b758d28e4..1a6c00d2e32e 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -12,6 +12,7 @@ obj-$(CONFIG_MACH_DOVE)		+= dove/
 obj-y				+= fsl/
 obj-$(CONFIG_ARCH_GEMINI)	+= gemini/
 obj-y				+= imx/
+obj-y				+= intel/
 obj-$(CONFIG_ARCH_IXP4XX)	+= ixp4xx/
 obj-$(CONFIG_SOC_XWAY)		+= lantiq/
 obj-$(CONFIG_LITEX_SOC_CONTROLLER) += litex/
diff --git a/drivers/soc/intel/Kconfig b/drivers/soc/intel/Kconfig
new file mode 100644
index 000000000000..a575e31e47b4
--- /dev/null
+++ b/drivers/soc/intel/Kconfig
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Keem Bay SoC drivers
+#
+
+menu "Intel SoC drivers"
+
+config KEEMBAY_IPC
+	tristate "Support for Intel Keem Bay IPC"
+	depends on INTEL_VPU_IPC_MBOX
+	depends on ARCH_KEEMBAY || (ARM64 && COMPILE_TEST)
+	help
+	  Keem Bay IPC enables communication between the Keem Bay CPU
+	  Sub-System (CSS) and the Keem Bay Media Sub-System (MSS).
+
+	  Select this if you are compiling the Kernel for an Intel SoC that
+	  includes the Intel Vision Processing Unit (VPU) such as Keem Bay.
+endmenu
diff --git a/drivers/soc/intel/Makefile b/drivers/soc/intel/Makefile
new file mode 100644
index 000000000000..ecf0246e7822
--- /dev/null
+++ b/drivers/soc/intel/Makefile
@@ -0,0 +1,4 @@
+#
+# Makefile for Keem Bay IPC Linux driver
+#
+obj-$(CONFIG_KEEMBAY_IPC) += keembay-ipc.o
diff --git a/drivers/soc/intel/keembay-ipc.c b/drivers/soc/intel/keembay-ipc.c
new file mode 100644
index 000000000000..70a8d8b1f9a8
--- /dev/null
+++ b/drivers/soc/intel/keembay-ipc.c
@@ -0,0 +1,1364 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Keem Bay IPC Driver.
+ *
+ * Copyright (C) 2018-2020 Intel Corporation
+ *
+ * This driver implements the VPU Inter-Processor Communication (IPC) mechanism
+ * which enables communication between the Application Processor (AP), running
+ * Linux, and the VPU, running a proprietary VPU firmware.
+ *
+ * The IPC uses the following terminology:
+ *
+ * - Node:    A processing entity that can use the IPC to communicate
+ *	      (currently, we just have two nodes, the AP and the VPU).
+ *
+ * - Link:    Two nodes that can communicate over IPC form an IPC link
+ *	      (currently, we just have one link, the one formed by the AP and
+ *	      the VPU).
+ *
+ * - Channel: An IPC link can provide multiple IPC channels. IPC channels allow
+ *            communication multiplexing, i.e., the same IPC link can be used
+ *            by different applications for different communications. Each
+ *            channel is identified by a channel ID, which must be unique
+ *            within a single IPC link. Channels are divided into two
+ *            categories, High-Speed (HS) channels and General-Purpose (GP)
+ *            channels. HS channels have higher priority than GP channels.
+ *
+ * The VPU IPC mechanism is built on top of the VPU IPC mailbox, which allows
+ * the AP and the VPU to exchange 32-bit messages. Specifically, the VPU IPC
+ * mechanism uses shared memory (shared between the AP and the VPU) to allocate
+ * IPC packets and then exchanges them using the VPU IPC mailbox (the 32-bit
+ * physical address of the packet is passed as a message to the VPU IPC
+ * mailbox).
+ *
+ * IPC packets have a fixed structure containing the (VPU) physical address of
+ * the payload (which must be located in shared memory too) as well as other
+ * information (payload size, IPC channel ID, etc.).
+ *
+ * Each IPC node (i.e., both the AP and the VPU) has its own reserved memory
+ * region (in shared memory) from which it instantiates its own pool of IPC
+ * packets.  When instantiated, IPC packets are marked as free. When the node
+ * needs to send an IPC message, it gets the first free packet it finds (from
+ * its own pool), marks it as allocated (used), and transfers its physical
+ * address to the destination node using the VPU IPC mailbox. The destination
+ * node uses the received physical address to access the IPC packet, process
+ * the packet, and, once done with it, marks it as free (so that the sender can
+ * reuse it).
+ *
+ * Note: Keem Bay IPC is not based on RPMsg, since VPU HW/FW does not support
+ * Virtio and Virtqueues.
+ */
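+
+/*
+ * A minimal usage sketch of the kernel API exported by this driver (error
+ * handling omitted; the channel ID 0x10 is arbitrary and 'dev' is the IPC
+ * platform device):
+ *
+ *	u32 rx_vpu_addr;
+ *	size_t rx_size;
+ *
+ *	intel_keembay_ipc_open_channel(dev, KMB_IPC_NODE_LEON_MSS, 0x10);
+ *	intel_keembay_ipc_send(dev, KMB_IPC_NODE_LEON_MSS, 0x10,
+ *			       tx_vpu_addr, tx_size);
+ *	intel_keembay_ipc_recv(dev, KMB_IPC_NODE_LEON_MSS, 0x10,
+ *			       &rx_vpu_addr, &rx_size, 5000);
+ *	intel_keembay_ipc_close_channel(dev, KMB_IPC_NODE_LEON_MSS, 0x10);
+ *
+ * Here 'tx_vpu_addr'/'tx_size' describe a payload already placed in
+ * VPU-accessible shared memory and the last argument to recv() is a timeout
+ * in milliseconds.
+ */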
+
+#include <linux/circ_buf.h>
+#include <linux/completion.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/mailbox_client.h>
+#include <linux/mailbox_controller.h> /* Needed for MBOX_TX_QUEUE_LEN. */
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_irq.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+
+#include <linux/soc/intel/keembay-ipc.h>
+
+#define DRV_NAME		"keembay-ipc"
+
+/* The alignment to be used for IPC Packets and IPC Data. */
+#define KMB_IPC_ALIGNMENT	64
+
+/* Maximum number of channels per link. */
+#define KMB_IPC_MAX_CHANNELS	1024
+
+/* The number of high-speed channels per link. */
+#define KMB_IPC_NUM_HS_CHANNELS	10
+
+/*
+ * These are used as indices for retrieving the reserved memory regions from
+ * the device tree.
+ */
+#define RSVD_MEM_IDX_CPU_PKTS	0
+#define RSVD_MEM_IDX_VPU_PKTS	1
+
+/* The possible states of an IPC packet. */
+enum {
+	/*
+	 * KMB_IPC_PKT_FREE must be set to 0 to ensure that packets can be
+	 * initialized with memset(&buf, 0, sizeof(buf)).
+	 */
+	KMB_IPC_PKT_FREE = 0,
+	KMB_IPC_PKT_ALLOCATED,
+};
+
+/**
+ * struct kmb_ipc_pkt - The IPC packet structure.
+ * @data_addr:	The address where the IPC payload is located; NOTE: this is a
+ *		VPU address (not a CPU one).
+ * @data_size:	The size of the payload.
+ * @channel:	The channel used.
+ * @src_node:	The Node ID of the sender.
+ * @dst_node:	The Node ID of the intended receiver.
+ * @status:	Either free or allocated.
+ */
+struct kmb_ipc_pkt {
+	u32	data_addr;
+	u32	data_size;
+	u16	channel;
+	u8	src_node;
+	u8	dst_node;
+	u8	status;
+} __packed __aligned(KMB_IPC_ALIGNMENT);
+
+/**
+ * struct ipc_pkt_mem - IPC Packet Memory Region.
+ * @dev:	Child device managing the memory region.
+ * @vaddr:	The virtual address of the memory region.
+ * @dma_handle:	The VPU address of the memory region.
+ * @size:	The size of the memory region.
+ */
+struct ipc_pkt_mem {
+	struct device	*dev;
+	void		*vaddr;
+	dma_addr_t	dma_handle;
+	size_t		size;
+};
+
+/**
+ * struct ipc_pkt_pool - IPC packet pool.
+ * @packets:	Pointer to the array of packets.
+ * @buf_cnt:	Pool size (i.e., number of packets).
+ * @idx:	Current index.
+ */
+struct ipc_pkt_pool {
+	struct kmb_ipc_pkt	*packets;
+	size_t			buf_cnt;
+	size_t			idx;
+};
+
+/**
+ * struct ipc_chan - IPC channel.
+ * @rx_data_list:	The list of incoming messages.
+ * @rx_lock:		The lock for modifying the rx_data_list.
+ * @rx_wait_queue:	The wait queue for RX Data (recv() waits on it).
+ * @closing:		Closing flag, set when the channel is being closed.
+ */
+struct ipc_chan {
+	struct list_head	rx_data_list;
+	spinlock_t		rx_lock; /* Protects 'rx_data_list'. */
+	wait_queue_head_t	rx_wait_queue;
+	bool			closing;
+};
+
+/**
+ * struct tx_data - Element of a TX queue.
+ * @list:	The list head used to concatenate TX data elements.
+ * @vpu_addr:	The VPU address of the data to be transferred.
+ * @size:	The size of the data to be transferred.
+ * @chan_id:	The channel to be used for the transfer.
+ * @dst_node:	The destination node.
+ * @retv:	The result of the transfer.
+ * @tx_done:	The completion struct used by the sender to wait for the
+ *		transfer to complete.
+ * @entry:	The IPC packet VPU address to be sent to the VPU (this field is
+ *		set by tx_data_send() and used by ipc_mbox_tx_done() to get a
+ *		reference to the associated tx_data struct).
+ */
+struct tx_data {
+	struct	list_head list;
+	u32	vpu_addr;
+	u32	size;
+	u16	chan_id;
+	u8	dst_node;
+	int	retv;
+	struct	completion tx_done;
+	u32	entry;
+};
+
+/**
+ * struct tx_queue - The TX queue structure.
+ * @tx_data_list: The list of pending TX data on this queue.
+ * @lock:	  The lock protecting the TX data list.
+ */
+struct tx_queue {
+	struct list_head	tx_data_list;
+	spinlock_t		lock;	/* Protects tx_data_list. */
+};
+
+/**
+ * struct ipc_link - An IPC link.
+ * @mbox_cl:	   The mailbox client associated with this link.
+ * @mbox_chan:	   The mailbox channel associated with this link.
+ * @mbox_tx_queue: The completion used to avoid overflowing the mailbox
+ *		   framework queue (MBOX_TX_QUEUE_LEN), which would result in
+ *		   IPC packets being dropped.
+ * @channels:	   The channels associated with this link (the pointers to the
+ *		   channels are RCU-protected).
+ * @chan_lock:	   The lock for modifying the channels array.
+ * @srcu_sp:	   The Sleepable RCU structs associated with the link's
+ *		   channels.
+ * @tx_queues:	   The TX queues for this link (one queue for each high-speed
+ *		   channel + one shared among all the general-purpose channels).
+ * @tx_qidx:	   The index of the next tx_queue to be checked for outgoing data.
+ * @tx_queued:	   The TX completion used to signal when new TX data is pending.
+ * @tx_thread:	   The TX thread.
+ * @tx_stopping:   Flag signaling that the IPC Link is being closed.
+ */
+struct ipc_link {
+	struct mbox_client	mbox_cl;
+	struct mbox_chan	*mbox_chan;
+	struct completion       mbox_tx_queue;
+	struct ipc_chan __rcu	*channels[KMB_IPC_MAX_CHANNELS];
+	spinlock_t		chan_lock; /* Protects 'channels'. */
+	struct srcu_struct	srcu_sp[KMB_IPC_MAX_CHANNELS];
+	struct tx_queue		tx_queues[KMB_IPC_NUM_HS_CHANNELS + 1];
+	int			tx_qidx;
+	struct completion	tx_queued;
+	struct task_struct	*tx_thread;
+	bool			tx_stopping;
+};
+
+/**
+ * struct keembay_ipc_dev - IPC private data.
+ *
+ * @plat_dev:		    Platform device for this driver.
+ * @cpu_ipc_mem:	    Local IPC Packet memory region.
+ * @vpu_ipc_mem:	    Remote IPC Packet memory region.
+ * @ipc_pkt_pool:	    The pool of IPC packets.
+ * @vpu_link:		    The CPU-VPU link.
+ */
+struct keembay_ipc_dev {
+	struct platform_device	*plat_dev;
+	struct ipc_pkt_mem	cpu_ipc_mem;
+	struct ipc_pkt_mem	vpu_ipc_mem;
+	struct ipc_pkt_pool	ipc_pkt_pool;
+	struct ipc_link		vpu_link;
+};
+
+/**
+ * struct rx_data - RX Data Descriptor.
+ * @list:		List head for creating a list of rx_data elements.
+ * @data_vpu_addr:	The VPU address of the received data.
+ * @data_size:		The size of the received data.
+ *
+ * Instances of this struct are meant to be inserted in the RX Data queue
+ * (list) associated with each channel.
+ */
+struct rx_data {
+	struct list_head	list;
+	u32			data_vpu_addr;
+	u32			data_size;
+};
+
+/* Forward declaration of TX thread function. */
+static int tx_thread_fn(void *ptr);
+
+/* Forward declaration of mailbox client callbacks. */
+static void ipc_mbox_rx_callback(struct mbox_client *cl, void *msg);
+static void ipc_mbox_tx_done(struct mbox_client *cl, void *mssg, int r);
+
+/*
+ * Functions related to reserved-memory sub-devices.
+ */
+
+/*
+ * init_ipc_rsvd_mem() - Init the specified IPC reserved memory.
+ * @dev:	The IPC device for which the memory will be initialized.
+ * @mem:	Where to store information about the initialized memory.
+ * @mem_name:	The name of this IPC memory.
+ * @mem_idx:	The index of the memory to initialize.
+ *
+ * Create a child device for 'dev' and use it to initialize the reserved
+ * memory region specified in the device tree at index 'mem_idx'.
+ * Also allocate DMA memory from the initialized memory region.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+static int init_ipc_rsvd_mem(struct device *dev, struct ipc_pkt_mem *mem,
+			     const char *mem_name, unsigned int mem_idx)
+{
+	struct device *mem_dev;
+	struct device_node *np;
+	struct resource mem_res;
+	dma_addr_t dma_handle;
+	size_t mem_size;
+	void *vaddr;
+	int rc;
+
+	/* Create a child device (of dev) to own the reserved memory. */
+	mem_dev = devm_kzalloc(dev, sizeof(struct device), GFP_KERNEL);
+	if (!mem_dev)
+		return -ENOMEM;
+
+	device_initialize(mem_dev);
+	dev_set_name(mem_dev, "%s:%s", dev_name(dev), mem_name);
+	mem_dev->parent = dev;
+	mem_dev->dma_mask = dev->dma_mask;
+	mem_dev->coherent_dma_mask = dev->coherent_dma_mask;
+	mem_dev->release = of_reserved_mem_device_release;
+
+	/* Set up DMA configuration using information from parent's DT node. */
+	rc = of_dma_configure(mem_dev, dev->of_node, true);
+	if (rc)
+		goto err_add;
+
+	rc = device_add(mem_dev);
+	if (rc)
+		goto err_add;
+
+	/* Initialize the device's reserved memory region. */
+	rc = of_reserved_mem_device_init_by_idx(mem_dev, dev->of_node, mem_idx);
+	if (rc) {
+		dev_err(dev, "Couldn't get reserved memory with idx = %d, %d\n",
+			mem_idx, rc);
+		goto err_post_add;
+	}
+
+	/* Find out the size of the memory region. */
+	np = of_parse_phandle(dev->of_node, "memory-region", mem_idx);
+	if (!np) {
+		dev_err(dev, "Couldn't find memory-region %d\n", mem_idx);
+		rc = -EINVAL;
+		goto err_post_add;
+	}
+	rc = of_address_to_resource(np, 0, &mem_res);
+	if (rc) {
+		dev_err(dev, "Couldn't map address to resource %d\n", mem_idx);
+		goto err_post_add;
+	}
+	mem_size = resource_size(&mem_res);
+
+	/* Allocate memory from the reserved memory region. */
+	vaddr = dmam_alloc_coherent(mem_dev, mem_size, &dma_handle, GFP_KERNEL);
+	if (!vaddr) {
+		dev_err(mem_dev, "Failed to allocate from reserved memory.\n");
+		rc = -ENOMEM;
+		goto err_post_add;
+	}
+
+	mem->dev = mem_dev;
+	mem->vaddr = vaddr;
+	mem->dma_handle = dma_handle;
+	mem->size = mem_size;
+
+	return 0;
+
+err_post_add:
+	device_del(mem_dev);
+err_add:
+	put_device(mem_dev);
+	return rc;
+}
+
+/*
+ * IPC internal functions.
+ */
+
+/**
+ * channel_close() - Close a channel and return whether it was open or not.
+ * @link:	The link the channel belongs to.
+ * @chan_id:	The channel ID of the channel to close.
+ *
+ * Return:	0 if the channel was already closed, 1 otherwise.
+ */
+static int channel_close(struct ipc_link *link, u16 chan_id)
+{
+	struct ipc_chan *chan;
+	struct rx_data *pos, *nxt;
+
+	/* Get channel from channel array. */
+	spin_lock(&link->chan_lock);
+	chan = rcu_dereference_protected(link->channels[chan_id],
+					 lockdep_is_held(&link->chan_lock));
+
+	/* If channel is already NULL, we are done. */
+	if (!chan) {
+		spin_unlock(&link->chan_lock);
+		return 0;
+	}
+
+	/* Otherwise remove it from the 'channels' array. */
+	RCU_INIT_POINTER(link->channels[chan_id], NULL);
+	spin_unlock(&link->chan_lock);
+
+	/* Set closing flag and wake up user threads waiting on recv(). */
+	chan->closing = true;
+	wake_up_all(&chan->rx_wait_queue);
+
+	/* Wait for channel users to drop the reference to the old channel. */
+	synchronize_srcu(&link->srcu_sp[chan_id]);
+
+	/* Free channel memory (rx_data queue and channel struct). */
+	/*
+	 * No need to get chan->rx_lock as we know that nobody is using the
+	 * channel at this point.
+	 */
+	list_for_each_entry_safe(pos, nxt, &chan->rx_data_list, list) {
+		list_del(&pos->list);
+		kfree(pos);
+	}
+	kfree(chan);
+
+	return 1;
+}
+
+/**
+ * ipc_pkt_tx_alloc() - Allocate an IPC packet to be used for TX.
+ * @pool:  The IPC packet pool from which the IPC packet will be allocated.
+ *
+ * Return: The pointer to the allocated packet, or NULL if allocation fails.
+ */
+static struct kmb_ipc_pkt *ipc_pkt_tx_alloc(struct ipc_pkt_pool *pool)
+{
+	struct kmb_ipc_pkt *buf;
+	int i;
+
+	/*
+	 * Look for a free packet starting from the last index (pointing to the
+	 * next packet after the last allocated one) and potentially going
+	 * through all the packets in the pool.
+	 */
+	for (i = 0; i < pool->buf_cnt; i++) {
+		/*
+		 * Get reference to current packet and increment index (for
+		 * next iteration or function call).
+		 */
+		buf = &pool->packets[pool->idx++];
+		if (pool->idx == pool->buf_cnt)
+			pool->idx = 0;
+
+		/* Use current packet if free. */
+		if (buf->status == KMB_IPC_PKT_FREE) {
+			buf->status = KMB_IPC_PKT_ALLOCATED;
+			return buf;
+		}
+	}
+
+	/* We went through all the packets and found none free. */
+	return NULL;
+}
+
+/**
+ * init_ipc_pkt_pool() - Init the CPU IPC Packet Pool.
+ * @ipc_dev:	The IPC device the pool belongs to.
+ *
+ * Set up the IPC Packet Pool to be used for allocating TX packets.
+ *
+ * The pool uses the CPU IPC Packet memory previously allocated.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+static int init_ipc_pkt_pool(struct keembay_ipc_dev *ipc_dev)
+{
+	struct ipc_pkt_mem *mem = &ipc_dev->cpu_ipc_mem;
+
+	ipc_dev->ipc_pkt_pool.buf_cnt = mem->size / sizeof(struct kmb_ipc_pkt);
+
+	/* Fail if we end up having a pool of 0 packets. */
+	if (ipc_dev->ipc_pkt_pool.buf_cnt == 0)
+		return -ENOMEM;
+	/*
+	 * Set reserved memory to 0 to initialize the IPC Packet array
+	 * (ipc_pkt_pool.packets); this works because the value of
+	 * KMB_IPC_PKT_FREE is 0.
+	 */
+	memset(mem->vaddr, 0, mem->size);
+	ipc_dev->ipc_pkt_pool.packets = mem->vaddr;
+	ipc_dev->ipc_pkt_pool.idx = 0;
+
+	return 0;
+}
+
+/*
+ * ipc_link_init() - Initialize CPU/VPU IPC link.
+ * @ipc_dev: The IPC device the link belongs to.
+ *
+ * This function is expected to be called during probing.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int ipc_link_init(struct keembay_ipc_dev *ipc_dev)
+{
+	struct ipc_link *link = &ipc_dev->vpu_link;
+	struct tx_queue *queue;
+	int i;
+
+	/* Init mailbox client. */
+	link->mbox_cl.dev = &ipc_dev->plat_dev->dev;
+	link->mbox_cl.tx_block = false;
+	link->mbox_cl.tx_tout = 0;
+	link->mbox_cl.knows_txdone = false;
+	link->mbox_cl.rx_callback = ipc_mbox_rx_callback;
+	link->mbox_cl.tx_prepare = NULL;
+	link->mbox_cl.tx_done = ipc_mbox_tx_done;
+
+	/* Init completion keeping track of free slots in mbox tx queue. */
+	init_completion(&link->mbox_tx_queue);
+	for (i = 0; i < MBOX_TX_QUEUE_LEN; i++)
+		complete(&link->mbox_tx_queue);
+
+	/* Request mailbox channel */
+	link->mbox_chan = mbox_request_channel(&link->mbox_cl, 0);
+	if (IS_ERR(link->mbox_chan))
+		return PTR_ERR(link->mbox_chan);
+
+	spin_lock_init(&link->chan_lock);
+	for (i = 0; i < ARRAY_SIZE(link->srcu_sp); i++)
+		init_srcu_struct(&link->srcu_sp[i]);
+	memset(link->channels, 0, sizeof(link->channels));
+
+	/* Init TX queues. */
+	for (i = 0; i < ARRAY_SIZE(link->tx_queues); i++) {
+		queue = &link->tx_queues[i];
+		INIT_LIST_HEAD(&queue->tx_data_list);
+		spin_lock_init(&queue->lock);
+	}
+	link->tx_qidx = 0;
+	link->tx_stopping = false;
+	init_completion(&link->tx_queued);
+
+	/* Start TX thread. */
+	link->tx_thread = kthread_run(tx_thread_fn, ipc_dev,
+				      "kmb_ipc_tx_thread");
+	if (IS_ERR(link->tx_thread)) {
+		mbox_free_channel(link->mbox_chan);
+		return PTR_ERR(link->tx_thread);
+	}
+
+	return 0;
+}
+
+/**
+ * ipc_link_deinit() - De-initialize CPU/VPU IPC link.
+ * @ipc_dev:	The IPC device the link belongs to.
+ */
+static void ipc_link_deinit(struct keembay_ipc_dev *ipc_dev)
+{
+	struct ipc_link *link = &ipc_dev->vpu_link;
+	struct tx_queue *queue;
+	struct tx_data *pos, *nxt;
+	int i;
+
+	/* Close all channels. */
+	for (i = 0; i < ARRAY_SIZE(link->channels); i++)
+		channel_close(link, i);
+
+	/* Stop TX Thread. */
+	link->tx_stopping = true;
+	complete(&link->tx_queued);
+	kthread_stop(link->tx_thread);
+
+	/* Flush all TX queues. */
+	for (i = 0; i < ARRAY_SIZE(link->tx_queues); i++) {
+		queue = &link->tx_queues[i];
+		list_for_each_entry_safe(pos, nxt, &queue->tx_data_list, list) {
+			list_del(&pos->list);
+			pos->retv = -EPIPE;
+			complete(&pos->tx_done);
+		}
+	}
+
+	mbox_free_channel(link->mbox_chan);
+}
+
+/* Driver probing. */
+static int kmb_ipc_probe(struct platform_device *pdev)
+{
+	int rc;
+	struct keembay_ipc_dev *ipc_dev;
+	struct device *dev = &pdev->dev;
+
+	ipc_dev = devm_kzalloc(dev, sizeof(*ipc_dev), GFP_KERNEL);
+	if (!ipc_dev)
+		return -ENOMEM;
+
+	ipc_dev->plat_dev = pdev;
+
+	/* Init the memory used for CPU packets. */
+	rc = init_ipc_rsvd_mem(dev, &ipc_dev->cpu_ipc_mem,
+			       "ipc_cpu_rsvd_mem", RSVD_MEM_IDX_CPU_PKTS);
+	if (rc) {
+		dev_err(dev, "Failed to set up CPU reserved memory.\n");
+		return rc;
+	}
+
+	/* Init the memory used for VPU packets. */
+	rc = init_ipc_rsvd_mem(dev, &ipc_dev->vpu_ipc_mem,
+			       "ipc_vpu_rsvd_mem", RSVD_MEM_IDX_VPU_PKTS);
+	if (rc) {
+		dev_err(dev, "Failed to set up VPU reserved memory.\n");
+		device_unregister(ipc_dev->cpu_ipc_mem.dev);
+		return rc;
+	}
+
+	/* Init the pool of IPC packets to be used for TX. */
+	rc = init_ipc_pkt_pool(ipc_dev);
+	if (rc)
+		goto err_post_rsvd_mem;
+
+	/* Init the only link we have (CPU -> VPU). */
+	rc = ipc_link_init(ipc_dev);
+	if (rc)
+		goto err_post_rsvd_mem;
+
+	platform_set_drvdata(pdev, ipc_dev);
+
+	return 0;
+
+err_post_rsvd_mem:
+	device_unregister(ipc_dev->cpu_ipc_mem.dev);
+	device_unregister(ipc_dev->vpu_ipc_mem.dev);
+
+	return rc;
+}
+
+/* Driver removal. */
+static int kmb_ipc_remove(struct platform_device *pdev)
+{
+	struct keembay_ipc_dev *ipc_dev = platform_get_drvdata(pdev);
+
+	ipc_link_deinit(ipc_dev);
+
+	/*
+	 * No need to de-alloc IPC memory (cpu_ipc_mem and vpu_ipc_mem)
+	 * since it was allocated with dmam_alloc_coherent().
+	 */
+
+	device_unregister(ipc_dev->cpu_ipc_mem.dev);
+	device_unregister(ipc_dev->vpu_ipc_mem.dev);
+
+	return 0;
+}
+
+/*
+ * ipc_vpu_to_virt() - Convert a VPU address to a CPU virtual address.
+ *
+ * @ipc_mem:  The IPC memory region where the VPU address is expected to be
+ *	      mapped to.
+ * @vpu_addr: The VPU address to be converted to a virtual one.
+ *
+ * The VPU can map the DDR differently than the CPU. This function converts
+ * VPU addresses to CPU virtual addresses.
+ *
+ * Return: The corresponding CPU virtual address, or NULL if the VPU address
+ *	   is not in the expected memory range.
+ */
+static void *ipc_vpu_to_virt(const struct ipc_pkt_mem *ipc_mem,
+			     u32 vpu_addr)
+{
+	if (unlikely(vpu_addr < ipc_mem->dma_handle ||
+		     vpu_addr >= (ipc_mem->dma_handle + ipc_mem->size)))
+		return NULL;
+
+	/* Cast to (u8 *) needed since void pointer arithmetic is undefined. */
+	return (u8 *)ipc_mem->vaddr + (vpu_addr - ipc_mem->dma_handle);
+}
+
+/*
+ * ipc_virt_to_vpu() - Convert a CPU virtual address to a VPU address.
+ * @ipc_mem:  [in]  The IPC memory region where the VPU address is expected to
+ *		    be mapped to.
+ * @vaddr:    [in]  The CPU virtual address to be converted to a VPU one.
+ * @vpu_addr: [out] Where to store the computed VPU address.
+ *
+ * The VPU can map the DDR differently than the CPU. This function converts
+ * CPU virtual addresses to VPU addresses.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int ipc_virt_to_vpu(struct ipc_pkt_mem *ipc_mem, void *vaddr,
+			   u32 *vpu_addr)
+{
+	if (unlikely((ipc_mem->dma_handle + ipc_mem->size) > 0xFFFFFFFF))
+		return -EINVAL;
+
+	/* Cast to (u8 *) needed since void pointer arithmetic is undefined. */
+	if (unlikely(vaddr < ipc_mem->vaddr ||
+		     (u8 *)vaddr >= ((u8 *)ipc_mem->vaddr + ipc_mem->size)))
+		return -EINVAL;
+
+	*vpu_addr = ipc_mem->dma_handle + (vaddr - ipc_mem->vaddr);
+
+	return 0;
+}
+
+/**
+ * ipc_mbox_rx_callback() - Process a FIFO entry coming from IPC mailbox.
+ * @cl:		The mailbox client.
+ * @msg:	The FIFO entry to process.
+ *
+ * This function performs the following tasks:
+ * - Check the source node id.
+ * - Process the IPC packet (locate it, validate it, read data info, release
+ *   packet).
+ * - Add an RX Data descriptor (data ptr and data size) to the channel RX queue.
+ */
+static void ipc_mbox_rx_callback(struct mbox_client *cl, void *msg)
+{
+	struct device *dev = cl->dev;
+	struct ipc_link *link = container_of(cl, struct ipc_link, mbox_cl);
+	struct keembay_ipc_dev *ipc_dev = container_of(link,
+						       struct keembay_ipc_dev,
+						       vpu_link);
+	struct kmb_ipc_pkt *ipc_pkt;
+	struct rx_data *rx_data;
+	struct ipc_chan *chan;
+	u32 entry;
+	int idx;
+
+	entry = *((u32 *)msg);
+
+	/* Get IPC packet. */
+	ipc_pkt = ipc_vpu_to_virt(&ipc_dev->vpu_ipc_mem, entry);
+	if (unlikely(!ipc_pkt)) {
+		dev_warn(dev, "RX: Message out of expected memory range: %x\n",
+			 entry);
+
+		/* Return immediately (cannot mark the IPC packet as free). */
+		return;
+	}
+	if (unlikely(ipc_pkt->src_node != KMB_IPC_NODE_LEON_MSS)) {
+		dev_warn(dev, "RX: Message from unexpected source: %d\n",
+			 ipc_pkt->src_node);
+		goto exit;
+	}
+
+	/* Check destination node. */
+	if (unlikely(ipc_pkt->dst_node != KMB_IPC_NODE_CPU)) {
+		dev_warn(dev, "RX: Message is not for this node\n");
+		goto exit;
+	}
+
+	/* Preliminary channel check. */
+	if (unlikely(ipc_pkt->channel >= KMB_IPC_MAX_CHANNELS)) {
+		dev_warn(dev, "RX: Message for invalid channel\n");
+		goto exit;
+	}
+
+	/* Access internal channel struct (this is protected by an SRCU). */
+	idx = srcu_read_lock(&link->srcu_sp[ipc_pkt->channel]);
+	chan = srcu_dereference(link->channels[ipc_pkt->channel],
+				&link->srcu_sp[ipc_pkt->channel]);
+	if (unlikely(!chan)) {
+		srcu_read_unlock(&link->srcu_sp[ipc_pkt->channel], idx);
+		dev_warn(dev, "RX: Message for closed channel.\n");
+		goto exit;
+	}
+	rx_data = kmalloc(sizeof(*rx_data), GFP_ATOMIC);
+	if (unlikely(!rx_data)) {
+		/* If kmalloc fails, we are forced to discard the message. */
+		srcu_read_unlock(&link->srcu_sp[ipc_pkt->channel], idx);
+		dev_err(dev, "RX: Message dropped: Cannot allocate RX Data.\n");
+		goto exit;
+	}
+	/* Read data info. */
+	rx_data->data_vpu_addr = ipc_pkt->data_addr;
+	rx_data->data_size = ipc_pkt->data_size;
+	/*
+	 * Put data info in rx channel queue.
+	 *
+	 * Note: rx_lock is shared with user context only (and this function is
+	 * run in tasklet context), so spin_lock() is enough.
+	 */
+	spin_lock(&chan->rx_lock);
+	list_add_tail(&rx_data->list, &chan->rx_data_list);
+	spin_unlock(&chan->rx_lock);
+
+	/* Wake up thread waiting on recv(). */
+	wake_up_interruptible(&chan->rx_wait_queue);
+
+	/* Exit SRCU region protecting chan struct. */
+	srcu_read_unlock(&link->srcu_sp[ipc_pkt->channel], idx);
+
+exit:
+	barrier(); /* Ensure IPC packet is fully processed before release. */
+	ipc_pkt->status = KMB_IPC_PKT_FREE;
+}
+
+static void ipc_mbox_tx_done(struct mbox_client *cl, void *mssg, int r)
+{
+	struct tx_data *tx_data = container_of(mssg, struct tx_data, entry);
+	struct ipc_link *link = container_of(cl, struct ipc_link, mbox_cl);
+
+	/* Signal that there is one more free slot in mbox tx queue. */
+	complete(&link->mbox_tx_queue);
+
+	/* Save TX result and notify that IPC TX is completed. */
+	tx_data->retv = r;
+	complete(&tx_data->tx_done);
+}
+
+/**
+ * tx_data_send() - Send a TX data element.
+ * @ipc_dev:	The IPC device to use.
+ * @tx_data:	The TX data element to send.
+ */
+static void tx_data_send(struct keembay_ipc_dev *ipc_dev,
+			 struct tx_data *tx_data)
+{
+	struct device *dev = &ipc_dev->plat_dev->dev;
+	struct ipc_link *link = &ipc_dev->vpu_link;
+	struct kmb_ipc_pkt *ipc_pkt;
+	int rc;
+
+	/* Allocate and set IPC packet. */
+	ipc_pkt = ipc_pkt_tx_alloc(&ipc_dev->ipc_pkt_pool);
+	if (unlikely(!ipc_pkt)) {
+		rc = -ENOMEM;
+		goto error;
+	}
+
+	/* Prepare IPC packet. */
+	ipc_pkt->channel = tx_data->chan_id;
+	ipc_pkt->src_node = KMB_IPC_NODE_CPU;
+	ipc_pkt->dst_node = tx_data->dst_node;
+	ipc_pkt->data_addr = tx_data->vpu_addr;
+	ipc_pkt->data_size = tx_data->size;
+
+	/* Ensure changes to IPC Packet are performed before entry is sent. */
+	wmb();
+
+	/* Initialize entry to ipc_pkt VPU address. */
+	rc = ipc_virt_to_vpu(&ipc_dev->cpu_ipc_mem, ipc_pkt, &tx_data->entry);
+
+	/*
+	 * Check validity of IPC packet VPU address (this error should never
+	 * occur if IPC packet region is defined properly in Device Tree).
+	 */
+	if (unlikely(rc)) {
+		dev_err(dev, "Cannot convert IPC buf vaddr to vpu_addr: %p\n",
+			ipc_pkt);
+		rc = -ENXIO;
+		goto error;
+	}
+	if (unlikely(!IS_ALIGNED(tx_data->entry, KMB_IPC_ALIGNMENT))) {
+		dev_err(dev, "Allocated IPC buf is not 64-byte aligned: %p\n",
+			ipc_pkt);
+		rc = -EFAULT;
+		goto error;
+	}
+
+	/*
+	 * Ensure that the mbox TX queue is not full before passing the packet
+	 * to mbox controller with mbox_send_message(). This will ensure that
+	 * the packet won't be dropped by the mbox framework with the error
+	 * "Try increasing MBOX_TX_QUEUE_LEN".
+	 */
+	rc = wait_for_completion_interruptible(&link->mbox_tx_queue);
+	if (unlikely(rc))
+		goto error;
+
+	mbox_send_message(link->mbox_chan, &tx_data->entry);
+
+	return;
+
+error:
+	/* If an error occurred and a packet was allocated, free it. */
+	if (ipc_pkt)
+		ipc_pkt->status = KMB_IPC_PKT_FREE;
+
+	tx_data->retv = rc;
+	complete(&tx_data->tx_done);
+}
+
+/**
+ * tx_data_dequeue() - Dequeue the next TX data waiting for transfer.
+ * @link:  The link from which we dequeue the TX Data.
+ *
+ * The dequeue policy is round-robin between each high-speed channel queue and
+ * the single queue shared by all the general-purpose channels.
+ *
+ * Return: The next TX data waiting to be transferred, or NULL if no TX is
+ *	   pending.
+ */
+static struct tx_data *tx_data_dequeue(struct ipc_link *link)
+{
+	struct tx_data *tx_data;
+	struct tx_queue *queue;
+	int i;
+
+	/*
+	 * TX queues are logically organized in a circular array.
+	 * We go through such an array until we find a non-empty queue.
+	 * We start from where we left since last function invocation.
+	 * If all queues are empty we return NULL.
+	 */
+	for (i = 0; i < ARRAY_SIZE(link->tx_queues); i++) {
+		queue = &link->tx_queues[link->tx_qidx];
+		link->tx_qidx++;
+		if (link->tx_qidx == ARRAY_SIZE(link->tx_queues))
+			link->tx_qidx = 0;
+
+		spin_lock(&queue->lock);
+		tx_data = list_first_entry_or_null(&queue->tx_data_list,
+						   struct tx_data, list);
+		/* If no data in the queue, process the next queue. */
+		if (!tx_data) {
+			spin_unlock(&queue->lock);
+			continue;
+		}
+		/* Otherwise remove rx_data from queue and return. */
+		list_del(&tx_data->list);
+		spin_unlock(&queue->lock);
+
+		return tx_data;
+	}
+
+	return NULL;
+}
+
+/**
+ * tx_data_enqueue() - Enqueue TX data for transfer into the specified link.
+ * @link:	The link the data is enqueued to.
+ * @tx_data:	The TX data to enqueue.
+ */
+static void tx_data_enqueue(struct ipc_link *link, struct tx_data *tx_data)
+{
+	struct tx_queue *queue;
+	int qid;
+
+	/*
+	 * Find the right queue where to put TX data:
+	 * - Each high-speed channel has a dedicated queue, whose index is the
+	 *   same as the channel id (e.g., Channel 1 uses tx_queues[1]).
+	 * - All the general-purpose channels use the same TX queue, which is
+	 *   the last element in the tx_queues array.
+	 *
+	 * Note: tx_queues[] has KMB_IPC_NUM_HS_CHANNELS + 1 elements.
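+	 * For example, channel 3 maps to tx_queues[3], while channel 200 maps
+	 * to tx_queues[KMB_IPC_NUM_HS_CHANNELS], the queue shared by all the
+	 * general-purpose channels.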
+	 */
+	qid = tx_data->chan_id < ARRAY_SIZE(link->tx_queues) ?
+	      tx_data->chan_id : (ARRAY_SIZE(link->tx_queues) - 1);
+
+	queue = &link->tx_queues[qid];
+
+	/*
+	 * Lock shared between user contexts (callers of ipc_send()) and tx
+	 * thread; so spin_lock() is enough.
+	 */
+	spin_lock(&queue->lock);
+	list_add_tail(&tx_data->list, &queue->tx_data_list);
+	spin_unlock(&queue->lock);
+}
+
+/**
+ * tx_data_remove() - Remove TX data element from specified link.
+ * @link:	The link the data is currently enqueued to.
+ * @tx_data:	The TX data element to be removed.
+ *
+ * This function is called by the main send function, when the send is
+ * interrupted or has timed out.
+ */
+static void tx_data_remove(struct ipc_link *link, struct tx_data *tx_data)
+{
+	struct tx_queue *queue;
+	int qid;
+
+	/*
+	 * Find the TX queue where TX data is currently located:
+	 * - Each high-speed channel has a dedicated queue, whose index is the
+	 *   same as the channel id (e.g., Channel 1 uses tx_queues[1]).
+	 * - All the general-purpose channels use the same TX queue, which is
+	 *   the last element in the tx_queues array.
+	 *
+	 * Note: tx_queues[] has KMB_IPC_NUM_HS_CHANNELS + 1 elements.
+	 */
+	qid = tx_data->chan_id < ARRAY_SIZE(link->tx_queues) ?
+	      tx_data->chan_id : (ARRAY_SIZE(link->tx_queues) - 1);
+
+	queue = &link->tx_queues[qid];
+
+	/*
+	 * Lock shared between user contexts (callers of ipc_send()) and tx
+	 * thread; so spin_lock() is enough.
+	 */
+	spin_lock(&queue->lock);
+	list_del(&tx_data->list);
+	spin_unlock(&queue->lock);
+}
+
+/**
+ * tx_thread_fn() - The function run by the TX thread.
+ * @ptr: A pointer to the keembay_ipc_dev struct associated with the thread.
+ *
+ * This thread continuously dequeues and sends TX data elements. The tx_queued
+ * completion is used to pause the loop when all the pending TX data elements
+ * have been transmitted (the send function completes tx_queued every time a
+ * new TX data element is enqueued).
+ */
+static int tx_thread_fn(void *ptr)
+{
+	struct keembay_ipc_dev *ipc_dev = ptr;
+	struct ipc_link *link = &ipc_dev->vpu_link;
+	struct tx_data *tx_data;
+	int rc;
+
+	while (1) {
+		rc = wait_for_completion_interruptible(&link->tx_queued);
+		if (rc || link->tx_stopping)
+			break;
+		tx_data = tx_data_dequeue(link);
+		/*
+		 * We can get a null tx_data if tx_data_remove() has been
+		 * called. Just ignore it and continue.
+		 */
+		if (!tx_data)
+			continue;
+		tx_data_send(ipc_dev, tx_data);
+	}
+
+	/* Wait until kthread_stop() is called. */
+	set_current_state(TASK_INTERRUPTIBLE);
+	while (!kthread_should_stop()) {
+		schedule();
+		set_current_state(TASK_INTERRUPTIBLE);
+	}
+	__set_current_state(TASK_RUNNING);
+
+	return rc;
+}
+
+/* Internal send. */
+static int __ipc_send(struct keembay_ipc_dev *ipc_dev, u8 dst_node,
+		      u16 chan_id, u32 vpu_addr, size_t size)
+{
+	struct ipc_link *link = &ipc_dev->vpu_link;
+	struct tx_data *tx_data;
+	int rc;
+
+	/* Allocate and init TX data. */
+	tx_data = kmalloc(sizeof(*tx_data), GFP_KERNEL);
+	if (!tx_data)
+		return -ENOMEM;
+	tx_data->dst_node = dst_node;
+	tx_data->chan_id = chan_id;
+	tx_data->vpu_addr = vpu_addr;
+	tx_data->size = size;
+	tx_data->retv = 1;
+	INIT_LIST_HEAD(&tx_data->list);
+	init_completion(&tx_data->tx_done);
+
+	/* Add tx_data to tx queues. */
+	tx_data_enqueue(link, tx_data);
+
+	/* Signal that we have a new pending TX. */
+	complete(&link->tx_queued);
+
+	/* Wait until data is transmitted. */
+	rc = wait_for_completion_interruptible(&tx_data->tx_done);
+	if (unlikely(rc)) {
+		tx_data_remove(link, tx_data);
+		goto exit;
+	}
+	rc = tx_data->retv;
+
+exit:
+	kfree(tx_data);
+	return rc;
+}
+
+/*
+ * Driver registration.
+ */
+
+/* Device tree driver match. */
+static const struct of_device_id kmb_ipc_of_match[] = {
+	{
+		.compatible = "intel,keembay-ipc",
+	},
+	{}
+};
+
+/* The IPC driver is a platform device. */
+static struct platform_driver kmb_ipc_driver = {
+	.probe  = kmb_ipc_probe,
+	.remove = kmb_ipc_remove,
+	.driver = {
+		.name = DRV_NAME,
+		.of_match_table = kmb_ipc_of_match,
+	},
+};
+
+module_platform_driver(kmb_ipc_driver);
+
+/*
+ * Perform basic validity check on common API arguments.
+ *
+ * Verify that the specified device is a Keem Bay IPC device and that the node
+ * ID and the channel ID are within the allowed ranges.
+ */
+static int validate_api_args(struct device *dev, u8 node_id, u16 chan_id)
+{
+	if (!dev || dev->driver != &kmb_ipc_driver.driver)
+		return -EINVAL;
+	if (node_id != KMB_IPC_NODE_LEON_MSS) {
+		dev_warn(dev, "Invalid Link ID\n");
+		return -EINVAL;
+	}
+	if (chan_id >= KMB_IPC_MAX_CHANNELS) {
+		dev_warn(dev, "Invalid Channel ID\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/*
+ * IPC Kernel API.
+ */
+
+/**
+ * intel_keembay_ipc_open_channel() - Open an IPC channel.
+ * @dev:	The IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The ID of the channel to be opened.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+int intel_keembay_ipc_open_channel(struct device *dev, u8 node_id, u16 chan_id)
+{
+	struct ipc_chan *chan, *cur_chan;
+	struct keembay_ipc_dev *ipc_dev;
+	struct ipc_link *link;
+	int rc;
+
+	rc = validate_api_args(dev, node_id, chan_id);
+	if (rc)
+		return rc;
+
+	ipc_dev = dev_get_drvdata(dev);
+	link = &ipc_dev->vpu_link;
+
+	/* Create channel before getting lock. */
+	chan = kzalloc(sizeof(*chan), GFP_KERNEL);
+	if (!chan)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&chan->rx_data_list);
+	spin_lock_init(&chan->rx_lock);
+	init_waitqueue_head(&chan->rx_wait_queue);
+
+	/* Add channel to the channel array (if not already present). */
+	spin_lock(&link->chan_lock);
+	cur_chan = rcu_dereference_protected(link->channels[chan_id],
+					     lockdep_is_held(&link->chan_lock));
+	if (cur_chan) {
+		spin_unlock(&link->chan_lock);
+		kfree(chan);
+		return -EEXIST;
+	}
+	rcu_assign_pointer(link->channels[chan_id], chan);
+	spin_unlock(&link->chan_lock);
+
+	return 0;
+}
+EXPORT_SYMBOL(intel_keembay_ipc_open_channel);
+
+/**
+ * intel_keembay_ipc_close_channel() - Close an IPC channel.
+ * @dev:	The IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The ID of the channel to be closed.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+int intel_keembay_ipc_close_channel(struct device *dev, u8 node_id, u16 chan_id)
+{
+	struct keembay_ipc_dev *ipc_dev;
+	struct ipc_link *link;
+	int rc;
+
+	rc = validate_api_args(dev, node_id, chan_id);
+	if (rc)
+		return rc;
+
+	ipc_dev = dev_get_drvdata(dev);
+	link = &ipc_dev->vpu_link;
+
+	rc = channel_close(link, chan_id);
+	if (!rc)
+		dev_info(dev, "Channel was already closed\n");
+
+	return 0;
+}
+EXPORT_SYMBOL(intel_keembay_ipc_close_channel);
+
+/**
+ * intel_keembay_ipc_send() - Send data via IPC.
+ * @dev:	The IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The IPC channel to be used to send the message.
+ * @vpu_addr:	The VPU address of the data to be transferred.
+ * @size:	The size of the data to be transferred.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+int intel_keembay_ipc_send(struct device *dev, u8 node_id, u16 chan_id,
+			   u32 vpu_addr, size_t size)
+{
+	struct keembay_ipc_dev *ipc_dev;
+	struct ipc_link *link;
+	struct ipc_chan *chan;
+	int idx, rc;
+
+	rc = validate_api_args(dev, node_id, chan_id);
+	if (rc)
+		return rc;
+
+	ipc_dev = dev_get_drvdata(dev);
+	link = &ipc_dev->vpu_link;
+	/*
+	 * Start Sleepable RCU critical section (this prevents close() from
+	 * destroying the channels struct while we are sending data)
+	 */
+	idx = srcu_read_lock(&link->srcu_sp[chan_id]);
+
+	/* Get channel. */
+	chan = srcu_dereference(link->channels[chan_id],
+				&link->srcu_sp[chan_id]);
+	if (unlikely(!chan)) {
+		/* The channel is closed. */
+		rc = -ENOENT;
+		goto exit;
+	}
+
+	rc = __ipc_send(ipc_dev, node_id, chan_id, vpu_addr, size);
+
+exit:
+	/* End sleepable RCU critical section. */
+	srcu_read_unlock(&link->srcu_sp[chan_id], idx);
+	return rc;
+}
+EXPORT_SYMBOL(intel_keembay_ipc_send);
+
+/**
+ * intel_keembay_ipc_recv() - Read data via IPC
+ * @dev:	The IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The IPC channel to read from.
+ * @vpu_addr:	[out] The VPU address of the received data.
+ * @size:	[out] Where to store the size of the received data.
+ * @timeout:	How long (in ms) the function will block waiting for an IPC
+ *		message; if U32_MAX it will block indefinitely; if 0 it
+ *		will not block.
+ *
+ * Return:	0 on success, negative error code otherwise
+ */
+int intel_keembay_ipc_recv(struct device *dev, u8 node_id, u16 chan_id,
+			   u32 *vpu_addr, size_t *size, u32 timeout)
+{
+	struct keembay_ipc_dev *ipc_dev;
+	struct rx_data *rx_entry;
+	struct ipc_link *link;
+	struct ipc_chan *chan;
+	int idx, rc;
+
+	rc = validate_api_args(dev, node_id, chan_id);
+	if (rc)
+		return rc;
+
+	if (!vpu_addr || !size)
+		return -EINVAL;
+
+	ipc_dev = dev_get_drvdata(dev);
+	link = &ipc_dev->vpu_link;
+
+	/*
+	 * Start Sleepable RCU critical section (this prevents close() from
+	 * destroying the channels struct while we are using it)
+	 */
+	idx = srcu_read_lock(&link->srcu_sp[chan_id]);
+
+	/* Get channel. */
+	chan = srcu_dereference(link->channels[chan_id],
+				&link->srcu_sp[chan_id]);
+	if (unlikely(!chan)) {
+		rc = -ENOENT;
+		goto err;
+	}
+	/*
+	 * Get the lock protecting rx_data_list; the lock will be released by
+	 * wait_event_*_lock_irq() before going to sleep and automatically
+	 * reacquired after wake up.
+	 *
+	 * NOTE: lock_irq() is needed because rx_lock is also used by the RX
+	 * tasklet; also lock_bh() is not used because there is no
+	 * wait_event_interruptible_lock_bh().
+	 */
+	spin_lock_irq(&chan->rx_lock);
+	/*
+	 * Wait for RX data.
+	 *
+	 * Note: wait_event_interruptible_lock_irq_timeout() has different
+	 * return values than wait_event_interruptible_lock_irq().
+	 *
+	 * The following if/then branch ensures that return values are
+	 * consistent for both cases, that is:
+	 * - rc == 0 only if the wait was successful (i.e., we were notified
+	 *   of a message or of a channel closure)
+	 * - rc < 0 if an error occurred (we got interrupted or the timeout
+	 *   expired).
+	 */
+	if (timeout == U32_MAX) {
+		rc = wait_event_interruptible_lock_irq(chan->rx_wait_queue,
+						       !list_empty(&chan->rx_data_list) ||
+						       chan->closing,
+						       chan->rx_lock);
+	} else {
+		rc = wait_event_interruptible_lock_irq_timeout(chan->rx_wait_queue,
+							       !list_empty(&chan->rx_data_list) ||
+							       chan->closing,
+							       chan->rx_lock,
+							       msecs_to_jiffies(timeout));
+		if (!rc)
+			rc = -ETIME;
+		if (rc > 0)
+			rc = 0;
+	}
+
+	/* Check if the channel was closed while waiting. */
+	if (chan->closing)
+		rc = -EPIPE;
+	if (rc) {
+		spin_unlock_irq(&chan->rx_lock);
+		goto err;
+	}
+
+	/* Extract RX entry. */
+	rx_entry = list_first_entry(&chan->rx_data_list, struct rx_data, list);
+	list_del(&rx_entry->list);
+	spin_unlock_irq(&chan->rx_lock);
+
+	/* Set output parameters. */
+	*vpu_addr =  rx_entry->data_vpu_addr;
+	*size = rx_entry->data_size;
+
+	/* Free RX entry. */
+	kfree(rx_entry);
+
+err:
+	/* End sleepable RCU critical section. */
+	srcu_read_unlock(&link->srcu_sp[chan_id], idx);
+	return rc;
+}
+EXPORT_SYMBOL(intel_keembay_ipc_recv);
+
+MODULE_DESCRIPTION("Keem Bay IPC Driver");
+MODULE_AUTHOR("Daniele Alessandrelli <daniele.alessandrelli@intel.com>");
+MODULE_AUTHOR("Paul Murphy <paul.j.murphy@intel.com>");
+MODULE_LICENSE("GPL");
diff --git a/include/linux/soc/intel/keembay-ipc.h b/include/linux/soc/intel/keembay-ipc.h
new file mode 100644
index 000000000000..ac7748d1595f
--- /dev/null
+++ b/include/linux/soc/intel/keembay-ipc.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Keem Bay IPC Linux Kernel API
+ *
+ * Copyright (C) 2018-2020 Intel Corporation
+ */
+
+#ifndef __KEEMBAY_IPC_H
+#define __KEEMBAY_IPC_H
+
+#include <linux/types.h>
+
+/* The possible node IDs. */
+enum {
+	KMB_IPC_NODE_CPU = 0,
+	KMB_IPC_NODE_LEON_MSS,
+};
+
+int intel_keembay_ipc_open_channel(struct device *dev, u8 node_id, u16 chan_id);
+
+int intel_keembay_ipc_close_channel(struct device *dev, u8 node_id,
+				    u16 chan_id);
+
+int intel_keembay_ipc_send(struct device *dev, u8 node_id, u16 chan_id,
+			   u32 vpu_addr, size_t size);
+
+int intel_keembay_ipc_recv(struct device *dev, u8 node_id, u16 chan_id,
+			   u32 *vpu_addr, size_t *size, u32 timeout);
+
+#endif /* __KEEMBAY_IPC_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (4 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 05/34] keembay-ipc: Add Keem Bay IPC module mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-10 17:18   ` Rob Herring
  2021-01-11 19:24   ` Rob Herring
  2021-01-08 21:25 ` [PATCH v2 07/34] keembay-vpu-ipc: Add Keem Bay VPU IPC module mgross
                   ` (27 subsequent siblings)
  33 siblings, 2 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Paul Murphy, devicetree, Daniele Alessandrelli

From: Paul Murphy <paul.j.murphy@intel.com>

Add DT bindings documentation for the Keem Bay VPU IPC driver.

Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Paul Murphy <paul.j.murphy@intel.com>
Co-developed-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
---
 .../soc/intel/intel,keembay-vpu-ipc.yaml      | 153 ++++++++++++++++++
 1 file changed, 153 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml

diff --git a/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml b/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
new file mode 100644
index 000000000000..cd1c4abe8bc9
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
@@ -0,0 +1,153 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright (c) Intel Corporation. All rights reserved.
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/soc/intel/intel,keembay-vpu-ipc.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Keem Bay VPU IPC
+
+maintainers:
+  - Paul Murphy <paul.j.murphy@intel.com>
+
+description:
+  The VPU IPC driver facilitates loading of firmware, control, and communication
+  with the VPU over the IPC FIFO in the Intel Keem Bay SoC.
+
+properties:
+  compatible:
+    const: intel,keembay-vpu-ipc
+
+  reg:
+    items:
+      - description: NCE WDT registers
+      - description: NCE TIM_GEN_CONFIG registers
+      - description: MSS WDT registers
+      - description: MSS TIM_GEN_CONFIG registers
+
+  reg-names:
+    items:
+      - const: nce_wdt
+      - const: nce_tim_cfg
+      - const: mss_wdt
+      - const: mss_tim_cfg
+
+  memory-region:
+    items:
+      - description: reference to the VPU reserved memory region
+      - description: reference to the X509 reserved memory region
+      - description: reference to the MSS IPC area
+
+  clocks:
+    items:
+      - description: cpu clock
+      - description: pll 0 out 0 rate
+      - description: pll 0 out 1 rate
+      - description: pll 0 out 2 rate
+      - description: pll 0 out 3 rate
+      - description: pll 1 out 0 rate
+      - description: pll 1 out 1 rate
+      - description: pll 1 out 2 rate
+      - description: pll 1 out 3 rate
+      - description: pll 2 out 0 rate
+      - description: pll 2 out 1 rate
+      - description: pll 2 out 2 rate
+      - description: pll 2 out 3 rate
+
+  clock-names:
+    items:
+      - const: cpu_clock
+      - const: pll_0_out_0
+      - const: pll_0_out_1
+      - const: pll_0_out_2
+      - const: pll_0_out_3
+      - const: pll_1_out_0
+      - const: pll_1_out_1
+      - const: pll_1_out_2
+      - const: pll_1_out_3
+      - const: pll_2_out_0
+      - const: pll_2_out_1
+      - const: pll_2_out_2
+      - const: pll_2_out_3
+
+  interrupts:
+    items:
+      - description: number of NCE sub-system WDT timeout IRQ
+      - description: number of MSS sub-system WDT timeout IRQ
+
+  interrupt-names:
+    items:
+      - const: nce_wdt
+      - const: mss_wdt
+
+  intel,keembay-vpu-ipc-nce-wdt-redirect:
+    $ref: "/schemas/types.yaml#/definitions/uint32"
+    description:
+      Number to which we will request that the NCE sub-system
+      redirects its WDT timeout IRQ.
+
+  intel,keembay-vpu-ipc-mss-wdt-redirect:
+    $ref: "/schemas/types.yaml#/definitions/uint32"
+    description:
+      Number to which we will request that the MSS sub-system
+      redirects its WDT timeout IRQ.
+
+  intel,keembay-vpu-ipc-imr:
+    $ref: "/schemas/types.yaml#/definitions/uint32"
+    description:
+      IMR (isolated memory region) number that we will request the
+      runtime service to use to protect the VPU memory region
+      before authentication.
+
+  intel,keembay-vpu-ipc-id:
+    $ref: "/schemas/types.yaml#/definitions/uint32"
+    description: The VPU ID to be passed to the VPU firmware.
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    vpu-ipc@3f00209c {
+        compatible = "intel,keembay-vpu-ipc";
+        reg = <0x3f00209c 0x10>,
+              <0x3f003008 0x4>,
+              <0x2082009c 0x10>,
+              <0x20821008 0x4>;
+        reg-names = "nce_wdt",
+                    "nce_tim_cfg",
+                    "mss_wdt",
+                    "mss_tim_cfg";
+        memory-region = <&vpu_reserved>,
+                        <&vpu_x509_reserved>,
+                        <&mss_ipc_reserved>;
+        clocks = <&scmi_clk 0>,
+                 <&scmi_clk 0>,
+                 <&scmi_clk 1>,
+                 <&scmi_clk 2>,
+                 <&scmi_clk 3>,
+                 <&scmi_clk 4>,
+                 <&scmi_clk 5>,
+                 <&scmi_clk 6>,
+                 <&scmi_clk 7>,
+                 <&scmi_clk 8>,
+                 <&scmi_clk 9>,
+                 <&scmi_clk 10>,
+                 <&scmi_clk 11>;
+        clock-names = "cpu_clock",
+                      "pll_0_out_0", "pll_0_out_1",
+                      "pll_0_out_2", "pll_0_out_3",
+                      "pll_1_out_0", "pll_1_out_1",
+                      "pll_1_out_2", "pll_1_out_3",
+                      "pll_2_out_0", "pll_2_out_1",
+                      "pll_2_out_2", "pll_2_out_3";
+        interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "nce_wdt", "mss_wdt";
+        intel,keembay-vpu-ipc-nce-wdt-redirect = <63>;
+        intel,keembay-vpu-ipc-mss-wdt-redirect = <47>;
+        intel,keembay-vpu-ipc-imr = <9>;
+        intel,keembay-vpu-ipc-id = <0>;
+    };
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 07/34] keembay-vpu-ipc: Add Keem Bay VPU IPC module
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (5 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 08/34] misc: xlink-pcie: Add documentation for XLink PCIe driver mgross
                   ` (26 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Paul Murphy, Daniele Alessandrelli

From: Paul Murphy <paul.j.murphy@intel.com>

The Intel Keem Bay SoC contains a Vision Processing Unit (VPU) to enable
machine vision and other applications.

Enable Linux to control the VPU processor and provide an interface to
the Keem Bay IPC for communicating with the VPU firmware.

Specifically the driver provides the following functionality to other
kernel components:
- Starting (including loading the VPU firmware) / Stopping / Rebooting
  the VPU.
- Getting notifications of VPU events (e.g., WDT events).
- Communicating with the VPU via the Keem Bay IPC mechanism.

In addition to the above, the driver also exposes SoC information (like
stepping, device ID, etc.) to user-space via sysfs. Specifically, the
following sysfs files are provided:
- /sys/firmware/keembay-vpu-ipc/device_id
- /sys/firmware/keembay-vpu-ipc/feature_exclusion
- /sys/firmware/keembay-vpu-ipc/hardware_id
- /sys/firmware/keembay-vpu-ipc/sku
- /sys/firmware/keembay-vpu-ipc/stepping

Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Paul Murphy <paul.j.murphy@intel.com>
Co-developed-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 MAINTAINERS                               |    9 +
 drivers/soc/intel/Kconfig                 |   15 +
 drivers/soc/intel/Makefile                |    3 +-
 drivers/soc/intel/keembay-vpu-ipc.c       | 2036 +++++++++++++++++++++
 include/linux/soc/intel/keembay-vpu-ipc.h |   62 +
 5 files changed, 2124 insertions(+), 1 deletion(-)
 create mode 100644 drivers/soc/intel/keembay-vpu-ipc.c
 create mode 100644 include/linux/soc/intel/keembay-vpu-ipc.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 422047edbf0c..2c118fcab623 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9070,6 +9070,15 @@ F:	Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
 F:	drivers/soc/intel/keembay-ipc.c
 F:	include/linux/soc/intel/keembay-ipc.h
 
+INTEL KEEM BAY VPU IPC DRIVER
+M:	Paul J Murphy <paul.j.murphy@intel.com>
+M:	Daniele Alessandrelli <daniele.alessandrelli@intel.com>
+M:	Mark Gross <mgross@linux.intel.com>
+S:	Supported
+F:	Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
+F:	drivers/soc/intel/keembay-vpu-ipc.c
+F:	include/linux/soc/intel/keembay-vpu-ipc.h
+
 INTEL MANAGEMENT ENGINE (mei)
 M:	Tomas Winkler <tomas.winkler@intel.com>
 L:	linux-kernel@vger.kernel.org
diff --git a/drivers/soc/intel/Kconfig b/drivers/soc/intel/Kconfig
index a575e31e47b4..ebd23ea57d04 100644
--- a/drivers/soc/intel/Kconfig
+++ b/drivers/soc/intel/Kconfig
@@ -15,4 +15,19 @@ config KEEMBAY_IPC
 
 	  Select this if you are compiling the Kernel for an Intel SoC that
 	  includes the Intel Vision Processing Unit (VPU) such as Keem Bay.
+
+config KEEMBAY_VPU_IPC
+	tristate "Intel Keem Bay VPU IPC Driver"
+	depends on KEEMBAY_IPC
+	depends on HAVE_ARM_SMCCC
+	help
+	  This option enables support for loading and communicating with
+	  the firmware on the Vision Processing Unit (VPU) of the Keem Bay
+	  SoC. The driver depends on the Keem Bay IPC driver for
+	  communication and on the secure world monitor software to
+	  control the VPU state.
+
+	  Select this if you are compiling the Kernel for an Intel SoC that
+	  includes the Intel Vision Processing Unit (VPU) such as Keem Bay.
+
 endmenu
diff --git a/drivers/soc/intel/Makefile b/drivers/soc/intel/Makefile
index ecf0246e7822..363a81848843 100644
--- a/drivers/soc/intel/Makefile
+++ b/drivers/soc/intel/Makefile
@@ -1,4 +1,5 @@
 #
 # Makefile for Keem Bay IPC Linux driver
 #
-obj-$(CONFIG_KEEMBAY_IPC) += keembay-ipc.o
+obj-$(CONFIG_KEEMBAY_IPC)	+= keembay-ipc.o
+obj-$(CONFIG_KEEMBAY_VPU_IPC)	+= keembay-vpu-ipc.o
diff --git a/drivers/soc/intel/keembay-vpu-ipc.c b/drivers/soc/intel/keembay-vpu-ipc.c
new file mode 100644
index 000000000000..bcf9bb4a225a
--- /dev/null
+++ b/drivers/soc/intel/keembay-vpu-ipc.c
@@ -0,0 +1,2036 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Keem Bay VPU IPC Driver.
+ *
+ * Copyright (c) 2018-2020 Intel Corporation.
+ *
+ * The purpose of this driver is to facilitate booting, control and
+ * communication with the VPU IP on the Keem Bay SoC.
+ *
+ * Specifically the driver provides the following functionality to other kernel
+ * components:
+ * - Loading the VPU firmware into DDR for the VPU to execute.
+ * - Starting / Stopping / Rebooting the VPU.
+ * - Getting notifications of VPU events (e.g., WDT events).
+ * - Communicating with the VPU using the Keem Bay IPC mechanism.
+ *
+ * In addition to the above, the driver also exposes SoC information (like
+ * stepping, device ID, etc.) to user-space via sysfs.
+ *
+ *
+ * VPU Firmware loading
+ * --------------------
+ *
+ * The VPU Firmware consists of both the RTOS and the application code meant to
+ * be run by the VPU.
+ *
+ * The VPU Firmware is loaded into DDR using the Linux Firmware API. The
+ * firmware is loaded into a specific reserved memory region in DDR and
+ * executed by the VPU directly from there.
+ *
+ * The VPU Firmware binary is expected to have the following format:
+ *
+ * +------------------+ 0x0
+ * | Header           |
+ * +------------------+ 0x1000
+ * | FW Version Area  |
+ * +------------------+ 0x2000
+ * | FW Image         |
+ * +------------------+ 0x2000 + FW image size
+ * | x509 Certificate |
+ * +------------------+
+ *
+ * Note: the x509 Certificate is ignored for now.
+ *
+ * As part of the firmware loading process, the driver performs the following
+ * operations:
+ * - It parses the VPU firmware binary.
+ * - It loads the FW version area to the DDR location expected by the VPU
+ *   firmware and specified in the FW header.
+ * - It loads the FW image to the location specified in the FW header.
+ * - It prepares the boot parameters to be passed to the VPU firmware and loads
+ *   them at the location specified in the FW header.
+ *
+ * VPU Start / Stop / Reboot
+ * -------------------------
+ *
+ * The VPU is started / stopped by the SoC firmware, not by this driver
+ * directly. This driver just calls the VPU_BOOT and VPU_STOP SMC SiP services
+ * provided by the SoC firmware.
+ *
+ * Reboot is performed by stopping and re-starting the VPU, including
+ * re-loading the VPU firmware (this is because the VPU firmware .data and .bss
+ * sections need to be re-initialized).
+ *
+ * Sysfs interface
+ * ---------------
+ *
+ * This module exposes SoC information via sysfs. The following sysfs files are
+ * created by this module:
+ * - /sys/firmware/keembay-vpu-ipc/device_id
+ * - /sys/firmware/keembay-vpu-ipc/feature_exclusion
+ * - /sys/firmware/keembay-vpu-ipc/hardware_id
+ * - /sys/firmware/keembay-vpu-ipc/sku
+ * - /sys/firmware/keembay-vpu-ipc/stepping
+ */
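+
+/*
+ * Starting and stopping the VPU therefore boils down to SiP SMC calls into the
+ * SoC firmware. A rough sketch of the boot request (illustrative only; the
+ * exact argument encoding is owned by the SoC firmware and 'fw_entry_addr'
+ * stands for the physical address the VPU firmware image was loaded to):
+ *
+ *	struct arm_smccc_res res;
+ *
+ *	arm_smccc_smc(KMB_SIP_SVC_VPU_BOOT, fw_entry_addr, 0, 0, 0, 0, 0, 0,
+ *		      &res);
+ *	if (res.a0)
+ *		return -EIO;
+ *
+ * By SMCCC convention res.a0 carries the status returned by the firmware.
+ */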
+
+#include <linux/arm-smccc.h>
+#include <linux/clk.h>
+#include <linux/dma-direct.h>
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <linux/soc/intel/keembay-ipc.h>
+#include <linux/soc/intel/keembay-vpu-ipc.h>
+
+/* Function ID for the SiP SMC to boot the VPU */
+#define KMB_SIP_SVC_VPU_BOOT		0xFF10
+
+/* Function ID for the SiP SMC to stop the VPU */
+#define KMB_SIP_SVC_VPU_STOP		0xFF16
+
+/* Device tree "memory-region" for VPU firmware area */
+#define VPU_IPC_FW_AREA_IDX		0
+
+/* Device tree region for VPU driver to store X509 region */
+#define VPU_IPC_X509_AREA_IDX		1
+
+/* Device tree "memory-region" for MSS IPC header area */
+#define MSS_IPC_AREA_IDX		2
+
+/*
+ * These are the parameters of the ready message to be received
+ * from the VPU when it is booted correctly.
+ */
+#define READY_MESSAGE_IPC_CHANNEL	0xA
+
+/* Ready message timeout, in ms */
+#define READY_MESSAGE_TIMEOUT_MS	2000
+
+/* Ready message 'physical data address', which is actually a command. */
+#define READY_MESSAGE_EXPECTED_PADDR	0x424f4f54
+
+/* Ready message size */
+#define READY_MESSAGE_EXPECTED_SIZE	0
+
+/* Size of version information in the header */
+#define VPU_VERSION_SIZE		32
+
+/* Version of header that this driver supports. */
+#define HEADER_VERSION_SUPPORTED	0x1
+
+/* Maximum size allowed for firmware version region */
+#define MAX_FIRMWARE_VERSION_SIZE	0x1000
+
+/* Size allowed for header region */
+#define MAX_HEADER_SIZE			0x1000
+
+/* VPU reset vector must be aligned to 4kB. */
+#define VPU_RESET_VECTOR_ALIGNMENT	0x1000
+
+/* Watchdog timer reset trigger */
+#define TIM_WATCHDOG			0x0
+
+/* Watchdog counter enable register */
+#define TIM_WDOG_EN			0x8
+
+/* Write access to protected registers */
+#define TIM_SAFE			0xC
+
+/* Watchdog timer count value */
+#define TIM_WATCHDOG_RESET_VALUE	0xFFFFFFFF
+
+/* Watchdog timer safe value */
+#define TIM_SAFE_ENABLE			0xf1d0dead
+
+/* Watchdog timeout interrupt clear bit */
+#define TIM_GEN_CONFIG_WDOG_TO_INT_CLR	BIT(9)
+
+/* Magic number for the boot parameters. */
+#define BOOT_PARAMS_MAGIC_NUMBER	0x00010000
+
+/* Maximum size of string of form "pll_i_out_j" */
+#define PLL_STRING_SIZE			128
+
+/* Number of PLLs */
+#define NUM_PLLS			3
+
+/* Every PLL has this many outputs. */
+#define NUM_PLL_OUTPUTS			4
+
+/* SoC SKU length, in bytes. */
+#define SOC_INFO_SKU_BYTES		6
+
+/* SoC stepping length, in bytes. */
+#define SOC_INFO_STEPPING_BYTES		2
+
+/**
+ * struct boot_parameters - Boot parameters passed to the VPU.
+ * @magic_number:		Magic number to indicate structure populated
+ * @vpu_id:			ID to be passed to the VPU firmware.
+ * @reserved_0:			Reserved memory for other 'header' information
+ * @cpu_frequency_hz:		Frequency that the CPU is running at
+ * @pll0_out:			Frequency of each of the outputs of PLL 0
+ * @pll1_out:			Frequency of each of the outputs of PLL 1
+ * @pll2_out:			Frequency of each of the outputs of PLL 2
+ * @reserved_1:			Reserved memory for other clock frequencies
+ * @mss_ipc_header_address:	Base address of MSS IPC header region
+ * @mss_ipc_header_area_size:	Size of MSS IPC header region
+ * @mmio_address:		MMIO region for VPU communication
+ * @mmio_area_size:		Size of MMIO region for VPU communication
+ * @reserved_2:			Reserved memory for other memory regions
+ * @mss_wdt_to_irq_a53_redir:	MSS WDT timeout IRQ is redirected to this ARM IRQ
+ * @nce_wdt_to_irq_a53_redir:	NCE WDT timeout IRQ is redirected to this ARM IRQ
+ * @vpu_to_host_irq:		VPU to host notification IRQ
+ * @reserved_3:			Reserved memory for other configurations
+ * @a53ss_version_id:		SoC A53SS_VERSION_ID register value
+ * @si_stepping:		Silicon stepping, 2 characters
+ * @device_id:			64 bits of device ID info from fuses
+ * @feature_exclusion:		64 bits of feature exclusion info from fuses
+ * @sku:			64-bit SKU identifier.
+ * @reserved_4:			Reserved memory for other information
+ * @reserved_5:			Unused/reserved memory
+ */
+struct boot_parameters {
+	/* Header: 0x0 - 0x1F */
+	u32 magic_number;
+	u32 vpu_id;
+	u32 reserved_0[6];
+	/* Clock frequencies: 0x20 - 0xFF */
+	u32 cpu_frequency_hz;
+	u32 pll0_out[NUM_PLL_OUTPUTS];
+	u32 pll1_out[NUM_PLL_OUTPUTS];
+	u32 pll2_out[NUM_PLL_OUTPUTS];
+	u32 reserved_1[43];
+	/* Memory regions: 0x100 - 0x1FF */
+	u64 mss_ipc_header_address;
+	u32 mss_ipc_header_area_size;
+	u64 mmio_address;
+	u32 mmio_area_size;
+	u32 reserved_2[58];
+	/* IRQ re-direct numbers: 0x200 - 0x2FF */
+	u32 mss_wdt_to_irq_a53_redir;
+	u32 nce_wdt_to_irq_a53_redir;
+	u32 vpu_to_host_irq;
+	u32 reserved_3[61];
+	/* Silicon information: 0x300 - 0x3FF */
+	u32 a53ss_version_id;
+	u32 si_stepping;
+	u64 device_id;
+	u64 feature_exclusion;
+	u64 sku;
+	u32 reserved_4[56];
+	/* Unused/reserved: 0x400 - 0xFFF */
+	u32 reserved_5[0x300];
+} __packed;
+
+/**
+ * struct firmware_header - Firmware header information
+ * @header_ver:	Header version; it dictates the structure of the
+ *			remainder of the firmware image, including the header
+ *			itself.
+ * @image_format:	Image format; it defines how the loader will handle
+ *			the firmware image.
+ * @image_load_addr:	VPU address where the firmware image must be loaded to.
+ * @image_size:		Size of the image.
+ * @entry_point:	Entry point for the VPU firmware (this is a VPU
+ *			address).
+ * @vpu_ver:		Version of the VPU firmware.
+ * @compression_type:	Type of compression used for the VPU firmware.
+ * @fw_ver_load_addr:	VPU address where the FW version area of the binary
+ *			must be loaded.
+ * @fw_ver_size:	Size of the FW version.
+ * @config_load_addr:	VPU IPC driver will populate the 4kB of configuration
+ *			data to this address.
+ */
+struct firmware_header {
+	u32 header_ver;
+	u32 image_format;
+	u64 image_load_addr;
+	u32 image_size;
+	u64 entry_point;
+	u8  vpu_ver[VPU_VERSION_SIZE];
+	u32 compression_type;
+	u64 fw_ver_load_addr;
+	u32 fw_ver_size;
+	u64 config_load_addr;
+} __packed;
+
+/**
+ * struct vpu_mem - Information about reserved memory shared with VPU.
+ * @vaddr:	The virtual address of the memory region.
+ * @paddr:	The (CPU) physical address of the memory region.
+ * @vpu_addr:	The VPU address of the memory region.
+ * @size:	The size of the memory region.
+ */
+struct vpu_mem {
+	void *vaddr;
+	phys_addr_t paddr;
+	dma_addr_t vpu_addr;
+	size_t size;
+};
+
+/**
+ * struct atf_mem - Information about reserved memory shared with ATF.
+ * @vaddr:	The virtual address of the memory region.
+ * @paddr:	The physical address of the memory region.
+ * @size:	The size of the memory region.
+ */
+struct atf_mem {
+	void __iomem *vaddr;
+	phys_addr_t paddr;
+	size_t size;
+};
+
+/**
+ * struct vpu_ipc_dev - The VPU IPC device structure.
+ * @pdev:		Platform device associated with this VPU IPC device.
+ * @state:		The current state of the device's finite state machine.
+ * @reserved_mem:	VPU firmware reserved memory region. The VPU firmware,
+ *			VPU firmware version and the boot parameters are loaded
+ *			inside this region.
+ * @x509_mem:		x509 reserved memory region.
+ * @mss_ipc_mem:	The reserved memory from which the VPU is expected to
+ *			allocate its own IPC buffers.
+ * @boot_vec_paddr:	The VPU entry point (as specified in the VPU FW binary).
+ * @boot_params:	Pointer to the VPU boot parameters.
+ * @fw_res:		The memory region where the VPU FW was loaded.
+ * @ready_message_task: The kthread instantiated to handle the reception of the
+ *			VPU ready message.
+ * @lock:		Spinlock protecting @state.
+ * @cpu_clock:		The main clock powering the VPU IP.
+ * @pll:		Array of PLL clocks.
+ * @nce_irq:		IRQ number of the A53 re-direct IRQ used for receiving
+ *			the NCE WDT timeout interrupt.
+ * @mss_irq:		IRQ number of the A53 re-direct IRQ which will be used
+ *			for receiving the MSS WDT timeout interrupt.
+ * @nce_wdt_redirect:   Re-direct IRQ for NCE ICB.
+ * @mss_wdt_redirect:	Re-direct IRQ for MSS ICB.
+ * @imr:		Isolated Memory Region (IMR) to be used to protect the
+ *			loaded VPU firmware.
+ * @vpu_id:		The ID of the VPU associated with this device.
+ * @nce_wdt_reg:	NCE WDT registers.
+ * @nce_tim_cfg_reg:	NCE TIM registers.
+ * @mss_wdt_reg:	MSS WDT registers.
+ * @mss_tim_cfg_reg:	MSS TIM registers.
+ * @nce_wdt_count:	Number of NCE WDT timeout events that occurred since
+ *			device probing.
+ * @mss_wdt_count:	Number of MSS WDT timeout events that occurred since
+ *			device probing.
+ * @ready_queue:	Wait queue for waiting on VPU to be ready.
+ * @ipc_dev:		The IPC device to use for IPC communication.
+ * @firmware_name:	The name of the firmware binary to be loaded.
+ * @callback:		The callback executed when CONNECT / DISCONNECT events
+ *			happen.
+ */
+struct vpu_ipc_dev {
+	struct platform_device *pdev;
+	enum intel_keembay_vpu_state state;
+	struct vpu_mem reserved_mem;
+	struct atf_mem x509_mem;
+	struct vpu_mem mss_ipc_mem;
+	u64 boot_vec_paddr;
+	struct boot_parameters *boot_params;
+	struct resource fw_res;
+	struct task_struct *ready_message_task;
+	spinlock_t lock; /* Protects the 'state' field above. */
+	struct clk *cpu_clock;
+	struct clk *pll[NUM_PLLS][NUM_PLL_OUTPUTS];
+	int nce_irq;
+	int mss_irq;
+	u32 nce_wdt_redirect;
+	u32 mss_wdt_redirect;
+	u32 imr;
+	u32 vpu_id;
+	void __iomem *nce_wdt_reg;
+	void __iomem *nce_tim_cfg_reg;
+	void __iomem *mss_wdt_reg;
+	void __iomem *mss_tim_cfg_reg;
+	unsigned int nce_wdt_count;
+	unsigned int mss_wdt_count;
+	wait_queue_head_t ready_queue;
+	struct device *ipc_dev;
+	char *firmware_name;
+	void (*callback)(struct device *dev, enum intel_keembay_vpu_event);
+};
+
+/**
+ * struct vpu_ipc_soc_info - VPU SKU information.
+ * @device_id:		Value of device ID e-fuse.
+ * @feature_exclusion:	Value of feature exclusion e-fuse.
+ * @hardware_id:	Hardware identifier.
+ * @stepping:		Silicon stepping.
+ * @sku:		SKU identifier.
+ *
+ * SoC information read from the device-tree and exported via sysfs.
+ */
+struct vpu_ipc_soc_info {
+	u64 device_id;
+	u64 feature_exclusion;
+	u32 hardware_id;
+	u8 stepping[SOC_INFO_STEPPING_BYTES];
+	u8 sku[SOC_INFO_SKU_BYTES];
+};
+
+/**
+ * enum keembay_vpu_event - Internal events handled by the driver state machine.
+ * @KEEMBAY_VPU_EVENT_BOOT:		VPU booted.
+ * @KEEMBAY_VPU_EVENT_BOOT_FAILED:	VPU boot failed.
+ * @KEEMBAY_VPU_EVENT_STOP:		VPU stop initiated.
+ * @KEEMBAY_VPU_EVENT_STOP_COMPLETE:	VPU stop completed.
+ * @KEEMBAY_VPU_EVENT_NCE_WDT_TIMEOUT:	NCE watchdog triggered.
+ * @KEEMBAY_VPU_EVENT_MSS_WDT_TIMEOUT:	MSS watchdog triggered.
+ * @KEEMBAY_VPU_EVENT_MSS_READY_OK:	VPU ready message successfully received.
+ * @KEEMBAY_VPU_EVENT_MSS_READY_FAIL:	Failed to receive VPU ready message.
+ */
+enum keembay_vpu_event {
+	KEEMBAY_VPU_EVENT_BOOT = 0,
+	KEEMBAY_VPU_EVENT_BOOT_FAILED,
+	KEEMBAY_VPU_EVENT_STOP,
+	KEEMBAY_VPU_EVENT_STOP_COMPLETE,
+	KEEMBAY_VPU_EVENT_NCE_WDT_TIMEOUT,
+	KEEMBAY_VPU_EVENT_MSS_WDT_TIMEOUT,
+	KEEMBAY_VPU_EVENT_MSS_READY_OK,
+	KEEMBAY_VPU_EVENT_MSS_READY_FAIL
+};
+
+static struct vpu_ipc_dev *to_vpu_dev(struct device *dev);
+
+/* Variable containing SoC information. */
+static struct vpu_ipc_soc_info *vpu_ipc_soc_info;
+
+/**
+ * vpu_ipc_notify_event() - Trigger callback
+ * @vpu_dev:		Private data
+ * @event:		Event to notify
+ *
+ * This function is called when an event has occurred. If a callback has
+ * been registered it is called with the device and event as arguments.
+ *
+ * This function can be called from interrupt context.
+ */
+static void vpu_ipc_notify_event(struct vpu_ipc_dev *vpu_dev,
+				 enum intel_keembay_vpu_event event)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+
+	if (vpu_dev->callback)
+		vpu_dev->callback(dev, event);
+}
+
+/**
+ * vpu_ipc_handle_event() - Handle events and optionally update state
+ *
+ * @vpu_dev:		Private data
+ * @event:		Event that has occurred
+ *
+ * This function is called when an event has occurred. It tells the calling
+ * code whether the event is valid for the current state and updates the
+ * internal state according to the event.
+ *
+ * This function can be called from interrupt context.
+ *
+ * Return: 0 on success, negative error value otherwise
+ */
+static int vpu_ipc_handle_event(struct vpu_ipc_dev *vpu_dev,
+				enum keembay_vpu_event event)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	unsigned long flags;
+	int rc = -EINVAL;
+
+	/*
+	 * Atomic update of state.
+	 * Note: this function is called by the WDT IRQ handlers; therefore
+	 * we must use the spin_lock_irqsave().
+	 */
+	spin_lock_irqsave(&vpu_dev->lock, flags);
+
+	switch (vpu_dev->state) {
+	case KEEMBAY_VPU_OFF:
+		if (event == KEEMBAY_VPU_EVENT_BOOT) {
+			vpu_dev->state = KEEMBAY_VPU_BUSY;
+			rc = 0;
+		}
+		break;
+	case KEEMBAY_VPU_BUSY:
+		if (event == KEEMBAY_VPU_EVENT_MSS_READY_OK) {
+			vpu_dev->state = KEEMBAY_VPU_READY;
+			vpu_ipc_notify_event(vpu_dev,
+					     KEEMBAY_VPU_NOTIFY_CONNECT);
+			rc = 0;
+			break;
+		}
+		if (event == KEEMBAY_VPU_EVENT_MSS_READY_FAIL ||
+		    event == KEEMBAY_VPU_EVENT_BOOT_FAILED) {
+			vpu_dev->state = KEEMBAY_VPU_ERROR;
+			rc = 0;
+		}
+		break;
+	case KEEMBAY_VPU_READY:
+		if (event != KEEMBAY_VPU_EVENT_MSS_READY_OK)
+			vpu_ipc_notify_event(vpu_dev,
+					     KEEMBAY_VPU_NOTIFY_DISCONNECT);
+
+		if (event == KEEMBAY_VPU_EVENT_MSS_READY_FAIL ||
+		    event == KEEMBAY_VPU_EVENT_BOOT_FAILED) {
+			vpu_dev->state = KEEMBAY_VPU_ERROR;
+			rc = 0;
+			break;
+		}
+		if (event == KEEMBAY_VPU_EVENT_NCE_WDT_TIMEOUT ||
+		    event == KEEMBAY_VPU_EVENT_MSS_WDT_TIMEOUT) {
+			vpu_dev->state = KEEMBAY_VPU_ERROR;
+			rc = 0;
+			break;
+		}
+		fallthrough;
+	case KEEMBAY_VPU_ERROR:
+		if (event == KEEMBAY_VPU_EVENT_BOOT) {
+			vpu_dev->state = KEEMBAY_VPU_BUSY;
+			rc = 0;
+			break;
+		}
+		if (event == KEEMBAY_VPU_EVENT_STOP) {
+			vpu_dev->state = KEEMBAY_VPU_STOPPING;
+			rc = 0;
+		}
+		break;
+	case KEEMBAY_VPU_STOPPING:
+		if (event == KEEMBAY_VPU_EVENT_STOP_COMPLETE) {
+			vpu_dev->state = KEEMBAY_VPU_OFF;
+			rc = 0;
+		}
+		break;
+	default:
+		break;
+	}
+
+	spin_unlock_irqrestore(&vpu_dev->lock, flags);
+
+	if (rc)
+		dev_err(dev, "Can't handle event %d in state %d\n",
+			event, vpu_dev->state);
+
+	return rc;
+}
+
+/**
+ * clear_and_disable_vpu_wdt() - Clear and disable VPU WDT.
+ * @wdt_base:		Base address of the WDT register.
+ * @tim_cfg_base:	Base address of the associated TIM configuration
+ *			register.
+ */
+static void clear_and_disable_vpu_wdt(u8 __iomem *wdt_base,
+				      u8 __iomem *tim_cfg_base)
+{
+	u32 tim_gen_config;
+
+	/* Enable writing and set non-zero WDT value */
+	iowrite32(TIM_SAFE_ENABLE, wdt_base + TIM_SAFE);
+	iowrite32(TIM_WATCHDOG_RESET_VALUE, wdt_base + TIM_WATCHDOG);
+
+	/* Enable writing and disable watchdog timer */
+	iowrite32(TIM_SAFE_ENABLE, wdt_base + TIM_SAFE);
+	iowrite32(0, wdt_base + TIM_WDOG_EN);
+
+	/* Now clear the timeout interrupt */
+	tim_gen_config = ioread32(tim_cfg_base);
+	tim_gen_config &= ~(TIM_GEN_CONFIG_WDOG_TO_INT_CLR);
+	iowrite32(tim_gen_config, tim_cfg_base);
+}
+
+static irqreturn_t nce_wdt_irq_handler(int irq, void *ptr)
+{
+	struct vpu_ipc_dev *vpu_dev = ptr;
+	struct device *dev = &vpu_dev->pdev->dev;
+	int rc;
+
+	vpu_ipc_notify_event(vpu_dev, KEEMBAY_VPU_NOTIFY_NCE_WDT);
+	dev_dbg_ratelimited(dev, "NCE WDT IRQ occurred.\n");
+
+	clear_and_disable_vpu_wdt(vpu_dev->nce_wdt_reg,
+				  vpu_dev->nce_tim_cfg_reg);
+	/* Update driver state machine. */
+	rc = vpu_ipc_handle_event(vpu_dev, KEEMBAY_VPU_EVENT_NCE_WDT_TIMEOUT);
+	if (rc < 0)
+		dev_warn_ratelimited(dev, "Unexpected NCE WDT event.\n");
+
+	vpu_dev->nce_wdt_count++;
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t mss_wdt_irq_handler(int irq, void *ptr)
+{
+	struct vpu_ipc_dev *vpu_dev = ptr;
+	struct device *dev = &vpu_dev->pdev->dev;
+	int rc;
+
+	vpu_ipc_notify_event(vpu_dev, KEEMBAY_VPU_NOTIFY_MSS_WDT);
+	dev_dbg_ratelimited(dev, "MSS WDT IRQ occurred.\n");
+
+	clear_and_disable_vpu_wdt(vpu_dev->mss_wdt_reg,
+				  vpu_dev->mss_tim_cfg_reg);
+	/* Update driver state machine. */
+	rc = vpu_ipc_handle_event(vpu_dev, KEEMBAY_VPU_EVENT_MSS_WDT_TIMEOUT);
+	if (rc < 0)
+		dev_warn_ratelimited(dev, "Unexpected MSS WDT event.\n");
+
+	vpu_dev->mss_wdt_count++;
+
+	return IRQ_HANDLED;
+}
+
+static resource_size_t get_reserved_mem_size(struct device *dev,
+					     unsigned int idx)
+{
+	struct resource mem;
+	struct device_node *np;
+	int rc;
+
+	np = of_parse_phandle(dev->of_node, "memory-region", idx);
+	if (!np) {
+		pr_err("Couldn't find memory-region %d\n", idx);
+		return 0;
+	}
+
+	rc = of_address_to_resource(np, 0, &mem);
+	if (rc) {
+		pr_err("Couldn't map address to resource\n");
+		return 0;
+	}
+
+	return resource_size(&mem);
+}
+
+static int setup_vpu_fw_region(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	struct vpu_mem *rsvd_mem = &vpu_dev->reserved_mem;
+	int rc;
+
+	rc = of_reserved_mem_device_init(dev);
+	if (rc) {
+		dev_err(dev, "Failed to initialise device reserved memory.\n");
+		return rc;
+	}
+
+	rsvd_mem->size = get_reserved_mem_size(dev, VPU_IPC_FW_AREA_IDX);
+	if (rsvd_mem->size == 0) {
+		dev_err(dev, "Couldn't get size of reserved memory region.\n");
+		rc = -ENODEV;
+		goto setup_vpu_fw_fail;
+	}
+
+	rsvd_mem->vaddr = dmam_alloc_coherent(dev, rsvd_mem->size,
+					      &rsvd_mem->vpu_addr, GFP_KERNEL);
+	if (!rsvd_mem->vaddr) {
+		dev_err(dev, "Failed to allocate memory for firmware.\n");
+		rc = -ENOMEM;
+		goto setup_vpu_fw_fail;
+	}
+
+	/* Get the physical address of the reserved memory region. */
+	rsvd_mem->paddr = dma_to_phys(dev, rsvd_mem->vpu_addr);
+
+	return 0;
+
+setup_vpu_fw_fail:
+	of_reserved_mem_device_release(dev);
+
+	return rc;
+}
+
+static int setup_x509_region(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	struct device_node *node;
+	struct resource res;
+	int rc;
+
+	node = of_parse_phandle(dev->of_node, "memory-region",
+				VPU_IPC_X509_AREA_IDX);
+	if (!node) {
+		dev_err(dev, "Couldn't find X509 region.\n");
+		return -EINVAL;
+	}
+
+	rc = of_address_to_resource(node, 0, &res);
+
+	/* Release node first as we will not use it anymore */
+	of_node_put(node);
+
+	if (rc) {
+		dev_err(dev, "Couldn't resolve X509 region.\n");
+		return rc;
+	}
+
+	vpu_dev->x509_mem.vaddr =
+		devm_ioremap(dev, res.start, resource_size(&res));
+	if (!vpu_dev->x509_mem.vaddr) {
+		dev_err(dev, "Couldn't ioremap x509 region.\n");
+		return -EADDRNOTAVAIL;
+	}
+
+	vpu_dev->x509_mem.paddr = res.start;
+	vpu_dev->x509_mem.size = resource_size(&res);
+
+	return 0;
+}
+
+static int setup_mss_ipc_region(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	struct device_node *node;
+	struct resource res;
+	int rc;
+
+	node = of_parse_phandle(dev->of_node, "memory-region",
+				MSS_IPC_AREA_IDX);
+	if (!node) {
+		dev_err(dev, "Didn't find MSS IPC region.\n");
+		return -EINVAL;
+	}
+
+	rc = of_address_to_resource(node, 0, &res);
+
+	/* Release node first as we will not use it anymore */
+	of_node_put(node);
+
+	if (rc) {
+		dev_err(dev, "Couldn't resolve MSS IPC region.\n");
+		return rc;
+	}
+
+	vpu_dev->mss_ipc_mem.paddr = res.start;
+	vpu_dev->mss_ipc_mem.vpu_addr = phys_to_dma(dev, res.start);
+	vpu_dev->mss_ipc_mem.size = resource_size(&res);
+
+	return 0;
+}
+
+static int setup_reserved_memory(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	int rc;
+
+	/*
+	 * Find the VPU firmware area described in the device tree,
+	 * and allocate it for our usage.
+	 */
+	rc = setup_vpu_fw_region(vpu_dev);
+	if (rc) {
+		dev_err(dev, "Failed to init FW memory.\n");
+		return rc;
+	}
+
+	/*
+	 * Find the X509 area described in the device tree,
+	 * and allocate it for our usage.
+	 */
+	rc = setup_x509_region(vpu_dev);
+	if (rc) {
+		dev_err(dev, "Failed to setup X509 region.\n");
+		goto res_mem_setup_fail;
+	}
+
+	/*
+	 * Find the MSS IPC area in the device tree and get the location and
+	 * size information
+	 */
+	rc = setup_mss_ipc_region(vpu_dev);
+	if (rc) {
+		dev_err(dev, "Couldn't setup MSS IPC region.\n");
+		goto res_mem_setup_fail;
+	}
+
+	return 0;
+
+res_mem_setup_fail:
+	of_reserved_mem_device_release(dev);
+
+	return rc;
+}
+
+static void ipc_device_put(struct vpu_ipc_dev *vpu_dev)
+{
+	put_device(vpu_dev->ipc_dev);
+}
+
+static int ipc_device_get(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	struct platform_device *pdev;
+	struct device_node *np;
+
+	np = of_parse_phandle(dev->of_node, "intel,keembay-ipc", 0);
+	if (!np) {
+		dev_err(dev, "Cannot find phandle to IPC device\n");
+		return -ENODEV;
+	}
+
+	pdev = of_find_device_by_node(np);
+	if (!pdev) {
+		dev_info(dev, "IPC device not probed\n");
+		of_node_put(np);
+		return -EPROBE_DEFER;
+	}
+
+	vpu_dev->ipc_dev = get_device(&pdev->dev);
+	of_node_put(np);
+
+	return 0;
+}
+
+static int retrieve_clocks(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	char pll_string[PLL_STRING_SIZE];
+	struct clk *clk;
+	int pll;
+	int out;
+
+	clk = devm_clk_get(dev, "cpu_clock");
+	if (IS_ERR(clk)) {
+		dev_err(dev, "cpu_clock not found.\n");
+		return PTR_ERR(clk);
+	}
+	vpu_dev->cpu_clock = clk;
+
+	for (pll = 0; pll < NUM_PLLS; pll++) {
+		for (out = 0; out < NUM_PLL_OUTPUTS; out++) {
+			snprintf(pll_string, sizeof(pll_string),
+				 "pll_%d_out_%d", pll, out);
+			clk = devm_clk_get(dev, pll_string);
+			if (IS_ERR(clk)) {
+				dev_err(dev, "%s not found.\n", pll_string);
+				return PTR_ERR(clk);
+			}
+			vpu_dev->pll[pll][out] = clk;
+		}
+	}
+
+	return 0;
+}
+
+/* Get register resource from device tree and re-map as I/O memory. */
+static int get_pdev_res_and_ioremap(struct platform_device *pdev,
+				    const char *reg_name,
+				    void __iomem **target_reg)
+{
+	struct device *dev = &pdev->dev;
+	struct resource *res;
+	void __iomem *reg;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, reg_name);
+	if (!res) {
+		dev_err(dev, "Couldn't get resource for %s\n", reg_name);
+		return -EINVAL;
+	}
+
+	reg = devm_ioremap_resource(dev, res);
+	if (IS_ERR(reg)) {
+		dev_err(dev, "Couldn't ioremap resource for %s\n", reg_name);
+		return PTR_ERR(reg);
+	}
+
+	*target_reg = reg;
+
+	return 0;
+}
+
+static int setup_watchdog_resources(struct vpu_ipc_dev *vpu_dev)
+{
+	struct platform_device *pdev = vpu_dev->pdev;
+	struct device *dev = &vpu_dev->pdev->dev;
+	int rc;
+
+	/* Get registers */
+	rc = get_pdev_res_and_ioremap(pdev, "nce_wdt", &vpu_dev->nce_wdt_reg);
+	if (rc) {
+		dev_err(dev, "Failed to get NCE WDT registers\n");
+		return rc;
+	}
+	rc = get_pdev_res_and_ioremap(pdev, "nce_tim_cfg",
+				      &vpu_dev->nce_tim_cfg_reg);
+	if (rc) {
+		dev_err(dev, "Failed to get NCE TIM_GEN_CONFIG register\n");
+		return rc;
+	}
+	rc = get_pdev_res_and_ioremap(pdev, "mss_wdt", &vpu_dev->mss_wdt_reg);
+	if (rc) {
+		dev_err(dev, "Failed to get MSS WDT registers\n");
+		return rc;
+	}
+	rc = get_pdev_res_and_ioremap(pdev, "mss_tim_cfg",
+				      &vpu_dev->mss_tim_cfg_reg);
+	if (rc) {
+		dev_err(dev, "Failed to get MSS TIM_GEN_CONFIG register\n");
+		return rc;
+	}
+
+	/* Request interrupts */
+	vpu_dev->nce_irq = platform_get_irq_byname(pdev, "nce_wdt");
+	if (vpu_dev->nce_irq < 0)
+		return vpu_dev->nce_irq;
+	vpu_dev->mss_irq = platform_get_irq_byname(pdev, "mss_wdt");
+	if (vpu_dev->mss_irq < 0)
+		return vpu_dev->mss_irq;
+	rc = devm_request_irq(dev, vpu_dev->nce_irq, nce_wdt_irq_handler, 0,
+			      "keembay-vpu-ipc", vpu_dev);
+	if (rc) {
+		dev_err(dev, "failed to request NCE IRQ.\n");
+		return rc;
+	}
+	rc = devm_request_irq(dev, vpu_dev->mss_irq, mss_wdt_irq_handler, 0,
+			      "keembay-vpu-ipc", vpu_dev);
+	if (rc) {
+		dev_err(dev, "failed to request MSS IRQ.\n");
+		return rc;
+	}
+
+	/* Request interrupt re-direct numbers */
+	rc = of_property_read_u32(dev->of_node,
+				  "intel,keembay-vpu-ipc-nce-wdt-redirect",
+				  &vpu_dev->nce_wdt_redirect);
+	if (rc) {
+		dev_err(dev, "failed to get NCE WDT redirect number.\n");
+		return rc;
+	}
+	rc = of_property_read_u32(dev->of_node,
+				  "intel,keembay-vpu-ipc-mss-wdt-redirect",
+				  &vpu_dev->mss_wdt_redirect);
+	if (rc) {
+		dev_err(dev, "failed to get MSS WDT redirect number.\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/* Populate the boot parameters to be passed to the VPU. */
+static int setup_boot_parameters(struct vpu_ipc_dev *vpu_dev)
+{
+	int i;
+
+	/* Set all values to zero. This will disable most clocks/devices */
+	memset(vpu_dev->boot_params, 0, sizeof(*vpu_dev->boot_params));
+
+	/*
+	 * Set magic number, so VPU knows that the parameters are
+	 * populated correctly
+	 */
+	vpu_dev->boot_params->magic_number = BOOT_PARAMS_MAGIC_NUMBER;
+
+	/* Set VPU ID. */
+	vpu_dev->boot_params->vpu_id = vpu_dev->vpu_id;
+
+	/* Inform VPU of clock frequencies */
+	vpu_dev->boot_params->cpu_frequency_hz =
+		clk_get_rate(vpu_dev->cpu_clock);
+	for (i = 0; i < NUM_PLL_OUTPUTS; i++) {
+		vpu_dev->boot_params->pll0_out[i] =
+			clk_get_rate(vpu_dev->pll[0][i]);
+		vpu_dev->boot_params->pll1_out[i] =
+			clk_get_rate(vpu_dev->pll[1][i]);
+		vpu_dev->boot_params->pll2_out[i] =
+			clk_get_rate(vpu_dev->pll[2][i]);
+	}
+
+	/*
+	 * Fill in IPC buffer information: the VPU needs to know where it
+	 * should allocate IPC buffer from.
+	 */
+	vpu_dev->boot_params->mss_ipc_header_address =
+		vpu_dev->mss_ipc_mem.vpu_addr;
+	vpu_dev->boot_params->mss_ipc_header_area_size =
+		vpu_dev->mss_ipc_mem.size;
+
+	/* Fill in IRQ re-direct request information */
+	vpu_dev->boot_params->mss_wdt_to_irq_a53_redir =
+		vpu_dev->mss_wdt_redirect;
+	vpu_dev->boot_params->nce_wdt_to_irq_a53_redir =
+		vpu_dev->nce_wdt_redirect;
+
+	/* Setup A53SS_VERSION_ID */
+	vpu_dev->boot_params->a53ss_version_id = vpu_ipc_soc_info->hardware_id;
+
+	/* Setup Silicon stepping */
+	vpu_dev->boot_params->si_stepping = vpu_ipc_soc_info->stepping[0] |
+					    vpu_ipc_soc_info->stepping[1] << 8;
+
+	/* Set feature exclude and device id information. */
+	vpu_dev->boot_params->device_id = vpu_ipc_soc_info->device_id;
+	vpu_dev->boot_params->feature_exclusion =
+					vpu_ipc_soc_info->feature_exclusion;
+
+	/* Set SKU information */
+	vpu_dev->boot_params->sku = (u64)vpu_ipc_soc_info->sku[0] |
+				    (u64)vpu_ipc_soc_info->sku[1] << 8 |
+				    (u64)vpu_ipc_soc_info->sku[2] << 16 |
+				    (u64)vpu_ipc_soc_info->sku[3] << 24 |
+				    (u64)vpu_ipc_soc_info->sku[4] << 32 |
+				    (u64)vpu_ipc_soc_info->sku[5] << 40;
+	return 0;
+}
+
+/* Request SoC firmware to boot the VPU. */
+static int request_vpu_boot(struct vpu_ipc_dev *vpu_dev)
+{
+	u64 function_id;
+	struct arm_smccc_res res;
+	u16 function_number = KMB_SIP_SVC_VPU_BOOT;
+
+	function_id = ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_32,
+					 ARM_SMCCC_OWNER_SIP, function_number);
+
+	/*
+	 * Arguments are as follows:
+	 * 1. Reserved
+	 * 2. Reserved region size
+	 * 3. Firmware image physical base address
+	 * 4. Firmware image size
+	 * 5. Entry point for VPU
+	 * 6. X509 certificate location
+	 * 7. IMR driver number
+	 */
+	arm_smccc_smc(function_id, 0,
+		      vpu_dev->reserved_mem.size, vpu_dev->fw_res.start,
+		      resource_size(&vpu_dev->fw_res), vpu_dev->boot_vec_paddr,
+		      vpu_dev->x509_mem.paddr, vpu_dev->imr, &res);
+
+	if (res.a0) {
+		dev_info(&vpu_dev->pdev->dev, "Boot failed: 0x%lx.\n", res.a0);
+		return -EIO;
+	}
+
+	dev_info(&vpu_dev->pdev->dev,
+		 "VPU Boot successfully requested to secure monitor.\n");
+
+	return 0;
+}
+
+/* Request SoC firmware to stop the VPU. */
+static int request_vpu_stop(struct vpu_ipc_dev *vpu_dev)
+{
+	u64 function_id;
+	struct arm_smccc_res res;
+	u16 function_number = KMB_SIP_SVC_VPU_STOP;
+
+	function_id = ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_32,
+					 ARM_SMCCC_OWNER_SIP, function_number);
+
+	arm_smccc_smc(function_id, 0, 0, 0, 0, 0, 0, 0, &res);
+
+	if (res.a0) {
+		dev_info(&vpu_dev->pdev->dev, "Stop failed: 0x%lx.\n", res.a0);
+		return -EIO;
+	}
+
+	dev_info(&vpu_dev->pdev->dev,
+		 "VPU Stop successfully requested to secure monitor.\n");
+
+	return 0;
+}
+
+/*
+ * Get kernel virtual address of resource inside the VPU reserved memory
+ * region.
+ */
+static void *get_vpu_dev_vaddr(struct vpu_ipc_dev *vpu_dev,
+			       struct resource *res)
+{
+	unsigned long offset;
+
+	/*
+	 * The offset calculation below would underflow if the resource
+	 * started below the reserved memory region, so reject that case.
+	 */
+	if (res->start < vpu_dev->reserved_mem.vpu_addr)
+		return NULL;
+
+	offset = res->start - vpu_dev->reserved_mem.vpu_addr;
+
+	/* Cast to (u8 *) since void pointer arithmetic is undefined. */
+	return (u8 *)vpu_dev->reserved_mem.vaddr + offset;
+}
+
+static int parse_fw_header(struct vpu_ipc_dev *vpu_dev,
+			   const struct firmware *fw)
+{
+	struct resource config_res, version_res, total_reserved_res;
+	struct device *dev = &vpu_dev->pdev->dev;
+	struct firmware_header *fw_header;
+	void *version_region;
+	void *config_region;
+	void *fw_region;
+
+	/* Is the fw size big enough to read the header? */
+	if (fw->size < sizeof(struct firmware_header)) {
+		dev_err(dev, "Firmware was too small for header.\n");
+		return -EINVAL;
+	}
+
+	fw_header = (struct firmware_header *)fw->data;
+
+	/* Check header version */
+	if (fw_header->header_ver != HEADER_VERSION_SUPPORTED) {
+		dev_err(dev, "Header version check expected 0x%x, got 0x%x\n",
+			HEADER_VERSION_SUPPORTED, fw_header->header_ver);
+		return -EINVAL;
+	}
+
+	/* Check firmware version size is allowed */
+	if (fw_header->fw_ver_size > MAX_FIRMWARE_VERSION_SIZE) {
+		dev_err(dev, "Firmware version area larger than allowed: %d\n",
+			fw_header->fw_ver_size);
+		return -EINVAL;
+	}
+
+	/*
+	 * Check the firmware binary is at least large enough for the
+	 * firmware image size described in the header.
+	 */
+	if (fw->size < (MAX_HEADER_SIZE + MAX_FIRMWARE_VERSION_SIZE +
+			fw_header->image_size)) {
+		dev_err(dev, "Real firmware size is not large enough.\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Make sure that the final address is aligned correctly. If not, the
+	 * boot will never work.
+	 */
+	if (!IS_ALIGNED(fw_header->entry_point, VPU_RESET_VECTOR_ALIGNMENT)) {
+		dev_err(dev,
+			"Entry point for firmware (0x%llX) is not correctly aligned.\n",
+			fw_header->entry_point);
+		return -EINVAL;
+	}
+
+	/*
+	 * Generate the resource describing the region containing the actual
+	 * firmware data.
+	 */
+	vpu_dev->fw_res.start = fw_header->image_load_addr;
+	vpu_dev->fw_res.end = fw_header->image_size +
+			      fw_header->image_load_addr - 1;
+	vpu_dev->fw_res.flags = IORESOURCE_MEM;
+
+	/*
+	 * Generate the resource describing the region containing the
+	 * configuration data for the VPU.
+	 */
+	config_res.start = fw_header->config_load_addr;
+	config_res.end = sizeof(struct boot_parameters) +
+			 fw_header->config_load_addr - 1;
+	config_res.flags = IORESOURCE_MEM;
+
+	/*
+	 * Generate the resource describing the region containing the
+	 * version information for the VPU.
+	 */
+	version_res.start = fw_header->fw_ver_load_addr;
+	version_res.end = fw_header->fw_ver_size +
+			  fw_header->fw_ver_load_addr - 1;
+	version_res.flags = IORESOURCE_MEM;
+
+	/*
+	 * Generate the resource describing the region of memory
+	 * completely dedicated to the VPU.
+	 */
+	total_reserved_res.start = vpu_dev->reserved_mem.vpu_addr;
+	total_reserved_res.end = vpu_dev->reserved_mem.vpu_addr +
+		vpu_dev->reserved_mem.size - 1;
+	total_reserved_res.flags = IORESOURCE_MEM;
+
+	/*
+	 * Check all pieces to be copied reside completely in the reserved
+	 * region
+	 */
+	if (!resource_contains(&total_reserved_res, &vpu_dev->fw_res)) {
+		dev_err(dev, "Can't fit firmware in reserved region.\n");
+		return -EINVAL;
+	}
+	if (!resource_contains(&total_reserved_res, &version_res)) {
+		dev_err(dev,
+			"Can't fit firmware version data in reserved region.\n");
+		return -EINVAL;
+	}
+	if (!resource_contains(&total_reserved_res, &config_res)) {
+		dev_err(dev,
+			"Can't fit configuration information in reserved region.\n");
+		return -EINVAL;
+	}
+
+	/* Check for overlapping regions */
+	if (resource_overlaps(&vpu_dev->fw_res, &version_res)) {
+		dev_err(dev, "FW and version regions overlap.\n");
+		return -EINVAL;
+	}
+	if (resource_overlaps(&vpu_dev->fw_res, &config_res)) {
+		dev_err(dev, "FW and config regions overlap.\n");
+		return -EINVAL;
+	}
+	if (resource_overlaps(&config_res, &version_res)) {
+		dev_err(dev, "Version and config regions overlap.\n");
+		return -EINVAL;
+	}
+
+	/* Setup boot parameter region */
+	config_region = get_vpu_dev_vaddr(vpu_dev, &config_res);
+	if (!config_region) {
+		dev_err(dev,
+			"Couldn't map boot configuration area to CPU virtual address.\n");
+		return -EINVAL;
+	}
+	version_region = get_vpu_dev_vaddr(vpu_dev, &version_res);
+	if (!version_region) {
+		dev_err(dev,
+			"Couldn't map version area to CPU virtual address.\n");
+		return -EINVAL;
+	}
+	fw_region = get_vpu_dev_vaddr(vpu_dev, &vpu_dev->fw_res);
+	if (!fw_region) {
+		dev_err(dev,
+			"Couldn't map firmware area to CPU virtual address.\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Copy version region: the region is located in the file @ offset of
+	 * MAX_HEADER_SIZE, size was specified in the header and has been
+	 * checked to not be larger than that allowed.
+	 */
+	memcpy(version_region, &fw->data[MAX_HEADER_SIZE],
+	       fw_header->fw_ver_size);
+
+	/*
+	 * Copy firmware region: the region is located in the file @ offset of
+	 * MAX_HEADER_SIZE + MAX_FIRMWARE_VERSION_SIZE, size was specified in
+	 * the header and has been checked to not be larger than that allowed.
+	 */
+	memcpy(fw_region,
+	       &fw->data[MAX_HEADER_SIZE + MAX_FIRMWARE_VERSION_SIZE],
+	       fw_header->image_size);
+
+	/* Save off boot parameters region vaddr */
+	vpu_dev->boot_params = config_region;
+
+	/* Save off boot vector physical address */
+	vpu_dev->boot_vec_paddr = fw_header->entry_point;
+
+	return 0;
+}
+
+static int ready_message_wait_thread(void *arg)
+{
+	struct vpu_ipc_dev *vpu_dev = arg;
+	struct device *dev = &vpu_dev->pdev->dev;
+	size_t size = 0;
+	u32 paddr = 0;
+	int close_rc;
+	int rc;
+
+	/*
+	 * We will wait a few seconds for the message. We will complete earlier
+	 * if the message is received earlier.
+	 * NOTE: this is not a busy wait, we sleep until message is received.
+	 */
+	rc = intel_keembay_ipc_recv(vpu_dev->ipc_dev, KMB_IPC_NODE_LEON_MSS,
+				    READY_MESSAGE_IPC_CHANNEL, &paddr, &size,
+				    READY_MESSAGE_TIMEOUT_MS);
+	/*
+	 * IPC channel must be closed regardless of 'rc' value, so close the
+	 * channel now and then process 'rc' value.
+	 */
+	close_rc = intel_keembay_ipc_close_channel(vpu_dev->ipc_dev,
+						   KMB_IPC_NODE_LEON_MSS,
+						   READY_MESSAGE_IPC_CHANNEL);
+	if (close_rc < 0) {
+		dev_warn(dev, "Couldn't close IPC channel.\n");
+		/* Continue, as this is not a critical issue. */
+	}
+
+	/* Now process recv() return code. */
+	if (rc < 0) {
+		dev_err(dev,
+			"Failed to receive ready message within %d ms: %d.\n",
+			READY_MESSAGE_TIMEOUT_MS, rc);
+		goto ready_message_thread_failure;
+	}
+
+	if (paddr != READY_MESSAGE_EXPECTED_PADDR ||
+	    size != READY_MESSAGE_EXPECTED_SIZE) {
+		dev_err(dev, "Bad ready message: (paddr, size) = (0x%x, %zu)\n",
+			paddr, size);
+		goto ready_message_thread_failure;
+	}
+
+	dev_info(dev, "VPU ready message received successfully!\n");
+
+	rc = vpu_ipc_handle_event(vpu_dev, KEEMBAY_VPU_EVENT_MSS_READY_OK);
+	if (rc < 0)
+		dev_err(dev, "Fatal error: failed to set state (ready ok).\n");
+
+	/* Wake up anyone waiting for READY. */
+	wake_up_all(&vpu_dev->ready_queue);
+
+	return 0;
+
+ready_message_thread_failure:
+	rc = vpu_ipc_handle_event(vpu_dev, KEEMBAY_VPU_EVENT_MSS_READY_FAIL);
+	if (rc < 0)
+		dev_err(dev,
+			"Fatal error: failed to set state (ready timeout).\n");
+
+	return 0;
+}
+
+static int create_ready_message_thread(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	struct task_struct *task;
+
+	task = kthread_run(ready_message_wait_thread, vpu_dev,
+			   "keembay-vpu-ipc-ready");
+	if (IS_ERR(task)) {
+		dev_err(dev, "Couldn't start thread to receive message.\n");
+		return -EIO;
+	}
+
+	vpu_dev->ready_message_task = task;
+
+	return 0;
+}
+
+static int kickoff_vpu_sequence(struct vpu_ipc_dev *vpu_dev,
+				int (*boot_fn)(struct vpu_ipc_dev *))
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	int err_rc;
+	int rc;
+
+	/*
+	 * Open the IPC channel. If we don't do it before booting
+	 * the VPU, we may miss the message, as the IPC driver will
+	 * discard messages for unopened channels.
+	 */
+	rc = intel_keembay_ipc_open_channel(vpu_dev->ipc_dev,
+					    KMB_IPC_NODE_LEON_MSS,
+					    READY_MESSAGE_IPC_CHANNEL);
+	if (rc < 0) {
+		dev_err(dev,
+			"Couldn't open IPC channel to receive ready message.\n");
+		goto kickoff_failed;
+	}
+
+	/* Request boot */
+	rc = boot_fn(vpu_dev);
+	if (rc < 0) {
+		dev_err(dev, "Failed to do request to boot.\n");
+		goto close_and_kickoff_failed;
+	}
+
+	/*
+	 * Start thread waiting for message, and update state
+	 * if the request was successful.
+	 */
+	rc = create_ready_message_thread(vpu_dev);
+	if (rc < 0) {
+		dev_err(dev,
+			"Failed to create thread to wait for ready message.\n");
+		goto close_and_kickoff_failed;
+	}
+
+	return 0;
+
+close_and_kickoff_failed:
+	/* Close the channel. */
+	err_rc = intel_keembay_ipc_close_channel(vpu_dev->ipc_dev,
+						 KMB_IPC_NODE_LEON_MSS,
+						 READY_MESSAGE_IPC_CHANNEL);
+	if (err_rc < 0) {
+		dev_err(dev, "Couldn't close IPC channel: %d\n", err_rc);
+		/*
+		 * We have had a more serious failure - don't update the
+		 * original 'rc' and continue.
+		 */
+	}
+
+kickoff_failed:
+	return rc;
+}
+
+/*
+ * Try to boot the VPU using the firmware name stored in
+ * vpu_dev->firmware_name (which when this function is called is expected to be
+ * not NULL).
+ */
+static int do_boot_sequence(struct vpu_ipc_dev *vpu_dev)
+{
+	struct device *dev = &vpu_dev->pdev->dev;
+	const struct firmware *fw;
+	int event_rc;
+	int rc;
+
+	/* Update state machine. */
+	rc = vpu_ipc_handle_event(vpu_dev, KEEMBAY_VPU_EVENT_BOOT);
+	if (rc < 0) {
+		dev_err(dev, "Can't start in this state.\n");
+		return rc;
+	}
+
+	/* Stop the VPU running */
+	rc = request_vpu_stop(vpu_dev);
+	if (rc < 0)
+		dev_err(dev, "Failed stop - continue sequence anyway.\n");
+
+	dev_info(dev, "Keem Bay VPU IPC start with %s.\n",
+		 vpu_dev->firmware_name);
+
+	/* Request firmware and wait for it. */
+	rc = request_firmware(&fw, vpu_dev->firmware_name, &vpu_dev->pdev->dev);
+	if (rc < 0) {
+		dev_err(dev, "Couldn't find firmware: %d\n", rc);
+		goto boot_failed_no_fw;
+	}
+
+	/* Do checks on the firmware header. */
+	rc = parse_fw_header(vpu_dev, fw);
+	if (rc < 0) {
+		dev_err(dev, "Firmware checks failed.\n");
+		goto boot_failed;
+	}
+
+	/* Write configuration data. */
+	rc = setup_boot_parameters(vpu_dev);
+	if (rc < 0) {
+		dev_err(dev, "Failed to set up boot parameters.\n");
+		goto boot_failed;
+	}
+
+	/* Try 'boot' sequence */
+	rc = kickoff_vpu_sequence(vpu_dev, request_vpu_boot);
+	if (rc < 0) {
+		dev_err(dev, "Failed to boot VPU.\n");
+		goto boot_failed;
+	}
+
+	release_firmware(fw);
+	return 0;
+
+boot_failed:
+	release_firmware(fw);
+
+boot_failed_no_fw:
+	/* Update state machine after failure. */
+	event_rc = vpu_ipc_handle_event(vpu_dev,
+					KEEMBAY_VPU_EVENT_BOOT_FAILED);
+	if (event_rc < 0) {
+		dev_err(dev,
+			"Unexpected error: failed to handle fail event: %d.\n",
+			event_rc);
+		/* Continue: prefer original 'rc' to 'event_rc'. */
+	}
+
+	return rc;
+}
+
+/**
+ * intel_keembay_vpu_ipc_open_channel() - Open an IPC channel.
+ * @dev:	The VPU IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The ID of the channel to be opened.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+int intel_keembay_vpu_ipc_open_channel(struct device *dev, u8 node_id,
+				       u16 chan_id)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	return intel_keembay_ipc_open_channel(vpu_dev->ipc_dev, node_id,
+					      chan_id);
+}
+EXPORT_SYMBOL(intel_keembay_vpu_ipc_open_channel);
+
+/**
+ * intel_keembay_vpu_ipc_close_channel() - Close an IPC channel.
+ * @dev:	The VPU IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The ID of the channel to be closed.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+int intel_keembay_vpu_ipc_close_channel(struct device *dev, u8 node_id,
+					u16 chan_id)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	return intel_keembay_ipc_close_channel(vpu_dev->ipc_dev,
+					       node_id, chan_id);
+}
+EXPORT_SYMBOL(intel_keembay_vpu_ipc_close_channel);
+
+/**
+ * intel_keembay_vpu_ipc_send() - Send data via IPC.
+ * @dev:	The VPU IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The IPC channel to be used to send the message.
+ * @vpu_addr:	The VPU address of the data to be transferred.
+ * @size:	The size of the data to be transferred.
+ *
+ * Return:	0 on success, negative error code otherwise.
+ */
+int intel_keembay_vpu_ipc_send(struct device *dev, u8 node_id, u16 chan_id,
+			       u32 vpu_addr, size_t size)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	return intel_keembay_ipc_send(vpu_dev->ipc_dev, node_id, chan_id,
+				      vpu_addr, size);
+}
+EXPORT_SYMBOL(intel_keembay_vpu_ipc_send);
+
+/**
+ * intel_keembay_vpu_ipc_recv() - Read data via IPC
+ * @dev:	The VPU IPC device to use.
+ * @node_id:	The node ID of the remote node (used to identify the link the
+ *		channel must be added to). KMB_IPC_NODE_LEON_MSS is the only
+ *		allowed value for now.
+ * @chan_id:	The IPC channel to read from.
+ * @vpu_addr:	[out] The VPU address of the received data.
+ * @size:	[out] Where to store the size of the received data.
+ * @timeout:	How long (in ms) the function will block waiting for an IPC
+ *		message; if UINT32_MAX it will block indefinitely; if 0 it
+ *		will not block.
+ *
+ * Return:	0 on success, negative error code otherwise
+ */
+int intel_keembay_vpu_ipc_recv(struct device *dev, u8 node_id, u16 chan_id,
+			       u32 *vpu_addr, size_t *size, u32 timeout)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	return intel_keembay_ipc_recv(vpu_dev->ipc_dev, node_id, chan_id,
+				      vpu_addr, size, timeout);
+}
+EXPORT_SYMBOL(intel_keembay_vpu_ipc_recv);
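+
+/*
+ * Illustrative receive sketch (not part of the driver; 'vpu_ipc_dev' and the
+ * channel ID are hypothetical). On success, 'vpu_addr' and 'size' describe
+ * the received buffer in VPU memory:
+ *
+ *	u32 vpu_addr;
+ *	size_t size;
+ *	int rc;
+ *
+ *	rc = intel_keembay_vpu_ipc_recv(vpu_ipc_dev, KMB_VPU_IPC_NODE_LEON_MSS,
+ *					1, &vpu_addr, &size, 1000);
+ */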
+
+/**
+ * intel_keembay_vpu_startup() - Boot the VPU
+ * @dev:	   The VPU device to boot.
+ * @firmware_name: Name of firmware file
+ *
+ * This API is only valid while the VPU is OFF.
+ *
+ * The firmware named "firmware_name" is located using the kernel firmware
+ * API and its header is parsed. The driver then loads the firmware image,
+ * the firmware version area and the boot parameters (initialisation data)
+ * into the reserved memory region and requests the secure world to perform
+ * the boot sequence. If the boot sequence is successful, the VPU state
+ * becomes BUSY; the caller should then wait for the state to become READY
+ * before starting to communicate with the VPU. If the boot sequence fails,
+ * this function returns an error, the VPU state remains OFF and the caller
+ * may try again.
+ *
+ * If we fail to get to READY, because the VPU did not send us the 'ready'
+ * message, the VPU state will go to ERROR.
+ *
+ * Return: 0 on success, negative error code otherwise
+ */
+int intel_keembay_vpu_startup(struct device *dev, const char *firmware_name)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return PTR_ERR(vpu_dev);
+
+	if (!firmware_name)
+		return -EINVAL;
+
+	/* Free old vpu_dev->firmware_name value (if any). */
+	kfree(vpu_dev->firmware_name);
+
+	/* Set new value. */
+	vpu_dev->firmware_name = kstrdup(firmware_name, GFP_KERNEL);
+	if (!vpu_dev->firmware_name)
+		return -ENOMEM;
+
+	return do_boot_sequence(vpu_dev);
+}
+EXPORT_SYMBOL(intel_keembay_vpu_startup);
+
+/**
+ * intel_keembay_vpu_reset() - Reset the VPU
+ * @dev:	The VPU device to reset.
+ *
+ * Resets the VPU. Only valid when the VPU is in the READY or ERROR state.
+ * The state of the VPU will become BUSY.
+ *
+ * Return: 0 on success, negative error code otherwise
+ */
+int intel_keembay_vpu_reset(struct device *dev)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return PTR_ERR(vpu_dev);
+
+	/*
+	 * If vpu_dev->firmware_name == NULL then the VPU is not running
+	 * (either it was never booted or vpu_stop() was called). So, calling
+	 * reset is not allowed.
+	 */
+	if (!vpu_dev->firmware_name)
+		return -EINVAL;
+
+	return do_boot_sequence(vpu_dev);
+}
+EXPORT_SYMBOL(intel_keembay_vpu_reset);
+
+/**
+ * intel_keembay_vpu_stop() - Stop the VPU
+ * @dev:	The VPU device to stop.
+ *
+ * Stops the VPU and restores to the OFF state. Only valid when the VPU is in
+ * the READY or ERROR state.
+ *
+ * Return: 0 on success, negative error code otherwise
+ */
+int intel_keembay_vpu_stop(struct device *dev)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+	int event_rc;
+	int rc;
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	rc = vpu_ipc_handle_event(vpu_dev, KEEMBAY_VPU_EVENT_STOP);
+	if (rc < 0) {
+		dev_err(dev, "Can't stop in this state.\n");
+		return rc;
+	}
+
+	dev_info(dev, "Keem Bay VPU IPC stop.\n");
+
+	/* Request stop */
+	rc = request_vpu_stop(vpu_dev);
+	if (rc < 0) {
+		dev_err(dev,
+			"Failed to do request to stop - resetting state to OFF anyway.\n");
+	}
+
+	/* Remove any saved-off name */
+	kfree(vpu_dev->firmware_name);
+	vpu_dev->firmware_name = NULL;
+
+	event_rc = vpu_ipc_handle_event(vpu_dev,
+					KEEMBAY_VPU_EVENT_STOP_COMPLETE);
+	if (event_rc < 0) {
+		dev_err(dev,
+			"Failed to handle 'stop complete' event, probably fatal.\n");
+		return event_rc;
+	}
+
+	return rc;
+}
+EXPORT_SYMBOL(intel_keembay_vpu_stop);
+
+/**
+ * intel_keembay_vpu_status() - Get the VPU state.
+ * @dev:	The VPU device to retrieve the status for.
+ *
+ * Returns the state of the VPU as tracked by this driver.
+ *
+ * Return: Relevant value of enum intel_keembay_vpu_state
+ */
+enum intel_keembay_vpu_state intel_keembay_vpu_status(struct device *dev)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	return vpu_dev->state;
+}
+EXPORT_SYMBOL(intel_keembay_vpu_status);
+
+/**
+ * intel_keembay_vpu_get_wdt_count() - Get the WDT count
+ * @dev:	The VPU device to get the WDT count for.
+ * @id:		ID of WDT events we wish to get
+ *
+ * Returns: Number of WDT timeout occurrences for given ID, or negative
+ *	    error value for invalid ID.
+ */
+int intel_keembay_vpu_get_wdt_count(struct device *dev,
+				    enum intel_keembay_wdt_cpu_id id)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+	int rc = -EINVAL;
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	switch (id) {
+	case KEEMBAY_VPU_NCE:
+		rc = vpu_dev->nce_wdt_count;
+		break;
+	case KEEMBAY_VPU_MSS:
+		rc = vpu_dev->mss_wdt_count;
+		break;
+	default:
+		break;
+	}
+	return rc;
+}
+EXPORT_SYMBOL(intel_keembay_vpu_get_wdt_count);
+
+/**
+ * intel_keembay_vpu_wait_for_ready() - Sleep until VPU is READY
+ * @dev:	The VPU device whose ready message we are waiting for.
+ * @timeout:	How long (in ms) the function will block waiting for the VPU
+ *		to become ready.
+ *
+ * The caller may ask the VPU IPC driver to notify it when the VPU
+ * is READY. The driver performs no checks on the current state, so it
+ * is up to the caller to confirm that the state is correct before starting
+ * the wait.
+ *
+ * Return: 0 on success, negative error code otherwise
+ */
+int intel_keembay_vpu_wait_for_ready(struct device *dev, u32 timeout)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+	int rc;
+
+	if (IS_ERR(vpu_dev))
+		return -EINVAL;
+
+	/*
+	 * If we are in ERROR state, we will not get to READY
+	 * state without some other transitions, so return
+	 * error immediately for caller to handle.
+	 */
+	if (vpu_dev->state == KEEMBAY_VPU_ERROR)
+		return -EIO;
+
+	rc = wait_event_interruptible_timeout(vpu_dev->ready_queue,
+					      vpu_dev->state == KEEMBAY_VPU_READY,
+					      msecs_to_jiffies(timeout));
+
+	/* Condition was false after timeout elapsed */
+	if (!rc)
+		rc = -ETIME;
+
+	/* Condition was true, so rc == 1 */
+	if (rc > 0)
+		rc = 0;
+
+	return rc;
+}
+EXPORT_SYMBOL(intel_keembay_vpu_wait_for_ready);
+
+/**
+ * intel_keembay_vpu_register_for_events() - Register callback for event notification
+ * @dev:	The VPU device.
+ * @callback: Callback function pointer
+ *
+ * Only a single callback can be registered at a time.
+ *
+ * Callback can be triggered from any context, so needs to be able to be run
+ * from IRQ context.
+ *
+ * Return: 0 on success, negative error code otherwise
+ */
+int intel_keembay_vpu_register_for_events(struct device *dev,
+					  void (*callback)(struct device *dev,
+							   enum intel_keembay_vpu_event))
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return PTR_ERR(vpu_dev);
+
+	if (vpu_dev->callback)
+		return -EEXIST;
+
+	vpu_dev->callback = callback;
+
+	return 0;
+}
+EXPORT_SYMBOL(intel_keembay_vpu_register_for_events);
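+
+/*
+ * Example callback (illustrative sketch only; 'my_recovery_work' is a
+ * hypothetical work item owned by the caller). Because the callback may run
+ * in IRQ context, it should only record the event and defer real work:
+ *
+ *	static void my_vpu_event_cb(struct device *dev,
+ *				    enum intel_keembay_vpu_event event)
+ *	{
+ *		if (event == KEEMBAY_VPU_NOTIFY_MSS_WDT)
+ *			schedule_work(&my_recovery_work);
+ *	}
+ */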
+
+/**
+ * intel_keembay_vpu_unregister_for_events() - Unregister callback for event notification
+ * @dev:	The VPU device.
+ *
+ * Return: 0 on success, negative error code otherwise
+ */
+int intel_keembay_vpu_unregister_for_events(struct device *dev)
+{
+	struct vpu_ipc_dev *vpu_dev = to_vpu_dev(dev);
+
+	if (IS_ERR(vpu_dev))
+		return PTR_ERR(vpu_dev);
+
+	vpu_dev->callback = NULL;
+
+	return 0;
+}
+EXPORT_SYMBOL(intel_keembay_vpu_unregister_for_events);
+
+/* Probe() function for the VPU IPC platform driver. */
+static int keembay_vpu_ipc_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct vpu_ipc_dev *vpu_dev;
+	int rc;
+
+	vpu_dev = devm_kzalloc(dev, sizeof(*vpu_dev), GFP_KERNEL);
+	if (!vpu_dev)
+		return -ENOMEM;
+
+	vpu_dev->pdev = pdev;
+	vpu_dev->state = KEEMBAY_VPU_OFF;
+	vpu_dev->ready_message_task = NULL;
+	vpu_dev->firmware_name = NULL;
+	vpu_dev->nce_wdt_count = 0;
+	vpu_dev->mss_wdt_count = 0;
+	spin_lock_init(&vpu_dev->lock);
+	init_waitqueue_head(&vpu_dev->ready_queue);
+
+	/* Retrieve clocks */
+	rc = retrieve_clocks(vpu_dev);
+	if (rc) {
+		dev_err(dev, "Failed to retrieve clocks %d\n", rc);
+		return rc;
+	}
+
+	/* Retrieve memory regions, allocate memory */
+	rc = setup_reserved_memory(vpu_dev);
+	if (rc) {
+		dev_err(dev,
+			"Failed to set up reserved memory regions: %d\n", rc);
+		return rc;
+	}
+
+	/* Request watchdog timer resources */
+	rc = setup_watchdog_resources(vpu_dev);
+	if (rc) {
+		dev_err(dev, "Failed to setup watchdog resources %d\n", rc);
+		goto probe_fail_post_resmem_setup;
+	}
+
+	/* Request the IMR number to be used */
+	rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-imr",
+				  &vpu_dev->imr);
+	if (rc) {
+		dev_err(dev, "failed to get IMR number.\n");
+		goto probe_fail_post_resmem_setup;
+	}
+
+	/* Get VPU ID. */
+	rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-id",
+				  &vpu_dev->vpu_id);
+	if (rc) {
+		dev_err(dev, "VPU ID not defined in Device Tree\n");
+		goto probe_fail_post_resmem_setup;
+	}
+
+	/* Get IPC device to be used for IPC communication. */
+	rc = ipc_device_get(vpu_dev);
+	if (rc) {
+		dev_err(dev, "Failed to get IPC device\n");
+		goto probe_fail_post_resmem_setup;
+	}
+
+	/* Set platform data reference. */
+	platform_set_drvdata(pdev, vpu_dev);
+
+	return rc;
+
+probe_fail_post_resmem_setup:
+	of_reserved_mem_device_release(dev);
+
+	return rc;
+}
+
+/* Remove() function for the VPU IPC platform driver. */
+static int keembay_vpu_ipc_remove(struct platform_device *pdev)
+{
+	struct vpu_ipc_dev *vpu_dev = platform_get_drvdata(pdev);
+	struct device *dev = &pdev->dev;
+
+	if (vpu_dev->ready_message_task) {
+		kthread_stop(vpu_dev->ready_message_task);
+		vpu_dev->ready_message_task = NULL;
+	}
+
+	of_reserved_mem_device_release(dev);
+
+	ipc_device_put(vpu_dev);
+
+	return 0;
+}
+
+/* Compatible string for the VPU/IPC driver. */
+static const struct of_device_id keembay_vpu_ipc_of_match[] = {
+	{
+		.compatible = "intel,keembay-vpu-ipc",
+	},
+	{}
+};
+
+/* The VPU IPC platform driver. */
+static struct platform_driver keem_bay_vpu_ipc_driver = {
+	.driver = {
+			.name = "keembay-vpu-ipc",
+			.of_match_table = keembay_vpu_ipc_of_match,
+		},
+	.probe = keembay_vpu_ipc_probe,
+	.remove = keembay_vpu_ipc_remove,
+};
+
+/* Helper function to get a vpu_dev struct from a generic device pointer. */
+static struct vpu_ipc_dev *to_vpu_dev(struct device *dev)
+{
+	struct platform_device *pdev;
+
+	if (!dev || dev->driver != &keem_bay_vpu_ipc_driver.driver)
+		return ERR_PTR(-EINVAL);
+	pdev = to_platform_device(dev);
+
+	return platform_get_drvdata(pdev);
+}
+
+/*
+ * Retrieve SoC information from the '/soc/version-info' device tree node and
+ * store it into 'vpu_ipc_soc_info' global variable.
+ */
+static int retrieve_dt_soc_information(void)
+{
+	struct device_node *soc_info_dn;
+	int ret;
+
+	soc_info_dn = of_find_node_by_path("/soc/version-info");
+	if (!soc_info_dn)
+		return -ENOENT;
+
+	ret = of_property_read_u64(soc_info_dn, "feature-exclusion",
+				   &vpu_ipc_soc_info->feature_exclusion);
+	if (ret) {
+		pr_err("Property 'feature-exclusion' can't be read.\n");
+		return ret;
+	}
+	ret = of_property_read_u64(soc_info_dn, "device-id",
+				   &vpu_ipc_soc_info->device_id);
+	if (ret) {
+		pr_err("Property 'device-id' can't be read.\n");
+		return ret;
+	}
+	ret = of_property_read_u32(soc_info_dn, "hardware-id",
+				   &vpu_ipc_soc_info->hardware_id);
+	if (ret) {
+		pr_err("Property 'hardware-id' can't be read.\n");
+		return ret;
+	}
+	/*
+	 * Note: the SKU and stepping information from the device tree is
+	 * not a string, but an array of u8/chars. Therefore, we cannot
+	 * parse it as a string.
+	 */
+	ret = of_property_read_u8_array(soc_info_dn, "sku",
+					vpu_ipc_soc_info->sku,
+					sizeof(vpu_ipc_soc_info->sku));
+	if (ret) {
+		pr_err("Property 'sku' can't be read.\n");
+		return ret;
+	}
+	ret = of_property_read_u8_array(soc_info_dn, "stepping",
+					vpu_ipc_soc_info->stepping,
+					sizeof(vpu_ipc_soc_info->stepping));
+	if (ret) {
+		pr_err("Property 'stepping' can't be read.\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Init VPU IPC module:
+ * - Retrieve SoC information from device tree.
+ * - Register the VPU IPC platform driver.
+ */
+static int __init vpu_ipc_init(void)
+{
+	int rc;
+
+	vpu_ipc_soc_info = kzalloc(sizeof(*vpu_ipc_soc_info), GFP_KERNEL);
+	if (!vpu_ipc_soc_info)
+		return -ENOMEM;
+
+	rc = retrieve_dt_soc_information();
+	if (rc < 0)
+		pr_warn("VPU IPC failed to find SoC info, using defaults.\n");
+
+	rc = platform_driver_register(&keem_bay_vpu_ipc_driver);
+	if (rc < 0) {
+		pr_err("Failed to register platform driver for VPU IPC.\n");
+		goto cleanup_soc_info;
+	}
+
+	return 0;
+
+cleanup_soc_info:
+	kfree(vpu_ipc_soc_info);
+
+	return rc;
+}
+
+/*
+ * Remove VPU IPC module.
+ * - Un-register the VPU IPC platform driver.
+ * - Free allocated memory.
+ */
+static void __exit vpu_ipc_exit(void)
+{
+	platform_driver_unregister(&keem_bay_vpu_ipc_driver);
+	kfree(vpu_ipc_soc_info);
+}
+
+module_init(vpu_ipc_init);
+module_exit(vpu_ipc_exit);
+
+MODULE_DESCRIPTION("Keem Bay VPU IPC Driver");
+MODULE_AUTHOR("Paul Murphy <paul.j.murphy@intel.com>");
+MODULE_AUTHOR("Daniele Alessandrelli <daniele.alessandrelli@intel.com>");
+MODULE_LICENSE("GPL");
diff --git a/include/linux/soc/intel/keembay-vpu-ipc.h b/include/linux/soc/intel/keembay-vpu-ipc.h
new file mode 100644
index 000000000000..81d132186482
--- /dev/null
+++ b/include/linux/soc/intel/keembay-vpu-ipc.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Keem Bay VPU IPC Linux Kernel API
+ *
+ * Copyright (c) 2018-2020 Intel Corporation.
+ */
+
+#ifndef __KEEMBAY_VPU_IPC_H
+#define __KEEMBAY_VPU_IPC_H
+
+#include "linux/types.h"
+
+/* The possible node IDs. */
+enum {
+	KMB_VPU_IPC_NODE_ARM_CSS = 0,
+	KMB_VPU_IPC_NODE_LEON_MSS,
+};
+
+/* Possible states of VPU. */
+enum intel_keembay_vpu_state {
+	KEEMBAY_VPU_OFF = 0,
+	KEEMBAY_VPU_BUSY,
+	KEEMBAY_VPU_READY,
+	KEEMBAY_VPU_ERROR,
+	KEEMBAY_VPU_STOPPING
+};
+
+/* Possible CPU IDs for which we receive WDT timeout events. */
+enum intel_keembay_wdt_cpu_id {
+	KEEMBAY_VPU_MSS = 0,
+	KEEMBAY_VPU_NCE
+};
+
+/* Events that can be notified via callback, when registered. */
+enum intel_keembay_vpu_event {
+	KEEMBAY_VPU_NOTIFY_DISCONNECT = 0,
+	KEEMBAY_VPU_NOTIFY_CONNECT,
+	KEEMBAY_VPU_NOTIFY_MSS_WDT,
+	KEEMBAY_VPU_NOTIFY_NCE_WDT,
+};
+
+int intel_keembay_vpu_ipc_open_channel(struct device *dev, u8 node_id,
+				       u16 chan_id);
+int intel_keembay_vpu_ipc_close_channel(struct device *dev, u8 node_id,
+					u16 chan_id);
+int intel_keembay_vpu_ipc_send(struct device *dev, u8 node_id, u16 chan_id,
+			       u32 paddr, size_t size);
+int intel_keembay_vpu_ipc_recv(struct device *dev, u8 node_id, u16 chan_id,
+			       u32 *paddr, size_t *size, u32 timeout);
+int intel_keembay_vpu_startup(struct device *dev, const char *firmware_name);
+int intel_keembay_vpu_reset(struct device *dev);
+int intel_keembay_vpu_stop(struct device *dev);
+enum intel_keembay_vpu_state intel_keembay_vpu_status(struct device *dev);
+int intel_keembay_vpu_get_wdt_count(struct device *dev,
+				    enum intel_keembay_wdt_cpu_id id);
+int intel_keembay_vpu_wait_for_ready(struct device *dev, u32 timeout);
+int intel_keembay_vpu_register_for_events(struct device *dev,
+					  void (*callback)(struct device *dev,
+							   enum intel_keembay_vpu_event event));
+int intel_keembay_vpu_unregister_for_events(struct device *dev);
+
+#endif /* __KEEMBAY_VPU_IPC_H */
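
For orientation while reviewing, here is a minimal, illustrative sketch of how
an in-kernel client might drive the API declared above. It uses only the
prototypes from this header; the firmware name, the channel ID (0x1), the
timeout values and the physical buffer address are placeholders, and vpu_dev
is assumed to be the VPU IPC device already resolved by the caller:

    /* Illustrative only: error paths trimmed, numeric values are placeholders. */
    static int example_vpu_ipc_roundtrip(struct device *vpu_dev,
                                         u32 buf_paddr, size_t buf_size)
    {
            u32 rx_paddr;
            size_t rx_size;
            int ret;

            /* Boot the VPU firmware and wait until it reports READY. */
            ret = intel_keembay_vpu_startup(vpu_dev, "intel/vpu.bin");
            if (ret)
                    return ret;
            ret = intel_keembay_vpu_wait_for_ready(vpu_dev, 3000);
            if (ret)
                    return ret;

            /* Open a channel to the LeonMSS node, send one buffer by physical
             * address, wait for a reply, then close the channel.
             */
            ret = intel_keembay_vpu_ipc_open_channel(vpu_dev,
                            KMB_VPU_IPC_NODE_LEON_MSS, 0x1);
            if (ret)
                    return ret;

            ret = intel_keembay_vpu_ipc_send(vpu_dev,
                            KMB_VPU_IPC_NODE_LEON_MSS, 0x1,
                            buf_paddr, buf_size);
            if (!ret)
                    ret = intel_keembay_vpu_ipc_recv(vpu_dev,
                                    KMB_VPU_IPC_NODE_LEON_MSS, 0x1,
                                    &rx_paddr, &rx_size, 1000);

            intel_keembay_vpu_ipc_close_channel(vpu_dev,
                            KMB_VPU_IPC_NODE_LEON_MSS, 0x1);
            return ret;
    }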
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 08/34] misc: xlink-pcie: Add documentation for XLink PCIe driver
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (6 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 07/34] keembay-vpu-ipc: Add Keem Bay VPU IPC module mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-19 19:36   ` Randy Dunlap
  2021-01-08 21:25 ` [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host mgross
                   ` (25 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala, linux-doc

From: Srikanth Thokala <srikanth.thokala@intel.com>

Provide overview of XLink PCIe driver implementation

Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 Documentation/vpu/index.rst      |  1 +
 Documentation/vpu/xlink-pcie.rst | 90 ++++++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+)
 create mode 100644 Documentation/vpu/xlink-pcie.rst

diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index 7e290e048910..661cc700ee45 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -14,3 +14,4 @@ This documentation contains information for the Intel VPU stack.
    :maxdepth: 2
 
    vpu-stack-overview
+   xlink-pcie
diff --git a/Documentation/vpu/xlink-pcie.rst b/Documentation/vpu/xlink-pcie.rst
new file mode 100644
index 000000000000..2d877c966b1e
--- /dev/null
+++ b/Documentation/vpu/xlink-pcie.rst
@@ -0,0 +1,90 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================
+Kernel driver: xlink-pcie
+================================
+Supported chips:
+  * Intel Edge.AI Computer Vision platforms: Keem Bay
+    Suffix: Bay
+    PCI Device ID: 0x6240
+    Datasheet: Publicly available at Intel
+
+Author: Srikanth Thokala <srikanth.thokala@intel.com>
+
+Introduction
+============
+The xlink-pcie driver implements the transport layer used by the XLink
+protocol subsystem for data transfers to the peer device, i.e. between the
+remote host system and the Keem Bay device.
+
+The Keem Bay device is an ARM-based SoC that integrates a vision processing
+unit (VPU) and a deep-learning neural-network core in hardware.
+The xlink-pcie driver exposes a PCIe endpoint function on the Keem Bay
+device and supports two-way communication with the peer device.
+
+High-level architecture
+=======================
+Remote Host: IA CPU
+Local Host: ARM CPU (Keem Bay)::
+
+        +------------------------------------------------------------------------+
+        |  Remote Host IA CPU              | | Local Host ARM CPU (Keem Bay) |   |
+        +==================================+=+===============================+===+
+        |  User App                        | | User App                      |   |
+        +----------------------------------+-+-------------------------------+---+
+        |   XLink UAPI                     | | XLink UAPI                    |   |
+        +----------------------------------+-+-------------------------------+---+
+        |   XLink Core                     | | XLink Core                    |   |
+        +----------------------------------+-+-------------------------------+---+
+        |   XLink PCIe                     | | XLink PCIe                    |   |
+        +----------------------------------+-+-------------------------------+---+
+        |   XLink-PCIe Remote Host driver  | | XLink-PCIe Local Host driver  |   |
+        +----------------------------------+-+-------------------------------+---+
+        |-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:|:|:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:|
+        +----------------------------------+-+-------------------------------+---+
+        |     PCIe Host Controller         | | PCIe Device Controller        | HW|
+        +----------------------------------+-+-------------------------------+---+
+               ^                                             ^
+               |                                             |
+               |------------- PCIe x2 Link  -----------------|
+
+This XLink PCIe driver consists of two variants:
+* Local Host driver
+
+  * Intended for the ARM CPU
+  * Based on the PCI Endpoint Framework
+  * Driver path: {tree}/drivers/misc/xlink-pcie/local_host
+
+* Remote Host driver
+
+  * Intended for the IA CPU
+  * It is a PCIe endpoint driver
+  * Driver path: {tree}/drivers/misc/xlink-pcie/remote_host
+
+XLink PCIe communication between the local host and the remote host is
+achieved through ring-buffer management and MSI/doorbell interrupts.
+
+The xlink-pcie driver registers the Keem Bay device as a PCIe endpoint
+function; on the remote host the device appears through the standard Linux
+PCIe sysfs interface, /sys/bus/pci/devices/xxxx:xx:xx.0/
+
+
+XLink protocol subsystem
+========================
+XLink is an abstracted control and communication subsystem based on channel
+identification. It is intended to support VPU technology both at the SoC
+level and at the IP level, over multiple interfaces.
+
+- The XLink subsystem abstracts several types of communication channels
+  underneath, allowing different interfaces to be used through the same
+  function-call interface.
+- The communication channels are full-duplex protocol channels allowing
+  concurrent bidirectional communication.
+- The XLink subsystem also supports control operations on the VPU, either
+  from a standalone local system or from a remote system, depending on the
+  underlying communication interface.
+- The XLink subsystem supports the following communication interfaces:
+    * USB CDC
+    * Gigabit Ethernet
+    * PCIe
+    * IPC
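
The "ring buffer management" referred to in the high-level architecture above
amounts to head/tail indices over a fixed array of transfer descriptors placed
in memory shared through a BAR, with an MSI or doorbell raised when new work
is queued. A simplified, illustrative sketch of that index arithmetic (the
names below are made up for the example; the driver's real helpers appear in
the later xlink-pcie patches):

    #include <linux/types.h>

    /* Illustrative ring over 'ndesc' shared descriptors. */
    #define RING_INC(i, n)      (((i) + 1) % (n))

    struct example_ring {
            u32 head;       /* next slot the consumer reads  */
            u32 tail;       /* next slot the producer writes */
            u32 ndesc;      /* fixed ring size               */
    };

    static void example_consume(struct example_ring *r)
    {
            while (r->head != r->tail) {
                    /* ...process the descriptor at r->head... */
                    r->head = RING_INC(r->head, r->ndesc);
            }
            /* The producer rings the doorbell/MSI after advancing 'tail'. */
    }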
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (7 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 08/34] misc: xlink-pcie: Add documentation for XLink PCIe driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-20 17:57   ` Greg KH
  2021-01-08 21:25 ` [PATCH v2 10/34] misc: xlink-pcie: lh: Add PCIe EP DMA functionality mgross
                   ` (24 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala, Derek Kiernan

From: Srikanth Thokala <srikanth.thokala@intel.com>

Add a PCIe EPF driver for the local host (lh) to configure BARs and other
HW resources. The underlying PCIe HW controller is a Synopsys DWC PCIe core.

Cc: Derek Kiernan <derek.kiernan@xilinx.com>
Cc: Dragan Cvetic <dragan.cvetic@xilinx.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 MAINTAINERS                                 |   6 +
 drivers/misc/Kconfig                        |   1 +
 drivers/misc/Makefile                       |   1 +
 drivers/misc/xlink-pcie/Kconfig             |   9 +
 drivers/misc/xlink-pcie/Makefile            |   1 +
 drivers/misc/xlink-pcie/local_host/Makefile |   2 +
 drivers/misc/xlink-pcie/local_host/epf.c    | 413 ++++++++++++++++++++
 drivers/misc/xlink-pcie/local_host/epf.h    |  39 ++
 drivers/misc/xlink-pcie/local_host/xpcie.h  |  38 ++
 9 files changed, 510 insertions(+)
 create mode 100644 drivers/misc/xlink-pcie/Kconfig
 create mode 100644 drivers/misc/xlink-pcie/Makefile
 create mode 100644 drivers/misc/xlink-pcie/local_host/Makefile
 create mode 100644 drivers/misc/xlink-pcie/local_host/epf.c
 create mode 100644 drivers/misc/xlink-pcie/local_host/epf.h
 create mode 100644 drivers/misc/xlink-pcie/local_host/xpcie.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 2c118fcab623..036658cba574 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1962,6 +1962,12 @@ F:	Documentation/devicetree/bindings/arm/intel,keembay.yaml
 F:	arch/arm64/boot/dts/intel/keembay-evm.dts
 F:	arch/arm64/boot/dts/intel/keembay-soc.dtsi
 
+ARM KEEM BAY XLINK PCIE SUPPORT
+M:	Srikanth Thokala <srikanth.thokala@intel.com>
+M:	Mark Gross <mgross@linux.intel.com>
+S:	Supported
+F:	drivers/misc/xlink-pcie/
+
 ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
 M:	Jonathan Cameron <jic23@cam.ac.uk>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index fafa8b0d8099..dfb98e444c6e 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -481,4 +481,5 @@ source "drivers/misc/ocxl/Kconfig"
 source "drivers/misc/cardreader/Kconfig"
 source "drivers/misc/habanalabs/Kconfig"
 source "drivers/misc/uacce/Kconfig"
+source "drivers/misc/xlink-pcie/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index d23231e73330..d17621fc43d5 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -57,3 +57,4 @@ obj-$(CONFIG_HABANA_AI)		+= habanalabs/
 obj-$(CONFIG_UACCE)		+= uacce/
 obj-$(CONFIG_XILINX_SDFEC)	+= xilinx_sdfec.o
 obj-$(CONFIG_HISI_HIKEY_USB)	+= hisi_hikey_usb.o
+obj-y                           += xlink-pcie/
diff --git a/drivers/misc/xlink-pcie/Kconfig b/drivers/misc/xlink-pcie/Kconfig
new file mode 100644
index 000000000000..46aa401d79b7
--- /dev/null
+++ b/drivers/misc/xlink-pcie/Kconfig
@@ -0,0 +1,9 @@
+config XLINK_PCIE_LH_DRIVER
+	tristate "XLink PCIe Local Host driver"
+	depends on PCI_ENDPOINT && ARCH_KEEMBAY
+	help
+	  This option enables the XLink PCIe Local Host driver.
+
+	  Choose M here to compile this driver as a module named mxlk_ep.
+	  This driver is used for XLink communication over PCIe and is to be
+	  loaded on the Intel Keem Bay platform.
diff --git a/drivers/misc/xlink-pcie/Makefile b/drivers/misc/xlink-pcie/Makefile
new file mode 100644
index 000000000000..d693d382e9c6
--- /dev/null
+++ b/drivers/misc/xlink-pcie/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += local_host/
diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
new file mode 100644
index 000000000000..514d3f0c91bc
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
+mxlk_ep-objs := epf.o
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
new file mode 100644
index 000000000000..9e6d407aa6b3
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -0,0 +1,413 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+#include "epf.h"
+
+#define BAR2_MIN_SIZE			SZ_16K
+#define BAR4_MIN_SIZE			SZ_16K
+
+#define PCIE_REGS_PCIE_INTR_ENABLE	0x18
+#define PCIE_REGS_PCIE_INTR_FLAGS	0x1C
+#define LBC_CII_EVENT_FLAG		BIT(18)
+#define PCIE_REGS_PCIE_ERR_INTR_FLAGS	0x24
+#define LINK_REQ_RST_FLG		BIT(15)
+
+static struct pci_epf_header xpcie_header = {
+	.vendorid = PCI_VENDOR_ID_INTEL,
+	.deviceid = PCI_DEVICE_ID_INTEL_KEEMBAY,
+	.baseclass_code = PCI_BASE_CLASS_MULTIMEDIA,
+	.subclass_code = 0x0,
+	.subsys_vendor_id = 0x0,
+	.subsys_id = 0x0,
+};
+
+static const struct pci_epf_device_id xpcie_epf_ids[] = {
+	{
+		.name = "mxlk_pcie_epf",
+	},
+	{},
+};
+
+static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)
+{
+	struct xpcie_epf *xpcie_epf;
+	struct xpcie *xpcie = args;
+	u32 val;
+
+	xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
+	val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+
+	iowrite32(val, xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
+{
+	struct xpcie_epf *xpcie_epf;
+	struct xpcie *xpcie = args;
+	u32 val;
+
+	xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
+	val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+	if (val & LBC_CII_EVENT_FLAG) {
+		iowrite32(LBC_CII_EVENT_FLAG,
+			  xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int intel_xpcie_check_bar(struct pci_epf *epf,
+				 struct pci_epf_bar *epf_bar,
+				 enum pci_barno barno,
+				 size_t size, u8 reserved_bar)
+{
+	if (reserved_bar & (1 << barno)) {
+		dev_err(&epf->dev, "BAR%d is already reserved\n", barno);
+		return -EFAULT;
+	}
+
+	if (epf_bar->size != 0 && epf_bar->size < size) {
+		dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int intel_xpcie_configure_bar(struct pci_epf *epf,
+				     const struct pci_epc_features
+					*epc_features)
+{
+	struct pci_epf_bar *epf_bar;
+	bool bar_fixed_64bit;
+	int ret, i;
+
+	for (i = BAR_0; i <= BAR_5; i++) {
+		epf_bar = &epf->bar[i];
+		bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
+		if (bar_fixed_64bit)
+			epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+		if (epc_features->bar_fixed_size[i])
+			epf_bar->size = epc_features->bar_fixed_size[i];
+
+		if (i == BAR_2) {
+			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
+						    BAR2_MIN_SIZE,
+						    epc_features->reserved_bar);
+			if (ret)
+				return ret;
+		}
+
+		if (i == BAR_4) {
+			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
+						    BAR4_MIN_SIZE,
+						    epc_features->reserved_bar);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void intel_xpcie_cleanup_bar(struct pci_epf *epf, enum pci_barno barno)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+	struct pci_epc *epc = epf->epc;
+
+	if (xpcie_epf->vaddr[barno]) {
+		pci_epc_clear_bar(epc, epf->func_no, &epf->bar[barno]);
+		pci_epf_free_space(epf, xpcie_epf->vaddr[barno], barno);
+		xpcie_epf->vaddr[barno] = NULL;
+	}
+}
+
+static void intel_xpcie_cleanup_bars(struct pci_epf *epf)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+	intel_xpcie_cleanup_bar(epf, BAR_2);
+	intel_xpcie_cleanup_bar(epf, BAR_4);
+	xpcie_epf->xpcie.mmio = NULL;
+	xpcie_epf->xpcie.bar4 = NULL;
+}
+
+static int intel_xpcie_setup_bar(struct pci_epf *epf, enum pci_barno barno,
+				 size_t min_size, size_t align)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+	struct pci_epf_bar *bar = &epf->bar[barno];
+	struct pci_epc *epc = epf->epc;
+	void *vaddr;
+	int ret;
+
+	bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+	if (!bar->size)
+		bar->size = min_size;
+
+	if (barno == BAR_4)
+		bar->flags |= PCI_BASE_ADDRESS_MEM_PREFETCH;
+
+	vaddr = pci_epf_alloc_space(epf, bar->size, barno, align);
+	if (!vaddr) {
+		dev_err(&epf->dev, "Failed to map BAR%d\n", barno);
+		return -ENOMEM;
+	}
+
+	ret = pci_epc_set_bar(epc, epf->func_no, bar);
+	if (ret) {
+		pci_epf_free_space(epf, vaddr, barno);
+		dev_err(&epf->dev, "Failed to set BAR%d\n", barno);
+		return ret;
+	}
+
+	xpcie_epf->vaddr[barno] = vaddr;
+
+	return 0;
+}
+
+static int intel_xpcie_setup_bars(struct pci_epf *epf, size_t align)
+{
+	int ret;
+
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+	ret = intel_xpcie_setup_bar(epf, BAR_2, BAR2_MIN_SIZE, align);
+	if (ret)
+		return ret;
+
+	ret = intel_xpcie_setup_bar(epf, BAR_4, BAR4_MIN_SIZE, align);
+	if (ret) {
+		intel_xpcie_cleanup_bar(epf, BAR_2);
+		return ret;
+	}
+
+	xpcie_epf->comm_bar = BAR_2;
+	xpcie_epf->xpcie.mmio = (void *)xpcie_epf->vaddr[BAR_2] +
+				XPCIE_MMIO_OFFSET;
+
+	xpcie_epf->bar4 = BAR_4;
+	xpcie_epf->xpcie.bar4 = xpcie_epf->vaddr[BAR_4];
+
+	return 0;
+}
+
+static int intel_xpcie_epf_get_platform_data(struct device *dev,
+					     struct xpcie_epf *xpcie_epf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct device_node *soc_node, *version_node;
+	struct resource *res;
+	const char *prop;
+	int prop_size;
+
+	xpcie_epf->irq_dma = platform_get_irq_byname(pdev, "intr");
+	if (xpcie_epf->irq_dma < 0) {
+		dev_err(&xpcie_epf->epf->dev, "failed to get IRQ: %d\n",
+			xpcie_epf->irq_dma);
+		return -EINVAL;
+	}
+
+	xpcie_epf->irq_err = platform_get_irq_byname(pdev, "err_intr");
+	if (xpcie_epf->irq_err < 0) {
+		dev_err(&xpcie_epf->epf->dev, "failed to get error IRQ: %d\n",
+			xpcie_epf->irq_err);
+		return -EINVAL;
+	}
+
+	xpcie_epf->irq = platform_get_irq_byname(pdev, "ev_intr");
+	if (xpcie_epf->irq < 0) {
+		dev_err(&xpcie_epf->epf->dev, "failed to get event IRQ: %d\n",
+			xpcie_epf->irq);
+		return -EINVAL;
+	}
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb");
+	xpcie_epf->apb_base =
+		devm_ioremap(dev, res->start, resource_size(res));
+	if (!xpcie_epf->apb_base)
+		return -ENOMEM;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
+	xpcie_epf->dbi_base =
+		devm_ioremap(dev, res->start, resource_size(res));
+	if (!xpcie_epf->dbi_base)
+		return -ENOMEM;
+
+	memcpy(xpcie_epf->stepping, "B0", 2);
+	soc_node = of_get_parent(pdev->dev.of_node);
+	if (soc_node) {
+		version_node = of_get_child_by_name(soc_node, "version-info");
+		if (version_node) {
+			prop = of_get_property(version_node, "stepping",
+					       &prop_size);
+			if (prop && prop_size <= KEEMBAY_XPCIE_STEPPING_MAXLEN)
+				memcpy(xpcie_epf->stepping, prop, prop_size);
+			of_node_put(version_node);
+		}
+		of_node_put(soc_node);
+	}
+
+	return 0;
+}
+
+static int intel_xpcie_epf_bind(struct pci_epf *epf)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+	const struct pci_epc_features *features;
+	struct pci_epc *epc = epf->epc;
+	struct device *dev;
+	size_t align = SZ_16K;
+	int ret;
+
+	if (WARN_ON_ONCE(!epc))
+		return -EINVAL;
+
+	dev = epc->dev.parent;
+	features = pci_epc_get_features(epc, epf->func_no);
+	xpcie_epf->epc_features = features;
+	if (features) {
+		align = features->align;
+		ret = intel_xpcie_configure_bar(epf, features);
+		if (ret)
+			return ret;
+	}
+
+	ret = intel_xpcie_setup_bars(epf, align);
+	if (ret) {
+		dev_err(&epf->dev, "BAR initialization failed\n");
+		return ret;
+	}
+
+	ret = intel_xpcie_epf_get_platform_data(dev, xpcie_epf);
+	if (ret) {
+		dev_err(&epf->dev, "Unable to get platform data\n");
+		return -EINVAL;
+	}
+
+	if (!strcmp(xpcie_epf->stepping, "A0")) {
+		xpcie_epf->xpcie.legacy_a0 = true;
+		iowrite32(1, (void __iomem *)xpcie_epf->xpcie.mmio +
+			     XPCIE_MMIO_LEGACY_A0);
+	} else {
+		xpcie_epf->xpcie.legacy_a0 = false;
+		iowrite32(0, (void __iomem *)xpcie_epf->xpcie.mmio +
+			     XPCIE_MMIO_LEGACY_A0);
+	}
+
+	/* Enable interrupt */
+	writel(LBC_CII_EVENT_FLAG,
+	       xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_ENABLE);
+	ret = devm_request_irq(&epf->dev, xpcie_epf->irq,
+			       &intel_xpcie_host_interrupt, 0,
+			       XPCIE_DRIVER_NAME, &xpcie_epf->xpcie);
+	if (ret) {
+		dev_err(&epf->dev, "failed to request irq\n");
+		goto err_cleanup_bars;
+	}
+
+	ret = devm_request_irq(&epf->dev, xpcie_epf->irq_err,
+			       &intel_xpcie_err_interrupt, 0,
+			       XPCIE_DRIVER_NAME, &xpcie_epf->xpcie);
+	if (ret) {
+		dev_err(&epf->dev, "failed to request error irq\n");
+		goto err_cleanup_bars;
+	}
+
+	return 0;
+
+err_cleanup_bars:
+	intel_xpcie_cleanup_bars(epf);
+
+	return ret;
+}
+
+static void intel_xpcie_epf_unbind(struct pci_epf *epf)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+	struct pci_epc *epc = epf->epc;
+
+	free_irq(xpcie_epf->irq, &xpcie_epf->xpcie);
+	free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+
+	pci_epc_stop(epc);
+
+	intel_xpcie_cleanup_bars(epf);
+}
+
+static int intel_xpcie_epf_probe(struct pci_epf *epf)
+{
+	struct device *dev = &epf->dev;
+	struct xpcie_epf *xpcie_epf;
+
+	xpcie_epf = devm_kzalloc(dev, sizeof(*xpcie_epf), GFP_KERNEL);
+	if (!xpcie_epf)
+		return -ENOMEM;
+
+	epf->header = &xpcie_header;
+	xpcie_epf->epf = epf;
+	epf_set_drvdata(epf, xpcie_epf);
+
+	return 0;
+}
+
+static void intel_xpcie_epf_shutdown(struct device *dev)
+{
+	struct pci_epf *epf = to_pci_epf(dev);
+	struct xpcie_epf *xpcie_epf;
+
+	xpcie_epf = epf_get_drvdata(epf);
+
+	/* Notify the host, in case PCIe hotplug is not supported */
+	if (xpcie_epf)
+		pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+}
+
+static struct pci_epf_ops ops = {
+	.bind = intel_xpcie_epf_bind,
+	.unbind = intel_xpcie_epf_unbind,
+};
+
+static struct pci_epf_driver xpcie_epf_driver = {
+	.driver.name = "mxlk_pcie_epf",
+	.driver.shutdown = intel_xpcie_epf_shutdown,
+	.probe = intel_xpcie_epf_probe,
+	.id_table = xpcie_epf_ids,
+	.ops = &ops,
+	.owner = THIS_MODULE,
+};
+
+static int __init intel_xpcie_epf_init(void)
+{
+	int ret;
+
+	ret = pci_epf_register_driver(&xpcie_epf_driver);
+	if (ret) {
+		pr_err("Failed to register xlink pcie epf driver: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+module_init(intel_xpcie_epf_init);
+
+static void __exit intel_xpcie_epf_exit(void)
+{
+	pci_epf_unregister_driver(&xpcie_epf_driver);
+}
+module_exit(intel_xpcie_epf_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION(XPCIE_DRIVER_DESC);
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
new file mode 100644
index 000000000000..2b38c87b3701
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_EPF_HEADER_
+#define XPCIE_EPF_HEADER_
+
+#include <linux/pci-epc.h>
+#include <linux/pci-epf.h>
+
+#include "xpcie.h"
+
+#define XPCIE_DRIVER_NAME "mxlk_pcie_epf"
+#define XPCIE_DRIVER_DESC "Intel(R) xLink PCIe endpoint function driver"
+
+#define KEEMBAY_XPCIE_STEPPING_MAXLEN 8
+
+struct xpcie_epf {
+	struct pci_epf *epf;
+	void *vaddr[BAR_5 + 1];
+	enum pci_barno comm_bar;
+	enum pci_barno bar4;
+	const struct pci_epc_features *epc_features;
+	struct xpcie xpcie;
+	int irq;
+	int irq_dma;
+	int irq_err;
+	void __iomem *apb_base;
+	void __iomem *dma_base;
+	void __iomem *dbi_base;
+	char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];
+};
+
+#endif /* XPCIE_EPF_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/xpcie.h b/drivers/misc/xlink-pcie/local_host/xpcie.h
new file mode 100644
index 000000000000..0745e6dfee10
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/xpcie.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_HEADER_
+#define XPCIE_HEADER_
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci_ids.h>
+
+#ifndef PCI_DEVICE_ID_INTEL_KEEMBAY
+#define PCI_DEVICE_ID_INTEL_KEEMBAY 0x6240
+#endif
+
+#define XPCIE_IO_COMM_SIZE SZ_16K
+#define XPCIE_MMIO_OFFSET SZ_4K
+
+/* MMIO layout and offsets shared between device and host */
+struct xpcie_mmio {
+	u8 legacy_a0;
+} __packed;
+
+#define XPCIE_MMIO_LEGACY_A0	(offsetof(struct xpcie_mmio, legacy_a0))
+
+struct xpcie {
+	u32 status;
+	bool legacy_a0;
+	void *mmio;
+	void *bar4;
+};
+
+#endif /* XPCIE_HEADER_ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 10/34] misc: xlink-pcie: lh: Add PCIe EP DMA functionality
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (8 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 11/34] misc: xlink-pcie: lh: Add core communication logic mgross
                   ` (23 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala

From: Srikanth Thokala <srikanth.thokala@intel.com>

Add Synopsys DWC PCIe core embedded-DMA functionality for the local host.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 drivers/misc/xlink-pcie/local_host/Makefile |   1 +
 drivers/misc/xlink-pcie/local_host/dma.c    | 577 ++++++++++++++++++++
 drivers/misc/xlink-pcie/local_host/epf.c    |  15 +-
 drivers/misc/xlink-pcie/local_host/epf.h    |  41 ++
 4 files changed, 631 insertions(+), 3 deletions(-)
 create mode 100644 drivers/misc/xlink-pcie/local_host/dma.c

diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 514d3f0c91bc..54fc118e2dd1 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
 mxlk_ep-objs := epf.o
+mxlk_ep-objs += dma.o
diff --git a/drivers/misc/xlink-pcie/local_host/dma.c b/drivers/misc/xlink-pcie/local_host/dma.c
new file mode 100644
index 000000000000..811e5eebb7ab
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/dma.c
@@ -0,0 +1,577 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/wait.h>
+
+#include "epf.h"
+
+#define DMA_DBI_OFFSET			(0x380000)
+
+/* PCIe DMA control 1 register definitions. */
+#define DMA_CH_CONTROL1_CB_SHIFT	(0)
+#define DMA_CH_CONTROL1_TCB_SHIFT	(1)
+#define DMA_CH_CONTROL1_LLP_SHIFT	(2)
+#define DMA_CH_CONTROL1_LIE_SHIFT	(3)
+#define DMA_CH_CONTROL1_CS_SHIFT	(5)
+#define DMA_CH_CONTROL1_CCS_SHIFT	(8)
+#define DMA_CH_CONTROL1_LLE_SHIFT	(9)
+#define DMA_CH_CONTROL1_CB_MASK		(BIT(DMA_CH_CONTROL1_CB_SHIFT))
+#define DMA_CH_CONTROL1_TCB_MASK	(BIT(DMA_CH_CONTROL1_TCB_SHIFT))
+#define DMA_CH_CONTROL1_LLP_MASK	(BIT(DMA_CH_CONTROL1_LLP_SHIFT))
+#define DMA_CH_CONTROL1_LIE_MASK	(BIT(DMA_CH_CONTROL1_LIE_SHIFT))
+#define DMA_CH_CONTROL1_CS_MASK		(0x3 << DMA_CH_CONTROL1_CS_SHIFT)
+#define DMA_CH_CONTROL1_CCS_MASK	(BIT(DMA_CH_CONTROL1_CCS_SHIFT))
+#define DMA_CH_CONTROL1_LLE_MASK	(BIT(DMA_CH_CONTROL1_LLE_SHIFT))
+
+/* DMA control 1 register Channel Status */
+#define DMA_CH_CONTROL1_CS_RUNNING	(0x1 << DMA_CH_CONTROL1_CS_SHIFT)
+#define DMA_CH_CONTROL1_CS_HALTED	(0x2 << DMA_CH_CONTROL1_CS_SHIFT)
+#define DMA_CH_CONTROL1_CS_STOPPED	(0x3 << DMA_CH_CONTROL1_CS_SHIFT)
+
+/* PCIe DMA Engine enable register definitions. */
+#define DMA_ENGINE_EN_SHIFT		(0)
+#define DMA_ENGINE_EN_MASK		(BIT(DMA_ENGINE_EN_SHIFT))
+
+/* PCIe DMA interrupt registers definitions. */
+#define DMA_ABORT_INTERRUPT_SHIFT	(16)
+#define DMA_ABORT_INTERRUPT_MASK	(0xFF << DMA_ABORT_INTERRUPT_SHIFT)
+#define DMA_ABORT_INTERRUPT_CH_MASK(_c) (BIT(_c) << DMA_ABORT_INTERRUPT_SHIFT)
+#define DMA_DONE_INTERRUPT_MASK		(0xFF)
+#define DMA_DONE_INTERRUPT_CH_MASK(_c)	(BIT(_c))
+#define DMA_ALL_INTERRUPT_MASK \
+	(DMA_ABORT_INTERRUPT_MASK | DMA_DONE_INTERRUPT_MASK)
+
+#define DMA_LL_ERROR_SHIFT		(16)
+#define DMA_CPL_ABORT_SHIFT		(8)
+#define DMA_CPL_TIMEOUT_SHIFT		(16)
+#define DMA_DATA_POI_SHIFT		(24)
+#define DMA_AR_ERROR_CH_MASK(_c)	(BIT(_c))
+#define DMA_LL_ERROR_CH_MASK(_c)	(BIT(_c) << DMA_LL_ERROR_SHIFT)
+#define DMA_UNREQ_ERROR_CH_MASK(_c)	(BIT(_c))
+#define DMA_CPL_ABORT_ERROR_CH_MASK(_c)	(BIT(_c) << DMA_CPL_ABORT_SHIFT)
+#define DMA_CPL_TIMEOUT_ERROR_CH_MASK(_c) (BIT(_c) << DMA_CPL_TIMEOUT_SHIFT)
+#define DMA_DATA_POI_ERROR_CH_MASK(_c)	(BIT(_c) << DMA_DATA_POI_SHIFT)
+
+#define DMA_LLLAIE_SHIFT		(16)
+#define DMA_LLLAIE_MASK			(0xF << DMA_LLLAIE_SHIFT)
+
+#define DMA_CHAN_WRITE_MAX_WEIGHT	(0x7)
+#define DMA_CHAN_READ_MAX_WEIGHT	(0x3)
+#define DMA_CHAN0_WEIGHT_OFFSET		(0)
+#define DMA_CHAN1_WEIGHT_OFFSET		(5)
+#define DMA_CHAN2_WEIGHT_OFFSET		(10)
+#define DMA_CHAN3_WEIGHT_OFFSET		(15)
+#define DMA_CHAN_WRITE_ALL_MAX_WEIGHT					\
+	((DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN0_WEIGHT_OFFSET) |	\
+	 (DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN1_WEIGHT_OFFSET) |	\
+	 (DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN2_WEIGHT_OFFSET) |	\
+	 (DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN3_WEIGHT_OFFSET))
+#define DMA_CHAN_READ_ALL_MAX_WEIGHT					\
+	((DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN0_WEIGHT_OFFSET) |	\
+	 (DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN1_WEIGHT_OFFSET) |	\
+	 (DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN2_WEIGHT_OFFSET) |	\
+	 (DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN3_WEIGHT_OFFSET))
+
+#define PCIE_REGS_PCIE_APP_CNTRL	0x8
+#define APP_XFER_PENDING		BIT(6)
+#define PCIE_REGS_PCIE_SII_PM_STATE_1	0xb4
+#define PM_LINKST_IN_L1			BIT(10)
+
+#define DMA_POLLING_TIMEOUT		1000000
+#define DMA_ENABLE_TIMEOUT		1000
+#define DMA_PCIE_PM_L1_TIMEOUT		20
+
+struct __packed pcie_dma_reg {
+	u32 dma_ctrl_data_arb_prior;
+	u32 reserved1;
+	u32 dma_ctrl;
+	u32 dma_write_engine_en;
+	u32 dma_write_doorbell;
+	u32 reserved2;
+	u32 dma_write_channel_arb_weight_low;
+	u32 dma_write_channel_arb_weight_high;
+	u32 reserved3[3];
+	u32 dma_read_engine_en;
+	u32 dma_read_doorbell;
+	u32 reserved4;
+	u32 dma_read_channel_arb_weight_low;
+	u32 dma_read_channel_arb_weight_high;
+	u32 reserved5[3];
+	u32 dma_write_int_status;
+	u32 reserved6;
+	u32 dma_write_int_mask;
+	u32 dma_write_int_clear;
+	u32 dma_write_err_status;
+	u32 dma_write_done_imwr_low;
+	u32 dma_write_done_imwr_high;
+	u32 dma_write_abort_imwr_low;
+	u32 dma_write_abort_imwr_high;
+	u16 dma_write_ch_imwr_data[8];
+	u32 reserved7[4];
+	u32 dma_write_linked_list_err_en;
+	u32 reserved8[3];
+	u32 dma_read_int_status;
+	u32 reserved9;
+	u32 dma_read_int_mask;
+	u32 dma_read_int_clear;
+	u32 reserved10;
+	u32 dma_read_err_status_low;
+	u32 dma_rd_err_sts_h;
+	u32 reserved11[2];
+	u32 dma_read_linked_list_err_en;
+	u32 reserved12;
+	u32 dma_read_done_imwr_low;
+	u32 dma_read_done_imwr_high;
+	u32 dma_read_abort_imwr_low;
+	u32 dma_read_abort_imwr_high;
+	u16 dma_read_ch_imwr_data[8];
+};
+
+struct __packed pcie_dma_chan {
+	u32 dma_ch_control1;
+	u32 reserved1;
+	u32 dma_transfer_size;
+	u32 dma_sar_low;
+	u32 dma_sar_high;
+	u32 dma_dar_low;
+	u32 dma_dar_high;
+	u32 dma_llp_low;
+	u32 dma_llp_high;
+};
+
+enum xpcie_ep_engine_type {
+	WRITE_ENGINE,
+	READ_ENGINE
+};
+
+static u32 dma_chan_offset[2][DMA_CHAN_NUM] = {
+	{ 0x200, 0x400, 0x600, 0x800 },
+	{ 0x300, 0x500, 0x700, 0x900 }
+};
+
+static void __iomem *intel_xpcie_ep_get_dma_base(struct pci_epf *epf)
+{
+	struct device *dev = &epf->dev;
+	struct xpcie_epf *xpcie_epf = (struct xpcie_epf *)dev->driver_data;
+
+	return xpcie_epf->dbi_base + DMA_DBI_OFFSET;
+}
+
+static int intel_xpcie_ep_dma_disable(void __iomem *dma_base,
+				      enum xpcie_ep_engine_type rw)
+{
+	struct __iomem pcie_dma_reg * dma_reg =
+				(struct __iomem pcie_dma_reg *)dma_base;
+	void __iomem *int_mask, *int_clear;
+	void __iomem *engine_en, *ll_err;
+	int i;
+
+	if (rw == WRITE_ENGINE) {
+		engine_en = (void __iomem *)&dma_reg->dma_write_engine_en;
+		int_mask = (void __iomem *)&dma_reg->dma_write_int_mask;
+		int_clear = (void __iomem *)&dma_reg->dma_write_int_clear;
+		ll_err = (void __iomem *)&dma_reg->dma_write_linked_list_err_en;
+	} else {
+		engine_en = (void __iomem *)&dma_reg->dma_read_engine_en;
+		int_mask = (void __iomem *)&dma_reg->dma_read_int_mask;
+		int_clear = (void __iomem *)&dma_reg->dma_read_int_clear;
+		ll_err = (void __iomem *)&dma_reg->dma_read_linked_list_err_en;
+	}
+
+	iowrite32(0x0, engine_en);
+
+	/* Mask all interrupts. */
+	iowrite32(DMA_ALL_INTERRUPT_MASK, int_mask);
+
+	/* Clear all interrupts. */
+	iowrite32(DMA_ALL_INTERRUPT_MASK, int_clear);
+
+	/* Disable LL abort interrupt (LLLAIE). */
+	iowrite32(0, ll_err);
+
+	/* Wait until the engine is disabled. */
+	for (i = 0; i < DMA_ENABLE_TIMEOUT; i++) {
+		if (!(ioread32(engine_en) & DMA_ENGINE_EN_MASK))
+			return 0;
+		msleep(20);
+	}
+
+	return -EBUSY;
+}
+
+static void intel_xpcie_ep_dma_enable(void __iomem *dma_base,
+				      enum xpcie_ep_engine_type rw)
+{
+	struct __iomem pcie_dma_reg * dma_reg =
+				(struct __iomem pcie_dma_reg *)(dma_base);
+	void __iomem *engine_en, *ll_err, *arb_weight;
+	struct __iomem pcie_dma_chan * dma_chan;
+	void __iomem *int_mask, *int_clear;
+	u32 offset, weight;
+	int i;
+
+	if (rw == WRITE_ENGINE) {
+		engine_en = (void __iomem *)&dma_reg->dma_write_engine_en;
+		int_mask = (void __iomem *)&dma_reg->dma_write_int_mask;
+		int_clear = (void __iomem *)&dma_reg->dma_write_int_clear;
+		ll_err = (void __iomem *)&dma_reg->dma_write_linked_list_err_en;
+		arb_weight = (void __iomem *)
+			     &dma_reg->dma_write_channel_arb_weight_low;
+		weight = DMA_CHAN_WRITE_ALL_MAX_WEIGHT;
+	} else {
+		engine_en = (void __iomem *)&dma_reg->dma_read_engine_en;
+		int_mask = (void __iomem *)&dma_reg->dma_read_int_mask;
+		int_clear = (void __iomem *)&dma_reg->dma_read_int_clear;
+		ll_err = (void __iomem *)&dma_reg->dma_read_linked_list_err_en;
+		arb_weight = (void __iomem *)
+			     &dma_reg->dma_read_channel_arb_weight_low;
+		weight = DMA_CHAN_READ_ALL_MAX_WEIGHT;
+	}
+
+	iowrite32(DMA_ENGINE_EN_MASK, engine_en);
+
+	/* Unmask all interrupts, so that the interrupt line gets asserted. */
+	iowrite32(~(u32)DMA_ALL_INTERRUPT_MASK, int_mask);
+
+	/* Clear all interrupts. */
+	iowrite32(DMA_ALL_INTERRUPT_MASK, int_clear);
+
+	/* Set channel round robin weight. */
+	iowrite32(weight, arb_weight);
+
+	/* Enable LL abort interrupt (LLLAIE). */
+	iowrite32(DMA_LLLAIE_MASK, ll_err);
+
+	/* Enable linked list mode. */
+	for (i = 0; i < DMA_CHAN_NUM; i++) {
+		offset = dma_chan_offset[rw][i];
+		dma_chan = (struct __iomem pcie_dma_chan *)(dma_base + offset);
+		iowrite32(DMA_CH_CONTROL1_LLE_MASK,
+			  (void __iomem *)&dma_chan->dma_ch_control1);
+	}
+}
+
+/*
+ * Make sure the EP is not in the L1 state when ringing the DMA doorbell.
+ * The DMA controller may start the wrong channel if the doorbell is rung
+ * while the controller is transitioning to L1.
+ */
+static int intel_xpcie_ep_dma_doorbell(struct xpcie_epf *xpcie_epf, int chan,
+				       void __iomem *doorbell)
+{
+	int i = DMA_PCIE_PM_L1_TIMEOUT, rc = 0;
+	u32 val, pm_val;
+
+	val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+	iowrite32(val | APP_XFER_PENDING,
+		  xpcie_epf->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+	pm_val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_SII_PM_STATE_1);
+	while (pm_val & PM_LINKST_IN_L1) {
+		if (i-- < 0) {
+			rc = -ETIME;
+			break;
+		}
+		udelay(5);
+		pm_val = ioread32(xpcie_epf->apb_base +
+				  PCIE_REGS_PCIE_SII_PM_STATE_1);
+	}
+
+	iowrite32((u32)chan, doorbell);
+
+	iowrite32(val & ~APP_XFER_PENDING,
+		  xpcie_epf->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+
+	return rc;
+}
+
+static int intel_xpcie_ep_dma_err_status(void __iomem *err_status, int chan)
+{
+	if (ioread32(err_status) &
+	    (DMA_AR_ERROR_CH_MASK(chan) | DMA_LL_ERROR_CH_MASK(chan)))
+		return -EIO;
+
+	return 0;
+}
+
+static int intel_xpcie_ep_dma_rd_err_sts_h(void __iomem *err_status,
+					   int chan)
+{
+	if (ioread32(err_status) &
+	    (DMA_UNREQ_ERROR_CH_MASK(chan) |
+	     DMA_CPL_ABORT_ERROR_CH_MASK(chan) |
+	     DMA_CPL_TIMEOUT_ERROR_CH_MASK(chan) |
+	     DMA_DATA_POI_ERROR_CH_MASK(chan)))
+		return -EIO;
+
+	return 0;
+}
+
+static void
+intel_xpcie_ep_dma_setup_ll_descs(struct __iomem pcie_dma_chan * dma_chan,
+				  struct xpcie_dma_ll_desc_buf *desc_buf,
+				  int descs_num)
+{
+	struct xpcie_dma_ll_desc *descs = desc_buf->virt;
+	int i;
+
+	/* Setup linked list descriptors */
+	for (i = 0; i < descs_num - 1; i++)
+		descs[i].dma_ch_control1 = DMA_CH_CONTROL1_CB_MASK;
+	descs[descs_num - 1].dma_ch_control1 = DMA_CH_CONTROL1_LIE_MASK |
+						DMA_CH_CONTROL1_CB_MASK;
+	descs[descs_num].dma_ch_control1 = DMA_CH_CONTROL1_LLP_MASK |
+					   DMA_CH_CONTROL1_TCB_MASK;
+	descs[descs_num].src_addr = (phys_addr_t)desc_buf->phys;
+
+	/* Setup linked list settings */
+	iowrite32(DMA_CH_CONTROL1_LLE_MASK | DMA_CH_CONTROL1_CCS_MASK,
+		  (void __iomem *)&dma_chan->dma_ch_control1);
+	iowrite32((u32)desc_buf->phys, (void __iomem *)&dma_chan->dma_llp_low);
+	iowrite32((u64)desc_buf->phys >> 32,
+		  (void __iomem *)&dma_chan->dma_llp_high);
+}
+
+int intel_xpcie_ep_dma_write_ll(struct pci_epf *epf, int chan, int descs_num)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+	void __iomem *dma_base = xpcie_epf->dma_base;
+	struct __iomem pcie_dma_chan * dma_chan;
+	struct xpcie_dma_ll_desc_buf *desc_buf;
+	struct __iomem pcie_dma_reg * dma_reg =
+				(struct __iomem pcie_dma_reg *)(dma_base);
+	int i, rc;
+
+	if (descs_num <= 0 || descs_num > XPCIE_NUM_TX_DESCS)
+		return -EINVAL;
+
+	if (chan < 0 || chan >= DMA_CHAN_NUM)
+		return -EINVAL;
+
+	dma_chan = (struct __iomem pcie_dma_chan *)
+		(dma_base + dma_chan_offset[WRITE_ENGINE][chan]);
+
+	desc_buf = &xpcie_epf->tx_desc_buf[chan];
+
+	intel_xpcie_ep_dma_setup_ll_descs(dma_chan, desc_buf, descs_num);
+
+	/* Start DMA transfer. */
+	rc = intel_xpcie_ep_dma_doorbell(xpcie_epf, chan,
+					 (void __iomem *)
+					 &dma_reg->dma_write_doorbell);
+	if (rc)
+		return rc;
+
+	/* Wait for DMA transfer to complete. */
+	for (i = 0; i < DMA_POLLING_TIMEOUT; i++) {
+		usleep_range(5, 10);
+		if (ioread32((void __iomem *)&dma_reg->dma_write_int_status) &
+		    (DMA_DONE_INTERRUPT_CH_MASK(chan) |
+		     DMA_ABORT_INTERRUPT_CH_MASK(chan)))
+			break;
+	}
+	if (i == DMA_POLLING_TIMEOUT) {
+		dev_err(&xpcie_epf->epf->dev, "DMA Wr timeout\n");
+		rc = -ETIME;
+		goto cleanup;
+	}
+
+	rc = intel_xpcie_ep_dma_err_status((void __iomem *)
+					   &dma_reg->dma_write_err_status,
+					   chan);
+
+cleanup:
+	/* Clear the done/abort interrupt. */
+	iowrite32((DMA_DONE_INTERRUPT_CH_MASK(chan) |
+		   DMA_ABORT_INTERRUPT_CH_MASK(chan)),
+		  (void __iomem *)&dma_reg->dma_write_int_clear);
+
+	if (rc) {
+		if (intel_xpcie_ep_dma_disable(dma_base, WRITE_ENGINE)) {
+			dev_err(&xpcie_epf->epf->dev,
+				"failed to disable WR DMA\n");
+			return rc;
+		}
+		intel_xpcie_ep_dma_enable(dma_base, WRITE_ENGINE);
+	}
+
+	return rc;
+}
+
+int intel_xpcie_ep_dma_read_ll(struct pci_epf *epf, int chan, int descs_num)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+	void __iomem *dma_base = xpcie_epf->dma_base;
+	struct xpcie_dma_ll_desc_buf *desc_buf;
+	struct __iomem pcie_dma_reg * dma_reg =
+				(struct __iomem pcie_dma_reg *)(dma_base);
+	struct __iomem pcie_dma_chan * dma_chan;
+	int i, rc;
+
+	if (descs_num <= 0 || descs_num > XPCIE_NUM_RX_DESCS)
+		return -EINVAL;
+
+	if (chan < 0 || chan >= DMA_CHAN_NUM)
+		return -EINVAL;
+
+	dma_chan = (struct __iomem pcie_dma_chan *)
+		(dma_base + dma_chan_offset[READ_ENGINE][chan]);
+
+	desc_buf = &xpcie_epf->rx_desc_buf[chan];
+
+	intel_xpcie_ep_dma_setup_ll_descs(dma_chan, desc_buf, descs_num);
+
+	/* Start DMA transfer. */
+	rc = intel_xpcie_ep_dma_doorbell(xpcie_epf, chan,
+					 (void __iomem *)
+					 &dma_reg->dma_read_doorbell);
+	if (rc)
+		return rc;
+
+	/* Wait for DMA transfer to complete. */
+	for (i = 0; i < DMA_POLLING_TIMEOUT; i++) {
+		usleep_range(5, 10);
+		if (ioread32((void __iomem *)&dma_reg->dma_read_int_status) &
+		    (DMA_DONE_INTERRUPT_CH_MASK(chan) |
+		     DMA_ABORT_INTERRUPT_CH_MASK(chan)))
+			break;
+	}
+	if (i == DMA_POLLING_TIMEOUT) {
+		dev_err(&xpcie_epf->epf->dev, "DMA Rd timeout\n");
+		rc = -ETIME;
+		goto cleanup;
+	}
+
+	rc = intel_xpcie_ep_dma_err_status((void __iomem *)
+					   &dma_reg->dma_read_err_status_low,
+					   chan);
+	if (!rc) {
+		rc =
+		intel_xpcie_ep_dma_rd_err_sts_h((void __iomem *)
+						&dma_reg->dma_rd_err_sts_h,
+						chan);
+	}
+cleanup:
+	/* Clear the done/abort interrupt. */
+	iowrite32((DMA_DONE_INTERRUPT_CH_MASK(chan) |
+		   DMA_ABORT_INTERRUPT_CH_MASK(chan)),
+		  (void __iomem *)&dma_reg->dma_read_int_clear);
+
+	if (rc) {
+		if (intel_xpcie_ep_dma_disable(dma_base, READ_ENGINE)) {
+			dev_err(&xpcie_epf->epf->dev,
+				"failed to disable RD DMA\n");
+			return rc;
+		}
+		intel_xpcie_ep_dma_enable(dma_base, READ_ENGINE);
+	}
+
+	return rc;
+}
+
+static void intel_xpcie_ep_dma_free_ll_descs_mem(struct xpcie_epf *xpcie_epf)
+{
+	struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+	int i;
+
+	for (i = 0; i < DMA_CHAN_NUM; i++) {
+		if (xpcie_epf->tx_desc_buf[i].virt) {
+			dma_free_coherent(dma_dev,
+					  xpcie_epf->tx_desc_buf[i].size,
+					  xpcie_epf->tx_desc_buf[i].virt,
+					  xpcie_epf->tx_desc_buf[i].phys);
+		}
+		if (xpcie_epf->rx_desc_buf[i].virt) {
+			dma_free_coherent(dma_dev,
+					  xpcie_epf->rx_desc_buf[i].size,
+					  xpcie_epf->rx_desc_buf[i].virt,
+					  xpcie_epf->rx_desc_buf[i].phys);
+		}
+
+		memset(&xpcie_epf->tx_desc_buf[i], 0,
+		       sizeof(struct xpcie_dma_ll_desc_buf));
+		memset(&xpcie_epf->rx_desc_buf[i], 0,
+		       sizeof(struct xpcie_dma_ll_desc_buf));
+	}
+}
+
+static int intel_xpcie_ep_dma_alloc_ll_descs_mem(struct xpcie_epf *xpcie_epf)
+{
+	struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+	int tx_num = XPCIE_NUM_TX_DESCS + 1;
+	int rx_num = XPCIE_NUM_RX_DESCS + 1;
+	size_t tx_size, rx_size;
+	int i;
+
+	tx_size = tx_num * sizeof(struct xpcie_dma_ll_desc);
+	rx_size = rx_num * sizeof(struct xpcie_dma_ll_desc);
+
+	for (i = 0; i < DMA_CHAN_NUM; i++) {
+		xpcie_epf->tx_desc_buf[i].virt =
+			dma_alloc_coherent(dma_dev, tx_size,
+					   &xpcie_epf->tx_desc_buf[i].phys,
+					   GFP_KERNEL);
+		xpcie_epf->rx_desc_buf[i].virt =
+			dma_alloc_coherent(dma_dev, rx_size,
+					   &xpcie_epf->rx_desc_buf[i].phys,
+					   GFP_KERNEL);
+
+		if (!xpcie_epf->tx_desc_buf[i].virt ||
+		    !xpcie_epf->rx_desc_buf[i].virt) {
+			intel_xpcie_ep_dma_free_ll_descs_mem(xpcie_epf);
+			return -ENOMEM;
+		}
+
+		xpcie_epf->tx_desc_buf[i].size = tx_size;
+		xpcie_epf->rx_desc_buf[i].size = rx_size;
+	}
+	return 0;
+}
+
+int intel_xpcie_ep_dma_reset(struct pci_epf *epf)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+	/* Disable the DMA read/write engine. */
+	if (intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, WRITE_ENGINE) ||
+	    intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, READ_ENGINE))
+		return -EBUSY;
+
+	intel_xpcie_ep_dma_enable(xpcie_epf->dma_base, WRITE_ENGINE);
+	intel_xpcie_ep_dma_enable(xpcie_epf->dma_base, READ_ENGINE);
+
+	return 0;
+}
+
+int intel_xpcie_ep_dma_uninit(struct pci_epf *epf)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+	if (intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, WRITE_ENGINE) ||
+	    intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, READ_ENGINE))
+		return -EBUSY;
+
+	intel_xpcie_ep_dma_free_ll_descs_mem(xpcie_epf);
+
+	return 0;
+}
+
+int intel_xpcie_ep_dma_init(struct pci_epf *epf)
+{
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+	int rc;
+
+	xpcie_epf->dma_base = intel_xpcie_ep_get_dma_base(epf);
+
+	rc = intel_xpcie_ep_dma_alloc_ll_descs_mem(xpcie_epf);
+	if (rc)
+		return rc;
+
+	return intel_xpcie_ep_dma_reset(epf);
+}
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
index 9e6d407aa6b3..dd8ffcabf5f9 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.c
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -45,6 +45,8 @@ static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)
 
 	xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
 	val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+	if (val & LINK_REQ_RST_FLG)
+		intel_xpcie_ep_dma_reset(xpcie_epf->epf);
 
 	iowrite32(val, xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
 
@@ -325,8 +327,17 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
 		goto err_cleanup_bars;
 	}
 
+	ret = intel_xpcie_ep_dma_init(epf);
+	if (ret) {
+		dev_err(&epf->dev, "DMA initialization failed\n");
+		goto err_free_err_irq;
+	}
+
 	return 0;
 
+err_free_err_irq:
+	free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+
 err_cleanup_bars:
 	intel_xpcie_cleanup_bars(epf);
 
@@ -335,11 +346,9 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
 
 static void intel_xpcie_epf_unbind(struct pci_epf *epf)
 {
-	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
 	struct pci_epc *epc = epf->epc;
 
-	free_irq(xpcie_epf->irq, &xpcie_epf->xpcie);
-	free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+	intel_xpcie_ep_dma_uninit(epf);
 
 	pci_epc_stop(epc);
 
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
index 2b38c87b3701..6ce5260e67be 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.h
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -20,6 +20,38 @@
 
 #define KEEMBAY_XPCIE_STEPPING_MAXLEN 8
 
+#define DMA_CHAN_NUM		(4)
+
+#define XPCIE_NUM_TX_DESCS	(64)
+#define XPCIE_NUM_RX_DESCS	(64)
+
+extern bool dma_ll_mode;
+
+struct xpcie_dma_ll_desc {
+	u32 dma_ch_control1;
+	u32 dma_transfer_size;
+	union {
+		struct {
+			u32 dma_sar_low;
+			u32 dma_sar_high;
+		};
+		phys_addr_t src_addr;
+	};
+	union {
+		struct {
+			u32 dma_dar_low;
+			u32 dma_dar_high;
+		};
+		phys_addr_t dst_addr;
+	};
+} __packed;
+
+struct xpcie_dma_ll_desc_buf {
+	struct xpcie_dma_ll_desc *virt;
+	dma_addr_t phys;
+	size_t size;
+};
+
 struct xpcie_epf {
 	struct pci_epf *epf;
 	void *vaddr[BAR_5 + 1];
@@ -34,6 +66,15 @@ struct xpcie_epf {
 	void __iomem *dma_base;
 	void __iomem *dbi_base;
 	char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];
+
+	struct xpcie_dma_ll_desc_buf	tx_desc_buf[DMA_CHAN_NUM];
+	struct xpcie_dma_ll_desc_buf	rx_desc_buf[DMA_CHAN_NUM];
 };
 
+int intel_xpcie_ep_dma_init(struct pci_epf *epf);
+int intel_xpcie_ep_dma_uninit(struct pci_epf *epf);
+int intel_xpcie_ep_dma_reset(struct pci_epf *epf);
+int intel_xpcie_ep_dma_read_ll(struct pci_epf *epf, int chan, int descs_num);
+int intel_xpcie_ep_dma_write_ll(struct pci_epf *epf, int chan, int descs_num);
+
 #endif /* XPCIE_EPF_HEADER_ */
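
To tie this patch together: a caller is expected to fill the per-channel
linked-list descriptor buffer declared above and then kick the engine through
intel_xpcie_ep_dma_write_ll()/..._read_ll(). The real caller arrives with the
core communication logic later in this series; below is only a hypothetical
sketch, in which channel 0, 'n' and the address/length arrays are placeholders:

    /* Hypothetical sketch: push 'n' local buffers to host memory. */
    static int example_push_to_host(struct pci_epf *epf, int n,
                                    dma_addr_t *src, u64 *dst, u32 *len)
    {
            struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
            struct xpcie_dma_ll_desc *d = xpcie_epf->tx_desc_buf[0].virt;
            int i;

            if (n <= 0 || n > XPCIE_NUM_TX_DESCS)
                    return -EINVAL;

            for (i = 0; i < n; i++) {
                    d[i].src_addr = src[i];           /* EP-local address */
                    d[i].dst_addr = dst[i];           /* host address     */
                    d[i].dma_transfer_size = len[i];
            }

            /* Chains the terminating link descriptor, rings the write
             * doorbell and polls for completion (see dma.c above).
             */
            return intel_xpcie_ep_dma_write_ll(epf, 0, n);
    }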
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 11/34] misc: xlink-pcie: lh: Add core communication logic
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (9 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 10/34] misc: xlink-pcie: lh: Add PCIe EP DMA functionality mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 12/34] misc: xlink-pcie: lh: Prepare changes for adding remote host driver mgross
                   ` (22 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala

From: Srikanth Thokala <srikanth.thokala@intel.com>

Add logic to establish communication with the remote host, which is done
through ring-buffer management and MSI/doorbell interrupts.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 drivers/misc/xlink-pcie/local_host/Makefile |   2 +
 drivers/misc/xlink-pcie/local_host/core.c   | 808 ++++++++++++++++++++
 drivers/misc/xlink-pcie/local_host/core.h   | 247 ++++++
 drivers/misc/xlink-pcie/local_host/epf.c    | 116 ++-
 drivers/misc/xlink-pcie/local_host/epf.h    |  23 +
 drivers/misc/xlink-pcie/local_host/util.c   | 375 +++++++++
 drivers/misc/xlink-pcie/local_host/util.h   |  70 ++
 drivers/misc/xlink-pcie/local_host/xpcie.h  |  63 ++
 include/linux/xlink_drv_inf.h               |  60 ++
 9 files changed, 1756 insertions(+), 8 deletions(-)
 create mode 100644 drivers/misc/xlink-pcie/local_host/core.c
 create mode 100644 drivers/misc/xlink-pcie/local_host/core.h
 create mode 100644 drivers/misc/xlink-pcie/local_host/util.c
 create mode 100644 drivers/misc/xlink-pcie/local_host/util.h
 create mode 100644 include/linux/xlink_drv_inf.h

diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 54fc118e2dd1..28761751d43b 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -1,3 +1,5 @@
 obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
 mxlk_ep-objs := epf.o
 mxlk_ep-objs += dma.o
+mxlk_ep-objs += core.o
+mxlk_ep-objs += util.o
diff --git a/drivers/misc/xlink-pcie/local_host/core.c b/drivers/misc/xlink-pcie/local_host/core.c
new file mode 100644
index 000000000000..612ab917db45
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/core.c
@@ -0,0 +1,808 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include <linux/of_reserved_mem.h>
+
+#include "epf.h"
+#include "core.h"
+#include "util.h"
+
+static struct xpcie *global_xpcie;
+
+static struct xpcie *intel_xpcie_core_get_by_id(u32 sw_device_id)
+{
+	return (sw_device_id == xlink_sw_id) ? global_xpcie : NULL;
+}
+
+static int intel_xpcie_map_dma(struct xpcie *xpcie, struct xpcie_buf_desc *bd,
+			       int direction)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct pci_epf *epf = xpcie_epf->epf;
+	struct device *dma_dev = epf->epc->dev.parent;
+
+	bd->phys = dma_map_single(dma_dev, bd->data, bd->length, direction);
+
+	return dma_mapping_error(dma_dev, bd->phys);
+}
+
+static void intel_xpcie_unmap_dma(struct xpcie *xpcie,
+				  struct xpcie_buf_desc *bd, int direction)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct pci_epf *epf = xpcie_epf->epf;
+	struct device *dma_dev = epf->epc->dev.parent;
+
+	dma_unmap_single(dma_dev, bd->phys, bd->length, direction);
+}
+
+static void intel_xpcie_set_cap_txrx(struct xpcie *xpcie)
+{
+	size_t tx_len = sizeof(struct xpcie_transfer_desc) *
+				XPCIE_NUM_TX_DESCS;
+	size_t rx_len = sizeof(struct xpcie_transfer_desc) *
+				XPCIE_NUM_RX_DESCS;
+	size_t hdr_len = sizeof(struct xpcie_cap_txrx);
+	u32 start = sizeof(struct xpcie_mmio);
+	struct xpcie_cap_txrx *cap;
+	struct xpcie_cap_hdr *hdr;
+	u16 next;
+
+	next = (u16)(start + hdr_len + tx_len + rx_len);
+	intel_xpcie_iowrite32(start, xpcie->mmio + XPCIE_MMIO_CAP_OFF);
+	cap = (void *)xpcie->mmio + start;
+	memset(cap, 0, sizeof(struct xpcie_cap_txrx));
+	cap->hdr.id = XPCIE_CAP_TXRX;
+	cap->hdr.next = next;
+	cap->fragment_size = XPCIE_FRAGMENT_SIZE;
+	cap->tx.ring = start + hdr_len;
+	cap->tx.ndesc = XPCIE_NUM_TX_DESCS;
+	cap->rx.ring = start + hdr_len + tx_len;
+	cap->rx.ndesc = XPCIE_NUM_RX_DESCS;
+
+	hdr = (struct xpcie_cap_hdr *)((void *)xpcie->mmio + next);
+	hdr->id = XPCIE_CAP_NULL;
+}
+
+static void intel_xpcie_txrx_cleanup(struct xpcie *xpcie)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+	struct xpcie_interface *inf = &xpcie->interfaces[0];
+	struct xpcie_stream *tx = &xpcie->tx;
+	struct xpcie_stream *rx = &xpcie->rx;
+	struct xpcie_transfer_desc *td;
+	int index;
+
+	xpcie->stop_flag = true;
+	xpcie->no_tx_buffer = false;
+	inf->data_avail = true;
+	wake_up_interruptible(&xpcie->tx_waitq);
+	wake_up_interruptible(&inf->rx_waitq);
+	mutex_lock(&xpcie->wlock);
+	mutex_lock(&inf->rlock);
+
+	for (index = 0; index < rx->pipe.ndesc; index++) {
+		td = rx->pipe.tdr + index;
+		intel_xpcie_set_td_address(td, 0);
+		intel_xpcie_set_td_length(td, 0);
+	}
+	for (index = 0; index < tx->pipe.ndesc; index++) {
+		td = tx->pipe.tdr + index;
+		intel_xpcie_set_td_address(td, 0);
+		intel_xpcie_set_td_length(td, 0);
+	}
+
+	intel_xpcie_list_cleanup(&xpcie->tx_pool);
+	intel_xpcie_list_cleanup(&xpcie->rx_pool);
+
+	if (xpcie_epf->tx_virt) {
+		dma_free_coherent(dma_dev, xpcie_epf->tx_size,
+				  xpcie_epf->tx_virt, xpcie_epf->tx_phys);
+	}
+
+	mutex_unlock(&inf->rlock);
+	mutex_unlock(&xpcie->wlock);
+}
+
+/*
+ * RX/TX are named from the Remote Host's point of view;
+ * on the Local Host they are reversed.
+ */
+static int intel_xpcie_txrx_init(struct xpcie *xpcie,
+				 struct xpcie_cap_txrx *cap)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+	struct xpcie_stream *tx = &xpcie->tx;
+	struct xpcie_stream *rx = &xpcie->rx;
+	int tx_pool_size, rx_pool_size;
+	struct xpcie_buf_desc *bd;
+	int index, ndesc, rc;
+
+	xpcie->txrx = cap;
+	xpcie->fragment_size = cap->fragment_size;
+	xpcie->stop_flag = false;
+
+	rx->pipe.ndesc = cap->tx.ndesc;
+	rx->pipe.head = &cap->tx.head;
+	rx->pipe.tail = &cap->tx.tail;
+	rx->pipe.tdr = (void *)xpcie->mmio + cap->tx.ring;
+
+	tx->pipe.ndesc = cap->rx.ndesc;
+	tx->pipe.head = &cap->rx.head;
+	tx->pipe.tail = &cap->rx.tail;
+	tx->pipe.tdr = (void *)xpcie->mmio + cap->rx.ring;
+
+	intel_xpcie_list_init(&xpcie->rx_pool);
+	rx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+	ndesc = rx_pool_size / xpcie->fragment_size;
+
+	/* Initialize reserved memory resources */
+	rc = of_reserved_mem_device_init(dma_dev);
+	if (rc) {
+		dev_err(dma_dev, "Could not get reserved memory\n");
+		goto error;
+	}
+
+	for (index = 0; index < ndesc; index++) {
+		bd = intel_xpcie_alloc_bd(xpcie->fragment_size);
+		if (bd) {
+			intel_xpcie_list_put(&xpcie->rx_pool, bd);
+		} else {
+			dev_err(xpcie_to_dev(xpcie),
+				"failed to alloc all rx pool descriptors\n");
+			goto error;
+		}
+	}
+
+	intel_xpcie_list_init(&xpcie->tx_pool);
+	tx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+	ndesc = tx_pool_size / xpcie->fragment_size;
+
+	xpcie_epf->tx_size = tx_pool_size;
+	xpcie_epf->tx_virt = dma_alloc_coherent(dma_dev,
+						xpcie_epf->tx_size,
+						&xpcie_epf->tx_phys,
+						GFP_KERNEL);
+	if (!xpcie_epf->tx_virt)
+		goto error;
+
+	for (index = 0; index < ndesc; index++) {
+		bd = intel_xpcie_alloc_bd_reuse(xpcie->fragment_size,
+						xpcie_epf->tx_virt +
+						(index *
+						 xpcie->fragment_size),
+						xpcie_epf->tx_phys +
+						(index *
+						 xpcie->fragment_size));
+		if (bd) {
+			intel_xpcie_list_put(&xpcie->tx_pool, bd);
+		} else {
+			dev_err(xpcie_to_dev(xpcie),
+				"failed to alloc all tx pool descriptors\n");
+			goto error;
+		}
+	}
+
+	return 0;
+
+error:
+	intel_xpcie_txrx_cleanup(xpcie);
+
+	return -ENOMEM;
+}
+
+static int intel_xpcie_discover_txrx(struct xpcie *xpcie)
+{
+	struct xpcie_cap_txrx *cap;
+	int error;
+
+	cap = intel_xpcie_cap_find(xpcie, 0, XPCIE_CAP_TXRX);
+	if (cap) {
+		error = intel_xpcie_txrx_init(xpcie, cap);
+	} else {
+		dev_err(xpcie_to_dev(xpcie), "xpcie txrx info not found\n");
+		error = -EIO;
+	}
+
+	return error;
+}
+
+static void intel_xpcie_start_tx(struct xpcie *xpcie, unsigned long delay)
+{
+	/*
+	 * Use a single WQ for both RX and TX on A0 silicon.
+	 *
+	 * Synchronous reads and writes to DDR were found to cause memory
+	 * mismatch errors in stability tests due to a silicon bug in the
+	 * A0 SoC.
+	 */
+	if (xpcie->legacy_a0)
+		queue_delayed_work(xpcie->rx_wq, &xpcie->tx_event, delay);
+	else
+		queue_delayed_work(xpcie->tx_wq, &xpcie->tx_event, delay);
+}
+
+static void intel_xpcie_start_rx(struct xpcie *xpcie, unsigned long delay)
+{
+	queue_delayed_work(xpcie->rx_wq, &xpcie->rx_event, delay);
+}
+
+static void intel_xpcie_rx_event_handler(struct work_struct *work)
+{
+	struct xpcie *xpcie = container_of(work, struct xpcie, rx_event.work);
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct xpcie_buf_desc *bd_head, *bd_tail, *bd;
+	u32 head, tail, ndesc, length, initial_head;
+	unsigned long delay = msecs_to_jiffies(1);
+	struct xpcie_stream *rx = &xpcie->rx;
+	int descs_num = 0, chan = 0, rc;
+	struct xpcie_dma_ll_desc *desc;
+	struct xpcie_transfer_desc *td;
+	bool reset_work = false;
+	u16 interface;
+	u64 address;
+
+	if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+		return;
+
+	bd_head = NULL;
+	bd_tail = NULL;
+	ndesc = rx->pipe.ndesc;
+	tail = intel_xpcie_get_tdr_tail(&rx->pipe);
+	initial_head = intel_xpcie_get_tdr_head(&rx->pipe);
+	head = initial_head;
+
+	while (head != tail) {
+		td = rx->pipe.tdr + head;
+
+		bd = intel_xpcie_alloc_rx_bd(xpcie);
+		if (!bd) {
+			reset_work = true;
+			if (descs_num == 0) {
+				delay = msecs_to_jiffies(10);
+				goto task_exit;
+			}
+			break;
+		}
+
+		interface = intel_xpcie_get_td_interface(td);
+		length = intel_xpcie_get_td_length(td);
+		address = intel_xpcie_get_td_address(td);
+
+		bd->length = length;
+		bd->interface = interface;
+		rc = intel_xpcie_map_dma(xpcie, bd, DMA_FROM_DEVICE);
+		if (rc) {
+			dev_err(xpcie_to_dev(xpcie),
+				"failed to map rx bd (%d)\n", rc);
+			intel_xpcie_free_rx_bd(xpcie, bd);
+			break;
+		}
+
+		desc = &xpcie_epf->rx_desc_buf[chan].virt[descs_num++];
+		desc->dma_transfer_size = length;
+		desc->dst_addr = bd->phys;
+		desc->src_addr = address;
+
+		if (bd_head)
+			bd_tail->next = bd;
+		else
+			bd_head = bd;
+		bd_tail = bd;
+
+		head = XPCIE_CIRCULAR_INC(head, ndesc);
+	}
+
+	if (descs_num == 0)
+		goto task_exit;
+
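+	/* Issue one linked-list DMA read covering all queued descriptors */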
+	rc = intel_xpcie_copy_from_host_ll(xpcie, chan, descs_num);
+
+	bd = bd_head;
+	while (bd) {
+		intel_xpcie_unmap_dma(xpcie, bd, DMA_FROM_DEVICE);
+		bd = bd->next;
+	}
+
+	if (rc) {
+		dev_err(xpcie_to_dev(xpcie),
+			"failed to DMA from host (%d)\n", rc);
+		intel_xpcie_free_rx_bd(xpcie, bd_head);
+		delay = msecs_to_jiffies(5);
+		reset_work = true;
+		goto task_exit;
+	}
+
+	head = initial_head;
+	bd = bd_head;
+	while (bd) {
+		td = rx->pipe.tdr + head;
+		bd_head = bd_head->next;
+		bd->next = NULL;
+
+		if (likely(bd->interface < XPCIE_NUM_INTERFACES)) {
+			intel_xpcie_set_td_status(td,
+						  XPCIE_DESC_STATUS_SUCCESS);
+			intel_xpcie_add_bd_to_interface(xpcie, bd);
+		} else {
+			dev_err(xpcie_to_dev(xpcie),
+				"detected rx desc interface failure (%u)\n",
+				bd->interface);
+			intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_ERROR);
+			intel_xpcie_free_rx_bd(xpcie, bd);
+		}
+
+		bd = bd_head;
+		head = XPCIE_CIRCULAR_INC(head, ndesc);
+	}
+
+	if (head != initial_head) {
+		intel_xpcie_set_tdr_head(&rx->pipe, head);
+		intel_xpcie_raise_irq(xpcie, DATA_RECEIVED);
+	}
+
+task_exit:
+	if (reset_work)
+		intel_xpcie_start_rx(xpcie, delay);
+}
+
+static void intel_xpcie_tx_event_handler(struct work_struct *work)
+{
+	struct xpcie *xpcie = container_of(work, struct xpcie, tx_event.work);
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct xpcie_buf_desc *bd_head, *bd_tail, *bd;
+	struct xpcie_stream *tx = &xpcie->tx;
+	u32 head, tail, ndesc, initial_tail;
+	struct xpcie_dma_ll_desc *desc;
+	struct xpcie_transfer_desc *td;
+	int descs_num = 0, chan = 0, rc;
+	size_t buffers = 0, bytes = 0;
+	u64 address;
+
+	if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+		return;
+
+	bd_head = NULL;
+	bd_tail = NULL;
+	ndesc = tx->pipe.ndesc;
+	initial_tail = intel_xpcie_get_tdr_tail(&tx->pipe);
+	tail = initial_tail;
+	head = intel_xpcie_get_tdr_head(&tx->pipe);
+
+	/* add new entries */
+	while (XPCIE_CIRCULAR_INC(tail, ndesc) != head) {
+		bd = intel_xpcie_list_get(&xpcie->write);
+		if (!bd)
+			break;
+
+		td = tx->pipe.tdr + tail;
+		address = intel_xpcie_get_td_address(td);
+
+		desc = &xpcie_epf->tx_desc_buf[chan].virt[descs_num++];
+		desc->dma_transfer_size = bd->length;
+		desc->src_addr = bd->phys;
+		desc->dst_addr = address;
+
+		if (bd_head)
+			bd_tail->next = bd;
+		else
+			bd_head = bd;
+		bd_tail = bd;
+
+		tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+	}
+
+	if (descs_num == 0)
+		goto task_exit;
+
+	rc = intel_xpcie_copy_to_host_ll(xpcie, chan, descs_num);
+
+	tail = initial_tail;
+	bd = bd_head;
+	while (bd) {
+		if (rc) {
+			bd = bd->next;
+			continue;
+		}
+
+		td = tx->pipe.tdr + tail;
+		intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_SUCCESS);
+		intel_xpcie_set_td_length(td, bd->length);
+		intel_xpcie_set_td_interface(td, bd->interface);
+
+		bd = bd->next;
+		tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+	}
+
+	if (rc) {
+		dev_err(xpcie_to_dev(xpcie),
+			"failed to DMA to host (%d)\n", rc);
+		intel_xpcie_list_put_head(&xpcie->write, bd_head);
+		return;
+	}
+
+	intel_xpcie_free_tx_bd(xpcie, bd_head);
+
+	if (intel_xpcie_get_tdr_tail(&tx->pipe) != tail) {
+		intel_xpcie_set_tdr_tail(&tx->pipe, tail);
+		intel_xpcie_raise_irq(xpcie, DATA_SENT);
+	}
+
+task_exit:
+	intel_xpcie_list_info(&xpcie->write, &bytes, &buffers);
+	if (buffers) {
+		xpcie->tx_pending = true;
+		head = intel_xpcie_get_tdr_head(&tx->pipe);
+		if (XPCIE_CIRCULAR_INC(tail, ndesc) != head)
+			intel_xpcie_start_tx(xpcie, 0);
+	} else {
+		xpcie->tx_pending = false;
+	}
+}
+
+static irqreturn_t intel_xpcie_core_irq_cb(int irq, void *args)
+{
+	struct xpcie *xpcie = args;
+
+	if (intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DATA_SENT)) {
+		intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_SENT, 0);
+		intel_xpcie_start_rx(xpcie, 0);
+	}
+	if (intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED)) {
+		intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED, 0);
+		if (xpcie->tx_pending)
+			intel_xpcie_start_tx(xpcie, 0);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int intel_xpcie_events_init(struct xpcie *xpcie)
+{
+	xpcie->rx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+					       WQ_MEM_RECLAIM | WQ_HIGHPRI);
+	if (!xpcie->rx_wq) {
+		dev_err(xpcie_to_dev(xpcie), "failed to allocate workqueue\n");
+		return -ENOMEM;
+	}
+
+	if (!xpcie->legacy_a0) {
+		xpcie->tx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+						       WQ_MEM_RECLAIM |
+						       WQ_HIGHPRI);
+		if (!xpcie->tx_wq) {
+			dev_err(xpcie_to_dev(xpcie),
+				"failed to allocate workqueue\n");
+			destroy_workqueue(xpcie->rx_wq);
+			return -ENOMEM;
+		}
+	}
+
+	INIT_DELAYED_WORK(&xpcie->rx_event, intel_xpcie_rx_event_handler);
+	INIT_DELAYED_WORK(&xpcie->tx_event, intel_xpcie_tx_event_handler);
+
+	return 0;
+}
+
+static void intel_xpcie_events_cleanup(struct xpcie *xpcie)
+{
+	cancel_delayed_work_sync(&xpcie->rx_event);
+	cancel_delayed_work_sync(&xpcie->tx_event);
+
+	destroy_workqueue(xpcie->rx_wq);
+	if (!xpcie->legacy_a0)
+		destroy_workqueue(xpcie->tx_wq);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie)
+{
+	int error;
+
+	global_xpcie = xpcie;
+
+	intel_xpcie_set_cap_txrx(xpcie);
+
+	error = intel_xpcie_events_init(xpcie);
+	if (error)
+		return error;
+
+	error = intel_xpcie_discover_txrx(xpcie);
+	if (error)
+		goto error_txrx;
+
+	intel_xpcie_interfaces_init(xpcie);
+
+	intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_SENT, 0);
+	intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED, 0);
+	intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DEV_EVENT, NO_OP);
+	intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_SENT, 0);
+	intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_RECEIVED, 0);
+	intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DEV_EVENT, NO_OP);
+
+	intel_xpcie_register_host_irq(xpcie, intel_xpcie_core_irq_cb);
+
+	return 0;
+
+error_txrx:
+	intel_xpcie_events_cleanup(xpcie);
+
+	return error;
+}
+
+void intel_xpcie_core_cleanup(struct xpcie *xpcie)
+{
+	if (xpcie->status == XPCIE_STATUS_RUN) {
+		intel_xpcie_events_cleanup(xpcie);
+		intel_xpcie_interfaces_cleanup(xpcie);
+		intel_xpcie_txrx_cleanup(xpcie);
+	}
+}
+
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer,
+			  size_t *length, u32 timeout_ms)
+{
+	long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+	struct xpcie_interface *inf = &xpcie->interfaces[0];
+	unsigned long jiffies_start = jiffies;
+	struct xpcie_buf_desc *bd;
+	long jiffies_passed = 0;
+	size_t len, remaining;
+	int ret;
+
+	if (*length == 0)
+		return -EINVAL;
+
+	if (xpcie->status != XPCIE_STATUS_RUN)
+		return -ENODEV;
+
+	len = *length;
+	remaining = len;
+	*length = 0;
+
+	ret = mutex_lock_interruptible(&inf->rlock);
+	if (ret < 0)
+		return -EINTR;
+
+	do {
+		while (!inf->data_avail) {
+			mutex_unlock(&inf->rlock);
+			if (timeout_ms == 0) {
+				ret =
+				wait_event_interruptible(inf->rx_waitq,
+							 inf->data_avail);
+			} else {
+				ret =
+			wait_event_interruptible_timeout(inf->rx_waitq,
+							 inf->data_avail,
+							 jiffies_timeout -
+							  jiffies_passed);
+				if (ret == 0)
+					return -ETIME;
+			}
+			if (ret < 0 || xpcie->stop_flag)
+				return -EINTR;
+
+			ret = mutex_lock_interruptible(&inf->rlock);
+			if (ret < 0)
+				return -EINTR;
+		}
+
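+		/* Resume a partially consumed buffer, otherwise pull a new one */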
+		bd = (inf->partial_read) ? inf->partial_read :
+					   intel_xpcie_list_get(&inf->read);
+
+		while (remaining && bd) {
+			size_t bcopy;
+
+			bcopy = min(remaining, bd->length);
+			memcpy(buffer, bd->data, bcopy);
+
+			buffer += bcopy;
+			remaining -= bcopy;
+			bd->data += bcopy;
+			bd->length -= bcopy;
+
+			if (bd->length == 0) {
+				intel_xpcie_free_rx_bd(xpcie, bd);
+				bd = intel_xpcie_list_get(&inf->read);
+			}
+		}
+
+		/* save for next time */
+		inf->partial_read = bd;
+
+		if (!bd)
+			inf->data_avail = false;
+
+		*length = len - remaining;
+
+		jiffies_passed = (long)jiffies - (long)jiffies_start;
+	} while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+				   timeout_ms == 0));
+
+	mutex_unlock(&inf->rlock);
+
+	return 0;
+}
+
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer,
+			   size_t *length, u32 timeout_ms)
+{
+	long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+	struct xpcie_interface *inf = &xpcie->interfaces[0];
+	unsigned long jiffies_start = jiffies;
+	struct xpcie_buf_desc *bd, *head;
+	long jiffies_passed = 0;
+	size_t remaining, len;
+	int ret;
+
+	if (*length == 0)
+		return -EINVAL;
+
+	if (xpcie->status != XPCIE_STATUS_RUN)
+		return -ENODEV;
+
+	if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+		return -ENODEV;
+
+	len = *length;
+	remaining = len;
+	*length = 0;
+
+	ret = mutex_lock_interruptible(&xpcie->wlock);
+	if (ret < 0)
+		return -EINTR;
+
+	do {
+		bd = intel_xpcie_alloc_tx_bd(xpcie);
+		head = bd;
+		while (!head) {
+			mutex_unlock(&xpcie->wlock);
+			if (timeout_ms == 0) {
+				ret =
+				wait_event_interruptible(xpcie->tx_waitq,
+							 !xpcie->no_tx_buffer);
+			} else {
+				ret =
+			wait_event_interruptible_timeout(xpcie->tx_waitq,
+							 !xpcie->no_tx_buffer,
+							 jiffies_timeout -
+							  jiffies_passed);
+				if (ret == 0)
+					return -ETIME;
+			}
+			if (ret < 0 || xpcie->stop_flag)
+				return -EINTR;
+
+			ret = mutex_lock_interruptible(&xpcie->wlock);
+			if (ret < 0)
+				return -EINTR;
+
+			bd = intel_xpcie_alloc_tx_bd(xpcie);
+			head = bd;
+		}
+
+		while (remaining && bd) {
+			size_t bcopy;
+
+			bcopy = min(bd->length, remaining);
+			memcpy(bd->data, buffer, bcopy);
+
+			buffer += bcopy;
+			remaining -= bcopy;
+			bd->length = bcopy;
+			bd->interface = inf->id;
+
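+			/* Payload spans multiple fragments: chain another buffer */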
+			if (remaining) {
+				bd->next = intel_xpcie_alloc_tx_bd(xpcie);
+				bd = bd->next;
+			}
+		}
+
+		intel_xpcie_list_put(&inf->xpcie->write, head);
+		intel_xpcie_start_tx(xpcie, 0);
+
+		*length = len - remaining;
+
+		jiffies_passed = (long)jiffies - (long)jiffies_start;
+	} while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+				   timeout_ms == 0));
+
+	mutex_unlock(&xpcie->wlock);
+
+	return 0;
+}
+
+int intel_xpcie_get_device_status_by_id(u32 id, u32 *status)
+{
+	struct xpcie *xpcie = intel_xpcie_core_get_by_id(id);
+
+	if (!xpcie)
+		return -ENODEV;
+
+	*status = xpcie->status;
+
+	return 0;
+}
+
+u32 intel_xpcie_get_device_num(u32 *id_list)
+{
+	u32 num_devices = 0;
+
+	if (xlink_sw_id) {
+		num_devices = 1;
+		*id_list = xlink_sw_id;
+	}
+
+	return num_devices;
+}
+
+int intel_xpcie_get_device_name_by_id(u32 id,
+				      char *device_name, size_t name_size)
+{
+	struct xpcie *xpcie;
+
+	xpcie = intel_xpcie_core_get_by_id(id);
+	if (!xpcie)
+		return -ENODEV;
+
+	memset(device_name, 0, name_size);
+	if (name_size > strlen(XPCIE_DRIVER_NAME))
+		name_size = strlen(XPCIE_DRIVER_NAME);
+	memcpy(device_name, XPCIE_DRIVER_NAME, name_size);
+
+	return 0;
+}
+
+int intel_xpcie_pci_connect_device(u32 id)
+{
+	struct xpcie *xpcie;
+
+	xpcie = intel_xpcie_core_get_by_id(id);
+	if (!xpcie)
+		return -ENODEV;
+
+	if (xpcie->status != XPCIE_STATUS_RUN)
+		return -EIO;
+
+	return 0;
+}
+
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout)
+{
+	struct xpcie *xpcie;
+
+	xpcie = intel_xpcie_core_get_by_id(id);
+	if (!xpcie)
+		return -ENODEV;
+
+	return intel_xpcie_core_read(xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout)
+{
+	struct xpcie *xpcie;
+
+	xpcie = intel_xpcie_core_get_by_id(id);
+	if (!xpcie)
+		return -ENODEV;
+
+	return intel_xpcie_core_write(xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_reset_device(u32 id)
+{
+	return 0;
+}
diff --git a/drivers/misc/xlink-pcie/local_host/core.h b/drivers/misc/xlink-pcie/local_host/core.h
new file mode 100644
index 000000000000..84985ef41a64
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/core.h
@@ -0,0 +1,247 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_CORE_HEADER_
+#define XPCIE_CORE_HEADER_
+
+#include <linux/io.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/mempool.h>
+#include <linux/dma-mapping.h>
+#include <linux/cache.h>
+#include <linux/wait.h>
+
+#include <linux/xlink_drv_inf.h>
+
+/* Number of interfaces to statically allocate resources for */
+#define XPCIE_NUM_INTERFACES (1)
+
+/* max must always be a power of 2 so the mask-based wrap-around works */
+#define XPCIE_CIRCULAR_INC(val, max) (((val) + 1) & ((max) - 1))
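+/* e.g. XPCIE_CIRCULAR_INC(63, 64) == 0: the index wraps back to the start */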
+
+#define XPCIE_FRAGMENT_SIZE	SZ_128K
+
+/* Status encoding of the transfer descriptors */
+#define XPCIE_DESC_STATUS_SUCCESS	(0)
+#define XPCIE_DESC_STATUS_ERROR		(0xFFFF)
+
+/* Layout transfer descriptors used by device and host */
+struct xpcie_transfer_desc {
+	u64 address;
+	u32 length;
+	u16 status;
+	u16 interface;
+} __packed;
+
+struct xpcie_pipe {
+	u32 old;
+	u32 ndesc;
+	u32 *head;
+	u32 *tail;
+	struct xpcie_transfer_desc *tdr;
+};
+
+struct xpcie_buf_desc {
+	struct xpcie_buf_desc *next;
+	void *head;
+	dma_addr_t phys;
+	size_t true_len;
+	void *data;
+	size_t length;
+	int interface;
+	bool own_mem;
+};
+
+struct xpcie_stream {
+	size_t frag;
+	struct xpcie_pipe pipe;
+};
+
+struct xpcie_list {
+	spinlock_t lock; /* list lock */
+	size_t bytes;
+	size_t buffers;
+	struct xpcie_buf_desc *head;
+	struct xpcie_buf_desc *tail;
+};
+
+struct xpcie_interface {
+	int id;
+	struct xpcie *xpcie;
+	struct mutex rlock; /* read lock */
+	struct xpcie_list read;
+	struct xpcie_buf_desc *partial_read;
+	bool data_avail;
+	wait_queue_head_t rx_waitq;
+};
+
+struct xpcie_debug_stats {
+	struct {
+		size_t cnts;
+		size_t bytes;
+	} tx_krn, rx_krn, tx_usr, rx_usr;
+	size_t send_ints;
+	size_t interrupts;
+	size_t rx_event_runs;
+	size_t tx_event_runs;
+};
+
+/* Defined capabilities located in mmio space */
+#define XPCIE_CAP_NULL (0)
+#define XPCIE_CAP_TXRX (1)
+
+#define XPCIE_CAP_TTL (32)
+#define XPCIE_CAP_HDR_ID	(offsetof(struct xpcie_cap_hdr, id))
+#define XPCIE_CAP_HDR_NEXT	(offsetof(struct xpcie_cap_hdr, next))
+
+/* Header at the beginning of each capability to define and link to next */
+struct xpcie_cap_hdr {
+	u16 id;
+	u16 next;
+} __packed;
+
+struct xpcie_cap_pipe {
+	u32 ring;
+	u32 ndesc;
+	u32 head;
+	u32 tail;
+} __packed;
+
+/* Transmit and Receive capability */
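+/*
+ * Note: the tx/rx pipes are named from the Remote Host's point of view;
+ * intel_xpcie_txrx_init() on the Local Host maps cap->tx onto its RX
+ * stream and cap->rx onto its TX stream.
+ */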
+struct xpcie_cap_txrx {
+	struct xpcie_cap_hdr hdr;
+	u32 fragment_size;
+	struct xpcie_cap_pipe tx;
+	struct xpcie_cap_pipe rx;
+} __packed;
+
+static inline u64 _ioread64(void __iomem *addr)
+{
+	u64 low, high;
+
+	low = ioread32(addr);
+	high = ioread32(addr + sizeof(u32));
+
+	return low | (high << 32);
+}
+
+static inline void _iowrite64(u64 value, void __iomem *addr)
+{
+	iowrite32(value, addr);
+	iowrite32(value >> 32, addr + sizeof(u32));
+}
+
+#define intel_xpcie_iowrite64(value, addr) \
+			_iowrite64(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite32(value, addr) \
+			iowrite32(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite16(value, addr) \
+			iowrite16(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite8(value, addr) \
+			iowrite8(value, (void __iomem *)addr)
+#define intel_xpcie_ioread64(addr) \
+			_ioread64((void __iomem *)addr)
+#define intel_xpcie_ioread32(addr) \
+			ioread32((void __iomem *)addr)
+#define intel_xpcie_ioread16(addr) \
+			ioread16((void __iomem *)addr)
+#define intel_xpcie_ioread8(addr) \
+			ioread8((void __iomem *)addr)
+
+static inline
+void intel_xpcie_set_td_address(struct xpcie_transfer_desc *td, u64 address)
+{
+	intel_xpcie_iowrite64(address, &td->address);
+}
+
+static inline
+u64 intel_xpcie_get_td_address(struct xpcie_transfer_desc *td)
+{
+	return intel_xpcie_ioread64(&td->address);
+}
+
+static inline
+void intel_xpcie_set_td_length(struct xpcie_transfer_desc *td, u32 length)
+{
+	intel_xpcie_iowrite32(length, &td->length);
+}
+
+static inline
+u32 intel_xpcie_get_td_length(struct xpcie_transfer_desc *td)
+{
+	return intel_xpcie_ioread32(&td->length);
+}
+
+static inline
+void intel_xpcie_set_td_interface(struct xpcie_transfer_desc *td, u16 interface)
+{
+	intel_xpcie_iowrite16(interface, &td->interface);
+}
+
+static inline
+u16 intel_xpcie_get_td_interface(struct xpcie_transfer_desc *td)
+{
+	return intel_xpcie_ioread16(&td->interface);
+}
+
+static inline
+void intel_xpcie_set_td_status(struct xpcie_transfer_desc *td, u16 status)
+{
+	intel_xpcie_iowrite16(status, &td->status);
+}
+
+static inline
+u16 intel_xpcie_get_td_status(struct xpcie_transfer_desc *td)
+{
+	return intel_xpcie_ioread16(&td->status);
+}
+
+static inline
+void intel_xpcie_set_tdr_head(struct xpcie_pipe *p, u32 head)
+{
+	intel_xpcie_iowrite32(head, p->head);
+}
+
+static inline
+u32 intel_xpcie_get_tdr_head(struct xpcie_pipe *p)
+{
+	return intel_xpcie_ioread32(p->head);
+}
+
+static inline
+void intel_xpcie_set_tdr_tail(struct xpcie_pipe *p, u32 tail)
+{
+	intel_xpcie_iowrite32(tail, p->tail);
+}
+
+static inline
+u32 intel_xpcie_get_tdr_tail(struct xpcie_pipe *p)
+{
+	return intel_xpcie_ioread32(p->tail);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie);
+void intel_xpcie_core_cleanup(struct xpcie *xpcie);
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer, size_t *length,
+			  u32 timeout_ms);
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer, size_t *length,
+			   u32 timeout_ms);
+u32 intel_xpcie_get_device_num(u32 *id_list);
+struct xpcie_dev *intel_xpcie_get_device_by_id(u32 id);
+int intel_xpcie_get_device_name_by_id(u32 id, char *device_name,
+				      size_t name_size);
+int intel_xpcie_get_device_status_by_id(u32 id, u32 *status);
+int intel_xpcie_pci_connect_device(u32 id);
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout);
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout);
+int intel_xpcie_pci_reset_device(u32 id);
+#endif /* XPCIE_CORE_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
index dd8ffcabf5f9..b3112ca47bad 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.c
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -9,6 +9,7 @@
 
 #include <linux/of.h>
 #include <linux/platform_device.h>
+#include <linux/reboot.h>
 
 #include "epf.h"
 
@@ -21,6 +22,12 @@
 #define PCIE_REGS_PCIE_ERR_INTR_FLAGS	0x24
 #define LINK_REQ_RST_FLG		BIT(15)
 
+#define PCIE_REGS_PCIE_SYS_CFG_CORE	0x7C
+#define PCIE_CFG_PBUS_NUM_OFFSET	8
+#define PCIE_CFG_PBUS_NUM_MASK		0xFF
+#define PCIE_CFG_PBUS_DEV_NUM_OFFSET	16
+#define PCIE_CFG_PBUS_DEV_NUM_MASK	0x1F
+
 static struct pci_epf_header xpcie_header = {
 	.vendorid = PCI_VENDOR_ID_INTEL,
 	.deviceid = PCI_DEVICE_ID_INTEL_KEEMBAY,
@@ -37,6 +44,45 @@ static const struct pci_epf_device_id xpcie_epf_ids[] = {
 	{},
 };
 
+u32 xlink_sw_id;
+
+int intel_xpcie_copy_from_host_ll(struct xpcie *xpcie, int chan, int descs_num)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct pci_epf *epf = xpcie_epf->epf;
+
+	return intel_xpcie_ep_dma_read_ll(epf, chan, descs_num);
+}
+
+int intel_xpcie_copy_to_host_ll(struct xpcie *xpcie, int chan, int descs_num)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct pci_epf *epf = xpcie_epf->epf;
+
+	return intel_xpcie_ep_dma_write_ll(epf, chan, descs_num);
+}
+
+void intel_xpcie_register_host_irq(struct xpcie *xpcie, irq_handler_t func)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+
+	xpcie_epf->core_irq_callback = func;
+}
+
+int intel_xpcie_raise_irq(struct xpcie *xpcie, enum xpcie_doorbell_type type)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+	struct pci_epf *epf = xpcie_epf->epf;
+
+	intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, type, 1);
+
+	return pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+}
+
 static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)
 {
 	struct xpcie_epf *xpcie_epf;
@@ -57,6 +103,7 @@ static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
 {
 	struct xpcie_epf *xpcie_epf;
 	struct xpcie *xpcie = args;
+	u8 event;
 	u32 val;
 
 	xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
@@ -64,6 +111,18 @@ static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
 	if (val & LBC_CII_EVENT_FLAG) {
 		iowrite32(LBC_CII_EVENT_FLAG,
 			  xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+
+		event = intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DEV_EVENT);
+		if (unlikely(event != NO_OP)) {
+			intel_xpcie_set_doorbell(xpcie, TO_DEVICE,
+						 DEV_EVENT, NO_OP);
+			if (event == REQUEST_RESET)
+				orderly_reboot();
+			return IRQ_HANDLED;
+		}
+
+		if (likely(xpcie_epf->core_irq_callback))
+			xpcie_epf->core_irq_callback(irq, xpcie);
 	}
 
 	return IRQ_HANDLED;
@@ -269,6 +328,7 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
 	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
 	const struct pci_epc_features *features;
 	struct pci_epc *epc = epf->epc;
+	u32 bus_num, dev_num;
 	struct device *dev;
 	size_t align = SZ_16K;
 	int ret;
@@ -300,12 +360,12 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
 
 	if (!strcmp(xpcie_epf->stepping, "A0")) {
 		xpcie_epf->xpcie.legacy_a0 = true;
-		iowrite32(1, (void __iomem *)xpcie_epf->xpcie.mmio +
-			     XPCIE_MMIO_LEGACY_A0);
+		intel_xpcie_iowrite32(1, xpcie_epf->xpcie.mmio +
+					 XPCIE_MMIO_LEGACY_A0);
 	} else {
 		xpcie_epf->xpcie.legacy_a0 = false;
-		iowrite32(0, (void __iomem *)xpcie_epf->xpcie.mmio +
-			     XPCIE_MMIO_LEGACY_A0);
+		intel_xpcie_iowrite32(0, xpcie_epf->xpcie.mmio +
+					 XPCIE_MMIO_LEGACY_A0);
 	}
 
 	/* Enable interrupt */
@@ -330,13 +390,46 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
 	ret = intel_xpcie_ep_dma_init(epf);
 	if (ret) {
 		dev_err(&epf->dev, "DMA initialization failed\n");
-		goto err_free_err_irq;
+		goto err_cleanup_bars;
 	}
 
+	intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_READY);
+
+	ret = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_SYS_CFG_CORE);
+	bus_num = (ret >> PCIE_CFG_PBUS_NUM_OFFSET) & PCIE_CFG_PBUS_NUM_MASK;
+	dev_num = (ret >> PCIE_CFG_PBUS_DEV_NUM_OFFSET) &
+			PCIE_CFG_PBUS_DEV_NUM_MASK;
+
+	xlink_sw_id = FIELD_PREP(XLINK_DEV_INF_TYPE_MASK,
+				 XLINK_DEV_INF_PCIE) |
+		      FIELD_PREP(XLINK_DEV_PHYS_ID_MASK,
+				 bus_num << 8 | dev_num) |
+		      FIELD_PREP(XLINK_DEV_TYPE_MASK, XLINK_DEV_TYPE_KMB) |
+		      FIELD_PREP(XLINK_DEV_PCIE_ID_MASK, XLINK_DEV_PCIE_0) |
+		      FIELD_PREP(XLINK_DEV_FUNC_MASK, XLINK_DEV_FUNC_VPU);
+
+	ret = intel_xpcie_core_init(&xpcie_epf->xpcie);
+	if (ret) {
+		dev_err(&epf->dev, "Core component configuration failed\n");
+		goto err_uninit_dma;
+	}
+
+	intel_xpcie_iowrite32(XPCIE_STATUS_UNINIT,
+			      xpcie_epf->xpcie.mmio + XPCIE_MMIO_HOST_STATUS);
+	intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_RUN);
+	intel_xpcie_set_doorbell(&xpcie_epf->xpcie, FROM_DEVICE,
+				 DEV_EVENT, NO_OP);
+	memcpy(xpcie_epf->xpcie.mmio + XPCIE_MMIO_MAGIC_OFF, XPCIE_MAGIC_YOCTO,
+	       strlen(XPCIE_MAGIC_YOCTO));
+
 	return 0;
 
-err_free_err_irq:
-	free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+err_uninit_dma:
+	intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_ERROR);
+	memcpy(xpcie_epf->xpcie.mmio + XPCIE_MMIO_MAGIC_OFF, XPCIE_MAGIC_YOCTO,
+	       strlen(XPCIE_MAGIC_YOCTO));
+
+	intel_xpcie_ep_dma_uninit(epf);
 
 err_cleanup_bars:
 	intel_xpcie_cleanup_bars(epf);
@@ -346,8 +439,12 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
 
 static void intel_xpcie_epf_unbind(struct pci_epf *epf)
 {
+	struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
 	struct pci_epc *epc = epf->epc;
 
+	intel_xpcie_core_cleanup(&xpcie_epf->xpcie);
+	intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_READY);
+
 	intel_xpcie_ep_dma_uninit(epf);
 
 	pci_epc_stop(epc);
@@ -379,8 +476,11 @@ static void intel_xpcie_epf_shutdown(struct device *dev)
 	xpcie_epf = epf_get_drvdata(epf);
 
 	/* Notify host in case PCIe hot plug not supported */
-	if (xpcie_epf)
+	if (xpcie_epf && xpcie_epf->xpcie.status == XPCIE_STATUS_RUN) {
+		intel_xpcie_set_doorbell(&xpcie_epf->xpcie, FROM_DEVICE,
+					 DEV_EVENT, DEV_SHUTDOWN);
 		pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+	}
 }
 
 static struct pci_epf_ops ops = {
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
index 6ce5260e67be..ca01e17c5107 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.h
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -14,6 +14,7 @@
 #include <linux/pci-epf.h>
 
 #include "xpcie.h"
+#include "util.h"
 
 #define XPCIE_DRIVER_NAME "mxlk_pcie_epf"
 #define XPCIE_DRIVER_DESC "Intel(R) xLink PCIe endpoint function driver"
@@ -26,6 +27,7 @@
 #define XPCIE_NUM_RX_DESCS	(64)
 
 extern bool dma_ll_mode;
+extern u32 xlink_sw_id;
 
 struct xpcie_dma_ll_desc {
 	u32 dma_ch_control1;
@@ -67,14 +69,35 @@ struct xpcie_epf {
 	void __iomem *dbi_base;
 	char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];
 
+	irq_handler_t			core_irq_callback;
+	dma_addr_t			tx_phys;
+	void				*tx_virt;
+	size_t				tx_size;
+
 	struct xpcie_dma_ll_desc_buf	tx_desc_buf[DMA_CHAN_NUM];
 	struct xpcie_dma_ll_desc_buf	rx_desc_buf[DMA_CHAN_NUM];
 };
 
+static inline struct device *xpcie_to_dev(struct xpcie *xpcie)
+{
+	struct xpcie_epf *xpcie_epf = container_of(xpcie,
+						   struct xpcie_epf, xpcie);
+
+	return &xpcie_epf->epf->dev;
+}
+
 int intel_xpcie_ep_dma_init(struct pci_epf *epf);
 int intel_xpcie_ep_dma_uninit(struct pci_epf *epf);
 int intel_xpcie_ep_dma_reset(struct pci_epf *epf);
 int intel_xpcie_ep_dma_read_ll(struct pci_epf *epf, int chan, int descs_num);
 int intel_xpcie_ep_dma_write_ll(struct pci_epf *epf, int chan, int descs_num);
 
+void intel_xpcie_register_host_irq(struct xpcie *xpcie,
+				   irq_handler_t func);
+int intel_xpcie_raise_irq(struct xpcie *xpcie,
+			  enum xpcie_doorbell_type type);
+int intel_xpcie_copy_from_host_ll(struct xpcie *xpcie,
+				  int chan, int descs_num);
+int intel_xpcie_copy_to_host_ll(struct xpcie *xpcie,
+				int chan, int descs_num);
 #endif /* XPCIE_EPF_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/util.c b/drivers/misc/xlink-pcie/local_host/util.c
new file mode 100644
index 000000000000..ec808b0cd72b
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/util.c
@@ -0,0 +1,375 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include "util.h"
+
+void intel_xpcie_set_device_status(struct xpcie *xpcie, u32 status)
+{
+	xpcie->status = status;
+	intel_xpcie_iowrite32(status, xpcie->mmio + XPCIE_MMIO_DEV_STATUS);
+}
+
+u32 intel_xpcie_get_device_status(struct xpcie *xpcie)
+{
+	return intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_DEV_STATUS);
+}
+
+static size_t intel_xpcie_doorbell_offset(struct xpcie *xpcie,
+					  enum xpcie_doorbell_direction dirt,
+					  enum xpcie_doorbell_type type)
+{
+	if (dirt == TO_DEVICE && type == DATA_SENT)
+		return XPCIE_MMIO_HTOD_TX_DOORBELL;
+	if (dirt == TO_DEVICE && type == DATA_RECEIVED)
+		return XPCIE_MMIO_HTOD_RX_DOORBELL;
+	if (dirt == TO_DEVICE && type == DEV_EVENT)
+		return XPCIE_MMIO_HTOD_EVENT_DOORBELL;
+	if (dirt == FROM_DEVICE && type == DATA_SENT)
+		return XPCIE_MMIO_DTOH_TX_DOORBELL;
+	if (dirt == FROM_DEVICE && type == DATA_RECEIVED)
+		return XPCIE_MMIO_DTOH_RX_DOORBELL;
+	if (dirt == FROM_DEVICE && type == DEV_EVENT)
+		return XPCIE_MMIO_DTOH_EVENT_DOORBELL;
+
+	return 0;
+}
+
+void intel_xpcie_set_doorbell(struct xpcie *xpcie,
+			      enum xpcie_doorbell_direction dirt,
+			      enum xpcie_doorbell_type type, u8 value)
+{
+	size_t offset = intel_xpcie_doorbell_offset(xpcie, dirt, type);
+
+	intel_xpcie_iowrite8(value, xpcie->mmio + offset);
+}
+
+u8 intel_xpcie_get_doorbell(struct xpcie *xpcie,
+			    enum xpcie_doorbell_direction dirt,
+			    enum xpcie_doorbell_type type)
+{
+	size_t offset = intel_xpcie_doorbell_offset(xpcie, dirt, type);
+
+	return intel_xpcie_ioread8(xpcie->mmio + offset);
+}
+
+u32 intel_xpcie_get_host_status(struct xpcie *xpcie)
+{
+	return intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_HOST_STATUS);
+}
+
+void intel_xpcie_set_host_status(struct xpcie *xpcie, u32 status)
+{
+	xpcie->status = status;
+	intel_xpcie_iowrite32(status, xpcie->mmio + XPCIE_MMIO_HOST_STATUS);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd(size_t length)
+{
+	struct xpcie_buf_desc *bd;
+
+	bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+	if (!bd)
+		return NULL;
+
+	bd->head = kzalloc(roundup(length, cache_line_size()), GFP_KERNEL);
+	if (!bd->head) {
+		kfree(bd);
+		return NULL;
+	}
+
+	bd->data = bd->head;
+	bd->length = length;
+	bd->true_len = length;
+	bd->next = NULL;
+	bd->own_mem = true;
+
+	return bd;
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd_reuse(size_t length, void *virt,
+						  dma_addr_t phys)
+{
+	struct xpcie_buf_desc *bd;
+
+	bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+	if (!bd)
+		return NULL;
+
+	bd->head = virt;
+	bd->phys = phys;
+	bd->data = bd->head;
+	bd->length = length;
+	bd->true_len = length;
+	bd->next = NULL;
+	bd->own_mem = false;
+
+	return bd;
+}
+
+void intel_xpcie_free_bd(struct xpcie_buf_desc *bd)
+{
+	if (bd) {
+		if (bd->own_mem)
+			kfree(bd->head);
+		kfree(bd);
+	}
+}
+
+int intel_xpcie_list_init(struct xpcie_list *list)
+{
+	spin_lock_init(&list->lock);
+	list->bytes = 0;
+	list->buffers = 0;
+	list->head = NULL;
+	list->tail = NULL;
+
+	return 0;
+}
+
+void intel_xpcie_list_cleanup(struct xpcie_list *list)
+{
+	struct xpcie_buf_desc *bd;
+
+	spin_lock(&list->lock);
+	while (list->head) {
+		bd = list->head;
+		list->head = bd->next;
+		intel_xpcie_free_bd(bd);
+	}
+
+	list->head = NULL;
+	list->tail = NULL;
+	spin_unlock(&list->lock);
+}
+
+int intel_xpcie_list_put(struct xpcie_list *list, struct xpcie_buf_desc *bd)
+{
+	if (!bd)
+		return -EINVAL;
+
+	spin_lock(&list->lock);
+	if (list->head)
+		list->tail->next = bd;
+	else
+		list->head = bd;
+
+	while (bd) {
+		list->tail = bd;
+		list->bytes += bd->length;
+		list->buffers++;
+		bd = bd->next;
+	}
+	spin_unlock(&list->lock);
+
+	return 0;
+}
+
+int intel_xpcie_list_put_head(struct xpcie_list *list,
+			      struct xpcie_buf_desc *bd)
+{
+	struct xpcie_buf_desc *old_head;
+
+	if (!bd)
+		return -EINVAL;
+
+	spin_lock(&list->lock);
+	old_head = list->head;
+	list->head = bd;
+	while (bd) {
+		list->bytes += bd->length;
+		list->buffers++;
+		if (!bd->next) {
+			list->tail = list->tail ? list->tail : bd;
+			bd->next = old_head;
+			break;
+		}
+		bd = bd->next;
+	}
+	spin_unlock(&list->lock);
+
+	return 0;
+}
+
+struct xpcie_buf_desc *intel_xpcie_list_get(struct xpcie_list *list)
+{
+	struct xpcie_buf_desc *bd;
+
+	spin_lock(&list->lock);
+	bd = list->head;
+	if (list->head) {
+		list->head = list->head->next;
+		if (!list->head)
+			list->tail = NULL;
+		bd->next = NULL;
+		list->bytes -= bd->length;
+		list->buffers--;
+	}
+	spin_unlock(&list->lock);
+
+	return bd;
+}
+
+void intel_xpcie_list_info(struct xpcie_list *list,
+			   size_t *bytes, size_t *buffers)
+{
+	spin_lock(&list->lock);
+	*bytes = list->bytes;
+	*buffers = list->buffers;
+	spin_unlock(&list->lock);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_rx_bd(struct xpcie *xpcie)
+{
+	struct xpcie_buf_desc *bd;
+
+	bd = intel_xpcie_list_get(&xpcie->rx_pool);
+	if (bd) {
+		bd->data = bd->head;
+		bd->length = bd->true_len;
+		bd->next = NULL;
+		bd->interface = 0;
+	}
+
+	return bd;
+}
+
+void intel_xpcie_free_rx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd)
+{
+	if (bd)
+		intel_xpcie_list_put(&xpcie->rx_pool, bd);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_tx_bd(struct xpcie *xpcie)
+{
+	struct xpcie_buf_desc *bd;
+
+	bd = intel_xpcie_list_get(&xpcie->tx_pool);
+	if (bd) {
+		bd->data = bd->head;
+		bd->length = bd->true_len;
+		bd->next = NULL;
+		bd->interface = 0;
+	} else {
+		xpcie->no_tx_buffer = true;
+	}
+
+	return bd;
+}
+
+void intel_xpcie_free_tx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd)
+{
+	if (!bd)
+		return;
+
+	intel_xpcie_list_put(&xpcie->tx_pool, bd);
+
+	xpcie->no_tx_buffer = false;
+	wake_up_interruptible(&xpcie->tx_waitq);
+}
+
+int intel_xpcie_interface_init(struct xpcie *xpcie, int id)
+{
+	struct xpcie_interface *inf = xpcie->interfaces + id;
+
+	inf->id = id;
+	inf->xpcie = xpcie;
+
+	inf->partial_read = NULL;
+	intel_xpcie_list_init(&inf->read);
+	mutex_init(&inf->rlock);
+	inf->data_avail = false;
+	init_waitqueue_head(&inf->rx_waitq);
+
+	return 0;
+}
+
+void intel_xpcie_interface_cleanup(struct xpcie_interface *inf)
+{
+	struct xpcie_buf_desc *bd;
+
+	intel_xpcie_free_rx_bd(inf->xpcie, inf->partial_read);
+	while ((bd = intel_xpcie_list_get(&inf->read)))
+		intel_xpcie_free_rx_bd(inf->xpcie, bd);
+
+	mutex_destroy(&inf->rlock);
+}
+
+void intel_xpcie_interfaces_cleanup(struct xpcie *xpcie)
+{
+	int index;
+
+	for (index = 0; index < XPCIE_NUM_INTERFACES; index++)
+		intel_xpcie_interface_cleanup(xpcie->interfaces + index);
+
+	intel_xpcie_list_cleanup(&xpcie->write);
+	mutex_destroy(&xpcie->wlock);
+}
+
+int intel_xpcie_interfaces_init(struct xpcie *xpcie)
+{
+	int index;
+
+	mutex_init(&xpcie->wlock);
+	intel_xpcie_list_init(&xpcie->write);
+	init_waitqueue_head(&xpcie->tx_waitq);
+	xpcie->no_tx_buffer = false;
+
+	for (index = 0; index < XPCIE_NUM_INTERFACES; index++)
+		intel_xpcie_interface_init(xpcie, index);
+
+	return 0;
+}
+
+void intel_xpcie_add_bd_to_interface(struct xpcie *xpcie,
+				     struct xpcie_buf_desc *bd)
+{
+	struct xpcie_interface *inf;
+
+	inf = xpcie->interfaces + bd->interface;
+
+	intel_xpcie_list_put(&inf->read, bd);
+
+	mutex_lock(&inf->rlock);
+	inf->data_avail = true;
+	mutex_unlock(&inf->rlock);
+	wake_up_interruptible(&inf->rx_waitq);
+}
+
+void *intel_xpcie_cap_find(struct xpcie *xpcie, u32 start, u16 id)
+{
+	int ttl = XPCIE_CAP_TTL;
+	void *hdr;
+	u16 id_out, next;
+
+	/* If user didn't specify start, assume start of mmio */
+	if (!start)
+		start = intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_CAP_OFF);
+
+	/* Read header info */
+	hdr = xpcie->mmio + start;
+
+	/* Check if we still have time to live */
+	while (ttl--) {
+		id_out = intel_xpcie_ioread16(hdr + XPCIE_CAP_HDR_ID);
+		next = intel_xpcie_ioread16(hdr + XPCIE_CAP_HDR_NEXT);
+
+		/* If cap matches, return header */
+		if (id_out == id)
+			return hdr;
+		/* If cap is NULL, we are at the end of the list */
+		else if (id_out == XPCIE_CAP_NULL)
+			return NULL;
+		/* If no match and no end of list, traverse the linked list */
+		else
+			hdr = xpcie->mmio + next;
+	}
+
+	/* If we reached here, the capability list is corrupted */
+	return NULL;
+}
diff --git a/drivers/misc/xlink-pcie/local_host/util.h b/drivers/misc/xlink-pcie/local_host/util.h
new file mode 100644
index 000000000000..908be897a61d
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/util.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_UTIL_HEADER_
+#define XPCIE_UTIL_HEADER_
+
+#include "xpcie.h"
+
+enum xpcie_doorbell_direction {
+	TO_DEVICE,
+	FROM_DEVICE
+};
+
+enum xpcie_doorbell_type {
+	DATA_SENT,
+	DATA_RECEIVED,
+	DEV_EVENT
+};
+
+enum xpcie_event_type {
+	NO_OP,
+	REQUEST_RESET,
+	DEV_SHUTDOWN
+};
+
+void intel_xpcie_set_doorbell(struct xpcie *xpcie,
+			      enum xpcie_doorbell_direction dirt,
+			      enum xpcie_doorbell_type type, u8 value);
+u8 intel_xpcie_get_doorbell(struct xpcie *xpcie,
+			    enum xpcie_doorbell_direction dirt,
+			    enum xpcie_doorbell_type type);
+
+void intel_xpcie_set_device_status(struct xpcie *xpcie, u32 status);
+u32 intel_xpcie_get_device_status(struct xpcie *xpcie);
+u32 intel_xpcie_get_host_status(struct xpcie *xpcie);
+void intel_xpcie_set_host_status(struct xpcie *xpcie, u32 status);
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd(size_t length);
+struct xpcie_buf_desc *intel_xpcie_alloc_bd_reuse(size_t length, void *virt,
+						  dma_addr_t phys);
+void intel_xpcie_free_bd(struct xpcie_buf_desc *bd);
+
+int intel_xpcie_list_init(struct xpcie_list *list);
+void intel_xpcie_list_cleanup(struct xpcie_list *list);
+int intel_xpcie_list_put(struct xpcie_list *list, struct xpcie_buf_desc *bd);
+int intel_xpcie_list_put_head(struct xpcie_list *list,
+			      struct xpcie_buf_desc *bd);
+struct xpcie_buf_desc *intel_xpcie_list_get(struct xpcie_list *list);
+void intel_xpcie_list_info(struct xpcie_list *list, size_t *bytes,
+			   size_t *buffers);
+
+struct xpcie_buf_desc *intel_xpcie_alloc_rx_bd(struct xpcie *xpcie);
+void intel_xpcie_free_rx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd);
+struct xpcie_buf_desc *intel_xpcie_alloc_tx_bd(struct xpcie *xpcie);
+void intel_xpcie_free_tx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd);
+
+int intel_xpcie_interface_init(struct xpcie *xpcie, int id);
+void intel_xpcie_interface_cleanup(struct xpcie_interface *inf);
+void intel_xpcie_interfaces_cleanup(struct xpcie *xpcie);
+int intel_xpcie_interfaces_init(struct xpcie *xpcie);
+void intel_xpcie_add_bd_to_interface(struct xpcie *xpcie,
+				     struct xpcie_buf_desc *bd);
+void *intel_xpcie_cap_find(struct xpcie *xpcie, u32 start, u16 id);
+#endif /* XPCIE_UTIL_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/xpcie.h b/drivers/misc/xlink-pcie/local_host/xpcie.h
index 0745e6dfee10..8a559617daba 100644
--- a/drivers/misc/xlink-pcie/local_host/xpcie.h
+++ b/drivers/misc/xlink-pcie/local_host/xpcie.h
@@ -14,6 +14,8 @@
 #include <linux/module.h>
 #include <linux/pci_ids.h>
 
+#include "core.h"
+
 #ifndef PCI_DEVICE_ID_INTEL_KEEMBAY
 #define PCI_DEVICE_ID_INTEL_KEEMBAY 0x6240
 #endif
@@ -21,18 +23,79 @@
 #define XPCIE_IO_COMM_SIZE SZ_16K
 #define XPCIE_MMIO_OFFSET SZ_4K
 
+/* Status encoding of both device and host */
+#define XPCIE_STATUS_ERROR	(0xFFFFFFFF)
+#define XPCIE_STATUS_UNINIT	(0)
+#define XPCIE_STATUS_READY	(1)
+#define XPCIE_STATUS_RECOVERY	(2)
+#define XPCIE_STATUS_OFF	(3)
+#define XPCIE_STATUS_RUN	(4)
+
+#define XPCIE_MAGIC_STRLEN	(16)
+#define XPCIE_MAGIC_YOCTO	"VPUYOCTO"
+
 /* MMIO layout and offsets shared between device and host */
 struct xpcie_mmio {
+	u32 device_status;
+	u32 host_status;
 	u8 legacy_a0;
+	u8 htod_tx_doorbell;
+	u8 htod_rx_doorbell;
+	u8 htod_event_doorbell;
+	u8 dtoh_tx_doorbell;
+	u8 dtoh_rx_doorbell;
+	u8 dtoh_event_doorbell;
+	u8 reserved;
+	u32 cap_offset;
+	u8 magic[XPCIE_MAGIC_STRLEN];
 } __packed;
 
+#define XPCIE_MMIO_DEV_STATUS	(offsetof(struct xpcie_mmio, device_status))
+#define XPCIE_MMIO_HOST_STATUS	(offsetof(struct xpcie_mmio, host_status))
 #define XPCIE_MMIO_LEGACY_A0	(offsetof(struct xpcie_mmio, legacy_a0))
+#define XPCIE_MMIO_HTOD_TX_DOORBELL \
+	(offsetof(struct xpcie_mmio, htod_tx_doorbell))
+#define XPCIE_MMIO_HTOD_RX_DOORBELL \
+	(offsetof(struct xpcie_mmio, htod_rx_doorbell))
+#define XPCIE_MMIO_HTOD_EVENT_DOORBELL \
+	(offsetof(struct xpcie_mmio, htod_event_doorbell))
+#define XPCIE_MMIO_DTOH_TX_DOORBELL \
+	(offsetof(struct xpcie_mmio, dtoh_tx_doorbell))
+#define XPCIE_MMIO_DTOH_RX_DOORBELL \
+	(offsetof(struct xpcie_mmio, dtoh_rx_doorbell))
+#define XPCIE_MMIO_DTOH_EVENT_DOORBELL \
+	(offsetof(struct xpcie_mmio, dtoh_event_doorbell))
+#define XPCIE_MMIO_CAP_OFF	(offsetof(struct xpcie_mmio, cap_offset))
+#define XPCIE_MMIO_MAGIC_OFF	(offsetof(struct xpcie_mmio, magic))
 
 struct xpcie {
 	u32 status;
 	bool legacy_a0;
 	void *mmio;
 	void *bar4;
+
+	struct workqueue_struct *rx_wq;
+	struct workqueue_struct *tx_wq;
+
+	struct xpcie_interface interfaces[XPCIE_NUM_INTERFACES];
+
+	size_t fragment_size;
+	struct xpcie_cap_txrx *txrx;
+	struct xpcie_stream tx;
+	struct xpcie_stream rx;
+
+	struct mutex wlock; /* write lock */
+	struct xpcie_list write;
+	bool no_tx_buffer;
+	wait_queue_head_t tx_waitq;
+	bool tx_pending;
+	bool stop_flag;
+
+	struct xpcie_list rx_pool;
+	struct xpcie_list tx_pool;
+
+	struct delayed_work rx_event;
+	struct delayed_work tx_event;
 };
 
 #endif /* XPCIE_HEADER_ */
diff --git a/include/linux/xlink_drv_inf.h b/include/linux/xlink_drv_inf.h
new file mode 100644
index 000000000000..ffe8f4c253e6
--- /dev/null
+++ b/include/linux/xlink_drv_inf.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef _XLINK_DRV_INF_H_
+#define _XLINK_DRV_INF_H_
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#define XLINK_DEV_INF_TYPE_MASK		GENMASK(27, 24)
+#define XLINK_DEV_PHYS_ID_MASK		GENMASK(23, 8)
+#define XLINK_DEV_TYPE_MASK		GENMASK(6, 4)
+#define XLINK_DEV_PCIE_ID_MASK		GENMASK(3, 1)
+#define XLINK_DEV_FUNC_MASK		GENMASK(0, 0)
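+
+/*
+ * sw_device_id bit layout implied by the masks above:
+ *   [27:24] interface type, [23:8] physical id (PCI bus/device),
+ *   [6:4] device type, [3:1] PCIe instance, [0] function
+ */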
+
+enum xlink_device_inf_type {
+	XLINK_DEV_INF_PCIE = 1,
+};
+
+enum xlink_device_type {
+	XLINK_DEV_TYPE_KMB = 0,
+};
+
+enum xlink_device_pcie {
+	XLINK_DEV_PCIE_0 = 0,
+};
+
+enum xlink_device_func {
+	XLINK_DEV_FUNC_VPU = 0,
+};
+
+enum _xlink_device_status {
+	_XLINK_DEV_OFF,
+	_XLINK_DEV_ERROR,
+	_XLINK_DEV_BUSY,
+	_XLINK_DEV_RECOVERY,
+	_XLINK_DEV_READY
+};
+
+int xlink_pcie_get_device_list(u32 *sw_device_id_list,
+			       u32 *num_devices);
+int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
+			       size_t name_size);
+int xlink_pcie_get_device_status(u32 sw_device_id,
+				 u32 *device_status);
+int xlink_pcie_boot_device(u32 sw_device_id, const char *binary_name);
+int xlink_pcie_connect(u32 sw_device_id);
+int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
+		    u32 timeout);
+int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
+		     u32 timeout);
+int xlink_pcie_reset_device(u32 sw_device_id);
+#endif
-- 
2.17.1



* [PATCH v2 12/34] misc: xlink-pcie: lh: Prepare changes for adding remote host driver
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (10 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 11/34] misc: xlink-pcie: lh: Add core communication logic mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 13/34] misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host mgross
                   ` (21 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala

From: Srikanth Thokala <srikanth.thokala@intel.com>

Move logic that can be shared between the local host and remote host
drivers into the common/ folder

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 drivers/misc/xlink-pcie/{local_host => common}/core.h  | 0
 drivers/misc/xlink-pcie/{local_host => common}/util.c  | 0
 drivers/misc/xlink-pcie/{local_host => common}/util.h  | 0
 drivers/misc/xlink-pcie/{local_host => common}/xpcie.h | 0
 drivers/misc/xlink-pcie/local_host/Makefile            | 2 +-
 drivers/misc/xlink-pcie/local_host/core.c              | 4 ++--
 drivers/misc/xlink-pcie/local_host/epf.h               | 4 ++--
 7 files changed, 5 insertions(+), 5 deletions(-)
 rename drivers/misc/xlink-pcie/{local_host => common}/core.h (100%)
 rename drivers/misc/xlink-pcie/{local_host => common}/util.c (100%)
 rename drivers/misc/xlink-pcie/{local_host => common}/util.h (100%)
 rename drivers/misc/xlink-pcie/{local_host => common}/xpcie.h (100%)

diff --git a/drivers/misc/xlink-pcie/local_host/core.h b/drivers/misc/xlink-pcie/common/core.h
similarity index 100%
rename from drivers/misc/xlink-pcie/local_host/core.h
rename to drivers/misc/xlink-pcie/common/core.h
diff --git a/drivers/misc/xlink-pcie/local_host/util.c b/drivers/misc/xlink-pcie/common/util.c
similarity index 100%
rename from drivers/misc/xlink-pcie/local_host/util.c
rename to drivers/misc/xlink-pcie/common/util.c
diff --git a/drivers/misc/xlink-pcie/local_host/util.h b/drivers/misc/xlink-pcie/common/util.h
similarity index 100%
rename from drivers/misc/xlink-pcie/local_host/util.h
rename to drivers/misc/xlink-pcie/common/util.h
diff --git a/drivers/misc/xlink-pcie/local_host/xpcie.h b/drivers/misc/xlink-pcie/common/xpcie.h
similarity index 100%
rename from drivers/misc/xlink-pcie/local_host/xpcie.h
rename to drivers/misc/xlink-pcie/common/xpcie.h
diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 28761751d43b..65df94c7e860 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -2,4 +2,4 @@ obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
 mxlk_ep-objs := epf.o
 mxlk_ep-objs += dma.o
 mxlk_ep-objs += core.o
-mxlk_ep-objs += util.o
+mxlk_ep-objs += ../common/util.o
diff --git a/drivers/misc/xlink-pcie/local_host/core.c b/drivers/misc/xlink-pcie/local_host/core.c
index 612ab917db45..51fa25259515 100644
--- a/drivers/misc/xlink-pcie/local_host/core.c
+++ b/drivers/misc/xlink-pcie/local_host/core.c
@@ -10,8 +10,8 @@
 #include <linux/of_reserved_mem.h>
 
 #include "epf.h"
-#include "core.h"
-#include "util.h"
+#include "../common/core.h"
+#include "../common/util.h"
 
 static struct xpcie *global_xpcie;
 
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
index ca01e17c5107..ad79416476d5 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.h
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -13,8 +13,8 @@
 #include <linux/pci-epc.h>
 #include <linux/pci-epf.h>
 
-#include "xpcie.h"
-#include "util.h"
+#include "../common/xpcie.h"
+#include "../common/util.h"
 
 #define XPCIE_DRIVER_NAME "mxlk_pcie_epf"
 #define XPCIE_DRIVER_DESC "Intel(R) xLink PCIe endpoint function driver"
-- 
2.17.1



* [PATCH v2 13/34] misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (11 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 12/34] misc: xlink-pcie: lh: Prepare changes for adding remote host driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 14/34] misc: xlink-pcie: rh: Add core communication logic mgross
                   ` (20 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala

From: Srikanth Thokala <srikanth.thokala@intel.com>

Add the PCIe endpoint driver that configures PCIe BARs and MSIs on the
Remote Host
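
For context, a minimal sketch of how an IA-host consumer might drive the
interface declared in include/linux/xlink_drv_inf.h once this driver is
bound is shown below; the function name, buffer size and the assumption
that the timeout is expressed in milliseconds are illustrative and not
part of this patch:

#include <linux/errno.h>
#include <linux/xlink_drv_inf.h>

static int example_send(void)
{
	u32 sw_ids[4], num_devs = 0;
	u8 buf[64] = { 0 };		/* illustrative payload */
	size_t len = sizeof(buf);
	int ret;

	/* Enumerate detected devices by software device id */
	ret = xlink_pcie_get_device_list(sw_ids, &num_devs);
	if (ret || !num_devs)
		return ret ? ret : -ENODEV;

	/* Connect succeeds only once the device reports it is running */
	ret = xlink_pcie_connect(sw_ids[0]);
	if (ret)
		return ret;

	/* Transfer the buffer; len returns the number of bytes written */
	return xlink_pcie_write(sw_ids[0], buf, &len, 1000);
}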

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 MAINTAINERS                                  |   2 +-
 drivers/misc/xlink-pcie/Kconfig              |  11 +
 drivers/misc/xlink-pcie/Makefile             |   1 +
 drivers/misc/xlink-pcie/common/xpcie.h       |   1 +
 drivers/misc/xlink-pcie/remote_host/Makefile |   3 +
 drivers/misc/xlink-pcie/remote_host/main.c   |  92 ++++
 drivers/misc/xlink-pcie/remote_host/pci.c    | 451 +++++++++++++++++++
 drivers/misc/xlink-pcie/remote_host/pci.h    |  64 +++
 8 files changed, 624 insertions(+), 1 deletion(-)
 create mode 100644 drivers/misc/xlink-pcie/remote_host/Makefile
 create mode 100644 drivers/misc/xlink-pcie/remote_host/main.c
 create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.c
 create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 036658cba574..df7b61900664 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1962,7 +1962,7 @@ F:	Documentation/devicetree/bindings/arm/intel,keembay.yaml
 F:	arch/arm64/boot/dts/intel/keembay-evm.dts
 F:	arch/arm64/boot/dts/intel/keembay-soc.dtsi
 
-ARM KEEM BAY XLINK PCIE SUPPORT
+ARM/INTEL KEEM BAY XLINK PCIE SUPPORT
 M:	Srikanth Thokala <srikanth.thokala@intel.com>
 M:	Mark Gross <mgross@linux.intel.com>
 S:	Supported
diff --git a/drivers/misc/xlink-pcie/Kconfig b/drivers/misc/xlink-pcie/Kconfig
index 46aa401d79b7..448b9bfbdfa2 100644
--- a/drivers/misc/xlink-pcie/Kconfig
+++ b/drivers/misc/xlink-pcie/Kconfig
@@ -1,3 +1,14 @@
+config XLINK_PCIE_RH_DRIVER
+	tristate "XLink PCIe Remote Host driver"
+	depends on PCI && X86_64
+	help
+	  This option enables the XLink PCIe Remote Host driver.
+
+	  Choose M here to compile this driver as a module named mxlk.
+	  This driver is used for XLink communication over PCIe and is
+	  loaded on the IA host that is connected to the Intel Keem Bay
+	  device.
+
 config XLINK_PCIE_LH_DRIVER
 	tristate "XLink PCIe Local Host driver"
 	depends on PCI_ENDPOINT && ARCH_KEEMBAY
diff --git a/drivers/misc/xlink-pcie/Makefile b/drivers/misc/xlink-pcie/Makefile
index d693d382e9c6..1dd984d8d88c 100644
--- a/drivers/misc/xlink-pcie/Makefile
+++ b/drivers/misc/xlink-pcie/Makefile
@@ -1 +1,2 @@
+obj-$(CONFIG_XLINK_PCIE_RH_DRIVER) += remote_host/
 obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += local_host/
diff --git a/drivers/misc/xlink-pcie/common/xpcie.h b/drivers/misc/xlink-pcie/common/xpcie.h
index 8a559617daba..d6e06f91ad91 100644
--- a/drivers/misc/xlink-pcie/common/xpcie.h
+++ b/drivers/misc/xlink-pcie/common/xpcie.h
@@ -71,6 +71,7 @@ struct xpcie_mmio {
 struct xpcie {
 	u32 status;
 	bool legacy_a0;
+	void *bar0;
 	void *mmio;
 	void *bar4;
 
diff --git a/drivers/misc/xlink-pcie/remote_host/Makefile b/drivers/misc/xlink-pcie/remote_host/Makefile
new file mode 100644
index 000000000000..96374a43023e
--- /dev/null
+++ b/drivers/misc/xlink-pcie/remote_host/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_XLINK_PCIE_RH_DRIVER) += mxlk.o
+mxlk-objs := main.o
+mxlk-objs += pci.o
diff --git a/drivers/misc/xlink-pcie/remote_host/main.c b/drivers/misc/xlink-pcie/remote_host/main.c
new file mode 100644
index 000000000000..d88257dd2585
--- /dev/null
+++ b/drivers/misc/xlink-pcie/remote_host/main.c
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include "pci.h"
+#include "../common/core.h"
+
+#define HW_ID_LO_MASK	GENMASK(7, 0)
+#define HW_ID_HI_MASK	GENMASK(15, 8)
+
+static const struct pci_device_id xpcie_pci_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KEEMBAY), 0 },
+	{ 0 }
+};
+
+static int intel_xpcie_probe(struct pci_dev *pdev,
+			     const struct pci_device_id *ent)
+{
+	bool new_device = false;
+	struct xpcie_dev *xdev;
+	u32 sw_devid;
+	u16 hw_id;
+	int ret;
+
+	hw_id = FIELD_PREP(HW_ID_HI_MASK, pdev->bus->number) |
+		FIELD_PREP(HW_ID_LO_MASK, PCI_SLOT(pdev->devfn));
+
+	sw_devid = FIELD_PREP(XLINK_DEV_INF_TYPE_MASK,
+			      XLINK_DEV_INF_PCIE) |
+		   FIELD_PREP(XLINK_DEV_PHYS_ID_MASK, hw_id) |
+		   FIELD_PREP(XLINK_DEV_TYPE_MASK, XLINK_DEV_TYPE_KMB) |
+		   FIELD_PREP(XLINK_DEV_PCIE_ID_MASK, XLINK_DEV_PCIE_0) |
+		   FIELD_PREP(XLINK_DEV_FUNC_MASK, XLINK_DEV_FUNC_VPU);
+
+	xdev = intel_xpcie_get_device_by_id(sw_devid);
+	if (!xdev) {
+		xdev = intel_xpcie_create_device(sw_devid, pdev);
+		if (!xdev)
+			return -ENOMEM;
+
+		new_device = true;
+	}
+
+	ret = intel_xpcie_pci_init(xdev, pdev);
+	if (ret) {
+		intel_xpcie_remove_device(xdev);
+		return ret;
+	}
+
+	if (new_device)
+		intel_xpcie_list_add_device(xdev);
+
+	return ret;
+}
+
+static void intel_xpcie_remove(struct pci_dev *pdev)
+{
+	struct xpcie_dev *xdev = pci_get_drvdata(pdev);
+
+	if (xdev) {
+		intel_xpcie_pci_cleanup(xdev);
+		intel_xpcie_remove_device(xdev);
+	}
+}
+
+static struct pci_driver xpcie_driver = {
+	.name = XPCIE_DRIVER_NAME,
+	.id_table = xpcie_pci_table,
+	.probe = intel_xpcie_probe,
+	.remove = intel_xpcie_remove
+};
+
+static int __init intel_xpcie_init_module(void)
+{
+	return pci_register_driver(&xpcie_driver);
+}
+
+static void __exit intel_xpcie_exit_module(void)
+{
+	pci_unregister_driver(&xpcie_driver);
+}
+
+module_init(intel_xpcie_init_module);
+module_exit(intel_xpcie_exit_module);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION(XPCIE_DRIVER_DESC);
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.c b/drivers/misc/xlink-pcie/remote_host/pci.c
new file mode 100644
index 000000000000..0ca0755b591f
--- /dev/null
+++ b/drivers/misc/xlink-pcie/remote_host/pci.c
@@ -0,0 +1,451 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "pci.h"
+
+#include "../common/core.h"
+#include "../common/util.h"
+
+static int aspm_enable;
+module_param(aspm_enable, int, 0664);
+MODULE_PARM_DESC(aspm_enable, "enable ASPM");
+
+static LIST_HEAD(dev_list);
+static DEFINE_MUTEX(dev_list_mutex);
+
+struct xpcie_dev *intel_xpcie_get_device_by_id(u32 id)
+{
+	struct xpcie_dev *xdev;
+
+	mutex_lock(&dev_list_mutex);
+
+	if (list_empty(&dev_list)) {
+		mutex_unlock(&dev_list_mutex);
+		return NULL;
+	}
+
+	list_for_each_entry(xdev, &dev_list, list) {
+		if (xdev->devid == id) {
+			mutex_unlock(&dev_list_mutex);
+			return xdev;
+		}
+	}
+
+	mutex_unlock(&dev_list_mutex);
+
+	return NULL;
+}
+
+struct xpcie_dev *intel_xpcie_create_device(u32 sw_device_id,
+					    struct pci_dev *pdev)
+{
+	struct xpcie_dev *xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
+
+	if (!xdev)
+		return NULL;
+
+	xdev->devid = sw_device_id;
+	snprintf(xdev->name, XPCIE_MAX_NAME_LEN, "%02x:%02x.%x",
+		 pdev->bus->number,
+		 PCI_SLOT(pdev->devfn),
+		 PCI_FUNC(pdev->devfn));
+
+	mutex_init(&xdev->lock);
+
+	return xdev;
+}
+
+void intel_xpcie_remove_device(struct xpcie_dev *xdev)
+{
+	mutex_destroy(&xdev->lock);
+	kfree(xdev);
+}
+
+void intel_xpcie_list_add_device(struct xpcie_dev *xdev)
+{
+	mutex_lock(&dev_list_mutex);
+
+	list_add_tail(&xdev->list, &dev_list);
+
+	mutex_unlock(&dev_list_mutex);
+}
+
+void intel_xpcie_list_del_device(struct xpcie_dev *xdev)
+{
+	mutex_lock(&dev_list_mutex);
+
+	list_del(&xdev->list);
+
+	mutex_unlock(&dev_list_mutex);
+}
+
+static void intel_xpcie_pci_set_aspm(struct xpcie_dev *xdev, int aspm)
+{
+	u16 link_control;
+	u8 cap_exp;
+
+	cap_exp = pci_find_capability(xdev->pci, PCI_CAP_ID_EXP);
+	if (!cap_exp) {
+		dev_err(&xdev->pci->dev, "failed to find pcie capability\n");
+		return;
+	}
+
+	pci_read_config_word(xdev->pci, cap_exp + PCI_EXP_LNKCTL,
+			     &link_control);
+	link_control &= ~(PCI_EXP_LNKCTL_ASPMC);
+	link_control |= (aspm & PCI_EXP_LNKCTL_ASPMC);
+	pci_write_config_word(xdev->pci, cap_exp + PCI_EXP_LNKCTL,
+			      link_control);
+}
+
+static void intel_xpcie_pci_unmap_bar(struct xpcie_dev *xdev)
+{
+	if (xdev->xpcie.bar0) {
+		iounmap((void __iomem *)xdev->xpcie.bar0);
+		xdev->xpcie.bar0 = NULL;
+	}
+
+	if (xdev->xpcie.mmio) {
+		iounmap((void __iomem *)(xdev->xpcie.mmio - XPCIE_MMIO_OFFSET));
+		xdev->xpcie.mmio = NULL;
+	}
+
+	if (xdev->xpcie.bar4) {
+		iounmap((void __iomem *)xdev->xpcie.bar4);
+		xdev->xpcie.bar4 = NULL;
+	}
+}
+
+static int intel_xpcie_pci_map_bar(struct xpcie_dev *xdev)
+{
+	if (pci_resource_len(xdev->pci, 2) < XPCIE_IO_COMM_SIZE) {
+		dev_err(&xdev->pci->dev, "device BAR region is too small\n");
+		return -EIO;
+	}
+
+	xdev->xpcie.bar0 = (void __force *)pci_ioremap_bar(xdev->pci, 0);
+	if (!xdev->xpcie.bar0) {
+		dev_err(&xdev->pci->dev, "failed to ioremap BAR0\n");
+		goto bar_error;
+	}
+
+	xdev->xpcie.mmio = (void __force *)
+			   (pci_ioremap_bar(xdev->pci, 2) + XPCIE_MMIO_OFFSET);
+	if (!xdev->xpcie.mmio) {
+		dev_err(&xdev->pci->dev, "failed to ioremap BAR2\n");
+		goto bar_error;
+	}
+
+	xdev->xpcie.bar4 = (void __force *)pci_ioremap_wc_bar(xdev->pci, 4);
+	if (!xdev->xpcie.bar4) {
+		dev_err(&xdev->pci->dev, "failed to ioremap BAR4\n");
+		goto bar_error;
+	}
+
+	return 0;
+
+bar_error:
+	intel_xpcie_pci_unmap_bar(xdev);
+	return -EIO;
+}
+
+static void intel_xpcie_pci_irq_cleanup(struct xpcie_dev *xdev)
+{
+	int irq = pci_irq_vector(xdev->pci, 0);
+
+	if (irq < 0)
+		return;
+
+	synchronize_irq(irq);
+	free_irq(irq, xdev);
+	pci_free_irq_vectors(xdev->pci);
+}
+
+static int intel_xpcie_pci_irq_init(struct xpcie_dev *xdev,
+				    irq_handler_t irq_handler)
+{
+	int rc, irq;
+
+	rc = pci_alloc_irq_vectors(xdev->pci, 1, 1, PCI_IRQ_MSI);
+	if (rc < 0) {
+		dev_err(&xdev->pci->dev,
+			"failed to allocate %d MSI vectors\n", 1);
+		return rc;
+	}
+
+	irq = pci_irq_vector(xdev->pci, 0);
+	if (irq < 0) {
+		dev_err(&xdev->pci->dev, "failed to get irq\n");
+		rc = irq;
+		goto error_irq;
+	}
+	rc = request_irq(irq, irq_handler, 0,
+			 XPCIE_DRIVER_NAME, xdev);
+	if (rc) {
+		dev_err(&xdev->pci->dev, "failed to request irq\n");
+		goto error_irq;
+	}
+
+	return 0;
+
+error_irq:
+	pci_free_irq_vectors(xdev->pci);
+	return rc;
+}
+
+static void xpcie_device_poll(struct work_struct *work)
+{
+	struct xpcie_dev *xdev = container_of(work, struct xpcie_dev,
+					      wait_event.work);
+	u32 dev_status = intel_xpcie_ioread32(xdev->xpcie.mmio +
+					      XPCIE_MMIO_DEV_STATUS);
+
+	if (dev_status < XPCIE_STATUS_RUN)
+		schedule_delayed_work(&xdev->wait_event,
+				      msecs_to_jiffies(100));
+	else
+		xdev->xpcie.status = XPCIE_STATUS_READY;
+}
+
+static int intel_xpcie_pci_prepare_dev_reset(struct xpcie_dev *xdev,
+					     bool notify)
+{
+	if (mutex_lock_interruptible(&xdev->lock))
+		return -EINTR;
+
+	if (xdev->core_irq_callback)
+		xdev->core_irq_callback = NULL;
+
+	xdev->xpcie.status = XPCIE_STATUS_OFF;
+	if (notify)
+		intel_xpcie_pci_raise_irq(xdev, DEV_EVENT, REQUEST_RESET);
+
+	mutex_unlock(&xdev->lock);
+
+	return 0;
+}
+
+static void xpcie_device_shutdown(struct work_struct *work)
+{
+	struct xpcie_dev *xdev = container_of(work, struct xpcie_dev,
+					      shutdown_event.work);
+
+	intel_xpcie_pci_prepare_dev_reset(xdev, false);
+}
+
+static int xpcie_device_init(struct xpcie_dev *xdev)
+{
+	INIT_DELAYED_WORK(&xdev->wait_event, xpcie_device_poll);
+	INIT_DELAYED_WORK(&xdev->shutdown_event, xpcie_device_shutdown);
+
+	pci_set_master(xdev->pci);
+
+	xdev->xpcie.status = XPCIE_STATUS_UNINIT;
+
+	init_waitqueue_head(&xdev->waitqueue);
+	schedule_delayed_work(&xdev->wait_event, 0);
+
+	return 0;
+}
+
+int intel_xpcie_pci_init(struct xpcie_dev *xdev, struct pci_dev *pdev)
+{
+	int rc;
+
+	if (mutex_lock_interruptible(&xdev->lock))
+		return -EINTR;
+
+	xdev->pci = pdev;
+	pci_set_drvdata(pdev, xdev);
+
+	rc = pci_enable_device_mem(xdev->pci);
+	if (rc) {
+		dev_err(&pdev->dev, "failed to enable pci device\n");
+		goto error_exit;
+	}
+
+	rc = pci_request_regions(xdev->pci, XPCIE_DRIVER_NAME);
+	if (rc) {
+		dev_err(&pdev->dev, "failed to request mmio regions\n");
+		goto error_req_mem;
+	}
+
+	rc = intel_xpcie_pci_map_bar(xdev);
+	if (rc)
+		goto error_map;
+
+	rc = dma_set_mask_and_coherent(&xdev->pci->dev, DMA_BIT_MASK(64));
+	if (rc) {
+		dev_err(&pdev->dev, "failed to set dma mask\n");
+		goto error_dma_mask;
+	}
+
+	intel_xpcie_pci_set_aspm(xdev, aspm_enable);
+
+	rc = xpcie_device_init(xdev);
+	if (!rc)
+		goto init_exit;
+
+error_dma_mask:
+	intel_xpcie_pci_unmap_bar(xdev);
+
+error_map:
+	pci_release_regions(xdev->pci);
+
+error_req_mem:
+	pci_disable_device(xdev->pci);
+
+error_exit:
+	xdev->xpcie.status = XPCIE_STATUS_ERROR;
+
+init_exit:
+	mutex_unlock(&xdev->lock);
+	if (rc)
+		mutex_destroy(&xdev->lock);
+	return rc;
+}
+
+int intel_xpcie_pci_cleanup(struct xpcie_dev *xdev)
+{
+	if (mutex_lock_interruptible(&xdev->lock))
+		return -EINTR;
+
+	cancel_delayed_work(&xdev->wait_event);
+	cancel_delayed_work(&xdev->shutdown_event);
+	xdev->core_irq_callback = NULL;
+	intel_xpcie_pci_irq_cleanup(xdev);
+
+	intel_xpcie_pci_unmap_bar(xdev);
+	pci_release_regions(xdev->pci);
+	pci_disable_device(xdev->pci);
+	pci_set_drvdata(xdev->pci, NULL);
+	xdev->xpcie.status = XPCIE_STATUS_OFF;
+	xdev->irq_enabled = false;
+
+	mutex_unlock(&xdev->lock);
+
+	return 0;
+}
+
+int intel_xpcie_pci_register_irq(struct xpcie_dev *xdev,
+				 irq_handler_t irq_handler)
+{
+	int rc;
+
+	if (xdev->xpcie.status != XPCIE_STATUS_READY)
+		return -EINVAL;
+
+	rc = intel_xpcie_pci_irq_init(xdev, irq_handler);
+	if (rc)
+		dev_warn(&xdev->pci->dev, "failed to initialize pci irq\n");
+
+	return rc;
+}
+
+int intel_xpcie_pci_raise_irq(struct xpcie_dev *xdev,
+			      enum xpcie_doorbell_type type,
+			      u8 value)
+{
+	u16 pci_status;
+
+	pci_read_config_word(xdev->pci, PCI_STATUS, &pci_status);
+
+	return 0;
+}
+
+u32 intel_xpcie_get_device_num(u32 *id_list)
+{
+	struct xpcie_dev *p;
+	u32 num = 0;
+
+	mutex_lock(&dev_list_mutex);
+
+	if (list_empty(&dev_list)) {
+		mutex_unlock(&dev_list_mutex);
+		return 0;
+	}
+
+	list_for_each_entry(p, &dev_list, list) {
+		*id_list++ = p->devid;
+		num++;
+	}
+	mutex_unlock(&dev_list_mutex);
+
+	return num;
+}
+
+int intel_xpcie_get_device_name_by_id(u32 id,
+				      char *device_name, size_t name_size)
+{
+	struct xpcie_dev *xdev;
+	size_t size;
+
+	xdev = intel_xpcie_get_device_by_id(id);
+	if (!xdev)
+		return -ENODEV;
+
+	mutex_lock(&xdev->lock);
+
+	size = (name_size > XPCIE_MAX_NAME_LEN) ?
+		XPCIE_MAX_NAME_LEN : name_size;
+	memcpy(device_name, xdev->name, size);
+
+	mutex_unlock(&xdev->lock);
+
+	return 0;
+}
+
+int intel_xpcie_get_device_status_by_id(u32 id, u32 *status)
+{
+	struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(id);
+
+	if (!xdev)
+		return -ENODEV;
+
+	mutex_lock(&xdev->lock);
+	*status = xdev->xpcie.status;
+	mutex_unlock(&xdev->lock);
+
+	return 0;
+}
+
+int intel_xpcie_pci_connect_device(u32 id)
+{
+	struct xpcie_dev *xdev;
+	int rc = 0;
+
+	xdev = intel_xpcie_get_device_by_id(id);
+	if (!xdev)
+		return -ENODEV;
+
+	if (mutex_lock_interruptible(&xdev->lock))
+		return -EINTR;
+
+	if (xdev->xpcie.status == XPCIE_STATUS_RUN)
+		goto connect_cleanup;
+
+	if (xdev->xpcie.status == XPCIE_STATUS_OFF) {
+		rc = -ENODEV;
+		goto connect_cleanup;
+	}
+
+	if (xdev->xpcie.status != XPCIE_STATUS_READY) {
+		rc = -EBUSY;
+		goto connect_cleanup;
+	}
+
+connect_cleanup:
+	mutex_unlock(&xdev->lock);
+	return rc;
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.h b/drivers/misc/xlink-pcie/remote_host/pci.h
new file mode 100644
index 000000000000..72de3701f83a
--- /dev/null
+++ b/drivers/misc/xlink-pcie/remote_host/pci.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_PCI_HEADER_
+#define XPCIE_PCI_HEADER_
+
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/pci.h>
+#include <linux/xlink_drv_inf.h>
+#include "../common/xpcie.h"
+#include "../common/util.h"
+
+#define XPCIE_DRIVER_NAME "mxlk"
+#define XPCIE_DRIVER_DESC "Intel(R) Keem Bay XLink PCIe driver"
+
+#define XPCIE_MAX_NAME_LEN	(32)
+
+struct xpcie_dev {
+	struct list_head list;
+	struct mutex lock; /* Device Lock */
+
+	struct pci_dev *pci;
+	char name[XPCIE_MAX_NAME_LEN];
+	u32 devid;
+	char fw_name[XPCIE_MAX_NAME_LEN];
+
+	struct delayed_work wait_event;
+	struct delayed_work shutdown_event;
+	wait_queue_head_t waitqueue;
+	bool irq_enabled;
+	irq_handler_t core_irq_callback;
+
+	struct xpcie xpcie;
+};
+
+static inline struct device *xpcie_to_dev(struct xpcie *xpcie)
+{
+	struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+
+	return &xdev->pci->dev;
+}
+
+int intel_xpcie_pci_init(struct xpcie_dev *xdev, struct pci_dev *pdev);
+int intel_xpcie_pci_cleanup(struct xpcie_dev *xdev);
+int intel_xpcie_pci_register_irq(struct xpcie_dev *xdev,
+				 irq_handler_t irq_handler);
+int intel_xpcie_pci_raise_irq(struct xpcie_dev *xdev,
+			      enum xpcie_doorbell_type type,
+			      u8 value);
+
+struct xpcie_dev *intel_xpcie_create_device(u32 sw_device_id,
+					    struct pci_dev *pdev);
+void intel_xpcie_remove_device(struct xpcie_dev *xdev);
+void intel_xpcie_list_add_device(struct xpcie_dev *xdev);
+void intel_xpcie_list_del_device(struct xpcie_dev *xdev);
+
+#endif /* XPCIE_PCI_HEADER_ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 14/34] misc: xlink-pcie: rh: Add core communication logic
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (12 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 13/34] misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface mgross
                   ` (19 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala

From: Srikanth Thokala <srikanth.thokala@intel.com>

Add logic to establish communication with the local host, which is done
through ring buffer management and MSI/doorbell interrupts.
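
For reference, below is a stand-alone sketch of the head/tail bookkeeping
the transfer descriptor rings rely on. The ring structure, the macro and
the test harness are illustrative only and merely mirror the driver's
circular-increment idea; they are not part of the driver.

/*
 * Illustrative, stand-alone sketch of descriptor-ring head/tail
 * bookkeeping; the macro mirrors the driver's XPCIE_CIRCULAR_INC idea,
 * everything else here is hypothetical.
 */
#include <stdio.h>

#define RING_CIRCULAR_INC(idx, size)	(((idx) + 1) % (size))

struct demo_ring {
	unsigned int ndesc;	/* number of descriptors in the ring */
	unsigned int head;	/* consumer position */
	unsigned int tail;	/* producer position */
};

/* Producer side: claim one free slot, if any, and advance the tail. */
static int ring_produce(struct demo_ring *r)
{
	if (RING_CIRCULAR_INC(r->tail, r->ndesc) == r->head)
		return -1;	/* ring full, one slot is kept empty */
	r->tail = RING_CIRCULAR_INC(r->tail, r->ndesc);
	return 0;
}

/* Consumer side: retire one filled slot, if any, and advance the head. */
static int ring_consume(struct demo_ring *r)
{
	if (r->head == r->tail)
		return -1;	/* ring empty */
	r->head = RING_CIRCULAR_INC(r->head, r->ndesc);
	return 0;
}

int main(void)
{
	struct demo_ring ring = { .ndesc = 4, .head = 0, .tail = 0 };
	int produced = 0, consumed = 0;

	while (!ring_produce(&ring))
		produced++;
	while (!ring_consume(&ring))
		consumed++;

	/* With 4 descriptors, 3 are usable: prints "produced 3, consumed 3". */
	printf("produced %d, consumed %d\n", produced, consumed);
	return 0;
}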

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 drivers/misc/xlink-pcie/common/core.h        |  11 +-
 drivers/misc/xlink-pcie/remote_host/Makefile |   2 +
 drivers/misc/xlink-pcie/remote_host/core.c   | 623 +++++++++++++++++++
 drivers/misc/xlink-pcie/remote_host/pci.c    |  48 +-
 4 files changed, 672 insertions(+), 12 deletions(-)
 create mode 100644 drivers/misc/xlink-pcie/remote_host/core.c

diff --git a/drivers/misc/xlink-pcie/common/core.h b/drivers/misc/xlink-pcie/common/core.h
index 84985ef41a64..eec8566c19d9 100644
--- a/drivers/misc/xlink-pcie/common/core.h
+++ b/drivers/misc/xlink-pcie/common/core.h
@@ -10,15 +10,11 @@
 #ifndef XPCIE_CORE_HEADER_
 #define XPCIE_CORE_HEADER_
 
-#include <linux/io.h>
-#include <linux/types.h>
-#include <linux/workqueue.h>
-#include <linux/slab.h>
-#include <linux/mutex.h>
-#include <linux/mempool.h>
 #include <linux/dma-mapping.h>
-#include <linux/cache.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
 #include <linux/wait.h>
+#include <linux/workqueue.h>
 
 #include <linux/xlink_drv_inf.h>
 
@@ -64,6 +60,7 @@ struct xpcie_buf_desc {
 struct xpcie_stream {
 	size_t frag;
 	struct xpcie_pipe pipe;
+	struct xpcie_buf_desc **ddr;
 };
 
 struct xpcie_list {
diff --git a/drivers/misc/xlink-pcie/remote_host/Makefile b/drivers/misc/xlink-pcie/remote_host/Makefile
index 96374a43023e..e8074dbb1161 100644
--- a/drivers/misc/xlink-pcie/remote_host/Makefile
+++ b/drivers/misc/xlink-pcie/remote_host/Makefile
@@ -1,3 +1,5 @@
 obj-$(CONFIG_XLINK_PCIE_RH_DRIVER) += mxlk.o
 mxlk-objs := main.o
 mxlk-objs += pci.o
+mxlk-objs += core.o
+mxlk-objs += ../common/util.o
diff --git a/drivers/misc/xlink-pcie/remote_host/core.c b/drivers/misc/xlink-pcie/remote_host/core.c
new file mode 100644
index 000000000000..2e20e1490076
--- /dev/null
+++ b/drivers/misc/xlink-pcie/remote_host/core.c
@@ -0,0 +1,623 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include "pci.h"
+
+#include "../common/core.h"
+#include "../common/util.h"
+
+static int intel_xpcie_map_dma(struct xpcie *xpcie, struct xpcie_buf_desc *bd,
+			       int direction)
+{
+	struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+	struct device *dev = &xdev->pci->dev;
+
+	bd->phys = dma_map_single(dev, bd->data, bd->length, direction);
+
+	return dma_mapping_error(dev, bd->phys);
+}
+
+static void intel_xpcie_unmap_dma(struct xpcie *xpcie,
+				  struct xpcie_buf_desc *bd,
+				  int direction)
+{
+	struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+	struct device *dev = &xdev->pci->dev;
+
+	dma_unmap_single(dev, bd->phys, bd->length, direction);
+}
+
+static void intel_xpcie_txrx_cleanup(struct xpcie *xpcie)
+{
+	struct xpcie_interface *inf = &xpcie->interfaces[0];
+	struct xpcie_stream *tx = &xpcie->tx;
+	struct xpcie_stream *rx = &xpcie->rx;
+	struct xpcie_buf_desc *bd;
+	int index;
+
+	xpcie->stop_flag = true;
+	xpcie->no_tx_buffer = false;
+	inf->data_avail = true;
+	wake_up_interruptible(&xpcie->tx_waitq);
+	wake_up_interruptible(&inf->rx_waitq);
+	mutex_lock(&xpcie->wlock);
+	mutex_lock(&inf->rlock);
+
+	if (tx->ddr) {
+		for (index = 0; index < tx->pipe.ndesc; index++) {
+			struct xpcie_transfer_desc *td = tx->pipe.tdr + index;
+
+			bd = tx->ddr[index];
+			if (bd) {
+				intel_xpcie_unmap_dma(xpcie, bd, DMA_TO_DEVICE);
+				intel_xpcie_free_tx_bd(xpcie, bd);
+				intel_xpcie_set_td_address(td, 0);
+				intel_xpcie_set_td_length(td, 0);
+			}
+		}
+		kfree(tx->ddr);
+	}
+
+	if (rx->ddr) {
+		for (index = 0; index < rx->pipe.ndesc; index++) {
+			struct xpcie_transfer_desc *td = rx->pipe.tdr + index;
+
+			bd = rx->ddr[index];
+			if (bd) {
+				intel_xpcie_unmap_dma(xpcie,
+						      bd, DMA_FROM_DEVICE);
+				intel_xpcie_free_rx_bd(xpcie, bd);
+				intel_xpcie_set_td_address(td, 0);
+				intel_xpcie_set_td_length(td, 0);
+			}
+		}
+		kfree(rx->ddr);
+	}
+
+	intel_xpcie_list_cleanup(&xpcie->tx_pool);
+	intel_xpcie_list_cleanup(&xpcie->rx_pool);
+
+	mutex_unlock(&inf->rlock);
+	mutex_unlock(&xpcie->wlock);
+}
+
+static int intel_xpcie_txrx_init(struct xpcie *xpcie,
+				 struct xpcie_cap_txrx *cap)
+{
+	struct xpcie_stream *tx = &xpcie->tx;
+	struct xpcie_stream *rx = &xpcie->rx;
+	int tx_pool_size, rx_pool_size;
+	struct xpcie_buf_desc *bd;
+	int rc, index, ndesc;
+
+	xpcie->txrx = cap;
+	xpcie->fragment_size = intel_xpcie_ioread32(&cap->fragment_size);
+	xpcie->stop_flag = false;
+
+	tx->pipe.ndesc = intel_xpcie_ioread32(&cap->tx.ndesc);
+	tx->pipe.head = &cap->tx.head;
+	tx->pipe.tail = &cap->tx.tail;
+	tx->pipe.old = intel_xpcie_ioread32(&cap->tx.tail);
+	tx->pipe.tdr = (struct xpcie_transfer_desc *)(xpcie->mmio +
+				intel_xpcie_ioread32(&cap->tx.ring));
+
+	tx->ddr = kcalloc(tx->pipe.ndesc, sizeof(struct xpcie_buf_desc *),
+			  GFP_KERNEL);
+	if (!tx->ddr) {
+		rc = -ENOMEM;
+		goto error;
+	}
+
+	rx->pipe.ndesc = intel_xpcie_ioread32(&cap->rx.ndesc);
+	rx->pipe.head = &cap->rx.head;
+	rx->pipe.tail = &cap->rx.tail;
+	rx->pipe.old = intel_xpcie_ioread32(&cap->rx.head);
+	rx->pipe.tdr = (struct xpcie_transfer_desc *)(xpcie->mmio +
+				intel_xpcie_ioread32(&cap->rx.ring));
+
+	rx->ddr = kcalloc(rx->pipe.ndesc, sizeof(struct xpcie_buf_desc *),
+			  GFP_KERNEL);
+	if (!rx->ddr) {
+		rc = -ENOMEM;
+		goto error;
+	}
+
+	intel_xpcie_list_init(&xpcie->rx_pool);
+	rx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+	ndesc = rx_pool_size / xpcie->fragment_size;
+
+	for (index = 0; index < ndesc; index++) {
+		bd = intel_xpcie_alloc_bd(xpcie->fragment_size);
+		if (bd) {
+			intel_xpcie_list_put(&xpcie->rx_pool, bd);
+		} else {
+			rc = -ENOMEM;
+			goto error;
+		}
+	}
+
+	intel_xpcie_list_init(&xpcie->tx_pool);
+	tx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+	ndesc = tx_pool_size / xpcie->fragment_size;
+
+	for (index = 0; index < ndesc; index++) {
+		bd = intel_xpcie_alloc_bd(xpcie->fragment_size);
+		if (bd) {
+			intel_xpcie_list_put(&xpcie->tx_pool, bd);
+		} else {
+			rc = -ENOMEM;
+			goto error;
+		}
+	}
+
+	for (index = 0; index < rx->pipe.ndesc; index++) {
+		struct xpcie_transfer_desc *td = rx->pipe.tdr + index;
+
+		bd = intel_xpcie_alloc_rx_bd(xpcie);
+		if (!bd) {
+			rc = -ENOMEM;
+			goto error;
+		}
+
+		if (intel_xpcie_map_dma(xpcie, bd, DMA_FROM_DEVICE)) {
+			dev_err(xpcie_to_dev(xpcie), "failed to map rx bd\n");
+			rc = -ENOMEM;
+			goto error;
+		}
+
+		rx->ddr[index] = bd;
+		intel_xpcie_set_td_address(td, bd->phys);
+		intel_xpcie_set_td_length(td, bd->length);
+	}
+
+	return 0;
+
+error:
+	intel_xpcie_txrx_cleanup(xpcie);
+
+	return rc;
+}
+
+static int intel_xpcie_discover_txrx(struct xpcie *xpcie)
+{
+	struct xpcie_cap_txrx *cap;
+	int error;
+
+	cap = intel_xpcie_cap_find(xpcie, 0, XPCIE_CAP_TXRX);
+	if (cap)
+		error = intel_xpcie_txrx_init(xpcie, cap);
+	else
+		error = -EIO;
+
+	return error;
+}
+
+static void intel_xpcie_start_tx(struct xpcie *xpcie, unsigned long delay)
+{
+	queue_delayed_work(xpcie->tx_wq, &xpcie->tx_event, delay);
+}
+
+static void intel_xpcie_start_rx(struct xpcie *xpcie, unsigned long delay)
+{
+	queue_delayed_work(xpcie->rx_wq, &xpcie->rx_event, delay);
+}
+
+static void intel_xpcie_rx_event_handler(struct work_struct *work)
+{
+	struct xpcie *xpcie = container_of(work, struct xpcie, rx_event.work);
+	struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+	struct xpcie_buf_desc *bd, *replacement = NULL;
+	unsigned long delay = msecs_to_jiffies(1);
+	struct xpcie_stream *rx = &xpcie->rx;
+	struct xpcie_transfer_desc *td;
+	u32 head, tail, ndesc, length;
+	u16 status, interface;
+	int rc;
+
+	if (intel_xpcie_get_device_status(xpcie) != XPCIE_STATUS_RUN)
+		return;
+
+	ndesc = rx->pipe.ndesc;
+	tail = intel_xpcie_get_tdr_tail(&rx->pipe);
+	head = intel_xpcie_get_tdr_head(&rx->pipe);
+
+	while (head != tail) {
+		td = rx->pipe.tdr + head;
+		bd = rx->ddr[head];
+
+		replacement = intel_xpcie_alloc_rx_bd(xpcie);
+		if (!replacement) {
+			delay = msecs_to_jiffies(20);
+			break;
+		}
+
+		rc = intel_xpcie_map_dma(xpcie, replacement, DMA_FROM_DEVICE);
+		if (rc) {
+			dev_err(xpcie_to_dev(xpcie),
+				"failed to map rx bd (%d)\n", rc);
+			intel_xpcie_free_rx_bd(xpcie, replacement);
+			break;
+		}
+
+		status = intel_xpcie_get_td_status(td);
+		interface = intel_xpcie_get_td_interface(td);
+		length = intel_xpcie_get_td_length(td);
+		intel_xpcie_unmap_dma(xpcie, bd, DMA_FROM_DEVICE);
+
+		if (unlikely(status != XPCIE_DESC_STATUS_SUCCESS) ||
+		    unlikely(interface >= XPCIE_NUM_INTERFACES)) {
+			dev_err(xpcie_to_dev(xpcie),
+				"rx desc failure, status(%u), interface(%u)\n",
+			status, interface);
+			intel_xpcie_free_rx_bd(xpcie, bd);
+		} else {
+			bd->interface = interface;
+			bd->length = length;
+			bd->next = NULL;
+
+			intel_xpcie_add_bd_to_interface(xpcie, bd);
+		}
+
+		rx->ddr[head] = replacement;
+		intel_xpcie_set_td_address(td, replacement->phys);
+		intel_xpcie_set_td_length(td, replacement->length);
+		head = XPCIE_CIRCULAR_INC(head, ndesc);
+	}
+
+	if (intel_xpcie_get_tdr_head(&rx->pipe) != head) {
+		intel_xpcie_set_tdr_head(&rx->pipe, head);
+		intel_xpcie_pci_raise_irq(xdev, DATA_RECEIVED, 1);
+	}
+
+	if (!replacement)
+		intel_xpcie_start_rx(xpcie, delay);
+}
+
+static void intel_xpcie_tx_event_handler(struct work_struct *work)
+{
+	struct xpcie *xpcie = container_of(work, struct xpcie, tx_event.work);
+	struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+	struct xpcie_stream *tx = &xpcie->tx;
+	struct xpcie_transfer_desc *td;
+	u32 head, tail, old, ndesc;
+	struct xpcie_buf_desc *bd;
+	size_t bytes, buffers;
+	u16 status;
+
+	if (intel_xpcie_get_device_status(xpcie) != XPCIE_STATUS_RUN)
+		return;
+
+	ndesc = tx->pipe.ndesc;
+	old = tx->pipe.old;
+	tail = intel_xpcie_get_tdr_tail(&tx->pipe);
+	head = intel_xpcie_get_tdr_head(&tx->pipe);
+
+	/* clean old entries first */
+	while (old != head) {
+		bd = tx->ddr[old];
+		td = tx->pipe.tdr + old;
+		status = intel_xpcie_get_td_status(td);
+		if (status != XPCIE_DESC_STATUS_SUCCESS)
+			dev_err(xpcie_to_dev(xpcie),
+				"detected tx desc failure (%u)\n", status);
+
+		intel_xpcie_unmap_dma(xpcie, bd, DMA_TO_DEVICE);
+		intel_xpcie_free_tx_bd(xpcie, bd);
+		tx->ddr[old] = NULL;
+		old = XPCIE_CIRCULAR_INC(old, ndesc);
+	}
+	tx->pipe.old = old;
+
+	/* add new entries */
+	while (XPCIE_CIRCULAR_INC(tail, ndesc) != head) {
+		bd = intel_xpcie_list_get(&xpcie->write);
+		if (!bd)
+			break;
+
+		td = tx->pipe.tdr + tail;
+
+		if (intel_xpcie_map_dma(xpcie, bd, DMA_TO_DEVICE)) {
+			dev_err(xpcie_to_dev(xpcie),
+				"dma mapping error bd addr %p, size %zu\n",
+				bd->data, bd->length);
+			break;
+		}
+
+		tx->ddr[tail] = bd;
+		intel_xpcie_set_td_address(td, bd->phys);
+		intel_xpcie_set_td_length(td, bd->length);
+		intel_xpcie_set_td_interface(td, bd->interface);
+		intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_ERROR);
+
+		tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+	}
+
+	if (intel_xpcie_get_tdr_tail(&tx->pipe) != tail) {
+		intel_xpcie_set_tdr_tail(&tx->pipe, tail);
+		intel_xpcie_pci_raise_irq(xdev, DATA_SENT, 1);
+	}
+
+	intel_xpcie_list_info(&xpcie->write, &bytes, &buffers);
+	if (buffers)
+		xpcie->tx_pending = true;
+	else
+		xpcie->tx_pending = false;
+}
+
+static irqreturn_t intel_xpcie_interrupt(int irq, void *args)
+{
+	struct xpcie_dev *xdev = args;
+	struct xpcie *xpcie;
+
+	xpcie = &xdev->xpcie;
+
+	if (intel_xpcie_get_doorbell(xpcie, FROM_DEVICE, DATA_SENT)) {
+		intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_SENT, 0);
+		intel_xpcie_start_rx(xpcie, 0);
+	}
+	if (intel_xpcie_get_doorbell(xpcie, FROM_DEVICE, DATA_RECEIVED)) {
+		intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_RECEIVED, 0);
+		if (xpcie->tx_pending)
+			intel_xpcie_start_tx(xpcie, 0);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int intel_xpcie_events_init(struct xpcie *xpcie)
+{
+	xpcie->rx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+					       WQ_MEM_RECLAIM | WQ_HIGHPRI);
+	if (!xpcie->rx_wq) {
+		dev_err(xpcie_to_dev(xpcie), "failed to allocate workqueue\n");
+		return -ENOMEM;
+	}
+
+	xpcie->tx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+					       WQ_MEM_RECLAIM | WQ_HIGHPRI);
+	if (!xpcie->tx_wq) {
+		dev_err(xpcie_to_dev(xpcie), "failed to allocate workqueue\n");
+		destroy_workqueue(xpcie->rx_wq);
+		return -ENOMEM;
+	}
+
+	INIT_DELAYED_WORK(&xpcie->rx_event, intel_xpcie_rx_event_handler);
+	INIT_DELAYED_WORK(&xpcie->tx_event, intel_xpcie_tx_event_handler);
+
+	return 0;
+}
+
+static void intel_xpcie_events_cleanup(struct xpcie *xpcie)
+{
+	cancel_delayed_work_sync(&xpcie->rx_event);
+	cancel_delayed_work_sync(&xpcie->tx_event);
+
+	destroy_workqueue(xpcie->rx_wq);
+	destroy_workqueue(xpcie->tx_wq);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie)
+{
+	struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+	int status, rc;
+
+	status = intel_xpcie_get_device_status(xpcie);
+	if (status != XPCIE_STATUS_RUN) {
+		dev_err(&xdev->pci->dev,
+			"device status not RUNNING (%d)\n", status);
+		rc = -EBUSY;
+		return rc;
+	}
+
+	if (intel_xpcie_ioread8(xpcie->mmio + XPCIE_MMIO_LEGACY_A0))
+		xpcie->legacy_a0 = true;
+
+	rc = intel_xpcie_events_init(xpcie);
+	if (rc)
+		return rc;
+
+	rc = intel_xpcie_discover_txrx(xpcie);
+	if (rc)
+		goto error_txrx;
+
+	intel_xpcie_interfaces_init(xpcie);
+
+	rc = intel_xpcie_pci_register_irq(xdev, &intel_xpcie_interrupt);
+	if (rc)
+		goto error_txrx;
+
+	intel_xpcie_set_host_status(xpcie, XPCIE_STATUS_RUN);
+
+	return 0;
+
+error_txrx:
+	intel_xpcie_events_cleanup(xpcie);
+	intel_xpcie_set_host_status(xpcie, XPCIE_STATUS_ERROR);
+
+	return rc;
+}
+
+void intel_xpcie_core_cleanup(struct xpcie *xpcie)
+{
+	if (xpcie->status == XPCIE_STATUS_RUN) {
+		intel_xpcie_set_host_status(xpcie, XPCIE_STATUS_UNINIT);
+		intel_xpcie_events_cleanup(xpcie);
+		intel_xpcie_interfaces_cleanup(xpcie);
+		intel_xpcie_txrx_cleanup(xpcie);
+	}
+}
+
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer, size_t *length,
+			  uint32_t timeout_ms)
+{
+	long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+	struct xpcie_interface *inf = &xpcie->interfaces[0];
+	unsigned long jiffies_start = jiffies;
+	struct xpcie_buf_desc *bd;
+	size_t remaining, len;
+	long jiffies_passed = 0;
+	int ret;
+
+	if (*length == 0)
+		return -EINVAL;
+
+	if (xpcie->status != XPCIE_STATUS_RUN)
+		return -ENODEV;
+
+	len = *length;
+	remaining = len;
+	*length = 0;
+
+	ret = mutex_lock_interruptible(&inf->rlock);
+	if (ret < 0)
+		return -EINTR;
+
+	do {
+		while (!inf->data_avail) {
+			mutex_unlock(&inf->rlock);
+			if (timeout_ms == 0) {
+				ret = wait_event_interruptible(inf->rx_waitq,
+							       inf->data_avail);
+			} else {
+				ret =
+			wait_event_interruptible_timeout(inf->rx_waitq,
+							 inf->data_avail,
+							 jiffies_timeout -
+							 jiffies_passed);
+				if (ret == 0)
+					return -ETIME;
+			}
+			if (ret < 0 || xpcie->stop_flag)
+				return -EINTR;
+
+			ret = mutex_lock_interruptible(&inf->rlock);
+			if (ret < 0)
+				return -EINTR;
+		}
+
+		bd = (inf->partial_read) ? inf->partial_read :
+					   intel_xpcie_list_get(&inf->read);
+		while (remaining && bd) {
+			size_t bcopy;
+
+			bcopy = min(remaining, bd->length);
+			memcpy(buffer, bd->data, bcopy);
+
+			buffer += bcopy;
+			remaining -= bcopy;
+			bd->data += bcopy;
+			bd->length -= bcopy;
+
+			if (bd->length == 0) {
+				intel_xpcie_free_rx_bd(xpcie, bd);
+				bd = intel_xpcie_list_get(&inf->read);
+			}
+		}
+
+		/* save for next time */
+		inf->partial_read = bd;
+
+		if (!bd)
+			inf->data_avail = false;
+
+		*length = len - remaining;
+
+		jiffies_passed = (long)jiffies - (long)jiffies_start;
+	} while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+				   timeout_ms == 0));
+
+	mutex_unlock(&inf->rlock);
+
+	return 0;
+}
+
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer, size_t *length,
+			   uint32_t timeout_ms)
+{
+	long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+	struct xpcie_interface *inf = &xpcie->interfaces[0];
+	unsigned long jiffies_start = jiffies;
+	struct xpcie_buf_desc *bd, *head;
+	long jiffies_passed = 0;
+	size_t remaining, len;
+	int ret;
+
+	if (*length == 0)
+		return -EINVAL;
+
+	if (xpcie->status != XPCIE_STATUS_RUN)
+		return -ENODEV;
+
+	len = *length;
+	remaining = len;
+	*length = 0;
+
+	ret = mutex_lock_interruptible(&xpcie->wlock);
+	if (ret < 0)
+		return -EINTR;
+
+	do {
+		bd = intel_xpcie_alloc_tx_bd(xpcie);
+		head = bd;
+		while (!head) {
+			mutex_unlock(&xpcie->wlock);
+			if (timeout_ms == 0) {
+				ret =
+				wait_event_interruptible(xpcie->tx_waitq,
+							 !xpcie->no_tx_buffer);
+			} else {
+				ret =
+			wait_event_interruptible_timeout(xpcie->tx_waitq,
+							 !xpcie->no_tx_buffer,
+							 jiffies_timeout -
+							 jiffies_passed);
+				if (ret == 0)
+					return -ETIME;
+			}
+			if (ret < 0 || xpcie->stop_flag)
+				return -EINTR;
+
+			ret = mutex_lock_interruptible(&xpcie->wlock);
+			if (ret < 0)
+				return -EINTR;
+
+			bd = intel_xpcie_alloc_tx_bd(xpcie);
+			head = bd;
+		}
+
+		while (remaining && bd) {
+			size_t bcopy;
+
+			bcopy = min(bd->length, remaining);
+			memcpy(bd->data, buffer, bcopy);
+
+			buffer += bcopy;
+			remaining -= bcopy;
+			bd->length = bcopy;
+			bd->interface = inf->id;
+
+			if (remaining) {
+				bd->next = intel_xpcie_alloc_tx_bd(xpcie);
+				bd = bd->next;
+			}
+		}
+
+		intel_xpcie_list_put(&xpcie->write, head);
+		intel_xpcie_start_tx(xpcie, 0);
+
+		*length = len - remaining;
+
+		jiffies_passed = (long)jiffies - (long)jiffies_start;
+	} while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+				   timeout_ms == 0));
+
+	mutex_unlock(&xpcie->wlock);
+
+	return 0;
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.c b/drivers/misc/xlink-pcie/remote_host/pci.c
index 0ca0755b591f..f92a78f41324 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.c
+++ b/drivers/misc/xlink-pcie/remote_host/pci.c
@@ -208,10 +208,8 @@ static void xpcie_device_poll(struct work_struct *work)
 {
 	struct xpcie_dev *xdev = container_of(work, struct xpcie_dev,
 					      wait_event.work);
-	u32 dev_status = intel_xpcie_ioread32(xdev->xpcie.mmio +
-					      XPCIE_MMIO_DEV_STATUS);
 
-	if (dev_status < XPCIE_STATUS_RUN)
+	if (intel_xpcie_get_device_status(&xdev->xpcie) < XPCIE_STATUS_RUN)
 		schedule_delayed_work(&xdev->wait_event,
 				      msecs_to_jiffies(100));
 	else
@@ -224,9 +222,10 @@ static int intel_xpcie_pci_prepare_dev_reset(struct xpcie_dev *xdev,
 	if (mutex_lock_interruptible(&xdev->lock))
 		return -EINTR;
 
-	if (xdev->core_irq_callback)
+	if (xdev->core_irq_callback) {
 		xdev->core_irq_callback = NULL;
-
+		intel_xpcie_core_cleanup(&xdev->xpcie);
+	}
 	xdev->xpcie.status = XPCIE_STATUS_OFF;
 	if (notify)
 		intel_xpcie_pci_raise_irq(xdev, DEV_EVENT, REQUEST_RESET);
@@ -326,6 +325,8 @@ int intel_xpcie_pci_cleanup(struct xpcie_dev *xdev)
 	xdev->core_irq_callback = NULL;
 	intel_xpcie_pci_irq_cleanup(xdev);
 
+	intel_xpcie_core_cleanup(&xdev->xpcie);
+
 	intel_xpcie_pci_unmap_bar(xdev);
 	pci_release_regions(xdev->pci);
 	pci_disable_device(xdev->pci);
@@ -359,6 +360,7 @@ int intel_xpcie_pci_raise_irq(struct xpcie_dev *xdev,
 {
 	u16 pci_status;
 
+	intel_xpcie_set_doorbell(&xdev->xpcie, TO_DEVICE, type, value);
 	pci_read_config_word(xdev->pci, PCI_STATUS, &pci_status);
 
 	return 0;
@@ -445,7 +447,43 @@ int intel_xpcie_pci_connect_device(u32 id)
 		goto connect_cleanup;
 	}
 
+	rc = intel_xpcie_core_init(&xdev->xpcie);
+	if (rc < 0) {
+		dev_err(&xdev->pci->dev, "failed to sync with device\n");
+		goto connect_cleanup;
+	}
+
 connect_cleanup:
 	mutex_unlock(&xdev->lock);
 	return rc;
 }
+
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout)
+{
+	struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(id);
+
+	if (!xdev)
+		return -ENODEV;
+
+	return intel_xpcie_core_read(&xdev->xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout)
+{
+	struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(id);
+
+	if (!xdev)
+		return -ENODEV;
+
+	return intel_xpcie_core_write(&xdev->xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_reset_device(u32 id)
+{
+	struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(id);
+
+	if (!xdev)
+		return -ENODEV;
+
+	return intel_xpcie_pci_prepare_dev_reset(xdev, true);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (13 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 14/34] misc: xlink-pcie: rh: Add core communication logic mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-20 17:59   ` Greg KH
  2021-01-08 21:25 ` [PATCH v2 16/34] misc: xlink-pcie: Add asynchronous event notification support for XLink mgross
                   ` (18 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala

From: Srikanth Thokala <srikanth.thokala@intel.com>

Provide an interface for the XLink layer to interact with the XLink PCIe
transport layer on both the local host and the remote host.
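
A minimal sketch of how a kernel client might drive this interface
follows. The demo module and the 8-entry device list size are
assumptions; the xlink_pcie_*() calls and the _XLINK_DEV_READY value are
the ones added by this series.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/xlink_drv_inf.h>

/*
 * Assumption: 8 entries are enough for this sketch. The API takes no
 * length argument, so a real caller must size the array for the worst
 * case it expects.
 */
#define DEMO_MAX_DEVICES 8

static int __init xlink_pcie_demo_init(void)
{
	u32 ids[DEMO_MAX_DEVICES] = { 0 };
	u32 i, num = 0, status;
	char name[32];
	int rc;

	rc = xlink_pcie_get_device_list(ids, &num);
	if (rc)
		return rc;

	if (num > DEMO_MAX_DEVICES)
		num = DEMO_MAX_DEVICES;

	for (i = 0; i < num; i++) {
		if (xlink_pcie_get_device_name(ids[i], name, sizeof(name)))
			continue;
		if (xlink_pcie_get_device_status(ids[i], &status))
			continue;

		pr_info("xlink-pcie device %s (0x%x), status %u\n",
			name, ids[i], status);

		/* Attempt to connect devices reported as ready. */
		if (status == _XLINK_DEV_READY)
			xlink_pcie_connect(ids[i]);
	}

	return 0;
}

static void __exit xlink_pcie_demo_exit(void)
{
}

module_init(xlink_pcie_demo_init);
module_exit(xlink_pcie_demo_exit);
MODULE_LICENSE("GPL");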

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 drivers/misc/xlink-pcie/common/interface.c   | 109 +++++++++++++++++++
 drivers/misc/xlink-pcie/local_host/Makefile  |   1 +
 drivers/misc/xlink-pcie/remote_host/Makefile |   1 +
 3 files changed, 111 insertions(+)
 create mode 100644 drivers/misc/xlink-pcie/common/interface.c

diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
new file mode 100644
index 000000000000..56c1d9ed9d8f
--- /dev/null
+++ b/drivers/misc/xlink-pcie/common/interface.c
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include <linux/xlink_drv_inf.h>
+
+#include "core.h"
+#include "xpcie.h"
+
+/* Define xpcie driver interface API */
+int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
+{
+	if (!sw_device_id_list || !num_devices)
+		return -EINVAL;
+
+	*num_devices = intel_xpcie_get_device_num(sw_device_id_list);
+
+	return 0;
+}
+EXPORT_SYMBOL(xlink_pcie_get_device_list);
+
+int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
+			       size_t name_size)
+{
+	if (!device_name)
+		return -EINVAL;
+
+	return intel_xpcie_get_device_name_by_id(sw_device_id,
+						 device_name, name_size);
+}
+EXPORT_SYMBOL(xlink_pcie_get_device_name);
+
+int xlink_pcie_get_device_status(u32 sw_device_id, u32 *device_status)
+{
+	u32 status;
+	int rc;
+
+	if (!device_status)
+		return -EINVAL;
+
+	rc = intel_xpcie_get_device_status_by_id(sw_device_id, &status);
+	if (rc)
+		return rc;
+
+	switch (status) {
+	case XPCIE_STATUS_READY:
+	case XPCIE_STATUS_RUN:
+		*device_status = _XLINK_DEV_READY;
+		break;
+	case XPCIE_STATUS_ERROR:
+		*device_status = _XLINK_DEV_ERROR;
+		break;
+	case XPCIE_STATUS_RECOVERY:
+		*device_status = _XLINK_DEV_RECOVERY;
+		break;
+	case XPCIE_STATUS_OFF:
+		*device_status = _XLINK_DEV_OFF;
+		break;
+	default:
+		*device_status = _XLINK_DEV_BUSY;
+		break;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(xlink_pcie_get_device_status);
+
+int xlink_pcie_boot_device(u32 sw_device_id, const char *binary_name)
+{
+	return 0;
+}
+EXPORT_SYMBOL(xlink_pcie_boot_device);
+
+int xlink_pcie_connect(u32 sw_device_id)
+{
+	return intel_xpcie_pci_connect_device(sw_device_id);
+}
+EXPORT_SYMBOL(xlink_pcie_connect);
+
+int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
+		    u32 timeout)
+{
+	if (!data || !size)
+		return -EINVAL;
+
+	return intel_xpcie_pci_read(sw_device_id, data, size, timeout);
+}
+EXPORT_SYMBOL(xlink_pcie_read);
+
+int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
+		     u32 timeout)
+{
+	if (!data || !size)
+		return -EINVAL;
+
+	return intel_xpcie_pci_write(sw_device_id, data, size, timeout);
+}
+EXPORT_SYMBOL(xlink_pcie_write);
+
+int xlink_pcie_reset_device(u32 sw_device_id)
+{
+	return intel_xpcie_pci_reset_device(sw_device_id);
+}
+EXPORT_SYMBOL(xlink_pcie_reset_device);
diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 65df94c7e860..16bb1e7345ac 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -3,3 +3,4 @@ mxlk_ep-objs := epf.o
 mxlk_ep-objs += dma.o
 mxlk_ep-objs += core.o
 mxlk_ep-objs += ../common/util.o
+mxlk_ep-objs += ../common/interface.o
diff --git a/drivers/misc/xlink-pcie/remote_host/Makefile b/drivers/misc/xlink-pcie/remote_host/Makefile
index e8074dbb1161..088e121ad46e 100644
--- a/drivers/misc/xlink-pcie/remote_host/Makefile
+++ b/drivers/misc/xlink-pcie/remote_host/Makefile
@@ -3,3 +3,4 @@ mxlk-objs := main.o
 mxlk-objs += pci.o
 mxlk-objs += core.o
 mxlk-objs += ../common/util.o
+mxlk-objs += ../common/interface.o
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 16/34] misc: xlink-pcie: Add asynchronous event notification support for XLink
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (14 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 17/34] xlink-ipc: Add xlink ipc device tree bindings mgross
                   ` (17 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala

From: Srikanth Thokala <srikanth.thokala@intel.com>

Add support to notify the XLink layer upon PCIe link UP/DOWN events.
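
A minimal sketch of a client of the new notification API follows. The
demo module and the placeholder sw_device_id are assumptions;
xlink_pcie_register_device_event(), xlink_pcie_unregister_device_event()
and the event type enum are the ones introduced here.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/xlink_drv_inf.h>

/*
 * Placeholder sw_device_id; a real client would discover it via
 * xlink_pcie_get_device_list() first.
 */
#define DEMO_SW_DEVICE_ID 0x1

static int demo_event_notify(u32 sw_device_id,
			     enum xlink_device_event_type event_type)
{
	if (event_type == NOTIFY_DEVICE_CONNECTED)
		pr_info("xlink-pcie 0x%x: link up\n", sw_device_id);
	else if (event_type == NOTIFY_DEVICE_DISCONNECTED)
		pr_info("xlink-pcie 0x%x: link down\n", sw_device_id);

	return 0;
}

static int __init demo_init(void)
{
	/* The callback stays registered until explicitly unregistered. */
	return xlink_pcie_register_device_event(DEMO_SW_DEVICE_ID,
						demo_event_notify);
}

static void __exit demo_exit(void)
{
	xlink_pcie_unregister_device_event(DEMO_SW_DEVICE_ID);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");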

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
---
 drivers/misc/xlink-pcie/common/core.h      |  3 ++
 drivers/misc/xlink-pcie/common/interface.c | 17 ++++++++++
 drivers/misc/xlink-pcie/local_host/core.c  | 11 +++++++
 drivers/misc/xlink-pcie/remote_host/main.c |  3 ++
 drivers/misc/xlink-pcie/remote_host/pci.c  | 36 ++++++++++++++++++++++
 drivers/misc/xlink-pcie/remote_host/pci.h  |  3 ++
 include/linux/xlink_drv_inf.h              | 12 ++++++++
 7 files changed, 85 insertions(+)

diff --git a/drivers/misc/xlink-pcie/common/core.h b/drivers/misc/xlink-pcie/common/core.h
index eec8566c19d9..34b6c268aac5 100644
--- a/drivers/misc/xlink-pcie/common/core.h
+++ b/drivers/misc/xlink-pcie/common/core.h
@@ -241,4 +241,7 @@ int intel_xpcie_pci_connect_device(u32 id);
 int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout);
 int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout);
 int intel_xpcie_pci_reset_device(u32 id);
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+					  xlink_device_event event_notif_fn);
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id);
 #endif /* XPCIE_CORE_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
index 56c1d9ed9d8f..4ad291ff97c8 100644
--- a/drivers/misc/xlink-pcie/common/interface.c
+++ b/drivers/misc/xlink-pcie/common/interface.c
@@ -107,3 +107,20 @@ int xlink_pcie_reset_device(u32 sw_device_id)
 	return intel_xpcie_pci_reset_device(sw_device_id);
 }
 EXPORT_SYMBOL(xlink_pcie_reset_device);
+
+int xlink_pcie_register_device_event(u32 sw_device_id,
+				     xlink_device_event event_notif_fn)
+{
+	if (!event_notif_fn)
+		return -EINVAL;
+
+	return intel_xpcie_pci_register_device_event(sw_device_id,
+						     event_notif_fn);
+}
+EXPORT_SYMBOL(xlink_pcie_register_device_event);
+
+int xlink_pcie_unregister_device_event(u32 sw_device_id)
+{
+	return intel_xpcie_pci_unregister_device_event(sw_device_id);
+}
+EXPORT_SYMBOL(xlink_pcie_unregister_device_event);
diff --git a/drivers/misc/xlink-pcie/local_host/core.c b/drivers/misc/xlink-pcie/local_host/core.c
index 51fa25259515..a343e30d8b45 100644
--- a/drivers/misc/xlink-pcie/local_host/core.c
+++ b/drivers/misc/xlink-pcie/local_host/core.c
@@ -806,3 +806,14 @@ int intel_xpcie_pci_reset_device(u32 id)
 {
 	return 0;
 }
+
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+					  xlink_device_event event_notif_fn)
+{
+	return 0;
+}
+
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id)
+{
+	return 0;
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/main.c b/drivers/misc/xlink-pcie/remote_host/main.c
index d88257dd2585..fd31ee3c153b 100644
--- a/drivers/misc/xlink-pcie/remote_host/main.c
+++ b/drivers/misc/xlink-pcie/remote_host/main.c
@@ -55,6 +55,8 @@ static int intel_xpcie_probe(struct pci_dev *pdev,
 	if (new_device)
 		intel_xpcie_list_add_device(xdev);
 
+	intel_xpcie_pci_notify_event(xdev, NOTIFY_DEVICE_CONNECTED);
+
 	return ret;
 }
 
@@ -64,6 +66,7 @@ static void intel_xpcie_remove(struct pci_dev *pdev)
 
 	if (xdev) {
 		intel_xpcie_pci_cleanup(xdev);
+		intel_xpcie_pci_notify_event(xdev, NOTIFY_DEVICE_DISCONNECTED);
 		intel_xpcie_remove_device(xdev);
 	}
 }
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.c b/drivers/misc/xlink-pcie/remote_host/pci.c
index f92a78f41324..0046bff5f604 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.c
+++ b/drivers/misc/xlink-pcie/remote_host/pci.c
@@ -8,6 +8,7 @@
  ****************************************************************************/
 
 #include <linux/mutex.h>
+#include <linux/pci.h>
 #include <linux/sched.h>
 #include <linux/wait.h>
 #include <linux/workqueue.h>
@@ -487,3 +488,38 @@ int intel_xpcie_pci_reset_device(u32 id)
 
 	return intel_xpcie_pci_prepare_dev_reset(xdev, true);
 }
+
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+					  xlink_device_event event_notif_fn)
+{
+	struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(sw_device_id);
+
+	if (!xdev)
+		return -ENODEV;
+
+	xdev->event_fn = event_notif_fn;
+
+	return 0;
+}
+
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id)
+{
+	struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(sw_device_id);
+
+	if (!xdev)
+		return -ENODEV;
+
+	xdev->event_fn = NULL;
+
+	return 0;
+}
+
+void intel_xpcie_pci_notify_event(struct xpcie_dev *xdev,
+				  enum xlink_device_event_type event_type)
+{
+	if (event_type >= NUM_EVENT_TYPE)
+		return;
+
+	if (xdev->event_fn)
+		xdev->event_fn(xdev->devid, event_type);
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.h b/drivers/misc/xlink-pcie/remote_host/pci.h
index 72de3701f83a..a05dedf36a12 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.h
+++ b/drivers/misc/xlink-pcie/remote_host/pci.h
@@ -38,6 +38,7 @@ struct xpcie_dev {
 	irq_handler_t core_irq_callback;
 
 	struct xpcie xpcie;
+	xlink_device_event event_fn;
 };
 
 static inline struct device *xpcie_to_dev(struct xpcie *xpcie)
@@ -60,5 +61,7 @@ struct xpcie_dev *intel_xpcie_create_device(u32 sw_device_id,
 void intel_xpcie_remove_device(struct xpcie_dev *xdev);
 void intel_xpcie_list_add_device(struct xpcie_dev *xdev);
 void intel_xpcie_list_del_device(struct xpcie_dev *xdev);
+void intel_xpcie_pci_notify_event(struct xpcie_dev *xdev,
+				  enum xlink_device_event_type event_type);
 
 #endif /* XPCIE_PCI_HEADER_ */
diff --git a/include/linux/xlink_drv_inf.h b/include/linux/xlink_drv_inf.h
index ffe8f4c253e6..5ca0ae1ae2e3 100644
--- a/include/linux/xlink_drv_inf.h
+++ b/include/linux/xlink_drv_inf.h
@@ -44,6 +44,15 @@ enum _xlink_device_status {
 	_XLINK_DEV_READY
 };
 
+enum xlink_device_event_type {
+	NOTIFY_DEVICE_DISCONNECTED,
+	NOTIFY_DEVICE_CONNECTED,
+	NUM_EVENT_TYPE
+};
+
+typedef int (*xlink_device_event)(u32 sw_device_id,
+				  enum xlink_device_event_type event_type);
+
 int xlink_pcie_get_device_list(u32 *sw_device_id_list,
 			       u32 *num_devices);
 int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
@@ -57,4 +66,7 @@ int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
 int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
 		     u32 timeout);
 int xlink_pcie_reset_device(u32 sw_device_id);
+int xlink_pcie_register_device_event(u32 sw_device_id,
+				     xlink_device_event event_notif_fn);
+int xlink_pcie_unregister_device_event(u32 sw_device_id);
 #endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 17/34] xlink-ipc: Add xlink ipc device tree bindings
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (15 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 16/34] misc: xlink-pcie: Add asynchronous event notification support for XLink mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-10 17:18   ` Rob Herring
  2021-01-08 21:25 ` [PATCH v2 18/34] xlink-ipc: Add xlink ipc driver mgross
                   ` (16 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly, devicetree, Ryan Carnaghi

From: Seamus Kelly <seamus.kelly@intel.com>

Add device tree bindings for the xLink IPC driver, which enables xLink
to control and communicate with the VPU IP present on the Intel Keem Bay
SoC.
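
For context, here is a minimal sketch of how a platform driver probe
might consume these properties using the standard OF helpers. This is
not the actual xlink IPC driver; the driver name, fallback VPU name and
error handling are assumptions made only for illustration.

#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

static int demo_xlink_ipc_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	const char *name = "vpu";	/* assumed fallback for the sketch */
	u32 vpu_id = 0;
	int rc;

	rc = of_property_read_u32(np, "intel,keembay-vpu-ipc-id", &vpu_id);
	if (rc)
		return rc;

	of_property_read_string(np, "intel,keembay-vpu-ipc-name", &name);

	/* memory-region index 0 is the CSS region, index 1 the MSS region. */
	rc = of_reserved_mem_device_init_by_idx(&pdev->dev, np, 0);
	if (rc)
		return rc;

	dev_info(&pdev->dev, "xlink ipc sketch: vpu %u (%s)\n", vpu_id, name);

	return 0;
}

static const struct of_device_id demo_xlink_ipc_of_match[] = {
	{ .compatible = "intel,keembay-xlink-ipc" },
	{ }
};

static struct platform_driver demo_xlink_ipc_driver = {
	.probe = demo_xlink_ipc_probe,
	.driver = {
		.name = "demo-keembay-xlink-ipc",
		.of_match_table = demo_xlink_ipc_of_match,
	},
};
module_platform_driver(demo_xlink_ipc_driver);

MODULE_LICENSE("GPL");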

Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
Signed-off-by: Ryan Carnaghi <ryan.r.carnaghi@intel.com>
---
 .../misc/intel,keembay-xlink-ipc.yaml         | 49 +++++++++++++++++++
 1 file changed, 49 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml
new file mode 100644
index 000000000000..699e43c4cd40
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml
@@ -0,0 +1,49 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright (c) Intel Corporation. All rights reserved.
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink-ipc.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Keem Bay xlink IPC
+
+maintainers:
  - Seamus Kelly <seamus.kelly@intel.com>
+
+description: |
+  The Keem Bay xlink IPC driver enables the communication/control sub-system
+  for internal IPC communications within the Intel Keem Bay SoC.
+
+properties:
+  compatible:
+    oneOf:
+      - items:
+        - const: intel,keembay-xlink-ipc
+
+  memory-region:
+    items:
+      - description: reference to the CSS xlink IPC reserved memory region.
+      - description: reference to the MSS xlink IPC reserved memory region.
+
+  intel,keembay-vpu-ipc-id:
+    $ref: "/schemas/types.yaml#/definitions/uint32"
+    description: The numeric ID identifying the VPU within the xLink stack.
+
+  intel,keembay-vpu-ipc-name:
+    $ref: "/schemas/types.yaml#/definitions/string"
+    description: User-friendly name for the VPU within the xLink stack.
+
+  intel,keembay-vpu-ipc:
+    $ref: "/schemas/types.yaml#/definitions/phandle"
+    description: reference to the corresponding intel,keembay-vpu-ipc node.
+
+examples:
+  - |
+    xlink-ipc {
+        compatible = "intel,keembay-xlink-ipc";
+        memory-region = <&css_xlink_reserved>,
+                        <&mss_xlink_reserved>;
+        intel,keembay-vpu-ipc-id = <0x0>;
+        intel,keembay-vpu-ipc-name = "vpu-slice-0";
+        intel,keembay-vpu-ipc = <&vpuipc>;
+    };
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 18/34] xlink-ipc: Add xlink ipc driver
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (16 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 17/34] xlink-ipc: Add xlink ipc device tree bindings mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings mgross
                   ` (15 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly, Derek Kiernan, linux-doc, Ryan Carnaghi

From: Seamus Kelly <seamus.kelly@intel.com>

Add the xLink IPC driver, which interfaces the xLink Core driver with
the Keem Bay VPU IPC driver, thus enabling xLink to control and
communicate with the VPU IP present on the Intel Keem Bay SoC.

Specifically the driver enables xLink Core to:

* Boot / Reset the VPU IP
* Register for VPU IP event notifications (device connected, device
  disconnected, WDT event)
* Query the status of the VPU IP (OFF, BUSY, READY, ERROR, RECOVERY)
* Exchange data with the VPU IP, using the Keem Bay IPC mechanism
  - Including the ability to send 'volatile' data (i.e., small amounts of
    data, up to 128 bytes, that were not allocated in the CPU/VPU shared
    memory region); a usage sketch of this path follows below
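
As noted in the last bullet, the volatile-data path amounts to carving a
fixed 128-byte buffer out of a pre-reserved pool and copying the
caller's bytes into it before handing it to Keem Bay IPC. Below is a
minimal sketch of such a pool, assuming a simple round-robin index; it
is not the driver's actual allocator.

#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/string.h>

#define DEMO_BUF_SIZE	128U	/* matches the 128-byte volatile limit */

/* Illustrative pool; buf/buf_cnt point into a pre-reserved region and
 * the lock must have been initialized with spin_lock_init(). */
struct demo_buf_pool {
	void *buf;		/* start of the reserved pool area */
	size_t buf_cnt;		/* number of DEMO_BUF_SIZE buffers */
	size_t idx;		/* next buffer to hand out */
	spinlock_t lock;	/* protects idx */
};

/* Hand out the next buffer in round-robin order. */
static void *demo_pool_get(struct demo_buf_pool *pool)
{
	unsigned long flags;
	void *p;

	spin_lock_irqsave(&pool->lock, flags);
	p = pool->buf + pool->idx * DEMO_BUF_SIZE;
	pool->idx = (pool->idx + 1) % pool->buf_cnt;
	spin_unlock_irqrestore(&pool->lock, flags);

	return p;
}

/* Copy up to 128 bytes of caller data into a pool buffer. */
static int demo_copy_volatile(struct demo_buf_pool *pool,
			      const void *data, size_t size, void **out)
{
	void *p;

	if (size > DEMO_BUF_SIZE)
		return -EINVAL;

	p = demo_pool_get(pool);
	memcpy(p, data, size);
	*out = p;

	return 0;
}

The round-robin policy assumes buffers are consumed before they are
reused; the real driver may track its buffers differently.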

Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Derek Kiernan <derek.kiernan@xilinx.com>
Cc: Dragan Cvetic <dragan.cvetic@xilinx.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-doc@vger.kernel.org
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
Signed-off-by: Ryan Carnaghi <ryan.r.carnaghi@intel.com>
---
 Documentation/vpu/index.rst        |   1 +
 Documentation/vpu/xlink-ipc.rst    |  51 ++
 MAINTAINERS                        |   6 +
 drivers/misc/Kconfig               |   1 +
 drivers/misc/Makefile              |   1 +
 drivers/misc/xlink-ipc/Kconfig     |   7 +
 drivers/misc/xlink-ipc/Makefile    |   4 +
 drivers/misc/xlink-ipc/xlink-ipc.c | 878 +++++++++++++++++++++++++++++
 include/linux/xlink-ipc.h          |  48 ++
 9 files changed, 997 insertions(+)
 create mode 100644 Documentation/vpu/xlink-ipc.rst
 create mode 100644 drivers/misc/xlink-ipc/Kconfig
 create mode 100644 drivers/misc/xlink-ipc/Makefile
 create mode 100644 drivers/misc/xlink-ipc/xlink-ipc.c
 create mode 100644 include/linux/xlink-ipc.h

diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index 661cc700ee45..49c78bb65b83 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -15,3 +15,4 @@ This documentation contains information for the Intel VPU stack.
 
    vpu-stack-overview
    xlink-pcie
+   xlink-ipc
diff --git a/Documentation/vpu/xlink-ipc.rst b/Documentation/vpu/xlink-ipc.rst
new file mode 100644
index 000000000000..97ee62b10e93
--- /dev/null
+++ b/Documentation/vpu/xlink-ipc.rst
@@ -0,0 +1,51 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============================
+Kernel driver: xLink IPC driver
+===============================
+
+Supported chips:
+
+* | Intel Edge.AI Computer Vision platforms: Keem Bay
+  | Suffix: Bay
+  | Datasheet: (not yet publicly available)
+
+Introduction
+============
+
+The xLink IPC driver interfaces the xLink Core driver with the Keem Bay VPU IPC
+driver, thus enabling xLink to control and communicate with the VPU IP present
+on the Intel Keem Bay SoC.
+
+Specifically the driver enables xLink Core to:
+
+* Boot / Reset the VPU IP
+* Register for VPU IP event notifications (device connected, device disconnected,
+  WDT event)
+* Query the status of the VPU IP (OFF, BUSY, READY, ERROR, RECOVERY)
+* Exchange data with the VPU IP, using the Keem Bay IPC mechanism
+
+  * Including the ability to send 'volatile' data (i.e. small amounts of data,
+    up to 128 bytes, that were not allocated in the CPU/VPU shared memory region)
+
+Sending / Receiving 'volatile' data
+===================================
+
+Data to be exchanged with Keem Bay IPC needs to be allocated in the portion of
+DDR shared between the CPU and VPU.
+
+This can be impractical for small amounts of data that user code can allocate
+on the stack.
+
+To reduce the burden on user code, xLink Core provides special send / receive
+functions to send up to 128 bytes of 'volatile data', i.e., data that is not
+allocated in the shared memory and that might also disappear after the xLink
+API is called (e.g., because it was allocated on the stack).
+
+The xLink IPC driver implements support for transferring such 'volatile data'
+to the VPU using Keem Bay IPC. To this end, the driver reserves some memory in
+the shared memory region.
+
+When volatile data is to be sent, xLink IPC allocates a buffer from the
+reserved memory region and copies the volatile data to the buffer. The buffer
+is then transferred to the VPU using Keem Bay IPC.
diff --git a/MAINTAINERS b/MAINTAINERS
index df7b61900664..92693390c59d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1962,6 +1962,12 @@ F:	Documentation/devicetree/bindings/arm/intel,keembay.yaml
 F:	arch/arm64/boot/dts/intel/keembay-evm.dts
 F:	arch/arm64/boot/dts/intel/keembay-soc.dtsi
 
+ARM/INTEL XLINK IPC SUPPORT
+M:	Seamus Kelly <seamus.kelly@intel.com>
+M:	Mark Gross <mgross@linux.intel.com>
+S:	Supported
+F:	drivers/misc/xlink-ipc/
+
 ARM/INTEL KEEM BAY XLINK PCIE SUPPORT
 M:	Srikanth Thokala <srikanth.thokala@intel.com>
 M:	Mark Gross <mgross@linux.intel.com>
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index dfb98e444c6e..1f81ea915b95 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -482,4 +482,5 @@ source "drivers/misc/cardreader/Kconfig"
 source "drivers/misc/habanalabs/Kconfig"
 source "drivers/misc/uacce/Kconfig"
 source "drivers/misc/xlink-pcie/Kconfig"
+source "drivers/misc/xlink-ipc/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index d17621fc43d5..b51495a2f1e0 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -58,3 +58,4 @@ obj-$(CONFIG_UACCE)		+= uacce/
 obj-$(CONFIG_XILINX_SDFEC)	+= xilinx_sdfec.o
 obj-$(CONFIG_HISI_HIKEY_USB)	+= hisi_hikey_usb.o
 obj-y                           += xlink-pcie/
+obj-$(CONFIG_XLINK_IPC)		+= xlink-ipc/
diff --git a/drivers/misc/xlink-ipc/Kconfig b/drivers/misc/xlink-ipc/Kconfig
new file mode 100644
index 000000000000..6e5374c7d85c
--- /dev/null
+++ b/drivers/misc/xlink-ipc/Kconfig
@@ -0,0 +1,7 @@
+config XLINK_IPC
+	tristate "Support for XLINK IPC"
+	depends on KEEMBAY_VPU_IPC
+	help
+	  XLINK IPC enables the communication/control IPC Sub-System.
+
+	  Select M if you have an Intel SoC with a Vision Processing Unit (VPU).
diff --git a/drivers/misc/xlink-ipc/Makefile b/drivers/misc/xlink-ipc/Makefile
new file mode 100644
index 000000000000..f92c95525e23
--- /dev/null
+++ b/drivers/misc/xlink-ipc/Makefile
@@ -0,0 +1,4 @@
+#
+# Makefile for xlink IPC Linux driver
+#
+obj-$(CONFIG_XLINK_IPC) += xlink-ipc.o
diff --git a/drivers/misc/xlink-ipc/xlink-ipc.c b/drivers/misc/xlink-ipc/xlink-ipc.c
new file mode 100644
index 000000000000..c04b8f44f684
--- /dev/null
+++ b/drivers/misc/xlink-ipc/xlink-ipc.c
@@ -0,0 +1,878 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2020 Intel Corporation
+ *
+ */
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <linux/soc/intel/keembay-vpu-ipc.h>
+
+#include <linux/xlink-ipc.h>
+
+#define XLINK_IPC_MAX_DEVICE_NAME_SIZE	12
+
+/* used to extract fields from and create xlink sw device id */
+#define SW_DEVICE_ID_INTERFACE_SHIFT	24U
+#define SW_DEVICE_ID_INTERFACE_MASK	0x7
+#define SW_DEVICE_ID_INTERFACE_SMASK \
+		(SW_DEVICE_ID_INTERFACE_MASK << SW_DEVICE_ID_INTERFACE_SHIFT)
+#define SW_DEVICE_ID_INTERFACE_IPC_VALUE 0x0
+#define SW_DEVICE_ID_INTERFACE_IPC_SVALUE \
+		(SW_DEVICE_ID_INTERFACE_IPC_VALUE << SW_DEVICE_ID_INTERFACE_SHIFT)
+#define SW_DEVICE_ID_VPU_ID_SHIFT 1U
+#define SW_DEVICE_ID_VPU_ID_MASK 0x7
+#define SW_DEVICE_ID_VPU_ID_SMASK \
+		(SW_DEVICE_ID_VPU_ID_MASK << SW_DEVICE_ID_VPU_ID_SHIFT)
+#define GET_VPU_ID_FROM_SW_DEVICE_ID(id) \
+		(((id) >> SW_DEVICE_ID_VPU_ID_SHIFT) & SW_DEVICE_ID_VPU_ID_MASK)
+#define GET_SW_DEVICE_ID_FROM_VPU_ID(id) \
+		((((id) << SW_DEVICE_ID_VPU_ID_SHIFT) & SW_DEVICE_ID_VPU_ID_SMASK) \
+		| SW_DEVICE_ID_INTERFACE_IPC_SVALUE)
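+
+/*
+ * Example (derived from the masks above): GET_SW_DEVICE_ID_FROM_VPU_ID(1)
+ * yields 0x2, i.e. bits [3:1] carry the VPU ID and bits [26:24] carry the
+ * interface type (0x0 for IPC).
+ */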
+
+/* the maximum buffer size for volatile xlink operations */
+#define XLINK_MAX_BUF_SIZE 128U
+
+/* indices used to retrieve reserved memory from the dt */
+#define LOCAL_XLINK_IPC_BUFFER_IDX	0
+#define REMOTE_XLINK_IPC_BUFFER_IDX	1
+
+/* index used to retrieve the vpu ipc device phandle from the dt */
+#define VPU_IPC_DEVICE_PHANDLE_IDX	1
+
+/* the timeout (in ms) used to wait for the vpu ready message */
+#define XLINK_VPU_WAIT_FOR_READY_MS 3000
+
+/* xlink buffer memory region */
+struct xlink_buf_mem {
+	struct device *dev;	/* child device managing the memory region */
+	void *vaddr;		/* the virtual address of the memory region */
+	dma_addr_t dma_handle;	/* the physical address of the memory region */
+	size_t size;		/* the size of the memory region */
+};
+
+/* xlink buffer pool */
+struct xlink_buf_pool {
+	void *buf;		/* pointer to the start of pool area */
+	size_t buf_cnt;		/* pool size (i.e., number of buffers) */
+	size_t idx;		/* current index */
+	spinlock_t lock;	/* the lock protecting this pool */
+};
+
+/* xlink ipc device */
+struct xlink_ipc_dev {
+	struct platform_device *pdev;		/* pointer to platform device */
+	u32 vpu_id;				/* The VPU ID defined in the device tree */
+	u32 sw_device_id;			/* the sw device id */
+	const char *device_name;		/* the vpu device name */
+	struct xlink_buf_mem local_xlink_mem;	/* tx buffer memory region */
+	struct xlink_buf_mem remote_xlink_mem;	/* rx buffer memory region */
+	struct xlink_buf_pool xlink_buf_pool;	/* tx buffer pool */
+	struct device *vpu_dev;			/* pointer to vpu ipc device */
+	int (*callback)(u32 sw_device_id, u32 event);
+};
+
+/* Events that can be notified via callback, when registered. */
+enum xlink_vpu_event {
+	XLINK_VPU_NOTIFY_DISCONNECT = 0,
+	XLINK_VPU_NOTIFY_CONNECT,
+	XLINK_VPU_NOTIFY_MSS_WDT,
+	XLINK_VPU_NOTIFY_NCE_WDT,
+	NUM_EVENT_TYPE
+};
+
+#define VPU_ID 0
+#define VPU_DEVICE_NAME "vpu-0"
+#define VPU_SW_DEVICE_ID 0
+static struct xlink_ipc_dev *xlink_dev;
+
+/*
+ * Functions related to reserved-memory sub-devices.
+ */
+
+/*
+ * xlink_reserved_memory_remove() - Removes the reserved memory sub-devices.
+ *
+ * @xlink_dev: [in] The xlink ipc device with reserved memory sub-devices.
+ */
+static void xlink_reserved_memory_remove(struct xlink_ipc_dev *xlink_dev)
+{
+	device_unregister(xlink_dev->local_xlink_mem.dev);
+	device_unregister(xlink_dev->remote_xlink_mem.dev);
+}
+
+/*
+ * xlink_reserved_mem_release() - Reserved memory release callback function.
+ *
+ * @dev: [in] The reserved memory sub-device.
+ */
+static void xlink_reserved_mem_release(struct device *dev)
+{
+	of_reserved_mem_device_release(dev);
+}
+
+/*
+ * get_xlink_reserved_mem_size() - Gets the size of the reserved memory region.
+ *
+ * @dev: [in] The device the reserved memory region is allocated to.
+ * @idx: [in] The reserved memory region's index in the phandle table.
+ *
+ * Return: The reserved memory size, 0 on failure.
+ */
+static resource_size_t get_xlink_reserved_mem_size(struct device *dev, int idx)
+{
+	struct device_node *np;
+	struct resource mem;
+	int rc;
+
+	np = of_parse_phandle(dev->of_node, "memory-region", idx);
+	if (!np) {
+		dev_err(dev, "Couldn't find memory-region %d\n", idx);
+		return 0;
+	}
+
+	rc = of_address_to_resource(np, 0, &mem);
+	of_node_put(np);
+	if (rc) {
+		dev_err(dev, "Couldn't map address to resource %d\n", idx);
+		return 0;
+	}
+	return resource_size(&mem);
+}
+
+/*
+ * init_xlink_reserved_mem_dev() - Initialize a reserved memory sub-device.
+ *
+ * @dev:	[in] The parent device of the reserved memory sub-device.
+ * @name:	[in] The name to assign to the memory region.
+ * @idx:	[in] The reserved memory region index in the phandle table.
+ *
+ * Return: The initialized sub-device, NULL on failure.
+ */
+static struct device *init_xlink_reserved_mem_dev(struct device *dev,
+						  const char *name, int idx)
+{
+	struct device *child;
+	int rc;
+
+	child = devm_kzalloc(dev, sizeof(*child), GFP_KERNEL);
+	if (!child)
+		return NULL;
+
+	device_initialize(child);
+	dev_set_name(child, "%s:%s", dev_name(dev), name);
+	dev_dbg(dev, "dev_name %s, name %s\n", dev_name(dev), name);
+	child->parent = dev;
+	child->dma_mask = dev->dma_mask;
+	child->coherent_dma_mask = dev->coherent_dma_mask;
+	/* set up dma configuration using information from parent's dt node */
+	rc = of_dma_configure(child, dev->of_node, true);
+	if (rc)
+		return NULL;
+	child->release = xlink_reserved_mem_release;
+
+	rc = device_add(child);
+	if (rc)
+		goto err;
+	rc = of_reserved_mem_device_init_by_idx(child, dev->of_node, idx);
+	if (rc) {
+		dev_err(dev, "Couldn't get reserved memory with idx = %d, %d\n",
+			idx, rc);
+		device_del(child);
+		goto err;
+	}
+	return child;
+
+err:
+	put_device(child);
+	return NULL;
+}
+
+/*
+ * xlink_reserved_memory_init() - Initialize reserved memory for the device.
+ *
+ * @xlink_dev:	[in] The xlink ipc device the reserved memory is allocated to.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_reserved_memory_init(struct xlink_ipc_dev *xlink_dev)
+{
+	struct device *dev = &xlink_dev->pdev->dev;
+	struct xlink_buf_mem *lxm = &xlink_dev->local_xlink_mem;
+	struct xlink_buf_mem *rxm = &xlink_dev->remote_xlink_mem;
+
+	lxm->dev = init_xlink_reserved_mem_dev(dev, "xlink_local_reserved",
+					       LOCAL_XLINK_IPC_BUFFER_IDX);
+	if (!lxm->dev)
+		return -ENOMEM;
+
+	lxm->size = get_xlink_reserved_mem_size(dev, LOCAL_XLINK_IPC_BUFFER_IDX);
+
+	rxm->dev = init_xlink_reserved_mem_dev(dev, "xlink_remote_reserved",
+					       REMOTE_XLINK_IPC_BUFFER_IDX);
+	if (!rxm->dev) {
+		device_unregister(xlink_dev->local_xlink_mem.dev);
+		return -ENOMEM;
+	}
+
+	rxm->size = get_xlink_reserved_mem_size(dev, REMOTE_XLINK_IPC_BUFFER_IDX);
+
+	return 0;
+}
+
+/*
+ * xlink_reserved_memory_alloc() - Allocate reserved memory for the device.
+ *
+ * @xlink_dev:	[in] The xlink ipc device.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_reserved_memory_alloc(struct xlink_ipc_dev *xlink_dev)
+{
+	struct xlink_buf_mem *lxm = &xlink_dev->local_xlink_mem;
+	struct xlink_buf_mem *rxm = &xlink_dev->remote_xlink_mem;
+
+	lxm->vaddr = dmam_alloc_coherent(xlink_dev->local_xlink_mem.dev,
+					 xlink_dev->local_xlink_mem.size,
+					 &xlink_dev->local_xlink_mem.dma_handle,
+					 GFP_KERNEL);
+	if (!lxm->vaddr) {
+		dev_err(&xlink_dev->pdev->dev,
+			"Failed to allocate from local reserved memory.\n");
+		return -ENOMEM;
+	}
+	rxm->vaddr = dmam_alloc_coherent(xlink_dev->remote_xlink_mem.dev,
+					 xlink_dev->remote_xlink_mem.size,
+					 &xlink_dev->remote_xlink_mem.dma_handle,
+					 GFP_KERNEL);
+	if (!rxm->vaddr) {
+		dev_err(&xlink_dev->pdev->dev,
+			"Failed to allocate from remote reserved memory.\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*
+ * init_xlink_buf_pool() - Initialize the device's tx buffer pool.
+ *
+ * @xlink_dev:	[in] The xlink ipc device.
+ *
+ * Return: 0 on success.
+ */
+static int init_xlink_buf_pool(struct xlink_ipc_dev *xlink_dev)
+{
+	struct xlink_buf_mem *mem = &xlink_dev->local_xlink_mem;
+	struct xlink_buf_pool *xbufpool = &xlink_dev->xlink_buf_pool;
+
+	memset(mem->vaddr, 0, mem->size);
+	xbufpool->buf = mem->vaddr;
+	xbufpool->buf_cnt = mem->size / XLINK_MAX_BUF_SIZE;
+	xbufpool->idx = 0;
+	dev_info(&xlink_dev->pdev->dev, "xlink Buffer Pool size: %zX\n",
+		 xbufpool->buf_cnt);
+	spin_lock_init(&xbufpool->lock);
+
+	return 0;
+}
+
+/*
+ * xlink_phys_to_virt() - Convert an xlink physical address to a virtual one.
+ *
+ * @xlink_mem:	[in] The memory region where the physical address is located.
+ * @paddr:		[in] The physical address to convert to a virtual one.
+ *
+ * Return:		The corresponding virtual address, or NULL if the
+ *				physical address is not in the expected memory
+ *				range.
+ */
+static void *xlink_phys_to_virt(const struct xlink_buf_mem *xlink_mem,
+				u32 paddr)
+{
+	if (unlikely(paddr < xlink_mem->dma_handle) ||
+	    paddr >= (xlink_mem->dma_handle + xlink_mem->size))
+		return NULL;
+
+	return xlink_mem->vaddr + (paddr - xlink_mem->dma_handle);
+}
+
+/*
+ * xlink_virt_to_phys() - Convert an xlink virtual address to a physical one.
+ *
+ * @xlink_mem:	[in]  The memory region where the physical address is located.
+ * @vaddr:		[in]  The virtual address to convert to a physical one.
+ * @paddr:		[out] Where to store the computed physical address.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_virt_to_phys(struct xlink_buf_mem *xlink_mem, void *vaddr,
+			      u32 *paddr)
+{
+	if (unlikely((xlink_mem->dma_handle + xlink_mem->size) > 0xFFFFFFFF))
+		return -EINVAL;
+	if (unlikely(vaddr < xlink_mem->vaddr ||
+		     vaddr >= (xlink_mem->vaddr + xlink_mem->size)))
+		return -EINVAL;
+	*paddr = xlink_mem->dma_handle + (vaddr - xlink_mem->vaddr);
+
+	return 0;
+}
+
+/*
+ * get_next_xlink_buf() - Get next xlink buffer from an xlink device's pool.
+ *
+ * @xlink_dev:	[in]  The xlink ipc device to get a buffer from.
+ * @buf:		[out] Where to store the reference to the next buffer.
+ * @size:		[in]  The size of the buffer to get.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int get_next_xlink_buf(struct xlink_ipc_dev *xlink_dev, void **buf,
+			      int size)
+{
+	struct xlink_buf_pool *pool;
+	unsigned long flags;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	if (size > XLINK_MAX_BUF_SIZE)
+		return -EINVAL;
+
+	pool = &xlink_dev->xlink_buf_pool;
+
+	spin_lock_irqsave(&pool->lock, flags);
+	if (pool->idx == pool->buf_cnt) {
+		/* reached end of buffers - wrap around */
+		pool->idx = 0;
+	}
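+	/*
+	 * Note: buffers are handed out round-robin and reused once the index
+	 * wraps; the pool does not track whether a previously returned buffer
+	 * is still in flight.
+	 */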
+	*buf = pool->buf + (pool->idx * XLINK_MAX_BUF_SIZE);
+	pool->idx++;
+	spin_unlock_irqrestore(&pool->lock, flags);
+	return 0;
+}
+
+/*
+ * Functions related to the vpu ipc device reference.
+ */
+
+/*
+ * vpu_ipc_device_put() - Release the vpu ipc device held by the xlink device.
+ *
+ * @xlink_dev:	[in] The xlink ipc device.
+ */
+static void vpu_ipc_device_put(struct xlink_ipc_dev *xlink_dev)
+{
+	put_device(xlink_dev->vpu_dev);
+}
+
+/*
+ * vpu_ipc_device_get() - Get the vpu ipc device reference for the xlink device.
+ *
+ * @xlink_dev:	[in] The xlink ipc device.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int vpu_ipc_device_get(struct xlink_ipc_dev *xlink_dev)
+{
+	struct device *dev = &xlink_dev->pdev->dev;
+	struct platform_device *pdev;
+	struct device_node *np;
+
+	np = of_parse_phandle(dev->of_node, "intel,keembay-vpu-ipc", 0);
+	if (!np)
+		return -ENODEV;
+
+	pdev = of_find_device_by_node(np);
+	if (!pdev) {
+		dev_info(dev, "IPC device not probed\n");
+		of_node_put(np);
+		return -EPROBE_DEFER;
+	}
+
+	xlink_dev->vpu_dev = get_device(&pdev->dev);
+	of_node_put(np);
+
+	dev_info(dev, "Using IPC device: %s\n", dev_name(xlink_dev->vpu_dev));
+	return 0;
+}
+
+/*
+ * xlink platform api - ipc interface functions
+ */
+
+/*
+ * xlink_ipc_connect() - platform connect interface.
+ *
+ * @sw_device_id:	[in]  The sw device id of the device to connect to.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_connect(u32 sw_device_id)
+{
+	if (!xlink_dev)
+		return -ENODEV;
+
+	return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_connect);
+
+/*
+ * xlink_ipc_write() - platform write interface.
+ *
+ * @sw_device_id:	[in]     The sw device id of the device to write to.
+ * @data:			[in]     The data buffer to write.
+ * @size:			[in-out] The amount of data to write/written.
+ * @timeout:		[in]     The time (in ms) to wait before timing out.
+ * @context:		[in]     The ipc operation context.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_write(u32 sw_device_id, void *data, size_t * const size,
+		    u32 timeout, void *context)
+{
+	struct xlink_ipc_context *ctx = context;
+	void *vaddr = NULL;
+	u32 paddr;
+	int rc;
+
+	if (!ctx)
+		return -EINVAL;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	if (ctx->is_volatile) {
+		rc = get_next_xlink_buf(xlink_dev, &vaddr, XLINK_MAX_BUF_SIZE);
+		if (rc)
+			return rc;
+		memcpy(vaddr, data, *size);
+		rc = xlink_virt_to_phys(&xlink_dev->local_xlink_mem, vaddr,
+					&paddr);
+		if (rc)
+			return rc;
+	} else {
+		paddr = *(u32 *)data;
+	}
+	rc = intel_keembay_vpu_ipc_send(xlink_dev->vpu_dev,
+					KMB_VPU_IPC_NODE_LEON_MSS, ctx->chan,
+					paddr, *size);
+
+	return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_write);
+
+/*
+ * xlink_ipc_read() - platform read interface.
+ *
+ * @sw_device_id:	[in]     The sw device id of the device to read from.
+ * @data:			[out]    The data buffer to read into.
+ * @size:			[in-out] The amount of data to read/was read.
+ * @timeout:		[in]     The time (in ms) to wait before timing out.
+ * @context:		[in]     The ipc operation context.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_read(u32 sw_device_id, void *data, size_t * const size,
+		   u32 timeout, void *context)
+{
+	struct xlink_ipc_context *ctx = context;
+	u32 addr = 0;
+	void *vaddr;
+	int rc;
+
+	if (!ctx)
+		return -EINVAL;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	rc = intel_keembay_vpu_ipc_recv(xlink_dev->vpu_dev,
+					KMB_VPU_IPC_NODE_LEON_MSS, ctx->chan,
+					&addr, size, timeout);
+
+	if (ctx->is_volatile) {
+		vaddr = xlink_phys_to_virt(&xlink_dev->remote_xlink_mem, addr);
+		if (vaddr)
+			memcpy(data, vaddr, *size);
+		else
+			return -ENXIO;
+	} else {
+		*(u32 *)data = addr;
+	}
+	return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_read);
+
+/*
+ * xlink_ipc_get_device_list() - platform get device list interface.
+ *
+ * @sw_device_id_list:	[out]  The list of devices found.
+ * @num_devices:		[out]  The number of devices found.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
+{
+	int i = 0;
+
+	if (!sw_device_id_list || !num_devices)
+		return -EINVAL;
+
+	if (xlink_dev) {
+		*sw_device_id_list = xlink_dev->sw_device_id;
+		i++;
+	}
+
+	*num_devices = i;
+	return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_get_device_list);
+
+/*
+ * xlink_ipc_get_device_name() - platform get device name interface.
+ *
+ * @sw_device_id:	[in]  The sw device id of the device to get name of.
+ * @device_name:	[out] The name of the xlink ipc device.
+ * @name_size:		[in]  The maximum size of the name.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_name(u32 sw_device_id, char *device_name,
+			      size_t name_size)
+{
+	size_t size;
+
+	if (!device_name)
+		return -EINVAL;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	size = (name_size > XLINK_IPC_MAX_DEVICE_NAME_SIZE)
+			? XLINK_IPC_MAX_DEVICE_NAME_SIZE
+			: name_size;
+	strscpy(device_name, xlink_dev->device_name, size);
+	return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_get_device_name);
+
+/*
+ * xlink_ipc_get_device_status() - platform get device status interface.
+ *
+ * @sw_device_id:	[in]  The sw device id of the device to get status of.
+ * @device_status:	[out] The device status.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_status(u32 sw_device_id, u32 *device_status)
+{
+	if (!device_status)
+		return -EINVAL;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	*device_status = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+	return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_get_device_status);
+
+static void kernel_callback(struct device *dev, enum intel_keembay_vpu_event event)
+{
+	if ((enum xlink_vpu_event)event >= NUM_EVENT_TYPE)
+		return;
+
+	if (xlink_dev) {
+		if (xlink_dev->callback)
+			xlink_dev->callback(xlink_dev->sw_device_id, event);
+	}
+}
+
+/*
+ * xlink_ipc_register_for_events() - platform register for events.
+ *
+ * @sw_device_id:	[in]	The sw device id of the device to register with.
+ * @callback:		[in]	Callback function invoked on events.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_register_for_events(u32 sw_device_id,
+				  int (*callback)(u32 sw_device_id, u32 event))
+{
+	int rc;
+
+	if (!xlink_dev)
+		return -ENODEV;
+	xlink_dev->callback = callback;
+	rc = intel_keembay_vpu_register_for_events(xlink_dev->vpu_dev, kernel_callback);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_register_for_events);
+
+/*
+ * xlink_ipc_unregister_for_events() - platform unregister for events.
+ *
+ * @sw_device_id:	[in]	The sw device id of the device to unregister from.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_unregister_for_events(u32 sw_device_id)
+{
+	int rc;
+
+	if (!xlink_dev)
+		return -ENODEV;
+	rc = intel_keembay_vpu_unregister_for_events(xlink_dev->vpu_dev);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_unregister_for_events);
+
+/*
+ * xlink_ipc_boot_device() - platform boot device interface.
+ *
+ * @sw_device_id:	[in] The sw device id of the device to boot.
+ * @binary_name:	[in] The file name of the firmware binary to boot.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_boot_device(u32 sw_device_id, const char *binary_name)
+{
+	enum intel_keembay_vpu_state state;
+	int rc;
+
+	if (!binary_name)
+		return -EINVAL;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	pr_info("\nStart VPU 0x%x - %s\n", sw_device_id, binary_name);
+	rc = intel_keembay_vpu_startup(xlink_dev->vpu_dev, binary_name);
+	if (rc) {
+		pr_err("Failed to start VPU: %d\n", rc);
+		return -EBUSY;
+	}
+	pr_info("Successfully started VPU!\n");
+
+	/* Wait for VPU to be READY */
+	rc = intel_keembay_vpu_wait_for_ready(xlink_dev->vpu_dev,
+					      XLINK_VPU_WAIT_FOR_READY_MS);
+	if (rc) {
+		pr_err("Tried to start VPU but never got READY.\n");
+		return -EBUSY;
+	}
+	pr_info("Successfully synchronised state with VPU!\n");
+
+	/* Check state */
+	state = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+	if (state != KEEMBAY_VPU_READY) {
+		pr_err("VPU was not ready, it was %d\n", state);
+		return -EBUSY;
+	}
+	pr_info("VPU was ready.\n");
+	return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_boot_device);
+
+/*
+ * xlink_ipc_reset_device() - platform reset device interface.
+ *
+ * @sw_device_id:	[in] The sw device id of the device to reset.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_reset_device(u32 sw_device_id)
+{
+	enum intel_keembay_vpu_state state;
+	int rc;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	/* stop the vpu */
+	rc = intel_keembay_vpu_stop(xlink_dev->vpu_dev);
+	if (rc) {
+		pr_err("Failed to stop VPU: %d\n", rc);
+		return -EBUSY;
+	}
+	pr_info("Successfully stopped VPU!\n");
+
+	/* check state */
+	state = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+	if (state != KEEMBAY_VPU_OFF) {
+		pr_err("VPU was not OFF after stop request, it was %d\n",
+		       state);
+		return -EBUSY;
+	}
+	return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_reset_device);
+
+/*
+ * xlink_ipc_open_channel() - platform open channel interface.
+ *
+ * @sw_device_id:	[in] The sw device id of the device to open channel to.
+ * @channel:		[in] The channel id to open.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_open_channel(u32 sw_device_id, u32 channel)
+{
+	int rc;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	rc = intel_keembay_vpu_ipc_open_channel(xlink_dev->vpu_dev,
+						KMB_VPU_IPC_NODE_LEON_MSS, channel);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_open_channel);
+
+/*
+ * xlink_ipc_close_channel() - platform close channel interface.
+ *
+ * @sw_device_id:	[in] The sw device id of the device to close channel to.
+ * @channel:		[in] The channel id to close.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_close_channel(u32 sw_device_id, u32 channel)
+{
+	int rc;
+
+	if (!xlink_dev)
+		return -ENODEV;
+
+	rc = intel_keembay_vpu_ipc_close_channel(xlink_dev->vpu_dev,
+						 KMB_VPU_IPC_NODE_LEON_MSS, channel);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_close_channel);
+
+/*
+ * xlink ipc driver functions
+ */
+
+static int keembay_xlink_ipc_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	int rc;
+
+	/* allocate device data structure */
+	xlink_dev = kzalloc(sizeof(*xlink_dev), GFP_KERNEL);
+	if (!xlink_dev)
+		return -ENOMEM;
+
+	xlink_dev->pdev = pdev;
+	dev_info(dev, "Keem Bay xlink IPC driver probed.\n");
+
+	/* grab reserved memory regions and assign to child devices */
+	rc = xlink_reserved_memory_init(xlink_dev);
+	if (rc < 0) {
+		dev_err(&pdev->dev,
+			"Failed to set up reserved memory regions.\n");
+		goto r_cleanup;
+	}
+
+	/* allocate memory from the reserved memory regions */
+	rc = xlink_reserved_memory_alloc(xlink_dev);
+	if (rc < 0) {
+		dev_err(&pdev->dev,
+			"Failed to allocate reserved memory regions.\n");
+		goto r_cleanup;
+	}
+
+	/* init the xlink buffer pool used for rx/tx */
+	init_xlink_buf_pool(xlink_dev);
+
+	/* get reference to vpu ipc device */
+	rc = vpu_ipc_device_get(xlink_dev);
+	if (rc)
+		goto r_cleanup;
+
+	/* get device id */
+	rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-id",
+				  &xlink_dev->vpu_id);
+	if (rc) {
+		dev_err(dev, "Cannot get VPU ID from DT.\n");
+		goto r_cleanup;
+	}
+
+	/* assign a sw device id */
+	xlink_dev->sw_device_id = GET_SW_DEVICE_ID_FROM_VPU_ID(xlink_dev->vpu_id);
+
+	/* assign a device name */
+	rc = of_property_read_string(dev->of_node, "intel,keembay-vpu-ipc-name",
+				     &xlink_dev->device_name);
+	if (rc) {
+		/* only warn for now; we will enforce this in the future */
+		dev_warn(dev, "VPU name not defined in DT, using %s as default.\n",
+			 VPU_DEVICE_NAME);
+		dev_warn(dev, "WARNING: additional VPU devices may fail probing.\n");
+		xlink_dev->device_name = VPU_DEVICE_NAME;
+	}
+
+	/* get platform data reference */
+	platform_set_drvdata(pdev, xlink_dev);
+
+	dev_info(dev, "Device id=%u sw_device_id=0x%x name=%s probe complete.\n",
+		 xlink_dev->vpu_id, xlink_dev->sw_device_id,
+			xlink_dev->device_name);
+	return 0;
+
+r_cleanup:
+	xlink_reserved_memory_remove(xlink_dev);
+	return rc;
+}
+
+static int keembay_xlink_ipc_remove(struct platform_device *pdev)
+{
+	struct xlink_ipc_dev *xlink_dev = platform_get_drvdata(pdev);
+	struct device *dev = &pdev->dev;
+
+	/*
+	 * no need to de-alloc xlink mem (local_xlink_mem and remote_xlink_mem)
+	 * since it was allocated with dmam_alloc
+	 */
+	xlink_reserved_memory_remove(xlink_dev);
+
+	/* release vpu ipc device */
+	vpu_ipc_device_put(xlink_dev);
+
+	dev_info(dev, "Keem Bay xlink IPC driver removed.\n");
+	return 0;
+}
+
+static const struct of_device_id keembay_xlink_ipc_of_match[] = {
+	{
+		.compatible = "intel,keembay-xlink-ipc",
+	},
+	{}
+};
+
+static struct platform_driver keembay_xlink_ipc_driver = {
+	.driver = {
+			.name = "keembay-xlink-ipc",
+			.of_match_table = keembay_xlink_ipc_of_match,
+		},
+	.probe = keembay_xlink_ipc_probe,
+	.remove = keembay_xlink_ipc_remove,
+};
+module_platform_driver(keembay_xlink_ipc_driver);
+
+MODULE_DESCRIPTION("Keem Bay xlink IPC Driver");
+MODULE_AUTHOR("Ryan Carnaghi <ryan.r.carnaghi@intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/xlink-ipc.h b/include/linux/xlink-ipc.h
new file mode 100644
index 000000000000..f26b53bf6506
--- /dev/null
+++ b/include/linux/xlink-ipc.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay xlink IPC Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef _XLINK_IPC_H_
+#define _XLINK_IPC_H_
+
+#include <linux/types.h>
+
+struct xlink_ipc_context {
+	u16 chan;
+	u8 is_volatile;
+};
+
+int xlink_ipc_connect(u32 sw_device_id);
+
+int xlink_ipc_write(u32 sw_device_id, void *data, size_t * const size,
+		    u32 timeout, void *context);
+
+int xlink_ipc_read(u32 sw_device_id, void *data, size_t * const size,
+		   u32 timeout, void *context);
+
+int xlink_ipc_get_device_list(u32 *sw_device_id_list, u32 *num_devices);
+
+int xlink_ipc_get_device_name(u32 sw_device_id, char *device_name,
+			      size_t name_size);
+
+int xlink_ipc_get_device_status(u32 sw_device_id, u32 *device_status);
+
+int xlink_ipc_boot_device(u32 sw_device_id, const char *binary_name);
+
+int xlink_ipc_reset_device(u32 sw_device_id);
+
+int xlink_ipc_open_channel(u32 sw_device_id, u32 channel);
+
+int xlink_ipc_close_channel(u32 sw_device_id, u32 channel);
+
+int xlink_ipc_register_for_events(u32 sw_device_id,
+				  int (*callback)(u32 sw_device_id, u32 event));
+
+int xlink_ipc_unregister_for_events(u32 sw_device_id);
+
+#endif /* _XLINK_IPC_H_ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (17 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 18/34] xlink-ipc: Add xlink ipc driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-10 17:18   ` Rob Herring
  2021-01-11 19:27   ` Rob Herring
  2021-01-08 21:25 ` [PATCH v2 20/34] xlink-core: Add xlink core driver xLink mgross
                   ` (14 subsequent siblings)
  33 siblings, 2 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly, devicetree, Ryan Carnaghi

From: Seamus Kelly <seamus.kelly@intel.com>

Add device tree bindings for keembay-xlink.

Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
Signed-off-by: Ryan Carnaghi <ryan.r.carnaghi@intel.com>
---
 .../bindings/misc/intel,keembay-xlink.yaml    | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
new file mode 100644
index 000000000000..89c34018fa04
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright (c) Intel Corporation. All rights reserved.
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Keem Bay xlink
+
+maintainers:
+  - Seamus Kelly <seamus.kelly@intel.com>
+
+description: |
+  The Keem Bay xlink driver enables the communication/control sub-system
+  for internal and external communications to the Intel Keem Bay SoC.
+
+properties:
+  compatible:
+    oneOf:
+      - items:
+          - const: intel,keembay-xlink
+
+examples:
+  - |
+    xlink {
+        compatible = "intel,keembay-xlink";
+    };
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 20/34] xlink-core: Add xlink core driver xLink
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (18 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-19 19:58   ` Randy Dunlap
  2021-01-08 21:25 ` [PATCH v2 21/34] xlink-core: Enable xlink protocol over pcie mgross
                   ` (13 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly, Derek Kiernan, linux-doc

From: Seamus Kelly <seamus.kelly@intel.com>

Add xLink driver, which provides an abstracted control and communication
subsystem based on channel identification.
It is intended to support VPU technology both at SoC level as well as at
IP level, over multiple interfaces.  This initial patch enables local host
user mode to open/close/read/write via IOCTLs.

Specifically the driver enables an application/process to:

* Access a common xLink API across all interfaces from both kernel and
  user space.
* Call typical API types (open, close, read, write) that you would
  associate with a communication interface.
* Call other APIs that are related to other functions that the
  device can perform, e.g. boot, reset, get/set device mode.  Device mode
  refers to the power load of the VPU and an API can be used to read and
  control it.
* Use multiple communication channels that the driver manages from one
  interface to another, providing routing of data through these multiple
  channels across a single physical interface.

xLink: Add xLink Core device tree bindings

Add device tree bindings for the xLink Core driver which enables xLink
to control and communicate with the VPU IP present on the Intel Keem Bay
SoC.

Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Derek Kiernan <derek.kiernan@xilinx.com>
Cc: Dragan Cvetic <dragan.cvetic@xilinx.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-doc@vger.kernel.org
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
---
 Documentation/vpu/index.rst                 |   1 +
 Documentation/vpu/xlink-core.rst            |  81 +++
 MAINTAINERS                                 |  12 +
 drivers/misc/Kconfig                        |   1 +
 drivers/misc/Makefile                       |   1 +
 drivers/misc/xlink-core/Kconfig             |  33 +
 drivers/misc/xlink-core/Makefile            |   5 +
 drivers/misc/xlink-core/xlink-core.c        | 738 ++++++++++++++++++++
 drivers/misc/xlink-core/xlink-core.h        |  22 +
 drivers/misc/xlink-core/xlink-defs.h        | 175 +++++
 drivers/misc/xlink-core/xlink-ioctl.c       | 212 ++++++
 drivers/misc/xlink-core/xlink-ioctl.h       |  21 +
 drivers/misc/xlink-core/xlink-multiplexer.c | 534 ++++++++++++++
 drivers/misc/xlink-core/xlink-multiplexer.h |  35 +
 drivers/misc/xlink-core/xlink-platform.c    | 160 +++++
 drivers/misc/xlink-core/xlink-platform.h    |  65 ++
 include/linux/xlink.h                       | 108 +++
 include/uapi/misc/xlink_uapi.h              | 145 ++++
 18 files changed, 2349 insertions(+)
 create mode 100644 Documentation/vpu/xlink-core.rst
 create mode 100644 drivers/misc/xlink-core/Kconfig
 create mode 100644 drivers/misc/xlink-core/Makefile
 create mode 100644 drivers/misc/xlink-core/xlink-core.c
 create mode 100644 drivers/misc/xlink-core/xlink-core.h
 create mode 100644 drivers/misc/xlink-core/xlink-defs.h
 create mode 100644 drivers/misc/xlink-core/xlink-ioctl.c
 create mode 100644 drivers/misc/xlink-core/xlink-ioctl.h
 create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.c
 create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.h
 create mode 100644 drivers/misc/xlink-core/xlink-platform.c
 create mode 100644 drivers/misc/xlink-core/xlink-platform.h
 create mode 100644 include/linux/xlink.h
 create mode 100644 include/uapi/misc/xlink_uapi.h

diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index 49c78bb65b83..cd4272e089ec 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -16,3 +16,4 @@ This documentation contains information for the Intel VPU stack.
    vpu-stack-overview
    xlink-pcie
    xlink-ipc
+   xlink-core
diff --git a/Documentation/vpu/xlink-core.rst b/Documentation/vpu/xlink-core.rst
new file mode 100644
index 000000000000..1c471ec803d3
--- /dev/null
+++ b/Documentation/vpu/xlink-core.rst
@@ -0,0 +1,81 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============================
+xLink-core software subsystem
+=============================
+
+The purpose of the xLink software subsystem is to facilitate communication
+between multiple users on multiple nodes in the system.
+
+There are three types of xLink nodes:
+
+1. Remote Host: this is an external IA/x86 host system that is only capable of
+   communicating directly to the Local Host node on VPU 2.x products.
+2. Local Host: this is the ARM core within the VPU 2.x SoC. The Local Host can
+   communicate upstream to the Remote Host node, and downstream to the VPU IP
+   node.
+3. VPU IP: this is the Leon RT core within the VPU 2.x SoC. The VPU IP can only
+   communicate upstream to the Local Host node.
+
+xLink provides a common API across all interfaces for users to access xLink
+functions and provides user space APIs via an IOCTL interface implemented in
+the xLink core.
+
+xLink manages communications from one interface to another and provides routing
+of data through multiple channels across a single physical interface.
+
+It exposes a common API across all interfaces at both kernel and user levels
+for processes/applications to access.
+
+It has typical API types (open, close, read, write) that you would associate
+with a communication interface.
+
+It also has other APIs that are related to other functions that the device can
+perform, e.g. boot, reset, get/set device mode.
+The driver is broken down into 4 source files.
+
+xlink-core:
+Contains driver initialization, driver API and IOCTL interface (for user
+space).
+
+xlink-multiplexer:
+The Multiplexer component is responsible for securely routing messages through
+multiple communication channels over a single physical interface.
+
+xlink-dispatcher:
+The Dispatcher component is responsible for queueing and handling xLink
+communication requests from all users in the system and invoking the underlying
+platform interface drivers.
+
+xlink-platform:
+provides abstraction to each interface supported (PCIe, USB, IPC, etc).
+
+Typical xLink transaction (simplified):
+When a user wants to send data across an interface via xLink it firstly calls
+xlink connect which connects to the relevant interface (PCIe, USB, IPC, etc.)
+and then xlink open channel.
+
+Then it calls xlink write function, this takes the data, passes it to the
+kernel which packages up the data and channel and then adds it to a transmit
+queue.
+
+A separate thread reads this transaction queue and pops off data if available
+and passes the data to the underlying interface (e.g. PCIe) write function.
+Using this thread provides serialization of transactions and decouples the user
+write from the platform write.
+
+On the other side of the interface, a thread is continually reading the
+interface (e.g. PCIe) via the platform interface read function and if it reads
+any data it adds it to channel packet container.
+
+The application at this side of the interface will have called xlink connect,
+opened the channel and called xlink read function to read data from the
+interface and, if any exists for that channel, the data gets popped from the
+channel packet container and copied from kernel space to user space buffer
+provided by the call.
+
+xLink can handle API requests from multi-process and multi-threaded
+application/processes.
+
+xLink maintains 4096 channels per device connected (via xlink connect) and
+maintains a separate channel infrastructure for each device.
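+
+A minimal kernel-space usage sketch follows. It assumes a valid sw_device_id
+and an opmode value taken from include/linux/xlink.h; the channel number,
+data size and timeout are illustrative only, and xlink_demo() is a
+hypothetical caller rather than part of the driver.
+
+.. code-block:: c
+
+    #include <linux/xlink.h>
+
+    static int xlink_demo(u32 sw_device_id, enum xlink_opmode mode)
+    {
+            struct xlink_handle handle = { .sw_device_id = sw_device_id };
+            u8 msg[] = "hello vpu";
+            u32 rx_size = sizeof(msg);  /* in: expected size, out: bytes read */
+            u16 chan = 10;              /* illustrative channel number */
+            enum xlink_error rc;
+            u8 *rx;
+
+            rc = xlink_connect(&handle);
+            if (rc != X_LINK_SUCCESS)
+                    return -EIO;
+
+            rc = xlink_open_channel(&handle, chan, mode, 1024, 5000);
+            if (rc != X_LINK_SUCCESS)
+                    goto disconnect;
+
+            rc = xlink_write_data(&handle, chan, msg, sizeof(msg));
+            if (rc != X_LINK_SUCCESS)
+                    goto close;
+
+            /* The read returns a pointer into xlink-managed memory, which
+             * must be released once the data has been consumed.
+             */
+            rc = xlink_read_data(&handle, chan, &rx, &rx_size);
+            if (rc == X_LINK_SUCCESS && rx_size)
+                    xlink_release_data(&handle, chan, rx);
+
+    close:
+            xlink_close_channel(&handle, chan);
+    disconnect:
+            xlink_disconnect(&handle);
+            return rc == X_LINK_SUCCESS ? 0 : -EIO;
+    }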
diff --git a/MAINTAINERS b/MAINTAINERS
index 92693390c59d..e4165c9983cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1962,6 +1962,12 @@ F:	Documentation/devicetree/bindings/arm/intel,keembay.yaml
 F:	arch/arm64/boot/dts/intel/keembay-evm.dts
 F:	arch/arm64/boot/dts/intel/keembay-soc.dtsi
 
+ARM/INTEL XLINK CORE SUPPORT
+M:	Seamus Kelly <seamus.kelly@intel.com>
+M:	Mark Gross <mgross@linux.intel.com>
+S:	Supported
+F:	drivers/misc/xlink-core/
+
 ARM/INTEL XLINK IPC SUPPORT
 M:	Seamus Kelly <seamus.kelly@intel.com>
 M:	Mark Gross <mgross@linux.intel.com>
@@ -1974,6 +1980,12 @@ M:	Mark Gross <mgross@linux.intel.com>
 S:	Supported
 F:	drivers/misc/xlink-pcie/
 
+ARM/INTEL TSENS SUPPORT
+M:	Udhayakumar C <udhayakumar.c@intel.com>
+S:	Supported
+F:	drivers/misc/hddl_device/
+F:	drivers/misc/intel_tsens/
+
 ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
 M:	Jonathan Cameron <jic23@cam.ac.uk>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 1f81ea915b95..09ae65e98681 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -483,4 +483,5 @@ source "drivers/misc/habanalabs/Kconfig"
 source "drivers/misc/uacce/Kconfig"
 source "drivers/misc/xlink-pcie/Kconfig"
 source "drivers/misc/xlink-ipc/Kconfig"
+source "drivers/misc/xlink-core/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index b51495a2f1e0..f3a6eb03bae9 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -59,3 +59,4 @@ obj-$(CONFIG_XILINX_SDFEC)	+= xilinx_sdfec.o
 obj-$(CONFIG_HISI_HIKEY_USB)	+= hisi_hikey_usb.o
 obj-y                           += xlink-pcie/
 obj-$(CONFIG_XLINK_IPC)		+= xlink-ipc/
+obj-$(CONFIG_XLINK_CORE)	+= xlink-core/
diff --git a/drivers/misc/xlink-core/Kconfig b/drivers/misc/xlink-core/Kconfig
new file mode 100644
index 000000000000..a0ceb0b48219
--- /dev/null
+++ b/drivers/misc/xlink-core/Kconfig
@@ -0,0 +1,33 @@
+config XLINK_CORE
+	tristate "Support for XLINK CORE"
+	depends on ((XLINK_PCIE_RH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER) || (XLINK_LOCAL_HOST && (XLINK_PCIE_LH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER)))
+	help
+	  XLINK CORE enables the communication/control subsystem.
+
+	  If unsure, say N.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called xlink.ko.
+
+config XLINK_LOCAL_HOST
+	tristate "Support for XLINK LOCAL HOST"
+	depends on XLINK_IPC
+	help
+	  XLINK LOCAL HOST enables local host functionality for
+	  the communication/control Sub-System.
+
+	  Enable this config when building the kernel for the Intel Vision
+	  Processing Unit (VPU) Local Host core.
+
+	  If building for a Remote Host kernel, say N.
+
+config XLINK_PSS
+	tristate "Support for XLINK PSS (Pre-Silicon Solution)"
+	depends on XLINK_LOCAL_HOST
+	help
+	  XLINK PSS enables the communication/control subsystem on a PSS platform.
+
+	  Enable this config when building the kernel for the Intel Vision
+	  Processing Unit (VPU) in a simulated environment.
+
+	  If building for a VPU silicon, say N.
diff --git a/drivers/misc/xlink-core/Makefile b/drivers/misc/xlink-core/Makefile
new file mode 100644
index 000000000000..e82b7c72b6b9
--- /dev/null
+++ b/drivers/misc/xlink-core/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for Keem Bay xlink Linux driver
+#
+obj-$(CONFIG_XLINK_CORE) += xlink.o
+xlink-objs += xlink-core.o xlink-multiplexer.o xlink-platform.o xlink-ioctl.o
diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
new file mode 100644
index 000000000000..1a443f54786d
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -0,0 +1,738 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Core Driver.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/kref.h>
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#endif
+
+#include "xlink-core.h"
+#include "xlink-defs.h"
+#include "xlink-ioctl.h"
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+// xlink version number
+#define XLINK_VERSION_MAJOR		0
+#define XLINK_VERSION_MINOR		1
+#define XLINK_VERSION_REVISION		2
+#define XLINK_VERSION_SUB_REV		"a"
+
+// timeout in milliseconds used to wait for the ready message from the VPU
+#ifdef CONFIG_XLINK_PSS
+#define XLINK_VPU_WAIT_FOR_READY (3000000)
+#else
+#define XLINK_VPU_WAIT_FOR_READY (3000)
+#endif
+
+// device, class, and driver names
+#define DEVICE_NAME	"xlnk"
+#define CLASS_NAME	"xlkcore"
+#define DRV_NAME	"xlink-driver"
+
+// used to determine if an API was called from user or kernel space
+#define CHANNEL_SET_USER_BIT(chan)	((chan) |= (1 << 15))
+#define CHANNEL_USER_BIT_IS_SET(chan)	((chan) & (1 << 15))
+#define CHANNEL_CLEAR_USER_BIT(chan)	((chan) &= ~(1 << 15))
+
+static dev_t xdev;
+static struct class *dev_class;
+static struct cdev xlink_cdev;
+
+static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
+
+static const struct file_operations fops = {
+		.owner		= THIS_MODULE,
+		.unlocked_ioctl = xlink_ioctl,
+};
+
+struct xlink_link {
+	u32 id;
+	struct xlink_handle handle;
+	struct kref refcount;
+};
+
+struct keembay_xlink_dev {
+	struct platform_device *pdev;
+	struct xlink_link links[XLINK_MAX_CONNECTIONS];
+	u32 nmb_connected_links;
+	struct mutex lock;  // protect access to xlink_dev
+};
+
+/*
+ * global variable pointing to our xlink device.
+ *
+ * This is meant to be used only when platform_get_drvdata() cannot be used
+ * because we lack a reference to our platform_device.
+ */
+static struct keembay_xlink_dev *xlink;
+
+/*
+ * get_next_link() - Searches the list of links to find the next available.
+ *
+ * Note: This function is only used in xlink_connect, where the xlink mutex is
+ * already locked.
+ *
+ * Return: the next available link, or NULL if maximum connections reached.
+ */
+static struct xlink_link *get_next_link(void)
+{
+	struct xlink_link *link = NULL;
+	int i;
+
+	for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+		if (xlink->links[i].handle.sw_device_id == XLINK_INVALID_SW_DEVICE_ID) {
+			link = &xlink->links[i];
+			break;
+		}
+	}
+	return link;
+}
+
+/*
+ * get_link_by_sw_device_id()
+ *
+ * Searches the list of links to find a link by sw device id.
+ *
+ * Return: the link, or NULL if no link with the given sw device id is found.
+ */
+static struct xlink_link *get_link_by_sw_device_id(u32 sw_device_id)
+{
+	struct xlink_link *link = NULL;
+	int i;
+
+	mutex_lock(&xlink->lock);
+	for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+		if (xlink->links[i].handle.sw_device_id == sw_device_id) {
+			link = &xlink->links[i];
+			break;
+		}
+	}
+	mutex_unlock(&xlink->lock);
+	return link;
+}
+
+// For now, do nothing and leave for further consideration
+static void release_after_kref_put(struct kref *ref) {}
+
+/* Driver probing. */
+static int kmb_xlink_probe(struct platform_device *pdev)
+{
+	struct keembay_xlink_dev *xlink_dev;
+	struct device *dev_ret;
+	int rc, i;
+
+	dev_info(&pdev->dev, "Keem Bay xlink v%d.%d.%d:%s\n", XLINK_VERSION_MAJOR,
+		 XLINK_VERSION_MINOR, XLINK_VERSION_REVISION, XLINK_VERSION_SUB_REV);
+
+	xlink_dev = devm_kzalloc(&pdev->dev, sizeof(*xlink), GFP_KERNEL);
+	if (!xlink_dev)
+		return -ENOMEM;
+
+	xlink_dev->pdev = pdev;
+
+	// initialize multiplexer
+	rc = xlink_multiplexer_init(xlink_dev->pdev);
+	if (rc != X_LINK_SUCCESS) {
+		pr_err("Multiplexer initialization failed\n");
+		goto r_multiplexer;
+	}
+
+	// initialize xlink data structure
+	xlink_dev->nmb_connected_links = 0;
+	mutex_init(&xlink_dev->lock);
+	for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+		xlink_dev->links[i].id = i;
+		xlink_dev->links[i].handle.sw_device_id =
+				XLINK_INVALID_SW_DEVICE_ID;
+	}
+
+	platform_set_drvdata(pdev, xlink_dev);
+
+	/* Set the global reference to our device. */
+	xlink = xlink_dev;
+
+	/*Allocating Major number*/
+	if ((alloc_chrdev_region(&xdev, 0, 1, "xlinkdev")) < 0) {
+		dev_info(&pdev->dev, "Cannot allocate major number\n");
+		goto r_multiplexer;
+	}
+	dev_info(&pdev->dev, "Major = %d Minor = %d\n", MAJOR(xdev),
+		 MINOR(xdev));
+
+	/*Creating struct class*/
+	dev_class = class_create(THIS_MODULE, CLASS_NAME);
+	if (IS_ERR(dev_class)) {
+		dev_info(&pdev->dev, "Cannot create the struct class - Err %ld\n",
+			 PTR_ERR(dev_class));
+		goto r_class;
+	}
+
+	/*Creating device*/
+	dev_ret = device_create(dev_class, NULL, xdev, NULL, DEVICE_NAME);
+	if (IS_ERR(dev_ret)) {
+		dev_info(&pdev->dev, "Cannot create the Device 1 - Err %ld\n",
+			 PTR_ERR(dev_ret));
+		goto r_device;
+	}
+	dev_info(&pdev->dev, "Device Driver Insert...Done!!!\n");
+
+	/*Creating cdev structure*/
+	cdev_init(&xlink_cdev, &fops);
+
+	/*Adding character device to the system*/
+	if ((cdev_add(&xlink_cdev, xdev, 1)) < 0) {
+		dev_info(&pdev->dev, "Cannot add the device to the system\n");
+		goto r_class;
+	}
+
+	return 0;
+
+r_device:
+	class_destroy(dev_class);
+r_class:
+	unregister_chrdev_region(xdev, 1);
+r_multiplexer:
+	xlink_multiplexer_destroy();
+	return -1;
+}
+
+/* Driver removal. */
+static int kmb_xlink_remove(struct platform_device *pdev)
+{
+	int rc;
+
+	mutex_lock(&xlink->lock);
+	// destroy multiplexer
+	rc = xlink_multiplexer_destroy();
+	if (rc != X_LINK_SUCCESS)
+		pr_err("Multiplexer destroy failed\n");
+
+	mutex_unlock(&xlink->lock);
+	mutex_destroy(&xlink->lock);
+	// unregister and destroy device
+	unregister_chrdev_region(xdev, 1);
+	device_destroy(dev_class, xdev);
+	cdev_del(&xlink_cdev);
+	class_destroy(dev_class);
+	pr_info("XLink Driver removed\n");
+	return 0;
+}
+
+/*
+ * IOCTL function for User Space access to xlink kernel functions
+ *
+ */
+
+static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	int rc;
+
+	switch (cmd) {
+	case XL_CONNECT:
+		rc = ioctl_connect(arg);
+		break;
+	case XL_OPEN_CHANNEL:
+		rc = ioctl_open_channel(arg);
+		break;
+	case XL_READ_DATA:
+		rc = ioctl_read_data(arg);
+		break;
+	case XL_WRITE_DATA:
+		rc = ioctl_write_data(arg);
+		break;
+	case XL_WRITE_VOLATILE:
+		rc = ioctl_write_volatile_data(arg);
+		break;
+	case XL_RELEASE_DATA:
+		rc = ioctl_release_data(arg);
+		break;
+	case XL_CLOSE_CHANNEL:
+		rc = ioctl_close_channel(arg);
+		break;
+	case XL_DISCONNECT:
+		rc = ioctl_disconnect(arg);
+		break;
+	default:
+		return -ENOTTY;
+	}
+	if (rc)
+		return -EIO;
+	else
+		return 0;
+}
+
+/*
+ * xlink Kernel API.
+ */
+
+enum xlink_error xlink_initialize(void)
+{
+	return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL(xlink_initialize);
+
+enum xlink_error xlink_connect(struct xlink_handle *handle)
+{
+	struct xlink_link *link;
+	enum xlink_error rc;
+	int interface;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	mutex_lock(&xlink->lock);
+	if (!link) {
+		link = get_next_link();
+		if (!link) {
+			pr_err("max connections reached %d\n",
+			       XLINK_MAX_CONNECTIONS);
+			mutex_unlock(&xlink->lock);
+			return X_LINK_ERROR;
+		}
+		// platform connect
+		interface = get_interface_from_sw_device_id(handle->sw_device_id);
+		rc = xlink_platform_connect(interface, handle->sw_device_id);
+		if (rc) {
+			pr_err("platform connect failed %d\n", rc);
+			mutex_unlock(&xlink->lock);
+			return X_LINK_ERROR;
+		}
+		// set link handle reference and link id
+		link->handle = *handle;
+		xlink->nmb_connected_links++;
+		kref_init(&link->refcount);
+		// initialize multiplexer connection
+		rc = xlink_multiplexer_connect(link->id);
+		if (rc) {
+			pr_err("multiplexer connect failed\n");
+			goto r_cleanup;
+		}
+		pr_info("dev 0x%x connected - dev_type %d - nmb_connected_links %d\n",
+			link->handle.sw_device_id,
+			link->handle.dev_type,
+			xlink->nmb_connected_links);
+	} else {
+		// already connected
+		pr_info("dev 0x%x ALREADY connected - dev_type %d\n",
+			link->handle.sw_device_id,
+			link->handle.dev_type);
+		kref_get(&link->refcount);
+		*handle = link->handle;
+	}
+	mutex_unlock(&xlink->lock);
+	// TODO: implement ping
+	return X_LINK_SUCCESS;
+
+r_cleanup:
+	link->handle.sw_device_id = XLINK_INVALID_SW_DEVICE_ID;
+	mutex_unlock(&xlink->lock);
+	return X_LINK_ERROR;
+}
+EXPORT_SYMBOL(xlink_connect);
+
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+				    u16 chan, enum xlink_opmode mode,
+				    u32 data_size, u32 timeout)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	int event_queued = 0;
+	enum xlink_error rc;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
+				   &link->handle, chan, data_size, timeout);
+	if (!event)
+		return X_LINK_ERROR;
+
+	event->data = (void *)mode;
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
+		xlink_destroy_event(event);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_open_channel);
+
+enum xlink_error xlink_close_channel(struct xlink_handle *handle,
+				     u16 chan)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	enum xlink_error rc;
+	int event_queued = 0;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+				   &link->handle, chan, 0, 0);
+	if (!event)
+		return X_LINK_ERROR;
+
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
+		xlink_destroy_event(event);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_close_channel);
+
+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+				  u16 chan, u8 const *pmessage, u32 size)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	enum xlink_error rc;
+	int event_queued = 0;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	if (size > XLINK_MAX_DATA_SIZE)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
+				   chan, size, 0);
+	if (!event)
+		return X_LINK_ERROR;
+
+	if (chan < XLINK_IPC_MAX_CHANNELS &&
+	    event->interface == IPC_INTERFACE) {
+		/* only passing message address across IPC interface */
+		event->data = &pmessage;
+		rc = xlink_multiplexer_tx(event, &event_queued);
+		xlink_destroy_event(event);
+	} else {
+		event->data = (u8 *)pmessage;
+		event->paddr = 0;
+		rc = xlink_multiplexer_tx(event, &event_queued);
+		if (!event_queued)
+			xlink_destroy_event(event);
+	}
+	return rc;
+}
+EXPORT_SYMBOL(xlink_write_data);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+				       u16 chan, u8 const *pmessage,
+				       u32 size)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	enum xlink_error rc;
+	int event_queued = 0;
+	dma_addr_t paddr = 0;
+	u32 addr;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	if (size > XLINK_MAX_DATA_SIZE)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
+				   chan, size, 0);
+	if (!event)
+		return X_LINK_ERROR;
+	event->user_data = 1;
+
+	if (chan < XLINK_IPC_MAX_CHANNELS &&
+	    event->interface == IPC_INTERFACE) {
+		/* only passing message address across IPC interface */
+		if (get_user(addr, (u32 __user *)pmessage)) {
+			xlink_destroy_event(event);
+			return X_LINK_ERROR;
+		}
+		event->data = &addr;
+		rc = xlink_multiplexer_tx(event, &event_queued);
+		xlink_destroy_event(event);
+	} else {
+		event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
+						      size,
+						      XLINK_PACKET_ALIGNMENT,
+						      XLINK_NORMAL_MEMORY);
+		if (!event->data) {
+			xlink_destroy_event(event);
+			return X_LINK_ERROR;
+		}
+		if (copy_from_user(event->data, (void __user *)pmessage, size)) {
+			xlink_platform_deallocate(&xlink->pdev->dev,
+						  event->data, paddr, size,
+						  XLINK_PACKET_ALIGNMENT,
+						  XLINK_NORMAL_MEMORY);
+			xlink_destroy_event(event);
+			return X_LINK_ERROR;
+		}
+		event->paddr = paddr;
+		rc = xlink_multiplexer_tx(event, &event_queued);
+		if (!event_queued) {
+			xlink_platform_deallocate(&xlink->pdev->dev,
+						  event->data, paddr, size,
+						  XLINK_PACKET_ALIGNMENT,
+						  XLINK_NORMAL_MEMORY);
+			xlink_destroy_event(event);
+		}
+	}
+	return rc;
+}
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+				      u16 chan, u8 const *message, u32 size)
+{
+	enum xlink_error rc = 0;
+
+	rc = do_xlink_write_volatile(handle, chan, message, size, 0);
+	return rc;
+}
+
+enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+					 u16 chan, u8 const *message,
+					 u32 size, u32 user_flag)
+{
+	enum xlink_error rc = 0;
+	struct xlink_link *link = NULL;
+	struct xlink_event *event = NULL;
+	int event_queued = 0;
+	dma_addr_t paddr;
+	int region = 0;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	if (size > XLINK_MAX_BUF_SIZE)
+		return X_LINK_ERROR; // TODO: XLink Parameter Error
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_WRITE_VOLATILE_REQ,
+				   &link->handle, chan, size, 0);
+	if (!event)
+		return X_LINK_ERROR;
+
+	region = XLINK_NORMAL_MEMORY;
+	event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr, size,
+					      XLINK_PACKET_ALIGNMENT, region);
+	if (!event->data) {
+		xlink_destroy_event(event);
+		return X_LINK_ERROR;
+	}
+	memcpy(event->data, message, size);
+	event->user_data = user_flag;
+	event->paddr = paddr;
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued) {
+		xlink_platform_deallocate(&xlink->pdev->dev, event->data, paddr, size,
+					  XLINK_PACKET_ALIGNMENT, region);
+		xlink_destroy_event(event);
+	}
+	return rc;
+}
+
+enum xlink_error xlink_read_data(struct xlink_handle *handle,
+				 u16 chan, u8 **pmessage, u32 *size)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	int event_queued = 0;
+	enum xlink_error rc;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_READ_REQ, &link->handle,
+				   chan, *size, 0);
+	if (!event)
+		return X_LINK_ERROR;
+
+	event->pdata = (void **)pmessage;
+	event->length = size;
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
+		xlink_destroy_event(event);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_read_data);
+
+enum xlink_error xlink_read_data_to_buffer(struct xlink_handle *handle,
+					   u16 chan, u8 *const message, u32 *size)
+{
+	enum xlink_error rc = 0;
+	struct xlink_link *link = NULL;
+	struct xlink_event *event = NULL;
+	int event_queued = 0;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_READ_TO_BUFFER_REQ,
+				   &link->handle, chan, *size, 0);
+	if (!event)
+		return X_LINK_ERROR;
+
+	event->data = message;
+	event->length = size;
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
+		xlink_destroy_event(event);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_read_data_to_buffer);
+
+enum xlink_error xlink_release_data(struct xlink_handle *handle,
+				    u16 chan, u8 * const data_addr)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	int event_queued = 0;
+	enum xlink_error rc;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_RELEASE_REQ, &link->handle,
+				   chan, 0, 0);
+	if (!event)
+		return X_LINK_ERROR;
+
+	event->data = data_addr;
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
+		xlink_destroy_event(event);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_release_data);
+
+enum xlink_error xlink_disconnect(struct xlink_handle *handle)
+{
+	struct xlink_link *link;
+	enum xlink_error rc = X_LINK_ERROR;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	// decrement refcount, if count is 0 lock mutex and disconnect
+	if (kref_put_mutex(&link->refcount, release_after_kref_put,
+			   &xlink->lock)) {
+		// deinitialize multiplexer connection
+		rc = xlink_multiplexer_disconnect(link->id);
+		if (rc) {
+			pr_err("multiplexer disconnect failed\n");
+			mutex_unlock(&xlink->lock);
+			return X_LINK_ERROR;
+		}
+		// TODO: reset device?
+		// invalidate link handle reference
+		link->handle.sw_device_id = XLINK_INVALID_SW_DEVICE_ID;
+		xlink->nmb_connected_links--;
+		mutex_unlock(&xlink->lock);
+	}
+	return rc;
+}
+EXPORT_SYMBOL(xlink_disconnect);
+
+/* Device tree driver match. */
+static const struct of_device_id kmb_xlink_of_match[] = {
+	{
+		.compatible = "intel,keembay-xlink",
+	},
+	{}
+};
+
+/* The xlink driver is a platform device. */
+static struct platform_driver kmb_xlink_driver = {
+	.probe = kmb_xlink_probe,
+	.remove = kmb_xlink_remove,
+	.driver = {
+		.name = DRV_NAME,
+		.of_match_table = kmb_xlink_of_match,
+	},
+};
+
+/*
+ * The remote host system will need to create an xlink platform
+ * device for the platform driver to match with
+ */
+#ifndef CONFIG_XLINK_LOCAL_HOST
+static struct platform_device pdev;
+static void kmb_xlink_release(struct device *dev) { return; }
+#endif
+
+static int kmb_xlink_init(void)
+{
+	int rc;
+
+	rc = platform_driver_register(&kmb_xlink_driver);
+#ifndef CONFIG_XLINK_LOCAL_HOST
+	pdev.dev.release = kmb_xlink_release;
+	pdev.name = DRV_NAME;
+	pdev.id = -1;
+	if (!rc) {
+		rc = platform_device_register(&pdev);
+		if (rc)
+			platform_driver_unregister(&kmb_xlink_driver);
+	}
+#endif
+	return rc;
+}
+module_init(kmb_xlink_init);
+
+static void kmb_xlink_exit(void)
+{
+#ifndef CONFIG_XLINK_LOCAL_HOST
+	platform_device_unregister(&pdev);
+#endif
+	platform_driver_unregister(&kmb_xlink_driver);
+}
+module_exit(kmb_xlink_exit);
+
+MODULE_DESCRIPTION("Keem Bay xlink Kernel Driver");
+MODULE_AUTHOR("Seamus Kelly <seamus.kelly@intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/xlink-core/xlink-core.h b/drivers/misc/xlink-core/xlink-core.h
new file mode 100644
index 000000000000..5ba7ac653bf7
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-core.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink core header file.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+
+#ifndef XLINK_CORE_H_
+#define XLINK_CORE_H_
+#include "xlink-defs.h"
+
+#define NUM_REG_EVENTS 4
+
+enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+					 u16 chan, u8 const *message,
+					 u32 size, u32 user_flag);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+				       u16 chan, u8 const *pmessage,
+				       u32 size);
+#endif /* XLINK_CORE_H_ */
diff --git a/drivers/misc/xlink-core/xlink-defs.h b/drivers/misc/xlink-core/xlink-defs.h
new file mode 100644
index 000000000000..09aee36d5542
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-defs.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Defines.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_DEFS_H
+#define __XLINK_DEFS_H
+
+#include <linux/slab.h>
+#include <linux/xlink.h>
+
+#define XLINK_MAX_BUF_SIZE		128U
+#define XLINK_MAX_DATA_SIZE		(1024U * 1024U * 1024U)
+#define XLINK_MAX_CONTROL_DATA_SIZE	100U
+#define XLINK_MAX_CONNECTIONS		16
+#define XLINK_PACKET_ALIGNMENT		64
+#define XLINK_INVALID_EVENT_ID		0xDEADBEEF
+#define XLINK_INVALID_CHANNEL_ID	0xDEAD
+#define XLINK_PACKET_QUEUE_CAPACITY	10000
+#define XLINK_EVENT_QUEUE_CAPACITY	10000
+#define XLINK_EVENT_HEADER_MAGIC	0x786C6E6B
+#define XLINK_PING_TIMEOUT_MS		5000U
+#define XLINK_MAX_DEVICE_NAME_SIZE	128
+#define XLINK_MAX_DEVICE_LIST_SIZE	8
+#define XLINK_INVALID_LINK_ID		0xDEADBEEF
+#define XLINK_INVALID_SW_DEVICE_ID	0xDEADBEEF
+
+#define NMB_CHANNELS			4096
+#define IP_CONTROL_CHANNEL		0x0A
+#define VPU_CONTROL_CHANNEL		0x400
+#define CONTROL_CHANNEL_OPMODE		RXB_TXB	// blocking
+#define CONTROL_CHANNEL_DATASIZE	128U	// size of internal rx/tx buffers
+#define CONTROL_CHANNEL_TIMEOUT_MS	0U	// wait indefinitely
+#define SIGXLNK				44	// signal XLink uses for callback signalling
+
+#define UNUSED(x) (void)(x)
+
+// the end of the IPC channel range (starting at zero)
+#define XLINK_IPC_MAX_CHANNELS	1024
+
+// used to extract the interface type from a sw device id
+#define SW_DEVICE_ID_INTERFACE_SHIFT	24U
+#define SW_DEVICE_ID_INTERFACE_MASK	0x7
+#define GET_INTERFACE_FROM_SW_DEVICE_ID(id) (((id) >> SW_DEVICE_ID_INTERFACE_SHIFT) & \
+					     SW_DEVICE_ID_INTERFACE_MASK)
+#define SW_DEVICE_ID_IPC_INTERFACE  0x0
+#define SW_DEVICE_ID_PCIE_INTERFACE 0x1
+#define SW_DEVICE_ID_USB_INTERFACE  0x2
+#define SW_DEVICE_ID_ETH_INTERFACE  0x3
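+
+/*
+ * Worked example (the id value is illustrative only):
+ *   sw_device_id = 0x01000000
+ *   (0x01000000 >> 24) & 0x7 == SW_DEVICE_ID_PCIE_INTERFACE
+ *   so get_interface_from_sw_device_id() below returns PCIE_INTERFACE
+ */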
+
+enum xlink_interface {
+	NULL_INTERFACE = -1,
+	IPC_INTERFACE = 0,
+	PCIE_INTERFACE,
+	USB_CDC_INTERFACE,
+	ETH_INTERFACE,
+	NMB_OF_INTERFACES,
+};
+
+static inline int get_interface_from_sw_device_id(u32 sw_device_id)
+{
+	u32 interface = 0;
+
+	interface = GET_INTERFACE_FROM_SW_DEVICE_ID(sw_device_id);
+	switch (interface) {
+	case SW_DEVICE_ID_IPC_INTERFACE:
+		return IPC_INTERFACE;
+	case SW_DEVICE_ID_PCIE_INTERFACE:
+		return PCIE_INTERFACE;
+	case SW_DEVICE_ID_USB_INTERFACE:
+		return USB_CDC_INTERFACE;
+	case SW_DEVICE_ID_ETH_INTERFACE:
+		return ETH_INTERFACE;
+	default:
+		return NULL_INTERFACE;
+	}
+}
+
+enum xlink_channel_status {
+	CHAN_CLOSED		= 0x0000,
+	CHAN_OPEN		= 0x0001,
+	CHAN_BLOCKED_READ	= 0x0010,
+	CHAN_BLOCKED_WRITE	= 0x0100,
+	CHAN_OPEN_PEER		= 0x1000,
+};
+
+enum xlink_event_origin {
+	EVENT_TX = 0,	// outgoing events
+	EVENT_RX,	// incoming events
+};
+
+enum xlink_event_type {
+	// request events
+	XLINK_WRITE_REQ = 0x00,
+	XLINK_WRITE_VOLATILE_REQ,
+	XLINK_READ_REQ,
+	XLINK_READ_TO_BUFFER_REQ,
+	XLINK_RELEASE_REQ,
+	XLINK_OPEN_CHANNEL_REQ,
+	XLINK_CLOSE_CHANNEL_REQ,
+	XLINK_PING_REQ,
+	XLINK_REQ_LAST,
+	// response events
+	XLINK_WRITE_RESP = 0x10,
+	XLINK_WRITE_VOLATILE_RESP,
+	XLINK_READ_RESP,
+	XLINK_READ_TO_BUFFER_RESP,
+	XLINK_RELEASE_RESP,
+	XLINK_OPEN_CHANNEL_RESP,
+	XLINK_CLOSE_CHANNEL_RESP,
+	XLINK_PING_RESP,
+	XLINK_RESP_LAST,
+};
+
+struct xlink_event_header {
+	u32 magic;
+	u32 id;
+	enum xlink_event_type type;
+	u32 chan;
+	size_t size;
+	u32 timeout;
+	u8  control_data[XLINK_MAX_CONTROL_DATA_SIZE];
+};
+
+struct xlink_event {
+	struct xlink_event_header header;
+	enum xlink_event_origin origin;
+	u32 link_id;
+	struct xlink_handle *handle;
+	int interface;
+	void *data;
+	struct task_struct *calling_pid;
+	char callback_origin;
+	char user_data;
+	void **pdata;
+	dma_addr_t paddr;
+	u32 *length;
+	struct list_head list;
+};
+
+static inline struct xlink_event *xlink_create_event(u32 link_id,
+						     enum xlink_event_type type,
+						     struct xlink_handle *handle,
+						     u32 chan, u32 size, u32 timeout)
+{
+	struct xlink_event *new_event = NULL;
+
+	// allocate new event
+	new_event = kzalloc(sizeof(*new_event), GFP_KERNEL);
+	if (!new_event)
+		return NULL;
+
+	// set event context
+	new_event->link_id	= link_id;
+	new_event->handle	= handle;
+	new_event->interface	= get_interface_from_sw_device_id(handle->sw_device_id);
+	new_event->user_data	= 0;
+
+	// set event header
+	new_event->header.magic	= XLINK_EVENT_HEADER_MAGIC;
+	new_event->header.id	= XLINK_INVALID_EVENT_ID;
+	new_event->header.type	= type;
+	new_event->header.chan	= chan;
+	new_event->header.size	= size;
+	new_event->header.timeout = timeout;
+	return new_event;
+}
+
+static inline void xlink_destroy_event(struct xlink_event *event)
+{
+	kfree(event);
+}
+#endif /* __XLINK_DEFS_H */
diff --git a/drivers/misc/xlink-core/xlink-ioctl.c b/drivers/misc/xlink-core/xlink-ioctl.c
new file mode 100644
index 000000000000..1f75ad38137b
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-ioctl.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Core Driver.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/kref.h>
+
+#include "xlink-defs.h"
+#include "xlink-core.h"
+#include "xlink-ioctl.h"
+
+#define CHANNEL_SET_USER_BIT(chan) ((chan) |= (1 << 15))
+
+static int copy_result_to_user(u32 *where, int rc)
+{
+	if (copy_to_user((void __user *)where, &rc, sizeof(rc)))
+		return -EFAULT;
+	return rc;
+}
+
+static enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+						  u16 chan, u8 const *message, u32 size)
+{
+	enum xlink_error rc = 0;
+
+	rc = do_xlink_write_volatile(handle, chan, message, size, 1);
+	return rc;
+}
+
+int ioctl_connect(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkconnect	con	= {};
+	int rc = 0;
+
+	if (copy_from_user(&con, (void __user *)arg,
+			   sizeof(struct xlinkconnect)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)con.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_connect(&devh);
+	if (rc == X_LINK_SUCCESS) {
+		if (copy_to_user((void __user *)con.handle,
+				 &devh, sizeof(struct xlink_handle)))
+			return -EFAULT;
+	}
+
+	return copy_result_to_user(con.return_code, rc);
+}
+
+int ioctl_open_channel(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkopenchannel	op	= {};
+	int rc = 0;
+
+	if (copy_from_user(&op, (void __user *)arg,
+			   sizeof(struct xlinkopenchannel)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)op.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_open_channel(&devh, op.chan, op.mode, op.data_size,
+				op.timeout);
+
+	return copy_result_to_user(op.return_code, rc);
+}
+
+int ioctl_read_data(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkreaddata	rd	= {};
+	int rc = 0;
+	u8 *rdaddr;
+	u32 size = 0;
+	int interface;
+
+	if (copy_from_user(&rd, (void __user *)arg,
+			   sizeof(struct xlinkreaddata)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)rd.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_read_data(&devh, rd.chan, &rdaddr, &size);
+	if (!rc) {
+		interface = get_interface_from_sw_device_id(devh.sw_device_id);
+		if (interface == IPC_INTERFACE) {
+			if (copy_to_user((void __user *)rd.pmessage, (void *)&rdaddr,
+					 sizeof(u32)))
+				return -EFAULT;
+		} else {
+			if (copy_to_user((void __user *)rd.pmessage, (void *)rdaddr,
+					 size))
+				return -EFAULT;
+		}
+		if (copy_to_user((void __user *)rd.size, (void *)&size, sizeof(size)))
+			return -EFAULT;
+	}
+
+	return copy_result_to_user(rd.return_code, rc);
+}
+
+int ioctl_write_data(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkwritedata	wr	= {};
+	int rc = 0;
+
+	if (copy_from_user(&wr, (void __user *)arg,
+			   sizeof(struct xlinkwritedata)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)wr.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	if (wr.size > XLINK_MAX_DATA_SIZE)
+		return -EINVAL;
+	rc = xlink_write_data_user(&devh, wr.chan, wr.pmessage, wr.size);
+
+	return copy_result_to_user(wr.return_code, rc);
+}
+
+int ioctl_write_volatile_data(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkwritedata	wr	= {};
+	int rc = 0;
+	u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+
+	if (copy_from_user(&wr, (void __user *)arg,
+			   sizeof(struct xlinkwritedata)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)wr.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	if (wr.size > XLINK_MAX_BUF_SIZE)
+		return -EINVAL;
+	if (copy_from_user(volbuf, (void __user *)wr.pmessage, wr.size))
+		return -EFAULT;
+	rc = xlink_write_volatile_user(&devh, wr.chan, volbuf, wr.size);
+
+	return copy_result_to_user(wr.return_code, rc);
+}
+
+int ioctl_release_data(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkrelease	rel	= {};
+	int rc = 0;
+	u32 reladdr;
+
+	if (copy_from_user(&rel, (void __user *)arg,
+			   sizeof(struct xlinkrelease)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)rel.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	if (rel.addr) {
+		if (get_user(reladdr, (u32 __user *const)rel.addr))
+			return -EFAULT;
+		rc = xlink_release_data(&devh, rel.chan,
+					(u8 *)&reladdr);
+	} else {
+		rc = xlink_release_data(&devh, rel.chan, NULL);
+	}
+
+	return copy_result_to_user(rel.return_code, rc);
+}
+
+int ioctl_close_channel(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkopenchannel	op	= {};
+	int rc = 0;
+
+	if (copy_from_user(&op, (void __user *)arg,
+			   sizeof(struct xlinkopenchannel)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)op.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_close_channel(&devh, op.chan);
+
+	return copy_result_to_user(op.return_code, rc);
+}
+
+int ioctl_disconnect(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkconnect	con	= {};
+	int rc = 0;
+
+	if (copy_from_user(&con, (void __user *)arg,
+			   sizeof(struct xlinkconnect)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)con.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_disconnect(&devh);
+
+	return copy_result_to_user(con.return_code, rc);
+}
diff --git a/drivers/misc/xlink-core/xlink-ioctl.h b/drivers/misc/xlink-core/xlink-ioctl.h
new file mode 100644
index 000000000000..0f317c6c2b94
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-ioctl.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink ioctl header files.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+
+#ifndef XLINK_IOCTL_H_
+#define XLINK_IOCTL_H_
+
+int ioctl_connect(unsigned long arg);
+int ioctl_open_channel(unsigned long arg);
+int ioctl_read_data(unsigned long arg);
+int ioctl_write_data(unsigned long arg);
+int ioctl_write_volatile_data(unsigned long arg);
+int ioctl_release_data(unsigned long arg);
+int ioctl_close_channel(unsigned long arg);
+int ioctl_disconnect(unsigned long arg);
+
+#endif /* XLINK_IOCTL_H_ */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
new file mode 100644
index 000000000000..9b1ed008bb56
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -0,0 +1,534 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Multiplexer.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-direct.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/completion.h>
+#include <linux/sched/signal.h>
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#endif
+
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+#define THR_UPR 85
+#define THR_LWR 80
+
+// timeout used for open channel
+#define OPEN_CHANNEL_TIMEOUT_MSEC 5000
+
+/* Channel mapping table. */
+struct xlink_channel_type {
+	enum xlink_interface remote_to_local;
+	enum xlink_interface local_to_ip;
+};
+
+struct xlink_channel_table_entry {
+	u16 start_range;
+	u16 stop_range;
+	struct xlink_channel_type type;
+};
+
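+/*
+ * Example with the default table below (values illustrative): channel 0x401
+ * falls in the 0x400-0xFFE range, so a remote host reaches it over PCIe,
+ * while there is no local-to-VPU-IP mapping for it (NULL_INTERFACE).
+ */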
+static const struct xlink_channel_table_entry default_channel_table[] = {
+	{0x0, 0x1, {PCIE_INTERFACE, IPC_INTERFACE}},
+	{0x2, 0x9, {USB_CDC_INTERFACE, IPC_INTERFACE}},
+	{0xA, 0x3FD, {PCIE_INTERFACE, IPC_INTERFACE}},
+	{0x3FE, 0x3FF, {ETH_INTERFACE, IPC_INTERFACE}},
+	{0x400, 0xFFE, {PCIE_INTERFACE, NULL_INTERFACE}},
+	{0xFFF, 0xFFF, {ETH_INTERFACE, NULL_INTERFACE}},
+	{NMB_CHANNELS, NMB_CHANNELS, {NULL_INTERFACE, NULL_INTERFACE}},
+};
+
+struct channel {
+	struct open_channel *opchan;
+	enum xlink_opmode mode;
+	enum xlink_channel_status status;
+	enum xlink_channel_status ipc_status;
+	u32 size;
+	u32 timeout;
+};
+
+struct packet {
+	u8 *data;
+	u32 length;
+	dma_addr_t paddr;
+	struct list_head list;
+};
+
+struct packet_queue {
+	u32 count;
+	u32 capacity;
+	struct list_head head;
+	struct mutex lock;  // lock to protect packet queue
+};
+
+struct open_channel {
+	u16 id;
+	struct channel *chan;
+	struct packet_queue rx_queue;
+	struct packet_queue tx_queue;
+	s32 rx_fill_level;
+	s32 tx_fill_level;
+	s32 tx_packet_level;
+	s32 tx_up_limit;
+	struct completion opened;
+	struct completion pkt_available;
+	struct completion pkt_consumed;
+	struct completion pkt_released;
+	struct task_struct *ready_calling_pid;
+	void *ready_callback;
+	struct task_struct *consumed_calling_pid;
+	void *consumed_callback;
+	char callback_origin;
+	struct mutex lock;  // lock to protect channel config
+};
+
+struct xlink_multiplexer {
+	struct device *dev;
+	struct channel channels[XLINK_MAX_CONNECTIONS][NMB_CHANNELS];
+};
+
+static struct xlink_multiplexer *xmux;
+
+/*
+ * Multiplexer Internal Functions
+ *
+ */
+
+static inline int chan_is_non_blocking_read(struct open_channel *opchan)
+{
+	if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXN_TXB)
+		return 1;
+
+	return 0;
+}
+
+static inline int chan_is_non_blocking_write(struct open_channel *opchan)
+{
+	if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXB_TXN)
+		return 1;
+
+	return 0;
+}
+
+static struct xlink_channel_type const *get_channel_type(u16 chan)
+{
+	struct xlink_channel_type const *type = NULL;
+	int i = 0;
+
+	while (default_channel_table[i].start_range < NMB_CHANNELS) {
+		if (chan >= default_channel_table[i].start_range &&
+		    chan <= default_channel_table[i].stop_range) {
+			type = &default_channel_table[i].type;
+			break;
+		}
+		i++;
+	}
+	return type;
+}
+
+static int is_channel_for_device(u16 chan, u32 sw_device_id,
+				 enum xlink_dev_type dev_type)
+{
+	struct xlink_channel_type const *chan_type = get_channel_type(chan);
+	int interface = NULL_INTERFACE;
+
+	if (chan_type) {
+		interface = get_interface_from_sw_device_id(sw_device_id);
+		if (dev_type == VPUIP_DEVICE) {
+			if (chan_type->local_to_ip == interface)
+				return 1;
+		} else {
+			if (chan_type->remote_to_local == interface)
+				return 1;
+		}
+	}
+	return 0;
+}
+
+static int is_control_channel(u16 chan)
+{
+	return chan == IP_CONTROL_CHANNEL || chan == VPU_CONTROL_CHANNEL;
+}
+
+static int release_packet_from_channel(struct open_channel *opchan,
+				       struct packet_queue *queue,
+				       u8 * const addr,	u32 *size)
+{
+	u8 packet_found = 0;
+	struct packet *pkt = NULL;
+
+	if (!addr) {
+		// address is null, release first packet in queue
+		if (!list_empty(&queue->head)) {
+			pkt = list_first_entry(&queue->head, struct packet,
+					       list);
+			packet_found = 1;
+		}
+	} else {
+		// find packet in channel rx queue
+		list_for_each_entry(pkt, &queue->head, list) {
+			if (pkt->data == addr) {
+				packet_found = 1;
+				break;
+			}
+		}
+	}
+	if (!pkt || !packet_found)
+		return X_LINK_ERROR;
+	// packet found, deallocate and remove from queue
+	xlink_platform_deallocate(xmux->dev, pkt->data, pkt->paddr, pkt->length,
+				  XLINK_PACKET_ALIGNMENT, XLINK_NORMAL_MEMORY);
+	list_del(&pkt->list);
+	queue->count--;
+	opchan->rx_fill_level -= pkt->length;
+	if (size)
+		*size = pkt->length;
+	kfree(pkt);
+	return X_LINK_SUCCESS;
+}
+
+static int multiplexer_open_channel(u32 link_id, u16 chan)
+{
+	struct open_channel *opchan;
+
+	// channel already open
+	if (xmux->channels[link_id][chan].opchan)
+		return X_LINK_SUCCESS;
+
+	// allocate open channel
+	opchan = kzalloc(sizeof(*opchan), GFP_KERNEL);
+	if (!opchan)
+		return X_LINK_ERROR;
+
+	// initialize open channel
+	opchan->id = chan;
+	opchan->chan = &xmux->channels[link_id][chan];
+	// TODO: remove circular dependency
+	xmux->channels[link_id][chan].opchan = opchan;
+	INIT_LIST_HEAD(&opchan->rx_queue.head);
+	opchan->rx_queue.count = 0;
+	opchan->rx_queue.capacity = XLINK_PACKET_QUEUE_CAPACITY;
+	INIT_LIST_HEAD(&opchan->tx_queue.head);
+	opchan->tx_queue.count = 0;
+	opchan->tx_queue.capacity = XLINK_PACKET_QUEUE_CAPACITY;
+	opchan->rx_fill_level = 0;
+	opchan->tx_fill_level = 0;
+	opchan->tx_packet_level = 0;
+	opchan->tx_up_limit = 0;
+	init_completion(&opchan->opened);
+	init_completion(&opchan->pkt_available);
+	init_completion(&opchan->pkt_consumed);
+	init_completion(&opchan->pkt_released);
+	mutex_init(&opchan->rx_queue.lock);
+	mutex_init(&opchan->tx_queue.lock);
+	mutex_init(&opchan->lock);
+	return X_LINK_SUCCESS;
+}
+
+static int multiplexer_close_channel(struct open_channel *opchan)
+{
+	if (!opchan)
+		return X_LINK_ERROR;
+
+	// free remaining packets
+	while (!list_empty(&opchan->rx_queue.head)) {
+		release_packet_from_channel(opchan, &opchan->rx_queue,
+					    NULL, NULL);
+	}
+
+	while (!list_empty(&opchan->tx_queue.head)) {
+		release_packet_from_channel(opchan, &opchan->tx_queue,
+					    NULL, NULL);
+	}
+
+	// deallocate data structure and destroy
+	opchan->chan->opchan = NULL; // TODO: remove circular dependency
+	mutex_destroy(&opchan->rx_queue.lock);
+	mutex_destroy(&opchan->tx_queue.lock);
+	mutex_unlock(&opchan->lock);
+	mutex_destroy(&opchan->lock);
+	kfree(opchan);
+	return X_LINK_SUCCESS;
+}
+
+/*
+ * Multiplexer External Functions
+ *
+ */
+
+enum xlink_error xlink_multiplexer_init(void *dev)
+{
+	struct platform_device *plat_dev = (struct platform_device *)dev;
+
+	// allocate multiplexer data structure
+	xmux = kzalloc(sizeof(*xmux), GFP_KERNEL);
+	if (!xmux)
+		return X_LINK_ERROR;
+
+	xmux->dev = &plat_dev->dev;
+	return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_connect(u32 link_id)
+{
+	int rc;
+
+	if (!xmux)
+		return X_LINK_ERROR;
+
+	// open ip control channel
+	rc = multiplexer_open_channel(link_id, IP_CONTROL_CHANNEL);
+	if (rc) {
+		goto r_cleanup;
+	} else {
+		xmux->channels[link_id][IP_CONTROL_CHANNEL].size = CONTROL_CHANNEL_DATASIZE;
+		xmux->channels[link_id][IP_CONTROL_CHANNEL].timeout = CONTROL_CHANNEL_TIMEOUT_MS;
+		xmux->channels[link_id][IP_CONTROL_CHANNEL].mode = CONTROL_CHANNEL_OPMODE;
+		xmux->channels[link_id][IP_CONTROL_CHANNEL].status = CHAN_OPEN;
+	}
+	// open vpu control channel
+	rc = multiplexer_open_channel(link_id, VPU_CONTROL_CHANNEL);
+	if (rc) {
+		goto r_cleanup;
+	} else {
+		xmux->channels[link_id][VPU_CONTROL_CHANNEL].size = CONTROL_CHANNEL_DATASIZE;
+		xmux->channels[link_id][VPU_CONTROL_CHANNEL].timeout = CONTROL_CHANNEL_TIMEOUT_MS;
+		xmux->channels[link_id][VPU_CONTROL_CHANNEL].mode = CONTROL_CHANNEL_OPMODE;
+		xmux->channels[link_id][VPU_CONTROL_CHANNEL].status = CHAN_OPEN;
+	}
+	return X_LINK_SUCCESS;
+
+r_cleanup:
+	xlink_multiplexer_disconnect(link_id);
+	return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_multiplexer_disconnect(u32 link_id)
+{
+	int i;
+
+	if (!xmux)
+		return X_LINK_ERROR;
+
+	for (i = 0; i < NMB_CHANNELS; i++) {
+		if (xmux->channels[link_id][i].opchan)
+			multiplexer_close_channel(xmux->channels[link_id][i].opchan);
+	}
+	return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_destroy(void)
+{
+	int i;
+
+	if (!xmux)
+		return X_LINK_ERROR;
+
+	// close all open channels and deallocate remaining packets
+	for (i = 0; i < XLINK_MAX_CONNECTIONS; i++)
+		xlink_multiplexer_disconnect(i);
+
+	// destroy multiplexer
+	kfree(xmux);
+	xmux = NULL;
+	return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
+				      int *event_queued)
+{
+	int rc = X_LINK_SUCCESS;
+	u16 chan = 0;
+
+	if (!xmux || !event)
+		return X_LINK_ERROR;
+
+	chan = event->header.chan;
+
+	// verify channel ID is in range
+	if (chan >= NMB_CHANNELS)
+		return X_LINK_ERROR;
+
+	// verify communication to device on channel is valid
+	if (!is_channel_for_device(chan, event->handle->sw_device_id,
+				   event->handle->dev_type))
+		return X_LINK_ERROR;
+
+	// verify this is not a control channel
+	if (is_control_channel(chan))
+		return X_LINK_ERROR;
+
+	// IPC channels within range are handled by the passthrough path
+	if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE)
+		rc = xlink_passthrough(event);
+	return rc;
+}
+
+enum xlink_error xlink_passthrough(struct xlink_event *event)
+{
+	int rc = 0;
+#ifdef CONFIG_XLINK_LOCAL_HOST
+	struct xlink_ipc_context ipc = {0};
+	phys_addr_t physaddr = 0;
+	dma_addr_t vpuaddr = 0;
+	u32 timeout = 0;
+	u32 link_id;
+	u16 chan;
+
+	if (!xmux || !event)
+		return X_LINK_ERROR;
+
+	link_id = event->link_id;
+	chan = event->header.chan;
+	ipc.chan = chan;
+
+	if (ipc.chan >= XLINK_IPC_MAX_CHANNELS)
+		return rc;
+
+	switch (event->header.type) {
+	case XLINK_WRITE_REQ:
+		if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+			/* Translate physical address to VPU address */
+			vpuaddr = phys_to_dma(xmux->dev, *(u32 *)event->data);
+			event->data = &vpuaddr;
+			rc = xlink_platform_write(IPC_INTERFACE,
+						  event->handle->sw_device_id,
+						  event->data,
+						  &event->header.size, 0, &ipc);
+		} else {
+			/* channel not open */
+			rc = X_LINK_ERROR;
+		}
+		break;
+	case XLINK_WRITE_VOLATILE_REQ:
+		if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+			ipc.is_volatile = 1;
+			rc = xlink_platform_write(IPC_INTERFACE,
+						  event->handle->sw_device_id,
+						  event->data,
+						  &event->header.size, 0, &ipc);
+		} else {
+			/* channel not open */
+			rc = X_LINK_ERROR;
+		}
+		break;
+	case XLINK_READ_REQ:
+		if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+			/* if channel has receive blocking set,
+			 * then set timeout to U32_MAX
+			 */
+			if (xmux->channels[link_id][chan].mode == RXB_TXN ||
+			    xmux->channels[link_id][chan].mode == RXB_TXB) {
+				timeout = U32_MAX;
+			} else {
+				timeout = xmux->channels[link_id][chan].timeout;
+			}
+			rc = xlink_platform_read(IPC_INTERFACE,
+						 event->handle->sw_device_id,
+						 &vpuaddr,
+						 (size_t *)event->length,
+						 timeout, &ipc);
+			/* Translate VPU address to physical address */
+			physaddr = dma_to_phys(xmux->dev, vpuaddr);
+			*(phys_addr_t *)event->pdata = physaddr;
+		} else {
+			/* channel not open */
+			rc = X_LINK_ERROR;
+		}
+		break;
+	case XLINK_READ_TO_BUFFER_REQ:
+		if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+			/* if channel has receive blocking set,
+			 * then set timeout to U32_MAX
+			 */
+			if (xmux->channels[link_id][chan].mode == RXB_TXN ||
+			    xmux->channels[link_id][chan].mode == RXB_TXB) {
+				timeout = U32_MAX;
+			} else {
+				timeout = xmux->channels[link_id][chan].timeout;
+			}
+			ipc.is_volatile = 1;
+			rc = xlink_platform_read(IPC_INTERFACE,
+						 event->handle->sw_device_id,
+						 event->data,
+						 (size_t *)event->length,
+						 timeout, &ipc);
+			if (rc || *event->length > XLINK_MAX_BUF_SIZE)
+				rc = X_LINK_ERROR;
+		} else {
+			/* channel not open */
+			rc = X_LINK_ERROR;
+		}
+		break;
+	case XLINK_RELEASE_REQ:
+		break;
+	case XLINK_OPEN_CHANNEL_REQ:
+		if (xmux->channels[link_id][chan].ipc_status == CHAN_CLOSED) {
+			xmux->channels[link_id][chan].size = event->header.size;
+			xmux->channels[link_id][chan].timeout = event->header.timeout;
+			xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+			rc = xlink_platform_open_channel(IPC_INTERFACE,
+							 event->handle->sw_device_id,
+							 chan);
+			if (rc)
+				rc = X_LINK_ERROR;
+			else
+				xmux->channels[link_id][chan].ipc_status = CHAN_OPEN;
+		} else {
+			/* channel already open */
+			rc = X_LINK_ALREADY_OPEN;
+		}
+		break;
+	case XLINK_CLOSE_CHANNEL_REQ:
+		if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+			rc = xlink_platform_close_channel(IPC_INTERFACE,
+							  event->handle->sw_device_id,
+							  chan);
+			if (rc)
+				rc = X_LINK_ERROR;
+			else
+				xmux->channels[link_id][chan].ipc_status = CHAN_CLOSED;
+		} else {
+			/* can't close channel not open */
+			rc = X_LINK_ERROR;
+		}
+		break;
+	case XLINK_PING_REQ:
+	case XLINK_WRITE_RESP:
+	case XLINK_WRITE_VOLATILE_RESP:
+	case XLINK_READ_RESP:
+	case XLINK_READ_TO_BUFFER_RESP:
+	case XLINK_RELEASE_RESP:
+	case XLINK_OPEN_CHANNEL_RESP:
+	case XLINK_CLOSE_CHANNEL_RESP:
+	case XLINK_PING_RESP:
+		break;
+	default:
+		rc = X_LINK_ERROR;
+	}
+#else
+	rc = 0;
+#endif // CONFIG_XLINK_LOCAL_HOST
+	return rc;
+}
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.h b/drivers/misc/xlink-core/xlink-multiplexer.h
new file mode 100644
index 000000000000..c978e5683b45
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-multiplexer.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Multiplexer.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_MULTIPLEXER_H
+#define __XLINK_MULTIPLEXER_H
+
+#include "xlink-defs.h"
+
+enum xlink_error xlink_multiplexer_init(void *dev);
+
+enum xlink_error xlink_multiplexer_connect(u32 link_id);
+
+enum xlink_error xlink_multiplexer_disconnect(u32 link_id);
+
+enum xlink_error xlink_multiplexer_destroy(void);
+
+enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
+				      int *event_queued);
+
+enum xlink_error xlink_multiplexer_rx(struct xlink_event *event);
+
+enum xlink_error xlink_passthrough(struct xlink_event *event);
+
+void *find_allocated_buffer(dma_addr_t paddr);
+
+int unregister_allocated_buffer(void *buf, dma_addr_t paddr);
+
+int core_release_packet_from_channel(u32 link_id, uint16_t chan,
+				     uint8_t * const addr);
+
+#endif /* __XLINK_MULTIPLEXER_H */
diff --git a/drivers/misc/xlink-core/xlink-platform.c b/drivers/misc/xlink-core/xlink-platform.c
new file mode 100644
index 000000000000..c34b69ee206b
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-platform.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+#include <linux/xlink_drv_inf.h>
+
+#include "xlink-platform.h"
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#else /* !CONFIG_XLINK_LOCAL_HOST */
+
+static inline int xlink_ipc_connect(u32 sw_device_id)
+{ return -1; }
+
+static inline int xlink_ipc_write(u32 sw_device_id, void *data,
+				  size_t * const size, u32 timeout, void *context)
+{ return -1; }
+
+static inline int xlink_ipc_read(u32 sw_device_id, void *data,
+				 size_t * const size, u32 timeout, void *context)
+{ return -1; }
+
+static inline int xlink_ipc_open_channel(u32 sw_device_id,
+					 u32 channel)
+{ return -1; }
+
+static inline int xlink_ipc_close_channel(u32 sw_device_id,
+					  u32 channel)
+{ return -1; }
+
+#endif /* CONFIG_XLINK_LOCAL_HOST */
+
+/*
+ * xlink low-level driver interface arrays
+ *
+ * note: array indices based on xlink_interface enum definition
+ */
+
+static int (*connect_fcts[NMB_OF_INTERFACES])(u32) = {
+		xlink_ipc_connect, xlink_pcie_connect, NULL, NULL};
+
+static int (*write_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) = {
+		NULL, xlink_pcie_write, NULL, NULL};
+
+static int (*read_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) = {
+		NULL, xlink_pcie_read, NULL, NULL};
+
+static int (*open_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
+		xlink_ipc_open_channel, NULL, NULL, NULL};
+
+static int (*close_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
+		xlink_ipc_close_channel, NULL, NULL, NULL};
+
+/*
+ * xlink low-level driver interface
+ */
+
+int xlink_platform_connect(u32 interface, u32 sw_device_id)
+{
+	if (interface >= NMB_OF_INTERFACES || !connect_fcts[interface])
+		return -1;
+
+	return connect_fcts[interface](sw_device_id);
+}
+
+int xlink_platform_write(u32 interface, u32 sw_device_id, void *data,
+			 size_t * const size, u32 timeout, void *context)
+{
+	if (interface == IPC_INTERFACE)
+		return xlink_ipc_write(sw_device_id, data, size, timeout,
+				context);
+
+	if (interface >= NMB_OF_INTERFACES || !write_fcts[interface])
+		return -1;
+
+	return write_fcts[interface](sw_device_id, data, size, timeout);
+}
+
+int xlink_platform_read(u32 interface, u32 sw_device_id, void *data,
+			size_t * const size, u32 timeout, void *context)
+{
+	if (interface == IPC_INTERFACE)
+		return xlink_ipc_read(sw_device_id, data, size, timeout,
+				context);
+
+	if (interface >= NMB_OF_INTERFACES || !read_fcts[interface])
+		return -1;
+
+	return read_fcts[interface](sw_device_id, data, size, timeout);
+}
+
+int xlink_platform_open_channel(u32 interface, u32 sw_device_id,
+				u32 channel)
+{
+	if (interface >= NMB_OF_INTERFACES || !open_chan_fcts[interface])
+		return -1;
+
+	return open_chan_fcts[interface](sw_device_id, channel);
+}
+
+int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
+				 u32 channel)
+{
+	if (interface >= NMB_OF_INTERFACES || !close_chan_fcts[interface])
+		return -1;
+
+	return close_chan_fcts[interface](sw_device_id, channel);
+}
+
+void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
+			      u32 size, u32 alignment,
+			      enum xlink_memory_region region)
+{
+#if defined(CONFIG_XLINK_PSS) || !defined(CONFIG_XLINK_LOCAL_HOST)
+	*handle = 0;
+	return kzalloc(size, GFP_KERNEL);
+#else
+	void *p;
+
+	if (region == XLINK_CMA_MEMORY) {
+		// size needs to be at least 4097 to be allocated from CMA
+		size = (size < PAGE_SIZE * 2) ? (PAGE_SIZE * 2) : size;
+		p = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
+		return p;
+	}
+	*handle = 0;
+	return kzalloc(size, GFP_KERNEL);
+#endif
+}
+
+void xlink_platform_deallocate(struct device *dev, void *buf,
+			       dma_addr_t handle, u32 size, u32 alignment,
+			       enum xlink_memory_region region)
+{
+#if defined(CONFIG_XLINK_PSS) || !defined(CONFIG_XLINK_LOCAL_HOST)
+	kfree(buf);
+#else
+	if (region == XLINK_CMA_MEMORY) {
+		// size needs to be at least 4097 to be allocated from CMA
+		size = (size < PAGE_SIZE * 2) ? (PAGE_SIZE * 2) : size;
+		dma_free_coherent(dev, size, buf, handle);
+	} else {
+		kfree(buf);
+	}
+#endif
+}
diff --git a/drivers/misc/xlink-core/xlink-platform.h b/drivers/misc/xlink-core/xlink-platform.h
new file mode 100644
index 000000000000..2c7c4c418099
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-platform.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_PLATFORM_H
+#define __XLINK_PLATFORM_H
+
+#include "xlink-defs.h"
+
+int xlink_platform_connect(u32 interface, u32 sw_device_id);
+
+int xlink_platform_write(u32 interface, u32 sw_device_id, void *data,
+			 size_t * const size, u32 timeout, void *context);
+
+int xlink_platform_read(u32 interface, u32 sw_device_id, void *data,
+			size_t * const size, u32 timeout, void *context);
+
+int xlink_platform_reset_device(u32 interface, u32 sw_device_id);
+
+int xlink_platform_boot_device(u32 interface, u32 sw_device_id,
+			       const char *binary_name);
+
+int xlink_platform_get_device_name(u32 interface, u32 sw_device_id,
+				   char *device_name, size_t name_size);
+
+int xlink_platform_get_device_list(u32 interface, u32 *sw_device_id_list,
+				   u32 *num_devices);
+
+int xlink_platform_get_device_status(u32 interface, u32 sw_device_id,
+				     u32 *device_status);
+
+int xlink_platform_set_device_mode(u32 interface, u32 sw_device_id,
+				   u32 power_mode);
+
+int xlink_platform_get_device_mode(u32 interface, u32 sw_device_id,
+				   u32 *power_mode);
+
+int xlink_platform_open_channel(u32 interface,  u32 sw_device_id,
+				u32 channel);
+
+int xlink_platform_close_channel(u32 interface,  u32 sw_device_id,
+				 u32 channel);
+
+int xlink_platform_register_for_events(u32 interface, u32 sw_device_id,
+				       xlink_device_event_cb event_notif_fn);
+
+int xlink_platform_unregister_for_events(u32 interface, u32 sw_device_id);
+
+enum xlink_memory_region {
+	XLINK_NORMAL_MEMORY = 0,
+	XLINK_CMA_MEMORY
+};
+
+void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
+			      u32 size, u32 alignment,
+			      enum xlink_memory_region region);
+
+void xlink_platform_deallocate(struct device *dev, void *buf,
+			       dma_addr_t handle, u32 size, u32 alignment,
+			       enum xlink_memory_region region);
+
+#endif /* __XLINK_PLATFORM_H */
diff --git a/include/linux/xlink.h b/include/linux/xlink.h
new file mode 100644
index 000000000000..c22439d5aade
--- /dev/null
+++ b/include/linux/xlink.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Linux Kernel API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_H
+#define __XLINK_H
+
+#include <uapi/misc/xlink_uapi.h>
+
+enum xlink_dev_type {
+	HOST_DEVICE = 0,	/* used when communicating host to host */
+	VPUIP_DEVICE		/* used when communicating host to vpu ip */
+};
+
+struct xlink_handle {
+	u32 sw_device_id;		/* identifies a device in the system */
+	enum xlink_dev_type dev_type;	/* determines direction of comms */
+};
+
+enum xlink_opmode {
+	RXB_TXB = 0,	/* blocking read, blocking write */
+	RXN_TXN,	/* non-blocking read, non-blocking write */
+	RXB_TXN,	/* blocking read, non-blocking write */
+	RXN_TXB		/* non-blocking read, blocking write */
+};
+
+enum xlink_device_power_mode {
+	POWER_DEFAULT_NOMINAL_MAX = 0,	/* no load reduction, default mode */
+	POWER_SUBNOMINAL_HIGH,		/* slight load reduction */
+	POWER_MEDIUM,			/* average load reduction */
+	POWER_LOW,			/* significant load reduction */
+	POWER_MIN,			/* maximum load reduction */
+	POWER_SUSPENDED			/* power off or device suspend */
+};
+
+enum xlink_error {
+	X_LINK_SUCCESS = 0,		/* xlink operation completed successfully */
+	X_LINK_ALREADY_INIT,		/* xlink already initialized */
+	X_LINK_ALREADY_OPEN,		/* channel already open */
+	X_LINK_COMMUNICATION_NOT_OPEN,	/* operation on a closed channel */
+	X_LINK_COMMUNICATION_FAIL,	/* communication failure */
+	X_LINK_COMMUNICATION_UNKNOWN_ERROR, /* error unknown */
+	X_LINK_DEVICE_NOT_FOUND,	/* device specified not found */
+	X_LINK_TIMEOUT,			/* operation timed out */
+	X_LINK_ERROR,			/* parameter error */
+	X_LINK_CHAN_FULL		/* channel has reached fill level */
+};
+
+enum xlink_device_status {
+	XLINK_DEV_OFF = 0,	/* device is off */
+	XLINK_DEV_ERROR,	/* device HW failure is detected */
+	XLINK_DEV_BUSY,		/* device is busy and not available */
+	XLINK_DEV_RECOVERY,	/* device is in recovery mode */
+	XLINK_DEV_READY		/* device is available for use */
+};
+
+/* xlink API */
+
+typedef void (*xlink_event)(u16 chan);
+typedef int (*xlink_device_event_cb)(u32 sw_device_id, u32 event_type);
+
+enum xlink_error xlink_initialize(void);
+
+enum xlink_error xlink_connect(struct xlink_handle *handle);
+
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+				    u16 chan, enum xlink_opmode mode,
+				    u32 data_size, u32 timeout);
+
+enum xlink_error xlink_close_channel(struct xlink_handle *handle, u16 chan);
+
+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+				  u16 chan, u8 const *message, u32 size);
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+				      u16 chan, u8 const *message, u32 size);
+
+enum xlink_error xlink_read_data(struct xlink_handle *handle,
+				 u16 chan, u8 **message, u32 *size);
+
+enum xlink_error xlink_read_data_to_buffer(struct xlink_handle *handle,
+					   u16 chan, u8 * const message,
+					   u32 *size);
+
+enum xlink_error xlink_release_data(struct xlink_handle *handle,
+				    u16 chan, u8 * const data_addr);
+
+enum xlink_error xlink_disconnect(struct xlink_handle *handle);
+
+/* API functions to be implemented
+ *
+ * enum xlink_error xlink_write_crc_data(struct xlink_handle *handle,
+ *		u16 chan, u8 const *message, size_t * const size);
+ *
+ * enum xlink_error xlink_read_crc_data(struct xlink_handle *handle,
+ *		u16 chan, u8 **message, size_t * const size);
+ *
+ * enum xlink_error xlink_read_crc_data_to_buffer(struct xlink_handle *handle,
+ *		u16 chan, u8 * const message, size_t * const size);
+ *
+ * enum xlink_error xlink_reset_all(void);
+ *
+ */
+
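+/*
+ * Minimal usage sketch (illustrative only; the sw_device_id value and the
+ * channel number are made up, and error handling is abbreviated):
+ *
+ *	struct xlink_handle devh = {
+ *		.sw_device_id = 0x01000000,	// example PCIe device id
+ *		.dev_type = HOST_DEVICE,	// remote host to local host
+ *	};
+ *	const u8 msg[] = "hello";
+ *
+ *	if (xlink_connect(&devh) != X_LINK_SUCCESS)
+ *		return;
+ *	if (xlink_open_channel(&devh, 0x401, RXB_TXB, 128, 0) == X_LINK_SUCCESS) {
+ *		// payloads up to XLINK_MAX_BUF_SIZE use the volatile path
+ *		xlink_write_volatile(&devh, 0x401, msg, sizeof(msg));
+ *		xlink_close_channel(&devh, 0x401);
+ *	}
+ *	xlink_disconnect(&devh);
+ */
+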
+#endif /* __XLINK_H */
diff --git a/include/uapi/misc/xlink_uapi.h b/include/uapi/misc/xlink_uapi.h
new file mode 100644
index 000000000000..77794299c4a6
--- /dev/null
+++ b/include/uapi/misc/xlink_uapi.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * xlink Linux Kernel API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_UAPI_H
+#define __XLINK_UAPI_H
+
+#include <linux/types.h>
+
+#define XLINK_MAGIC 'x'
+#define XL_OPEN_CHANNEL			_IOW(XLINK_MAGIC, 1, void*)
+#define XL_READ_DATA			_IOW(XLINK_MAGIC, 2, void*)
+#define XL_WRITE_DATA			_IOW(XLINK_MAGIC, 3, void*)
+#define XL_CLOSE_CHANNEL		_IOW(XLINK_MAGIC, 4, void*)
+#define XL_WRITE_VOLATILE		_IOW(XLINK_MAGIC, 5, void*)
+#define XL_READ_TO_BUFFER		_IOW(XLINK_MAGIC, 6, void*)
+#define XL_START_VPU			_IOW(XLINK_MAGIC, 7, void*)
+#define XL_STOP_VPU				_IOW(XLINK_MAGIC, 8, void*)
+#define XL_RESET_VPU			_IOW(XLINK_MAGIC, 9, void*)
+#define XL_CONNECT				_IOW(XLINK_MAGIC, 10, void*)
+#define XL_RELEASE_DATA			_IOW(XLINK_MAGIC, 11, void*)
+#define XL_DISCONNECT			_IOW(XLINK_MAGIC, 12, void*)
+#define XL_WRITE_CONTROL_DATA	_IOW(XLINK_MAGIC, 13, void*)
+#define XL_DATA_READY_CALLBACK	_IOW(XLINK_MAGIC, 14, void*)
+#define XL_DATA_CONSUMED_CALLBACK	_IOW(XLINK_MAGIC, 15, void*)
+#define XL_GET_DEVICE_NAME		_IOW(XLINK_MAGIC, 16, void*)
+#define XL_GET_DEVICE_LIST		_IOW(XLINK_MAGIC, 17, void*)
+#define XL_GET_DEVICE_STATUS	_IOW(XLINK_MAGIC, 18, void*)
+#define XL_BOOT_DEVICE			_IOW(XLINK_MAGIC, 19, void*)
+#define XL_RESET_DEVICE			_IOW(XLINK_MAGIC, 20, void*)
+#define XL_GET_DEVICE_MODE		_IOW(XLINK_MAGIC, 21, void*)
+#define XL_SET_DEVICE_MODE		_IOW(XLINK_MAGIC, 22, void*)
+#define XL_REGISTER_DEV_EVENT	_IOW(XLINK_MAGIC, 23, void*)
+#define XL_UNREGISTER_DEV_EVENT	_IOW(XLINK_MAGIC, 24, void*)
+
+struct xlinkopenchannel {
+	void *handle;
+	__u16 chan;
+	int mode;
+	__u32 data_size;
+	__u32 timeout;
+	__u32 *return_code;
+};
+
+struct xlinkcallback {
+	void *handle;
+	__u16 chan;
+	void (*callback)(__u16 chan);
+	__u32 *return_code;
+};
+
+struct xlinkwritedata {
+	void *handle;
+	__u16 chan;
+	void const *pmessage;
+	__u32 size;
+	__u32 *return_code;
+};
+
+struct xlinkreaddata {
+	void *handle;
+	__u16 chan;
+	void *pmessage;
+	__u32 *size;
+	__u32 *return_code;
+};
+
+struct xlinkreadtobuffer {
+	void *handle;
+	__u16 chan;
+	void *pmessage;
+	__u32 *size;
+	__u32 *return_code;
+};
+
+struct xlinkconnect {
+	void *handle;
+	__u32 *return_code;
+};
+
+struct xlinkrelease {
+	void *handle;
+	__u16 chan;
+	void *addr;
+	__u32 *return_code;
+};
+
+struct xlinkstartvpu {
+	char *filename;
+	int namesize;
+	__u32 *return_code;
+};
+
+struct xlinkstopvpu {
+	__u32 *return_code;
+};
+
+struct xlinkgetdevicename {
+	void *handle;
+	char *name;
+	__u32 name_size;
+	__u32 *return_code;
+};
+
+struct xlinkgetdevicelist {
+	__u32 *sw_device_id_list;
+	__u32 *num_devices;
+	__u32 *return_code;
+};
+
+struct xlinkgetdevicestatus {
+	void *handle;
+	__u32 *device_status;
+	__u32 *return_code;
+};
+
+struct xlinkbootdevice {
+	void *handle;
+	const char *binary_name;
+	__u32 binary_name_size;
+	__u32 *return_code;
+};
+
+struct xlinkresetdevice {
+	void *handle;
+	__u32 *return_code;
+};
+
+struct xlinkdevmode {
+	void *handle;
+	int *device_mode;
+	__u32 *return_code;
+};
+
+struct xlinkregdevevent {
+	void *handle;
+	__u32  *event_list;
+	__u32  num_events;
+	__u32 *return_code;
+};
+
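+/*
+ * Minimal user-space sketch (illustrative only): the device node path and
+ * the user-space mirror of the kernel's struct xlink_handle are assumptions
+ * made for this example, and error handling is abbreviated.
+ *
+ *	struct { __u32 sw_device_id; int dev_type; } devh = { 0x01000000, 0 };
+ *	__u32 ret = 0;
+ *	struct xlinkconnect con = { .handle = &devh, .return_code = &ret };
+ *	struct xlinkopenchannel op = { .handle = &devh, .chan = 0x401,
+ *				       .mode = 0, .data_size = 128,
+ *				       .timeout = 0, .return_code = &ret };
+ *	__u8 msg[] = "hello";
+ *	struct xlinkwritedata wr = { .handle = &devh, .chan = 0x401,
+ *				     .pmessage = msg, .size = sizeof(msg),
+ *				     .return_code = &ret };
+ *	int fd = open("/dev/xlnk", O_RDWR);	// node name is an assumption
+ *
+ *	if (fd >= 0 &&
+ *	    !ioctl(fd, XL_CONNECT, &con) && !ret &&
+ *	    !ioctl(fd, XL_OPEN_CHANNEL, &op) && !ret)
+ *		ioctl(fd, XL_WRITE_VOLATILE, &wr);
+ */
+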
+#endif /* __XLINK_UAPI_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 21/34] xlink-core: Enable xlink protocol over pcie
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (19 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 20/34] xlink-core: Add xlink core driver xLink mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 22/34] xlink-core: Enable VPU IP management and runtime control mgross
                   ` (12 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly

From: Seamus Kelly <seamus.kelly@intel.com>

Enable host system access to the VPU over the xlink protocol carried
across PCIe by adding channel multiplexing and dispatching.  This
provides remote host communication channels over PCIe links.

Add xlink-dispatcher:
        creates tx and rx threads per link
        queues messages for transmission and handles received messages

Update the multiplexer to use the dispatcher:
        multiplexes channels over a single interface link, e.g. PCIe
        processes messages received by the dispatcher
        passes messages created by API calls to the dispatcher for
        transmission

A minimal sketch of the intended TX path is included below.

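The helper below is illustrative only (the function name is made up and
error handling is trimmed); the real call sites are in the multiplexer
changes in this patch:

  static enum xlink_error example_tx_over_pcie(struct xlink_link *link,
                                               u16 chan, void *data, u32 size)
  {
          struct xlink_event *event;

          event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
                                     chan, size, /* timeout */ 0);
          if (!event)
                  return X_LINK_ERROR;

          event->data = data;
          /*
           * Hand the event to the per-link dispatcher; its tx kthread
           * dequeues it and writes the header, then the payload, via
           * xlink_platform_write() on the PCIe interface.
           */
          return xlink_dispatcher_event_add(EVENT_TX, event);
  }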

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
---
 drivers/misc/xlink-core/Makefile            |   2 +-
 drivers/misc/xlink-core/xlink-core.c        |  35 +-
 drivers/misc/xlink-core/xlink-dispatcher.c  | 441 +++++++++++++++++
 drivers/misc/xlink-core/xlink-dispatcher.h  |  26 +
 drivers/misc/xlink-core/xlink-multiplexer.c | 498 +++++++++++++++++++-
 5 files changed, 999 insertions(+), 3 deletions(-)
 create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.c
 create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.h

diff --git a/drivers/misc/xlink-core/Makefile b/drivers/misc/xlink-core/Makefile
index e82b7c72b6b9..ee81f9d05f2b 100644
--- a/drivers/misc/xlink-core/Makefile
+++ b/drivers/misc/xlink-core/Makefile
@@ -2,4 +2,4 @@
 # Makefile for Keem Bay xlink Linux driver
 #
 obj-$(CONFIG_XLINK_CORE) += xlink.o
-xlink-objs += xlink-core.o xlink-multiplexer.o xlink-platform.o xlink-ioctl.o
+xlink-objs += xlink-core.o xlink-multiplexer.o xlink-dispatcher.o xlink-platform.o xlink-ioctl.o
diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
index 1a443f54786d..017d6776ce4c 100644
--- a/drivers/misc/xlink-core/xlink-core.c
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -21,6 +21,7 @@
 
 #include "xlink-core.h"
 #include "xlink-defs.h"
+#include "xlink-dispatcher.h"
 #include "xlink-ioctl.h"
 #include "xlink-multiplexer.h"
 #include "xlink-platform.h"
@@ -151,6 +152,12 @@ static int kmb_xlink_probe(struct platform_device *pdev)
 		goto r_multiplexer;
 	}
 
+	// initialize dispatcher
+	rc = xlink_dispatcher_init(xlink_dev->pdev);
+	if (rc != X_LINK_SUCCESS) {
+		pr_err("Dispatcher initialization failed\n");
+		goto r_dispatcher;
+	}
 	// initialize xlink data structure
 	xlink_dev->nmb_connected_links = 0;
 	mutex_init(&xlink_dev->lock);
@@ -168,7 +175,7 @@ static int kmb_xlink_probe(struct platform_device *pdev)
 	/*Allocating Major number*/
 	if ((alloc_chrdev_region(&xdev, 0, 1, "xlinkdev")) < 0) {
 		dev_info(&pdev->dev, "Cannot allocate major number\n");
-		goto r_multiplexer;
+		goto r_dispatcher;
 	}
 	dev_info(&pdev->dev, "Major = %d Minor = %d\n", MAJOR(xdev),
 		 MINOR(xdev));
@@ -205,6 +212,8 @@ static int kmb_xlink_probe(struct platform_device *pdev)
 	class_destroy(dev_class);
 r_class:
 	unregister_chrdev_region(xdev, 1);
+r_dispatcher:
+	xlink_dispatcher_destroy();
 r_multiplexer:
 	xlink_multiplexer_destroy();
 	return -1;
@@ -220,6 +229,10 @@ static int kmb_xlink_remove(struct platform_device *pdev)
 	rc = xlink_multiplexer_destroy();
 	if (rc != X_LINK_SUCCESS)
 		pr_err("Multiplexer destroy failed\n");
+	// stop dispatchers and destroy
+	rc = xlink_dispatcher_destroy();
+	if (rc != X_LINK_SUCCESS)
+		pr_err("Dispatcher destroy failed\n");
 
 	mutex_unlock(&xlink->lock);
 	mutex_destroy(&xlink->lock);
@@ -314,6 +327,14 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
 		link->handle = *handle;
 		xlink->nmb_connected_links++;
 		kref_init(&link->refcount);
+		if (interface != IPC_INTERFACE) {
+			// start dispatcher
+			rc = xlink_dispatcher_start(link->id, &link->handle);
+			if (rc) {
+				pr_err("dispatcher start failed\n");
+				goto r_cleanup;
+			}
+		}
 		// initialize multiplexer connection
 		rc = xlink_multiplexer_connect(link->id);
 		if (rc) {
@@ -649,6 +670,7 @@ EXPORT_SYMBOL(xlink_release_data);
 enum xlink_error xlink_disconnect(struct xlink_handle *handle)
 {
 	struct xlink_link *link;
+	int interface = NULL_INTERFACE;
 	enum xlink_error rc = X_LINK_ERROR;
 
 	if (!xlink || !handle)
@@ -661,6 +683,17 @@ enum xlink_error xlink_disconnect(struct xlink_handle *handle)
 	// decrement refcount, if count is 0 lock mutex and disconnect
 	if (kref_put_mutex(&link->refcount, release_after_kref_put,
 			   &xlink->lock)) {
+		// stop dispatcher
+		interface = get_interface_from_sw_device_id(link->handle.sw_device_id);
+		if (interface != IPC_INTERFACE) {
+			// stop dispatcher
+			rc = xlink_dispatcher_stop(link->id);
+			if (rc != X_LINK_SUCCESS) {
+				pr_err("dispatcher stop failed\n");
+				mutex_unlock(&xlink->lock);
+				return X_LINK_ERROR;
+			}
+		}
 		// deinitialize multiplexer connection
 		rc = xlink_multiplexer_disconnect(link->id);
 		if (rc) {
diff --git a/drivers/misc/xlink-core/xlink-dispatcher.c b/drivers/misc/xlink-core/xlink-dispatcher.c
new file mode 100644
index 000000000000..11ef8e4110ca
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-dispatcher.c
@@ -0,0 +1,441 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Dispatcher.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/semaphore.h>
+#include <linux/mutex.h>
+#include <linux/completion.h>
+#include <linux/sched/signal.h>
+#include <linux/platform_device.h>
+
+#include "xlink-dispatcher.h"
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+#define DISPATCHER_RX_TIMEOUT_MSEC 0
+
+/* state of a dispatcher servicing a link to a device */
+enum dispatcher_state {
+	XLINK_DISPATCHER_INIT,		/* initialized but not used */
+	XLINK_DISPATCHER_RUNNING,	/* currently servicing a link */
+	XLINK_DISPATCHER_STOPPED,	/* no longer servicing a link */
+	XLINK_DISPATCHER_ERROR,		/* fatal error */
+};
+
+/* queue for dispatcher tx thread event handling */
+struct event_queue {
+	u32 count;		/* number of events in the queue */
+	u32 capacity;		/* capacity of events in the queue */
+	struct list_head head;	/* head of event linked list */
+	struct mutex lock;	/* locks queue while accessing */
+};
+
+/* dispatcher servicing a single link to a device */
+struct dispatcher {
+	u32 link_id;			/* id of link being serviced */
+	enum dispatcher_state state;	/* state of the dispatcher */
+	struct xlink_handle *handle;	/* xlink device handle */
+	int interface;			/* underlying interface of link */
+	struct task_struct *rxthread;	/* kthread servicing rx */
+	struct task_struct *txthread;	/* kthread servicing tx */
+	struct event_queue queue;	/* xlink event queue */
+	struct semaphore event_sem;	/* signals tx kthread of events */
+	struct completion rx_done;	/* sync start/stop of rx kthread */
+	struct completion tx_done;	/* sync start/stop of tx thread */
+};
+
+/* xlink dispatcher system component */
+struct xlink_dispatcher {
+	struct dispatcher dispatchers[XLINK_MAX_CONNECTIONS];	/* one per link */
+	struct device *dev;					/* used to free tx buffers */
+	struct mutex lock;					/* serializes dispatcher start/stop */
+};
+
+/* global reference to the xlink dispatcher data structure */
+static struct xlink_dispatcher *xlinkd;
+
+/*
+ * Dispatcher Internal Functions
+ *
+ */
+
+static struct dispatcher *get_dispatcher_by_id(u32 id)
+{
+	if (!xlinkd)
+		return NULL;
+
+	if (id >= XLINK_MAX_CONNECTIONS)
+		return NULL;
+
+	return &xlinkd->dispatchers[id];
+}
+
+static u32 event_generate_id(void)
+{
+	static u32 id = 0xa; // TODO: temporary solution
+
+	return id++;
+}
+
+static struct xlink_event *event_dequeue(struct event_queue *queue)
+{
+	struct xlink_event *event = NULL;
+
+	mutex_lock(&queue->lock);
+	if (!list_empty(&queue->head)) {
+		event = list_first_entry(&queue->head, struct xlink_event,
+					 list);
+		list_del(&event->list);
+		queue->count--;
+	}
+	mutex_unlock(&queue->lock);
+	return event;
+}
+
+static int event_enqueue(struct event_queue *queue, struct xlink_event *event)
+{
+	int rc = -1;
+
+	mutex_lock(&queue->lock);
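+	/* accept new events only while the queue is below ~70% of capacity */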
+	if (queue->count < ((queue->capacity / 10) * 7)) {
+		list_add_tail(&event->list, &queue->head);
+		queue->count++;
+		rc = 0;
+	}
+	mutex_unlock(&queue->lock);
+	return rc;
+}
+
+static struct xlink_event *dispatcher_event_get(struct dispatcher *disp)
+{
+	int rc = 0;
+	struct xlink_event *event = NULL;
+
+	// wait until an event is available
+	rc = down_interruptible(&disp->event_sem);
+	// dequeue and return next event to process
+	if (!rc)
+		event = event_dequeue(&disp->queue);
+	return event;
+}
+
+static int is_valid_event_header(struct xlink_event *event)
+{
+	return event->header.magic == XLINK_EVENT_HEADER_MAGIC;
+}
+
+static int dispatcher_event_send(struct xlink_event *event)
+{
+	size_t event_header_size = sizeof(event->header);
+	int rc;
+
+	// write event header
+	// printk(KERN_DEBUG "Sending event: type = 0x%x, id = 0x%x\n",
+			// event->header.type, event->header.id);
+	rc = xlink_platform_write(event->interface,
+				  event->handle->sw_device_id, &event->header,
+				  &event_header_size, event->header.timeout, NULL);
+	if (rc || event_header_size != sizeof(event->header)) {
+		pr_err("Write header failed %d\n", rc);
+		return rc;
+	}
+	if (event->header.type == XLINK_WRITE_REQ ||
+	    event->header.type == XLINK_WRITE_VOLATILE_REQ) {
+		// write event data
+		rc = xlink_platform_write(event->interface,
+					  event->handle->sw_device_id, event->data,
+					  &event->header.size, event->header.timeout,
+					  NULL);
+		if (rc)
+			pr_err("Write data failed %d\n", rc);
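+		/*
+		 * Buffers handed over on behalf of user space (user_data == 1)
+		 * are owned by the dispatcher and freed once sent; a non-zero
+		 * paddr means the buffer came from the CMA region.
+		 */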
+		if (event->user_data == 1) {
+			if (event->paddr != 0) {
+				xlink_platform_deallocate(xlinkd->dev,
+							  event->data, event->paddr,
+							  event->header.size,
+							  XLINK_PACKET_ALIGNMENT,
+							  XLINK_CMA_MEMORY);
+			} else {
+				xlink_platform_deallocate(xlinkd->dev,
+							  event->data, event->paddr,
+							  event->header.size,
+							  XLINK_PACKET_ALIGNMENT,
+							  XLINK_NORMAL_MEMORY);
+			}
+		}
+	}
+	return rc;
+}
+
+static int xlink_dispatcher_rxthread(void *context)
+{
+	struct dispatcher *disp = (struct dispatcher *)context;
+	struct xlink_event *event;
+	size_t size;
+	int rc;
+
+	// printk(KERN_DEBUG "dispatcher rxthread started\n");
+	event = xlink_create_event(disp->link_id, 0, disp->handle, 0, 0, 0);
+	if (!event)
+		return -1;
+
+	allow_signal(SIGTERM); // allow thread termination while waiting on sem
+	complete(&disp->rx_done);
+	while (!kthread_should_stop()) {
+		size = sizeof(event->header);
+		rc = xlink_platform_read(disp->interface,
+					 disp->handle->sw_device_id,
+					 &event->header, &size,
+					 DISPATCHER_RX_TIMEOUT_MSEC, NULL);
+		if (rc || size != (int)sizeof(event->header))
+			continue;
+		if (is_valid_event_header(event)) {
+			event->link_id = disp->link_id;
+			rc = xlink_multiplexer_rx(event);
+			if (!rc) {
+				event = xlink_create_event(disp->link_id, 0,
+							   disp->handle, 0, 0,
+							   0);
+				if (!event)
+					return -1;
+			}
+		}
+	}
+	// printk(KERN_INFO "dispatcher rxthread stopped\n");
+	complete(&disp->rx_done);
+	do_exit(0);
+	return 0;
+}
+
+static int xlink_dispatcher_txthread(void *context)
+{
+	struct dispatcher *disp = (struct dispatcher *)context;
+	struct xlink_event *event;
+
+	// printk(KERN_DEBUG "dispatcher txthread started\n");
+	allow_signal(SIGTERM); // allow thread termination while waiting on sem
+	complete(&disp->tx_done);
+	while (!kthread_should_stop()) {
+		event = dispatcher_event_get(disp);
+		if (!event)
+			continue;
+
+		dispatcher_event_send(event);
+		xlink_destroy_event(event); // free handled event
+	}
+	// printk(KERN_INFO "dispatcher txthread stopped\n");
+	complete(&disp->tx_done);
+	do_exit(0);
+	return 0;
+}
+
+/*
+ * Dispatcher External Functions
+ *
+ */
+
+enum xlink_error xlink_dispatcher_init(void *dev)
+{
+	struct platform_device *plat_dev = (struct platform_device *)dev;
+	int i;
+
+	xlinkd = kzalloc(sizeof(*xlinkd), GFP_KERNEL);
+	if (!xlinkd)
+		return X_LINK_ERROR;
+
+	xlinkd->dev = &plat_dev->dev;
+	for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+		xlinkd->dispatchers[i].link_id = i;
+		sema_init(&xlinkd->dispatchers[i].event_sem, 0);
+		init_completion(&xlinkd->dispatchers[i].rx_done);
+		init_completion(&xlinkd->dispatchers[i].tx_done);
+		INIT_LIST_HEAD(&xlinkd->dispatchers[i].queue.head);
+		mutex_init(&xlinkd->dispatchers[i].queue.lock);
+		xlinkd->dispatchers[i].queue.count = 0;
+		xlinkd->dispatchers[i].queue.capacity =
+				XLINK_EVENT_QUEUE_CAPACITY;
+		xlinkd->dispatchers[i].state = XLINK_DISPATCHER_INIT;
+	}
+	mutex_init(&xlinkd->lock);
+
+	return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_dispatcher_start(int id, struct xlink_handle *handle)
+{
+	struct dispatcher *disp;
+
+	mutex_lock(&xlinkd->lock);
+	// get dispatcher by link id
+	disp = get_dispatcher_by_id(id);
+	if (!disp)
+		goto r_error;
+
+	// cannot start a running or failed dispatcher
+	if (disp->state == XLINK_DISPATCHER_RUNNING ||
+	    disp->state == XLINK_DISPATCHER_ERROR)
+		goto r_error;
+
+	// set the dispatcher context
+	disp->handle = handle;
+	disp->interface = get_interface_from_sw_device_id(handle->sw_device_id);
+
+	// run dispatcher thread to handle and write outgoing packets
+	disp->txthread = kthread_run(xlink_dispatcher_txthread,
+				     (void *)disp, "txthread");
+	if (IS_ERR(disp->txthread)) { // kthread_run() returns ERR_PTR() on failure
+		pr_err("xlink txthread creation failed\n");
+		goto r_txthread;
+	}
+	wait_for_completion(&disp->tx_done);
+	disp->state = XLINK_DISPATCHER_RUNNING;
+	// run dispatcher thread to read and handle incoming packets
+	disp->rxthread = kthread_run(xlink_dispatcher_rxthread,
+				     (void *)disp, "rxthread");
+	if (IS_ERR(disp->rxthread)) { // kthread_run() returns ERR_PTR() on failure
+		pr_err("xlink rxthread creation failed\n");
+		goto r_rxthread;
+	}
+	wait_for_completion(&disp->rx_done);
+	mutex_unlock(&xlinkd->lock);
+
+	return X_LINK_SUCCESS;
+
+r_rxthread:
+	kthread_stop(disp->txthread);
+r_txthread:
+	disp->state = XLINK_DISPATCHER_STOPPED;
+r_error:
+	mutex_unlock(&xlinkd->lock);
+	return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
+					    struct xlink_event *event)
+{
+	struct dispatcher *disp;
+	int rc;
+
+	// get dispatcher by handle
+	disp = get_dispatcher_by_id(event->link_id);
+	if (!disp)
+		return X_LINK_ERROR;
+
+	// only add events if the dispatcher is running
+	if (disp->state != XLINK_DISPATCHER_RUNNING)
+		return X_LINK_ERROR;
+
+	// configure event and add to queue
+	if (origin == EVENT_TX)
+		event->header.id = event_generate_id();
+	event->origin = origin;
+	rc = event_enqueue(&disp->queue, event);
+	if (rc)
+		return X_LINK_CHAN_FULL;
+
+	// notify dispatcher tx thread of new event
+	up(&disp->event_sem);
+	return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_dispatcher_stop(int id)
+{
+	struct dispatcher *disp;
+	int rc;
+
+	mutex_lock(&xlinkd->lock);
+	// get dispatcher by link id
+	disp = get_dispatcher_by_id(id);
+	if (!disp)
+		goto r_error;
+
+	// don't stop dispatcher if not started
+	if (disp->state != XLINK_DISPATCHER_RUNNING)
+		goto r_error;
+
+	if (disp->rxthread) {
+		// stop dispatcher rx thread
+		send_sig(SIGTERM, disp->rxthread, 0);
+		rc = kthread_stop(disp->rxthread);
+		if (rc)
+			goto r_thread;
+	}
+	wait_for_completion(&disp->rx_done);
+	if (disp->txthread) {
+		// stop dispatcher tx thread
+		send_sig(SIGTERM, disp->txthread, 0);
+		rc = kthread_stop(disp->txthread);
+		if (rc)
+			goto r_thread;
+	}
+	wait_for_completion(&disp->tx_done);
+	disp->state = XLINK_DISPATCHER_STOPPED;
+	mutex_unlock(&xlinkd->lock);
+	return X_LINK_SUCCESS;
+
+r_thread:
+	// dispatcher now in error state and cannot be used
+	disp->state = XLINK_DISPATCHER_ERROR;
+r_error:
+	mutex_unlock(&xlinkd->lock);
+	return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_dispatcher_destroy(void)
+{
+	enum xlink_event_type type;
+	struct xlink_event *event;
+	struct dispatcher *disp;
+	int i;
+
+	for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+		// get dispatcher by link id
+		disp = get_dispatcher_by_id(i);
+		if (!disp)
+			continue;
+
+		// stop all running dispatchers
+		if (disp->state == XLINK_DISPATCHER_RUNNING)
+			xlink_dispatcher_stop(i);
+
+		// empty queues of all used dispatchers
+		if (disp->state == XLINK_DISPATCHER_INIT)
+			continue;
+
+		// deallocate remaining events in queue
+		while (!list_empty(&disp->queue.head)) {
+			event = event_dequeue(&disp->queue);
+			if (!event)
+				continue;
+			type = event->header.type;
+			if (type == XLINK_WRITE_REQ ||
+			    type == XLINK_WRITE_VOLATILE_REQ) {
+				// deallocate event data
+				xlink_platform_deallocate(xlinkd->dev,
+							  event->data,
+							  event->paddr,
+							  event->header.size,
+							  XLINK_PACKET_ALIGNMENT,
+							  XLINK_NORMAL_MEMORY);
+			}
+			xlink_destroy_event(event);
+		}
+		// destroy dispatcher
+		mutex_destroy(&disp->queue.lock);
+	}
+	mutex_destroy(&xlinkd->lock);
+	return X_LINK_SUCCESS;
+}
diff --git a/drivers/misc/xlink-core/xlink-dispatcher.h b/drivers/misc/xlink-core/xlink-dispatcher.h
new file mode 100644
index 000000000000..d1458e7a4ab7
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-dispatcher.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Dispatcher.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_DISPATCHER_H
+#define __XLINK_DISPATCHER_H
+
+#include "xlink-defs.h"
+
+enum xlink_error xlink_dispatcher_init(void *dev);
+
+enum xlink_error xlink_dispatcher_start(int id, struct xlink_handle *handle);
+
+enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
+					    struct xlink_event *event);
+
+enum xlink_error xlink_dispatcher_stop(int id);
+
+enum xlink_error xlink_dispatcher_destroy(void);
+
+enum xlink_error xlink_dispatcher_ipc_passthru_event_add(struct xlink_event *event);
+
+#endif /* __XLINK_DISPATCHER_H */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
index 9b1ed008bb56..339734826f3e 100644
--- a/drivers/misc/xlink-core/xlink-multiplexer.c
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -27,6 +27,7 @@
 #include <linux/xlink-ipc.h>
 #endif
 
+#include "xlink-dispatcher.h"
 #include "xlink-multiplexer.h"
 #include "xlink-platform.h"
 
@@ -165,6 +166,32 @@ static int is_channel_for_device(u16 chan, u32 sw_device_id,
 	return 0;
 }
 
+static int is_enough_space_in_channel(struct open_channel *opchan,
+				      u32 size)
+{
+	if (opchan->tx_packet_level >= ((XLINK_PACKET_QUEUE_CAPACITY / 100) * THR_UPR)) {
+		pr_info("Packet queue limit reached\n");
+		return 0;
+	}
+	if (opchan->tx_up_limit == 0) {
+		if ((opchan->tx_fill_level + size)
+				> ((opchan->chan->size / 100) * THR_UPR)) {
+			opchan->tx_up_limit = 1;
+			return 0;
+		}
+	}
+	if (opchan->tx_up_limit == 1) {
+		if ((opchan->tx_fill_level + size)
+				< ((opchan->chan->size / 100) * THR_LWR)) {
+			opchan->tx_up_limit = 0;
+			return 1;
+		} else {
+			return 0;
+		}
+	}
+	return 1;
+}
+
 static int is_control_channel(u16 chan)
 {
 	if (chan == IP_CONTROL_CHANNEL || chan == VPU_CONTROL_CHANNEL)
@@ -173,6 +200,51 @@ static int is_control_channel(u16 chan)
 		return 0;
 }
 
+static struct open_channel *get_channel(u32 link_id, u16 chan)
+{
+	if (!xmux->channels[link_id][chan].opchan)
+		return NULL;
+	mutex_lock(&xmux->channels[link_id][chan].opchan->lock);
+	return xmux->channels[link_id][chan].opchan;
+}
+
+static void release_channel(struct open_channel *opchan)
+{
+	if (opchan)
+		mutex_unlock(&opchan->lock);
+}
+
+static int add_packet_to_channel(struct open_channel *opchan,
+				 struct packet_queue *queue,
+				 void *buffer, u32 size,
+				 dma_addr_t paddr)
+{
+	struct packet *pkt;
+
+	if (queue->count < queue->capacity) {
+		pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
+		if (!pkt)
+			return X_LINK_ERROR;
+		pkt->data = buffer;
+		pkt->length = size;
+		pkt->paddr = paddr;
+		list_add_tail(&pkt->list, &queue->head);
+		queue->count++;
+		opchan->rx_fill_level += pkt->length;
+	}
+	return X_LINK_SUCCESS;
+}
+
+static struct packet *get_packet_from_channel(struct packet_queue *queue)
+{
+	struct packet *pkt = NULL;
+	// get first packet in queue
+	if (!list_empty(&queue->head))
+		pkt = list_first_entry(&queue->head, struct packet, list);
+
+	return pkt;
+}
+
 static int release_packet_from_channel(struct open_channel *opchan,
 				       struct packet_queue *queue,
 				       u8 * const addr,	u32 *size)
@@ -355,15 +427,46 @@ enum xlink_error xlink_multiplexer_destroy(void)
 	return X_LINK_SUCCESS;
 }
 
+static int compl_wait(struct completion *compl, struct open_channel *opchan)
+{
+	int rc;
+	unsigned long tout = msecs_to_jiffies(opchan->chan->timeout);
+
+	if (opchan->chan->timeout == 0) {
+		mutex_unlock(&opchan->lock);
+		rc = wait_for_completion_interruptible(compl);
+		mutex_lock(&opchan->lock);
+		if (rc < 0)	// wait interrupted
+			rc = X_LINK_ERROR;
+	} else {
+		mutex_unlock(&opchan->lock);
+		rc = wait_for_completion_interruptible_timeout(compl, tout);
+		mutex_lock(&opchan->lock);
+		if (rc == 0)
+			rc = X_LINK_TIMEOUT;
+		else if (rc < 0)	// wait interrupted
+			rc = X_LINK_ERROR;
+		else if (rc > 0)
+			rc = X_LINK_SUCCESS;
+	}
+	return rc;
+}
+
 enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
 				      int *event_queued)
 {
+	struct open_channel *opchan = NULL;
+	struct packet *pkt = NULL;
 	int rc = X_LINK_SUCCESS;
+	u32 link_id = 0;
+	u32 size = 0;
 	u16 chan = 0;
+	u32 save_timeout = 0;
 
 	if (!xmux || !event)
 		return X_LINK_ERROR;
 
+	link_id = event->link_id;
 	chan = event->header.chan;
 
 	// verify channel ID is in range
@@ -379,9 +482,402 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
 	if (is_control_channel(chan))
 		return X_LINK_ERROR;
 
-	if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE)
+	if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE) {
 		// event should be handled by passthrough
 		rc = xlink_passthrough(event);
+		return rc;
+	}
+	// event should be handled by dispatcher
+	switch (event->header.type) {
+	case XLINK_WRITE_REQ:
+	case XLINK_WRITE_VOLATILE_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan || opchan->chan->status != CHAN_OPEN) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			event->header.timeout = opchan->chan->timeout;
+			while (!is_enough_space_in_channel(opchan,
+							   event->header.size)) {
+				if (opchan->chan->mode == RXN_TXB ||
+				    opchan->chan->mode == RXB_TXB) {
+					// channel is blocking,
+					// wait for packet to be released
+					rc = compl_wait(&opchan->pkt_released, opchan);
+				} else {
+					rc = X_LINK_CHAN_FULL;
+					break;
+				}
+			}
+			if (rc == X_LINK_SUCCESS) {
+				opchan->tx_fill_level += event->header.size;
+				opchan->tx_packet_level++;
+				xlink_dispatcher_event_add(EVENT_TX, event);
+				*event_queued = 1;
+				if (opchan->chan->mode == RXN_TXB ||
+				    opchan->chan->mode == RXB_TXB) {
+					// channel is blocking,
+					// wait for packet to be consumed;
+					// compl_wait() drops and re-takes
+					// opchan->lock itself
+					rc = compl_wait(&opchan->pkt_consumed, opchan);
+				}
+			}
+		}
+		release_channel(opchan);
+		break;
+	case XLINK_READ_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan || opchan->chan->status != CHAN_OPEN) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			event->header.timeout = opchan->chan->timeout;
+			if (opchan->chan->mode == RXB_TXN ||
+			    opchan->chan->mode == RXB_TXB) {
+				// channel is blocking, wait for packet to
+				// become available; compl_wait() drops and
+				// re-takes opchan->lock itself
+				rc = compl_wait(&opchan->pkt_available, opchan);
+			}
+			if (rc == X_LINK_SUCCESS) {
+				pkt = get_packet_from_channel(&opchan->rx_queue);
+				if (pkt) {
+					*(u32 **)event->pdata = (u32 *)pkt->data;
+					*event->length = pkt->length;
+					xlink_dispatcher_event_add(EVENT_TX, event);
+					*event_queued = 1;
+				} else {
+					rc = X_LINK_ERROR;
+				}
+			}
+		}
+		release_channel(opchan);
+		break;
+	case XLINK_READ_TO_BUFFER_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan || opchan->chan->status != CHAN_OPEN) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			event->header.timeout = opchan->chan->timeout;
+			if (opchan->chan->mode == RXB_TXN ||
+			    opchan->chan->mode == RXB_TXB) {
+				// channel is blocking, wait for packet to
+				// become available; compl_wait() drops and
+				// re-takes opchan->lock itself
+				rc = compl_wait(&opchan->pkt_available, opchan);
+			}
+			if (rc == X_LINK_SUCCESS) {
+				pkt = get_packet_from_channel(&opchan->rx_queue);
+				if (pkt) {
+					memcpy(event->data, pkt->data, pkt->length);
+					*event->length = pkt->length;
+					xlink_dispatcher_event_add(EVENT_TX, event);
+					*event_queued = 1;
+				} else {
+					rc = X_LINK_ERROR;
+				}
+			}
+		}
+		release_channel(opchan);
+		break;
+	case XLINK_RELEASE_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			rc = release_packet_from_channel(opchan,
+							 &opchan->rx_queue,
+							 event->data,
+							 &size);
+			if (rc) {
+				rc = X_LINK_ERROR;
+			} else {
+				event->header.size = size;
+				xlink_dispatcher_event_add(EVENT_TX, event);
+				*event_queued = 1;
+			}
+		}
+		release_channel(opchan);
+		break;
+	case XLINK_OPEN_CHANNEL_REQ:
+		if (xmux->channels[link_id][chan].status == CHAN_CLOSED) {
+			xmux->channels[link_id][chan].size = event->header.size;
+			xmux->channels[link_id][chan].timeout = event->header.timeout;
+			xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+			rc = multiplexer_open_channel(link_id, chan);
+			if (rc) {
+				rc = X_LINK_ERROR;
+			} else {
+				opchan = get_channel(link_id, chan);
+				if (!opchan) {
+					rc = X_LINK_COMMUNICATION_FAIL;
+				} else {
+					xlink_dispatcher_event_add(EVENT_TX, event);
+					*event_queued = 1;
+					// compl_wait() drops opchan->lock while waiting
+					save_timeout = opchan->chan->timeout;
+					opchan->chan->timeout = OPEN_CHANNEL_TIMEOUT_MSEC;
+					rc = compl_wait(&opchan->opened, opchan);
+					opchan->chan->timeout = save_timeout;
+					if (rc == 0) {
+						xmux->channels[link_id][chan].status = CHAN_OPEN;
+						release_channel(opchan);
+					} else {
+						multiplexer_close_channel(opchan);
+					}
+				}
+			}
+		} else if (xmux->channels[link_id][chan].status == CHAN_OPEN_PEER) {
+			/* channel already open */
+			xmux->channels[link_id][chan].status = CHAN_OPEN; // opened locally
+			xmux->channels[link_id][chan].size = event->header.size;
+			xmux->channels[link_id][chan].timeout = event->header.timeout;
+			xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+			rc = multiplexer_open_channel(link_id, chan);
+		} else {
+			/* channel already open */
+			rc = X_LINK_ALREADY_OPEN;
+		}
+		break;
+	case XLINK_CLOSE_CHANNEL_REQ:
+		if (xmux->channels[link_id][chan].status == CHAN_OPEN) {
+			opchan = get_channel(link_id, chan);
+			if (!opchan)
+				return X_LINK_COMMUNICATION_FAIL;
+			rc = multiplexer_close_channel(opchan);
+			if (rc)
+				rc = X_LINK_ERROR;
+			else
+				xmux->channels[link_id][chan].status = CHAN_CLOSED;
+		} else {
+			/* can't close channel not open */
+			rc = X_LINK_ERROR;
+		}
+		break;
+	case XLINK_PING_REQ:
+		break;
+	case XLINK_WRITE_RESP:
+	case XLINK_WRITE_VOLATILE_RESP:
+	case XLINK_READ_RESP:
+	case XLINK_READ_TO_BUFFER_RESP:
+	case XLINK_RELEASE_RESP:
+	case XLINK_OPEN_CHANNEL_RESP:
+	case XLINK_CLOSE_CHANNEL_RESP:
+	case XLINK_PING_RESP:
+	default:
+		rc = X_LINK_ERROR;
+	}
+	return rc;
+}
+
+enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
+{
+	struct xlink_event *passthru_event = NULL;
+	struct open_channel *opchan = NULL;
+	int rc = X_LINK_SUCCESS;
+	dma_addr_t paddr = 0;
+	void *buffer = NULL;
+	size_t size = 0;
+	u32 link_id;
+	u16 chan;
+
+	if (!xmux || !event)
+		return X_LINK_ERROR;
+
+	link_id = event->link_id;
+	chan = event->header.chan;
+
+	switch (event->header.type) {
+	case XLINK_WRITE_REQ:
+	case XLINK_WRITE_VOLATILE_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			// if we receive data on a closed channel - flush/read the data
+			buffer = xlink_platform_allocate(xmux->dev, &paddr,
+							 event->header.size,
+							 XLINK_PACKET_ALIGNMENT,
+							 XLINK_NORMAL_MEMORY);
+			if (buffer) {
+				size = event->header.size;
+				xlink_platform_read(event->interface,
+						    event->handle->sw_device_id,
+						    buffer, &size, 1000, NULL);
+				xlink_platform_deallocate(xmux->dev, buffer,
+							  paddr,
+							  event->header.size,
+							  XLINK_PACKET_ALIGNMENT,
+							  XLINK_NORMAL_MEMORY);
+			} else {
+				pr_err("Fatal error: can't allocate memory in line:%d func:%s\n", __LINE__, __func__);
+			}
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			event->header.timeout = opchan->chan->timeout;
+			buffer = xlink_platform_allocate(xmux->dev, &paddr,
+							 event->header.size,
+							 XLINK_PACKET_ALIGNMENT,
+							 XLINK_NORMAL_MEMORY);
+			if (buffer) {
+				size = event->header.size;
+				rc = xlink_platform_read(event->interface,
+							 event->handle->sw_device_id,
+							 buffer, &size,
+							 opchan->chan->timeout,
+							 NULL);
+				if (rc || event->header.size != size) {
+					xlink_platform_deallocate(xmux->dev, buffer,
+								  paddr,
+								  event->header.size,
+								  XLINK_PACKET_ALIGNMENT,
+								  XLINK_NORMAL_MEMORY);
+					rc = X_LINK_ERROR;
+					release_channel(opchan);
+					break;
+				}
+				event->paddr = paddr;
+				event->data = buffer;
+				if (add_packet_to_channel(opchan, &opchan->rx_queue,
+							  event->data,
+							  event->header.size,
+							  paddr)) {
+					xlink_platform_deallocate(xmux->dev,
+								  buffer, paddr,
+								  event->header.size,
+								  XLINK_PACKET_ALIGNMENT,
+								  XLINK_NORMAL_MEMORY);
+					rc = X_LINK_ERROR;
+					release_channel(opchan);
+					break;
+				}
+				event->header.type = XLINK_WRITE_VOLATILE_RESP;
+				xlink_dispatcher_event_add(EVENT_RX, event);
+				//complete regardless of mode/timeout
+				complete(&opchan->pkt_available);
+			} else {
+				// failed to allocate buffer
+				rc = X_LINK_ERROR;
+			}
+		}
+		release_channel(opchan);
+		break;
+	case XLINK_READ_REQ:
+	case XLINK_READ_TO_BUFFER_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+			break;
+		}
+		event->header.timeout = opchan->chan->timeout;
+		event->header.type = XLINK_READ_TO_BUFFER_RESP;
+		xlink_dispatcher_event_add(EVENT_RX, event);
+		//complete regardless of mode/timeout
+		complete(&opchan->pkt_consumed);
+		release_channel(opchan);
+		break;
+	case XLINK_RELEASE_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			event->header.timeout = opchan->chan->timeout;
+			opchan->tx_fill_level -= event->header.size;
+			opchan->tx_packet_level--;
+			event->header.type = XLINK_RELEASE_RESP;
+			xlink_dispatcher_event_add(EVENT_RX, event);
+			//complete regardless of mode/timeout
+			complete(&opchan->pkt_released);
+		}
+		release_channel(opchan);
+		break;
+	case XLINK_OPEN_CHANNEL_REQ:
+		if (xmux->channels[link_id][chan].status == CHAN_CLOSED) {
+			xmux->channels[link_id][chan].size = event->header.size;
+			xmux->channels[link_id][chan].timeout = event->header.timeout;
+			//xmux->channels[link_id][chan].mode = *(enum xlink_opmode *)event->data;
+			rc = multiplexer_open_channel(link_id, chan);
+			if (rc) {
+				rc = X_LINK_ERROR;
+			} else {
+				opchan = get_channel(link_id, chan);
+				if (!opchan) {
+					rc = X_LINK_COMMUNICATION_FAIL;
+				} else {
+					xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
+					complete(&opchan->opened);
+					passthru_event = xlink_create_event(link_id,
+									    XLINK_OPEN_CHANNEL_RESP,
+									    event->handle,
+									    chan,
+									    0,
+									    opchan->chan->timeout);
+					if (!passthru_event) {
+						rc = X_LINK_ERROR;
+						release_channel(opchan);
+						break;
+					}
+					xlink_dispatcher_event_add(EVENT_RX,
+								   passthru_event);
+				}
+				release_channel(opchan);
+			}
+		} else {
+			/* channel already open */
+			opchan = get_channel(link_id, chan);
+			if (!opchan) {
+				rc = X_LINK_COMMUNICATION_FAIL;
+			} else {
+				passthru_event = xlink_create_event(link_id,
+								    XLINK_OPEN_CHANNEL_RESP,
+								    event->handle,
+								    chan, 0, 0);
+				if (!passthru_event) {
+					release_channel(opchan);
+					rc = X_LINK_ERROR;
+					break;
+				}
+				xlink_dispatcher_event_add(EVENT_RX,
+							   passthru_event);
+			}
+			release_channel(opchan);
+		}
+		rc = xlink_passthrough(event);
+		if (rc == 0)
+			xlink_destroy_event(event); // event is handled and can now be freed
+		break;
+	case XLINK_CLOSE_CHANNEL_REQ:
+	case XLINK_PING_REQ:
+		break;
+	case XLINK_WRITE_RESP:
+	case XLINK_WRITE_VOLATILE_RESP:
+		opchan = get_channel(link_id, chan);
+		if (!opchan)
+			rc = X_LINK_COMMUNICATION_FAIL;
+		else
+			xlink_destroy_event(event); // event is handled and can now be freed
+		release_channel(opchan);
+		break;
+	case XLINK_READ_RESP:
+	case XLINK_READ_TO_BUFFER_RESP:
+	case XLINK_RELEASE_RESP:
+		xlink_destroy_event(event); // event is handled and can now be freed
+		break;
+	case XLINK_OPEN_CHANNEL_RESP:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			xlink_destroy_event(event); // event is handled and can now be freed
+			complete(&opchan->opened);
+		}
+		release_channel(opchan);
+		break;
+	case XLINK_CLOSE_CHANNEL_RESP:
+	case XLINK_PING_RESP:
+		xlink_destroy_event(event); // event is handled and can now be freed
+		break;
+	default:
+		rc = X_LINK_ERROR;
+	}
+
 	return rc;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 22/34] xlink-core: Enable VPU IP management and runtime control
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (20 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 21/34] xlink-core: Enable xlink protocol over pcie mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 23/34] xlink-core: add async channel and events mgross
                   ` (11 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly

From: Seamus Kelly <seamus.kelly@intel.com>

Enable VPU management, including enumeration, boot and runtime control.

Add APIs (a brief usage sketch follows the list):
	write control data:
		used to transmit small, local data
	start vpu:
		calls the boot_device API (soon to be deprecated)
	stop vpu:
		calls the reset_device API (soon to be deprecated)
	reset vpu:
		calls the reset_device API (soon to be deprecated)
	get device name:
		Returns the device name for the input device id.  This could
		be a char device path, for example "/dev/ttyUSB0" for a serial
		device, or a device description string, for example, for PCIe,
		"00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX -
		82443BX/ZX/DX Host bridge (rev 01)".
	get device list:
		Returns the list of software device IDs for all connected
		physical devices.
	get device status:
		Returns the current state of the input device:
			OFF - the device is off (D3cold/slot power removed)
			BUSY - the device is busy and not available (booting)
			READY - the device is available for use
			ERROR - a device HW failure has been detected
			RECOVERY - the device is in recovery mode, waiting
				   for recovery operations
	boot device:
		When used on the remote host, starts the SoC device by calling
		the corresponding VPU driver function.  Takes the firmware
		'binary_name' as input; on Linux this is not a path but an
		image name searched for in the default firmware location
		('/lib/firmware' and its subfolders).
		When used on the local host, triggers booting of the VPU IP
		device.
	reset device:
		When used on the remote host, resets the device by calling the
		corresponding VPU driver function.
		When used on the local host, resets the VPU IP device.
	get device mode:
		Queries and returns the current device power mode.
	set device mode:
		Used for device throttling or entering various power modes.
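
For illustration only (not part of the patch): a minimal in-kernel usage
sketch of the new device-management calls declared in include/linux/xlink.h.
The list capacity of 8, the "vpu.bin" firmware image name and the errno
mapping are assumptions made for this example, not values taken from the
series.

	#include <linux/errno.h>
	#include <linux/kernel.h>
	#include <linux/xlink.h>

	static int example_boot_first_vpu(void)
	{
		u32 id_list[8];			/* example capacity only */
		u32 num_devices = 0;		/* callee accumulates into this */
		u32 status = 0;
		struct xlink_handle handle = {};
		enum xlink_error rc;

		/* enumerate connected devices across all interfaces */
		rc = xlink_get_device_list(id_list, &num_devices);
		if (rc != X_LINK_SUCCESS || num_devices == 0)
			return -ENODEV;

		/* address the first reported device by its software device id */
		handle.sw_device_id = id_list[0];

		/* "vpu.bin" is a placeholder image name under /lib/firmware */
		rc = xlink_boot_device(&handle, "vpu.bin");
		if (rc != X_LINK_SUCCESS)
			return -EIO;

		/* poll the device state (OFF/BUSY/READY/ERROR/RECOVERY) */
		rc = xlink_get_device_status(&handle, &status);
		if (rc != X_LINK_SUCCESS)
			return -EIO;

		pr_info("xlink device 0x%x status %u\n",
			handle.sw_device_id, status);
		return 0;
	}

The XL_GET_DEVICE_LIST, XL_BOOT_DEVICE and XL_GET_DEVICE_STATUS ioctls added
in xlink-ioctl.c drive the same calls on behalf of user space.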


Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
---
 drivers/misc/xlink-core/xlink-core.c        | 235 ++++++++++++++++++++
 drivers/misc/xlink-core/xlink-defs.h        |   2 +
 drivers/misc/xlink-core/xlink-ioctl.c       | 214 ++++++++++++++++++
 drivers/misc/xlink-core/xlink-ioctl.h       |   9 +
 drivers/misc/xlink-core/xlink-multiplexer.c |  56 +++++
 drivers/misc/xlink-core/xlink-platform.c    |  86 +++++++
 include/linux/xlink.h                       |  27 +++
 7 files changed, 629 insertions(+)

diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
index 017d6776ce4c..f30ac584a01c 100644
--- a/drivers/misc/xlink-core/xlink-core.c
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -73,6 +73,8 @@ struct keembay_xlink_dev {
 	struct mutex lock;  // protect access to xlink_dev
 };
 
+static u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+
 /*
  * global variable pointing to our xlink device.
  *
@@ -264,6 +266,9 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case XL_READ_DATA:
 		rc = ioctl_read_data(arg);
 		break;
+	case XL_READ_TO_BUFFER:
+		rc = ioctl_read_to_buffer(arg);
+		break;
 	case XL_WRITE_DATA:
 		rc = ioctl_write_data(arg);
 		break;
@@ -276,9 +281,39 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case XL_CLOSE_CHANNEL:
 		rc = ioctl_close_channel(arg);
 		break;
+	case XL_START_VPU:
+		rc = ioctl_start_vpu(arg);
+		break;
+	case XL_STOP_VPU:
+		rc = xlink_stop_vpu();
+		break;
+	case XL_RESET_VPU:
+		rc = xlink_stop_vpu();
+		break;
 	case XL_DISCONNECT:
 		rc = ioctl_disconnect(arg);
 		break;
+	case XL_GET_DEVICE_NAME:
+		rc = ioctl_get_device_name(arg);
+		break;
+	case XL_GET_DEVICE_LIST:
+		rc = ioctl_get_device_list(arg);
+		break;
+	case XL_GET_DEVICE_STATUS:
+		rc = ioctl_get_device_status(arg);
+		break;
+	case XL_BOOT_DEVICE:
+		rc = ioctl_boot_device(arg);
+		break;
+	case XL_RESET_DEVICE:
+		rc = ioctl_reset_device(arg);
+		break;
+	case XL_GET_DEVICE_MODE:
+		rc = ioctl_get_device_mode(arg);
+		break;
+	case XL_SET_DEVICE_MODE:
+		rc = ioctl_set_device_mode(arg);
+		break;
 	}
 	if (rc)
 		return -EIO;
@@ -289,6 +324,30 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 /*
  * xlink Kernel API.
  */
+enum xlink_error xlink_stop_vpu(void)
+{
+#ifdef CONFIG_XLINK_LOCAL_HOST
+	int rc;
+
+	rc = xlink_ipc_reset_device(0x0); // stop vpu slice 0
+	if (rc)
+		return X_LINK_ERROR;
+#endif
+	return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL(xlink_stop_vpu);
+enum xlink_error xlink_start_vpu(char *filename)
+{
+#ifdef CONFIG_XLINK_LOCAL_HOST
+	int rc;
+
+	rc = xlink_ipc_boot_device(0x0, filename); // start vpu slice 0
+	if (rc)
+		return X_LINK_ERROR;
+#endif
+	return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL(xlink_start_vpu);
 
 enum xlink_error xlink_initialize(void)
 {
@@ -527,6 +586,34 @@ enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
 	return rc;
 }
 
+enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
+					  u16 chan, u8 const *pmessage,
+					  u32 size)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	int event_queued = 0;
+	enum xlink_error rc;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+	if (size > XLINK_MAX_CONTROL_DATA_SIZE)
+		return X_LINK_ERROR; // TODO: XLink Parameter Error
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+	event = xlink_create_event(link->id, XLINK_WRITE_CONTROL_REQ,
+				   &link->handle, chan, size, 0);
+	if (!event)
+		return X_LINK_ERROR;
+	memcpy(event->header.control_data, pmessage, size);
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
+		xlink_destroy_event(event);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_write_control_data);
+
 enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
 				      u16 chan, u8 const *message, u32 size)
 {
@@ -711,6 +798,154 @@ enum xlink_error xlink_disconnect(struct xlink_handle *handle)
 }
 EXPORT_SYMBOL(xlink_disconnect);
 
+enum xlink_error xlink_get_device_list(u32 *sw_device_id_list,
+				       u32 *num_devices)
+{
+	u32 interface_nmb_devices = 0;
+	enum xlink_error rc;
+	int i;
+
+	if (!xlink)
+		return X_LINK_ERROR;
+	if (!sw_device_id_list || !num_devices)
+		return X_LINK_ERROR;
+	/* loop through each interface and combine the lists */
+	for (i = 0; i < NMB_OF_INTERFACES; i++) {
+		rc = xlink_platform_get_device_list(i, sw_device_id_list,
+						    &interface_nmb_devices);
+		if (!rc) {
+			*num_devices += interface_nmb_devices;
+			sw_device_id_list += interface_nmb_devices;
+		}
+		interface_nmb_devices = 0;
+	}
+	return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL(xlink_get_device_list);
+enum xlink_error xlink_get_device_name(struct xlink_handle *handle, char *name,
+				       size_t name_size)
+{
+	enum xlink_error rc;
+	int interface;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+	if (!name || !name_size)
+		return X_LINK_ERROR;
+	interface = get_interface_from_sw_device_id(handle->sw_device_id);
+	if (interface == NULL_INTERFACE)
+		return X_LINK_ERROR;
+	rc = xlink_platform_get_device_name(interface, handle->sw_device_id,
+					    name, name_size);
+	if (rc)
+		rc = X_LINK_ERROR;
+	else
+		rc = X_LINK_SUCCESS;
+	return rc;
+}
+EXPORT_SYMBOL(xlink_get_device_name);
+enum xlink_error xlink_get_device_status(struct xlink_handle *handle,
+					 u32 *device_status)
+{
+	enum xlink_error rc;
+	u32 interface;
+
+	if (!xlink)
+		return X_LINK_ERROR;
+	if (!device_status)
+		return X_LINK_ERROR;
+	interface = get_interface_from_sw_device_id(handle->sw_device_id);
+	if (interface == NULL_INTERFACE)
+		return X_LINK_ERROR;
+	rc = xlink_platform_get_device_status(interface, handle->sw_device_id,
+					      device_status);
+	if (rc)
+		rc = X_LINK_ERROR;
+	else
+		rc = X_LINK_SUCCESS;
+	return rc;
+}
+EXPORT_SYMBOL(xlink_get_device_status);
+enum xlink_error xlink_boot_device(struct xlink_handle *handle,
+				   const char *binary_name)
+{
+	enum xlink_error rc;
+	u32 interface;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+	if (!binary_name)
+		return X_LINK_ERROR;
+	interface = get_interface_from_sw_device_id(handle->sw_device_id);
+	if (interface == NULL_INTERFACE)
+		return X_LINK_ERROR;
+	rc = xlink_platform_boot_device(interface, handle->sw_device_id,
+					binary_name);
+	if (rc)
+		rc = X_LINK_ERROR;
+	else
+		rc = X_LINK_SUCCESS;
+	return rc;
+}
+EXPORT_SYMBOL(xlink_boot_device);
+enum xlink_error xlink_reset_device(struct xlink_handle *handle)
+{
+	enum xlink_error rc;
+	u32 interface;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+	interface = get_interface_from_sw_device_id(handle->sw_device_id);
+	if (interface == NULL_INTERFACE)
+		return X_LINK_ERROR;
+	rc = xlink_platform_reset_device(interface, handle->sw_device_id);
+	if (rc)
+		rc = X_LINK_ERROR;
+	else
+		rc = X_LINK_SUCCESS;
+	return rc;
+}
+EXPORT_SYMBOL(xlink_reset_device);
+enum xlink_error xlink_set_device_mode(struct xlink_handle *handle,
+				       enum xlink_device_power_mode power_mode)
+{
+	enum xlink_error rc;
+	u32 interface;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+	interface = get_interface_from_sw_device_id(handle->sw_device_id);
+	if (interface == NULL_INTERFACE)
+		return X_LINK_ERROR;
+	rc = xlink_platform_set_device_mode(interface, handle->sw_device_id,
+					    power_mode);
+	if (rc)
+		rc = X_LINK_ERROR;
+	else
+		rc = X_LINK_SUCCESS;
+	return rc;
+}
+EXPORT_SYMBOL(xlink_set_device_mode);
+enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
+				       enum xlink_device_power_mode *power_mode)
+{
+	enum xlink_error rc;
+	u32 interface;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+	interface = get_interface_from_sw_device_id(handle->sw_device_id);
+	if (interface == NULL_INTERFACE)
+		return X_LINK_ERROR;
+	rc = xlink_platform_get_device_mode(interface, handle->sw_device_id,
+					    power_mode);
+	if (rc)
+		rc = X_LINK_ERROR;
+	else
+		rc = X_LINK_SUCCESS;
+	return rc;
+}
+EXPORT_SYMBOL(xlink_get_device_mode);
 /* Device tree driver match. */
 static const struct of_device_id kmb_xlink_of_match[] = {
 	{
diff --git a/drivers/misc/xlink-core/xlink-defs.h b/drivers/misc/xlink-core/xlink-defs.h
index 09aee36d5542..8985f6631175 100644
--- a/drivers/misc/xlink-core/xlink-defs.h
+++ b/drivers/misc/xlink-core/xlink-defs.h
@@ -101,6 +101,7 @@ enum xlink_event_type {
 	XLINK_OPEN_CHANNEL_REQ,
 	XLINK_CLOSE_CHANNEL_REQ,
 	XLINK_PING_REQ,
+	XLINK_WRITE_CONTROL_REQ,
 	XLINK_REQ_LAST,
 	// response events
 	XLINK_WRITE_RESP = 0x10,
@@ -111,6 +112,7 @@ enum xlink_event_type {
 	XLINK_OPEN_CHANNEL_RESP,
 	XLINK_CLOSE_CHANNEL_RESP,
 	XLINK_PING_RESP,
+	XLINK_WRITE_CONTROL_RESP,
 	XLINK_RESP_LAST,
 };
 
diff --git a/drivers/misc/xlink-core/xlink-ioctl.c b/drivers/misc/xlink-core/xlink-ioctl.c
index 1f75ad38137b..90947bbccfed 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.c
+++ b/drivers/misc/xlink-core/xlink-ioctl.c
@@ -111,6 +111,34 @@ int ioctl_read_data(unsigned long arg)
 	return copy_result_to_user(rd.return_code, rc);
 }
 
+int ioctl_read_to_buffer(unsigned long arg)
+{
+	struct xlink_handle		devh	= {};
+	struct xlinkreadtobuffer	rdtobuf = {};
+	int rc = 0;
+	u32 size;
+	u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+
+	if (copy_from_user(&rdtobuf, (void __user *)arg,
+			   sizeof(struct xlinkreadtobuffer)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)rdtobuf.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_read_data_to_buffer(&devh, rdtobuf.chan,
+				       (u8 *)volbuf, &size);
+	if (!rc) {
+		if (copy_to_user((void __user *)rdtobuf.pmessage, (void *)volbuf,
+				 size))
+			return -EFAULT;
+		if (copy_to_user((void __user *)rdtobuf.size, (void *)&size,
+				 sizeof(size)))
+			return -EFAULT;
+	}
+
+	return copy_result_to_user(rdtobuf.return_code, rc);
+}
+
 int ioctl_write_data(unsigned long arg)
 {
 	struct xlink_handle	devh	= {};
@@ -194,6 +222,26 @@ int ioctl_close_channel(unsigned long arg)
 	return copy_result_to_user(op.return_code, rc);
 }
 
+int ioctl_start_vpu(unsigned long arg)
+{
+	struct xlinkstartvpu	startvpu = {};
+	char filename[64];
+	int rc = 0;
+
+	if (copy_from_user(&startvpu, (void __user *)arg,
+			   sizeof(struct xlinkstartvpu)))
+		return -EFAULT;
+	if (startvpu.namesize > sizeof(filename))
+		return -EINVAL;
+	memset(filename, 0, sizeof(filename));
+	if (copy_from_user(filename, (void __user *)startvpu.filename,
+			   startvpu.namesize))
+		return -EFAULT;
+	rc = xlink_start_vpu(filename);
+
+	return copy_result_to_user(startvpu.return_code, rc);
+}
+
 int ioctl_disconnect(unsigned long arg)
 {
 	struct xlink_handle	devh	= {};
@@ -210,3 +258,169 @@ int ioctl_disconnect(unsigned long arg)
 
 	return copy_result_to_user(con.return_code, rc);
 }
+
+int ioctl_get_device_name(unsigned long arg)
+{
+	struct xlink_handle		devh	= {};
+	struct xlinkgetdevicename	devn	= {};
+	char name[XLINK_MAX_DEVICE_NAME_SIZE];
+	int rc = 0;
+
+	if (copy_from_user(&devn, (void __user *)arg,
+			   sizeof(struct xlinkgetdevicename)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)devn.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	if (devn.name_size <= XLINK_MAX_DEVICE_NAME_SIZE) {
+		rc = xlink_get_device_name(&devh, name, devn.name_size);
+		if (!rc) {
+			if (copy_to_user((void __user *)devn.name, (void *)name,
+					 devn.name_size))
+				return -EFAULT;
+		}
+	} else {
+		rc = X_LINK_ERROR;
+	}
+
+	return copy_result_to_user(devn.return_code, rc);
+}
+
+int ioctl_get_device_list(unsigned long arg)
+{
+	struct xlinkgetdevicelist	devl	= {};
+	u32 sw_device_id_list[XLINK_MAX_DEVICE_LIST_SIZE];
+	u32 num_devices = 0;
+	int rc = 0;
+
+	if (copy_from_user(&devl, (void __user *)arg,
+			   sizeof(struct xlinkgetdevicelist)))
+		return -EFAULT;
+	rc = xlink_get_device_list(sw_device_id_list, &num_devices);
+	if (!rc && num_devices <= XLINK_MAX_DEVICE_LIST_SIZE) {
+		/* TODO: this next copy is dangerous! we have no idea
+		 * how large the devl.sw_device_id_list buffer is
+		 * provided by the user. if num_devices is too large,
+		 * the copy will overflow the buffer.
+		 */
+		if (copy_to_user((void __user *)devl.sw_device_id_list,
+				 (void *)sw_device_id_list,
+				 (sizeof(*sw_device_id_list)
+				 * num_devices)))
+			return -EFAULT;
+		if (copy_to_user((void __user *)devl.num_devices, (void *)&num_devices,
+				 (sizeof(num_devices))))
+			return -EFAULT;
+	}
+
+	return copy_result_to_user(devl.return_code, rc);
+}
+
+int ioctl_get_device_status(unsigned long arg)
+{
+	struct xlink_handle		devh	= {};
+	struct xlinkgetdevicestatus	devs	= {};
+	u32 device_status = 0;
+	int rc = 0;
+
+	if (copy_from_user(&devs, (void __user *)arg,
+			   sizeof(struct xlinkgetdevicestatus)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)devs.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_get_device_status(&devh, &device_status);
+	if (!rc) {
+		if (copy_to_user((void __user *)devs.device_status,
+				 (void *)&device_status,
+				 sizeof(device_status)))
+			return -EFAULT;
+	}
+
+	return copy_result_to_user(devs.return_code, rc);
+}
+
+int ioctl_boot_device(unsigned long arg)
+{
+	struct xlink_handle		devh	= {};
+	struct xlinkbootdevice		boot	= {};
+	char filename[64];
+	int rc = 0;
+
+	if (copy_from_user(&boot, (void __user *)arg,
+			   sizeof(struct xlinkbootdevice)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)boot.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	if (boot.binary_name_size > sizeof(filename))
+		return -EINVAL;
+	memset(filename, 0, sizeof(filename));
+	if (copy_from_user(filename, (void __user *)boot.binary_name,
+			   boot.binary_name_size))
+		return -EFAULT;
+	rc = xlink_boot_device(&devh, filename);
+
+	return copy_result_to_user(boot.return_code, rc);
+}
+
+int ioctl_reset_device(unsigned long arg)
+{
+	struct xlink_handle		devh	= {};
+	struct xlinkresetdevice		res	= {};
+	int rc = 0;
+
+	if (copy_from_user(&res, (void __user *)arg,
+			   sizeof(struct xlinkresetdevice)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)res.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_reset_device(&devh);
+
+	return copy_result_to_user(res.return_code, rc);
+}
+
+int ioctl_get_device_mode(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkdevmode	devm	= {};
+	u32 device_mode = 0;
+	int rc = 0;
+
+	if (copy_from_user(&devm, (void __user *)arg,
+			   sizeof(struct xlinkdevmode)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)devm.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	rc = xlink_get_device_mode(&devh, &device_mode);
+	if (!rc) {
+		if (copy_to_user((void __user *)devm.device_mode, (void *)&device_mode,
+				 sizeof(device_mode)))
+			return -EFAULT;
+	}
+
+	return copy_result_to_user(devm.return_code, rc);
+}
+
+int ioctl_set_device_mode(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkdevmode	devm	= {};
+	u32 device_mode = 0;
+	int rc = 0;
+
+	if (copy_from_user(&devm, (void __user *)arg,
+			   sizeof(struct xlinkdevmode)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)devm.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	if (copy_from_user(&device_mode, (void __user *)devm.device_mode,
+			   sizeof(device_mode)))
+		return -EFAULT;
+	rc = xlink_set_device_mode(&devh, device_mode);
+
+	return copy_result_to_user(devm.return_code, rc);
+}
diff --git a/drivers/misc/xlink-core/xlink-ioctl.h b/drivers/misc/xlink-core/xlink-ioctl.h
index 0f317c6c2b94..d016d8418f30 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.h
+++ b/drivers/misc/xlink-core/xlink-ioctl.h
@@ -12,10 +12,19 @@
 int ioctl_connect(unsigned long arg);
 int ioctl_open_channel(unsigned long arg);
 int ioctl_read_data(unsigned long arg);
+int ioctl_read_to_buffer(unsigned long arg);
 int ioctl_write_data(unsigned long arg);
 int ioctl_write_volatile_data(unsigned long arg);
 int ioctl_release_data(unsigned long arg);
 int ioctl_close_channel(unsigned long arg);
+int ioctl_start_vpu(unsigned long arg);
 int ioctl_disconnect(unsigned long arg);
+int ioctl_get_device_name(unsigned long arg);
+int ioctl_get_device_list(unsigned long arg);
+int ioctl_get_device_status(unsigned long arg);
+int ioctl_boot_device(unsigned long arg);
+int ioctl_reset_device(unsigned long arg);
+int ioctl_get_device_mode(unsigned long arg);
+int ioctl_set_device_mode(unsigned long arg);
 
 #endif /* XLINK_IOCTL_H_ */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
index 339734826f3e..48451dc30712 100644
--- a/drivers/misc/xlink-core/xlink-multiplexer.c
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -491,6 +491,7 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
 	switch (event->header.type) {
 	case XLINK_WRITE_REQ:
 	case XLINK_WRITE_VOLATILE_REQ:
+	case XLINK_WRITE_CONTROL_REQ:
 		opchan = get_channel(link_id, chan);
 		if (!opchan || opchan->chan->status != CHAN_OPEN) {
 			rc = X_LINK_COMMUNICATION_FAIL;
@@ -657,6 +658,7 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
 		break;
 	case XLINK_WRITE_RESP:
 	case XLINK_WRITE_VOLATILE_RESP:
+	case XLINK_WRITE_CONTROL_RESP:
 	case XLINK_READ_RESP:
 	case XLINK_READ_TO_BUFFER_RESP:
 	case XLINK_RELEASE_RESP:
@@ -759,6 +761,46 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
 		}
 		release_channel(opchan);
 		break;
+	case XLINK_WRITE_CONTROL_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			event->header.timeout = opchan->chan->timeout;
+			buffer = xlink_platform_allocate(xmux->dev, &paddr,
+							 event->header.size,
+							 XLINK_PACKET_ALIGNMENT,
+							 XLINK_NORMAL_MEMORY);
+			if (buffer) {
+				size = event->header.size;
+				memcpy(buffer, event->header.control_data, size);
+				event->paddr = paddr;
+				event->data = buffer;
+				if (add_packet_to_channel(opchan,
+							  &opchan->rx_queue,
+							  event->data,
+							  event->header.size,
+							  paddr)) {
+					xlink_platform_deallocate(xmux->dev,
+								  buffer, paddr,
+								  event->header.size,
+								  XLINK_PACKET_ALIGNMENT,
+								  XLINK_NORMAL_MEMORY);
+					rc = X_LINK_ERROR;
+					release_channel(opchan);
+					break;
+				}
+				event->header.type = XLINK_WRITE_CONTROL_RESP;
+				xlink_dispatcher_event_add(EVENT_RX, event);
+				// channel blocking, notify waiting threads of available packet
+				complete(&opchan->pkt_available);
+			} else {
+				// failed to allocate buffer
+				rc = X_LINK_ERROR;
+			}
+		}
+		release_channel(opchan);
+		break;
 	case XLINK_READ_REQ:
 	case XLINK_READ_TO_BUFFER_REQ:
 		opchan = get_channel(link_id, chan);
@@ -848,6 +890,7 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
 		break;
 	case XLINK_WRITE_RESP:
 	case XLINK_WRITE_VOLATILE_RESP:
+	case XLINK_WRITE_CONTROL_RESP:
 		opchan = get_channel(link_id, chan);
 		if (!opchan)
 			rc = X_LINK_COMMUNICATION_FAIL;
@@ -929,6 +972,18 @@ enum xlink_error xlink_passthrough(struct xlink_event *event)
 			rc = X_LINK_ERROR;
 		}
 		break;
+	case XLINK_WRITE_CONTROL_REQ:
+		if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+			ipc.is_volatile = 1;
+			rc = xlink_platform_write(IPC_INTERFACE,
+						  event->handle->sw_device_id,
+						  event->header.control_data,
+						  &event->header.size, 0, &ipc);
+		} else {
+			/* channel not open */
+			rc = X_LINK_ERROR;
+		}
+		break;
 	case XLINK_READ_REQ:
 		if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
 			/* if channel has receive blocking set,
@@ -1013,6 +1068,7 @@ enum xlink_error xlink_passthrough(struct xlink_event *event)
 	case XLINK_PING_REQ:
 	case XLINK_WRITE_RESP:
 	case XLINK_WRITE_VOLATILE_RESP:
+	case XLINK_WRITE_CONTROL_RESP:
 	case XLINK_READ_RESP:
 	case XLINK_READ_TO_BUFFER_RESP:
 	case XLINK_RELEASE_RESP:
diff --git a/drivers/misc/xlink-core/xlink-platform.c b/drivers/misc/xlink-core/xlink-platform.c
index c34b69ee206b..56eb8da28a5f 100644
--- a/drivers/misc/xlink-core/xlink-platform.c
+++ b/drivers/misc/xlink-core/xlink-platform.c
@@ -34,6 +34,20 @@ static inline int xlink_ipc_read(u32 sw_device_id, void *data,
 				 size_t * const size, u32 timeout, void *context)
 { return -1; }
 
+static inline int xlink_ipc_get_device_list(u32 *sw_device_id_list,
+					    u32 *num_devices)
+{ return -1; }
+static inline int xlink_ipc_get_device_name(u32 sw_device_id,
+					    char *device_name, size_t name_size)
+{ return -1; }
+static inline int xlink_ipc_get_device_status(u32 sw_device_id,
+					      u32 *device_status)
+{ return -1; }
+static inline int xlink_ipc_boot_device(u32 sw_device_id,
+					const char *binary_path)
+{ return -1; }
+static inline int xlink_ipc_reset_device(u32 sw_device_id)
+{ return -1; }
 static inline int xlink_ipc_open_channel(u32 sw_device_id,
 					 u32 channel)
 { return -1; }
@@ -59,6 +73,23 @@ static int (*write_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) =
 static int (*read_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) = {
 		NULL, xlink_pcie_read, NULL, NULL};
 
+static int (*reset_fcts[NMB_OF_INTERFACES])(u32) = {
+		xlink_ipc_reset_device, xlink_pcie_reset_device, NULL, NULL};
+static int (*boot_fcts[NMB_OF_INTERFACES])(u32, const char *) = {
+		xlink_ipc_boot_device, xlink_pcie_boot_device, NULL, NULL};
+static int (*dev_name_fcts[NMB_OF_INTERFACES])(u32, char *, size_t) = {
+		xlink_ipc_get_device_name, xlink_pcie_get_device_name,
+		NULL, NULL};
+static int (*dev_list_fcts[NMB_OF_INTERFACES])(u32 *, u32 *) = {
+		xlink_ipc_get_device_list, xlink_pcie_get_device_list,
+		NULL, NULL};
+static int (*dev_status_fcts[NMB_OF_INTERFACES])(u32, u32 *) = {
+		xlink_ipc_get_device_status, xlink_pcie_get_device_status,
+		NULL, NULL};
+static int (*dev_set_mode_fcts[NMB_OF_INTERFACES])(u32, u32) = {
+		NULL, NULL, NULL, NULL};
+static int (*dev_get_mode_fcts[NMB_OF_INTERFACES])(u32, u32 *) = {
+		NULL, NULL, NULL, NULL};
 static int (*open_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
 		xlink_ipc_open_channel, NULL, NULL, NULL};
 
@@ -103,6 +134,61 @@ int xlink_platform_read(u32 interface, u32 sw_device_id, void *data,
 	return read_fcts[interface](sw_device_id, data, size, timeout);
 }
 
+int xlink_platform_reset_device(u32 interface, u32 sw_device_id)
+{
+	if (interface >= NMB_OF_INTERFACES || !reset_fcts[interface])
+		return -1;
+	return reset_fcts[interface](sw_device_id);
+}
+
+int xlink_platform_boot_device(u32 interface, u32 sw_device_id,
+			       const char *binary_name)
+{
+	if (interface >= NMB_OF_INTERFACES || !boot_fcts[interface])
+		return -1;
+	return boot_fcts[interface](sw_device_id, binary_name);
+}
+
+int xlink_platform_get_device_name(u32 interface, u32 sw_device_id,
+				   char *device_name, size_t name_size)
+{
+	if (interface >= NMB_OF_INTERFACES || !dev_name_fcts[interface])
+		return -1;
+	return dev_name_fcts[interface](sw_device_id, device_name, name_size);
+}
+
+int xlink_platform_get_device_list(u32 interface,
+				   u32 *sw_device_id_list, u32 *num_devices)
+{
+	if (interface >= NMB_OF_INTERFACES || !dev_list_fcts[interface])
+		return -1;
+	return dev_list_fcts[interface](sw_device_id_list, num_devices);
+}
+
+int xlink_platform_get_device_status(u32 interface, u32 sw_device_id,
+				     u32 *device_status)
+{
+	if (interface >= NMB_OF_INTERFACES || !dev_status_fcts[interface])
+		return -1;
+	return dev_status_fcts[interface](sw_device_id, device_status);
+}
+
+int xlink_platform_set_device_mode(u32 interface, u32 sw_device_id,
+				   u32 power_mode)
+{
+	if (interface >= NMB_OF_INTERFACES || !dev_set_mode_fcts[interface])
+		return -1;
+	return dev_set_mode_fcts[interface](sw_device_id, power_mode);
+}
+
+int xlink_platform_get_device_mode(u32 interface, u32 sw_device_id,
+				   u32 *power_mode)
+{
+	if (interface >= NMB_OF_INTERFACES || !dev_get_mode_fcts[interface])
+		return -1;
+	return dev_get_mode_fcts[interface](sw_device_id, power_mode);
+}
+
 int xlink_platform_open_channel(u32 interface, u32 sw_device_id,
 				u32 channel)
 {
diff --git a/include/linux/xlink.h b/include/linux/xlink.h
index c22439d5aade..b00dbc719530 100644
--- a/include/linux/xlink.h
+++ b/include/linux/xlink.h
@@ -78,6 +78,10 @@ enum xlink_error xlink_write_data(struct xlink_handle *handle,
 enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
 				      u16 chan, u8 const *message, u32 size);
 
+enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
+					  u16 chan, u8 const *message,
+					  u32 size);
+
 enum xlink_error xlink_read_data(struct xlink_handle *handle,
 				 u16 chan, u8 **message, u32 *size);
 
@@ -90,6 +94,29 @@ enum xlink_error xlink_release_data(struct xlink_handle *handle,
 
 enum xlink_error xlink_disconnect(struct xlink_handle *handle);
 
+enum xlink_error xlink_get_device_list(u32 *sw_device_id_list, u32 *num_devices);
+
+enum xlink_error xlink_get_device_name(struct xlink_handle *handle, char *name,
+				       size_t name_size);
+
+enum xlink_error xlink_get_device_status(struct xlink_handle *handle,
+					 u32 *device_status);
+
+enum xlink_error xlink_boot_device(struct xlink_handle *handle,
+				   const char *binary_name);
+
+enum xlink_error xlink_reset_device(struct xlink_handle *handle);
+
+enum xlink_error xlink_set_device_mode(struct xlink_handle *handle,
+				       enum xlink_device_power_mode power_mode);
+
+enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
+				       enum xlink_device_power_mode *power_mode);
+
+enum xlink_error xlink_start_vpu(char *filename); /* deprecated */
+
+enum xlink_error xlink_stop_vpu(void); /* deprecated */
+
 /* API functions to be implemented
  *
  * enum xlink_error xlink_write_crc_data(struct xlink_handle *handle,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 23/34] xlink-core: add async channel and events
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (21 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 22/34] xlink-core: Enable VPU IP management and runtime control mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 24/34] dt-bindings: misc: Add Keem Bay vpumgr mgross
                   ` (10 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly

From: Seamus Kelly <seamus.kelly@intel.com>

Enable asynchronous channel and event communication.

Add APIs (a brief registration sketch follows the list):
	data ready callback:
		The xLink Data Ready Callback function is used to register a
		callback function that is invoked when data is ready to be
		read from a channel.
	data consumed callback:
		The xLink Data Consumed Callback function is used to register
		a callback function that is invoked when data is consumed by
		the peer node on a channel.

Add event notification handling, including APIs:
	register device event:
		The xLink Register Device Event function is used to register a
		callback for notification of certain system events.  XLink
		currently supports four such events [0-3], whose meaning is
		system dependent.  Registering for an event means the callback
		is invoked whenever the event occurs, with two parameters: the
		sw_device_id of the device that triggered the event and the
		event number [0-3].
	unregister device event:
		The xLink Unregister Device Event function is used to
		unregister events previously registered via the register
		device event API.
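
For illustration only (not part of the patch): a sketch of how a kernel
client might subscribe to device events.  The helper name
xlink_register_device_event(), its argument order and the callback prototype
are assumptions inferred from the description above and from the
XL_REGISTER_DEV_EVENT ioctl; the authoritative prototypes are the ones added
to include/linux/xlink.h.

	#include <linux/kernel.h>
	#include <linux/xlink.h>

	/* Assumed prototype: per the description above, the callback receives
	 * the sw_device_id that raised the event and the event number [0-3]. */
	static int example_dev_event_cb(u32 sw_device_id, u32 event_num)
	{
		pr_info("xlink event %u from device 0x%x\n",
			event_num, sw_device_id);
		return 0;
	}

	static int example_subscribe(struct xlink_handle *handle)
	{
		u32 events[] = { 0, 1 };	/* event meanings are system dependent */

		/* Assumed helper, modelled on the XL_REGISTER_DEV_EVENT ioctl */
		return xlink_register_device_event(handle, events,
						   ARRAY_SIZE(events),
						   example_dev_event_cb);
	}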

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
---
 drivers/misc/xlink-core/xlink-core.c        | 497 ++++++++++++++++----
 drivers/misc/xlink-core/xlink-core.h        |  11 +-
 drivers/misc/xlink-core/xlink-defs.h        |   6 +-
 drivers/misc/xlink-core/xlink-dispatcher.c  |  53 +--
 drivers/misc/xlink-core/xlink-ioctl.c       | 146 +++++-
 drivers/misc/xlink-core/xlink-ioctl.h       |   6 +
 drivers/misc/xlink-core/xlink-multiplexer.c | 176 +++++--
 drivers/misc/xlink-core/xlink-platform.c    |  27 ++
 include/linux/xlink.h                       |  15 +-
 9 files changed, 757 insertions(+), 180 deletions(-)

diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
index f30ac584a01c..8c1fd3f54afa 100644
--- a/drivers/misc/xlink-core/xlink-core.c
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -55,6 +55,8 @@ static struct cdev xlink_cdev;
 
 static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
 
+static struct mutex dev_event_lock;
+
 static const struct file_operations fops = {
 		.owner		= THIS_MODULE,
 		.unlocked_ioctl = xlink_ioctl,
@@ -66,14 +68,75 @@ struct xlink_link {
 	struct kref refcount;
 };
 
+struct xlink_attr {
+	unsigned long value;
+	u32 sw_dev_id;
+};
+
 struct keembay_xlink_dev {
 	struct platform_device *pdev;
 	struct xlink_link links[XLINK_MAX_CONNECTIONS];
 	u32 nmb_connected_links;
 	struct mutex lock;  // protect access to xlink_dev
+	struct xlink_attr eventx[4];
+};
+
+struct event_info {
+	struct list_head list;
+	u32 sw_device_id;
+	u32 event_type;
+	u32 user_flag;
+	xlink_device_event_cb event_notif_fn;
 };
 
-static u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+// sysfs attribute functions
+
+static ssize_t eventx_show(struct device *dev, struct device_attribute *attr,
+			   int index, char *buf)
+{
+	struct keembay_xlink_dev *xlink_dev = dev_get_drvdata(dev);
+	struct xlink_attr *a = &xlink_dev->eventx[index];
+
+	return sysfs_emit(buf, "0x%x 0x%lx\n", a->sw_dev_id, a->value);
+}
+
+static ssize_t event0_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return eventx_show(dev, attr, 0, buf);
+}
+
+static ssize_t event1_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return eventx_show(dev, attr, 1, buf);
+}
+
+static ssize_t event2_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return eventx_show(dev, attr, 2, buf);
+}
+
+static ssize_t event3_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return eventx_show(dev, attr, 3, buf);
+}
+
+static DEVICE_ATTR_RO(event0);
+static DEVICE_ATTR_RO(event1);
+static DEVICE_ATTR_RO(event2);
+static DEVICE_ATTR_RO(event3);
+static struct attribute *xlink_sysfs_entries[] = {
+	&dev_attr_event0.attr,
+	&dev_attr_event1.attr,
+	&dev_attr_event2.attr,
+	&dev_attr_event3.attr,
+	NULL,
+};
+
+static const struct attribute_group xlink_sysfs_group = {
+	.attrs = xlink_sysfs_entries,
+};
+
+static struct event_info ev_info;
 
 /*
  * global variable pointing to our xlink device.
@@ -207,7 +270,14 @@ static int kmb_xlink_probe(struct platform_device *pdev)
 		dev_info(&pdev->dev, "Cannot add the device to the system\n");
 		goto r_class;
 	}
+	INIT_LIST_HEAD(&ev_info.list);
 
+	rc = devm_device_add_group(&pdev->dev, &xlink_sysfs_group);
+	if (rc) {
+		dev_err(&pdev->dev, "failed to create sysfs entries: %d\n", rc);
+		return rc;
+	}
+	mutex_init(&dev_event_lock);
 	return 0;
 
 r_device:
@@ -231,7 +301,6 @@ static int kmb_xlink_remove(struct platform_device *pdev)
 	rc = xlink_multiplexer_destroy();
 	if (rc != X_LINK_SUCCESS)
 		pr_err("Multiplexer destroy failed\n");
-	// stop dispatchers and destroy
 	rc = xlink_dispatcher_destroy();
 	if (rc != X_LINK_SUCCESS)
 		pr_err("Dispatcher destroy failed\n");
@@ -251,7 +320,6 @@ static int kmb_xlink_remove(struct platform_device *pdev)
  * IOCTL function for User Space access to xlink kernel functions
  *
  */
-
 static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
 	int rc;
@@ -263,6 +331,12 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case XL_OPEN_CHANNEL:
 		rc = ioctl_open_channel(arg);
 		break;
+	case XL_DATA_READY_CALLBACK:
+		rc = ioctl_data_ready_callback(arg);
+		break;
+	case XL_DATA_CONSUMED_CALLBACK:
+		rc = ioctl_data_consumed_callback(arg);
+		break;
 	case XL_READ_DATA:
 		rc = ioctl_read_data(arg);
 		break;
@@ -275,6 +349,9 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case XL_WRITE_VOLATILE:
 		rc = ioctl_write_volatile_data(arg);
 		break;
+	case XL_WRITE_CONTROL_DATA:
+		rc = ioctl_write_control_data(arg);
+		break;
 	case XL_RELEASE_DATA:
 		rc = ioctl_release_data(arg);
 		break;
@@ -285,10 +362,10 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 		rc = ioctl_start_vpu(arg);
 		break;
 	case XL_STOP_VPU:
-		rc = xlink_stop_vpu();
+		rc = ioctl_stop_vpu();
 		break;
 	case XL_RESET_VPU:
-		rc = xlink_stop_vpu();
+		rc = ioctl_stop_vpu();
 		break;
 	case XL_DISCONNECT:
 		rc = ioctl_disconnect(arg);
@@ -314,6 +391,12 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case XL_SET_DEVICE_MODE:
 		rc = ioctl_set_device_mode(arg);
 		break;
+	case XL_REGISTER_DEV_EVENT:
+		rc = ioctl_register_device_event(arg);
+		break;
+	case XL_UNREGISTER_DEV_EVENT:
+		rc = ioctl_unregister_device_event(arg);
+		break;
 	}
 	if (rc)
 		return -EIO;
@@ -387,14 +470,12 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
 		xlink->nmb_connected_links++;
 		kref_init(&link->refcount);
 		if (interface != IPC_INTERFACE) {
-			// start dispatcher
 			rc = xlink_dispatcher_start(link->id, &link->handle);
 			if (rc) {
 				pr_err("dispatcher start failed\n");
 				goto r_cleanup;
 			}
 		}
-		// initialize multiplexer connection
 		rc = xlink_multiplexer_connect(link->id);
 		if (rc) {
 			pr_err("multiplexer connect failed\n");
@@ -405,7 +486,6 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
 			link->handle.dev_type,
 			xlink->nmb_connected_links);
 	} else {
-		// already connected
 		pr_info("dev 0x%x ALREADY connected - dev_type %d\n",
 			link->handle.sw_device_id,
 			link->handle.dev_type);
@@ -413,7 +493,6 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
 		*handle = link->handle;
 	}
 	mutex_unlock(&xlink->lock);
-	// TODO: implement ping
 	return X_LINK_SUCCESS;
 
 r_cleanup:
@@ -423,64 +502,109 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
 }
 EXPORT_SYMBOL(xlink_connect);
 
-enum xlink_error xlink_open_channel(struct xlink_handle *handle,
-				    u16 chan, enum xlink_opmode mode,
-				    u32 data_size, u32 timeout)
+enum xlink_error xlink_data_available_event(struct xlink_handle *handle,
+					    u16 chan,
+					    xlink_event data_available_event)
 {
 	struct xlink_event *event;
 	struct xlink_link *link;
-	int event_queued = 0;
 	enum xlink_error rc;
+	int event_queued = 0;
+	char origin = 'K';
 
 	if (!xlink || !handle)
 		return X_LINK_ERROR;
 
+	if (CHANNEL_USER_BIT_IS_SET(chan))
+		origin = 'U';      // function called from user space
+	CHANNEL_CLEAR_USER_BIT(chan);  // restore proper channel value
+
 	link = get_link_by_sw_device_id(handle->sw_device_id);
 	if (!link)
 		return X_LINK_ERROR;
-
-	event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
-				   &link->handle, chan, data_size, timeout);
+	event = xlink_create_event(link->id, XLINK_DATA_READY_CALLBACK_REQ,
+				   &link->handle, chan, 0, 0);
 	if (!event)
 		return X_LINK_ERROR;
-
-	event->data = (void *)mode;
+	event->data = data_available_event;
+	event->callback_origin = origin;
+	if (!data_available_event)
+		event->calling_pid = NULL; // disable callbacks on this channel
+	else
+		event->calling_pid = current;
 	rc = xlink_multiplexer_tx(event, &event_queued);
 	if (!event_queued)
 		xlink_destroy_event(event);
 	return rc;
 }
-EXPORT_SYMBOL(xlink_open_channel);
-
-enum xlink_error xlink_close_channel(struct xlink_handle *handle,
-				     u16 chan)
+EXPORT_SYMBOL(xlink_data_available_event);
+enum xlink_error xlink_data_consumed_event(struct xlink_handle *handle,
+					   u16 chan,
+					   xlink_event data_consumed_event)
 {
 	struct xlink_event *event;
 	struct xlink_link *link;
 	enum xlink_error rc;
 	int event_queued = 0;
+	char origin = 'K';
 
 	if (!xlink || !handle)
 		return X_LINK_ERROR;
 
+	if (CHANNEL_USER_BIT_IS_SET(chan))
+		origin = 'U';      // function called from user space
+	CHANNEL_CLEAR_USER_BIT(chan);  // restore proper channel value
+
 	link = get_link_by_sw_device_id(handle->sw_device_id);
 	if (!link)
 		return X_LINK_ERROR;
-
-	event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+	event = xlink_create_event(link->id, XLINK_DATA_CONSUMED_CALLBACK_REQ,
 				   &link->handle, chan, 0, 0);
 	if (!event)
 		return X_LINK_ERROR;
+	event->data = data_consumed_event;
+	event->callback_origin = origin;
+	if (!data_consumed_event)
+		event->calling_pid = NULL; // disable callbacks on this channel
+	else
+		event->calling_pid = current;
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
+		xlink_destroy_event(event);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_data_consumed_event);
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+				    u16 chan, enum xlink_opmode mode,
+				    u32 data_size, u32 timeout)
+{
+	struct xlink_event *event;
+	struct xlink_link *link;
+	int event_queued = 0;
+	enum xlink_error rc;
+
+	if (!xlink || !handle)
+		return X_LINK_ERROR;
+
+	link = get_link_by_sw_device_id(handle->sw_device_id);
+	if (!link)
+		return X_LINK_ERROR;
+
+	event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
+				   &link->handle, chan, data_size, timeout);
+	if (!event)
+		return X_LINK_ERROR;
 
+	event->data = (void *)mode;
 	rc = xlink_multiplexer_tx(event, &event_queued);
 	if (!event_queued)
 		xlink_destroy_event(event);
 	return rc;
 }
-EXPORT_SYMBOL(xlink_close_channel);
+EXPORT_SYMBOL(xlink_open_channel);
 
-enum xlink_error xlink_write_data(struct xlink_handle *handle,
-				  u16 chan, u8 const *pmessage, u32 size)
+enum xlink_error xlink_close_channel(struct xlink_handle *handle,
+				     u16 chan)
 {
 	struct xlink_event *event;
 	struct xlink_link *link;
@@ -490,38 +614,26 @@ enum xlink_error xlink_write_data(struct xlink_handle *handle,
 	if (!xlink || !handle)
 		return X_LINK_ERROR;
 
-	if (size > XLINK_MAX_DATA_SIZE)
-		return X_LINK_ERROR;
-
 	link = get_link_by_sw_device_id(handle->sw_device_id);
 	if (!link)
 		return X_LINK_ERROR;
 
-	event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
-				   chan, size, 0);
+	event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+				   &link->handle, chan, 0, 0);
 	if (!event)
 		return X_LINK_ERROR;
 
-	if (chan < XLINK_IPC_MAX_CHANNELS &&
-	    event->interface == IPC_INTERFACE) {
-		/* only passing message address across IPC interface */
-		event->data = &pmessage;
-		rc = xlink_multiplexer_tx(event, &event_queued);
+	rc = xlink_multiplexer_tx(event, &event_queued);
+	if (!event_queued)
 		xlink_destroy_event(event);
-	} else {
-		event->data = (u8 *)pmessage;
-		event->paddr = 0;
-		rc = xlink_multiplexer_tx(event, &event_queued);
-		if (!event_queued)
-			xlink_destroy_event(event);
-	}
+
 	return rc;
 }
-EXPORT_SYMBOL(xlink_write_data);
+EXPORT_SYMBOL(xlink_close_channel);
 
-enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
-				       u16 chan, u8 const *pmessage,
-				       u32 size)
+static enum xlink_error do_xlink_write_data(struct xlink_handle *handle,
+					    u16 chan, u8 const *pmessage,
+					    u32 size, u32 user_flag)
 {
 	struct xlink_event *event;
 	struct xlink_link *link;
@@ -544,48 +656,78 @@ enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
 				   chan, size, 0);
 	if (!event)
 		return X_LINK_ERROR;
-	event->user_data = 1;
+	event->user_data = user_flag;
 
 	if (chan < XLINK_IPC_MAX_CHANNELS &&
 	    event->interface == IPC_INTERFACE) {
 		/* only passing message address across IPC interface */
-		if (get_user(addr, (u32 __user *)pmessage)) {
-			xlink_destroy_event(event);
-			return X_LINK_ERROR;
+		if (user_flag) {
+			if (get_user(addr, (u32 __user *)pmessage)) {
+				xlink_destroy_event(event);
+				return X_LINK_ERROR;
+			}
+			event->data = &addr;
+		} else {
+			event->data = &pmessage;
 		}
-		event->data = &addr;
 		rc = xlink_multiplexer_tx(event, &event_queued);
 		xlink_destroy_event(event);
 	} else {
-		event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
-						      size,
-						      XLINK_PACKET_ALIGNMENT,
-						      XLINK_NORMAL_MEMORY);
-		if (!event->data) {
-			xlink_destroy_event(event);
-			return X_LINK_ERROR;
-		}
-		if (copy_from_user(event->data, (void __user *)pmessage, size)) {
-			xlink_platform_deallocate(&xlink->pdev->dev,
-						  event->data, paddr, size,
-						  XLINK_PACKET_ALIGNMENT,
-						  XLINK_NORMAL_MEMORY);
-			xlink_destroy_event(event);
-			return X_LINK_ERROR;
+		if (user_flag) {
+			event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
+							      size,
+							      XLINK_PACKET_ALIGNMENT,
+							      XLINK_NORMAL_MEMORY);
+			if (!event->data) {
+				xlink_destroy_event(event);
+				return X_LINK_ERROR;
+			}
+			if (copy_from_user(event->data, (void __user *)pmessage, size)) {
+				xlink_platform_deallocate(&xlink->pdev->dev,
+							  event->data, paddr, size,
+							  XLINK_PACKET_ALIGNMENT,
+							  XLINK_NORMAL_MEMORY);
+				xlink_destroy_event(event);
+				return X_LINK_ERROR;
+			}
+			event->paddr = paddr;
+		} else {
+			event->data = (u8 *)pmessage;
+			event->paddr = 0;
 		}
-		event->paddr = paddr;
 		rc = xlink_multiplexer_tx(event, &event_queued);
 		if (!event_queued) {
-			xlink_platform_deallocate(&xlink->pdev->dev,
-						  event->data, paddr, size,
-						  XLINK_PACKET_ALIGNMENT,
-						  XLINK_NORMAL_MEMORY);
+			if (user_flag) {
+				xlink_platform_deallocate(&xlink->pdev->dev,
+							  event->data, paddr, size,
+							  XLINK_PACKET_ALIGNMENT,
+							  XLINK_NORMAL_MEMORY);
+			}
 			xlink_destroy_event(event);
 		}
 	}
 	return rc;
 }
 
+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+				  u16 chan, u8 const *pmessage, u32 size)
+{
+	enum xlink_error rc = 0;
+
+	rc = do_xlink_write_data(handle, chan, pmessage, size, 0);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_write_data);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+				       u16 chan, u8 const *pmessage, u32 size)
+{
+	enum xlink_error rc = 0;
+
+	rc = do_xlink_write_data(handle, chan, pmessage, size, 1);
+	return rc;
+}
+
 enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
 					  u16 chan, u8 const *pmessage,
 					  u32 size)
@@ -614,16 +756,7 @@ enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
 }
 EXPORT_SYMBOL(xlink_write_control_data);
 
-enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
-				      u16 chan, u8 const *message, u32 size)
-{
-	enum xlink_error rc = 0;
-
-	rc = do_xlink_write_volatile(handle, chan, message, size, 0);
-	return rc;
-}
-
-enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+static enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
 					 u16 chan, u8 const *message,
 					 u32 size, u32 user_flag)
 {
@@ -668,6 +801,26 @@ enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
 	return rc;
 }
 
+enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+					   u16 chan, u8 const *message,
+					   u32 size)
+{
+	enum xlink_error rc = 0;
+
+	rc = do_xlink_write_volatile(handle, chan, message, size, 1);
+	return rc;
+}
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+				      u16 chan, u8 const *message, u32 size)
+{
+	enum xlink_error rc = 0;
+
+	rc = do_xlink_write_volatile(handle, chan, message, size, 0);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_write_volatile);
+
 enum xlink_error xlink_read_data(struct xlink_handle *handle,
 				 u16 chan, u8 **pmessage, u32 *size)
 {
@@ -757,8 +910,8 @@ EXPORT_SYMBOL(xlink_release_data);
 enum xlink_error xlink_disconnect(struct xlink_handle *handle)
 {
 	struct xlink_link *link;
-	int interface = NULL_INTERFACE;
-	enum xlink_error rc = X_LINK_ERROR;
+	int interface;
+	enum xlink_error rc = 0;
 
 	if (!xlink || !handle)
 		return X_LINK_ERROR;
@@ -767,7 +920,6 @@ enum xlink_error xlink_disconnect(struct xlink_handle *handle)
 	if (!link)
 		return X_LINK_ERROR;
 
-	// decrement refcount, if count is 0 lock mutex and disconnect
 	if (kref_put_mutex(&link->refcount, release_after_kref_put,
 			   &xlink->lock)) {
 		// stop dispatcher
@@ -946,6 +1098,179 @@ enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
 	return rc;
 }
 EXPORT_SYMBOL(xlink_get_device_mode);
+
+static int xlink_device_event_handler(u32 sw_device_id, u32 event_type)
+{
+	struct event_info *events = NULL;
+	xlink_device_event_cb event_cb;
+	bool found = false;
+	char event_attr[7];
+
+	mutex_lock(&dev_event_lock);
+	// find sw_device_id, event_type in list
+	list_for_each_entry(events, &ev_info.list, list) {
+		if (events) {
+			if (events->sw_device_id == sw_device_id &&
+			    events->event_type == event_type) {
+				event_cb = events->event_notif_fn;
+				found = true;
+				break;
+			}
+		}
+	}
+	if (found) {
+		if (events->user_flag && events->event_type < NUM_REG_EVENTS) {
+			xlink->eventx[events->event_type].value = events->event_type;
+			xlink->eventx[events->event_type].sw_dev_id = sw_device_id;
+			snprintf(event_attr, sizeof(event_attr), "event%d", events->event_type);
+			sysfs_notify(&xlink->pdev->dev.kobj, NULL, event_attr);
+		} else {
+			if (event_cb) {
+				event_cb(sw_device_id, event_type);
+			} else {
+				pr_info("No callback found for sw_device_id : 0x%x event type %d\n",
+					sw_device_id, event_type);
+				mutex_unlock(&dev_event_lock);
+				return X_LINK_ERROR;
+			}
+		}
+		pr_info("xlink device event %d notified for sw_device_id 0x%x\n",
+			events->event_type, sw_device_id);
+	}
+	mutex_unlock(&dev_event_lock);
+	return X_LINK_SUCCESS;
+}
+
+static bool event_registered(u32 sw_dev_id, u32 event)
+{
+	struct event_info *events = NULL;
+
+	list_for_each_entry(events, &ev_info.list, list) {
+		if (events) {
+			if (events->sw_device_id == sw_dev_id &&
+			    events->event_type == event) {
+				return true;
+			}
+		}
+	}
+	return false;
+}
+
+static enum xlink_error do_xlink_register_device_event(struct xlink_handle *handle,
+						       u32 *event_list,
+						       u32 num_events,
+						       xlink_device_event_cb event_notif_fn,
+						       u32 user_flag)
+{
+	struct event_info *events;
+	u32 interface;
+	u32 event;
+	int i;
+
+	if (num_events == 0 || num_events > NUM_REG_EVENTS)
+		return X_LINK_ERROR;
+	for (i = 0; i < num_events; i++) {
+		events = kzalloc(sizeof(*events), GFP_KERNEL);
+		if (!events)
+			return X_LINK_ERROR;
+		event = *event_list;
+		events->sw_device_id = handle->sw_device_id;
+		events->event_notif_fn = event_notif_fn;
+		events->event_type = *event_list++;
+		events->user_flag = user_flag;
+		if (user_flag) {
+			/* only add to list once if userspace */
+			/* xlink userspace handles multi process callbacks */
+			if (event_registered(handle->sw_device_id, event)) {
+				pr_info("xlink-core: Event 0x%x - %d already registered\n",
+					handle->sw_device_id, event);
+				kfree(events);
+				continue;
+			}
+		}
+		pr_info("xlink-core: Events: sw_device_id 0x%x event %d fn %p user_flag %d\n",
+			events->sw_device_id, events->event_type,
+			events->event_notif_fn, events->user_flag);
+		list_add_tail(&events->list, &ev_info.list);
+	}
+	interface = get_interface_from_sw_device_id(handle->sw_device_id);
+	if (interface == NULL_INTERFACE)
+		return X_LINK_ERROR;
+	xlink_platform_register_for_events(interface, handle->sw_device_id,
+					   xlink_device_event_handler);
+	return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_register_device_event_user(struct xlink_handle *handle,
+						  u32 *event_list, u32 num_events,
+						  xlink_device_event_cb event_notif_fn)
+{
+	enum xlink_error rc;
+
+	rc = do_xlink_register_device_event(handle, event_list, num_events,
+					    event_notif_fn, 1);
+	return rc;
+}
+
+enum xlink_error xlink_register_device_event(struct xlink_handle *handle,
+					     u32 *event_list, u32 num_events,
+					     xlink_device_event_cb event_notif_fn)
+{
+	enum xlink_error rc;
+
+	rc = do_xlink_register_device_event(handle, event_list, num_events,
+					    event_notif_fn, 0);
+	return rc;
+}
+EXPORT_SYMBOL(xlink_register_device_event);
+
+enum xlink_error xlink_unregister_device_event(struct xlink_handle *handle,
+					       u32 *event_list,
+					       u32 num_events)
+{
+	struct event_info *events = NULL;
+	u32 interface;
+	int found = 0;
+	int count = 0;
+	int i;
+
+	if (num_events > NUM_REG_EVENTS)
+		return X_LINK_ERROR;
+	for (i = 0; i < num_events; i++) {
+		list_for_each_entry(events, &ev_info.list, list) {
+			if (events->sw_device_id == handle->sw_device_id &&
+			    events->event_type == event_list[i]) {
+				found = 1;
+				break;
+			}
+		}
+		if (!events || !found)
+			return X_LINK_ERROR;
+		pr_info("removing event %d for sw_device_id 0x%x\n",
+			events->event_type, events->sw_device_id);
+		list_del(&events->list);
+		kfree(events);
+	}
+	// check if any events left for this sw_device_id
+	// are still registered ( in list )
+	list_for_each_entry(events, &ev_info.list, list) {
+		if (events) {
+			if (events->sw_device_id == handle->sw_device_id) {
+				count++;
+				break;
+			}
+		}
+	}
+	if (count == 0) {
+		interface = get_interface_from_sw_device_id(handle->sw_device_id);
+		if (interface == NULL_INTERFACE)
+			return X_LINK_ERROR;
+		xlink_platform_unregister_for_events(interface, handle->sw_device_id);
+	}
+	return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL(xlink_unregister_device_event);
+
 /* Device tree driver match. */
 static const struct of_device_id kmb_xlink_of_match[] = {
 	{
diff --git a/drivers/misc/xlink-core/xlink-core.h b/drivers/misc/xlink-core/xlink-core.h
index 5ba7ac653bf7..ee10058a15ac 100644
--- a/drivers/misc/xlink-core/xlink-core.h
+++ b/drivers/misc/xlink-core/xlink-core.h
@@ -12,11 +12,14 @@
 
 #define NUM_REG_EVENTS 4
 
-enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
-					 u16 chan, u8 const *message,
-					 u32 size, u32 user_flag);
-
 enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
 				       u16 chan, u8 const *pmessage,
 				       u32 size);
+enum xlink_error xlink_register_device_event_user(struct xlink_handle *handle,
+						  u32 *event_list,
+						  u32 num_events,
+						  xlink_device_event_cb event_notif_fn);
+enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+					   u16 chan, u8 const *message,
+					   u32 size);
 #endif /* XLINK_CORE_H_ */
diff --git a/drivers/misc/xlink-core/xlink-defs.h b/drivers/misc/xlink-core/xlink-defs.h
index 8985f6631175..81aa3bfffcd3 100644
--- a/drivers/misc/xlink-core/xlink-defs.h
+++ b/drivers/misc/xlink-core/xlink-defs.h
@@ -35,7 +35,7 @@
 #define CONTROL_CHANNEL_TIMEOUT_MS	0U	// wait indefinitely
 #define SIGXLNK				44	// signal XLink uses for callback signalling
 
-#define UNUSED(x) (void)(x)
+#define UNUSED(x) ((void)(x))
 
 // the end of the IPC channel range (starting at zero)
 #define XLINK_IPC_MAX_CHANNELS	1024
@@ -102,6 +102,8 @@ enum xlink_event_type {
 	XLINK_CLOSE_CHANNEL_REQ,
 	XLINK_PING_REQ,
 	XLINK_WRITE_CONTROL_REQ,
+	XLINK_DATA_READY_CALLBACK_REQ,
+	XLINK_DATA_CONSUMED_CALLBACK_REQ,
 	XLINK_REQ_LAST,
 	// response events
 	XLINK_WRITE_RESP = 0x10,
@@ -113,6 +115,8 @@ enum xlink_event_type {
 	XLINK_CLOSE_CHANNEL_RESP,
 	XLINK_PING_RESP,
 	XLINK_WRITE_CONTROL_RESP,
+	XLINK_DATA_READY_CALLBACK_RESP,
+	XLINK_DATA_CONSUMED_CALLBACK_RESP,
 	XLINK_RESP_LAST,
 };
 
diff --git a/drivers/misc/xlink-core/xlink-dispatcher.c b/drivers/misc/xlink-core/xlink-dispatcher.c
index 11ef8e4110ca..bc2f184488ac 100644
--- a/drivers/misc/xlink-core/xlink-dispatcher.c
+++ b/drivers/misc/xlink-core/xlink-dispatcher.c
@@ -5,18 +5,18 @@
  * Copyright (C) 2018-2019 Intel Corporation
  *
  */
-#include <linux/init.h>
-#include <linux/module.h>
+#include <linux/completion.h>
 #include <linux/device.h>
+#include <linux/init.h>
 #include <linux/kernel.h>
-#include <linux/slab.h>
 #include <linux/kthread.h>
 #include <linux/list.h>
-#include <linux/semaphore.h>
+#include <linux/module.h>
 #include <linux/mutex.h>
-#include <linux/completion.h>
-#include <linux/sched/signal.h>
 #include <linux/platform_device.h>
+#include <linux/sched/signal.h>
+#include <linux/semaphore.h>
+#include <linux/slab.h>
 
 #include "xlink-dispatcher.h"
 #include "xlink-multiplexer.h"
@@ -34,18 +34,18 @@ enum dispatcher_state {
 
 /* queue for dispatcher tx thread event handling */
 struct event_queue {
+	struct list_head head;	/* head of event linked list */
 	u32 count;		/* number of events in the queue */
 	u32 capacity;		/* capacity of events in the queue */
-	struct list_head head;	/* head of event linked list */
 	struct mutex lock;	/* locks queue while accessing */
 };
 
 /* dispatcher servicing a single link to a device */
 struct dispatcher {
 	u32 link_id;			/* id of link being serviced */
+	int interface;			/* underlying interface of link */
 	enum dispatcher_state state;	/* state of the dispatcher */
 	struct xlink_handle *handle;	/* xlink device handle */
-	int interface;			/* underlying interface of link */
 	struct task_struct *rxthread;	/* kthread servicing rx */
 	struct task_struct *txthread;	/* kthread servicing tx */
 	struct event_queue queue;	/* xlink event queue */
@@ -82,7 +82,7 @@ static struct dispatcher *get_dispatcher_by_id(u32 id)
 
 static u32 event_generate_id(void)
 {
-	static u32 id = 0xa; // TODO: temporary solution
+	static u32 id = 0xa;
 
 	return id++;
 }
@@ -142,9 +142,6 @@ static int dispatcher_event_send(struct xlink_event *event)
 	size_t event_header_size = sizeof(event->header);
 	int rc;
 
-	// write event header
-	// printk(KERN_DEBUG "Sending event: type = 0x%x, id = 0x%x\n",
-			// event->header.type, event->header.id);
 	rc = xlink_platform_write(event->interface,
 				  event->handle->sw_device_id, &event->header,
 				  &event_header_size, event->header.timeout, NULL);
@@ -159,8 +156,10 @@ static int dispatcher_event_send(struct xlink_event *event)
 					  event->handle->sw_device_id, event->data,
 					  &event->header.size, event->header.timeout,
 					  NULL);
-		if (rc)
+		if (rc) {
 			pr_err("Write data failed %d\n", rc);
+			return rc;
+		}
 		if (event->user_data == 1) {
 			if (event->paddr != 0) {
 				xlink_platform_deallocate(xlinkd->dev,
@@ -187,7 +186,6 @@ static int xlink_dispatcher_rxthread(void *context)
 	size_t size;
 	int rc;
 
-	// printk(KERN_DEBUG "dispatcher rxthread started\n");
 	event = xlink_create_event(disp->link_id, 0, disp->handle, 0, 0, 0);
 	if (!event)
 		return -1;
@@ -214,7 +212,6 @@ static int xlink_dispatcher_rxthread(void *context)
 			}
 		}
 	}
-	// printk(KERN_INFO "dispatcher rxthread stopped\n");
 	complete(&disp->rx_done);
 	do_exit(0);
 	return 0;
@@ -225,7 +222,6 @@ static int xlink_dispatcher_txthread(void *context)
 	struct dispatcher *disp = (struct dispatcher *)context;
 	struct xlink_event *event;
 
-	// printk(KERN_DEBUG "dispatcher txthread started\n");
 	allow_signal(SIGTERM); // allow thread termination while waiting on sem
 	complete(&disp->tx_done);
 	while (!kthread_should_stop()) {
@@ -236,7 +232,6 @@ static int xlink_dispatcher_txthread(void *context)
 		dispatcher_event_send(event);
 		xlink_destroy_event(event); // free handled event
 	}
-	// printk(KERN_INFO "dispatcher txthread stopped\n");
 	complete(&disp->tx_done);
 	do_exit(0);
 	return 0;
@@ -250,6 +245,7 @@ static int xlink_dispatcher_txthread(void *context)
 enum xlink_error xlink_dispatcher_init(void *dev)
 {
 	struct platform_device *plat_dev = (struct platform_device *)dev;
+	struct dispatcher *dsp;
 	int i;
 
 	xlinkd = kzalloc(sizeof(*xlinkd), GFP_KERNEL);
@@ -258,16 +254,16 @@ enum xlink_error xlink_dispatcher_init(void *dev)
 
 	xlinkd->dev = &plat_dev->dev;
 	for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
-		xlinkd->dispatchers[i].link_id = i;
-		sema_init(&xlinkd->dispatchers[i].event_sem, 0);
-		init_completion(&xlinkd->dispatchers[i].rx_done);
-		init_completion(&xlinkd->dispatchers[i].tx_done);
-		INIT_LIST_HEAD(&xlinkd->dispatchers[i].queue.head);
-		mutex_init(&xlinkd->dispatchers[i].queue.lock);
-		xlinkd->dispatchers[i].queue.count = 0;
-		xlinkd->dispatchers[i].queue.capacity =
-				XLINK_EVENT_QUEUE_CAPACITY;
-		xlinkd->dispatchers[i].state = XLINK_DISPATCHER_INIT;
+		dsp = &xlinkd->dispatchers[i];
+		dsp->link_id = i;
+		sema_init(&dsp->event_sem, 0);
+		init_completion(&dsp->rx_done);
+		init_completion(&dsp->tx_done);
+		INIT_LIST_HEAD(&dsp->queue.head);
+		mutex_init(&dsp->queue.lock);
+		dsp->queue.count = 0;
+		dsp->queue.capacity = XLINK_EVENT_QUEUE_CAPACITY;
+		dsp->state = XLINK_DISPATCHER_INIT;
 	}
 	mutex_init(&xlinkd->lock);
 
@@ -329,7 +325,7 @@ enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
 	struct dispatcher *disp;
 	int rc;
 
-	// get dispatcher by handle
+	// get dispatcher by link id
 	disp = get_dispatcher_by_id(event->link_id);
 	if (!disp)
 		return X_LINK_ERROR;
@@ -433,7 +429,6 @@ enum xlink_error xlink_dispatcher_destroy(void)
 			}
 			xlink_destroy_event(event);
 		}
-		// destroy dispatcher
 		mutex_destroy(&disp->queue.lock);
 	}
 	mutex_destroy(&xlinkd->lock);
diff --git a/drivers/misc/xlink-core/xlink-ioctl.c b/drivers/misc/xlink-core/xlink-ioctl.c
index 90947bbccfed..7822a7b35bb6 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.c
+++ b/drivers/misc/xlink-core/xlink-ioctl.c
@@ -28,15 +28,6 @@ static int copy_result_to_user(u32 *where, int rc)
 	return rc;
 }
 
-static enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
-						  u16 chan, u8 const *message, u32 size)
-{
-	enum xlink_error rc = 0;
-
-	rc = do_xlink_write_volatile(handle, chan, message, size, 1);
-	return rc;
-}
-
 int ioctl_connect(unsigned long arg)
 {
 	struct xlink_handle	devh	= {};
@@ -158,6 +149,28 @@ int ioctl_write_data(unsigned long arg)
 	return copy_result_to_user(wr.return_code, rc);
 }
 
+int ioctl_write_control_data(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkwritedata	wr	= {};
+	u8 volbuf[XLINK_MAX_BUF_SIZE];
+	int rc = 0;
+
+	if (copy_from_user(&wr, (void __user *)arg,
+			   sizeof(struct xlinkwritedata)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)wr.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	if (wr.size > XLINK_MAX_CONTROL_DATA_SIZE)
+		return -EFAULT;
+	if (copy_from_user(volbuf, (void __user *)wr.pmessage, wr.size))
+		return -EFAULT;
+	rc = xlink_write_control_data(&devh, wr.chan, volbuf, wr.size);
+
+	return copy_result_to_user(wr.return_code, rc);
+}
+
 int ioctl_write_volatile_data(unsigned long arg)
 {
 	struct xlink_handle	devh	= {};
@@ -242,6 +255,14 @@ int ioctl_start_vpu(unsigned long arg)
 	return copy_result_to_user(startvpu.return_code, rc);
 }
 
+int ioctl_stop_vpu(void)
+{
+	int rc = 0;
+
+	rc = xlink_stop_vpu();
+	return rc;
+}
+
 int ioctl_disconnect(unsigned long arg)
 {
 	struct xlink_handle	devh	= {};
@@ -424,3 +445,110 @@ int ioctl_set_device_mode(unsigned long arg)
 
 	return copy_result_to_user(devm.return_code, rc);
 }
+
+int ioctl_register_device_event(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkregdevevent	regdevevent = {};
+	u32 num_events = 0;
+	u32 *ev_list;
+	int rc = 0;
+
+	if (copy_from_user(&regdevevent, (void __user *)arg,
+			   sizeof(struct xlinkregdevevent)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)regdevevent.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	num_events = regdevevent.num_events;
+	if (num_events > 0 && num_events <= NUM_REG_EVENTS) {
+		ev_list = kzalloc((num_events * sizeof(u32)), GFP_KERNEL);
+		if (ev_list) {
+			if (copy_from_user(ev_list,
+					   (void __user *)regdevevent.event_list,
+					   (num_events * sizeof(u32)))) {
+				kfree(ev_list);
+				return -EFAULT;
+			}
+			rc = xlink_register_device_event_user(&devh,
+							      ev_list,
+							      num_events,
+							      NULL);
+			kfree(ev_list);
+		} else {
+			rc = X_LINK_ERROR;
+		}
+	} else {
+		rc = X_LINK_ERROR;
+	}
+
+	return copy_result_to_user(regdevevent.return_code, rc);
+}
+
+int ioctl_unregister_device_event(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkregdevevent	regdevevent = {};
+	u32 num_events = 0;
+	u32 *ev_list;
+	int rc = 0;
+
+	if (copy_from_user(&regdevevent, (void __user *)arg,
+			   sizeof(struct xlinkregdevevent)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)regdevevent.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	num_events = regdevevent.num_events;
+	if (num_events <= NUM_REG_EVENTS) {
+		ev_list = kzalloc(num_events * sizeof(u32), GFP_KERNEL);
+		if (!ev_list || copy_from_user(ev_list,
+					       (void __user *)regdevevent.event_list,
+					       num_events * sizeof(u32))) {
+			kfree(ev_list);
+			return ev_list ? -EFAULT : -ENOMEM;
+		}
+		rc = xlink_unregister_device_event(&devh, ev_list, num_events);
+		kfree(ev_list);
+	} else {
+		rc = X_LINK_ERROR;
+	}
+
+	return copy_result_to_user(regdevevent.return_code, rc);
+}
+
+int ioctl_data_ready_callback(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkcallback	cb	= {};
+	int rc = 0;
+
+	if (copy_from_user(&cb, (void __user *)arg,
+			   sizeof(struct xlinkcallback)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)cb.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	CHANNEL_SET_USER_BIT(cb.chan); // set MSbit for user space call
+	rc = xlink_data_available_event(&devh, cb.chan, cb.callback);
+
+	return copy_result_to_user(cb.return_code, rc);
+}
+
+int ioctl_data_consumed_callback(unsigned long arg)
+{
+	struct xlink_handle	devh	= {};
+	struct xlinkcallback	cb	= {};
+	int rc = 0;
+
+	if (copy_from_user(&cb, (void __user *)arg,
+			   sizeof(struct xlinkcallback)))
+		return -EFAULT;
+	if (copy_from_user(&devh, (void __user *)cb.handle,
+			   sizeof(struct xlink_handle)))
+		return -EFAULT;
+	CHANNEL_SET_USER_BIT(cb.chan); // set MSbit for user space call
+	rc = xlink_data_consumed_event(&devh, cb.chan, cb.callback);
+
+	return copy_result_to_user(cb.return_code, rc);
+}
diff --git a/drivers/misc/xlink-core/xlink-ioctl.h b/drivers/misc/xlink-core/xlink-ioctl.h
index d016d8418f30..7818b676d488 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.h
+++ b/drivers/misc/xlink-core/xlink-ioctl.h
@@ -14,10 +14,12 @@ int ioctl_open_channel(unsigned long arg);
 int ioctl_read_data(unsigned long arg);
 int ioctl_read_to_buffer(unsigned long arg);
 int ioctl_write_data(unsigned long arg);
+int ioctl_write_control_data(unsigned long arg);
 int ioctl_write_volatile_data(unsigned long arg);
 int ioctl_release_data(unsigned long arg);
 int ioctl_close_channel(unsigned long arg);
 int ioctl_start_vpu(unsigned long arg);
+int ioctl_stop_vpu(void);
 int ioctl_disconnect(unsigned long arg);
 int ioctl_get_device_name(unsigned long arg);
 int ioctl_get_device_list(unsigned long arg);
@@ -26,5 +28,9 @@ int ioctl_boot_device(unsigned long arg);
 int ioctl_reset_device(unsigned long arg);
 int ioctl_get_device_mode(unsigned long arg);
 int ioctl_set_device_mode(unsigned long arg);
+int ioctl_register_device_event(unsigned long arg);
+int ioctl_unregister_device_event(unsigned long arg);
+int ioctl_data_ready_callback(unsigned long arg);
+int ioctl_data_consumed_callback(unsigned long arg);
 
 #endif /* XLINK_IOCTL_H_ */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
index 48451dc30712..e09458b62c45 100644
--- a/drivers/misc/xlink-core/xlink-multiplexer.c
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -115,6 +115,38 @@ static struct xlink_multiplexer *xmux;
  *
  */
 
+static enum xlink_error run_callback(struct open_channel *opchan,
+				     void *callback, struct task_struct *pid)
+{
+	enum xlink_error rc = X_LINK_SUCCESS;
+	struct kernel_siginfo info;
+	void (*func)(int chan);
+	int ret;
+
+	if (opchan->callback_origin == 'U') { // user-space origin
+		if (pid) {
+			memset(&info, 0, sizeof(struct kernel_siginfo));
+			info.si_signo = SIGXLNK;
+			info.si_code = SI_QUEUE;
+			info.si_errno = opchan->id;
+			info.si_ptr = (void __user *)callback;
+			ret = send_sig_info(SIGXLNK, &info, pid);
+			if (ret < 0) {
+				pr_err("Unable to send signal %d\n", ret);
+				rc = X_LINK_ERROR;
+			}
+		} else {
+			pr_err("CHAN 0x%x -- calling_pid == NULL\n",
+			       opchan->id);
+			rc = X_LINK_ERROR;
+		}
+	} else { // kernel origin
+		func = callback;
+		func(opchan->id);
+	}
+	return rc;
+}
+
 static inline int chan_is_non_blocking_read(struct open_channel *opchan)
 {
 	if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXN_TXB)
@@ -151,7 +183,7 @@ static int is_channel_for_device(u16 chan, u32 sw_device_id,
 				 enum xlink_dev_type dev_type)
 {
 	struct xlink_channel_type const *chan_type = get_channel_type(chan);
-	int interface = NULL_INTERFACE;
+	int interface;
 
 	if (chan_type) {
 		interface = get_interface_from_sw_device_id(sw_device_id);
@@ -181,13 +213,9 @@ static int is_enough_space_in_channel(struct open_channel *opchan,
 		}
 	}
 	if (opchan->tx_up_limit == 1) {
-		if ((opchan->tx_fill_level + size)
-				< ((opchan->chan->size / 100) * THR_LWR)) {
-			opchan->tx_up_limit = 0;
-			return 1;
-		} else {
+		if ((opchan->tx_fill_level + size) >= ((opchan->chan->size / 100) * THR_LWR))
+			return 0;
+		opchan->tx_up_limit = 0;
-		}
 	}
 	return 1;
 }
@@ -231,6 +259,8 @@ static int add_packet_to_channel(struct open_channel *opchan,
 		list_add_tail(&pkt->list, &queue->head);
 		queue->count++;
 		opchan->rx_fill_level += pkt->length;
+	} else {
+		return X_LINK_ERROR;
 	}
 	return X_LINK_SUCCESS;
 }
@@ -262,9 +292,11 @@ static int release_packet_from_channel(struct open_channel *opchan,
 	} else {
 		// find packet in channel rx queue
 		list_for_each_entry(pkt, &queue->head, list) {
-			if (pkt->data == addr) {
-				packet_found = 1;
-				break;
+			if (pkt) {
+				if (pkt->data == addr) {
+					packet_found = 1;
+					break;
+				}
 			}
 		}
 	}
@@ -629,16 +661,46 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
 			}
 		} else if (xmux->channels[link_id][chan].status == CHAN_OPEN_PEER) {
 			/* channel already open */
-			xmux->channels[link_id][chan].status = CHAN_OPEN; // opened locally
-			xmux->channels[link_id][chan].size = event->header.size;
-			xmux->channels[link_id][chan].timeout = event->header.timeout;
-			xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
 			rc = multiplexer_open_channel(link_id, chan);
+			if (rc == X_LINK_SUCCESS) {
+				struct channel *xchan = &xmux->channels[link_id][chan];
+
+				xchan->status	= CHAN_OPEN; // opened locally
+				xchan->size	= event->header.size;
+				xchan->timeout	= event->header.timeout;
+				xchan->mode	= (uintptr_t)event->data;
+			}
 		} else {
 			/* channel already open */
 			rc = X_LINK_ALREADY_OPEN;
 		}
 		break;
+	case XLINK_DATA_READY_CALLBACK_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			opchan->ready_callback = event->data;
+			opchan->ready_calling_pid = event->calling_pid;
+			opchan->callback_origin = event->callback_origin;
+			pr_info("xlink ready callback process registered - %lx chan %d\n",
+				(uintptr_t)event->calling_pid, chan);
+			release_channel(opchan);
+		}
+		break;
+	case XLINK_DATA_CONSUMED_CALLBACK_REQ:
+		opchan = get_channel(link_id, chan);
+		if (!opchan) {
+			rc = X_LINK_COMMUNICATION_FAIL;
+		} else {
+			opchan->consumed_callback = event->data;
+			opchan->consumed_calling_pid = event->calling_pid;
+			opchan->callback_origin = event->callback_origin;
+			pr_info("xlink consumed callback process registered - %lx chan %d\n",
+				(uintptr_t)event->calling_pid, chan);
+			release_channel(opchan);
+		}
+		break;
 	case XLINK_CLOSE_CHANNEL_REQ:
 		if (xmux->channels[link_id][chan].status == CHAN_OPEN) {
 			opchan = get_channel(link_id, chan);
@@ -709,7 +771,8 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
 							  XLINK_PACKET_ALIGNMENT,
 							  XLINK_NORMAL_MEMORY);
 			} else {
-				pr_err("Fatal error: can't allocate memory in line:%d func:%s\n", __LINE__, __func__);
+				pr_err("Fatal error: can't allocate memory in line:%d func:%s\n",
+				       __LINE__, __func__);
 			}
 			rc = X_LINK_COMMUNICATION_FAIL;
 		} else {
@@ -754,6 +817,14 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
 				xlink_dispatcher_event_add(EVENT_RX, event);
 				//complete regardless of mode/timeout
 				complete(&opchan->pkt_available);
+				// run callback
+				if (xmux->channels[link_id][chan].status == CHAN_OPEN &&
+				    chan_is_non_blocking_read(opchan) &&
+				    opchan->ready_callback) {
+					rc = run_callback(opchan, opchan->ready_callback,
+							  opchan->ready_calling_pid);
+					break;
+				}
 			} else {
 				// failed to allocate buffer
 				rc = X_LINK_ERROR;
@@ -813,6 +884,13 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
 		xlink_dispatcher_event_add(EVENT_RX, event);
 		//complete regardless of mode/timeout
 		complete(&opchan->pkt_consumed);
+		// run callback
+		if (xmux->channels[link_id][chan].status == CHAN_OPEN &&
+		    chan_is_non_blocking_write(opchan) &&
+		    opchan->consumed_callback) {
+			rc = run_callback(opchan, opchan->consumed_callback,
+					  opchan->consumed_calling_pid);
+		}
 		release_channel(opchan);
 		break;
 	case XLINK_RELEASE_REQ:
@@ -838,47 +916,47 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
 			rc = multiplexer_open_channel(link_id, chan);
 			if (rc) {
 				rc = X_LINK_ERROR;
-			} else {
-				opchan = get_channel(link_id, chan);
-				if (!opchan) {
-					rc = X_LINK_COMMUNICATION_FAIL;
-				} else {
-					xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
-					complete(&opchan->opened);
-					passthru_event = xlink_create_event(link_id,
-									    XLINK_OPEN_CHANNEL_RESP,
-									    event->handle,
-									    chan,
-									    0,
-									    opchan->chan->timeout);
-					if (!passthru_event) {
-						rc = X_LINK_ERROR;
-						release_channel(opchan);
-						break;
-					}
-					xlink_dispatcher_event_add(EVENT_RX,
-								   passthru_event);
-				}
+				break;
+			}
+			opchan = get_channel(link_id, chan);
+			if (!opchan) {
+				rc = X_LINK_COMMUNICATION_FAIL;
+				break;
+			}
+			xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
+			complete(&opchan->opened);
+			passthru_event = xlink_create_event(link_id,
+							    XLINK_OPEN_CHANNEL_RESP,
+							    event->handle,
+							    chan,
+							    0,
+							    opchan->chan->timeout);
+			if (!passthru_event) {
+				rc = X_LINK_ERROR;
 				release_channel(opchan);
+				break;
 			}
+			xlink_dispatcher_event_add(EVENT_RX,
+						   passthru_event);
+			release_channel(opchan);
 		} else {
 			/* channel already open */
 			opchan = get_channel(link_id, chan);
 			if (!opchan) {
 				rc = X_LINK_COMMUNICATION_FAIL;
-			} else {
-				passthru_event = xlink_create_event(link_id,
-								    XLINK_OPEN_CHANNEL_RESP,
-								    event->handle,
-								    chan, 0, 0);
-				if (!passthru_event) {
-					release_channel(opchan);
-					rc = X_LINK_ERROR;
-					break;
-				}
-				xlink_dispatcher_event_add(EVENT_RX,
-							   passthru_event);
+				break;
+			}
+			passthru_event = xlink_create_event(link_id,
+							    XLINK_OPEN_CHANNEL_RESP,
+							    event->handle,
+							    chan, 0, 0);
+			if (!passthru_event) {
+				release_channel(opchan);
+				rc = X_LINK_ERROR;
+				break;
 			}
+			xlink_dispatcher_event_add(EVENT_RX,
+						   passthru_event);
 			release_channel(opchan);
 		}
 		rc = xlink_passthrough(event);
@@ -930,7 +1008,7 @@ enum xlink_error xlink_passthrough(struct xlink_event *event)
 #ifdef CONFIG_XLINK_LOCAL_HOST
 	struct xlink_ipc_context ipc = {0};
 	phys_addr_t physaddr = 0;
-	dma_addr_t vpuaddr = 0;
+	static dma_addr_t vpuaddr;
 	u32 timeout = 0;
 	u32 link_id;
 	u16 chan;
diff --git a/drivers/misc/xlink-core/xlink-platform.c b/drivers/misc/xlink-core/xlink-platform.c
index 56eb8da28a5f..b0076cb3671d 100644
--- a/drivers/misc/xlink-core/xlink-platform.c
+++ b/drivers/misc/xlink-core/xlink-platform.c
@@ -56,6 +56,11 @@ static inline int xlink_ipc_close_channel(u32 sw_device_id,
 					  u32 channel)
 { return -1; }
 
+static inline int xlink_ipc_register_for_events(u32 sw_device_id,
+						int (*callback)(u32 sw_device_id, u32 event))
+{ return -1; }
+static inline int xlink_ipc_unregister_for_events(u32 sw_device_id)
+{ return -1; }
 #endif /* CONFIG_XLINK_LOCAL_HOST */
 
 /*
@@ -95,6 +100,13 @@ static int (*open_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
 
 static int (*close_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
 		xlink_ipc_close_channel, NULL, NULL, NULL};
+static int (*register_for_events_fcts[NMB_OF_INTERFACES])(u32,
+						   int (*callback)(u32 sw_device_id, u32 event)) = {
+								   xlink_ipc_register_for_events,
+								   xlink_pcie_register_device_event,
+								   NULL, NULL};
+static int (*unregister_for_events_fcts[NMB_OF_INTERFACES])(u32) = {
+		xlink_ipc_unregister_for_events, xlink_pcie_unregister_device_event, NULL, NULL};
 
 /*
  * xlink low-level driver interface
@@ -207,6 +219,21 @@ int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
 	return close_chan_fcts[interface](sw_device_id, channel);
 }
 
+int xlink_platform_register_for_events(u32 interface, u32 sw_device_id,
+				       xlink_device_event_cb event_notif_fn)
+{
+	if (interface >= NMB_OF_INTERFACES || !register_for_events_fcts[interface])
+		return -1;
+	return register_for_events_fcts[interface](sw_device_id, event_notif_fn);
+}
+
+int xlink_platform_unregister_for_events(u32 interface, u32 sw_device_id)
+{
+	if (interface >= NMB_OF_INTERFACES || !unregister_for_events_fcts[interface])
+		return -1;
+	return unregister_for_events_fcts[interface](sw_device_id);
+}
+
 void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
 			      u32 size, u32 alignment,
 			      enum xlink_memory_region region)
diff --git a/include/linux/xlink.h b/include/linux/xlink.h
index b00dbc719530..ac196ff85469 100644
--- a/include/linux/xlink.h
+++ b/include/linux/xlink.h
@@ -70,6 +70,12 @@ enum xlink_error xlink_open_channel(struct xlink_handle *handle,
 				    u16 chan, enum xlink_opmode mode,
 				    u32 data_size, u32 timeout);
 
+enum xlink_error xlink_data_available_event(struct xlink_handle *handle,
+					    u16 chan,
+					    xlink_event data_available_event);
+enum xlink_error xlink_data_consumed_event(struct xlink_handle *handle,
+					   u16 chan,
+					   xlink_event data_consumed_event);
 enum xlink_error xlink_close_channel(struct xlink_handle *handle, u16 chan);
 
 enum xlink_error xlink_write_data(struct xlink_handle *handle,
@@ -113,9 +119,14 @@ enum xlink_error xlink_set_device_mode(struct xlink_handle *handle,
 enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
 				       enum xlink_device_power_mode *power_mode);
 
-enum xlink_error xlink_start_vpu(char *filename); /* depreciated */
+enum xlink_error xlink_register_device_event(struct xlink_handle *handle,
+					     u32 *event_list, u32 num_events,
+					     xlink_device_event_cb event_notif_fn);
+enum xlink_error xlink_unregister_device_event(struct xlink_handle *handle,
+					       u32 *event_list, u32 num_events);
+enum xlink_error xlink_start_vpu(char *filename); /* deprecated */
 
-enum xlink_error xlink_stop_vpu(void); /* depreciated */
+enum xlink_error xlink_stop_vpu(void); /* deprecated */
 
 /* API functions to be implemented
  *
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 24/34] dt-bindings: misc: Add Keem Bay vpumgr
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (22 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 23/34] xlink-core: add async channel and events mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 25/34] misc: Add Keem Bay VPU manager mgross
                   ` (9 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Li, Tingqian, Li

From: "Li, Tingqian" <tingqian.li@intel.com>

Add DT binding schema for the VPU manager on the Keem Bay SoC platform.

Signed-off-by: Li, Tingqian <tingqian.li@intel.com>
---
 .../bindings/misc/intel,keembay-vpu-mgr.yaml  | 48 +++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
new file mode 100644
index 000000000000..7fad14274ee2
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
@@ -0,0 +1,48 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+# Copyright (C) 2020 Intel
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/misc/intel,keembay-vpu-mgr.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Intel VPU manager bindings
+
+maintainers:
+  - Li, Tingqian <tingqian.li@intel.com>
+  - Zhou, Luwei <luwei.zhou@intel.com>
+
+description: |
+  The Intel VPU manager provides shared memory and process
+  dependent context management for Intel VPU hardware IP.
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - intel,keembay-vpu-mgr
+          - intel,keembay-vpusmm
+
+  memory-region:
+    description:
+      phandle to a node describing reserved memory (System RAM memory)
+      used by VPU (see bindings/reserved-memory/reserved-memory.txt)
+    maxItems: 1
+
+  intel,keembay-vpu-ipc-id:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+      the index of the VPU slice to be managed. Default is 0.
+
+required:
+  - compatible
+  - memory-region
+
+additionalProperties: false
+
+examples:
+  - |
+    vpumgr0 {
+        compatible = "intel,keembay-vpu-mgr";
+        memory-region = <&vpu_reserved>;
+        intel,keembay-vpu-ipc-id = <0x0>;
+    };
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 25/34] misc: Add Keem Bay VPU manager
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (23 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 24/34] dt-bindings: misc: Add Keem Bay vpumgr mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 26/34] dt-bindings: misc: intel_tsens: Add tsens thermal bindings documentation mgross
                   ` (8 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Li, Tingqian, Li, Zhou, Luwei, Wang, jue

From: "Li, Tingqian" <tingqian.li@intel.com>

The VPU IP on the Keem Bay SOC is a vision acceleration IP complex
under the control of an RTOS-based firmware (running on a RISC MCU
inside the VPU IP) that serves user-space applications running on the
CPU side for HW-accelerated computer vision tasks.

This module is the kernel counterpart of VPUAL (the VPU abstraction
layer), which bridges the firmware on the VPU side and applications in
CPU user space. It assists the firmware in serving multiple user-space
application processes on the CPU side concurrently, while also
performing the necessary data buffer management on behalf of the
VPU IP.

objmgr provides the basic infrastructure for creating/destroying
VPU-side software objects concurrently on demand of user-space
applications, and automatically releases leaked objects while handling
application termination. Note that this module only cares about the
life-cycle of such objects; it is up to the application and the
firmware to define the behavior/operations of each object.

objmgr does its job by communicating with the firmware through a
fixed, reserved xlink channel, using a very simple message protocol.

smm provides DMABuf allocation/import facilities that allow a
user-space app to pass data to/from the VPU in a zero-copy fashion. It
also provides a convenient ioctl for converting the virtual pointer of
a mem-mapped and imported DMABuf into its corresponding DMA address,
so that a user-space app can specify sub-regions of a bigger DMABuf to
be processed by the VPU.
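
For reviewers, below is a minimal user-space sketch of the vcm
submit/wait flow. It is illustrative only and not part of the patch: it
assumes the uapi structures in include/uapi/misc/vpumgr.h expose the
field names used by vpumgr_ioctl() in this patch (cmd/in/in_len/
submit_id and submit_id/vpu_rc/out/out_len/timeout_ms), that udev
creates /dev/vpumgr0 for VPU IPC id 0, that the installed header is
reachable as <misc/vpumgr.h>, and that 0x100 stands in for an
application-defined command above VCTX_KMD_RESERVED_CMD_LAST.

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <misc/vpumgr.h>   /* installed uapi header (assumed path) */

  int main(void)
  {
          struct vpumgr_vcm_submit vs = {0};
          struct vpumgr_vcm_wait vw = {0};
          char payload[16] = "hello-vpu";    /* placeholder request data */
          char result[64];                   /* buffer for the VPU reply */
          int fd;

          fd = open("/dev/vpumgr0", O_RDWR); /* VPU IPC id 0 (assumed node) */
          if (fd < 0)
                  return 1;

          vs.cmd = 0x100;                    /* hypothetical app command id */
          vs.in = (uintptr_t)payload;
          vs.in_len = sizeof(payload);
          if (ioctl(fd, VPUMGR_IOCTL_VCM_SUBMIT, &vs) == 0) {
                  vw.submit_id = vs.submit_id; /* filled in by submit ioctl */
                  vw.out = (uintptr_t)result;
                  vw.out_len = sizeof(result);
                  vw.timeout_ms = 1000;
                  if (ioctl(fd, VPUMGR_IOCTL_VCM_WAIT, &vw) == 0)
                          printf("vpu_rc=%d out_len=%u\n",
                                 (int)vw.vpu_rc, (unsigned int)vw.out_len);
          }
          close(fd);
          return 0;
  }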

Signed-off-by: Li, Tingqian <tingqian.li@intel.com>
Signed-off-by: Zhou, Luwei <luwei.zhou@intel.com>
Signed-off-by: Wang, jue <wang.jue@intel.com>
---
 drivers/misc/Kconfig             |   1 +
 drivers/misc/Makefile            |   1 +
 drivers/misc/vpumgr/Kconfig      |   9 +
 drivers/misc/vpumgr/Makefile     |   3 +
 drivers/misc/vpumgr/vpu_common.h |  31 ++
 drivers/misc/vpumgr/vpu_mgr.c    | 370 +++++++++++++++++++
 drivers/misc/vpumgr/vpu_smm.c    | 554 +++++++++++++++++++++++++++++
 drivers/misc/vpumgr/vpu_smm.h    |  30 ++
 drivers/misc/vpumgr/vpu_vcm.c    | 585 +++++++++++++++++++++++++++++++
 drivers/misc/vpumgr/vpu_vcm.h    |  84 +++++
 include/uapi/misc/vpumgr.h       |  64 ++++
 11 files changed, 1732 insertions(+)
 create mode 100644 drivers/misc/vpumgr/Kconfig
 create mode 100644 drivers/misc/vpumgr/Makefile
 create mode 100644 drivers/misc/vpumgr/vpu_common.h
 create mode 100644 drivers/misc/vpumgr/vpu_mgr.c
 create mode 100644 drivers/misc/vpumgr/vpu_smm.c
 create mode 100644 drivers/misc/vpumgr/vpu_smm.h
 create mode 100644 drivers/misc/vpumgr/vpu_vcm.c
 create mode 100644 drivers/misc/vpumgr/vpu_vcm.h
 create mode 100644 include/uapi/misc/vpumgr.h

diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 09ae65e98681..2d1f7b165cc8 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -484,4 +484,5 @@ source "drivers/misc/uacce/Kconfig"
 source "drivers/misc/xlink-pcie/Kconfig"
 source "drivers/misc/xlink-ipc/Kconfig"
 source "drivers/misc/xlink-core/Kconfig"
+source "drivers/misc/vpumgr/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index f3a6eb03bae9..2936930f3edc 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -60,3 +60,4 @@ obj-$(CONFIG_HISI_HIKEY_USB)	+= hisi_hikey_usb.o
 obj-y                           += xlink-pcie/
 obj-$(CONFIG_XLINK_IPC)		+= xlink-ipc/
 obj-$(CONFIG_XLINK_CORE)	+= xlink-core/
+obj-$(CONFIG_VPUMGR)		+= vpumgr/
diff --git a/drivers/misc/vpumgr/Kconfig b/drivers/misc/vpumgr/Kconfig
new file mode 100644
index 000000000000..bb82ff83afd3
--- /dev/null
+++ b/drivers/misc/vpumgr/Kconfig
@@ -0,0 +1,9 @@
+config VPUMGR
+	tristate "VPU Manager"
+	depends on ARM64 && XLINK_CORE
+	help
+	  VPUMGR manages life-cycle of VPU related resources which were
+	  dynamically allocated on demands of user-space application
+
+	  Select y or m if you have a processor including the Intel
+	  Vision Processor (VPU) on it.
diff --git a/drivers/misc/vpumgr/Makefile b/drivers/misc/vpumgr/Makefile
new file mode 100644
index 000000000000..51441dc8a930
--- /dev/null
+++ b/drivers/misc/vpumgr/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_VPUMGR) += vpumgr.o
+vpumgr-objs :=	vpu_mgr.o vpu_smm.o vpu_vcm.o
diff --git a/drivers/misc/vpumgr/vpu_common.h b/drivers/misc/vpumgr/vpu_common.h
new file mode 100644
index 000000000000..cd474ffc05f3
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_common.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * VPUMGR Kernel module - common definition
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#ifndef _VPU_COMMON_H
+#define _VPU_COMMON_H
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+
+#include <uapi/misc/vpumgr.h>
+
+#include "vpu_vcm.h"
+
+/* there will be one such device for each HW instance */
+struct vpumgr_device {
+	struct device *sdev;
+	struct device *dev;
+	dev_t devnum;
+	struct cdev cdev;
+	struct platform_device *pdev;
+
+	struct vcm_dev vcm;
+	struct dentry *debugfs_root;
+
+	struct mutex client_mutex; /* protect client_list */
+	struct list_head client_list;
+};
+
+#define XLINK_INVALID_SW_DEVID  0xDEADBEEF
+
+#endif
diff --git a/drivers/misc/vpumgr/vpu_mgr.c b/drivers/misc/vpumgr/vpu_mgr.c
new file mode 100644
index 000000000000..75be64ebc3b0
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_mgr.c
@@ -0,0 +1,370 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * VPU Manager Kernel module.
+ *
+ * Copyright (C) 2020-2021 Intel Corporation
+ *
+ */
+#include <linux/debugfs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include "vpu_common.h"
+#include "vpu_smm.h"
+#include "vpu_vcm.h"
+
+#define DRIVER_NAME             "vpumgr"
+#define MAX_DEV_CNT             32
+
+/* Define the max xlink device number */
+#define MAX_SW_DEV_CNT          20
+
+/* Define the SW_DEVICE_ID bit mask and offset */
+#define IPC_INTERFACE           0x00
+#define PCIE_INTERFACE          0x01
+#define MASK_INTERFACE          0x7000000
+#define BIT_OFFSET_INTERFACE    24
+#define MASK_VPU_IPC_ID         0x000E
+#define BIT_OFFSET_VPU_ID       1
+
+#define SWDEVID_INTERFACE(sw_dev_id) (((sw_dev_id) & MASK_INTERFACE) >> BIT_OFFSET_INTERFACE)
+#define SWDEVID_VPU_IPC_ID(sw_dev_id) (((sw_dev_id) & MASK_VPU_IPC_ID) >> BIT_OFFSET_VPU_ID)
+
+static dev_t vpumgr_devnum;
+static struct class *vpumgr_class;
+
+/**
+ * struct vpumgr_fpriv - per-process context stored in FD private data.
+ * @vdev: vpumgr device corresponding to the file
+ * @smm: memory manager
+ * @ctx: vpu context manager
+ * @list: node in the global list of all opened files
+ * @pid: process which opens the device file
+ */
+struct vpumgr_fpriv {
+	struct vpumgr_device *vdev;
+	struct vpumgr_smm smm;
+	struct vpumgr_ctx ctx;
+	struct list_head list;
+	pid_t  pid;
+};
+
+static u32 get_sw_device_id(int vpu_ipc_id)
+{
+	u32 sw_id_list[MAX_SW_DEV_CNT];
+	enum xlink_error rc;
+	u32 num = 0;
+	u32 swid;
+	int i;
+
+	rc = xlink_get_device_list(sw_id_list, &num);
+	if (rc) {
+		pr_err("XLINK get device list error %d in %s\n", rc, __func__);
+		return XLINK_INVALID_SW_DEVID;
+	}
+
+	for (i = 0; i < num; i++) {
+		swid = sw_id_list[i];
+		if (SWDEVID_INTERFACE(swid) == IPC_INTERFACE &&
+		    SWDEVID_VPU_IPC_ID(swid) == vpu_ipc_id)
+			return swid;
+	}
+	return XLINK_INVALID_SW_DEVID;
+}
+
+static int vpumgr_open(struct inode *inode, struct file *filp)
+{
+	struct vpumgr_fpriv *vpriv;
+	struct vpumgr_device *vdev;
+	int rc;
+
+	vpriv = kzalloc(sizeof(*vpriv), GFP_KERNEL);
+	if (!vpriv)
+		return -ENOMEM;
+
+	vdev = container_of(inode->i_cdev, struct vpumgr_device, cdev);
+	rc = smm_open(&vpriv->smm, vdev);
+	if (rc)
+		goto free_priv;
+
+	rc = vcm_open(&vpriv->ctx, vdev);
+	if (rc)
+		goto close_smm;
+
+	vpriv->vdev = vdev;
+	vpriv->pid = task_pid_nr(current);
+	INIT_LIST_HEAD(&vpriv->list);
+
+	mutex_lock(&vdev->client_mutex);
+	list_add_tail(&vpriv->list, &vdev->client_list);
+	mutex_unlock(&vdev->client_mutex);
+
+	filp->private_data = vpriv;
+	return 0;
+
+close_smm:
+	smm_close(&vpriv->smm);
+free_priv:
+	kfree(vpriv);
+	return rc;
+}
+
+static int vpumgr_release(struct inode *inode, struct file *filp)
+{
+	struct vpumgr_fpriv *vpriv = filp->private_data;
+	struct vpumgr_device *vdev = container_of(inode->i_cdev, struct vpumgr_device, cdev);
+
+	vcm_close(&vpriv->ctx);
+	smm_close(&vpriv->smm);
+
+	mutex_lock(&vdev->client_mutex);
+	list_del(&vpriv->list);
+	mutex_unlock(&vdev->client_mutex);
+
+	kfree(vpriv);
+	filp->private_data = NULL;
+	return 0;
+}
+
+static long vpumgr_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+	struct vpumgr_fpriv *vpriv = filp->private_data;
+	const unsigned int io_dir = _IOC_DIR(cmd);
+	const unsigned int io_size = _IOC_SIZE(cmd);
+	struct vpumgr_vcm_submit *vs;
+	struct vpumgr_vcm_wait *vw;
+	char tmp[128];
+	int rc = 0;
+
+	if (_IOC_TYPE(cmd) != VPUMGR_MAGIC || _IOC_NR(cmd) >= _IOC_NR(VPUMGR_IOCTL_END))
+		return -EINVAL;
+
+	if (io_size > sizeof(tmp))
+		return -EFAULT;
+
+	if (io_dir & _IOC_READ) {
+		if (copy_from_user(tmp, (void __user *)arg, io_size) != 0)
+			return  -EFAULT;
+	}
+
+	switch (cmd) {
+	case VPUMGR_IOCTL_DMABUF_ALLOC:
+		rc = smm_alloc(&vpriv->smm, (void *)tmp);
+		break;
+	case VPUMGR_IOCTL_DMABUF_IMPORT:
+		rc = smm_import(&vpriv->smm, (void *)tmp);
+		break;
+	case VPUMGR_IOCTL_DMABUF_UNIMPORT:
+		rc = smm_unimport(&vpriv->smm, (void *)tmp);
+		break;
+	case VPUMGR_IOCTL_DMABUF_PTR2VPU:
+		rc = smm_ptr2vpu(&vpriv->smm, (void *)tmp);
+		break;
+	case VPUMGR_IOCTL_VCM_SUBMIT:
+		vs = (struct vpumgr_vcm_submit *)tmp;
+		if (vs->cmd <= VCTX_KMD_RESERVED_CMD_LAST) {
+			/*
+			 * user space may talk to a VPU context living in firmware
+			 * with any command other than those reserved for kernel
+			 * mode.
+			 */
+			rc = -EACCES;
+			break;
+		}
+		rc = vcm_submit(&vpriv->ctx, vs->cmd,
+				(const void *)vs->in, vs->in_len, &vs->submit_id);
+		break;
+	case VPUMGR_IOCTL_VCM_WAIT:
+		vw = (struct vpumgr_vcm_wait *)tmp;
+		rc = vcm_wait(&vpriv->ctx, vw->submit_id, &vw->vpu_rc,
+			      (void *)vw->out, &vw->out_len, vw->timeout_ms);
+		break;
+	}
+
+	if (!rc) {
+		if (io_dir & _IOC_WRITE) {
+			if (copy_to_user((void __user *)arg, tmp, io_size) != 0)
+				return -EFAULT;
+		}
+	}
+	return rc;
+}
+
+static const struct file_operations vpumgr_devfile_fops = {
+	.owner = THIS_MODULE,
+	.open = vpumgr_open,
+	.release = vpumgr_release,
+	.unlocked_ioctl = vpumgr_ioctl,
+};
+
+static int vpumgr_debugfs_stats_show(struct seq_file *file, void *offset)
+{
+	struct vpumgr_device *vdev = dev_get_drvdata(file->private);
+	struct vpumgr_fpriv *fpriv;
+	int i = 0;
+
+	mutex_lock(&vdev->client_mutex);
+	list_for_each_entry(fpriv, &vdev->client_list, list) {
+		seq_printf(file, "client #%d pid:%d\n", i++, fpriv->pid);
+		vcm_debugfs_stats_show(file, &fpriv->ctx);
+		smm_debugfs_stats_show(file, &fpriv->smm);
+	}
+	mutex_unlock(&vdev->client_mutex);
+	return 0;
+}
+
+static const struct of_device_id keembay_vpumgr_of_match[] = {
+	{ .compatible = "intel,keembay-vpu-mgr"},
+	{ .compatible = "intel,keembay-vpusmm"},
+	{}
+};
+MODULE_DEVICE_TABLE(of, keembay_vpumgr_of_match);
+
+static int vpumgr_driver_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct vpumgr_device *vdev;
+	u32 ipc_sw_device_id;
+	u32 vpu_ipc_id = 0;
+	int rc;
+
+	/* get device id */
+	rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-id",
+				  &vpu_ipc_id);
+	if (rc && rc != -EINVAL) {
+		dev_err(dev, "%s: vpu-ipc-id read failed with rc %d\n", __func__, rc);
+		return -EINVAL;
+	}
+
+	ipc_sw_device_id = get_sw_device_id(vpu_ipc_id);
+	if (ipc_sw_device_id == XLINK_INVALID_SW_DEVID)
+		dev_warn(dev, "%s: no xlink sw device for vpu_ipc_id %d\n",
+			 __func__, vpu_ipc_id);
+
+	vdev = devm_kzalloc(dev, sizeof(struct vpumgr_device), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->devnum = MKDEV(MAJOR(vpumgr_devnum), vpu_ipc_id);
+	vdev->pdev = pdev;
+	vdev->dev = dev;
+
+	dev_dbg(dev, "dev->devnum %u, id %u, major %u\n",
+		vdev->devnum, vpu_ipc_id,  MAJOR(vdev->devnum));
+
+	vdev->sdev = device_create(vpumgr_class, dev, vdev->devnum,
+				   NULL, DRIVER_NAME "%d", vpu_ipc_id);
+	if (IS_ERR(vdev->sdev)) {
+		dev_err(dev, "%s: device_create failed\n", __func__);
+		return PTR_ERR(vdev->sdev);
+	}
+
+	cdev_init(&vdev->cdev, &vpumgr_devfile_fops);
+	vdev->cdev.owner = THIS_MODULE;
+	rc = cdev_add(&vdev->cdev, vdev->devnum, 1);
+	if (rc) {
+		dev_err(dev, "%s: cdev_add failed.\n", __func__);
+		goto destroy_device;
+	}
+
+	vdev->debugfs_root = debugfs_create_dir(dev_name(vdev->sdev), NULL);
+
+	debugfs_create_devm_seqfile(dev, "stats", vdev->debugfs_root,
+				    vpumgr_debugfs_stats_show);
+
+	rc = smm_init(vdev);
+	if (rc)
+		goto remove_debugfs;
+
+	rc = vcm_init(vdev, ipc_sw_device_id);
+	if (rc)
+		goto fini_smm;
+
+	INIT_LIST_HEAD(&vdev->client_list);
+	mutex_init(&vdev->client_mutex);
+
+	dev_set_drvdata(dev, vdev);
+	return 0;
+
+fini_smm:
+	smm_fini(vdev);
+remove_debugfs:
+	debugfs_remove_recursive(vdev->debugfs_root);
+	cdev_del(&vdev->cdev);
+destroy_device:
+	device_destroy(vpumgr_class, vdev->devnum);
+	return rc;
+}
+
+static int vpumgr_driver_remove(struct platform_device *pdev)
+{
+	struct vpumgr_device *vdev = dev_get_drvdata(&pdev->dev);
+
+	mutex_destroy(&vdev->client_mutex);
+	vcm_fini(vdev);
+	smm_fini(vdev);
+	debugfs_remove_recursive(vdev->debugfs_root);
+	cdev_del(&vdev->cdev);
+	device_destroy(vpumgr_class, vdev->devnum);
+	return 0;
+}
+
+static struct platform_driver vpumgr_driver = {
+	.probe  = vpumgr_driver_probe,
+	.remove = vpumgr_driver_remove,
+	.driver = {
+			.owner = THIS_MODULE,
+			.name = "keembay-vpu-mgr",
+			.of_match_table = keembay_vpumgr_of_match,
+	 },
+};
+
+static int __init vpumgr_init(void)
+{
+	int rc;
+
+	rc = alloc_chrdev_region(&vpumgr_devnum, 0, MAX_DEV_CNT, DRIVER_NAME);
+	if (rc < 0) {
+		pr_err("[%s] err: alloc_chrdev_region\n", __func__);
+		return rc;
+	}
+
+	vpumgr_class = class_create(THIS_MODULE, DRIVER_NAME "_class");
+	if (IS_ERR(vpumgr_class)) {
+		rc = PTR_ERR(vpumgr_class);
+		pr_err("[%s] err: class_create\n", __func__);
+		goto unreg_chrdev;
+	}
+
+	rc = platform_driver_register(&vpumgr_driver);
+	if (rc) {
+		pr_err("[%s] err platform_driver_register\n", __func__);
+		goto destroy_class;
+	}
+
+	return 0;
+
+destroy_class:
+	class_destroy(vpumgr_class);
+unreg_chrdev:
+	unregister_chrdev_region(vpumgr_devnum, MAX_DEV_CNT);
+	return rc;
+}
+
+static void vpumgr_exit(void)
+{
+	platform_driver_unregister(&vpumgr_driver);
+	class_destroy(vpumgr_class);
+	unregister_chrdev_region(vpumgr_devnum, MAX_DEV_CNT);
+}
+
+module_init(vpumgr_init)
+module_exit(vpumgr_exit)
+
+MODULE_DESCRIPTION("VPU resource manager driver");
+MODULE_AUTHOR("Tingqian Li <tingqian.li@intel.com>");
+MODULE_AUTHOR("Luwei Zhou <luwie.zhou@intel.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/misc/vpumgr/vpu_smm.c b/drivers/misc/vpumgr/vpu_smm.c
new file mode 100644
index 000000000000..a89f62984a48
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_smm.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation
+ */
+#include <linux/cdev.h>
+#include <linux/debugfs.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-direct.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/rbtree.h>
+#include <linux/slab.h>
+#include <linux/seq_file.h>
+#include <linux/uaccess.h>
+
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+
+#include <uapi/misc/vpumgr.h>
+
+#include "vpu_common.h"
+#include "vpu_smm.h"
+
+/*
+ * A DMABuf exported by the VPU device is described by the following data
+ * structure. The buffer may outlive the session (it may be shared with
+ * other drivers), so it holds a pointer to the device rather than to the
+ * session.
+ */
+struct vpusmm_buffer {
+	struct device *dev;
+	void *cookie;
+	dma_addr_t dma_addr;
+	size_t size;
+	unsigned long dma_attrs;
+};
+
+/*
+ * DMABufs imported into the VPU device are maintained in an rb-tree keyed
+ * by the dmabuf pointer, so user space can unimport one by specifying its
+ * fd and the ptr2vpu API can work by searching this rb-tree.
+ */
+struct vpusmm_impbuf {
+	struct rb_node node;
+	struct dma_buf *dmabuf;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+	enum dma_data_direction direction;
+	dma_addr_t vpu_addr;
+	int refcount;
+};
+
+/*
+ * VPU imported dmabuf management
+ */
+static void vpusmm_insert_impbuf(struct vpumgr_smm *sess,
+				 struct vpusmm_impbuf *new_item)
+{
+	struct rb_root *root = &sess->imp_rb;
+	struct rb_node **iter = &root->rb_node, *parent = NULL;
+	struct dma_buf *value = new_item->dmabuf;
+	struct vpusmm_impbuf *item;
+
+	while (*iter) {
+		parent = *iter;
+		item = rb_entry(parent, struct vpusmm_impbuf, node);
+
+		if (item->dmabuf > value)
+			iter = &(*iter)->rb_left;
+		else
+			iter = &(*iter)->rb_right;
+	}
+
+	/* Put the new node there */
+	rb_link_node(&new_item->node, parent, iter);
+	rb_insert_color(&new_item->node, root);
+}
+
+static struct vpusmm_impbuf *vpusmm_find_impbuf(struct vpumgr_smm *sess,
+						struct dma_buf *dmabuf)
+{
+	struct rb_node *node = sess->imp_rb.rb_node;
+	struct vpusmm_impbuf *item;
+
+	while (node) {
+		item = rb_entry(node, struct vpusmm_impbuf, node);
+
+		if (item->dmabuf > dmabuf)
+			node = node->rb_left;
+		else if (item->dmabuf < dmabuf)
+			node = node->rb_right;
+		else
+			return item;
+	}
+	return NULL;
+}
+
+static struct vpusmm_impbuf *vpusmm_import_dmabuf(struct vpumgr_smm *sess,
+						  int dmabuf_fd,
+						  enum dma_data_direction direction, int vpu_id)
+{
+	struct vpusmm_impbuf *item;
+	struct dma_buf_attachment *attach;
+	struct device *dma_dev = sess->vdev->dev;
+	struct dma_buf *dmabuf;
+	struct sg_table *sgt;
+
+	dmabuf = dma_buf_get(dmabuf_fd);
+	if (IS_ERR(dmabuf))
+		return ERR_CAST(dmabuf);
+
+	mutex_lock(&sess->imp_rb_lock);
+	item = vpusmm_find_impbuf(sess, dmabuf);
+	if (item) {
+		item->refcount++;
+		goto found_impbuf;
+	}
+
+	attach = dma_buf_attach(dmabuf, dma_dev);
+	if (IS_ERR(attach)) {
+		item = ERR_CAST(attach);
+		goto fail_attach;
+	}
+
+	sgt = dma_buf_map_attachment(attach, direction);
+	if (IS_ERR(sgt)) {
+		item = ERR_CAST(sgt);
+		goto fail_map;
+	}
+
+	if (sgt->nents > 1) {
+		item = ERR_PTR(-EINVAL);
+		goto fail_import;
+	}
+
+	item = kzalloc(sizeof(*item), GFP_KERNEL);
+	if (!item) {
+		item = ERR_PTR(-ENOMEM);
+		goto fail_import;
+	}
+
+	item->dmabuf	= dmabuf;
+	item->attach	= attach;
+	item->sgt	= sgt;
+	item->direction = direction;
+	item->vpu_addr	= sg_dma_address(sgt->sgl);
+	item->refcount	= 1;
+
+	vpusmm_insert_impbuf(sess, item);
+
+	mutex_unlock(&sess->imp_rb_lock);
+	return item;
+
+fail_import:
+	dma_buf_unmap_attachment(attach, sgt, direction);
+fail_map:
+	dma_buf_detach(dmabuf, attach);
+fail_attach:
+found_impbuf:
+	mutex_unlock(&sess->imp_rb_lock);
+	dma_buf_put(dmabuf);
+	return item;
+}
+
+int smm_open(struct vpumgr_smm *sess, struct vpumgr_device *vdev)
+{
+	sess->vdev = vdev;
+	sess->imp_rb = RB_ROOT;
+	mutex_init(&sess->imp_rb_lock);
+	return 0;
+}
+
+int smm_close(struct vpumgr_smm *sess)
+{
+	struct device *dev = sess->vdev->sdev;
+	struct rb_node *cur, *next;
+	struct vpusmm_impbuf *item;
+
+	mutex_destroy(&sess->imp_rb_lock);
+
+	cur = rb_first(&sess->imp_rb);
+	while (cur) {
+		item = rb_entry(cur, struct vpusmm_impbuf, node);
+		next = rb_next(cur);
+		if (item) {
+			dev_dbg(dev, "[%s] PID:%d free leaked imported dmabuf %zu bytes, %d refs\n",
+				__func__, current->pid, item->dmabuf->size, item->refcount);
+			dma_buf_unmap_attachment(item->attach, item->sgt, item->direction);
+			dma_buf_detach(item->dmabuf, item->attach);
+			dma_buf_put(item->dmabuf);
+			rb_erase(&item->node, &sess->imp_rb);
+			kfree(item);
+		}
+		cur = next;
+	}
+	return 0;
+}
+
+/*
+ *  DMABuf
+ */
+static struct sg_table *map_dma_buf_vpusmm(struct dma_buf_attachment *attach,
+					   enum dma_data_direction dir)
+{
+	struct vpusmm_buffer *buff = attach->dmabuf->priv;
+	struct sg_table *sgt;
+	int rc;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return ERR_PTR(-EINVAL);
+
+	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+	if (!sgt)
+		return NULL;
+
+	rc = dma_get_sgtable(buff->dev, sgt, buff->cookie, buff->dma_addr, buff->size);
+	if (rc < 0)
+		goto free_sgt;
+
+	rc = dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, DMA_ATTR_SKIP_CPU_SYNC);
+	if (!rc) {
+		rc = -ENOMEM;
+		goto free_sg_table;
+	}
+	return sgt;
+
+free_sg_table:
+	sg_free_table(sgt);
+free_sgt:
+	kfree(sgt);
+	return ERR_PTR(rc);
+}
+
+static void unmap_dma_buf_vpusmm(struct dma_buf_attachment *attach,
+				 struct sg_table *sgt, enum dma_data_direction dir)
+{
+	dma_unmap_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, DMA_ATTR_SKIP_CPU_SYNC);
+	sg_free_table(sgt);
+	kfree(sgt);
+}
+
+static void release_vpusmm(struct dma_buf *dmabuf)
+{
+	struct vpusmm_buffer *buff = dmabuf->priv;
+
+	dma_free_attrs(buff->dev, buff->size, buff->cookie, buff->dma_addr, buff->dma_attrs);
+	kfree(buff);
+}
+
+static int mmap_vpusmm(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct vpusmm_buffer *buff = dmabuf->priv;
+	unsigned long vm_size;
+
+	vm_size = vma->vm_end - vma->vm_start;
+	if (vm_size > PAGE_ALIGN(buff->size))
+		return -EINVAL;
+
+	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	vma->vm_pgoff = 0;
+
+	return dma_mmap_attrs(buff->dev, vma, buff->cookie, buff->dma_addr,
+			    buff->size, buff->dma_attrs);
+}
+
+static int vmap_vpusmm(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct vpusmm_buffer *buff = dmabuf->priv;
+
+	dma_buf_map_set_vaddr(map, buff->cookie);
+
+	return 0;
+}
+
+static const struct dma_buf_ops vpusmm_dmabuf_ops =  {
+	.cache_sgt_mapping = true,
+	.map_dma_buf	= map_dma_buf_vpusmm,
+	.unmap_dma_buf	= unmap_dma_buf_vpusmm,
+	.release		= release_vpusmm,
+	.mmap			= mmap_vpusmm,
+	.vmap			= vmap_vpusmm,
+};
+
+/*
+ * Allocate a DMA buffer suitable for VPU access and export it as a DMABuf fd.
+ * The session holds an additional refcount on the dmabuf when it is passed
+ * to the VPU side for processing.
+ */
+int smm_alloc(struct vpumgr_smm *sess, struct vpumgr_args_alloc *arg)
+{
+	struct vpumgr_device *vdev = sess->vdev;
+	const int flags = O_RDWR | O_CLOEXEC;
+	size_t buffer_size = arg->size;
+	struct dma_buf *dmabuf = NULL;
+	phys_addr_t phys_addr;
+	struct dma_buf_export_info exp_info = {
+		.exp_name = dev_name(vdev->sdev),
+		.owner = THIS_MODULE,
+		.ops = &vpusmm_dmabuf_ops,
+		.size = buffer_size,
+		.flags = flags
+	};
+	struct vpusmm_buffer *buff;
+	int retval;
+
+	buff = kzalloc(sizeof(*buff), GFP_KERNEL);
+	if (!buff) {
+		retval = -ENOMEM;
+		goto failed;
+	}
+
+	buff->dev = vdev->dev;
+	buff->size = buffer_size;
+	buff->dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS | DMA_ATTR_WRITE_COMBINE;
+	buff->cookie = dma_alloc_attrs(vdev->dev, buff->size, &buff->dma_addr,
+				       GFP_KERNEL | GFP_DMA, buff->dma_attrs);
+	if (!buff->cookie) {
+		retval = -ENOMEM;
+		goto failed;
+	}
+
+	phys_addr = dma_to_phys(vdev->dev, buff->dma_addr);
+
+	exp_info.priv = buff;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		retval = PTR_ERR(dmabuf);
+		dmabuf = NULL;
+		goto failed;
+	}
+
+	retval = dma_buf_fd(dmabuf, flags);
+	if (retval < 0)
+		goto failed;
+	arg->fd = retval;
+	retval = 0;
+
+	dev_dbg(vdev->dev, "%s: dma_addr=%llx, phys_addr=%llx allocated from %s\n",
+		__func__, buff->dma_addr, phys_addr, dev_name(vdev->dev));
+
+	return 0;
+failed:
+	dev_err(vdev->dev, "%s failed with %d\n", __func__, retval);
+
+	if (dmabuf) {
+		/* this will finally release underlying buff */
+		dma_buf_put(dmabuf);
+	} else if (buff) {
+		if (buff->cookie)
+			dma_free_attrs(vdev->dev, buff->size, buff->cookie,
+				       buff->dma_addr, buff->dma_attrs);
+		kfree(buff);
+	}
+	return retval;
+}
+
+int smm_import(struct vpumgr_smm *sess, struct vpumgr_args_import *arg)
+{
+	struct device *dev = sess->vdev->sdev;
+	enum dma_data_direction direction;
+	struct vpusmm_impbuf *item;
+
+	switch (arg->vpu_access) {
+	case VPU_ACCESS_READ:
+		direction = DMA_TO_DEVICE;
+		break;
+	case VPU_ACCESS_WRITE:
+		direction = DMA_FROM_DEVICE;
+		break;
+	case VPU_ACCESS_DEFAULT:
+	case VPU_ACCESS_RW:
+		direction = DMA_BIDIRECTIONAL;
+		break;
+	default:
+		dev_err(dev, "Unknown vpu_access:%d\n", arg->vpu_access);
+		return -EINVAL;
+	}
+
+	item = vpusmm_import_dmabuf(sess, arg->fd, direction, 0);
+	if (IS_ERR(item))
+		return PTR_ERR(item);
+
+	arg->vpu_addr = item->vpu_addr;
+	arg->size = item->dmabuf->size;
+	return 0;
+}
+
+int smm_unimport(struct vpumgr_smm *sess, int *p_dmabuf_fd)
+{
+	struct vpusmm_impbuf *item;
+	struct dma_buf *dmabuf;
+	int rc = 0;
+
+	dmabuf = dma_buf_get(*p_dmabuf_fd);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	mutex_lock(&sess->imp_rb_lock);
+	item = vpusmm_find_impbuf(sess, dmabuf);
+	if (!item) {
+		rc = -EINVAL;
+		goto exit;
+	}
+
+	item->refcount--;
+	if (item->refcount <= 0) {
+		rb_erase(&item->node, &sess->imp_rb);
+		dma_buf_unmap_attachment(item->attach, item->sgt, item->direction);
+		dma_buf_detach(item->dmabuf, item->attach);
+		dma_buf_put(item->dmabuf);
+		kfree(item);
+	}
+exit:
+	mutex_unlock(&sess->imp_rb_lock);
+	dma_buf_put(dmabuf);
+	return rc;
+}
+
+int smm_ptr2vpu(struct vpumgr_smm *sess, unsigned long *arg)
+{
+	struct device *dev = sess->vdev->sdev;
+	struct task_struct *task = current;
+	struct dma_buf *dmabuf = NULL;
+	unsigned long vaddr = *arg;
+	struct vm_area_struct *vma;
+	struct vpusmm_impbuf *item;
+	struct mm_struct *mm;
+
+	mm = get_task_mm(task);
+	if (!mm)
+		goto failed;
+
+	mmap_read_lock(mm);
+
+	vma = find_vma(mm, vaddr);
+	if (!vma) {
+		dev_dbg(dev, "cannot find vaddr: %lx\n", vaddr);
+		goto failed;
+	}
+
+	if (vaddr < vma->vm_start) {
+		dev_dbg(dev, "failed at line %d, vaddr=%lx, vma->vm_start=%lx\n",
+			__LINE__, vaddr, vma->vm_start);
+		goto failed;
+	}
+
+	/* make sure the vma is backed by a dmabuf */
+	if (!vma->vm_file) {
+		dev_dbg(dev, "failed at line %d\n", __LINE__);
+		goto failed;
+	}
+
+	dmabuf = vma->vm_file->private_data;
+	if (!dmabuf) {
+		dev_dbg(dev, "failed at line %d\n", __LINE__);
+		goto failed;
+	}
+
+	if (dmabuf->file != vma->vm_file) {
+		dev_dbg(dev, "failed at line %d\n", __LINE__);
+		goto failed;
+	}
+	mmap_read_unlock(mm);
+	mmput(mm);
+
+	mutex_lock(&sess->imp_rb_lock);
+	item = vpusmm_find_impbuf(sess, dmabuf);
+	mutex_unlock(&sess->imp_rb_lock);
+
+	if (!item) {
+		dev_dbg(dev, "failed to find dmabuf in imported list for vaddr=0x%lx (%d)\n",
+			vaddr, __LINE__);
+		return -EFAULT;
+	}
+
+	*arg = (dma_addr_t)(vaddr - vma->vm_start) + item->vpu_addr;
+	return 0;
+
+failed:
+	if (mm) {
+		mmap_read_unlock(mm);
+		mmput(mm);
+	}
+	return -EFAULT;
+}
+
+int smm_debugfs_stats_show(struct seq_file *file, struct vpumgr_smm *sess)
+{
+	struct rb_node *cur, *next;
+	struct vpusmm_impbuf *item;
+	int i;
+
+	seq_puts(file, "\tdmabuf\texpname\tsize(bytes)\tfilecount\trefs\n");
+
+	mutex_lock(&sess->imp_rb_lock);
+	cur = rb_first(&sess->imp_rb);
+	i = 0;
+	while (cur) {
+		item = rb_entry(cur, struct vpusmm_impbuf, node);
+		next = rb_next(cur);
+		if (item)
+			seq_printf(file, "\t%d:%s\t%s\t%zu\t%ld\t%d\n", i++,
+				   item->dmabuf->name ? : "",
+				   item->dmabuf->exp_name,
+				   item->dmabuf->size,
+				   file_count(item->dmabuf->file),
+				   item->refcount);
+		cur = next;
+	}
+	mutex_unlock(&sess->imp_rb_lock);
+	return 0;
+}
+
+int smm_init(struct vpumgr_device *vdev)
+{
+	int rc = 0;
+
+	if (!vdev->dev->of_node) {
+		/*
+		 * No of_node implies:
+		 * 1. no IOMMU, the VPU device is only 32-bit DMA capable
+		 * 2. use the default CMA, since no device tree node specifies a memory-region
+		 */
+		dma_set_mask(vdev->dev, DMA_BIT_MASK(32));
+		dma_set_coherent_mask(vdev->dev, DMA_BIT_MASK(32));
+	} else {
+		/* Initialize reserved memory resources */
+		rc = of_reserved_mem_device_init(vdev->dev);
+		if (rc) {
+			if (rc == -ENODEV) {
+				dev_warn(vdev->dev,
+					 "No reserved memory specified, use default cma\n");
+				rc = 0;
+			} else {
+				dev_err(vdev->dev,
+					"Failed to init reserved memory, rc=%d\n", rc);
+			}
+		}
+	}
+	return rc;
+}
+
+int smm_fini(struct vpumgr_device *vdev)
+{
+	return 0;
+}
diff --git a/drivers/misc/vpumgr/vpu_smm.h b/drivers/misc/vpumgr/vpu_smm.h
new file mode 100644
index 000000000000..ff547649d95c
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_smm.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * Copyright (C) 2020 Intel Corporation
+ */
+#ifndef _VPU_SMM_H
+#define _VPU_SMM_H
+#include <linux/kernel.h>
+#include <linux/rbtree.h>
+
+#include "vpu_common.h"
+
+struct vpumgr_smm {
+	struct vpumgr_device *vdev;
+	struct rb_root	 imp_rb;
+	struct mutex	 imp_rb_lock; /* protects imp_rb */
+};
+
+int smm_init(struct vpumgr_device *vdev);
+int smm_fini(struct vpumgr_device *vdev);
+
+int smm_open(struct vpumgr_smm *sess, struct vpumgr_device *vdev);
+int smm_close(struct vpumgr_smm *sess);
+
+int smm_alloc(struct vpumgr_smm *sess, struct vpumgr_args_alloc *arg);
+int smm_import(struct vpumgr_smm *sess, struct vpumgr_args_import *arg);
+int smm_unimport(struct vpumgr_smm *sess, int *p_dmabuf_fd);
+int smm_ptr2vpu(struct vpumgr_smm *sess, unsigned long *arg);
+
+int smm_debugfs_stats_show(struct seq_file *file, struct vpumgr_smm *sess);
+
+#endif
diff --git a/drivers/misc/vpumgr/vpu_vcm.c b/drivers/misc/vpumgr/vpu_vcm.c
new file mode 100644
index 000000000000..03311dbd579a
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_vcm.c
@@ -0,0 +1,585 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#include <linux/delay.h>
+#include <linux/dma-buf.h>
+#include <linux/idr.h>
+#include <linux/mutex.h>
+#include <linux/kthread.h>
+#include <linux/sched/task.h>
+#include <linux/seq_file.h>
+#include <linux/uaccess.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+#include "vpu_common.h"
+#include "vpu_vcm.h"
+
+#define XLINK_IPC_TIMEOUT           1000u
+
+/* Static xlink configuration */
+#define VCM_XLINK_CHANNEL           1
+#define VCM_XLINK_CHAN_SIZE         128
+
+static const int msg_header_size = offsetof(struct vcm_msg, payload.data);
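+/*
+ * The header is 5 * sizeof(u32) == 20 bytes, so a full vcm_msg (header
+ * plus VCM_PAYLOAD_SIZE of payload) is exactly 8192 bytes.
+ */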
+
+struct vpu_cmd {
+	struct work_struct work;
+	struct kref refcount;
+	struct xlink_handle *handle;
+	struct vpumgr_ctx *vctx;         /* the submitting vpu context */
+	struct vcm_msg msg;              /* message buffer for send/recv */
+	struct completion complete;      /* completion for async submit/reply */
+	int submit_err;                  /* error code of the submission process */
+};
+
+static int vcm_vpu_link_init(struct vcm_dev *pvcm)
+{
+	struct vpumgr_device *vdev = container_of(pvcm, struct vpumgr_device, vcm);
+	enum xlink_error rc;
+
+	pvcm->ipc_xlink_handle.dev_type = VPUIP_DEVICE;
+	pvcm->ipc_xlink_handle.sw_device_id = pvcm->sw_dev_id;
+
+	rc = xlink_initialize();
+	if (rc != X_LINK_SUCCESS)
+		goto exit;
+
+	rc = xlink_connect(&pvcm->ipc_xlink_handle);
+	if (rc != X_LINK_SUCCESS)
+		goto exit;
+
+	rc = xlink_open_channel(&pvcm->ipc_xlink_handle, VCM_XLINK_CHANNEL,
+				RXB_TXB, VCM_XLINK_CHAN_SIZE, XLINK_IPC_TIMEOUT);
+	if (rc != X_LINK_SUCCESS && rc != X_LINK_ALREADY_OPEN) {
+		xlink_disconnect(&pvcm->ipc_xlink_handle);
+		goto exit;
+	}
+
+	rc = 0;
+exit:
+	dev_info(vdev->dev, "%s: rc = %d\n", __func__, rc);
+	return -(int)rc;
+}
+
+static int vcm_vpu_link_fini(struct vcm_dev *pvcm)
+{
+	xlink_close_channel(&pvcm->ipc_xlink_handle, VCM_XLINK_CHANNEL);
+	xlink_disconnect(&pvcm->ipc_xlink_handle);
+	return 0;
+}
+
+/*
+ * Send a vcm_msg over xlink.
+ * Given the limited xlink payload size, the message is sent in
+ * channel-sized chunks.
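+ * For example, with VCM_XLINK_CHAN_SIZE of 128, a 300-byte message is
+ * transmitted as three writes of 128, 128 and 44 bytes.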
+ */
+static int vcm_send(struct xlink_handle *xlnk_handle, struct vcm_msg *req)
+{
+	struct vpumgr_device *vdev;
+	enum xlink_error rc;
+	u8 *ptr = (u8 *)req;
+	u32 size = 0;
+	u32 len = req->size;
+
+	vdev = container_of(xlnk_handle, struct vpumgr_device, vcm.ipc_xlink_handle);
+	if (len > sizeof(*req))
+		return -EINVAL;
+	do {
+		size = len > VCM_XLINK_CHAN_SIZE ? VCM_XLINK_CHAN_SIZE : len;
+		rc = xlink_write_volatile(xlnk_handle, VCM_XLINK_CHANNEL, ptr, size);
+		if (rc != X_LINK_SUCCESS) {
+			dev_warn(vdev->dev, "%s xlink_write_volatile error %d\n", __func__, rc);
+			return -EINVAL;
+		}
+		ptr += size;
+		len -= size;
+	} while (len > 0);
+
+	return 0;
+}
+
+/*
+ * Receive a vcm_msg over xlink.
+ * Given the limited xlink payload size, the message is reassembled from
+ * channel-sized chunks.
+ */
+static int vcm_recv(struct xlink_handle *xlnk_handle, struct vcm_msg *rep)
+{
+	struct vpumgr_device *vdev;
+	enum xlink_error rc;
+	u64 size;
+	u32 total_size = 0;
+	u32 rx_size = 0;
+	u8 *ptr = (u8 *)rep;
+
+	vdev = container_of(xlnk_handle, struct vpumgr_device, vcm.ipc_xlink_handle);
+	do {
+		/* Workaround for a bug in xlink_read_data_to_buffer():
+		 * although its last argument is declared to be of type (u32 *), the
+		 * function actually writes a 64-bit value into that address.
+		 */
+		rc = xlink_read_data_to_buffer(xlnk_handle, VCM_XLINK_CHANNEL, ptr, (u32 *)&size);
+		if (rc != X_LINK_SUCCESS) {
+			dev_warn(vdev->dev, "%s: xlink_read_data_to_buffer failed, rc:%d\n",
+				 __func__, rc);
+			return -EPIPE;
+		}
+
+		if (total_size == 0) {
+			if (size < msg_header_size) {
+				dev_warn(vdev->dev, "%s: first packet is too small (%llu)\n",
+					 __func__, size);
+				return -EINVAL;
+			}
+
+			total_size = rep->size;
+			if (total_size > sizeof(*rep)) {
+				dev_warn(vdev->dev, "%s: packet size (%u) is too big\n",
+					 __func__, total_size);
+				return -EINVAL;
+			}
+			if (total_size < size) {
+				dev_warn(vdev->dev,
+					 "%s: claimed size %u is smaller than the first packet (%llu)\n",
+					 __func__, total_size, size);
+				return -EINVAL;
+			}
+		}
+
+		ptr += size;
+		rx_size += size;
+	} while (rx_size < total_size);
+
+	if (rx_size != total_size) {
+		dev_warn(vdev->dev, "%s: actuall size %u exceeds claimed size %ud\n",
+			 __func__, rx_size, total_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void vcmd_free(struct kref *kref)
+{
+	struct vpu_cmd *vcmd = container_of(kref, struct vpu_cmd, refcount);
+
+	kvfree(vcmd);
+}
+
+static struct vpu_cmd *vcmd_get(struct vcm_dev *pvcm, int msgid)
+{
+	struct vpu_cmd *vcmd;
+
+	mutex_lock(&pvcm->msg_idr_lock);
+	vcmd = idr_find(&pvcm->msg_idr, msgid);
+	if (vcmd)
+		kref_get(&vcmd->refcount);
+	mutex_unlock(&pvcm->msg_idr_lock);
+
+	return vcmd;
+}
+
+static void vcmd_put(struct vpu_cmd *vcmd)
+{
+	kref_put(&vcmd->refcount, vcmd_free);
+}
+
+static int vcmd_alloc_msgid(struct vcm_dev *pvcm, struct vpu_cmd *vcmd)
+{
+	int msgid;
+
+	mutex_lock(&pvcm->msg_idr_lock);
+	msgid = idr_alloc_cyclic(&pvcm->msg_idr, vcmd, 1, 0, GFP_KERNEL);
+	if (msgid >= 0)
+		kref_init(&vcmd->refcount);
+	mutex_unlock(&pvcm->msg_idr_lock);
+	return msgid;
+}
+
+static void vcmd_remove_msgid(struct vcm_dev *pvcm, int msgid)
+{
+	struct vpu_cmd *vcmd;
+
+	mutex_lock(&pvcm->msg_idr_lock);
+	vcmd = idr_remove(&pvcm->msg_idr, msgid);
+	if (vcmd)
+		kref_put(&vcmd->refcount, vcmd_free);
+	mutex_unlock(&pvcm->msg_idr_lock);
+}
+
+static void vcmd_clean_ctx(struct vcm_dev *pvcm, struct vpumgr_ctx *v)
+{
+	struct vpu_cmd *vcmd;
+	int i;
+
+	mutex_lock(&pvcm->msg_idr_lock);
+	idr_for_each_entry(&pvcm->msg_idr, vcmd, i)
+		if (vcmd->vctx == v) {
+			idr_remove(&pvcm->msg_idr, i);
+			kref_put(&vcmd->refcount, vcmd_free);
+		}
+	mutex_unlock(&pvcm->msg_idr_lock);
+}
+
+static int vcmd_count_ctx(struct vcm_dev *pvcm, struct vpumgr_ctx *v)
+{
+	struct vpu_cmd *vcmd;
+	int i;
+	int count = 0;
+
+	mutex_lock(&pvcm->msg_idr_lock);
+	idr_for_each_entry(&pvcm->msg_idr, vcmd, i)
+		if (vcmd->vctx == v)
+			count++;
+	mutex_unlock(&pvcm->msg_idr_lock);
+
+	return count;
+}
+
+static void vpu_cmd_submit(struct work_struct *work)
+{
+	struct vpu_cmd *p = container_of(work, struct vpu_cmd, work);
+
+	p->submit_err = vcm_send(p->handle, &p->msg);
+}
+
+/*
+ * vcm_submit() - Submit a command to VPU
+ * @v:         Pointer to local vpu context data structure.
+ * @cmd:       Command code
+ * @data_in:   Data arguments
+ * @in_len:    Length of the data arguments
+ * @submit_id: On return, this will contain a newly allocated
+ *             ID for the submitted command, unique within the vpu device
+ *
+ * Submit a command for execution to the corresponding vpu context running in firmware
+ */
+int vcm_submit(struct vpumgr_ctx *v,
+	       u32 cmd, const void *data_in, u32 in_len, s32 *submit_id)
+{
+	struct vcm_dev *pvcm = &v->vdev->vcm;
+	int ctx = v->vpu_ctx_id;
+	int rc = 0;
+	struct vpu_cmd *vcmd;
+
+	if (!v->vdev->vcm.enabled)
+		return -ENOENT;
+
+	if (in_len > VCM_PAYLOAD_SIZE)
+		return -EINVAL;
+
+	vcmd = kvmalloc(sizeof(*vcmd), GFP_KERNEL);
+	if (!vcmd)
+		return -ENOMEM;
+
+	vcmd->vctx = v;
+
+	rc = vcmd_alloc_msgid(pvcm, vcmd);
+	if (rc < 0) {
+		rc = -EEXIST;
+		kvfree(vcmd);
+		return rc;
+	}
+
+	/* from now on, vcmd is refcount managed */
+	vcmd->msg.id = rc;
+	*submit_id = vcmd->msg.id;
+
+	if (data_in && in_len > 0) {
+		if (access_ok((void __user *)data_in, in_len)) {
+			if (copy_from_user(vcmd->msg.payload.data,
+					   (const void __user *)data_in, in_len)) {
+				rc = -EFAULT;
+				goto remove_msgid;
+			}
+		} else {
+			memcpy(vcmd->msg.payload.data, data_in, in_len);
+		}
+	}
+
+	init_completion(&vcmd->complete);
+	vcmd->handle = &pvcm->ipc_xlink_handle;
+	vcmd->msg.size = msg_header_size + in_len;
+	vcmd->msg.ctx = ctx;
+	vcmd->msg.cmd = cmd;
+	vcmd->msg.rc = 0;
+	INIT_WORK(&vcmd->work, vpu_cmd_submit);
+
+	if (!queue_work(pvcm->wq, &vcmd->work)) {
+		rc = -EEXIST;
+		goto remove_msgid;
+	}
+
+	return 0;
+
+remove_msgid:
+	vcmd_remove_msgid(pvcm, vcmd->msg.id);
+	return rc;
+}
+
+/*
+ * vcm_wait() - Wait a submitted command to finish
+ * @v:          Pointer to local vpu context data structure.
+ * @submit_id:  Unique ID of the submitted command to wait for
+ * @vpu_rc:     Return code of the submitted command
+ * @data_out:   Return data payload of the submitted command
+ * @p_out_len:  Length of the returned payload
+ * @timeout_ms: Time in milliseconds before the wait expires
+ *
+ * Wait, with a timeout, for a submitted command to finish and retrieve its
+ * return code and output payload on success.
+ */
+int vcm_wait(struct vpumgr_ctx *v, s32 submit_id,
+	     s32 *vpu_rc, void *data_out, u32 *p_out_len, u32 timeout_ms)
+{
+	struct vcm_dev *pvcm = &v->vdev->vcm;
+	struct device *dev = v->vdev->sdev;
+	unsigned long timeout = msecs_to_jiffies(timeout_ms);
+	struct vpu_cmd *vcmd;
+	int rc, len;
+
+	if (!v->vdev->vcm.enabled)
+		return -ENOENT;
+
+	vcmd = vcmd_get(pvcm, submit_id);
+	if (!vcmd) {
+		dev_err(dev, "%s:cannot find submit_id %d\n", __func__, submit_id);
+		return -EINVAL;
+	}
+
+	if (v != vcmd->vctx) {
+		dev_err(dev, "%s:trying to wait on submit %d doesn't belong to vpu context %d\n",
+			__func__, submit_id, v->vpu_ctx_id);
+		return -EINVAL;
+	}
+
+	/* wait for submission work to be done */
+	flush_work(&vcmd->work);
+	rc = vcmd->submit_err;
+	if (rc)
+		goto exit;
+
+	/* wait for reply */
+	rc = wait_for_completion_interruptible_timeout(&vcmd->complete, timeout);
+	if (rc < 0) {
+		goto exit;
+	} else if (rc == 0) {
+		rc = -ETIMEDOUT;
+		goto exit;
+	} else {
+		/* wait_for_completion_interruptible_timeout return positive
+		 * rc on success, but we return zero as success.
+		 */
+		rc = 0;
+	}
+
+	if (vpu_rc)
+		*vpu_rc = vcmd->msg.rc;
+
+	if (data_out && p_out_len) {
+		/* truncate payload to fit output buffer size provided */
+		len = vcmd->msg.size - msg_header_size;
+		if (len > (*p_out_len)) {
+			dev_err(dev, "%s: output is truncated from %d to %d to fit buffer size.\n",
+				__func__, len, (*p_out_len));
+			len = (*p_out_len);
+		}
+
+		if (len > 0) {
+			if (access_ok((void __user *)data_out, len)) {
+				if (copy_to_user((void __user *)data_out,
+						 vcmd->msg.payload.data, len)) {
+					rc = -EFAULT;
+					goto exit;
+				}
+			} else {
+				memcpy(data_out, vcmd->msg.payload.data, len);
+			}
+		}
+
+		/* tell the caller how many bytes were actually copied out,
+		 * which may be less than the full reply if it was truncated
+		 * to fit the provided buffer
+		 */
+		*p_out_len = len;
+	}
+exit:
+	v->total_vcmds++;
+	vcmd_put(vcmd);
+	vcmd_remove_msgid(pvcm, submit_id);
+	return rc;
+}
+
+static int vcm_call(struct vpumgr_ctx *v,
+		    s32 cmd, const void *data_in, u32 in_len,
+		    s32 *res_rc, void *data_out, u32 *p_out_len)
+{
+	int submit_id, rc;
+
+	if (!v->vdev->vcm.enabled)
+		return -ENOENT;
+
+	rc = vcm_submit(v, cmd, data_in, in_len, &submit_id);
+	if (rc)
+		return rc;
+
+	return vcm_wait(v, submit_id, res_rc, data_out, p_out_len, 1000);
+}
+
+int vcm_open(struct vpumgr_ctx *v, struct vpumgr_device *vdev)
+{
+	struct device *dev = vdev->sdev;
+	int rep_rc, rc;
+
+	v->vdev = vdev;
+
+	if (!vdev->vcm.enabled)
+		return 0;
+
+	rc = vcm_call(v, VCTX_MSG_CREATE, NULL, 0, &rep_rc, NULL, NULL);
+
+	if (rc != 0 || rep_rc < 0)
+		dev_err(dev, "%s: Vpu context create with rc:%d and vpu reply rc:%d\n",
+			__func__, rc, rep_rc);
+	if (rc)
+		return rc;
+	if (rep_rc < 0)
+		return -ENXIO;
+
+	v->vpu_ctx_id = rep_rc;
+	v->total_vcmds = 0;
+	return 0;
+}
+
+int vcm_close(struct vpumgr_ctx *v)
+{
+	struct device *dev = v->vdev->sdev;
+	int rep_rc, rc;
+
+	if (!v->vdev->vcm.enabled)
+		return 0;
+
+	rc = vcm_call(v, VCTX_MSG_DESTROY, NULL, 0, &rep_rc, NULL, NULL);
+	dev_dbg(dev, "vpu context %d is destroyed with rc:%d and vpu reply rc:%d\n",
+		v->vpu_ctx_id, rc, rep_rc);
+
+	/* remove submits belonging to this context */
+	vcmd_clean_ctx(&v->vdev->vcm, v);
+	return 0;
+}
+
+int vcm_debugfs_stats_show(struct seq_file *file, struct vpumgr_ctx *v)
+{
+	if (!v->vdev->vcm.enabled)
+		return 0;
+	seq_printf(file, "\tvpu context: #%d\n", v->vpu_ctx_id);
+	seq_printf(file, "\t\tNum of completed cmds: %llu\n", v->total_vcmds);
+	seq_printf(file, "\t\tNum of on-going cmds: %d\n", vcmd_count_ctx(&v->vdev->vcm, v));
+	return 0;
+}
+
+static int vcm_rxthread(void *param)
+{
+	struct vpumgr_device *vdev = param;
+	struct device *dev = vdev->sdev;
+	struct vcm_dev *pvcm = &vdev->vcm;
+	struct vcm_msg *msg = &pvcm->rxmsg;
+	struct vpu_cmd *vcmd;
+	int rc = 0;
+
+	while (!kthread_should_stop()) {
+		rc = vcm_recv(&pvcm->ipc_xlink_handle, msg);
+		if (rc == -EPIPE)
+			break;
+		if (rc)
+			continue;
+
+		switch (msg->cmd) {
+		case VCTX_MSG_REPLY:
+			/* find local data associated with that msg id */
+			vcmd = vcmd_get(pvcm, (unsigned long)msg->id);
+			if (!vcmd)
+				break;
+
+			if (msg->ctx != vcmd->msg.ctx)
+				dev_warn(dev, "reply msg #%u's ctx (%u) mismatches vcmd ctx (%u)\n",
+					 msg->id, msg->ctx, vcmd->msg.ctx);
+
+			vcmd->submit_err = 0;
+
+			/* submit corresponding to msg->id is done, do post process */
+			memcpy(&vcmd->msg, msg, msg->size);
+			complete(&vcmd->complete);
+
+			vcmd_put(vcmd);
+			break;
+		default:
+			break;
+		}
+	}
+	return rc;
+}
+
+int vcm_init(struct vpumgr_device *vdev, u32 sw_dev_id)
+{
+	struct vcm_dev *pvcm = &vdev->vcm;
+	struct task_struct *rxthread;
+	int rc = 0;
+
+	if (sw_dev_id == XLINK_INVALID_SW_DEVID) {
+		dev_warn(vdev->dev, "%s: vcm is not enabled!\n",
+			 __func__);
+		rc = 0;
+		goto exit;
+	}
+
+	pvcm->sw_dev_id = sw_dev_id;
+	rc = vcm_vpu_link_init(pvcm);
+	if (rc)
+		goto exit;
+
+	pvcm->wq = alloc_ordered_workqueue("vcm workqueue", WQ_MEM_RECLAIM | WQ_HIGHPRI);
+	if (!pvcm->wq) {
+		rc = -ENOMEM;
+		goto vpu_link_fini;
+	}
+
+	mutex_init(&pvcm->msg_idr_lock);
+	idr_init(&pvcm->msg_idr);
+
+	rxthread = kthread_run(vcm_rxthread,
+			       (void *)vdev, "vcmrx");
+	if (IS_ERR(rxthread)) {
+		rc = PTR_ERR(rxthread);
+		goto destroy_idr;
+	}
+
+	pvcm->rxthread = get_task_struct(rxthread);
+
+	pvcm->enabled = true;
+	return 0;
+
+destroy_idr:
+	idr_destroy(&pvcm->msg_idr);
+	destroy_workqueue(pvcm->wq);
+vpu_link_fini:
+	vcm_vpu_link_fini(pvcm);
+exit:
+	pvcm->enabled = false;
+	return rc;
+}
+
+int vcm_fini(struct vpumgr_device *vdev)
+{
+	struct vcm_dev *pvcm = &vdev->vcm;
+
+	if (!pvcm->enabled)
+		return 0;
+
+	vcm_vpu_link_fini(pvcm);
+
+	kthread_stop(pvcm->rxthread);
+	put_task_struct(pvcm->rxthread);
+
+	idr_destroy(&pvcm->msg_idr);
+	destroy_workqueue(pvcm->wq);
+	mutex_destroy(&pvcm->msg_idr_lock);
+	return 0;
+}
diff --git a/drivers/misc/vpumgr/vpu_vcm.h b/drivers/misc/vpumgr/vpu_vcm.h
new file mode 100644
index 000000000000..9e89c281092b
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_vcm.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * RESMGR driver  -  VPU Context Manager
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#ifndef __VPU_VCM_H
+#define __VPU_VCM_H
+
+#include <linux/xlink.h>
+
+struct vpumgr_device;
+
+/* Command code for message to/from VPU context manager on firmware */
+#define VCTX_MSG_CREATE             1
+#define VCTX_MSG_DESTROY            2
+#define VCTX_MSG_REPLY              3
+
+/* Maximal payload size supported for request or reply */
+#define VCM_PAYLOAD_SIZE            (8192 - 5 * sizeof(u32))
+
+/**
+ * struct vcm_msg: VPU context manager message
+ * @size: total message size; this field must come first
+ * @ctx: the context this message is sent to / received from
+ * @cmd: the type of this message
+ * @id: index to identify this message in context ctx
+ * @rc: return code or misc args
+ * @payload: the payload of message
+ */
+struct vcm_msg {
+	u32 size;
+	u32 ctx;
+	u32 cmd;
+	u32 id;
+	s32 rc;
+	union {
+		char data[VCM_PAYLOAD_SIZE];
+	} payload;
+} __packed;
+
+struct vcm_dev {
+	bool enabled;
+	/*
+	 * XLINK IPC related.
+	 */
+	struct xlink_handle ipc_xlink_handle;
+	s32 sw_dev_id;
+
+	/*
+	 * Dispatch work queue.
+	 * Xlink transactions are handled in the work queue to decouple
+	 * xlink API calls from user-space system calls, because a SIGINT
+	 * would otherwise cause xlink_read* errors.
+	 */
+	struct workqueue_struct *wq;
+
+	/* kthread for rx */
+	struct task_struct *rxthread;
+
+	/* message buffer for receiving thread */
+	struct vcm_msg rxmsg;
+
+	struct mutex msg_idr_lock; /* protects msg_idr */
+	struct idr msg_idr;
+};
+
+struct vpumgr_ctx {
+	struct vpumgr_device *vdev;
+	u32 vpu_ctx_id;
+	u64 total_vcmds;
+};
+
+int vcm_init(struct vpumgr_device *vdev, u32 sw_dev_id);
+int vcm_fini(struct vpumgr_device *vdev);
+
+int vcm_open(struct vpumgr_ctx *v, struct vpumgr_device *vdev);
+int vcm_close(struct vpumgr_ctx *v);
+int vcm_debugfs_stats_show(struct seq_file *file, struct vpumgr_ctx *v);
+
+int vcm_submit(struct vpumgr_ctx *v,
+	       u32 cmd, const void *data_in, u32 in_len, s32 *submit_id);
+int vcm_wait(struct vpumgr_ctx *v, s32 submit_id,
+	     s32 *vpu_rc, void *data_out, u32 *p_out_len, u32 timeout_ms);
+
+#endif /* __VPU_VCM_H */
diff --git a/include/uapi/misc/vpumgr.h b/include/uapi/misc/vpumgr.h
new file mode 100644
index 000000000000..910b26e60097
--- /dev/null
+++ b/include/uapi/misc/vpumgr.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note
+ * VPU manager Linux Kernel API
+ * Copyright (C) 2020-2021 Intel Corporation
+ *
+ */
+#ifndef __VPUMGR_UAPI_H
+#define __VPUMGR_UAPI_H
+
+#include <linux/types.h>
+
+/* ioctl numbers */
+#define VPUMGR_MAGIC 'V'
+/* VPU manager IOCTLs */
+#define VPUMGR_IOCTL_DMABUF_ALLOC	_IOWR(VPUMGR_MAGIC, 2, struct vpumgr_args_alloc)
+#define VPUMGR_IOCTL_DMABUF_IMPORT	_IOWR(VPUMGR_MAGIC, 3, struct vpumgr_args_import)
+#define VPUMGR_IOCTL_DMABUF_UNIMPORT	_IOWR(VPUMGR_MAGIC, 4, __s32)
+#define VPUMGR_IOCTL_DMABUF_PTR2VPU	_IOWR(VPUMGR_MAGIC, 5, __u64)
+#define VPUMGR_IOCTL_VCM_SUBMIT		_IOWR(VPUMGR_MAGIC, 6, struct vpumgr_vcm_submit)
+#define VPUMGR_IOCTL_VCM_WAIT		_IOWR(VPUMGR_MAGIC, 7, struct vpumgr_vcm_wait)
+#define VPUMGR_IOCTL_END		_IO(VPUMGR_MAGIC, 8)
+
+struct vpumgr_args_alloc {
+	__s32 fd;           /* out: DMABuf fd */
+	__s32 reserved[2];  /*  in: reserved */
+	__u64 size;	    /*  in: required buffer size */
+};
+
+/* vpu_access flags */
+enum vpu_access_type {
+	VPU_ACCESS_DEFAULT = 0,
+	VPU_ACCESS_READ    = 1,
+	VPU_ACCESS_WRITE   = 2,
+	VPU_ACCESS_RW      = 3
+};
+
+struct vpumgr_args_import {
+	__s32 fd;           /*  in: input DMABuf fd */
+	__s32 vpu_access;   /*  in: how vpu is going to access the buffer */
+	__u64 vpu_addr;	    /* out: vpu dma address of the DMABuf */
+	__u64 size;         /* out: the size of the DMABuf */
+};
+
+/* Command codes reserved for the kernel mode driver;
+ * user space must not use command codes smaller than
+ * or equal to this macro
+ */
+#define VCTX_KMD_RESERVED_CMD_LAST           31
+
+struct vpumgr_vcm_submit {
+	__u32 cmd;          /*  in: command code */
+	__u64 in;           /*  in: input parameter buffer address */
+	__u32 in_len;       /*  in: input parameter buffer length */
+	__s32 submit_id;    /* out: submit id */
+};
+
+struct vpumgr_vcm_wait {
+	__s32 submit_id;    /*  in: submit id */
+	__s32 vpu_rc;       /* out: vpu return code */
+	__u64 out;          /*  in: address of the buffer for receiving result */
+	__u32 out_len;      /*  in: length of the result buffer */
+	__u32 timeout_ms;   /*  in: timeout in milliseconds */
+};
+
+#endif /* __VPUMGR_UAPI_H */
-- 
2.17.1
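
For illustration only, a minimal user-space sketch of how the UAPI above might
be exercised; the /dev/vpumgr0 node name, the command code 32 and the lack of
error handling are assumptions rather than anything the interface defines:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <misc/vpumgr.h>	/* UAPI header above, via headers_install */

	int main(void)
	{
		struct vpumgr_args_alloc alloc = { .size = 4096 };
		struct vpumgr_vcm_submit submit = { .cmd = 32 };	/* hypothetical; > VCTX_KMD_RESERVED_CMD_LAST */
		struct vpumgr_vcm_wait wait = { .timeout_ms = 1000 };
		int fd = open("/dev/vpumgr0", O_RDWR);

		if (fd < 0)
			return 1;

		/* allocate a VPU-accessible buffer, returned as a DMABuf fd */
		if (ioctl(fd, VPUMGR_IOCTL_DMABUF_ALLOC, &alloc) == 0)
			printf("dmabuf fd %d\n", alloc.fd);

		/* submit a command to the VPU context and wait for its reply */
		if (ioctl(fd, VPUMGR_IOCTL_VCM_SUBMIT, &submit) == 0) {
			wait.submit_id = submit.submit_id;
			ioctl(fd, VPUMGR_IOCTL_VCM_WAIT, &wait);
		}

		close(fd);
		return 0;
	}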



* [PATCH v2 26/34] dt-bindings: misc: intel_tsens: Add tsens thermal bindings documentation
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (24 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 25/34] misc: Add Keem Bay VPU manager mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 27/34] misc: Tsens ARM host thermal driver mgross
                   ` (7 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add device tree bindings for the local host thermal sensors on
Intel Edge.AI Computer Vision platforms.

The tsens module enables reading of the on-chip sensors present
in the Intel Bay series SoCs. The tsens module reports the various
junction temperatures and the SoC temperature using the thermal
subsystem and the i2c subsystem.

Temperature data reported through the thermal subsystem is used by
cooling agents such as DVFS and fan control, and to shut down the
system in case of critical temperature.

Temperature data reported through the i2c subsystem is used by
platform manageability software running on the remote host.

Acked-by: mark gross <mgross@linux.intel.com>
Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 .../bindings/misc/intel,intel-tsens.yaml      | 122 ++++++++++++++++++
 1 file changed, 122 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml b/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
new file mode 100644
index 000000000000..abac41995643
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
@@ -0,0 +1,122 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,intel_tsens.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Temperature sensors in Bay series
+
+maintainers:
+  - Udhayakumar C <udhayakumar.c@intel.com>
+
+description: |
+  The tsens driver enables reading of the on-chip sensors present
+  in the Intel Bay SoCs.
+  Each subnode of the tsens node represents a sensor available
+  on the SoC.
+
+select: false
+
+properties:
+  compatible:
+    items:
+      - const: intel,intel-tsens
+
+  plat_name:
+    contains:
+      enum:
+        - intel,keembay_thermal
+
+  reg:
+    minItems: 1
+    maxItems: 2
+
+  clocks:
+    items:
+      - description: thermal sensor clock
+
+  clk-rate:
+    minItems: 1
+    maxItems: 1
+    additionalItems: false
+    items:
+      - description: thermal sensor clock freq
+
+  sensor_name:
+    type: object
+    description:
+      Details to configure sensor trip points and its types.
+
+    properties:
+      passive_delay:
+        minItems: 1
+        maxItems: 1
+        description: number of milliseconds to wait between polls when
+                     performing passive cooling
+
+      polling_delay:
+        minItems: 1
+        maxItems: 1
+        description: number of milliseconds to wait between polls when checking
+                     whether trip points have been crossed (0 for interrupt
+                     driven systems)
+
+      trip_temp:
+        minItems: 1
+        description: temperature for trip points
+
+      trip_type:
+        minItems: 1
+        description: trip type list for trip points
+
+    required:
+      - passive_delay
+      - polling_delay
+      - trip_temp
+      - trip_type
+
+required:
+  - compatible
+
+additionalProperties: false
+
+examples:
+  - |
+    tsens: tsens@20260000 {
+        compatible = "intel,intel-tsens";
+        status = "disabled";
+        #address-cells = <2>;
+        #size-cells = <2>;
+        plat_name = "intel,keembay_thermal";
+        reg = <0x0 0x20260000 0x0 0x100>;
+        clocks = <&scmi_clk>;
+        clk-rate = <1250000>;
+
+        mss {
+                passive_delay = <1000>;
+                polling_delay = <2000>;
+                trip_temp = <40000 80000 1000000>;
+                trip_type = "passive", "passive", "critical";
+        };
+
+        css {
+                passive_delay = <1000>;
+                polling_delay = <2000>;
+                trip_temp = <40000 80000 1000000>;
+                trip_type = "passive", "passive", "critical";
+        };
+
+        nce {
+                passive_delay = <1000>;
+                polling_delay = <2000>;
+                trip_temp = <40000 80000 1000000>;
+                trip_type = "passive", "passive", "critical";
+        };
+
+        soc {
+                passive_delay = <1000>;
+                polling_delay = <2000>;
+                trip_temp = <40000 80000 1000000>;
+                trip_type = "passive", "passive", "critical";
+        };
+     };
-- 
2.17.1



* [PATCH v2 27/34] misc: Tsens ARM host thermal driver.
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (25 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 26/34] dt-bindings: misc: intel_tsens: Add tsens thermal bindings documentation mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 28/34] misc: Intel tsens IA host driver mgross
                   ` (6 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add tsens ARM host thermal driver for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as a PCIe
endpoint device.

High-level architecture:
------------------------

Remote Host IA CPU                    Local Host ARM CPU
----------------                     --------------------------
|  Platform    |                     |  Thermal Daemon        |
| Management SW|                     |                        |
----------------                     --------------------------
|  Intel tsens |                     |  intel tsens i2c slave |
|  i2c client  |                     |  and thermal driver    |
----------------                     --------------------------
|  XLINK I2C   |                     |  XLINK I2C Slave       |
|  controller  |     <=========>     |   controller           |
----------------        smbus        --------------------------

intel tsens module:
-------------------
The tsens module enables reading of the on-chip sensors present
in the Intel Edge.AI Computer Vision platforms. The tsens module
reports the various junction and SoC temperatures using the thermal
subsystem and the i2c subsystem.

Temperature data reported through the thermal subsystem is used by
cooling agents such as DVFS and fan control, and to shut down the
system in case of critical temperature.

Temperature data reported through the i2c subsystem is used by
platform manageability software running on the IA host.

- Local Host driver
  * Intended for ARM CPU
  * It is based on Thermal and I2C slave  Framework
  * Driver path:
  {tree}/drivers/misc/intel_tsens/intel_tsens_thermal.c

The local host and remote host drivers communicate using the
XLINK I2C SMBus protocol.

Acked-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 Documentation/hwmon/index.rst                 |   1 +
 Documentation/hwmon/intel_tsens_sensor.rst    |  67 ++
 MAINTAINERS                                   |   5 +
 drivers/misc/Kconfig                          |   1 +
 drivers/misc/Makefile                         |   1 +
 drivers/misc/intel_tsens/Kconfig              |  15 +
 drivers/misc/intel_tsens/Makefile             |   7 +
 .../misc/intel_tsens/intel_tsens_thermal.c    | 651 ++++++++++++++++++
 .../misc/intel_tsens/intel_tsens_thermal.h    |  38 +
 include/linux/hddl_device.h                   | 153 ++++
 10 files changed, 939 insertions(+)
 create mode 100644 Documentation/hwmon/intel_tsens_sensor.rst
 create mode 100644 drivers/misc/intel_tsens/Kconfig
 create mode 100644 drivers/misc/intel_tsens/Makefile
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.c
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.h
 create mode 100644 include/linux/hddl_device.h

diff --git a/Documentation/hwmon/index.rst b/Documentation/hwmon/index.rst
index fcb870ce6286..fc29100bef73 100644
--- a/Documentation/hwmon/index.rst
+++ b/Documentation/hwmon/index.rst
@@ -80,6 +80,7 @@ Hardware Monitoring Kernel Drivers
    ir38064
    isl68137
    it87
+   intel_tsens_sensor
    jc42
    k10temp
    k8temp
diff --git a/Documentation/hwmon/intel_tsens_sensor.rst b/Documentation/hwmon/intel_tsens_sensor.rst
new file mode 100644
index 000000000000..0f53dfca477e
--- /dev/null
+++ b/Documentation/hwmon/intel_tsens_sensor.rst
@@ -0,0 +1,67 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==================================
+Kernel driver: intel_tsens_thermal
+==================================
+
+Supported chips:
+  * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+    Slave address: The address is assigned by the hddl device management
+                   driver.
+
+Authors:
+    - Thalaiappan, Rathina <rathina.thalaiappan@intel.com>
+    - Udhayakumar C <udhayakumar.c@intel.com>
+
+Description
+===========
+The Intel Edge.AI Computer Vision platforms have memory-mapped thermal sensors
+which are accessible locally. The intel_tsens_thermal driver handles these
+thermal sensors and exposes the temperature to
+
+* the external host, similar to a standard SMBus-based thermal sensor
+    (like the LM73), by registering with the I2C subsystem as a
+    slave interface (Documentation/i2c/slave-interface.rst).
+* the local CPU as a standard thermal device.
+
+In Keem Bay, the four thermal junction temperature points are:
+Media Subsystem (mss), Compute Subsystem (css), NN Subsystem (nce) and
+SoC (maximum of mss, css and nce).
+
+Similarity: /drivers/thermal/qcom
+
+Example
+=======
+Local Thermal Interface:
+
+Temperature reported in Keem Bay on the Linux Thermal sysfs interface.
+
+# cat /sys/class/thermal/thermal_zone*/type
+mss
+css
+nce
+soc
+
+# cat /sys/class/thermal/thermal_zone*/temp
+0
+29210
+28478
+29210
+
+Remote Thermal Interface:
+
+The tsens i2c slave driver reports the junction temperature of the
+various subsystems based on the offset table below.
+
++-----------+-------------+
+| offset    |   Sensor    |
++-----------+-------------+
+|   0       |   mss       |
++-----------+-------------+
+|   1       |   css       |
++-----------+-------------+
+|   2       |   nce       |
++-----------+-------------+
+|   3       |   soc       |
++-----------+-------------+
diff --git a/MAINTAINERS b/MAINTAINERS
index e4165c9983cd..0dfbe892d852 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1986,6 +1986,11 @@ S:	Supported
 F:	drivers/misc/hddl_device/
 F:	drivers/misc/intel_tsens/
 
+ARM/INTEL TSENS SUPPORT
+M:	Udhayakumar C <udhayakumar.c@intel.com>
+S:	Supported
+F:	drivers/misc/intel_tsens/
+
 ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
 M:	Jonathan Cameron <jic23@cam.ac.uk>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 2d1f7b165cc8..aed3ef61897c 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -485,4 +485,5 @@ source "drivers/misc/xlink-pcie/Kconfig"
 source "drivers/misc/xlink-ipc/Kconfig"
 source "drivers/misc/xlink-core/Kconfig"
 source "drivers/misc/vpumgr/Kconfig"
+source "drivers/misc/intel_tsens/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 2936930f3edc..c08502b22778 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -61,3 +61,4 @@ obj-y                           += xlink-pcie/
 obj-$(CONFIG_XLINK_IPC)		+= xlink-ipc/
 obj-$(CONFIG_XLINK_CORE)	+= xlink-core/
 obj-$(CONFIG_VPUMGR)		+= vpumgr/
+obj-y                           += intel_tsens/
diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
new file mode 100644
index 000000000000..bfb8fe1997f4
--- /dev/null
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -0,0 +1,15 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+config INTEL_TSENS_LOCAL_HOST
+	bool "Temperature sensor driver for intel tsens"
+	select THERMAL
+	help
+	  This option enables the tsens thermal local host driver.
+
+	  This driver reports thermal data via the thermal framework.
+	  Enable this option if you want support for the thermal
+	  management controller.
+	  Say Y if using a processor that includes the Intel VPU such as
+	  Keem Bay.  If unsure, say N.
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
new file mode 100644
index 000000000000..93dee8b9f481
--- /dev/null
+++ b/drivers/misc/intel_tsens/Makefile
@@ -0,0 +1,7 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Makefile for intel tsens Thermal Linux driver
+#
+
+obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST)	+= intel_tsens_thermal.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_thermal.c b/drivers/misc/intel_tsens/intel_tsens_thermal.c
new file mode 100644
index 000000000000..83aec191555c
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_thermal.c
@@ -0,0 +1,651 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/i2c.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include "intel_tsens_thermal.h"
+
+struct intel_tsens_trip_info {
+	enum thermal_trip_type trip_type;
+	int temp;
+};
+
+struct intel_tsens {
+	char name[20];
+	u32 n_trips;
+	u32 passive_delay;
+	u32 polling_delay;
+	u32 sensor_type;
+	u64 addr;
+	u64 size;
+	u32 curr_temp;
+	void __iomem *base_addr;
+	struct intel_tsens_trip_info **trip_info;
+	struct thermal_zone_device *tz;
+	void *pdata;
+	struct intel_tsens_plat_info plat_info;
+};
+
+struct intel_tsens_priv {
+	int n_sens;
+	bool global_clk_available;
+	void __iomem *base_addr;
+	struct clk *tsens_clk;
+	u32 tsens_clk_rate;
+	struct intel_tsens **intel_tsens;
+	struct device *dev;
+	struct platform_device *pdev;
+	struct intel_tsens_plat_info plat_info;
+};
+
+static int intel_tsens_register_pdev(struct intel_tsens_plat_info *plat_info)
+{
+	struct intel_tsens_plat_data plat_data;
+	struct platform_device_info pdevinfo;
+	struct platform_device *dd;
+
+	memset(&pdevinfo, 0, sizeof(pdevinfo));
+	pdevinfo.name = plat_info->plat_name;
+	pdevinfo.id = plat_info->id;
+	plat_data.base_addr = plat_info->base_addr;
+	plat_data.name = plat_info->plat_name;
+	plat_data.get_temp = NULL;
+	pdevinfo.data = &plat_data;
+	pdevinfo.size_data = sizeof(plat_data);
+	dd = platform_device_register_full(&pdevinfo);
+	if (IS_ERR(dd))
+		return -EINVAL;
+	plat_info->pdev = dd;
+
+	return 0;
+}
+
+static void intel_tsens_unregister_pdev(struct intel_tsens_priv *priv)
+{
+	int i;
+
+	if (priv->plat_info.pdev)
+		platform_device_unregister(priv->plat_info.pdev);
+
+	for (i = 0; i < priv->n_sens; i++) {
+		struct intel_tsens *tsens = priv->intel_tsens[i];
+
+		if (tsens->plat_info.pdev)
+			platform_device_unregister(tsens->plat_info.pdev);
+	}
+}
+
+static int intel_tsens_add_pdev(struct intel_tsens_priv *priv)
+{
+	int i, ret;
+
+	/*
+	 * Register platform device for each sensor.
+	 *
+	 */
+	if (priv->plat_info.plat_name) {
+		priv->plat_info.base_addr = priv->base_addr;
+		ret = intel_tsens_register_pdev(&priv->plat_info);
+		if (ret) {
+			dev_err(&priv->pdev->dev,
+				"platform device register failed for %s\n",
+				priv->plat_info.plat_name);
+			return ret;
+		}
+	}
+	for (i = 0; i < priv->n_sens; i++) {
+		struct intel_tsens *tsens = priv->intel_tsens[i];
+
+		if (!tsens->plat_info.plat_name)
+			continue;
+		tsens->plat_info.base_addr = tsens->base_addr;
+		ret = intel_tsens_register_pdev(&tsens->plat_info);
+		if (ret) {
+			dev_err(&priv->pdev->dev,
+				"platform device register failed for %s\n",
+				tsens->name);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int intel_tsens_thermal_get_temp(struct thermal_zone_device *tz,
+					int *temp)
+{
+	struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+	struct intel_tsens_priv *priv =
+		(struct intel_tsens_priv *)tsens->pdata;
+	struct intel_tsens_plat_data *plat_data;
+	int type = tsens->sensor_type;
+	struct platform_device *pdev;
+
+	if (tsens->plat_info.plat_name) {
+		pdev = tsens->plat_info.pdev;
+		plat_data = pdev->dev.platform_data;
+
+		if (!plat_data) {
+			dev_err(&pdev->dev, "Platform data not found for %s\n",
+				tsens->name);
+			return -EINVAL;
+		}
+		if (!plat_data->get_temp) {
+			*temp = 0;
+			return -EINVAL;
+		}
+		if (plat_data->get_temp(pdev, type, temp))
+			return -EINVAL;
+		tsens->curr_temp = *temp;
+		return 0;
+	}
+	if (priv->plat_info.plat_name) {
+		pdev = priv->plat_info.pdev;
+		plat_data = pdev->dev.platform_data;
+
+		if (!plat_data) {
+			dev_err(&pdev->dev, "Platform data not found for %s\n",
+				tsens->name);
+			return -EINVAL;
+		}
+		if (!plat_data->get_temp) {
+			*temp = 0;
+			return -EINVAL;
+		}
+
+		if (plat_data->get_temp(pdev, type, temp))
+			return -EINVAL;
+		tsens->curr_temp = *temp;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static int intel_tsens_thermal_get_trip_type(struct thermal_zone_device *tz,
+					     int trip,
+					     enum thermal_trip_type *type)
+{
+	struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+
+	*type = tsens->trip_info[trip]->trip_type;
+	return 0;
+}
+
+static int intel_tsens_thermal_get_trip_temp(struct thermal_zone_device *tz,
+					     int trip, int *temp)
+{
+	struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+
+	*temp = tsens->trip_info[trip]->temp;
+	return 0;
+}
+
+/* See https://lwn.net/Articles/242046/ for how to receive this
+ * event in user space.
+ */
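+/*
+ * User-space sketch (illustrative, not part of this driver): the
+ * properties sent below arrive as a kobject uevent and can be read
+ * from a NETLINK_KOBJECT_UEVENT socket, for example:
+ *
+ *	#include <sys/socket.h>
+ *	#include <linux/netlink.h>
+ *
+ *	int fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
+ *	struct sockaddr_nl nl = { .nl_family = AF_NETLINK,
+ *				  .nl_groups = 1 };
+ *	char buf[4096];
+ *
+ *	bind(fd, (struct sockaddr *)&nl, sizeof(nl));
+ *	recv(fd, buf, sizeof(buf), 0);
+ *
+ * buf then holds "change@<devpath>" followed by NUL-separated
+ * NAME=, TEMP=, TRIP= and EVENT= strings.
+ */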
+static int intel_tsens_notify_user_space(struct thermal_zone_device *tz,
+					 int trip)
+{
+	char *thermal_prop[5];
+	int i, ret = 0;
+
+	mutex_lock(&tz->lock);
+	thermal_prop[0] = kasprintf(GFP_KERNEL, "NAME=%s", tz->type);
+	thermal_prop[1] = kasprintf(GFP_KERNEL, "TEMP=%d",
+				    tz->emul_temperature);
+	thermal_prop[2] = kasprintf(GFP_KERNEL, "TRIP=%d", trip);
+	thermal_prop[3] = kasprintf(GFP_KERNEL, "EVENT=%d", tz->notify_event);
+	thermal_prop[4] = NULL;
+	if (thermal_prop[0] && thermal_prop[1] &&
+	    thermal_prop[2] && thermal_prop[3]) {
+		kobject_uevent_env(&tz->device.kobj, KOBJ_CHANGE,
+				   thermal_prop);
+	} else {
+		ret = -ENOMEM;
+	}
+	for (i = 0; i < 4; ++i)
+		kfree(thermal_prop[i]);
+	mutex_unlock(&tz->lock);
+	return ret;
+}
+
+static int intel_tsens_thermal_notify(struct thermal_zone_device *tz,
+				      int trip, enum thermal_trip_type type)
+{
+	intel_tsens_notify_user_space(tz, trip);
+
+	if (type == THERMAL_TRIP_PASSIVE || type == THERMAL_TRIP_CRITICAL)
+		return 1;
+	return 0;
+}
+
+static int intel_tsens_thermal_bind(struct thermal_zone_device *tz,
+				    struct thermal_cooling_device *cdev)
+{
+	struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+	struct intel_tsens_priv *priv =
+		(struct intel_tsens_priv *)tsens->pdata;
+	int ret = 0;
+
+	/*
+	 * Bind the cooling device only when its name matches the
+	 * thermal zone name.
+	 */
+	if (strncmp(tz->type, cdev->type, THERMAL_NAME_LENGTH) == 0) {
+		ret = thermal_zone_bind_cooling_device
+				(tz,
+				THERMAL_TRIP_PASSIVE,
+				cdev,
+				THERMAL_NO_LIMIT,
+				THERMAL_NO_LIMIT,
+				THERMAL_WEIGHT_DEFAULT);
+		if (ret) {
+			dev_err(&priv->pdev->dev,
+				"binding zone %s with cdev %s failed:%d\n",
+				tz->type, cdev->type, ret);
+		}
+	}
+	return ret;
+}
+
+static int intel_tsens_thermal_unbind(struct thermal_zone_device *tz,
+				      struct thermal_cooling_device *cdev)
+{
+	int ret;
+
+	ret = thermal_zone_unbind_cooling_device(tz, 0, cdev);
+	if (ret) {
+		dev_err(&tz->device,
+			"unbinding zone %s with cdev %s failed:%d\n",
+			tz->type, cdev->type, ret);
+	}
+	return ret;
+}
+
+static struct thermal_zone_device_ops tsens_thermal_ops = {
+	.bind = intel_tsens_thermal_bind,
+	.unbind = intel_tsens_thermal_unbind,
+	.get_temp = intel_tsens_thermal_get_temp,
+	.get_trip_type	= intel_tsens_thermal_get_trip_type,
+	.get_trip_temp	= intel_tsens_thermal_get_trip_temp,
+	.notify		= intel_tsens_thermal_notify,
+/*	.set_emul_temp = tsens_thermal_emulation */
+
+};
+
+static int intel_tsens_get_temp(int type, int *temp, void *pdata)
+{
+	struct intel_tsens_priv *priv = (struct intel_tsens_priv *)pdata;
+
+	if (!priv)
+		return -EINVAL;
+
+	return intel_tsens_thermal_get_temp(priv->intel_tsens[type]->tz, temp);
+}
+
+struct intel_tsens_i2c_plat_data i2c_plat_data = {
+	.get_temp	= intel_tsens_get_temp,
+};
+
+static void intel_tsens_remove_thermal_zones(struct intel_tsens_priv *priv)
+{
+	int i;
+
+	for (i = 0; i < priv->n_sens; i++) {
+		struct intel_tsens *tsens = priv->intel_tsens[i];
+
+		if (tsens->tz) {
+			thermal_zone_device_unregister(tsens->tz);
+			tsens->tz = NULL;
+		}
+	}
+}
+
+static int intel_tsens_add_thermal_zones(struct intel_tsens_priv *priv)
+{
+	int i;
+
+	for (i = 0; i < priv->n_sens; i++) {
+		struct intel_tsens *tsens = priv->intel_tsens[i];
+
+		tsens->tz =
+		thermal_zone_device_register(tsens->name,
+					     tsens->n_trips,
+					     0,
+					     tsens,
+					     &tsens_thermal_ops,
+					     NULL,
+					     tsens->passive_delay,
+					     tsens->polling_delay);
+		if (IS_ERR(tsens->tz)) {
+			dev_err(&priv->pdev->dev,
+				"failed to register thermal zone device %s\n",
+				tsens->name);
+			return PTR_ERR(tsens->tz);
+		}
+	}
+
+	return 0;
+}
+
+static void intel_tsens_remove_clk(struct intel_tsens_priv *priv)
+{
+	struct platform_device *pdev = priv->pdev;
+
+	clk_disable_unprepare(priv->tsens_clk);
+	devm_clk_put(&pdev->dev, priv->tsens_clk);
+}
+
+static int intel_tsens_clk_config(struct intel_tsens_priv *priv)
+{
+	struct platform_device *pdev = priv->pdev;
+	int ret;
+
+	if (priv->global_clk_available) {
+		priv->tsens_clk = devm_clk_get(&pdev->dev, NULL);
+		if (IS_ERR(priv->tsens_clk)) {
+			ret = PTR_ERR(priv->tsens_clk);
+			if (ret != -EPROBE_DEFER) {
+				dev_err(&pdev->dev,
+					"failed to get thermal clk: %d\n", ret);
+			}
+			return PTR_ERR(priv->tsens_clk);
+		}
+		ret = clk_set_rate(priv->tsens_clk, priv->tsens_clk_rate);
+		if (ret) {
+			dev_err(&pdev->dev,
+				"failed to set rate for thermal clk: %d\n",
+				ret);
+			devm_clk_put(&pdev->dev, priv->tsens_clk);
+			return ret;
+		}
+		ret = clk_prepare_enable(priv->tsens_clk);
+		if (ret) {
+			dev_err(&pdev->dev,
+				"failed to enable thermal clk: %d\n",
+				ret);
+			devm_clk_put(&pdev->dev, priv->tsens_clk);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int intel_tsens_config_sensors(struct device_node *s_node,
+				      struct intel_tsens *tsens,
+				      int sensor_type)
+{
+	struct intel_tsens_priv *priv = (struct intel_tsens_priv *)tsens->pdata;
+	struct platform_device *pdev = priv->pdev;
+	s32 trip_temp_count, trip_temp_type_c, i;
+
+	of_property_read_string_index(s_node, "plat_name", 0,
+				      &tsens->plat_info.plat_name);
+	tsens->plat_info.id = 1 << sensor_type;
+	tsens->sensor_type = sensor_type;
+	if (of_property_read_u32(s_node, "passive_delay",
+				 &tsens->passive_delay)) {
+		dev_err(&pdev->dev,
+			"passive_delay missing in dt for %s\n",
+			tsens->name);
+		return -EINVAL;
+	}
+	if (of_property_read_u32(s_node, "polling_delay",
+				 &tsens->polling_delay)) {
+		dev_err(&pdev->dev,
+			"polling_delay missing in dt for %s\n",
+			tsens->name);
+		return -EINVAL;
+	}
+	trip_temp_count = of_property_count_u32_elems(s_node, "trip_temp");
+	trip_temp_type_c = of_property_count_strings(s_node, "trip_type");
+	if (trip_temp_count != trip_temp_type_c ||
+	    trip_temp_count <= 0 || trip_temp_type_c <= 0) {
+		dev_err(&pdev->dev,
+			"trip temp config is missing in dt for %s\n",
+			tsens->name);
+		return -EINVAL;
+	}
+
+	tsens->trip_info =
+		devm_kcalloc(&pdev->dev, trip_temp_count,
+			     sizeof(struct intel_tsens_trip_info *),
+			     GFP_KERNEL);
+	if (!tsens->trip_info)
+		return -ENOMEM;
+	tsens->n_trips = trip_temp_count;
+	for (i = 0; i < trip_temp_count; i++) {
+		struct intel_tsens_trip_info *trip_info;
+		const char *trip_name;
+
+		trip_info = devm_kzalloc(&pdev->dev,
+					 sizeof(struct intel_tsens_trip_info),
+					 GFP_KERNEL);
+		if (!trip_info)
+			return -ENOMEM;
+
+		of_property_read_u32_index(s_node, "trip_temp", i,
+					   &trip_info->temp);
+		of_property_read_string_index(s_node, "trip_type", i,
+					      &trip_name);
+		if (!strcmp(trip_name, "passive"))
+			trip_info->trip_type = THERMAL_TRIP_PASSIVE;
+		else if (!strcmp(trip_name, "critical"))
+			trip_info->trip_type = THERMAL_TRIP_CRITICAL;
+		else if (!strcmp(trip_name, "hot"))
+			trip_info->trip_type = THERMAL_TRIP_HOT;
+		else
+			trip_info->trip_type = THERMAL_TRIP_ACTIVE;
+		tsens->trip_info[i] = trip_info;
+	}
+
+	return 0;
+}
+
+static int intel_tsens_config_dt(struct intel_tsens_priv *priv)
+{
+	struct platform_device *pdev = priv->pdev;
+	struct device_node *np = pdev->dev.of_node;
+	struct device_node *s_node = NULL, *node;
+	struct resource *res;
+	int i = 0, ret;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	priv->base_addr = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(priv->base_addr))
+		return PTR_ERR(priv->base_addr);
+	node = of_parse_phandle(np, "soc-sensors", 0);
+	if (!node)
+		return -EINVAL;
+	priv->n_sens = of_get_child_count(node);
+	if (priv->n_sens == 0) {
+		dev_err(&pdev->dev, "No sensors configured in dt\n");
+		return -EINVAL;
+	}
+	priv->global_clk_available = of_property_read_bool(np, "clocks");
+	if (priv->global_clk_available) {
+		ret = of_property_read_u32(np, "clk-rate",
+					   &priv->tsens_clk_rate);
+		if (ret) {
+			dev_err(&pdev->dev, "clk-rate not available in dt\n");
+			return ret;
+		}
+	}
+	of_property_read_string_index(np, "plat_name", 0,
+				      &priv->plat_info.plat_name);
+	priv->intel_tsens =
+		devm_kcalloc(&pdev->dev, priv->n_sens,
+			     sizeof(struct intel_tsens *),
+			     GFP_KERNEL);
+	if (!priv->intel_tsens)
+		return -ENOMEM;
+	for_each_child_of_node(node, s_node) {
+		int r_count, size_count;
+		struct intel_tsens *ts;
+
+		ts = devm_kzalloc(&pdev->dev, sizeof(struct intel_tsens),
+				  GFP_KERNEL);
+		if (!ts) {
+			of_node_put(s_node);
+			return -ENOMEM;
+		}
+		strscpy(ts->name, s_node->name, sizeof(ts->name));
+		if (!of_property_read_u32(s_node, "address-cells", &r_count) &&
+		    !of_property_read_u32(s_node, "size-cells", &size_count)) {
+			if (r_count > 1) {
+				ret = of_property_read_u64_index(s_node, "reg",
+								 0, &ts->addr);
+			} else {
+				u32 *addr = (u32 *)&ts->addr;
+
+				ret = of_property_read_u32_index(s_node, "reg",
+								 0, addr);
+			}
+			if (ret) {
+				dev_err(&pdev->dev, "Invalid reg base address\n");
+				of_node_put(s_node);
+				return ret;
+			}
+			if (size_count > 1) {
+				int index =
+					(r_count > 1) ? (r_count / 2) :
+					r_count;
+
+				ret = of_property_read_u64_index(s_node, "reg",
+								 index,
+								 &ts->size);
+			} else {
+				u32 *size = (u32 *)&ts->size;
+
+				ret = of_property_read_u32_index(s_node, "reg",
+								 r_count, size);
+			}
+			if (ret) {
+				dev_err(&pdev->dev, "Invalid size\n");
+				of_node_put(s_node);
+				return ret;
+			}
+			ts->base_addr = devm_ioremap(&pdev->dev,
+						     ts->addr,
+						     ts->size);
+		} else {
+			ts->base_addr = priv->base_addr;
+		}
+		if (!ts->base_addr) {
+			dev_err(&pdev->dev, "ioremap failed for %s\n",
+				ts->name);
+			of_node_put(s_node);
+			return -EINVAL;
+		}
+		ts->pdata = priv;
+		if (intel_tsens_config_sensors(s_node, ts, i)) {
+			dev_err(&pdev->dev,
+				"Missing sensor info in dts for %s\n",
+				ts->name);
+			of_node_put(s_node);
+			return -EINVAL;
+		}
+		priv->intel_tsens[i] = ts;
+		i++;
+	}
+
+	return 0;
+}
+
+static int intel_tsens_thermal_probe(struct platform_device *pdev)
+{
+	struct intel_tsens_priv *intel_tsens_priv;
+	int ret;
+
+	intel_tsens_priv = devm_kzalloc(&pdev->dev,
+					sizeof(struct intel_tsens_priv),
+					GFP_KERNEL);
+	if (!intel_tsens_priv)
+		return -ENOMEM;
+	intel_tsens_priv->pdev = pdev;
+	if (pdev->dev.of_node) {
+		ret = intel_tsens_config_dt(intel_tsens_priv);
+		if (ret) {
+			dev_err(&pdev->dev, "dt configuration failed\n");
+			return ret;
+		}
+	} else {
+		dev_err(&pdev->dev, "Non Device Tree build is not supported\n");
+		return -EINVAL;
+	}
+	ret = intel_tsens_clk_config(intel_tsens_priv);
+	if (ret) {
+		dev_err(&pdev->dev, "Thermal clk config failed\n");
+		return ret;
+	}
+	ret = intel_tsens_add_pdev(intel_tsens_priv);
+	if (ret) {
+		dev_err(&pdev->dev, "platform device registration failed\n");
+		goto remove_pdev;
+	}
+	ret = intel_tsens_add_thermal_zones(intel_tsens_priv);
+	if (ret) {
+		dev_err(&pdev->dev, "thermal zone configuration failed\n");
+		goto remove_tz;
+	}
+	platform_set_drvdata(pdev, intel_tsens_priv);
+	i2c_plat_data.pdata = intel_tsens_priv;
+	return 0;
+
+remove_tz:
+	intel_tsens_remove_thermal_zones(intel_tsens_priv);
+remove_pdev:
+	intel_tsens_unregister_pdev(intel_tsens_priv);
+	intel_tsens_remove_clk(intel_tsens_priv);
+	return ret;
+}
+
+/* Device Exit */
+static int intel_tsens_thermal_exit(struct platform_device *pdev)
+{
+	struct intel_tsens_priv *priv = platform_get_drvdata(pdev);
+
+	if (!priv) {
+		dev_err(&pdev->dev,
+			"unable to get private data\n");
+		return -EINVAL;
+	}
+	intel_tsens_remove_thermal_zones(priv);
+	intel_tsens_unregister_pdev(priv);
+	intel_tsens_remove_clk(priv);
+
+	return 0;
+}
+
+static const struct of_device_id intel_tsens_thermal_id_table[] = {
+	{ .compatible = "intel,intel-tsens" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, intel_tsens_thermal_id_table);
+
+static struct platform_driver intel_tsens_thermal_driver = {
+	.probe = intel_tsens_thermal_probe,
+	.remove = intel_tsens_thermal_exit,
+	.driver = {
+		.name = "intel_tsens_thermal",
+		.of_match_table = intel_tsens_thermal_id_table,
+	},
+};
+
+module_platform_driver(intel_tsens_thermal_driver);
+
+MODULE_DESCRIPTION("TSENS Thermal Driver");
+MODULE_AUTHOR("Sandeep Singh <sandeep1.singh@intel.com>");
+MODULE_AUTHOR("Raja Subramanian, Lakshmi Bai <lakshmi.bai.raja.subramanian@intel.com>");
+MODULE_AUTHOR("Udhayakumar C <udhayakumar.c@intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/intel_tsens/intel_tsens_thermal.h b/drivers/misc/intel_tsens/intel_tsens_thermal.h
new file mode 100644
index 000000000000..a531c95b20b3
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_thermal.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * Intel tsens thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_INTEL_TSENS_H
+#define _LINUX_INTEL_TSENS_H
+
+#include <linux/platform_device.h>
+#include <linux/thermal.h>
+
+struct intel_tsens_plat_data {
+	const char *name;
+	void __iomem *base_addr;
+	int (*get_temp)(struct platform_device *pdev, int type, int *temp);
+	void *pdata;
+};
+
+struct intel_tsens_plat_info {
+	const char *plat_name;
+	int id;
+	struct platform_device *pdev;
+	void __iomem *base_addr;
+};
+
+struct intel_tsens_i2c_plat_data {
+	int (*get_temp)(int type, int *temp, void *pdata);
+	void *pdata;
+};
+
+/* TSENS i2c platform data */
+extern struct intel_tsens_i2c_plat_data i2c_plat_data;
+
+#endif /* _LINUX_INTEL_TSENS_H */
diff --git a/include/linux/hddl_device.h b/include/linux/hddl_device.h
new file mode 100644
index 000000000000..1c21ad27ea33
--- /dev/null
+++ b/include/linux/hddl_device.h
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * High Density Deep Learning Kernel module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef __HDDL_DEVICE_H
+#define __HDDL_DEVICE_H
+
+#include <linux/i2c.h>
+#include <linux/platform_device.h>
+#include <linux/thermal.h>
+#include <linux/types.h>
+#if IS_ENABLED(CONFIG_XLINK_CORE)
+#include <linux/xlink.h>
+#include <linux/xlink_drv_inf.h>
+#endif /* XLINK_CORE */
+
+#define HDDL_ALIGN 4
+
+#define HDDL_MAGIC 'x'
+#define HDDL_READ_SW_ID_DATA		_IOW(HDDL_MAGIC, 'a', void*)
+#define HDDL_SOFT_RESET		_IOW(HDDL_MAGIC, 'b', void*)
+
+struct sw_id_hddl_data {
+	u32 board_id;
+	u32 soc_id;
+	u32 soc_adaptor_no[2];
+	u32 sw_id;
+	u32 return_id;
+};
+
+struct sw_id_soft_reset {
+	u32 sw_id;
+	u32 return_id;
+};
+
+enum hddl_xlink_adapter {
+	HDDL_XLINK_I2C_MASTER,
+	HDDL_XLINK_I2C_SLAVE,
+	HDDL_XLINK_I2C_END,
+};
+
+enum hddl_device {
+	HDDL_I2C_CLIENT		= (1 << 0),
+	HDDL_XLINK_CLIENT	= (1 << 1),
+	HDDL_XLINK_SMBUS_CLIENT	= (1 << 2),
+};
+
+enum hddl_device_status {
+	HDDL_DEV_STATUS_START,
+	HDDL_DEV_STATUS_CONNECTED,
+	HDDL_DEV_STATUS_DISCONNECTED,
+	HDDL_DEV_STATUS_END,
+};
+
+enum hddl_msg_type {
+	HDDL_GET_NSENS		= 0x10,
+	HDDL_GET_SENS_NAME	= 0x11,
+	HDDL_GET_SENS_DETAILS	= 0x12,
+	HDDL_GET_SENS_TRIP_INFO	= 0x13,
+	HDDL_GET_N_I2C_DEVS	= 0x14,
+	HDDL_GET_I2C_DEVS	= 0x15,
+	HDDL_GET_I2C_DEV_ADDR	= 0x16,
+	HDDL_GET_SENS_COMPLETE	= 0x20,
+};
+
+struct intel_hddl_tsens_msg {
+	int msg_type;
+	u32 sensor_type;
+	u32 trip_info_idx;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_hddl_board_info {
+	int board_id;
+	int soc_id;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_tsens_data {
+	char name[20];
+	u32 n_trips;
+	u32 passive_delay;
+	u32 polling_delay;
+	u32 sensor_type;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_hddl_i2c_devs_data {
+	char name[20];
+	u32 addr;
+	u32 bus;
+	int enabled;
+	int local_host;
+	int remote_host;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_hddl_i2c_devs {
+	char name[20];
+	u32 addr;
+	u32 bus;
+	int enabled;
+	int local_host;
+	int remote_host;
+	struct i2c_board_info board_info;
+	struct i2c_client *xlk_client;
+	struct i2c_client *i2c_client;
+	struct i2c_client *smbus_client;
+};
+
+struct intel_hddl_clients {
+#if IS_ENABLED(CONFIG_XLINK_CORE)
+	struct xlink_handle xlink_dev;
+#endif /* XLINK_CORE */
+	struct task_struct *hddl_dev_connect_task;
+	void *task;
+	u32 chan_num;
+	void *pdata;
+	struct intel_hddl_board_info board_info;
+	u32 xlink_i2c_ch[HDDL_XLINK_I2C_END];
+	u32 i2c_chan_num;
+	void **tsens;
+	u32 nsens;
+	struct platform_device *xlink_i2c_plt_dev[HDDL_XLINK_I2C_END];
+	struct platform_device *pdev;
+	struct i2c_adapter *adap[HDDL_XLINK_I2C_END];
+	struct i2c_adapter *smbus_adap;
+	struct intel_hddl_i2c_devs **i2c_devs;
+	int n_clients;
+	enum hddl_device_status status;
+	/* hddl device lock */
+	struct mutex lock;
+};
+
+struct intel_tsens_trip_info {
+	enum thermal_trip_type trip_type;
+	int temp;
+} __packed __aligned(HDDL_ALIGN);
+
+#if IS_ENABLED(CONFIG_XLINK_CORE)
+static inline u32 tsens_get_device_id(struct intel_hddl_clients *d)
+{
+	return d->xlink_dev.sw_device_id;
+}
+#else
+static inline u32 tsens_get_device_id(struct intel_hddl_clients *d)
+{
+	return -EINVAL;
+}
+#endif /* XLINK_CORE */
+
+#endif /* __HDDL_DEVICE_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 28/34] misc: Intel tsens IA host driver.
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (26 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 27/34] misc: Tsens ARM host thermal driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 29/34] Intel tsens i2c slave driver mgross
                   ` (5 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add Intel tsens IA host driver for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as a PCIe
endpoint device.

High-level architecture:
------------------------

Remote Host IA CPU                      Local Host ARM CPU
----------------                     --------------------------
|  Platform    |                     |  Thermal Daemon        |
| Management SW|                     |                        |
----------------                     --------------------------
|  Intel tsens |                     |  intel tsens i2c slave |
|  i2c client  |                     |  and thermal driver    |
----------------                     --------------------------
|  XLINK I2C   |                     |  XLINK I2C Slave       |
|  controller  |     <=========>     |   controller           |
----------------     xlink smbus     --------------------------

intel tsens module:
-------------------
The tsens module enables reading of the on-chip sensors present
in the Intel Edge.AI Computer Vision platforms.  In the tsens module,
various junction and SoC temperatures are reported using the thermal
subsystem and the I2C subsystem.

Temperature data reported through the thermal subsystem is used by
cooling agents such as DVFS and fan control, and to shut down the
system if a critical temperature is reached.

Temperature data reported through the I2C subsystem is used by
platform manageability software running on the IA host.

- Remote host driver
  * Intended for the IA CPU
  * It is an I2C client driver
  * Driver path:
  {tree}/drivers/misc/intel_tsens/intel_tsens_host.c

The local host and remote host drivers communicate using the
I2C SMBus protocol, as sketched below.
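
A minimal sketch of the remote-host read sequence follows (illustrative
only, not part of this patch; it mirrors the command-byte layout used by
intel_tsens_host.c below, with the sensor index in bits 5:0 and the byte
index in bits 7:6 of the SMBus command):

#include <linux/i2c.h>

/* Read one sensor's temperature (a signed 32-bit value in millidegrees
 * Celsius) over SMBus, one byte per transaction.
 */
static int tsens_read_temp_sketch(struct i2c_client *client,
				  u8 sensor_type, int *temp)
{
	u32 val = 0;
	int i;

	for (i = 0; i < 4; i++) {
		s32 byte = i2c_smbus_read_byte_data(client,
						    sensor_type | (i << 6));

		if (byte < 0)
			return byte;
		val |= (u32)(byte & 0xff) << (8 * i);
	}
	*temp = (s32)val;
	return 0;
}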

Acked-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 Documentation/hwmon/index.rst               |   1 +
 Documentation/hwmon/intel_tsens_host.rst    |  71 ++++
 drivers/misc/intel_tsens/Kconfig            |  13 +
 drivers/misc/intel_tsens/Makefile           |   1 +
 drivers/misc/intel_tsens/intel_tsens_host.c | 351 ++++++++++++++++++++
 include/linux/intel_tsens_host.h            |  34 ++
 6 files changed, 471 insertions(+)
 create mode 100644 Documentation/hwmon/intel_tsens_host.rst
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_host.c
 create mode 100644 include/linux/intel_tsens_host.h

diff --git a/Documentation/hwmon/index.rst b/Documentation/hwmon/index.rst
index fc29100bef73..7a9eaddd1ab3 100644
--- a/Documentation/hwmon/index.rst
+++ b/Documentation/hwmon/index.rst
@@ -81,6 +81,7 @@ Hardware Monitoring Kernel Drivers
    isl68137
    it87
    intel_tsens_sensor.rst
+   intel_tsens_host.rst
    jc42
    k10temp
    k8temp
diff --git a/Documentation/hwmon/intel_tsens_host.rst b/Documentation/hwmon/intel_tsens_host.rst
new file mode 100644
index 000000000000..012c593f969f
--- /dev/null
+++ b/Documentation/hwmon/intel_tsens_host.rst
@@ -0,0 +1,71 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+Kernel driver: intel_tsens
+==========================
+
+Supported chips:
+  * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+    Slave address: The address is assigned by the hddl device management
+                   driver.
+
+    Datasheet:
+      Documentation/hwmon/intel_tsens_sensor.rst#Remote Thermal Interface
+
+Authors:
+    - Thalaiappan, Rathina <rathina.thalaiappan@intel.com>
+    - Udhayakumar C <udhayakumar.c@intel.com>
+
+Description
+===========
+The intel_tsens driver reports the junction temperature from different heating
+points inside the SoC. The driver receives the temperature over an SMBus
+connection. The reported temperature is in millidegrees Celsius.
+
+In Keem Bay, the four thermal junction temperature points are
+Media Subsystem (mss), NN Subsystem (nce), Compute Subsystem (css) and
+SoC (the maximum of mss, nce and css).
+
+Example
+=======
+Temperature reported by a Keem Bay on the Linux Thermal sysfs interface.
+
+# cat /sys/class/thermal/thermal_zone*/type
+mss
+css
+nce
+soc
+
+# cat /sys/class/thermal/thermal_zone*/temp
+0
+29210
+28478
+29210
+
++-----------+-------------+
+| offset    |   Sensor    |
++-----------+-------------+
+|   0       |   mss       |
++-----------+-------------+
+|   1       |   css       |
++-----------+-------------+
+|   2       |   nce       |
++-----------+-------------+
+|   3       |   soc       |
++-----------+-------------+
+
+#sudo i2cdetect -l
+i2c-8   smbus           SMBus I801 adapter at efa0              SMBus adapter
+
+To read mss junction temperature:
+#i2cget -y 8 <slave addr> 0x0 w
+
+To read css junction temperature:
+#i2cget -y 8 <slave addr> 0x1 w
+
+To read nce junction temperature:
+#i2cget -y 8 <slave addr> 0x2 w
+
+To read overall SoC temperature:
+#i2cget -y 8 <slave addr> 0x3 w
diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index bfb8fe1997f4..8b263fdd80c3 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -13,3 +13,16 @@ config INTEL_TSENS_LOCAL_HOST
 	  management controller.
 	  Say Y if using a processor that includes the Intel VPU such as
 	  Keem Bay.  If unsure, say N.
+
+config INTEL_TSENS_IA_HOST
+	tristate "Temperature sensor driver for intel tsens remote host"
+	depends on I2C && THERMAL
+	depends on I2C_SMBUS
+	help
+	  This option enables the tsens I2C and thermal remote (IA) host driver.
+
+	  This driver reads thermal data via I2C SMBus and registers
+	  itself with the thermal framework, so the data can be used
+	  by a thermal daemon on the remote IA host.
+	  Say Y if using a processor that includes the Intel VPU such as
+	  Keem Bay.  If unsure, say N.
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index 93dee8b9f481..250dc484fb49 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -5,3 +5,4 @@
 #
 
 obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST)	+= intel_tsens_thermal.o
+obj-$(CONFIG_INTEL_TSENS_IA_HOST)	+= intel_tsens_host.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_host.c b/drivers/misc/intel_tsens/intel_tsens_host.c
new file mode 100644
index 000000000000..adb553f3f2e3
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_host.c
@@ -0,0 +1,351 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/ioctl.h>
+#include <linux/intel_tsens_host.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/printk.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include <uapi/linux/stat.h>
+
+#define TSENS_BINDING_NAME	"intel_tsens"
+#define TSENS_BYTE_INDEX_SHIFT  0x6
+#define TSENS_READ_BYTE0	(0x0 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE1	(0x1 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE2	(0x2 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE3	(0x3 << TSENS_BYTE_INDEX_SHIFT)
+
+static int tsens_i2c_smbus_read_byte_data(struct i2c_client *i2c, u8 command,
+					  u8 *i2c_val)
+{
+	union i2c_smbus_data data;
+	int status;
+
+	status = i2c_smbus_xfer(i2c->adapter, i2c->addr, i2c->flags,
+				I2C_SMBUS_READ, command,
+				I2C_SMBUS_BYTE_DATA, &data);
+	*i2c_val = data.byte;
+	return status;
+}
+
+/**
+ * intel_tsens_get_temp - get the updated temperature
+ * @zone: Thermal zone device
+ * @temp: updated temperature value.
+ *
+ * Temperature values read from the sensors range from -40000 (-40 degrees
+ * Celsius) to 126000 (126 degrees Celsius). If reading the temperature
+ * fails, -255 is reported to indicate the failure.
+ */
+static int intel_tsens_get_temp(struct thermal_zone_device *zone,
+				int *temp)
+{
+	struct intel_tsens_host *tsens =
+		(struct intel_tsens_host *)zone->devdata;
+	struct i2c_client *i2c_c;
+	int status, sensor_type;
+	u8 i2c_val;
+	s32 val;
+
+	if (strstr(zone->type, "smb"))
+		i2c_c = tsens->i2c_smbus;
+	else
+		i2c_c = tsens->i2c_xlk;
+
+	*temp = -255;
+	sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE0;
+	status = tsens_i2c_smbus_read_byte_data(i2c_c,
+						sensor_type,
+						&i2c_val);
+	if (status < 0)
+		return status;
+	val = i2c_val;
+	sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE1;
+	status = tsens_i2c_smbus_read_byte_data(i2c_c,
+						sensor_type,
+						&i2c_val);
+	if (status < 0)
+		return status;
+	val |= (i2c_val << 8);
+	sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE2;
+	status = tsens_i2c_smbus_read_byte_data(i2c_c,
+						sensor_type,
+						&i2c_val);
+	if (status < 0)
+		return status;
+	val |= (i2c_val << 16);
+	sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE3;
+	status = tsens_i2c_smbus_read_byte_data(i2c_c,
+						sensor_type,
+						&i2c_val);
+	if (status < 0)
+		return status;
+	val |= (i2c_val << 24);
+	*temp = val;
+	return 0;
+}
+
+static int intel_tsens_thermal_get_trip_type(struct thermal_zone_device *zone,
+					     int trip,
+					     enum thermal_trip_type *type)
+{
+	struct intel_tsens_host *tsens =
+		(struct intel_tsens_host *)zone->devdata;
+
+	*type = tsens->trip_info[trip]->trip_type;
+	return 0;
+}
+
+static int intel_tsens_thermal_get_trip_temp(struct thermal_zone_device *zone,
+					     int trip, int *temp)
+{
+	struct intel_tsens_host *tsens =
+		(struct intel_tsens_host *)zone->devdata;
+
+	*temp = tsens->trip_info[trip]->temp;
+	return 0;
+}
+
+static int intel_tsens_thermal_notify(struct thermal_zone_device *tz,
+				      int trip, enum thermal_trip_type type)
+{
+	int ret = 0;
+
+	switch (type) {
+	case THERMAL_TRIP_ACTIVE:
+		dev_warn(&tz->device,
+			 "zone %s reached active trip temperature %d\n",
+			 tz->type, tz->temperature);
+		ret = 1;
+		break;
+	case THERMAL_TRIP_CRITICAL:
+		dev_warn(&tz->device,
+			 "zone %s reached critical trip temperature %d\n",
+			 tz->type, tz->temperature);
+		ret = 1;
+		break;
+	default:
+		break;
+	}
+	return ret;
+}
+
+static int intel_tsens_bind(struct thermal_zone_device *tz,
+			    struct thermal_cooling_device *cdev)
+{
+	int ret;
+
+	/*
+	 * Check here thermal device zone name and cdev name to match,
+	 * then call the bind device
+	 */
+	if (strncmp(TSENS_BINDING_NAME, cdev->type,
+		    strlen(TSENS_BINDING_NAME)) == 0) {
+		ret = thermal_zone_bind_cooling_device
+				(tz,
+				THERMAL_TRIP_ACTIVE,
+				cdev,
+				THERMAL_NO_LIMIT,
+				THERMAL_NO_LIMIT,
+				THERMAL_WEIGHT_DEFAULT);
+		if (ret) {
+			dev_err(&tz->device,
+				"binding zone %s with cdev %s failed:%d\n",
+				tz->type, cdev->type, ret);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+static int intel_tsens_unbind(struct thermal_zone_device *tz,
+			      struct thermal_cooling_device *cdev)
+{
+	int ret;
+
+	if (strncmp(TSENS_BINDING_NAME, cdev->type,
+		    strlen(TSENS_BINDING_NAME)) == 0) {
+		ret = thermal_zone_unbind_cooling_device(tz, 0, cdev);
+		if (ret) {
+			dev_err(&tz->device,
+				"unbinding zone %s with cdev %s failed:%d\n",
+				tz->type, cdev->type, ret);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+static struct thermal_zone_device_ops tsens_thermal_ops = {
+	.bind = intel_tsens_bind,
+	.unbind = intel_tsens_unbind,
+	.get_temp = intel_tsens_get_temp,
+	.get_trip_type	= intel_tsens_thermal_get_trip_type,
+	.get_trip_temp	= intel_tsens_thermal_get_trip_temp,
+	.notify		= intel_tsens_thermal_notify,
+};
+
+static int intel_tsens_add_tz(struct intel_tsens_host *tsens,
+			      struct thermal_zone_device **tz,
+			      const char *name,
+			      struct device *dev,
+			      int i)
+{
+	int ret;
+
+	*tz =  thermal_zone_device_register(name,
+					    tsens->t_data->n_trips,
+					    0, tsens,
+					    &tsens_thermal_ops,
+					    NULL,
+					    tsens->t_data->passive_delay,
+					    tsens->t_data->polling_delay);
+	if (IS_ERR(*tz)) {
+		ret = PTR_ERR(*tz);
+		dev_err(dev,
+			"failed to register thermal zone device %s\n",
+			tsens->t_data->name);
+		return ret;
+	}
+	return 0;
+}
+
+static void intel_tsens_remove_tz(struct intel_hddl_clients *d)
+{
+	int i;
+
+	for (i = 0; i < d->nsens; i++) {
+		struct intel_tsens_host *tsens = d->tsens[i];
+
+		if (tsens->tz_smbus) {
+			thermal_zone_device_unregister(tsens->tz_smbus);
+			tsens->tz_smbus = NULL;
+		}
+		if (tsens->tz_xlk) {
+			thermal_zone_device_unregister(tsens->tz_xlk);
+			tsens->tz_xlk = NULL;
+		}
+	}
+}
+
+static int intel_tsens_tj_probe(struct i2c_client *client,
+				const struct i2c_device_id *id)
+{
+	struct intel_hddl_clients *d = client->dev.platform_data;
+	u32 device_id = tsens_get_device_id(d);
+	char *i2c_str;
+	int ret, i;
+
+	if (strstr(client->adapter->name, "SMBus I801")) {
+		i2c_str = "smb";
+		for (i = 0; i < d->nsens; i++) {
+			struct intel_tsens_host *tsens = d->tsens[i];
+
+			tsens->sensor_name_smbus =
+				kasprintf(GFP_KERNEL,
+					  "%s_%s-%x",
+					  tsens->t_data->name,
+					  i2c_str, device_id);
+			tsens->i2c_smbus = client;
+			ret = intel_tsens_add_tz(tsens,
+						 &tsens->tz_smbus,
+						 tsens->sensor_name_smbus,
+						 &client->dev,
+						 i);
+			if (ret) {
+				dev_err(&client->dev,
+					"thermal zone configuration failed\n");
+				intel_tsens_remove_tz(d);
+				return ret;
+			}
+		}
+	} else {
+		i2c_str = "xlk";
+		for (i = 0; i < d->nsens; i++) {
+			struct intel_tsens_host *tsens = d->tsens[i];
+
+			tsens->sensor_name_xlk =
+				kasprintf(GFP_KERNEL,
+					  "%s_%s-%x",
+					  tsens->t_data->name,
+					  i2c_str, device_id);
+			tsens->i2c_xlk = client;
+			ret = intel_tsens_add_tz(tsens,
+						 &tsens->tz_xlk,
+						 tsens->sensor_name_xlk,
+						 &client->dev,
+						 i);
+			if (ret) {
+				dev_err(&client->dev,
+					"thermal zone configuration failed\n");
+				intel_tsens_remove_tz(d);
+				return ret;
+			}
+		}
+	}
+
+	i2c_set_clientdata(client, d);
+
+	return 0;
+}
+
+static int intel_tsens_tj_exit(struct i2c_client *client)
+{
+	struct intel_hddl_clients *d = client->dev.platform_data;
+
+	if (!d) {
+		dev_err(&client->dev,
+			"Unable to get private data\n");
+		return -EINVAL;
+	}
+	intel_tsens_remove_tz(d);
+	return 0;
+}
+
+static const struct i2c_device_id i2c_intel_tsens_id[] = {
+	{ "intel_tsens", (kernel_ulong_t)NULL },
+	{}
+};
+MODULE_DEVICE_TABLE(i2c, i2c_intel_tsens_id);
+
+static struct i2c_driver i2c_intel_tsens_driver = {
+	.driver = {
+		.name = "intel_tsens",
+	},
+	.probe = intel_tsens_tj_probe,
+	.remove = intel_tsens_tj_exit,
+	.id_table = i2c_intel_tsens_id,
+};
+module_i2c_driver(i2c_intel_tsens_driver);
+
+MODULE_DESCRIPTION("Intel tsens host Device driver");
+MODULE_AUTHOR("Sandeep Singh <sandeep1.singh@intel.com>");
+MODULE_AUTHOR("Vaidya, Mahesh R <mahesh.r.vaidya@intel.com>");
+MODULE_AUTHOR("Udhayakumar C <udhayakumar.c@intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/intel_tsens_host.h b/include/linux/intel_tsens_host.h
new file mode 100644
index 000000000000..4b9b2d6a5cfc
--- /dev/null
+++ b/include/linux/intel_tsens_host.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * Intel tsens host I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_INTEL_TSENS_HOST_DEVICE_H
+#define _LINUX_INTEL_TSENS_HOST_DEVICE_H
+
+struct intel_tsens_host_trip_info {
+	enum thermal_trip_type trip_type;
+	int temp;
+} __packed __aligned(4);
+
+struct intel_tsens_host {
+	const char *sensor_name_smbus;
+	const char *sensor_name_xlk;
+	struct intel_tsens_data *t_data;
+	struct intel_tsens_host_trip_info **trip_info;
+	u32 device_id;
+	struct i2c_client *i2c_xlk;
+	struct i2c_client *i2c_smbus;
+	struct thermal_zone_device *tz_xlk;
+	struct thermal_zone_device *tz_smbus;
+};
+
+struct intel_tsens_host_plat_data {
+	int nsens;
+	struct intel_tsens_host **tsens;
+};
+#endif /*_LINUX_INTEL_TSENS_HOST_DEVICE_H*/
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 29/34] Intel tsens i2c slave driver.
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (27 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 28/34] misc: Intel tsens IA host driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-12  7:15   ` Randy Dunlap
  2021-01-08 21:25 ` [PATCH v2 30/34] misc:intel_tsens: Intel Keem Bay tsens driver mgross
                   ` (4 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add Intel tsens i2c slave driver for Intel Edge.AI Computer Vision
platforms.

The tsens i2c slave driver enables reading of the on-chip sensors present
in the Intel Edge.AI Computer Vision platforms. In the tsens i2c module,
the various junction and SoC temperatures are reported using the I2C slave
protocol, as sketched below.
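
A minimal sketch of the slave-side decode follows (illustrative only,
not part of this patch; the masks mirror those defined in
intel_tsens_i2c.c below).  The master writes one command byte that
selects the sensor in bits 5:0 and the byte index in bits 7:6, then
reads the temperature back one byte at a time:

#include <linux/types.h>

#define TSENS_BYTE_INDEX_SHIFT	0x6
#define TSENS_BYTE_INDEX_MASK	0x3

/* Return the requested byte of a sensor's signed 32-bit millidegree
 * value; 'temp' stands for whatever get_temp() produced for the sensor
 * selected by the low six bits of 'command'.
 */
static u8 tsens_slave_byte_sketch(u8 command, s32 temp)
{
	u8 idx = (command >> TSENS_BYTE_INDEX_SHIFT) & TSENS_BYTE_INDEX_MASK;

	return ((u8 *)&temp)[idx];	/* little-endian byte of the value */
}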

Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 drivers/misc/intel_tsens/Kconfig           |  14 +++
 drivers/misc/intel_tsens/Makefile          |   1 +
 drivers/misc/intel_tsens/intel_tsens_i2c.c | 119 +++++++++++++++++++++
 3 files changed, 134 insertions(+)
 create mode 100644 drivers/misc/intel_tsens/intel_tsens_i2c.c

diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index 8b263fdd80c3..c2138339bd89 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
 	  Say Y if using a processor that includes the Intel VPU such as
 	  Keem Bay.  If unsure, say N.
 
+config INTEL_TSENS_I2C_SLAVE
+	bool "I2C slave driver for intel tsens"
+	depends on INTEL_TSENS_LOCAL_HOST
+	select I2C
+	select I2C_SLAVE
+	help
+	  This option enables tsens i2c slave driver.
+
+	  This driver is used for reporting thermal data via I2C
+	  SMBUS to remote host.
+	  Enable this option if you want to have support for thermal
+	  management controller
+	  Say Y if using a processor that includes the Intel VPU such as
+	  Keem Bay.  If unsure, say N.
 config INTEL_TSENS_IA_HOST
 	tristate "Temperature sensor driver for intel tsens remote host"
 	depends on I2C && THERMAL
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index 250dc484fb49..f6f41bbca80c 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -5,4 +5,5 @@
 #
 
 obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST)	+= intel_tsens_thermal.o
+obj-$(CONFIG_INTEL_TSENS_I2C_SLAVE)	+= intel_tsens_i2c.o
 obj-$(CONFIG_INTEL_TSENS_IA_HOST)	+= intel_tsens_host.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_i2c.c b/drivers/misc/intel_tsens/intel_tsens_i2c.c
new file mode 100644
index 000000000000..520c3f4bf392
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_i2c.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <linux/i2c.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include "intel_tsens_thermal.h"
+
+#define TSENS_BYTE_INDEX_SHIFT	0x6
+#define TSENS_BYTE_INDEX_MASK	0x3
+#define TSENS_SENSOR_TYPE_MASK	0x3F
+
+struct intel_tsens_i2c {
+	int sensor_type;
+	u16 buffer_idx;
+	bool read_only;
+	u8 idx_write_cnt;
+	struct intel_tsens_i2c_plat_data *plat_data;
+};
+
+static int intel_i2c_tsens_slave_cb(struct i2c_client *client,
+				    enum i2c_slave_event event, u8 *val)
+{
+	struct intel_tsens_i2c *tsens_i2c = i2c_get_clientdata(client);
+	struct intel_tsens_i2c_plat_data *plat_data = tsens_i2c->plat_data;
+	int ret = 0;
+
+	switch (event) {
+	case I2C_SLAVE_WRITE_RECEIVED:
+		tsens_i2c->sensor_type = *val;
+		break;
+
+	case I2C_SLAVE_READ_PROCESSED:
+	case I2C_SLAVE_READ_REQUESTED:
+		if (plat_data->get_temp) {
+			int temp;
+			int sensor_type = tsens_i2c->sensor_type &
+						TSENS_SENSOR_TYPE_MASK;
+
+			if (!plat_data->get_temp(sensor_type, &temp,
+						 plat_data->pdata)) {
+				u8 offset = (tsens_i2c->sensor_type >>
+						TSENS_BYTE_INDEX_SHIFT) &
+						TSENS_BYTE_INDEX_MASK;
+				u8 *ptr_temp = (u8 *)&temp;
+
+				*val = ptr_temp[offset];
+				tsens_i2c->buffer_idx++;
+				ret = 0;
+			} else {
+				ret = -EINVAL;
+			}
+		} else {
+			ret = -EINVAL;
+		}
+		break;
+
+	case I2C_SLAVE_STOP:
+	case I2C_SLAVE_WRITE_REQUESTED:
+		tsens_i2c->idx_write_cnt = 0;
+		tsens_i2c->buffer_idx = 0;
+		break;
+
+	default:
+		break;
+	}
+	return ret;
+}
+
+static int intel_i2c_tsens_slave_probe(struct i2c_client *client,
+				       const struct i2c_device_id *id)
+{
+	struct intel_tsens_i2c *priv;
+	int ret;
+
+	if (!id->driver_data) {
+		dev_err(&client->dev, "No platform data\n");
+		return -EINVAL;
+	}
+	priv = devm_kzalloc(&client->dev,
+			    sizeof(struct intel_tsens_i2c),
+			    GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	priv->plat_data = (struct intel_tsens_i2c_plat_data *)id->driver_data;
+	i2c_set_clientdata(client, priv);
+	ret = i2c_slave_register(client, intel_i2c_tsens_slave_cb);
+	if (ret)
+		dev_err(&client->dev, "i2c slave register failed\n");
+
+	return ret;
+}
+
+static struct i2c_device_id intel_i2c_tsens_slave_id[] = {
+	{ "intel_tsens", (kernel_ulong_t)&i2c_plat_data},
+	{}
+};
+MODULE_DEVICE_TABLE(i2c, intel_i2c_tsens_slave_id);
+
+static struct i2c_driver intel_i2c_tsens_slave_driver = {
+	.driver = {
+		.name = "intel_tsens",
+	},
+	.probe = intel_i2c_tsens_slave_probe,
+	.remove = i2c_slave_unregister,
+	.id_table = intel_i2c_tsens_slave_id,
+};
+
+module_i2c_driver(intel_i2c_tsens_slave_driver);
+
+MODULE_AUTHOR("Udhayakumar C <udhayakumar.c@intel.com>");
+MODULE_DESCRIPTION("tsens i2c slave driver");
+MODULE_LICENSE("GPL");
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 30/34] misc:intel_tsens: Intel Keem Bay tsens driver.
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (28 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 29/34] Intel tsens i2c slave driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 31/34] Intel Keem Bay XLink SMBus driver mgross
                   ` (3 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add the keembay_thermal driver to expose the on-chip temperature
sensors and register callback functions for periodic sampling.

This driver does the following:
* Reads temperature data from the on-chip sensors present in the
  Keem Bay platform.
* Registers a callback function with the intel tsens driver for
  sampling temperature values periodically.
* Decodes the raw register values to millidegrees Celsius (see the
  sketch below).
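
The decode step is a bounded table lookup; a minimal sketch of the
conversion (illustrative only, using the raw_kmb[] table and the
KEEMBAY_SENSOR_BASE_TEMP constant from keembay_tsens.h below):

/* Convert a raw sample from the AON_TSENS_DATA registers into
 * millidegrees Celsius, clamping to the table bounds.
 */
static int kmb_raw_to_millicelsius(int raw_sample)
{
	int idx = raw_sample - KEEMBAY_SENSOR_BASE_TEMP;	/* base is 27 */

	if (idx < 0)
		idx = 0;
	else if (idx > raw_kmb_size - 1)
		idx = raw_kmb_size - 1;

	return raw_kmb[idx];	/* e.g. 29210 means 29.210 degrees Celsius */
}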

Acked-by: mark gross <mgross@linux.intel.com>
Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 drivers/misc/intel_tsens/Kconfig           |  12 +
 drivers/misc/intel_tsens/Makefile          |   1 +
 drivers/misc/intel_tsens/keembay_thermal.c | 169 ++++++++++
 drivers/misc/intel_tsens/keembay_tsens.h   | 366 +++++++++++++++++++++
 4 files changed, 548 insertions(+)
 create mode 100644 drivers/misc/intel_tsens/keembay_thermal.c
 create mode 100644 drivers/misc/intel_tsens/keembay_tsens.h

diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index c2138339bd89..bdd87518a1bc 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -28,6 +28,18 @@ config INTEL_TSENS_I2C_SLAVE
 	  management controller
 	  Say Y if using a processor that includes the Intel VPU such as
 	  Keem Bay.  If unsure, say N.
+
+config KEEMBAY_THERMAL
+	tristate "Temperature sensor driver for intel Keem Bay"
+	depends on INTEL_TSENS_LOCAL_HOST && ARCH_KEEMBAY
+	help
+	  Enable this option if you want to have support for Keem Bay
+	  thermal management sensors.
+
+	  This driver is used for reading on-chip temperature sensor
+	  values from the Keem Bay SoC.
+	  Say Y if using a processor that includes the Intel VPU such as
+	  Keem Bay.  If unsure, say N.
 config INTEL_TSENS_IA_HOST
 	tristate "Temperature sensor driver for intel tsens remote host"
 	depends on I2C && THERMAL
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index f6f41bbca80c..00f63c2d5b2f 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -7,3 +7,4 @@
 obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST)	+= intel_tsens_thermal.o
 obj-$(CONFIG_INTEL_TSENS_I2C_SLAVE)	+= intel_tsens_i2c.o
 obj-$(CONFIG_INTEL_TSENS_IA_HOST)	+= intel_tsens_host.o
+obj-$(CONFIG_KEEMBAY_THERMAL)		+= keembay_thermal.o
diff --git a/drivers/misc/intel_tsens/keembay_thermal.c b/drivers/misc/intel_tsens/keembay_thermal.c
new file mode 100644
index 000000000000..d6c8fa8fc3aa
--- /dev/null
+++ b/drivers/misc/intel_tsens/keembay_thermal.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel Keem Bay thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include "intel_tsens_thermal.h"
+#include "keembay_tsens.h"
+
+struct keembay_thermal_priv {
+	const char *name;
+	void __iomem *base_addr;
+	/* sensor lock*/
+	spinlock_t lock;
+	int current_temp[KEEMBAY_SENSOR_MAX];
+	struct intel_tsens_plat_data *plat_data;
+};
+
+static void kmb_sensor_read_temp(void __iomem *regs_val,
+				 int offset,
+				 int sample_valid_mask,
+				 int sample_value,
+				 int bit_shift,
+				 int *temp)
+{
+	int reg_val, kmb_raw_index;
+
+	/* clear the bit of TSENS_EN and re-enable again */
+	iowrite32(0x00, regs_val + AON_TSENS_CFG);
+	iowrite32(CFG_MASK_MANUAL, regs_val + AON_TSENS_CFG);
+	reg_val = ioread32(regs_val + offset);
+	if (reg_val & sample_valid_mask) {
+		reg_val = (reg_val >> bit_shift) & sample_value;
+		kmb_raw_index = reg_val - KEEMBAY_SENSOR_BASE_TEMP;
+		if (kmb_raw_index < 0)
+			reg_val = raw_kmb[0];
+		else if (kmb_raw_index > (raw_kmb_size - 1))
+			reg_val = raw_kmb[raw_kmb_size - 1];
+		else
+			reg_val = raw_kmb[kmb_raw_index];
+		*temp = reg_val;
+	} else {
+		*temp = -255;
+	}
+}
+
+/* Temperature reads are serialized by priv->lock, taken below. */
+static int keembay_get_temp(struct platform_device *pdev, int type, int *temp)
+{
+	struct keembay_thermal_priv *priv = platform_get_drvdata(pdev);
+
+	spin_lock(&priv->lock);
+	switch (type) {
+	case KEEMBAY_SENSOR_MSS:
+		kmb_sensor_read_temp(priv->base_addr,
+				     AON_TSENS_DATA0,
+				     MSS_T_SAMPLE_VALID,
+				     MSS_T_SAMPLE,
+				     MSS_BIT_SHIFT,
+				     temp);
+		priv->current_temp[KEEMBAY_SENSOR_MSS] = *temp;
+		break;
+
+	case KEEMBAY_SENSOR_CSS:
+		kmb_sensor_read_temp(priv->base_addr,
+				     AON_TSENS_DATA0,
+				     CSS_T_SAMPLE_VALID,
+				     CSS_T_SAMPLE,
+				     CSS_BIT_SHIFT,
+				     temp);
+		priv->current_temp[KEEMBAY_SENSOR_CSS] = *temp;
+		break;
+
+	case KEEMBAY_SENSOR_NCE:
+	{
+		int nce0, nce1;
+
+		kmb_sensor_read_temp(priv->base_addr,
+				     AON_TSENS_DATA1,
+				     NCE0_T_SAMPLE_VALID,
+				     NCE0_T_SAMPLE,
+				     NCE0_BIT_SHIFT,
+				     &nce0);
+		kmb_sensor_read_temp(priv->base_addr,
+				     AON_TSENS_DATA1,
+				     NCE1_T_SAMPLE_VALID,
+				     NCE1_T_SAMPLE,
+				     NCE1_BIT_SHIFT,
+				     &nce1);
+		*temp = max(nce0, nce1);
+		priv->current_temp[KEEMBAY_SENSOR_NCE] = *temp;
+	}
+		break;
+
+	case KEEMBAY_SENSOR_SOC:
+	{
+		int i;
+
+		*temp = 0;
+		for (i = 0; i < KEEMBAY_SENSOR_MAX; i++)
+			*temp = max(*temp, priv->current_temp[i]);
+	}
+		break;
+
+	default:
+		break;
+	}
+	spin_unlock(&priv->lock);
+
+	return 0;
+}
+
+static int keembay_thermal_probe(struct platform_device *pdev)
+{
+	struct intel_tsens_plat_data *plat_data;
+	struct keembay_thermal_priv *priv;
+
+	plat_data = pdev->dev.platform_data;
+	if (!plat_data) {
+		dev_err(&pdev->dev, "Platform data not found\n");
+		return -EINVAL;
+	}
+	if (!plat_data->base_addr)
+		return -EINVAL;
+
+	priv = devm_kzalloc(&pdev->dev,
+			    sizeof(struct keembay_thermal_priv),
+			    GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	priv->name = plat_data->name;
+	priv->base_addr = plat_data->base_addr;
+	priv->plat_data = plat_data;
+	plat_data->get_temp = keembay_get_temp;
+	spin_lock_init(&priv->lock);
+	platform_set_drvdata(pdev, priv);
+	dev_info(&pdev->dev, "Thermal driver loaded for %s\n",
+		 plat_data->name);
+	return 0;
+}
+
+static struct platform_driver keembay_thermal_driver = {
+	.probe = keembay_thermal_probe,
+	.driver = {
+		.name = "intel,keembay_thermal",
+	},
+};
+
+module_platform_driver(keembay_thermal_driver);
+
+MODULE_DESCRIPTION("Keem Bay Thermal Driver");
+MODULE_AUTHOR("Sandeep Singh <sandeep1.singh@intel.com>");
+MODULE_AUTHOR("Raja Subramanian, Lakshmi Bai <lakshmi.bai.raja.subramanian@intel.com>");
+MODULE_AUTHOR("Udhayakumar C <udhayakumar.c@intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/intel_tsens/keembay_tsens.h b/drivers/misc/intel_tsens/keembay_tsens.h
new file mode 100644
index 000000000000..7a61c09de261
--- /dev/null
+++ b/drivers/misc/intel_tsens/keembay_tsens.h
@@ -0,0 +1,366 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * Intel Keem Bay thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_KEEMBAY_TSENS_H
+#define _LINUX_KEEMBAY_TSENS_H
+
+#include <linux/thermal.h>
+
+/* Register values for keembay temperature (PVT Sensor) */
+#define AON_TSENS_TRIM0_CFG		0x0030
+#define AON_TSENS_TRIM1_CFG		0x0034
+#define AON_TSENS_CFG			0x0038
+#define AON_TSENS_INT0			0x203c
+#define AON_TSENS_INT1			0x2040
+#define AON_TSENS_IRQ_CLEAR		0x0044
+#define AON_TSENS_DATA0			0x0048
+#define MSS_T_SAMPLE_VALID		0x80000000
+#define MSS_T_SAMPLE			0x3ff
+#define CSS_T_SAMPLE_VALID		0x8000
+#define CSS_T_SAMPLE			0x3ff
+#define NCE1_T_SAMPLE_VALID		0x80000000
+#define NCE1_T_SAMPLE			0x3ff
+#define NCE0_T_SAMPLE_VALID		0x8000
+#define NCE0_T_SAMPLE			0x3ff
+#define AON_TSENS_DATA1			0x004c
+#define AON_INTERFACE			0x20260000
+/* Bit shift for registers*/
+#define MSS_BIT_SHIFT			16
+#define CSS_BIT_SHIFT			0
+#define NCE0_BIT_SHIFT			0
+#define NCE1_BIT_SHIFT			16
+/* mask values for config register */
+#define CFG_MASK_AUTO			0x80ff	/* auto configuration */
+#define CFG_IRQ_MASK			0x8fff
+#define CFG_MASK_MANUAL			0x000f	/* TSENS_EN (manual config) */
+
+/**
+ * KEEMBAY_SENSOR_MSS - Media subsystem junction temperature.
+ * KEEMBAY_SENSOR_CSS - Compute subsystem junction temperature.
+ * KEEMBAY_SENSOR_NCE - Neural computing engine junction temperature.
+ *			For NCE, two sensors are available in the Keem Bay
+ *			platform; the maximum temperature of the two is
+ *			returned as the NCE temperature.
+ * KEEMBAY_SENSOR_SOC - SoC temperature.
+ *			The maximum of MSS, CSS and NCE is returned as the
+ *			SoC temperature.
+ */
+enum keembay_thermal_sensor_en {
+	KEEMBAY_SENSOR_MSS,
+	KEEMBAY_SENSOR_CSS,
+	KEEMBAY_SENSOR_NCE,
+	KEEMBAY_SENSOR_SOC,
+	KEEMBAY_SENSOR_MAX
+};
+
+#define KEEMBAY_SENSOR_BASE_TEMP 27
+
+static int const raw_kmb[] = {
+-39956, -39637, -39319, -39001, -38684,
+
+-38367, -38050, -37734, -37418, -37103,
+
+-36787, -36472, -36158, -35844, -35530,
+
+-35216, -34903, -34590, -34278, -33966,
+
+-33654, -33343, -33032, -32721, -32411,
+
+-32101, -31791, -31482, -31173, -30864,
+
+-30556, -30248, -29940, -29633, -29326,
+
+-29020, -28713, -28407, -28102, -27797,
+
+-27492, -27187, -26883, -26579, -26276,
+
+-25973, -25670, -25367, -25065, -24763,
+
+-24462, -24160, -23860, -23559, -23259,
+
+-22959, -22660, -22360, -22062, -21763,
+
+-21465, -21167, -20869, -20572, -20275,
+
+-19979, -19683, -19387, -19091, -18796,
+
+-18501, -18206, -17912, -17618, -17325,
+
+-17031, -16738,  -16446, -16153, -15861,
+
+-15570, -15278,  -14987, -14697, -14406,
+
+-14116, -13826,  -13537, -13248, -12959,
+
+-12670, -12382,  -12094, -11807, -11520,
+
+-11233, -10946,  -10660, -10374, -10088,
+
+-9803, -9518, -9233, -8949, -8665,
+
+-8381, -8097, -7814, -7531, -7249,
+
+-6967, -6685, -6403, -6122, -5841,
+
+-5560, -5279, -4999, -4720, -4440,
+
+-4161, -3882, -3603, -3325, -3047,
+
+-2770, -2492, -2215, -1938, -1662,
+
+-1386, -1110, -834, -559, -284,
+
+-9, 265, 539, 813, 1086,
+
+1360, 1633, 1905, 2177, 2449,
+
+2721, 2993, 3264, 3535, 3805,
+
+4075, 4345, 4615, 4884, 5153,
+
+5422, 5691, 5959, 6227, 6495,
+
+6762, 7029, 7296, 7562, 7829,
+
+8095, 8360, 8626, 8891, 9155,
+
+9420, 9684, 9948, 10212, 10475,
+
+10738, 11001, 11264, 11526, 11788,
+
+12049, 12311, 12572, 12833, 13093,
+
+13354, 13614, 13874, 14133, 14392,
+
+14651, 14910, 15168, 15426, 15684,
+
+15942, 16199, 16456, 16713, 16969,
+
+17225, 17481, 17737, 17992, 18247,
+
+18502, 18757, 19011, 19265, 19519,
+
+19772, 20025, 20278, 20531, 20784,
+
+21036, 21288, 21539, 21791, 22042,
+
+22292, 22543, 22793, 23043, 23293,
+
+23543, 23792, 24041, 24290, 24538,
+
+24786, 25034, 25282, 25529, 25776,
+
+26023, 26270, 26516, 26763, 27008,
+
+27254, 27499, 27745, 27989, 28234,
+
+28478, 28722, 28966, 29210, 29453,
+
+29696, 29939, 30182, 30424, 30666,
+
+30908, 31149, 31391, 31632, 31873,
+
+32113, 32353, 32593, 32833, 33073,
+
+33312, 33551, 33790, 34029, 34267,
+
+34505, 34743, 34980, 35218, 35455,
+
+35692, 35928, 36165, 36401, 36637,
+
+36872, 37108, 37343, 37578, 37813,
+
+38047, 38281, 38515, 38749, 38982,
+
+39216, 39448, 39681, 39914, 40146,
+
+40378, 40610, 40841, 41073, 41304,
+
+41535, 41765, 41996, 42226, 42456,
+
+42686, 42915, 43144, 43373, 43602,
+
+43830, 44059, 44287, 44515, 44742,
+
+44970, 45197, 45424, 45650, 45877,
+
+46103, 46329, 46555, 46780, 47006,
+
+47231, 47456, 47680, 47905, 48129,
+
+48353, 48576, 48800,  49023, 49246,
+
+49469, 49692, 49914,  50136, 50358,
+
+50580, 50801, 51023,  51244, 51464,
+
+51685, 51905, 52126,  52346, 52565,
+
+52785, 53004, 53223,  53442, 53661,
+
+53879, 54097, 54315,  54533, 54750,
+
+54968, 55185, 55402,  55618, 55835,
+
+56051, 56267, 56483,  56699, 56914,
+
+57129, 57344, 57559,  57773, 57988,
+
+58202, 58416, 58630,  58843, 59056,
+
+59269, 59482, 59695,  59907, 60120,
+
+60332, 60543, 60755,  60966, 61178,
+
+61389, 61599, 61810,  62020, 62231,
+
+62440, 62650, 62860,  63069, 63278,
+
+63487, 63696, 63904,  64113, 64321,
+
+64529, 64737, 64944,  65151, 65358,
+
+65565, 65772, 65979,  66185, 66391,
+
+66597, 66803, 67008, 67213, 67419,
+
+67624, 67828, 68033, 68237, 68441,
+
+68645, 68849, 69052, 69256, 69459,
+
+69662, 69865, 70067, 70270, 70472,
+
+70674, 70876, 71077, 71279, 71480,
+
+71681, 71882, 72082, 72283, 72483,
+
+72683, 72883, 73083, 73282, 73481,
+
+73680, 73879, 74078, 74277, 74475,
+
+74673, 74871, 75069, 75266, 75464,
+
+75661, 75858, 76055, 76252, 76448,
+
+76644, 76841, 77037, 77232, 77428,
+
+77623, 77818, 78013, 78208, 78403,
+
+78597, 78792, 78986, 79180, 79373,
+
+79567, 79760, 79953, 80146, 80339,
+
+80532, 80724, 80917, 81109, 81301,
+
+81492, 81684, 81875, 82066, 82258,
+
+82448, 82639, 82830, 83020, 83210,
+
+83400, 83590, 83779, 83969, 84158,
+
+84347, 84536, 84725, 84913, 85102,
+
+85290, 85478, 85666, 85854, 86041,
+
+86228, 86416, 86603, 86789, 86976,
+
+87163, 87349, 87535, 87721, 87907,
+
+88092, 88278, 88463, 88648, 88833,
+
+89018, 89203, 89387, 89571, 89755,
+
+89939, 90123, 90307, 90490, 90674,
+
+90857, 91040, 91222, 91405, 91587,
+
+91770, 91952, 92134, 92315, 92497,
+
+92679, 92860, 93041, 93222, 93403,
+
+93583, 93764, 93944, 94124, 94304,
+
+94484, 94664, 94843, 95023, 95202,
+
+95381, 95560, 95738, 95917, 96095,
+
+96273, 96451, 96629, 96807, 96985,
+
+97162, 97339, 97516, 97693, 97870,
+
+98047, 98223, 98399, 98576, 98752,
+
+98927, 99103, 99279, 99454, 99629,
+
+99804, 99979, 100154, 100328, 100503,
+
+100677, 100851, 101025, 101199, 101373,
+
+101546, 101720, 101893, 102066, 102239,
+
+102411, 102584, 102756, 102929, 103101,
+
+103273, 103445, 103616, 103788, 103959,
+
+104130, 104302, 104472, 104643, 104814,
+
+104984, 105155, 105325, 105495, 105665,
+
+105835, 106004, 106174, 106343, 106512,
+
+106681, 106850, 107019, 107187, 107355,
+
+107524, 107692, 107860, 108028, 108195,
+
+108363, 108530, 108697, 108865, 109031,
+
+109198, 109365, 109531, 109698, 109864,
+
+110030, 110196, 110362, 110528, 110693,
+
+110858, 111024, 111189, 111354, 111518,
+
+111683, 111848, 112012, 112176, 112340,
+
+112504, 112668, 112832, 112995, 113159,
+
+113322, 113485, 113648, 113811, 113973,
+
+114136, 114298, 114461, 114623, 114785,
+
+114947, 115108, 115270, 115431, 115593,
+
+115754, 115915, 116076, 116236, 116397,
+
+116558, 116718, 116878, 117038, 117198,
+
+117358, 117518, 117677, 117836, 117996,
+
+118155, 118314, 118473, 118631, 118790,
+
+118948, 119107, 119265, 119423, 119581,
+
+119739, 119896, 120054, 120211, 120368,
+
+120525, 120682, 120839, 120996, 121153,
+
+121309, 121465, 121622, 121778, 121934,
+
+122089, 122245, 122400, 122556, 122711,
+
+122866, 123021, 123176, 123331, 123486,
+
+123640, 123794, 123949, 124103, 124257,
+
+124411, 124564, 124718, 124871, 125025,
+};
+
+static int raw_kmb_size = sizeof(raw_kmb) / sizeof(int);
+
+#endif /* _LINUX_KEEMBAY_TSENS_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 31/34] Intel Keem Bay XLink SMBus driver
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (29 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 30/34] misc:intel_tsens: Intel Keem Bay tsens driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 32/34] dt-bindings: misc: hddl_dev: Add hddl device management documentation mgross
                   ` (2 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Ramya P Karanth

From: Ramya P Karanth <ramya.p.karanth@intel.com>

Add the XLink SMBus driver for the Intel Keem Bay SoC.

The xlink-smbus driver is a logical SMBus adapter that uses the XLink
(xlink-pcie) protocol as its transport. Keem Bay vision accelerators are
connected to the server via a PCIe interface. The server needs to know
the temperature of the SoC, and that temperature can come either from
on-board sensors or from on-chip sensors. These sensors sit on the I2C
bus of the SoC, so the server has no direct access to them. With the
xlink-smbus adapter, the server can reach the on-board and on-chip
sensors over xlink.
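
For illustration only, a minimal, hypothetical sketch of what an unmodified
sensor driver on the remote host ends up doing: it issues an ordinary SMBus
read against an i2c client bound to the xlink adapter, and xlink-smbus
tunnels the transfer over XLink/PCIe. The function name and register offset
below are made up for the example.

	/* Hypothetical example: a plain SMBus read from a sensor driver.
	 * Because the i2c_client sits on the xlink adapter, the call is
	 * routed through xlink_smbus_xfer() and tunnelled to the SoC; the
	 * sensor driver itself needs no interface change.
	 */
	#include <linux/i2c.h>

	static int example_read_temp(struct i2c_client *client)
	{
		/* 0x00 is an assumed temperature register */
		return i2c_smbus_read_word_data(client, 0x00);
	}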

Signed-off-by: Ramya P Karanth <ramya.p.karanth@intel.com>
---
 Documentation/i2c/busses/index.rst            |   1 +
 .../i2c/busses/intel-xlink-smbus.rst          |  71 +++
 drivers/misc/Kconfig                          |   1 +
 drivers/misc/Makefile                         |   1 +
 drivers/misc/xlink-smbus/Kconfig              |  26 +
 drivers/misc/xlink-smbus/Makefile             |   5 +
 drivers/misc/xlink-smbus/xlink-smbus.c        | 467 ++++++++++++++++++
 7 files changed, 572 insertions(+)
 create mode 100644 Documentation/i2c/busses/intel-xlink-smbus.rst
 create mode 100644 drivers/misc/xlink-smbus/Kconfig
 create mode 100644 drivers/misc/xlink-smbus/Makefile
 create mode 100644 drivers/misc/xlink-smbus/xlink-smbus.c

diff --git a/Documentation/i2c/busses/index.rst b/Documentation/i2c/busses/index.rst
index 5e4077b08d86..6ce4a740f616 100644
--- a/Documentation/i2c/busses/index.rst
+++ b/Documentation/i2c/busses/index.rst
@@ -29,4 +29,5 @@ I2C Bus Drivers
    i2c-taos-evm
    i2c-viapro
    i2c-via
+   intel-xlink-smbus
    scx200_acb
diff --git a/Documentation/i2c/busses/intel-xlink-smbus.rst b/Documentation/i2c/busses/intel-xlink-smbus.rst
new file mode 100644
index 000000000000..ab87d18051b4
--- /dev/null
+++ b/Documentation/i2c/busses/intel-xlink-smbus.rst
@@ -0,0 +1,71 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+Kernel driver: xlink_smbus
+==========================
+
+Supported chips:
+  * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+  Suffix: Bay
+
+  Slave address: The address is selectable by device-tree. (TBD)
+
+Authors:
+    - Raja Subramanian, Lakshmi Bai <lakshmi.bai.raja.subramanian@intel.com>
+    - Thalaiappan, Rathina <rathina.thalaiappan@intel.com>
+    - Karanth, Ramya P <ramya.p.karanth@intel.com>
+
+Description
+===========
+The Intel Edge.AI Computer Vision platforms are monitored through platform
+devices such as sensors, fan controllers and IO expanders. Some of these
+devices are memory mapped and some are I2C based; none of them are directly
+accessible to the host.
+
+The host here is the server to which the vision accelerators are connected
+over a PCIe interface. The host needs to take consolidated action based on
+the parameters reported by these platform devices. Most of the standard
+devices (sensors, fan controllers, IO expanders and so on) are I2C/SMBus
+based and report the status of the accelerator. Standard drivers for these
+devices already exist and use the i2c/smbus APIs.
+
+Instead of changing the sensor drivers to adapt to the PCIe interface, a
+generic i2c adapter, "xlink-smbus", which uses xlink as its physical medium
+underneath, is used. With xlink-smbus, the drivers for the platform devices
+do not need any interface change.
+
+High-level architecture
+=======================
+
+Accessing Onchip devices::
+
+        -------------------                     -------------------
+        |   Remote Host   |                     |   Local Host    |
+        |   IA CPU        |                     | Vision platforms|
+        -------------------                     -------------------
+        |     Onchip      |                     |    i2c slave    | ==> Access the device
+        |  sensor driver  |                     |    handler      | ==> which is mmio based
+        -------------------                     -------------------
+        |Intel XLINK_SMBUS|                     |Intel XLINK_SMBUS|
+        |     adapter     |                     |     adapter     |
+        |    (Master)     |                     |   (I2C_SLAVE)   |
+        -------------------                     -------------------
+        |      XLINK      |    <==========>     |     XLINK       |
+        -------------------        PCIE         -------------------
+
+Accessing Onboard devices::
+
+        -------------------                     ----------------------
+        |   Remote Host   |                     |     Local Host     |
+        |   IA CPU        |                     |  Vision platforms  |
+        -------------------                     ----------------------
+        |    On board     |                     |      i2c smbus     | ==> Access the device
+        |  sensor driver  |                     |   xfer [synopsys]  | ==> which is on i2c bus
+        -------------------                     ----------------------
+        |Intel XLINK_SMBUS|                     | Intel XLINK_SMBUS  |
+        |     adapter     |                     |       adapter      |
+        |    (Master)     |                     |(SMBUS_PROXY Master)|
+        -------------------                     ----------------------
+        |      XLINK      |    <==========>     |        XLINK       |
+        -------------------        PCIE         ----------------------
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index aed3ef61897c..f6229dd8ba9e 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -486,4 +486,5 @@ source "drivers/misc/xlink-ipc/Kconfig"
 source "drivers/misc/xlink-core/Kconfig"
 source "drivers/misc/vpumgr/Kconfig"
 source "drivers/misc/intel_tsens/Kconfig"
+source "drivers/misc/xlink-smbus/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index c08502b22778..0ed8a62cbb20 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -62,3 +62,4 @@ obj-$(CONFIG_XLINK_IPC)		+= xlink-ipc/
 obj-$(CONFIG_XLINK_CORE)	+= xlink-core/
 obj-$(CONFIG_VPUMGR)		+= vpumgr/
 obj-y                           += intel_tsens/
+obj-$(CONFIG_XLINK_SMBUS)	+= xlink-smbus/
diff --git a/drivers/misc/xlink-smbus/Kconfig b/drivers/misc/xlink-smbus/Kconfig
new file mode 100644
index 000000000000..e6cdf8b9a096
--- /dev/null
+++ b/drivers/misc/xlink-smbus/Kconfig
@@ -0,0 +1,26 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+config XLINK_SMBUS
+	tristate "Enable smbus interface over Xlink PCIe"
+	depends on XLINK_CORE
+	depends on HDDL_DEVICE_CLIENT || HDDL_DEVICE_SERVER
+	help
+	 Enable xlink-pcie as an I2C adapter, in both slave and master roles.
+	 The server (Remote Host) uses this interface to get sensor data from
+	 the SoC (vision accelerator - Local Host) connected over PCIe.
+	 This driver is loaded on both the Remote Host and the Local Host.
+	 Select M to compile the driver as a module; the module will be
+	 called xlink-smbus.  If unsure, select N.
+
+
+config XLINK_SMBUS_PROXY
+	tristate "Enable SMBUS adapter as proxy for I2C controller"
+	depends on XLINK_CORE
+	depends on XLINK_SMBUS
+	help
+	 Enable this option when the SMBus adapter acts as a proxy for
+	 another I2C controller.
+	 Select M or Y if building for the Intel Vision Processing Unit (VPU)
+	 Local Host core.
+	 Select N if building for a Remote Host kernel.
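
As a usage illustration only, a local-host .config fragment could look like
the one below; it assumes the XLINK_CORE and HDDL device options that these
entries depend on are already enabled.

	# Illustrative Keem Bay local-host configuration (not part of the patch)
	CONFIG_XLINK_SMBUS=m
	# Proxy mode: forward SMBus transfers to the on-SoC I2C controller
	CONFIG_XLINK_SMBUS_PROXY=m
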
diff --git a/drivers/misc/xlink-smbus/Makefile b/drivers/misc/xlink-smbus/Makefile
new file mode 100644
index 000000000000..27369dfa488c
--- /dev/null
+++ b/drivers/misc/xlink-smbus/Makefile
@@ -0,0 +1,5 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+#     Makefile for Xlink SMBus
+#
+obj-$(CONFIG_XLINK_SMBUS) += xlink-smbus.o
diff --git a/drivers/misc/xlink-smbus/xlink-smbus.c b/drivers/misc/xlink-smbus/xlink-smbus.c
new file mode 100644
index 000000000000..fc652e6c96bb
--- /dev/null
+++ b/drivers/misc/xlink-smbus/xlink-smbus.c
@@ -0,0 +1,467 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel Xlink SMBus Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/stddef.h>
+#include <linux/time.h>
+#include <linux/xlink.h>
+
+struct xlink_msg {
+	u16			addr;
+	u16			flags;
+	u8			read_write;
+	u8			command;
+	u16			padding;
+	u32			protocol;
+	union i2c_smbus_data	data;
+	s32			status;
+	struct list_head	node;
+};
+
+struct xlink_adapter_data {
+	struct	xlink_handle *xhandle;
+	struct	completion work;
+	struct	task_struct *task_recv;
+	struct	i2c_client *slave;
+	struct	list_head head;
+	struct	i2c_adapter *adap;
+	u32     channel;
+};
+
+#if defined(CONFIG_XLINK_SMBUS_PROXY)
+/*
+ * Proxy the commands through an existing adapter.
+ * I2C bus 2 is fixed for Keem Bay; all sensors are connected to it.
+ */
+#define proxy_i2c_adapter_info() i2c_get_adapter(2)
+#else
+/*
+ * This is an adapter by itself
+ * It doesn't proxy transfer on another adapter
+ */
+#define proxy_i2c_adapter_info() ((void *)0)
+#endif
+
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+/*
+ * The complete slave protocol is implemented in one shot here as
+ * the whole chunk of data is transferred or received via xlink,
+ * not byte-by-byte
+ * Refer https://lwn.net/Articles/640346/ for protocol
+ */
+static s32 handle_slave_mode(struct i2c_client *slave, struct xlink_msg *msg)
+{
+	struct device *dev = &slave->dev;
+	u8 temp;
+
+	/* Send the command as first write */
+	i2c_slave_event(slave, I2C_SLAVE_WRITE_REQUESTED, NULL);
+	i2c_slave_event(slave, I2C_SLAVE_WRITE_RECEIVED, &msg->command);
+
+	/* Now handle specifics to read/write */
+	if (msg->read_write == I2C_SMBUS_WRITE) {
+		if (msg->protocol == I2C_SMBUS_BYTE_DATA) {
+			i2c_slave_event(slave, I2C_SLAVE_WRITE_RECEIVED,
+					&msg->data.byte);
+		} else if (msg->protocol == I2C_SMBUS_WORD_DATA) {
+			temp = msg->data.word & 0xFF;
+			i2c_slave_event(slave,
+					I2C_SLAVE_WRITE_RECEIVED,
+					&temp);
+			temp = (msg->data.word >> 8) & 0xFF;
+			i2c_slave_event(slave,
+					I2C_SLAVE_WRITE_RECEIVED,
+					&temp);
+		} else if (msg->protocol == I2C_SMBUS_BLOCK_DATA) {
+			int i;
+
+			if (msg->data.block[0] > I2C_SMBUS_BLOCK_MAX)
+				return -EPROTO;
+
+			for (i = 1; (i <= msg->data.block[0] &&
+				     i <= I2C_SMBUS_BLOCK_MAX); ++i) {
+				i2c_slave_event(slave,
+						I2C_SLAVE_WRITE_RECEIVED,
+						&msg->data.block[i]);
+			}
+		} else {
+			dev_err(dev,
+				"unknown protocol (%d) received in %s\n",
+				msg->protocol,
+				__func__
+				);
+			return -EOPNOTSUPP;
+		}
+	} else {
+		if (msg->protocol == I2C_SMBUS_BYTE_DATA) {
+			i2c_slave_event(slave,
+					I2C_SLAVE_READ_REQUESTED,
+					&msg->data.byte);
+		} else if (msg->protocol == I2C_SMBUS_WORD_DATA) {
+			i2c_slave_event(slave,
+					I2C_SLAVE_READ_REQUESTED,
+					&temp);
+			msg->data.word = temp << 8;
+			i2c_slave_event(slave,
+					I2C_SLAVE_READ_REQUESTED,
+					&temp);
+			msg->data.word |= temp;
+		} else if (msg->protocol == I2C_SMBUS_BLOCK_DATA) {
+			int i;
+
+			if (msg->data.block[0] > I2C_SMBUS_BLOCK_MAX)
+				return -EPROTO;
+
+			for (i = 1; (i <= msg->data.block[0] &&
+				     i <= I2C_SMBUS_BLOCK_MAX); ++i) {
+				i2c_slave_event(slave,
+						I2C_SLAVE_READ_REQUESTED,
+						&msg->data.block[i]);
+			}
+		} else {
+			dev_err(dev,
+				"unknown protocol (%d) received in %s\n",
+				msg->protocol,
+				__func__);
+			return -EOPNOTSUPP;
+		}
+		i2c_slave_event(slave, I2C_SLAVE_READ_PROCESSED, &temp);
+	}
+	i2c_slave_event(slave, I2C_SLAVE_STOP, NULL);
+	return 0;
+}
+#endif /* CONFIG_I2C_SLAVE */
+
+static s32 xlink_smbus_xfer(struct i2c_adapter *adap, u16 addr,
+			    unsigned short flags, char read_write,
+			    u8 command, int protocol,
+			    union i2c_smbus_data *data)
+{
+	struct xlink_adapter_data *adapt_data = NULL;
+	struct device *dev = NULL;
+	struct xlink_msg tx_msg, *rx_msg;
+	enum xlink_error xerr;
+	s32 rc = 0;
+
+	if (!adap)
+		return -ENODEV;
+	adapt_data = i2c_get_adapdata(adap);
+	dev = &adapt_data->adap->dev;
+
+	if (!data)
+		return -EINVAL;
+
+	tx_msg.addr = addr;
+	tx_msg.flags = flags;
+	tx_msg.read_write = read_write;
+	tx_msg.command = command;
+	tx_msg.protocol = protocol;
+	tx_msg.data = *data;
+	tx_msg.status = 0;
+
+	xerr = xlink_write_data(adapt_data->xhandle, adapt_data->channel,
+				(u8 *)&tx_msg,
+				sizeof(struct xlink_msg));
+
+	if (xerr != X_LINK_SUCCESS) {
+		dev_err_ratelimited(dev,
+				    "xlink_write_data failed (%d) dropping packet.\n",
+				    xerr);
+		return -EIO;
+	}
+
+	/*
+	 * Wait for the response from the peer host device. The message is
+	 * received by xlinki2c_receive_thread() and signalled here via the
+	 * completion.
+	 */
+	if (wait_for_completion_interruptible_timeout(&adapt_data->work,
+						      4 * HZ) > 0) {
+		rx_msg = list_first_entry(&adapt_data->head,
+					  struct xlink_msg,
+					  node);
+		list_del(&rx_msg->node);
+
+		/* Update the data and status from the xlink message received */
+		*data = rx_msg->data;
+		rc = rx_msg->status;
+
+		/* free the response received from Proxy */
+		kfree(rx_msg);
+	} else {
+		WARN_ONCE(1, "VPU not responding");
+		rc = -ETIMEDOUT;
+	}
+
+	return rc;
+}
+
+static int xlinki2c_receive_thread(void *param)
+{
+	struct xlink_adapter_data *adapt_data = param;
+	struct device *dev = &adapt_data->adap->dev;
+	struct i2c_adapter *adap;
+	enum xlink_error xerr;
+	struct xlink_msg *msg;
+	u32 size;
+
+	while (!kthread_should_stop()) {
+		/* msg will be freed in this context or other */
+		msg = kzalloc(sizeof(*msg), GFP_KERNEL);
+		if (!msg)
+			return -ENOMEM;
+
+		/* Wait to receive xlink message from the peer device */
+		xerr = xlink_read_data_to_buffer(adapt_data->xhandle,
+						 adapt_data->channel,
+						 (uint8_t *)msg, &size);
+		if (xerr != X_LINK_SUCCESS) {
+			if (xerr != X_LINK_TIMEOUT) {
+				dev_warn_ratelimited(dev,
+						     "[%d] Error (%d) dropping packet.\n",
+						     adapt_data->adap->nr, xerr);
+			}
+			kfree(msg);
+			continue;
+		}
+		xlink_release_data(adapt_data->xhandle, adapt_data->channel,
+				   NULL);
+		adap = proxy_i2c_adapter_info();
+
+		if (adap) {
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+			if (adapt_data->slave) {
+				msg->status = handle_slave_mode
+					(adapt_data->slave, msg);
+				goto send_resp;
+			}
+#endif
+			/*
+			 * This is a proxy for an existing adapter.
+			 * call the local adapter to receive the data
+			 * from the hardware.
+			 */
+			msg->status = i2c_smbus_xfer(adap,
+						     msg->addr,
+						     msg->flags,
+						     msg->read_write,
+						     msg->command,
+						     msg->protocol,
+						     &msg->data);
+
+			/*
+			 * Send the complete message, which carries the
+			 * status, back to the sender waiting in
+			 * xlink_smbus_xfer() on the peer.
+			 */
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+send_resp:
+#endif
+			xlink_write_data(adapt_data->xhandle,
+					 adapt_data->channel, (u8 *)msg,
+					 sizeof(struct xlink_msg));
+			kfree(msg);
+		} else {
+			/*
+			 * This is an adapter on its own.
+			 * Receives the status and data over xlink (msg).
+			 * Indicate the data received to the component
+			 * which is waiting in xlink_smbus_xfer
+			 */
+			list_add_tail(&msg->node, &adapt_data->head);
+			complete(&adapt_data->work);
+		}
+	} /* thread loop */
+	dev_dbg(dev, "[%d] %s stopped\n", adapt_data->adap->nr, __func__);
+
+	return 0;
+}
+
+static inline u32 xlink_smbus_func(struct i2c_adapter *adapter)
+{
+	u32 func = I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+		I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
+		I2C_FUNC_SMBUS_BLOCK_DATA;
+
+	return func;
+}
+
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+
+/*
+ * This will be called when slave client driver
+ * register itself to an adapter
+ */
+static int xlink_smbus_reg_slave(struct i2c_client *slave)
+{
+	struct xlink_adapter_data *adapt_data =
+				i2c_get_adapdata(slave->adapter);
+
+	adapt_data->slave = slave;
+
+	return 0;
+}
+
+static int xlink_smbus_unreg_slave(struct i2c_client *slave)
+{
+	struct xlink_adapter_data *adapt_data =
+				i2c_get_adapdata(slave->adapter);
+
+	adapt_data->slave = NULL;
+
+	return 0;
+}
+#endif
+
+static struct i2c_algorithm xlink_algorithm = {
+	.smbus_xfer     = xlink_smbus_xfer,
+	.functionality  = xlink_smbus_func,
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+	.reg_slave      = xlink_smbus_reg_slave,
+	.unreg_slave    = xlink_smbus_unreg_slave,
+#endif
+};
+
+static int xlink_i2c_probe(struct platform_device *pdev)
+{
+	struct intel_hddl_clients *c = pdev->dev.platform_data;
+	struct xlink_handle *devhandle = &c->xlink_dev;
+	struct xlink_adapter_data *adapt_data;
+	struct device *dev = &pdev->dev;
+	struct i2c_adapter *adap;
+	int rc;
+
+	dev_dbg(dev, "Registering xlink SMBus adapter...\n");
+
+	adap = kzalloc(sizeof(*adap), GFP_KERNEL);
+	if (!adap)
+		return -ENOMEM;
+
+	c->adap[pdev->id & 0x3] = adap;
+	memset(adap, 0, sizeof(struct i2c_adapter));
+	adap->owner  = THIS_MODULE;
+	adap->algo   = &xlink_algorithm;
+	strcpy(adap->name, "xlink adapter");
+	platform_set_drvdata(pdev, adap);
+
+	adapt_data = kzalloc(sizeof(*adapt_data), GFP_KERNEL);
+	if (!adapt_data) {
+		kfree(adap);
+		return -ENOMEM;
+	}
+
+	init_completion(&adapt_data->work);
+
+	INIT_LIST_HEAD(&adapt_data->head);
+	adapt_data->channel = c->xlink_i2c_ch[pdev->id & 0x3];
+	adapt_data->xhandle = devhandle;
+	adapt_data->adap = adap;
+
+	rc = xlink_open_channel(devhandle,
+				adapt_data->channel,
+				RXB_TXB,  /* mode */
+				64 * 1024,
+				100);  /* timeout */
+	if (rc != X_LINK_SUCCESS) {
+		dev_err(dev, "xlink_open_channel failed[%d][%d][%p]\n", rc,
+			adapt_data->channel,
+			adapt_data->xhandle);
+		goto err_kfree;
+	}
+
+	i2c_set_adapdata(adap, adapt_data);
+
+	rc = i2c_add_adapter(adap);
+	if (rc)
+		goto err_exit;
+
+	/* Create receiver thread */
+	adapt_data->task_recv = kthread_run(xlinki2c_receive_thread,
+					    adapt_data,
+					    "xlinki2c_receive_thread");
+	if (IS_ERR(adapt_data->task_recv)) {
+		rc = PTR_ERR(adapt_data->task_recv);
+		dev_err(dev, "%s Thread creation failed\n", __func__);
+		i2c_del_adapter(adapt_data->adap);
+		goto err_exit;
+	}
+	return 0;
+
+err_exit:
+	xlink_close_channel(adapt_data->xhandle, adapt_data->channel);
+err_kfree:
+	kfree(adap);
+	kfree(adapt_data);
+	return rc;
+}
+
+static int xlink_i2c_remove(struct platform_device *pdev)
+{
+	struct i2c_adapter *adap = platform_get_drvdata(pdev);
+	struct xlink_adapter_data *adapt_data = i2c_get_adapdata(adap);
+
+	kthread_stop(adapt_data->task_recv);
+
+	dev_info(&adap->dev, "Delete the adapter[%d]\n", adap->nr);
+	/* Close the channel and disconnect */
+	xlink_close_channel(adapt_data->xhandle, adapt_data->channel);
+	/* This will block the dynamic registration */
+	i2c_del_adapter(adapt_data->adap);
+	kfree(adapt_data);
+
+	return 0;
+}
+
+static struct platform_driver xlink_i2c_driver = {
+	.probe = xlink_i2c_probe,
+	.remove = xlink_i2c_remove,
+	.driver = {
+		.name   = "i2c_xlink"
+	}
+};
+
+/* Define the xlink debug device structures to be used with dev_dbg() et al */
+
+static struct device_driver dbg_name = {
+		.name = "xlink_i2c_dbg"
+};
+
+static struct device dbg_subname = {
+		.init_name = "xlink_i2c_dbg",
+		.driver = &dbg_name
+};
+
+static struct device *dbgxi2c = &dbg_subname;
+
+static void __exit xlink_adapter_exit(void)
+{
+	dev_dbg(dbgxi2c, "Unloading XLink I2C module...\n");
+	platform_driver_unregister(&xlink_i2c_driver);
+}
+
+static int __init xlink_adapter_init(void)
+{
+	dev_dbg(dbgxi2c, "Loading XLink I2C module...\n");
+	platform_driver_register(&xlink_i2c_driver);
+	return 0;
+}
+
+module_init(xlink_adapter_init);
+module_exit(xlink_adapter_exit);
+
+MODULE_AUTHOR("Raja Subramanian, Lakshmi Bai <lakshmi.bai.raja.subramanian@intel.com>");
+MODULE_AUTHOR("Thalaiappan, Rathina <rathina.thalaiappan@intel.com>");
+MODULE_AUTHOR("Karanth, Ramya P <ramya.p.karanth@intel.com>");
+MODULE_DESCRIPTION("xlink i2c adapter");
+MODULE_LICENSE("GPL");
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 32/34] dt-bindings: misc: hddl_dev: Add hddl device management documentation
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (30 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 31/34] Intel Keem Bay XLink SMBus driver mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:25 ` [PATCH v2 33/34] misc: Hddl device management for local host mgross
  2021-01-08 21:26 ` [PATCH v2 34/34] misc: HDDL device management for IA host mgross
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add hddl device management documentation.

The HDDL client driver acts as a software RTC that syncs with network time.
It abstracts the xlink protocol to communicate with the remote IA host and
exports the details of the sensors available on the platform to the remote
IA host as xlink packets.
The driver also handles device connect/disconnect events and identifies the
board id and SoC id using GPIOs, based on the platform configuration.
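
To make the binding below concrete, a hypothetical consumer could read the
channel properties like this; the function name and error handling are
illustrative only and are not part of this series.

	/* Illustrative sketch: parsing the xlink channel numbers defined by
	 * the intel,hddl-client binding.
	 */
	#include <linux/of.h>
	#include <linux/platform_device.h>

	static int example_parse_hddl_node(struct platform_device *pdev,
					   u32 *xlink_chan, u32 *i2c_xlink_chan)
	{
		struct device_node *np = pdev->dev.of_node;
		int ret;

		ret = of_property_read_u32(np, "xlink_chan", xlink_chan);
		if (ret)
			return ret;

		return of_property_read_u32(np, "i2c_xlink_chan", i2c_xlink_chan);
	}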

Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 .../bindings/misc/intel,hddl-client.yaml      | 114 ++++++++++++++++++
 1 file changed, 114 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/misc/intel,hddl-client.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,hddl-client.yaml b/Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
new file mode 100644
index 000000000000..c1d121c35fc5
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
@@ -0,0 +1,114 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,hddl-client.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel hddl client device to handle platform management in Bay series
+
+maintainers:
+  - Udhayakumar C <udhayakumar.c@intel.com>
+
+description: |
+  The HDDL client driver acts as a software RTC that syncs with network time.
+  It abstracts the xlink protocol to communicate with the remote host and
+  exports the details of the sensors available on the platform to the remote
+  host as xlink packets.
+  It also handles device connect/disconnect events and identifies the board
+  id and SoC id using GPIOs, based on the platform configuration.
+
+select: false
+
+properties:
+  compatible:
+    items:
+      - const: intel,hddl-client
+
+  reg:
+    minItems: 4
+    maxItems: 4
+
+  xlink_chan:
+    minItems: 1
+    maxItems: 1
+    description: xlink channel number used for communication
+                 with remote host for time sync and sharing sensor
+                 details available in platform.
+
+  i2c_xlink_chan:
+    minItems: 1
+    maxItems: 1
+    description: xlink channel number used for communication
+                 with remote host for xlink i2c smbus.
+
+  sensor_name:
+    type: object
+    description:
+      Details about sensors and its configuration on local host and remote
+      host.
+
+    properties:
+      compatible:
+        items:
+          - const: intel_tsens
+
+      reg:
+        description: i2c slave address for sensor.
+
+      local-host:
+        minItems: 1
+        maxItems: 1
+        description: enable bit 0 to register sensor as i2c slave
+                     in local host (normal i2c client)
+                     enable bit 1 to mimic sensor as i2c slave
+                     in local host (onchip sensors as i2c slave)
+                     enable bit 2 to register i2c slave as xlink smbus slave
+                     in local host.
+      remote-host:
+        minItems: 1
+        maxItems: 1
+        description: enable bit 0 to register sensor as i2c slave
+                     in remote host (normal i2c client)
+                     enable bit 1 to mimic sensor as i2c slave
+                     in remote host (onchip sensors as i2c slave)
+                     enable bit 2 to register i2c slave as xlink smbus slave
+                     in remote host.
+
+      bus:
+        minItems: 1
+        maxItems: 1
+        description: i2c bus number for the i2c client device.
+
+    required:
+      - compatible
+      - reg
+      - local-host
+      - remote-host
+      - bus
+
+required:
+  - compatible
+  - reg
+  - xlink_chan
+  - i2c_xlink_chan
+
+additionalProperties: false
+
+examples:
+  - |
+    hddl_dev: hddl@20320000 {
+       compatible = "intel,hddl-client";
+       #address-cells = <2>;
+       #size-cells = <2>;
+       status = "disabled";
+       reg = <0x0 0x20320000 0x0 0x800>;
+       xlink_chan = <1080>;
+       i2c_xlink_chan = <1081>;
+       kmb_xlink_tj {
+         status = "okay";
+         compatible = "intel_tsens";
+         local-host = <0x3>;
+         remote-host = <0x3>;
+         bus = <0x1>;
+      };
+    };
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 33/34] misc: Hddl device management for local host
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (31 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 32/34] dt-bindings: misc: hddl_dev: Add hddl device management documentation mgross
@ 2021-01-08 21:25 ` mgross
  2021-01-08 21:26 ` [PATCH v2 34/34] misc: HDDL device management for IA host mgross
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:25 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add local host hddl device management for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as PCIe
endpoint devices.

High-level architecture:
------------------------

Remote Host IA CPU                          Local Host ARM CPU
-------------------------------         ----------------------------
| * Send time as xlink packet |         |* Sync time with IA host  |
| * receive sensor details    |         |* Prepare and share sensor|
|   and register as i2c or    |         |  details to IA host as   |
|   xlink smbus slaves        |         |  xlink packets           |
-------------------------------         ----------------------------
|       hddl server           | <=====> |     hddl client          |
-------------------------------  xlink  ----------------------------

hddl device module:
-------------------
The HDDL client driver acts as a software RTC that syncs with network
time. It abstracts the xlink protocol to communicate with the remote
host and exports the details of the sensors available on the platform
to the remote host as xlink packets.
It also handles device connect/disconnect events and identifies the
board id and SoC id using GPIOs, based on the platform configuration.

- Local Host driver
  * Intended for ARM CPU
  * It is based on xlink Framework
  * Driver path:
  {tree}/drivers/misc/hddl_device/hddl_device_client.c

The local ARM host and remote IA host drivers communicate using the
XLink protocol.
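
As a rough sketch of the connect sequence described above: the connect task
opens an xlink channel and then exchanges time-sync data, the board id and
the sensor details with the remote host. The channel number, buffer size,
timeout and message framing below are assumptions made for illustration,
not values taken from the driver.

	/* Hedged sketch of the connect-task flow; details are illustrative. */
	#include <linux/errno.h>
	#include <linux/types.h>
	#include <linux/xlink.h>

	static int example_hddl_connect(struct xlink_handle *xlink, u16 chan,
					void *board_info, u32 len)
	{
		enum xlink_error rc;

		rc = xlink_open_channel(xlink, chan, RXB_TXB, 64 * 1024, 1000);
		if (rc != X_LINK_SUCCESS)
			return -EIO;

		/* 1. receive time-sync data from the IA host (omitted here) */
		/* 2. share board id and sensor details as xlink packets */
		rc = xlink_write_data(xlink, chan, (u8 *)board_info, len);
		if (rc != X_LINK_SUCCESS) {
			xlink_close_channel(xlink, chan);
			return -EIO;
		}

		/* 3. register xlink i2c adapters, register i2c clients and
		 *    poll for device status (see the sequence diagrams in the
		 *    documentation added below)
		 */
		return 0;
	}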

Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 .../misc-devices/hddl_device_client.rst       | 212 +++++
 Documentation/misc-devices/index.rst          |   1 +
 Documentation/vpu/index.rst                   |   1 +
 MAINTAINERS                                   |   1 +
 drivers/misc/Kconfig                          |   1 +
 drivers/misc/Makefile                         |   1 +
 drivers/misc/hddl_device/Kconfig              |  14 +
 drivers/misc/hddl_device/Makefile             |   5 +
 drivers/misc/hddl_device/hddl_device.c        | 565 +++++++++++++
 drivers/misc/hddl_device/hddl_device_lh.c     | 764 ++++++++++++++++++
 drivers/misc/hddl_device/hddl_device_util.h   |  52 ++
 11 files changed, 1617 insertions(+)
 create mode 100644 Documentation/misc-devices/hddl_device_client.rst
 create mode 100644 drivers/misc/hddl_device/Kconfig
 create mode 100644 drivers/misc/hddl_device/Makefile
 create mode 100644 drivers/misc/hddl_device/hddl_device.c
 create mode 100644 drivers/misc/hddl_device/hddl_device_lh.c
 create mode 100644 drivers/misc/hddl_device/hddl_device_util.h

diff --git a/Documentation/misc-devices/hddl_device_client.rst b/Documentation/misc-devices/hddl_device_client.rst
new file mode 100644
index 000000000000..413643b6b500
--- /dev/null
+++ b/Documentation/misc-devices/hddl_device_client.rst
@@ -0,0 +1,212 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================================
+Kernel driver: hddl_device_client
+=================================
+
+Supported chips:
+  * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+Authors:
+    - Thalaiappan, Rathina <rathina.thalaiappan@intel.com>
+    - Udhayakumar C <udhayakumar.c@intel.com>
+
+
+Overview
+========
+
+This driver supports hddl device management for Intel Edge.AI Computer Vision
+platforms.
+
+This driver supports the following features:
+
+  - Exports details of the temperature sensor, current sensor and fan
+    controller present in Intel Edge.AI Computer Vision platforms to the
+    IA host.
+  - Enables time sync of the Intel Edge.AI Computer Vision platform with
+    the IA host.
+  - Handles device connect and disconnect events.
+  - Receives slave addresses from the IA host for the memory mapped thermal
+    sensors present in the SoC (Documentation/hwmon/intel_tsens_sensors.rst).
+  - Registers i2c slave devices for the slaves present in the Intel Edge.AI
+    Computer Vision platform.
+
+The Keem Bay platform has the following sensors.
+
+On-chip sensors:
+
+  - Media Subsystem (mss) temperature sensor
+  - NN subsystem (nce) temperature sensor
+  - Compute subsystem (cse) temperature sensor
+  - SoC (maximum of mss, nce and cse).
+
+On-board sensors:
+
+  - two lm75 temperature sensors
+  - emc2103 fan controller
+  - ina3221 current sensor
+
+High-level architecture
+=======================
+::
+
+        Remote Host IA CPU                          Local Host ARM CPU
+        -------------------------------         ----------------------------
+        | * Send time as xlink packet |         |* Sync time with IA host  |
+        | * receive sensor details    |         |* Prepare and share sensor|
+        |   and register as i2c or    |         |  details to IA host as   |
+        |   xlink smbus slaves        |         |  xlink packets           |
+        -------------------------------         ----------------------------
+        |       hddl server           | <=====> |     hddl client          |
+        -------------------------------  xlink  ----------------------------
+
+Driver Structure
+================
+
+The driver provides a platform device where the ``probe`` and ``remove``
+operations are provided.
+
+  - probe: gets the list of external sensors from device-tree entries,
+    identifies the board id and SoC id based on the device-tree
+    configuration, and spawns a kernel thread to monitor for new PCIe
+    devices.
+
+  - init task: polls for new PCIe devices at an interval of 5 seconds and
+    creates a connect task to set up each new device.
+
+  - connect task: the main entity that connects to the hddl device server
+    using xlink and performs the basic initialisation and handshaking.
+    It also monitors hddl device server link down/link up events and
+    reinitialises the drivers accordingly on the client side.
+
+  - remove: unregisters the i2c client devices and i2c adapters and closes
+    the xlink channel.
+
+HDDL Client Sequence – Basic Setup and handshaking with HDDL Device Server
+==========================================================================
+::
+
+        ,-----.                ,---------.          ,------------.           ,------------------.
+        |probe|                |Init task|          |connect task|           |hddl device server|
+        `--+--'                `----+----'          `-----+------'           `--------+---------'
+           ----.                    |                     |                           |
+               | "Parse DT"         |                     |                           |
+           <---'                    |                     |                           |
+           |                        |                     |                           |
+           | ,------------------!.  |                     |                           |
+           | |Get sensor details|_\ |                     |                           |
+           | |from device tree    | |                     |                           |
+           | `--------------------' |                     |                           |
+           ----.                    |                     |                           |
+               | "Identify Board Id"|                     |                           |
+           <---'                    |                     |                           |
+           |                        |                     |                           |
+           |   "Creates kthread"    |                     |                           |
+           |----------------------->|                     |                           |
+           |                        |                     |                           |
+           | ,-----------------------!.                   |                           |
+           | |creates kernel thread  |_\                  |                           |
+           | |to check for new device  |                  |                           |
+           | `-------------------------'                  |                           |
+          ,---------------------!.  ----.                 |                           |
+          |check for new device |_\     |                 |                           |
+          |with time interval of  | <---'                 |                           |
+          |5 seconds              | |                     |                           |
+          `-----------------------' |                     |                           |
+          ,---------------------!.  |                     |                           |
+          |if new device found?.|_\ |                     |                           |
+          |creates connect task   | |-------------------->|                           |
+          |to setup new device    | |                     |                           |
+          `-----------------------' |                     |                           |
+           |                       ,-------------------!. |----.                      |
+           |                       |setup xlink channel|_\|    |                      |
+           |                       |to communicate with  ||<---'                      |
+           |                       |server               ||                           |
+           |                       `---------------------'|                           |
+           |                        |                     |       Get time data       |
+           |                        |                     |       from server         |
+           |                        |                     | <--------------------------
+           |                        |                     |                           |
+           |                        |                     |       share board id      |
+           |                        |                     | -------------------------->
+           |                        |                     |                           |
+           |                        |                     |  share total number of    |
+           |                        |                     |  sensors available in SoC |
+           |                        |                     | -------------------------->
+           |                        |                     |                           |
+           |                   ,-----------------------!. |                           |
+           |                   |For each sensors share |_\|                           |
+           |                   |sensor type, name, trip  || -------------------------->
+           |                   |temp, trip type          ||                           |
+           |                   `-------------------------'|                           |
+           |                        |                     |  Receives Send complete.  |
+           |                        |                     | <--------------------------
+           |                        |                     |                           |
+           |                        |                     |----.                      |
+           |                        |                     |    | Register xlink i2c   |
+           |                        |                     |<---' adapters.            |
+           |                        |                     |                           |
+           |                        |                     |                           |
+           |                        |                     |  Receives slave addr for  |
+           |                        |                     |   each slave in SoC       |
+           |                        |                     | <--------------------------
+           |                        |                     |                           |
+           |                        |                     |----.                      |
+           |                        |                     |    | Register i2c clients.|
+           |                        |                     |<---'                      |
+           |                        |                     |                           |
+           |                        |                     |----.
+           |                        |                     |    | poll for device status
+           |                        |                     |<---'
+        ,--+--.                ,----+----.          ,-----+------.           ,--------+---------.
+        |probe|                |Init task|          |connect task|           |hddl device server|
+        `-----'                `---------'          `------------'           `------------------'
+
+
+XLINK i2c sequence:
+===================
+::
+
+        ,-----------------.          ,--------.          ,-----.          ,---------.
+        |xlink-i2c-adapter|          |I2C core|          |xlink|          |i2c-slave|
+        `--------+--------'          `---+----'          `--+--'          `----+----'
+                 |                       |                  |                  |
+                 |---------------------->|                  |                  |
+                 |                       |                  |                  |
+                 | ,--------------------------!.            |                  |
+                 | |Initialize xlink based i2c|_\           |                  |
+                 | |adapters.                   |           |                  |
+                 | `----------------------------'           |                  |
+                 |                       |                  |                  |
+                 |<-----------------------------------------|                  |
+                 |                       |                  |                  |
+                 | ,--------------------------------!.      |                  |
+                 | |I2C request is received as xlink|_\     |                  |
+                 | |packet from IA host               |     |                  |
+                 | `----------------------------------'     |                  |
+                 |                       |                  |                  |
+                 |---------------------->|                  |                  |
+                 |                       |                  |                  |
+                 |                       |  ,---------------------------------!.
+                 |                       |  |xlink I2C request is converted to|_\
+                 |                       |  |standard i2c request               |
+                 |                       |  `-----------------------------------'
+                 |                       |                  |                  |
+                 |                       | ----------------------------------->|
+                 |                       |                  |                  |
+                 |                       |  ,----------------------!.          |
+                 |                       |  |Linux i2c slave device|_\         |
+                 |                       |  |standard request        |         |
+                 |                       |  `------------------------'         |
+                 |                       |                  |                  |
+                 |                       | <-----------------------------------|
+                 |                       |                  |                  |
+                 |                       |  ,----------------------!.          |
+                 |                       |  |Linux i2c slave device|_\         |
+                 |                       |  |standard response       |         |
+                 |                       |  `------------------------'         |
+                 |     I2C response      |                  |                  |
+                 |<----------------------|                  |                  |
+                 |                       |                  |                  |
+                 |                       |                  | ,-------------------------!.
+                 |----------------------------------------->| |I2C response is converted|_\
+                 |                       |                  | |to xlink packet            |
+        ,--------+--------.          ,---+----.          ,--+-`---------------------------'
+        |xlink-i2c-adapter|          |I2C core|          |xlink|          |i2c-slave|
+        `-----------------'          `--------'          `-----'          `---------'
diff --git a/Documentation/misc-devices/index.rst b/Documentation/misc-devices/index.rst
index 64420b3314fe..102f7f9dea87 100644
--- a/Documentation/misc-devices/index.rst
+++ b/Documentation/misc-devices/index.rst
@@ -19,6 +19,7 @@ fit into other categories.
    bh1770glc
    eeprom
    c2port
+   hddl_device_client
    ibmvmc
    ics932s401
    isl29003
diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index cd4272e089ec..b50b1376b591 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -17,3 +17,4 @@ This documentation contains information for the Intel VPU stack.
    xlink-pcie
    xlink-ipc
    xlink-core
+   hddl_device_client
diff --git a/MAINTAINERS b/MAINTAINERS
index 0dfbe892d852..e35fa595c2d5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1989,6 +1989,7 @@ F:	drivers/misc/intel_tsens/
 ARM/INTEL TSENS SUPPORT
 M:	Udhayakumar C <udhayakumar.c@intel.com>
 S:	Supported
+F:	drivers/misc/hddl_device/
 F:	drivers/misc/intel_tsens/
 
 ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index f6229dd8ba9e..f9f3d4d89e7b 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -487,4 +487,5 @@ source "drivers/misc/xlink-core/Kconfig"
 source "drivers/misc/vpumgr/Kconfig"
 source "drivers/misc/intel_tsens/Kconfig"
 source "drivers/misc/xlink-smbus/Kconfig"
+source "drivers/misc/hddl_device/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 0ed8a62cbb20..350618dbe678 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -63,3 +63,4 @@ obj-$(CONFIG_XLINK_CORE)	+= xlink-core/
 obj-$(CONFIG_VPUMGR)		+= vpumgr/
 obj-y                           += intel_tsens/
 obj-$(CONFIG_XLINK_SMBUS)	+= xlink-smbus/
+obj-y				+= hddl_device/
diff --git a/drivers/misc/hddl_device/Kconfig b/drivers/misc/hddl_device/Kconfig
new file mode 100644
index 000000000000..e1ae81fdf177
--- /dev/null
+++ b/drivers/misc/hddl_device/Kconfig
@@ -0,0 +1,14 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+config HDDL_DEVICE_CLIENT
+	tristate "Support for hddl device client"
+	depends on XLINK_CORE && INTEL_TSENS_LOCAL_HOST
+	help
+	  This option enables the HDDL device client module.
+
+	  This driver is used for sharing time sync data with the local host
+	  and for retrieving the sensors available on the platform. It also
+	  handles the device connect/disconnect programming sequence.
+	  Say Y if using a processor that includes the Intel VPU, such as
+	  Keem Bay.  If unsure, say N.
diff --git a/drivers/misc/hddl_device/Makefile b/drivers/misc/hddl_device/Makefile
new file mode 100644
index 000000000000..dca381660baa
--- /dev/null
+++ b/drivers/misc/hddl_device/Makefile
@@ -0,0 +1,5 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+obj-$(CONFIG_HDDL_DEVICE_CLIENT)	+= hddl_device_client.o
+hddl_device_client-objs			+= hddl_device_lh.o hddl_device.o
diff --git a/drivers/misc/hddl_device/hddl_device.c b/drivers/misc/hddl_device/hddl_device.c
new file mode 100644
index 000000000000..89e22adc3a03
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device.c
@@ -0,0 +1,565 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * High Density Deep Learning HELPER module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/gpio/consumer.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/ioctl.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+#include <uapi/linux/stat.h>
+#include "hddl_device_util.h"
+
+#define HDDL_XLINK_OPEN_TIMEOUT		1000
+#define HDDL_I2C_CLIENT_INIT_TIME		1000
+
+enum hddl_device_event_type {
+	HDDL_NOTIFY_DEVICE_DISCONNECTED,
+	HDDL_NOTIFY_DEVICE_CONNECTED,
+	HDDL_NUM_EVENT_TYPE,
+};
+
+/*
+ * Register callback for device
+ * connect and disconnect events.
+ */
+static u32 hddl_device_events[] = {
+	HDDL_NOTIFY_DEVICE_DISCONNECTED,
+	HDDL_NOTIFY_DEVICE_CONNECTED,
+	HDDL_NUM_EVENT_TYPE
+};
+
+/**
+ * intel_hddl_new_device - check for new hddl device
+ * @dev: The hddl device.
+ * @hddl_clients: list of existing client devices.
+ * @sw_device_id_list: list of software device id's to check for new device.
+ * @num_devices: number of id's present in sw_device_id_list.
+ * @n_devs: number of existing client devices in hddl_clients list.
+ *
+ * Returns the list of client devices, built by comparing the given device
+ * ids with the ids of the existing client devices. If a software id does
+ * not match any existing client device, a new list is allocated to
+ * accommodate the new client device, the existing client devices are copied
+ * into it, and the old list is freed.
+ * If no new device id is found in sw_device_id_list, the existing list of
+ * client devices is returned.
+ */
+static struct intel_hddl_clients **
+		intel_hddl_new_device(struct device *dev,
+				      struct intel_hddl_clients **hddl_clients,
+				      u32 *sw_device_id_list,
+				      u32 num_devices,
+				      u32 *n_devs)
+{
+	struct intel_hddl_clients **cls;
+	bool match_found, new_dev = false;
+	int i, j;
+
+	/*
+	 * Check whether there is any new device by comparing ids with the
+	 * existing list.
+	 */
+	for (i = 0; i < num_devices; i++) {
+		match_found = false;
+		for (j = 0; j < (*n_devs); j++) {
+			if (sw_device_id_list[i] ==
+				hddl_clients[j]->xlink_dev.sw_device_id) {
+				match_found = true;
+				break;
+			}
+		}
+		if (!match_found) {
+			new_dev = true;
+			break;
+		}
+	}
+	if (!new_dev)
+		return hddl_clients;
+	/*
+	 * Allocate memory for new list
+	 */
+	cls = kcalloc(num_devices,
+		      sizeof(struct intel_hddl_clients *),
+		      GFP_KERNEL);
+	if (!cls)
+		return hddl_clients;
+	/*
+	 * copy hddl client devices to new list.
+	 */
+	for (i = 0; i < num_devices; i++) {
+		for (j = 0; j < *n_devs; j++) {
+			if (sw_device_id_list[i] ==
+				hddl_clients[j]->xlink_dev.sw_device_id) {
+				cls[i] = hddl_clients[j];
+				break;
+			}
+		}
+	}
+	/*
+	 * update number of devices to include new device and free up existing
+	 * list.
+	 */
+	*n_devs = num_devices;
+	kfree(hddl_clients);
+	return cls;
+}
+
+/**
+ * intel_hddl_setup_device - Initialize new client device
+ * @dev: The hddl device.
+ * @task: Thread function to setup and connect to host/device.
+ * @n_devs: number of client devices in hddl_clients list.
+ * @hddl_clients: list of existing client devices.
+ * @pdata: platform data.
+ *
+ * Returns the list of client devices. It also initializes each new device
+ * and creates a kernel thread to initiate communication over xlink.
+ */
+
+struct intel_hddl_clients **
+	intel_hddl_setup_device(struct device *dev,
+				intel_hddl_connect_task task, u32 *n_devs,
+				struct intel_hddl_clients **hddl_clients,
+				void *pdata)
+{
+	u32 sw_device_id_list[XLINK_MAX_DEVICE_LIST_SIZE];
+	char device_name[XLINK_MAX_DEVICE_NAME_SIZE];
+	struct intel_hddl_clients **cls;
+	u32 num_devices = 0;
+	u32 i = 0;
+
+	xlink_get_device_list(sw_device_id_list, &num_devices);
+	if (num_devices == 0) {
+		dev_err(dev, "HDDL:No devices found\n");
+		return NULL;
+	}
+
+	/*
+	 * If a list is available, add the new device to the existing client
+	 * devices list; otherwise create a new list.
+	 */
+	if (hddl_clients) {
+		cls = intel_hddl_new_device(dev,
+					    hddl_clients,
+					    sw_device_id_list,
+					    num_devices, n_devs);
+		if (!cls)
+			return NULL;
+	} else {
+		cls = devm_kcalloc(dev, num_devices,
+				   sizeof(struct intel_hddl_clients *),
+				   GFP_KERNEL);
+		if (!cls)
+			return NULL;
+		/*
+		 * update number of devices in client devices list
+		 */
+		*n_devs = num_devices;
+	}
+	hddl_clients = cls;
+	for (i = 0; i < num_devices; i++) {
+		struct intel_hddl_clients *c = hddl_clients[i];
+		int rc;
+
+		/*
+		 * Initialize new client device.
+		 */
+		if (c)
+			continue;
+		c = devm_kzalloc(dev,
+				 sizeof(struct intel_hddl_clients),
+				 GFP_KERNEL);
+		if (!c)
+			return hddl_clients;
+		c->pdata = pdata;
+		c->xlink_dev.dev_type = HOST_DEVICE;
+		c->xlink_dev.sw_device_id = sw_device_id_list[i];
+		rc = xlink_get_device_name((&c->xlink_dev),
+					   device_name,
+					   XLINK_MAX_DEVICE_NAME_SIZE);
+		if (rc > 0) {
+			dev_err(dev,
+				"HDDL:Failed to get device name [EC%d] %x\n",
+				rc, c->xlink_dev.sw_device_id);
+			return hddl_clients;
+		}
+		dev_info(dev, "HDDL:Device name: %x %s\n",
+			 c->xlink_dev.sw_device_id, device_name);
+		if (GET_INTERFACE_FROM_SW_DEVICE_ID(sw_device_id_list[i]) ==
+		    SW_DEVICE_ID_PCIE_INTERFACE) {
+			/*
+			 * Start kernel thread to initialize
+			 * xlink communication.
+			 */
+			c->hddl_dev_connect_task = kthread_run(task,
+							       (void *)c,
+							       device_name);
+			if (!c->hddl_dev_connect_task) {
+				dev_err(dev, "failed to create thread\n");
+				return hddl_clients;
+			}
+			c->task = (void *)task;
+		}
+		hddl_clients[i] = c;
+	}
+
+	return hddl_clients;
+}
+
+int intel_hddl_xlink_remove_i2c_adap(struct device *dev,
+				     struct intel_hddl_clients *c)
+{
+	int i;
+
+	for (i = 0; i < HDDL_XLINK_I2C_END; i++) {
+		if (c->xlink_i2c_plt_dev[i]) {
+			dev_info(dev,
+				 "HDDL : platform_device_unregister = %d\n",
+				 i);
+			platform_device_unregister(c->xlink_i2c_plt_dev[i]);
+			c->xlink_i2c_plt_dev[i] = NULL;
+		}
+	}
+	return 0;
+}
+
+static void hddl_register_remote_smbus_client(struct device *dev,
+					      struct intel_hddl_clients *c,
+					      struct intel_hddl_i2c_devs *i2c)
+{
+	struct platform_device *xlink_pdev =
+			c->xlink_i2c_plt_dev[HDDL_XLINK_I2C_MASTER];
+	struct i2c_adapter *adap;
+
+	if (!xlink_pdev)
+		return;
+	adap = (struct i2c_adapter *)platform_get_drvdata(xlink_pdev);
+	if (!adap)
+		return;
+	c->adap[HDDL_XLINK_I2C_MASTER] = adap;
+	i2c->smbus_client =
+		i2c_new_client_device(adap,
+				      &i2c->board_info);
+	msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+}
+
+static void hddl_register_remote_i2c_client(struct device *dev,
+					    struct intel_hddl_clients *c,
+					    struct intel_hddl_i2c_devs *i2c)
+{
+	if (c->smbus_adap) {
+		i2c->board_info.platform_data = c;
+		i2c->i2c_client = i2c_new_client_device(c->smbus_adap,
+							&i2c->board_info);
+		msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+	}
+}
+
+static void hddl_register_remote_xlink_client(struct device *dev,
+					      struct intel_hddl_clients *c,
+					      struct intel_hddl_i2c_devs *i2c)
+{
+	struct platform_device *xlink_pdev =
+			c->xlink_i2c_plt_dev[HDDL_XLINK_I2C_SLAVE];
+	struct i2c_adapter *adap;
+
+	if (!xlink_pdev)
+		return;
+	adap = (struct i2c_adapter *)platform_get_drvdata(xlink_pdev);
+	if (!adap)
+		return;
+	c->adap[HDDL_XLINK_I2C_SLAVE] = adap;
+	i2c->board_info.platform_data = c;
+	i2c->xlk_client = i2c_new_client_device(adap, &i2c->board_info);
+	msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+}
+
+static void intel_hddl_add_remote_clients(struct device *dev,
+					  struct intel_hddl_clients *c,
+					  struct intel_hddl_i2c_devs *i2c_devs)
+{
+	if (!i2c_devs->enabled)
+		return;
+	/*
+	 * Register this device as xlink i2c client.
+	 */
+	if (i2c_devs->remote_host & HDDL_XLINK_CLIENT)
+		hddl_register_remote_xlink_client(dev, c, i2c_devs);
+	/*
+	 * Register this device as an i2c smbus client.
+	 */
+	if (i2c_devs->remote_host & HDDL_I2C_CLIENT)
+		hddl_register_remote_i2c_client(dev, c, i2c_devs);
+	/*
+	 * Register this device as xlink smbus i2c client.
+	 * Based on the remote_host bit mask, the same device may be
+	 * registered as both an xlink i2c client and an smbus i2c client.
+	 */
+	if (i2c_devs->remote_host & HDDL_XLINK_SMBUS_CLIENT)
+		hddl_register_remote_smbus_client(dev, c, i2c_devs);
+}
+
+static void intel_hddl_add_localhost_clients(struct device *dev,
+					     struct intel_hddl_clients *c,
+					     struct intel_hddl_i2c_devs *i2c)
+{
+	if (!i2c->enabled)
+		return;
+	/*
+	 * Register this device as xlink i2c client.
+	 */
+	if (i2c->local_host & HDDL_XLINK_CLIENT) {
+		struct platform_device *xlink_pdev =
+			c->xlink_i2c_plt_dev[HDDL_XLINK_I2C_SLAVE];
+		struct i2c_adapter *adap;
+
+		if (!xlink_pdev)
+			return;
+		adap = (struct i2c_adapter *)platform_get_drvdata(xlink_pdev);
+		if (!adap)
+			return;
+		c->adap[HDDL_XLINK_I2C_SLAVE] = adap;
+		i2c->xlk_client =
+			i2c_new_client_device(adap,
+					      &i2c->board_info);
+		msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+	}
+	/*
+	 * Register this device as an smbus i2c client.
+	 * Based on the local_host bit mask, the same device may be
+	 * registered as both an xlink i2c client and an smbus i2c client.
+	 */
+	if (i2c->local_host & HDDL_I2C_CLIENT) {
+		i2c->i2c_client =
+			i2c_new_client_device(i2c_get_adapter(i2c->bus),
+					      &i2c->board_info);
+		msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+	}
+}
+
+void intel_hddl_add_xlink_i2c_clients(struct device *dev,
+				      struct intel_hddl_clients *c,
+				      struct intel_hddl_i2c_devs **i2c_devs,
+				      int n_clients, int remote)
+{
+	int i;
+
+	for (i = 0; i < n_clients; i++) {
+		if (remote)
+			intel_hddl_add_remote_clients(dev, c, i2c_devs[i]);
+		else
+			intel_hddl_add_localhost_clients(dev, c, i2c_devs[i]);
+	}
+}
+
+int intel_hddl_register_xlink_i2c_adap(struct device *dev,
+				       struct intel_hddl_clients *c)
+{
+	int i;
+
+	for (i = 0; i < HDDL_XLINK_I2C_END; i++) {
+		struct platform_device_info xlink_i2c_info;
+		int soc_id = c->board_info.soc_id;
+
+		memset(&xlink_i2c_info, 0, sizeof(xlink_i2c_info));
+		xlink_i2c_info.name = "i2c_xlink";
+		xlink_i2c_info.id = c->board_info.board_id << 4 |
+					soc_id << 2 | i;
+		c->xlink_i2c_ch[i] =
+			c->i2c_chan_num + (soc_id * 2) + i;
+		xlink_i2c_info.data = c;
+		xlink_i2c_info.size_data =
+			sizeof(struct intel_hddl_clients);
+		c->xlink_i2c_plt_dev[i] =
+			platform_device_register_full(&xlink_i2c_info);
+		if (IS_ERR(c->xlink_i2c_plt_dev[i])) {
+			dev_err(dev, "platform device register failed\n");
+			c->xlink_i2c_plt_dev[i] = NULL;
+			return -EFAULT;
+		}
+	}
+	return 0;
+}
+
+static int intel_hddl_device_probe(struct intel_hddl_clients *d)
+{
+	char device_name[XLINK_MAX_DEVICE_NAME_SIZE];
+	int rc;
+
+	if (d->status == HDDL_DEV_STATUS_CONNECTED)
+		return 0;
+	rc = xlink_get_device_name(&d->xlink_dev,
+				   device_name, XLINK_MAX_DEVICE_NAME_SIZE);
+	if (rc > 0) {
+		dev_err(&d->pdev->dev,
+			"HDDL:Failed to get device name of id [EC%d] %x\n",
+			rc, d->xlink_dev.sw_device_id);
+		return -ENODEV;
+	}
+
+	d->hddl_dev_connect_task =
+		kthread_run((intel_hddl_connect_task)d->task,
+			    (void *)d,
+			    device_name);
+	if (IS_ERR(d->hddl_dev_connect_task)) {
+		dev_err(&d->pdev->dev, "failed to create thread\n");
+		return -EFAULT;
+	}
+	d->status = HDDL_DEV_STATUS_CONNECTED;
+
+	return 0;
+}
+
+void intel_hddl_device_remove(struct intel_hddl_clients *d)
+{
+	int i;
+
+	/* Lock device removal.
+	 * The xlink core gives multiple device-disconnected notifications,
+	 * so take the lock to disconnect the device and update the status
+	 * as disconnected. Subsequent notifications will check the status
+	 * and return if the device is already disconnected.
+	 */
+	mutex_lock(&d->lock);
+	if (d->status == HDDL_DEV_STATUS_DISCONNECTED) {
+		mutex_unlock(&d->lock);
+		return;
+	}
+
+	for (i = 0; i < d->n_clients; i++)
+		intel_hddl_free_i2c_client(d, d->i2c_devs[i]);
+	intel_hddl_unregister_pdev(d);
+	xlink_close_channel(&d->xlink_dev, d->chan_num);
+	xlink_disconnect(&d->xlink_dev);
+	kthread_stop(d->hddl_dev_connect_task);
+	d->status = HDDL_DEV_STATUS_DISCONNECTED;
+	mutex_unlock(&d->lock);
+}
+
+static int intel_hddl_device_event_notify(u32 sw_device_id,
+					  u32 event_type)
+{
+	struct intel_hddl_clients **clients;
+	struct intel_hddl_clients *client = NULL;
+	int i, ret = 0;
+	int ndevs = 0;
+
+	clients = intel_hddl_get_clients(&ndevs);
+	if (!clients)
+		return 0;
+	for (i = 0; i < ndevs; i++) {
+		if (clients[i]->xlink_dev.sw_device_id == sw_device_id) {
+			client = clients[i];
+			break;
+		}
+	}
+	if (!client)
+		return -EINVAL;
+	switch (event_type) {
+	case HDDL_NOTIFY_DEVICE_DISCONNECTED:
+		intel_hddl_device_remove(client);
+		break;
+
+	case HDDL_NOTIFY_DEVICE_CONNECTED:
+		ret = intel_hddl_device_probe(client);
+		break;
+
+	default:
+		dev_err(&client->pdev->dev,
+			"HDDL:xlink pcie notify - Error[%x]: [%d]\n",
+			sw_device_id, event_type);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+void intel_hddl_close_xlink_device(struct device *dev,
+				   struct intel_hddl_clients *d)
+{
+	xlink_close_channel(&d->xlink_dev, d->chan_num);
+	xlink_disconnect(&d->xlink_dev);
+	kthread_stop(d->hddl_dev_connect_task);
+	d->status = HDDL_DEV_STATUS_DISCONNECTED;
+}
+
+int intel_hddl_open_xlink_device(struct device *dev,
+				 struct intel_hddl_clients *d)
+{
+	char device_name[XLINK_MAX_DEVICE_NAME_SIZE];
+	u32 device_status = 0xFF;
+	int rc;
+
+	rc = xlink_get_device_name(&d->xlink_dev,
+				   device_name, XLINK_MAX_DEVICE_NAME_SIZE);
+	if (rc > 0) {
+		dev_err(dev,
+			"HDDL:Failed to get device name of id [EC%d] %x\n",
+			rc, d->xlink_dev.sw_device_id);
+		return -ENODEV;
+	}
+	if (xlink_boot_device(&d->xlink_dev, device_name) !=
+	       X_LINK_SUCCESS) {
+		dev_err(dev, "xlink_boot_device failed\n");
+		return -ENODEV;
+	}
+	if (xlink_get_device_status(&d->xlink_dev, &device_status) !=
+	       X_LINK_SUCCESS) {
+		dev_err(dev, "xlink_get_device_status failed\n");
+		return -ENODEV;
+	}
+	if (xlink_connect(&d->xlink_dev) != X_LINK_SUCCESS) {
+		dev_err(dev, "xlink_connect failed\n");
+		return -ENODEV;
+	}
+	mutex_init(&d->lock);
+	xlink_register_device_event(&d->xlink_dev,
+				    hddl_device_events,
+				    HDDL_NUM_EVENT_TYPE,
+				    intel_hddl_device_event_notify);
+
+	d->status = HDDL_DEV_STATUS_CONNECTED;
+	/*
+	 * Try opening the xlink channel; this will fail until the host/client
+	 * initializes the channel. intel_hddl_open_xlink_device() is invoked
+	 * from a kernel thread, so it is safe to retry indefinitely.
+	 */
+	while (xlink_open_channel(&d->xlink_dev,
+				  d->chan_num, RXB_TXB,
+				  64 * 1024, 0 /* timeout */) !=
+				  X_LINK_SUCCESS) {
+		if (kthread_should_stop()) {
+			xlink_disconnect(&d->xlink_dev);
+			return -ENODEV;
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/misc/hddl_device/hddl_device_lh.c b/drivers/misc/hddl_device/hddl_device_lh.c
new file mode 100644
index 000000000000..e44c480379bc
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device_lh.c
@@ -0,0 +1,764 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * High Density Deep Learning Kernel module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/gpio/consumer.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/ioctl.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+#include <uapi/linux/stat.h>
+#include "hddl_device_util.h"
+
+#define DRIVER_NAME "hddl_device_client"
+
+struct intel_tsens {
+	struct intel_tsens_data data;
+	struct intel_tsens_trip_info **trip_info;
+};
+
+struct intel_hddl_client_priv {
+	void __iomem *base_addr;
+	int board_id;
+	int soc_id;
+	int n_clients;
+	u32 nsens;
+	u32 xlink_chan;
+	u32 i2c_xlink_chan;
+	u32 n_hddl_devs;
+	struct platform_device *pdev;
+	struct intel_hddl_clients **hddl_client;
+	struct task_struct *hddl_dev_init_task;
+	struct intel_hddl_i2c_devs **i2c_devs;
+	struct intel_tsens **tsens;
+	struct intel_hddl_board_info board_info;
+};
+
+static struct intel_hddl_client_priv *g_priv;
+
+static inline int intel_hddl_get_xlink_data(struct device *dev,
+					    struct xlink_handle *xlink,
+					    int chan_num, u8 *msg,
+					    int *size)
+{
+	int rc;
+
+	rc = xlink_read_data_to_buffer(xlink, chan_num,
+				       msg, size);
+	if (rc) {
+		dev_err(dev,
+			"HDDL: xlink read data failed rc = %d\n",
+			rc);
+		return -EFAULT;
+	}
+	rc = xlink_release_data(xlink, chan_num, NULL);
+	if (rc) {
+		dev_err(dev,
+			"HDDL: xlink release failed rc = %d\n",
+			rc);
+		return -EFAULT;
+	}
+	return rc;
+}
+
+void intel_hddl_free_i2c_client(struct intel_hddl_clients *d,
+				struct intel_hddl_i2c_devs *i2c_dev)
+{
+	if (i2c_dev->xlk_client)
+		i2c_unregister_device(i2c_dev->xlk_client);
+	if (i2c_dev->i2c_client)
+		i2c_unregister_device(i2c_dev->i2c_client);
+	if (i2c_dev->smbus_client)
+		i2c_unregister_device(i2c_dev->smbus_client);
+	i2c_dev->xlk_client = NULL;
+	i2c_dev->i2c_client = NULL;
+	i2c_dev->smbus_client = NULL;
+}
+
+struct intel_hddl_clients **intel_hddl_get_clients(int *n_devs)
+{
+	if (!g_priv || !n_devs)
+		return NULL;
+	*n_devs = g_priv->n_hddl_devs;
+	return g_priv->hddl_client;
+}
+
+void intel_hddl_unregister_pdev(struct intel_hddl_clients *c)
+{
+	struct intel_hddl_client_priv *priv = c->pdata;
+
+	intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+}
+
+static u8 *intel_tsens_thermal_msg(struct intel_hddl_clients *c,
+				   struct intel_hddl_tsens_msg *msg,
+					u32 *size)
+{
+	struct intel_hddl_client_priv *priv = c->pdata;
+	struct intel_tsens **tsens = priv->tsens;
+
+	switch (msg->msg_type) {
+	case HDDL_GET_NSENS:
+	{
+		u32 *data;
+		*size = sizeof(int);
+		data = kzalloc(*size, GFP_KERNEL);
+		if (!data)
+			return NULL;
+		*data = priv->nsens;
+		return (u8 *)data;
+	}
+	case HDDL_GET_SENS_DETAILS:
+	{
+		struct intel_tsens_data *data;
+		u32 sensor_type = msg->sensor_type;
+		struct intel_tsens_data *tsens_data =
+				&tsens[sensor_type]->data;
+
+		*size = sizeof(struct intel_tsens_data);
+		data = kzalloc(*size, GFP_KERNEL);
+		if (!data)
+			return NULL;
+		strcpy(data->name, tsens_data->name);
+		data->n_trips = tsens_data->n_trips;
+		data->passive_delay = tsens_data->passive_delay;
+		data->polling_delay = tsens_data->polling_delay;
+		data->sensor_type = tsens_data->sensor_type;
+		return (u8 *)data;
+	}
+	case HDDL_GET_SENS_TRIP_INFO:
+	{
+		struct intel_tsens_trip_info *data;
+		u32 sensor_type = msg->sensor_type;
+		u32 trip_info_idx = msg->trip_info_idx;
+
+		*size = sizeof(struct intel_tsens_trip_info);
+		data = kzalloc(*size, GFP_KERNEL);
+		if (!data)
+			return NULL;
+		memcpy(data, tsens[sensor_type]->trip_info[trip_info_idx],
+		       sizeof(struct intel_tsens_trip_info));
+		return (u8 *)data;
+	}
+	default:
+		break;
+	}
+	return NULL;
+}
+
+static int intel_hddl_i2c_register_clients(struct device *dev,
+					   struct intel_hddl_clients *c)
+{
+	struct intel_hddl_client_priv *priv = c->pdata;
+	struct xlink_handle *xlink = &c->xlink_dev;
+	struct intel_hddl_tsens_msg msg;
+	int rc, i;
+	int size;
+
+	/* Get msg type */
+	rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+				       xlink, c->chan_num,
+				       (u8 *)&msg, &size);
+	if (rc)
+		return rc;
+
+	while (msg.msg_type != HDDL_GET_SENS_COMPLETE) {
+		u32 *data;
+
+		switch (msg.msg_type) {
+		case HDDL_GET_N_I2C_DEVS:
+		{
+			size = sizeof(int);
+			data = kzalloc(size, GFP_KERNEL);
+			if (!data)
+				return -ENOMEM;
+			*data = priv->n_clients;
+			break;
+		}
+		case HDDL_GET_I2C_DEVS:
+		{
+			struct intel_hddl_i2c_devs_data *i2c_dev;
+			int sensor_type = msg.sensor_type;
+
+			size = sizeof(struct intel_hddl_i2c_devs_data);
+			i2c_dev = kzalloc(size, GFP_KERNEL);
+			if (!i2c_dev)
+				return -ENOMEM;
+			strcpy(i2c_dev->name,
+			       priv->i2c_devs[sensor_type]->name);
+			i2c_dev->addr = priv->i2c_devs[sensor_type]->addr;
+			i2c_dev->bus = priv->i2c_devs[sensor_type]->bus;
+			i2c_dev->enabled =
+				priv->i2c_devs[sensor_type]->enabled;
+			i2c_dev->local_host =
+				priv->i2c_devs[sensor_type]->local_host;
+			i2c_dev->remote_host =
+				priv->i2c_devs[sensor_type]->remote_host;
+			data = (u32 *)i2c_dev;
+			break;
+		}
+		default:
+			dev_err(&priv->pdev->dev,
+				"HDDL: Invalid msg received\n");
+			return -EINVAL;
+		}
+		rc = xlink_write_volatile(xlink, c->chan_num,
+					  (u8 *)data, size);
+		if (rc) {
+			dev_err(&priv->pdev->dev,
+				"xlink write data failed rc = %d\n",
+				rc);
+			return rc;
+		}
+		kfree(data);
+		rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+					       xlink, c->chan_num,
+					       (u8 *)&msg, &size);
+		if (rc)
+			return rc;
+	}
+
+	for (i = 0; i < priv->n_clients; i++) {
+		msg.msg_type = HDDL_GET_I2C_DEV_ADDR;
+		msg.sensor_type = i;
+		rc = xlink_write_volatile(xlink, c->chan_num,
+					  (u8 *)&msg, sizeof(msg));
+		if (rc) {
+			dev_err(&priv->pdev->dev,
+				"xlink write data failed rc = %d\n",
+				rc);
+			return rc;
+		}
+		rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+					       xlink, c->chan_num,
+					       (u8 *)&priv->i2c_devs[i]->addr,
+					       &size);
+		if (rc)
+			return rc;
+		priv->i2c_devs[i]->board_info.addr = priv->i2c_devs[i]->addr;
+	}
+	/* Send Complete */
+	msg.msg_type = HDDL_GET_SENS_COMPLETE;
+	rc = xlink_write_volatile(xlink, c->chan_num,
+				  (u8 *)&msg, sizeof(msg));
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"xlink write data failed rc = %d\n",
+			rc);
+		return rc;
+	}
+
+	intel_hddl_add_xlink_i2c_clients(&priv->pdev->dev, c, priv->i2c_devs,
+					 priv->n_clients, 0);
+	return 0;
+}
+
+static int intel_hddl_send_tsens_data(struct intel_hddl_clients *c)
+{
+	struct intel_hddl_client_priv *priv = c->pdata;
+	struct xlink_handle *xlink = &c->xlink_dev;
+	struct intel_hddl_tsens_msg msg;
+	u32 size;
+	u8 *data;
+	int rc;
+
+	/* Get msg type */
+	rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+				       xlink, c->chan_num,
+				       (u8 *)&msg, &size);
+	if (rc)
+		return rc;
+
+	while (msg.msg_type != HDDL_GET_SENS_COMPLETE) {
+		data = intel_tsens_thermal_msg(c, &msg, &size);
+		if (!data) {
+			dev_err(&priv->pdev->dev, "HDDL: failed to get details\n");
+			return -EINVAL;
+		}
+		rc = xlink_write_volatile(xlink, c->chan_num,
+					  (u8 *)data, size);
+		if (rc) {
+			dev_err(&priv->pdev->dev,
+				"xlink write data failed rc = %d\n",
+				rc);
+			return rc;
+		}
+		kfree(data);
+		rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+					       xlink, c->chan_num,
+					       (u8 *)&msg, &size);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+static int intel_hddl_device_connect_task(void *data)
+{
+	struct intel_hddl_clients *c = (struct intel_hddl_clients *)data;
+	struct intel_hddl_client_priv *priv = c->pdata;
+	struct intel_hddl_board_info board_info_rcvd;
+	struct xlink_handle *xlink = &c->xlink_dev;
+	struct timespec64 ts;
+	u32 size, rc;
+
+	memcpy(&c->board_info, &priv->board_info,
+	       sizeof(struct intel_hddl_board_info));
+	c->chan_num = priv->xlink_chan;
+	c->i2c_chan_num = priv->i2c_xlink_chan;
+	c->i2c_devs = priv->i2c_devs;
+	c->n_clients = priv->n_clients;
+	if (intel_hddl_open_xlink_device(&priv->pdev->dev, c)) {
+		dev_err(&priv->pdev->dev, "HDDL open xlink dev failed\n");
+		return -EINVAL;
+	}
+	size = sizeof(ts);
+	rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+				       xlink, c->chan_num,
+				       (u8 *)&ts, &size);
+	if (rc)
+		goto close_xlink_dev;
+	do_settimeofday64(&ts);
+
+	rc = xlink_write_volatile(xlink, c->chan_num,
+				  (u8 *)&c->board_info,
+				  sizeof(struct intel_hddl_board_info));
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"xlink write data failed rc = %d\n",
+			rc);
+		goto close_xlink_dev;
+	}
+
+	size = sizeof(board_info_rcvd);
+	rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+				       xlink, c->chan_num,
+				       (u8 *)&board_info_rcvd,
+				       &size);
+	if (rc)
+		goto close_xlink_dev;
+	rc = intel_hddl_send_tsens_data(c);
+	if (rc) {
+		dev_err(&priv->pdev->dev, "HDDL: tsens data not sent\n");
+		goto close_xlink_dev;
+	}
+	rc = intel_hddl_register_xlink_i2c_adap(&priv->pdev->dev, c);
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"HDDL: register xlink i2c adapter failed\n");
+		goto remove_xlink_i2c_adap;
+	}
+	rc = intel_hddl_i2c_register_clients(&priv->pdev->dev, c);
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"HDDL: register i2c clients failed\n");
+		goto remove_xlink_i2c_adap;
+	}
+
+	return 0;
+remove_xlink_i2c_adap:
+	intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+close_xlink_dev:
+	intel_hddl_close_xlink_device(&priv->pdev->dev, c);
+	return rc;
+}
+
+static int intel_hddl_check_for_new_device(struct intel_hddl_client_priv *priv)
+{
+	struct intel_hddl_clients **hddl_clients;
+
+	hddl_clients =
+		intel_hddl_setup_device(&priv->pdev->dev,
+					intel_hddl_device_connect_task,
+					&priv->n_hddl_devs, priv->hddl_client,
+					priv);
+	if (!hddl_clients) {
+		dev_err(&priv->pdev->dev,
+			"intel_hddl_setup_device returned NULL\n");
+		return 0;
+	}
+	priv->hddl_client = hddl_clients;
+	return 1;
+}
+
+static int intel_hddl_device_init_task(void *data)
+{
+	struct intel_hddl_client_priv *priv =
+		(struct intel_hddl_client_priv *)data;
+
+	while (!kthread_should_stop()) {
+		if (!intel_hddl_check_for_new_device(priv)) {
+			dev_err(&priv->pdev->dev,
+				"Error while checking for new device\n");
+			return -EFAULT;
+		}
+		msleep_interruptible(HDDL_NEW_DEV_POLL_TIME);
+	}
+
+	return 0;
+}
+
+static int intel_hddl_device_init(struct intel_hddl_client_priv *priv)
+{
+	priv->hddl_dev_init_task = kthread_run(intel_hddl_device_init_task,
+					       (void *)priv,
+					       "hddl_device_init");
+	if (IS_ERR(priv->hddl_dev_init_task)) {
+		dev_err(&priv->pdev->dev, "failed to create thread\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int hddl_tsens_config_sensors(struct device_node *s_node,
+				     struct intel_hddl_client_priv *priv,
+				     int sensor_type)
+{
+	struct intel_tsens *tsens = priv->tsens[sensor_type];
+	struct platform_device *pdev = priv->pdev;
+	s32 trip_temp_count, trip_temp_type_c, i;
+	int ret;
+
+	tsens->data.sensor_type = sensor_type;
+	if (of_property_read_u32(s_node, "passive_delay_rh",
+				 &tsens->data.passive_delay)) {
+		dev_err(&pdev->dev,
+			"passive_delay missing in dt for %s\n",
+			tsens->data.name);
+		return -EINVAL;
+	}
+	if (of_property_read_u32(s_node, "polling_delay_rh",
+				 &tsens->data.polling_delay)) {
+		dev_err(&pdev->dev,
+			"polling_delay missing in dt for %s\n",
+			tsens->data.name);
+		return -EINVAL;
+	}
+	trip_temp_count = of_property_count_u32_elems(s_node, "trip_temp_rh");
+	trip_temp_type_c = of_property_count_strings(s_node, "trip_type_rh");
+	if (trip_temp_count != trip_temp_type_c ||
+	    trip_temp_count <= 0 || trip_temp_type_c <= 0) {
+		dev_err(&pdev->dev,
+			"trip temp config is missing in dt for %s\n",
+			tsens->data.name);
+		return -EINVAL;
+	}
+
+	tsens->trip_info =
+		devm_kcalloc(&pdev->dev, trip_temp_count,
+			     sizeof(struct intel_tsens_trip_info *),
+			     GFP_KERNEL);
+	if (!tsens->trip_info)
+		return -ENOMEM;
+	tsens->data.n_trips = trip_temp_count;
+	for (i = 0; i < trip_temp_count; i++) {
+		const char *trip_name;
+
+		tsens->trip_info[i] =
+			devm_kzalloc(&pdev->dev,
+				     sizeof(struct intel_tsens_trip_info),
+				     GFP_KERNEL);
+		if (!tsens->trip_info[i])
+			return -ENOMEM;
+		ret = of_property_read_u32_index(s_node, "trip_temp_rh", i,
+						 &tsens->trip_info[i]->temp);
+		if (ret) {
+			dev_err(&pdev->dev, "Invalid trip temp");
+			return ret;
+		}
+		ret = of_property_read_string_index(s_node, "trip_type_rh", i,
+						    &trip_name);
+		if (ret) {
+			dev_err(&pdev->dev, "Invalid trip type");
+			return ret;
+		}
+		if (!strcmp(trip_name, "passive"))
+			tsens->trip_info[i]->trip_type = THERMAL_TRIP_PASSIVE;
+		else if (!strcmp(trip_name, "critical"))
+			tsens->trip_info[i]->trip_type = THERMAL_TRIP_CRITICAL;
+		else if (!strcmp(trip_name, "hot"))
+			tsens->trip_info[i]->trip_type = THERMAL_TRIP_HOT;
+		else
+			tsens->trip_info[i]->trip_type = THERMAL_TRIP_ACTIVE;
+	}
+
+	return 0;
+}
+
+static int hddl_get_onchip_sensors(struct platform_device *pdev,
+				   struct intel_hddl_client_priv *priv)
+{
+	struct device_node *s_node;
+	struct device_node *np = NULL;
+	int i = 0;
+
+	s_node = of_parse_phandle(pdev->dev.of_node, "soc-sensors", 0);
+	if (!s_node)
+		return -EINVAL;
+	priv->nsens = of_get_child_count(s_node);
+	if (priv->nsens == 0) {
+		dev_err(&pdev->dev, "No onchip sensors configured in dt\n");
+		return -EINVAL;
+	}
+	priv->tsens =
+		devm_kcalloc(&pdev->dev, priv->nsens,
+			     sizeof(struct intel_tsens *),
+			     GFP_KERNEL);
+	if (!priv->tsens)
+		return -ENOMEM;
+	for_each_child_of_node(s_node, np) {
+		struct intel_tsens *tsens;
+
+		tsens = devm_kzalloc(&pdev->dev,
+				     sizeof(struct intel_tsens),
+				     GFP_KERNEL);
+		if (!tsens)
+			return -ENOMEM;
+		priv->tsens[i] = tsens;
+		strcpy(tsens->data.name, np->name);
+		if (hddl_tsens_config_sensors(np, priv, i)) {
+			dev_err(&pdev->dev,
+				"Missing sensor info in dts for %s\n",
+				tsens->data.name);
+			return -EINVAL;
+		}
+		i++;
+	}
+
+	return 0;
+}
+
+static int intel_hddl_get_ids(struct platform_device *pdev,
+			      struct intel_hddl_client_priv *priv)
+{
+	int ret;
+	struct gpio_descs *board_id_gpios;
+	struct gpio_descs *soc_id_gpios;
+	unsigned long values = 0;
+
+	board_id_gpios =
+		gpiod_get_array_optional(&pdev->dev, "board-id",
+					 GPIOD_IN);
+	if (board_id_gpios) {
+		ret = gpiod_get_array_value(board_id_gpios->ndescs,
+					    board_id_gpios->desc,
+					    NULL, &values);
+		if (ret) {
+			dev_err(&pdev->dev,
+				"failed to get boardid values %d",
+				ret);
+			return ret;
+		}
+		priv->board_info.board_id = values;
+		priv->board_id = priv->board_info.board_id;
+	}
+	soc_id_gpios =
+		gpiod_get_array_optional(&pdev->dev, "soc-id",
+					 GPIOD_IN);
+	if (soc_id_gpios) {
+		values = 0;
+		ret = gpiod_get_array_value(soc_id_gpios->ndescs,
+					    soc_id_gpios->desc,
+					    NULL, &values);
+		if (ret) {
+			dev_err(&pdev->dev,
+				"failed to get soc-id values %d",
+				ret);
+			return ret;
+		}
+		priv->board_info.soc_id = values;
+		priv->soc_id = priv->board_info.soc_id;
+	}
+
+	return 0;
+}
+
+static int intel_hddl_config_dt(struct intel_hddl_client_priv *priv)
+{
+	struct platform_device *pdev = priv->pdev;
+	struct device_node *np = pdev->dev.of_node;
+	struct device_node *s_node = NULL;
+	struct resource *res;
+	int i, ret;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (res) {
+		priv->base_addr = ioremap(res->start,
+					  resource_size(res));
+	}
+	priv->board_info.soc_id = 0;
+	priv->board_info.board_id = 0;
+	ret = of_property_read_u32(np, "xlink_chan",
+				   &priv->xlink_chan);
+	if (ret) {
+		dev_err(&pdev->dev, "xlink channel not available in dt");
+		return ret;
+	}
+	ret = of_property_read_u32(np, "i2c_xlink_chan",
+				   &priv->i2c_xlink_chan);
+	if (ret) {
+		dev_err(&pdev->dev, "i2c xlink channel not available in dt");
+		return ret;
+	}
+	ret = intel_hddl_get_ids(pdev, priv);
+	if (ret) {
+		dev_err(&pdev->dev, "Unable to get board/soc id");
+		return ret;
+	}
+	ret = hddl_get_onchip_sensors(pdev, priv);
+	if (ret) {
+		dev_err(&pdev->dev, "Onchip sensor config failed");
+		return ret;
+	}
+	priv->n_clients = of_get_child_count(np);
+	priv->i2c_devs = devm_kcalloc(&pdev->dev, priv->n_clients,
+				      sizeof(struct intel_hddl_i2c_devs *),
+				      GFP_KERNEL);
+	if (!priv->i2c_devs)
+		return -ENOMEM;
+	i = 0;
+	for_each_child_of_node(np, s_node) {
+		const char *status;
+		struct intel_hddl_i2c_devs *i2c_dev;
+
+		i2c_dev = devm_kzalloc(&pdev->dev,
+				       sizeof(struct intel_hddl_i2c_devs),
+				       GFP_KERNEL);
+		if (!i2c_dev)
+			return -ENOMEM;
+		of_property_read_string_index(s_node, "status", 0,
+					      &status);
+		if (!strcmp(status, "okay")) {
+			u32 addr;
+			const char *name = NULL;
+
+			i2c_dev->enabled = 1;
+			of_property_read_string_index(s_node, "compatible", 0,
+						      &name);
+			if (name) {
+				strcpy(i2c_dev->name, name);
+				strcpy(i2c_dev->board_info.type,
+				       i2c_dev->name);
+			}
+			/*
+			 * The DT parameters below are optional.
+			 */
+			of_property_read_u32(s_node, "reg", &addr);
+			i2c_dev->board_info.addr = addr;
+			i2c_dev->addr = addr;
+			of_property_read_u32(s_node, "bus",
+					     &i2c_dev->bus);
+			of_property_read_u32(s_node, "remote-host",
+					     &i2c_dev->remote_host);
+			of_property_read_u32(s_node, "local-host",
+					     &i2c_dev->local_host);
+		}
+		priv->i2c_devs[i] = i2c_dev;
+		i++;
+	}
+	return 0;
+}
+
+static int intel_hddl_client_probe(struct platform_device *pdev)
+{
+	struct intel_hddl_client_priv *priv;
+	int ret;
+
+	priv = devm_kzalloc(&pdev->dev,
+			    sizeof(struct intel_hddl_client_priv),
+			    GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	priv->pdev = pdev;
+	if (pdev->dev.of_node) {
+		ret = intel_hddl_config_dt(priv);
+		if (ret) {
+			dev_err(&pdev->dev, "dt configuration failed\n");
+			devm_kfree(&pdev->dev, priv);
+			return ret;
+		}
+	} else {
+		dev_err(&pdev->dev,
+			"Non Device Tree build is not supported\n");
+		devm_kfree(&pdev->dev, priv);
+		return -EINVAL;
+	}
+	ret = intel_hddl_device_init(priv);
+	if (ret) {
+		dev_err(&pdev->dev, "HDDL device init failed\n");
+		devm_kfree(&pdev->dev, priv);
+		return -EINVAL;
+	}
+	g_priv = priv;
+	platform_set_drvdata(pdev, priv);
+	return 0;
+}
+
+/* Device Exit */
+static int intel_hddl_client_exit(struct platform_device *pdev)
+{
+	int k;
+	struct intel_hddl_client_priv *priv = platform_get_drvdata(pdev);
+
+	if (!priv)
+		return -EINVAL;
+	for (k = 0; k < priv->n_hddl_devs; k++) {
+		struct intel_hddl_clients *d = priv->hddl_client[k];
+
+		intel_hddl_device_remove(d);
+	}
+
+	return 0;
+}
+
+static const struct of_device_id intel_hddl_client_id_table[] = {
+	{ .compatible = "intel,hddl-client" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, intel_hddl_client_id_table);
+
+static struct platform_driver intel_hddl_client_driver = {
+	.probe = intel_hddl_client_probe,
+	.remove = intel_hddl_client_exit,
+	.driver = {
+		.name = "intel_hddl_client",
+		.of_match_table = intel_hddl_client_id_table,
+	},
+};
+
+module_platform_driver(intel_hddl_client_driver);
+
+MODULE_DESCRIPTION("Intel HDDL Device driver");
+MODULE_AUTHOR("Sandeep Singh <sandeep1.singh@intel.com>");
+MODULE_AUTHOR("Vaidya, Mahesh R <mahesh.r.vaidya@intel.com>");
+MODULE_AUTHOR("Udhayakumar C <udhayakumar.c@intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/hddl_device/hddl_device_util.h b/drivers/misc/hddl_device/hddl_device_util.h
new file mode 100644
index 000000000000..628619e0cdb9
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device_util.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * High Density Deep Learning utils.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_HDDL_DEVICE_UTIL_H
+#define _LINUX_HDDL_DEVICE_UTIL_H
+
+#include <linux/xlink_drv_inf.h>
+#include "../xlink-core/xlink-defs.h"
+
+#define HDDL_NEW_DEV_POLL_TIME 2000
+
+typedef int (*intel_hddl_connect_task)(void *);
+
+struct intel_hddl_clients **
+	intel_hddl_setup_device(struct device *dev,
+				intel_hddl_connect_task task, u32 *n_devs,
+				struct intel_hddl_clients **hddl_clients,
+				void *pdata);
+
+int intel_hddl_xlink_remove_i2c_adap(struct device *dev,
+				     struct intel_hddl_clients *c);
+
+void intel_hddl_add_xlink_i2c_clients(struct device *dev,
+				      struct intel_hddl_clients *c,
+				      struct intel_hddl_i2c_devs **i2c_devs,
+				      int n_clients, int remote);
+
+int intel_hddl_register_xlink_i2c_adap(struct device *dev,
+				       struct intel_hddl_clients *c);
+
+struct intel_hddl_clients **intel_hddl_get_clients(int *n_devs);
+
+void intel_hddl_device_remove(struct intel_hddl_clients *d);
+
+void intel_hddl_unregister_pdev(struct intel_hddl_clients *c);
+
+void intel_hddl_close_xlink_device(struct device *dev,
+				   struct intel_hddl_clients *d);
+
+int intel_hddl_open_xlink_device(struct device *dev,
+				 struct intel_hddl_clients *d);
+
+void intel_hddl_free_i2c_client(struct intel_hddl_clients *d,
+				struct intel_hddl_i2c_devs *i2c_dev);
+
+#endif /* _LINUX_HDDL_DEVICE_UTIL_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 34/34] misc: HDDL device management for IA host
  2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
                   ` (32 preceding siblings ...)
  2021-01-08 21:25 ` [PATCH v2 33/34] misc: Hddl device management for local host mgross
@ 2021-01-08 21:26 ` mgross
  33 siblings, 0 replies; 57+ messages in thread
From: mgross @ 2021-01-08 21:26 UTC (permalink / raw)
  To: markgross, mgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

From: "C, Udhayakumar" <udhayakumar.c@intel.com>

Add IA host hddl device management driver for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an Arm Cortex-A53 CPU running Linux and act as PCIe
endpoint devices.

High-level architecture:
------------------------

Remote Host IA CPU                          Local Host ARM CPU
-------------------------------         ----------------------------
| * Send time as xlink packet |         |* Sync time with IA host  |
| * receive sensor details    |         |* Prepare and share sensor|
|   and register as i2c or    |         |  details to IA host as   |
|   xlink smbus slaves        |         |  xlink packets           |
-------------------------------         ----------------------------
|       hddl server           | <=====> |     hddl client          |
-------------------------------  xlink  ----------------------------

hddl device module:
-------------------
The HDDL client driver acts as a software RTC to sync with network
time. It abstracts the xlink protocol to communicate with the remote
host. This driver exports the details of the sensors available on the
platform to the remote host as xlink packets.
This driver also handles device connect/disconnect events and
identifies the board id and soc id using GPIOs, based on the platform
configuration.

- Remote Host driver
  * Intended for IA CPU
  * It is based on xlink Framework
  * Driver path:
  {tree}/drivers/misc/hddl_device/hddl_device_rh.c

The local Arm host and remote IA host drivers communicate using the
xlink protocol.

Signed-off-by: C, Udhayakumar <udhayakumar.c@intel.com>
---
 .../misc-devices/hddl_device_server.rst       | 205 +++++
 Documentation/misc-devices/index.rst          |   1 +
 drivers/misc/hddl_device/Kconfig              |  12 +
 drivers/misc/hddl_device/Makefile             |   2 +
 drivers/misc/hddl_device/hddl_device_rh.c     | 837 ++++++++++++++++++
 5 files changed, 1057 insertions(+)
 create mode 100644 Documentation/misc-devices/hddl_device_server.rst
 create mode 100644 drivers/misc/hddl_device/hddl_device_rh.c

diff --git a/Documentation/misc-devices/hddl_device_server.rst b/Documentation/misc-devices/hddl_device_server.rst
new file mode 100644
index 000000000000..0be37973d1fe
--- /dev/null
+++ b/Documentation/misc-devices/hddl_device_server.rst
@@ -0,0 +1,205 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Kernel driver: hddl_device_server
+=================================
+
+Supported chips:
+  * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+Authors:
+    - Thalaiappan, Rathina <rathina.thalaiappan@intel.com>
+    - Udhayakumar C <udhayakumar.c@intel.com>
+
+High-level architecture
+=======================
+::
+
+        Remote Host IA CPU                          Local Host ARM CPU
+        -------------------------------         ----------------------------
+        | * Send time as xlink packet |         |* Sync time with IA host  |
+        | * receive sensor details    |         |* Prepare and share sensor|
+        |   and register as i2c or    |         |  details to IA host as   |
+        |   xlink smbus slaves        |         |  xlink packets           |
+        -------------------------------         ----------------------------
+        |       hddl server           | <=====> |     hddl client          |
+        -------------------------------  xlink  ----------------------------
+
+Overview
+========
+
+This driver supports hddl device management for Intel Edge.AI Computer Vision
+platforms. It runs on the IA host.
+
+This driver supports the following features:
+
+  - Receives details of the temperature sensors, current sensor and fan
+    controller present in Intel Edge.AI Computer Vision platforms.
+  - Sends time sync data to the Intel Edge.AI Computer Vision platform.
+  - Handles device connect and disconnect events.
+  - Gets a free slave address for the memory-mapped thermal sensors present
+    in the SoC (Documentation/hwmon/intel_tsens_sensors.rst) and shares it
+    with the Intel Edge.AI Computer Vision platform.
+  - Registers i2c slave devices for the slaves present in the Intel Edge.AI
+    Computer Vision platform.
+
+The Keem Bay platform has the following sensors.
+
+On-chip sensors:
+
+  - Media Subsystem (mss) temperature sensor
+  - NN subsystem (nce) temperature sensor
+  - Compute subsystem (cse) temperature sensor
+  - SoC (maximum of mss, nce and cse)
+
+On-board sensors:
+
+  - two lm75 temperature sensors
+  - emc2103 fan controller
+  - ina3221 current sensor
+
+Driver Structure
+================
+
+The driver is a platform driver providing ``probe`` and ``remove``
+operations; a distilled sketch of the resulting flow is shown after the
+list below.
+
+  - probe: spawns a kernel thread to monitor for new PCIe devices.
+
+  - init task: polls for new PCIe devices every 5 seconds and creates a
+    connect task to set up each new device.
+
+  - connect task: the main entity, which connects to the hddl device client
+    using xlink and performs the basic initialisation and handshaking. It
+    also monitors hddl device client link down/link up events and
+    reinitialises the drivers accordingly on the server side.
+
+  - remove: unregisters i2c client devices and i2c adapters and closes the
+    xlink channel.
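+
+A distilled sketch of the polling flow is shown below.  It is illustrative
+only: ``hddl_check_for_new_devices()`` is a hypothetical stand-in for the
+enumeration and handshake work described above, and error handling is
+omitted::
+
+        #include <linux/delay.h>
+        #include <linux/err.h>
+        #include <linux/kthread.h>
+
+        /* Hypothetical helper: enumerate xlink SW device ids and, for each
+         * new PCIe device, kthread_run() a connect task that opens the
+         * xlink channel and performs the handshake described above. */
+        static void hddl_check_for_new_devices(void *priv);
+
+        static int hddl_device_init_task(void *data)
+        {
+                while (!kthread_should_stop()) {
+                        hddl_check_for_new_devices(data);
+                        msleep_interruptible(5000);     /* 5 second poll */
+                }
+                return 0;
+        }
+
+        /* Started from probe. */
+        static int hddl_start_init_task(struct intel_hddl_device_priv *priv)
+        {
+                struct task_struct *t;
+
+                t = kthread_run(hddl_device_init_task, priv,
+                                "hddl_device_init");
+                return IS_ERR(t) ? PTR_ERR(t) : 0;
+        }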
+
+HDDL Server Sequence - Basic Setup and handshaking with HDDL Device Client
+==========================================================================
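+
+The user-space view of the character device set up in ``probe`` is sketched
+after this diagram.
+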
+::
+
+          ,-----.            ,---------.          ,------------.           ,------------------.
+          |probe|            |Init task|          |connect task|           |hddl device client|
+          `--+--'            `----+----'          `-----+------'           `--------+---------'
+             ----.                |                     |                           |
+                 | "Init char dev"|                     |                           |
+             <---'                |                     |                           |
+             |                    |                     |                           |
+             | ,----------------------!.                |                           |
+             | |Initialize char device|_\               |                           |
+             | |for ioctls              |               |                           |
+             | `------------------------'               |                           |
+             | "Creates kthread"  |                     |                           |
+             |------------------->|                     |                           |
+             |                    |                     |                           |
+             | ,-----------------------!.               |                           |
+             | |creates kernel thread  |_\              |                           |
+             | |to check for new device  |              |                           |
+             | `-------------------------'              |                           |
+        ,---------------------!.  ----.                 |                           |
+        |check for new device |_\     |                 |                           |
+        |with time interval of  | <---'                 |                           |
+        |5 seconds              | |                     |                           |
+        `-----------------------' |                     |                           |
+        ,---------------------!.  |                     |                           |
+        |if new device found?.|_\ |                     |                           |
+        |creates connect task   | |-------------------->|                           |
+        |to setup new device    | |                     |                           |
+        `-----------------------' |                     |                           |
+             |                   ,-------------------!. |----.                      |
+             |                   |setup xlink channel|_\|    |                      |
+             |                   |to communicate with  ||<---'                      |
+             |                   |client               ||                           |
+             |                   `---------------------'|                           |
+             |                    |                     |      share time data      |
+             |                    |                     |      to client            |
+             |                    |                     | -------------------------->
+             |                    |                     |                           |
+             |                    |                     |     receives board id     |
+             |                    |                     | <--------------------------
+             |                    |                     |                           |
+             |                    |                     |  Gets total number of     |
+             |                    |                     |  sensors available in SoC |
+             |                    |                     | <--------------------------
+             |                    |                     |                           |
+             |               ,-----------------------!. |                           |
+             |               |For each sensors get   |_\|                           |
+             |               |sensor type, name, trip  || <--------------------------
+             |               |temp, trip type          ||                           |
+             |               `-------------------------'|                           |
+             |                    |                     |       Send complete.      |
+             |                    |                     | -------------------------->
+             |                    |                     |                           |
+             |                    |                     |----.                      |
+             |                    |                     |    | Register xlink i2c   |
+             |                    |                     |<---' adapters.            |
+             |                    |                     |                           |
+             |                    |                     |                           |
+             |                    |                     |    send slave addr for    |
+             |                    |                     |     each slave in SoC     |
+             |                    |                     | -------------------------->
+             |                    |                     |                           |
+             |                    |                     |----.                      |
+             |                    |                     |    | Register i2c clients.|
+             |                    |                     |<---'                      |
+             |                    |                     |                           |
+             |                    |                     |----.
+             |                    |                     |    | poll for device status
+             |                    |                     |<---'
+          ,--+--.            ,----+----.          ,-----+------.           ,--------+---------.
+          |probe|            |Init task|          |connect task|           |hddl device client|
+          `-----'            `---------'          `------------'           `------------------'
+
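+The sequence starts with ``probe`` creating a character device for ioctls.
+A minimal user-space sketch of that interface is shown below.  It is
+illustrative only: it assumes that ``HDDL_READ_SW_ID_DATA`` and
+``struct sw_id_hddl_data`` are exported through a uapi header (called
+``<linux/hddl_device.h>`` here), that udev exposes the node as
+``/dev/hddl_device``, and that the xlink SW device id is an example value::
+
+        #include <fcntl.h>
+        #include <stdio.h>
+        #include <sys/ioctl.h>
+        #include <unistd.h>
+        #include <linux/hddl_device.h>  /* assumed uapi header */
+
+        int main(void)
+        {
+                /* 0x1234 is a placeholder xlink SW device id */
+                struct sw_id_hddl_data data = { .sw_id = 0x1234 };
+                int fd = open("/dev/hddl_device", O_RDWR);
+
+                if (fd < 0)
+                        return 1;
+                if (ioctl(fd, HDDL_READ_SW_ID_DATA, &data) == 0 &&
+                    data.return_id)
+                        printf("board %d soc %d i2c adapters %d/%d\n",
+                               data.board_id, data.soc_id,
+                               data.soc_adaptor_no[0], data.soc_adaptor_no[1]);
+                close(fd);
+                return 0;
+        }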
+
+XLINK i2c sequence:
+===================
+::
+
+        ,-----------------.          ,--------.          ,---------.          ,-----.
+        |xlink-i2c-adapter|          |I2C core|          |i2c-slave|          |xlink|
+        `--------+--------'          `---+----'          `----+----'          `--+--'
+                 |                       |                    |                  |
+                 |---------------------->|                    |                  |
+                 |                       |                    |                  |
+                 | ,--------------------------!.              |                  |
+                 | |Initialize xlink based i2c|_\             |                  |
+                 | |adapters.                   |             |                  |
+                 | `----------------------------'             |                  |
+                 |                       |                    |                  |
+                 |                       | <------------------|                  |
+                 |                       |                    |                  |
+                 |                       |  ,----------------------!.            |
+                 |                       |  |Linux i2c slave device|_\           |
+                 |                       |  |standard request        |           |
+                 |                       |  `------------------------'           |
+                 |   i2c request from    |                    |                  |
+                 |   clients.            |                    |                  |
+                 |<----------------------|                    |                  |
+                 |                       |                    |                  |
+                 |                       |                    |                  |
+                 |-------------------------------------------------------------->|
+                 |                       |                    |                  |
+                 |                       |  ,----------------------------!.      |
+                 |                       |  |I2C request is sent as xlink|_\     |
+                 |                       |  |packet to SoC                 |     |
+                 |                       |  `------------------------------'     |
+                 |                       |                    |                  |
+                 |<--------------------------------------------------------------|
+                 |                       |                    |                  |
+                 |                       |  ,------------------------------!.    |
+                 |                       |  |I2C response from SoC as xlink|_\   |
+                 |                       |  |packet                          |   |
+                 |                       |  `--------------------------------'   |
+                 |                       |                    |                  |
+                 |---------------------->|                    |                  |
+                 |                       |                    |                  |
+                 | ,---------------------------!.             |                  |
+                 | |xlink response is converted|_\            |                  |
+                 | |to standard i2c response.    |            |                  |
+                 | `-----------------------------'            |                  |
+                 |                       |    i2c response    |                  |
+                 |                       | ------------------>|                  |
+        ,--------+--------.          ,---+----.          ,----+----.          ,--+--.
+        |xlink-i2c-adapter|          |I2C core|          |i2c-slave|          |xlink|
+        `-----------------'          `--------'          `---------'          `-----'
diff --git a/Documentation/misc-devices/index.rst b/Documentation/misc-devices/index.rst
index 102f7f9dea87..5a77a86261b7 100644
--- a/Documentation/misc-devices/index.rst
+++ b/Documentation/misc-devices/index.rst
@@ -20,6 +20,7 @@ fit into other categories.
    eeprom
    c2port
    hddl_device_client.rst
+   hddl_device_server.rst
    ibmvmc
    ics932s401
    isl29003
diff --git a/drivers/misc/hddl_device/Kconfig b/drivers/misc/hddl_device/Kconfig
index e1ae81fdf177..7f9a6a685275 100644
--- a/drivers/misc/hddl_device/Kconfig
+++ b/drivers/misc/hddl_device/Kconfig
@@ -12,3 +12,15 @@ config HDDL_DEVICE_CLIENT
 	  the device connect/disconnect programming sequence.
 	  Say Y if using a processor that includes the Intel VPU such as
 	  Keem Bay.  If unsure, say N.
+
+config HDDL_DEVICE_SERVER
+	tristate "Support for hddl device server"
+	depends on XLINK_CORE && !HDDL_DEVICE_CLIENT
+	help
+	  This option enables the HDDL device server module.
+
+	  This driver shares time sync data with the local host and
+	  retrieves the sensors available on the platform. This also handles
+	  the device connect/disconnect programming sequence.
+	  Say Y if using a processor that includes the Intel VPU such as
+	  Keem Bay.  If unsure, say N.
diff --git a/drivers/misc/hddl_device/Makefile b/drivers/misc/hddl_device/Makefile
index dca381660baa..0e9a4cd2cef3 100644
--- a/drivers/misc/hddl_device/Makefile
+++ b/drivers/misc/hddl_device/Makefile
@@ -3,3 +3,5 @@
 
 obj-$(CONFIG_HDDL_DEVICE_CLIENT)	+= hddl_device_client.o
 hddl_device_client-objs			+= hddl_device_lh.o hddl_device.o
+obj-$(CONFIG_HDDL_DEVICE_SERVER)	+= hddl_device_server.o
+hddl_device_server-objs			+= hddl_device_rh.o hddl_device.o
diff --git a/drivers/misc/hddl_device/hddl_device_rh.c b/drivers/misc/hddl_device/hddl_device_rh.c
new file mode 100644
index 000000000000..78546ea64356
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device_rh.c
@@ -0,0 +1,837 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * High Density Deep Learning Kernel module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/intel_tsens_host.h>
+#include <linux/ioctl.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+
+#include <uapi/linux/stat.h>
+
+#include "hddl_device_util.h"
+
+#define DRIVER_NAME "hddl_device_server"
+
+/*
+ * I2C client Reserved addr: 0x00 - 0x0f
+ *			     0xf0 - 0xff
+ */
+#define HDDL_FREE_CLIENT_ADDR_START	0x10
+#define HDDL_FREE_CLIENT_ADDR_END	0xf0
+#define HDDL_FREE_CLIENT_ADDR_SIZE	(HDDL_FREE_CLIENT_ADDR_END - \
+					HDDL_FREE_CLIENT_ADDR_START)
+/* Xlink channel reserved for HDDL device management
+ * HDDL_NODE_XLINK_CHANNEL - Default channel for HDDL device
+ *				Management communication.
+ * HDDL_I2C_XLINK_CHANNEL - channel used for xlink I2C
+ *				communication.
+ */
+#define HDDL_NODE_XLINK_CHANNEL	1080
+#define HDDL_I2C_XLINK_CHANNEL		1081
+
+#define HDDL_RESET_SUCCESS	1
+#define HDDL_RESET_FAILED	0
+
+static const int hddl_host_reserved_addrs[] = {
+	0x42,
+	0x52,
+	0x54,
+	0x60
+};
+
+struct intel_hddl_server_plat_data {
+	u32 xlink_chan;
+	u32 i2c_xlink_chan;
+};
+
+struct intel_hddl_device_priv {
+	u32 xlink_chan;
+	u32 i2c_xlink_chan;
+	u32 ndevs;
+	DECLARE_BITMAP(client_addr, HDDL_FREE_CLIENT_ADDR_SIZE);
+	/* HDDL device lock */
+	struct mutex lock;
+	struct platform_device *pdev;
+	struct intel_hddl_clients **hddl_client;
+	struct task_struct *hddl_dev_init_task;
+	struct intel_hddl_server_plat_data *plat_data;
+	struct i2c_adapter *smbus_adap;
+	struct class *dev_class;
+	struct cdev hddl_cdev;
+	dev_t cdev;
+};
+
+static struct intel_hddl_device_priv *g_priv;
+
+static inline int intel_hddl_get_xlink_data(struct device *dev,
+					    struct xlink_handle *xlink,
+					    int chan_num, u8 *msg,
+					    int *size)
+{
+	int rc;
+
+	rc = xlink_read_data_to_buffer(xlink, chan_num,
+				       msg, size);
+	if (rc) {
+		dev_err(dev,
+			"HDDL: xlink read data failed rc = %d\n",
+			rc);
+		return -EFAULT;
+	}
+	rc = xlink_release_data(xlink, chan_num, NULL);
+	if (rc) {
+		dev_err(dev,
+			"HDDL: xlink release failed rc = %d\n",
+			rc);
+		return -EFAULT;
+	}
+	return rc;
+}
+
+struct intel_hddl_clients **intel_hddl_get_clients(int *n_devs)
+{
+	if (!g_priv)
+		return NULL;
+	*n_devs = g_priv->ndevs;
+	return g_priv->hddl_client;
+}
+
+static long hddl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	struct intel_hddl_device_priv *priv = file->private_data;
+	u32 __user *user_ptr = (u32 __user *)arg;
+	struct device *dev = &priv->pdev->dev;
+	struct sw_id_soft_reset soft_reset;
+	struct sw_id_hddl_data swid_data;
+	struct intel_hddl_clients **clients;
+	struct intel_hddl_clients *client = NULL;
+	int i, rc;
+
+	if (!user_ptr) {
+		dev_err(dev, "Null pointer from user\n");
+		return -EINVAL;
+	}
+	if (!priv) {
+		dev_err(dev, "Device ioctl failed\n");
+		return -ENODEV;
+	}
+	clients = priv->hddl_client;
+	if (!clients) {
+		dev_err(dev, "Device ioctl failed\n");
+		return -ENODEV;
+	}
+	switch (cmd) {
+	case HDDL_SOFT_RESET:
+		if (copy_from_user(&soft_reset,
+				   user_ptr,
+				   sizeof(struct sw_id_soft_reset)))
+			return -EFAULT;
+		for (i = 0; i < priv->ndevs; i++) {
+			if (clients[i]->xlink_dev.sw_device_id ==
+					soft_reset.sw_id) {
+				client = clients[i];
+				break;
+			}
+		}
+
+		if (!client) {
+			dev_err(dev, "target device to reset not found %d",
+				soft_reset.sw_id);
+			return -ENODEV;
+		}
+		/* xlink-reset */
+		rc =  xlink_reset_device(&client->xlink_dev);
+		if (rc > 0) {
+			dev_err(dev, "xlink_reset_device failed");
+			soft_reset.return_id = HDDL_RESET_FAILED;
+		} else {
+			soft_reset.return_id = HDDL_RESET_SUCCESS;
+		}
+		if (copy_to_user(user_ptr,
+				 &soft_reset, sizeof(struct sw_id_soft_reset)))
+			return -EFAULT;
+		/* end of xlink-reset */
+		break;
+	case HDDL_READ_SW_ID_DATA:
+		if (copy_from_user(&swid_data, user_ptr,
+				   sizeof(struct sw_id_hddl_data)))
+			return -EFAULT;
+		for (i = 0; i < priv->ndevs; i++) {
+			if (clients[i]->xlink_dev.sw_device_id ==
+					swid_data.sw_id) {
+				client = clients[i];
+				break;
+			}
+		}
+
+		if (!client) {
+			dev_err(dev, "target device not found %d",
+				swid_data.sw_id);
+			return -ENODEV;
+		}
+		swid_data.board_id = client->board_info.board_id;
+		swid_data.soc_id = client->board_info.soc_id;
+		if (client->adap[0])
+			swid_data.soc_adaptor_no[0] = client->adap[0]->nr;
+		if (client->adap[1])
+			swid_data.soc_adaptor_no[1] = client->adap[1]->nr;
+		swid_data.return_id = 1;
+		if (copy_to_user(user_ptr,
+				 &swid_data, sizeof(struct sw_id_hddl_data)))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int hddl_open(struct inode *inode, struct file *filp)
+{
+	struct intel_hddl_device_priv *priv;
+
+	priv = container_of(inode->i_cdev,
+			    struct intel_hddl_device_priv, hddl_cdev);
+	if (!priv)
+		return -ENODEV;
+	filp->private_data = priv;
+	return 0;
+}
+
+static const struct file_operations hddl_fops = {
+	.owner	= THIS_MODULE,
+	.open = hddl_open,
+	.unlocked_ioctl = hddl_ioctl,
+};
+
+static int intel_hddl_cdev_init(struct intel_hddl_device_priv *priv)
+{
+	/*Allocating Major number*/
+	if ((alloc_chrdev_region(&priv->cdev, 0, 1, "hddl_dev")) < 0) {
+		dev_err(&priv->pdev->dev, "Cannot allocate major number\n");
+		return -EINVAL;
+	}
+	dev_info(&priv->pdev->dev, "Major = %d Minor = %d\n",
+		 MAJOR(priv->cdev), MINOR(priv->cdev));
+	/*Creating cdev structure*/
+	cdev_init(&priv->hddl_cdev, &hddl_fops);
+	/*Adding character device to the system*/
+	if ((cdev_add(&priv->hddl_cdev, priv->cdev, 1)) < 0) {
+		dev_err(&priv->pdev->dev,
+			"Cannot add the device to the system\n");
+		goto r_region;
+	}
+	/*Creating struct class*/
+	priv->dev_class = class_create(THIS_MODULE, "hddl_class");
+	if (IS_ERR(priv->dev_class)) {
+		dev_err(&priv->pdev->dev, "Cannot create the struct class\n");
+		goto r_device;
+	}
+	/*Creating device*/
+	if (IS_ERR(device_create(priv->dev_class, NULL, priv->cdev, NULL,
+				 "hddl_device"))) {
+		dev_err(&priv->pdev->dev, "Cannot create the Device\n");
+		goto r_class;
+	}
+	return 0;
+
+r_class:
+	class_destroy(priv->dev_class);
+r_device:
+	cdev_del(&priv->hddl_cdev);
+r_region:
+	unregister_chrdev_region(priv->cdev, 1);
+	return -EINVAL;
+}
+
+static void intel_hddl_cdev_remove(struct intel_hddl_device_priv *priv)
+{
+	device_destroy(priv->dev_class, priv->cdev);
+	class_destroy(priv->dev_class);
+	cdev_del(&priv->hddl_cdev);
+	unregister_chrdev_region(priv->cdev, 1);
+}
+
+void intel_hddl_unregister_pdev(struct intel_hddl_clients *c)
+{
+	struct intel_hddl_device_priv *priv = c->pdata;
+
+	intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+}
+
+void intel_hddl_free_i2c_client(struct intel_hddl_clients *d,
+				struct intel_hddl_i2c_devs *i2c_dev)
+{
+	struct intel_hddl_device_priv *priv = d->pdata;
+	int bit_pos = i2c_dev->addr - HDDL_FREE_CLIENT_ADDR_START;
+
+	if (i2c_dev->xlk_client)
+		i2c_unregister_device(i2c_dev->xlk_client);
+	if (i2c_dev->i2c_client)
+		i2c_unregister_device(i2c_dev->i2c_client);
+	if (i2c_dev->smbus_client)
+		i2c_unregister_device(i2c_dev->smbus_client);
+	i2c_dev->xlk_client = NULL;
+	i2c_dev->i2c_client = NULL;
+	i2c_dev->smbus_client = NULL;
+	mutex_lock(&priv->lock);
+	clear_bit(bit_pos, priv->client_addr);
+	mutex_unlock(&priv->lock);
+}
+
+/* hddl_get_free_client - get a free client address
+ *
+ * https://i2c.info/i2c-bus-specification
+ * The client addresses below are reserved by the i2c bus specification:
+ * I2C client reserved addr: 0x00 - 0x0f
+ *			     0xf0 - 0xff
+ *
+ * Get a free client address other than the reserved i2c addresses and the
+ * i2c client addresses already used by the host. If a free client address
+ * is found, mark it as reserved by setting the bit corresponding to that
+ * address, and return the client address.
+ */
+static int hddl_get_free_client(struct intel_hddl_device_priv *priv)
+{
+	unsigned long bit_pos;
+	int client_addr;
+
+	bit_pos = find_first_zero_bit(priv->client_addr,
+				      HDDL_FREE_CLIENT_ADDR_SIZE);
+	if (bit_pos >= HDDL_FREE_CLIENT_ADDR_SIZE)
+		return -EINVAL;
+	client_addr = bit_pos + HDDL_FREE_CLIENT_ADDR_START;
+	set_bit(bit_pos, priv->client_addr);
+	return client_addr;
+}
+
+static int intel_hddl_i2c_register_clients(struct device *dev,
+					   struct intel_hddl_clients *c)
+{
+	struct intel_hddl_device_priv *priv = c->pdata;
+	struct xlink_handle *xlink = &c->xlink_dev;
+	struct intel_hddl_i2c_devs **i2c_devs;
+	struct intel_hddl_tsens_msg msg;
+	int rc, ndevs, size, i;
+
+	/* Get N I2C devices */
+	msg.msg_type = HDDL_GET_N_I2C_DEVS;
+	rc = xlink_write_volatile(xlink, c->chan_num,
+				  (u8 *)&msg, sizeof(msg));
+	if (rc) {
+		dev_err(dev,
+			"xlink write data failed rc = %d\n",
+			rc);
+		return rc;
+	}
+	rc = intel_hddl_get_xlink_data(dev,
+				       xlink, c->chan_num,
+				       (u8 *)&ndevs, &size);
+	if (rc)
+		return rc;
+	c->n_clients = ndevs;
+	i2c_devs = devm_kcalloc(dev, ndevs,
+				sizeof(struct intel_hddl_i2c_devs *),
+				GFP_KERNEL);
+	if (!i2c_devs)
+		return -ENOMEM;
+	c->i2c_devs = i2c_devs;
+	for (i = 0; i < ndevs; i++) {
+		struct intel_hddl_i2c_devs *i2c;
+		struct intel_hddl_i2c_devs_data i2c_data;
+
+		i2c = devm_kzalloc(dev,
+				   sizeof(struct intel_hddl_i2c_devs),
+				   GFP_KERNEL);
+		if (!i2c)
+			return -ENOMEM;
+		i2c_devs[i] = i2c;
+
+		/* Get Details*/
+		msg.msg_type = HDDL_GET_I2C_DEVS;
+		msg.sensor_type = i;
+		rc = xlink_write_volatile(xlink, c->chan_num,
+					  (u8 *)&msg, sizeof(msg));
+		if (rc) {
+			dev_err(dev, "xlink write data failed rc = %d\n", rc);
+			return rc;
+		}
+		rc = intel_hddl_get_xlink_data(dev,
+					       xlink, c->chan_num,
+					       (u8 *)&i2c_data, &size);
+		if (rc)
+			return rc;
+
+		strcpy(i2c->name, i2c_data.name);
+		i2c->addr = i2c_data.addr;
+		i2c->bus = i2c_data.bus;
+		i2c->enabled = i2c_data.enabled;
+		i2c->local_host = i2c_data.local_host;
+		i2c->remote_host = i2c_data.remote_host;
+	}
+
+	mutex_lock(&priv->lock);
+	for (i = 0; i < ndevs; i++) {
+		if (i2c_devs[i]->addr & (1 << 30))
+			i2c_devs[i]->addr = hddl_get_free_client(priv);
+
+		strcpy(i2c_devs[i]->board_info.type,
+		       i2c_devs[i]->name);
+		i2c_devs[i]->board_info.addr = i2c_devs[i]->addr;
+	}
+	mutex_unlock(&priv->lock);
+	/* Send Complete */
+	msg.msg_type = HDDL_GET_SENS_COMPLETE;
+	rc = xlink_write_volatile(xlink, c->chan_num,
+				  (u8 *)&msg, sizeof(msg));
+	if (rc) {
+		dev_err(dev, "xlink write data failed rc = %d\n", rc);
+		return rc;
+	}
+
+	mutex_lock(&priv->lock);
+
+	/* Get msg type */
+	rc = intel_hddl_get_xlink_data(dev,
+				       xlink, c->chan_num,
+				       (u8 *)&msg, &size);
+	if (rc) {
+		mutex_unlock(&priv->lock);
+		return rc;
+	}
+
+	while (msg.msg_type != HDDL_GET_SENS_COMPLETE) {
+		switch (msg.msg_type) {
+		case HDDL_GET_I2C_DEV_ADDR:
+		{
+			i = msg.sensor_type;
+			rc = xlink_write_volatile(xlink, c->chan_num,
+						  (u8 *)&i2c_devs[i]->addr,
+						  sizeof(i2c_devs[i]->addr));
+			if (rc) {
+				dev_err(dev,
+					"xlink write data failed rc = %d\n",
+					rc);
+				mutex_unlock(&priv->lock);
+				return rc;
+			}
+		}
+		break;
+		default:
+			break;
+		}
+		rc = intel_hddl_get_xlink_data(dev,
+					       xlink, c->chan_num,
+					       (u8 *)&msg, &size);
+		if (rc) {
+			mutex_unlock(&priv->lock);
+			return rc;
+		}
+	}
+	intel_hddl_add_xlink_i2c_clients(dev, c, c->i2c_devs,
+					 c->n_clients, 1);
+	mutex_unlock(&priv->lock);
+	return 0;
+}
+
+static int intel_hddl_tsens_data(struct intel_hddl_clients *c)
+{
+	struct intel_hddl_device_priv *priv = c->pdata;
+	struct xlink_handle *xlink = &c->xlink_dev;
+	struct intel_tsens_host **p_tsens;
+	struct intel_hddl_tsens_msg msg;
+	u32 size, i, j;
+	u32 nsens;
+	int rc;
+
+	/* Get Nsens */
+	msg.msg_type = HDDL_GET_NSENS;
+	rc = xlink_write_volatile(xlink, c->chan_num,
+				  (u8 *)&msg, sizeof(msg));
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"xlink write data failed rc = %d\n",
+			rc);
+		return rc;
+	}
+	rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+				       xlink, c->chan_num,
+				       (u8 *)&nsens, &size);
+	if (rc)
+		return rc;
+
+	c->nsens = nsens;
+	p_tsens = devm_kcalloc(&priv->pdev->dev, nsens,
+			       sizeof(struct intel_tsens_host *),
+			       GFP_KERNEL);
+	if (!p_tsens)
+		return -ENOMEM;
+	c->tsens = (void **)p_tsens;
+	for (i = 0; i < nsens; i++) {
+		struct intel_tsens_host *tsens;
+		struct intel_tsens_data *tsens_data;
+
+		tsens = devm_kzalloc(&priv->pdev->dev,
+				     sizeof(struct intel_tsens_host),
+				     GFP_KERNEL);
+		if (!tsens)
+			return -ENOMEM;
+		tsens_data = devm_kzalloc(&priv->pdev->dev,
+					  sizeof(struct intel_tsens_data),
+					  GFP_KERNEL);
+		if (!tsens_data)
+			return -ENOMEM;
+		tsens->t_data = tsens_data;
+
+		/* Get Details*/
+		msg.msg_type = HDDL_GET_SENS_DETAILS;
+		msg.sensor_type = i;
+		rc = xlink_write_volatile(xlink, c->chan_num,
+					  (u8 *)&msg, sizeof(msg));
+		if (rc) {
+			dev_err(&priv->pdev->dev,
+				"xlink write data failed rc = %d\n",
+				rc);
+			return rc;
+		}
+		rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+					       xlink, c->chan_num,
+					       (u8 *)tsens_data, &size);
+		if (rc)
+			return rc;
+
+		/* Get trip info*/
+		tsens->trip_info =
+		devm_kcalloc(&priv->pdev->dev, tsens_data->n_trips,
+			     sizeof(struct intel_tsens_host_trip_info *),
+			     GFP_KERNEL);
+		if (!tsens->trip_info)
+			return -ENOMEM;
+		for (j = 0; j < tsens_data->n_trips; j++) {
+			struct intel_tsens_host_trip_info *t_info;
+
+			t_info =
+			devm_kzalloc(&priv->pdev->dev,
+				     sizeof(struct intel_tsens_host_trip_info),
+				     GFP_KERNEL);
+			if (!t_info)
+				return -ENOMEM;
+			tsens->trip_info[j] = t_info;
+			msg.msg_type = HDDL_GET_SENS_TRIP_INFO;
+			msg.sensor_type = i;
+			msg.trip_info_idx = j;
+			rc = xlink_write_volatile(xlink, c->chan_num,
+						  (u8 *)&msg, sizeof(msg));
+			if (rc) {
+				dev_err(&priv->pdev->dev,
+					"xlink write data failed rc = %d\n",
+					rc);
+				return rc;
+			}
+			rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+						       xlink, c->chan_num,
+						       (u8 *)t_info, &size);
+			if (rc)
+				return rc;
+		}
+		p_tsens[i] = tsens;
+	}
+	/* Send Complete */
+	msg.msg_type = HDDL_GET_SENS_COMPLETE;
+	rc = xlink_write_volatile(xlink, c->chan_num,
+				  (u8 *)&msg, sizeof(msg));
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"xlink write data failed rc = %d\n",
+			rc);
+		return rc;
+	}
+
+	return 0;
+}
+
+static int intel_hddl_device_connect_task(void *data)
+{
+	struct intel_hddl_clients *c = (struct intel_hddl_clients *)data;
+	struct intel_hddl_device_priv *priv = c->pdata;
+	struct intel_hddl_board_info board_info_rcvd;
+	struct xlink_handle *xlink = &c->xlink_dev;
+	struct timespec64 ts;
+	u32 size;
+	int rc;
+
+	c->chan_num = priv->xlink_chan;
+	c->i2c_chan_num = priv->i2c_xlink_chan;
+	c->smbus_adap = priv->smbus_adap;
+	if (intel_hddl_open_xlink_device(&priv->pdev->dev, c)) {
+		dev_err(&priv->pdev->dev, "HDDL open xlink dev failed\n");
+		return -ENODEV;
+	}
+	ktime_get_real_ts64(&ts);
+	rc = xlink_write_volatile(xlink, c->chan_num, (u8 *)&ts,
+				  sizeof(struct timespec64));
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"xlink write data failed rc = %d\n",
+			rc);
+		return rc;
+	}
+
+	size = sizeof(c->board_info);
+	rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+				       xlink, c->chan_num,
+				       (u8 *)&c->board_info, &size);
+	if (rc)
+		return rc;
+	board_info_rcvd.board_id = ~(c->board_info.board_id);
+	rc = xlink_write_volatile(xlink, c->chan_num,
+				  (u8 *)&board_info_rcvd,
+				  sizeof(board_info_rcvd));
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"xlink write data failed rc = %d\n",
+			rc);
+		return rc;
+	}
+
+	rc = intel_hddl_tsens_data(c);
+	if (rc) {
+		dev_err(&priv->pdev->dev, "HDDL: tsens data not rcvd\n");
+		goto close_xlink_dev;
+	}
+	rc = intel_hddl_register_xlink_i2c_adap(&priv->pdev->dev, c);
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"HDDL: register xlink i2c adapter failed\n");
+		goto close_xlink_dev;
+	}
+	rc = intel_hddl_i2c_register_clients(&priv->pdev->dev, c);
+	if (rc) {
+		dev_err(&priv->pdev->dev,
+			"HDDL: register i2c clients failed\n");
+		goto remove_xlink_i2c_adap;
+	}
+	while (!kthread_should_stop())
+		msleep_interruptible(HDDL_NEW_DEV_POLL_TIME);
+
+	return 0;
+
+remove_xlink_i2c_adap:
+	intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+close_xlink_dev:
+	intel_hddl_close_xlink_device(&priv->pdev->dev, c);
+	return rc;
+}
+
+static int intel_hddl_check_for_new_device(struct intel_hddl_device_priv *priv)
+{
+	struct intel_hddl_clients **hddl_clients;
+
+	hddl_clients =
+		intel_hddl_setup_device(&priv->pdev->dev,
+					intel_hddl_device_connect_task,
+					&priv->ndevs, priv->hddl_client,
+					priv);
+
+	if (!hddl_clients) {
+		dev_err(&priv->pdev->dev,
+			"intel_hddl_setup_device returned NULL\n");
+		return 0;
+	}
+	priv->hddl_client = hddl_clients;
+	return 1;
+}
+
+static int intel_hddl_device_init_task(void *data)
+{
+	struct intel_hddl_device_priv *priv =
+		(struct intel_hddl_device_priv *)data;
+
+	while (!kthread_should_stop()) {
+		if (!intel_hddl_check_for_new_device(priv)) {
+			dev_err(&priv->pdev->dev,
+				"Error while checking for new device\n");
+			return -EFAULT;
+		}
+		msleep_interruptible(HDDL_NEW_DEV_POLL_TIME);
+	}
+
+	return 0;
+}
+
+static int intel_hddl_device_init(struct intel_hddl_device_priv *priv)
+{
+	struct i2c_adapter *temp;
+	int j = 0;
+
+	while ((temp = i2c_get_adapter(j))) {
+		if (strstr(temp->name, "SMBus I801"))
+			priv->smbus_adap = temp;
+		i2c_put_adapter(temp);
+		j++;
+	}
+	priv->hddl_dev_init_task = kthread_run(intel_hddl_device_init_task,
+					       (void *)priv,
+					       "hddl_device_init");
+	if (IS_ERR(priv->hddl_dev_init_task)) {
+		dev_err(&priv->pdev->dev, "failed to create thread\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int intel_hddl_server_probe(struct platform_device *pdev)
+{
+	struct intel_hddl_server_plat_data *plat_data;
+	struct intel_hddl_device_priv *priv;
+	int ret, i;
+
+	plat_data = pdev->dev.platform_data;
+	if (!plat_data) {
+		dev_err(&pdev->dev, "Platform data not found\n");
+		return -EINVAL;
+	}
+
+	priv = devm_kzalloc(&pdev->dev,
+			    sizeof(struct intel_hddl_device_priv),
+			    GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	priv->pdev = pdev;
+	priv->plat_data = plat_data;
+	priv->xlink_chan = plat_data->xlink_chan;
+	priv->i2c_xlink_chan = plat_data->i2c_xlink_chan;
+	mutex_init(&priv->lock);
+	g_priv = priv;
+	ret = intel_hddl_cdev_init(priv);
+	if (ret) {
+		dev_err(&pdev->dev, "HDDL char device init failed\n");
+		return -EINVAL;
+	}
+	/*
+	 * https://i2c.info/i2c-bus-specification
+	 * below client address are reserved as per i2c bus specification.
+	 * I2C client Reserved addr: 0x00 - 0x0f
+	 *			     0xf0 - 0xff
+	 *
+	 * hddl_get_free_client will not use standard i2c client
+	 * reserved address, so no need to mark them as reserved.
+	 * mark the address used by i2c clients connected to the host
+	 * as used.
+	 */
+	for (i = 0; i < ARRAY_SIZE(hddl_host_reserved_addrs); i++) {
+		int bit_pos = hddl_host_reserved_addrs[i] -
+				HDDL_FREE_CLIENT_ADDR_START;
+		set_bit(bit_pos, priv->client_addr);
+	}
+	ret = intel_hddl_device_init(priv);
+	if (ret) {
+		dev_err(&pdev->dev, "HDDL device init failed\n");
+		ret = -EINVAL;
+		goto free_cdev;
+	}
+	platform_set_drvdata(pdev, priv);
+
+	return 0;
+free_cdev:
+	intel_hddl_cdev_remove(priv);
+	return ret;
+}
+
+/* Device Exit */
+static int intel_hddl_server_remove(struct platform_device *pdev)
+{
+	struct intel_hddl_device_priv *priv = platform_get_drvdata(pdev);
+	int i;
+
+	if (!priv)
+		return -EINVAL;
+	intel_hddl_cdev_remove(priv);
+	for (i = 0; i < priv->ndevs; i++)
+		intel_hddl_device_remove(priv->hddl_client[i]);
+	kthread_stop(priv->hddl_dev_init_task);
+
+	return 0;
+}
+
+static struct platform_driver intel_hddl_server_driver = {
+	.probe = intel_hddl_server_probe,
+	.remove = intel_hddl_server_remove,
+	.driver = {
+		.name = "intel_hddl_server",
+	},
+};
+
+static struct platform_device *intel_hddl_server_pdev;
+
+static void intel_hddl_server_exit(void)
+{
+	platform_driver_unregister(&intel_hddl_server_driver);
+	platform_device_unregister(intel_hddl_server_pdev);
+}
+
+static int __init intel_hddl_server_init(void)
+{
+	struct intel_hddl_server_plat_data plat;
+	struct platform_device_info pdevinfo;
+	struct platform_device *dd;
+	int ret;
+
+	ret = platform_driver_register(&intel_hddl_server_driver);
+	if (ret) {
+		pr_err("HDDL SERVER: platform_driver_register failed %d", ret);
+		return ret;
+	}
+	memset(&pdevinfo, 0, sizeof(pdevinfo));
+	pdevinfo.name = "intel_hddl_server";
+	pdevinfo.data = &plat;
+	plat.xlink_chan = HDDL_NODE_XLINK_CHANNEL;
+	plat.i2c_xlink_chan = HDDL_I2C_XLINK_CHANNEL;
+	pdevinfo.size_data = sizeof(struct intel_hddl_server_plat_data);
+	dd = platform_device_register_full(&pdevinfo);
+	if (IS_ERR(dd)) {
+		pr_err("HDDL SERVER: platform device register failed\n");
+		platform_driver_unregister(&intel_hddl_server_driver);
+		return -EINVAL;
+	}
+	intel_hddl_server_pdev = dd;
+	return 0;
+}
+
+module_init(intel_hddl_server_init);
+module_exit(intel_hddl_server_exit);
+
+MODULE_DESCRIPTION("Intel HDDL Device host driver");
+MODULE_AUTHOR("Sandeep Singh <sandeep1.singh@intel.com>");
+MODULE_AUTHOR("Vaidya, Mahesh R <mahesh.r.vaidya@intel.com>");
+MODULE_AUTHOR("Udhayakumar C <udhayakumar.c@intel.com>");
+MODULE_LICENSE("GPL v2");
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver
  2021-01-08 21:25 ` [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver mgross
@ 2021-01-10 17:18   ` Rob Herring
  2021-01-11 19:24   ` Rob Herring
  1 sibling, 0 replies; 57+ messages in thread
From: Rob Herring @ 2021-01-10 17:18 UTC (permalink / raw)
  To: mgross
  Cc: markgross, dragan.cvetic, bp, Paul Murphy, paul.walmsley, arnd,
	devicetree, damien.lemoal, leonard.crestez, robh+dt, shawnguo,
	Daniele Alessandrelli, linux-kernel, palmerdabbelt, corbet,
	peng.fan, gregkh, jassisinghbrar

On Fri, 08 Jan 2021 13:25:32 -0800, mgross@linux.intel.com wrote:
> From: Paul Murphy <paul.j.murphy@intel.com>
> 
> Add DT bindings documentation for the Keem Bay VPU IPC driver.
> 
> Cc: Rob Herring <robh+dt@kernel.org>
> Cc: devicetree@vger.kernel.org
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Paul Murphy <paul.j.murphy@intel.com>
> Co-developed-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
> Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
> ---
>  .../soc/intel/intel,keembay-vpu-ipc.yaml      | 153 ++++++++++++++++++
>  1 file changed, 153 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
> 

My bot found errors running 'make dt_binding_check' on your patch:

yamllint warnings/errors:
./Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml:21:9: [warning] wrong indentation: expected 10 but found 8 (indentation)

dtschema/dtc warnings/errors:

See https://patchwork.ozlabs.org/patch/1423960

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 17/34] xlink-ipc: Add xlink ipc device tree bindings
  2021-01-08 21:25 ` [PATCH v2 17/34] xlink-ipc: Add xlink ipc device tree bindings mgross
@ 2021-01-10 17:18   ` Rob Herring
  0 siblings, 0 replies; 57+ messages in thread
From: Rob Herring @ 2021-01-10 17:18 UTC (permalink / raw)
  To: mgross
  Cc: Seamus Kelly, robh+dt, gregkh, arnd, markgross, damien.lemoal,
	palmerdabbelt, corbet, paul.walmsley, shawnguo, peng.fan,
	jassisinghbrar, dragan.cvetic, Ryan Carnaghi, bp,
	leonard.crestez, devicetree, linux-kernel

On Fri, 08 Jan 2021 13:25:43 -0800, mgross@linux.intel.com wrote:
> From: Seamus Kelly <seamus.kelly@intel.com>
> 
> Add device tree bindings for the xLink IPC driver which enables xLink to
> control and communicate with the VPU IP present on the Intel Keem Bay
> SoC.
> 
> Cc: Rob Herring <robh+dt@kernel.org>
> Cc: devicetree@vger.kernel.org
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
> Signed-off-by: Ryan Carnaghi <ryan.r.carnaghi@intel.com>
> ---
>  .../misc/intel,keembay-xlink-ipc.yaml         | 49 +++++++++++++++++++
>  1 file changed, 49 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml
> 

My bot found errors running 'make dt_binding_check' on your patch:

yamllint warnings/errors:
./Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml:21:9: [warning] wrong indentation: expected 10 but found 8 (indentation)

dtschema/dtc warnings/errors:
/builds/robherring/linux-dt-review/Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml: 'additionalProperties' is a required property
/builds/robherring/linux-dt-review/Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml: ignoring, error in schema: 
warning: no schema found in file: ./Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml

See https://patchwork.ozlabs.org/patch/1423958

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings
  2021-01-08 21:25 ` [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings mgross
@ 2021-01-10 17:18   ` Rob Herring
  2021-01-11 19:27   ` Rob Herring
  1 sibling, 0 replies; 57+ messages in thread
From: Rob Herring @ 2021-01-10 17:18 UTC (permalink / raw)
  To: mgross
  Cc: paul.walmsley, devicetree, linux-kernel, Seamus Kelly, markgross,
	damien.lemoal, palmerdabbelt, corbet, peng.fan, arnd,
	dragan.cvetic, leonard.crestez, bp, shawnguo, jassisinghbrar,
	gregkh, robh+dt, Ryan Carnaghi

On Fri, 08 Jan 2021 13:25:45 -0800, mgross@linux.intel.com wrote:
> From: Seamus Kelly <seamus.kelly@intel.com>
> 
> Add device tree bindings for keembay-xlink.
> 
> Cc: Rob Herring <robh+dt@kernel.org>
> Cc: devicetree@vger.kernel.org
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
> Signed-off-by: Ryan Carnaghi <ryan.r.carnaghi@intel.com>
> ---
>  .../bindings/misc/intel,keembay-xlink.yaml    | 27 +++++++++++++++++++
>  1 file changed, 27 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> 

My bot found errors running 'make dt_binding_check' on your patch:

yamllint warnings/errors:
./Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml:21:9: [warning] wrong indentation: expected 10 but found 8 (indentation)

dtschema/dtc warnings/errors:
/builds/robherring/linux-dt-review/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml: 'additionalProperties' is a required property
/builds/robherring/linux-dt-review/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml: ignoring, error in schema: 
warning: no schema found in file: ./Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml

See https://patchwork.ozlabs.org/patch/1423961

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver
  2021-01-08 21:25 ` [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver mgross
  2021-01-10 17:18   ` Rob Herring
@ 2021-01-11 19:24   ` Rob Herring
  2021-01-19 14:32     ` Alessandrelli, Daniele
  1 sibling, 1 reply; 57+ messages in thread
From: Rob Herring @ 2021-01-11 19:24 UTC (permalink / raw)
  To: mgross
  Cc: markgross, arnd, bp, damien.lemoal, dragan.cvetic, gregkh,
	corbet, leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan,
	shawnguo, jassisinghbrar, linux-kernel, Paul Murphy, devicetree,
	Daniele Alessandrelli

On Fri, Jan 08, 2021 at 01:25:32PM -0800, mgross@linux.intel.com wrote:
> From: Paul Murphy <paul.j.murphy@intel.com>
> 
> Add DT bindings documentation for the Keem Bay VPU IPC driver.
> 
> Cc: Rob Herring <robh+dt@kernel.org>
> Cc: devicetree@vger.kernel.org
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Paul Murphy <paul.j.murphy@intel.com>
> Co-developed-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
> Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>

Needs your Sob.

> ---
>  .../soc/intel/intel,keembay-vpu-ipc.yaml      | 153 ++++++++++++++++++

This doesn't fit somewhere else?

>  1 file changed, 153 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
> 
> diff --git a/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml b/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
> new file mode 100644
> index 000000000000..cd1c4abe8bc9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
> @@ -0,0 +1,153 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +# Copyright (c) Intel Corporation. All rights reserved.
> +%YAML 1.2
> +---
> +$id: "http://devicetree.org/schemas/soc/intel/intel,keembay-vpu-ipc.yaml#"
> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> +
> +title: Intel Keem Bay VPU IPC
> +
> +maintainers:
> +  - Paul Murphy <paul.j.murphy@intel.com>
> +
> +description:
> +  The VPU IPC driver facilitates loading of firmware, control, and communication
> +  with the VPU over the IPC FIFO in the Intel Keem Bay SoC.

VPU is never defined. 

Bindings are for h/w blocks, not drivers.

> +
> +properties:
> +  compatible:
> +    oneOf:
> +      - items:
> +        - const: intel,keembay-vpu-ipc
> +
> +  reg:
> +    items:
> +      - description: NCE WDT registers
> +      - description: NCE TIM_GEN_CONFIG registers
> +      - description: MSS WDT registers
> +      - description: MSS TIM_GEN_CONFIG registers
> +
> +  reg-names:
> +    items:
> +      - const: nce_wdt
> +      - const: nce_tim_cfg
> +      - const: mss_wdt
> +      - const: mss_tim_cfg
> +
> +  memory-region:
> +    items:
> +      - description: reference to the VPU reserved memory region
> +      - description: reference to the X509 reserved memory region
> +      - description: reference to the MSS IPC area
> +
> +  clocks:
> +    items:
> +      - description: cpu clock
> +      - description: pll 0 out 0 rate
> +      - description: pll 0 out 1 rate
> +      - description: pll 0 out 2 rate
> +      - description: pll 0 out 3 rate
> +      - description: pll 1 out 0 rate
> +      - description: pll 1 out 1 rate
> +      - description: pll 1 out 2 rate
> +      - description: pll 1 out 3 rate
> +      - description: pll 2 out 0 rate
> +      - description: pll 2 out 1 rate
> +      - description: pll 2 out 2 rate
> +      - description: pll 2 out 3 rate
> +
> +  clock-names:
> +    items:
> +      - const: cpu_clock
> +      - const: pll_0_out_0
> +      - const: pll_0_out_1
> +      - const: pll_0_out_2
> +      - const: pll_0_out_3
> +      - const: pll_1_out_0
> +      - const: pll_1_out_1
> +      - const: pll_1_out_2
> +      - const: pll_1_out_3
> +      - const: pll_2_out_0
> +      - const: pll_2_out_1
> +      - const: pll_2_out_2
> +      - const: pll_2_out_3
> +
> +  interrupts:
> +    items:
> +      - description: number of NCE sub-system WDT timeout IRQ
> +      - description: number of MSS sub-system WDT timeout IRQ
> +
> +  interrupt-names:
> +    items:
> +      - const: nce_wdt
> +      - const: mss_wdt
> +
> +  intel,keembay-vpu-ipc-nce-wdt-redirect:
> +    $ref: "/schemas/types.yaml#/definitions/uint32"
> +    description:
> +      Number to which we will request that the NCE sub-system
> +      re-directs it's WDT timeout IRQ
> +
> +  intel,keembay-vpu-ipc-mss-wdt-redirect:
> +    $ref: "/schemas/types.yaml#/definitions/uint32"
> +    description:
> +      Number to which we will request that the MSS sub-system
> +      re-directs it's WDT timeout IRQ

These look like the same value as the interrupt numbers?

> +
> +  intel,keembay-vpu-ipc-imr:
> +    $ref: "/schemas/types.yaml#/definitions/uint32"
> +    description:
> +      IMR (isolated memory region) number which we will request
> +      the runtime service uses to protect the VPU memory region
> +      before authentication
> +
> +  intel,keembay-vpu-ipc-id:
> +    $ref: "/schemas/types.yaml#/definitions/uint32"
> +    description: The VPU ID to be passed to the VPU firmware.
> +
> +additionalProperties: False
> +
> +examples:
> +  - |
> +    #include <dt-bindings/interrupt-controller/arm-gic.h>
> +    vpu-ipc@3f00209c {
> +        compatible = "intel,keembay-vpu-ipc";
> +        reg = <0x3f00209c 0x10>,
> +              <0x3f003008 0x4>,
> +              <0x2082009c 0x10>,
> +              <0x20821008 0x4>;
> +        reg-names = "nce_wdt",
> +                    "nce_tim_cfg",
> +                    "mss_wdt",
> +                    "mss_tim_cfg";
> +        memory-region = <&vpu_reserved>,
> +                        <&vpu_x509_reserved>,
> +                        <&mss_ipc_reserved>;
> +        clocks = <&scmi_clk 0>,
> +                 <&scmi_clk 0>,
> +                 <&scmi_clk 1>,
> +                 <&scmi_clk 2>,
> +                 <&scmi_clk 3>,
> +                 <&scmi_clk 4>,
> +                 <&scmi_clk 5>,
> +                 <&scmi_clk 6>,
> +                 <&scmi_clk 7>,
> +                 <&scmi_clk 8>,
> +                 <&scmi_clk 9>,
> +                 <&scmi_clk 10>,
> +                 <&scmi_clk 11>;
> +        clock-names = "cpu_clock",
> +                      "pll_0_out_0", "pll_0_out_1",
> +                      "pll_0_out_2", "pll_0_out_3",
> +                      "pll_1_out_0", "pll_1_out_1",
> +                      "pll_1_out_2", "pll_1_out_3",
> +                      "pll_2_out_0", "pll_2_out_1",
> +                      "pll_2_out_2", "pll_2_out_3";
> +        interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>,
> +                     <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>;
> +        interrupt-names = "nce_wdt", "mss_wdt";
> +        intel,keembay-vpu-ipc-nce-wdt-redirect = <63>;
> +        intel,keembay-vpu-ipc-mss-wdt-redirect = <47>;
> +        intel,keembay-vpu-ipc-imr = <9>;
> +        intel,keembay-vpu-ipc-id = <0>;
> +    };
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings
  2021-01-08 21:25 ` [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings mgross
  2021-01-10 17:18   ` Rob Herring
@ 2021-01-11 19:27   ` Rob Herring
  1 sibling, 0 replies; 57+ messages in thread
From: Rob Herring @ 2021-01-11 19:27 UTC (permalink / raw)
  To: mgross
  Cc: markgross, arnd, bp, damien.lemoal, dragan.cvetic, gregkh,
	corbet, leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan,
	shawnguo, jassisinghbrar, linux-kernel, Seamus Kelly, devicetree,
	Ryan Carnaghi

On Fri, Jan 08, 2021 at 01:25:45PM -0800, mgross@linux.intel.com wrote:
> From: Seamus Kelly <seamus.kelly@intel.com>
> 
> Add device tree bindings for keembay-xlink.
> 
> Cc: Rob Herring <robh+dt@kernel.org>
> Cc: devicetree@vger.kernel.org
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Seamus Kelly <seamus.kelly@intel.com>
> Signed-off-by: Ryan Carnaghi <ryan.r.carnaghi@intel.com>
> ---
>  .../bindings/misc/intel,keembay-xlink.yaml    | 27 +++++++++++++++++++
>  1 file changed, 27 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> 
> diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> new file mode 100644
> index 000000000000..89c34018fa04
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> @@ -0,0 +1,27 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +# Copyright (c) Intel Corporation. All rights reserved.
> +%YAML 1.2
> +---
> +$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink.yaml#"
> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> +
> +title: Intel Keem Bay xlink
> +
> +maintainers:
> +  - Seamus Kelly <seamus.kelly@intel.com>
> +
> +description: |
> +  The Keem Bay xlink driver enables the communication/control sub-system
> +  for internal and external communications to the Intel Keem Bay SoC.
> +
> +properties:
> +  compatible:
> +    oneOf:
> +      - items:
> +        - const: intel,keembay-xlink
> +
> +examples:
> +  - |
> +    xlink {
> +        compatible = "intel,keembay-xlink";
> +    };

A node with a compatible and nothing else is generally a sign of abusing 
DT to instantiate a driver. You don't need DT for that.
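
For illustration only -- one way to do that with no DT node at all is to
create the platform device from code (the names below are placeholders,
not taken from this series):

	#include <linux/err.h>
	#include <linux/init.h>
	#include <linux/platform_device.h>

	static struct platform_device *xlink_pdev;

	static int __init xlink_pdev_init(void)
	{
		/* Instantiate the device from platform code; the driver
		 * then matches on the name, no compatible-only DT node
		 * needed.
		 */
		xlink_pdev = platform_device_register_simple("keembay-xlink",
							     -1, NULL, 0);
		return PTR_ERR_OR_ZERO(xlink_pdev);
	}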

Rob

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 29/34] Intel tsens i2c slave driver.
  2021-01-08 21:25 ` [PATCH v2 29/34] Intel tsens i2c slave driver mgross
@ 2021-01-12  7:15   ` Randy Dunlap
  2021-01-25 23:39     ` mark gross
  0 siblings, 1 reply; 57+ messages in thread
From: Randy Dunlap @ 2021-01-12  7:15 UTC (permalink / raw)
  To: mgross, markgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, C, Udhayakumar, C

On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> index 8b263fdd80c3..c2138339bd89 100644
> --- a/drivers/misc/intel_tsens/Kconfig
> +++ b/drivers/misc/intel_tsens/Kconfig
> @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
>  	  Say Y if using a processor that includes the Intel VPU such as
>  	  Keem Bay.  If unsure, say N.
>  
> +config INTEL_TSENS_I2C_SLAVE
> +	bool "I2C slave driver for intel tsens"

Why bool instead of tristate?

> +	depends on INTEL_TSENS_LOCAL_HOST
> +	select I2C
> +	select I2C_SLAVE
> +	help
> +	  This option enables tsens i2c slave driver.

	                            I2C

> +
> +	  This driver is used for reporting thermal data via I2C
> +	  SMBUS to remote host.
> +	  Enable this option if you want to have support for thermal
> +	  management controller

	             controller.

> +	  Say Y if using a processor that includes the Intel VPU such as
> +	  Keem Bay.  If unsure, say N.


-- 
~Randy


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver
  2021-01-11 19:24   ` Rob Herring
@ 2021-01-19 14:32     ` Alessandrelli, Daniele
  0 siblings, 0 replies; 57+ messages in thread
From: Alessandrelli, Daniele @ 2021-01-19 14:32 UTC (permalink / raw)
  To: robh, mgross
  Cc: corbet, palmerdabbelt, peng.fan, damien.lemoal, bp,
	leonard.crestez, markgross, linux-kernel, Murphy, Paul J,
	shawnguo, devicetree, paul.walmsley, arnd, jassisinghbrar,
	gregkh, dragan.cvetic

Hi Rob,

Thanks for your review.

On Mon, 2021-01-11 at 13:24 -0600, Rob Herring wrote:
> On Fri, Jan 08, 2021 at 01:25:32PM -0800, mgross@linux.intel.com
> wrote:
> > From: Paul Murphy <paul.j.murphy@intel.com>
> > 
> > Add DT bindings documentation for the Keem Bay VPU IPC driver.
> > 
> > Cc: Rob Herring <robh+dt@kernel.org>
> > Cc: devicetree@vger.kernel.org
> > Reviewed-by: Mark Gross <mgross@linux.intel.com>
> > Signed-off-by: Paul Murphy <paul.j.murphy@intel.com>
> > Co-developed-by: Daniele Alessandrelli <
> > daniele.alessandrelli@intel.com>
> > Signed-off-by: Daniele Alessandrelli <
> > daniele.alessandrelli@intel.com>
> 
> Needs your Sob.
> 
> > ---
> >  .../soc/intel/intel,keembay-vpu-ipc.yaml      | 153
> > ++++++++++++++++++
> 
> This doesn't fit somewhere else?

It's quite a SoC-specific driver, designed to control (start, stop,
monitor, etc.) the Vision Processing Unit (VPU) integrated in the Keem
Bay SoC.

Do you think it would fit better somewhere else?

> 
> >  1 file changed, 153 insertions(+)
> >  create mode 100644
> > Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-
> > ipc.yaml
> > 
> > diff --git
> > a/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-
> > ipc.yaml
> > b/Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-
> > ipc.yaml
> > new file mode 100644
> > index 000000000000..cd1c4abe8bc9
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/soc/intel/intel,keembay-
> > vpu-ipc.yaml
> > @@ -0,0 +1,153 @@
> > +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> > +# Copyright (c) Intel Corporation. All rights reserved.
> > +%YAML 1.2
> > +---
> > +$id: "
> > http://devicetree.org/schemas/soc/intel/intel,keembay-vpu-ipc.yaml#
> > "
> > +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> > +
> > +title: Intel Keem Bay VPU IPC
> > +
> > +maintainers:
> > +  - Paul Murphy <paul.j.murphy@intel.com>
> > +
> > +description:
> > +  The VPU IPC driver facilitates loading of firmware, control, and
> > communication
> > +  with the VPU over the IPC FIFO in the Intel Keem Bay SoC.
> 
> VPU is never defined. 

We'll spell out the acronym in v3.

Anyway, VPU = Vision Processing Unit


> 
> Bindings are for h/w blocks, not drivers.

Will be fixed in v3

> 
> > +
> > +properties:
> > +  compatible:
> > +    oneOf:
> > +      - items:
> > +        - const: intel,keembay-vpu-ipc
> > +
> > +  reg:
> > +    items:
> > +      - description: NCE WDT registers
> > +      - description: NCE TIM_GEN_CONFIG registers
> > +      - description: MSS WDT registers
> > +      - description: MSS TIM_GEN_CONFIG registers
> > +
> > +  reg-names:
> > +    items:
> > +      - const: nce_wdt
> > +      - const: nce_tim_cfg
> > +      - const: mss_wdt
> > +      - const: mss_tim_cfg
> > +
> > +  memory-region:
> > +    items:
> > +      - description: reference to the VPU reserved memory region
> > +      - description: reference to the X509 reserved memory region
> > +      - description: reference to the MSS IPC area
> > +
> > +  clocks:
> > +    items:
> > +      - description: cpu clock
> > +      - description: pll 0 out 0 rate
> > +      - description: pll 0 out 1 rate
> > +      - description: pll 0 out 2 rate
> > +      - description: pll 0 out 3 rate
> > +      - description: pll 1 out 0 rate
> > +      - description: pll 1 out 1 rate
> > +      - description: pll 1 out 2 rate
> > +      - description: pll 1 out 3 rate
> > +      - description: pll 2 out 0 rate
> > +      - description: pll 2 out 1 rate
> > +      - description: pll 2 out 2 rate
> > +      - description: pll 2 out 3 rate
> > +
> > +  clock-names:
> > +    items:
> > +      - const: cpu_clock
> > +      - const: pll_0_out_0
> > +      - const: pll_0_out_1
> > +      - const: pll_0_out_2
> > +      - const: pll_0_out_3
> > +      - const: pll_1_out_0
> > +      - const: pll_1_out_1
> > +      - const: pll_1_out_2
> > +      - const: pll_1_out_3
> > +      - const: pll_2_out_0
> > +      - const: pll_2_out_1
> > +      - const: pll_2_out_2
> > +      - const: pll_2_out_3
> > +
> > +  interrupts:
> > +    items:
> > +      - description: number of NCE sub-system WDT timeout IRQ
> > +      - description: number of MSS sub-system WDT timeout IRQ
> > +
> > +  interrupt-names:
> > +    items:
> > +      - const: nce_wdt
> > +      - const: mss_wdt
> > +
> > +  intel,keembay-vpu-ipc-nce-wdt-redirect:
> > +    $ref: "/schemas/types.yaml#/definitions/uint32"
> > +    description:
> > +      Number to which we will request that the NCE sub-system
> > +      re-directs it's WDT timeout IRQ
> > +
> > +  intel,keembay-vpu-ipc-mss-wdt-redirect:
> > +    $ref: "/schemas/types.yaml#/definitions/uint32"
> > +    description:
> > +      Number to which we will request that the MSS sub-system
> > +      re-directs it's WDT timeout IRQ
> 
> These look like the same value as the interrupt numbers?

That's a very good point. We'll drop these additional properties and
re-use the interrupt numbers.
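
(Sketch of what the driver side could then do -- the wrapper name here
is made up for illustration, only platform_get_irq_byname() is real:)

	#include <linux/platform_device.h>

	static int keembay_vpu_ipc_nce_wdt_irq(struct platform_device *pdev)
	{
		/* Take the IRQ from the standard interrupts/interrupt-names
		 * properties instead of a duplicated custom property.
		 */
		return platform_get_irq_byname(pdev, "nce_wdt");
	}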

> 
> > +
> > +  intel,keembay-vpu-ipc-imr:
> > +    $ref: "/schemas/types.yaml#/definitions/uint32"
> > +    description:
> > +      IMR (isolated memory region) number which we will request
> > +      the runtime service uses to protect the VPU memory region
> > +      before authentication
> > +
> > +  intel,keembay-vpu-ipc-id:
> > +    $ref: "/schemas/types.yaml#/definitions/uint32"
> > +    description: The VPU ID to be passed to the VPU firmware.
> > +
> > +additionalProperties: False
> > +
> > +examples:
> > +  - |
> > +    #include <dt-bindings/interrupt-controller/arm-gic.h>
> > +    vpu-ipc@3f00209c {
> > +        compatible = "intel,keembay-vpu-ipc";
> > +        reg = <0x3f00209c 0x10>,
> > +              <0x3f003008 0x4>,
> > +              <0x2082009c 0x10>,
> > +              <0x20821008 0x4>;
> > +        reg-names = "nce_wdt",
> > +                    "nce_tim_cfg",
> > +                    "mss_wdt",
> > +                    "mss_tim_cfg";
> > +        memory-region = <&vpu_reserved>,
> > +                        <&vpu_x509_reserved>,
> > +                        <&mss_ipc_reserved>;
> > +        clocks = <&scmi_clk 0>,
> > +                 <&scmi_clk 0>,
> > +                 <&scmi_clk 1>,
> > +                 <&scmi_clk 2>,
> > +                 <&scmi_clk 3>,
> > +                 <&scmi_clk 4>,
> > +                 <&scmi_clk 5>,
> > +                 <&scmi_clk 6>,
> > +                 <&scmi_clk 7>,
> > +                 <&scmi_clk 8>,
> > +                 <&scmi_clk 9>,
> > +                 <&scmi_clk 10>,
> > +                 <&scmi_clk 11>;
> > +        clock-names = "cpu_clock",
> > +                      "pll_0_out_0", "pll_0_out_1",
> > +                      "pll_0_out_2", "pll_0_out_3",
> > +                      "pll_1_out_0", "pll_1_out_1",
> > +                      "pll_1_out_2", "pll_1_out_3",
> > +                      "pll_2_out_0", "pll_2_out_1",
> > +                      "pll_2_out_2", "pll_2_out_3";
> > +        interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>,
> > +                     <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>;
> > +        interrupt-names = "nce_wdt", "mss_wdt";
> > +        intel,keembay-vpu-ipc-nce-wdt-redirect = <63>;
> > +        intel,keembay-vpu-ipc-mss-wdt-redirect = <47>;
> > +        intel,keembay-vpu-ipc-imr = <9>;
> > +        intel,keembay-vpu-ipc-id = <0>;
> > +    };
> > -- 
> > 2.17.1
> > 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 08/34] misc: xlink-pcie: Add documentation for XLink PCIe driver
  2021-01-08 21:25 ` [PATCH v2 08/34] misc: xlink-pcie: Add documentation for XLink PCIe driver mgross
@ 2021-01-19 19:36   ` Randy Dunlap
  2021-01-24 18:27     ` Thokala, Srikanth
  0 siblings, 1 reply; 57+ messages in thread
From: Randy Dunlap @ 2021-01-19 19:36 UTC (permalink / raw)
  To: mgross, markgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Srikanth Thokala, linux-doc

Hi,
Here are a few doc comments for you:

On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> From: Srikanth Thokala <srikanth.thokala@intel.com>
> 
> Provide overview of XLink PCIe driver implementation
> 
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
> ---
>  Documentation/vpu/index.rst      |  1 +
>  Documentation/vpu/xlink-pcie.rst | 90 ++++++++++++++++++++++++++++++++
>  2 files changed, 91 insertions(+)
>  create mode 100644 Documentation/vpu/xlink-pcie.rst
> 

> diff --git a/Documentation/vpu/xlink-pcie.rst b/Documentation/vpu/xlink-pcie.rst
> new file mode 100644
> index 000000000000..2d877c966b1e
> --- /dev/null
> +++ b/Documentation/vpu/xlink-pcie.rst
> @@ -0,0 +1,90 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +================================
> +Kernel driver: Xlink-pcie driver
> +================================
> +Supported chips:
> +  * Intel Edge.AI Computer Vision platforms: Keem Bay
> +    Suffix: Bay
> +    Slave address: 6240
> +    Datasheet: Publicly available at Intel
> +
> +Author: Srikanth Thokala Srikanth.Thokala@intel.com
> +
> +Introduction
> +============
> +The Xlink-pcie driver provides transport layer implementation for
> +the data transfers to support Xlink protocol subsystem communication with the
> +peer device. i.e, between remote host system and Keem Bay device.

        device, i.e.,

> +
> +The Keem Bay device is an ARM-based SOC that includes a vision processing
> +unit (VPU) and deep learning, neural network core in the hardware.
> +The Xlink-pcie driver exports a functional device endpoint to the Keem Bay
> +device and supports two-way communication with the peer device.
> +
> +High-level architecture
> +=======================
> +Remote Host: IA CPU
> +Local Host: ARM CPU (Keem Bay)::
> +
> +        +------------------------------------------------------------------------+
> +        |  Remote Host IA CPU              | | Local Host ARM CPU (Keem Bay) |   |
> +        +==================================+=+===============================+===+
> +        |  User App                        | | User App                      |   |
> +        +----------------------------------+-+-------------------------------+---+
> +        |   XLink UAPI                     | | XLink UAPI                    |   |
> +        +----------------------------------+-+-------------------------------+---+
> +        |   XLink Core                     | | XLink Core                    |   |
> +        +----------------------------------+-+-------------------------------+---+
> +        |   XLink PCIe                     | | XLink PCIe                    |   |
> +        +----------------------------------+-+-------------------------------+---+
> +        |   XLink-PCIe Remote Host driver  | | XLink-PCIe Local Host driver  |   |
> +        +----------------------------------+-+-------------------------------+---+
> +        |-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:|:|:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:|
> +        +----------------------------------+-+-------------------------------+---+
> +        |     PCIe Host Controller         | | PCIe Device Controller        | HW|
> +        +----------------------------------+-+-------------------------------+---+
> +               ^                                             ^
> +               |                                             |
> +               |------------- PCIe x2 Link  -----------------|
> +
> +This XLink PCIe driver comprises of two variants:
> +* Local Host driver
> +
> +  * Intended for ARM CPU
> +  * It is based on PCI Endpoint Framework
> +  * Driver path: {tree}/drivers/misc/Xlink-pcie/local_host
> +
> +* Remote Host driver
> +
> +       * Intended for IA CPU
> +       * It is a PCIe endpoint driver
> +       * Driver path: {tree}/drivers/misc/Xlink-pcie/remote_host
> +
> +XLink PCIe communication between local host and remote host is achieved through
> +ring buffer management and MSI/Doorbell interrupts.
> +
> +The Xlink-pcie driver subsystem registers the Keem Bay device as an endpoint
> +driver and provides standard linux PCIe sysfs interface, #

                                Linux
What is the '#' sign for above?

> +/sys/bus/pci/devices/xxxx:xx:xx.0/
> +
> +
> +XLink protocol subsystem
> +========================
> +Xlink is an abstracted control and communication subsystem based on channel
> +identification. It is intended to support VPU technology both at SoC level as
> +well as at IP level, over multiple interfaces.
> +
> +- The Xlink subsystem abstracts several types of communication channels
> +  underneath, allowing the usage of different interfaces with the
> +  same function call interface.
> +- The Communication channels are full-duplex protocol channels allowing
> +  concurrent bidirectional communication.
> +- The Xlink subsystem also supports control operations to VPU either
> +  from standalone local system or from remote system based on communication
> +  interface underneath.
> +- The Xlink subsystem supports the following communication interfaces:
> +    * USB CDC
> +    * Gigabit Ethernet
> +    * PCIe
> +    * IPC
> 


-- 
~Randy
"He closes his eyes and drops the goggles.  You can't get hurt
by looking at a bitmap.  Or can you?"
(Neal Stephenson: Snow Crash)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 20/34] xlink-core: Add xlink core driver xLink
  2021-01-08 21:25 ` [PATCH v2 20/34] xlink-core: Add xlink core driver xLink mgross
@ 2021-01-19 19:58   ` Randy Dunlap
  0 siblings, 0 replies; 57+ messages in thread
From: Randy Dunlap @ 2021-01-19 19:58 UTC (permalink / raw)
  To: mgross, markgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, Seamus Kelly, Derek Kiernan, linux-doc

Doc comments only:

On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> diff --git a/Documentation/vpu/xlink-core.rst b/Documentation/vpu/xlink-core.rst
> new file mode 100644
> index 000000000000..1c471ec803d3
> --- /dev/null
> +++ b/Documentation/vpu/xlink-core.rst
> @@ -0,0 +1,81 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=============================
> +xLink-core software subsystem
> +=============================
> +
> +The purpose of the xLink software subsystem is to facilitate communication
> +between multiple users on multiple nodes in the system.
> +
> +There are three types of xLink nodes:
> +
> +1. Remote Host: this is an external IA/x86 host system that is only capable of
> +   communicating directly to the Local Host node on VPU 2.x products.
> +2. Local Host: this is the ARM core within the VPU 2.x  SoC. The Local Host can
> +   communicate upstream to the Remote Host node, and downstream to the VPU IP
> +   node.
> +3. VPU IP: this is the Leon RT core within the VPU 2.x SoC. The VPU IP can only
> +   communicate upstream to the Local Host node.
> +
> +xLink provides a common API across all interfaces for users to access xLink
> +functions and provides user space APIs via an IOCTL interface implemented in
> +the xLink core.
> +
> +xLink manages communications from one interface to another and provides routing
> +of data through multiple channels across a single physical interface.
> +
> +It exposes a common API across all interfaces at both kernel and user levels
> +for processes/applications to access.
> +
> +It has typical API types (open, close, read, write) that you would associate
> +with a communication interface.
> +
> +It also has other APIs that are related to other functions that the device can
> +perform, e.g. boot, reset get/set device mode.
> +The driver is broken down into 4 source files.
> +
> +xlink-core:
> +Contains driver initialization, driver API and IOCTL interface (for user
> +space).
> +
> +xlink-multiplexer:
> +The Multiplexer component is responsible for securely routing messages through
> +multiple communication channels over a single physical interface.
> +
> +xlink-dispatcher:
> +The Dispatcher component is responsible for queueing and handling xLink
> +communication requests from all users in the system and invoking the underlying
> +platform interface drivers.
> +
> +xlink-platform:
> +provides abstraction to each interface supported (PCIe, USB, IPC, etc).
> +
> +Typical xLink transaction (simplified):
> +When a user wants to send data across an interface via xLink it firstly calls
> +xlink connect which connects to the relevant interface (PCIe, USB, IPC, etc.)
> +and then xlink open channel.
> +
> +Then it calls xlink write function, this takes the data, passes it to the

                             function. This takes

> +kernel which packages up the data and channel and then adds it to a transmit
> +queue.
> +
> +A separate thread reads this transaction queue and pops off data if available
> +and passes the data to the underlying interface (e.g. PCIe) write function.
> +Using this thread provides serialization of transactions and decouples the user
> +write from the platform write.
> +
> +On the other side of the interface, a thread is continually reading the
> +interface (e.g. PCIe) via the platform interface read function and if it reads
> +any data it adds it to channel packet container.
> +
> +The application at this side of the interface will have called xlink connect,
> +opened the channel and called xlink read function to read data from the
> +interface and if any exists for that channel , the data gets popped from the

                                        channel, the

> +channel packet container and copied from kernel space to user space buffer
> +provided by the call.
> +
> +xLink can handle API requests from multi-process and multi-threaded
> +application/processes.
> +
> +xLink maintains 4096 channels per device connected (via xlink connect) and
> +maintains a separate channel infrastructure for each device.


-- 
~Randy
"He closes his eyes and drops the goggles.  You can't get hurt
by looking at a bitmap.  Or can you?"
(Neal Stephenson: Snow Crash)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
  2021-01-08 21:25 ` [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host mgross
@ 2021-01-20 17:57   ` Greg KH
  2021-01-24 11:48     ` Thokala, Srikanth
  0 siblings, 1 reply; 57+ messages in thread
From: Greg KH @ 2021-01-20 17:57 UTC (permalink / raw)
  To: mgross
  Cc: markgross, arnd, bp, damien.lemoal, dragan.cvetic, corbet,
	leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan, robh+dt,
	shawnguo, jassisinghbrar, linux-kernel, Srikanth Thokala,
	Derek Kiernan

On Fri, Jan 08, 2021 at 01:25:35PM -0800, mgross@linux.intel.com wrote:
> From: Srikanth Thokala <srikanth.thokala@intel.com>
> 
> Add PCIe EPF driver for local host (lh) to configure BAR's and other
> HW resources. Underlying PCIe HW controller is a Synopsys DWC PCIe core.
> 
> Cc: Derek Kiernan <derek.kiernan@xilinx.com>
> Cc: Dragan Cvetic <dragan.cvetic@xilinx.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
> ---
>  MAINTAINERS                                 |   6 +
>  drivers/misc/Kconfig                        |   1 +
>  drivers/misc/Makefile                       |   1 +
>  drivers/misc/xlink-pcie/Kconfig             |   9 +
>  drivers/misc/xlink-pcie/Makefile            |   1 +
>  drivers/misc/xlink-pcie/local_host/Makefile |   2 +
>  drivers/misc/xlink-pcie/local_host/epf.c    | 413 ++++++++++++++++++++
>  drivers/misc/xlink-pcie/local_host/epf.h    |  39 ++
>  drivers/misc/xlink-pcie/local_host/xpcie.h  |  38 ++

Why such a deep directory tree?  Why is "local_host" needed?

Anyway, one thing stood out instantly:

> +static int intel_xpcie_check_bar(struct pci_epf *epf,
> +				 struct pci_epf_bar *epf_bar,
> +				 enum pci_barno barno,
> +				 size_t size, u8 reserved_bar)
> +{
> +	if (reserved_bar & (1 << barno)) {
> +		dev_err(&epf->dev, "BAR%d is already reserved\n", barno);
> +		return -EFAULT;

That error is only allowed when you really have a fault from
reading/writing to/from userspace memory.  Not this type of foolish
programming error by the caller.

> +	}
> +
> +	if (epf_bar->size != 0 && epf_bar->size < size) {
> +		dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
> +		return -ENOMEM;

Did you really run out of memory, or were the parameters sent to you
incorrect?  -EINVAL is the proper thing here, right?
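
I.e., keeping the names from the quoted patch, something like this
(untested):

	static int intel_xpcie_check_bar(struct pci_epf *epf,
					 struct pci_epf_bar *epf_bar,
					 enum pci_barno barno,
					 size_t size, u8 reserved_bar)
	{
		/* Caller misuse, not a userspace access fault */
		if (reserved_bar & (1 << barno)) {
			dev_err(&epf->dev, "BAR%d is already reserved\n", barno);
			return -EINVAL;
		}

		/* Bad parameters, not an allocation failure */
		if (epf_bar->size != 0 && epf_bar->size < size) {
			dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
			return -EINVAL;
		}

		return 0;
	}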



> +	}
> +
> +	return 0;
> +}
> +
> +static int intel_xpcie_configure_bar(struct pci_epf *epf,
> +				     const struct pci_epc_features
> +					*epc_features)

Odd indentation :(
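
e.g. splitting before the function name keeps the parameters aligned
(sketch):

	static int
	intel_xpcie_configure_bar(struct pci_epf *epf,
				  const struct pci_epc_features *epc_features)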

> +{
> +	struct pci_epf_bar *epf_bar;
> +	bool bar_fixed_64bit;
> +	int ret, i;
> +
> +	for (i = BAR_0; i <= BAR_5; i++) {
> +		epf_bar = &epf->bar[i];
> +		bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> +		if (bar_fixed_64bit)
> +			epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> +		if (epc_features->bar_fixed_size[i])
> +			epf_bar->size = epc_features->bar_fixed_size[i];
> +
> +		if (i == BAR_2) {
> +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> +						    BAR2_MIN_SIZE,
> +						    epc_features->reserved_bar);
> +			if (ret)
> +				return ret;
> +		}
> +
> +		if (i == BAR_4) {
> +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> +						    BAR4_MIN_SIZE,
> +						    epc_features->reserved_bar);
> +			if (ret)
> +				return ret;
> +		}

Why do you need to check all of this?  Where is the data coming from
that could be incorrect?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface
  2021-01-08 21:25 ` [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface mgross
@ 2021-01-20 17:59   ` Greg KH
  2021-01-21 23:20     ` mark gross
  2021-01-24 11:46     ` Thokala, Srikanth
  0 siblings, 2 replies; 57+ messages in thread
From: Greg KH @ 2021-01-20 17:59 UTC (permalink / raw)
  To: mgross
  Cc: markgross, arnd, bp, damien.lemoal, dragan.cvetic, corbet,
	leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan, robh+dt,
	shawnguo, jassisinghbrar, linux-kernel, Srikanth Thokala

On Fri, Jan 08, 2021 at 01:25:41PM -0800, mgross@linux.intel.com wrote:
> From: Srikanth Thokala <srikanth.thokala@intel.com>
> 
> Provide interface for XLink layer to interact with XLink PCIe transport
> layer on both local host and remote host.
> 
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
> ---
>  drivers/misc/xlink-pcie/common/interface.c   | 109 +++++++++++++++++++
>  drivers/misc/xlink-pcie/local_host/Makefile  |   1 +
>  drivers/misc/xlink-pcie/remote_host/Makefile |   1 +
>  3 files changed, 111 insertions(+)
>  create mode 100644 drivers/misc/xlink-pcie/common/interface.c
> 
> diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
> new file mode 100644
> index 000000000000..56c1d9ed9d8f
> --- /dev/null
> +++ b/drivers/misc/xlink-pcie/common/interface.c
> @@ -0,0 +1,109 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*****************************************************************************
> + *
> + * Intel Keem Bay XLink PCIe Driver
> + *
> + * Copyright (C) 2020 Intel Corporation
> + *
> + ****************************************************************************/

Do you really need the ******* mess?  :)
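
The usual form would be just the plain block comment with the same
content:

	/*
	 * Intel Keem Bay XLink PCIe Driver
	 *
	 * Copyright (C) 2020 Intel Corporation
	 */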

> +
> +#include <linux/xlink_drv_inf.h>
> +
> +#include "core.h"
> +#include "xpcie.h"
> +
> +/* Define xpcie driver interface API */
> +int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
> +{
> +	if (!sw_device_id_list || !num_devices)
> +		return -EINVAL;
> +
> +	*num_devices = intel_xpcie_get_device_num(sw_device_id_list);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(xlink_pcie_get_device_list);

EXPORT_SYMBOL_GPL() for all of these perhaps?  I have to ask...
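
i.e. (sketch):

	EXPORT_SYMBOL_GPL(xlink_pcie_get_device_list);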

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface
  2021-01-20 17:59   ` Greg KH
@ 2021-01-21 23:20     ` mark gross
  2021-01-24 11:46     ` Thokala, Srikanth
  1 sibling, 0 replies; 57+ messages in thread
From: mark gross @ 2021-01-21 23:20 UTC (permalink / raw)
  To: Greg KH
  Cc: mgross, markgross, arnd, bp, damien.lemoal, dragan.cvetic,
	corbet, leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan,
	robh+dt, shawnguo, jassisinghbrar, linux-kernel,
	Srikanth Thokala

On Wed, Jan 20, 2021 at 06:59:33PM +0100, Greg KH wrote:
> On Fri, Jan 08, 2021 at 01:25:41PM -0800, mgross@linux.intel.com wrote:
> > From: Srikanth Thokala <srikanth.thokala@intel.com>
> > 
> > Provide interface for XLink layer to interact with XLink PCIe transport
> > layer on both local host and remote host.
> > 
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Reviewed-by: Mark Gross <mgross@linux.intel.com>
> > Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
> > ---
> >  drivers/misc/xlink-pcie/common/interface.c   | 109 +++++++++++++++++++
> >  drivers/misc/xlink-pcie/local_host/Makefile  |   1 +
> >  drivers/misc/xlink-pcie/remote_host/Makefile |   1 +
> >  3 files changed, 111 insertions(+)
> >  create mode 100644 drivers/misc/xlink-pcie/common/interface.c
> > 
> > diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
> > new file mode 100644
> > index 000000000000..56c1d9ed9d8f
> > --- /dev/null
> > +++ b/drivers/misc/xlink-pcie/common/interface.c
> > @@ -0,0 +1,109 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*****************************************************************************
> > + *
> > + * Intel Keem Bay XLink PCIe Driver
> > + *
> > + * Copyright (C) 2020 Intel Corporation
> > + *
> > + ****************************************************************************/
> 
> Do you really need the ******* mess?  :)
> 
> > +
> > +#include <linux/xlink_drv_inf.h>
> > +
> > +#include "core.h"
> > +#include "xpcie.h"
> > +
> > +/* Define xpcie driver interface API */
> > +int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
> > +{
> > +	if (!sw_device_id_list || !num_devices)
> > +		return -EINVAL;
> > +
> > +	*num_devices = intel_xpcie_get_device_num(sw_device_id_list);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(xlink_pcie_get_device_list);
> 
> EXPORT_SYMBOL_GPL() for all of these perhaps?  I have to ask...
I can't think of a reason not to use the _GPL flavor of EXPORT_SYMBOL.  I'll
change them all if that's desired.
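
For reference, the change under discussion would be roughly the following for
each exported symbol (just a sketch; the actual patch may differ):

	-EXPORT_SYMBOL(xlink_pcie_get_device_list);
	+EXPORT_SYMBOL_GPL(xlink_pcie_get_device_list);

with the same substitution applied to the other xlink_pcie_* exports.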

--mark
> 
> thanks,
> 
> greg k-h

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface
  2021-01-20 17:59   ` Greg KH
  2021-01-21 23:20     ` mark gross
@ 2021-01-24 11:46     ` Thokala, Srikanth
  1 sibling, 0 replies; 57+ messages in thread
From: Thokala, Srikanth @ 2021-01-24 11:46 UTC (permalink / raw)
  To: Greg KH, mgross
  Cc: markgross, arnd, bp, damien.lemoal, dragan.cvetic, corbet,
	leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan, robh+dt,
	shawnguo, jassisinghbrar, linux-kernel

Hi Greg,

Thank you for the review.

> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Wednesday, January 20, 2021 11:30 PM
> To: mgross@linux.intel.com
> Cc: markgross@kernel.org; arnd@arndb.de; bp@suse.de;
> damien.lemoal@wdc.com; dragan.cvetic@xilinx.com; corbet@lwn.net;
> leonard.crestez@nxp.com; palmerdabbelt@google.com;
> paul.walmsley@sifive.com; peng.fan@nxp.com; robh+dt@kernel.org;
> shawnguo@kernel.org; jassisinghbrar@gmail.com; linux-
> kernel@vger.kernel.org; Thokala, Srikanth <srikanth.thokala@intel.com>
> Subject: Re: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface
> 
> On Fri, Jan 08, 2021 at 01:25:41PM -0800, mgross@linux.intel.com wrote:
> > From: Srikanth Thokala <srikanth.thokala@intel.com>
> >
> > Provide interface for XLink layer to interact with XLink PCIe transport
> > layer on both local host and remote host.
> >
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Reviewed-by: Mark Gross <mgross@linux.intel.com>
> > Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
> > ---
> >  drivers/misc/xlink-pcie/common/interface.c   | 109 +++++++++++++++++++
> >  drivers/misc/xlink-pcie/local_host/Makefile  |   1 +
> >  drivers/misc/xlink-pcie/remote_host/Makefile |   1 +
> >  3 files changed, 111 insertions(+)
> >  create mode 100644 drivers/misc/xlink-pcie/common/interface.c
> >
> > > diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
> > new file mode 100644
> > index 000000000000..56c1d9ed9d8f
> > --- /dev/null
> > +++ b/drivers/misc/xlink-pcie/common/interface.c
> > @@ -0,0 +1,109 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > > +/*****************************************************************************
> > + *
> > + * Intel Keem Bay XLink PCIe Driver
> > + *
> > + * Copyright (C) 2020 Intel Corporation
> > + *
> > > + ****************************************************************************/
> 
> Do you really need the ******* mess?  :)

It is not required; I will clean it up.

> 
> > +
> > +#include <linux/xlink_drv_inf.h>
> > +
> > +#include "core.h"
> > +#include "xpcie.h"
> > +
> > +/* Define xpcie driver interface API */
> > > +int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
> > +{
> > +	if (!sw_device_id_list || !num_devices)
> > +		return -EINVAL;
> > +
> > +	*num_devices = intel_xpcie_get_device_num(sw_device_id_list);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(xlink_pcie_get_device_list);
> 
> EXPORT_SYMBOL_GPL() for all of these perhaps?  I have to ask...

I agree with Mark; I will make the change to use EXPORT_SYMBOL_GPL().

Thanks!
Srikanth

> 
> thanks,
> 
> greg k-h

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
  2021-01-20 17:57   ` Greg KH
@ 2021-01-24 11:48     ` Thokala, Srikanth
  2021-01-24 11:56       ` Greg KH
  0 siblings, 1 reply; 57+ messages in thread
From: Thokala, Srikanth @ 2021-01-24 11:48 UTC (permalink / raw)
  To: Greg KH, mgross
  Cc: markgross, arnd, bp, damien.lemoal, dragan.cvetic, corbet,
	leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan, robh+dt,
	shawnguo, jassisinghbrar, linux-kernel, Derek Kiernan

Hi Greg,

Thank you for the review.

> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Wednesday, January 20, 2021 11:28 PM
> To: mgross@linux.intel.com
> Cc: markgross@kernel.org; arnd@arndb.de; bp@suse.de;
> damien.lemoal@wdc.com; dragan.cvetic@xilinx.com; corbet@lwn.net;
> leonard.crestez@nxp.com; palmerdabbelt@google.com;
> paul.walmsley@sifive.com; peng.fan@nxp.com; robh+dt@kernel.org;
> shawnguo@kernel.org; jassisinghbrar@gmail.com; linux-
> kernel@vger.kernel.org; Thokala, Srikanth <srikanth.thokala@intel.com>;
> Derek Kiernan <derek.kiernan@xilinx.com>
> Subject: Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver
> for Local Host
> 
> On Fri, Jan 08, 2021 at 01:25:35PM -0800, mgross@linux.intel.com wrote:
> > From: Srikanth Thokala <srikanth.thokala@intel.com>
> >
> > Add PCIe EPF driver for local host (lh) to configure BAR's and other
> > HW resources. Underlying PCIe HW controller is a Synopsys DWC PCIe core.
> >
> > Cc: Derek Kiernan <derek.kiernan@xilinx.com>
> > Cc: Dragan Cvetic <dragan.cvetic@xilinx.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Reviewed-by: Mark Gross <mgross@linux.intel.com>
> > Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
> > ---
> >  MAINTAINERS                                 |   6 +
> >  drivers/misc/Kconfig                        |   1 +
> >  drivers/misc/Makefile                       |   1 +
> >  drivers/misc/xlink-pcie/Kconfig             |   9 +
> >  drivers/misc/xlink-pcie/Makefile            |   1 +
> >  drivers/misc/xlink-pcie/local_host/Makefile |   2 +
> >  drivers/misc/xlink-pcie/local_host/epf.c    | 413 ++++++++++++++++++++
> >  drivers/misc/xlink-pcie/local_host/epf.h    |  39 ++
> >  drivers/misc/xlink-pcie/local_host/xpcie.h  |  38 ++
> 
> Why such a deep directory tree?  Why is "local_host" needed?

Xlink-pcie comprises local host (ARM CPU) and remote host (IA CPU)
variants. It is a transport layer that establishes communication between
them.

local_host/, running on the ARM CPU, is based on the PCI Endpoint Framework.
remote_host/, running on the IA CPU, is a PCIe endpoint driver.

As these two variants are architecturally different, we maintain them under
two directories.
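
For reference, the directory layout under discussion, as implied by the
diffstats in this series (common/ appears to hold code shared by both
variants, e.g. interface.c):

	drivers/misc/xlink-pcie/
		common/        shared transport code (e.g. interface.c)
		local_host/    PCI Endpoint Framework based driver for the Keem Bay ARM CPU
		remote_host/   PCIe endpoint driver for the IA CPU host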

> 
> Anyway, one thing stood out instantly:
> 
> > +static int intel_xpcie_check_bar(struct pci_epf *epf,
> > +				 struct pci_epf_bar *epf_bar,
> > +				 enum pci_barno barno,
> > +				 size_t size, u8 reserved_bar)
> > +{
> > +	if (reserved_bar & (1 << barno)) {
> > +		dev_err(&epf->dev, "BAR%d is already reserved\n", barno);
> > +		return -EFAULT;
> 
> That error is only allowed when you really have a fault from
> reading/writing to/from userspace memory.  Not this type of foolish
> programming error by the caller.

Thanks for pointing that out; I will return an appropriate error value.
 
> > +	}
> > +
> > +	if (epf_bar->size != 0 && epf_bar->size < size) {
> > +		dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
> > +		return -ENOMEM;
> 
> Did you really run out of memory, or were the parameters sent to you
> incorrect?  -EINVAL is the proper thing here, right?

Sure, I will change to return -EINVAL.
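
A sketch of the intended change (the final patch may differ):

	if (epf_bar->size != 0 && epf_bar->size < size) {
		dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
		return -EINVAL;
	}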

> 
> 
> 
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int intel_xpcie_configure_bar(struct pci_epf *epf,
> > +				     const struct pci_epc_features
> > +					*epc_features)
> 
> Odd indentation :(

I had to break this line because checkpatch warned about exceeding 80 columns.
I will fix this to have better indentation.

> 
> > +{
> > +	struct pci_epf_bar *epf_bar;
> > +	bool bar_fixed_64bit;
> > +	int ret, i;
> > +
> > +	for (i = BAR_0; i <= BAR_5; i++) {
> > +		epf_bar = &epf->bar[i];
> > +		bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> > +		if (bar_fixed_64bit)
> > +			epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> > +		if (epc_features->bar_fixed_size[i])
> > +			epf_bar->size = epc_features->bar_fixed_size[i];
> > +
> > +		if (i == BAR_2) {
> > +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> > +						    BAR2_MIN_SIZE,
> > +						    epc_features->reserved_bar);
> > +			if (ret)
> > +				return ret;
> > +		}
> > +
> > +		if (i == BAR_4) {
> > +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> > +						    BAR4_MIN_SIZE,
> > +						    epc_features->reserved_bar);
> > +			if (ret)
> > +				return ret;
> > +		}
> 
> Why do you need to check all of this?  Where is the data coming from
> that could be incorrect?

The PCI BAR attributes come as inputs from the PCIe controller driver
through the PCI Endpoint Framework.  These checks compare the configuration
this driver expects against the configuration coming from the PCIe
controller driver.

FYI, PCIe controller driver for Intel Keem Bay is currently under review:
https://lore.kernel.org/linux-pci/20210122032610.4958-1-srikanth.thokala@intel.com/

Thanks!
Srikanth

> 
> thanks,
> 
> greg k-h

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
  2021-01-24 11:48     ` Thokala, Srikanth
@ 2021-01-24 11:56       ` Greg KH
  2021-01-24 18:18         ` Thokala, Srikanth
  0 siblings, 1 reply; 57+ messages in thread
From: Greg KH @ 2021-01-24 11:56 UTC (permalink / raw)
  To: Thokala, Srikanth
  Cc: mgross, markgross, arnd, bp, damien.lemoal, dragan.cvetic,
	corbet, leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan,
	robh+dt, shawnguo, jassisinghbrar, linux-kernel, Derek Kiernan

On Sun, Jan 24, 2021 at 11:48:29AM +0000, Thokala, Srikanth wrote:
> > > +{
> > > +	struct pci_epf_bar *epf_bar;
> > > +	bool bar_fixed_64bit;
> > > +	int ret, i;
> > > +
> > > +	for (i = BAR_0; i <= BAR_5; i++) {
> > > +		epf_bar = &epf->bar[i];
> > > +		bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> > > +		if (bar_fixed_64bit)
> > > +			epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> > > +		if (epc_features->bar_fixed_size[i])
> > > +			epf_bar->size = epc_features->bar_fixed_size[i];
> > > +
> > > +		if (i == BAR_2) {
> > > +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> > > +						    BAR2_MIN_SIZE,
> > > +						    epc_features->reserved_bar);
> > > +			if (ret)
> > > +				return ret;
> > > +		}
> > > +
> > > +		if (i == BAR_4) {
> > > +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> > > +						    BAR4_MIN_SIZE,
> > > +						    epc_features->reserved_bar);
> > > +			if (ret)
> > > +				return ret;
> > > +		}
> > 
> > Why do you need to check all of this?  Where is the data coming from
> > that could be incorrect?
> 
> The PCI BAR attributes come as inputs from the PCIe controller driver
> through the PCI Endpoint Framework.  These checks compare the configuration
> this driver expects against the configuration coming from the PCIe
> controller driver.

So why do you not trust that information coming from the caller?
Shouldn't it always be correct as it already is validated by that
in-kernel caller?  Don't check for things you don't have to check for
because you control the code that calls this stuff.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
  2021-01-24 11:56       ` Greg KH
@ 2021-01-24 18:18         ` Thokala, Srikanth
  0 siblings, 0 replies; 57+ messages in thread
From: Thokala, Srikanth @ 2021-01-24 18:18 UTC (permalink / raw)
  To: Greg KH
  Cc: mgross, markgross, arnd, bp, damien.lemoal, dragan.cvetic,
	corbet, leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan,
	robh+dt, shawnguo, jassisinghbrar, linux-kernel, Derek Kiernan

Hi Greg,

> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Sunday, January 24, 2021 5:27 PM
> To: Thokala, Srikanth <srikanth.thokala@intel.com>
> Cc: mgross@linux.intel.com; markgross@kernel.org; arnd@arndb.de;
> bp@suse.de; damien.lemoal@wdc.com; dragan.cvetic@xilinx.com;
> corbet@lwn.net; leonard.crestez@nxp.com; palmerdabbelt@google.com;
> paul.walmsley@sifive.com; peng.fan@nxp.com; robh+dt@kernel.org;
> shawnguo@kernel.org; jassisinghbrar@gmail.com; linux-
> kernel@vger.kernel.org; Derek Kiernan <derek.kiernan@xilinx.com>
> Subject: Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver
> for Local Host
> 
> On Sun, Jan 24, 2021 at 11:48:29AM +0000, Thokala, Srikanth wrote:
> > > > +{
> > > > +	struct pci_epf_bar *epf_bar;
> > > > +	bool bar_fixed_64bit;
> > > > +	int ret, i;
> > > > +
> > > > +	for (i = BAR_0; i <= BAR_5; i++) {
> > > > +		epf_bar = &epf->bar[i];
> > > > +		bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> > > > +		if (bar_fixed_64bit)
> > > > +			epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> > > > +		if (epc_features->bar_fixed_size[i])
> > > > +			epf_bar->size = epc_features->bar_fixed_size[i];
> > > > +
> > > > +		if (i == BAR_2) {
> > > > +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> > > > +						    BAR2_MIN_SIZE,
> > > > +						    epc_features->reserved_bar);
> > > > +			if (ret)
> > > > +				return ret;
> > > > +		}
> > > > +
> > > > +		if (i == BAR_4) {
> > > > +			ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> > > > +						    BAR4_MIN_SIZE,
> > > > +						    epc_features->reserved_bar);
> > > > +			if (ret)
> > > > +				return ret;
> > > > +		}
> > >
> > > Why do you need to check all of this?  Where is the data coming from
> > > that could be incorrect?
> >
> > The PCI BAR attributes come as inputs from the PCIe controller driver
> > through the PCI Endpoint Framework.  These checks compare the configuration
> > this driver expects against the configuration coming from the PCIe
> > controller driver.
> 
> So why do you not trust that information coming from the caller?
> Shouldn't it always be correct as it already is validated by that
> in-kernel caller?  Don't check for things you don't have to check for
> because you control the code that calls this stuff.

Sure, I agree with your point.
I will fix it in my next version.
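
One possible shape of the simplified loop, with the redundant checks dropped
(illustrative only; the actual fix may look different):

	for (i = BAR_0; i <= BAR_5; i++) {
		epf_bar = &epf->bar[i];
		if (epc_features->bar_fixed_64bit & (1 << i))
			epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
		if (epc_features->bar_fixed_size[i])
			epf_bar->size = epc_features->bar_fixed_size[i];
	}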

Thanks!
Srikanth

> 
> thanks,
> 
> greg k-h

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 08/34] misc: xlink-pcie: Add documentation for XLink PCIe driver
  2021-01-19 19:36   ` Randy Dunlap
@ 2021-01-24 18:27     ` Thokala, Srikanth
  0 siblings, 0 replies; 57+ messages in thread
From: Thokala, Srikanth @ 2021-01-24 18:27 UTC (permalink / raw)
  To: Randy Dunlap, mgross, markgross, arnd, bp, damien.lemoal,
	dragan.cvetic, gregkh, corbet, leonard.crestez, palmerdabbelt,
	paul.walmsley, peng.fan, robh+dt, shawnguo, jassisinghbrar
  Cc: linux-kernel, linux-doc

Hi Randy,

Thank you for the review.

> -----Original Message-----
> From: Randy Dunlap <rdunlap@infradead.org>
> Sent: Wednesday, January 20, 2021 1:06 AM
> To: mgross@linux.intel.com; markgross@kernel.org; arnd@arndb.de;
> bp@suse.de; damien.lemoal@wdc.com; dragan.cvetic@xilinx.com;
> gregkh@linuxfoundation.org; corbet@lwn.net; leonard.crestez@nxp.com;
> palmerdabbelt@google.com; paul.walmsley@sifive.com; peng.fan@nxp.com;
> robh+dt@kernel.org; shawnguo@kernel.org; jassisinghbrar@gmail.com
> Cc: linux-kernel@vger.kernel.org; Thokala, Srikanth
> <srikanth.thokala@intel.com>; linux-doc@vger.kernel.org
> Subject: Re: [PATCH v2 08/34] misc: xlink-pcie: Add documentation for
> XLink PCIe driver
> 
> Hi,
> Here are a few doc comments for you:
> 
> On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> > From: Srikanth Thokala <srikanth.thokala@intel.com>
> >
> > Provide overview of XLink PCIe driver implementation
> >
> > Cc: Jonathan Corbet <corbet@lwn.net>
> > Cc: linux-doc@vger.kernel.org
> > Reviewed-by: Mark Gross <mgross@linux.intel.com>
> > Signed-off-by: Srikanth Thokala <srikanth.thokala@intel.com>
> > ---
> >  Documentation/vpu/index.rst      |  1 +
> >  Documentation/vpu/xlink-pcie.rst | 90 ++++++++++++++++++++++++++++++++
> >  2 files changed, 91 insertions(+)
> >  create mode 100644 Documentation/vpu/xlink-pcie.rst
> >
> 
> > diff --git a/Documentation/vpu/xlink-pcie.rst b/Documentation/vpu/xlink-pcie.rst
> > new file mode 100644
> > index 000000000000..2d877c966b1e
> > --- /dev/null
> > +++ b/Documentation/vpu/xlink-pcie.rst
> > @@ -0,0 +1,90 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +================================
> > +Kernel driver: Xlink-pcie driver
> > +================================
> > +Supported chips:
> > +  * Intel Edge.AI Computer Vision platforms: Keem Bay
> > +    Suffix: Bay
> > +    Slave address: 6240
> > +    Datasheet: Publicly available at Intel
> > +
> > +Author: Srikanth Thokala Srikanth.Thokala@intel.com
> > +
> > +Introduction
> > +============
> > +The Xlink-pcie driver provides transport layer implementation for
> > +the data transfers to support Xlink protocol subsystem communication with the
> > +peer device. i.e, between remote host system and Keem Bay device.
> 
>         device, i.e.,

Ok.

> 
> > +
> > +The Keem Bay device is an ARM-based SOC that includes a vision processing
> > +unit (VPU) and deep learning, neural network core in the hardware.
> > +The Xlink-pcie driver exports a functional device endpoint to the Keem Bay
> > +device and supports two-way communication with the peer device.
> > +
> > +High-level architecture
> > +=======================
> > +Remote Host: IA CPU
> > +Local Host: ARM CPU (Keem Bay)::
> > +
> > +        +------------------------------------------------------------------------+
> > +        |  Remote Host IA CPU              | | Local Host ARM CPU (Keem Bay) |   |
> > +        +==================================+=+===============================+===+
> > +        |  User App                        | | User App                      |   |
> > +        +----------------------------------+-+-------------------------------+---+
> > +        |   XLink UAPI                     | | XLink UAPI                    |   |
> > +        +----------------------------------+-+-------------------------------+---+
> > +        |   XLink Core                     | | XLink Core                    |   |
> > +        +----------------------------------+-+-------------------------------+---+
> > +        |   XLink PCIe                     | | XLink PCIe                    |   |
> > +        +----------------------------------+-+-------------------------------+---+
> > +        |   XLink-PCIe Remote Host driver  | | XLink-PCIe Local Host driver  |   |
> > +        +----------------------------------+-+-------------------------------+---+
> > +        |-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:|:|:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:-:|
> > +        +----------------------------------+-+-------------------------------+---+
> > +        |     PCIe Host Controller         | | PCIe Device Controller        | HW|
> > +        +----------------------------------+-+-------------------------------+---+
> > +               ^                                             ^
> > +               |                                             |
> > +               |------------- PCIe x2 Link  -----------------|
> > +
> > +This XLink PCIe driver comprises of two variants:
> > +* Local Host driver
> > +
> > +  * Intended for ARM CPU
> > +  * It is based on PCI Endpoint Framework
> > +  * Driver path: {tree}/drivers/misc/Xlink-pcie/local_host
> > +
> > +* Remote Host driver
> > +
> > +       * Intended for IA CPU
> > +       * It is a PCIe endpoint driver
> > +       * Driver path: {tree}/drivers/misc/Xlink-pcie/remote_host
> > +
> > +XLink PCIe communication between local host and remote host is achieved through
> > +ring buffer management and MSI/Doorbell interrupts.
> > +
> > +The Xlink-pcie driver subsystem registers the Keem Bay device as an endpoint
> > +driver and provides standard linux PCIe sysfs interface, #
> 
>                                 Linux

Ok.

> What is the '#' sign for above?

It denotes the Linux shell prompt; I will change it to something more meaningful.
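
For example, the passage could be reworded along these lines (wording not
final):

	The Xlink-pcie driver registers the Keem Bay device as an endpoint driver
	and provides the standard Linux PCIe sysfs interface under
	/sys/bus/pci/devices/xxxx:xx:xx.0/.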

Thanks!
Srikanth

> 
> > +/sys/bus/pci/devices/xxxx:xx:xx.0/
> > +
> > +
> > +XLink protocol subsystem
> > +========================
> > +Xlink is an abstracted control and communication subsystem based on channel
> > +identification. It is intended to support VPU technology both at SoC level as
> > +well as at IP level, over multiple interfaces.
> > +
> > +- The Xlink subsystem abstracts several types of communication channels
> > +  underneath, allowing the usage of different interfaces with the
> > +  same function call interface.
> > +- The Communication channels are full-duplex protocol channels allowing
> > +  concurrent bidirectional communication.
> > +- The Xlink subsystem also supports control operations to VPU either
> > +  from standalone local system or from remote system based on communication
> > +  interface underneath.
> > +- The Xlink subsystem supports the following communication interfaces:
> > +    * USB CDC
> > +    * Gigabit Ethernet
> > +    * PCIe
> > +    * IPC
> >
> 
> 
> --
> ~Randy
> "He closes his eyes and drops the goggles.  You can't get hurt
> by looking at a bitmap.  Or can you?"
> (Neal Stephenson: Snow Crash)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 29/34] Intel tsens i2c slave driver.
  2021-01-12  7:15   ` Randy Dunlap
@ 2021-01-25 23:39     ` mark gross
  2021-01-26  7:45       ` Arnd Bergmann
  2021-01-27  4:44       ` C, Udhayakumar
  0 siblings, 2 replies; 57+ messages in thread
From: mark gross @ 2021-01-25 23:39 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: mgross, markgross, arnd, bp, damien.lemoal, dragan.cvetic,
	gregkh, corbet, leonard.crestez, palmerdabbelt, paul.walmsley,
	peng.fan, robh+dt, shawnguo, jassisinghbrar, linux-kernel, C,
	Udhayakumar, C

On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > index 8b263fdd80c3..c2138339bd89 100644
> > --- a/drivers/misc/intel_tsens/Kconfig
> > +++ b/drivers/misc/intel_tsens/Kconfig
> > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> >  	  Say Y if using a processor that includes the Intel VPU such as
> >  	  Keem Bay.  If unsure, say N.
> >  
> > +config INTEL_TSENS_I2C_SLAVE
> > +	bool "I2C slave driver for intel tsens"
> 
> Why bool instead of tristate?
Because the I2C driver depends on a file-scoped global i2c_plat_data
instantiated in the INTEL_TSENS_LOCAL_HOST driver (intel_tsens_thermal.[ch]).

Udhaya, would you care to comment further?

--mark


> 
> > +	depends on INTEL_TSENS_LOCAL_HOST
> > +	select I2C
> > +	select I2C_SLAVE
> > +	help
> > +	  This option enables tsens i2c slave driver.
> 
> 	                            I2C
> 
> > +
> > +	  This driver is used for reporting thermal data via I2C
> > +	  SMBUS to remote host.
> > +	  Enable this option if you want to have support for thermal
> > +	  management controller
> 
> 	             controller.
> 
> > +	  Say Y if using a processor that includes the Intel VPU such as
> > +	  Keem Bay.  If unsure, say N.
> 
> 
> -- 
> ~Randy
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 29/34] Intel tsens i2c slave driver.
  2021-01-25 23:39     ` mark gross
@ 2021-01-26  7:45       ` Arnd Bergmann
  2021-01-26 14:56         ` Gross, Mark
  2021-01-27  4:45         ` C, Udhayakumar
  2021-01-27  4:44       ` C, Udhayakumar
  1 sibling, 2 replies; 57+ messages in thread
From: Arnd Bergmann @ 2021-01-26  7:45 UTC (permalink / raw)
  To: mgross
  Cc: Randy Dunlap, markgross, Arnd Bergmann, Borislav Petkov,
	Damien Le Moal, Dragan Cvetic, gregkh, Jonathan Corbet,
	Leonard Crestez, Palmer Dabbelt, Paul Walmsley, Peng Fan,
	Rob Herring, Shawn Guo, Jassi Brar, linux-kernel, C, Udhayakumar,
	C

On Tue, Jan 26, 2021 at 12:39 AM mark gross <mgross@linux.intel.com> wrote:
>
> On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> > > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > > index 8b263fdd80c3..c2138339bd89 100644
> > > --- a/drivers/misc/intel_tsens/Kconfig
> > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > >       Say Y if using a processor that includes the Intel VPU such as
> > >       Keem Bay.  If unsure, say N.
> > >
> > > +config INTEL_TSENS_I2C_SLAVE
> > > +   bool "I2C slave driver for intel tsens"
> >
> > Why bool instead of tristate?
> Because the I2C driver depends on a file-scoped global i2c_plat_data
> instantiated in the INTEL_TSENS_LOCAL_HOST driver (intel_tsens_thermal.[ch]).
>
> Udhaya, would you care to comment further?

> > > +   depends on INTEL_TSENS_LOCAL_HOST
> > > +   select I2C
> > > +   select I2C_SLAVE

Please make this 'depends on I2C=y && I2C_SLAVE' instead of 'select'
in this case. A random driver should never force-enable another subsystem.
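
A minimal sketch of what the Kconfig entry might look like with that change
(help text trimmed; assuming the rest of the option stays as posted):

	config INTEL_TSENS_I2C_SLAVE
		bool "I2C slave driver for intel tsens"
		depends on INTEL_TSENS_LOCAL_HOST
		depends on I2C=y && I2C_SLAVE
		help
		  This option enables the tsens I2C slave driver.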

      Arnd

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 29/34] Intel tsens i2c slave driver.
  2021-01-26  7:45       ` Arnd Bergmann
@ 2021-01-26 14:56         ` Gross, Mark
  2021-01-27  4:45         ` C, Udhayakumar
  1 sibling, 0 replies; 57+ messages in thread
From: Gross, Mark @ 2021-01-26 14:56 UTC (permalink / raw)
  To: Arnd Bergmann, mgross
  Cc: Randy Dunlap, markgross, Arnd Bergmann, Borislav Petkov,
	Damien Le Moal, Dragan Cvetic, gregkh, Jonathan Corbet,
	Leonard Crestez, Palmer Dabbelt, Paul Walmsley, Peng Fan,
	Rob Herring, Shawn Guo, Jassi Brar, linux-kernel, C, Udhayakumar,
	C



> -----Original Message-----
> From: Arnd Bergmann <arnd@kernel.org>
> Sent: Monday, January 25, 2021 11:45 PM
> To: mgross@linux.intel.com
> Cc: Randy Dunlap <rdunlap@infradead.org>; markgross@kernel.org; Arnd
> Bergmann <arnd@arndb.de>; Borislav Petkov <bp@suse.de>; Damien Le Moal
> <damien.lemoal@wdc.com>; Dragan Cvetic <dragan.cvetic@xilinx.com>; gregkh
> <gregkh@linuxfoundation.org>; Jonathan Corbet <corbet@lwn.net>; Leonard
> Crestez <leonard.crestez@nxp.com>; Palmer Dabbelt
> <palmerdabbelt@google.com>; Paul Walmsley <paul.walmsley@sifive.com>;
> Peng Fan <peng.fan@nxp.com>; Rob Herring <robh+dt@kernel.org>; Shawn
> Guo <shawnguo@kernel.org>; Jassi Brar <jassisinghbrar@gmail.com>; linux-
> kernel@vger.kernel.org; C, Udhayakumar <udhayakumar.c@intel.com>;
> C@linux.intel.com
> Subject: Re: [PATCH v2 29/34] Intel tsens i2c slave driver.
> 
> On Tue, Jan 26, 2021 at 12:39 AM mark gross <mgross@linux.intel.com> wrote:
> >
> > On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > > On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> > > > diff --git a/drivers/misc/intel_tsens/Kconfig
> > > > b/drivers/misc/intel_tsens/Kconfig
> > > > index 8b263fdd80c3..c2138339bd89 100644
> > > > --- a/drivers/misc/intel_tsens/Kconfig
> > > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > > >       Say Y if using a processor that includes the Intel VPU such as
> > > >       Keem Bay.  If unsure, say N.
> > > >
> > > > +config INTEL_TSENS_I2C_SLAVE
> > > > +   bool "I2C slave driver for intel tsens"
> > >
> > > Why bool instead of tristate?
> > Because the I2C driver depends on a file-scoped global i2c_plat_data
> > instantiated in the INTEL_TSENS_LOCAL_HOST driver (intel_tsens_thermal.[ch]).
> >
> > Udhaya, would you care to comment further?
> 
> > > > +   depends on INTEL_TSENS_LOCAL_HOST
> > > > +   select I2C
> > > > +   select I2C_SLAVE
> 
> Please make this 'depends on I2C=y && I2C_SLAVE' instead of 'select'
> in this case. A random driver should never force-enable another subsystem.
Will do, thanks for the feedback!

--mark




^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 29/34] Intel tsens i2c slave driver.
  2021-01-25 23:39     ` mark gross
  2021-01-26  7:45       ` Arnd Bergmann
@ 2021-01-27  4:44       ` C, Udhayakumar
  1 sibling, 0 replies; 57+ messages in thread
From: C, Udhayakumar @ 2021-01-27  4:44 UTC (permalink / raw)
  To: mgross, Randy Dunlap
  Cc: markgross, arnd, bp, damien.lemoal, dragan.cvetic, gregkh,
	corbet, leonard.crestez, palmerdabbelt, paul.walmsley, peng.fan,
	robh+dt, shawnguo, jassisinghbrar, linux-kernel, C

> On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> > > diff --git a/drivers/misc/intel_tsens/Kconfig
> > > b/drivers/misc/intel_tsens/Kconfig
> > > index 8b263fdd80c3..c2138339bd89 100644
> > > --- a/drivers/misc/intel_tsens/Kconfig
> > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > >  	  Say Y if using a processor that includes the Intel VPU such as
> > >  	  Keem Bay.  If unsure, say N.
> > >
> > > +config INTEL_TSENS_I2C_SLAVE
> > > +	bool "I2C slave driver for intel tsens"
> >
> > Why bool instead of tristate?
> Because the I2C driver depends on a file-scoped global i2c_plat_data
> instantiated in the INTEL_TSENS_LOCAL_HOST driver (intel_tsens_thermal.[ch]).
> 
> Udhaya, would you care to comment further?
> 
> --mark
> 
As Mark mentioned above, i2c_plat_data from intel_tsens_thermal.[ch] will be used by the tsens_i2c client driver.
> 
> >
> > > +	depends on INTEL_TSENS_LOCAL_HOST
> > > +	select I2C
> > > +	select I2C_SLAVE
> > > +	help
> > > +	  This option enables tsens i2c slave driver.
> >
> > 	                            I2C
> >
Will update in the next version of the patch. Thanks for the feedback.
> > > +
> > > +	  This driver is used for reporting thermal data via I2C
> > > +	  SMBUS to remote host.
> > > +	  Enable this option if you want to have support for thermal
> > > +	  management controller
> >
> > 	             controller.
> >
Will update in the next version of the patch. Thanks for the feedback.
> > > +	  Say Y if using a processor that includes the Intel VPU such as
> > > +	  Keem Bay.  If unsure, say N.
> >
> >
> > --
> > ~Randy
> >

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 29/34] Intel tsens i2c slave driver.
  2021-01-26  7:45       ` Arnd Bergmann
  2021-01-26 14:56         ` Gross, Mark
@ 2021-01-27  4:45         ` C, Udhayakumar
  1 sibling, 0 replies; 57+ messages in thread
From: C, Udhayakumar @ 2021-01-27  4:45 UTC (permalink / raw)
  To: Arnd Bergmann, mgross
  Cc: Randy Dunlap, markgross, Arnd Bergmann, Borislav Petkov,
	Damien Le Moal, Dragan Cvetic, gregkh, Jonathan Corbet,
	Leonard Crestez, Palmer Dabbelt, Paul Walmsley, Peng Fan,
	Rob Herring, Shawn Guo, Jassi Brar, linux-kernel, C

> On Tue, Jan 26, 2021 at 12:39 AM mark gross <mgross@linux.intel.com>
> wrote:
> >
> > On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > > On 1/8/21 1:25 PM, mgross@linux.intel.com wrote:
> > > > diff --git a/drivers/misc/intel_tsens/Kconfig
> > > > b/drivers/misc/intel_tsens/Kconfig
> > > > index 8b263fdd80c3..c2138339bd89 100644
> > > > --- a/drivers/misc/intel_tsens/Kconfig
> > > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > > >       Say Y if using a processor that includes the Intel VPU such as
> > > >       Keem Bay.  If unsure, say N.
> > > >
> > > > +config INTEL_TSENS_I2C_SLAVE
> > > > +   bool "I2C slave driver for intel tsens"
> > >
> > > Why bool instead of tristate?
> > Because the I2C driver depends on a file-scoped global i2c_plat_data
> > instantiated in the INTEL_TSENS_LOCAL_HOST driver (intel_tsens_thermal.[ch]).
> >
> > Udhaya, would you care to comment further?
> 
> > > > +   depends on INTEL_TSENS_LOCAL_HOST
> > > > +   select I2C
> > > > +   select I2C_SLAVE
> 
> Please make this 'depends on I2C=y && I2C_SLAVE' instead of 'select'
> in this case. A random driver should never force-enable another subsystem.
> 
Will update in the next version of the patch. Thanks for the feedback.
>       Arnd

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2021-01-27  5:14 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-08 21:25 [PATCH v2 00/34] Intel Vision Processing base enabling mgross
2021-01-08 21:25 ` [PATCH v2 01/34] Add Vision Processing Unit (VPU) documentation mgross
2021-01-08 21:25 ` [PATCH v2 02/34] dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings mgross
2021-01-08 21:25 ` [PATCH v2 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox mgross
2021-01-08 21:25 ` [PATCH v2 04/34] dt-bindings: Add bindings for Keem Bay IPC driver mgross
2021-01-08 21:25 ` [PATCH v2 05/34] keembay-ipc: Add Keem Bay IPC module mgross
2021-01-08 21:25 ` [PATCH v2 06/34] dt-bindings: Add bindings for Keem Bay VPU IPC driver mgross
2021-01-10 17:18   ` Rob Herring
2021-01-11 19:24   ` Rob Herring
2021-01-19 14:32     ` Alessandrelli, Daniele
2021-01-08 21:25 ` [PATCH v2 07/34] keembay-vpu-ipc: Add Keem Bay VPU IPC module mgross
2021-01-08 21:25 ` [PATCH v2 08/34] misc: xlink-pcie: Add documentation for XLink PCIe driver mgross
2021-01-19 19:36   ` Randy Dunlap
2021-01-24 18:27     ` Thokala, Srikanth
2021-01-08 21:25 ` [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host mgross
2021-01-20 17:57   ` Greg KH
2021-01-24 11:48     ` Thokala, Srikanth
2021-01-24 11:56       ` Greg KH
2021-01-24 18:18         ` Thokala, Srikanth
2021-01-08 21:25 ` [PATCH v2 10/34] misc: xlink-pcie: lh: Add PCIe EP DMA functionality mgross
2021-01-08 21:25 ` [PATCH v2 11/34] misc: xlink-pcie: lh: Add core communication logic mgross
2021-01-08 21:25 ` [PATCH v2 12/34] misc: xlink-pcie: lh: Prepare changes for adding remote host driver mgross
2021-01-08 21:25 ` [PATCH v2 13/34] misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host mgross
2021-01-08 21:25 ` [PATCH v2 14/34] misc: xlink-pcie: rh: Add core communication logic mgross
2021-01-08 21:25 ` [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface mgross
2021-01-20 17:59   ` Greg KH
2021-01-21 23:20     ` mark gross
2021-01-24 11:46     ` Thokala, Srikanth
2021-01-08 21:25 ` [PATCH v2 16/34] misc: xlink-pcie: Add asynchronous event notification support for XLink mgross
2021-01-08 21:25 ` [PATCH v2 17/34] xlink-ipc: Add xlink ipc device tree bindings mgross
2021-01-10 17:18   ` Rob Herring
2021-01-08 21:25 ` [PATCH v2 18/34] xlink-ipc: Add xlink ipc driver mgross
2021-01-08 21:25 ` [PATCH v2 19/34] xlink-core: Add xlink core device tree bindings mgross
2021-01-10 17:18   ` Rob Herring
2021-01-11 19:27   ` Rob Herring
2021-01-08 21:25 ` [PATCH v2 20/34] xlink-core: Add xlink core driver xLink mgross
2021-01-19 19:58   ` Randy Dunlap
2021-01-08 21:25 ` [PATCH v2 21/34] xlink-core: Enable xlink protocol over pcie mgross
2021-01-08 21:25 ` [PATCH v2 22/34] xlink-core: Enable VPU IP management and runtime control mgross
2021-01-08 21:25 ` [PATCH v2 23/34] xlink-core: add async channel and events mgross
2021-01-08 21:25 ` [PATCH v2 24/34] dt-bindings: misc: Add Keem Bay vpumgr mgross
2021-01-08 21:25 ` [PATCH v2 25/34] misc: Add Keem Bay VPU manager mgross
2021-01-08 21:25 ` [PATCH v2 26/34] dt-bindings: misc: intel_tsens: Add tsens thermal bindings documentation mgross
2021-01-08 21:25 ` [PATCH v2 27/34] misc: Tsens ARM host thermal driver mgross
2021-01-08 21:25 ` [PATCH v2 28/34] misc: Intel tsens IA host driver mgross
2021-01-08 21:25 ` [PATCH v2 29/34] Intel tsens i2c slave driver mgross
2021-01-12  7:15   ` Randy Dunlap
2021-01-25 23:39     ` mark gross
2021-01-26  7:45       ` Arnd Bergmann
2021-01-26 14:56         ` Gross, Mark
2021-01-27  4:45         ` C, Udhayakumar
2021-01-27  4:44       ` C, Udhayakumar
2021-01-08 21:25 ` [PATCH v2 30/34] misc:intel_tsens: Intel Keem Bay tsens driver mgross
2021-01-08 21:25 ` [PATCH v2 31/34] Intel Keem Bay XLink SMBus driver mgross
2021-01-08 21:25 ` [PATCH v2 32/34] dt-bindings: misc: hddl_dev: Add hddl device management documentation mgross
2021-01-08 21:25 ` [PATCH v2 33/34] misc: Hddl device management for local host mgross
2021-01-08 21:26 ` [PATCH v2 34/34] misc: HDDL device management for IA host mgross

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).