* [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
From: Sonal Santan @ 2020-11-29  0:00 UTC
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

Hello,

This patch series adds the management physical function driver for Xilinx Alveo PCIe
accelerator cards, https://www.xilinx.com/products/boards-and-kits/alveo.html
This driver is part of the Xilinx Runtime (XRT) open source stack.

This patch series depends on the "PATCH Xilinx Alveo libfdt prep" series which was
posted earlier.

ALVEO PLATFORM ARCHITECTURE

Alveo PCIe FPGA based platforms have a static *shell* partition and a partially
re-configurable *user* partition. The shell partition is automatically loaded from
flash when the host is booted and PCIe is enumerated by the BIOS. The shell cannot be
changed until the next cold reboot. The shell exposes two PCIe physical functions:

1. management physical function
2. user physical function

The patch series includes Documentation/fpga/xrt.rst which describes the Alveo
platform, the xmgmt driver architecture and the deployment model in more detail.

Users compile their high level design in C/C++/OpenCL or RTL into an FPGA image
using the Vitis https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
tool suite. The image is packaged as an xclbin and contains a partial bitstream for the
user partition and the necessary metadata. Users can dynamically swap the image
running on the user partition in order to switch between different workloads.

ALVEO DRIVERS

The Alveo Linux kernel driver *xmgmt* binds to the management physical function of
the Alveo platform. The modular driver framework is organized into several
platform drivers which primarily handle the following functionality:

1.  Loading the firmware container, also called xsabin, at driver attach time
2.  Loading of user compiled xclbin with FPGA Manager integration
3.  Clock scaling of image running on user partition
4.  In-band sensors: temp, voltage, power, etc.
5.  Device reset and rescan
6.  Flashing static *shell* partition

The platform drivers are packaged into the *xrt-lib* helper module with well
defined interfaces, the details of which can be found in Documentation/fpga/xrt.rst.

The xmgmt driver is the second generation Alveo management driver and an evolution
of the first generation (out of tree) Alveo management driver, xclmgmt. The
sources of the first generation drivers were posted on LKML last year--
https://lore.kernel.org/lkml/20190319215401.6562-1-sonal.santan@xilinx.com/

Changes since the first generation driver include the following: the driver
has been re-architected as a data driven modular driver and has been split
into xmgmt and xrt-lib.

Alveo/XRT security and platform architecture are documented on the following
GitHub pages:
https://xilinx.github.io/XRT/master/html/security.html
https://xilinx.github.io/XRT/master/html/platforms_partitions.html

The user physical function driver is not included in this patch series.

TESTING AND VALIDATION

The xmgmt driver can be tested with the full XRT open source stack which includes
user space libraries, board utilities and the (out of tree) first generation
user physical function driver, xocl. The XRT open source runtime stack is
available at https://github.com/Xilinx/XRT. This patch series has been
validated on the Alveo U50 platform.

Complete documentation for the XRT open source stack can be found here--
https://xilinx.github.io/XRT/master/html/index.html

Thanks,
-Sonal

Sonal Santan (8):
  Documentation: fpga: Add a document describing Alveo XRT drivers
  fpga: xrt: Add UAPI header files
  fpga: xrt: infrastructure support for xmgmt driver
  fpga: xrt: core infrastructure for xrt-lib module
  fpga: xrt: platform drivers for subsystems in shell partition
  fpga: xrt: header file for platform and parent drivers
  fpga: xrt: Alveo management physical function driver
  fpga: xrt: Kconfig and Makefile updates for XRT drivers

 Documentation/fpga/index.rst                  |    1 +
 Documentation/fpga/xrt.rst                    |  588 +++++
 drivers/fpga/Kconfig                          |    2 +
 drivers/fpga/Makefile                         |    3 +
 drivers/fpga/alveo/Kconfig                    |    7 +
 drivers/fpga/alveo/common/xrt-metadata.c      |  590 +++++
 drivers/fpga/alveo/common/xrt-root.c          |  744 +++++++
 drivers/fpga/alveo/common/xrt-root.h          |   24 +
 drivers/fpga/alveo/common/xrt-xclbin.c        |  387 ++++
 drivers/fpga/alveo/common/xrt-xclbin.h        |   46 +
 drivers/fpga/alveo/include/xmgmt-main.h       |   34 +
 drivers/fpga/alveo/include/xrt-axigate.h      |   31 +
 drivers/fpga/alveo/include/xrt-calib.h        |   28 +
 drivers/fpga/alveo/include/xrt-clkfreq.h      |   21 +
 drivers/fpga/alveo/include/xrt-clock.h        |   29 +
 drivers/fpga/alveo/include/xrt-cmc.h          |   23 +
 drivers/fpga/alveo/include/xrt-ddr-srsr.h     |   29 +
 drivers/fpga/alveo/include/xrt-flash.h        |   28 +
 drivers/fpga/alveo/include/xrt-gpio.h         |   41 +
 drivers/fpga/alveo/include/xrt-icap.h         |   27 +
 drivers/fpga/alveo/include/xrt-mailbox.h      |   44 +
 drivers/fpga/alveo/include/xrt-metadata.h     |  184 ++
 drivers/fpga/alveo/include/xrt-parent.h       |  103 +
 drivers/fpga/alveo/include/xrt-partition.h    |   33 +
 drivers/fpga/alveo/include/xrt-subdev.h       |  333 +++
 drivers/fpga/alveo/include/xrt-ucs.h          |   22 +
 drivers/fpga/alveo/lib/Kconfig                |   11 +
 drivers/fpga/alveo/lib/Makefile               |   42 +
 drivers/fpga/alveo/lib/subdevs/xrt-axigate.c  |  298 +++
 drivers/fpga/alveo/lib/subdevs/xrt-calib.c    |  291 +++
 drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c  |  214 ++
 drivers/fpga/alveo/lib/subdevs/xrt-clock.c    |  638 ++++++
 .../fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c   |  343 +++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c |  322 +++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h |  135 ++
 .../fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c  |  320 +++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c   |  361 ++++
 .../fpga/alveo/lib/subdevs/xrt-cmc-sensors.c  |  445 ++++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc.c      |  239 +++
 drivers/fpga/alveo/lib/subdevs/xrt-gpio.c     |  198 ++
 drivers/fpga/alveo/lib/subdevs/xrt-icap.c     |  306 +++
 drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c  | 1905 +++++++++++++++++
 .../fpga/alveo/lib/subdevs/xrt-partition.c    |  261 +++
 drivers/fpga/alveo/lib/subdevs/xrt-qspi.c     | 1347 ++++++++++++
 drivers/fpga/alveo/lib/subdevs/xrt-srsr.c     |  322 +++
 drivers/fpga/alveo/lib/subdevs/xrt-test.c     |  274 +++
 drivers/fpga/alveo/lib/subdevs/xrt-ucs.c      |  238 ++
 .../fpga/alveo/lib/subdevs/xrt-vsec-golden.c  |  238 ++
 drivers/fpga/alveo/lib/subdevs/xrt-vsec.c     |  337 +++
 drivers/fpga/alveo/lib/xrt-cdev.c             |  234 ++
 drivers/fpga/alveo/lib/xrt-main.c             |  275 +++
 drivers/fpga/alveo/lib/xrt-main.h             |   46 +
 drivers/fpga/alveo/lib/xrt-subdev.c           | 1007 +++++++++
 drivers/fpga/alveo/mgmt/Kconfig               |   11 +
 drivers/fpga/alveo/mgmt/Makefile              |   28 +
 drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c      |  194 ++
 drivers/fpga/alveo/mgmt/xmgmt-fmgr.h          |   29 +
 drivers/fpga/alveo/mgmt/xmgmt-main-impl.h     |   36 +
 drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c  |  930 ++++++++
 drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c      |  190 ++
 drivers/fpga/alveo/mgmt/xmgmt-main.c          |  843 ++++++++
 drivers/fpga/alveo/mgmt/xmgmt-root.c          |  375 ++++
 include/uapi/linux/xrt/flash_xrt_data.h       |   67 +
 include/uapi/linux/xrt/mailbox_proto.h        |  394 ++++
 include/uapi/linux/xrt/mailbox_transport.h    |   74 +
 include/uapi/linux/xrt/xclbin.h               |  418 ++++
 include/uapi/linux/xrt/xmgmt-ioctl.h          |   72 +
 67 files changed, 17710 insertions(+)
 create mode 100644 Documentation/fpga/xrt.rst
 create mode 100644 drivers/fpga/alveo/Kconfig
 create mode 100644 drivers/fpga/alveo/common/xrt-metadata.c
 create mode 100644 drivers/fpga/alveo/common/xrt-root.c
 create mode 100644 drivers/fpga/alveo/common/xrt-root.h
 create mode 100644 drivers/fpga/alveo/common/xrt-xclbin.c
 create mode 100644 drivers/fpga/alveo/common/xrt-xclbin.h
 create mode 100644 drivers/fpga/alveo/include/xmgmt-main.h
 create mode 100644 drivers/fpga/alveo/include/xrt-axigate.h
 create mode 100644 drivers/fpga/alveo/include/xrt-calib.h
 create mode 100644 drivers/fpga/alveo/include/xrt-clkfreq.h
 create mode 100644 drivers/fpga/alveo/include/xrt-clock.h
 create mode 100644 drivers/fpga/alveo/include/xrt-cmc.h
 create mode 100644 drivers/fpga/alveo/include/xrt-ddr-srsr.h
 create mode 100644 drivers/fpga/alveo/include/xrt-flash.h
 create mode 100644 drivers/fpga/alveo/include/xrt-gpio.h
 create mode 100644 drivers/fpga/alveo/include/xrt-icap.h
 create mode 100644 drivers/fpga/alveo/include/xrt-mailbox.h
 create mode 100644 drivers/fpga/alveo/include/xrt-metadata.h
 create mode 100644 drivers/fpga/alveo/include/xrt-parent.h
 create mode 100644 drivers/fpga/alveo/include/xrt-partition.h
 create mode 100644 drivers/fpga/alveo/include/xrt-subdev.h
 create mode 100644 drivers/fpga/alveo/include/xrt-ucs.h
 create mode 100644 drivers/fpga/alveo/lib/Kconfig
 create mode 100644 drivers/fpga/alveo/lib/Makefile
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-axigate.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-calib.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-clock.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sensors.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-gpio.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-icap.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-partition.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-qspi.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-srsr.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-test.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-ucs.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-vsec-golden.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-vsec.c
 create mode 100644 drivers/fpga/alveo/lib/xrt-cdev.c
 create mode 100644 drivers/fpga/alveo/lib/xrt-main.c
 create mode 100644 drivers/fpga/alveo/lib/xrt-main.h
 create mode 100644 drivers/fpga/alveo/lib/xrt-subdev.c
 create mode 100644 drivers/fpga/alveo/mgmt/Kconfig
 create mode 100644 drivers/fpga/alveo/mgmt/Makefile
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-root.c
 create mode 100644 include/uapi/linux/xrt/flash_xrt_data.h
 create mode 100644 include/uapi/linux/xrt/mailbox_proto.h
 create mode 100644 include/uapi/linux/xrt/mailbox_transport.h
 create mode 100644 include/uapi/linux/xrt/xclbin.h
 create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h

--
2.17.1


* [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
From: Sonal Santan @ 2020-11-29  0:00 UTC
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Describe the Alveo XRT driver architecture and provide a basic overview
of the Xilinx Alveo platform.

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 Documentation/fpga/index.rst |   1 +
 Documentation/fpga/xrt.rst   | 588 +++++++++++++++++++++++++++++++++++
 2 files changed, 589 insertions(+)
 create mode 100644 Documentation/fpga/xrt.rst

diff --git a/Documentation/fpga/index.rst b/Documentation/fpga/index.rst
index f80f95667ca2..30134357b70d 100644
--- a/Documentation/fpga/index.rst
+++ b/Documentation/fpga/index.rst
@@ -8,6 +8,7 @@ fpga
     :maxdepth: 1

     dfl
+    xrt

 .. only::  subproject and html

diff --git a/Documentation/fpga/xrt.rst b/Documentation/fpga/xrt.rst
new file mode 100644
index 000000000000..9f37d46459b0
--- /dev/null
+++ b/Documentation/fpga/xrt.rst
@@ -0,0 +1,588 @@
+==================================
+XRTV2 Linux Kernel Driver Overview
+==================================
+
+XRTV2 drivers are second generation `XRT <https://github.com/Xilinx/XRT>`_ drivers which
+support `Alveo <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_ PCIe platforms
+from Xilinx.
+
+XRTV2 drivers support *subsystem* style data driven platforms where the driver's configuration
+and behavior are determined by metadata provided by the platform (in *device tree* format).
+The primary management physical function (MPF) driver is called **xmgmt**. The primary user
+physical function (UPF) driver is called **xuser**, and HW subsystem drivers are packaged into
+a library module called **xrt-lib**, which is shared by **xmgmt** and **xuser** (WIP).
+
+Alveo Platform Overview
+=======================
+
+Alveo platforms are architected as two physical FPGA partitions: *Shell* and *User*. The Shell
+provides basic infrastructure for the Alveo platform like PCIe connectivity, board management,
+Dynamic Function Exchange (DFX), sensors, clocking, reset, and security. The User partition
+contains the user compiled binary, which is loaded by a process called DFX, also known as
+partial reconfiguration.
+
+Physical partitions require strict HW compatibility with each other for DFX to work properly.
+Every physical partition has two interface UUIDs: a *parent* UUID and a *child* UUID. For simple
+single stage platforms, Shell → User forms the parent-child relationship. For complex two stage
+platforms, Base → Shell → User forms the parent-child relationship chain.
+
+.. note::
+   Partition compatibility matching is a key design component of Alveo platforms and XRT. Partitions
+   have a child-parent relationship. A loaded partition exposes its child partition UUID to advertise
+   its compatibility requirement for its child partition. When loading a child partition the xmgmt
+   management driver matches the parent UUID of the child partition against the child UUID exported
+   by the parent. Parent and child partition UUIDs are stored in the *xclbin* (for user) or *xsabin*
+   (for base and shell). Except for the root UUID, which is discovered through VSEC, the hardware
+   itself does not know about UUIDs.
+
+
+The physical partitions and their loading is illustrated below::
+
+            SHELL                               USER
+        +-----------+                  +-------------------+
+        |           |                  |                   |
+        | VSEC UUID | CHILD     PARENT |    LOGIC UUID     |
+        |           o------->|<--------o                   |
+        |           | UUID       UUID  |                   |
+        +-----+-----+                  +--------+----------+
+              |                                 |
+              .                                 .
+              |                                 |
+          +---+---+                      +------+--------+
+          |  POR  |                      | USER COMPILED |
+          | FLASH |                      |    XCLBIN     |
+          +-------+                      +---------------+
+
+
+Loading Sequence
+----------------
+
+The Shell partition is loaded from flash at system boot time. It establishes the PCIe link and
+exposes two physical functions to the BIOS. After OS boot, the xmgmt driver attaches to PCIe
+physical function 0 exposed by the Shell and then looks for VSEC in the PCIe extended configuration
+space. Using VSEC it determines the logic UUID of the Shell and uses the UUID to load the matching
+*xsabin* file from the Linux firmware directory. The xsabin file contains metadata to discover the
+peripherals that are part of the Shell and firmware(s) for any embedded soft processors in the Shell.
+
+The Shell exports a child interface UUID which is used for the compatibility check when loading a
+user compiled xclbin over the User partition as part of DFX. When a user requests loading of a
+specific xclbin the xmgmt management driver reads the parent interface UUID specified in the xclbin
+and matches it with the child interface UUID exported by the Shell to determine if the xclbin is
+compatible with the Shell. If the match fails, loading of the xclbin is denied.
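+
+The check itself reduces to a UUID comparison. A minimal sketch in kernel C, using
+``uuid_equal()`` from ``linux/uuid.h`` (the helper name is illustrative, not part of
+this series)::
+
+  #include <linux/errno.h>
+  #include <linux/uuid.h>
+
+  /* Sketch only: deny the xclbin load unless its parent interface
+   * UUID exactly matches the child interface UUID of the Shell. */
+  static int xclbin_compatible_with_shell(const uuid_t *xclbin_parent,
+                                          const uuid_t *shell_child)
+  {
+          return uuid_equal(xclbin_parent, shell_child) ? 0 : -EINVAL;
+  }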
+
+xclbin loading is requested using the XMGMT_IOCICAPDOWNLOAD_AXLF ioctl command (see the sketch
+after the list below). When loading an xclbin the xmgmt driver performs the following operations:
+
+1. Sanity check the xclbin contents
+2. Isolate the User partition
+3. Download the bitstream using the FPGA config engine (ICAP)
+4. De-isolate the User partition
+5. Program the clocks (ClockWiz) driving the User partition
+6. Wait for memory controller (MIG) calibration
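+
+From user space the request could look like the following sketch. This assumes that
+``struct xmgmt_ioc_bitstream_axlf`` carries a pointer to the in-memory axlf image and
+that the character device node is named ``/dev/xmgmt0``; both are illustrative
+assumptions, not interfaces confirmed by this series::
+
+  #include <fcntl.h>
+  #include <unistd.h>
+  #include <sys/ioctl.h>
+  #include <linux/xrt/xclbin.h>
+  #include <linux/xrt/xmgmt-ioctl.h>
+
+  /* Sketch: ask xmgmt to program the User partition with an xclbin
+   * image already read into memory; the node name is illustrative. */
+  int download_xclbin(struct axlf *image)
+  {
+          struct xmgmt_ioc_bitstream_axlf obj = { .xclbin = image };
+          int ret, fd = open("/dev/xmgmt0", O_RDWR);
+
+          if (fd < 0)
+                  return -1;
+          ret = ioctl(fd, XMGMT_IOCICAPDOWNLOAD_AXLF, &obj);
+          close(fd);
+          return ret;
+  }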
+
+`Platform Loading Overview <https://xilinx.github.io/XRT/master/html/platforms_partitions.html>`_
+provides more detailed information on platform loading.
+
+xsabin
+------
+
+Each Alveo platform comes packaged with its own xsabin. The xsabin is a trusted component of the
+platform. For format details refer to :ref:`xsabin/xclbin Container Format`. The xsabin contains
+basic information like UUIDs, the platform name and metadata in the form of a device tree. See
+:ref:`Device Tree Usage` for details and an example.
+
+xclbin
+------
+
+An xclbin is compiled by the end user using the
+`Vitis <https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html>`_ tool set from
+Xilinx. The xclbin contains sections describing user compiled acceleration engines/kernels, memory
+subsystems, clocking information etc. It also contains the bitstream for the user partition, UUIDs,
+the platform name, etc. xclbin uses the same container format as xsabin, which is described below.
+
+
+xsabin/xclbin Container Format
+------------------------------
+
+xclbin/xsabin is an ELF-like binary container format. It is structured as a series of sections.
+There is a file header followed by several section headers, which are followed by the sections
+themselves. A section header points to an actual section. There is an optional signature at the
+end. The format is defined by the header file ``xclbin.h``. The following figure illustrates a
+typical xclbin::
+
+
+          +---------------------+
+          |                     |
+          |       HEADER        |
+          +---------------------+
+          |   SECTION  HEADER   |
+          |                     |
+          +---------------------+
+          |         ...         |
+          |                     |
+          +---------------------+
+          |   SECTION  HEADER   |
+          |                     |
+          +---------------------+
+          |       SECTION       |
+          |                     |
+          +---------------------+
+          |         ...         |
+          |                     |
+          +---------------------+
+          |       SECTION       |
+          |                     |
+          +---------------------+
+          |      SIGNATURE      |
+          |      (OPTIONAL)     |
+          +---------------------+
+
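+A consumer locates a particular section by walking the section headers. A minimal sketch
+using the definitions from ``xclbin.h`` (the helper name is illustrative)::
+
+  #include <linux/xrt/xclbin.h>
+
+  /* Sketch: find a section of the given kind inside a complete,
+   * already validated axlf image; returns NULL if not present. */
+  static const struct axlf_section_header *
+  find_section(const struct axlf *bin, enum axlf_section_kind kind)
+  {
+          uint32_t i;
+
+          for (i = 0; i < bin->m_header.m_numSections; i++) {
+                  if (bin->m_sections[i].m_sectionKind == (uint32_t)kind)
+                          return &bin->m_sections[i];
+          }
+          return NULL;
+  }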
+
+xclbin/xsabin files can be packaged, un-packaged and inspected using an XRT utility called
+**xclbinutil**. xclbinutil is part of the XRT open source software stack. The source code for
+xclbinutil can be found at https://github.com/Xilinx/XRT/tree/master/src/runtime_src/tools/xclbinutil
+
+For example, to enumerate the contents of an xclbin/xsabin, use the *--info* switch as shown
+below::
+
+  xclbinutil --info --input /opt/xilinx/firmware/u50/gen3x16-xdma/blp/test/bandwidth.xclbin
+  xclbinutil --info --input /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/partition.xsabin
+
+
+Device Tree Usage
+-----------------
+
+As mentioned previously, the xsabin stores metadata which advertises the HW subsystems present in a
+partition. The metadata is stored in device tree format with a well defined schema. Subsystem
+instantiations are captured as children of the ``addressable_endpoints`` node. Subsystem nodes have
+standard attributes like ``reg``, ``interrupts`` etc. Additionally the nodes also have PCIe specific
+attributes: ``pcie_physical_function`` and ``pcie_bar_mapping``. These identify which PCIe physical
+function, and which BAR space in that physical function, the subsystem resides in. The XRT management
+driver uses this information to bind *platform drivers* to the subsystem instantiations. The platform
+drivers are found in the **xrt-lib.ko** kernel module described later. Below is an example of the
+device tree for the Alveo U50 platform::
+
+  /dts-v1/;
+
+  /{
+	logic_uuid = "f465b0a3ae8c64f619bc150384ace69b";
+
+	schema_version {
+		major = <0x01>;
+		minor = <0x00>;
+	};
+
+	interfaces {
+
+		@0 {
+			interface_uuid = "862c7020a250293e32036f19956669e5";
+		};
+	};
+
+	addressable_endpoints {
+
+		ep_blp_rom_00 {
+			reg = <0x00 0x1f04000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+		};
+
+		ep_card_flash_program_00 {
+			reg = <0x00 0x1f06000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_quad_spi-1.0\0axi_quad_spi";
+			interrupts = <0x03 0x03>;
+		};
+
+		ep_cmc_firmware_mem_00 {
+			reg = <0x00 0x1e20000 0x00 0x20000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+			firmware {
+				firmware_product_name = "cmc";
+				firmware_branch_name = "u50";
+				firmware_version_major = <0x01>;
+				firmware_version_minor = <0x00>;
+			};
+		};
+
+		ep_cmc_intc_00 {
+			reg = <0x00 0x1e03000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
+			interrupts = <0x04 0x04>;
+		};
+
+		ep_cmc_mutex_00 {
+			reg = <0x00 0x1e02000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_cmc_regmap_00 {
+			reg = <0x00 0x1e08000 0x00 0x2000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+			firmware {
+				firmware_product_name = "sc-fw";
+				firmware_branch_name = "u50";
+				firmware_version_major = <0x05>;
+			};
+		};
+
+		ep_cmc_reset_00 {
+			reg = <0x00 0x1e01000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_ddr_mem_calib_00 {
+			reg = <0x00 0x63000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_debug_bscan_mgmt_00 {
+			reg = <0x00 0x1e90000 0x00 0x10000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-debug_bridge-1.0\0debug_bridge";
+		};
+
+		ep_ert_base_address_00 {
+			reg = <0x00 0x21000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_ert_command_queue_mgmt_00 {
+			reg = <0x00 0x40000 0x00 0x10000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
+		};
+
+		ep_ert_command_queue_user_00 {
+			reg = <0x00 0x40000 0x00 0x10000>;
+			pcie_physical_function = <0x01>;
+			compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
+		};
+
+		ep_ert_firmware_mem_00 {
+			reg = <0x00 0x30000 0x00 0x8000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+			firmware {
+				firmware_product_name = "ert";
+				firmware_branch_name = "v20";
+				firmware_version_major = <0x01>;
+			};
+		};
+
+		ep_ert_intc_00 {
+			reg = <0x00 0x23000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
+			interrupts = <0x05 0x05>;
+		};
+
+		ep_ert_reset_00 {
+			reg = <0x00 0x22000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_ert_sched_00 {
+			reg = <0x00 0x50000 0x00 0x1000>;
+			pcie_physical_function = <0x01>;
+			compatible = "xilinx.com,reg_abs-ert_sched-1.0\0ert_sched";
+			interrupts = <0x09 0x0c>;
+		};
+
+		ep_fpga_configuration_00 {
+			reg = <0x00 0x1e88000 0x00 0x8000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_hwicap-1.0\0axi_hwicap";
+			interrupts = <0x02 0x02>;
+		};
+
+		ep_icap_reset_00 {
+			reg = <0x00 0x1f07000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_mailbox_mgmt_00 {
+			reg = <0x00 0x1f10000 0x00 0x10000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-mailbox-1.0\0mailbox";
+			interrupts = <0x00 0x00>;
+		};
+
+		ep_mailbox_user_00 {
+			reg = <0x00 0x1f00000 0x00 0x10000>;
+			pcie_physical_function = <0x01>;
+			compatible = "xilinx.com,reg_abs-mailbox-1.0\0mailbox";
+			interrupts = <0x08 0x08>;
+		};
+
+		ep_msix_00 {
+			reg = <0x00 0x00 0x00 0x20000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-msix-1.0\0msix";
+			pcie_bar_mapping = <0x02>;
+		};
+
+		ep_pcie_link_mon_00 {
+			reg = <0x00 0x1f05000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_pr_isolate_plp_00 {
+			reg = <0x00 0x1f01000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_pr_isolate_ulp_00 {
+			reg = <0x00 0x1000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+		};
+
+		ep_uuid_rom_00 {
+			reg = <0x00 0x64000 0x00 0x1000>;
+			pcie_physical_function = <0x00>;
+			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+		};
+
+		ep_xdma_00 {
+			reg = <0x00 0x00 0x00 0x10000>;
+			pcie_physical_function = <0x01>;
+			compatible = "xilinx.com,reg_abs-xdma-1.0\0xdma";
+			pcie_bar_mapping = <0x02>;
+		};
+	};
+
+  };
+
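+Because the metadata is a flattened device tree, it can be inspected with standard libfdt
+calls. A minimal sketch that reads the ``reg`` property of one endpoint (the node path is
+taken from the example above; the helper name is illustrative)::
+
+  #include <libfdt.h>
+
+  /* Sketch: fetch the "reg" property of one endpoint from the
+   * flattened device tree blob carried inside an xsabin. */
+  static const fdt32_t *get_ep_reg(const void *fdt, int *lenp)
+  {
+          int node = fdt_path_offset(fdt,
+                  "/addressable_endpoints/ep_blp_rom_00");
+
+          if (node < 0)
+                  return NULL;
+          return fdt_getprop(fdt, node, "reg", lenp);
+  }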
+
+
+Deployment Models
+=================
+
+Baremetal
+---------
+
+In bare-metal deployments both MPF and UPF are visible and accessible. The xmgmt driver binds
+to the MPF. The xmgmt driver operations are privileged and available only to the system
+administrator. The full stack is illustrated below::
+
+
+                            HOST
+
+                 [XMGMT]            [XUSER]
+                    |                  |
+                    |                  |
+                 +-----+            +-----+
+                 | MPF |            | UPF |
+                 |     |            |     |
+                 | PF0 |            | PF1 |
+                 +--+--+            +--+--+
+          ......... ^................. ^..........
+                    |                  |
+                    |   PCIe DEVICE    |
+                    |                  |
+                 +--+------------------+--+
+                 |         SHELL          |
+                 |                        |
+                 +------------------------+
+                 |         USER           |
+                 |                        |
+                 |                        |
+                 |                        |
+                 |                        |
+                 +------------------------+
+
+
+
+Virtualized
+-----------
+
+In virtualized deployments the privileged MPF is assigned to the host but the unprivileged UPF
+is assigned to the guest VM via PCIe pass-through. The xmgmt driver in the host binds to the MPF.
+The xmgmt driver operations are privileged and accessible only to the hosting service provider.
+The full stack is illustrated below::
+
+
+                                 .............
+                  HOST           .    VM     .
+                                 .           .
+                 [XMGMT]         .  [XUSER]  .
+                    |            .     |     .
+                    |            .     |     .
+                 +-----+         .  +-----+  .
+                 | MPF |         .  | UPF |  .
+                 |     |         .  |     |  .
+                 | PF0 |         .  | PF1 |  .
+                 +--+--+         .  +--+--+  .
+          ......... ^................. ^..........
+                    |                  |
+                    |   PCIe DEVICE    |
+                    |                  |
+                 +--+------------------+--+
+                 |         SHELL          |
+                 |                        |
+                 +------------------------+
+                 |         USER           |
+                 |                        |
+                 |                        |
+                 |                        |
+                 |                        |
+                 +------------------------+
+
+
+
+Driver Modules
+==============
+
+xrt-lib.ko
+----------
+
+xrt-lib.ko is a repository of all subsystem drivers and pure software modules that can potentially
+be shared between xmgmt and xuser. All these drivers are structured as Linux
+*platform drivers* and are instantiated by xmgmt (or xuser in the future) based on metadata
+associated with the hardware. The metadata is in the form of a device tree, as
+explained before.
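+
+The subsystem drivers follow the standard Linux platform driver pattern. A generic sketch of
+that pattern is shown below; the driver name and callbacks are illustrative, not the actual
+xrt-lib interface::
+
+  #include <linux/module.h>
+  #include <linux/platform_device.h>
+
+  /* Sketch of the standard pattern only; "xrt-example" and the
+   * callbacks are illustrative, not part of this series. */
+  static int xrt_example_probe(struct platform_device *pdev)
+  {
+          dev_info(&pdev->dev, "subdev probed\n");
+          return 0;
+  }
+
+  static int xrt_example_remove(struct platform_device *pdev)
+  {
+          return 0;
+  }
+
+  static struct platform_driver xrt_example_driver = {
+          .driver = { .name = "xrt-example" },
+          .probe = xrt_example_probe,
+          .remove = xrt_example_remove,
+  };
+  module_platform_driver(xrt_example_driver);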
+
+xmgmt.ko
+--------
+
+The xmgmt driver is a PCIe device driver driving the MPF found on Xilinx Alveo
+PCIe devices. It consists of one *root* driver, one or more *partition* drivers
+and one or more *leaf* drivers. The root and the MPF specific leaf drivers are in
+xmgmt.ko. The partition driver and the other leaf drivers are in xrt-lib.ko.
+
+The instantiation of a specific partition driver or leaf driver is completely data
+driven, based on metadata (mostly in device tree format) found through the VSEC
+capability and inside firmware files, such as the xsabin or xclbin file. The root
+driver manages the life cycle of multiple partition drivers, which, in turn, manage
+multiple leaf drivers. This allows a single set of driver code to support all
+kinds of subsystems exposed by different shells. The differences among all
+these subsystems are handled in the leaf drivers, with the root and partition drivers
+being part of the infrastructure that provides common services for all leaves found
+on all platforms.
+
+
+xmgmt-root
+^^^^^^^^^^
+
+The xmgmt-root driver is a PCIe device driver that attaches to the MPF. It's part of the
+infrastructure of the MPF driver and resides in xmgmt.ko. This driver
+
+* manages one or more partition drivers
+* provides access to functionality that requires the pci_dev, such as PCIe config
+  space access, to other leaf drivers through parent calls
+* together with the partition driver, facilitates event callbacks for other leaf drivers
+* together with the partition driver, facilitates inter-leaf driver calls for other leaf
+  drivers
+
+When the root driver starts, it explicitly creates an initial partition instance,
+which contains leaf drivers that trigger the creation of other partition
+instances. The root driver waits for all partitions and leaves to be created
+before it returns from its probe routine and claims success for the initialization
+of the entire xmgmt driver.
+
+partition
+^^^^^^^^^
+
+The partition driver is a platform device driver whose life cycle is managed by
+the root driver and which does not have real IO memory or IRQ resources. It's part of
+the infrastructure of the MPF driver and resides in xrt-lib.ko. This driver
+
+* manages one or more leaf drivers so that multiple leaves can be managed as a group
+* provides access to root from leaves, so that parent calls, event notifications
+  and inter-leaf calls can happen
+
+In xmgmt, an initial partition driver instance is created by the root driver. It
+contains leaves that trigger further partition instances to be created to manage
+groups of leaves found on different partitions on hardware, such as VSEC, Shell,
+and User.
+
+leaves
+^^^^^^
+
+The leaf driver is a platform device driver whose life cycle is managed by
+a partition driver and which may or may not have real IO memory or IRQ resources. Leaf
+drivers are the real meat of xmgmt and contain platform specific code for the Shell and
+User partitions found on an MPF.
+
+A leaf driver may not have real hardware resources when it merely acts as a driver
+that manages certain in-memory states for xmgmt. These in-memory states could be
+shared by multiple other leaves.
+
+Leaf drivers assigned to specific hardware resources drive a specific subsystem in
+the device. To manipulate the subsystem or carry out a task, a leaf driver may ask
+for help from the root via parent calls and/or from other leaves via inter-leaf calls.
+
+A leaf can also broadcast events through the infrastructure code for other leaves
+to process. It can also receive event notifications from the infrastructure about
+certain events, such as post-creation or pre-exit of a particular leaf.
+
+
+Driver Interfaces
+=================
+
+xmgmt Driver Ioctls
+-------------------
+
+Ioctls exposed by the xmgmt driver to user space are enumerated in the following table:
+
+== ===================== ============================= ===========================
+#  Functionality         ioctl request code            data format
+== ===================== ============================= ===========================
+1  FPGA image download   XMGMT_IOCICAPDOWNLOAD_AXLF    xmgmt_ioc_bitstream_axlf
+2  CL frequency scaling  XMGMT_IOCFREQSCALE            xmgmt_ioc_freqscaling
+== ===================== ============================= ===========================
+
+xmgmt Driver Sysfs
+------------------
+
+The xmgmt driver exposes a rich set of sysfs interfaces. Subsystem platform drivers
+export a sysfs node for every platform instance.
+
+Every partition also exports its UUIDs. See below for examples::
+
+  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/interface_uuids
+  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/logic_uuids
+
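+These nodes are plain text and can be read like any other file. A minimal user space
+sketch (the PCI BDF in the path is illustrative)::
+
+  #include <stdio.h>
+
+  /* Sketch: print the interface UUIDs exported by the main platform
+   * driver instance; the BDF 0000:06:00.0 is illustrative. */
+  int print_interface_uuids(void)
+  {
+          char line[80];
+          FILE *fp = fopen("/sys/bus/pci/devices/0000:06:00.0/"
+                           "xmgmt_main.0/interface_uuids", "r");
+
+          if (!fp)
+                  return -1;
+          while (fgets(line, sizeof(line), fp))
+                  printf("%s", line);
+          fclose(fp);
+          return 0;
+  }
+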
+
+hwmon
+-----
+
+The xmgmt driver exposes the standard hwmon interface to report voltage, current,
+temperature, power, etc. These can easily be viewed using the *sensors* command line utility.
+
+
+mailbox
+-------
+
+xmgmt communicates with the user physical function driver via a HW mailbox. Mailbox opcodes
+are defined in ``mailbox_proto.h``. The `Mailbox Inter-domain Communication Protocol
+<https://xilinx.github.io/XRT/master/html/mailbox.proto.html>`_ defines the full
+specification. xmgmt implements a subset of the specification. It provides the following
+services to the UPF driver (see the sketch after the list):
+
+1.  Responding to *are you there* requests, including determining if the two drivers are
+    running in the same OS domain
+2.  Provide sensor readings, loaded xclbin UUID, clock frequency, shell information, etc.
+3.  Perform PCIe hot reset
+4.  Download user compiled xclbin
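+
+A request message is a ``struct xcl_mailbox_req`` header followed by an opcode specific
+payload. A minimal sketch composing a sensor query (``XCL_MAILBOX_REQ_PEER_DATA`` carrying
+a ``struct xcl_mailbox_peer_data``), using the definitions from ``mailbox_proto.h``; the
+helper name is illustrative::
+
+  #include <stdlib.h>
+  #include <string.h>
+  #include <linux/xrt/mailbox_proto.h>
+
+  /* Sketch: build an XCL_MAILBOX_REQ_PEER_DATA request asking the
+   * mgmt pf for the SENSOR data group; the caller frees the buffer. */
+  static struct xcl_mailbox_req *build_sensor_query(size_t *out_len)
+  {
+          struct xcl_mailbox_peer_data pdata = {
+                  .kind = XCL_SENSOR,
+                  .size = sizeof(struct xcl_sensor),
+          };
+          size_t len = sizeof(struct xcl_mailbox_req) + sizeof(pdata) - 1;
+          struct xcl_mailbox_req *req = calloc(1, len);
+
+          if (!req)
+                  return NULL;
+          req->req = XCL_MAILBOX_REQ_PEER_DATA;
+          memcpy(req->data, &pdata, sizeof(pdata));
+          *out_len = len;
+          return req;
+  }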
+
+
+Platform Security Considerations
+================================
+
+`Security of Alveo Platform <https://xilinx.github.io/XRT/master/html/security.html>`_
+discusses the deployment options and security implications in great detail.
--
2.17.1


* [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files
From: Sonal Santan @ 2020-11-29  0:00 UTC
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Add XRT UAPI header files which describe the flash layout, the XRT
mailbox protocol, the xclbin/axlf FPGA image container format and the
XRT management physical function driver ioctl interfaces.

flash_xrt_data.h:
Layout used by XRT to store private data on flash.

mailbox_proto.h:
Mailbox opcodes and high level data structures representing
various kinds of information like sensors, clock, etc.

mailbox_transport.h:
Transport protocol used by mailbox.

xclbin.h:
Container format used to store compiled FPGA image which includes
bitstream and metadata.

xmgmt-ioctl.h:
Ioctls defined by the management physical function driver:
* XMGMT_IOCICAPDOWNLOAD_AXLF
  xclbin download which programs the user partition
* XMGMT_IOCFREQSCALE
  Program the clocks driving user partition

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 include/uapi/linux/xrt/flash_xrt_data.h    |  67 ++++
 include/uapi/linux/xrt/mailbox_proto.h     | 394 +++++++++++++++++++
 include/uapi/linux/xrt/mailbox_transport.h |  74 ++++
 include/uapi/linux/xrt/xclbin.h            | 418 +++++++++++++++++++++
 include/uapi/linux/xrt/xmgmt-ioctl.h       |  72 ++++
 5 files changed, 1025 insertions(+)
 create mode 100644 include/uapi/linux/xrt/flash_xrt_data.h
 create mode 100644 include/uapi/linux/xrt/mailbox_proto.h
 create mode 100644 include/uapi/linux/xrt/mailbox_transport.h
 create mode 100644 include/uapi/linux/xrt/xclbin.h
 create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h

diff --git a/include/uapi/linux/xrt/flash_xrt_data.h b/include/uapi/linux/xrt/flash_xrt_data.h
new file mode 100644
index 000000000000..0cafc2f38fbe
--- /dev/null
+++ b/include/uapi/linux/xrt/flash_xrt_data.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef _FLASH_XRT_DATA_H_
+#define _FLASH_XRT_DATA_H_
+
+#define XRT_DATA_MAGIC	"XRTDATA"
+
+/*
+ * This header file contains the data structure for XRT metadata on flash. This
+ * file is included in user space utilities and kernel drivers. The data
+ * structure is used to describe on-flash XRT data which is written by the utility
+ * and read by the driver. Any change to the data structure should either be
+ * backward compatible or cause the version to be bumped up.
+ */
+
+struct flash_data_ident {
+	char fdi_magic[7];
+	char fdi_version;
+};
+
+/*
+ * On-flash metadata describing XRT data on flash. Either fdh_id_begin or
+ * fdh_id_end should be at a well-known location on flash so that the reader
+ * can easily pick up fdi_version from flash before it tries to interpret
+ * the whole data structure.
+ * E.g., align the header at the end of the flash so that fdh_id_end is at a
+ * well-known location, or align the header at the beginning of the flash so
+ * that fdh_id_begin is at a well-known location.
+ */
+struct flash_data_header {
+	struct flash_data_ident fdh_id_begin;
+	uint32_t fdh_data_offset;
+	uint32_t fdh_data_len;
+	uint32_t fdh_data_parity;
+	uint8_t fdh_reserved[16];
+	struct flash_data_ident fdh_id_end;
+};
+
+static inline uint32_t flash_xrt_data_get_parity32(unsigned char *buf, size_t n)
+{
+	char *p;
+	size_t i;
+	size_t len;
+	uint32_t parity = 0;
+
+	for (len = 0; len < n; len += 4) {
+		uint32_t tmp = 0;
+		size_t thislen = n - len;
+
+		/* One word at a time. */
+		if (thislen > 4)
+			thislen = 4;
+
+		for (i = 0, p = (char *)&tmp; i < thislen; i++)
+			p[i] = buf[len + i];
+		parity ^= tmp;
+	}
+	return parity;
+}
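+
+/*
+ * Usage sketch (illustrative, not part of the interface): verify the
+ * on-flash data against the header after both have been read back:
+ *
+ *	ok = !memcmp(hdr.fdh_id_begin.fdi_magic, XRT_DATA_MAGIC, 7) &&
+ *	     flash_xrt_data_get_parity32(buf, hdr.fdh_data_len) ==
+ *	     hdr.fdh_data_parity;
+ */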
+
+#endif
diff --git a/include/uapi/linux/xrt/mailbox_proto.h b/include/uapi/linux/xrt/mailbox_proto.h
new file mode 100644
index 000000000000..2aa782d86792
--- /dev/null
+++ b/include/uapi/linux/xrt/mailbox_proto.h
@@ -0,0 +1,394 @@
+/* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0 */
+/*
+ *  Copyright (C) 2019-2020, Xilinx Inc
+ */
+
+#ifndef _XCL_MB_PROTOCOL_H_
+#define _XCL_MB_PROTOCOL_H_
+
+#ifndef __KERNEL__
+#include <stdint.h>
+#else
+#include <linux/types.h>
+#endif
+
+/*
+ * This header file contains mailbox protocol b/w mgmt and user pfs.
+ * - Any changes made here should maintain backward compatibility.
+ * - If it's not possible, new OP code should be added and version number should
+ *   be bumped up.
+ * - Support for old OP code should never be removed.
+ */
+#define XCL_MB_PROTOCOL_VER	0U
+
+/*
+ * UUID_SZ should ALWAYS have the same number
+ * as the MACRO UUID_SIZE defined in linux/uuid.h
+ */
+#define XCL_UUID_SZ		16
+
+/**
+ * enum mailbox_request - List of all mailbox request OPCODE. Some OP code
+ *                        requires arguments, which is defined as corresponding
+ *                        data structures below. Response to the request usually
+ *                        is a int32_t containing the error code. Some responses
+ *                        are more complicated and require a data structure,
+ *                        which is also defined below in this file.
+ * @XCL_MAILBOX_REQ_UNKNOWN: invalid OP code
+ * @XCL_MAILBOX_REQ_TEST_READY: test msg is ready (post only, internal test only)
+ * @XCL_MAILBOX_REQ_TEST_READ: fetch test msg from peer (internal test only)
+ * @XCL_MAILBOX_REQ_LOCK_BITSTREAM: lock down xclbin on mgmt pf (not implemented)
+ * @XCL_MAILBOX_REQ_UNLOCK_BITSTREAM: unlock xclbin on mgmt pf (not implemented)
+ * @XCL_MAILBOX_REQ_HOT_RESET: request mgmt pf driver to reset the board
+ * @XCL_MAILBOX_REQ_FIREWALL: firewall trip detected on mgmt pf (post only)
+ * @XCL_MAILBOX_REQ_LOAD_XCLBIN_KADDR: download xclbin (pointed to by a pointer)
+ * @XCL_MAILBOX_REQ_LOAD_XCLBIN: download xclbin (bitstream is in payload)
+ * @XCL_MAILBOX_REQ_RECLOCK: set clock frequency
+ * @XCL_MAILBOX_REQ_PEER_DATA: read specified data from peer
+ * @XCL_MAILBOX_REQ_USER_PROBE: for user pf to probe the peer mgmt pf
+ * @XCL_MAILBOX_REQ_MGMT_STATE: for mgmt pf to notify user pf of its state change
+ *                          (post only)
+ * @XCL_MAILBOX_REQ_CHG_SHELL: shell change is required on mgmt pf (post only)
+ * @XCL_MAILBOX_REQ_PROGRAM_SHELL: request mgmt pf driver to reprogram shell
+ */
+enum xcl_mailbox_request {
+	XCL_MAILBOX_REQ_UNKNOWN =		0,
+	XCL_MAILBOX_REQ_TEST_READY =		1,
+	XCL_MAILBOX_REQ_TEST_READ =		2,
+	XCL_MAILBOX_REQ_LOCK_BITSTREAM =	3,
+	XCL_MAILBOX_REQ_UNLOCK_BITSTREAM =	4,
+	XCL_MAILBOX_REQ_HOT_RESET =		5,
+	XCL_MAILBOX_REQ_FIREWALL =		6,
+	XCL_MAILBOX_REQ_LOAD_XCLBIN_KADDR =	7,
+	XCL_MAILBOX_REQ_LOAD_XCLBIN =		8,
+	XCL_MAILBOX_REQ_RECLOCK =		9,
+	XCL_MAILBOX_REQ_PEER_DATA =		10,
+	XCL_MAILBOX_REQ_USER_PROBE =		11,
+	XCL_MAILBOX_REQ_MGMT_STATE =		12,
+	XCL_MAILBOX_REQ_CHG_SHELL =		13,
+	XCL_MAILBOX_REQ_PROGRAM_SHELL =		14,
+	XCL_MAILBOX_REQ_READ_P2P_BAR_ADDR =	15,
+	/* Version 0 OP code ends */
+};
+
+static inline const char *mailbox_req2name(enum xcl_mailbox_request req)
+{
+	switch (req) {
+	case XCL_MAILBOX_REQ_TEST_READY: return "XCL_MAILBOX_REQ_TEST_READY";
+	case XCL_MAILBOX_REQ_TEST_READ: return "XCL_MAILBOX_REQ_TEST_READ";
+	case XCL_MAILBOX_REQ_LOCK_BITSTREAM: return "XCL_MAILBOX_REQ_LOCK_BITSTREAM";
+	case XCL_MAILBOX_REQ_UNLOCK_BITSTREAM: return "XCL_MAILBOX_REQ_UNLOCK_BITSTREAM";
+	case XCL_MAILBOX_REQ_HOT_RESET: return "XCL_MAILBOX_REQ_HOT_RESET";
+	case XCL_MAILBOX_REQ_FIREWALL: return "XCL_MAILBOX_REQ_FIREWALL";
+	case XCL_MAILBOX_REQ_LOAD_XCLBIN_KADDR: return "XCL_MAILBOX_REQ_LOAD_XCLBIN_KADDR";
+	case XCL_MAILBOX_REQ_LOAD_XCLBIN: return "XCL_MAILBOX_REQ_LOAD_XCLBIN";
+	case XCL_MAILBOX_REQ_RECLOCK: return "XCL_MAILBOX_REQ_RECLOCK";
+	case XCL_MAILBOX_REQ_PEER_DATA: return "XCL_MAILBOX_REQ_PEER_DATA";
+	case XCL_MAILBOX_REQ_USER_PROBE: return "XCL_MAILBOX_REQ_USER_PROBE";
+	case XCL_MAILBOX_REQ_MGMT_STATE: return "XCL_MAILBOX_REQ_MGMT_STATE";
+	case XCL_MAILBOX_REQ_CHG_SHELL: return "XCL_MAILBOX_REQ_CHG_SHELL";
+	case XCL_MAILBOX_REQ_PROGRAM_SHELL: return "XCL_MAILBOX_REQ_PROGRAM_SHELL";
+	case XCL_MAILBOX_REQ_READ_P2P_BAR_ADDR: return "XCL_MAILBOX_REQ_READ_P2P_BAR_ADDR";
+	default: return "UNKNOWN";
+	}
+}
+
+/**
+ * struct mailbox_req_bitstream_lock - MAILBOX_REQ_LOCK_BITSTREAM and
+ *				       MAILBOX_REQ_UNLOCK_BITSTREAM payload type
+ * @uuid: uuid of the xclbin
+ */
+struct xcl_mailbox_req_bitstream_lock {
+	uint64_t reserved;
+	uint8_t uuid[XCL_UUID_SZ];
+};
+
+/**
+ * enum group_kind - Groups of data that can be fetched from mgmt side
+ * @SENSOR: all kinds of sensor readings
+ * @ICAP: ICAP IP related information
+ * @BDINFO: Board Info, serial_num, mac_address
+ * @MIG_ECC: ECC statistics
+ * @FIREWALL: AF detected time, status
+ */
+enum xcl_group_kind {
+	XCL_SENSOR = 0,
+	XCL_ICAP,
+	XCL_BDINFO,
+	XCL_MIG_ECC,
+	XCL_FIREWALL,
+	XCL_DNA,
+	XCL_SUBDEV,
+};
+
+static inline const char *mailbox_group_kind2name(enum xcl_group_kind kind)
+{
+	switch (kind) {
+	case XCL_SENSOR: return "XCL_SENSOR";
+	case XCL_ICAP: return "XCL_ICAP";
+	case XCL_BDINFO: return "XCL_BDINFO";
+	case XCL_MIG_ECC: return "XCL_MIG_ECC";
+	case XCL_FIREWALL: return "XCL_FIREWALL";
+	case XCL_DNA: return "XCL_DNA";
+	case XCL_SUBDEV: return "XCL_SUBDEV";
+	default: return "UNKNOWN";
+	}
+}
+
+/**
+ * struct xcl_board_info - Data structure used to fetch BDINFO group
+ */
+#define BOARD_INFO_STR_LEN	256
+#define BOARD_INFO_MAC_LEN	6
+#define BOARD_INFO_PAD_LEN	26
+struct xcl_board_info {
+	char	 serial_num[BOARD_INFO_STR_LEN];
+	char	 mac_addr0[BOARD_INFO_MAC_LEN];
+	char	 padding0[BOARD_INFO_PAD_LEN];
+	char	 mac_addr1[BOARD_INFO_MAC_LEN];
+	char	 padding1[BOARD_INFO_PAD_LEN];
+	char	 mac_addr2[BOARD_INFO_MAC_LEN];
+	char	 padding2[BOARD_INFO_PAD_LEN];
+	char	 mac_addr3[BOARD_INFO_MAC_LEN];
+	char	 padding3[BOARD_INFO_PAD_LEN];
+	char	 revision[BOARD_INFO_STR_LEN];
+	char	 bd_name[BOARD_INFO_STR_LEN];
+	char	 bmc_ver[BOARD_INFO_STR_LEN];
+	uint32_t max_power;
+	uint32_t fan_presence;
+	uint32_t config_mode;
+	char     exp_bmc_ver[BOARD_INFO_STR_LEN];
+	uint32_t mac_contiguous_num;
+	char     mac_addr_first[BOARD_INFO_MAC_LEN];
+};
+
+/**
+ * struct xcl_sensor - Data structure used to fetch SENSOR group
+ */
+struct xcl_sensor {
+	uint32_t vol_12v_pex;
+	uint32_t vol_12v_aux;
+	uint32_t cur_12v_pex;
+	uint32_t cur_12v_aux;
+	uint32_t vol_3v3_pex;
+	uint32_t vol_3v3_aux;
+	uint32_t cur_3v3_aux;
+	uint32_t ddr_vpp_btm;
+	uint32_t sys_5v5;
+	uint32_t top_1v2;
+	uint32_t vol_1v8;
+	uint32_t vol_0v85;
+	uint32_t ddr_vpp_top;
+	uint32_t mgt0v9avcc;
+	uint32_t vol_12v_sw;
+	uint32_t mgtavtt;
+	uint32_t vcc1v2_btm;
+	uint32_t fpga_temp;
+	uint32_t fan_temp;
+	uint32_t fan_rpm;
+	uint32_t dimm_temp0;
+	uint32_t dimm_temp1;
+	uint32_t dimm_temp2;
+	uint32_t dimm_temp3;
+	uint32_t vccint_vol;
+	uint32_t vccint_curr;
+	uint32_t se98_temp0;
+	uint32_t se98_temp1;
+	uint32_t se98_temp2;
+	uint32_t cage_temp0;
+	uint32_t cage_temp1;
+	uint32_t cage_temp2;
+	uint32_t cage_temp3;
+	uint32_t hbm_temp0;
+	uint32_t cur_3v3_pex;
+	uint32_t cur_0v85;
+	uint32_t vol_3v3_vcc;
+	uint32_t vol_1v2_hbm;
+	uint32_t vol_2v5_vpp;
+	uint32_t vccint_bram;
+	uint32_t version;
+	uint32_t oem_id;
+	uint32_t vccint_temp;
+	uint32_t vol_12v_aux1;
+	uint32_t vol_vcc1v2_i;
+	uint32_t vol_v12_in_i;
+	uint32_t vol_v12_in_aux0_i;
+	uint32_t vol_v12_in_aux1_i;
+	uint32_t vol_vccaux;
+	uint32_t vol_vccaux_pmc;
+	uint32_t vol_vccram;
+};
+
+/**
+ * struct xcl_hwicap - Data structure used to fetch ICAP group
+ */
+struct xcl_pr_region {
+	uint64_t freq_data;
+	uint64_t freq_kernel;
+	uint64_t freq_system;
+	uint64_t freq_unused;
+	uint64_t freq_cntr_data;
+	uint64_t freq_cntr_kernel;
+	uint64_t freq_cntr_system;
+	uint64_t freq_cntr_unused;
+	uint64_t idcode;
+	uint8_t uuid[XCL_UUID_SZ];
+	uint64_t mig_calib;
+	uint64_t data_retention;
+};
+
+/**
+ * struct xcl_mig_ecc -  Data structure used to fetch MIG_ECC group
+ */
+struct xcl_mig_ecc {
+	uint64_t mem_type;
+	uint64_t mem_idx;
+	uint64_t ecc_enabled;
+	uint64_t ecc_status;
+	uint64_t ecc_ce_cnt;
+	uint64_t ecc_ue_cnt;
+	uint64_t ecc_ce_ffa;
+	uint64_t ecc_ue_ffa;
+};
+
+/**
+ * struct xcl_firewall -  Data structure used to fetch FIREWALL group
+ */
+struct xcl_firewall {
+	uint64_t max_level;
+	uint64_t curr_status;
+	uint64_t curr_level;
+	uint64_t err_detected_status;
+	uint64_t err_detected_level;
+	uint64_t err_detected_time;
+};
+
+
+/**
+ * struct xcl_dna -  Data structure used to fetch DNA group
+ */
+struct xcl_dna {
+	uint64_t status;
+	uint32_t dna[4];
+	uint64_t capability;
+	uint64_t dna_version;
+	uint64_t revision;
+};
+/**
+ * Data structure used to fetch SUBDEV group
+ */
+enum xcl_subdev_return_code {
+	XRT_MSG_SUBDEV_RTN_UNCHANGED = 1,
+	XRT_MSG_SUBDEV_RTN_PARTIAL,
+	XRT_MSG_SUBDEV_RTN_COMPLETE,
+	XRT_MSG_SUBDEV_RTN_PENDINGPLP,
+};
+struct xcl_subdev {
+	uint32_t ver;
+	enum xcl_subdev_return_code rtncode;
+	uint64_t checksum;
+	uint64_t size;
+	uint64_t offset;
+	uint64_t data[1];
+};
+/**
+ * struct mailbox_subdev_peer - MAILBOX_REQ_PEER_DATA payload type
+ * @kind: data group
+ * @size: buffer size for receiving response
+ */
+struct xcl_mailbox_peer_data {
+	enum xcl_group_kind kind;
+	uint32_t padding;
+	uint64_t size;
+	uint64_t entries;
+	uint64_t offset;
+};
+
+/**
+ * struct mailbox_conn - MAILBOX_REQ_USER_PROBE payload type
+ * @kaddr: KVA of the verification data buffer
+ * @paddr: physical address of the verification data buffer
+ * @crc32: CRC value of the verification data buffer
+ * @version: protocol version supported by peer
+ */
+struct xcl_mailbox_conn {
+	uint64_t kaddr;
+	uint64_t paddr;
+	uint32_t crc32;
+	uint32_t version;
+};
+
+#define	XCL_COMM_ID_SIZE		2048
+#define XCL_MB_PEER_READY		(1UL << 0)
+#define XCL_MB_PEER_SAME_DOMAIN		(1UL << 1)
+/**
+ * struct mailbox_conn_resp - MAILBOX_REQ_USER_PROBE response payload type
+ * @version: protocol version should be used
+ * @conn_flags: connection status
+ * @chan_switch: bitmap to indicate SW / HW channel for each OP code msg
+ * @comm_id: user defined cookie
+ */
+struct xcl_mailbox_conn_resp {
+	uint32_t version;
+	uint32_t reserved;
+	uint64_t conn_flags;
+	uint64_t chan_switch;
+	char comm_id[XCL_COMM_ID_SIZE];
+};
+
+#define	XCL_MB_STATE_ONLINE	(1UL << 0)
+#define	XCL_MB_STATE_OFFLINE	(1UL << 1)
+/**
+ * struct mailbox_peer_state - MAILBOX_REQ_MGMT_STATE payload type
+ * @state_flags: peer state flags
+ */
+struct xcl_mailbox_peer_state {
+	uint64_t state_flags;
+};
+
+/**
+ * struct mailbox_bitstream_kaddr - MAILBOX_REQ_LOAD_XCLBIN_KADDR payload type
+ * @addr: pointer to xclbin body
+ */
+struct xcl_mailbox_bitstream_kaddr {
+	uint64_t addr;
+};
+
+/**
+ * struct mailbox_clock_freqscaling - MAILBOX_REQ_RECLOCK payload type
+ * @region: region of clock
+ * @target_freqs: array of target clock frequencies (max clocks: 16)
+ */
+struct xcl_mailbox_clock_freqscaling {
+	unsigned int region;
+	unsigned short target_freqs[16];
+};
+
+/**
+ * struct mailbox_req - mailbox request message header
+ * @req: opcode
+ * @flags: flags of this message
+ * @data: payload of variable length
+ */
+struct xcl_mailbox_req {
+	uint64_t flags;
+	enum xcl_mailbox_request req;
+	char data[1]; /* variable length of payload */
+};
+
+/**
+ * struct mailbox_p2p_bar_addr
+ * @bar_addr: p2p bar address
+ * @bar_len: p2p bar length
+ */
+struct xcl_mailbox_p2p_bar_addr {
+	uint64_t  p2p_bar_addr;
+	uint64_t  p2p_bar_len;
+};
+
+static inline const char *mailbox_chan2name(bool sw_ch)
+{
+	return sw_ch ? "SW-CHANNEL" : "HW-CHANNEL";
+}
+
+#endif /* _XCL_MB_PROTOCOL_H_ */
diff --git a/include/uapi/linux/xrt/mailbox_transport.h b/include/uapi/linux/xrt/mailbox_transport.h
new file mode 100644
index 000000000000..4823446797a6
--- /dev/null
+++ b/include/uapi/linux/xrt/mailbox_transport.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef _XCL_MB_TRANSPORT_H_
+#define _XCL_MB_TRANSPORT_H_
+
+/*
+ * This header file contains data structures used in mailbox transport layer
+ * b/w mgmt and user pfs. Any changes made here should maintain backward
+ * compatibility.
+ */
+
+/**
+ * struct sw_chan - mailbox software channel message metadata. This defines the
+ *                  interface between daemons (MPD and MSD) and mailbox's
+ *                  read or write callbacks. A mailbox message (either a request
+ *                  or response) is wrapped by this data structure as payload.
+ *                  A sw_chan is passed between mailbox driver and daemon via
+ *                  read / write driver callbacks. And it is also passed between
+ *                  MPD and MSD via vendor defined interface (TCP socket, etc).
+ * @sz: payload size
+ * @flags: flags of this message as in struct mailbox_req
+ * @id: message ID
+ * @data: payload (struct mailbox_req or response data matching the request)
+ */
+struct xcl_sw_chan {
+	uint64_t sz;
+	uint64_t flags;
+	uint64_t id;
+	char data[1]; /* variable length of payload */
+};
+
+/**
+ * A packet transported by the mailbox hardware channel.
+ * When extending, only add new data structures to the body. Add a new flag
+ * if the new feature can be safely ignored by the peer; otherwise, add a new type.
+ */
+enum packet_type {
+	PKT_INVALID = 0,
+	PKT_TEST,
+	PKT_MSG_START,
+	PKT_MSG_BODY
+};
+
+#define	PACKET_SIZE	16 /* Number of DWORD. */
+
+/* Lower 8 bits for type, the rest for flags. Total packet size is 64 bytes */
+#define	PKT_TYPE_MASK		0xff
+#define	PKT_TYPE_MSG_END	(1 << 31)
+struct mailbox_pkt {
+	struct {
+		u32		type;
+		u32		payload_size;
+	} hdr;
+	union {
+		u32		data[PACKET_SIZE - 2];
+		struct {
+			u64	msg_req_id;
+			u32	msg_flags;
+			u32	msg_size;
+			u32	payload[0];
+		} msg_start;
+		struct {
+			u32	payload[0];
+		} msg_body;
+	} body;
+} __packed;
+
+#endif /* _XCL_MB_TRANSPORT_H_ */
diff --git a/include/uapi/linux/xrt/xclbin.h b/include/uapi/linux/xrt/xclbin.h
new file mode 100644
index 000000000000..885cae1700f9
--- /dev/null
+++ b/include/uapi/linux/xrt/xclbin.h
@@ -0,0 +1,418 @@
+/* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0 */
+/*
+ *  Xilinx FPGA compiled binary container format
+ *
+ *  Copyright (C) 2015-2020, Xilinx Inc
+ */
+
+
+#ifndef _XCLBIN_H_
+#define _XCLBIN_H_
+
+#ifdef _WIN32
+  #include <cstdint>
+  #include <algorithm>
+  #include "windows/uuid.h"
+#else
+  #if defined(__KERNEL__)
+    #include <linux/types.h>
+    #include <linux/uuid.h>
+    #include <linux/version.h>
+  #elif defined(__cplusplus)
+    #include <cstdlib>
+    #include <cstdint>
+    #include <algorithm>
+    #include <uuid/uuid.h>
+  #else
+    #include <stdlib.h>
+    #include <stdint.h>
+    #include <uuid/uuid.h>
+  #endif
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+    /**
+     * Container format for Xilinx bitstreams, metadata and other
+     * binary blobs.
+     * Every segment must be aligned at 8 byte boundary with null byte padding
+     * between adjacent segments if required.
+     * For segments which are not present both offset and length must be 0 in
+     * the header.
+     * Currently only xclbin2\0 is recognized as file magic. In future if/when the file
+     * format is updated the magic string will be changed to xclbin3\0 and so on.
+     */
+    enum XCLBIN_MODE {
+        XCLBIN_FLAT,
+        XCLBIN_PR,
+        XCLBIN_TANDEM_STAGE2,
+        XCLBIN_TANDEM_STAGE2_WITH_PR,
+        XCLBIN_HW_EMU,
+        XCLBIN_SW_EMU,
+        XCLBIN_MODE_MAX
+    };
+
+    /*
+     *  AXLF LAYOUT
+     *  -----------
+     *
+     *  -----------------------------------------
+     *  | Magic                                 |
+     *  -----------------------------------------
+     *  | Header                                |
+     *  -----------------------------------------
+     *  | One or more section headers           |
+     *  -----------------------------------------
+     *  | Matching number of sections with data |
+     *  -----------------------------------------
+     *
+     */
+
+    enum axlf_section_kind {
+        BITSTREAM = 0,
+        CLEARING_BITSTREAM,
+        EMBEDDED_METADATA,
+        FIRMWARE,
+        DEBUG_DATA,
+        SCHED_FIRMWARE,
+        MEM_TOPOLOGY,
+        CONNECTIVITY,
+        IP_LAYOUT,
+        DEBUG_IP_LAYOUT,
+        DESIGN_CHECK_POINT,
+        CLOCK_FREQ_TOPOLOGY,
+        MCS,
+        BMC,
+        BUILD_METADATA,
+        KEYVALUE_METADATA,
+        USER_METADATA,
+        DNA_CERTIFICATE,
+        PDI,
+        BITSTREAM_PARTIAL_PDI,
+        PARTITION_METADATA,
+        EMULATION_DATA,
+        SYSTEM_METADATA,
+        SOFT_KERNEL,
+        ASK_FLASH,
+        AIE_METADATA,
+        ASK_GROUP_TOPOLOGY,
+        ASK_GROUP_CONNECTIVITY
+    };
+
+    enum MEM_TYPE {
+        MEM_DDR3,
+        MEM_DDR4,
+        MEM_DRAM,
+        MEM_STREAMING,
+        MEM_PREALLOCATED_GLOB,
+        MEM_ARE, //Aurora
+        MEM_HBM,
+        MEM_BRAM,
+        MEM_URAM,
+        MEM_STREAMING_CONNECTION
+    };
+
+    enum IP_TYPE {
+        IP_MB = 0,
+        IP_KERNEL, //kernel instance
+        IP_DNASC,
+        IP_DDR4_CONTROLLER,
+        IP_MEM_DDR4,
+        IP_MEM_HBM
+    };
+
+    struct axlf_section_header {
+        uint32_t m_sectionKind;             /* Section type */
+        char m_sectionName[16];             /* Examples: "stage2", "clear1", "clear2", "ocl1", "ocl2", "ublaze", "sched" */
+        uint64_t m_sectionOffset;           /* File offset of section data */
+        uint64_t m_sectionSize;             /* Size of section data */
+    };
+
+    struct axlf_header {
+        uint64_t m_length;                  /* Total size of the xclbin file */
+        uint64_t m_timeStamp;               /* Number of seconds since epoch when xclbin was created */
+        uint64_t m_featureRomTimeStamp;     /* TimeSinceEpoch of the featureRom */
+        uint16_t m_versionPatch;            /* Patch Version */
+        uint8_t m_versionMajor;             /* Major Version - Version: 2.1.0 */
+        uint8_t m_versionMinor;             /* Minor Version */
+        uint32_t m_mode;                    /* XCLBIN_MODE */
+	union {
+	    struct {
+		uint64_t m_platformId;      /* 64 bit platform ID: vendor-device-subvendor-subdev */
+		uint64_t m_featureId;       /* 64 bit feature id */
+	    } rom;
+	    unsigned char rom_uuid[16];     /* feature ROM UUID for which this xclbin was generated */
+	};
+        unsigned char m_platformVBNV[64];   /* e.g. xilinx:xil-accel-rd-ku115:4ddr-xpr:3.4: null terminated */
+	union {
+	    char m_next_axlf[16];           /* Name of next xclbin file in the daisy chain */
+	    uuid_t uuid;                    /* uuid of this xclbin */
+	};
+        char m_debug_bin[16];               /* Name of binary with debug information */
+        uint32_t m_numSections;             /* Number of section headers */
+    };
+
+    struct axlf {
+        char m_magic[8];                            /* Should be "xclbin2\0"  */
+        int32_t m_signature_length;                 /* Length of the signature. -1 indicates no signature */
+        unsigned char reserved[28];                 /* Note: Initialized to 0xFFs */
+
+        unsigned char m_keyBlock[256];              /* Signature for validation of binary */
+        uint64_t m_uniqueId;                        /* axlf's uniqueId, use it to skip redownload etc */
+        struct axlf_header m_header;                /* Inline header */
+        struct axlf_section_header m_sections[1];   /* One or more section headers follow */
+    };
+
+    typedef struct axlf xclBin;
+
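+    /*
+     * Illustrative sketch, not part of this patch: given "top" (an axlf
+     * image in memory) and a section index "i", both assumed, the section
+     * data can be located from its header:
+     *
+     *   const struct axlf_section_header *hdr = &top->m_sections[i];
+     *   const char *data = (const char *)top + hdr->m_sectionOffset;
+     */
+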
+    /**** BEGIN : Xilinx internal section *****/
+
+    /* bitstream information */
+    struct xlnx_bitstream {
+        uint8_t m_freq[8];
+        char bits[1];
+    };
+
+    /****   MEMORY TOPOLOGY SECTION ****/
+    struct mem_data {
+        uint8_t m_type; //enum corresponding to mem_type.
+        uint8_t m_used; //if 0 this bank is not present
+        union {
+            uint64_t m_size; //if mem_type DDR, then size in KB;
+            uint64_t route_id; //if streaming then "route_id"
+        };
+        union {
+            uint64_t m_base_address;//if DDR then the base address;
+            uint64_t flow_id; //if streaming then "flow id"
+        };
+        unsigned char m_tag[16]; //DDR: BANK0,1,2,3, has to be null terminated; if streaming then stream0, 1 etc
+    };
+
+    struct mem_topology {
+        int32_t m_count; //Number of mem_data
+        struct mem_data m_mem_data[1]; //Should be sorted on mem_type
+    };
+
+    /****   CONNECTIVITY SECTION ****/
+    /* Connectivity of each argument of Kernel. It will be in terms of argument
+     * index associated. For associating kernel instances with arguments and
+     * banks, start at the connectivity section. Using the m_ip_layout_index
+     * access the ip_data.m_name. Now we can associate this kernel instance
+     * with its original kernel name and get the connectivity as well. This
+     * enables us to form related groups of kernel instances.
+     */
+
+    struct connection {
+        int32_t arg_index; //From 0 to n, may not be contiguous as scalars are skipped
+        int32_t m_ip_layout_index; //index into the ip_layout section. ip_layout.m_ip_data[index].m_type == IP_KERNEL
+        int32_t mem_data_index; //index into m_mem_data. Flag an error if m_used is false.
+    };
+
+    struct connectivity {
+        int32_t m_count;
+        struct connection m_connection[1];
+    };
+
+
+    /****   IP_LAYOUT SECTION ****/
+
+    // IP Kernel
+    #define IP_INT_ENABLE_MASK    0x0001
+    #define IP_INTERRUPT_ID_MASK  0x00FE
+    #define IP_INTERRUPT_ID_SHIFT 0x1
+
+    enum IP_CONTROL {
+        AP_CTRL_HS = 0,
+        AP_CTRL_CHAIN = 1,
+        AP_CTRL_NONE = 2,
+        AP_CTRL_ME = 3,
+        ACCEL_ADAPTER = 4
+    };
+
+    #define IP_CONTROL_MASK  0xFF00
+    #define IP_CONTROL_SHIFT 0x8
+
+    /* IPs on AXI lite - their types, names, and base addresses.*/
+    struct ip_data {
+        uint32_t m_type; //map to IP_TYPE enum
+        union {
+            uint32_t properties; // Default: 32-bits to indicate ip specific property.
+                                 // m_type: IP_KERNEL
+                                 //         m_int_enable   : Bit  - 0x0000_0001;
+                                 //         m_interrupt_id : Bits - 0x0000_00FE;
+                                 //         m_ip_control   : Bits = 0x0000_FF00;
+            struct {             // m_type: IP_MEM_*
+               uint16_t m_index;
+               uint8_t m_pc_index;
+               uint8_t unused;
+            } indices;
+        };
+        uint64_t m_base_address;
+        uint8_t m_name[64]; //e.g. kernel name corresponding to a KERNEL instance; can embed CU name in future.
+    };
+
+    struct ip_layout {
+        int32_t m_count;
+        struct ip_data m_ip_data[1]; //All the ip_data needs to be sorted by m_base_address.
+    };
+
+    /*** Debug IP section layout ****/
+    enum DEBUG_IP_TYPE {
+        UNDEFINED = 0,
+        LAPC,
+        ILA,
+        AXI_MM_MONITOR,
+        AXI_TRACE_FUNNEL,
+        AXI_MONITOR_FIFO_LITE,
+        AXI_MONITOR_FIFO_FULL,
+        ACCEL_MONITOR,
+        AXI_STREAM_MONITOR,
+        AXI_STREAM_PROTOCOL_CHECKER,
+        TRACE_S2MM,
+        AXI_DMA,
+        TRACE_S2MM_FULL
+    };
+
+    struct debug_ip_data {
+        uint8_t m_type; // type of enum DEBUG_IP_TYPE
+        uint8_t m_index_lowbyte;
+        uint8_t m_properties;
+        uint8_t m_major;
+        uint8_t m_minor;
+        uint8_t m_index_highbyte;
+        uint8_t m_reserved[2];
+        uint64_t m_base_address;
+        char    m_name[128];
+    };
+
+    struct debug_ip_layout {
+        uint16_t m_count;
+        struct debug_ip_data m_debug_ip_data[1];
+    };
+
+    enum CLOCK_TYPE {                      /* Supported clock frequency types */
+        CT_UNUSED = 0,                     /* Initialized value */
+        CT_DATA   = 1,                     /* Data clock */
+        CT_KERNEL = 2,                     /* Kernel clock */
+        CT_SYSTEM = 3                      /* System Clock */
+    };
+
+    struct clock_freq {                    /* Clock Frequency Entry */
+        uint16_t m_freq_Mhz;               /* Frequency in MHz */
+        uint8_t m_type;                    /* Clock type (enum CLOCK_TYPE) */
+        uint8_t m_unused[5];               /* Not used - padding */
+        char m_name[128];                  /* Clock Name */
+    };
+
+    struct clock_freq_topology {           /* Clock frequency section */
+        int16_t m_count;                   /* Number of entries */
+        struct clock_freq m_clock_freq[1]; /* Clock array */
+    };
+
+    enum MCS_TYPE {                        /* Supported MCS file types */
+        MCS_UNKNOWN = 0,                   /* Initialized value */
+        MCS_PRIMARY = 1,                   /* The primary mcs file data */
+        MCS_SECONDARY = 2,                 /* The secondary mcs file data */
+    };
+
+    struct mcs_chunk {                     /* One chunk of MCS data */
+        uint8_t m_type;                    /* MCS data type */
+        uint8_t m_unused[7];               /* padding */
+        uint64_t m_offset;                 /* data offset from the start of the section */
+        uint64_t m_size;                   /* data size */
+    };
+
+    struct mcs {                           /* MCS data section */
+        int8_t m_count;                    /* Number of chunks */
+        int8_t m_unused[7];                /* padding */
+        struct mcs_chunk m_chunk[1];       /* MCS chunks followed by data */
+    };
+
+    struct bmc {                           /* bmc data section  */
+        uint64_t m_offset;                 /* data offset from the start of the section */
+        uint64_t m_size;                   /* data size (bytes) */
+        char m_image_name[64];             /* Name of the image (e.g., MSP432P401R) */
+        char m_device_name[64];            /* Device ID (e.g., VCU1525) */
+        char m_version[64];
+        char m_md5value[33];               /* MD5 expected value (e.g., 56027182079c0bd621761b7dab5a27ca) */
+        char m_padding[7];                 /* Padding */
+    };
+
+    struct soft_kernel {                   /* soft kernel data section  */
+        // Prefix Syntax:
+        //   mpo - member, pointer, offset
+        //     This variable represents a zero-terminated string
+        //     that is offset from the beginning of the section.
+        //
+        //     The pointer to access the string is initialized as follows:
+        //     char * pCharString = (address_of_section) + (mpo value)
+        uint32_t mpo_name;         // Name of the soft kernel
+        uint32_t m_image_offset;   // Image offset
+        uint32_t m_image_size;     // Image size
+        uint32_t mpo_version;      // Version
+        uint32_t mpo_md5_value;    // MD5 checksum
+        uint32_t mpo_symbol_name;  // Symbol name
+        uint32_t m_num_instances;  // Number of instances
+        uint8_t padding[36];       // Reserved for future use
+        uint8_t reservedExt[16];   // Reserved for future extended data
+    };
+
+    enum CHECKSUM_TYPE
+    {
+        CST_UNKNOWN = 0,
+        CST_SDBM = 1,
+        CST_LAST
+    };
+
+    /**** END : Xilinx internal section *****/
+
+# ifdef __cplusplus
+    namespace xclbin {
+      inline const axlf_section_header*
+      get_axlf_section(const axlf* top, axlf_section_kind kind)
+      {
+        auto begin = top->m_sections;
+        auto end = begin + top->m_header.m_numSections;
+        auto itr = std::find_if(begin,end,[kind](const axlf_section_header& sec) { return sec.m_sectionKind==(const uint32_t) kind; });
+        return (itr!=end) ? &(*itr) : nullptr;
+      }
+
+      // Helper for C++ section iteration.
+      // To keep with the current coding theme, the function
+      // get_axlf_section_next() was introduced to find the 'next' section
+      // of a given kind.
+      //
+      // Future TODO: Create a custom iterator and refactor the code base to use it.
+      //
+      // Example on how this function may be used:
+      //
+      // const axlf_section_header * pSection;
+      // const axlf* top = <xclbin image in memory>;
+      // for (pSection = xclbin::get_axlf_section( top, SOFT_KERNEL);
+      //      pSection != nullptr;
+      //      pSection = xclbin::get_axlf_section_next( top, pSection, SOFT_KERNEL)) {
+      //   <code to do work>
+      // }
+      inline const axlf_section_header*
+      get_axlf_section_next(const axlf* top, const axlf_section_header* current_section, axlf_section_kind kind)
+      {
+        if (top == nullptr) { return nullptr; }
+        if (current_section == nullptr) { return nullptr; }
+
+        auto end = top->m_sections + top->m_header.m_numSections;
+
+        auto begin = current_section + 1;        // Point to the next section
+        if (begin == end) { return nullptr; }
+
+        auto itr = std::find_if(begin, end, [kind](const axlf_section_header &sec) {return sec.m_sectionKind == (const uint32_t)kind;});
+        return (itr!=end) ? &(*itr) : nullptr;
+      }
+    }
+# endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/include/uapi/linux/xrt/xmgmt-ioctl.h b/include/uapi/linux/xrt/xmgmt-ioctl.h
new file mode 100644
index 000000000000..f949a7c21560
--- /dev/null
+++ b/include/uapi/linux/xrt/xmgmt-ioctl.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0 */
+/*
+ *  Copyright (C) 2015-2020, Xilinx Inc
+ *
+ */
+
+/**
+ * DOC: PCIe Kernel Driver for Management Physical Function
+ * Interfaces exposed by the *xmgmt* driver are defined in *xmgmt-ioctl.h*.
+ * Core functionality provided by *xmgmt* driver is described in the following table:
+ *
+ * ==== ====================================== ============================== ==================================
+ * #    Functionality                          ioctl request code             data format
+ * ==== ====================================== ============================== ==================================
+ * 1    FPGA image download                    XMGMT_IOCICAPDOWNLOAD_AXLF     xmgmt_ioc_bitstream_axlf
+ * 2    CL frequency scaling                   XMGMT_IOCFREQSCALE             xmgmt_ioc_freqscaling
+ * ==== ====================================== ============================== ==================================
+ *
+ */
+
+#ifndef _XMGMT_IOCALLS_POSIX_H_
+#define _XMGMT_IOCALLS_POSIX_H_
+
+#include <linux/ioctl.h>
+
+#define XMGMT_IOC_MAGIC	'X'
+#define XMGMT_NUM_SUPPORTED_CLOCKS 4
+
+#define XMGMT_IOC_FREQ_SCALE 0x2
+#define XMGMT_IOC_ICAP_DOWNLOAD_AXLF 0x6
+
+
+/**
+ * struct xmgmt_ioc_bitstream_axlf - load xclbin (AXLF) device image
+ * used with XMGMT_IOCICAPDOWNLOAD_AXLF ioctl
+ *
+ * @xclbin:	Pointer to user's xclbin structure in memory
+ */
+struct xmgmt_ioc_bitstream_axlf {
+	struct axlf *xclbin;
+};
+
+/**
+ * struct xmgmt_ioc_freqscaling - scale frequencies on the board using Xilinx clock wizard
+ * used with XMGMT_IOCFREQSCALE ioctl
+ *
+ * @ocl_region:	        PR region (currently only 0 is supported)
+ * @ocl_target_freq:	Array of requested frequencies; a value of zero in the array indicates the clock should be left untouched
+ */
+struct xmgmt_ioc_freqscaling {
+	unsigned int ocl_region;
+	unsigned short ocl_target_freq[XMGMT_NUM_SUPPORTED_CLOCKS];
+};
+
+#define DATA_CLK			0
+#define KERNEL_CLK			1
+#define SYSTEM_CLK			2
+
+#define XMGMT_IOCICAPDOWNLOAD_AXLF	_IOW(XMGMT_IOC_MAGIC, XMGMT_IOC_ICAP_DOWNLOAD_AXLF, struct xmgmt_ioc_bitstream_axlf)
+#define XMGMT_IOCFREQSCALE		_IOW(XMGMT_IOC_MAGIC, XMGMT_IOC_FREQ_SCALE, struct xmgmt_ioc_freqscaling)
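+
+/*
+ * Illustrative userspace sketch, not part of this patch; the device node
+ * path and "axlf_buf" are assumptions:
+ *
+ *   int fd = open("/dev/xmgmt0", O_RDWR);
+ *   struct xmgmt_ioc_bitstream_axlf obj = { .xclbin = axlf_buf };
+ *   ioctl(fd, XMGMT_IOCICAPDOWNLOAD_AXLF, &obj);
+ */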
+
+/*
+ * The following definitions are for binary compatibility with the classic XRT management driver
+ */
+
+#define XCLMGMT_IOCICAPDOWNLOAD_AXLF XMGMT_IOCICAPDOWNLOAD_AXLF
+#define XCLMGMT_IOCFREQSCALE XMGMT_IOCFREQSCALE
+
+#define xclmgmt_ioc_bitstream_axlf xmgmt_ioc_bitstream_axlf
+#define xclmgmt_ioc_freqscaling xmgmt_ioc_freqscaling
+
+#endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH Xilinx Alveo 3/8] fpga: xrt: infrastructure support for xmgmt driver
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers Sonal Santan
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files Sonal Santan
@ 2020-11-29  0:00 ` Sonal Santan
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 4/8] fpga: xrt: core infrastructure for xrt-lib module Sonal Santan
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-11-29  0:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Add infrastructure code for XRT management physical function
driver. This provides support for enumerating and extracting
sections from xclbin files, interacting with device tree nodes
found in xclbin and working with Alveo partitions.
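
As an illustration (hypothetical caller, not part of this patch), a
consumer of this infrastructure could copy a section out of an xclbin
and release it when done:

    void *metadata = NULL;
    uint64_t len = 0;

    if (!xrt_xclbin_get_section(xclbin_buf, PARTITION_METADATA,
                                &metadata, &len))
        vfree(metadata);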

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/alveo/common/xrt-metadata.c | 590 ++++++++++++++++++
 drivers/fpga/alveo/common/xrt-root.c     | 744 +++++++++++++++++++++++
 drivers/fpga/alveo/common/xrt-root.h     |  24 +
 drivers/fpga/alveo/common/xrt-xclbin.c   | 387 ++++++++++++
 drivers/fpga/alveo/common/xrt-xclbin.h   |  46 ++
 5 files changed, 1791 insertions(+)
 create mode 100644 drivers/fpga/alveo/common/xrt-metadata.c
 create mode 100644 drivers/fpga/alveo/common/xrt-root.c
 create mode 100644 drivers/fpga/alveo/common/xrt-root.h
 create mode 100644 drivers/fpga/alveo/common/xrt-xclbin.c
 create mode 100644 drivers/fpga/alveo/common/xrt-xclbin.h

diff --git a/drivers/fpga/alveo/common/xrt-metadata.c b/drivers/fpga/alveo/common/xrt-metadata.c
new file mode 100644
index 000000000000..5596619ed82d
--- /dev/null
+++ b/drivers/fpga/alveo/common/xrt-metadata.c
@@ -0,0 +1,590 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Metadata parse APIs
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/libfdt_env.h>
+#include "libfdt.h"
+#include "xrt-metadata.h"
+
+#define MAX_BLOB_SIZE	(4096 * 25)
+
+#define md_err(dev, fmt, args...)			\
+	dev_err(dev, "%s: "fmt, __func__, ##args)
+#define md_warn(dev, fmt, args...)			\
+	dev_warn(dev, "%s: "fmt, __func__, ##args)
+#define md_info(dev, fmt, args...)			\
+	dev_info(dev, "%s: "fmt, __func__, ##args)
+#define md_dbg(dev, fmt, args...)			\
+	dev_dbg(dev, "%s: "fmt, __func__, ##args)
+
+static int xrt_md_setprop(struct device *dev, char *blob, int offset,
+	const char *prop, const void *val, int size);
+static int xrt_md_overlay(struct device *dev, char *blob, int target,
+	const char *overlay_blob, int overlay_offset);
+static int xrt_md_get_endpoint(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name, int *ep_offset);
+
+long xrt_md_size(struct device *dev, const char *blob)
+{
+	long len = (long) fdt_totalsize(blob);
+
+	return (len > MAX_BLOB_SIZE) ? -EINVAL : len;
+}
+
+int xrt_md_create(struct device *dev, char **blob)
+{
+	int ret = 0;
+
+	WARN_ON(!blob);
+
+	*blob = vmalloc(MAX_BLOB_SIZE);
+	if (!*blob)
+		return -ENOMEM;
+
+	ret = fdt_create_empty_tree(*blob, MAX_BLOB_SIZE);
+	if (ret) {
+		md_err(dev, "format blob failed, ret = %d", ret);
+		goto failed;
+	}
+
+	ret = fdt_next_node(*blob, -1, NULL);
+	if (ret < 0) {
+		md_err(dev, "No Node, ret = %d", ret);
+		goto failed;
+	}
+
+	ret = fdt_add_subnode(*blob, ret, NODE_ENDPOINTS);
+	if (ret < 0)
+		md_err(dev, "add node failed, ret = %d", ret);
+
+failed:
+	if (ret < 0) {
+		vfree(*blob);
+		*blob = NULL;
+	} else {
+		ret = 0;
+	}
+
+	return ret;
+}
+
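+/*
+ * Illustrative sketch, not part of this patch: creating a blob and
+ * populating it with one endpoint (error handling elided):
+ *
+ *   char *blob;
+ *   struct xrt_md_endpoint ep = { .ep_name = NODE_VSEC };
+ *
+ *   if (!xrt_md_create(dev, &blob))
+ *           xrt_md_add_endpoint(dev, blob, &ep);
+ */
+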
+int xrt_md_add_node(struct device *dev, char *blob, int parent_offset,
+	const char *ep_name)
+{
+	int ret;
+
+	ret = fdt_add_subnode(blob, parent_offset, ep_name);
+	if (ret < 0 && ret != -FDT_ERR_EXISTS)
+		md_err(dev, "failed to add node %s. %d", ep_name, ret);
+
+	return ret;
+}
+
+int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
+	char *regmap_name)
+{
+	int ret;
+	int ep_offset;
+
+	ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name, &ep_offset);
+	if (ret) {
+		md_err(dev, "can not find ep %s", ep_name);
+		return -EINVAL;
+	}
+
+	ret = fdt_del_node(blob, ep_offset);
+	if (ret)
+		md_err(dev, "delete node %s failed, ret %d", ep_name, ret);
+
+	return ret;
+}
+
+static int __xrt_md_add_endpoint(struct device *dev, char *blob,
+	struct xrt_md_endpoint *ep, int *offset, bool root)
+{
+	int ret = 0;
+	int ep_offset;
+	u32 val, count = 0;
+	u64 io_range[2];
+	char comp[128];
+
+	if (!ep->ep_name) {
+		md_err(dev, "empty name");
+		return -EINVAL;
+	}
+
+	if (!root) {
+		ret = xrt_md_get_endpoint(dev, blob, NODE_ENDPOINTS, NULL,
+			&ep_offset);
+		if (ret) {
+			md_err(dev, "invalid blob, ret = %d", ret);
+			return -EINVAL;
+		}
+	} else {
+		ep_offset = 0;
+	}
+
+	ep_offset = xrt_md_add_node(dev, blob, ep_offset, ep->ep_name);
+	if (ep_offset < 0) {
+		md_err(dev, "add endpoint failed, ret = %d", ep_offset);
+		return -EINVAL;
+	}
+	if (offset)
+		*offset = ep_offset;
+
+	if (ep->size != 0) {
+		val = cpu_to_be32(ep->bar);
+		ret = xrt_md_setprop(dev, blob, ep_offset, PROP_BAR_IDX,
+				&val, sizeof(u32));
+		if (ret) {
+			md_err(dev, "set %s failed, ret %d",
+				PROP_BAR_IDX, ret);
+			goto failed;
+		}
+		io_range[0] = cpu_to_be64((u64)ep->bar_off);
+		io_range[1] = cpu_to_be64((u64)ep->size);
+		ret = xrt_md_setprop(dev, blob, ep_offset, PROP_IO_OFFSET,
+			io_range, sizeof(io_range));
+		if (ret) {
+			md_err(dev, "set %s failed, ret %d",
+				PROP_IO_OFFSET, ret);
+			goto failed;
+		}
+	}
+
+	if (ep->regmap) {
+		if (ep->regmap_ver) {
+			count = snprintf(comp, sizeof(comp),
+				"%s-%s", ep->regmap, ep->regmap_ver);
+			count++;
+		}
+
+		count += snprintf(comp + count, sizeof(comp) - count,
+			"%s", ep->regmap);
+		count++;
+
+		ret = xrt_md_setprop(dev, blob, ep_offset, PROP_COMPATIBLE,
+			comp, count);
+		if (ret) {
+			md_err(dev, "set %s failed, ret %d",
+				PROP_COMPATIBLE, ret);
+			goto failed;
+		}
+	}
+
+failed:
+	if (ret)
+		xrt_md_del_endpoint(dev, blob, ep->ep_name, NULL);
+
+	return ret;
+}
+
+int xrt_md_add_endpoint(struct device *dev, char *blob,
+	struct xrt_md_endpoint *ep)
+{
+	return __xrt_md_add_endpoint(dev, blob, ep, NULL, false);
+}
+
+static int xrt_md_get_endpoint(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name, int *ep_offset)
+{
+	int offset;
+	const char *name;
+
+	for (offset = fdt_next_node(blob, -1, NULL);
+	    offset >= 0;
+	    offset = fdt_next_node(blob, offset, NULL)) {
+		name = fdt_get_name(blob, offset, NULL);
+		if (!name || strncmp(name, ep_name, strlen(ep_name) + 1))
+			continue;
+		if (!regmap_name ||
+		    !fdt_node_check_compatible(blob, offset, regmap_name))
+			break;
+	}
+	if (offset < 0)
+		return -ENODEV;
+
+	*ep_offset = offset;
+
+	return 0;
+}
+
+int xrt_md_get_epname_pointer(struct device *dev, const char *blob,
+	 const char *ep_name, const char *regmap_name, const char **epname)
+{
+	int offset;
+	int ret;
+
+	ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+		&offset);
+	if (!ret && epname && offset >= 0)
+		*epname = fdt_get_name(blob, offset, NULL);
+
+	return ret;
+}
+
+int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
+	const char *regmap_name, const char *prop, const void **val, int *size)
+{
+	int offset;
+	int ret;
+
+	if (val)
+		*val = NULL;
+	if (ep_name) {
+		ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+			&offset);
+		if (ret) {
+			md_err(dev, "cannot get ep %s, regmap %s, ret = %d",
+				ep_name, regmap_name, ret);
+			return -EINVAL;
+		}
+	} else {
+		offset = fdt_next_node(blob, -1, NULL);
+		if (offset < 0) {
+			md_err(dev, "internal error, ret = %d", offset);
+			return -EINVAL;
+		}
+	}
+
+	if (val) {
+		*val = fdt_getprop(blob, offset, prop, size);
+		if (!*val) {
+			md_dbg(dev, "get ep %s, prop %s failed", ep_name, prop);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static int xrt_md_setprop(struct device *dev, char *blob, int offset,
+	 const char *prop, const void *val, int size)
+{
+	int ret;
+
+	ret = fdt_setprop(blob, offset, prop, val, size);
+	if (ret)
+		md_err(dev, "failed to set prop %d", ret);
+
+	return ret;
+}
+
+int xrt_md_set_prop(struct device *dev, char *blob,
+	const char *ep_name, const char *regmap_name,
+	const char *prop, const void *val, int size)
+{
+	int offset;
+	int ret;
+
+	if (ep_name) {
+		ret = xrt_md_get_endpoint(dev, blob, ep_name,
+			regmap_name, &offset);
+		if (ret) {
+			md_err(dev, "cannot get node %s, ret = %d",
+				ep_name, ret);
+			return -EINVAL;
+		}
+	} else {
+		offset = fdt_next_node(blob, -1, NULL);
+		if (offset < 0) {
+			md_err(dev, "internal error, ret = %d", offset);
+			return -EINVAL;
+		}
+	}
+
+	ret = xrt_md_setprop(dev, blob, offset, prop, val, size);
+	if (ret)
+		md_err(dev, "set prop %s failed, ret = %d", prop, ret);
+
+	return ret;
+}
+
+int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
+	const char *ep_name, const char *regmap_name, const char *new_ep_name)
+{
+	int offset, target;
+	int ret;
+	struct xrt_md_endpoint ep = {0};
+	const char *newepnm = new_ep_name ? new_ep_name : ep_name;
+
+	ret = xrt_md_get_endpoint(dev, src_blob, ep_name, regmap_name,
+		&offset);
+	if (ret)
+		return -EINVAL;
+
+	ret = xrt_md_get_endpoint(dev, blob, newepnm, regmap_name, &target);
+	if (ret) {
+		ep.ep_name = newepnm;
+		ret = __xrt_md_add_endpoint(dev, blob, &ep, &target,
+			fdt_parent_offset(src_blob, offset) == 0);
+		if (ret)
+			return -EINVAL;
+	}
+
+	ret = xrt_md_overlay(dev, blob, target, src_blob, offset);
+	if (ret)
+		md_err(dev, "overlay failed, ret = %d", ret);
+
+	return ret;
+}
+
+int xrt_md_copy_all_eps(struct device *dev, char *blob, const char *src_blob)
+{
+	return xrt_md_copy_endpoint(dev, blob, src_blob, NODE_ENDPOINTS,
+		NULL, NULL);
+}
+
+char *xrt_md_dup(struct device *dev, const char *blob)
+{
+	int ret;
+	char *dup_blob;
+
+	ret = xrt_md_create(dev, &dup_blob);
+	if (ret)
+		return NULL;
+	ret = xrt_md_overlay(dev, dup_blob, -1, blob, -1);
+	if (ret) {
+		vfree(dup_blob);
+		return NULL;
+	}
+
+	return dup_blob;
+}
+
+static int xrt_md_overlay(struct device *dev, char *blob, int target,
+	const char *overlay_blob, int overlay_offset)
+{
+	int	property, subnode;
+	int	ret;
+
+	WARN_ON(!blob || !overlay_blob);
+
+	if (!blob) {
+		md_err(dev, "blob is NULL");
+		return -EINVAL;
+	}
+
+	if (target < 0) {
+		target = fdt_next_node(blob, -1, NULL);
+		if (target < 0) {
+			md_err(dev, "invalid target");
+			return -EINVAL;
+		}
+	}
+	if (overlay_offset < 0) {
+		overlay_offset = fdt_next_node(overlay_blob, -1, NULL);
+		if (overlay_offset < 0) {
+			md_err(dev, "invalid overlay");
+			return -EINVAL;
+		}
+	}
+
+	fdt_for_each_property_offset(property, overlay_blob, overlay_offset) {
+		const char *name;
+		const void *prop;
+		int prop_len;
+
+		prop = fdt_getprop_by_offset(overlay_blob, property, &name,
+			&prop_len);
+		if (!prop || prop_len >= MAX_BLOB_SIZE) {
+			md_err(dev, "internal error");
+			return -EINVAL;
+		}
+
+		ret = xrt_md_setprop(dev, blob, target, name, prop,
+			prop_len);
+		if (ret) {
+			md_err(dev, "setprop failed, ret = %d", ret);
+			return ret;
+		}
+	}
+
+	fdt_for_each_subnode(subnode, overlay_blob, overlay_offset) {
+		const char *name = fdt_get_name(overlay_blob, subnode, NULL);
+		int nnode;
+
+		nnode = xrt_md_add_node(dev, blob, target, name);
+		if (nnode == -FDT_ERR_EXISTS)
+			nnode = fdt_subnode_offset(blob, target, name);
+		if (nnode < 0) {
+			md_err(dev, "add node failed, ret = %d", nnode);
+			return nnode;
+		}
+
+		ret = xrt_md_overlay(dev, blob, nnode, overlay_blob, subnode);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name,
+	char **next_ep, char **next_regmap)
+{
+	int offset, ret;
+
+	if (!ep_name) {
+		ret = xrt_md_get_endpoint(dev, blob, NODE_ENDPOINTS, NULL,
+			&offset);
+	} else {
+		ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+			&offset);
+	}
+
+	if (ret) {
+		*next_ep = NULL;
+		*next_regmap = NULL;
+		return -EINVAL;
+	}
+
+	offset = ep_name ? fdt_next_subnode(blob, offset) :
+		fdt_first_subnode(blob, offset);
+	if (offset < 0) {
+		*next_ep = NULL;
+		*next_regmap = NULL;
+		return -EINVAL;
+	}
+
+	*next_ep = (char *)fdt_get_name(blob, offset, NULL);
+	*next_regmap = (char *)fdt_stringlist_get(blob, offset, PROP_COMPATIBLE,
+		0, NULL);
+
+	return 0;
+}
+
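+/*
+ * Illustrative sketch, not part of this patch: walking all endpoints in a
+ * blob; the returned names point into the blob and must not be freed.
+ * use_endpoint() is a hypothetical consumer:
+ *
+ *   char *ep = NULL, *regmap = NULL;
+ *
+ *   while (!xrt_md_get_next_endpoint(dev, blob, ep, regmap, &ep, &regmap))
+ *           use_endpoint(ep, regmap);
+ */
+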
+int xrt_md_get_compatible_epname(struct device *dev, const char *blob,
+	const char *regmap_name, const char **ep_name)
+{
+	int ep_offset;
+
+	ep_offset = fdt_node_offset_by_compatible(blob, -1, regmap_name);
+	if (ep_offset < 0) {
+		*ep_name = NULL;
+		return -ENOENT;
+	}
+
+	*ep_name = (char *)fdt_get_name(blob, ep_offset, NULL);
+
+	return 0;
+}
+
+int xrt_md_uuid_strtoid(struct device *dev, const char *uuidstr, uuid_t *p_uuid)
+{
+	char *p;
+	const char *str;
+	char tmp[3] = { 0 };
+	int i, ret;
+
+	memset(p_uuid, 0, sizeof(*p_uuid));
+	p = (char *)p_uuid;
+	str = uuidstr + strlen(uuidstr) - 2;
+
+	for (i = 0; i < sizeof(*p_uuid) && str >= uuidstr; i++) {
+		tmp[0] = *str;
+		tmp[1] = *(str + 1);
+		ret = kstrtou8(tmp, 16, p);
+		if (ret) {
+			md_err(dev, "Invalid uuid %s", uuidstr);
+			return -EINVAL;
+		}
+		p++;
+		str -= 2;
+	}
+
+	return 0;
+}
+
+void xrt_md_pack(struct device *dev, char *blob)
+{
+	int ret;
+
+	ret = fdt_pack(blob);
+	if (ret)
+		md_err(dev, "pack failed %d", ret);
+}
+
+int xrt_md_get_intf_uuids(struct device *dev, const char *blob,
+	u32 *num_uuids, uuid_t *intf_uuids)
+{
+	int offset, count = 0;
+	int ret;
+	const char *uuid_str;
+
+	ret = xrt_md_get_endpoint(dev, blob, NODE_INTERFACES, NULL, &offset);
+	if (ret)
+		return -ENOENT;
+
+	for (offset = fdt_first_subnode(blob, offset);
+	    offset >= 0;
+	    offset = fdt_next_subnode(blob, offset)) {
+		uuid_str = fdt_getprop(blob, offset, PROP_INTERFACE_UUID,
+			NULL);
+		if (!uuid_str) {
+			md_err(dev, "empty intf uuid node");
+			return -EINVAL;
+		}
+
+		if (intf_uuids && count < *num_uuids) {
+			ret = xrt_md_uuid_strtoid(dev, uuid_str,
+				&intf_uuids[count]);
+			if (ret)
+				return -EINVAL;
+		}
+		count++;
+	}
+
+	*num_uuids = count;
+
+	return 0;
+}
+
+int xrt_md_check_uuids(struct device *dev, const char *blob, char *subset_blob)
+{
+	const char *subset_int_uuid = NULL;
+	const char *int_uuid = NULL;
+	int offset, subset_offset, off;
+	int ret;
+
+	ret = xrt_md_get_endpoint(dev, subset_blob, NODE_INTERFACES, NULL,
+		&subset_offset);
+	if (ret)
+		return -EINVAL;
+
+	ret = xrt_md_get_endpoint(dev, blob, NODE_INTERFACES, NULL,
+		&offset);
+	if (ret)
+		return -EINVAL;
+
+	for (subset_offset = fdt_first_subnode(subset_blob, subset_offset);
+	    subset_offset >= 0;
+	    subset_offset = fdt_next_subnode(subset_blob, subset_offset)) {
+		subset_int_uuid = fdt_getprop(subset_blob, subset_offset,
+			PROP_INTERFACE_UUID, NULL);
+		if (!subset_int_uuid)
+			return -EINVAL;
+		off = offset;
+
+		for (off = fdt_first_subnode(blob, off);
+		    off >= 0;
+		    off = fdt_next_subnode(blob, off)) {
+			int_uuid = fdt_getprop(blob, off,
+				PROP_INTERFACE_UUID, NULL);
+			if (!int_uuid)
+				return -EINVAL;
+			if (!strcmp(int_uuid, subset_int_uuid))
+				break;
+		}
+		if (off < 0)
+			return -ENOENT;
+	}
+
+	return 0;
+}
diff --git a/drivers/fpga/alveo/common/xrt-root.c b/drivers/fpga/alveo/common/xrt-root.c
new file mode 100644
index 000000000000..7cc23dfaf3cf
--- /dev/null
+++ b/drivers/fpga/alveo/common/xrt-root.c
@@ -0,0 +1,744 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/hwmon.h>
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-partition.h"
+#include "xrt-root.h"
+#include "xrt-metadata.h"
+#include "xrt-root.h"
+
+#define	XROOT_PDEV(xr)		((xr)->pdev)
+#define	XROOT_DEV(xr)		(&(XROOT_PDEV(xr)->dev))
+#define xroot_err(xr, fmt, args...)	\
+	dev_err(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_warn(xr, fmt, args...)	\
+	dev_warn(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_info(xr, fmt, args...)	\
+	dev_info(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_dbg(xr, fmt, args...)	\
+	dev_dbg(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+
+#define XRT_VSEC_ID	0x20
+#define	XROOT_PART_FIRST	(-1)
+#define	XROOT_PART_LAST		(-2)
+
+static int xroot_parent_cb(struct device *, void *, u32, void *);
+
+struct xroot_async_evt {
+	struct list_head list;
+	struct xrt_parent_ioctl_async_broadcast_evt evt;
+};
+
+struct xroot_event_cb {
+	struct list_head list;
+	bool initialized;
+	struct xrt_parent_ioctl_evt_cb cb;
+};
+
+struct xroot_events {
+	struct list_head cb_list;
+	struct mutex cb_lock;
+	struct work_struct cb_init_work;
+	struct mutex async_evt_lock;
+	struct list_head async_evt_list;
+	struct work_struct async_evt_work;
+};
+
+struct xroot_parts {
+	struct xrt_subdev_pool pool;
+	struct work_struct bringup_work;
+	atomic_t bringup_pending;
+	atomic_t bringup_failed;
+	struct completion bringup_comp;
+};
+
+struct xroot {
+	struct pci_dev *pdev;
+	struct xroot_events events;
+	struct xroot_parts parts;
+};
+
+struct xroot_part_match_arg {
+	enum xrt_subdev_id id;
+	int instance;
+};
+
+static bool xroot_part_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	struct xroot_part_match_arg *a = (struct xroot_part_match_arg *)arg;
+	struct xroot_part_match_arg *a = (struct xroot_part_match_arg *)arg;
+
+	return id == a->id && pdev->id == a->instance;
+}
+
+static int xroot_get_partition(struct xroot *xr, int instance,
+	struct platform_device **partp)
+{
+	int rc = 0;
+	struct xrt_subdev_pool *parts = &xr->parts.pool;
+	struct device *dev = DEV(xr->pdev);
+	struct xroot_part_match_arg arg = { XRT_SUBDEV_PART, instance };
+
+	if (instance == XROOT_PART_LAST) {
+		rc = xrt_subdev_pool_get(parts, XRT_SUBDEV_MATCH_NEXT,
+			*partp, dev, partp);
+	} else if (instance == XROOT_PART_FIRST) {
+		rc = xrt_subdev_pool_get(parts, XRT_SUBDEV_MATCH_PREV,
+			*partp, dev, partp);
+	} else {
+		rc = xrt_subdev_pool_get(parts, xroot_part_match,
+			&arg, dev, partp);
+	}
+
+	if (rc && rc != -ENOENT)
+		xroot_err(xr, "failed to hold partition %d: %d", instance, rc);
+	return rc;
+}
+
+static void xroot_put_partition(struct xroot *xr, struct platform_device *part)
+{
+	int inst = part->id;
+	int rc = xrt_subdev_pool_put(&xr->parts.pool, part, DEV(xr->pdev));
+
+	if (rc)
+		xroot_err(xr, "failed to release partition %d: %d", inst, rc);
+}
+
+static int
+xroot_partition_trigger_evt(struct xroot *xr, struct xroot_event_cb *cb,
+	struct platform_device *part, enum xrt_events evt)
+{
+	xrt_subdev_match_t match = cb->cb.xevt_match_cb;
+	xrt_event_cb_t evtcb = cb->cb.xevt_cb;
+	void *arg = cb->cb.xevt_match_arg;
+	struct xrt_partition_ioctl_event e = { evt, &cb->cb };
+	struct xrt_event_arg_subdev esd = { XRT_SUBDEV_PART, part->id };
+	int rc;
+
+	if (match(XRT_SUBDEV_PART, part, arg)) {
+		rc = evtcb(cb->cb.xevt_pdev, evt, &esd);
+		if (rc)
+			return rc;
+	}
+
+	return xrt_subdev_ioctl(part, XRT_PARTITION_EVENT, &e);
+}
+
+static void
+xroot_event_partition(struct xroot *xr, int instance, enum xrt_events evt)
+{
+	int ret;
+	struct platform_device *pdev = NULL;
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+
+	BUG_ON(instance < 0);
+	ret = xroot_get_partition(xr, instance, &pdev);
+	if (ret)
+		return;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		int rc;
+
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		if (!tmp->initialized)
+			continue;
+
+		rc = xroot_partition_trigger_evt(xr, tmp, pdev, evt);
+		if (rc) {
+			list_del(&tmp->list);
+			vfree(tmp);
+		}
+	}
+	mutex_unlock(&xr->events.cb_lock);
+
+	(void) xroot_put_partition(xr, pdev);
+}
+
+int xroot_create_partition(void *root, char *dtb)
+{
+	struct xroot *xr = (struct xroot *)root;
+	int ret;
+
+	atomic_inc(&xr->parts.bringup_pending);
+	ret = xrt_subdev_pool_add(&xr->parts.pool,
+		XRT_SUBDEV_PART, xroot_parent_cb, xr, dtb);
+	if (ret >= 0) {
+		schedule_work(&xr->parts.bringup_work);
+	} else {
+		atomic_dec(&xr->parts.bringup_pending);
+		atomic_inc(&xr->parts.bringup_failed);
+		xroot_err(xr, "failed to create partition: %d", ret);
+	}
+	return ret;
+}
+
+static int xroot_destroy_single_partition(struct xroot *xr, int instance)
+{
+	struct platform_device *pdev = NULL;
+	int ret;
+
+	BUG_ON(instance < 0);
+	ret = xroot_get_partition(xr, instance, &pdev);
+	if (ret)
+		return ret;
+
+	xroot_event_partition(xr, instance, XRT_EVENT_PRE_REMOVAL);
+
+	/* Now tear down all children in this partition. */
+	ret = xrt_subdev_ioctl(pdev, XRT_PARTITION_FINI_CHILDREN, NULL);
+	(void) xroot_put_partition(xr, pdev);
+	if (!ret) {
+		ret = xrt_subdev_pool_del(&xr->parts.pool,
+			XRT_SUBDEV_PART, instance);
+	}
+
+	return ret;
+}
+
+static int xroot_destroy_partition(struct xroot *xr, int instance)
+{
+	struct platform_device *target = NULL;
+	struct platform_device *deps = NULL;
+	int ret;
+
+	BUG_ON(instance < 0);
+	/*
+	 * Make sure target partition exists and can't go away before
+	 * we remove its dependents
+	 */
+	ret = xroot_get_partition(xr, instance, &target);
+	if (ret)
+		return ret;
+
+	/*
+	 * Remove all partitions that depend on the target one.
+	 * Assuming subdevs in a higher partition ID can depend on ones in
+	 * lower ID partitions, we remove them in reverse order.
+	 */
+	while (xroot_get_partition(xr, XROOT_PART_LAST, &deps) != -ENOENT) {
+		int inst = deps->id;
+
+		xroot_put_partition(xr, deps);
+		if (instance == inst)
+			break;
+		(void) xroot_destroy_single_partition(xr, inst);
+		deps = NULL;
+	}
+
+	/* Now we can remove the target partition. */
+	xroot_put_partition(xr, target);
+	return xroot_destroy_single_partition(xr, instance);
+}
+
+static int xroot_lookup_partition(struct xroot *xr,
+	struct xrt_parent_ioctl_lookup_partition *arg)
+{
+	int rc = -ENOENT;
+	struct platform_device *part = NULL;
+
+	while (rc < 0 && xroot_get_partition(xr, XROOT_PART_LAST,
+		&part) != -ENOENT) {
+		if (arg->xpilp_match_cb(XRT_SUBDEV_PART, part,
+			arg->xpilp_match_arg)) {
+			rc = part->id;
+		}
+		xroot_put_partition(xr, part);
+	}
+	return rc;
+}
+
+static void xroot_evt_cb_init_work(struct work_struct *work)
+{
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+	struct platform_device *part = NULL;
+	struct xroot *xr =
+		container_of(work, struct xroot, events.cb_init_work);
+
+	mutex_lock(&xr->events.cb_lock);
+
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		if (tmp->initialized)
+			continue;
+
+		while (xroot_get_partition(xr, XROOT_PART_LAST,
+			&part) != -ENOENT) {
+			int rc = xroot_partition_trigger_evt(xr, tmp, part,
+				XRT_EVENT_POST_CREATION);
+
+			(void) xroot_put_partition(xr, part);
+			if (rc & XRT_EVENT_CB_STOP) {
+				list_del(&tmp->list);
+				vfree(tmp);
+				tmp = NULL;
+				break;
+			}
+		}
+
+		if (tmp)
+			tmp->initialized = true;
+	}
+
+	mutex_unlock(&xr->events.cb_lock);
+}
+
+static bool xroot_evt(struct xroot *xr, enum xrt_events evt)
+{
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+	int rc;
+	bool success = true;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		rc = tmp->cb.xevt_cb(tmp->cb.xevt_pdev, evt, NULL);
+		if (rc & XRT_EVENT_CB_ERR)
+			success = false;
+		if (rc & XRT_EVENT_CB_STOP) {
+			list_del(&tmp->list);
+			vfree(tmp);
+		}
+	}
+	mutex_unlock(&xr->events.cb_lock);
+
+	return success;
+}
+
+static void xroot_evt_async_evt_work(struct work_struct *work)
+{
+	struct xroot_async_evt *tmp;
+	struct xroot *xr =
+		container_of(work, struct xroot, events.async_evt_work);
+	bool success;
+
+	mutex_lock(&xr->events.async_evt_lock);
+	while (!list_empty(&xr->events.async_evt_list)) {
+		tmp = list_first_entry(&xr->events.async_evt_list,
+			struct xroot_async_evt, list);
+		list_del(&tmp->list);
+		mutex_unlock(&xr->events.async_evt_lock);
+
+		success = xroot_evt(xr, tmp->evt.xaevt_event);
+		if (tmp->evt.xaevt_cb) {
+			tmp->evt.xaevt_cb(tmp->evt.xaevt_pdev,
+				tmp->evt.xaevt_event, tmp->evt.xaevt_arg,
+				success);
+		}
+		vfree(tmp);
+
+		mutex_lock(&xr->events.async_evt_lock);
+	}
+	mutex_unlock(&xr->events.async_evt_lock);
+}
+
+static void xroot_evt_init(struct xroot *xr)
+{
+	INIT_LIST_HEAD(&xr->events.cb_list);
+	INIT_LIST_HEAD(&xr->events.async_evt_list);
+	mutex_init(&xr->events.async_evt_lock);
+	mutex_init(&xr->events.cb_lock);
+	INIT_WORK(&xr->events.cb_init_work, xroot_evt_cb_init_work);
+	INIT_WORK(&xr->events.async_evt_work, xroot_evt_async_evt_work);
+}
+
+static void xroot_evt_fini(struct xroot *xr)
+{
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+
+	flush_scheduled_work();
+
+	BUG_ON(!list_empty(&xr->events.async_evt_list));
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		list_del(&tmp->list);
+		vfree(tmp);
+	}
+	mutex_unlock(&xr->events.cb_lock);
+}
+
+static int xroot_evt_cb_add(struct xroot *xr,
+	struct xrt_parent_ioctl_evt_cb *cb)
+{
+	struct xroot_event_cb *new = vzalloc(sizeof(*new));
+
+	if (!new)
+		return -ENOMEM;
+
+	cb->xevt_hdl = new;
+	new->cb = *cb;
+	new->initialized = false;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_add(&new->list, &xr->events.cb_list);
+	mutex_unlock(&xr->events.cb_lock);
+
+	schedule_work(&xr->events.cb_init_work);
+	return 0;
+}
+
+static int xroot_async_evt_add(struct xroot *xr,
+	struct xrt_parent_ioctl_async_broadcast_evt *arg)
+{
+	struct xroot_async_evt *new = vzalloc(sizeof(*new));
+
+	if (!new)
+		return -ENOMEM;
+
+	new->evt = *arg;
+
+	mutex_lock(&xr->events.async_evt_lock);
+	list_add(&new->list, &xr->events.async_evt_list);
+	mutex_unlock(&xr->events.async_evt_lock);
+
+	schedule_work(&xr->events.async_evt_work);
+	return 0;
+}
+
+static void xroot_evt_cb_del(struct xroot *xr, void *hdl)
+{
+	struct xroot_event_cb *cb = (struct xroot_event_cb *)hdl;
+	const struct list_head *ptr;
+	struct xroot_event_cb *tmp;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each(ptr, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		if (tmp == cb)
+			break;
+	}
+	list_del(&cb->list);
+	mutex_unlock(&xr->events.cb_lock);
+	vfree(cb);
+}
+
+static int xroot_get_leaf(struct xroot *xr,
+	struct xrt_parent_ioctl_get_leaf *arg)
+{
+	int rc = -ENOENT;
+	struct platform_device *part = NULL;
+
+	while (rc && xroot_get_partition(xr, XROOT_PART_LAST,
+		&part) != -ENOENT) {
+		rc = xrt_subdev_ioctl(part, XRT_PARTITION_GET_LEAF, arg);
+		xroot_put_partition(xr, part);
+	}
+	return rc;
+}
+
+static int xroot_put_leaf(struct xroot *xr,
+	struct xrt_parent_ioctl_put_leaf *arg)
+{
+	int rc = -ENOENT;
+	struct platform_device *part = NULL;
+
+	while (rc && xroot_get_partition(xr, XROOT_PART_LAST,
+		&part) != -ENOENT) {
+		rc = xrt_subdev_ioctl(part, XRT_PARTITION_PUT_LEAF, arg);
+		xroot_put_partition(xr, part);
+	}
+	return rc;
+}
+
+static int xroot_parent_cb(struct device *dev, void *parg, u32 cmd, void *arg)
+{
+	struct xroot *xr = (struct xroot *)parg;
+	int rc = 0;
+
+	switch (cmd) {
+	/* Leaf actions. */
+	case XRT_PARENT_GET_LEAF: {
+		struct xrt_parent_ioctl_get_leaf *getleaf =
+			(struct xrt_parent_ioctl_get_leaf *)arg;
+		rc = xroot_get_leaf(xr, getleaf);
+		break;
+	}
+	case XRT_PARENT_PUT_LEAF: {
+		struct xrt_parent_ioctl_put_leaf *putleaf =
+			(struct xrt_parent_ioctl_put_leaf *)arg;
+		rc = xroot_put_leaf(xr, putleaf);
+		break;
+	}
+	case XRT_PARENT_GET_LEAF_HOLDERS: {
+		struct xrt_parent_ioctl_get_holders *holders =
+			(struct xrt_parent_ioctl_get_holders *)arg;
+		rc = xrt_subdev_pool_get_holders(&xr->parts.pool,
+			holders->xpigh_pdev, holders->xpigh_holder_buf,
+			holders->xpigh_holder_buf_len);
+		break;
+	}
+
+
+	/* Partition actions. */
+	case XRT_PARENT_CREATE_PARTITION:
+		rc = xroot_create_partition(xr, (char *)arg);
+		break;
+	case XRT_PARENT_REMOVE_PARTITION:
+		rc = xroot_destroy_partition(xr, (int)(uintptr_t)arg);
+		break;
+	case XRT_PARENT_LOOKUP_PARTITION: {
+		struct xrt_parent_ioctl_lookup_partition *getpart =
+			(struct xrt_parent_ioctl_lookup_partition *)arg;
+		rc = xroot_lookup_partition(xr, getpart);
+		break;
+	}
+	case XRT_PARENT_WAIT_PARTITION_BRINGUP:
+		rc = xroot_wait_for_bringup(xr) ? 0 : -EINVAL;
+		break;
+
+
+	/* Event actions. */
+	case XRT_PARENT_ADD_EVENT_CB: {
+		struct xrt_parent_ioctl_evt_cb *cb =
+			(struct xrt_parent_ioctl_evt_cb *)arg;
+		rc = xroot_evt_cb_add(xr, cb);
+		break;
+	}
+	case XRT_PARENT_REMOVE_EVENT_CB:
+		xroot_evt_cb_del(xr, arg);
+		rc = 0;
+		break;
+	case XRT_PARENT_ASYNC_BOARDCAST_EVENT:
+		rc = xroot_async_evt_add(xr,
+			(struct xrt_parent_ioctl_async_broadcast_evt *)arg);
+		break;
+
+
+	/* Device info. */
+	case XRT_PARENT_GET_RESOURCE: {
+		struct xrt_parent_ioctl_get_res *res =
+			(struct xrt_parent_ioctl_get_res *)arg;
+		res->xpigr_res = xr->pdev->resource;
+		break;
+	}
+	case XRT_PARENT_GET_ID: {
+		struct xrt_parent_ioctl_get_id *id =
+			(struct xrt_parent_ioctl_get_id *)arg;
+
+		id->xpigi_vendor_id = xr->pdev->vendor;
+		id->xpigi_device_id = xr->pdev->device;
+		id->xpigi_sub_vendor_id = xr->pdev->subsystem_vendor;
+		id->xpigi_sub_device_id = xr->pdev->subsystem_device;
+		break;
+	}
+
+
+	case XRT_PARENT_HOT_RESET: {
+		xroot_hot_reset(xr->pdev);
+		break;
+	}
+
+	case XRT_PARENT_HWMON: {
+		struct xrt_parent_ioctl_hwmon *hwmon =
+			(struct xrt_parent_ioctl_hwmon *)arg;
+
+		if (hwmon->xpih_register) {
+			hwmon->xpih_hwmon_dev =
+				hwmon_device_register_with_info(DEV(xr->pdev),
+				hwmon->xpih_name, hwmon->xpih_drvdata, NULL,
+				hwmon->xpih_groups);
+		} else {
+			(void) hwmon_device_unregister(hwmon->xpih_hwmon_dev);
+		}
+		break;
+	}
+
+	default:
+		xroot_err(xr, "unknown IOCTL cmd %d", cmd);
+		rc = -EINVAL;
+		break;
+	}
+
+	return rc;
+}
+
+static void xroot_bringup_partition_work(struct work_struct *work)
+{
+	struct platform_device *pdev = NULL;
+	struct xroot *xr = container_of(work, struct xroot, parts.bringup_work);
+
+	while (xroot_get_partition(xr, XROOT_PART_LAST, &pdev) != -ENOENT) {
+		int r, i;
+
+		i = pdev->id;
+		r = xrt_subdev_ioctl(pdev, XRT_PARTITION_INIT_CHILDREN, NULL);
+		(void) xroot_put_partition(xr, pdev);
+		if (r == -EEXIST)
+			continue; /* Already brought up, nothing to do. */
+		if (r)
+			atomic_inc(&xr->parts.bringup_failed);
+
+		xroot_event_partition(xr, i, XRT_EVENT_POST_CREATION);
+
+		if (atomic_dec_and_test(&xr->parts.bringup_pending))
+			complete(&xr->parts.bringup_comp);
+	}
+}
+
+static void xroot_parts_init(struct xroot *xr)
+{
+	xrt_subdev_pool_init(DEV(xr->pdev), &xr->parts.pool);
+	INIT_WORK(&xr->parts.bringup_work, xroot_bringup_partition_work);
+	atomic_set(&xr->parts.bringup_pending, 0);
+	atomic_set(&xr->parts.bringup_failed, 0);
+	init_completion(&xr->parts.bringup_comp);
+}
+
+static void xroot_parts_fini(struct xroot *xr)
+{
+	flush_scheduled_work();
+	(void) xrt_subdev_pool_fini(&xr->parts.pool);
+}
+
+int xroot_add_vsec_node(void *root, char *dtb)
+{
+	struct xroot *xr = (struct xroot *)root;
+	struct device *dev = DEV(xr->pdev);
+	struct xrt_md_endpoint ep = { 0 };
+	int cap = 0, ret = 0;
+	u32 off_low, off_high, vsec_bar, header;
+	u64 vsec_off;
+
+	while ((cap = pci_find_next_ext_capability(xr->pdev, cap,
+	    PCI_EXT_CAP_ID_VNDR))) {
+		pci_read_config_dword(xr->pdev, cap + PCI_VNDR_HEADER, &header);
+		if (PCI_VNDR_HEADER_ID(header) == XRT_VSEC_ID)
+			break;
+	}
+	if (!cap) {
+		xroot_info(xr, "No Vendor Specific Capability.");
+		return -ENOENT;
+	}
+
+	if (pci_read_config_dword(xr->pdev, cap+8, &off_low) ||
+	    pci_read_config_dword(xr->pdev, cap+12, &off_high)) {
+		xroot_err(xr, "pci_read vendor specific failed.");
+		return -EINVAL;
+	}
+
+	ep.ep_name = NODE_VSEC;
+	ret = xrt_md_add_endpoint(dev, dtb, &ep);
+	if (ret) {
+		xroot_err(xr, "add vsec metadata failed, ret %d", ret);
+		goto failed;
+	}
+
+	vsec_bar = cpu_to_be32(off_low & 0xf);
+	ret = xrt_md_set_prop(dev, dtb, NODE_VSEC,
+		NULL, PROP_BAR_IDX, &vsec_bar, sizeof(vsec_bar));
+	if (ret) {
+		xroot_err(xr, "add vsec bar idx failed, ret %d", ret);
+		goto failed;
+	}
+
+	vsec_off = cpu_to_be64(((u64)off_high << 32) | (off_low & ~0xfU));
+	ret = xrt_md_set_prop(dev, dtb, NODE_VSEC,
+		NULL, PROP_OFFSET, &vsec_off, sizeof(vsec_off));
+	if (ret) {
+		xroot_err(xr, "add vsec offset failed, ret %d", ret);
+		goto failed;
+	}
+
+failed:
+	return ret;
+}
+
+int xroot_add_simple_node(void *root, char *dtb, const char *endpoint)
+{
+	struct xroot *xr = (struct xroot *)root;
+	struct device *dev = DEV(xr->pdev);
+	struct xrt_md_endpoint ep = { 0 };
+	int ret = 0;
+
+	ep.ep_name = endpoint;
+	ret = xrt_md_add_endpoint(dev, dtb, &ep);
+	if (ret)
+		xroot_err(xr, "add %s failed, ret %d", endpoint, ret);
+
+	return ret;
+}
+
+bool xroot_wait_for_bringup(void *root)
+{
+	struct xroot *xr = (struct xroot *)root;
+
+	wait_for_completion(&xr->parts.bringup_comp);
+	return atomic_xchg(&xr->parts.bringup_failed, 0) == 0;
+}
+
+int xroot_probe(struct pci_dev *pdev, void **root)
+{
+	struct device *dev = DEV(pdev);
+	struct xroot *xr = NULL;
+
+	dev_info(dev, "%s: probing...", __func__);
+
+	xr = devm_kzalloc(dev, sizeof(*xr), GFP_KERNEL);
+	if (!xr)
+		return -ENOMEM;
+
+	xr->pdev = pdev;
+	xroot_parts_init(xr);
+	xroot_evt_init(xr);
+
+	*root = xr;
+	return 0;
+}
+
+void xroot_remove(void *root)
+{
+	struct xroot *xr = (struct xroot *)root;
+	struct platform_device *part = NULL;
+
+	xroot_info(xr, "leaving...");
+
+	if (xroot_get_partition(xr, XROOT_PART_FIRST, &part) == 0) {
+		int instance = part->id;
+
+		xroot_put_partition(xr, part);
+		(void) xroot_destroy_partition(xr, instance);
+	}
+
+	xroot_evt_fini(xr);
+	xroot_parts_fini(xr);
+}
+
+static void xroot_broadcast_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg, bool success)
+{
+	struct completion *comp = (struct completion *)arg;
+
+	complete(comp);
+}
+
+void xroot_broadcast(void *root, enum xrt_events evt)
+{
+	int rc;
+	struct completion comp;
+	struct xroot *xr = (struct xroot *)root;
+	struct xrt_parent_ioctl_async_broadcast_evt e = {
+		NULL, evt, xroot_broadcast_event_cb, &comp
+	};
+
+	init_completion(&comp);
+	rc = xroot_async_evt_add(xr, &e);
+	if (rc == 0)
+		wait_for_completion(&comp);
+	else
+		xroot_err(xr, "can't broadcast event (%d): %d", evt, rc);
+}
diff --git a/drivers/fpga/alveo/common/xrt-root.h b/drivers/fpga/alveo/common/xrt-root.h
new file mode 100644
index 000000000000..e2a1c4554feb
--- /dev/null
+++ b/drivers/fpga/alveo/common/xrt-root.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_ROOT_H_
+#define	_XRT_ROOT_H_
+
+#include <linux/pci.h>
+#include "xrt-subdev.h"
+
+int xroot_probe(struct pci_dev *pdev, void **root);
+void xroot_remove(void *root);
+bool xroot_wait_for_bringup(void *root);
+int xroot_add_vsec_node(void *root, char *dtb);
+int xroot_create_partition(void *root, char *dtb);
+int xroot_add_simple_node(void *root, char *dtb, const char *endpoint);
+void xroot_hot_reset(struct pci_dev *pdev);
+void xroot_broadcast(void *root, enum xrt_events evt);
+
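+/*
+ * Illustrative sketch, not part of this patch: typical use from a PCIe
+ * driver's probe and remove callbacks, with error handling elided and
+ * "dtb" obtained elsewhere:
+ *
+ *   void *root;
+ *
+ *   xroot_probe(pdev, &root);
+ *   xroot_create_partition(root, dtb);
+ *   xroot_wait_for_bringup(root);
+ *   ...
+ *   xroot_remove(root);
+ */
+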
+#endif	/* _XRT_ROOT_H_ */
diff --git a/drivers/fpga/alveo/common/xrt-xclbin.c b/drivers/fpga/alveo/common/xrt-xclbin.c
new file mode 100644
index 000000000000..b7db1b52a086
--- /dev/null
+++ b/drivers/fpga/alveo/common/xrt-xclbin.c
@@ -0,0 +1,387 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Kernel Driver XCLBIN parser
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors: David Zhang <davidzha@xilinx.com>
+ */
+
+#include <asm/errno.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include "xrt-xclbin.h"
+#include "xrt-metadata.h"
+
+/* Used for parsing bitstream header */
+#define XHI_EVEN_MAGIC_BYTE     0x0f
+#define XHI_ODD_MAGIC_BYTE      0xf0
+
+/* Extra mode for IDLE */
+#define XHI_OP_IDLE  -1
+#define XHI_BIT_HEADER_FAILURE -1
+
+/* The imaginary module length register */
+#define XHI_MLR                  15
+
+const char *xrt_xclbin_kind_to_string(enum axlf_section_kind kind)
+{
+	switch (kind) {
+	case BITSTREAM:			return "BITSTREAM";
+	case CLEARING_BITSTREAM:	return "CLEARING_BITSTREAM";
+	case EMBEDDED_METADATA:		return "EMBEDDED_METADATA";
+	case FIRMWARE:			return "FIRMWARE";
+	case DEBUG_DATA:		return "DEBUG_DATA";
+	case SCHED_FIRMWARE:		return "SCHED_FIRMWARE";
+	case MEM_TOPOLOGY:		return "MEM_TOPOLOGY";
+	case CONNECTIVITY:		return "CONNECTIVITY";
+	case IP_LAYOUT:			return "IP_LAYOUT";
+	case DEBUG_IP_LAYOUT:		return "DEBUG_IP_LAYOUT";
+	case DESIGN_CHECK_POINT:	return "DESIGN_CHECK_POINT";
+	case CLOCK_FREQ_TOPOLOGY:	return "CLOCK_FREQ_TOPOLOGY";
+	case MCS:			return "MCS";
+	case BMC:			return "BMC";
+	case BUILD_METADATA:		return "BUILD_METADATA";
+	case KEYVALUE_METADATA:		return "KEYVALUE_METADATA";
+	case USER_METADATA:		return "USER_METADATA";
+	case DNA_CERTIFICATE:		return "DNA_CERTIFICATE";
+	case PDI:			return "PDI";
+	case BITSTREAM_PARTIAL_PDI:	return "BITSTREAM_PARTIAL_PDI";
+	case PARTITION_METADATA:	return "PARTITION_METADATA";
+	case EMULATION_DATA:		return "EMULATION_DATA";
+	case SYSTEM_METADATA:		return "SYSTEM_METADATA";
+	case SOFT_KERNEL:		return "SOFT_KERNEL";
+	case ASK_FLASH:			return "ASK_FLASH";
+	case AIE_METADATA:		return "AIE_METADATA";
+	case ASK_GROUP_TOPOLOGY:	return "ASK_GROUP_TOPOLOGY";
+	case ASK_GROUP_CONNECTIVITY:	return "ASK_GROUP_CONNECTIVITY";
+	default:			return "UNKNOWN";
+	}
+}
+
+static const struct axlf_section_header *
+xrt_xclbin_get_section_hdr(const struct axlf *xclbin,
+	enum axlf_section_kind kind)
+{
+	int i = 0;
+
+	for (i = 0; i < xclbin->m_header.m_numSections; i++) {
+		if (xclbin->m_sections[i].m_sectionKind == kind)
+			return &xclbin->m_sections[i];
+	}
+
+	return NULL;
+}
+
+static int
+xrt_xclbin_check_section_hdr(const struct axlf_section_header *header,
+	uint64_t xclbin_len)
+{
+	return (header->m_sectionOffset + header->m_sectionSize) > xclbin_len ?
+		-EINVAL : 0;
+}
+
+static int xrt_xclbin_section_info(const struct axlf *xclbin,
+	enum axlf_section_kind kind,
+	uint64_t *offset, uint64_t *size)
+{
+	const struct axlf_section_header *memHeader = NULL;
+	uint64_t xclbin_len;
+	int err = 0;
+
+	memHeader = xrt_xclbin_get_section_hdr(xclbin, kind);
+	if (!memHeader)
+		return -EINVAL;
+
+	xclbin_len = xclbin->m_header.m_length;
+	err = xrt_xclbin_check_section_hdr(memHeader, xclbin_len);
+	if (err)
+		return err;
+
+	*offset = memHeader->m_sectionOffset;
+	*size = memHeader->m_sectionSize;
+
+	return 0;
+}
+
+/* caller should free the allocated memory for **data */
+int xrt_xclbin_get_section(const char *buf,
+	enum axlf_section_kind kind, void **data, uint64_t *len)
+{
+	const struct axlf *xclbin = (const struct axlf *)buf;
+	void *section = NULL;
+	int err = 0;
+	uint64_t offset = 0;
+	uint64_t size = 0;
+
+	err = xrt_xclbin_section_info(xclbin, kind, &offset, &size);
+	if (err)
+		return err;
+
+	section = vmalloc(size);
+	if (section == NULL)
+		return -ENOMEM;
+
+	memcpy(section, ((const char *)xclbin) + offset, size);
+
+	*data = section;
+	if (len)
+		*len = size;
+
+	return 0;
+}
+
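+/*
+ * Illustrative sketch, not part of this patch: fetching a copy of the
+ * IP_LAYOUT section from an xclbin image already held in "buf" (assumed):
+ *
+ *   struct ip_layout *layout = NULL;
+ *   uint64_t len = 0;
+ *
+ *   if (!xrt_xclbin_get_section(buf, IP_LAYOUT, (void **)&layout, &len))
+ *           vfree(layout);
+ */
+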
+/* parse bitstream header */
+int xrt_xclbin_parse_header(const unsigned char *data,
+	unsigned int size, struct XHwIcap_Bit_Header *header)
+{
+	unsigned int i;
+	unsigned int len;
+	unsigned int tmp;
+	unsigned int index;
+
+	/* Start Index at start of bitstream */
+	index = 0;
+
+	/* Initialize HeaderLength. If the header is returned early, it
+	 * indicates failure.
+	 */
+	header->HeaderLength = XHI_BIT_HEADER_FAILURE;
+
+	/* Get "Magic" length */
+	header->MagicLength = data[index++];
+	header->MagicLength = (header->MagicLength << 8) | data[index++];
+
+	/* Read in "magic" */
+	for (i = 0; i < header->MagicLength - 1; i++) {
+		tmp = data[index++];
+		if (i % 2 == 0 && tmp != XHI_EVEN_MAGIC_BYTE)
+			return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+		if (i % 2 == 1 && tmp != XHI_ODD_MAGIC_BYTE)
+			return -1;	/* INVALID_FILE_HEADER_ERROR */
+	}
+
+	/* Read null end of magic data. */
+	tmp = data[index++];
+
+	/* Read 0x01 (short) */
+	tmp = data[index++];
+	tmp = (tmp << 8) | data[index++];
+
+	/* Check the "0x01" half word */
+	if (tmp != 0x01)
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Read 'a' */
+	tmp = data[index++];
+	if (tmp != 'a')
+		return -1;	/* INVALID_FILE_HEADER_ERROR	*/
+
+	/* Get Design Name length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for design name and final null character. */
+	header->DesignName = vmalloc(len);
+	if (!header->DesignName)
+		return -1;
+
+	/* Read in Design Name */
+	for (i = 0; i < len; i++)
+		header->DesignName[i] = data[index++];
+
+	if (header->DesignName[len - 1] != '\0')
+		return -1;
+
+	/* Read 'b' */
+	tmp = data[index++];
+	if (tmp != 'b')
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Get Part Name length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for part name and final null character. */
+	header->PartName = vmalloc(len);
+	if (!header->PartName)
+		return -1;
+
+	/* Read in part name */
+	for (i = 0; i < len; i++)
+		header->PartName[i] = data[index++];
+
+	if (header->PartName[len - 1] != '\0')
+		return -1;
+
+	/* Read 'c' */
+	tmp = data[index++];
+	if (tmp != 'c')
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Get date length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for date and final null character. */
+	header->Date = vmalloc(len);
+	if (!header->Date)
+		return -1;
+
+	/* Read in date name */
+	for (i = 0; i < len; i++)
+		header->Date[i] = data[index++];
+
+	if (header->Date[len - 1] != '\0')
+		return -1;
+
+	/* Read 'd' */
+	tmp = data[index++];
+	if (tmp != 'd')
+		return -1;	/* INVALID_FILE_HEADER_ERROR  */
+
+	/* Get time length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for time and final null character. */
+	header->Time = vmalloc(len);
+	if (!header->Time)
+		return -1;
+
+	/* Read in time name */
+	for (i = 0; i < len; i++)
+		header->Time[i] = data[index++];
+
+	if (header->Time[len - 1] != '\0')
+		return -1;
+
+	/* Read 'e' */
+	tmp = data[index++];
+	if (tmp != 'e')
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Get byte length of bitstream */
+	header->BitstreamLength = data[index++];
+	header->BitstreamLength = (header->BitstreamLength << 8) | data[index++];
+	header->BitstreamLength = (header->BitstreamLength << 8) | data[index++];
+	header->BitstreamLength = (header->BitstreamLength << 8) | data[index++];
+	header->HeaderLength = index;
+
+	return 0;
+}
+
+void xrt_xclbin_free_header(struct XHwIcap_Bit_Header *header)
+{
+	vfree(header->DesignName);
+	vfree(header->PartName);
+	vfree(header->Date);
+	vfree(header->Time);
+}
+
+struct xrt_clock_desc {
+	char	*clock_ep_name;
+	u32	clock_xclbin_type;
+	char	*clkfreq_ep_name;
+} clock_desc[] = {
+	{
+		.clock_ep_name = NODE_CLK_KERNEL1,
+		.clock_xclbin_type = CT_DATA,
+		.clkfreq_ep_name = NODE_CLKFREQ_K1,
+	},
+	{
+		.clock_ep_name = NODE_CLK_KERNEL2,
+		.clock_xclbin_type = CT_KERNEL,
+		.clkfreq_ep_name = NODE_CLKFREQ_K2,
+	},
+	{
+		.clock_ep_name = NODE_CLK_KERNEL3,
+		.clock_xclbin_type = CT_SYSTEM,
+		.clkfreq_ep_name = NODE_CLKFREQ_HBM,
+	},
+};
+
+const char *clock_type2epname(enum CLOCK_TYPE type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
+		if (clock_desc[i].clock_xclbin_type == type)
+			return clock_desc[i].clock_ep_name;
+	}
+	return NULL;
+}
+
+static const char *clock_type2clkfreq_name(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
+		if (clock_desc[i].clock_xclbin_type == type)
+			return clock_desc[i].clkfreq_ep_name;
+	}
+	return NULL;
+}
+
+static int xrt_xclbin_add_clock_metadata(struct device *dev,
+	const char *xclbin, char *dtb)
+{
+	int i;
+	u16 freq;
+	struct clock_freq_topology *clock_topo;
+	int rc = xrt_xclbin_get_section(xclbin,
+		CLOCK_FREQ_TOPOLOGY, (void **)&clock_topo, NULL);
+
+	if (rc)
+		return 0;
+
+	for (i = 0; i < clock_topo->m_count; i++) {
+		u8 type = clock_topo->m_clock_freq[i].m_type;
+		const char *ep_name = clock_type2epname(type);
+		const char *counter_name = clock_type2clkfreq_name(type);
+
+		if (!ep_name || !counter_name)
+			continue;
+
+		freq = cpu_to_be16(clock_topo->m_clock_freq[i].m_freq_Mhz);
+		rc = xrt_md_set_prop(dev, dtb, ep_name,
+			NULL, PROP_CLK_FREQ, &freq, sizeof(freq));
+		if (rc)
+			break;
+
+		rc = xrt_md_set_prop(dev, dtb, ep_name,
+			NULL, PROP_CLK_CNT, counter_name, strlen(counter_name) + 1);
+		if (rc)
+			break;
+	}
+
+	vfree(clock_topo);
+
+	return rc;
+}
+
+int xrt_xclbin_get_metadata(struct device *dev, const char *xclbin, char **dtb)
+{
+	char *md = NULL, *newmd = NULL;
+	u64 len;
+	int rc = xrt_xclbin_get_section(xclbin, PARTITION_METADATA,
+		(void **)&md, &len);
+
+	if (rc)
+		goto done;
+
+	/* Sanity check the dtb section. */
+	if (xrt_md_size(dev, md) > len) {
+		rc = -EINVAL;
+		goto done;
+	}
+
+	newmd = xrt_md_dup(dev, md);
+	if (!newmd) {
+		rc = -EFAULT;
+		goto done;
+	}
+	/* Convert various needed xclbin sections into dtb. */
+	rc = xrt_xclbin_add_clock_metadata(dev, xclbin, newmd);
+
+done:
+	if (rc == 0)
+		*dtb = newmd;
+	else
+		vfree(newmd);
+	vfree(md);
+	return rc;
+}
diff --git a/drivers/fpga/alveo/common/xrt-xclbin.h b/drivers/fpga/alveo/common/xrt-xclbin.h
new file mode 100644
index 000000000000..05214d824790
--- /dev/null
+++ b/drivers/fpga/alveo/common/xrt-xclbin.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Xilinx Kernel Driver XCLBIN parser
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors: David Zhang <davidzha@xilinx.com>
+ */
+
+#ifndef _XRT_XCLBIN_H
+#define _XRT_XCLBIN_H
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/xrt/xclbin.h>
+
+#define	ICAP_XCLBIN_V2	"xclbin2"
+#define DMA_HWICAP_BITFILE_BUFFER_SIZE 1024
+#define MAX_XCLBIN_SIZE (1024 * 1024 * 1024) /* Assuming xclbin <= 1G, always */
+
+enum axlf_section_kind;
+struct axlf;
+
+/**
+ * Bitstream header information.
+ */
+struct XHwIcap_Bit_Header {
+	unsigned int HeaderLength;     /* Length of header in 32 bit words */
+	unsigned int BitstreamLength;  /* Length of bitstream to read in bytes*/
+	unsigned char *DesignName;     /* Design name get from bitstream */
+	unsigned char *PartName;       /* Part name read from bitstream */
+	unsigned char *Date;           /* Date read from bitstream header */
+	unsigned char *Time;           /* Bitstream creation time*/
+	unsigned int MagicLength;      /* Length of the magic numbers*/
+};
+
+const char *xrt_xclbin_kind_to_string(enum axlf_section_kind kind);
+int xrt_xclbin_get_section(const char *xclbin,
+	enum axlf_section_kind kind, void **data, uint64_t *len);
+int xrt_xclbin_get_metadata(struct device *dev, const char *xclbin, char **dtb);
+int xrt_xclbin_parse_header(const unsigned char *data,
+	unsigned int size, struct XHwIcap_Bit_Header *header);
+void xrt_xclbin_free_header(struct XHwIcap_Bit_Header *header);
+const char *clock_type2epname(enum CLOCK_TYPE type);
+
+#endif /* _XRT_XCLBIN_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH Xilinx Alveo 4/8] fpga: xrt: core infrastructure for xrt-lib module
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (2 preceding siblings ...)
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 3/8] fpga: xrt: infrastructure support for xmgmt driver Sonal Santan
@ 2020-11-29  0:00 ` Sonal Santan
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 5/8] fpga: xrt: platform drivers for subsystems in shell partition Sonal Santan
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-11-29  0:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Add xrt-lib kernel module infrastructure code which defines APIs
for working with device nodes, iterating over and looking up
platform devices, common interfaces for platform devices, and
plumbing of function calls and ioctls between platform devices and
parent partitions.
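
As an illustration (not part of this patch), a leaf driver built on
these APIs could look up a peer subdev, call into it and release it
roughly as follows, where XRT_CLOCK_SET is a hypothetical command:

  struct platform_device *leaf;

  leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_CLOCK,
          PLATFORM_DEVID_NONE);
  if (leaf) {
          (void) xrt_subdev_ioctl(leaf, XRT_CLOCK_SET, &freq);
          (void) xrt_subdev_put_leaf(pdev, leaf);
  }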

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/alveo/lib/xrt-cdev.c   |  234 +++++++
 drivers/fpga/alveo/lib/xrt-main.c   |  275 ++++++++
 drivers/fpga/alveo/lib/xrt-main.h   |   46 ++
 drivers/fpga/alveo/lib/xrt-subdev.c | 1007 +++++++++++++++++++++++++++
 4 files changed, 1562 insertions(+)
 create mode 100644 drivers/fpga/alveo/lib/xrt-cdev.c
 create mode 100644 drivers/fpga/alveo/lib/xrt-main.c
 create mode 100644 drivers/fpga/alveo/lib/xrt-main.h
 create mode 100644 drivers/fpga/alveo/lib/xrt-subdev.c

diff --git a/drivers/fpga/alveo/lib/xrt-cdev.c b/drivers/fpga/alveo/lib/xrt-cdev.c
new file mode 100644
index 000000000000..b7bef9c8e9ce
--- /dev/null
+++ b/drivers/fpga/alveo/lib/xrt-cdev.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA device node helper functions.
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include "xrt-subdev.h"
+
+extern struct class *xrt_class;
+
+#define	XRT_CDEV_DIR		"xfpga"
+#define	INODE2PDATA(inode)	\
+	container_of((inode)->i_cdev, struct xrt_subdev_platdata, xsp_cdev)
+#define	INODE2PDEV(inode)	\
+	to_platform_device(kobj_to_dev((inode)->i_cdev->kobj.parent))
+#define	CDEV_NAME(sysdev)	(strchr((sysdev)->kobj.name, '!') + 1)
+
+/* Allow the subdev to be accessed through its device node. */
+static void xrt_devnode_allowed(struct platform_device *pdev)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+
+	/* Allow new opens. */
+	mutex_lock(&pdata->xsp_devnode_lock);
+	pdata->xsp_devnode_online = true;
+	mutex_unlock(&pdata->xsp_devnode_lock);
+}
+
+/* Turn off access from cdev and wait for all existing users to go away. */
+static int xrt_devnode_disallowed(struct platform_device *pdev)
+{
+	int ret = 0;
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+
+	mutex_lock(&pdata->xsp_devnode_lock);
+
+	/* Prevent new opens. */
+	pdata->xsp_devnode_online = false;
+	/* Wait for existing user to close. */
+	while (!ret && pdata->xsp_devnode_ref) {
+		int rc;
+
+		mutex_unlock(&pdata->xsp_devnode_lock);
+		rc = wait_for_completion_killable(&pdata->xsp_devnode_comp);
+		mutex_lock(&pdata->xsp_devnode_lock);
+
+		if (rc == -ERESTARTSYS) {
+			/* Restore online state. */
+			pdata->xsp_devnode_online = true;
+			xrt_err(pdev, "%s is in use, ref=%d",
+				CDEV_NAME(pdata->xsp_sysdev),
+				pdata->xsp_devnode_ref);
+			ret = -EBUSY;
+		}
+	}
+
+	mutex_unlock(&pdata->xsp_devnode_lock);
+
+	return ret;
+}
+
+static struct platform_device *
+__xrt_devnode_open(struct inode *inode, bool excl)
+{
+	struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+	struct platform_device *pdev = INODE2PDEV(inode);
+	bool opened = false;
+
+	mutex_lock(&pdata->xsp_devnode_lock);
+
+	if (pdata->xsp_devnode_online) {
+		if (excl && pdata->xsp_devnode_ref) {
+			xrt_err(pdev, "%s has already been opened exclusively",
+				CDEV_NAME(pdata->xsp_sysdev));
+		} else if (!excl && pdata->xsp_devnode_excl) {
+			xrt_err(pdev, "%s has been opened exclusively",
+				CDEV_NAME(pdata->xsp_sysdev));
+		} else {
+			pdata->xsp_devnode_ref++;
+			pdata->xsp_devnode_excl = excl;
+			opened = true;
+			xrt_info(pdev, "opened %s, ref=%d",
+				CDEV_NAME(pdata->xsp_sysdev),
+				pdata->xsp_devnode_ref);
+		}
+	} else {
+		xrt_err(pdev, "%s is offline", CDEV_NAME(pdata->xsp_sysdev));
+	}
+
+	mutex_unlock(&pdata->xsp_devnode_lock);
+
+	return opened ? pdev : NULL;
+}
+
+struct platform_device *
+xrt_devnode_open_excl(struct inode *inode)
+{
+	return __xrt_devnode_open(inode, true);
+}
+
+struct platform_device *
+xrt_devnode_open(struct inode *inode)
+{
+	return __xrt_devnode_open(inode, false);
+}
+EXPORT_SYMBOL_GPL(xrt_devnode_open);
+
+void xrt_devnode_close(struct inode *inode)
+{
+	struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+	struct platform_device *pdev = INODE2PDEV(inode);
+	bool notify = false;
+
+	mutex_lock(&pdata->xsp_devnode_lock);
+
+	pdata->xsp_devnode_ref--;
+	if (pdata->xsp_devnode_ref == 0) {
+		pdata->xsp_devnode_excl = false;
+		notify = true;
+	}
+	if (notify) {
+		xrt_info(pdev, "closed %s, notifying waiter",
+			CDEV_NAME(pdata->xsp_sysdev));
+	} else {
+		xrt_info(pdev, "closed %s, ref=%d",
+			CDEV_NAME(pdata->xsp_sysdev), pdata->xsp_devnode_ref);
+	}
+
+	mutex_unlock(&pdata->xsp_devnode_lock);
+
+	if (notify)
+		complete(&pdata->xsp_devnode_comp);
+}
+EXPORT_SYMBOL_GPL(xrt_devnode_close);
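+
+/*
+ * Sketch of expected usage: a subdev's file_operations open handler
+ * brackets device access with these helpers, e.g.
+ *
+ *	pdev = xrt_devnode_open(inode); (or xrt_devnode_open_excl())
+ *	if (!pdev)
+ *		return -ENODEV;
+ *	...
+ *	xrt_devnode_close(inode);	(from the release handler)
+ */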
+
+static inline enum xrt_subdev_file_mode
+devnode_mode(struct xrt_subdev_drvdata *drvdata)
+{
+	return drvdata->xsd_file_ops.xsf_mode;
+}
+
+int xrt_devnode_create(struct platform_device *pdev, const char *file_name,
+	const char *inst_name)
+{
+	struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
+	struct xrt_subdev_file_ops *fops = &drvdata->xsd_file_ops;
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	struct cdev *cdevp;
+	struct device *sysdev;
+	int ret = 0;
+	char fname[256];
+
+	BUG_ON(fops->xsf_dev_t == (dev_t)-1);
+
+	mutex_init(&pdata->xsp_devnode_lock);
+	init_completion(&pdata->xsp_devnode_comp);
+
+	cdevp = &DEV_PDATA(pdev)->xsp_cdev;
+	cdev_init(cdevp, &fops->xsf_ops);
+	cdevp->owner = fops->xsf_ops.owner;
+	cdevp->dev = MKDEV(MAJOR(fops->xsf_dev_t), pdev->id);
+
+	/*
+	 * Set pdev as parent of cdev so that pdev (and its platform
+	 * data) will not be freed before cdev is freed.
+	 */
+	cdev_set_parent(cdevp, &DEV(pdev)->kobj);
+
+	ret = cdev_add(cdevp, cdevp->dev, 1);
+	if (ret) {
+		xrt_err(pdev, "failed to add cdev: %d", ret);
+		goto failed;
+	}
+	if (!file_name)
+		file_name = pdev->name;
+	if (!inst_name) {
+		if (devnode_mode(drvdata) == XRT_SUBDEV_FILE_MULTI_INST) {
+			snprintf(fname, sizeof(fname), "%s/%s/%s.%u",
+				XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
+				file_name, pdev->id);
+		} else {
+			snprintf(fname, sizeof(fname), "%s/%s/%s",
+				XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
+				file_name);
+		}
+	} else {
+		snprintf(fname, sizeof(fname), "%s/%s/%s.%s", XRT_CDEV_DIR,
+			DEV_PDATA(pdev)->xsp_root_name, file_name, inst_name);
+	}
+	sysdev = device_create(xrt_class, NULL, cdevp->dev, NULL, "%s", fname);
+	if (IS_ERR(sysdev)) {
+		ret = PTR_ERR(sysdev);
+		xrt_err(pdev, "failed to create device node: %d", ret);
+		goto failed;
+	}
+	pdata->xsp_sysdev = sysdev;
+
+	xrt_devnode_allowed(pdev);
+
+	xrt_info(pdev, "created (%d, %d): /dev/%s",
+		MAJOR(cdevp->dev), pdev->id, fname);
+	return 0;
+
+failed:
+	device_destroy(xrt_class, cdevp->dev);
+	cdev_del(cdevp);
+	cdevp->owner = NULL;
+	return ret;
+}
+
+int xrt_devnode_destroy(struct platform_device *pdev)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	struct cdev *cdevp = &pdata->xsp_cdev;
+	dev_t dev = cdevp->dev;
+	int rc;
+
+	BUG_ON(!cdevp->owner);
+
+	rc = xrt_devnode_disallowed(pdev);
+	if (rc)
+		return rc;
+
+	xrt_info(pdev, "removed (%d, %d): /dev/%s/%s", MAJOR(dev), MINOR(dev),
+		XRT_CDEV_DIR, CDEV_NAME(pdata->xsp_sysdev));
+	device_destroy(xrt_class, cdevp->dev);
+	pdata->xsp_sysdev = NULL;
+	cdev_del(cdevp);
+	return 0;
+}
diff --git a/drivers/fpga/alveo/lib/xrt-main.c b/drivers/fpga/alveo/lib/xrt-main.c
new file mode 100644
index 000000000000..b962410a628c
--- /dev/null
+++ b/drivers/fpga/alveo/lib/xrt-main.c
@@ -0,0 +1,275 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/module.h>
+#include "xrt-subdev.h"
+#include "xrt-main.h"
+
+#define	XRT_IPLIB_MODULE_NAME		"xrt-lib"
+#define	XRT_IPLIB_MODULE_VERSION	"4.0.0"
+#define	XRT_DRVNAME(drv)		((drv)->driver.name)
+#define	XRT_MAX_DEVICE_NODES		128
+
+struct mutex xrt_class_lock;
+struct class *xrt_class;
+
+/*
+ * A subdev driver is known to others by its ID. We map the ID to its
+ * struct platform_driver, which contains its binding name and
+ * driver/file ops. We also map it to its endpoint name in the DTB,
+ * if that differs from the driver's binding name.
+ */
+static struct xrt_drv_map {
+	enum xrt_subdev_id id;
+	struct platform_driver *drv;
+	struct xrt_subdev_endpoints *eps;
+	struct ida ida; /* manage driver instance and char dev minor */
+} xrt_drv_maps[] = {
+	{ XRT_SUBDEV_PART, &xrt_partition_driver, },
+	{ XRT_SUBDEV_VSEC, &xrt_vsec_driver, xrt_vsec_endpoints, },
+	{ XRT_SUBDEV_VSEC_GOLDEN, &xrt_vsec_golden_driver, xrt_vsec_golden_endpoints, },
+	{ XRT_SUBDEV_GPIO, &xrt_gpio_driver, xrt_gpio_endpoints,},
+	{ XRT_SUBDEV_AXIGATE, &xrt_axigate_driver, xrt_axigate_endpoints, },
+	{ XRT_SUBDEV_ICAP, &xrt_icap_driver, xrt_icap_endpoints, },
+	{ XRT_SUBDEV_CALIB, &xrt_calib_driver, xrt_calib_endpoints, },
+	{ XRT_SUBDEV_TEST, &xrt_test_driver, xrt_test_endpoints, },
+	{ XRT_SUBDEV_MGMT_MAIN, NULL, },
+	{ XRT_SUBDEV_QSPI, &xrt_qspi_driver, xrt_qspi_endpoints, },
+	{ XRT_SUBDEV_MAILBOX, &xrt_mailbox_driver, xrt_mailbox_endpoints, },
+	{ XRT_SUBDEV_CMC, &xrt_cmc_driver, xrt_cmc_endpoints, },
+	{ XRT_SUBDEV_CLKFREQ, &xrt_clkfreq_driver, xrt_clkfreq_endpoints, },
+	{ XRT_SUBDEV_CLOCK, &xrt_clock_driver, xrt_clock_endpoints, },
+	{ XRT_SUBDEV_UCS, &xrt_ucs_driver, xrt_ucs_endpoints, },
+};
+
+static inline struct xrt_subdev_drvdata *
+xrt_drv_map2drvdata(struct xrt_drv_map *map)
+{
+	return (struct xrt_subdev_drvdata *)map->drv->id_table[0].driver_data;
+}
+
+static struct xrt_drv_map *
+xrt_drv_find_map_by_id(enum xrt_subdev_id id)
+{
+	int i;
+	struct xrt_drv_map *map = NULL;
+
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		struct xrt_drv_map *tmap = &xrt_drv_maps[i];
+
+		if (tmap->id != id)
+			continue;
+		map = tmap;
+		break;
+	}
+	return map;
+}
+
+static int xrt_drv_register_driver(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+	struct xrt_subdev_drvdata *drvdata;
+	int rc = 0;
+	const char *drvname;
+
+	BUG_ON(!map);
+
+	if (!map->drv) {
+		pr_info("skip registration of subdev driver for id %d\n", id);
+		return rc;
+	}
+	drvname = XRT_DRVNAME(map->drv);
+
+	rc = platform_driver_register(map->drv);
+	if (rc) {
+		pr_err("register %s subdev driver failed\n", drvname);
+		return rc;
+	}
+
+	drvdata = xrt_drv_map2drvdata(map);
+	if (drvdata && drvdata->xsd_dev_ops.xsd_post_init) {
+		rc = drvdata->xsd_dev_ops.xsd_post_init();
+		if (rc) {
+			platform_driver_unregister(map->drv);
+			pr_err("%s's post-init, ret %d\n", drvname, rc);
+			return rc;
+		}
+	}
+
+	if (drvdata) {
+		/* Initialize dev_t for char dev node. */
+		if (xrt_devnode_enabled(drvdata)) {
+			rc = alloc_chrdev_region(
+				&drvdata->xsd_file_ops.xsf_dev_t, 0,
+				XRT_MAX_DEVICE_NODES, drvname);
+			if (rc) {
+				if (drvdata->xsd_dev_ops.xsd_pre_exit)
+					drvdata->xsd_dev_ops.xsd_pre_exit();
+				platform_driver_unregister(map->drv);
+				pr_err("failed to alloc dev minor for %s: %d\n",
+					drvname, rc);
+				return rc;
+			}
+		} else {
+			drvdata->xsd_file_ops.xsf_dev_t = (dev_t)-1;
+		}
+	}
+
+	ida_init(&map->ida);
+
+	pr_info("registered %s subdev driver\n", drvname);
+	return 0;
+}
+
+static void xrt_drv_unregister_driver(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+	struct xrt_subdev_drvdata *drvdata;
+	const char *drvname;
+
+	BUG_ON(!map);
+	if (!map->drv) {
+		pr_info("skip unregistration of subdev driver for id %d\n", id);
+		return;
+	}
+
+	drvname = XRT_DRVNAME(map->drv);
+
+	ida_destroy(&map->ida);
+
+	drvdata = xrt_drv_map2drvdata(map);
+	if (drvdata && drvdata->xsd_file_ops.xsf_dev_t != (dev_t)-1) {
+		unregister_chrdev_region(drvdata->xsd_file_ops.xsf_dev_t,
+			XRT_MAX_DEVICE_NODES);
+	}
+
+	if (drvdata && drvdata->xsd_dev_ops.xsd_pre_exit)
+		drvdata->xsd_dev_ops.xsd_pre_exit();
+
+	platform_driver_unregister(map->drv);
+
+	pr_info("unregistered %s subdev driver\n", drvname);
+}
+
+int xrt_subdev_register_external_driver(enum xrt_subdev_id id,
+	struct platform_driver *drv, struct xrt_subdev_endpoints *eps)
+{
+	int i;
+	int result = 0;
+
+	mutex_lock(&xrt_class_lock);
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		struct xrt_drv_map *map = &xrt_drv_maps[i];
+
+		if (map->id != id)
+			continue;
+		if (map->drv) {
+			result = -EEXIST;
+			pr_err("Id %d already has a registered driver, 0x%p\n",
+				id, map->drv);
+			break;
+		}
+		map->drv = drv;
+		BUG_ON(map->eps);
+		map->eps = eps;
+		result = xrt_drv_register_driver(id);
+		break;
+	}
+	mutex_unlock(&xrt_class_lock);
+	return result;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_register_external_driver);
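+
+/*
+ * Illustrative only: a module supplying its own subdev driver (e.g.
+ * for XRT_SUBDEV_MGMT_MAIN, which has no built-in driver in
+ * xrt_drv_maps above) would plug in via:
+ *
+ *	xrt_subdev_register_external_driver(XRT_SUBDEV_MGMT_MAIN,
+ *		&xmgmt_main_driver, xmgmt_main_endpoints);
+ *
+ * where the driver and endpoint table names are hypothetical.
+ */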
+
+void xrt_subdev_unregister_external_driver(enum xrt_subdev_id id)
+{
+	int i;
+
+	mutex_lock(&xrt_class_lock);
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		struct xrt_drv_map *map = &xrt_drv_maps[i];
+
+		if (map->id != id)
+			continue;
+		xrt_drv_unregister_driver(id);
+		map->drv = NULL;
+		map->eps = NULL;
+		break;
+	}
+	mutex_unlock(&xrt_class_lock);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_unregister_external_driver);
+
+static __init int xrt_drv_register_drivers(void)
+{
+	int i;
+	int rc = 0;
+
+	mutex_init(&xrt_class_lock);
+	xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
+	if (IS_ERR(xrt_class))
+		return PTR_ERR(xrt_class);
+
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		rc = xrt_drv_register_driver(xrt_drv_maps[i].id);
+		if (rc)
+			break;
+	}
+	if (!rc)
+		return 0;
+
+	while (i-- > 0)
+		xrt_drv_unregister_driver(xrt_drv_maps[i].id);
+	class_destroy(xrt_class);
+	return rc;
+}
+
+static __exit void xrt_drv_unregister_drivers(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++)
+		xrt_drv_unregister_driver(xrt_drv_maps[i].id);
+	class_destroy(xrt_class);
+}
+
+const char *xrt_drv_name(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	if (map && map->drv)
+		return XRT_DRVNAME(map->drv);
+	return NULL;
+}
+
+int xrt_drv_get_instance(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	return ida_alloc_range(&map->ida, 0, XRT_MAX_DEVICE_NODES - 1,
+		GFP_KERNEL);
+}
+
+void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	ida_free(&map->ida, instance);
+}
+
+struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	return map ? map->eps : NULL;
+}
+
+module_init(xrt_drv_register_drivers);
+module_exit(xrt_drv_unregister_drivers);
+
+MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
+MODULE_AUTHOR("XRT Team <runtime@xilinx.com>");
+MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/fpga/alveo/lib/xrt-main.h b/drivers/fpga/alveo/lib/xrt-main.h
new file mode 100644
index 000000000000..f46f90d9e882
--- /dev/null
+++ b/drivers/fpga/alveo/lib/xrt-main.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_MAIN_H_
+#define	_XRT_MAIN_H_
+
+extern struct platform_driver xrt_partition_driver;
+extern struct platform_driver xrt_test_driver;
+extern struct platform_driver xrt_vsec_driver;
+extern struct platform_driver xrt_vsec_golden_driver;
+extern struct platform_driver xrt_axigate_driver;
+extern struct platform_driver xrt_qspi_driver;
+extern struct platform_driver xrt_gpio_driver;
+extern struct platform_driver xrt_mailbox_driver;
+extern struct platform_driver xrt_icap_driver;
+extern struct platform_driver xrt_cmc_driver;
+extern struct platform_driver xrt_clkfreq_driver;
+extern struct platform_driver xrt_clock_driver;
+extern struct platform_driver xrt_ucs_driver;
+extern struct platform_driver xrt_calib_driver;
+
+extern struct xrt_subdev_endpoints xrt_vsec_endpoints[];
+extern struct xrt_subdev_endpoints xrt_vsec_golden_endpoints[];
+extern struct xrt_subdev_endpoints xrt_axigate_endpoints[];
+extern struct xrt_subdev_endpoints xrt_test_endpoints[];
+extern struct xrt_subdev_endpoints xrt_qspi_endpoints[];
+extern struct xrt_subdev_endpoints xrt_gpio_endpoints[];
+extern struct xrt_subdev_endpoints xrt_mailbox_endpoints[];
+extern struct xrt_subdev_endpoints xrt_icap_endpoints[];
+extern struct xrt_subdev_endpoints xrt_cmc_endpoints[];
+extern struct xrt_subdev_endpoints xrt_clkfreq_endpoints[];
+extern struct xrt_subdev_endpoints xrt_clock_endpoints[];
+extern struct xrt_subdev_endpoints xrt_ucs_endpoints[];
+extern struct xrt_subdev_endpoints xrt_calib_endpoints[];
+
+extern const char *xrt_drv_name(enum xrt_subdev_id id);
+extern int xrt_drv_get_instance(enum xrt_subdev_id id);
+extern void xrt_drv_put_instance(enum xrt_subdev_id id, int instance);
+extern struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
+
+#endif	/* _XRT_MAIN_H_ */
diff --git a/drivers/fpga/alveo/lib/xrt-subdev.c b/drivers/fpga/alveo/lib/xrt-subdev.c
new file mode 100644
index 000000000000..644c4ec19429
--- /dev/null
+++ b/drivers/fpga/alveo/lib/xrt-subdev.c
@@ -0,0 +1,1007 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/platform_device.h>
+#include <linux/pci.h>
+#include <linux/vmalloc.h>
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-main.h"
+#include "xrt-metadata.h"
+
+#define DEV_IS_PCI(dev) ((dev)->bus == &pci_bus_type)
+static inline struct device *find_root(struct platform_device *pdev)
+{
+	struct device *d = DEV(pdev);
+
+	while (!DEV_IS_PCI(d))
+		d = d->parent;
+	return d;
+}
+
+/*
+ * It represents a holder of a subdev. One holder can repeatedly hold
+ * a subdev as long as each hold is paired with a corresponding
+ * release.
+ */
+struct xrt_subdev_holder {
+	struct list_head xsh_holder_list;
+	struct device *xsh_holder;
+	int xsh_count;
+};
+
+/*
+ * It represents a specific instance of platform driver for a subdev, which
+ * provides services to its clients (another subdev driver or root driver).
+ */
+struct xrt_subdev {
+	struct list_head xs_dev_list;
+	struct list_head xs_holder_list;
+	enum xrt_subdev_id xs_id;		/* type of subdev */
+	struct platform_device *xs_pdev;	/* a particular subdev inst */
+	struct completion xs_holder_comp;
+};
+
+static struct xrt_subdev *xrt_subdev_alloc(void)
+{
+	struct xrt_subdev *sdev = vzalloc(sizeof(struct xrt_subdev));
+
+	if (!sdev)
+		return NULL;
+
+	INIT_LIST_HEAD(&sdev->xs_dev_list);
+	INIT_LIST_HEAD(&sdev->xs_holder_list);
+	init_completion(&sdev->xs_holder_comp);
+	return sdev;
+}
+
+static void xrt_subdev_free(struct xrt_subdev *sdev)
+{
+	vfree(sdev);
+}
+
+/*
+ * Subdev common sysfs nodes.
+ */
+static ssize_t holders_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	ssize_t len;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_parent_ioctl_get_holders holders = { pdev, buf, 1024 };
+
+	len = xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_GET_LEAF_HOLDERS, &holders);
+	if (len >= holders.xpigh_holder_buf_len)
+		return len;
+	buf[len] = '\n';
+	return len + 1;
+}
+static DEVICE_ATTR_RO(holders);
+
+static struct attribute *xrt_subdev_attrs[] = {
+	&dev_attr_holders.attr,
+	NULL,
+};
+
+static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
+	struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	unsigned char *blob;
+	long  size;
+	ssize_t ret = 0;
+
+	blob = pdata->xsp_dtb;
+	size = xrt_md_size(dev, blob);
+	if (size <= 0) {
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	if (off >= size)
+		goto failed;
+
+	if (off + count > size)
+		count = size - off;
+	memcpy(buf, blob + off, count);
+
+	ret = count;
+failed:
+	return ret;
+}
+
+static struct bin_attribute meta_data_attr = {
+	.attr = {
+		.name = "metadata",
+		.mode = 0400
+	},
+	.read = metadata_output,
+	.size = 0
+};
+
+static struct bin_attribute  *xrt_subdev_bin_attrs[] = {
+	&meta_data_attr,
+	NULL,
+};
+
+static const struct attribute_group xrt_subdev_attrgroup = {
+	.attrs = xrt_subdev_attrs,
+	.bin_attrs = xrt_subdev_bin_attrs,
+};
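+
+/*
+ * The "metadata" node above exposes the subdev's raw dtb. It can be
+ * inspected from user space (path is illustrative), e.g.:
+ *
+ *	dtc -I dtb -O dts /sys/bus/platform/devices/<subdev>/metadata
+ */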
+
+static int
+xrt_subdev_getres(struct device *parent, enum xrt_subdev_id id,
+	char *dtb, struct resource **res, int *res_num)
+{
+	struct xrt_subdev_platdata *pdata;
+	struct resource *pci_res = NULL;
+	const u64 *bar_range;
+	const u32 *bar_idx;
+	char *ep_name = NULL, *regmap = NULL;
+	uint bar;
+	int count1 = 0, count2 = 0, ret;
+
+	if (!dtb)
+		return -EINVAL;
+
+	pdata = DEV_PDATA(to_platform_device(parent));
+
+	for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL,
+		&ep_name, &regmap);
+		ep_name != NULL;
+		xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap,
+		&ep_name, &regmap)) {
+		ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
+			PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+		if (!ret)
+			count1++;
+	}
+	if (!count1)
+		return 0;
+
+	*res = vzalloc(sizeof(struct resource) * count1);
+	if (!*res)
+		return -ENOMEM;
+
+	for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL,
+		&ep_name, &regmap);
+		ep_name != NULL;
+		xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap,
+		&ep_name, &regmap)) {
+		ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
+			PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+		if (ret)
+			continue;
+		xrt_md_get_prop(parent, dtb, ep_name, regmap,
+			PROP_BAR_IDX, (const void **)&bar_idx, NULL);
+		bar = bar_idx ? be32_to_cpu(*bar_idx) : 0;
+		xrt_subdev_get_barres(to_platform_device(parent), &pci_res,
+			bar);
+		(*res)[count2].start = pci_res->start +
+			be64_to_cpu(bar_range[0]);
+		(*res)[count2].end = pci_res->start +
+			be64_to_cpu(bar_range[0]) +
+			be64_to_cpu(bar_range[1]) - 1;
+		(*res)[count2].flags = IORESOURCE_MEM;
+		/* check for conflicting resources */
+		ret = request_resource(pci_res, *res + count2);
+		if (ret) {
+			dev_err(parent, "Conflict resource %pR\n",
+				*res + count2);
+			vfree(*res);
+			*res_num = 0;
+			*res = NULL;
+			return ret;
+		}
+		release_resource(*res + count2);
+
+		(*res)[count2].parent = pci_res;
+
+		xrt_md_get_epname_pointer(parent, pdata->xsp_dtb, ep_name,
+			regmap, &(*res)[count2].name);
+
+		count2++;
+	}
+
+	BUG_ON(count1 != count2);
+	*res_num = count2;
+
+	return 0;
+}
+
+static inline enum xrt_subdev_file_mode
+xrt_devnode_mode(struct xrt_subdev_drvdata *drvdata)
+{
+	return drvdata->xsd_file_ops.xsf_mode;
+}
+
+static bool xrt_subdev_cdev_auto_creation(struct platform_device *pdev)
+{
+	struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
+
+	if (!drvdata)
+		return false;
+
+	return xrt_devnode_enabled(drvdata) &&
+		(xrt_devnode_mode(drvdata) == XRT_SUBDEV_FILE_DEFAULT ||
+		(xrt_devnode_mode(drvdata) == XRT_SUBDEV_FILE_MULTI_INST));
+}
+
+static struct xrt_subdev *
+xrt_subdev_create(struct device *parent, enum xrt_subdev_id id,
+	xrt_subdev_parent_cb_t pcb, void *pcb_arg, char *dtb)
+{
+	struct xrt_subdev *sdev = NULL;
+	struct platform_device *pdev = NULL;
+	struct xrt_subdev_platdata *pdata = NULL;
+	long dtb_len = 0;
+	size_t pdata_sz;
+	int inst = PLATFORM_DEVID_NONE;
+	struct resource *res = NULL;
+	int res_num = 0;
+
+	sdev = xrt_subdev_alloc();
+	if (!sdev) {
+		dev_err(parent, "failed to alloc subdev for ID %d", id);
+		goto fail;
+	}
+	sdev->xs_id = id;
+
+	if (dtb) {
+		xrt_md_pack(parent, dtb);
+		dtb_len = xrt_md_size(parent, dtb);
+		if (dtb_len <= 0) {
+			dev_err(parent, "invalid metadata len %ld", dtb_len);
+			goto fail;
+		}
+	}
+	pdata_sz = sizeof(struct xrt_subdev_platdata) + dtb_len - 1;
+
+	/* Prepare platform data passed to subdev. */
+	pdata = vzalloc(pdata_sz);
+	if (!pdata)
+		goto fail;
+
+	pdata->xsp_parent_cb = pcb;
+	pdata->xsp_parent_cb_arg = pcb_arg;
+	(void) memcpy(pdata->xsp_dtb, dtb, dtb_len);
+	if (id == XRT_SUBDEV_PART) {
+		/* Partition can only be created by root driver. */
+		BUG_ON(parent->bus != &pci_bus_type);
+		pdata->xsp_root_name = dev_name(parent);
+	} else {
+		struct platform_device *part = to_platform_device(parent);
+		/* Leaf can only be created by partition driver. */
+		BUG_ON(parent->bus != &platform_bus_type);
+		BUG_ON(strcmp(xrt_drv_name(XRT_SUBDEV_PART),
+			platform_get_device_id(part)->name));
+		pdata->xsp_root_name = DEV_PDATA(part)->xsp_root_name;
+	}
+
+	/* Obtain dev instance number. */
+	inst = xrt_drv_get_instance(id);
+	if (inst < 0) {
+		dev_err(parent, "failed to obtain instance: %d", inst);
+		goto fail;
+	}
+
+	/* Create subdev. */
+	if (id == XRT_SUBDEV_PART) {
+		pdev = platform_device_register_data(parent,
+			xrt_drv_name(XRT_SUBDEV_PART), inst, pdata, pdata_sz);
+	} else {
+		int rc = xrt_subdev_getres(parent, id, dtb, &res, &res_num);
+
+		if (rc) {
+			dev_err(parent, "failed to get resource for %s.%d: %d",
+				xrt_drv_name(id), inst, rc);
+			goto fail;
+		}
+		pdev = platform_device_register_resndata(parent,
+			xrt_drv_name(id), inst, res, res_num, pdata, pdata_sz);
+		vfree(res);
+	}
+	if (IS_ERR(pdev)) {
+		dev_err(parent, "failed to create subdev for %s inst %d: %ld",
+			xrt_drv_name(id), inst, PTR_ERR(pdev));
+		goto fail;
+	}
+	sdev->xs_pdev = pdev;
+
+	if (device_attach(DEV(pdev)) != 1) {
+		xrt_err(pdev, "failed to attach");
+		goto fail;
+	}
+
+	if (sysfs_create_group(&DEV(pdev)->kobj, &xrt_subdev_attrgroup))
+		xrt_err(pdev, "failed to create sysfs group");
+
+	/*
+	 * Create a sysfs symlink under the root for leaves, whichever
+	 * partition they live in, for easy access to them.
+	 */
+	if (id != XRT_SUBDEV_PART) {
+		if (sysfs_create_link(&find_root(pdev)->kobj,
+			&DEV(pdev)->kobj, dev_name(DEV(pdev)))) {
+			xrt_err(pdev, "failed to create sysfs link");
+		}
+	}
+
+	/* All done, ready to handle requests through the cdev. */
+	if (xrt_subdev_cdev_auto_creation(pdev)) {
+		(void) xrt_devnode_create(pdev,
+			DEV_DRVDATA(pdev)->xsd_file_ops.xsf_dev_name, NULL);
+	}
+
+	vfree(pdata);
+	return sdev;
+
+fail:
+	vfree(pdata);
+	if (sdev && !IS_ERR_OR_NULL(sdev->xs_pdev))
+		platform_device_unregister(sdev->xs_pdev);
+	if (inst >= 0)
+		xrt_drv_put_instance(id, inst);
+	xrt_subdev_free(sdev);
+	return NULL;
+}
+
+static void xrt_subdev_destroy(struct xrt_subdev *sdev)
+{
+	struct platform_device *pdev = sdev->xs_pdev;
+	int inst = pdev->id;
+	struct device *dev = DEV(pdev);
+
+	/* Take down the device node */
+	if (xrt_subdev_cdev_auto_creation(pdev))
+		(void) xrt_devnode_destroy(pdev);
+	if (sdev->xs_id != XRT_SUBDEV_PART)
+		(void) sysfs_remove_link(&find_root(pdev)->kobj, dev_name(dev));
+	(void) sysfs_remove_group(&dev->kobj, &xrt_subdev_attrgroup);
+	platform_device_unregister(pdev);
+	xrt_drv_put_instance(sdev->xs_id, inst);
+	xrt_subdev_free(sdev);
+}
+
+int xrt_subdev_parent_ioctl(struct platform_device *self, u32 cmd, void *arg)
+{
+	struct device *dev = DEV(self);
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(self);
+
+	return (*pdata->xsp_parent_cb)(dev->parent, pdata->xsp_parent_cb_arg,
+		cmd, arg);
+}
+
+int xrt_subdev_ioctl(struct platform_device *tgt, u32 cmd, void *arg)
+{
+	struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(tgt);
+
+	return (*drvdata->xsd_dev_ops.xsd_ioctl)(tgt, cmd, arg);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_ioctl);
+
+struct platform_device *
+xrt_subdev_get_leaf(struct platform_device *pdev,
+	xrt_subdev_match_t match_cb, void *match_arg)
+{
+	int rc;
+	struct xrt_parent_ioctl_get_leaf get_leaf = {
+		pdev, match_cb, match_arg, };
+
+	rc = xrt_subdev_parent_ioctl(pdev, XRT_PARENT_GET_LEAF, &get_leaf);
+	if (rc)
+		return NULL;
+	return get_leaf.xpigl_leaf;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_get_leaf);
+
+struct subdev_match_arg {
+	enum xrt_subdev_id id;
+	int instance;
+};
+
+static bool subdev_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
+	return id == a->id &&
+		(pdev->id == a->instance || PLATFORM_DEVID_NONE == a->instance);
+}
+
+struct platform_device *
+xrt_subdev_get_leaf_by_id(struct platform_device *pdev,
+	enum xrt_subdev_id id, int instance)
+{
+	struct subdev_match_arg arg = { id, instance };
+
+	return xrt_subdev_get_leaf(pdev, subdev_match, &arg);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_get_leaf_by_id);
+
+bool xrt_subdev_has_epname(struct platform_device *pdev, const char *ep_name)
+{
+	struct resource	*res;
+	int		i;
+
+	for (i = 0, res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	    res;
+	    res = platform_get_resource(pdev, IORESOURCE_MEM, ++i)) {
+		if (!strncmp(res->name, ep_name, strlen(res->name) + 1))
+			return true;
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_has_epname);
+
+static bool xrt_subdev_match_epname(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	return xrt_subdev_has_epname(pdev, arg);
+}
+
+struct platform_device *
+xrt_subdev_get_leaf_by_epname(struct platform_device *pdev, const char *name)
+{
+	return xrt_subdev_get_leaf(pdev, xrt_subdev_match_epname, (void *)name);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_get_leaf_by_epname);
+
+int xrt_subdev_put_leaf(struct platform_device *pdev,
+	struct platform_device *leaf)
+{
+	struct xrt_parent_ioctl_put_leaf put_leaf = { pdev, leaf };
+
+	return xrt_subdev_parent_ioctl(pdev, XRT_PARENT_PUT_LEAF, &put_leaf);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_put_leaf);
+
+int xrt_subdev_create_partition(struct platform_device *pdev, char *dtb)
+{
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_CREATE_PARTITION, dtb);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_create_partition);
+
+int xrt_subdev_destroy_partition(struct platform_device *pdev, int instance)
+{
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_REMOVE_PARTITION, (void *)(uintptr_t)instance);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_destroy_partition);
+
+int xrt_subdev_lookup_partition(struct platform_device *pdev,
+	xrt_subdev_match_t match_cb, void *match_arg)
+{
+	int rc;
+	struct xrt_parent_ioctl_lookup_partition lkp = {
+		pdev, match_cb, match_arg, };
+
+	rc = xrt_subdev_parent_ioctl(pdev, XRT_PARENT_LOOKUP_PARTITION, &lkp);
+	if (rc)
+		return rc;
+	return lkp.xpilp_part_inst;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_lookup_partition);
+
+int xrt_subdev_wait_for_partition_bringup(struct platform_device *pdev)
+{
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_WAIT_PARTITION_BRINGUP, NULL);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_wait_for_partition_bringup);
+
+void *xrt_subdev_add_event_cb(struct platform_device *pdev,
+	xrt_subdev_match_t match, void *match_arg, xrt_event_cb_t cb)
+{
+	struct xrt_parent_ioctl_evt_cb c = { pdev, match, match_arg, cb };
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_ADD_EVENT_CB, &c);
+	return c.xevt_hdl;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_add_event_cb);
+
+void xrt_subdev_remove_event_cb(struct platform_device *pdev, void *hdl)
+{
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_REMOVE_EVENT_CB, hdl);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_remove_event_cb);
+
+static ssize_t
+xrt_subdev_get_holders(struct xrt_subdev *sdev, char *buf, size_t len)
+{
+	const struct list_head *ptr;
+	struct xrt_subdev_holder *h;
+	ssize_t n = 0;
+
+	list_for_each(ptr, &sdev->xs_holder_list) {
+		h = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+		n += snprintf(buf + n, len - n, "%s:%d ",
+			dev_name(h->xsh_holder), h->xsh_count);
+		if (n >= len)
+			break;
+	}
+	return n;
+}
+
+void xrt_subdev_pool_init(struct device *dev, struct xrt_subdev_pool *spool)
+{
+	INIT_LIST_HEAD(&spool->xpool_dev_list);
+	spool->xpool_owner = dev;
+	mutex_init(&spool->xpool_lock);
+	spool->xpool_closing = false;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_init);
+
+static void xrt_subdev_pool_wait_for_holders(struct xrt_subdev_pool *spool,
+	struct xrt_subdev *sdev)
+{
+	const struct list_head *ptr, *next;
+	char holders[128];
+	struct xrt_subdev_holder *holder;
+	struct mutex *lk = &spool->xpool_lock;
+
+	BUG_ON(!mutex_is_locked(lk));
+
+	while (!list_empty(&sdev->xs_holder_list)) {
+		int rc;
+
+		/* It's most likely a bug if we ever enter this loop. */
+		(void) xrt_subdev_get_holders(sdev, holders, sizeof(holders));
+		xrt_err(sdev->xs_pdev, "awaits holders: %s", holders);
+		mutex_unlock(lk);
+		rc = wait_for_completion_killable(&sdev->xs_holder_comp);
+		mutex_lock(lk);
+		if (rc == -ERESTARTSYS) {
+			xrt_err(sdev->xs_pdev,
+				"give up on waiting for holders, clean up now");
+			list_for_each_safe(ptr, next, &sdev->xs_holder_list) {
+				holder = list_entry(ptr,
+					struct xrt_subdev_holder,
+					xsh_holder_list);
+				list_del(&holder->xsh_holder_list);
+				vfree(holder);
+			}
+		}
+	}
+}
+
+int xrt_subdev_pool_fini(struct xrt_subdev_pool *spool)
+{
+	int ret = 0;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct mutex *lk = &spool->xpool_lock;
+
+	mutex_lock(lk);
+
+	if (spool->xpool_closing) {
+		mutex_unlock(lk);
+		return 0;
+	}
+
+	spool->xpool_closing = true;
+	/* Remove subdevs in the reverse order of creation. */
+	while (!ret && !list_empty(dl)) {
+		struct xrt_subdev *sdev = list_first_entry(dl,
+			struct xrt_subdev, xs_dev_list);
+		xrt_subdev_pool_wait_for_holders(spool, sdev);
+		list_del(&sdev->xs_dev_list);
+		mutex_unlock(lk);
+		xrt_subdev_destroy(sdev);
+		mutex_lock(lk);
+	}
+
+	mutex_unlock(lk);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_fini);
+
+static int xrt_subdev_hold(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+	const struct list_head *ptr;
+	struct list_head *hl = &sdev->xs_holder_list;
+	struct xrt_subdev_holder *holder;
+	bool found = false;
+
+	list_for_each(ptr, hl) {
+		holder = list_entry(ptr, struct xrt_subdev_holder,
+			xsh_holder_list);
+		if (holder->xsh_holder == holder_dev) {
+			holder->xsh_count++;
+			found = true;
+			break;
+		}
+	}
+
+	if (!found) {
+		holder = vzalloc(sizeof(*holder));
+		if (!holder)
+			return -ENOMEM;
+		holder->xsh_holder = holder_dev;
+		holder->xsh_count = 1;
+		list_add_tail(&holder->xsh_holder_list, hl);
+	}
+
+	return holder->xsh_count;
+}
+
+static int
+xrt_subdev_release(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+	const struct list_head *ptr, *next;
+	struct list_head *hl = &sdev->xs_holder_list;
+	struct xrt_subdev_holder *holder;
+	int count;
+	bool found = false;
+
+	list_for_each_safe(ptr, next, hl) {
+		holder = list_entry(ptr, struct xrt_subdev_holder,
+			xsh_holder_list);
+		if (holder->xsh_holder == holder_dev) {
+			found = true;
+			holder->xsh_count--;
+
+			count = holder->xsh_count;
+			if (count == 0) {
+				list_del(&holder->xsh_holder_list);
+				vfree(holder);
+				if (list_empty(hl))
+					complete(&sdev->xs_holder_comp);
+			}
+			break;
+		}
+	}
+	if (!found) {
+		dev_err(holder_dev, "can't release, %s did not hold %s",
+			dev_name(holder_dev),
+			dev_name(DEV(sdev->xs_pdev)));
+	}
+	return found ? count : -EINVAL;
+}
+
+int xrt_subdev_pool_add(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
+	xrt_subdev_parent_cb_t pcb, void *pcb_arg, char *dtb)
+{
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	int ret = 0;
+
+	sdev = xrt_subdev_create(spool->xpool_owner, id, pcb, pcb_arg, dtb);
+	if (sdev) {
+		mutex_lock(lk);
+		if (spool->xpool_closing) {
+			/* No new subdev when pool is going away. */
+			xrt_err(sdev->xs_pdev, "pool is closing");
+			ret = -ENODEV;
+		} else {
+			list_add(&sdev->xs_dev_list, dl);
+		}
+		mutex_unlock(lk);
+		if (ret)
+			xrt_subdev_destroy(sdev);
+	} else {
+		ret = -EINVAL;
+	}
+
+	return ret ? ret : sdev->xs_pdev->id;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_add);
+
+int xrt_subdev_pool_del(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
+	int instance)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	int ret = -ENOENT;
+
+	mutex_lock(lk);
+	list_for_each(ptr, dl) {
+		sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+		if (sdev->xs_id != id || sdev->xs_pdev->id != instance)
+			continue;
+		xrt_subdev_pool_wait_for_holders(spool, sdev);
+		list_del(&sdev->xs_dev_list);
+		ret = 0;
+		break;
+	}
+	mutex_unlock(lk);
+	if (ret)
+		return ret;
+
+	xrt_subdev_destroy(sdev);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_del);
+
+static int xrt_subdev_pool_get_impl(struct xrt_subdev_pool *spool,
+	xrt_subdev_match_t match, void *arg, struct device *holder_dev,
+	struct xrt_subdev **sdevp)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev = NULL;
+	int ret = -ENOENT;
+
+	mutex_lock(lk);
+
+	if (match == XRT_SUBDEV_MATCH_PREV) {
+		struct platform_device *pdev = (struct platform_device *)arg;
+		struct xrt_subdev *d = NULL;
+
+		if (!pdev) {
+			sdev = list_empty(dl) ? NULL : list_last_entry(dl,
+				struct xrt_subdev, xs_dev_list);
+		} else {
+			list_for_each(ptr, dl) {
+				d = list_entry(ptr, struct xrt_subdev,
+					xs_dev_list);
+				if (d->xs_pdev != pdev)
+					continue;
+				if (!list_is_first(ptr, dl))
+					sdev = list_prev_entry(d, xs_dev_list);
+				break;
+			}
+		}
+	} else if (match == XRT_SUBDEV_MATCH_NEXT) {
+		struct platform_device *pdev = (struct platform_device *)arg;
+		struct xrt_subdev *d = NULL;
+
+		if (!pdev) {
+			sdev = list_first_entry_or_null(dl,
+				struct xrt_subdev, xs_dev_list);
+		} else {
+			list_for_each(ptr, dl) {
+				d = list_entry(ptr, struct xrt_subdev,
+					xs_dev_list);
+				if (d->xs_pdev != pdev)
+					continue;
+				if (!list_is_last(ptr, dl))
+					sdev = list_next_entry(d, xs_dev_list);
+				break;
+			}
+		}
+	} else {
+		list_for_each(ptr, dl) {
+			struct xrt_subdev *d = NULL;
+
+			d = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+			if (d && !match(d->xs_id, d->xs_pdev, arg))
+				continue;
+			sdev = d;
+			break;
+		}
+	}
+
+	if (sdev)
+		ret = xrt_subdev_hold(sdev, holder_dev);
+
+	mutex_unlock(lk);
+
+	if (ret >= 0)
+		*sdevp = sdev;
+	return ret;
+}
+
+int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
+	xrt_subdev_match_t match, void *arg, struct device *holder_dev,
+	struct platform_device **pdevp)
+{
+	int rc;
+	struct xrt_subdev *sdev;
+
+	rc = xrt_subdev_pool_get_impl(spool, match, arg, holder_dev, &sdev);
+	if (rc < 0) {
+		if (rc != -ENOENT)
+			dev_err(holder_dev, "failed to hold device: %d", rc);
+		return rc;
+	}
+
+	if (DEV_IS_PCI(holder_dev)) {
+#ifdef	SUBDEV_DEBUG
+		dev_info(holder_dev, "%s: %s <<==== %s, ref=%d", __func__,
+			dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)), rc);
+#endif
+	} else {
+		xrt_info(to_platform_device(holder_dev), "%s <<==== %s",
+			dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
+	}
+
+	*pdevp = sdev->xs_pdev;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_get);
+
+static int xrt_subdev_pool_put_impl(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, struct device *holder_dev)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	int ret = -ENOENT;
+
+	mutex_lock(lk);
+	list_for_each(ptr, dl) {
+		sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+		if (sdev->xs_pdev != pdev)
+			continue;
+		ret = xrt_subdev_release(sdev, holder_dev);
+		break;
+	}
+	mutex_unlock(lk);
+
+	if (ret < 0 && ret != -ENOENT)
+		dev_err(holder_dev, "failed to release device: %d", ret);
+	return ret;
+}
+
+int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, struct device *holder_dev)
+{
+	int ret = xrt_subdev_pool_put_impl(spool, pdev, holder_dev);
+
+	if (ret < 0)
+		return ret;
+
+	if (DEV_IS_PCI(holder_dev)) {
+#ifdef	SUBDEV_DEBUG
+		dev_info(holder_dev, "%s: %s <<==X== %s, ref=%d", __func__,
+			dev_name(holder_dev), dev_name(DEV(pdev)), ret);
+#endif
+	} else {
+		struct platform_device *d = to_platform_device(holder_dev);
+
+		xrt_info(d, "%s <<==X== %s",
+			dev_name(holder_dev), dev_name(DEV(pdev)));
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_put);
+
+int xrt_subdev_pool_event(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, xrt_subdev_match_t match, void *arg,
+	xrt_event_cb_t xevt_cb, enum xrt_events evt)
+{
+	int rc = 0;
+	struct platform_device *tgt = NULL;
+	struct xrt_subdev *sdev = NULL;
+	struct xrt_event_arg_subdev esd;
+
+	while (!rc && xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
+		tgt, DEV(pdev), &sdev) != -ENOENT) {
+		tgt = sdev->xs_pdev;
+		esd.xevt_subdev_id = sdev->xs_id;
+		esd.xevt_subdev_instance = tgt->id;
+		if (match(sdev->xs_id, sdev->xs_pdev, arg))
+			rc = xevt_cb(pdev, evt, &esd);
+		(void) xrt_subdev_pool_put_impl(spool, tgt, DEV(pdev));
+	}
+	return rc;
+}
+
+ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, char *buf, size_t len)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	ssize_t ret = 0;
+
+	mutex_lock(lk);
+	list_for_each(ptr, dl) {
+		sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+		if (sdev->xs_pdev != pdev)
+			continue;
+		ret = xrt_subdev_get_holders(sdev, buf, len);
+		break;
+	}
+	mutex_unlock(lk);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_get_holders);
+
+int xrt_subdev_broadcast_event_async(struct platform_device *pdev,
+	enum xrt_events evt, xrt_async_broadcast_event_cb_t cb, void *arg)
+{
+	struct xrt_parent_ioctl_async_broadcast_evt e = { pdev, evt, cb, arg };
+
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_ASYNC_BOARDCAST_EVENT, &e);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_broadcast_event_async);
+
+struct xrt_broadcast_event_arg {
+	struct completion comp;
+	bool success;
+};
+
+static void xrt_broadcast_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg, bool success)
+{
+	struct xrt_broadcast_event_arg *e =
+		(struct xrt_broadcast_event_arg *)arg;
+
+	e->success = success;
+	complete(&e->comp);
+}
+
+int xrt_subdev_broadcast_event(struct platform_device *pdev,
+	enum xrt_events evt)
+{
+	int ret;
+	struct xrt_broadcast_event_arg e;
+
+	init_completion(&e.comp);
+	e.success = false;
+	ret = xrt_subdev_broadcast_event_async(pdev, evt,
+		xrt_broadcast_event_cb, &e);
+	if (ret == 0)
+		wait_for_completion(&e.comp);
+	return e.success ? 0 : -EINVAL;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_broadcast_event);
+
+void xrt_subdev_hot_reset(struct platform_device *pdev)
+{
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_HOT_RESET, NULL);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_hot_reset);
+
+void xrt_subdev_get_barres(struct platform_device *pdev,
+	struct resource **res, uint bar_idx)
+{
+	struct xrt_parent_ioctl_get_res arg = { 0 };
+
+	BUG_ON(bar_idx > PCI_STD_RESOURCE_END);
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_GET_RESOURCE, &arg);
+
+	*res = &arg.xpigr_res[bar_idx];
+}
+
+void xrt_subdev_get_parent_id(struct platform_device *pdev,
+	unsigned short *vendor, unsigned short *device,
+	unsigned short *subvendor, unsigned short *subdevice)
+{
+	struct xrt_parent_ioctl_get_id id = { 0 };
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_GET_ID, (void *)&id);
+	if (vendor)
+		*vendor = id.xpigi_vendor_id;
+	if (device)
+		*device = id.xpigi_device_id;
+	if (subvendor)
+		*subvendor = id.xpigi_sub_vendor_id;
+	if (subdevice)
+		*subdevice = id.xpigi_sub_device_id;
+}
+
+struct device *xrt_subdev_register_hwmon(struct platform_device *pdev,
+	const char *name, void *drvdata, const struct attribute_group **grps)
+{
+	struct xrt_parent_ioctl_hwmon hm = { true, name, drvdata, grps, };
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_HWMON, (void *)&hm);
+	return hm.xpih_hwmon_dev;
+}
+
+void xrt_subdev_unregister_hwmon(struct platform_device *pdev,
+	struct device *hwmon)
+{
+	struct xrt_parent_ioctl_hwmon hm = { false, };
+
+	hm.xpih_hwmon_dev = hwmon;
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_HWMON, (void *)&hm);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH Xilinx Alveo 5/8] fpga: xrt: platform drivers for subsystems in shell partition
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (3 preceding siblings ...)
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 4/8] fpga: xrt: core infrastructure for xrt-lib module Sonal Santan
@ 2020-11-29  0:00 ` Sonal Santan
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 6/8] fpga: xrt: header file for platform and parent drivers Sonal Santan
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-11-29  0:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Add platform drivers for HW subsystems found in the shell partition.
Each driver implements interfaces defined by xrt-subdev.h. The
driver instances are created by the parent partition to manage
subsystem instances discovered by walking the device tree. The
platform drivers may populate their own sysfs nodes, create device
nodes if needed, and make calls into the parent or other platform
drivers. The platform drivers can also send and receive events.
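
Each subdev driver follows the same leaf pattern: a platform driver
whose drvdata wires up an ioctl handler and an optional event
callback. A minimal sketch (modeled on the axigate driver below;
the XRT_FOO_* names are hypothetical) is:

  static int xrt_foo_leaf_ioctl(struct platform_device *pdev,
          u32 cmd, void *arg)
  {
          switch (cmd) {
          case XRT_FOO_FREEZE:
                  /* act on the hardware mapped by this leaf */
                  return 0;
          default:
                  xrt_err(pdev, "unsupported cmd %d", cmd);
                  return -EINVAL;
          }
  }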

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/alveo/lib/subdevs/xrt-axigate.c  |  298 +++
 drivers/fpga/alveo/lib/subdevs/xrt-calib.c    |  291 +++
 drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c  |  214 ++
 drivers/fpga/alveo/lib/subdevs/xrt-clock.c    |  638 ++++++
 .../fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c   |  343 +++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c |  322 +++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h |  135 ++
 .../fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c  |  320 +++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c   |  361 ++++
 .../fpga/alveo/lib/subdevs/xrt-cmc-sensors.c  |  445 ++++
 drivers/fpga/alveo/lib/subdevs/xrt-cmc.c      |  239 +++
 drivers/fpga/alveo/lib/subdevs/xrt-gpio.c     |  198 ++
 drivers/fpga/alveo/lib/subdevs/xrt-icap.c     |  306 +++
 drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c  | 1905 +++++++++++++++++
 .../fpga/alveo/lib/subdevs/xrt-partition.c    |  261 +++
 drivers/fpga/alveo/lib/subdevs/xrt-qspi.c     | 1347 ++++++++++++
 drivers/fpga/alveo/lib/subdevs/xrt-srsr.c     |  322 +++
 drivers/fpga/alveo/lib/subdevs/xrt-test.c     |  274 +++
 drivers/fpga/alveo/lib/subdevs/xrt-ucs.c      |  238 ++
 .../fpga/alveo/lib/subdevs/xrt-vsec-golden.c  |  238 ++
 drivers/fpga/alveo/lib/subdevs/xrt-vsec.c     |  337 +++
 21 files changed, 9032 insertions(+)
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-axigate.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-calib.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-clock.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sensors.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-gpio.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-icap.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-partition.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-qspi.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-srsr.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-test.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-ucs.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-vsec-golden.c
 create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-vsec.c

diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-axigate.c b/drivers/fpga/alveo/lib/subdevs/xrt-axigate.c
new file mode 100644
index 000000000000..4b4877a89405
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-axigate.c
@@ -0,0 +1,298 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA AXI Gate Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-axigate.h"
+
+#define XRT_AXIGATE "xrt_axigate"
+
+struct axigate_regs {
+	u32		iag_wr;
+	u32		iag_rvsd;
+	u32		iag_rd;
+} __packed;
+
+struct xrt_axigate {
+	struct platform_device	*pdev;
+	void			*base;
+	struct mutex		gate_lock;
+
+	void			*evt_hdl;
+	const char		*ep_name;
+
+	bool			gate_freezed;
+};
+
+#define reg_rd(g, r)						\
+	ioread32(&((struct axigate_regs *)g->base)->r)
+#define reg_wr(g, v, r)						\
+	iowrite32(v, &((struct axigate_regs *)g->base)->r)
+
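+/*
+ * Gate control sequence used below: writing 0 to iag_wr closes
+ * (freezes) the gate; writing 0x2 then 0x3 reopens it. The iag_rd
+ * reads are presumably there to flush the posted writes.
+ */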
+#define freeze_gate(gate)			\
+	do {					\
+		reg_wr(gate, 0, iag_wr);	\
+		ndelay(500);			\
+		reg_rd(gate, iag_rd);		\
+	} while (0)
+
+#define free_gate(gate)				\
+	do {					\
+		reg_wr(gate, 0x2, iag_wr);	\
+		ndelay(500);			\
+		(void) reg_rd(gate, iag_rd);	\
+		reg_wr(gate, 0x3, iag_wr);	\
+		ndelay(500);			\
+		reg_rd(gate, iag_rd);		\
+	} while (0)
+
+static int xrt_axigate_epname_idx(struct platform_device *pdev)
+{
+	int			i;
+	int			ret;
+	struct resource		*res;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		xrt_err(pdev, "Empty Resource!");
+		return -EINVAL;
+	}
+
+	for (i = 0; xrt_axigate_epnames[i]; i++) {
+		ret = strncmp(xrt_axigate_epnames[i], res->name,
+			strlen(xrt_axigate_epnames[i]) + 1);
+		if (!ret)
+			break;
+	}
+
+	return (xrt_axigate_epnames[i]) ? i : -EINVAL;
+}
+
+static bool xrt_axigate_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	const char		*ep_name = arg;
+	struct resource		*res;
+
+	if (id != XRT_SUBDEV_AXIGATE)
+		return false;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		xrt_err(pdev, "Empty Resource!");
+		return false;
+	}
+
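+	/*
+	 * Match any axigate leaf other than ourselves; the event
+	 * callback below only cares about other gate instances in
+	 * the hierarchy.
+	 */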
+	if (strncmp(res->name, ep_name, strlen(res->name) + 1))
+		return true;
+
+	return false;
+}
+
+static void xrt_axigate_freeze(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+	u32			freeze = 0;
+
+	gate = platform_get_drvdata(pdev);
+
+	mutex_lock(&gate->gate_lock);
+	freeze = reg_rd(gate, iag_rd);
+	if (freeze) {		/* gate is opened */
+		xrt_subdev_broadcast_event(pdev, XRT_EVENT_PRE_GATE_CLOSE);
+		freeze_gate(gate);
+	}
+
+	gate->gate_frozen = true;
+	mutex_unlock(&gate->gate_lock);
+
+	xrt_info(pdev, "freeze gate %s", gate->ep_name);
+}
+
+static void xrt_axigate_free(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+	u32			freeze;
+
+	gate = platform_get_drvdata(pdev);
+
+	mutex_lock(&gate->gate_lock);
+	freeze = reg_rd(gate, iag_rd);
+	if (!freeze) {		/* gate is closed */
+		free_gate(gate);
+		xrt_subdev_broadcast_event_async(pdev,
+			XRT_EVENT_POST_GATE_OPEN, NULL, NULL);
+		/*
+		 * xrt_axigate_free() could be called from an event
+		 * callback, so we can not wait here for the broadcast
+		 * handling to complete.
+		 */
+	}
+
+	gate->gate_frozen = false;
+	mutex_unlock(&gate->gate_lock);
+
+	xrt_info(pdev, "free gate %s", gate->ep_name);
+}
+
+static int
+xrt_axigate_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct platform_device *leaf;
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+	enum xrt_subdev_id id;
+	int instance;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION:
+		break;
+	default:
+		return XRT_EVENT_CB_CONTINUE;
+	}
+
+	id = esd->xevt_subdev_id;
+	instance = esd->xevt_subdev_instance;
+
+	/*
+	 * A higher-level axigate instance was created; make sure our
+	 * gate is opened. This covers the 1RP flow, which has a PLP
+	 * gate as well.
+	 */
+	leaf = xrt_subdev_get_leaf_by_id(pdev, id, instance);
+	if (leaf) {
+		if (xrt_axigate_epname_idx(leaf) >
+		    xrt_axigate_epname_idx(pdev))
+			xrt_axigate_free(pdev);
+		else
+			xrt_subdev_ioctl(leaf, XRT_AXIGATE_FREE, NULL);
+		xrt_subdev_put_leaf(pdev, leaf);
+	}
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static int
+xrt_axigate_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	switch (cmd) {
+	case XRT_AXIGATE_FREEZE:
+		xrt_axigate_freeze(pdev);
+		break;
+	case XRT_AXIGATE_FREE:
+		xrt_axigate_free(pdev);
+		break;
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int xrt_axigate_remove(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+
+	gate = platform_get_drvdata(pdev);
+
+	if (gate->evt_hdl)
+		xrt_subdev_remove_event_cb(pdev, gate->evt_hdl);
+	if (gate->base)
+		iounmap(gate->base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, gate);
+
+	return 0;
+}
+
+static int xrt_axigate_probe(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+	struct resource		*res;
+	int			ret;
+
+	gate = devm_kzalloc(&pdev->dev, sizeof(*gate), GFP_KERNEL);
+	if (!gate)
+		return -ENOMEM;
+
+	gate->pdev = pdev;
+	platform_set_drvdata(pdev, gate);
+
+	xrt_info(pdev, "probing...");
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		xrt_err(pdev, "Empty resource 0");
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	gate->base = ioremap(res->start, resource_size(res));
+	if (!gate->base) {
+		xrt_err(pdev, "map base iomem failed");
+		ret = -EFAULT;
+		goto failed;
+	}
+
+	gate->evt_hdl = xrt_subdev_add_event_cb(pdev,
+		xrt_axigate_leaf_match, (void *)res->name,
+		xrt_axigate_event_cb);
+
+	gate->ep_name = res->name;
+
+	mutex_init(&gate->gate_lock);
+
+	return 0;
+
+failed:
+	xrt_axigate_remove(pdev);
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_axigate_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = "ep_pr_isolate_ulp_00" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = "ep_pr_isolate_plp_00" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_axigate_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_axigate_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_axigate_table[] = {
+	{ XRT_AXIGATE, (kernel_ulong_t)&xrt_axigate_data },
+	{ },
+};
+
+struct platform_driver xrt_axigate_driver = {
+	.driver = {
+		.name = XRT_AXIGATE,
+	},
+	.probe = xrt_axigate_probe,
+	.remove = xrt_axigate_remove,
+	.id_table = xrt_axigate_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-calib.c b/drivers/fpga/alveo/lib/subdevs/xrt-calib.c
new file mode 100644
index 000000000000..55d18b7c4b0f
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-calib.c
@@ -0,0 +1,291 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA memory calibration driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * memory calibration
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+#include <linux/delay.h>
+#include "xrt-xclbin.h"
+#include "xrt-metadata.h"
+#include "xrt-ddr-srsr.h"
+#include "xrt-calib.h"
+
+#define XRT_CALIB	"xrt_calib"
+
+struct calib_cache {
+	struct list_head	link;
+	const char		*ep_name;
+	char			*data;
+	uint32_t		data_size;
+};
+
+struct calib {
+	struct platform_device	*pdev;
+	void __iomem		*calib_base;
+	struct mutex		lock;
+	struct list_head	cache_list;
+	uint32_t		cache_num;
+	void			*evt_hdl;
+	enum xrt_calib_results	result;
+};
+
+#define CALIB_DONE(calib)			\
+	(ioread32(calib->calib_base) & BIT(0))
+
+static bool xrt_calib_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	if (id == XRT_SUBDEV_UCS || id == XRT_SUBDEV_SRSR)
+		return true;
+
+	return false;
+}
+
+static void calib_cache_clean_nolock(struct calib *calib)
+{
+	struct calib_cache *cache, *temp;
+
+	list_for_each_entry_safe(cache, temp, &calib->cache_list, link) {
+		vfree(cache->data);
+		list_del(&cache->link);
+		vfree(cache);
+	}
+	calib->cache_num = 0;
+}
+
+static void calib_cache_clean(struct calib *calib)
+{
+	mutex_lock(&calib->lock);
+	calib_cache_clean_nolock(calib);
+	mutex_unlock(&calib->lock);
+}
+
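+/*
+ * Calibrate one SRSR instance. If calibration data for this endpoint is
+ * already cached, try the fast path first and fall back to a full
+ * calibration when it fails. The first calibration of an endpoint always
+ * takes the full path and populates the cache.
+ */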
+static int calib_srsr(struct calib *calib, struct platform_device *srsr_leaf)
+{
+	const char		*ep_name;
+	int			ret;
+	struct calib_cache	*cache = NULL, *temp;
+	struct xrt_srsr_ioctl_calib req = { 0 };
+
+	ret = xrt_subdev_ioctl(srsr_leaf, XRT_SRSR_EP_NAME,
+		(void *)&ep_name);
+	if (ret) {
+		xrt_err(calib->pdev, "failed to get SRSR name %d", ret);
+		goto done;
+	}
+	xrt_info(calib->pdev, "Calibrate SRSR %s", ep_name);
+
+	mutex_lock(&calib->lock);
+	list_for_each_entry_safe(cache, temp, &calib->cache_list, link) {
+		if (!strncmp(ep_name, cache->ep_name, strlen(ep_name) + 1)) {
+			req.xsic_buf = cache->data;
+			req.xsic_size = cache->data_size;
+			ret = xrt_subdev_ioctl(srsr_leaf,
+				XRT_SRSR_FAST_CALIB, &req);
+			if (ret) {
+				xrt_err(calib->pdev, "Fast calib failed %d",
+					ret);
+				break;
+			}
+			goto done;
+		}
+	}
+
+	if (ret) {
+		/* fall back to full calibration, reusing this cache entry */
+		xrt_info(calib->pdev, "fall back to full calibration");
+		vfree(cache->data);
+		cache->data = NULL;
+		cache->data_size = 0;
+	} else {
+		/* First full calibration */
+		cache = vzalloc(sizeof(*cache));
+		if (!cache) {
+			ret = -ENOMEM;
+			goto done;
+		}
+		list_add(&cache->link, &calib->cache_list);
+		calib->cache_num++;
+	}
+
+	req.xsic_buf = &cache->data;
+	ret = xrt_subdev_ioctl(srsr_leaf, XRT_SRSR_CALIB, &req);
+	if (ret) {
+		xrt_err(calib->pdev, "Full calib failed %d", ret);
+		list_del(&cache->link);
+		calib->cache_num--;
+		goto done;
+	}
+	cache->data_size = req.xsic_size;
+
+done:
+	mutex_unlock(&calib->lock);
+
+	if (ret && cache) {
+		vfree(cache->data);
+		vfree(cache);
+	}
+	return ret;
+}
+
+static int calib_calibration(struct calib *calib)
+{
+	int i;
+
+	for (i = 0; i < 20; i++) {
+		if (CALIB_DONE(calib))
+			break;
+		msleep(500);
+	}
+
+	if (i == 20) {
+		xrt_err(calib->pdev,
+			"MIG calibration timeout after bitstream download");
+		return -ETIMEDOUT;
+	}
+
+	xrt_info(calib->pdev, "took %dms", i * 500);
+	return 0;
+}
+
+static int xrt_calib_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct calib *calib = platform_get_drvdata(pdev);
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+	struct platform_device *leaf;
+	int ret;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION: {
+		if (esd->xevt_subdev_id == XRT_SUBDEV_SRSR) {
+			leaf = xrt_subdev_get_leaf_by_id(pdev,
+				XRT_SUBDEV_SRSR, esd->xevt_subdev_instance);
+			BUG_ON(!leaf);
+			ret = calib_srsr(calib, leaf);
+			xrt_subdev_put_leaf(pdev, leaf);
+			calib->result =
+				ret ? XRT_CALIB_FAILED : XRT_CALIB_SUCCEEDED;
+		} else if (esd->xevt_subdev_id == XRT_SUBDEV_UCS) {
+			ret = calib_calibration(calib);
+			calib->result =
+				ret ? XRT_CALIB_FAILED : XRT_CALIB_SUCCEEDED;
+		}
+		break;
+	}
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		break;
+	}
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+int xrt_calib_remove(struct platform_device *pdev)
+{
+	struct calib *calib = platform_get_drvdata(pdev);
+
+	xrt_subdev_remove_event_cb(pdev, calib->evt_hdl);
+	calib_cache_clean(calib);
+
+	if (calib->calib_base)
+		iounmap(calib->calib_base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, calib);
+
+	return 0;
+}
+
+int xrt_calib_probe(struct platform_device *pdev)
+{
+	struct calib *calib;
+	struct resource *res;
+	int err = 0;
+
+	calib = devm_kzalloc(&pdev->dev, sizeof(*calib), GFP_KERNEL);
+	if (!calib)
+		return -ENOMEM;
+
+	calib->pdev = pdev;
+	platform_set_drvdata(pdev, calib);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		goto failed;
+
+	calib->calib_base = ioremap(res->start, resource_size(res));
+	if (!calib->calib_base) {
+		err = -EIO;
+		xrt_err(pdev, "Map iomem failed");
+		goto failed;
+	}
+
+	calib->evt_hdl = xrt_subdev_add_event_cb(pdev, xrt_calib_leaf_match,
+		NULL, xrt_calib_event_cb);
+
+	mutex_init(&calib->lock);
+	INIT_LIST_HEAD(&calib->cache_list);
+
+	return 0;
+
+failed:
+	xrt_calib_remove(pdev);
+	return err;
+}
+
+static int
+xrt_calib_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct calib *calib = platform_get_drvdata(pdev);
+	int ret = 0;
+
+	switch (cmd) {
+	case XRT_CALIB_RESULT: {
+		enum xrt_calib_results *r = (enum xrt_calib_results *)arg;
+		*r = calib->result;
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		ret = -EINVAL;
+	}
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_calib_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_DDR_CALIB },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_calib_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_calib_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_calib_table[] = {
+	{ XRT_CALIB, (kernel_ulong_t)&xrt_calib_data },
+	{ },
+};
+
+struct platform_driver xrt_calib_driver = {
+	.driver = {
+		.name = XRT_CALIB,
+	},
+	.probe = xrt_calib_probe,
+	.remove = xrt_calib_remove,
+	.id_table = xrt_calib_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c b/drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c
new file mode 100644
index 000000000000..41702f7216f3
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c
@@ -0,0 +1,214 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Frequency Counter Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-clkfreq.h"
+
+#define CLKFREQ_ERR(clkfreq, fmt, arg...)   \
+	xrt_err((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_WARN(clkfreq, fmt, arg...)  \
+	xrt_warn((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_INFO(clkfreq, fmt, arg...)  \
+	xrt_info((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_DBG(clkfreq, fmt, arg...)   \
+	xrt_dbg((clkfreq)->pdev, fmt "\n", ##arg)
+
+#define XRT_CLKFREQ		"xrt_clkfreq"
+
+#define OCL_CLKWIZ_STATUS_MASK		0xffff
+
+#define OCL_CLKWIZ_STATUS_MEASURE_START	0x1
+#define OCL_CLKWIZ_STATUS_MEASURE_DONE	0x2
+#define OCL_CLK_FREQ_COUNTER_OFFSET	0x8
+#define OCL_CLK_FREQ_V5_COUNTER_OFFSET	0x10
+#define OCL_CLK_FREQ_V5_CLK0_ENABLED	0x10000
+
+struct clkfreq {
+	struct platform_device	*pdev;
+	void __iomem		*clkfreq_base;
+	const char		*clkfreq_ep_name;
+	struct mutex		clkfreq_lock;
+};
+
+static inline u32 reg_rd(struct clkfreq *clkfreq, u32 offset)
+{
+	return ioread32(clkfreq->clkfreq_base + offset);
+}
+
+static inline void reg_wr(struct clkfreq *clkfreq, u32 val, u32 offset)
+{
+	iowrite32(val, clkfreq->clkfreq_base + offset);
+}
+
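+/*
+ * Trigger a frequency measurement and poll (up to ~10ms) for completion.
+ * Counters that set the V5 bit in the status word report the result at a
+ * different offset than the legacy counter.
+ */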
+static u32 clkfreq_read(struct clkfreq *clkfreq)
+{
+	u32 freq = 0, status;
+	int times = 10;
+
+	mutex_lock(&clkfreq->clkfreq_lock);
+	reg_wr(clkfreq, OCL_CLKWIZ_STATUS_MEASURE_START, 0);
+	while (times != 0) {
+		status = reg_rd(clkfreq, 0);
+		if ((status & OCL_CLKWIZ_STATUS_MASK) ==
+		    OCL_CLKWIZ_STATUS_MEASURE_DONE)
+			break;
+		mdelay(1);
+		times--;
+	}
+	if (times > 0) {
+		freq = (status & OCL_CLK_FREQ_V5_CLK0_ENABLED) ?
+			reg_rd(clkfreq, OCL_CLK_FREQ_V5_COUNTER_OFFSET) :
+			reg_rd(clkfreq, OCL_CLK_FREQ_COUNTER_OFFSET);
+	}
+	mutex_unlock(&clkfreq->clkfreq_lock);
+
+	return freq;
+}
+
+static ssize_t freq_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct clkfreq *clkfreq = platform_get_drvdata(to_platform_device(dev));
+	u32 freq;
+	ssize_t count;
+
+	freq = clkfreq_read(clkfreq);
+	count = snprintf(buf, 64, "%d\n", freq);
+
+	return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clkfreq_attrs[] = {
+	&dev_attr_freq.attr,
+	NULL,
+};
+
+static struct attribute_group clkfreq_attr_group = {
+	.attrs = clkfreq_attrs,
+};
+
+static int
+xrt_clkfreq_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct clkfreq		*clkfreq;
+	int			ret = 0;
+
+	clkfreq = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_CLKFREQ_READ: {
+		*(u32 *)arg = clkfreq_read(clkfreq);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int clkfreq_remove(struct platform_device *pdev)
+{
+	struct clkfreq *clkfreq;
+
+	clkfreq = platform_get_drvdata(pdev);
+	if (!clkfreq) {
+		xrt_err(pdev, "driver data is NULL");
+		return -EINVAL;
+	}
+
+	CLKFREQ_INFO(clkfreq, "successfully removed clkfreq subdev");
+
+	if (clkfreq->clkfreq_base)
+		iounmap(clkfreq->clkfreq_base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, clkfreq);
+	return 0;
+}
+
+static int clkfreq_probe(struct platform_device *pdev)
+{
+	struct clkfreq *clkfreq = NULL;
+	struct resource *res;
+	int ret;
+
+	clkfreq = devm_kzalloc(&pdev->dev, sizeof(*clkfreq), GFP_KERNEL);
+	if (!clkfreq)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, clkfreq);
+	clkfreq->pdev = pdev;
+	mutex_init(&clkfreq->clkfreq_lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		CLKFREQ_ERR(clkfreq, "empty resource 0");
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	clkfreq->clkfreq_base = ioremap(res->start, resource_size(res));
+	if (!clkfreq->clkfreq_base) {
+		CLKFREQ_ERR(clkfreq, "map base %pR failed", res);
+		ret = -EFAULT;
+		goto failed;
+	}
+	clkfreq->clkfreq_ep_name = res->name;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &clkfreq_attr_group);
+	if (ret) {
+		CLKFREQ_ERR(clkfreq, "create clkfreq attrs failed: %d", ret);
+		goto failed;
+	}
+
+	CLKFREQ_INFO(clkfreq, "successfully initialized clkfreq subdev");
+
+	return 0;
+
+failed:
+	clkfreq_remove(pdev);
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_clkfreq_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .regmap_name = "freq_cnt" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_clkfreq_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_clkfreq_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_clkfreq_table[] = {
+	{ XRT_CLKFREQ, (kernel_ulong_t)&xrt_clkfreq_data },
+	{ },
+};
+
+struct platform_driver xrt_clkfreq_driver = {
+	.driver = {
+		.name = XRT_CLKFREQ,
+	},
+	.probe = clkfreq_probe,
+	.remove = clkfreq_remove,
+	.id_table = xrt_clkfreq_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-clock.c b/drivers/fpga/alveo/lib/subdevs/xrt-clock.c
new file mode 100644
index 000000000000..64ee67d31191
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-clock.c
@@ -0,0 +1,638 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Wizard Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-clock.h"
+#include "xrt-clkfreq.h"
+
+/* CLOCK_MAX_NUM_CLOCKS should be a concept from XCLBIN_ in the future */
+#define	CLOCK_MAX_NUM_CLOCKS		4
+#define	OCL_CLKWIZ_STATUS_OFFSET	0x4
+#define	OCL_CLKWIZ_STATUS_MASK		0xffff
+#define	OCL_CLKWIZ_STATUS_MEASURE_START	0x1
+#define	OCL_CLKWIZ_STATUS_MEASURE_DONE	0x2
+#define	OCL_CLKWIZ_CONFIG_OFFSET(n)	(0x200 + 4 * (n))
+#define	CLOCK_DEFAULT_EXPIRE_SECS	1
+
+#define	CLOCK_ERR(clock, fmt, arg...)	\
+	xrt_err((clock)->pdev, fmt "\n", ##arg)
+#define	CLOCK_WARN(clock, fmt, arg...)	\
+	xrt_warn((clock)->pdev, fmt "\n", ##arg)
+#define	CLOCK_INFO(clock, fmt, arg...)	\
+	xrt_info((clock)->pdev, fmt "\n", ##arg)
+#define	CLOCK_DBG(clock, fmt, arg...)	\
+	xrt_dbg((clock)->pdev, fmt "\n", ##arg)
+
+#define XRT_CLOCK	"xrt_clock"
+
+struct clock {
+	struct platform_device  *pdev;
+	void __iomem		*clock_base;
+	struct mutex		clock_lock;
+
+	const char		*clock_ep_name;
+};
+
+/*
+ * Precomputed table with config0 and config2 register values together with
+ * target frequency. The steps are approximately 5 MHz apart. Table is
+ * generated by wiz.pl.
+ */
+static const struct xmgmt_ocl_clockwiz {
+	/* target frequency */
+	unsigned short ocl;
+	/* config0 register */
+	unsigned long config0;
+	/* config2 register */
+	unsigned int config2;
+} frequency_table[] = {
+	{/*1275.000*/	10.000,		0x02EE0C01,	0x0001F47F},
+	{/*1575.000*/   15.000,		0x02EE0F01,     0x00000069},
+	{/*1600.000*/   20.000,		0x00001001,     0x00000050},
+	{/*1600.000*/   25.000,		0x00001001,     0x00000040},
+	{/*1575.000*/   30.000,		0x02EE0F01,     0x0001F434},
+	{/*1575.000*/   35.000,		0x02EE0F01,     0x0000002D},
+	{/*1600.000*/   40.000,		0x00001001,     0x00000028},
+	{/*1575.000*/   45.000,		0x02EE0F01,     0x00000023},
+	{/*1600.000*/   50.000,		0x00001001,     0x00000020},
+	{/*1512.500*/   55.000,		0x007D0F01,     0x0001F41B},
+	{/*1575.000*/   60.000,		0x02EE0F01,     0x0000FA1A},
+	{/*1462.500*/   65.000,		0x02710E01,     0x0001F416},
+	{/*1575.000*/   70.000,		0x02EE0F01,     0x0001F416},
+	{/*1575.000*/   75.000,		0x02EE0F01,     0x00000015},
+	{/*1600.000*/   80.000,		0x00001001,     0x00000014},
+	{/*1487.500*/   85.000,		0x036B0E01,     0x0001F411},
+	{/*1575.000*/   90.000,		0x02EE0F01,     0x0001F411},
+	{/*1425.000*/   95.000,		0x00FA0E01,     0x0000000F},
+	{/*1600.000*/   100.000,	0x00001001,     0x00000010},
+	{/*1575.000*/   105.000,	0x02EE0F01,     0x0000000F},
+	{/*1512.500*/   110.000,	0x007D0F01,     0x0002EE0D},
+	{/*1437.500*/   115.000,	0x01770E01,     0x0001F40C},
+	{/*1575.000*/   120.000,	0x02EE0F01,     0x00007D0D},
+	{/*1562.500*/   125.000,	0x02710F01,     0x0001F40C},
+	{/*1462.500*/   130.000,	0x02710E01,     0x0000FA0B},
+	{/*1350.000*/   135.000,	0x01F40D01,     0x0000000A},
+	{/*1575.000*/   140.000,	0x02EE0F01,     0x0000FA0B},
+	{/*1450.000*/   145.000,	0x01F40E01,     0x0000000A},
+	{/*1575.000*/   150.000,	0x02EE0F01,     0x0001F40A},
+	{/*1550.000*/   155.000,	0x01F40F01,     0x0000000A},
+	{/*1600.000*/   160.000,	0x00001001,     0x0000000A},
+	{/*1237.500*/   165.000,	0x01770C01,     0x0001F407},
+	{/*1487.500*/   170.000,	0x036B0E01,     0x0002EE08},
+	{/*1575.000*/   175.000,	0x02EE0F01,     0x00000009},
+	{/*1575.000*/   180.000,	0x02EE0F01,     0x0002EE08},
+	{/*1387.500*/   185.000,	0x036B0D01,     0x0001F407},
+	{/*1425.000*/   190.000,	0x00FA0E01,     0x0001F407},
+	{/*1462.500*/   195.000,	0x02710E01,     0x0001F407},
+	{/*1600.000*/   200.000,	0x00001001,     0x00000008},
+	{/*1537.500*/   205.000,        0x01770F01,     0x0001F407},
+	{/*1575.000*/   210.000,        0x02EE0F01,     0x0001F407},
+	{/*1075.000*/   215.000,        0x02EE0A01,     0x00000005},
+	{/*1512.500*/   220.000,        0x007D0F01,     0x00036B06},
+	{/*1575.000*/   225.000,        0x02EE0F01,     0x00000007},
+	{/*1437.500*/   230.000,        0x01770E01,     0x0000FA06},
+	{/*1175.000*/   235.000,        0x02EE0B01,     0x00000005},
+	{/*1500.000*/   240.000,        0x00000F01,     0x0000FA06},
+	{/*1225.000*/   245.000,        0x00FA0C01,     0x00000005},
+	{/*1562.500*/   250.000,        0x02710F01,     0x0000FA06},
+	{/*1275.000*/   255.000,        0x02EE0C01,     0x00000005},
+	{/*1462.500*/   260.000,        0x02710E01,     0x00027105},
+	{/*1325.000*/   265.000,        0x00FA0D01,     0x00000005},
+	{/*1350.000*/   270.000,        0x01F40D01,     0x00000005},
+	{/*1512.500*/   275.000,        0x007D0F01,     0x0001F405},
+	{/*1575.000*/   280.000,        0x02EE0F01,     0x00027105},
+	{/*1425.000*/   285.000,        0x00FA0E01,     0x00000005},
+	{/*1450.000*/   290.000,        0x01F40E01,     0x00000005},
+	{/*1475.000*/   295.000,        0x02EE0E01,     0x00000005},
+	{/*1575.000*/   300.000,        0x02EE0F01,     0x0000FA05},
+	{/*1525.000*/   305.000,        0x00FA0F01,     0x00000005},
+	{/*1550.000*/   310.000,        0x01F40F01,     0x00000005},
+	{/*1575.000*/   315.000,        0x02EE0F01,     0x00000005},
+	{/*1600.000*/   320.000,        0x00001001,     0x00000005},
+	{/*1462.500*/   325.000,        0x02710E01,     0x0001F404},
+	{/*1237.500*/   330.000,        0x01770C01,     0x0002EE03},
+	{/*837.500*/    335.000,        0x01770801,     0x0001F402},
+	{/*1487.500*/   340.000,        0x036B0E01,     0x00017704},
+	{/*862.500*/    345.000,        0x02710801,     0x0001F402},
+	{/*1575.000*/   350.000,        0x02EE0F01,     0x0001F404},
+	{/*887.500*/    355.000,        0x036B0801,     0x0001F402},
+	{/*1575.000*/   360.000,        0x02EE0F01,     0x00017704},
+	{/*912.500*/    365.000,        0x007D0901,     0x0001F402},
+	{/*1387.500*/   370.000,        0x036B0D01,     0x0002EE03},
+	{/*1500.000*/   375.000,        0x00000F01,     0x00000004},
+	{/*1425.000*/   380.000,        0x00FA0E01,     0x0002EE03},
+	{/*962.500*/    385.000,        0x02710901,     0x0001F402},
+	{/*1462.500*/   390.000,        0x02710E01,     0x0002EE03},
+	{/*987.500*/    395.000,        0x036B0901,     0x0001F402},
+	{/*1600.000*/   400.000,        0x00001001,     0x00000004},
+	{/*1012.500*/   405.000,        0x007D0A01,     0x0001F402},
+	{/*1537.500*/   410.000,        0x01770F01,     0x0002EE03},
+	{/*1037.500*/   415.000,        0x01770A01,     0x0001F402},
+	{/*1575.000*/   420.000,        0x02EE0F01,     0x0002EE03},
+	{/*1487.500*/   425.000,        0x036B0E01,     0x0001F403},
+	{/*1075.000*/   430.000,        0x02EE0A01,     0x0001F402},
+	{/*1087.500*/   435.000,        0x036B0A01,     0x0001F402},
+	{/*1375.000*/   440.000,        0x02EE0D01,     0x00007D03},
+	{/*1112.500*/   445.000,        0x007D0B01,     0x0001F402},
+	{/*1575.000*/   450.000,        0x02EE0F01,     0x0001F403},
+	{/*1137.500*/   455.000,        0x01770B01,     0x0001F402},
+	{/*1437.500*/   460.000,        0x01770E01,     0x00007D03},
+	{/*1162.500*/   465.000,        0x02710B01,     0x0001F402},
+	{/*1175.000*/   470.000,        0x02EE0B01,     0x0001F402},
+	{/*1425.000*/   475.000,        0x00FA0E01,     0x00000003},
+	{/*1500.000*/   480.000,        0x00000F01,     0x00007D03},
+	{/*1212.500*/   485.000,        0x007D0C01,     0x0001F402},
+	{/*1225.000*/   490.000,        0x00FA0C01,     0x0001F402},
+	{/*1237.500*/   495.000,        0x01770C01,     0x0001F402},
+	{/*1562.500*/   500.000,        0x02710F01,     0x00007D03},
+	{/*1262.500*/   505.000,        0x02710C01,     0x0001F402},
+	{/*1275.000*/   510.000,        0x02EE0C01,     0x0001F402},
+	{/*1287.500*/   515.000,        0x036B0C01,     0x0001F402},
+	{/*1300.000*/   520.000,        0x00000D01,     0x0001F402},
+	{/*1575.000*/   525.000,        0x02EE0F01,     0x00000003},
+	{/*1325.000*/   530.000,        0x00FA0D01,     0x0001F402},
+	{/*1337.500*/   535.000,        0x01770D01,     0x0001F402},
+	{/*1350.000*/   540.000,        0x01F40D01,     0x0001F402},
+	{/*1362.500*/   545.000,        0x02710D01,     0x0001F402},
+	{/*1512.500*/   550.000,        0x007D0F01,     0x0002EE02},
+	{/*1387.500*/   555.000,        0x036B0D01,     0x0001F402},
+	{/*1400.000*/   560.000,        0x00000E01,     0x0001F402},
+	{/*1412.500*/   565.000,        0x007D0E01,     0x0001F402},
+	{/*1425.000*/   570.000,        0x00FA0E01,     0x0001F402},
+	{/*1437.500*/   575.000,        0x01770E01,     0x0001F402},
+	{/*1450.000*/   580.000,        0x01F40E01,     0x0001F402},
+	{/*1462.500*/   585.000,        0x02710E01,     0x0001F402},
+	{/*1475.000*/   590.000,        0x02EE0E01,     0x0001F402},
+	{/*1487.500*/   595.000,        0x036B0E01,     0x0001F402},
+	{/*1575.000*/   600.000,        0x02EE0F01,     0x00027102},
+	{/*1512.500*/   605.000,        0x007D0F01,     0x0001F402},
+	{/*1525.000*/   610.000,        0x00FA0F01,     0x0001F402},
+	{/*1537.500*/   615.000,        0x01770F01,     0x0001F402},
+	{/*1550.000*/   620.000,        0x01F40F01,     0x0001F402},
+	{/*1562.500*/   625.000,        0x02710F01,     0x0001F402},
+	{/*1575.000*/   630.000,        0x02EE0F01,     0x0001F402},
+	{/*1587.500*/   635.000,        0x036B0F01,     0x0001F402},
+	{/*1600.000*/   640.000,        0x00001001,     0x0001F402},
+	{/*1290.000*/   645.000,        0x01F44005,     0x00000002},
+	{/*1462.500*/   650.000,        0x02710E01,     0x0000FA02}
+};
+
+static inline u32 reg_rd(struct clock *clock, u32 offset)
+{
+	return ioread32(clock->clock_base + offset);
+}
+
+static inline void reg_wr(struct clock *clock, u32 val, u32 offset)
+{
+	iowrite32(val, clock->clock_base + offset);
+}
+
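+/*
+ * Binary search the table for the entry closest to, but not above, the
+ * requested frequency; requests outside the table range are clamped to
+ * its first or last entry.
+ */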
+static u32 find_matching_freq_config(unsigned short freq,
+	const struct xmgmt_ocl_clockwiz *table, int size)
+{
+	u32 start = 0;
+	u32 end = size - 1;
+	u32 idx = size - 1;
+
+	if (freq < table[0].ocl)
+		return 0;
+
+	if (freq > table[size - 1].ocl)
+		return size - 1;
+
+	while (start < end) {
+		if (freq == table[idx].ocl)
+			break;
+		if (freq < table[idx].ocl)
+			end = idx;
+		else
+			start = idx + 1;
+		idx = start + (end - start) / 2;
+	}
+	if (freq < table[idx].ocl)
+		idx--;
+
+	return idx;
+}
+
+static u32 find_matching_freq(u32 freq,
+	const struct xmgmt_ocl_clockwiz *freq_table, int freq_table_size)
+{
+	int idx = find_matching_freq_config(freq, freq_table, freq_table_size);
+
+	return freq_table[idx].ocl;
+}
+
+static inline int clock_wiz_busy(struct clock *clock, int cycle,
+	int interval)
+{
+	u32 val = 0;
+	int count;
+
+	val = reg_rd(clock, OCL_CLKWIZ_STATUS_OFFSET);
+	for (count = 0; val != 1 && count < cycle; count++) {
+		mdelay(interval);
+		val = reg_rd(clock, OCL_CLKWIZ_STATUS_OFFSET);
+	}
+	if (val != 1) {
+		CLOCK_ERR(clock, "clockwiz is (%u) busy after %d ms",
+			val, cycle * interval);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
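+/*
+ * Recover the current frequency from the clock wizard configuration:
+ * freq = 100MHz input * multiplier / (div0 * div1), where the fractional
+ * parts of the multiplier and the second divider are carried in
+ * thousandths, hence the matching 1000x scaling of numerator and
+ * denominator below.
+ */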
+static int get_freq(struct clock *clock, u16 *freq)
+{
+#define XCL_INPUT_FREQ 100
+	const u64 input = XCL_INPUT_FREQ;
+	u32 val;
+	u32 mul0, div0;
+	u32 mul_frac0 = 0;
+	u32 div1;
+	u32 div_frac1 = 0;
+
+	BUG_ON(!mutex_is_locked(&clock->clock_lock));
+
+	val = reg_rd(clock, OCL_CLKWIZ_STATUS_OFFSET);
+	if ((val & 0x1) == 0) {
+		CLOCK_ERR(clock, "clockwiz is busy %x", val);
+		*freq = 0;
+		return -EBUSY;
+	}
+
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(0));
+
+	div0 = val & 0xff;
+	mul0 = (val & 0xff00) >> 8;
+	if (val & BIT(26)) {
+		mul_frac0 = val >> 16;
+		mul_frac0 &= 0x3ff;
+	}
+
+	/*
+	 * Multiply both the numerator (mul0) and the denominator (div0)
+	 * by 1000 to account for the fractional portion of the multiplier.
+	 */
+	mul0 *= 1000;
+	mul0 += mul_frac0;
+	div0 *= 1000;
+
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(2));
+
+	div1 = val & 0xff;
+	if (val & BIT(18)) {
+		div_frac1 = val >> 8;
+		div_frac1 &= 0x3ff;
+	}
+
+	/*
+	 * Multiply both the numerator (mul0) and the denominator (div1)
+	 * by 1000 to account for the fractional portion of the divider.
+	 */
+
+	div1 *= 1000;
+	div1 += div_frac1;
+	div0 *= div1;
+	mul0 *= 1000;
+	if (div0 == 0) {
+		CLOCK_ERR(clock, "clockwiz 0 divider");
+		return -EINVAL;
+	}
+
+	*freq = (u16)((input * mul0) / div0);
+
+	return 0;
+}
+
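+/*
+ * Program the clock wizard to the nearest supported frequency: wait for
+ * the wizard to go idle, write the precomputed config0/config2 pair,
+ * kick the reconfiguration through config register 23, then wait for the
+ * MMCM/PLL to lock, restoring the previous state on timeout.
+ */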
+static int set_freq(struct clock *clock, u16 freq)
+{
+	u32 config;
+	int err;
+	u32 idx = 0;
+	u32 val;
+
+	BUG_ON(!mutex_is_locked(&clock->clock_lock));
+
+	idx = find_matching_freq_config(freq, frequency_table,
+		ARRAY_SIZE(frequency_table));
+
+	CLOCK_INFO(clock, "New: %d Mhz", freq);
+	err = clock_wiz_busy(clock, 20, 50);
+	if (err)
+		return -EBUSY;
+
+	config = frequency_table[idx].config0;
+	reg_wr(clock, config, OCL_CLKWIZ_CONFIG_OFFSET(0));
+
+	config = frequency_table[idx].config2;
+	reg_wr(clock, config, OCL_CLKWIZ_CONFIG_OFFSET(2));
+
+	mdelay(10);
+	reg_wr(clock, 7, OCL_CLKWIZ_CONFIG_OFFSET(23));
+
+	mdelay(1);
+	reg_wr(clock, 2, OCL_CLKWIZ_CONFIG_OFFSET(23));
+
+	CLOCK_INFO(clock, "clockwiz waiting for locked signal");
+
+	err = clock_wiz_busy(clock, 100, 100);
+	if (err) {
+		CLOCK_ERR(clock, "clockwiz MMCM/PLL did not lock");
+		/* restore */
+		reg_wr(clock, 4, OCL_CLKWIZ_CONFIG_OFFSET(23));
+		mdelay(10);
+		reg_wr(clock, 0, OCL_CLKWIZ_CONFIG_OFFSET(23));
+		return err;
+	}
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(0));
+	CLOCK_INFO(clock, "clockwiz CONFIG(0) 0x%x", val);
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(2));
+	CLOCK_INFO(clock, "clockwiz CONFIG(2) 0x%x", val);
+
+	return 0;
+}
+
+static int get_freq_counter(struct clock *clock, u32 *freq)
+{
+	const void *cnter;
+	struct platform_device *cnter_leaf;
+	struct platform_device *pdev = clock->pdev;
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
+	int err = xrt_md_get_prop(DEV(pdev), pdata->xsp_dtb,
+		clock->clock_ep_name, NULL, PROP_CLK_CNT, &cnter, NULL);
+
+	BUG_ON(!mutex_is_locked(&clock->clock_lock));
+
+	if (err) {
+		xrt_err(pdev, "no counter specified");
+		return err;
+	}
+
+	cnter_leaf = xrt_subdev_get_leaf_by_epname(pdev, cnter);
+	if (!cnter_leaf) {
+		xrt_err(pdev, "can't find counter");
+		return -ENOENT;
+	}
+
+	err = xrt_subdev_ioctl(cnter_leaf, XRT_CLKFREQ_READ, freq);
+	if (err)
+		xrt_err(pdev, "can't read counter");
+	xrt_subdev_put_leaf(clock->pdev, cnter_leaf);
+
+	return err;
+}
+
+static int clock_get_freq(struct clock *clock, u16 *freq, u32 *freq_cnter)
+{
+	int err = 0;
+
+	mutex_lock(&clock->clock_lock);
+
+	if (err == 0 && freq)
+		err = get_freq(clock, freq);
+
+	if (err == 0 && freq_cnter)
+		err = get_freq_counter(clock, freq_cnter);
+
+	mutex_unlock(&clock->clock_lock);
+	return err;
+}
+
+static int clock_set_freq(struct clock *clock, u16 freq)
+{
+	int err;
+
+	mutex_lock(&clock->clock_lock);
+	err = set_freq(clock, freq);
+	mutex_unlock(&clock->clock_lock);
+
+	return err;
+}
+
+static int clock_verify_freq(struct clock *clock)
+{
+	int err = 0;
+	u16 freq;
+	u32 lookup_freq, clock_freq_counter, request_in_khz, tolerance;
+
+	mutex_lock(&clock->clock_lock);
+
+	err = get_freq(clock, &freq);
+	if (err) {
+		xrt_err(clock->pdev, "get freq failed, %d", err);
+		goto end;
+	}
+
+	err = get_freq_counter(clock, &clock_freq_counter);
+	if (err) {
+		xrt_err(clock->pdev, "get freq counter failed, %d", err);
+		goto end;
+	}
+
+	lookup_freq = find_matching_freq(freq, frequency_table,
+		ARRAY_SIZE(frequency_table));
+	request_in_khz = lookup_freq * 1000;
+	tolerance = lookup_freq * 50;
+	if (tolerance < abs((int)clock_freq_counter - (int)request_in_khz)) {
+		CLOCK_ERR(clock,
+		    "set clock(%s) failed, requested %u kHz, actual %u kHz",
+		    clock->clock_ep_name, request_in_khz, clock_freq_counter);
+		err = -EDOM;
+	} else {
+		CLOCK_INFO(clock, "verified clock (%s)", clock->clock_ep_name);
+	}
+
+end:
+	mutex_unlock(&clock->clock_lock);
+	return err;
+}
+
+static int clock_init(struct clock *clock)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
+	int err = 0;
+	const u16 *freq;
+
+	err = xrt_md_get_prop(DEV(clock->pdev), pdata->xsp_dtb,
+		clock->clock_ep_name, NULL, PROP_CLK_FREQ,
+		(const void **)&freq, NULL);
+	if (err) {
+		xrt_info(clock->pdev, "no default freq");
+		return 0;
+	}
+
+	mutex_lock(&clock->clock_lock);
+	err = set_freq(clock, be16_to_cpu(*freq));
+	mutex_unlock(&clock->clock_lock);
+
+	return err;
+}
+
+static ssize_t freq_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct clock *clock = platform_get_drvdata(to_platform_device(dev));
+	u16 freq = 0;
+	ssize_t count;
+
+	count = clock_get_freq(clock, &freq, NULL);
+	if (count < 0)
+		return count;
+
+	count = snprintf(buf, 64, "%d\n", freq);
+
+	return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clock_attrs[] = {
+	&dev_attr_freq.attr,
+	NULL,
+};
+
+static struct attribute_group clock_attr_group = {
+	.attrs = clock_attrs,
+};
+
+static int
+xrt_clock_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct clock		*clock;
+	int			ret = 0;
+
+	clock = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_CLOCK_SET: {
+		u16	freq = (u16)(uintptr_t)arg;
+
+		ret = clock_set_freq(clock, freq);
+		break;
+	}
+	case XRT_CLOCK_VERIFY: {
+		ret = clock_verify_freq(clock);
+		break;
+	}
+	case XRT_CLOCK_GET: {
+		struct xrt_clock_ioctl_get *get =
+			(struct xrt_clock_ioctl_get *)arg;
+
+		ret = clock_get_freq(clock, &get->freq, &get->freq_cnter);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int clock_remove(struct platform_device *pdev)
+{
+	struct clock *clock;
+
+	clock = platform_get_drvdata(pdev);
+	if (!clock) {
+		xrt_err(pdev, "driver data is NULL");
+		return -EINVAL;
+	}
+
+	CLOCK_INFO(clock, "successfully removed Clock subdev");
+
+	if (clock->clock_base)
+		iounmap(clock->clock_base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, clock);
+	return 0;
+}
+
+static int clock_probe(struct platform_device *pdev)
+{
+	struct clock *clock = NULL;
+	struct resource *res;
+	int ret;
+
+	clock = devm_kzalloc(&pdev->dev, sizeof(*clock), GFP_KERNEL);
+	if (!clock)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, clock);
+	clock->pdev = pdev;
+	mutex_init(&clock->clock_lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		CLOCK_ERR(clock, "empty resource 0");
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	clock->clock_base = ioremap(res->start, resource_size(res));
+	if (!clock->clock_base) {
+		CLOCK_ERR(clock, "map base %pR failed", res);
+		ret = -EFAULT;
+		goto failed;
+	}
+
+	clock->clock_ep_name = res->name;
+
+	ret = clock_init(clock);
+	if (ret)
+		goto failed;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &clock_attr_group);
+	if (ret) {
+		CLOCK_ERR(clock, "create clock attrs failed: %d", ret);
+		goto failed;
+	}
+
+	CLOCK_INFO(clock, "successfully initialized Clock subdev");
+
+	return 0;
+
+failed:
+	clock_remove(pdev);
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_clock_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .regmap_name = "clkwiz" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_clock_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_clock_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_clock_table[] = {
+	{ XRT_CLOCK, (kernel_ulong_t)&xrt_clock_data },
+	{ },
+};
+
+struct platform_driver xrt_clock_driver = {
+	.driver = {
+		.name = XRT_CLOCK,
+	},
+	.probe = clock_probe,
+	.remove = clock_remove,
+	.id_table = xrt_clock_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c
new file mode 100644
index 000000000000..ec03ba73c677
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c
@@ -0,0 +1,343 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include "xrt-subdev.h"
+#include "xrt-cmc-impl.h"
+#include "xmgmt-main.h"
+#include <linux/xrt/mailbox_proto.h>
+
+enum board_info_key {
+	BDINFO_SN = 0x21,
+	BDINFO_MAC0,
+	BDINFO_MAC1,
+	BDINFO_MAC2,
+	BDINFO_MAC3,
+	BDINFO_REV,
+	BDINFO_NAME,
+	BDINFO_BMC_VER,
+	BDINFO_MAX_PWR,
+	BDINFO_FAN_PRESENCE,
+	BDINFO_CONFIG_MODE,
+	BDINFO_MAC_DYNAMIC = 0x4b,
+};
+
+struct xrt_cmc_bdinfo {
+	struct platform_device *pdev;
+	struct mutex lock;
+	char *bdinfo;
+	size_t bdinfo_sz;
+};
+
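+/*
+ * Board info, as returned by the CMC, is a packed sequence of
+ * (key, length, value) records. Walk the records and return a pointer
+ * to the value for the requested key, or NULL if the key is absent.
+ */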
+static const char *cmc_parse_board_info(struct xrt_cmc_bdinfo *cmc_bdi,
+	enum board_info_key key, size_t *len)
+{
+	const char *buf = cmc_bdi->bdinfo, *p;
+	u32 sz = cmc_bdi->bdinfo_sz;
+
+	BUG_ON(!mutex_is_locked(&cmc_bdi->lock));
+
+	if (!buf)
+		return NULL;
+
+	for (p = buf; p < buf + sz;) {
+		char k = *(p++);
+		u8 l = *(p++);
+
+		if (k == key) {
+			if (len)
+				*len = l;
+			return p;
+		}
+		p += l;
+	}
+
+	return NULL;
+}
+
+static int cmc_refresh_board_info_nolock(struct xrt_cmc_bdinfo *cmc_bdi)
+{
+	int ret = 0;
+	int gen = -EINVAL;
+	char *bdinfo_raw = NULL;
+	size_t bd_info_sz = cmc_mailbox_max_payload(cmc_bdi->pdev);
+	struct platform_device *pdev = cmc_bdi->pdev;
+	void *newinfo = NULL;
+
+	BUG_ON(!mutex_is_locked(&cmc_bdi->lock));
+
+	bdinfo_raw = vzalloc(bd_info_sz);
+	if (bdinfo_raw == NULL) {
+		ret = -ENOMEM;
+		goto done;
+	}
+
+	/* Load new info from HW. */
+	gen = cmc_mailbox_acquire(pdev);
+	if (gen < 0) {
+		xrt_err(pdev, "failed to hold mailbox: %d", gen);
+		ret = gen;
+		goto done;
+	}
+	ret = cmc_mailbox_send_packet(pdev, gen, CMC_MBX_PKT_OP_BOARD_INFO,
+		NULL, 0);
+	if (ret) {
+		xrt_err(pdev, "failed to send pkt: %d", ret);
+		goto done;
+	}
+	ret = cmc_mailbox_recv_packet(pdev, gen, bdinfo_raw, &bd_info_sz);
+	if (ret) {
+		xrt_err(pdev, "failed to receive pkt: %d", ret);
+		goto done;
+	}
+
+	newinfo = kmemdup(bdinfo_raw, bd_info_sz, GFP_KERNEL);
+	if (newinfo == NULL) {
+		ret = -ENOMEM;
+		goto done;
+	}
+
+	kfree(cmc_bdi->bdinfo);
+	cmc_bdi->bdinfo = newinfo;
+	cmc_bdi->bdinfo_sz = bd_info_sz;
+
+done:
+	if (gen >= 0)
+		cmc_mailbox_release(pdev, gen);
+	vfree(bdinfo_raw);
+	return ret;
+}
+
+int cmc_refresh_board_info(struct platform_device *pdev)
+{
+	int ret;
+	struct xrt_cmc_bdinfo *cmc_bdi = cmc_pdev2bdinfo(pdev);
+
+	if (!cmc_bdi)
+		return -ENODEV;
+
+	mutex_lock(&cmc_bdi->lock);
+	ret = cmc_refresh_board_info_nolock(cmc_bdi);
+	mutex_unlock(&cmc_bdi->lock);
+	return ret;
+}
+
+static void cmc_copy_board_info_by_key(struct xrt_cmc_bdinfo *cmc_bdi,
+	enum board_info_key key, void *target)
+{
+	size_t len;
+	const char *info;
+
+	info = cmc_parse_board_info(cmc_bdi, key, &len);
+	if (!info)
+		return;
+	memcpy(target, info, len);
+}
+
+static void cmc_copy_dynamic_mac(struct xrt_cmc_bdinfo *cmc_bdi,
+	u32 *num_mac, void *first_mac)
+{
+	size_t len = 0;
+	const char *info;
+	u16 num = 0;
+
+	info = cmc_parse_board_info(cmc_bdi, BDINFO_MAC_DYNAMIC, &len);
+	if (!info)
+		return;
+
+	if (len != 8) {
+		xrt_err(cmc_bdi->pdev, "dynamic mac data is corrupted.");
+		return;
+	}
+
+	/*
+	 * Byte 0:1 is contiguous mac addresses number in LSB.
+	 * Byte 2:7 is first mac address.
+	 */
+	memcpy(&num, info, 2);
+	*num_mac = le16_to_cpu(num);
+	memcpy(first_mac, info + 2, 6);
+}
+
+static void cmc_copy_expect_bmc(struct xrt_cmc_bdinfo *cmc_bdi, void *expbmc)
+{
+/* Not a real SC version to indicate that SC image does not exist. */
+#define	NONE_BMC_VERSION	"0.0.0"
+	int ret = 0;
+	struct platform_device *pdev = cmc_bdi->pdev;
+	struct platform_device *mgmt_leaf = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_MGMT_MAIN, PLATFORM_DEVID_NONE);
+	struct xrt_mgmt_main_ioctl_get_axlf_section gs = { XMGMT_BLP, BMC, };
+	struct bmc *bmcsect;
+
+	(void)sprintf(expbmc, "%s", NONE_BMC_VERSION);
+
+	if (mgmt_leaf == NULL) {
+		xrt_err(pdev, "failed to get hold of main");
+		return;
+	}
+
+	ret = xrt_subdev_ioctl(mgmt_leaf, XRT_MGMT_MAIN_GET_AXLF_SECTION, &gs);
+	if (ret == 0) {
+		bmcsect = (struct bmc *)gs.xmmigas_section;
+		memcpy(expbmc, bmcsect->m_version, sizeof(bmcsect->m_version));
+	} else {
+		/*
+		 * no SC section, SC should be fixed, expected SC should be
+		 * the same as on board SC.
+		 */
+		cmc_copy_board_info_by_key(cmc_bdi, BDINFO_BMC_VER, expbmc);
+	}
+	(void) xrt_subdev_put_leaf(pdev, mgmt_leaf);
+}
+
+int cmc_bdinfo_read(struct platform_device *pdev, struct xcl_board_info *bdinfo)
+{
+	struct xrt_cmc_bdinfo *cmc_bdi = cmc_pdev2bdinfo(pdev);
+
+	if (!cmc_bdi)
+		return -ENODEV;
+
+	mutex_lock(&cmc_bdi->lock);
+
+	if (cmc_bdi->bdinfo == NULL) {
+		xrt_err(cmc_bdi->pdev, "board info is not available");
+		mutex_unlock(&cmc_bdi->lock);
+		return -ENOENT;
+	}
+
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_SN, bdinfo->serial_num);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_MAC0, bdinfo->mac_addr0);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_MAC1, bdinfo->mac_addr1);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_MAC2, bdinfo->mac_addr2);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_MAC3, bdinfo->mac_addr3);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_REV, bdinfo->revision);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_NAME, bdinfo->bd_name);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_BMC_VER, bdinfo->bmc_ver);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_MAX_PWR, &bdinfo->max_power);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_FAN_PRESENCE,
+		&bdinfo->fan_presence);
+	cmc_copy_board_info_by_key(cmc_bdi, BDINFO_CONFIG_MODE,
+		&bdinfo->config_mode);
+	cmc_copy_dynamic_mac(cmc_bdi, &bdinfo->mac_contiguous_num,
+		bdinfo->mac_addr_first);
+	cmc_copy_expect_bmc(cmc_bdi, bdinfo->exp_bmc_ver);
+
+	mutex_unlock(&cmc_bdi->lock);
+	return 0;
+}
+
+#define	CMC_BDINFO_STRING_SYSFS_NODE(name, key)				\
+	static ssize_t name##_show(struct device *dev,			\
+		struct device_attribute *attr, char *buf)		\
+	{								\
+		const char *s;						\
+		struct platform_device *pdev = to_platform_device(dev);	\
+		struct xrt_cmc_bdinfo *cmc_bdi = cmc_pdev2bdinfo(pdev);\
+									\
+		mutex_lock(&cmc_bdi->lock);				\
+		s = cmc_parse_board_info(cmc_bdi, key, NULL);		\
+		mutex_unlock(&cmc_bdi->lock);				\
+		return sprintf(buf, "%s\n", s ? s : "");		\
+	}								\
+	static DEVICE_ATTR_RO(name)
+
+CMC_BDINFO_STRING_SYSFS_NODE(bd_name, BDINFO_NAME);
+CMC_BDINFO_STRING_SYSFS_NODE(bmc_ver, BDINFO_BMC_VER);
+
+static struct attribute *cmc_bdinfo_attrs[] = {
+	&dev_attr_bd_name.attr,
+	&dev_attr_bmc_ver.attr,
+	NULL
+};
+
+static ssize_t bdinfo_raw_show(struct file *filp, struct kobject *kobj,
+	struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+{
+	ssize_t ret = 0;
+	struct device *dev = container_of(kobj, struct device, kobj);
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_cmc_bdinfo *cmc_bdi = cmc_pdev2bdinfo(pdev);
+
+	if (!cmc_bdi || !cmc_bdi->bdinfo_sz)
+		return 0;
+
+	mutex_lock(&cmc_bdi->lock);
+
+	if (off < cmc_bdi->bdinfo_sz) {
+		if (off + count > cmc_bdi->bdinfo_sz)
+			count = cmc_bdi->bdinfo_sz - off;
+		memcpy(buf, cmc_bdi->bdinfo + off, count);
+		ret = count;
+	}
+
+	mutex_unlock(&cmc_bdi->lock);
+	return ret;
+}
+
+static struct bin_attribute bdinfo_raw_attr = {
+	.attr = {
+		.name = "board_info_raw",
+		.mode = 0400
+	},
+	.read = bdinfo_raw_show,
+	.size = 0
+};
+
+static struct bin_attribute *cmc_bdinfo_bin_attrs[] = {
+	&bdinfo_raw_attr,
+	NULL,
+};
+
+static struct attribute_group cmc_bdinfo_attr_group = {
+	.attrs = cmc_bdinfo_attrs,
+	.bin_attrs = cmc_bdinfo_bin_attrs,
+};
+
+void cmc_bdinfo_remove(struct platform_device *pdev)
+{
+	struct xrt_cmc_bdinfo *cmc_bdi = cmc_pdev2bdinfo(pdev);
+
+	if (!cmc_bdi)
+		return;
+
+	sysfs_remove_group(&pdev->dev.kobj, &cmc_bdinfo_attr_group);
+	kfree(cmc_bdi->bdinfo);
+}
+
+int cmc_bdinfo_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl)
+{
+	int ret;
+	struct xrt_cmc_bdinfo *cmc_bdi;
+
+	cmc_bdi = devm_kzalloc(DEV(pdev), sizeof(*cmc_bdi), GFP_KERNEL);
+	if (!cmc_bdi)
+		return -ENOMEM;
+
+	cmc_bdi->pdev = pdev;
+	mutex_init(&cmc_bdi->lock);
+
+	mutex_lock(&cmc_bdi->lock);
+	ret = cmc_refresh_board_info_nolock(cmc_bdi);
+	mutex_unlock(&cmc_bdi->lock);
+	if (ret) {
+		xrt_err(pdev, "failed to load board info: %d", ret);
+		goto fail;
+	}
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &cmc_bdinfo_attr_group);
+	if (ret) {
+		xrt_err(pdev, "create bdinfo attrs failed: %d", ret);
+		goto fail;
+	}
+
+	*hdl = cmc_bdi;
+	return 0;
+
+fail:
+	cmc_bdinfo_remove(pdev);
+	devm_kfree(DEV(pdev), cmc_bdi);
+	return ret;
+}
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c
new file mode 100644
index 000000000000..eeee1296c732
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c
@@ -0,0 +1,322 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/delay.h>
+#include <linux/string.h>
+#include <linux/vmalloc.h>
+#include "xrt-subdev.h"
+#include "xmgmt-main.h"
+#include "xrt-cmc-impl.h"
+
+struct xrt_cmc_ctrl {
+	struct platform_device *pdev;
+	struct cmc_reg_map reg_mutex;
+	struct cmc_reg_map reg_reset;
+	struct cmc_reg_map reg_io;
+	struct cmc_reg_map reg_image;
+	char *firmware;
+	size_t firmware_size;
+	void *evt_hdl;
+};
+
+static inline void
+cmc_mutex_config(struct xrt_cmc_ctrl *cmc_ctrl, u32 val)
+{
+	iowrite32(val, cmc_ctrl->reg_mutex.crm_addr + CMC_REG_MUTEX_CONFIG);
+}
+
+static inline u32
+cmc_mutex_status(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	return ioread32(cmc_ctrl->reg_mutex.crm_addr + CMC_REG_MUTEX_STATUS);
+}
+
+static inline void
+cmc_reset_wr(struct xrt_cmc_ctrl *cmc_ctrl, u32 val)
+{
+	iowrite32(val, cmc_ctrl->reg_reset.crm_addr);
+}
+
+static inline u32
+cmc_reset_rd(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	return ioread32(cmc_ctrl->reg_reset.crm_addr);
+}
+
+static inline void
+cmc_io_wr(struct xrt_cmc_ctrl *cmc_ctrl, u32 off, u32 val)
+{
+	iowrite32(val, cmc_ctrl->reg_io.crm_addr + off);
+}
+
+static inline u32
+cmc_io_rd(struct xrt_cmc_ctrl *cmc_ctrl, u32 off)
+{
+	return ioread32(cmc_ctrl->reg_io.crm_addr + off);
+}
+
+static inline bool cmc_reset_held(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	return cmc_reset_rd(cmc_ctrl) == CMC_RESET_MASK_ON;
+}
+
+static inline bool cmc_ulp_access_allowed(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	return (cmc_mutex_status(cmc_ctrl) & CMC_MUTEX_MASK_GRANT) != 0;
+}
+
+static inline bool cmc_stopped(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	union cmc_status st;
+
+	st.status_val = cmc_io_rd(cmc_ctrl, CMC_REG_IO_STATUS);
+	return st.status.mb_stopped;
+}
+
+static inline bool cmc_ready(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	return (cmc_mutex_status(cmc_ctrl) & CMC_MUTEX_MASK_READY) != 0;
+}
+
+static int cmc_ulp_access(struct xrt_cmc_ctrl *cmc_ctrl, bool granted)
+{
+	const char *opname = granted ? "grant access" : "revoke access";
+
+	cmc_mutex_config(cmc_ctrl, granted ? 1 : 0);
+	CMC_WAIT(cmc_ulp_access_allowed(cmc_ctrl) == granted);
+	if (cmc_ulp_access_allowed(cmc_ctrl) != granted) {
+		xrt_err(cmc_ctrl->pdev, "mutex status is 0x%x after %s",
+			cmc_mutex_status(cmc_ctrl), opname);
+		return -EBUSY;
+	}
+	xrt_info(cmc_ctrl->pdev, "%s operation succeeded", opname);
+	return 0;
+}
+
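+/*
+ * Stop the CMC firmware: request a stop through the control register,
+ * wait for the firmware to confirm it has stopped, then hold the
+ * controller in reset so that a new image can be loaded.
+ */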
+static int cmc_stop(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	struct platform_device *pdev = cmc_ctrl->pdev;
+
+	if (cmc_reset_held(cmc_ctrl)) {
+		xrt_info(pdev, "CMC is already in reset state");
+		return 0;
+	}
+
+	if (!cmc_stopped(cmc_ctrl)) {
+		cmc_io_wr(cmc_ctrl, CMC_REG_IO_CONTROL, CMC_CTRL_MASK_STOP);
+		cmc_io_wr(cmc_ctrl, CMC_REG_IO_STOP_CONFIRM, 1);
+		CMC_WAIT(cmc_stopped(cmc_ctrl));
+		if (!cmc_stopped(cmc_ctrl)) {
+			xrt_err(pdev, "failed to stop CMC");
+			return -ETIMEDOUT;
+		}
+	}
+
+	cmc_reset_wr(cmc_ctrl, CMC_RESET_MASK_ON);
+	if (!cmc_reset_held(cmc_ctrl)) {
+		xrt_err(pdev, "failed to hold CMC in reset state");
+		return -EINVAL;
+	}
+
+	xrt_info(pdev, "CMC is successfully stopped");
+	return 0;
+}
+
+static int cmc_load_image(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	struct platform_device *pdev = cmc_ctrl->pdev;
+
+	/* Sanity check the size of the firmware. */
+	if (cmc_ctrl->firmware_size > cmc_ctrl->reg_image.crm_size) {
+		xrt_err(pdev, "CMC firmware image is too big: %ld",
+			cmc_ctrl->firmware_size);
+		return -EINVAL;
+	}
+
+	xrt_memcpy_toio(cmc_ctrl->reg_image.crm_addr,
+		cmc_ctrl->firmware, cmc_ctrl->firmware_size);
+	return 0;
+}
+
+static int cmc_start(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	struct platform_device *pdev = cmc_ctrl->pdev;
+
+	cmc_reset_wr(cmc_ctrl, CMC_RESET_MASK_OFF);
+	if (cmc_reset_held(cmc_ctrl)) {
+		xrt_err(pdev, "failed to release CMC from reset state");
+		return -EINVAL;
+	}
+
+	CMC_WAIT(cmc_ready(cmc_ctrl));
+	if (!cmc_ready(cmc_ctrl)) {
+		xrt_err(pdev, "failed to wait for CMC to be ready");
+		return -ETIMEDOUT;
+	}
+
+	xrt_info(pdev, "Wait for 5 seconds for CMC to connect to SC");
+	ssleep(5);
+
+	xrt_info(pdev, "CMC is ready: version 0x%x, status 0x%x, id 0x%x",
+		cmc_io_rd(cmc_ctrl, CMC_REG_IO_VERSION),
+		cmc_io_rd(cmc_ctrl, CMC_REG_IO_STATUS),
+		cmc_io_rd(cmc_ctrl, CMC_REG_IO_MAGIC));
+
+	return 0;
+}
+
+static int cmc_fetch_firmware(struct xrt_cmc_ctrl *cmc_ctrl)
+{
+	int ret = 0;
+	struct platform_device *pdev = cmc_ctrl->pdev;
+	struct platform_device *mgmt_leaf = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_MGMT_MAIN, PLATFORM_DEVID_NONE);
+	struct xrt_mgmt_main_ioctl_get_axlf_section gs = {
+		XMGMT_BLP, FIRMWARE,
+	};
+
+	if (mgmt_leaf == NULL)
+		return -ENOENT;
+
+	ret = xrt_subdev_ioctl(mgmt_leaf, XRT_MGMT_MAIN_GET_AXLF_SECTION, &gs);
+	if (ret == 0) {
+		cmc_ctrl->firmware = vmalloc(gs.xmmigas_section_size);
+		if (cmc_ctrl->firmware == NULL) {
+			ret = -ENOMEM;
+		} else {
+			memcpy(cmc_ctrl->firmware, gs.xmmigas_section,
+				gs.xmmigas_section_size);
+			cmc_ctrl->firmware_size = gs.xmmigas_section_size;
+		}
+	} else {
+		xrt_err(pdev, "failed to fetch firmware: %d", ret);
+	}
+	(void) xrt_subdev_put_leaf(pdev, mgmt_leaf);
+
+	return ret;
+}
+
+static ssize_t status_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct xrt_cmc_ctrl *cmc_ctrl = dev_get_drvdata(dev);
+	u32 val = cmc_io_rd(cmc_ctrl, CMC_REG_IO_STATUS);
+
+	return sprintf(buf, "0x%x\n", val);
+}
+static DEVICE_ATTR_RO(status);
+
+static struct attribute *cmc_ctrl_attrs[] = {
+	&dev_attr_status.attr,
+	NULL,
+};
+
+static struct attribute_group cmc_ctrl_attr_group = {
+	.attrs = cmc_ctrl_attrs,
+};
+
+void cmc_ctrl_remove(struct platform_device *pdev)
+{
+	struct xrt_cmc_ctrl *cmc_ctrl =
+		(struct xrt_cmc_ctrl *)cmc_pdev2ctrl(pdev);
+
+	if (!cmc_ctrl)
+		return;
+
+	if (cmc_ctrl->evt_hdl)
+		(void) xrt_subdev_remove_event_cb(pdev, cmc_ctrl->evt_hdl);
+	(void) sysfs_remove_group(&DEV(cmc_ctrl->pdev)->kobj,
+		&cmc_ctrl_attr_group);
+	(void) cmc_ulp_access(cmc_ctrl, false);
+	vfree(cmc_ctrl->firmware);
+	/* We intentionally leave CMC in running state. */
+}
+
+static bool cmc_ctrl_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	/* Only interested in broadcast events. */
+	return false;
+}
+
+static int cmc_ctrl_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct xrt_cmc_ctrl *cmc_ctrl =
+		(struct xrt_cmc_ctrl *)cmc_pdev2ctrl(pdev);
+
+	switch (evt) {
+	case XRT_EVENT_PRE_GATE_CLOSE:
+		(void) cmc_ulp_access(cmc_ctrl, false);
+		break;
+	case XRT_EVENT_POST_GATE_OPEN:
+		(void) cmc_ulp_access(cmc_ctrl, true);
+		break;
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		break;
+	}
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+int cmc_ctrl_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl)
+{
+	struct xrt_cmc_ctrl *cmc_ctrl;
+	int ret = 0;
+
+	cmc_ctrl = devm_kzalloc(DEV(pdev), sizeof(*cmc_ctrl), GFP_KERNEL);
+	if (!cmc_ctrl)
+		return -ENOMEM;
+
+	cmc_ctrl->pdev = pdev;
+
+	/* Obtain register maps we need to start/stop CMC. */
+	cmc_ctrl->reg_mutex = regmaps[IO_MUTEX];
+	cmc_ctrl->reg_reset = regmaps[IO_GPIO];
+	cmc_ctrl->reg_io = regmaps[IO_REG];
+	cmc_ctrl->reg_image = regmaps[IO_IMAGE_MGMT];
+
+	/* Get firmware image from xmgmt-main leaf. */
+	ret = cmc_fetch_firmware(cmc_ctrl);
+	if (ret)
+		goto done;
+
+	/* Load firmware. */
+
+	ret = cmc_ulp_access(cmc_ctrl, false);
+	if (ret)
+		goto done;
+
+	ret = cmc_stop(cmc_ctrl);
+	if (ret)
+		goto done;
+
+	ret = cmc_load_image(cmc_ctrl);
+	if (ret)
+		goto done;
+
+	ret = cmc_start(cmc_ctrl);
+	if (ret)
+		goto done;
+
+	ret  = sysfs_create_group(&DEV(pdev)->kobj, &cmc_ctrl_attr_group);
+	if (ret)
+		xrt_err(pdev, "failed to create sysfs nodes: %d", ret);
+
+	cmc_ctrl->evt_hdl = xrt_subdev_add_event_cb(pdev,
+		cmc_ctrl_leaf_match, NULL, cmc_ctrl_event_cb);
+
+	*hdl = cmc_ctrl;
+	return 0;
+
+done:
+	cmc_ctrl_remove(pdev);
+	devm_kfree(DEV(pdev), cmc_ctrl);
+	return ret;
+}
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h
new file mode 100644
index 000000000000..9454dc948a41
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_CMC_IMPL_H_
+#define	_XRT_CMC_IMPL_H_
+
+#include "linux/delay.h"
+#include "xrt-subdev.h"
+#include <linux/xrt/mailbox_proto.h>
+
+#define	CMC_MAX_RETRY		150 /* Retry is set to 15s */
+#define	CMC_MAX_RETRY_LONG	(CMC_MAX_RETRY * 4) /* mailbox retry is 1min */
+#define	CMC_RETRY_INTERVAL	100 /* 100ms */
+
+/* Mutex register defines. */
+#define	CMC_REG_MUTEX_CONFIG			0x0
+#define	CMC_REG_MUTEX_STATUS			0x8
+#define	CMC_MUTEX_MASK_GRANT			(0x1 << 0)
+#define	CMC_MUTEX_MASK_READY			(0x1 << 1)
+
+/* Reset register defines. */
+#define	CMC_RESET_MASK_ON			0x0
+#define	CMC_RESET_MASK_OFF			0x1
+
+/* IO register defines. */
+#define	CMC_REG_IO_MAGIC			0x0
+#define	CMC_REG_IO_VERSION			0x4
+#define	CMC_REG_IO_STATUS			0x8
+#define	CMC_REG_IO_ERROR			0xc
+#define	CMC_REG_IO_CONTROL			0x18
+#define	CMC_REG_IO_STOP_CONFIRM			0x1C
+#define	CMC_REG_IO_MBX_OFFSET			0x300
+#define	CMC_REG_IO_MBX_ERROR			0x304
+#define	CMC_REG_IO_CORE_VERSION			0xC4C
+
+#define	CMC_CTRL_MASK_CLR_ERR			(1 << 1)
+#define	CMC_CTRL_MASK_STOP			(1 << 3)
+#define	CMC_CTRL_MASK_MBX_PKT_OWNER		(1 << 5)
+#define	CMC_ERROR_MASK_MBX_ERR			(1 << 26)
+#define	CMC_STATUS_MASK_STOPPED			(1 << 1)
+
+#define	__CMC_WAIT(cond, retries)				\
+	do {							\
+		int retry = 0;					\
+		while (retry++ < retries && !(cond))		\
+			msleep(CMC_RETRY_INTERVAL);		\
+	} while (0)
+#define CMC_WAIT(cond)	__CMC_WAIT(cond, CMC_MAX_RETRY)
+#define CMC_LONG_WAIT(cond)	__CMC_WAIT(cond, CMC_MAX_RETRY_LONG)
+
+union cmc_status {
+	u32 status_val;
+	struct {
+		u32 init_done		: 1;
+		u32 mb_stopped		: 1;
+		u32 reserved0		: 1;
+		u32 watchdog_reset	: 1;
+		u32 reserved1		: 6;
+		u32 power_mode		: 2;
+		u32 reserved2		: 12;
+		u32 sc_comm_ver		: 4;
+		u32 sc_mode		: 3;
+		u32 invalid_sc		: 1;
+	} status;
+};
+
+enum {
+	CMC_MBX_PKT_OP_UNKNOWN = 0,
+	CMC_MBX_PKT_OP_MSP432_SEC_START,
+	CMC_MBX_PKT_OP_MSP432_SEC_DATA,
+	CMC_MBX_PKT_OP_MSP432_IMAGE_END,
+	CMC_MBX_PKT_OP_BOARD_INFO,
+	CMC_MBX_PKT_OP_MSP432_ERASE_FW,
+};
+
+enum {
+	IO_REG = 0,
+	IO_GPIO,
+	IO_IMAGE_MGMT,
+	IO_MUTEX,
+	NUM_IOADDR
+};
+
+struct cmc_reg_map {
+	void __iomem *crm_addr;
+	size_t crm_size;
+};
+
+extern int cmc_ctrl_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl);
+extern void cmc_ctrl_remove(struct platform_device *pdev);
+extern void *cmc_pdev2ctrl(struct platform_device *pdev);
+
+extern int cmc_sensor_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl);
+extern void cmc_sensor_remove(struct platform_device *pdev);
+extern void *cmc_pdev2sensor(struct platform_device *pdev);
+extern void cmc_sensor_read(struct platform_device *pdev, struct xcl_sensor *s);
+
+extern int cmc_mailbox_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl);
+extern void cmc_mailbox_remove(struct platform_device *pdev);
+extern void *cmc_pdev2mbx(struct platform_device *pdev);
+extern int cmc_mailbox_acquire(struct platform_device *pdev);
+extern void cmc_mailbox_release(struct platform_device *pdev, int generation);
+extern size_t cmc_mailbox_max_payload(struct platform_device *pdev);
+extern int cmc_mailbox_send_packet(struct platform_device *pdev, int generation,
+	u8 op, const char *buf, size_t len);
+extern int cmc_mailbox_recv_packet(struct platform_device *pdev, int generation,
+	char *buf, size_t *len);
+
+extern int cmc_bdinfo_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl);
+extern void cmc_bdinfo_remove(struct platform_device *pdev);
+extern void *cmc_pdev2bdinfo(struct platform_device *pdev);
+extern int cmc_refresh_board_info(struct platform_device *pdev);
+extern int cmc_bdinfo_read(struct platform_device *pdev,
+	struct xcl_board_info *bdinfo);
+
+extern int cmc_sc_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl);
+extern void cmc_sc_remove(struct platform_device *pdev);
+extern void *cmc_pdev2sc(struct platform_device *pdev);
+extern int cmc_sc_open(struct inode *inode, struct file *file);
+extern int cmc_sc_close(struct inode *inode, struct file *file);
+extern ssize_t cmc_update_sc_firmware(struct file *file,
+	const char __user *ubuf, size_t n, loff_t *off);
+extern loff_t cmc_sc_llseek(struct file *filp, loff_t off, int whence);
+
+#endif	/* _XRT_CMC_IMPL_H_ */
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c
new file mode 100644
index 000000000000..5912cd9f3e90
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c
@@ -0,0 +1,320 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/mutex.h>
+#include <linux/delay.h>
+#include "xrt-subdev.h"
+#include "xrt-cmc-impl.h"
+
+/* We have a 4k buffer for cmc mailbox */
+#define	CMC_PKT_MAX_SZ	1024 /* In u32 */
+#define	CMC_PKT_MAX_PAYLOAD_SZ	\
+	(CMC_PKT_MAX_SZ - (sizeof(struct cmc_pkt_hdr) / sizeof(u32))) /* In u32 */
+#define	CMC_PKT_MAX_PAYLOAD_SZ_IN_BYTES	(CMC_PKT_MAX_PAYLOAD_SZ * sizeof(u32))
+#define	CMC_PKT_SZ(hdr)		\
+	((sizeof(struct cmc_pkt_hdr) + (hdr)->payload_sz + sizeof(u32) - 1) / \
+	sizeof(u32)) /* In u32 */
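+
+/*
+ * Worked example: a packet carrying a 10-byte payload occupies
+ * CMC_PKT_SZ = (4 + 10 + 4 - 1) / 4 = 4 DWORDs, i.e. one DWORD of
+ * header plus the payload rounded up to the next DWORD boundary.
+ */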
+
+/* Make sure hdr is multiple of u32 */
+struct cmc_pkt_hdr {
+	u32 payload_sz	: 12;
+	u32 reserved	: 12;
+	u32 op		: 8;
+};
+
+struct cmc_pkt {
+	struct cmc_pkt_hdr hdr;
+	u32 data[CMC_PKT_MAX_PAYLOAD_SZ];
+};
+
+struct xrt_cmc_mbx {
+	struct platform_device *pdev;
+	struct cmc_reg_map reg_io;
+	u32 mbx_offset;
+	struct mutex lock;
+	struct cmc_pkt pkt;
+	struct semaphore sem;
+	int generation;
+};
+
+static inline void
+cmc_io_wr(struct xrt_cmc_mbx *cmc_mbx, u32 off, u32 val)
+{
+	iowrite32(val, cmc_mbx->reg_io.crm_addr + off);
+}
+
+static inline u32
+cmc_io_rd(struct xrt_cmc_mbx *cmc_mbx, u32 off)
+{
+	return ioread32(cmc_mbx->reg_io.crm_addr + off);
+}
+
+static inline bool
+cmc_pkt_host_owned(struct xrt_cmc_mbx *cmc_mbx)
+{
+	return (cmc_io_rd(cmc_mbx, CMC_REG_IO_CONTROL) &
+		CMC_CTRL_MASK_MBX_PKT_OWNER) == 0;
+}
+
+static inline void
+cmc_pkt_control_set(struct xrt_cmc_mbx *cmc_mbx, u32 ctrl)
+{
+	u32 val = cmc_io_rd(cmc_mbx, CMC_REG_IO_CONTROL);
+
+	cmc_io_wr(cmc_mbx, CMC_REG_IO_CONTROL, val | ctrl);
+}
+
+static inline void
+cmc_pkt_notify_device(struct xrt_cmc_mbx *cmc_mbx)
+{
+	cmc_pkt_control_set(cmc_mbx, CMC_CTRL_MASK_MBX_PKT_OWNER);
+}
+
+static inline void
+cmc_pkt_clear_error(struct xrt_cmc_mbx *cmc_mbx)
+{
+	cmc_pkt_control_set(cmc_mbx, CMC_CTRL_MASK_CLR_ERR);
+}
+
+static int cmc_mailbox_wait(struct xrt_cmc_mbx *cmc_mbx)
+{
+	u32 val;
+
+	BUG_ON(!mutex_is_locked(&cmc_mbx->lock));
+
+	CMC_LONG_WAIT(cmc_pkt_host_owned(cmc_mbx));
+	if (!cmc_pkt_host_owned(cmc_mbx)) {
+		xrt_err(cmc_mbx->pdev, "CMC packet error: timed out");
+		return -ETIMEDOUT;
+	}
+
+	val = cmc_io_rd(cmc_mbx, CMC_REG_IO_ERROR);
+	if (val & CMC_ERROR_MASK_MBX_ERR)
+		val = cmc_io_rd(cmc_mbx, CMC_REG_IO_MBX_ERROR);
+	if (val) {
+		xrt_err(cmc_mbx->pdev, "CMC packet error: %d", val);
+		cmc_pkt_clear_error(cmc_mbx);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int cmc_mailbox_pkt_write(struct xrt_cmc_mbx *cmc_mbx)
+{
+	u32 *pkt = (u32 *)&cmc_mbx->pkt;
+	u32 len = CMC_PKT_SZ(&cmc_mbx->pkt.hdr);
+	int ret = 0;
+	u32 i;
+
+	BUG_ON(!mutex_is_locked(&cmc_mbx->lock));
+
+#ifdef	MBX_PKT_DEBUG
+	xrt_info(cmc_mbx->pdev, "Sending CMC packet: %d DWORDS...", len);
+	xrt_info(cmc_mbx->pdev, "opcode=%d payload_sz=0x%x (0x%x)",
+		cmc_mbx->pkt.hdr.op, cmc_mbx->pkt.hdr.payload_sz, pkt[0]);
+	/* Dump the first 16 payload bytes via the kernel helper. */
+	print_hex_dump(KERN_INFO, "cmc pkt: ", DUMP_PREFIX_OFFSET,
+		16, 1, cmc_mbx->pkt.data, 16, false);
+#endif
+	/* Push pkt data to mailbox on HW. */
+	for (i = 0; i < len; i++) {
+		cmc_io_wr(cmc_mbx,
+			cmc_mbx->mbx_offset + i * sizeof(u32), pkt[i]);
+	}
+
+	/* Notify HW that a pkt is ready for process. */
+	cmc_pkt_notify_device(cmc_mbx);
+	/* Make sure HW is done with the mailbox buffer. */
+	ret = cmc_mailbox_wait(cmc_mbx);
+
+	return ret;
+}
+
+static int cmc_mailbox_pkt_read(struct xrt_cmc_mbx *cmc_mbx)
+{
+	struct cmc_pkt_hdr hdr;
+	u32 *pkt;
+	u32 len;
+	u32 i;
+	int ret = 0;
+
+	BUG_ON(!mutex_is_locked(&cmc_mbx->lock));
+
+	/* Make sure HW is done with the mailbox buffer. */
+	ret = cmc_mailbox_wait(cmc_mbx);
+	if (ret)
+		return ret;
+
+	/* Receive pkt hdr. */
+	pkt = (u32 *)&hdr;
+	len = sizeof(hdr) / sizeof(u32);
+	for (i = 0; i < len; i++) {
+		pkt[i] = cmc_io_rd(cmc_mbx,
+			cmc_mbx->mbx_offset + i * sizeof(u32));
+	}
+
+	pkt = (u32 *)&cmc_mbx->pkt;
+	len = CMC_PKT_SZ(&hdr);
+	if (hdr.payload_sz == 0 || len > CMC_PKT_MAX_SZ) {
+		xrt_err(cmc_mbx->pdev, "read invalid CMC packet");
+		return -EINVAL;
+	}
+
+	/* Load pkt data from mailbox on HW. */
+	for (i = 0; i < len; i++) {
+		pkt[i] = cmc_io_rd(cmc_mbx,
+			cmc_mbx->mbx_offset + i * sizeof(u32));
+	}
+
+	return ret;
+}
+
+int cmc_mailbox_recv_packet(struct platform_device *pdev, int generation,
+	char *buf, size_t *len)
+{
+	int ret;
+	struct xrt_cmc_mbx *cmc_mbx = cmc_pdev2mbx(pdev);
+
+	if (cmc_mbx == NULL)
+		return -EINVAL;
+
+	if (cmc_mbx->generation != generation) {
+		xrt_err(cmc_mbx->pdev, "stale generation number passed in");
+		return -EINVAL;
+	}
+
+	mutex_lock(&cmc_mbx->lock);
+
+	ret = cmc_mailbox_pkt_read(cmc_mbx);
+	if (ret) {
+		mutex_unlock(&cmc_mbx->lock);
+		return ret;
+	}
+	if (cmc_mbx->pkt.hdr.payload_sz > *len) {
+		xrt_err(cmc_mbx->pdev,
+			"packet size (0x%x) exceeds buf size (0x%lx)",
+			cmc_mbx->pkt.hdr.payload_sz, *len);
+		mutex_unlock(&cmc_mbx->lock);
+		return -E2BIG;
+	}
+	memcpy(buf, cmc_mbx->pkt.data, cmc_mbx->pkt.hdr.payload_sz);
+	*len = cmc_mbx->pkt.hdr.payload_sz;
+
+	mutex_unlock(&cmc_mbx->lock);
+	return 0;
+}
+
+int cmc_mailbox_send_packet(struct platform_device *pdev, int generation,
+	u8 op, const char *buf, size_t len)
+{
+	int ret;
+	struct xrt_cmc_mbx *cmc_mbx = cmc_pdev2mbx(pdev);
+
+	if (cmc_mbx == NULL)
+		return -ENODEV;
+
+	if (cmc_mbx->generation != generation) {
+		xrt_err(cmc_mbx->pdev, "stale generation number passed in");
+		return -EINVAL;
+	}
+
+	if (len > CMC_PKT_MAX_PAYLOAD_SZ_IN_BYTES) {
+		xrt_err(cmc_mbx->pdev,
+			"packet size (0x%lx) exceeds max size (0x%lx)",
+			len, CMC_PKT_MAX_PAYLOAD_SZ_IN_BYTES);
+		return -E2BIG;
+	}
+
+	mutex_lock(&cmc_mbx->lock);
+
+	memset(&cmc_mbx->pkt, 0, sizeof(struct cmc_pkt));
+	cmc_mbx->pkt.hdr.op = op;
+	cmc_mbx->pkt.hdr.payload_sz = len;
+	if (buf)
+		memcpy(cmc_mbx->pkt.data, buf, len);
+	ret = cmc_mailbox_pkt_write(cmc_mbx);
+
+	mutex_unlock(&cmc_mbx->lock);
+
+	return ret;
+}
+
+int cmc_mailbox_acquire(struct platform_device *pdev)
+{
+	struct xrt_cmc_mbx *cmc_mbx = cmc_pdev2mbx(pdev);
+
+	if (cmc_mbx == NULL)
+		return -EINVAL;
+
+	if (down_killable(&cmc_mbx->sem)) {
+		xrt_info(cmc_mbx->pdev, "giving up on acquiring CMC mailbox");
+		return -ERESTARTSYS;
+	}
+
+	return cmc_mbx->generation;
+}
+
+void cmc_mailbox_release(struct platform_device *pdev, int generation)
+{
+	struct xrt_cmc_mbx *cmc_mbx = cmc_pdev2mbx(pdev);
+
+	if (cmc_mbx == NULL)
+		return;
+
+	if (cmc_mbx->generation != generation) {
+		xrt_err(cmc_mbx->pdev, "stale generation number passed in");
+		return;
+	}
+
+	/*
+	 * A hold is released, bump up generation number
+	 * to invalidate the previous hold.
+	 */
+	cmc_mbx->generation++;
+	up(&cmc_mbx->sem);
+}
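+
+/*
+ * Typical caller sequence (sketch; cmc_update_sc_firmware() in
+ * xrt-cmc-sc.c is a real user):
+ *
+ *	gen = cmc_mailbox_acquire(pdev);
+ *	if (gen < 0)
+ *		return gen;
+ *	ret = cmc_mailbox_send_packet(pdev, gen, op, buf, len);
+ *	cmc_mailbox_release(pdev, gen);
+ */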
+
+size_t cmc_mailbox_max_payload(struct platform_device *pdev)
+{
+	return CMC_PKT_MAX_PAYLOAD_SZ_IN_BYTES;
+}
+
+void cmc_mailbox_remove(struct platform_device *pdev)
+{
+	/* Nothing to do */
+}
+
+int cmc_mailbox_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl)
+{
+	struct xrt_cmc_mbx *cmc_mbx;
+
+	cmc_mbx = devm_kzalloc(DEV(pdev), sizeof(*cmc_mbx), GFP_KERNEL);
+	if (!cmc_mbx)
+		return -ENOMEM;
+
+	cmc_mbx->pdev = pdev;
+	/* Obtain register maps we need to start/stop CMC. */
+	cmc_mbx->reg_io = regmaps[IO_REG];
+	mutex_init(&cmc_mbx->lock);
+	sema_init(&cmc_mbx->sem, 1);
+	cmc_mbx->mbx_offset = cmc_io_rd(cmc_mbx, CMC_REG_IO_MBX_OFFSET);
+	if (cmc_mbx->mbx_offset == 0) {
+		xrt_err(cmc_mbx->pdev, "CMC mailbox is not available");
+		goto done;
+	}
+	xrt_info(cmc_mbx->pdev, "CMC mailbox offset is 0x%x",
+		cmc_mbx->mbx_offset);
+
+	*hdl = cmc_mbx;
+	return 0;
+done:
+	cmc_mailbox_remove(pdev);
+	devm_kfree(DEV(pdev), cmc_mbx);
+	return -ENODEV;
+}
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c
new file mode 100644
index 000000000000..c5af4f08f4d2
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c
@@ -0,0 +1,361 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/uaccess.h>
+#include "xrt-subdev.h"
+#include "xrt-cmc-impl.h"
+
+#define	CMC_CORE_SUPPORT_NOTUPGRADABLE	0x0c010004
+
+enum sc_mode {
+	CMC_SC_UNKNOWN = 0,
+	CMC_SC_NORMAL,
+	CMC_SC_BSL_MODE_UNSYNCED,
+	CMC_SC_BSL_MODE_SYNCED,
+	CMC_SC_BSL_MODE_SYNCED_SC_NOT_UPGRADABLE,
+	CMC_SC_NORMAL_MODE_SC_NOT_UPGRADABLE
+};
+
+struct cmc_pkt_payload_image_end {
+	u32 BSL_jump_addr;
+};
+
+struct cmc_pkt_payload_sector_start {
+	u32 addr;
+	u32 size;
+	u8 data[1];
+};
+
+struct cmc_pkt_payload_sector_data {
+	u8 data[1];
+};
+
+struct xrt_cmc_sc {
+	struct platform_device *pdev;
+	struct cmc_reg_map reg_io;
+	bool sc_fw_erased;
+	int mbx_generation;
+	size_t mbx_max_payload_sz;
+};
+
+static inline void cmc_io_wr(struct xrt_cmc_sc *cmc_sc, u32 off, u32 val)
+{
+	iowrite32(val, cmc_sc->reg_io.crm_addr + off);
+}
+
+static inline u32 cmc_io_rd(struct xrt_cmc_sc *cmc_sc, u32 off)
+{
+	return ioread32(cmc_sc->reg_io.crm_addr + off);
+}
+
+static bool is_sc_ready(struct xrt_cmc_sc *cmc_sc, bool quiet)
+{
+	union cmc_status st;
+
+	st.status_val = cmc_io_rd(cmc_sc, CMC_REG_IO_STATUS);
+	if (st.status.sc_mode == CMC_SC_NORMAL)
+		return true;
+
+	if (!quiet) {
+		xrt_err(cmc_sc->pdev, "SC is not ready, state=%d",
+			st.status.sc_mode);
+	}
+	return false;
+}
+
+static bool is_sc_fixed(struct xrt_cmc_sc *cmc_sc)
+{
+	union cmc_status st;
+	u32 cmc_core_version = cmc_io_rd(cmc_sc, CMC_REG_IO_CORE_VERSION);
+
+	st.status_val = cmc_io_rd(cmc_sc, CMC_REG_IO_STATUS);
+
+	if (cmc_core_version >= CMC_CORE_SUPPORT_NOTUPGRADABLE &&
+	    !st.status.invalid_sc &&
+	    (st.status.sc_mode == CMC_SC_BSL_MODE_SYNCED_SC_NOT_UPGRADABLE ||
+	     st.status.sc_mode == CMC_SC_NORMAL_MODE_SC_NOT_UPGRADABLE))
+		return true;
+
+	return false;
+}
+
+static int cmc_erase_sc_firmware(struct xrt_cmc_sc *cmc_sc)
+{
+	int ret = 0;
+
+	if (cmc_sc->sc_fw_erased)
+		return 0;
+
+	xrt_info(cmc_sc->pdev, "erasing SC firmware...");
+	ret = cmc_mailbox_send_packet(cmc_sc->pdev, cmc_sc->mbx_generation,
+		CMC_MBX_PKT_OP_MSP432_ERASE_FW, NULL, 0);
+	if (ret == 0)
+		cmc_sc->sc_fw_erased = true;
+	return ret;
+}
+
+static int cmc_write_sc_firmware_section(struct xrt_cmc_sc *cmc_sc,
+	loff_t start, size_t n, const char *buf)
+{
+	int ret = 0;
+	size_t sz, thissz, pktsize;
+	void *pkt;
+	struct cmc_pkt_payload_sector_start *start_payload;
+	struct cmc_pkt_payload_sector_data *data_payload;
+	u8 pkt_op;
+
+	xrt_info(cmc_sc->pdev, "writing %ld bytes @0x%llx", n, start);
+
+	if (n == 0)
+		return 0;
+
+	BUG_ON(!cmc_sc->sc_fw_erased);
+
+	pkt = vzalloc(cmc_sc->mbx_max_payload_sz);
+	if (!pkt)
+		return -ENOMEM;
+
+	for (sz = 0; ret == 0 && sz < n; sz += thissz) {
+		if (sz == 0) {
+			/* First packet for the section. */
+			pkt_op = CMC_MBX_PKT_OP_MSP432_SEC_START;
+			start_payload = pkt;
+			start_payload->addr = start;
+			start_payload->size = n;
+			thissz = cmc_sc->mbx_max_payload_sz - offsetof(
+				struct cmc_pkt_payload_sector_start, data);
+			thissz = min(thissz, n - sz);
+			memcpy(start_payload->data, buf + sz, thissz);
+			pktsize = thissz + offsetof(
+				struct cmc_pkt_payload_sector_start, data);
+		} else {
+			pkt_op = CMC_MBX_PKT_OP_MSP432_SEC_DATA;
+			data_payload = pkt;
+			thissz = cmc_sc->mbx_max_payload_sz - offsetof(
+				struct cmc_pkt_payload_sector_data, data);
+			thissz = min(thissz, n - sz);
+			memcpy(data_payload->data, buf + sz, thissz);
+			pktsize = thissz + offsetof(
+				struct cmc_pkt_payload_sector_data, data);
+		}
+		ret = cmc_mailbox_send_packet(cmc_sc->pdev,
+			cmc_sc->mbx_generation, pkt_op, pkt, pktsize);
+	}
+
+	return ret;
+}
+
+static int
+cmc_boot_sc(struct xrt_cmc_sc *cmc_sc, u32 jump_addr)
+{
+	int ret = 0;
+	struct cmc_pkt_payload_image_end pkt = { 0 };
+
+	xrt_info(cmc_sc->pdev, "rebooting SC @0x%x", jump_addr);
+
+	BUG_ON(!cmc_sc->sc_fw_erased);
+
+	/* Mark that the new SC firmware has been installed. */
+	cmc_sc->sc_fw_erased = false;
+
+	/* Try booting it up. */
+	pkt.BSL_jump_addr = jump_addr;
+	ret = cmc_mailbox_send_packet(cmc_sc->pdev, cmc_sc->mbx_generation,
+		CMC_MBX_PKT_OP_MSP432_IMAGE_END, (char *)&pkt, sizeof(pkt));
+	if (ret)
+		return ret;
+
+	/* Wait for SC to reboot */
+	CMC_LONG_WAIT(is_sc_ready(cmc_sc, true));
+	if (!is_sc_ready(cmc_sc, false))
+		ret = -ETIMEDOUT;
+
+	return ret;
+}
+
+/*
+ * Write SC firmware image data at specified location.
+ */
+ssize_t cmc_update_sc_firmware(struct file *file,
+	const char __user *ubuf, size_t n, loff_t *off)
+{
+	u32 jump_addr = 0;
+	struct xrt_cmc_sc *cmc_sc = file->private_data;
+	/* Special offset for writing SC's BSL jump address. */
+	const loff_t jump_offset = 0xffffffff;
+	ssize_t ret = 0;
+	u8 *kbuf;
+	bool need_refresh = false;
+
+	/* Sanity check input 'n' */
+	if (n == 0 || n > jump_offset || n > 100 * 1024 * 1024)
+		return -EINVAL;
+
+	kbuf = vmalloc(n);
+	if (kbuf == NULL)
+		return -ENOMEM;
+	if (copy_from_user(kbuf, ubuf, n)) {
+		vfree(kbuf);
+		return -EFAULT;
+	}
+
+	cmc_sc->mbx_generation = cmc_mailbox_acquire(cmc_sc->pdev);
+	if (cmc_sc->mbx_generation < 0) {
+		vfree(kbuf);
+		return -ENODEV;
+	}
+
+	ret = cmc_erase_sc_firmware(cmc_sc);
+	if (ret) {
+		xrt_err(cmc_sc->pdev, "can't erase SC firmware");
+	} else if (*off == jump_offset) {
+		/*
+		 * A write to jump_offset causes the SC to reboot and jump
+		 * to the address passed in.
+		 */
+		if (n != sizeof(jump_addr)) {
+			xrt_err(cmc_sc->pdev, "invalid jump addr size");
+			ret = -EINVAL;
+		} else {
+			jump_addr = *(u32 *)kbuf;
+			ret = cmc_boot_sc(cmc_sc, jump_addr);
+			/* Need to reload board info after SC image update */
+			need_refresh = true;
+		}
+	} else {
+		ret = cmc_write_sc_firmware_section(cmc_sc, *off, n, kbuf);
+	}
+
+	cmc_mailbox_release(cmc_sc->pdev, cmc_sc->mbx_generation);
+
+	if (need_refresh)
+		(void) cmc_refresh_board_info(cmc_sc->pdev);
+
+	vfree(kbuf);
+	if (ret) {
+		cmc_sc->sc_fw_erased = false;
+		return ret;
+	}
+
+	*off += n;
+	return n;
+}
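+
+/*
+ * Expected user-space flow (sketch): for each firmware section, seek
+ * to its load address and write the section bytes; finally, seek to
+ * the special offset 0xffffffff and write the 4-byte BSL jump address,
+ * which reboots the SC into the new image.
+ */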
+
+/*
+ * Only allow one client at a time.
+ */
+int cmc_sc_open(struct inode *inode, struct file *file)
+{
+	struct platform_device *pdev = xrt_devnode_open_excl(inode);
+
+	if (!pdev)
+		return -EBUSY;
+
+	file->private_data = cmc_pdev2sc(pdev);
+	return 0;
+}
+
+int cmc_sc_close(struct inode *inode, struct file *file)
+{
+	struct xrt_cmc_sc *cmc_sc = file->private_data;
+
+	if (!cmc_sc)
+		return -EINVAL;
+
+	file->private_data = NULL;
+	xrt_devnode_close(inode);
+	return 0;
+}
+
+loff_t cmc_sc_llseek(struct file *filp, loff_t off, int whence)
+{
+	loff_t npos;
+
+	switch (whence) {
+	case SEEK_SET:
+		npos = off;
+		break;
+	case SEEK_CUR:
+		npos = filp->f_pos + off;
+		break;
+	case SEEK_END: /* no need to support */
+	default:
+		return -EINVAL;
+	}
+	if (npos < 0)
+		return -EINVAL;
+
+	filp->f_pos = npos;
+	return npos;
+}
+
+static ssize_t sc_is_fixed_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_cmc_sc *cmc_sc = cmc_pdev2sc(pdev);
+
+	return sprintf(buf, "%d\n", is_sc_fixed(cmc_sc));
+}
+static DEVICE_ATTR_RO(sc_is_fixed);
+
+static ssize_t sc_presence_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	return sprintf(buf, "1\n");
+}
+static DEVICE_ATTR_RO(sc_presence);
+
+static struct attribute *cmc_sc_attrs[] = {
+	&dev_attr_sc_is_fixed.attr,
+	&dev_attr_sc_presence.attr,
+	NULL
+};
+
+static struct attribute_group cmc_sc_attr_group = {
+	.attrs = cmc_sc_attrs,
+};
+
+void cmc_sc_remove(struct platform_device *pdev)
+{
+	struct xrt_cmc_sc *cmc_sc = cmc_pdev2sc(pdev);
+
+	if (!cmc_sc)
+		return;
+
+	sysfs_remove_group(&pdev->dev.kobj, &cmc_sc_attr_group);
+}
+
+int cmc_sc_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl)
+{
+	int ret;
+	struct xrt_cmc_sc *cmc_sc;
+
+	cmc_sc = devm_kzalloc(DEV(pdev), sizeof(*cmc_sc), GFP_KERNEL);
+	if (!cmc_sc)
+		return -ENOMEM;
+
+	cmc_sc->pdev = pdev;
+	/* Obtain register maps we need to start/stop CMC. */
+	cmc_sc->reg_io = regmaps[IO_REG];
+	cmc_sc->mbx_max_payload_sz = cmc_mailbox_max_payload(pdev);
+	cmc_sc->mbx_generation = -ENODEV;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &cmc_sc_attr_group);
+	if (ret) {
+		xrt_err(pdev, "create sc attrs failed: %d", ret);
+		goto fail;
+	}
+
+	*hdl = cmc_sc;
+	return 0;
+
+fail:
+	cmc_sc_remove(pdev);
+	devm_kfree(DEV(pdev), cmc_sc);
+	return ret;
+}
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-cmc-sensors.c b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-sensors.c
new file mode 100644
index 000000000000..c76f6d1ed3b4
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-cmc-sensors.c
@@ -0,0 +1,445 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/hwmon.h>
+#include <linux/hwmon-sysfs.h>
+#include "xmgmt-main.h"
+#include "xrt-cmc-impl.h"
+
+#define	CMC_12V_PEX_REG			0x20
+#define	CMC_3V3_PEX_REG			0x2C
+#define	CMC_3V3_AUX_REG			0x38
+#define	CMC_12V_AUX_REG			0x44
+#define	CMC_DDR4_VPP_BTM_REG		0x50
+#define	CMC_SYS_5V5_REG			0x5C
+#define	CMC_VCC1V2_TOP_REG		0x68
+#define	CMC_VCC1V8_REG			0x74
+#define	CMC_VCC0V85_REG			0x80
+#define	CMC_DDR4_VPP_TOP_REG		0x8C
+#define	CMC_MGT0V9AVCC_REG		0x98
+#define	CMC_12V_SW_REG			0xA4
+#define	CMC_MGTAVTT_REG			0xB0
+#define	CMC_VCC1V2_BTM_REG		0xBC
+#define	CMC_12V_PEX_I_IN_REG		0xC8
+#define	CMC_12V_AUX_I_IN_REG		0xD4
+#define	CMC_VCCINT_V_REG		0xE0
+#define	CMC_VCCINT_I_REG		0xEC
+#define	CMC_FPGA_TEMP			0xF8
+#define	CMC_FAN_TEMP_REG		0x104
+#define	CMC_DIMM_TEMP0_REG		0x110
+#define	CMC_DIMM_TEMP1_REG		0x11C
+#define	CMC_DIMM_TEMP2_REG		0x128
+#define	CMC_DIMM_TEMP3_REG		0x134
+#define	CMC_FAN_SPEED_REG		0x164
+#define	CMC_SE98_TEMP0_REG		0x140
+#define	CMC_SE98_TEMP1_REG		0x14C
+#define	CMC_SE98_TEMP2_REG		0x158
+#define	CMC_CAGE_TEMP0_REG		0x170
+#define	CMC_CAGE_TEMP1_REG		0x17C
+#define	CMC_CAGE_TEMP2_REG		0x188
+#define	CMC_CAGE_TEMP3_REG		0x194
+#define	CMC_HBM_TEMP_REG		0x260
+#define	CMC_VCC3V3_REG			0x26C
+#define	CMC_3V3_PEX_I_REG		0x278
+#define	CMC_VCC0V85_I_REG		0x284
+#define	CMC_HBM_1V2_REG			0x290
+#define	CMC_VPP2V5_REG			0x29C
+#define	CMC_VCCINT_BRAM_REG		0x2A8
+#define	CMC_HBM_TEMP2_REG		0x2B4
+#define	CMC_12V_AUX1_REG                0x2C0
+#define	CMC_VCCINT_TEMP_REG             0x2CC
+#define	CMC_3V3_AUX_I_REG               0x2F0
+#define	CMC_HOST_MSG_OFFSET_REG		0x300
+#define	CMC_HOST_MSG_ERROR_REG		0x304
+#define	CMC_HOST_MSG_HEADER_REG		0x308
+#define	CMC_VCC1V2_I_REG                0x314
+#define	CMC_V12_IN_I_REG                0x320
+#define	CMC_V12_IN_AUX0_I_REG           0x32C
+#define	CMC_V12_IN_AUX1_I_REG           0x338
+#define	CMC_VCCAUX_REG                  0x344
+#define	CMC_VCCAUX_PMC_REG              0x350
+#define	CMC_VCCRAM_REG                  0x35C
+#define	XMC_CORE_VERSION_REG		0xC4C
+#define	XMC_OEM_ID_REG                  0xC50
+
+struct xrt_cmc_sensor {
+	struct platform_device *pdev;
+	struct cmc_reg_map reg_io;
+	struct device *hwmon_dev;
+	const char *name;
+};
+
+static inline u32
+cmc_reg_rd(struct xrt_cmc_sensor *cmc_sensor, u32 off)
+{
+	return ioread32(cmc_sensor->reg_io.crm_addr + off);
+}
+
+enum sensor_val_kind {
+	SENSOR_MAX,
+	SENSOR_AVG,
+	SENSOR_INS,
+};
+
+#define	READ_SENSOR(cmc_sensor, off, val_kind)	\
+	(cmc_reg_rd(cmc_sensor, off + sizeof(u32) * val_kind))
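+
+/*
+ * Each sensor exposes three consecutive 32-bit values at its register
+ * offset (max, average, instantaneous), indexed by enum
+ * sensor_val_kind. E.g., READ_SENSOR(s, CMC_12V_PEX_REG, SENSOR_INS)
+ * reads offset 0x20 + 2 * 4 = 0x28.
+ */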
+
+/*
+ * Defining sysfs nodes for HWMON.
+ */
+
+#define	HWMON_INDEX(sensor, val_kind)	(sensor | (val_kind << 24))
+#define	HWMON_INDEX2SENSOR(index)	(index & 0xffffff)
+#define	HWMON_INDEX2VAL_KIND(index)	((index & ~0xffffff) >> 24)
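+
+/*
+ * Example: HWMON_INDEX(CMC_12V_PEX_REG, SENSOR_INS) yields 0x02000020,
+ * i.e. register offset 0x20 in the low 24 bits and value kind 2 in the
+ * top 8 bits. The show routines below decode it with the two macros
+ * above.
+ */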
+
+/* For voltage and current */
+static ssize_t hwmon_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct xrt_cmc_sensor *cmc_sensor = dev_get_drvdata(dev);
+	int index = to_sensor_dev_attr(da)->index;
+	u32 val = READ_SENSOR(cmc_sensor, HWMON_INDEX2SENSOR(index),
+		HWMON_INDEX2VAL_KIND(index));
+
+	return sprintf(buf, "%d\n", val);
+}
+#define	HWMON_VOLT_CURR_GROUP(type, id) hwmon_##type##id##_attrgroup
+#define	HWMON_VOLT_CURR_SYSFS_NODE(type, id, name, sensor)		\
+	static ssize_t type##id##_label(struct device *dev,		\
+		struct device_attribute *attr, char *buf)		\
+	{								\
+		return sprintf(buf, "%s\n", name);			\
+	}								\
+	static SENSOR_DEVICE_ATTR(type##id##_max, 0444, hwmon_show,	\
+		NULL, HWMON_INDEX(sensor, SENSOR_MAX));			\
+	static SENSOR_DEVICE_ATTR(type##id##_average, 0444, hwmon_show,	\
+		NULL, HWMON_INDEX(sensor, SENSOR_AVG));			\
+	static SENSOR_DEVICE_ATTR(type##id##_input, 0444, hwmon_show,	\
+		NULL, HWMON_INDEX(sensor, SENSOR_INS));			\
+	static SENSOR_DEVICE_ATTR(type##id##_label, 0444, type##id##_label,    \
+		NULL, HWMON_INDEX(sensor, SENSOR_INS));			\
+	static struct attribute *hwmon_##type##id##_attributes[] = {	\
+		&sensor_dev_attr_##type##id##_max.dev_attr.attr,	\
+		&sensor_dev_attr_##type##id##_average.dev_attr.attr,	\
+		&sensor_dev_attr_##type##id##_input.dev_attr.attr,	\
+		&sensor_dev_attr_##type##id##_label.dev_attr.attr,	\
+		NULL							\
+	};								\
+	static const struct attribute_group HWMON_VOLT_CURR_GROUP(type, id) = {\
+		.attrs = hwmon_##type##id##_attributes,			\
+	}
+
+/* For fan speed. */
+#define	HWMON_FAN_SPEED_GROUP(id) hwmon_fan##id##_attrgroup
+#define	HWMON_FAN_SPEED_SYSFS_NODE(id, name, sensor)			\
+	static ssize_t fan##id##_label(struct device *dev,		\
+		struct device_attribute *attr, char *buf)		\
+	{								\
+		return sprintf(buf, "%s\n", name);			\
+	}								\
+	static SENSOR_DEVICE_ATTR(fan##id##_input, 0444, hwmon_show,	\
+		NULL, HWMON_INDEX(sensor, SENSOR_INS));			\
+	static SENSOR_DEVICE_ATTR(fan##id##_label, 0444, fan##id##_label,      \
+		NULL, HWMON_INDEX(sensor, SENSOR_INS));			\
+	static struct attribute *hwmon_fan##id##_attributes[] = {	\
+		&sensor_dev_attr_fan##id##_input.dev_attr.attr,		\
+		&sensor_dev_attr_fan##id##_label.dev_attr.attr,		\
+		NULL							\
+	};								\
+	static const struct attribute_group HWMON_FAN_SPEED_GROUP(id) = {      \
+		.attrs = hwmon_fan##id##_attributes,			\
+	}
+
+/* For temperature */
+static ssize_t hwmon_temp_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct xrt_cmc_sensor *cmc_sensor = dev_get_drvdata(dev);
+	int index = to_sensor_dev_attr(da)->index;
+	u32 val = READ_SENSOR(cmc_sensor, HWMON_INDEX2SENSOR(index),
+		HWMON_INDEX2VAL_KIND(index));
+
+	return sprintf(buf, "%d\n", val * 1000);
+}
+#define	HWMON_TEMPERATURE_GROUP(id) hwmon_temp##id##_attrgroup
+#define	HWMON_TEMPERATURE_SYSFS_NODE(id, name, sensor)			\
+	static ssize_t temp##id##_label(struct device *dev,		\
+		struct device_attribute *attr, char *buf)		\
+	{								\
+		return sprintf(buf, "%s\n", name);			\
+	}								\
+	static SENSOR_DEVICE_ATTR(temp##id##_highest, 0444, hwmon_temp_show,   \
+		NULL, HWMON_INDEX(sensor, SENSOR_MAX));			\
+	static SENSOR_DEVICE_ATTR(temp##id##_input, 0444, hwmon_temp_show,     \
+		NULL, HWMON_INDEX(sensor, SENSOR_INS));			\
+	static SENSOR_DEVICE_ATTR(temp##id##_label, 0444, temp##id##_label,    \
+		NULL, HWMON_INDEX(sensor, SENSOR_INS));			\
+	static struct attribute *hwmon_temp##id##_attributes[] = {	\
+		&sensor_dev_attr_temp##id##_highest.dev_attr.attr,	\
+		&sensor_dev_attr_temp##id##_input.dev_attr.attr,	\
+		&sensor_dev_attr_temp##id##_label.dev_attr.attr,	\
+		NULL							\
+	};								\
+	static const struct attribute_group HWMON_TEMPERATURE_GROUP(id) = {    \
+		.attrs = hwmon_temp##id##_attributes,			\
+	}
+
+/* For power */
+static u64 cmc_get_power(struct xrt_cmc_sensor *cmc_sensor,
+	enum sensor_val_kind kind)
+{
+	u32 v_pex, v_aux, v_3v3, c_pex, c_aux, c_3v3;
+	u64 val = 0;
+
+	v_pex = READ_SENSOR(cmc_sensor, CMC_12V_PEX_REG, kind);
+	v_aux = READ_SENSOR(cmc_sensor, CMC_12V_AUX_REG, kind);
+	v_3v3 = READ_SENSOR(cmc_sensor, CMC_3V3_PEX_REG, kind);
+	c_pex = READ_SENSOR(cmc_sensor, CMC_12V_PEX_I_IN_REG, kind);
+	c_aux = READ_SENSOR(cmc_sensor, CMC_12V_AUX_I_IN_REG, kind);
+	c_3v3 = READ_SENSOR(cmc_sensor, CMC_3V3_PEX_I_REG, kind);
+
+	val = (u64)v_pex * c_pex + (u64)v_aux * c_aux + (u64)v_3v3 * c_3v3;
+
+	return val;
+}
+static ssize_t hwmon_power_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct xrt_cmc_sensor *cmc_sensor = dev_get_drvdata(dev);
+	int index = to_sensor_dev_attr(da)->index;
+	u64 val = cmc_get_power(cmc_sensor, HWMON_INDEX2VAL_KIND(index));
+
+	return sprintf(buf, "%lld\n", val);
+}
+#define	HWMON_POWER_GROUP(id) hwmon_power##id##_attrgroup
+#define	HWMON_POWER_SYSFS_NODE(id, name)				\
+	static ssize_t power##id##_label(struct device *dev,		\
+		struct device_attribute *attr, char *buf)		\
+	{								\
+		return sprintf(buf, "%s\n", name);			\
+	}								\
+	static SENSOR_DEVICE_ATTR(power##id##_average, 0444, hwmon_power_show,\
+		NULL, HWMON_INDEX(0, SENSOR_MAX));			\
+	static SENSOR_DEVICE_ATTR(power##id##_input, 0444, hwmon_power_show,  \
+		NULL, HWMON_INDEX(0, SENSOR_INS));			\
+	static SENSOR_DEVICE_ATTR(power##id##_label, 0444, power##id##_label, \
+		NULL, HWMON_INDEX(0, SENSOR_INS));			\
+	static struct attribute *hwmon_power##id##_attributes[] = {	\
+		&sensor_dev_attr_power##id##_average.dev_attr.attr,	\
+		&sensor_dev_attr_power##id##_input.dev_attr.attr,	\
+		&sensor_dev_attr_power##id##_label.dev_attr.attr,	\
+		NULL							\
+	};								\
+	static const struct attribute_group HWMON_POWER_GROUP(id) = {	\
+		.attrs = hwmon_power##id##_attributes,			\
+	}
+
+HWMON_VOLT_CURR_SYSFS_NODE(in, 0, "12V PEX", CMC_12V_PEX_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 1, "12V AUX", CMC_12V_AUX_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 2, "3V3 PEX", CMC_3V3_PEX_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 3, "3V3 AUX", CMC_3V3_AUX_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 4, "5V5 SYS", CMC_SYS_5V5_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 5, "1V2 TOP", CMC_VCC1V2_TOP_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 6, "1V2 BTM", CMC_VCC1V2_BTM_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 7, "1V8 TOP", CMC_VCC1V8_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 8, "12V SW", CMC_12V_SW_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 9, "VCC INT", CMC_VCCINT_V_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 10, "0V9 MGT", CMC_MGT0V9AVCC_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 11, "0V85", CMC_VCC0V85_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 12, "MGT VTT", CMC_MGTAVTT_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 13, "DDR VPP BOTTOM", CMC_DDR4_VPP_BTM_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 14, "DDR VPP TOP", CMC_DDR4_VPP_TOP_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 15, "VCC 3V3", CMC_VCC3V3_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 16, "1V2 HBM", CMC_HBM_1V2_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 17, "2V5 VPP", CMC_VPP2V5_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(in, 18, "VCC INT BRAM", CMC_VCCINT_BRAM_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(curr, 1, "12V PEX Current", CMC_12V_PEX_I_IN_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(curr, 2, "12V AUX Current", CMC_12V_AUX_I_IN_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(curr, 3, "VCC INT Current", CMC_VCCINT_I_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(curr, 4, "3V3 PEX Current", CMC_3V3_PEX_I_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(curr, 5, "VCC 0V85 Current", CMC_VCC0V85_I_REG);
+HWMON_VOLT_CURR_SYSFS_NODE(curr, 6, "3V3 AUX Current", CMC_3V3_AUX_I_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(1, "PCB TOP FRONT", CMC_SE98_TEMP0_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(2, "PCB TOP REAR", CMC_SE98_TEMP1_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(3, "PCB BTM FRONT", CMC_SE98_TEMP2_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(4, "FPGA TEMP", CMC_FPGA_TEMP);
+HWMON_TEMPERATURE_SYSFS_NODE(5, "TCRIT TEMP", CMC_FAN_TEMP_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(6, "DIMM0 TEMP", CMC_DIMM_TEMP0_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(7, "DIMM1 TEMP", CMC_DIMM_TEMP1_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(8, "DIMM2 TEMP", CMC_DIMM_TEMP2_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(9, "DIMM3 TEMP", CMC_DIMM_TEMP3_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(10, "HBM TEMP", CMC_HBM_TEMP_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(11, "QSFP 0", CMC_CAGE_TEMP0_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(12, "QSFP 1", CMC_CAGE_TEMP1_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(13, "QSFP 2", CMC_CAGE_TEMP2_REG);
+HWMON_TEMPERATURE_SYSFS_NODE(14, "QSFP 3", CMC_CAGE_TEMP3_REG);
+HWMON_FAN_SPEED_SYSFS_NODE(1, "FAN SPEED", CMC_FAN_SPEED_REG);
+HWMON_POWER_SYSFS_NODE(1, "POWER");
+
+static const struct attribute_group *hwmon_cmc_attrgroups[] = {
+	&HWMON_VOLT_CURR_GROUP(in, 0),
+	&HWMON_VOLT_CURR_GROUP(in, 1),
+	&HWMON_VOLT_CURR_GROUP(in, 2),
+	&HWMON_VOLT_CURR_GROUP(in, 3),
+	&HWMON_VOLT_CURR_GROUP(in, 4),
+	&HWMON_VOLT_CURR_GROUP(in, 5),
+	&HWMON_VOLT_CURR_GROUP(in, 6),
+	&HWMON_VOLT_CURR_GROUP(in, 7),
+	&HWMON_VOLT_CURR_GROUP(in, 8),
+	&HWMON_VOLT_CURR_GROUP(in, 9),
+	&HWMON_VOLT_CURR_GROUP(in, 10),
+	&HWMON_VOLT_CURR_GROUP(in, 11),
+	&HWMON_VOLT_CURR_GROUP(in, 12),
+	&HWMON_VOLT_CURR_GROUP(in, 13),
+	&HWMON_VOLT_CURR_GROUP(in, 14),
+	&HWMON_VOLT_CURR_GROUP(in, 15),
+	&HWMON_VOLT_CURR_GROUP(in, 16),
+	&HWMON_VOLT_CURR_GROUP(in, 17),
+	&HWMON_VOLT_CURR_GROUP(in, 18),
+	&HWMON_VOLT_CURR_GROUP(curr, 1),
+	&HWMON_VOLT_CURR_GROUP(curr, 2),
+	&HWMON_VOLT_CURR_GROUP(curr, 3),
+	&HWMON_VOLT_CURR_GROUP(curr, 4),
+	&HWMON_VOLT_CURR_GROUP(curr, 5),
+	&HWMON_VOLT_CURR_GROUP(curr, 6),
+	&HWMON_TEMPERATURE_GROUP(1),
+	&HWMON_TEMPERATURE_GROUP(2),
+	&HWMON_TEMPERATURE_GROUP(3),
+	&HWMON_TEMPERATURE_GROUP(4),
+	&HWMON_TEMPERATURE_GROUP(5),
+	&HWMON_TEMPERATURE_GROUP(6),
+	&HWMON_TEMPERATURE_GROUP(7),
+	&HWMON_TEMPERATURE_GROUP(8),
+	&HWMON_TEMPERATURE_GROUP(9),
+	&HWMON_TEMPERATURE_GROUP(10),
+	&HWMON_TEMPERATURE_GROUP(11),
+	&HWMON_TEMPERATURE_GROUP(12),
+	&HWMON_TEMPERATURE_GROUP(13),
+	&HWMON_TEMPERATURE_GROUP(14),
+	&HWMON_FAN_SPEED_GROUP(1),
+	&HWMON_POWER_GROUP(1),
+	NULL
+};
+
+void cmc_sensor_remove(struct platform_device *pdev)
+{
+	struct xrt_cmc_sensor *cmc_sensor =
+		(struct xrt_cmc_sensor *)cmc_pdev2sensor(pdev);
+
+	BUG_ON(cmc_sensor == NULL);
+	if (cmc_sensor->hwmon_dev)
+		xrt_subdev_unregister_hwmon(pdev, cmc_sensor->hwmon_dev);
+	kfree(cmc_sensor->name);
+}
+
+static const char *cmc_get_vbnv(struct xrt_cmc_sensor *cmc_sensor)
+{
+	int ret;
+	const char *vbnv;
+	struct platform_device *mgmt_leaf =
+		xrt_subdev_get_leaf_by_id(cmc_sensor->pdev,
+		XRT_SUBDEV_MGMT_MAIN, PLATFORM_DEVID_NONE);
+
+	if (mgmt_leaf == NULL)
+		return NULL;
+
+	ret = xrt_subdev_ioctl(mgmt_leaf, XRT_MGMT_MAIN_GET_VBNV, &vbnv);
+	(void) xrt_subdev_put_leaf(cmc_sensor->pdev, mgmt_leaf);
+	if (ret)
+		return NULL;
+	return vbnv;
+}
+
+int cmc_sensor_probe(struct platform_device *pdev,
+	struct cmc_reg_map *regmaps, void **hdl)
+{
+	struct xrt_cmc_sensor *cmc_sensor;
+	const char *vbnv;
+
+	cmc_sensor = devm_kzalloc(DEV(pdev), sizeof(*cmc_sensor), GFP_KERNEL);
+	if (!cmc_sensor)
+		return -ENOMEM;
+
+	cmc_sensor->pdev = pdev;
+	/* Obtain register maps we need to read sensor values. */
+	cmc_sensor->reg_io = regmaps[IO_REG];
+
+	cmc_sensor->name = cmc_get_vbnv(cmc_sensor);
+	vbnv = cmc_sensor->name ? cmc_sensor->name : "golden-image";
+	/*
+	 * Make a parent call to ask the root to register. If we registered
+	 * using the platform device, we'd be treated as an ISA device, not
+	 * a PCI device.
+	 */
+	cmc_sensor->hwmon_dev = xrt_subdev_register_hwmon(pdev,
+		vbnv, cmc_sensor, hwmon_cmc_attrgroups);
+	if (cmc_sensor->hwmon_dev == NULL)
+		xrt_err(pdev, "failed to create HWMON device");
+
+	*hdl = cmc_sensor;
+	return 0;
+}
+
+void cmc_sensor_read(struct platform_device *pdev, struct xcl_sensor *s)
+{
+#define	READ_INST_SENSOR(off)	READ_SENSOR(cmc_sensor, off, SENSOR_INS)
+	struct xrt_cmc_sensor *cmc_sensor =
+		(struct xrt_cmc_sensor *)cmc_pdev2sensor(pdev);
+
+	s->vol_12v_pex = READ_INST_SENSOR(CMC_12V_PEX_REG);
+	s->vol_12v_aux = READ_INST_SENSOR(CMC_12V_AUX_REG);
+	s->cur_12v_pex = READ_INST_SENSOR(CMC_12V_PEX_I_IN_REG);
+	s->cur_12v_aux = READ_INST_SENSOR(CMC_12V_AUX_I_IN_REG);
+	s->vol_3v3_pex = READ_INST_SENSOR(CMC_3V3_PEX_REG);
+	s->vol_3v3_aux = READ_INST_SENSOR(CMC_3V3_AUX_REG);
+	s->cur_3v3_aux = READ_INST_SENSOR(CMC_3V3_AUX_I_REG);
+	s->ddr_vpp_btm = READ_INST_SENSOR(CMC_DDR4_VPP_BTM_REG);
+	s->sys_5v5 = READ_INST_SENSOR(CMC_SYS_5V5_REG);
+	s->top_1v2 = READ_INST_SENSOR(CMC_VCC1V2_TOP_REG);
+	s->vol_1v8 = READ_INST_SENSOR(CMC_VCC1V8_REG);
+	s->vol_0v85 = READ_INST_SENSOR(CMC_VCC0V85_REG);
+	s->ddr_vpp_top = READ_INST_SENSOR(CMC_DDR4_VPP_TOP_REG);
+	s->mgt0v9avcc = READ_INST_SENSOR(CMC_MGT0V9AVCC_REG);
+	s->vol_12v_sw = READ_INST_SENSOR(CMC_12V_SW_REG);
+	s->mgtavtt = READ_INST_SENSOR(CMC_MGTAVTT_REG);
+	s->vcc1v2_btm = READ_INST_SENSOR(CMC_VCC1V2_BTM_REG);
+	s->fpga_temp = READ_INST_SENSOR(CMC_FPGA_TEMP);
+	s->fan_temp = READ_INST_SENSOR(CMC_FAN_TEMP_REG);
+	s->fan_rpm = READ_INST_SENSOR(CMC_FAN_SPEED_REG);
+	s->dimm_temp0 = READ_INST_SENSOR(CMC_DIMM_TEMP0_REG);
+	s->dimm_temp1 = READ_INST_SENSOR(CMC_DIMM_TEMP1_REG);
+	s->dimm_temp2 = READ_INST_SENSOR(CMC_DIMM_TEMP2_REG);
+	s->dimm_temp3 = READ_INST_SENSOR(CMC_DIMM_TEMP3_REG);
+	s->vccint_vol = READ_INST_SENSOR(CMC_VCCINT_V_REG);
+	s->vccint_curr = READ_INST_SENSOR(CMC_VCCINT_I_REG);
+	s->se98_temp0 = READ_INST_SENSOR(CMC_SE98_TEMP0_REG);
+	s->se98_temp1 = READ_INST_SENSOR(CMC_SE98_TEMP1_REG);
+	s->se98_temp2 = READ_INST_SENSOR(CMC_SE98_TEMP2_REG);
+	s->cage_temp0 = READ_INST_SENSOR(CMC_CAGE_TEMP0_REG);
+	s->cage_temp1 = READ_INST_SENSOR(CMC_CAGE_TEMP1_REG);
+	s->cage_temp2 = READ_INST_SENSOR(CMC_CAGE_TEMP2_REG);
+	s->cage_temp3 = READ_INST_SENSOR(CMC_CAGE_TEMP3_REG);
+	s->hbm_temp0 = READ_INST_SENSOR(CMC_HBM_TEMP_REG);
+	s->cur_3v3_pex = READ_INST_SENSOR(CMC_3V3_PEX_I_REG);
+	s->cur_0v85 = READ_INST_SENSOR(CMC_VCC0V85_I_REG);
+	s->vol_3v3_vcc = READ_INST_SENSOR(CMC_VCC3V3_REG);
+	s->vol_1v2_hbm = READ_INST_SENSOR(CMC_HBM_1V2_REG);
+	s->vol_2v5_vpp = READ_INST_SENSOR(CMC_VPP2V5_REG);
+	s->vccint_bram = READ_INST_SENSOR(CMC_VCCINT_BRAM_REG);
+	s->version = cmc_reg_rd(cmc_sensor, XMC_CORE_VERSION_REG);
+	s->oem_id = cmc_reg_rd(cmc_sensor, XMC_OEM_ID_REG);
+	s->vccint_temp = READ_INST_SENSOR(CMC_VCCINT_TEMP_REG);
+	s->vol_12v_aux1 = READ_INST_SENSOR(CMC_12V_AUX1_REG);
+	s->vol_vcc1v2_i = READ_INST_SENSOR(CMC_VCC1V2_I_REG);
+	s->vol_v12_in_i = READ_INST_SENSOR(CMC_V12_IN_I_REG);
+	s->vol_v12_in_aux0_i = READ_INST_SENSOR(CMC_V12_IN_AUX0_I_REG);
+	s->vol_v12_in_aux1_i = READ_INST_SENSOR(CMC_V12_IN_AUX1_I_REG);
+	s->vol_vccaux = READ_INST_SENSOR(CMC_VCCAUX_REG);
+	s->vol_vccaux_pmc = READ_INST_SENSOR(CMC_VCCAUX_PMC_REG);
+	s->vol_vccram = READ_INST_SENSOR(CMC_VCCRAM_REG);
+}
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-cmc.c b/drivers/fpga/alveo/lib/subdevs/xrt-cmc.c
new file mode 100644
index 000000000000..476c91650cfd
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-cmc.c
@@ -0,0 +1,239 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA CMC Leaf Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-cmc-impl.h"
+#include "xrt-cmc.h"
+#include <linux/xrt/mailbox_proto.h>
+
+#define	XRT_CMC "xrt_cmc"
+
+static struct xrt_iores_map cmc_iores_id_map[] = {
+	{ NODE_CMC_REG, IO_REG},
+	{ NODE_CMC_RESET, IO_GPIO},
+	{ NODE_CMC_FW_MEM, IO_IMAGE_MGMT},
+	{ NODE_CMC_MUTEX, IO_MUTEX},
+};
+
+struct xrt_cmc {
+	struct platform_device *pdev;
+	struct cmc_reg_map regs[NUM_IOADDR];
+	void *ctrl_hdl;
+	void *sensor_hdl;
+	void *mbx_hdl;
+	void *bdinfo_hdl;
+	void *sc_hdl;
+};
+
+void *cmc_pdev2sc(struct platform_device *pdev)
+{
+	struct xrt_cmc *cmc = platform_get_drvdata(pdev);
+
+	return cmc->sc_hdl;
+}
+
+void *cmc_pdev2bdinfo(struct platform_device *pdev)
+{
+	struct xrt_cmc *cmc = platform_get_drvdata(pdev);
+
+	return cmc->bdinfo_hdl;
+}
+
+void *cmc_pdev2ctrl(struct platform_device *pdev)
+{
+	struct xrt_cmc *cmc = platform_get_drvdata(pdev);
+
+	return cmc->ctrl_hdl;
+}
+
+void *cmc_pdev2sensor(struct platform_device *pdev)
+{
+	struct xrt_cmc *cmc = platform_get_drvdata(pdev);
+
+	return cmc->sensor_hdl;
+}
+
+void *cmc_pdev2mbx(struct platform_device *pdev)
+{
+	struct xrt_cmc *cmc = platform_get_drvdata(pdev);
+
+	return cmc->mbx_hdl;
+}
+
+static int cmc_map_io(struct xrt_cmc *cmc, struct resource *res)
+{
+	int	id;
+
+	id = xrt_md_res_name2id(cmc_iores_id_map, ARRAY_SIZE(cmc_iores_id_map),
+		res->name);
+	if (id < 0) {
+		xrt_err(cmc->pdev, "resource %s ignored", res->name);
+		return -EINVAL;
+	}
+	if (cmc->regs[id].crm_addr) {
+		xrt_err(cmc->pdev, "resource %s already mapped", res->name);
+		return -EINVAL;
+	}
+	cmc->regs[id].crm_addr = ioremap(res->start, resource_size(res));
+	if (!cmc->regs[id].crm_addr) {
+		xrt_err(cmc->pdev, "resource %s map failed", res->name);
+		return -EIO;
+	}
+	cmc->regs[id].crm_size = resource_size(res);
+
+	return 0;
+}
+
+static int cmc_remove(struct platform_device *pdev)
+{
+	int i;
+	struct xrt_cmc *cmc = platform_get_drvdata(pdev);
+
+	xrt_info(pdev, "leaving...");
+
+	cmc_sc_remove(pdev);
+	cmc_bdinfo_remove(pdev);
+	cmc_mailbox_remove(pdev);
+	cmc_sensor_remove(pdev);
+	cmc_ctrl_remove(pdev);
+
+	for (i = 0; i < NUM_IOADDR; i++) {
+		if (cmc->regs[i].crm_addr == NULL)
+			continue;
+		iounmap(cmc->regs[i].crm_addr);
+	}
+
+	return 0;
+}
+
+static int cmc_probe(struct platform_device *pdev)
+{
+	struct xrt_cmc *cmc;
+	struct resource *res;
+	int i = 0;
+	int ret = 0;
+
+	xrt_info(pdev, "probing...");
+
+	cmc = devm_kzalloc(DEV(pdev), sizeof(*cmc), GFP_KERNEL);
+	if (!cmc)
+		return -ENOMEM;
+
+	cmc->pdev = pdev;
+	platform_set_drvdata(pdev, cmc);
+
+	for (i = 0; ; i++) {
+		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+		if (!res)
+			break;
+		(void) cmc_map_io(cmc, res);
+	}
+	for (i = 0; i < NUM_IOADDR; i++) {
+		if (cmc->regs[i].crm_addr == NULL)
+			break;
+	}
+	if (i != NUM_IOADDR) {
+		xrt_err(cmc->pdev, "not all needed resources are found");
+		ret = -EINVAL;
+		goto done;
+	}
+
+	ret = cmc_ctrl_probe(cmc->pdev, cmc->regs, &cmc->ctrl_hdl);
+	if (ret)
+		goto done;
+
+	/* Non-critical part of init can fail. */
+	(void) cmc_sensor_probe(cmc->pdev, cmc->regs, &cmc->sensor_hdl);
+	(void) cmc_mailbox_probe(cmc->pdev, cmc->regs, &cmc->mbx_hdl);
+	(void) cmc_bdinfo_probe(cmc->pdev, cmc->regs, &cmc->bdinfo_hdl);
+	(void) cmc_sc_probe(cmc->pdev, cmc->regs, &cmc->sc_hdl);
+
+	return 0;
+
+done:
+	(void) cmc_remove(pdev);
+	return ret;
+}
+
+static int
+xrt_cmc_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_cmc *cmc = platform_get_drvdata(pdev);
+	int ret = -ENOENT;
+
+	switch (cmd) {
+	case XRT_CMC_READ_BOARD_INFO: {
+		struct xcl_board_info *i = (struct xcl_board_info *)arg;
+
+		if (cmc->bdinfo_hdl)
+			ret = cmc_bdinfo_read(pdev, i);
+		break;
+	}
+	case XRT_CMC_READ_SENSORS: {
+		struct xcl_sensor *s = (struct xcl_sensor *)arg;
+
+		if (cmc->sensor_hdl) {
+			cmc_sensor_read(pdev, s);
+			ret = 0;
+		}
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
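+
+/*
+ * Example: a peer leaf reads the sensor block through this ioctl
+ * (sketch; "cmc_leaf" is a hypothetical handle to this device):
+ *
+ *	struct xcl_sensor s;
+ *	int rc = xrt_subdev_ioctl(cmc_leaf, XRT_CMC_READ_SENSORS, &s);
+ */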
+
+struct xrt_subdev_endpoints xrt_cmc_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []) {
+			{ .ep_name = NODE_CMC_REG },
+			{ .ep_name = NODE_CMC_RESET },
+			{ .ep_name = NODE_CMC_MUTEX },
+			{ .ep_name = NODE_CMC_FW_MEM },
+			{ NULL },
+		},
+		.xse_min_ep = 4,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_cmc_data = {
+	.xsd_file_ops = {
+		.xsf_ops = {
+			.owner = THIS_MODULE,
+			.open = cmc_sc_open,
+			.release = cmc_sc_close,
+			.llseek = cmc_sc_llseek,
+			.write = cmc_update_sc_firmware,
+		},
+		.xsf_dev_name = "cmc",
+	},
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_cmc_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id cmc_id_table[] = {
+	{ XRT_CMC, (kernel_ulong_t)&xrt_cmc_data },
+	{ },
+};
+
+struct platform_driver xrt_cmc_driver = {
+	.driver	= {
+		.name    = XRT_CMC,
+	},
+	.probe   = cmc_probe,
+	.remove  = cmc_remove,
+	.id_table = cmc_id_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-gpio.c b/drivers/fpga/alveo/lib/subdevs/xrt-gpio.c
new file mode 100644
index 000000000000..72c9c4ab970f
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-gpio.c
@@ -0,0 +1,198 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA GPIO Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-gpio.h"
+
+#define XRT_GPIO "xrt_gpio"
+
+struct xrt_name_id {
+	char *ep_name;
+	int id;
+};
+
+static struct xrt_name_id name_id[XRT_GPIO_MAX] = {
+	{ NODE_BLP_ROM, XRT_GPIO_ROM_UUID },
+	{ NODE_GOLDEN_VER, XRT_GPIO_GOLDEN_VER },
+};
+
+struct xrt_gpio {
+	struct platform_device	*pdev;
+	void		__iomem *base_addrs[XRT_GPIO_MAX];
+	ulong			sizes[XRT_GPIO_MAX];
+};
+
+static int xrt_gpio_name2id(struct xrt_gpio *gpio, const char *name)
+{
+	int	i;
+
+	for (i = 0; i < XRT_GPIO_MAX && name_id[i].ep_name; i++) {
+		if (!strncmp(name_id[i].ep_name, name,
+		    strlen(name_id[i].ep_name) + 1))
+			return name_id[i].id;
+	}
+
+	return -EINVAL;
+}
+
+static int
+xrt_gpio_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_gpio	*gpio;
+	int			ret = 0;
+
+	gpio = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_GPIO_READ: {
+		struct xrt_gpio_ioctl_rw	*rw_arg = arg;
+		u32				*p_src, *p_dst, i;
+
+		if (rw_arg->xgir_len & 0x3) {
+			xrt_err(pdev, "invalid len %d", rw_arg->xgir_len);
+			return -EINVAL;
+		}
+
+		if (rw_arg->xgir_id >= XRT_GPIO_MAX) {
+			xrt_err(pdev, "invalid id %d", rw_arg->xgir_id);
+			return -EINVAL;
+		}
+
+		p_src = gpio->base_addrs[rw_arg->xgir_id];
+		if (!p_src) {
+			xrt_err(pdev, "io not found, id %d",
+				rw_arg->xgir_id);
+			return -EINVAL;
+		}
+		if (rw_arg->xgir_offset + rw_arg->xgir_len >
+		    gpio->sizes[rw_arg->xgir_id]) {
+			xrt_err(pdev, "invalid argument, off %d, len %d",
+				rw_arg->xgir_offset, rw_arg->xgir_len);
+			return -EINVAL;
+		}
+		p_dst = rw_arg->xgir_buf;
+		for (i = 0; i < rw_arg->xgir_len / sizeof(u32); i++) {
+			u32 val = ioread32(p_src + rw_arg->xgir_offset + i);
+
+			memcpy(p_dst + i, &val, sizeof(u32));
+		}
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
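+
+/*
+ * Example: read the first 16 bytes of the ROM UUID region (sketch;
+ * "gpio_leaf" is a hypothetical handle to this device and the length
+ * must be a multiple of 4):
+ *
+ *	u32 buf[4];
+ *	struct xrt_gpio_ioctl_rw rw = {
+ *		.xgir_id = XRT_GPIO_ROM_UUID,
+ *		.xgir_buf = buf,
+ *		.xgir_len = sizeof(buf),
+ *		.xgir_offset = 0,
+ *	};
+ *	int rc = xrt_subdev_ioctl(gpio_leaf, XRT_GPIO_READ, &rw);
+ */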
+
+static int xrt_gpio_remove(struct platform_device *pdev)
+{
+	struct xrt_gpio	*gpio;
+	int			i;
+
+	gpio = platform_get_drvdata(pdev);
+
+	for (i = 0; i < XRT_GPIO_MAX; i++) {
+		if (gpio->base_addrs[i])
+			iounmap(gpio->base_addrs[i]);
+	}
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, gpio);
+
+	return 0;
+}
+
+static int xrt_gpio_probe(struct platform_device *pdev)
+{
+	struct xrt_gpio	*gpio;
+	int			i, id, ret = 0;
+	struct resource		*res;
+
+	gpio = devm_kzalloc(&pdev->dev, sizeof(*gpio), GFP_KERNEL);
+	if (!gpio)
+		return -ENOMEM;
+
+	gpio->pdev = pdev;
+	platform_set_drvdata(pdev, gpio);
+
+	xrt_info(pdev, "probing...");
+	for (i = 0, res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	    res;
+	    res = platform_get_resource(pdev, IORESOURCE_MEM, ++i)) {
+		id = xrt_gpio_name2id(gpio, res->name);
+		if (id < 0) {
+			xrt_err(pdev, "ep %s not found", res->name);
+			continue;
+		}
+		gpio->base_addrs[id] = ioremap(res->start, resource_size(res));
+		if (!gpio->base_addrs[id]) {
+			xrt_err(pdev, "map base failed %pR", res);
+			ret = -EIO;
+			goto failed;
+		}
+		gpio->sizes[id] = resource_size(res);
+	}
+
+failed:
+	if (ret)
+		xrt_gpio_remove(pdev);
+
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_gpio_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			/* add name if ep is in same partition */
+			{ .ep_name = NODE_BLP_ROM },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_GOLDEN_VER },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	/* adding ep bundle generates gpio device instance */
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_gpio_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_gpio_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_gpio_table[] = {
+	{ XRT_GPIO, (kernel_ulong_t)&xrt_gpio_data },
+	{ },
+};
+
+struct platform_driver xrt_gpio_driver = {
+	.driver = {
+		.name = XRT_GPIO,
+	},
+	.probe = xrt_gpio_probe,
+	.remove = xrt_gpio_remove,
+	.id_table = xrt_gpio_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-icap.c b/drivers/fpga/alveo/lib/subdevs/xrt-icap.c
new file mode 100644
index 000000000000..636429d665c3
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-icap.c
@@ -0,0 +1,306 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA ICAP Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-icap.h"
+#include "xrt-xclbin.h"
+
+#define XRT_ICAP "xrt_icap"
+
+#define	ICAP_ERR(icap, fmt, arg...)	\
+	xrt_err((icap)->pdev, fmt "\n", ##arg)
+#define	ICAP_WARN(icap, fmt, arg...)	\
+	xrt_warn((icap)->pdev, fmt "\n", ##arg)
+#define	ICAP_INFO(icap, fmt, arg...)	\
+	xrt_info((icap)->pdev, fmt "\n", ##arg)
+#define	ICAP_DBG(icap, fmt, arg...)	\
+	xrt_dbg((icap)->pdev, fmt "\n", ##arg)
+
+/*
+ * AXI-HWICAP IP register layout
+ */
+struct icap_reg {
+	u32	ir_rsvd1[7];
+	u32	ir_gier;
+	u32	ir_isr;
+	u32	ir_rsvd2;
+	u32	ir_ier;
+	u32	ir_rsvd3[53];
+	u32	ir_wf;
+	u32	ir_rf;
+	u32	ir_sz;
+	u32	ir_cr;
+	u32	ir_sr;
+	u32	ir_wfv;
+	u32	ir_rfo;
+	u32	ir_asr;
+} __packed;
+
+struct icap {
+	struct platform_device	*pdev;
+	struct icap_reg		*icap_regs;
+	struct mutex		icap_lock;
+
+	unsigned int		idcode;
+};
+
+static inline u32 reg_rd(void __iomem *reg)
+{
+	if (!reg)
+		return -1;
+
+	return ioread32(reg);
+}
+
+static inline void reg_wr(void __iomem *reg, u32 val)
+{
+	if (!reg)
+		return;
+
+	iowrite32(val, reg);
+}
+
+static int wait_for_done(struct icap *icap)
+{
+	u32	w;
+	int	i = 0;
+
+	BUG_ON(!mutex_is_locked(&icap->icap_lock));
+	for (i = 0; i < 10; i++) {
+		udelay(5);
+		w = reg_rd(&icap->icap_regs->ir_sr);
+		ICAP_INFO(icap, "XHWICAP_SR: %x", w);
+		if (w & 0x5)
+			return 0;
+	}
+
+	ICAP_ERR(icap, "bitstream download timeout");
+	return -ETIMEDOUT;
+}
+
+static int icap_write(struct icap *icap, const u32 *word_buf, int size)
+{
+	int i;
+	u32 value = 0;
+
+	for (i = 0; i < size; i++) {
+		value = be32_to_cpu(word_buf[i]);
+		reg_wr(&icap->icap_regs->ir_wf, value);
+	}
+
+	reg_wr(&icap->icap_regs->ir_cr, 0x1);
+
+	for (i = 0; i < 20; i++) {
+		value = reg_rd(&icap->icap_regs->ir_cr);
+		if ((value & 0x1) == 0)
+			return 0;
+		ndelay(50);
+	}
+
+	ICAP_ERR(icap, "writing %d dwords timeout", size);
+	return -EIO;
+}
+
+static int bitstream_helper(struct icap *icap, const u32 *word_buffer,
+	u32 word_count)
+{
+	u32 remain_word;
+	u32 word_written = 0;
+	int wr_fifo_vacancy = 0;
+	int err = 0;
+
+	BUG_ON(!mutex_is_locked(&icap->icap_lock));
+	for (remain_word = word_count; remain_word > 0;
+		remain_word -= word_written, word_buffer += word_written) {
+		wr_fifo_vacancy = reg_rd(&icap->icap_regs->ir_wfv);
+		if (wr_fifo_vacancy <= 0) {
+			ICAP_ERR(icap, "no vacancy: %d", wr_fifo_vacancy);
+			err = -EIO;
+			break;
+		}
+		word_written = (wr_fifo_vacancy < remain_word) ?
+			wr_fifo_vacancy : remain_word;
+		if (icap_write(icap, word_buffer, word_written) != 0) {
+			ICAP_ERR(icap, "write failed remain %d, written %d",
+					remain_word, word_written);
+			err = -EIO;
+			break;
+		}
+	}
+
+	return err;
+}
+
+static int icap_download(struct icap *icap, const char *buffer,
+	unsigned long length)
+{
+	u32	num_chars_read = DMA_HWICAP_BITFILE_BUFFER_SIZE;
+	u32	byte_read;
+	int	err = 0;
+
+	mutex_lock(&icap->icap_lock);
+	for (byte_read = 0; byte_read < length; byte_read += num_chars_read) {
+		num_chars_read = length - byte_read;
+		if (num_chars_read > DMA_HWICAP_BITFILE_BUFFER_SIZE)
+			num_chars_read = DMA_HWICAP_BITFILE_BUFFER_SIZE;
+
+		err = bitstream_helper(icap, (u32 *)buffer,
+			num_chars_read / sizeof(u32));
+		if (err)
+			goto failed;
+		buffer += num_chars_read;
+	}
+
+	err = wait_for_done(icap);
+
+failed:
+	mutex_unlock(&icap->icap_lock);
+
+	return err;
+}
+
+/*
+ * Run the following sequence of canned commands to obtain IDCODE of the FPGA
+ */
+static void icap_probe_chip(struct icap *icap)
+{
+	u32 w;
+
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	reg_wr(&icap->icap_regs->ir_gier, 0x0);
+	w = reg_rd(&icap->icap_regs->ir_wfv);
+	reg_wr(&icap->icap_regs->ir_wf, 0xffffffff);
+	reg_wr(&icap->icap_regs->ir_wf, 0xaa995566);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	reg_wr(&icap->icap_regs->ir_wf, 0x28018001);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	reg_wr(&icap->icap_regs->ir_cr, 0x1);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	reg_wr(&icap->icap_regs->ir_sz, 0x1);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	reg_wr(&icap->icap_regs->ir_cr, 0x2);
+	w = reg_rd(&icap->icap_regs->ir_rfo);
+	icap->idcode = reg_rd(&icap->icap_regs->ir_rf);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+}
+
+static int
+xrt_icap_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_icap_ioctl_wr	*wr_arg = arg;
+	struct icap			*icap;
+	int				ret = 0;
+
+	icap = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_ICAP_WRITE:
+		ret = icap_download(icap, wr_arg->xiiw_bit_data,
+				wr_arg->xiiw_data_len);
+		break;
+	case XRT_ICAP_IDCODE:
+		*(u64 *)arg = icap->idcode;
+		break;
+	default:
+		ICAP_ERR(icap, "unknown command %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
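+
+/*
+ * Example: download a bitstream through this ioctl (sketch;
+ * "icap_leaf", "data" and "len" are hypothetical caller state):
+ *
+ *	struct xrt_icap_ioctl_wr wr = {
+ *		.xiiw_bit_data = data,
+ *		.xiiw_data_len = len,
+ *	};
+ *	int rc = xrt_subdev_ioctl(icap_leaf, XRT_ICAP_WRITE, &wr);
+ */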
+
+static int xrt_icap_remove(struct platform_device *pdev)
+{
+	struct icap	*icap;
+
+	icap = platform_get_drvdata(pdev);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, icap);
+
+	return 0;
+}
+
+static int xrt_icap_probe(struct platform_device *pdev)
+{
+	struct icap	*icap;
+	int			ret = 0;
+	struct resource		*res;
+
+	icap = devm_kzalloc(&pdev->dev, sizeof(*icap), GFP_KERNEL);
+	if (!icap)
+		return -ENOMEM;
+
+	icap->pdev = pdev;
+	platform_set_drvdata(pdev, icap);
+	mutex_init(&icap->icap_lock);
+
+	xrt_info(pdev, "probing");
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (res != NULL) {
+		icap->icap_regs = ioremap(res->start, resource_size(res));
+		if (!icap->icap_regs) {
+			xrt_err(pdev, "map base failed %pR", res);
+			ret = -EIO;
+			goto failed;
+		}
+	}
+
+	icap_probe_chip(icap);
+failed:
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_icap_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_FPGA_CONFIG },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_icap_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_icap_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_icap_table[] = {
+	{ XRT_ICAP, (kernel_ulong_t)&xrt_icap_data },
+	{ },
+};
+
+struct platform_driver xrt_icap_driver = {
+	.driver = {
+		.name = XRT_ICAP,
+	},
+	.probe = xrt_icap_probe,
+	.remove = xrt_icap_remove,
+	.id_table = xrt_icap_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c b/drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c
new file mode 100644
index 000000000000..3d1b9bcb29b5
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c
@@ -0,0 +1,1905 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Mailbox IP Leaf Driver
+ *
+ * Copyright (C) 2016-2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+/**
+ * DOC: Statement of Theory
+ *
+ * This is the mailbox sub-device driver added to xrt drivers so that the user
+ * pf and mgmt pf can send and receive messages of arbitrary length to / from
+ * their peer. The driver is written based on the spec of the PG114 document
+ * (https://www.xilinx.com/support/documentation/ip_documentation/mailbox/v2_1/
+ * pg114-mailbox.pdf). The HW provides one TX channel and one RX channel, which
+ * operate completely independently of each other. Data can be pushed into or
+ * read from a channel in DWORD units as a FIFO.
+ *
+ *
+ * Packet layer
+ *
+ * The driver implements two transport layers - the packet and the message
+ * layer (see below). A packet is a fixed size chunk of data that can be sent
+ * through the TX channel or retrieved from the RX channel. The driver will
+ * not attempt to send the next packet until the previous one is read by the
+ * peer. Similarly, the driver will not attempt to read the data from HW until
+ * a full packet has been written to HW by the peer.
+ *
+ * Interrupts are not enabled and the driver polls the HW periodically to see
+ * if the FIFO is ready for reading or writing. When there is an outstanding
+ * msg to be sent or received, the driver polls at high frequency. Otherwise,
+ * it polls the HW at very low frequency so that it does not consume many CPU
+ * cycles.
+ *
+ * A packet is defined as struct mailbox_pkt. There are mainly two types of
+ * packets: start-of-msg and msg-body packets. Both can carry end-of-msg flag to
+ * indicate that the packet is the last one in the current msg.
+ *
+ * The start-of-msg packet contains some meta data related to the entire msg,
+ * such as msg ID, msg flags and msg size. Strictly speaking, this info belongs
+ * to the msg layer, but it helps the receiving end to prepare a buffer for the
+ * incoming msg payload after seeing the 1st packet instead of the whole msg.
+ * It is an optimization for msg receiving.
+ *
+ * The msg-body packet contains only msg payload.
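+ *
+ * For illustration, the two packet types look roughly like this (a
+ * sketch, not the exact wire layout; struct mailbox_pkt is the
+ * authoritative definition):
+ *
+ *	start-of-msg pkt: | hdr (type, eom) | msg ID | msg flags | msg size | payload |
+ *	msg-body pkt:     | hdr (type, eom) | payload                                 |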
+ *
+ *
+ * Message layer
+ *
+ * A message is a data buffer of arbitrary length. The driver will break a
+ * message into multiple packets and transmit them to the peer, which, in turn,
+ * will assemble them into a full message before it's delivered to upper layer
+ * for further processing. One message requires at least one packet to be
+ * transferred to the peer (a start-of-msg packet with end-of-msg flag).
+ *
+ * Each message has a unique temporary u64 ID (see the communication model
+ * below for more detail). The ID shows up in the start-of-msg packet only.
+ * So, at the packet layer, there is an assumption that adjacent packets
+ * belong to the same message unless the next one is another start-of-msg
+ * packet. Hence, at the message layer, the driver will not attempt to send
+ * the next message until the current one has been fully transmitted. That is,
+ * we implement a FIFO for the message TX channel. All messages are sent by
+ * the driver in the order received from the upper layer. We can implement
+ * msgs of different priority later, if needed.
+ *
+ * On the RX side, there is no guaranteed order for receiving messages. It's
+ * up to the peer to decide which message gets enqueued into its own TX queue
+ * first, which will then be received first on the other side.
+ *
+ * A TX message is considered timed out when its transmission is not completed
+ * within 1 second. An RX msg is considered timed out 20 seconds after the
+ * corresponding TX one has been sent out. There is no retry after a msg times
+ * out. The error is simply propagated back to the upper layer.
+ *
+ * A msg is defined as struct mailbox_msg. It carries a flag indicating
+ * whether it is a request or a response msg. A response msg must have a big
+ * enough msg buffer sitting in the receiver's RX queue waiting for it. A
+ * request msg does not have a waiting msg buffer.
+ *
+ *
+ * Communication layer
+ *
+ * At the highest layer, the driver implements a request-response communication
+ * model. Three types of msgs can be sent/received in this model:
+ *
+ * - A request msg which requires a response.
+ * - A notification msg which does not require a response.
+ * - A response msg which is used to respond to a request.
+ *
+ * The OP code of the request determines whether it's a request or notification.
+ *
+ * A request buffer or response buffer will be wrapped with a single msg. This
+ * means that a session contains at most 2 msgs and the msg ID serves as the
+ * session ID.
+ *
+ * A request or notification msg is automatically assigned a msg ID when it is
+ * enqueued into the TX channel for transmission. A response msg must match a
+ * request msg by msg ID, or it will be silently dropped. A communication
+ * session starts with a request and always finishes with 0 or 1 response.
+ *
+ * Currently, the driver implements one kernel thread for the RX channel (RX
+ * thread), one for the TX channel (TX thread) and one thread for processing
+ * incoming requests (REQ thread).
+ *
+ * The RX thread is responsible for receiving incoming msgs. If it's a request
+ * or notification msg, it'll punt it to REQ thread for processing, which, in
+ * turn, will call the callback provided by mgmt pf driver or user pf driver to
+ * further process it. If it's a response, it'll simply wake up the waiting
+ * thread.
+ *
+ * The TX thread is responsible for sending out msgs. When it's done, the TX
+ * thread will simply wake up the waiting thread.
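+ *
+ * A minimal sketch of how a client of this leaf could issue a request and
+ * wait for the response (the dispatch helper that reaches
+ * mailbox_leaf_ioctl() belongs to the xrt-subdev framework and is only
+ * implied here)::
+ *
+ *	struct xrt_mailbox_ioctl_request req = { 0 };
+ *
+ *	req.xmir_req = req_buf;		// request payload
+ *	req.xmir_req_size = req_len;
+ *	req.xmir_resp = resp_buf;	// big enough for the response
+ *	req.xmir_resp_size = resp_len;	// in: buf size, out: actual size
+ *	req.xmir_sw_ch = false;		// go through the HW channel
+ *	req.xmir_resp_ttl = 20;		// response timeout in seconds
+ *	// ... dispatch XRT_MAILBOX_REQUEST to the mailbox leaf ...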
+ *
+ *
+ * Software communication channel
+ *
+ * A msg can be sent or received through the HW mailbox channel or through a
+ * daemon implemented in user land (software communication daemon). The daemon
+ * that forwards msgs from the user pf to the mgmt pf is called MPD. The other
+ * one is MSD, which is responsible for forwarding msgs from the mgmt pf to
+ * the user pf.
+ *
+ * Each mailbox subdevice driver creates a device node under /dev. A daemon
+ * (MPD or MSD) can block in the read() interface waiting to fetch an
+ * outgoing msg destined for its peer. Or it can block in the poll()/select()
+ * interface and be woken up when an outgoing msg is ready to be sent, then
+ * fetch the msg via read(). It is entirely up to the daemon how to process
+ * the msg: it may pass it through to the peer or handle it completely in its
+ * own way.
+ *
+ * If the daemon wants to pass a msg (request or response) to a mailbox driver,
+ * it can do so by calling the write() driver interface. It may block and wait
+ * until the previous msg is consumed by the RX thread before it can finish
+ * transmitting its own msg and return to user land.
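+ *
+ * A minimal sketch of the daemon side of this interface, assuming the
+ * struct xcl_sw_chan wire format from mailbox_transport.h (device path and
+ * error handling omitted)::
+ *
+ *	struct xcl_sw_chan hdr, *msg;
+ *
+ *	// Learn the size of the next outgoing msg; the driver copies the
+ *	// header out and returns -EMSGSIZE for the too-small buffer.
+ *	read(fd, &hdr, sizeof(hdr));
+ *	// Fetch header plus payload with a big enough buffer.
+ *	msg = malloc(sizeof(*msg) + hdr.sz);
+ *	read(fd, msg, sizeof(*msg) + hdr.sz);
+ *	// Pass a msg (request or response) back to the driver.
+ *	write(fd, msg, sizeof(*msg) + msg->sz);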
+ *
+ *
+ * Communication protocols
+ *
+ * As indicated above, the packet layer and msg layer protocols are defined
+ * by struct mailbox_pkt and struct mailbox_msg, respectively, in this file.
+ * The protocol for communicating at the communication layer is defined in
+ * mailbox_proto.h.
+ *
+ * The software communication channel communicates at communication layer only,
+ * which sees only request and response buffers. It should only implement the
+ * protocol defined in mailbox_proto.h.
+ *
+ * The current protocol defined at the communication layer follows this rule:
+ * all requests initiated from the user pf require a response and all requests
+ * from the mgmt pf do not. This avoids any possible deadlock caused by both
+ * sides blocking and waiting for a response from the peer.
+ *
+ * The overall architecture can be shown as below::
+ *
+ *             +----------+      +----------+            +----------+
+ *             [ Req/Resp ]  <---[SW Channel]---->       [ Req/Resp ]
+ *       +-----+----------+      +----------+      +-----+----------+
+ *       [ Msg | Req/Resp ]                        [ Msg | Req/Resp ]
+ *       +---+-+------+---+      +----------+      +---+-+-----+----+
+ *       [Pkt]...[]...[Pkt]  <---[HW Channel]----> [Pkt]...[]...[Pkt]
+ *       +---+        +---+      +----------+      +---+        +---+
+ */
+
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/completion.h>
+#include <linux/list.h>
+#include <linux/poll.h>
+#include <linux/device.h>
+#include <linux/cdev.h>
+#include <linux/fs.h>
+#include <linux/io.h>
+#include <linux/ioctl.h>
+#include <linux/delay.h>
+#include <linux/crc32c.h>
+#include <linux/xrt/mailbox_transport.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-mailbox.h"
+#include "xmgmt-main.h"
+
+#define	FLAG_STI	(1 << 0)
+#define	FLAG_RTI	(1 << 1)
+
+#define	STATUS_EMPTY	(1 << 0)
+#define	STATUS_FULL	(1 << 1)
+#define	STATUS_STA	(1 << 2)
+#define	STATUS_RTA	(1 << 3)
+#define	STATUS_VALID	(STATUS_EMPTY|STATUS_FULL|STATUS_STA|STATUS_RTA)
+
+#define	MBX_ERR(mbx, fmt, arg...) xrt_err(mbx->mbx_pdev, fmt "\n", ##arg)
+#define	MBX_WARN(mbx, fmt, arg...) xrt_warn(mbx->mbx_pdev, fmt "\n", ##arg)
+#define	MBX_INFO(mbx, fmt, arg...) xrt_info(mbx->mbx_pdev, fmt "\n", ##arg)
+#define	MBX_DBG(mbx, fmt, arg...) xrt_dbg(mbx->mbx_pdev, fmt "\n", ##arg)
+
+#define	MAILBOX_TTL_TIMER	(HZ / 10) /* in jiffies */
+#define	MAILBOX_SEC2TTL(s)	((s) * HZ / MAILBOX_TTL_TIMER)
+#define	MSG_MAX_TTL		INT_MAX /* used to disable TTL checking */
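+
+/*
+ * Example: with HZ == 250, MAILBOX_TTL_TIMER is 25 jiffies (100ms), so the
+ * poll timer fires 10 times per second and MAILBOX_SEC2TTL(s) evaluates to
+ * s * 10; a 1-second TTL thus survives 10 timer ticks before it expires.
+ */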
+
+#define	INVALID_MSG_ID		((u64)-1)
+
+#define	MAX_MSG_QUEUE_LEN	5
+#define	MAX_REQ_MSG_SZ		(1024 * 1024)
+
+#define MBX_SW_ONLY(mbx) ((mbx)->mbx_regs == NULL)
+/*
+ * Mailbox IP register layout
+ */
+struct mailbox_reg {
+	u32			mbr_wrdata;
+	u32			mbr_resv1;
+	u32			mbr_rddata;
+	u32			mbr_resv2;
+	u32			mbr_status;
+	u32			mbr_error;
+	u32			mbr_sit;
+	u32			mbr_rit;
+	u32			mbr_is;
+	u32			mbr_ie;
+	u32			mbr_ip;
+	u32			mbr_ctrl;
+} __packed;
+
+/*
+ * A message transport by mailbox.
+ */
+#define MSG_FLAG_RESPONSE	(1 << 0)
+#define MSG_FLAG_REQUEST	(1 << 1)
+struct mailbox_msg {
+	struct list_head	mbm_list;
+	struct mailbox_channel	*mbm_ch;
+	u64			mbm_req_id;
+	char			*mbm_data;
+	size_t			mbm_len;
+	int			mbm_error;
+	struct completion	mbm_complete;
+	mailbox_msg_cb_t	mbm_cb;
+	void			*mbm_cb_arg;
+	u32			mbm_flags;
+	atomic_t		mbm_ttl;
+	bool			mbm_chan_sw;
+
+	/* Statistics for debugging. */
+	u64			mbm_num_pkts;
+	u64			mbm_start_ts;
+	u64			mbm_end_ts;
+};
+
+/* Mailbox communication channel state. */
+#define MBXCS_BIT_READY		0
+#define MBXCS_BIT_STOP		1
+#define MBXCS_BIT_TICK		2
+
+enum mailbox_chan_type {
+	MBXCT_RX,
+	MBXCT_TX
+};
+
+struct mailbox_channel;
+typedef	bool (*chan_func_t)(struct mailbox_channel *ch);
+struct mailbox_channel {
+	struct mailbox		*mbc_parent;
+	enum mailbox_chan_type	mbc_type;
+
+	struct workqueue_struct	*mbc_wq;
+	struct work_struct	mbc_work;
+	struct completion	mbc_worker;
+	chan_func_t		mbc_tran;
+	unsigned long		mbc_state;
+
+	struct mutex		mbc_mutex;
+	struct list_head	mbc_msgs;
+
+	struct mailbox_msg	*mbc_cur_msg;
+	int			mbc_bytes_done;
+	struct mailbox_pkt	mbc_packet;
+
+	/*
+	 * Software channel settings
+	 */
+	wait_queue_head_t	sw_chan_wq;
+	struct mutex		sw_chan_mutex;
+	void			*sw_chan_buf;
+	size_t			sw_chan_buf_sz;
+	uint64_t		sw_chan_msg_id;
+	uint64_t		sw_chan_msg_flags;
+
+	atomic_t		sw_num_pending_msg;
+};
+
+/*
+ * The mailbox softstate.
+ */
+struct mailbox {
+	struct platform_device	*mbx_pdev;
+	struct timer_list	mbx_poll_timer;
+	struct mailbox_reg	*mbx_regs;
+
+	struct mailbox_channel	mbx_rx;
+	struct mailbox_channel	mbx_tx;
+
+	/* For listening to peer's request. */
+	mailbox_msg_cb_t	mbx_listen_cb;
+	void			*mbx_listen_cb_arg;
+	struct mutex		mbx_listen_cb_lock;
+	struct workqueue_struct	*mbx_listen_wq;
+	struct work_struct	mbx_listen_worker;
+
+	/*
+	 * For testing basic intr and mailbox comm functionality via sysfs.
+	 * No locking protection, use with care.
+	 */
+	struct mailbox_pkt	mbx_tst_pkt;
+
+	/* Req list for all incoming request message */
+	struct completion	mbx_comp;
+	struct mutex		mbx_lock;
+	struct list_head	mbx_req_list;
+	uint32_t		mbx_req_cnt;
+	bool			mbx_listen_stop;
+
+	bool			mbx_peer_dead;
+	uint64_t		mbx_opened;
+};
+
+static inline const char *reg2name(struct mailbox *mbx, u32 *reg)
+{
+	static const char * const reg_names[] = {
+		"wrdata",
+		"reserved1",
+		"rddata",
+		"reserved2",
+		"status",
+		"error",
+		"sit",
+		"rit",
+		"is",
+		"ie",
+		"ip",
+		"ctrl"
+	};
+
+	return reg_names[((uintptr_t)reg -
+		(uintptr_t)mbx->mbx_regs) / sizeof(u32)];
+}
+
+static inline u32 mailbox_reg_rd(struct mailbox *mbx, u32 *reg)
+{
+	u32 val = ioread32(reg);
+
+#ifdef	MAILBOX_REG_DEBUG
+	MBX_DBG(mbx, "REG_RD(%s)=0x%x", reg2name(mbx, reg), val);
+#endif
+	return val;
+}
+
+static inline void mailbox_reg_wr(struct mailbox *mbx, u32 *reg, u32 val)
+{
+#ifdef	MAILBOX_REG_DEBUG
+	MBX_DBG(mbx, "REG_WR(%s, 0x%x)", reg2name(mbx, reg), val);
+#endif
+	iowrite32(val, reg);
+}
+
+static inline void reset_pkt(struct mailbox_pkt *pkt)
+{
+	pkt->hdr.type = PKT_INVALID;
+}
+
+static inline bool valid_pkt(struct mailbox_pkt *pkt)
+{
+	return (pkt->hdr.type != PKT_INVALID);
+}
+
+static inline bool is_rx_chan(struct mailbox_channel *ch)
+{
+	return ch->mbc_type == MBXCT_RX;
+}
+
+static inline char *ch_name(struct mailbox_channel *ch)
+{
+	return is_rx_chan(ch) ? "RX" : "TX";
+}
+
+static bool is_rx_msg(struct mailbox_msg *msg)
+{
+	return is_rx_chan(msg->mbm_ch);
+}
+
+static void chan_tick(struct mailbox_channel *ch)
+{
+	/*
+	 * Called from the poll timer in atomic context, so no mutex can be
+	 * taken here; set_bit() and complete() are both safe as-is.
+	 */
+	set_bit(MBXCS_BIT_TICK, &ch->mbc_state);
+	complete(&ch->mbc_worker);
+}
+
+static void mailbox_poll_timer(struct timer_list *t)
+{
+	struct mailbox *mbx = from_timer(mbx, t, mbx_poll_timer);
+
+	chan_tick(&mbx->mbx_tx);
+	chan_tick(&mbx->mbx_rx);
+
+	/*
+	 * We're a periodic timer; rearm directly. Timer callbacks run in
+	 * atomic context, so a mutex must not be taken here, and
+	 * del_timer_sync() in mailbox_stop() copes with a self-rearming timer.
+	 */
+	mod_timer(&mbx->mbx_poll_timer, jiffies + MAILBOX_TTL_TIMER);
+}
+
+static void free_msg(struct mailbox_msg *msg)
+{
+	vfree(msg);
+}
+
+static void msg_done(struct mailbox_msg *msg, int err)
+{
+	struct mailbox_channel *ch = msg->mbm_ch;
+	struct mailbox *mbx = ch->mbc_parent;
+	u64 elapsed = (msg->mbm_end_ts - msg->mbm_start_ts) / 1000; /* in us. */
+
+	MBX_INFO(ch->mbc_parent,
+		"msg(id=0x%llx sz=%ldB crc=0x%x): %s %lldpkts in %lldus: %d",
+		msg->mbm_req_id, msg->mbm_len,
+		crc32c_le(~0, msg->mbm_data, msg->mbm_len),
+		ch_name(ch), msg->mbm_num_pkts, elapsed, err);
+
+	msg->mbm_error = err;
+
+	if (msg->mbm_cb) {
+		msg->mbm_cb(msg->mbm_cb_arg, msg->mbm_data, msg->mbm_len,
+			msg->mbm_req_id, msg->mbm_error, msg->mbm_chan_sw);
+		free_msg(msg);
+		return;
+	}
+
+	if (is_rx_msg(msg) && (msg->mbm_flags & MSG_FLAG_REQUEST)) {
+		if (err) {
+			MBX_WARN(mbx, "Time'd out receiving full req message");
+			free_msg(msg);
+		} else if (mbx->mbx_req_cnt >= MAX_MSG_QUEUE_LEN) {
+			MBX_WARN(mbx, "Too many pending req messages, dropped");
+			free_msg(msg);
+		} else {
+			mutex_lock(&ch->mbc_parent->mbx_lock);
+			list_add_tail(&msg->mbm_list,
+				&ch->mbc_parent->mbx_req_list);
+			mbx->mbx_req_cnt++;
+			mutex_unlock(&ch->mbc_parent->mbx_lock);
+			complete(&ch->mbc_parent->mbx_comp);
+		}
+	} else {
+		complete(&msg->mbm_complete);
+	}
+}
+
+static void reset_sw_ch(struct mailbox_channel *ch)
+{
+	BUG_ON(!mutex_is_locked(&ch->sw_chan_mutex));
+
+	vfree(ch->sw_chan_buf);
+	ch->sw_chan_buf = NULL;
+	ch->sw_chan_buf_sz = 0;
+	ch->sw_chan_msg_flags = 0;
+	ch->sw_chan_msg_id = 0;
+	atomic_dec_if_positive(&ch->sw_num_pending_msg);
+}
+
+static void reset_hw_ch(struct mailbox_channel *ch)
+{
+	struct mailbox *mbx = ch->mbc_parent;
+
+	if (!mbx->mbx_regs)
+		return;
+
+	mailbox_reg_wr(mbx, &mbx->mbx_regs->mbr_ctrl,
+		is_rx_chan(ch) ? 0x2 : 0x1);
+}
+
+static void chan_msg_done(struct mailbox_channel *ch, int err)
+{
+	if (!ch->mbc_cur_msg)
+		return;
+
+	ch->mbc_cur_msg->mbm_end_ts = ktime_get_ns();
+	if (err) {
+		if (ch->mbc_cur_msg->mbm_chan_sw) {
+			mutex_lock(&ch->sw_chan_mutex);
+			reset_sw_ch(ch);
+			mutex_unlock(&ch->sw_chan_mutex);
+		} else {
+			reset_hw_ch(ch);
+		}
+	}
+
+	msg_done(ch->mbc_cur_msg, err);
+	ch->mbc_cur_msg = NULL;
+	ch->mbc_bytes_done = 0;
+}
+
+void timeout_msg(struct mailbox_channel *ch)
+{
+	struct mailbox *mbx = ch->mbc_parent;
+	struct mailbox_msg *msg = NULL;
+	struct list_head *pos, *n;
+	struct list_head l = LIST_HEAD_INIT(l);
+
+	/* Check outstanding msg first. */
+	msg = ch->mbc_cur_msg;
+	if (msg) {
+		if (atomic_dec_if_positive(&msg->mbm_ttl) < 0) {
+			MBX_WARN(mbx, "found outstanding msg time'd out");
+			if (!mbx->mbx_peer_dead) {
+				MBX_WARN(mbx, "peer becomes dead");
+				/* Peer is not active any more. */
+				mbx->mbx_peer_dead = true;
+			}
+			chan_msg_done(ch, -ETIMEDOUT);
+		}
+	}
+
+	mutex_lock(&ch->mbc_mutex);
+
+	list_for_each_safe(pos, n, &ch->mbc_msgs) {
+		msg = list_entry(pos, struct mailbox_msg, mbm_list);
+		if (atomic_dec_if_positive(&msg->mbm_ttl) < 0) {
+			list_del(&msg->mbm_list);
+			list_add_tail(&msg->mbm_list, &l);
+		}
+	}
+
+	mutex_unlock(&ch->mbc_mutex);
+
+	if (!list_empty(&l))
+		MBX_ERR(mbx, "found awaiting msg time'd out");
+
+	list_for_each_safe(pos, n, &l) {
+		msg = list_entry(pos, struct mailbox_msg, mbm_list);
+		list_del(&msg->mbm_list);
+		msg_done(msg, -ETIMEDOUT);
+	}
+}
+
+static void msg_timer_on(struct mailbox_msg *msg, u32 ttl)
+{
+	atomic_set(&msg->mbm_ttl, MAILBOX_SEC2TTL(ttl));
+}
+
+/*
+ * Reset TTL for outstanding msg. Next portion of the msg is expected to
+ * arrive or go out before it times out.
+ */
+static void outstanding_msg_ttl_reset(struct mailbox_channel *ch)
+{
+	struct mailbox_msg *msg = ch->mbc_cur_msg;
+
+	if (!msg)
+		return;
+
+	/* Outstanding msg times out if no progress is made within 1 second. */
+	msg_timer_on(msg, 1);
+}
+
+static void handle_timer_event(struct mailbox_channel *ch)
+{
+	if (!test_bit(MBXCS_BIT_TICK, &ch->mbc_state))
+		return;
+	timeout_msg(ch);
+	clear_bit(MBXCS_BIT_TICK, &ch->mbc_state);
+}
+
+static void chan_worker(struct work_struct *work)
+{
+	struct mailbox_channel *ch =
+		container_of(work, struct mailbox_channel, mbc_work);
+	struct mailbox *mbx = ch->mbc_parent;
+	bool progress;
+
+	while (!test_bit(MBXCS_BIT_STOP, &ch->mbc_state)) {
+		if (ch->mbc_cur_msg) {
+			/* Fast poll (1000/s) to finish outstanding msg. */
+			usleep_range(1000, 2000);
+		} else {
+			/* Wait for next poll timer trigger. */
+			wait_for_completion_interruptible(&ch->mbc_worker);
+		}
+
+		progress = ch->mbc_tran(ch);
+		if (progress) {
+			outstanding_msg_ttl_reset(ch);
+			if (mbx->mbx_peer_dead) {
+				MBX_INFO(mbx, "peer becomes active");
+				mbx->mbx_peer_dead = false;
+			}
+		}
+
+		handle_timer_event(ch);
+	}
+}
+
+static inline u32 mailbox_chk_err(struct mailbox *mbx)
+{
+	u32 val = mailbox_reg_rd(mbx, &mbx->mbx_regs->mbr_error);
+
+	/* Ignore bad register value after firewall is tripped. */
+	if (val == 0xffffffff)
+		val = 0;
+
+	/* Error should not be seen, shout when found. */
+	if (val)
+		MBX_ERR(mbx, "mailbox error detected, error=0x%x", val);
+	return val;
+}
+
+static int chan_msg_enqueue(struct mailbox_channel *ch, struct mailbox_msg *msg)
+{
+	int rv = 0;
+
+	MBX_DBG(ch->mbc_parent, "%s enqueuing msg, id=0x%llx",
+		ch_name(ch), msg->mbm_req_id);
+
+	BUG_ON(msg->mbm_req_id == INVALID_MSG_ID);
+
+	mutex_lock(&ch->mbc_mutex);
+	if (test_bit(MBXCS_BIT_STOP, &ch->mbc_state)) {
+		rv = -ESHUTDOWN;
+	} else {
+		list_add_tail(&msg->mbm_list, &ch->mbc_msgs);
+		msg->mbm_ch = ch;
+	}
+	mutex_unlock(&ch->mbc_mutex);
+
+	return rv;
+}
+
+static struct mailbox_msg *chan_msg_dequeue(struct mailbox_channel *ch,
+	u64 req_id)
+{
+	struct mailbox_msg *msg = NULL;
+	struct list_head *pos;
+
+	mutex_lock(&ch->mbc_mutex);
+
+	/* Take the first msg. */
+	if (req_id == INVALID_MSG_ID) {
+		msg = list_first_entry_or_null(&ch->mbc_msgs,
+		struct mailbox_msg, mbm_list);
+	/* Take the msg w/ specified ID. */
+	} else {
+		list_for_each(pos, &ch->mbc_msgs) {
+			struct mailbox_msg *temp;
+
+			temp = list_entry(pos, struct mailbox_msg, mbm_list);
+			if (temp->mbm_req_id == req_id) {
+				msg = temp;
+				break;
+			}
+		}
+	}
+
+	if (msg) {
+		MBX_DBG(ch->mbc_parent, "%s dequeued msg, id=0x%llx",
+			ch_name(ch), msg->mbm_req_id);
+		list_del(&msg->mbm_list);
+	}
+
+	mutex_unlock(&ch->mbc_mutex);
+	return msg;
+}
+
+static struct mailbox_msg *alloc_msg(void *buf, size_t len)
+{
+	char *newbuf = NULL;
+	struct mailbox_msg *msg = NULL;
+	/* TTL is disabled here; it is armed once the msg is in flight. */
+
+	if (!buf) {
+		msg = vzalloc(sizeof(struct mailbox_msg) + len);
+		if (!msg)
+			return NULL;
+		newbuf = ((char *)msg) + sizeof(struct mailbox_msg);
+	} else {
+		msg = vzalloc(sizeof(struct mailbox_msg));
+		if (!msg)
+			return NULL;
+		newbuf = buf;
+	}
+
+	INIT_LIST_HEAD(&msg->mbm_list);
+	msg->mbm_data = newbuf;
+	msg->mbm_len = len;
+	atomic_set(&msg->mbm_ttl, MSG_MAX_TTL);
+	msg->mbm_chan_sw = false;
+	init_completion(&msg->mbm_complete);
+
+	return msg;
+}
+
+static void chan_fini(struct mailbox_channel *ch)
+{
+	struct mailbox_msg *msg;
+
+	if (!ch->mbc_parent)
+		return;
+
+	/*
+	 * Holding mutex to ensure no new msg is enqueued after
+	 * flag is set.
+	 */
+	mutex_lock(&ch->mbc_mutex);
+	set_bit(MBXCS_BIT_STOP, &ch->mbc_state);
+	mutex_unlock(&ch->mbc_mutex);
+
+	if (ch->mbc_wq) {
+		complete(&ch->mbc_worker);
+		cancel_work_sync(&ch->mbc_work);
+		destroy_workqueue(ch->mbc_wq);
+	}
+
+	mutex_lock(&ch->sw_chan_mutex);
+	vfree(ch->sw_chan_buf);	/* vfree(NULL) is a no-op */
+	mutex_unlock(&ch->sw_chan_mutex);
+
+	msg = ch->mbc_cur_msg;
+	if (msg)
+		chan_msg_done(ch, -ESHUTDOWN);
+
+	while ((msg = chan_msg_dequeue(ch, INVALID_MSG_ID)) != NULL)
+		msg_done(msg, -ESHUTDOWN);
+
+	mutex_destroy(&ch->mbc_mutex);
+	mutex_destroy(&ch->sw_chan_mutex);
+	ch->mbc_parent = NULL;
+}
+
+static int chan_init(struct mailbox *mbx, enum mailbox_chan_type type,
+	struct mailbox_channel *ch, chan_func_t fn)
+{
+	ch->mbc_parent = mbx;
+	ch->mbc_type = type;
+	ch->mbc_tran = fn;
+	INIT_LIST_HEAD(&ch->mbc_msgs);
+	init_completion(&ch->mbc_worker);
+	mutex_init(&ch->mbc_mutex);
+	mutex_init(&ch->sw_chan_mutex);
+
+	init_waitqueue_head(&ch->sw_chan_wq);
+	atomic_set(&ch->sw_num_pending_msg, 0);
+	ch->mbc_cur_msg = NULL;
+	ch->mbc_bytes_done = 0;
+
+	/* Reset pkt buffer. */
+	reset_pkt(&ch->mbc_packet);
+	/* Reset HW channel. */
+	reset_hw_ch(ch);
+	/* Reset SW channel. */
+	mutex_lock(&ch->sw_chan_mutex);
+	reset_sw_ch(ch);
+	mutex_unlock(&ch->sw_chan_mutex);
+
+	/* One thread for one channel. */
+	ch->mbc_wq =
+		create_singlethread_workqueue(dev_name(&mbx->mbx_pdev->dev));
+	if (!ch->mbc_wq) {
+		chan_fini(ch);
+		return -ENOMEM;
+	}
+	INIT_WORK(&ch->mbc_work, chan_worker);
+
+	/* Kick off channel thread, all initialization should be done by now. */
+	clear_bit(MBXCS_BIT_STOP, &ch->mbc_state);
+	set_bit(MBXCS_BIT_READY, &ch->mbc_state);
+	queue_work(ch->mbc_wq, &ch->mbc_work);
+	return 0;
+}
+
+static void listen_wq_fini(struct mailbox *mbx)
+{
+	BUG_ON(mbx == NULL);
+
+	if (mbx->mbx_listen_wq != NULL) {
+		mbx->mbx_listen_stop = true;
+		complete(&mbx->mbx_comp);
+		cancel_work_sync(&mbx->mbx_listen_worker);
+		destroy_workqueue(mbx->mbx_listen_wq);
+		mbx->mbx_listen_wq = NULL;
+	}
+}
+
+static void chan_recv_pkt(struct mailbox_channel *ch)
+{
+	int i, retry = 10;
+	struct mailbox *mbx = ch->mbc_parent;
+	struct mailbox_pkt *pkt = &ch->mbc_packet;
+
+	BUG_ON(valid_pkt(pkt));
+
+	/* Picking up a packet from HW. */
+	for (i = 0; i < PACKET_SIZE; i++) {
+		while ((mailbox_reg_rd(mbx,
+			&mbx->mbx_regs->mbr_status) & STATUS_EMPTY) &&
+			(retry-- > 0))
+			msleep(100);
+
+		*(((u32 *)pkt) + i) =
+			mailbox_reg_rd(mbx, &mbx->mbx_regs->mbr_rddata);
+	}
+	if ((mailbox_chk_err(mbx) & STATUS_EMPTY) != 0)
+		reset_pkt(pkt);
+	else
+		MBX_DBG(mbx, "received pkt: type=0x%x", pkt->hdr.type);
+}
+
+static void chan_send_pkt(struct mailbox_channel *ch)
+{
+	int i;
+	struct mailbox *mbx = ch->mbc_parent;
+	struct mailbox_pkt *pkt = &ch->mbc_packet;
+
+	BUG_ON(!valid_pkt(pkt));
+
+	MBX_DBG(mbx, "sending pkt: type=0x%x", pkt->hdr.type);
+
+	/* Pushing a packet into HW. */
+	for (i = 0; i < PACKET_SIZE; i++) {
+		mailbox_reg_wr(mbx, &mbx->mbx_regs->mbr_wrdata,
+			*(((u32 *)pkt) + i));
+	}
+
+	reset_pkt(pkt);
+	if (ch->mbc_cur_msg)
+		ch->mbc_bytes_done += ch->mbc_packet.hdr.payload_size;
+
+	BUG_ON((mailbox_chk_err(mbx) & STATUS_FULL) != 0);
+}
+
+static int chan_pkt2msg(struct mailbox_channel *ch)
+{
+	struct mailbox *mbx = ch->mbc_parent;
+	void *msg_data, *pkt_data;
+	struct mailbox_msg *msg = ch->mbc_cur_msg;
+	struct mailbox_pkt *pkt = &ch->mbc_packet;
+	size_t cnt = pkt->hdr.payload_size;
+	u32 type = (pkt->hdr.type & PKT_TYPE_MASK);
+
+	BUG_ON(((type != PKT_MSG_START) && (type != PKT_MSG_BODY)) || !msg);
+
+	if (type == PKT_MSG_START) {
+		msg->mbm_req_id = pkt->body.msg_start.msg_req_id;
+		BUG_ON(msg->mbm_len < pkt->body.msg_start.msg_size);
+		msg->mbm_len = pkt->body.msg_start.msg_size;
+		pkt_data = pkt->body.msg_start.payload;
+	} else {
+		pkt_data = pkt->body.msg_body.payload;
+	}
+
+	if (cnt > msg->mbm_len - ch->mbc_bytes_done) {
+		MBX_ERR(mbx, "invalid mailbox packet size");
+		return -EBADMSG;
+	}
+
+	msg_data = msg->mbm_data + ch->mbc_bytes_done;
+	(void) memcpy(msg_data, pkt_data, cnt);
+	ch->mbc_bytes_done += cnt;
+	msg->mbm_num_pkts++;
+
+	reset_pkt(pkt);
+	return 0;
+}
+
+/* Prepare outstanding msg for receiving incoming msg. */
+static void dequeue_rx_msg(struct mailbox_channel *ch,
+	u32 flags, u64 id, size_t sz)
+{
+	struct mailbox *mbx = ch->mbc_parent;
+	struct mailbox_msg *msg = NULL;
+	int err = 0;
+
+	if (ch->mbc_cur_msg)
+		return;
+
+	if (flags & MSG_FLAG_RESPONSE) {
+		msg = chan_msg_dequeue(ch, id);
+		if (!msg) {
+			MBX_ERR(mbx, "Failed to find msg (id 0x%llx)", id);
+		} else if (msg->mbm_len < sz) {
+			MBX_ERR(mbx, "Response (id 0x%llx) is too big: %lu",
+				id, sz);
+			err = -EMSGSIZE;
+		}
+	} else if (flags & MSG_FLAG_REQUEST) {
+		if (sz < MAX_REQ_MSG_SZ)
+			msg = alloc_msg(NULL, sz);
+		if (msg) {
+			msg->mbm_req_id = id;
+			msg->mbm_ch = ch;
+			msg->mbm_flags = flags;
+		} else {
+			MBX_ERR(mbx, "req msg len %luB is too big", sz);
+		}
+	} else {
+		/* Not a request or response? */
+		MBX_ERR(mbx, "Invalid incoming msg flags: 0x%x", flags);
+	}
+
+	if (msg) {
+		msg->mbm_start_ts = ktime_get_ns();
+		msg->mbm_num_pkts = 0;
+		ch->mbc_cur_msg = msg;
+	}
+
+	/* Fail received msg now on error. */
+	if (err)
+		chan_msg_done(ch, err);
+}
+
+static bool do_sw_rx(struct mailbox_channel *ch)
+{
+	u32 flags = 0;
+	u64 id = 0;
+	size_t len = 0;
+
+	/*
+	 * Don't receive new msg when a msg is being received from HW
+	 * for simplicity.
+	 */
+	if (ch->mbc_cur_msg)
+		return false;
+
+	mutex_lock(&ch->sw_chan_mutex);
+
+	flags = ch->sw_chan_msg_flags;
+	id = ch->sw_chan_msg_id;
+	len = ch->sw_chan_buf_sz;
+
+	mutex_unlock(&ch->sw_chan_mutex);
+
+	/* Nothing to receive. */
+	if (id == 0)
+		return false;
+
+	/* Prepare outstanding msg. */
+	dequeue_rx_msg(ch, flags, id, len);
+
+	mutex_lock(&ch->sw_chan_mutex);
+
+	BUG_ON(id != ch->sw_chan_msg_id);
+
+	if (ch->mbc_cur_msg) {
+		ch->mbc_cur_msg->mbm_chan_sw = true;
+		memcpy(ch->mbc_cur_msg->mbm_data,
+			ch->sw_chan_buf, ch->sw_chan_buf_sz);
+	}
+
+	/* Done with sw msg. */
+	reset_sw_ch(ch);
+
+	mutex_unlock(&ch->sw_chan_mutex);
+
+	wake_up_interruptible(&ch->sw_chan_wq);
+
+	chan_msg_done(ch, 0);
+
+	return true;
+}
+
+static bool do_hw_rx(struct mailbox_channel *ch)
+{
+	struct mailbox *mbx = ch->mbc_parent;
+	struct mailbox_pkt *pkt = &ch->mbc_packet;
+	u32 type;
+	bool eom = false, read_hw = false;
+	u32 st = mailbox_reg_rd(mbx, &mbx->mbx_regs->mbr_status);
+	bool progress = false;
+
+	/* Check if a packet is ready for reading. */
+	if (st & ~STATUS_VALID) {
+		/* Device is still being reset or firewall tripped. */
+		read_hw = false;
+	} else {
+		read_hw = ((st & STATUS_RTA) != 0);
+	}
+
+	if (!read_hw)
+		return progress;
+
+	chan_recv_pkt(ch);
+	type = pkt->hdr.type & PKT_TYPE_MASK;
+	eom = ((pkt->hdr.type & PKT_TYPE_MSG_END) != 0);
+
+	switch (type) {
+	case PKT_TEST:
+		(void) memcpy(&mbx->mbx_tst_pkt, &ch->mbc_packet,
+			sizeof(struct mailbox_pkt));
+		reset_pkt(pkt);
+		break;
+	case PKT_MSG_START:
+		if (ch->mbc_cur_msg) {
+			MBX_ERR(mbx, "Received partial msg (id 0x%llx)",
+				ch->mbc_cur_msg->mbm_req_id);
+			chan_msg_done(ch, -EBADMSG);
+		}
+		/* Prepare outstanding msg. */
+		dequeue_rx_msg(ch, pkt->body.msg_start.msg_flags,
+			pkt->body.msg_start.msg_req_id,
+			pkt->body.msg_start.msg_size);
+		if (!ch->mbc_cur_msg) {
+			MBX_ERR(mbx, "got unexpected msg start pkt");
+			reset_pkt(pkt);
+		}
+		break;
+	case PKT_MSG_BODY:
+		if (!ch->mbc_cur_msg) {
+			MBX_ERR(mbx, "got unexpected msg body pkt");
+			reset_pkt(pkt);
+		}
+		break;
+	default:
+		MBX_ERR(mbx, "invalid mailbox pkt type");
+		reset_pkt(pkt);
+		break;
+	}
+
+	if (valid_pkt(pkt)) {
+		int err = chan_pkt2msg(ch);
+
+		if (err || eom)
+			chan_msg_done(ch, err);
+		progress = true;
+	}
+
+	return progress;
+}
+
+/*
+ * Worker for RX channel.
+ */
+static bool chan_do_rx(struct mailbox_channel *ch)
+{
+	struct mailbox *mbx = ch->mbc_parent;
+	bool progress = false;
+
+	progress = do_sw_rx(ch);
+	if (!MBX_SW_ONLY(mbx))
+		progress |= do_hw_rx(ch);
+
+	return progress;
+}
+
+static void chan_msg2pkt(struct mailbox_channel *ch)
+{
+	size_t cnt = 0;
+	size_t payload_off = 0;
+	void *msg_data, *pkt_data;
+	struct mailbox_msg *msg = ch->mbc_cur_msg;
+	struct mailbox_pkt *pkt = &ch->mbc_packet;
+	bool is_start = (ch->mbc_bytes_done == 0);
+	bool is_eom = false;
+
+	if (is_start) {
+		payload_off = offsetof(struct mailbox_pkt,
+			body.msg_start.payload);
+	} else {
+		payload_off = offsetof(struct mailbox_pkt,
+			body.msg_body.payload);
+	}
+	cnt = PACKET_SIZE * sizeof(u32) - payload_off;
+	if (cnt >= msg->mbm_len - ch->mbc_bytes_done) {
+		cnt = msg->mbm_len - ch->mbc_bytes_done;
+		is_eom = true;
+	}
+
+	pkt->hdr.type = is_start ? PKT_MSG_START : PKT_MSG_BODY;
+	pkt->hdr.type |= is_eom ? PKT_TYPE_MSG_END : 0;
+	pkt->hdr.payload_size = cnt;
+
+	if (is_start) {
+		pkt->body.msg_start.msg_req_id = msg->mbm_req_id;
+		pkt->body.msg_start.msg_size = msg->mbm_len;
+		pkt->body.msg_start.msg_flags = msg->mbm_flags;
+		pkt_data = pkt->body.msg_start.payload;
+	} else {
+		pkt_data = pkt->body.msg_body.payload;
+	}
+	msg_data = msg->mbm_data + ch->mbc_bytes_done;
+	(void) memcpy(pkt_data, msg_data, cnt);
+}
+
+static void do_sw_tx(struct mailbox_channel *ch)
+{
+	mutex_lock(&ch->sw_chan_mutex);
+
+	BUG_ON(ch->mbc_cur_msg == NULL || !ch->mbc_cur_msg->mbm_chan_sw);
+	BUG_ON(ch->sw_chan_msg_id != 0);
+
+	ch->sw_chan_buf = vmalloc(ch->mbc_cur_msg->mbm_len);
+	if (!ch->sw_chan_buf) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		return;
+	}
+
+	ch->sw_chan_buf_sz = ch->mbc_cur_msg->mbm_len;
+	ch->sw_chan_msg_id = ch->mbc_cur_msg->mbm_req_id;
+	ch->sw_chan_msg_flags = ch->mbc_cur_msg->mbm_flags;
+	(void) memcpy(ch->sw_chan_buf, ch->mbc_cur_msg->mbm_data,
+		ch->sw_chan_buf_sz);
+	ch->mbc_bytes_done = ch->mbc_cur_msg->mbm_len;
+
+	/* Notify sw tx channel handler. */
+	atomic_inc(&ch->sw_num_pending_msg);
+
+	mutex_unlock(&ch->sw_chan_mutex);
+	wake_up_interruptible(&ch->sw_chan_wq);
+}
+
+static void do_hw_tx(struct mailbox_channel *ch)
+{
+	BUG_ON(ch->mbc_cur_msg == NULL || ch->mbc_cur_msg->mbm_chan_sw);
+	chan_msg2pkt(ch);
+	chan_send_pkt(ch);
+}
+
+/* Prepare outstanding msg for sending outgoing msg. */
+static void dequeue_tx_msg(struct mailbox_channel *ch)
+{
+	if (ch->mbc_cur_msg)
+		return;
+
+	ch->mbc_cur_msg = chan_msg_dequeue(ch, INVALID_MSG_ID);
+	if (ch->mbc_cur_msg) {
+		ch->mbc_cur_msg->mbm_start_ts = ktime_get_ns();
+		ch->mbc_cur_msg->mbm_num_pkts = 0;
+	}
+}
+
+/* Check if HW TX channel is ready for next msg. */
+static bool tx_hw_chan_ready(struct mailbox_channel *ch)
+{
+	struct mailbox *mbx = ch->mbc_parent;
+	u32 st;
+
+	st = mailbox_reg_rd(mbx, &mbx->mbx_regs->mbr_status);
+	return ((st != 0xffffffff) && ((st & STATUS_STA) != 0));
+}
+
+/* Check if SW TX channel is ready for next msg. */
+static bool tx_sw_chan_ready(struct mailbox_channel *ch)
+{
+	bool ready;
+
+	mutex_lock(&ch->sw_chan_mutex);
+	ready = (ch->sw_chan_msg_id == 0);
+	mutex_unlock(&ch->sw_chan_mutex);
+	return ready;
+}
+
+/*
+ * Worker for TX channel.
+ */
+static bool chan_do_tx(struct mailbox_channel *ch)
+{
+	struct mailbox_msg *curmsg = ch->mbc_cur_msg;
+	bool progress = false;
+
+	/* Check if current outstanding msg is fully sent. */
+	if (curmsg) {
+		bool done = curmsg->mbm_chan_sw ? tx_sw_chan_ready(ch) :
+			tx_hw_chan_ready(ch);
+		if (done) {
+			curmsg->mbm_num_pkts++;
+			if (curmsg->mbm_len == ch->mbc_bytes_done)
+				chan_msg_done(ch, 0);
+			progress = true;
+		}
+	}
+
+	dequeue_tx_msg(ch);
+	curmsg = ch->mbc_cur_msg;
+
+	/* Send the next msg out. */
+	if (curmsg) {
+		if (curmsg->mbm_chan_sw) {
+			if (tx_sw_chan_ready(ch)) {
+				do_sw_tx(ch);
+				progress = true;
+			}
+		} else {
+			if (tx_hw_chan_ready(ch)) {
+				do_hw_tx(ch);
+				progress = true;
+			}
+		}
+	}
+
+	return progress;
+}
+
+static ssize_t mailbox_ctl_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+	u32 *reg = (u32 *)mbx->mbx_regs;
+	int r, n;
+	int nreg = sizeof(struct mailbox_reg) / sizeof(u32);
+
+	if (MBX_SW_ONLY(mbx))
+		return 0;
+
+	for (r = 0, n = 0; r < nreg; r++, reg++) {
+		/* Non-status registers. */
+		if ((reg == &mbx->mbx_regs->mbr_resv1)		||
+			(reg == &mbx->mbx_regs->mbr_wrdata)	||
+			(reg == &mbx->mbx_regs->mbr_rddata)	||
+			(reg == &mbx->mbx_regs->mbr_resv2))
+			continue;
+		/* Write-only control register. */
+		if (reg == &mbx->mbx_regs->mbr_ctrl) {
+			n += sprintf(buf + n, "%02zu %10s = --\n",
+				r * sizeof(u32), reg2name(mbx, reg));
+		/* Readable status register. */
+		} else {
+			n += sprintf(buf + n, "%02zu %10s = 0x%08x\n",
+				r * sizeof(u32), reg2name(mbx, reg),
+				mailbox_reg_rd(mbx, reg));
+		}
+	}
+
+	return n;
+}
+static ssize_t mailbox_ctl_store(struct device *dev,
+	struct device_attribute *da, const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+	u32 off, val;
+	int nreg = sizeof(struct mailbox_reg) / sizeof(u32);
+	u32 *reg = (u32 *)mbx->mbx_regs;
+
+	if (MBX_SW_ONLY(mbx))
+		return count;
+
+	if (sscanf(buf, "%u:%u", &off, &val) != 2 || (off % sizeof(u32)) ||
+		off >= nreg * sizeof(u32)) {
+		MBX_ERR(mbx, "input should be <reg_offset:reg_val>");
+		return -EINVAL;
+	}
+	reg += off / sizeof(u32);
+
+	mailbox_reg_wr(mbx, reg, val);
+	return count;
+}
+/* HW register level debugging i/f. */
+static DEVICE_ATTR_RW(mailbox_ctl);
+
+static ssize_t mailbox_pkt_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+	struct mailbox_pkt *pkt = &mbx->mbx_tst_pkt;
+	u32 sz = pkt->hdr.payload_size;
+
+	if (MBX_SW_ONLY(mbx))
+		return -ENODEV;
+
+	if (!valid_pkt(pkt))
+		return -ENOENT;
+
+	(void) memcpy(buf, pkt->body.data, sz);
+	reset_pkt(pkt);
+
+	return sz;
+}
+static ssize_t mailbox_pkt_store(struct device *dev,
+	struct device_attribute *da, const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+	struct mailbox_pkt *pkt = &mbx->mbx_tst_pkt;
+	size_t maxlen = sizeof(mbx->mbx_tst_pkt.body.data);
+
+	if (MBX_SW_ONLY(mbx))
+		return -ENODEV;
+
+	if (count > maxlen) {
+		MBX_ERR(mbx, "max input length is %ld", maxlen);
+		return 0;
+	}
+
+	(void) memcpy(pkt->body.data, buf, count);
+	pkt->hdr.payload_size = count;
+	pkt->hdr.type = PKT_TEST;
+
+	/* Sending test pkt. */
+	(void) memcpy(&mbx->mbx_tx.mbc_packet, &mbx->mbx_tst_pkt,
+		sizeof(struct mailbox_pkt));
+	reset_pkt(&mbx->mbx_tst_pkt);
+	chan_send_pkt(&mbx->mbx_tx);
+	return count;
+}
+/* Packet test i/f. */
+static DEVICE_ATTR_RW(mailbox_pkt);
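+
+/*
+ * Example debug session through the two nodes above (the sysfs path is
+ * illustrative; offsets and values are decimal, as parsed by
+ * mailbox_ctl_store()):
+ *
+ *	# dump the readable registers
+ *	cat /sys/bus/platform/devices/<mailbox-dev>/mailbox_ctl
+ *	# write 0x3 to the ctrl register at byte offset 44 (reset both FIFOs)
+ *	echo "44:3" > /sys/bus/platform/devices/<mailbox-dev>/mailbox_ctl
+ *	# send a test packet; the peer reads it from its own mailbox_pkt node
+ *	echo "hello" > /sys/bus/platform/devices/<mailbox-dev>/mailbox_pkt
+ */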
+
+static struct attribute *mailbox_attrs[] = {
+	&dev_attr_mailbox_ctl.attr,
+	&dev_attr_mailbox_pkt.attr,
+	NULL,
+};
+
+static const struct attribute_group mailbox_attrgroup = {
+	.attrs = mailbox_attrs,
+};
+
+/*
+ * Msg will be sent to peer and reply will be received.
+ */
+static int mailbox_request(struct platform_device *pdev, void *req,
+	size_t reqlen, void *resp, size_t *resplen, bool sw_ch, u32 resp_ttl)
+{
+	int rv = -ENOMEM;
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+	struct mailbox_msg *reqmsg = NULL, *respmsg = NULL;
+
+	/* If peer is not alive, no point sending req and waiting for resp. */
+	if (mbx->mbx_peer_dead)
+		return -ENOTCONN;
+
+	reqmsg = alloc_msg(req, reqlen);
+	if (!reqmsg)
+		goto fail;
+	reqmsg->mbm_chan_sw = sw_ch;
+	reqmsg->mbm_req_id = (uintptr_t)reqmsg->mbm_data;
+	reqmsg->mbm_flags |= MSG_FLAG_REQUEST;
+
+	respmsg = alloc_msg(resp, *resplen);
+	if (!respmsg)
+		goto fail;
+	/* Only interested in response w/ same ID. */
+	respmsg->mbm_req_id = reqmsg->mbm_req_id;
+	respmsg->mbm_chan_sw = sw_ch;
+
+	/* Always enqueue RX msg before TX one to avoid race. */
+	rv = chan_msg_enqueue(&mbx->mbx_rx, respmsg);
+	if (rv)
+		goto fail;
+	rv = chan_msg_enqueue(&mbx->mbx_tx, reqmsg);
+	if (rv) {
+		respmsg = chan_msg_dequeue(&mbx->mbx_rx, reqmsg->mbm_req_id);
+		goto fail;
+	}
+
+	/* Wait for req to be sent. */
+	wait_for_completion(&reqmsg->mbm_complete);
+	rv = reqmsg->mbm_error;
+	if (rv) {
+		(void) chan_msg_dequeue(&mbx->mbx_rx, reqmsg->mbm_req_id);
+		goto fail;
+	}
+	free_msg(reqmsg);
+
+	/* Start timer and wait for resp to be received. */
+	msg_timer_on(respmsg, resp_ttl);
+	wait_for_completion(&respmsg->mbm_complete);
+	rv = respmsg->mbm_error;
+	if (rv == 0)
+		*resplen = respmsg->mbm_len;
+
+	free_msg(respmsg);
+	return rv;
+
+fail:
+	if (reqmsg)
+		free_msg(reqmsg);
+	if (respmsg)
+		free_msg(respmsg);
+	return rv;
+}
+
+/*
+ * Posting notification or response to peer.
+ */
+static int mailbox_post(struct platform_device *pdev,
+	u64 reqid, void *buf, size_t len, bool sw_ch)
+{
+	int rv = 0;
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+	struct mailbox_msg *msg = NULL;
+
+	/* If peer is not alive, no point posting a msg. */
+	if (mbx->mbx_peer_dead)
+		return -ENOTCONN;
+
+	msg = alloc_msg(NULL, len);
+	if (!msg)
+		return -ENOMEM;
+
+	(void) memcpy(msg->mbm_data, buf, len);
+	msg->mbm_chan_sw = sw_ch;
+	msg->mbm_req_id = reqid ? reqid : (uintptr_t)msg->mbm_data;
+	msg->mbm_flags |= reqid ? MSG_FLAG_RESPONSE : MSG_FLAG_REQUEST;
+
+	rv = chan_msg_enqueue(&mbx->mbx_tx, msg);
+	if (rv == 0) {
+		wait_for_completion(&msg->mbm_complete);
+		rv = msg->mbm_error;
+	}
+
+	if (rv)
+		MBX_ERR(mbx, "failed to post msg, err=%d", rv);
+	free_msg(msg);
+	return rv;
+}
+
+static void process_request(struct mailbox *mbx, struct mailbox_msg *msg)
+{
+	/* Call client's registered callback to process request. */
+	mutex_lock(&mbx->mbx_listen_cb_lock);
+
+	if (mbx->mbx_listen_cb) {
+		mbx->mbx_listen_cb(mbx->mbx_listen_cb_arg, msg->mbm_data,
+			msg->mbm_len, msg->mbm_req_id, msg->mbm_error,
+			msg->mbm_chan_sw);
+	} else {
+		MBX_INFO(mbx, "msg dropped, no listener");
+	}
+
+	mutex_unlock(&mbx->mbx_listen_cb_lock);
+}
+
+/*
+ * Wait for request from peer.
+ */
+static void mailbox_recv_request(struct work_struct *work)
+{
+	struct mailbox_msg *msg = NULL;
+	struct mailbox *mbx =
+		container_of(work, struct mailbox, mbx_listen_worker);
+
+	while (!mbx->mbx_listen_stop) {
+		/* Only interested in request msg. */
+		(void) wait_for_completion_interruptible(&mbx->mbx_comp);
+
+		mutex_lock(&mbx->mbx_lock);
+
+		while ((msg = list_first_entry_or_null(&mbx->mbx_req_list,
+			struct mailbox_msg, mbm_list)) != NULL) {
+			list_del(&msg->mbm_list);
+			mbx->mbx_req_cnt--;
+			mutex_unlock(&mbx->mbx_lock);
+
+			/* Process msg without holding mutex. */
+			process_request(mbx, msg);
+			free_msg(msg);
+
+			mutex_lock(&mbx->mbx_lock);
+		}
+
+		mutex_unlock(&mbx->mbx_lock);
+	}
+
+	/* Drain all msg before quit. */
+	mutex_lock(&mbx->mbx_lock);
+	while ((msg = list_first_entry_or_null(&mbx->mbx_req_list,
+		struct mailbox_msg, mbm_list)) != NULL) {
+		list_del(&msg->mbm_list);
+		free_msg(msg);
+	}
+	mutex_unlock(&mbx->mbx_lock);
+}
+
+static int mailbox_listen(struct platform_device *pdev,
+	mailbox_msg_cb_t cb, void *cbarg)
+{
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+
+	mutex_lock(&mbx->mbx_listen_cb_lock);
+
+	mbx->mbx_listen_cb_arg = cbarg;
+	mbx->mbx_listen_cb = cb;
+
+	mutex_unlock(&mbx->mbx_listen_cb_lock);
+
+	return 0;
+}
+
+static int mailbox_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+	int ret = 0;
+
+	MBX_INFO(mbx, "handling IOCTL cmd: %d", cmd);
+
+	switch (cmd) {
+	case XRT_MAILBOX_POST: {
+		struct xrt_mailbox_ioctl_post *post =
+			(struct xrt_mailbox_ioctl_post *)arg;
+
+		ret = mailbox_post(pdev, post->xmip_req_id, post->xmip_data,
+			post->xmip_data_size, post->xmip_sw_ch);
+		break;
+	}
+	case XRT_MAILBOX_REQUEST: {
+		struct xrt_mailbox_ioctl_request *req =
+			(struct xrt_mailbox_ioctl_request *)arg;
+
+		ret = mailbox_request(pdev, req->xmir_req, req->xmir_req_size,
+			req->xmir_resp, &req->xmir_resp_size, req->xmir_sw_ch,
+			req->xmir_resp_ttl);
+		break;
+	}
+	case XRT_MAILBOX_LISTEN: {
+		struct xrt_mailbox_ioctl_listen *listen =
+			(struct xrt_mailbox_ioctl_listen *)arg;
+
+		ret = mailbox_listen(pdev,
+			listen->xmil_cb, listen->xmil_cb_arg);
+		break;
+	}
+	default:
+		MBX_ERR(mbx, "unknown cmd: %d", cmd);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
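+
+/*
+ * A minimal sketch of posting a notification through this leaf (the path
+ * that dispatches the cmd into mailbox_leaf_ioctl() belongs to the
+ * xrt-subdev framework and is only implied here):
+ *
+ *	struct xrt_mailbox_ioctl_post post = { 0 };
+ *
+ *	post.xmip_req_id = 0;		// 0 marks a notification, not a resp
+ *	post.xmip_data = buf;
+ *	post.xmip_data_size = len;
+ *	post.xmip_sw_ch = false;	// go through the HW channel
+ *	// ... dispatch XRT_MAILBOX_POST to the mailbox leaf ...
+ */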
+
+static void mailbox_stop(struct mailbox *mbx)
+{
+	/* Tear down all threads. */
+	del_timer_sync(&mbx->mbx_poll_timer);
+	chan_fini(&mbx->mbx_tx);
+	chan_fini(&mbx->mbx_rx);
+	listen_wq_fini(mbx);
+	BUG_ON(!(list_empty(&mbx->mbx_req_list)));
+}
+
+static int mailbox_start(struct mailbox *mbx)
+{
+	int ret;
+
+	mbx->mbx_req_cnt = 0;
+	mbx->mbx_peer_dead = false;
+	mbx->mbx_opened = 0;
+	mbx->mbx_listen_stop = false;
+
+	/* Dedicated thread for listening to peer request. */
+	mbx->mbx_listen_wq =
+		create_singlethread_workqueue(dev_name(&mbx->mbx_pdev->dev));
+	if (!mbx->mbx_listen_wq) {
+		MBX_ERR(mbx, "failed to create request-listen work queue");
+		ret = -ENOMEM;
+		goto out;
+	}
+	INIT_WORK(&mbx->mbx_listen_worker, mailbox_recv_request);
+	queue_work(mbx->mbx_listen_wq, &mbx->mbx_listen_worker);
+
+	/* Set up communication channels. */
+	ret = chan_init(mbx, MBXCT_RX, &mbx->mbx_rx, chan_do_rx);
+	if (ret != 0) {
+		MBX_ERR(mbx, "failed to init rx channel");
+		goto out;
+	}
+	ret = chan_init(mbx, MBXCT_TX, &mbx->mbx_tx, chan_do_tx);
+	if (ret != 0) {
+		MBX_ERR(mbx, "failed to init tx channel");
+		goto out;
+	}
+
+	if (!MBX_SW_ONLY(mbx)) {
+		/*
+		 * Only see status change when we have a full packet sent
+		 * or received.
+		 */
+		mailbox_reg_wr(mbx, &mbx->mbx_regs->mbr_rit, PACKET_SIZE - 1);
+		mailbox_reg_wr(mbx, &mbx->mbx_regs->mbr_sit, 0);
+
+		/* Disable both TX / RX intrs. We only do polling. */
+		mailbox_reg_wr(mbx, &mbx->mbx_regs->mbr_ie, 0x0);
+	}
+	timer_setup(&mbx->mbx_poll_timer, mailbox_poll_timer, 0);
+	mod_timer(&mbx->mbx_poll_timer, jiffies + MAILBOX_TTL_TIMER);
+
+out:
+	return ret;
+}
+
+static int mailbox_open(struct inode *inode, struct file *file)
+{
+	/*
+	 * Only allow one open from daemon. Mailbox msg can only be polled
+	 * by one daemon.
+	 */
+	struct platform_device *pdev = xrt_devnode_open_excl(inode);
+	struct mailbox *mbx = NULL;
+
+	if (!pdev)
+		return -ENXIO;
+
+	mbx = platform_get_drvdata(pdev);
+	if (!mbx)
+		return -ENXIO;
+
+	/*
+	 * Indicates that mpd/msd is up and running, assuming msd/mpd
+	 * is the only user of the software mailbox
+	 */
+	mutex_lock(&mbx->mbx_lock);
+	mbx->mbx_opened++;
+	mutex_unlock(&mbx->mbx_lock);
+
+	file->private_data = mbx;
+	return 0;
+}
+
+/*
+ * Called when the device goes from used to unused.
+ */
+static int mailbox_close(struct inode *inode, struct file *file)
+{
+	struct mailbox *mbx = file->private_data;
+
+	mutex_lock(&mbx->mbx_lock);
+	mbx->mbx_opened--;
+	mutex_unlock(&mbx->mbx_lock);
+	xrt_devnode_close(inode);
+	return 0;
+}
+
+/*
+ * Software channel TX handler. Msg goes out to peer.
+ *
+ * We either read the entire msg out or nothing and return error. Partial read
+ * is not supported.
+ */
+static ssize_t
+mailbox_read(struct file *file, char __user *buf, size_t n, loff_t *ignd)
+{
+	struct mailbox *mbx = file->private_data;
+	struct mailbox_channel *ch = &mbx->mbx_tx;
+	struct xcl_sw_chan args = { 0 };
+
+	if (n < sizeof(struct xcl_sw_chan)) {
+		MBX_ERR(mbx, "Software TX buf has no room for header");
+		return -EINVAL;
+	}
+
+	/* Wait until tx worker has something to transmit to peer. */
+	if (wait_event_interruptible(ch->sw_chan_wq,
+		atomic_read(&ch->sw_num_pending_msg) > 0) == -ERESTARTSYS) {
+		MBX_ERR(mbx, "Software TX channel handler is interrupted");
+		return -ERESTARTSYS;
+	}
+
+	/* We have something to send, do it now. */
+
+	mutex_lock(&ch->sw_chan_mutex);
+
+	/* Nothing to do. Someone is ahead of us and did the job? */
+	if (ch->sw_chan_msg_id == 0) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		MBX_ERR(mbx, "Software TX channel is empty");
+		return 0;
+	}
+
+	/* Copy header to user. */
+	args.id = ch->sw_chan_msg_id;
+	args.sz = ch->sw_chan_buf_sz;
+	args.flags = ch->sw_chan_msg_flags;
+	if (copy_to_user(buf, &args, sizeof(struct xcl_sw_chan)) != 0) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		return -EFAULT;
+	}
+
+	/*
+	 * Buffer passed in is too small for payload, return EMSGSIZE to ask
+	 * for a bigger one.
+	 */
+	if (ch->sw_chan_buf_sz > (n - sizeof(struct xcl_sw_chan))) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		/*
+		 * This error occurs when daemons try to query the size
+		 * of the msg. Log it as info to avoid flooding the system
+		 * console.
+		 */
+		MBX_INFO(mbx, "Software TX msg is too big");
+		return -EMSGSIZE;
+	}
+
+	/* Copy payload to user. */
+	if (copy_to_user(((struct xcl_sw_chan *)buf)->data,
+		ch->sw_chan_buf, ch->sw_chan_buf_sz) != 0) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		return -EFAULT;
+	}
+
+	/* Mark that job is done and we're ready for next TX msg. */
+	reset_sw_ch(ch);
+
+	mutex_unlock(&ch->sw_chan_mutex);
+	return args.sz + sizeof(struct xcl_sw_chan);
+}
+
+/*
+ * Software channel RX handler. Msg comes in from peer.
+ *
+ * We either receive the entire msg or nothing and return error. Partial write
+ * is not supported.
+ */
+static ssize_t
+mailbox_write(struct file *file, const char __user *buf, size_t n, loff_t *ignd)
+{
+	struct mailbox *mbx = file->private_data;
+	struct mailbox_channel *ch = &mbx->mbx_rx;
+	struct xcl_sw_chan args = { 0 };
+	void *payload = NULL;
+
+	if (n < sizeof(struct xcl_sw_chan)) {
+		MBX_ERR(mbx, "Software RX msg has invalid header");
+		return -EINVAL;
+	}
+
+	/* Wait until rx worker is ready for receiving next msg from peer. */
+	if (wait_event_interruptible(ch->sw_chan_wq,
+		atomic_read(&ch->sw_num_pending_msg) == 0) == -ERESTARTSYS) {
+		MBX_ERR(mbx, "Software RX channel handler is interrupted");
+		return -ERESTARTSYS;
+	}
+
+	/* Rx worker is ready to receive msg, do it now. */
+
+	mutex_lock(&ch->sw_chan_mutex);
+
+	/* No room for us. Someone is ahead of us and is using the channel? */
+	if (ch->sw_chan_msg_id != 0) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		MBX_ERR(mbx, "Software RX channel is busy");
+		return -EBUSY;
+	}
+
+	/* Copy header from user. */
+	if (copy_from_user(&args, buf, sizeof(struct xcl_sw_chan)) != 0) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		return -EFAULT;
+	}
+	if (args.id == 0 || args.sz == 0) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		MBX_ERR(mbx, "Software RX msg has malformed header");
+		return -EINVAL;
+	}
+
+	/* Copy payload from user. */
+	if (n < args.sz + sizeof(struct xcl_sw_chan)) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		MBX_ERR(mbx, "Software RX msg has invalid payload");
+		return -EINVAL;
+	}
+	payload = vmalloc(args.sz);
+	if (payload == NULL) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		return -ENOMEM;
+	}
+	if (copy_from_user(payload, ((struct xcl_sw_chan *)buf)->data,
+		args.sz) != 0) {
+		mutex_unlock(&ch->sw_chan_mutex);
+		vfree(payload);
+		return -EFAULT;
+	}
+
+	/* Set up received msg and notify rx worker. */
+	ch->sw_chan_buf_sz = args.sz;
+	ch->sw_chan_msg_id = args.id;
+	ch->sw_chan_msg_flags = args.flags;
+	ch->sw_chan_buf = payload;
+
+	atomic_inc(&ch->sw_num_pending_msg);
+
+	mutex_unlock(&ch->sw_chan_mutex);
+
+	return args.sz + sizeof(struct xcl_sw_chan);
+}
+
+static uint mailbox_poll(struct file *file, poll_table *wait)
+{
+	struct mailbox *mbx = file->private_data;
+	struct mailbox_channel *ch = &mbx->mbx_tx;
+	int counter;
+
+	poll_wait(file, &ch->sw_chan_wq, wait);
+	counter = atomic_read(&ch->sw_num_pending_msg);
+
+	MBX_DBG(mbx, "%s: %d", __func__, counter);
+	if (counter == 0)
+		return 0;
+	return POLLIN;
+}
+
+static int mailbox_remove(struct platform_device *pdev)
+{
+	struct mailbox *mbx = platform_get_drvdata(pdev);
+
+	BUG_ON(mbx == NULL);
+
+	/* Stop accessing from sysfs node. */
+	sysfs_remove_group(&pdev->dev.kobj, &mailbox_attrgroup);
+
+	mailbox_stop(mbx);
+
+	if (mbx->mbx_regs)
+		iounmap(mbx->mbx_regs);
+
+	MBX_INFO(mbx, "mailbox cleaned up successfully");
+
+	platform_set_drvdata(pdev, NULL);
+	return 0;
+}
+
+static int mailbox_probe(struct platform_device *pdev)
+{
+	struct mailbox *mbx = NULL;
+	struct resource *res;
+	int ret;
+
+	mbx = devm_kzalloc(DEV(pdev), sizeof(struct mailbox), GFP_KERNEL);
+	if (!mbx)
+		return -ENOMEM;
+
+	mbx->mbx_pdev = pdev;
+	platform_set_drvdata(pdev, mbx);
+
+	init_completion(&mbx->mbx_comp);
+	mutex_init(&mbx->mbx_lock);
+	mutex_init(&mbx->mbx_listen_cb_lock);
+	INIT_LIST_HEAD(&mbx->mbx_req_list);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (res != NULL) {
+		mbx->mbx_regs = ioremap(res->start, resource_size(res));
+		if (!mbx->mbx_regs) {
+			MBX_ERR(mbx, "failed to map in registers");
+			ret = -EIO;
+			goto failed;
+		}
+	}
+
+	ret = mailbox_start(mbx);
+	if (ret)
+		goto failed;
+
+	/* Enable access thru sysfs node. */
+	ret = sysfs_create_group(&pdev->dev.kobj, &mailbox_attrgroup);
+	if (ret != 0) {
+		MBX_ERR(mbx, "failed to init sysfs");
+		goto failed;
+	}
+
+	MBX_INFO(mbx, "successfully initialized");
+	return 0;
+
+failed:
+	mailbox_remove(pdev);
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_mailbox_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{ .ep_name = NODE_MAILBOX_VSEC},
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata mailbox_drvdata = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = mailbox_leaf_ioctl,
+	},
+	.xsd_file_ops = {
+		.xsf_ops = {
+			.owner = THIS_MODULE,
+			.open = mailbox_open,
+			.release = mailbox_close,
+			.read = mailbox_read,
+			.write = mailbox_write,
+			.poll = mailbox_poll,
+		},
+		.xsf_dev_name = "mailbox",
+	},
+};
+
+#define	XRT_MAILBOX	"xrt_mailbox"
+
+struct platform_device_id mailbox_id_table[] = {
+	{ XRT_MAILBOX, (kernel_ulong_t)&mailbox_drvdata },
+	{ },
+};
+
+struct platform_driver xrt_mailbox_driver = {
+	.probe		= mailbox_probe,
+	.remove		= mailbox_remove,
+	.driver		= {
+		.name	= XRT_MAILBOX,
+	},
+	.id_table = mailbox_id_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-partition.c b/drivers/fpga/alveo/lib/subdevs/xrt-partition.c
new file mode 100644
index 000000000000..eb2418a0bfb1
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-partition.c
@@ -0,0 +1,261 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Partition Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-partition.h"
+#include "xrt-metadata.h"
+#include "../xrt-main.h"
+
+#define	XRT_PART "xrt_partition"
+
+struct xrt_partition {
+	struct platform_device *pdev;
+	struct xrt_subdev_pool leaves;
+	bool leaves_created;
+	struct mutex lock;
+};
+
+static int xrt_part_parent_cb(struct device *dev, void *parg,
+	u32 cmd, void *arg)
+{
+	int rc;
+	struct platform_device *pdev =
+		container_of(dev, struct platform_device, dev);
+	struct xrt_partition *xp = (struct xrt_partition *)parg;
+
+	switch (cmd) {
+	case XRT_PARENT_GET_LEAF_HOLDERS: {
+		struct xrt_parent_ioctl_get_holders *holders =
+			(struct xrt_parent_ioctl_get_holders *)arg;
+		rc = xrt_subdev_pool_get_holders(&xp->leaves,
+			holders->xpigh_pdev, holders->xpigh_holder_buf,
+			holders->xpigh_holder_buf_len);
+		break;
+	}
+	default:
+		/* Forward parent call to root. */
+		rc = xrt_subdev_parent_ioctl(pdev, cmd, arg);
+		break;
+	}
+
+	return rc;
+}
+
+static int xrt_part_create_leaves(struct xrt_partition *xp)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(xp->pdev);
+	enum xrt_subdev_id did;
+	struct xrt_subdev_endpoints *eps = NULL;
+	int ep_count = 0, i, ret = 0, failed = 0;
+	long mlen;
+	char *dtb, *part_dtb = NULL;
+	const char *ep_name;
+
+	mutex_lock(&xp->lock);
+
+	if (xp->leaves_created) {
+		mutex_unlock(&xp->lock);
+		return -EEXIST;
+	}
+
+	xrt_info(xp->pdev, "bringing up leaves...");
+
+	/* Create all leaves based on dtb. */
+	if (!pdata)
+		goto bail;
+
+	mlen = xrt_md_size(DEV(xp->pdev), pdata->xsp_dtb);
+	if (mlen <= 0) {
+		xrt_err(xp->pdev, "invalid dtb, len %ld", mlen);
+		goto bail;
+	}
+
+	part_dtb = vmalloc(mlen);
+	if (!part_dtb)
+		goto bail;
+
+	memcpy(part_dtb, pdata->xsp_dtb, mlen);
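+	/*
+	 * Walk all known subdev drivers and hand each the endpoints it asks
+	 * for: matching endpoints are copied from the partition's dtb into a
+	 * per-leaf dtb and removed from the partition's copy; a leaf is only
+	 * created once the driver's minimum endpoint count is satisfied,
+	 * otherwise the endpoints are returned to the partition's dtb.
+	 */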
+	for (did = 0; did < XRT_SUBDEV_NUM;) {
+		eps = eps ? eps + 1 : xrt_drv_get_endpoints(did);
+		if (!eps || !eps->xse_names) {
+			did++;
+			eps = NULL;
+			continue;
+		}
+		ret = xrt_md_create(DEV(xp->pdev), &dtb);
+		if (ret) {
+			xrt_err(xp->pdev, "create md failed, drv %s",
+				xrt_drv_name(did));
+			failed++;
+			continue;
+		}
+		for (i = 0; eps->xse_names[i].ep_name ||
+		    eps->xse_names[i].regmap_name; i++) {
+			if (!eps->xse_names[i].ep_name) {
+				ret = xrt_md_get_compatible_epname(
+					DEV(xp->pdev), part_dtb,
+					eps->xse_names[i].regmap_name,
+					&ep_name);
+				if (ret)
+					continue;
+			} else {
+				ep_name = (char *)eps->xse_names[i].ep_name;
+			}
+			ret = xrt_md_copy_endpoint(DEV(xp->pdev),
+				dtb, part_dtb, ep_name,
+				(char *)eps->xse_names[i].regmap_name, NULL);
+			if (ret)
+				continue;
+			xrt_md_del_endpoint(DEV(xp->pdev), part_dtb, ep_name,
+				(char *)eps->xse_names[i].regmap_name);
+			ep_count++;
+		}
+		if (ep_count >= eps->xse_min_ep) {
+			ret = xrt_subdev_pool_add(&xp->leaves, did,
+				xrt_part_parent_cb, xp, dtb);
+			eps = NULL;
+			if (ret < 0) {
+				failed++;
+				xrt_err(xp->pdev, "failed to create %s: %d",
+					xrt_drv_name(did), ret);
+			}
+		} else if (ep_count > 0) {
+			xrt_md_copy_all_eps(DEV(xp->pdev), part_dtb, dtb);
+		}
+		vfree(dtb);
+		ep_count = 0;
+	}
+
+	xp->leaves_created = true;
+
+bail:
+	mutex_unlock(&xp->lock);
+
+	vfree(part_dtb);	/* vfree(NULL) is a no-op */
+
+	return failed == 0 ? 0 : -ECHILD;
+}
+
+static int xrt_part_remove_leaves(struct xrt_partition *xp)
+{
+	int rc;
+
+	mutex_lock(&xp->lock);
+
+	if (!xp->leaves_created) {
+		mutex_unlock(&xp->lock);
+		return 0;
+	}
+
+	xrt_info(xp->pdev, "tearing down leaves...");
+	rc = xrt_subdev_pool_fini(&xp->leaves);
+	xp->leaves_created = false;
+
+	mutex_unlock(&xp->lock);
+
+	return rc;
+}
+
+static int xrt_part_probe(struct platform_device *pdev)
+{
+	struct xrt_partition *xp;
+
+	xrt_info(pdev, "probing...");
+
+	xp = devm_kzalloc(&pdev->dev, sizeof(*xp), GFP_KERNEL);
+	if (!xp)
+		return -ENOMEM;
+
+	xp->pdev = pdev;
+	mutex_init(&xp->lock);
+	xrt_subdev_pool_init(DEV(pdev), &xp->leaves);
+	platform_set_drvdata(pdev, xp);
+
+	return 0;
+}
+
+static int xrt_part_remove(struct platform_device *pdev)
+{
+	struct xrt_partition *xp = platform_get_drvdata(pdev);
+
+	xrt_info(pdev, "leaving...");
+	return xrt_part_remove_leaves(xp);
+}
+
+static int xrt_part_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	int rc = 0;
+	struct xrt_partition *xp = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_PARTITION_GET_LEAF: {
+		struct xrt_parent_ioctl_get_leaf *get_leaf =
+			(struct xrt_parent_ioctl_get_leaf *)arg;
+
+		rc = xrt_subdev_pool_get(&xp->leaves, get_leaf->xpigl_match_cb,
+			get_leaf->xpigl_match_arg, DEV(get_leaf->xpigl_pdev),
+			&get_leaf->xpigl_leaf);
+		break;
+	}
+	case XRT_PARTITION_PUT_LEAF: {
+		struct xrt_parent_ioctl_put_leaf *put_leaf =
+			(struct xrt_parent_ioctl_put_leaf *)arg;
+
+		rc = xrt_subdev_pool_put(&xp->leaves, put_leaf->xpipl_leaf,
+			DEV(put_leaf->xpipl_pdev));
+		break;
+	}
+	case XRT_PARTITION_INIT_CHILDREN:
+		rc = xrt_part_create_leaves(xp);
+		break;
+	case XRT_PARTITION_FINI_CHILDREN:
+		rc = xrt_part_remove_leaves(xp);
+		break;
+	case XRT_PARTITION_EVENT: {
+		struct xrt_partition_ioctl_event *evt =
+			(struct xrt_partition_ioctl_event *)arg;
+		struct xrt_parent_ioctl_evt_cb *cb = evt->xpie_cb;
+
+		rc = xrt_subdev_pool_event(&xp->leaves, cb->xevt_pdev,
+			cb->xevt_match_cb, cb->xevt_match_arg, cb->xevt_cb,
+			evt->xpie_evt);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unknown IOCTL cmd %d", cmd);
+		rc = -EINVAL;
+		break;
+	}
+	return rc;
+}
+
+struct xrt_subdev_drvdata xrt_part_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_part_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_part_id_table[] = {
+	{ XRT_PART, (kernel_ulong_t)&xrt_part_data },
+	{ },
+};
+
+struct platform_driver xrt_partition_driver = {
+	.driver	= {
+		.name    = XRT_PART,
+	},
+	.probe   = xrt_part_probe,
+	.remove  = xrt_part_remove,
+	.id_table = xrt_part_id_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-qspi.c b/drivers/fpga/alveo/lib/subdevs/xrt-qspi.c
new file mode 100644
index 000000000000..c8ae5c386983
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-qspi.c
@@ -0,0 +1,1347 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA QSPI flash controller Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/delay.h>
+#include <linux/uaccess.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-flash.h"
+
+#define	XRT_QSPI "xrt_qspi"
+
+/* Status write command */
+#define QSPI_CMD_STATUSREG_WRITE		0x01
+/* Page Program command */
+#define QSPI_CMD_PAGE_PROGRAM			0x02
+/* Random read command */
+#define QSPI_CMD_RANDOM_READ			0x03
+/* Status read command */
+#define QSPI_CMD_STATUSREG_READ			0x05
+/* Enable flash write */
+#define QSPI_CMD_WRITE_ENABLE			0x06
+/* 4KB Subsector Erase command */
+#define QSPI_CMD_4KB_SUBSECTOR_ERASE		0x20
+/* Quad Input Fast Program */
+#define QSPI_CMD_QUAD_WRITE			0x32
+/* Extended quad input fast program */
+#define QSPI_CMD_EXT_QUAD_WRITE			0x38
+/* Dual Output Fast Read */
+#define QSPI_CMD_DUAL_READ			0x3B
+/* Clear flag register */
+#define QSPI_CMD_CLEAR_FLAG_REGISTER		0x50
+/* 32KB Subsector Erase command */
+#define QSPI_CMD_32KB_SUBSECTOR_ERASE		0x52
+/* Enhanced volatile configuration register write command */
+#define QSPI_CMD_ENH_VOLATILE_CFGREG_WRITE	0x61
+/* Enhanced volatile configuration register read command */
+#define QSPI_CMD_ENH_VOLATILE_CFGREG_READ	0x65
+/* Quad Output Fast Read */
+#define QSPI_CMD_QUAD_READ			0x6B
+/* Status flag read command */
+#define QSPI_CMD_FLAG_STATUSREG_READ		0x70
+/* Volatile configuration register write command */
+#define QSPI_CMD_VOLATILE_CFGREG_WRITE		0x81
+/* Volatile configuration register read command */
+#define QSPI_CMD_VOLATILE_CFGREG_READ		0x85
+/* Read ID Code */
+#define QSPI_CMD_IDCODE_READ			0x9F
+/* Non volatile configuration register write command */
+#define QSPI_CMD_NON_VOLATILE_CFGREG_WRITE	0xB1
+/* Non volatile configuration register read command */
+#define QSPI_CMD_NON_VOLATILE_CFGREG_READ	0xB5
+/* Dual IO Fast Read */
+#define QSPI_CMD_DUAL_IO_READ			0xBB
+/* Extended address register write command */
+#define QSPI_CMD_EXTENDED_ADDRESS_REG_WRITE	0xC5
+/* Bulk Erase command */
+#define QSPI_CMD_BULK_ERASE			0xC7
+/* Extended address register read command */
+#define QSPI_CMD_EXTENDED_ADDRESS_REG_READ	0xC8
+/* Sector Erase command */
+#define QSPI_CMD_SECTOR_ERASE			0xD8
+/* Quad IO Fast Read */
+#define QSPI_CMD_QUAD_IO_READ			0xEB
+
+#define	QSPI_ERR(flash, fmt, arg...)	xrt_err((flash)->pdev, fmt, ##arg)
+#define	QSPI_WARN(flash, fmt, arg...)	xrt_warn((flash)->pdev, fmt, ##arg)
+#define	QSPI_INFO(flash, fmt, arg...)	xrt_info((flash)->pdev, fmt, ##arg)
+#define	QSPI_DBG(flash, fmt, arg...)	xrt_dbg((flash)->pdev, fmt, ##arg)
+
+/*
+ * QSPI control reg bits.
+ */
+#define QSPI_CR_LOOPBACK		(1 << 0)
+#define QSPI_CR_ENABLED			(1 << 1)
+#define QSPI_CR_MASTER_MODE		(1 << 2)
+#define QSPI_CR_CLK_POLARITY		(1 << 3)
+#define QSPI_CR_CLK_PHASE		(1 << 4)
+#define QSPI_CR_TXFIFO_RESET		(1 << 5)
+#define QSPI_CR_RXFIFO_RESET		(1 << 6)
+#define QSPI_CR_MANUAL_SLAVE_SEL	(1 << 7)
+#define QSPI_CR_TRANS_INHIBIT		(1 << 8)
+#define QSPI_CR_LSB_FIRST		(1 << 9)
+#define QSPI_CR_INIT_STATE		(QSPI_CR_TRANS_INHIBIT		| \
+					QSPI_CR_MANUAL_SLAVE_SEL	| \
+					QSPI_CR_RXFIFO_RESET		| \
+					QSPI_CR_TXFIFO_RESET		| \
+					QSPI_CR_ENABLED			| \
+					QSPI_CR_MASTER_MODE)
+
+/*
+ * QSPI status reg bits.
+ */
+#define QSPI_SR_RX_EMPTY		(1 << 0)
+#define QSPI_SR_RX_FULL			(1 << 1)
+#define QSPI_SR_TX_EMPTY		(1 << 2)
+#define QSPI_SR_TX_FULL			(1 << 3)
+#define QSPI_SR_MODE_ERR		(1 << 4)
+#define QSPI_SR_SLAVE_MODE		(1 << 5)
+#define QSPI_SR_CPOL_CPHA_ERR		(1 << 6)
+#define QSPI_SR_SLAVE_MODE_ERR		(1 << 7)
+#define QSPI_SR_MSB_ERR			(1 << 8)
+#define QSPI_SR_LOOPBACK_ERR		(1 << 9)
+#define QSPI_SR_CMD_ERR			(1 << 10)
+#define QSPI_SR_ERRS			(QSPI_SR_CMD_ERR	|	\
+					QSPI_SR_LOOPBACK_ERR	|	\
+					QSPI_SR_MSB_ERR		|	\
+					QSPI_SR_SLAVE_MODE_ERR	|	\
+					QSPI_SR_CPOL_CPHA_ERR	|	\
+					QSPI_SR_MODE_ERR)
+
+#define	MAX_NUM_OF_SLAVES	2
+#define	SLAVE_NONE		(-1)
+#define SLAVE_SELECT_NONE	((1 << MAX_NUM_OF_SLAVES) - 1)
+
+/*
+ * We support erasing flash memory in three page-size units. Page
+ * read-modify-write is done in the smallest page unit.
+ */
+#define	QSPI_LARGE_PAGE_SIZE	(32UL * 1024)
+#define	QSPI_HUGE_PAGE_SIZE	(64UL * 1024)
+#define	QSPI_PAGE_SIZE		(4UL * 1024)
+#define	QSPI_PAGE_MASK		(QSPI_PAGE_SIZE - 1)
+#define	QSPI_PAGE_ALIGN(off)	((off) & ~QSPI_PAGE_MASK)
+#define	QSPI_PAGE_OFFSET(off)	((off) & QSPI_PAGE_MASK)
+static inline size_t QSPI_PAGE_ROUNDUP(loff_t offset)
+{
+	if (QSPI_PAGE_OFFSET(offset))
+		return round_up(offset, QSPI_PAGE_SIZE);
+	return offset + QSPI_PAGE_SIZE;
+}
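+
+/*
+ * Worked example of the roundup above (offsets chosen purely for
+ * illustration): QSPI_PAGE_ROUNDUP(0x1001) rounds up to the next 4KB
+ * boundary, 0x2000, while an already-aligned 0x1000 is bumped to the
+ * following page, also 0x2000. Callers therefore always get the
+ * exclusive end of the page containing the offset.
+ */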
+
+/*
+ * Wait for condition to be true for at most 1 second.
+ * Return true if timed out, false otherwise.
+ */
+#define QSPI_BUSY_WAIT(condition)					\
+({									\
+	const int interval = 5; /* in microsec */			\
+	int retry = 1000 * 1000 / interval; /* wait for 1 second */	\
+	while (retry && !(condition)) {					\
+		udelay(interval);					\
+		retry--;						\
+	}								\
+	(retry == 0);							\
+})
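+
+/*
+ * A minimal usage sketch for the macro above: a non-zero result means
+ * the one second budget expired, e.g.
+ *
+ *	if (QSPI_BUSY_WAIT(qspi_is_ready(flash)))
+ *		return -ETIMEDOUT;
+ *
+ * The condition is re-evaluated every 5 microseconds.
+ */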
+
+static size_t micron_code2sectors(u8 code)
+{
+	size_t max_sectors = 0;
+
+	switch (code) {
+	case 0x17:
+	case 0x18:
+		max_sectors = 1;
+		break;
+	case 0x19:
+		max_sectors = 2;
+		break;
+	case 0x20:
+		max_sectors = 4;
+		break;
+	case 0x21:
+		max_sectors = 8;
+		break;
+	case 0x22:
+		max_sectors = 16;
+		break;
+	default:
+		break;
+	}
+	return max_sectors;
+}
+
+static size_t macronix_code2sectors(u8 code)
+{
+	if (code < 0x38 || code > 0x3c)
+		return 0;
+	return (1 << (code - 0x38));
+}
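+
+/*
+ * Example decode (size code chosen for illustration): a Macronix code
+ * of 0x3a yields 1 << (0x3a - 0x38) = 4 sectors, i.e. 64MB with the
+ * 16MB-per-sector convention applied in qspi_get_ID() below.
+ */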
+
+static u8 macronix_write_cmd(void)
+{
+	return QSPI_CMD_PAGE_PROGRAM;
+}
+
+static u8 micron_write_cmd(void)
+{
+	return QSPI_CMD_QUAD_WRITE;
+}
+
+/*
+ * Flash memory vendor specific operations.
+ */
+static struct qspi_flash_vendor {
+	u8 vendor_id;
+	const char *vendor_name;
+	size_t (*code2sectors)(u8 code);
+	u8 (*write_cmd)(void);
+} vendors[] = {
+	{ 0x20, "micron", micron_code2sectors, micron_write_cmd },
+	{ 0xc2, "macronix", macronix_code2sectors, macronix_write_cmd },
+};
+
+struct qspi_flash_addr {
+	u8 slave;
+	u8 sector;
+	u8 addr_lo;
+	u8 addr_mid;
+	u8 addr_hi;
+};
+
+/*
+ * QSPI flash controller IP register layout
+ */
+struct qspi_reg {
+	u32	qspi_padding1[16];
+	u32	qspi_reset;
+	u32	qspi_padding2[7];
+	u32	qspi_ctrl;
+	u32	qspi_status;
+	u32	qspi_tx;
+	u32	qspi_rx;
+	u32	qspi_slave;
+	u32	qspi_tx_fifo;
+	u32	qspi_rx_fifo;
+} __packed;
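+
+/*
+ * For reference, the layout above places the registers at these byte
+ * offsets from the IP base: qspi_reset at 0x40, qspi_ctrl at 0x60,
+ * qspi_status at 0x64, qspi_tx at 0x68, qspi_rx at 0x6c, qspi_slave at
+ * 0x70, qspi_tx_fifo at 0x74 and qspi_rx_fifo at 0x78.
+ */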
+
+struct xrt_qspi {
+	struct platform_device	*pdev;
+	struct resource *res;
+	struct mutex io_lock;
+	size_t flash_size;
+	u8 *io_buf;
+	struct qspi_reg *qspi_regs;
+	size_t qspi_fifo_depth;
+	u8 qspi_curr_sector;
+	struct qspi_flash_vendor *vendor;
+	int qspi_curr_slave;
+};
+
+static inline const char *reg2name(struct xrt_qspi *flash, u32 *reg)
+{
+	static const char * const reg_names[] = {
+		"qspi_ctrl",
+		"qspi_status",
+		"qspi_tx",
+		"qspi_rx",
+		"qspi_slave",
+		"qspi_tx_fifo",
+		"qspi_rx_fifo",
+	};
+	size_t off = (uintptr_t)reg - (uintptr_t)flash->qspi_regs;
+
+	if (off == offsetof(struct qspi_reg, qspi_reset))
+		return "qspi_reset";
+	if (off < offsetof(struct qspi_reg, qspi_ctrl))
+		return "padding";
+	off -= offsetof(struct qspi_reg, qspi_ctrl);
+	return reg_names[off / sizeof(u32)];
+}
+
+static inline u32 qspi_reg_rd(struct xrt_qspi *flash, u32 *reg)
+{
+	u32 val = ioread32(reg);
+
+	QSPI_DBG(flash, "REG_RD(%s)=0x%x", reg2name(flash, reg), val);
+	return val;
+}
+
+static inline void qspi_reg_wr(struct xrt_qspi *flash, u32 *reg, u32 val)
+{
+	QSPI_DBG(flash, "REG_WR(%s,0x%x)", reg2name(flash, reg), val);
+	iowrite32(val, reg);
+}
+
+static inline u32 qspi_get_status(struct xrt_qspi *flash)
+{
+	return qspi_reg_rd(flash, &flash->qspi_regs->qspi_status);
+}
+
+static inline u32 qspi_get_ctrl(struct xrt_qspi *flash)
+{
+	return qspi_reg_rd(flash, &flash->qspi_regs->qspi_ctrl);
+}
+
+static inline void qspi_set_ctrl(struct xrt_qspi *flash, u32 ctrl)
+{
+	qspi_reg_wr(flash, &flash->qspi_regs->qspi_ctrl, ctrl);
+}
+
+static inline void qspi_activate_slave(struct xrt_qspi *flash, int index)
+{
+	u32 slave_reg;
+
+	if (index == SLAVE_NONE)
+		slave_reg = SLAVE_SELECT_NONE;
+	else
+		slave_reg = ~(1 << index);
+	qspi_reg_wr(flash, &flash->qspi_regs->qspi_slave, slave_reg);
+}
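+
+/*
+ * The slave select register is active low: selecting slave 0 above
+ * writes ~(1 << 0) = 0xfffffffe, while SLAVE_SELECT_NONE sets all
+ * supported slave bits to deselect everything.
+ */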
+
+/*
+ * Pull one byte from flash RX fifo.
+ * So far, only 8-bit data width is supported.
+ */
+static inline u8 qspi_read8(struct xrt_qspi *flash)
+{
+	return (u8)qspi_reg_rd(flash, &flash->qspi_regs->qspi_rx);
+}
+
+/*
+ * Push one byte to flash TX fifo.
+ * So far, only 8-bit data width is supported.
+ */
+static inline void qspi_send8(struct xrt_qspi *flash, u8 val)
+{
+	qspi_reg_wr(flash, &flash->qspi_regs->qspi_tx, val);
+}
+
+static inline bool qspi_has_err(struct xrt_qspi *flash)
+{
+	u32 status = qspi_get_status(flash);
+
+	if (!(status & QSPI_SR_ERRS))
+		return false;
+
+	QSPI_ERR(flash, "QSPI error status: 0x%x", status);
+	return true;
+}
+
+/*
+ * Caller should make sure the flash controller has exactly
+ * len bytes in the fifo. It's an error if we pull out less.
+ */
+static int qspi_rx(struct xrt_qspi *flash, u8 *buf, size_t len)
+{
+	size_t cnt;
+	u8 c;
+
+	for (cnt = 0; cnt < len; cnt++) {
+		if ((qspi_get_status(flash) & QSPI_SR_RX_EMPTY) != 0)
+			return -EINVAL;
+
+		c = qspi_read8(flash);
+
+		if (buf)
+			buf[cnt] = c;
+	}
+
+	if ((qspi_get_status(flash) & QSPI_SR_RX_EMPTY) == 0) {
+		QSPI_ERR(flash, "failed to drain RX fifo");
+		return -EINVAL;
+	}
+
+	if (qspi_has_err(flash))
+		return -EINVAL;
+
+	return 0;
+}
+
+/*
+ * Caller should make sure the fifo is large enough to host len bytes.
+ */
+static int qspi_tx(struct xrt_qspi *flash, u8 *buf, size_t len)
+{
+	u32 ctrl = qspi_get_ctrl(flash);
+	int i;
+
+	BUG_ON(len > flash->qspi_fifo_depth);
+
+	/* Stop transferring to the flash. */
+	qspi_set_ctrl(flash, ctrl | QSPI_CR_TRANS_INHIBIT);
+
+	/* Fill out the FIFO. */
+	for (i = 0; i < len; i++)
+		qspi_send8(flash, buf[i]);
+
+	/* Start transferring to the flash. */
+	qspi_set_ctrl(flash, ctrl & ~QSPI_CR_TRANS_INHIBIT);
+
+	/* Wait for the FIFO to become empty again. */
+	if (QSPI_BUSY_WAIT(qspi_get_status(flash) &
+		(QSPI_SR_TX_EMPTY | QSPI_SR_ERRS))) {
+		if (qspi_has_err(flash)) {
+			QSPI_ERR(flash, "QSPI write failed");
+		} else {
+			QSPI_ERR(flash, "QSPI write timeout, status: 0x%x",
+				qspi_get_status(flash));
+		}
+		return -ETIMEDOUT;
+	}
+
+	/* Always stop transferring to the flash after we finish. */
+	qspi_set_ctrl(flash, ctrl | QSPI_CR_TRANS_INHIBIT);
+
+	if (qspi_has_err(flash))
+		return -EINVAL;
+
+	return 0;
+}
+
+/*
+ * Reset both RX and TX FIFO.
+ */
+static int qspi_reset_fifo(struct xrt_qspi *flash)
+{
+	const u32 status_fifo_mask = QSPI_SR_TX_FULL | QSPI_SR_RX_FULL |
+		QSPI_SR_TX_EMPTY | QSPI_SR_RX_EMPTY;
+	u32 fifo_status = qspi_get_status(flash) & status_fifo_mask;
+
+	if (fifo_status == (QSPI_SR_TX_EMPTY | QSPI_SR_RX_EMPTY))
+		return 0;
+
+	qspi_set_ctrl(flash, qspi_get_ctrl(flash) | QSPI_CR_TXFIFO_RESET |
+		QSPI_CR_RXFIFO_RESET);
+
+	if (QSPI_BUSY_WAIT((qspi_get_status(flash) & status_fifo_mask) ==
+		(QSPI_SR_TX_EMPTY | QSPI_SR_RX_EMPTY))) {
+		QSPI_ERR(flash, "failed to reset FIFO, status: 0x%x",
+			qspi_get_status(flash));
+		return -ETIMEDOUT;
+	}
+	return 0;
+}
+
+static int qspi_transaction(struct xrt_qspi *flash,
+	u8 *buf, size_t len, bool need_output)
+{
+	int ret = 0;
+
+	/* Reset both the TX and RX fifo before starting transaction. */
+	ret = qspi_reset_fifo(flash);
+	if (ret)
+		return ret;
+
+	/* The slave index should be within range. */
+	if (flash->qspi_curr_slave >= MAX_NUM_OF_SLAVES)
+		return -EINVAL;
+	qspi_activate_slave(flash, flash->qspi_curr_slave);
+
+	ret = qspi_tx(flash, buf, len);
+	if (ret)
+		return ret;
+
+	if (need_output) {
+		ret = qspi_rx(flash, buf, len);
+	} else {
+		/* Still need to drain the FIFO even when the data is not wanted. */
+		(void) qspi_rx(flash, NULL, len);
+	}
+
+	/* Always need to reset slave select register after each transaction */
+	qspi_activate_slave(flash, SLAVE_NONE);
+
+	return ret;
+}
+
+static size_t qspi_get_fifo_depth(struct xrt_qspi *flash)
+{
+	size_t depth = 0;
+	u32 ctrl;
+
+	/* Reset TX fifo. */
+	if (qspi_reset_fifo(flash))
+		return depth;
+
+	/* Stop transferring to the flash. */
+	ctrl = qspi_get_ctrl(flash);
+	qspi_set_ctrl(flash, ctrl | QSPI_CR_TRANS_INHIBIT);
+
+	/*
+	 * Find out the fifo depth by pushing data to the QSPI until the
+	 * fifo is full. We can choose to send any data, but sending 0
+	 * seems to cause an error, so pick a non-zero value.
+	 */
+	while (!(qspi_get_status(flash) & (QSPI_SR_TX_FULL | QSPI_SR_ERRS))) {
+		qspi_send8(flash, 1);
+		depth++;
+	}
+
+	/* Make sure flash is still in good shape. */
+	if (qspi_has_err(flash))
+		return 0;
+
+	/* Reset RX/TX fifo and restore ctrl since we just touched them. */
+	qspi_set_ctrl(flash, ctrl);
+	(void) qspi_reset_fifo(flash);
+
+	return depth;
+}
+
+/*
+ * Exec flash IO command on specified slave.
+ */
+static inline int qspi_exec_io_cmd(struct xrt_qspi *flash,
+	size_t len, bool output_needed)
+{
+	char *buf = flash->io_buf;
+
+	return qspi_transaction(flash, buf, len, output_needed);
+}
+
+/* Test if flash memory is ready. */
+static bool qspi_is_ready(struct xrt_qspi *flash)
+{
+	/*
+	 * Reading the flash device status requires a dummy byte after
+	 * the cmd byte. The output is in the 2nd byte.
+	 */
+	u8 cmd[2] = { QSPI_CMD_STATUSREG_READ, };
+	int ret = qspi_transaction(flash, cmd, sizeof(cmd), true);
+
+	if (ret || (cmd[1] & 0x1)) // flash device is busy
+		return false;
+
+	return true;
+}
+
+static int qspi_enable_write(struct xrt_qspi *flash)
+{
+	u8 cmd = QSPI_CMD_WRITE_ENABLE;
+	int ret = qspi_transaction(flash, &cmd, 1, false);
+
+	if (ret)
+		QSPI_ERR(flash, "Failed to enable flash write: %d", ret);
+	return ret;
+}
+
+static int qspi_set_sector(struct xrt_qspi *flash, u8 sector)
+{
+	int ret = 0;
+	u8 cmd[] = { QSPI_CMD_EXTENDED_ADDRESS_REG_WRITE, sector };
+
+	if (sector == flash->qspi_curr_sector)
+		return 0;
+
+	QSPI_DBG(flash, "setting sector to %d", sector);
+
+	ret = qspi_enable_write(flash);
+	if (ret)
+		return ret;
+
+	ret = qspi_transaction(flash, cmd, sizeof(cmd), false);
+	if (ret) {
+		QSPI_ERR(flash, "Failed to set sector %d: %d", sector, ret);
+		return ret;
+	}
+
+	flash->qspi_curr_sector = sector;
+	return ret;
+}
+
+/*
+ * Split a linear offset into slave index, sector (extended address) and
+ * the 24-bit address within the sector.
+ */
+static inline void qspi_offset2faddr(loff_t addr, struct qspi_flash_addr *faddr)
+{
+	faddr->slave = (u8)(addr >> 56);
+	faddr->sector = (u8)(addr >> 24);
+	faddr->addr_lo = (u8)(addr);
+	faddr->addr_mid = (u8)(addr >> 8);
+	faddr->addr_hi = (u8)(addr >> 16);
+}
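+
+/*
+ * Worked example (offset chosen for illustration): flash offset
+ * 0x0100000002345678 decomposes into slave 0x01, sector 0x02 and the
+ * 24-bit in-sector address 0x345678 (addr_hi 0x34, addr_mid 0x56,
+ * addr_lo 0x78). qspi_faddr2offset() below reverses the mapping.
+ */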
+
+static inline loff_t qspi_faddr2offset(struct qspi_flash_addr *faddr)
+{
+	loff_t off = 0;
+
+	off |= faddr->sector;
+	off <<= 8;
+	off |= faddr->addr_hi;
+	off <<= 8;
+	off |= faddr->addr_mid;
+	off <<= 8;
+	off |= faddr->addr_lo;
+	off |= ((u64)faddr->slave) << 56;
+	return off;
+}
+
+/* IO cmd starts with op code followed by address. */
+static inline int
+qspi_setup_io_cmd_header(struct xrt_qspi *flash,
+	u8 op, struct qspi_flash_addr *faddr, size_t *header_len)
+{
+	int ret = 0;
+
+	/* Set sector (the high byte of a 32-bit address), if needed. */
+	ret = qspi_set_sector(flash, faddr->sector);
+	if (ret == 0) {
+		/* The rest of address bytes are in cmd. */
+		flash->io_buf[0] = op;
+		flash->io_buf[1] = faddr->addr_hi;
+		flash->io_buf[2] = faddr->addr_mid;
+		flash->io_buf[3] = faddr->addr_lo;
+		*header_len = 4;
+	}
+	return ret;
+}
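+
+/*
+ * After a successful header setup above, io_buf starts with:
+ *
+ *	io_buf[0] = op code (e.g. QSPI_CMD_QUAD_READ)
+ *	io_buf[1] = addr_hi
+ *	io_buf[2] = addr_mid
+ *	io_buf[3] = addr_lo
+ *
+ * Write payload or read dummy/readback bytes follow in the same buf.
+ */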
+
+static bool qspi_wait_until_ready(struct xrt_qspi *flash)
+{
+	if (QSPI_BUSY_WAIT(qspi_is_ready(flash))) {
+		QSPI_ERR(flash, "QSPI flash device is not ready");
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Do one FIFO read from flash.
+ * @cnt contains bytes actually read on successful return.
+ */
+static int qspi_fifo_rd(struct xrt_qspi *flash,
+	loff_t off, u8 *buf, size_t *cnt)
+{
+	/* For read cmd, we need to exclude a few more dummy bytes in FIFO. */
+	const size_t read_dummy_len = 4;
+
+	int ret;
+	struct qspi_flash_addr faddr;
+	size_t header_len, total_len, payload_len;
+
+	/* Should not cross page boundary. */
+	BUG_ON(off + *cnt > QSPI_PAGE_ROUNDUP(off));
+	qspi_offset2faddr(off, &faddr);
+
+	ret = qspi_setup_io_cmd_header(flash,
+		QSPI_CMD_QUAD_READ, &faddr, &header_len);
+	if (ret)
+		return ret;
+
+	/* Figure out length of IO for this read. */
+
+	/*
+	 * One read should not be more than one fifo depth, so that we don't
+	 * overrun flash->io_buf.
+	 * The first header_len + read_dummy_len bytes in the output buffer
+	 * are always garbage, so make room for them. What a wonderful memory
+	 * controller!!
+	 */
+	payload_len = min(*cnt,
+		flash->qspi_fifo_depth - header_len - read_dummy_len);
+	total_len = payload_len + header_len + read_dummy_len;
+
+	QSPI_DBG(flash, "reading %ld bytes @0x%llx", payload_len, off);
+
+	/* Now do the read. */
+
+	/*
+	 * You tell the memory controller how many bytes you want to read
+	 * by writing that many bytes to it. How hard would it be to just
+	 * add one more integer to specify the length in the input cmd?!
+	 */
+	ret = qspi_exec_io_cmd(flash, total_len, true);
+	if (ret)
+		return ret;
+
+	/* Copy out the output. Skip the garbage part. */
+	memcpy(buf, &flash->io_buf[header_len + read_dummy_len], payload_len);
+	*cnt = payload_len;
+	return 0;
+}
+
+/*
+ * Do one FIFO write to flash. Assuming erase is already done.
+ * @cnt contains bytes actually written on successful return.
+ */
+static int qspi_fifo_wr(struct xrt_qspi *flash,
+	loff_t off, u8 *buf, size_t *cnt)
+{
+	/*
+	 * For the write cmd, we can't write more than write_max_len bytes in
+	 * one IO request even though we have a larger fifo. Otherwise, writes
+	 * will randomly fail.
+	 */
+	const size_t write_max_len = 128UL;
+
+	int ret;
+	struct qspi_flash_addr faddr;
+	size_t header_len, total_len, payload_len;
+
+	qspi_offset2faddr(off, &faddr);
+
+	ret = qspi_setup_io_cmd_header(flash,
+		flash->vendor->write_cmd(), &faddr, &header_len);
+	if (ret)
+		return ret;
+
+	/* Figure out length of IO for this write. */
+
+	/*
+	 * One IO should not be more than one fifo depth, so that we don't
+	 * overrun flash->io_buf, and we don't go beyond write_max_len.
+	 */
+	payload_len = min(*cnt, flash->qspi_fifo_depth - header_len);
+	payload_len = min(payload_len, write_max_len);
+	total_len = payload_len + header_len;
+
+	QSPI_DBG(flash, "writing %ld bytes @0x%llx", payload_len, off);
+
+	/* Copy in payload after header. */
+	memcpy(&flash->io_buf[header_len], buf, payload_len);
+
+	/* Now do the write. */
+
+	ret = qspi_enable_write(flash);
+	if (ret)
+		return ret;
+	ret = qspi_exec_io_cmd(flash, total_len, false);
+	if (ret)
+		return ret;
+	if (!qspi_wait_until_ready(flash))
+		return -EINVAL;
+
+	*cnt = payload_len;
+	return 0;
+}
+
+/*
+ * Load/store the whole buf of data from/to flash memory.
+ */
+static int qspi_buf_rdwr(struct xrt_qspi *flash,
+	u8 *buf, loff_t off, size_t len, bool write)
+{
+	int ret = 0;
+	size_t n, curlen;
+
+	for (n = 0; ret == 0 && n < len; n += curlen) {
+		curlen = len - n;
+		if (write)
+			ret = qspi_fifo_wr(flash, off + n, &buf[n], &curlen);
+		else
+			ret = qspi_fifo_rd(flash, off + n, &buf[n], &curlen);
+	}
+
+	/*
+	 * Yield the CPU after every buf IO so that Linux does not complain
+	 * about a CPU soft lockup.
+	 */
+	schedule();
+	return ret;
+}
+
+static u8 qspi_erase_cmd(size_t pagesz)
+{
+	u8 cmd = 0;
+	const size_t onek = 1024;
+
+	BUG_ON(!IS_ALIGNED(pagesz, onek));
+	switch (pagesz / onek) {
+	case 4:
+		cmd = QSPI_CMD_4KB_SUBSECTOR_ERASE;
+		break;
+	case 32:
+		cmd = QSPI_CMD_32KB_SUBSECTOR_ERASE;
+		break;
+	case 64:
+		cmd = QSPI_CMD_SECTOR_ERASE;
+		break;
+	default:
+		BUG_ON(1);
+		break;
+	}
+	return cmd;
+}
+
+/*
+ * Erase one flash page.
+ */
+static int qspi_page_erase(struct xrt_qspi *flash, loff_t off, size_t pagesz)
+{
+	int ret = 0;
+	struct qspi_flash_addr faddr;
+	size_t cmdlen;
+	u8 cmd = qspi_erase_cmd(pagesz);
+
+	QSPI_DBG(flash, "Erasing 0x%lx bytes @0x%llx with cmd=0x%x",
+		pagesz, off, (u32)cmd);
+
+	BUG_ON(!IS_ALIGNED(off, pagesz));
+	qspi_offset2faddr(off, &faddr);
+
+	if (!qspi_wait_until_ready(flash))
+		return -EINVAL;
+
+	ret = qspi_setup_io_cmd_header(flash, cmd, &faddr, &cmdlen);
+	if (ret)
+		return ret;
+
+	ret = qspi_enable_write(flash);
+	if (ret)
+		return ret;
+
+	ret = qspi_exec_io_cmd(flash, cmdlen, false);
+	if (ret) {
+		QSPI_ERR(flash, "Failed to erase 0x%lx bytes @0x%llx",
+			pagesz, off);
+		return ret;
+	}
+
+	if (!qspi_wait_until_ready(flash))
+		return -EINVAL;
+
+	return 0;
+}
+
+static bool is_valid_offset(struct xrt_qspi *flash, loff_t off)
+{
+	struct qspi_flash_addr faddr;
+
+	qspi_offset2faddr(off, &faddr);
+	/*
+	 * Assuming all flash chips are of the same size, we use the
+	 * offset into flash 0 to perform the boundary check.
+	 */
+	faddr.slave = 0;
+	return qspi_faddr2offset(&faddr) < flash->flash_size;
+}
+
+static int
+qspi_do_read(struct xrt_qspi *flash, char *kbuf, size_t n, loff_t off)
+{
+	u8 *page = NULL;
+	size_t cnt = 0;
+	struct qspi_flash_addr faddr;
+	int ret = 0;
+
+	page = vmalloc(QSPI_PAGE_SIZE);
+	if (page == NULL)
+		return -ENOMEM;
+
+	mutex_lock(&flash->io_lock);
+
+	qspi_offset2faddr(off, &faddr);
+	flash->qspi_curr_slave = faddr.slave;
+
+	if (!qspi_wait_until_ready(flash))
+		ret = -EINVAL;
+
+	while (ret == 0 && cnt < n) {
+		loff_t thisoff = off + cnt;
+		size_t thislen = min(n - cnt,
+			QSPI_PAGE_ROUNDUP(thisoff) - (size_t)thisoff);
+		char *thisbuf = &page[QSPI_PAGE_OFFSET(thisoff)];
+
+		ret = qspi_buf_rdwr(flash, thisbuf, thisoff, thislen, false);
+		if (ret)
+			break;
+
+		memcpy(&kbuf[cnt], thisbuf, thislen);
+		cnt += thislen;
+	}
+
+	mutex_unlock(&flash->io_lock);
+	vfree(page);
+	return ret;
+}
+
+/*
+ * Read flash memory page by page into user buf.
+ */
+static ssize_t
+qspi_read(struct file *file, char __user *ubuf, size_t n, loff_t *off)
+{
+	struct xrt_qspi *flash = file->private_data;
+	char *kbuf = NULL;
+	int ret = 0;
+
+	QSPI_INFO(flash, "reading %ld bytes @0x%llx", n, *off);
+
+	if (n == 0 || !is_valid_offset(flash, *off)) {
+		QSPI_ERR(flash, "Can't read: out of boundary");
+		return 0;
+	}
+	n = min(n, flash->flash_size - (size_t)*off);
+	kbuf = vmalloc(n);
+	if (kbuf == NULL)
+		return -ENOMEM;
+
+	ret = qspi_do_read(flash, kbuf, n, *off);
+	if (ret == 0) {
+		if (copy_to_user(ubuf, kbuf, n) != 0)
+			ret = -EFAULT;
+	}
+	vfree(kbuf);
+
+	if (ret)
+		return ret;
+
+	*off += n;
+	return n;
+}
+
+/* Read request from other parts of the driver. */
+static int qspi_kernel_read(struct platform_device *pdev,
+	char *buf, size_t n, loff_t off)
+{
+	struct xrt_qspi *flash = platform_get_drvdata(pdev);
+
+	QSPI_INFO(flash, "kernel reading %ld bytes @0x%llx", n, off);
+	return qspi_do_read(flash, buf, n, off);
+}
+
+/*
+ * Write a page. Perform read-modify-write as needed.
+ * @cnt contains actual bytes copied from user on successful return.
+ */
+static int qspi_page_rmw(struct xrt_qspi *flash,
+	const char __user *ubuf, u8 *kbuf, loff_t off, size_t *cnt)
+{
+	loff_t thisoff = QSPI_PAGE_ALIGN(off);
+	size_t front = QSPI_PAGE_OFFSET(off);
+	size_t mid = min(*cnt, QSPI_PAGE_SIZE - front);
+	size_t last = QSPI_PAGE_SIZE - front - mid;
+	u8 *thiskbuf = kbuf;
+	int ret;
+
+	if (front) {
+		ret = qspi_buf_rdwr(flash, thiskbuf, thisoff, front, false);
+		if (ret)
+			return ret;
+	}
+	thisoff += front;
+	thiskbuf += front;
+	if (copy_from_user(thiskbuf, ubuf, mid) != 0)
+		return -EFAULT;
+	*cnt = mid;
+	thisoff += mid;
+	thiskbuf += mid;
+	if (last) {
+		ret = qspi_buf_rdwr(flash, thiskbuf, thisoff, last, false);
+		if (ret)
+			return ret;
+	}
+
+	ret = qspi_page_erase(flash, QSPI_PAGE_ALIGN(off), QSPI_PAGE_SIZE);
+	if (ret == 0) {
+		ret = qspi_buf_rdwr(flash, kbuf, QSPI_PAGE_ALIGN(off),
+			QSPI_PAGE_SIZE, true);
+	}
+	return ret;
+}
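+
+/*
+ * RMW split example (numbers chosen for illustration): for off 0x1100
+ * in a 4KB page and a large enough user buffer, front = 0x100 bytes
+ * are read back from flash, mid = 0xf00 bytes come from the user and
+ * last = 0, after which the whole page is erased and rewritten.
+ */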
+
+static inline size_t qspi_get_page_io_size(loff_t off, size_t sz)
+{
+	if (IS_ALIGNED(off, QSPI_HUGE_PAGE_SIZE) &&
+		sz >= QSPI_HUGE_PAGE_SIZE)
+		return QSPI_HUGE_PAGE_SIZE;
+	if (IS_ALIGNED(off, QSPI_LARGE_PAGE_SIZE) &&
+		sz >= QSPI_LARGE_PAGE_SIZE)
+		return QSPI_LARGE_PAGE_SIZE;
+	if (IS_ALIGNED(off, QSPI_PAGE_SIZE) &&
+		sz >= QSPI_PAGE_SIZE)
+		return QSPI_PAGE_SIZE;
+
+	return 0; // can't do full page IO
+}
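+
+/*
+ * Example (numbers chosen for illustration): off = 0x18000 with 128KB
+ * left to write is 32KB-aligned but not 64KB-aligned, so a large
+ * (32KB) page is chosen; an unaligned offset returns 0 and forces the
+ * read-modify-write fallback in qspi_write() below.
+ */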
+
+/*
+ * Try to erase and write a full (large/huge) page.
+ * @cnt contains actual bytes copied from user on successful return.
+ * Falls back to RMW if a full-page write is not possible.
+ */
+static int qspi_page_wr(struct xrt_qspi *flash,
+	const char __user *ubuf, u8 *kbuf, loff_t off, size_t *cnt)
+{
+	int ret;
+	size_t thislen = qspi_get_page_io_size(off, *cnt);
+
+	if (thislen == 0)
+		return -EOPNOTSUPP;
+
+	*cnt = thislen;
+
+	if (copy_from_user(kbuf, ubuf, thislen) != 0)
+		return -EFAULT;
+
+	ret = qspi_page_erase(flash, off, thislen);
+	if (ret == 0)
+		ret = qspi_buf_rdwr(flash, kbuf, off, thislen, true);
+	return ret;
+}
+
+/*
+ * Write to flash memory page by page from user buf.
+ */
+static ssize_t
+qspi_write(struct file *file, const char __user *buf, size_t n, loff_t *off)
+{
+	struct xrt_qspi *flash = file->private_data;
+	u8 *page = NULL;
+	size_t cnt = 0;
+	int ret = 0;
+	struct qspi_flash_addr faddr;
+
+	QSPI_INFO(flash, "writing %ld bytes @0x%llx", n, *off);
+
+	if (n == 0 || !is_valid_offset(flash, *off)) {
+		QSPI_ERR(flash, "Can't write: out of boundary");
+		return -ENOSPC;
+	}
+	n = min(n, flash->flash_size - (size_t)*off);
+
+	page = vmalloc(QSPI_HUGE_PAGE_SIZE);
+	if (page == NULL)
+		return -ENOMEM;
+
+	mutex_lock(&flash->io_lock);
+
+	qspi_offset2faddr(*off, &faddr);
+	flash->qspi_curr_slave = faddr.slave;
+
+	if (!qspi_wait_until_ready(flash))
+		ret = -EINVAL;
+	while (ret == 0 && cnt < n) {
+		loff_t thisoff = *off + cnt;
+		const char __user *thisbuf = buf + cnt;
+		size_t thislen = n - cnt;
+
+		/* Try write full page. */
+		ret = qspi_page_wr(flash, thisbuf, page, thisoff, &thislen);
+		if (ret) {
+			/* Fallback to RMW. */
+			if (ret == -EOPNOTSUPP) {
+				ret = qspi_page_rmw(flash, thisbuf, page,
+					thisoff, &thislen);
+			}
+			if (ret)
+				break;
+		}
+		cnt += thislen;
+	}
+	mutex_unlock(&flash->io_lock);
+
+	vfree(page);
+	if (ret)
+		return ret;
+
+	*off += n;
+	return n;
+}
+
+static loff_t
+qspi_llseek(struct file *filp, loff_t off, int whence)
+{
+	loff_t npos;
+
+	switch (whence) {
+	case SEEK_SET:
+		npos = off;
+		break;
+	case SEEK_CUR:
+		npos = filp->f_pos + off;
+		break;
+	case SEEK_END: /* no need to support */
+		return -EINVAL;
+	default: /* should not happen */
+		return -EINVAL;
+	}
+	if (npos < 0)
+		return -EINVAL;
+
+	filp->f_pos = npos;
+	return npos;
+}
+
+/*
+ * Only allow one client at a time.
+ */
+static int qspi_open(struct inode *inode, struct file *file)
+{
+	struct xrt_qspi *flash;
+	struct platform_device *pdev = xrt_devnode_open_excl(inode);
+
+	if (!pdev)
+		return -EBUSY;
+
+	flash = platform_get_drvdata(pdev);
+	file->private_data = flash;
+	return 0;
+}
+
+static int qspi_close(struct inode *inode, struct file *file)
+{
+	struct xrt_qspi *flash = file->private_data;
+
+	if (!flash)
+		return -EINVAL;
+
+	file->private_data = NULL;
+	xrt_devnode_close(inode);
+	return 0;
+}
+
+static ssize_t flash_type_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	/* We support only the QSPI flash controller. */
+	return sprintf(buf, "spi\n");
+}
+static DEVICE_ATTR_RO(flash_type);
+
+static ssize_t size_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct xrt_qspi *flash = dev_get_drvdata(dev);
+
+	return sprintf(buf, "%ld\n", flash->flash_size);
+}
+static DEVICE_ATTR_RO(size);
+
+static struct attribute *qspi_attrs[] = {
+	&dev_attr_flash_type.attr,
+	&dev_attr_size.attr,
+	NULL,
+};
+
+static struct attribute_group qspi_attr_group = {
+	.attrs = qspi_attrs,
+};
+
+static int qspi_remove(struct platform_device *pdev)
+{
+	struct xrt_qspi *flash = platform_get_drvdata(pdev);
+
+	if (!flash)
+		return -EINVAL;
+	platform_set_drvdata(pdev, NULL);
+
+	(void) sysfs_remove_group(&DEV(flash->pdev)->kobj, &qspi_attr_group);
+
+	if (flash->io_buf)
+		vfree(flash->io_buf);
+
+	if (flash->qspi_regs)
+		iounmap(flash->qspi_regs);
+
+	mutex_destroy(&flash->io_lock);
+	return 0;
+}
+
+static int qspi_get_ID(struct xrt_qspi *flash)
+{
+	int i;
+	struct qspi_flash_vendor *vendor = NULL;
+	/*
+	 * Read the flash device ID. The vendor ID is in cmd[1] and the max
+	 * sector number code is in cmd[3] of the output.
+	 */
+	u8 cmd[5] = { QSPI_CMD_IDCODE_READ, };
+	int ret = qspi_transaction(flash, cmd, sizeof(cmd), true);
+
+	if (ret) {
+		QSPI_ERR(flash, "Can't get flash memory ID, err: %d", ret);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(vendors); i++) {
+		if (cmd[1] == vendors[i].vendor_id) {
+			vendor = &vendors[i];
+			break;
+		}
+	}
+
+	/* Find out flash vendor and size. */
+	if (vendor == NULL) {
+		QSPI_ERR(flash, "Unknown flash vendor: %d", cmd[1]);
+		return -EINVAL;
+	}
+	flash->vendor = vendor;
+
+	flash->flash_size = vendor->code2sectors(cmd[3]) * (16 * 1024 * 1024);
+	if (flash->flash_size == 0) {
+		QSPI_ERR(flash, "Unknown flash memory size code: %d", cmd[3]);
+		return -EINVAL;
+	}
+	QSPI_INFO(flash, "Flash vendor: %s, size: %ldMB",
+		vendor->vendor_name, flash->flash_size / 1024 / 1024);
+
+	return 0;
+}
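+
+/*
+ * Example ID decode (bytes chosen for illustration): a response with
+ * cmd[1] = 0x20 (micron) and size code cmd[3] = 0x19 maps to 2
+ * sectors, so the reported flash size is 2 * 16MB = 32MB.
+ */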
+
+static int qspi_controller_probe(struct xrt_qspi *flash)
+{
+	int ret;
+
+	/* Probing on first flash only. */
+	flash->qspi_curr_slave = 0;
+
+	qspi_set_ctrl(flash, QSPI_CR_INIT_STATE);
+
+	/* Find out fifo depth before any read/write operations. */
+	flash->qspi_fifo_depth = qspi_get_fifo_depth(flash);
+	if (flash->qspi_fifo_depth == 0)
+		return -EINVAL;
+	QSPI_DBG(flash, "QSPI FIFO depth is: %ld", flash->qspi_fifo_depth);
+
+	if (!qspi_wait_until_ready(flash))
+		return -EINVAL;
+
+	/* Update flash vendor. */
+	ret = qspi_get_ID(flash);
+	if (ret)
+		return ret;
+
+	flash->qspi_curr_sector = 0xff;
+
+	return 0;
+}
+
+static int qspi_probe(struct platform_device *pdev)
+{
+	struct xrt_qspi *flash;
+	int ret;
+
+	flash = devm_kzalloc(DEV(pdev), sizeof(*flash), GFP_KERNEL);
+	if (!flash)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, flash);
+	flash->pdev = pdev;
+
+	mutex_init(&flash->io_lock);
+
+	flash->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!flash->res) {
+		ret = -EINVAL;
+		QSPI_ERR(flash, "empty resource");
+		goto error;
+	}
+
+	flash->qspi_regs = ioremap(flash->res->start,
+		resource_size(flash->res));
+	if (!flash->qspi_regs) {
+		ret = -ENOMEM;
+		QSPI_ERR(flash, "failed to map resource");
+		goto error;
+	}
+
+	ret = qspi_controller_probe(flash);
+	if (ret)
+		goto error;
+
+	flash->io_buf = vmalloc(flash->qspi_fifo_depth);
+	if (flash->io_buf == NULL) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	ret = sysfs_create_group(&DEV(pdev)->kobj, &qspi_attr_group);
+	if (ret)
+		QSPI_ERR(flash, "failed to create sysfs nodes");
+
+	return 0;
+
+error:
+	QSPI_ERR(flash, "probing failed");
+	qspi_remove(pdev);
+	return ret;
+}
+
+static size_t qspi_get_size(struct platform_device *pdev)
+{
+	struct xrt_qspi *flash = platform_get_drvdata(pdev);
+
+	return flash->flash_size;
+}
+
+static int
+qspi_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_qspi *flash = platform_get_drvdata(pdev);
+	int ret = 0;
+
+	QSPI_INFO(flash, "handling IOCTL cmd: %d", cmd);
+
+	switch (cmd) {
+	case XRT_FLASH_GET_SIZE: {
+		size_t *sz = (size_t *)arg;
+		*sz = qspi_get_size(pdev);
+		break;
+	}
+	case XRT_FLASH_READ: {
+		struct xrt_flash_ioctl_read *rd =
+			(struct xrt_flash_ioctl_read *)arg;
+		ret = qspi_kernel_read(pdev,
+			rd->xfir_buf, rd->xfir_size, rd->xfir_offset);
+		break;
+	}
+	default:
+		QSPI_ERR(flash, "unknown flash IOCTL cmd: %d", cmd);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_qspi_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{
+				.ep_name = NODE_FLASH_VSEC,
+			},
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata qspi_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = qspi_leaf_ioctl,
+	},
+	.xsd_file_ops = {
+		.xsf_ops = {
+			.owner = THIS_MODULE,
+			.open = qspi_open,
+			.release = qspi_close,
+			.read = qspi_read,
+			.write = qspi_write,
+			.llseek = qspi_llseek,
+		},
+		.xsf_dev_name = "flash",
+	},
+};
+
+static const struct platform_device_id qspi_id_table[] = {
+	{ XRT_QSPI, (kernel_ulong_t)&qspi_data },
+	{ },
+};
+
+struct platform_driver xrt_qspi_driver = {
+	.driver	= {
+		.name    = XRT_QSPI,
+	},
+	.probe   = qspi_probe,
+	.remove  = qspi_remove,
+	.id_table = qspi_id_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-srsr.c b/drivers/fpga/alveo/lib/subdevs/xrt-srsr.c
new file mode 100644
index 000000000000..150b82990ecb
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-srsr.c
@@ -0,0 +1,322 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo DDR SRSR Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-ddr-srsr.h"
+
+#define XRT_DDR_SRSR "xrt_ddr_srsr"
+
+#define	REG_STATUS_OFFSET		0x00000000
+#define	REG_CTRL_OFFSET			0x00000004
+#define	REG_CALIB_OFFSET		0x00000008
+#define	REG_XSDB_RAM_BASE		0x00004000
+
+#define	FULL_CALIB_TIMEOUT		100
+#define	FAST_CALIB_TIMEOUT		15
+
+#define	CTRL_BIT_SYS_RST		0x00000001
+#define	CTRL_BIT_XSDB_SELECT		0x00000010
+#define	CTRL_BIT_MEM_INIT_SKIP		0x00000020
+#define	CTRL_BIT_RESTORE_EN		0x00000040
+#define	CTRL_BIT_RESTORE_COMPLETE	0x00000080
+#define	CTRL_BIT_SREF_REQ		0x00000100
+
+#define	STATUS_BIT_CALIB_COMPLETE	0x00000001
+#define	STATUS_BIT_SREF_ACK		0x00000100
+
+struct xrt_ddr_srsr {
+	void __iomem		*base;
+	struct platform_device	*pdev;
+	struct mutex		lock;
+	const char		*ep_name;
+};
+
+#define reg_rd(g, offset)	ioread32((g)->base + (offset))
+#define reg_wr(g, val, offset)	iowrite32(val, (g)->base + (offset))
+
+static ssize_t status_show(struct device *dev, struct device_attribute *attr,
+	char *buf)
+{
+	u32 status = 1;
+
+	return sprintf(buf, "0x%x\n", status);
+}
+static DEVICE_ATTR_RO(status);
+
+static struct attribute *xrt_ddr_srsr_attributes[] = {
+	&dev_attr_status.attr,
+	NULL
+};
+
+static const struct attribute_group xrt_ddr_srsr_attrgroup = {
+	.attrs = xrt_ddr_srsr_attributes,
+};
+
+static int srsr_full_calib(struct xrt_ddr_srsr *srsr,
+	char **data, u32 *data_len)
+{
+	int i = 0, err = -ETIMEDOUT;
+	u32 val, sz_lo, sz_hi;
+	u32 *cache = NULL;
+
+	mutex_lock(&srsr->lock);
+	reg_wr(srsr, CTRL_BIT_SYS_RST, REG_CTRL_OFFSET);
+	reg_wr(srsr, 0x0, REG_CTRL_OFFSET);
+
+	/* Safe to say, full calibration should finish in 2000ms. */
+	for (; i < FULL_CALIB_TIMEOUT; ++i) {
+		val = reg_rd(srsr, REG_STATUS_OFFSET);
+		if (val & STATUS_BIT_CALIB_COMPLETE) {
+			err = 0;
+			break;
+		}
+		msleep(20);
+	}
+
+	if (err) {
+		xrt_err(srsr->pdev, "Calibration timeout");
+		goto failed;
+	}
+
+	xrt_info(srsr->pdev, "calibrate time %dms", i * 20);
+
+	/*
+	 * END_ADDR0/1 provide the end address for a given memory
+	 * configuration: END_ADDR0 holds the lower 9 bits and END_ADDR1
+	 * the upper 9 bits.
+	 * E.g. sz_lo = 0x155,  0'b 1 0101 0101
+	 *      sz_hi = 0x5,    0'b 0101
+	 *      combined:       0'b 0 1011 0101 0101 = 0xB55
+	 * and the total size is 0xB55 + 1 words.
+	 * Check the value; it should not exceed the predefined XSDB range.
+	 */
+	sz_lo = reg_rd(srsr, REG_XSDB_RAM_BASE+4);
+	sz_hi = reg_rd(srsr, REG_XSDB_RAM_BASE+8);
+
+	*data_len = (((sz_hi << 9) | sz_lo) + 1) * sizeof(u32);
+	if (*data_len >= 0x4000) {
+		xrt_err(srsr->pdev, "Invalid data size 0x%x", *data_len);
+		err = -EINVAL;
+		goto failed;
+	}
+
+	cache = vzalloc(*data_len);
+	if (!cache) {
+		err = -ENOMEM;
+		goto failed;
+	}
+
+	err = -ETIMEDOUT;
+	reg_wr(srsr, CTRL_BIT_SREF_REQ, REG_CTRL_OFFSET);
+	for (; i < FULL_CALIB_TIMEOUT; ++i) {
+		val = reg_rd(srsr, REG_STATUS_OFFSET);
+		if (val == (STATUS_BIT_SREF_ACK|STATUS_BIT_CALIB_COMPLETE)) {
+			err = 0;
+			break;
+		}
+		msleep(20);
+	}
+	if (err) {
+		xrt_err(srsr->pdev, "request data timeout");
+		goto failed;
+	}
+	xrt_info(srsr->pdev, "req data time %dms", i * 20);
+
+	reg_wr(srsr, CTRL_BIT_SREF_REQ | CTRL_BIT_XSDB_SELECT, REG_CTRL_OFFSET);
+
+	for (i = 0; i < *data_len / sizeof(u32); ++i) {
+		val = reg_rd(srsr, REG_XSDB_RAM_BASE + i * 4);
+		*(cache + i) = val;
+	}
+	*data = (char *)cache;
+
+	mutex_unlock(&srsr->lock);
+
+	return 0;
+
+failed:
+	mutex_unlock(&srsr->lock);
+	vfree(cache);
+
+	return err;
+}
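+
+/*
+ * Note: on success the vzalloc'ed calibration snapshot is handed to
+ * the caller through @data; the caller is expected to vfree() it,
+ * presumably after replaying it through srsr_fast_calib() below.
+ */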
+
+static int srsr_fast_calib(struct xrt_ddr_srsr *srsr, char *data,
+	u32 data_size, bool retention)
+{
+	int i = 0, err = -ETIMEDOUT;
+	u32 val, write_val = CTRL_BIT_RESTORE_EN | CTRL_BIT_XSDB_SELECT;
+
+	mutex_lock(&srsr->lock);
+	if (retention)
+		write_val |= CTRL_BIT_MEM_INIT_SKIP;
+
+	reg_wr(srsr, write_val, REG_CTRL_OFFSET);
+
+	msleep(20);
+	for (i = 0; i < data_size / sizeof(u32); ++i) {
+		val = *((u32 *)data + i);
+		reg_wr(srsr, val, REG_XSDB_RAM_BASE+i*4);
+	}
+
+	write_val = CTRL_BIT_RESTORE_EN | CTRL_BIT_RESTORE_COMPLETE;
+	if (retention)
+		write_val |= CTRL_BIT_MEM_INIT_SKIP;
+
+	reg_wr(srsr, write_val, REG_CTRL_OFFSET);
+
+	/* Safe to say, fast calibration should finish in 300ms. */
+	for (i = 0; i < FAST_CALIB_TIMEOUT; ++i) {
+		val = reg_rd(srsr, REG_STATUS_OFFSET);
+		if (val & STATUS_BIT_CALIB_COMPLETE) {
+			err = 0;
+			break;
+		}
+		msleep(20);
+	}
+	if (err)
+		xrt_err(srsr->pdev, "timed out");
+	else
+		xrt_info(srsr->pdev, "time %dms", i * 20);
+
+	reg_wr(srsr, CTRL_BIT_RESTORE_COMPLETE, REG_CTRL_OFFSET);
+	val = reg_rd(srsr, REG_CTRL_OFFSET);
+
+	mutex_unlock(&srsr->lock);
+
+	return err;
+}
+
+static int
+xrt_srsr_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_ddr_srsr *srsr = platform_get_drvdata(pdev);
+	struct xrt_srsr_ioctl_calib *req = arg;
+	int ret = 0;
+
+	switch (cmd) {
+	case XRT_SRSR_CALIB:
+		ret = srsr_full_calib(srsr, (char **)req->xsic_buf,
+			&req->xsic_size);
+		break;
+	case XRT_SRSR_FAST_CALIB:
+		ret = srsr_fast_calib(srsr, req->xsic_buf, req->xsic_size,
+			req->xsic_retention);
+		break;
+	case XRT_SRSR_EP_NAME:
+		*(const char **)arg = srsr->ep_name;
+		break;
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int xrt_srsr_probe(struct platform_device *pdev)
+{
+	struct xrt_ddr_srsr *srsr;
+	struct resource *res;
+	int err = 0;
+
+	srsr = devm_kzalloc(&pdev->dev, sizeof(*srsr), GFP_KERNEL);
+	if (!srsr)
+		return -ENOMEM;
+
+	srsr->pdev = pdev;
+	platform_set_drvdata(pdev, srsr);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		err = -EINVAL;
+		goto failed;
+	}
+
+	xrt_info(pdev, "IO start: 0x%llx, end: 0x%llx",
+		res->start, res->end);
+
+	srsr->ep_name = res->name;
+	srsr->base = ioremap(res->start, resource_size(res));
+	if (!srsr->base) {
+		err = -EIO;
+		xrt_err(pdev, "Map iomem failed");
+		goto failed;
+	}
+	mutex_init(&srsr->lock);
+
+	err = sysfs_create_group(&pdev->dev.kobj, &xrt_ddr_srsr_attrgroup);
+	if (err)
+		goto create_xrt_ddr_srsr_failed;
+
+	return 0;
+
+create_xrt_ddr_srsr_failed:
+	platform_set_drvdata(pdev, NULL);
+failed:
+	return err;
+}
+
+static int xrt_srsr_remove(struct platform_device *pdev)
+{
+	struct xrt_ddr_srsr *srsr = platform_get_drvdata(pdev);
+
+	if (!srsr) {
+		xrt_err(pdev, "driver data is NULL");
+		return -EINVAL;
+	}
+
+	sysfs_remove_group(&pdev->dev.kobj, &xrt_ddr_srsr_attrgroup);
+
+	if (srsr->base)
+		iounmap(srsr->base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, srsr);
+
+	return 0;
+}
+
+struct xrt_subdev_endpoints xrt_srsr_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .regmap_name = REGMAP_DDR_SRSR },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_srsr_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_srsr_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_srsr_table[] = {
+	{ XRT_DDR_SRSR, (kernel_ulong_t)&xrt_srsr_data },
+	{ },
+};
+
+struct platform_driver xrt_ddr_srsr_driver = {
+	.driver = {
+		.name = XRT_DDR_SRSR,
+	},
+	.probe = xrt_srsr_probe,
+	.remove = xrt_srsr_remove,
+	.id_table = xrt_srsr_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-test.c b/drivers/fpga/alveo/lib/subdevs/xrt-test.c
new file mode 100644
index 000000000000..10a940adf493
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-test.c
@@ -0,0 +1,274 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Test Leaf Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/delay.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+
+#define	XRT_TEST "xrt_test"
+
+struct xrt_test {
+	struct platform_device *pdev;
+	struct platform_device *leaf;
+	void *evt_hdl;
+};
+
+static bool xrt_test_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	int myid = (int)(uintptr_t)arg;
+
+	return id == XRT_SUBDEV_TEST && pdev->id != myid;
+}
+
+static ssize_t hold_store(struct device *dev,
+	struct device_attribute *da, const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_test *xt = platform_get_drvdata(pdev);
+	struct platform_device *leaf;
+
+	leaf = xrt_subdev_get_leaf(pdev, xrt_test_leaf_match,
+		(void *)(uintptr_t)pdev->id);
+	if (leaf)
+		xt->leaf = leaf;
+	return count;
+}
+static DEVICE_ATTR_WO(hold);
+
+static ssize_t release_store(struct device *dev,
+	struct device_attribute *da, const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_test *xt = platform_get_drvdata(pdev);
+
+	if (xt->leaf)
+		(void) xrt_subdev_put_leaf(pdev, xt->leaf);
+	return count;
+}
+static DEVICE_ATTR_WO(release);
+
+static struct attribute *xrt_test_attrs[] = {
+	&dev_attr_hold.attr,
+	&dev_attr_release.attr,
+	NULL,
+};
+
+static const struct attribute_group xrt_test_attrgroup = {
+	.attrs = xrt_test_attrs,
+};
+
+static void xrt_test_async_evt_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg, bool success)
+{
+	xrt_info(pdev, "async broadcast event (%d) %s", evt,
+		success ? "succeeded" : "failed");
+}
+
+static int xrt_test_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct platform_device *leaf;
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION:
+		break;
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		return XRT_EVENT_CB_CONTINUE;
+	}
+
+	leaf = xrt_subdev_get_leaf_by_id(pdev, esd->xevt_subdev_id,
+		esd->xevt_subdev_instance);
+	if (leaf) {
+		(void) xrt_subdev_ioctl(leaf, 1, NULL);
+		(void) xrt_subdev_put_leaf(pdev, leaf);
+	}
+
+	/* Broadcast event. */
+	if (pdev->id == 1) {
+		xrt_subdev_broadcast_event_async(pdev, XRT_EVENT_TEST,
+			xrt_test_async_evt_cb, NULL);
+	}
+
+	xrt_info(pdev, "processed event %d for (%d, %d)",
+		evt, esd->xevt_subdev_id, esd->xevt_subdev_instance);
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static int xrt_test_create_metadata(struct xrt_test *xt, char **root_dtb)
+{
+	char *dtb = NULL;
+	struct xrt_md_endpoint ep = { .ep_name = NODE_TEST };
+	int ret;
+
+	ret = xrt_md_create(DEV(xt->pdev), &dtb);
+	if (ret) {
+		xrt_err(xt->pdev, "create metadata failed, ret %d", ret);
+		goto failed;
+	}
+
+	ret = xrt_md_add_endpoint(DEV(xt->pdev), dtb, &ep);
+	if (ret) {
+		xrt_err(xt->pdev, "add test node failed, ret %d", ret);
+		goto failed;
+	}
+
+	*root_dtb = dtb;
+	return 0;
+
+failed:
+	vfree(dtb);
+	return ret;
+}
+
+static int xrt_test_probe(struct platform_device *pdev)
+{
+	struct xrt_test *xt;
+	char *dtb = NULL;
+
+	xrt_info(pdev, "probing...");
+
+	xt = devm_kzalloc(DEV(pdev), sizeof(*xt), GFP_KERNEL);
+	if (!xt)
+		return -ENOMEM;
+
+	xt->pdev = pdev;
+	platform_set_drvdata(pdev, xt);
+
+	/* Ready to handle req thru sysfs nodes. */
+	if (sysfs_create_group(&DEV(pdev)->kobj, &xrt_test_attrgroup))
+		xrt_err(pdev, "failed to create sysfs group");
+
+	/* Add event callback to wait for the peer instance. */
+	xt->evt_hdl = xrt_subdev_add_event_cb(pdev, xrt_test_leaf_match,
+		(void *)(uintptr_t)pdev->id, xrt_test_event_cb);
+
+	/* Trigger partition creation, only when this is the first instance. */
+	if (pdev->id == 0) {
+		(void) xrt_test_create_metadata(xt, &dtb);
+		if (dtb)
+			(void) xrt_subdev_create_partition(pdev, dtb);
+		vfree(dtb);
+	} else {
+		xrt_subdev_broadcast_event(pdev, XRT_EVENT_TEST);
+	}
+
+	/* After we return here, we'll get inter-leaf calls. */
+	return 0;
+}
+
+static int xrt_test_remove(struct platform_device *pdev)
+{
+	struct xrt_test *xt = platform_get_drvdata(pdev);
+
+	/* By now, partition driver should prevent any inter-leaf call. */
+
+	xrt_info(pdev, "leaving...");
+
+	(void) xrt_subdev_remove_event_cb(pdev, xt->evt_hdl);
+
+	(void) sysfs_remove_group(&DEV(pdev)->kobj, &xrt_test_attrgroup);
+	/* By now, no more access thru sysfs nodes. */
+
+	/* Clean up can safely be done now. */
+	return 0;
+}
+
+static int
+xrt_test_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	xrt_info(pdev, "handling IOCTL cmd: %d", cmd);
+	return 0;
+}
+
+static int xrt_test_open(struct inode *inode, struct file *file)
+{
+	struct platform_device *pdev = xrt_devnode_open(inode);
+
+	/* Device may have gone already when we get here. */
+	if (!pdev)
+		return -ENODEV;
+
+	xrt_info(pdev, "opened");
+	file->private_data = platform_get_drvdata(pdev);
+	return 0;
+}
+
+static ssize_t
+xrt_test_read(struct file *file, char __user *ubuf, size_t n, loff_t *off)
+{
+	int i;
+	struct xrt_test *xt = file->private_data;
+
+	for (i = 0; i < 10; i++) {
+		xrt_info(xt->pdev, "reading...");
+		ssleep(1);
+	}
+	return 0;
+}
+
+static int xrt_test_close(struct inode *inode, struct file *file)
+{
+	struct xrt_test *xt = file->private_data;
+
+	xrt_devnode_close(inode);
+
+	xrt_info(xt->pdev, "closed");
+	return 0;
+}
+
+/* Link to device tree nodes. */
+struct xrt_subdev_endpoints xrt_test_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{ .ep_name = NODE_TEST },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+/*
+ * Callbacks registered with parent driver infrastructure.
+ */
+struct xrt_subdev_drvdata xrt_test_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_test_leaf_ioctl,
+	},
+	.xsd_file_ops = {
+		.xsf_ops = {
+			.owner = THIS_MODULE,
+			.open = xrt_test_open,
+			.release = xrt_test_close,
+			.read = xrt_test_read,
+		},
+		.xsf_mode = XRT_SUBDEV_FILE_MULTI_INST,
+	},
+};
+
+static const struct platform_device_id xrt_test_id_table[] = {
+	{ XRT_TEST, (kernel_ulong_t)&xrt_test_data },
+	{ },
+};
+
+/*
+ * Callbacks registered with Linux's platform driver infrastructure.
+ */
+struct platform_driver xrt_test_driver = {
+	.driver	= {
+		.name    = XRT_TEST,
+	},
+	.probe   = xrt_test_probe,
+	.remove  = xrt_test_remove,
+	.id_table = xrt_test_id_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-ucs.c b/drivers/fpga/alveo/lib/subdevs/xrt-ucs.c
new file mode 100644
index 000000000000..849a9b780e0b
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-ucs.c
@@ -0,0 +1,238 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA UCS Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-parent.h"
+#include "xrt-ucs.h"
+#include "xrt-clock.h"
+
+#define UCS_ERR(ucs, fmt, arg...)   \
+	xrt_err((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_WARN(ucs, fmt, arg...)  \
+	xrt_warn((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_INFO(ucs, fmt, arg...)  \
+	xrt_info((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_DBG(ucs, fmt, arg...)   \
+	xrt_dbg((ucs)->pdev, fmt "\n", ##arg)
+
+#define XRT_UCS		"xrt_ucs"
+
+#define CHANNEL1_OFFSET			0
+#define CHANNEL2_OFFSET			8
+
+#define CLK_MAX_VALUE			6400
+
+struct ucs_control_status_ch1 {
+	unsigned int shutdown_clocks_latched:1;
+	unsigned int reserved1:15;
+	unsigned int clock_throttling_average:14;
+	unsigned int reserved2:2;
+};
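+
+/*
+ * The clock_throttling_average field above is scaled such that
+ * CLK_MAX_VALUE (6400) corresponds to 100%; ucs_check() divides by
+ * CLK_MAX_VALUE / 100, so e.g. a raw reading of 3200 is logged as 50%.
+ */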
+
+struct xrt_ucs {
+	struct platform_device	*pdev;
+	void __iomem		*ucs_base;
+	struct mutex		ucs_lock;
+	void			*evt_hdl;
+};
+
+static inline u32 reg_rd(struct xrt_ucs *ucs, u32 offset)
+{
+	return ioread32(ucs->ucs_base + offset);
+}
+
+static inline void reg_wr(struct xrt_ucs *ucs, u32 val, u32 offset)
+{
+	iowrite32(val, ucs->ucs_base + offset);
+}
+
+static bool xrt_ucs_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	if (id == XRT_SUBDEV_CLOCK)
+		return true;
+
+	return false;
+}
+
+static int xrt_ucs_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct xrt_ucs		*ucs;
+	struct platform_device	*leaf;
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+
+	ucs = platform_get_drvdata(pdev);
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION:
+		break;
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		return XRT_EVENT_CB_CONTINUE;
+	}
+
+	leaf = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_CLOCK, esd->xevt_subdev_instance);
+	BUG_ON(!leaf);
+	xrt_subdev_ioctl(leaf, XRT_CLOCK_VERIFY, NULL);
+	xrt_subdev_put_leaf(pdev, leaf);
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static void ucs_check(struct xrt_ucs *ucs, bool *latched)
+{
+	struct ucs_control_status_ch1 *ucs_status_ch1;
+	u32 status;
+
+	mutex_lock(&ucs->ucs_lock);
+	status = reg_rd(ucs, CHANNEL1_OFFSET);
+	ucs_status_ch1 = (struct ucs_control_status_ch1 *)&status;
+	if (ucs_status_ch1->shutdown_clocks_latched) {
+		UCS_ERR(ucs, "Critical temperature or power event, kernel clocks have been stopped, run 'xbutil validate -q' to continue. See AR 73398 for more details.");
+		/* explicitly indicate reset should be latched */
+		*latched = true;
+	} else if (ucs_status_ch1->clock_throttling_average >
+	    CLK_MAX_VALUE) {
+		UCS_ERR(ucs, "kernel clock throttling average %d exceeds expected maximum value %d.",
+			ucs_status_ch1->clock_throttling_average,
+			CLK_MAX_VALUE);
+	} else if (ucs_status_ch1->clock_throttling_average) {
+		UCS_ERR(ucs, "kernel clocks throttled at %d%%.",
+			(ucs_status_ch1->clock_throttling_average /
+			 (CLK_MAX_VALUE / 100)));
+	}
+	mutex_unlock(&ucs->ucs_lock);
+}
+
+static void ucs_enable(struct xrt_ucs *ucs)
+{
+	reg_wr(ucs, 1, CHANNEL2_OFFSET);
+}
+
+static int
+xrt_ucs_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_ucs		*ucs;
+	int			ret = 0;
+
+	ucs = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_UCS_CHECK: {
+		ucs_check(ucs, (bool *)arg);
+		break;
+	}
+	case XRT_UCS_ENABLE:
+		ucs_enable(ucs);
+		break;
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int ucs_remove(struct platform_device *pdev)
+{
+	struct xrt_ucs *ucs;
+
+	ucs = platform_get_drvdata(pdev);
+	if (!ucs) {
+		xrt_err(pdev, "driver data is NULL");
+		return -EINVAL;
+	}
+
+	xrt_subdev_remove_event_cb(pdev, ucs->evt_hdl);
+	if (ucs->ucs_base)
+		iounmap(ucs->ucs_base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, ucs);
+
+	return 0;
+}
+
+static int ucs_probe(struct platform_device *pdev)
+{
+	struct xrt_ucs *ucs = NULL;
+	struct resource *res;
+	int ret;
+
+	ucs = devm_kzalloc(&pdev->dev, sizeof(*ucs), GFP_KERNEL);
+	if (!ucs)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, ucs);
+	ucs->pdev = pdev;
+	mutex_init(&ucs->ucs_lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		ret = -EINVAL;
+		goto failed;
+	}
+	ucs->ucs_base = ioremap(res->start, resource_size(res));
+	if (!ucs->ucs_base) {
+		UCS_ERR(ucs, "map base %pR failed", res);
+		ret = -EFAULT;
+		goto failed;
+	}
+	ucs_enable(ucs);
+	ucs->evt_hdl = xrt_subdev_add_event_cb(pdev, xrt_ucs_leaf_match,
+		NULL, xrt_ucs_event_cb);
+
+	return 0;
+
+failed:
+	ucs_remove(pdev);
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_ucs_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_UCS_CONTROL_STATUS },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_ucs_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_ucs_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_ucs_table[] = {
+	{ XRT_UCS, (kernel_ulong_t)&xrt_ucs_data },
+	{ },
+};
+
+struct platform_driver xrt_ucs_driver = {
+	.driver = {
+		.name = XRT_UCS,
+	},
+	.probe = ucs_probe,
+	.remove = ucs_remove,
+	.id_table = xrt_ucs_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-vsec-golden.c b/drivers/fpga/alveo/lib/subdevs/xrt-vsec-golden.c
new file mode 100644
index 000000000000..27e6ae3c539f
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-vsec-golden.c
@@ -0,0 +1,238 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA VSEC Driver for golden image
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Max Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/platform_device.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-gpio.h"
+
+#define XRT_VSEC_GOLDEN "xrt_vsec_golden"
+
+/*
+ * Global static table listing all known devices that need to be brought
+ * up on all golden images that we need to support.
+ */
+static struct xrt_golden_endpoint {
+	unsigned short vendor;
+	unsigned short device;
+	struct xrt_md_endpoint ep;
+	const char *board_name;
+} vsec_golden_eps[] = {
+	{
+		.vendor = 0x10ee,
+		.device = 0xd020,
+		.ep = {
+			.ep_name = NODE_FLASH_VSEC,
+			.bar_off = 0x1f50000,
+			.size = 4096
+		},
+		.board_name = "u50",
+	},
+};
+
+/*
+ * The golden image version is read from the same location on all
+ * Alveo cards.
+ */
+static struct xrt_md_endpoint xrt_golden_ver_endpoint = {
+	.ep_name = NODE_GOLDEN_VER,
+	.bar_off = 0x131008,
+	.size = 4
+};
+
+struct xrt_vsec {
+	struct platform_device	*pdev;
+	char			*metadata;
+	unsigned short		vendor;
+	unsigned short		device;
+	const char		*bdname;
+};
+
+static int xrt_vsec_get_golden_ver(struct xrt_vsec *vsec)
+{
+	struct platform_device *gpio_leaf;
+	struct platform_device *pdev = vsec->pdev;
+	struct xrt_gpio_ioctl_rw gpio_arg = { 0 };
+	int err, ver;
+
+	gpio_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_GOLDEN_VER);
+	if (!gpio_leaf) {
+		xrt_err(pdev, "cannot get %s", NODE_GOLDEN_VER);
+		return -EINVAL;
+	}
+
+	gpio_arg.xgir_id = XRT_GPIO_GOLDEN_VER;
+	gpio_arg.xgir_buf = &ver;
+	gpio_arg.xgir_len = sizeof(ver);
+	gpio_arg.xgir_offset = 0;
+	err = xrt_subdev_ioctl(gpio_leaf, XRT_GPIO_READ, &gpio_arg);
+	(void) xrt_subdev_put_leaf(pdev, gpio_leaf);
+	if (err) {
+		xrt_err(pdev, "can't get golden image version: %d", err);
+		return err;
+	}
+
+	return ver;
+}
+
+static int xrt_vsec_add_node(struct xrt_vsec *vsec,
+	struct xrt_md_endpoint *dev)
+{
+	int ret;
+
+	xrt_info(vsec->pdev, "add ep %s", dev->ep_name);
+	ret = xrt_md_add_endpoint(DEV(vsec->pdev), vsec->metadata, dev);
+	if (ret)
+		xrt_err(vsec->pdev, "add ep failed, ret %d", ret);
+	return ret;
+}
+
+static int xrt_vsec_add_all_nodes(struct xrt_vsec *vsec)
+{
+	int i;
+	int rc = -ENOENT;
+
+	for (i = 0; i < ARRAY_SIZE(vsec_golden_eps); i++) {
+		struct xrt_golden_endpoint *ep = &vsec_golden_eps[i];
+
+		if (vsec->vendor == ep->vendor && vsec->device == ep->device) {
+			rc = xrt_vsec_add_node(vsec, &ep->ep);
+			if (rc)
+				break;
+		}
+	}
+
+	if (rc == 0)
+		rc = xrt_vsec_add_node(vsec, &xrt_golden_ver_endpoint);
+
+	return rc;
+}
+
+static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
+{
+	int ret;
+
+	ret = xrt_md_create(&vsec->pdev->dev, &vsec->metadata);
+	if (ret) {
+		xrt_err(vsec->pdev, "create metadata failed");
+		return ret;
+	}
+
+	ret = xrt_vsec_add_all_nodes(vsec);
+	if (ret) {
+		vfree(vsec->metadata);
+		vsec->metadata = NULL;
+	}
+	return ret;
+}
+
+static ssize_t VBNV_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_vsec *vsec = platform_get_drvdata(pdev);
+
+	return sprintf(buf, "xilinx_%s_GOLDEN_%d\n", vsec->bdname,
+		xrt_vsec_get_golden_ver(vsec));
+}
+static DEVICE_ATTR_RO(VBNV);
+
+static struct attribute *vsec_attrs[] = {
+	&dev_attr_VBNV.attr,
+	NULL,
+};
+
+static const struct attribute_group vsec_attrgroup = {
+	.attrs = vsec_attrs,
+};
+
+static int xrt_vsec_remove(struct platform_device *pdev)
+{
+	struct xrt_vsec	*vsec;
+
+	xrt_info(pdev, "leaving...");
+	(void) sysfs_remove_group(&DEV(pdev)->kobj, &vsec_attrgroup);
+	vsec = platform_get_drvdata(pdev);
+	vfree(vsec->metadata);
+	return 0;
+}
+
+static int xrt_vsec_probe(struct platform_device *pdev)
+{
+	struct xrt_vsec	*vsec;
+	int			ret = 0;
+	int			i;
+
+	xrt_info(pdev, "probing...");
+
+	vsec = devm_kzalloc(&pdev->dev, sizeof(*vsec), GFP_KERNEL);
+	if (!vsec)
+		return -ENOMEM;
+
+	vsec->pdev = pdev;
+	xrt_subdev_get_parent_id(pdev, &vsec->vendor, &vsec->device,
+		NULL, NULL);
+	platform_set_drvdata(pdev, vsec);
+
+	ret = xrt_vsec_create_metadata(vsec);
+	if (ret) {
+		xrt_err(pdev, "create metadata failed, ret %d", ret);
+		goto failed;
+	}
+	ret = xrt_subdev_create_partition(pdev, vsec->metadata);
+	if (ret < 0)
+		xrt_err(pdev, "create partition failed, ret %d", ret);
+	else
+		ret = 0;
+
+	/* Cache golden board name. */
+	for (i = 0; i < ARRAY_SIZE(vsec_golden_eps); i++) {
+		struct xrt_golden_endpoint *ep = &vsec_golden_eps[i];
+
+		if (vsec->vendor == ep->vendor && vsec->device == ep->device) {
+			vsec->bdname = ep->board_name;
+			break;
+		}
+	}
+
+	if (sysfs_create_group(&DEV(pdev)->kobj, &vsec_attrgroup))
+		xrt_err(pdev, "failed to create sysfs group");
+
+failed:
+	if (ret)
+		xrt_vsec_remove(pdev);
+
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_vsec_golden_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{ .ep_name = NODE_VSEC_GOLDEN },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_vsec_data = {
+};
+
+static const struct platform_device_id xrt_vsec_table[] = {
+	{ XRT_VSEC_GOLDEN, (kernel_ulong_t)&xrt_vsec_data },
+	{ },
+};
+
+struct platform_driver xrt_vsec_golden_driver = {
+	.driver = {
+		.name = XRT_VSEC_GOLDEN,
+	},
+	.probe = xrt_vsec_probe,
+	.remove = xrt_vsec_remove,
+	.id_table = xrt_vsec_table,
+};
diff --git a/drivers/fpga/alveo/lib/subdevs/xrt-vsec.c b/drivers/fpga/alveo/lib/subdevs/xrt-vsec.c
new file mode 100644
index 000000000000..c9a3f258fceb
--- /dev/null
+++ b/drivers/fpga/alveo/lib/subdevs/xrt-vsec.c
@@ -0,0 +1,337 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA VSEC Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/platform_device.h>
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+
+#define XRT_VSEC "xrt_vsec"
+
+#define VSEC_TYPE_UUID		0x50
+#define VSEC_TYPE_FLASH		0x51
+#define VSEC_TYPE_PLATINFO	0x52
+#define VSEC_TYPE_MAILBOX	0x53
+#define VSEC_TYPE_END		0xff
+
+#define VSEC_UUID_LEN		16
+
+struct xrt_vsec_header {
+	u32		format;
+	u32		length;
+	u32		entry_sz;
+	u32		rsvd;
+} __packed;
+
+#define head_rd(g, r)			\
+	ioread32(&((struct xrt_vsec_header __iomem *)(g)->base)->r)
+
+#define GET_BAR(entry)	(((entry)->bar_rev >> 4) & 0xf)
+#define GET_BAR_OFF(entry)	((entry)->off_lo | ((u64)(entry)->off_hi << 16))
+#define GET_REV(entry)	((entry)->bar_rev & 0xf)
+
+struct xrt_vsec_entry {
+	u8		type;
+	u8		bar_rev;
+	u16		off_lo;
+	u32		off_hi;
+	u8		ver_type;
+	u8		minor;
+	u8		major;
+	u8		rsvd0;
+	u32		rsvd1;
+} __packed;
+
+#define read_entry(g, i, e)					\
+	do {							\
+		u32 __iomem *p = (u32 __iomem *)((g)->base +	\
+			sizeof(struct xrt_vsec_header) +	\
+			(i) * sizeof(struct xrt_vsec_entry));	\
+		u32 off;					\
+		for (off = 0;					\
+		    off < sizeof(struct xrt_vsec_entry) / 4;	\
+		    off++)					\
+			*((u32 *)(e) + off) = ioread32(p + off);\
+	} while (0)
+
+struct vsec_device {
+	u8		type;
+	char		*ep_name;
+	ulong		size;
+	char		*regmap;
+};
+
+static struct vsec_device vsec_devs[] = {
+	{
+		.type = VSEC_TYPE_UUID,
+		.ep_name = NODE_BLP_ROM,
+		.size = VSEC_UUID_LEN,
+		.regmap = "vsec-uuid",
+	},
+	{
+		.type = VSEC_TYPE_FLASH,
+		.ep_name = NODE_FLASH_VSEC,
+		.size = 4096,
+		.regmap = "vsec-flash",
+	},
+	{
+		.type = VSEC_TYPE_PLATINFO,
+		.ep_name = NODE_PLAT_INFO,
+		.size = 4,
+		.regmap = "vsec-platinfo",
+	},
+	{
+		.type = VSEC_TYPE_MAILBOX,
+		.ep_name = NODE_MAILBOX_VSEC,
+		.size = 48,
+		.regmap = "vsec-mbx",
+	},
+};
+
+struct xrt_vsec {
+	struct platform_device	*pdev;
+	void __iomem		*base;
+	ulong			length;
+
+	char			*metadata;
+	char			uuid[VSEC_UUID_LEN];
+};
+
+static char *type2epname(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+		if (vsec_devs[i].type == type)
+			return vsec_devs[i].ep_name;
+	}
+
+	return NULL;
+}
+
+static ulong type2size(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+		if (vsec_devs[i].type == type)
+			return vsec_devs[i].size;
+	}
+
+	return 0;
+}
+
+static char *type2regmap(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+		if (vsec_devs[i].type == type)
+			return vsec_devs[i].regmap;
+	}
+
+	return NULL;
+}
+
+static int xrt_vsec_add_node(struct xrt_vsec *vsec,
+	void *md_blob, struct xrt_vsec_entry *p_entry)
+{
+	struct xrt_md_endpoint ep;
+	char regmap_ver[64];
+	int ret;
+
+	if (!type2epname(p_entry->type))
+		return -EINVAL;
+
+	/*
+	 * A VSEC may describe more than one mailbox instance on a card
+	 * with more than one physical function. This is not supported
+	 * for now; assume a single mailbox.
+	 */
+
+	snprintf(regmap_ver, sizeof(regmap_ver), "%d-%d.%d.%d",
+		p_entry->ver_type, p_entry->major, p_entry->minor,
+		GET_REV(p_entry));
+	ep.ep_name = type2epname(p_entry->type);
+	ep.bar = GET_BAR(p_entry);
+	ep.bar_off = GET_BAR_OFF(p_entry);
+	ep.size = type2size(p_entry->type);
+	ep.regmap = type2regmap(p_entry->type);
+	ep.regmap_ver = regmap_ver;
+	ret = xrt_md_add_endpoint(DEV(vsec->pdev), vsec->metadata, &ep);
+	if (ret)
+		xrt_err(vsec->pdev, "add ep failed, ret %d", ret);
+
+	return ret;
+}
+
+static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
+{
+	struct xrt_vsec_entry entry;
+	int i, ret;
+
+	ret = xrt_md_create(&vsec->pdev->dev, &vsec->metadata);
+	if (ret) {
+		xrt_err(vsec->pdev, "create metadata failed");
+		return ret;
+	}
+
+	for (i = 0; i * sizeof(entry) < vsec->length -
+	    sizeof(struct xrt_vsec_header); i++) {
+		read_entry(vsec, i, &entry);
+		xrt_vsec_add_node(vsec, vsec->metadata, &entry);
+	}
+
+	return 0;
+}
+
+static int xrt_vsec_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	return 0;
+}
+
+static int xrt_vsec_mapio(struct xrt_vsec *vsec)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(vsec->pdev);
+	const u32 *bar;
+	const u64 *bar_off;
+	struct resource *res = NULL;
+	ulong addr;
+	int ret;
+
+	if (!pdata || xrt_md_size(DEV(vsec->pdev), pdata->xsp_dtb) <= 0) {
+		xrt_err(vsec->pdev, "empty metadata");
+		return -EINVAL;
+	}
+
+	ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, NODE_VSEC,
+		NULL, PROP_BAR_IDX, (const void **)&bar, NULL);
+	if (ret) {
+		xrt_err(vsec->pdev, "failed to get bar idx, ret %d", ret);
+		return -EINVAL;
+	}
+
+	ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, NODE_VSEC,
+		NULL, PROP_OFFSET, (const void **)&bar_off, NULL);
+	if (ret) {
+		xrt_err(vsec->pdev, "failed to get bar off, ret %d", ret);
+		return -EINVAL;
+	}
+
+	xrt_info(vsec->pdev, "Map vsec at bar %d, offset 0x%llx",
+		be32_to_cpu(*bar), be64_to_cpu(*bar_off));
+
+	xrt_subdev_get_barres(vsec->pdev, &res, be32_to_cpu(*bar));
+	if (!res) {
+		xrt_err(vsec->pdev, "failed to get bar addr");
+		return -EINVAL;
+	}
+
+	addr = res->start + (ulong)be64_to_cpu(*bar_off);
+
+	vsec->base = ioremap(addr, sizeof(struct xrt_vsec_header));
+	if (!vsec->base) {
+		xrt_err(vsec->pdev, "Map header failed");
+		return -EIO;
+	}
+
+	vsec->length = head_rd(vsec, length);
+	iounmap(vsec->base);
+	vsec->base = ioremap(addr, vsec->length);
+	if (!vsec->base) {
+		xrt_err(vsec->pdev, "map failed");
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int xrt_vsec_remove(struct platform_device *pdev)
+{
+	struct xrt_vsec	*vsec;
+
+	vsec = platform_get_drvdata(pdev);
+
+	if (vsec->base) {
+		iounmap(vsec->base);
+		vsec->base = NULL;
+	}
+
+	vfree(vsec->metadata);
+
+	return 0;
+}
+
+static int xrt_vsec_probe(struct platform_device *pdev)
+{
+	struct xrt_vsec	*vsec;
+	int			ret = 0;
+
+	vsec = devm_kzalloc(&pdev->dev, sizeof(*vsec), GFP_KERNEL);
+	if (!vsec)
+		return -ENOMEM;
+
+	vsec->pdev = pdev;
+	platform_set_drvdata(pdev, vsec);
+
+	ret = xrt_vsec_mapio(vsec);
+	if (ret)
+		goto failed;
+
+	ret = xrt_vsec_create_metadata(vsec);
+	if (ret) {
+		xrt_err(pdev, "create metadata failed, ret %d", ret);
+		goto failed;
+	}
+	ret = xrt_subdev_create_partition(pdev, vsec->metadata);
+	if (ret < 0)
+		xrt_err(pdev, "create partition failed, ret %d", ret);
+	else
+		ret = 0;
+
+failed:
+	if (ret)
+		xrt_vsec_remove(pdev);
+
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_vsec_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{ .ep_name = NODE_VSEC },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_vsec_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_vsec_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_vsec_table[] = {
+	{ XRT_VSEC, (kernel_ulong_t)&xrt_vsec_data },
+	{ },
+};
+
+struct platform_driver xrt_vsec_driver = {
+	.driver = {
+		.name = XRT_VSEC,
+	},
+	.probe = xrt_vsec_probe,
+	.remove = xrt_vsec_remove,
+	.id_table = xrt_vsec_table,
+};
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH Xilinx Alveo 6/8] fpga: xrt: header file for platform and parent drivers
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (4 preceding siblings ...)
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 5/8] fpga: xrt: platform drivers for subsystems in shell partition Sonal Santan
@ 2020-11-29  0:00 ` Sonal Santan
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver Sonal Santan
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-11-29  0:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Add private header files for platform and parent drivers.
Each header file defines the ioctls supported by the
corresponding platform or parent driver. The header files
also define the core data structures used by platform and
parent drivers to send and receive events.
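
As an illustrative sketch only (not code from this patch; the choice
of clock leaf and endpoint is arbitrary), a leaf driver could look up
another leaf by endpoint name and query it through the ioctls defined
in these headers:

	struct xrt_clock_ioctl_get get = { 0 };
	struct platform_device *leaf =
		xrt_subdev_get_leaf_by_epname(pdev, NODE_CLK_KERNEL1);

	if (leaf) {
		xrt_subdev_ioctl(leaf, XRT_CLOCK_GET, &get);
		xrt_subdev_put_leaf(pdev, leaf);
	}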

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/alveo/include/xmgmt-main.h    |  34 +++
 drivers/fpga/alveo/include/xrt-axigate.h   |  31 ++
 drivers/fpga/alveo/include/xrt-calib.h     |  28 ++
 drivers/fpga/alveo/include/xrt-clkfreq.h   |  21 ++
 drivers/fpga/alveo/include/xrt-clock.h     |  29 ++
 drivers/fpga/alveo/include/xrt-cmc.h       |  23 ++
 drivers/fpga/alveo/include/xrt-ddr-srsr.h  |  29 ++
 drivers/fpga/alveo/include/xrt-flash.h     |  28 ++
 drivers/fpga/alveo/include/xrt-gpio.h      |  41 +++
 drivers/fpga/alveo/include/xrt-icap.h      |  27 ++
 drivers/fpga/alveo/include/xrt-mailbox.h   |  44 +++
 drivers/fpga/alveo/include/xrt-metadata.h  | 184 ++++++++++++
 drivers/fpga/alveo/include/xrt-parent.h    | 103 +++++++
 drivers/fpga/alveo/include/xrt-partition.h |  33 ++
 drivers/fpga/alveo/include/xrt-subdev.h    | 333 +++++++++++++++++++++
 drivers/fpga/alveo/include/xrt-ucs.h       |  22 ++
 16 files changed, 1010 insertions(+)
 create mode 100644 drivers/fpga/alveo/include/xmgmt-main.h
 create mode 100644 drivers/fpga/alveo/include/xrt-axigate.h
 create mode 100644 drivers/fpga/alveo/include/xrt-calib.h
 create mode 100644 drivers/fpga/alveo/include/xrt-clkfreq.h
 create mode 100644 drivers/fpga/alveo/include/xrt-clock.h
 create mode 100644 drivers/fpga/alveo/include/xrt-cmc.h
 create mode 100644 drivers/fpga/alveo/include/xrt-ddr-srsr.h
 create mode 100644 drivers/fpga/alveo/include/xrt-flash.h
 create mode 100644 drivers/fpga/alveo/include/xrt-gpio.h
 create mode 100644 drivers/fpga/alveo/include/xrt-icap.h
 create mode 100644 drivers/fpga/alveo/include/xrt-mailbox.h
 create mode 100644 drivers/fpga/alveo/include/xrt-metadata.h
 create mode 100644 drivers/fpga/alveo/include/xrt-parent.h
 create mode 100644 drivers/fpga/alveo/include/xrt-partition.h
 create mode 100644 drivers/fpga/alveo/include/xrt-subdev.h
 create mode 100644 drivers/fpga/alveo/include/xrt-ucs.h

diff --git a/drivers/fpga/alveo/include/xmgmt-main.h b/drivers/fpga/alveo/include/xmgmt-main.h
new file mode 100644
index 000000000000..3f26c480ce27
--- /dev/null
+++ b/drivers/fpga/alveo/include/xmgmt-main.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XMGMT_MAIN_H_
+#define	_XMGMT_MAIN_H_
+
+#include <linux/xrt/xclbin.h>
+
+enum xrt_mgmt_main_ioctl_cmd {
+	// section needs to be vfree'd by caller
+	XRT_MGMT_MAIN_GET_AXLF_SECTION = 0,
+	// vbnv needs to be kfree'd by caller
+	XRT_MGMT_MAIN_GET_VBNV,
+};
+
+enum provider_kind {
+	XMGMT_BLP,
+	XMGMT_PLP,
+	XMGMT_ULP,
+};
+
+struct xrt_mgmt_main_ioctl_get_axlf_section {
+	enum provider_kind xmmigas_axlf_kind;
+	enum axlf_section_kind xmmigas_section_kind;
+	void *xmmigas_section;
+	u64 xmmigas_section_size;
+};
+
+#endif	/* _XMGMT_MAIN_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-axigate.h b/drivers/fpga/alveo/include/xrt-axigate.h
new file mode 100644
index 000000000000..b1dd70546040
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-axigate.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_AXIGATE_H_
+#define	_XRT_AXIGATE_H_
+
+
+#include "xrt-subdev.h"
+#include "xrt-metadata.h"
+
+/*
+ * AXIGATE driver IOCTL calls.
+ */
+enum xrt_axigate_ioctl_cmd {
+	XRT_AXIGATE_FREEZE = 0,
+	XRT_AXIGATE_FREE,
+};
+
+/* the ep names are in the order of hardware layers */
+static const char * const xrt_axigate_epnames[] = {
+	NODE_GATE_PLP,
+	NODE_GATE_ULP,
+	NULL
+};
+
+#endif	/* _XRT_AXIGATE_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-calib.h b/drivers/fpga/alveo/include/xrt-calib.h
new file mode 100644
index 000000000000..5e5bb5cec285
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-calib.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_CALIB_H_
+#define	_XRT_CALIB_H_
+
+#include "xrt-subdev.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * Memory calibration driver IOCTL calls.
+ */
+enum xrt_calib_results {
+	XRT_CALIB_UNKNOWN,
+	XRT_CALIB_SUCCEEDED,
+	XRT_CALIB_FAILED,
+};
+
+enum xrt_calib_ioctl_cmd {
+	XRT_CALIB_RESULT = 0,
+};
+
+#endif	/* _XRT_CALIB_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-clkfreq.h b/drivers/fpga/alveo/include/xrt-clkfreq.h
new file mode 100644
index 000000000000..60e4109cc05a
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-clkfreq.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_CLKFREQ_H_
+#define	_XRT_CLKFREQ_H_
+
+#include "xrt-subdev.h"
+
+/*
+ * CLKFREQ driver IOCTL calls.
+ */
+enum xrt_clkfreq_ioctl_cmd {
+	XRT_CLKFREQ_READ = 0,
+};
+
+#endif	/* _XRT_CLKFREQ_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-clock.h b/drivers/fpga/alveo/include/xrt-clock.h
new file mode 100644
index 000000000000..d98d9d619bb2
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-clock.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_CLOCK_H_
+#define	_XRT_CLOCK_H_
+
+#include "xrt-subdev.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * CLOCK driver IOCTL calls.
+ */
+enum xrt_clock_ioctl_cmd {
+	XRT_CLOCK_SET = 0,
+	XRT_CLOCK_GET,
+	XRT_CLOCK_VERIFY,
+};
+
+struct xrt_clock_ioctl_get {
+	u16 freq;
+	u32 freq_cnter;
+};
+
+#endif	/* _XRT_CLOCK_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-cmc.h b/drivers/fpga/alveo/include/xrt-cmc.h
new file mode 100644
index 000000000000..f2bb61a2ab23
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-cmc.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_CMC_H_
+#define	_XRT_CMC_H_
+
+#include "xrt-subdev.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * CMC driver IOCTL calls.
+ */
+enum xrt_cmc_ioctl_cmd {
+	XRT_CMC_READ_BOARD_INFO = 0,
+	XRT_CMC_READ_SENSORS,
+};
+
+#endif	/* _XRT_CMC_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-ddr-srsr.h b/drivers/fpga/alveo/include/xrt-ddr-srsr.h
new file mode 100644
index 000000000000..56dc2ff8ea7c
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-ddr-srsr.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef _XRT_DDR_SRSR_H_
+#define _XRT_DDR_SRSR_H_
+
+#include "xrt-subdev.h"
+
+/*
+ * ddr-srsr driver IOCTL calls.
+ */
+enum xrt_ddr_srsr_ioctl_cmd {
+	XRT_SRSR_FAST_CALIB,
+	XRT_SRSR_CALIB,
+	XRT_SRSR_EP_NAME,
+};
+
+struct xrt_srsr_ioctl_calib {
+	void	*xsic_buf;
+	u32	xsic_size;
+	bool	xsic_retention;
+};
+
+#endif
diff --git a/drivers/fpga/alveo/include/xrt-flash.h b/drivers/fpga/alveo/include/xrt-flash.h
new file mode 100644
index 000000000000..949f490a3154
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-flash.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_FLASH_H_
+#define	_XRT_FLASH_H_
+
+#include "xrt-subdev.h"
+
+/*
+ * Flash controller driver IOCTL calls.
+ */
+enum xrt_flash_ioctl_cmd {
+	XRT_FLASH_GET_SIZE = 0,
+	XRT_FLASH_READ,
+};
+
+struct xrt_flash_ioctl_read {
+	char *xfir_buf;
+	size_t xfir_size;
+	loff_t xfir_offset;
+};
+
+#endif	/* _XRT_FLASH_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-gpio.h b/drivers/fpga/alveo/include/xrt-gpio.h
new file mode 100644
index 000000000000..d85a356f5281
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-gpio.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_GPIO_H_
+#define	_XRT_GPIO_H_
+
+#include "xrt-subdev.h"
+
+/*
+ * GPIO driver IOCTL calls.
+ */
+enum xrt_gpio_ioctl_cmd {
+	XRT_GPIO_READ = 0,
+	XRT_GPIO_WRITE,
+};
+
+enum xrt_gpio_id {
+	XRT_GPIO_ROM_UUID,
+	XRT_GPIO_DDR_CALIB,
+	XRT_GPIO_GOLDEN_VER,
+	XRT_GPIO_MAX
+};
+
+struct xrt_gpio_ioctl_rw {
+	u32	xgir_id;
+	void	*xgir_buf;
+	u32	xgir_len;
+	u32	xgir_offset;
+};
+
+struct xrt_gpio_ioctl_intf_uuid {
+	u32	xgir_uuid_num;
+	uuid_t	*xgir_uuids;
+};
+
+#endif	/* _XRT_GPIO_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-icap.h b/drivers/fpga/alveo/include/xrt-icap.h
new file mode 100644
index 000000000000..ea9e688e07eb
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-icap.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_ICAP_H_
+#define	_XRT_ICAP_H_
+
+#include "xrt-subdev.h"
+
+/*
+ * ICAP driver IOCTL calls.
+ */
+enum xrt_icap_ioctl_cmd {
+	XRT_ICAP_WRITE = 0,
+	XRT_ICAP_IDCODE,
+};
+
+struct xrt_icap_ioctl_wr {
+	void	*xiiw_bit_data;
+	u32	xiiw_data_len;
+};
+
+#endif	/* _XRT_ICAP_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-mailbox.h b/drivers/fpga/alveo/include/xrt-mailbox.h
new file mode 100644
index 000000000000..cfa7e112c51b
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-mailbox.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_MAILBOX_H_
+#define	_XRT_MAILBOX_H_
+
+/*
+ * Mailbox IP driver IOCTL calls.
+ */
+enum xrt_mailbox_ioctl_cmd {
+	XRT_MAILBOX_POST = 0,
+	XRT_MAILBOX_REQUEST,
+	XRT_MAILBOX_LISTEN,
+};
+
+struct xrt_mailbox_ioctl_post {
+	u64 xmip_req_id; /* 0 means response */
+	bool xmip_sw_ch;
+	void *xmip_data;
+	size_t xmip_data_size;
+};
+
+struct xrt_mailbox_ioctl_request {
+	bool xmir_sw_ch;
+	u32 xmir_resp_ttl;
+	void *xmir_req;
+	size_t xmir_req_size;
+	void *xmir_resp;
+	size_t xmir_resp_size;
+};
+
+typedef	void (*mailbox_msg_cb_t)(void *arg, void *data, size_t len,
+	u64 msgid, int err, bool sw_ch);
+struct xrt_mailbox_ioctl_listen {
+	mailbox_msg_cb_t xmil_cb;
+	void *xmil_cb_arg;
+};
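+
+/*
+ * Illustrative sketch (assumption, not part of this patch; my_msg_cb and
+ * my_state are hypothetical): a peer could register a listener through
+ * the common leaf ioctl entry point roughly as follows:
+ *
+ *	struct xrt_mailbox_ioctl_listen listen = {
+ *		.xmil_cb = my_msg_cb,
+ *		.xmil_cb_arg = my_state,
+ *	};
+ *	xrt_subdev_ioctl(mailbox_leaf, XRT_MAILBOX_LISTEN, &listen);
+ */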
+
+#endif	/* _XRT_MAILBOX_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-metadata.h b/drivers/fpga/alveo/include/xrt-metadata.h
new file mode 100644
index 000000000000..f445bfc279d2
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-metadata.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Xilinx Alveo FPGA Metadata Handling
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef _XRT_METADATA_H
+#define _XRT_METADATA_H
+
+#include <linux/device.h>
+#include <linux/vmalloc.h>
+#include <linux/uuid.h>
+
+#define PROP_COMPATIBLE "compatible"
+#define PROP_PF_NUM "pcie_physical_function"
+#define PROP_BAR_IDX "pcie_bar_mapping"
+#define PROP_IO_OFFSET "reg"
+#define PROP_INTERRUPTS "interrupts"
+#define PROP_INTERFACE_UUID "interface_uuid"
+#define PROP_LOGIC_UUID "logic_uuid"
+#define PROP_VERSION_MAJOR "firmware_version_major"
+
+#define PROP_HWICAP "axi_hwicap"
+#define PROP_PDI_CONFIG "pdi_config_mem"
+
+#define NODE_ENDPOINTS "addressable_endpoints"
+#define INTERFACES_PATH "/interfaces"
+
+#define NODE_FIRMWARE "firmware"
+#define NODE_INTERFACES "interfaces"
+#define NODE_PARTITION_INFO "partition_info"
+
+#define NODE_FLASH "ep_card_flash_program_00"
+#define NODE_XVC_PUB "ep_debug_bscan_user_00"
+#define NODE_XVC_PRI "ep_debug_bscan_mgmt_00"
+#define NODE_SYSMON "ep_cmp_sysmon_00"
+#define NODE_AF_BLP_CTRL_MGMT "ep_firewall_blp_ctrl_mgmt_00"
+#define NODE_AF_BLP_CTRL_USER "ep_firewall_blp_ctrl_user_00"
+#define NODE_AF_CTRL_MGMT "ep_firewall_ctrl_mgmt_00"
+#define NODE_AF_CTRL_USER "ep_firewall_ctrl_user_00"
+#define NODE_AF_CTRL_DEBUG "ep_firewall_ctrl_debug_00"
+#define NODE_AF_DATA_H2C "ep_firewall_data_h2c_00"
+#define NODE_AF_DATA_C2H "ep_firewall_data_c2h_00"
+#define NODE_AF_DATA_P2P "ep_firewall_data_p2p_00"
+#define NODE_AF_DATA_M2M "ep_firewall_data_m2m_00"
+#define NODE_CMC_REG "ep_cmc_regmap_00"
+#define NODE_CMC_RESET "ep_cmc_reset_00"
+#define NODE_CMC_MUTEX "ep_cmc_mutex_00"
+#define NODE_CMC_FW_MEM "ep_cmc_firmware_mem_00"
+#define NODE_ERT_FW_MEM "ep_ert_firmware_mem_00"
+#define NODE_ERT_CQ_MGMT "ep_ert_command_queue_mgmt_00"
+#define NODE_ERT_CQ_USER "ep_ert_command_queue_user_00"
+#define NODE_MAILBOX_MGMT "ep_mailbox_mgmt_00"
+#define NODE_MAILBOX_USER "ep_mailbox_user_00"
+#define NODE_GATE_PLP "ep_pr_isolate_plp_00"
+#define NODE_GATE_ULP "ep_pr_isolate_ulp_00"
+#define NODE_PCIE_MON "ep_pcie_link_mon_00"
+#define NODE_DDR_CALIB "ep_ddr_mem_calib_00"
+#define NODE_CLK_KERNEL1 "ep_aclk_kernel_00"
+#define NODE_CLK_KERNEL2 "ep_aclk_kernel_01"
+#define NODE_CLK_KERNEL3 "ep_aclk_hbm_00"
+#define NODE_KDMA_CTRL "ep_kdma_ctrl_00"
+#define NODE_FPGA_CONFIG "ep_fpga_configuration_00"
+#define NODE_ERT_SCHED "ep_ert_sched_00"
+#define NODE_XDMA "ep_xdma_00"
+#define NODE_MSIX "ep_msix_00"
+#define NODE_QDMA "ep_qdma_00"
+#define NODE_QDMA4 "ep_qdma4_00"
+#define NODE_STM "ep_stream_traffic_manager_00"
+#define NODE_STM4 "ep_stream_traffic_manager4_00"
+#define NODE_CLK_SHUTDOWN "ep_aclk_shutdown_00"
+#define NODE_ERT_BASE "ep_ert_base_address_00"
+#define NODE_ERT_RESET "ep_ert_reset_00"
+#define NODE_CLKFREQ_K1 "ep_freq_cnt_aclk_kernel_00"
+#define NODE_CLKFREQ_K2 "ep_freq_cnt_aclk_kernel_01"
+#define NODE_CLKFREQ_HBM "ep_freq_cnt_aclk_hbm_00"
+#define NODE_GAPPING "ep_gapping_demand_00"
+#define NODE_UCS_CONTROL_STATUS "ep_ucs_control_status_00"
+#define NODE_P2P "ep_p2p_00"
+#define NODE_REMAP_P2P "ep_remap_p2p_00"
+#define NODE_DDR4_RESET_GATE "ep_ddr_mem_srsr_gate_00"
+#define NODE_ADDR_TRANSLATOR "ep_remap_data_c2h_00"
+#define NODE_MAILBOX_XRT "ep_mailbox_user_to_ert_00"
+#define NODE_PMC_INTR   "ep_pmc_intr_00"
+#define NODE_PMC_MUX    "ep_pmc_mux_00"
+
+/* driver defined endpoints */
+#define NODE_VSEC "drv_ep_vsec_00"
+#define NODE_VSEC_GOLDEN "drv_ep_vsec_golden_00"
+#define NODE_BLP_ROM "drv_ep_blp_rom_00"
+#define NODE_MAILBOX_VSEC "ep_mailbox_vsec_00"
+#define NODE_PLAT_INFO "drv_ep_platform_info_mgmt_00"
+#define NODE_TEST "drv_ep_test_00"
+#define NODE_MGMT_MAIN "drv_ep_mgmt_main_00"
+#define NODE_FLASH_VSEC "drv_ep_card_flash_program_00"
+#define NODE_GOLDEN_VER "drv_ep_golden_ver_00"
+#define NODE_PARTITION_INFO_BLP "partition_info_0"
+#define NODE_PARTITION_INFO_PLP "partition_info_1"
+
+#define NODE_DDR_SRSR "drv_ep_ddr_srsr"
+#define REGMAP_DDR_SRSR "drv_ddr_srsr"
+
+#define PROP_OFFSET "drv_offset"
+#define PROP_CLK_FREQ "drv_clock_frequency"
+#define PROP_CLK_CNT "drv_clock_frequency_counter"
+#define PROP_VBNV "vbnv"
+#define PROP_VROM "vrom"
+#define PROP_PARTITION_LEVEL "partition_level"
+
+struct xrt_md_endpoint {
+	const char	*ep_name;
+	u32		bar;
+	long		bar_off;
+	ulong		size;
+	char		*regmap;
+	char		*regmap_ver;
+};
+
+/* Note: res_id is defined by leaf driver and must start with 0. */
+struct xrt_iores_map {
+	char		*res_name;
+	int		res_id;
+};
+
+static inline int xrt_md_res_name2id(const struct xrt_iores_map *res_map,
+	int entry_num, const char *res_name)
+{
+	int i;
+
+	BUG_ON(res_name == NULL);
+	for (i = 0; i < entry_num; i++) {
+		if (!strcmp(res_name, res_map->res_name))
+			return res_map->res_id;
+		res_map++;
+	}
+	return -1;
+}
+
+static inline const char *
+xrt_md_res_id2name(const struct xrt_iores_map *res_map, int entry_num, int id)
+{
+	int i;
+
+	BUG_ON(id >= entry_num);
+	for (i = 0; i < entry_num; i++) {
+		if (res_map->res_id == id)
+			return res_map->res_name;
+		res_map++;
+	}
+	return NULL;
+}
+
+long xrt_md_size(struct device *dev, const char *blob);
+int xrt_md_create(struct device *dev, char **blob);
+int xrt_md_add_endpoint(struct device *dev, char *blob,
+	struct xrt_md_endpoint *ep);
+int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
+	char *regmap_name);
+int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
+	const char *regmap_name, const char *prop, const void **val, int *size);
+int xrt_md_set_prop(struct device *dev, char *blob, const char *ep_name,
+	const char *regmap_name, const char *prop, const void *val, int size);
+int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
+	const char *ep_name, const char *regmap_name, const char *new_ep_name);
+int xrt_md_copy_all_eps(struct device *dev, char  *blob, const char *src_blob);
+int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
+	const char *ep_name,  const char *regmap_name,
+	char **next_ep, char **next_regmap);
+int xrt_md_get_compatible_epname(struct device *dev, const char *blob,
+	const char *regmap_name, const char **ep_name);
+int xrt_md_get_epname_pointer(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name, const char **epname);
+void xrt_md_pack(struct device *dev, char *blob);
+char *xrt_md_dup(struct device *dev, const char *blob);
+int xrt_md_get_intf_uuids(struct device *dev, const char *blob,
+	u32 *num_uuids, uuid_t *intf_uuids);
+int xrt_md_check_uuids(struct device *dev, const char *blob, char *subset_blob);
+int xrt_md_uuid_strtoid(struct device *dev, const char *uuidstr, uuid_t *uuid);
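+
+/*
+ * Illustrative sketch (assumption, not part of this patch): building a
+ * minimal blob with a single endpoint; the blob must be vfree'd when done:
+ *
+ *	char *blob = NULL;
+ *	struct xrt_md_endpoint ep = { .ep_name = NODE_TEST, .bar = 0,
+ *				      .bar_off = 0x10000, .size = 0x1000 };
+ *
+ *	if (!xrt_md_create(dev, &blob)) {
+ *		xrt_md_add_endpoint(dev, blob, &ep);
+ *		xrt_md_pack(dev, blob);
+ *		vfree(blob);
+ *	}
+ */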
+
+#endif
diff --git a/drivers/fpga/alveo/include/xrt-parent.h b/drivers/fpga/alveo/include/xrt-parent.h
new file mode 100644
index 000000000000..28de117fbf91
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-parent.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_PARENT_H_
+#define	_XRT_PARENT_H_
+
+#include "xrt-subdev.h"
+#include "xrt-partition.h"
+
+/*
+ * Parent IOCTL calls.
+ */
+enum xrt_parent_ioctl_cmd {
+	/* Leaf actions. */
+	XRT_PARENT_GET_LEAF = 0,
+	XRT_PARENT_PUT_LEAF,
+	XRT_PARENT_GET_LEAF_HOLDERS,
+
+	/* Partition actions. */
+	XRT_PARENT_CREATE_PARTITION,
+	XRT_PARENT_REMOVE_PARTITION,
+	XRT_PARENT_LOOKUP_PARTITION,
+	XRT_PARENT_WAIT_PARTITION_BRINGUP,
+
+	/* Event actions. */
+	XRT_PARENT_ADD_EVENT_CB,
+	XRT_PARENT_REMOVE_EVENT_CB,
+	XRT_PARENT_ASYNC_BROADCAST_EVENT,
+
+	/* Device info. */
+	XRT_PARENT_GET_RESOURCE,
+	XRT_PARENT_GET_ID,
+
+	/* Misc. */
+	XRT_PARENT_HOT_RESET,
+	XRT_PARENT_HWMON,
+};
+
+struct xrt_parent_ioctl_get_leaf {
+	struct platform_device *xpigl_pdev; /* caller's pdev */
+	xrt_subdev_match_t xpigl_match_cb;
+	void *xpigl_match_arg;
+	struct platform_device *xpigl_leaf; /* target leaf pdev */
+};
+
+struct xrt_parent_ioctl_put_leaf {
+	struct platform_device *xpipl_pdev; /* caller's pdev */
+	struct platform_device *xpipl_leaf; /* target's pdev */
+};
+
+struct xrt_parent_ioctl_lookup_partition {
+	struct platform_device *xpilp_pdev; /* caller's pdev */
+	xrt_subdev_match_t xpilp_match_cb;
+	void *xpilp_match_arg;
+	int xpilp_part_inst;
+};
+
+struct xrt_parent_ioctl_evt_cb {
+	struct platform_device *xevt_pdev; /* caller's pdev */
+	xrt_subdev_match_t xevt_match_cb;
+	void *xevt_match_arg;
+	xrt_event_cb_t xevt_cb;
+	void *xevt_hdl;
+};
+
+struct xrt_parent_ioctl_async_broadcast_evt {
+	struct platform_device *xaevt_pdev; /* caller's pdev */
+	enum xrt_events xaevt_event;
+	xrt_async_broadcast_event_cb_t xaevt_cb;
+	void *xaevt_arg;
+};
+
+struct xrt_parent_ioctl_get_holders {
+	struct platform_device *xpigh_pdev; /* caller's pdev */
+	char *xpigh_holder_buf;
+	size_t xpigh_holder_buf_len;
+};
+
+struct xrt_parent_ioctl_get_res {
+	struct resource *xpigr_res;
+};
+
+struct xrt_parent_ioctl_get_id {
+	unsigned short  xpigi_vendor_id;
+	unsigned short  xpigi_device_id;
+	unsigned short  xpigi_sub_vendor_id;
+	unsigned short  xpigi_sub_device_id;
+};
+
+struct xrt_parent_ioctl_hwmon {
+	bool xpih_register;
+	const char *xpih_name;
+	void *xpih_drvdata;
+	const struct attribute_group **xpih_groups;
+	struct device *xpih_hwmon_dev;
+};
+
+#endif	/* _XRT_PARENT_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-partition.h b/drivers/fpga/alveo/include/xrt-partition.h
new file mode 100644
index 000000000000..e0048f2a146f
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-partition.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_PARTITION_H_
+#define	_XRT_PARTITION_H_
+
+#include "xrt-subdev.h"
+
+/*
+ * Partition driver IOCTL calls.
+ */
+enum xrt_partition_ioctl_cmd {
+	XRT_PARTITION_GET_LEAF = 0,
+	XRT_PARTITION_PUT_LEAF,
+	XRT_PARTITION_INIT_CHILDREN,
+	XRT_PARTITION_FINI_CHILDREN,
+	XRT_PARTITION_EVENT,
+};
+
+struct xrt_partition_ioctl_event {
+	enum xrt_events xpie_evt;
+	struct xrt_parent_ioctl_evt_cb *xpie_cb;
+};
+
+extern int xrt_subdev_parent_ioctl(struct platform_device *pdev,
+	u32 cmd, void *arg);
+
+#endif	/* _XRT_PARTITION_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-subdev.h b/drivers/fpga/alveo/include/xrt-subdev.h
new file mode 100644
index 000000000000..65ecbd9c596b
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-subdev.h
@@ -0,0 +1,333 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_SUBDEV_H_
+#define	_XRT_SUBDEV_H_
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/pci.h>
+#include <linux/libfdt_env.h>
+#include "libfdt.h"
+
+/*
+ * Every subdev driver should have an ID for others to refer to it.
+ * There can be an unlimited number of instances of a subdev driver. A
+ * <subdev_id, subdev_instance> tuple uniquely identifies a specific
+ * instance of a subdev driver.
+ * NOTE: PLEASE do not change the order of IDs. Subdevices in the same
+ * partition are initialized in this order.
+ */
+enum xrt_subdev_id {
+	XRT_SUBDEV_PART = 0,
+	XRT_SUBDEV_VSEC,
+	XRT_SUBDEV_VSEC_GOLDEN,
+	XRT_SUBDEV_GPIO,
+	XRT_SUBDEV_AXIGATE,
+	XRT_SUBDEV_ICAP,
+	XRT_SUBDEV_TEST,
+	XRT_SUBDEV_MGMT_MAIN,
+	XRT_SUBDEV_QSPI,
+	XRT_SUBDEV_MAILBOX,
+	XRT_SUBDEV_CMC,
+	XRT_SUBDEV_CALIB,
+	XRT_SUBDEV_CLKFREQ,
+	XRT_SUBDEV_CLOCK,
+	XRT_SUBDEV_SRSR,
+	XRT_SUBDEV_UCS,
+	XRT_SUBDEV_NUM,
+};
+
+/*
+ * If populated by subdev driver, parent will handle the mechanics of
+ * char device (un)registration.
+ */
+enum xrt_subdev_file_mode {
+	// Infra create cdev, default file name
+	XRT_SUBDEV_FILE_DEFAULT = 0,
+	// Infra create cdev, need to encode inst num in file name
+	XRT_SUBDEV_FILE_MULTI_INST,
+	// No auto creation of cdev by infra, leaf handles it by itself
+	XRT_SUBDEV_FILE_NO_AUTO,
+};
+struct xrt_subdev_file_ops {
+	const struct file_operations xsf_ops;
+	dev_t xsf_dev_t;
+	const char *xsf_dev_name;
+	enum xrt_subdev_file_mode xsf_mode;
+};
+
+/*
+ * Subdev driver callbacks populated by subdev driver.
+ */
+struct xrt_subdev_drv_ops {
+	/*
+	 * Per driver module callback. Don't take any arguments.
+	 * If defined these are called as part of driver (un)registration.
+	 */
+	int (*xsd_post_init)(void);
+	void (*xsd_pre_exit)(void);
+
+	/*
+	 * Per driver instance callback. The pdev points to the instance.
+	 * If defined these are called by other leaf drivers.
+	 * Note that root driver may call into xsd_ioctl of a partition driver.
+	 */
+	int (*xsd_ioctl)(struct platform_device *pdev, u32 cmd, void *arg);
+};
+
+/*
+ * Defined and populated by subdev driver, exported as driver_data in
+ * struct platform_device_id.
+ */
+struct xrt_subdev_drvdata {
+	struct xrt_subdev_file_ops xsd_file_ops;
+	struct xrt_subdev_drv_ops xsd_dev_ops;
+};
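+
+/*
+ * Illustrative sketch (assumption, not part of this patch; my_open,
+ * my_release and my_ioctl are hypothetical): a leaf that wants a device
+ * node and an ioctl handler could define its drvdata like:
+ *
+ *	static struct xrt_subdev_drvdata my_drvdata = {
+ *		.xsd_file_ops = {
+ *			.xsf_ops = { .owner = THIS_MODULE,
+ *				.open = my_open, .release = my_release },
+ *			.xsf_mode = XRT_SUBDEV_FILE_DEFAULT,
+ *		},
+ *		.xsd_dev_ops = { .xsd_ioctl = my_ioctl },
+ *	};
+ */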
+
+/*
+ * Partially initialized by parent driver, then, passed in as subdev driver's
+ * platform data when creating subdev driver instance by calling platform
+ * device register API (platform_device_register_data() or the likes).
+ *
+ * Once device register API returns, platform driver framework makes a copy of
+ * this buffer and maintains its life cycle. The content of the buffer is
+ * completely owned by subdev driver.
+ *
+ * Thus, the parent driver should be very careful if it touches this buffer
+ * again once it has been handed over to the subdev driver. The data
+ * structure should not contain pointers to buffers managed by other
+ * drivers or by the parent driver, since those buffers could be freed
+ * before the platform data buffer is freed by the platform driver framework.
+ */
+typedef int (*xrt_subdev_parent_cb_t)(struct device *, void *, u32, void *);
+struct xrt_subdev_platdata {
+	/*
+	 * Per driver instance callback. The pdev points to the instance.
+	 * Should always be defined for subdev driver to call into its parent.
+	 */
+	xrt_subdev_parent_cb_t xsp_parent_cb;
+	void *xsp_parent_cb_arg;
+
+	/* Something to associate w/ root for msg printing. */
+	const char *xsp_root_name;
+
+	/*
+	 * Char dev support for this subdev instance.
+	 * Initialized by subdev driver.
+	 */
+	struct cdev xsp_cdev;
+	struct device *xsp_sysdev;
+	struct mutex xsp_devnode_lock;
+	struct completion xsp_devnode_comp;
+	int xsp_devnode_ref;
+	bool xsp_devnode_online;
+	bool xsp_devnode_excl;
+
+	/*
+	 * Subdev driver specific init data. The buffer should be embedded
+	 * in this data structure buffer after dtb, so that it can be freed
+	 * together with platform data.
+	 */
+	loff_t xsp_priv_off; /* Offset into this platform data buffer. */
+	size_t xsp_priv_len;
+
+	/*
+	 * Populated by parent driver to describe the device tree for
+	 * the subdev driver to handle. Should always be last one since it's
+	 * of variable length.
+	 */
+	char xsp_dtb[sizeof(struct fdt_header)];
+};
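+
+/*
+ * Illustrative sketch (assumption, not part of the API contract): a parent
+ * could hand a fully built platform data buffer, with the dtb and any
+ * private data appended at its tail, to a new subdev instance via:
+ *
+ *	pdev = platform_device_register_data(parent_dev, "xrt_vsec",
+ *		PLATFORM_DEVID_AUTO, pdata, pdata_len);
+ *
+ * The platform core copies the buffer, so the parent must not retain
+ * pointers into the original afterwards.
+ */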
+
+/*
+ * This struct defines the endpoints that belong to the same subdevice.
+ */
+struct xrt_subdev_ep_names {
+	const char *ep_name;
+	const char *regmap_name;
+};
+
+struct xrt_subdev_endpoints {
+	struct xrt_subdev_ep_names *xse_names;
+	/* minimum number of endpoints to support the subdevice */
+	u32 xse_min_ep;
+};
+
+/*
+ * It manages a list of xrt_subdevs for root and partition drivers.
+ */
+struct xrt_subdev_pool {
+	struct list_head xpool_dev_list;
+	struct device *xpool_owner;
+	struct mutex xpool_lock;
+	bool xpool_closing;
+};
+
+typedef bool (*xrt_subdev_match_t)(enum xrt_subdev_id,
+	struct platform_device *, void *);
+#define	XRT_SUBDEV_MATCH_PREV	((xrt_subdev_match_t)-1)
+#define	XRT_SUBDEV_MATCH_NEXT	((xrt_subdev_match_t)-2)
+
+/* All subdev drivers should use below common routines to print out msg. */
+#define	DEV(pdev)	(&(pdev)->dev)
+#define	DEV_PDATA(pdev)					\
+	((struct xrt_subdev_platdata *)dev_get_platdata(DEV(pdev)))
+#define	DEV_DRVDATA(pdev)				\
+	((struct xrt_subdev_drvdata *)			\
+	platform_get_device_id(pdev)->driver_data)
+#define	FMT_PRT(prt_fn, pdev, fmt, args...)		\
+	prt_fn(DEV(pdev), "%s %s: "fmt,			\
+	DEV_PDATA(pdev)->xsp_root_name, __func__, ##args)
+#define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
+#define xrt_warn(pdev, fmt, args...) FMT_PRT(dev_warn, pdev, fmt, ##args)
+#define xrt_info(pdev, fmt, args...) FMT_PRT(dev_info, pdev, fmt, ##args)
+#define xrt_dbg(pdev, fmt, args...) FMT_PRT(dev_dbg, pdev, fmt, ##args)
+
+/*
+ * Event notification.
+ */
+enum xrt_events {
+	XRT_EVENT_TEST = 0, // for testing
+	/*
+	 * Events related to specific subdev
+	 * Callback arg: struct xrt_event_arg_subdev
+	 */
+	XRT_EVENT_POST_CREATION,
+	XRT_EVENT_PRE_REMOVAL,
+	/*
+	 * Events related to change of the whole board
+	 * Callback arg: <none>
+	 */
+	XRT_EVENT_PRE_HOT_RESET,
+	XRT_EVENT_POST_HOT_RESET,
+	XRT_EVENT_PRE_GATE_CLOSE,
+	XRT_EVENT_POST_GATE_OPEN,
+	XRT_EVENT_POST_ATTACH,
+	XRT_EVENT_PRE_DETACH,
+};
+
+typedef int (*xrt_event_cb_t)(struct platform_device *pdev,
+	enum xrt_events evt, void *arg);
+typedef void (*xrt_async_broadcast_event_cb_t)(struct platform_device *pdev,
+	enum xrt_events evt, void *arg, bool success);
+
+struct xrt_event_arg_subdev {
+	enum xrt_subdev_id xevt_subdev_id;
+	int xevt_subdev_instance;
+};
+
+/*
+ * Flags in return value from event callback.
+ */
+/* Done with event handling, continue waiting for the next one */
+#define	XRT_EVENT_CB_CONTINUE	0x0
+/* Done with event handling, stop waiting for the next one */
+#define	XRT_EVENT_CB_STOP	0x1
+/* Error processing event */
+#define	XRT_EVENT_CB_ERR	0x2
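+
+/*
+ * Illustrative sketch (assumption, not part of this patch): an event
+ * callback registered via xrt_subdev_add_event_cb() might look like:
+ *
+ *	static int my_evt_cb(struct platform_device *pdev,
+ *		enum xrt_events evt, void *arg)
+ *	{
+ *		if (evt == XRT_EVENT_POST_CREATION) {
+ *			struct xrt_event_arg_subdev *esd = arg;
+ *			// react to <esd->xevt_subdev_id, instance>
+ *		}
+ *		return XRT_EVENT_CB_CONTINUE;
+ *	}
+ */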
+
+/*
+ * Subdev pool API for root and partition drivers only.
+ */
+extern void xrt_subdev_pool_init(struct device *dev,
+	struct xrt_subdev_pool *spool);
+extern int xrt_subdev_pool_fini(struct xrt_subdev_pool *spool);
+extern int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
+	xrt_subdev_match_t match, void *arg, struct device *holder_dev,
+	struct platform_device **pdevp);
+extern int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, struct device *holder_dev);
+extern int xrt_subdev_pool_add(struct xrt_subdev_pool *spool,
+	enum xrt_subdev_id id, xrt_subdev_parent_cb_t pcb,
+	void *pcb_arg, char *dtb);
+extern int xrt_subdev_pool_del(struct xrt_subdev_pool *spool,
+	enum xrt_subdev_id id, int instance);
+extern int xrt_subdev_pool_event(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, xrt_subdev_match_t match, void *arg,
+	xrt_event_cb_t xevt_cb, enum xrt_events evt);
+extern ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, char *buf, size_t len);
+/*
+ * For leaf drivers.
+ */
+extern bool xrt_subdev_has_epname(struct platform_device *pdev, const char *nm);
+extern struct platform_device *xrt_subdev_get_leaf(
+	struct platform_device *pdev, xrt_subdev_match_t cb, void *arg);
+extern struct platform_device *xrt_subdev_get_leaf_by_id(
+	struct platform_device *pdev, enum xrt_subdev_id id, int instance);
+extern struct platform_device *xrt_subdev_get_leaf_by_epname(
+	struct platform_device *pdev, const char *name);
+extern int xrt_subdev_put_leaf(struct platform_device *pdev,
+	struct platform_device *leaf);
+extern int xrt_subdev_create_partition(struct platform_device *pdev,
+	char *dtb);
+extern int xrt_subdev_destroy_partition(struct platform_device *pdev,
+	int instance);
+extern int xrt_subdev_lookup_partition(
+	struct platform_device *pdev, xrt_subdev_match_t cb, void *arg);
+extern int xrt_subdev_wait_for_partition_bringup(struct platform_device *pdev);
+extern void *xrt_subdev_add_event_cb(struct platform_device *pdev,
+	xrt_subdev_match_t match, void *match_arg, xrt_event_cb_t cb);
+extern void xrt_subdev_remove_event_cb(
+	struct platform_device *pdev, void *hdl);
+extern int xrt_subdev_ioctl(struct platform_device *tgt, u32 cmd, void *arg);
+extern int xrt_subdev_broadcast_event(struct platform_device *pdev,
+	enum xrt_events evt);
+extern int xrt_subdev_broadcast_event_async(struct platform_device *pdev,
+	enum xrt_events evt, xrt_async_broadcast_event_cb_t cb, void *arg);
+extern void xrt_subdev_hot_reset(struct platform_device *pdev);
+extern void xrt_subdev_get_barres(struct platform_device *pdev,
+	struct resource **res, uint bar_idx);
+extern void xrt_subdev_get_parent_id(struct platform_device *pdev,
+	unsigned short *vendor, unsigned short *device,
+	unsigned short *subvendor, unsigned short *subdevice);
+extern struct device *xrt_subdev_register_hwmon(struct platform_device *pdev,
+	const char *name, void *drvdata, const struct attribute_group **grps);
+extern void xrt_subdev_unregister_hwmon(struct platform_device *pdev,
+	struct device *hwmon);
+
+extern int xrt_subdev_register_external_driver(enum xrt_subdev_id id,
+	struct platform_driver *drv, struct xrt_subdev_endpoints *eps);
+extern void xrt_subdev_unregister_external_driver(enum xrt_subdev_id id);
+
+/*
+ * Char dev APIs.
+ */
+static inline bool xrt_devnode_enabled(struct xrt_subdev_drvdata *drvdata)
+{
+	return drvdata && drvdata->xsd_file_ops.xsf_ops.open != NULL;
+}
+extern int xrt_devnode_create(struct platform_device *pdev,
+	const char *file_name, const char *inst_name);
+extern int xrt_devnode_destroy(struct platform_device *pdev);
+extern struct platform_device *xrt_devnode_open_excl(struct inode *inode);
+extern struct platform_device *xrt_devnode_open(struct inode *inode);
+extern void xrt_devnode_close(struct inode *inode);
+
+/* Helpers. */
+static inline void xrt_memcpy_fromio(void *buf, void __iomem *iomem, u32 size)
+{
+	int i;
+
+	BUG_ON(size & 0x3);
+	for (i = 0; i < size / 4; i++)
+		((u32 *)buf)[i] =
+			ioread32((char __iomem *)iomem + sizeof(u32) * i);
+}
+static inline void xrt_memcpy_toio(void __iomem *iomem, void *buf, u32 size)
+{
+	int i;
+
+	BUG_ON(size & 0x3);
+	for (i = 0; i < size / 4; i++)
+		iowrite32(((u32 *)buf)[i],
+			(char __iomem *)iomem + sizeof(u32) * i);
+}
+
+#endif	/* _XRT_SUBDEV_H_ */
diff --git a/drivers/fpga/alveo/include/xrt-ucs.h b/drivers/fpga/alveo/include/xrt-ucs.h
new file mode 100644
index 000000000000..a64b15bda865
--- /dev/null
+++ b/drivers/fpga/alveo/include/xrt-ucs.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_UCS_H_
+#define	_XRT_UCS_H_
+
+#include "xrt-subdev.h"
+
+/*
+ * UCS driver IOCTL calls.
+ */
+enum xrt_ucs_ioctl_cmd {
+	XRT_UCS_CHECK = 0,
+	XRT_UCS_ENABLE,
+};
+
+#endif	/* _XRT_UCS_H_ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (5 preceding siblings ...)
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 6/8] fpga: xrt: header file for platform and parent drivers Sonal Santan
@ 2020-11-29  0:00 ` Sonal Santan
  2020-12-01 20:51   ` Moritz Fischer
  2020-12-02  3:00   ` Xu Yilun
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 8/8] fpga: xrt: Kconfig and Makefile updates for XRT drivers Sonal Santan
                   ` (3 subsequent siblings)
  10 siblings, 2 replies; 29+ messages in thread
From: Sonal Santan @ 2020-11-29  0:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Add the management physical function driver core. The driver attaches
to the management physical function of Alveo devices. It instantiates
the root driver and one or more partition drivers, which in turn
instantiate platform drivers. The instantiation of partition and
platform drivers is completely data driven. The driver integrates
with the FPGA Manager framework and provides the xclbin download
service.
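
As a minimal sketch (assuming a caller that already holds the
fpga_manager instance created by this driver and a complete xclbin
buffer; not code from this patch), a download flows through the
standard FPGA Manager entry point, which drives the write_init,
write and write_complete ops implemented in xmgmt-fmgr-drv.c:

	struct fpga_image_info info = { 0 };

	info.buf = (const char *)xclbin;
	info.count = xclbin->m_header.m_length;
	rc = fpga_mgr_load(fmgr, &info);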

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c     | 194 ++++
 drivers/fpga/alveo/mgmt/xmgmt-fmgr.h         |  29 +
 drivers/fpga/alveo/mgmt/xmgmt-main-impl.h    |  36 +
 drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c | 930 +++++++++++++++++++
 drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c     | 190 ++++
 drivers/fpga/alveo/mgmt/xmgmt-main.c         | 843 +++++++++++++++++
 drivers/fpga/alveo/mgmt/xmgmt-root.c         | 375 ++++++++
 7 files changed, 2597 insertions(+)
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main.c
 create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-root.c

diff --git a/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c b/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
new file mode 100644
index 000000000000..d451b5a2c291
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
@@ -0,0 +1,194 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2019-2020 Xilinx, Inc.
+ * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
+ *
+ * Authors: Sonal.Santan@xilinx.com
+ */
+
+#include <linux/cred.h>
+#include <linux/efi.h>
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "xrt-subdev.h"
+#include "xmgmt-fmgr.h"
+#include "xrt-axigate.h"
+#include "xmgmt-main-impl.h"
+
+/*
+ * Container to capture and cache full xclbin as it is passed in blocks by FPGA
+ * Manager. The driver needs access to the full xclbin to walk through its
+ * sections. FPGA Manager's .write() backend sends incremental blocks without
+ * any knowledge of the xclbin format, forcing us to collect the blocks and
+ * stitch them together here.
+ */
+
+struct xfpga_klass {
+	const struct platform_device *pdev;
+	struct axlf         *blob;
+	char                 name[64];
+	size_t               count;
+	size_t               total_count;
+	struct mutex         axlf_lock;
+	int                  reader_ref;
+	enum fpga_mgr_states state;
+	enum xfpga_sec_level sec_level;
+};
+
+struct key *xfpga_keys;
+
+static int xmgmt_pr_write_init(struct fpga_manager *mgr,
+	struct fpga_image_info *info, const char *buf, size_t count)
+{
+	struct xfpga_klass *obj = mgr->priv;
+	const struct axlf *bin = (const struct axlf *)buf;
+
+	if (count < sizeof(struct axlf)) {
+		obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
+		return -EINVAL;
+	}
+
+	if (count > bin->m_header.m_length) {
+		obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
+		return -EINVAL;
+	}
+
+	/* Free up the previous blob */
+	vfree(obj->blob);
+	obj->blob = vmalloc(bin->m_header.m_length);
+	if (!obj->blob) {
+		obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
+		return -ENOMEM;
+	}
+
+	xrt_info(obj->pdev, "Begin download of xclbin %pUb of length %lld B",
+		&bin->m_header.uuid, bin->m_header.m_length);
+
+	obj->count = 0;
+	obj->total_count = bin->m_header.m_length;
+	obj->state = FPGA_MGR_STATE_WRITE_INIT;
+	return 0;
+}
+
+static int xmgmt_pr_write(struct fpga_manager *mgr,
+	const char *buf, size_t count)
+{
+	struct xfpga_klass *obj = mgr->priv;
+	char *curr = (char *)obj->blob;
+
+	if ((obj->state != FPGA_MGR_STATE_WRITE_INIT) &&
+		(obj->state != FPGA_MGR_STATE_WRITE)) {
+		obj->state = FPGA_MGR_STATE_WRITE_ERR;
+		return -EINVAL;
+	}
+
+	curr += obj->count;
+	obj->count += count;
+
+	/*
+	 * The xclbin buffer should not be longer than advertised in the header
+	 */
+	if (obj->total_count < obj->count) {
+		obj->state = FPGA_MGR_STATE_WRITE_ERR;
+		return -EINVAL;
+	}
+
+	xrt_info(obj->pdev, "Copying block of %zu B of xclbin", count);
+	memcpy(curr, buf, count);
+	obj->state = FPGA_MGR_STATE_WRITE;
+	return 0;
+}
+
+
+static int xmgmt_pr_write_complete(struct fpga_manager *mgr,
+				   struct fpga_image_info *info)
+{
+	int result = 0;
+	struct xfpga_klass *obj = mgr->priv;
+
+	if (obj->state != FPGA_MGR_STATE_WRITE) {
+		obj->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
+		return -EINVAL;
+	}
+
+	/* Check if we got the complete xclbin */
+	if (obj->blob->m_header.m_length != obj->count) {
+		obj->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
+		return -EINVAL;
+	}
+
+	result = xmgmt_ulp_download((void *)obj->pdev, obj->blob);
+
+	obj->state = result ? FPGA_MGR_STATE_WRITE_COMPLETE_ERR :
+		FPGA_MGR_STATE_WRITE_COMPLETE;
+	xrt_info(obj->pdev, "Finish downloading of xclbin %pUb: %d",
+		&obj->blob->m_header.uuid, result);
+	vfree(obj->blob);
+	obj->blob = NULL;
+	obj->count = 0;
+	return result;
+}
+
+static enum fpga_mgr_states xmgmt_pr_state(struct fpga_manager *mgr)
+{
+	struct xfpga_klass *obj = mgr->priv;
+
+	return obj->state;
+}
+
+static const struct fpga_manager_ops xmgmt_pr_ops = {
+	.initial_header_size = sizeof(struct axlf),
+	.write_init = xmgmt_pr_write_init,
+	.write = xmgmt_pr_write,
+	.write_complete = xmgmt_pr_write_complete,
+	.state = xmgmt_pr_state,
+};
+
+
+struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev)
+{
+	struct fpga_manager *fmgr;
+	int ret = 0;
+	struct xfpga_klass *obj = vzalloc(sizeof(struct xfpga_klass));
+
+	xrt_info(pdev, "probing...");
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
+	obj->state = FPGA_MGR_STATE_UNKNOWN;
+	obj->pdev = pdev;
+	fmgr = fpga_mgr_create(&pdev->dev,
+			       obj->name,
+			       &xmgmt_pr_ops,
+			       obj);
+	if (!fmgr) {
+		vfree(obj);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	obj->sec_level = XFPGA_SEC_NONE;
+	mutex_init(&obj->axlf_lock);
+	ret = fpga_mgr_register(fmgr);
+	if (ret) {
+		fpga_mgr_free(fmgr);
+		mutex_destroy(&obj->axlf_lock);
+		vfree(obj);	/* obj was vzalloc'd, so vfree, not kfree */
+		return ERR_PTR(ret);
+	}
+	return fmgr;
+}
+
+int xmgmt_fmgr_remove(struct fpga_manager *fmgr)
+{
+	struct xfpga_klass *obj = fmgr->priv;
+
+	fpga_mgr_unregister(fmgr);
+	obj->state = FPGA_MGR_STATE_UNKNOWN;
+	mutex_destroy(&obj->axlf_lock);
+	vfree(obj->blob);
+	vfree(obj);
+	return 0;
+}
diff --git a/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h b/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
new file mode 100644
index 000000000000..2beba649609f
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2019-2020 Xilinx, Inc.
+ * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
+ *
+ * Authors: Sonal.Santan@xilinx.com
+ */
+
+#ifndef	_XMGMT_FMGR_H_
+#define	_XMGMT_FMGR_H_
+
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/mutex.h>
+
+#include <linux/xrt/xclbin.h>
+
+enum xfpga_sec_level {
+	XFPGA_SEC_NONE = 0,
+	XFPGA_SEC_DEDICATE,
+	XFPGA_SEC_SYSTEM,
+	XFPGA_SEC_MAX = XFPGA_SEC_SYSTEM,
+};
+
+struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev);
+int xmgmt_fmgr_remove(struct fpga_manager *fmgr);
+
+#endif
diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h b/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
new file mode 100644
index 000000000000..c89024cb8d46
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XMGMT_MAIN_IMPL_H_
+#define	_XMGMT_MAIN_IMPL_H_
+
+#include "xrt-subdev.h"
+#include "xmgmt-main.h"
+
+extern struct platform_driver xmgmt_main_driver;
+extern struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[];
+
+extern int xmgmt_ulp_download(struct platform_device *pdev, const void *xclbin);
+extern int bitstream_axlf_mailbox(struct platform_device *pdev,
+	const void *xclbin);
+extern int xmgmt_hot_reset(struct platform_device *pdev);
+
+/* Get dtb for the specified partition. Caller must vfree the returned dtb. */
+extern char *xmgmt_get_dtb(struct platform_device *pdev,
+	enum provider_kind kind);
+extern char *xmgmt_get_vbnv(struct platform_device *pdev);
+extern int xmgmt_get_provider_uuid(struct platform_device *pdev,
+	enum provider_kind kind, uuid_t *uuid);
+
+extern void *xmgmt_pdev2mailbox(struct platform_device *pdev);
+extern void *xmgmt_mailbox_probe(struct platform_device *pdev);
+extern void xmgmt_mailbox_remove(void *handle);
+extern void xmgmt_peer_notify_state(void *handle, bool online);
+
+#endif	/* _XMGMT_MAIN_IMPL_H_ */
diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c b/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
new file mode 100644
index 000000000000..b3d82fc3618b
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
@@ -0,0 +1,930 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA MGMT PF entry point driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Peer communication via mailbox
+ *
+ * Authors:
+ *      Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/crc32c.h>
+#include <linux/xrt/mailbox_proto.h>
+#include "xmgmt-main-impl.h"
+#include "xrt-mailbox.h"
+#include "xrt-cmc.h"
+#include "xrt-metadata.h"
+#include "xrt-xclbin.h"
+#include "xrt-clock.h"
+#include "xrt-calib.h"
+#include "xrt-icap.h"
+
+struct xmgmt_mailbox {
+	struct platform_device *pdev;
+	struct platform_device *mailbox;
+	struct mutex lock;
+	void *evt_hdl;
+	char *test_msg;
+	bool peer_in_same_domain;
+};
+
+#define	XMGMT_MAILBOX_PRT_REQ(xmbx, send, request, sw_ch)	do {	\
+	const char *dir = (send) ? ">>>>>" : "<<<<<";			\
+									\
+	if ((request)->req == XCL_MAILBOX_REQ_PEER_DATA) {		\
+		struct xcl_mailbox_peer_data *p =			\
+			(struct xcl_mailbox_peer_data *)(request)->data;\
+									\
+		xrt_info((xmbx)->pdev, "%s(%s) %s%s",			\
+			mailbox_req2name((request)->req),		\
+			mailbox_group_kind2name(p->kind),		\
+			dir, mailbox_chan2name(sw_ch));			\
+	} else {							\
+		xrt_info((xmbx)->pdev, "%s %s%s",			\
+			mailbox_req2name((request)->req),		\
+			dir, mailbox_chan2name(sw_ch));			\
+	}								\
+} while (0)
+#define	XMGMT_MAILBOX_PRT_REQ_SEND(xmbx, req, sw_ch)			\
+	XMGMT_MAILBOX_PRT_REQ(xmbx, true, req, sw_ch)
+#define	XMGMT_MAILBOX_PRT_REQ_RECV(xmbx, req, sw_ch)			\
+	XMGMT_MAILBOX_PRT_REQ(xmbx, false, req, sw_ch)
+#define	XMGMT_MAILBOX_PRT_RESP(xmbx, resp)				\
+	xrt_info((xmbx)->pdev, "respond %ld bytes >>>>>%s",		\
+	(resp)->xmip_data_size, mailbox_chan2name((resp)->xmip_sw_ch))
+
+static inline struct xmgmt_mailbox *pdev2mbx(struct platform_device *pdev)
+{
+	return (struct xmgmt_mailbox *)xmgmt_pdev2mailbox(pdev);
+}
+
+static void xmgmt_mailbox_post(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch, void *buf, size_t len)
+{
+	int rc;
+	struct xrt_mailbox_ioctl_post post = {
+		.xmip_req_id = msgid,
+		.xmip_sw_ch = sw_ch,
+		.xmip_data = buf,
+		.xmip_data_size = len
+	};
+
+	BUG_ON(!mutex_is_locked(&xmbx->lock));
+
+	if (!xmbx->mailbox) {
+		xrt_err(xmbx->pdev, "mailbox not available");
+		return;
+	}
+
+	if (msgid == 0) {
+		XMGMT_MAILBOX_PRT_REQ_SEND(xmbx,
+			(struct xcl_mailbox_req *)buf, sw_ch);
+	} else {
+		XMGMT_MAILBOX_PRT_RESP(xmbx, &post);
+	}
+
+	rc = xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_POST, &post);
+	if (rc)
+		xrt_err(xmbx->pdev, "failed to post msg: %d", rc);
+}
+
+static void xmgmt_mailbox_notify(struct xmgmt_mailbox *xmbx, bool sw_ch,
+	struct xcl_mailbox_req *req, size_t len)
+{
+	xmgmt_mailbox_post(xmbx, 0, sw_ch, req, len);
+}
+
+static void xmgmt_mailbox_respond(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch, void *buf, size_t len)
+{
+	mutex_lock(&xmbx->lock);
+	xmgmt_mailbox_post(xmbx, msgid, sw_ch, buf, len);
+	mutex_unlock(&xmbx->lock);
+}
+
+static void xmgmt_mailbox_resp_test_msg(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	char *msg;
+
+	mutex_lock(&xmbx->lock);
+
+	if (xmbx->test_msg == NULL) {
+		mutex_unlock(&xmbx->lock);
+		xrt_err(pdev, "test msg is not set, drop request");
+		return;
+	}
+	msg = xmbx->test_msg;
+	xmbx->test_msg = NULL;
+
+	mutex_unlock(&xmbx->lock);
+
+	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, msg, strlen(msg) + 1);
+	vfree(msg);
+}
+
+static int xmgmt_mailbox_dtb_add_prop(struct platform_device *pdev,
+	char *dst_dtb, const char *ep_name, const char *regmap_name,
+	const char *prop, const void *val, int size)
+{
+	int rc = xrt_md_set_prop(DEV(pdev), dst_dtb, ep_name, regmap_name,
+		prop, val, size);
+
+	if (rc) {
+		xrt_err(pdev, "failed to set prop %s on (%s, %s): %d",
+			prop, ep_name, regmap_name, rc);
+	}
+	return rc;
+}
+
+static int xmgmt_mailbox_dtb_add_vbnv(struct platform_device *pdev, char *dtb)
+{
+	int rc = 0;
+	char *vbnv = xmgmt_get_vbnv(pdev);
+
+	if (vbnv == NULL) {
+		xrt_err(pdev, "failed to get VBNV");
+		return -ENOENT;
+	}
+	rc = xmgmt_mailbox_dtb_add_prop(pdev, dtb, NULL, NULL,
+		PROP_VBNV, vbnv, strlen(vbnv) + 1);
+	kfree(vbnv);
+	return rc;
+}
+
+static int xmgmt_mailbox_dtb_copy_logic_uuid(struct platform_device *pdev,
+	const char *src_dtb, char *dst_dtb)
+{
+	const void *val;
+	int sz;
+	int rc = xrt_md_get_prop(DEV(pdev), src_dtb, NULL, NULL,
+		PROP_LOGIC_UUID, &val, &sz);
+
+	if (rc) {
+		xrt_err(pdev, "failed to get %s: %d", PROP_LOGIC_UUID, rc);
+		return rc;
+	}
+	return xmgmt_mailbox_dtb_add_prop(pdev, dst_dtb, NULL, NULL,
+		PROP_LOGIC_UUID, val, sz);
+}
+
+static int xmgmt_mailbox_dtb_add_vrom(struct platform_device *pdev,
+	const char *src_dtb, char *dst_dtb)
+{
+	/* For compatibility for legacy xrt driver. */
+	enum FeatureBitMask {
+		UNIFIED_PLATFORM		= 0x0000000000000001
+		, XARE_ENBLD			= 0x0000000000000002
+		, BOARD_MGMT_ENBLD		= 0x0000000000000004
+		, MB_SCHEDULER			= 0x0000000000000008
+		, PROM_MASK			= 0x0000000000000070
+		, DEBUG_MASK			= 0x000000000000FF00
+		, PEER_TO_PEER			= 0x0000000000010000
+		, FBM_UUID			= 0x0000000000020000
+		, HBM				= 0x0000000000040000
+		, CDMA				= 0x0000000000080000
+		, QDMA				= 0x0000000000100000
+		, RUNTIME_CLK_SCALE		= 0x0000000000200000
+		, PASSTHROUGH_VIRTUALIZATION	= 0x0000000000400000
+	};
+	struct FeatureRomHeader {
+		unsigned char EntryPointString[4];
+		uint8_t MajorVersion;
+		uint8_t MinorVersion;
+		uint32_t VivadoBuildID;
+		uint32_t IPBuildID;
+		uint64_t TimeSinceEpoch;
+		unsigned char FPGAPartName[64];
+		unsigned char VBNVName[64];
+		uint8_t DDRChannelCount;
+		uint8_t DDRChannelSize;
+		uint64_t DRBaseAddress;
+		uint64_t FeatureBitMap;
+		unsigned char uuid[16];
+		uint8_t HBMCount;
+		uint8_t HBMSize;
+		uint32_t CDMABaseAddress[4];
+	} header = { 0 };
+	char *vbnv = xmgmt_get_vbnv(pdev);
+	int rc;
+
+	*(u32 *)header.EntryPointString = 0x786e6c78; /* "xlnx" (little-endian) */
+
+	if (vbnv)
+		strncpy(header.VBNVName, vbnv, sizeof(header.VBNVName) - 1);
+	kfree(vbnv);
+
+	header.FeatureBitMap = UNIFIED_PLATFORM;
+	rc = xrt_md_get_prop(DEV(pdev), src_dtb,
+		NODE_CMC_FW_MEM, NULL, PROP_IO_OFFSET, NULL, NULL);
+	if (rc == 0)
+		header.FeatureBitMap |= BOARD_MGMT_ENBLD;
+	rc = xrt_md_get_prop(DEV(pdev), src_dtb,
+		NODE_ERT_FW_MEM, NULL, PROP_IO_OFFSET, NULL, NULL);
+	if (rc == 0)
+		header.FeatureBitMap |= MB_SCHEDULER;
+
+	return xmgmt_mailbox_dtb_add_prop(pdev, dst_dtb, NULL, NULL,
+		PROP_VROM, &header, sizeof(header));
+}
+
+static u32 xmgmt_mailbox_dtb_user_pf(struct platform_device *pdev,
+	const char *dtb, const char *epname, const char *regmap)
+{
+	const u32 *pfnump;
+	int rc = xrt_md_get_prop(DEV(pdev), dtb, epname, regmap,
+		PROP_PF_NUM, (const void **)&pfnump, NULL);
+
+	if (rc)
+		return -1;
+	return be32_to_cpu(*pfnump);
+}
+
+static int xmgmt_mailbox_dtb_copy_user_endpoints(struct platform_device *pdev,
+	const char *src, char *dst)
+{
+	int rc = 0;
+	char *epname = NULL, *regmap = NULL;
+	u32 pfnum = xmgmt_mailbox_dtb_user_pf(pdev, src,
+		NODE_MAILBOX_USER, NULL);
+	const u32 level = cpu_to_be32(1);
+	struct device *dev = DEV(pdev);
+
+	if (pfnum == (u32)-1) {
+		xrt_err(pdev, "failed to get user pf num");
+		return -EINVAL;
+	}
+
+	for (xrt_md_get_next_endpoint(dev, src, NULL, NULL, &epname, &regmap);
+		rc == 0 && epname != NULL;
+		xrt_md_get_next_endpoint(dev, src, epname, regmap,
+		&epname, &regmap)) {
+		if (pfnum !=
+			xmgmt_mailbox_dtb_user_pf(pdev, src, epname, regmap))
+			continue;
+		rc = xrt_md_copy_endpoint(dev, dst, src, epname, regmap, NULL);
+		if (rc) {
+			xrt_err(pdev, "failed to copy (%s, %s): %d",
+				epname, regmap, rc);
+		} else {
+			rc = xrt_md_set_prop(dev, dst, epname, regmap,
+				PROP_PARTITION_LEVEL, &level, sizeof(level));
+			if (rc) {
+				xrt_err(pdev,
+					"can't set level for (%s, %s): %d",
+					epname, regmap, rc);
+			}
+		}
+	}
+	return rc;
+}
+
+static char *xmgmt_mailbox_user_dtb(struct platform_device *pdev)
+{
+	/* TODO: add support for PLP. */
+	const char *src = NULL;
+	char *dst = NULL;
+	struct device *dev = DEV(pdev);
+	int rc = xrt_md_create(dev, &dst);
+
+	if (rc || dst == NULL)
+		return NULL;
+
+	rc = xmgmt_mailbox_dtb_add_vbnv(pdev, dst);
+	if (rc)
+		goto fail;
+
+	src = xmgmt_get_dtb(pdev, XMGMT_BLP);
+	if (src == NULL) {
+		xrt_err(pdev, "failed to get BLP dtb");
+		goto fail;
+	}
+
+	rc = xmgmt_mailbox_dtb_copy_logic_uuid(pdev, src, dst);
+	if (rc)
+		goto fail;
+
+	rc = xmgmt_mailbox_dtb_add_vrom(pdev, src, dst);
+	if (rc)
+		goto fail;
+
+	rc = xrt_md_copy_endpoint(dev, dst, src, NODE_PARTITION_INFO,
+		NULL, NODE_PARTITION_INFO_BLP);
+	if (rc)
+		goto fail;
+
+	rc = xrt_md_copy_endpoint(dev, dst, src, NODE_INTERFACES, NULL, NULL);
+	if (rc)
+		goto fail;
+
+	rc = xmgmt_mailbox_dtb_copy_user_endpoints(pdev, src, dst);
+	if (rc)
+		goto fail;
+
+	xrt_md_pack(dev, dst);
+	vfree(src);
+	return dst;
+
+fail:
+	vfree(src);
+	vfree(dst);
+	return NULL;
+}
+
+static void xmgmt_mailbox_resp_subdev(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch, u64 offset, u64 size)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	char *dtb = xmgmt_mailbox_user_dtb(pdev);
+	long dtbsz;
+	struct xcl_subdev *hdr;
+	u64 totalsz;
+
+	if (dtb == NULL)
+		return;
+
+	dtbsz = xrt_md_size(DEV(pdev), dtb);
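+	/* Subtract sizeof(hdr->data): data is a placeholder for the payload. */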
+	totalsz = dtbsz + sizeof(*hdr) - sizeof(hdr->data);
+	if (offset != 0 || totalsz > size) {
+		/* Only support fetching dtb in one shot. */
+		vfree(dtb);
+		xrt_err(pdev, "need %lldB, user buffer size is %lldB, dropped",
+			totalsz, size);
+		return;
+	}
+
+	hdr = vzalloc(totalsz);
+	if (hdr == NULL) {
+		vfree(dtb);
+		return;
+	}
+
+	hdr->ver = 1;
+	hdr->size = dtbsz;
+	hdr->rtncode = XRT_MSG_SUBDEV_RTN_COMPLETE;
+	(void) memcpy(hdr->data, dtb, dtbsz);
+
+	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, hdr, totalsz);
+
+	vfree(dtb);
+	vfree(hdr);
+}
+
+static void xmgmt_mailbox_resp_sensor(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch, u64 offset, u64 size)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	struct xcl_sensor sensors = { 0 };
+	struct platform_device *cmcpdev = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_CMC, PLATFORM_DEVID_NONE);
+	int rc;
+
+	if (cmcpdev) {
+		rc = xrt_subdev_ioctl(cmcpdev, XRT_CMC_READ_SENSORS, &sensors);
+		(void) xrt_subdev_put_leaf(pdev, cmcpdev);
+		if (rc)
+			xrt_err(pdev, "can't read sensors: %d", rc);
+	}
+
+	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &sensors,
+		min((u64)sizeof(sensors), size));
+}
+
+static int xmgmt_mailbox_get_freq(struct xmgmt_mailbox *xmbx,
+	enum CLOCK_TYPE type, u64 *freq, u64 *freq_cnter)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	const char *clkname =
+		clock_type2epname(type) ? clock_type2epname(type) : "UNKNOWN";
+	struct platform_device *clkpdev =
+		xrt_subdev_get_leaf_by_epname(pdev, clkname);
+	int rc;
+	struct xrt_clock_ioctl_get getfreq = { 0 };
+
+	if (clkpdev == NULL) {
+		xrt_info(pdev, "%s clock is not available", clkname);
+		return -ENOENT;
+	}
+
+	rc = xrt_subdev_ioctl(clkpdev, XRT_CLOCK_GET, &getfreq);
+	(void) xrt_subdev_put_leaf(pdev, clkpdev);
+	if (rc) {
+		xrt_err(pdev, "can't get %s clock frequency: %d", clkname, rc);
+		return rc;
+	}
+
+	if (freq)
+		*freq = getfreq.freq;
+	if (freq_cnter)
+		*freq_cnter = getfreq.freq_cnter;
+	return 0;
+}
+
+static int xmgmt_mailbox_get_icap_idcode(struct xmgmt_mailbox *xmbx, u64 *id)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	struct platform_device *icappdev = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_ICAP, PLATFORM_DEVID_NONE);
+	int rc;
+
+	if (icappdev == NULL) {
+		xrt_err(pdev, "can't find icap");
+		return -ENOENT;
+	}
+
+	rc = xrt_subdev_ioctl(icappdev, XRT_ICAP_IDCODE, id);
+	(void) xrt_subdev_put_leaf(pdev, icappdev);
+	if (rc)
+		xrt_err(pdev, "can't get icap idcode: %d", rc);
+	return rc;
+}
+
+static int xmgmt_mailbox_get_mig_calib(struct xmgmt_mailbox *xmbx, u64 *calib)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	struct platform_device *calibpdev = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_CALIB, PLATFORM_DEVID_NONE);
+	int rc;
+	enum xrt_calib_results res;
+
+	if (calibpdev == NULL) {
+		xrt_err(pdev, "can't find mig calibration subdev");
+		return -ENOENT;
+	}
+
+	rc = xrt_subdev_ioctl(calibpdev, XRT_CALIB_RESULT, &res);
+	(void) xrt_subdev_put_leaf(pdev, calibpdev);
+	if (rc) {
+		xrt_err(pdev, "can't get mig calibration result: %d", rc);
+	} else {
+		if (res == XRT_CALIB_SUCCEEDED)
+			*calib = 1;
+		else
+			*calib = 0;
+	}
+	return rc;
+}
+
+static void xmgmt_mailbox_resp_icap(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch, u64 offset, u64 size)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	struct xcl_pr_region icap = { 0 };
+
+	(void) xmgmt_mailbox_get_freq(xmbx,
+		CT_DATA, &icap.freq_data, &icap.freq_cntr_data);
+	(void) xmgmt_mailbox_get_freq(xmbx,
+		CT_KERNEL, &icap.freq_kernel, &icap.freq_cntr_kernel);
+	(void) xmgmt_mailbox_get_freq(xmbx,
+		CT_SYSTEM, &icap.freq_system, &icap.freq_cntr_system);
+	(void) xmgmt_mailbox_get_icap_idcode(xmbx, &icap.idcode);
+	(void) xmgmt_mailbox_get_mig_calib(xmbx, &icap.mig_calib);
+	BUILD_BUG_ON(sizeof(icap.uuid) != sizeof(uuid_t));
+	(void) xmgmt_get_provider_uuid(pdev, XMGMT_ULP, (uuid_t *)&icap.uuid);
+
+	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &icap,
+		min((u64)sizeof(icap), size));
+}
+
+static void xmgmt_mailbox_resp_bdinfo(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch, u64 offset, u64 size)
+{
+	struct platform_device *pdev = xmbx->pdev;
+	struct xcl_board_info *info = vzalloc(sizeof(*info));
+	struct platform_device *cmcpdev;
+	int rc;
+
+	if (info == NULL)
+		return;
+
+	cmcpdev = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_CMC, PLATFORM_DEVID_NONE);
+	if (cmcpdev) {
+		rc = xrt_subdev_ioctl(cmcpdev, XRT_CMC_READ_BOARD_INFO, info);
+		(void) xrt_subdev_put_leaf(pdev, cmcpdev);
+		if (rc)
+			xrt_err(pdev, "can't read board info: %d", rc);
+	}
+
+	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, info,
+		min((u64)sizeof(*info), size));
+
+	vfree(info);
+}
+
+static void xmgmt_mailbox_simple_respond(struct xmgmt_mailbox *xmbx,
+	u64 msgid, bool sw_ch, int rc)
+{
+	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &rc, sizeof(rc));
+}
+
+static void xmgmt_mailbox_resp_peer_data(struct xmgmt_mailbox *xmbx,
+	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
+{
+	struct xcl_mailbox_peer_data *pdata =
+		(struct xcl_mailbox_peer_data *)req->data;
+
+	if (len < (sizeof(*req) + sizeof(*pdata) - 1)) {
+		xrt_err(xmbx->pdev, "received corrupted %s, dropped",
+			mailbox_req2name(req->req));
+		return;
+	}
+
+	switch (pdata->kind) {
+	case XCL_SENSOR:
+		xmgmt_mailbox_resp_sensor(xmbx, msgid, sw_ch,
+			pdata->offset, pdata->size);
+		break;
+	case XCL_ICAP:
+		xmgmt_mailbox_resp_icap(xmbx, msgid, sw_ch,
+			pdata->offset, pdata->size);
+		break;
+	case XCL_BDINFO:
+		xmgmt_mailbox_resp_bdinfo(xmbx, msgid, sw_ch,
+			pdata->offset, pdata->size);
+		break;
+	case XCL_SUBDEV:
+		xmgmt_mailbox_resp_subdev(xmbx, msgid, sw_ch,
+			pdata->offset, pdata->size);
+		break;
+	case XCL_MIG_ECC:
+	case XCL_FIREWALL:
+	case XCL_DNA: /* TODO **/
+		xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, 0);
+		break;
+	default:
+		xrt_err(xmbx->pdev, "%s(%s) request not handled",
+			mailbox_req2name(req->req),
+			mailbox_group_kind2name(pdata->kind));
+		break;
+	}
+}
+
+static bool xmgmt_mailbox_is_same_domain(struct xmgmt_mailbox *xmbx,
+	struct xcl_mailbox_conn *mb_conn)
+{
+	uint32_t crc_chk;
+	phys_addr_t paddr;
+	struct platform_device *pdev = xmbx->pdev;
+
+	paddr = virt_to_phys((void *)(uintptr_t)mb_conn->kaddr);
+	if (paddr != (phys_addr_t)mb_conn->paddr) {
+		xrt_info(pdev, "paddrs differ, user 0x%llx, mgmt 0x%llx",
+			mb_conn->paddr, paddr);
+		return false;
+	}
+
+	crc_chk = crc32c_le(~0, (void *)(uintptr_t)mb_conn->kaddr, PAGE_SIZE);
+	if (crc_chk != mb_conn->crc32) {
+		xrt_info(pdev, "CRCs differ, user 0x%x, mgmt 0x%x",
+			mb_conn->crc32, crc_chk);
+		return false;
+	}
+
+	return true;
+}
+
+static void xmgmt_mailbox_resp_user_probe(struct xmgmt_mailbox *xmbx,
+	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
+{
+	struct xcl_mailbox_conn_resp *resp = vzalloc(sizeof(*resp));
+	struct xcl_mailbox_conn *conn = (struct xcl_mailbox_conn *)req->data;
+
+	if (resp == NULL)
+		return;
+
+	if (len < (sizeof(*req) + sizeof(*conn) - 1)) {
+		xrt_err(xmbx->pdev, "received corrupted %s, dropped",
+			mailbox_req2name(req->req));
+		vfree(resp);
+		return;
+	}
+
+	resp->conn_flags |= XCL_MB_PEER_READY;
+	if (xmgmt_mailbox_is_same_domain(xmbx, conn)) {
+		xmbx->peer_in_same_domain = true;
+		resp->conn_flags |= XCL_MB_PEER_SAME_DOMAIN;
+	}
+
+	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, resp, sizeof(*resp));
+	vfree(resp);
+}
+
+static void xmgmt_mailbox_resp_hot_reset(struct xmgmt_mailbox *xmbx,
+	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
+{
+	int ret;
+	struct platform_device *pdev = xmbx->pdev;
+
+	xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, 0);
+
+	ret = xmgmt_hot_reset(pdev);
+	if (ret)
+		xrt_err(pdev, "failed to hot reset: %d", ret);
+	else
+		xmgmt_peer_notify_state(xmbx, true);
+}
+
+static void xmgmt_mailbox_resp_load_xclbin(struct xmgmt_mailbox *xmbx,
+	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
+{
+	struct xcl_mailbox_bitstream_kaddr *kaddr =
+		(struct xcl_mailbox_bitstream_kaddr *)req->data;
+	void *xclbin = (void *)(uintptr_t)kaddr->addr;
+	int ret = bitstream_axlf_mailbox(xmbx->pdev, xclbin);
+
+	xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, ret);
+}
+
+static void xmgmt_mailbox_listener(void *arg, void *data, size_t len,
+	u64 msgid, int err, bool sw_ch)
+{
+	struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)arg;
+	struct platform_device *pdev = xmbx->pdev;
+	struct xcl_mailbox_req *req = (struct xcl_mailbox_req *)data;
+
+	if (err) {
+		xrt_err(pdev, "failed to receive request: %d", err);
+		return;
+	}
+	if (len < sizeof(*req)) {
+		xrt_err(pdev, "received corrupted request");
+		return;
+	}
+
+	XMGMT_MAILBOX_PRT_REQ_RECV(xmbx, req, sw_ch);
+	switch (req->req) {
+	case XCL_MAILBOX_REQ_TEST_READ:
+		xmgmt_mailbox_resp_test_msg(xmbx, msgid, sw_ch);
+		break;
+	case XCL_MAILBOX_REQ_PEER_DATA:
+		xmgmt_mailbox_resp_peer_data(xmbx, req, len, msgid, sw_ch);
+		break;
+	case XCL_MAILBOX_REQ_READ_P2P_BAR_ADDR: /* TODO */
+		xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, -EOPNOTSUPP);
+		break;
+	case XCL_MAILBOX_REQ_USER_PROBE:
+		xmgmt_mailbox_resp_user_probe(xmbx, req, len, msgid, sw_ch);
+		break;
+	case XCL_MAILBOX_REQ_HOT_RESET:
+		xmgmt_mailbox_resp_hot_reset(xmbx, req, len, msgid, sw_ch);
+		break;
+	case XCL_MAILBOX_REQ_LOAD_XCLBIN_KADDR:
+		if (xmbx->peer_in_same_domain) {
+			xmgmt_mailbox_resp_load_xclbin(xmbx,
+				req, len, msgid, sw_ch);
+		} else {
+			xrt_err(pdev, "%s not handled, not in same domain",
+				mailbox_req2name(req->req));
+		}
+		break;
+	default:
+		xrt_err(pdev, "%s(%d) request not handled",
+			mailbox_req2name(req->req), req->req);
+		break;
+	}
+}
+
+static void xmgmt_mailbox_reg_listener(struct xmgmt_mailbox *xmbx)
+{
+	struct xrt_mailbox_ioctl_listen listen = {
+		xmgmt_mailbox_listener, xmbx };
+
+	BUG_ON(!mutex_is_locked(&xmbx->lock));
+	if (!xmbx->mailbox)
+		return;
+	(void) xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_LISTEN, &listen);
+}
+
+static void xmgmt_mailbox_unreg_listener(struct xmgmt_mailbox *xmbx)
+{
+	struct xrt_mailbox_ioctl_listen listen = { 0 };
+
+	BUG_ON(!mutex_is_locked(&xmbx->lock));
+	BUG_ON(!xmbx->mailbox);
+	(void) xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_LISTEN, &listen);
+}
+
+static bool xmgmt_mailbox_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	return (id == XRT_SUBDEV_MAILBOX);
+}
+
+static int xmgmt_mailbox_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION:
+		BUG_ON(esd->xevt_subdev_id != XRT_SUBDEV_MAILBOX);
+		BUG_ON(xmbx->mailbox);
+		mutex_lock(&xmbx->lock);
+		xmbx->mailbox = xrt_subdev_get_leaf_by_id(pdev,
+			XRT_SUBDEV_MAILBOX, PLATFORM_DEVID_NONE);
+		xmgmt_mailbox_reg_listener(xmbx);
+		mutex_unlock(&xmbx->lock);
+		break;
+	case XRT_EVENT_PRE_REMOVAL:
+		BUG_ON(esd->xevt_subdev_id != XRT_SUBDEV_MAILBOX);
+		BUG_ON(!xmbx->mailbox);
+		mutex_lock(&xmbx->lock);
+		xmgmt_mailbox_unreg_listener(xmbx);
+		(void) xrt_subdev_put_leaf(pdev, xmbx->mailbox);
+		xmbx->mailbox = NULL;
+		mutex_unlock(&xmbx->lock);
+		break;
+	default:
+		break;
+	}
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static ssize_t xmgmt_mailbox_user_dtb_show(struct file *filp,
+	struct kobject *kobj, struct bin_attribute *attr,
+	char *buf, loff_t off, size_t count)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct platform_device *pdev = to_platform_device(dev);
+	char *blob = NULL;
+	long  size;
+	ssize_t ret = 0;
+
+	blob = xmgmt_mailbox_user_dtb(pdev);
+	if (!blob) {
+		ret = -ENOENT;
+		goto failed;
+	}
+
+	size = xrt_md_size(dev, blob);
+	if (size <= 0) {
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	if (off >= size)
+		goto failed;
+	if (off + count > size)
+		count = size - off;
+	memcpy(buf, blob + off, count);
+
+	ret = count;
+failed:
+	vfree(blob);
+	return ret;
+}
+
+static struct bin_attribute meta_data_attr = {
+	.attr = {
+		.name = "metadata_for_user",
+		.mode = 0400
+	},
+	.read = xmgmt_mailbox_user_dtb_show,
+	.size = 0
+};
+
+static struct bin_attribute  *xmgmt_mailbox_bin_attrs[] = {
+	&meta_data_attr,
+	NULL,
+};
+
+int xmgmt_mailbox_get_test_msg(struct xmgmt_mailbox *xmbx, bool sw_ch,
+	char *buf, size_t *len)
+{
+	int rc;
+	struct platform_device *pdev = xmbx->pdev;
+	struct xcl_mailbox_req req = { 0, XCL_MAILBOX_REQ_TEST_READ, };
+	struct xrt_mailbox_ioctl_request leaf_req = {
+		.xmir_sw_ch = sw_ch,
+		.xmir_resp_ttl = 1,
+		.xmir_req = &req,
+		.xmir_req_size = sizeof(req),
+		.xmir_resp = buf,
+		.xmir_resp_size = *len
+	};
+
+	mutex_lock(&xmbx->lock);
+	if (xmbx->mailbox) {
+		XMGMT_MAILBOX_PRT_REQ_SEND(xmbx, &req, leaf_req.xmir_sw_ch);
+		/*
+		 * Mgmt should never send a request to the peer. It should
+		 * send either a notification or a response. This is the
+		 * only exception, for debugging purposes.
+		 */
+		rc = xrt_subdev_ioctl(xmbx->mailbox,
+			XRT_MAILBOX_REQUEST, &leaf_req);
+	} else {
+		rc = -ENODEV;
+		xrt_err(pdev, "mailbox not available");
+	}
+	mutex_unlock(&xmbx->lock);
+
+	if (rc == 0)
+		*len = leaf_req.xmir_resp_size;
+	return rc;
+}
+
+int xmgmt_mailbox_set_test_msg(struct xmgmt_mailbox *xmbx,
+	char *buf, size_t len)
+{
+	mutex_lock(&xmbx->lock);
+
+	vfree(xmbx->test_msg);
+	xmbx->test_msg = vmalloc(len);
+	if (xmbx->test_msg == NULL) {
+		mutex_unlock(&xmbx->lock);
+		return -ENOMEM;
+	}
+	(void) memcpy(xmbx->test_msg, buf, len);
+
+	mutex_unlock(&xmbx->lock);
+	return 0;
+}
+
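+/*
+ * Message test i/f usage sketch (illustrative; the sysfs path depends on
+ * the platform device instance name): writing peer_msg stages a message
+ * that is served back when the peer sends a TEST_READ request; reading
+ * peer_msg sends a TEST_READ to the peer and returns its staged message.
+ *
+ *   echo hello > /sys/bus/platform/devices/<xmgmt_main.N>/peer_msg
+ *   cat /sys/bus/platform/devices/<xmgmt_main.N>/peer_msg
+ */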
+static ssize_t peer_msg_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	size_t len = PAGE_SIZE;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
+	int ret = xmgmt_mailbox_get_test_msg(xmbx, false, buf, &len);
+
+	return ret == 0 ? len : ret;
+}
+static ssize_t peer_msg_store(struct device *dev,
+	struct device_attribute *da, const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
+	int ret = xmgmt_mailbox_set_test_msg(xmbx, (char *)buf, count);
+
+	return ret == 0 ? count : ret;
+}
+/* Message test i/f. */
+static DEVICE_ATTR_RW(peer_msg);
+
+static struct attribute *xmgmt_mailbox_attrs[] = {
+	&dev_attr_peer_msg.attr,
+	NULL,
+};
+
+static const struct attribute_group xmgmt_mailbox_attrgroup = {
+	.bin_attrs = xmgmt_mailbox_bin_attrs,
+	.attrs = xmgmt_mailbox_attrs,
+};
+
+void *xmgmt_mailbox_probe(struct platform_device *pdev)
+{
+	struct xmgmt_mailbox *xmbx =
+		devm_kzalloc(DEV(pdev), sizeof(*xmbx), GFP_KERNEL);
+
+	if (!xmbx)
+		return NULL;
+	xmbx->pdev = pdev;
+	mutex_init(&xmbx->lock);
+
+	xmbx->evt_hdl = xrt_subdev_add_event_cb(pdev,
+		xmgmt_mailbox_leaf_match, NULL, xmgmt_mailbox_event_cb);
+	(void) sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_mailbox_attrgroup);
+	return xmbx;
+}
+
+void xmgmt_mailbox_remove(void *handle)
+{
+	struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)handle;
+	struct platform_device *pdev = xmbx->pdev;
+
+	(void) sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_mailbox_attrgroup);
+	if (xmbx->evt_hdl)
+		(void) xrt_subdev_remove_event_cb(pdev, xmbx->evt_hdl);
+	if (xmbx->mailbox)
+		(void) xrt_subdev_put_leaf(pdev, xmbx->mailbox);
+	vfree(xmbx->test_msg);
+}
+
+void xmgmt_peer_notify_state(void *handle, bool online)
+{
+	struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)handle;
+	struct xcl_mailbox_peer_state *st;
+	struct xcl_mailbox_req *req;
+	size_t reqlen = sizeof(*req) + sizeof(*st) - 1;
+
+	req = vzalloc(reqlen);
+	if (req == NULL)
+		return;
+
+	req->req = XCL_MAILBOX_REQ_MGMT_STATE;
+	st = (struct xcl_mailbox_peer_state *)req->data;
+	st->state_flags = online ? XCL_MB_STATE_ONLINE : XCL_MB_STATE_OFFLINE;
+	mutex_lock(&xmbx->lock);
+	xmgmt_mailbox_notify(xmbx, false, req, reqlen);
+	mutex_unlock(&xmbx->lock);
+}
diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c b/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
new file mode 100644
index 000000000000..042d86fcef41
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA MGMT PF entry point driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * xclbin download
+ *
+ * Authors:
+ *      Lizhi Hou <lizhi.hou@xilinx.com>
+ */
+
+#include <linux/firmware.h>
+#include <linux/uaccess.h>
+#include "xrt-xclbin.h"
+#include "xrt-metadata.h"
+#include "xrt-subdev.h"
+#include "xrt-gpio.h"
+#include "xmgmt-main.h"
+#include "xrt-icap.h"
+#include "xrt-axigate.h"
+
+static int xmgmt_download_bitstream(struct platform_device  *pdev,
+	const void *xclbin)
+{
+	struct platform_device *icap_leaf = NULL;
+	struct XHwIcap_Bit_Header bit_header = { 0 };
+	struct xrt_icap_ioctl_wr arg;
+	char *bitstream = NULL;
+	int ret;
+
+	ret = xrt_xclbin_get_section(xclbin, BITSTREAM, (void **)&bitstream,
+		NULL);
+	if (ret || !bitstream) {
+		xrt_err(pdev, "bitstream not found");
+		return -ENOENT;
+	}
+	ret = xrt_xclbin_parse_header(bitstream,
+		DMA_HWICAP_BITFILE_BUFFER_SIZE, &bit_header);
+	if (ret) {
+		ret = -EINVAL;
+		xrt_err(pdev, "invalid bitstream header");
+		goto done;
+	}
+	icap_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_ICAP,
+		PLATFORM_DEVID_NONE);
+	if (!icap_leaf) {
+		ret = -ENODEV;
+		xrt_err(pdev, "icap does not exist");
+		goto done;
+	}
+	arg.xiiw_bit_data = bitstream + bit_header.HeaderLength;
+	arg.xiiw_data_len = bit_header.BitstreamLength;
+	ret = xrt_subdev_ioctl(icap_leaf, XRT_ICAP_WRITE, &arg);
+	if (ret)
+		xrt_err(pdev, "write bitstream failed, ret = %d", ret);
+
+done:
+	if (icap_leaf)
+		xrt_subdev_put_leaf(pdev, icap_leaf);
+	vfree(bitstream);
+
+	return ret;
+}
+
+static bool match_shell(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	const char *ulp_gate;
+	int ret;
+
+	if (!pdata || xrt_md_size(&pdev->dev, pdata->xsp_dtb) <= 0)
+		return false;
+
+	ret = xrt_md_get_epname_pointer(&pdev->dev, pdata->xsp_dtb,
+		NODE_GATE_ULP, NULL, &ulp_gate);
+	if (ret)
+		return false;
+
+	ret = xrt_md_check_uuids(&pdev->dev, pdata->xsp_dtb, arg);
+	if (ret)
+		return false;
+
+	return true;
+}
+
+static bool match_ulp(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	const char *ulp_gate;
+	int ret;
+
+	if (!pdata || xrt_md_size(&pdev->dev, pdata->xsp_dtb) <= 0)
+		return false;
+
+	ret = xrt_md_check_uuids(&pdev->dev, pdata->xsp_dtb, arg);
+	if (ret)
+		return false;
+
+	ret = xrt_md_get_epname_pointer(&pdev->dev, pdata->xsp_dtb,
+		NODE_GATE_ULP, NULL, &ulp_gate);
+	if (!ret)
+		return false;
+
+	return true;
+}
+
+int xmgmt_ulp_download(struct platform_device  *pdev, const void *xclbin)
+{
+	struct platform_device *axigate_leaf;
+	char *dtb = NULL;
+	int ret = 0, part_inst;
+
+	ret = xrt_xclbin_get_metadata(DEV(pdev), xclbin, &dtb);
+	if (ret) {
+		xrt_err(pdev, "can not get partition metadata, ret %d", ret);
+		goto failed;
+	}
+
+	part_inst = xrt_subdev_lookup_partition(pdev, match_shell, dtb);
+	if (part_inst < 0) {
+		xrt_err(pdev, "not found matching plp.");
+		ret = -ENODEV;
+		goto failed;
+	}
+
+	/*
+	 * Find the ULP partition whose interface UUID matches the incoming
+	 * xclbin; the xclbin was already verified above against the matching
+	 * PLP partition.
+	 */
+	part_inst = xrt_subdev_lookup_partition(pdev, match_ulp, dtb);
+	if (part_inst >= 0) {
+		ret = xrt_subdev_destroy_partition(pdev, part_inst);
+		if (ret) {
+			xrt_err(pdev, "failed to destroy existing ulp, %d",
+				ret);
+			goto failed;
+		}
+	}
+
+	axigate_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_GATE_ULP);
+
+	/* The gate may not exist for 0rp. */
+	if (axigate_leaf) {
+		ret = xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREEZE,
+			NULL);
+		if (ret) {
+			xrt_err(pdev, "can not freeze gate %s, %d",
+				NODE_GATE_ULP, ret);
+			xrt_subdev_put_leaf(pdev, axigate_leaf);
+			goto failed;
+		}
+	}
+	ret = xmgmt_download_bitstream(pdev, xclbin);
+	if (axigate_leaf) {
+		xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREE, NULL);
+
+		/* Do we really need this extra toggling of the gate before
+		 * setting clocks?
+		 * xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREEZE, NULL);
+		 * xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREE, NULL);
+		 */
+
+		xrt_subdev_put_leaf(pdev, axigate_leaf);
+	}
+	if (ret) {
+		xrt_err(pdev, "bitstream download failed, ret %d", ret);
+		goto failed;
+	}
+	ret = xrt_subdev_create_partition(pdev, dtb);
+	if (ret < 0) {
+		xrt_err(pdev, "failed creating partition, ret %d", ret);
+		goto failed;
+	}
+
+	ret = xrt_subdev_wait_for_partition_bringup(pdev);
+	if (ret)
+		xrt_err(pdev, "partiton bringup failed, ret %d", ret);
+
+	/*
+	 * TODO: need to check individual subdevs to see if there are any
+	 * errors, such as clock setting or memory bank calibration failures.
+	 */
+
+failed:
+	vfree(dtb);
+	return ret;
+}
diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main.c b/drivers/fpga/alveo/mgmt/xmgmt-main.c
new file mode 100644
index 000000000000..23e68e3a4ae1
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/xmgmt-main.c
@@ -0,0 +1,843 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA MGMT PF entry point driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Sonal Santan <sonals@xilinx.com>
+ */
+
+#include <linux/firmware.h>
+#include <linux/uaccess.h>
+#include "xrt-xclbin.h"
+#include "xrt-metadata.h"
+#include "xrt-flash.h"
+#include "xrt-subdev.h"
+#include <linux/xrt/flash_xrt_data.h>
+#include <linux/xrt/xmgmt-ioctl.h>
+#include "xrt-gpio.h"
+#include "xmgmt-main.h"
+#include "xmgmt-fmgr.h"
+#include "xrt-icap.h"
+#include "xrt-axigate.h"
+#include "xmgmt-main-impl.h"
+
+#define	XMGMT_MAIN "xmgmt_main"
+
+struct xmgmt_main {
+	struct platform_device *pdev;
+	void *evt_hdl;
+	char *firmware_blp;
+	char *firmware_plp;
+	char *firmware_ulp;
+	bool flash_ready;
+	bool gpio_ready;
+	struct fpga_manager *fmgr;
+	void *mailbox_hdl;
+	struct mutex busy_mutex;
+
+	uuid_t *blp_intf_uuids;
+	u32 blp_intf_uuid_num;
+};
+
+char *xmgmt_get_vbnv(struct platform_device *pdev)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	const char *vbnv;
+	char *ret;
+	int i;
+
+	if (xmm->firmware_plp) {
+		vbnv = ((struct axlf *)xmm->firmware_plp)->
+			m_header.m_platformVBNV;
+	} else if (xmm->firmware_blp) {
+		vbnv = ((struct axlf *)xmm->firmware_blp)->
+			m_header.m_platformVBNV;
+	} else {
+		return NULL;
+	}
+
+	ret = kstrdup(vbnv, GFP_KERNEL);
+	if (!ret)
+		return NULL;
+	for (i = 0; i < strlen(ret); i++) {
+		if (ret[i] == ':' || ret[i] == '.')
+			ret[i] = '_';
+	}
+	return ret;
+}
+
+static bool xmgmt_main_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	if (id == XRT_SUBDEV_GPIO)
+		return xrt_subdev_has_epname(pdev, arg);
+	else if (id == XRT_SUBDEV_QSPI)
+		return true;
+
+	return false;
+}
+
+static int get_dev_uuid(struct platform_device *pdev, char *uuidstr, size_t len)
+{
+	char uuid[16];
+	struct platform_device *gpio_leaf;
+	struct xrt_gpio_ioctl_rw gpio_arg = { 0 };
+	int err, i, count;
+
+	gpio_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_BLP_ROM);
+	if (!gpio_leaf) {
+		xrt_err(pdev, "can not get %s", NODE_BLP_ROM);
+		return -EINVAL;
+	}
+
+	gpio_arg.xgir_id = XRT_GPIO_ROM_UUID;
+	gpio_arg.xgir_buf = uuid;
+	gpio_arg.xgir_len = sizeof(uuid);
+	gpio_arg.xgir_offset = 0;
+	err = xrt_subdev_ioctl(gpio_leaf, XRT_GPIO_READ, &gpio_arg);
+	xrt_subdev_put_leaf(pdev, gpio_leaf);
+	if (err) {
+		xrt_err(pdev, "can not get uuid: %d", err);
+		return err;
+	}
+
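+	/* Format the 16-byte UUID as hex, highest-offset word first. */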
+	for (count = 0, i = sizeof(uuid) - sizeof(u32);
+		i >= 0 && len > count; i -= sizeof(u32)) {
+		count += snprintf(uuidstr + count, len - count,
+			"%08x", *(u32 *)&uuid[i]);
+	}
+	return 0;
+}
+
+int xmgmt_hot_reset(struct platform_device *pdev)
+{
+	int ret = xrt_subdev_broadcast_event(pdev, XRT_EVENT_PRE_HOT_RESET);
+
+	if (ret) {
+		xrt_err(pdev, "offline failed, hot reset is canceled");
+		return ret;
+	}
+
+	(void) xrt_subdev_hot_reset(pdev);
+	xrt_subdev_broadcast_event(pdev, XRT_EVENT_POST_HOT_RESET);
+	return 0;
+}
+
+static ssize_t reset_store(struct device *dev,
+	struct device_attribute *da, const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+
+	(void) xmgmt_hot_reset(pdev);
+	return count;
+}
+static DEVICE_ATTR_WO(reset);
+
+static ssize_t VBNV_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	ssize_t ret;
+	char *vbnv;
+	struct platform_device *pdev = to_platform_device(dev);
+
+	vbnv = xmgmt_get_vbnv(pdev);
+	if (!vbnv)
+		return -ENOENT;
+	ret = sprintf(buf, "%s\n", vbnv);
+	kfree(vbnv);
+	return ret;
+}
+static DEVICE_ATTR_RO(VBNV);
+
+static ssize_t logic_uuids_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	ssize_t ret;
+	char uuid[80];
+	struct platform_device *pdev = to_platform_device(dev);
+
+	/*
+	 * The UUID pointed to by VSEC should be the same as the logic UUID
+	 * of the BLP.
+	 * TODO: add PLP logic UUID
+	 */
+	ret = get_dev_uuid(pdev, uuid, sizeof(uuid));
+	if (ret)
+		return ret;
+	ret = sprintf(buf, "%s\n", uuid);
+	return ret;
+}
+static DEVICE_ATTR_RO(logic_uuids);
+
+static inline void uuid2str(const uuid_t *uuid, char *uuidstr, size_t len)
+{
+	int i, p;
+	u8 *u = (u8 *)uuid;
+
+	BUG_ON(sizeof(uuid_t) * 2 + 1 > len);
+	for (p = 0, i = sizeof(uuid_t) - 1; i >= 0; p++, i--)
+		(void) snprintf(&uuidstr[p*2], 3, "%02x", u[i]);
+}
+
+static ssize_t interface_uuids_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	ssize_t ret = 0;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	u32 i;
+
+	/*
+	 * TODO: add PLP interface UUID
+	 */
+	for (i = 0; i < xmm->blp_intf_uuid_num; i++) {
+		char uuidstr[80];
+
+		uuid2str(&xmm->blp_intf_uuids[i], uuidstr, sizeof(uuidstr));
+		ret += sprintf(buf + ret, "%s\n", uuidstr);
+	}
+	return ret;
+}
+static DEVICE_ATTR_RO(interface_uuids);
+
+static struct attribute *xmgmt_main_attrs[] = {
+	&dev_attr_reset.attr,
+	&dev_attr_VBNV.attr,
+	&dev_attr_logic_uuids.attr,
+	&dev_attr_interface_uuids.attr,
+	NULL,
+};
+
+static ssize_t ulp_image_write(struct file *filp, struct kobject *kobj,
+	struct bin_attribute *attr, char *buffer, loff_t off, size_t count)
+{
+	struct xmgmt_main *xmm = dev_get_drvdata(kobj_to_dev(kobj));
+	struct axlf *xclbin;
+	ulong len;
+
+	if (off == 0) {
+		if (count < sizeof(*xclbin)) {
+			xrt_err(xmm->pdev, "count is too small %ld", count);
+			return -EINVAL;
+		}
+
+		vfree(xmm->firmware_ulp);
+		xmm->firmware_ulp = NULL;
+		xclbin = (struct axlf *)buffer;
+		xmm->firmware_ulp = vmalloc(xclbin->m_header.m_length);
+		if (!xmm->firmware_ulp)
+			return -ENOMEM;
+	} else {
+		xclbin = (struct axlf *)xmm->firmware_ulp;
+	}
+
+	len = xclbin->m_header.m_length;
+	if (off + count >= len && off < len) {
+		memcpy(xmm->firmware_ulp + off, buffer, len - off);
+		xmgmt_ulp_download(xmm->pdev, xmm->firmware_ulp);
+	} else if (off + count < len) {
+		memcpy(xmm->firmware_ulp + off, buffer, count);
+	}
+
+	return count;
+}
+
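+/*
+ * Usage sketch (illustrative; file and sysfs paths are examples): the
+ * xclbin may arrive in multiple chunks, so the first write must include
+ * the axlf header, which supplies m_length; the download is kicked off
+ * once the final chunk lands:
+ *
+ *   dd if=my.xclbin of=/sys/.../ulp_image bs=4k
+ */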
+static struct bin_attribute ulp_image_attr = {
+	.attr = {
+		.name = "ulp_image",
+		.mode = 0200
+	},
+	.write = ulp_image_write,
+	.size = 0
+};
+
+static struct bin_attribute *xmgmt_main_bin_attrs[] = {
+	&ulp_image_attr,
+	NULL,
+};
+
+static const struct attribute_group xmgmt_main_attrgroup = {
+	.attrs = xmgmt_main_attrs,
+	.bin_attrs = xmgmt_main_bin_attrs,
+};
+
+static int load_firmware_from_flash(struct platform_device *pdev,
+	char **fw_buf, size_t *len)
+{
+	struct platform_device *flash_leaf = NULL;
+	struct flash_data_header header = { 0 };
+	const size_t magiclen = sizeof(header.fdh_id_begin.fdi_magic);
+	size_t flash_size = 0;
+	int ret = 0;
+	char *buf = NULL;
+	struct flash_data_ident id = { 0 };
+	struct xrt_flash_ioctl_read frd = { 0 };
+
+	xrt_info(pdev, "try loading fw from flash");
+
+	flash_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_QSPI,
+		PLATFORM_DEVID_NONE);
+	if (flash_leaf == NULL) {
+		xrt_err(pdev, "failed to hold flash leaf");
+		return -ENODEV;
+	}
+
+	(void) xrt_subdev_ioctl(flash_leaf, XRT_FLASH_GET_SIZE, &flash_size);
+	if (flash_size == 0) {
+		xrt_err(pdev, "failed to get flash size");
+		ret = -EINVAL;
+		goto done;
+	}
+
+	frd.xfir_buf = (char *)&header;
+	frd.xfir_size = sizeof(header);
+	frd.xfir_offset = flash_size - sizeof(header);
+	ret = xrt_subdev_ioctl(flash_leaf, XRT_FLASH_READ, &frd);
+	if (ret) {
+		xrt_err(pdev, "failed to read header from flash: %d", ret);
+		goto done;
+	}
+
+	/* Pick the end ident; the header is aligned to the end of flash. */
+	id = header.fdh_id_end;
+	if (strncmp(id.fdi_magic, XRT_DATA_MAGIC, magiclen)) {
+		char tmp[sizeof(id.fdi_magic) + 1] = { 0 };
+
+		memcpy(tmp, id.fdi_magic, magiclen);
+		xrt_info(pdev, "ignore meta data, bad magic: %s", tmp);
+		ret = -ENOENT;
+		goto done;
+	}
+	if (id.fdi_version != 0) {
+		xrt_info(pdev, "flash meta data version is not supported: %d",
+			id.fdi_version);
+		ret = -EOPNOTSUPP;
+		goto done;
+	}
+
+	buf = vmalloc(header.fdh_data_len);
+	if (buf == NULL) {
+		ret = -ENOMEM;
+		goto done;
+	}
+
+	frd.xfir_buf = buf;
+	frd.xfir_size = header.fdh_data_len;
+	frd.xfir_offset = header.fdh_data_offset;
+	ret = xrt_subdev_ioctl(flash_leaf, XRT_FLASH_READ, &frd);
+	if (ret) {
+		xrt_err(pdev, "failed to read meta data from flash: %d", ret);
+		goto done;
+	} else if (flash_xrt_data_get_parity32(buf, header.fdh_data_len) ^
+		header.fdh_data_parity) {
+		xrt_err(pdev, "meta data is corrupted");
+		ret = -EINVAL;
+		goto done;
+	}
+
+	xrt_info(pdev, "found meta data of %d bytes @0x%x",
+		header.fdh_data_len, header.fdh_data_offset);
+	*fw_buf = buf;
+	*len = header.fdh_data_len;
+
+done:
+	(void) xrt_subdev_put_leaf(pdev, flash_leaf);
+	return ret;
+}
+
+static int load_firmware_from_disk(struct platform_device *pdev, char **fw_buf,
+	size_t *len)
+{
+	char uuid[80];
+	int err = 0;
+	char fw_name[256];
+	const struct firmware *fw;
+
+	err = get_dev_uuid(pdev, uuid, sizeof(uuid));
+	if (err)
+		return err;
+
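+	/* E.g. xilinx/<uuid>/partition.xsabin under the firmware search path. */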
+	(void) snprintf(fw_name,
+		sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
+	xrt_info(pdev, "try loading fw: %s", fw_name);
+
+	err = request_firmware(&fw, fw_name, DEV(pdev));
+	if (err)
+		return err;
+
+	*fw_buf = vmalloc(fw->size);
+	*len = fw->size;
+	if (*fw_buf != NULL)
+		memcpy(*fw_buf, fw->data, fw->size);
+	else
+		err = -ENOMEM;
+
+	release_firmware(fw);
+	return err;
+}
+
+static const char *xmgmt_get_axlf_firmware(struct xmgmt_main *xmm,
+	enum provider_kind kind)
+{
+	switch (kind) {
+	case XMGMT_BLP:
+		return xmm->firmware_blp;
+	case XMGMT_PLP:
+		return xmm->firmware_plp;
+	case XMGMT_ULP:
+		return xmm->firmware_ulp;
+	default:
+		xrt_err(xmm->pdev, "unknown axlf kind: %d", kind);
+		return NULL;
+	}
+}
+
+char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	char *dtb = NULL;
+	const char *provider = xmgmt_get_axlf_firmware(xmm, kind);
+	int rc;
+
+	if (provider == NULL)
+		return dtb;
+
+	rc = xrt_xclbin_get_metadata(DEV(pdev), provider, &dtb);
+	if (rc)
+		xrt_err(pdev, "failed to find dtb: %d", rc);
+	return dtb;
+}
+
+static const char *get_uuid_from_firmware(struct platform_device *pdev,
+	const char *axlf)
+{
+	const void *uuid = NULL;
+	const void *uuiddup = NULL;
+	void *dtb = NULL;
+	int rc;
+
+	rc = xrt_xclbin_get_section(axlf, PARTITION_METADATA, &dtb, NULL);
+	if (rc)
+		return NULL;
+
+	rc = xrt_md_get_prop(DEV(pdev), dtb, NULL, NULL,
+		PROP_LOGIC_UUID, &uuid, NULL);
+	if (!rc)
+		uuiddup = kstrdup(uuid, GFP_KERNEL);
+	vfree(dtb);
+	return uuiddup;
+}
+
+static bool is_valid_firmware(struct platform_device *pdev,
+	char *fw_buf, size_t fw_len)
+{
+	struct axlf *axlf = (struct axlf *)fw_buf;
+	size_t axlflen = axlf->m_header.m_length;
+	const char *fw_uuid;
+	char dev_uuid[80];
+	int err;
+
+	err = get_dev_uuid(pdev, dev_uuid, sizeof(dev_uuid));
+	if (err)
+		return false;
+
+	if (memcmp(fw_buf, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)) != 0) {
+		xrt_err(pdev, "unknown fw format");
+		return false;
+	}
+
+	if (axlflen > fw_len) {
+		xrt_err(pdev, "truncated fw, length: %ld, expect: %ld",
+			fw_len, axlflen);
+		return false;
+	}
+
+	fw_uuid = get_uuid_from_firmware(pdev, fw_buf);
+	if (fw_uuid == NULL || strcmp(fw_uuid, dev_uuid) != 0) {
+		xrt_err(pdev, "bad fw UUID: %s, expect: %s",
+			fw_uuid ? fw_uuid : "<none>", dev_uuid);
+		kfree(fw_uuid);
+		return false;
+	}
+
+	kfree(fw_uuid);
+	return true;
+}
+
+int xmgmt_get_provider_uuid(struct platform_device *pdev,
+	enum provider_kind kind, uuid_t *uuid)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	const char *fwbuf;
+	const char *fw_uuid;
+	int rc = -ENOENT;
+
+	mutex_lock(&xmm->busy_mutex);
+
+	fwbuf = xmgmt_get_axlf_firmware(xmm, kind);
+	if (fwbuf == NULL)
+		goto done;
+
+	fw_uuid = get_uuid_from_firmware(pdev, fwbuf);
+	if (fw_uuid == NULL)
+		goto done;
+
+	rc = xrt_md_uuid_strtoid(DEV(pdev), fw_uuid, uuid);
+	kfree(fw_uuid);
+
+done:
+	mutex_unlock(&xmm->busy_mutex);
+	return rc;
+}
+
+static int xmgmt_create_blp(struct xmgmt_main *xmm)
+{
+	struct platform_device *pdev = xmm->pdev;
+	int rc = 0;
+	char *dtb = NULL;
+
+	dtb = xmgmt_get_dtb(pdev, XMGMT_BLP);
+	if (dtb) {
+		rc = xrt_subdev_create_partition(pdev, dtb);
+		if (rc < 0)
+			xrt_err(pdev, "failed to create BLP: %d", rc);
+		else
+			rc = 0;
+
+		BUG_ON(xmm->blp_intf_uuids);
+		xrt_md_get_intf_uuids(&pdev->dev, dtb,
+			&xmm->blp_intf_uuid_num, NULL);
+		if (xmm->blp_intf_uuid_num > 0) {
+			xmm->blp_intf_uuids = vzalloc(sizeof(uuid_t) *
+				xmm->blp_intf_uuid_num);
+			xrt_md_get_intf_uuids(&pdev->dev, dtb,
+				&xmm->blp_intf_uuid_num, xmm->blp_intf_uuids);
+		}
+	}
+
+	vfree(dtb);
+	return rc;
+}
+
+static int xmgmt_main_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+	enum xrt_subdev_id id;
+	int instance;
+	size_t fwlen;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION: {
+		id = esd->xevt_subdev_id;
+		instance = esd->xevt_subdev_instance;
+		xrt_info(pdev, "processing event %d for (%d, %d)",
+			evt, id, instance);
+
+		if (id == XRT_SUBDEV_GPIO)
+			xmm->gpio_ready = true;
+		else if (id == XRT_SUBDEV_QSPI)
+			xmm->flash_ready = true;
+		else
+			BUG_ON(1);
+
+		if (xmm->gpio_ready && xmm->flash_ready) {
+			int rc;
+
+			rc = load_firmware_from_disk(pdev, &xmm->firmware_blp,
+				&fwlen);
+			if (rc != 0) {
+				rc = load_firmware_from_flash(pdev,
+					&xmm->firmware_blp, &fwlen);
+			}
+			if (rc == 0 && is_valid_firmware(pdev,
+			    xmm->firmware_blp, fwlen))
+				(void) xmgmt_create_blp(xmm);
+			else
+				xrt_err(pdev,
+					"failed to find firmware, giving up");
+			xmm->evt_hdl = NULL;
+		}
+		break;
+	}
+	case XRT_EVENT_POST_ATTACH:
+		xmgmt_peer_notify_state(xmm->mailbox_hdl, true);
+		break;
+	case XRT_EVENT_PRE_DETACH:
+		xmgmt_peer_notify_state(xmm->mailbox_hdl, false);
+		break;
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		break;
+	}
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static int xmgmt_main_probe(struct platform_device *pdev)
+{
+	struct xmgmt_main *xmm;
+
+	xrt_info(pdev, "probing...");
+
+	xmm = devm_kzalloc(DEV(pdev), sizeof(*xmm), GFP_KERNEL);
+	if (!xmm)
+		return -ENOMEM;
+
+	xmm->pdev = pdev;
+	platform_set_drvdata(pdev, xmm);
+	xmm->fmgr = xmgmt_fmgr_probe(pdev);
+	xmm->mailbox_hdl = xmgmt_mailbox_probe(pdev);
+	mutex_init(&xmm->busy_mutex);
+
+	xmm->evt_hdl = xrt_subdev_add_event_cb(pdev,
+		xmgmt_main_leaf_match, NODE_BLP_ROM, xmgmt_main_event_cb);
+
+	/* Ready to handle requests through sysfs nodes. */
+	if (sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup))
+		xrt_err(pdev, "failed to create sysfs group");
+	return 0;
+}
+
+static int xmgmt_main_remove(struct platform_device *pdev)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+
+	/* By now, the partition driver should prevent any inter-leaf calls. */
+
+	xrt_info(pdev, "leaving...");
+
+	if (xmm->evt_hdl)
+		(void) xrt_subdev_remove_event_cb(pdev, xmm->evt_hdl);
+	vfree(xmm->blp_intf_uuids);
+	vfree(xmm->firmware_blp);
+	vfree(xmm->firmware_plp);
+	vfree(xmm->firmware_ulp);
+	(void) xmgmt_fmgr_remove(xmm->fmgr);
+	xmgmt_mailbox_remove(xmm->mailbox_hdl);
+	(void) sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup);
+	return 0;
+}
+
+static int
+xmgmt_main_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	int ret = 0;
+
+	xrt_info(pdev, "handling IOCTL cmd: %d", cmd);
+
+	switch (cmd) {
+	case XRT_MGMT_MAIN_GET_AXLF_SECTION: {
+		struct xrt_mgmt_main_ioctl_get_axlf_section *get =
+			(struct xrt_mgmt_main_ioctl_get_axlf_section *)arg;
+		const char *firmware =
+			xmgmt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
+
+		if (firmware == NULL) {
+			ret = -ENOENT;
+		} else {
+			ret = xrt_xclbin_get_section(firmware,
+				get->xmmigas_section_kind,
+				&get->xmmigas_section,
+				&get->xmmigas_section_size);
+		}
+		break;
+	}
+	case XRT_MGMT_MAIN_GET_VBNV: {
+		char **vbnv_p = (char **)arg;
+
+		*vbnv_p = xmgmt_get_vbnv(pdev);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unknown cmd: %d", cmd);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+static int xmgmt_main_open(struct inode *inode, struct file *file)
+{
+	struct platform_device *pdev = xrt_devnode_open(inode);
+
+	/* The device may already be gone by the time we get here. */
+	if (!pdev)
+		return -ENODEV;
+
+	xrt_info(pdev, "opened");
+	file->private_data = platform_get_drvdata(pdev);
+	return 0;
+}
+
+static int xmgmt_main_close(struct inode *inode, struct file *file)
+{
+	struct xmgmt_main *xmm = file->private_data;
+
+	xrt_devnode_close(inode);
+
+	xrt_info(xmm->pdev, "closed");
+	return 0;
+}
+
+static int xmgmt_bitstream_axlf_fpga_mgr(struct xmgmt_main *xmm,
+	void *axlf, size_t size)
+{
+	int ret;
+	struct fpga_image_info info = { 0 };
+
+	BUG_ON(!mutex_is_locked(&xmm->busy_mutex));
+
+	/*
+	 * Should any error happen during download, we can no longer trust
+	 * the cached xclbin.
+	 */
+	vfree(xmm->firmware_ulp);
+	xmm->firmware_ulp = NULL;
+
+	info.buf = (char *)axlf;
+	info.count = size;
+	ret = fpga_mgr_load(xmm->fmgr, &info);
+	if (ret == 0)
+		xmm->firmware_ulp = axlf;
+
+	return ret;
+}
+
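+/*
+ * Note: xmgmt_bitstream_axlf_fpga_mgr() takes ownership of the buffer on
+ * success (cached as firmware_ulp); on failure the callers below free it.
+ */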
+int bitstream_axlf_mailbox(struct platform_device *pdev, const void *axlf)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	void *copy_buffer = NULL;
+	size_t copy_buffer_size = 0;
+	const struct axlf *xclbin_obj = axlf;
+	int ret = 0;
+
+	if (memcmp(xclbin_obj->m_magic, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)))
+		return -EINVAL;
+
+	copy_buffer_size = xclbin_obj->m_header.m_length;
+	if (copy_buffer_size > MAX_XCLBIN_SIZE)
+		return -EINVAL;
+	copy_buffer = vmalloc(copy_buffer_size);
+	if (copy_buffer == NULL)
+		return -ENOMEM;
+	(void) memcpy(copy_buffer, axlf, copy_buffer_size);
+
+	mutex_lock(&xmm->busy_mutex);
+	ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
+	mutex_unlock(&xmm->busy_mutex);
+	if (ret)
+		vfree(copy_buffer);
+	return ret;
+}
+
+static int bitstream_axlf_ioctl(struct xmgmt_main *xmm, const void __user *arg)
+{
+	void *copy_buffer = NULL;
+	size_t copy_buffer_size = 0;
+	struct xmgmt_ioc_bitstream_axlf ioc_obj = { 0 };
+	struct axlf xclbin_obj = { {0} };
+	int ret = 0;
+
+	if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
+		return -EFAULT;
+	if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin,
+		sizeof(xclbin_obj)))
+		return -EFAULT;
+	if (memcmp(xclbin_obj.m_magic, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)))
+		return -EINVAL;
+
+	copy_buffer_size = xclbin_obj.m_header.m_length;
+	if (copy_buffer_size > MAX_XCLBIN_SIZE)
+		return -EINVAL;
+	copy_buffer = vmalloc(copy_buffer_size);
+	if (copy_buffer == NULL)
+		return -ENOMEM;
+
+	if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
+		vfree(copy_buffer);
+		return -EFAULT;
+	}
+
+	ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
+	if (ret)
+		vfree(copy_buffer);
+
+	return ret;
+}
+
+static long xmgmt_main_ioctl(struct file *filp, unsigned int cmd,
+	unsigned long arg)
+{
+	long result = 0;
+	struct xmgmt_main *xmm = filp->private_data;
+
+	BUG_ON(!xmm);
+
+	if (_IOC_TYPE(cmd) != XMGMT_IOC_MAGIC)
+		return -ENOTTY;
+
+	mutex_lock(&xmm->busy_mutex);
+
+	xrt_info(xmm->pdev, "ioctl cmd %d, arg %ld", cmd, arg);
+	switch (cmd) {
+	case XMGMT_IOCICAPDOWNLOAD_AXLF:
+		result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
+		break;
+	default:
+		result = -ENOTTY;
+		break;
+	}
+
+	mutex_unlock(&xmm->busy_mutex);
+	return result;
+}
+
+void *xmgmt_pdev2mailbox(struct platform_device *pdev)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+
+	return xmm->mailbox_hdl;
+}
+
+struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{ .ep_name = NODE_MGMT_MAIN },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xmgmt_main_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xmgmt_main_leaf_ioctl,
+	},
+	.xsd_file_ops = {
+		.xsf_ops = {
+			.owner = THIS_MODULE,
+			.open = xmgmt_main_open,
+			.release = xmgmt_main_close,
+			.unlocked_ioctl = xmgmt_main_ioctl,
+		},
+		.xsf_dev_name = "xmgmt",
+	},
+};
+
+static const struct platform_device_id xmgmt_main_id_table[] = {
+	{ XMGMT_MAIN, (kernel_ulong_t)&xmgmt_main_data },
+	{ },
+};
+
+struct platform_driver xmgmt_main_driver = {
+	.driver	= {
+		.name    = XMGMT_MAIN,
+	},
+	.probe   = xmgmt_main_probe,
+	.remove  = xmgmt_main_remove,
+	.id_table = xmgmt_main_id_table,
+};
diff --git a/drivers/fpga/alveo/mgmt/xmgmt-root.c b/drivers/fpga/alveo/mgmt/xmgmt-root.c
new file mode 100644
index 000000000000..005fd5e42651
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/xmgmt-root.c
@@ -0,0 +1,375 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+
+#include "xrt-root.h"
+#include "xrt-subdev.h"
+#include "xmgmt-main-impl.h"
+#include "xrt-metadata.h"
+
+#define	XMGMT_MODULE_NAME	"xmgmt"
+#define	XMGMT_DRIVER_VERSION	"4.0.0"
+
+#define	XMGMT_PDEV(xm)		((xm)->pdev)
+#define	XMGMT_DEV(xm)		(&(XMGMT_PDEV(xm)->dev))
+#define xmgmt_err(xm, fmt, args...)	\
+	dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_warn(xm, fmt, args...)	\
+	dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_info(xm, fmt, args...)	\
+	dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_dbg(xm, fmt, args...)	\
+	dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
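+/* Identify a card by PCI domain and bus; the devfn is ignored. */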
+#define	XMGMT_DEV_ID(pdev)			\
+	((pci_domain_nr(pdev->bus) << 16) |	\
+	PCI_DEVID(pdev->bus->number, 0))
+
+static struct class *xmgmt_class;
+static const struct pci_device_id xmgmt_pci_ids[] = {
+	{ PCI_DEVICE(0x10EE, 0xd020), },
+	{ PCI_DEVICE(0x10EE, 0x5020), },
+	{ 0, }
+};
+
+struct xmgmt {
+	struct pci_dev *pdev;
+	void *root;
+
+	/* save config for pci reset */
+	u32 saved_config[8][16];
+	bool ready;
+};
+
+static int xmgmt_config_pci(struct xmgmt *xm)
+{
+	struct pci_dev *pdev = XMGMT_PDEV(xm);
+	int rc;
+
+	rc = pcim_enable_device(pdev);
+	if (rc < 0) {
+		xmgmt_err(xm, "failed to enable device: %d", rc);
+		return rc;
+	}
+
+	rc = pci_enable_pcie_error_reporting(pdev);
+	if (rc)
+		xmgmt_warn(xm, "failed to enable AER: %d", rc);
+
+	pci_set_master(pdev);
+
+	rc = pcie_get_readrq(pdev);
+	if (rc < 0) {
+		xmgmt_err(xm, "failed to read mrrs %d", rc);
+		return rc;
+	}
+	if (rc > 512) {
+		rc = pcie_set_readrq(pdev, 512);
+		if (rc) {
+			xmgmt_err(xm, "failed to force mrrs %d", rc);
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
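+/* Save/restore covers the first 64 bytes (the standard config header). */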
+static void xmgmt_save_config_space(struct pci_dev *pdev, u32 *saved_config)
+{
+	int i;
+
+	for (i = 0; i < 16; i++)
+		pci_read_config_dword(pdev, i * 4, &saved_config[i]);
+}
+
+static int xmgmt_match_slot_and_save(struct device *dev, void *data)
+{
+	struct xmgmt *xm = data;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
+		pci_cfg_access_lock(pdev);
+		pci_save_state(pdev);
+		xmgmt_save_config_space(pdev,
+			xm->saved_config[PCI_FUNC(pdev->devfn)]);
+	}
+
+	return 0;
+}
+
+static void xmgmt_pci_save_config_all(struct xmgmt *xm)
+{
+	bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
+}
+
+static void xmgmt_restore_config_space(struct pci_dev *pdev, u32 *config_saved)
+{
+	int i;
+	u32 val;
+
+	for (i = 0; i < 16; i++) {
+		pci_read_config_dword(pdev, i * 4, &val);
+		if (val == config_saved[i])
+			continue;
+
+		pci_write_config_dword(pdev, i * 4, config_saved[i]);
+		pci_read_config_dword(pdev, i * 4, &val);
+		if (val != config_saved[i]) {
+			dev_err(&pdev->dev,
+				 "restore config at %d failed", i * 4);
+		}
+	}
+}
+
+static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
+{
+	struct xmgmt *xm = data;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
+		xmgmt_restore_config_space(pdev,
+			xm->saved_config[PCI_FUNC(pdev->devfn)]);
+
+		pci_restore_state(pdev);
+		pci_cfg_access_unlock(pdev);
+	}
+
+	return 0;
+}
+
+static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
+{
+	bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
+}
+
+void xroot_hot_reset(struct pci_dev *pdev)
+{
+	struct xmgmt *xm = pci_get_drvdata(pdev);
+	struct pci_bus *bus;
+	u8 pci_bctl;
+	u16 pci_cmd, devctl;
+	int i;
+
+	xmgmt_info(xm, "hot reset start");
+
+	xmgmt_pci_save_config_all(xm);
+
+	pci_disable_device(pdev);
+
+	bus = pdev->bus;
+
+	/*
+	 * When flipping the SBR bit, the device can fall off the bus. This is
+	 * usually no problem at all so long as drivers are working properly
+	 * after SBR. However, some systems complain bitterly when the device
+	 * falls off the bus.
+	 * The quick solution is to temporarily disable SERR reporting on the
+	 * switch port during SBR.
+	 */
+
+	pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
+	pci_write_config_word(bus->self, PCI_COMMAND,
+		(pci_cmd & ~PCI_COMMAND_SERR));
+	pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
+	pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL,
+		(devctl & ~PCI_EXP_DEVCTL_FERE));
+	pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
+	pci_bctl |= PCI_BRIDGE_CTL_BUS_RESET;
+	pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
+
+	msleep(100);
+	pci_bctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+	pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
+	ssleep(1);
+
+	pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
+	pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
+
+	pci_enable_device(pdev);
+
+	for (i = 0; i < 300; i++) {
+		pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+		if (pci_cmd != 0xffff)
+			break;
+		msleep(20);
+	}
+
+	xmgmt_info(xm, "waiting for %d ms", i * 20);
+
+	xmgmt_pci_restore_config_all(xm);
+
+	xmgmt_config_pci(xm);
+}
+
+static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
+{
+	char *dtb = NULL;
+	int ret;
+
+	ret = xrt_md_create(DEV(xm->pdev), &dtb);
+	if (ret) {
+		xmgmt_err(xm, "create metadata failed, ret %d", ret);
+		goto failed;
+	}
+
+	ret = xroot_add_simple_node(xm->root, dtb, NODE_TEST);
+	if (ret)
+		goto failed;
+
+	ret = xroot_add_vsec_node(xm->root, dtb);
+	if (ret == -ENOENT) {
+		/*
+		 * We may be dealing with a MFG board.
+		 * Try vsec-golden which will bring up all hard-coded leaves
+		 * at hard-coded offsets.
+		 */
+		ret = xroot_add_simple_node(xm->root, dtb, NODE_VSEC_GOLDEN);
+	} else if (ret == 0) {
+		ret = xroot_add_simple_node(xm->root, dtb, NODE_MGMT_MAIN);
+	}
+	if (ret)
+		goto failed;
+
+	*root_dtb = dtb;
+	return 0;
+
+failed:
+	vfree(dtb);
+	return ret;
+}
+
+static ssize_t ready_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct xmgmt *xm = pci_get_drvdata(pdev);
+
+	return sprintf(buf, "%d\n", xm->ready);
+}
+static DEVICE_ATTR_RO(ready);
+
+static struct attribute *xmgmt_root_attrs[] = {
+	&dev_attr_ready.attr,
+	NULL
+};
+
+static struct attribute_group xmgmt_root_attr_group = {
+	.attrs = xmgmt_root_attrs,
+};
+
+static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	int ret;
+	struct device *dev = DEV(pdev);
+	struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
+	char *dtb = NULL;
+
+	if (!xm)
+		return -ENOMEM;
+	xm->pdev = pdev;
+	pci_set_drvdata(pdev, xm);
+
+	ret = xmgmt_config_pci(xm);
+	if (ret)
+		goto failed;
+
+	ret = xroot_probe(pdev, &xm->root);
+	if (ret)
+		goto failed;
+
+	ret = xmgmt_create_root_metadata(xm, &dtb);
+	if (ret)
+		goto failed_metadata;
+
+	ret = xroot_create_partition(xm->root, dtb);
+	vfree(dtb);
+	if (ret)
+		xmgmt_err(xm, "failed to create root partition: %d", ret);
+
+	if (!xroot_wait_for_bringup(xm->root))
+		xmgmt_err(xm, "failed to bringup all partitions");
+	else
+		xm->ready = true;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
+	if (ret) {
+		/* Warning instead of failing the probe. */
+		xmgmt_warn(xm, "failed to create xmgmt root attrs: %d", ret);
+	}
+
+	xroot_broadcast(xm->root, XRT_EVENT_POST_ATTACH);
+	xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
+	return 0;
+
+failed_metadata:
+	(void) xroot_remove(xm->root);
+failed:
+	pci_set_drvdata(pdev, NULL);
+	return ret;
+}
+
+static void xmgmt_remove(struct pci_dev *pdev)
+{
+	struct xmgmt *xm = pci_get_drvdata(pdev);
+
+	xroot_broadcast(xm->root, XRT_EVENT_PRE_DETACH);
+	sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
+	(void) xroot_remove(xm->root);
+	pci_disable_pcie_error_reporting(xm->pdev);
+	xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
+}
+
+static struct pci_driver xmgmt_driver = {
+	.name = XMGMT_MODULE_NAME,
+	.id_table = xmgmt_pci_ids,
+	.probe = xmgmt_probe,
+	.remove = xmgmt_remove,
+};
+
+static int __init xmgmt_init(void)
+{
+	int res = xrt_subdev_register_external_driver(XRT_SUBDEV_MGMT_MAIN,
+		&xmgmt_main_driver, xrt_mgmt_main_endpoints);
+
+	if (res)
+		return res;
+
+	xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
+	if (IS_ERR(xmgmt_class)) {
+		xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
+		return PTR_ERR(xmgmt_class);
+	}
+
+	res = pci_register_driver(&xmgmt_driver);
+	if (res) {
+		class_destroy(xmgmt_class);
+		xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
+		return res;
+	}
+
+	return 0;
+}
+
+static __exit void xmgmt_exit(void)
+{
+	pci_unregister_driver(&xmgmt_driver);
+	class_destroy(xmgmt_class);
+	xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
+}
+
+module_init(xmgmt_init);
+module_exit(xmgmt_exit);
+
+MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
+MODULE_VERSION(XMGMT_DRIVER_VERSION);
+MODULE_AUTHOR("XRT Team <runtime@xilinx.com>");
+MODULE_DESCRIPTION("Xilinx Alveo management function driver");
+MODULE_LICENSE("GPL v2");
-- 
2.17.1



* [PATCH Xilinx Alveo 8/8] fpga: xrt: Kconfig and Makefile updates for XRT drivers
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (6 preceding siblings ...)
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver Sonal Santan
@ 2020-11-29  0:00 ` Sonal Santan
  2020-11-30 18:08 ` [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Rob Herring
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-11-29  0:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

From: Sonal Santan <sonal.santan@xilinx.com>

Update fpga Kconfig/Makefile and add Kconfig/Makefile for
new drivers.
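
To build the new drivers, enable both options, e.g. as modules:

  CONFIG_FPGA_ALVEO_LIB=m
  CONFIG_FPGA_ALVEO_XMGMT=m

FPGA_ALVEO_XMGMT depends on FPGA_ALVEO_LIB, HWMON and PCI; both options
select LIBFDT.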

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/Kconfig             |  2 ++
 drivers/fpga/Makefile            |  3 +++
 drivers/fpga/alveo/Kconfig       |  7 ++++++
 drivers/fpga/alveo/lib/Kconfig   | 11 +++++++++
 drivers/fpga/alveo/lib/Makefile  | 42 ++++++++++++++++++++++++++++++++
 drivers/fpga/alveo/mgmt/Kconfig  | 11 +++++++++
 drivers/fpga/alveo/mgmt/Makefile | 28 +++++++++++++++++++++
 7 files changed, 104 insertions(+)
 create mode 100644 drivers/fpga/alveo/Kconfig
 create mode 100644 drivers/fpga/alveo/lib/Kconfig
 create mode 100644 drivers/fpga/alveo/lib/Makefile
 create mode 100644 drivers/fpga/alveo/mgmt/Kconfig
 create mode 100644 drivers/fpga/alveo/mgmt/Makefile

diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig
index 7cd5a29fc437..8687ef231308 100644
--- a/drivers/fpga/Kconfig
+++ b/drivers/fpga/Kconfig
@@ -215,4 +215,6 @@ config FPGA_MGR_ZYNQMP_FPGA
 	  to configure the programmable logic(PL) through PS
 	  on ZynqMP SoC.
 
+source "drivers/fpga/alveo/Kconfig"
+
 endif # FPGA
diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile
index d8e21dfc6778..59943dccf405 100644
--- a/drivers/fpga/Makefile
+++ b/drivers/fpga/Makefile
@@ -46,3 +46,6 @@ dfl-afu-objs += dfl-afu-error.o
 
 # Drivers for FPGAs which implement DFL
 obj-$(CONFIG_FPGA_DFL_PCI)		+= dfl-pci.o
+
+obj-$(CONFIG_FPGA_ALVEO_LIB)		+= alveo/lib/
+obj-$(CONFIG_FPGA_ALVEO_XMGMT)		+= alveo/mgmt/
diff --git a/drivers/fpga/alveo/Kconfig b/drivers/fpga/alveo/Kconfig
new file mode 100644
index 000000000000..a583c3543945
--- /dev/null
+++ b/drivers/fpga/alveo/Kconfig
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx Alveo FPGA device configuration
+#
+
+source "drivers/fpga/alveo/lib/Kconfig"
+source "drivers/fpga/alveo/mgmt/Kconfig"
diff --git a/drivers/fpga/alveo/lib/Kconfig b/drivers/fpga/alveo/lib/Kconfig
new file mode 100644
index 000000000000..62175af2108e
--- /dev/null
+++ b/drivers/fpga/alveo/lib/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx Alveo FPGA device configuration
+#
+
+config FPGA_ALVEO_LIB
+	tristate "Xilinx Alveo Driver Library"
+	depends on HWMON && PCI
+	select LIBFDT
+	help
+	  Xilinx Alveo FPGA PCIe device driver common library.
diff --git a/drivers/fpga/alveo/lib/Makefile b/drivers/fpga/alveo/lib/Makefile
new file mode 100644
index 000000000000..a14204dc489d
--- /dev/null
+++ b/drivers/fpga/alveo/lib/Makefile
@@ -0,0 +1,42 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020 Xilinx, Inc. All rights reserved.
+#
+# Authors: Sonal.Santan@xilinx.com
+#
+
+FULL_ALVEO_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_ALVEO_LIB) := xrt-lib.o
+
+xrt-lib-objs := 			\
+	xrt-main.o			\
+	xrt-subdev.o			\
+	xrt-cdev.o			\
+	../common/xrt-metadata.o	\
+	subdevs/xrt-partition.o	\
+	subdevs/xrt-test.o		\
+	subdevs/xrt-vsec.o		\
+	subdevs/xrt-vsec-golden.o	\
+	subdevs/xrt-axigate.o		\
+	subdevs/xrt-qspi.o		\
+	subdevs/xrt-gpio.o		\
+	subdevs/xrt-mailbox.o		\
+	subdevs/xrt-icap.o		\
+	subdevs/xrt-cmc.o		\
+	subdevs/xrt-cmc-ctrl.o		\
+	subdevs/xrt-cmc-sensors.o	\
+	subdevs/xrt-cmc-mailbox.o	\
+	subdevs/xrt-cmc-bdinfo.o	\
+	subdevs/xrt-cmc-sc.o		\
+	subdevs/xrt-srsr.o		\
+	subdevs/xrt-clock.o		\
+	subdevs/xrt-clkfreq.o		\
+	subdevs/xrt-ucs.o		\
+	subdevs/xrt-calib.o
+
+ccflags-y := -I$(FULL_ALVEO_PATH)/include \
+	-I$(FULL_ALVEO_PATH)/common \
+	-I$(FULL_DTC_PATH)
diff --git a/drivers/fpga/alveo/mgmt/Kconfig b/drivers/fpga/alveo/mgmt/Kconfig
new file mode 100644
index 000000000000..8a5590842dad
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx Alveo FPGA device configuration
+#
+
+config FPGA_ALVEO_XMGMT
+	tristate "Xilinx Alveo Management Driver"
+	depends on HWMON && PCI && FPGA_ALVEO_LIB
+	select LIBFDT
+	help
+	  Xilinx Alveo FPGA PCIe device driver for Management Physical Function.
diff --git a/drivers/fpga/alveo/mgmt/Makefile b/drivers/fpga/alveo/mgmt/Makefile
new file mode 100644
index 000000000000..08be7952a832
--- /dev/null
+++ b/drivers/fpga/alveo/mgmt/Makefile
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2019-2020 Xilinx, Inc. All rights reserved.
+#
+# Authors: Sonal.Santan@xilinx.com
+#
+
+FULL_ALVEO_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_ALVEO_XMGMT)	+= xmgmt.o
+
+commondir := ../common
+
+xmgmt-objs := xmgmt-root.o			\
+	   xmgmt-main.o				\
+	   xmgmt-fmgr-drv.o      		\
+	   xmgmt-main-ulp.o			\
+	   xmgmt-main-mailbox.o			\
+	   $(commondir)/xrt-root.o		\
+	   $(commondir)/xrt-metadata.o		\
+	   $(commondir)/xrt-xclbin.o
+
+ccflags-y := -I$(FULL_ALVEO_PATH)/include \
+	-I$(FULL_ALVEO_PATH)/common \
+	-I$(FULL_DTC_PATH)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (7 preceding siblings ...)
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 8/8] fpga: xrt: Kconfig and Makefile updates for XRT drivers Sonal Santan
@ 2020-11-30 18:08 ` Rob Herring
  2020-12-01 19:39   ` Sonal Santan
  2020-12-02  2:14 ` Xu Yilun
  2020-12-06 16:31 ` Tom Rix
  10 siblings, 1 reply; 29+ messages in thread
From: Rob Herring @ 2020-11-30 18:08 UTC (permalink / raw)
  To: Sonal Santan
  Cc: linux-kernel, Sonal Santan, linux-fpga, maxz, lizhih,
	Michal Simek, Stefano Stabellini, devicetree

On Sat, Nov 28, 2020 at 5:02 PM Sonal Santan <sonal.santan@xilinx.com> wrote:
>
> Hello,
>
> This patch series adds management physical function driver for Xilinx Alveo PCIe
> accelerator cards, https://www.xilinx.com/products/boards-and-kits/alveo.html
> This driver is part of Xilinx Runtime (XRT) open source stack.
>
> The patch depends on the "PATCH Xilinx Alveo libfdt prep" which was posted
> before.
>
> ALVEO PLATFORM ARCHITECTURE
>
> Alveo PCIe FPGA based platforms have a static *shell* partition and a partial
> re-configurable *user* partition. The shell partition is automatically loaded from
> flash when host is booted and PCIe is enumerated by BIOS. Shell cannot be changed
> till the next cold reboot. The shell exposes two PCIe physical functions:
>
> 1. management physical function
> 2. user physical function
>
> The patch series includes Documentation/xrt.rst which describes Alveo
> platform, xmgmt driver architecture and deployment model in more more detail.
>
> Users compile their high level design in C/C++/OpenCL or RTL into FPGA image
> using Vitis https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
> tools. The image is packaged as xclbin and contains partial bitstream for the
> user partition and necessary metadata. Users can dynamically swap the image
> running on the user partition in order to switch between different workloads.
>
> ALVEO DRIVERS
>
> Alveo Linux kernel driver *xmgmt* binds to management physical function of
> Alveo platform. The modular driver framework is organized into several
> platform drivers which primarily handle the following functionality:
>
> 1.  Loading firmware container also called xsabin at driver attach time
> 2.  Loading of user compiled xclbin with FPGA Manager integration
> 3.  Clock scaling of image running on user partition
> 4.  In-band sensors: temp, voltage, power, etc.
> 5.  Device reset and rescan
> 6.  Flashing static *shell* partition
>
> The platform drivers are packaged into *xrt-lib* helper module with a well
> defined interfaces the details of which can be found in Documentation/xrt.rst.
>
> xmgmt driver is second generation Alveo management driver and evolution of
> the first generation (out of tree) Alveo management driver, xclmgmt. The
> sources of the first generation drivers were posted on LKML last year--
> https://lore.kernel.org/lkml/20190319215401.6562-1-sonal.santan@xilinx.com/
>
> Changes since the first generation driver include the following: the driver
> has been re-architected as data driven modular driver; the driver has been
> split into xmgmt and xrt-lib; user physical function driver has been removed
> from the patch series.
>
> Alveo/XRT security and platform architecture is documented on the following
> GitHub pages:
> https://xilinx.github.io/XRT/master/html/security.html
> https://xilinx.github.io/XRT/master/html/platforms_partitions.html
>
> User physical function driver is not included in this patch series.
>
> TESTING AND VALIDATION
>
> xmgmt driver can be tested with full XRT open source stack which includes
> user space libraries, board utilities and (out of tree) first generation
> user physical function driver xocl. XRT open source runtime stack is
> available at https://github.com/Xilinx/XRT. This patch series has been
> validated on Alveo U50 platform.
>
> Complete documentation for XRT open source stack can be found here--
> https://xilinx.github.io/XRT/master/html/index.html

I've not gotten into the patch details, but I'm not clear on what the
lifecycle of the DT looks like here. What's the starting point and
what manipulations to the DT are being done? I'm trying to understand
if using libfdt is the right way versus operating on an unflattened
tree.

Rob

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files Sonal Santan
@ 2020-12-01  4:27   ` Moritz Fischer
  2020-12-02 18:57     ` Sonal Santan
  0 siblings, 1 reply; 29+ messages in thread
From: Moritz Fischer @ 2020-12-01  4:27 UTC (permalink / raw)
  To: Sonal Santan
  Cc: linux-kernel, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

Hi Sonal,

On Sat, Nov 28, 2020 at 04:00:34PM -0800, Sonal Santan wrote:
> From: Sonal Santan <sonal.santan@xilinx.com>
> 
> Add XRT UAPI header files which describe flash layout, XRT
> mailbox protocol, xclBin/axlf FPGA image container format and
> XRT management physical function driver ioctl interfaces.
> 
> flash_xrt_data.h:
> Layout used by XRT to store private data on flash.
> 
> mailbox_proto.h:
> Mailbox opcodes and high level data structures representing
> various kinds of information like sensors, clock, etc.
> 
> mailbox_transport.h:
> Transport protocol used by mailbox.
> 
> xclbin.h:
> Container format used to store compiled FPGA image which includes
> bitstream and metadata.

Can these headers be introduced together with the code that uses them as
a logical change?

I haven't looked too closely, but it helps reviewing if you can break it
into smaller pieces that can stand by themselves.

Thanks,
Moritz

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers Sonal Santan
@ 2020-12-01  4:54   ` Moritz Fischer
  2020-12-02 21:24     ` Max Zhen
  0 siblings, 1 reply; 29+ messages in thread
From: Moritz Fischer @ 2020-12-01  4:54 UTC (permalink / raw)
  To: Sonal Santan
  Cc: linux-kernel, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

On Sat, Nov 28, 2020 at 04:00:33PM -0800, Sonal Santan wrote:
> From: Sonal Santan <sonal.santan@xilinx.com>
> 
> Describe Alveo XRT driver architecture and provide basic overview
> of Xilinx Alveo platform.
> 
> Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
> ---
>  Documentation/fpga/index.rst |   1 +
>  Documentation/fpga/xrt.rst   | 588 +++++++++++++++++++++++++++++++++++
>  2 files changed, 589 insertions(+)
>  create mode 100644 Documentation/fpga/xrt.rst
> 
> diff --git a/Documentation/fpga/index.rst b/Documentation/fpga/index.rst
> index f80f95667ca2..30134357b70d 100644
> --- a/Documentation/fpga/index.rst
> +++ b/Documentation/fpga/index.rst
> @@ -8,6 +8,7 @@ fpga
>      :maxdepth: 1
> 
>      dfl
> +    xrt
> 
>  .. only::  subproject and html
> 
> diff --git a/Documentation/fpga/xrt.rst b/Documentation/fpga/xrt.rst
> new file mode 100644
> index 000000000000..9f37d46459b0
> --- /dev/null
> +++ b/Documentation/fpga/xrt.rst
> @@ -0,1 +1,588 @@
> +==================================
> +XRTV2 Linux Kernel Driver Overview
> +==================================
> +
> +XRTV2 drivers are second generation `XRT <https://github.com/Xilinx/XRT>`_ drivers which
> +support `Alveo <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_ PCIe platforms
> +from Xilinx.
> +
> +XRTV2 drivers support *subsystem* style data driven platforms where driver's configuration
> +and behavior is determined by meta data provided by platform (in *device tree* format).
> +Primary management physical function (MPF) driver is called **xmgmt**. Primary user physical
> +function (UPF) driver is called **xuser** and HW subsystem drivers are packaged into a library
> +module called **xrt-lib**, which is shared by **xmgmt** and **xuser** (WIP).
WIP?
> +
> +Alveo Platform Overview
> +=======================
> +
> +Alveo platforms are architected as two physical FPGA partitions: *Shell* and *User*. Shell
Nit: The Shell provides ...
> +provides basic infrastructure for the Alveo platform like PCIe connectivity, board management,
> +Dynamic Function Exchange (DFX), sensors, clocking, reset, and security. User partition contains
> +user compiled binary which is loaded by a process called DFX also known as partial reconfiguration.
> +
> +Physical partitions require strict HW compatibility with each other for DFX to work properly.
> +Every physical partition has two interface UUIDs: *parent* UUID and *child* UUID. For simple
> +single stage platforms Shell → User forms parent child relationship. For complex two stage
> +platforms Base → Shell → User forms the parent child relationship chain.
> +
> +.. note::
> +   Partition compatibility matching is key design component of Alveo platforms and XRT. Partitions
> +   have child and parent relationship. A loaded partition exposes child partition UUID to advertise
> +   its compatibility requirement for child partition. When loading a child partition the xmgmt
> +   management driver matches parent UUID of the child partition against child UUID exported by the
> +   parent. Parent and child partition UUIDs are stored in the *xclbin* (for user) or *xsabin* (for
> +   base and shell). Except for root UUID, VSEC, hardware itself does not know about UUIDs. UUIDs are
> +   stored in xsabin and xclbin.
> +
> +
> +The physical partitions and their loading is illustrated below::
> +
> +            SHELL                               USER
> +        +-----------+                  +-------------------+
> +        |           |                  |                   |
> +        | VSEC UUID | CHILD     PARENT |    LOGIC UUID     |
> +        |           o------->|<--------o                   |
> +        |           | UUID       UUID  |                   |
> +        +-----+-----+                  +--------+----------+
> +              |                                 |
> +              .                                 .
> +              |                                 |
> +          +---+---+                      +------+--------+
> +          |  POR  |                      | USER COMPILED |
> +          | FLASH |                      |    XCLBIN     |
> +          +-------+                      +---------------+
> +
> +
> +Loading Sequence
> +----------------
> +
> +Shell partition is loaded from flash at system boot time. It establishes the PCIe link and exposes
Nit: The Shell
> +two physical functions to the BIOS. After OS boot, xmgmt driver attaches to PCIe physical function
> +0 exposed by the Shell and then looks for VSEC in PCIe extended configuration space. Using VSEC it
> +determines the logic UUID of Shell and uses the UUID to load matching *xsabin* file from Linux
> +firmware directory. The xsabin file contains metadata to discover peripherals that are part of Shell
> +and firmware(s) for any embedded soft processors in Shell.

Neat.
> +
> +Shell exports child interface UUID which is used for compatibility check when loading user compiled
Nit: The Shell
> +xclbin over the User partition as part of DFX. When a user requests loading of a specific xclbin the
> +xmgmt management driver reads the parent interface UUID specified in the xclbin and matches it with
> +child interface UUID exported by Shell to determine if xclbin is compatible with the Shell. If match
> +fails loading of xclbin is denied.
> +
> +xclbin loading is requested using ICAP_DOWNLOAD_AXLF ioctl command. When loading xclbin xmgmt driver
> +performs the following operations:
> +
> +1. Sanity check the xclbin contents
> +2. Isolate the User partition
> +3. Download the bitstream using the FPGA config engine (ICAP)
> +4. De-isolate the User partition
Is this modelled as bridges and regions?

> +5. Program the clocks (ClockWiz) driving the User partition
> +6. Wait for memory controller (MIG) calibration
> +
> +`Platform Loading Overview <https://xilinx.github.io/XRT/master/html/platforms_partitions.html>`_
> +provides more detailed information on platform loading.
> +
> +xsabin
> +------
> +
> +Each Alveo platform comes packaged with its own xsabin. The xsabin is trusted component of the
> +platform. For format details refer to :ref:`xsabin/xclbin Container Format`. xsabin contains
> +basic information like UUIDs, platform name and metadata in the form of device tree. See
> +:ref:`Device Tree Usage` for details and example.
> +
> +xclbin
> +------
> +
> +xclbin is compiled by end user using
> +`Vitis <https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html>`_ tool set from
> +Xilinx. The xclbin contains sections describing user compiled acceleration engines/kernels, memory
> +subsystems, clocking information etc. It also contains bitstream for the user partition, UUIDs,
> +platform name, etc. xclbin uses the same container format as xsabin which is described below.
> +
> +
> +xsabin/xclbin Container Format
> +------------------------------
> +
> +xclbin/xsabin is ELF-like binary container format. It is structured as series of sections.
> +There is a file header followed by several section headers which is followed by sections.
> +A section header points to an actual section. There is an optional signature at the end.
> +The format is defined by header file ``xclbin.h``. The following figure illustrates a
> +typical xclbin::
> +
> +
> +          +---------------------+
> +          |                     |
> +          |       HEADER        |
> +          +---------------------+
> +          |   SECTION  HEADER   |
> +          |                     |
> +          +---------------------+
> +          |         ...         |
> +          |                     |
> +          +---------------------+
> +          |   SECTION  HEADER   |
> +          |                     |
> +          +---------------------+
> +          |       SECTION       |
> +          |                     |
> +          +---------------------+
> +          |         ...         |
> +          |                     |
> +          +---------------------+
> +          |       SECTION       |
> +          |                     |
> +          +---------------------+
> +          |      SIGNATURE      |
> +          |      (OPTIONAL)     |
> +          +---------------------+
> +
> +
> +xclbin/xsabin files can be packaged, un-packaged and inspected using XRT utility called
> +**xclbinutil**. xclbinutil is part of XRT open source software stack. The source code for
> +xclbinutil can be found at https://github.com/Xilinx/XRT/tree/master/src/runtime_src/tools/xclbinutil
> +
> +For example to enumerate the contents of a xclbin/xsabin use the *--info* switch as shown
> +below::
> +
> +  xclbinutil --info --input /opt/xilinx/firmware/u50/gen3x16-xdma/blp/test/bandwidth.xclbin
> +  xclbinutil --info --input /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/partition.xsabin
> +
> +
> +Device Tree Usage
> +-----------------
> +
> +As mentioned previously xsabin stores metadata which advertise HW subsystems present in a partition.
> +The metadata is stored in device tree format with well defined schema. Subsystem instantiations are
> +captured as children of ``addressable_endpoints`` node. Subsystem nodes have standard attributes like
> +``reg``, ``interrupts`` etc. Additionally the nodes also have PCIe specific attributes:
> +``pcie_physical_function`` and ``pcie_bar_mapping``. These identify which PCIe physical function and
> +which BAR space in that physical function the subsystem resides. XRT management driver uses this
> +information to bind *platform drivers* to the subsystem instantiations. The platform drivers are
> +found in **xrt-lib.ko** kernel module defined later. Below is an example of device tree for Alveo U50
> +platform::

I might be missing something, but couldn't you structure the addressable
endpoints in a way that encodes the physical function as a parent / child
relation?

What are the regs relative to?
> +
> +  /dts-v1/;
> +
> +  /{
> +	logic_uuid = "f465b0a3ae8c64f619bc150384ace69b";
> +
> +	schema_version {
> +		major = <0x01>;
> +		minor = <0x00>;
> +	};
> +
> +	interfaces {
> +
> +		@0 {
> +			interface_uuid = "862c7020a250293e32036f19956669e5";
> +		};
> +	};
> +
> +	addressable_endpoints {
> +
> +		ep_blp_rom_00 {
> +			reg = <0x00 0x1f04000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +		};
> +
> +		ep_card_flash_program_00 {
> +			reg = <0x00 0x1f06000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_quad_spi-1.0\0axi_quad_spi";
> +			interrupts = <0x03 0x03>;
> +		};
> +
> +		ep_cmc_firmware_mem_00 {
> +			reg = <0x00 0x1e20000 0x00 0x20000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +
> +			firmware {
> +				firmware_product_name = "cmc";
> +				firmware_branch_name = "u50";
> +				firmware_version_major = <0x01>;
> +				firmware_version_minor = <0x00>;
> +			};
> +		};
> +
> +		ep_cmc_intc_00 {
> +			reg = <0x00 0x1e03000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> +			interrupts = <0x04 0x04>;
> +		};
> +
> +		ep_cmc_mutex_00 {
> +			reg = <0x00 0x1e02000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_cmc_regmap_00 {
> +			reg = <0x00 0x1e08000 0x00 0x2000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +
> +			firmware {
> +				firmware_product_name = "sc-fw";
> +				firmware_branch_name = "u50";
> +				firmware_version_major = <0x05>;
> +			};
> +		};
> +
> +		ep_cmc_reset_00 {
> +			reg = <0x00 0x1e01000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_ddr_mem_calib_00 {
> +			reg = <0x00 0x63000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_debug_bscan_mgmt_00 {
> +			reg = <0x00 0x1e90000 0x00 0x10000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-debug_bridge-1.0\0debug_bridge";
> +		};
> +
> +		ep_ert_base_address_00 {
> +			reg = <0x00 0x21000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_ert_command_queue_mgmt_00 {
> +			reg = <0x00 0x40000 0x00 0x10000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
> +		};
> +
> +		ep_ert_command_queue_user_00 {
> +			reg = <0x00 0x40000 0x00 0x10000>;
> +			pcie_physical_function = <0x01>;
> +			compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
> +		};
> +
> +		ep_ert_firmware_mem_00 {
> +			reg = <0x00 0x30000 0x00 0x8000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +
> +			firmware {
> +				firmware_product_name = "ert";
> +				firmware_branch_name = "v20";
> +				firmware_version_major = <0x01>;
> +			};
> +		};
> +
> +		ep_ert_intc_00 {
> +			reg = <0x00 0x23000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> +			interrupts = <0x05 0x05>;
> +		};
> +
> +		ep_ert_reset_00 {
> +			reg = <0x00 0x22000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_ert_sched_00 {
> +			reg = <0x00 0x50000 0x00 0x1000>;
> +			pcie_physical_function = <0x01>;
> +			compatible = "xilinx.com,reg_abs-ert_sched-1.0\0ert_sched";
> +			interrupts = <0x09 0x0c>;
> +		};
> +
> +		ep_fpga_configuration_00 {
> +			reg = <0x00 0x1e88000 0x00 0x8000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_hwicap-1.0\0axi_hwicap";
> +			interrupts = <0x02 0x02>;
> +		};
> +
> +		ep_icap_reset_00 {
> +			reg = <0x00 0x1f07000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_mailbox_mgmt_00 {
> +			reg = <0x00 0x1f10000 0x00 0x10000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-mailbox-1.0\0mailbox";
> +			interrupts = <0x00 0x00>;
> +		};
> +
> +		ep_mailbox_user_00 {
> +			reg = <0x00 0x1f00000 0x00 0x10000>;
> +			pcie_physical_function = <0x01>;
> +			compatible = "xilinx.com,reg_abs-mailbox-1.0\0mailbox";
> +			interrupts = <0x08 0x08>;
> +		};
> +
> +		ep_msix_00 {
> +			reg = <0x00 0x00 0x00 0x20000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-msix-1.0\0msix";
> +			pcie_bar_mapping = <0x02>;
> +		};
> +
> +		ep_pcie_link_mon_00 {
> +			reg = <0x00 0x1f05000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_pr_isolate_plp_00 {
> +			reg = <0x00 0x1f01000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_pr_isolate_ulp_00 {
> +			reg = <0x00 0x1000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> +		};
> +
> +		ep_uuid_rom_00 {
> +			reg = <0x00 0x64000 0x00 0x1000>;
> +			pcie_physical_function = <0x00>;
> +			compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +		};
> +
> +		ep_xdma_00 {
> +			reg = <0x00 0x00 0x00 0x10000>;
> +			pcie_physical_function = <0x01>;
> +			compatible = "xilinx.com,reg_abs-xdma-1.0\0xdma";
> +			pcie_bar_mapping = <0x02>;
> +		};
> +	};
> +
> +  }
> +
> +
> +
> +Deployment Models
> +=================
> +
> +Baremetal
> +---------
> +
> +In bare-metal deployments both MPF and UPF are visible and accessible. xmgmt driver binds to
> +MPF. xmgmt driver operations are privileged and available to system administrator. The full
> +stack is illustrated below::
> +
> +
> +                            HOST
> +
> +                 [XMGMT]            [XUSER]
> +                    |                  |
> +                    |                  |
> +                 +-----+            +-----+
> +                 | MPF |            | UPF |
> +                 |     |            |     |
> +                 | PF0 |            | PF1 |
> +                 +--+--+            +--+--+
> +          ......... ^................. ^..........
> +                    |                  |
> +                    |   PCIe DEVICE    |
> +                    |                  |
> +                 +--+------------------+--+
> +                 |         SHELL          |
> +                 |                        |
> +                 +------------------------+
> +                 |         USER           |
> +                 |                        |
> +                 |                        |
> +                 |                        |
> +                 |                        |
> +                 +------------------------+
> +
> +
> +
> +Virtualized
> +-----------
> +
> +In virtualized deployments privileged MPF is assigned to host but unprivileged UPF
> +is assigned to guest VM via PCIe pass-through. xmgmt driver in host binds to MPF.
> +xmgmt driver operations are privileged and only accessible by hosting service provider.
> +The full stack is illustrated below::
> +
> +
> +                                 .............
> +                  HOST           .    VM     .
> +                                 .           .
> +                 [XMGMT]         .  [XUSER]  .
> +                    |            .     |     .
> +                    |            .     |     .
> +                 +-----+         .  +-----+  .
> +                 | MPF |         .  | UPF |  .
> +                 |     |         .  |     |  .
> +                 | PF0 |         .  | PF1 |  .
> +                 +--+--+         .  +--+--+  .
> +          ......... ^................. ^..........
> +                    |                  |
> +                    |   PCIe DEVICE    |
> +                    |                  |
> +                 +--+------------------+--+
> +                 |         SHELL          |
> +                 |                        |
> +                 +------------------------+
> +                 |         USER           |
> +                 |                        |
> +                 |                        |
> +                 |                        |
> +                 |                        |
> +                 +------------------------+
> +
> +
> +
> +Driver Modules
> +==============
> +
> +xrt-lib.ko
> +----------
> +
> +Repository of all subsystem drivers and pure software modules that can potentially
> +be shared between xmgmt and xuser. All these drivers are structured as Linux
> +*platform driver* and are instantiated by xmgmt (or xuser in future) based on meta
> +data associated with hardware. The metadata is in the form of device tree as
> +explained before.
> +
> +xmgmt.ko
> +--------
> +
> +The xmgmt driver is a PCIe device driver driving MPF found on Xilinx's Alveo
> +PCIE device. It consists of one *root* driver, one or more *partition* drivers
> +and one or more *leaf* drivers. The root and MPF specific leaf drivers are in
> +xmgmt.ko. The partition driver and other leaf drivers are in xrt-lib.ko.
> +
> +The instantiation of specific partition driver or leaf driver is completely data
> +driven based on meta data (mostly in device tree format) found through VSEC
> +capability and inside firmware files, such as xsabin or xclbin file. The root
> +driver manages life cycle of multiple partition drivers, which, in turn, manages
> +multiple leaf drivers. This allows a single set of driver code to support all
> +kinds of subsystems exposed by different shells. The difference among all
> +these subsystems will be handled in leaf drivers with root and partition drivers
> +being part of the infrastructure and provide common services for all leaves found
> +on all platforms.
> +
> +
> +xmgmt-root
> +^^^^^^^^^^
> +
> +The xmgmt-root driver is a PCIe device driver attaches to MPF. It's part of the
Nit: s/attaches/attached ?
> +infrastructure of the MPF driver and resides in xmgmt.ko. This driver
> +
> +* manages one or more partition drivers
> +* provides access to functionalities that requires pci_dev, such as PCIE config
> +  space access, to other leaf drivers through parent calls
> +* together with partition driver, facilities event callbacks for other leaf drivers
> +* together with partition driver, facilities inter-leaf driver calls for other leaf
> +  drivers
> +
> +When root driver starts, it will explicitly create an initial partition instance,
> +which contains leaf drivers that will trigger the creation of other partition
> +instances. The root driver will wait for all partitions and leaves to be created
> +before it returns from it's probe routine and claim success of the initialization
> +of the entire xmgmt driver.
> +
> +partition
> +^^^^^^^^^
> +
> +The partition driver is a platform device driver whose life cycle is managed by
> +root and does not have real IO mem or IRQ resources. It's part of the
> +infrastructure of the MPF driver and resides in xrt-lib.ko. This driver
> +
> +* manages one or more leaf drivers so that multiple leaves can be managed as a group
> +* provides access to root from leaves, so that parent calls, event notifications
> +  and inter-leaf calls can happen
> +
> +In xmgmt, an initial partition driver instance will be created by root, which
> +contains leaves that will trigger partition instances to be created to manage
> +groups of leaves found on different partitions on hardware, such as VSEC, Shell,
> +and User.
> +
> +leaves
> +^^^^^^
> +
> +The leaf driver is a platform device driver whose life cycle is managed by
> +a partition driver and may or may not have real IO mem or IRQ resources. They
> +are the real meat of xmgmt and contains platform specific code to Shell and User
> +found on a MPF.
> +
> +A leaf driver may not have real hardware resources when it merely acts as a driver
> +that manages certain in-memory states for xmgmt. These in-memory states could be
> +shared by multiple other leaves.
> +
> +Leaf drivers assigned to specific hardware resources drive specific subsystem in
> +the device. To manipulate the subsystem or carry out a task, a leaf driver may ask
> +help from root via parent calls and/or from other leaves via inter-leaf calls.
> +
> +A leaf can also broadcast events through infrastructure code for other leaves
> +to process. It can also receive event notification from infrastructure about certain
> +events, such as post-creation or pre-exit of a particular leaf.
> +
> +
> +Driver Interfaces
> +=================
> +
> +xmgmt Driver Ioctls
> +-------------------
> +
> +Ioctls exposed by xmgmt driver to user space are enumerated in the following table:
> +
> +== ===================== ============================= ===========================
> +#  Functionality         ioctl request code            data format
> +== ===================== ============================= ===========================
> +1  FPGA image download   XMGMT_IOCICAPDOWNLOAD_AXLF    xmgmt_ioc_bitstream_axlf
> +2  CL frequency scaling  XMGMT_IOCFREQSCALE            xmgmt_ioc_freqscaling
> +== ===================== ============================= ===========================
> +
> +xmgmt Driver Sysfs
> +------------------
> +
> +xmgmt driver exposes a rich set of sysfs interfaces. Subsystem platform drivers
> +export sysfs node for every platform instance.
> +
> +Every partition also exports its UUIDs. See below for examples::
> +
> +  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/interface_uuids
> +  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/logic_uuids
> +
> +
> +hwmon
> +-----
> +
> +xmgmt driver exposes standard hwmon interface to report voltage, current, temperature,
> +power, etc. These can easily be viewed using *sensors* command line utility.
> +
> +
> +mailbox
> +-------
> +
> +xmgmt communicates with user physical function driver via HW mailbox. Mailbox opcodes
> +are defined in ``mailbox_proto.h``. `Mailbox Inter-domain Communication Protocol
> +<https://xilinx.github.io/XRT/master/html/mailbox.proto.html>`_ defines the full
> +specification. xmgmt implements subset of the specification. It provides the following
> +services to the UPF driver:
> +
> +1.  Responding to *are you there* request including determining if the two drivers are
> +    running in the same OS domain
> +2.  Provide sensor readings, loaded xclbin UUID, clock frequency, shell information, etc.
> +3.  Perform PCIe hot reset
> +4.  Download user compiled xclbin

Is this gonna use the mailbox framework?

> +
> +
> +Platform Security Considerations
> +================================
> +
> +`Security of Alveo Platform <https://xilinx.github.io/XRT/master/html/security.html>`_
> +discusses the deployment options and security implications in great detail.
> --
> 2.17.1

That's a lot of text, I'll have to read it again most likely,

- Moritz

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
  2020-11-30 18:08 ` [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Rob Herring
@ 2020-12-01 19:39   ` Sonal Santan
  0 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-12-01 19:39 UTC (permalink / raw)
  To: Rob Herring
  Cc: linux-kernel, linux-fpga, Max Zhen, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree

Hi,

> -----Original Message-----
> From: Rob Herring <robh@kernel.org>
> Sent: Monday, November 30, 2020 10:09 AM
> To: Sonal Santan <sonals@xilinx.com>
> Cc: linux-kernel@vger.kernel.org; Sonal Santan <sonals@xilinx.com>; linux-
> fpga@vger.kernel.org; Max Zhen <maxz@xilinx.com>; Lizhi Hou
> <lizhih@xilinx.com>; Michal Simek <michals@xilinx.com>; Stefano Stabellini
> <stefanos@xilinx.com>; devicetree@vger.kernel.org
> Subject: Re: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
> 
> On Sat, Nov 28, 2020 at 5:02 PM Sonal Santan <sonal.santan@xilinx.com>
> wrote:
> >
> > Hello,
> >
> > This patch series adds management physical function driver for Xilinx
> > Alveo PCIe accelerator cards,
> > https://www.xilinx.com/products/boards-and-kits/alveo.html
> > This driver is part of Xilinx Runtime (XRT) open source stack.
> >
> > The patch depends on the "PATCH Xilinx Alveo libfdt prep" which was
> > posted before.
> >
> > ALVEO PLATFORM ARCHITECTURE
> >
> > Alveo PCIe FPGA based platforms have a static *shell* partition and a
> > partial re-configurable *user* partition. The shell partition is
> > automatically loaded from flash when host is booted and PCIe is
> > enumerated by BIOS. Shell cannot be changed till the next cold reboot. The
> shell exposes two PCIe physical functions:
> >
> > 1. management physical function
> > 2. user physical function
> >
> > The patch series includes Documentation/xrt.rst which describes Alveo
> > platform, xmgmt driver architecture and deployment model in more more
> detail.
> >
> > Users compile their high level design in C/C++/OpenCL or RTL into FPGA
> > image using Vitis
> > https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
> > tools. The image is packaged as xclbin and contains partial bitstream
> > for the user partition and necessary metadata. Users can dynamically
> > swap the image running on the user partition in order to switch between
> different workloads.
> >
> > ALVEO DRIVERS
> >
> > Alveo Linux kernel driver *xmgmt* binds to management physical
> > function of Alveo platform. The modular driver framework is organized
> > into several platform drivers which primarily handle the following
> functionality:
> >
> > 1.  Loading firmware container also called xsabin at driver attach
> > time 2.  Loading of user compiled xclbin with FPGA Manager integration
> > 3.  Clock scaling of image running on user partition 4.  In-band
> > sensors: temp, voltage, power, etc.
> > 5.  Device reset and rescan
> > 6.  Flashing static *shell* partition
> >
> > The platform drivers are packaged into *xrt-lib* helper module with a
> > well defined interfaces the details of which can be found in
> Documentation/xrt.rst.
> >
> > xmgmt driver is second generation Alveo management driver and
> > evolution of the first generation (out of tree) Alveo management
> > driver, xclmgmt. The sources of the first generation drivers were
> > posted on LKML last year--
> > https://lore.kernel.org/lkml/20190319215401.6562-1-sonal.santan@xilinx
> > .com/
> >
> > Changes since the first generation driver include the following: the
> > driver has been re-architected as data driven modular driver; the
> > driver has been split into xmgmt and xrt-lib; user physical function
> > driver has been removed from the patch series.
> >
> > Alveo/XRT security and platform architecture is documented on the
> > following GitHub pages:
> > https://xilinx.github.io/XRT/master/html/security.html
> > https://xilinx.github.io/XRT/master/html/platforms_partitions.html
> >
> > User physical function driver is not included in this patch series.
> >
> > TESTING AND VALIDATION
> >
> > xmgmt driver can be tested with full XRT open source stack which
> > includes user space libraries, board utilities and (out of tree) first
> > generation user physical function driver xocl. XRT open source runtime
> > stack is available at https://github.com/Xilinx/XRT. This patch series
> > has been validated on Alveo U50 platform.
> >
> > Complete documentation for XRT open source stack can be found here--
> > https://xilinx.github.io/XRT/master/html/index.html
> 
> I've not gotten into the patch details, but I'm not clear on what the lifecycle of
> the DT looks like here. What's the starting point and what manipulations to the
> DT are being done? I'm trying to understand if using libfdt is the right way
> versus operating on an unflattened tree.

The DT is created when the *xmgmt* driver attaches to the device and reads the
xsabin. The xsabin defines the shell and the HW subsystems contained in the shell.
Since the shell is live for the lifetime of the driver, the DT is captured in the
partition subdev. The partition then looks for the "addressable_endpoints" node and
walks the list of child endpoint nodes, each of which is then copied into its own
instance of subdev. The life cycle of the copied nodes is the same as that of the
owning subdev. All the DT nodes are released when the partition, together with its
child subdevs, goes away, which happens when the driver is unloaded.

The xmgmt driver also collects all endpoints which advertise "pcie_physical_function
= <0x01>" and then constructs a DT on the fly, which is then sent to the user
physical function driver via the mailbox; a rough sketch of that walk is below. This
requires support for manipulating device tree nodes.
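
For illustration only (not the actual driver code; the helper name is made up),
filtering "addressable_endpoints" by PF with the in-kernel libfdt primitives
could look like this:

  #include <linux/libfdt.h>

  /* Illustrative helper: count endpoints owned by a given PCIe PF. */
  static int count_pf_endpoints(const void *fdt, u32 pf)
  {
          int parent, node, count = 0;

          parent = fdt_path_offset(fdt, "/addressable_endpoints");
          if (parent < 0)
                  return parent;

          fdt_for_each_subnode(node, fdt, parent) {
                  const fdt32_t *val;

                  /* DT properties are stored big-endian */
                  val = fdt_getprop(fdt, node,
                                    "pcie_physical_function", NULL);
                  if (val && fdt32_to_cpu(*val) == pf)
                          count++;
          }
          return count;
  }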

In the next revision of the driver we would like to add support for a variation of
the platform which has three partitions: base, shell and user--
https://xilinx.github.io/XRT/master/html/platforms_partitions.html#two-stage-platforms

In this model, *base* is initialized as described above. However, *shell* can be
changed dynamically by the service provider. This means xmgmt would load the DT
corresponding to the shell partition when the shell is loaded and tear it down when
the shell is unloaded. The DT corresponding to *base* remains unaffected.

Thanks,
-Sonal
> 
> Rob

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver Sonal Santan
@ 2020-12-01 20:51   ` Moritz Fischer
       [not found]     ` <BY5PR02MB60683E3470179E6AD10FEE26B9F20@BY5PR02MB6068.namprd02.prod.outlook.com>
  2020-12-02  3:00   ` Xu Yilun
  1 sibling, 1 reply; 29+ messages in thread
From: Moritz Fischer @ 2020-12-01 20:51 UTC (permalink / raw)
  To: Sonal Santan
  Cc: linux-kernel, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

Hi Sonal,

On Sat, Nov 28, 2020 at 04:00:39PM -0800, Sonal Santan wrote:
> From: Sonal Santan <sonal.santan@xilinx.com>
> 
> Add management physical function driver core. The driver attaches
> to management physical function of Alveo devices. It instantiates
> the root driver and one or more partition drivers which in turn
> instantiate platform drivers. The instantiation of partition and
> platform drivers is completely data driven. The driver integrates
> with FPGA manager and provides xclbin download service.
> 
> Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
> ---
>  drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c     | 194 ++++
>  drivers/fpga/alveo/mgmt/xmgmt-fmgr.h         |  29 +
>  drivers/fpga/alveo/mgmt/xmgmt-main-impl.h    |  36 +
>  drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c | 930 +++++++++++++++++++
>  drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c     | 190 ++++
>  drivers/fpga/alveo/mgmt/xmgmt-main.c         | 843 +++++++++++++++++
>  drivers/fpga/alveo/mgmt/xmgmt-root.c         | 375 ++++++++
>  7 files changed, 2597 insertions(+)
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-root.c
> 
> diff --git a/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c b/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
> new file mode 100644
> index 000000000000..d451b5a2c291
> --- /dev/null
> +++ b/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
> @@ -0,0 +1,194 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo Management Function Driver
> + *
> + * Copyright (C) 2019-2020 Xilinx, Inc.
> + * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
> + *
> + * Authors: Sonal.Santan@xilinx.com
> + */
> +
> +#include <linux/cred.h>
> +#include <linux/efi.h>
> +#include <linux/fpga/fpga-mgr.h>
> +#include <linux/platform_device.h>
> +#include <linux/module.h>
> +#include <linux/vmalloc.h>
> +
> +#include "xrt-subdev.h"
> +#include "xmgmt-fmgr.h"
> +#include "xrt-axigate.h"
> +#include "xmgmt-main-impl.h"
> +
> +/*
> + * Container to capture and cache full xclbin as it is passed in blocks by FPGA
> + * Manager. Driver needs access to full xclbin to walk through xclbin sections.
> + * FPGA Manager's .write() backend sends incremental blocks without any
> + * knowledge of xclbin format forcing us to collect the blocks and stitch them
> + * together here.
> + */
> +
> +struct xfpga_klass {
Nit: xfpga_priv or xfpga_drvdata? 
> +	const struct platform_device *pdev;
> +	struct axlf         *blob;
> +	char                 name[64];
Nit: 64 could be a named constant?
> +	size_t               count;
> +	size_t               total_count;
> +	struct mutex         axlf_lock;
> +	int                  reader_ref;
> +	enum fpga_mgr_states state;
> +	enum xfpga_sec_level sec_level;
This appears unused, do you want to add this with the code that uses it?
> +};

Maybe add some kerneldoc markup?
> +
> +struct key *xfpga_keys;
Appears unused, can you introduce this together with the code using it?
> +
> +static int xmgmt_pr_write_init(struct fpga_manager *mgr,
> +	struct fpga_image_info *info, const char *buf, size_t count)
> +{
> +	struct xfpga_klass *obj = mgr->priv;
> +	const struct axlf *bin = (const struct axlf *)buf;
Nit: Reverse x-mas tree please.

xxxxxx
xxxx
xxx
x
> +
> +	if (count < sizeof(struct axlf)) {
> +		obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
> +		return -EINVAL;
> +	}
> +
> +	if (count > bin->m_header.m_length) {
> +		obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
> +		return -EINVAL;
> +	}
> +
> +	/* Free up the previous blob */
> +	vfree(obj->blob);
> +	obj->blob = vmalloc(bin->m_header.m_length);
> +	if (!obj->blob) {
> +		obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
> +		return -ENOMEM;
> +	}
> +
> +	xrt_info(obj->pdev, "Begin download of xclbin %pUb of length %lld B",
> +		&bin->m_header.uuid, bin->m_header.m_length);
We already have framework level prints for that (admittedly somewhat
less verbose). Please remove.
> +
> +	obj->count = 0;
> +	obj->total_count = bin->m_header.m_length;
> +	obj->state = FPGA_MGR_STATE_WRITE_INIT;
Does the framework state tracking not work for you?
> +	return 0;
> +}
> +
> +static int xmgmt_pr_write(struct fpga_manager *mgr,
> +	const char *buf, size_t count)
> +{
> +	struct xfpga_klass *obj = mgr->priv;
> +	char *curr = (char *)obj->blob;
> +
> +	if ((obj->state != FPGA_MGR_STATE_WRITE_INIT) &&
> +		(obj->state != FPGA_MGR_STATE_WRITE)) {
> +		obj->state = FPGA_MGR_STATE_WRITE_ERR;
> +		return -EINVAL;
> +	}
> +
> +	curr += obj->count;
> +	obj->count += count;
> +
> +	/*
> +	 * The xclbin buffer should not be longer than advertised in the header
> +	 */
> +	if (obj->total_count < obj->count) {
> +		obj->state = FPGA_MGR_STATE_WRITE_ERR;
> +		return -EINVAL;
> +	}
> +
> +	xrt_info(obj->pdev, "Copying block of %zu B of xclbin", count);
Please drop those.
> +	memcpy(curr, buf, count);

I'm confused. Why are we just copying things around here? What picks
this up afterwards?
> +	obj->state = FPGA_MGR_STATE_WRITE;
> +	return 0;
> +}
> +
> +
> +static int xmgmt_pr_write_complete(struct fpga_manager *mgr,
> +				   struct fpga_image_info *info)
> +{
> +	int result = 0;
> +	struct xfpga_klass *obj = mgr->priv;
> +
> +	if (obj->state != FPGA_MGR_STATE_WRITE) {
> +		obj->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
> +		return -EINVAL;
> +	}
> +
> +	/* Check if we got the complete xclbin */
> +	if (obj->blob->m_header.m_length != obj->count) {
> +		obj->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
> +		return -EINVAL;
> +	}
> +
> +	result = xmgmt_ulp_download((void *)obj->pdev, obj->blob);
> +
> +	obj->state = result ? FPGA_MGR_STATE_WRITE_COMPLETE_ERR :
> +		FPGA_MGR_STATE_WRITE_COMPLETE;
Why the separate state tracking?
> +	xrt_info(obj->pdev, "Finish downloading of xclbin %pUb: %d",
> +		&obj->blob->m_header.uuid, result);
> +	vfree(obj->blob);
> +	obj->blob = NULL;
> +	obj->count = 0;
> +	return result;
> +}
> +
> +static enum fpga_mgr_states xmgmt_pr_state(struct fpga_manager *mgr)
> +{
> +	struct xfpga_klass *obj = mgr->priv;
> +
> +	return obj->state;
> +}
> +
> +static const struct fpga_manager_ops xmgmt_pr_ops = {
> +	.initial_header_size = sizeof(struct axlf),
> +	.write_init = xmgmt_pr_write_init,
> +	.write = xmgmt_pr_write,
> +	.write_complete = xmgmt_pr_write_complete,
> +	.state = xmgmt_pr_state,
> +};
> +
> +
> +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev)
> +{
> +	struct fpga_manager *fmgr;
> +	int ret = 0;
> +	struct xfpga_klass *obj = vzalloc(sizeof(struct xfpga_klass));
> +
> +	xrt_info(pdev, "probing...");
Drop this, please.
> +	if (!obj)
> +		return ERR_PTR(-ENOMEM);
> +
> +	snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
> +	obj->state = FPGA_MGR_STATE_UNKNOWN;
> +	obj->pdev = pdev;
> +	fmgr = fpga_mgr_create(&pdev->dev,
> +			       obj->name,
> +			       &xmgmt_pr_ops,
> +			       obj);
I think (eyeballed) this fits on two lines?
> +	if (!fmgr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	obj->sec_level = XFPGA_SEC_NONE;
Seems unused so far, please drop until it's used.
> +	ret = fpga_mgr_register(fmgr);
> +	if (ret) {
> +		fpga_mgr_free(fmgr);
> +		kfree(obj);
> +		return ERR_PTR(ret);
> +	}
> +	mutex_init(&obj->axlf_lock);
> +	return fmgr;
Since this patchset will wait at least till next cycle, you might want
to look into the devm_* functions for registering and creating FPGA
Managers.
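
Something along these lines (a rough sketch against the current fpga-mgr
API, untested; devm_fpga_mgr_create() has been in tree since v4.20):

	fmgr = devm_fpga_mgr_create(&pdev->dev, obj->name,
				    &xmgmt_pr_ops, obj);
	if (!fmgr)
		return ERR_PTR(-ENOMEM);

	ret = fpga_mgr_register(fmgr);
	if (ret)
		return ERR_PTR(ret);
	/* no fpga_mgr_free() on the error path, devres handles it */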

> +}
> +
> +int xmgmt_fmgr_remove(struct fpga_manager *fmgr)
> +{
> +	struct xfpga_klass *obj = fmgr->priv;
> +
> +	mutex_destroy(&obj->axlf_lock);
> +	obj->state = FPGA_MGR_STATE_UNKNOWN;
> +	fpga_mgr_unregister(fmgr);
> +	vfree(obj->blob);
> +	vfree(obj);
> +	return 0;
> +}
> diff --git a/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h b/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
> new file mode 100644
> index 000000000000..2beba649609f
> --- /dev/null
> +++ b/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Xilinx Alveo Management Function Driver
> + *
> + * Copyright (C) 2019-2020 Xilinx, Inc.
> + * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
> + *
> + * Authors: Sonal.Santan@xilinx.com
> + */
> +
> +#ifndef	_XMGMT_FMGR_H_
> +#define	_XMGMT_FMGR_H_
> +
> +#include <linux/fpga/fpga-mgr.h>
> +#include <linux/mutex.h>
> +
> +#include <linux/xrt/xclbin.h>
> +
> +enum xfpga_sec_level {
> +	XFPGA_SEC_NONE = 0,
> +	XFPGA_SEC_DEDICATE,
> +	XFPGA_SEC_SYSTEM,
> +	XFPGA_SEC_MAX = XFPGA_SEC_SYSTEM,
> +};
> +
> +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev);
> +int xmgmt_fmgr_remove(struct fpga_manager *fmgr);
> +
> +#endif
> diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h b/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
> new file mode 100644
> index 000000000000..c89024cb8d46
> --- /dev/null
> +++ b/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
> @@ -0,0 +1,36 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Xilinx, Inc.
> + *
> + * Authors:
> + *	Lizhi Hou <Lizhi.Hou@xilinx.com>
> + *	Cheng Zhen <maxz@xilinx.com>
> + */
> +
> +#ifndef	_XMGMT_MAIN_IMPL_H_
> +#define	_XMGMT_MAIN_IMPL_H_
> +
> +#include "xrt-subdev.h"
> +#include "xmgmt-main.h"
> +
> +extern struct platform_driver xmgmt_main_driver;
> +extern struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[];
> +
> +extern int xmgmt_ulp_download(struct platform_device *pdev, const void *xclbin);
> +extern int bitstream_axlf_mailbox(struct platform_device *pdev,
> +	const void *xclbin);
> +extern int xmgmt_hot_reset(struct platform_device *pdev);
> +
> +/* Getting dtb for specified partition. Caller should vfree returned dtb .*/
> +extern char *xmgmt_get_dtb(struct platform_device *pdev,
> +	enum provider_kind kind);
> +extern char *xmgmt_get_vbnv(struct platform_device *pdev);
> +extern int xmgmt_get_provider_uuid(struct platform_device *pdev,
> +	enum provider_kind kind, uuid_t *uuid);
> +
> +extern void *xmgmt_pdev2mailbox(struct platform_device *pdev);
> +extern void *xmgmt_mailbox_probe(struct platform_device *pdev);
> +extern void xmgmt_mailbox_remove(void *handle);
> +extern void xmgmt_peer_notify_state(void *handle, bool online);
> +
> +#endif	/* _XMGMT_MAIN_IMPL_H_ */
> diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c b/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
> new file mode 100644
> index 000000000000..b3d82fc3618b
> --- /dev/null
> +++ b/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
> @@ -0,0 +1,930 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA MGMT PF entry point driver
> + *
> + * Copyright (C) 2020 Xilinx, Inc.
> + *
> + * Peer communication via mailbox
> + *
> + * Authors:
> + *      Cheng Zhen <maxz@xilinx.com>
> + */
> +
> +#include <linux/crc32c.h>
> +#include <linux/xrt/mailbox_proto.h>
> +#include "xmgmt-main-impl.h"
> +#include "xrt-mailbox.h"
> +#include "xrt-cmc.h"
> +#include "xrt-metadata.h"
> +#include "xrt-xclbin.h"
> +#include "xrt-clock.h"
> +#include "xrt-calib.h"
> +#include "xrt-icap.h"
> +
> +struct xmgmt_mailbox {
> +	struct platform_device *pdev;
> +	struct platform_device *mailbox;
> +	struct mutex lock;
> +	void *evt_hdl;
> +	char *test_msg;
> +	bool peer_in_same_domain;
> +};
> +
> +#define	XMGMT_MAILBOX_PRT_REQ(xmbx, send, request, sw_ch)	do {	\
> +	const char *dir = (send) ? ">>>>>" : "<<<<<";			\
> +									\
> +	if ((request)->req == XCL_MAILBOX_REQ_PEER_DATA) {		\
> +		struct xcl_mailbox_peer_data *p =			\
> +			(struct xcl_mailbox_peer_data *)(request)->data;\
> +									\
> +		xrt_info((xmbx)->pdev, "%s(%s) %s%s",			\
> +			mailbox_req2name((request)->req),		\
> +			mailbox_group_kind2name(p->kind),		\
> +			dir, mailbox_chan2name(sw_ch));			\
> +	} else {							\
> +		xrt_info((xmbx)->pdev, "%s %s%s",			\
> +			mailbox_req2name((request)->req),		\
> +			dir, mailbox_chan2name(sw_ch));			\
> +	}								\
> +} while (0)
> +#define	XMGMT_MAILBOX_PRT_REQ_SEND(xmbx, req, sw_ch)			\
> +	XMGMT_MAILBOX_PRT_REQ(xmbx, true, req, sw_ch)
> +#define	XMGMT_MAILBOX_PRT_REQ_RECV(xmbx, req, sw_ch)			\
> +	XMGMT_MAILBOX_PRT_REQ(xmbx, false, req, sw_ch)
> +#define	XMGMT_MAILBOX_PRT_RESP(xmbx, resp)				\
> +	xrt_info((xmbx)->pdev, "respond %ld bytes >>>>>%s",		\
> +	(resp)->xmip_data_size, mailbox_chan2name((resp)->xmip_sw_ch))
> +
> +static inline struct xmgmt_mailbox *pdev2mbx(struct platform_device *pdev)
> +{
> +	return (struct xmgmt_mailbox *)xmgmt_pdev2mailbox(pdev);
> +}
> +
> +static void xmgmt_mailbox_post(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch, void *buf, size_t len)
> +{
> +	int rc;
> +	struct xrt_mailbox_ioctl_post post = {
> +		.xmip_req_id = msgid,
> +		.xmip_sw_ch = sw_ch,
> +		.xmip_data = buf,
> +		.xmip_data_size = len
> +	};
> +
> +	BUG_ON(!mutex_is_locked(&xmbx->lock));
> +
> +	if (!xmbx->mailbox) {
> +		xrt_err(xmbx->pdev, "mailbox not available");
> +		return;
> +	}
> +
> +	if (msgid == 0) {
> +		XMGMT_MAILBOX_PRT_REQ_SEND(xmbx,
> +			(struct xcl_mailbox_req *)buf, sw_ch);
> +	} else {
> +		XMGMT_MAILBOX_PRT_RESP(xmbx, &post);
> +	}
> +
> +	rc = xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_POST, &post);
> +	if (rc)
> +		xrt_err(xmbx->pdev, "failed to post msg: %d", rc);
> +}
> +
> +static void xmgmt_mailbox_notify(struct xmgmt_mailbox *xmbx, bool sw_ch,
> +	struct xcl_mailbox_req *req, size_t len)
> +{
> +	xmgmt_mailbox_post(xmbx, 0, sw_ch, req, len);
> +}
> +
> +static void xmgmt_mailbox_respond(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch, void *buf, size_t len)
> +{
> +	mutex_lock(&xmbx->lock);
> +	xmgmt_mailbox_post(xmbx, msgid, sw_ch, buf, len);
> +	mutex_unlock(&xmbx->lock);
> +}
> +
> +static void xmgmt_mailbox_resp_test_msg(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	char *msg;
> +
> +	mutex_lock(&xmbx->lock);
> +
> +	if (xmbx->test_msg == NULL) {
> +		mutex_unlock(&xmbx->lock);
> +		xrt_err(pdev, "test msg is not set, drop request");
> +		return;
> +	}
> +	msg = xmbx->test_msg;
> +	xmbx->test_msg = NULL;
> +
> +	mutex_unlock(&xmbx->lock);
> +
> +	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, msg, strlen(msg) + 1);
> +	vfree(msg);
> +}
> +
> +static int xmgmt_mailbox_dtb_add_prop(struct platform_device *pdev,
> +	char *dst_dtb, const char *ep_name, const char *regmap_name,
> +	const char *prop, const void *val, int size)
> +{
> +	int rc = xrt_md_set_prop(DEV(pdev), dst_dtb, ep_name, regmap_name,
> +		prop, val, size);
> +
> +	if (rc) {
> +		xrt_err(pdev, "failed to set %s@(%s, %s): %d",
> +			ep_name, regmap_name, prop, rc);
> +	}
> +	return rc;
> +}
> +
> +static int xmgmt_mailbox_dtb_add_vbnv(struct platform_device *pdev, char *dtb)
> +{
> +	int rc = 0;
> +	char *vbnv = xmgmt_get_vbnv(pdev);
> +
> +	if (vbnv == NULL) {
> +		xrt_err(pdev, "failed to get VBNV");
> +		return -ENOENT;
> +	}
> +	rc = xmgmt_mailbox_dtb_add_prop(pdev, dtb, NULL, NULL,
> +		PROP_VBNV, vbnv, strlen(vbnv) + 1);
> +	kfree(vbnv);
> +	return rc;
> +}
> +
> +static int xmgmt_mailbox_dtb_copy_logic_uuid(struct platform_device *pdev,
> +	const char *src_dtb, char *dst_dtb)
> +{
> +	const void *val;
> +	int sz;
> +	int rc = xrt_md_get_prop(DEV(pdev), src_dtb, NULL, NULL,
> +		PROP_LOGIC_UUID, &val, &sz);
> +
> +	if (rc) {
> +		xrt_err(pdev, "failed to get %s: %d", PROP_LOGIC_UUID, rc);
> +		return rc;
> +	}
> +	return xmgmt_mailbox_dtb_add_prop(pdev, dst_dtb, NULL, NULL,
> +		PROP_LOGIC_UUID, val, sz);
> +}
> +
> +static int xmgmt_mailbox_dtb_add_vrom(struct platform_device *pdev,
> +	const char *src_dtb, char *dst_dtb)
> +{
> +	/* For compatibility for legacy xrt driver. */
> +	enum FeatureBitMask {
> +		UNIFIED_PLATFORM		= 0x0000000000000001
> +		, XARE_ENBLD			= 0x0000000000000002
> +		, BOARD_MGMT_ENBLD		= 0x0000000000000004
> +		, MB_SCHEDULER			= 0x0000000000000008
> +		, PROM_MASK			= 0x0000000000000070
> +		, DEBUG_MASK			= 0x000000000000FF00
> +		, PEER_TO_PEER			= 0x0000000000010000
> +		, FBM_UUID			= 0x0000000000020000
> +		, HBM				= 0x0000000000040000
> +		, CDMA				= 0x0000000000080000
> +		, QDMA				= 0x0000000000100000
> +		, RUNTIME_CLK_SCALE		= 0x0000000000200000
> +		, PASSTHROUGH_VIRTUALIZATION	= 0x0000000000400000
> +	};
> +	struct FeatureRomHeader {
> +		unsigned char EntryPointString[4];
> +		uint8_t MajorVersion;
> +		uint8_t MinorVersion;
> +		uint32_t VivadoBuildID;
> +		uint32_t IPBuildID;
> +		uint64_t TimeSinceEpoch;
> +		unsigned char FPGAPartName[64];
> +		unsigned char VBNVName[64];
> +		uint8_t DDRChannelCount;
> +		uint8_t DDRChannelSize;
> +		uint64_t DRBaseAddress;
> +		uint64_t FeatureBitMap;
> +		unsigned char uuid[16];
> +		uint8_t HBMCount;
> +		uint8_t HBMSize;
> +		uint32_t CDMABaseAddress[4];
> +	} header = { 0 };
> +	char *vbnv = xmgmt_get_vbnv(pdev);
> +	int rc;
> +
> +	*(u32 *)header.EntryPointString = 0x786e6c78;
> +
> +	if (vbnv)
> +		strncpy(header.VBNVName, vbnv, sizeof(header.VBNVName) - 1);
> +	kfree(vbnv);
> +
> +	header.FeatureBitMap = UNIFIED_PLATFORM;
> +	rc = xrt_md_get_prop(DEV(pdev), src_dtb,
> +		NODE_CMC_FW_MEM, NULL, PROP_IO_OFFSET, NULL, NULL);
> +	if (rc == 0)
> +		header.FeatureBitMap |= BOARD_MGMT_ENBLD;
> +	rc = xrt_md_get_prop(DEV(pdev), src_dtb,
> +		NODE_ERT_FW_MEM, NULL, PROP_IO_OFFSET, NULL, NULL);
> +	if (rc == 0)
> +		header.FeatureBitMap |= MB_SCHEDULER;
> +
> +	return xmgmt_mailbox_dtb_add_prop(pdev, dst_dtb, NULL, NULL,
> +		PROP_VROM, &header, sizeof(header));
> +}
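Nit: the magic constant 0x786e6c78 above is "xlnx" in little-endian; a named
constant with a comment would make that obvious, e.g. (name is just a
suggestion):

	#define XRT_VROM_MAGIC	0x786e6c78	/* "xlnx" */

	*(u32 *)header.EntryPointString = XRT_VROM_MAGIC;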
> +
> +static u32 xmgmt_mailbox_dtb_user_pf(struct platform_device *pdev,
> +	const char *dtb, const char *epname, const char *regmap)
> +{
> +	const u32 *pfnump;
> +	int rc = xrt_md_get_prop(DEV(pdev), dtb, epname, regmap,
> +		PROP_PF_NUM, (const void **)&pfnump, NULL);
> +
> +	if (rc)
> +		return -1;
> +	return be32_to_cpu(*pfnump);
> +}
> +
> +static int xmgmt_mailbox_dtb_copy_user_endpoints(struct platform_device *pdev,
> +	const char *src, char *dst)
> +{
> +	int rc = 0;
> +	char *epname = NULL, *regmap = NULL;
> +	u32 pfnum = xmgmt_mailbox_dtb_user_pf(pdev, src,
> +		NODE_MAILBOX_USER, NULL);
> +	const u32 level = cpu_to_be32(1);
> +	struct device *dev = DEV(pdev);
> +
> +	if (pfnum == (u32)-1) {
> +		xrt_err(pdev, "failed to get user pf num");
> +		return -EINVAL;
> +	}
> +
> +	for (xrt_md_get_next_endpoint(dev, src, NULL, NULL, &epname, &regmap);
> +		rc == 0 && epname != NULL;
> +		xrt_md_get_next_endpoint(dev, src, epname, regmap,
> +		&epname, &regmap)) {
> +		if (pfnum !=
> +			xmgmt_mailbox_dtb_user_pf(pdev, src, epname, regmap))
> +			continue;
> +		rc = xrt_md_copy_endpoint(dev, dst, src, epname, regmap, NULL);
> +		if (rc) {
> +			xrt_err(pdev, "failed to copy (%s, %s): %d",
> +				epname, regmap, rc);
> +		} else {
> +			rc = xrt_md_set_prop(dev, dst, epname, regmap,
> +				PROP_PARTITION_LEVEL, &level, sizeof(level));
> +			if (rc) {
> +				xrt_err(pdev,
> +					"can't set level for (%s, %s): %d",
> +					epname, regmap, rc);
> +			}
> +		}
> +	}
> +	return rc;
> +}
> +
> +static char *xmgmt_mailbox_user_dtb(struct platform_device *pdev)
> +{
> +	/* TODO: add support for PLP. */
> +	const char *src = NULL;
> +	char *dst = NULL;
> +	struct device *dev = DEV(pdev);
> +	int rc = xrt_md_create(dev, &dst);
> +
> +	if (rc || dst == NULL)
> +		return NULL;
> +
> +	rc = xmgmt_mailbox_dtb_add_vbnv(pdev, dst);
> +	if (rc)
> +		goto fail;
> +
> +	src = xmgmt_get_dtb(pdev, XMGMT_BLP);
> +	if (src == NULL) {
> +		xrt_err(pdev, "failed to get BLP dtb");
> +		goto fail;
> +	}
> +
> +	rc = xmgmt_mailbox_dtb_copy_logic_uuid(pdev, src, dst);
> +	if (rc)
> +		goto fail;
> +
> +	rc = xmgmt_mailbox_dtb_add_vrom(pdev, src, dst);
> +	if (rc)
> +		goto fail;
> +
> +	rc = xrt_md_copy_endpoint(dev, dst, src, NODE_PARTITION_INFO,
> +		NULL, NODE_PARTITION_INFO_BLP);
> +	if (rc)
> +		goto fail;
> +
> +	rc = xrt_md_copy_endpoint(dev, dst, src, NODE_INTERFACES, NULL, NULL);
> +	if (rc)
> +		goto fail;
> +
> +	rc = xmgmt_mailbox_dtb_copy_user_endpoints(pdev, src, dst);
> +	if (rc)
> +		goto fail;
> +
> +	xrt_md_pack(dev, dst);
> +	vfree(src);
> +	return dst;
> +
> +fail:
> +	vfree(src);
> +	vfree(dst);
> +	return NULL;
> +}
> +
> +static void xmgmt_mailbox_resp_subdev(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch, u64 offset, u64 size)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	char *dtb = xmgmt_mailbox_user_dtb(pdev);
> +	long dtbsz;
> +	struct xcl_subdev *hdr;
> +	u64 totalsz;
> +
> +	if (dtb == NULL)
> +		return;
> +
> +	dtbsz = xrt_md_size(DEV(pdev), dtb);
> +	totalsz = dtbsz + sizeof(*hdr) - sizeof(hdr->data);
> +	if (offset != 0 || totalsz > size) {
> +		/* Only support fetching dtb in one shot. */
> +		vfree(dtb);
> +		xrt_err(pdev, "need %lldB, user buffer size is %lldB, dropped",
> +			totalsz, size);
> +		return;
> +	}
> +
> +	hdr = vzalloc(totalsz);
> +	if (hdr == NULL) {
> +		vfree(dtb);
> +		return;
> +	}
> +
> +	hdr->ver = 1;
> +	hdr->size = dtbsz;
> +	hdr->rtncode = XRT_MSG_SUBDEV_RTN_COMPLETE;
> +	(void) memcpy(hdr->data, dtb, dtbsz);
> +
> +	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, hdr, totalsz);
> +
> +	vfree(dtb);
> +	vfree(hdr);
> +}
> +
> +static void xmgmt_mailbox_resp_sensor(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch, u64 offset, u64 size)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	struct xcl_sensor sensors = { 0 };
> +	struct platform_device *cmcpdev = xrt_subdev_get_leaf_by_id(pdev,
> +		XRT_SUBDEV_CMC, PLATFORM_DEVID_NONE);
> +	int rc;
> +
> +	if (cmcpdev) {
> +		rc = xrt_subdev_ioctl(cmcpdev, XRT_CMC_READ_SENSORS, &sensors);
> +		(void) xrt_subdev_put_leaf(pdev, cmcpdev);
> +		if (rc)
> +			xrt_err(pdev, "can't read sensors: %d", rc);
> +	}
> +
> +	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &sensors,
> +		min((u64)sizeof(sensors), size));
> +}
> +
> +static int xmgmt_mailbox_get_freq(struct xmgmt_mailbox *xmbx,
> +	enum CLOCK_TYPE type, u64 *freq, u64 *freq_cnter)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	const char *clkname =
> +		clock_type2epname(type) ? clock_type2epname(type) : "UNKNOWN";
> +	struct platform_device *clkpdev =
> +		xrt_subdev_get_leaf_by_epname(pdev, clkname);
> +	int rc;
> +	struct xrt_clock_ioctl_get getfreq = { 0 };
> +
> +	if (clkpdev == NULL) {
> +		xrt_info(pdev, "%s clock is not available", clkname);
> +		return -ENOENT;
> +	}
> +
> +	rc = xrt_subdev_ioctl(clkpdev, XRT_CLOCK_GET, &getfreq);
> +	(void) xrt_subdev_put_leaf(pdev, clkpdev);
> +	if (rc) {
> +		xrt_err(pdev, "can't get %s clock frequency: %d", clkname, rc);
> +		return rc;
> +	}
> +
> +	if (freq)
> +		*freq = getfreq.freq;
> +	if (freq_cnter)
> +		*freq_cnter = getfreq.freq_cnter;
> +	return 0;
> +}
> +
> +static int xmgmt_mailbox_get_icap_idcode(struct xmgmt_mailbox *xmbx, u64 *id)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	struct platform_device *icappdev = xrt_subdev_get_leaf_by_id(pdev,
> +		XRT_SUBDEV_ICAP, PLATFORM_DEVID_NONE);
> +	int rc;
> +
> +	if (icappdev == NULL) {
> +		xrt_err(pdev, "can't find icap");
> +		return -ENOENT;
> +	}
> +
> +	rc = xrt_subdev_ioctl(icappdev, XRT_ICAP_IDCODE, id);
> +	(void) xrt_subdev_put_leaf(pdev, icappdev);
> +	if (rc)
> +		xrt_err(pdev, "can't get icap idcode: %d", rc);
> +	return rc;
> +}
> +
> +static int xmgmt_mailbox_get_mig_calib(struct xmgmt_mailbox *xmbx, u64 *calib)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	struct platform_device *calibpdev = xrt_subdev_get_leaf_by_id(pdev,
> +		XRT_SUBDEV_CALIB, PLATFORM_DEVID_NONE);
> +	int rc;
> +	enum xrt_calib_results res;
> +
> +	if (calibpdev == NULL) {
> +		xrt_err(pdev, "can't find mig calibration subdev");
> +		return -ENOENT;
> +	}
> +
> +	rc = xrt_subdev_ioctl(calibpdev, XRT_CALIB_RESULT, &res);
> +	(void) xrt_subdev_put_leaf(pdev, calibpdev);
> +	if (rc) {
> +		xrt_err(pdev, "can't get mig calibration result: %d", rc);
> +	} else {
> +		if (res == XRT_CALIB_SUCCEEDED)
> +			*calib = 1;
> +		else
> +			*calib = 0;
> +	}
> +	return rc;
> +}
> +
> +static void xmgmt_mailbox_resp_icap(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch, u64 offset, u64 size)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	struct xcl_pr_region icap = { 0 };
> +
> +	(void) xmgmt_mailbox_get_freq(xmbx,
> +		CT_DATA, &icap.freq_data, &icap.freq_cntr_data);
> +	(void) xmgmt_mailbox_get_freq(xmbx,
> +		CT_KERNEL, &icap.freq_kernel, &icap.freq_cntr_kernel);
> +	(void) xmgmt_mailbox_get_freq(xmbx,
> +		CT_SYSTEM, &icap.freq_system, &icap.freq_cntr_system);
> +	(void) xmgmt_mailbox_get_icap_idcode(xmbx, &icap.idcode);
> +	(void) xmgmt_mailbox_get_mig_calib(xmbx, &icap.mig_calib);
> +	BUG_ON(sizeof(icap.uuid) != sizeof(uuid_t));
> +	(void) xmgmt_get_provider_uuid(pdev, XMGMT_ULP, (uuid_t *)&icap.uuid);
> +
> +	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &icap,
> +		min((u64)sizeof(icap), size));
> +}
> +
> +static void xmgmt_mailbox_resp_bdinfo(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch, u64 offset, u64 size)
> +{
> +	struct platform_device *pdev = xmbx->pdev;
> +	struct xcl_board_info *info = vzalloc(sizeof(*info));
> +	struct platform_device *cmcpdev;
> +	int rc;
> +
> +	if (info == NULL)
> +		return;
> +
> +	cmcpdev = xrt_subdev_get_leaf_by_id(pdev,
> +		XRT_SUBDEV_CMC, PLATFORM_DEVID_NONE);
> +	if (cmcpdev) {
> +		rc = xrt_subdev_ioctl(cmcpdev, XRT_CMC_READ_BOARD_INFO, info);
> +		(void) xrt_subdev_put_leaf(pdev, cmcpdev);
> +		if (rc)
> +			xrt_err(pdev, "can't read board info: %d", rc);
> +	}
> +
> +	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, info,
> +		min((u64)sizeof(*info), size));
> +
> +	vfree(info);
> +}
> +
> +static void xmgmt_mailbox_simple_respond(struct xmgmt_mailbox *xmbx,
> +	u64 msgid, bool sw_ch, int rc)
> +{
> +	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &rc, sizeof(rc));
> +}
> +
> +static void xmgmt_mailbox_resp_peer_data(struct xmgmt_mailbox *xmbx,
> +	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> +{
> +	struct xcl_mailbox_peer_data *pdata =
> +		(struct xcl_mailbox_peer_data *)req->data;
> +
> +	if (len < (sizeof(*req) + sizeof(*pdata) - 1)) {
> +		xrt_err(xmbx->pdev, "received corrupted %s, dropped",
> +			mailbox_req2name(req->req));
> +		return;
> +	}
> +
> +	switch (pdata->kind) {
> +	case XCL_SENSOR:
> +		xmgmt_mailbox_resp_sensor(xmbx, msgid, sw_ch,
> +			pdata->offset, pdata->size);
> +		break;
> +	case XCL_ICAP:
> +		xmgmt_mailbox_resp_icap(xmbx, msgid, sw_ch,
> +			pdata->offset, pdata->size);
> +		break;
> +	case XCL_BDINFO:
> +		xmgmt_mailbox_resp_bdinfo(xmbx, msgid, sw_ch,
> +			pdata->offset, pdata->size);
> +		break;
> +	case XCL_SUBDEV:
> +		xmgmt_mailbox_resp_subdev(xmbx, msgid, sw_ch,
> +			pdata->offset, pdata->size);
> +		break;
> +	case XCL_MIG_ECC:
> +	case XCL_FIREWALL:
> +	case XCL_DNA: /* TODO */
> +		xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, 0);
> +		break;
> +	default:
> +		xrt_err(xmbx->pdev, "%s(%s) request not handled",
> +			mailbox_req2name(req->req),
> +			mailbox_group_kind2name(pdata->kind));
> +		break;
> +	}
> +}
> +
> +static bool xmgmt_mailbox_is_same_domain(struct xmgmt_mailbox *xmbx,
> +	struct xcl_mailbox_conn *mb_conn)
> +{
> +	uint32_t crc_chk;
> +	phys_addr_t paddr;
> +	struct platform_device *pdev = xmbx->pdev;
> +
> +	paddr = virt_to_phys((void *)mb_conn->kaddr);
> +	if (paddr != (phys_addr_t)mb_conn->paddr) {
> +		xrt_info(pdev, "paddrs differ, user 0x%llx, mgmt 0x%llx",
> +			mb_conn->paddr, paddr);
> +		return false;
> +	}
> +
> +	crc_chk = crc32c_le(~0, (void *)mb_conn->kaddr, PAGE_SIZE);
> +	if (crc_chk != mb_conn->crc32) {
> +		xrt_info(pdev, "CRCs differ, user 0x%x, mgmt 0x%x",
> +			mb_conn->crc32, crc_chk);
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static void xmgmt_mailbox_resp_user_probe(struct xmgmt_mailbox *xmbx,
> +	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> +{
> +	struct xcl_mailbox_conn_resp *resp = vzalloc(sizeof(*resp));
> +	struct xcl_mailbox_conn *conn = (struct xcl_mailbox_conn *)req->data;
> +
> +	if (resp == NULL)
> +		return;
> +
> +	if (len < (sizeof(*req) + sizeof(*conn) - 1)) {
> +		xrt_err(xmbx->pdev, "received corrupted %s, dropped",
> +			mailbox_req2name(req->req));
> +		vfree(resp);
> +		return;
> +	}
> +
> +	resp->conn_flags |= XCL_MB_PEER_READY;
> +	if (xmgmt_mailbox_is_same_domain(xmbx, conn)) {
> +		xmbx->peer_in_same_domain = true;
> +		resp->conn_flags |= XCL_MB_PEER_SAME_DOMAIN;
> +	}
> +
> +	xmgmt_mailbox_respond(xmbx, msgid, sw_ch, resp, sizeof(*resp));
> +	vfree(resp);
> +}
> +
> +static void xmgmt_mailbox_resp_hot_reset(struct xmgmt_mailbox *xmbx,
> +	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> +{
> +	int ret;
> +	struct platform_device *pdev = xmbx->pdev;
> +
> +	xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, 0);
> +
> +	ret = xmgmt_hot_reset(pdev);
> +	if (ret)
> +		xrt_err(pdev, "failed to hot reset: %d", ret);
> +	else
> +		xmgmt_peer_notify_state(xmbx, true);
> +}
> +
> +static void xmgmt_mailbox_resp_load_xclbin(struct xmgmt_mailbox *xmbx,
> +	struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> +{
> +	struct xcl_mailbox_bitstream_kaddr *kaddr =
> +		(struct xcl_mailbox_bitstream_kaddr *)req->data;
> +	void *xclbin = (void *)(uintptr_t)kaddr->addr;
> +	int ret = bitstream_axlf_mailbox(xmbx->pdev, xclbin);
> +
> +	xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, ret);
> +}
> +
> +static void xmgmt_mailbox_listener(void *arg, void *data, size_t len,
> +	u64 msgid, int err, bool sw_ch)
> +{
> +	struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)arg;
> +	struct platform_device *pdev = xmbx->pdev;
> +	struct xcl_mailbox_req *req = (struct xcl_mailbox_req *)data;
> +
> +	if (err) {
> +		xrt_err(pdev, "failed to receive request: %d", err);
> +		return;
> +	}
> +	if (len < sizeof(*req)) {
> +		xrt_err(pdev, "received corrupted request");
> +		return;
> +	}
> +
> +	XMGMT_MAILBOX_PRT_REQ_RECV(xmbx, req, sw_ch);
> +	switch (req->req) {
> +	case XCL_MAILBOX_REQ_TEST_READ:
> +		xmgmt_mailbox_resp_test_msg(xmbx, msgid, sw_ch);
> +		break;
> +	case XCL_MAILBOX_REQ_PEER_DATA:
> +		xmgmt_mailbox_resp_peer_data(xmbx, req, len, msgid, sw_ch);
> +		break;
> +	case XCL_MAILBOX_REQ_READ_P2P_BAR_ADDR: /* TODO */
> +		xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, -ENOTSUPP);
> +		break;
> +	case XCL_MAILBOX_REQ_USER_PROBE:
> +		xmgmt_mailbox_resp_user_probe(xmbx, req, len, msgid, sw_ch);
> +		break;
> +	case XCL_MAILBOX_REQ_HOT_RESET:
> +		xmgmt_mailbox_resp_hot_reset(xmbx, req, len, msgid, sw_ch);
> +		break;
> +	case XCL_MAILBOX_REQ_LOAD_XCLBIN_KADDR:
> +		if (xmbx->peer_in_same_domain) {
> +			xmgmt_mailbox_resp_load_xclbin(xmbx,
> +				req, len, msgid, sw_ch);
> +		} else {
> +			xrt_err(pdev, "%s not handled, not in same domain",
> +				mailbox_req2name(req->req));
> +		}
> +		break;
> +	default:
> +		xrt_err(pdev, "%s(%d) request not handled",
> +			mailbox_req2name(req->req), req->req);
> +		break;
> +	}
> +}
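checkpatch should complain about the -ENOTSUPP above: it is not a SUSV4 error
code (user space prints it as "Unknown error 524"); -EOPNOTSUPP is preferred:

	xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, -EOPNOTSUPP);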
> +
> +static void xmgmt_mailbox_reg_listener(struct xmgmt_mailbox *xmbx)
> +{
> +	struct xrt_mailbox_ioctl_listen listen = {
> +		xmgmt_mailbox_listener, xmbx };
> +
> +	BUG_ON(!mutex_is_locked(&xmbx->lock));
> +	if (!xmbx->mailbox)
> +		return;
> +	(void) xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_LISTEN, &listen);
> +}
> +
> +static void xmgmt_mailbox_unreg_listener(struct xmgmt_mailbox *xmbx)
> +{
> +	struct xrt_mailbox_ioctl_listen listen = { 0 };
> +
> +	BUG_ON(!mutex_is_locked(&xmbx->lock));
> +	BUG_ON(!xmbx->mailbox);
> +	(void) xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_LISTEN, &listen);
> +}
> +
> +static bool xmgmt_mailbox_leaf_match(enum xrt_subdev_id id,
> +	struct platform_device *pdev, void *arg)
> +{
> +	return (id == XRT_SUBDEV_MAILBOX);
> +}
> +
> +static int xmgmt_mailbox_event_cb(struct platform_device *pdev,
> +	enum xrt_events evt, void *arg)
> +{
> +	struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
> +	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
> +
> +	switch (evt) {
> +	case XRT_EVENT_POST_CREATION:
> +		BUG_ON(esd->xevt_subdev_id != XRT_SUBDEV_MAILBOX);
> +		BUG_ON(xmbx->mailbox);
> +		mutex_lock(&xmbx->lock);
> +		xmbx->mailbox = xrt_subdev_get_leaf_by_id(pdev,
> +			XRT_SUBDEV_MAILBOX, PLATFORM_DEVID_NONE);
> +		xmgmt_mailbox_reg_listener(xmbx);
> +		mutex_unlock(&xmbx->lock);
> +		break;
> +	case XRT_EVENT_PRE_REMOVAL:
> +		BUG_ON(esd->xevt_subdev_id != XRT_SUBDEV_MAILBOX);
> +		BUG_ON(!xmbx->mailbox);
> +		mutex_lock(&xmbx->lock);
> +		xmgmt_mailbox_unreg_listener(xmbx);
> +		(void) xrt_subdev_put_leaf(pdev, xmbx->mailbox);
> +		xmbx->mailbox = NULL;
> +		mutex_unlock(&xmbx->lock);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return XRT_EVENT_CB_CONTINUE;
> +}
> +
> +static ssize_t xmgmt_mailbox_user_dtb_show(struct file *filp,
> +	struct kobject *kobj, struct bin_attribute *attr,
> +	char *buf, loff_t off, size_t count)
> +{
> +	struct device *dev = kobj_to_dev(kobj);
> +	struct platform_device *pdev = to_platform_device(dev);
> +	char *blob = NULL;
> +	long  size;
> +	ssize_t ret = 0;
> +
> +	blob = xmgmt_mailbox_user_dtb(pdev);
> +	if (!blob) {
> +		ret = -ENOENT;
> +		goto failed;
> +	}
> +
> +	size = xrt_md_size(dev, blob);
> +	if (size <= 0) {
> +		ret = -EINVAL;
> +		goto failed;
> +	}
> +
> +	if (off >= size)
> +		goto failed;
> +	if (off + count > size)
> +		count = size - off;
> +	memcpy(buf, blob + off, count);
> +
> +	ret = count;
> +failed:
> +	vfree(blob);
> +	return ret;
> +}
> +
> +static struct bin_attribute meta_data_attr = {
> +	.attr = {
> +		.name = "metadata_for_user",
> +		.mode = 0400
> +	},
> +	.read = xmgmt_mailbox_user_dtb_show,
> +	.size = 0
> +};
> +
> +static struct bin_attribute  *xmgmt_mailbox_bin_attrs[] = {
> +	&meta_data_attr,
> +	NULL,
> +};
> +
> +int xmgmt_mailbox_get_test_msg(struct xmgmt_mailbox *xmbx, bool sw_ch,
> +	char *buf, size_t *len)
> +{
> +	int rc;
> +	struct platform_device *pdev = xmbx->pdev;
> +	struct xcl_mailbox_req req = { 0, XCL_MAILBOX_REQ_TEST_READ, };
> +	struct xrt_mailbox_ioctl_request leaf_req = {
> +		.xmir_sw_ch = sw_ch,
> +		.xmir_resp_ttl = 1,
> +		.xmir_req = &req,
> +		.xmir_req_size = sizeof(req),
> +		.xmir_resp = buf,
> +		.xmir_resp_size = *len
> +	};
> +
> +	mutex_lock(&xmbx->lock);
> +	if (xmbx->mailbox) {
> +		XMGMT_MAILBOX_PRT_REQ_SEND(xmbx, &req, leaf_req.xmir_sw_ch);
> +		/*
> +		 * mgmt should never send a request to the peer; it should
> +		 * send only notifications or responses. This is the only
> +		 * exception, for debugging purposes.
> +		 */
> +		rc = xrt_subdev_ioctl(xmbx->mailbox,
> +			XRT_MAILBOX_REQUEST, &leaf_req);
> +	} else {
> +		rc = -ENODEV;
> +		xrt_err(pdev, "mailbox not available");
> +	}
> +	mutex_unlock(&xmbx->lock);
> +
> +	if (rc == 0)
> +		*len = leaf_req.xmir_resp_size;
> +	return rc;
> +}
> +
> +int xmgmt_mailbox_set_test_msg(struct xmgmt_mailbox *xmbx,
> +	char *buf, size_t len)
> +{
> +	mutex_lock(&xmbx->lock);
> +
> +	vfree(xmbx->test_msg);
> +	xmbx->test_msg = vmalloc(len);
> +	if (xmbx->test_msg == NULL) {
> +		mutex_unlock(&xmbx->lock);
> +		return -ENOMEM;
> +	}
> +	(void) memcpy(xmbx->test_msg, buf, len);
> +
> +	mutex_unlock(&xmbx->lock);
> +	return 0;
> +}
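test_msg comes from a sysfs store, so it is at most a page; kmemdup() would be
simpler than vmalloc() plus memcpy() here (the matching vfree() calls would
then need to become kfree()):

	xmbx->test_msg = kmemdup(buf, len, GFP_KERNEL);
	if (!xmbx->test_msg) {
		mutex_unlock(&xmbx->lock);
		return -ENOMEM;
	}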
> +
> +static ssize_t peer_msg_show(struct device *dev,
> +	struct device_attribute *attr, char *buf)
> +{
> +	size_t len = 4096;
> +	struct platform_device *pdev = to_platform_device(dev);
> +	struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
> +	int ret = xmgmt_mailbox_get_test_msg(xmbx, false, buf, &len);
> +
> +	return ret == 0 ? len : ret;
> +}
> +static ssize_t peer_msg_store(struct device *dev,
> +	struct device_attribute *da, const char *buf, size_t count)
> +{
> +	struct platform_device *pdev = to_platform_device(dev);
> +	struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
> +	int ret = xmgmt_mailbox_set_test_msg(xmbx, (char *)buf, count);
> +
> +	return ret == 0 ? count : ret;
> +}
> +/* Message test i/f. */
> +static DEVICE_ATTR_RW(peer_msg);
> +
> +static struct attribute *xmgmt_mailbox_attrs[] = {
> +	&dev_attr_peer_msg.attr,
> +	NULL,
> +};
> +
> +static const struct attribute_group xmgmt_mailbox_attrgroup = {
> +	.bin_attrs = xmgmt_mailbox_bin_attrs,
> +	.attrs = xmgmt_mailbox_attrs,
> +};
> +
> +void *xmgmt_mailbox_probe(struct platform_device *pdev)
> +{
> +	struct xmgmt_mailbox *xmbx =
> +		devm_kzalloc(DEV(pdev), sizeof(*xmbx), GFP_KERNEL);
> +
> +	if (!xmbx)
> +		return NULL;
> +	xmbx->pdev = pdev;
> +	mutex_init(&xmbx->lock);
> +
> +	xmbx->evt_hdl = xrt_subdev_add_event_cb(pdev,
> +		xmgmt_mailbox_leaf_match, NULL, xmgmt_mailbox_event_cb);
> +	(void) sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_mailbox_attrgroup);
> +	return xmbx;
> +}
> +
> +void xmgmt_mailbox_remove(void *handle)
> +{
> +	struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)handle;
> +	struct platform_device *pdev = xmbx->pdev;
> +
> +	(void) sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_mailbox_attrgroup);
> +	if (xmbx->evt_hdl)
> +		(void) xrt_subdev_remove_event_cb(pdev, xmbx->evt_hdl);
> +	if (xmbx->mailbox)
> +		(void) xrt_subdev_put_leaf(pdev, xmbx->mailbox);
> +	vfree(xmbx->test_msg);
> +}
> +
> +void xmgmt_peer_notify_state(void *handle, bool online)
> +{
> +	struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)handle;
> +	struct xcl_mailbox_peer_state *st;
> +	struct xcl_mailbox_req *req;
> +	size_t reqlen = sizeof(*req) + sizeof(*st) - 1;
> +
> +	req = vzalloc(reqlen);
> +	if (req == NULL)
> +		return;
> +
> +	req->req = XCL_MAILBOX_REQ_MGMT_STATE;
> +	st = (struct xcl_mailbox_peer_state *)req->data;
> +	st->state_flags = online ? XCL_MB_STATE_ONLINE : XCL_MB_STATE_OFFLINE;
> +	mutex_lock(&xmbx->lock);
> +	xmgmt_mailbox_notify(xmbx, false, req, reqlen);
> +	mutex_unlock(&xmbx->lock);
> +}
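Assuming xcl_mailbox_req.data is the usual one-byte trailing array, converting
it to a flexible array member would let the length arithmetic use the
overflow-checked helper from <linux/overflow.h>:

	size_t reqlen = struct_size(req, data, sizeof(*st));

The same sizeof(*req) + sizeof(*...) - 1 pattern also appears in the request
length checks earlier in this file.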
> diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c b/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
> new file mode 100644
> index 000000000000..042d86fcef41
> --- /dev/null
> +++ b/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
> @@ -0,0 +1,190 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA MGMT PF entry point driver
> + *
> + * Copyright (C) 2020 Xilinx, Inc.
> + *
> + * xclbin download
> + *
> + * Authors:
> + *      Lizhi Hou <lizhi.hou@xilinx.com>
> + */
> +
> +#include <linux/firmware.h>
> +#include <linux/uaccess.h>
> +#include "xrt-xclbin.h"
> +#include "xrt-metadata.h"
> +#include "xrt-subdev.h"
> +#include "xrt-gpio.h"
> +#include "xmgmt-main.h"
> +#include "xrt-icap.h"
> +#include "xrt-axigate.h"
> +
> +static int xmgmt_download_bitstream(struct platform_device  *pdev,
> +	const void *xclbin)
> +{
> +	struct platform_device *icap_leaf = NULL;
> +	struct XHwIcap_Bit_Header bit_header = { 0 };
Please fix the style error in the struct name; XHwIcap_Bit_Header is
CamelCase, which is not kernel style.
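Something like this, for example (untested; only the two fields this function
actually touches are shown, and the name and members are illustrative, so
xrt_xclbin_parse_header() and its other users would need the same rename):

	struct xclbin_bit_header {
		u32 header_length;
		u32 bitstream_length;
	};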
> +	struct xrt_icap_ioctl_wr arg;
> +	char *bitstream = NULL;
> +	int ret;
> +
> +	ret = xrt_xclbin_get_section(xclbin, BITSTREAM, (void **)&bitstream,
> +		NULL);
> +	if (ret || !bitstream) {
> +		xrt_err(pdev, "bitstream not found");
> +		return -ENOENT;
> +	}
> +	ret = xrt_xclbin_parse_header(bitstream,
> +		DMA_HWICAP_BITFILE_BUFFER_SIZE, &bit_header);
> +	if (ret) {
> +		ret = -EINVAL;
> +		xrt_err(pdev, "invalid bitstream header");
> +		goto done;
> +	}
> +	icap_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_ICAP,
> +		PLATFORM_DEVID_NONE);
> +	if (!icap_leaf) {
> +		ret = -ENODEV;
> +		xrt_err(pdev, "icap does not exist");
> +		goto done;
> +	}
> +	arg.xiiw_bit_data = bitstream + bit_header.HeaderLength;
> +	arg.xiiw_data_len = bit_header.BitstreamLength;
> +	ret = xrt_subdev_ioctl(icap_leaf, XRT_ICAP_WRITE, &arg);
> +	if (ret)
> +		xrt_err(pdev, "write bitstream failed, ret = %d", ret);
> +
> +done:
> +	if (icap_leaf)
> +		xrt_subdev_put_leaf(pdev, icap_leaf);
> +	vfree(bitstream);
> +
> +	return ret;
> +}
> +
> +static bool match_shell(enum xrt_subdev_id id,
> +	struct platform_device *pdev, void *arg)
> +{
> +	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> +	const char *ulp_gate;
> +	int ret;
> +
> +	if (!pdata || xrt_md_size(&pdev->dev, pdata->xsp_dtb) <= 0)
> +		return false;
> +
> +	ret = xrt_md_get_epname_pointer(&pdev->dev, pdata->xsp_dtb,
> +		NODE_GATE_ULP, NULL, &ulp_gate);
> +	if (ret)
> +		return false;
> +
> +	ret = xrt_md_check_uuids(&pdev->dev, pdata->xsp_dtb, arg);
> +	if (ret)
> +		return false;
> +
> +	return true;
> +}
> +
> +static bool match_ulp(enum xrt_subdev_id id,
> +	struct platform_device *pdev, void *arg)
> +{
> +	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> +	const char *ulp_gate;
> +	int ret;
> +
> +	if (!pdata || xrt_md_size(&pdev->dev, pdata->xsp_dtb) <= 0)
> +		return false;
> +
> +	ret = xrt_md_check_uuids(&pdev->dev, pdata->xsp_dtb, arg);
> +	if (ret)
> +		return false;
> +
> +	ret = xrt_md_get_epname_pointer(&pdev->dev, pdata->xsp_dtb,
> +		NODE_GATE_ULP, NULL, &ulp_gate);
> +	if (!ret)
> +		return false;
> +
> +	return true;
> +}
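match_shell() and match_ulp() are identical except for whether NODE_GATE_ULP
must be present; a shared helper would remove the duplication. Untested
sketch:

	static bool match_partition(struct platform_device *pdev, void *arg,
		bool want_ulp_gate)
	{
		struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
		const char *ulp_gate;
		int ret;

		if (!pdata || xrt_md_size(&pdev->dev, pdata->xsp_dtb) <= 0)
			return false;
		if (xrt_md_check_uuids(&pdev->dev, pdata->xsp_dtb, arg))
			return false;
		ret = xrt_md_get_epname_pointer(&pdev->dev, pdata->xsp_dtb,
			NODE_GATE_ULP, NULL, &ulp_gate);
		return want_ulp_gate ? ret == 0 : ret != 0;
	}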
> +
> +int xmgmt_ulp_download(struct platform_device  *pdev, const void *xclbin)
> +{
> +	struct platform_device *axigate_leaf;
> +	char *dtb = NULL;
> +	int ret = 0, part_inst;
> +
> +	ret = xrt_xclbin_get_metadata(DEV(pdev), xclbin, &dtb);
> +	if (ret) {
> +		xrt_err(pdev, "can not get partition metadata, ret %d", ret);
> +		goto failed;
> +	}
> +
> +	part_inst = xrt_subdev_lookup_partition(pdev, match_shell, dtb);
> +	if (part_inst < 0) {
> +		xrt_err(pdev, "not found matching plp.");
> +		ret = -ENODEV;
> +		goto failed;
> +	}
> +
> +	/*
> +	 * Find the ulp partition with the interface uuid from the incoming
> +	 * xclbin; the uuid was already verified against the matching plp
> +	 * partition above.
> +	 */
> +	part_inst = xrt_subdev_lookup_partition(pdev, match_ulp, dtb);
> +	if (part_inst >= 0) {
> +		ret = xrt_subdev_destroy_partition(pdev, part_inst);
> +		if (ret) {
> +			xrt_err(pdev, "failed to destroy existing ulp, %d",
> +				ret);
> +			goto failed;
> +		}
> +	}
> +
> +	axigate_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_GATE_ULP);
> +
> +	/* the gate may not exist for 0rp */
> +	if (axigate_leaf) {
> +		ret = xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREEZE,
> +			NULL);
> +		if (ret) {
> +			xrt_err(pdev, "can not freeze gate %s, %d",
> +				NODE_GATE_ULP, ret);
> +			xrt_subdev_put_leaf(pdev, axigate_leaf);
> +			goto failed;
> +		}
> +	}
> +	ret = xmgmt_download_bitstream(pdev, xclbin);
> +	if (axigate_leaf) {
> +		xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREE, NULL);
> +
> +		/* Do we really need this extra toggling gate before setting
> +		 * clocks?
> +		 * xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREEZE, NULL);
> +		 * xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREE, NULL);
> +		 */
> +
> +		xrt_subdev_put_leaf(pdev, axigate_leaf);
> +	}
> +	if (ret) {
> +		xrt_err(pdev, "bitstream download failed, ret %d", ret);
> +		goto failed;
> +	}
> +	ret = xrt_subdev_create_partition(pdev, dtb);
> +	if (ret < 0) {
> +		xrt_err(pdev, "failed creating partition, ret %d", ret);
> +		goto failed;
> +	}
> +
> +	ret = xrt_subdev_wait_for_partition_bringup(pdev);
> +	if (ret)
> +		xrt_err(pdev, "partiton bringup failed, ret %d", ret);
> +
> +	/*
> +	 * TODO: needs to check individual subdevs to see if there
> +	 * is any error, such as clock setting, memory bank calibration.
> +	 */
> +
> +failed:
> +	vfree(dtb);
> +	return ret;
> +}
> diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main.c b/drivers/fpga/alveo/mgmt/xmgmt-main.c
> new file mode 100644
> index 000000000000..23e68e3a4ae1
> --- /dev/null
> +++ b/drivers/fpga/alveo/mgmt/xmgmt-main.c
> @@ -0,0 +1,843 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA MGMT PF entry point driver
> + *
> + * Copyright (C) 2020 Xilinx, Inc.
> + *
> + * Authors:
> + *	Sonal Santan <sonals@xilinx.com>
> + */
> +
> +#include <linux/firmware.h>
> +#include <linux/uaccess.h>
> +#include "xrt-xclbin.h"
> +#include "xrt-metadata.h"
> +#include "xrt-flash.h"
> +#include "xrt-subdev.h"
> +#include <linux/xrt/flash_xrt_data.h>
> +#include <linux/xrt/xmgmt-ioctl.h>
> +#include "xrt-gpio.h"
> +#include "xmgmt-main.h"
> +#include "xmgmt-fmgr.h"
> +#include "xrt-icap.h"
> +#include "xrt-axigate.h"
> +#include "xmgmt-main-impl.h"
> +
> +#define	XMGMT_MAIN "xmgmt_main"
> +
> +struct xmgmt_main {
> +	struct platform_device *pdev;
> +	void *evt_hdl;
> +	char *firmware_blp;
> +	char *firmware_plp;
> +	char *firmware_ulp;
> +	bool flash_ready;
> +	bool gpio_ready;
> +	struct fpga_manager *fmgr;
> +	void *mailbox_hdl;
> +	struct mutex busy_mutex;
> +
> +	uuid_t *blp_intf_uuids;
> +	u32 blp_intf_uuid_num;
> +};
> +
> +char *xmgmt_get_vbnv(struct platform_device *pdev)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	const char *vbnv;
> +	char *ret;
> +	int i;
> +
> +	if (xmm->firmware_plp) {
> +		vbnv = ((struct axlf *)xmm->firmware_plp)->
> +			m_header.m_platformVBNV;
> +	} else if (xmm->firmware_blp) {
> +		vbnv = ((struct axlf *)xmm->firmware_blp)->
> +			m_header.m_platformVBNV;
> +	} else {
> +		return NULL;
> +	}
> +
> +	ret = kstrdup(vbnv, GFP_KERNEL);
> +	if (!ret)
> +		return NULL;
> +	for (i = 0; i < strlen(ret); i++) {
> +		if (ret[i] == ':' || ret[i] == '.')
> +			ret[i] = '_';
> +	}
> +	return ret;
> +}
> +
> +static bool xmgmt_main_leaf_match(enum xrt_subdev_id id,
> +	struct platform_device *pdev, void *arg)
> +{
> +	if (id == XRT_SUBDEV_GPIO)
> +		return xrt_subdev_has_epname(pdev, arg);
> +	else if (id == XRT_SUBDEV_QSPI)
> +		return true;
> +
> +	return false;
> +}
> +
> +static int get_dev_uuid(struct platform_device *pdev, char *uuidstr, size_t len)
> +{
> +	char uuid[16];
> +	struct platform_device *gpio_leaf;
> +	struct xrt_gpio_ioctl_rw gpio_arg = { 0 };
> +	int err, i, count;
> +
> +	gpio_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_BLP_ROM);
> +	if (!gpio_leaf) {
> +		xrt_err(pdev, "can not get %s", NODE_BLP_ROM);
> +		return -EINVAL;
> +	}
> +
> +	gpio_arg.xgir_id = XRT_GPIO_ROM_UUID;
> +	gpio_arg.xgir_buf = uuid;
> +	gpio_arg.xgir_len = sizeof(uuid);
> +	gpio_arg.xgir_offset = 0;
> +	err = xrt_subdev_ioctl(gpio_leaf, XRT_GPIO_READ, &gpio_arg);
> +	xrt_subdev_put_leaf(pdev, gpio_leaf);
> +	if (err) {
> +		xrt_err(pdev, "can not get uuid: %d", err);
> +		return err;
> +	}
> +
> +	for (count = 0, i = sizeof(uuid) - sizeof(u32);
> +		i >= 0 && len > count; i -= sizeof(u32)) {
> +		count += snprintf(uuidstr + count, len - count,
> +			"%08x", *(u32 *)&uuid[i]);
> +	}
> +	return 0;
> +}
> +
> +int xmgmt_hot_reset(struct platform_device *pdev)
> +{
> +	int ret = xrt_subdev_broadcast_event(pdev, XRT_EVENT_PRE_HOT_RESET);
> +
> +	if (ret) {
> +		xrt_err(pdev, "offline failed, hot reset is canceled");
> +		return ret;
> +	}
> +
> +	(void) xrt_subdev_hot_reset(pdev);
> +	xrt_subdev_broadcast_event(pdev, XRT_EVENT_POST_HOT_RESET);
> +	return 0;
> +}
> +
> +static ssize_t reset_store(struct device *dev,
> +	struct device_attribute *da, const char *buf, size_t count)
> +{
> +	struct platform_device *pdev = to_platform_device(dev);
> +
> +	(void) xmgmt_hot_reset(pdev);
> +	return count;
> +}
> +static DEVICE_ATTR_WO(reset);
> +
> +static ssize_t VBNV_show(struct device *dev,
> +	struct device_attribute *da, char *buf)
> +{
> +	ssize_t ret;
> +	char *vbnv;
> +	struct platform_device *pdev = to_platform_device(dev);
> +
> +	vbnv = xmgmt_get_vbnv(pdev);
> +	if (!vbnv)
> +		return -ENOENT;
> +	ret = sprintf(buf, "%s\n", vbnv);
> +	kfree(vbnv);
> +	return ret;
> +}
> +static DEVICE_ATTR_RO(VBNV);
> +
> +static ssize_t logic_uuids_show(struct device *dev,
> +	struct device_attribute *da, char *buf)
> +{
> +	ssize_t ret;
> +	char uuid[80];
> +	struct platform_device *pdev = to_platform_device(dev);
> +
> +	/*
> +	 * Getting UUID pointed to by VSEC,
> +	 * should be the same as logic UUID of BLP.
> +	 * TODO: add PLP logic UUID
> +	 */
> +	ret = get_dev_uuid(pdev, uuid, sizeof(uuid));
> +	if (ret)
> +		return ret;
> +	ret = sprintf(buf, "%s\n", uuid);
> +	return ret;
> +}
> +static DEVICE_ATTR_RO(logic_uuids);
> +
> +static inline void uuid2str(const uuid_t *uuid, char *uuidstr, size_t len)
> +{
> +	int i, p;
> +	u8 *u = (u8 *)uuid;
> +
> +	BUG_ON(sizeof(uuid_t) * 2 + 1 > len);
> +	for (p = 0, i = sizeof(uuid_t) - 1; i >= 0; p++, i--)
> +		(void) snprintf(&uuidstr[p*2], 3, "%02x", u[i]);
> +}
> +
> +static ssize_t interface_uuids_show(struct device *dev,
> +	struct device_attribute *da, char *buf)
> +{
> +	ssize_t ret = 0;
> +	struct platform_device *pdev = to_platform_device(dev);
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	u32 i;
> +
> +	/*
> +	 * TODO: add PLP interface UUID
> +	 */
> +	for (i = 0; i < xmm->blp_intf_uuid_num; i++) {
> +		char uuidstr[80];
> +
> +		uuid2str(&xmm->blp_intf_uuids[i], uuidstr, sizeof(uuidstr));
> +		ret += sprintf(buf + ret, "%s\n", uuidstr);
> +	}
> +	return ret;
> +}
> +static DEVICE_ATTR_RO(interface_uuids);
> +
> +static struct attribute *xmgmt_main_attrs[] = {
> +	&dev_attr_reset.attr,
> +	&dev_attr_VBNV.attr,
> +	&dev_attr_logic_uuids.attr,
> +	&dev_attr_interface_uuids.attr,
> +	NULL,
> +};
> +
> +static ssize_t ulp_image_write(struct file *filp, struct kobject *kobj,
> +	struct bin_attribute *attr, char *buffer, loff_t off, size_t count)
> +{
> +	struct xmgmt_main *xmm =
> +		dev_get_drvdata(container_of(kobj, struct device, kobj));
> +	struct axlf *xclbin;
> +	ulong len;
> +
> +	if (off == 0) {
> +		if (count < sizeof(*xclbin)) {
> +			xrt_err(xmm->pdev, "count is too small %zu", count);
> +			return -EINVAL;
> +		}
> +
> +		if (xmm->firmware_ulp) {
> +			vfree(xmm->firmware_ulp);
> +			xmm->firmware_ulp = NULL;
> +		}
> +		xclbin = (struct axlf *)buffer;
> +		xmm->firmware_ulp = vmalloc(xclbin->m_header.m_length);
> +		if (!xmm->firmware_ulp)
> +			return -ENOMEM;
> +	} else {
> +		xclbin = (struct axlf *)xmm->firmware_ulp;
> +	}
> +
> +	len = xclbin->m_header.m_length;
> +	if (off + count >= len && off < len) {
> +		memcpy(xmm->firmware_ulp + off, buffer, len - off);
> +		xmgmt_ulp_download(xmm->pdev, xmm->firmware_ulp);
> +	} else if (off + count < len) {
> +		memcpy(xmm->firmware_ulp + off, buffer, count);
> +	}
> +
> +	return count;
> +}
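Two notes on ulp_image_write(): unlike bitstream_axlf_ioctl() further below,
this path never bounds m_header.m_length against MAX_XCLBIN_SIZE before the
vmalloc(). And the streaming protocol deserves a comment; a sketch of what I
had in mind:

	/*
	 * Streaming write protocol: the first chunk must carry at least the
	 * full struct axlf header. The total image size is taken from
	 * m_header.m_length and the download is triggered by whichever
	 * chunk completes the image.
	 */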
> +
> +static struct bin_attribute ulp_image_attr = {
> +	.attr = {
> +		.name = "ulp_image",
> +		.mode = 0200
> +	},
> +	.write = ulp_image_write,
> +	.size = 0
> +};
> +
> +static struct bin_attribute *xmgmt_main_bin_attrs[] = {
> +	&ulp_image_attr,
> +	NULL,
> +};
> +
> +static const struct attribute_group xmgmt_main_attrgroup = {
> +	.attrs = xmgmt_main_attrs,
> +	.bin_attrs = xmgmt_main_bin_attrs,
> +};
> +
> +static int load_firmware_from_flash(struct platform_device *pdev,
> +	char **fw_buf, size_t *len)
> +{
> +	struct platform_device *flash_leaf = NULL;
> +	struct flash_data_header header = { 0 };
> +	const size_t magiclen = sizeof(header.fdh_id_begin.fdi_magic);
> +	size_t flash_size = 0;
> +	int ret = 0;
> +	char *buf = NULL;
> +	struct flash_data_ident id = { 0 };
> +	struct xrt_flash_ioctl_read frd = { 0 };
> +
> +	xrt_info(pdev, "try loading fw from flash");
> +
> +	flash_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_QSPI,
> +		PLATFORM_DEVID_NONE);
> +	if (flash_leaf == NULL) {
> +		xrt_err(pdev, "failed to hold flash leaf");
> +		return -ENODEV;
> +	}
> +
> +	(void) xrt_subdev_ioctl(flash_leaf, XRT_FLASH_GET_SIZE, &flash_size);
> +	if (flash_size == 0) {
> +		xrt_err(pdev, "failed to get flash size");
> +		ret = -EINVAL;
> +		goto done;
> +	}
> +
> +	frd.xfir_buf = (char *)&header;
> +	frd.xfir_size = sizeof(header);
> +	frd.xfir_offset = flash_size - sizeof(header);
> +	ret = xrt_subdev_ioctl(flash_leaf, XRT_FLASH_READ, &frd);
> +	if (ret) {
> +		xrt_err(pdev, "failed to read header from flash: %d", ret);
> +		goto done;
> +	}
> +
> +	/* Pick the end ident since the header is aligned at the end of flash. */
> +	id = header.fdh_id_end;
> +	if (strncmp(id.fdi_magic, XRT_DATA_MAGIC, magiclen)) {
> +		char tmp[sizeof(id.fdi_magic) + 1] = { 0 };
> +
> +		memcpy(tmp, id.fdi_magic, magiclen);
> +		xrt_info(pdev, "ignore meta data, bad magic: %s", tmp);
> +		ret = -ENOENT;
> +		goto done;
> +	}
> +	if (id.fdi_version != 0) {
> +		xrt_info(pdev, "flash meta data version is not supported: %d",
> +			id.fdi_version);
> +		ret = -EOPNOTSUPP;
> +		goto done;
> +	}
> +
> +	buf = vmalloc(header.fdh_data_len);
> +	if (buf == NULL) {
> +		ret = -ENOMEM;
> +		goto done;
> +	}
> +
> +	frd.xfir_buf = buf;
> +	frd.xfir_size = header.fdh_data_len;
> +	frd.xfir_offset = header.fdh_data_offset;
> +	ret = xrt_subdev_ioctl(flash_leaf, XRT_FLASH_READ, &frd);
> +	if (ret) {
> +		xrt_err(pdev, "failed to read meta data from flash: %d", ret);
> +		goto done;
> +	} else if (flash_xrt_data_get_parity32(buf, header.fdh_data_len) ^
> +		header.fdh_data_parity) {
> +		xrt_err(pdev, "meta data is corrupted");
> +		ret = -EINVAL;
> +		goto done;
> +	}
> +
> +	xrt_info(pdev, "found meta data of %d bytes @0x%x",
> +		header.fdh_data_len, header.fdh_data_offset);
> +	*fw_buf = buf;
> +	*len = header.fdh_data_len;
> +
> +done:
> +	(void) xrt_subdev_put_leaf(pdev, flash_leaf);
> +	return ret;
> +}
> +
> +static int load_firmware_from_disk(struct platform_device *pdev, char **fw_buf,
> +	size_t *len)
> +{
> +	char uuid[80];
> +	int err = 0;
> +	char fw_name[256];
> +	const struct firmware *fw;
> +
> +	err = get_dev_uuid(pdev, uuid, sizeof(uuid));
> +	if (err)
> +		return err;
> +
> +	(void) snprintf(fw_name,
> +		sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
> +	xrt_info(pdev, "try loading fw: %s", fw_name);
> +
> +	err = request_firmware(&fw, fw_name, DEV(pdev));
> +	if (err)
> +		return err;
> +
> +	*fw_buf = vmalloc(fw->size);
> +	*len = fw->size;
> +	if (*fw_buf != NULL)
> +		memcpy(*fw_buf, fw->data, fw->size);
> +	else
> +		err = -ENOMEM;
> +
> +	release_firmware(fw);
> +	return err;
> +}
> +
> +static const char *xmgmt_get_axlf_firmware(struct xmgmt_main *xmm,
> +	enum provider_kind kind)
> +{
> +	switch (kind) {
> +	case XMGMT_BLP:
> +		return xmm->firmware_blp;
> +	case XMGMT_PLP:
> +		return xmm->firmware_plp;
> +	case XMGMT_ULP:
> +		return xmm->firmware_ulp;
> +	default:
> +		xrt_err(xmm->pdev, "unknown axlf kind: %d", kind);
> +		return NULL;
> +	}
> +}
> +
> +char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	char *dtb = NULL;
> +	const char *provider = xmgmt_get_axlf_firmware(xmm, kind);
> +	int rc;
> +
> +	if (provider == NULL)
> +		return dtb;
> +
> +	rc = xrt_xclbin_get_metadata(DEV(pdev), provider, &dtb);
> +	if (rc)
> +		xrt_err(pdev, "failed to find dtb: %d", rc);
> +	return dtb;
> +}
> +
> +static const char *get_uuid_from_firmware(struct platform_device *pdev,
> +	const char *axlf)
> +{
> +	const void *uuid = NULL;
> +	const void *uuiddup = NULL;
> +	void *dtb = NULL;
> +	int rc;
> +
> +	rc = xrt_xclbin_get_section(axlf, PARTITION_METADATA, &dtb, NULL);
> +	if (rc)
> +		return NULL;
> +
> +	rc = xrt_md_get_prop(DEV(pdev), dtb, NULL, NULL,
> +		PROP_LOGIC_UUID, &uuid, NULL);
> +	if (!rc)
> +		uuiddup = kstrdup(uuid, GFP_KERNEL);
> +	vfree(dtb);
> +	return uuiddup;
> +}
> +
> +static bool is_valid_firmware(struct platform_device *pdev,
> +	char *fw_buf, size_t fw_len)
> +{
> +	struct axlf *axlf = (struct axlf *)fw_buf;
> +	size_t axlflen = axlf->m_header.m_length;
> +	const char *fw_uuid;
> +	char dev_uuid[80];
> +	int err;
> +
> +	err = get_dev_uuid(pdev, dev_uuid, sizeof(dev_uuid));
> +	if (err)
> +		return false;
> +
> +	if (memcmp(fw_buf, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)) != 0) {
> +		xrt_err(pdev, "unknown fw format");
> +		return false;
> +	}
> +
> +	if (axlflen > fw_len) {
> +		xrt_err(pdev, "truncated fw, length: %ld, expect: %ld",
> +			fw_len, axlflen);
> +		return false;
> +	}
> +
> +	fw_uuid = get_uuid_from_firmware(pdev, fw_buf);
> +	if (fw_uuid == NULL || strcmp(fw_uuid, dev_uuid) != 0) {
> +		xrt_err(pdev, "bad fw UUID: %s, expect: %s",
> +			fw_uuid ? fw_uuid : "<none>", dev_uuid);
> +		kfree(fw_uuid);
> +		return false;
> +	}
> +
> +	kfree(fw_uuid);
> +	return true;
> +}
> +
> +int xmgmt_get_provider_uuid(struct platform_device *pdev,
> +	enum provider_kind kind, uuid_t *uuid)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	const char *fwbuf;
> +	const char *fw_uuid;
> +	int rc = -ENOENT;
> +
> +	mutex_lock(&xmm->busy_mutex);
> +
> +	fwbuf = xmgmt_get_axlf_firmware(xmm, kind);
> +	if (fwbuf == NULL)
> +		goto done;
> +
> +	fw_uuid = get_uuid_from_firmware(pdev, fwbuf);
> +	if (fw_uuid == NULL)
> +		goto done;
> +
> +	rc = xrt_md_uuid_strtoid(DEV(pdev), fw_uuid, uuid);
> +	kfree(fw_uuid);
> +
> +done:
> +	mutex_unlock(&xmm->busy_mutex);
> +	return rc;
> +}
> +
> +static int xmgmt_create_blp(struct xmgmt_main *xmm)
> +{
> +	struct platform_device *pdev = xmm->pdev;
> +	int rc = 0;
> +	char *dtb = NULL;
> +
> +	dtb = xmgmt_get_dtb(pdev, XMGMT_BLP);
> +	if (dtb) {
> +		rc = xrt_subdev_create_partition(pdev, dtb);
> +		if (rc < 0)
> +			xrt_err(pdev, "failed to create BLP: %d", rc);
> +		else
> +			rc = 0;
> +
> +		BUG_ON(xmm->blp_intf_uuids);
> +		xrt_md_get_intf_uuids(&pdev->dev, dtb,
> +			&xmm->blp_intf_uuid_num, NULL);
> +		if (xmm->blp_intf_uuid_num > 0) {
> +			xmm->blp_intf_uuids = vzalloc(sizeof(uuid_t) *
> +				xmm->blp_intf_uuid_num);
> +			xrt_md_get_intf_uuids(&pdev->dev, dtb,
> +				&xmm->blp_intf_uuid_num, xmm->blp_intf_uuids);
> +		}
> +	}
> +
> +	vfree(dtb);
> +	return rc;
> +}
> +
> +static int xmgmt_main_event_cb(struct platform_device *pdev,
> +	enum xrt_events evt, void *arg)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
> +	enum xrt_subdev_id id;
> +	int instance;
> +	size_t fwlen;
> +
> +	switch (evt) {
> +	case XRT_EVENT_POST_CREATION: {
> +		id = esd->xevt_subdev_id;
> +		instance = esd->xevt_subdev_instance;
> +		xrt_info(pdev, "processing event %d for (%d, %d)",
> +			evt, id, instance);
> +
> +		if (id == XRT_SUBDEV_GPIO)
> +			xmm->gpio_ready = true;
> +		else if (id == XRT_SUBDEV_QSPI)
> +			xmm->flash_ready = true;
> +		else
> +			BUG_ON(1);
> +
> +		if (xmm->gpio_ready && xmm->flash_ready) {
> +			int rc;
> +
> +			rc = load_firmware_from_disk(pdev, &xmm->firmware_blp,
> +				&fwlen);
> +			if (rc != 0) {
> +				rc = load_firmware_from_flash(pdev,
> +					&xmm->firmware_blp, &fwlen);
> +			}
> +			if (rc == 0 && is_valid_firmware(pdev,
> +			    xmm->firmware_blp, fwlen))
> +				(void) xmgmt_create_blp(xmm);
> +			else
> +				xrt_err(pdev,
> +					"failed to find firmware, giving up");
> +			xmm->evt_hdl = NULL;
> +		}
> +		break;
> +	}
> +	case XRT_EVENT_POST_ATTACH:
> +		xmgmt_peer_notify_state(xmm->mailbox_hdl, true);
> +		break;
> +	case XRT_EVENT_PRE_DETACH:
> +		xmgmt_peer_notify_state(xmm->mailbox_hdl, false);
> +		break;
> +	default:
> +		xrt_info(pdev, "ignored event %d", evt);
> +		break;
> +	}
> +
> +	return XRT_EVENT_CB_CONTINUE;
> +}
> +
> +static int xmgmt_main_probe(struct platform_device *pdev)
> +{
> +	struct xmgmt_main *xmm;
> +
> +	xrt_info(pdev, "probing...");
> +
> +	xmm = devm_kzalloc(DEV(pdev), sizeof(*xmm), GFP_KERNEL);
> +	if (!xmm)
> +		return -ENOMEM;
> +
> +	xmm->pdev = pdev;
> +	platform_set_drvdata(pdev, xmm);
> +	xmm->fmgr = xmgmt_fmgr_probe(pdev);
> +	xmm->mailbox_hdl = xmgmt_mailbox_probe(pdev);
> +	mutex_init(&xmm->busy_mutex);
> +
> +	xmm->evt_hdl = xrt_subdev_add_event_cb(pdev,
> +		xmgmt_main_leaf_match, NODE_BLP_ROM, xmgmt_main_event_cb);
> +
> +	/* Ready to handle req thru sysfs nodes. */
> +	if (sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup))
> +		xrt_err(pdev, "failed to create sysfs group");
> +	return 0;
> +}
> +
> +static int xmgmt_main_remove(struct platform_device *pdev)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +
> +	/* By now, the partition driver should have blocked all inter-leaf calls. */
> +
> +	xrt_info(pdev, "leaving...");
> +
> +	if (xmm->evt_hdl)
> +		(void) xrt_subdev_remove_event_cb(pdev, xmm->evt_hdl);
> +	vfree(xmm->blp_intf_uuids);
> +	vfree(xmm->firmware_blp);
> +	vfree(xmm->firmware_plp);
> +	vfree(xmm->firmware_ulp);
> +	(void) xmgmt_fmgr_remove(xmm->fmgr);
> +	xmgmt_mailbox_remove(xmm->mailbox_hdl);
> +	(void) sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup);
> +	return 0;
> +}
> +
> +static int
> +xmgmt_main_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	int ret = 0;
> +
> +	xrt_info(pdev, "handling IOCTL cmd: %d", cmd);
> +
> +	switch (cmd) {
> +	case XRT_MGMT_MAIN_GET_AXLF_SECTION: {
> +		struct xrt_mgmt_main_ioctl_get_axlf_section *get =
> +			(struct xrt_mgmt_main_ioctl_get_axlf_section *)arg;
> +		const char *firmware =
> +			xmgmt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
> +
> +		if (firmware == NULL) {
> +			ret = -ENOENT;
> +		} else {
> +			ret = xrt_xclbin_get_section(firmware,
> +				get->xmmigas_section_kind,
> +				&get->xmmigas_section,
> +				&get->xmmigas_section_size);
> +		}
> +		break;
> +	}
> +	case XRT_MGMT_MAIN_GET_VBNV: {
> +		char **vbnv_p = (char **)arg;
> +
> +		*vbnv_p = xmgmt_get_vbnv(pdev);
> +		break;
> +	}
> +	default:
> +		xrt_err(pdev, "unknown cmd: %d", cmd);
> +		ret = -EINVAL;
> +		break;
> +	}
> +	return ret;
> +}
> +
> +static int xmgmt_main_open(struct inode *inode, struct file *file)
> +{
> +	struct platform_device *pdev = xrt_devnode_open(inode);
> +
> +	/* The device may already be gone by the time we get here. */
> +	if (!pdev)
> +		return -ENODEV;
> +
> +	xrt_info(pdev, "opened");
> +	file->private_data = platform_get_drvdata(pdev);
> +	return 0;
> +}
> +
> +static int xmgmt_main_close(struct inode *inode, struct file *file)
> +{
> +	struct xmgmt_main *xmm = file->private_data;
> +
> +	xrt_devnode_close(inode);
> +
> +	xrt_info(xmm->pdev, "closed");
> +	return 0;
> +}
> +
> +static int xmgmt_bitstream_axlf_fpga_mgr(struct xmgmt_main *xmm,
> +	void *axlf, size_t size)
> +{
> +	int ret;
> +	struct fpga_image_info info = { 0 };
> +
> +	BUG_ON(!mutex_is_locked(&xmm->busy_mutex));
> +
> +	/*
> +	 * Should any error happen during download, we can no longer
> +	 * trust the cached xclbin.
> +	 */
> +	vfree(xmm->firmware_ulp);
> +	xmm->firmware_ulp = NULL;
> +
> +	info.buf = (char *)axlf;
> +	info.count = size;
> +	ret = fpga_mgr_load(xmm->fmgr, &info);
> +	if (ret == 0)
> +		xmm->firmware_ulp = axlf;
> +
> +	return ret;
> +}
> +
> +int bitstream_axlf_mailbox(struct platform_device *pdev, const void *axlf)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	void *copy_buffer = NULL;
> +	size_t copy_buffer_size = 0;
> +	const struct axlf *xclbin_obj = axlf;
> +	int ret = 0;
> +
> +	if (memcmp(xclbin_obj->m_magic, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)))
> +		return -EINVAL;
> +
> +	copy_buffer_size = xclbin_obj->m_header.m_length;
> +	if (copy_buffer_size > MAX_XCLBIN_SIZE)
> +		return -EINVAL;
> +	copy_buffer = vmalloc(copy_buffer_size);
> +	if (copy_buffer == NULL)
> +		return -ENOMEM;
> +	(void) memcpy(copy_buffer, axlf, copy_buffer_size);
> +
> +	mutex_lock(&xmm->busy_mutex);
> +	ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
> +	mutex_unlock(&xmm->busy_mutex);
> +	if (ret)
> +		vfree(copy_buffer);
> +	return ret;
> +}
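The buffer ownership around xmgmt_bitstream_axlf_fpga_mgr() is subtle: on
success the copy is cached as firmware_ulp and freed later, on failure the
caller must vfree() it. A comment on the helper would help, something like:

	/*
	 * On success, ownership of the axlf buffer passes to xmm: it is
	 * cached as firmware_ulp and freed on the next download or at
	 * remove time. On failure the caller must free it.
	 */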
> +
> +static int bitstream_axlf_ioctl(struct xmgmt_main *xmm, const void __user *arg)
> +{
> +	void *copy_buffer = NULL;
> +	size_t copy_buffer_size = 0;
> +	struct xmgmt_ioc_bitstream_axlf ioc_obj = { 0 };
> +	struct axlf xclbin_obj = { {0} };
> +	int ret = 0;
> +
> +	if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
> +		return -EFAULT;
> +	if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin,
> +		sizeof(xclbin_obj)))
> +		return -EFAULT;
> +	if (memcmp(xclbin_obj.m_magic, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)))
> +		return -EINVAL;
> +
> +	copy_buffer_size = xclbin_obj.m_header.m_length;
> +	if (copy_buffer_size > MAX_XCLBIN_SIZE)
> +		return -EINVAL;
> +	copy_buffer = vmalloc(copy_buffer_size);
> +	if (copy_buffer == NULL)
> +		return -ENOMEM;
> +
> +	if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
> +		vfree(copy_buffer);
> +		return -EFAULT;
> +	}
> +
> +	ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
> +	if (ret)
> +		vfree(copy_buffer);
> +
> +	return ret;
> +}
> +
> +static long xmgmt_main_ioctl(struct file *filp, unsigned int cmd,
> +	unsigned long arg)
> +{
> +	long result = 0;
> +	struct xmgmt_main *xmm = filp->private_data;
> +
> +	BUG_ON(!xmm);
> +
> +	if (_IOC_TYPE(cmd) != XMGMT_IOC_MAGIC)
> +		return -ENOTTY;
> +
> +	mutex_lock(&xmm->busy_mutex);
> +
> +	xrt_info(xmm->pdev, "ioctl cmd %d, arg %ld", cmd, arg);
> +	switch (cmd) {
> +	case XMGMT_IOCICAPDOWNLOAD_AXLF:
> +		result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
> +		break;
> +	default:
> +		result = -ENOTTY;
> +		break;
> +	}
> +
> +	mutex_unlock(&xmm->busy_mutex);
> +	return result;
> +}
> +
> +void *xmgmt_pdev2mailbox(struct platform_device *pdev)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +
> +	return xmm->mailbox_hdl;
> +}
> +
> +struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[] = {
> +	{
> +		.xse_names = (struct xrt_subdev_ep_names []){
> +			{ .ep_name = NODE_MGMT_MAIN },
> +			{ NULL },
> +		},
> +		.xse_min_ep = 1,
> +	},
> +	{ 0 },
> +};
> +
> +struct xrt_subdev_drvdata xmgmt_main_data = {
> +	.xsd_dev_ops = {
> +		.xsd_ioctl = xmgmt_main_leaf_ioctl,
> +	},
> +	.xsd_file_ops = {
> +		.xsf_ops = {
> +			.owner = THIS_MODULE,
> +			.open = xmgmt_main_open,
> +			.release = xmgmt_main_close,
> +			.unlocked_ioctl = xmgmt_main_ioctl,
> +		},
> +		.xsf_dev_name = "xmgmt",
> +	},
> +};
> +
> +static const struct platform_device_id xmgmt_main_id_table[] = {
> +	{ XMGMT_MAIN, (kernel_ulong_t)&xmgmt_main_data },
> +	{ },
> +};
> +
> +struct platform_driver xmgmt_main_driver = {
> +	.driver	= {
> +		.name    = XMGMT_MAIN,
> +	},
> +	.probe   = xmgmt_main_probe,
> +	.remove  = xmgmt_main_remove,
> +	.id_table = xmgmt_main_id_table,
> +};
> diff --git a/drivers/fpga/alveo/mgmt/xmgmt-root.c b/drivers/fpga/alveo/mgmt/xmgmt-root.c
> new file mode 100644
> index 000000000000..005fd5e42651
> --- /dev/null
> +++ b/drivers/fpga/alveo/mgmt/xmgmt-root.c
> @@ -0,0 +1,375 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo Management Function Driver
> + *
> + * Copyright (C) 2020 Xilinx, Inc.
> + *
> + * Authors:
> + *	Cheng Zhen <maxz@xilinx.com>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/aer.h>
> +#include <linux/vmalloc.h>
> +#include <linux/delay.h>
> +
> +#include "xrt-root.h"
> +#include "xrt-subdev.h"
> +#include "xmgmt-main-impl.h"
> +#include "xrt-metadata.h"
> +
> +#define	XMGMT_MODULE_NAME	"xmgmt"
> +#define	XMGMT_DRIVER_VERSION	"4.0.0"
> +
> +#define	XMGMT_PDEV(xm)		((xm)->pdev)
> +#define	XMGMT_DEV(xm)		(&(XMGMT_PDEV(xm)->dev))
> +#define xmgmt_err(xm, fmt, args...)	\
> +	dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgmt_warn(xm, fmt, args...)	\
> +	dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgmt_info(xm, fmt, args...)	\
> +	dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgmt_dbg(xm, fmt, args...)	\
> +	dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define	XMGMT_DEV_ID(pdev)			\
> +	((pci_domain_nr(pdev->bus) << 16) |	\
> +	PCI_DEVID(pdev->bus->number, 0))
> +
> +static struct class *xmgmt_class;
> +static const struct pci_device_id xmgmt_pci_ids[] = {
> +	{ PCI_DEVICE(0x10EE, 0xd020), },
> +	{ PCI_DEVICE(0x10EE, 0x5020), },
> +	{ 0, }
> +};
> +
> +struct xmgmt {
> +	struct pci_dev *pdev;
> +	void *root;
> +
> +	/* save config for pci reset */
> +	u32 saved_config[8][16];
> +	bool ready;
> +};
> +
> +static int xmgmt_config_pci(struct xmgmt *xm)
> +{
> +	struct pci_dev *pdev = XMGMT_PDEV(xm);
> +	int rc;
> +
> +	rc = pcim_enable_device(pdev);
> +	if (rc < 0) {
> +		xmgmt_err(xm, "failed to enable device: %d", rc);
> +		return rc;
> +	}
> +
> +	rc = pci_enable_pcie_error_reporting(pdev);
> +	if (rc)
> +		xmgmt_warn(xm, "failed to enable AER: %d", rc);
> +
> +	pci_set_master(pdev);
> +
> +	rc = pcie_get_readrq(pdev);
> +	if (rc < 0) {
> +		xmgmt_err(xm, "failed to read mrrs %d", rc);
> +		return rc;
> +	}
> +	if (rc > 512) {
> +		rc = pcie_set_readrq(pdev, 512);
> +		if (rc) {
> +			xmgmt_err(xm, "failed to force mrrs %d", rc);
> +			return rc;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static void xmgmt_save_config_space(struct pci_dev *pdev, u32 *saved_config)
> +{
> +	int i;
> +
> +	for (i = 0; i < 16; i++)
> +		pci_read_config_dword(pdev, i * 4, &saved_config[i]);
> +}
> +
> +static int xmgmt_match_slot_and_save(struct device *dev, void *data)
> +{
> +	struct xmgmt *xm = data;
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +
> +	if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
> +		pci_cfg_access_lock(pdev);
> +		pci_save_state(pdev);
> +		xmgmt_save_config_space(pdev,
> +			xm->saved_config[PCI_FUNC(pdev->devfn)]);
> +	}
> +
> +	return 0;
> +}
> +
> +static void xmgmt_pci_save_config_all(struct xmgmt *xm)
> +{
> +	bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
> +}
> +
> +static void xmgmt_restore_config_space(struct pci_dev *pdev, u32 *config_saved)
> +{
> +	int i;
> +	u32 val;
> +
> +	for (i = 0; i < 16; i++) {
> +		pci_read_config_dword(pdev, i * 4, &val);
> +		if (val == config_saved[i])
> +			continue;
> +
> +		pci_write_config_dword(pdev, i * 4, config_saved[i]);
> +		pci_read_config_dword(pdev, i * 4, &val);
> +		if (val != config_saved[i]) {
> +			dev_err(&pdev->dev,
> +				 "restore config at %d failed", i * 4);
> +		}
> +	}
> +}
> +
> +static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
> +{
> +	struct xmgmt *xm = data;
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +
> +	if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
> +		xmgmt_restore_config_space(pdev,
> +			xm->saved_config[PCI_FUNC(pdev->devfn)]);
> +
> +		pci_restore_state(pdev);
> +		pci_cfg_access_unlock(pdev);
> +	}
> +
> +	return 0;
> +}
> +
> +static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
> +{
> +	bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
> +}
> +
> +void xroot_hot_reset(struct pci_dev *pdev)
> +{
> +	struct xmgmt *xm = pci_get_drvdata(pdev);
> +	struct pci_bus *bus;
> +	u8 pci_bctl;
> +	u16 pci_cmd, devctl;
> +	int i;
> +
> +	xmgmt_info(xm, "hot reset start");
> +
> +	xmgmt_pci_save_config_all(xm);
> +
> +	pci_disable_device(pdev);
> +
> +	bus = pdev->bus;
> +
> +	/*
> +	 * When flipping the SBR bit, the device can fall off the bus. This
> +	 * is usually no problem at all as long as drivers are working
> +	 * properly after SBR. However, some systems complain bitterly when
> +	 * the device falls off the bus.
> +	 * The quick fix is to temporarily disable SERR reporting on the
> +	 * switch port during SBR.
> +	 */
> +
> +	pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
> +	pci_write_config_word(bus->self, PCI_COMMAND,
> +		(pci_cmd & ~PCI_COMMAND_SERR));
> +	pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
> +	pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL,
> +		(devctl & ~PCI_EXP_DEVCTL_FERE));
> +	pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
> +	pci_bctl |= PCI_BRIDGE_CTL_BUS_RESET;
> +	pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
> +
> +	msleep(100);
> +	pci_bctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
> +	pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
> +	ssleep(1);
> +
> +	pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
> +	pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
> +
> +	pci_enable_device(pdev);
> +
> +	for (i = 0; i < 300; i++) {
> +		pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
> +		if (pci_cmd != 0xffff)
> +			break;
> +		msleep(20);
> +	}
> +
> +	xmgmt_info(xm, "waiting for %d ms", i * 20);
> +
> +	xmgmt_pci_restore_config_all(xm);
> +
> +	xmgmt_config_pci(xm);
> +}
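Could much of the open-coded SBR sequence be replaced with
pci_bridge_secondary_bus_reset()? It performs the bridge-control toggle plus
the post-reset wait; the SERR/FERE masking would still need to wrap it, but it
would shrink this function, roughly (untested):

	pci_bridge_secondary_bus_reset(bus->self);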
> +
> +static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
> +{
> +	char *dtb = NULL;
> +	int ret;
> +
> +	ret = xrt_md_create(DEV(xm->pdev), &dtb);
> +	if (ret) {
> +		xmgmt_err(xm, "create metadata failed, ret %d", ret);
> +		goto failed;
> +	}
> +
> +	ret = xroot_add_simple_node(xm->root, dtb, NODE_TEST);
> +	if (ret)
> +		goto failed;
> +
> +	ret = xroot_add_vsec_node(xm->root, dtb);
> +	if (ret == -ENOENT) {
> +		/*
> +		 * We may be dealing with a MFG board.
> +		 * Try vsec-golden which will bring up all hard-coded leaves
> +		 * at hard-coded offsets.
> +		 */
> +		ret = xroot_add_simple_node(xm->root, dtb, NODE_VSEC_GOLDEN);
> +	} else if (ret == 0) {
> +		ret = xroot_add_simple_node(xm->root, dtb, NODE_MGMT_MAIN);
> +	}
> +	if (ret)
> +		goto failed;
> +
> +	*root_dtb = dtb;
> +	return 0;
> +
> +failed:
> +	vfree(dtb);
> +	return ret;
> +}
> +
> +static ssize_t ready_show(struct device *dev,
> +	struct device_attribute *da, char *buf)
> +{
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +	struct xmgmt *xm = pci_get_drvdata(pdev);
> +
> +	return sprintf(buf, "%d\n", xm->ready);
> +}
> +static DEVICE_ATTR_RO(ready);
> +
> +static struct attribute *xmgmt_root_attrs[] = {
> +	&dev_attr_ready.attr,
> +	NULL
> +};
> +
> +static struct attribute_group xmgmt_root_attr_group = {
> +	.attrs = xmgmt_root_attrs,
> +};
> +
> +static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> +	int ret;
> +	struct device *dev = DEV(pdev);
> +	struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
> +	char *dtb = NULL;
> +
> +	if (!xm)
> +		return -ENOMEM;
> +	xm->pdev = pdev;
> +	pci_set_drvdata(pdev, xm);
> +
> +	ret = xmgmt_config_pci(xm);
> +	if (ret)
> +		goto failed;
> +
> +	ret = xroot_probe(pdev, &xm->root);
> +	if (ret)
> +		goto failed;
> +
> +	ret = xmgmt_create_root_metadata(xm, &dtb);
> +	if (ret)
> +		goto failed_metadata;
> +
> +	ret = xroot_create_partition(xm->root, dtb);
> +	vfree(dtb);
> +	if (ret)
> +		xmgmt_err(xm, "failed to create root partition: %d", ret);
> +
> +	if (!xroot_wait_for_bringup(xm->root))
> +		xmgmt_err(xm, "failed to bringup all partitions");
> +	else
> +		xm->ready = true;
> +
> +	ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
> +	if (ret) {
> +		/* Warning instead of failing the probe. */
> +		xrt_warn(pdev, "create xmgmt root attrs failed: %d", ret);
> +	}
> +
> +	xroot_broadcast(xm->root, XRT_EVENT_POST_ATTACH);
> +	xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
> +	return 0;
> +
> +failed_metadata:
> +	(void) xroot_remove(xm->root);
> +failed:
> +	pci_set_drvdata(pdev, NULL);
> +	return ret;
> +}
> +
> +static void xmgmt_remove(struct pci_dev *pdev)
> +{
> +	struct xmgmt *xm = pci_get_drvdata(pdev);
> +
> +	xroot_broadcast(xm->root, XRT_EVENT_PRE_DETACH);
> +	sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
> +	(void) xroot_remove(xm->root);
> +	pci_disable_pcie_error_reporting(xm->pdev);
> +	xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
> +}
> +
> +static struct pci_driver xmgmt_driver = {
> +	.name = XMGMT_MODULE_NAME,
> +	.id_table = xmgmt_pci_ids,
> +	.probe = xmgmt_probe,
> +	.remove = xmgmt_remove,
> +};
> +
> +static int __init xmgmt_init(void)
> +{
> +	int res = xrt_subdev_register_external_driver(XRT_SUBDEV_MGMT_MAIN,
> +		&xmgmt_main_driver, xrt_mgmt_main_endpoints);
> +
> +	if (res)
> +		return res;
> +
> +	xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
> +	if (IS_ERR(xmgmt_class))
> +		return PTR_ERR(xmgmt_class);
> +
> +	res = pci_register_driver(&xmgmt_driver);
> +	if (res) {
> +		class_destroy(xmgmt_class);
> +		return res;
> +	}
> +
> +	return 0;
> +}
> +
> +static void __exit xmgmt_exit(void)
> +{
> +	pci_unregister_driver(&xmgmt_driver);
> +	class_destroy(xmgmt_class);
> +	xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
> +}
> +
> +module_init(xmgmt_init);
> +module_exit(xmgmt_exit);
> +
> +MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
> +MODULE_VERSION(XMGMT_DRIVER_VERSION);
> +MODULE_AUTHOR("XRT Team <runtime@xilinx.com>");
> +MODULE_DESCRIPTION("Xilinx Alveo management function driver");
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.17.1

I have not yet looked at the whole thing, but why does the FPGA Manager
only copy things around?

Any reason your partitions cannot be modelled as FPGA regions/Bridges?
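
Roughly what I have in mind, as a sketch against the in-tree
fpga-region API (the xmgmt_* names below are placeholders, not a
claim about how your code is organized):

  #include <linux/fpga/fpga-mgr.h>
  #include <linux/fpga/fpga-region.h>
  #include <linux/fpga/fpga-bridge.h>

  /* Model the User partition as an fpga_region whose bridges are the
   * isolation gates; programming then goes through
   * fpga_region_program_fpga() rather than a driver-private path.
   */
  static int xmgmt_get_bridges(struct fpga_region *region)
  {
          /* would collect the axigate isolation leaves */
          return fpga_bridge_get_to_list(region->dev.parent, region->info,
                                         &region->bridge_list);
  }

  static int xmgmt_region_setup(struct device *dev, struct fpga_manager *mgr)
  {
          struct fpga_region *region;

          region = devm_fpga_region_create(dev, mgr, xmgmt_get_bridges);
          if (!region)
                  return -ENOMEM;

          return fpga_region_register(region);
  }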

It would be helpful to split this up into smaller chunks, that make it
easier to review. The FPGA Manager driver should be a separate patch,
etc.

- Moritz

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (8 preceding siblings ...)
  2020-11-30 18:08 ` [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Rob Herring
@ 2020-12-02  2:14 ` Xu Yilun
  2020-12-02  5:33   ` Sonal Santan
  2020-12-06 16:31 ` Tom Rix
  10 siblings, 1 reply; 29+ messages in thread
From: Xu Yilun @ 2020-12-02  2:14 UTC (permalink / raw)
  To: Sonal Santan
  Cc: linux-kernel, Sonal Santan, linux-fpga, maxz, lizhih,
	michal.simek, stefanos, devicetree

On Sat, Nov 28, 2020 at 04:00:32PM -0800, Sonal Santan wrote:
> Hello,
> 
> This patch series adds management physical function driver for Xilinx Alveo PCIe
> accelerator cards, https://www.xilinx.com/products/boards-and-kits/alveo.html
> This driver is part of Xilinx Runtime (XRT) open source stack.
> 
> The patch depends on the "PATCH Xilinx Alveo libfdt prep" which was posted
> before.
> 
> ALVEO PLATFORM ARCHITECTURE
> 
> Alveo PCIe FPGA based platforms have a static *shell* partition and a partial
> re-configurable *user* partition. The shell partition is automatically loaded from
> flash when host is booted and PCIe is enumerated by BIOS. Shell cannot be changed
> till the next cold reboot. The shell exposes two PCIe physical functions:
> 
> 1. management physical function
> 2. user physical function
> 
> The patch series includes Documentation/xrt.rst which describes Alveo
> platform, xmgmt driver architecture and deployment model in more detail.
> 
> Users compile their high level design in C/C++/OpenCL or RTL into FPGA image
> using Vitis https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
> tools. The image is packaged as xclbin and contains partial bitstream for the
> user partition and necessary metadata. Users can dynamically swap the image
> running on the user partition in order to switch between different workloads.
> 
> ALVEO DRIVERS
> 
> Alveo Linux kernel driver *xmgmt* binds to management physical function of
> Alveo platform. The modular driver framework is organized into several
> platform drivers which primarily handle the following functionality:
> 
> 1.  Loading firmware container also called xsabin at driver attach time
> 2.  Loading of user compiled xclbin with FPGA Manager integration
> 3.  Clock scaling of image running on user partition
> 4.  In-band sensors: temp, voltage, power, etc.
> 5.  Device reset and rescan
> 6.  Flashing static *shell* partition
> 
> The platform drivers are packaged into *xrt-lib* helper module with well
> defined interfaces, the details of which can be found in Documentation/xrt.rst.
> 
> xmgmt driver is second generation Alveo management driver and evolution of
> the first generation (out of tree) Alveo management driver, xclmgmt. The
> sources of the first generation drivers were posted on LKML last year--
> https://lore.kernel.org/lkml/20190319215401.6562-1-sonal.santan@xilinx.com/
> 
> Changes since the first generation driver include the following: the driver
> has been re-architected as data driven modular driver; the driver has been
> split into xmgmt and xrt-lib; user physical function driver has been removed
> from the patch series.
> 
> Alveo/XRT security and platform architecture is documented on the following 
> GitHub pages:
> https://xilinx.github.io/XRT/master/html/security.html
> https://xilinx.github.io/XRT/master/html/platforms_partitions.html
> 
> User physical function driver is not included in this patch series.
> 
> TESTING AND VALIDATION
> 
> xmgmt driver can be tested with full XRT open source stack which includes
> user space libraries, board utilities and (out of tree) first generation
> user physical function driver xocl. XRT open source runtime stack is
> available at https://github.com/Xilinx/XRT. This patch series has been
> validated on Alveo U50 platform.
> 
> Complete documentation for XRT open source stack can be found here--
> https://xilinx.github.io/XRT/master/html/index.html
> 
> Thanks,
> -Sonal
> 
> Sonal Santan (8):
>   Documentation: fpga: Add a document describing Alveo XRT drivers
>   fpga: xrt: Add UAPI header files
>   fpga: xrt: infrastructure support for xmgmt driver
>   fpga: xrt: core infrastructure for xrt-lib module
>   fpga: xrt: platform drivers for subsystems in shell partition

Seems Patch #5 is missing in this series.

Thanks,
Yilun

>   fpga: xrt: header file for platform and parent drivers
>   fpga: xrt: Alveo management physical function driver
>   fpga: xrt: Kconfig and Makefile updates for XRT drivers
> 
>  Documentation/fpga/index.rst                  |    1 +
>  Documentation/fpga/xrt.rst                    |  588 +++++
>  drivers/fpga/Kconfig                          |    2 +
>  drivers/fpga/Makefile                         |    3 +
>  drivers/fpga/alveo/Kconfig                    |    7 +
>  drivers/fpga/alveo/common/xrt-metadata.c      |  590 +++++
>  drivers/fpga/alveo/common/xrt-root.c          |  744 +++++++
>  drivers/fpga/alveo/common/xrt-root.h          |   24 +
>  drivers/fpga/alveo/common/xrt-xclbin.c        |  387 ++++
>  drivers/fpga/alveo/common/xrt-xclbin.h        |   46 +
>  drivers/fpga/alveo/include/xmgmt-main.h       |   34 +
>  drivers/fpga/alveo/include/xrt-axigate.h      |   31 +
>  drivers/fpga/alveo/include/xrt-calib.h        |   28 +
>  drivers/fpga/alveo/include/xrt-clkfreq.h      |   21 +
>  drivers/fpga/alveo/include/xrt-clock.h        |   29 +
>  drivers/fpga/alveo/include/xrt-cmc.h          |   23 +
>  drivers/fpga/alveo/include/xrt-ddr-srsr.h     |   29 +
>  drivers/fpga/alveo/include/xrt-flash.h        |   28 +
>  drivers/fpga/alveo/include/xrt-gpio.h         |   41 +
>  drivers/fpga/alveo/include/xrt-icap.h         |   27 +
>  drivers/fpga/alveo/include/xrt-mailbox.h      |   44 +
>  drivers/fpga/alveo/include/xrt-metadata.h     |  184 ++
>  drivers/fpga/alveo/include/xrt-parent.h       |  103 +
>  drivers/fpga/alveo/include/xrt-partition.h    |   33 +
>  drivers/fpga/alveo/include/xrt-subdev.h       |  333 +++
>  drivers/fpga/alveo/include/xrt-ucs.h          |   22 +
>  drivers/fpga/alveo/lib/Kconfig                |   11 +
>  drivers/fpga/alveo/lib/Makefile               |   42 +
>  drivers/fpga/alveo/lib/subdevs/xrt-axigate.c  |  298 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-calib.c    |  291 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c  |  214 ++
>  drivers/fpga/alveo/lib/subdevs/xrt-clock.c    |  638 ++++++
>  .../fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c   |  343 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c |  322 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h |  135 ++
>  .../fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c  |  320 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c   |  361 ++++
>  .../fpga/alveo/lib/subdevs/xrt-cmc-sensors.c  |  445 ++++
>  drivers/fpga/alveo/lib/subdevs/xrt-cmc.c      |  239 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-gpio.c     |  198 ++
>  drivers/fpga/alveo/lib/subdevs/xrt-icap.c     |  306 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c  | 1905 +++++++++++++++++
>  .../fpga/alveo/lib/subdevs/xrt-partition.c    |  261 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-qspi.c     | 1347 ++++++++++++
>  drivers/fpga/alveo/lib/subdevs/xrt-srsr.c     |  322 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-test.c     |  274 +++
>  drivers/fpga/alveo/lib/subdevs/xrt-ucs.c      |  238 ++
>  .../fpga/alveo/lib/subdevs/xrt-vsec-golden.c  |  238 ++
>  drivers/fpga/alveo/lib/subdevs/xrt-vsec.c     |  337 +++
>  drivers/fpga/alveo/lib/xrt-cdev.c             |  234 ++
>  drivers/fpga/alveo/lib/xrt-main.c             |  275 +++
>  drivers/fpga/alveo/lib/xrt-main.h             |   46 +
>  drivers/fpga/alveo/lib/xrt-subdev.c           | 1007 +++++++++
>  drivers/fpga/alveo/mgmt/Kconfig               |   11 +
>  drivers/fpga/alveo/mgmt/Makefile              |   28 +
>  drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c      |  194 ++
>  drivers/fpga/alveo/mgmt/xmgmt-fmgr.h          |   29 +
>  drivers/fpga/alveo/mgmt/xmgmt-main-impl.h     |   36 +
>  drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c  |  930 ++++++++
>  drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c      |  190 ++
>  drivers/fpga/alveo/mgmt/xmgmt-main.c          |  843 ++++++++
>  drivers/fpga/alveo/mgmt/xmgmt-root.c          |  375 ++++
>  include/uapi/linux/xrt/flash_xrt_data.h       |   67 +
>  include/uapi/linux/xrt/mailbox_proto.h        |  394 ++++
>  include/uapi/linux/xrt/mailbox_transport.h    |   74 +
>  include/uapi/linux/xrt/xclbin.h               |  418 ++++
>  include/uapi/linux/xrt/xmgmt-ioctl.h          |   72 +
>  67 files changed, 17710 insertions(+)
>  create mode 100644 Documentation/fpga/xrt.rst
>  create mode 100644 drivers/fpga/alveo/Kconfig
>  create mode 100644 drivers/fpga/alveo/common/xrt-metadata.c
>  create mode 100644 drivers/fpga/alveo/common/xrt-root.c
>  create mode 100644 drivers/fpga/alveo/common/xrt-root.h
>  create mode 100644 drivers/fpga/alveo/common/xrt-xclbin.c
>  create mode 100644 drivers/fpga/alveo/common/xrt-xclbin.h
>  create mode 100644 drivers/fpga/alveo/include/xmgmt-main.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-axigate.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-calib.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-clkfreq.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-clock.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-cmc.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-ddr-srsr.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-flash.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-gpio.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-icap.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-mailbox.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-metadata.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-parent.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-partition.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-subdev.h
>  create mode 100644 drivers/fpga/alveo/include/xrt-ucs.h
>  create mode 100644 drivers/fpga/alveo/lib/Kconfig
>  create mode 100644 drivers/fpga/alveo/lib/Makefile
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-axigate.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-calib.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-clkfreq.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-clock.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-bdinfo.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-ctrl.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-impl.h
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-mailbox.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sc.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc-sensors.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-cmc.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-gpio.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-icap.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-mailbox.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-partition.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-qspi.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-srsr.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-test.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-ucs.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-vsec-golden.c
>  create mode 100644 drivers/fpga/alveo/lib/subdevs/xrt-vsec.c
>  create mode 100644 drivers/fpga/alveo/lib/xrt-cdev.c
>  create mode 100644 drivers/fpga/alveo/lib/xrt-main.c
>  create mode 100644 drivers/fpga/alveo/lib/xrt-main.h
>  create mode 100644 drivers/fpga/alveo/lib/xrt-subdev.c
>  create mode 100644 drivers/fpga/alveo/mgmt/Kconfig
>  create mode 100644 drivers/fpga/alveo/mgmt/Makefile
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main.c
>  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-root.c
>  create mode 100644 include/uapi/linux/xrt/flash_xrt_data.h
>  create mode 100644 include/uapi/linux/xrt/mailbox_proto.h
>  create mode 100644 include/uapi/linux/xrt/mailbox_transport.h
>  create mode 100644 include/uapi/linux/xrt/xclbin.h
>  create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h
> 
> --
> 2.17.1

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver
  2020-11-29  0:00 ` [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver Sonal Santan
  2020-12-01 20:51   ` Moritz Fischer
@ 2020-12-02  3:00   ` Xu Yilun
  2020-12-04  4:40     ` Max Zhen
  1 sibling, 1 reply; 29+ messages in thread
From: Xu Yilun @ 2020-12-02  3:00 UTC (permalink / raw)
  To: Sonal Santan
  Cc: linux-kernel, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

> +static int xmgmt_main_event_cb(struct platform_device *pdev,
> +	enum xrt_events evt, void *arg)
> +{
> +	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
> +	enum xrt_subdev_id id;
> +	int instance;
> +	size_t fwlen;
> +
> +	switch (evt) {
> +	case XRT_EVENT_POST_CREATION: {
> +		id = esd->xevt_subdev_id;
> +		instance = esd->xevt_subdev_instance;
> +		xrt_info(pdev, "processing event %d for (%d, %d)",
> +			evt, id, instance);
> +
> +		if (id == XRT_SUBDEV_GPIO)
> +			xmm->gpio_ready = true;
> +		else if (id == XRT_SUBDEV_QSPI)
> +			xmm->flash_ready = true;
> +		else
> +			BUG_ON(1);
> +
> +		if (xmm->gpio_ready && xmm->flash_ready) {
> +			int rc;
> +
> +			rc = load_firmware_from_disk(pdev, &xmm->firmware_blp,
> +				&fwlen);
> +			if (rc != 0) {
> +				rc = load_firmware_from_flash(pdev,
> +					&xmm->firmware_blp, &fwlen);

I'm curious how the QSPI subdev is enumerated and brought up before
the shell metadata is loaded. The QSPI DT info itself is stored in
the metadata, isn't it?

I didn't find the creation of the leaf platform devices; maybe the
answer is in the missing Patch #5?

Thanks,
Yilun

> +			}
> +			if (rc == 0 && is_valid_firmware(pdev,
> +			    xmm->firmware_blp, fwlen))
> +				(void) xmgmt_create_blp(xmm);
> +			else
> +				xrt_err(pdev,
> +					"failed to find firmware, giving up");
> +			xmm->evt_hdl = NULL;
> +		}
> +		break;
> +	}

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
  2020-12-02  2:14 ` Xu Yilun
@ 2020-12-02  5:33   ` Sonal Santan
  0 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-12-02  5:33 UTC (permalink / raw)
  To: Xu Yilun
  Cc: linux-kernel, linux-fpga, Max Zhen, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree



> -----Original Message-----
> From: Xu Yilun <yilun.xu@intel.com>
> Sent: Tuesday, December 1, 2020 6:14 PM
> To: Sonal Santan <sonals@xilinx.com>
> Cc: linux-kernel@vger.kernel.org; Sonal Santan <sonals@xilinx.com>; linux-
> fpga@vger.kernel.org; Max Zhen <maxz@xilinx.com>; Lizhi Hou
> <lizhih@xilinx.com>; Michal Simek <michals@xilinx.com>; Stefano Stabellini
> <stefanos@xilinx.com>; devicetree@vger.kernel.org
> Subject: Re: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
> 
> On Sat, Nov 28, 2020 at 04:00:32PM -0800, Sonal Santan wrote:
> > Hello,
> >
> > This patch series adds management physical function driver for Xilinx
> > Alveo PCIe accelerator cards,
> > https://www.xilinx.com/products/boards-and-kits/alveo.html
> > This driver is part of Xilinx Runtime (XRT) open source stack.
> >
> > The patch depends on the "PATCH Xilinx Alveo libfdt prep" which was
> > posted before.
> >
> > ALVEO PLATFORM ARCHITECTURE
> >
> > Alveo PCIe FPGA based platforms have a static *shell* partition and a
> > partial re-configurable *user* partition. The shell partition is
> > automatically loaded from flash when host is booted and PCIe is
> > enumerated by BIOS. Shell cannot be changed till the next cold reboot. The
> shell exposes two PCIe physical functions:
> >
> > 1. management physical function
> > 2. user physical function
> >
> > The patch series includes Documentation/xrt.rst which describes Alveo
> > platform, xmgmt driver architecture and deployment model in more
> > detail.
> >
> > Users compile their high level design in C/C++/OpenCL or RTL into FPGA
> > image using Vitis
> > https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
> > tools. The image is packaged as xclbin and contains partial bitstream
> > for the user partition and necessary metadata. Users can dynamically
> > swap the image running on the user partition in order to switch between
> different workloads.
> >
> > ALVEO DRIVERS
> >
> > Alveo Linux kernel driver *xmgmt* binds to management physical
> > function of Alveo platform. The modular driver framework is organized
> > into several platform drivers which primarily handle the following
> functionality:
> >
> > 1.  Loading firmware container also called xsabin at driver attach
> > time 2.  Loading of user compiled xclbin with FPGA Manager integration
> > 3.  Clock scaling of image running on user partition 4.  In-band
> > sensors: temp, voltage, power, etc.
> > 5.  Device reset and rescan
> > 6.  Flashing static *shell* partition
> >
> > The platform drivers are packaged into *xrt-lib* helper module with
> > well defined interfaces, the details of which can be found in
> > Documentation/xrt.rst.
> >
> > xmgmt driver is second generation Alveo management driver and
> > evolution of the first generation (out of tree) Alveo management
> > driver, xclmgmt. The sources of the first generation drivers were
> > posted on LKML last year--
> > https://lore.kernel.org/lkml/20190319215401.6562-1-sonal.santan@xilinx
> > .com/
> >
> > Changes since the first generation driver include the following: the
> > driver has been re-architected as data driven modular driver; the
> > driver has been split into xmgmt and xrt-lib; user physical function
> > driver has been removed from the patch series.
> >
> > Alveo/XRT security and platform architecture is documented on the
> > following GitHub pages:
> > https://xilinx.github.io/XRT/master/html/security.html
> > https://xilinx.github.io/XRT/master/html/platforms_partitions.html
> >
> > User physical function driver is not included in this patch series.
> >
> > TESTING AND VALIDATION
> >
> > xmgmt driver can be tested with full XRT open source stack which
> > includes user space libraries, board utilities and (out of tree) first
> > generation user physical function driver xocl. XRT open source runtime
> > stack is available at https://github.com/Xilinx/XRT. This patch series
> > has been validated on Alveo U50 platform.
> >
> > Complete documentation for XRT open source stack can be found here--
> > https://xilinx.github.io/XRT/master/html/index.html
> >
> > Thanks,
> > -Sonal
> >
> > Sonal Santan (8):
> >   Documentation: fpga: Add a document describing Alveo XRT drivers
> >   fpga: xrt: Add UAPI header files
> >   fpga: xrt: infrastructure support for xmgmt driver
> >   fpga: xrt: core infrastructure for xrt-lib module
> >   fpga: xrt: platform drivers for subsystems in shell partition
> 
> Seems Patch #5 is missing in this series.

Patch #5 was posted along with the rest. You can find a copy here--
https://lore.kernel.org/lkml/20201129000040.24777-6-sonals@xilinx.com/

Thanks,
Sonal

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files
  2020-12-01  4:27   ` Moritz Fischer
@ 2020-12-02 18:57     ` Sonal Santan
  2020-12-02 23:47       ` Moritz Fischer
  0 siblings, 1 reply; 29+ messages in thread
From: Sonal Santan @ 2020-12-02 18:57 UTC (permalink / raw)
  To: Moritz Fischer
  Cc: linux-kernel, linux-fpga, Max Zhen, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree

Hi Moritz,

> -----Original Message-----
> From: Moritz Fischer <mdf@kernel.org>
> Sent: Monday, November 30, 2020 8:27 PM
> To: Sonal Santan <sonals@xilinx.com>
> Cc: linux-kernel@vger.kernel.org; linux-fpga@vger.kernel.org; Max Zhen
> <maxz@xilinx.com>; Lizhi Hou <lizhih@xilinx.com>; Michal Simek
> <michals@xilinx.com>; Stefano Stabellini <stefanos@xilinx.com>;
> devicetree@vger.kernel.org
> Subject: Re: [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files
> 
> Hi Sonal,
> 
> On Sat, Nov 28, 2020 at 04:00:34PM -0800, Sonal Santan wrote:
> > From: Sonal Santan <sonal.santan@xilinx.com>
> >
> > Add XRT UAPI header files which describe flash layout, XRT mailbox
> > protocol, xclBin/axlf FPGA image container format and XRT management
> > physical function driver ioctl interfaces.
> >
> > flash_xrt_data.h:
> > Layout used by XRT to store private data on flash.
> >
> > mailbox_proto.h:
> > Mailbox opcodes and high level data structures representing various
> > kinds of information like sensors, clock, etc.
> >
> > mailbox_transport.h:
> > Transport protocol used by mailbox.
> >
> > xclbin.h:
> > Container format used to store compiled FPGA image which includes
> > bitstream and metadata.
> 
> Can these headers be introduced together with the code that uses them as
> logical change?
> 
> I haven't looked too closely, but it helps reviewing if you can break it into
> smaller pieces that can stand by themselves.
> 

These UAPI header files are used by multiple source files, hence I
wanted to get them reviewed separately. However, if this is getting in
the way, I will look into arranging the files differently in the next
version of the patch series.

You can browse the changes here as well--
https://github.com/Xilinx/linux-xoclv2/tree/xrtv2-A

Thanks,
-Sonal

> Thanks,
> Moritz

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
  2020-12-01  4:54   ` Moritz Fischer
@ 2020-12-02 21:24     ` Max Zhen
  2020-12-02 23:10       ` Moritz Fischer
  0 siblings, 1 reply; 29+ messages in thread
From: Max Zhen @ 2020-12-02 21:24 UTC (permalink / raw)
  To: Moritz Fischer, Sonal Santan
  Cc: linux-kernel, linux-fpga, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree

Hi Moritz,

Thanks for your feedback. Please see my reply inline.

Thanks,
-Max

> -----Original Message-----
> From: Moritz Fischer <mdf@kernel.org>
> Sent: Monday, November 30, 2020 20:55
> To: Sonal Santan <sonals@xilinx.com>
> Cc: linux-kernel@vger.kernel.org; linux-fpga@vger.kernel.org; Max Zhen
> <maxz@xilinx.com>; Lizhi Hou <lizhih@xilinx.com>; Michal Simek
> <michals@xilinx.com>; Stefano Stabellini <stefanos@xilinx.com>;
> devicetree@vger.kernel.org
> Subject: Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document
> describing Alveo XRT drivers
> 
> 
> On Sat, Nov 28, 2020 at 04:00:33PM -0800, Sonal Santan wrote:
> > From: Sonal Santan <sonal.santan@xilinx.com>
> >
> > Describe Alveo XRT driver architecture and provide basic overview of
> > Xilinx Alveo platform.
> >
> > Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
> > ---
> >  Documentation/fpga/index.rst |   1 +
> >  Documentation/fpga/xrt.rst   | 588 +++++++++++++++++++++++++++++++++++
> >  2 files changed, 589 insertions(+)
> >  create mode 100644 Documentation/fpga/xrt.rst
> >
> > diff --git a/Documentation/fpga/index.rst b/Documentation/fpga/index.rst
> > index f80f95667ca2..30134357b70d 100644
> > --- a/Documentation/fpga/index.rst
> > +++ b/Documentation/fpga/index.rst
> > @@ -8,6 +8,7 @@ fpga
> >      :maxdepth: 1
> >
> >      dfl
> > +    xrt
> >
> >  .. only::  subproject and html
> >
> > diff --git a/Documentation/fpga/xrt.rst b/Documentation/fpga/xrt.rst
> > new file mode 100644
> > index 000000000000..9f37d46459b0
> > --- /dev/null
> > +++ b/Documentation/fpga/xrt.rst
> > @@ -0,0 +1,588 @@
> > +==================================
> > +XRTV2 Linux Kernel Driver Overview
> > +==================================
> > +
> > +XRTV2 drivers are second generation `XRT
> > +<https://github.com/Xilinx/XRT>`_ drivers which support `Alveo
> > +<https://www.xilinx.com/products/boards-and-kits/alveo.html>`_ PCIe
> platforms from Xilinx.
> > +
> > +XRTV2 drivers support *subsystem* style data driven platforms where
> > +driver's configuration and behavior is determined by meta data provided
> by platform (in *device tree* format).
> > +Primary management physical function (MPF) driver is called
> > +**xmgmt**. Primary user physical function (UPF) driver is called
> > +**xuser** and HW subsystem drivers are packaged into a library module
> called **xrt-lib**, which is shared by **xmgmt** and **xuser** (WIP).
> WIP?

Work in progress. I'll expand it in the doc.

> > +
> > +Alveo Platform Overview
> > +=======================
> > +
> > +Alveo platforms are architected as two physical FPGA partitions:
> > +*Shell* and *User*. Shell
> Nit: The Shell provides ...

Sure. Will fix.

> > +provides basic infrastructure for the Alveo platform like PCIe
> > +connectivity, board management, Dynamic Function Exchange (DFX),
> > +sensors, clocking, reset, and security. User partition contains user
> compiled binary which is loaded by a process called DFX also known as partial
> reconfiguration.
> > +
> > +Physical partitions require strict HW compatibility with each other for DFX
> to work properly.
> > +Every physical partition has two interface UUIDs: *parent* UUID and
> > +*child* UUID. For simple single stage platforms Shell → User forms
> > +parent child relationship. For complex two stage platforms Base → Shell
> → User forms the parent child relationship chain.
> > +
> > +.. note::
> > +   Partition compatibility matching is key design component of Alveo
> platforms and XRT. Partitions
> > +   have child and parent relationship. A loaded partition exposes child
> partition UUID to advertise
> > +   its compatibility requirement for child partition. When loading a child
> partition the xmgmt
> > +   management driver matches parent UUID of the child partition against
> child UUID exported by the
> > +   parent. Parent and child partition UUIDs are stored in the *xclbin* (for
> user) or *xsabin* (for
> > +   base and shell). Except for root UUID, VSEC, hardware itself does not
> know about UUIDs. UUIDs are
> > +   stored in xsabin and xclbin.
> > +
> > +
> > +The physical partitions and their loading is illustrated below::
> > +
> > +            SHELL                               USER
> > +        +-----------+                  +-------------------+
> > +        |           |                  |                   |
> > +        | VSEC UUID | CHILD     PARENT |    LOGIC UUID     |
> > +        |           o------->|<--------o                   |
> > +        |           | UUID       UUID  |                   |
> > +        +-----+-----+                  +--------+----------+
> > +              |                                 |
> > +              .                                 .
> > +              |                                 |
> > +          +---+---+                      +------+--------+
> > +          |  POR  |                      | USER COMPILED |
> > +          | FLASH |                      |    XCLBIN     |
> > +          +-------+                      +---------------+
> > +
> > +
> > +Loading Sequence
> > +----------------
> > +
> > +Shell partition is loaded from flash at system boot time. It
> > +establishes the PCIe link and exposes
> Nit: The Shell

Will fix.

> > +two physical functions to the BIOS. After OS boot, xmgmt driver
> > +attaches to PCIe physical function
> > +0 exposed by the Shell and then looks for VSEC in PCIe extended
> > +configuration space. Using VSEC it determines the logic UUID of Shell
> > +and uses the UUID to load matching *xsabin* file from Linux firmware
> > +directory. The xsabin file contains metadata to discover peripherals that
> are part of Shell and firmware(s) for any embedded soft processors in Shell.
> 
> Neat.

Thanks :-).
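
For reference, the UUID-driven lookup might look roughly like the
sketch below (illustrative only; the helper name is made up and the
path layout follows the xclbinutil example later in this doc):

  #include <linux/kernel.h>
  #include <linux/device.h>
  #include <linux/firmware.h>

  /* Fetch the xsabin matching the logic UUID read from VSEC. */
  static int xmgmt_request_xsabin(struct device *dev, const char *logic_uuid,
                                  const struct firmware **fw)
  {
          char path[64];

          snprintf(path, sizeof(path), "xilinx/%s/partition.xsabin",
                   logic_uuid);
          return request_firmware(fw, path, dev);
  }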

> > +
> > +Shell exports child interface UUID which is used for compatibility
> > +check when loading user compiled
> Nit: The Shell

Sure.

> > +xclbin over the User partition as part of DFX. When a user requests
> > +loading of a specific xclbin the xmgmt management driver reads the
> > +parent interface UUID specified in the xclbin and matches it with
> > +child interface UUID exported by Shell to determine if xclbin is compatible
> with the Shell. If match fails loading of xclbin is denied.
> > +
> > +xclbin loading is requested using ICAP_DOWNLOAD_AXLF ioctl command.
> > +When loading xclbin xmgmt driver performs the following operations:
> > +
> > +1. Sanity check the xclbin contents
> > +2. Isolate the User partition
> > +3. Download the bitstream using the FPGA config engine (ICAP) 4.
> > +De-isolate the User partition
> Is this modelled as bridges and regions?

The Alveo drivers as written today do not use the FPGA bridge and region frameworks. It seems that if we added support for those frameworks, a PR programming request could come from elsewhere in the kernel, outside of the xmgmt driver? Currently we cannot support that: PR programming can only be initiated through XRT's runtime API in user space.

Or maybe we have missed something about the use case for this framework?
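
That said, if we adopted it, our ULP isolation gate would presumably
map onto an fpga_bridge roughly like the sketch below (illustrative
only; xrt_axigate_freeze()/xrt_axigate_free() stand in for the
axigate leaf operations and are not the names in the posted code):

  #include <linux/fpga/fpga-bridge.h>
  #include <linux/platform_device.h>

  /* assumed stand-ins for the axigate leaf isolate/de-isolate ops */
  int xrt_axigate_freeze(struct platform_device *pdev);
  int xrt_axigate_free(struct platform_device *pdev);

  static int xrt_axigate_enable_set(struct fpga_bridge *bridge, bool enable)
  {
          struct platform_device *pdev = bridge->priv;

          /* enable == false isolates the User partition */
          return enable ? xrt_axigate_free(pdev) : xrt_axigate_freeze(pdev);
  }

  static const struct fpga_bridge_ops xrt_axigate_br_ops = {
          .enable_set = xrt_axigate_enable_set,
  };

  static int xrt_axigate_bridge_register(struct platform_device *pdev)
  {
          struct fpga_bridge *br;

          br = devm_fpga_bridge_create(&pdev->dev, "xrt-axigate",
                                       &xrt_axigate_br_ops, pdev);
          if (!br)
                  return -ENOMEM;

          return fpga_bridge_register(br);
  }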

> 
> > +5. Program the clocks (ClockWiz) driving the User partition 6. Wait
> > +for memory controller (MIG) calibration
> > +
> > +`Platform Loading Overview
> > +<https://xilinx.github.io/XRT/master/html/platforms_partitions.html>`
> > +_ provides more detailed information on platform loading.
> > +
> > +xsabin
> > +------
> > +
> > +Each Alveo platform comes packaged with its own xsabin. The xsabin is
> > +trusted component of the platform. For format details refer to
> > +:ref:`xsabin/xclbin Container Format`. xsabin contains basic
> > +information like UUIDs, platform name and metadata in the form of
> device tree. See :ref:`Device Tree Usage` for details and example.
> > +
> > +xclbin
> > +------
> > +
> > +xclbin is compiled by end user using
> > +`Vitis
> > +<https://www.xilinx.com/products/design-tools/vitis/vitis-platform.ht
> > +ml>`_ tool set from Xilinx. The xclbin contains sections describing
> > +user compiled acceleration engines/kernels, memory subsystems,
> clocking information etc. It also contains bitstream for the user partition,
> UUIDs, platform name, etc. xclbin uses the same container format as xsabin
> which is described below.
> > +
> > +
> > +xsabin/xclbin Container Format
> > +------------------------------
> > +
> > +xclbin/xsabin is ELF-like binary container format. It is structured as series
> of sections.
> > +There is a file header followed by several section headers which is
> followed by sections.
> > +A section header points to an actual section. There is an optional signature
> at the end.
> > +The format is defined by header file ``xclbin.h``. The following
> > +figure illustrates a typical xclbin::
> > +
> > +
> > +          +---------------------+
> > +          |                     |
> > +          |       HEADER        |
> > +          +---------------------+
> > +          |   SECTION  HEADER   |
> > +          |                     |
> > +          +---------------------+
> > +          |         ...         |
> > +          |                     |
> > +          +---------------------+
> > +          |   SECTION  HEADER   |
> > +          |                     |
> > +          +---------------------+
> > +          |       SECTION       |
> > +          |                     |
> > +          +---------------------+
> > +          |         ...         |
> > +          |                     |
> > +          +---------------------+
> > +          |       SECTION       |
> > +          |                     |
> > +          +---------------------+
> > +          |      SIGNATURE      |
> > +          |      (OPTIONAL)     |
> > +          +---------------------+
> > +
> > +
> > +xclbin/xsabin files can be packaged, un-packaged and inspected using
> > +XRT utility called **xclbinutil**. xclbinutil is part of XRT open
> > +source software stack. The source code for xclbinutil can be found at
> > +https://github.com/Xilinx/XRT/tree/master/src/runtime_src/tools/xclbi
> > +nutil
> > +
> > +For example to enumerate the contents of a xclbin/xsabin use the
> > +*--info* switch as shown
> > +below::
> > +
> > +  xclbinutil --info --input
> > + /opt/xilinx/firmware/u50/gen3x16-xdma/blp/test/bandwidth.xclbin
> > +  xclbinutil --info --input
> > + /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/partition.xsab
> > + in
> > +
> > +
> > +Device Tree Usage
> > +-----------------
> > +
> > +As mentioned previously xsabin stores metadata which advertise HW
> subsystems present in a partition.
> > +The metadata is stored in device tree format with well defined
> > +schema. Subsystem instantiations are captured as children of
> > +``addressable_endpoints`` node. Subsystem nodes have standard
> attributes like ``reg``, ``interrupts`` etc. Additionally the nodes also have PCIe
> specific attributes:
> > +``pcie_physical_function`` and ``pcie_bar_mapping``. These identify
> > +which PCIe physical function and which BAR space in that physical
> > +function the subsystem resides. XRT management driver uses this
> > +information to bind *platform drivers* to the subsystem
> > +instantiations. The platform drivers are found in **xrt-lib.ko**
> > +kernel module defined later. Below is an example of device tree for
> > +Alveo U50
> > +platform::
> 
> I might be missing something, but couldn't you structure the addressable
> endpoints in a way that encode the physical function as a parent / child
> relation?

The Alveo driver does not generate the metadata. The metadata is formatted and generated by the HW tools when the Alveo HW platform is built.

> 
> What are the regs relative to?

The reg property indicates the offset of the subsystem's registers within a PCIe BAR of the Alveo device.
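
For example, a leaf's register space might be translated into a BAR
mapping roughly as below (a sketch; bar/off/len are assumed to have
been parsed from the node's pcie_bar_mapping and reg properties):

  #include <linux/io.h>
  #include <linux/pci.h>

  /* Map an endpoint's registers relative to the given PCIe BAR. */
  static void __iomem *xrt_map_endpoint(struct pci_dev *pdev, int bar,
                                        resource_size_t off,
                                        resource_size_t len)
  {
          if (off + len > pci_resource_len(pdev, bar))
                  return NULL;

          return ioremap(pci_resource_start(pdev, bar) + off, len);
  }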

> > +
> > +  /dts-v1/;
> > +
> > +  /{
> > +     logic_uuid = "f465b0a3ae8c64f619bc150384ace69b";
> > +
> > +     schema_version {
> > +             major = <0x01>;
> > +             minor = <0x00>;
> > +     };
> > +
> > +     interfaces {
> > +
> > +             @0 {
> > +                     interface_uuid = "862c7020a250293e32036f19956669e5";
> > +             };
> > +     };
> > +
> > +     addressable_endpoints {
> > +
> > +             ep_blp_rom_00 {
> > +                     reg = <0x00 0x1f04000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_bram_ctrl-
> 1.0\0axi_bram_ctrl";
> > +             };
> > +
> > +             ep_card_flash_program_00 {
> > +                     reg = <0x00 0x1f06000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_quad_spi-
> 1.0\0axi_quad_spi";
> > +                     interrupts = <0x03 0x03>;
> > +             };
> > +
> > +             ep_cmc_firmware_mem_00 {
> > +                     reg = <0x00 0x1e20000 0x00 0x20000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible =
> > + "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> > +
> > +                     firmware {
> > +                             firmware_product_name = "cmc";
> > +                             firmware_branch_name = "u50";
> > +                             firmware_version_major = <0x01>;
> > +                             firmware_version_minor = <0x00>;
> > +                     };
> > +             };
> > +
> > +             ep_cmc_intc_00 {
> > +                     reg = <0x00 0x1e03000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> > +                     interrupts = <0x04 0x04>;
> > +             };
> > +
> > +             ep_cmc_mutex_00 {
> > +                     reg = <0x00 0x1e02000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_cmc_regmap_00 {
> > +                     reg = <0x00 0x1e08000 0x00 0x2000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible =
> > + "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> > +
> > +                     firmware {
> > +                             firmware_product_name = "sc-fw";
> > +                             firmware_branch_name = "u50";
> > +                             firmware_version_major = <0x05>;
> > +                     };
> > +             };
> > +
> > +             ep_cmc_reset_00 {
> > +                     reg = <0x00 0x1e01000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_ddr_mem_calib_00 {
> > +                     reg = <0x00 0x63000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_debug_bscan_mgmt_00 {
> > +                     reg = <0x00 0x1e90000 0x00 0x10000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-debug_bridge-
> 1.0\0debug_bridge";
> > +             };
> > +
> > +             ep_ert_base_address_00 {
> > +                     reg = <0x00 0x21000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_ert_command_queue_mgmt_00 {
> > +                     reg = <0x00 0x40000 0x00 0x10000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-ert_command_queue-
> 1.0\0ert_command_queue";
> > +             };
> > +
> > +             ep_ert_command_queue_user_00 {
> > +                     reg = <0x00 0x40000 0x00 0x10000>;
> > +                     pcie_physical_function = <0x01>;
> > +                     compatible = "xilinx.com,reg_abs-ert_command_queue-
> 1.0\0ert_command_queue";
> > +             };
> > +
> > +             ep_ert_firmware_mem_00 {
> > +                     reg = <0x00 0x30000 0x00 0x8000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible =
> > + "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> > +
> > +                     firmware {
> > +                             firmware_product_name = "ert";
> > +                             firmware_branch_name = "v20";
> > +                             firmware_version_major = <0x01>;
> > +                     };
> > +             };
> > +
> > +             ep_ert_intc_00 {
> > +                     reg = <0x00 0x23000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> > +                     interrupts = <0x05 0x05>;
> > +             };
> > +
> > +             ep_ert_reset_00 {
> > +                     reg = <0x00 0x22000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_ert_sched_00 {
> > +                     reg = <0x00 0x50000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x01>;
> > +                     compatible = "xilinx.com,reg_abs-ert_sched-1.0\0ert_sched";
> > +                     interrupts = <0x09 0x0c>;
> > +             };
> > +
> > +             ep_fpga_configuration_00 {
> > +                     reg = <0x00 0x1e88000 0x00 0x8000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_hwicap-1.0\0axi_hwicap";
> > +                     interrupts = <0x02 0x02>;
> > +             };
> > +
> > +             ep_icap_reset_00 {
> > +                     reg = <0x00 0x1f07000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_mailbox_mgmt_00 {
> > +                     reg = <0x00 0x1f10000 0x00 0x10000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-mailbox-1.0\0mailbox";
> > +                     interrupts = <0x00 0x00>;
> > +             };
> > +
> > +             ep_mailbox_user_00 {
> > +                     reg = <0x00 0x1f00000 0x00 0x10000>;
> > +                     pcie_physical_function = <0x01>;
> > +                     compatible = "xilinx.com,reg_abs-mailbox-1.0\0mailbox";
> > +                     interrupts = <0x08 0x08>;
> > +             };
> > +
> > +             ep_msix_00 {
> > +                     reg = <0x00 0x00 0x00 0x20000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-msix-1.0\0msix";
> > +                     pcie_bar_mapping = <0x02>;
> > +             };
> > +
> > +             ep_pcie_link_mon_00 {
> > +                     reg = <0x00 0x1f05000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_pr_isolate_plp_00 {
> > +                     reg = <0x00 0x1f01000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_pr_isolate_ulp_00 {
> > +                     reg = <0x00 0x1000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > +             };
> > +
> > +             ep_uuid_rom_00 {
> > +                     reg = <0x00 0x64000 0x00 0x1000>;
> > +                     pcie_physical_function = <0x00>;
> > +                     compatible = "xilinx.com,reg_abs-axi_bram_ctrl-
> 1.0\0axi_bram_ctrl";
> > +             };
> > +
> > +             ep_xdma_00 {
> > +                     reg = <0x00 0x00 0x00 0x10000>;
> > +                     pcie_physical_function = <0x01>;
> > +                     compatible = "xilinx.com,reg_abs-xdma-1.0\0xdma";
> > +                     pcie_bar_mapping = <0x02>;
> > +             };
> > +     };
> > +
> > +  }
> > +
> > +
> > +
> > +Deployment Models
> > +=================
> > +
> > +Baremetal
> > +---------
> > +
> > +In bare-metal deployments both MPF and UPF are visible and
> > +accessible. xmgmt driver binds to MPF. xmgmt driver operations are
> > +privileged and available to system administrator. The full stack is
> illustrated below::
> > +
> > +
> > +                            HOST
> > +
> > +                 [XMGMT]            [XUSER]
> > +                    |                  |
> > +                    |                  |
> > +                 +-----+            +-----+
> > +                 | MPF |            | UPF |
> > +                 |     |            |     |
> > +                 | PF0 |            | PF1 |
> > +                 +--+--+            +--+--+
> > +          ......... ^................. ^..........
> > +                    |                  |
> > +                    |   PCIe DEVICE    |
> > +                    |                  |
> > +                 +--+------------------+--+
> > +                 |         SHELL          |
> > +                 |                        |
> > +                 +------------------------+
> > +                 |         USER           |
> > +                 |                        |
> > +                 |                        |
> > +                 |                        |
> > +                 |                        |
> > +                 +------------------------+
> > +
> > +
> > +
> > +Virtualized
> > +-----------
> > +
> > +In virtualized deployments the privileged MPF is assigned to the
> > +host while the unprivileged UPF is assigned to a guest VM via PCIe
> > +pass-through. The xmgmt driver in the host binds to MPF. xmgmt
> > +driver operations are privileged and accessible only to the hosting
> > +service provider. The full stack is illustrated below::
> > +
> > +
> > +                                 .............
> > +                  HOST           .    VM     .
> > +                                 .           .
> > +                 [XMGMT]         .  [XUSER]  .
> > +                    |            .     |     .
> > +                    |            .     |     .
> > +                 +-----+         .  +-----+  .
> > +                 | MPF |         .  | UPF |  .
> > +                 |     |         .  |     |  .
> > +                 | PF0 |         .  | PF1 |  .
> > +                 +--+--+         .  +--+--+  .
> > +          ......... ^................. ^..........
> > +                    |                  |
> > +                    |   PCIe DEVICE    |
> > +                    |                  |
> > +                 +--+------------------+--+
> > +                 |         SHELL          |
> > +                 |                        |
> > +                 +------------------------+
> > +                 |         USER           |
> > +                 |                        |
> > +                 |                        |
> > +                 |                        |
> > +                 |                        |
> > +                 +------------------------+
> > +
> > +
> > +
> > +Driver Modules
> > +==============
> > +
> > +xrt-lib.ko
> > +----------
> > +
> > +Repository of all subsystem drivers and pure software modules that
> > +can potentially be shared between xmgmt and xuser. All of these
> > +drivers are structured as Linux *platform drivers* and are
> > +instantiated by xmgmt (or xuser in the future) based on metadata
> > +associated with the hardware. The metadata is in the form of a
> > +device tree, as explained before.
> > +
> > +xmgmt.ko
> > +--------
> > +
> > +The xmgmt driver is a PCIe device driver driving the MPF found on
> > +Xilinx Alveo PCIe devices. It consists of one *root* driver, one or
> > +more *partition* drivers and one or more *leaf* drivers. The root
> > +and MPF-specific leaf drivers are in xmgmt.ko. The partition driver
> > +and other leaf drivers are in xrt-lib.ko.
> > +
> > +The instantiation of a specific partition driver or leaf driver is
> > +completely data driven, based on metadata (mostly in device tree
> > +format) found through the VSEC capability and inside firmware files,
> > +such as the xsabin or xclbin file. The root driver manages the life
> > +cycle of multiple partition drivers, which, in turn, manage multiple
> > +leaf drivers. This allows a single set of driver code to support all
> > +kinds of subsystems exposed by different shells. The differences
> > +among these subsystems are handled in the leaf drivers, with the
> > +root and partition drivers being part of the infrastructure and
> > +providing common services for all leaves found on all platforms.
> > +
> > +
> > +xmgmt-root
> > +^^^^^^^^^^
> > +
> > +The xmgmt-root driver is a PCIe device driver attaches to MPF. It's
> > +part of the
> Nit: s/attaches/attached ?

Yes, sure.

> > +infrastructure of the MPF driver and resides in xmgmt.ko. This driver
> > +
> > +* manages one or more partition drivers
> > +* provides access to functionality that requires pci_dev, such as
> > +  PCIe config space access, to other leaf drivers through parent calls
> > +* together with the partition driver, facilitates event callbacks
> > +  for other leaf drivers
> > +* together with the partition driver, facilitates inter-leaf driver
> > +  calls for other leaf drivers
> > +
> > +When the root driver starts, it explicitly creates an initial
> > +partition instance, which contains leaf drivers that will trigger
> > +the creation of other partition instances. The root driver waits for
> > +all partitions and leaves to be created before it returns from its
> > +probe routine and declares the initialization of the entire xmgmt
> > +driver successful.
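
That start-up sequence could be sketched roughly as follows. Every
identifier below except the PCI core calls is hypothetical, intended
only to mirror the prose above, not the real xmgmt entry points::

  #include <linux/pci.h>

  int xmgmt_create_initial_partition(struct pci_dev *pdev); /* hypothetical */
  int xmgmt_wait_for_leaves(struct pci_dev *pdev);          /* hypothetical */

  /* Sketch of the root driver probe flow: create the initial
   * partition, then block until all partitions/leaves exist. */
  static int xmgmt_root_probe(struct pci_dev *pdev,
                              const struct pci_device_id *id)
  {
          int ret = pci_enable_device(pdev);

          if (ret)
                  return ret;

          /* Initial partition; its leaves (e.g. VSEC) discover the
           * metadata that triggers creation of further partitions. */
          ret = xmgmt_create_initial_partition(pdev);
          if (!ret)
                  ret = xmgmt_wait_for_leaves(pdev);
          if (ret)
                  pci_disable_device(pdev);
          return ret;
  }
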
> > +
> > +partition
> > +^^^^^^^^^
> > +
> > +The partition driver is a platform device driver whose life cycle
> > +is managed by the root driver; it does not have real IO memory or
> > +IRQ resources. It is part of the infrastructure of the MPF driver
> > +and resides in xrt-lib.ko. This driver
> > +
> > +* manages one or more leaf drivers so that multiple leaves can be
> > +  managed as a group
> > +* provides access to the root from the leaves, so that parent
> > +  calls, event notifications and inter-leaf calls can happen
> > +
> > +In xmgmt, an initial partition driver instance will be created by
> > +the root driver; it contains leaves that will trigger further
> > +partition instances to be created to manage the groups of leaves
> > +found on different partitions of the hardware, such as VSEC, Shell,
> > +and User.
> > +
> > +leaves
> > +^^^^^^
> > +
> > +The leaf driver is a platform device driver whose life cycle is
> > +managed by a partition driver; it may or may not have real IO
> > +memory or IRQ resources. Leaf drivers are the real meat of xmgmt
> > +and contain platform-specific code for the Shell and User
> > +partitions found on an MPF.
> > +
> > +A leaf driver may not have real hardware resources when it merely
> > +acts as a driver that manages certain in-memory state for xmgmt.
> > +This in-memory state could be shared by multiple other leaves.
> > +
> > +Leaf drivers assigned to specific hardware resources drive a
> > +specific subsystem in the device. To manipulate the subsystem or
> > +carry out a task, a leaf driver may request help from the root via
> > +parent calls and/or from other leaves via inter-leaf calls.
> > +
> > +A leaf can also broadcast events through infrastructure code for
> > +other leaves to process. It can also receive event notifications
> > +from the infrastructure about certain events, such as post-creation
> > +or pre-exit of a particular leaf.
> > +
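Schematically a leaf is an ordinary platform driver; a bare-bones
sketch is below. The name xrt_gpio is made up here, and whether a
driver name mirrors the ``compatible`` string from the metadata is an
assumption of this illustration::

  /* Minimal leaf skeleton; real leaves additionally register
   * xrt-specific ops for parent calls, inter-leaf calls and events. */
  #include <linux/platform_device.h>
  #include <linux/module.h>

  static int xrt_gpio_probe(struct platform_device *pdev)
  {
          /* ioremap the 'reg' window described by the metadata, etc. */
          return 0;
  }

  static int xrt_gpio_remove(struct platform_device *pdev)
  {
          return 0;
  }

  static struct platform_driver xrt_gpio_driver = {
          .driver = { .name = "xrt_gpio" },
          .probe  = xrt_gpio_probe,
          .remove = xrt_gpio_remove,
  };
  module_platform_driver(xrt_gpio_driver);
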
> > +
> > +Driver Interfaces
> > +=================
> > +
> > +xmgmt Driver Ioctls
> > +-------------------
> > +
> > +Ioctls exposed by the xmgmt driver to user space are enumerated in
> > +the following table:
> > +
> > +== ===================== ============================ ===========================
> > +#  Functionality         ioctl request code           data format
> > +== ===================== ============================ ===========================
> > +1  FPGA image download   XMGMT_IOCICAPDOWNLOAD_AXLF   xmgmt_ioc_bitstream_axlf
> > +2  CL frequency scaling  XMGMT_IOCFREQSCALE           xmgmt_ioc_freqscaling
> > +== ===================== ============================ ===========================
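From user space the download ioctl might be driven as sketched below.
The /dev node path and UAPI header name are assumptions, and the
single .xclbin pointer member is inferred from the first-generation
driver's ioctl struct, so verify against the headers in this series::

  /* Sketch: hand a compiled xclbin to the xmgmt driver. */
  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include "xclbin.h"
  #include "xmgmt-ioctl.h"   /* hypothetical UAPI header name */

  int download_xclbin(const struct axlf *xclbin)
  {
          struct xmgmt_ioc_bitstream_axlf req = {
                  .xclbin = (struct axlf *)xclbin,
          };
          int fd = open("/dev/xmgmt0", O_RDWR);  /* assumed node name */
          int ret;

          if (fd < 0)
                  return -1;
          ret = ioctl(fd, XMGMT_IOCICAPDOWNLOAD_AXLF, &req);
          close(fd);
          return ret;
  }
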
> > +
> > +xmgmt Driver Sysfs
> > +------------------
> > +
> > +The xmgmt driver exposes a rich set of sysfs interfaces. Subsystem
> > +platform drivers export sysfs nodes for every platform instance.
> > +
> > +Every partition also exports its UUIDs. See below for examples::
> > +
> > +  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/interface_uuids
> > +  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/logic_uuids
> > +
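For instance, reading the interface UUID of the example U50 shell
might look like this; the output shown is illustrative, reusing the
interface UUID from the device tree example above::

  $ cat /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/interface_uuids
  862c7020a250293e32036f19956669e5
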
> > +
> > +hwmon
> > +-----
> > +
> > +The xmgmt driver exposes the standard hwmon interface to report
> > +voltage, current, temperature, power, etc. These sensors can easily
> > +be viewed using the *sensors* command-line utility.
> > +
> > +
> > +mailbox
> > +-------
> > +
> > +xmgmt communicates with the user physical function driver via a HW
> > +mailbox. Mailbox opcodes are defined in ``mailbox_proto.h``. `Mailbox
> > +Inter-domain Communication Protocol
> > +<https://xilinx.github.io/XRT/master/html/mailbox.proto.html>`_
> > +defines the full specification. xmgmt implements a subset of the
> > +specification. It provides the following services to the UPF driver:
> > +
> > +1.  Responding to *are you there* requests, including determining
> > +    if the two drivers are running in the same OS domain
> > +2.  Providing sensor readings, loaded xclbin UUID, clock frequency,
> > +    shell information, etc.
> > +3.  Performing PCIe hot reset
> > +4.  Downloading user compiled xclbin
> 
> Is this gonna use the mailbox framework?

The xclbin can be downloaded via IOCTL interface of xmgmt driver.
Or the download request can come from user pf driver via mailbox, yes.
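
To make the flow concrete, the request path could be pictured as in
the sketch below. The opcode, struct layout and helper names here are
placeholders; the authoritative definitions are in mailbox_proto.h::

  /* All names below are placeholders for what mailbox_proto.h and
   * the mailbox leaf actually define; this only illustrates the flow. */
  #include <linux/types.h>

  struct mailbox_channel;                      /* opaque HW channel */
  int mailbox_post(struct mailbox_channel *ch, void *buf, size_t len);

  enum { MAILBOX_REQ_TEST_READY = 1 };         /* "are you there" */

  struct xcl_mailbox_req {
          u64 flags;
          u32 req;                             /* opcode */
          char data[];                         /* opcode specific payload */
  };

  /* UPF asks MPF whether both drivers run in the same OS domain. */
  static int upf_test_ready(struct mailbox_channel *ch)
  {
          struct xcl_mailbox_req req = { .req = MAILBOX_REQ_TEST_READY };

          return mailbox_post(ch, &req, sizeof(req));
  }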

Thanks,
Max

> 
> > +
> > +
> > +Platform Security Considerations
> > +================================
> > +
> > +`Security of Alveo Platform
> > +<https://xilinx.github.io/XRT/master/html/security.html>`_
> > +discusses the deployment options and security implications in great detail.
> > --
> > 2.17.1
> 
> That's a lot of text, I'll have to read it again most likely,
> 
> - Moritz


* Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
  2020-12-02 21:24     ` Max Zhen
@ 2020-12-02 23:10       ` Moritz Fischer
  2020-12-03  3:38         ` Max Zhen
  0 siblings, 1 reply; 29+ messages in thread
From: Moritz Fischer @ 2020-12-02 23:10 UTC (permalink / raw)
  To: Max Zhen
  Cc: Moritz Fischer, Sonal Santan, linux-kernel, linux-fpga,
	Lizhi Hou, Michal Simek, Stefano Stabellini, devicetree

Hi Max,

On Wed, Dec 02, 2020 at 09:24:29PM +0000, Max Zhen wrote:
> Hi Moritz,
> 
> Thanks for your feedback. Please see my reply inline.
> 
> Thanks,
> -Max
> 
> > -----Original Message-----
> > From: Moritz Fischer <mdf@kernel.org>
> > Sent: Monday, November 30, 2020 20:55
> > To: Sonal Santan <sonals@xilinx.com>
> > Cc: linux-kernel@vger.kernel.org; linux-fpga@vger.kernel.org; Max Zhen
> > <maxz@xilinx.com>; Lizhi Hou <lizhih@xilinx.com>; Michal Simek
> > <michals@xilinx.com>; Stefano Stabellini <stefanos@xilinx.com>;
> > devicetree@vger.kernel.org
> > Subject: Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document
> > describing Alveo XRT drivers
> > 
> > 
> > On Sat, Nov 28, 2020 at 04:00:33PM -0800, Sonal Santan wrote:
> > > From: Sonal Santan <sonal.santan@xilinx.com>
> > >
> > > Describe Alveo XRT driver architecture and provide basic overview of
> > > Xilinx Alveo platform.
> > >
> > > Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
> > > ---
> > >  Documentation/fpga/index.rst |   1 +
> > >  Documentation/fpga/xrt.rst   | 588
> > +++++++++++++++++++++++++++++++++++
> > >  2 files changed, 589 insertions(+)
> > >  create mode 100644 Documentation/fpga/xrt.rst
> > >
> > > diff --git a/Documentation/fpga/index.rst
> > > b/Documentation/fpga/index.rst index f80f95667ca2..30134357b70d
> > 100644
> > > --- a/Documentation/fpga/index.rst
> > > +++ b/Documentation/fpga/index.rst
> > > @@ -8,6 +8,7 @@ fpga
> > >      :maxdepth: 1
> > >
> > >      dfl
> > > +    xrt
> > >
> > >  .. only::  subproject and html
> > >
> > > diff --git a/Documentation/fpga/xrt.rst b/Documentation/fpga/xrt.rst
> > > new file mode 100644 index 000000000000..9f37d46459b0
> > > --- /dev/null
> > > +++ b/Documentation/fpga/xrt.rst
> > > @@ -0,0 +1,588 @@
> > > +==================================
> > > +XRTV2 Linux Kernel Driver Overview
> > > +==================================
> > > +
> > > +XRTV2 drivers are second generation `XRT
> > > +<https://github.com/Xilinx/XRT>`_ drivers which support `Alveo
> > > +<https://www.xilinx.com/products/boards-and-kits/alveo.html>`_
> > > +PCIe platforms from Xilinx.
> > > +
> > > +XRTV2 drivers support *subsystem* style data driven platforms
> > > +where the driver's configuration and behavior is determined by
> > > +metadata provided by the platform (in *device tree* format).
> > > +The primary management physical function (MPF) driver is called
> > > +**xmgmt**. The primary user physical function (UPF) driver is
> > > +called **xuser**, and HW subsystem drivers are packaged into a
> > > +library module called **xrt-lib**, which is shared by **xmgmt**
> > > +and **xuser** (WIP).
> > WIP?
> 
> Work in progress. I'll expand it in the doc.
> 
> > > +
> > > +Alveo Platform Overview
> > > +=======================
> > > +
> > > +Alveo platforms are architected as two physical FPGA partitions:
> > > +*Shell* and *User*. Shell
> > Nit: The Shell provides ...
> 
> Sure. Will fix.
> 
> > > +provides basic infrastructure for the Alveo platform like PCIe
> > > +connectivity, board management, Dynamic Function Exchange (DFX),
> > > +sensors, clocking, reset, and security. The User partition
> > > +contains the user compiled binary which is loaded by a process
> > > +called DFX, also known as partial reconfiguration.
> > > +
> > > +Physical partitions require strict HW compatibility with each
> > > +other for DFX to work properly.
> > > +Every physical partition has two interface UUIDs: *parent* UUID and
> > > +*child* UUID. For simple single stage platforms Shell → User
> > > +forms the parent-child relationship. For complex two stage
> > > +platforms Base → Shell → User forms the parent-child relationship
> > > +chain.
> > > +
> > > +.. note::
> > > +   Partition compatibility matching is a key design component of
> > > +   Alveo platforms and XRT. Partitions have a child and parent
> > > +   relationship. A loaded partition exposes its child partition
> > > +   UUID to advertise its compatibility requirement for a child
> > > +   partition. When loading a child partition the xmgmt management
> > > +   driver matches the parent UUID of the child partition against
> > > +   the child UUID exported by the parent. Parent and child
> > > +   partition UUIDs are stored in the *xclbin* (for user) or
> > > +   *xsabin* (for base and shell). Except for the root UUID in
> > > +   VSEC, the hardware itself does not know about UUIDs. UUIDs are
> > > +   stored in xsabin and xclbin.
> > > +
> > > +
> > > +The physical partitions and their loading are illustrated below::
> > > +
> > > +            SHELL                               USER
> > > +        +-----------+                  +-------------------+
> > > +        |           |                  |                   |
> > > +        | VSEC UUID | CHILD     PARENT |    LOGIC UUID     |
> > > +        |           o------->|<--------o                   |
> > > +        |           | UUID       UUID  |                   |
> > > +        +-----+-----+                  +--------+----------+
> > > +              |                                 |
> > > +              .                                 .
> > > +              |                                 |
> > > +          +---+---+                      +------+--------+
> > > +          |  POR  |                      | USER COMPILED |
> > > +          | FLASH |                      |    XCLBIN     |
> > > +          +-------+                      +---------------+
> > > +
> > > +
> > > +Loading Sequence
> > > +----------------
> > > +
> > > +Shell partition is loaded from flash at system boot time. It
> > > +establishes the PCIe link and exposes
> > Nit: The Shell
> 
> Will fix.
> 
> > > +two physical functions to the BIOS. After OS boot, the xmgmt
> > > +driver attaches to PCIe physical function 0 exposed by the Shell
> > > +and then looks for VSEC in the PCIe extended configuration space.
> > > +Using VSEC it determines the logic UUID of the Shell and uses the
> > > +UUID to load the matching *xsabin* file from the Linux firmware
> > > +directory. The xsabin file contains metadata to discover the
> > > +peripherals that are part of the Shell and firmware(s) for any
> > > +embedded soft processors in the Shell.
> > 
> > Neat.
> 
> Thanks :-).
> 
> > > +
> > > +Shell exports child interface UUID which is used for compatibility
> > > +check when loading user compiled
> > Nit: The Shell
> 
> Sure.
> 
> > > +xclbin over the User partition as part of DFX. When a user requests
> > > +loading of a specific xclbin the xmgmt management driver reads the
> > > +parent interface UUID specified in the xclbin and matches it with
> > > +the child interface UUID exported by the Shell to determine if
> > > +the xclbin is compatible with the Shell. If the match fails,
> > > +loading of the xclbin is denied.
> > > +
> > > +xclbin loading is requested using the ICAP_DOWNLOAD_AXLF ioctl
> > > +command. When loading an xclbin the xmgmt driver performs the
> > > +following operations:
> > > +
> > > +1. Sanity check the xclbin contents
> > > +2. Isolate the User partition
> > > +3. Download the bitstream using the FPGA config engine (ICAP)
> > > +4. De-isolate the User partition
> > Is this modelled as bridges and regions?
> 
> Alveo drivers as written today do not use fpga bridge and region
> framework. It seems that if we add support for that framework, it’s
> possible to receive PR program request from kernel outside of xmgmt
> driver? Currently, we can’t support this and PR program can only be
> initiated using XRT’s runtime API in user space.

I'm not 100% sure I understand the concern here, let me reply to what I
think I understand:

You're worried that if you use FPGA region as interface to accept PR
requests something else could attempt to reconfigure the region from
within the kernel using the FPGA Region API?

Assuming I got this right, I don't think this is a big deal. When you
create the regions you control who gets the references to it. 

From what I've seen so far Regions seem to be roughly equivalent to
Partitions, hence my surprise to see a new structure bypassing them.
> 
> Or maybe we have missed some points about the use case for this framework?
> 
> > 
> > > +5. Program the clocks (ClockWiz) driving the User partition
> > > +6. Wait for memory controller (MIG) calibration
> > > +
> > > +`Platform Loading Overview
> > > +<https://xilinx.github.io/XRT/master/html/platforms_partitions.html>`_
> > > +provides more detailed information on platform loading.
> > > +
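The compatibility check at the heart of steps like these is just a
UUID comparison; a minimal sketch (the helper name is invented for
this example) could be::

  #include <linux/errno.h>
  #include <linux/uuid.h>

  /* Deny an xclbin whose parent interface UUID does not match the
   * child interface UUID exported by the currently loaded Shell. */
  static int xmgmt_check_compat(const uuid_t *shell_child_uuid,
                                const uuid_t *xclbin_parent_uuid)
  {
          if (!uuid_equal(shell_child_uuid, xclbin_parent_uuid))
                  return -EINVAL;
          return 0;
  }
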
> > > +xsabin
> > > +------
> > > +
> > > +Each Alveo platform comes packaged with its own xsabin. The
> > > +xsabin is a trusted component of the platform. For format details
> > > +refer to :ref:`xsabin/xclbin Container Format`. xsabin contains
> > > +basic information like UUIDs, platform name and metadata in the
> > > +form of a device tree. See :ref:`Device Tree Usage` for details
> > > +and an example.
> > > +
> > > +xclbin
> > > +------
> > > +
> > > +xclbin is compiled by the end user using the `Vitis
> > > +<https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html>`_
> > > +tool set from Xilinx. The xclbin contains sections describing the
> > > +user compiled acceleration engines/kernels, memory subsystems,
> > > +clocking information etc. It also contains the bitstream for the
> > > +user partition, UUIDs, platform name, etc. xclbin uses the same
> > > +container format as xsabin, which is described below.
> > > +
> > > +
> > > +xsabin/xclbin Container Format
> > > +------------------------------
> > > +
> > > +xclbin/xsabin is an ELF-like binary container format. It is
> > > +structured as a series of sections. There is a file header
> > > +followed by several section headers, which are followed by the
> > > +sections. A section header points to an actual section. There is
> > > +an optional signature at the end. The format is defined by the
> > > +header file ``xclbin.h``. The following figure illustrates a
> > > +typical xclbin::
> > > +
> > > +
> > > +          +---------------------+
> > > +          |                     |
> > > +          |       HEADER        |
> > > +          +---------------------+
> > > +          |   SECTION  HEADER   |
> > > +          |                     |
> > > +          +---------------------+
> > > +          |         ...         |
> > > +          |                     |
> > > +          +---------------------+
> > > +          |   SECTION  HEADER   |
> > > +          |                     |
> > > +          +---------------------+
> > > +          |       SECTION       |
> > > +          |                     |
> > > +          +---------------------+
> > > +          |         ...         |
> > > +          |                     |
> > > +          +---------------------+
> > > +          |       SECTION       |
> > > +          |                     |
> > > +          +---------------------+
> > > +          |      SIGNATURE      |
> > > +          |      (OPTIONAL)     |
> > > +          +---------------------+
> > > +
> > > +
> > > +xclbin/xsabin files can be packaged, un-packaged and inspected
> > > +using the XRT utility **xclbinutil**. xclbinutil is part of the
> > > +XRT open source software stack. The source code for xclbinutil
> > > +can be found at
> > > +https://github.com/Xilinx/XRT/tree/master/src/runtime_src/tools/xclbinutil
> > > +
> > > +For example, to enumerate the contents of an xclbin/xsabin use
> > > +the *--info* switch as shown below::
> > > +
> > > +  xclbinutil --info --input /opt/xilinx/firmware/u50/gen3x16-xdma/blp/test/bandwidth.xclbin
> > > +  xclbinutil --info --input /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/partition.xsabin
> > > +
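A consumer of this container walks the section headers much like ELF
program headers. The sketch below uses the struct and field names as
recalled from XRT's xclbin.h; treat them as assumptions and verify
against the actual header::

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include "xclbin.h"

  /* Sketch: enumerate the sections of an in-memory xclbin/xsabin. */
  static int dump_sections(const struct axlf *top, size_t len)
  {
          uint32_t i;

          if (len < sizeof(*top) || memcmp(top->m_magic, "xclbin2", 8))
                  return -1;      /* not an axlf container */

          for (i = 0; i < top->m_header.m_numSections; i++) {
                  const struct axlf_section_header *sh = &top->m_sections[i];

                  printf("section %u: kind %u offset %llu size %llu\n", i,
                         (unsigned)sh->m_sectionKind,
                         (unsigned long long)sh->m_sectionOffset,
                         (unsigned long long)sh->m_sectionSize);
          }
          return 0;
  }
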
> > > +
> > > +Device Tree Usage
> > > +-----------------
> > > +
> > > +As mentioned previously, xsabin stores metadata which advertises
> > > +the HW subsystems present in a partition. The metadata is stored
> > > +in device tree format with a well defined schema. Subsystem
> > > +instantiations are captured as children of the
> > > +``addressable_endpoints`` node. Subsystem nodes have standard
> > > +attributes like ``reg``, ``interrupts`` etc. Additionally the
> > > +nodes also have PCIe specific attributes:
> > > +``pcie_physical_function`` and ``pcie_bar_mapping``. These
> > > +identify in which PCIe physical function and in which BAR space
> > > +the subsystem resides. The XRT management driver uses this
> > > +information to bind *platform drivers* to the subsystem
> > > +instantiations. The platform drivers are found in the
> > > +**xrt-lib.ko** kernel module defined later. Below is an example
> > > +of the device tree for the Alveo U50 platform::
> > 
> > I might be missing something, but couldn't you structure the addressable
> > endpoints in a way that encode the physical function as a parent / child
> > relation?
> 
> > Alveo driver does not generate the metadata. The metadata is
> > formatted and generated by HW tools when the Alveo HW platform is
> > built.

Sure, but you control the tools that generate the metadata :) Your
userland can structure / process it however it wants / needs?
> 
> > 
> > What are the regs relative to?
> 
> Regs indicates offset of the register on the PCIE BAR of the Alveo device.
> 
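For reference, each ``reg`` above appears to be a 2-cell offset plus
2-cell size, relative to the PF's BAR; decoding it with libfdt could
be sketched as below (the helper name is invented and the cell layout
is an assumption based on the example)::

  #include <linux/errno.h>
  #include <linux/libfdt.h>
  #include <linux/types.h>

  static int xrt_get_reg(const void *fdt, int node, u64 *off, u64 *len)
  {
          int plen;
          const fdt32_t *reg = fdt_getprop(fdt, node, "reg", &plen);

          if (!reg || plen < 4 * (int)sizeof(fdt32_t))
                  return -EINVAL;

          /* <offset-hi offset-lo size-hi size-lo> */
          *off = ((u64)fdt32_to_cpu(reg[0]) << 32) | fdt32_to_cpu(reg[1]);
          *len = ((u64)fdt32_to_cpu(reg[2]) << 32) | fdt32_to_cpu(reg[3]);
          return 0;
  }
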
> > > +
> > > +  /dts-v1/;
> > > +
> > > +  /{
> > > +     logic_uuid = "f465b0a3ae8c64f619bc150384ace69b";
> > > +
> > > +     schema_version {
> > > +             major = <0x01>;
> > > +             minor = <0x00>;
> > > +     };
> > > +
> > > +     interfaces {
> > > +
> > > +             @0 {
> > > +                     interface_uuid = "862c7020a250293e32036f19956669e5";
> > > +             };
> > > +     };
> > > +
> > > +     addressable_endpoints {
> > > +
> > > +             ep_blp_rom_00 {
> > > +                     reg = <0x00 0x1f04000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_bram_ctrl-
> > 1.0\0axi_bram_ctrl";
> > > +             };
> > > +
> > > +             ep_card_flash_program_00 {
> > > +                     reg = <0x00 0x1f06000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_quad_spi-
> > 1.0\0axi_quad_spi";
> > > +                     interrupts = <0x03 0x03>;
> > > +             };
> > > +
> > > +             ep_cmc_firmware_mem_00 {
> > > +                     reg = <0x00 0x1e20000 0x00 0x20000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible =
> > > + "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> > > +
> > > +                     firmware {
> > > +                             firmware_product_name = "cmc";
> > > +                             firmware_branch_name = "u50";
> > > +                             firmware_version_major = <0x01>;
> > > +                             firmware_version_minor = <0x00>;
> > > +                     };
> > > +             };
> > > +
> > > +             ep_cmc_intc_00 {
> > > +                     reg = <0x00 0x1e03000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> > > +                     interrupts = <0x04 0x04>;
> > > +             };
> > > +
> > > +             ep_cmc_mutex_00 {
> > > +                     reg = <0x00 0x1e02000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > > +             };
> > > +
> > > +             ep_cmc_regmap_00 {
> > > +                     reg = <0x00 0x1e08000 0x00 0x2000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible =
> > > + "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> > > +
> > > +                     firmware {
> > > +                             firmware_product_name = "sc-fw";
> > > +                             firmware_branch_name = "u50";
> > > +                             firmware_version_major = <0x05>;
> > > +                     };
> > > +             };
> > > +
> > > +             ep_cmc_reset_00 {
> > > +                     reg = <0x00 0x1e01000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > > +             };
> > > +
> > > +             ep_ddr_mem_calib_00 {
> > > +                     reg = <0x00 0x63000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > > +             };
> > > +
> > > +             ep_debug_bscan_mgmt_00 {
> > > +                     reg = <0x00 0x1e90000 0x00 0x10000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-debug_bridge-
> > 1.0\0debug_bridge";
> > > +             };
> > > +
> > > +             ep_ert_base_address_00 {
> > > +                     reg = <0x00 0x21000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > > +             };
> > > +
> > > +             ep_ert_command_queue_mgmt_00 {
> > > +                     reg = <0x00 0x40000 0x00 0x10000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-ert_command_queue-
> > 1.0\0ert_command_queue";
> > > +             };
> > > +
> > > +             ep_ert_command_queue_user_00 {
> > > +                     reg = <0x00 0x40000 0x00 0x10000>;
> > > +                     pcie_physical_function = <0x01>;
> > > +                     compatible = "xilinx.com,reg_abs-ert_command_queue-
> > 1.0\0ert_command_queue";
> > > +             };
> > > +
> > > +             ep_ert_firmware_mem_00 {
> > > +                     reg = <0x00 0x30000 0x00 0x8000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible =
> > > + "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> > > +
> > > +                     firmware {
> > > +                             firmware_product_name = "ert";
> > > +                             firmware_branch_name = "v20";
> > > +                             firmware_version_major = <0x01>;
> > > +                     };
> > > +             };
> > > +
> > > +             ep_ert_intc_00 {
> > > +                     reg = <0x00 0x23000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> > > +                     interrupts = <0x05 0x05>;
> > > +             };
> > > +
> > > +             ep_ert_reset_00 {
> > > +                     reg = <0x00 0x22000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x00>;
> > > +                     compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> > > +             };
> > > +
> > > +             ep_ert_sched_00 {
> > > +                     reg = <0x00 0x50000 0x00 0x1000>;
> > > +                     pcie_physical_function = <0x01>;
> > > +                     compatible = "xilinx.com,reg_abs-ert_sched-1.0\0ert_sched";
> > > +                     interrupts = <0x09 0x0c>;
> > > +             };
> > > +
[...cut...]

Thanks,
Moritz


* Re: [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files
  2020-12-02 18:57     ` Sonal Santan
@ 2020-12-02 23:47       ` Moritz Fischer
  0 siblings, 0 replies; 29+ messages in thread
From: Moritz Fischer @ 2020-12-02 23:47 UTC (permalink / raw)
  To: Sonal Santan
  Cc: Moritz Fischer, linux-kernel, linux-fpga, Max Zhen, Lizhi Hou,
	Michal Simek, Stefano Stabellini, devicetree

Hi Sonal,

On Wed, Dec 02, 2020 at 06:57:11PM +0000, Sonal Santan wrote:
> Hi Moritz,
> 
> > -----Original Message-----
> > From: Moritz Fischer <mdf@kernel.org>
> > Sent: Monday, November 30, 2020 8:27 PM
> > To: Sonal Santan <sonals@xilinx.com>
> > Cc: linux-kernel@vger.kernel.org; linux-fpga@vger.kernel.org; Max Zhen
> > <maxz@xilinx.com>; Lizhi Hou <lizhih@xilinx.com>; Michal Simek
> > <michals@xilinx.com>; Stefano Stabellini <stefanos@xilinx.com>;
> > devicetree@vger.kernel.org
> > Subject: Re: [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files
> > 
> > Hi Sonal,
> > 
> > On Sat, Nov 28, 2020 at 04:00:34PM -0800, Sonal Santan wrote:
> > > From: Sonal Santan <sonal.santan@xilinx.com>
> > >
> > > Add XRT UAPI header files which describe flash layout, XRT mailbox
> > > protocol, xclBin/axlf FPGA image container format and XRT management
> > > physical function driver ioctl interfaces.
> > >
> > > flash_xrt_data.h:
> > > Layout used by XRT to store private data on flash.
> > >
> > > mailbox_proto.h:
> > > Mailbox opcodes and high level data structures representing various
> > > kinds of information like sensors, clock, etc.
> > >
> > > mailbox_transport.h:
> > > Transport protocol used by mailbox.
> > >
> > > xclbin.h:
> > > Container format used to store compiled FPGA image which includes
> > > bitstream and metadata.
> > 
> > Can these headers be introduced together with the code that uses them as a
> > logical change?
> > 
> > I haven't looked too closely, but it helps reviewing if you can break it into
> > smaller pieces that can stand by themselves.
> > 
> 
> These UAPI header files are used by multiple source files, hence I
> wanted to get them reviewed separately. However, if this is getting in
> the way, in the next version of the patch series I will look into
> arranging the files differently.
> 
> You can browse the changes here as well--
> https://github.com/Xilinx/linux-xoclv2/tree/xrtv2-A

Please submit the code in the form you want the patches to be applied:
in this case, submit the headers with the code that uses them, and
split it up into smaller chunks, please.

Submitting them as a series in the correct order should provide the
proper context for reviewers.

Cheers,
Moritz


* RE: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
  2020-12-02 23:10       ` Moritz Fischer
@ 2020-12-03  3:38         ` Max Zhen
  2020-12-03  4:36           ` Moritz Fischer
  0 siblings, 1 reply; 29+ messages in thread
From: Max Zhen @ 2020-12-03  3:38 UTC (permalink / raw)
  To: Moritz Fischer
  Cc: Sonal Santan, linux-kernel, linux-fpga, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree

Hi Moritz,

Please see my reply below.

Thanks,
-Max

> -----Original Message-----
> From: Moritz Fischer <mdf@kernel.org>
> Sent: Wednesday, December 2, 2020 15:10
> To: Max Zhen <maxz@xilinx.com>
> Cc: Moritz Fischer <mdf@kernel.org>; Sonal Santan <sonals@xilinx.com>; 
> linux-kernel@vger.kernel.org; linux-fpga@vger.kernel.org; Lizhi Hou 
> <lizhih@xilinx.com>; Michal Simek <michals@xilinx.com>; Stefano 
> Stabellini <stefanos@xilinx.com>; devicetree@vger.kernel.org
> Subject: Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a 
> document describing Alveo XRT drivers
> 
> 
> Hi Max,
> 
> On Wed, Dec 02, 2020 at 09:24:29PM +0000, Max Zhen wrote:
> > Hi Moritz,
> >
> > Thanks for your feedback. Please see my reply inline.
> >
> > Thanks,
> > -Max
> >
> > > -----Original Message-----
> > > From: Moritz Fischer <mdf@kernel.org>
> > > Sent: Monday, November 30, 2020 20:55
> > > To: Sonal Santan <sonals@xilinx.com>
> > > Cc: linux-kernel@vger.kernel.org; linux-fpga@vger.kernel.org; Max 
> > > Zhen <maxz@xilinx.com>; Lizhi Hou <lizhih@xilinx.com>; Michal 
> > > Simek <michals@xilinx.com>; Stefano Stabellini 
> > > <stefanos@xilinx.com>; devicetree@vger.kernel.org
> > > Subject: Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a 
> > > document describing Alveo XRT drivers
> > >
> > >
> > > On Sat, Nov 28, 2020 at 04:00:33PM -0800, Sonal Santan wrote:
> > > > From: Sonal Santan <sonal.santan@xilinx.com>
> > > >
> > > > Describe Alveo XRT driver architecture and provide basic 
> > > > overview of Xilinx Alveo platform.
> > > >
> > > > Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
> > > > ---
> > > >  Documentation/fpga/index.rst |   1 +
> > > >  Documentation/fpga/xrt.rst   | 588
> > > +++++++++++++++++++++++++++++++++++
> > > >  2 files changed, 589 insertions(+)  create mode 100644 
> > > > Documentation/fpga/xrt.rst
> > > >

[...cut...]

> > > > +xclbin over the User partition as part of DFX. When a user
> > > > +requests loading of a specific xclbin the xmgmt management
> > > > +driver reads the parent interface UUID specified in the xclbin
> > > > +and matches it with the child interface UUID exported by the
> > > > +Shell to determine if the xclbin is compatible with the Shell.
> > > > +If the match fails, loading of the xclbin is denied.
> > > > +
> > > > +xclbin loading is requested using the ICAP_DOWNLOAD_AXLF ioctl
> > > > +command. When loading an xclbin the xmgmt driver performs the
> > > > +following operations:
> > > > +
> > > > +1. Sanity check the xclbin contents
> > > > +2. Isolate the User partition
> > > > +3. Download the bitstream using the FPGA config engine (ICAP)
> > > > +4. De-isolate the User partition
> > > Is this modelled as bridges and regions?
> >
> > Alveo drivers as written today do not use fpga bridge and region
> > framework. It seems that if we add support for that framework, it’s
> > possible to receive PR program request from kernel outside of xmgmt
> > driver? Currently, we can’t support this and PR program can only be
> > initiated using XRT’s runtime API in user space.
> 
> I'm not 100% sure I understand the concern here, let me reply to what 
> I think I understand:
> 
> You're worried that if you use FPGA region as interface to accept PR 
> requests something else could attempt to reconfigure the region from 
> within the kernel using the FPGA Region API?
> 
> Assuming I got this right, I don't think this is a big deal. When you 
> create the regions you control who gets the references to it.

Thanks for explaining. Yes, I think you got my point :-).

> 
> From what I've seen so far Regions seem to be roughly equivalent to 
> Partitions, hence my surprise to see a new structure bypassing them.

I see where the gap is.

Regions in Linux are very different from the "partitions" we have
defined in xmgmt. A region seems to be a software data structure
representing an area on the FPGA that can be reprogrammed. This area
is protected by the concept of a "bridge", which can be disabled
before programming and re-enabled after it. And you go through the
region when you need to reprogram this area.

The "partition" is part of the main infrastructure of xmgmt driver, which represents a group of subdev drivers for each individual IP (HW subcomponents). Basically, xmgmt root driver is parent of several partitions who is, in turn, the parent of several subdev drivers. The parent manages the life cycle of its children here.

We do have a partition to represent the group of subdevs/IPs in the
reprogrammable area. And we also have partitions representing other
areas which cannot be reprogrammed. So, it is difficult to use
"Region" to implement "partition".

From what you have explained, it seems that even if I use region /
bridge in xmgmt, we can still keep it private to xmgmt instead of
exposing the interface to the outside world, which we can't support
anyway? This means that the region will be used as an internal data
structure for xmgmt. Since we can't simply replace partition with
region, we might as well just use partition throughout the driver,
instead of introducing two data structures and using them both in
different places.

However, if using region/bridge can bring in other benefits, please
let us know and we could see if we can also add this to xmgmt.

> >
> > Or maybe we have missed some points about the use case for this
> framework?
> >

[...cut...]

> > > > +-----------------
> > > > +
> > > > +As mentioned previously, xsabin stores metadata which
> > > > +advertises the HW subsystems present in a partition. The
> > > > +metadata is stored in device tree format with a well defined
> > > > +schema. Subsystem instantiations are captured as children of
> > > > +the ``addressable_endpoints`` node. Subsystem nodes have
> > > > +standard attributes like ``reg``, ``interrupts`` etc.
> > > > +Additionally the nodes also have PCIe specific attributes:
> > > > +``pcie_physical_function`` and ``pcie_bar_mapping``. These
> > > > +identify in which PCIe physical function and in which BAR
> > > > +space the subsystem resides. The XRT management driver uses
> > > > +this information to bind *platform drivers* to the subsystem
> > > > +instantiations. The platform drivers are found in the
> > > > +**xrt-lib.ko** kernel module defined later. Below is an
> > > > +example of the device tree for the Alveo U50 platform::
> > >
> > > I might be missing something, but couldn't you structure the 
> > > addressable endpoints in a way that encode the physical function 
> > > as a parent / child relation?
> >
> > Alveo driver does not generate the metadata. The metadata is 
> > formatted
> and generated by HW tools when the Alveo HW platform is built.
> 
> Sure, but you control the tools that generate the metadata :) Your 
> userland can structure / process it however it wants / needs?

XRT is a runtime software stack; it is not responsible for generating
HW metadata. It is one of the consumers of these data. The shell
design is generated by a sophisticated tool framework which is
difficult to change.

However, we will take this as a feedback for future revision of the tool.

Thanks,
Max


* Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
  2020-12-03  3:38         ` Max Zhen
@ 2020-12-03  4:36           ` Moritz Fischer
  2020-12-04  1:17             ` Max Zhen
  0 siblings, 1 reply; 29+ messages in thread
From: Moritz Fischer @ 2020-12-03  4:36 UTC (permalink / raw)
  To: Max Zhen
  Cc: Moritz Fischer, Sonal Santan, linux-kernel, linux-fpga,
	Lizhi Hou, Michal Simek, Stefano Stabellini, devicetree

Max,

On Thu, Dec 03, 2020 at 03:38:26AM +0000, Max Zhen wrote:
> [...cut...]
> 
> > > > > +xclbin over the User partition as part of DFX. When a user
> > > > > +requests loading of a specific xclbin the xmgmt management
> > > > > +driver reads the parent interface UUID specified in the xclbin
> > > > > +and matches it with the child interface UUID exported by the
> > > > > +Shell to determine if the xclbin is compatible with the Shell.
> > > > > +If the match fails, loading of the xclbin is denied.
> > > > > +
> > > > > +xclbin loading is requested using the ICAP_DOWNLOAD_AXLF ioctl
> > > > > +command. When loading an xclbin the xmgmt driver performs the
> > > > > +following operations:
> > > > > +
> > > > > +1. Sanity check the xclbin contents
> > > > > +2. Isolate the User partition
> > > > > +3. Download the bitstream using the FPGA config engine (ICAP)
> > > > > +4. De-isolate the User partition
> > > > Is this modelled as bridges and regions?
> > >
> > > Alveo drivers as written today do not use fpga bridge and region
> > > framework. It seems that if we add support for that framework, it’s
> > > possible to receive PR program request from kernel outside of xmgmt
> > > driver? Currently, we can’t support this and PR program can only be
> > > initiated using XRT’s runtime API in user space.
> > 
> > I'm not 100% sure I understand the concern here, let me reply to what 
> > I think I understand:
> > 
> > You're worried that if you use FPGA region as interface to accept PR 
> > requests something else could attempt to reconfigure the region from 
> > within the kernel using the FPGA Region API?
> > 
> > Assuming I got this right, I don't think this is a big deal. When you 
> > create the regions you control who gets the references to it.
> 
> Thanks for explaining. Yes, I think you got my point :-).

We can add code to make a region 'static' or 'one-time' or 'fixed'.
> 
> > 
> > From what I've seen so far Regions seem to be roughly equivalent to 
> > Partitions, hence my surprise to see a new structure bypassing them.
> 
> I see where the gap is.
> 
> Regions in Linux are very different from the "partitions" we have
> defined in xmgmt. A region seems to be a software data structure
> representing an area on the FPGA that can be reprogrammed. This area
> is protected by the concept of a "bridge", which can be disabled
> before programming and re-enabled after it. And you go through the
> region when you need to reprogram this area.

Your central management driver can create / destroy regions at will. It
can keep them in a list, array or tree.

Regions can but don't have to have bridges.

If you need to go through the central driver to reprogram a region,
you can use that to figure out which region to program.
> 
> The "partition" is part of the main infrastructure of xmgmt driver, which represents a group of subdev drivers for each individual IP (HW subcomponents). Basically, xmgmt root driver is parent of several partitions who is, in turn, the parent of several subdev drivers. The parent manages the life cycle of its children here.

I don't see how this is conceptually different from what DFL does, and
they managed to use Regions and Bridges.

If things are missing in the framework, please add them instead of
rewriting an entire parallel framework.

> 
> We do have a partition to represent the group of subdevs/IPs in the reprogrammable area. And we also have partitions representing other areas which cannot be reprogrammed. So, it is difficult to use "Region" to implement "partition".

You implement your region's callbacks, and you can return -EINVAL / -ENOTTY
if you want to fail a reprogramming request to a static partition /
region.
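
Roughly something like this (untested sketch, the xmgmt_ name is made
up): fpga_region_program_fpga() calls get_bridges before it ever
touches the FPGA manager, so failing there rejects any in-kernel
reprogramming attempt against the static partition early:

static int xmgmt_static_get_bridges(struct fpga_region *region)
{
	/* static shell partition: refuse reprogramming outright */
	return -ENOTTY;
}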

> From what you have explained, it seems that even if I use region / bridge in xmgmt, we can still keep it private to xmgmt instead of exposing the interface to the outside world, which we can't support anyway? This means that the region will be used as an internal data structure for xmgmt. Since we can't simply replace partition with region, we might as well just use partition throughout the driver, instead of introducing two data structures and using them both in different places.

Think about your partition as an extension to a region that implements
what you need to do for your case of enumerating and reprogramming that
particular piece of your chip.

> However, if using region/bridge can bring in other benefits, please let us know and we could see if we can also add this to xmgmt.

As maintainer I can say it brings the benefit of looking like existing
infrastructure we have. We can add features to the framework as needed
but blanket replacing the entire thing is always a hard sell.
> 
> > >
> > > Or maybe we have missed some points about the use case for this
> > framework?
> > >
> 
> [...cut...]
> 
> > > > > +-----------------
> > > > > +
> > > > > +As mentioned previously xsabin stores metadata which advertise HW
> > > > > +subsystems present in a partition.
> > > > > +The metadata is stored in device tree format with well defined 
> > > > > +schema. Subsystem instantiations are captured as children of 
> > > > > +``addressable_endpoints`` node. Subsystem nodes have standard
> > > > > +attributes like ``reg``, ``interrupts`` etc. Additionally the
> > > > > +nodes also have PCIe specific attributes:
> > > > > +``pcie_physical_function`` and ``pcie_bar_mapping``. These 
> > > > > +identify which PCIe physical function and which BAR space in 
> > > > > +that physical function the subsystem resides. XRT management 
> > > > > +driver uses this information to bind *platform drivers* to the 
> > > > > +subsystem instantiations. The platform drivers are found in 
> > > > > +**xrt-lib.ko** kernel module defined later. Below is an example 
> > > > > +of device tree for Alveo U50
> > > > > +platform::
> > > >
> > > > I might be missing something, but couldn't you structure the 
> > > > addressable endpoints in a way that encode the physical function 
> > > > as a parent / child relation?
> > >
> > > Alveo driver does not generate the metadata. The metadata is formatted
> > > and generated by HW tools when the Alveo HW platform is built.
> > 
> > Sure, but you control the tools that generate the metadata :) Your 
> > userland can structure / process it however it wants / needs?
> 
> XRT is a runtime software stack; it is not responsible for generating HW metadata. It is one of the consumers of this data. The shell design is generated by a sophisticated tool framework which is difficult to change.

The kernel userspace ABI is not going to change once it is merged, which
is why we need to get it right. You can change your userspace code long
after it is merged into the kernel. The other way round does not
work.

If you're going to do device-tree you'll need device-tree maintainers to
be ok with your bindings.

> However, we will take this as feedback for a future revision of the tool.
> 
> Thanks,
> Max

Btw: Can you fix your line-breaks :)

- Moritz

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
  2020-12-03  4:36           ` Moritz Fischer
@ 2020-12-04  1:17             ` Max Zhen
  2020-12-04  4:18               ` Moritz Fischer
  0 siblings, 1 reply; 29+ messages in thread
From: Max Zhen @ 2020-12-04  1:17 UTC (permalink / raw)
  To: Moritz Fischer
  Cc: Sonal Santan, linux-kernel, linux-fpga, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree

Hi Moritz,

I manually fixed some line breaks. Not sure why Outlook is not doing it properly.
Let me know if it still looks bad to you.

Please see my reply below.

> 
> 
> Max,
> 
> On Thu, Dec 03, 2020 at 03:38:26AM +0000, Max Zhen wrote:
> > [...cut...]
> >
> > > > > > +xclbin over the User partition as part of DFX. When a user
> > > > > > +requests loading of a specific xclbin the xmgmt management
> > > > > > +driver reads the parent interface UUID specified in the xclbin
> > > > > > +and matches it with child interface UUID exported by Shell to
> > > > > > +determine if xclbin is compatible with the Shell. If match fails loading of xclbin is denied.
> > > > > > +
> > > > > > +xclbin loading is requested using ICAP_DOWNLOAD_AXLF ioctl command.
> > > > > > +When loading xclbin xmgmt driver performs the following operations:
> > > > > > +
> > > > > > +1. Sanity check the xclbin contents
> > > > > > +2. Isolate the User partition
> > > > > > +3. Download the bitstream using the FPGA config engine (ICAP)
> > > > > > +4. De-isolate the User partition
> > > > > Is this modelled as bridges and regions?
> > > >
> > > > Alveo drivers as written today do not use fpga bridge and region
> > > > framework. It seems that if we add support for that framework, it’s
> > > > possible to receive PR program request from kernel outside of xmgmt driver?
> > > > Currently, we can’t support this and PR program can only be initiated
> > > > using XRT’s runtime API in user space.
> > >
> > > I'm not 100% sure I understand the concern here, let me reply to what
> > > I think I understand:
> > >
> > > You're worried that if you use FPGA region as interface to accept PR
> > > requests something else could attempt to reconfigure the region from
> > > within the kernel using the FPGA Region API?
> > >
> > > Assuming I got this right, I don't think this is a big deal. When you
> > > create the regions you control who gets the references to it.
> >
> > Thanks for explaining. Yes, I think you got my point :-).
> 
> We can add code to make a region 'static' or 'one-time' or 'fixed'.
> >
> > >
> > > From what I've seen so far Regions seem to be roughly equivalent to
> > > Partitions, hence my surprise to see a new structure bypassing them.
> >
> > I see where the gap is.
> >
> > Regions in Linux are very different from the "partitions" we have defined in xmgmt. A Region seems to be a software data structure
> > representing an area on the FPGA that can be reprogrammed. This area is protected by the concept of a "bridge", which can be disabled
> > before programming and re-enabled after it. And you go through the region when you need to reprogram this area.
> 
> Your central management driver can create / destroy regions at will. It
> can keep them in a list, array or tree.
> 
> Regions can but don't have to have bridges.
> 
> If you need to go through the central driver to reprogram a region,
> you can use that to figure out which region to program.

That sounds fine. I can create a region and call into it from xmgmt for
PR programming. The region will then call xmgmt's FPGA manager
to program it.
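
Something like the following is what I have in mind (just a sketch
against the current region API, includes omitted; names with the
xmgmt_ prefix are made up, and the region handle stays private to
xmgmt and is never handed out):

static int xmgmt_get_bridges(struct fpga_region *region)
{
	/* would collect the ULP gate bridge(s) guarding the PR area */
	return fpga_bridge_get_to_list(region->dev.parent, region->info,
				       &region->bridge_list);
}

static struct fpga_region *xmgmt_create_pr_region(struct device *dev,
						  struct fpga_manager *mgr)
{
	struct fpga_region *region;
	int ret;

	region = devm_fpga_region_create(dev, mgr, xmgmt_get_bridges);
	if (!region)
		return ERR_PTR(-ENOMEM);

	ret = fpga_region_register(region);
	if (ret)
		return ERR_PTR(ret);

	/* kept in xmgmt private state, not exported to anyone else */
	return region;
}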

> >
> > The "partition" is part of the main infrastructure of xmgmt driver, which represents a group of subdev drivers for each individual IP
> > (HW subcomponents). Basically, xmgmt root driver is parent of several partitions who is, in turn, the parent of several subdev drivers.
> > The parent manages the life cycle of its children here.
> 
> I don't see how this is conceptually different from what DFL does, and
> they managed to use Regions and Bridges.
> 
> If things are missing in the framework, please add them instead of
> rewriting an entire parallel framework.
> 
> >
> > We do have a partition to represent the group of subdevs/IPs in the reprogrammable area. And we also have partitions
> > representing other areas which cannot be reprogrammed. So, it is difficult to use "Region" to implement "partition".
> 
> You implement your region's callbacks, and you can return -EINVAL / -ENOTTY
> if you want to fail a reprogramming request to a static partition /
> region.
> 
> > From what you have explained, it seems that even if I use region / bridge in xmgmt, we can still keep it private to xmgmt instead of
> > exposing the interface to the outside world, which we can't support anyway? This means that the region will be used as an internal data
> > structure for xmgmt. Since we can't simply replace partition with region, we might as well just use partition throughout the driver,
> > instead of introducing two data structures and using them both in different places.
> 
> Think about your partition as an extension to a region that implements
> what you need to do for your case of enumerating and reprogramming that
> particular piece of your chip.

Yes, we can add region / bridges to represent the PR area and use them in
our code path for reprogramming the PR area. I think what we will do is
instantiate a region for the PR area and associate it with the
FPGA manager in xmgmt for reprogramming it. We can also instantiate
bridges and map the "ULP gate" subdev driver to them in xmgmt. Thus, we
could incorporate the region and bridge data structures in xmgmt for PR
reprogramming.
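
For the ULP gate mapping, a minimal sketch of what I mean (the xmgmt_
names are made up, error handling omitted):

static int xmgmt_br_enable_set(struct fpga_bridge *bridge, bool enable)
{
	struct xmgmt_bridge *br = bridge->priv;

	/* forward to the ULP gate subdev: open before PR, close after */
	return xmgmt_ulp_gate_set(br->gate_pdev, enable);
}

static const struct fpga_bridge_ops xmgmt_br_ops = {
	.enable_set = xmgmt_br_enable_set,
};

With that in place, fpga_region_program_fpga() would disable the gate,
call our FPGA manager to write the partial bitstream, and re-enable the
gate, which matches the isolate/program/de-isolate sequence above.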

This will be a non-trivial change for us. I'd like to confirm that this is what
you are looking for before we start working on the change. Let us know :-).

> 
> > However, if using region/bridge can bring in other benefits, please let us know and we could see if we can also add this to xmgmt.
> 
> As maintainer I can say it brings the benefit of looking like existing
> infrastructure we have. We can add features to the framework as needed
> but blanket replacing the entire thing is always a hard sell.
> >
> > > >
> > > > Or maybe we have missed some points about the use case for this
> > > > framework?
> > > >
> >
> > [...cut...]
> >
> > > > > > +-----------------
> > > > > > +
> > > > > > +As mentioned previously xsabin stores metadata which advertise HW
> > > > > > +subsystems present in a partition.
> > > > > > +The metadata is stored in device tree format with well defined
> > > > > > +schema. Subsystem instantiations are captured as children of
> > > > > > +``addressable_endpoints`` node. Subsystem nodes have standard
> > > > > > +attributes like ``reg``, ``interrupts`` etc. Additionally the
> > > > > > +nodes also have PCIe specific attributes:
> > > > > > +``pcie_physical_function`` and ``pcie_bar_mapping``. These
> > > > > > +identify which PCIe physical function and which BAR space in
> > > > > > +that physical function the subsystem resides. XRT management
> > > > > > +driver uses this information to bind *platform drivers* to the
> > > > > > +subsystem instantiations. The platform drivers are found in
> > > > > > +**xrt-lib.ko** kernel module defined later. Below is an example
> > > > > > +of device tree for Alveo U50
> > > > > > +platform::
> > > > >
> > > > > I might be missing something, but couldn't you structure the
> > > > > addressable endpoints in a way that encode the physical function
> > > > > as a parent / child relation?
> > > >
> > > > Alveo driver does not generate the metadata. The metadata is formatted
> > > > and generated by HW tools when the Alveo HW platform is built.
> > >
> > > Sure, but you control the tools that generate the metadata :) Your
> > > userland can structure / process it however it wants / needs?
> >
> > XRT is a runtime software stack; it is not responsible for generating HW metadata. It is one of the consumers of this data. The shell
> > design is generated by a sophisticated tool framework which is difficult to change.
> 
> The kernel userspace ABI is not going to change once it is merged, which
> is why we need to get it right. You can change your userspace code long
> after it is merged into the kernel. The other way round does not
> work.
> 
> If you're going to do device-tree you'll need device-tree maintainers to
> be ok with your bindings.
> 


Yes, we'll wait for the device-tree maintainers to chime in here :-).

Thanks,
Max

> > However, we will take this as feedback for a future revision of the tool.
> >
> > Thanks,
> > Max
> 
> Btw: Can you fix your line-breaks :)
> 
> - Moritz

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers
  2020-12-04  1:17             ` Max Zhen
@ 2020-12-04  4:18               ` Moritz Fischer
  0 siblings, 0 replies; 29+ messages in thread
From: Moritz Fischer @ 2020-12-04  4:18 UTC (permalink / raw)
  To: Max Zhen
  Cc: Moritz Fischer, Sonal Santan, linux-kernel, linux-fpga,
	Lizhi Hou, Michal Simek, Stefano Stabellini, devicetree

On Fri, Dec 04, 2020 at 01:17:37AM +0000, Max Zhen wrote:
> Hi Moritz,
> 
> I manually fixed some line breaks. Not sure why Outlook is not doing it properly.
> Let me know if it still looks bad to you.

That might just be Outlook :)
> 
> Please see my reply below.
> 
> > 
> > 
> > Max,
> > 
> > On Thu, Dec 03, 2020 at 03:38:26AM +0000, Max Zhen wrote:
> > > [...cut...]
> > >
> > > > > > > +xclbin over the User partition as part of DFX. When a user
> > > > > > > +requests loading of a specific xclbin the xmgmt management
> > > > > > > +driver reads the parent interface UUID specified in the xclbin
> > > > > > > +and matches it with child interface UUID exported by Shell to
> > > > > > > +determine if xclbin is compatible with the Shell. If match fails loading of xclbin is denied.
> > > > > > > +
> > > > > > > +xclbin loading is requested using ICAP_DOWNLOAD_AXLF ioctl command.
> > > > > > > +When loading xclbin xmgmt driver performs the following operations:
> > > > > > > +
> > > > > > > +1. Sanity check the xclbin contents
> > > > > > > +2. Isolate the User partition
> > > > > > > +3. Download the bitstream using the FPGA config engine (ICAP)
> > > > > > > +4. De-isolate the User partition
> > > > > > Is this modelled as bridges and regions?
> > > > >
> > > > > Alveo drivers as written today do not use fpga bridge and region
> > > > > framework. It seems that if we add support for that framework, it’s
> > > > > possible to receive PR program request from kernel outside of xmgmt driver?
> > > > > Currently, we can’t support this and PR program can only be initiated
> > > > > using XRT’s runtime API in user space.
> > > >
> > > > I'm not 100% sure I understand the concern here, let me reply to what
> > > > I think I understand:
> > > >
> > > > You're worried that if you use FPGA region as interface to accept PR
> > > > requests something else could attempt to reconfigure the region from
> > > > within the kernel using the FPGA Region API?
> > > >
> > > > Assuming I got this right, I don't think this is a big deal. When you
> > > > create the regions you control who gets the references to it.
> > >
> > > Thanks for explaining. Yes, I think you got my point :-).
> > 
> > We can add code to make a region 'static' or 'one-time' or 'fixed'.
> > >
> > > >
> > > > From what I've seen so far Regions seem to be roughly equivalent to
> > > > Partitions, hence my surprise to see a new structure bypassing them.
> > >
> > > I see where the gap is.
> > >
> > > Regions in Linux are very different from the "partitions" we have defined in xmgmt. A Region seems to be a software data structure
> > > representing an area on the FPGA that can be reprogrammed. This area is protected by the concept of a "bridge", which can be disabled
> > > before programming and re-enabled after it. And you go through the region when you need to reprogram this area.
> > 
> > Your central management driver can create / destroy regions at will. It
> > can keep them in a list, array or tree.
> > 
> > Regions can but don't have to have bridges.
> > 
> > If you need to go through the central driver to reprogram a region,
> > you can use that to figure out which region to program.
> 
> That sounds fine. I can create a region and call into it from xmgmt for
> PR programming. The region will then call xmgmt's FPGA manager
> to program it.

That sounds close to what I'd expect.
> 
> > >
> > > The "partition" is part of the main infrastructure of xmgmt driver, which represents a group of subdev drivers for each individual IP
> > > (HW subcomponents). Basically, xmgmt root driver is parent of several partitions who is, in turn, the parent of several subdev drivers.
> > > The parent manages the life cycle of its children here.
> > 
> > I don't see how this is conceptually different from what DFL does, and
> > they managed to use Regions and Bridges.
> > 
> > If things are missing in the framework, please add them instead of
> > rewriting an entire parallel framework.
> > 
> > >
> > > We do have a partition to represent the group of subdevs/IPs in the reprogrammable area. And we also have partitions
> > > representing other areas which cannot be reprogrammed. So, it is difficult to use "Region" to implement "partition".
> > 
> > You implement your region's callbacks, and you can return -EINVAL / -ENOTTY
> > if you want to fail a reprogramming request to a static partition /
> > region.
> > 
> > > From what you have explained, it seems that even if I use region / bridge in xmgmt, we can still keep it private to xmgmt instead of
> > > exposing the interface to the outside world, which we can't support anyway? This means that the region will be used as an internal data
> > > structure for xmgmt. Since we can't simply replace partition with region, we might as well just use partition throughout the driver,
> > > instead of introducing two data structures and using them both in different places.
> > 
> > Think about your partition as an extension to a region that implements
> > what you need to do for your case of enumerating and reprogramming that
> > particular piece of your chip.
> 
> Yes, we can add region / bridges to represent the PR area and use them in
> our code path for reprogramming the PR area. I think what we will do is
> instantiate a region for the PR area and associate it with the
> FPGA manager in xmgmt for reprogramming it. We can also instantiate
> bridges and map the "ULP gate" subdev driver to them in xmgmt. Thus, we
> could incorporate the region and bridge data structures in xmgmt for PR
> reprogramming.

I'd need to take another look, but the ULP gate sounds like a bridge (or
close to it).
 
> This will be a non-trivial change for us. I'd like to confirm that this is what
> you are looking for before we start working on the change. Let us know :-).

I understand. It looks like the right direction. Let's discuss code when
we have code to look at.

It may take a couple of iterations to get it all sorted.

That's normal when you show up with that much code all at once :)

Cheers,
Moritz

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver
  2020-12-02  3:00   ` Xu Yilun
@ 2020-12-04  4:40     ` Max Zhen
  0 siblings, 0 replies; 29+ messages in thread
From: Max Zhen @ 2020-12-04  4:40 UTC (permalink / raw)
  To: Xu Yilun, Sonal Santan
  Cc: linux-kernel, linux-fpga, lizhih, michal.simek, stefanos, devicetree

Hi Yilun,


On 12/1/20 7:00 PM, Xu Yilun wrote:
> 
> 
>> +static int xmgmt_main_event_cb(struct platform_device *pdev,
>> +     enum xrt_events evt, void *arg)
>> +{
>> +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> +     struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
>> +     enum xrt_subdev_id id;
>> +     int instance;
>> +     size_t fwlen;
>> +
>> +     switch (evt) {
>> +     case XRT_EVENT_POST_CREATION: {
>> +             id = esd->xevt_subdev_id;
>> +             instance = esd->xevt_subdev_instance;
>> +             xrt_info(pdev, "processing event %d for (%d, %d)",
>> +                     evt, id, instance);
>> +
>> +             if (id == XRT_SUBDEV_GPIO)
>> +                     xmm->gpio_ready = true;
>> +             else if (id == XRT_SUBDEV_QSPI)
>> +                     xmm->flash_ready = true;
>> +             else
>> +                     BUG_ON(1);
>> +
>> +             if (xmm->gpio_ready && xmm->flash_ready) {
>> +                     int rc;
>> +
>> +                     rc = load_firmware_from_disk(pdev, &xmm->firmware_blp,
>> +                             &fwlen);
>> +                     if (rc != 0) {
>> +                             rc = load_firmware_from_flash(pdev,
>> +                                     &xmm->firmware_blp, &fwlen);
> 
> I'm curious how, before the shell metadata is loaded, the QSPI
> subdev is enumerated and gets to work? Is the QSPI DT info itself
> stored in the metadata?

No, it is not from the shell metadata. The QSPI subdev info is
discovered from a ROM located on the PCIe BAR pointed to by the VSEC
capability found in config space.
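
Roughly (a sketch only; the 0x20 VSEC ID below is a placeholder, the
real IDs and ROM layout are defined in the patch series):

static int xmgmt_find_vsec(struct pci_dev *pdev)
{
	int pos = 0;
	u32 hdr;

	while ((pos = pci_find_next_ext_capability(pdev, pos,
						   PCI_EXT_CAP_ID_VNDR))) {
		pci_read_config_dword(pdev, pos + PCI_VNDR_HEADER, &hdr);
		if (PCI_VNDR_HEADER_ID(hdr) == 0x20) /* placeholder ID */
			return pos; /* BAR/offset of the ROM follow the header */
	}
	return -ENOENT;
}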

> 
> I didn't find the creation of leaf platform devices, maybe I can find
> the answer in the missing Patch #5?

Leaf drivers are children of the partition driver. They are created in
xrt_part_create_leaves() in xrt-partition.c.

Thanks,
Max

> 
> Thanks,
> Yilun
> 
>> +                     }
>> +                     if (rc == 0 && is_valid_firmware(pdev,
>> +                         xmm->firmware_blp, fwlen))
>> +                             (void) xmgmt_create_blp(xmm);
>> +                     else
>> +                             xrt_err(pdev,
>> +                                     "failed to find firmware, giving up");
>> +                     xmm->evt_hdl = NULL;
>> +             }
>> +             break;
>> +     }

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver
       [not found]     ` <BY5PR02MB60683E3470179E6AD10FEE26B9F20@BY5PR02MB6068.namprd02.prod.outlook.com>
@ 2020-12-04  6:22       ` Sonal Santan
  0 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-12-04  6:22 UTC (permalink / raw)
  To: Moritz Fischer
  Cc: linux-kernel, linux-fpga, Max Zhen, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree

Hello Moritz,

> -----Original Message-----
> From: Moritz Fischer <mdf@kernel.org>
> Sent: Tuesday, December 1, 2020 12:52
> To: Sonal Santan <sonals@xilinx.com>
> Cc: linux-kernel@vger.kernel.org; linux-fpga@vger.kernel.org; Max Zhen
> <maxz@xilinx.com>; Lizhi Hou <lizhih@xilinx.com>; Michal
> Simek <michals@xilinx.com>; Stefano Stabellini <stefanos@xilinx.com>;
> devicetree@vger.kernel.org
> Subject: Re: [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical
> function driver
>
>
> Hi Sonal,
>
> On Sat, Nov 28, 2020 at 04:00:39PM -0800, Sonal Santan wrote:
> > From: Sonal Santan <sonal.santan@xilinx.com>
> >
> > Add management physical function driver core. The driver attaches
> > to management physical function of Alveo devices. It instantiates
> > the root driver and one or more partition drivers which in turn
> > instantiate platform drivers. The instantiation of partition and
> > platform drivers is completely data driven. The driver integrates
> > with FPGA manager and provides xclbin download service.
> >
> > Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
> > ---
> >  drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c     | 194 ++++
> >  drivers/fpga/alveo/mgmt/xmgmt-fmgr.h         |  29 +
> >  drivers/fpga/alveo/mgmt/xmgmt-main-impl.h    |  36 +
> >  drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c | 930 +++++++++++++++++++
> >  drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c     | 190 ++++
> >  drivers/fpga/alveo/mgmt/xmgmt-main.c         | 843 +++++++++++++++++
> >  drivers/fpga/alveo/mgmt/xmgmt-root.c         | 375 ++++++++
> >  7 files changed, 2597 insertions(+)
> >  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
> >  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
> >  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
> >  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
> >  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
> >  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-main.c
> >  create mode 100644 drivers/fpga/alveo/mgmt/xmgmt-root.c
> >
> > diff --git a/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c b/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
> > new file mode 100644
> > index 000000000000..d451b5a2c291
> > --- /dev/null
> > +++ b/drivers/fpga/alveo/mgmt/xmgmt-fmgr-drv.c
> > @@ -0,0 +1,194 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Xilinx Alveo Management Function Driver
> > + *
> > + * Copyright (C) 2019-2020 Xilinx, Inc.
> > + * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
> > + *
> > + * Authors: Sonal.Santan@xilinx.com
> > + */
> > +
> > +#include <linux/cred.h>
> > +#include <linux/efi.h>
> > +#include <linux/fpga/fpga-mgr.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/module.h>
> > +#include <linux/vmalloc.h>
> > +
> > +#include "xrt-subdev.h"
> > +#include "xmgmt-fmgr.h"
> > +#include "xrt-axigate.h"
> > +#include "xmgmt-main-impl.h"
> > +
> > +/*
> > + * Container to capture and cache full xclbin as it is passed in blocks by FPGA
> > + * Manager. Driver needs access to full xclbin to walk through xclbin sections.
> > + * FPGA Manager's .write() backend sends incremental blocks without any
> > + * knowledge of xclbin format forcing us to collect the blocks and stitch them
> > + * together here.
> > + */
> > +
> > +struct xfpga_klass {
> Nit: xfpga_priv or xfpga_drvdata?
> > +     const struct platform_device *pdev;
> > +     struct axlf         *blob;
> > +     char                 name[64];
> Nit: 64 could be a named constant ?
> > +     size_t               count;
> > +     size_t               total_count;
> > +     struct mutex         axlf_lock;
> > +     int                  reader_ref;
> > +     enum fpga_mgr_states state;
> > +     enum xfpga_sec_level sec_level;
> This appears unused, do you want to add this with the code that uses it?

This hook is for validating the signature of the FPGA image. Will look into
adding it in the next version of the patch.

> > +};
>
> Maybe add some kerneldoc markup?

Will do in the next version

> > +
> > +struct key *xfpga_keys;
> Appears unused, can you introduce this together with the code using it?
> > +
> > +static int xmgmt_pr_write_init(struct fpga_manager *mgr,
> > +     struct fpga_image_info *info, const char *buf, size_t count)
> > +{
> > +     struct xfpga_klass *obj = mgr->priv;
> > +     const struct axlf *bin = (const struct axlf *)buf;
> Nit: Reverse x-mas tree please.
>
> xxxxxx
> xxxx
> xxx
> x

Will update in the next version

> > +
> > +     if (count < sizeof(struct axlf)) {
> > +             obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
> > +             return -EINVAL;
> > +     }
> > +
> > +     if (count > bin->m_header.m_length) {
> > +             obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
> > +             return -EINVAL;
> > +     }
> > +
> > +     /* Free up the previous blob */
> > +     vfree(obj->blob);
> > +     obj->blob = vmalloc(bin->m_header.m_length);
> > +     if (!obj->blob) {
> > +             obj->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
> > +             return -ENOMEM;
> > +     }
> > +
> > +     xrt_info(obj->pdev, "Begin download of xclbin %pUb of length %lld B",
> > +             &bin->m_header.uuid, bin->m_header.m_length);
> We already have framework level prints for that (admittedly somewhat
> less verbose). Please remove.

Will update in the next version
> > +
> > +     obj->count = 0;
> > +     obj->total_count = bin->m_header.m_length;
> > +     obj->state = FPGA_MGR_STATE_WRITE_INIT;
> Does the framework state tracking not work for you?
> > +     return 0;
> > +}
> > +
> > +static int xmgmt_pr_write(struct fpga_manager *mgr,
> > +     const char *buf, size_t count)
> > +{
> > +     struct xfpga_klass *obj = mgr->priv;
> > +     char *curr = (char *)obj->blob;
> > +
> > +     if ((obj->state != FPGA_MGR_STATE_WRITE_INIT) &&
> > +             (obj->state != FPGA_MGR_STATE_WRITE)) {
> > +             obj->state = FPGA_MGR_STATE_WRITE_ERR;
> > +             return -EINVAL;
> > +     }
> > +
> > +     curr += obj->count;
> > +     obj->count += count;
> > +
> > +     /*
> > +      * The xclbin buffer should not be longer than advertised in the header
> > +      */
> > +     if (obj->total_count < obj->count) {
> > +             obj->state = FPGA_MGR_STATE_WRITE_ERR;
> > +             return -EINVAL;
> > +     }
> > +
> > +     xrt_info(obj->pdev, "Copying block of %zu B of xclbin", count);
> Please drop those.
> > +     memcpy(curr, buf, count);
>
> I'm confused. Why are we just copying things around here. What picks
> this up afterwards?

The current implementation caches the full FPGA image in a local buffer so
we can walk the various segments of the xclbin container to identify bitstreams,
clock scaling metadata, etc. The full image is also needed for signature verification.
Does the framework guarantee a single .write call in case fpga_image_info is
created with the complete buffer? That would remove the need to cache the buffer.

> > +     obj->state = FPGA_MGR_STATE_WRITE;
> > +     return 0;
> > +}
> > +
> > +
> > +static int xmgmt_pr_write_complete(struct fpga_manager *mgr,
> > +                                struct fpga_image_info *info)
> > +{
> > +     int result = 0;
> > +     struct xfpga_klass *obj = mgr->priv;
> > +
> > +     if (obj->state != FPGA_MGR_STATE_WRITE) {
> > +             obj->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
> > +             return -EINVAL;
> > +     }
> > +
> > +     /* Check if we got the complete xclbin */
> > +     if (obj->blob->m_header.m_length != obj->count) {
> > +             obj->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
> > +             return -EINVAL;
> > +     }
> > +
> > +     result = xmgmt_ulp_download((void *)obj->pdev, obj->blob);
> > +
> > +     obj->state = result ? FPGA_MGR_STATE_WRITE_COMPLETE_ERR :
> > +             FPGA_MGR_STATE_WRITE_COMPLETE;
> Why the separate state tracking?
> > +     xrt_info(obj->pdev, "Finish downloading of xclbin %pUb: %d",
> > +             &obj->blob->m_header.uuid, result);
> > +     vfree(obj->blob);
> > +     obj->blob = NULL;
> > +     obj->count = 0;
> > +     return result;
> > +}
> > +
> > +static enum fpga_mgr_states xmgmt_pr_state(struct fpga_manager *mgr)
> > +{
> > +     struct xfpga_klass *obj = mgr->priv;
> > +
> > +     return obj->state;
> > +}
> > +
> > +static const struct fpga_manager_ops xmgmt_pr_ops = {
> > +     .initial_header_size = sizeof(struct axlf),
> > +     .write_init = xmgmt_pr_write_init,
> > +     .write = xmgmt_pr_write,
> > +     .write_complete = xmgmt_pr_write_complete,
> > +     .state = xmgmt_pr_state,
> > +};
> > +
> > +
> > +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev)
> > +{
> > +     struct fpga_manager *fmgr;
> > +     int ret = 0;
> > +     struct xfpga_klass *obj = vzalloc(sizeof(struct xfpga_klass));
> > +
> > +     xrt_info(pdev, "probing...");
> Drop this, please.
> > +     if (!obj)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
> > +     obj->state = FPGA_MGR_STATE_UNKNOWN;
> > +     obj->pdev = pdev;
> > +     fmgr = fpga_mgr_create(&pdev->dev,
> > +                            obj->name,
> > +                            &xmgmt_pr_ops,
> > +                            obj);
> I think (eyeballed) this fits on two lines?
> > +     if (!fmgr)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     obj->sec_level = XFPGA_SEC_NONE;
> Seems unused so far, please drop until it's used.
> > +     ret = fpga_mgr_register(fmgr);
> > +     if (ret) {
> > +             fpga_mgr_free(fmgr);
> > +             kfree(obj);
> > +             return ERR_PTR(ret);
> > +     }
> > +     mutex_init(&obj->axlf_lock);
> > +     return fmgr;
> Since this patchset will wait at least till next cycle, you might want
> to look into the devm_* functions for registering and creating FPGA
> Managers.

Will address this in next revision of the patch.
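
For the next revision, something along these lines (a sketch, assuming
the devm_ allocation variant; fpga_mgr_unregister() is still needed on
remove, only the struct allocation becomes device-managed):

	fmgr = devm_fpga_mgr_create(&pdev->dev, obj->name, &xmgmt_pr_ops, obj);
	if (!fmgr)
		return ERR_PTR(-ENOMEM);

	ret = fpga_mgr_register(fmgr);
	if (ret)
		return ERR_PTR(ret);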

>
> > +}
> > +
> > +int xmgmt_fmgr_remove(struct fpga_manager *fmgr)
> > +{
> > +     struct xfpga_klass *obj = fmgr->priv;
> > +
> > +     mutex_destroy(&obj->axlf_lock);
> > +     obj->state = FPGA_MGR_STATE_UNKNOWN;
> > +     fpga_mgr_unregister(fmgr);
> > +     vfree(obj->blob);
> > +     vfree(obj);
> > +     return 0;
> > +}
> > diff --git a/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h b/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
> > new file mode 100644
> > index 000000000000..2beba649609f
> > --- /dev/null
> > +++ b/drivers/fpga/alveo/mgmt/xmgmt-fmgr.h
> > @@ -0,0 +1,29 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Xilinx Alveo Management Function Driver
> > + *
> > + * Copyright (C) 2019-2020 Xilinx, Inc.
> > + * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
> > + *
> > + * Authors: Sonal.Santan@xilinx.com
> > + */
> > +
> > +#ifndef      _XMGMT_FMGR_H_
> > +#define      _XMGMT_FMGR_H_
> > +
> > +#include <linux/fpga/fpga-mgr.h>
> > +#include <linux/mutex.h>
> > +
> > +#include <linux/xrt/xclbin.h>
> > +
> > +enum xfpga_sec_level {
> > +     XFPGA_SEC_NONE = 0,
> > +     XFPGA_SEC_DEDICATE,
> > +     XFPGA_SEC_SYSTEM,
> > +     XFPGA_SEC_MAX = XFPGA_SEC_SYSTEM,
> > +};
> > +
> > +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev);
> > +int xmgmt_fmgr_remove(struct fpga_manager *fmgr);
> > +
> > +#endif
> > diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h b/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
> > new file mode 100644
> > index 000000000000..c89024cb8d46
> > --- /dev/null
> > +++ b/drivers/fpga/alveo/mgmt/xmgmt-main-impl.h
> > @@ -0,0 +1,36 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (C) 2020 Xilinx, Inc.
> > + *
> > + * Authors:
> > + *   Lizhi Hou <Lizhi.Hou@xilinx.com>
> > + *   Cheng Zhen <maxz@xilinx.com>
> > + */
> > +
> > +#ifndef      _XMGMT_MAIN_IMPL_H_
> > +#define      _XMGMT_MAIN_IMPL_H_
> > +
> > +#include "xrt-subdev.h"
> > +#include "xmgmt-main.h"
> > +
> > +extern struct platform_driver xmgmt_main_driver;
> > +extern struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[];
> > +
> > +extern int xmgmt_ulp_download(struct platform_device *pdev, const void *xclbin);
> > +extern int bitstream_axlf_mailbox(struct platform_device *pdev,
> > +     const void *xclbin);
> > +extern int xmgmt_hot_reset(struct platform_device *pdev);
> > +
> > +/* Getting dtb for specified partition. Caller should vfree returned dtb .*/
> > +extern char *xmgmt_get_dtb(struct platform_device *pdev,
> > +     enum provider_kind kind);
> > +extern char *xmgmt_get_vbnv(struct platform_device *pdev);
> > +extern int xmgmt_get_provider_uuid(struct platform_device *pdev,
> > +     enum provider_kind kind, uuid_t *uuid);
> > +
> > +extern void *xmgmt_pdev2mailbox(struct platform_device *pdev);
> > +extern void *xmgmt_mailbox_probe(struct platform_device *pdev);
> > +extern void xmgmt_mailbox_remove(void *handle);
> > +extern void xmgmt_peer_notify_state(void *handle, bool online);
> > +
> > +#endif       /* _XMGMT_MAIN_IMPL_H_ */
> > diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c b/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
> > new file mode 100644
> > index 000000000000..b3d82fc3618b
> > --- /dev/null
> > +++ b/drivers/fpga/alveo/mgmt/xmgmt-main-mailbox.c
> > @@ -0,0 +1,930 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Xilinx Alveo FPGA MGMT PF entry point driver
> > + *
> > + * Copyright (C) 2020 Xilinx, Inc.
> > + *
> > + * Peer communication via mailbox
> > + *
> > + * Authors:
> > + *      Cheng Zhen <maxz@xilinx.com>
> > + */
> > +
> > +#include <linux/crc32c.h>
> > +#include <linux/xrt/mailbox_proto.h>
> > +#include "xmgmt-main-impl.h"
> > +#include "xrt-mailbox.h"
> > +#include "xrt-cmc.h"
> > +#include "xrt-metadata.h"
> > +#include "xrt-xclbin.h"
> > +#include "xrt-clock.h"
> > +#include "xrt-calib.h"
> > +#include "xrt-icap.h"
> > +
> > +struct xmgmt_mailbox {
> > +     struct platform_device *pdev;
> > +     struct platform_device *mailbox;
> > +     struct mutex lock;
> > +     void *evt_hdl;
> > +     char *test_msg;
> > +     bool peer_in_same_domain;
> > +};
> > +
> > +#define      XMGMT_MAILBOX_PRT_REQ(xmbx, send, request, sw_ch) do {    \
> > +     const char *dir = (send) ? ">>>>>" : "<<<<<";                   \
> > +                                                                     \
> > +     if ((request)->req == XCL_MAILBOX_REQ_PEER_DATA) {              \
> > +             struct xcl_mailbox_peer_data *p =                       \
> > +                     (struct xcl_mailbox_peer_data *)(request)->data;\
> > +                                                                     \
> > +             xrt_info((xmbx)->pdev, "%s(%s) %s%s",                   \
> > +                     mailbox_req2name((request)->req),               \
> > +                     mailbox_group_kind2name(p->kind),               \
> > +                     dir, mailbox_chan2name(sw_ch));                 \
> > +     } else {                                                        \
> > +             xrt_info((xmbx)->pdev, "%s %s%s",                       \
> > +                     mailbox_req2name((request)->req),               \
> > +                     dir, mailbox_chan2name(sw_ch));                 \
> > +     }                                                               \
> > +} while (0)
> > +#define      XMGMT_MAILBOX_PRT_REQ_SEND(xmbx, req, sw_ch)              \
> > +     XMGMT_MAILBOX_PRT_REQ(xmbx, true, req, sw_ch)
> > +#define      XMGMT_MAILBOX_PRT_REQ_RECV(xmbx, req, sw_ch)              \
> > +     XMGMT_MAILBOX_PRT_REQ(xmbx, false, req, sw_ch)
> > +#define      XMGMT_MAILBOX_PRT_RESP(xmbx, resp)                              \
> > +     xrt_info((xmbx)->pdev, "respond %ld bytes >>>>>%s",             \
> > +     (resp)->xmip_data_size, mailbox_chan2name((resp)->xmip_sw_ch))
> > +
> > +static inline struct xmgmt_mailbox *pdev2mbx(struct platform_device *pdev)
> > +{
> > +     return (struct xmgmt_mailbox *)xmgmt_pdev2mailbox(pdev);
> > +}
> > +
> > +static void xmgmt_mailbox_post(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch, void *buf, size_t len)
> > +{
> > +     int rc;
> > +     struct xrt_mailbox_ioctl_post post = {
> > +             .xmip_req_id = msgid,
> > +             .xmip_sw_ch = sw_ch,
> > +             .xmip_data = buf,
> > +             .xmip_data_size = len
> > +     };
> > +
> > +     BUG_ON(!mutex_is_locked(&xmbx->lock));
> > +
> > +     if (!xmbx->mailbox) {
> > +             xrt_err(xmbx->pdev, "mailbox not available");
> > +             return;
> > +     }
> > +
> > +     if (msgid == 0) {
> > +             XMGMT_MAILBOX_PRT_REQ_SEND(xmbx,
> > +                     (struct xcl_mailbox_req *)buf, sw_ch);
> > +     } else {
> > +             XMGMT_MAILBOX_PRT_RESP(xmbx, &post);
> > +     }
> > +
> > +     rc = xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_POST, &post);
> > +     if (rc)
> > +             xrt_err(xmbx->pdev, "failed to post msg: %d", rc);
> > +}
> > +
> > +static void xmgmt_mailbox_notify(struct xmgmt_mailbox *xmbx, bool sw_ch,
> > +     struct xcl_mailbox_req *req, size_t len)
> > +{
> > +     xmgmt_mailbox_post(xmbx, 0, sw_ch, req, len);
> > +}
> > +
> > +static void xmgmt_mailbox_respond(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch, void *buf, size_t len)
> > +{
> > +     mutex_lock(&xmbx->lock);
> > +     xmgmt_mailbox_post(xmbx, msgid, sw_ch, buf, len);
> > +     mutex_unlock(&xmbx->lock);
> > +}
> > +
> > +static void xmgmt_mailbox_resp_test_msg(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     char *msg;
> > +
> > +     mutex_lock(&xmbx->lock);
> > +
> > +     if (xmbx->test_msg == NULL) {
> > +             mutex_unlock(&xmbx->lock);
> > +             xrt_err(pdev, "test msg is not set, drop request");
> > +             return;
> > +     }
> > +     msg = xmbx->test_msg;
> > +     xmbx->test_msg = NULL;
> > +
> > +     mutex_unlock(&xmbx->lock);
> > +
> > +     xmgmt_mailbox_respond(xmbx, msgid, sw_ch, msg, strlen(msg) + 1);
> > +     vfree(msg);
> > +}
> > +
> > +static int xmgmt_mailbox_dtb_add_prop(struct platform_device *pdev,
> > +     char *dst_dtb, const char *ep_name, const char *regmap_name,
> > +     const char *prop, const void *val, int size)
> > +{
> > +     int rc = xrt_md_set_prop(DEV(pdev), dst_dtb, ep_name, regmap_name,
> > +             prop, val, size);
> > +
> > +     if (rc) {
> > +             xrt_err(pdev, "failed to set %s@(%s, %s): %d",
> > +                     ep_name, regmap_name, prop, rc);
> > +     }
> > +     return rc;
> > +}
> > +
> > +static int xmgmt_mailbox_dtb_add_vbnv(struct platform_device *pdev, char *dtb)
> > +{
> > +     int rc = 0;
> > +     char *vbnv = xmgmt_get_vbnv(pdev);
> > +
> > +     if (vbnv == NULL) {
> > +             xrt_err(pdev, "failed to get VBNV");
> > +             return -ENOENT;
> > +     }
> > +     rc = xmgmt_mailbox_dtb_add_prop(pdev, dtb, NULL, NULL,
> > +             PROP_VBNV, vbnv, strlen(vbnv) + 1);
> > +     kfree(vbnv);
> > +     return rc;
> > +}
> > +
> > +static int xmgmt_mailbox_dtb_copy_logic_uuid(struct platform_device *pdev,
> > +     const char *src_dtb, char *dst_dtb)
> > +{
> > +     const void *val;
> > +     int sz;
> > +     int rc = xrt_md_get_prop(DEV(pdev), src_dtb, NULL, NULL,
> > +             PROP_LOGIC_UUID, &val, &sz);
> > +
> > +     if (rc) {
> > +             xrt_err(pdev, "failed to get %s: %d", PROP_LOGIC_UUID, rc);
> > +             return rc;
> > +     }
> > +     return xmgmt_mailbox_dtb_add_prop(pdev, dst_dtb, NULL, NULL,
> > +             PROP_LOGIC_UUID, val, sz);
> > +}
> > +
> > +static int xmgmt_mailbox_dtb_add_vrom(struct platform_device *pdev,
> > +     const char *src_dtb, char *dst_dtb)
> > +{
> > +     /* For compatibility for legacy xrt driver. */
> > +     enum FeatureBitMask {
> > +             UNIFIED_PLATFORM                = 0x0000000000000001
> > +             , XARE_ENBLD                    = 0x0000000000000002
> > +             , BOARD_MGMT_ENBLD              = 0x0000000000000004
> > +             , MB_SCHEDULER                  = 0x0000000000000008
> > +             , PROM_MASK                     = 0x0000000000000070
> > +             , DEBUG_MASK                    = 0x000000000000FF00
> > +             , PEER_TO_PEER                  = 0x0000000000010000
> > +             , FBM_UUID                      = 0x0000000000020000
> > +             , HBM                           = 0x0000000000040000
> > +             , CDMA                          = 0x0000000000080000
> > +             , QDMA                          = 0x0000000000100000
> > +             , RUNTIME_CLK_SCALE             = 0x0000000000200000
> > +             , PASSTHROUGH_VIRTUALIZATION    = 0x0000000000400000
> > +     };
> > +     struct FeatureRomHeader {
> > +             unsigned char EntryPointString[4];
> > +             uint8_t MajorVersion;
> > +             uint8_t MinorVersion;
> > +             uint32_t VivadoBuildID;
> > +             uint32_t IPBuildID;
> > +             uint64_t TimeSinceEpoch;
> > +             unsigned char FPGAPartName[64];
> > +             unsigned char VBNVName[64];
> > +             uint8_t DDRChannelCount;
> > +             uint8_t DDRChannelSize;
> > +             uint64_t DRBaseAddress;
> > +             uint64_t FeatureBitMap;
> > +             unsigned char uuid[16];
> > +             uint8_t HBMCount;
> > +             uint8_t HBMSize;
> > +             uint32_t CDMABaseAddress[4];
> > +     } header = { 0 };
> > +     char *vbnv = xmgmt_get_vbnv(pdev);
> > +     int rc;
> > +
> > +     *(u32 *)header.EntryPointString = 0x786e6c78;
> > +
> > +     if (vbnv)
> > +             strncpy(header.VBNVName, vbnv, sizeof(header.VBNVName) - 1);
> > +     kfree(vbnv);
> > +
> > +     header.FeatureBitMap = UNIFIED_PLATFORM;
> > +     rc = xrt_md_get_prop(DEV(pdev), src_dtb,
> > +             NODE_CMC_FW_MEM, NULL, PROP_IO_OFFSET, NULL, NULL);
> > +     if (rc == 0)
> > +             header.FeatureBitMap |= BOARD_MGMT_ENBLD;
> > +     rc = xrt_md_get_prop(DEV(pdev), src_dtb,
> > +             NODE_ERT_FW_MEM, NULL, PROP_IO_OFFSET, NULL, NULL);
> > +     if (rc == 0)
> > +             header.FeatureBitMap |= MB_SCHEDULER;
> > +
> > +     return xmgmt_mailbox_dtb_add_prop(pdev, dst_dtb, NULL, NULL,
> > +             PROP_VROM, &header, sizeof(header));
> > +}
> > +
> > +static u32 xmgmt_mailbox_dtb_user_pf(struct platform_device *pdev,
> > +     const char *dtb, const char *epname, const char *regmap)
> > +{
> > +     const u32 *pfnump;
> > +     int rc = xrt_md_get_prop(DEV(pdev), dtb, epname, regmap,
> > +             PROP_PF_NUM, (const void **)&pfnump, NULL);
> > +
> > +     if (rc)
> > +             return -1;
> > +     return be32_to_cpu(*pfnump);
> > +}
> > +
> > +static int xmgmt_mailbox_dtb_copy_user_endpoints(struct platform_device *pdev,
> > +     const char *src, char *dst)
> > +{
> > +     int rc = 0;
> > +     char *epname = NULL, *regmap = NULL;
> > +     u32 pfnum = xmgmt_mailbox_dtb_user_pf(pdev, src,
> > +             NODE_MAILBOX_USER, NULL);
> > +     const u32 level = cpu_to_be32(1);
> > +     struct device *dev = DEV(pdev);
> > +
> > +     if (pfnum == (u32)-1) {
> > +             xrt_err(pdev, "failed to get user pf num");
> > +             rc = -EINVAL;
> > +     }
> > +
> > +     for (xrt_md_get_next_endpoint(dev, src, NULL, NULL, &epname, &regmap);
> > +             rc == 0 && epname != NULL;
> > +             xrt_md_get_next_endpoint(dev, src, epname, regmap,
> > +             &epname, &regmap)) {
> > +             if (pfnum !=
> > +                     xmgmt_mailbox_dtb_user_pf(pdev, src, epname, regmap))
> > +                     continue;
> > +             rc = xrt_md_copy_endpoint(dev, dst, src, epname, regmap, NULL);
> > +             if (rc) {
> > +                     xrt_err(pdev, "failed to copy (%s, %s): %d",
> > +                             epname, regmap, rc);
> > +             } else {
> > +                     rc = xrt_md_set_prop(dev, dst, epname, regmap,
> > +                             PROP_PARTITION_LEVEL, &level, sizeof(level));
> > +                     if (rc) {
> > +                             xrt_err(pdev,
> > +                                     "can't set level for (%s, %s): %d",
> > +                                     epname, regmap, rc);
> > +                     }
> > +             }
> > +     }
> > +     return rc;
> > +}
> > +
> > +static char *xmgmt_mailbox_user_dtb(struct platform_device *pdev)
> > +{
> > +     /* TODO: add support for PLP. */
> > +     const char *src = NULL;
> > +     char *dst = NULL;
> > +     struct device *dev = DEV(pdev);
> > +     int rc = xrt_md_create(dev, &dst);
> > +
> > +     if (rc || dst == NULL)
> > +             return NULL;
> > +
> > +     rc = xmgmt_mailbox_dtb_add_vbnv(pdev, dst);
> > +     if (rc)
> > +             goto fail;
> > +
> > +     src = xmgmt_get_dtb(pdev, XMGMT_BLP);
> > +     if (src == NULL) {
> > +             xrt_err(pdev, "failed to get BLP dtb");
> > +             goto fail;
> > +     }
> > +
> > +     rc = xmgmt_mailbox_dtb_copy_logic_uuid(pdev, src, dst);
> > +     if (rc)
> > +             goto fail;
> > +
> > +     rc = xmgmt_mailbox_dtb_add_vrom(pdev, src, dst);
> > +     if (rc)
> > +             goto fail;
> > +
> > +     rc = xrt_md_copy_endpoint(dev, dst, src, NODE_PARTITION_INFO,
> > +             NULL, NODE_PARTITION_INFO_BLP);
> > +     if (rc)
> > +             goto fail;
> > +
> > +     rc = xrt_md_copy_endpoint(dev, dst, src, NODE_INTERFACES, NULL, NULL);
> > +     if (rc)
> > +             goto fail;
> > +
> > +     rc = xmgmt_mailbox_dtb_copy_user_endpoints(pdev, src, dst);
> > +     if (rc)
> > +             goto fail;
> > +
> > +     xrt_md_pack(dev, dst);
> > +     vfree(src);
> > +     return dst;
> > +
> > +fail:
> > +     vfree(src);
> > +     vfree(dst);
> > +     return NULL;
> > +}
> > +
> > +static void xmgmt_mailbox_resp_subdev(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch, u64 offset, u64 size)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     char *dtb = xmgmt_mailbox_user_dtb(pdev);
> > +     long dtbsz;
> > +     struct xcl_subdev *hdr;
> > +     u64 totalsz;
> > +
> > +     if (dtb == NULL)
> > +             return;
> > +
> > +     dtbsz = xrt_md_size(DEV(pdev), dtb);
> > +     totalsz = dtbsz + sizeof(*hdr) - sizeof(hdr->data);
> > +     if (offset != 0 || totalsz > size) {
> > +             /* Only support fetching dtb in one shot. */
> > +             vfree(dtb);
> > +             xrt_err(pdev, "need %lldB, user buffer size is %lldB, dropped",
> > +                     totalsz, size);
> > +             return;
> > +     }
> > +
> > +     hdr = vzalloc(totalsz);
> > +     if (hdr == NULL) {
> > +             vfree(dtb);
> > +             return;
> > +     }
> > +
> > +     hdr->ver = 1;
> > +     hdr->size = dtbsz;
> > +     hdr->rtncode = XRT_MSG_SUBDEV_RTN_COMPLETE;
> > +     (void) memcpy(hdr->data, dtb, dtbsz);
> > +
> > +     xmgmt_mailbox_respond(xmbx, msgid, sw_ch, hdr, totalsz);
> > +
> > +     vfree(dtb);
> > +     vfree(hdr);
> > +}
> > +
> > +static void xmgmt_mailbox_resp_sensor(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch, u64 offset, u64 size)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     struct xcl_sensor sensors = { 0 };
> > +     struct platform_device *cmcpdev = xrt_subdev_get_leaf_by_id(pdev,
> > +             XRT_SUBDEV_CMC, PLATFORM_DEVID_NONE);
> > +     int rc;
> > +
> > +     if (cmcpdev) {
> > +             rc = xrt_subdev_ioctl(cmcpdev, XRT_CMC_READ_SENSORS, &sensors);
> > +             (void) xrt_subdev_put_leaf(pdev, cmcpdev);
> > +             if (rc)
> > +                     xrt_err(pdev, "can't read sensors: %d", rc);
> > +     }
> > +
> > +     xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &sensors,
> > +             min((u64)sizeof(sensors), size));
> > +}
> > +
> > +static int xmgmt_mailbox_get_freq(struct xmgmt_mailbox *xmbx,
> > +     enum CLOCK_TYPE type, u64 *freq, u64 *freq_cnter)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     const char *clkname =
> > +             clock_type2epname(type) ? clock_type2epname(type) : "UNKNOWN";
> > +     struct platform_device *clkpdev =
> > +             xrt_subdev_get_leaf_by_epname(pdev, clkname);
> > +     int rc;
> > +     struct xrt_clock_ioctl_get getfreq = { 0 };
> > +
> > +     if (clkpdev == NULL) {
> > +             xrt_info(pdev, "%s clock is not available", clkname);
> > +             return -ENOENT;
> > +     }
> > +
> > +     rc = xrt_subdev_ioctl(clkpdev, XRT_CLOCK_GET, &getfreq);
> > +     (void) xrt_subdev_put_leaf(pdev, clkpdev);
> > +     if (rc) {
> > +             xrt_err(pdev, "can't get %s clock frequency: %d", clkname, rc);
> > +             return rc;
> > +     }
> > +
> > +     if (freq)
> > +             *freq = getfreq.freq;
> > +     if (freq_cnter)
> > +             *freq_cnter = getfreq.freq_cnter;
> > +     return 0;
> > +}
> > +
> > +static int xmgmt_mailbox_get_icap_idcode(struct xmgmt_mailbox *xmbx, u64 *id)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     struct platform_device *icappdev = xrt_subdev_get_leaf_by_id(pdev,
> > +             XRT_SUBDEV_ICAP, PLATFORM_DEVID_NONE);
> > +     int rc;
> > +
> > +     if (icappdev == NULL) {
> > +             xrt_err(pdev, "can't find icap");
> > +             return -ENOENT;
> > +     }
> > +
> > +     rc = xrt_subdev_ioctl(icappdev, XRT_ICAP_IDCODE, id);
> > +     (void) xrt_subdev_put_leaf(pdev, icappdev);
> > +     if (rc)
> > +             xrt_err(pdev, "can't get icap idcode: %d", rc);
> > +     return rc;
> > +}
> > +
> > +static int xmgmt_mailbox_get_mig_calib(struct xmgmt_mailbox *xmbx, u64 *calib)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     struct platform_device *calibpdev = xrt_subdev_get_leaf_by_id(pdev,
> > +             XRT_SUBDEV_CALIB, PLATFORM_DEVID_NONE);
> > +     int rc;
> > +     enum xrt_calib_results res;
> > +
> > +     if (calibpdev == NULL) {
> > +             xrt_err(pdev, "can't find mig calibration subdev");
> > +             return -ENOENT;
> > +     }
> > +
> > +     rc = xrt_subdev_ioctl(calibpdev, XRT_CALIB_RESULT, &res);
> > +     (void) xrt_subdev_put_leaf(pdev, calibpdev);
> > +     if (rc) {
> > +             xrt_err(pdev, "can't get mig calibration result: %d", rc);
> > +     } else {
> > +             if (res == XRT_CALIB_SUCCEEDED)
> > +                     *calib = 1;
> > +             else
> > +                     *calib = 0;
> > +     }
> > +     return rc;
> > +}
> > +
> > +static void xmgmt_mailbox_resp_icap(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch, u64 offset, u64 size)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     struct xcl_pr_region icap = { 0 };
> > +
> > +     (void) xmgmt_mailbox_get_freq(xmbx,
> > +             CT_DATA, &icap.freq_data, &icap.freq_cntr_data);
> > +     (void) xmgmt_mailbox_get_freq(xmbx,
> > +             CT_KERNEL, &icap.freq_kernel, &icap.freq_cntr_kernel);
> > +     (void) xmgmt_mailbox_get_freq(xmbx,
> > +             CT_SYSTEM, &icap.freq_system, &icap.freq_cntr_system);
> > +     (void) xmgmt_mailbox_get_icap_idcode(xmbx, &icap.idcode);
> > +     (void) xmgmt_mailbox_get_mig_calib(xmbx, &icap.mig_calib);
> > +     BUG_ON(sizeof(icap.uuid) != sizeof(uuid_t));
> > +     (void) xmgmt_get_provider_uuid(pdev, XMGMT_ULP, (uuid_t *)&icap.uuid);
> > +
> > +     xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &icap,
> > +             min((u64)sizeof(icap), size));
> > +}
> > +
> > +static void xmgmt_mailbox_resp_bdinfo(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch, u64 offset, u64 size)
> > +{
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     struct xcl_board_info *info = vzalloc(sizeof(*info));
> > +     struct platform_device *cmcpdev;
> > +     int rc;
> > +
> > +     if (info == NULL)
> > +             return;
> > +
> > +     cmcpdev = xrt_subdev_get_leaf_by_id(pdev,
> > +             XRT_SUBDEV_CMC, PLATFORM_DEVID_NONE);
> > +     if (cmcpdev) {
> > +             rc = xrt_subdev_ioctl(cmcpdev, XRT_CMC_READ_BOARD_INFO, info);
> > +             (void) xrt_subdev_put_leaf(pdev, cmcpdev);
> > +             if (rc)
> > +                     xrt_err(pdev, "can't read board info: %d", rc);
> > +     }
> > +
> > +     xmgmt_mailbox_respond(xmbx, msgid, sw_ch, info,
> > +             min((u64)sizeof(*info), size));
> > +
> > +     vfree(info);
> > +}
> > +
> > +static void xmgmt_mailbox_simple_respond(struct xmgmt_mailbox *xmbx,
> > +     u64 msgid, bool sw_ch, int rc)
> > +{
> > +     xmgmt_mailbox_respond(xmbx, msgid, sw_ch, &rc, sizeof(rc));
> > +}
> > +
> > +static void xmgmt_mailbox_resp_peer_data(struct xmgmt_mailbox *xmbx,
> > +     struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> > +{
> > +     struct xcl_mailbox_peer_data *pdata =
> > +             (struct xcl_mailbox_peer_data *)req->data;
> > +
> > +     if (len < (sizeof(*req) + sizeof(*pdata) - 1)) {
> > +             xrt_err(xmbx->pdev, "received corrupted %s, dropped",
> > +                     mailbox_req2name(req->req));
> > +             return;
> > +     }
> > +
> > +     switch (pdata->kind) {
> > +     case XCL_SENSOR:
> > +             xmgmt_mailbox_resp_sensor(xmbx, msgid, sw_ch,
> > +                     pdata->offset, pdata->size);
> > +             break;
> > +     case XCL_ICAP:
> > +             xmgmt_mailbox_resp_icap(xmbx, msgid, sw_ch,
> > +                     pdata->offset, pdata->size);
> > +             break;
> > +     case XCL_BDINFO:
> > +             xmgmt_mailbox_resp_bdinfo(xmbx, msgid, sw_ch,
> > +                     pdata->offset, pdata->size);
> > +             break;
> > +     case XCL_SUBDEV:
> > +             xmgmt_mailbox_resp_subdev(xmbx, msgid, sw_ch,
> > +                     pdata->offset, pdata->size);
> > +             break;
> > +     case XCL_MIG_ECC:
> > +     case XCL_FIREWALL:
> > +     case XCL_DNA: /* TODO **/
> > +             xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, 0);
> > +             break;
> > +     default:
> > +             xrt_err(xmbx->pdev, "%s(%s) request not handled",
> > +                     mailbox_req2name(req->req),
> > +                     mailbox_group_kind2name(pdata->kind));
> > +             break;
> > +     }
> > +}
> > +
> > +static bool xmgmt_mailbox_is_same_domain(struct xmgmt_mailbox *xmbx,
> > +     struct xcl_mailbox_conn *mb_conn)
> > +{
> > +     uint32_t crc_chk;
> > +     phys_addr_t paddr;
> > +     struct platform_device *pdev = xmbx->pdev;
> > +
> > +     paddr = virt_to_phys((void *)mb_conn->kaddr);
> > +     if (paddr != (phys_addr_t)mb_conn->paddr) {
> > +             xrt_info(pdev, "paddrs differ, user 0x%llx, mgmt 0x%llx",
> > +                     mb_conn->paddr, paddr);
> > +             return false;
> > +     }
> > +
> > +     crc_chk = crc32c_le(~0, (void *)mb_conn->kaddr, PAGE_SIZE);
> > +     if (crc_chk != mb_conn->crc32) {
> > +             xrt_info(pdev, "CRCs differ, user 0x%x, mgmt 0x%x",
> > +                     mb_conn->crc32, crc_chk);
> > +             return false;
> > +     }
> > +
> > +     return true;
> > +}
> > +
> > +static void xmgmt_mailbox_resp_user_probe(struct xmgmt_mailbox *xmbx,
> > +     struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> > +{
> > +     struct xcl_mailbox_conn_resp *resp = vzalloc(sizeof(*resp));
> > +     struct xcl_mailbox_conn *conn = (struct xcl_mailbox_conn *)req->data;
> > +
> > +     if (resp == NULL)
> > +             return;
> > +
> > +     if (len < (sizeof(*req) + sizeof(*conn) - 1)) {
> > +             xrt_err(xmbx->pdev, "received corrupted %s, dropped",
> > +                     mailbox_req2name(req->req));
> > +             vfree(resp);
> > +             return;
> > +     }
> > +
> > +     resp->conn_flags |= XCL_MB_PEER_READY;
> > +     if (xmgmt_mailbox_is_same_domain(xmbx, conn)) {
> > +             xmbx->peer_in_same_domain = true;
> > +             resp->conn_flags |= XCL_MB_PEER_SAME_DOMAIN;
> > +     }
> > +
> > +     xmgmt_mailbox_respond(xmbx, msgid, sw_ch, resp, sizeof(*resp));
> > +     vfree(resp);
> > +}
> > +
> > +static void xmgmt_mailbox_resp_hot_reset(struct xmgmt_mailbox *xmbx,
> > +     struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> > +{
> > +     int ret;
> > +     struct platform_device *pdev = xmbx->pdev;
> > +
> > +     xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, 0);
> > +
> > +     ret = xmgmt_hot_reset(pdev);
> > +     if (ret)
> > +             xrt_err(pdev, "failed to hot reset: %d", ret);
> > +     else
> > +             xmgmt_peer_notify_state(xmbx, true);
> > +}
> > +
> > +static void xmgmt_mailbox_resp_load_xclbin(struct xmgmt_mailbox *xmbx,
> > +     struct xcl_mailbox_req *req, size_t len, u64 msgid, bool sw_ch)
> > +{
> > +     struct xcl_mailbox_bitstream_kaddr *kaddr =
> > +             (struct xcl_mailbox_bitstream_kaddr *)req->data;
> > +     void *xclbin = (void *)(uintptr_t)kaddr->addr;
> > +     int ret = bitstream_axlf_mailbox(xmbx->pdev, xclbin);
> > +
> > +     xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, ret);
> > +}
> > +
> > +static void xmgmt_mailbox_listener(void *arg, void *data, size_t len,
> > +     u64 msgid, int err, bool sw_ch)
> > +{
> > +     struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)arg;
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     struct xcl_mailbox_req *req = (struct xcl_mailbox_req *)data;
> > +
> > +     if (err) {
> > +             xrt_err(pdev, "failed to receive request: %d", err);
> > +             return;
> > +     }
> > +     if (len < sizeof(*req)) {
> > +             xrt_err(pdev, "received corrupted request");
> > +             return;
> > +     }
> > +
> > +     XMGMT_MAILBOX_PRT_REQ_RECV(xmbx, req, sw_ch);
> > +     switch (req->req) {
> > +     case XCL_MAILBOX_REQ_TEST_READ:
> > +             xmgmt_mailbox_resp_test_msg(xmbx, msgid, sw_ch);
> > +             break;
> > +     case XCL_MAILBOX_REQ_PEER_DATA:
> > +             xmgmt_mailbox_resp_peer_data(xmbx, req, len, msgid, sw_ch);
> > +             break;
> > +     case XCL_MAILBOX_REQ_READ_P2P_BAR_ADDR: /* TODO */
> > +             xmgmt_mailbox_simple_respond(xmbx, msgid, sw_ch, -ENOTSUPP);
> > +             break;
> > +     case XCL_MAILBOX_REQ_USER_PROBE:
> > +             xmgmt_mailbox_resp_user_probe(xmbx, req, len, msgid, sw_ch);
> > +             break;
> > +     case XCL_MAILBOX_REQ_HOT_RESET:
> > +             xmgmt_mailbox_resp_hot_reset(xmbx, req, len, msgid, sw_ch);
> > +             break;
> > +     case XCL_MAILBOX_REQ_LOAD_XCLBIN_KADDR:
> > +             if (xmbx->peer_in_same_domain) {
> > +                     xmgmt_mailbox_resp_load_xclbin(xmbx,
> > +                             req, len, msgid, sw_ch);
> > +             } else {
> > +                     xrt_err(pdev, "%s not handled, not in same domain",
> > +                             mailbox_req2name(req->req));
> > +             }
> > +             break;
> > +     default:
> > +             xrt_err(pdev, "%s(%d) request not handled",
> > +                     mailbox_req2name(req->req), req->req);
> > +             break;
> > +     }
> > +}
> > +
> > +static void xmgmt_mailbox_reg_listener(struct xmgmt_mailbox *xmbx)
> > +{
> > +     struct xrt_mailbox_ioctl_listen listen = {
> > +             xmgmt_mailbox_listener, xmbx };
> > +
> > +     BUG_ON(!mutex_is_locked(&xmbx->lock));
> > +     if (!xmbx->mailbox)
> > +             return;
> > +     (void) xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_LISTEN, &listen);
> > +}
> > +
> > +static void xmgmt_mailbox_unreg_listener(struct xmgmt_mailbox *xmbx)
> > +{
> > +     struct xrt_mailbox_ioctl_listen listen = { 0 };
> > +
> > +     BUG_ON(!mutex_is_locked(&xmbx->lock));
> > +     BUG_ON(!xmbx->mailbox);
> > +     (void) xrt_subdev_ioctl(xmbx->mailbox, XRT_MAILBOX_LISTEN, &listen);
> > +}
> > +
> > +static bool xmgmt_mailbox_leaf_match(enum xrt_subdev_id id,
> > +     struct platform_device *pdev, void *arg)
> > +{
> > +     return (id == XRT_SUBDEV_MAILBOX);
> > +}
> > +
> > +static int xmgmt_mailbox_event_cb(struct platform_device *pdev,
> > +     enum xrt_events evt, void *arg)
> > +{
> > +     struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
> > +     struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
> > +
> > +     switch (evt) {
> > +     case XRT_EVENT_POST_CREATION:
> > +             BUG_ON(esd->xevt_subdev_id != XRT_SUBDEV_MAILBOX);
> > +             BUG_ON(xmbx->mailbox);
> > +             mutex_lock(&xmbx->lock);
> > +             xmbx->mailbox = xrt_subdev_get_leaf_by_id(pdev,
> > +                     XRT_SUBDEV_MAILBOX, PLATFORM_DEVID_NONE);
> > +             xmgmt_mailbox_reg_listener(xmbx);
> > +             mutex_unlock(&xmbx->lock);
> > +             break;
> > +     case XRT_EVENT_PRE_REMOVAL:
> > +             BUG_ON(esd->xevt_subdev_id != XRT_SUBDEV_MAILBOX);
> > +             BUG_ON(!xmbx->mailbox);
> > +             mutex_lock(&xmbx->lock);
> > +             xmgmt_mailbox_unreg_listener(xmbx);
> > +             (void) xrt_subdev_put_leaf(pdev, xmbx->mailbox);
> > +             xmbx->mailbox = NULL;
> > +             mutex_unlock(&xmbx->lock);
> > +             break;
> > +     default:
> > +             break;
> > +     }
> > +
> > +     return XRT_EVENT_CB_CONTINUE;
> > +}
> > +
> > +static ssize_t xmgmt_mailbox_user_dtb_show(struct file *filp,
> > +     struct kobject *kobj, struct bin_attribute *attr,
> > +     char *buf, loff_t off, size_t count)
> > +{
> > +     struct device *dev = kobj_to_dev(kobj);
> > +     struct platform_device *pdev = to_platform_device(dev);
> > +     char *blob = NULL;
> > +     long  size;
> > +     ssize_t ret = 0;
> > +
> > +     blob = xmgmt_mailbox_user_dtb(pdev);
> > +     if (!blob) {
> > +             ret = -ENOENT;
> > +             goto failed;
> > +     }
> > +
> > +     size = xrt_md_size(dev, blob);
> > +     if (size <= 0) {
> > +             ret = -EINVAL;
> > +             goto failed;
> > +     }
> > +
> > +     if (off >= size)
> > +             goto failed;
> > +     if (off + count > size)
> > +             count = size - off;
> > +     memcpy(buf, blob + off, count);
> > +
> > +     ret = count;
> > +failed:
> > +     vfree(blob);
> > +     return ret;
> > +}
> > +
> > +static struct bin_attribute meta_data_attr = {
> > +     .attr = {
> > +             .name = "metadata_for_user",
> > +             .mode = 0400
> > +     },
> > +     .read = xmgmt_mailbox_user_dtb_show,
> > +     .size = 0
> > +};
> > +
> > +static struct bin_attribute  *xmgmt_mailbox_bin_attrs[] = {
> > +     &meta_data_attr,
> > +     NULL,
> > +};
> > +
> > +int xmgmt_mailbox_get_test_msg(struct xmgmt_mailbox *xmbx, bool sw_ch,
> > +     char *buf, size_t *len)
> > +{
> > +     int rc;
> > +     struct platform_device *pdev = xmbx->pdev;
> > +     struct xcl_mailbox_req req = { 0, XCL_MAILBOX_REQ_TEST_READ, };
> > +     struct xrt_mailbox_ioctl_request leaf_req = {
> > +             .xmir_sw_ch = sw_ch,
> > +             .xmir_resp_ttl = 1,
> > +             .xmir_req = &req,
> > +             .xmir_req_size = sizeof(req),
> > +             .xmir_resp = buf,
> > +             .xmir_resp_size = *len
> > +     };
> > +
> > +     mutex_lock(&xmbx->lock);
> > +     if (xmbx->mailbox) {
> > +             XMGMT_MAILBOX_PRT_REQ_SEND(xmbx, &req, leaf_req.xmir_sw_ch);
> > +             /*
> > +              * mgmt should never send a request to the peer; it should
> > +              * send only notifications or responses. This is the only
> > +              * exception, kept for debugging purposes.
> > +              */
> > +             rc = xrt_subdev_ioctl(xmbx->mailbox,
> > +                     XRT_MAILBOX_REQUEST, &leaf_req);
> > +     } else {
> > +             rc = -ENODEV;
> > +             xrt_err(pdev, "mailbox not available");
> > +     }
> > +     mutex_unlock(&xmbx->lock);
> > +
> > +     if (rc == 0)
> > +             *len = leaf_req.xmir_resp_size;
> > +     return rc;
> > +}
> > +
> > +int xmgmt_mailbox_set_test_msg(struct xmgmt_mailbox *xmbx,
> > +     char *buf, size_t len)
> > +{
> > +     mutex_lock(&xmbx->lock);
> > +
> > +     vfree(xmbx->test_msg);
> > +     xmbx->test_msg = vmalloc(len);
> > +     if (xmbx->test_msg == NULL) {
> > +             mutex_unlock(&xmbx->lock);
> > +             return -ENOMEM;
> > +     }
> > +     (void) memcpy(xmbx->test_msg, buf, len);
> > +
> > +     mutex_unlock(&xmbx->lock);
> > +     return 0;
> > +}
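Since the peer_msg node below is the only way to exercise this path, a
user-space sketch may help testers; the exact sysfs path depends on how the
platform device instance is named, so treat the node argument as an
assumption:

  #include <fcntl.h>
  #include <unistd.h>

  /* Hypothetical path; point this at the actual xmgmt_main sysfs node. */
  static int test_peer_msg(const char *node)
  {
  	char buf[4096];
  	int fd = open(node, O_RDWR);
  	ssize_t n;

  	if (fd < 0)
  		return -1;
  	/* Store a local test message for the peer to fetch... */
  	if (write(fd, "hello", 5) != 5)
  		n = -1;
  	else
  		/* ...then pull the peer's message over the mailbox. */
  		n = read(fd, buf, sizeof(buf));
  	close(fd);
  	return n < 0 ? -1 : 0;
  }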
> > +
> > +static ssize_t peer_msg_show(struct device *dev,
> > +     struct device_attribute *attr, char *buf)
> > +{
> > +     size_t len = 4096;
> > +     struct platform_device *pdev = to_platform_device(dev);
> > +     struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
> > +     int ret = xmgmt_mailbox_get_test_msg(xmbx, false, buf, &len);
> > +
> > +     return ret == 0 ? len : ret;
> > +}
> > +static ssize_t peer_msg_store(struct device *dev,
> > +     struct device_attribute *da, const char *buf, size_t count)
> > +{
> > +     struct platform_device *pdev = to_platform_device(dev);
> > +     struct xmgmt_mailbox *xmbx = pdev2mbx(pdev);
> > +     int ret = xmgmt_mailbox_set_test_msg(xmbx, (char *)buf, count);
> > +
> > +     return ret == 0 ? count : ret;
> > +}
> > +/* Message test i/f. */
> > +static DEVICE_ATTR_RW(peer_msg);
> > +
> > +static struct attribute *xmgmt_mailbox_attrs[] = {
> > +     &dev_attr_peer_msg.attr,
> > +     NULL,
> > +};
> > +
> > +static const struct attribute_group xmgmt_mailbox_attrgroup = {
> > +     .bin_attrs = xmgmt_mailbox_bin_attrs,
> > +     .attrs = xmgmt_mailbox_attrs,
> > +};
> > +
> > +void *xmgmt_mailbox_probe(struct platform_device *pdev)
> > +{
> > +     struct xmgmt_mailbox *xmbx =
> > +             devm_kzalloc(DEV(pdev), sizeof(*xmbx), GFP_KERNEL);
> > +
> > +     if (!xmbx)
> > +             return NULL;
> > +     xmbx->pdev = pdev;
> > +     mutex_init(&xmbx->lock);
> > +
> > +     xmbx->evt_hdl = xrt_subdev_add_event_cb(pdev,
> > +             xmgmt_mailbox_leaf_match, NULL, xmgmt_mailbox_event_cb);
> > +     (void) sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_mailbox_attrgroup);
> > +     return xmbx;
> > +}
> > +
> > +void xmgmt_mailbox_remove(void *handle)
> > +{
> > +     struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)handle;
> > +     struct platform_device *pdev = xmbx->pdev;
> > +
> > +     (void) sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_mailbox_attrgroup);
> > +     if (xmbx->evt_hdl)
> > +             (void) xrt_subdev_remove_event_cb(pdev, xmbx->evt_hdl);
> > +     if (xmbx->mailbox)
> > +             (void) xrt_subdev_put_leaf(pdev, xmbx->mailbox);
> > +     vfree(xmbx->test_msg);
> > +}
> > +
> > +void xmgmt_peer_notify_state(void *handle, bool online)
> > +{
> > +     struct xmgmt_mailbox *xmbx = (struct xmgmt_mailbox *)handle;
> > +     struct xcl_mailbox_peer_state *st;
> > +     struct xcl_mailbox_req *req;
> > +     size_t reqlen = sizeof(*req) + sizeof(*st) - 1;
> > +
> > +     req = vzalloc(reqlen);
> > +     if (req == NULL)
> > +             return;
> > +
> > +     req->req = XCL_MAILBOX_REQ_MGMT_STATE;
> > +     st = (struct xcl_mailbox_peer_state *)req->data;
> > +     st->state_flags = online ? XCL_MB_STATE_ONLINE : XCL_MB_STATE_OFFLINE;
> > +     mutex_lock(&xmbx->lock);
> > +     xmgmt_mailbox_notify(xmbx, false, req, reqlen);
> > +     mutex_unlock(&xmbx->lock);
> > +}
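The reqlen arithmetic here, and the matching `sizeof(*req) + sizeof(*pdata) -
1` length checks earlier, only add up if struct xcl_mailbox_req ends in a
one-byte placeholder payload (something like `char data[1];`); the `- 1`
discounts that placeholder before appending the real payload. A sketch of a
generic request builder under that assumption (the `u32 op` parameter is a
stand-in for the actual request field type):

  /* Assumes xcl_mailbox_req ends in 'char data[1];' as implied above. */
  static struct xcl_mailbox_req *build_req(u32 op, const void *payload,
  	size_t payload_sz, size_t *reqlen)
  {
  	struct xcl_mailbox_req *req;

  	*reqlen = sizeof(*req) + payload_sz - 1;
  	req = vzalloc(*reqlen);
  	if (!req)
  		return NULL;
  	req->req = op;
  	memcpy(req->data, payload, payload_sz);
  	return req;
  }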
> > diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c b/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
> > new file mode 100644
> > index 000000000000..042d86fcef41
> > --- /dev/null
> > +++ b/drivers/fpga/alveo/mgmt/xmgmt-main-ulp.c
> > @@ -0,0 +1,190 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Xilinx Alveo FPGA MGMT PF entry point driver
> > + *
> > + * Copyright (C) 2020 Xilinx, Inc.
> > + *
> > + * xclbin download
> > + *
> > + * Authors:
> > + *      Lizhi Hou <lizhi.hou@xilinx.com>
> > + */
> > +
> > +#include <linux/firmware.h>
> > +#include <linux/uaccess.h>
> > +#include "xrt-xclbin.h"
> > +#include "xrt-metadata.h"
> > +#include "xrt-subdev.h"
> > +#include "xrt-gpio.h"
> > +#include "xmgmt-main.h"
> > +#include "xrt-icap.h"
> > +#include "xrt-axigate.h"
> > +
> > +static int xmgmt_download_bitstream(struct platform_device  *pdev,
> > +     const void *xclbin)
> > +{
> > +     struct platform_device *icap_leaf = NULL;
> > +     struct XHwIcap_Bit_Header bit_header = { 0 };
> Please fix the style error in struct name ...
> > +     struct xrt_icap_ioctl_wr arg;
> > +     char *bitstream = NULL;
> > +     int ret;
> > +
> > +     ret = xrt_xclbin_get_section(xclbin, BITSTREAM, (void **)&bitstream,
> > +             NULL);
> > +     if (ret || !bitstream) {
> > +             xrt_err(pdev, "bitstream not found");
> > +             return -ENOENT;
> > +     }
> > +     ret = xrt_xclbin_parse_header(bitstream,
> > +             DMA_HWICAP_BITFILE_BUFFER_SIZE, &bit_header);
> > +     if (ret) {
> > +             ret = -EINVAL;
> > +             xrt_err(pdev, "invalid bitstream header");
> > +             goto done;
> > +     }
> > +     icap_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_ICAP,
> > +             PLATFORM_DEVID_NONE);
> > +     if (!icap_leaf) {
> > +             ret = -ENODEV;
> > +             xrt_err(pdev, "icap does not exist");
> > +             goto done;
> > +     }
> > +     arg.xiiw_bit_data = bitstream + bit_header.HeaderLength;
> > +     arg.xiiw_data_len = bit_header.BitstreamLength;
> > +     ret = xrt_subdev_ioctl(icap_leaf, XRT_ICAP_WRITE, &arg);
> > +     if (ret)
> > +             xrt_err(pdev, "write bitstream failed, ret = %d", ret);
> > +
> > +done:
> > +     if (icap_leaf)
> > +             xrt_subdev_put_leaf(pdev, icap_leaf);
> > +     vfree(bitstream);
> > +
> > +     return ret;
> > +}
> > +
> > +static bool match_shell(enum xrt_subdev_id id,
> > +     struct platform_device *pdev, void *arg)
> > +{
> > +     struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> > +     const char *ulp_gate;
> > +     int ret;
> > +
> > +     if (!pdata || xrt_md_size(&pdev->dev, pdata->xsp_dtb) <= 0)
> > +             return false;
> > +
> > +     ret = xrt_md_get_epname_pointer(&pdev->dev, pdata->xsp_dtb,
> > +             NODE_GATE_ULP, NULL, &ulp_gate);
> > +     if (ret)
> > +             return false;
> > +
> > +     ret = xrt_md_check_uuids(&pdev->dev, pdata->xsp_dtb, arg);
> > +     if (ret)
> > +             return false;
> > +
> > +     return true;
> > +}
> > +
> > +static bool match_ulp(enum xrt_subdev_id id,
> > +     struct platform_device *pdev, void *arg)
> > +{
> > +     struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> > +     const char *ulp_gate;
> > +     int ret;
> > +
> > +     if (!pdata || xrt_md_size(&pdev->dev, pdata->xsp_dtb) <= 0)
> > +             return false;
> > +
> > +     ret = xrt_md_check_uuids(&pdev->dev, pdata->xsp_dtb, arg);
> > +     if (ret)
> > +             return false;
> > +
> > +     ret = xrt_md_get_epname_pointer(&pdev->dev, pdata->xsp_dtb,
> > +             NODE_GATE_ULP, NULL, &ulp_gate);
> > +     if (!ret)
> > +             return false;
> > +
> > +     return true;
> > +}
> > +
> > +int xmgmt_ulp_download(struct platform_device  *pdev, const void *xclbin)
> > +{
> > +     struct platform_device *axigate_leaf;
> > +     char *dtb = NULL;
> > +     int ret = 0, part_inst;
> > +
> > +     ret = xrt_xclbin_get_metadata(DEV(pdev), xclbin, &dtb);
> > +     if (ret) {
> > +             xrt_err(pdev, "can not get partition metadata, ret %d", ret);
> > +             goto failed;
> > +     }
> > +
> > +     part_inst = xrt_subdev_lookup_partition(pdev, match_shell, dtb);
> > +     if (part_inst < 0) {
> > +             xrt_err(pdev, "not found matching plp.");
> > +             ret = -ENODEV;
> > +             goto failed;
> > +     }
> > +
> > +     /*
> > +      * Find the ulp partition with the interface uuid from the incoming
> > +      * xclbin, which was verified earlier against the matching plp
> > +      * partition.
> > +      */
> > +     part_inst = xrt_subdev_lookup_partition(pdev, match_ulp, dtb);
> > +     if (part_inst >= 0) {
> > +             ret = xrt_subdev_destroy_partition(pdev, part_inst);
> > +             if (ret) {
> > +                     xrt_err(pdev, "failed to destroy existing ulp, %d",
> > +                             ret);
> > +                     goto failed;
> > +             }
> > +     }
> > +
> > +     axigate_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_GATE_ULP);
> > +
> > +     /* gate may not exist for 0rp */
> > +     if (axigate_leaf) {
> > +             ret = xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREEZE,
> > +                     NULL);
> > +             if (ret) {
> > +                     xrt_err(pdev, "can not freeze gate %s, %d",
> > +                             NODE_GATE_ULP, ret);
> > +                     xrt_subdev_put_leaf(pdev, axigate_leaf);
> > +                     goto failed;
> > +             }
> > +     }
> > +     ret = xmgmt_download_bitstream(pdev, xclbin);
> > +     if (axigate_leaf) {
> > +             xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREE, NULL);
> > +
> > +             /* Do we really need this extra toggling gate before setting
> > +              * clocks?
> > +              * xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREEZE, NULL);
> > +              * xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREE, NULL);
> > +              */
> > +
> > +             xrt_subdev_put_leaf(pdev, axigate_leaf);
> > +     }
> > +     if (ret) {
> > +             xrt_err(pdev, "bitstream download failed, ret %d", ret);
> > +             goto failed;
> > +     }
> > +     ret = xrt_subdev_create_partition(pdev, dtb);
> > +     if (ret < 0) {
> > +             xrt_err(pdev, "failed creating partition, ret %d", ret);
> > +             goto failed;
> > +     }
> > +
> > +     ret = xrt_subdev_wait_for_partition_bringup(pdev);
> > +     if (ret)
> > +             xrt_err(pdev, "partiton bringup failed, ret %d", ret);
> > +
> > +     /*
> > +      * TODO: needs to check individual subdevs to see if there
> > +      * is any error, such as clock setting, memory bank calibration.
> > +      */
> > +
> > +failed:
> > +     vfree(dtb);
> > +     return ret;
> > +}
> > diff --git a/drivers/fpga/alveo/mgmt/xmgmt-main.c b/drivers/fpga/alveo/mgmt/xmgmt-main.c
> > new file mode 100644
> > index 000000000000..23e68e3a4ae1
> > --- /dev/null
> > +++ b/drivers/fpga/alveo/mgmt/xmgmt-main.c
> > @@ -0,0 +1,843 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Xilinx Alveo FPGA MGMT PF entry point driver
> > + *
> > + * Copyright (C) 2020 Xilinx, Inc.
> > + *
> > + * Authors:
> > + *   Sonal Santan <sonals@xilinx.com>
> > + */
> > +
> > +#include <linux/firmware.h>
> > +#include <linux/uaccess.h>
> > +#include "xrt-xclbin.h"
> > +#include "xrt-metadata.h"
> > +#include "xrt-flash.h"
> > +#include "xrt-subdev.h"
> > +#include <linux/xrt/flash_xrt_data.h>
> > +#include <linux/xrt/xmgmt-ioctl.h>
> > +#include "xrt-gpio.h"
> > +#include "xmgmt-main.h"
> > +#include "xmgmt-fmgr.h"
> > +#include "xrt-icap.h"
> > +#include "xrt-axigate.h"
> > +#include "xmgmt-main-impl.h"
> > +
> > +#define      XMGMT_MAIN "xmgmt_main"
> > +
> > +struct xmgmt_main {
> > +     struct platform_device *pdev;
> > +     void *evt_hdl;
> > +     char *firmware_blp;
> > +     char *firmware_plp;
> > +     char *firmware_ulp;
> > +     bool flash_ready;
> > +     bool gpio_ready;
> > +     struct fpga_manager *fmgr;
> > +     void *mailbox_hdl;
> > +     struct mutex busy_mutex;
> > +
> > +     uuid_t *blp_intf_uuids;
> > +     u32 blp_intf_uuid_num;
> > +};
> > +
> > +char *xmgmt_get_vbnv(struct platform_device *pdev)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +     const char *vbnv;
> > +     char *ret;
> > +     int i;
> > +
> > +     if (xmm->firmware_plp) {
> > +             vbnv = ((struct axlf *)xmm->firmware_plp)->
> > +                     m_header.m_platformVBNV;
> > +     } else if (xmm->firmware_blp) {
> > +             vbnv = ((struct axlf *)xmm->firmware_blp)->
> > +                     m_header.m_platformVBNV;
> > +     } else {
> > +             return NULL;
> > +     }
> > +
> > +     ret = kstrdup(vbnv, GFP_KERNEL);
> > +     if (!ret)
> > +             return NULL;
> > +     for (i = 0; i < strlen(ret); i++) {
> > +             if (ret[i] == ':' || ret[i] == '.')
> > +                     ret[i] = '_';
> > +     }
> > +     return ret;
> > +}
> > +
> > +static bool xmgmt_main_leaf_match(enum xrt_subdev_id id,
> > +     struct platform_device *pdev, void *arg)
> > +{
> > +     if (id == XRT_SUBDEV_GPIO)
> > +             return xrt_subdev_has_epname(pdev, arg);
> > +     else if (id == XRT_SUBDEV_QSPI)
> > +             return true;
> > +
> > +     return false;
> > +}
> > +
> > +static int get_dev_uuid(struct platform_device *pdev, char *uuidstr, size_t len)
> > +{
> > +     char uuid[16];
> > +     struct platform_device *gpio_leaf;
> > +     struct xrt_gpio_ioctl_rw gpio_arg = { 0 };
> > +     int err, i, count;
> > +
> > +     gpio_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_BLP_ROM);
> > +     if (!gpio_leaf) {
> > +             xrt_err(pdev, "can not get %s", NODE_BLP_ROM);
> > +             return -EINVAL;
> > +     }
> > +
> > +     gpio_arg.xgir_id = XRT_GPIO_ROM_UUID;
> > +     gpio_arg.xgir_buf = uuid;
> > +     gpio_arg.xgir_len = sizeof(uuid);
> > +     gpio_arg.xgir_offset = 0;
> > +     err = xrt_subdev_ioctl(gpio_leaf, XRT_GPIO_READ, &gpio_arg);
> > +     xrt_subdev_put_leaf(pdev, gpio_leaf);
> > +     if (err) {
> > +             xrt_err(pdev, "can not get uuid: %d", err);
> > +             return err;
> > +     }
> > +
> > +     for (count = 0, i = sizeof(uuid) - sizeof(u32);
> > +             i >= 0 && len > count; i -= sizeof(u32)) {
> > +             count += snprintf(uuidstr + count, len - count,
> > +                     "%08x", *(u32 *)&uuid[i]);
> > +     }
> > +     return 0;
> > +}
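For clarity, the loop above renders the 16-byte ROM buffer from the tail in
32-bit strides, so on a little-endian host the bytes 0x00..0x0f come out as
"0f0e0d0c0b0a09080706050403020100". A standalone version of the same
conversion, for reference only:

  /* Mirror of the loop above; little-endian host assumed. */
  static void rom_uuid_to_str(const u8 uuid[16], char *s, size_t len)
  {
  	int i, count;

  	for (count = 0, i = 16 - sizeof(u32);
  		i >= 0 && len > count; i -= sizeof(u32))
  		count += snprintf(s + count, len - count,
  			"%08x", *(const u32 *)&uuid[i]);
  }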
> > +
> > +int xmgmt_hot_reset(struct platform_device *pdev)
> > +{
> > +     int ret = xrt_subdev_broadcast_event(pdev, XRT_EVENT_PRE_HOT_RESET);
> > +
> > +     if (ret) {
> > +             xrt_err(pdev, "offline failed, hot reset is canceled");
> > +             return ret;
> > +     }
> > +
> > +     (void) xrt_subdev_hot_reset(pdev);
> > +     xrt_subdev_broadcast_event(pdev, XRT_EVENT_POST_HOT_RESET);
> > +     return 0;
> > +}
> > +
> > +static ssize_t reset_store(struct device *dev,
> > +     struct device_attribute *da, const char *buf, size_t count)
> > +{
> > +     struct platform_device *pdev = to_platform_device(dev);
> > +
> > +     (void) xmgmt_hot_reset(pdev);
> > +     return count;
> > +}
> > +static DEVICE_ATTR_WO(reset);
> > +
> > +static ssize_t VBNV_show(struct device *dev,
> > +     struct device_attribute *da, char *buf)
> > +{
> > +     ssize_t ret;
> > +     char *vbnv;
> > +     struct platform_device *pdev = to_platform_device(dev);
> > +
> > +     vbnv = xmgmt_get_vbnv(pdev);
> > +     ret = sprintf(buf, "%s\n", vbnv);
> > +     kfree(vbnv);
> > +     return ret;
> > +}
> > +static DEVICE_ATTR_RO(VBNV);
> > +
> > +static ssize_t logic_uuids_show(struct device *dev,
> > +     struct device_attribute *da, char *buf)
> > +{
> > +     ssize_t ret;
> > +     char uuid[80];
> > +     struct platform_device *pdev = to_platform_device(dev);
> > +
> > +     /*
> > +      * Getting UUID pointed to by VSEC,
> > +      * should be the same as logic UUID of BLP.
> > +      * TODO: add PLP logic UUID
> > +      */
> > +     ret = get_dev_uuid(pdev, uuid, sizeof(uuid));
> > +     if (ret)
> > +             return ret;
> > +     ret = sprintf(buf, "%s\n", uuid);
> > +     return ret;
> > +}
> > +static DEVICE_ATTR_RO(logic_uuids);
> > +
> > +static inline void uuid2str(const uuid_t *uuid, char *uuidstr, size_t len)
> > +{
> > +     int i, p;
> > +     u8 *u = (u8 *)uuid;
> > +
> > +     BUG_ON(sizeof(uuid_t) * 2 + 1 > len);
> > +     for (p = 0, i = sizeof(uuid_t) - 1; i >= 0; p++, i--)
> > +             (void) snprintf(&uuidstr[p*2], 3, "%02x", u[i]);
> > +}
> > +
> > +static ssize_t interface_uuids_show(struct device *dev,
> > +     struct device_attribute *da, char *buf)
> > +{
> > +     ssize_t ret = 0;
> > +     struct platform_device *pdev = to_platform_device(dev);
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +     u32 i;
> > +
> > +     /*
> > +      * TODO: add PLP interface UUID
> > +      */
> > +     for (i = 0; i < xmm->blp_intf_uuid_num; i++) {
> > +             char uuidstr[80];
> > +
> > +             uuid2str(&xmm->blp_intf_uuids[i], uuidstr, sizeof(uuidstr));
> > +             ret += sprintf(buf + ret, "%s\n", uuidstr);
> > +     }
> > +     return ret;
> > +}
> > +static DEVICE_ATTR_RO(interface_uuids);
> > +
> > +static struct attribute *xmgmt_main_attrs[] = {
> > +     &dev_attr_reset.attr,
> > +     &dev_attr_VBNV.attr,
> > +     &dev_attr_logic_uuids.attr,
> > +     &dev_attr_interface_uuids.attr,
> > +     NULL,
> > +};
> > +
> > +static ssize_t ulp_image_write(struct file *filp, struct kobject *kobj,
> > +     struct bin_attribute *attr, char *buffer, loff_t off, size_t count)
> > +{
> > +     struct xmgmt_main *xmm =
> > +             dev_get_drvdata(container_of(kobj, struct device, kobj));
> > +     struct axlf *xclbin;
> > +     ulong len;
> > +
> > +     if (off == 0) {
> > +             if (count < sizeof(*xclbin)) {
> > +                     xrt_err(xmm->pdev, "count is too small %ld", count);
> > +                     return -EINVAL;
> > +             }
> > +
> > +             if (xmm->firmware_ulp) {
> > +                     vfree(xmm->firmware_ulp);
> > +                     xmm->firmware_ulp = NULL;
> > +             }
> > +             xclbin = (struct axlf *)buffer;
> > +             xmm->firmware_ulp = vmalloc(xclbin->m_header.m_length);
> > +             if (!xmm->firmware_ulp)
> > +                     return -ENOMEM;
> > +     } else
> > +             xclbin = (struct axlf *)xmm->firmware_ulp;
> > +
> > +     len = xclbin->m_header.m_length;
> > +     if (off + count >= len && off < len) {
> > +             memcpy(xmm->firmware_ulp + off, buffer, len - off);
> > +             xmgmt_ulp_download(xmm->pdev, xmm->firmware_ulp);
> > +     } else if (off + count < len) {
> > +             memcpy(xmm->firmware_ulp + off, buffer, count);
> > +     }
> > +
> > +     return count;
> > +}
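One usage note on ulp_image above: the write implements a simple streaming
protocol where the chunk at offset 0 must carry at least the struct axlf
header (whose m_length then sizes the staging buffer), and the download fires
once the final byte lands. A hedged user-space feeder, the sysfs path being an
assumption:

  #include <fcntl.h>
  #include <unistd.h>

  /* Hypothetical sysfs path to the ulp_image node of the xmgmt_main leaf. */
  static int write_ulp_image(const char *node, const void *xclbin, size_t len)
  {
  	const char *p = xclbin;
  	size_t off = 0;
  	int fd = open(node, O_WRONLY);

  	if (fd < 0)
  		return -1;
  	while (off < len) {
  		ssize_t n = write(fd, p + off, len - off);

  		if (n <= 0) {
  			close(fd);
  			return -1;
  		}
  		off += n;
  	}
  	return close(fd);
  }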
> > +
> > +static struct bin_attribute ulp_image_attr = {
> > +     .attr = {
> > +             .name = "ulp_image",
> > +             .mode = 0200
> > +     },
> > +     .write = ulp_image_write,
> > +     .size = 0
> > +};
> > +
> > +static struct bin_attribute *xmgmt_main_bin_attrs[] = {
> > +     &ulp_image_attr,
> > +     NULL,
> > +};
> > +
> > +static const struct attribute_group xmgmt_main_attrgroup = {
> > +     .attrs = xmgmt_main_attrs,
> > +     .bin_attrs = xmgmt_main_bin_attrs,
> > +};
> > +
> > +static int load_firmware_from_flash(struct platform_device *pdev,
> > +     char **fw_buf, size_t *len)
> > +{
> > +     struct platform_device *flash_leaf = NULL;
> > +     struct flash_data_header header = { 0 };
> > +     const size_t magiclen = sizeof(header.fdh_id_begin.fdi_magic);
> > +     size_t flash_size = 0;
> > +     int ret = 0;
> > +     char *buf = NULL;
> > +     struct flash_data_ident id = { 0 };
> > +     struct xrt_flash_ioctl_read frd = { 0 };
> > +
> > +     xrt_info(pdev, "try loading fw from flash");
> > +
> > +     flash_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_QSPI,
> > +             PLATFORM_DEVID_NONE);
> > +     if (flash_leaf == NULL) {
> > +             xrt_err(pdev, "failed to hold flash leaf");
> > +             return -ENODEV;
> > +     }
> > +
> > +     (void) xrt_subdev_ioctl(flash_leaf, XRT_FLASH_GET_SIZE, &flash_size);
> > +     if (flash_size == 0) {
> > +             xrt_err(pdev, "failed to get flash size");
> > +             ret = -EINVAL;
> > +             goto done;
> > +     }
> > +
> > +     frd.xfir_buf = (char *)&header;
> > +     frd.xfir_size = sizeof(header);
> > +     frd.xfir_offset = flash_size - sizeof(header);
> > +     ret = xrt_subdev_ioctl(flash_leaf, XRT_FLASH_READ, &frd);
> > +     if (ret) {
> > +             xrt_err(pdev, "failed to read header from flash: %d", ret);
> > +             goto done;
> > +     }
> > +
> > +     /* Pick the end ident since the header is aligned at the end of flash. */
> > +     id = header.fdh_id_end;
> > +     if (strncmp(id.fdi_magic, XRT_DATA_MAGIC, magiclen)) {
> > +             char tmp[sizeof(id.fdi_magic) + 1] = { 0 };
> > +
> > +             memcpy(tmp, id.fdi_magic, magiclen);
> > +             xrt_info(pdev, "ignore meta data, bad magic: %s", tmp);
> > +             ret = -ENOENT;
> > +             goto done;
> > +     }
> > +     if (id.fdi_version != 0) {
> > +             xrt_info(pdev, "flash meta data version is not supported: %d",
> > +                     id.fdi_version);
> > +             ret = -EOPNOTSUPP;
> > +             goto done;
> > +     }
> > +
> > +     buf = vmalloc(header.fdh_data_len);
> > +     if (buf == NULL) {
> > +             ret = -ENOMEM;
> > +             goto done;
> > +     }
> > +
> > +     frd.xfir_buf = buf;
> > +     frd.xfir_size = header.fdh_data_len;
> > +     frd.xfir_offset = header.fdh_data_offset;
> > +     ret = xrt_subdev_ioctl(flash_leaf, XRT_FLASH_READ, &frd);
> > +     if (ret) {
> > +             xrt_err(pdev, "failed to read meta data from flash: %d", ret);
> > +             goto done;
> > +     } else if (flash_xrt_data_get_parity32(buf, header.fdh_data_len) ^
> > +             header.fdh_data_parity) {
> > +             xrt_err(pdev, "meta data is corrupted");
> > +             ret = -EINVAL;
> > +             goto done;
> > +     }
> > +
> > +     xrt_info(pdev, "found meta data of %d bytes @0x%x",
> > +             header.fdh_data_len, header.fdh_data_offset);
> > +     *fw_buf = buf;
> > +     *len = header.fdh_data_len;
> > +
> > +done:
> > +     (void) xrt_subdev_put_leaf(pdev, flash_leaf);
> > +     return ret;
> > +}
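On the parity check above: the stored fdh_data_parity must cancel the parity
of the payload, so the XOR of the two is zero when the metadata is intact.
flash_xrt_data_get_parity32() itself comes from
include/linux/xrt/flash_xrt_data.h, which is not in this hunk; a plausible
shape, stated purely as an assumption, is a running XOR over 32-bit words:

  /*
   * Assumed shape of flash_xrt_data_get_parity32(); the real helper lives
   * in flash_xrt_data.h and may differ, e.g. in tail handling.
   */
  static u32 parity32(const char *buf, size_t len)
  {
  	u32 p = 0;
  	size_t i;

  	for (i = 0; i + sizeof(u32) <= len; i += sizeof(u32))
  		p ^= *(const u32 *)(buf + i);
  	return p;
  }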
> > +
> > +static int load_firmware_from_disk(struct platform_device *pdev, char **fw_buf,
> > +     size_t *len)
> > +{
> > +     char uuid[80];
> > +     int err = 0;
> > +     char fw_name[256];
> > +     const struct firmware *fw;
> > +
> > +     err = get_dev_uuid(pdev, uuid, sizeof(uuid));
> > +     if (err)
> > +             return err;
> > +
> > +     (void) snprintf(fw_name,
> > +             sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
> > +     xrt_info(pdev, "try loading fw: %s", fw_name);
> > +
> > +     err = request_firmware(&fw, fw_name, DEV(pdev));
> > +     if (err)
> > +             return err;
> > +
> > +     *fw_buf = vmalloc(fw->size);
> > +     *len = fw->size;
> > +     if (*fw_buf != NULL)
> > +             memcpy(*fw_buf, fw->data, fw->size);
> > +     else
> > +             err = -ENOMEM;
> > +
> > +     release_firmware(fw);
> > +     return err;
> > +}
> > +
> > +static const char *xmgmt_get_axlf_firmware(struct xmgmt_main *xmm,
> > +     enum provider_kind kind)
> > +{
> > +     switch (kind) {
> > +     case XMGMT_BLP:
> > +             return xmm->firmware_blp;
> > +     case XMGMT_PLP:
> > +             return xmm->firmware_plp;
> > +     case XMGMT_ULP:
> > +             return xmm->firmware_ulp;
> > +     default:
> > +             xrt_err(xmm->pdev, "unknown axlf kind: %d", kind);
> > +             return NULL;
> > +     }
> > +}
> > +
> > +char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +     char *dtb = NULL;
> > +     const char *provider = xmgmt_get_axlf_firmware(xmm, kind);
> > +     int rc;
> > +
> > +     if (provider == NULL)
> > +             return dtb;
> > +
> > +     rc = xrt_xclbin_get_metadata(DEV(pdev), provider, &dtb);
> > +     if (rc)
> > +             xrt_err(pdev, "failed to find dtb: %d", rc);
> > +     return dtb;
> > +}
> > +
> > +static const char *get_uuid_from_firmware(struct platform_device *pdev,
> > +     const char *axlf)
> > +{
> > +     const void *uuid = NULL;
> > +     const void *uuiddup = NULL;
> > +     void *dtb = NULL;
> > +     int rc;
> > +
> > +     rc = xrt_xclbin_get_section(axlf, PARTITION_METADATA, &dtb, NULL);
> > +     if (rc)
> > +             return NULL;
> > +
> > +     rc = xrt_md_get_prop(DEV(pdev), dtb, NULL, NULL,
> > +             PROP_LOGIC_UUID, &uuid, NULL);
> > +     if (!rc)
> > +             uuiddup = kstrdup(uuid, GFP_KERNEL);
> > +     vfree(dtb);
> > +     return uuiddup;
> > +}
> > +
> > +static bool is_valid_firmware(struct platform_device *pdev,
> > +     char *fw_buf, size_t fw_len)
> > +{
> > +     struct axlf *axlf = (struct axlf *)fw_buf;
> > +     size_t axlflen = axlf->m_header.m_length;
> > +     const char *fw_uuid;
> > +     char dev_uuid[80];
> > +     int err;
> > +
> > +     err = get_dev_uuid(pdev, dev_uuid, sizeof(dev_uuid));
> > +     if (err)
> > +             return false;
> > +
> > +     if (memcmp(fw_buf, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)) != 0) {
> > +             xrt_err(pdev, "unknown fw format");
> > +             return false;
> > +     }
> > +
> > +     if (axlflen > fw_len) {
> > +             xrt_err(pdev, "truncated fw, length: %ld, expect: %ld",
> > +                     fw_len, axlflen);
> > +             return false;
> > +     }
> > +
> > +     fw_uuid = get_uuid_from_firmware(pdev, fw_buf);
> > +     if (fw_uuid == NULL || strcmp(fw_uuid, dev_uuid) != 0) {
> > +             xrt_err(pdev, "bad fw UUID: %s, expect: %s",
> > +                     fw_uuid ? fw_uuid : "<none>", dev_uuid);
> > +             kfree(fw_uuid);
> > +             return false;
> > +     }
> > +
> > +     kfree(fw_uuid);
> > +     return true;
> > +}
> > +
> > +int xmgmt_get_provider_uuid(struct platform_device *pdev,
> > +     enum provider_kind kind, uuid_t *uuid)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +     const char *fwbuf;
> > +     const char *fw_uuid;
> > +     int rc = -ENOENT;
> > +
> > +     mutex_lock(&xmm->busy_mutex);
> > +
> > +     fwbuf = xmgmt_get_axlf_firmware(xmm, kind);
> > +     if (fwbuf == NULL)
> > +             goto done;
> > +
> > +     fw_uuid = get_uuid_from_firmware(pdev, fwbuf);
> > +     if (fw_uuid == NULL)
> > +             goto done;
> > +
> > +     rc = xrt_md_uuid_strtoid(DEV(pdev), fw_uuid, uuid);
> > +     kfree(fw_uuid);
> > +
> > +done:
> > +     mutex_unlock(&xmm->busy_mutex);
> > +     return rc;
> > +}
> > +
> > +static int xmgmt_create_blp(struct xmgmt_main *xmm)
> > +{
> > +     struct platform_device *pdev = xmm->pdev;
> > +     int rc = 0;
> > +     char *dtb = NULL;
> > +
> > +     dtb = xmgmt_get_dtb(pdev, XMGMT_BLP);
> > +     if (dtb) {
> > +             rc = xrt_subdev_create_partition(pdev, dtb);
> > +             if (rc < 0)
> > +                     xrt_err(pdev, "failed to create BLP: %d", rc);
> > +             else
> > +                     rc = 0;
> > +
> > +             BUG_ON(xmm->blp_intf_uuids);
> > +             xrt_md_get_intf_uuids(&pdev->dev, dtb,
> > +                     &xmm->blp_intf_uuid_num, NULL);
> > +             if (xmm->blp_intf_uuid_num > 0) {
> > +                     xmm->blp_intf_uuids = vzalloc(sizeof(uuid_t) *
> > +                             xmm->blp_intf_uuid_num);
> > +                     xrt_md_get_intf_uuids(&pdev->dev, dtb,
> > +                             &xmm->blp_intf_uuid_num, xmm->blp_intf_uuids);
> > +             }
> > +     }
> > +
> > +     vfree(dtb);
> > +     return rc;
> > +}
> > +
> > +static int xmgmt_main_event_cb(struct platform_device *pdev,
> > +     enum xrt_events evt, void *arg)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +     struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
> > +     enum xrt_subdev_id id;
> > +     int instance;
> > +     size_t fwlen;
> > +
> > +     switch (evt) {
> > +     case XRT_EVENT_POST_CREATION: {
> > +             id = esd->xevt_subdev_id;
> > +             instance = esd->xevt_subdev_instance;
> > +             xrt_info(pdev, "processing event %d for (%d, %d)",
> > +                     evt, id, instance);
> > +
> > +             if (id == XRT_SUBDEV_GPIO)
> > +                     xmm->gpio_ready = true;
> > +             else if (id == XRT_SUBDEV_QSPI)
> > +                     xmm->flash_ready = true;
> > +             else
> > +                     BUG_ON(1);
> > +
> > +             if (xmm->gpio_ready && xmm->flash_ready) {
> > +                     int rc;
> > +
> > +                     rc = load_firmware_from_disk(pdev, &xmm->firmware_blp,
> > +                             &fwlen);
> > +                     if (rc != 0) {
> > +                             rc = load_firmware_from_flash(pdev,
> > +                                     &xmm->firmware_blp, &fwlen);
> > +                     }
> > +                     if (rc == 0 && is_valid_firmware(pdev,
> > +                         xmm->firmware_blp, fwlen))
> > +                             (void) xmgmt_create_blp(xmm);
> > +                     else
> > +                             xrt_err(pdev,
> > +                                     "failed to find firmware, giving up");
> > +                     xmm->evt_hdl = NULL;
> > +             }
> > +             break;
> > +     }
> > +     case XRT_EVENT_POST_ATTACH:
> > +             xmgmt_peer_notify_state(xmm->mailbox_hdl, true);
> > +             break;
> > +     case XRT_EVENT_PRE_DETACH:
> > +             xmgmt_peer_notify_state(xmm->mailbox_hdl, false);
> > +             break;
> > +     default:
> > +             xrt_info(pdev, "ignored event %d", evt);
> > +             break;
> > +     }
> > +
> > +     return XRT_EVENT_CB_CONTINUE;
> > +}
> > +
> > +static int xmgmt_main_probe(struct platform_device *pdev)
> > +{
> > +     struct xmgmt_main *xmm;
> > +
> > +     xrt_info(pdev, "probing...");
> > +
> > +     xmm = devm_kzalloc(DEV(pdev), sizeof(*xmm), GFP_KERNEL);
> > +     if (!xmm)
> > +             return -ENOMEM;
> > +
> > +     xmm->pdev = pdev;
> > +     platform_set_drvdata(pdev, xmm);
> > +     xmm->fmgr = xmgmt_fmgr_probe(pdev);
> > +     xmm->mailbox_hdl = xmgmt_mailbox_probe(pdev);
> > +     mutex_init(&xmm->busy_mutex);
> > +
> > +     xmm->evt_hdl = xrt_subdev_add_event_cb(pdev,
> > +             xmgmt_main_leaf_match, NODE_BLP_ROM, xmgmt_main_event_cb);
> > +
> > +     /* Ready to handle req thru sysfs nodes. */
> > +     if (sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup))
> > +             xrt_err(pdev, "failed to create sysfs group");
> > +     return 0;
> > +}
> > +
> > +static int xmgmt_main_remove(struct platform_device *pdev)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +
> > +     /* By now, partition driver should prevent any inter-leaf call. */
> > +
> > +     xrt_info(pdev, "leaving...");
> > +
> > +     if (xmm->evt_hdl)
> > +             (void) xrt_subdev_remove_event_cb(pdev, xmm->evt_hdl);
> > +     vfree(xmm->blp_intf_uuids);
> > +     vfree(xmm->firmware_blp);
> > +     vfree(xmm->firmware_plp);
> > +     vfree(xmm->firmware_ulp);
> > +     (void) xmgmt_fmgr_remove(xmm->fmgr);
> > +     xmgmt_mailbox_remove(xmm->mailbox_hdl);
> > +     (void) sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup);
> > +     return 0;
> > +}
> > +
> > +static int
> > +xmgmt_main_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +     int ret = 0;
> > +
> > +     xrt_info(pdev, "handling IOCTL cmd: %d", cmd);
> > +
> > +     switch (cmd) {
> > +     case XRT_MGMT_MAIN_GET_AXLF_SECTION: {
> > +             struct xrt_mgmt_main_ioctl_get_axlf_section *get =
> > +                     (struct xrt_mgmt_main_ioctl_get_axlf_section *)arg;
> > +             const char *firmware =
> > +                     xmgmt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
> > +
> > +             if (firmware == NULL) {
> > +                     ret = -ENOENT;
> > +             } else {
> > +                     ret = xrt_xclbin_get_section(firmware,
> > +                             get->xmmigas_section_kind,
> > +                             &get->xmmigas_section,
> > +                             &get->xmmigas_section_size);
> > +             }
> > +             break;
> > +     }
> > +     case XRT_MGMT_MAIN_GET_VBNV: {
> > +             char **vbnv_p = (char **)arg;
> > +
> > +             *vbnv_p = xmgmt_get_vbnv(pdev);
> > +             break;
> > +     }
> > +     default:
> > +             xrt_err(pdev, "unknown cmd: %d", cmd);
> > +             ret = -EINVAL;
> > +             break;
> > +     }
> > +     return ret;
> > +}
> > +
> > +static int xmgmt_main_open(struct inode *inode, struct file *file)
> > +{
> > +     struct platform_device *pdev = xrt_devnode_open(inode);
> > +
> > +     /* Device may have gone already when we get here. */
> > +     if (!pdev)
> > +             return -ENODEV;
> > +
> > +     xrt_info(pdev, "opened");
> > +     file->private_data = platform_get_drvdata(pdev);
> > +     return 0;
> > +}
> > +
> > +static int xmgmt_main_close(struct inode *inode, struct file *file)
> > +{
> > +     struct xmgmt_main *xmm = file->private_data;
> > +
> > +     xrt_devnode_close(inode);
> > +
> > +     xrt_info(xmm->pdev, "closed");
> > +     return 0;
> > +}
> > +
> > +static int xmgmt_bitstream_axlf_fpga_mgr(struct xmgmt_main *xmm,
> > +     void *axlf, size_t size)
> > +{
> > +     int ret;
> > +     struct fpga_image_info info = { 0 };
> > +
> > +     BUG_ON(!mutex_is_locked(&xmm->busy_mutex));
> > +
> > +     /*
> > +      * Should any error happen during download, we can't trust
> > +      * the cached xclbin any more.
> > +      */
> > +     vfree(xmm->firmware_ulp);
> > +     xmm->firmware_ulp = NULL;
> > +
> > +     info.buf = (char *)axlf;
> > +     info.count = size;
> > +     ret = fpga_mgr_load(xmm->fmgr, &info);
> > +     if (ret == 0)
> > +             xmm->firmware_ulp = axlf;
> > +
> > +     return ret;
> > +}
> > +
> > +int bitstream_axlf_mailbox(struct platform_device *pdev, const void *axlf)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +     void *copy_buffer = NULL;
> > +     size_t copy_buffer_size = 0;
> > +     const struct axlf *xclbin_obj = axlf;
> > +     int ret = 0;
> > +
> > +     if (memcmp(xclbin_obj->m_magic, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)))
> > +             return -EINVAL;
> > +
> > +     copy_buffer_size = xclbin_obj->m_header.m_length;
> > +     if (copy_buffer_size > MAX_XCLBIN_SIZE)
> > +             return -EINVAL;
> > +     copy_buffer = vmalloc(copy_buffer_size);
> > +     if (copy_buffer == NULL)
> > +             return -ENOMEM;
> > +     (void) memcpy(copy_buffer, axlf, copy_buffer_size);
> > +
> > +     mutex_lock(&xmm->busy_mutex);
> > +     ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
> > +     mutex_unlock(&xmm->busy_mutex);
> > +     if (ret)
> > +             vfree(copy_buffer);
> > +     return ret;
> > +}
> > +
> > +static int bitstream_axlf_ioctl(struct xmgmt_main *xmm, const void __user *arg)
> > +{
> > +     void *copy_buffer = NULL;
> > +     size_t copy_buffer_size = 0;
> > +     struct xmgmt_ioc_bitstream_axlf ioc_obj = { 0 };
> > +     struct axlf xclbin_obj = { {0} };
> > +     int ret = 0;
> > +
> > +     if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
> > +             return -EFAULT;
> > +     if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin,
> > +             sizeof(xclbin_obj)))
> > +             return -EFAULT;
> > +     if (memcmp(xclbin_obj.m_magic, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)))
> > +             return -EINVAL;
> > +
> > +     copy_buffer_size = xclbin_obj.m_header.m_length;
> > +     if (copy_buffer_size > MAX_XCLBIN_SIZE)
> > +             return -EINVAL;
> > +     copy_buffer = vmalloc(copy_buffer_size);
> > +     if (copy_buffer == NULL)
> > +             return -ENOMEM;
> > +
> > +     if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
> > +             vfree(copy_buffer);
> > +             return -EFAULT;
> > +     }
> > +
> > +     ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
> > +     if (ret)
> > +             vfree(copy_buffer);
> > +
> > +     return ret;
> > +}
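To make the uAPI surface above concrete, this is roughly what a user-space
downloader would look like; the character node name comes from xsf_dev_name
further down, but the exact /dev path is an assumption:

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/xrt/xmgmt-ioctl.h>

  /* Hypothetical caller of XMGMT_IOCICAPDOWNLOAD_AXLF; xclbin is a full,
   * already-loaded axlf image in user memory. */
  static int download_xclbin(const char *node, struct axlf *xclbin)
  {
  	struct xmgmt_ioc_bitstream_axlf obj = { .xclbin = xclbin };
  	int fd = open(node, O_RDWR);
  	int rc;

  	if (fd < 0)
  		return -1;
  	rc = ioctl(fd, XMGMT_IOCICAPDOWNLOAD_AXLF, &obj);
  	close(fd);
  	return rc;
  }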
> > +
> > +static long xmgmt_main_ioctl(struct file *filp, unsigned int cmd,
> > +     unsigned long arg)
> > +{
> > +     long result = 0;
> > +     struct xmgmt_main *xmm = filp->private_data;
> > +
> > +     BUG_ON(!xmm);
> > +
> > +     if (_IOC_TYPE(cmd) != XMGMT_IOC_MAGIC)
> > +             return -ENOTTY;
> > +
> > +     mutex_lock(&xmm->busy_mutex);
> > +
> > +     xrt_info(xmm->pdev, "ioctl cmd %d, arg %ld", cmd, arg);
> > +     switch (cmd) {
> > +     case XMGMT_IOCICAPDOWNLOAD_AXLF:
> > +             result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
> > +             break;
> > +     default:
> > +             result = -ENOTTY;
> > +             break;
> > +     }
> > +
> > +     mutex_unlock(&xmm->busy_mutex);
> > +     return result;
> > +}
> > +
> > +void *xmgmt_pdev2mailbox(struct platform_device *pdev)
> > +{
> > +     struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> > +
> > +     return xmm->mailbox_hdl;
> > +}
> > +
> > +struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[] = {
> > +     {
> > +             .xse_names = (struct xrt_subdev_ep_names []){
> > +                     { .ep_name = NODE_MGMT_MAIN },
> > +                     { NULL },
> > +             },
> > +             .xse_min_ep = 1,
> > +     },
> > +     { 0 },
> > +};
> > +
> > +struct xrt_subdev_drvdata xmgmt_main_data = {
> > +     .xsd_dev_ops = {
> > +             .xsd_ioctl = xmgmt_main_leaf_ioctl,
> > +     },
> > +     .xsd_file_ops = {
> > +             .xsf_ops = {
> > +                     .owner = THIS_MODULE,
> > +                     .open = xmgmt_main_open,
> > +                     .release = xmgmt_main_close,
> > +                     .unlocked_ioctl = xmgmt_main_ioctl,
> > +             },
> > +             .xsf_dev_name = "xmgmt",
> > +     },
> > +};
> > +
> > +static const struct platform_device_id xmgmt_main_id_table[] = {
> > +     { XMGMT_MAIN, (kernel_ulong_t)&xmgmt_main_data },
> > +     { },
> > +};
> > +
> > +struct platform_driver xmgmt_main_driver = {
> > +     .driver = {
> > +             .name    = XMGMT_MAIN,
> > +     },
> > +     .probe   = xmgmt_main_probe,
> > +     .remove  = xmgmt_main_remove,
> > +     .id_table = xmgmt_main_id_table,
> > +};
> > diff --git a/drivers/fpga/alveo/mgmt/xmgmt-root.c b/drivers/fpga/alveo/mgmt/xmgmt-root.c
> > new file mode 100644
> > index 000000000000..005fd5e42651
> > --- /dev/null
> > +++ b/drivers/fpga/alveo/mgmt/xmgmt-root.c
> > @@ -0,0 +1,375 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Xilinx Alveo Management Function Driver
> > + *
> > + * Copyright (C) 2020 Xilinx, Inc.
> > + *
> > + * Authors:
> > + *   Cheng Zhen <maxz@xilinx.com>
> > + */
> > +
> > +#include <linux/module.h>
> > +#include <linux/pci.h>
> > +#include <linux/aer.h>
> > +#include <linux/vmalloc.h>
> > +#include <linux/delay.h>
> > +
> > +#include "xrt-root.h"
> > +#include "xrt-subdev.h"
> > +#include "xmgmt-main-impl.h"
> > +#include "xrt-metadata.h"
> > +
> > +#define      XMGMT_MODULE_NAME       "xmgmt"
> > +#define      XMGMT_DRIVER_VERSION    "4.0.0"
> > +
> > +#define      XMGMT_PDEV(xm)          ((xm)->pdev)
> > +#define      XMGMT_DEV(xm)           (&(XMGMT_PDEV(xm)->dev))
> > +#define xmgmt_err(xm, fmt, args...)  \
> > +     dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> > +#define xmgmt_warn(xm, fmt, args...) \
> > +     dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> > +#define xmgmt_info(xm, fmt, args...) \
> > +     dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> > +#define xmgmt_dbg(xm, fmt, args...)  \
> > +     dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> > +#define      XMGMT_DEV_ID(pdev)                      \
> > +     ((pci_domain_nr(pdev->bus) << 16) |     \
> > +     PCI_DEVID(pdev->bus->number, 0))
> > +
> > +static struct class *xmgmt_class;
> > +static const struct pci_device_id xmgmt_pci_ids[] = {
> > +     { PCI_DEVICE(0x10EE, 0xd020), },
> > +     { PCI_DEVICE(0x10EE, 0x5020), },
> > +     { 0, }
> > +};
> > +
> > +struct xmgmt {
> > +     struct pci_dev *pdev;
> > +     void *root;
> > +
> > +     /* save config for pci reset */
> > +     u32 saved_config[8][16];
> > +     bool ready;
> > +};
> > +
> > +static int xmgmt_config_pci(struct xmgmt *xm)
> > +{
> > +     struct pci_dev *pdev = XMGMT_PDEV(xm);
> > +     int rc;
> > +
> > +     rc = pcim_enable_device(pdev);
> > +     if (rc < 0) {
> > +             xmgmt_err(xm, "failed to enable device: %d", rc);
> > +             return rc;
> > +     }
> > +
> > +     rc = pci_enable_pcie_error_reporting(pdev);
> > +     if (rc)
> > +             xmgmt_warn(xm, "failed to enable AER: %d", rc);
> > +
> > +     pci_set_master(pdev);
> > +
> > +     rc = pcie_get_readrq(pdev);
> > +     if (rc < 0) {
> > +             xmgmt_err(xm, "failed to read mrrs %d", rc);
> > +             return rc;
> > +     }
> > +     if (rc > 512) {
> > +             rc = pcie_set_readrq(pdev, 512);
> > +             if (rc) {
> > +                     xmgmt_err(xm, "failed to force mrrs %d", rc);
> > +                     return rc;
> > +             }
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static void xmgmt_save_config_space(struct pci_dev *pdev, u32 *saved_config)
> > +{
> > +     int i;
> > +
> > +     for (i = 0; i < 16; i++)
> > +             pci_read_config_dword(pdev, i * 4, &saved_config[i]);
> > +}
> > +
> > +static int xmgmt_match_slot_and_save(struct device *dev, void *data)
> > +{
> > +     struct xmgmt *xm = data;
> > +     struct pci_dev *pdev = to_pci_dev(dev);
> > +
> > +     if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
> > +             pci_cfg_access_lock(pdev);
> > +             pci_save_state(pdev);
> > +             xmgmt_save_config_space(pdev,
> > +                     xm->saved_config[PCI_FUNC(pdev->devfn)]);
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static void xmgmt_pci_save_config_all(struct xmgmt *xm)
> > +{
> > +     bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
> > +}
> > +
> > +static void xmgmt_restore_config_space(struct pci_dev *pdev, u32 *config_saved)
> > +{
> > +     int i;
> > +     u32 val;
> > +
> > +     for (i = 0; i < 16; i++) {
> > +             pci_read_config_dword(pdev, i * 4, &val);
> > +             if (val == config_saved[i])
> > +                     continue;
> > +
> > +             pci_write_config_dword(pdev, i * 4, config_saved[i]);
> > +             pci_read_config_dword(pdev, i * 4, &val);
> > +             if (val != config_saved[i]) {
> > +                     dev_err(&pdev->dev,
> > +                              "restore config at %d failed", i * 4);
> > +             }
> > +     }
> > +}
> > +
> > +static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
> > +{
> > +     struct xmgmt *xm = data;
> > +     struct pci_dev *pdev = to_pci_dev(dev);
> > +
> > +     if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
> > +             xmgmt_restore_config_space(pdev,
> > +                     xm->saved_config[PCI_FUNC(pdev->devfn)]);
> > +
> > +             pci_restore_state(pdev);
> > +             pci_cfg_access_unlock(pdev);
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
> > +{
> > +     bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
> > +}
> > +
> > +void xroot_hot_reset(struct pci_dev *pdev)
> > +{
> > +     struct xmgmt *xm = pci_get_drvdata(pdev);
> > +     struct pci_bus *bus;
> > +     u8 pci_bctl;
> > +     u16 pci_cmd, devctl;
> > +     int i;
> > +
> > +     xmgmt_info(xm, "hot reset start");
> > +
> > +     xmgmt_pci_save_config_all(xm);
> > +
> > +     pci_disable_device(pdev);
> > +
> > +     bus = pdev->bus;
> > +
> > +     /*
> > +      * When flipping the SBR bit, the device can fall off the bus. This is
> > +      * usually no problem at all so long as drivers are working properly
> > +      * after SBR. However, some systems complain bitterly when the device
> > +      * falls off the bus.
> > +      * The quick solution is to temporarily disable SERR reporting on the
> > +      * switch port during SBR.
> > +      */
> > +
> > +     pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
> > +     pci_write_config_word(bus->self, PCI_COMMAND,
> > +             (pci_cmd & ~PCI_COMMAND_SERR));
> > +     pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
> > +     pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL,
> > +             (devctl & ~PCI_EXP_DEVCTL_FERE));
> > +     pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
> > +     pci_bctl |= PCI_BRIDGE_CTL_BUS_RESET;
> > +     pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
> > +
> > +     msleep(100);
> > +     pci_bctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
> > +     pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
> > +     ssleep(1);
> > +
> > +     pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
> > +     pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
> > +
> > +     pci_enable_device(pdev);
> > +
> > +     for (i = 0; i < 300; i++) {
> > +             pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
> > +             if (pci_cmd != 0xffff)
> > +                     break;
> > +             msleep(20);
> > +     }
> > +
> > +     xmgmt_info(xm, "waiting for %d ms", i * 20);
> > +
> > +     xmgmt_pci_restore_config_all(xm);
> > +
> > +     xmgmt_config_pci(xm);
> > +}
> > +
> > +static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
> > +{
> > +     char *dtb = NULL;
> > +     int ret;
> > +
> > +     ret = xrt_md_create(DEV(xm->pdev), &dtb);
> > +     if (ret) {
> > +             xmgmt_err(xm, "create metadata failed, ret %d", ret);
> > +             goto failed;
> > +     }
> > +
> > +     ret = xroot_add_simple_node(xm->root, dtb, NODE_TEST);
> > +     if (ret)
> > +             goto failed;
> > +
> > +     ret = xroot_add_vsec_node(xm->root, dtb);
> > +     if (ret == -ENOENT) {
> > +             /*
> > +              * We may be dealing with an MFG board.
> > +              * Try vsec-golden which will bring up all hard-coded leaves
> > +              * at hard-coded offsets.
> > +              */
> > +             ret = xroot_add_simple_node(xm->root, dtb, NODE_VSEC_GOLDEN);
> > +     } else if (ret == 0) {
> > +             ret = xroot_add_simple_node(xm->root, dtb, NODE_MGMT_MAIN);
> > +     }
> > +     if (ret)
> > +             goto failed;
> > +
> > +     *root_dtb = dtb;
> > +     return 0;
> > +
> > +failed:
> > +     vfree(dtb);
> > +     return ret;
> > +}
> > +
> > +static ssize_t ready_show(struct device *dev,
> > +     struct device_attribute *da, char *buf)
> > +{
> > +     struct pci_dev *pdev = to_pci_dev(dev);
> > +     struct xmgmt *xm = pci_get_drvdata(pdev);
> > +
> > +     return sprintf(buf, "%d\n", xm->ready);
> > +}
> > +static DEVICE_ATTR_RO(ready);
> > +
> > +static struct attribute *xmgmt_root_attrs[] = {
> > +     &dev_attr_ready.attr,
> > +     NULL
> > +};
> > +
> > +static struct attribute_group xmgmt_root_attr_group = {
> > +     .attrs = xmgmt_root_attrs,
> > +};
> > +
> > +static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > +{
> > +     int ret;
> > +     struct device *dev = DEV(pdev);
> > +     struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
> > +     char *dtb = NULL;
> > +
> > +     if (!xm)
> > +             return -ENOMEM;
> > +     xm->pdev = pdev;
> > +     pci_set_drvdata(pdev, xm);
> > +
> > +     ret = xmgmt_config_pci(xm);
> > +     if (ret)
> > +             goto failed;
> > +
> > +     ret = xroot_probe(pdev, &xm->root);
> > +     if (ret)
> > +             goto failed;
> > +
> > +     ret = xmgmt_create_root_metadata(xm, &dtb);
> > +     if (ret)
> > +             goto failed_metadata;
> > +
> > +     ret = xroot_create_partition(xm->root, dtb);
> > +     vfree(dtb);
> > +     if (ret)
> > +             xmgmt_err(xm, "failed to create root partition: %d", ret);
> > +
> > +     if (!xroot_wait_for_bringup(xm->root))
> > +             xmgmt_err(xm, "failed to bringup all partitions");
> > +     else
> > +             xm->ready = true;
> > +
> > +     ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
> > +     if (ret) {
> > +             /* Warning instead of failing the probe. */
> > +             xrt_warn(pdev, "create xmgmt root attrs failed: %d", ret);
> > +     }
> > +
> > +     xroot_broadcast(xm->root, XRT_EVENT_POST_ATTACH);
> > +     xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
> > +     return 0;
> > +
> > +failed_metadata:
> > +     (void) xroot_remove(xm->root);
> > +failed:
> > +     pci_set_drvdata(pdev, NULL);
> > +     return ret;
> > +}
> > +
> > +static void xmgmt_remove(struct pci_dev *pdev)
> > +{
> > +     struct xmgmt *xm = pci_get_drvdata(pdev);
> > +
> > +     xroot_broadcast(xm->root, XRT_EVENT_PRE_DETACH);
> > +     sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
> > +     (void) xroot_remove(xm->root);
> > +     pci_disable_pcie_error_reporting(xm->pdev);
> > +     xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
> > +}
> > +
> > +static struct pci_driver xmgmt_driver = {
> > +     .name = XMGMT_MODULE_NAME,
> > +     .id_table = xmgmt_pci_ids,
> > +     .probe = xmgmt_probe,
> > +     .remove = xmgmt_remove,
> > +};
> > +
> > +static int __init xmgmt_init(void)
> > +{
> > +     int res = xrt_subdev_register_external_driver(XRT_SUBDEV_MGMT_MAIN,
> > +             &xmgmt_main_driver, xrt_mgmt_main_endpoints);
> > +
> > +     if (res)
> > +             return res;
> > +
> > +     xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
> > +     if (IS_ERR(xmgmt_class)) {
> > +             xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
> > +             return PTR_ERR(xmgmt_class);
> > +     }
> > +
> > +     res = pci_register_driver(&xmgmt_driver);
> > +     if (res) {
> > +             class_destroy(xmgmt_class);
> > +             xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
> > +             return res;
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static __exit void xmgmt_exit(void)
> > +{
> > +     pci_unregister_driver(&xmgmt_driver);
> > +     class_destroy(xmgmt_class);
> > +     xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
> > +}
> > +
> > +module_init(xmgmt_init);
> > +module_exit(xmgmt_exit);
> > +
> > +MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
> > +MODULE_VERSION(XMGMT_DRIVER_VERSION);
> > +MODULE_AUTHOR("XRT Team <runtime@xilinx.com>");
> > +MODULE_DESCRIPTION("Xilinx Alveo management function driver");
> > +MODULE_LICENSE("GPL v2");
> > --
> > 2.17.1
>
> I have not yet looked at the whole thing, but why does the FPGA Manager
> only copy things around?
>
> Any reason your partitions cannot be modelled as FPGA regions/bridges?
>

We will address this as Max explained in the other email.
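
For reference, a minimal sketch of what modelling the user partition as an
FPGA region could look like, assuming an fpga_manager handle is added to
struct xmgmt (xmgmt_register_user_region(), xm->fpga_mgr and xm->region
below are illustrative names, not part of this series):

#include <linux/fpga/fpga-mgr.h>
#include <linux/fpga/fpga-region.h>

static int xmgmt_register_user_region(struct xmgmt *xm)
{
        struct fpga_region *region;
        int ret;

        /*
         * Create a region backed by the management function's FPGA
         * manager. A get_bridges() callback could gate the user
         * partition with FPGA bridges; NULL means the region controls
         * no bridges.
         */
        region = devm_fpga_region_create(DEV(xm->pdev), xm->fpga_mgr, NULL);
        if (!region)
                return -ENOMEM;

        ret = fpga_region_register(region);
        if (ret)
                return ret;

        xm->region = region;
        return 0;
}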

> It would be helpful to split this up into smaller chunks, that make it
> easier to review. The FPGA Manager driver should be a separate patch,
> etc.
>

Will resolve in the next revision of the patch series.

Thanks for your feedback.
-Sonal

> - Moritz
>  

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
  2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
                   ` (9 preceding siblings ...)
  2020-12-02  2:14 ` Xu Yilun
@ 2020-12-06 16:31 ` Tom Rix
  2020-12-08 21:40   ` Sonal Santan
  10 siblings, 1 reply; 29+ messages in thread
From: Tom Rix @ 2020-12-06 16:31 UTC (permalink / raw)
  To: Sonal Santan, linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree

On 11/28/20 4:00 PM, Sonal Santan wrote:
> Hello,
>
> This patch series adds management physical function driver for Xilinx Alveo PCIe
> accelerator cards, https://www.xilinx.com/products/boards-and-kits/alveo.html
> This driver is part of Xilinx Runtime (XRT) open source stack.

A few general things.

Use scripts/get_maintainer.pl to find who a patch should go to; I should have been on the cc line.

Each patch should, at a minimum, pass scripts/checkpatch.pl; none of these do.
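
For example, from the top of the kernel tree (the patch file name below is
illustrative):

  ./scripts/checkpatch.pl 0007-fpga-xrt-Alveo-management-physical-function-driver.patch
  ./scripts/get_maintainer.pl 0007-fpga-xrt-Alveo-management-physical-function-driver.patch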

Looking broadly at the files, there are competing names, xrt and alveo.

It seems like xrt is the dfl equivalent, so maybe

drivers/fpga/alveo should be drivers/fpga/xrt

There are a lot of files with unnecessary prefixes. For example,

fpga/alveo/include/xrt-ucs.h could just be fpga/alveo/include/ucs.h

Individual subdevs may not belong in the fpga subsystem.

I think it would be better to submit these one at a time as is done for dfl.

So this will not block getting the basics done: in the next revision, can you leave the subdevs out?

 

Because of the checkpatch.pl failures, I will wait for the next revision.

Tom



^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
  2020-12-06 16:31 ` Tom Rix
@ 2020-12-08 21:40   ` Sonal Santan
  0 siblings, 0 replies; 29+ messages in thread
From: Sonal Santan @ 2020-12-08 21:40 UTC (permalink / raw)
  To: Tom Rix, linux-kernel
  Cc: linux-fpga, Max Zhen, Lizhi Hou, Michal Simek,
	Stefano Stabellini, devicetree

Hello Tom,

> -----Original Message-----
> From: Tom Rix <trix@redhat.com>
> Sent: Sunday, December 6, 2020 8:31 AM
> To: Sonal Santan <sonals@xilinx.com>; linux-kernel@vger.kernel.org
> Cc: Sonal Santan <sonals@xilinx.com>; linux-fpga@vger.kernel.org; Max Zhen
> <maxz@xilinx.com>; Lizhi Hou <lizhih@xilinx.com>; Michal Simek
> <michals@xilinx.com>; Stefano Stabellini <stefanos@xilinx.com>;
> devicetree@vger.kernel.org
> Subject: Re: [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview
> 
> On 11/28/20 4:00 PM, Sonal Santan wrote:
> > Hello,
> >
> > This patch series adds management physical function driver for Xilinx
> > Alveo PCIe accelerator cards,
> > https://www.xilinx.com/products/boards-and-kits/alveo.html
> > This driver is part of Xilinx Runtime (XRT) open source stack.
> 
> A few general things.
> 
> Use scripts/get_maintainer.pl to find who a patch should go to, i should have
> been on the cc line.
> 
Will do.
> Each patch should at a minimum pass scripts/checkpatch.pl, none do.
> 
Looks like a few files missed our checkpatch process. Will address in the
upcoming patch series.

> Looking broadly at the files, there are competing names xrt or alveo.
> 
> It seems like xrt is the dfl equivalent, so maybe
> 
> drivers/fpga/alveo should be drivers/fpga/xrt
> 
Agreed. Will address in the next patch series.
> There are a lot of files with unnecessary prefixes
> 
> ex/
> 
> fpga/alveo/include/xrt-ucs.h could just be fpga/alveo/include/ucs.h
> 
We will work on separating the xrt infrastructure and subdev header files
into separate directories and dropping the xrt prefix.
> individual subdev's may not belong in the fpga subsystem.
> 
> I think it would be better to submit these one at a time as is done for dfl.
> 
In the upcoming patch revision, we will drop the subdevs except the bare
minimum necessary to perform bitstream download.
> So this will not block getting the basics done, in the next revision, can you leave
> the subdev's out ?
> 
> 
Thanks for the feedback.
-Sonal
> 
> Because of the checkpatch.pl failures, I will wait for the next revision.
> 
> Tom
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2020-12-08 21:41 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-29  0:00 [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Sonal Santan
2020-11-29  0:00 ` [PATCH Xilinx Alveo 1/8] Documentation: fpga: Add a document describing Alveo XRT drivers Sonal Santan
2020-12-01  4:54   ` Moritz Fischer
2020-12-02 21:24     ` Max Zhen
2020-12-02 23:10       ` Moritz Fischer
2020-12-03  3:38         ` Max Zhen
2020-12-03  4:36           ` Moritz Fischer
2020-12-04  1:17             ` Max Zhen
2020-12-04  4:18               ` Moritz Fischer
2020-11-29  0:00 ` [PATCH Xilinx Alveo 2/8] fpga: xrt: Add UAPI header files Sonal Santan
2020-12-01  4:27   ` Moritz Fischer
2020-12-02 18:57     ` Sonal Santan
2020-12-02 23:47       ` Moritz Fischer
2020-11-29  0:00 ` [PATCH Xilinx Alveo 3/8] fpga: xrt: infrastructure support for xmgmt driver Sonal Santan
2020-11-29  0:00 ` [PATCH Xilinx Alveo 4/8] fpga: xrt: core infrastructure for xrt-lib module Sonal Santan
2020-11-29  0:00 ` [PATCH Xilinx Alveo 5/8] fpga: xrt: platform drivers for subsystems in shell partition Sonal Santan
2020-11-29  0:00 ` [PATCH Xilinx Alveo 6/8] fpga: xrt: header file for platform and parent drivers Sonal Santan
2020-11-29  0:00 ` [PATCH Xilinx Alveo 7/8] fpga: xrt: Alveo management physical function driver Sonal Santan
2020-12-01 20:51   ` Moritz Fischer
     [not found]     ` <BY5PR02MB60683E3470179E6AD10FEE26B9F20@BY5PR02MB6068.namprd02.prod.outlook.com>
2020-12-04  6:22       ` Sonal Santan
2020-12-02  3:00   ` Xu Yilun
2020-12-04  4:40     ` Max Zhen
2020-11-29  0:00 ` [PATCH Xilinx Alveo 8/8] fpga: xrt: Kconfig and Makefile updates for XRT drivers Sonal Santan
2020-11-30 18:08 ` [PATCH Xilinx Alveo 0/8] Xilinx Alveo/XRT patch overview Rob Herring
2020-12-01 19:39   ` Sonal Santan
2020-12-02  2:14 ` Xu Yilun
2020-12-02  5:33   ` Sonal Santan
2020-12-06 16:31 ` Tom Rix
2020-12-08 21:40   ` Sonal Santan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).