* [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview
@ 2020-12-17  7:50 Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 1/6] Documentation: fpga: Add a document describing XRT Alveo drivers Sonal Santan
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Sonal Santan @ 2020-12-17  7:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree, trix, mdf


Hello,

This is V2 of the patch series that adds the management physical function driver
for Xilinx Alveo PCIe accelerator cards:
https://www.xilinx.com/products/boards-and-kits/alveo.html
This driver is part of the Xilinx Runtime (XRT) open source stack.

The patch series depends on the libfdt patches posted earlier:
https://lore.kernel.org/lkml/20201128235659.24679-1-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201128235659.24679-2-sonals@xilinx.com/

ALVEO PLATFORM ARCHITECTURE

Alveo PCIe FPGA based platforms have a static *shell* partition and a partially
re-configurable *user* partition. The shell partition is automatically loaded from
flash when the host is booted and PCIe is enumerated by the BIOS. The shell cannot
be changed until the next cold reboot. The shell exposes two PCIe physical functions:

1. management physical function
2. user physical function

The patch series includes Documentation/fpga/xrt.rst which describes the Alveo
platform, the XRT driver architecture and the deployment model in more detail.

Users compile their high level design in C/C++/OpenCL or RTL into an FPGA image
using the Vitis https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
tool suite. The compiled image is packaged as an xclbin and contains the partial
bitstream for the user partition along with the necessary metadata. Users can
dynamically swap the image running on the user partition in order to switch
between different workloads.

XRT DRIVERS FOR ALVEO

The XRT Linux kernel driver *xmgmt* binds to the management physical function of
the Alveo platform. The modular driver framework is organized into several
platform drivers which primarily handle the following functionality:

1.  Loading the firmware container, also called xsabin, at driver attach time
2.  Loading of the user compiled xclbin with FPGA Manager integration
3.  Clock scaling of the image running on the user partition
4.  In-band sensors: temperature, voltage, power, etc.
5.  Device reset and rescan
6.  Flash upgrade of the static *shell* partition

The platform drivers are packaged into the *xrt-lib* helper module with well
defined interfaces, the details of which can be found in Documentation/fpga/xrt.rst.

The user physical function driver is not included in this patch series.

TESTING AND VALIDATION

The xmgmt driver can be tested with the full XRT open source stack, which includes
user space libraries, board utilities and the (out of tree) first generation
user physical function driver, xocl. The XRT open source runtime stack is
available at https://github.com/Xilinx/XRT

Complete documentation for the XRT open source stack, including sections on
Alveo/XRT security and platform architecture, can be found here:
https://xilinx.github.io/XRT/master/html/index.html
https://xilinx.github.io/XRT/master/html/security.html
https://xilinx.github.io/XRT/master/html/platforms_partitions.html

Changes since v1:
- Updated the driver to use fpga_region and fpga_bridge for FPGA
  programming
- Dropped subdev drivers not related to PR programming to focus on XRT
  core framework
- Updated Documentation/fpga/xrt.rst with information on XRT core framework
- Addressed checkpatch issues 
- Dropped xrt- prefix from some header files

For reference, the V1 version of this patch series can be found here:
https://lore.kernel.org/lkml/20201129000040.24777-1-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-2-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-3-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-4-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-5-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-6-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-7-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-8-sonals@xilinx.com/
https://lore.kernel.org/lkml/20201129000040.24777-9-sonals@xilinx.com/

Thanks,
-Sonal

Sonal Santan (6):
  Documentation: fpga: Add a document describing XRT Alveo drivers
  fpga: xrt: infrastructure support for xmgmt driver
  fpga: xrt: core infrastructure for xrt-lib module
  fpga: xrt: XRT Alveo management physical function driver
  fpga: xrt: platform drivers for subsystems in shell partition
  fpga: xrt: Kconfig and Makefile updates for XRT drivers

 Documentation/fpga/index.rst                 |    1 +
 Documentation/fpga/xrt.rst                   |  649 +++++++++++
 drivers/fpga/Kconfig                         |    2 +
 drivers/fpga/Makefile                        |    4 +
 drivers/fpga/xrt/Kconfig                     |    7 +
 drivers/fpga/xrt/Makefile                    |   21 +
 drivers/fpga/xrt/common/xrt-metadata.c       |  590 ++++++++++
 drivers/fpga/xrt/common/xrt-root.c           |  737 +++++++++++++
 drivers/fpga/xrt/common/xrt-root.h           |   26 +
 drivers/fpga/xrt/common/xrt-xclbin.c         |  387 +++++++
 drivers/fpga/xrt/common/xrt-xclbin.h         |   48 +
 drivers/fpga/xrt/include/metadata.h          |  184 ++++
 drivers/fpga/xrt/include/parent.h            |  103 ++
 drivers/fpga/xrt/include/partition.h         |   33 +
 drivers/fpga/xrt/include/subdev.h            |  333 ++++++
 drivers/fpga/xrt/include/subdev/axigate.h    |   31 +
 drivers/fpga/xrt/include/subdev/calib.h      |   28 +
 drivers/fpga/xrt/include/subdev/clkfreq.h    |   21 +
 drivers/fpga/xrt/include/subdev/clock.h      |   29 +
 drivers/fpga/xrt/include/subdev/gpio.h       |   41 +
 drivers/fpga/xrt/include/subdev/icap.h       |   27 +
 drivers/fpga/xrt/include/subdev/ucs.h        |   22 +
 drivers/fpga/xrt/include/xmgmt-main.h        |   34 +
 drivers/fpga/xrt/lib/Kconfig                 |   11 +
 drivers/fpga/xrt/lib/Makefile                |   30 +
 drivers/fpga/xrt/lib/subdevs/xrt-axigate.c   |  298 ++++++
 drivers/fpga/xrt/lib/subdevs/xrt-calib.c     |  226 ++++
 drivers/fpga/xrt/lib/subdevs/xrt-clkfreq.c   |  214 ++++
 drivers/fpga/xrt/lib/subdevs/xrt-clock.c     |  638 +++++++++++
 drivers/fpga/xrt/lib/subdevs/xrt-gpio.c      |  198 ++++
 drivers/fpga/xrt/lib/subdevs/xrt-icap.c      |  306 ++++++
 drivers/fpga/xrt/lib/subdevs/xrt-partition.c |  261 +++++
 drivers/fpga/xrt/lib/subdevs/xrt-ucs.c       |  238 +++++
 drivers/fpga/xrt/lib/subdevs/xrt-vsec.c      |  337 ++++++
 drivers/fpga/xrt/lib/xrt-cdev.c              |  234 ++++
 drivers/fpga/xrt/lib/xrt-main.c              |  270 +++++
 drivers/fpga/xrt/lib/xrt-main.h              |   46 +
 drivers/fpga/xrt/lib/xrt-subdev.c            | 1007 ++++++++++++++++++
 drivers/fpga/xrt/mgmt/Kconfig                |   11 +
 drivers/fpga/xrt/mgmt/Makefile               |   27 +
 drivers/fpga/xrt/mgmt/xmgmt-fmgr-drv.c       |  179 ++++
 drivers/fpga/xrt/mgmt/xmgmt-fmgr.h           |   29 +
 drivers/fpga/xrt/mgmt/xmgmt-main-impl.h      |   35 +
 drivers/fpga/xrt/mgmt/xmgmt-main-region.c    |  476 +++++++++
 drivers/fpga/xrt/mgmt/xmgmt-main.c           |  738 +++++++++++++
 drivers/fpga/xrt/mgmt/xmgmt-root.c           |  375 +++++++
 include/uapi/linux/xrt/xclbin.h              |  386 +++++++
 include/uapi/linux/xrt/xmgmt-ioctl.h         |   72 ++
 48 files changed, 10000 insertions(+)
 create mode 100644 Documentation/fpga/xrt.rst
 create mode 100644 drivers/fpga/xrt/Kconfig
 create mode 100644 drivers/fpga/xrt/Makefile
 create mode 100644 drivers/fpga/xrt/common/xrt-metadata.c
 create mode 100644 drivers/fpga/xrt/common/xrt-root.c
 create mode 100644 drivers/fpga/xrt/common/xrt-root.h
 create mode 100644 drivers/fpga/xrt/common/xrt-xclbin.c
 create mode 100644 drivers/fpga/xrt/common/xrt-xclbin.h
 create mode 100644 drivers/fpga/xrt/include/metadata.h
 create mode 100644 drivers/fpga/xrt/include/parent.h
 create mode 100644 drivers/fpga/xrt/include/partition.h
 create mode 100644 drivers/fpga/xrt/include/subdev.h
 create mode 100644 drivers/fpga/xrt/include/subdev/axigate.h
 create mode 100644 drivers/fpga/xrt/include/subdev/calib.h
 create mode 100644 drivers/fpga/xrt/include/subdev/clkfreq.h
 create mode 100644 drivers/fpga/xrt/include/subdev/clock.h
 create mode 100644 drivers/fpga/xrt/include/subdev/gpio.h
 create mode 100644 drivers/fpga/xrt/include/subdev/icap.h
 create mode 100644 drivers/fpga/xrt/include/subdev/ucs.h
 create mode 100644 drivers/fpga/xrt/include/xmgmt-main.h
 create mode 100644 drivers/fpga/xrt/lib/Kconfig
 create mode 100644 drivers/fpga/xrt/lib/Makefile
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-axigate.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-calib.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-clkfreq.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-clock.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-gpio.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-icap.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-partition.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-ucs.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-vsec.c
 create mode 100644 drivers/fpga/xrt/lib/xrt-cdev.c
 create mode 100644 drivers/fpga/xrt/lib/xrt-main.c
 create mode 100644 drivers/fpga/xrt/lib/xrt-main.h
 create mode 100644 drivers/fpga/xrt/lib/xrt-subdev.c
 create mode 100644 drivers/fpga/xrt/mgmt/Kconfig
 create mode 100644 drivers/fpga/xrt/mgmt/Makefile
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-fmgr-drv.c
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-fmgr.h
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-main-impl.h
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-main-region.c
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-main.c
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-root.c
 create mode 100644 include/uapi/linux/xrt/xclbin.h
 create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h

-- 
2.17.1



* [PATCH V2 XRT Alveo 1/6] Documentation: fpga: Add a document describing XRT Alveo drivers
  2020-12-17  7:50 [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview Sonal Santan
@ 2020-12-17  7:50 ` Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 2/6] fpga: xrt: infrastructure support for xmgmt driver Sonal Santan
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Sonal Santan @ 2020-12-17  7:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree, trix, mdf

From: Sonal Santan <sonal.santan@xilinx.com>

Describe XRT driver architecture and provide basic overview of
Xilinx Alveo platform.

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 Documentation/fpga/index.rst |   1 +
 Documentation/fpga/xrt.rst   | 649 +++++++++++++++++++++++++++++++++++
 2 files changed, 650 insertions(+)
 create mode 100644 Documentation/fpga/xrt.rst

diff --git a/Documentation/fpga/index.rst b/Documentation/fpga/index.rst
index f80f95667ca2..30134357b70d 100644
--- a/Documentation/fpga/index.rst
+++ b/Documentation/fpga/index.rst
@@ -8,6 +8,7 @@ fpga
     :maxdepth: 1
 
     dfl
+    xrt
 
 .. only::  subproject and html
 
diff --git a/Documentation/fpga/xrt.rst b/Documentation/fpga/xrt.rst
new file mode 100644
index 000000000000..8faf259be1c3
--- /dev/null
+++ b/Documentation/fpga/xrt.rst
@@ -0,0 +1,649 @@
+==================================
+XRTV2 Linux Kernel Driver Overview
+==================================
+
+Authors:
+
+* Sonal Santan <sonal.santan@xilinx.com>
+* Max Zhen <max.zhen@xilinx.com>
+* Lizhi Hou <lizhi.hou@xilinx.com>
+
+XRTV2 drivers are second generation `XRT <https://github.com/Xilinx/XRT>`_
+drivers which support `Alveo <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_
+PCIe platforms from Xilinx.
+
+XRTV2 drivers support *subsystem* style data driven platforms where the driver's
+configuration and behavior are determined by metadata provided by the platform
+(in *device tree* format). The primary management physical function (MPF) driver
+is called **xmgmt**. The primary user physical function (UPF) driver is called
+**xuser**, and the HW subsystem drivers are packaged into a library module called
+**xrt-lib**, which is shared by **xmgmt** and **xuser** (the latter is under
+development).
+
+Alveo Platform Overview
+=======================
+
+Alveo platforms are architected as two physical FPGA partitions: *Shell* and
+*User*. The Shell provides basic infrastructure for the Alveo platform like
+PCIe connectivity, board management, Dynamic Function Exchange (DFX), sensors,
+clocking, reset, and security. The User partition contains the user compiled
+FPGA binary, which is loaded by a process called DFX, also known as partial
+reconfiguration.
+
+Physical partitions require strict HW compatibility with each other for DFX to
+work properly. Every physical partition has two interface UUIDs: a *parent* UUID
+and a *child* UUID. For simple single stage platforms, Shell → User forms the
+parent-child relationship. For complex two stage platforms, Base → Shell → User
+forms the parent-child relationship chain.
+
+.. note::
+   Partition compatibility matching is a key design component of Alveo platforms
+   and XRT. Partitions have a child-parent relationship. A loaded partition
+   exposes its child partition UUID to advertise its compatibility requirement
+   for a child partition. When loading a child partition the xmgmt management
+   driver matches the parent UUID of the child partition against the child UUID
+   exported by the parent. Parent and child partition UUIDs are stored in the
+   *xclbin* (for user) or *xsabin* (for base and shell). Except for the root
+   UUID, which is available through VSEC, the hardware itself does not know
+   about UUIDs; they live only in the xsabin and xclbin.
+
+
+The physical partitions and their loading are illustrated below::
+
+           SHELL                               USER
+        +-----------+                  +-------------------+
+        |           |                  |                   |
+        | VSEC UUID | CHILD     PARENT |    LOGIC UUID     |
+        |           o------->|<--------o                   |
+        |           | UUID       UUID  |                   |
+        +-----+-----+                  +--------+----------+
+              |                                 |
+              .                                 .
+              |                                 |
+          +---+---+                      +------+--------+
+          |  POR  |                      | USER COMPILED |
+          | FLASH |                      |    XCLBIN     |
+          +-------+                      +---------------+
+
+
+Loading Sequence
+----------------
+
+The Shell partition is loaded from flash at system boot time. It establishes the
+PCIe link and exposes two physical functions to the BIOS. After the OS boots, the
+xmgmt driver attaches to PCIe physical function 0 exposed by the Shell and then
+looks for VSEC in the PCIe extended configuration space. Using VSEC it determines
+the logic UUID of the Shell and uses that UUID to load the matching *xsabin* file
+from the Linux firmware directory. The xsabin file contains the metadata needed to
+discover the peripherals that are part of the Shell and the firmware(s) for any
+embedded soft processors in the Shell.
+
+The Shell exports a child interface UUID which is used for the compatibility check
+when loading a user compiled xclbin over the User partition as part of DFX. When a
+user requests loading of a specific xclbin, the xmgmt management driver reads the
+parent interface UUID specified in the xclbin and matches it with the child
+interface UUID exported by the Shell to determine if the xclbin is compatible with
+the Shell. If the match fails, loading of the xclbin is denied.
+
+xclbin loading is requested using the XMGMT_IOCICAPDOWNLOAD_AXLF ioctl command.
+When loading an xclbin, the xmgmt driver performs the following *logical* operations:
+
+1. Sanity check the xclbin contents
+2. Isolate the User partition
+3. Download the bitstream using the FPGA config engine (ICAP)
+4. De-isolate the User partition
+5. Program the clocks (ClockWiz) driving the User partition
+6. Wait for memory controller (MIG) calibration
+
+`Platform Loading Overview <https://xilinx.github.io/XRT/master/html/platforms_partitions.html>`_
+provides more detailed information on platform loading.
+
+
+xsabin
+------
+
+Each Alveo platform comes packaged with its own xsabin. The xsabin is a trusted
+component of the platform. For format details refer to :ref:`xsabin/xclbin Container Format`.
+The xsabin contains basic information like UUIDs, the platform name and metadata
+in the form of a device tree. See :ref:`Device Tree Usage` for details and examples.
+
+xclbin
+------
+
+The xclbin is compiled by the end user using the
+`Vitis <https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html>`_
+tool set from Xilinx. The xclbin contains sections describing the user compiled
+acceleration engines/kernels, memory subsystems, clocking information, etc. It also
+contains the bitstream for the user partition, UUIDs, the platform name, etc. The
+xclbin uses the same container format as the xsabin, which is described below.
+
+
+xsabin/xclbin Container Format
+------------------------------
+
+The xclbin/xsabin is an ELF-like binary container format. It is structured as a
+series of sections: a file header, followed by several section headers, which are
+in turn followed by the sections themselves. Each section header points to an
+actual section. There is an optional signature at the end. The format is defined
+by the header file ``xclbin.h``. The following figure illustrates a typical
+xclbin::
+
+
+           +---------------------+
+           |                     |
+           |       HEADER        |
+           +---------------------+
+           |   SECTION  HEADER   |
+           |                     |
+           +---------------------+
+           |         ...         |
+           |                     |
+           +---------------------+
+           |   SECTION  HEADER   |
+           |                     |
+           +---------------------+
+           |       SECTION       |
+           |                     |
+           +---------------------+
+           |         ...         |
+           |                     |
+           +---------------------+
+           |       SECTION       |
+           |                     |
+           +---------------------+
+           |      SIGNATURE      |
+           |      (OPTIONAL)     |
+           +---------------------+
+
+
+xclbin/xsabin files can be packaged, un-packaged and inspected using the XRT
+utility **xclbinutil**. xclbinutil is part of the XRT open source software stack.
+The source code for xclbinutil can be found at
+https://github.com/Xilinx/XRT/tree/master/src/runtime_src/tools/xclbinutil
+
+For example, to enumerate the contents of an xclbin/xsabin, use the *--info*
+switch as shown below::
+
+  xclbinutil --info --input /opt/xilinx/firmware/u50/gen3x16-xdma/blp/test/bandwidth.xclbin
+  xclbinutil --info --input /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/partition.xsabin
+
+
+Device Tree Usage
+-----------------
+
+As mentioned previously, the xsabin stores metadata which advertises the HW
+subsystems present in a partition. The metadata is stored in device tree format
+with a well defined schema. Subsystem instantiations are captured as children of
+the ``addressable_endpoints`` node. Subsystem nodes have standard attributes like
+``reg``, ``interrupts``, etc. Additionally the nodes have PCIe specific attributes:
+``pcie_physical_function`` and ``pcie_bar_mapping``. These identify which PCIe
+physical function, and which BAR space in that physical function, the subsystem
+resides in. The XRT management driver uses this information to bind *platform
+drivers* to the subsystem instantiations. The platform drivers are found in the
+**xrt-lib.ko** kernel module described later. Below is an example device tree
+for the Alveo U50 platform::
+
+  /dts-v1/;
+
+  /{
+        logic_uuid = "f465b0a3ae8c64f619bc150384ace69b";
+
+        schema_version {
+                major = <0x01>;
+                minor = <0x00>;
+        };
+
+        interfaces {
+
+                @0 {
+                        interface_uuid = "862c7020a250293e32036f19956669e5";
+                };
+        };
+
+        addressable_endpoints {
+
+                ep_blp_rom_00 {
+                        reg = <0x00 0x1f04000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+                };
+
+                ep_card_flash_program_00 {
+                        reg = <0x00 0x1f06000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_quad_spi-1.0\0axi_quad_spi";
+                        interrupts = <0x03 0x03>;
+                };
+
+                ep_cmc_firmware_mem_00 {
+                        reg = <0x00 0x1e20000 0x00 0x20000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+                        firmware {
+                                firmware_product_name = "cmc";
+                                firmware_branch_name = "u50";
+                                firmware_version_major = <0x01>;
+                                firmware_version_minor = <0x00>;
+                        };
+                };
+
+                ep_cmc_intc_00 {
+                        reg = <0x00 0x1e03000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
+                        interrupts = <0x04 0x04>;
+                };
+
+                ep_cmc_mutex_00 {
+                        reg = <0x00 0x1e02000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_cmc_regmap_00 {
+                        reg = <0x00 0x1e08000 0x00 0x2000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+                        firmware {
+                                firmware_product_name = "sc-fw";
+                                firmware_branch_name = "u50";
+                                firmware_version_major = <0x05>;
+                        };
+                };
+
+                ep_cmc_reset_00 {
+                        reg = <0x00 0x1e01000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_ddr_mem_calib_00 {
+                        reg = <0x00 0x63000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_debug_bscan_mgmt_00 {
+                        reg = <0x00 0x1e90000 0x00 0x10000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-debug_bridge-1.0\0debug_bridge";
+                };
+
+                ep_ert_base_address_00 {
+                        reg = <0x00 0x21000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_ert_command_queue_mgmt_00 {
+                        reg = <0x00 0x40000 0x00 0x10000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
+                };
+
+                ep_ert_command_queue_user_00 {
+                        reg = <0x00 0x40000 0x00 0x10000>;
+                        pcie_physical_function = <0x01>;
+                        compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
+                };
+
+                ep_ert_firmware_mem_00 {
+                        reg = <0x00 0x30000 0x00 0x8000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+                        firmware {
+                                firmware_product_name = "ert";
+                                firmware_branch_name = "v20";
+                                firmware_version_major = <0x01>;
+                        };
+                };
+
+                ep_ert_intc_00 {
+                        reg = <0x00 0x23000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
+                        interrupts = <0x05 0x05>;
+                };
+
+                ep_ert_reset_00 {
+                        reg = <0x00 0x22000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_ert_sched_00 {
+                        reg = <0x00 0x50000 0x00 0x1000>;
+                        pcie_physical_function = <0x01>;
+                        compatible = "xilinx.com,reg_abs-ert_sched-1.0\0ert_sched";
+                        interrupts = <0x09 0x0c>;
+                };
+
+                ep_fpga_configuration_00 {
+                        reg = <0x00 0x1e88000 0x00 0x8000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_hwicap-1.0\0axi_hwicap";
+                        interrupts = <0x02 0x02>;
+                };
+
+                ep_icap_reset_00 {
+                        reg = <0x00 0x1f07000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_msix_00 {
+                        reg = <0x00 0x00 0x00 0x20000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-msix-1.0\0msix";
+                        pcie_bar_mapping = <0x02>;
+                };
+
+                ep_pcie_link_mon_00 {
+                        reg = <0x00 0x1f05000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_pr_isolate_plp_00 {
+                        reg = <0x00 0x1f01000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_pr_isolate_ulp_00 {
+                        reg = <0x00 0x1000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+                };
+
+                ep_uuid_rom_00 {
+                        reg = <0x00 0x64000 0x00 0x1000>;
+                        pcie_physical_function = <0x00>;
+                        compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+                };
+
+                ep_xdma_00 {
+                        reg = <0x00 0x00 0x00 0x10000>;
+                        pcie_physical_function = <0x01>;
+                        compatible = "xilinx.com,reg_abs-xdma-1.0\0xdma";
+                        pcie_bar_mapping = <0x02>;
+                };
+        };
+
+  }
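+
+As an illustration of how this metadata is consumed, the sketch below uses the
+``xrt_md_get_prop`` helper from ``common/xrt-metadata.c`` (added in this series)
+to read the ``reg`` property of one of the endpoints shown above. It is a
+minimal fragment, not complete driver code; the endpoint and property names
+simply mirror the example device tree::
+
+  #include "metadata.h"
+
+  /* blob points to the device tree blob extracted from the xsabin/xclbin */
+  static int show_cmc_regmap(struct device *dev, const char *blob)
+  {
+          const void *reg = NULL;
+          int sz = 0, ret;
+
+          ret = xrt_md_get_prop(dev, blob, "ep_cmc_regmap_00", NULL,
+                                "reg", &reg, &sz);
+          if (!ret)
+                  dev_info(dev, "ep_cmc_regmap_00: 'reg' property is %d bytes\n", sz);
+
+          return ret;
+  }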
+
+
+
+Deployment Models
+=================
+
+Baremetal
+---------
+
+In bare-metal deployments both MPF and UPF are visible and accessible. The xmgmt
+driver binds to the MPF. xmgmt driver operations are privileged and available only
+to the system administrator. The full stack is illustrated below::
+
+                            HOST
+
+                 [XMGMT]            [XUSER]
+                    |                  |
+                    |                  |
+                 +-----+            +-----+
+                 | MPF |            | UPF |
+                 |     |            |     |
+                 | PF0 |            | PF1 |
+                 +--+--+            +--+--+
+          ......... ^................. ^..........
+                    |                  |
+                    |   PCIe DEVICE    |
+                    |                  |
+                 +--+------------------+--+
+                 |         SHELL          |
+                 |                        |
+                 +------------------------+
+                 |         USER           |
+                 |                        |
+                 |                        |
+                 |                        |
+                 |                        |
+                 +------------------------+
+
+
+
+Virtualized
+-----------
+
+In virtualized deployments the privileged MPF is assigned to the host while the
+unprivileged UPF is assigned to a guest VM via PCIe pass-through. The xmgmt driver
+in the host binds to the MPF. xmgmt driver operations are privileged and accessible
+only to the hosting service provider. The full stack is illustrated below::
+
+
+                                 .............
+                  HOST           .    VM     .
+                                 .           .
+                 [XMGMT]         .  [XUSER]  .
+                    |            .     |     .
+                    |            .     |     .
+                 +-----+         .  +-----+  .
+                 | MPF |         .  | UPF |  .
+                 |     |         .  |     |  .
+                 | PF0 |         .  | PF1 |  .
+                 +--+--+         .  +--+--+  .
+          ......... ^................. ^..........
+                    |                  |
+                    |   PCIe DEVICE    |
+                    |                  |
+                 +--+------------------+--+
+                 |         SHELL          |
+                 |                        |
+                 +------------------------+
+                 |         USER           |
+                 |                        |
+                 |                        |
+                 |                        |
+                 |                        |
+                 +------------------------+
+
+
+
+Driver Modules
+==============
+
+xrt-lib.ko
+----------
+
+xrt-lib.ko is the repository of all subsystem drivers and pure software modules
+that can potentially be shared between xmgmt and xuser. All these drivers are
+structured as Linux *platform drivers* and are instantiated by xmgmt (or xuser in
+the future) based on metadata associated with the hardware. The metadata is in
+the form of a device tree, as explained before.
+
+xmgmt.ko
+--------
+
+The xmgmt driver is a PCIe device driver driving the MPF found on Xilinx Alveo
+PCIe devices. It consists of one *root* driver, one or more *partition* drivers
+and one or more *leaf* drivers. The root and the MPF specific leaf drivers are in
+xmgmt.ko. The partition driver and the other leaf drivers are in xrt-lib.ko.
+
+The instantiation of a specific partition driver or leaf driver is completely data
+driven, based on metadata (mostly in device tree format) found through the VSEC
+capability and inside firmware files such as the xsabin or xclbin. The root
+driver manages the life cycle of multiple partition drivers, which, in turn,
+manage multiple leaf drivers. This allows a single set of driver code to support
+all kinds of subsystems exposed by different shells. The differences among these
+subsystems are handled in the leaf drivers, with the root and partition drivers
+being part of the infrastructure and providing common services for all leaves
+found on all platforms.
+
+The driver object model looks like the following::
+
+                    +-----------+
+                    |   root    |
+                    +-----+-----+
+                          |
+              +-----------+-----------+
+              |                       |
+              v                       v
+        +-----------+          +-----------+
+        | partition |    ...   | partition |
+        +-----+-----+          +------+----+
+              |                       |
+              |                       |
+        +-----+----+            +-----+----+
+        |          |            |          |
+        v          v            v          v
+    +------+   +------+     +------+   +------+
+    | leaf |...| leaf |     | leaf |...| leaf |
+    +------+   +------+     +------+   +------+
+
+
+xmgmt-root
+^^^^^^^^^^
+
+The xmgmt-root driver is a PCIe device driver attached to the MPF. It is part of
+the infrastructure of the MPF driver and resides in xmgmt.ko. This driver
+
+* manages one or more partition drivers
+* provides access to functionality that requires the pci_dev, such as PCIe config
+  space access, to other leaf drivers through parent calls
+* together with the partition driver, facilitates event callbacks for other leaf
+  drivers
+* together with the partition driver, facilitates inter-leaf driver calls for
+  other leaf drivers
+
+When the root driver starts, it explicitly creates an initial partition instance,
+which contains leaf drivers that trigger the creation of other partition
+instances. The root driver waits for all partitions and leaves to be created
+before it returns from its probe routine and declares successful initialization
+of the entire xmgmt driver.
+
+.. note::
+   See code in ``common/xrt-root.c`` and ``mgmt/xmgmt-root.c``
+
+
+partition
+^^^^^^^^^
+
+The partition driver is a platform device driver whose life cycle is managed by
+the root driver and which does not have real IO mem or IRQ resources. It is part
+of the infrastructure of the MPF driver and resides in xrt-lib.ko. This driver
+
+* manages one or more leaf drivers so that multiple leaves can be managed as a
+  group
+* provides access to the root from leaves, so that parent calls, event
+  notifications and inter-leaf calls can happen
+
+In xmgmt, an initial partition driver instance is created by the root driver. It
+contains leaves that trigger the creation of further partition instances to
+manage the groups of leaves found on the different partitions of the hardware,
+such as VSEC, Shell, and User.
+
+Every *fpga_region* has a partition object associated with it. The partition is
+created when an xclbin image is loaded on the fpga_region. The existing partition
+is destroyed when a new xclbin image is loaded. The fpga_region itself persists
+across xclbin downloads.
+
+.. note::
+   See code in ``lib/subdevs/xrt-partition.c``
+
+
+leaves
+^^^^^^
+
+The leaf driver is a platform device driver whose life cycle is managed by
+a partition driver and which may or may not have real IO mem or IRQ resources.
+Leaf drivers are the real meat of xmgmt and contain the platform specific code
+for the Shell and User subsystems found on an MPF.
+
+A leaf driver may not have real hardware resources when it merely acts as a
+driver that manages certain in-memory states for xmgmt. These in-memory states
+could be shared by multiple other leaves.
+
+Leaf drivers assigned to specific hardware resources drive a specific subsystem in
+the device. To manipulate its subsystem or carry out a task, a leaf driver may
+ask for help from the root via parent calls and/or from other leaves via
+inter-leaf calls.
+
+A leaf can also broadcast events through the infrastructure code for other leaves
+to process. It can also receive event notifications from the infrastructure about
+certain events, such as post-creation or pre-exit of a particular leaf.
+
+.. note::
+   See code in ``lib/subdevs/*.c``
+
+
+FPGA Manager Interaction
+========================
+
+fpga_manager
+------------
+
+An instance of fpga_manager is created by xmgmt_main and is used for xclbin
+image download. fpga_manager requires the full xclbin image before it can
+start programming the FPGA configuration engine via the ICAP subdev driver.
+
+fpga_region
+-----------
+
+A new instance of fpga_region is created as a *child* region for every
+interface exposed by the currently loaded xclbin or xsabin in the *parent*
+fpga_region. The device tree of the *parent* fpga_region defines the
+resources for a new instance of fpga_bridge, which isolates the parent from the
+child fpga_region. This new instance of fpga_bridge is used when an
+xclbin image is loaded on the child fpga_region. After the xclbin image is
+downloaded to the fpga_region, a partition instance is created for the
+fpga_region using the device tree obtained as part of the xclbin. If this device
+tree defines any child interfaces, it can in turn trigger the creation of an
+fpga_bridge and an fpga_region for the next region in the chain.
+
+fpga_bridge
+-----------
+
+Like the fpga_region, a matching fpga_bridge is also created by walking the
+device tree of the parent partition.
+
+Driver Interfaces
+=================
+
+xmgmt Driver Ioctls
+-------------------
+
+Ioctls exposed by the xmgmt driver to user space are enumerated in the following table:
+
+== ===================== ============================= ===========================
+#  Functionality         ioctl request code            data format
+== ===================== ============================= ===========================
+1  FPGA image download   XMGMT_IOCICAPDOWNLOAD_AXLF    xmgmt_ioc_bitstream_axlf
+2  CL frequency scaling  XMGMT_IOCFREQSCALE            xmgmt_ioc_freqscaling
+== ===================== ============================= ===========================
+
+An xclbin can be downloaded using the xbmgmt tool from the XRT open source suite.
+See example usage below::
+
+  xbmgmt partition --program --path /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/test/verify.xclbin --force
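+
+The ioctl can also be issued directly from a small user space program. The
+sketch below is illustrative only: it assumes the request code and the
+``xmgmt_ioc_bitstream_axlf`` structure listed in the table above (from
+``include/uapi/linux/xrt/xmgmt-ioctl.h``), a structure member named ``xclbin``
+holding a pointer to the in-memory image, and a device node path supplied on
+the command line::
+
+  #include <fcntl.h>
+  #include <stdio.h>
+  #include <stdlib.h>
+  #include <sys/ioctl.h>
+  #include <sys/stat.h>
+  #include <unistd.h>
+  #include <linux/xrt/xclbin.h>
+  #include <linux/xrt/xmgmt-ioctl.h>
+
+  int main(int argc, char **argv)
+  {
+          struct xmgmt_ioc_bitstream_axlf obj;
+          struct stat st;
+          char *buf;
+          int fd, img;
+
+          if (argc < 3) {
+                  fprintf(stderr, "usage: %s <xmgmt device node> <xclbin>\n", argv[0]);
+                  return 1;
+          }
+
+          fd = open(argv[1], O_RDWR);
+          img = open(argv[2], O_RDONLY);
+          if (fd < 0 || img < 0 || fstat(img, &st))
+                  return 1;
+
+          /* The driver expects the complete xclbin image in memory. */
+          buf = malloc(st.st_size);
+          if (!buf || read(img, buf, st.st_size) != st.st_size)
+                  return 1;
+
+          obj.xclbin = (struct axlf *)buf;  /* member name assumed, see above */
+          if (ioctl(fd, XMGMT_IOCICAPDOWNLOAD_AXLF, &obj))
+                  perror("XMGMT_IOCICAPDOWNLOAD_AXLF");
+
+          free(buf);
+          close(img);
+          close(fd);
+          return 0;
+  }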
+
+xmgmt Driver Sysfs
+------------------
+
+The xmgmt driver exposes a rich set of sysfs interfaces. Subsystem platform
+drivers export a sysfs node for every platform instance.
+
+Every partition also exports its UUIDs. See below for examples::
+
+  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/interface_uuids
+  /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/logic_uuids
+
+
+hwmon
+-----
+
+The xmgmt driver exposes the standard hwmon interface to report voltage, current,
+temperature, power, etc. These values can easily be viewed using the *sensors*
+command line utility.
+
+
+Platform Security Considerations
+================================
+
+`Security of Alveo Platform <https://xilinx.github.io/XRT/master/html/security.html>`_
+discusses the deployment options and security implications in great detail.
-- 
2.17.1



* [PATCH V2 XRT Alveo 2/6] fpga: xrt: infrastructure support for xmgmt driver
  2020-12-17  7:50 [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 1/6] Documentation: fpga: Add a document describing XRT Alveo drivers Sonal Santan
@ 2020-12-17  7:50 ` Sonal Santan
  2020-12-21  6:41   ` kernel test robot
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 3/6] fpga: xrt: core infrastructure for xrt-lib module Sonal Santan
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Sonal Santan @ 2020-12-17  7:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree, trix, mdf

From: Sonal Santan <sonal.santan@xilinx.com>

Add infrastructure code for the XRT management physical function
driver. This provides support for enumerating and extracting
sections from xclbin files, interacting with device tree nodes
found in the xclbin, and working with Alveo partitions.

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/xrt/common/xrt-metadata.c | 590 ++++++++++++++++++++
 drivers/fpga/xrt/common/xrt-root.c     | 737 +++++++++++++++++++++++++
 drivers/fpga/xrt/common/xrt-root.h     |  26 +
 drivers/fpga/xrt/common/xrt-xclbin.c   | 387 +++++++++++++
 drivers/fpga/xrt/common/xrt-xclbin.h   |  48 ++
 include/uapi/linux/xrt/xclbin.h        | 386 +++++++++++++
 6 files changed, 2174 insertions(+)
 create mode 100644 drivers/fpga/xrt/common/xrt-metadata.c
 create mode 100644 drivers/fpga/xrt/common/xrt-root.c
 create mode 100644 drivers/fpga/xrt/common/xrt-root.h
 create mode 100644 drivers/fpga/xrt/common/xrt-xclbin.c
 create mode 100644 drivers/fpga/xrt/common/xrt-xclbin.h
 create mode 100644 include/uapi/linux/xrt/xclbin.h

diff --git a/drivers/fpga/xrt/common/xrt-metadata.c b/drivers/fpga/xrt/common/xrt-metadata.c
new file mode 100644
index 000000000000..2a5b1ee9e37c
--- /dev/null
+++ b/drivers/fpga/xrt/common/xrt-metadata.c
@@ -0,0 +1,590 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Metadata parse APIs
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/libfdt_env.h>
+#include "libfdt.h"
+#include "metadata.h"
+
+#define MAX_BLOB_SIZE	(4096 * 25)
+
+#define md_err(dev, fmt, args...)			\
+	dev_err(dev, "%s: "fmt, __func__, ##args)
+#define md_warn(dev, fmt, args...)			\
+	dev_warn(dev, "%s: "fmt, __func__, ##args)
+#define md_info(dev, fmt, args...)			\
+	dev_info(dev, "%s: "fmt, __func__, ##args)
+#define md_dbg(dev, fmt, args...)			\
+	dev_dbg(dev, "%s: "fmt, __func__, ##args)
+
+static int xrt_md_setprop(struct device *dev, char *blob, int offset,
+	const char *prop, const void *val, int size);
+static int xrt_md_overlay(struct device *dev, char *blob, int target,
+	const char *overlay_blob, int overlay_offset);
+static int xrt_md_get_endpoint(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name, int *ep_offset);
+
+long xrt_md_size(struct device *dev, const char *blob)
+{
+	long len = (long) fdt_totalsize(blob);
+
+	return (len > MAX_BLOB_SIZE) ? -EINVAL : len;
+}
+
+int xrt_md_create(struct device *dev, char **blob)
+{
+	int ret = 0;
+
+	WARN_ON(!blob);
+
+	*blob = vmalloc(MAX_BLOB_SIZE);
+	if (!*blob)
+		return -ENOMEM;
+
+	ret = fdt_create_empty_tree(*blob, MAX_BLOB_SIZE);
+	if (ret) {
+		md_err(dev, "format blob failed, ret = %d", ret);
+		goto failed;
+	}
+
+	ret = fdt_next_node(*blob, -1, NULL);
+	if (ret < 0) {
+		md_err(dev, "No Node, ret = %d", ret);
+		goto failed;
+	}
+
+	ret = fdt_add_subnode(*blob, ret, NODE_ENDPOINTS);
+	if (ret < 0)
+		md_err(dev, "add node failed, ret = %d", ret);
+
+failed:
+	if (ret < 0) {
+		vfree(*blob);
+		*blob = NULL;
+	} else
+		ret = 0;
+
+	return ret;
+}
+
+int xrt_md_add_node(struct device *dev, char *blob, int parent_offset,
+	const char *ep_name)
+{
+	int ret;
+
+	ret = fdt_add_subnode(blob, parent_offset, ep_name);
+	if (ret < 0 && ret != -FDT_ERR_EXISTS)
+		md_err(dev, "failed to add node %s. %d", ep_name, ret);
+
+	return ret;
+}
+
+int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
+	char *regmap_name)
+{
+	int ret;
+	int ep_offset;
+
+	ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name, &ep_offset);
+	if (ret) {
+		md_err(dev, "can not find ep %s", ep_name);
+		return -EINVAL;
+	}
+
+	ret = fdt_del_node(blob, ep_offset);
+	if (ret)
+		md_err(dev, "delete node %s failed, ret %d", ep_name, ret);
+
+	return ret;
+}
+
+static int __xrt_md_add_endpoint(struct device *dev, char *blob,
+	struct xrt_md_endpoint *ep, int *offset, bool root)
+{
+	int ret = 0;
+	int ep_offset;
+	u32 val, count = 0;
+	u64 io_range[2];
+	char comp[128];
+
+	if (!ep->ep_name) {
+		md_err(dev, "empty name");
+		return -EINVAL;
+	}
+
+	if (!root) {
+		ret = xrt_md_get_endpoint(dev, blob, NODE_ENDPOINTS, NULL,
+			&ep_offset);
+		if (ret) {
+			md_err(dev, "invalid blob, ret = %d", ret);
+			return -EINVAL;
+		}
+	} else {
+		ep_offset = 0;
+	}
+
+	ep_offset = xrt_md_add_node(dev, blob, ep_offset, ep->ep_name);
+	if (ep_offset < 0) {
+		md_err(dev, "add endpoint failed, ret = %d", ret);
+		return -EINVAL;
+	}
+	if (offset)
+		*offset = ep_offset;
+
+	if (ep->size != 0) {
+		val = cpu_to_be32(ep->bar);
+		ret = xrt_md_setprop(dev, blob, ep_offset, PROP_BAR_IDX,
+				&val, sizeof(u32));
+		if (ret) {
+			md_err(dev, "set %s failed, ret %d",
+				PROP_BAR_IDX, ret);
+			goto failed;
+		}
+		io_range[0] = cpu_to_be64((u64)ep->bar_off);
+		io_range[1] = cpu_to_be64((u64)ep->size);
+		ret = xrt_md_setprop(dev, blob, ep_offset, PROP_IO_OFFSET,
+			io_range, sizeof(io_range));
+		if (ret) {
+			md_err(dev, "set %s failed, ret %d",
+				PROP_IO_OFFSET, ret);
+			goto failed;
+		}
+	}
+
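+	/*
+	 * If a regmap name is given, build the "compatible" property as a
+	 * NUL-separated string list: the versioned name (<regmap>-<version>),
+	 * if any, followed by the bare <regmap> name. This mirrors the
+	 * compatible strings found in the xsabin device tree.
+	 */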
+	if (ep->regmap) {
+		if (ep->regmap_ver) {
+			count = snprintf(comp, sizeof(comp),
+				"%s-%s", ep->regmap, ep->regmap_ver);
+			count++;
+		}
+
+		count += snprintf(comp + count, sizeof(comp) - count,
+			"%s", ep->regmap);
+		count++;
+
+		ret = xrt_md_setprop(dev, blob, ep_offset, PROP_COMPATIBLE,
+			comp, count);
+		if (ret) {
+			md_err(dev, "set %s failed, ret %d",
+				PROP_COMPATIBLE, ret);
+			goto failed;
+		}
+	}
+
+failed:
+	if (ret)
+		xrt_md_del_endpoint(dev, blob, ep->ep_name, NULL);
+
+	return ret;
+}
+
+int xrt_md_add_endpoint(struct device *dev, char *blob,
+	struct xrt_md_endpoint *ep)
+{
+	return __xrt_md_add_endpoint(dev, blob, ep, NULL, false);
+}
+
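+/*
+ * Locate the endpoint node named @ep_name; if @regmap_name is given, the
+ * node must also list it in its "compatible" property.
+ */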
+static int xrt_md_get_endpoint(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name, int *ep_offset)
+{
+	int offset;
+	const char *name;
+
+	for (offset = fdt_next_node(blob, -1, NULL);
+	    offset >= 0;
+	    offset = fdt_next_node(blob, offset, NULL)) {
+		name = fdt_get_name(blob, offset, NULL);
+		if (!name || strncmp(name, ep_name, strlen(ep_name) + 1))
+			continue;
+		if (!regmap_name ||
+		    !fdt_node_check_compatible(blob, offset, regmap_name))
+			break;
+	}
+	if (offset < 0)
+		return -ENODEV;
+
+	*ep_offset = offset;
+
+	return 0;
+}
+
+int xrt_md_get_epname_pointer(struct device *dev, const char *blob,
+	 const char *ep_name, const char *regmap_name, const char **epname)
+{
+	int offset;
+	int ret;
+
+	ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+		&offset);
+	if (!ret && epname && offset >= 0)
+		*epname = fdt_get_name(blob, offset, NULL);
+
+	return ret;
+}
+
+int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
+	const char *regmap_name, const char *prop, const void **val, int *size)
+{
+	int offset;
+	int ret;
+
+	if (val)
+		*val = NULL;
+	if (ep_name) {
+		ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+			&offset);
+		if (ret) {
+			md_err(dev, "cannot get ep %s, regmap %s, ret = %d",
+				ep_name, regmap_name, ret);
+			return -EINVAL;
+		}
+	} else {
+		offset = fdt_next_node(blob, -1, NULL);
+		if (offset < 0) {
+			md_err(dev, "internal error, ret = %d", offset);
+			return -EINVAL;
+		}
+	}
+
+	if (val) {
+		*val = fdt_getprop(blob, offset, prop, size);
+		if (!*val) {
+			md_dbg(dev, "get ep %s, prop %s failed", ep_name, prop);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static int xrt_md_setprop(struct device *dev, char *blob, int offset,
+	 const char *prop, const void *val, int size)
+{
+	int ret;
+
+	ret = fdt_setprop(blob, offset, prop, val, size);
+	if (ret)
+		md_err(dev, "failed to set prop %d", ret);
+
+	return ret;
+}
+
+int xrt_md_set_prop(struct device *dev, char *blob,
+	const char *ep_name, const char *regmap_name,
+	const char *prop, const void *val, int size)
+{
+	int offset;
+	int ret;
+
+	if (ep_name) {
+		ret = xrt_md_get_endpoint(dev, blob, ep_name,
+			regmap_name, &offset);
+		if (ret) {
+			md_err(dev, "cannot get node %s, ret = %d",
+				ep_name, ret);
+			return -EINVAL;
+		}
+	} else {
+		offset = fdt_next_node(blob, -1, NULL);
+		if (offset < 0) {
+			md_err(dev, "internal error, ret = %d", offset);
+			return -EINVAL;
+		}
+	}
+
+	ret = xrt_md_setprop(dev, blob, offset, prop, val, size);
+	if (ret)
+		md_err(dev, "set prop %s failed, ret = %d", prop, ret);
+
+	return ret;
+}
+
+int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
+	const char *ep_name, const char *regmap_name, const char *new_ep_name)
+{
+	int offset, target;
+	int ret;
+	struct xrt_md_endpoint ep = {0};
+	const char *newepnm = new_ep_name ? new_ep_name : ep_name;
+
+	ret = xrt_md_get_endpoint(dev, src_blob, ep_name, regmap_name,
+		&offset);
+	if (ret)
+		return -EINVAL;
+
+	ret = xrt_md_get_endpoint(dev, blob, newepnm, regmap_name, &target);
+	if (ret) {
+		ep.ep_name = newepnm;
+		ret = __xrt_md_add_endpoint(dev, blob, &ep, &target,
+			fdt_parent_offset(src_blob, offset) == 0);
+		if (ret)
+			return -EINVAL;
+	}
+
+	ret = xrt_md_overlay(dev, blob, target, src_blob, offset);
+	if (ret)
+		md_err(dev, "overlay failed, ret = %d", ret);
+
+	return ret;
+}
+
+int xrt_md_copy_all_eps(struct device *dev, char *blob, const char *src_blob)
+{
+	return xrt_md_copy_endpoint(dev, blob, src_blob, NODE_ENDPOINTS,
+		NULL, NULL);
+}
+
+char *xrt_md_dup(struct device *dev, const char *blob)
+{
+	int ret;
+	char *dup_blob;
+
+	ret = xrt_md_create(dev, &dup_blob);
+	if (ret)
+		return NULL;
+	ret = xrt_md_overlay(dev, dup_blob, -1, blob, -1);
+	if (ret) {
+		vfree(dup_blob);
+		return NULL;
+	}
+
+	return dup_blob;
+}
+
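+/*
+ * Recursively copy all properties and subnodes from @overlay_offset in
+ * @overlay_blob onto @target in @blob, creating missing subnodes as needed.
+ */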
+static int xrt_md_overlay(struct device *dev, char *blob, int target,
+	const char *overlay_blob, int overlay_offset)
+{
+	int	property, subnode;
+	int	ret;
+
+	WARN_ON(!blob || !overlay_blob);
+
+	if (!blob) {
+		md_err(dev, "blob is NULL");
+		return -EINVAL;
+	}
+
+	if (target < 0) {
+		target = fdt_next_node(blob, -1, NULL);
+		if (target < 0) {
+			md_err(dev, "invalid target");
+			return -EINVAL;
+		}
+	}
+	if (overlay_offset < 0) {
+		overlay_offset = fdt_next_node(overlay_blob, -1, NULL);
+		if (overlay_offset < 0) {
+			md_err(dev, "invalid overlay");
+			return -EINVAL;
+		}
+	}
+
+	fdt_for_each_property_offset(property, overlay_blob, overlay_offset) {
+		const char *name;
+		const void *prop;
+		int prop_len;
+
+		prop = fdt_getprop_by_offset(overlay_blob, property, &name,
+			&prop_len);
+		if (!prop || prop_len >= MAX_BLOB_SIZE) {
+			md_err(dev, "internal error");
+			return -EINVAL;
+		}
+
+		ret = xrt_md_setprop(dev, blob, target, name, prop,
+			prop_len);
+		if (ret) {
+			md_err(dev, "setprop failed, ret = %d", ret);
+			return ret;
+		}
+	}
+
+	fdt_for_each_subnode(subnode, overlay_blob, overlay_offset) {
+		const char *name = fdt_get_name(overlay_blob, subnode, NULL);
+		int nnode;
+
+		nnode = xrt_md_add_node(dev, blob, target, name);
+		if (nnode == -FDT_ERR_EXISTS)
+			nnode = fdt_subnode_offset(blob, target, name);
+		if (nnode < 0) {
+			md_err(dev, "add node failed, ret = %d", nnode);
+			return nnode;
+		}
+
+		ret = xrt_md_overlay(dev, blob, nnode, overlay_blob, subnode);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name,
+	char **next_ep, char **next_regmap)
+{
+	int offset, ret;
+
+	if (!ep_name) {
+		ret = xrt_md_get_endpoint(dev, blob, NODE_ENDPOINTS, NULL,
+			&offset);
+	} else {
+		ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+			&offset);
+	}
+
+	if (ret) {
+		*next_ep = NULL;
+		*next_regmap = NULL;
+		return -EINVAL;
+	}
+
+	offset = ep_name ? fdt_next_subnode(blob, offset) :
+		fdt_first_subnode(blob, offset);
+	if (offset < 0) {
+		*next_ep = NULL;
+		*next_regmap = NULL;
+		return -EINVAL;
+	}
+
+	*next_ep = (char *)fdt_get_name(blob, offset, NULL);
+	*next_regmap = (char *)fdt_stringlist_get(blob, offset, PROP_COMPATIBLE,
+		0, NULL);
+
+	return 0;
+}
+
+int xrt_md_get_compatible_epname(struct device *dev, const char *blob,
+	const char *regmap_name, const char **ep_name)
+{
+	int ep_offset;
+
+	ep_offset = fdt_node_offset_by_compatible(blob, -1, regmap_name);
+	if (ep_offset < 0) {
+		*ep_name = NULL;
+		return -ENOENT;
+	}
+
+	*ep_name = (char *)fdt_get_name(blob, ep_offset, NULL);
+
+	return 0;
+}
+
+int xrt_md_uuid_strtoid(struct device *dev, const char *uuidstr, uuid_t *p_uuid)
+{
+	char *p;
+	const char *str;
+	char tmp[3] = { 0 };
+	int i, ret;
+
+	memset(p_uuid, 0, sizeof(*p_uuid));
+	p = (char *)p_uuid;
+	str = uuidstr + strlen(uuidstr) - 2;
+
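+	/*
+	 * Parse the hex string from its tail, one byte (two characters) at a
+	 * time, so the bytes of the resulting uuid_t end up in reverse order
+	 * relative to the string representation.
+	 */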
+	for (i = 0; i < sizeof(*p_uuid) && str >= uuidstr; i++) {
+		tmp[0] = *str;
+		tmp[1] = *(str + 1);
+		ret = kstrtou8(tmp, 16, p);
+		if (ret) {
+			md_err(dev, "Invalid uuid %s", uuidstr);
+			return -EINVAL;
+		}
+		p++;
+		str -= 2;
+	}
+
+	return 0;
+}
+
+void xrt_md_pack(struct device *dev, char *blob)
+{
+	int ret;
+
+	ret = fdt_pack(blob);
+	if (ret)
+		md_err(dev, "pack failed %d", ret);
+}
+
+int xrt_md_get_intf_uuids(struct device *dev, const char *blob,
+	u32 *num_uuids, uuid_t *intf_uuids)
+{
+	int offset, count = 0;
+	int ret;
+	const char *uuid_str;
+
+	ret = xrt_md_get_endpoint(dev, blob, NODE_INTERFACES, NULL, &offset);
+	if (ret)
+		return -ENOENT;
+
+	for (offset = fdt_first_subnode(blob, offset);
+	    offset >= 0;
+	    offset = fdt_next_subnode(blob, offset)) {
+		uuid_str = fdt_getprop(blob, offset, PROP_INTERFACE_UUID,
+			NULL);
+		if (!uuid_str) {
+			md_err(dev, "empty intf uuid node");
+			return -EINVAL;
+		}
+
+		if (intf_uuids && count < *num_uuids) {
+			ret = xrt_md_uuid_strtoid(dev, uuid_str,
+				&intf_uuids[count]);
+			if (ret)
+				return -EINVAL;
+		}
+		count++;
+	}
+
+	*num_uuids = count;
+
+	return 0;
+}
+
+int xrt_md_check_uuids(struct device *dev, const char *blob, char *subset_blob)
+{
+	const char *subset_int_uuid = NULL;
+	const char *int_uuid = NULL;
+	int offset, subset_offset, off;
+	int ret;
+
+	ret = xrt_md_get_endpoint(dev, subset_blob, NODE_INTERFACES, NULL,
+		&subset_offset);
+	if (ret)
+		return -EINVAL;
+
+	ret = xrt_md_get_endpoint(dev, blob, NODE_INTERFACES, NULL,
+		&offset);
+	if (ret)
+		return -EINVAL;
+
+	for (subset_offset = fdt_first_subnode(subset_blob, subset_offset);
+	    subset_offset >= 0;
+	    subset_offset = fdt_next_subnode(subset_blob, subset_offset)) {
+		subset_int_uuid = fdt_getprop(subset_blob, subset_offset,
+			PROP_INTERFACE_UUID, NULL);
+		if (!subset_int_uuid)
+			return -EINVAL;
+		off = offset;
+
+		for (off = fdt_first_subnode(blob, off);
+		    off >= 0;
+		    off = fdt_next_subnode(blob, off)) {
+			int_uuid = fdt_getprop(blob, off,
+				PROP_INTERFACE_UUID, NULL);
+			if (!int_uuid)
+				return -EINVAL;
+			if (!strcmp(int_uuid, subset_int_uuid))
+				break;
+		}
+		if (off < 0)
+			return -ENOENT;
+	}
+
+	return 0;
+}
diff --git a/drivers/fpga/xrt/common/xrt-root.c b/drivers/fpga/xrt/common/xrt-root.c
new file mode 100644
index 000000000000..325686547ee8
--- /dev/null
+++ b/drivers/fpga/xrt/common/xrt-root.c
@@ -0,0 +1,737 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/hwmon.h>
+#include "subdev.h"
+#include "parent.h"
+#include "partition.h"
+#include "xrt-root.h"
+#include "metadata.h"
+
+#define	XROOT_PDEV(xr)		((xr)->pdev)
+#define	XROOT_DEV(xr)		(&(XROOT_PDEV(xr)->dev))
+#define xroot_err(xr, fmt, args...)	\
+	dev_err(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_warn(xr, fmt, args...)	\
+	dev_warn(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_info(xr, fmt, args...)	\
+	dev_info(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_dbg(xr, fmt, args...)	\
+	dev_dbg(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+
+#define XRT_VSEC_ID	0x20
+#define	XROOT_PART_FIRST	(-1)
+#define	XROOT_PART_LAST		(-2)
+
+static int xroot_parent_cb(struct device *, void *, u32, void *);
+
+struct xroot_async_evt {
+	struct list_head list;
+	struct xrt_parent_ioctl_async_broadcast_evt evt;
+};
+
+struct xroot_event_cb {
+	struct list_head list;
+	bool initialized;
+	struct xrt_parent_ioctl_evt_cb cb;
+};
+
+struct xroot_events {
+	struct list_head cb_list;
+	struct mutex cb_lock;
+	struct work_struct cb_init_work;
+	struct mutex async_evt_lock;
+	struct list_head async_evt_list;
+	struct work_struct async_evt_work;
+};
+
+struct xroot_parts {
+	struct xrt_subdev_pool pool;
+	struct work_struct bringup_work;
+	atomic_t bringup_pending;
+	atomic_t bringup_failed;
+	struct completion bringup_comp;
+};
+
+struct xroot {
+	struct pci_dev *pdev;
+	struct xroot_events events;
+	struct xroot_parts parts;
+};
+
+struct xroot_part_match_arg {
+	enum xrt_subdev_id id;
+	int instance;
+};
+
+static bool xroot_part_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	struct xroot_part_match_arg *a = (struct xroot_part_match_arg *)arg;
+
+	return id == a->id && pdev->id == a->instance;
+}
+
+static int xroot_get_partition(struct xroot *xr, int instance,
+	struct platform_device **partp)
+{
+	int rc = 0;
+	struct xrt_subdev_pool *parts = &xr->parts.pool;
+	struct device *dev = DEV(xr->pdev);
+	struct xroot_part_match_arg arg = { XRT_SUBDEV_PART, instance };
+
+	if (instance == XROOT_PART_LAST) {
+		rc = xrt_subdev_pool_get(parts, XRT_SUBDEV_MATCH_NEXT,
+			*partp, dev, partp);
+	} else if (instance == XROOT_PART_FIRST) {
+		rc = xrt_subdev_pool_get(parts, XRT_SUBDEV_MATCH_PREV,
+			*partp, dev, partp);
+	} else {
+		rc = xrt_subdev_pool_get(parts, xroot_part_match,
+			&arg, dev, partp);
+	}
+
+	if (rc && rc != -ENOENT)
+		xroot_err(xr, "failed to hold partition %d: %d", instance, rc);
+	return rc;
+}
+
+static void xroot_put_partition(struct xroot *xr, struct platform_device *part)
+{
+	int inst = part->id;
+	int rc = xrt_subdev_pool_put(&xr->parts.pool, part, DEV(xr->pdev));
+
+	if (rc)
+		xroot_err(xr, "failed to release partition %d: %d", inst, rc);
+}
+
+static int
+xroot_partition_trigger_evt(struct xroot *xr, struct xroot_event_cb *cb,
+	struct platform_device *part, enum xrt_events evt)
+{
+	xrt_subdev_match_t match = cb->cb.xevt_match_cb;
+	xrt_event_cb_t evtcb = cb->cb.xevt_cb;
+	void *arg = cb->cb.xevt_match_arg;
+	struct xrt_partition_ioctl_event e = { evt, &cb->cb };
+	struct xrt_event_arg_subdev esd = { XRT_SUBDEV_PART, part->id };
+	int rc;
+
+	if (match(XRT_SUBDEV_PART, part, arg)) {
+		rc = evtcb(cb->cb.xevt_pdev, evt, &esd);
+		if (rc)
+			return rc;
+	}
+
+	return xrt_subdev_ioctl(part, XRT_PARTITION_EVENT, &e);
+}
+
+static void
+xroot_event_partition(struct xroot *xr, int instance, enum xrt_events evt)
+{
+	int ret;
+	struct platform_device *pdev = NULL;
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+
+	BUG_ON(instance < 0);
+	ret = xroot_get_partition(xr, instance, &pdev);
+	if (ret)
+		return;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		int rc;
+
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		if (!tmp->initialized)
+			continue;
+
+		rc = xroot_partition_trigger_evt(xr, tmp, pdev, evt);
+		if (rc) {
+			list_del(&tmp->list);
+			vfree(tmp);
+		}
+	}
+	mutex_unlock(&xr->events.cb_lock);
+
+	(void) xroot_put_partition(xr, pdev);
+}
+
+int xroot_create_partition(struct xroot *xr, char *dtb)
+{
+	int ret;
+
+	atomic_inc(&xr->parts.bringup_pending);
+	ret = xrt_subdev_pool_add(&xr->parts.pool,
+		XRT_SUBDEV_PART, xroot_parent_cb, xr, dtb);
+	if (ret >= 0) {
+		schedule_work(&xr->parts.bringup_work);
+	} else {
+		atomic_dec(&xr->parts.bringup_pending);
+		atomic_inc(&xr->parts.bringup_failed);
+		xroot_err(xr, "failed to create partition: %d", ret);
+	}
+	return ret;
+}
+
+static int xroot_destroy_single_partition(struct xroot *xr, int instance)
+{
+	struct platform_device *pdev = NULL;
+	int ret;
+
+	BUG_ON(instance < 0);
+	ret = xroot_get_partition(xr, instance, &pdev);
+	if (ret)
+		return ret;
+
+	xroot_event_partition(xr, instance, XRT_EVENT_PRE_REMOVAL);
+
+	/* Now tear down all children in this partition. */
+	ret = xrt_subdev_ioctl(pdev, XRT_PARTITION_FINI_CHILDREN, NULL);
+	(void) xroot_put_partition(xr, pdev);
+	if (!ret) {
+		ret = xrt_subdev_pool_del(&xr->parts.pool,
+			XRT_SUBDEV_PART, instance);
+	}
+
+	return ret;
+}
+
+static int xroot_destroy_partition(struct xroot *xr, int instance)
+{
+	struct platform_device *target = NULL;
+	struct platform_device *deps = NULL;
+	int ret;
+
+	BUG_ON(instance < 0);
+	/*
+	 * Make sure target partition exists and can't go away before
+	 * we remove its dependents.
+	 */
+	ret = xroot_get_partition(xr, instance, &target);
+	if (ret)
+		return ret;
+
+	/*
+	 * Remove all partitions that depend on the target one.
+	 * Assuming subdevs in a higher partition ID can depend on ones in
+	 * lower ID partitions, we remove them in reverse order.
+	 */
+	while (xroot_get_partition(xr, XROOT_PART_LAST, &deps) != -ENOENT) {
+		int inst = deps->id;
+
+		xroot_put_partition(xr, deps);
+		if (instance == inst)
+			break;
+		(void) xroot_destroy_single_partition(xr, inst);
+		deps = NULL;
+	}
+
+	/* Now we can remove the target partition. */
+	xroot_put_partition(xr, target);
+	return xroot_destroy_single_partition(xr, instance);
+}
+
+static int xroot_lookup_partition(struct xroot *xr,
+	struct xrt_parent_ioctl_lookup_partition *arg)
+{
+	int rc = -ENOENT;
+	struct platform_device *part = NULL;
+
+	while (rc < 0 && xroot_get_partition(xr, XROOT_PART_LAST,
+		&part) != -ENOENT) {
+		if (arg->xpilp_match_cb(XRT_SUBDEV_PART, part,
+			arg->xpilp_match_arg)) {
+			rc = part->id;
+		}
+		xroot_put_partition(xr, part);
+	}
+	return rc;
+}
+
+static void xroot_evt_cb_init_work(struct work_struct *work)
+{
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+	struct platform_device *part = NULL;
+	struct xroot *xr =
+		container_of(work, struct xroot, events.cb_init_work);
+
+	mutex_lock(&xr->events.cb_lock);
+
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		if (tmp->initialized)
+			continue;
+
+		while (xroot_get_partition(xr, XROOT_PART_LAST,
+			&part) != -ENOENT) {
+			int rc = xroot_partition_trigger_evt(xr, tmp, part,
+				XRT_EVENT_POST_CREATION);
+
+			(void) xroot_put_partition(xr, part);
+			if (rc & XRT_EVENT_CB_STOP) {
+				list_del(&tmp->list);
+				vfree(tmp);
+				tmp = NULL;
+				break;
+			}
+		}
+
+		if (tmp)
+			tmp->initialized = true;
+	}
+
+	mutex_unlock(&xr->events.cb_lock);
+}
+
+static bool xroot_evt(struct xroot *xr, enum xrt_events evt)
+{
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+	int rc;
+	bool success = true;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		rc = tmp->cb.xevt_cb(tmp->cb.xevt_pdev, evt, NULL);
+		if (rc & XRT_EVENT_CB_ERR)
+			success = false;
+		if (rc & XRT_EVENT_CB_STOP) {
+			list_del(&tmp->list);
+			vfree(tmp);
+		}
+	}
+	mutex_unlock(&xr->events.cb_lock);
+
+	return success;
+}
+
+static void xroot_evt_async_evt_work(struct work_struct *work)
+{
+	struct xroot_async_evt *tmp;
+	struct xroot *xr =
+		container_of(work, struct xroot, events.async_evt_work);
+	bool success;
+
+	mutex_lock(&xr->events.async_evt_lock);
+	while (!list_empty(&xr->events.async_evt_list)) {
+		tmp = list_first_entry(&xr->events.async_evt_list,
+			struct xroot_async_evt, list);
+		list_del(&tmp->list);
+		mutex_unlock(&xr->events.async_evt_lock);
+
+		success = xroot_evt(xr, tmp->evt.xaevt_event);
+		if (tmp->evt.xaevt_cb) {
+			tmp->evt.xaevt_cb(tmp->evt.xaevt_pdev,
+				tmp->evt.xaevt_event, tmp->evt.xaevt_arg,
+				success);
+		}
+		vfree(tmp);
+
+		mutex_lock(&xr->events.async_evt_lock);
+	}
+	mutex_unlock(&xr->events.async_evt_lock);
+}
+
+static void xroot_evt_init(struct xroot *xr)
+{
+	INIT_LIST_HEAD(&xr->events.cb_list);
+	INIT_LIST_HEAD(&xr->events.async_evt_list);
+	mutex_init(&xr->events.async_evt_lock);
+	mutex_init(&xr->events.cb_lock);
+	INIT_WORK(&xr->events.cb_init_work, xroot_evt_cb_init_work);
+	INIT_WORK(&xr->events.async_evt_work, xroot_evt_async_evt_work);
+}
+
+static void xroot_evt_fini(struct xroot *xr)
+{
+	const struct list_head *ptr, *next;
+	struct xroot_event_cb *tmp;
+
+	flush_scheduled_work();
+
+	BUG_ON(!list_empty(&xr->events.async_evt_list));
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each_safe(ptr, next, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		list_del(&tmp->list);
+		vfree(tmp);
+	}
+	mutex_unlock(&xr->events.cb_lock);
+}
+
+static int xroot_evt_cb_add(struct xroot *xr,
+	struct xrt_parent_ioctl_evt_cb *cb)
+{
+	struct xroot_event_cb *new = vzalloc(sizeof(*new));
+
+	if (!new)
+		return -ENOMEM;
+
+	cb->xevt_hdl = new;
+	new->cb = *cb;
+	new->initialized = false;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_add(&new->list, &xr->events.cb_list);
+	mutex_unlock(&xr->events.cb_lock);
+
+	schedule_work(&xr->events.cb_init_work);
+	return 0;
+}
+
+static int xroot_async_evt_add(struct xroot *xr,
+	struct xrt_parent_ioctl_async_broadcast_evt *arg)
+{
+	struct xroot_async_evt *new = vzalloc(sizeof(*new));
+
+	if (!new)
+		return -ENOMEM;
+
+	new->evt = *arg;
+
+	mutex_lock(&xr->events.async_evt_lock);
+	list_add(&new->list, &xr->events.async_evt_list);
+	mutex_unlock(&xr->events.async_evt_lock);
+
+	schedule_work(&xr->events.async_evt_work);
+	return 0;
+}
+
+static void xroot_evt_cb_del(struct xroot *xr, void *hdl)
+{
+	struct xroot_event_cb *cb = (struct xroot_event_cb *)hdl;
+	const struct list_head *ptr;
+	struct xroot_event_cb *tmp;
+
+	mutex_lock(&xr->events.cb_lock);
+	list_for_each(ptr, &xr->events.cb_list) {
+		tmp = list_entry(ptr, struct xroot_event_cb, list);
+		if (tmp == cb)
+			break;
+	}
+	list_del(&cb->list);
+	mutex_unlock(&xr->events.cb_lock);
+	vfree(cb);
+}
+
+static int xroot_get_leaf(struct xroot *xr,
+	struct xrt_parent_ioctl_get_leaf *arg)
+{
+	int rc = -ENOENT;
+	struct platform_device *part = NULL;
+
+	while (rc && xroot_get_partition(xr, XROOT_PART_LAST,
+		&part) != -ENOENT) {
+		rc = xrt_subdev_ioctl(part, XRT_PARTITION_GET_LEAF, arg);
+		xroot_put_partition(xr, part);
+	}
+	return rc;
+}
+
+static int xroot_put_leaf(struct xroot *xr,
+	struct xrt_parent_ioctl_put_leaf *arg)
+{
+	int rc = -ENOENT;
+	struct platform_device *part = NULL;
+
+	while (rc && xroot_get_partition(xr, XROOT_PART_LAST,
+		&part) != -ENOENT) {
+		rc = xrt_subdev_ioctl(part, XRT_PARTITION_PUT_LEAF, arg);
+		xroot_put_partition(xr, part);
+	}
+	return rc;
+}
+
+static int xroot_parent_cb(struct device *dev, void *parg, u32 cmd, void *arg)
+{
+	struct xroot *xr = (struct xroot *)parg;
+	int rc = 0;
+
+	switch (cmd) {
+	/* Leaf actions. */
+	case XRT_PARENT_GET_LEAF: {
+		struct xrt_parent_ioctl_get_leaf *getleaf =
+			(struct xrt_parent_ioctl_get_leaf *)arg;
+		rc = xroot_get_leaf(xr, getleaf);
+		break;
+	}
+	case XRT_PARENT_PUT_LEAF: {
+		struct xrt_parent_ioctl_put_leaf *putleaf =
+			(struct xrt_parent_ioctl_put_leaf *)arg;
+		rc = xroot_put_leaf(xr, putleaf);
+		break;
+	}
+	case XRT_PARENT_GET_LEAF_HOLDERS: {
+		struct xrt_parent_ioctl_get_holders *holders =
+			(struct xrt_parent_ioctl_get_holders *)arg;
+		rc = xrt_subdev_pool_get_holders(&xr->parts.pool,
+			holders->xpigh_pdev, holders->xpigh_holder_buf,
+			holders->xpigh_holder_buf_len);
+		break;
+	}
+
+
+	/* Partition actions. */
+	case XRT_PARENT_CREATE_PARTITION:
+		rc = xroot_create_partition(xr, (char *)arg);
+		break;
+	case XRT_PARENT_REMOVE_PARTITION:
+		rc = xroot_destroy_partition(xr, (int)(uintptr_t)arg);
+		break;
+	case XRT_PARENT_LOOKUP_PARTITION: {
+		struct xrt_parent_ioctl_lookup_partition *getpart =
+			(struct xrt_parent_ioctl_lookup_partition *)arg;
+		rc = xroot_lookup_partition(xr, getpart);
+		break;
+	}
+	case XRT_PARENT_WAIT_PARTITION_BRINGUP:
+		rc = xroot_wait_for_bringup(xr) ? 0 : -EINVAL;
+		break;
+
+
+	/* Event actions. */
+	case XRT_PARENT_ADD_EVENT_CB: {
+		struct xrt_parent_ioctl_evt_cb *cb =
+			(struct xrt_parent_ioctl_evt_cb *)arg;
+		rc = xroot_evt_cb_add(xr, cb);
+		break;
+	}
+	case XRT_PARENT_REMOVE_EVENT_CB:
+		xroot_evt_cb_del(xr, arg);
+		rc = 0;
+		break;
+	case XRT_PARENT_ASYNC_BOARDCAST_EVENT:
+		rc = xroot_async_evt_add(xr,
+			(struct xrt_parent_ioctl_async_broadcast_evt *)arg);
+		break;
+
+
+	/* Device info. */
+	case XRT_PARENT_GET_RESOURCE: {
+		struct xrt_parent_ioctl_get_res *res =
+			(struct xrt_parent_ioctl_get_res *)arg;
+		res->xpigr_res = xr->pdev->resource;
+		break;
+	}
+	case XRT_PARENT_GET_ID: {
+		struct xrt_parent_ioctl_get_id *id =
+			(struct xrt_parent_ioctl_get_id *)arg;
+
+		id->xpigi_vendor_id = xr->pdev->vendor;
+		id->xpigi_device_id = xr->pdev->device;
+		id->xpigi_sub_vendor_id = xr->pdev->subsystem_vendor;
+		id->xpigi_sub_device_id = xr->pdev->subsystem_device;
+		break;
+	}
+
+
+	case XRT_PARENT_HOT_RESET: {
+		xroot_hot_reset(xr->pdev);
+		break;
+	}
+
+	case XRT_PARENT_HWMON: {
+		struct xrt_parent_ioctl_hwmon *hwmon =
+			(struct xrt_parent_ioctl_hwmon *)arg;
+
+		if (hwmon->xpih_register) {
+			hwmon->xpih_hwmon_dev =
+				hwmon_device_register_with_info(DEV(xr->pdev),
+				hwmon->xpih_name, hwmon->xpih_drvdata, NULL,
+				hwmon->xpih_groups);
+		} else {
+			(void) hwmon_device_unregister(hwmon->xpih_hwmon_dev);
+		}
+		break;
+	}
+
+	default:
+		xroot_err(xr, "unknown IOCTL cmd %d", cmd);
+		rc = -EINVAL;
+		break;
+	}
+
+	return rc;
+}
+
+static void xroot_bringup_partition_work(struct work_struct *work)
+{
+	struct platform_device *pdev = NULL;
+	struct xroot *xr = container_of(work, struct xroot, parts.bringup_work);
+
+	while (xroot_get_partition(xr, XROOT_PART_LAST, &pdev) != -ENOENT) {
+		int r, i;
+
+		i = pdev->id;
+		r = xrt_subdev_ioctl(pdev, XRT_PARTITION_INIT_CHILDREN, NULL);
+		(void) xroot_put_partition(xr, pdev);
+		if (r == -EEXIST)
+			continue; /* Already brought up, nothing to do. */
+		if (r)
+			atomic_inc(&xr->parts.bringup_failed);
+
+		xroot_event_partition(xr, i, XRT_EVENT_POST_CREATION);
+
+		if (atomic_dec_and_test(&xr->parts.bringup_pending))
+			complete(&xr->parts.bringup_comp);
+	}
+}
+
+static void xroot_parts_init(struct xroot *xr)
+{
+	xrt_subdev_pool_init(DEV(xr->pdev), &xr->parts.pool);
+	INIT_WORK(&xr->parts.bringup_work, xroot_bringup_partition_work);
+	atomic_set(&xr->parts.bringup_pending, 0);
+	atomic_set(&xr->parts.bringup_failed, 0);
+	init_completion(&xr->parts.bringup_comp);
+}
+
+static void xroot_parts_fini(struct xroot *xr)
+{
+	flush_scheduled_work();
+	(void) xrt_subdev_pool_fini(&xr->parts.pool);
+}
+
+int xroot_add_vsec_node(struct xroot *xr, char *dtb)
+{
+	struct device *dev = DEV(xr->pdev);
+	struct xrt_md_endpoint ep = { 0 };
+	int cap = 0, ret = 0;
+	u32 off_low, off_high, vsec_bar, header;
+	u64 vsec_off;
+
+	while ((cap = pci_find_next_ext_capability(xr->pdev, cap,
+	    PCI_EXT_CAP_ID_VNDR))) {
+		pci_read_config_dword(xr->pdev, cap + PCI_VNDR_HEADER, &header);
+		if (PCI_VNDR_HEADER_ID(header) == XRT_VSEC_ID)
+			break;
+	}
+	if (!cap) {
+		xroot_info(xr, "No Vendor Specific Capability.");
+		return -ENOENT;
+	}
+
+	if (pci_read_config_dword(xr->pdev, cap+8, &off_low) ||
+	    pci_read_config_dword(xr->pdev, cap+12, &off_high)) {
+		xroot_err(xr, "pci_read vendor specific failed.");
+		return -EINVAL;
+	}
+
+	ep.ep_name = NODE_VSEC;
+	ret = xrt_md_add_endpoint(dev, dtb, &ep);
+	if (ret) {
+		xroot_err(xr, "add vsec metadata failed, ret %d", ret);
+		goto failed;
+	}
+
+	vsec_bar = cpu_to_be32(off_low & 0xf);
+	ret = xrt_md_set_prop(dev, dtb, NODE_VSEC,
+		NULL, PROP_BAR_IDX, &vsec_bar, sizeof(vsec_bar));
+	if (ret) {
+		xroot_err(xr, "add vsec bar idx failed, ret %d", ret);
+		goto failed;
+	}
+
+	vsec_off = cpu_to_be64(((u64)off_high << 32) | (off_low & ~0xfU));
+	ret = xrt_md_set_prop(dev, dtb, NODE_VSEC,
+		NULL, PROP_OFFSET, &vsec_off, sizeof(vsec_off));
+	if (ret) {
+		xroot_err(xr, "add vsec offset failed, ret %d", ret);
+		goto failed;
+	}
+
+failed:
+	return ret;
+}
+
+int xroot_add_simple_node(struct xroot *xr, char *dtb, const char *endpoint)
+{
+	struct device *dev = DEV(xr->pdev);
+	struct xrt_md_endpoint ep = { 0 };
+	int ret = 0;
+
+	ep.ep_name = endpoint;
+	ret = xrt_md_add_endpoint(dev, dtb, &ep);
+	if (ret)
+		xroot_err(xr, "add %s failed, ret %d", endpoint, ret);
+
+	return ret;
+}
+
+bool xroot_wait_for_bringup(struct xroot *xr)
+{
+	wait_for_completion(&xr->parts.bringup_comp);
+	return atomic_xchg(&xr->parts.bringup_failed, 0) == 0;
+}
+
+int xroot_probe(struct pci_dev *pdev, struct xroot **root)
+{
+	struct device *dev = DEV(pdev);
+	struct xroot *xr = NULL;
+
+	dev_info(dev, "%s: probing...", __func__);
+
+	xr = devm_kzalloc(dev, sizeof(*xr), GFP_KERNEL);
+	if (!xr)
+		return -ENOMEM;
+
+	xr->pdev = pdev;
+	xroot_parts_init(xr);
+	xroot_evt_init(xr);
+
+	*root = xr;
+	return 0;
+}
+
+void xroot_remove(struct xroot *xr)
+{
+	struct platform_device *part = NULL;
+
+	xroot_info(xr, "leaving...");
+
+	if (xroot_get_partition(xr, XROOT_PART_FIRST, &part) == 0) {
+		int instance = part->id;
+
+		xroot_put_partition(xr, part);
+		(void) xroot_destroy_partition(xr, instance);
+	}
+
+	xroot_evt_fini(xr);
+	xroot_parts_fini(xr);
+}
+
+static void xroot_broadcast_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg, bool success)
+{
+	struct completion *comp = (struct completion *)arg;
+
+	complete(comp);
+}
+
+void xroot_broadcast(struct xroot *xr, enum xrt_events evt)
+{
+	struct completion comp;
+	struct xrt_parent_ioctl_async_broadcast_evt e = {
+		NULL, evt, xroot_broadcast_event_cb, &comp
+	};
+	int rc;
+
+	init_completion(&comp);
+	rc = xroot_async_evt_add(xr, &e);
+	if (rc == 0)
+		wait_for_completion(&comp);
+	else
+		xroot_err(xr, "can't broadcast event (%d): %d", evt, rc);
+}
diff --git a/drivers/fpga/xrt/common/xrt-root.h b/drivers/fpga/xrt/common/xrt-root.h
new file mode 100644
index 000000000000..7745afa60c95
--- /dev/null
+++ b/drivers/fpga/xrt/common/xrt-root.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_ROOT_H_
+#define	_XRT_ROOT_H_
+
+#include <linux/pci.h>
+#include "subdev.h"
+
+struct xroot;
+
+int xroot_probe(struct pci_dev *pdev, struct xroot **root);
+void xroot_remove(struct xroot *root);
+bool xroot_wait_for_bringup(struct xroot *root);
+int xroot_add_vsec_node(struct xroot *root, char *dtb);
+int xroot_create_partition(struct xroot *xr, char *dtb);
+int xroot_add_simple_node(struct xroot *root, char *dtb, const char *endpoint);
+void xroot_hot_reset(struct pci_dev *pdev);
+void xroot_broadcast(struct xroot *root, enum xrt_events evt);
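+
+/*
+ * Rough calling sequence from a PCIe driver probe (illustrative only;
+ * "xr" and "dtb" are placeholders for the root handle and the metadata
+ * blob built from the device firmware):
+ *
+ *	xroot_probe(pdev, &xr);
+ *	xroot_add_vsec_node(xr, dtb);
+ *	xroot_create_partition(xr, dtb);
+ *	xroot_wait_for_bringup(xr);
+ *	...
+ *	xroot_remove(xr);
+ */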
+
+#endif	/* _XRT_ROOT_H_ */
diff --git a/drivers/fpga/xrt/common/xrt-xclbin.c b/drivers/fpga/xrt/common/xrt-xclbin.c
new file mode 100644
index 000000000000..3b1b52445009
--- /dev/null
+++ b/drivers/fpga/xrt/common/xrt-xclbin.c
@@ -0,0 +1,387 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Kernel Driver XCLBIN parser
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors: David Zhang <davidzha@xilinx.com>
+ */
+
+#include <asm/errno.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include "xrt-xclbin.h"
+#include "metadata.h"
+
+/* Used for parsing bitstream header */
+#define XHI_EVEN_MAGIC_BYTE     0x0f
+#define XHI_ODD_MAGIC_BYTE      0xf0
+
+/* Extra mode for IDLE */
+#define XHI_OP_IDLE  -1
+#define XHI_BIT_HEADER_FAILURE -1
+
+/* The imaginary module length register */
+#define XHI_MLR                  15
+
+const char *xrt_xclbin_kind_to_string(enum axlf_section_kind kind)
+{
+	switch (kind) {
+	case BITSTREAM:			return "BITSTREAM";
+	case CLEARING_BITSTREAM:	return "CLEARING_BITSTREAM";
+	case EMBEDDED_METADATA:		return "EMBEDDED_METADATA";
+	case FIRMWARE:			return "FIRMWARE";
+	case DEBUG_DATA:		return "DEBUG_DATA";
+	case SCHED_FIRMWARE:		return "SCHED_FIRMWARE";
+	case MEM_TOPOLOGY:		return "MEM_TOPOLOGY";
+	case CONNECTIVITY:		return "CONNECTIVITY";
+	case IP_LAYOUT:			return "IP_LAYOUT";
+	case DEBUG_IP_LAYOUT:		return "DEBUG_IP_LAYOUT";
+	case DESIGN_CHECK_POINT:	return "DESIGN_CHECK_POINT";
+	case CLOCK_FREQ_TOPOLOGY:	return "CLOCK_FREQ_TOPOLOGY";
+	case MCS:			return "MCS";
+	case BMC:			return "BMC";
+	case BUILD_METADATA:		return "BUILD_METADATA";
+	case KEYVALUE_METADATA:		return "KEYVALUE_METADATA";
+	case USER_METADATA:		return "USER_METADATA";
+	case DNA_CERTIFICATE:		return "DNA_CERTIFICATE";
+	case PDI:			return "PDI";
+	case BITSTREAM_PARTIAL_PDI:	return "BITSTREAM_PARTIAL_PDI";
+	case PARTITION_METADATA:	return "PARTITION_METADATA";
+	case EMULATION_DATA:		return "EMULATION_DATA";
+	case SYSTEM_METADATA:		return "SYSTEM_METADATA";
+	case SOFT_KERNEL:		return "SOFT_KERNEL";
+	case ASK_FLASH:			return "ASK_FLASH";
+	case AIE_METADATA:		return "AIE_METADATA";
+	case ASK_GROUP_TOPOLOGY:	return "ASK_GROUP_TOPOLOGY";
+	case ASK_GROUP_CONNECTIVITY:	return "ASK_GROUP_CONNECTIVITY";
+	default:			return "UNKNOWN";
+	}
+}
+
+static const struct axlf_section_header *
+xrt_xclbin_get_section_hdr(const struct axlf *xclbin,
+	enum axlf_section_kind kind)
+{
+	int i = 0;
+
+	for (i = 0; i < xclbin->m_header.m_numSections; i++) {
+		if (xclbin->m_sections[i].m_sectionKind == kind)
+			return &xclbin->m_sections[i];
+	}
+
+	return NULL;
+}
+
+static int
+xrt_xclbin_check_section_hdr(const struct axlf_section_header *header,
+	uint64_t xclbin_len)
+{
+	return (header->m_sectionOffset + header->m_sectionSize) > xclbin_len ?
+		-EINVAL : 0;
+}
+
+static int xrt_xclbin_section_info(const struct axlf *xclbin,
+	enum axlf_section_kind kind,
+	uint64_t *offset, uint64_t *size)
+{
+	const struct axlf_section_header *memHeader = NULL;
+	uint64_t xclbin_len;
+	int err = 0;
+
+	memHeader = xrt_xclbin_get_section_hdr(xclbin, kind);
+	if (!memHeader)
+		return -EINVAL;
+
+	xclbin_len = xclbin->m_header.m_length;
+	err = xrt_xclbin_check_section_hdr(memHeader, xclbin_len);
+	if (err)
+		return err;
+
+	*offset = memHeader->m_sectionOffset;
+	*size = memHeader->m_sectionSize;
+
+	return 0;
+}
+
+/* caller should free the allocated memory for **data */
+int xrt_xclbin_get_section(const struct axlf *buf,
+	enum axlf_section_kind kind, void **data, uint64_t *len)
+{
+	const struct axlf *xclbin = (const struct axlf *)buf;
+	void *section = NULL;
+	int err = 0;
+	uint64_t offset = 0;
+	uint64_t size = 0;
+
+	err = xrt_xclbin_section_info(xclbin, kind, &offset, &size);
+	if (err)
+		return err;
+
+	section = vmalloc(size);
+	if (section == NULL)
+		return -ENOMEM;
+
+	memcpy(section, ((const char *)xclbin) + offset, size);
+
+	*data = section;
+	if (len)
+		*len = size;
+
+	return 0;
+}
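+
+/*
+ * Typical calling pattern (illustrative only): the buffer returned in
+ * *data is vmalloc'ed and must be released by the caller with vfree().
+ *
+ *	struct ip_layout *layout = NULL;
+ *	uint64_t len = 0;
+ *
+ *	if (!xrt_xclbin_get_section(xclbin, IP_LAYOUT, (void **)&layout, &len)) {
+ *		... walk layout->m_count entries of layout->m_ip_data ...
+ *		vfree(layout);
+ *	}
+ */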
+
+/* parse bitstream header */
+int xrt_xclbin_parse_header(const unsigned char *data,
+	unsigned int size, struct XHwIcap_Bit_Header *header)
+{
+	unsigned int i;
+	unsigned int len;
+	unsigned int tmp;
+	unsigned int index;
+
+	/* Start Index at start of bitstream */
+	index = 0;
+
+	/* Initialize HeaderLength. If the parser returns early, this value
+	 * indicates failure.
+	 */
+	header->HeaderLength = XHI_BIT_HEADER_FAILURE;
+
+	/* Get "Magic" length */
+	header->MagicLength = data[index++];
+	header->MagicLength = (header->MagicLength << 8) | data[index++];
+
+	/* Read in "magic" */
+	for (i = 0; i < header->MagicLength - 1; i++) {
+		tmp = data[index++];
+		if (i % 2 == 0 && tmp != XHI_EVEN_MAGIC_BYTE)
+			return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+		if (i % 2 == 1 && tmp != XHI_ODD_MAGIC_BYTE)
+			return -1;	/* INVALID_FILE_HEADER_ERROR */
+	}
+
+	/* Read null end of magic data. */
+	tmp = data[index++];
+
+	/* Read 0x01 (short) */
+	tmp = data[index++];
+	tmp = (tmp << 8) | data[index++];
+
+	/* Check the "0x01" half word */
+	if (tmp != 0x01)
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Read 'a' */
+	tmp = data[index++];
+	if (tmp != 'a')
+		return -1;	/* INVALID_FILE_HEADER_ERROR	*/
+
+	/* Get Design Name length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for design name and final null character. */
+	header->DesignName = vmalloc(len);
+
+	/* Read in Design Name */
+	for (i = 0; i < len; i++)
+		header->DesignName[i] = data[index++];
+
+	if (header->DesignName[len-1] != '\0')
+		return -1;
+
+	/* Read 'b' */
+	tmp = data[index++];
+	if (tmp != 'b')
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Get Part Name length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for part name and final null character. */
+	header->PartName = vmalloc(len);
+
+	/* Read in part name */
+	for (i = 0; i < len; i++)
+		header->PartName[i] = data[index++];
+
+	if (header->PartName[len-1] != '\0')
+		return -1;
+
+	/* Read 'c' */
+	tmp = data[index++];
+	if (tmp != 'c')
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Get date length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for date and final null character. */
+	header->Date = vmalloc(len);
+
+	/* Read in date name */
+	for (i = 0; i < len; i++)
+		header->Date[i] = data[index++];
+
+	if (header->Date[len - 1] != '\0')
+		return -1;
+
+	/* Read 'd' */
+	tmp = data[index++];
+	if (tmp != 'd')
+		return -1;	/* INVALID_FILE_HEADER_ERROR  */
+
+	/* Get time length */
+	len = data[index++];
+	len = (len << 8) | data[index++];
+
+	/* allocate space for time and final null character. */
+	header->Time = vmalloc(len);
+
+	/* Read in time name */
+	for (i = 0; i < len; i++)
+		header->Time[i] = data[index++];
+
+	if (header->Time[len - 1] != '\0')
+		return -1;
+
+	/* Read 'e' */
+	tmp = data[index++];
+	if (tmp != 'e')
+		return -1;	/* INVALID_FILE_HEADER_ERROR */
+
+	/* Get byte length of bitstream */
+	header->BitstreamLength = data[index++];
+	header->BitstreamLength = (header->BitstreamLength << 8) | data[index++];
+	header->BitstreamLength = (header->BitstreamLength << 8) | data[index++];
+	header->BitstreamLength = (header->BitstreamLength << 8) | data[index++];
+	header->HeaderLength = index;
+
+	return 0;
+}
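+
+/*
+ * Expected calling pattern (illustrative only): a successfully parsed
+ * header owns vmalloc'ed name strings which must be released with
+ * xrt_xclbin_free_header(). "bit_data" and "bit_size" are placeholders
+ * for the raw bitstream buffer and its length.
+ *
+ *	struct XHwIcap_Bit_Header hdr = { 0 };
+ *
+ *	if (!xrt_xclbin_parse_header(bit_data, bit_size, &hdr)) {
+ *		... use hdr.DesignName, hdr.PartName, hdr.BitstreamLength ...
+ *		xrt_xclbin_free_header(&hdr);
+ *	}
+ */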
+
+void xrt_xclbin_free_header(struct XHwIcap_Bit_Header *header)
+{
+	vfree(header->DesignName);
+	vfree(header->PartName);
+	vfree(header->Date);
+	vfree(header->Time);
+}
+
+struct xrt_clock_desc {
+	char	*clock_ep_name;
+	u32	clock_xclbin_type;
+	char	*clkfreq_ep_name;
+} clock_desc[] = {
+	{
+		.clock_ep_name = NODE_CLK_KERNEL1,
+		.clock_xclbin_type = CT_DATA,
+		.clkfreq_ep_name = NODE_CLKFREQ_K1,
+	},
+	{
+		.clock_ep_name = NODE_CLK_KERNEL2,
+		.clock_xclbin_type = CT_KERNEL,
+		.clkfreq_ep_name = NODE_CLKFREQ_K2,
+	},
+	{
+		.clock_ep_name = NODE_CLK_KERNEL3,
+		.clock_xclbin_type = CT_SYSTEM,
+		.clkfreq_ep_name = NODE_CLKFREQ_HBM,
+	},
+};
+
+const char *clock_type2epname(enum CLOCK_TYPE type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
+		if (clock_desc[i].clock_xclbin_type == type)
+			return clock_desc[i].clock_ep_name;
+	}
+	return NULL;
+}
+
+static const char *clock_type2clkfreq_name(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
+		if (clock_desc[i].clock_xclbin_type == type)
+			return clock_desc[i].clkfreq_ep_name;
+	}
+	return NULL;
+}
+
+static int xrt_xclbin_add_clock_metadata(struct device *dev,
+	const struct axlf *xclbin, char *dtb)
+{
+	int i;
+	u16 freq;
+	struct clock_freq_topology *clock_topo;
+	int rc = xrt_xclbin_get_section(xclbin,
+		CLOCK_FREQ_TOPOLOGY, (void **)&clock_topo, NULL);
+
+	if (rc)
+		return 0;
+
+	for (i = 0; i < clock_topo->m_count; i++) {
+		u8 type = clock_topo->m_clock_freq[i].m_type;
+		const char *ep_name = clock_type2epname(type);
+		const char *counter_name = clock_type2clkfreq_name(type);
+
+		if (!ep_name || !counter_name)
+			continue;
+
+		freq = cpu_to_be16(clock_topo->m_clock_freq[i].m_freq_Mhz);
+		rc = xrt_md_set_prop(dev, dtb, ep_name,
+			NULL, PROP_CLK_FREQ, &freq, sizeof(freq));
+		if (rc)
+			break;
+
+		rc = xrt_md_set_prop(dev, dtb, ep_name,
+			NULL, PROP_CLK_CNT, counter_name, strlen(counter_name) + 1);
+		if (rc)
+			break;
+	}
+
+	vfree(clock_topo);
+
+	return rc;
+}
+
+int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb)
+{
+	char *md = NULL, *newmd = NULL;
+	u64 len;
+	int rc = xrt_xclbin_get_section(xclbin, PARTITION_METADATA,
+		(void **)&md, &len);
+
+	if (rc)
+		goto done;
+
+	/* Sanity check the dtb section. */
+	if (xrt_md_size(dev, md) > len) {
+		rc = -EINVAL;
+		goto done;
+	}
+
+	newmd = xrt_md_dup(dev, md);
+	if (!newmd) {
+		rc = -EFAULT;
+		goto done;
+	}
+	/* Convert various needed xclbin sections into dtb. */
+	rc = xrt_xclbin_add_clock_metadata(dev, xclbin, newmd);
+
+done:
+	if (rc == 0)
+		*dtb = newmd;
+	else
+		vfree(newmd);
+	vfree(md);
+	return rc;
+}
diff --git a/drivers/fpga/xrt/common/xrt-xclbin.h b/drivers/fpga/xrt/common/xrt-xclbin.h
new file mode 100644
index 000000000000..a8676f214c10
--- /dev/null
+++ b/drivers/fpga/xrt/common/xrt-xclbin.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0 */
+/*
+ * Xilinx Kernel Driver XCLBIN parser
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors: David Zhang <davidzha@xilinx.com>
+ */
+
+#ifndef _XRT_XCLBIN_H
+#define _XRT_XCLBIN_H
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/xrt/xclbin.h>
+
+#define	ICAP_XCLBIN_V2	"xclbin2"
+#define DMA_HWICAP_BITFILE_BUFFER_SIZE 1024
+#define MAX_XCLBIN_SIZE (1024 * 1024 * 1024) /* Assuming xclbin <= 1G, always */
+
+enum axlf_section_kind;
+struct axlf;
+
+/**
+ * Bitstream header information as defined by Xilinx tools.
+ * Please note that this struct definition is not owned by the driver and
+ * hence it does not use Linux coding style.
+ */
+struct XHwIcap_Bit_Header {
+	unsigned int HeaderLength;     /* Length of header in 32 bit words */
+	unsigned int BitstreamLength;  /* Length of bitstream to read in bytes*/
+	unsigned char *DesignName;     /* Design name get from bitstream */
+	unsigned char *PartName;       /* Part name read from bitstream */
+	unsigned char *Date;           /* Date read from bitstream header */
+	unsigned char *Time;           /* Bitstream creation time*/
+	unsigned int MagicLength;      /* Length of the magic numbers*/
+};
+
+const char *xrt_xclbin_kind_to_string(enum axlf_section_kind kind);
+int xrt_xclbin_get_section(const struct axlf *xclbin,
+	enum axlf_section_kind kind, void **data, uint64_t *len);
+int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb);
+int xrt_xclbin_parse_header(const unsigned char *data,
+	unsigned int size, struct XHwIcap_Bit_Header *header);
+void xrt_xclbin_free_header(struct XHwIcap_Bit_Header *header);
+const char *clock_type2epname(enum CLOCK_TYPE type);
+
+#endif /* _XRT_XCLBIN_H */
diff --git a/include/uapi/linux/xrt/xclbin.h b/include/uapi/linux/xrt/xclbin.h
new file mode 100644
index 000000000000..630d45597bd1
--- /dev/null
+++ b/include/uapi/linux/xrt/xclbin.h
@@ -0,0 +1,386 @@
+/* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0 */
+/*
+ *  Xilinx FPGA compiled binary container format
+ *
+ *  Copyright (C) 2015-2020, Xilinx Inc
+ */
+
+
+#ifndef _XCLBIN_H_
+#define _XCLBIN_H_
+
+#ifdef _WIN32
+  #include <cstdint>
+  #include <algorithm>
+  #include "windows/uuid.h"
+#else
+  #if defined(__KERNEL__)
+    #include <linux/types.h>
+    #include <linux/uuid.h>
+    #include <linux/version.h>
+  #elif defined(__cplusplus)
+    #include <cstdlib>
+    #include <cstdint>
+    #include <algorithm>
+    #include <uuid/uuid.h>
+  #else
+    #include <stdlib.h>
+    #include <stdint.h>
+    #include <uuid/uuid.h>
+  #endif
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: Container format for Xilinx FPGA images
+ * The container stores bitstreams, metadata and firmware images.
+ * xclbin/xsabin is an ELF-like binary container format. It is a
+ * structured series of sections: a file header, followed by several
+ * section headers, which are in turn followed by the sections
+ * themselves. Each section header points to an actual section. There
+ * is an optional signature at the end. The
+ * following figure illustrates a typical xclbin:
+ *
+ *     +---------------------+
+ *     |		     |
+ *     |       HEADER	     |
+ *     +---------------------+
+ *     |   SECTION  HEADER   |
+ *     |		     |
+ *     +---------------------+
+ *     |	 ...	     |
+ *     |		     |
+ *     +---------------------+
+ *     |   SECTION  HEADER   |
+ *     |		     |
+ *     +---------------------+
+ *     |       SECTION	     |
+ *     |		     |
+ *     +---------------------+
+ *     |	 ...	     |
+ *     |		     |
+ *     +---------------------+
+ *     |       SECTION	     |
+ *     |		     |
+ *     +---------------------+
+ *     |      SIGNATURE	     |
+ *     |      (OPTIONAL)     |
+ *     +---------------------+
+ */
+
+enum XCLBIN_MODE {
+	XCLBIN_FLAT,
+	XCLBIN_PR,
+	XCLBIN_TANDEM_STAGE2,
+	XCLBIN_TANDEM_STAGE2_WITH_PR,
+	XCLBIN_HW_EMU,
+	XCLBIN_SW_EMU,
+	XCLBIN_MODE_MAX
+};
+
+enum axlf_section_kind {
+	BITSTREAM = 0,
+	CLEARING_BITSTREAM,
+	EMBEDDED_METADATA,
+	FIRMWARE,
+	DEBUG_DATA,
+	SCHED_FIRMWARE,
+	MEM_TOPOLOGY,
+	CONNECTIVITY,
+	IP_LAYOUT,
+	DEBUG_IP_LAYOUT,
+	DESIGN_CHECK_POINT,
+	CLOCK_FREQ_TOPOLOGY,
+	MCS,
+	BMC,
+	BUILD_METADATA,
+	KEYVALUE_METADATA,
+	USER_METADATA,
+	DNA_CERTIFICATE,
+	PDI,
+	BITSTREAM_PARTIAL_PDI,
+	PARTITION_METADATA,
+	EMULATION_DATA,
+	SYSTEM_METADATA,
+	SOFT_KERNEL,
+	ASK_FLASH,
+	AIE_METADATA,
+	ASK_GROUP_TOPOLOGY,
+	ASK_GROUP_CONNECTIVITY
+};
+
+enum MEM_TYPE {
+	MEM_DDR3,
+	MEM_DDR4,
+	MEM_DRAM,
+	MEM_STREAMING,
+	MEM_PREALLOCATED_GLOB,
+	MEM_ARE,
+	MEM_HBM,
+	MEM_BRAM,
+	MEM_URAM,
+	MEM_STREAMING_CONNECTION
+};
+
+enum IP_TYPE {
+	IP_MB = 0,
+	IP_KERNEL,
+	IP_DNASC,
+	IP_DDR4_CONTROLLER,
+	IP_MEM_DDR4,
+	IP_MEM_HBM
+};
+
+struct axlf_section_header {
+	uint32_t m_sectionKind;		    /* Section type */
+	char m_sectionName[16];		    /* Examples: "stage2", "clear1", "clear2", "ocl1", "ocl2", "ublaze", "sched" */
+	uint64_t m_sectionOffset;	    /* File offset of section data */
+	uint64_t m_sectionSize;		    /* Size of section data */
+};
+
+struct axlf_header {
+	uint64_t m_length;		    /* Total size of the xclbin file */
+	uint64_t m_timeStamp;		    /* Number of seconds since epoch when xclbin was created */
+	uint64_t m_featureRomTimeStamp;	    /* TimeSinceEpoch of the featureRom */
+	uint16_t m_versionPatch;	    /* Patch Version */
+	uint8_t m_versionMajor;		    /* Major Version - Version: 2.1.0*/
+	uint8_t m_versionMinor;		    /* Minor Version */
+	uint32_t m_mode;		    /* XCLBIN_MODE */
+	union {
+		struct {
+			uint64_t m_platformId;	/* 64 bit platform ID: vendor-device-subvendor-subdev */
+			uint64_t m_featureId;	/* 64 bit feature id */
+		} rom;
+		unsigned char rom_uuid[16];	/* feature ROM UUID for which this xclbin was generated */
+	};
+	unsigned char m_platformVBNV[64];	/* e.g. xilinx:xil-accel-rd-ku115:4ddr-xpr:3.4: null terminated */
+	union {
+		char m_next_axlf[16];		/* Name of next xclbin file in the daisy chain */
+		uuid_t uuid;			/* uuid of this xclbin*/
+	};
+	char m_debug_bin[16];			/* Name of binary with debug information */
+	uint32_t m_numSections;			/* Number of section headers */
+};
+
+struct axlf {
+	char m_magic[8];			    /* Should be "xclbin2\0"  */
+	int32_t m_signature_length;		    /* Length of the signature. -1 indicates no signature */
+	unsigned char reserved[28];		    /* Note: Initialized to 0xFFs */
+
+	unsigned char m_keyBlock[256];		    /* Signature for validation of binary */
+	uint64_t m_uniqueId;			    /* axlf's uniqueId, use it to skip redownload etc */
+	struct axlf_header m_header;		    /* Inline header */
+	struct axlf_section_header m_sections[1];   /* One or more section headers follow */
+};
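+
+/*
+ * Section data is located through the section header array (sketch only):
+ *
+ *	for (i = 0; i < axlf->m_header.m_numSections; i++) {
+ *		const struct axlf_section_header *sh = &axlf->m_sections[i];
+ *
+ *		... section of kind sh->m_sectionKind starts at byte offset
+ *		    sh->m_sectionOffset and is sh->m_sectionSize bytes long ...
+ *	}
+ */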
+
+/* bitstream information */
+struct xlnx_bitstream {
+	uint8_t m_freq[8];
+	char bits[1];
+};
+
+/****	MEMORY TOPOLOGY SECTION ****/
+struct mem_data {
+	uint8_t m_type; //enum corresponding to mem_type.
+	uint8_t m_used; //if 0 this bank is not present
+	union {
+		uint64_t m_size; //if mem_type DDR, then size in KB;
+		uint64_t route_id; //if streaming then "route_id"
+	};
+	union {
+		uint64_t m_base_address;//if DDR then the base address;
+		uint64_t flow_id; //if streaming then "flow id"
+	};
+	unsigned char m_tag[16]; //DDR: BANK0,1,2,3, has to be null terminated; if streaming then stream0, 1 etc
+};
+
+struct mem_topology {
+	int32_t m_count; //Number of mem_data
+	struct mem_data m_mem_data[1]; //Should be sorted on mem_type
+};
+
+/****	CONNECTIVITY SECTION ****/
+/* Connectivity of each argument of a kernel, expressed in terms of the
+ * associated argument index. To associate kernel instances with arguments
+ * and memory banks, start at the connectivity section. Use
+ * m_ip_layout_index to access ip_data.m_name; the kernel instance can then
+ * be associated with its original kernel name, and its connectivity is
+ * known as well. This enables us to form related groups of kernel
+ * instances (see the example below struct connectivity).
+ */
+
+struct connection {
+	int32_t arg_index; //From 0 to n, may not be contiguous as scalars are skipped
+	int32_t m_ip_layout_index; //index into the ip_layout section. ip_layout.m_ip_data[index].m_type == IP_KERNEL
+	int32_t mem_data_index; //index of the m_mem_data. Flag an error if m_used is false.
+};
+
+struct connectivity {
+	int32_t m_count;
+	struct connection m_connection[1];
+};
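+
+/*
+ * Example of the lookup described above (illustrative only; "conn",
+ * "layout" and "topo" are assumed to point at the CONNECTIVITY, IP_LAYOUT
+ * and MEM_TOPOLOGY sections respectively):
+ *
+ *	for (i = 0; i < conn->m_count; i++) {
+ *		struct connection *c = &conn->m_connection[i];
+ *		struct ip_data *ip = &layout->m_ip_data[c->m_ip_layout_index];
+ *		struct mem_data *mem = &topo->m_mem_data[c->mem_data_index];
+ *
+ *		... argument c->arg_index of kernel instance ip->m_name is
+ *		    connected to memory bank mem->m_tag ...
+ *	}
+ */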
+
+
+/****	IP_LAYOUT SECTION ****/
+
+// IP Kernel
+#define IP_INT_ENABLE_MASK	  0x0001
+#define IP_INTERRUPT_ID_MASK  0x00FE
+#define IP_INTERRUPT_ID_SHIFT 0x1
+
+enum IP_CONTROL {
+	AP_CTRL_HS = 0,
+	AP_CTRL_CHAIN = 1,
+	AP_CTRL_NONE = 2,
+	AP_CTRL_ME = 3,
+	ACCEL_ADAPTER = 4
+};
+
+#define IP_CONTROL_MASK	 0xFF00
+#define IP_CONTROL_SHIFT 0x8
+
+/* IPs on AXI lite - their types, names, and base addresses.*/
+struct ip_data {
+	uint32_t m_type; //map to IP_TYPE enum
+	union {
+		uint32_t properties; // Default: 32-bits to indicate ip specific property.
+		// m_type: IP_KERNEL
+		//	    m_int_enable   : Bit  - 0x0000_0001;
+		//	    m_interrupt_id : Bits - 0x0000_00FE;
+		//	    m_ip_control   : Bits = 0x0000_FF00;
+		struct {		 // m_type: IP_MEM_*
+			uint16_t m_index;
+			uint8_t m_pc_index;
+			uint8_t unused;
+		} indices;
+	};
+	uint64_t m_base_address;
+	uint8_t m_name[64]; //eg Kernel name corresponding to KERNEL instance, can embed CU name in future.
+};
+
+struct ip_layout {
+	int32_t m_count;
+	struct ip_data m_ip_data[1]; //All the ip_data needs to be sorted by m_base_address.
+};
+
+/*** Debug IP section layout ****/
+enum DEBUG_IP_TYPE {
+	UNDEFINED = 0,
+	LAPC,
+	ILA,
+	AXI_MM_MONITOR,
+	AXI_TRACE_FUNNEL,
+	AXI_MONITOR_FIFO_LITE,
+	AXI_MONITOR_FIFO_FULL,
+	ACCEL_MONITOR,
+	AXI_STREAM_MONITOR,
+	AXI_STREAM_PROTOCOL_CHECKER,
+	TRACE_S2MM,
+	AXI_DMA,
+	TRACE_S2MM_FULL
+};
+
+struct debug_ip_data {
+	uint8_t m_type; // type of enum DEBUG_IP_TYPE
+	uint8_t m_index_lowbyte;
+	uint8_t m_properties;
+	uint8_t m_major;
+	uint8_t m_minor;
+	uint8_t m_index_highbyte;
+	uint8_t m_reserved[2];
+	uint64_t m_base_address;
+	char	m_name[128];
+};
+
+struct debug_ip_layout {
+	uint16_t m_count;
+	struct debug_ip_data m_debug_ip_data[1];
+};
+
+/* Supported clock frequency types */
+enum CLOCK_TYPE {
+	CT_UNUSED = 0,			   /* Initialized value */
+	CT_DATA	  = 1,			   /* Data clock */
+	CT_KERNEL = 2,			   /* Kernel clock */
+	CT_SYSTEM = 3			   /* System Clock */
+};
+
+/* Clock Frequency Entry */
+struct clock_freq {
+	uint16_t m_freq_Mhz;		   /* Frequency in MHz */
+	uint8_t m_type;			   /* Clock type (enum CLOCK_TYPE) */
+	uint8_t m_unused[5];		   /* Not used - padding */
+	char m_name[128];		   /* Clock Name */
+};
+
+/* Clock frequency section */
+struct clock_freq_topology {
+	int16_t m_count;		   /* Number of entries */
+	struct clock_freq m_clock_freq[1]; /* Clock array */
+};
+
+/* Supported MCS file types */
+enum MCS_TYPE {
+	MCS_UNKNOWN = 0,		   /* Initialized value */
+	MCS_PRIMARY = 1,		   /* The primary mcs file data */
+	MCS_SECONDARY = 2,		   /* The secondary mcs file data */
+};
+
+/* One chunk of MCS data */
+struct mcs_chunk {
+	uint8_t m_type;			   /* MCS data type */
+	uint8_t m_unused[7];		   /* padding */
+	uint64_t m_offset;		   /* data offset from the start of the section */
+	uint64_t m_size;		   /* data size */
+};
+
+/* MCS data section */
+struct mcs {
+	int8_t m_count;			   /* Number of chunks */
+	int8_t m_unused[7];		   /* padding */
+	struct mcs_chunk m_chunk[1];	   /* MCS chunks followed by data */
+};
+
+/* bmc data section */
+struct bmc {
+	uint64_t m_offset;		   /* data offset from the start of the section */
+	uint64_t m_size;		   /* data size (bytes)*/
+	char m_image_name[64];		   /* Name of the image (e.g., MSP432P401R) */
+	char m_device_name[64];		   /* Device ID		(e.g., VCU1525)	 */
+	char m_version[64];
+	char m_md5value[33];		   /* MD5 Expected Value(e.g., 56027182079c0bd621761b7dab5a27ca)*/
+	char m_padding[7];		   /* Padding */
+};
+
+/* soft kernel data section, used by classic driver */
+struct soft_kernel {
+	/** Prefix Syntax:
+	 *  mpo - member, pointer, offset
+	 *  This variable represents a zero terminated string
+	 *  that is offset from the beginning of the section.
+	 *  The pointer to access the string is initialized as follows:
+	 *  char * pCharString = (address_of_section) + (mpo value)
+	 */
+	uint32_t mpo_name;	   /* Name of the soft kernel */
+	uint32_t m_image_offset;   /* Image offset */
+	uint32_t m_image_size;	   /* Image size */
+	uint32_t mpo_version;	   /* Version */
+	uint32_t mpo_md5_value;	   /* MD5 checksum */
+	uint32_t mpo_symbol_name;  /* Symbol name */
+	uint32_t m_num_instances;  /* Number of instances */
+	uint8_t padding[36];	   /* Reserved for future use */
+	uint8_t reservedExt[16];   /* Reserved for future extended data */
+};
+
+enum CHECKSUM_TYPE {
+	CST_UNKNOWN = 0,
+	CST_SDBM = 1,
+	CST_LAST
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH V2 XRT Alveo 3/6] fpga: xrt: core infrastructure for xrt-lib module
  2020-12-17  7:50 [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 1/6] Documentation: fpga: Add a document describing XRT Alveo drivers Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 2/6] fpga: xrt: infrastructure support for xmgmt driver Sonal Santan
@ 2020-12-17  7:50 ` Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 4/6] fpga: xrt: XRT Alveo management physical function driver Sonal Santan
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Sonal Santan @ 2020-12-17  7:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree, trix, mdf

From: Sonal Santan <sonal.santan@xilinx.com>

Add xrt-lib kernel module infrastructure code which defines APIs
for working with device nodes, iteration and lookup of platform
devices, common interfaces for platform devices, plumbing of
function call and ioctls between platform devices and parent
partitions.

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/xrt/include/metadata.h          |  184 ++++
 drivers/fpga/xrt/include/parent.h            |  103 ++
 drivers/fpga/xrt/include/partition.h         |   33 +
 drivers/fpga/xrt/include/subdev.h            |  333 ++++++
 drivers/fpga/xrt/lib/subdevs/xrt-partition.c |  261 +++++
 drivers/fpga/xrt/lib/xrt-cdev.c              |  234 ++++
 drivers/fpga/xrt/lib/xrt-main.c              |  270 +++++
 drivers/fpga/xrt/lib/xrt-main.h              |   46 +
 drivers/fpga/xrt/lib/xrt-subdev.c            | 1007 ++++++++++++++++++
 9 files changed, 2471 insertions(+)
 create mode 100644 drivers/fpga/xrt/include/metadata.h
 create mode 100644 drivers/fpga/xrt/include/parent.h
 create mode 100644 drivers/fpga/xrt/include/partition.h
 create mode 100644 drivers/fpga/xrt/include/subdev.h
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-partition.c
 create mode 100644 drivers/fpga/xrt/lib/xrt-cdev.c
 create mode 100644 drivers/fpga/xrt/lib/xrt-main.c
 create mode 100644 drivers/fpga/xrt/lib/xrt-main.h
 create mode 100644 drivers/fpga/xrt/lib/xrt-subdev.c

diff --git a/drivers/fpga/xrt/include/metadata.h b/drivers/fpga/xrt/include/metadata.h
new file mode 100644
index 000000000000..f445bfc279d2
--- /dev/null
+++ b/drivers/fpga/xrt/include/metadata.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Xilinx Alveo FPGA Test Leaf Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef _XRT_METADATA_H
+#define _XRT_METADATA_H
+
+#include <linux/device.h>
+#include <linux/vmalloc.h>
+#include <linux/uuid.h>
+
+#define PROP_COMPATIBLE "compatible"
+#define PROP_PF_NUM "pcie_physical_function"
+#define PROP_BAR_IDX "pcie_bar_mapping"
+#define PROP_IO_OFFSET "reg"
+#define PROP_INTERRUPTS "interrupts"
+#define PROP_INTERFACE_UUID "interface_uuid"
+#define PROP_LOGIC_UUID "logic_uuid"
+#define PROP_VERSION_MAJOR "firmware_version_major"
+
+#define PROP_HWICAP "axi_hwicap"
+#define PROP_PDI_CONFIG "pdi_config_mem"
+
+#define NODE_ENDPOINTS "addressable_endpoints"
+#define INTERFACES_PATH "/interfaces"
+
+#define NODE_FIRMWARE "firmware"
+#define NODE_INTERFACES "interfaces"
+#define NODE_PARTITION_INFO "partition_info"
+
+#define NODE_FLASH "ep_card_flash_program_00"
+#define NODE_XVC_PUB "ep_debug_bscan_user_00"
+#define NODE_XVC_PRI "ep_debug_bscan_mgmt_00"
+#define NODE_SYSMON "ep_cmp_sysmon_00"
+#define NODE_AF_BLP_CTRL_MGMT "ep_firewall_blp_ctrl_mgmt_00"
+#define NODE_AF_BLP_CTRL_USER "ep_firewall_blp_ctrl_user_00"
+#define NODE_AF_CTRL_MGMT "ep_firewall_ctrl_mgmt_00"
+#define NODE_AF_CTRL_USER "ep_firewall_ctrl_user_00"
+#define NODE_AF_CTRL_DEBUG "ep_firewall_ctrl_debug_00"
+#define NODE_AF_DATA_H2C "ep_firewall_data_h2c_00"
+#define NODE_AF_DATA_C2H "ep_firewall_data_c2h_00"
+#define NODE_AF_DATA_P2P "ep_firewall_data_p2p_00"
+#define NODE_AF_DATA_M2M "ep_firewall_data_m2m_00"
+#define NODE_CMC_REG "ep_cmc_regmap_00"
+#define NODE_CMC_RESET "ep_cmc_reset_00"
+#define NODE_CMC_MUTEX "ep_cmc_mutex_00"
+#define NODE_CMC_FW_MEM "ep_cmc_firmware_mem_00"
+#define NODE_ERT_FW_MEM "ep_ert_firmware_mem_00"
+#define NODE_ERT_CQ_MGMT "ep_ert_command_queue_mgmt_00"
+#define NODE_ERT_CQ_USER "ep_ert_command_queue_user_00"
+#define NODE_MAILBOX_MGMT "ep_mailbox_mgmt_00"
+#define NODE_MAILBOX_USER "ep_mailbox_user_00"
+#define NODE_GATE_PLP "ep_pr_isolate_plp_00"
+#define NODE_GATE_ULP "ep_pr_isolate_ulp_00"
+#define NODE_PCIE_MON "ep_pcie_link_mon_00"
+#define NODE_DDR_CALIB "ep_ddr_mem_calib_00"
+#define NODE_CLK_KERNEL1 "ep_aclk_kernel_00"
+#define NODE_CLK_KERNEL2 "ep_aclk_kernel_01"
+#define NODE_CLK_KERNEL3 "ep_aclk_hbm_00"
+#define NODE_KDMA_CTRL "ep_kdma_ctrl_00"
+#define NODE_FPGA_CONFIG "ep_fpga_configuration_00"
+#define NODE_ERT_SCHED "ep_ert_sched_00"
+#define NODE_XDMA "ep_xdma_00"
+#define NODE_MSIX "ep_msix_00"
+#define NODE_QDMA "ep_qdma_00"
+#define NODE_QDMA4 "ep_qdma4_00"
+#define NODE_STM "ep_stream_traffic_manager_00"
+#define NODE_STM4 "ep_stream_traffic_manager4_00"
+#define NODE_CLK_SHUTDOWN "ep_aclk_shutdown_00"
+#define NODE_ERT_BASE "ep_ert_base_address_00"
+#define NODE_ERT_RESET "ep_ert_reset_00"
+#define NODE_CLKFREQ_K1 "ep_freq_cnt_aclk_kernel_00"
+#define NODE_CLKFREQ_K2 "ep_freq_cnt_aclk_kernel_01"
+#define NODE_CLKFREQ_HBM "ep_freq_cnt_aclk_hbm_00"
+#define NODE_GAPPING "ep_gapping_demand_00"
+#define NODE_UCS_CONTROL_STATUS "ep_ucs_control_status_00"
+#define NODE_P2P "ep_p2p_00"
+#define NODE_REMAP_P2P "ep_remap_p2p_00"
+#define NODE_DDR4_RESET_GATE "ep_ddr_mem_srsr_gate_00"
+#define NODE_ADDR_TRANSLATOR "ep_remap_data_c2h_00"
+#define NODE_MAILBOX_XRT "ep_mailbox_user_to_ert_00"
+#define NODE_PMC_INTR   "ep_pmc_intr_00"
+#define NODE_PMC_MUX    "ep_pmc_mux_00"
+
+/* driver defined endpoints */
+#define NODE_VSEC "drv_ep_vsec_00"
+#define NODE_VSEC_GOLDEN "drv_ep_vsec_golden_00"
+#define NODE_BLP_ROM "drv_ep_blp_rom_00"
+#define NODE_MAILBOX_VSEC "ep_mailbox_vsec_00"
+#define NODE_PLAT_INFO "drv_ep_platform_info_mgmt_00"
+#define NODE_TEST "drv_ep_test_00"
+#define NODE_MGMT_MAIN "drv_ep_mgmt_main_00"
+#define NODE_FLASH_VSEC "drv_ep_card_flash_program_00"
+#define NODE_GOLDEN_VER "drv_ep_golden_ver_00"
+#define NODE_PARTITION_INFO_BLP "partition_info_0"
+#define NODE_PARTITION_INFO_PLP "partition_info_1"
+
+#define NODE_DDR_SRSR "drv_ep_ddr_srsr"
+#define REGMAP_DDR_SRSR "drv_ddr_srsr"
+
+#define PROP_OFFSET "drv_offset"
+#define PROP_CLK_FREQ "drv_clock_frequency"
+#define PROP_CLK_CNT "drv_clock_frequency_counter"
+#define	PROP_VBNV "vbnv"
+#define	PROP_VROM "vrom"
+#define PROP_PARTITION_LEVEL "partition_level"
+
+struct xrt_md_endpoint {
+	const char	*ep_name;
+	u32		bar;
+	long		bar_off;
+	ulong		size;
+	char		*regmap;
+	char		*regmap_ver;
+};
+
+/* Note: res_id is defined by leaf driver and must start with 0. */
+struct xrt_iores_map {
+	char		*res_name;
+	int		res_id;
+};
+
+static inline int xrt_md_res_name2id(const struct xrt_iores_map *res_map,
+	int entry_num, const char *res_name)
+{
+	int i;
+
+	BUG_ON(res_name == NULL);
+	for (i = 0; i < entry_num; i++) {
+		if (!strcmp(res_name, res_map->res_name))
+			return res_map->res_id;
+		res_map++;
+	}
+	return -1;
+}
+
+static inline const char *
+xrt_md_res_id2name(const struct xrt_iores_map *res_map, int entry_num, int id)
+{
+	int i;
+
+	BUG_ON(id > entry_num);
+	for (i = 0; i < entry_num; i++) {
+		if (res_map->res_id == id)
+			return res_map->res_name;
+		res_map++;
+	}
+	return NULL;
+}
+
+long xrt_md_size(struct device *dev, const char *blob);
+int xrt_md_create(struct device *dev, char **blob);
+int xrt_md_add_endpoint(struct device *dev, char *blob,
+	struct xrt_md_endpoint *ep);
+int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
+	char *regmap_name);
+int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
+	const char *regmap_name, const char *prop, const void **val, int *size);
+int xrt_md_set_prop(struct device *dev, char *blob, const char *ep_name,
+	const char *regmap_name, const char *prop, const void *val, int size);
+int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
+	const char *ep_name, const char *regmap_name, const char *new_ep_name);
+int xrt_md_copy_all_eps(struct device *dev, char  *blob, const char *src_blob);
+int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
+	const char *ep_name,  const char *regmap_name,
+	char **next_ep, char **next_regmap);
+int xrt_md_get_compatible_epname(struct device *dev, const char *blob,
+	const char *regmap_name, const char **ep_name);
+int xrt_md_get_epname_pointer(struct device *dev, const char *blob,
+	const char *ep_name, const char *regmap_name, const char **epname);
+void xrt_md_pack(struct device *dev, char *blob);
+char *xrt_md_dup(struct device *dev, const char *blob);
+int xrt_md_get_intf_uuids(struct device *dev, const char *blob,
+	u32 *num_uuids, uuid_t *intf_uuids);
+int xrt_md_check_uuids(struct device *dev, const char *blob, char *subset_blob);
+int xrt_md_uuid_strtoid(struct device *dev, const char *uuidstr, uuid_t *uuid);
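+
+/*
+ * Building a minimal metadata blob (illustrative sketch only; "bar_idx"
+ * is a placeholder value for the target PCIe BAR):
+ *
+ *	struct xrt_md_endpoint ep = { .ep_name = NODE_VSEC };
+ *	char *blob = NULL;
+ *
+ *	xrt_md_create(dev, &blob);
+ *	xrt_md_add_endpoint(dev, blob, &ep);
+ *	xrt_md_set_prop(dev, blob, NODE_VSEC, NULL, PROP_BAR_IDX,
+ *			&bar_idx, sizeof(bar_idx));
+ *	... use blob, then free it when done ...
+ */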
+
+#endif
diff --git a/drivers/fpga/xrt/include/parent.h b/drivers/fpga/xrt/include/parent.h
new file mode 100644
index 000000000000..8e921a78ea2d
--- /dev/null
+++ b/drivers/fpga/xrt/include/parent.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_PARENT_H_
+#define	_XRT_PARENT_H_
+
+#include "subdev.h"
+#include "partition.h"
+
+/*
+ * Parent IOCTL calls.
+ */
+enum xrt_parent_ioctl_cmd {
+	/* Leaf actions. */
+	XRT_PARENT_GET_LEAF = 0,
+	XRT_PARENT_PUT_LEAF,
+	XRT_PARENT_GET_LEAF_HOLDERS,
+
+	/* Partition actions. */
+	XRT_PARENT_CREATE_PARTITION,
+	XRT_PARENT_REMOVE_PARTITION,
+	XRT_PARENT_LOOKUP_PARTITION,
+	XRT_PARENT_WAIT_PARTITION_BRINGUP,
+
+	/* Event actions. */
+	XRT_PARENT_ADD_EVENT_CB,
+	XRT_PARENT_REMOVE_EVENT_CB,
+	XRT_PARENT_ASYNC_BOARDCAST_EVENT,
+
+	/* Device info. */
+	XRT_PARENT_GET_RESOURCE,
+	XRT_PARENT_GET_ID,
+
+	/* Misc. */
+	XRT_PARENT_HOT_RESET,
+	XRT_PARENT_HWMON,
+};
+
+struct xrt_parent_ioctl_get_leaf {
+	struct platform_device *xpigl_pdev; /* caller's pdev */
+	xrt_subdev_match_t xpigl_match_cb;
+	void *xpigl_match_arg;
+	struct platform_device *xpigl_leaf; /* target leaf pdev */
+};
+
+struct xrt_parent_ioctl_put_leaf {
+	struct platform_device *xpipl_pdev; /* caller's pdev */
+	struct platform_device *xpipl_leaf; /* target's pdev */
+};
+
+struct xrt_parent_ioctl_lookup_partition {
+	struct platform_device *xpilp_pdev; /* caller's pdev */
+	xrt_subdev_match_t xpilp_match_cb;
+	void *xpilp_match_arg;
+	int xpilp_part_inst;
+};
+
+struct xrt_parent_ioctl_evt_cb {
+	struct platform_device *xevt_pdev; /* caller's pdev */
+	xrt_subdev_match_t xevt_match_cb;
+	void *xevt_match_arg;
+	xrt_event_cb_t xevt_cb;
+	void *xevt_hdl;
+};
+
+struct xrt_parent_ioctl_async_broadcast_evt {
+	struct platform_device *xaevt_pdev; /* caller's pdev */
+	enum xrt_events xaevt_event;
+	xrt_async_broadcast_event_cb_t xaevt_cb;
+	void *xaevt_arg;
+};
+
+struct xrt_parent_ioctl_get_holders {
+	struct platform_device *xpigh_pdev; /* caller's pdev */
+	char *xpigh_holder_buf;
+	size_t xpigh_holder_buf_len;
+};
+
+struct xrt_parent_ioctl_get_res {
+	struct resource *xpigr_res;
+};
+
+struct xrt_parent_ioctl_get_id {
+	unsigned short  xpigi_vendor_id;
+	unsigned short  xpigi_device_id;
+	unsigned short  xpigi_sub_vendor_id;
+	unsigned short  xpigi_sub_device_id;
+};
+
+struct xrt_parent_ioctl_hwmon {
+	bool xpih_register;
+	const char *xpih_name;
+	void *xpih_drvdata;
+	const struct attribute_group **xpih_groups;
+	struct device *xpih_hwmon_dev;
+};
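+
+/*
+ * A leaf reaches other leaves through its parent partition, e.g. (sketch
+ * only; "match_cb" and "match_arg" are caller-provided):
+ *
+ *	struct xrt_parent_ioctl_get_leaf get = {
+ *		.xpigl_pdev = pdev,
+ *		.xpigl_match_cb = match_cb,
+ *		.xpigl_match_arg = match_arg,
+ *	};
+ *
+ *	if (!xrt_subdev_parent_ioctl(pdev, XRT_PARENT_GET_LEAF, &get))
+ *		... use get.xpigl_leaf, then release it with
+ *		    XRT_PARENT_PUT_LEAF ...
+ */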
+
+#endif	/* _XRT_PARENT_H_ */
diff --git a/drivers/fpga/xrt/include/partition.h b/drivers/fpga/xrt/include/partition.h
new file mode 100644
index 000000000000..b4f4ea639234
--- /dev/null
+++ b/drivers/fpga/xrt/include/partition.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_PARTITION_H_
+#define	_XRT_PARTITION_H_
+
+#include "subdev.h"
+
+/*
+ * Partition driver IOCTL calls.
+ */
+enum xrt_partition_ioctl_cmd {
+	XRT_PARTITION_GET_LEAF = 0,
+	XRT_PARTITION_PUT_LEAF,
+	XRT_PARTITION_INIT_CHILDREN,
+	XRT_PARTITION_FINI_CHILDREN,
+	XRT_PARTITION_EVENT,
+};
+
+struct xrt_partition_ioctl_event {
+	enum xrt_events xpie_evt;
+	struct xrt_parent_ioctl_evt_cb *xpie_cb;
+};
+
+extern int xrt_subdev_parent_ioctl(struct platform_device *pdev,
+	u32 cmd, void *arg);
+
+#endif	/* _XRT_PARTITION_H_ */
diff --git a/drivers/fpga/xrt/include/subdev.h b/drivers/fpga/xrt/include/subdev.h
new file mode 100644
index 000000000000..65ecbd9c596b
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev.h
@@ -0,0 +1,333 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_SUBDEV_H_
+#define	_XRT_SUBDEV_H_
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/pci.h>
+#include <linux/libfdt_env.h>
+#include "libfdt.h"
+
+/*
+ * Every subdev driver should have an ID for others to refer to it.
+ * There can be an unlimited number of instances of a subdev driver. A
+ * <subdev_id, subdev_instance> tuple should be a unique identification of
+ * a specific instance of a subdev driver.
+ * NOTE: PLEASE do not change the order of IDs. Sub devices in the same
+ * partition are initialized by this order.
+ */
+enum xrt_subdev_id {
+	XRT_SUBDEV_PART = 0,
+	XRT_SUBDEV_VSEC,
+	XRT_SUBDEV_VSEC_GOLDEN,
+	XRT_SUBDEV_GPIO,
+	XRT_SUBDEV_AXIGATE,
+	XRT_SUBDEV_ICAP,
+	XRT_SUBDEV_TEST,
+	XRT_SUBDEV_MGMT_MAIN,
+	XRT_SUBDEV_QSPI,
+	XRT_SUBDEV_MAILBOX,
+	XRT_SUBDEV_CMC,
+	XRT_SUBDEV_CALIB,
+	XRT_SUBDEV_CLKFREQ,
+	XRT_SUBDEV_CLOCK,
+	XRT_SUBDEV_SRSR,
+	XRT_SUBDEV_UCS,
+	XRT_SUBDEV_NUM,
+};
+
+/*
+ * If populated by subdev driver, parent will handle the mechanics of
+ * char device (un)registration.
+ */
+enum xrt_subdev_file_mode {
+	// Infra create cdev, default file name
+	XRT_SUBDEV_FILE_DEFAULT = 0,
+	// Infra create cdev, need to encode inst num in file name
+	XRT_SUBDEV_FILE_MULTI_INST,
+	// No auto creation of cdev by infra, leaf handles it by itself
+	XRT_SUBDEV_FILE_NO_AUTO,
+};
+struct xrt_subdev_file_ops {
+	const struct file_operations xsf_ops;
+	dev_t xsf_dev_t;
+	const char *xsf_dev_name;
+	enum xrt_subdev_file_mode xsf_mode;
+};
+
+/*
+ * Subdev driver callbacks populated by subdev driver.
+ */
+struct xrt_subdev_drv_ops {
+	/*
+	 * Per driver module callback. Don't take any arguments.
+	 * If defined these are called as part of driver (un)registration.
+	 */
+	int (*xsd_post_init)(void);
+	void (*xsd_pre_exit)(void);
+
+	/*
+	 * Per driver instance callback. The pdev points to the instance.
+	 * If defined these are called by other leaf drivers.
+	 * Note that root driver may call into xsd_ioctl of a partition driver.
+	 */
+	int (*xsd_ioctl)(struct platform_device *pdev, u32 cmd, void *arg);
+};
+
+/*
+ * Defined and populated by subdev driver, exported as driver_data in
+ * struct platform_device_id.
+ */
+struct xrt_subdev_drvdata {
+	struct xrt_subdev_file_ops xsd_file_ops;
+	struct xrt_subdev_drv_ops xsd_dev_ops;
+};
+
+/*
+ * Partially initialized by the parent driver, then passed in as the subdev
+ * driver's platform data when creating a subdev driver instance by calling
+ * the platform device register API (platform_device_register_data() or
+ * the like).
+ *
+ * Once device register API returns, platform driver framework makes a copy of
+ * this buffer and maintains its life cycle. The content of the buffer is
+ * completely owned by subdev driver.
+ *
+ * Thus, parent driver should be very careful when it touches this buffer
+ * again once it's handed over to subdev driver. And the data structure
+ * should not contain pointers pointing to buffers that is managed by
+ * other or parent drivers since it could have been freed before platform
+ * data buffer is freed by platform driver framework.
+ */
+typedef int (*xrt_subdev_parent_cb_t)(struct device *, void *, u32, void *);
+struct xrt_subdev_platdata {
+	/*
+	 * Parent callback. Should always be defined so that the subdev
+	 * driver can call into its parent.
+	 */
+	xrt_subdev_parent_cb_t xsp_parent_cb;
+	void *xsp_parent_cb_arg;
+
+	/* Something to associate w/ root for msg printing. */
+	const char *xsp_root_name;
+
+	/*
+	 * Char dev support for this subdev instance.
+	 * Initialized by subdev driver.
+	 */
+	struct cdev xsp_cdev;
+	struct device *xsp_sysdev;
+	struct mutex xsp_devnode_lock;
+	struct completion xsp_devnode_comp;
+	int xsp_devnode_ref;
+	bool xsp_devnode_online;
+	bool xsp_devnode_excl;
+
+	/*
+	 * Subdev-driver-specific init data. The buffer should be embedded
+	 * in this data structure after the dtb, so that it can be freed
+	 * together with the platform data.
+	 */
+	loff_t xsp_priv_off; /* Offset into this platform data buffer. */
+	size_t xsp_priv_len;
+
+	/*
+	 * Populated by the parent driver to describe the device tree for
+	 * the subdev driver to handle. Should always be the last member
+	 * since it is of variable length.
+	 */
+	char xsp_dtb[sizeof(struct fdt_header)];
+};
+
+/*
+ * This struct defines the endpoints that belong to the same subdevice.
+ */
+struct xrt_subdev_ep_names {
+	const char *ep_name;
+	const char *regmap_name;
+};
+
+struct xrt_subdev_endpoints {
+	struct xrt_subdev_ep_names *xse_names;
+	/* minimum number of endpoints to support the subdevice */
+	u32 xse_min_ep;
+};
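+
+/*
+ * Illustrative sketch (hypothetical "foo" names): a leaf driver exports a
+ * null-terminated array describing the dtb endpoints it binds to, e.g.
+ *
+ *	static struct xrt_subdev_ep_names xrt_foo_ep_names[] = {
+ *		{ .ep_name = "ep_foo_00" },
+ *		{ NULL },
+ *	};
+ *	struct xrt_subdev_endpoints xrt_foo_endpoints[] = {
+ *		{ .xse_names = xrt_foo_ep_names, .xse_min_ep = 1 },
+ *		{ NULL },
+ *	};
+ *
+ * The partition driver only creates the leaf once at least xse_min_ep of
+ * the listed endpoints are found in the partition dtb.
+ */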
+
+/*
+ * It manages a list of xrt_subdevs for root and partition drivers.
+ */
+struct xrt_subdev_pool {
+	struct list_head xpool_dev_list;
+	struct device *xpool_owner;
+	struct mutex xpool_lock;
+	bool xpool_closing;
+};
+
+typedef bool (*xrt_subdev_match_t)(enum xrt_subdev_id,
+	struct platform_device *, void *);
+#define	XRT_SUBDEV_MATCH_PREV	((xrt_subdev_match_t)-1)
+#define	XRT_SUBDEV_MATCH_NEXT	((xrt_subdev_match_t)-2)
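+
+/*
+ * When one of the sentinels above is passed in place of a real match
+ * callback, the match argument is treated as the current platform device
+ * and the pool is walked relative to it (from the tail or head when the
+ * argument is NULL).
+ */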
+
+/* All subdev drivers should use the common routines below to print messages. */
+#define	DEV(pdev)	(&(pdev)->dev)
+#define	DEV_PDATA(pdev)					\
+	((struct xrt_subdev_platdata *)dev_get_platdata(DEV(pdev)))
+#define	DEV_DRVDATA(pdev)				\
+	((struct xrt_subdev_drvdata *)			\
+	platform_get_device_id(pdev)->driver_data)
+#define	FMT_PRT(prt_fn, pdev, fmt, args...)		\
+	prt_fn(DEV(pdev), "%s %s: "fmt,			\
+	DEV_PDATA(pdev)->xsp_root_name, __func__, ##args)
+#define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
+#define xrt_warn(pdev, fmt, args...) FMT_PRT(dev_warn, pdev, fmt, ##args)
+#define xrt_info(pdev, fmt, args...) FMT_PRT(dev_info, pdev, fmt, ##args)
+#define xrt_dbg(pdev, fmt, args...) FMT_PRT(dev_dbg, pdev, fmt, ##args)
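+
+/*
+ * Illustrative usage (not part of the interface): a leaf driver would log
+ * via xrt_info(pdev, "probed, instance %d", pdev->id); FMT_PRT prefixes
+ * the message with the root device name and __func__.
+ */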
+
+/*
+ * Event notification.
+ */
+enum xrt_events {
+	XRT_EVENT_TEST = 0, // for testing
+	/*
+	 * Events related to specific subdev
+	 * Callback arg: struct xrt_event_arg_subdev
+	 */
+	XRT_EVENT_POST_CREATION,
+	XRT_EVENT_PRE_REMOVAL,
+	/*
+	 * Events related to change of the whole board
+	 * Callback arg: <none>
+	 */
+	XRT_EVENT_PRE_HOT_RESET,
+	XRT_EVENT_POST_HOT_RESET,
+	XRT_EVENT_PRE_GATE_CLOSE,
+	XRT_EVENT_POST_GATE_OPEN,
+	XRT_EVENT_POST_ATTACH,
+	XRT_EVENT_PRE_DETACH,
+};
+
+typedef int (*xrt_event_cb_t)(struct platform_device *pdev,
+	enum xrt_events evt, void *arg);
+typedef void (*xrt_async_broadcast_event_cb_t)(struct platform_device *pdev,
+	enum xrt_events evt, void *arg, bool success);
+
+struct xrt_event_arg_subdev {
+	enum xrt_subdev_id xevt_subdev_id;
+	int xevt_subdev_instance;
+};
+
+/*
+ * Flags in return value from event callback.
+ */
+/* Done with event handling, continue waiting for the next one */
+#define	XRT_EVENT_CB_CONTINUE	0x0
+/* Done with event handling, stop waiting for the next one */
+#define	XRT_EVENT_CB_STOP	0x1
+/* Error processing event */
+#define	XRT_EVENT_CB_ERR	0x2
+
+/*
+ * Subdev pool API for root and partition drivers only.
+ */
+extern void xrt_subdev_pool_init(struct device *dev,
+	struct xrt_subdev_pool *spool);
+extern int xrt_subdev_pool_fini(struct xrt_subdev_pool *spool);
+extern int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
+	xrt_subdev_match_t match, void *arg, struct device *holder_dev,
+	struct platform_device **pdevp);
+extern int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, struct device *holder_dev);
+extern int xrt_subdev_pool_add(struct xrt_subdev_pool *spool,
+	enum xrt_subdev_id id, xrt_subdev_parent_cb_t pcb,
+	void *pcb_arg, char *dtb);
+extern int xrt_subdev_pool_del(struct xrt_subdev_pool *spool,
+	enum xrt_subdev_id id, int instance);
+extern int xrt_subdev_pool_event(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, xrt_subdev_match_t match, void *arg,
+	xrt_event_cb_t xevt_cb, enum xrt_events evt);
+extern ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, char *buf, size_t len);
+/*
+ * For leaf drivers.
+ */
+extern bool xrt_subdev_has_epname(struct platform_device *pdev, const char *nm);
+extern struct platform_device *xrt_subdev_get_leaf(
+	struct platform_device *pdev, xrt_subdev_match_t cb, void *arg);
+extern struct platform_device *xrt_subdev_get_leaf_by_id(
+	struct platform_device *pdev, enum xrt_subdev_id id, int instance);
+extern struct platform_device *xrt_subdev_get_leaf_by_epname(
+	struct platform_device *pdev, const char *name);
+extern int xrt_subdev_put_leaf(struct platform_device *pdev,
+	struct platform_device *leaf);
+extern int xrt_subdev_create_partition(struct platform_device *pdev,
+	char *dtb);
+extern int xrt_subdev_destroy_partition(struct platform_device *pdev,
+	int instance);
+extern int xrt_subdev_lookup_partition(
+	struct platform_device *pdev, xrt_subdev_match_t cb, void *arg);
+extern int xrt_subdev_wait_for_partition_bringup(struct platform_device *pdev);
+extern void *xrt_subdev_add_event_cb(struct platform_device *pdev,
+	xrt_subdev_match_t match, void *match_arg, xrt_event_cb_t cb);
+extern void xrt_subdev_remove_event_cb(
+	struct platform_device *pdev, void *hdl);
+extern int xrt_subdev_ioctl(struct platform_device *tgt, u32 cmd, void *arg);
+extern int xrt_subdev_broadcast_event(struct platform_device *pdev,
+	enum xrt_events evt);
+extern int xrt_subdev_broadcast_event_async(struct platform_device *pdev,
+	enum xrt_events evt, xrt_async_broadcast_event_cb_t cb, void *arg);
+extern void xrt_subdev_hot_reset(struct platform_device *pdev);
+extern void xrt_subdev_get_barres(struct platform_device *pdev,
+	struct resource **res, uint bar_idx);
+extern void xrt_subdev_get_parent_id(struct platform_device *pdev,
+	unsigned short *vendor, unsigned short *device,
+	unsigned short *subvendor, unsigned short *subdevice);
+extern struct device *xrt_subdev_register_hwmon(struct platform_device *pdev,
+	const char *name, void *drvdata, const struct attribute_group **grps);
+extern void xrt_subdev_unregister_hwmon(struct platform_device *pdev,
+	struct device *hwmon);
+
+extern int xrt_subdev_register_external_driver(enum xrt_subdev_id id,
+	struct platform_driver *drv, struct xrt_subdev_endpoints *eps);
+extern void xrt_subdev_unregister_external_driver(enum xrt_subdev_id id);
+
+/*
+ * Char dev APIs.
+ */
+static inline bool xrt_devnode_enabled(struct xrt_subdev_drvdata *drvdata)
+{
+	return drvdata && drvdata->xsd_file_ops.xsf_ops.open != NULL;
+}
+extern int xrt_devnode_create(struct platform_device *pdev,
+	const char *file_name, const char *inst_name);
+extern int xrt_devnode_destroy(struct platform_device *pdev);
+extern struct platform_device *xrt_devnode_open_excl(struct inode *inode);
+extern struct platform_device *xrt_devnode_open(struct inode *inode);
+extern void xrt_devnode_close(struct inode *inode);
+
+/* Helpers. */
+static inline void xrt_memcpy_fromio(void *buf, void __iomem *iomem, u32 size)
+{
+	int i;
+
+	BUG_ON(size & 0x3);
+	for (i = 0; i < size / 4; i++)
+		((u32 *)buf)[i] = ioread32((char *)(iomem) + sizeof(u32) * i);
+}
+static inline void xrt_memcpy_toio(void __iomem *iomem, void *buf, u32 size)
+{
+	int i;
+
+	BUG_ON(size & 0x3);
+	for (i = 0; i < size / 4; i++)
+		iowrite32(((u32 *)buf)[i], ((char *)(iomem) + sizeof(u32) * i));
+}
+
+#endif	/* _XRT_SUBDEV_H_ */
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-partition.c b/drivers/fpga/xrt/lib/subdevs/xrt-partition.c
new file mode 100644
index 000000000000..17acec11993b
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-partition.c
@@ -0,0 +1,261 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Partition Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include "subdev.h"
+#include "parent.h"
+#include "partition.h"
+#include "metadata.h"
+#include "../xrt-main.h"
+
+#define	XRT_PART "xrt_partition"
+
+struct xrt_partition {
+	struct platform_device *pdev;
+	struct xrt_subdev_pool leaves;
+	bool leaves_created;
+	struct mutex lock;
+};
+
+static int xrt_part_parent_cb(struct device *dev, void *parg,
+	u32 cmd, void *arg)
+{
+	int rc;
+	struct platform_device *pdev =
+		container_of(dev, struct platform_device, dev);
+	struct xrt_partition *xp = (struct xrt_partition *)parg;
+
+	switch (cmd) {
+	case XRT_PARENT_GET_LEAF_HOLDERS: {
+		struct xrt_parent_ioctl_get_holders *holders =
+			(struct xrt_parent_ioctl_get_holders *)arg;
+		rc = xrt_subdev_pool_get_holders(&xp->leaves,
+			holders->xpigh_pdev, holders->xpigh_holder_buf,
+			holders->xpigh_holder_buf_len);
+		break;
+	}
+	default:
+		/* Forward parent call to root. */
+		rc = xrt_subdev_parent_ioctl(pdev, cmd, arg);
+		break;
+	}
+
+	return rc;
+}
+
+static int xrt_part_create_leaves(struct xrt_partition *xp)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(xp->pdev);
+	enum xrt_subdev_id did;
+	struct xrt_subdev_endpoints *eps = NULL;
+	int ep_count = 0, i, ret = 0, failed = 0;
+	long mlen;
+	char *dtb, *part_dtb = NULL;
+	const char *ep_name;
+
+
+	mutex_lock(&xp->lock);
+
+	if (xp->leaves_created) {
+		mutex_unlock(&xp->lock);
+		return -EEXIST;
+	}
+
+	xrt_info(xp->pdev, "bringing up leaves...");
+
+	/* Create all leaves based on dtb. */
+	if (!pdata)
+		goto bail;
+
+	mlen = xrt_md_size(DEV(xp->pdev), pdata->xsp_dtb);
+	if (mlen <= 0) {
+		xrt_err(xp->pdev, "invalid dtb, len %ld", mlen);
+		goto bail;
+	}
+
+	part_dtb = vmalloc(mlen);
+	if (!part_dtb)
+		goto bail;
+
+	memcpy(part_dtb, pdata->xsp_dtb, mlen);
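+	/*
+	 * Walk every known subdev ID. For each endpoint set a driver
+	 * advertises, move the matching endpoints from the partition dtb
+	 * into a per-leaf dtb and create the leaf once the minimum endpoint
+	 * count is met; otherwise copy the collected endpoints back into
+	 * the partition dtb so that later drivers can still claim them.
+	 */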
+	for (did = 0; did < XRT_SUBDEV_NUM;) {
+		eps = eps ? eps + 1 : xrt_drv_get_endpoints(did);
+		if (!eps || !eps->xse_names) {
+			did++;
+			eps = NULL;
+			continue;
+		}
+		ret = xrt_md_create(DEV(xp->pdev), &dtb);
+		if (ret) {
+			xrt_err(xp->pdev, "create md failed, drv %s",
+				xrt_drv_name(did));
+			failed++;
+			continue;
+		}
+		for (i = 0; eps->xse_names[i].ep_name ||
+		    eps->xse_names[i].regmap_name; i++) {
+			if (!eps->xse_names[i].ep_name) {
+				ret = xrt_md_get_compatible_epname(
+					DEV(xp->pdev), part_dtb,
+					eps->xse_names[i].regmap_name,
+					&ep_name);
+				if (ret)
+					continue;
+			} else
+				ep_name = (char *)eps->xse_names[i].ep_name;
+			ret = xrt_md_copy_endpoint(DEV(xp->pdev),
+				dtb, part_dtb, ep_name,
+				(char *)eps->xse_names[i].regmap_name, NULL);
+			if (ret)
+				continue;
+			xrt_md_del_endpoint(DEV(xp->pdev), part_dtb, ep_name,
+				(char *)eps->xse_names[i].regmap_name);
+			ep_count++;
+		}
+		if (ep_count >= eps->xse_min_ep) {
+			ret = xrt_subdev_pool_add(&xp->leaves, did,
+				xrt_part_parent_cb, xp, dtb);
+			eps = NULL;
+			if (ret < 0) {
+				failed++;
+				xrt_err(xp->pdev, "failed to create %s: %d",
+					xrt_drv_name(did), ret);
+			}
+		} else if (ep_count > 0) {
+			xrt_md_copy_all_eps(DEV(xp->pdev), part_dtb, dtb);
+		}
+		vfree(dtb);
+		ep_count = 0;
+	}
+
+	xp->leaves_created = true;
+
+bail:
+	mutex_unlock(&xp->lock);
+
+	if (part_dtb)
+		vfree(part_dtb);
+
+	return failed == 0 ? 0 : -ECHILD;
+}
+
+static int xrt_part_remove_leaves(struct xrt_partition *xp)
+{
+	int rc;
+
+	mutex_lock(&xp->lock);
+
+	if (!xp->leaves_created) {
+		mutex_unlock(&xp->lock);
+		return 0;
+	}
+
+	xrt_info(xp->pdev, "tearing down leaves...");
+	rc = xrt_subdev_pool_fini(&xp->leaves);
+	xp->leaves_created = false;
+
+	mutex_unlock(&xp->lock);
+
+	return rc;
+}
+
+static int xrt_part_probe(struct platform_device *pdev)
+{
+	struct xrt_partition *xp;
+
+	xrt_info(pdev, "probing...");
+
+	xp = devm_kzalloc(&pdev->dev, sizeof(*xp), GFP_KERNEL);
+	if (!xp)
+		return -ENOMEM;
+
+	xp->pdev = pdev;
+	mutex_init(&xp->lock);
+	xrt_subdev_pool_init(DEV(pdev), &xp->leaves);
+	platform_set_drvdata(pdev, xp);
+
+	return 0;
+}
+
+static int xrt_part_remove(struct platform_device *pdev)
+{
+	struct xrt_partition *xp = platform_get_drvdata(pdev);
+
+	xrt_info(pdev, "leaving...");
+	return xrt_part_remove_leaves(xp);
+}
+
+static int xrt_part_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	int rc = 0;
+	struct xrt_partition *xp = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_PARTITION_GET_LEAF: {
+		struct xrt_parent_ioctl_get_leaf *get_leaf =
+			(struct xrt_parent_ioctl_get_leaf *)arg;
+
+		rc = xrt_subdev_pool_get(&xp->leaves, get_leaf->xpigl_match_cb,
+			get_leaf->xpigl_match_arg, DEV(get_leaf->xpigl_pdev),
+			&get_leaf->xpigl_leaf);
+		break;
+	}
+	case XRT_PARTITION_PUT_LEAF: {
+		struct xrt_parent_ioctl_put_leaf *put_leaf =
+			(struct xrt_parent_ioctl_put_leaf *)arg;
+
+		rc = xrt_subdev_pool_put(&xp->leaves, put_leaf->xpipl_leaf,
+			DEV(put_leaf->xpipl_pdev));
+		break;
+	}
+	case XRT_PARTITION_INIT_CHILDREN:
+		rc = xrt_part_create_leaves(xp);
+		break;
+	case XRT_PARTITION_FINI_CHILDREN:
+		rc = xrt_part_remove_leaves(xp);
+		break;
+	case XRT_PARTITION_EVENT: {
+		struct xrt_partition_ioctl_event *evt =
+			(struct xrt_partition_ioctl_event *)arg;
+		struct xrt_parent_ioctl_evt_cb *cb = evt->xpie_cb;
+
+		rc = xrt_subdev_pool_event(&xp->leaves, cb->xevt_pdev,
+			cb->xevt_match_cb, cb->xevt_match_arg, cb->xevt_cb,
+			evt->xpie_evt);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unknown IOCTL cmd %d", cmd);
+		rc = -EINVAL;
+		break;
+	}
+	return rc;
+}
+
+struct xrt_subdev_drvdata xrt_part_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_part_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_part_id_table[] = {
+	{ XRT_PART, (kernel_ulong_t)&xrt_part_data },
+	{ },
+};
+
+struct platform_driver xrt_partition_driver = {
+	.driver	= {
+		.name    = XRT_PART,
+	},
+	.probe   = xrt_part_probe,
+	.remove  = xrt_part_remove,
+	.id_table = xrt_part_id_table,
+};
diff --git a/drivers/fpga/xrt/lib/xrt-cdev.c b/drivers/fpga/xrt/lib/xrt-cdev.c
new file mode 100644
index 000000000000..6dd3907699eb
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xrt-cdev.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA device node helper functions.
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include "subdev.h"
+
+extern struct class *xrt_class;
+
+#define	XRT_CDEV_DIR		"xfpga"
+#define	INODE2PDATA(inode)	\
+	container_of((inode)->i_cdev, struct xrt_subdev_platdata, xsp_cdev)
+#define	INODE2PDEV(inode)	\
+	to_platform_device(kobj_to_dev((inode)->i_cdev->kobj.parent))
+#define	CDEV_NAME(sysdev)	(strchr((sysdev)->kobj.name, '!') + 1)
+
+/* Allow it to be accessed from cdev. */
+static void xrt_devnode_allowed(struct platform_device *pdev)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+
+	/* Allow new opens. */
+	mutex_lock(&pdata->xsp_devnode_lock);
+	pdata->xsp_devnode_online = true;
+	mutex_unlock(&pdata->xsp_devnode_lock);
+}
+
+/* Turn off access from cdev and wait for all existing users to go away. */
+static int xrt_devnode_disallowed(struct platform_device *pdev)
+{
+	int ret = 0;
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+
+	mutex_lock(&pdata->xsp_devnode_lock);
+
+	/* Prevent new opens. */
+	pdata->xsp_devnode_online = false;
+	/* Wait for existing users to close. */
+	while (!ret && pdata->xsp_devnode_ref) {
+		int rc;
+
+		mutex_unlock(&pdata->xsp_devnode_lock);
+		rc = wait_for_completion_killable(&pdata->xsp_devnode_comp);
+		mutex_lock(&pdata->xsp_devnode_lock);
+
+		if (rc == -ERESTARTSYS) {
+			/* Restore online state. */
+			pdata->xsp_devnode_online = true;
+			xrt_err(pdev, "%s is in use, ref=%d",
+				CDEV_NAME(pdata->xsp_sysdev),
+				pdata->xsp_devnode_ref);
+			ret = -EBUSY;
+		}
+	}
+
+	mutex_unlock(&pdata->xsp_devnode_lock);
+
+	return ret;
+}
+
+static struct platform_device *
+__xrt_devnode_open(struct inode *inode, bool excl)
+{
+	struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+	struct platform_device *pdev = INODE2PDEV(inode);
+	bool opened = false;
+
+	mutex_lock(&pdata->xsp_devnode_lock);
+
+	if (pdata->xsp_devnode_online) {
+		if (excl && pdata->xsp_devnode_ref) {
+			xrt_err(pdev, "%s has already been opened exclusively",
+				CDEV_NAME(pdata->xsp_sysdev));
+		} else if (!excl && pdata->xsp_devnode_excl) {
+			xrt_err(pdev, "%s has been opened exclusively",
+				CDEV_NAME(pdata->xsp_sysdev));
+		} else {
+			pdata->xsp_devnode_ref++;
+			pdata->xsp_devnode_excl = excl;
+			opened = true;
+			xrt_info(pdev, "opened %s, ref=%d",
+				CDEV_NAME(pdata->xsp_sysdev),
+				pdata->xsp_devnode_ref);
+		}
+	} else {
+		xrt_err(pdev, "%s is offline", CDEV_NAME(pdata->xsp_sysdev));
+	}
+
+	mutex_unlock(&pdata->xsp_devnode_lock);
+
+	return opened ? pdev : NULL;
+}
+
+struct platform_device *
+xrt_devnode_open_excl(struct inode *inode)
+{
+	return __xrt_devnode_open(inode, true);
+}
+
+struct platform_device *
+xrt_devnode_open(struct inode *inode)
+{
+	return __xrt_devnode_open(inode, false);
+}
+EXPORT_SYMBOL_GPL(xrt_devnode_open);
+
+void xrt_devnode_close(struct inode *inode)
+{
+	struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+	struct platform_device *pdev = INODE2PDEV(inode);
+	bool notify = false;
+
+	mutex_lock(&pdata->xsp_devnode_lock);
+
+	pdata->xsp_devnode_ref--;
+	if (pdata->xsp_devnode_ref == 0) {
+		pdata->xsp_devnode_excl = false;
+		notify = true;
+	}
+	if (!notify) {
+		xrt_info(pdev, "closed %s, ref=%d",
+			CDEV_NAME(pdata->xsp_sysdev), pdata->xsp_devnode_ref);
+	} else {
+		xrt_info(pdev, "closed %s, notifying waiter",
+			CDEV_NAME(pdata->xsp_sysdev));
+	}
+
+	mutex_unlock(&pdata->xsp_devnode_lock);
+
+	if (notify)
+		complete(&pdata->xsp_devnode_comp);
+}
+EXPORT_SYMBOL_GPL(xrt_devnode_close);
+
+static inline enum xrt_subdev_file_mode
+devnode_mode(struct xrt_subdev_drvdata *drvdata)
+{
+	return drvdata->xsd_file_ops.xsf_mode;
+}
+
+int xrt_devnode_create(struct platform_device *pdev, const char *file_name,
+	const char *inst_name)
+{
+	struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
+	struct xrt_subdev_file_ops *fops = &drvdata->xsd_file_ops;
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	struct cdev *cdevp;
+	struct device *sysdev;
+	int ret = 0;
+	char fname[256];
+
+	BUG_ON(fops->xsf_dev_t == (dev_t)-1);
+
+	mutex_init(&pdata->xsp_devnode_lock);
+	init_completion(&pdata->xsp_devnode_comp);
+
+	cdevp = &DEV_PDATA(pdev)->xsp_cdev;
+	cdev_init(cdevp, &fops->xsf_ops);
+	cdevp->owner = fops->xsf_ops.owner;
+	cdevp->dev = MKDEV(MAJOR(fops->xsf_dev_t), pdev->id);
+
+	/*
+	 * Set pdev as the parent of cdev so that pdev (and its platform
+	 * data) will not be freed before cdev is freed.
+	 */
+	cdev_set_parent(cdevp, &DEV(pdev)->kobj);
+
+	ret = cdev_add(cdevp, cdevp->dev, 1);
+	if (ret) {
+		xrt_err(pdev, "failed to add cdev: %d", ret);
+		goto failed;
+	}
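+	/*
+	 * The node is named <XRT_CDEV_DIR>/<root>/<name>[.<instance>]. The
+	 * kobject stores the '/' separators as '!' (which CDEV_NAME() above
+	 * relies on), while devtmpfs recreates the directory hierarchy
+	 * under /dev.
+	 */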
+	if (!file_name)
+		file_name = pdev->name;
+	if (!inst_name) {
+		if (devnode_mode(drvdata) == XRT_SUBDEV_FILE_MULTI_INST) {
+			snprintf(fname, sizeof(fname), "%s/%s/%s.%u",
+				XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
+				file_name, pdev->id);
+		} else {
+			snprintf(fname, sizeof(fname), "%s/%s/%s",
+				XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
+				file_name);
+		}
+	} else {
+		snprintf(fname, sizeof(fname), "%s/%s/%s.%s", XRT_CDEV_DIR,
+			DEV_PDATA(pdev)->xsp_root_name, file_name, inst_name);
+	}
+	sysdev = device_create(xrt_class, NULL, cdevp->dev, NULL, "%s", fname);
+	if (IS_ERR(sysdev)) {
+		ret = PTR_ERR(sysdev);
+		xrt_err(pdev, "failed to create device node: %d", ret);
+		goto failed;
+	}
+	pdata->xsp_sysdev = sysdev;
+
+	xrt_devnode_allowed(pdev);
+
+	xrt_info(pdev, "created (%d, %d): /dev/%s",
+		MAJOR(cdevp->dev), pdev->id, fname);
+	return 0;
+
+failed:
+	device_destroy(xrt_class, cdevp->dev);
+	cdev_del(cdevp);
+	cdevp->owner = NULL;
+	return ret;
+}
+
+int xrt_devnode_destroy(struct platform_device *pdev)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	struct cdev *cdevp = &pdata->xsp_cdev;
+	dev_t dev = cdevp->dev;
+	int rc;
+
+	BUG_ON(!cdevp->owner);
+
+	rc = xrt_devnode_disallowed(pdev);
+	if (rc)
+		return rc;
+
+	xrt_info(pdev, "removed (%d, %d): /dev/%s/%s", MAJOR(dev), MINOR(dev),
+		XRT_CDEV_DIR, CDEV_NAME(pdata->xsp_sysdev));
+	device_destroy(xrt_class, cdevp->dev);
+	pdata->xsp_sysdev = NULL;
+	cdev_del(cdevp);
+	return 0;
+}
diff --git a/drivers/fpga/xrt/lib/xrt-main.c b/drivers/fpga/xrt/lib/xrt-main.c
new file mode 100644
index 000000000000..08a8d13c34dd
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xrt-main.c
@@ -0,0 +1,270 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/module.h>
+#include "subdev.h"
+#include "xrt-main.h"
+
+#define	XRT_IPLIB_MODULE_NAME		"xrt-lib"
+#define	XRT_IPLIB_MODULE_VERSION	"4.0.0"
+#define	XRT_DRVNAME(drv)		((drv)->driver.name)
+#define	XRT_MAX_DEVICE_NODES		128
+
+struct mutex xrt_class_lock;
+struct class *xrt_class;
+
+/*
+ * A subdev driver is known to others by its ID. We map the ID to its
+ * struct platform_driver, which contains its binding name and driver/file ops.
+ * We also map it to the endpoint name in the DTB, if that is different
+ * from the driver's binding name.
+ */
+static struct xrt_drv_map {
+	enum xrt_subdev_id id;
+	struct platform_driver *drv;
+	struct xrt_subdev_endpoints *eps;
+	struct ida ida; /* manage driver instance and char dev minor */
+} xrt_drv_maps[] = {
+	{ XRT_SUBDEV_PART, &xrt_partition_driver, },
+	{ XRT_SUBDEV_VSEC, &xrt_vsec_driver, xrt_vsec_endpoints, },
+	{ XRT_SUBDEV_GPIO, &xrt_gpio_driver, xrt_gpio_endpoints,},
+	{ XRT_SUBDEV_AXIGATE, &xrt_axigate_driver, xrt_axigate_endpoints, },
+	{ XRT_SUBDEV_ICAP, &xrt_icap_driver, xrt_icap_endpoints, },
+	{ XRT_SUBDEV_CALIB, &xrt_calib_driver, xrt_calib_endpoints, },
+	{ XRT_SUBDEV_MGMT_MAIN, NULL, },
+	{ XRT_SUBDEV_CLKFREQ, &xrt_clkfreq_driver, xrt_clkfreq_endpoints, },
+	{ XRT_SUBDEV_CLOCK, &xrt_clock_driver, xrt_clock_endpoints, },
+	{ XRT_SUBDEV_UCS, &xrt_ucs_driver, xrt_ucs_endpoints, },
+};
+
+static inline struct xrt_subdev_drvdata *
+xrt_drv_map2drvdata(struct xrt_drv_map *map)
+{
+	return (struct xrt_subdev_drvdata *)map->drv->id_table[0].driver_data;
+}
+
+static struct xrt_drv_map *
+xrt_drv_find_map_by_id(enum xrt_subdev_id id)
+{
+	int i;
+	struct xrt_drv_map *map = NULL;
+
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		struct xrt_drv_map *tmap = &xrt_drv_maps[i];
+
+		if (tmap->id != id)
+			continue;
+		map = tmap;
+		break;
+	}
+	return map;
+}
+
+static int xrt_drv_register_driver(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+	struct xrt_subdev_drvdata *drvdata;
+	int rc = 0;
+	const char *drvname;
+
+	BUG_ON(!map);
+
+	if (!map->drv) {
+		pr_info("skip registration of subdev driver for id %d\n", id);
+		return rc;
+	}
+	drvname = XRT_DRVNAME(map->drv);
+
+	rc = platform_driver_register(map->drv);
+	if (rc) {
+		pr_err("register %s subdev driver failed\n", drvname);
+		return rc;
+	}
+
+	drvdata = xrt_drv_map2drvdata(map);
+	if (drvdata && drvdata->xsd_dev_ops.xsd_post_init) {
+		rc = drvdata->xsd_dev_ops.xsd_post_init();
+		if (rc) {
+			platform_driver_unregister(map->drv);
+			pr_err("%s's post-init failed: %d\n", drvname, rc);
+			return rc;
+		}
+	}
+
+	if (drvdata) {
+		/* Initialize dev_t for char dev node. */
+		if (xrt_devnode_enabled(drvdata)) {
+			rc = alloc_chrdev_region(
+				&drvdata->xsd_file_ops.xsf_dev_t, 0,
+				XRT_MAX_DEVICE_NODES, drvname);
+			if (rc) {
+				if (drvdata->xsd_dev_ops.xsd_pre_exit)
+					drvdata->xsd_dev_ops.xsd_pre_exit();
+				platform_driver_unregister(map->drv);
+				pr_err("failed to alloc dev minor for %s: %d\n",
+					drvname, rc);
+				return rc;
+			}
+		} else {
+			drvdata->xsd_file_ops.xsf_dev_t = (dev_t)-1;
+		}
+	}
+
+	ida_init(&map->ida);
+
+	pr_info("registered %s subdev driver\n", drvname);
+	return 0;
+}
+
+static void xrt_drv_unregister_driver(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+	struct xrt_subdev_drvdata *drvdata;
+	const char *drvname;
+
+	BUG_ON(!map);
+	if (!map->drv) {
+		pr_info("skip unregistration of subdev driver for id %d\n", id);
+		return;
+	}
+
+	drvname = XRT_DRVNAME(map->drv);
+
+	ida_destroy(&map->ida);
+
+	drvdata = xrt_drv_map2drvdata(map);
+	if (drvdata && drvdata->xsd_file_ops.xsf_dev_t != (dev_t)-1) {
+		unregister_chrdev_region(drvdata->xsd_file_ops.xsf_dev_t,
+			XRT_MAX_DEVICE_NODES);
+	}
+
+	if (drvdata && drvdata->xsd_dev_ops.xsd_pre_exit)
+		drvdata->xsd_dev_ops.xsd_pre_exit();
+
+	platform_driver_unregister(map->drv);
+
+	pr_info("unregistered %s subdev driver\n", drvname);
+}
+
+int xrt_subdev_register_external_driver(enum xrt_subdev_id id,
+	struct platform_driver *drv, struct xrt_subdev_endpoints *eps)
+{
+	int i;
+	int result = 0;
+
+	mutex_lock(&xrt_class_lock);
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		struct xrt_drv_map *map = &xrt_drv_maps[i];
+
+		if (map->id != id)
+			continue;
+		if (map->drv) {
+			result = -EEXIST;
+			pr_err("Id %d already has a registered driver, 0x%p\n",
+				id, map->drv);
+			break;
+		}
+		map->drv = drv;
+		BUG_ON(map->eps);
+		map->eps = eps;
+		xrt_drv_register_driver(id);
+	}
+	mutex_unlock(&xrt_class_lock);
+	return result;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_register_external_driver);
+
+void xrt_subdev_unregister_external_driver(enum xrt_subdev_id id)
+{
+	int i;
+
+	mutex_lock(&xrt_class_lock);
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		struct xrt_drv_map *map = &xrt_drv_maps[i];
+
+		if (map->id != id)
+			continue;
+		xrt_drv_unregister_driver(id);
+		map->drv = NULL;
+		map->eps = NULL;
+		break;
+	}
+	mutex_unlock(&xrt_class_lock);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_unregister_external_driver);
+
+static __init int xrt_drv_register_drivers(void)
+{
+	int i;
+	int rc = 0;
+
+	mutex_init(&xrt_class_lock);
+	xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
+	if (IS_ERR(xrt_class))
+		return PTR_ERR(xrt_class);
+
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
+		rc = xrt_drv_register_driver(xrt_drv_maps[i].id);
+		if (rc)
+			break;
+	}
+	if (!rc)
+		return 0;
+
+	while (i-- > 0)
+		xrt_drv_unregister_driver(xrt_drv_maps[i].id);
+	class_destroy(xrt_class);
+	return rc;
+}
+
+static __exit void xrt_drv_unregister_drivers(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++)
+		xrt_drv_unregister_driver(xrt_drv_maps[i].id);
+	class_destroy(xrt_class);
+}
+
+const char *xrt_drv_name(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	if (map)
+		return XRT_DRVNAME(map->drv);
+	return NULL;
+}
+
+int xrt_drv_get_instance(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	return ida_alloc_range(&map->ida, 0, XRT_MAX_DEVICE_NODES, GFP_KERNEL);
+}
+
+void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	ida_free(&map->ida, instance);
+}
+
+struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
+{
+	struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+	return map ? map->eps : NULL;
+}
+
+module_init(xrt_drv_register_drivers);
+module_exit(xrt_drv_unregister_drivers);
+
+MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
+MODULE_AUTHOR("XRT Team <runtime@xilinx.com>");
+MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/fpga/xrt/lib/xrt-main.h b/drivers/fpga/xrt/lib/xrt-main.h
new file mode 100644
index 000000000000..f46f90d9e882
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xrt-main.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_MAIN_H_
+#define	_XRT_MAIN_H_
+
+extern struct platform_driver xrt_partition_driver;
+extern struct platform_driver xrt_test_driver;
+extern struct platform_driver xrt_vsec_driver;
+extern struct platform_driver xrt_vsec_golden_driver;
+extern struct platform_driver xrt_axigate_driver;
+extern struct platform_driver xrt_qspi_driver;
+extern struct platform_driver xrt_gpio_driver;
+extern struct platform_driver xrt_mailbox_driver;
+extern struct platform_driver xrt_icap_driver;
+extern struct platform_driver xrt_cmc_driver;
+extern struct platform_driver xrt_clkfreq_driver;
+extern struct platform_driver xrt_clock_driver;
+extern struct platform_driver xrt_ucs_driver;
+extern struct platform_driver xrt_calib_driver;
+
+extern struct xrt_subdev_endpoints xrt_vsec_endpoints[];
+extern struct xrt_subdev_endpoints xrt_vsec_golden_endpoints[];
+extern struct xrt_subdev_endpoints xrt_axigate_endpoints[];
+extern struct xrt_subdev_endpoints xrt_test_endpoints[];
+extern struct xrt_subdev_endpoints xrt_qspi_endpoints[];
+extern struct xrt_subdev_endpoints xrt_gpio_endpoints[];
+extern struct xrt_subdev_endpoints xrt_mailbox_endpoints[];
+extern struct xrt_subdev_endpoints xrt_icap_endpoints[];
+extern struct xrt_subdev_endpoints xrt_cmc_endpoints[];
+extern struct xrt_subdev_endpoints xrt_clkfreq_endpoints[];
+extern struct xrt_subdev_endpoints xrt_clock_endpoints[];
+extern struct xrt_subdev_endpoints xrt_ucs_endpoints[];
+extern struct xrt_subdev_endpoints xrt_calib_endpoints[];
+
+extern const char *xrt_drv_name(enum xrt_subdev_id id);
+extern int xrt_drv_get_instance(enum xrt_subdev_id id);
+extern void xrt_drv_put_instance(enum xrt_subdev_id id, int instance);
+extern struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
+
+#endif	/* _XRT_MAIN_H_ */
diff --git a/drivers/fpga/xrt/lib/xrt-subdev.c b/drivers/fpga/xrt/lib/xrt-subdev.c
new file mode 100644
index 000000000000..2e3b5612cf8f
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xrt-subdev.c
@@ -0,0 +1,1007 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/platform_device.h>
+#include <linux/pci.h>
+#include <linux/vmalloc.h>
+#include "subdev.h"
+#include "parent.h"
+#include "xrt-main.h"
+#include "metadata.h"
+
+#define DEV_IS_PCI(dev) ((dev)->bus == &pci_bus_type)
+static inline struct device *find_root(struct platform_device *pdev)
+{
+	struct device *d = DEV(pdev);
+
+	while (!DEV_IS_PCI(d))
+		d = d->parent;
+	return d;
+}
+
+/*
+ * It represents a holder of a subdev. One holder can repeatedly hold a subdev
+ * as long as each hold is paired with a corresponding release.
+ */
+struct xrt_subdev_holder {
+	struct list_head xsh_holder_list;
+	struct device *xsh_holder;
+	int xsh_count;
+};
+
+/*
+ * It represents a specific instance of platform driver for a subdev, which
+ * provides services to its clients (another subdev driver or root driver).
+ */
+struct xrt_subdev {
+	struct list_head xs_dev_list;
+	struct list_head xs_holder_list;
+	enum xrt_subdev_id xs_id;		/* type of subdev */
+	struct platform_device *xs_pdev;	/* a particular subdev inst */
+	struct completion xs_holder_comp;
+};
+
+static struct xrt_subdev *xrt_subdev_alloc(void)
+{
+	struct xrt_subdev *sdev = vzalloc(sizeof(struct xrt_subdev));
+
+	if (!sdev)
+		return NULL;
+
+	INIT_LIST_HEAD(&sdev->xs_dev_list);
+	INIT_LIST_HEAD(&sdev->xs_holder_list);
+	init_completion(&sdev->xs_holder_comp);
+	return sdev;
+}
+
+static void xrt_subdev_free(struct xrt_subdev *sdev)
+{
+	vfree(sdev);
+}
+
+/*
+ * Subdev common sysfs nodes.
+ */
+static ssize_t holders_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	ssize_t len;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_parent_ioctl_get_holders holders = { pdev, buf, 1024 };
+
+	len = xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_GET_LEAF_HOLDERS, &holders);
+	if (len >= holders.xpigh_holder_buf_len)
+		return len;
+	buf[len] = '\n';
+	return len + 1;
+}
+static DEVICE_ATTR_RO(holders);
+
+static struct attribute *xrt_subdev_attrs[] = {
+	&dev_attr_holders.attr,
+	NULL,
+};
+
+static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
+	struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+	unsigned char *blob;
+	long  size;
+	ssize_t ret = 0;
+
+	blob = pdata->xsp_dtb;
+	size = xrt_md_size(dev, blob);
+	if (size <= 0) {
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	if (off >= size)
+		goto failed;
+
+	if (off + count > size)
+		count = size - off;
+	memcpy(buf, blob + off, count);
+
+	ret = count;
+failed:
+	return ret;
+}
+
+static struct bin_attribute meta_data_attr = {
+	.attr = {
+		.name = "metadata",
+		.mode = 0400
+	},
+	.read = metadata_output,
+	.size = 0
+};
+
+static struct bin_attribute  *xrt_subdev_bin_attrs[] = {
+	&meta_data_attr,
+	NULL,
+};
+
+static const struct attribute_group xrt_subdev_attrgroup = {
+	.attrs = xrt_subdev_attrs,
+	.bin_attrs = xrt_subdev_bin_attrs,
+};
+
+static int
+xrt_subdev_getres(struct device *parent, enum xrt_subdev_id id,
+	char *dtb, struct resource **res, int *res_num)
+{
+	struct xrt_subdev_platdata *pdata;
+	struct resource *pci_res = NULL;
+	const u64 *bar_range;
+	const u32 *bar_idx;
+	char *ep_name = NULL, *regmap = NULL;
+	uint bar;
+	int count1 = 0, count2 = 0, ret;
+
+	if (!dtb)
+		return -EINVAL;
+
+	pdata = DEV_PDATA(to_platform_device(parent));
+
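+	/* First pass: count the endpoints that carry an IO offset property. */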
+	for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL,
+		&ep_name, &regmap);
+		ep_name != NULL;
+		xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap,
+		&ep_name, &regmap)) {
+		ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
+			PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+		if (!ret)
+			count1++;
+	}
+	if (!count1)
+		return 0;
+
+	*res = vzalloc(sizeof(struct resource) * count1);
+	if (!*res)
+		return -ENOMEM;
+
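+	/*
+	 * Second pass: translate each endpoint's BAR-relative range into an
+	 * absolute IORESOURCE_MEM entry under the parent PCI BAR resource.
+	 */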
+	for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL,
+		&ep_name, &regmap);
+		ep_name != NULL;
+		xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap,
+		&ep_name, &regmap)) {
+		ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
+			PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+		if (ret)
+			continue;
+		xrt_md_get_prop(parent, dtb, ep_name, regmap,
+			PROP_BAR_IDX, (const void **)&bar_idx, NULL);
+		bar = bar_idx ? be32_to_cpu(*bar_idx) : 0;
+		xrt_subdev_get_barres(to_platform_device(parent), &pci_res,
+			bar);
+		(*res)[count2].start = pci_res->start +
+			be64_to_cpu(bar_range[0]);
+		(*res)[count2].end = pci_res->start +
+			be64_to_cpu(bar_range[0]) +
+			be64_to_cpu(bar_range[1]) - 1;
+		(*res)[count2].flags = IORESOURCE_MEM;
+		/* check if there is conflicted resource */
+		ret = request_resource(pci_res, *res + count2);
+		if (ret) {
+			dev_err(parent, "Conflict resource %pR\n",
+				*res + count2);
+			vfree(*res);
+			*res_num = 0;
+			*res = NULL;
+			return ret;
+		}
+		release_resource(*res + count2);
+
+		(*res)[count2].parent = pci_res;
+
+		xrt_md_get_epname_pointer(parent, pdata->xsp_dtb, ep_name,
+			regmap, &(*res)[count2].name);
+
+		count2++;
+	}
+
+	BUG_ON(count1 != count2);
+	*res_num = count2;
+
+	return 0;
+}
+
+static inline enum xrt_subdev_file_mode
+xrt_devnode_mode(struct xrt_subdev_drvdata *drvdata)
+{
+	return drvdata->xsd_file_ops.xsf_mode;
+}
+
+static bool xrt_subdev_cdev_auto_creation(struct platform_device *pdev)
+{
+	struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
+
+	if (!drvdata)
+		return false;
+
+	return xrt_devnode_enabled(drvdata) &&
+		(xrt_devnode_mode(drvdata) == XRT_SUBDEV_FILE_DEFAULT ||
+		(xrt_devnode_mode(drvdata) == XRT_SUBDEV_FILE_MULTI_INST));
+}
+
+static struct xrt_subdev *
+xrt_subdev_create(struct device *parent, enum xrt_subdev_id id,
+	xrt_subdev_parent_cb_t pcb, void *pcb_arg, char *dtb)
+{
+	struct xrt_subdev *sdev = NULL;
+	struct platform_device *pdev = NULL;
+	struct xrt_subdev_platdata *pdata = NULL;
+	long dtb_len = 0;
+	size_t pdata_sz;
+	int inst = PLATFORM_DEVID_NONE;
+	struct resource *res = NULL;
+	int res_num = 0;
+
+	sdev = xrt_subdev_alloc();
+	if (!sdev) {
+		dev_err(parent, "failed to alloc subdev for ID %d", id);
+		goto fail;
+	}
+	sdev->xs_id = id;
+
+	if (dtb) {
+		xrt_md_pack(parent, dtb);
+		dtb_len = xrt_md_size(parent, dtb);
+		if (dtb_len <= 0) {
+			dev_err(parent, "invalid metadata len %ld", dtb_len);
+			goto fail;
+		}
+	}
+	pdata_sz = sizeof(struct xrt_subdev_platdata) + dtb_len - 1;
+
+	/* Prepare platform data passed to subdev. */
+	pdata = vzalloc(pdata_sz);
+	if (!pdata)
+		goto fail;
+
+	pdata->xsp_parent_cb = pcb;
+	pdata->xsp_parent_cb_arg = pcb_arg;
+	(void) memcpy(pdata->xsp_dtb, dtb, dtb_len);
+	if (id == XRT_SUBDEV_PART) {
+		/* Partition can only be created by root driver. */
+		BUG_ON(parent->bus != &pci_bus_type);
+		pdata->xsp_root_name = dev_name(parent);
+	} else {
+		struct platform_device *part = to_platform_device(parent);
+		/* Leaf can only be created by partition driver. */
+		BUG_ON(parent->bus != &platform_bus_type);
+		BUG_ON(strcmp(xrt_drv_name(XRT_SUBDEV_PART),
+			platform_get_device_id(part)->name));
+		pdata->xsp_root_name = DEV_PDATA(part)->xsp_root_name;
+	}
+
+	/* Obtain dev instance number. */
+	inst = xrt_drv_get_instance(id);
+	if (inst < 0) {
+		dev_err(parent, "failed to obtain instance: %d", inst);
+		goto fail;
+	}
+
+	/* Create subdev. */
+	if (id == XRT_SUBDEV_PART) {
+		pdev = platform_device_register_data(parent,
+			xrt_drv_name(XRT_SUBDEV_PART), inst, pdata, pdata_sz);
+	} else {
+		int rc = xrt_subdev_getres(parent, id, dtb, &res, &res_num);
+
+		if (rc) {
+			dev_err(parent, "failed to get resource for %s.%d: %d",
+				xrt_drv_name(id), inst, rc);
+			goto fail;
+		}
+		pdev = platform_device_register_resndata(parent,
+			xrt_drv_name(id), inst, res, res_num, pdata, pdata_sz);
+		vfree(res);
+	}
+	if (IS_ERR(pdev)) {
+		dev_err(parent, "failed to create subdev for %s inst %d: %ld",
+			xrt_drv_name(id), inst, PTR_ERR(pdev));
+		goto fail;
+	}
+	sdev->xs_pdev = pdev;
+
+	if (device_attach(DEV(pdev)) != 1) {
+		xrt_err(pdev, "failed to attach");
+		goto fail;
+	}
+
+	if (sysfs_create_group(&DEV(pdev)->kobj, &xrt_subdev_attrgroup))
+		xrt_err(pdev, "failed to create sysfs group");
+
+	/*
+	 * Create a sysfs symlink under the root for leaves in any partition,
+	 * for easy access to them.
+	 */
+	if (id != XRT_SUBDEV_PART) {
+		if (sysfs_create_link(&find_root(pdev)->kobj,
+			&DEV(pdev)->kobj, dev_name(DEV(pdev)))) {
+			xrt_err(pdev, "failed to create sysfs link");
+		}
+	}
+
+	/* All done, ready to handle requests through the cdev. */
+	if (xrt_subdev_cdev_auto_creation(pdev)) {
+		(void) xrt_devnode_create(pdev,
+			DEV_DRVDATA(pdev)->xsd_file_ops.xsf_dev_name, NULL);
+	}
+
+	vfree(pdata);
+	return sdev;
+
+fail:
+	vfree(pdata);
+	if (sdev && !IS_ERR_OR_NULL(sdev->xs_pdev))
+		platform_device_unregister(sdev->xs_pdev);
+	if (inst >= 0)
+		xrt_drv_put_instance(id, inst);
+	xrt_subdev_free(sdev);
+	return NULL;
+}
+
+static void xrt_subdev_destroy(struct xrt_subdev *sdev)
+{
+	struct platform_device *pdev = sdev->xs_pdev;
+	int inst = pdev->id;
+	struct device *dev = DEV(pdev);
+
+	/* Take down the device node */
+	if (xrt_subdev_cdev_auto_creation(pdev))
+		(void) xrt_devnode_destroy(pdev);
+	if (sdev->xs_id != XRT_SUBDEV_PART)
+		(void) sysfs_remove_link(&find_root(pdev)->kobj, dev_name(dev));
+	(void) sysfs_remove_group(&dev->kobj, &xrt_subdev_attrgroup);
+	platform_device_unregister(pdev);
+	xrt_drv_put_instance(sdev->xs_id, inst);
+	xrt_subdev_free(sdev);
+}
+
+int xrt_subdev_parent_ioctl(struct platform_device *self, u32 cmd, void *arg)
+{
+	struct device *dev = DEV(self);
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(self);
+
+	return (*pdata->xsp_parent_cb)(dev->parent, pdata->xsp_parent_cb_arg,
+		cmd, arg);
+}
+
+int xrt_subdev_ioctl(struct platform_device *tgt, u32 cmd, void *arg)
+{
+	struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(tgt);
+
+	return (*drvdata->xsd_dev_ops.xsd_ioctl)(tgt, cmd, arg);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_ioctl);
+
+struct platform_device *
+xrt_subdev_get_leaf(struct platform_device *pdev,
+	xrt_subdev_match_t match_cb, void *match_arg)
+{
+	int rc;
+	struct xrt_parent_ioctl_get_leaf get_leaf = {
+		pdev, match_cb, match_arg, };
+
+	rc = xrt_subdev_parent_ioctl(pdev, XRT_PARENT_GET_LEAF, &get_leaf);
+	if (rc)
+		return NULL;
+	return get_leaf.xpigl_leaf;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_get_leaf);
+
+struct subdev_match_arg {
+	enum xrt_subdev_id id;
+	int instance;
+};
+
+static bool subdev_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
+
+	return id == a->id &&
+		(pdev->id == a->instance || PLATFORM_DEVID_NONE == a->instance);
+}
+
+struct platform_device *
+xrt_subdev_get_leaf_by_id(struct platform_device *pdev,
+	enum xrt_subdev_id id, int instance)
+{
+	struct subdev_match_arg arg = { id, instance };
+
+	return xrt_subdev_get_leaf(pdev, subdev_match, &arg);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_get_leaf_by_id);
+
+bool xrt_subdev_has_epname(struct platform_device *pdev, const char *ep_name)
+{
+	struct resource	*res;
+	int		i;
+
+	for (i = 0, res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	    res;
+	    res = platform_get_resource(pdev, IORESOURCE_MEM, ++i)) {
+		if (!strncmp(res->name, ep_name, strlen(res->name) + 1))
+			return true;
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_has_epname);
+
+static bool xrt_subdev_match_epname(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	return xrt_subdev_has_epname(pdev, arg);
+}
+
+struct platform_device *
+xrt_subdev_get_leaf_by_epname(struct platform_device *pdev, const char *name)
+{
+	return xrt_subdev_get_leaf(pdev, xrt_subdev_match_epname, (void *)name);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_get_leaf_by_epname);
+
+int xrt_subdev_put_leaf(struct platform_device *pdev,
+	struct platform_device *leaf)
+{
+	struct xrt_parent_ioctl_put_leaf put_leaf = { pdev, leaf };
+
+	return xrt_subdev_parent_ioctl(pdev, XRT_PARENT_PUT_LEAF, &put_leaf);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_put_leaf);
+
+int xrt_subdev_create_partition(struct platform_device *pdev, char *dtb)
+{
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_CREATE_PARTITION, dtb);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_create_partition);
+
+int xrt_subdev_destroy_partition(struct platform_device *pdev, int instance)
+{
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_REMOVE_PARTITION, (void *)(uintptr_t)instance);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_destroy_partition);
+
+int xrt_subdev_lookup_partition(struct platform_device *pdev,
+	xrt_subdev_match_t match_cb, void *match_arg)
+{
+	int rc;
+	struct xrt_parent_ioctl_lookup_partition lkp = {
+		pdev, match_cb, match_arg, };
+
+	rc = xrt_subdev_parent_ioctl(pdev, XRT_PARENT_LOOKUP_PARTITION, &lkp);
+	if (rc)
+		return rc;
+	return lkp.xpilp_part_inst;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_lookup_partition);
+
+int xrt_subdev_wait_for_partition_bringup(struct platform_device *pdev)
+{
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_WAIT_PARTITION_BRINGUP, NULL);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_wait_for_partition_bringup);
+
+void *xrt_subdev_add_event_cb(struct platform_device *pdev,
+	xrt_subdev_match_t match, void *match_arg, xrt_event_cb_t cb)
+{
+	struct xrt_parent_ioctl_evt_cb c = { pdev, match, match_arg, cb };
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_ADD_EVENT_CB, &c);
+	return c.xevt_hdl;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_add_event_cb);
+
+void xrt_subdev_remove_event_cb(struct platform_device *pdev, void *hdl)
+{
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_REMOVE_EVENT_CB, hdl);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_remove_event_cb);
+
+static ssize_t
+xrt_subdev_get_holders(struct xrt_subdev *sdev, char *buf, size_t len)
+{
+	const struct list_head *ptr;
+	struct xrt_subdev_holder *h;
+	ssize_t n = 0;
+
+	list_for_each(ptr, &sdev->xs_holder_list) {
+		h = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+		n += snprintf(buf + n, len - n, "%s:%d ",
+			dev_name(h->xsh_holder), h->xsh_count);
+		if (n >= len)
+			break;
+	}
+	return n;
+}
+
+void xrt_subdev_pool_init(struct device *dev, struct xrt_subdev_pool *spool)
+{
+	INIT_LIST_HEAD(&spool->xpool_dev_list);
+	spool->xpool_owner = dev;
+	mutex_init(&spool->xpool_lock);
+	spool->xpool_closing = false;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_init);
+
+static void xrt_subdev_pool_wait_for_holders(struct xrt_subdev_pool *spool,
+	struct xrt_subdev *sdev)
+{
+	const struct list_head *ptr, *next;
+	char holders[128];
+	struct xrt_subdev_holder *holder;
+	struct mutex *lk = &spool->xpool_lock;
+
+	BUG_ON(!mutex_is_locked(lk));
+
+	while (!list_empty(&sdev->xs_holder_list)) {
+		int rc;
+
+		/* It is most likely a bug if we ever enter this loop. */
+		(void) xrt_subdev_get_holders(sdev, holders, sizeof(holders));
+		xrt_err(sdev->xs_pdev, "awaits holders: %s", holders);
+		mutex_unlock(lk);
+		rc = wait_for_completion_killable(&sdev->xs_holder_comp);
+		mutex_lock(lk);
+		if (rc == -ERESTARTSYS) {
+			xrt_err(sdev->xs_pdev,
+				"give up on waiting for holders, clean up now");
+			list_for_each_safe(ptr, next, &sdev->xs_holder_list) {
+				holder = list_entry(ptr,
+					struct xrt_subdev_holder,
+					xsh_holder_list);
+				list_del(&holder->xsh_holder_list);
+				vfree(holder);
+			}
+		}
+	}
+}
+
+int xrt_subdev_pool_fini(struct xrt_subdev_pool *spool)
+{
+	int ret = 0;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct mutex *lk = &spool->xpool_lock;
+
+	mutex_lock(lk);
+
+	if (spool->xpool_closing) {
+		mutex_unlock(lk);
+		return 0;
+	}
+
+	spool->xpool_closing = true;
+	/* Remove subdevs in the reverse order in which they were added. */
+	while (!ret && !list_empty(dl)) {
+		struct xrt_subdev *sdev = list_first_entry(dl,
+			struct xrt_subdev, xs_dev_list);
+		xrt_subdev_pool_wait_for_holders(spool, sdev);
+		list_del(&sdev->xs_dev_list);
+		mutex_unlock(lk);
+		xrt_subdev_destroy(sdev);
+		mutex_lock(lk);
+	}
+
+	mutex_unlock(lk);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_fini);
+
+static int xrt_subdev_hold(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+	const struct list_head *ptr;
+	struct list_head *hl = &sdev->xs_holder_list;
+	struct xrt_subdev_holder *holder;
+	bool found = false;
+
+	list_for_each(ptr, hl) {
+		holder = list_entry(ptr, struct xrt_subdev_holder,
+			xsh_holder_list);
+		if (holder->xsh_holder == holder_dev) {
+			holder->xsh_count++;
+			found = true;
+			break;
+		}
+	}
+
+	if (!found) {
+		holder = vzalloc(sizeof(*holder));
+		if (!holder)
+			return -ENOMEM;
+		holder->xsh_holder = holder_dev;
+		holder->xsh_count = 1;
+		list_add_tail(&holder->xsh_holder_list, hl);
+	}
+
+	return holder->xsh_count;
+}
+
+static int
+xrt_subdev_release(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+	const struct list_head *ptr, *next;
+	struct list_head *hl = &sdev->xs_holder_list;
+	struct xrt_subdev_holder *holder;
+	int count;
+	bool found = false;
+
+	list_for_each_safe(ptr, next, hl) {
+		holder = list_entry(ptr, struct xrt_subdev_holder,
+			xsh_holder_list);
+		if (holder->xsh_holder == holder_dev) {
+			found = true;
+			holder->xsh_count--;
+
+			count = holder->xsh_count;
+			if (count == 0) {
+				list_del(&holder->xsh_holder_list);
+				vfree(holder);
+				if (list_empty(hl))
+					complete(&sdev->xs_holder_comp);
+			}
+			break;
+		}
+	}
+	if (!found) {
+		dev_err(holder_dev, "can't release, %s did not hold %s",
+			dev_name(holder_dev),
+			dev_name(DEV(sdev->xs_pdev)));
+	}
+	return found ? count : -EINVAL;
+}
+
+int xrt_subdev_pool_add(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
+	xrt_subdev_parent_cb_t pcb, void *pcb_arg, char *dtb)
+{
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	int ret = 0;
+
+	sdev = xrt_subdev_create(spool->xpool_owner, id, pcb, pcb_arg, dtb);
+	if (sdev) {
+		mutex_lock(lk);
+		if (spool->xpool_closing) {
+			/* No new subdev when pool is going away. */
+			xrt_err(sdev->xs_pdev, "pool is closing");
+			ret = -ENODEV;
+		} else {
+			list_add(&sdev->xs_dev_list, dl);
+		}
+		mutex_unlock(lk);
+		if (ret)
+			xrt_subdev_destroy(sdev);
+	} else {
+		ret = -EINVAL;
+	}
+
+	return ret ? ret : sdev->xs_pdev->id;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_add);
+
+int xrt_subdev_pool_del(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
+	int instance)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	int ret = -ENOENT;
+
+	mutex_lock(lk);
+	list_for_each(ptr, dl) {
+		sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+		if (sdev->xs_id != id || sdev->xs_pdev->id != instance)
+			continue;
+		xrt_subdev_pool_wait_for_holders(spool, sdev);
+		list_del(&sdev->xs_dev_list);
+		ret = 0;
+		break;
+	}
+	mutex_unlock(lk);
+	if (ret)
+		return ret;
+
+	xrt_subdev_destroy(sdev);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_del);
+
+static int xrt_subdev_pool_get_impl(struct xrt_subdev_pool *spool,
+	xrt_subdev_match_t match, void *arg, struct device *holder_dev,
+	struct xrt_subdev **sdevp)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev = NULL;
+	int ret = -ENOENT;
+
+	mutex_lock(lk);
+
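+	/*
+	 * XRT_SUBDEV_MATCH_PREV/NEXT treat arg as the current platform
+	 * device and return its neighbor in the pool (tail or head when arg
+	 * is NULL); any other match callback returns the first matching
+	 * subdev.
+	 */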
+	if (match == XRT_SUBDEV_MATCH_PREV) {
+		struct platform_device *pdev = (struct platform_device *)arg;
+		struct xrt_subdev *d = NULL;
+
+		if (!pdev) {
+			sdev = list_empty(dl) ? NULL : list_last_entry(dl,
+				struct xrt_subdev, xs_dev_list);
+		} else {
+			list_for_each(ptr, dl) {
+				d = list_entry(ptr, struct xrt_subdev,
+					xs_dev_list);
+				if (d->xs_pdev != pdev)
+					continue;
+				if (!list_is_first(ptr, dl))
+					sdev = list_prev_entry(d, xs_dev_list);
+				break;
+			}
+		}
+	} else if (match == XRT_SUBDEV_MATCH_NEXT) {
+		struct platform_device *pdev = (struct platform_device *)arg;
+		struct xrt_subdev *d = NULL;
+
+		if (!pdev) {
+			sdev = list_first_entry_or_null(dl,
+				struct xrt_subdev, xs_dev_list);
+		} else {
+			list_for_each(ptr, dl) {
+				d = list_entry(ptr, struct xrt_subdev,
+					xs_dev_list);
+				if (d->xs_pdev != pdev)
+					continue;
+				if (!list_is_last(ptr, dl))
+					sdev = list_next_entry(d, xs_dev_list);
+				break;
+			}
+		}
+	} else {
+		list_for_each(ptr, dl) {
+			struct xrt_subdev *d = NULL;
+
+			d = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+			if (d && !match(d->xs_id, d->xs_pdev, arg))
+				continue;
+			sdev = d;
+			break;
+		}
+	}
+
+	if (sdev)
+		ret = xrt_subdev_hold(sdev, holder_dev);
+
+	mutex_unlock(lk);
+
+	if (ret >= 0)
+		*sdevp = sdev;
+	return ret;
+}
+
+int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
+	xrt_subdev_match_t match, void *arg, struct device *holder_dev,
+	struct platform_device **pdevp)
+{
+	int rc;
+	struct xrt_subdev *sdev;
+
+	rc = xrt_subdev_pool_get_impl(spool, match, arg, holder_dev, &sdev);
+	if (rc < 0) {
+		if (rc != -ENOENT)
+			dev_err(holder_dev, "failed to hold device: %d", rc);
+		return rc;
+	}
+
+	if (DEV_IS_PCI(holder_dev)) {
+#ifdef	SUBDEV_DEBUG
+		dev_info(holder_dev, "%s: %s <<==== %s, ref=%d", __func__,
+			dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)), rc);
+#endif
+	} else {
+		xrt_info(to_platform_device(holder_dev), "%s <<==== %s",
+			dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
+	}
+
+	*pdevp = sdev->xs_pdev;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_get);
+
+static int xrt_subdev_pool_put_impl(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, struct device *holder_dev)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	int ret = -ENOENT;
+
+	mutex_lock(lk);
+	list_for_each(ptr, dl) {
+		sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+		if (sdev->xs_pdev != pdev)
+			continue;
+		ret = xrt_subdev_release(sdev, holder_dev);
+		break;
+	}
+	mutex_unlock(lk);
+
+	if (ret < 0 && ret != -ENOENT)
+		dev_err(holder_dev, "failed to release device: %d", ret);
+	return ret;
+}
+
+int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, struct device *holder_dev)
+{
+	int ret = xrt_subdev_pool_put_impl(spool, pdev, holder_dev);
+
+	if (ret < 0)
+		return ret;
+
+	if (DEV_IS_PCI(holder_dev)) {
+#ifdef	SUBDEV_DEBUG
+		dev_info(holder_dev, "%s: %s <<==X== %s, ref=%d", __func__,
+			dev_name(holder_dev), dev_name(DEV(pdev)), ret);
+#endif
+	} else {
+		struct platform_device *d = to_platform_device(holder_dev);
+
+		xrt_info(d, "%s <<==X== %s",
+			dev_name(holder_dev), dev_name(DEV(pdev)));
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_put);
+
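+/*
+ * Walk all subdevs in the pool, holding each one across the callback and
+ * invoking xevt_cb for those accepted by the match callback.
+ */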
+int xrt_subdev_pool_event(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, xrt_subdev_match_t match, void *arg,
+	xrt_event_cb_t xevt_cb, enum xrt_events evt)
+{
+	int rc = 0;
+	struct platform_device *tgt = NULL;
+	struct xrt_subdev *sdev = NULL;
+	struct xrt_event_arg_subdev esd;
+
+	while (!rc && xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
+		tgt, DEV(pdev), &sdev) != -ENOENT) {
+		tgt = sdev->xs_pdev;
+		esd.xevt_subdev_id = sdev->xs_id;
+		esd.xevt_subdev_instance = tgt->id;
+		if (match(sdev->xs_id, sdev->xs_pdev, arg))
+			rc = xevt_cb(pdev, evt, &esd);
+		(void) xrt_subdev_pool_put_impl(spool, tgt, DEV(pdev));
+	}
+	return rc;
+}
+
+ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+	struct platform_device *pdev, char *buf, size_t len)
+{
+	const struct list_head *ptr;
+	struct mutex *lk = &spool->xpool_lock;
+	struct list_head *dl = &spool->xpool_dev_list;
+	struct xrt_subdev *sdev;
+	ssize_t ret = 0;
+
+	mutex_lock(lk);
+	list_for_each(ptr, dl) {
+		sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+		if (sdev->xs_pdev != pdev)
+			continue;
+		ret = xrt_subdev_get_holders(sdev, buf, len);
+		break;
+	}
+	mutex_unlock(lk);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_get_holders);
+
+int xrt_subdev_broadcast_event_async(struct platform_device *pdev,
+	enum xrt_events evt, xrt_async_broadcast_event_cb_t cb, void *arg)
+{
+	struct xrt_parent_ioctl_async_broadcast_evt e = { pdev, evt, cb, arg };
+
+	return xrt_subdev_parent_ioctl(pdev,
+		XRT_PARENT_ASYNC_BOARDCAST_EVENT, &e);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_broadcast_event_async);
+
+struct xrt_broadcast_event_arg {
+	struct completion comp;
+	bool success;
+};
+
+static void xrt_broadcast_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg, bool success)
+{
+	struct xrt_broadcast_event_arg *e =
+		(struct xrt_broadcast_event_arg *)arg;
+
+	e->success = success;
+	complete(&e->comp);
+}
+
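+/*
+ * Synchronous broadcast: wrap the async variant and block until the
+ * completion callback reports the overall result.
+ */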
+int xrt_subdev_broadcast_event(struct platform_device *pdev,
+	enum xrt_events evt)
+{
+	int ret;
+	struct xrt_broadcast_event_arg e;
+
+	init_completion(&e.comp);
+	e.success = false;
+	ret = xrt_subdev_broadcast_event_async(pdev, evt,
+		xrt_broadcast_event_cb, &e);
+	if (ret == 0)
+		wait_for_completion(&e.comp);
+	return e.success ? 0 : -EINVAL;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_broadcast_event);
+
+void xrt_subdev_hot_reset(struct platform_device *pdev)
+{
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_HOT_RESET, NULL);
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_hot_reset);
+
+void xrt_subdev_get_barres(struct platform_device *pdev,
+	struct resource **res, uint bar_idx)
+{
+	struct xrt_parent_ioctl_get_res arg = { 0 };
+
+	BUG_ON(bar_idx > PCI_STD_RESOURCE_END);
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_GET_RESOURCE, &arg);
+
+	*res = &arg.xpigr_res[bar_idx];
+}
+
+void xrt_subdev_get_parent_id(struct platform_device *pdev,
+	unsigned short *vendor, unsigned short *device,
+	unsigned short *subvendor, unsigned short *subdevice)
+{
+	struct xrt_parent_ioctl_get_id id = { 0 };
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_GET_ID, (void *)&id);
+	if (vendor)
+		*vendor = id.xpigi_vendor_id;
+	if (device)
+		*device = id.xpigi_device_id;
+	if (subvendor)
+		*subvendor = id.xpigi_sub_vendor_id;
+	if (subdevice)
+		*subdevice = id.xpigi_sub_device_id;
+}
+
+struct device *xrt_subdev_register_hwmon(struct platform_device *pdev,
+	const char *name, void *drvdata, const struct attribute_group **grps)
+{
+	struct xrt_parent_ioctl_hwmon hm = { true, name, drvdata, grps, };
+
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_HWMON, (void *)&hm);
+	return hm.xpih_hwmon_dev;
+}
+
+void xrt_subdev_unregister_hwmon(struct platform_device *pdev,
+	struct device *hwmon)
+{
+	struct xrt_parent_ioctl_hwmon hm = { false, };
+
+	hm.xpih_hwmon_dev = hwmon;
+	(void) xrt_subdev_parent_ioctl(pdev, XRT_PARENT_HWMON, (void *)&hm);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH V2 XRT Alveo 4/6] fpga: xrt: XRT Alveo management physical function driver
  2020-12-17  7:50 [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview Sonal Santan
                   ` (2 preceding siblings ...)
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 3/6] fpga: xrt: core infrastructure for xrt-lib module Sonal Santan
@ 2020-12-17  7:50 ` Sonal Santan
  2020-12-21  9:03   ` kernel test robot
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 5/6] fpga: xrt: platform drivers for subsystems in shell partition Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 6/6] fpga: xrt: Kconfig and Makefile updates for XRT drivers Sonal Santan
  5 siblings, 1 reply; 10+ messages in thread
From: Sonal Santan @ 2020-12-17  7:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree, trix, mdf

From: Sonal Santan <sonal.santan@xilinx.com>

Add the management physical function driver core. The driver attaches
to the management physical function of Alveo devices. It instantiates
the root driver and one or more partition drivers, which in turn
instantiate platform drivers. The instantiation of partition and
platform drivers is completely data driven. The driver integrates
with the FPGA manager framework and provides the xclbin download
service.

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
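For illustration, a minimal user-space sketch of the xclbin download path
this driver provides. Only XMGMT_IOCICAPDOWNLOAD_AXLF and struct
xmgmt_ioc_bitstream_axlf come from include/uapi/linux/xrt/xmgmt-ioctl.h in
this patch; the device node path and the helper itself are assumptions made
for the example.

/* Hypothetical example: hand an xclbin file to the xmgmt driver.
 * The node path passed in is an assumption; the ioctl and structure are
 * the ones defined in include/uapi/linux/xrt/xmgmt-ioctl.h.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/xrt/xclbin.h>
#include <linux/xrt/xmgmt-ioctl.h>

static int download_xclbin(const char *node, const char *path)
{
	struct xmgmt_ioc_bitstream_axlf ioc = { 0 };
	struct axlf *xclbin = NULL;
	long size;
	FILE *fp;
	int fd, ret = -1;

	fp = fopen(path, "rb");
	if (!fp)
		return -1;
	fseek(fp, 0, SEEK_END);
	size = ftell(fp);
	rewind(fp);

	xclbin = malloc(size);
	if (!xclbin || fread(xclbin, 1, size, fp) != (size_t)size)
		goto out;

	fd = open(node, O_RDWR);
	if (fd < 0)
		goto out;

	ioc.xclbin = xclbin;	/* kernel copies and validates the image */
	ret = ioctl(fd, XMGMT_IOCICAPDOWNLOAD_AXLF, &ioc);
	close(fd);
out:
	fclose(fp);
	free(xclbin);
	return ret;
}

The driver copies the image into kernel memory (bitstream_axlf_ioctl) and
then programs the matching fpga_region with it via xmgmt_process_xclbin().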
 drivers/fpga/xrt/include/xmgmt-main.h     |  34 +
 drivers/fpga/xrt/mgmt/xmgmt-fmgr-drv.c    | 179 ++++++
 drivers/fpga/xrt/mgmt/xmgmt-fmgr.h        |  29 +
 drivers/fpga/xrt/mgmt/xmgmt-main-impl.h   |  35 +
 drivers/fpga/xrt/mgmt/xmgmt-main-region.c | 476 ++++++++++++++
 drivers/fpga/xrt/mgmt/xmgmt-main.c        | 738 ++++++++++++++++++++++
 drivers/fpga/xrt/mgmt/xmgmt-root.c        | 375 +++++++++++
 include/uapi/linux/xrt/xmgmt-ioctl.h      |  72 +++
 8 files changed, 1938 insertions(+)
 create mode 100644 drivers/fpga/xrt/include/xmgmt-main.h
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-fmgr-drv.c
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-fmgr.h
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-main-impl.h
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-main-region.c
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-main.c
 create mode 100644 drivers/fpga/xrt/mgmt/xmgmt-root.c
 create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h

diff --git a/drivers/fpga/xrt/include/xmgmt-main.h b/drivers/fpga/xrt/include/xmgmt-main.h
new file mode 100644
index 000000000000..3f26c480ce27
--- /dev/null
+++ b/drivers/fpga/xrt/include/xmgmt-main.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XMGMT_MAIN_H_
+#define	_XMGMT_MAIN_H_
+
+#include <linux/xrt/xclbin.h>
+
+enum xrt_mgmt_main_ioctl_cmd {
+	// section needs to be vfree'd by caller
+	XRT_MGMT_MAIN_GET_AXLF_SECTION = 0,
+	// vbnv needs to be kfree'd by caller
+	XRT_MGMT_MAIN_GET_VBNV,
+};
+
+enum provider_kind {
+	XMGMT_BLP,
+	XMGMT_PLP,
+	XMGMT_ULP,
+};
+
+struct xrt_mgmt_main_ioctl_get_axlf_section {
+	enum provider_kind xmmigas_axlf_kind;
+	enum axlf_section_kind xmmigas_section_kind;
+	void *xmmigas_section;
+	u64 xmmigas_section_size;
+};
+
+#endif	/* _XMGMT_MAIN_H_ */
diff --git a/drivers/fpga/xrt/mgmt/xmgmt-fmgr-drv.c b/drivers/fpga/xrt/mgmt/xmgmt-fmgr-drv.c
new file mode 100644
index 000000000000..5e4a4e20b228
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/xmgmt-fmgr-drv.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2019-2020 Xilinx, Inc.
+ * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
+ *
+ * Authors: Sonal.Santan@xilinx.com
+ */
+
+#include <linux/cred.h>
+#include <linux/efi.h>
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "xrt-xclbin.h"
+#include "subdev.h"
+#include "xmgmt-fmgr.h"
+#include "subdev/axigate.h"
+#include "subdev/icap.h"
+#include "xmgmt-main-impl.h"
+
+struct xfpga_klass {
+	const struct platform_device *pdev;
+	char                          name[64];
+};
+
+/*
+ * xclbin download plumbing -- find the download subsystem (ICAP) and
+ * pass it the xclbin for the heavy lifting
+ */
+static int xmgmt_download_bitstream(struct platform_device *pdev,
+	const struct axlf *xclbin)
+
+{
+	struct XHwIcap_Bit_Header bit_header = { 0 };
+	struct platform_device *icap_leaf = NULL;
+	struct xrt_icap_ioctl_wr arg;
+	char *bitstream = NULL;
+	int ret;
+
+	ret = xrt_xclbin_get_section(xclbin, BITSTREAM, (void **)&bitstream,
+		NULL);
+	if (ret || !bitstream) {
+		xrt_err(pdev, "bitstream not found");
+		return -ENOENT;
+	}
+	ret = xrt_xclbin_parse_header(bitstream,
+		DMA_HWICAP_BITFILE_BUFFER_SIZE, &bit_header);
+	if (ret) {
+		ret = -EINVAL;
+		xrt_err(pdev, "invalid bitstream header");
+		goto done;
+	}
+	icap_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_ICAP,
+		PLATFORM_DEVID_NONE);
+	if (!icap_leaf) {
+		ret = -ENODEV;
+		xrt_err(pdev, "icap does not exist");
+		goto done;
+	}
+	arg.xiiw_bit_data = bitstream + bit_header.HeaderLength;
+	arg.xiiw_data_len = bit_header.BitstreamLength;
+	ret = xrt_subdev_ioctl(icap_leaf, XRT_ICAP_WRITE, &arg);
+	if (ret)
+		xrt_err(pdev, "write bitstream failed, ret = %d", ret);
+
+done:
+	if (icap_leaf)
+		xrt_subdev_put_leaf(pdev, icap_leaf);
+	vfree(bitstream);
+
+	return ret;
+}
+
+/*
+ * There is no HW prep work we do here since we need the full
+ * xclbin for its sanity check.
+ */
+static int xmgmt_pr_write_init(struct fpga_manager *mgr,
+	struct fpga_image_info *info, const char *buf, size_t count)
+{
+	const struct axlf *bin = (const struct axlf *)buf;
+	struct xfpga_klass *obj = mgr->priv;
+
+	if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
+		xrt_info(obj->pdev, "%s only supports partial reconfiguration\n", obj->name);
+		return -EINVAL;
+	}
+
+	if (count < sizeof(struct axlf))
+		return -EINVAL;
+
+	if (count > bin->m_header.m_length)
+		return -EINVAL;
+
+	xrt_info(obj->pdev, "Prepare download of xclbin %pUb of length %lld B",
+		&bin->m_header.uuid, bin->m_header.m_length);
+
+	return 0;
+}
+
+/*
+ * The implementation requires the full xclbin image before it can start
+ * programming the hardware via the ICAP subsystem. The full image is
+ * required for checking the validity of the xclbin and for walking its
+ * sections to discover the bitstream.
+ */
+static int xmgmt_pr_write(struct fpga_manager *mgr,
+	const char *buf, size_t count)
+{
+	const struct axlf *bin = (const struct axlf *)buf;
+	struct xfpga_klass *obj = mgr->priv;
+
+	if (bin->m_header.m_length != count)
+		return -EINVAL;
+
+	return xmgmt_download_bitstream((void *)obj->pdev, bin);
+}
+
+static int xmgmt_pr_write_complete(struct fpga_manager *mgr,
+	struct fpga_image_info *info)
+{
+	const struct axlf *bin = (const struct axlf *)info->buf;
+	struct xfpga_klass *obj = mgr->priv;
+
+	xrt_info(obj->pdev, "Finished download of xclbin %pUb",
+		 &bin->m_header.uuid);
+	return 0;
+}
+
+static enum fpga_mgr_states xmgmt_pr_state(struct fpga_manager *mgr)
+{
+	return FPGA_MGR_STATE_UNKNOWN;
+}
+
+static const struct fpga_manager_ops xmgmt_pr_ops = {
+	.initial_header_size = sizeof(struct axlf),
+	.write_init = xmgmt_pr_write_init,
+	.write = xmgmt_pr_write,
+	.write_complete = xmgmt_pr_write_complete,
+	.state = xmgmt_pr_state,
+};
+
+
+struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev)
+{
+	struct xfpga_klass *obj = devm_kzalloc(DEV(pdev), sizeof(struct xfpga_klass),
+					       GFP_KERNEL);
+	struct fpga_manager *fmgr = NULL;
+	int ret = 0;
+
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
+	obj->pdev = pdev;
+	fmgr = fpga_mgr_create(&pdev->dev,
+			       obj->name,
+			       &xmgmt_pr_ops,
+			       obj);
+	if (!fmgr)
+		return ERR_PTR(-ENOMEM);
+
+	ret = fpga_mgr_register(fmgr);
+	if (ret) {
+		fpga_mgr_free(fmgr);
+		return ERR_PTR(ret);
+	}
+	return fmgr;
+}
+
+int xmgmt_fmgr_remove(struct fpga_manager *fmgr)
+{
+	fpga_mgr_unregister(fmgr);
+	return 0;
+}
diff --git a/drivers/fpga/xrt/mgmt/xmgmt-fmgr.h b/drivers/fpga/xrt/mgmt/xmgmt-fmgr.h
new file mode 100644
index 000000000000..2beba649609f
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/xmgmt-fmgr.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2019-2020 Xilinx, Inc.
+ * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
+ *
+ * Authors: Sonal.Santan@xilinx.com
+ */
+
+#ifndef	_XMGMT_FMGR_H_
+#define	_XMGMT_FMGR_H_
+
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/mutex.h>
+
+#include <linux/xrt/xclbin.h>
+
+enum xfpga_sec_level {
+	XFPGA_SEC_NONE = 0,
+	XFPGA_SEC_DEDICATE,
+	XFPGA_SEC_SYSTEM,
+	XFPGA_SEC_MAX = XFPGA_SEC_SYSTEM,
+};
+
+struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev);
+int xmgmt_fmgr_remove(struct fpga_manager *fmgr);
+
+#endif
diff --git a/drivers/fpga/xrt/mgmt/xmgmt-main-impl.h b/drivers/fpga/xrt/mgmt/xmgmt-main-impl.h
new file mode 100644
index 000000000000..73a03d6b0601
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/xmgmt-main-impl.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XMGMT_MAIN_IMPL_H_
+#define	_XMGMT_MAIN_IMPL_H_
+
+#include "subdev.h"
+#include "xmgmt-main.h"
+
+struct fpga_manager;
+extern struct platform_driver xmgmt_main_driver;
+extern struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[];
+
+extern int xmgmt_process_xclbin(struct platform_device *pdev,
+	struct fpga_manager *fmgr, const struct axlf *xclbin, enum provider_kind kind);
+extern void xmgmt_region_cleanup_all(struct platform_device *pdev);
+
+extern int bitstream_axlf_mailbox(struct platform_device *pdev,
+	const void *xclbin);
+extern int xmgmt_hot_reset(struct platform_device *pdev);
+
+/* Get dtb for the specified partition. Caller must vfree the returned dtb. */
+extern char *xmgmt_get_dtb(struct platform_device *pdev,
+	enum provider_kind kind);
+extern char *xmgmt_get_vbnv(struct platform_device *pdev);
+extern int xmgmt_get_provider_uuid(struct platform_device *pdev,
+	enum provider_kind kind, uuid_t *uuid);
+
+#endif	/* _XMGMT_MAIN_IMPL_H_ */
diff --git a/drivers/fpga/xrt/mgmt/xmgmt-main-region.c b/drivers/fpga/xrt/mgmt/xmgmt-main-region.c
new file mode 100644
index 000000000000..1612c8273b50
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/xmgmt-main-region.c
@@ -0,0 +1,476 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2019-2020 Xilinx, Inc.
+ * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
+ *
+ * Authors: Lizhi.Hou@xilinx.com
+ */
+
+#include <linux/uuid.h>
+#include <linux/fpga/fpga-bridge.h>
+#include <linux/fpga/fpga-region.h>
+#include "metadata.h"
+#include "subdev.h"
+#include "subdev/axigate.h"
+#include "xrt-xclbin.h"
+#include "xmgmt-main-impl.h"
+
+struct xmgmt_bridge {
+	struct platform_device *pdev;
+	const char *axigate_name;
+};
+
+struct xmgmt_region {
+	struct platform_device *pdev;
+	struct fpga_region *fregion;
+	uuid_t intf_uuid;
+	struct fpga_bridge *fbridge;
+	int part_inst;
+	uuid_t dep_uuid;
+	struct list_head list;
+};
+
+struct xmgmt_region_match_arg {
+	struct platform_device *pdev;
+	uuid_t *uuids;
+	u32 uuid_num;
+};
+
+static int xmgmt_br_enable_set(struct fpga_bridge *bridge, bool enable)
+{
+	struct xmgmt_bridge *br_data = (struct xmgmt_bridge *)bridge->priv;
+	struct platform_device *axigate_leaf;
+	int rc;
+
+	axigate_leaf = xrt_subdev_get_leaf_by_epname(br_data->pdev,
+		br_data->axigate_name);
+	if (!axigate_leaf) {
+		xrt_err(br_data->pdev, "failed to get leaf %s",
+			br_data->axigate_name);
+		return -ENOENT;
+	}
+
+	if (enable)
+		rc = xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREE, NULL);
+	else
+		rc = xrt_subdev_ioctl(axigate_leaf, XRT_AXIGATE_FREEZE, NULL);
+
+	if (rc) {
+		xrt_err(br_data->pdev, "failed to %s gate %s, rc %d",
+			(enable ? "free" : "freeze"), br_data->axigate_name,
+			rc);
+	}
+
+	xrt_subdev_put_leaf(br_data->pdev, axigate_leaf);
+
+	return rc;
+}
+
+const struct fpga_bridge_ops xmgmt_bridge_ops = {
+	.enable_set = xmgmt_br_enable_set
+};
+
+static void xmgmt_destroy_bridge(struct fpga_bridge *br)
+{
+	struct xmgmt_bridge *br_data = br->priv;
+
+	if (!br_data)
+		return;
+
+	xrt_info(br_data->pdev, "destroy fpga bridge %s",
+		br_data->axigate_name);
+	fpga_bridge_unregister(br);
+
+	devm_kfree(DEV(br_data->pdev), br_data);
+
+	fpga_bridge_free(br);
+}
+
+static struct fpga_bridge *xmgmt_create_bridge(struct platform_device *pdev,
+	char *dtb)
+{
+	struct xmgmt_bridge *br_data;
+	struct fpga_bridge *br = NULL;
+	const char *gate;
+	int rc;
+
+	br_data = devm_kzalloc(DEV(pdev), sizeof(*br_data), GFP_KERNEL);
+	if (!br_data)
+		return NULL;
+	br_data->pdev = pdev;
+
+	br_data->axigate_name = NODE_GATE_ULP;
+	rc = xrt_md_get_epname_pointer(&pdev->dev, dtb, NODE_GATE_ULP,
+		NULL, &gate);
+	if (rc) {
+		br_data->axigate_name = NODE_GATE_PLP;
+		rc = xrt_md_get_epname_pointer(&pdev->dev, dtb, NODE_GATE_PLP,
+			NULL, &gate);
+	}
+	if (rc) {
+		xrt_err(pdev, "failed to get axigate, rc %d", rc);
+		goto failed;
+	}
+
+	br = fpga_bridge_create(DEV(pdev), br_data->axigate_name,
+		&xmgmt_bridge_ops, br_data);
+	if (!br) {
+		xrt_err(pdev, "failed to create bridge");
+		goto failed;
+	}
+
+	rc = fpga_bridge_register(br);
+	if (rc) {
+		xrt_err(pdev, "failed to register bridge, rc %d", rc);
+		goto failed;
+	}
+
+	xrt_info(pdev, "created fpga bridge %s", br_data->axigate_name);
+
+	return br;
+
+failed:
+	if (br)
+		fpga_bridge_free(br);
+	if (br_data)
+		devm_kfree(DEV(pdev), br_data);
+
+	return NULL;
+}
+
+static void xmgmt_destroy_region(struct fpga_region *re)
+{
+	struct xmgmt_region *r_data = re->priv;
+
+	xrt_info(r_data->pdev, "destroy fpga region %llx%llx",
+		re->compat_id->id_l, re->compat_id->id_h);
+
+	fpga_region_unregister(re);
+
+	if (r_data->part_inst > 0)
+		xrt_subdev_destroy_partition(r_data->pdev, r_data->part_inst);
+
+	if (r_data->fbridge)
+		xmgmt_destroy_bridge(r_data->fbridge);
+
+	if (r_data->fregion->info) {
+		fpga_image_info_free(r_data->fregion->info);
+		r_data->fregion->info = NULL;
+	}
+
+	fpga_region_free(re);
+
+	devm_kfree(DEV(r_data->pdev), r_data);
+}
+
+static int xmgmt_region_match(struct device *dev, const void *data)
+{
+	const struct xmgmt_region_match_arg *arg = data;
+	const struct fpga_region *match_re;
+	int i;
+
+	if (dev->parent != &arg->pdev->dev)
+		return false;
+
+	match_re = to_fpga_region(dev);
+	/*
+	 * The device tree provides both parent and child UUIDs for an
+	 * xclbin in one array. Here we try both UUIDs to see if either
+	 * matches the target region's compat_id. Strictly speaking we
+	 * should only match the xclbin's parent UUID against the target
+	 * region's compat_id, but since the UUIDs are unique by design,
+	 * comparing against both does not hurt.
+	 */
+	for (i = 0; i < arg->uuid_num; i++) {
+		if (!memcmp(match_re->compat_id, &arg->uuids[i],
+		    sizeof(*match_re->compat_id)))
+			return true;
+	}
+
+	return false;
+}
+
+static int xmgmt_region_match_base(struct device *dev, const void *data)
+{
+	const struct xmgmt_region_match_arg *arg = data;
+	const struct fpga_region *match_re;
+	const struct xmgmt_region *r_data;
+
+	if (dev->parent != &arg->pdev->dev)
+		return false;
+
+	match_re = to_fpga_region(dev);
+	r_data = match_re->priv;
+	if (uuid_is_null(&r_data->dep_uuid))
+		return true;
+
+	return false;
+}
+
+static int xmgmt_region_match_by_depuuid(struct device *dev, const void *data)
+{
+	const struct xmgmt_region_match_arg *arg = data;
+	const struct fpga_region *match_re;
+	const struct xmgmt_region *r_data;
+
+	if (dev->parent != &arg->pdev->dev)
+		return false;
+
+	match_re = to_fpga_region(dev);
+	r_data = match_re->priv;
+	if (!memcmp(&r_data->dep_uuid, arg->uuids,
+	    sizeof(uuid_t)))
+		return true;
+
+	return false;
+}
+
+static void xmgmt_region_cleanup(struct fpga_region *re)
+{
+	struct xmgmt_region *r_data = re->priv, *temp;
+	struct platform_device *pdev = r_data->pdev;
+	struct fpga_region *match_re = NULL;
+	struct device *start_dev = NULL;
+	struct xmgmt_region_match_arg arg;
+	LIST_HEAD(free_list);
+
+	list_add_tail(&r_data->list, &free_list);
+	arg.pdev = pdev;
+	arg.uuid_num = 1;
+
+	while (r_data != NULL) {
+		arg.uuids = (uuid_t *)r_data->fregion->compat_id;
+		match_re = fpga_region_class_find(start_dev, &arg,
+			xmgmt_region_match_by_depuuid);
+		if (match_re) {
+			r_data = match_re->priv;
+			list_add_tail(&r_data->list, &free_list);
+			start_dev = &match_re->dev;
+			put_device(&match_re->dev);
+			continue;
+		}
+
+		r_data = list_is_last(&r_data->list, &free_list) ? NULL :
+			list_next_entry(r_data, list);
+		start_dev = NULL;
+	}
+
+	list_for_each_entry_safe_reverse(r_data, temp, &free_list, list) {
+		if (list_is_first(&r_data->list, &free_list)) {
+			if (r_data->part_inst > 0) {
+				xrt_subdev_destroy_partition(pdev,
+					r_data->part_inst);
+				r_data->part_inst = -1;
+			}
+			if (r_data->fregion->info) {
+				fpga_image_info_free(r_data->fregion->info);
+				r_data->fregion->info = NULL;
+			}
+			continue;
+		}
+		xmgmt_destroy_region(r_data->fregion);
+	}
+}
+
+void xmgmt_region_cleanup_all(struct platform_device *pdev)
+{
+	struct fpga_region *base_re;
+	struct xmgmt_region_match_arg arg;
+
+	arg.pdev = pdev;
+
+	for (base_re = fpga_region_class_find(NULL, &arg,
+	    xmgmt_region_match_base);
+	    base_re;
+	    base_re = fpga_region_class_find(NULL, &arg,
+	    xmgmt_region_match_base)) {
+		put_device(&base_re->dev);
+
+		xmgmt_region_cleanup(base_re);
+		xmgmt_destroy_region(base_re);
+	}
+}
+
+/*
+ * Program a given region with given xclbin image. Bring up the subdevs and the
+ * partition object to contain the subdevs.
+ */
+static int xmgmt_region_program(struct fpga_region *re, const void *xclbin, char *dtb)
+{
+	struct xmgmt_region *r_data = re->priv;
+	struct platform_device *pdev = r_data->pdev;
+	struct fpga_image_info *info;
+	const struct axlf *xclbin_obj = xclbin;
+	int rc;
+
+	info = fpga_image_info_alloc(&pdev->dev);
+	if (!info)
+		return -ENOMEM;
+
+	info->buf = xclbin;
+	info->count = xclbin_obj->m_header.m_length;
+	info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
+	re->info = info;
+	rc = fpga_region_program_fpga(re);
+	if (rc) {
+		xrt_err(pdev, "programming xclbin failed, rc %d", rc);
+		return rc;
+	}
+
+	/* free bridges to allow reprogram */
+	if (re->get_bridges)
+		fpga_bridges_put(&re->bridge_list);
+
+	/*
+	 * Next, bring up the subdevs for this region, which will be managed
+	 * by its own partition object.
+	 */
+	r_data->part_inst = xrt_subdev_create_partition(pdev, dtb);
+	if (r_data->part_inst < 0) {
+		xrt_err(pdev, "failed to create partition, rc %d",
+			r_data->part_inst);
+		rc = r_data->part_inst;
+		return rc;
+	}
+
+	rc = xrt_subdev_wait_for_partition_bringup(pdev);
+	if (rc)
+		xrt_err(pdev, "partition bringup failed, rc %d", rc);
+	return rc;
+}
+
+static int xmgmt_get_bridges(struct fpga_region *re)
+{
+	struct xmgmt_region *r_data = re->priv;
+	struct device *dev = &r_data->pdev->dev;
+
+	return fpga_bridge_get_to_list(dev, re->info, &re->bridge_list);
+}
+
+/*
+ * Program/create FPGA regions based on the input xclbin file. This is the key
+ * function stitching the flow together:
+ * 1. Identify a matching existing region for this xclbin
+ * 2. Tear down any previous objects for the found region
+ * 3. Program this region with the input xclbin
+ * 4. Iterate over this region's interface uuids to determine if it defines any
+ *    child region, and create an fpga_region for each child region found
+ */
+int xmgmt_process_xclbin(struct platform_device *pdev,
+	struct fpga_manager *fmgr, const struct axlf *xclbin, enum provider_kind kind)
+{
+	struct fpga_region *re, *compat_re = NULL;
+	struct xmgmt_region_match_arg arg;
+	struct xmgmt_region *r_data;
+	char *dtb = NULL;
+	int rc, i;
+
+	rc = xrt_xclbin_get_metadata(DEV(pdev), xclbin, &dtb);
+	if (rc) {
+		xrt_err(pdev, "failed to get dtb: %d", rc);
+		goto failed;
+	}
+
+	xrt_md_get_intf_uuids(DEV(pdev), dtb, &arg.uuid_num, NULL);
+	if (arg.uuid_num == 0) {
+		xrt_err(pdev, "failed to get intf uuid");
+		rc = -EINVAL;
+		goto failed;
+	}
+	arg.uuids = vzalloc(sizeof(uuid_t) * arg.uuid_num);
+	if (!arg.uuids) {
+		rc = -ENOMEM;
+		goto failed;
+	}
+	arg.pdev = pdev;
+
+	xrt_md_get_intf_uuids(DEV(pdev), dtb, &arg.uuid_num, arg.uuids);
+
+	/* if this is not base firmware, search for a compatible region */
+	if (kind != XMGMT_BLP) {
+		compat_re = fpga_region_class_find(NULL, &arg,
+			xmgmt_region_match);
+		if (!compat_re) {
+			xrt_err(pdev, "failed to get compatible region");
+			rc = -ENOENT;
+			goto failed;
+		}
+
+		xmgmt_region_cleanup(compat_re);
+
+		rc = xmgmt_region_program(compat_re, xclbin, dtb);
+		if (rc) {
+			xrt_err(pdev, "failed to program region");
+			goto failed;
+		}
+
+	}
+
+	/* create all the new regions contained in this xclbin */
+	for (i = 0; i < arg.uuid_num; i++) {
+		if (compat_re && !memcmp(compat_re->compat_id, &arg.uuids[i],
+					 sizeof(*compat_re->compat_id)))
+			/* region for this interface already exists */
+			continue;
+		re = fpga_region_create(DEV(pdev), fmgr, xmgmt_get_bridges);
+		if (!re) {
+			xrt_err(pdev, "failed to create fpga region");
+			rc = -EFAULT;
+			goto failed;
+		}
+		r_data = devm_kzalloc(DEV(pdev), sizeof(*r_data), GFP_KERNEL);
+		if (!r_data) {
+			rc = -ENOMEM;
+			fpga_region_free(re);
+			goto failed;
+		}
+		r_data->pdev = pdev;
+		r_data->fregion = re;
+		r_data->part_inst = -1;
+		memcpy(&r_data->intf_uuid, &arg.uuids[i],
+			sizeof(r_data->intf_uuid));
+		if (compat_re) {
+			memcpy(&r_data->dep_uuid, compat_re->compat_id,
+				sizeof(r_data->intf_uuid));
+		}
+		r_data->fbridge = xmgmt_create_bridge(pdev, dtb);
+		if (!r_data->fbridge) {
+			xrt_err(pdev, "failed to create fpga bridge");
+			rc = -EFAULT;
+			devm_kfree(DEV(pdev), r_data);
+			fpga_region_free(re);
+			goto failed;
+		}
+
+		re->compat_id = (struct fpga_compat_id *)&r_data->intf_uuid;
+		re->priv = r_data;
+
+		rc = fpga_region_register(re);
+		if (rc) {
+			xrt_err(pdev, "failed to register fpga region");
+			xmgmt_destroy_bridge(r_data->fbridge);
+			fpga_region_free(re);
+			devm_kfree(DEV(pdev), r_data);
+			goto failed;
+		}
+
+		xrt_info(pdev, "created fpga region %llx%llx",
+			re->compat_id->id_l, re->compat_id->id_h);
+	}
+
+failed:
+	if (compat_re)
+		put_device(&compat_re->dev);
+
+	if (rc) {
+		if (compat_re)
+			xmgmt_region_cleanup(compat_re);
+	}
+
+	if (dtb)
+		vfree(dtb);
+
+	return rc;
+}
diff --git a/drivers/fpga/xrt/mgmt/xmgmt-main.c b/drivers/fpga/xrt/mgmt/xmgmt-main.c
new file mode 100644
index 000000000000..6afeb2d23320
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/xmgmt-main.c
@@ -0,0 +1,738 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA MGMT PF entry point driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Sonal Santan <sonals@xilinx.com>
+ */
+
+#include <linux/firmware.h>
+#include <linux/uaccess.h>
+#include "xrt-xclbin.h"
+#include "metadata.h"
+#include "subdev.h"
+#include <linux/xrt/xmgmt-ioctl.h>
+#include "subdev/gpio.h"
+#include "xmgmt-main.h"
+#include "xmgmt-fmgr.h"
+#include "subdev/icap.h"
+#include "subdev/axigate.h"
+#include "xmgmt-main-impl.h"
+
+#define	XMGMT_MAIN "xmgmt_main"
+
+struct xmgmt_main {
+	struct platform_device *pdev;
+	void *evt_hdl;
+	struct axlf *firmware_blp;
+	struct axlf *firmware_plp;
+	struct axlf *firmware_ulp;
+	bool flash_ready;
+	bool gpio_ready;
+	struct fpga_manager *fmgr;
+	struct mutex busy_mutex;
+
+	uuid_t *blp_intf_uuids;
+	u32 blp_intf_uuid_num;
+};
+
+char *xmgmt_get_vbnv(struct platform_device *pdev)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	const char *vbnv;
+	char *ret;
+	int i;
+
+	if (xmm->firmware_plp)
+		vbnv = xmm->firmware_plp->m_header.m_platformVBNV;
+	else if (xmm->firmware_blp)
+		vbnv = xmm->firmware_blp->m_header.m_platformVBNV;
+	else
+		return NULL;
+
+	ret = kstrdup(vbnv, GFP_KERNEL);
+	if (!ret)
+		return NULL;
+
+	for (i = 0; i < strlen(ret); i++) {
+		if (ret[i] == ':' || ret[i] == '.')
+			ret[i] = '_';
+	}
+	return ret;
+}
+
+static bool xmgmt_main_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	if (id == XRT_SUBDEV_GPIO)
+		return xrt_subdev_has_epname(pdev, arg);
+	else if (id == XRT_SUBDEV_QSPI)
+		return true;
+
+	return false;
+}
+
+static int get_dev_uuid(struct platform_device *pdev, char *uuidstr, size_t len)
+{
+	char uuid[16];
+	struct platform_device *gpio_leaf;
+	struct xrt_gpio_ioctl_rw gpio_arg = { 0 };
+	int err, i, count;
+
+	gpio_leaf = xrt_subdev_get_leaf_by_epname(pdev, NODE_BLP_ROM);
+	if (!gpio_leaf) {
+		xrt_err(pdev, "can not get %s", NODE_BLP_ROM);
+		return -EINVAL;
+	}
+
+	gpio_arg.xgir_id = XRT_GPIO_ROM_UUID;
+	gpio_arg.xgir_buf = uuid;
+	gpio_arg.xgir_len = sizeof(uuid);
+	gpio_arg.xgir_offset = 0;
+	err = xrt_subdev_ioctl(gpio_leaf, XRT_GPIO_READ, &gpio_arg);
+	xrt_subdev_put_leaf(pdev, gpio_leaf);
+	if (err) {
+		xrt_err(pdev, "can not get uuid: %d", err);
+		return err;
+	}
+
+	for (count = 0, i = sizeof(uuid) - sizeof(u32);
+		i >= 0 && len > count; i -= sizeof(u32)) {
+		count += snprintf(uuidstr + count, len - count,
+			"%08x", *(u32 *)&uuid[i]);
+	}
+	return 0;
+}
+
+int xmgmt_hot_reset(struct platform_device *pdev)
+{
+	int ret = xrt_subdev_broadcast_event(pdev, XRT_EVENT_PRE_HOT_RESET);
+
+	if (ret) {
+		xrt_err(pdev, "offline failed, hot reset is canceled");
+		return ret;
+	}
+
+	(void) xrt_subdev_hot_reset(pdev);
+	xrt_subdev_broadcast_event(pdev, XRT_EVENT_POST_HOT_RESET);
+	return 0;
+}
+
+static ssize_t reset_store(struct device *dev,
+	struct device_attribute *da, const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+
+	(void) xmgmt_hot_reset(pdev);
+	return count;
+}
+static DEVICE_ATTR_WO(reset);
+
+static ssize_t VBNV_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	ssize_t ret;
+	char *vbnv;
+	struct platform_device *pdev = to_platform_device(dev);
+
+	vbnv = xmgmt_get_vbnv(pdev);
+	ret = sprintf(buf, "%s\n", vbnv);
+	kfree(vbnv);
+	return ret;
+}
+static DEVICE_ATTR_RO(VBNV);
+
+static ssize_t logic_uuids_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	ssize_t ret;
+	char uuid[80];
+	struct platform_device *pdev = to_platform_device(dev);
+
+	/*
+	 * Get the UUID pointed to by VSEC; it should be the same as the
+	 * logic UUID of the BLP.
+	 * TODO: add PLP logic UUID
+	 */
+	ret = get_dev_uuid(pdev, uuid, sizeof(uuid));
+	if (ret)
+		return ret;
+	ret = sprintf(buf, "%s\n", uuid);
+	return ret;
+}
+static DEVICE_ATTR_RO(logic_uuids);
+
+static inline void uuid2str(const uuid_t *uuid, char *uuidstr, size_t len)
+{
+	int i, p;
+	u8 *u = (u8 *)uuid;
+
+	BUG_ON(sizeof(uuid_t) * 2 + 1 > len);
+	for (p = 0, i = sizeof(uuid_t) - 1; i >= 0; p++, i--)
+		(void) snprintf(&uuidstr[p*2], 3, "%02x", u[i]);
+}
+
+static ssize_t interface_uuids_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	ssize_t ret = 0;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	u32 i;
+
+	/*
+	 * TODO: add PLP interface UUID
+	 */
+	for (i = 0; i < xmm->blp_intf_uuid_num; i++) {
+		char uuidstr[80];
+
+		uuid2str(&xmm->blp_intf_uuids[i], uuidstr, sizeof(uuidstr));
+		ret += sprintf(buf + ret, "%s\n", uuidstr);
+	}
+	return ret;
+}
+static DEVICE_ATTR_RO(interface_uuids);
+
+static struct attribute *xmgmt_main_attrs[] = {
+	&dev_attr_reset.attr,
+	&dev_attr_VBNV.attr,
+	&dev_attr_logic_uuids.attr,
+	&dev_attr_interface_uuids.attr,
+	NULL,
+};
+
+/*
+ * sysfs hook to load an xclbin, primarily used for driver debugging
+ */
+static ssize_t ulp_image_write(struct file *filp, struct kobject *kobj,
+	struct bin_attribute *attr, char *buffer, loff_t off, size_t count)
+{
+	struct xmgmt_main *xmm =
+		dev_get_drvdata(container_of(kobj, struct device, kobj));
+	struct axlf *xclbin;
+	ulong len;
+
+	if (off == 0) {
+		if (count < sizeof(*xclbin)) {
+			xrt_err(xmm->pdev, "count is too small %zu", count);
+			return -EINVAL;
+		}
+
+		if (xmm->firmware_ulp) {
+			vfree(xmm->firmware_ulp);
+			xmm->firmware_ulp = NULL;
+		}
+		xclbin = (struct axlf *)buffer;
+		xmm->firmware_ulp = vmalloc(xclbin->m_header.m_length);
+		if (!xmm->firmware_ulp)
+			return -ENOMEM;
+	} else
+		xclbin = xmm->firmware_ulp;
+
+	len = xclbin->m_header.m_length;
+	if (off + count >= len && off < len) {
+		memcpy((char *)xmm->firmware_ulp + off, buffer, len - off);
+		xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, xmm->firmware_ulp,
+			XMGMT_ULP);
+	} else if (off + count < len) {
+		memcpy((char *)xmm->firmware_ulp + off, buffer, count);
+	}
+
+	return count;
+}
+
+static struct bin_attribute ulp_image_attr = {
+	.attr = {
+		.name = "ulp_image",
+		.mode = 0200
+	},
+	.write = ulp_image_write,
+	.size = 0
+};
+
+static struct bin_attribute *xmgmt_main_bin_attrs[] = {
+	&ulp_image_attr,
+	NULL,
+};
+
+static const struct attribute_group xmgmt_main_attrgroup = {
+	.attrs = xmgmt_main_attrs,
+	.bin_attrs = xmgmt_main_bin_attrs,
+};
+
+static int load_firmware_from_flash(struct platform_device *pdev,
+	struct axlf **fw_buf, size_t *len)
+{
+	return -ENOTSUPP;
+}
+
+static int load_firmware_from_disk(struct platform_device *pdev, struct axlf **fw_buf,
+	size_t *len)
+{
+	char uuid[80];
+	int err = 0;
+	char fw_name[256];
+	const struct firmware *fw;
+
+	err = get_dev_uuid(pdev, uuid, sizeof(uuid));
+	if (err)
+		return err;
+
+	(void) snprintf(fw_name,
+		sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
+	xrt_info(pdev, "try loading fw: %s", fw_name);
+
+	err = request_firmware(&fw, fw_name, DEV(pdev));
+	if (err)
+		return err;
+
+	*fw_buf = vmalloc(fw->size);
+	*len = fw->size;
+	if (*fw_buf != NULL)
+		memcpy(*fw_buf, fw->data, fw->size);
+	else
+		err = -ENOMEM;
+
+	release_firmware(fw);
+	return err;
+}
+
+static const struct axlf *xmgmt_get_axlf_firmware(struct xmgmt_main *xmm,
+	enum provider_kind kind)
+{
+	switch (kind) {
+	case XMGMT_BLP:
+		return xmm->firmware_blp;
+	case XMGMT_PLP:
+		return xmm->firmware_plp;
+	case XMGMT_ULP:
+		return xmm->firmware_ulp;
+	default:
+		xrt_err(xmm->pdev, "unknown axlf kind: %d", kind);
+		return NULL;
+	}
+}
+
+char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	char *dtb = NULL;
+	const struct axlf *provider = xmgmt_get_axlf_firmware(xmm, kind);
+	int rc;
+
+	if (provider == NULL)
+		return dtb;
+
+	rc = xrt_xclbin_get_metadata(DEV(pdev), provider, &dtb);
+	if (rc)
+		xrt_err(pdev, "failed to find dtb: %d", rc);
+	return dtb;
+}
+
+static const char *get_uuid_from_firmware(struct platform_device *pdev,
+	const struct axlf *xclbin)
+{
+	const void *uuid = NULL;
+	const void *uuiddup = NULL;
+	void *dtb = NULL;
+	int rc;
+
+	rc = xrt_xclbin_get_section(xclbin, PARTITION_METADATA, &dtb, NULL);
+	if (rc)
+		return NULL;
+
+	rc = xrt_md_get_prop(DEV(pdev), dtb, NULL, NULL,
+		PROP_LOGIC_UUID, &uuid, NULL);
+	if (!rc)
+		uuiddup = kstrdup(uuid, GFP_KERNEL);
+	vfree(dtb);
+	return uuiddup;
+}
+
+static bool is_valid_firmware(struct platform_device *pdev,
+	const struct axlf *xclbin, size_t fw_len)
+{
+	const char *fw_buf = (const char *)xclbin;
+	size_t axlflen = xclbin->m_header.m_length;
+	const char *fw_uuid;
+	char dev_uuid[80];
+	int err;
+
+	err = get_dev_uuid(pdev, dev_uuid, sizeof(dev_uuid));
+	if (err)
+		return false;
+
+	if (memcmp(fw_buf, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)) != 0) {
+		xrt_err(pdev, "unknown fw format");
+		return false;
+	}
+
+	if (axlflen > fw_len) {
+		xrt_err(pdev, "truncated fw, length: %zu, expect: %zu",
+			fw_len, axlflen);
+		return false;
+	}
+
+	fw_uuid = get_uuid_from_firmware(pdev, xclbin);
+	if (fw_uuid == NULL || strcmp(fw_uuid, dev_uuid) != 0) {
+		xrt_err(pdev, "bad fw UUID: %s, expect: %s",
+			fw_uuid ? fw_uuid : "<none>", dev_uuid);
+		kfree(fw_uuid);
+		return false;
+	}
+
+	kfree(fw_uuid);
+	return true;
+}
+
+int xmgmt_get_provider_uuid(struct platform_device *pdev,
+	enum provider_kind kind, uuid_t *uuid)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	const struct axlf *fwbuf;
+	const char *fw_uuid;
+	int rc = -ENOENT;
+
+	mutex_lock(&xmm->busy_mutex);
+
+	fwbuf = xmgmt_get_axlf_firmware(xmm, kind);
+	if (fwbuf == NULL)
+		goto done;
+
+	fw_uuid = get_uuid_from_firmware(pdev, fwbuf);
+	if (fw_uuid == NULL)
+		goto done;
+
+	rc = xrt_md_uuid_strtoid(DEV(pdev), fw_uuid, uuid);
+	kfree(fw_uuid);
+
+done:
+	mutex_unlock(&xmm->busy_mutex);
+	return rc;
+}
+
+static int xmgmt_create_blp(struct xmgmt_main *xmm)
+{
+	struct platform_device *pdev = xmm->pdev;
+	int rc = 0;
+	char *dtb = NULL;
+	const struct axlf *provider = xmgmt_get_axlf_firmware(xmm, XMGMT_BLP);
+
+	dtb = xmgmt_get_dtb(pdev, XMGMT_BLP);
+	if (dtb) {
+		rc = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr,
+			provider, XMGMT_BLP);
+		if (rc) {
+			xrt_err(pdev, "failed to process BLP: %d", rc);
+			goto failed;
+		}
+
+		rc = xrt_subdev_create_partition(pdev, dtb);
+		if (rc < 0)
+			xrt_err(pdev, "failed to create BLP part: %d", rc);
+		else
+			rc = 0;
+
+		BUG_ON(xmm->blp_intf_uuids);
+		xrt_md_get_intf_uuids(&pdev->dev, dtb,
+			&xmm->blp_intf_uuid_num, NULL);
+		if (xmm->blp_intf_uuid_num > 0) {
+			xmm->blp_intf_uuids = vzalloc(sizeof(uuid_t) *
+				xmm->blp_intf_uuid_num);
+			xrt_md_get_intf_uuids(&pdev->dev, dtb,
+				&xmm->blp_intf_uuid_num, xmm->blp_intf_uuids);
+		}
+	}
+
+failed:
+	vfree(dtb);
+	return rc;
+}
+
+static int xmgmt_main_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+	enum xrt_subdev_id id;
+	int instance;
+	size_t fwlen;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION: {
+		id = esd->xevt_subdev_id;
+		instance = esd->xevt_subdev_instance;
+		xrt_info(pdev, "processing event %d for (%d, %d)",
+			evt, id, instance);
+
+		if (id == XRT_SUBDEV_GPIO)
+			xmm->gpio_ready = true;
+		else if (id == XRT_SUBDEV_QSPI)
+			xmm->flash_ready = true;
+		else
+			BUG_ON(1);
+
+		if (xmm->gpio_ready && xmm->flash_ready) {
+			int rc;
+
+			rc = load_firmware_from_disk(pdev, &xmm->firmware_blp,
+				&fwlen);
+			if (rc != 0) {
+				rc = load_firmware_from_flash(pdev,
+					&xmm->firmware_blp, &fwlen);
+			}
+			if (rc == 0 && is_valid_firmware(pdev,
+			    xmm->firmware_blp, fwlen))
+				(void) xmgmt_create_blp(xmm);
+			else
+				xrt_err(pdev,
+					"failed to find firmware, giving up");
+			xmm->evt_hdl = NULL;
+		}
+		break;
+	}
+	case XRT_EVENT_POST_ATTACH:
+	case XRT_EVENT_PRE_DETACH:
+		break;
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		break;
+	}
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static int xmgmt_main_probe(struct platform_device *pdev)
+{
+	struct xmgmt_main *xmm;
+
+	xrt_info(pdev, "probing...");
+
+	xmm = devm_kzalloc(DEV(pdev), sizeof(*xmm), GFP_KERNEL);
+	if (!xmm)
+		return -ENOMEM;
+
+	xmm->pdev = pdev;
+	xmm->fmgr = xmgmt_fmgr_probe(pdev);
+	if (IS_ERR(xmm->fmgr))
+		return PTR_ERR(xmm->fmgr);
+
+	platform_set_drvdata(pdev, xmm);
+	mutex_init(&xmm->busy_mutex);
+
+	xmm->evt_hdl = xrt_subdev_add_event_cb(pdev,
+		xmgmt_main_leaf_match, NODE_BLP_ROM, xmgmt_main_event_cb);
+
+	/* Ready to handle req thru sysfs nodes. */
+	if (sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup))
+		xrt_err(pdev, "failed to create sysfs group");
+	return 0;
+}
+
+static int xmgmt_main_remove(struct platform_device *pdev)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+
+	/* By now, partition driver should prevent any inter-leaf call. */
+
+	xrt_info(pdev, "leaving...");
+
+	if (xmm->evt_hdl)
+		(void) xrt_subdev_remove_event_cb(pdev, xmm->evt_hdl);
+	vfree(xmm->blp_intf_uuids);
+	vfree(xmm->firmware_blp);
+	vfree(xmm->firmware_plp);
+	vfree(xmm->firmware_ulp);
+	xmgmt_region_cleanup_all(pdev);
+	(void) xmgmt_fmgr_remove(xmm->fmgr);
+	(void) sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup);
+	return 0;
+}
+
+static int
+xmgmt_main_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+	int ret = 0;
+
+	xrt_info(pdev, "handling IOCTL cmd: %d", cmd);
+
+	switch (cmd) {
+	case XRT_MGMT_MAIN_GET_AXLF_SECTION: {
+		struct xrt_mgmt_main_ioctl_get_axlf_section *get =
+			(struct xrt_mgmt_main_ioctl_get_axlf_section *)arg;
+		const struct axlf *firmware =
+			xmgmt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
+
+		if (firmware == NULL) {
+			ret = -ENOENT;
+		} else {
+			ret = xrt_xclbin_get_section(firmware,
+				get->xmmigas_section_kind,
+				&get->xmmigas_section,
+				&get->xmmigas_section_size);
+		}
+		break;
+	}
+	case XRT_MGMT_MAIN_GET_VBNV: {
+		char **vbnv_p = (char **)arg;
+
+		*vbnv_p = xmgmt_get_vbnv(pdev);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unknown cmd: %d", cmd);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+static int xmgmt_main_open(struct inode *inode, struct file *file)
+{
+	struct platform_device *pdev = xrt_devnode_open(inode);
+
+	/* Device may have gone already when we get here. */
+	if (!pdev)
+		return -ENODEV;
+
+	xrt_info(pdev, "opened");
+	file->private_data = platform_get_drvdata(pdev);
+	return 0;
+}
+
+static int xmgmt_main_close(struct inode *inode, struct file *file)
+{
+	struct xmgmt_main *xmm = file->private_data;
+
+	xrt_devnode_close(inode);
+
+	xrt_info(xmm->pdev, "closed");
+	return 0;
+}
+
+/*
+ * Called for xclbin download by either: xclbin load ioctl or
+ * peer request from the userpf driver over mailbox.
+ */
+static int xmgmt_bitstream_axlf_fpga_mgr(struct xmgmt_main *xmm,
+	void *axlf, size_t size)
+{
+	int ret;
+
+	BUG_ON(!mutex_is_locked(&xmm->busy_mutex));
+
+	/*
+	 * Should any error happen during download, we can't trust
+	 * the cached xclbin any more.
+	 */
+	vfree(xmm->firmware_ulp);
+	xmm->firmware_ulp = NULL;
+
+	ret = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, axlf, XMGMT_ULP);
+	if (ret == 0)
+		xmm->firmware_ulp = axlf;
+
+	return ret;
+}
+
+static int bitstream_axlf_ioctl(struct xmgmt_main *xmm, const void __user *arg)
+{
+	void *copy_buffer = NULL;
+	size_t copy_buffer_size = 0;
+	struct xmgmt_ioc_bitstream_axlf ioc_obj = { 0 };
+	struct axlf xclbin_obj = { {0} };
+	int ret = 0;
+
+	if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
+		return -EFAULT;
+	if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin,
+		sizeof(xclbin_obj)))
+		return -EFAULT;
+	if (memcmp(xclbin_obj.m_magic, ICAP_XCLBIN_V2, sizeof(ICAP_XCLBIN_V2)))
+		return -EINVAL;
+
+	copy_buffer_size = xclbin_obj.m_header.m_length;
+	if (copy_buffer_size > MAX_XCLBIN_SIZE)
+		return -EINVAL;
+	copy_buffer = vmalloc(copy_buffer_size);
+	if (copy_buffer == NULL)
+		return -ENOMEM;
+
+	if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
+		vfree(copy_buffer);
+		return -EFAULT;
+	}
+
+	ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
+	if (ret)
+		vfree(copy_buffer);
+
+	return ret;
+}
+
+static long xmgmt_main_ioctl(struct file *filp, unsigned int cmd,
+	unsigned long arg)
+{
+	long result = 0;
+	struct xmgmt_main *xmm = filp->private_data;
+
+	BUG_ON(!xmm);
+
+	if (_IOC_TYPE(cmd) != XMGMT_IOC_MAGIC)
+		return -ENOTTY;
+
+	mutex_lock(&xmm->busy_mutex);
+
+	xrt_info(xmm->pdev, "ioctl cmd %d, arg %ld", cmd, arg);
+	switch (cmd) {
+	case XMGMT_IOCICAPDOWNLOAD_AXLF:
+		result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
+		break;
+	default:
+		result = -ENOTTY;
+		break;
+	}
+
+	mutex_unlock(&xmm->busy_mutex);
+	return result;
+}
+
+struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{ .ep_name = NODE_MGMT_MAIN },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xmgmt_main_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xmgmt_main_leaf_ioctl,
+	},
+	.xsd_file_ops = {
+		.xsf_ops = {
+			.owner = THIS_MODULE,
+			.open = xmgmt_main_open,
+			.release = xmgmt_main_close,
+			.unlocked_ioctl = xmgmt_main_ioctl,
+		},
+		.xsf_dev_name = "xmgmt",
+	},
+};
+
+static const struct platform_device_id xmgmt_main_id_table[] = {
+	{ XMGMT_MAIN, (kernel_ulong_t)&xmgmt_main_data },
+	{ },
+};
+
+struct platform_driver xmgmt_main_driver = {
+	.driver	= {
+		.name    = XMGMT_MAIN,
+	},
+	.probe   = xmgmt_main_probe,
+	.remove  = xmgmt_main_remove,
+	.id_table = xmgmt_main_id_table,
+};
diff --git a/drivers/fpga/xrt/mgmt/xmgmt-root.c b/drivers/fpga/xrt/mgmt/xmgmt-root.c
new file mode 100644
index 000000000000..8cac9c3b60b8
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/xmgmt-root.c
@@ -0,0 +1,375 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+
+#include "xrt-root.h"
+#include "subdev.h"
+#include "xmgmt-main-impl.h"
+#include "metadata.h"
+
+#define	XMGMT_MODULE_NAME	"xmgmt"
+#define	XMGMT_DRIVER_VERSION	"4.0.0"
+
+#define	XMGMT_PDEV(xm)		((xm)->pdev)
+#define	XMGMT_DEV(xm)		(&(XMGMT_PDEV(xm)->dev))
+#define xmgmt_err(xm, fmt, args...)	\
+	dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_warn(xm, fmt, args...)	\
+	dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_info(xm, fmt, args...)	\
+	dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_dbg(xm, fmt, args...)	\
+	dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define	XMGMT_DEV_ID(pdev)			\
+	((pci_domain_nr(pdev->bus) << 16) |	\
+	PCI_DEVID(pdev->bus->number, 0))
+
+static struct class *xmgmt_class;
+static const struct pci_device_id xmgmt_pci_ids[] = {
+	{ PCI_DEVICE(0x10EE, 0xd020), },
+	{ PCI_DEVICE(0x10EE, 0x5020), },
+	{ 0, }
+};
+
+struct xmgmt {
+	struct pci_dev *pdev;
+	struct xroot *root;
+
+	/* save config for pci reset */
+	u32 saved_config[8][16];
+	bool ready;
+};
+
+static int xmgmt_config_pci(struct xmgmt *xm)
+{
+	struct pci_dev *pdev = XMGMT_PDEV(xm);
+	int rc;
+
+	rc = pcim_enable_device(pdev);
+	if (rc < 0) {
+		xmgmt_err(xm, "failed to enable device: %d", rc);
+		return rc;
+	}
+
+	rc = pci_enable_pcie_error_reporting(pdev);
+	if (rc)
+		xmgmt_warn(xm, "failed to enable AER: %d", rc);
+
+	pci_set_master(pdev);
+
+	rc = pcie_get_readrq(pdev);
+	if (rc < 0) {
+		xmgmt_err(xm, "failed to read mrrs %d", rc);
+		return rc;
+	}
+	if (rc > 512) {
+		rc = pcie_set_readrq(pdev, 512);
+		if (rc) {
+			xmgmt_err(xm, "failed to force mrrs %d", rc);
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
+static void xmgmt_save_config_space(struct pci_dev *pdev, u32 *saved_config)
+{
+	int i;
+
+	for (i = 0; i < 16; i++)
+		pci_read_config_dword(pdev, i * 4, &saved_config[i]);
+}
+
+static int xmgmt_match_slot_and_save(struct device *dev, void *data)
+{
+	struct xmgmt *xm = data;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
+		pci_cfg_access_lock(pdev);
+		pci_save_state(pdev);
+		xmgmt_save_config_space(pdev,
+			xm->saved_config[PCI_FUNC(pdev->devfn)]);
+	}
+
+	return 0;
+}
+
+static void xmgmt_pci_save_config_all(struct xmgmt *xm)
+{
+	bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
+}
+
+static void xmgmt_restore_config_space(struct pci_dev *pdev, u32 *config_saved)
+{
+	int i;
+	u32 val;
+
+	for (i = 0; i < 16; i++) {
+		pci_read_config_dword(pdev, i * 4, &val);
+		if (val == config_saved[i])
+			continue;
+
+		pci_write_config_dword(pdev, i * 4, config_saved[i]);
+		pci_read_config_dword(pdev, i * 4, &val);
+		if (val != config_saved[i]) {
+			dev_err(&pdev->dev,
+				 "restore config at %d failed", i * 4);
+		}
+	}
+}
+
+static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
+{
+	struct xmgmt *xm = data;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
+		xmgmt_restore_config_space(pdev,
+			xm->saved_config[PCI_FUNC(pdev->devfn)]);
+
+		pci_restore_state(pdev);
+		pci_cfg_access_unlock(pdev);
+	}
+
+	return 0;
+}
+
+static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
+{
+	bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
+}
+
+void xroot_hot_reset(struct pci_dev *pdev)
+{
+	struct xmgmt *xm = pci_get_drvdata(pdev);
+	struct pci_bus *bus;
+	u8 pci_bctl;
+	u16 pci_cmd, devctl;
+	int i;
+
+	xmgmt_info(xm, "hot reset start");
+
+	xmgmt_pci_save_config_all(xm);
+
+	pci_disable_device(pdev);
+
+	bus = pdev->bus;
+
+	/*
+	 * When flipping the SBR bit, the device can fall off the bus. This is
+	 * usually no problem at all so long as drivers are working properly
+	 * after SBR. However, some systems complain bitterly when the device
+	 * falls off the bus.
+	 * The quick solution is to temporarily disable SERR reporting on the
+	 * switch port during SBR.
+	 */
+
+	pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
+	pci_write_config_word(bus->self, PCI_COMMAND,
+		(pci_cmd & ~PCI_COMMAND_SERR));
+	pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
+	pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL,
+		(devctl & ~PCI_EXP_DEVCTL_FERE));
+	pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
+	pci_bctl |= PCI_BRIDGE_CTL_BUS_RESET;
+	pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
+
+	msleep(100);
+	pci_bctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+	pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
+	ssleep(1);
+
+	pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
+	pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
+
+	pci_enable_device(pdev);
+
+	for (i = 0; i < 300; i++) {
+		pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+		if (pci_cmd != 0xffff)
+			break;
+		msleep(20);
+	}
+
+	xmgmt_info(xm, "waiting for %d ms", i * 20);
+
+	xmgmt_pci_restore_config_all(xm);
+
+	xmgmt_config_pci(xm);
+}
+
+static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
+{
+	char *dtb = NULL;
+	int ret;
+
+	ret = xrt_md_create(DEV(xm->pdev), &dtb);
+	if (ret) {
+		xmgmt_err(xm, "create metadata failed, ret %d", ret);
+		goto failed;
+	}
+
+	ret = xroot_add_simple_node(xm->root, dtb, NODE_TEST);
+	if (ret)
+		goto failed;
+
+	ret = xroot_add_vsec_node(xm->root, dtb);
+	if (ret == -ENOENT) {
+		/*
+		 * We may be dealing with a manufacturing (MFG) board.
+		 * Try vsec-golden, which will bring up all hard-coded leaves
+		 * at hard-coded offsets.
+		 */
+		ret = xroot_add_simple_node(xm->root, dtb, NODE_VSEC_GOLDEN);
+	} else if (ret == 0) {
+		ret = xroot_add_simple_node(xm->root, dtb, NODE_MGMT_MAIN);
+	}
+	if (ret)
+		goto failed;
+
+	*root_dtb = dtb;
+	return 0;
+
+failed:
+	vfree(dtb);
+	return ret;
+}
+
+static ssize_t ready_show(struct device *dev,
+	struct device_attribute *da, char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct xmgmt *xm = pci_get_drvdata(pdev);
+
+	return sprintf(buf, "%d\n", xm->ready);
+}
+static DEVICE_ATTR_RO(ready);
+
+static struct attribute *xmgmt_root_attrs[] = {
+	&dev_attr_ready.attr,
+	NULL
+};
+
+static struct attribute_group xmgmt_root_attr_group = {
+	.attrs = xmgmt_root_attrs,
+};
+
+static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	int ret;
+	struct device *dev = DEV(pdev);
+	struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
+	char *dtb = NULL;
+
+	if (!xm)
+		return -ENOMEM;
+	xm->pdev = pdev;
+	pci_set_drvdata(pdev, xm);
+
+	ret = xmgmt_config_pci(xm);
+	if (ret)
+		goto failed;
+
+	ret = xroot_probe(pdev, &xm->root);
+	if (ret)
+		goto failed;
+
+	ret = xmgmt_create_root_metadata(xm, &dtb);
+	if (ret)
+		goto failed_metadata;
+
+	ret = xroot_create_partition(xm->root, dtb);
+	vfree(dtb);
+	if (ret)
+		xmgmt_err(xm, "failed to create root partition: %d", ret);
+
+	if (!xroot_wait_for_bringup(xm->root))
+		xmgmt_err(xm, "failed to bringup all partitions");
+	else
+		xm->ready = true;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
+	if (ret) {
+		/* Warning instead of failing the probe. */
+		xmgmt_warn(xm, "create xmgmt root attrs failed: %d", ret);
+	}
+
+	xroot_broadcast(xm->root, XRT_EVENT_POST_ATTACH);
+	xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
+	return 0;
+
+failed_metadata:
+	(void) xroot_remove(xm->root);
+failed:
+	pci_set_drvdata(pdev, NULL);
+	return ret;
+}
+
+static void xmgmt_remove(struct pci_dev *pdev)
+{
+	struct xmgmt *xm = pci_get_drvdata(pdev);
+
+	xroot_broadcast(xm->root, XRT_EVENT_PRE_DETACH);
+	sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
+	(void) xroot_remove(xm->root);
+	pci_disable_pcie_error_reporting(xm->pdev);
+	xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
+}
+
+static struct pci_driver xmgmt_driver = {
+	.name = XMGMT_MODULE_NAME,
+	.id_table = xmgmt_pci_ids,
+	.probe = xmgmt_probe,
+	.remove = xmgmt_remove,
+};
+
+static int __init xmgmt_init(void)
+{
+	int res = xrt_subdev_register_external_driver(XRT_SUBDEV_MGMT_MAIN,
+		&xmgmt_main_driver, xrt_mgmt_main_endpoints);
+
+	if (res)
+		return res;
+
+	xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
+	if (IS_ERR(xmgmt_class))
+		return PTR_ERR(xmgmt_class);
+
+	res = pci_register_driver(&xmgmt_driver);
+	if (res) {
+		class_destroy(xmgmt_class);
+		return res;
+	}
+
+	return 0;
+}
+
+static __exit void xmgmt_exit(void)
+{
+	pci_unregister_driver(&xmgmt_driver);
+	class_destroy(xmgmt_class);
+	xrt_subdev_unregister_external_driver(XRT_SUBDEV_MGMT_MAIN);
+}
+
+module_init(xmgmt_init);
+module_exit(xmgmt_exit);
+
+MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
+MODULE_VERSION(XMGMT_DRIVER_VERSION);
+MODULE_AUTHOR("XRT Team <runtime@xilinx.com>");
+MODULE_DESCRIPTION("Xilinx Alveo management function driver");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/xrt/xmgmt-ioctl.h b/include/uapi/linux/xrt/xmgmt-ioctl.h
new file mode 100644
index 000000000000..f949a7c21560
--- /dev/null
+++ b/include/uapi/linux/xrt/xmgmt-ioctl.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0 */
+/*
+ *  Copyright (C) 2015-2020, Xilinx Inc
+ *
+ */
+
+/**
+ * DOC: PCIe Kernel Driver for Management Physical Function
+ * Interfaces exposed by the *xmgmt* driver are defined in this file, *xmgmt-ioctl.h*.
+ * Core functionality provided by the *xmgmt* driver is described in the following table:
+ *
+ * ==== ====================================== ============================== ==================================
+ * #    Functionality                          ioctl request code             data format
+ * ==== ====================================== ============================== ==================================
+ * 1    FPGA image download                    XCLMGMT_IOCICAPDOWNLOAD_AXLF   xmgmt_ioc_bitstream_axlf
+ * 2    CL frequency scaling                   XCLMGMT_IOCFREQSCALE           xmgmt_ioc_freqscaling
+ * ==== ====================================== ============================== ==================================
+ *
+ */
+
+#ifndef _XMGMT_IOCALLS_POSIX_H_
+#define _XMGMT_IOCALLS_POSIX_H_
+
+#include <linux/ioctl.h>
+
+#define XMGMT_IOC_MAGIC	'X'
+#define XMGMT_NUM_SUPPORTED_CLOCKS 4
+
+#define XMGMT_IOC_FREQ_SCALE 0x2
+#define XMGMT_IOC_ICAP_DOWNLOAD_AXLF 0x6
+
+
+/**
+ * struct xmgmt_ioc_bitstream_axlf - load xclbin (AXLF) device image
+ * used with XMGMT_IOCICAPDOWNLOAD_AXLF ioctl
+ *
+ * @xclbin:	Pointer to user's xclbin structure in memory
+ */
+struct xmgmt_ioc_bitstream_axlf {
+	struct axlf *xclbin;
+};
+
+/**
+ * struct xmgmt_ioc_freqscaling - scale frequencies on the board using Xilinx clock wizard
+ * used with XMGMT_IOCFREQSCALE ioctl
+ *
+ * @ocl_region:	        PR region (currently only 0 is supported)
+ * @ocl_target_freq:	Array of requested frequencies; a value of zero in the array means leave that clock untouched
+ */
+struct xmgmt_ioc_freqscaling {
+	unsigned int ocl_region;
+	unsigned short ocl_target_freq[XMGMT_NUM_SUPPORTED_CLOCKS];
+};
+
+#define DATA_CLK			0
+#define KERNEL_CLK			1
+#define SYSTEM_CLK			2
+
+#define XMGMT_IOCICAPDOWNLOAD_AXLF	_IOW(XMGMT_IOC_MAGIC, XMGMT_IOC_ICAP_DOWNLOAD_AXLF, struct xmgmt_ioc_bitstream_axlf)
+#define XMGMT_IOCFREQSCALE		_IOW(XMGMT_IOC_MAGIC, XMGMT_IOC_FREQ_SCALE, struct xmgmt_ioc_freqscaling)
+
+/*
+ * The following definitions are for binary compatibility with the classic XRT management driver
+ */
+
+#define XCLMGMT_IOCICAPDOWNLOAD_AXLF XMGMT_IOCICAPDOWNLOAD_AXLF
+#define XCLMGMT_IOCFREQSCALE XMGMT_IOCFREQSCALE
+
+#define xclmgmt_ioc_bitstream_axlf xmgmt_ioc_bitstream_axlf
+#define xclmgmt_ioc_freqscaling xmgmt_ioc_freqscaling
+
+#endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH V2 XRT Alveo 5/6] fpga: xrt: platform drivers for subsystems in shell partition
  2020-12-17  7:50 [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview Sonal Santan
                   ` (3 preceding siblings ...)
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 4/6] fpga: xrt: XRT Alveo management physical function driver Sonal Santan
@ 2020-12-17  7:50 ` Sonal Santan
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 6/6] fpga: xrt: Kconfig and Makefile updates for XRT drivers Sonal Santan
  5 siblings, 0 replies; 10+ messages in thread
From: Sonal Santan @ 2020-12-17  7:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree, trix, mdf

From: Sonal Santan <sonal.santan@xilinx.com>

Add platform drivers for HW subsystems found in the shell partition.
Each driver implements the interfaces defined by xrt-subdev.h. The
driver instances are created by the parent partition to manage
subsystem instances discovered by walking the device tree. The
platform drivers may populate their own sysfs nodes, create device
nodes if needed, and make calls into the parent or other platform
drivers. The platform drivers can also send and receive events.

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
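For reference, this is the leaf-to-leaf calling pattern these drivers expose
(look up a leaf, issue a driver-specific ioctl command, put the leaf back),
the same pattern xmgmt-main.c already uses for the GPIO and ICAP leaves. It
is a sketch only: XRT_CLOCK_GET and struct xrt_clock_ioctl_get come from
subdev/clock.h below, while XRT_SUBDEV_CLOCK, the caller and its surrounding
context are assumptions.

/* Illustrative sketch: a consumer reading a clock frequency through the
 * subdev framework. The lookup/ioctl/put sequence is the pattern used by
 * this series; XRT_SUBDEV_CLOCK is assumed to be the clock leaf's id.
 */
static int example_read_clock_freq(struct platform_device *pdev, u16 *freq)
{
	struct xrt_clock_ioctl_get get = { 0 };
	struct platform_device *clock_leaf;
	int rc;

	/* find the clock leaf instantiated from the partition metadata */
	clock_leaf = xrt_subdev_get_leaf_by_id(pdev, XRT_SUBDEV_CLOCK,
		PLATFORM_DEVID_NONE);
	if (!clock_leaf)
		return -ENODEV;

	/* leaf-to-leaf call through the driver-defined ioctl interface */
	rc = xrt_subdev_ioctl(clock_leaf, XRT_CLOCK_GET, &get);
	if (!rc)
		*freq = get.freq;

	/* drop the reference taken by the lookup */
	xrt_subdev_put_leaf(pdev, clock_leaf);
	return rc;
}

The platform drivers added in this patch sit on the callee side of this
pattern: each one translates such ioctl commands into register accesses on
its own subsystem.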
 drivers/fpga/xrt/include/subdev/axigate.h  |  31 +
 drivers/fpga/xrt/include/subdev/calib.h    |  28 +
 drivers/fpga/xrt/include/subdev/clkfreq.h  |  21 +
 drivers/fpga/xrt/include/subdev/clock.h    |  29 +
 drivers/fpga/xrt/include/subdev/gpio.h     |  41 ++
 drivers/fpga/xrt/include/subdev/icap.h     |  27 +
 drivers/fpga/xrt/include/subdev/ucs.h      |  22 +
 drivers/fpga/xrt/lib/subdevs/xrt-axigate.c | 298 ++++++++++
 drivers/fpga/xrt/lib/subdevs/xrt-calib.c   | 226 ++++++++
 drivers/fpga/xrt/lib/subdevs/xrt-clkfreq.c | 214 +++++++
 drivers/fpga/xrt/lib/subdevs/xrt-clock.c   | 638 +++++++++++++++++++++
 drivers/fpga/xrt/lib/subdevs/xrt-gpio.c    | 198 +++++++
 drivers/fpga/xrt/lib/subdevs/xrt-icap.c    | 306 ++++++++++
 drivers/fpga/xrt/lib/subdevs/xrt-ucs.c     | 238 ++++++++
 drivers/fpga/xrt/lib/subdevs/xrt-vsec.c    | 337 +++++++++++
 15 files changed, 2654 insertions(+)
 create mode 100644 drivers/fpga/xrt/include/subdev/axigate.h
 create mode 100644 drivers/fpga/xrt/include/subdev/calib.h
 create mode 100644 drivers/fpga/xrt/include/subdev/clkfreq.h
 create mode 100644 drivers/fpga/xrt/include/subdev/clock.h
 create mode 100644 drivers/fpga/xrt/include/subdev/gpio.h
 create mode 100644 drivers/fpga/xrt/include/subdev/icap.h
 create mode 100644 drivers/fpga/xrt/include/subdev/ucs.h
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-axigate.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-calib.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-clkfreq.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-clock.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-gpio.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-icap.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-ucs.c
 create mode 100644 drivers/fpga/xrt/lib/subdevs/xrt-vsec.c

diff --git a/drivers/fpga/xrt/include/subdev/axigate.h b/drivers/fpga/xrt/include/subdev/axigate.h
new file mode 100644
index 000000000000..d26f6d31a948
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev/axigate.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_AXIGATE_H_
+#define	_XRT_AXIGATE_H_
+
+
+#include "subdev.h"
+#include "metadata.h"
+
+/*
+ * AXIGATE driver IOCTL calls.
+ */
+enum xrt_axigate_ioctl_cmd {
+	XRT_AXIGATE_FREEZE = 0,
+	XRT_AXIGATE_FREE,
+};
+
+/* the ep names are in the order of hardware layers */
+static const char * const xrt_axigate_epnames[] = {
+	NODE_GATE_PLP,
+	NODE_GATE_ULP,
+	NULL
+};
+
+#endif	/* _XRT_AXIGATE_H_ */
diff --git a/drivers/fpga/xrt/include/subdev/calib.h b/drivers/fpga/xrt/include/subdev/calib.h
new file mode 100644
index 000000000000..9328f28a83b0
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev/calib.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Cheng Zhen <maxz@xilinx.com>
+ */
+
+#ifndef	_XRT_CALIB_H_
+#define	_XRT_CALIB_H_
+
+#include "subdev.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * Memory calibration driver IOCTL calls.
+ */
+enum xrt_calib_results {
+	XRT_CALIB_UNKNOWN,
+	XRT_CALIB_SUCCEEDED,
+	XRT_CALIB_FAILED,
+};
+
+enum xrt_calib_ioctl_cmd {
+	XRT_CALIB_RESULT = 0,
+};
+
+#endif	/* _XRT_CALIB_H_ */
diff --git a/drivers/fpga/xrt/include/subdev/clkfreq.h b/drivers/fpga/xrt/include/subdev/clkfreq.h
new file mode 100644
index 000000000000..c4ed0e074510
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev/clkfreq.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_CLKFREQ_H_
+#define	_XRT_CLKFREQ_H_
+
+#include "subdev.h"
+
+/*
+ * CLKFREQ driver IOCTL calls.
+ */
+enum xrt_clkfreq_ioctl_cmd {
+	XRT_CLKFREQ_READ = 0,
+};
+
+#endif	/* _XRT_CLKFREQ_H_ */
diff --git a/drivers/fpga/xrt/include/subdev/clock.h b/drivers/fpga/xrt/include/subdev/clock.h
new file mode 100644
index 000000000000..8f0b8954dcdb
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev/clock.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_CLOCK_H_
+#define	_XRT_CLOCK_H_
+
+#include "subdev.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * CLOCK driver IOCTL calls.
+ */
+enum xrt_clock_ioctl_cmd {
+	XRT_CLOCK_SET = 0,
+	XRT_CLOCK_GET,
+	XRT_CLOCK_VERIFY,
+};
+
+struct xrt_clock_ioctl_get {
+	u16 freq;
+	u32 freq_cnter;
+};
+
+#endif	/* _XRT_CLOCK_H_ */
diff --git a/drivers/fpga/xrt/include/subdev/gpio.h b/drivers/fpga/xrt/include/subdev/gpio.h
new file mode 100644
index 000000000000..bb965ee1940c
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev/gpio.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_GPIO_H_
+#define	_XRT_GPIO_H_
+
+#include "subdev.h"
+
+/*
+ * GPIO driver IOCTL calls.
+ */
+enum xrt_gpio_ioctl_cmd {
+	XRT_GPIO_READ = 0,
+	XRT_GPIO_WRITE,
+};
+
+enum xrt_gpio_id {
+	XRT_GPIO_ROM_UUID,
+	XRT_GPIO_DDR_CALIB,
+	XRT_GPIO_GOLDEN_VER,
+	XRT_GPIO_MAX
+};
+
+struct xrt_gpio_ioctl_rw {
+	u32	xgir_id;
+	void	*xgir_buf;
+	u32	xgir_len;
+	u32	xgir_offset;
+};
+
+struct xrt_gpio_ioctl_intf_uuid {
+	u32	xgir_uuid_num;
+	uuid_t	*xgir_uuids;
+};
+
+#endif	/* _XRT_GPIO_H_ */
diff --git a/drivers/fpga/xrt/include/subdev/icap.h b/drivers/fpga/xrt/include/subdev/icap.h
new file mode 100644
index 000000000000..8424743d3280
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev/icap.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_ICAP_H_
+#define	_XRT_ICAP_H_
+
+#include "subdev.h"
+
+/*
+ * ICAP driver IOCTL calls.
+ */
+enum xrt_icap_ioctl_cmd {
+	XRT_ICAP_WRITE = 0,
+	XRT_ICAP_IDCODE,
+};
+
+struct xrt_icap_ioctl_wr {
+	void	*xiiw_bit_data;
+	u32	xiiw_data_len;
+};
+
+#endif	/* _XRT_ICAP_H_ */
diff --git a/drivers/fpga/xrt/include/subdev/ucs.h b/drivers/fpga/xrt/include/subdev/ucs.h
new file mode 100644
index 000000000000..e0ae697b69da
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev/ucs.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *	Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#ifndef	_XRT_UCS_H_
+#define	_XRT_UCS_H_
+
+#include "subdev.h"
+
+/*
+ * UCS driver IOCTL calls.
+ */
+enum xrt_ucs_ioctl_cmd {
+	XRT_UCS_CHECK = 0,
+	XRT_UCS_ENABLE,
+};
+
+#endif	/* _XRT_UCS_H_ */
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-axigate.c b/drivers/fpga/xrt/lib/subdevs/xrt-axigate.c
new file mode 100644
index 000000000000..27dabf388220
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-axigate.c
@@ -0,0 +1,298 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA AXI Gate Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "subdev.h"
+#include "parent.h"
+#include "subdev/axigate.h"
+
+#define XRT_AXIGATE "xrt_axigate"
+
+struct axigate_regs {
+	u32		iag_wr;
+	u32		iag_rsvd;
+	u32		iag_rd;
+} __packed;
+
+struct xrt_axigate {
+	struct platform_device	*pdev;
+	void			*base;
+	struct mutex		gate_lock;
+
+	void			*evt_hdl;
+	const char		*ep_name;
+
+	bool			gate_freezed;
+};
+
+#define reg_rd(g, r)						\
+	ioread32(&((struct axigate_regs *)g->base)->r)
+#define reg_wr(g, v, r)						\
+	iowrite32(v, &((struct axigate_regs *)g->base)->r)
+
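+/*
+ * Freeze/free sequences below: freezing writes 0 to iag_wr, freeing writes
+ * 0x2 followed by 0x3. Each write is followed by a short delay and a read
+ * back of iag_rd.
+ */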
+#define freeze_gate(gate)			\
+	do {					\
+		reg_wr(gate, 0, iag_wr);	\
+		ndelay(500);			\
+		reg_rd(gate, iag_rd);		\
+	} while (0)
+
+#define free_gate(gate)				\
+	do {					\
+		reg_wr(gate, 0x2, iag_wr);	\
+		ndelay(500);			\
+		(void) reg_rd(gate, iag_rd);	\
+		reg_wr(gate, 0x3, iag_wr);	\
+		ndelay(500);			\
+		reg_rd(gate, iag_rd);		\
+	} while (0)
+
+static int xrt_axigate_epname_idx(struct platform_device *pdev)
+{
+	int			i;
+	int			ret;
+	struct resource		*res;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		xrt_err(pdev, "Empty Resource!");
+		return -EINVAL;
+	}
+
+	for (i = 0; xrt_axigate_epnames[i]; i++) {
+		ret = strncmp(xrt_axigate_epnames[i], res->name,
+			strlen(xrt_axigate_epnames[i]) + 1);
+		if (!ret)
+			break;
+	}
+
+	return (xrt_axigate_epnames[i]) ? i : -EINVAL;
+}
+
+static bool xrt_axigate_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	const char		*ep_name = arg;
+	struct resource		*res;
+
+	if (id != XRT_SUBDEV_AXIGATE)
+		return false;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		xrt_err(pdev, "Empty Resource!");
+		return false;
+	}
+
+	if (strncmp(res->name, ep_name, strlen(res->name) + 1))
+		return true;
+
+	return false;
+}
+
+static void xrt_axigate_freeze(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+	u32			freeze = 0;
+
+	gate = platform_get_drvdata(pdev);
+
+	mutex_lock(&gate->gate_lock);
+	freeze = reg_rd(gate, iag_rd);
+	if (freeze) {		/* gate is open */
+		xrt_subdev_broadcast_event(pdev, XRT_EVENT_PRE_GATE_CLOSE);
+		freeze_gate(gate);
+	}
+
+	gate->gate_freezed = true;
+	mutex_unlock(&gate->gate_lock);
+
+	xrt_info(pdev, "freeze gate %s", gate->ep_name);
+}
+
+static void xrt_axigate_free(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+	u32			freeze;
+
+	gate = platform_get_drvdata(pdev);
+
+	mutex_lock(&gate->gate_lock);
+	freeze = reg_rd(gate, iag_rd);
+	if (!freeze) {		/* gate is closed */
+		free_gate(gate);
+		xrt_subdev_broadcast_event_async(pdev,
+			XRT_EVENT_POST_GATE_OPEN, NULL, NULL);
+		/*
+		 * xrt_axigate_free() could be called from an event callback,
+		 * so we can not wait for the broadcast to complete here.
+		 */
+	}
+
+	gate->gate_freezed = false;
+	mutex_unlock(&gate->gate_lock);
+
+	xrt_info(pdev, "free gate %s", gate->ep_name);
+}
+
+static int
+xrt_axigate_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct platform_device *leaf;
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+	enum xrt_subdev_id id;
+	int instance;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION:
+		break;
+	default:
+		return XRT_EVENT_CB_CONTINUE;
+	}
+
+	id = esd->xevt_subdev_id;
+	instance = esd->xevt_subdev_instance;
+
+	/*
+	 * A higher level axigate instance was created; make sure this
+	 * gate is opened. This covers the 1RP flow, which has a PLP
+	 * gate as well.
+	 */
+	leaf = xrt_subdev_get_leaf_by_id(pdev, id, instance);
+	if (leaf) {
+		if (xrt_axigate_epname_idx(leaf) >
+		    xrt_axigate_epname_idx(pdev))
+			xrt_axigate_free(pdev);
+		else
+			xrt_subdev_ioctl(leaf, XRT_AXIGATE_FREE, NULL);
+		xrt_subdev_put_leaf(pdev, leaf);
+	}
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static int
+xrt_axigate_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	switch (cmd) {
+	case XRT_AXIGATE_FREEZE:
+		xrt_axigate_freeze(pdev);
+		break;
+	case XRT_AXIGATE_FREE:
+		xrt_axigate_free(pdev);
+		break;
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int xrt_axigate_remove(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+
+	gate = platform_get_drvdata(pdev);
+
+	if (gate->evt_hdl)
+		xrt_subdev_remove_event_cb(pdev, gate->evt_hdl);
+
+	if (gate->base)
+		iounmap(gate->base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, gate);
+
+	return 0;
+}
+
+static int xrt_axigate_probe(struct platform_device *pdev)
+{
+	struct xrt_axigate	*gate;
+	struct resource		*res;
+	int			ret;
+
+	gate = devm_kzalloc(&pdev->dev, sizeof(*gate), GFP_KERNEL);
+	if (!gate)
+		return -ENOMEM;
+
+	gate->pdev = pdev;
+	platform_set_drvdata(pdev, gate);
+
+	xrt_info(pdev, "probing...");
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		xrt_err(pdev, "Empty resource 0");
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	gate->base = ioremap(res->start, res->end - res->start + 1);
+	if (!gate->base) {
+		xrt_err(pdev, "map base iomem failed");
+		ret = -EFAULT;
+		goto failed;
+	}
+
+	gate->evt_hdl = xrt_subdev_add_event_cb(pdev,
+		xrt_axigate_leaf_match, (void *)res->name,
+		xrt_axigate_event_cb);
+
+	gate->ep_name = res->name;
+
+	mutex_init(&gate->gate_lock);
+
+	return 0;
+
+failed:
+	xrt_axigate_remove(pdev);
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_axigate_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = "ep_pr_isolate_ulp_00" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = "ep_pr_isolate_plp_00" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_axigate_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_axigate_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_axigate_table[] = {
+	{ XRT_AXIGATE, (kernel_ulong_t)&xrt_axigate_data },
+	{ },
+};
+
+struct platform_driver xrt_axigate_driver = {
+	.driver = {
+		.name = XRT_AXIGATE,
+	},
+	.probe = xrt_axigate_probe,
+	.remove = xrt_axigate_remove,
+	.id_table = xrt_axigate_table,
+};
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-calib.c b/drivers/fpga/xrt/lib/subdevs/xrt-calib.c
new file mode 100644
index 000000000000..6108f24a2023
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-calib.c
@@ -0,0 +1,226 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA memory calibration driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * memory calibration
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+#include <linux/delay.h>
+#include "xrt-xclbin.h"
+#include "metadata.h"
+#include "subdev/calib.h"
+
+#define XRT_CALIB	"xrt_calib"
+
+struct calib_cache {
+	struct list_head	link;
+	const char		*ep_name;
+	char			*data;
+	uint32_t		data_size;
+};
+
+struct calib {
+	struct platform_device	*pdev;
+	void			*calib_base;
+	struct mutex		lock;
+	struct list_head	cache_list;
+	uint32_t		cache_num;
+	void			*evt_hdl;
+	enum xrt_calib_results	result;
+};
+
+#define CALIB_DONE(calib)			\
+	(ioread32(calib->calib_base) & BIT(0))
+
+static bool xrt_calib_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	if (id == XRT_SUBDEV_UCS || id == XRT_SUBDEV_SRSR)
+		return true;
+
+	return false;
+}
+
+static void calib_cache_clean_nolock(struct calib *calib)
+{
+	struct calib_cache *cache, *temp;
+
+	list_for_each_entry_safe(cache, temp, &calib->cache_list, link) {
+		vfree(cache->data);
+		list_del(&cache->link);
+		vfree(cache);
+	}
+	calib->cache_num = 0;
+}
+
+static void calib_cache_clean(struct calib *calib)
+{
+	mutex_lock(&calib->lock);
+	calib_cache_clean_nolock(calib);
+	mutex_unlock(&calib->lock);
+}
+
+static int calib_srsr(struct calib *calib, struct platform_device *srsr_leaf)
+{
+	return -ENOTSUPP;
+}
+
+static int calib_calibration(struct calib *calib)
+{
+	int i;
+
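+	/* poll the calibration done bit for up to 20 x 500 ms */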
+	for (i = 0; i < 20; i++) {
+		if (CALIB_DONE(calib))
+			break;
+		msleep(500);
+	}
+
+	if (i == 20) {
+		xrt_err(calib->pdev,
+			"MIG calibration timeout after bitstream download");
+		return -ETIMEDOUT;
+	}
+
+	xrt_info(calib->pdev, "took %dms", i * 500);
+	return 0;
+}
+
+static int xrt_calib_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+	struct calib *calib = platform_get_drvdata(pdev);
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+	struct platform_device *leaf;
+	int ret;
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION: {
+		if (esd->xevt_subdev_id == XRT_SUBDEV_SRSR) {
+			leaf = xrt_subdev_get_leaf_by_id(pdev,
+				XRT_SUBDEV_SRSR, esd->xevt_subdev_instance);
+			BUG_ON(!leaf);
+			ret = calib_srsr(calib, leaf);
+			xrt_subdev_put_leaf(pdev, leaf);
+			calib->result =
+				ret ? XRT_CALIB_FAILED : XRT_CALIB_SUCCEEDED;
+		} else if (esd->xevt_subdev_id == XRT_SUBDEV_UCS) {
+			ret = calib_calibration(calib);
+			calib->result =
+				ret ? XRT_CALIB_FAILED : XRT_CALIB_SUCCEEDED;
+		}
+		break;
+	}
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		break;
+	}
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+int xrt_calib_remove(struct platform_device *pdev)
+{
+	struct calib *calib = platform_get_drvdata(pdev);
+
+	xrt_subdev_remove_event_cb(pdev, calib->evt_hdl);
+	calib_cache_clean(calib);
+
+	if (calib->calib_base)
+		iounmap(calib->calib_base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, calib);
+
+	return 0;
+}
+
+int xrt_calib_probe(struct platform_device *pdev)
+{
+	struct calib *calib;
+	struct resource *res;
+	int err = 0;
+
+	calib = devm_kzalloc(&pdev->dev, sizeof(*calib), GFP_KERNEL);
+	if (!calib)
+		return -ENOMEM;
+
+	calib->pdev = pdev;
+	platform_set_drvdata(pdev, calib);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		goto failed;
+
+	calib->calib_base = ioremap(res->start, res->end - res->start + 1);
+	if (!calib->calib_base) {
+		err = -EIO;
+		xrt_err(pdev, "Map iomem failed");
+		goto failed;
+	}
+
+	calib->evt_hdl = xrt_subdev_add_event_cb(pdev, xrt_calib_leaf_match,
+		NULL, xrt_calib_event_cb);
+
+	mutex_init(&calib->lock);
+	INIT_LIST_HEAD(&calib->cache_list);
+
+	return 0;
+
+failed:
+	xrt_calib_remove(pdev);
+	return err;
+}
+
+static int
+xrt_calib_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct calib *calib = platform_get_drvdata(pdev);
+	int ret = 0;
+
+	switch (cmd) {
+	case XRT_CALIB_RESULT: {
+		enum xrt_calib_results *r = (enum xrt_calib_results *)arg;
+		*r = calib->result;
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		ret = -EINVAL;
+	}
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_calib_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_DDR_CALIB },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_calib_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_calib_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_calib_table[] = {
+	{ XRT_CALIB, (kernel_ulong_t)&xrt_calib_data },
+	{ },
+};
+
+struct platform_driver xrt_calib_driver = {
+	.driver = {
+		.name = XRT_CALIB,
+	},
+	.probe = xrt_calib_probe,
+	.remove = xrt_calib_remove,
+	.id_table = xrt_calib_table,
+};
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-clkfreq.c b/drivers/fpga/xrt/lib/subdevs/xrt-clkfreq.c
new file mode 100644
index 000000000000..d70b668cea3b
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-clkfreq.c
@@ -0,0 +1,214 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Frequency Counter Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "subdev.h"
+#include "parent.h"
+#include "subdev/clkfreq.h"
+
+#define CLKFREQ_ERR(clkfreq, fmt, arg...)   \
+	xrt_err((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_WARN(clkfreq, fmt, arg...)  \
+	xrt_warn((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_INFO(clkfreq, fmt, arg...)  \
+	xrt_info((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_DBG(clkfreq, fmt, arg...)   \
+	xrt_dbg((clkfreq)->pdev, fmt "\n", ##arg)
+
+#define XRT_CLKFREQ		"xrt_clkfreq"
+
+#define OCL_CLKWIZ_STATUS_MASK		0xffff
+
+#define OCL_CLKWIZ_STATUS_MEASURE_START	0x1
+#define OCL_CLKWIZ_STATUS_MEASURE_DONE	0x2
+#define OCL_CLK_FREQ_COUNTER_OFFSET	0x8
+#define OCL_CLK_FREQ_V5_COUNTER_OFFSET	0x10
+#define OCL_CLK_FREQ_V5_CLK0_ENABLED	0x10000
+
+struct clkfreq {
+	struct platform_device	*pdev;
+	void __iomem		*clkfreq_base;
+	const char		*clkfreq_ep_name;
+	struct mutex		clkfreq_lock;
+};
+
+static inline u32 reg_rd(struct clkfreq *clkfreq, u32 offset)
+{
+	return ioread32(clkfreq->clkfreq_base + offset);
+}
+
+static inline void reg_wr(struct clkfreq *clkfreq, u32 val, u32 offset)
+{
+	iowrite32(val, clkfreq->clkfreq_base + offset);
+}
+
+
+static u32 clkfreq_read(struct clkfreq *clkfreq)
+{
+	u32 freq = 0, status;
+	int times = 10;
+
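+	/* start a measurement and poll up to 10 x 1 ms for the DONE status */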
+	mutex_lock(&clkfreq->clkfreq_lock);
+	reg_wr(clkfreq, OCL_CLKWIZ_STATUS_MEASURE_START, 0);
+	while (times != 0) {
+		status = reg_rd(clkfreq, 0);
+		if ((status & OCL_CLKWIZ_STATUS_MASK) ==
+		    OCL_CLKWIZ_STATUS_MEASURE_DONE)
+			break;
+		mdelay(1);
+		times--;
+	}
+	if (times > 0) {
+		freq = (status & OCL_CLK_FREQ_V5_CLK0_ENABLED) ?
+			reg_rd(clkfreq, OCL_CLK_FREQ_V5_COUNTER_OFFSET) :
+			reg_rd(clkfreq, OCL_CLK_FREQ_COUNTER_OFFSET);
+	}
+	mutex_unlock(&clkfreq->clkfreq_lock);
+
+	return freq;
+}
+
+static ssize_t freq_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct clkfreq *clkfreq = platform_get_drvdata(to_platform_device(dev));
+	u32 freq;
+	ssize_t count;
+
+	freq = clkfreq_read(clkfreq);
+	count = snprintf(buf, 64, "%d\n", freq);
+
+	return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clkfreq_attrs[] = {
+	&dev_attr_freq.attr,
+	NULL,
+};
+
+static struct attribute_group clkfreq_attr_group = {
+	.attrs = clkfreq_attrs,
+};
+
+static int
+xrt_clkfreq_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct clkfreq		*clkfreq;
+	int			ret = 0;
+
+	clkfreq = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_CLKFREQ_READ: {
+		*(u32 *)arg = clkfreq_read(clkfreq);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int clkfreq_remove(struct platform_device *pdev)
+{
+	struct clkfreq *clkfreq;
+
+	clkfreq = platform_get_drvdata(pdev);
+	if (!clkfreq) {
+		xrt_err(pdev, "driver data is NULL");
+		return -EINVAL;
+	}
+
+	CLKFREQ_INFO(clkfreq, "successfully removed clkfreq subdev");
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, clkfreq);
+
+	return 0;
+}
+
+
+
+static int clkfreq_probe(struct platform_device *pdev)
+{
+	struct clkfreq *clkfreq = NULL;
+	struct resource *res;
+	int ret;
+
+	clkfreq = devm_kzalloc(&pdev->dev, sizeof(*clkfreq), GFP_KERNEL);
+	if (!clkfreq)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, clkfreq);
+	clkfreq->pdev = pdev;
+	mutex_init(&clkfreq->clkfreq_lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	clkfreq->clkfreq_base = ioremap(res->start, res->end - res->start + 1);
+	if (!clkfreq->clkfreq_base) {
+		CLKFREQ_ERR(clkfreq, "map base %pR failed", res);
+		ret = -EFAULT;
+		goto failed;
+	}
+	clkfreq->clkfreq_ep_name = res->name;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &clkfreq_attr_group);
+	if (ret) {
+		CLKFREQ_ERR(clkfreq, "create clkfreq attrs failed: %d", ret);
+		goto failed;
+	}
+
+	CLKFREQ_INFO(clkfreq, "successfully initialized clkfreq subdev");
+
+	return 0;
+
+failed:
+	clkfreq_remove(pdev);
+	return ret;
+}
+
+
+struct xrt_subdev_endpoints xrt_clkfreq_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .regmap_name = "freq_cnt" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_clkfreq_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_clkfreq_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_clkfreq_table[] = {
+	{ XRT_CLKFREQ, (kernel_ulong_t)&xrt_clkfreq_data },
+	{ },
+};
+
+struct platform_driver xrt_clkfreq_driver = {
+	.driver = {
+		.name = XRT_CLKFREQ,
+	},
+	.probe = clkfreq_probe,
+	.remove = clkfreq_remove,
+	.id_table = xrt_clkfreq_table,
+};
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-clock.c b/drivers/fpga/xrt/lib/subdevs/xrt-clock.c
new file mode 100644
index 000000000000..9e3b93d322f8
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-clock.c
@@ -0,0 +1,638 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Wizard Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "subdev.h"
+#include "parent.h"
+#include "subdev/clock.h"
+#include "subdev/clkfreq.h"
+
+/* CLOCK_MAX_NUM_CLOCKS should be a concept from XCLBIN_ in the future */
+#define	CLOCK_MAX_NUM_CLOCKS		4
+#define	OCL_CLKWIZ_STATUS_OFFSET	0x4
+#define	OCL_CLKWIZ_STATUS_MASK		0xffff
+#define	OCL_CLKWIZ_STATUS_MEASURE_START	0x1
+#define	OCL_CLKWIZ_STATUS_MEASURE_DONE	0x2
+#define	OCL_CLKWIZ_CONFIG_OFFSET(n)	(0x200 + 4 * (n))
+#define	CLOCK_DEFAULT_EXPIRE_SECS	1
+
+#define	CLOCK_ERR(clock, fmt, arg...)	\
+	xrt_err((clock)->pdev, fmt "\n", ##arg)
+#define	CLOCK_WARN(clock, fmt, arg...)	\
+	xrt_warn((clock)->pdev, fmt "\n", ##arg)
+#define	CLOCK_INFO(clock, fmt, arg...)	\
+	xrt_info((clock)->pdev, fmt "\n", ##arg)
+#define	CLOCK_DBG(clock, fmt, arg...)	\
+	xrt_dbg((clock)->pdev, fmt "\n", ##arg)
+
+#define XRT_CLOCK	"xrt_clock"
+
+struct clock {
+	struct platform_device  *pdev;
+	void __iomem		*clock_base;
+	struct mutex		clock_lock;
+
+	const char		*clock_ep_name;
+};
+
+/*
+ * Precomputed table with config0 and config2 register values together with
+ * target frequency. The steps are approximately 5 MHz apart. Table is
+ * generated by wiz.pl.
+ */
+static const struct xmgmt_ocl_clockwiz {
+	/* target frequency */
+	unsigned short ocl;
+	/* config0 register */
+	unsigned long config0;
+	/* config2 register */
+	unsigned int config2;
+} frequency_table[] = {
+	{/*1275.000*/	10.000,		0x02EE0C01,	0x0001F47F},
+	{/*1575.000*/   15.000,		0x02EE0F01,     0x00000069},
+	{/*1600.000*/   20.000,		0x00001001,     0x00000050},
+	{/*1600.000*/   25.000,		0x00001001,     0x00000040},
+	{/*1575.000*/   30.000,		0x02EE0F01,     0x0001F434},
+	{/*1575.000*/   35.000,		0x02EE0F01,     0x0000002D},
+	{/*1600.000*/   40.000,		0x00001001,     0x00000028},
+	{/*1575.000*/   45.000,		0x02EE0F01,     0x00000023},
+	{/*1600.000*/   50.000,		0x00001001,     0x00000020},
+	{/*1512.500*/   55.000,		0x007D0F01,     0x0001F41B},
+	{/*1575.000*/   60.000,		0x02EE0F01,     0x0000FA1A},
+	{/*1462.500*/   65.000,		0x02710E01,     0x0001F416},
+	{/*1575.000*/   70.000,		0x02EE0F01,     0x0001F416},
+	{/*1575.000*/   75.000,		0x02EE0F01,     0x00000015},
+	{/*1600.000*/   80.000,		0x00001001,     0x00000014},
+	{/*1487.500*/   85.000,		0x036B0E01,     0x0001F411},
+	{/*1575.000*/   90.000,		0x02EE0F01,     0x0001F411},
+	{/*1425.000*/   95.000,		0x00FA0E01,     0x0000000F},
+	{/*1600.000*/   100.000,	0x00001001,     0x00000010},
+	{/*1575.000*/   105.000,	0x02EE0F01,     0x0000000F},
+	{/*1512.500*/   110.000,	0x007D0F01,     0x0002EE0D},
+	{/*1437.500*/   115.000,	0x01770E01,     0x0001F40C},
+	{/*1575.000*/   120.000,	0x02EE0F01,     0x00007D0D},
+	{/*1562.500*/   125.000,	0x02710F01,     0x0001F40C},
+	{/*1462.500*/   130.000,	0x02710E01,     0x0000FA0B},
+	{/*1350.000*/   135.000,	0x01F40D01,     0x0000000A},
+	{/*1575.000*/   140.000,	0x02EE0F01,     0x0000FA0B},
+	{/*1450.000*/   145.000,	0x01F40E01,     0x0000000A},
+	{/*1575.000*/   150.000,	0x02EE0F01,     0x0001F40A},
+	{/*1550.000*/   155.000,	0x01F40F01,     0x0000000A},
+	{/*1600.000*/   160.000,	0x00001001,     0x0000000A},
+	{/*1237.500*/   165.000,	0x01770C01,     0x0001F407},
+	{/*1487.500*/   170.000,	0x036B0E01,     0x0002EE08},
+	{/*1575.000*/   175.000,	0x02EE0F01,     0x00000009},
+	{/*1575.000*/   180.000,	0x02EE0F01,     0x0002EE08},
+	{/*1387.500*/   185.000,	0x036B0D01,     0x0001F407},
+	{/*1425.000*/   190.000,	0x00FA0E01,     0x0001F407},
+	{/*1462.500*/   195.000,	0x02710E01,     0x0001F407},
+	{/*1600.000*/   200.000,	0x00001001,     0x00000008},
+	{/*1537.500*/   205.000,        0x01770F01,     0x0001F407},
+	{/*1575.000*/   210.000,        0x02EE0F01,     0x0001F407},
+	{/*1075.000*/   215.000,        0x02EE0A01,     0x00000005},
+	{/*1512.500*/   220.000,        0x007D0F01,     0x00036B06},
+	{/*1575.000*/   225.000,        0x02EE0F01,     0x00000007},
+	{/*1437.500*/   230.000,        0x01770E01,     0x0000FA06},
+	{/*1175.000*/   235.000,        0x02EE0B01,     0x00000005},
+	{/*1500.000*/   240.000,        0x00000F01,     0x0000FA06},
+	{/*1225.000*/   245.000,        0x00FA0C01,     0x00000005},
+	{/*1562.500*/   250.000,        0x02710F01,     0x0000FA06},
+	{/*1275.000*/   255.000,        0x02EE0C01,     0x00000005},
+	{/*1462.500*/   260.000,        0x02710E01,     0x00027105},
+	{/*1325.000*/   265.000,        0x00FA0D01,     0x00000005},
+	{/*1350.000*/   270.000,        0x01F40D01,     0x00000005},
+	{/*1512.500*/   275.000,        0x007D0F01,     0x0001F405},
+	{/*1575.000*/   280.000,        0x02EE0F01,     0x00027105},
+	{/*1425.000*/   285.000,        0x00FA0E01,     0x00000005},
+	{/*1450.000*/   290.000,        0x01F40E01,     0x00000005},
+	{/*1475.000*/   295.000,        0x02EE0E01,     0x00000005},
+	{/*1575.000*/   300.000,        0x02EE0F01,     0x0000FA05},
+	{/*1525.000*/   305.000,        0x00FA0F01,     0x00000005},
+	{/*1550.000*/   310.000,        0x01F40F01,     0x00000005},
+	{/*1575.000*/   315.000,        0x02EE0F01,     0x00000005},
+	{/*1600.000*/   320.000,        0x00001001,     0x00000005},
+	{/*1462.500*/   325.000,        0x02710E01,     0x0001F404},
+	{/*1237.500*/   330.000,        0x01770C01,     0x0002EE03},
+	{/*837.500*/    335.000,        0x01770801,     0x0001F402},
+	{/*1487.500*/   340.000,        0x036B0E01,     0x00017704},
+	{/*862.500*/    345.000,        0x02710801,     0x0001F402},
+	{/*1575.000*/   350.000,        0x02EE0F01,     0x0001F404},
+	{/*887.500*/    355.000,        0x036B0801,     0x0001F402},
+	{/*1575.000*/   360.000,        0x02EE0F01,     0x00017704},
+	{/*912.500*/    365.000,        0x007D0901,     0x0001F402},
+	{/*1387.500*/   370.000,        0x036B0D01,     0x0002EE03},
+	{/*1500.000*/   375.000,        0x00000F01,     0x00000004},
+	{/*1425.000*/   380.000,        0x00FA0E01,     0x0002EE03},
+	{/*962.500*/    385.000,        0x02710901,     0x0001F402},
+	{/*1462.500*/   390.000,        0x02710E01,     0x0002EE03},
+	{/*987.500*/    395.000,        0x036B0901,     0x0001F402},
+	{/*1600.000*/   400.000,        0x00001001,     0x00000004},
+	{/*1012.500*/   405.000,        0x007D0A01,     0x0001F402},
+	{/*1537.500*/   410.000,        0x01770F01,     0x0002EE03},
+	{/*1037.500*/   415.000,        0x01770A01,     0x0001F402},
+	{/*1575.000*/   420.000,        0x02EE0F01,     0x0002EE03},
+	{/*1487.500*/   425.000,        0x036B0E01,     0x0001F403},
+	{/*1075.000*/   430.000,        0x02EE0A01,     0x0001F402},
+	{/*1087.500*/   435.000,        0x036B0A01,     0x0001F402},
+	{/*1375.000*/   440.000,        0x02EE0D01,     0x00007D03},
+	{/*1112.500*/   445.000,        0x007D0B01,     0x0001F402},
+	{/*1575.000*/   450.000,        0x02EE0F01,     0x0001F403},
+	{/*1137.500*/   455.000,        0x01770B01,     0x0001F402},
+	{/*1437.500*/   460.000,        0x01770E01,     0x00007D03},
+	{/*1162.500*/   465.000,        0x02710B01,     0x0001F402},
+	{/*1175.000*/   470.000,        0x02EE0B01,     0x0001F402},
+	{/*1425.000*/   475.000,        0x00FA0E01,     0x00000003},
+	{/*1500.000*/   480.000,        0x00000F01,     0x00007D03},
+	{/*1212.500*/   485.000,        0x007D0C01,     0x0001F402},
+	{/*1225.000*/   490.000,        0x00FA0C01,     0x0001F402},
+	{/*1237.500*/   495.000,        0x01770C01,     0x0001F402},
+	{/*1562.500*/   500.000,        0x02710F01,     0x00007D03},
+	{/*1262.500*/   505.000,        0x02710C01,     0x0001F402},
+	{/*1275.000*/   510.000,        0x02EE0C01,     0x0001F402},
+	{/*1287.500*/   515.000,        0x036B0C01,     0x0001F402},
+	{/*1300.000*/   520.000,        0x00000D01,     0x0001F402},
+	{/*1575.000*/   525.000,        0x02EE0F01,     0x00000003},
+	{/*1325.000*/   530.000,        0x00FA0D01,     0x0001F402},
+	{/*1337.500*/   535.000,        0x01770D01,     0x0001F402},
+	{/*1350.000*/   540.000,        0x01F40D01,     0x0001F402},
+	{/*1362.500*/   545.000,        0x02710D01,     0x0001F402},
+	{/*1512.500*/   550.000,        0x007D0F01,     0x0002EE02},
+	{/*1387.500*/   555.000,        0x036B0D01,     0x0001F402},
+	{/*1400.000*/   560.000,        0x00000E01,     0x0001F402},
+	{/*1412.500*/   565.000,        0x007D0E01,     0x0001F402},
+	{/*1425.000*/   570.000,        0x00FA0E01,     0x0001F402},
+	{/*1437.500*/   575.000,        0x01770E01,     0x0001F402},
+	{/*1450.000*/   580.000,        0x01F40E01,     0x0001F402},
+	{/*1462.500*/   585.000,        0x02710E01,     0x0001F402},
+	{/*1475.000*/   590.000,        0x02EE0E01,     0x0001F402},
+	{/*1487.500*/   595.000,        0x036B0E01,     0x0001F402},
+	{/*1575.000*/   600.000,        0x02EE0F01,     0x00027102},
+	{/*1512.500*/   605.000,        0x007D0F01,     0x0001F402},
+	{/*1525.000*/   610.000,        0x00FA0F01,     0x0001F402},
+	{/*1537.500*/   615.000,        0x01770F01,     0x0001F402},
+	{/*1550.000*/   620.000,        0x01F40F01,     0x0001F402},
+	{/*1562.500*/   625.000,        0x02710F01,     0x0001F402},
+	{/*1575.000*/   630.000,        0x02EE0F01,     0x0001F402},
+	{/*1587.500*/   635.000,        0x036B0F01,     0x0001F402},
+	{/*1600.000*/   640.000,        0x00001001,     0x0001F402},
+	{/*1290.000*/   645.000,        0x01F44005,     0x00000002},
+	{/*1462.500*/   650.000,        0x02710E01,     0x0000FA02}
+};
+
+static inline u32 reg_rd(struct clock *clock, u32 offset)
+{
+	return ioread32(clock->clock_base + offset);
+}
+
+static inline void reg_wr(struct clock *clock, u32 val, u32 offset)
+{
+	iowrite32(val, clock->clock_base + offset);
+}
+
+static u32 find_matching_freq_config(unsigned short freq,
+	const struct xmgmt_ocl_clockwiz *table, int size)
+{
+	u32 start = 0;
+	u32 end = size - 1;
+	u32 idx = size - 1;
+
+	if (freq < table[0].ocl)
+		return 0;
+
+	if (freq > table[size - 1].ocl)
+		return size - 1;
+
+	while (start < end) {
+		if (freq == table[idx].ocl)
+			break;
+		if (freq < table[idx].ocl)
+			end = idx;
+		else
+			start = idx + 1;
+		idx = start + (end - start) / 2;
+	}
+	if (freq < table[idx].ocl)
+		idx--;
+
+	return idx;
+}
+
+static u32 find_matching_freq(u32 freq,
+	const struct xmgmt_ocl_clockwiz *freq_table, int freq_table_size)
+{
+	int idx = find_matching_freq_config(freq, freq_table, freq_table_size);
+
+	return freq_table[idx].ocl;
+}
+
+static inline int clock_wiz_busy(struct clock *clock, int cycle,
+	int interval)
+{
+	u32 val = 0;
+	int count;
+
+	val = reg_rd(clock, OCL_CLKWIZ_STATUS_OFFSET);
+	for (count = 0; val != 1 && count < cycle; count++) {
+		mdelay(interval);
+		val = reg_rd(clock, OCL_CLKWIZ_STATUS_OFFSET);
+	}
+	if (val != 1) {
+		CLOCK_ERR(clock, "clockwiz is (%u) busy after %d ms",
+			val, cycle * interval);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int get_freq(struct clock *clock, u16 *freq)
+{
+#define XCL_INPUT_FREQ 100
+	const u64 input = XCL_INPUT_FREQ;
+	u32 val;
+	u32 mul0, div0;
+	u32 mul_frac0 = 0;
+	u32 div1;
+	u32 div_frac1 = 0;
+
+	BUG_ON(!mutex_is_locked(&clock->clock_lock));
+
+	val = reg_rd(clock, OCL_CLKWIZ_STATUS_OFFSET);
+	if ((val & 0x1) == 0) {
+		CLOCK_ERR(clock, "clockwiz is busy %x", val);
+		*freq = 0;
+		return -EBUSY;
+	}
+
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(0));
+
+	div0 = val & 0xff;
+	mul0 = (val & 0xff00) >> 8;
+	if (val & BIT(26)) {
+		mul_frac0 = val >> 16;
+		mul_frac0 &= 0x3ff;
+	}
+
+	/*
+	 * Multiply both the numerator (mul0) and the denominator (div0) by
+	 * 1000 to account for the fractional portion of the multiplier.
+	 */
+	mul0 *= 1000;
+	mul0 += mul_frac0;
+	div0 *= 1000;
+
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(2));
+
+	div1 = val & 0xff;
+	if (val & BIT(18)) {
+		div_frac1 = val >> 8;
+		div_frac1 &= 0x3ff;
+	}
+
+	/*
+	 * Multiply both the numerator (mul0) and the denominator (div1) by
+	 * 1000 to account for the fractional portion of the divider.
+	 */
+
+	div1 *= 1000;
+	div1 += div_frac1;
+	div0 *= div1;
+	mul0 *= 1000;
+	if (div0 == 0) {
+		CLOCK_ERR(clock, "clockwiz 0 divider");
+		*freq = 0;
+		return -EINVAL;
+	}
+
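+	/* freq (MHz) = 100 MHz input * mul / (div0 * div1), scaled as above */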
+	*freq = (u16)((input * mul0) / div0);
+
+	return 0;
+}
+
+static int set_freq(struct clock *clock, u16 freq)
+{
+	u32 config;
+	int err;
+	u32 idx = 0;
+	u32 val;
+
+	BUG_ON(!mutex_is_locked(&clock->clock_lock));
+
+	idx = find_matching_freq_config(freq, frequency_table,
+		ARRAY_SIZE(frequency_table));
+
+	CLOCK_INFO(clock, "New: %d Mhz", freq);
+	err = clock_wiz_busy(clock, 20, 50);
+	if (err)
+		return -EBUSY;
+
+	config = frequency_table[idx].config0;
+	reg_wr(clock, config, OCL_CLKWIZ_CONFIG_OFFSET(0));
+
+	config = frequency_table[idx].config2;
+	reg_wr(clock, config, OCL_CLKWIZ_CONFIG_OFFSET(2));
+
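+	/*
+	 * Kick off the reconfiguration by writing 7 then 2 to config
+	 * register 23, then wait for the MMCM/PLL locked status below.
+	 */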
+	mdelay(10);
+	reg_wr(clock, 7, OCL_CLKWIZ_CONFIG_OFFSET(23));
+
+	mdelay(1);
+	reg_wr(clock, 2, OCL_CLKWIZ_CONFIG_OFFSET(23));
+
+	CLOCK_INFO(clock, "clockwiz waiting for locked signal");
+
+	err = clock_wiz_busy(clock, 100, 100);
+	if (err) {
+		CLOCK_ERR(clock, "clockwiz MMCM/PLL did not lock");
+		/* restore */
+		reg_wr(clock, 4, OCL_CLKWIZ_CONFIG_OFFSET(23));
+		mdelay(10);
+		reg_wr(clock, 0, OCL_CLKWIZ_CONFIG_OFFSET(23));
+		return err;
+	}
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(0));
+	CLOCK_INFO(clock, "clockwiz CONFIG(0) 0x%x", val);
+	val = reg_rd(clock, OCL_CLKWIZ_CONFIG_OFFSET(2));
+	CLOCK_INFO(clock, "clockwiz CONFIG(2) 0x%x", val);
+
+	return 0;
+}
+
+static int get_freq_counter(struct clock *clock, u32 *freq)
+{
+	const void *cnter;
+	struct platform_device *cnter_leaf;
+	struct platform_device *pdev = clock->pdev;
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
+	int err = xrt_md_get_prop(DEV(pdev), pdata->xsp_dtb,
+		clock->clock_ep_name, NULL, PROP_CLK_CNT, &cnter, NULL);
+
+	BUG_ON(!mutex_is_locked(&clock->clock_lock));
+
+	if (err) {
+		xrt_err(pdev, "no counter specified");
+		return err;
+	}
+
+	cnter_leaf = xrt_subdev_get_leaf_by_epname(pdev, cnter);
+	if (!cnter_leaf) {
+		xrt_err(pdev, "can't find counter");
+		return -ENOENT;
+	}
+
+	err = xrt_subdev_ioctl(cnter_leaf, XRT_CLKFREQ_READ, freq);
+	if (err)
+		xrt_err(pdev, "can't read counter");
+	xrt_subdev_put_leaf(clock->pdev, cnter_leaf);
+
+	return err;
+}
+
+static int clock_get_freq(struct clock *clock, u16 *freq, u32 *freq_cnter)
+{
+	int err = 0;
+
+	mutex_lock(&clock->clock_lock);
+
+	if (err == 0 && freq)
+		err = get_freq(clock, freq);
+
+	if (err == 0 && freq_cnter)
+		err = get_freq_counter(clock, freq_cnter);
+
+	mutex_unlock(&clock->clock_lock);
+	return err;
+}
+
+static int clock_set_freq(struct clock *clock, u16 freq)
+{
+	int err;
+
+	mutex_lock(&clock->clock_lock);
+	err = set_freq(clock, freq);
+	mutex_unlock(&clock->clock_lock);
+
+	return err;
+}
+
+static int clock_verify_freq(struct clock *clock)
+{
+	int err = 0;
+	u16 freq;
+	u32 lookup_freq, clock_freq_counter, request_in_khz, tolerance;
+
+	mutex_lock(&clock->clock_lock);
+
+	err = get_freq(clock, &freq);
+	if (err) {
+		xrt_err(clock->pdev, "get freq failed, %d", err);
+		goto end;
+	}
+
+	err = get_freq_counter(clock, &clock_freq_counter);
+	if (err) {
+		xrt_err(clock->pdev, "get freq counter failed, %d", err);
+		goto end;
+	}
+
+	lookup_freq = find_matching_freq(freq, frequency_table,
+		ARRAY_SIZE(frequency_table));
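+	/* allow up to 5% (50/1000) deviation between counter and table value */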
+	request_in_khz = lookup_freq * 1000;
+	tolerance = lookup_freq * 50;
+	if (tolerance < abs(clock_freq_counter - request_in_khz)) {
+		CLOCK_ERR(clock,
+		    "set clock(%s) failed, request %ukhz, actual %dkhz",
+		    clock->clock_ep_name, request_in_khz, clock_freq_counter);
+		err = -EDOM;
+	} else {
+		CLOCK_INFO(clock, "verified clock (%s)", clock->clock_ep_name);
+	}
+
+end:
+	mutex_unlock(&clock->clock_lock);
+	return err;
+}
+
+static int clock_init(struct clock *clock)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
+	int err = 0;
+	const u16 *freq;
+
+	err = xrt_md_get_prop(DEV(clock->pdev), pdata->xsp_dtb,
+		clock->clock_ep_name, NULL, PROP_CLK_FREQ,
+		(const void **)&freq, NULL);
+	if (err) {
+		xrt_info(clock->pdev, "no default freq");
+		return 0;
+	}
+
+	mutex_lock(&clock->clock_lock);
+	err = set_freq(clock, be16_to_cpu(*freq));
+	mutex_unlock(&clock->clock_lock);
+
+	return err;
+}
+
+static ssize_t freq_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct clock *clock = platform_get_drvdata(to_platform_device(dev));
+	u16 freq = 0;
+	ssize_t count;
+
+	count = clock_get_freq(clock, &freq, NULL);
+	if (count < 0)
+		return count;
+
+	count = snprintf(buf, 64, "%d\n", freq);
+
+	return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clock_attrs[] = {
+	&dev_attr_freq.attr,
+	NULL,
+};
+
+static struct attribute_group clock_attr_group = {
+	.attrs = clock_attrs,
+};
+
+static int
+xrt_clock_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct clock		*clock;
+	int			ret = 0;
+
+	clock = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_CLOCK_SET: {
+		u16	freq = (u16)(uintptr_t)arg;
+
+		ret = clock_set_freq(clock, freq);
+		break;
+	}
+	case XRT_CLOCK_VERIFY: {
+		ret = clock_verify_freq(clock);
+		break;
+	}
+	case XRT_CLOCK_GET: {
+		struct xrt_clock_ioctl_get *get =
+			(struct xrt_clock_ioctl_get *)arg;
+
+		ret = clock_get_freq(clock, &get->freq, &get->freq_cnter);
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int clock_remove(struct platform_device *pdev)
+{
+	struct clock *clock;
+
+	clock = platform_get_drvdata(pdev);
+	if (!clock) {
+		xrt_err(pdev, "driver data is NULL");
+		return -EINVAL;
+	}
+
+	CLOCK_INFO(clock, "successfully removed Clock subdev");
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, clock);
+
+	return 0;
+}
+
+
+
+static int clock_probe(struct platform_device *pdev)
+{
+	struct clock *clock = NULL;
+	struct resource *res;
+	int ret;
+
+	clock = devm_kzalloc(&pdev->dev, sizeof(*clock), GFP_KERNEL);
+	if (!clock)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, clock);
+	clock->pdev = pdev;
+	mutex_init(&clock->clock_lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	clock->clock_base = ioremap(res->start, res->end - res->start + 1);
+	if (!clock->clock_base) {
+		CLOCK_ERR(clock, "map base %pR failed", res);
+		ret = -EFAULT;
+		goto failed;
+	}
+
+	clock->clock_ep_name = res->name;
+
+	ret = clock_init(clock);
+	if (ret)
+		goto failed;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &clock_attr_group);
+	if (ret) {
+		CLOCK_ERR(clock, "create clock attrs failed: %d", ret);
+		goto failed;
+	}
+
+	CLOCK_INFO(clock, "successfully initialized Clock subdev");
+
+	return 0;
+
+failed:
+	clock_remove(pdev);
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_clock_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .regmap_name = "clkwiz" },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_clock_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_clock_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_clock_table[] = {
+	{ XRT_CLOCK, (kernel_ulong_t)&xrt_clock_data },
+	{ },
+};
+
+struct platform_driver xrt_clock_driver = {
+	.driver = {
+		.name = XRT_CLOCK,
+	},
+	.probe = clock_probe,
+	.remove = clock_remove,
+	.id_table = xrt_clock_table,
+};
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-gpio.c b/drivers/fpga/xrt/lib/subdevs/xrt-gpio.c
new file mode 100644
index 000000000000..358e274a1550
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-gpio.c
@@ -0,0 +1,198 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA GPIO Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "subdev.h"
+#include "parent.h"
+#include "subdev/gpio.h"
+
+#define XRT_GPIO "xrt_gpio"
+
+struct xrt_name_id {
+	char *ep_name;
+	int id;
+};
+
+static struct xrt_name_id name_id[XRT_GPIO_MAX] = {
+	{ NODE_BLP_ROM, XRT_GPIO_ROM_UUID },
+	{ NODE_GOLDEN_VER, XRT_GPIO_GOLDEN_VER },
+};
+
+struct xrt_gpio {
+	struct platform_device	*pdev;
+	void		__iomem *base_addrs[XRT_GPIO_MAX];
+	ulong			sizes[XRT_GPIO_MAX];
+};
+
+static int xrt_gpio_name2id(struct xrt_gpio *gpio, const char *name)
+{
+	int	i;
+
+	for (i = 0; i < XRT_GPIO_MAX && name_id[i].ep_name; i++) {
+		if (!strncmp(name_id[i].ep_name, name,
+		    strlen(name_id[i].ep_name) + 1))
+			return name_id[i].id;
+	}
+
+	return -EINVAL;
+}
+
+static int
+xrt_gpio_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_gpio	*gpio;
+	int			ret = 0;
+
+	gpio = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_GPIO_READ: {
+		struct xrt_gpio_ioctl_rw	*rw_arg = arg;
+		u32				*p_src, *p_dst, i;
+
+		if (rw_arg->xgir_len & 0x3) {
+			xrt_err(pdev, "invalid len %d", rw_arg->xgir_len);
+			return -EINVAL;
+		}
+
+		if (rw_arg->xgir_id >= XRT_GPIO_MAX) {
+			xrt_err(pdev, "invalid id %d", rw_arg->xgir_id);
+			return -EINVAL;
+		}
+
+		p_src = gpio->base_addrs[rw_arg->xgir_id];
+		if (!p_src) {
+			xrt_err(pdev, "io not found, id %d",
+				rw_arg->xgir_id);
+			return -EINVAL;
+		}
+		if (rw_arg->xgir_offset + rw_arg->xgir_len >
+		    gpio->sizes[rw_arg->xgir_id]) {
+			xrt_err(pdev, "invalid argument, off %d, len %d",
+				rw_arg->xgir_offset, rw_arg->xgir_len);
+			return -EINVAL;
+		}
+		p_dst = rw_arg->xgir_buf;
+		for (i = 0; i < rw_arg->xgir_len / sizeof(u32); i++) {
+			u32 val = ioread32(p_src + rw_arg->xgir_offset + i);
+
+			memcpy(p_dst + i, &val, sizeof(u32));
+		}
+		break;
+	}
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int xrt_gpio_remove(struct platform_device *pdev)
+{
+	struct xrt_gpio	*gpio;
+	int			i;
+
+	gpio = platform_get_drvdata(pdev);
+
+	for (i = 0; i < XRT_GPIO_MAX; i++) {
+		if (gpio->base_addrs[i])
+			iounmap(gpio->base_addrs[i]);
+	}
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, gpio);
+
+	return 0;
+}
+
+static int xrt_gpio_probe(struct platform_device *pdev)
+{
+	struct xrt_gpio	*gpio;
+	int			i, id, ret = 0;
+	struct resource		*res;
+
+	gpio = devm_kzalloc(&pdev->dev, sizeof(*gpio), GFP_KERNEL);
+	if (!gpio)
+		return -ENOMEM;
+
+	gpio->pdev = pdev;
+	platform_set_drvdata(pdev, gpio);
+
+	xrt_info(pdev, "probing...");
+	for (i = 0, res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	    res;
+	    res = platform_get_resource(pdev, IORESOURCE_MEM, ++i)) {
+		id = xrt_gpio_name2id(gpio, res->name);
+		if (id < 0) {
+			xrt_err(pdev, "ep %s not found", res->name);
+			continue;
+		}
+		gpio->base_addrs[id] = ioremap(res->start,
+			res->end - res->start + 1);
+		if (!gpio->base_addrs[id]) {
+			xrt_err(pdev, "map base failed %pR", res);
+			ret = -EIO;
+			goto failed;
+		}
+		gpio->sizes[id] = res->end - res->start + 1;
+	}
+
+failed:
+	if (ret)
+		xrt_gpio_remove(pdev);
+
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_gpio_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			/* add name if ep is in same partition */
+			{ .ep_name = NODE_BLP_ROM },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_GOLDEN_VER },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	/* adding ep bundle generates gpio device instance */
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_gpio_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_gpio_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_gpio_table[] = {
+	{ XRT_GPIO, (kernel_ulong_t)&xrt_gpio_data },
+	{ },
+};
+
+struct platform_driver xrt_gpio_driver = {
+	.driver = {
+		.name = XRT_GPIO,
+	},
+	.probe = xrt_gpio_probe,
+	.remove = xrt_gpio_remove,
+	.id_table = xrt_gpio_table,
+};
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-icap.c b/drivers/fpga/xrt/lib/subdevs/xrt-icap.c
new file mode 100644
index 000000000000..3b23afb55d3c
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-icap.c
@@ -0,0 +1,306 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA ICAP Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "subdev.h"
+#include "parent.h"
+#include "subdev/icap.h"
+#include "xrt-xclbin.h"
+
+#define XRT_ICAP "xrt_icap"
+
+#define	ICAP_ERR(icap, fmt, arg...)	\
+	xrt_err((icap)->pdev, fmt "\n", ##arg)
+#define	ICAP_WARN(icap, fmt, arg...)	\
+	xrt_warn((icap)->pdev, fmt "\n", ##arg)
+#define	ICAP_INFO(icap, fmt, arg...)	\
+	xrt_info((icap)->pdev, fmt "\n", ##arg)
+#define	ICAP_DBG(icap, fmt, arg...)	\
+	xrt_dbg((icap)->pdev, fmt "\n", ##arg)
+
+/*
+ * AXI-HWICAP IP register layout
+ */
+struct icap_reg {
+	u32	ir_rsvd1[7];
+	u32	ir_gier;
+	u32	ir_isr;
+	u32	ir_rsvd2;
+	u32	ir_ier;
+	u32	ir_rsvd3[53];
+	u32	ir_wf;
+	u32	ir_rf;
+	u32	ir_sz;
+	u32	ir_cr;
+	u32	ir_sr;
+	u32	ir_wfv;
+	u32	ir_rfo;
+	u32	ir_asr;
+} __packed;
+
+struct icap {
+	struct platform_device	*pdev;
+	struct icap_reg		*icap_regs;
+	struct mutex		icap_lock;
+
+	unsigned int		idcode;
+};
+
+static inline u32 reg_rd(void __iomem *reg)
+{
+	if (!reg)
+		return -1;
+
+	return ioread32(reg);
+}
+
+static inline void reg_wr(void __iomem *reg, u32 val)
+{
+	if (!reg)
+		return;
+
+	iowrite32(val, reg);
+}
+
+static int wait_for_done(struct icap *icap)
+{
+	u32	w;
+	int	i = 0;
+
+	BUG_ON(!mutex_is_locked(&icap->icap_lock));
+	for (i = 0; i < 10; i++) {
+		udelay(5);
+		w = reg_rd(&icap->icap_regs->ir_sr);
+		ICAP_INFO(icap, "XHWICAP_SR: %x", w);
+		if (w & 0x5)
+			return 0;
+	}
+
+	ICAP_ERR(icap, "bitstream download timeout");
+	return -ETIMEDOUT;
+}
+
+static int icap_write(struct icap *icap, const u32 *word_buf, int size)
+{
+	int i;
+	u32 value = 0;
+
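+	/*
+	 * Push the big-endian bitstream words into the write FIFO, then set
+	 * the start bit in the control register and poll until the hardware
+	 * clears it.
+	 */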
+	for (i = 0; i < size; i++) {
+		value = be32_to_cpu(word_buf[i]);
+		reg_wr(&icap->icap_regs->ir_wf, value);
+	}
+
+	reg_wr(&icap->icap_regs->ir_cr, 0x1);
+
+	for (i = 0; i < 20; i++) {
+		value = reg_rd(&icap->icap_regs->ir_cr);
+		if ((value & 0x1) == 0)
+			return 0;
+		ndelay(50);
+	}
+
+	ICAP_ERR(icap, "writing %d dwords timeout", size);
+	return -EIO;
+}
+
+static int bitstream_helper(struct icap *icap, const u32 *word_buffer,
+	u32 word_count)
+{
+	u32 remain_word;
+	u32 word_written = 0;
+	int wr_fifo_vacancy = 0;
+	int err = 0;
+
+	BUG_ON(!mutex_is_locked(&icap->icap_lock));
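+	/* write in chunks no larger than the FIFO vacancy reported by ir_wfv */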
+	for (remain_word = word_count; remain_word > 0;
+		remain_word -= word_written, word_buffer += word_written) {
+		wr_fifo_vacancy = reg_rd(&icap->icap_regs->ir_wfv);
+		if (wr_fifo_vacancy <= 0) {
+			ICAP_ERR(icap, "no vacancy: %d", wr_fifo_vacancy);
+			err = -EIO;
+			break;
+		}
+		word_written = (wr_fifo_vacancy < remain_word) ?
+			wr_fifo_vacancy : remain_word;
+		if (icap_write(icap, word_buffer, word_written) != 0) {
+			ICAP_ERR(icap, "write failed remain %d, written %d",
+					remain_word, word_written);
+			err = -EIO;
+			break;
+		}
+	}
+
+	return err;
+}
+
+static int icap_download(struct icap *icap, const char *buffer,
+	unsigned long length)
+{
+	u32	numCharsRead = DMA_HWICAP_BITFILE_BUFFER_SIZE;
+	u32	byte_read;
+	int	err = 0;
+
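+	/* feed the bitstream in DMA_HWICAP_BITFILE_BUFFER_SIZE chunks */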
+	mutex_lock(&icap->icap_lock);
+	for (byte_read = 0; byte_read < length; byte_read += numCharsRead) {
+		numCharsRead = length - byte_read;
+		if (numCharsRead > DMA_HWICAP_BITFILE_BUFFER_SIZE)
+			numCharsRead = DMA_HWICAP_BITFILE_BUFFER_SIZE;
+
+		err = bitstream_helper(icap, (u32 *)buffer,
+			numCharsRead / sizeof(u32));
+		if (err)
+			goto failed;
+		buffer += numCharsRead;
+	}
+
+	err = wait_for_done(icap);
+
+failed:
+	mutex_unlock(&icap->icap_lock);
+
+	return err;
+}
+
+/*
+ * Run the following sequence of canned commands to obtain IDCODE of the FPGA
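+ * (dummy word, sync word, NOOPs and a type-1 read of the IDCODE register),
+ * then read the IDCODE back out of the read FIFO.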
+ */
+static void icap_probe_chip(struct icap *icap)
+{
+	u32 w;
+
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	reg_wr(&icap->icap_regs->ir_gier, 0x0);
+	w = reg_rd(&icap->icap_regs->ir_wfv);
+	reg_wr(&icap->icap_regs->ir_wf, 0xffffffff);
+	reg_wr(&icap->icap_regs->ir_wf, 0xaa995566);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	reg_wr(&icap->icap_regs->ir_wf, 0x28018001);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	reg_wr(&icap->icap_regs->ir_wf, 0x20000000);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	reg_wr(&icap->icap_regs->ir_cr, 0x1);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	w = reg_rd(&icap->icap_regs->ir_sr);
+	reg_wr(&icap->icap_regs->ir_sz, 0x1);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+	reg_wr(&icap->icap_regs->ir_cr, 0x2);
+	w = reg_rd(&icap->icap_regs->ir_rfo);
+	icap->idcode = reg_rd(&icap->icap_regs->ir_rf);
+	w = reg_rd(&icap->icap_regs->ir_cr);
+}
+
+static int
+xrt_icap_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_icap_ioctl_wr	*wr_arg = arg;
+	struct icap			*icap;
+	int				ret = 0;
+
+	icap = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_ICAP_WRITE:
+		ret = icap_download(icap, wr_arg->xiiw_bit_data,
+				wr_arg->xiiw_data_len);
+		break;
+	case XRT_ICAP_IDCODE:
+		*(u64 *)arg = icap->idcode;
+		break;
+	default:
+		ICAP_ERR(icap, "unknown command %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int xrt_icap_remove(struct platform_device *pdev)
+{
+	struct icap	*icap;
+
+	icap = platform_get_drvdata(pdev);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, icap);
+
+	return 0;
+}
+
+static int xrt_icap_probe(struct platform_device *pdev)
+{
+	struct icap	*icap;
+	int			ret = 0;
+	struct resource		*res;
+
+	icap = devm_kzalloc(&pdev->dev, sizeof(*icap), GFP_KERNEL);
+	if (!icap)
+		return -ENOMEM;
+
+	icap->pdev = pdev;
+	platform_set_drvdata(pdev, icap);
+	mutex_init(&icap->icap_lock);
+
+	xrt_info(pdev, "probing");
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (res != NULL) {
+		icap->icap_regs = ioremap(res->start,
+			res->end - res->start + 1);
+		if (!icap->icap_regs) {
+			xrt_err(pdev, "map base failed %pR", res);
+			ret = -EIO;
+			goto failed;
+		}
+	}
+
+	icap_probe_chip(icap);
+failed:
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_icap_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_FPGA_CONFIG },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_icap_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_icap_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_icap_table[] = {
+	{ XRT_ICAP, (kernel_ulong_t)&xrt_icap_data },
+	{ },
+};
+
+struct platform_driver xrt_icap_driver = {
+	.driver = {
+		.name = XRT_ICAP,
+	},
+	.probe = xrt_icap_probe,
+	.remove = xrt_icap_remove,
+	.id_table = xrt_icap_table,
+};
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-ucs.c b/drivers/fpga/xrt/lib/subdevs/xrt-ucs.c
new file mode 100644
index 000000000000..8ce696491357
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-ucs.c
@@ -0,0 +1,238 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA UCS Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou<Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "subdev.h"
+#include "parent.h"
+#include "subdev/ucs.h"
+#include "subdev/clock.h"
+
+#define UCS_ERR(ucs, fmt, arg...)   \
+	xrt_err((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_WARN(ucs, fmt, arg...)  \
+	xrt_warn((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_INFO(ucs, fmt, arg...)  \
+	xrt_info((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_DBG(ucs, fmt, arg...)   \
+	xrt_dbg((ucs)->pdev, fmt "\n", ##arg)
+
+
+#define XRT_UCS		"xrt_ucs"
+
+#define CHANNEL1_OFFSET			0
+#define CHANNEL2_OFFSET			8
+
+#define CLK_MAX_VALUE			6400
+
+struct ucs_control_status_ch1 {
+	unsigned int shutdown_clocks_latched:1;
+	unsigned int reserved1:15;
+	unsigned int clock_throttling_average:14;
+	unsigned int reserved2:2;
+};
+
+
+struct xrt_ucs {
+	struct platform_device	*pdev;
+	void __iomem		*ucs_base;
+	struct mutex		ucs_lock;
+	void			*evt_hdl;
+};
+
+static inline u32 reg_rd(struct xrt_ucs *ucs, u32 offset)
+{
+	return ioread32(ucs->ucs_base + offset);
+}
+
+static inline void reg_wr(struct xrt_ucs *ucs, u32 val, u32 offset)
+{
+	iowrite32(val, ucs->ucs_base + offset);
+}
+
+static bool xrt_ucs_leaf_match(enum xrt_subdev_id id,
+	struct platform_device *pdev, void *arg)
+{
+	if (id == XRT_SUBDEV_CLOCK)
+		return true;
+
+	return false;
+}
+
+static int xrt_ucs_event_cb(struct platform_device *pdev,
+	enum xrt_events evt, void *arg)
+{
+
+	struct xrt_ucs		*ucs;
+	struct platform_device	*leaf;
+	struct xrt_event_arg_subdev *esd = (struct xrt_event_arg_subdev *)arg;
+
+	ucs = platform_get_drvdata(pdev);
+
+	switch (evt) {
+	case XRT_EVENT_POST_CREATION:
+		break;
+	default:
+		xrt_info(pdev, "ignored event %d", evt);
+		return XRT_EVENT_CB_CONTINUE;
+	}
+
+	leaf = xrt_subdev_get_leaf_by_id(pdev,
+		XRT_SUBDEV_CLOCK, esd->xevt_subdev_instance);
+	BUG_ON(!leaf);
+	xrt_subdev_ioctl(leaf, XRT_CLOCK_VERIFY, NULL);
+	xrt_subdev_put_leaf(pdev, leaf);
+
+	return XRT_EVENT_CB_CONTINUE;
+}
+
+static void ucs_check(struct xrt_ucs *ucs, bool *latched)
+{
+	struct ucs_control_status_ch1 *ucs_status_ch1;
+	u32 status;
+
+	mutex_lock(&ucs->ucs_lock);
+	status = reg_rd(ucs, CHANNEL1_OFFSET);
+	ucs_status_ch1 = (struct ucs_control_status_ch1 *)&status;
+	if (ucs_status_ch1->shutdown_clocks_latched) {
+		UCS_ERR(ucs, "Critical temperature or power event, kernel clocks have been stopped, run 'xbutil validate -q' to continue. See AR 73398 for more details.");
+		/* explicitly indicate reset should be latched */
+		*latched = true;
+	} else if (ucs_status_ch1->clock_throttling_average >
+	    CLK_MAX_VALUE) {
+		UCS_ERR(ucs, "kernel clocks %d exceeds expected maximum value %d.",
+			ucs_status_ch1->clock_throttling_average,
+			CLK_MAX_VALUE);
+	} else if (ucs_status_ch1->clock_throttling_average) {
+		UCS_ERR(ucs, "kernel clocks throttled at %d%%.",
+			(ucs_status_ch1->clock_throttling_average /
+			 (CLK_MAX_VALUE / 100)));
+	}
+	mutex_unlock(&ucs->ucs_lock);
+}
+
+static void ucs_enable(struct xrt_ucs *ucs)
+{
+	reg_wr(ucs, 1, CHANNEL2_OFFSET);
+}
+
+static int
+xrt_ucs_leaf_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	struct xrt_ucs		*ucs;
+	int			ret = 0;
+
+	ucs = platform_get_drvdata(pdev);
+
+	switch (cmd) {
+	case XRT_UCS_CHECK: {
+		ucs_check(ucs, (bool *)arg);
+		break;
+	}
+	case XRT_UCS_ENABLE:
+		ucs_enable(ucs);
+		break;
+	default:
+		xrt_err(pdev, "unsupported cmd %d", cmd);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int ucs_remove(struct platform_device *pdev)
+{
+	struct xrt_ucs *ucs;
+
+	ucs = platform_get_drvdata(pdev);
+	if (!ucs) {
+		xrt_err(pdev, "driver data is NULL");
+		return -EINVAL;
+	}
+
+	xrt_subdev_remove_event_cb(pdev, ucs->evt_hdl);
+	if (ucs->ucs_base)
+		iounmap(ucs->ucs_base);
+
+	platform_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, ucs);
+
+	return 0;
+}
+
+
+
+static int ucs_probe(struct platform_device *pdev)
+{
+	struct xrt_ucs *ucs = NULL;
+	struct resource *res;
+	int ret;
+
+	ucs = devm_kzalloc(&pdev->dev, sizeof(*ucs), GFP_KERNEL);
+	if (!ucs)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, ucs);
+	ucs->pdev = pdev;
+	mutex_init(&ucs->ucs_lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		UCS_ERR(ucs, "failed to get iomem resource");
+		ret = -EINVAL;
+		goto failed;
+	}
+
+	ucs->ucs_base = ioremap(res->start, resource_size(res));
+	if (!ucs->ucs_base) {
+		UCS_ERR(ucs, "map base %pR failed", res);
+		ret = -EIO;
+		goto failed;
+	}
+	ucs_enable(ucs);
+	ucs->evt_hdl = xrt_subdev_add_event_cb(pdev, xrt_ucs_leaf_match,
+		NULL, xrt_ucs_event_cb);
+
+	return 0;
+
+failed:
+	ucs_remove(pdev);
+	return ret;
+}
+
+
+struct xrt_subdev_endpoints xrt_ucs_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names[]) {
+			{ .ep_name = NODE_UCS_CONTROL_STATUS },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_ucs_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_ucs_leaf_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_ucs_table[] = {
+	{ XRT_UCS, (kernel_ulong_t)&xrt_ucs_data },
+	{ },
+};
+
+struct platform_driver xrt_ucs_driver = {
+	.driver = {
+		.name = XRT_UCS,
+	},
+	.probe = ucs_probe,
+	.remove = ucs_remove,
+	.id_table = xrt_ucs_table,
+};
diff --git a/drivers/fpga/xrt/lib/subdevs/xrt-vsec.c b/drivers/fpga/xrt/lib/subdevs/xrt-vsec.c
new file mode 100644
index 000000000000..0ed9e1124588
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdevs/xrt-vsec.c
@@ -0,0 +1,337 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA VSEC Driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ *
+ * Authors:
+ *      Lizhi Hou <Lizhi.Hou@xilinx.com>
+ */
+
+#include <linux/platform_device.h>
+#include "metadata.h"
+#include "subdev.h"
+
+#define XRT_VSEC "xrt_vsec"
+
+#define VSEC_TYPE_UUID		0x50
+#define VSEC_TYPE_FLASH		0x51
+#define VSEC_TYPE_PLATINFO	0x52
+#define VSEC_TYPE_MAILBOX	0x53
+#define VSEC_TYPE_END		0xff
+
+#define VSEC_UUID_LEN		16
+
+struct xrt_vsec_header {
+	u32		format;
+	u32		length;
+	u32		entry_sz;
+	u32		rsvd;
+} __packed;
+
+#define head_rd(g, r)			\
+	ioread32(&((struct xrt_vsec_header *)(g)->base)->r)
+
+#define GET_BAR(entry)		(((entry)->bar_rev >> 4) & 0xf)
+#define GET_BAR_OFF(entry)	((entry)->off_lo | ((u64)(entry)->off_hi << 16))
+#define GET_REV(entry)		((entry)->bar_rev & 0xf)
+
+struct xrt_vsec_entry {
+	u8		type;
+	u8		bar_rev;
+	u16		off_lo;
+	u32		off_hi;
+	u8		ver_type;
+	u8		minor;
+	u8		major;
+	u8		rsvd0;
+	u32		rsvd1;
+} __packed;
+
+#define read_entry(g, i, e)					\
+	do {							\
+		u32 *p = (u32 *)(g->base +			\
+			sizeof(struct xrt_vsec_header) +	\
+			i * sizeof(struct xrt_vsec_entry));	\
+		u32 off;					\
+		for (off = 0;					\
+		    off < sizeof(struct xrt_vsec_entry) / 4;	\
+		    off++)					\
+			*((u32 *)(e) + off) = ioread32(p + off);\
+	} while (0)
+
+struct vsec_device {
+	u8		type;
+	char		*ep_name;
+	ulong		size;
+	char		*regmap;
+};
+
+static struct vsec_device vsec_devs[] = {
+	{
+		.type = VSEC_TYPE_UUID,
+		.ep_name = NODE_BLP_ROM,
+		.size = VSEC_UUID_LEN,
+		.regmap = "vsec-uuid",
+	},
+	{
+		.type = VSEC_TYPE_FLASH,
+		.ep_name = NODE_FLASH_VSEC,
+		.size = 4096,
+		.regmap = "vsec-flash",
+	},
+	{
+		.type = VSEC_TYPE_PLATINFO,
+		.ep_name = NODE_PLAT_INFO,
+		.size = 4,
+		.regmap = "vsec-platinfo",
+	},
+	{
+		.type = VSEC_TYPE_MAILBOX,
+		.ep_name = NODE_MAILBOX_VSEC,
+		.size = 48,
+		.regmap = "vsec-mbx",
+	},
+};
+
+struct xrt_vsec {
+	struct platform_device	*pdev;
+	void			*base;
+	ulong			length;
+
+	char			*metadata;
+	char			uuid[VSEC_UUID_LEN];
+};
+
+static char *type2epname(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+		if (vsec_devs[i].type == type)
+			return (vsec_devs[i].ep_name);
+	}
+
+	return NULL;
+}
+
+static ulong type2size(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+		if (vsec_devs[i].type == type)
+			return (vsec_devs[i].size);
+	}
+
+	return 0;
+}
+
+static char *type2regmap(u32 type)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+		if (vsec_devs[i].type == type)
+			return (vsec_devs[i].regmap);
+	}
+
+	return NULL;
+}
+
+static int xrt_vsec_add_node(struct xrt_vsec *vsec,
+	void *md_blob, struct xrt_vsec_entry *p_entry)
+{
+	struct xrt_md_endpoint ep;
+	char regmap_ver[64];
+	int ret;
+
+	if (!type2epname(p_entry->type))
+		return -EINVAL;
+
+	/*
+	 * VSEC may expose more than one mailbox instance on cards with
+	 * more than one physical function. This is not supported yet;
+	 * assume a single mailbox instance for now.
+	 */
+
+	snprintf(regmap_ver, sizeof(regmap_ver) - 1, "%d-%d.%d.%d",
+		p_entry->ver_type, p_entry->major, p_entry->minor,
+		GET_REV(p_entry));
+	ep.ep_name = type2epname(p_entry->type);
+	ep.bar = GET_BAR(p_entry);
+	ep.bar_off = GET_BAR_OFF(p_entry);
+	ep.size = type2size(p_entry->type);
+	ep.regmap = type2regmap(p_entry->type);
+	ep.regmap_ver = regmap_ver;
+	ret = xrt_md_add_endpoint(DEV(vsec->pdev), vsec->metadata, &ep);
+	if (ret)
+		xrt_err(vsec->pdev, "add ep failed, ret %d", ret);
+
+	return ret;
+}
+
+static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
+{
+	struct xrt_vsec_entry entry;
+	int i, ret;
+
+	ret = xrt_md_create(&vsec->pdev->dev, &vsec->metadata);
+	if (ret) {
+		xrt_err(vsec->pdev, "create metadata failed");
+		return ret;
+	}
+
+	/* compare against length without subtracting to avoid underflow */
+	for (i = 0; sizeof(struct xrt_vsec_header) + i * sizeof(entry) <
+	    vsec->length; i++) {
+		read_entry(vsec, i, &entry);
+		xrt_vsec_add_node(vsec, vsec->metadata, &entry);
+	}
+
+	return 0;
+}
+
+static int xrt_vsec_ioctl(struct platform_device *pdev, u32 cmd, void *arg)
+{
+	return 0;
+}
+
+static int xrt_vsec_mapio(struct xrt_vsec *vsec)
+{
+	struct xrt_subdev_platdata *pdata = DEV_PDATA(vsec->pdev);
+	const u32 *bar;
+	const u64 *bar_off;
+	struct resource *res = NULL;
+	ulong addr;
+	int ret;
+
+	if (!pdata || xrt_md_size(DEV(vsec->pdev), pdata->xsp_dtb) <= 0) {
+		xrt_err(vsec->pdev, "empty metadata");
+		return -EINVAL;
+	}
+
+	ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, NODE_VSEC,
+		NULL, PROP_BAR_IDX, (const void **)&bar, NULL);
+	if (ret) {
+		xrt_err(vsec->pdev, "failed to get bar idx, ret %d", ret);
+		return -EINVAL;
+	}
+
+	ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, NODE_VSEC,
+		NULL, PROP_OFFSET, (const void **)&bar_off, NULL);
+	if (ret) {
+		xrt_err(vsec->pdev, "failed to get bar off, ret %d", ret);
+		return -EINVAL;
+	}
+
+	xrt_info(vsec->pdev, "Map vsec at bar %d, offset 0x%llx",
+		be32_to_cpu(*bar), be64_to_cpu(*bar_off));
+
+	xrt_subdev_get_barres(vsec->pdev, &res, be32_to_cpu(*bar));
+	if (!res) {
+		xrt_err(vsec->pdev, "failed to get bar addr");
+		return -EINVAL;
+	}
+
+	addr = res->start + (ulong)be64_to_cpu(*bar_off);
+
+	vsec->base = ioremap(addr, sizeof(struct xrt_vsec_header));
+	if (!vsec->base) {
+		xrt_err(vsec->pdev, "Map header failed");
+		return -EIO;
+	}
+
+	vsec->length = head_rd(vsec, length);
+	iounmap(vsec->base);
+	vsec->base = ioremap(addr, vsec->length);
+	if (!vsec->base) {
+		xrt_err(vsec->pdev, "map failed");
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int xrt_vsec_remove(struct platform_device *pdev)
+{
+	struct xrt_vsec	*vsec;
+
+	vsec = platform_get_drvdata(pdev);
+
+	if (vsec->base) {
+		iounmap(vsec->base);
+		vsec->base = NULL;
+	}
+
+	vfree(vsec->metadata);
+
+	return 0;
+}
+
+static int xrt_vsec_probe(struct platform_device *pdev)
+{
+	struct xrt_vsec	*vsec;
+	int			ret = 0;
+
+	vsec = devm_kzalloc(&pdev->dev, sizeof(*vsec), GFP_KERNEL);
+	if (!vsec)
+		return -ENOMEM;
+
+	vsec->pdev = pdev;
+	platform_set_drvdata(pdev, vsec);
+
+	ret = xrt_vsec_mapio(vsec);
+	if (ret)
+		goto failed;
+
+	ret = xrt_vsec_create_metadata(vsec);
+	if (ret) {
+		xrt_err(pdev, "create metadata failed, ret %d", ret);
+		goto failed;
+	}
+	ret = xrt_subdev_create_partition(pdev, vsec->metadata);
+	if (ret < 0)
+		xrt_err(pdev, "create partition failed, ret %d", ret);
+	else
+		ret = 0;
+
+failed:
+	if (ret)
+		xrt_vsec_remove(pdev);
+
+	return ret;
+}
+
+struct xrt_subdev_endpoints xrt_vsec_endpoints[] = {
+	{
+		.xse_names = (struct xrt_subdev_ep_names []){
+			{ .ep_name = NODE_VSEC },
+			{ NULL },
+		},
+		.xse_min_ep = 1,
+	},
+	{ 0 },
+};
+
+struct xrt_subdev_drvdata xrt_vsec_data = {
+	.xsd_dev_ops = {
+		.xsd_ioctl = xrt_vsec_ioctl,
+	},
+};
+
+static const struct platform_device_id xrt_vsec_table[] = {
+	{ XRT_VSEC, (kernel_ulong_t)&xrt_vsec_data },
+	{ },
+};
+
+struct platform_driver xrt_vsec_driver = {
+	.driver = {
+		.name = XRT_VSEC,
+	},
+	.probe = xrt_vsec_probe,
+	.remove = xrt_vsec_remove,
+	.id_table = xrt_vsec_table,
+};
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH V2 XRT Alveo 6/6] fpga: xrt: Kconfig and Makefile updates for XRT drivers
  2020-12-17  7:50 [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview Sonal Santan
                   ` (4 preceding siblings ...)
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 5/6] fpga: xrt: platform drivers for subsystems in shell partition Sonal Santan
@ 2020-12-17  7:50 ` Sonal Santan
  2020-12-17 14:55   ` kernel test robot
  5 siblings, 1 reply; 10+ messages in thread
From: Sonal Santan @ 2020-12-17  7:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Sonal Santan, linux-fpga, maxz, lizhih, michal.simek, stefanos,
	devicetree, trix, mdf

From: Sonal Santan <sonal.santan@xilinx.com>

Update the fpga subsystem Kconfig and Makefile and add Kconfig and
Makefile files for the new XRT drivers (xrt-lib and xmgmt).
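
For example, with this patch applied the new drivers can be enabled as
modules with a config fragment along these lines (illustrative only; the
option names come from the Kconfig entries added below):

    CONFIG_FPGA_XRT_LIB=m
    CONFIG_FPGA_XRT_XMGMT=m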

Signed-off-by: Sonal Santan <sonal.santan@xilinx.com>
---
 drivers/fpga/Kconfig           |  2 ++
 drivers/fpga/Makefile          |  4 ++++
 drivers/fpga/xrt/Kconfig       |  7 +++++++
 drivers/fpga/xrt/Makefile      | 21 +++++++++++++++++++++
 drivers/fpga/xrt/lib/Kconfig   | 11 +++++++++++
 drivers/fpga/xrt/lib/Makefile  | 30 ++++++++++++++++++++++++++++++
 drivers/fpga/xrt/mgmt/Kconfig  | 11 +++++++++++
 drivers/fpga/xrt/mgmt/Makefile | 27 +++++++++++++++++++++++++++
 8 files changed, 113 insertions(+)
 create mode 100644 drivers/fpga/xrt/Kconfig
 create mode 100644 drivers/fpga/xrt/Makefile
 create mode 100644 drivers/fpga/xrt/lib/Kconfig
 create mode 100644 drivers/fpga/xrt/lib/Makefile
 create mode 100644 drivers/fpga/xrt/mgmt/Kconfig
 create mode 100644 drivers/fpga/xrt/mgmt/Makefile

diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig
index 7cd5a29fc437..73e4deb20986 100644
--- a/drivers/fpga/Kconfig
+++ b/drivers/fpga/Kconfig
@@ -215,4 +215,6 @@ config FPGA_MGR_ZYNQMP_FPGA
 	  to configure the programmable logic(PL) through PS
 	  on ZynqMP SoC.
 
+source "drivers/fpga/xrt/Kconfig"
+
 endif # FPGA
diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile
index d8e21dfc6778..2b4453ff7c52 100644
--- a/drivers/fpga/Makefile
+++ b/drivers/fpga/Makefile
@@ -46,3 +46,7 @@ dfl-afu-objs += dfl-afu-error.o
 
 # Drivers for FPGAs which implement DFL
 obj-$(CONFIG_FPGA_DFL_PCI)		+= dfl-pci.o
+
+# XRT drivers for Alveo
+obj-$(CONFIG_FPGA_XRT_LIB)		+= xrt/lib/
+obj-$(CONFIG_FPGA_XRT_XMGMT)		+= xrt/mgmt/
diff --git a/drivers/fpga/xrt/Kconfig b/drivers/fpga/xrt/Kconfig
new file mode 100644
index 000000000000..50422f77c6df
--- /dev/null
+++ b/drivers/fpga/xrt/Kconfig
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx Alveo FPGA device configuration
+#
+
+source "drivers/fpga/xrt/lib/Kconfig"
+source "drivers/fpga/xrt/mgmt/Kconfig"
diff --git a/drivers/fpga/xrt/Makefile b/drivers/fpga/xrt/Makefile
new file mode 100644
index 000000000000..19e828cc7af9
--- /dev/null
+++ b/drivers/fpga/xrt/Makefile
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020 Xilinx, Inc. All rights reserved.
+#
+# Authors: Sonal.Santan@xilinx.com
+#
+
+all:
+	$(MAKE) -C lib all
+	$(MAKE) -C mgmt all
+	$(MAKE) lint
+
+tags:
+	../../../../scripts/tags.sh
+
+clean:
+	$(MAKE) -C lib clean
+	$(MAKE) -C mgmt clean
+
+lint:
+	../../../../scripts/lint.sh
diff --git a/drivers/fpga/xrt/lib/Kconfig b/drivers/fpga/xrt/lib/Kconfig
new file mode 100644
index 000000000000..541af91008ee
--- /dev/null
+++ b/drivers/fpga/xrt/lib/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# XRT Alveo FPGA device configuration
+#
+
+config FPGA_XRT_LIB
+	tristate "XRT Alveo Driver Library"
+	depends on HWMON && PCI
+	select LIBFDT
+	help
+	  XRT Alveo FPGA PCIe device driver common library.
diff --git a/drivers/fpga/xrt/lib/Makefile b/drivers/fpga/xrt/lib/Makefile
new file mode 100644
index 000000000000..176e2134171c
--- /dev/null
+++ b/drivers/fpga/xrt/lib/Makefile
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020 Xilinx, Inc. All rights reserved.
+#
+# Authors: Sonal.Santan@xilinx.com
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_LIB) := xrt-lib.o
+
+xrt-lib-objs := 			\
+	xrt-main.o			\
+	xrt-subdev.o			\
+	xrt-cdev.o			\
+	../common/xrt-metadata.o	\
+	subdevs/xrt-partition.o		\
+	subdevs/xrt-vsec.o		\
+	subdevs/xrt-axigate.o		\
+	subdevs/xrt-gpio.o		\
+	subdevs/xrt-icap.o		\
+	subdevs/xrt-clock.o		\
+	subdevs/xrt-clkfreq.o		\
+	subdevs/xrt-ucs.o		\
+	subdevs/xrt-calib.o
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+	-I$(FULL_XRT_PATH)/common \
+	-I$(FULL_DTC_PATH)
diff --git a/drivers/fpga/xrt/mgmt/Kconfig b/drivers/fpga/xrt/mgmt/Kconfig
new file mode 100644
index 000000000000..a2fe7ab21941
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx XRT FPGA device configuration
+#
+
+config FPGA_XRT_XMGMT
+	tristate "Xilinx Alveo Management Driver"
+	depends on HWMON && PCI && FPGA_XRT_LIB
+	select LIBFDT
+	help
+	  XRT Alveo FPGA PCIe device driver for Management Physical Function.
diff --git a/drivers/fpga/xrt/mgmt/Makefile b/drivers/fpga/xrt/mgmt/Makefile
new file mode 100644
index 000000000000..d32698b8bf58
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2019-2020 Xilinx, Inc. All rights reserved.
+#
+# Authors: Sonal.Santan@xilinx.com
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_XMGMT)	+= xmgmt.o
+
+commondir := ../common
+
+xmgmt-objs := xmgmt-root.o			\
+	   xmgmt-main.o				\
+	   xmgmt-fmgr-drv.o      		\
+	   xmgmt-main-region.o			\
+	   $(commondir)/xrt-root.o		\
+	   $(commondir)/xrt-metadata.o		\
+	   $(commondir)/xrt-xclbin.o
+
+
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+	-I$(FULL_XRT_PATH)/common \
+	-I$(FULL_DTC_PATH)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH V2 XRT Alveo 6/6] fpga: xrt: Kconfig and Makefile updates for XRT drivers
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 6/6] fpga: xrt: Kconfig and Makefile updates for XRT drivers Sonal Santan
@ 2020-12-17 14:55   ` kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2020-12-17 14:55 UTC (permalink / raw)
  To: Sonal Santan, linux-kernel
  Cc: kbuild-all, Sonal Santan, linux-fpga, maxz, lizhih, michal.simek,
	stefanos, devicetree, trix, mdf

[-- Attachment #1: Type: text/plain, Size: 10447 bytes --]

Hi Sonal,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linux/master]
[also build test WARNING on linus/master v5.10 next-20201217]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Sonal-Santan/XRT-Alveo-driver-overview/20201217-160048
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 09162bc32c880a791c6c0668ce0745cf7958f576
config: i386-randconfig-r035-20201217 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/3096c9b7caac1243afabb56e8b6c6f752cd1a0de
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Sonal-Santan/XRT-Alveo-driver-overview/20201217-160048
        git checkout 3096c9b7caac1243afabb56e8b6c6f752cd1a0de
        # save the attached .config to linux build tree
        make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/fpga/xrt/lib/xrt-main.c: In function 'xrt_subdev_register_external_driver':
>> drivers/fpga/xrt/lib/xrt-main.c:158:6: warning: variable 'result' set but not used [-Wunused-but-set-variable]
     158 |  int result = 0;
         |      ^~~~~~
--
>> drivers/fpga/xrt/lib/../common/xrt-metadata.c:76:5: warning: no previous prototype for 'xrt_md_add_node' [-Wmissing-prototypes]
      76 | int xrt_md_add_node(struct device *dev, char *blob, int parent_offset,
         |     ^~~~~~~~~~~~~~~
--
   drivers/fpga/xrt/lib/subdevs/xrt-icap.c: In function 'icap_probe_chip':
>> drivers/fpga/xrt/lib/subdevs/xrt-icap.c:181:6: warning: variable 'w' set but not used [-Wunused-but-set-variable]
     181 |  u32 w;
         |      ^
--
>> drivers/fpga/xrt/lib/subdevs/xrt-clock.c:55:1: warning: 'static' is not at beginning of declaration [-Wold-style-declaration]
      55 | const static struct xmgmt_ocl_clockwiz {
         | ^~~~~
--
   drivers/fpga/xrt/lib/subdevs/xrt-ucs.c: In function 'xrt_ucs_event_cb':
>> drivers/fpga/xrt/lib/subdevs/xrt-ucs.c:77:19: warning: variable 'ucs' set but not used [-Wunused-but-set-variable]
      77 |  struct xrt_ucs  *ucs;
         |                   ^~~
--
>> drivers/fpga/xrt/lib/subdevs/xrt-calib.c:125:5: warning: no previous prototype for 'xrt_calib_remove' [-Wmissing-prototypes]
     125 | int xrt_calib_remove(struct platform_device *pdev)
         |     ^~~~~~~~~~~~~~~~
>> drivers/fpga/xrt/lib/subdevs/xrt-calib.c:141:5: warning: no previous prototype for 'xrt_calib_probe' [-Wmissing-prototypes]
     141 | int xrt_calib_probe(struct platform_device *pdev)
         |     ^~~~~~~~~~~~~~~
--
   In file included from include/linux/device.h:15,
                    from drivers/fpga/xrt/mgmt/../common/xrt-xclbin.h:14,
                    from drivers/fpga/xrt/mgmt/xmgmt-main.c:13:
   drivers/fpga/xrt/mgmt/xmgmt-main.c: In function 'ulp_image_write':
>> drivers/fpga/xrt/mgmt/../include/subdev.h:187:20: warning: format '%ld' expects argument of type 'long int', but argument 5 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
     187 |  prt_fn(DEV(pdev), "%s %s: "fmt,   \
         |                    ^~~~~~~~~
   include/linux/dev_printk.h:19:22: note: in definition of macro 'dev_fmt'
      19 | #define dev_fmt(fmt) fmt
         |                      ^~~
   drivers/fpga/xrt/mgmt/../include/subdev.h:187:2: note: in expansion of macro 'dev_err'
     187 |  prt_fn(DEV(pdev), "%s %s: "fmt,   \
         |  ^~~~~~
   drivers/fpga/xrt/mgmt/../include/subdev.h:189:37: note: in expansion of macro 'FMT_PRT'
     189 | #define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
         |                                     ^~~~~~~
   drivers/fpga/xrt/mgmt/xmgmt-main.c:216:4: note: in expansion of macro 'xrt_err'
     216 |    xrt_err(xmm->pdev, "count is too small %ld", count);
         |    ^~~~~~~
   drivers/fpga/xrt/mgmt/xmgmt-main.c:216:45: note: format string is defined here
     216 |    xrt_err(xmm->pdev, "count is too small %ld", count);
         |                                           ~~^
         |                                             |
         |                                             long int
         |                                           %d
   In file included from include/linux/device.h:15,
                    from drivers/fpga/xrt/mgmt/../common/xrt-xclbin.h:14,
                    from drivers/fpga/xrt/mgmt/xmgmt-main.c:13:
   drivers/fpga/xrt/mgmt/xmgmt-main.c: In function 'is_valid_firmware':
>> drivers/fpga/xrt/mgmt/../include/subdev.h:187:20: warning: format '%ld' expects argument of type 'long int', but argument 5 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
     187 |  prt_fn(DEV(pdev), "%s %s: "fmt,   \
         |                    ^~~~~~~~~
   include/linux/dev_printk.h:19:22: note: in definition of macro 'dev_fmt'
      19 | #define dev_fmt(fmt) fmt
         |                      ^~~
   drivers/fpga/xrt/mgmt/../include/subdev.h:187:2: note: in expansion of macro 'dev_err'
     187 |  prt_fn(DEV(pdev), "%s %s: "fmt,   \
         |  ^~~~~~
   drivers/fpga/xrt/mgmt/../include/subdev.h:189:37: note: in expansion of macro 'FMT_PRT'
     189 | #define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
         |                                     ^~~~~~~
   drivers/fpga/xrt/mgmt/xmgmt-main.c:370:3: note: in expansion of macro 'xrt_err'
     370 |   xrt_err(pdev, "truncated fw, length: %ld, expect: %ld",
         |   ^~~~~~~
   drivers/fpga/xrt/mgmt/xmgmt-main.c:370:42: note: format string is defined here
     370 |   xrt_err(pdev, "truncated fw, length: %ld, expect: %ld",
         |                                        ~~^
         |                                          |
         |                                          long int
         |                                        %d
   In file included from include/linux/device.h:15,
                    from drivers/fpga/xrt/mgmt/../common/xrt-xclbin.h:14,
                    from drivers/fpga/xrt/mgmt/xmgmt-main.c:13:
   drivers/fpga/xrt/mgmt/../include/subdev.h:187:20: warning: format '%ld' expects argument of type 'long int', but argument 6 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
     187 |  prt_fn(DEV(pdev), "%s %s: "fmt,   \
         |                    ^~~~~~~~~
   include/linux/dev_printk.h:19:22: note: in definition of macro 'dev_fmt'
      19 | #define dev_fmt(fmt) fmt
         |                      ^~~
   drivers/fpga/xrt/mgmt/../include/subdev.h:187:2: note: in expansion of macro 'dev_err'
     187 |  prt_fn(DEV(pdev), "%s %s: "fmt,   \
         |  ^~~~~~
   drivers/fpga/xrt/mgmt/../include/subdev.h:189:37: note: in expansion of macro 'FMT_PRT'
     189 | #define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
         |                                     ^~~~~~~
   drivers/fpga/xrt/mgmt/xmgmt-main.c:370:3: note: in expansion of macro 'xrt_err'
     370 |   xrt_err(pdev, "truncated fw, length: %ld, expect: %ld",
         |   ^~~~~~~
   drivers/fpga/xrt/mgmt/xmgmt-main.c:370:55: note: format string is defined here
     370 |   xrt_err(pdev, "truncated fw, length: %ld, expect: %ld",
         |                                                     ~~^
         |                                                       |
         |                                                       long int
         |                                                     %d
   In file included from drivers/fpga/xrt/mgmt/xmgmt-main.c:21:
   At top level:
   drivers/fpga/xrt/mgmt/../include/subdev/axigate.h:25:27: warning: 'xrt_axigate_epnames' defined but not used [-Wunused-const-variable=]
      25 | static const char * const xrt_axigate_epnames[] = {
         |                           ^~~~~~~~~~~~~~~~~~~
--
>> drivers/fpga/xrt/mgmt/../common/xrt-metadata.c:76:5: warning: no previous prototype for 'xrt_md_add_node' [-Wmissing-prototypes]
      76 | int xrt_md_add_node(struct device *dev, char *blob, int parent_offset,
         |     ^~~~~~~~~~~~~~~


vim +/result +158 drivers/fpga/xrt/lib/xrt-main.c

2039758de374ce7 Sonal Santan 2020-12-16  153  
2039758de374ce7 Sonal Santan 2020-12-16  154  int xrt_subdev_register_external_driver(enum xrt_subdev_id id,
2039758de374ce7 Sonal Santan 2020-12-16  155  	struct platform_driver *drv, struct xrt_subdev_endpoints *eps)
2039758de374ce7 Sonal Santan 2020-12-16  156  {
2039758de374ce7 Sonal Santan 2020-12-16  157  	int i;
2039758de374ce7 Sonal Santan 2020-12-16 @158  	int result = 0;
2039758de374ce7 Sonal Santan 2020-12-16  159  
2039758de374ce7 Sonal Santan 2020-12-16  160  	mutex_lock(&xrt_class_lock);
2039758de374ce7 Sonal Santan 2020-12-16  161  	for (i = 0; i < ARRAY_SIZE(xrt_drv_maps); i++) {
2039758de374ce7 Sonal Santan 2020-12-16  162  		struct xrt_drv_map *map = &xrt_drv_maps[i];
2039758de374ce7 Sonal Santan 2020-12-16  163  
2039758de374ce7 Sonal Santan 2020-12-16  164  		if (map->id != id)
2039758de374ce7 Sonal Santan 2020-12-16  165  			continue;
2039758de374ce7 Sonal Santan 2020-12-16  166  		if (map->drv) {
2039758de374ce7 Sonal Santan 2020-12-16  167  			result = -EEXIST;
2039758de374ce7 Sonal Santan 2020-12-16  168  			pr_err("Id %d already has a registered driver, 0x%p\n",
2039758de374ce7 Sonal Santan 2020-12-16  169  				id, map->drv);
2039758de374ce7 Sonal Santan 2020-12-16  170  			break;
2039758de374ce7 Sonal Santan 2020-12-16  171  		}
2039758de374ce7 Sonal Santan 2020-12-16  172  		map->drv = drv;
2039758de374ce7 Sonal Santan 2020-12-16  173  		BUG_ON(map->eps);
2039758de374ce7 Sonal Santan 2020-12-16  174  		map->eps = eps;
2039758de374ce7 Sonal Santan 2020-12-16  175  		xrt_drv_register_driver(id);
2039758de374ce7 Sonal Santan 2020-12-16  176  	}
2039758de374ce7 Sonal Santan 2020-12-16  177  	mutex_unlock(&xrt_class_lock);
2039758de374ce7 Sonal Santan 2020-12-16  178  	return 0;
2039758de374ce7 Sonal Santan 2020-12-16  179  }
2039758de374ce7 Sonal Santan 2020-12-16  180  EXPORT_SYMBOL_GPL(xrt_subdev_register_external_driver);
2039758de374ce7 Sonal Santan 2020-12-16  181  
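
A minimal sketch of one possible fix (not part of the posted series) is to
propagate the collected status instead of the hard-coded 0 at line 178:

	mutex_unlock(&xrt_class_lock);
	return result;	/* returns -EEXIST when a driver was already registered */

The size_t format warnings above are conventionally addressed by printing
with %zu rather than %ld.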

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 32814 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH V2 XRT Alveo 2/6] fpga: xrt: infrastructure support for xmgmt driver
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 2/6] fpga: xrt: infrastructure support for xmgmt driver Sonal Santan
@ 2020-12-21  6:41   ` kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2020-12-21  6:41 UTC (permalink / raw)
  To: Sonal Santan, linux-kernel
  Cc: kbuild-all, Sonal Santan, linux-fpga, maxz, lizhih, michal.simek,
	stefanos, devicetree, trix, mdf

[-- Attachment #1: Type: text/plain, Size: 1837 bytes --]

Hi Sonal,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linux/master]
[also build test ERROR on linus/master v5.10 next-20201218]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Sonal-Santan/XRT-Alveo-driver-overview/20201217-160048
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 09162bc32c880a791c6c0668ce0745cf7958f576
config: x86_64-rhel (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/40454bdc15831407c2041bec3d4f389816916ed6
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Sonal-Santan/XRT-Alveo-driver-overview/20201217-160048
        git checkout 40454bdc15831407c2041bec3d4f389816916ed6
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> error: include/uapi/linux/xrt/xclbin.h: missing "WITH Linux-syscall-note" for SPDX-License-Identifier
   make[2]: *** [scripts/Makefile.headersinst:63: usr/include/linux/xrt/xclbin.h] Error 1
   make[2]: Target '__headers' not remade because of errors.
   make[1]: *** [Makefile:1288: headers] Error 2
   make[1]: Target 'headers_install' not remade because of errors.
   make: *** [Makefile:185: __sub-make] Error 2
   make: Target 'headers_install' not remade because of errors.
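
The conventional fix for headers exported under include/uapi (a sketch, not
taken from the posted series) is to extend the SPDX tag with the syscall
note, e.g. in include/uapi/linux/xrt/xclbin.h:

	/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */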

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 45658 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH V2 XRT Alveo 4/6] fpga: xrt: XRT Alveo management physical function driver
  2020-12-17  7:50 ` [PATCH V2 XRT Alveo 4/6] fpga: xrt: XRT Alveo management physical function driver Sonal Santan
@ 2020-12-21  9:03   ` kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2020-12-21  9:03 UTC (permalink / raw)
  To: Sonal Santan, linux-kernel
  Cc: kbuild-all, Sonal Santan, linux-fpga, maxz, lizhih, michal.simek,
	stefanos, devicetree, trix, mdf

[-- Attachment #1: Type: text/plain, Size: 2042 bytes --]

Hi Sonal,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linux/master]
[also build test ERROR on linus/master v5.10 next-20201221]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Sonal-Santan/XRT-Alveo-driver-overview/20201217-160048
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 09162bc32c880a791c6c0668ce0745cf7958f576
config: x86_64-rhel (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/ec70cdb9cc3612eec369754db4ae631a05f4a325
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Sonal-Santan/XRT-Alveo-driver-overview/20201217-160048
        git checkout ec70cdb9cc3612eec369754db4ae631a05f4a325
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   error: include/uapi/linux/xrt/xclbin.h: missing "WITH Linux-syscall-note" for SPDX-License-Identifier
   make[2]: *** [scripts/Makefile.headersinst:63: usr/include/linux/xrt/xclbin.h] Error 1
>> error: include/uapi/linux/xrt/xmgmt-ioctl.h: missing "WITH Linux-syscall-note" for SPDX-License-Identifier
   make[2]: *** [scripts/Makefile.headersinst:63: usr/include/linux/xrt/xmgmt-ioctl.h] Error 1
   make[2]: Target '__headers' not remade because of errors.
   make[1]: *** [Makefile:1288: headers] Error 2
   make[1]: Target 'headers_install' not remade because of errors.
   make: *** [Makefile:185: __sub-make] Error 2
   make: Target 'headers_install' not remade because of errors.

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 45658 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2020-12-21 10:11 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-17  7:50 [PATCH V2 XRT Alveo 0/6] XRT Alveo driver overview Sonal Santan
2020-12-17  7:50 ` [PATCH V2 XRT Alveo 1/6] Documentation: fpga: Add a document describing XRT Alveo drivers Sonal Santan
2020-12-17  7:50 ` [PATCH V2 XRT Alveo 2/6] fpga: xrt: infrastructure support for xmgmt driver Sonal Santan
2020-12-21  6:41   ` kernel test robot
2020-12-17  7:50 ` [PATCH V2 XRT Alveo 3/6] fpga: xrt: core infrastructure for xrt-lib module Sonal Santan
2020-12-17  7:50 ` [PATCH V2 XRT Alveo 4/6] fpga: xrt: XRT Alveo management physical function driver Sonal Santan
2020-12-21  9:03   ` kernel test robot
2020-12-17  7:50 ` [PATCH V2 XRT Alveo 5/6] fpga: xrt: platform drivers for subsystems in shell partition Sonal Santan
2020-12-17  7:50 ` [PATCH V2 XRT Alveo 6/6] fpga: xrt: Kconfig and Makefile updates for XRT drivers Sonal Santan
2020-12-17 14:55   ` kernel test robot
