* [PATCH v1 0/7] New DRM driver for Intel VPU
@ 2022-07-28 13:17 Jacek Lawrynowicz
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

Hi,

This patchset contains a new Linux kernel driver for Intel VPUs.

VPU stands for Versatile Processing Unit. It is an AI inference accelerator
integrated into Intel non-server CPUs starting from the 14th generation.
The VPU enables efficient execution of Deep Learning applications
like object detection, classification, etc.

The driver is part of the gpu/drm subsystem because the VPU is similar in
operation to an integrated GPU. Reusing the DRM driver init, ioctl handling,
GEM and PRIME helpers, and drm_mm minimizes code duplication in the kernel.
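
Since everything goes through the standard DRM uapi, user space drives the
device like any other render node. A minimal sketch of a GET_PARAM query (not
part of this series; the DRM_IOCTL_VPU_GET_PARAM wrapper name and the render
node path are assumptions based on common DRM conventions, while the struct
and param values match the GET_PARAM handler added in patch 1):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <drm/vpu_drm.h>

  int main(void)
  {
          /* First render node; use whichever node intel_vpu actually exposes */
          int fd = open("/dev/dri/renderD128", O_RDWR);
          struct drm_vpu_param args = { .param = DRM_VPU_PARAM_DEVICE_ID };

          if (fd < 0)
                  return 1;

          /* Assumed ioctl wrapper name from the new vpu_drm.h uapi header */
          if (ioctl(fd, DRM_IOCTL_VPU_GET_PARAM, &args) == 0)
                  printf("VPU PCI device id: 0x%llx\n", (unsigned long long)args.value);

          close(fd);
          return 0;
  }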

The whole driver is licensed under GPL-2.0-only except for two headers imported
from the firmware that are MIT licensed.

The user mode driver stack consists of a Level Zero API driver and an OpenVINO
plugin. Both should be open-sourced by the end of Q3.
The firmware for the VPU will be distributed as a closed-source binary.

Regards,
Jacek

Jacek Lawrynowicz (7):
  drm/vpu: Introduce a new DRM driver for Intel VPU
  drm/vpu: Add Intel VPU MMU support
  drm/vpu: Add GEM buffer object management
  drm/vpu: Add IPC driver and JSM messages
  drm/vpu: Implement firmware parsing and booting
  drm/vpu: Add command buffer submission logic
  drm/vpu: Add PM support

 MAINTAINERS                           |    8 +
 drivers/gpu/drm/Kconfig               |    2 +
 drivers/gpu/drm/Makefile              |    1 +
 drivers/gpu/drm/vpu/Kconfig           |   12 +
 drivers/gpu/drm/vpu/Makefile          |   16 +
 drivers/gpu/drm/vpu/vpu_boot_api.h    |  222 ++++++
 drivers/gpu/drm/vpu/vpu_drv.c         |  642 +++++++++++++++
 drivers/gpu/drm/vpu/vpu_drv.h         |  178 +++++
 drivers/gpu/drm/vpu/vpu_fw.c          |  417 ++++++++++
 drivers/gpu/drm/vpu/vpu_fw.h          |   38 +
 drivers/gpu/drm/vpu/vpu_gem.c         |  846 ++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_gem.h         |  113 +++
 drivers/gpu/drm/vpu/vpu_hw.h          |  163 ++++
 drivers/gpu/drm/vpu/vpu_hw_mtl.c      | 1040 +++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_hw_mtl_reg.h  |  468 +++++++++++
 drivers/gpu/drm/vpu/vpu_hw_reg_io.h   |  114 +++
 drivers/gpu/drm/vpu/vpu_ipc.c         |  480 ++++++++++++
 drivers/gpu/drm/vpu/vpu_ipc.h         |   91 +++
 drivers/gpu/drm/vpu/vpu_job.c         |  624 +++++++++++++++
 drivers/gpu/drm/vpu/vpu_job.h         |   73 ++
 drivers/gpu/drm/vpu/vpu_jsm_api.h     |  529 +++++++++++++
 drivers/gpu/drm/vpu/vpu_jsm_msg.c     |  220 ++++++
 drivers/gpu/drm/vpu/vpu_jsm_msg.h     |   25 +
 drivers/gpu/drm/vpu/vpu_mmu.c         |  944 ++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_mmu.h         |   53 ++
 drivers/gpu/drm/vpu/vpu_mmu_context.c |  418 ++++++++++
 drivers/gpu/drm/vpu/vpu_mmu_context.h |   49 ++
 drivers/gpu/drm/vpu/vpu_pm.c          |  353 +++++++++
 drivers/gpu/drm/vpu/vpu_pm.h          |   38 +
 include/uapi/drm/vpu_drm.h            |  330 ++++++++
 30 files changed, 8507 insertions(+)
 create mode 100644 drivers/gpu/drm/vpu/Kconfig
 create mode 100644 drivers/gpu/drm/vpu/Makefile
 create mode 100644 drivers/gpu/drm/vpu/vpu_boot_api.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_drv.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_drv.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_fw.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_fw.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_gem.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_gem.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw_mtl.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw_mtl_reg.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw_reg_io.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_ipc.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_ipc.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_job.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_job.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_jsm_api.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_jsm_msg.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_jsm_msg.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu_context.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu_context.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_pm.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_pm.h
 create mode 100644 include/uapi/drm/vpu_drm.h

--
2.34.1


* [PATCH v1 1/7] drm/vpu: Introduce a new DRM driver for Intel VPU
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
@ 2022-07-28 13:17 ` Jacek Lawrynowicz
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

VPU stands for Versatile Processing Unit and it's a CPU-integrated
inference accelerator for Computer Vision and Deep Learning
applications.

The VPU device consists of the following components:
  - Buttress - provides CPU to VPU integration, interrupt, frequency and
    power management.
  - Memory Management Unit (based on ARM MMU-600) - translates VPU addresses
    to host DMA addresses and isolates user workloads.
  - RISC-based microcontroller - executes firmware that provides a job
    execution API for the kernel-mode driver.
  - Neural Compute Subsystem (NCS) - does the actual work, provides the
    Compute and Copy engines.
  - Network on Chip (NoC) - the network fabric connecting all of the above
    components.

This driver supports VPU IP v2.7 integrated into Intel Meteor Lake
client CPUs (14th generation).

Module sources are located in drivers/gpu/drm/vpu and the module is named
"intel_vpu.ko".

This patch includes only very basic functionality:
  - module, PCI device and IRQ initialization
  - register definitions and low level register manipulation functions
    (see the field access sketch below)
  - SET/GET_PARAM ioctls
  - power up without firmware
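
To make the low level register helpers concrete, below is a small stand-alone
illustration of how the *_SHIFT/*_MASK pairs from vpu_hw_mtl_reg.h are meant
to be consumed. The set_fld() helper is only a sketch of the pattern; the real
REG_SET_FLD_NUM()/REGB_RD32() macros live in vpu_hw_reg_io.h and are not
quoted here:

  #include <stdint.h>
  #include <stdio.h>

  /* Field description as defined in vpu_hw_mtl_reg.h */
  #define MTL_BUTTRESS_WP_REQ_CMD_SEND_SHIFT 0u
  #define MTL_BUTTRESS_WP_REQ_CMD_SEND_MASK  0x00000001u

  /* Sketch of a read-modify-write of one register field */
  static uint32_t set_fld(uint32_t val, uint32_t mask, uint32_t shift, uint32_t fld)
  {
          return (val & ~mask) | ((fld << shift) & mask);
  }

  int main(void)
  {
          /* Pretend this value was read with REGB_RD32(MTL_BUTTRESS_WP_REQ_CMD) */
          uint32_t cmd = 0;

          cmd = set_fld(cmd, MTL_BUTTRESS_WP_REQ_CMD_SEND_MASK,
                        MTL_BUTTRESS_WP_REQ_CMD_SEND_SHIFT, 1);
          printf("WP_REQ_CMD value to write back: 0x%08x\n", cmd);
          return 0;
  }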

Signed-off-by: Krystian Pradzynski <krystian.pradzynski@linux.intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 MAINTAINERS                          |    8 +
 drivers/gpu/drm/Kconfig              |    2 +
 drivers/gpu/drm/Makefile             |    1 +
 drivers/gpu/drm/vpu/Kconfig          |   12 +
 drivers/gpu/drm/vpu/Makefile         |    8 +
 drivers/gpu/drm/vpu/vpu_drv.c        |  392 ++++++++++
 drivers/gpu/drm/vpu/vpu_drv.h        |  157 ++++
 drivers/gpu/drm/vpu/vpu_hw.h         |  163 +++++
 drivers/gpu/drm/vpu/vpu_hw_mtl.c     | 1003 ++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_hw_mtl_reg.h |  468 ++++++++++++
 drivers/gpu/drm/vpu/vpu_hw_reg_io.h  |  114 +++
 include/uapi/drm/vpu_drm.h           |   95 +++
 12 files changed, 2423 insertions(+)
 create mode 100644 drivers/gpu/drm/vpu/Kconfig
 create mode 100644 drivers/gpu/drm/vpu/Makefile
 create mode 100644 drivers/gpu/drm/vpu/vpu_drv.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_drv.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw_mtl.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw_mtl_reg.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_hw_reg_io.h
 create mode 100644 include/uapi/drm/vpu_drm.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 0f9366144d31..e556fa2d366b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6944,6 +6944,14 @@ F:	Documentation/devicetree/bindings/gpu/vivante,gc.yaml
 F:	drivers/gpu/drm/etnaviv/
 F:	include/uapi/drm/etnaviv_drm.h
 
+DRM DRIVERS FOR VPU
+M:	Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
+M:	Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
+S:	Supported
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+F:	drivers/gpu/drm/vpu/
+F:	include/uapi/drm/vpu_drm.h
+
 DRM DRIVERS FOR XEN
 M:	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
 L:	dri-devel@lists.freedesktop.org
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 6c2256e8474b..cf2f159b3e01 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -363,6 +363,8 @@ source "drivers/gpu/drm/v3d/Kconfig"
 
 source "drivers/gpu/drm/vc4/Kconfig"
 
+source "drivers/gpu/drm/vpu/Kconfig"
+
 source "drivers/gpu/drm/etnaviv/Kconfig"
 
 source "drivers/gpu/drm/hisilicon/Kconfig"
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index e7af358e6dda..735a12514321 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -95,6 +95,7 @@ obj-$(CONFIG_DRM_KMB_DISPLAY)  += kmb/
 obj-$(CONFIG_DRM_MGAG200) += mgag200/
 obj-$(CONFIG_DRM_V3D)  += v3d/
 obj-$(CONFIG_DRM_VC4)  += vc4/
+obj-$(CONFIG_DRM_VPU)  += vpu/
 obj-$(CONFIG_DRM_SIS)   += sis/
 obj-$(CONFIG_DRM_SAVAGE)+= savage/
 obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
diff --git a/drivers/gpu/drm/vpu/Kconfig b/drivers/gpu/drm/vpu/Kconfig
new file mode 100644
index 000000000000..e4504c483d9d
--- /dev/null
+++ b/drivers/gpu/drm/vpu/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config DRM_VPU
+	tristate "Intel VPU for Meteor Lake and newer"
+	depends on DRM
+	depends on X86_64 && PCI
+	select SHMEM
+	help
+	  Choose this option if you have a system that has a 14th generation Intel CPU
+	  or newer. VPU stands for Versatile Processing Unit and it's a CPU-integrated
+	  inference accelerator for Computer Vision and Deep Learning applications.
+
+	  If "M" is selected, the module will be called intel_vpu.
diff --git a/drivers/gpu/drm/vpu/Makefile b/drivers/gpu/drm/vpu/Makefile
new file mode 100644
index 000000000000..5d6d2a2566cf
--- /dev/null
+++ b/drivers/gpu/drm/vpu/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright © 2022 Intel Corporation
+
+intel_vpu-y := \
+	vpu_drv.o \
+	vpu_hw_mtl.o
+
+obj-$(CONFIG_DRM_VPU) += intel_vpu.o
diff --git a/drivers/gpu/drm/vpu/vpu_drv.c b/drivers/gpu/drm/vpu/vpu_drv.c
new file mode 100644
index 000000000000..bbe7ad97a32c
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_drv.c
@@ -0,0 +1,392 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include <drm/drm_drv.h>
+#include <drm/drm_file.h>
+#include <drm/drm_gem.h>
+#include <drm/drm_ioctl.h>
+
+#include "vpu_drv.h"
+#include "vpu_hw.h"
+
+#ifndef DRIVER_VERSION_STR
+#define DRIVER_VERSION_STR __stringify(DRM_VPU_DRIVER_MAJOR) "." \
+			   __stringify(DRM_VPU_DRIVER_MINOR) "."
+#endif
+
+static const struct drm_driver driver;
+
+int vpu_dbg_mask;
+module_param_named(dbg_mask, vpu_dbg_mask, int, 0644);
+MODULE_PARM_DESC(dbg_mask, "Driver debug mask. See VPU_DBG_* macros.");
+
+u8 vpu_pll_min_ratio;
+module_param_named(pll_min_ratio, vpu_pll_min_ratio, byte, 0644);
+MODULE_PARM_DESC(pll_min_ratio, "Minimum PLL ratio used to set VPU frequency");
+
+u8 vpu_pll_max_ratio = U8_MAX;
+module_param_named(pll_max_ratio, vpu_pll_max_ratio, byte, 0644);
+MODULE_PARM_DESC(pll_max_ratio, "Maximum PLL ratio used to set VPU frequency");
+
+char *vpu_platform_to_str(u32 platform)
+{
+	switch (platform) {
+	case VPU_PLATFORM_SILICON:
+		return "VPU_PLATFORM_SILICON";
+	case VPU_PLATFORM_SIMICS:
+		return "VPU_PLATFORM_SIMICS";
+	case VPU_PLATFORM_FPGA:
+		return "VPU_PLATFORM_FPGA";
+	default:
+		return "Invalid platform";
+	}
+}
+
+void vpu_file_priv_get(struct vpu_file_priv *file_priv, struct vpu_file_priv **link)
+{
+	kref_get(&file_priv->ref);
+	*link = file_priv;
+}
+
+static void file_priv_release(struct kref *ref)
+{
+	struct vpu_file_priv *file_priv = container_of(ref, struct vpu_file_priv, ref);
+
+	kfree(file_priv);
+}
+
+void vpu_file_priv_put(struct vpu_file_priv **link)
+{
+	struct vpu_file_priv *file_priv = *link;
+
+	*link = NULL;
+	kref_put(&file_priv->ref, file_priv_release);
+}
+
+static int vpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct vpu_file_priv *file_priv = file->driver_priv;
+	struct vpu_device *vdev = file_priv->vdev;
+	struct pci_dev *pdev = to_pci_dev(vdev->drm.dev);
+	struct drm_vpu_param *args = data;
+	int ret = 0;
+
+	switch (args->param) {
+	case DRM_VPU_PARAM_DEVICE_ID:
+		args->value = pdev->device;
+		break;
+	case DRM_VPU_PARAM_DEVICE_REVISION:
+		args->value = pdev->revision;
+		break;
+	case DRM_VPU_PARAM_PLATFORM_TYPE:
+		args->value = vdev->platform;
+		break;
+	case DRM_VPU_PARAM_CORE_CLOCK_RATE:
+		args->value = vpu_hw_reg_pll_freq_get(vdev);
+		break;
+	case DRM_VPU_PARAM_NUM_CONTEXTS:
+		args->value = vpu_get_context_count(vdev);
+		break;
+	case DRM_VPU_PARAM_CONTEXT_BASE_ADDRESS:
+		args->value = vdev->hw->ranges.user_low.start;
+		break;
+	case DRM_VPU_PARAM_CONTEXT_PRIORITY:
+		args->value = file_priv->priority;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int vpu_set_param_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct vpu_file_priv *file_priv = file->driver_priv;
+	struct drm_vpu_param *args = data;
+	int ret = 0;
+
+	switch (args->param) {
+	case DRM_VPU_PARAM_CONTEXT_PRIORITY:
+		if (args->value <= DRM_VPU_CONTEXT_PRIORITY_REALTIME)
+			file_priv->priority = args->value;
+		else
+			ret = -EINVAL;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int vpu_open(struct drm_device *dev, struct drm_file *file)
+{
+	struct vpu_device *vdev = to_vpu_dev(dev);
+	struct vpu_file_priv *file_priv;
+
+	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
+	if (!file_priv)
+		return -ENOMEM;
+
+	file_priv->vdev = vdev;
+	file_priv->priority = DRM_VPU_CONTEXT_PRIORITY_NORMAL;
+
+	kref_init(&file_priv->ref);
+
+	file->driver_priv = file_priv;
+
+	return 0;
+}
+
+static void vpu_postclose(struct drm_device *dev, struct drm_file *file)
+{
+	struct vpu_file_priv *file_priv = file->driver_priv;
+
+	vpu_file_priv_put(&file_priv);
+}
+
+static const struct drm_ioctl_desc vpu_drm_ioctls[] = {
+	DRM_IOCTL_DEF_DRV(VPU_GET_PARAM, vpu_get_param_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(VPU_SET_PARAM, vpu_set_param_ioctl, DRM_RENDER_ALLOW),
+};
+
+DEFINE_DRM_GEM_FOPS(vpu_fops);
+
+int vpu_shutdown(struct vpu_device *vdev)
+{
+	int ret;
+
+	vpu_hw_irq_disable(vdev);
+
+	ret = vpu_hw_power_down(vdev);
+	if (ret)
+		vpu_warn(vdev, "Failed to power down HW: %d\n", ret);
+
+	return ret;
+}
+
+static const struct drm_driver driver = {
+	.driver_features = DRIVER_GEM | DRIVER_RENDER,
+
+	.open = vpu_open,
+	.postclose = vpu_postclose,
+
+	.ioctls = vpu_drm_ioctls,
+	.num_ioctls = ARRAY_SIZE(vpu_drm_ioctls),
+	.fops = &vpu_fops,
+
+	.name = DRIVER_NAME,
+	.desc = DRIVER_DESC,
+	.date = DRIVER_DATE,
+	.major = DRM_VPU_DRIVER_MAJOR,
+	.minor = DRM_VPU_DRIVER_MINOR,
+};
+
+static int vpu_irq_init(struct vpu_device *vdev)
+{
+	struct pci_dev *pdev = to_pci_dev(vdev->drm.dev);
+	int ret;
+
+	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_MSIX);
+	if (ret < 0) {
+		vpu_err(vdev, "Failed to allocate an MSI IRQ: %d\n", ret);
+		return ret;
+	}
+
+	vdev->irq = pci_irq_vector(pdev, 0);
+
+	ret = request_irq(vdev->irq, vdev->hw->ops->irq_handler, IRQF_SHARED,
+			  DRIVER_NAME, vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to request IRQ: %d\n", ret);
+		pci_free_irq_vectors(pdev);
+	}
+
+	return ret;
+}
+
+static void vpu_irq_fini(struct vpu_device *vdev)
+{
+	free_irq(vdev->irq, vdev);
+	pci_free_irq_vectors(to_pci_dev(vdev->drm.dev));
+}
+
+static int vpu_pci_init(struct vpu_device *vdev)
+{
+	struct pci_dev *pdev = to_pci_dev(vdev->drm.dev);
+	struct resource *bar0 = &pdev->resource[0];
+	struct resource *bar4 = &pdev->resource[4];
+	int ret;
+
+	vpu_dbg(MISC, "Mapping BAR0 (RegV) %pR\n", bar0);
+	vdev->regv = devm_ioremap_resource(vdev->drm.dev, bar0);
+	if (IS_ERR(vdev->regv)) {
+		vpu_err(vdev, "Failed to map bar 0\n");
+		return -ENOMEM;
+	}
+
+	vpu_dbg(MISC, "Mapping BAR4 (RegB) %pR\n", bar4);
+	vdev->regb = devm_ioremap_resource(vdev->drm.dev, bar4);
+	if (IS_ERR(vdev->regb)) {
+		vpu_err(vdev, "Failed to map bar 4\n");
+		return -ENOMEM;
+	}
+
+	ret = dma_set_mask_and_coherent(vdev->drm.dev, DMA_BIT_MASK(38));
+	if (ret) {
+		vpu_err(vdev, "Failed to set DMA mask: %d\n", ret);
+		return ret;
+	}
+
+	/* Clear any pending errors */
+	pcie_capability_clear_word(pdev, PCI_EXP_DEVSTA, 0x3f);
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to enable PCI device: %d\n", ret);
+		return ret;
+	}
+
+	pci_set_master(pdev);
+
+	return 0;
+}
+
+static void vpu_pci_fini(struct vpu_device *vdev)
+{
+	pci_disable_device(to_pci_dev(vdev->drm.dev));
+}
+
+static int vpu_dev_init(struct vpu_device *vdev)
+{
+	int ret;
+
+	vdev->hw = devm_kzalloc(vdev->drm.dev, sizeof(*vdev->hw), GFP_KERNEL);
+	if (!vdev->hw)
+		return -ENOMEM;
+
+	vdev->hw->ops = &vpu_hw_mtl_ops;
+	vdev->platform = VPU_PLATFORM_INVALID;
+
+	xa_init_flags(&vdev->context_xa, XA_FLAGS_ALLOC);
+	vdev->context_xa_limit.min = VPU_GLOBAL_CONTEXT_MMU_SSID + 1;
+	vdev->context_xa_limit.max = VPU_CONTEXT_LIMIT;
+
+	ret = vpu_pci_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize PCI device: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_irq_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize IRQs: %d\n", ret);
+		goto err_pci_fini;
+	}
+
+	ret = vpu_hw_info_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize HW info: %d\n", ret);
+		goto err_irq_fini;
+	}
+
+	ret = vpu_hw_power_up(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to power up HW: %d\n", ret);
+		goto err_irq_fini;
+	}
+
+	return 0;
+
+err_irq_fini:
+	vpu_irq_fini(vdev);
+err_pci_fini:
+	vpu_pci_fini(vdev);
+	return ret;
+}
+
+static void vpu_dev_fini(struct vpu_device *vdev)
+{
+	vpu_shutdown(vdev);
+
+	vpu_irq_fini(vdev);
+	vpu_pci_fini(vdev);
+
+	WARN_ON(!xa_empty(&vdev->context_xa));
+	xa_destroy(&vdev->context_xa);
+}
+
+static struct pci_device_id vpu_pci_ids[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_MTL) },
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, vpu_pci_ids);
+
+static int vpu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct vpu_device *vdev;
+	int ret;
+
+	vdev = devm_drm_dev_alloc(&pdev->dev, &driver, struct vpu_device, drm);
+	if (IS_ERR(vdev))
+		return PTR_ERR(vdev);
+
+	pci_set_drvdata(pdev, vdev);
+	vdev->drm.dev_private = vdev;
+
+	ret = vpu_dev_init(vdev);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to initialize VPU device: %d\n", ret);
+		return ret;
+	}
+
+	ret = drm_dev_register(&vdev->drm, 0);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to register DRM device: %d\n", ret);
+		vpu_dev_fini(vdev);
+	}
+
+	return ret;
+}
+
+static void vpu_remove(struct pci_dev *pdev)
+{
+	struct vpu_device *vdev = pci_get_drvdata(pdev);
+
+	drm_dev_unregister(&vdev->drm);
+	vpu_dev_fini(vdev);
+}
+
+static struct pci_driver vpu_pci_driver = {
+	.name = KBUILD_MODNAME,
+	.id_table = vpu_pci_ids,
+	.probe = vpu_probe,
+	.remove = vpu_remove,
+};
+
+static __init int vpu_init(void)
+{
+	pr_info("Intel VPU driver version: %s", DRIVER_VERSION_STR);
+
+	return pci_register_driver(&vpu_pci_driver);
+}
+
+static __exit void vpu_fini(void)
+{
+	pci_unregister_driver(&vpu_pci_driver);
+}
+
+module_init(vpu_init);
+module_exit(vpu_fini);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL and additional rights");
+MODULE_VERSION(DRIVER_VERSION_STR);
diff --git a/drivers/gpu/drm/vpu/vpu_drv.h b/drivers/gpu/drm/vpu/vpu_drv.h
new file mode 100644
index 000000000000..b2e7d355c0de
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_drv.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_DRV_H__
+#define __VPU_DRV_H__
+
+#include <drm/drm_device.h>
+#include <drm/drm_mm.h>
+#include <drm/drm_print.h>
+
+#include <linux/pci.h>
+#include <linux/xarray.h>
+#include <uapi/drm/vpu_drm.h>
+
+#define DRIVER_NAME "intel_vpu"
+#define DRIVER_DESC "Driver for Intel Versatile Processing Unit (VPU)"
+#define DRIVER_DATE "20220101"
+
+#define PCI_VENDOR_ID_INTEL 0x8086
+#define PCI_DEVICE_ID_MTL   0x7d1d
+
+#define VPU_GLOBAL_CONTEXT_MMU_SSID 0
+#define VPU_CONTEXT_LIMIT	    64
+#define VPU_NUM_ENGINES		    2
+
+#define VPU_PLATFORM_SILICON 0
+#define VPU_PLATFORM_SIMICS  2
+#define VPU_PLATFORM_FPGA    3
+#define VPU_PLATFORM_INVALID 8
+
+#define VPU_DBG_REG	BIT(0)
+#define VPU_DBG_IRQ	BIT(1)
+#define VPU_DBG_MMU	BIT(2)
+#define VPU_DBG_FILE	BIT(3)
+#define VPU_DBG_MISC	BIT(4)
+#define VPU_DBG_FW_BOOT BIT(5)
+#define VPU_DBG_PM	BIT(6)
+#define VPU_DBG_IPC	BIT(7)
+#define VPU_DBG_BO	BIT(8)
+#define VPU_DBG_JOB	BIT(9)
+#define VPU_DBG_JSM	BIT(10)
+#define VPU_DBG_KREF	BIT(11)
+
+#define vpu_err(vdev, fmt, ...) \
+	dev_err((vdev)->drm.dev, "[%s] ERROR: " fmt, __func__, ##__VA_ARGS__)
+
+#define vpu_err_ratelimited(vdev, fmt, ...) \
+	dev_err_ratelimited((vdev)->drm.dev, "[%s] ERROR: " fmt, __func__, ##__VA_ARGS__)
+
+#define vpu_warn(vdev, fmt, ...) \
+	dev_warn((vdev)->drm.dev, "[%s] WARNING: " fmt, __func__, ##__VA_ARGS__)
+
+#define vpu_warn_ratelimited(vdev, fmt, ...) \
+	dev_warn_ratelimited((vdev)->drm.dev, "[%s] WARNING: " fmt, __func__, ##__VA_ARGS__)
+
+#define vpu_info(vdev, fmt, ...) dev_info((vdev)->drm.dev, fmt, ##__VA_ARGS__)
+
+#define vpu_dbg(type, fmt, args...) do {                                       \
+	if (unlikely(VPU_DBG_##type & vpu_dbg_mask))                           \
+		dev_dbg((vdev)->drm.dev, "[%s] " fmt, #type, ##args);          \
+} while (0)
+
+#define VPU_WA(wa_name) (vdev->wa.wa_name)
+
+struct vpu_wa_table {
+	bool punit_disabled;
+	bool clear_runtime_mem;
+};
+
+struct vpu_hw_info;
+struct vpu_fw_info;
+struct vpu_ipc_info;
+struct vpu_mmu_info;
+struct vpu_pm_info;
+
+struct vpu_device {
+	struct drm_device drm; /* Must be first */
+	void __iomem *regb;
+	void __iomem *regv;
+	u32 platform;
+	u32 irq;
+
+	struct vpu_wa_table wa;
+	struct vpu_hw_info *hw;
+
+	struct xarray context_xa;
+	struct xa_limit context_xa_limit;
+
+	struct {
+		int boot;
+		int jsm;
+		int tdr;
+		int reschedule_suspend;
+	} timeout;
+};
+
+struct vpu_file_priv {
+	struct kref ref;
+	struct vpu_device *vdev;
+	u32 priority;
+};
+
+extern int vpu_dbg_mask;
+extern u8 vpu_pll_min_ratio;
+extern u8 vpu_pll_max_ratio;
+
+void vpu_file_priv_get(struct vpu_file_priv *file_priv, struct vpu_file_priv **link);
+void vpu_file_priv_put(struct vpu_file_priv **link);
+char *vpu_platform_to_str(u32 platform);
+int vpu_shutdown(struct vpu_device *vdev);
+
+static inline bool vpu_is_mtl(struct vpu_device *vdev)
+{
+	return to_pci_dev(vdev->drm.dev)->device == PCI_DEVICE_ID_MTL;
+}
+
+static inline u8 vpu_revision(struct vpu_device *vdev)
+{
+	return to_pci_dev(vdev->drm.dev)->revision;
+}
+
+static inline struct vpu_device *to_vpu_dev(struct drm_device *dev)
+{
+	return container_of(dev, struct vpu_device, drm);
+}
+
+static inline u32 vpu_get_context_count(struct vpu_device *vdev)
+{
+	struct xa_limit ctx_limit = vdev->context_xa_limit;
+
+	return (ctx_limit.max - ctx_limit.min + 1);
+}
+
+static inline u32 vpu_get_platform(struct vpu_device *vdev)
+{
+	WARN_ON_ONCE(vdev->platform == VPU_PLATFORM_INVALID);
+	return vdev->platform;
+}
+
+static inline bool vpu_is_silicon(struct vpu_device *vdev)
+{
+	return vpu_get_platform(vdev) == VPU_PLATFORM_SILICON;
+}
+
+static inline bool vpu_is_simics(struct vpu_device *vdev)
+{
+	return vpu_get_platform(vdev) == VPU_PLATFORM_SIMICS;
+}
+
+static inline bool vpu_is_fpga(struct vpu_device *vdev)
+{
+	return vpu_get_platform(vdev) == VPU_PLATFORM_FPGA;
+}
+
+#endif /* __VPU_DRV_H__ */
diff --git a/drivers/gpu/drm/vpu/vpu_hw.h b/drivers/gpu/drm/vpu/vpu_hw.h
new file mode 100644
index 000000000000..971b6a607f73
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_hw.h
@@ -0,0 +1,163 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_HW_H__
+#define __VPU_HW_H__
+
+#include "vpu_drv.h"
+
+struct vpu_hw_ops {
+	int (*info_init)(struct vpu_device *vdev);
+	int (*power_up)(struct vpu_device *vdev);
+	int (*boot_fw)(struct vpu_device *vdev);
+	int (*power_down)(struct vpu_device *vdev);
+	bool (*is_idle)(struct vpu_device *vdev);
+	void (*wdt_disable)(struct vpu_device *vdev);
+	u32 (*reg_pll_freq_get)(struct vpu_device *vdev);
+	u32 (*reg_telemetry_offset_get)(struct vpu_device *vdev);
+	u32 (*reg_telemetry_size_get)(struct vpu_device *vdev);
+	u32 (*reg_telemetry_enable_get)(struct vpu_device *vdev);
+	void (*reg_db_set)(struct vpu_device *vdev, u32 db_id);
+	u32 (*reg_ipc_rx_addr_get)(struct vpu_device *vdev);
+	u32 (*reg_ipc_rx_count_get)(struct vpu_device *vdev);
+	void (*reg_ipc_tx_set)(struct vpu_device *vdev, u32 vpu_addr);
+	void (*irq_clear)(struct vpu_device *vdev);
+	void (*irq_enable)(struct vpu_device *vdev);
+	void (*irq_disable)(struct vpu_device *vdev);
+	irqreturn_t (*irq_handler)(int irq, void *ptr);
+};
+
+struct vpu_addr_range {
+	u64 start;
+	u64 end;
+};
+
+struct vpu_hw_info {
+	const struct vpu_hw_ops *ops;
+	struct {
+		struct vpu_addr_range global_low;
+		struct vpu_addr_range global_high;
+		struct vpu_addr_range user_low;
+		struct vpu_addr_range user_high;
+		struct vpu_addr_range global_aliased_pio;
+	} ranges;
+	struct {
+		u8 min_ratio;
+		u8 max_ratio;
+	} pll;
+	u32 tile_fuse;
+	u32 sku;
+	u16 config;
+};
+
+extern const struct vpu_hw_ops vpu_hw_mtl_ops;
+
+static inline int vpu_hw_info_init(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->info_init(vdev);
+};
+
+static inline int vpu_hw_power_up(struct vpu_device *vdev)
+{
+	vpu_dbg(PM, "HW power up\n");
+
+	return vdev->hw->ops->power_up(vdev);
+};
+
+static inline int vpu_hw_boot_fw(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->boot_fw(vdev);
+};
+
+static inline bool vpu_hw_is_idle(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->is_idle(vdev);
+};
+
+static inline int vpu_hw_power_down(struct vpu_device *vdev)
+{
+	vpu_dbg(PM, "HW power down\n");
+
+	return vdev->hw->ops->power_down(vdev);
+};
+
+static inline void vpu_hw_wdt_disable(struct vpu_device *vdev)
+{
+	vdev->hw->ops->wdt_disable(vdev);
+};
+
+/* Register indirect accesses */
+static inline u32 vpu_hw_reg_pll_freq_get(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->reg_pll_freq_get(vdev);
+};
+
+static inline u32 vpu_hw_reg_telemetry_offset_get(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->reg_telemetry_offset_get(vdev);
+};
+
+static inline u32 vpu_hw_reg_telemetry_size_get(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->reg_telemetry_size_get(vdev);
+};
+
+static inline u32 vpu_hw_reg_telemetry_enable_get(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->reg_telemetry_enable_get(vdev);
+};
+
+static inline void vpu_hw_reg_db_set(struct vpu_device *vdev, u32 db_id)
+{
+	vdev->hw->ops->reg_db_set(vdev, db_id);
+};
+
+static inline u32 vpu_hw_reg_ipc_rx_addr_get(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->reg_ipc_rx_addr_get(vdev);
+};
+
+static inline u32 vpu_hw_reg_ipc_rx_count_get(struct vpu_device *vdev)
+{
+	return vdev->hw->ops->reg_ipc_rx_count_get(vdev);
+};
+
+static inline void vpu_hw_reg_ipc_tx_set(struct vpu_device *vdev, u32 vpu_addr)
+{
+	vdev->hw->ops->reg_ipc_tx_set(vdev, vpu_addr);
+};
+
+static inline void vpu_hw_irq_clear(struct vpu_device *vdev)
+{
+	vdev->hw->ops->irq_clear(vdev);
+};
+
+static inline void vpu_hw_irq_enable(struct vpu_device *vdev)
+{
+	vdev->hw->ops->irq_enable(vdev);
+};
+
+static inline void vpu_hw_irq_disable(struct vpu_device *vdev)
+{
+	vdev->hw->ops->irq_disable(vdev);
+};
+
+static inline void vpu_hw_init_range(struct vpu_addr_range *range, u64 start, u64 size)
+{
+	range->start = start;
+	range->end = start + size;
+}
+
+static inline u64 vpu_hw_range_size(const struct vpu_addr_range *range)
+{
+	return range->end - range->start;
+}
+
+static inline u8 vpu_hw_get_pll_ratio_in_range(u8 pll_ratio, u8 min, u8 max)
+{
+	return min_t(u8, max_t(u8, pll_ratio, min), max);
+}
+
+#endif /* __VPU_HW_H__ */
diff --git a/drivers/gpu/drm/vpu/vpu_hw_mtl.c b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
new file mode 100644
index 000000000000..bdab4b84d202
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
@@ -0,0 +1,1003 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include "vpu_drv.h"
+#include "vpu_hw_mtl_reg.h"
+#include "vpu_hw_reg_io.h"
+#include "vpu_hw.h"
+
+#define TILE_FUSE_ENABLE_BOTH	     0x0
+#define TILE_FUSE_ENABLE_LOWER	     0x1
+#define TILE_FUSE_ENABLE_UPPER	     0x2
+
+#define TILE_SKU_BOTH_MTL	     0x3630
+#define TILE_SKU_LOWER_MTL	     0x3631
+#define TILE_SKU_UPPER_MTL	     0x3632
+
+/* Work point configuration values */
+#define WP_CONFIG_1_TILE_5_3_RATIO   0x0101
+#define WP_CONFIG_1_TILE_4_3_RATIO   0x0102
+#define WP_CONFIG_2_TILE_5_3_RATIO   0x0201
+#define WP_CONFIG_2_TILE_4_3_RATIO   0x0202
+#define WP_CONFIG_0_TILE_PLL_OFF     0x0000
+
+#define PLL_REF_CLK_FREQ	     (50 * 1000000)
+#define PLL_SIMULATION_FREQ	     (10 * 1000000)
+#define PLL_RATIO_TO_FREQ(x)	     ((x) * PLL_REF_CLK_FREQ)
+#define PLL_DEFAULT_EPP_VALUE	     0x80
+
+#define TIM_SAFE_ENABLE		     0xf1d0dead
+#define TIM_WATCHDOG_RESET_VALUE     0xffffffff
+
+#define TIMEOUT_US		     (150 * USEC_PER_MSEC)
+#define PWR_ISLAND_STATUS_TIMEOUT_US (5 * USEC_PER_MSEC)
+#define PLL_TIMEOUT_US		     (1500 * USEC_PER_MSEC)
+#define IDLE_TIMEOUT_US		     (500 * USEC_PER_MSEC)
+
+#define ICB_0_IRQ_MASK ((REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT)))
+
+#define ICB_1_IRQ_MASK ((REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_2_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_3_INT)) | \
+			(REG_FLD(MTL_VPU_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_4_INT)))
+
+#define ICB_0_1_IRQ_MASK ((((u64)ICB_1_IRQ_MASK) << 32) | ICB_0_IRQ_MASK)
+
+#define BUTTRESS_IRQ_MASK ((REG_FLD(MTL_BUTTRESS_INTERRUPT_STAT, FREQ_CHANGE)) | \
+			   (REG_FLD(MTL_BUTTRESS_INTERRUPT_STAT, ATS_ERR)) | \
+			   (REG_FLD(MTL_BUTTRESS_INTERRUPT_STAT, UFI_ERR)))
+
+#define ITF_FIREWALL_VIOLATION_MASK ((REG_FLD(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, CSS_ROM_CMX)) | \
+				     (REG_FLD(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, CSS_DBG)) | \
+				     (REG_FLD(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, CSS_CTRL)) | \
+				     (REG_FLD(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, DEC400)) | \
+				     (REG_FLD(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, MSS_NCE)) | \
+				     (REG_FLD(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI)) | \
+				     (REG_FLD(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI_CMX)))
+
+static void vpu_hw_read_platform(struct vpu_device *vdev)
+{
+	u32 gen_ctrl = REGV_RD32(MTL_VPU_HOST_SS_GEN_CTRL);
+	u32 platform = REG_GET_FLD(MTL_VPU_HOST_SS_GEN_CTRL, PS, gen_ctrl);
+
+	if (platform == VPU_PLATFORM_SIMICS || platform == VPU_PLATFORM_FPGA)
+		vdev->platform = platform;
+	else
+		vdev->platform = VPU_PLATFORM_SILICON;
+
+	vpu_dbg(MISC, "Platform type: %s (%d)\n",
+		vpu_platform_to_str(vdev->platform), vdev->platform);
+}
+
+static void vpu_hw_wa_init(struct vpu_device *vdev)
+{
+	vdev->wa.punit_disabled = vpu_is_fpga(vdev);
+	vdev->wa.clear_runtime_mem = true;
+}
+
+static void vpu_hw_timeouts_init(struct vpu_device *vdev)
+{
+	if (vpu_is_simics(vdev)) {
+		vdev->timeout.boot = 100000;
+		vdev->timeout.jsm = 50000;
+		vdev->timeout.tdr = 2000000;
+		vdev->timeout.reschedule_suspend = 1000;
+	} else {
+		vdev->timeout.boot = 1000;
+		vdev->timeout.jsm = 500;
+		vdev->timeout.tdr = 2000;
+		vdev->timeout.reschedule_suspend = 10;
+	}
+}
+
+static int vpu_pll_wait_for_cmd_send(struct vpu_device *vdev)
+{
+	return REGB_POLL_FLD(MTL_BUTTRESS_WP_REQ_CMD, SEND, 0, PLL_TIMEOUT_US);
+}
+
+/* Send KMD initiated workpoint change */
+static int vpu_pll_cmd_send(struct vpu_device *vdev, u16 min_ratio, u16 max_ratio,
+			    u16 target_ratio, u16 config)
+{
+	int ret;
+	u32 val;
+
+	ret = vpu_pll_wait_for_cmd_send(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to sync before WP request: %d\n", ret);
+		return ret;
+	}
+
+	val = REGB_RD32(MTL_BUTTRESS_WP_REQ_PAYLOAD0);
+	val = REG_SET_FLD_NUM(MTL_BUTTRESS_WP_REQ_PAYLOAD0, MIN_RATIO, min_ratio, val);
+	val = REG_SET_FLD_NUM(MTL_BUTTRESS_WP_REQ_PAYLOAD0, MAX_RATIO, max_ratio, val);
+	REGB_WR32(MTL_BUTTRESS_WP_REQ_PAYLOAD0, val);
+
+	val = REGB_RD32(MTL_BUTTRESS_WP_REQ_PAYLOAD1);
+	val = REG_SET_FLD_NUM(MTL_BUTTRESS_WP_REQ_PAYLOAD1, TARGET_RATIO, target_ratio, val);
+	val = REG_SET_FLD_NUM(MTL_BUTTRESS_WP_REQ_PAYLOAD1, EPP, PLL_DEFAULT_EPP_VALUE, val);
+	REGB_WR32(MTL_BUTTRESS_WP_REQ_PAYLOAD1, val);
+
+	val = REGB_RD32(MTL_BUTTRESS_WP_REQ_PAYLOAD2);
+	val = REG_SET_FLD_NUM(MTL_BUTTRESS_WP_REQ_PAYLOAD2, CONFIG, config, val);
+	REGB_WR32(MTL_BUTTRESS_WP_REQ_PAYLOAD2, val);
+
+	val = REGB_RD32(MTL_BUTTRESS_WP_REQ_CMD);
+	val = REG_SET_FLD(MTL_BUTTRESS_WP_REQ_CMD, SEND, val);
+	REGB_WR32(MTL_BUTTRESS_WP_REQ_CMD, val);
+
+	ret = vpu_pll_wait_for_cmd_send(vdev);
+	if (ret)
+		vpu_err(vdev, "Failed to sync after WP request: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_pll_wait_for_lock(struct vpu_device *vdev, bool enable)
+{
+	u32 exp_val = enable ? 0x1 : 0x0;
+
+	if (VPU_WA(punit_disabled))
+		return 0;
+
+	return REGB_POLL_FLD(MTL_BUTTRESS_PLL_STATUS, LOCK, exp_val, PLL_TIMEOUT_US);
+}
+
+static int vpu_pll_wait_for_status_ready(struct vpu_device *vdev)
+{
+	if (VPU_WA(punit_disabled))
+		return 0;
+
+	return REGB_POLL_FLD(MTL_BUTTRESS_VPU_STATUS, READY, 1, PLL_TIMEOUT_US);
+}
+
+static u16 vpu_hw_mtl_reg_pll_min_ratio_get(struct vpu_device *vdev)
+{
+	if (VPU_WA(punit_disabled))
+		return 0;
+
+	return REGB_RD32(MTL_BUTTRESS_FMIN_FUSE) & MTL_BUTTRESS_FMIN_FUSE_RATIO_MASK;
+}
+
+static u16 vpu_hw_mtl_reg_pll_max_ratio_get(struct vpu_device *vdev)
+{
+	if (VPU_WA(punit_disabled))
+		return 0;
+
+	return REGB_RD32(MTL_BUTTRESS_FMAX_FUSE) & MTL_BUTTRESS_FMAX_FUSE_RATIO_MASK;
+}
+
+static int vpu_pll_drive(struct vpu_device *vdev, bool enable)
+{
+	int ret;
+	struct vpu_hw_info *hw = vdev->hw;
+	u16 target_ratio;
+	u16 config;
+
+	if (VPU_WA(punit_disabled)) {
+		vpu_dbg(PM, "Skipping PLL request on %s\n", vpu_platform_to_str(vdev->platform));
+		return 0;
+	}
+
+	if (enable) {
+		u8 pll_hw_min_ratio = vpu_hw_mtl_reg_pll_min_ratio_get(vdev);
+		u8 pll_hw_max_ratio = vpu_hw_mtl_reg_pll_max_ratio_get(vdev);
+
+		hw->pll.max_ratio = vpu_hw_get_pll_ratio_in_range(vpu_pll_max_ratio,
+								  pll_hw_min_ratio,
+								  pll_hw_max_ratio);
+		hw->pll.min_ratio = vpu_hw_get_pll_ratio_in_range(vpu_pll_min_ratio,
+								  pll_hw_min_ratio,
+								  pll_hw_max_ratio);
+		if (hw->pll.max_ratio < hw->pll.min_ratio) {
+			vpu_err(vdev, "Invalid pll ratio values, min 0x%x max 0x%x\n",
+				hw->pll.min_ratio, hw->pll.max_ratio);
+			return -EINVAL;
+		}
+
+		target_ratio = hw->pll.min_ratio;
+		config = hw->config;
+	} else {
+		target_ratio = 0;
+		config = 0;
+	}
+
+	vpu_dbg(PM, "PLL workpoint request: %d Hz\n", PLL_RATIO_TO_FREQ(target_ratio));
+
+	ret = vpu_pll_cmd_send(vdev, hw->pll.min_ratio, hw->pll.max_ratio, target_ratio, config);
+	if (ret) {
+		vpu_err(vdev, "Failed to send PLL workpoint request: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_pll_wait_for_lock(vdev, enable);
+	if (ret) {
+		vpu_err(vdev, "Timed out waiting for PLL lock\n");
+		return ret;
+	}
+
+	if (enable) {
+		ret = vpu_pll_wait_for_status_ready(vdev);
+		if (ret) {
+			vpu_err(vdev, "Timed out waiting for PLL ready status\n");
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int vpu_pll_enable(struct vpu_device *vdev)
+{
+	return vpu_pll_drive(vdev, true);
+}
+
+static int vpu_pll_disable(struct vpu_device *vdev)
+{
+	return vpu_pll_drive(vdev, false);
+}
+
+static void vpu_boot_host_ss_rst_clr_assert(struct vpu_device *vdev)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_CPR_RST_CLR);
+
+	val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, TOP_NOC, val);
+	val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, DSS_MAS, val);
+	val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, MSS_MAS, val);
+
+	REGV_WR32(MTL_VPU_HOST_SS_CPR_RST_CLR, val);
+}
+
+static void vpu_boot_host_ss_rst_drive(struct vpu_device *vdev, bool enable)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_CPR_RST_SET);
+
+	if (enable) {
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_SET, TOP_NOC, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_SET, DSS_MAS, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_SET, MSS_MAS, val);
+	} else {
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_CPR_RST_SET, TOP_NOC, val);
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_CPR_RST_SET, DSS_MAS, val);
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_CPR_RST_SET, MSS_MAS, val);
+	}
+
+	REGV_WR32(MTL_VPU_HOST_SS_CPR_RST_SET, val);
+}
+
+static void vpu_boot_host_ss_clk_drive(struct vpu_device *vdev, bool enable)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_CPR_CLK_SET);
+
+	if (enable) {
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_CLK_SET, TOP_NOC, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_CLK_SET, DSS_MAS, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_CLK_SET, MSS_MAS, val);
+	} else {
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_CPR_CLK_SET, TOP_NOC, val);
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_CPR_CLK_SET, DSS_MAS, val);
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_CPR_CLK_SET, MSS_MAS, val);
+	}
+
+	REGV_WR32(MTL_VPU_HOST_SS_CPR_CLK_SET, val);
+}
+
+static int vpu_boot_noc_qreqn_check(struct vpu_device *vdev, u32 exp_val)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_NOC_QREQN);
+
+	if (!REG_TEST_FLD_NUM(MTL_VPU_HOST_SS_NOC_QREQN, TOP_SOCMMIO, exp_val, val))
+		return -EIO;
+
+	return 0;
+}
+
+static int vpu_boot_noc_qacceptn_check(struct vpu_device *vdev, u32 exp_val)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_NOC_QACCEPTN);
+
+	if (!REG_TEST_FLD_NUM(MTL_VPU_HOST_SS_NOC_QACCEPTN, TOP_SOCMMIO, exp_val, val))
+		return -EIO;
+
+	return 0;
+}
+
+static int vpu_boot_noc_qdeny_check(struct vpu_device *vdev, u32 exp_val)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_NOC_QDENY);
+
+	if (!REG_TEST_FLD_NUM(MTL_VPU_HOST_SS_NOC_QDENY, TOP_SOCMMIO, exp_val, val))
+		return -EIO;
+
+	return 0;
+}
+
+static int vpu_boot_top_noc_qreqn_check(struct vpu_device *vdev, u32 exp_val)
+{
+	u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QREQN);
+
+	if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) ||
+	    !REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val))
+		return -EIO;
+
+	return 0;
+}
+
+static int vpu_boot_top_noc_qacceptn_check(struct vpu_device *vdev, u32 exp_val)
+{
+	u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QACCEPTN);
+
+	if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) ||
+	    !REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val))
+		return -EIO;
+
+	return 0;
+}
+
+static int vpu_boot_top_noc_qdeny_check(struct vpu_device *vdev, u32 exp_val)
+{
+	u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QDENY);
+
+	if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) ||
+	    !REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val))
+		return -EIO;
+
+	return 0;
+}
+
+static int vpu_boot_host_ss_configure(struct vpu_device *vdev)
+{
+	vpu_boot_host_ss_rst_clr_assert(vdev);
+
+	return vpu_boot_noc_qreqn_check(vdev, 0x0);
+}
+
+static void vpu_boot_vpu_idle_gen_disable(struct vpu_device *vdev)
+{
+	REGV_WR32(MTL_VPU_HOST_SS_AON_VPU_IDLE_GEN, 0x0);
+}
+
+static int vpu_boot_host_ss_axi_drive(struct vpu_device *vdev, bool enable)
+{
+	int ret;
+	u32 val;
+
+	val = REGV_RD32(MTL_VPU_HOST_SS_NOC_QREQN);
+	if (enable)
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val);
+	else
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val);
+	REGV_WR32(MTL_VPU_HOST_SS_NOC_QREQN, val);
+
+	ret = vpu_boot_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0);
+	if (ret) {
+		vpu_err(vdev, "Failed qacceptn check: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_boot_noc_qdeny_check(vdev, 0x0);
+	if (ret)
+		vpu_err(vdev, "Failed qdeny check: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_boot_host_ss_axi_enable(struct vpu_device *vdev)
+{
+	return vpu_boot_host_ss_axi_drive(vdev, true);
+}
+
+static int vpu_boot_host_ss_axi_disable(struct vpu_device *vdev)
+{
+	return vpu_boot_host_ss_axi_drive(vdev, false);
+}
+
+static int vpu_boot_host_ss_top_noc_drive(struct vpu_device *vdev, bool enable)
+{
+	int ret;
+	u32 val;
+
+	val = REGV_RD32(MTL_VPU_TOP_NOC_QREQN);
+	if (enable) {
+		val = REG_SET_FLD(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, val);
+		val = REG_SET_FLD(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, val);
+	} else {
+		val = REG_CLR_FLD(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, val);
+		val = REG_CLR_FLD(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, val);
+	}
+	REGV_WR32(MTL_VPU_TOP_NOC_QREQN, val);
+
+	ret = vpu_boot_top_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0);
+	if (ret) {
+		vpu_err(vdev, "Failed qacceptn check: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_boot_top_noc_qdeny_check(vdev, 0x0);
+	if (ret)
+		vpu_err(vdev, "Failed qdeny check: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_boot_host_ss_top_noc_enable(struct vpu_device *vdev)
+{
+	return vpu_boot_host_ss_top_noc_drive(vdev, true);
+}
+
+static int vpu_boot_host_ss_top_noc_disable(struct vpu_device *vdev)
+{
+	return vpu_boot_host_ss_top_noc_drive(vdev, false);
+}
+
+static void vpu_boot_pwr_island_trickle_drive(struct vpu_device *vdev, bool enable)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0);
+
+	if (enable)
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, MSS_CPU, val);
+	else
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, MSS_CPU, val);
+
+	REGV_WR32(MTL_VPU_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, val);
+}
+
+static void vpu_boot_pwr_island_drive(struct vpu_device *vdev, bool enable)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_AON_PWR_ISLAND_EN0);
+
+	if (enable)
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_AON_PWR_ISLAND_EN0, MSS_CPU, val);
+	else
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_AON_PWR_ISLAND_EN0, MSS_CPU, val);
+
+	REGV_WR32(MTL_VPU_HOST_SS_AON_PWR_ISLAND_EN0, val);
+}
+
+static int vpu_boot_wait_for_pwr_island_status(struct vpu_device *vdev, u32 exp_val)
+{
+	/* FPGA model (UPF) is not power aware, skip Power Island polling */
+	if (vpu_is_fpga(vdev))
+		return 0;
+
+	return REGV_POLL_FLD(MTL_VPU_HOST_SS_AON_PWR_ISLAND_STATUS0, MSS_CPU,
+			     exp_val, PWR_ISLAND_STATUS_TIMEOUT_US);
+}
+
+static void vpu_boot_pwr_island_isolation_drive(struct vpu_device *vdev, bool enable)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_AON_PWR_ISO_EN0);
+
+	if (enable)
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_AON_PWR_ISO_EN0, MSS_CPU, val);
+	else
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_AON_PWR_ISO_EN0, MSS_CPU, val);
+
+	REGV_WR32(MTL_VPU_HOST_SS_AON_PWR_ISO_EN0, val);
+}
+
+static void vpu_boot_dpu_active_drive(struct vpu_device *vdev, bool enable)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_SS_AON_DPU_ACTIVE);
+
+	if (enable)
+		val = REG_SET_FLD(MTL_VPU_HOST_SS_AON_DPU_ACTIVE, DPU_ACTIVE, val);
+	else
+		val = REG_CLR_FLD(MTL_VPU_HOST_SS_AON_DPU_ACTIVE, DPU_ACTIVE, val);
+
+	REGV_WR32(MTL_VPU_HOST_SS_AON_DPU_ACTIVE, val);
+}
+
+static int vpu_boot_pwr_domain_disable(struct vpu_device *vdev)
+{
+	vpu_boot_dpu_active_drive(vdev, false);
+	vpu_boot_pwr_island_isolation_drive(vdev, true);
+	vpu_boot_pwr_island_trickle_drive(vdev, false);
+	vpu_boot_pwr_island_drive(vdev, false);
+
+	return vpu_boot_wait_for_pwr_island_status(vdev, 0x0);
+}
+
+static int vpu_boot_pwr_domain_enable(struct vpu_device *vdev)
+{
+	int ret;
+
+	vpu_boot_pwr_island_trickle_drive(vdev, true);
+	vpu_boot_pwr_island_drive(vdev, true);
+
+	ret = vpu_boot_wait_for_pwr_island_status(vdev, 0x1);
+	if (ret) {
+		vpu_err(vdev, "Timed out waiting for power island status\n");
+		return ret;
+	}
+
+	ret = vpu_boot_top_noc_qreqn_check(vdev, 0x0);
+	if (ret) {
+		vpu_err(vdev, "Failed qreqn check: %d\n", ret);
+		return ret;
+	}
+
+	vpu_boot_host_ss_clk_drive(vdev, true);
+	vpu_boot_pwr_island_isolation_drive(vdev, false);
+	vpu_boot_host_ss_rst_drive(vdev, true);
+	vpu_boot_dpu_active_drive(vdev, true);
+
+	return ret;
+}
+
+static void vpu_boot_no_snoop_enable(struct vpu_device *vdev)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES);
+
+	val = REG_SET_FLD(MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES, NOSNOOP_OVERRIDE_EN, val);
+	val = REG_SET_FLD(MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES, AW_NOSNOOP_OVERRIDE, val);
+	val = REG_SET_FLD(MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val);
+
+	REGV_WR32(MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES, val);
+}
+
+static void vpu_boot_tbu_mmu_enable(struct vpu_device *vdev)
+{
+	u32 val = REGV_RD32(MTL_VPU_HOST_IF_TBU_MMUSSIDV);
+
+	if (vpu_is_fpga(vdev)) {
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val);
+	} else {
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU1_AWMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU1_ARMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU3_AWMMUSSIDV, val);
+		val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU3_ARMMUSSIDV, val);
+	}
+
+	REGV_WR32(MTL_VPU_HOST_IF_TBU_MMUSSIDV, val);
+}
+
+static void vpu_boot_soc_cpu_boot(struct vpu_device *vdev)
+{
+	u32 val;
+
+	val = REGV_RD32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC);
+	val = REG_SET_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTRUN0, val);
+
+	val = REG_CLR_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTVEC, val);
+	REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
+
+	val = REG_SET_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val);
+	REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
+
+	val = REG_CLR_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val);
+	REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
+}
+
+static int vpu_boot_d0i3_drive(struct vpu_device *vdev, bool enable)
+{
+	int ret;
+	u32 val;
+
+	ret = REGB_POLL_FLD(MTL_BUTTRESS_VPU_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US);
+	if (ret) {
+		vpu_err(vdev, "Failed to sync before D0i3 transition: %d\n", ret);
+		return ret;
+	}
+
+	val = REGB_RD32(MTL_BUTTRESS_VPU_D0I3_CONTROL);
+	if (enable)
+		val = REG_SET_FLD(MTL_BUTTRESS_VPU_D0I3_CONTROL, I3, val);
+	else
+		val = REG_CLR_FLD(MTL_BUTTRESS_VPU_D0I3_CONTROL, I3, val);
+	REGB_WR32(MTL_BUTTRESS_VPU_D0I3_CONTROL, val);
+
+	ret = REGB_POLL_FLD(MTL_BUTTRESS_VPU_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US);
+	if (ret)
+		vpu_err(vdev, "Failed to sync after D0i3 transition: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_hw_mtl_info_init(struct vpu_device *vdev)
+{
+	struct vpu_hw_info *hw = vdev->hw;
+	u32 tile_fuse;
+
+	tile_fuse = REGB_RD32(MTL_BUTTRESS_TILE_FUSE);
+	if (!REG_TEST_FLD(MTL_BUTTRESS_TILE_FUSE, VALID, tile_fuse))
+		vpu_warn(vdev, "Tile Fuse: Invalid (0x%x)\n", tile_fuse);
+
+	hw->tile_fuse = REG_GET_FLD(MTL_BUTTRESS_TILE_FUSE, SKU, tile_fuse);
+	switch (hw->tile_fuse) {
+	case TILE_FUSE_ENABLE_LOWER:
+		hw->sku = TILE_SKU_LOWER_MTL;
+		hw->config = WP_CONFIG_1_TILE_5_3_RATIO;
+		vpu_dbg(MISC, "Tile Fuse: Enable Lower\n");
+		break;
+	case TILE_FUSE_ENABLE_UPPER:
+		hw->sku = TILE_SKU_UPPER_MTL;
+		hw->config = WP_CONFIG_1_TILE_4_3_RATIO;
+		vpu_dbg(MISC, "Tile Fuse: Enable Upper\n");
+		break;
+	case TILE_FUSE_ENABLE_BOTH:
+		hw->sku = TILE_SKU_BOTH_MTL;
+		hw->config = WP_CONFIG_2_TILE_5_3_RATIO;
+		vpu_dbg(MISC, "Tile Fuse: Enable Both\n");
+		break;
+	default:
+		hw->config = WP_CONFIG_0_TILE_PLL_OFF;
+		vpu_dbg(MISC, "Tile Fuse: Disable\n");
+		break;
+	}
+
+	vpu_hw_init_range(&hw->ranges.global_low, 0x80000000, SZ_512M);
+	vpu_hw_init_range(&hw->ranges.global_high, 0x180000000, SZ_2M);
+	vpu_hw_init_range(&hw->ranges.user_low, 0xc0000000, 127 * SZ_1M);
+	vpu_hw_init_range(&hw->ranges.user_high, 0x180000000, SZ_2G);
+	hw->ranges.global_aliased_pio = hw->ranges.user_low;
+
+	return 0;
+}
+
+static int vpu_hw_mtl_reset(struct vpu_device *vdev)
+{
+	int ret;
+	u32 val;
+
+	if (VPU_WA(punit_disabled))
+		return 0;
+
+	ret = REGB_POLL_FLD(MTL_BUTTRESS_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US);
+	if (ret) {
+		vpu_err(vdev, "Timed out waiting for TRIGGER bit\n");
+		return ret;
+	}
+
+	val = REGB_RD32(MTL_BUTTRESS_VPU_IP_RESET);
+	val = REG_SET_FLD(MTL_BUTTRESS_VPU_IP_RESET, TRIGGER, val);
+	REGB_WR32(MTL_BUTTRESS_VPU_IP_RESET, val);
+
+	ret = REGB_POLL_FLD(MTL_BUTTRESS_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US);
+	if (ret)
+		vpu_err(vdev, "Timed out waiting for RESET completion\n");
+
+	return ret;
+}
+
+static int vpu_hw_mtl_d0i3_enable(struct vpu_device *vdev)
+{
+	int ret;
+
+	ret = vpu_boot_d0i3_drive(vdev, true);
+	if (ret)
+		vpu_err(vdev, "Failed to enable D0i3: %d\n", ret);
+
+	udelay(5); /* VPU requires 5 us to complete the transition */
+
+	return ret;
+}
+
+static int vpu_hw_mtl_d0i3_disable(struct vpu_device *vdev)
+{
+	int ret;
+
+	ret = vpu_boot_d0i3_drive(vdev, false);
+	if (ret)
+		vpu_err(vdev, "Failed to disable D0i3: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_hw_mtl_power_up(struct vpu_device *vdev)
+{
+	int ret;
+
+	vpu_hw_read_platform(vdev);
+	vpu_hw_wa_init(vdev);
+	vpu_hw_timeouts_init(vdev);
+
+	ret = vpu_hw_mtl_reset(vdev);
+	if (ret)
+		vpu_warn(vdev, "Failed to reset HW: %d\n", ret);
+
+	ret = vpu_hw_mtl_d0i3_disable(vdev);
+	if (ret)
+		vpu_warn(vdev, "Failed to disable D0I3: %d\n", ret);
+
+	ret = vpu_pll_enable(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to enable PLL: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_boot_host_ss_configure(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to configure host SS: %d\n", ret);
+		return ret;
+	}
+
+	/*
+	 * The control circuitry for vpu_idle indication logic powers up active.
+	 * To prevent an unnecessary low power mode signal from LRT during bring up,
+	 * KMD disables the circuitry prior to bringing up the Main Power island.
+	 */
+	vpu_boot_vpu_idle_gen_disable(vdev);
+
+	ret = vpu_boot_pwr_domain_enable(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to enable power domain: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_boot_host_ss_axi_enable(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to enable AXI: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_boot_host_ss_top_noc_enable(vdev);
+	if (ret)
+		vpu_err(vdev, "Failed to enable TOP NOC: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_hw_mtl_boot_fw(struct vpu_device *vdev)
+{
+	vpu_boot_no_snoop_enable(vdev);
+	vpu_boot_tbu_mmu_enable(vdev);
+	vpu_boot_soc_cpu_boot(vdev);
+
+	return 0;
+}
+
+static bool vpu_hw_mtl_is_idle(struct vpu_device *vdev)
+{
+	u32 val;
+
+	if (VPU_WA(punit_disabled))
+		return true;
+
+	val = REGB_RD32(MTL_BUTTRESS_VPU_STATUS);
+	return REG_TEST_FLD(MTL_BUTTRESS_VPU_STATUS, READY, val) &&
+	       REG_TEST_FLD(MTL_BUTTRESS_VPU_STATUS, IDLE, val);
+}
+
+static int vpu_hw_mtl_power_down(struct vpu_device *vdev)
+{
+	int ret = 0;
+
+	/* FPGA requires manual clearing of IP_Reset bit by enabling quiescent state */
+	if (vpu_is_fpga(vdev)) {
+		if (vpu_boot_host_ss_top_noc_disable(vdev)) {
+			vpu_err(vdev, "Failed to disable TOP NOC\n");
+			ret = -EIO;
+		}
+
+		if (vpu_boot_host_ss_axi_disable(vdev)) {
+			vpu_err(vdev, "Failed to disable AXI\n");
+			ret = -EIO;
+		}
+	}
+
+	if (vpu_boot_pwr_domain_disable(vdev)) {
+		vpu_err(vdev, "Failed to disable power domain\n");
+		ret = -EIO;
+	}
+
+	if (vpu_pll_disable(vdev)) {
+		vpu_err(vdev, "Failed to disable PLL\n");
+		ret = -EIO;
+	}
+
+	if (vpu_hw_mtl_d0i3_enable(vdev))
+		vpu_warn(vdev, "Failed to enable D0I3\n");
+
+	return ret;
+}
+
+static void vpu_hw_mtl_wdt_disable(struct vpu_device *vdev)
+{
+	u32 val;
+
+	/* Enable writing and set non-zero WDT value */
+	REGV_WR32(MTL_VPU_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE);
+	REGV_WR32(MTL_VPU_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE);
+
+	/* Enable writing and disable watchdog timer */
+	REGV_WR32(MTL_VPU_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE);
+	REGV_WR32(MTL_VPU_CPU_SS_TIM_WDOG_EN, 0);
+
+	/* Now clear the timeout interrupt */
+	val = REGV_RD32(MTL_VPU_CPU_SS_TIM_GEN_CONFIG);
+	val = REG_CLR_FLD(MTL_VPU_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val);
+	REGV_WR32(MTL_VPU_CPU_SS_TIM_GEN_CONFIG, val);
+}
+
+/* Register indirect accesses */
+static u32 vpu_hw_mtl_reg_pll_freq_get(struct vpu_device *vdev)
+{
+	u32 pll_curr_ratio;
+
+	pll_curr_ratio = REGB_RD32(MTL_BUTTRESS_CURRENT_PLL);
+	pll_curr_ratio &= MTL_BUTTRESS_CURRENT_PLL_RATIO_MASK;
+
+	if (!vpu_is_silicon(vdev))
+		return PLL_SIMULATION_FREQ;
+
+	return PLL_RATIO_TO_FREQ(pll_curr_ratio);
+}
+
+static u32 vpu_hw_mtl_reg_telemetry_offset_get(struct vpu_device *vdev)
+{
+	return REGB_RD32(MTL_BUTTRESS_VPU_TELEMETRY_OFFSET);
+}
+
+static u32 vpu_hw_mtl_reg_telemetry_size_get(struct vpu_device *vdev)
+{
+	return REGB_RD32(MTL_BUTTRESS_VPU_TELEMETRY_SIZE);
+}
+
+static u32 vpu_hw_mtl_reg_telemetry_enable_get(struct vpu_device *vdev)
+{
+	return REGB_RD32(MTL_BUTTRESS_VPU_TELEMETRY_ENABLE);
+}
+
+static void vpu_hw_mtl_reg_db_set(struct vpu_device *vdev, u32 db_id)
+{
+	u32 reg_stride = MTL_VPU_CPU_SS_DOORBELL_1 - MTL_VPU_CPU_SS_DOORBELL_0;
+	u32 val = BIT(MTL_VPU_CPU_SS_DOORBELL_0_SET_OFFSET);
+
+	REGV_WR32I(MTL_VPU_CPU_SS_DOORBELL_0, reg_stride, db_id, val);
+}
+
+static u32 vpu_hw_mtl_reg_ipc_rx_addr_get(struct vpu_device *vdev)
+{
+	return REGV_RD32(MTL_VPU_HOST_SS_TIM_IPC_FIFO_ATM);
+}
+
+static u32 vpu_hw_mtl_reg_ipc_rx_count_get(struct vpu_device *vdev)
+{
+	u32 count = REGV_RD32_SILENT(MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT);
+
+	return REG_GET_FLD(MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT, FILL_LEVEL, count);
+}
+
+static void vpu_hw_mtl_reg_ipc_tx_set(struct vpu_device *vdev, u32 vpu_addr)
+{
+	REGV_WR32(MTL_VPU_CPU_SS_TIM_IPC_FIFO, vpu_addr);
+}
+
+static void vpu_hw_mtl_irq_clear(struct vpu_device *vdev)
+{
+	REGV_WR64(MTL_VPU_HOST_SS_ICB_CLEAR_0, ICB_0_1_IRQ_MASK);
+}
+
+static void vpu_hw_mtl_irq_enable(struct vpu_device *vdev)
+{
+	REGV_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, ITF_FIREWALL_VIOLATION_MASK);
+	REGV_WR64(MTL_VPU_HOST_SS_ICB_ENABLE_0, ICB_0_1_IRQ_MASK);
+	REGB_WR32(MTL_BUTTRESS_LOCAL_INT_MASK, (u32)~(BIT(1) | BIT(2)));
+	REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x0);
+}
+
+static void vpu_hw_mtl_irq_disable(struct vpu_device *vdev)
+{
+	REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x1);
+	REGB_WR32(MTL_BUTTRESS_LOCAL_INT_MASK, BUTTRESS_IRQ_MASK);
+	REGV_WR64(MTL_VPU_HOST_SS_ICB_ENABLE_0, 0x0ull);
+	REGV_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0ul);
+}
+
+static irqreturn_t vpu_hw_mtl_irq_wdt_nce_handler(struct vpu_device *vdev)
+{
+	vpu_warn_ratelimited(vdev, "WDT NCE irq\n");
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t vpu_hw_mtl_irq_wdt_mss_handler(struct vpu_device *vdev)
+{
+	vpu_warn_ratelimited(vdev, "WDT MSS irq\n");
+
+	vpu_hw_wdt_disable(vdev);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t vpu_hw_mtl_irq_noc_firewall_handler(struct vpu_device *vdev)
+{
+	vpu_warn_ratelimited(vdev, "NOC Firewall irq\n");
+
+	return IRQ_HANDLED;
+}
+
+/* Handler for IRQs from VPU core (irqV) */
+static irqreturn_t vpu_hw_mtl_irqv_handler(struct vpu_device *vdev, int irq)
+{
+	irqreturn_t ret = IRQ_HANDLED;
+	u32 status = REGV_RD32(MTL_VPU_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK;
+
+	REGV_WR32(MTL_VPU_HOST_SS_ICB_CLEAR_0, status);
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, status))
+		ret &= vpu_hw_mtl_irq_wdt_mss_handler(vdev);
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, status))
+		ret &= vpu_hw_mtl_irq_wdt_nce_handler(vdev);
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, status))
+		ret &= vpu_hw_mtl_irq_noc_firewall_handler(vdev);
+
+	return ret;
+}
+
+/* Handler for IRQs from Buttress core (irqB) */
+static irqreturn_t vpu_hw_mtl_irqb_handler(struct vpu_device *vdev, int irq)
+{
+	u32 status = REGB_RD32(MTL_BUTTRESS_INTERRUPT_STAT) & BUTTRESS_IRQ_MASK;
+
+	REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, status);
+
+	if (REG_TEST_FLD(MTL_BUTTRESS_INTERRUPT_STAT, FREQ_CHANGE, status))
+		vpu_dbg(IRQ, "FREQ_CHANGE");
+
+	if (REG_TEST_FLD(MTL_BUTTRESS_INTERRUPT_STAT, ATS_ERR, status)) {
+		vpu_dbg(IRQ, "ATS_ERR 0x%016llx", REGB_RD64(MTL_BUTTRESS_ATS_ERR_LOG_0));
+		REGB_WR32(MTL_BUTTRESS_ATS_ERR_CLEAR, 0x1);
+	}
+
+	if (REG_TEST_FLD(MTL_BUTTRESS_INTERRUPT_STAT, UFI_ERR, status)) {
+		vpu_dbg(IRQ, "UFI_ERR 0x%08x", REGB_RD32(MTL_BUTTRESS_UFI_ERR_LOG));
+		REGB_WR32(MTL_BUTTRESS_UFI_ERR_CLEAR, 0x1);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t vpu_hw_mtl_irq_handler(int irq, void *ptr)
+{
+	struct vpu_device *vdev = ptr;
+	irqreturn_t ret_irqv;
+	irqreturn_t ret_irqb;
+
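+	/* Mask all sources while both VPU (irqV) and Buttress (irqB) interrupts are serviced */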
+	vpu_hw_mtl_irq_disable(vdev);
+
+	ret_irqv = vpu_hw_mtl_irqv_handler(vdev, irq);
+	ret_irqb = vpu_hw_mtl_irqb_handler(vdev, irq);
+
+	vpu_hw_mtl_irq_enable(vdev);
+
+	return ret_irqv & ret_irqb;
+}
+
+const struct vpu_hw_ops vpu_hw_mtl_ops = {
+	.info_init = vpu_hw_mtl_info_init,
+	.power_up = vpu_hw_mtl_power_up,
+	.is_idle = vpu_hw_mtl_is_idle,
+	.power_down = vpu_hw_mtl_power_down,
+	.boot_fw = vpu_hw_mtl_boot_fw,
+	.wdt_disable = vpu_hw_mtl_wdt_disable,
+	.reg_pll_freq_get = vpu_hw_mtl_reg_pll_freq_get,
+	.reg_telemetry_offset_get = vpu_hw_mtl_reg_telemetry_offset_get,
+	.reg_telemetry_size_get = vpu_hw_mtl_reg_telemetry_size_get,
+	.reg_telemetry_enable_get = vpu_hw_mtl_reg_telemetry_enable_get,
+	.reg_db_set = vpu_hw_mtl_reg_db_set,
+	.reg_ipc_rx_addr_get = vpu_hw_mtl_reg_ipc_rx_addr_get,
+	.reg_ipc_rx_count_get = vpu_hw_mtl_reg_ipc_rx_count_get,
+	.reg_ipc_tx_set = vpu_hw_mtl_reg_ipc_tx_set,
+	.irq_clear = vpu_hw_mtl_irq_clear,
+	.irq_enable = vpu_hw_mtl_irq_enable,
+	.irq_disable = vpu_hw_mtl_irq_disable,
+	.irq_handler = vpu_hw_mtl_irq_handler,
+};
diff --git a/drivers/gpu/drm/vpu/vpu_hw_mtl_reg.h b/drivers/gpu/drm/vpu/vpu_hw_mtl_reg.h
new file mode 100644
index 000000000000..c2881a9648c1
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_hw_mtl_reg.h
@@ -0,0 +1,468 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_HW_MTL_REG_H__
+#define __VPU_HW_MTL_REG_H__
+
+#define MTL_BUTTRESS_INTERRUPT_TYPE					0x00000000u
+
+#define MTL_BUTTRESS_INTERRUPT_STAT					0x00000004u
+#define MTL_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_SHIFT			0u
+#define MTL_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_MASK			0x00000001u
+#define MTL_BUTTRESS_INTERRUPT_STAT_ATS_ERR_SHIFT			1u
+#define MTL_BUTTRESS_INTERRUPT_STAT_ATS_ERR_MASK			0x00000002u
+#define MTL_BUTTRESS_INTERRUPT_STAT_UFI_ERR_SHIFT			2u
+#define MTL_BUTTRESS_INTERRUPT_STAT_UFI_ERR_MASK			0x00000004u
+
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD0					0x00000008u
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_SHIFT			0u
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_MASK			0x0000ffffu
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_SHIFT			16u
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_MASK			0xffff0000u
+
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD1					0x0000000cu
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_SHIFT			0u
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK			0x0000ffffu
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD1_EPP_SHIFT				16u
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK				0xffff0000u
+
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD2					0x00000010u
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD2_CONFIG_SHIFT			0u
+#define MTL_BUTTRESS_WP_REQ_PAYLOAD2_CONFIG_MASK			0x0000ffffu
+
+#define MTL_BUTTRESS_WP_REQ_CMD						0x00000014u
+#define MTL_BUTTRESS_WP_REQ_CMD_SEND_SHIFT				0u
+#define MTL_BUTTRESS_WP_REQ_CMD_SEND_MASK				0x00000001u
+
+#define MTL_BUTTRESS_WP_DOWNLOAD					0x00000018u
+#define MTL_BUTTRESS_WP_DOWNLOAD_TARGET_RATIO_SHIFT			0u
+#define MTL_BUTTRESS_WP_DOWNLOAD_TARGET_RATIO_MASK			0x0000ffffu
+
+#define MTL_BUTTRESS_CURRENT_PLL					0x0000001cu
+#define MTL_BUTTRESS_CURRENT_PLL_RATIO_SHIFT				0u
+#define MTL_BUTTRESS_CURRENT_PLL_RATIO_MASK				0x0000ffffu
+
+#define MTL_BUTTRESS_PLL_ENABLE						0x00000020u
+
+#define MTL_BUTTRESS_FMIN_FUSE						0x00000024u
+#define MTL_BUTTRESS_FMIN_FUSE_RATIO_SHIFT				0u
+#define MTL_BUTTRESS_FMIN_FUSE_RATIO_MASK				0x0000ffffu
+
+#define MTL_BUTTRESS_FMAX_FUSE						0x00000028u
+#define MTL_BUTTRESS_FMAX_FUSE_RATIO_SHIFT				0u
+#define MTL_BUTTRESS_FMAX_FUSE_RATIO_MASK				0x0000ffffu
+
+#define MTL_BUTTRESS_TILE_FUSE						0x0000002cu
+#define MTL_BUTTRESS_TILE_FUSE_VALID_SHIFT				0u
+#define MTL_BUTTRESS_TILE_FUSE_VALID_MASK				0x00000001u
+#define MTL_BUTTRESS_TILE_FUSE_SKU_SHIFT				1u
+#define MTL_BUTTRESS_TILE_FUSE_SKU_MASK					0x00000006u
+
+#define MTL_BUTTRESS_LOCAL_INT_MASK					0x00000030u
+#define MTL_BUTTRESS_GLOBAL_INT_MASK					0x00000034u
+
+#define MTL_BUTTRESS_PLL_STATUS						0x00000040u
+#define MTL_BUTTRESS_PLL_STATUS_LOCK_SHIFT				1u
+#define MTL_BUTTRESS_PLL_STATUS_LOCK_MASK				0x00000002u
+
+#define MTL_BUTTRESS_VPU_STATUS						0x00000044u
+#define MTL_BUTTRESS_VPU_STATUS_READY_SHIFT				0u
+#define MTL_BUTTRESS_VPU_STATUS_READY_MASK				0x00000001u
+#define MTL_BUTTRESS_VPU_STATUS_IDLE_SHIFT				1u
+#define MTL_BUTTRESS_VPU_STATUS_IDLE_MASK				0x00000002u
+
+#define MTL_BUTTRESS_VPU_D0I3_CONTROL					0x00000060u
+#define MTL_BUTTRESS_VPU_D0I3_CONTROL_INPROGRESS_SHIFT			0u
+#define MTL_BUTTRESS_VPU_D0I3_CONTROL_INPROGRESS_MASK			0x00000001u
+#define MTL_BUTTRESS_VPU_D0I3_CONTROL_I3_SHIFT				2u
+#define MTL_BUTTRESS_VPU_D0I3_CONTROL_I3_MASK				0x00000004u
+
+#define MTL_BUTTRESS_VPU_IP_RESET					0x00000050u
+#define MTL_BUTTRESS_VPU_IP_RESET_TRIGGER_SHIFT				0u
+#define MTL_BUTTRESS_VPU_IP_RESET_TRIGGER_MASK				0x00000001u
+
+#define MTL_BUTTRESS_VPU_TELEMETRY_OFFSET				0x00000080u
+#define MTL_BUTTRESS_VPU_TELEMETRY_SIZE					0x00000084u
+#define MTL_BUTTRESS_VPU_TELEMETRY_ENABLE				0x00000088u
+
+#define MTL_BUTTRESS_ATS_ERR_LOG_0					0x000000a0u
+#define MTL_BUTTRESS_ATS_ERR_LOG_1					0x000000a4u
+#define MTL_BUTTRESS_ATS_ERR_CLEAR					0x000000a8u
+#define MTL_BUTTRESS_UFI_ERR_LOG					0x000000b0u
+#define MTL_BUTTRESS_UFI_ERR_CLEAR					0x000000b4u
+
+#define MTL_VPU_HOST_SS_CPR_CLK_SET					0x00000084u
+#define MTL_VPU_HOST_SS_CPR_CLK_SET_TOP_NOC_SHIFT			1u
+#define MTL_VPU_HOST_SS_CPR_CLK_SET_TOP_NOC_MASK			0x00000002u
+#define MTL_VPU_HOST_SS_CPR_CLK_SET_DSS_MAS_SHIFT			10u
+#define MTL_VPU_HOST_SS_CPR_CLK_SET_DSS_MAS_MASK			0x00000400u
+#define MTL_VPU_HOST_SS_CPR_CLK_SET_MSS_MAS_SHIFT			11u
+#define MTL_VPU_HOST_SS_CPR_CLK_SET_MSS_MAS_MASK			0x00000800u
+
+#define MTL_VPU_HOST_SS_CPR_RST_SET					0x00000094u
+#define MTL_VPU_HOST_SS_CPR_RST_SET_TOP_NOC_SHIFT			1u
+#define MTL_VPU_HOST_SS_CPR_RST_SET_TOP_NOC_MASK			0x00000002u
+#define MTL_VPU_HOST_SS_CPR_RST_SET_DSS_MAS_SHIFT			10u
+#define MTL_VPU_HOST_SS_CPR_RST_SET_DSS_MAS_MASK			0x00000400u
+#define MTL_VPU_HOST_SS_CPR_RST_SET_MSS_MAS_SHIFT			11u
+#define MTL_VPU_HOST_SS_CPR_RST_SET_MSS_MAS_MASK			0x00000800u
+
+#define MTL_VPU_HOST_SS_CPR_RST_CLR					0x00000098u
+#define MTL_VPU_HOST_SS_CPR_RST_CLR_TOP_NOC_SHIFT			1u
+#define MTL_VPU_HOST_SS_CPR_RST_CLR_TOP_NOC_MASK			0x00000002u
+#define MTL_VPU_HOST_SS_CPR_RST_CLR_DSS_MAS_SHIFT			10u
+#define MTL_VPU_HOST_SS_CPR_RST_CLR_DSS_MAS_MASK			0x00000400u
+#define MTL_VPU_HOST_SS_CPR_RST_CLR_MSS_MAS_SHIFT			11u
+#define MTL_VPU_HOST_SS_CPR_RST_CLR_MSS_MAS_MASK			0x00000800u
+
+#define MTL_VPU_HOST_SS_HW_VERSION					0x00000108u
+#define MTL_VPU_HOST_SS_HW_VERSION_SOC_REVISION_SHIFT			0u
+#define MTL_VPU_HOST_SS_HW_VERSION_SOC_REVISION_MASK			0x000000ffu
+#define MTL_VPU_HOST_SS_HW_VERSION_SOC_NUMBER_SHIFT			8u
+#define MTL_VPU_HOST_SS_HW_VERSION_SOC_NUMBER_MASK			0x0000ff00u
+#define MTL_VPU_HOST_SS_HW_VERSION_VPU_GENERATION_SHIFT			16u
+#define MTL_VPU_HOST_SS_HW_VERSION_VPU_GENERATION_MASK			0x00ff0000u
+
+#define MTL_VPU_HOST_SS_SW_VERSION					0x0000010cu
+
+#define MTL_VPU_HOST_SS_GEN_CTRL					0x00000118u
+#define MTL_VPU_HOST_SS_GEN_CTRL_PS_SHIFT				29u
+#define MTL_VPU_HOST_SS_GEN_CTRL_PS_MASK				0xe0000000u
+
+#define MTL_VPU_HOST_SS_NOC_QREQN					0x00000154u
+#define MTL_VPU_HOST_SS_NOC_QREQN_TOP_SOCMMIO_SHIFT			0u
+#define MTL_VPU_HOST_SS_NOC_QREQN_TOP_SOCMMIO_MASK			0x00000001u
+
+#define MTL_VPU_HOST_SS_NOC_QACCEPTN					0x00000158u
+#define MTL_VPU_HOST_SS_NOC_QACCEPTN_TOP_SOCMMIO_SHIFT			0u
+#define MTL_VPU_HOST_SS_NOC_QACCEPTN_TOP_SOCMMIO_MASK			0x00000001u
+
+#define MTL_VPU_HOST_SS_NOC_QDENY					0x0000015cu
+#define MTL_VPU_HOST_SS_NOC_QDENY_TOP_SOCMMIO_SHIFT			0u
+#define MTL_VPU_HOST_SS_NOC_QDENY_TOP_SOCMMIO_MASK			0x00000001u
+
+#define MTL_VPU_TOP_NOC_QREQN						0x00000160u
+#define MTL_VPU_TOP_NOC_QREQN_CPU_CTRL_SHIFT				0u
+#define MTL_VPU_TOP_NOC_QREQN_CPU_CTRL_MASK				0x00000001u
+#define MTL_VPU_TOP_NOC_QREQN_HOSTIF_L2CACHE_SHIFT			1u
+#define MTL_VPU_TOP_NOC_QREQN_HOSTIF_L2CACHE_MASK			0x00000002u
+
+#define MTL_VPU_TOP_NOC_QACCEPTN					0x00000164u
+#define MTL_VPU_TOP_NOC_QACCEPTN_CPU_CTRL_SHIFT				0u
+#define MTL_VPU_TOP_NOC_QACCEPTN_CPU_CTRL_MASK				0x00000001u
+#define MTL_VPU_TOP_NOC_QACCEPTN_HOSTIF_L2CACHE_SHIFT			1u
+#define MTL_VPU_TOP_NOC_QACCEPTN_HOSTIF_L2CACHE_MASK			0x00000002u
+
+#define MTL_VPU_TOP_NOC_QDENY						0x00000168u
+#define MTL_VPU_TOP_NOC_QDENY_CPU_CTRL_SHIFT				0u
+#define MTL_VPU_TOP_NOC_QDENY_CPU_CTRL_MASK				0x00000001u
+#define MTL_VPU_TOP_NOC_QDENY_HOSTIF_L2CACHE_SHIFT			1u
+#define MTL_VPU_TOP_NOC_QDENY_HOSTIF_L2CACHE_MASK			0x00000002u
+
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN					0x00000170u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_CSS_ROM_CMX_SHIFT			0u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_CSS_ROM_CMX_MASK			0x00000001u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_CSS_DBG_SHIFT			1u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_CSS_DBG_MASK			0x00000002u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_CSS_CTRL_SHIFT			2u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_CSS_CTRL_MASK			0x00000004u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_DEC400_SHIFT			3u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_DEC400_MASK			0x00000008u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_MSS_NCE_SHIFT			4u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_MSS_NCE_MASK			0x00000010u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_MSS_MBI_SHIFT			5u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_MSS_MBI_MASK			0x00000020u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_MSS_MBI_CMX_SHIFT			6u
+#define MTL_VPU_HOST_SS_FW_SOC_IRQ_EN_MSS_MBI_CMX_MASK			0x00000040u
+
+#define MTL_VPU_HOST_SS_ICB_STATUS_0					0x00010210u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_0_INT_SHIFT			0u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_0_INT_MASK			0x00000001u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_1_INT_SHIFT			1u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_1_INT_MASK			0x00000002u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_2_INT_SHIFT			2u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_2_INT_MASK			0x00000004u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_3_INT_SHIFT			3u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_TIMER_3_INT_MASK			0x00000008u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_HOST_IPC_FIFO_INT_SHIFT		4u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_HOST_IPC_FIFO_INT_MASK		0x00000010u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_MMU_IRQ_0_INT_SHIFT		5u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_MMU_IRQ_0_INT_MASK			0x00000020u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_MMU_IRQ_1_INT_SHIFT		6u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_MMU_IRQ_1_INT_MASK			0x00000040u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_MMU_IRQ_2_INT_SHIFT		7u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_MMU_IRQ_2_INT_MASK			0x00000080u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_NOC_FIREWALL_INT_SHIFT		8u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_NOC_FIREWALL_INT_MASK		0x00000100u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_0_INT_SHIFT	30u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_0_INT_MASK	0x40000000u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_1_INT_SHIFT	31u
+#define MTL_VPU_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_1_INT_MASK	0x80000000u
+
+#define MTL_VPU_HOST_SS_ICB_STATUS_1					0x00010214u
+#define MTL_VPU_HOST_SS_ICB_STATUS_1_CPU_INT_REDIRECT_2_INT_SHIFT	0u
+#define MTL_VPU_HOST_SS_ICB_STATUS_1_CPU_INT_REDIRECT_2_INT_MASK	0x00000001u
+#define MTL_VPU_HOST_SS_ICB_STATUS_1_CPU_INT_REDIRECT_3_INT_SHIFT	1u
+#define MTL_VPU_HOST_SS_ICB_STATUS_1_CPU_INT_REDIRECT_3_INT_MASK	0x00000002u
+#define MTL_VPU_HOST_SS_ICB_STATUS_1_CPU_INT_REDIRECT_4_INT_SHIFT	2u
+#define MTL_VPU_HOST_SS_ICB_STATUS_1_CPU_INT_REDIRECT_4_INT_MASK	0x00000004u
+
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0					0x00010220u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_0_INT_SHIFT			0u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_0_INT_MASK			0x00000001u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_1_INT_SHIFT			1u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_1_INT_MASK			0x00000002u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_2_INT_SHIFT			2u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_2_INT_MASK			0x00000004u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_3_INT_SHIFT			3u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_TIMER_3_INT_MASK			0x00000008u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_HOST_IPC_FIFO_INT_SHIFT		4u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_HOST_IPC_FIFO_INT_MASK		0x00000010u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_MMU_IRQ_0_INT_SHIFT			5u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_MMU_IRQ_0_INT_MASK			0x00000020u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_MMU_IRQ_1_INT_SHIFT			6u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_MMU_IRQ_1_INT_MASK			0x00000040u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_MMU_IRQ_2_INT_SHIFT			7u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_MMU_IRQ_2_INT_MASK			0x00000080u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_NOC_FIREWALL_INT_SHIFT		8u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_NOC_FIREWALL_INT_MASK		0x00000100u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_CPU_INT_REDIRECT_0_INT_SHIFT	30u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_CPU_INT_REDIRECT_0_INT_MASK		0x40000000u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_CPU_INT_REDIRECT_1_INT_SHIFT	31u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_0_CPU_INT_REDIRECT_1_INT_MASK		0x80000000u
+
+#define MTL_VPU_HOST_SS_ICB_CLEAR_1					0x00010224u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_1_CPU_INT_REDIRECT_2_INT_SHIFT	0u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_1_CPU_INT_REDIRECT_2_INT_MASK		0x00000001u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_1_CPU_INT_REDIRECT_3_INT_SHIFT	1u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_1_CPU_INT_REDIRECT_3_INT_MASK		0x00000002u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_1_CPU_INT_REDIRECT_4_INT_SHIFT	2u
+#define MTL_VPU_HOST_SS_ICB_CLEAR_1_CPU_INT_REDIRECT_4_INT_MASK		0x00000004u
+
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0					0x00010240u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_0_INT_SHIFT			0u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_0_INT_MASK			0x00000001u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_1_INT_SHIFT			1u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_1_INT_MASK			0x00000002u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_2_INT_SHIFT			2u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_2_INT_MASK			0x00000004u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_3_INT_SHIFT			3u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_TIMER_3_INT_MASK			0x00000008u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_HOST_IPC_FIFO_INT_SHIFT		4u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_HOST_IPC_FIFO_INT_MASK		0x00000010u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_MMU_IRQ_0_INT_SHIFT		5u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_MMU_IRQ_0_INT_MASK			0x00000020u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_MMU_IRQ_1_INT_SHIFT		6u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_MMU_IRQ_1_INT_MASK			0x00000040u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_MMU_IRQ_2_INT_SHIFT		7u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_MMU_IRQ_2_INT_MASK			0x00000080u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_NOC_FIREWALL_INT_SHIFT		8u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_NOC_FIREWALL_INT_MASK		0x00000100u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_CPU_INT_REDIRECT_0_INT_SHIFT	30u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_CPU_INT_REDIRECT_0_INT_MASK	0x40000000u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_CPU_INT_REDIRECT_1_INT_SHIFT	31u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_0_CPU_INT_REDIRECT_1_INT_MASK	0x80000000u
+
+#define MTL_VPU_HOST_SS_ICB_ENABLE_1					0x00010244u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_1_CPU_INT_REDIRECT_2_INT_SHIFT	0u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_1_CPU_INT_REDIRECT_2_INT_MASK	0x00000001u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_1_CPU_INT_REDIRECT_3_INT_SHIFT	1u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_1_CPU_INT_REDIRECT_3_INT_MASK	0x00000002u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_1_CPU_INT_REDIRECT_4_INT_SHIFT	2u
+#define MTL_VPU_HOST_SS_ICB_ENABLE_1_CPU_INT_REDIRECT_4_INT_MASK	0x00000004u
+
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_ATM				0x000200f4u
+
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT				0x000200fcu
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_DEFAULT			0x00000000u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_READ_POINTER_SHIFT		0u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_READ_POINTER_MASK		0x000000ffu
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_READ_POINTER_DEFAULT		0x00000000u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_WRITE_POINTER_SHIFT		8u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_WRITE_POINTER_MASK		0x0000ff00u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_WRITE_POINTER_DEFAULT		0x00000000u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_FILL_LEVEL_SHIFT		16u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_FILL_LEVEL_MASK		0x00ff0000u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_FILL_LEVEL_DEFAULT		0x00000000u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_RSVD0_SHIFT			24u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_RSVD0_MASK			0xff000000u
+#define MTL_VPU_HOST_SS_TIM_IPC_FIFO_STAT_RSVD0_DEFAULT			0x00000000u
+
+#define MTL_VPU_HOST_SS_AON_PWR_ISO_EN0					0x00030020u
+#define MTL_VPU_HOST_SS_AON_PWR_ISO_EN0_MSS_CPU_SHIFT			3u
+#define MTL_VPU_HOST_SS_AON_PWR_ISO_EN0_MSS_CPU_MASK			0x00000008u
+
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_EN0				0x00030024u
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_EN0_MSS_CPU_SHIFT		3u
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_EN0_MSS_CPU_MASK			0x00000008u
+
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0			0x00030028u
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0_MSS_CPU_SHIFT	3u
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0_MSS_CPU_MASK		0x00000008u
+
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_STATUS0				0x0003002cu
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_STATUS0_MSS_CPU_SHIFT		3u
+#define MTL_VPU_HOST_SS_AON_PWR_ISLAND_STATUS0_MSS_CPU_MASK		0x00000008u
+
+#define MTL_VPU_HOST_SS_AON_VPU_IDLE_GEN				0x00030200u
+#define MTL_VPU_HOST_SS_AON_VPU_IDLE_GEN_EN_SHIFT			0u
+#define MTL_VPU_HOST_SS_AON_VPU_IDLE_GEN_EN_MASK			0x00000001u
+
+#define MTL_VPU_HOST_SS_AON_DPU_ACTIVE					0x00030204u
+#define MTL_VPU_HOST_SS_AON_DPU_ACTIVE_DPU_ACTIVE_SHIFT			0u
+#define MTL_VPU_HOST_SS_AON_DPU_ACTIVE_DPU_ACTIVE_MASK			0x00000001u
+
+#define MTL_VPU_HOST_SS_LOADING_ADDRESS_LO				0x00041040u
+#define MTL_VPU_HOST_SS_LOADING_ADDRESS_LO_DONE_SHIFT			0u
+#define MTL_VPU_HOST_SS_LOADING_ADDRESS_LO_DONE_MASK			0x00000001u
+#define MTL_VPU_HOST_SS_LOADING_ADDRESS_LO_IOSF_RS_ID_SHIFT		1u
+#define MTL_VPU_HOST_SS_LOADING_ADDRESS_LO_IOSF_RS_ID_MASK		0x00000006u
+#define MTL_VPU_HOST_SS_LOADING_ADDRESS_LO_IMAGE_LOCATION_SHIFT		3u
+#define MTL_VPU_HOST_SS_LOADING_ADDRESS_LO_IMAGE_LOCATION_MASK		0xfffffff8u
+
+#define MTL_VPU_HOST_SS_WORKPOINT_CONFIG_MIRROR				0x00082020u
+#define MTL_VPU_HOST_SS_WORKPOINT_CONFIG_MIRROR_FINAL_PLL_FREQ_SHIFT	0u
+#define MTL_VPU_HOST_SS_WORKPOINT_CONFIG_MIRROR_FINAL_PLL_FREQ_MASK	0x0000ffffu
+#define MTL_VPU_HOST_SS_WORKPOINT_CONFIG_MIRROR_CONFIG_ID_SHIFT		16u
+#define MTL_VPU_HOST_SS_WORKPOINT_CONFIG_MIRROR_CONFIG_ID_MASK		0xffff0000u
+
+#define MTL_VPU_HOST_MMU_IDR0						0x00200000u
+#define MTL_VPU_HOST_MMU_IDR1						0x00200004u
+#define MTL_VPU_HOST_MMU_IDR3						0x0020000cu
+#define MTL_VPU_HOST_MMU_IDR5						0x00200014u
+#define MTL_VPU_HOST_MMU_CR0						0x00200020u
+#define MTL_VPU_HOST_MMU_CR0ACK						0x00200024u
+#define MTL_VPU_HOST_MMU_CR1						0x00200028u
+#define MTL_VPU_HOST_MMU_CR2						0x0020002cu
+#define MTL_VPU_HOST_MMU_IRQ_CTRL					0x00200050u
+#define MTL_VPU_HOST_MMU_IRQ_CTRLACK					0x00200054u
+
+#define MTL_VPU_HOST_MMU_GERROR						0x00200060u
+#define MTL_VPU_HOST_MMU_GERROR_CMDQ_SHIFT				0u
+#define MTL_VPU_HOST_MMU_GERROR_CMDQ_MASK				0x00000001u
+#define MTL_VPU_HOST_MMU_GERROR_EVTQ_ABT_SHIFT				2u
+#define MTL_VPU_HOST_MMU_GERROR_EVTQ_ABT_MASK				0x00000004u
+#define MTL_VPU_HOST_MMU_GERROR_PRIQ_ABT_SHIFT				3u
+#define MTL_VPU_HOST_MMU_GERROR_PRIQ_ABT_MASK				0x00000008u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_CMDQ_ABT_SHIFT			4u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_CMDQ_ABT_MASK			0x00000010u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_EVTQ_ABT_SHIFT			5u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_EVTQ_ABT_MASK			0x00000020u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_PRIQ_ABT_SHIFT			6u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_PRIQ_ABT_MASK			0x00000040u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_ABT_SHIFT				7u
+#define MTL_VPU_HOST_MMU_GERROR_MSI_ABT_MASK				0x00000080u
+#define MTL_VPU_HOST_MMU_GERROR_SFM_SHIFT				8u
+#define MTL_VPU_HOST_MMU_GERROR_SFM_MASK				0x00000100u
+#define MTL_VPU_HOST_MMU_GERRORN					0x00200064u
+
+#define MTL_VPU_HOST_MMU_STRTAB_BASE					0x00200080u
+#define MTL_VPU_HOST_MMU_STRTAB_BASE_CFG				0x00200088u
+#define MTL_VPU_HOST_MMU_CMDQ_BASE					0x00200090u
+#define MTL_VPU_HOST_MMU_CMDQ_PROD					0x00200098u
+#define MTL_VPU_HOST_MMU_CMDQ_CONS					0x0020009cu
+#define MTL_VPU_HOST_MMU_EVTQ_BASE					0x002000a0u
+#define MTL_VPU_HOST_MMU_EVTQ_PROD					0x002000a8u
+#define MTL_VPU_HOST_MMU_EVTQ_CONS					0x002000acu
+#define MTL_VPU_HOST_MMU_EVTQ_PROD_SEC					(0x002000a8u + SZ_64K)
+#define MTL_VPU_HOST_MMU_EVTQ_CONS_SEC					(0x002000acu + SZ_64K)
+
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES				0x00360000u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_CACHE_OVERRIDE_EN_SHIFT	0u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_CACHE_OVERRIDE_EN_MASK	0x00000001u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_AWCACHE_OVERRIDE_SHIFT	1u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_AWCACHE_OVERRIDE_MASK		0x00000002u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_ARCACHE_OVERRIDE_SHIFT	2u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_ARCACHE_OVERRIDE_MASK		0x00000004u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_NOSNOOP_OVERRIDE_EN_SHIFT	3u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_NOSNOOP_OVERRIDE_EN_MASK	0x00000008u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_AW_NOSNOOP_OVERRIDE_SHIFT	4u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_AW_NOSNOOP_OVERRIDE_MASK	0x00000010u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_AR_NOSNOOP_OVERRIDE_SHIFT	5u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_AR_NOSNOOP_OVERRIDE_MASK	0x00000020u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_PTW_AW_CONTEXT_FLAG_SHIFT	6u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_PTW_AW_CONTEXT_FLAG_MASK	0x000007c0u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_PTW_AR_CONTEXT_FLAG_SHIFT	11u
+#define MTL_VPU_HOST_IF_TCU_PTW_OVERRIDES_PTW_AR_CONTEXT_FLAG_MASK	0x0000f800u
+
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV					0x00360004u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU0_AWMMUSSIDV_SHIFT		0u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU0_AWMMUSSIDV_MASK		0x00000001u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU0_ARMMUSSIDV_SHIFT		1u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU0_ARMMUSSIDV_MASK		0x00000002u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU1_AWMMUSSIDV_SHIFT		2u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU1_AWMMUSSIDV_MASK		0x00000004u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU1_ARMMUSSIDV_SHIFT		3u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU1_ARMMUSSIDV_MASK		0x00000008u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU2_AWMMUSSIDV_SHIFT		4u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU2_AWMMUSSIDV_MASK		0x00000010u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU2_ARMMUSSIDV_SHIFT		5u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU2_ARMMUSSIDV_MASK		0x00000020u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU3_AWMMUSSIDV_SHIFT		6u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU3_AWMMUSSIDV_MASK		0x00000040u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU3_ARMMUSSIDV_SHIFT		7u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU3_ARMMUSSIDV_MASK		0x00000080u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU4_AWMMUSSIDV_SHIFT		8u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU4_AWMMUSSIDV_MASK		0x00000100u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU4_ARMMUSSIDV_SHIFT		9u
+#define MTL_VPU_HOST_IF_TBU_MMUSSIDV_TBU4_ARMMUSSIDV_MASK		0x00000200u
+
+#define MTL_VPU_CPU_SS_DSU_LEON_RT_BASE					0x04000000u
+#define MTL_VPU_CPU_SS_DSU_LEON_RT_DSU_CTRL				0x04000000u
+#define MTL_VPU_CPU_SS_DSU_LEON_RT_PC_REG				0x04400010u
+#define MTL_VPU_CPU_SS_DSU_LEON_RT_NPC_REG				0x04400014u
+#define MTL_VPU_CPU_SS_DSU_LEON_RT_DSU_TRAP_REG				0x04400020u
+
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_CLK_SET				0x06010004u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_CLK_SET_CPU_DSU_SHIFT			1u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_CLK_SET_CPU_DSU_MASK			0x00000002u
+
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_RST_CLR				0x06010018u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_RST_CLR_CPU_DSU_SHIFT			1u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_RST_CLR_CPU_DSU_MASK			0x00000002u
+
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC				0x06010040u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN0_SHIFT	0u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN0_MASK		0x00000001u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME0_SHIFT	1u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME0_MASK		0x00000002u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN1_SHIFT	2u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN1_MASK		0x00000004u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME1_SHIFT	3u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME1_MASK		0x00000008u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTVEC_SHIFT		4u
+#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTVEC_MASK		0xfffffff0u
+
+#define MTL_VPU_CPU_SS_TIM_WATCHDOG					0x0602009cu
+#define MTL_VPU_CPU_SS_TIM_WDOG_EN					0x060200a4u
+#define MTL_VPU_CPU_SS_TIM_SAFE						0x060200a8u
+
+#define MTL_VPU_CPU_SS_TIM_IPC_FIFO					0x060200f0u
+#define MTL_VPU_CPU_SS_TIM_IPC_FIFO_OF_FLAG0				0x06020100u
+#define MTL_VPU_CPU_SS_TIM_IPC_FIFO_OF_FLAG1				0x06020104u
+
+#define MTL_VPU_CPU_SS_TIM_GEN_CONFIG					0x06021008u
+#define MTL_VPU_CPU_SS_TIM_GEN_CONFIG_WDOG_TO_INT_CLR_OFFSET		9u
+#define MTL_VPU_CPU_SS_TIM_GEN_CONFIG_WDOG_TO_INT_CLR_MASK		0x00000200u
+
+#define MTL_VPU_CPU_SS_DOORBELL_0					0x06300000u
+#define MTL_VPU_CPU_SS_DOORBELL_0_SET_OFFSET				0u
+#define MTL_VPU_CPU_SS_DOORBELL_0_SET_MASK				0x00000001u
+
+#define MTL_VPU_CPU_SS_DOORBELL_1					0x06301000u
+
+#endif /* __VPU_HW_MTL_REG_H__ */
diff --git a/drivers/gpu/drm/vpu/vpu_hw_reg_io.h b/drivers/gpu/drm/vpu/vpu_hw_reg_io.h
new file mode 100644
index 000000000000..f12262553bcf
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_hw_reg_io.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_HW_REG_IO_H__
+#define __VPU_HW_REG_IO_H__
+
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+
+#include "vpu_drv.h"
+
+#define REG_POLL_SLEEP_US 50
+
+#define REGB_RD32(reg)          vpu_hw_reg_rd32(vdev, vdev->regb, (reg), #reg, __func__)
+#define REGB_RD32_SILENT(reg)   readl(vdev->regb + (reg))
+#define REGB_RD64(reg)          vpu_hw_reg_rd64(vdev, vdev->regb, (reg), #reg, __func__)
+#define REGB_WR32(reg, val)     vpu_hw_reg_wr32(vdev, vdev->regb, (reg), (val), #reg, __func__)
+#define REGB_WR64(reg, val)     vpu_hw_reg_wr64(vdev, vdev->regb, (reg), (val), #reg, __func__)
+
+#define REGV_RD32(reg)          vpu_hw_reg_rd32(vdev, vdev->regv, (reg), #reg, __func__)
+#define REGV_RD32_SILENT(reg)   readl(vdev->regv + (reg))
+#define REGV_RD64(reg)          vpu_hw_reg_rd64(vdev, vdev->regv, (reg), #reg, __func__)
+#define REGV_WR32(reg, val)     vpu_hw_reg_wr32(vdev, vdev->regv, (reg), (val), #reg, __func__)
+#define REGV_WR64(reg, val)     vpu_hw_reg_wr64(vdev, vdev->regv, (reg), (val), #reg, __func__)
+
+#define REGV_WR32I(reg, stride, index, val) \
+	vpu_hw_reg_wr32_index(vdev, vdev->regv, (reg), (stride), (index), (val), #reg, __func__)
+
+#define REG_FLD(REG, FLD) \
+	(REG##_##FLD##_MASK)
+#define REG_FLD_NUM(REG, FLD, num) \
+	FIELD_PREP(REG##_##FLD##_MASK, num)
+#define REG_GET_FLD(REG, FLD, val) \
+	FIELD_GET(REG##_##FLD##_MASK, val)
+#define REG_CLR_FLD(REG, FLD, val) \
+	((val) & ~(REG##_##FLD##_MASK))
+#define REG_SET_FLD(REG, FLD, val) \
+	((val) | (REG##_##FLD##_MASK))
+#define REG_SET_FLD_NUM(REG, FLD, num, val) \
+	(((val) & ~(REG##_##FLD##_MASK)) | FIELD_PREP(REG##_##FLD##_MASK, num))
+#define REG_TEST_FLD(REG, FLD, val) \
+	((REG##_##FLD##_MASK) == ((val) & (REG##_##FLD##_MASK)))
+#define REG_TEST_FLD_NUM(REG, FLD, num, val) \
+	((num) == FIELD_GET(REG##_##FLD##_MASK, val))
+
+#define REGB_POLL(reg, var, cond, timeout_us) \
+	read_poll_timeout(REGB_RD32_SILENT, var, cond, REG_POLL_SLEEP_US, timeout_us, false, reg)
+
+#define REGV_POLL(reg, var, cond, timeout_us) \
+	read_poll_timeout(REGV_RD32_SILENT, var, cond, REG_POLL_SLEEP_US, timeout_us, false, reg)
+
+#define REGB_POLL_FLD(reg, fld, val, timeout_us) \
+({ \
+	u32 var; \
+	REGB_POLL(reg, var, (FIELD_GET(reg##_##fld##_MASK, var) == (val)), timeout_us); \
+})
+
+#define REGV_POLL_FLD(reg, fld, val, timeout_us) \
+({ \
+	u32 var; \
+	REGV_POLL(reg, var, (FIELD_GET(reg##_##fld##_MASK, var) == (val)), timeout_us); \
+})
+
+static inline u32
+vpu_hw_reg_rd32(struct vpu_device *vdev, void __iomem *base, u32 reg,
+		const char *name, const char *func)
+{
+	u32 val = readl(base + reg);
+
+	vpu_dbg(REG, "%s RD: %s (0x%08x) => 0x%08x\n", func, name, reg, val);
+	return val;
+}
+
+static inline u64
+vpu_hw_reg_rd64(struct vpu_device *vdev, void __iomem *base, u32 reg,
+		const char *name, const char *func)
+{
+	u64 val = readq(base + reg);
+
+	vpu_dbg(REG, "%s RD: %s (0x%08x) => 0x%016llx\n", func, name, reg, val);
+	return val;
+}
+
+static inline void
+vpu_hw_reg_wr32(struct vpu_device *vdev, void __iomem *base, u32 reg, u32 val,
+		const char *name, const char *func)
+{
+	vpu_dbg(REG, "%s WR: %s (0x%08x) <= 0x%08x\n", func, name, reg, val);
+	writel(val, base + reg);
+}
+
+static inline void
+vpu_hw_reg_wr64(struct vpu_device *vdev, void __iomem *base, u32 reg, u64 val,
+		const char *name, const char *func)
+{
+	vpu_dbg(REG, "%s WR: %s (0x%08x) <= 0x%016llx\n", func, name, reg, val);
+	writeq(val, base + reg);
+}
+
+static inline void
+vpu_hw_reg_wr32_index(struct vpu_device *vdev, void __iomem *base, u32 reg,
+		      u32 stride, u32 index, u32 val, const char *name,
+		      const char *func)
+{
+	reg += index * stride;
+
+	vpu_dbg(REG, "%s WR: %s_%d (0x%08x) <= 0x%08x\n", func, name, index, reg, val);
+	writel(val, base + reg);
+}
+
+#endif /* __VPU_HW_REG_IO_H__ */
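
(Editorial aside, not part of the patch) A minimal sketch of how the field
helpers above compose with the register definitions from vpu_hw_mtl_reg.h:
REG_SET_FLD_NUM() pastes REG and FLD together to pick the *_MASK constant,
clears that field and inserts the new value with FIELD_PREP(). The helper
name and the choice of register are hypothetical, assuming vpu_drv.h and
both headers are included:

	static void example_set_epp(struct vpu_device *vdev, u32 epp)
	{
		u32 val = REGB_RD32(MTL_BUTTRESS_WP_REQ_PAYLOAD1);

		/* Equivalent to:
		 * (val & ~MTL_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK) |
		 * FIELD_PREP(MTL_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK, epp)
		 */
		val = REG_SET_FLD_NUM(MTL_BUTTRESS_WP_REQ_PAYLOAD1, EPP, epp, val);
		REGB_WR32(MTL_BUTTRESS_WP_REQ_PAYLOAD1, val);
	}
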
diff --git a/include/uapi/drm/vpu_drm.h b/include/uapi/drm/vpu_drm.h
new file mode 100644
index 000000000000..3311a36bf07b
--- /dev/null
+++ b/include/uapi/drm/vpu_drm.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __UAPI_VPU_DRM_H__
+#define __UAPI_VPU_DRM_H__
+
+#include "drm.h"
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#define DRM_VPU_DRIVER_MAJOR 1
+#define DRM_VPU_DRIVER_MINOR 0
+
+#define DRM_VPU_GET_PARAM		 0x00
+#define DRM_VPU_SET_PARAM		 0x01
+
+#define DRM_IOCTL_VPU_GET_PARAM                                                                    \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VPU_GET_PARAM, struct drm_vpu_param)
+
+#define DRM_IOCTL_VPU_SET_PARAM                                                                    \
+	DRM_IOW(DRM_COMMAND_BASE + DRM_VPU_SET_PARAM, struct drm_vpu_param)
+
+/**
+ * DOC: contexts
+ *
+ * Each VPU context has a private virtual address space, job queues and a
+ * priority. Each context is identified by a unique ID. A context is created
+ * on open().
+ */
+
+#define DRM_VPU_PARAM_DEVICE_ID		   0
+#define DRM_VPU_PARAM_DEVICE_REVISION	   1
+#define DRM_VPU_PARAM_PLATFORM_TYPE	   2
+#define DRM_VPU_PARAM_CORE_CLOCK_RATE	   3
+#define DRM_VPU_PARAM_NUM_CONTEXTS	   4
+#define DRM_VPU_PARAM_CONTEXT_BASE_ADDRESS 5
+#define DRM_VPU_PARAM_CONTEXT_PRIORITY	   6
+
+#define DRM_VPU_PLATFORM_TYPE_SILICON	   0
+
+#define DRM_VPU_CONTEXT_PRIORITY_IDLE	   0
+#define DRM_VPU_CONTEXT_PRIORITY_NORMAL	   1
+#define DRM_VPU_CONTEXT_PRIORITY_FOCUS	   2
+#define DRM_VPU_CONTEXT_PRIORITY_REALTIME  3
+
+/**
+ * struct drm_vpu_param - Get/Set VPU parameters
+ */
+struct drm_vpu_param {
+	/**
+	 * @param:
+	 *
+	 * Supported params:
+	 *
+	 * %DRM_VPU_PARAM_DEVICE_ID:
+	 * PCI Device ID of the VPU device (read-only)
+	 *
+	 * %DRM_VPU_PARAM_DEVICE_REVISION:
+	 * VPU device revision (read-only)
+	 *
+	 * %DRM_VPU_PARAM_PLATFORM_TYPE:
+	 * Returns %DRM_VPU_PLATFORM_TYPE_SILICON on real hardware or a device-specific
+	 * platform type when executing on a simulator or emulator (read-only)
+	 *
+	 * %DRM_VPU_PARAM_CORE_CLOCK_RATE:
+	 * Current PLL frequency (read-only)
+	 *
+	 * %DRM_VPU_PARAM_NUM_CONTEXTS:
+	 * Maximum number of simultaneously existing contexts (read-only)
+	 *
+	 * %DRM_VPU_PARAM_CONTEXT_BASE_ADDRESS:
+	 * Lowest VPU virtual address available in the current context (read-only)
+	 *
+	 * %DRM_VPU_PARAM_CONTEXT_PRIORITY:
+	 * Value of current context scheduling priority (read-write).
+	 * See DRM_VPU_CONTEXT_PRIORITY_* for possible values.
+	 *
+	 */
+	__u32 param;
+
+	/** @index: Index for params that have multiple instances */
+	__u32 index;
+
+	/** @value: Param value */
+	__u64 value;
+};
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* __UAPI_VPU_DRM_H__ */
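
(Editorial aside, not part of the patch) A minimal user-space sketch of the
GET_PARAM ioctl declared above. The render node path and the include setup
are assumptions for illustration only:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include "vpu_drm.h"	/* the uapi header added by this patch */

	int main(void)
	{
		struct drm_vpu_param args = { .param = DRM_VPU_PARAM_DEVICE_ID };
		int fd = open("/dev/dri/renderD128", O_RDWR);	/* assumed VPU render node */

		if (fd < 0)
			return 1;

		if (ioctl(fd, DRM_IOCTL_VPU_GET_PARAM, &args) == 0)
			printf("VPU PCI device ID: 0x%llx\n", (unsigned long long)args.value);

		close(fd);
		return 0;
	}
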
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v1 2/7] drm/vpu: Add Intel VPU MMU support
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 1/7] drm/vpu: Introduce a new " Jacek Lawrynowicz
@ 2022-07-28 13:17 ` Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 3/7] drm/vpu: Add GEM buffer object management Jacek Lawrynowicz
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

The VPU Memory Management Unit is based on ARM MMU-600.
It allows the driver to create multiple virtual address spaces for the
device and to map non-contiguous host memory (there is no dedicated
memory on the VPU).

An address space is implemented as a struct vpu_mmu_context; it has an
ID, a drm_mm allocator for VPU addresses and a struct vpu_mmu_pgtable
that holds the actual 3-level, 4KB page table.
The context with ID 0 (the global context) is created upon driver
initialization and is mainly used for mapping memory required to
execute the firmware.
Contexts with non-zero IDs are user contexts allocated each time the
device is open()-ed; they map command buffers and other
workload-related memory.
Workloads executing in a given context have access only to the memory
mapped in that context.

This patch has two main files:
  - vpu_mmu_context.c handles MMU page tables and memory mapping
  - vpu_mmu.c implements a driver that programs the MMU device
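
(Editorial aside, not part of the patch) A minimal sketch, assuming a
conventional 9-bit index per level, of how a VPU virtual address could be
split for the 3-level, 4KB page table described above; the authoritative
layout lives in vpu_mmu_context.c and the helper names below are
hypothetical:

	#define VPU_EX_PAGE_SHIFT	12	/* 4KB pages */
	#define VPU_EX_PTE_BITS		9	/* assumed 512 entries per table level */
	#define VPU_EX_IDX_MASK		((1ull << VPU_EX_PTE_BITS) - 1)

	/* index into the top-level table */
	static inline unsigned int vpu_ex_pgd_index(unsigned long long vpu_addr)
	{
		return (vpu_addr >> (VPU_EX_PAGE_SHIFT + 2 * VPU_EX_PTE_BITS)) & VPU_EX_IDX_MASK;
	}

	/* index into the middle-level table */
	static inline unsigned int vpu_ex_pmd_index(unsigned long long vpu_addr)
	{
		return (vpu_addr >> (VPU_EX_PAGE_SHIFT + VPU_EX_PTE_BITS)) & VPU_EX_IDX_MASK;
	}

	/* index into the leaf table holding the 4KB page entries */
	static inline unsigned int vpu_ex_pte_index(unsigned long long vpu_addr)
	{
		return (vpu_addr >> VPU_EX_PAGE_SHIFT) & VPU_EX_IDX_MASK;
	}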

Signed-off-by: Karol Wachowski <karol.wachowski@linux.intel.com>
Signed-off-by: Krystian Pradzynski <krystian.pradzynski@linux.intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/gpu/drm/vpu/Makefile          |   4 +-
 drivers/gpu/drm/vpu/vpu_drv.c         |  59 +-
 drivers/gpu/drm/vpu/vpu_drv.h         |   6 +
 drivers/gpu/drm/vpu/vpu_hw_mtl.c      |  10 +
 drivers/gpu/drm/vpu/vpu_mmu.c         | 940 ++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_mmu.h         |  53 ++
 drivers/gpu/drm/vpu/vpu_mmu_context.c | 418 ++++++++++++
 drivers/gpu/drm/vpu/vpu_mmu_context.h |  49 ++
 include/uapi/drm/vpu_drm.h            |   4 +
 9 files changed, 1540 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu_context.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_mmu_context.h

diff --git a/drivers/gpu/drm/vpu/Makefile b/drivers/gpu/drm/vpu/Makefile
index 5d6d2a2566cf..b789e3a6ed22 100644
--- a/drivers/gpu/drm/vpu/Makefile
+++ b/drivers/gpu/drm/vpu/Makefile
@@ -3,6 +3,8 @@
 
 intel_vpu-y := \
 	vpu_drv.o \
-	vpu_hw_mtl.o
+	vpu_hw_mtl.o \
+	vpu_mmu.o \
+	vpu_mmu_context.o
 
 obj-$(CONFIG_DRM_VPU) += intel_vpu.o
diff --git a/drivers/gpu/drm/vpu/vpu_drv.c b/drivers/gpu/drm/vpu/vpu_drv.c
index bbe7ad97a32c..75ec1fe9c3e2 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.c
+++ b/drivers/gpu/drm/vpu/vpu_drv.c
@@ -14,6 +14,8 @@
 
 #include "vpu_drv.h"
 #include "vpu_hw.h"
+#include "vpu_mmu.h"
+#include "vpu_mmu_context.h"
 
 #ifndef DRIVER_VERSION_STR
 #define DRIVER_VERSION_STR __stringify(DRM_VPU_DRIVER_MAJOR) "." \
@@ -50,6 +52,11 @@ char *vpu_platform_to_str(u32 platform)
 
 void vpu_file_priv_get(struct vpu_file_priv *file_priv, struct vpu_file_priv **link)
 {
+	struct vpu_device *vdev = file_priv->vdev;
+
+	vpu_dbg(KREF, "file_priv get: ctx %u refcount %u\n",
+		file_priv->ctx.id, kref_read(&file_priv->ref));
+
 	kref_get(&file_priv->ref);
 	*link = file_priv;
 }
@@ -57,6 +64,12 @@ void vpu_file_priv_get(struct vpu_file_priv *file_priv, struct vpu_file_priv **l
 static void file_priv_release(struct kref *ref)
 {
 	struct vpu_file_priv *file_priv = container_of(ref, struct vpu_file_priv, ref);
+	struct vpu_device *vdev = file_priv->vdev;
+
+	vpu_dbg(FILE, "file_priv release: ctx %u\n", file_priv->ctx.id);
+
+	if (file_priv->ctx.id)
+		vpu_mmu_user_context_fini(file_priv);
 
 	kfree(file_priv);
 }
@@ -64,6 +77,10 @@ static void file_priv_release(struct kref *ref)
 void vpu_file_priv_put(struct vpu_file_priv **link)
 {
 	struct vpu_file_priv *file_priv = *link;
+	struct vpu_device *vdev = file_priv->vdev;
+
+	vpu_dbg(KREF, "file_priv put: ctx %u refcount %u\n",
+		file_priv->ctx.id, kref_read(&file_priv->ref));
 
 	*link = NULL;
 	kref_put(&file_priv->ref, file_priv_release);
@@ -75,7 +92,11 @@ static int vpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_fi
 	struct vpu_device *vdev = file_priv->vdev;
 	struct pci_dev *pdev = to_pci_dev(vdev->drm.dev);
 	struct drm_vpu_param *args = data;
-	int ret = 0;
+	int ret;
+
+	ret = vpu_mmu_user_context_init(file_priv);
+	if (ret)
+		return ret;
 
 	switch (args->param) {
 	case DRM_VPU_PARAM_DEVICE_ID:
@@ -99,6 +120,9 @@ static int vpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_fi
 	case DRM_VPU_PARAM_CONTEXT_PRIORITY:
 		args->value = file_priv->priority;
 		break;
+	case DRM_VPU_PARAM_CONTEXT_ID:
+		args->value = file_priv->ctx.id;
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -110,7 +134,11 @@ static int vpu_set_param_ioctl(struct drm_device *dev, void *data, struct drm_fi
 {
 	struct vpu_file_priv *file_priv = file->driver_priv;
 	struct drm_vpu_param *args = data;
-	int ret = 0;
+	int ret;
+
+	ret = vpu_mmu_user_context_init(file_priv);
+	if (ret)
+		return ret;
 
 	switch (args->param) {
 	case DRM_VPU_PARAM_CONTEXT_PRIORITY:
@@ -139,9 +167,13 @@ static int vpu_open(struct drm_device *dev, struct drm_file *file)
 	file_priv->priority = DRM_VPU_CONTEXT_PRIORITY_NORMAL;
 
 	kref_init(&file_priv->ref);
+	mutex_init(&file_priv->lock);
 
 	file->driver_priv = file_priv;
 
+	vpu_dbg(FILE, "file_priv alloc: process %s pid %d\n",
+		current->comm, task_pid_nr(current));
+
 	return 0;
 }
 
@@ -164,6 +196,7 @@ int vpu_shutdown(struct vpu_device *vdev)
 	int ret;
 
 	vpu_hw_irq_disable(vdev);
+	vpu_mmu_disable(vdev);
 
 	ret = vpu_hw_power_down(vdev);
 	if (ret)
@@ -272,6 +305,10 @@ static int vpu_dev_init(struct vpu_device *vdev)
 	if (!vdev->hw)
 		return -ENOMEM;
 
+	vdev->mmu = devm_kzalloc(vdev->drm.dev, sizeof(*vdev->mmu), GFP_KERNEL);
+	if (!vdev->mmu)
+		return -ENOMEM;
+
 	vdev->hw->ops = &vpu_hw_mtl_ops;
 	vdev->platform = VPU_PLATFORM_INVALID;
 
@@ -303,8 +340,24 @@ static int vpu_dev_init(struct vpu_device *vdev)
 		goto err_irq_fini;
 	}
 
+	ret = vpu_mmu_global_context_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize global MMU context: %d\n", ret);
+		goto err_power_down;
+	}
+
+	ret = vpu_mmu_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize MMU device: %d\n", ret);
+		goto err_mmu_gctx_fini;
+	}
+
 	return 0;
 
+err_mmu_gctx_fini:
+	vpu_mmu_global_context_fini(vdev);
+err_power_down:
+	vpu_hw_power_down(vdev);
 err_irq_fini:
 	vpu_irq_fini(vdev);
 err_pci_fini:
@@ -316,6 +369,8 @@ static void vpu_dev_fini(struct vpu_device *vdev)
 {
 	vpu_shutdown(vdev);
 
+	vpu_mmu_fini(vdev);
+	vpu_mmu_global_context_fini(vdev);
 	vpu_irq_fini(vdev);
 	vpu_pci_fini(vdev);
 
diff --git a/drivers/gpu/drm/vpu/vpu_drv.h b/drivers/gpu/drm/vpu/vpu_drv.h
index b2e7d355c0de..a9f3ad0c5f67 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.h
+++ b/drivers/gpu/drm/vpu/vpu_drv.h
@@ -14,6 +14,8 @@
 #include <linux/xarray.h>
 #include <uapi/drm/vpu_drm.h>
 
+#include "vpu_mmu_context.h"
+
 #define DRIVER_NAME "intel_vpu"
 #define DRIVER_DESC "Driver for Intel Versatile Processing Unit (VPU)"
 #define DRIVER_DATE "20220101"
@@ -84,7 +86,9 @@ struct vpu_device {
 
 	struct vpu_wa_table wa;
 	struct vpu_hw_info *hw;
+	struct vpu_mmu_info *mmu;
 
+	struct vpu_mmu_context gctx;
 	struct xarray context_xa;
 	struct xa_limit context_xa_limit;
 
@@ -99,6 +103,8 @@ struct vpu_device {
 struct vpu_file_priv {
 	struct kref ref;
 	struct vpu_device *vdev;
+	struct mutex lock;
+	struct vpu_mmu_context ctx;
 	u32 priority;
 };
 
diff --git a/drivers/gpu/drm/vpu/vpu_hw_mtl.c b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
index bdab4b84d202..901ec5c40de9 100644
--- a/drivers/gpu/drm/vpu/vpu_hw_mtl.c
+++ b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
@@ -7,6 +7,7 @@
 #include "vpu_hw_mtl_reg.h"
 #include "vpu_hw_reg_io.h"
 #include "vpu_hw.h"
+#include "vpu_mmu.h"
 
 #define TILE_FUSE_ENABLE_BOTH	     0x0
 #define TILE_FUSE_ENABLE_LOWER	     0x1
@@ -930,6 +931,15 @@ static irqreturn_t vpu_hw_mtl_irqv_handler(struct vpu_device *vdev, int irq)
 
 	REGV_WR32(MTL_VPU_HOST_SS_ICB_CLEAR_0, status);
 
+	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT, status))
+		ret &= vpu_mmu_irq_evtq_handler(vdev);
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT, status))
+		vpu_dbg(IRQ, "MMU sync complete\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT, status))
+		ret &= vpu_mmu_irq_gerr_handler(vdev);
+
 	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, status))
 		ret &= vpu_hw_mtl_irq_wdt_mss_handler(vdev);
 
diff --git a/drivers/gpu/drm/vpu/vpu_mmu.c b/drivers/gpu/drm/vpu/vpu_mmu.c
new file mode 100644
index 000000000000..ace91ee5a857
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_mmu.c
@@ -0,0 +1,940 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <linux/highmem.h>
+#include <linux/circ_buf.h>
+
+#include "vpu_drv.h"
+#include "vpu_hw_mtl_reg.h"
+#include "vpu_hw_reg_io.h"
+#include "vpu_mmu.h"
+#include "vpu_mmu_context.h"
+
+#define VPU_MMU_IDR0_REF		0x080f3e0f
+#define VPU_MMU_IDR0_REF_SIMICS		0x080f3e1f
+#define VPU_MMU_IDR1_REF		0x0e739d18
+#define VPU_MMU_IDR3_REF		0x0000003c
+#define VPU_MMU_IDR5_REF		0x00040070
+#define VPU_MMU_IDR5_REF_SIMICS		0x00000075
+#define VPU_MMU_IDR5_REF_FPGA		0x00800075
+
+#define VPU_MMU_CDTAB_ENT_SIZE		64
+#define VPU_MMU_CDTAB_ENT_COUNT_LOG2	8 /* 256 entries */
+#define VPU_MMU_CDTAB_ENT_COUNT		((u32)1 << VPU_MMU_CDTAB_ENT_COUNT_LOG2)
+
+#define VPU_MMU_STRTAB_ENT_SIZE		64
+#define VPU_MMU_STRTAB_ENT_COUNT	4
+#define VPU_MMU_STRTAB_CFG		0x2
+
+#define VPU_MMU_Q_COUNT_LOG2		4 /* 16 entries */
+#define VPU_MMU_Q_COUNT			((u32)1 << VPU_MMU_Q_COUNT_LOG2)
+#define VPU_MMU_Q_WRAP_BIT		(VPU_MMU_Q_COUNT << 1)
+#define VPU_MMU_Q_WRAP_MASK		(VPU_MMU_Q_WRAP_BIT - 1)
+#define VPU_MMU_Q_IDX_MASK		(VPU_MMU_Q_COUNT - 1)
+#define VPU_MMU_Q_IDX(val)		((val) & VPU_MMU_Q_IDX_MASK)
+
+#define VPU_MMU_CMDQ_CMD_SIZE		16
+#define VPU_MMU_CMDQ_SIZE		(VPU_MMU_Q_COUNT * VPU_MMU_CMDQ_CMD_SIZE)
+
+#define VPU_MMU_EVTQ_CMD_SIZE		32
+#define VPU_MMU_EVTQ_SIZE		(VPU_MMU_Q_COUNT * VPU_MMU_EVTQ_CMD_SIZE)
+
+#define VPU_MMU_CMD_OPCODE		GENMASK(8, 0)
+
+#define VPU_MMU_CMD_SYNC_0_CS		GENMASK(13, 12)
+#define VPU_MMU_CMD_SYNC_0_MSH		GENMASK(23, 22)
+#define VPU_MMU_CMD_SYNC_0_MSI_ATTR	GENMASK(27, 24)
+#define VPU_MMU_CMD_SYNC_0_MSI_DATA	GENMASK(63, 32)
+
+#define VPU_MMU_CMD_CFGI_0_SSEC		BIT(10)
+#define VPU_MMU_CMD_CFGI_0_SSV		BIT(11)
+#define VPU_MMU_CMD_CFGI_0_SSID		GENMASK(31, 12)
+#define VPU_MMU_CMD_CFGI_0_SID		GENMASK(63, 32)
+#define VPU_MMU_CMD_CFGI_1_RANGE	GENMASK(4, 0)
+
+#define VPU_MMU_CMD_TLBI_0_ASID		GENMASK(63, 48)
+#define VPU_MMU_CMD_TLBI_0_VMID		GENMASK(47, 32)
+
+#define CMD_PREFETCH_CFG		0x1
+#define CMD_CFGI_STE			0x3
+#define CMD_CFGI_ALL			0x4
+#define CMD_CFGI_CD_ALL			0x6
+#define CMD_TLBI_NH_ASID		0x11
+#define CMD_TLBI_EL2_ALL		0x20
+#define CMD_TLBI_NSNH_ALL		0x30
+#define CMD_SYNC			0x46
+
+#define VPU_MMU_EVT_F_UUT		0x01
+#define VPU_MMU_EVT_C_BAD_STREAMID	0x02
+#define VPU_MMU_EVT_F_STE_FETCH		0x03
+#define VPU_MMU_EVT_C_BAD_STE		0x04
+#define VPU_MMU_EVT_F_BAD_ATS_TREQ	0x05
+#define VPU_MMU_EVT_F_STREAM_DISABLED	0x06
+#define VPU_MMU_EVT_F_TRANSL_FORBIDDEN	0x07
+#define VPU_MMU_EVT_C_BAD_SUBSTREAMID	0x08
+#define VPU_MMU_EVT_F_CD_FETCH		0x09
+#define VPU_MMU_EVT_C_BAD_CD		0x0a
+#define VPU_MMU_EVT_F_WALK_EABT		0x0b
+#define VPU_MMU_EVT_F_TRANSLATION	0x10
+#define VPU_MMU_EVT_F_ADDR_SIZE		0x11
+#define VPU_MMU_EVT_F_ACCESS		0x12
+#define VPU_MMU_EVT_F_PERMISSION	0x13
+#define VPU_MMU_EVT_F_TLB_CONFLICT	0x20
+#define VPU_MMU_EVT_F_CFG_CONFLICT	0x21
+#define VPU_MMU_EVT_E_PAGE_REQUEST	0x24
+#define VPU_MMU_EVT_F_VMS_FETCH		0x25
+
+#define VPU_MMU_EVTS_MAX		8
+
+#define VPU_MMU_EVT_OP_MASK		GENMASK_ULL(7, 0)
+#define VPU_MMU_EVT_SSID_MASK		GENMASK_ULL(31, 12)
+
+#define VPU_MMU_Q_BASE_RWA		BIT(62)
+#define VPU_MMU_Q_BASE_ADDR_MASK	GENMASK_ULL(51, 5)
+#define VPU_MMU_STRTAB_BASE_RA		BIT(62)
+#define VPU_MMU_STRTAB_BASE_ADDR_MASK	GENMASK_ULL(51, 6)
+
+#define VPU_MMU_IRQ_EVTQ_EN		BIT(2)
+#define VPU_MMU_IRQ_GERROR_EN		BIT(0)
+
+#define VPU_MMU_CR0_ATSCHK		BIT(4)
+#define VPU_MMU_CR0_CMDQEN		BIT(3)
+#define VPU_MMU_CR0_EVTQEN		BIT(2)
+#define VPU_MMU_CR0_PRIQEN		BIT(1)
+#define VPU_MMU_CR0_SMMUEN		BIT(0)
+
+#define VPU_MMU_CR1_TABLE_SH		GENMASK(11, 10)
+#define VPU_MMU_CR1_TABLE_OC		GENMASK(9, 8)
+#define VPU_MMU_CR1_TABLE_IC		GENMASK(7, 6)
+#define VPU_MMU_CR1_QUEUE_SH		GENMASK(5, 4)
+#define VPU_MMU_CR1_QUEUE_OC		GENMASK(3, 2)
+#define VPU_MMU_CR1_QUEUE_IC		GENMASK(1, 0)
+#define VPU_MMU_CACHE_NC		0
+#define VPU_MMU_CACHE_WB		1
+#define VPU_MMU_CACHE_WT		2
+#define VPU_MMU_SH_NSH			0
+#define VPU_MMU_SH_OSH			2
+#define VPU_MMU_SH_ISH			3
+
+#define VPU_MMU_CMDQ_OP			GENMASK_ULL(7, 0)
+
+#define VPU_MMU_CD_0_TCR_T0SZ		GENMASK_ULL(5, 0)
+#define VPU_MMU_CD_0_TCR_TG0		GENMASK_ULL(7, 6)
+#define VPU_MMU_CD_0_TCR_IRGN0		GENMASK_ULL(9, 8)
+#define VPU_MMU_CD_0_TCR_ORGN0		GENMASK_ULL(11, 10)
+#define VPU_MMU_CD_0_TCR_SH0		GENMASK_ULL(13, 12)
+#define VPU_MMU_CD_0_TCR_EPD0		BIT_ULL(14)
+#define VPU_MMU_CD_0_TCR_EPD1		BIT_ULL(30)
+#define VPU_MMU_CD_0_ENDI		BIT(15)
+#define VPU_MMU_CD_0_V			BIT(31)
+#define VPU_MMU_CD_0_TCR_IPS		GENMASK_ULL(34, 32)
+#define VPU_MMU_CD_0_TCR_TBI0		BIT_ULL(38)
+#define VPU_MMU_CD_0_AA64		BIT(41)
+#define VPU_MMU_CD_0_S			BIT(44)
+#define VPU_MMU_CD_0_R			BIT(45)
+#define VPU_MMU_CD_0_A			BIT(46)
+#define VPU_MMU_CD_0_ASET		BIT(47)
+#define VPU_MMU_CD_0_ASID		GENMASK_ULL(63, 48)
+
+#define VPU_MMU_CD_1_TTB0_MASK		GENMASK_ULL(51, 4)
+
+#define VPU_MMU_STE_0_S1CDMAX		GENMASK_ULL(63, 59)
+#define VPU_MMU_STE_0_S1FMT		GENMASK_ULL(5, 4)
+#define VPU_MMU_STE_0_S1FMT_LINEAR	0
+#define VPU_MMU_STE_DWORDS		8
+#define VPU_MMU_STE_0_CFG_S1_TRANS	5
+#define VPU_MMU_STE_0_CFG		GENMASK_ULL(3, 1)
+#define VPU_MMU_STE_0_S1CTXPTR_MASK	GENMASK_ULL(51, 6)
+#define VPU_MMU_STE_0_V			BIT(0)
+
+#define VPU_MMU_STE_1_STRW_NSEL1	0ul
+#define VPU_MMU_STE_1_STRW		GENMASK_ULL(31, 30)
+#define VPU_MMU_STE_1_PRIVCFG		GENMASK_ULL(49, 48)
+#define VPU_MMU_STE_1_PRIVCFG_UNPRIV	2ul
+#define VPU_MMU_STE_1_INSTCFG		GENMASK_ULL(51, 50)
+#define VPU_MMU_STE_1_INSTCFG_DATA	2ul
+#define VPU_MMU_STE_1_MEV		BIT(19)
+#define VPU_MMU_STE_1_S1STALLD		BIT(27)
+#define VPU_MMU_STE_1_S1C_CACHE_NC	0ul
+#define VPU_MMU_STE_1_S1C_CACHE_WBRA	1ul
+#define VPU_MMU_STE_1_S1C_CACHE_WT	2ul
+#define VPU_MMU_STE_1_S1C_CACHE_WB	3ul
+#define VPU_MMU_STE_1_S1CIR		GENMASK_ULL(3, 2)
+#define VPU_MMU_STE_1_S1COR		GENMASK_ULL(5, 4)
+#define VPU_MMU_STE_1_S1CSH		GENMASK_ULL(7, 6)
+#define VPU_MMU_STE_1_S1DSS		GENMASK_ULL(1, 0)
+#define VPU_MMU_STE_1_S1DSS_TERMINATE	0x0
+
+#define VPU_MMU_REG_TIMEOUT_US		(10 * USEC_PER_MSEC)
+#define VPU_MMU_QUEUE_TIMEOUT_US	(100 * USEC_PER_MSEC)
+
+#define VPU_MMU_GERROR_ERR_MASK ((REG_FLD(MTL_VPU_HOST_MMU_GERROR, CMDQ)) | \
+				 (REG_FLD(MTL_VPU_HOST_MMU_GERROR, EVTQ_ABT)) | \
+				 (REG_FLD(MTL_VPU_HOST_MMU_GERROR, PRIQ_ABT)) | \
+				 (REG_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_CMDQ_ABT)) | \
+				 (REG_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_EVTQ_ABT)) | \
+				 (REG_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_PRIQ_ABT)) | \
+				 (REG_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_ABT)) | \
+				 (REG_FLD(MTL_VPU_HOST_MMU_GERROR, SFM)))
+
+static char *vpu_mmu_evt_to_str(u32 cmd)
+{
+	switch (cmd) {
+	case VPU_MMU_EVT_F_UUT:
+		return "Unsupported Upstream Transaction";
+	case VPU_MMU_EVT_C_BAD_STREAMID:
+		return "Transaction StreamID out of range";
+	case VPU_MMU_EVT_F_STE_FETCH:
+		return "Fetch of STE caused external abort";
+	case VPU_MMU_EVT_C_BAD_STE:
+		return "Used STE invalid";
+	case VPU_MMU_EVT_F_BAD_ATS_TREQ:
+		return "Address Request disallowed for a StreamID";
+	case VPU_MMU_EVT_F_STREAM_DISABLED:
+		return "Transaction marks non-substream disabled";
+	case VPU_MMU_EVT_F_TRANSL_FORBIDDEN:
+		return "MMU bypass is disallowed for this StreamID";
+	case VPU_MMU_EVT_C_BAD_SUBSTREAMID:
+		return "Invalid StreamID";
+	case VPU_MMU_EVT_F_CD_FETCH:
+		return "Fetch of CD caused external abort";
+	case VPU_MMU_EVT_C_BAD_CD:
+		return "Fetched CD invalid";
+	case VPU_MMU_EVT_F_WALK_EABT:
+		return " An external abort occurred fetching a TLB";
+	case VPU_MMU_EVT_F_TRANSLATION:
+		return "Translation fault";
+	case VPU_MMU_EVT_F_ADDR_SIZE:
+		return " Output address caused address size fault";
+	case VPU_MMU_EVT_F_ACCESS:
+		return "Access flag fault";
+	case VPU_MMU_EVT_F_PERMISSION:
+		return "Permission fault occurred on page access";
+	case VPU_MMU_EVT_F_TLB_CONFLICT:
+		return "A TLB conflict";
+	case VPU_MMU_EVT_F_CFG_CONFLICT:
+		return "A configuration cache conflict";
+	case VPU_MMU_EVT_E_PAGE_REQUEST:
+		return "Page request hint from a client device";
+	case VPU_MMU_EVT_F_VMS_FETCH:
+		return "Fetch of VMS caused external abort";
+	default:
+		return "Unknown CMDQ command";
+	}
+}
+
+static int vpu_mmu_config_check(struct vpu_device *vdev)
+{
+	u32 val_ref;
+	u32 val;
+
+	if (vpu_is_simics(vdev))
+		val_ref = VPU_MMU_IDR0_REF_SIMICS;
+	else
+		val_ref = VPU_MMU_IDR0_REF;
+
+	val = REGV_RD32(MTL_VPU_HOST_MMU_IDR0);
+	if (val != val_ref)
+		vpu_err(vdev, "IDR0 0x%x != IDR0_REF 0x%x\n", val, val_ref);
+
+	val = REGV_RD32(MTL_VPU_HOST_MMU_IDR1);
+	if (val != VPU_MMU_IDR1_REF)
+		vpu_warn(vdev, "IDR1 0x%x != IDR1_REF 0x%x\n", val, VPU_MMU_IDR1_REF);
+
+	val = REGV_RD32(MTL_VPU_HOST_MMU_IDR3);
+	if (val != VPU_MMU_IDR3_REF)
+		vpu_warn(vdev, "IDR3 0x%x != IDR3_REF 0x%x\n", val, VPU_MMU_IDR3_REF);
+
+	if (vpu_is_simics(vdev))
+		val_ref = VPU_MMU_IDR5_REF_SIMICS;
+	else if (vpu_is_fpga(vdev))
+		val_ref = VPU_MMU_IDR5_REF_FPGA;
+	else
+		val_ref = VPU_MMU_IDR5_REF;
+
+	val = REGV_RD32(MTL_VPU_HOST_MMU_IDR5);
+	if (val != val_ref)
+		vpu_warn(vdev, "IDR5 0x%x != IDR5_REF 0x%x\n", val, val_ref);
+
+	return 0;
+}
+
+static int vpu_mmu_cdtab_alloc(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	struct vpu_mmu_cdtab *cdtab = &mmu->cdtab;
+	size_t size = VPU_MMU_CDTAB_ENT_COUNT * VPU_MMU_CDTAB_ENT_SIZE;
+
+	cdtab->base = dmam_alloc_coherent(vdev->drm.dev, size, &cdtab->dma, GFP_KERNEL);
+	if (!cdtab->base)
+		return -ENOMEM;
+
+	vpu_dbg(MMU, "CDTAB alloc: dma=%pad size=%zu\n", &cdtab->dma, size);
+
+	return 0;
+}
+
+static int vpu_mmu_strtab_alloc(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	struct vpu_mmu_strtab *strtab = &mmu->strtab;
+	size_t size = VPU_MMU_STRTAB_ENT_COUNT * VPU_MMU_STRTAB_ENT_SIZE;
+
+	strtab->base = dmam_alloc_coherent(vdev->drm.dev, size, &strtab->dma, GFP_KERNEL);
+	if (!strtab->base)
+		return -ENOMEM;
+
+	strtab->base_cfg = VPU_MMU_STRTAB_CFG;
+	strtab->dma_q = VPU_MMU_STRTAB_BASE_RA;
+	strtab->dma_q |= strtab->dma & VPU_MMU_STRTAB_BASE_ADDR_MASK;
+
+	vpu_dbg(MMU, "STRTAB alloc: dma=%pad dma_q=%pad size=%zu\n",
+		&strtab->dma, &strtab->dma_q, size);
+
+	return 0;
+}
+
+static int vpu_mmu_cmdq_alloc(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	struct vpu_mmu_queue *q = &mmu->cmdq;
+
+	q->base = dmam_alloc_coherent(vdev->drm.dev, VPU_MMU_CMDQ_SIZE, &q->dma, GFP_KERNEL);
+	if (!q->base)
+		return -ENOMEM;
+
+	q->dma_q = VPU_MMU_Q_BASE_RWA;
+	q->dma_q |= q->dma & VPU_MMU_Q_BASE_ADDR_MASK;
+	q->dma_q |= VPU_MMU_Q_COUNT_LOG2;
+
+	vpu_dbg(MMU, "CMDQ alloc: dma=%pad dma_q=%pad size=%u\n",
+		&q->dma, &q->dma_q, VPU_MMU_CMDQ_SIZE);
+
+	return 0;
+}
+
+static int vpu_mmu_evtq_alloc(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	struct vpu_mmu_queue *q = &mmu->evtq;
+
+	q->base = dmam_alloc_coherent(vdev->drm.dev, VPU_MMU_EVTQ_SIZE, &q->dma, GFP_KERNEL);
+	if (!q->base)
+		return -ENOMEM;
+
+	q->dma_q = VPU_MMU_Q_BASE_RWA;
+	q->dma_q |= q->dma & VPU_MMU_Q_BASE_ADDR_MASK;
+	q->dma_q |= VPU_MMU_Q_COUNT_LOG2;
+
+	vpu_dbg(MMU, "EVTQ alloc: dma=%pad dma_q=%pad size=%u\n",
+		&q->dma, &q->dma_q, VPU_MMU_EVTQ_SIZE);
+
+	return 0;
+}
+
+static int vpu_mmu_structs_alloc(struct vpu_device *vdev)
+{
+	int ret;
+
+	ret = vpu_mmu_cdtab_alloc(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to allocate cdtab: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_mmu_strtab_alloc(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to allocate strtab: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_mmu_cmdq_alloc(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to allocate cmdq: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_mmu_evtq_alloc(vdev);
+	if (ret)
+		vpu_err(vdev, "Failed to allocate evtq: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_mmu_reg_write(struct vpu_device *vdev, u32 reg, u32 val)
+{
+	u32 reg_ack = reg + 4; /* ACK register is 4B after base register */
+	u32 val_ack;
+	int ret;
+
+	REGV_WR32(reg, val);
+
+	ret = REGV_POLL(reg_ack, val_ack, (val == val_ack), VPU_MMU_REG_TIMEOUT_US);
+	if (ret)
+		vpu_err(vdev, "Failed to write register 0x%x\n", reg);
+
+	return ret;
+}
+
+static int vpu_mmu_irqs_setup(struct vpu_device *vdev)
+{
+	u32 irq_ctrl = VPU_MMU_IRQ_EVTQ_EN | VPU_MMU_IRQ_GERROR_EN;
+	int ret;
+
+	ret = vpu_mmu_reg_write(vdev, MTL_VPU_HOST_MMU_IRQ_CTRL, 0);
+	if (ret)
+		return ret;
+
+	return vpu_mmu_reg_write(vdev, MTL_VPU_HOST_MMU_IRQ_CTRL, irq_ctrl);
+}
+
+static int vpu_mmu_cmdq_wait_for_cons(struct vpu_device *vdev)
+{
+	struct vpu_mmu_queue *cmdq = &vdev->mmu->cmdq;
+
+	return REGV_POLL(MTL_VPU_HOST_MMU_CMDQ_CONS, cmdq->cons, (cmdq->prod == cmdq->cons),
+			 VPU_MMU_QUEUE_TIMEOUT_US);
+}
+
+static int vpu_mmu_cmdq_cmd_write(struct vpu_device *vdev, const char *name, u64 data0, u64 data1)
+{
+	struct vpu_mmu_queue *q = &vdev->mmu->cmdq;
+	u64 *queue_buffer = q->base;
+	int idx = VPU_MMU_Q_IDX(q->prod) * (VPU_MMU_CMDQ_CMD_SIZE / sizeof(*queue_buffer));
+
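+	/* Bail out if the circular command queue has no free slot */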
+	if (!CIRC_SPACE(VPU_MMU_Q_IDX(q->prod), VPU_MMU_Q_IDX(q->cons), VPU_MMU_Q_COUNT)) {
+		vpu_err(vdev, "Failed to write MMU CMD %s\n", name);
+		return -EBUSY;
+	}
+
+	queue_buffer[idx] = data0;
+	queue_buffer[idx + 1] = data1;
+	q->prod = (q->prod + 1) & VPU_MMU_Q_WRAP_MASK;
+
+	vpu_dbg(MMU, "CMD write: %s data: 0x%llx 0x%llx\n", name, data0, data1);
+
+	return 0;
+}
+
+static int vpu_mmu_cmdq_sync(struct vpu_device *vdev)
+{
+	struct vpu_mmu_queue *q = &vdev->mmu->cmdq;
+	u64 val;
+	int ret;
+
+	val = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_SYNC) |
+	      FIELD_PREP(VPU_MMU_CMD_SYNC_0_CS, 0x2) |
+	      FIELD_PREP(VPU_MMU_CMD_SYNC_0_MSH, 0x3) |
+	      FIELD_PREP(VPU_MMU_CMD_SYNC_0_MSI_ATTR, 0xf);
+
+	ret = vpu_mmu_cmdq_cmd_write(vdev, "SYNC", val, 0);
+	if (ret)
+		return ret;
+
+	clflush_cache_range(q->base, VPU_MMU_CMDQ_SIZE);
+	REGV_WR32(MTL_VPU_HOST_MMU_CMDQ_PROD, q->prod);
+
+	ret = vpu_mmu_cmdq_wait_for_cons(vdev);
+	if (ret)
+		vpu_err(vdev, "Timed out waiting for consumer: %d\n", ret);
+
+	return ret;
+}
+
+static int
+vpu_mmu_cmdq_write_cmd_to_each_sid(struct vpu_device *vdev, const char *name, u64 data0, u64 data1)
+{
+	const int VPU_MMU_SID_COUNT = 4;
+	int ret;
+	int i;
+
+	for (i = 0; i < VPU_MMU_SID_COUNT; i++) {
+		/* Build the command afresh for each SID - do not OR SID values together */
+		u64 cmd = data0 | FIELD_PREP(VPU_MMU_CMD_CFGI_0_SID, i);
+
+		ret = vpu_mmu_cmdq_cmd_write(vdev, name, cmd, data1);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int vpu_mmu_cmdq_write_prefetch_cfg(struct vpu_device *vdev)
+{
+	u64 val = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_PREFETCH_CFG);
+
+	return vpu_mmu_cmdq_write_cmd_to_each_sid(vdev, "PREFETCH_CFG", val, 0);
+}
+
+static int vpu_mmu_cmdq_write_cfgi_ste(struct vpu_device *vdev)
+{
+	u64 val = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_CFGI_STE);
+
+	return vpu_mmu_cmdq_write_cmd_to_each_sid(vdev, "CFGI_STE", val, 0);
+}
+
+static int vpu_mmu_cmdq_write_cfgi_cd_all(struct vpu_device *vdev)
+{
+	u64 val = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_CFGI_CD_ALL);
+
+	return vpu_mmu_cmdq_write_cmd_to_each_sid(vdev, "CFGI_CD_ALL", val, 0);
+}
+
+static int vpu_mmu_cmdq_write_cfgi_all(struct vpu_device *vdev)
+{
+	u64 data0 = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_CFGI_ALL);
+	u64 data1 = FIELD_PREP(VPU_MMU_CMD_CFGI_1_RANGE, 0x1f);
+
+	return vpu_mmu_cmdq_cmd_write(vdev, "CFGI_ALL", data0, data1);
+}
+
+static int vpu_mmu_cmdq_write_tlbi_nh_asid(struct vpu_device *vdev, u16 ssid)
+{
+	u64 val = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_TLBI_NH_ASID) |
+		  FIELD_PREP(VPU_MMU_CMD_TLBI_0_ASID, ssid);
+
+	return vpu_mmu_cmdq_cmd_write(vdev, "TLBI_NH_ASID", val, 0);
+}
+
+static int vpu_mmu_cmdq_write_tlbi_el2_all(struct vpu_device *vdev)
+{
+	u64 val = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_TLBI_EL2_ALL);
+
+	return vpu_mmu_cmdq_cmd_write(vdev, "TLBI_EL2_ALL", val, 0);
+}
+
+static int vpu_mmu_cmdq_write_tlbi_nsnh_all(struct vpu_device *vdev)
+{
+	u64 val = FIELD_PREP(VPU_MMU_CMD_OPCODE, CMD_TLBI_NSNH_ALL);
+
+	return vpu_mmu_cmdq_cmd_write(vdev, "TLBI_NSNH_ALL", val, 0);
+}
+
+static int vpu_mmu_reset(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	u32 val;
+	int ret;
+
+	memset(mmu->cmdq.base, 0, VPU_MMU_CMDQ_SIZE);
+	clflush_cache_range(mmu->cmdq.base, VPU_MMU_CMDQ_SIZE);
+	mmu->cmdq.prod = 0;
+	mmu->cmdq.cons = 0;
+
+	memset(mmu->evtq.base, 0, VPU_MMU_EVTQ_SIZE);
+	clflush_cache_range(mmu->evtq.base, VPU_MMU_EVTQ_SIZE);
+	mmu->evtq.prod = 0;
+	mmu->evtq.cons = 0;
+
+	ret = vpu_mmu_reg_write(vdev, MTL_VPU_HOST_MMU_CR0, 0);
+	if (ret)
+		return ret;
+
+	val = FIELD_PREP(VPU_MMU_CR1_TABLE_SH, VPU_MMU_SH_ISH) |
+	      FIELD_PREP(VPU_MMU_CR1_TABLE_OC, VPU_MMU_CACHE_WB) |
+	      FIELD_PREP(VPU_MMU_CR1_TABLE_IC, VPU_MMU_CACHE_WB) |
+	      FIELD_PREP(VPU_MMU_CR1_QUEUE_SH, VPU_MMU_SH_ISH) |
+	      FIELD_PREP(VPU_MMU_CR1_QUEUE_OC, VPU_MMU_CACHE_WB) |
+	      FIELD_PREP(VPU_MMU_CR1_QUEUE_IC, VPU_MMU_CACHE_WB);
+	REGV_WR32(MTL_VPU_HOST_MMU_CR1, val);
+
+	REGV_WR64(MTL_VPU_HOST_MMU_STRTAB_BASE, mmu->strtab.dma_q);
+	REGV_WR32(MTL_VPU_HOST_MMU_STRTAB_BASE_CFG, mmu->strtab.base_cfg);
+
+	REGV_WR64(MTL_VPU_HOST_MMU_CMDQ_BASE, mmu->cmdq.dma_q);
+	REGV_WR32(MTL_VPU_HOST_MMU_CMDQ_PROD, 0);
+	REGV_WR32(MTL_VPU_HOST_MMU_CMDQ_CONS, 0);
+
+	val = VPU_MMU_CR0_CMDQEN;
+	ret = vpu_mmu_reg_write(vdev, MTL_VPU_HOST_MMU_CR0, val);
+	if (ret)
+		return ret;
+
+	ret = vpu_mmu_cmdq_write_cfgi_all(vdev);
+	if (ret)
+		return ret;
+
+	ret = vpu_mmu_cmdq_write_tlbi_el2_all(vdev);
+	if (ret)
+		return ret;
+
+	ret = vpu_mmu_cmdq_write_tlbi_nsnh_all(vdev);
+	if (ret)
+		return ret;
+
+	ret = vpu_mmu_cmdq_sync(vdev);
+	if (ret)
+		return ret;
+
+	REGV_WR64(MTL_VPU_HOST_MMU_EVTQ_BASE, mmu->evtq.dma_q);
+	REGV_WR32(MTL_VPU_HOST_MMU_EVTQ_PROD_SEC, 0);
+	REGV_WR32(MTL_VPU_HOST_MMU_EVTQ_CONS_SEC, 0);
+
+	val |= VPU_MMU_CR0_EVTQEN;
+	ret = vpu_mmu_reg_write(vdev, MTL_VPU_HOST_MMU_CR0, val);
+	if (ret)
+		return ret;
+
+	val |= VPU_MMU_CR0_ATSCHK;
+	ret = vpu_mmu_reg_write(vdev, MTL_VPU_HOST_MMU_CR0, val);
+	if (ret)
+		return ret;
+
+	ret = vpu_mmu_irqs_setup(vdev);
+	if (ret)
+		return ret;
+
+	val |= VPU_MMU_CR0_SMMUEN;
+	return vpu_mmu_reg_write(vdev, MTL_VPU_HOST_MMU_CR0, val);
+}
+
+static void vpu_mmu_strtab_link_cd(struct vpu_device *vdev, u32 sid)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	struct vpu_mmu_strtab *strtab = &mmu->strtab;
+	struct vpu_mmu_cdtab *cdtab = &mmu->cdtab;
+	u64 *entry = strtab->base + (sid * VPU_MMU_STRTAB_ENT_SIZE);
+	u64 str[2];
+
+	str[0] = FIELD_PREP(VPU_MMU_STE_0_CFG, VPU_MMU_STE_0_CFG_S1_TRANS) |
+		 FIELD_PREP(VPU_MMU_STE_0_S1CDMAX, VPU_MMU_CDTAB_ENT_COUNT_LOG2) |
+		 FIELD_PREP(VPU_MMU_STE_0_S1FMT, VPU_MMU_STE_0_S1FMT_LINEAR) |
+		 VPU_MMU_STE_0_V |
+		 (cdtab->dma & VPU_MMU_STE_0_S1CTXPTR_MASK);
+
+	str[1] = FIELD_PREP(VPU_MMU_STE_1_S1DSS, VPU_MMU_STE_1_S1DSS_TERMINATE) |
+		 FIELD_PREP(VPU_MMU_STE_1_S1CIR, VPU_MMU_STE_1_S1C_CACHE_NC) |
+		 FIELD_PREP(VPU_MMU_STE_1_S1COR, VPU_MMU_STE_1_S1C_CACHE_NC) |
+		 FIELD_PREP(VPU_MMU_STE_1_S1CSH, VPU_MMU_SH_NSH) |
+		 FIELD_PREP(VPU_MMU_STE_1_PRIVCFG, VPU_MMU_STE_1_PRIVCFG_UNPRIV) |
+		 FIELD_PREP(VPU_MMU_STE_1_INSTCFG, VPU_MMU_STE_1_INSTCFG_DATA) |
+		 FIELD_PREP(VPU_MMU_STE_1_STRW, VPU_MMU_STE_1_STRW_NSEL1) |
+		 VPU_MMU_STE_1_MEV |
+		 VPU_MMU_STE_1_S1STALLD;
+
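+	/*
+	 * Write the word that carries the V (valid) bit - str[0] - last, so
+	 * the MMU never observes a partially initialized STE.
+	 */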
+	WRITE_ONCE(entry[1], str[1]);
+	WRITE_ONCE(entry[0], str[0]);
+
+	clflush_cache_range(entry, VPU_MMU_STRTAB_ENT_SIZE);
+
+	vpu_dbg(MMU, "STRTAB write entry (SID=%u): 0x%llx, 0x%llx\n",
+		sid, str[0], str[1]);
+}
+
+static int vpu_mmu_strtab_init(struct vpu_device *vdev)
+{
+	int i;
+
+	for (i = 0; i < VPU_MMU_STRTAB_ENT_COUNT; i++)
+		vpu_mmu_strtab_link_cd(vdev, i);
+
+	return 0;
+}
+
+int vpu_mmu_invalidate_tlb(struct vpu_device *vdev, u16 ssid)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	int ret;
+
+	if (mutex_lock_interruptible(&mmu->lock))
+		return -EINTR;
+
+	if (!mmu->on) {
+		ret = 0;
+		goto unlock;
+	}
+
+	ret = vpu_mmu_cmdq_write_tlbi_nh_asid(vdev, ssid);
+	if (ret)
+		goto unlock;
+
+	ret = vpu_mmu_cmdq_sync(vdev);
+unlock:
+	mutex_unlock(&mmu->lock);
+	return ret;
+}
+
+static int vpu_mmu_cd_add(struct vpu_device *vdev, u32 ssid, u64 cd_dma)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	struct vpu_mmu_cdtab *cdtab = &mmu->cdtab;
+	u64 *entry;
+	u64 cd[4];
+	int ret;
+
+	if (ssid >= VPU_MMU_CDTAB_ENT_COUNT)
+		return -EINVAL;
+
+	if (mutex_lock_interruptible(&mmu->lock))
+		return -EINTR;
+
+	entry = cdtab->base + (ssid * VPU_MMU_CDTAB_ENT_SIZE);
+
+	if (cd_dma != 0) {
+		cd[0] = FIELD_PREP(VPU_MMU_CD_0_TCR_T0SZ, 26) |
+			FIELD_PREP(VPU_MMU_CD_0_TCR_TG0, 0) |
+			FIELD_PREP(VPU_MMU_CD_0_TCR_IRGN0, 0) |
+			FIELD_PREP(VPU_MMU_CD_0_TCR_ORGN0, 0) |
+			FIELD_PREP(VPU_MMU_CD_0_TCR_SH0, 0) |
+			FIELD_PREP(VPU_MMU_CD_0_TCR_IPS, 3) |
+			FIELD_PREP(VPU_MMU_CD_0_ASID, ssid) |
+			VPU_MMU_CD_0_TCR_EPD1 |
+			VPU_MMU_CD_0_AA64 |
+			VPU_MMU_CD_0_R |
+			VPU_MMU_CD_0_A |
+			VPU_MMU_CD_0_ASET |
+			VPU_MMU_CD_0_V;
+		cd[1] = cd_dma & VPU_MMU_CD_1_TTB0_MASK;
+		cd[2] = 0;
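+		/* cd[3]: memory attribute (MAIR) encodings for this context */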
+		cd[3] = 0x0000000000007444;
+	} else {
+		memset(cd, 0, sizeof(cd));
+	}
+
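+	/* As with the STE, publish the word carrying the V bit (cd[0]) last */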
+	WRITE_ONCE(entry[1], cd[1]);
+	WRITE_ONCE(entry[2], cd[2]);
+	WRITE_ONCE(entry[3], cd[3]);
+	WRITE_ONCE(entry[0], cd[0]);
+
+	clflush_cache_range(entry, VPU_MMU_CDTAB_ENT_SIZE);
+
+	vpu_dbg(MMU, "CDTAB %s entry (SSID=%u, dma=%pad): 0x%llx, 0x%llx, 0x%llx, 0x%llx\n",
+		cd_dma ? "write" : "clear", ssid, &cd_dma, cd[0], cd[1], cd[2], cd[3]);
+
+	if (!mmu->on) {
+		ret = 0;
+		goto unlock;
+	}
+
+	ret = vpu_mmu_cmdq_write_cfgi_cd_all(vdev);
+	if (ret)
+		goto unlock;
+
+	ret = vpu_mmu_cmdq_write_tlbi_nh_asid(vdev, ssid);
+	if (ret)
+		goto unlock;
+
+	ret = vpu_mmu_cmdq_sync(vdev);
+unlock:
+	mutex_unlock(&mmu->lock);
+	return ret;
+}
+
+static int vpu_mmu_cd_add_gbl(struct vpu_device *vdev)
+{
+	int ret;
+
+	ret = vpu_mmu_cd_add(vdev, 0, vdev->gctx.pgtable.pgd_dma);
+	if (ret)
+		vpu_err(vdev, "Failed to add global CD entry: %d\n", ret);
+
+	return ret;
+}
+
+static int vpu_mmu_cd_add_user(struct vpu_device *vdev, u32 ssid, dma_addr_t cd_dma)
+{
+	int ret;
+
+	if (ssid == 0) {
+		vpu_err(vdev, "Invalid SSID: %u\n", ssid);
+		return -EINVAL;
+	}
+
+	ret = vpu_mmu_cd_add(vdev, ssid, cd_dma);
+	if (ret)
+		vpu_err(vdev, "Failed to add CD entry SSID=%u: %d\n", ssid, ret);
+
+	return ret;
+}
+
+void vpu_mmu_fini(struct vpu_device *vdev)
+{
+	mutex_destroy(&vdev->mmu->lock);
+}
+
+int vpu_mmu_init(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	int ret;
+
+	vpu_dbg(MMU, "Init..\n");
+
+	mutex_init(&mmu->lock);
+
+	ret = vpu_mmu_config_check(vdev);
+	if (ret)
+		goto err;
+
+	ret = vpu_mmu_structs_alloc(vdev);
+	if (ret)
+		goto err;
+
+	ret = vpu_mmu_strtab_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize strtab: %d\n", ret);
+		goto err;
+	}
+
+	ret = vpu_mmu_cd_add_gbl(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to add global CD: %d\n", ret);
+		goto err;
+	}
+
+	ret = vpu_mmu_enable(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to enable MMU: %d\n", ret);
+		goto err;
+	}
+
+	vpu_dbg(MMU, "Init done\n");
+
+	return 0;
+
+err:
+	vpu_mmu_fini(vdev);
+	return ret;
+}
+
+int vpu_mmu_enable(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+	int ret;
+
+	mutex_lock(&mmu->lock);
+
+	mmu->on = true;
+
+	ret = vpu_mmu_reset(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to reset MMU: %d\n", ret);
+		goto err;
+	}
+
+	ret = vpu_mmu_cmdq_write_cfgi_ste(vdev);
+	if (ret)
+		goto err;
+
+	ret = vpu_mmu_cmdq_write_prefetch_cfg(vdev);
+	if (ret)
+		goto err;
+
+	ret = vpu_mmu_cmdq_write_cfgi_cd_all(vdev);
+	if (ret)
+		goto err;
+
+	ret = vpu_mmu_cmdq_write_tlbi_nsnh_all(vdev);
+	if (ret)
+		goto err;
+
+	ret = vpu_mmu_cmdq_sync(vdev);
+	if (ret)
+		goto err;
+
+	mutex_unlock(&mmu->lock);
+
+	return 0;
+err:
+	mmu->on = false;
+	mutex_unlock(&mmu->lock);
+	return ret;
+}
+
+void vpu_mmu_disable(struct vpu_device *vdev)
+{
+	struct vpu_mmu_info *mmu = vdev->mmu;
+
+	mutex_lock(&mmu->lock);
+	mmu->on = false;
+	mutex_unlock(&mmu->lock);
+}
+
+irqreturn_t vpu_mmu_irq_evtq_handler(struct vpu_device *vdev)
+{
+	struct vpu_mmu_queue *evtq = &vdev->mmu->evtq;
+	u64 in_addr, fetch_addr;
+	u32 *evt, op, ssid, sid, counter = 0;
+
+	vpu_dbg(IRQ, "MMU event queue\n");
+
+	do {
+		evt = (evtq->base + (VPU_MMU_Q_IDX(evtq->cons) * VPU_MMU_EVTQ_CMD_SIZE));
+		clflush_cache_range(evt, VPU_MMU_EVTQ_CMD_SIZE);
+
+		op = FIELD_GET(VPU_MMU_EVT_OP_MASK, evt[0]);
+		ssid = FIELD_GET(VPU_MMU_EVT_SSID_MASK, evt[0]);
+		sid = evt[1];
+		in_addr = ((u64)evt[5]) << 32 | evt[4];
+		fetch_addr = ((u64)evt[7]) << 32 | evt[6];
+
+		vpu_err(vdev, "MMU EVTQ: 0x%x (%s) SSID: %d SID: %d, e[2] %08x, e[3] %08x, in addr: 0x%llx, fetch addr: 0x%llx\n",
+			op, vpu_mmu_evt_to_str(op), ssid, sid, evt[2], evt[3], in_addr, fetch_addr);
+		evtq->cons = (evtq->cons + 1) & VPU_MMU_Q_WRAP_MASK;
+
+		REGV_WR32(MTL_VPU_HOST_MMU_EVTQ_CONS_SEC, evtq->cons);
+
+		evtq->prod = REGV_RD32(MTL_VPU_HOST_MMU_EVTQ_PROD_SEC);
+
+		if (counter++ >= VPU_MMU_EVTS_MAX)
+			break;
+
+	} while (evtq->prod != evtq->cons);
+
+	return IRQ_HANDLED;
+}
+
+irqreturn_t vpu_mmu_irq_gerr_handler(struct vpu_device *vdev)
+{
+	u32 gerror_val, gerrorn_val, active;
+
+	vpu_dbg(IRQ, "MMU error\n");
+
+	gerror_val = REGV_RD32(MTL_VPU_HOST_MMU_GERROR);
+	gerrorn_val = REGV_RD32(MTL_VPU_HOST_MMU_GERRORN);
+
+	active = gerror_val ^ gerrorn_val;
+	if (!(active & VPU_MMU_GERROR_ERR_MASK))
+		return IRQ_NONE;
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, SFM, active))
+		vpu_err_ratelimited(vdev, "MMU has entered service failure mode\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_ABT, active))
+		vpu_warn_ratelimited(vdev, "MMU MSI ABT write aborted\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_PRIQ_ABT, active))
+		vpu_warn_ratelimited(vdev, "MMU PRIQ MSI ABT write aborted\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_EVTQ_ABT, active))
+		vpu_warn_ratelimited(vdev, "MMU EVTQ MSI ABT write aborted\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_CMDQ_ABT, active))
+		vpu_warn_ratelimited(vdev, "MMU CMDQ MSI ABT write aborted\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, PRIQ_ABT, active))
+		vpu_err_ratelimited(vdev, "MMU PRIQ write aborted\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, EVTQ_ABT, active))
+		vpu_err_ratelimited(vdev, "MMU EVTQ write aborted\n");
+
+	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, CMDQ, active))
+		vpu_err_ratelimited(vdev, "MMU CMDQ write aborted\n");
+
+	REGV_WR32(MTL_VPU_HOST_MMU_GERRORN, gerror_val);
+
+	return IRQ_HANDLED;
+}
+
+int vpu_mmu_set_pgtable(struct vpu_device *vdev, int ssid, struct vpu_mmu_pgtable *pgtable)
+{
+	return vpu_mmu_cd_add_user(vdev, ssid, pgtable->pgd_dma);
+}
+
+void vpu_mmu_clear_pgtable(struct vpu_device *vdev, int ssid)
+{
+	vpu_mmu_cd_add_user(vdev, ssid, 0); /* 0 will clear CD entry */
+}
diff --git a/drivers/gpu/drm/vpu/vpu_mmu.h b/drivers/gpu/drm/vpu/vpu_mmu.h
new file mode 100644
index 000000000000..c113446044ec
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_mmu.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_MMU_H__
+#define __VPU_MMU_H__
+
+#include <linux/irqreturn.h>
+
+struct vpu_device;
+
+struct vpu_mmu_cdtab {
+	void *base;
+	dma_addr_t dma;
+};
+
+struct vpu_mmu_strtab {
+	void *base;
+	dma_addr_t dma;
+	u64 dma_q;
+	u32 base_cfg;
+};
+
+struct vpu_mmu_queue {
+	void *base;
+	dma_addr_t dma;
+	u64 dma_q;
+	u32 prod;
+	u32 cons;
+};
+
+struct vpu_mmu_info {
+	struct mutex lock; /* Protects cdtab, strtab, cmdq, on */
+	struct vpu_mmu_cdtab cdtab;
+	struct vpu_mmu_strtab strtab;
+	struct vpu_mmu_queue cmdq;
+	struct vpu_mmu_queue evtq;
+	bool on;
+};
+
+int vpu_mmu_init(struct vpu_device *vdev);
+void vpu_mmu_fini(struct vpu_device *vdev);
+void vpu_mmu_disable(struct vpu_device *vdev);
+int vpu_mmu_enable(struct vpu_device *vdev);
+int vpu_mmu_set_pgtable(struct vpu_device *vdev, int ssid, struct vpu_mmu_pgtable *pgtable);
+void vpu_mmu_clear_pgtable(struct vpu_device *vdev, int ssid);
+int vpu_mmu_invalidate_tlb(struct vpu_device *vdev, u16 ssid);
+
+irqreturn_t vpu_mmu_irq_evtq_handler(struct vpu_device *vdev);
+irqreturn_t vpu_mmu_irq_gerr_handler(struct vpu_device *vdev);
+
+#endif /* __VPU_MMU_H__ */
diff --git a/drivers/gpu/drm/vpu/vpu_mmu_context.c b/drivers/gpu/drm/vpu/vpu_mmu_context.c
new file mode 100644
index 000000000000..43e0f06b7c4e
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_mmu_context.c
@@ -0,0 +1,418 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <linux/bitfield.h>
+#include <linux/highmem.h>
+
+#include "vpu_drv.h"
+#include "vpu_hw.h"
+#include "vpu_mmu.h"
+#include "vpu_mmu_context.h"
+
+#define VPU_MMU_PGD_INDEX_MASK          GENMASK(38, 30)
+#define VPU_MMU_PMD_INDEX_MASK          GENMASK(29, 21)
+#define VPU_MMU_PTE_INDEX_MASK          GENMASK(20, 12)
+#define VPU_MMU_ENTRY_FLAGS_MASK        GENMASK(11, 0)
+#define VPU_MMU_ENTRY_FLAG_NG           BIT(11)
+#define VPU_MMU_ENTRY_FLAG_AF           BIT(10)
+#define VPU_MMU_ENTRY_FLAG_USER         BIT(6)
+#define VPU_MMU_ENTRY_FLAG_LLC_COHERENT BIT(2)
+#define VPU_MMU_ENTRY_FLAG_TYPE_PAGE    BIT(1)
+#define VPU_MMU_ENTRY_FLAG_VALID        BIT(0)
+
+#define VPU_MMU_PAGE_SIZE    SZ_4K
+#define VPU_MMU_PTE_MAP_SIZE (VPU_MMU_PGTABLE_ENTRIES * VPU_MMU_PAGE_SIZE)
+#define VPU_MMU_PMD_MAP_SIZE (VPU_MMU_PGTABLE_ENTRIES * VPU_MMU_PTE_MAP_SIZE)
+#define VPU_MMU_PGTABLE_SIZE (VPU_MMU_PGTABLE_ENTRIES * sizeof(u64))
+
+#define VPU_MMU_DUMMY_ADDRESS 0xdeadb000
+#define VPU_MMU_ENTRY_VALID   (VPU_MMU_ENTRY_FLAG_TYPE_PAGE | VPU_MMU_ENTRY_FLAG_VALID)
+#define VPU_MMU_ENTRY_INVALID (VPU_MMU_DUMMY_ADDRESS & ~VPU_MMU_ENTRY_FLAGS_MASK)
+#define VPU_MMU_ENTRY_MAPPED  (VPU_MMU_ENTRY_FLAG_AF | VPU_MMU_ENTRY_FLAG_USER | \
+			       VPU_MMU_ENTRY_FLAG_NG | VPU_MMU_ENTRY_VALID)
+
+static int vpu_mmu_pgtable_init(struct vpu_device *vdev, struct vpu_mmu_pgtable *pgtable)
+{
+	dma_addr_t pgd_dma;
+	u64 *pgd;
+
+	pgd = dma_alloc_wc(vdev->drm.dev, VPU_MMU_PGTABLE_SIZE, &pgd_dma, GFP_KERNEL);
+	if (!pgd)
+		return -ENOMEM;
+
+	pgtable->pgd = pgd;
+	pgtable->pgd_dma = pgd_dma;
+
+	return 0;
+}
+
+static void vpu_mmu_pgtable_free(struct vpu_device *vdev, struct vpu_mmu_pgtable *pgtable)
+{
+	int pgd_index, pmd_index;
+
+	for (pgd_index = 0; pgd_index < VPU_MMU_PGTABLE_ENTRIES; ++pgd_index) {
+		u64 **pmd_entries = pgtable->pgd_cpu_entries[pgd_index];
+		u64 *pmd = pgtable->pgd_entries[pgd_index];
+
+		if (!pmd_entries)
+			continue;
+
+		for (pmd_index = 0; pmd_index < VPU_MMU_PGTABLE_ENTRIES; ++pmd_index) {
+			if (pmd_entries[pmd_index])
+				dma_free_wc(vdev->drm.dev, VPU_MMU_PGTABLE_SIZE,
+					    pmd_entries[pmd_index],
+					    pmd[pmd_index] & ~VPU_MMU_ENTRY_FLAGS_MASK);
+		}
+
+		kfree(pmd_entries);
+		dma_free_wc(vdev->drm.dev, VPU_MMU_PGTABLE_SIZE, pgtable->pgd_entries[pgd_index],
+			    pgtable->pgd[pgd_index] & ~VPU_MMU_ENTRY_FLAGS_MASK);
+	}
+
+	dma_free_wc(vdev->drm.dev, VPU_MMU_PGTABLE_SIZE, pgtable->pgd,
+		    pgtable->pgd_dma & ~VPU_MMU_ENTRY_FLAGS_MASK);
+}
+
+static u64*
+vpu_mmu_ensure_pmd(struct vpu_device *vdev, struct vpu_mmu_pgtable *pgtable, u64 pgd_index)
+{
+	u64 **pmd_entries;
+	dma_addr_t pmd_dma;
+	u64 *pmd;
+
+	if (pgtable->pgd_entries[pgd_index])
+		return pgtable->pgd_entries[pgd_index];
+
+	pmd = dma_alloc_wc(vdev->drm.dev, VPU_MMU_PGTABLE_SIZE, &pmd_dma, GFP_KERNEL);
+	if (!pmd)
+		return NULL;
+
+	pmd_entries = kzalloc(VPU_MMU_PGTABLE_SIZE, GFP_KERNEL);
+	if (!pmd_entries)
+		goto err_free_pmd;
+
+	pgtable->pgd_entries[pgd_index] = pmd;
+	pgtable->pgd_cpu_entries[pgd_index] = pmd_entries;
+	pgtable->pgd[pgd_index] = pmd_dma | VPU_MMU_ENTRY_VALID;
+
+	return pmd;
+
+err_free_pmd:
+	dma_free_wc(vdev->drm.dev, VPU_MMU_PGTABLE_SIZE, pmd, pmd_dma);
+	return NULL;
+}
+
+static u64*
+vpu_mmu_ensure_pte(struct vpu_device *vdev, struct vpu_mmu_pgtable *pgtable,
+		   int pgd_index, int pmd_index)
+{
+	dma_addr_t pte_dma;
+	u64 *pte;
+
+	if (pgtable->pgd_cpu_entries[pgd_index][pmd_index])
+		return pgtable->pgd_cpu_entries[pgd_index][pmd_index];
+
+	pte = dma_alloc_wc(vdev->drm.dev, VPU_MMU_PGTABLE_SIZE, &pte_dma, GFP_KERNEL);
+	if (!pte)
+		return NULL;
+
+	pgtable->pgd_cpu_entries[pgd_index][pmd_index] = pte;
+	pgtable->pgd_entries[pgd_index][pmd_index] = pte_dma | VPU_MMU_ENTRY_VALID;
+
+	return pte;
+}
+
+static int
+vpu_mmu_context_map_page(struct vpu_device *vdev, struct vpu_mmu_context *ctx,
+			 u64 vpu_addr, dma_addr_t dma_addr, int prot)
+{
+	u64 *pte;
+	int pgd_index = FIELD_GET(VPU_MMU_PGD_INDEX_MASK, vpu_addr);
+	int pmd_index = FIELD_GET(VPU_MMU_PMD_INDEX_MASK, vpu_addr);
+	int pte_index = FIELD_GET(VPU_MMU_PTE_INDEX_MASK, vpu_addr);
+
+	/* Allocate PMD - second level page table if needed */
+	if (!vpu_mmu_ensure_pmd(vdev, &ctx->pgtable, pgd_index))
+		return -ENOMEM;
+
+	/* Allocate PTE - third level page table if needed */
+	pte = vpu_mmu_ensure_pte(vdev, &ctx->pgtable, pgd_index, pmd_index);
+	if (!pte)
+		return -ENOMEM;
+
+	/* Update PTE - third level page table with DMA address */
+	pte[pte_index] = dma_addr | prot;
+
+	return 0;
+}
+
+static void vpu_mmu_context_unmap_page(struct vpu_mmu_context *ctx, u64 vpu_addr)
+{
+	int pgd_index = FIELD_GET(VPU_MMU_PGD_INDEX_MASK, vpu_addr);
+	int pmd_index = FIELD_GET(VPU_MMU_PMD_INDEX_MASK, vpu_addr);
+	int pte_index = FIELD_GET(VPU_MMU_PTE_INDEX_MASK, vpu_addr);
+
+	/* Update PTE with dummy physical address and clear flags */
+	ctx->pgtable.pgd_cpu_entries[pgd_index][pmd_index][pte_index] = VPU_MMU_ENTRY_INVALID;
+}
+
+static void
+vpu_mmu_context_flush_page_tables(struct vpu_mmu_context *ctx, u64 vpu_addr, size_t size)
+{
+	u64 end_addr = vpu_addr + size;
+	u64 *pgd = ctx->pgtable.pgd;
+
+	/* Align to PMD entry (2 MB) */
+	vpu_addr &= ~(VPU_MMU_PTE_MAP_SIZE - 1);
+
+	while (vpu_addr < end_addr) {
+		int pgd_index = FIELD_GET(VPU_MMU_PGD_INDEX_MASK, vpu_addr);
+		u64 pmd_end = (pgd_index + 1) * (u64)VPU_MMU_PMD_MAP_SIZE;
+		u64 *pmd = ctx->pgtable.pgd_entries[pgd_index];
+
+		while (vpu_addr < end_addr && vpu_addr < pmd_end) {
+			int pmd_index = FIELD_GET(VPU_MMU_PMD_INDEX_MASK, vpu_addr);
+			u64 *pte = ctx->pgtable.pgd_cpu_entries[pgd_index][pmd_index];
+
+			clflush_cache_range(pte, VPU_MMU_PGTABLE_SIZE);
+			vpu_addr += VPU_MMU_PTE_MAP_SIZE;
+		}
+		clflush_cache_range(pmd, VPU_MMU_PGTABLE_SIZE);
+	}
+	clflush_cache_range(pgd, VPU_MMU_PGTABLE_SIZE);
+}
+
+static int
+vpu_mmu_context_map_pages(struct vpu_device *vdev, struct vpu_mmu_context *ctx,
+			  u64 vpu_addr, dma_addr_t dma_addr, size_t size, int prot)
+{
+	while (size) {
+		int ret = vpu_mmu_context_map_page(vdev, ctx, vpu_addr, dma_addr, prot);
+
+		if (ret)
+			return ret;
+
+		vpu_addr += VPU_MMU_PAGE_SIZE;
+		dma_addr += VPU_MMU_PAGE_SIZE;
+		size -= VPU_MMU_PAGE_SIZE;
+	}
+
+	return 0;
+}
+
+static void vpu_mmu_context_unmap_pages(struct vpu_mmu_context *ctx, u64 vpu_addr, size_t size)
+{
+	while (size) {
+		vpu_mmu_context_unmap_page(ctx, vpu_addr);
+		vpu_addr += VPU_MMU_PAGE_SIZE;
+		size -= VPU_MMU_PAGE_SIZE;
+	}
+}
+
+int
+vpu_mmu_context_map_sgt(struct vpu_device *vdev, struct vpu_mmu_context *ctx,
+			u64 vpu_addr, struct sg_table *sgt,  bool llc_coherent)
+{
+	struct scatterlist *sg;
+	int prot;
+	int ret;
+	u64 i;
+
+	if (!IS_ALIGNED(vpu_addr, VPU_MMU_PAGE_SIZE))
+		return -EINVAL;
+	/*
+	 * VPU is only 32 bit, but DMA engine is 38 bit
+	 * Ranges < 2 GB are reserved for VPU internal registers
+	 * Limit range to 8 GB
+	 */
+	if (vpu_addr < SZ_2G || vpu_addr > SZ_8G)
+		return -EINVAL;
+
+	prot = VPU_MMU_ENTRY_MAPPED;
+	if (llc_coherent)
+		prot |= VPU_MMU_ENTRY_FLAG_LLC_COHERENT;
+
+	mutex_lock(&ctx->lock);
+
+	for_each_sgtable_dma_sg(sgt, sg, i) {
+		u64 dma_addr = sg_dma_address(sg) - sg->offset;
+		size_t size = sg_dma_len(sg) + sg->offset;
+
+		ret = vpu_mmu_context_map_pages(vdev, ctx, vpu_addr, dma_addr, size, prot);
+		if (ret) {
+			vpu_err(vdev, "Failed to map context pages\n");
+			mutex_unlock(&ctx->lock);
+			return ret;
+		}
+		vpu_mmu_context_flush_page_tables(ctx, vpu_addr, size);
+		vpu_addr += size;
+	}
+
+	mutex_unlock(&ctx->lock);
+
+	ret = vpu_mmu_invalidate_tlb(vdev, ctx->id);
+	if (ret)
+		vpu_err(vdev, "Failed to invalidate TLB for ctx %u: %d\n", ctx->id, ret);
+	return ret;
+}
+
+void
+vpu_mmu_context_unmap_sgt(struct vpu_device *vdev, struct vpu_mmu_context *ctx,
+			  u64 vpu_addr, struct sg_table *sgt)
+{
+	struct scatterlist *sg;
+	int ret;
+	u64 i;
+
+	if (!IS_ALIGNED(vpu_addr, VPU_MMU_PAGE_SIZE))
+		vpu_warn(vdev, "Unaligned vpu_addr: 0x%llx\n", vpu_addr);
+
+	mutex_lock(&ctx->lock);
+
+	for_each_sgtable_dma_sg(sgt, sg, i) {
+		size_t size = sg_dma_len(sg) + sg->offset;
+
+		vpu_mmu_context_unmap_pages(ctx, vpu_addr, size);
+		vpu_mmu_context_flush_page_tables(ctx, vpu_addr, size);
+		vpu_addr += size;
+	}
+
+	mutex_unlock(&ctx->lock);
+
+	ret = vpu_mmu_invalidate_tlb(vdev, ctx->id);
+	if (ret)
+		vpu_warn(vdev, "Failed to invalidate TLB for ctx %u: %d\n", ctx->id, ret);
+}
+
+int
+vpu_mmu_context_insert_node_locked(struct vpu_mmu_context *ctx, const struct vpu_addr_range *range,
+				   u64 size, struct drm_mm_node *node)
+{
+	lockdep_assert_held(&ctx->lock);
+
+	return drm_mm_insert_node_in_range(&ctx->mm, node, size, VPU_MMU_PAGE_SIZE,
+					  0, range->start, range->end, DRM_MM_INSERT_BEST);
+}
+
+void
+vpu_mmu_context_remove_node_locked(struct vpu_mmu_context *ctx, struct drm_mm_node *node)
+{
+	lockdep_assert_held(&ctx->lock);
+
+	drm_mm_remove_node(node);
+}
+
+static int
+vpu_mmu_context_init(struct vpu_device *vdev, struct vpu_mmu_context *ctx, u32 context_id)
+{
+	u64 start, end;
+	int ret;
+
+	mutex_init(&ctx->lock);
+	INIT_LIST_HEAD(&ctx->bo_list);
+
+	ret = vpu_mmu_pgtable_init(vdev, &ctx->pgtable);
+	if (ret)
+		return ret;
+
+	if (!context_id) {
+		start = vdev->hw->ranges.global_low.start;
+		end = vdev->hw->ranges.global_high.end;
+	} else {
+		start = vdev->hw->ranges.user_low.start;
+		end = vdev->hw->ranges.user_high.end;
+	}
+
+	drm_mm_init(&ctx->mm, start, end - start);
+	ctx->id = context_id;
+
+	return 0;
+}
+
+static void vpu_mmu_context_fini(struct vpu_device *vdev, struct vpu_mmu_context *ctx)
+{
+	WARN_ON(!ctx->pgtable.pgd);
+
+	mutex_destroy(&ctx->lock);
+	vpu_mmu_pgtable_free(vdev, &ctx->pgtable);
+	drm_mm_takedown(&ctx->mm);
+}
+
+int vpu_mmu_global_context_init(struct vpu_device *vdev)
+{
+	return vpu_mmu_context_init(vdev, &vdev->gctx, VPU_GLOBAL_CONTEXT_MMU_SSID);
+}
+
+void vpu_mmu_global_context_fini(struct vpu_device *vdev)
+{
+	return vpu_mmu_context_fini(vdev, &vdev->gctx);
+}
+
+int vpu_mmu_user_context_init(struct vpu_file_priv *file_priv)
+{
+	struct vpu_device *vdev = file_priv->vdev;
+	u32 context_id;
+	void *old;
+	int ret;
+
+	mutex_lock(&file_priv->lock);
+
+	if (file_priv->ctx.id)
+		goto unlock;
+
+	ret = xa_alloc(&vdev->context_xa, &context_id, NULL, vdev->context_xa_limit, GFP_KERNEL);
+	if (ret) {
+		vpu_err(vdev, "Failed to allocate context_id\n");
+		goto err_unlock;
+	}
+
+	ret = vpu_mmu_context_init(vdev, &file_priv->ctx, context_id);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize context\n");
+		goto err_erase_context_id;
+	}
+
+	ret = vpu_mmu_set_pgtable(vdev, context_id, &file_priv->ctx.pgtable);
+	if (ret) {
+		vpu_err(vdev, "Failed to set page table\n");
+		goto err_context_fini;
+	}
+
+	old = xa_store(&vdev->context_xa, context_id, file_priv, GFP_KERNEL);
+	if (xa_is_err(old)) {
+		ret = xa_err(old);
+		vpu_err(vdev, "Failed to store context %u: %d\n", context_id, ret);
+		goto err_clear_pgtable;
+	}
+
+	vpu_dbg(FILE, "file_priv context init: id %u process %s pid %d\n",
+		context_id, current->comm, task_pid_nr(current));
+
+unlock:
+	mutex_unlock(&file_priv->lock);
+	return 0;
+
+err_clear_pgtable:
+	vpu_mmu_clear_pgtable(vdev, context_id);
+err_context_fini:
+	vpu_mmu_context_fini(vdev, &file_priv->ctx);
+err_erase_context_id:
+	xa_erase(&vdev->context_xa, context_id);
+err_unlock:
+	mutex_unlock(&file_priv->lock);
+	return ret;
+}
+
+void vpu_mmu_user_context_fini(struct vpu_file_priv *file_priv)
+{
+	struct vpu_device *vdev = file_priv->vdev;
+
+	WARN_ON(!file_priv->ctx.id);
+
+	xa_store(&vdev->context_xa, file_priv->ctx.id, NULL, GFP_KERNEL);
+	vpu_mmu_clear_pgtable(vdev, file_priv->ctx.id);
+	vpu_mmu_context_fini(vdev, &file_priv->ctx);
+	xa_erase(&vdev->context_xa, file_priv->ctx.id);
+}
diff --git a/drivers/gpu/drm/vpu/vpu_mmu_context.h b/drivers/gpu/drm/vpu/vpu_mmu_context.h
new file mode 100644
index 000000000000..04b54457624b
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_mmu_context.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_MMU_CONTEXT_H__
+#define __VPU_MMU_CONTEXT_H__
+
+#include <drm/drm_mm.h>
+
+struct vpu_device;
+struct vpu_file_priv;
+struct vpu_addr_range;
+
+#define VPU_MMU_PGTABLE_ENTRIES	512
+
+struct vpu_mmu_pgtable {
+	u64             **pgd_cpu_entries[VPU_MMU_PGTABLE_ENTRIES];
+	u64		*pgd_entries[VPU_MMU_PGTABLE_ENTRIES];
+	u64		*pgd;
+	dma_addr_t	pgd_dma;
+};
+
+struct vpu_mmu_context {
+	struct mutex lock; /* protects: mm, pgtable, bo_list */
+	struct drm_mm mm;
+	struct vpu_mmu_pgtable pgtable;
+	struct list_head bo_list;
+	u32 id;
+};
+
+int vpu_mmu_global_context_init(struct vpu_device *vdev);
+void vpu_mmu_global_context_fini(struct vpu_device *vdev);
+
+int vpu_mmu_user_context_init(struct vpu_file_priv *file_priv);
+void vpu_mmu_user_context_fini(struct vpu_file_priv *file_priv);
+
+int vpu_mmu_context_insert_node_locked(struct vpu_mmu_context *ctx,
+				       const struct vpu_addr_range *range,
+				       u64 size, struct drm_mm_node *node);
+void vpu_mmu_context_remove_node_locked(struct vpu_mmu_context *ctx,
+					struct drm_mm_node *node);
+
+int vpu_mmu_context_map_sgt(struct vpu_device *vdev, struct vpu_mmu_context *ctx,
+			    u64 vpu_addr, struct sg_table *sgt, bool llc_coherent);
+void vpu_mmu_context_unmap_sgt(struct vpu_device *vdev, struct vpu_mmu_context *ctx,
+			       u64 vpu_addr, struct sg_table *sgt);
+
+#endif /* __VPU_MMU_CONTEXT_H__ */
diff --git a/include/uapi/drm/vpu_drm.h b/include/uapi/drm/vpu_drm.h
index 3311a36bf07b..8e3b852d78a1 100644
--- a/include/uapi/drm/vpu_drm.h
+++ b/include/uapi/drm/vpu_drm.h
@@ -38,6 +38,7 @@ extern "C" {
 #define DRM_VPU_PARAM_NUM_CONTEXTS	   4
 #define DRM_VPU_PARAM_CONTEXT_BASE_ADDRESS 5
 #define DRM_VPU_PARAM_CONTEXT_PRIORITY	   6
+#define DRM_VPU_PARAM_CONTEXT_ID	   7
 
 #define DRM_VPU_PLATFORM_TYPE_SILICON	   0
 
@@ -78,6 +79,9 @@ struct drm_vpu_param {
 	 * Value of current context scheduling priority (read-write).
 	 * See DRM_VPU_CONTEXT_PRIORITY_* for possible values.
 	 *
+	 * %DRM_VPU_PARAM_CONTEXT_ID:
+	 * Current context ID, always greater than 0 (read-only)
+	 *
 	 */
 	__u32 param;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v1 3/7] drm/vpu: Add GEM buffer object management
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 1/7] drm/vpu: Introduce a new " Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 2/7] drm/vpu: Add Intel VPU MMU support Jacek Lawrynowicz
@ 2022-07-28 13:17 ` Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 4/7] drm/vpu: Add IPC driver and JSM messages Jacek Lawrynowicz
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

Adds four types of GEM-based BOs for the VPU:
  - shmem
  - userptr
  - internal
  - prime

All types are implemented as struct vpu_bo, based on
struct drm_gem_object. The VPU address is allocated when the buffer is
created, except for imported prime buffers, which allocate it in the
BO_INFO IOCTL because the gem_prime_import callback has no file_priv
argument.
Internal buffers are pinned on creation; the remaining buffer types
can be pinned on demand (in the SUBMIT IOCTL).
The buffer's VPU address, allocated pages and mappings are released
when the buffer is destroyed.
An eviction mechanism is planned for future versions.

Add three new IOCTLs: BO_CREATE, BO_INFO, BO_USERPTR
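
For reference, the sketch below shows how user space is expected to drive
the new IOCTLs (create a mappable BO, query its mmap offset, CPU-map it).
It is illustrative only and not part of this patch; the render node path
and the <drm/vpu_drm.h> include location are assumptions.

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #include <drm/vpu_drm.h>

  int main(void)
  {
          struct drm_vpu_bo_create create = { .size = 4096, .flags = DRM_VPU_BO_MAPPABLE };
          struct drm_vpu_bo_info info = { 0 };
          void *ptr;
          /* Render node path is an assumption - use the node exposed by the VPU */
          int fd = open("/dev/dri/renderD128", O_RDWR);

          if (fd < 0)
                  return 1;

          /* Allocate a shmem BO; the driver returns a GEM handle and a VPU address */
          if (ioctl(fd, DRM_IOCTL_VPU_BO_CREATE, &create))
                  return 1;

          /* Query the mmap offset (and aligned size) for the new handle */
          info.handle = create.handle;
          if (ioctl(fd, DRM_IOCTL_VPU_BO_INFO, &info))
                  return 1;

          /* CPU-map the BO through the DRM fd and fill it */
          ptr = mmap(NULL, info.size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, info.mmap_offset);
          if (ptr == MAP_FAILED)
                  return 1;
          memset(ptr, 0, info.size);

          printf("handle %u vpu_addr 0x%llx\n", create.handle,
                 (unsigned long long)create.vpu_addr);

          munmap(ptr, info.size);
          close(fd);
          return 0;
  }

The userptr path is analogous: DRM_IOCTL_VPU_BO_USERPTR takes a
page-aligned pointer and size instead of allocating backing pages.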

Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/gpu/drm/vpu/Makefile  |   1 +
 drivers/gpu/drm/vpu/vpu_drv.c |   9 +
 drivers/gpu/drm/vpu/vpu_gem.c | 833 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_gem.h | 113 +++++
 include/uapi/drm/vpu_drm.h    | 114 +++++
 5 files changed, 1070 insertions(+)
 create mode 100644 drivers/gpu/drm/vpu/vpu_gem.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_gem.h

diff --git a/drivers/gpu/drm/vpu/Makefile b/drivers/gpu/drm/vpu/Makefile
index b789e3a6ed22..b2f7a6c240f7 100644
--- a/drivers/gpu/drm/vpu/Makefile
+++ b/drivers/gpu/drm/vpu/Makefile
@@ -3,6 +3,7 @@
 
 intel_vpu-y := \
 	vpu_drv.o \
+	vpu_gem.o \
 	vpu_hw_mtl.o \
 	vpu_mmu.o \
 	vpu_mmu_context.o
diff --git a/drivers/gpu/drm/vpu/vpu_drv.c b/drivers/gpu/drm/vpu/vpu_drv.c
index 75ec1fe9c3e2..0e820aeecdcc 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.c
+++ b/drivers/gpu/drm/vpu/vpu_drv.c
@@ -11,8 +11,10 @@
 #include <drm/drm_file.h>
 #include <drm/drm_gem.h>
 #include <drm/drm_ioctl.h>
+#include <drm/drm_prime.h>
 
 #include "vpu_drv.h"
+#include "vpu_gem.h"
 #include "vpu_hw.h"
 #include "vpu_mmu.h"
 #include "vpu_mmu_context.h"
@@ -187,6 +189,9 @@ static void vpu_postclose(struct drm_device *dev, struct drm_file *file)
 static const struct drm_ioctl_desc vpu_drm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(VPU_GET_PARAM, vpu_get_param_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(VPU_SET_PARAM, vpu_set_param_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(VPU_BO_CREATE, vpu_bo_create_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(VPU_BO_INFO, vpu_bo_info_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(VPU_BO_USERPTR, vpu_bo_userptr_ioctl, DRM_RENDER_ALLOW),
 };
 
 DEFINE_DRM_GEM_FOPS(vpu_fops);
@@ -210,6 +215,10 @@ static const struct drm_driver driver = {
 
 	.open = vpu_open,
 	.postclose = vpu_postclose,
+	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+	.gem_prime_import = vpu_gem_prime_import,
+	.gem_prime_mmap = drm_gem_prime_mmap,
 
 	.ioctls = vpu_drm_ioctls,
 	.num_ioctls = ARRAY_SIZE(vpu_drm_ioctls),
diff --git a/drivers/gpu/drm/vpu/vpu_gem.c b/drivers/gpu/drm/vpu/vpu_gem.c
new file mode 100644
index 000000000000..12f82ab941bd
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_gem.c
@@ -0,0 +1,833 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/highmem.h>
+#include <linux/module.h>
+#include <linux/set_memory.h>
+#include <linux/xarray.h>
+
+#include <drm/drm_cache.h>
+#include <drm/drm_debugfs.h>
+#include <drm/drm_file.h>
+#include <drm/drm_utils.h>
+
+#include "vpu_drv.h"
+#include "vpu_gem.h"
+#include "vpu_hw.h"
+#include "vpu_mmu.h"
+#include "vpu_mmu_context.h"
+
+MODULE_IMPORT_NS(DMA_BUF);
+
+static const struct drm_gem_object_funcs vpu_gem_funcs;
+
+static int __must_check prime_alloc_pages_locked(struct vpu_bo *bo)
+{
+	/* Pages are managed by the underlying dma-buf */
+	return 0;
+}
+
+static void prime_free_pages_locked(struct vpu_bo *bo)
+{
+	/* Pages are managed by the underlying dma-buf */
+}
+
+static int prime_map_pages_locked(struct vpu_bo *bo)
+{
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+	struct sg_table *sgt;
+
+	WARN_ON(!bo->base.import_attach);
+
+	sgt = dma_buf_map_attachment(bo->base.import_attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(sgt)) {
+		vpu_err(vdev, "Failed to map attachment: %ld\n", PTR_ERR(sgt));
+		return PTR_ERR(sgt);
+	}
+
+	bo->sgt = sgt;
+	return 0;
+}
+
+static void prime_unmap_pages_locked(struct vpu_bo *bo)
+{
+	WARN_ON(!bo->base.import_attach);
+
+	dma_buf_unmap_attachment(bo->base.import_attach, bo->sgt, DMA_BIDIRECTIONAL);
+	bo->sgt = NULL;
+}
+
+static const struct vpu_bo_ops prime_ops = {
+	.type = VPU_BO_TYPE_PRIME,
+	.name = "prime",
+	.alloc_pages = prime_alloc_pages_locked,
+	.free_pages = prime_free_pages_locked,
+	.map_pages = prime_map_pages_locked,
+	.unmap_pages = prime_unmap_pages_locked,
+};
+
+static int __must_check shmem_alloc_pages_locked(struct vpu_bo *bo)
+{
+	int npages = bo->base.size >> PAGE_SHIFT;
+	struct page **pages;
+
+	pages = drm_gem_get_pages(&bo->base);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
+
+	if (!vpu_bo_is_cached(bo))
+		set_pages_array_uc(pages, npages);
+
+	bo->pages = pages;
+	return 0;
+}
+
+static void shmem_free_pages_locked(struct vpu_bo *bo)
+{
+	if (!vpu_bo_is_cached(bo))
+		set_pages_array_wb(bo->pages, bo->base.size >> PAGE_SHIFT);
+
+	drm_gem_put_pages(&bo->base, bo->pages, true, false);
+	bo->pages = NULL;
+}
+
+static int vpu_bo_map_pages_locked(struct vpu_bo *bo)
+{
+	int npages = bo->base.size >> PAGE_SHIFT;
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+	struct sg_table *sgt;
+	int ret;
+
+	sgt = drm_prime_pages_to_sg(&vdev->drm, bo->pages, npages);
+	if (IS_ERR(sgt)) {
+		vpu_err(vdev, "Failed to allocate sgtable\n");
+		return PTR_ERR(sgt);
+	}
+
+	ret = dma_map_sgtable(vdev->drm.dev, sgt, DMA_BIDIRECTIONAL, 0);
+	if (ret) {
+		vpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret);
+		goto err_free_sgt;
+	}
+
+	bo->sgt = sgt;
+	return 0;
+
+err_free_sgt:
+	kfree(sgt);
+	return ret;
+}
+
+static void vpu_bo_unmap_pages_locked(struct vpu_bo *bo)
+{
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+
+	dma_unmap_sgtable(vdev->drm.dev, bo->sgt, DMA_BIDIRECTIONAL, 0);
+	sg_free_table(bo->sgt);
+	kfree(bo->sgt);
+	bo->sgt = NULL;
+}
+
+static const struct vpu_bo_ops shmem_ops = {
+	.type = VPU_BO_TYPE_SHMEM,
+	.name = "shmem",
+	.alloc_pages = shmem_alloc_pages_locked,
+	.free_pages = shmem_free_pages_locked,
+	.map_pages = vpu_bo_map_pages_locked,
+	.unmap_pages = vpu_bo_unmap_pages_locked,
+};
+
+static int __must_check userptr_alloc_pages_locked(struct vpu_bo *bo)
+{
+	unsigned int npages = bo->base.size >> PAGE_SHIFT;
+	struct page **pages;
+	int ret;
+
+	pages = kvmalloc_array(npages, sizeof(*bo->pages), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	ret = pin_user_pages(bo->user_ptr & PAGE_MASK, npages,
+			     FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM, pages, NULL);
+	if (ret != npages) {
+		if (ret > 0)
+			goto err_unpin_pages;
+		if (ret == 0)
+			ret = -EFAULT;
+		goto err_free_pages;
+	}
+
+	bo->pages = pages;
+	return 0;
+
+err_unpin_pages:
+	/* Partial pin is an error - unpin what was pinned and report failure */
+	unpin_user_pages(pages, ret);
+	ret = -EFAULT;
+err_free_pages:
+	kvfree(pages);
+	return ret;
+}
+
+static void userptr_free_pages_locked(struct vpu_bo *bo)
+{
+	unpin_user_pages(bo->pages, bo->base.size >> PAGE_SHIFT);
+	kvfree(bo->pages);
+	bo->pages = NULL;
+}
+
+static const struct vpu_bo_ops userptr_ops = {
+	.type = VPU_BO_TYPE_USERPTR,
+	.name = "userptr",
+	.alloc_pages = userptr_alloc_pages_locked,
+	.free_pages = userptr_free_pages_locked,
+	.map_pages = vpu_bo_map_pages_locked,
+	.unmap_pages = vpu_bo_unmap_pages_locked,
+};
+
+static int __must_check internal_alloc_pages_locked(struct vpu_bo *bo)
+{
+	unsigned int i, npages = bo->base.size >> PAGE_SHIFT;
+	struct page **pages;
+	int ret;
+
+	pages = kvmalloc_array(npages, sizeof(*bo->pages), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	for (i = 0; i < npages; i++) {
+		pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
+		if (!pages[i]) {
+			ret = -ENOMEM;
+			goto err_free_pages;
+		}
+		cond_resched();
+	}
+
+	bo->pages = pages;
+	return 0;
+
+err_free_pages:
+	while (i--)
+		put_page(pages[i]);
+	kvfree(pages);
+	return ret;
+}
+
+static void internal_free_pages_locked(struct vpu_bo *bo)
+{
+	unsigned int i, npages = bo->base.size >> PAGE_SHIFT;
+
+	for (i = 0; i < npages; i++)
+		put_page(bo->pages[i]);
+
+	kvfree(bo->pages);
+	bo->pages = NULL;
+}
+
+static const struct vpu_bo_ops internal_ops = {
+	.type = VPU_BO_TYPE_INTERNAL,
+	.name = "internal",
+	.alloc_pages = internal_alloc_pages_locked,
+	.free_pages = internal_free_pages_locked,
+	.map_pages = vpu_bo_map_pages_locked,
+	.unmap_pages = vpu_bo_unmap_pages_locked,
+};
+
+static int __must_check vpu_bo_alloc_and_map_pages_locked(struct vpu_bo *bo)
+{
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+	int ret;
+
+	lockdep_assert_held(&bo->lock);
+	WARN_ON(bo->sgt);
+
+	ret = bo->ops->alloc_pages(bo);
+	if (ret) {
+		vpu_err(vdev, "Failed to allocate pages for BO: %d", ret);
+		return ret;
+	}
+
+	ret = bo->ops->map_pages(bo);
+	if (ret) {
+		vpu_err(vdev, "Failed to map pages for BO: %d", ret);
+		goto err_free_pages;
+	}
+	return ret;
+
+err_free_pages:
+	bo->ops->free_pages(bo);
+	return ret;
+}
+
+static void vpu_bo_unmap_and_free_pages(struct vpu_bo *bo)
+{
+	mutex_lock(&bo->lock);
+
+	WARN_ON(!bo->sgt);
+	bo->ops->unmap_pages(bo);
+	WARN_ON(bo->sgt);
+	bo->ops->free_pages(bo);
+	WARN_ON(bo->pages);
+
+	mutex_unlock(&bo->lock);
+}
+
+/**
+ * vpu_bo_pin() - pin the backing physical pages and map them to the VPU.
+ * @bo: buffer object to be pinned
+ *
+ * This function pins the physical memory pages, maps them into the IOMMU
+ * address space and finally updates the VPU MMU page tables so the VPU can
+ * translate its VPU addresses to IOMMU addresses.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int __must_check vpu_bo_pin(struct vpu_bo *bo)
+{
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+	int ret = 0;
+
+	mutex_lock(&bo->lock);
+
+	if (!bo->vpu_addr) {
+		vpu_err(vdev, "vpu_addr not set for BO ctx_id: %d handle: %d\n",
+			bo->ctx->id, bo->handle);
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	if (!bo->sgt) {
+		ret = vpu_bo_alloc_and_map_pages_locked(bo);
+		if (ret)
+			goto unlock;
+	}
+
+	if (!bo->mmu_mapped) {
+		ret = vpu_mmu_context_map_sgt(vdev, bo->ctx, bo->vpu_addr, bo->sgt,
+					      vpu_bo_is_cached(bo));
+		if (ret) {
+			vpu_err(vdev, "Failed to map BO in MMU: %d\n", ret);
+			goto unlock;
+		}
+		bo->mmu_mapped = true;
+	}
+
+unlock:
+	mutex_unlock(&bo->lock);
+
+	return ret;
+}
+
+static int
+vpu_bo_alloc_vpu_addr(struct vpu_bo *bo, struct vpu_mmu_context *ctx,
+		      const struct vpu_addr_range *range)
+{
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+	int ret;
+
+	if (!range) {
+		if (bo->flags & DRM_VPU_BO_HIGH_MEM)
+			range = &vdev->hw->ranges.user_high;
+		else
+			range = &vdev->hw->ranges.user_low;
+	}
+
+	mutex_lock(&ctx->lock);
+	ret = vpu_mmu_context_insert_node_locked(ctx, range, bo->base.size, &bo->mm_node);
+	if (!ret) {
+		bo->ctx = ctx;
+		bo->vpu_addr = bo->mm_node.start;
+		list_add_tail(&bo->ctx_node, &ctx->bo_list);
+	}
+	mutex_unlock(&ctx->lock);
+
+	return ret;
+}
+
+static void vpu_bo_free_vpu_addr(struct vpu_bo *bo)
+{
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+	struct vpu_mmu_context *ctx = bo->ctx;
+
+	vpu_dbg(BO, "remove from ctx: ctx %d vpu_addr 0x%llx allocated %d mmu_mapped %d\n",
+		ctx->id, bo->vpu_addr, (bool)bo->sgt, bo->mmu_mapped);
+
+	mutex_lock(&bo->lock);
+
+	if (bo->mmu_mapped) {
+		WARN_ON(!bo->sgt);
+		vpu_mmu_context_unmap_sgt(vdev, ctx, bo->vpu_addr, bo->sgt);
+		bo->mmu_mapped = false;
+	}
+
+	mutex_lock(&ctx->lock);
+	list_del(&bo->ctx_node);
+	bo->vpu_addr = 0;
+	bo->ctx = NULL;
+	vpu_mmu_context_remove_node_locked(ctx, &bo->mm_node);
+	mutex_unlock(&ctx->lock);
+
+	mutex_unlock(&bo->lock);
+}
+
+void vpu_bo_remove_all_bos_from_context(struct vpu_mmu_context *ctx)
+{
+	struct vpu_bo *bo, *tmp;
+
+	list_for_each_entry_safe(bo, tmp, &ctx->bo_list, ctx_node)
+		vpu_bo_free_vpu_addr(bo);
+}
+
+static struct vpu_bo *
+vpu_bo_alloc(struct vpu_device *vdev, struct vpu_mmu_context *mmu_context,
+	     u64 unaligned_size, u32 flags, const struct vpu_bo_ops *ops,
+	     const struct vpu_addr_range *range, u64 user_ptr)
+{
+	u64 size = PAGE_ALIGN(unaligned_size);
+	struct vpu_bo *bo;
+	int ret = 0;
+
+	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+	if (!bo)
+		return ERR_PTR(-ENOMEM);
+
+	mutex_init(&bo->lock);
+	bo->base.funcs = &vpu_gem_funcs;
+	bo->flags = flags;
+	bo->ops = ops;
+	bo->user_ptr = user_ptr;
+
+	if (ops->type == VPU_BO_TYPE_SHMEM)
+		ret = drm_gem_object_init(&vdev->drm, &bo->base, size);
+	else
+		drm_gem_private_object_init(&vdev->drm, &bo->base, size);
+
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize drm object\n");
+		goto err_free;
+	}
+
+	if (flags & DRM_VPU_BO_MAPPABLE) {
+		ret = drm_gem_create_mmap_offset(&bo->base);
+		if (ret) {
+			vpu_err(vdev, "Failed to allocate mmap offset\n");
+			goto err_release;
+		}
+	}
+
+	if (mmu_context) {
+		ret = vpu_bo_alloc_vpu_addr(bo, mmu_context, range);
+		if (ret) {
+			vpu_err(vdev, "Failed to add BO to context: %d\n", ret);
+			goto err_release;
+		}
+	}
+
+	return bo;
+
+err_release:
+	drm_gem_object_release(&bo->base);
+err_free:
+	kfree(bo);
+	return ERR_PTR(ret);
+}
+
+static void vpu_bo_free(struct drm_gem_object *obj)
+{
+	struct vpu_bo *bo = to_vpu_bo(obj);
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+
+	if (bo->ctx)
+		vpu_dbg(BO, "free: ctx %d vpu_addr 0x%llx allocated %d mmu_mapped %d\n",
+			bo->ctx->id, bo->vpu_addr, (bool)bo->sgt, bo->mmu_mapped);
+	else
+		vpu_dbg(BO, "free: ctx (released) allocated %d mmu_mapped %d\n",
+			(bool)bo->sgt, bo->mmu_mapped);
+
+	WARN_ON(!dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ));
+
+	vunmap(bo->kvaddr);
+
+	if (bo->ctx)
+		vpu_bo_free_vpu_addr(bo);
+
+	if (bo->sgt)
+		vpu_bo_unmap_and_free_pages(bo);
+
+	if (bo->base.import_attach)
+		drm_prime_gem_destroy(&bo->base, bo->sgt);
+
+	drm_gem_object_release(&bo->base);
+
+	mutex_destroy(&bo->lock);
+	kfree(bo);
+}
+
+static int vpu_bo_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+{
+	struct vpu_bo *bo = to_vpu_bo(obj);
+	struct vpu_device *vdev = vpu_bo_to_vdev(bo);
+	pgprot_t vm_page_prot;
+
+	vpu_dbg(BO, "mmap: ctx %u handle %u vpu_addr 0x%llx size %zu type %s",
+		bo->ctx->id, bo->handle, bo->vpu_addr, bo->base.size, bo->ops->name);
+
+	if (obj->import_attach) {
+		/* Drop the reference drm_gem_mmap_obj() acquired.*/
+		/* Drop the reference drm_gem_mmap_obj() acquired. */
+		vma->vm_private_data = NULL;
+		return dma_buf_mmap(obj->dma_buf, vma, 0);
+	}
+
+	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND;
+
+	if (!vpu_bo_is_cached(bo)) {
+		vm_page_prot = vm_get_page_prot(vma->vm_flags);
+		vma->vm_page_prot = pgprot_noncached(vm_page_prot);
+	}
+
+	return 0;
+}
+
+static struct sg_table *vpu_bo_get_sg_table(struct drm_gem_object *obj)
+{
+	struct vpu_bo *bo = to_vpu_bo(obj);
+	loff_t npages = obj->size >> PAGE_SHIFT;
+	int ret = 0;
+
+	mutex_lock(&bo->lock);
+
+	if (!bo->sgt)
+		ret = vpu_bo_alloc_and_map_pages_locked(bo);
+
+	mutex_unlock(&bo->lock);
+
+	if (ret)
+		return ERR_PTR(ret);
+
+	return drm_prime_pages_to_sg(obj->dev, bo->pages, npages);
+}
+
+static vm_fault_t vpu_vm_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct drm_gem_object *obj = vma->vm_private_data;
+	struct vpu_bo *bo = to_vpu_bo(obj);
+	loff_t npages = obj->size >> PAGE_SHIFT;
+	pgoff_t page_offset;
+	struct page *page;
+	vm_fault_t ret;
+	int err;
+
+	mutex_lock(&bo->lock);
+
+	if (!bo->sgt) {
+		err = vpu_bo_alloc_and_map_pages_locked(bo);
+		if (err) {
+			ret = vmf_error(err);
+			goto unlock;
+		}
+	}
+
+	/* We don't use vmf->pgoff since that has the fake offset */
+	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
+	if (page_offset >= npages) {
+		ret = VM_FAULT_SIGBUS;
+	} else {
+		page = bo->pages[page_offset];
+		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
+	}
+
+unlock:
+	mutex_unlock(&bo->lock);
+
+	return ret;
+}
+
+static const struct vm_operations_struct vpu_vm_ops = {
+	.fault = vpu_vm_fault,
+	.open = drm_gem_vm_open,
+	.close = drm_gem_vm_close,
+};
+
+static const struct drm_gem_object_funcs vpu_gem_funcs = {
+	.free = vpu_bo_free,
+	.mmap = vpu_bo_mmap,
+	.vm_ops = &vpu_vm_ops,
+	.get_sg_table = vpu_bo_get_sg_table,
+};
+
+int
+vpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct vpu_file_priv *file_priv = file->driver_priv;
+	struct vpu_device *vdev = file_priv->vdev;
+	struct drm_vpu_bo_create *args = data;
+	struct vpu_bo *bo;
+	int ret;
+
+	if (args->flags & ~DRM_VPU_BO_FLAGS)
+		return -EINVAL;
+
+	if (args->size == 0)
+		return -EINVAL;
+
+	ret = vpu_mmu_user_context_init(file_priv);
+	if (ret)
+		return ret;
+
+	bo = vpu_bo_alloc(vdev, &file_priv->ctx, args->size, args->flags,
+			  &shmem_ops, NULL, 0);
+	if (IS_ERR(bo)) {
+		vpu_err(vdev, "Failed to create BO: %pe (ctx %u size %llu flags 0x%x)",
+			bo, file_priv->ctx.id, args->size, args->flags);
+		return PTR_ERR(bo);
+	}
+
+	ret = drm_gem_handle_create(file, &bo->base, &bo->handle);
+	if (!ret) {
+		args->vpu_addr = bo->vpu_addr;
+		args->handle = bo->handle;
+	}
+
+	vpu_dbg(BO, "alloc shmem: ctx %u vpu_addr 0x%llx size %zu flags 0x%x\n",
+		file_priv->ctx.id, bo->vpu_addr, bo->base.size, bo->flags);
+
+	/* Drop the creator's reference only after the last use of bo */
+	drm_gem_object_put(&bo->base);
+
+	return ret;
+}
+
+int
+vpu_bo_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct vpu_file_priv *file_priv = file->driver_priv;
+	struct vpu_device *vdev = file_priv->vdev;
+	struct drm_vpu_bo_userptr *args = data;
+	struct vpu_bo *bo;
+	int ret;
+
+	if (args->user_ptr == 0 || !PAGE_ALIGNED(args->user_ptr))
+		return -EINVAL;
+
+	if (args->user_size == 0 || !PAGE_ALIGNED(args->user_size))
+		return -EINVAL;
+
+	if (args->flags & ~DRM_VPU_BO_HIGH_MEM)
+		return -EINVAL;
+
+	if (!access_ok((const void __user *)args->user_ptr, args->user_size))
+		return -EFAULT;
+
+	ret = vpu_mmu_user_context_init(file_priv);
+	if (ret)
+		return ret;
+
+	bo = vpu_bo_alloc(vdev, &file_priv->ctx, args->user_size, args->flags,
+			  &userptr_ops, NULL, args->user_ptr);
+	if (IS_ERR(bo)) {
+		vpu_err(vdev, "Failed to create BO: %pe (ctx %u size %llu flags 0x%x)",
+			bo, file_priv->ctx.id, args->user_size, args->flags);
+		return PTR_ERR(bo);
+	}
+
+	ret = drm_gem_handle_create(file, &bo->base, &bo->handle);
+	if (!ret) {
+		args->vpu_addr = bo->vpu_addr;
+		args->handle = bo->handle;
+	}
+
+	vpu_dbg(BO, "alloc userptr: ctx %u vpu_addr 0x%llx size %zu flags 0x%x\n",
+		file_priv->ctx.id, bo->vpu_addr, bo->base.size, args->flags);
+
+	/* Drop the creator's reference only after the last use of bo */
+	drm_gem_object_put(&bo->base);
+
+	return ret;
+}
+
+struct vpu_bo *
+vpu_bo_alloc_internal(struct vpu_device *vdev, u64 vpu_addr, u64 size, bool cached)
+{
+	const struct vpu_addr_range *range;
+	struct vpu_addr_range fixed_range;
+	pgprot_t prot = PAGE_KERNEL;
+	struct vpu_bo *bo;
+	int flags = cached ? 0 : DRM_VPU_BO_UNCACHED;
+	int ret;
+
+	WARN_ON(!PAGE_ALIGNED(vpu_addr));
+	WARN_ON(!PAGE_ALIGNED(size));
+
+	if (vpu_addr) {
+		fixed_range.start = vpu_addr;
+		fixed_range.end = vpu_addr + size;
+		range = &fixed_range;
+	} else {
+		range = &vdev->hw->ranges.global_low;
+	}
+
+	bo = vpu_bo_alloc(vdev, &vdev->gctx, size, flags, &internal_ops, range, 0);
+	if (IS_ERR(bo)) {
+		vpu_err(vdev, "Failed to create BO: %pe (vpu_addr 0x%llx size %llu cached %d)",
+			bo, vpu_addr, size, cached);
+		return NULL;
+	}
+
+	ret = vpu_bo_pin(bo);
+	if (ret)
+		goto err_put;
+
+	if (!vpu_bo_is_cached(bo)) {
+		prot = pgprot_noncached(prot);
+		drm_clflush_pages(bo->pages, bo->base.size >> PAGE_SHIFT);
+	}
+
+	bo->kvaddr = vmap(bo->pages, bo->base.size >> PAGE_SHIFT, VM_MAP, prot);
+	if (!bo->kvaddr) {
+		vpu_err(vdev, "Failed to map BO into kernel virtual memory\n");
+		goto err_put;
+	}
+
+	vpu_dbg(BO, "alloc internal: ctx 0 vpu_addr 0x%llx size %zu cached %d\n",
+		bo->vpu_addr, bo->base.size, cached);
+
+	return bo;
+
+err_put:
+	drm_gem_object_put(&bo->base);
+	return NULL;
+}
+
+void vpu_bo_free_internal(struct vpu_bo *bo)
+{
+	drm_gem_object_put(&bo->base);
+}
+
+int vpu_bo_vremap_internal(struct vpu_device *vdev, struct vpu_bo *bo, bool cached)
+{
+	pgprot_t prot = cached ? PAGE_KERNEL : pgprot_noncached(PAGE_KERNEL);
+
+	vunmap(bo->kvaddr);
+
+	bo->kvaddr = vmap(bo->pages, bo->base.size >> PAGE_SHIFT, VM_MAP, prot);
+	if (!bo->kvaddr) {
+		vpu_err(vdev, "Failed to remap BO into kernel virtual memory\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+struct drm_gem_object *vpu_gem_prime_import(struct drm_device *dev, struct dma_buf *buf)
+{
+	struct vpu_device *vdev = to_vpu_dev(dev);
+	struct dma_buf_attachment *attach;
+	struct vpu_bo *bo;
+
+	attach = dma_buf_attach(buf, dev->dev);
+	if (IS_ERR(attach))
+		return ERR_CAST(attach);
+
+	get_dma_buf(buf);
+
+	bo = vpu_bo_alloc(vdev, NULL, buf->size, DRM_VPU_BO_MAPPABLE, &prime_ops, NULL, 0);
+	if (IS_ERR(bo)) {
+		vpu_err(vdev, "Failed to import BO: %pe (size %lu)", bo, buf->size);
+		goto err_detach;
+	}
+
+	bo->base.import_attach = attach;
+	bo->base.resv = buf->resv;
+
+	return &bo->base;
+
+err_detach:
+	dma_buf_detach(buf, attach);
+	dma_buf_put(buf);
+	return ERR_CAST(bo);
+}
+
+int vpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct vpu_file_priv *file_priv = file->driver_priv;
+	struct vpu_device *vdev = to_vpu_dev(dev);
+	struct drm_vpu_bo_info *args = data;
+	struct drm_gem_object *obj;
+	struct vpu_bo *bo;
+	int ret;
+
+	obj = drm_gem_object_lookup(file, args->handle);
+	if (!obj)
+		return -ENOENT;
+
+	ret = vpu_mmu_user_context_init(file_priv);
+	if (ret)
+		goto obj_put;
+
+	bo = to_vpu_bo(obj);
+
+	mutex_lock(&bo->lock);
+
+	if (!bo->ctx) {
+		ret = vpu_bo_alloc_vpu_addr(bo, &file_priv->ctx, NULL);
+		if (ret) {
+			vpu_err(vdev, "Failed to allocate vpu_addr: %d\n", ret);
+			goto unlock;
+		}
+	}
+
+	args->flags = bo->flags;
+	args->mmap_offset = drm_vma_node_offset_addr(&obj->vma_node);
+	args->vpu_addr = bo->vpu_addr;
+	args->size = obj->size;
+unlock:
+	mutex_unlock(&bo->lock);
+obj_put:
+	drm_gem_object_put(obj);
+	return ret;
+}
+
+static void vpu_bo_print_info(struct vpu_bo *bo, struct drm_printer *p)
+{
+	unsigned long dma_refcount = 0;
+
+	if (bo->base.dma_buf && bo->base.dma_buf->file)
+		dma_refcount = atomic_long_read(&bo->base.dma_buf->file->f_count);
+
+	drm_printf(p, "%5u %6d %16llx %10lu %10u %12lu %14s\n",
+		   bo->ctx->id, bo->handle, bo->vpu_addr, bo->base.size,
+		   kref_read(&bo->base.refcount), dma_refcount, bo->ops->name);
+}
+
+void vpu_bo_list(struct drm_device *dev, struct drm_printer *p)
+{
+	struct vpu_device *vdev = to_vpu_dev(dev);
+	struct vpu_file_priv *file_priv;
+	unsigned long ctx_id;
+	struct vpu_bo *bo;
+
+	drm_printf(p, "%5s %6s %16s %10s %10s %12s %14s\n",
+		   "ctx", "handle", "vpu_addr", "size", "refcount", "dma_refcount", "type");
+
+	mutex_lock(&vdev->gctx.lock);
+	list_for_each_entry(bo, &vdev->gctx.bo_list, ctx_node)
+		vpu_bo_print_info(bo, p);
+	mutex_unlock(&vdev->gctx.lock);
+
+	xa_for_each(&vdev->context_xa, ctx_id, file_priv) {
+		if (!file_priv)
+			continue;
+
+		mutex_lock(&file_priv->ctx.lock);
+		list_for_each_entry(bo, &file_priv->ctx.bo_list, ctx_node)
+			vpu_bo_print_info(bo, p);
+		mutex_unlock(&file_priv->ctx.lock);
+	}
+}
+
+void vpu_bo_list_print(struct drm_device *dev)
+{
+	struct drm_printer p = drm_info_printer(dev->dev);
+
+	vpu_bo_list(dev, &p);
+}
diff --git a/drivers/gpu/drm/vpu/vpu_gem.h b/drivers/gpu/drm/vpu/vpu_gem.h
new file mode 100644
index 000000000000..5afe761bc0f3
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_gem.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+#ifndef __VPU_GEM_H__
+#define __VPU_GEM_H__
+
+#include <drm/drm_gem.h>
+#include <drm/drm_mm.h>
+
+struct dma_buf;
+struct vpu_bo_ops;
+struct vpu_file_priv;
+
+struct vpu_bo {
+	struct drm_gem_object base;
+	const struct vpu_bo_ops *ops;
+
+	struct vpu_mmu_context *ctx;
+	struct list_head ctx_node;
+	struct drm_mm_node mm_node;
+
+	struct mutex lock; /* Protects: pages, sgt, mmu_mapped */
+	struct sg_table *sgt;
+	struct page **pages;
+	bool mmu_mapped;
+
+	void *kvaddr;
+	u64 vpu_addr;
+	u32 handle;
+	u32 flags;
+	uintptr_t user_ptr;
+};
+
+enum vpu_bo_type {
+	VPU_BO_TYPE_SHMEM = 1,
+	VPU_BO_TYPE_USERPTR,
+	VPU_BO_TYPE_INTERNAL,
+	VPU_BO_TYPE_PRIME,
+};
+
+struct vpu_bo_ops {
+	enum vpu_bo_type type;
+	const char *name;
+	int (*alloc_pages)(struct vpu_bo *bo);
+	void (*free_pages)(struct vpu_bo *bo);
+	int (*map_pages)(struct vpu_bo *bo);
+	void (*unmap_pages)(struct vpu_bo *bo);
+};
+
+int vpu_bo_pin(struct vpu_bo *bo);
+void vpu_bo_remove_all_bos_from_context(struct vpu_mmu_context *ctx);
+void vpu_bo_list(struct drm_device *dev, struct drm_printer *p);
+void vpu_bo_list_print(struct drm_device *dev);
+
+struct vpu_bo *
+vpu_bo_alloc_internal(struct vpu_device *vdev, u64 vpu_addr, u64 size, bool cached);
+void vpu_bo_free_internal(struct vpu_bo *bo);
+int vpu_bo_vremap_internal(struct vpu_device *vdev, struct vpu_bo *bo, bool cached);
+struct drm_gem_object *vpu_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf);
+void vpu_bo_unmap_sgt_and_remove_from_context(struct vpu_bo *bo);
+
+int vpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int vpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int vpu_bo_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int vpu_bo_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+
+static inline struct vpu_bo *to_vpu_bo(struct drm_gem_object *obj)
+{
+	return container_of(obj, struct vpu_bo, base);
+}
+
+static inline struct page *vpu_bo_get_page(struct vpu_bo *bo, u64 offset)
+{
+	if (offset >= bo->base.size || !bo->pages)
+		return NULL;
+
+	return bo->pages[offset / PAGE_SIZE];
+}
+
+static inline bool vpu_bo_is_cached(struct vpu_bo *bo)
+{
+	return !(bo->flags & DRM_VPU_BO_UNCACHED);
+}
+
+static inline struct vpu_device *vpu_bo_to_vdev(struct vpu_bo *bo)
+{
+	return bo->base.dev->dev_private;
+}
+
+static inline void *vpu_to_cpu_addr(struct vpu_bo *bo, u32 vpu_addr)
+{
+	if (vpu_addr < bo->vpu_addr)
+		return NULL;
+
+	if (vpu_addr >= (bo->vpu_addr + bo->base.size))
+		return NULL;
+
+	return bo->kvaddr + (vpu_addr - bo->vpu_addr);
+}
+
+static inline u32 cpu_to_vpu_addr(struct vpu_bo *bo, void *cpu_addr)
+{
+	if (cpu_addr < bo->kvaddr)
+		return 0;
+
+	if (cpu_addr >= (bo->kvaddr + bo->base.size))
+		return 0;
+
+	return bo->vpu_addr + (cpu_addr - bo->kvaddr);
+}
+
+#endif /* __VPU_GEM_H__ */
diff --git a/include/uapi/drm/vpu_drm.h b/include/uapi/drm/vpu_drm.h
index 8e3b852d78a1..8793ed06bfa9 100644
--- a/include/uapi/drm/vpu_drm.h
+++ b/include/uapi/drm/vpu_drm.h
@@ -17,6 +17,9 @@ extern "C" {
 
 #define DRM_VPU_GET_PARAM		 0x00
 #define DRM_VPU_SET_PARAM		 0x01
+#define DRM_VPU_BO_CREATE		 0x02
+#define DRM_VPU_BO_INFO			 0x03
+#define DRM_VPU_BO_USERPTR		 0x04
 
 #define DRM_IOCTL_VPU_GET_PARAM                                                                    \
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VPU_GET_PARAM, struct drm_vpu_param)
@@ -24,6 +27,15 @@ extern "C" {
 #define DRM_IOCTL_VPU_SET_PARAM                                                                    \
 	DRM_IOW(DRM_COMMAND_BASE + DRM_VPU_SET_PARAM, struct drm_vpu_param)
 
+#define DRM_IOCTL_VPU_BO_CREATE                                                                    \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VPU_BO_CREATE, struct drm_vpu_bo_create)
+
+#define DRM_IOCTL_VPU_BO_INFO                                                                      \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VPU_BO_INFO, struct drm_vpu_bo_info)
+
+#define DRM_IOCTL_VPU_BO_USERPTR                                                                   \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VPU_BO_USERPTR, struct drm_vpu_bo_userptr)
+
 /**
  * DOC: contexts
  *
@@ -92,6 +104,108 @@ struct drm_vpu_param {
 	__u64 value;
 };
 
+#define DRM_VPU_BO_HIGH_MEM 0x00000001
+#define DRM_VPU_BO_MAPPABLE 0x00000002
+#define DRM_VPU_BO_UNCACHED 0x00010000
+
+#define DRM_VPU_BO_FLAGS \
+	(DRM_VPU_BO_HIGH_MEM | \
+	 DRM_VPU_BO_MAPPABLE | \
+	 DRM_VPU_BO_UNCACHED)
+
+/**
+ * struct drm_vpu_bo_create - Create BO backed by SHMEM
+ *
+ * Create GEM buffer object allocated in SHMEM memory.
+ */
+struct drm_vpu_bo_create {
+	/** @size: The size in bytes of the allocated memory */
+	__u64 size;
+
+	/**
+	 * @flags:
+	 *
+	 * Supported flags:
+	 *
+	 * %DRM_VPU_BO_HIGH_MEM:
+	 *
+	 * Allocate VPU address from >4GB range.
+	 * A buffer object with a VPU address >4GB can always be accessed by the
+	 * VPU DMA engine, but some HW generations may not be able to access
+	 * this memory from the firmware running on the VPU management processor.
+	 * Suitable for input, output and some scratch buffers.
+	 *
+	 * %DRM_VPU_BO_MAPPABLE:
+	 *
+	 * Buffer object can be mapped using mmap().
+	 *
+	 * %DRM_VPU_BO_UNCACHED:
+	 *
+	 * The allocated BO will not be cached on the host side and will be
+	 * snooped on the VPU side.
+	 */
+	__u32 flags;
+
+	/** @handle: Returned GEM object handle */
+	__u32 handle;
+
+	/** @vpu_addr: Returned VPU virtual address */
+	__u64 vpu_addr;
+};
+
+/**
+ * struct drm_vpu_bo_info - Query buffer object info
+ */
+struct drm_vpu_bo_info {
+	/** @handle: Handle of the queried BO */
+	__u32 handle;
+
+	/** @flags: Returned flags used to create the BO */
+	__u32 flags;
+
+	/** @vpu_addr: Returned VPU virtual address */
+	__u64 vpu_addr;
+
+	/**
+	 * @mmap_offset:
+	 *
+	 * Returned offset to be used in mmap(). 0 in case the BO is not mappable.
+	 */
+	__u64 mmap_offset;
+
+	/** @size: Returned GEM object size, aligned to PAGE_SIZE */
+	__u64 size;
+};
+
+/**
+ * struct drm_vpu_bo_userptr - Create BO from user memory
+ *
+ * Create GEM buffer object from user allocated memory. The provided @user_ptr
+ * has to be page aligned. BOs created using this ioctl are always cacheable.
+ */
+struct drm_vpu_bo_userptr {
+	/** @user_ptr: User allocated pointer aligned to PAGE_SIZE */
+	__u64 user_ptr;
+
+	/** @user_size: The size in bytes of the allocated memory */
+	__u64 user_size;
+
+	/**
+	 * @flags:
+	 *
+	 * Supported flags:
+	 *
+	 * %DRM_VPU_BO_HIGH_MEM: see &drm_vpu_bo_create->flags
+	 */
+	__u32 flags;
+
+	/** @handle: Returned GEM object handle */
+	__u32 handle;
+
+	/** @vpu_addr: Returned VPU virtual address */
+	__u64 vpu_addr;
+};
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v1 4/7] drm/vpu: Add IPC driver and JSM messages
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
                   ` (2 preceding siblings ...)
  2022-07-28 13:17 ` [PATCH v1 3/7] drm/vpu: Add GEM buffer object management Jacek Lawrynowicz
@ 2022-07-28 13:17 ` Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 5/7] drm/vpu: Implement firmware parsing and booting Jacek Lawrynowicz
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

The IPC driver is used to send and receive messages to/from firmware
running on the VPU.

The only supported IPC message format is the Job Submission Model (JSM)
defined in the vpu_jsm_api.h header.
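
A typical caller pairs a JSM request with its expected response type via
vpu_ipc_send_receive(). The snippet below mirrors the heartbeat helper added
in vpu_jsm_msg.c by this patch and is shown only to illustrate the calling
convention (vdev and the timeout value come from the driver context):

  struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_QUERY_ENGINE_HB };
  struct vpu_jsm_msg resp;
  u64 heartbeat;
  int ret;

  req.payload.query_engine_hb.engine_idx = VPU_ENGINE_COMPUTE;

  ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE,
                             &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
  if (!ret)
          heartbeat = resp.payload.query_engine_hb_done.heartbeat;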

Signed-off-by: Andrzej Kacprowski <andrzej.kacprowski@linux.intel.com>
Signed-off-by: Krystian Pradzynski <krystian.pradzynski@linux.intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/gpu/drm/vpu/Makefile      |   2 +
 drivers/gpu/drm/vpu/vpu_drv.c     |  15 +
 drivers/gpu/drm/vpu/vpu_drv.h     |   1 +
 drivers/gpu/drm/vpu/vpu_hw_mtl.c  |   4 +
 drivers/gpu/drm/vpu/vpu_ipc.c     | 474 ++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_ipc.h     |  91 +++++
 drivers/gpu/drm/vpu/vpu_jsm_api.h | 529 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_jsm_msg.c | 220 +++++++++++++
 drivers/gpu/drm/vpu/vpu_jsm_msg.h |  25 ++
 9 files changed, 1361 insertions(+)
 create mode 100644 drivers/gpu/drm/vpu/vpu_ipc.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_ipc.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_jsm_api.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_jsm_msg.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_jsm_msg.h

diff --git a/drivers/gpu/drm/vpu/Makefile b/drivers/gpu/drm/vpu/Makefile
index b2f7a6c240f7..43cede47c75f 100644
--- a/drivers/gpu/drm/vpu/Makefile
+++ b/drivers/gpu/drm/vpu/Makefile
@@ -5,6 +5,8 @@ intel_vpu-y := \
 	vpu_drv.o \
 	vpu_gem.o \
 	vpu_hw_mtl.o \
+	vpu_ipc.o \
+	vpu_jsm_msg.o \
 	vpu_mmu.o \
 	vpu_mmu_context.o
 
diff --git a/drivers/gpu/drm/vpu/vpu_drv.c b/drivers/gpu/drm/vpu/vpu_drv.c
index 0e820aeecdcc..48551c147a1c 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.c
+++ b/drivers/gpu/drm/vpu/vpu_drv.c
@@ -16,6 +16,7 @@
 #include "vpu_drv.h"
 #include "vpu_gem.h"
 #include "vpu_hw.h"
+#include "vpu_ipc.h"
 #include "vpu_mmu.h"
 #include "vpu_mmu_context.h"
 
@@ -201,6 +202,7 @@ int vpu_shutdown(struct vpu_device *vdev)
 	int ret;
 
 	vpu_hw_irq_disable(vdev);
+	vpu_ipc_disable(vdev);
 	vpu_mmu_disable(vdev);
 
 	ret = vpu_hw_power_down(vdev);
@@ -318,6 +320,10 @@ static int vpu_dev_init(struct vpu_device *vdev)
 	if (!vdev->mmu)
 		return -ENOMEM;
 
+	vdev->ipc = devm_kzalloc(vdev->drm.dev, sizeof(*vdev->ipc), GFP_KERNEL);
+	if (!vdev->ipc)
+		return -ENOMEM;
+
 	vdev->hw->ops = &vpu_hw_mtl_ops;
 	vdev->platform = VPU_PLATFORM_INVALID;
 
@@ -361,8 +367,16 @@ static int vpu_dev_init(struct vpu_device *vdev)
 		goto err_mmu_gctx_fini;
 	}
 
+	ret = vpu_ipc_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize IPC: %d\n", ret);
+		goto err_mmu_fini;
+	}
+
 	return 0;
 
+err_mmu_fini:
+	vpu_mmu_fini(vdev);
 err_mmu_gctx_fini:
 	vpu_mmu_global_context_fini(vdev);
 err_power_down:
@@ -378,6 +392,7 @@ static void vpu_dev_fini(struct vpu_device *vdev)
 {
 	vpu_shutdown(vdev);
 
+	vpu_ipc_fini(vdev);
 	vpu_mmu_fini(vdev);
 	vpu_mmu_global_context_fini(vdev);
 	vpu_irq_fini(vdev);
diff --git a/drivers/gpu/drm/vpu/vpu_drv.h b/drivers/gpu/drm/vpu/vpu_drv.h
index a9f3ad0c5f67..ddb83aeaf6d3 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.h
+++ b/drivers/gpu/drm/vpu/vpu_drv.h
@@ -87,6 +87,7 @@ struct vpu_device {
 	struct vpu_wa_table wa;
 	struct vpu_hw_info *hw;
 	struct vpu_mmu_info *mmu;
+	struct vpu_ipc_info *ipc;
 
 	struct vpu_mmu_context gctx;
 	struct xarray context_xa;
diff --git a/drivers/gpu/drm/vpu/vpu_hw_mtl.c b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
index 901ec5c40de9..b53ec7b9cc4d 100644
--- a/drivers/gpu/drm/vpu/vpu_hw_mtl.c
+++ b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
@@ -7,6 +7,7 @@
 #include "vpu_hw_mtl_reg.h"
 #include "vpu_hw_reg_io.h"
 #include "vpu_hw.h"
+#include "vpu_ipc.h"
 #include "vpu_mmu.h"
 
 #define TILE_FUSE_ENABLE_BOTH	     0x0
@@ -931,6 +932,9 @@ static irqreturn_t vpu_hw_mtl_irqv_handler(struct vpu_device *vdev, int irq)
 
 	REGV_WR32(MTL_VPU_HOST_SS_ICB_CLEAR_0, status);
 
+	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT, status))
+		ret &= vpu_ipc_irq_handler(vdev);
+
 	if (REG_TEST_FLD(MTL_VPU_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT, status))
 		ret &= vpu_mmu_irq_evtq_handler(vdev);
 
diff --git a/drivers/gpu/drm/vpu/vpu_ipc.c b/drivers/gpu/drm/vpu/vpu_ipc.c
new file mode 100644
index 000000000000..0a01e5614a5f
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_ipc.c
@@ -0,0 +1,474 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <linux/genalloc.h>
+#include <linux/highmem.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+
+#include "vpu_drv.h"
+#include "vpu_gem.h"
+#include "vpu_hw.h"
+#include "vpu_ipc.h"
+#include "vpu_jsm_msg.h"
+
+#define IPC_MAX_RX_MSG	128
+#define IS_KTHREAD()	(get_current()->flags & PF_KTHREAD)
+
+struct vpu_ipc_tx_buf {
+	struct vpu_ipc_hdr ipc;
+	struct vpu_jsm_msg jsm;
+} __packed;
+
+struct vpu_ipc_rx_msg {
+	struct list_head link;
+	struct vpu_ipc_hdr *ipc_hdr;
+	struct vpu_jsm_msg *jsm_msg;
+};
+
+static void vpu_ipc_msg_dump(struct vpu_device *vdev, char *c,
+			     struct vpu_ipc_hdr *ipc_hdr, u32 vpu_addr)
+{
+	vpu_dbg(IPC,
+		"%s: vpu:0x%x (data_addr:0x%08x, data_size:0x%x, channel:0x%x, src_node:0x%x, dst_node:0x%x, status:0x%x)",
+		c, vpu_addr, ipc_hdr->data_addr, ipc_hdr->data_size,
+		ipc_hdr->channel, ipc_hdr->src_node, ipc_hdr->dst_node,
+		ipc_hdr->status);
+}
+
+static void vpu_jsm_msg_dump(struct vpu_device *vdev, char *c,
+			     struct vpu_jsm_msg *jsm_msg, u32 vpu_addr)
+{
+	u32 *payload = (u32 *)&jsm_msg->payload;
+
+	vpu_dbg(JSM,
+		"%s: vpu:0x%08x (type:%s, status:0x%x, id: 0x%x, result: 0x%x, payload:0x%x 0x%x 0x%x 0x%x 0x%x)\n",
+		c, vpu_addr, vpu_jsm_msg_type_to_str(jsm_msg->type),
+		jsm_msg->status, jsm_msg->request_id, jsm_msg->result,
+		payload[0], payload[1], payload[2], payload[3], payload[4]);
+}
+
+static void
+vpu_ipc_rx_mark_free(struct vpu_device *vdev, struct vpu_ipc_hdr *ipc_hdr,
+		     struct vpu_jsm_msg *jsm_msg)
+{
+	ipc_hdr->status = VPU_IPC_HDR_FREE;
+	if (jsm_msg)
+		jsm_msg->status = VPU_JSM_MSG_FREE;
+	wmb(); /* Flush msg status to VPU */
+}
+
+static void vpu_ipc_mem_fini(struct vpu_device *vdev)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+
+	vpu_bo_free_internal(ipc->mem_rx);
+	vpu_bo_free_internal(ipc->mem_tx);
+}
+
+static int
+vpu_ipc_tx_prepare(struct vpu_device *vdev, struct vpu_ipc_consumer *cons,
+		   struct vpu_jsm_msg *req)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	struct vpu_ipc_tx_buf *tx_buf;
+	u32 tx_buf_vpu_addr;
+	u32 jsm_vpu_addr;
+
+	tx_buf_vpu_addr = gen_pool_alloc(ipc->mm_tx, sizeof(*tx_buf));
+	if (!tx_buf_vpu_addr) {
+		vpu_err(vdev, "Failed to reserve IPC buffer, size %zu\n",
+			sizeof(*tx_buf));
+		return -ENOMEM;
+	}
+
+	tx_buf = vpu_to_cpu_addr(ipc->mem_tx, tx_buf_vpu_addr);
+	if (WARN_ON(!tx_buf)) {
+		gen_pool_free(ipc->mm_tx, tx_buf_vpu_addr, sizeof(*tx_buf));
+		return -EIO;
+	}
+
+	jsm_vpu_addr = tx_buf_vpu_addr + offsetof(struct vpu_ipc_tx_buf, jsm);
+
+	if (tx_buf->ipc.status != VPU_IPC_HDR_FREE)
+		vpu_warn(vdev, "IPC message vpu:0x%x not released by firmware\n",
+			 tx_buf_vpu_addr);
+
+	if (tx_buf->jsm.status != VPU_JSM_MSG_FREE)
+		vpu_warn(vdev, "JSM message vpu:0x%x not released by firmware\n",
+			 jsm_vpu_addr);
+
+	memset(tx_buf, 0, sizeof(*tx_buf));
+	tx_buf->ipc.data_addr = jsm_vpu_addr;
+	/* TODO: Set data_size to actual JSM message size, not union of all messages */
+	tx_buf->ipc.data_size = sizeof(*req);
+	tx_buf->ipc.channel = cons->channel;
+	tx_buf->ipc.src_node = 0;
+	tx_buf->ipc.dst_node = 1;
+	tx_buf->ipc.status = VPU_IPC_HDR_ALLOCATED;
+	tx_buf->jsm.type = req->type;
+	tx_buf->jsm.status = VPU_JSM_MSG_ALLOCATED;
+	tx_buf->jsm.payload = req->payload;
+
+	req->request_id = atomic_inc_return(&ipc->request_id);
+	tx_buf->jsm.request_id = req->request_id;
+	cons->request_id = req->request_id;
+	wmb(); /* Flush IPC, JSM msgs */
+
+	cons->tx_vpu_addr = tx_buf_vpu_addr;
+
+	vpu_jsm_msg_dump(vdev, "TX", &tx_buf->jsm, jsm_vpu_addr);
+	vpu_ipc_msg_dump(vdev, "TX", &tx_buf->ipc, tx_buf_vpu_addr);
+
+	return 0;
+}
+
+static void vpu_ipc_tx_release(struct vpu_device *vdev, u32 vpu_addr)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+
+	if (vpu_addr)
+		gen_pool_free(ipc->mm_tx, vpu_addr, sizeof(struct vpu_ipc_tx_buf));
+}
+
+static void vpu_ipc_tx(struct vpu_device *vdev, u32 vpu_addr)
+{
+	vpu_hw_reg_ipc_tx_set(vdev, vpu_addr);
+}
+
+void vpu_ipc_consumer_add(struct vpu_device *vdev,
+			  struct vpu_ipc_consumer *cons, u32 channel)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+
+	init_waitqueue_head(&cons->rx_msg_wq);
+	INIT_LIST_HEAD(&cons->link);
+	cons->channel = channel;
+	cons->tx_vpu_addr = 0;
+	cons->request_id = 0;
+
+	INIT_LIST_HEAD(&cons->rx_msg_list);
+
+	spin_lock_irq(&ipc->cons_list_lock);
+	list_add_tail(&cons->link, &ipc->cons_list);
+	spin_unlock_irq(&ipc->cons_list_lock);
+}
+
+void vpu_ipc_consumer_del(struct vpu_device *vdev, struct vpu_ipc_consumer *cons)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	struct vpu_ipc_rx_msg *rx_msg, *r;
+
+	spin_lock_irq(&ipc->cons_list_lock);
+
+	list_del(&cons->link);
+	list_for_each_entry_safe(rx_msg, r, &cons->rx_msg_list, link) {
+		list_del(&rx_msg->link);
+		vpu_ipc_rx_mark_free(vdev, rx_msg->ipc_hdr, rx_msg->jsm_msg);
+		atomic_dec(&ipc->rx_msg_count);
+		kfree(rx_msg);
+	}
+
+	spin_unlock_irq(&ipc->cons_list_lock);
+
+	vpu_ipc_tx_release(vdev, cons->tx_vpu_addr);
+}
+
+static int vpu_ipc_send(struct vpu_device *vdev, struct vpu_ipc_consumer *cons,
+			struct vpu_jsm_msg *req)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	int ret;
+
+	if (mutex_lock_interruptible(&ipc->lock))
+		return -EINTR;
+
+	if (!ipc->on) {
+		ret = -EAGAIN;
+		goto unlock;
+	}
+
+	ret = vpu_ipc_tx_prepare(vdev, cons, req);
+	if (ret)
+		goto unlock;
+
+	vpu_ipc_tx(vdev, cons->tx_vpu_addr);
+
+unlock:
+	mutex_unlock(&ipc->lock);
+	return ret;
+}
+
+int vpu_ipc_receive(struct vpu_device *vdev, struct vpu_ipc_consumer *cons,
+		    struct vpu_ipc_hdr *ipc_buf,
+		    struct vpu_jsm_msg *ipc_payload, unsigned long timeout_ms)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	struct vpu_ipc_rx_msg *rx_msg;
+	int wait_ret, ret = 0;
+
+	wait_ret = wait_event_interruptible_timeout(cons->rx_msg_wq,
+						    (IS_KTHREAD() && kthread_should_stop()) ||
+						    !list_empty(&cons->rx_msg_list),
+						    msecs_to_jiffies(timeout_ms));
+
+	if (IS_KTHREAD() && kthread_should_stop())
+		return -EINTR;
+
+	if (wait_ret == 0)
+		return -ETIMEDOUT;
+
+	if (wait_ret < 0)
+		return -ERESTARTSYS;
+
+	spin_lock_irq(&ipc->rx_msg_list_lock);
+	rx_msg = list_first_entry_or_null(&cons->rx_msg_list,
+					  struct vpu_ipc_rx_msg, link);
+	if (!rx_msg) {
+		spin_unlock_irq(&ipc->rx_msg_list_lock);
+		return -EAGAIN;
+	}
+	list_del(&rx_msg->link);
+	spin_unlock_irq(&ipc->rx_msg_list_lock);
+
+	if (ipc_buf)
+		memcpy(ipc_buf, rx_msg->ipc_hdr, sizeof(*ipc_buf));
+	if (rx_msg->jsm_msg) {
+		u32 size = min_t(int, rx_msg->ipc_hdr->data_size, sizeof(*ipc_payload));
+
+		if (rx_msg->jsm_msg->result != VPU_JSM_STATUS_SUCCESS) {
+			vpu_dbg(IPC, "IPC resp result error: %d\n", rx_msg->jsm_msg->result);
+			ret = -EBADMSG;
+		}
+
+		if (ipc_payload)
+			memcpy(ipc_payload, rx_msg->jsm_msg, size);
+	}
+
+	vpu_ipc_rx_mark_free(vdev, rx_msg->ipc_hdr, rx_msg->jsm_msg);
+	atomic_dec(&ipc->rx_msg_count);
+	kfree(rx_msg);
+
+	return ret;
+}
+
+int vpu_ipc_send_receive(struct vpu_device *vdev, struct vpu_jsm_msg *req,
+			 enum vpu_ipc_msg_type expected_resp_type,
+			 struct vpu_jsm_msg *resp, u32 channel,
+			 unsigned long timeout_ms)
+{
+	struct vpu_ipc_consumer cons;
+	int ret;
+
+	vpu_ipc_consumer_add(vdev, &cons, channel);
+
+	ret = vpu_ipc_send(vdev, &cons, req);
+	if (ret) {
+		vpu_warn(vdev, "IPC send failed: %d\n", ret);
+		goto consumer_del;
+	}
+
+	ret = vpu_ipc_receive(vdev, &cons, NULL, resp, timeout_ms);
+	if (ret) {
+		vpu_warn(vdev, "IPC receive failed: %d\n", ret);
+		goto consumer_del;
+	}
+
+	if (resp->type != expected_resp_type) {
+		vpu_warn(vdev, "Invalid JSM response type: %d\n", resp->type);
+		ret = -EBADE;
+	}
+
+consumer_del:
+	vpu_ipc_consumer_del(vdev, &cons);
+
+	return ret;
+}
+
+static bool
+vpu_ipc_match_consumer(struct vpu_device *vdev, struct vpu_ipc_consumer *cons,
+		       struct vpu_ipc_hdr *ipc_hdr, struct vpu_jsm_msg *jsm_msg)
+{
+	if (cons->channel != ipc_hdr->channel)
+		return false;
+
+	if (!jsm_msg || jsm_msg->request_id == cons->request_id)
+		return true;
+
+	return false;
+}
+
+static bool
+vpu_ipc_dispatch(struct vpu_device *vdev, struct vpu_ipc_consumer *cons,
+		 struct vpu_ipc_hdr *ipc_hdr, struct vpu_jsm_msg *jsm_msg)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	struct vpu_ipc_rx_msg *rx_msg;
+	unsigned long f;
+
+	if (!vpu_ipc_match_consumer(vdev, cons, ipc_hdr, jsm_msg))
+		return false;
+
+	if (atomic_read(&ipc->rx_msg_count) > IPC_MAX_RX_MSG) {
+		vpu_warn(vdev, "IPC RX message dropped, msg count %d\n",
+			 IPC_MAX_RX_MSG);
+		vpu_ipc_rx_mark_free(vdev, ipc_hdr, jsm_msg);
+		return true;
+	}
+
+	rx_msg = kzalloc(sizeof(*rx_msg), GFP_ATOMIC);
+	if (!rx_msg) {
+		vpu_ipc_rx_mark_free(vdev, ipc_hdr, jsm_msg);
+		return true;
+	}
+
+	atomic_inc(&ipc->rx_msg_count);
+
+	rx_msg->ipc_hdr = ipc_hdr;
+	rx_msg->jsm_msg = jsm_msg;
+
+	spin_lock_irqsave(&ipc->rx_msg_list_lock, f);
+	list_add_tail(&rx_msg->link, &cons->rx_msg_list);
+	spin_unlock_irqrestore(&ipc->rx_msg_list_lock, f);
+
+	wake_up_all(&cons->rx_msg_wq);
+
+	return true;
+}
+
+irqreturn_t vpu_ipc_irq_handler(struct vpu_device *vdev)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	struct vpu_ipc_consumer *cons;
+	struct vpu_ipc_hdr *ipc_hdr;
+	struct vpu_jsm_msg *jsm_msg;
+	unsigned long f;
+
+	/*
+	 * The driver needs to purge all messages from the IPC FIFO to clear
+	 * the IPC interrupt. If the FIFO is not drained to 0, no further IPC
+	 * interrupts will be generated.
+	 */
+	while (vpu_hw_reg_ipc_rx_count_get(vdev)) {
+		u32 vpu_addr = vpu_hw_reg_ipc_rx_addr_get(vdev);
+		bool dispatched = false;
+
+		ipc_hdr = vpu_to_cpu_addr(ipc->mem_rx, vpu_addr);
+		if (!ipc_hdr) {
+			vpu_warn(vdev, "IPC message 0x%x out of range\n", vpu_addr);
+			continue;
+		}
+		vpu_ipc_msg_dump(vdev, "RX", ipc_hdr, vpu_addr);
+
+		jsm_msg = NULL;
+		if (ipc_hdr->channel != VPU_IPC_CHAN_BOOT_MSG) {
+			jsm_msg = vpu_to_cpu_addr(ipc->mem_rx, ipc_hdr->data_addr);
+
+			if (!jsm_msg) {
+				vpu_warn(vdev, "JSM message 0x%x out of range\n",
+					 ipc_hdr->data_addr);
+				vpu_ipc_rx_mark_free(vdev, ipc_hdr, NULL);
+				continue;
+			}
+
+			vpu_jsm_msg_dump(vdev, "RX", jsm_msg, ipc_hdr->data_addr);
+		}
+
+		spin_lock_irqsave(&ipc->cons_list_lock, f);
+		list_for_each_entry(cons, &ipc->cons_list, link) {
+			if (vpu_ipc_dispatch(vdev, cons, ipc_hdr, jsm_msg)) {
+				dispatched = true;
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&ipc->cons_list_lock, f);
+
+		if (!dispatched) {
+			vpu_dbg(IPC, "IPC RX message 0x%x dropped (no consumer)\n",
+				vpu_addr);
+			vpu_ipc_rx_mark_free(vdev, ipc_hdr, jsm_msg);
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+int vpu_ipc_init(struct vpu_device *vdev)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	struct vpu_bo *mem_tx, *mem_rx;
+	int ret = -ENOMEM;
+
+	mem_tx = vpu_bo_alloc_internal(vdev, 0, SZ_16K, true);
+	if (!mem_tx)
+		return ret;
+
+	mem_rx = vpu_bo_alloc_internal(vdev, 0, SZ_16K, true);
+	if (!mem_rx)
+		goto err_free_tx;
+
+	ipc->mm_tx = devm_gen_pool_create(vdev->drm.dev, __ffs(VPU_IPC_ALIGNMENT),
+					  -1, "TX_IPC_JSM");
+	if (IS_ERR(ipc->mm_tx)) {
+		ret = PTR_ERR(ipc->mm_tx);
+		vpu_err(vdev, "Failed to create gen pool, %pe\n", ipc->mm_tx);
+		goto err_free_rx;
+	}
+
+	ret = gen_pool_add(ipc->mm_tx, mem_tx->vpu_addr, mem_tx->base.size, -1);
+	if (ret) {
+		vpu_err(vdev, "gen_pool_add failed, ret %d\n", ret);
+		goto err_free_rx;
+	}
+
+	ipc->mem_rx = mem_rx;
+	ipc->mem_tx = mem_tx;
+
+	memset(ipc->mem_tx->kvaddr, 0, ipc->mem_tx->base.size);
+	memset(ipc->mem_rx->kvaddr, 0, ipc->mem_rx->base.size);
+
+	spin_lock_init(&ipc->rx_msg_list_lock);
+	spin_lock_init(&ipc->cons_list_lock);
+	INIT_LIST_HEAD(&ipc->cons_list);
+
+	mutex_init(&ipc->lock);
+
+	return 0;
+
+err_free_rx:
+	vpu_bo_free_internal(mem_rx);
+err_free_tx:
+	vpu_bo_free_internal(mem_tx);
+	return ret;
+}
+
+void vpu_ipc_fini(struct vpu_device *vdev)
+{
+	vpu_ipc_mem_fini(vdev);
+}
+
+void vpu_ipc_enable(struct vpu_device *vdev)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+
+	mutex_lock(&ipc->lock);
+	ipc->on = true;
+	mutex_unlock(&ipc->lock);
+}
+
+void vpu_ipc_disable(struct vpu_device *vdev)
+{
+	struct vpu_ipc_info *ipc = vdev->ipc;
+	struct vpu_ipc_consumer *cons, *c;
+	unsigned long f;
+
+	mutex_lock(&ipc->lock);
+	ipc->on = false;
+	mutex_unlock(&ipc->lock);
+
+	spin_lock_irqsave(&ipc->cons_list_lock, f);
+	list_for_each_entry_safe(cons, c, &ipc->cons_list, link)
+		wake_up_all(&cons->rx_msg_wq);
+
+	spin_unlock_irqrestore(&ipc->cons_list_lock, f);
+}
diff --git a/drivers/gpu/drm/vpu/vpu_ipc.h b/drivers/gpu/drm/vpu/vpu_ipc.h
new file mode 100644
index 000000000000..1245bfbef4ac
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_ipc.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_IPC_H__
+#define __VPU_IPC_H__
+
+#include <linux/interrupt.h>
+
+#include "vpu_jsm_api.h"
+
+struct vpu_bo;
+
+/* VPU FW boot notification */
+#define VPU_IPC_CHAN_BOOT_MSG	   0x3ff
+#define VPU_IPC_BOOT_MSG_DATA_ADDR 0x424f4f54
+
+/* The alignment to be used for IPC Buffers and IPC Data. */
+#define VPU_IPC_ALIGNMENT	   64
+
+#define VPU_IPC_HDR_FREE	   0
+#define VPU_IPC_HDR_ALLOCATED	   1
+
+/**
+ * struct vpu_ipc_hdr - The IPC message header structure, exchanged
+ * with the VPU device firmware.
+ * @data_addr: The VPU address of the payload (JSM message)
+ * @data_size: The size of the payload.
+ * @channel: The channel used.
+ * @src_node: The Node ID of the sender.
+ * @dst_node: The Node ID of the intended receiver.
+ * @status: IPC buffer usage status
+ */
+struct vpu_ipc_hdr {
+	u32 data_addr;
+	u32 data_size;
+	u16 channel;
+	u8 src_node;
+	u8 dst_node;
+	u8 status;
+} __packed __aligned(VPU_IPC_ALIGNMENT);
+
+struct vpu_ipc_consumer {
+	struct list_head link;
+	u32 channel;
+	u32 tx_vpu_addr;
+	u32 request_id;
+
+	struct list_head rx_msg_list;
+	wait_queue_head_t rx_msg_wq;
+};
+
+struct vpu_ipc_info {
+	struct gen_pool *mm_tx;
+	struct vpu_bo *mem_tx;
+	struct vpu_bo *mem_rx;
+
+	spinlock_t rx_msg_list_lock; /* Lock for consumer rx_msg list */
+	atomic_t rx_msg_count;
+
+	spinlock_t cons_list_lock; /* Lock for consumers list */
+	struct list_head cons_list;
+
+	atomic_t request_id;
+	struct mutex lock; /* Lock on status */
+	bool on;
+};
+
+int vpu_ipc_init(struct vpu_device *vdev);
+void vpu_ipc_fini(struct vpu_device *vdev);
+
+void vpu_ipc_enable(struct vpu_device *vdev);
+void vpu_ipc_disable(struct vpu_device *vdev);
+
+irqreturn_t vpu_ipc_irq_handler(struct vpu_device *vdev);
+
+void vpu_ipc_consumer_add(struct vpu_device *vdev, struct vpu_ipc_consumer *cons,
+			  u32 channel);
+void vpu_ipc_consumer_del(struct vpu_device *vdev, struct vpu_ipc_consumer *cons);
+
+int vpu_ipc_receive(struct vpu_device *vdev, struct vpu_ipc_consumer *cons,
+		    struct vpu_ipc_hdr *ipc_buf, struct vpu_jsm_msg *ipc_payload,
+		    unsigned long timeout_ms);
+
+int vpu_ipc_send_receive(struct vpu_device *vdev, struct vpu_jsm_msg *req,
+			 enum vpu_ipc_msg_type expected_resp_type,
+			 struct vpu_jsm_msg *resp, u32 channel,
+			 unsigned long timeout_ms);
+
+#endif /* __VPU_IPC_H__ */
diff --git a/drivers/gpu/drm/vpu/vpu_jsm_api.h b/drivers/gpu/drm/vpu/vpu_jsm_api.h
new file mode 100644
index 000000000000..6ca0c45479c0
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_jsm_api.h
@@ -0,0 +1,529 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+/**
+ * @file
+ * @brief JSM shared definitions
+ *
+ * @ingroup Jsm
+ * @brief JSM shared definitions
+ * @{
+ */
+#ifndef VPU_JSM_API_H
+#define VPU_JSM_API_H
+
+/*
+ * Major version changes that break backward compatibility
+ */
+#define VPU_JSM_API_VER_MAJOR 2
+
+/*
+ * Minor version changes when API backward compatibility is preserved.
+ */
+#define VPU_JSM_API_VER_MINOR 5
+
+/*
+ * API header changed (field names, documentation, formatting) but API itself has not been changed
+ */
+#define VPU_JSM_API_VER_PATCH 2
+
+/*
+ * Index in the API version table
+ */
+#define VPU_JSM_API_VER_INDEX 4
+
+/** Pack the API structures for now, once alignment issues are fixed this can be removed */
+#pragma pack(push, 1)
+
+/*
+ * Engine indexes.
+ */
+#define VPU_ENGINE_COMPUTE 0
+#define VPU_ENGINE_COPY	   1
+#define VPU_ENGINE_NB	   2
+
+/*
+ * VPU status values.
+ */
+#define VPU_JSM_STATUS_SUCCESS				 0x0U
+#define VPU_JSM_STATUS_PARSING_ERR			 0x1U
+#define VPU_JSM_STATUS_PROCESSING_ERR			 0x2U
+#define VPU_JSM_STATUS_PREEMPTED			 0x3U
+#define VPU_JSM_STATUS_ABORTED				 0x4U
+#define VPU_JSM_STATUS_USER_CTX_VIOL_ERR		 0x5U
+#define VPU_JSM_STATUS_GLOBAL_CTX_VIOL_ERR		 0x6U
+#define VPU_JSM_STATUS_MVNCI_WRONG_INPUT_FORMAT		 0x7U
+#define VPU_JSM_STATUS_MVNCI_UNSUPPORTED_NETWORK_ELEMENT 0x8U
+#define VPU_JSM_STATUS_MVNCI_INVALID_HANDLE		 0x9U
+#define VPU_JSM_STATUS_MVNCI_OUT_OF_RESOURCES		 0xAU
+#define VPU_JSM_STATUS_MVNCI_NOT_IMPLEMENTED		 0xBU
+#define VPU_JSM_STATUS_MVNCI_INTERNAL_ERROR		 0xCU
+
+/*
+ * Host <-> VPU IPC channels.
+ * ASYNC commands use a high-priority channel; other messages use low-priority ones.
+ */
+#define VPU_IPC_CHAN_ASYNC_CMD 0
+#define VPU_IPC_CHAN_GEN_CMD   10
+#define VPU_IPC_CHAN_JOB_RET   11
+
+/*
+ * Job flags bit masks.
+ */
+#define VPU_JOB_FLAGS_NULL_SUBMISSION_MASK 0x00000001
+
+/*
+ * Sizes of the reserved areas in jobs, in bytes.
+ */
+#define VPU_JOB_RESERVED_BYTES	     16
+/*
+ * Sizes of the reserved areas in job queues, in bytes.
+ */
+#define VPU_JOB_QUEUE_RESERVED_BYTES 52
+
+/*
+ * Max length (including trailing NULL char) of trace entity name (e.g., the
+ * name of a logging destination or a loggable HW component).
+ */
+#define VPU_TRACE_ENTITY_NAME_MAX_LEN 32
+
+/*
+ * Max length (including trailing NULL char) of a dyndbg command.
+ *
+ * NOTE: 112 is used so that the size of 'struct vpu_jsm_msg' in the JSM API is
+ * 128 bytes (multiple of 64 bytes, the cache line size).
+ */
+#define VPU_DYNDBG_CMD_MAX_LEN 112
+
+/*
+ * Job format.
+ */
+struct vpu_job_queue_entry {
+	u64 batch_buf_addr; /**< Address of VPU commands batch buffer */
+	u32 job_id;	  /**< Job ID */
+	u32 flags; /**< Flags bit field, see VPU_JOB_FLAGS_* above */
+	u64 root_page_table_addr; /**< Address of root page table to use for this job */
+	u64 root_page_table_update_counter; /**< Page tables update events counter */
+	u64 preemption_buffer_address; /**< Address of the preemption buffer to use for this job */
+	u64 preemption_buffer_size; /**< Size of the preemption buffer to use for this job */
+	u8 reserved[VPU_JOB_RESERVED_BYTES];
+};
+
+/*
+ * Job queue control registers.
+ */
+struct vpu_job_queue_header {
+	u32 engine_idx;
+	u32 head;
+	u32 tail;
+	u8 reserved[VPU_JOB_QUEUE_RESERVED_BYTES];
+};
+
+/*
+ * Job queue format.
+ */
+struct vpu_job_queue {
+	struct vpu_job_queue_header header;
+	struct vpu_job_queue_entry job[];
+};
+
+/**
+ * Logging entity types.
+ *
+ * This enum defines the different types of entities involved in logging.
+ */
+enum vpu_trace_entity_type {
+	/** Logging destination (entity where logs can be stored / printed). */
+	VPU_TRACE_ENTITY_TYPE_DESTINATION = 1,
+	/** Loggable HW component (HW entity that can be logged). */
+	VPU_TRACE_ENTITY_TYPE_HW_COMPONENT = 2,
+};
+
+/*
+ * Host <-> VPU IPC messages types.
+ */
+enum vpu_ipc_msg_type {
+	VPU_JSM_MSG_UNKNOWN = 0xFFFFFFFF,
+	/* IPC Host -> Device, Async commands */
+	VPU_JSM_MSG_ASYNC_CMD = 0x1100,
+	VPU_JSM_MSG_ENGINE_RESET = VPU_JSM_MSG_ASYNC_CMD,
+	VPU_JSM_MSG_ENGINE_PREEMPT = 0x1101,
+	VPU_JSM_MSG_REGISTER_DB = 0x1102,
+	VPU_JSM_MSG_UNREGISTER_DB = 0x1103,
+	VPU_JSM_MSG_QUERY_ENGINE_HB = 0x1104,
+	VPU_JSM_MSG_GET_POWER_LEVEL_COUNT = 0x1105,
+	VPU_JSM_MSG_GET_POWER_LEVEL = 0x1106,
+	VPU_JSM_MSG_SET_POWER_LEVEL = 0x1107,
+	VPU_JSM_MSG_METRIC_STREAMER_OPEN = 0x1108,
+	VPU_JSM_MSG_METRIC_STREAMER_CLOSE = 0x1109,
+	/** Configure logging (used to modify configuration passed in boot params). */
+	VPU_JSM_MSG_TRACE_SET_CONFIG = 0x110a,
+	/** Return current logging configuration. */
+	VPU_JSM_MSG_TRACE_GET_CONFIG = 0x110b,
+	/**
+	 * Get masks of destinations and HW components supported by the firmware
+	 * (may vary between HW generations and FW compile-time configurations).
+	 */
+	VPU_JSM_MSG_TRACE_GET_CAPABILITY = 0x110c,
+	/** Get the name of a destination or HW component. */
+	VPU_JSM_MSG_TRACE_GET_NAME = 0x110d,
+	/**
+	 * Clean up user context. All jobs that belong to the current context are
+	 * aborted and removed from internal scheduling queues. All doorbells assigned
+	 * to the context are unregistered and any internal FW resources belonging to
+	 * the context are released.
+	 *
+	 * Note: VPU_JSM_MSG_CONTEXT_DELETE is currently added as a placeholder and is
+	 * not yet functional. Implementation of this command to be added by EISW-40925.
+	 */
+	VPU_JSM_MSG_CONTEXT_DELETE = 0x110e,
+	/* IPC Host -> Device, General commands */
+	VPU_JSM_MSG_GENERAL_CMD = 0x1200,
+	VPU_JSM_MSG_BLOB_DEINIT = VPU_JSM_MSG_GENERAL_CMD,
+	/**
+	 * Control dyndbg behavior by executing a dyndbg command; equivalent to
+	 * Linux command: `echo '<dyndbg_cmd>' > <debugfs>/dynamic_debug/control`.
+	 */
+	VPU_JSM_MSG_DYNDBG_CONTROL = 0x1201,
+	/* IPC Device -> Host, Job completion */
+	VPU_JSM_MSG_JOB_DONE = 0x2100,
+	/* IPC Device -> Host, Async command completion */
+	VPU_JSM_MSG_ASYNC_CMD_DONE = 0x2200,
+	VPU_JSM_MSG_ENGINE_RESET_DONE = VPU_JSM_MSG_ASYNC_CMD_DONE,
+	VPU_JSM_MSG_ENGINE_PREEMPT_DONE = 0x2201,
+	VPU_JSM_MSG_REGISTER_DB_DONE = 0x2202,
+	VPU_JSM_MSG_UNREGISTER_DB_DONE = 0x2203,
+	VPU_JSM_MSG_QUERY_ENGINE_HB_DONE = 0x2204,
+	VPU_JSM_MSG_GET_POWER_LEVEL_COUNT_DONE = 0x2205,
+	VPU_JSM_MSG_GET_POWER_LEVEL_DONE = 0x2206,
+	VPU_JSM_MSG_SET_POWER_LEVEL_DONE = 0x2207,
+	VPU_JSM_MSG_METRIC_STREAMER_OPEN_DONE = 0x2208,
+	VPU_JSM_MSG_METRIC_STREAMER_CLOSE_DONE = 0x2209,
+	/** Response to VPU_JSM_MSG_TRACE_SET_CONFIG. */
+	VPU_JSM_MSG_TRACE_SET_CONFIG_RSP = 0x220a,
+	/** Response to VPU_JSM_MSG_TRACE_GET_CONFIG. */
+	VPU_JSM_MSG_TRACE_GET_CONFIG_RSP = 0x220b,
+	/** Response to VPU_JSM_MSG_TRACE_GET_CAPABILITY. */
+	VPU_JSM_MSG_TRACE_GET_CAPABILITY_RSP = 0x220c,
+	/** Response to VPU_JSM_MSG_TRACE_GET_NAME. */
+	VPU_JSM_MSG_TRACE_GET_NAME_RSP = 0x220d,
+	/** Response to VPU_JSM_MSG_CONTEXT_DELETE. */
+	VPU_JSM_MSG_CONTEXT_DELETE_DONE = 0x220e,
+	/* IPC Device -> Host, General command completion */
+	VPU_JSM_MSG_GENERAL_CMD_DONE = 0x2300,
+	VPU_JSM_MSG_BLOB_DEINIT_DONE = VPU_JSM_MSG_GENERAL_CMD_DONE,
+	/** Response to VPU_JSM_MSG_DYNDBG_CONTROL. */
+	VPU_JSM_MSG_DYNDBG_CONTROL_RSP = 0x2301,
+};
+
+enum vpu_ipc_msg_status { VPU_JSM_MSG_FREE, VPU_JSM_MSG_ALLOCATED };
+
+/*
+ * Host <-> LRT IPC message payload definitions
+ */
+struct vpu_ipc_msg_payload_engine_reset {
+	/* Engine to be reset. */
+	u32 engine_idx;
+};
+
+struct vpu_ipc_msg_payload_engine_preempt {
+	/* Engine to be preempted. */
+	u32 engine_idx;
+	/* ID of the preemption request. */
+	u32 preempt_id;
+};
+
+struct vpu_ipc_msg_payload_register_db {
+	/* Index of the doorbell to register. */
+	u32 db_idx;
+	/* Virtual address in Global GTT pointing to the start of job queue. */
+	u64 jobq_base;
+	/* Size of the job queue in bytes. */
+	u32 jobq_size;
+	/* Host sub-stream ID for the context assigned to the doorbell. */
+	u32 host_ssid;
+};
+
+struct vpu_ipc_msg_payload_unregister_db {
+	/* Index of the doorbell to unregister. */
+	u32 db_idx;
+};
+
+struct vpu_ipc_msg_payload_query_engine_hb {
+	/* Engine to return heartbeat value. */
+	u32 engine_idx;
+};
+
+struct vpu_ipc_msg_payload_power_level {
+	/**
+	 * Requested power level. The power level value is in the
+	 * range [0, power_level_count-1] where power_level_count
+	 * is the number of available power levels as returned by
+	 * the get power level count command. A power level of 0
+	 * corresponds to the maximum possible power level, while
+	 * power_level_count-1 corresponds to the minimum possible
+	 * power level. Values outside of this range are not
+	 * considered to be valid.
+	 */
+	u32 power_level;
+};
+
+struct vpu_ipc_msg_payload_context_delete {
+	/* Host sub-stream ID for the context to be finalised. */
+	u32 host_ssid;
+};
+
+struct vpu_ipc_msg_payload_blob_deinit {
+	/* 64-bit unique ID for the blob to be de-initialized. */
+	u64 blob_id;
+};
+
+/**
+ * @brief Metric streamer open command structure
+ * @see VPU_JSM_MSG_METRIC_STREAMER_OPEN
+ */
+struct vpu_ipc_msg_payload_metric_streamer_open {
+	/* Bit mask to select the desired metric group */
+	/* @see bit indexes in vpu_metric_group_bit enum */
+	u64 metric_group_mask;
+	/* Sampling rate in milliseconds */
+	u32 rate;
+	/* Number of samples to collect */
+	u32 sample_count;
+	/**
+	 * 64bit address to an array of 32bit sizes of the buffers where
+	 * the counters will be stored.
+	 * The array is indexed by metric group bit indexes.
+	 */
+	u64 metric_size_array_address;
+	/**
+	 * 64bit address to an array of addresses of the buffers where
+	 * the counter data will be stored.
+	 * The array is indexed by metric group bit indexes.
+	 */
+	u64 metric_data_array_address;
+};
+
+static_assert(sizeof(struct vpu_ipc_msg_payload_metric_streamer_open) % 8 == 0,
+	      "vpu_ipc_msg_payload_metric_streamer_open is misaligned");
+
+struct vpu_ipc_msg_payload_job_done {
+	/* Engine to which the job was submitted. */
+	u32 engine_idx;
+	/* Index of the doorbell to which the job was submitted */
+	u32 db_idx;
+	/* ID of the completed job */
+	u32 job_id;
+	/* Status of the completed job */
+	u32 job_status;
+};
+
+struct vpu_ipc_msg_payload_engine_reset_done {
+	/* Engine reset. */
+	u32 engine_idx;
+};
+
+struct vpu_ipc_msg_payload_engine_preempt_done {
+	/* Engine preempted. */
+	u32 engine_idx;
+	/* ID of the preemption request. */
+	u32 preempt_id;
+};
+
+struct vpu_ipc_msg_payload_register_db_done {
+	/* Index of the registered doorbell. */
+	u32 db_idx;
+};
+
+struct vpu_ipc_msg_payload_unregister_db_done {
+	/* Index of the unregistered doorbell. */
+	u32 db_idx;
+};
+
+struct vpu_ipc_msg_payload_query_engine_hb_done {
+	/* Engine returning heartbeat value. */
+	u32 engine_idx;
+	/* Heartbeat value. */
+	u64 heartbeat;
+};
+
+struct vpu_ipc_msg_payload_get_power_level_count_done {
+	/**
+	 * Number of supported power levels. The maximum possible
+	 * value of power_level_count is 16 but this may vary across
+	 * implementations.
+	 */
+	u32 power_level_count;
+	/**
+	 * Power consumption limit for each supported power level in
+	 * [0-100%] range relative to power level 0.
+	 */
+	u8 power_limit[16];
+};
+
+struct vpu_ipc_msg_payload_blob_deinit_done {
+	/* 64-bit unique ID for the blob de-initialized. */
+	u64 blob_id;
+};
+
+/**
+ * Payload for VPU_JSM_MSG_TRACE_SET_CONFIG[_RSP] and
+ * VPU_JSM_MSG_TRACE_GET_CONFIG_RSP messages.
+ *
+ * The payload is interpreted differently depending on the type of message:
+ *
+ * - For VPU_JSM_MSG_TRACE_SET_CONFIG, the payload specifies the desired
+ *   logging configuration to be set.
+ *
+ * - For VPU_JSM_MSG_TRACE_SET_CONFIG_RSP, the payload reports the logging
+ *   configuration that was set after a VPU_JSM_MSG_TRACE_SET_CONFIG request.
+ *   The host can compare this payload with the one it sent in the
+ *   VPU_JSM_MSG_TRACE_SET_CONFIG request to check whether or not the
+ *   configuration was set as desired.
+ *
+ * - VPU_JSM_MSG_TRACE_GET_CONFIG_RSP, the payload reports the current logging
+ *   configuration.
+ */
+struct vpu_ipc_msg_payload_trace_config {
+	/**
+	 * Logging level (currently set or to be set); see 'mvLog_t' enum for
+	 * acceptable values. The specified logging level applies to all
+	 * destinations and HW components
+	 */
+	u32 trace_level;
+	/**
+	 * Bitmask of logging destinations (currently enabled or to be enabled);
+	 * bitwise OR of values defined in logging_destination enum.
+	 */
+	u32 trace_destination_mask;
+	/**
+	 * Bitmask of loggable HW components (currently enabled or to be enabled);
+	 * bitwise OR of values defined in loggable_hw_component enum.
+	 */
+	u64 trace_hw_component_mask;
+	u64 reserved_0; /**< Reserved for future extensions. */
+};
+
+/**
+ * Payload for VPU_JSM_MSG_TRACE_GET_CAPABILITY_RSP messages.
+ */
+struct vpu_ipc_msg_payload_trace_capability_rsp {
+	u32 trace_destination_mask; /**< Bitmask of supported logging destinations. */
+	u32 reserved_0;
+	u64 trace_hw_component_mask; /**< Bitmask of supported loggable HW components. */
+	u64 reserved_1; /**< Reserved for future extensions. */
+};
+
+/**
+ * Payload for VPU_JSM_MSG_TRACE_GET_NAME requests.
+ */
+struct vpu_ipc_msg_payload_trace_get_name {
+	/**
+	 * The type of the entity to query name for; see logging_entity_type for
+	 * possible values.
+	 */
+	u32 entity_type;
+	u32 reserved_0;
+	/**
+	 * The ID of the entity to query name for; possible values depend on the
+	 * entity type.
+	 */
+	u64 entity_id;
+};
+
+/**
+ * Payload for VPU_JSM_MSG_TRACE_GET_NAME_RSP responses.
+ */
+struct vpu_ipc_msg_payload_trace_get_name_rsp {
+	/**
+	 * The type of the entity whose name was queried; see logging_entity_type
+	 * for possible values.
+	 */
+	u32 entity_type;
+	u32 reserved_0;
+	/**
+	 * The ID of the entity whose name was queried; possible values depend on
+	 * the entity type.
+	 */
+	u64 entity_id;
+	/** Reserved for future extensions. */
+	u64 reserved_1;
+	/** The name of the entity. */
+	char entity_name[VPU_TRACE_ENTITY_NAME_MAX_LEN];
+};
+
+/**
+ * Payload for VPU_JSM_MSG_DYNDBG_CONTROL requests.
+ *
+ * VPU_JSM_MSG_DYNDBG_CONTROL are used to control the VPU FW Dynamic Debug
+ * feature, which allows developers to selectively enable / disable MVLOG_DEBUG
+ * messages. This is equivalent to the Dynamic Debug functionality provided by
+ * Linux
+ * (https://www.kernel.org/doc/html/latest/admin-guide/dynamic-debug-howto.html)
+ * The host can control Dynamic Debug behavior by sending dyndbg commands, which
+ * have the same syntax as Linux
+ * dyndbg commands.
+ *
+ * NOTE: in order for MVLOG_DEBUG messages to be actually printed, the host
+ * still has to set the logging level to MVLOG_DEBUG, using the
+ * VPU_JSM_MSG_TRACE_SET_CONFIG command.
+ *
+ * The host can see the current dynamic debug configuration by executing a
+ * special 'show' command. The dyndbg configuration will be printed to the
+ * configured logging destination using MVLOG_INFO logging level.
+ */
+struct vpu_ipc_msg_payload_dyndbg_control {
+	/**
+	 * Dyndbg command (same format as Linux dyndbg); must be a NULL-terminated
+	 * string.
+	 */
+	char dyndbg_cmd[VPU_DYNDBG_CMD_MAX_LEN];
+};
+
+/*
+ * Payloads union, used to define complete message format.
+ */
+union vpu_ipc_msg_payload {
+	struct vpu_ipc_msg_payload_engine_reset engine_reset;
+	struct vpu_ipc_msg_payload_engine_preempt engine_preempt;
+	struct vpu_ipc_msg_payload_register_db register_db;
+	struct vpu_ipc_msg_payload_unregister_db unregister_db;
+	struct vpu_ipc_msg_payload_query_engine_hb query_engine_hb;
+	struct vpu_ipc_msg_payload_power_level power_level;
+	struct vpu_ipc_msg_payload_blob_deinit blob_deinit;
+	struct vpu_ipc_msg_payload_metric_streamer_open metric_streamer_open;
+	struct vpu_ipc_msg_payload_context_delete context_delete;
+	struct vpu_ipc_msg_payload_job_done job_done;
+	struct vpu_ipc_msg_payload_engine_reset_done engine_reset_done;
+	struct vpu_ipc_msg_payload_engine_preempt_done engine_preempt_done;
+	struct vpu_ipc_msg_payload_register_db_done register_db_done;
+	struct vpu_ipc_msg_payload_unregister_db_done unregister_db_done;
+	struct vpu_ipc_msg_payload_query_engine_hb_done query_engine_hb_done;
+	struct vpu_ipc_msg_payload_get_power_level_count_done get_power_level_count_done;
+	struct vpu_ipc_msg_payload_blob_deinit_done blob_deinit_done;
+	struct vpu_ipc_msg_payload_trace_config trace_config;
+	struct vpu_ipc_msg_payload_trace_capability_rsp trace_capability;
+	struct vpu_ipc_msg_payload_trace_get_name trace_get_name;
+	struct vpu_ipc_msg_payload_trace_get_name_rsp trace_get_name_rsp;
+	struct vpu_ipc_msg_payload_dyndbg_control dyndbg_control;
+};
+
+/*
+ * Host <-> LRT IPC message base structure.
+ */
+struct vpu_jsm_msg {
+	enum vpu_ipc_msg_type type;
+	enum vpu_ipc_msg_status status;
+	u32 request_id;
+	u32 result;
+	union vpu_ipc_msg_payload payload;
+};
+
+#pragma pack(pop)
+
+#endif
+
+///@}
diff --git a/drivers/gpu/drm/vpu/vpu_jsm_msg.c b/drivers/gpu/drm/vpu/vpu_jsm_msg.c
new file mode 100644
index 000000000000..ee4c1b74c329
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_jsm_msg.c
@@ -0,0 +1,220 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include "vpu_drv.h"
+#include "vpu_ipc.h"
+#include "vpu_jsm_msg.h"
+
+const char *vpu_jsm_msg_type_to_str(enum vpu_ipc_msg_type type)
+{
+	switch (type) {
+	case VPU_JSM_MSG_ENGINE_RESET:
+		return "VPU_JSM_MSG_ENGINE_RESET";
+	case VPU_JSM_MSG_ENGINE_PREEMPT:
+		return "VPU_JSM_MSG_ENGINE_PREEMPT";
+	case VPU_JSM_MSG_REGISTER_DB:
+		return "VPU_JSM_MSG_REGISTER_DB";
+	case VPU_JSM_MSG_UNREGISTER_DB:
+		return "VPU_JSM_MSG_UNREGISTER_DB";
+	case VPU_JSM_MSG_QUERY_ENGINE_HB:
+		return "VPU_JSM_MSG_QUERY_ENGINE_HB";
+	case VPU_JSM_MSG_TRACE_SET_CONFIG:
+		return "VPU_JSM_MSG_TRACE_SET_CONFIG";
+	case VPU_JSM_MSG_TRACE_GET_CONFIG:
+		return "VPU_JSM_MSG_TRACE_GET_CONFIG";
+	case VPU_JSM_MSG_TRACE_GET_CAPABILITY:
+		return "VPU_JSM_MSG_TRACE_GET_CAPABILITY";
+	case VPU_JSM_MSG_BLOB_DEINIT:
+		return "VPU_JSM_MSG_BLOB_DEINIT";
+	case VPU_JSM_MSG_DYNDBG_CONTROL:
+		return "VPU_JSM_MSG_DYNDBG_CONTROL";
+	case VPU_JSM_MSG_JOB_DONE:
+		return "VPU_JSM_MSG_JOB_DONE";
+	case VPU_JSM_MSG_ENGINE_RESET_DONE:
+		return "VPU_JSM_MSG_ENGINE_RESET_DONE";
+	case VPU_JSM_MSG_ENGINE_PREEMPT_DONE:
+		return "VPU_JSM_MSG_ENGINE_PREEMPT_DONE";
+	case VPU_JSM_MSG_REGISTER_DB_DONE:
+		return "VPU_JSM_MSG_REGISTER_DB_DONE";
+	case VPU_JSM_MSG_UNREGISTER_DB_DONE:
+		return "VPU_JSM_MSG_UNREGISTER_DB_DONE";
+	case VPU_JSM_MSG_QUERY_ENGINE_HB_DONE:
+		return "VPU_JSM_MSG_QUERY_ENGINE_HB_DONE";
+	case VPU_JSM_MSG_TRACE_SET_CONFIG_RSP:
+		return "VPU_JSM_MSG_TRACE_SET_CONFIG_RSP";
+	case VPU_JSM_MSG_TRACE_GET_CONFIG_RSP:
+		return "VPU_JSM_MSG_TRACE_GET_CONFIG_RSP";
+	case VPU_JSM_MSG_TRACE_GET_CAPABILITY_RSP:
+		return "VPU_JSM_MSG_TRACE_GET_CAPABILITY_RSP";
+	case VPU_JSM_MSG_BLOB_DEINIT_DONE:
+		return "VPU_JSM_MSG_BLOB_DEINIT_DONE";
+	case VPU_JSM_MSG_DYNDBG_CONTROL_RSP:
+		return "VPU_JSM_MSG_DYNDBG_CONTROL_RSP";
+	default:
+		return "Unknown JSM message type";
+	}
+}
+
+int vpu_jsm_register_db(struct vpu_device *vdev, u32 ctx_id, u32 db_id,
+			u64 jobq_base, u32 jobq_size)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_REGISTER_DB };
+	struct vpu_jsm_msg resp;
+	int ret = 0;
+
+	req.payload.register_db.db_idx = db_id;
+	req.payload.register_db.jobq_base = jobq_base;
+	req.payload.register_db.jobq_size = jobq_size;
+	req.payload.register_db.host_ssid = ctx_id;
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_REGISTER_DB_DONE, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret) {
+		vpu_err(vdev, "Failed to register doorbell %d: %d\n", db_id, ret);
+		return ret;
+	}
+
+	vpu_dbg(JSM, "Doorbell %d registered to context %d\n", db_id, ctx_id);
+
+	return 0;
+}
+
+int vpu_jsm_unregister_db(struct vpu_device *vdev, u32 db_id)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_UNREGISTER_DB };
+	struct vpu_jsm_msg resp;
+	int ret = 0;
+
+	req.payload.unregister_db.db_idx = db_id;
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_UNREGISTER_DB_DONE, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret) {
+		vpu_warn(vdev, "Failed to unregister doorbell %d: %d\n", db_id, ret);
+		return ret;
+	}
+
+	vpu_dbg(JSM, "Doorbell %d unregistered\n", db_id);
+
+	return 0;
+}
+
+int vpu_jsm_get_heartbeat(struct vpu_device *vdev, u32 engine, u64 *heartbeat)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_QUERY_ENGINE_HB };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	if (engine > VPU_ENGINE_COPY)
+		return -EINVAL;
+
+	req.payload.query_engine_hb.engine_idx = engine;
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret) {
+		vpu_err(vdev, "Failed to get heartbeat from engine %d: %d\n", engine, ret);
+		goto rpm_put;
+	}
+
+	*heartbeat = resp.payload.query_engine_hb_done.heartbeat;
+rpm_put:
+	return ret;
+}
+
+int vpu_jsm_reset_engine(struct vpu_device *vdev, u32 engine)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_ENGINE_RESET };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	if (engine > VPU_ENGINE_COPY)
+		return -EINVAL;
+
+	req.payload.engine_reset.engine_idx = engine;
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_ENGINE_RESET_DONE, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		vpu_err(vdev, "Failed to reset engine %d: %d\n", engine, ret);
+
+	return ret;
+}
+
+int vpu_jsm_preempt_engine(struct vpu_device *vdev, u32 engine, u32 preempt_id)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_ENGINE_PREEMPT };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	if (engine > VPU_ENGINE_COPY)
+		return -EINVAL;
+
+	req.payload.engine_preempt.engine_idx = engine;
+	req.payload.engine_preempt.preempt_id = preempt_id;
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_ENGINE_PREEMPT_DONE, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		vpu_err(vdev, "Failed to preempt engine %d: %d\n", engine, ret);
+
+	return ret;
+}
+
+int vpu_jsm_dyndbg_control(struct vpu_device *vdev, char *command, size_t size)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_DYNDBG_CONTROL };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	strncpy(req.payload.dyndbg_control.dyndbg_cmd, command, VPU_DYNDBG_CMD_MAX_LEN - 1);
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_DYNDBG_CONTROL_RSP, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		vpu_warn(vdev, "Failed to send command \"%s\": ret %d\n", command, ret);
+
+	return ret;
+}
+
+int vpu_jsm_trace_get_capability(struct vpu_device *vdev, u32 *trace_destination_mask,
+				 u64 *trace_hw_component_mask)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_TRACE_GET_CAPABILITY };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_TRACE_GET_CAPABILITY_RSP, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret) {
+		vpu_warn(vdev, "Failed to get trace capability: %d\n", ret);
+		return ret;
+	}
+
+	*trace_destination_mask = resp.payload.trace_capability.trace_destination_mask;
+	*trace_hw_component_mask = resp.payload.trace_capability.trace_hw_component_mask;
+
+	return ret;
+}
+
+int vpu_jsm_trace_set_config(struct vpu_device *vdev, u32 trace_level, u32 trace_destination_mask,
+			     u64 trace_hw_component_mask)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_TRACE_SET_CONFIG };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.trace_config.trace_level = trace_level;
+	req.payload.trace_config.trace_destination_mask = trace_destination_mask;
+	req.payload.trace_config.trace_hw_component_mask = trace_hw_component_mask;
+
+	ret = vpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_TRACE_SET_CONFIG_RSP, &resp,
+				   VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		vpu_warn(vdev, "Failed to set config: %d\n", ret);
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/vpu/vpu_jsm_msg.h b/drivers/gpu/drm/vpu/vpu_jsm_msg.h
new file mode 100644
index 000000000000..93e28a1a7943
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_jsm_msg.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_JSM_MSG_H__
+#define __VPU_JSM_MSG_H__
+
+#include "vpu_jsm_api.h"
+
+const char *vpu_jsm_msg_type_to_str(enum vpu_ipc_msg_type type);
+
+int vpu_jsm_register_db(struct vpu_device *vdev, u32 ctx_id, u32 db_id,
+			u64 jobq_base, u32 jobq_size);
+int vpu_jsm_unregister_db(struct vpu_device *vdev, u32 db_id);
+int vpu_jsm_get_heartbeat(struct vpu_device *vdev, u32 engine, u64 *heartbeat);
+int vpu_jsm_reset_engine(struct vpu_device *vdev, u32 engine);
+int vpu_jsm_preempt_engine(struct vpu_device *vdev, u32 engine, u32 preempt_id);
+int vpu_jsm_dyndbg_control(struct vpu_device *vdev, char *command, size_t size);
+int vpu_jsm_trace_get_capability(struct vpu_device *vdev, u32 *trace_destination_mask,
+				 u64 *trace_hw_component_mask);
+int vpu_jsm_trace_set_config(struct vpu_device *vdev, u32 trace_level, u32 trace_destination_mask,
+			     u64 trace_hw_component_mask);
+
+#endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v1 5/7] drm/vpu: Implement firmware parsing and booting
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
                   ` (3 preceding siblings ...)
  2022-07-28 13:17 ` [PATCH v1 4/7] drm/vpu: Add IPC driver and JSM messages Jacek Lawrynowicz
@ 2022-07-28 13:17 ` Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 6/7] drm/vpu: Add command buffer submission logic Jacek Lawrynowicz
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

Read, parse and boot VPU firmware image.
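
The resulting boot flow can be summarized by the sketch below. Only
vpu_wait_for_ready() and vpu_ipc_enable() appear verbatim in this series;
vpu_fw_boot_params_setup() and vpu_hw_boot_fw() are assumed helper names used
purely for illustration:

  /* Illustrative outline only; helper names other than vpu_wait_for_ready()
   * and vpu_ipc_enable() are assumptions, not the literal patch code.
   */
  vpu_fw_boot_params_setup(vdev, boot_params); /* fill first 4KB of FW memory */
  vpu_hw_boot_fw(vdev);                        /* start the VPU management CPU */
  ret = vpu_wait_for_ready(vdev);              /* wait for the boot IPC message */
  if (!ret)
          vpu_ipc_enable(vdev);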

Signed-off-by: Andrzej Kacprowski <andrzej.kacprowski@linux.intel.com>
Signed-off-by: Krystian Pradzynski <krystian.pradzynski@linux.intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/gpu/drm/vpu/Makefile       |   1 +
 drivers/gpu/drm/vpu/vpu_boot_api.h | 222 ++++++++++++++++
 drivers/gpu/drm/vpu/vpu_drv.c      | 123 ++++++++-
 drivers/gpu/drm/vpu/vpu_drv.h      |   9 +
 drivers/gpu/drm/vpu/vpu_fw.c       | 413 +++++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_fw.h       |  38 +++
 drivers/gpu/drm/vpu/vpu_hw_mtl.c   |  11 +
 include/uapi/drm/vpu_drm.h         |  21 ++
 8 files changed, 837 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/vpu/vpu_boot_api.h
 create mode 100644 drivers/gpu/drm/vpu/vpu_fw.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_fw.h

diff --git a/drivers/gpu/drm/vpu/Makefile b/drivers/gpu/drm/vpu/Makefile
index 43cede47c75f..995a3a9c6777 100644
--- a/drivers/gpu/drm/vpu/Makefile
+++ b/drivers/gpu/drm/vpu/Makefile
@@ -3,6 +3,7 @@
 
 intel_vpu-y := \
 	vpu_drv.o \
+	vpu_fw.o \
 	vpu_gem.o \
 	vpu_hw_mtl.o \
 	vpu_ipc.o \
diff --git a/drivers/gpu/drm/vpu/vpu_boot_api.h b/drivers/gpu/drm/vpu/vpu_boot_api.h
new file mode 100644
index 000000000000..b8ab30979b8a
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_boot_api.h
@@ -0,0 +1,222 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef VPU_BOOT_API_H
+#define VPU_BOOT_API_H
+
+/*
+ * =========== FW API version information beginning ================
+ * The below values will be used to construct the version info this way:
+ * fw_bin_header->api_version[VPU_BOOT_API_VER_ID] = (VPU_BOOT_API_VER_MAJOR << 16) |
+ * VPU_BOOT_API_VER_MINOR; VPU_BOOT_API_VER_PATCH will be ignored.
+ */
+
+/*
+ * Major version changes that break backward compatibility.
+ * Major version must start from 1 and can only be incremented.
+ */
+#define VPU_BOOT_API_VER_MAJOR 3
+
+/*
+ * Minor version changes when API backward compatibility is preserved.
+ * Resets to 0 if Major version is incremented.
+ */
+#define VPU_BOOT_API_VER_MINOR 7
+
+/*
+ * API header changed (field names, documentation, formatting) but API itself has not been changed
+ */
+#define VPU_BOOT_API_VER_PATCH 2
+
+/*
+ * Index in the API version table
+ * Must be unique for each API
+ */
+#define VPU_BOOT_API_VER_INDEX 0
+/* ------------ FW API version information end ---------------------*/
+
+#pragma pack(push, 1)
+
+/* Firmware image header format */
+#define VPU_FW_HEADER_SIZE    4096
+#define VPU_FW_HEADER_VERSION 0x1
+#define VPU_FW_VERSION_SIZE   32
+#define VPU_FW_API_VER_NUM    16
+
+struct vpu_firmware_header {
+	u32 header_version;
+	u32 image_format;
+	u64 image_load_address;
+	u32 image_size;
+	u64 entry_point;
+	u8 vpu_version[VPU_FW_VERSION_SIZE];
+	u32 compression_type;
+	u64 firmware_version_load_address;
+	u32 firmware_version_size;
+	u64 boot_params_load_address;
+	u32 api_version[VPU_FW_API_VER_NUM];
+	/* Size of memory required for firmware execution */
+	u32 runtime_size;
+	u32 shave_nn_fw_size;
+};
+
+/* Firmware boot parameters format */
+#define VPU_BOOT_PLL_COUNT     3
+#define VPU_BOOT_PLL_OUT_COUNT 4
+
+/* Values for boot_type field */
+#define VPU_BOOT_TYPE_COLDBOOT 0
+#define VPU_BOOT_TYPE_WARMBOOT 1
+
+/* Value for the magic field */
+#define VPU_BOOT_PARAMS_MAGIC 0x10000
+
+enum VPU_BOOT_L2_CACHE_CFG_TYPE {
+	VPU_BOOT_L2_CACHE_CFG_UPA = 0,
+	VPU_BOOT_L2_CACHE_CFG_NN = 1,
+	VPU_BOOT_L2_CACHE_CFG_NUM = 2
+};
+
+/**
+ * Logging destinations.
+ *
+ * Logging output can be directed to different logging destinations. This enum
+ * defines the list of logging destinations supported by the VPU firmware (NOTE:
+ * a specific VPU FW binary may support only a subset of such output
+ * destinations, depending on the target platform and compile options).
+ */
+enum vpu_trace_destination {
+	VPU_TRACE_DESTINATION_PIPEPRINT = 0x1,
+	VPU_TRACE_DESTINATION_VERBOSE_TRACING = 0x2,
+	VPU_TRACE_DESTINATION_NORTH_PEAK = 0x4,
+};
+
+struct vpu_boot_l2_cache_config {
+	u8 use;
+	u8 cfg;
+};
+
+struct vpu_warm_boot_section {
+	u32 src;
+	u32 dst;
+	u32 size;
+	u32 core_id;
+	u32 is_clear_op;
+};
+
+struct vpu_boot_params {
+	u32 magic;
+	u32 vpu_id;
+	u32 vpu_count;
+	u32 pad0[5];
+	/* Clock frequencies: 0x20 - 0xFF */
+	u32 frequency;
+	u32 pll[VPU_BOOT_PLL_COUNT][VPU_BOOT_PLL_OUT_COUNT];
+	u32 pad1[43];
+	/* Memory regions: 0x100 - 0x1FF */
+	u64 ipc_header_area_start;
+	u32 ipc_header_area_size;
+	u64 shared_region_base;
+	u32 shared_region_size;
+	u64 ipc_payload_area_start;
+	u32 ipc_payload_area_size;
+	u64 global_aliased_pio_base;
+	u32 global_aliased_pio_size;
+	u32 autoconfig;
+	struct vpu_boot_l2_cache_config cache_defaults[VPU_BOOT_L2_CACHE_CFG_NUM];
+	u64 global_memory_allocator_base;
+	u32 global_memory_allocator_size;
+	/*
+	 * ShaveNN FW section VPU base address
+	 * On VPU2.7 HW this address must be within 2GB range starting from L2C_PAGE_TABLE base
+	 */
+	u64 shave_nn_fw_base;
+	u64 save_restore_ret_address; /* stores the address of FW's restore entry point */
+	u32 pad2[43];
+	/* IRQ re-direct numbers: 0x200 - 0x2FF */
+	s32 watchdog_irq_mss;
+	s32 watchdog_irq_nce;
+	u32 host_to_vpu_irq;
+	u32 vpu_to_host_irq;
+	/* VPU -> ARM IRQ line to use to request MMU update. */
+	u32 mmu_update_request_irq;
+	/* ARM -> VPU IRQ line to use to notify of MMU update completion. */
+	u32 mmu_update_done_irq;
+	/* ARM -> VPU IRQ line to use to request power level change. */
+	u32 set_power_level_irq;
+	/* VPU -> ARM IRQ line to use to notify of power level change completion. */
+	u32 set_power_level_done_irq;
+	/* VPU -> ARM IRQ line to use to notify of VPU idle state change */
+	u32 set_vpu_idle_update_irq;
+	/* VPU -> ARM IRQ line to use to request counter reset. */
+	u32 metric_query_event_irq;
+	/* ARM -> VPU IRQ line to use to notify of counter reset completion. */
+	u32 metric_query_event_done_irq;
+	u32 pad3[53];
+	/* Silicon information: 0x300 - 0x3FF */
+	u32 host_version_id;
+	u32 si_stepping;
+	u64 device_id;
+	u64 feature_exclusion;
+	u64 sku;
+	u32 min_freq_pll_ratio;
+	u32 max_freq_pll_ratio;
+	/**
+	 * Initial log level threshold (messages with log level severity less than
+	 * the threshold will not be logged); applies to every enabled logging
+	 * destination and loggable HW component. See 'mvLog_t' enum for acceptable
+	 * values.
+	 */
+	u32 default_trace_level;
+	u32 boot_type;
+	u64 punit_telemetry_sram_base;
+	u64 punit_telemetry_sram_size;
+	u32 vpu_telemetry_enable;
+	u64 crit_tracing_buff_addr;
+	u32 crit_tracing_buff_size;
+	u64 verbose_tracing_buff_addr;
+	u32 verbose_tracing_buff_size;
+	u64 verbose_tracing_sw_component_mask; /* TO BE REMOVED */
+	/**
+	 * Mask of destinations to which logging messages are delivered; bitwise OR
+	 * of values defined in vpu_trace_destination enum.
+	 */
+	u32 trace_destination_mask;
+	/**
+	 * Mask of hardware components for which logging is enabled; bitwise OR of
+	 * bits defined by the VPU_TRACE_PROC_BIT_* macros.
+	 */
+	u64 trace_hw_component_mask;
+	/** Mask of trace message formats supported by the driver */
+	u64 tracing_buff_message_format_mask;
+	u64 trace_reserved_1[2];
+	u32 pad4[30];
+	/* Warm boot information: 0x400 - 0x43F */
+	u32 warm_boot_sections_count;
+	u32 warm_boot_start_address_reference;
+	u32 warm_boot_section_info_address_offset;
+	u32 pad5[13];
+	/* Power States transitions timestamps: 0x440 - 0x46F*/
+	struct {
+		/* VPU_IDLE -> VPU_ACTIVE transition initiated timestamp */
+		u64 vpu_active_state_requested;
+		/* VPU_IDLE -> VPU_ACTIVE transition completed timestamp */
+		u64 vpu_active_state_achieved;
+		/* VPU_ACTIVE -> VPU_IDLE transition initiated timestamp */
+		u64 vpu_idle_state_requested;
+		/* VPU_ACTIVE -> VPU_IDLE transition completed timestamp */
+		u64 vpu_idle_state_achieved;
+		/* VPU_IDLE -> VPU_STANDBY transition initiated timestamp */
+		u64 vpu_standby_state_requested;
+		/* VPU_IDLE -> VPU_STANDBY transition completed timestamp */
+		u64 vpu_standby_state_achieved;
+	} power_states_timestamps;
+	/* Unused/reserved: 0x470 - 0xFFF */
+	u32 pad6[740];
+};
+
+#pragma pack(pop)
+
+#endif
diff --git a/drivers/gpu/drm/vpu/vpu_drv.c b/drivers/gpu/drm/vpu/vpu_drv.c
index 48551c147a1c..d0cdbb791e1f 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.c
+++ b/drivers/gpu/drm/vpu/vpu_drv.c
@@ -13,10 +13,13 @@
 #include <drm/drm_ioctl.h>
 #include <drm/drm_prime.h>
 
+#include "vpu_boot_api.h"
 #include "vpu_drv.h"
+#include "vpu_fw.h"
 #include "vpu_gem.h"
 #include "vpu_hw.h"
 #include "vpu_ipc.h"
+#include "vpu_jsm_msg.h"
 #include "vpu_mmu.h"
 #include "vpu_mmu_context.h"
 
@@ -31,6 +34,10 @@ int vpu_dbg_mask;
 module_param_named(dbg_mask, vpu_dbg_mask, int, 0644);
 MODULE_PARM_DESC(dbg_mask, "Driver debug mask. See VPU_DBG_* macros.");
 
+int vpu_test_mode;
+module_param_named_unsafe(test_mode, vpu_test_mode, int, 0644);
+MODULE_PARM_DESC(test_mode, "Test mode: 0 - normal operation, 1 - fw unit test, 2 - null hw");
+
 u8 vpu_pll_min_ratio;
 module_param_named(pll_min_ratio, vpu_pll_min_ratio, byte, 0644);
 MODULE_PARM_DESC(pll_min_ratio, "Minimum PLL ratio used to set VPU frequency");
@@ -126,6 +133,28 @@ static int vpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_fi
 	case DRM_VPU_PARAM_CONTEXT_ID:
 		args->value = file_priv->ctx.id;
 		break;
+	case DRM_VPU_PARAM_FW_API_VERSION:
+		if (args->index < VPU_FW_API_VER_NUM) {
+			struct vpu_firmware_header *fw_hdr;
+
+			fw_hdr = (struct vpu_firmware_header *)vdev->fw->file->data;
+			args->value = fw_hdr->api_version[args->index];
+		} else {
+			ret = -EINVAL;
+		}
+		break;
+	case DRM_VPU_PARAM_ENGINE_HEARTBEAT:
+		ret = vpu_jsm_get_heartbeat(vdev, args->index, &args->value);
+		break;
+	case DRM_VPU_PARAM_UNIQUE_INFERENCE_ID:
+		args->value = (u64)atomic64_inc_return(&vdev->unique_id_counter);
+		break;
+	case DRM_VPU_PARAM_TILE_CONFIG:
+		args->value = vdev->hw->tile_fuse;
+		break;
+	case DRM_VPU_PARAM_SKU:
+		args->value = vdev->hw->sku;
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -197,6 +226,71 @@ static const struct drm_ioctl_desc vpu_drm_ioctls[] = {
 
 DEFINE_DRM_GEM_FOPS(vpu_fops);
 
+static int vpu_wait_for_ready(struct vpu_device *vdev)
+{
+	struct vpu_ipc_consumer cons;
+	struct vpu_ipc_hdr ipc_hdr;
+	unsigned long timeout;
+	int ret;
+
+	if (vpu_test_mode == VPU_TEST_MODE_FW_TEST)
+		return 0;
+
+	vpu_ipc_consumer_add(vdev, &cons, VPU_IPC_CHAN_BOOT_MSG);
+
+	timeout = jiffies + msecs_to_jiffies(vdev->timeout.boot);
+	while (1) {
+		vpu_ipc_irq_handler(vdev);
+		ret = vpu_ipc_receive(vdev, &cons, &ipc_hdr, NULL, 0);
+		if (ret != -ETIMEDOUT || time_after_eq(jiffies, timeout))
+			break;
+
+		cond_resched();
+		if (signal_pending(current)) {
+			ret = -EINTR;
+			break;
+		}
+	}
+
+	vpu_ipc_consumer_del(vdev, &cons);
+
+	if (!ret && ipc_hdr.data_addr != VPU_IPC_BOOT_MSG_DATA_ADDR) {
+		vpu_err(vdev, "Invalid VPU ready message: 0x%x\n",
+			ipc_hdr.data_addr);
+		return -EIO;
+	}
+
+	if (!ret)
+		vpu_info(vdev, "VPU ready message received successfully\n");
+
+	return ret;
+}
+
+int vpu_boot(struct vpu_device *vdev)
+{
+	int ret;
+
+	/* Update boot params located in the first 4KB of FW memory */
+	vpu_fw_boot_params_setup(vdev, vdev->fw->mem->kvaddr);
+
+	ret = vpu_hw_boot_fw(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to start the firmware: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_wait_for_ready(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to boot the firmware: %d\n", ret);
+		return ret;
+	}
+
+	vpu_hw_irq_clear(vdev);
+	vpu_hw_irq_enable(vdev);
+	vpu_ipc_enable(vdev);
+	return 0;
+}
+
 int vpu_shutdown(struct vpu_device *vdev)
 {
 	int ret;
@@ -320,6 +414,10 @@ static int vpu_dev_init(struct vpu_device *vdev)
 	if (!vdev->mmu)
 		return -ENOMEM;
 
+	vdev->fw = devm_kzalloc(vdev->drm.dev, sizeof(*vdev->fw), GFP_KERNEL);
+	if (!vdev->fw)
+		return -ENOMEM;
+
 	vdev->ipc = devm_kzalloc(vdev->drm.dev, sizeof(*vdev->ipc), GFP_KERNEL);
 	if (!vdev->ipc)
 		return -ENOMEM;
@@ -331,6 +429,8 @@ static int vpu_dev_init(struct vpu_device *vdev)
 	vdev->context_xa_limit.min = VPU_GLOBAL_CONTEXT_MMU_SSID + 1;
 	vdev->context_xa_limit.max = VPU_CONTEXT_LIMIT;
 
+	atomic64_set(&vdev->unique_id_counter, 0);
+
 	ret = vpu_pci_init(vdev);
 	if (ret) {
 		vpu_err(vdev, "Failed to initialize PCI device: %d\n", ret);
@@ -367,14 +467,34 @@ static int vpu_dev_init(struct vpu_device *vdev)
 		goto err_mmu_gctx_fini;
 	}
 
+	ret = vpu_fw_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize firmware: %d\n", ret);
+		goto err_mmu_fini;
+	}
+
 	ret = vpu_ipc_init(vdev);
 	if (ret) {
 		vpu_err(vdev, "Failed to initialize IPC: %d\n", ret);
-		goto err_mmu_fini;
+		goto err_fw_fini;
+	}
+
+	ret = vpu_fw_load(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to load firmware: %d\n", ret);
+		goto err_fw_fini;
+	}
+
+	ret = vpu_boot(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to boot: %d\n", ret);
+		goto err_fw_fini;
 	}
 
 	return 0;
 
+err_fw_fini:
+	vpu_fw_fini(vdev);
 err_mmu_fini:
 	vpu_mmu_fini(vdev);
 err_mmu_gctx_fini:
@@ -393,6 +513,7 @@ static void vpu_dev_fini(struct vpu_device *vdev)
 	vpu_shutdown(vdev);
 
 	vpu_ipc_fini(vdev);
+	vpu_fw_fini(vdev);
 	vpu_mmu_fini(vdev);
 	vpu_mmu_global_context_fini(vdev);
 	vpu_irq_fini(vdev);
diff --git a/drivers/gpu/drm/vpu/vpu_drv.h b/drivers/gpu/drm/vpu/vpu_drv.h
index ddb83aeaf6d3..94c8712396a6 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.h
+++ b/drivers/gpu/drm/vpu/vpu_drv.h
@@ -87,12 +87,15 @@ struct vpu_device {
 	struct vpu_wa_table wa;
 	struct vpu_hw_info *hw;
 	struct vpu_mmu_info *mmu;
+	struct vpu_fw_info *fw;
 	struct vpu_ipc_info *ipc;
 
 	struct vpu_mmu_context gctx;
 	struct xarray context_xa;
 	struct xa_limit context_xa_limit;
 
+	atomic64_t unique_id_counter;
+
 	struct {
 		int boot;
 		int jsm;
@@ -113,9 +116,15 @@ extern int vpu_dbg_mask;
 extern u8 vpu_pll_min_ratio;
 extern u8 vpu_pll_max_ratio;
 
+#define VPU_TEST_MODE_DISABLED  0
+#define VPU_TEST_MODE_FW_TEST   1
+#define VPU_TEST_MODE_NULL_HW   2
+extern int vpu_test_mode;
+
 void vpu_file_priv_get(struct vpu_file_priv *file_priv, struct vpu_file_priv **link);
 void vpu_file_priv_put(struct vpu_file_priv **link);
 char *vpu_platform_to_str(u32 platform);
+int vpu_boot(struct vpu_device *vdev);
 int vpu_shutdown(struct vpu_device *vdev);
 
 static inline bool vpu_is_mtl(struct vpu_device *vdev)
diff --git a/drivers/gpu/drm/vpu/vpu_fw.c b/drivers/gpu/drm/vpu/vpu_fw.c
new file mode 100644
index 000000000000..153aafcf3423
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_fw.c
@@ -0,0 +1,413 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <linux/firmware.h>
+#include <linux/highmem.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+
+#include "vpu_boot_api.h"
+#include "vpu_drv.h"
+#include "vpu_fw.h"
+#include "vpu_gem.h"
+#include "vpu_hw.h"
+#include "vpu_ipc.h"
+
+#define FW_MAX_NAMES		3
+#define FW_GLOBAL_MEM_START	(2ull * SZ_1G)
+#define FW_GLOBAL_MEM_END	(3ull * SZ_1G)
+#define FW_SHARED_MEM_SIZE	SZ_256M /* Must be aligned to FW_SHARED_MEM_ALIGNMENT */
+#define FW_SHARED_MEM_ALIGNMENT	SZ_128K /* VPU MTRR limitation */
+#define FW_RUNTIME_MAX_SIZE	SZ_512M
+#define FW_SHAVE_NN_MAX_SIZE	SZ_2M
+#define FW_RUNTIME_MIN_ADDR	(FW_GLOBAL_MEM_START)
+#define FW_RUNTIME_MAX_ADDR	(FW_GLOBAL_MEM_END - FW_SHARED_MEM_SIZE)
+#define FW_VERSION_HEADER_SIZE	SZ_4K
+#define FW_FILE_IMAGE_OFFSET	(VPU_FW_HEADER_SIZE + FW_VERSION_HEADER_SIZE)
+
+#define WATCHDOG_MSS_REDIRECT	32
+#define WATCHDOG_NCE_REDIRECT	33
+
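+/* L2 cache config value is the target base address expressed in 2 GB (bit 31) granules */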
+#define ADDR_TO_L2_CACHE_CFG(addr) ((addr) >> 31)
+
+#define VPU_FW_CHECK_API(vdev, fw_hdr, name) vpu_fw_check_api(vdev, fw_hdr, #name, \
+								  VPU_##name##_API_VER_INDEX, \
+								  VPU_##name##_API_VER_MAJOR, \
+								  VPU_##name##_API_VER_MINOR)
+
+static char *vpu_firmware;
+module_param_named_unsafe(firmware, vpu_firmware, charp, 0644);
+MODULE_PARM_DESC(firmware, "VPU firmware binary in /lib/firmware/..");
+
+static int vpu_fw_request(struct vpu_device *vdev)
+{
+	const char *fw_names[FW_MAX_NAMES] = {
+		vpu_firmware,
+		"mtl_vpu.bin",
+		"intel/vpu/mtl_vpu_v0.0.bin"
+	};
+	int ret = -ENOENT;
+	int i;
+
+	for (i = 0; i < FW_MAX_NAMES; i++) {
+		ret = request_firmware(&vdev->fw->file, fw_names[i], vdev->drm.dev);
+		if (!ret)
+			return 0;
+	}
+
+	vpu_err(vdev, "Failed to request firmware: %d\n", ret);
+	return ret;
+}
+
+static void
+vpu_fw_check_api(struct vpu_device *vdev, const struct vpu_firmware_header *fw_hdr,
+		 const char *str, int index, u16 expected_major, u16 expected_minor)
+{
+	u16 major = (u16)(fw_hdr->api_version[index] >> 16);
+	u16 minor = (u16)(fw_hdr->api_version[index]);
+
+	if (major != expected_major) {
+		vpu_warn(vdev, "Incompatible FW %s API version: %d.%d (expected %d.%d)\n",
+			 str, major, minor, expected_major, expected_minor);
+	}
+	vpu_dbg(FW_BOOT, "FW %s API version: %d.%d (expected %d.%d)\n",
+		str, major, minor, expected_major, expected_minor);
+}
+
+static int vpu_fw_parse(struct vpu_device *vdev)
+{
+	struct vpu_fw_info *fw = vdev->fw;
+	const struct vpu_firmware_header *fw_hdr = (const void *)fw->file->data;
+	u64 runtime_addr, image_load_addr, runtime_size, image_size;
+
+	if (fw->file->size <= FW_FILE_IMAGE_OFFSET) {
+		vpu_err(vdev, "Firmware file is too small: %zu\n", fw->file->size);
+		return -EINVAL;
+	}
+
+	if (fw_hdr->header_version != VPU_FW_HEADER_VERSION) {
+		vpu_err(vdev, "Invalid firmware header version: %u\n", fw_hdr->header_version);
+		return -EINVAL;
+	}
+
+	runtime_addr = fw_hdr->boot_params_load_address;
+	runtime_size = fw_hdr->runtime_size;
+	image_load_addr = fw_hdr->image_load_address;
+	image_size = fw_hdr->image_size;
+
+	if (runtime_addr < FW_RUNTIME_MIN_ADDR || runtime_addr > FW_RUNTIME_MAX_ADDR) {
+		vpu_err(vdev, "Invalid firmware runtime address: 0x%llx\n", runtime_addr);
+		return -EINVAL;
+	}
+
+	if (runtime_size < fw->file->size || runtime_size > FW_RUNTIME_MAX_SIZE) {
+		vpu_err(vdev, "Invalid firmware runtime size: %llu\n", runtime_size);
+		return -EINVAL;
+	}
+
+	if (FW_FILE_IMAGE_OFFSET + image_size > fw->file->size) {
+		vpu_err(vdev, "Invalid image size: %llu\n", image_size);
+		return -EINVAL;
+	}
+
+	if (image_load_addr < runtime_addr ||
+	    image_load_addr + image_size > runtime_addr + runtime_size) {
+		vpu_err(vdev, "Invalid firmware load address 0x%llx or size %llu\n",
+			image_load_addr, image_size);
+		return -EINVAL;
+	}
+
+	if (fw_hdr->shave_nn_fw_size > FW_SHAVE_NN_MAX_SIZE) {
+		vpu_err(vdev, "SHAVE NN firmware is too big: %u\n", fw_hdr->shave_nn_fw_size);
+		return -EINVAL;
+	}
+
+	if (fw_hdr->entry_point < image_load_addr ||
+	    fw_hdr->entry_point >= image_load_addr + image_size) {
+		vpu_err(vdev, "Invalid entry point: 0x%llx\n", fw_hdr->entry_point);
+		return -EINVAL;
+	}
+
+	fw->runtime_addr = runtime_addr;
+	fw->runtime_size = runtime_size;
+	fw->image_load_offset = image_load_addr - runtime_addr;
+	fw->image_size = image_size;
+	fw->shave_nn_size = PAGE_ALIGN(fw_hdr->shave_nn_fw_size);
+
+	fw->cold_boot_entry_point = fw_hdr->entry_point;
+	fw->entry_point = fw->cold_boot_entry_point;
+
+	vpu_dbg(FW_BOOT, "Header version: 0x%x, format 0x%x\n",
+		fw_hdr->header_version, fw_hdr->image_format);
+	vpu_dbg(FW_BOOT, "Size: file %lu image %u runtime %u shavenn %u\n",
+		fw->file->size, fw->image_size, fw->runtime_size, fw->shave_nn_size);
+	vpu_dbg(FW_BOOT, "Address: runtime 0x%llx, load 0x%llx, entry point 0x%llx\n",
+		fw->runtime_addr, image_load_addr, fw->entry_point);
+	vpu_dbg(FW_BOOT, "FW version: %s\n", (char *)fw_hdr + VPU_FW_HEADER_SIZE);
+
+	VPU_FW_CHECK_API(vdev, fw_hdr, BOOT);
+	VPU_FW_CHECK_API(vdev, fw_hdr, JSM);
+
+	return 0;
+}
+
+static void vpu_fw_release(struct vpu_device *vdev)
+{
+	release_firmware(vdev->fw->file);
+}
+
+static int vpu_fw_update_global_range(struct vpu_device *vdev)
+{
+	struct vpu_fw_info *fw = vdev->fw;
+	u64 start = ALIGN(fw->runtime_addr + fw->runtime_size, FW_SHARED_MEM_ALIGNMENT);
+	u64 size = FW_SHARED_MEM_SIZE;
+
+	if (start + size > FW_GLOBAL_MEM_END) {
+		vpu_err(vdev, "No space for shared region, start %llu, size %llu\n", start, size);
+		return -EINVAL;
+	}
+
+	vpu_hw_init_range(&vdev->hw->ranges.global_low, start, size);
+	return 0;
+}
+
+static int vpu_fw_mem_init(struct vpu_device *vdev)
+{
+	struct vpu_fw_info *fw = vdev->fw;
+	int ret;
+
+	ret = vpu_fw_update_global_range(vdev);
+	if (ret)
+		return ret;
+
+	fw->mem = vpu_bo_alloc_internal(vdev, fw->runtime_addr, fw->runtime_size, false);
+	if (!fw->mem) {
+		vpu_err(vdev, "Failed to allocate firmware runtime memory\n");
+		return -ENOMEM;
+	}
+
+	if (fw->shave_nn_size) {
+		fw->mem_shave_nn = vpu_bo_alloc_internal(vdev, vdev->hw->ranges.global_high.start,
+							 fw->shave_nn_size, false);
+		if (!fw->mem_shave_nn) {
+			vpu_err(vdev, "Failed to allocate shavenn buffer\n");
+			vpu_bo_free_internal(fw->mem);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+static void vpu_fw_mem_fini(struct vpu_device *vdev)
+{
+	struct vpu_fw_info *fw = vdev->fw;
+
+	if (fw->mem_shave_nn) {
+		vpu_bo_free_internal(fw->mem_shave_nn);
+		fw->mem_shave_nn = NULL;
+	}
+
+	vpu_bo_free_internal(fw->mem);
+	fw->mem = NULL;
+}
+
+int vpu_fw_init(struct vpu_device *vdev)
+{
+	int ret;
+
+	ret = vpu_fw_request(vdev);
+	if (ret)
+		return ret;
+
+	ret = vpu_fw_parse(vdev);
+	if (ret)
+		goto err_fw_release;
+
+	ret = vpu_fw_mem_init(vdev);
+	if (ret)
+		goto err_fw_release;
+
+	return 0;
+
+err_fw_release:
+	vpu_fw_release(vdev);
+	return ret;
+}
+
+void vpu_fw_fini(struct vpu_device *vdev)
+{
+	vpu_fw_mem_fini(vdev);
+	vpu_fw_release(vdev);
+}
+
+int vpu_fw_load(struct vpu_device *vdev)
+{
+	struct vpu_fw_info *fw = vdev->fw;
+	u64 image_end_offset = fw->image_load_offset + fw->image_size;
+	int ret;
+
+	ret = vpu_bo_vremap_internal(vdev, fw->mem, true);
+	if (ret)
+		return ret;
+
+	memset(fw->mem->kvaddr, 0, fw->image_load_offset);
+	memcpy(fw->mem->kvaddr + fw->image_load_offset,
+	       fw->file->data + FW_FILE_IMAGE_OFFSET, fw->image_size);
+	clflush_cache_range(fw->mem->kvaddr, image_end_offset);
+
+	if (VPU_WA(clear_runtime_mem)) {
+		u8 *start = fw->mem->kvaddr + image_end_offset;
+		u64 size = fw->mem->base.size - image_end_offset;
+
+		memset(start, 0, size);
+		clflush_cache_range(start, size);
+	}
+
+	return vpu_bo_vremap_internal(vdev, fw->mem, false);
+}
+
+static void vpu_fw_boot_params_print(struct vpu_device *vdev, struct vpu_boot_params *boot_params)
+{
+	vpu_dbg(FW_BOOT, "boot_params.magic = 0x%x\n",
+		boot_params->magic);
+	vpu_dbg(FW_BOOT, "boot_params.vpu_id = 0x%x\n",
+		boot_params->vpu_id);
+	vpu_dbg(FW_BOOT, "boot_params.vpu_count = 0x%x\n",
+		boot_params->vpu_count);
+	vpu_dbg(FW_BOOT, "boot_params.frequency = %u\n",
+		boot_params->frequency);
+
+	vpu_dbg(FW_BOOT, "boot_params.ipc_header_area_start = 0x%llx\n",
+		boot_params->ipc_header_area_start);
+	vpu_dbg(FW_BOOT, "boot_params.ipc_header_area_size = 0x%x\n",
+		boot_params->ipc_header_area_size);
+	vpu_dbg(FW_BOOT, "boot_params.shared_region_base = 0x%llx\n",
+		boot_params->shared_region_base);
+	vpu_dbg(FW_BOOT, "boot_params.shared_region_size = 0x%x\n",
+		boot_params->shared_region_size);
+	vpu_dbg(FW_BOOT, "boot_params.ipc_payload_area_start = 0x%llx\n",
+		boot_params->ipc_payload_area_start);
+	vpu_dbg(FW_BOOT, "boot_params.ipc_payload_area_size = 0x%x\n",
+		boot_params->ipc_payload_area_size);
+	vpu_dbg(FW_BOOT, "boot_params.global_aliased_pio_base = 0x%llx\n",
+		boot_params->global_aliased_pio_base);
+	vpu_dbg(FW_BOOT, "boot_params.global_aliased_pio_size = 0x%x\n",
+		boot_params->global_aliased_pio_size);
+
+	vpu_dbg(FW_BOOT, "boot_params.autoconfig = 0x%x\n",
+		boot_params->autoconfig);
+
+	vpu_dbg(FW_BOOT, "boot_params.cache_defaults[VPU_BOOT_L2_CACHE_CFG_NN].use = 0x%x\n",
+		boot_params->cache_defaults[VPU_BOOT_L2_CACHE_CFG_NN].use);
+	vpu_dbg(FW_BOOT, "boot_params.cache_defaults[VPU_BOOT_L2_CACHE_CFG_NN].cfg = 0x%x\n",
+		boot_params->cache_defaults[VPU_BOOT_L2_CACHE_CFG_NN].cfg);
+
+	vpu_dbg(FW_BOOT, "boot_params.global_memory_allocator_base = 0x%llx\n",
+		boot_params->global_memory_allocator_base);
+	vpu_dbg(FW_BOOT, "boot_params.global_memory_allocator_size = 0x%x\n",
+		boot_params->global_memory_allocator_size);
+
+	vpu_dbg(FW_BOOT, "boot_params.shave_nn_fw_base = 0x%llx\n",
+		boot_params->shave_nn_fw_base);
+
+	vpu_dbg(FW_BOOT, "boot_params.watchdog_irq_mss = 0x%x\n",
+		boot_params->watchdog_irq_mss);
+	vpu_dbg(FW_BOOT, "boot_params.watchdog_irq_nce = 0x%x\n",
+		boot_params->watchdog_irq_nce);
+	vpu_dbg(FW_BOOT, "boot_params.host_to_vpu_irq = 0x%x\n",
+		boot_params->host_to_vpu_irq);
+	vpu_dbg(FW_BOOT, "boot_params.vpu_to_host_irq = 0x%x\n",
+		boot_params->vpu_to_host_irq);
+
+	vpu_dbg(FW_BOOT, "boot_params.host_version_id = 0x%x\n",
+		boot_params->host_version_id);
+	vpu_dbg(FW_BOOT, "boot_params.si_stepping = 0x%x\n",
+		boot_params->si_stepping);
+	vpu_dbg(FW_BOOT, "boot_params.device_id = 0x%llx\n",
+		boot_params->device_id);
+	vpu_dbg(FW_BOOT, "boot_params.feature_exclusion = 0x%llx\n",
+		boot_params->feature_exclusion);
+	vpu_dbg(FW_BOOT, "boot_params.sku = %lld\n",
+		boot_params->sku);
+	vpu_dbg(FW_BOOT, "boot_params.min_freq_pll_ratio = 0x%x\n",
+		boot_params->min_freq_pll_ratio);
+	vpu_dbg(FW_BOOT, "boot_params.max_freq_pll_ratio = 0x%x\n",
+		boot_params->max_freq_pll_ratio);
+	vpu_dbg(FW_BOOT, "boot_params.default_trace_level = 0x%x\n",
+		boot_params->default_trace_level);
+	vpu_dbg(FW_BOOT, "boot_params.tracing_buff_message_format_mask = 0x%llx\n",
+		boot_params->tracing_buff_message_format_mask);
+	vpu_dbg(FW_BOOT, "boot_params.trace_destination_mask = 0x%x\n",
+		boot_params->trace_destination_mask);
+	vpu_dbg(FW_BOOT, "boot_params.trace_hw_component_mask = 0x%llx\n",
+		boot_params->trace_hw_component_mask);
+	vpu_dbg(FW_BOOT, "boot_params.boot_type = 0x%x\n",
+		boot_params->boot_type);
+	vpu_dbg(FW_BOOT, "boot_params.punit_telemetry_sram_base = 0x%llx\n",
+		boot_params->punit_telemetry_sram_base);
+	vpu_dbg(FW_BOOT, "boot_params.punit_telemetry_sram_size = 0x%llx\n",
+		boot_params->punit_telemetry_sram_size);
+	vpu_dbg(FW_BOOT, "boot_params.vpu_telemetry_enable = 0x%x\n",
+		boot_params->vpu_telemetry_enable);
+}
+
+void vpu_fw_boot_params_setup(struct vpu_device *vdev, struct vpu_boot_params *boot_params)
+{
+	struct vpu_bo *ipc_mem_rx = vdev->ipc->mem_rx;
+
+	/* In case of warm boot we only have to reset the entrypoint addr */
+	if (!vpu_fw_is_cold_boot(vdev)) {
+		boot_params->save_restore_ret_address = 0;
+		return;
+	}
+
+	boot_params->magic = VPU_BOOT_PARAMS_MAGIC;
+	boot_params->vpu_id = to_pci_dev(vdev->drm.dev)->bus->number;
+	boot_params->frequency = vpu_hw_reg_pll_freq_get(vdev);
+
+	/*
+	 * Uncached region of VPU address space, covers IPC buffers, job queues
+	 * and log buffers, programmable to L2$ Uncached by VPU MTRR
+	 */
+	boot_params->shared_region_base = vdev->hw->ranges.global_low.start;
+	boot_params->shared_region_size = vdev->hw->ranges.global_low.end -
+					  vdev->hw->ranges.global_low.start;
+
+	boot_params->ipc_header_area_start = ipc_mem_rx->vpu_addr;
+	boot_params->ipc_header_area_size = ipc_mem_rx->base.size / 2;
+
+	boot_params->ipc_payload_area_start = ipc_mem_rx->vpu_addr + ipc_mem_rx->base.size / 2;
+	boot_params->ipc_payload_area_size = ipc_mem_rx->base.size / 2;
+
+	boot_params->global_aliased_pio_base =
+		vdev->hw->ranges.global_aliased_pio.start;
+	boot_params->global_aliased_pio_size =
+		vpu_hw_range_size(&vdev->hw->ranges.global_aliased_pio);
+
+	/* Allow configuration for L2C_PAGE_TABLE with boot param value */
+	boot_params->autoconfig = 1;
+
+	/* Enable L2 cache for first 2GB of high memory */
+	boot_params->cache_defaults[VPU_BOOT_L2_CACHE_CFG_NN].use = 1;
+	boot_params->cache_defaults[VPU_BOOT_L2_CACHE_CFG_NN].cfg =
+		ADDR_TO_L2_CACHE_CFG(vdev->hw->ranges.global_high.start);
+
+	if (vdev->fw->mem_shave_nn)
+		boot_params->shave_nn_fw_base = vdev->fw->mem_shave_nn->vpu_addr;
+
+	boot_params->watchdog_irq_mss = WATCHDOG_MSS_REDIRECT;
+	boot_params->watchdog_irq_nce = WATCHDOG_NCE_REDIRECT;
+	boot_params->sku = vdev->hw->sku;
+
+	boot_params->min_freq_pll_ratio = vdev->hw->pll.min_ratio;
+	boot_params->max_freq_pll_ratio = vdev->hw->pll.max_ratio;
+
+	boot_params->punit_telemetry_sram_base = vpu_hw_reg_telemetry_offset_get(vdev);
+	boot_params->punit_telemetry_sram_size = vpu_hw_reg_telemetry_size_get(vdev);
+	boot_params->vpu_telemetry_enable = vpu_hw_reg_telemetry_enable_get(vdev);
+
+	vpu_fw_boot_params_print(vdev, boot_params);
+}
diff --git a/drivers/gpu/drm/vpu/vpu_fw.h b/drivers/gpu/drm/vpu/vpu_fw.h
new file mode 100644
index 000000000000..932bae42ca41
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_fw.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_FW_H__
+#define __VPU_FW_H__
+
+struct vpu_device;
+struct vpu_bo;
+struct vpu_boot_params;
+
+struct vpu_fw_info {
+	const struct firmware *file;
+	struct vpu_bo *mem;
+	struct vpu_bo *mem_shave_nn;
+	struct vpu_bo *mem_log_crit;
+	struct vpu_bo *mem_log_verb;
+	u64 runtime_addr;
+	u32 runtime_size;
+	u64 image_load_offset;
+	u32 image_size;
+	u32 shave_nn_size;
+	u64 entry_point; /* Cold or warm boot entry point for next boot */
+	u64 cold_boot_entry_point;
+};
+
+int vpu_fw_init(struct vpu_device *vdev);
+void vpu_fw_fini(struct vpu_device *vdev);
+int vpu_fw_load(struct vpu_device *vdev);
+void vpu_fw_boot_params_setup(struct vpu_device *vdev, struct vpu_boot_params *bp);
+
+static inline bool vpu_fw_is_cold_boot(struct vpu_device *vdev)
+{
+	return vdev->fw->entry_point == vdev->fw->cold_boot_entry_point;
+}
+
+#endif /* __VPU_FW_H__ */
diff --git a/drivers/gpu/drm/vpu/vpu_hw_mtl.c b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
index b53ec7b9cc4d..ba24dc29f962 100644
--- a/drivers/gpu/drm/vpu/vpu_hw_mtl.c
+++ b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
@@ -4,6 +4,7 @@
  */
 
 #include "vpu_drv.h"
+#include "vpu_fw.h"
 #include "vpu_hw_mtl_reg.h"
 #include "vpu_hw_reg_io.h"
 #include "vpu_hw.h"
@@ -583,6 +584,16 @@ static void vpu_boot_soc_cpu_boot(struct vpu_device *vdev)
 
 	val = REG_CLR_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val);
 	REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
+
+	val = vdev->fw->entry_point >> 9;
+	REGV_WR32(MTL_VPU_HOST_SS_LOADING_ADDRESS_LO, val);
+
+	val = REG_SET_FLD(MTL_VPU_HOST_SS_LOADING_ADDRESS_LO, DONE, val);
+	REGV_WR32(MTL_VPU_HOST_SS_LOADING_ADDRESS_LO, val);
+
+	vpu_dbg(PM, "Booting firmware, mode: %s\n",
+		vdev->fw->entry_point == vdev->fw->cold_boot_entry_point ?
+		"cold boot" : "resume");
 }
 
 static int vpu_boot_d0i3_drive(struct vpu_device *vdev, bool enable)
diff --git a/include/uapi/drm/vpu_drm.h b/include/uapi/drm/vpu_drm.h
index 8793ed06bfa9..b0492225433d 100644
--- a/include/uapi/drm/vpu_drm.h
+++ b/include/uapi/drm/vpu_drm.h
@@ -51,6 +51,11 @@ extern "C" {
 #define DRM_VPU_PARAM_CONTEXT_BASE_ADDRESS 5
 #define DRM_VPU_PARAM_CONTEXT_PRIORITY	   6
 #define DRM_VPU_PARAM_CONTEXT_ID	   7
+#define DRM_VPU_PARAM_FW_API_VERSION	   8
+#define DRM_VPU_PARAM_ENGINE_HEARTBEAT	   9
+#define DRM_VPU_PARAM_UNIQUE_INFERENCE_ID  10
+#define DRM_VPU_PARAM_TILE_CONFIG	   11
+#define DRM_VPU_PARAM_SKU		   12
 
 #define DRM_VPU_PLATFORM_TYPE_SILICON	   0
 
@@ -94,6 +99,22 @@ struct drm_vpu_param {
 	 * %DRM_VPU_PARAM_CONTEXT_ID:
 	 * Current context ID, always greater than 0 (read-only)
 	 *
+	 * %DRM_VPU_PARAM_FW_API_VERSION:
+	 * Firmware API version array (read-only)
+	 *
+	 * %DRM_VPU_PARAM_ENGINE_HEARTBEAT:
+	 * Heartbeat value from an engine (read-only).
+	 * Engine ID (e.g. %DRM_VPU_ENGINE_COMPUTE) is given via the index field.
+	 *
+	 * %DRM_VPU_PARAM_UNIQUE_INFERENCE_ID:
+	 * Device-unique inference ID (read-only)
+	 *
+	 * %DRM_VPU_PARAM_TILE_CONFIG:
+	 * VPU tile configuration (read-only)
+	 *
+	 * %DRM_VPU_PARAM_SKU:
+	 * VPU SKU ID (read-only)
+	 *
 	 */
 	__u32 param;
 
-- 
2.34.1



* [PATCH v1 6/7] drm/vpu: Add command buffer submission logic
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
                   ` (4 preceding siblings ...)
  2022-07-28 13:17 ` [PATCH v1 5/7] drm/vpu: Implement firmware parsing and booting Jacek Lawrynowicz
@ 2022-07-28 13:17 ` Jacek Lawrynowicz
  2022-07-28 13:17 ` [PATCH v1 7/7] drm/vpu: Add PM support Jacek Lawrynowicz
  2022-08-08  2:34 ` [PATCH v1 0/7] New DRM driver for Intel VPU Dave Airlie
  7 siblings, 0 replies; 11+ messages in thread
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

Each of the user contexts has two command queues, one for the compute
engine and one for the copy engine. Command queues are allocated and
registered in the device when the first job (command buffer) is
submitted from user space to the VPU device. User space provides a list
of GEM buffer object handles to submit to the VPU; the driver resolves
the buffer handles, pins physical memory if needed, increments the
reference count for each buffer and stores pointers to the buffer
objects in the vpu_job object that tracks the job submitted to the
device. The VPU signals job completion with an asynchronous message
that contains the job id passed to the firmware when the job was
submitted.

Currently, the driver supports a simple scheduling logic where jobs
submitted from user space are immediately pushed to the VPU device
command queues. In the future, it will be extended to use
hardware-based scheduling and/or drm_sched.
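
Not part of the patch, only an illustration: a minimal user-space sketch
of how the new SUBMIT and BO_WAIT ioctls could be driven. The open render
node fd, the already created and filled command buffer BO (handle and
size), the installed uapi header path and the CLOCK_MONOTONIC time base
are assumptions here, not something this patch defines:

  #include <stdint.h>
  #include <time.h>
  #include <sys/ioctl.h>
  #include <drm/vpu_drm.h>	/* assumed install path of the new uapi header */

  /* Submit a single pre-filled command buffer BO and wait for its completion. */
  static int vpu_submit_and_wait(int fd, uint32_t cmd_bo_handle, uint32_t cmd_bo_size)
  {
  	uint32_t handles[1] = { cmd_bo_handle };	/* bos[0] is the command buffer */
  	struct drm_vpu_submit submit = {
  		.buffers_ptr = (uint64_t)(uintptr_t)handles,
  		.buffer_count = 1,
  		.engine = DRM_VPU_ENGINE_COMPUTE,
  		.commands_offset = 0,			/* 8-byte aligned */
  		.status_offset = cmd_bo_size - 8,	/* 8-byte aligned, != commands_offset */
  	};
  	struct drm_vpu_bo_wait wait = { .handle = cmd_bo_handle };
  	struct timespec ts;

  	if (ioctl(fd, DRM_IOCTL_VPU_SUBMIT, &submit))
  		return -1;

  	/* Absolute timeout ~1s from now; clock base assumed to match the driver. */
  	clock_gettime(CLOCK_MONOTONIC, &ts);
  	wait.timeout_ns = (int64_t)(ts.tv_sec + 1) * 1000000000LL + ts.tv_nsec;

  	/* Blocks until the done fence attached to the command buffer signals. */
  	return ioctl(fd, DRM_IOCTL_VPU_BO_WAIT, &wait);
  }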

Signed-off-by: Andrzej Kacprowski <andrzej.kacprowski@linux.intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/gpu/drm/vpu/Makefile  |   1 +
 drivers/gpu/drm/vpu/vpu_drv.c |  26 +-
 drivers/gpu/drm/vpu/vpu_drv.h |   6 +-
 drivers/gpu/drm/vpu/vpu_gem.c |  13 +
 drivers/gpu/drm/vpu/vpu_job.c | 611 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_job.h |  73 ++++
 include/uapi/drm/vpu_drm.h    |  96 ++++++
 7 files changed, 822 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/vpu/vpu_job.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_job.h

diff --git a/drivers/gpu/drm/vpu/Makefile b/drivers/gpu/drm/vpu/Makefile
index 995a3a9c6777..70493dacecda 100644
--- a/drivers/gpu/drm/vpu/Makefile
+++ b/drivers/gpu/drm/vpu/Makefile
@@ -7,6 +7,7 @@ intel_vpu-y := \
 	vpu_gem.o \
 	vpu_hw_mtl.o \
 	vpu_ipc.o \
+	vpu_job.o \
 	vpu_jsm_msg.o \
 	vpu_mmu.o \
 	vpu_mmu_context.o
diff --git a/drivers/gpu/drm/vpu/vpu_drv.c b/drivers/gpu/drm/vpu/vpu_drv.c
index d0cdbb791e1f..74db0cb18491 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.c
+++ b/drivers/gpu/drm/vpu/vpu_drv.c
@@ -19,6 +19,7 @@
 #include "vpu_gem.h"
 #include "vpu_hw.h"
 #include "vpu_ipc.h"
+#include "vpu_job.h"
 #include "vpu_jsm_msg.h"
 #include "vpu_mmu.h"
 #include "vpu_mmu_context.h"
@@ -78,8 +79,11 @@ static void file_priv_release(struct kref *ref)
 
 	vpu_dbg(FILE, "file_priv release: ctx %u\n", file_priv->ctx.id);
 
-	if (file_priv->ctx.id)
+	if (file_priv->ctx.id) {
+		vpu_cmdq_release_all(file_priv);
+		vpu_bo_remove_all_bos_from_context(&file_priv->ctx);
 		vpu_mmu_user_context_fini(file_priv);
+	}
 
 	kfree(file_priv);
 }
@@ -222,6 +226,8 @@ static const struct drm_ioctl_desc vpu_drm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(VPU_BO_CREATE, vpu_bo_create_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(VPU_BO_INFO, vpu_bo_info_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(VPU_BO_USERPTR, vpu_bo_userptr_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(VPU_SUBMIT, vpu_submit_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(VPU_BO_WAIT, vpu_bo_wait_ioctl, DRM_RENDER_ALLOW),
 };
 
 DEFINE_DRM_GEM_FOPS(vpu_fops);
@@ -429,6 +435,7 @@ static int vpu_dev_init(struct vpu_device *vdev)
 	vdev->context_xa_limit.min = VPU_GLOBAL_CONTEXT_MMU_SSID + 1;
 	vdev->context_xa_limit.max = VPU_CONTEXT_LIMIT;
 
+	xa_init_flags(&vdev->submitted_jobs_xa, XA_FLAGS_ALLOC1);
 	atomic64_set(&vdev->unique_id_counter, 0);
 
 	ret = vpu_pci_init(vdev);
@@ -479,20 +486,30 @@ static int vpu_dev_init(struct vpu_device *vdev)
 		goto err_fw_fini;
 	}
 
+	ret = vpu_job_done_thread_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize job done thread: %d\n", ret);
+		goto err_ipc_fini;
+	}
+
 	ret = vpu_fw_load(vdev);
 	if (ret) {
 		vpu_err(vdev, "Failed to load firmware: %d\n", ret);
-		goto err_fw_fini;
+		goto err_job_done_thread_fini;
 	}
 
 	ret = vpu_boot(vdev);
 	if (ret) {
 		vpu_err(vdev, "Failed to boot: %d\n", ret);
-		goto err_fw_fini;
+		goto err_job_done_thread_fini;
 	}
 
 	return 0;
 
+err_job_done_thread_fini:
+	vpu_job_done_thread_fini(vdev);
+err_ipc_fini:
+	vpu_ipc_fini(vdev);
 err_fw_fini:
 	vpu_fw_fini(vdev);
 err_mmu_fini:
@@ -512,6 +529,7 @@ static void vpu_dev_fini(struct vpu_device *vdev)
 {
 	vpu_shutdown(vdev);
 
+	vpu_job_done_thread_fini(vdev);
 	vpu_ipc_fini(vdev);
 	vpu_fw_fini(vdev);
 	vpu_mmu_fini(vdev);
@@ -521,6 +539,8 @@ static void vpu_dev_fini(struct vpu_device *vdev)
 
 	WARN_ON(!xa_empty(&vdev->context_xa));
 	xa_destroy(&vdev->context_xa);
+	WARN_ON(!xa_empty(&vdev->submitted_jobs_xa));
+	xa_destroy(&vdev->submitted_jobs_xa);
 }
 
 static struct pci_device_id vpu_pci_ids[] = {
diff --git a/drivers/gpu/drm/vpu/vpu_drv.h b/drivers/gpu/drm/vpu/vpu_drv.h
index 94c8712396a6..f4898399e64b 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.h
+++ b/drivers/gpu/drm/vpu/vpu_drv.h
@@ -94,6 +94,9 @@ struct vpu_device {
 	struct xarray context_xa;
 	struct xa_limit context_xa_limit;
 
+	struct xarray submitted_jobs_xa;
+	struct task_struct *job_done_thread;
+
 	atomic64_t unique_id_counter;
 
 	struct {
@@ -107,7 +110,8 @@ struct vpu_device {
 struct vpu_file_priv {
 	struct kref ref;
 	struct vpu_device *vdev;
-	struct mutex lock;
+	struct mutex lock; /* Protects cmdq and context init */
+	struct vpu_cmdq *cmdq[VPU_NUM_ENGINES];
 	struct vpu_mmu_context ctx;
 	u32 priority;
 };
diff --git a/drivers/gpu/drm/vpu/vpu_gem.c b/drivers/gpu/drm/vpu/vpu_gem.c
index 12f82ab941bd..ca1760fd897b 100644
--- a/drivers/gpu/drm/vpu/vpu_gem.c
+++ b/drivers/gpu/drm/vpu/vpu_gem.c
@@ -786,6 +786,19 @@ int vpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	return ret;
 }
 
+int vpu_bo_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_vpu_bo_wait *args = data;
+	unsigned long timeout = drm_timeout_abs_to_jiffies(args->timeout_ns);
+	long ret;
+
+	ret = drm_gem_dma_resv_wait(file, args->handle, true, timeout);
+	if (ret == -ETIME)
+		ret = -ETIMEDOUT;
+
+	return ret;
+}
+
 static void vpu_bo_print_info(struct vpu_bo *bo, struct drm_printer *p)
 {
 	unsigned long dma_refcount = 0;
diff --git a/drivers/gpu/drm/vpu/vpu_job.c b/drivers/gpu/drm/vpu/vpu_job.c
new file mode 100644
index 000000000000..16ca280d12b2
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_job.c
@@ -0,0 +1,611 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <drm/drm_file.h>
+
+#include <linux/bitfield.h>
+#include <linux/highmem.h>
+#include <linux/kthread.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <uapi/drm/vpu_drm.h>
+
+#include "vpu_drv.h"
+#include "vpu_hw.h"
+#include "vpu_ipc.h"
+#include "vpu_job.h"
+#include "vpu_jsm_msg.h"
+
+#define CMD_BUF_IDX	    0
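+/*
+ * Job IDs handed back to user space encode a per-context job index in bits 7:0
+ * and (context id - 1) in bits 31:8, see the job_id_range setup in
+ * vpu_direct_job_submission().
+ */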
+#define JOB_ID_JOB_MASK	    GENMASK(7, 0)
+#define JOB_ID_CONTEXT_MASK GENMASK(31, 8)
+
+static unsigned int vpu_tdr_timeout_ms;
+module_param_named(tdr_timeout_ms, vpu_tdr_timeout_ms, uint, 0644);
+MODULE_PARM_DESC(tdr_timeout_ms, "Timeout for device hang detection, in milliseconds, 0 - default");
+
+static void vpu_cmdq_ring_db(struct vpu_device *vdev, struct vpu_cmdq *cmdq)
+{
+	vpu_hw_reg_db_set(vdev, cmdq->db_id);
+}
+
+static struct vpu_cmdq *vpu_cmdq_alloc(struct vpu_file_priv *file_priv, u16 engine)
+{
+	struct vpu_device *vdev = file_priv->vdev;
+	struct vpu_job_queue_header *jobq_header;
+	struct vpu_cmdq *cmdq;
+
+	cmdq = kzalloc(sizeof(*cmdq), GFP_KERNEL);
+	if (!cmdq)
+		return NULL;
+
+	cmdq->mem = vpu_bo_alloc_internal(vdev, 0, SZ_4K, true);
+	if (!cmdq->mem)
+		goto cmdq_free;
+
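+	/* Doorbell ids are unique per (context, engine) pair */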
+	cmdq->db_id = file_priv->ctx.id + engine * vpu_get_context_count(vdev);
+	cmdq->entry_count = (u32)((cmdq->mem->base.size - sizeof(struct vpu_job_queue_header)) /
+				  sizeof(struct vpu_job_queue_entry));
+
+	cmdq->jobq = (struct vpu_job_queue *)cmdq->mem->kvaddr;
+	jobq_header = &cmdq->jobq->header;
+	jobq_header->engine_idx = engine;
+	jobq_header->head = 0;
+	jobq_header->tail = 0;
+
+	return cmdq;
+
+cmdq_free:
+	kfree(cmdq);
+	return NULL;
+}
+
+static void vpu_cmdq_free(struct vpu_file_priv *file_priv, struct vpu_cmdq *cmdq)
+{
+	if (!cmdq)
+		return;
+
+	vpu_bo_free_internal(cmdq->mem);
+	kfree(cmdq);
+}
+
+static struct vpu_cmdq *vpu_cmdq_acquire(struct vpu_file_priv *file_priv, u16 engine)
+{
+	struct vpu_device *vdev = file_priv->vdev;
+	struct vpu_cmdq *cmdq = file_priv->cmdq[engine];
+	int ret;
+
+	lockdep_assert_held(&file_priv->lock);
+
+	if (!cmdq) {
+		cmdq = vpu_cmdq_alloc(file_priv, engine);
+		if (!cmdq)
+			return NULL;
+		file_priv->cmdq[engine] = cmdq;
+	}
+
+	if (cmdq->db_registered)
+		return cmdq;
+
+	ret = vpu_jsm_register_db(vdev, file_priv->ctx.id, cmdq->db_id,
+				  cmdq->mem->vpu_addr, cmdq->mem->base.size);
+	if (ret)
+		return NULL;
+
+	cmdq->db_registered = true;
+
+	return cmdq;
+}
+
+static void vpu_cmdq_release_locked(struct vpu_file_priv *file_priv, u16 engine)
+{
+	struct vpu_cmdq *cmdq = file_priv->cmdq[engine];
+
+	lockdep_assert_held(&file_priv->lock);
+
+	if (cmdq) {
+		file_priv->cmdq[engine] = NULL;
+		if (cmdq->db_registered)
+			vpu_jsm_unregister_db(file_priv->vdev, cmdq->db_id);
+
+		vpu_cmdq_free(file_priv, cmdq);
+	}
+}
+
+void vpu_cmdq_release_all(struct vpu_file_priv *file_priv)
+{
+	int i;
+
+	mutex_lock(&file_priv->lock);
+
+	for (i = 0; i < VPU_NUM_ENGINES; i++)
+		vpu_cmdq_release_locked(file_priv, i);
+
+	mutex_unlock(&file_priv->lock);
+}
+
+/*
+ * Mark the doorbell as unregistered and reset job queue pointers.
+ * This function needs to be called when the VPU hardware is restarted
+ * and FW looses job queue state. The next time job queue is used it
+ * will be registered again.
+ */
+static void vpu_cmdq_reset_locked(struct vpu_file_priv *file_priv, u16 engine)
+{
+	struct vpu_cmdq *cmdq = file_priv->cmdq[engine];
+
+	lockdep_assert_held(&file_priv->lock);
+
+	if (cmdq) {
+		cmdq->db_registered = false;
+		cmdq->jobq->header.head = 0;
+		cmdq->jobq->header.tail = 0;
+	}
+}
+
+static void vpu_cmdq_reset_all(struct vpu_file_priv *file_priv)
+{
+	int i;
+
+	mutex_lock(&file_priv->lock);
+
+	for (i = 0; i < VPU_NUM_ENGINES; i++)
+		vpu_cmdq_reset_locked(file_priv, i);
+
+	mutex_unlock(&file_priv->lock);
+}
+
+void vpu_cmdq_reset_all_contexts(struct vpu_device *vdev)
+{
+	struct vpu_file_priv *file_priv;
+	unsigned long id;
+
+	xa_for_each(&vdev->context_xa, id, file_priv) {
+		if (!file_priv)
+			continue;
+
+		vpu_cmdq_reset_all(file_priv);
+	}
+}
+
+static int vpu_cmdq_push_job(struct vpu_cmdq *cmdq, struct vpu_job *job)
+{
+	struct vpu_device *vdev = job->vdev;
+	struct vpu_job_queue_header *header = &cmdq->jobq->header;
+	struct vpu_job_queue_entry *entry = &cmdq->jobq->job[header->tail];
+	u32 next_entry = (header->tail + 1) % cmdq->entry_count;
+
+	/* Check if there is space left in job queue */
+	if (next_entry == header->head) {
+		vpu_dbg(JOB, "Job queue full: ctx %d engine %d db %d head %d tail %d\n",
+			job->file_priv->ctx.id, job->engine_idx, cmdq->db_id,
+			header->head, header->tail);
+		return -EBUSY;
+	}
+
+	entry->batch_buf_addr = job->cmd_buf_vpu_addr;
+	entry->job_id = job->job_id;
+	entry->flags = 0;
+
+	mb(); /* Make sure the entry is written before updating the queue tail */
+	header->tail = next_entry;
+	mb(); /* Make sure the tail is updated before ringing the doorbell */
+
+	return 0;
+}
+
+struct vpu_fence {
+	struct dma_fence base;
+	spinlock_t lock; /* protects base */
+	struct vpu_device *vdev;
+};
+
+static inline struct vpu_fence *to_vpu_fence(struct dma_fence *fence)
+{
+	return container_of(fence, struct vpu_fence, base);
+}
+
+static const char *vpu_fence_get_driver_name(struct dma_fence *fence)
+{
+	return DRIVER_NAME;
+}
+
+static const char *vpu_fence_get_timeline_name(struct dma_fence *fence)
+{
+	struct vpu_fence *vpu_fence = to_vpu_fence(fence);
+
+	return dev_name(vpu_fence->vdev->drm.dev);
+}
+
+static const struct dma_fence_ops vpu_fence_ops = {
+	.get_driver_name = vpu_fence_get_driver_name,
+	.get_timeline_name = vpu_fence_get_timeline_name,
+};
+
+static struct dma_fence *vpu_fence_create(struct vpu_device *vdev)
+{
+	struct vpu_fence *fence;
+
+	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		return NULL;
+
+	fence->vdev = vdev;
+	spin_lock_init(&fence->lock);
+	dma_fence_init(&fence->base, &vpu_fence_ops, &fence->lock, dma_fence_context_alloc(1), 1);
+
+	return &fence->base;
+}
+
+static void job_get(struct vpu_job *job, struct vpu_job **link)
+{
+	struct vpu_device *vdev = job->vdev;
+
+	vpu_dbg(KREF, "Job get: id %u refcount %u\n", job->job_id, kref_read(&job->ref));
+
+	kref_get(&job->ref);
+	*link = job;
+}
+
+static void job_release(struct kref *ref)
+{
+	struct vpu_job *job = container_of(ref, struct vpu_job, ref);
+	struct vpu_device *vdev = job->vdev;
+	u32 i;
+
+	for (i = 0; i < job->bo_count; i++)
+		if (job->bos[i])
+			drm_gem_object_put(&job->bos[i]->base);
+
+	dma_fence_put(job->done_fence);
+	vpu_file_priv_put(&job->file_priv);
+
+	vpu_dbg(KREF, "Job released: id %u\n", job->job_id);
+	kfree(job);
+}
+
+static void job_put(struct vpu_job *job)
+{
+	struct vpu_device *vdev = job->vdev;
+
+	vpu_dbg(KREF, "Job put: id %u refcount %u\n", job->job_id, kref_read(&job->ref));
+	kref_put(&job->ref, job_release);
+}
+
+static struct vpu_job *
+vpu_create_job(struct vpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
+{
+	struct vpu_device *vdev = file_priv->vdev;
+	struct vpu_job *job;
+	size_t buf_size;
+
+	buf_size = sizeof(*job) + bo_count * sizeof(struct vpu_bo *);
+	job = kzalloc(buf_size, GFP_KERNEL);
+	if (!job)
+		return NULL;
+
+	kref_init(&job->ref);
+
+	job->vdev = vdev;
+	job->engine_idx = engine_idx;
+	job->bo_count = bo_count;
+	job->done_fence = vpu_fence_create(vdev);
+	if (!job->done_fence) {
+		vpu_warn_ratelimited(vdev, "Failed to create a fence\n");
+		goto err_free_job;
+	}
+
+	vpu_file_priv_get(file_priv, &job->file_priv);
+
+	vpu_dbg(JOB, "Job created: ctx %2d engine %d\n", job->file_priv->ctx.id, job->engine_idx);
+
+	return job;
+
+err_free_job:
+	kfree(job);
+	return NULL;
+}
+
+static void vpu_job_update_status(struct page *p, struct vpu_job *job, u32 job_status)
+{
+	void *dst = kmap_local_page(p);
+	drm_vpu_job_status_t *status = dst + (job->submit_status_offset % PAGE_SIZE);
+
+	*status = job_status;
+
+	flush_dcache_page(p);
+	kunmap_local(dst);
+}
+
+static int vpu_job_done(struct vpu_device *vdev, u32 job_id, u32 job_status)
+{
+	struct vpu_job *job;
+	struct page *p;
+
+	job = xa_erase(&vdev->submitted_jobs_xa, job_id);
+	if (!job)
+		return -ENOENT;
+
+	p = vpu_bo_get_page(job->bos[CMD_BUF_IDX], job->submit_status_offset);
+	if (!p) {
+		vpu_warn(vdev, "Failed to get cmd bo status page\n");
+		goto job_put;
+	}
+	vpu_job_update_status(p, job, job_status);
+	dma_fence_signal(job->done_fence);
+
+job_put:
+	vpu_dbg(JOB, "Job complete:  id %3u ctx %2d engine %d status 0x%x\n",
+		job->job_id, job->file_priv->ctx.id, job->engine_idx, job_status);
+
+	job_put(job);
+	return 0;
+}
+
+static void vpu_job_done_message(struct vpu_device *vdev, void *msg)
+{
+	struct vpu_ipc_msg_payload_job_done *payload;
+	struct vpu_jsm_msg *job_ret_msg = msg;
+	int ret;
+
+	payload = (struct vpu_ipc_msg_payload_job_done *)&job_ret_msg->payload;
+
+	ret = vpu_job_done(vdev, payload->job_id, payload->job_status);
+	if (ret)
+		vpu_err(vdev, "Failed to finish job %d: %d\n", payload->job_id, ret);
+}
+
+void vpu_jobs_abort_all(struct vpu_device *vdev)
+{
+	struct vpu_job *job;
+	unsigned long id;
+
+	xa_for_each(&vdev->submitted_jobs_xa, id, job)
+		vpu_job_done(vdev, id, VPU_JSM_STATUS_ABORTED);
+}
+
+static int vpu_direct_job_submission(struct vpu_job *job)
+{
+	struct vpu_file_priv *file_priv = job->file_priv;
+	struct vpu_device *vdev = job->vdev;
+	struct xa_limit job_id_range;
+	struct vpu_cmdq *cmdq;
+	int ret;
+
+	mutex_lock(&file_priv->lock);
+
+	cmdq = vpu_cmdq_acquire(job->file_priv, job->engine_idx);
+	if (!cmdq) {
+		vpu_warn(vdev, "Failed to get job queue, ctx %d engine %d\n",
+			 file_priv->ctx.id, job->engine_idx);
+		ret = -EINVAL;
+		goto err_unlock;
+	}
+
+	job_id_range.min = FIELD_PREP(JOB_ID_CONTEXT_MASK, (file_priv->ctx.id - 1));
+	job_id_range.max = job_id_range.min | JOB_ID_JOB_MASK;
+
+	job_get(job, &job);
+	ret = xa_alloc(&vdev->submitted_jobs_xa, &job->job_id, job, job_id_range, GFP_KERNEL);
+	if (ret) {
+		vpu_warn_ratelimited(vdev, "Failed to allocate job id: %d\n", ret);
+		goto err_job_put;
+	}
+
+	ret = vpu_cmdq_push_job(cmdq, job);
+	if (ret)
+		goto err_xa_erase;
+
+	vpu_dbg(JOB, "Job submitted: id %3u ctx %2d engine %d next %d\n",
+		job->job_id, file_priv->ctx.id, job->engine_idx, cmdq->jobq->header.tail);
+
+	if (vpu_test_mode == VPU_TEST_MODE_NULL_HW) {
+		vpu_job_done(vdev, job->job_id, VPU_JSM_STATUS_SUCCESS);
+		cmdq->jobq->header.head = cmdq->jobq->header.tail;
+	} else {
+		vpu_cmdq_ring_db(vdev, cmdq);
+	}
+
+	mutex_unlock(&file_priv->lock);
+	return 0;
+
+err_xa_erase:
+	xa_erase(&vdev->submitted_jobs_xa, job->job_id);
+err_job_put:
+	job_put(job);
+err_unlock:
+	mutex_unlock(&file_priv->lock);
+	return ret;
+}
+
+static int
+vpu_job_prepare_bos_for_submit(struct drm_file *file, struct vpu_job *job, u32 *buf_handles,
+			       u32 buf_count, u32 status_offset, u32 commands_offset)
+{
+	struct vpu_file_priv *file_priv = file->driver_priv;
+	struct vpu_device *vdev = file_priv->vdev;
+	struct ww_acquire_ctx acquire_ctx;
+	struct vpu_bo *bo;
+	int ret;
+	u32 i;
+
+	for (i = 0; i < buf_count; i++) {
+		struct drm_gem_object *obj = drm_gem_object_lookup(file, buf_handles[i]);
+
+		if (!obj)
+			return -ENOENT;
+
+		job->bos[i] = to_vpu_bo(obj);
+
+		ret = vpu_bo_pin(job->bos[i]);
+		if (ret)
+			return ret;
+	}
+
+	bo = job->bos[CMD_BUF_IDX];
+	if (!dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_READ)) {
+		vpu_warn(vdev, "Buffer is already in use\n");
+		return -EBUSY;
+	}
+
+	if (commands_offset >= bo->base.size) {
+		vpu_warn(vdev, "Invalid command buffer offset %u\n", commands_offset);
+		return -EINVAL;
+	}
+
+	job->cmd_buf_vpu_addr = bo->vpu_addr + commands_offset;
+
+	if (status_offset > (bo->base.size - sizeof(drm_vpu_job_status_t))) {
+		vpu_warn(vdev, "Invalid status offset %u\n", status_offset);
+		return -EINVAL;
+	}
+	job->submit_status_offset = status_offset;
+
+	ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count,
+					&acquire_ctx);
+	if (ret) {
+		vpu_warn(vdev, "Failed to lock reservations: %d\n", ret);
+		return ret;
+	}
+	for (i = 0; i < buf_count; i++) {
+		ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1);
+		if (ret) {
+			vpu_warn(vdev, "Failed to reserve fences: %d\n", ret);
+			goto unlock_reservations;
+		}
+	}
+
+	for (i = 0; i < buf_count; i++)
+		dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE);
+
+unlock_reservations:
+	drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, buf_count, &acquire_ctx);
+
+	return ret;
+}
+
+int vpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	int ret = 0;
+	struct vpu_file_priv *file_priv = file->driver_priv;
+	struct vpu_device *vdev = file_priv->vdev;
+	struct drm_vpu_submit *params = data;
+	struct vpu_job *job;
+	u32 *buf_handles;
+
+	if (params->engine > DRM_VPU_ENGINE_COPY)
+		return -EINVAL;
+
+	if (params->buffer_count == 0)
+		return -EINVAL;
+
+	if (params->commands_offset == params->status_offset)
+		return -EINVAL;
+
+	if (!IS_ALIGNED(params->commands_offset, 8))
+		return -EINVAL;
+
+	if (!IS_ALIGNED(params->status_offset, 8))
+		return -EINVAL;
+
+	if (!file_priv->ctx.id)
+		return -EINVAL;
+
+	buf_handles = kcalloc(params->buffer_count, sizeof(u32), GFP_KERNEL);
+	if (!buf_handles)
+		return -ENOMEM;
+
+	if (copy_from_user(buf_handles, (void __user *)params->buffers_ptr,
+			   params->buffer_count * sizeof(u32))) {
+		ret = -EFAULT;
+		goto free_handles;
+	}
+
+	vpu_dbg(JOB, "Submit ioctl: ctx %u buf_count %u\n",
+		file_priv->ctx.id, params->buffer_count);
+	job = vpu_create_job(file_priv, params->engine, params->buffer_count);
+	if (!job) {
+		vpu_err(vdev, "Failed to create job\n");
+		ret = -ENOMEM;
+		goto free_handles;
+	}
+
+	ret = vpu_job_prepare_bos_for_submit(file, job, buf_handles, params->buffer_count,
+					     params->status_offset, params->commands_offset);
+	if (ret) {
+		vpu_err(vdev, "Failed to prepare job, ret %d\n", ret);
+		goto job_put;
+	}
+
+	ret = vpu_direct_job_submission(job);
+	if (ret) {
+		dma_fence_signal(job->done_fence);
+		vpu_err(vdev, "Failed to submit job to the HW, ret %d\n", ret);
+	}
+
+job_put:
+	job_put(job);
+free_handles:
+	kfree(buf_handles);
+
+	return ret;
+}
+
+static int vpu_job_done_thread(void *arg)
+{
+	struct vpu_device *vdev = (struct vpu_device *)arg;
+	struct vpu_ipc_consumer cons;
+	struct vpu_jsm_msg jsm_msg;
+	bool jobs_submitted;
+	unsigned int timeout;
+	int ret;
+
+	vpu_dbg(JOB, "Started %s\n", __func__);
+
+	vpu_ipc_consumer_add(vdev, &cons, VPU_IPC_CHAN_JOB_RET);
+
+	while (!kthread_should_stop()) {
+		timeout = vpu_tdr_timeout_ms ? vpu_tdr_timeout_ms : vdev->timeout.tdr;
+		jobs_submitted = !xa_empty(&vdev->submitted_jobs_xa);
+		ret = vpu_ipc_receive(vdev, &cons, NULL, &jsm_msg, timeout);
+		if (!ret) {
+			vpu_job_done_message(vdev, &jsm_msg);
+		} else if (ret == -ETIMEDOUT) {
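+			/* Jobs pending before and after the wait: assume the device hung */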
+			if (jobs_submitted && !xa_empty(&vdev->submitted_jobs_xa))
+				vpu_err(vdev, "TDR detected, timeout %u ms\n", timeout);
+		}
+	}
+
+	vpu_ipc_consumer_del(vdev, &cons);
+
+	vpu_jobs_abort_all(vdev);
+
+	vpu_dbg(JOB, "Stopped %s\n", __func__);
+	return 0;
+}
+
+int vpu_job_done_thread_init(struct vpu_device *vdev)
+{
+	struct task_struct *thread;
+
+	thread = kthread_run(&vpu_job_done_thread, (void *)vdev, "vpu_job_done_thread");
+	if (IS_ERR(thread)) {
+		vpu_err(vdev, "Failed to start job completion thread\n");
+		return -EIO;
+	}
+
+	get_task_struct(thread);
+	wake_up_process(thread);
+
+	vdev->job_done_thread = thread;
+
+	return 0;
+}
+
+void vpu_job_done_thread_fini(struct vpu_device *vdev)
+{
+	kthread_stop(vdev->job_done_thread);
+	put_task_struct(vdev->job_done_thread);
+}
diff --git a/drivers/gpu/drm/vpu/vpu_job.h b/drivers/gpu/drm/vpu/vpu_job.h
new file mode 100644
index 000000000000..11e1f345c2a3
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_job.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_JOB_H__
+#define __VPU_JOB_H__
+
+#include <linux/kref.h>
+#include <linux/idr.h>
+
+#include "vpu_gem.h"
+
+struct vpu_device;
+struct vpu_file_priv;
+
+/**
+ * struct vpu_cmdq - Object representing device queue used to send jobs.
+ * @jobq:	   Pointer to job queue memory shared with the device
+ * @mem:           Memory allocated for the job queue, shared with device
+ * @entry_count:   Number of job entries in the queue
+ * @db_id:	   Doorbell assigned to this job queue
+ * @db_registered: True if doorbell is registered in device
+ */
+struct vpu_cmdq {
+	struct vpu_job_queue *jobq;
+	struct vpu_bo *mem;
+	u32 entry_count;
+	u32 db_id;
+	bool db_registered;
+};
+
+/**
+ * struct vpu_job - KMD object that represents a batch buffer / DMA buffer.
+ * Each batch / DMA buffer is a job to be submitted and executed by the VPU FW.
+ * This is a unit of execution, tracked by @job_id for any status reporting
+ * from the VPU FW through the IPC JOB RET/DONE message.
+ * @file_priv:		  The client that submitted this job
+ * @job_id:		  Job ID for KMD tracking and job status reporting from VPU FW
+ * @cmd_buf_vpu_addr:	  VPU address of the command (batch) buffer submitted with this job
+ * @submit_status_offset: Offset within the command buffer where the job completion
+ *			  handler will update the job status
+ */
+struct vpu_job {
+	struct kref ref;
+
+	struct vpu_device *vdev;
+
+	struct vpu_file_priv *file_priv;
+
+	u64 submit_status_offset;
+
+	struct dma_fence *done_fence;
+
+	u64 cmd_buf_vpu_addr;
+	u32 job_id;
+	u32 engine_idx;
+	size_t bo_count;
+	struct vpu_bo *bos[];
+};
+
+int vpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+
+void vpu_cmdq_release_all(struct vpu_file_priv *file_priv);
+void vpu_cmdq_reset_all_contexts(struct vpu_device *vdev);
+
+int vpu_job_done_thread_init(struct vpu_device *vdev);
+void vpu_job_done_thread_fini(struct vpu_device *vdev);
+
+void vpu_jobs_abort_all(struct vpu_device *vdev);
+
+#endif /* __VPU_JOB_H__ */
diff --git a/include/uapi/drm/vpu_drm.h b/include/uapi/drm/vpu_drm.h
index b0492225433d..68b1f2d94046 100644
--- a/include/uapi/drm/vpu_drm.h
+++ b/include/uapi/drm/vpu_drm.h
@@ -20,6 +20,8 @@ extern "C" {
 #define DRM_VPU_BO_CREATE		 0x02
 #define DRM_VPU_BO_INFO			 0x03
 #define DRM_VPU_BO_USERPTR		 0x04
+#define DRM_VPU_SUBMIT			 0x05
+#define DRM_VPU_BO_WAIT			 0x06
 
 #define DRM_IOCTL_VPU_GET_PARAM                                                                    \
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VPU_GET_PARAM, struct drm_vpu_param)
@@ -36,6 +38,12 @@ extern "C" {
 #define DRM_IOCTL_VPU_BO_USERPTR                                                                   \
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VPU_BO_USERPTR, struct drm_vpu_bo_userptr)
 
+#define DRM_IOCTL_VPU_SUBMIT                                                                       \
+	DRM_IOW(DRM_COMMAND_BASE + DRM_VPU_SUBMIT, struct drm_vpu_submit)
+
+#define DRM_IOCTL_VPU_BO_WAIT                                                                      \
+	DRM_IOW(DRM_COMMAND_BASE + DRM_VPU_BO_WAIT, struct drm_vpu_bo_wait)
+
 /**
  * DOC: contexts
  *
@@ -227,6 +235,94 @@ struct drm_vpu_bo_userptr {
 	__u64 vpu_addr;
 };
 
+/* drm_vpu_submit job status type */
+typedef __u64 drm_vpu_job_status_t;
+
+/* drm_vpu_submit job status codes */
+#define DRM_VPU_JOB_STATUS_SUCCESS 0
+
+/* drm_vpu_submit engines */
+#define DRM_VPU_ENGINE_COMPUTE 0
+#define DRM_VPU_ENGINE_COPY    1
+
+/**
+ * struct drm_vpu_submit - Submit commands to the VPU
+ *
+ * Execute a single command buffer on a given VPU engine.
+ * Handles to all referenced buffer objects have to be provided in @buffers_ptr.
+ *
+ * User space may wait on job completion using %DRM_VPU_BO_WAIT ioctl.
+ */
+struct drm_vpu_submit {
+	/**
+	 * @buffers_ptr:
+	 *
+	 * A pointer to an u32 array of GEM handles of the BOs required for this job.
+	 * The number of elements in the array must be equal to the value given by @buffer_count.
+	 *
+	 * The first BO is the command buffer. The rest of the array has to contain all
+	 * BOs referenced from the command buffer.
+	 */
+	__u64 buffers_ptr;
+
+	/** @buffer_count: Number of elements in the @buffers_ptr array */
+	__u32 buffer_count;
+
+	/**
+	 * @engine: Select the engine this job should be executed on
+	 *
+	 * %DRM_VPU_ENGINE_COMPUTE:
+	 *
+	 * Performs Deep Learning Neural Compute Inference Operations
+	 *
+	 * %DRM_VPU_ENGINE_COPY:
+	 *
+	 * Performs memory copy operations to/from system memory allocated for VPU
+	 */
+	__u32 engine;
+
+	/** @flags: Reserved for future use - must be zero */
+	__u32 flags;
+
+	/**
+	 * @commands_offset:
+	 *
+	 * Offset inside the first buffer in @buffers_ptr containing commands
+	 * to be executed. The offset has to be 8-byte aligned.
+	 */
+	__u32 commands_offset;
+
+	/**
+	 * @status_offset:
+	 *
+	 * Offset inside the first buffer in @buffers_ptr containing a u64 job
+	 * status code, which is updated after the job completes:
+	 * %DRM_VPU_JOB_STATUS_SUCCESS on success or a device-specific error
+	 * code otherwise. The offset has to be 8-byte aligned.
+	 */
+	__u32 status_offset;
+
+	/** @pad: Padding - must be zero */
+	__u32 pad;
+};
+
+/**
+ * struct drm_vpu_bo_wait - Wait for BO to become inactive
+ *
+ * Blocks until a given buffer object becomes inactive.
+ * With @timeout_ns set to 0 the ioctl returns immediately.
+ */
+struct drm_vpu_bo_wait {
+	/** @handle: Handle to the buffer object to be waited on */
+	__u32 handle;
+
+	/** @flags: Reserved for future use - must be zero */
+	__u32 flags;
+
+	/** @timeout_ns: Absolute timeout in nanoseconds (may be zero) */
+	__s64 timeout_ns;
+};
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.34.1



* [PATCH v1 7/7] drm/vpu: Add PM support
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
                   ` (5 preceding siblings ...)
  2022-07-28 13:17 ` [PATCH v1 6/7] drm/vpu: Add command buffer submission logic Jacek Lawrynowicz
@ 2022-07-28 13:17 ` Jacek Lawrynowicz
  2022-08-08  2:34 ` [PATCH v1 0/7] New DRM driver for Intel VPU Dave Airlie
  7 siblings, 0 replies; 11+ messages in thread
From: Jacek Lawrynowicz @ 2022-07-28 13:17 UTC (permalink / raw)
  To: dri-devel, airlied, daniel
  Cc: andrzej.kacprowski, Jacek Lawrynowicz, stanislaw.gruszka

  - Implement cold and warm firmware boot flows
  - Add hang recovery support
  - Add runtime power management support

Signed-off-by: Krystian Pradzynski <krystian.pradzynski@linux.intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/gpu/drm/vpu/Makefile     |   3 +-
 drivers/gpu/drm/vpu/vpu_drv.c    |  32 ++-
 drivers/gpu/drm/vpu/vpu_drv.h    |   1 +
 drivers/gpu/drm/vpu/vpu_fw.c     |   4 +
 drivers/gpu/drm/vpu/vpu_hw_mtl.c |  12 ++
 drivers/gpu/drm/vpu/vpu_ipc.c    |   6 +
 drivers/gpu/drm/vpu/vpu_job.c    |  17 +-
 drivers/gpu/drm/vpu/vpu_mmu.c    |   4 +
 drivers/gpu/drm/vpu/vpu_pm.c     | 353 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/vpu/vpu_pm.h     |  38 ++++
 10 files changed, 466 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/vpu/vpu_pm.c
 create mode 100644 drivers/gpu/drm/vpu/vpu_pm.h

diff --git a/drivers/gpu/drm/vpu/Makefile b/drivers/gpu/drm/vpu/Makefile
index 70493dacecda..f8218a0b0729 100644
--- a/drivers/gpu/drm/vpu/Makefile
+++ b/drivers/gpu/drm/vpu/Makefile
@@ -10,6 +10,7 @@ intel_vpu-y := \
 	vpu_job.o \
 	vpu_jsm_msg.o \
 	vpu_mmu.o \
-	vpu_mmu_context.o
+	vpu_mmu_context.o \
+	vpu_pm.o
 
 obj-$(CONFIG_DRM_VPU) += intel_vpu.o
diff --git a/drivers/gpu/drm/vpu/vpu_drv.c b/drivers/gpu/drm/vpu/vpu_drv.c
index 74db0cb18491..d62aed956f02 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.c
+++ b/drivers/gpu/drm/vpu/vpu_drv.c
@@ -23,6 +23,7 @@
 #include "vpu_jsm_msg.h"
 #include "vpu_mmu.h"
 #include "vpu_mmu_context.h"
+#include "vpu_pm.h"
 
 #ifndef DRIVER_VERSION_STR
 #define DRIVER_VERSION_STR __stringify(DRM_VPU_DRIVER_MAJOR) "." \
@@ -80,9 +81,11 @@ static void file_priv_release(struct kref *ref)
 	vpu_dbg(FILE, "file_priv release: ctx %u\n", file_priv->ctx.id);
 
 	if (file_priv->ctx.id) {
+		vpu_rpm_get(vdev);
 		vpu_cmdq_release_all(file_priv);
 		vpu_bo_remove_all_bos_from_context(&file_priv->ctx);
 		vpu_mmu_user_context_fini(file_priv);
+		vpu_rpm_put(vdev);
 	}
 
 	kfree(file_priv);
@@ -428,6 +431,10 @@ static int vpu_dev_init(struct vpu_device *vdev)
 	if (!vdev->ipc)
 		return -ENOMEM;
 
+	vdev->pm = devm_kzalloc(vdev->drm.dev, sizeof(*vdev->pm), GFP_KERNEL);
+	if (!vdev->pm)
+		return -ENOMEM;
+
 	vdev->hw->ops = &vpu_hw_mtl_ops;
 	vdev->platform = VPU_PLATFORM_INVALID;
 
@@ -486,10 +493,16 @@ static int vpu_dev_init(struct vpu_device *vdev)
 		goto err_fw_fini;
 	}
 
+	ret = vpu_pm_init(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to initialize PM: %d\n", ret);
+		goto err_ipc_fini;
+	}
+
 	ret = vpu_job_done_thread_init(vdev);
 	if (ret) {
 		vpu_err(vdev, "Failed to initialize job done thread: %d\n", ret);
-		goto err_ipc_fini;
+		goto err_pm_fini;
 	}
 
 	ret = vpu_fw_load(vdev);
@@ -508,6 +521,8 @@ static int vpu_dev_init(struct vpu_device *vdev)
 
 err_job_done_thread_fini:
 	vpu_job_done_thread_fini(vdev);
+err_pm_fini:
+	vpu_pm_fini(vdev);
 err_ipc_fini:
 	vpu_ipc_fini(vdev);
 err_fw_fini:
@@ -530,6 +545,7 @@ static void vpu_dev_fini(struct vpu_device *vdev)
 	vpu_shutdown(vdev);
 
 	vpu_job_done_thread_fini(vdev);
+	vpu_pm_fini(vdev);
 	vpu_ipc_fini(vdev);
 	vpu_fw_fini(vdev);
 	vpu_mmu_fini(vdev);
@@ -584,11 +600,25 @@ static void vpu_remove(struct pci_dev *pdev)
 	vpu_dev_fini(vdev);
 }
 
+static const struct dev_pm_ops vpu_drv_pci_pm = {
+	SET_SYSTEM_SLEEP_PM_OPS(vpu_pm_suspend_cb, vpu_pm_resume_cb)
+	SET_RUNTIME_PM_OPS(vpu_pm_runtime_suspend_cb, vpu_pm_runtime_resume_cb, NULL)
+};
+
+static const struct pci_error_handlers vpu_drv_pci_err = {
+	.reset_prepare = vpu_pm_reset_prepare_cb,
+	.reset_done = vpu_pm_reset_done_cb,
+};
+
 static struct pci_driver vpu_pci_driver = {
 	.name = KBUILD_MODNAME,
 	.id_table = vpu_pci_ids,
 	.probe = vpu_probe,
 	.remove = vpu_remove,
+	.driver = {
+		.pm = &vpu_drv_pci_pm,
+	},
+	.err_handler = &vpu_drv_pci_err,
 };
 
 static __init int vpu_init(void)
diff --git a/drivers/gpu/drm/vpu/vpu_drv.h b/drivers/gpu/drm/vpu/vpu_drv.h
index f4898399e64b..52593831dc6a 100644
--- a/drivers/gpu/drm/vpu/vpu_drv.h
+++ b/drivers/gpu/drm/vpu/vpu_drv.h
@@ -89,6 +89,7 @@ struct vpu_device {
 	struct vpu_mmu_info *mmu;
 	struct vpu_fw_info *fw;
 	struct vpu_ipc_info *ipc;
+	struct vpu_pm_info *pm;
 
 	struct vpu_mmu_context gctx;
 	struct xarray context_xa;
diff --git a/drivers/gpu/drm/vpu/vpu_fw.c b/drivers/gpu/drm/vpu/vpu_fw.c
index 153aafcf3423..4c10a6f963bc 100644
--- a/drivers/gpu/drm/vpu/vpu_fw.c
+++ b/drivers/gpu/drm/vpu/vpu_fw.c
@@ -14,6 +14,7 @@
 #include "vpu_gem.h"
 #include "vpu_hw.h"
 #include "vpu_ipc.h"
+#include "vpu_pm.h"
 
 #define FW_MAX_NAMES		3
 #define FW_GLOBAL_MEM_START	(2ull * SZ_1G)
@@ -361,9 +362,12 @@ void vpu_fw_boot_params_setup(struct vpu_device *vdev, struct vpu_boot_params *b
 	/* In case of warm boot we only have to reset the entrypoint addr */
 	if (!vpu_fw_is_cold_boot(vdev)) {
 		boot_params->save_restore_ret_address = 0;
+		vdev->pm->is_warmboot = true;
 		return;
 	}
 
+	vdev->pm->is_warmboot = false;
+
 	boot_params->magic = VPU_BOOT_PARAMS_MAGIC;
 	boot_params->vpu_id = to_pci_dev(vdev->drm.dev)->bus->number;
 	boot_params->frequency = vpu_hw_reg_pll_freq_get(vdev);
diff --git a/drivers/gpu/drm/vpu/vpu_hw_mtl.c b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
index ba24dc29f962..93f1799c7284 100644
--- a/drivers/gpu/drm/vpu/vpu_hw_mtl.c
+++ b/drivers/gpu/drm/vpu/vpu_hw_mtl.c
@@ -10,6 +10,7 @@
 #include "vpu_hw.h"
 #include "vpu_ipc.h"
 #include "vpu_mmu.h"
+#include "vpu_pm.h"
 
 #define TILE_FUSE_ENABLE_BOTH	     0x0
 #define TILE_FUSE_ENABLE_LOWER	     0x1
@@ -916,6 +917,8 @@ static irqreturn_t vpu_hw_mtl_irq_wdt_nce_handler(struct vpu_device *vdev)
 {
 	vpu_warn_ratelimited(vdev, "WDT NCE irq\n");
 
+	vpu_pm_schedule_recovery(vdev);
+
 	return IRQ_HANDLED;
 }
 
@@ -924,6 +927,7 @@ static irqreturn_t vpu_hw_mtl_irq_wdt_mss_handler(struct vpu_device *vdev)
 	vpu_warn_ratelimited(vdev, "WDT MSS irq\n");
 
 	vpu_hw_wdt_disable(vdev);
+	vpu_pm_schedule_recovery(vdev);
 
 	return IRQ_HANDLED;
 }
@@ -932,6 +936,8 @@ static irqreturn_t vpu_hw_mtl_irq_noc_firewall_handler(struct vpu_device *vdev)
 {
 	vpu_warn_ratelimited(vdev, "NOC Firewall irq\n");
 
+	vpu_pm_schedule_recovery(vdev);
+
 	return IRQ_HANDLED;
 }
 
@@ -970,6 +976,7 @@ static irqreturn_t vpu_hw_mtl_irqv_handler(struct vpu_device *vdev, int irq)
 /* Handler for IRQs from Buttress core (irqB) */
 static irqreturn_t vpu_hw_mtl_irqb_handler(struct vpu_device *vdev, int irq)
 {
+	bool schedule_recovery = false;
 	u32 status = REGB_RD32(MTL_BUTTRESS_INTERRUPT_STAT) & BUTTRESS_IRQ_MASK;
 
 	REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, status);
@@ -980,13 +987,18 @@ static irqreturn_t vpu_hw_mtl_irqb_handler(struct vpu_device *vdev, int irq)
 	if (REG_TEST_FLD(MTL_BUTTRESS_INTERRUPT_STAT, ATS_ERR, status)) {
 		vpu_dbg(IRQ, "ATS_ERR 0x%016llx", REGB_RD64(MTL_BUTTRESS_ATS_ERR_LOG_0));
 		REGB_WR32(MTL_BUTTRESS_ATS_ERR_CLEAR, 0x1);
+		schedule_recovery = true;
 	}
 
 	if (REG_TEST_FLD(MTL_BUTTRESS_INTERRUPT_STAT, UFI_ERR, status)) {
 		vpu_dbg(IRQ, "UFI_ERR 0x%08x", REGB_RD32(MTL_BUTTRESS_UFI_ERR_LOG));
 		REGB_WR32(MTL_BUTTRESS_UFI_ERR_CLEAR, 0x1);
+		schedule_recovery = true;
 	}
 
+	if (schedule_recovery)
+		vpu_pm_schedule_recovery(vdev);
+
 	return IRQ_HANDLED;
 }
 
diff --git a/drivers/gpu/drm/vpu/vpu_ipc.c b/drivers/gpu/drm/vpu/vpu_ipc.c
index 0a01e5614a5f..db8a18f353b5 100644
--- a/drivers/gpu/drm/vpu/vpu_ipc.c
+++ b/drivers/gpu/drm/vpu/vpu_ipc.c
@@ -13,6 +13,7 @@
 #include "vpu_hw.h"
 #include "vpu_ipc.h"
 #include "vpu_jsm_msg.h"
+#include "vpu_pm.h"
 
 #define IPC_MAX_RX_MSG	128
 #define IS_KTHREAD()	(get_current()->flags & PF_KTHREAD)
@@ -264,6 +265,10 @@ int vpu_ipc_send_receive(struct vpu_device *vdev, struct vpu_jsm_msg *req,
 
 	vpu_ipc_consumer_add(vdev, &cons, channel);
 
+	ret = vpu_rpm_get(vdev);
+	if (ret < 0)
+		return ret;
+
 	ret = vpu_ipc_send(vdev, &cons, req);
 	if (ret) {
 		vpu_warn(vdev, "IPC send failed: %d\n", ret);
@@ -282,6 +287,7 @@ int vpu_ipc_send_receive(struct vpu_device *vdev, struct vpu_jsm_msg *req,
 	}
 
 consumer_del:
+	vpu_rpm_put(vdev);
 	vpu_ipc_consumer_del(vdev, &cons);
 
 	return ret;
diff --git a/drivers/gpu/drm/vpu/vpu_job.c b/drivers/gpu/drm/vpu/vpu_job.c
index 16ca280d12b2..5b9fed41d34f 100644
--- a/drivers/gpu/drm/vpu/vpu_job.c
+++ b/drivers/gpu/drm/vpu/vpu_job.c
@@ -17,6 +17,7 @@
 #include "vpu_ipc.h"
 #include "vpu_job.h"
 #include "vpu_jsm_msg.h"
+#include "vpu_pm.h"
 
 #define CMD_BUF_IDX	    0
 #define JOB_ID_JOB_MASK	    GENMASK(7, 0)
@@ -264,6 +265,9 @@ static void job_release(struct kref *ref)
 
 	vpu_dbg(KREF, "Job released: id %u\n", job->job_id);
 	kfree(job);
+
+	/* Allow the VPU to get suspended, must be called after vpu_file_priv_put() */
+	vpu_rpm_put(vdev);
 }
 
 static void job_put(struct vpu_job *job)
@@ -280,11 +284,16 @@ vpu_create_job(struct vpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
 	struct vpu_device *vdev = file_priv->vdev;
 	struct vpu_job *job;
 	size_t buf_size;
+	int ret;
+
+	ret = vpu_rpm_get(vdev);
+	if (ret < 0)
+		return NULL;
 
 	buf_size = sizeof(*job) + bo_count * sizeof(struct vpu_bo *);
 	job = kzalloc(buf_size, GFP_KERNEL);
 	if (!job)
-		return NULL;
+		goto err_rpm_put;
 
 	kref_init(&job->ref);
 
@@ -305,6 +314,8 @@ vpu_create_job(struct vpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
 
 err_free_job:
 	kfree(job);
+err_rpm_put:
+	vpu_rpm_put(vdev);
 	return NULL;
 }
 
@@ -573,8 +584,10 @@ static int vpu_job_done_thread(void *arg)
 		if (!ret) {
 			vpu_job_done_message(vdev, &jsm_msg);
 		} else if (ret == -ETIMEDOUT) {
-			if (jobs_submitted && !xa_empty(&vdev->submitted_jobs_xa))
+			if (jobs_submitted && !xa_empty(&vdev->submitted_jobs_xa)) {
 				vpu_err(vdev, "TDR detected, timeout %d ms", timeout);
+				vpu_pm_schedule_recovery(vdev);
+			}
 		}
 	}
 
diff --git a/drivers/gpu/drm/vpu/vpu_mmu.c b/drivers/gpu/drm/vpu/vpu_mmu.c
index ace91ee5a857..b0a17d3bc274 100644
--- a/drivers/gpu/drm/vpu/vpu_mmu.c
+++ b/drivers/gpu/drm/vpu/vpu_mmu.c
@@ -11,6 +11,7 @@
 #include "vpu_hw_reg_io.h"
 #include "vpu_mmu.h"
 #include "vpu_mmu_context.h"
+#include "vpu_pm.h"
 
 #define VPU_MMU_IDR0_REF		0x080f3e0f
 #define VPU_MMU_IDR0_REF_SIMICS		0x080f3e1f
@@ -883,6 +884,8 @@ irqreturn_t vpu_mmu_irq_evtq_handler(struct vpu_device *vdev)
 
 	} while (evtq->prod != evtq->cons);
 
+	vpu_pm_schedule_recovery(vdev);
+
 	return IRQ_HANDLED;
 }
 
@@ -901,6 +904,7 @@ irqreturn_t vpu_mmu_irq_gerr_handler(struct vpu_device *vdev)
 
 	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, SFM, active)) {
 		vpu_err_ratelimited(vdev, "MMU has entered service failure mode\n");
+		vpu_pm_schedule_recovery(vdev);
 	}
 
 	if (REG_TEST_FLD(MTL_VPU_HOST_MMU_GERROR, MSI_ABT, active))
diff --git a/drivers/gpu/drm/vpu/vpu_pm.c b/drivers/gpu/drm/vpu/vpu_pm.c
new file mode 100644
index 000000000000..59bb2b42291b
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_pm.c
@@ -0,0 +1,353 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#include <linux/highmem.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+#include <linux/pm_runtime.h>
+#include <linux/reboot.h>
+
+#include "vpu_boot_api.h"
+#include "vpu_drv.h"
+#include "vpu_hw.h"
+#include "vpu_fw.h"
+#include "vpu_ipc.h"
+#include "vpu_job.h"
+#include "vpu_mmu.h"
+#include "vpu_pm.h"
+
+static bool vpu_disable_recovery;
+module_param_named_unsafe(disable_recovery, vpu_disable_recovery, bool, 0644);
+MODULE_PARM_DESC(disable_recovery, "Disables recovery when VPU hang is detected");
+
+#define PM_RESCHEDULE_LIMIT     5
+
+static void vpu_pm_prepare_cold_boot(struct vpu_device *vdev)
+{
+	struct vpu_fw_info *fw = vdev->fw;
+
+	vpu_cmdq_reset_all_contexts(vdev);
+	vpu_fw_load(vdev);
+	fw->entry_point = fw->cold_boot_entry_point;
+}
+
+static void vpu_pm_prepare_warm_boot(struct vpu_device *vdev)
+{
+	struct vpu_fw_info *fw = vdev->fw;
+	struct vpu_boot_params *bp = fw->mem->kvaddr;
+
+	if (!bp->save_restore_ret_address) {
+		vpu_pm_prepare_cold_boot(vdev);
+		return;
+	}
+
+	vpu_dbg(FW_BOOT, "Save/restore entry point %llx", bp->save_restore_ret_address);
+	fw->entry_point = bp->save_restore_ret_address;
+}
+
+static int vpu_suspend(struct vpu_device *vdev)
+{
+	int ret;
+
+	lockdep_assert_held(&vdev->pm->lock);
+
+	ret = vpu_shutdown(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to shutdown VPU: %d\n", ret);
+		return ret;
+	}
+
+	return ret;
+}
+
+static int vpu_resume(struct vpu_device *vdev)
+{
+	int ret;
+
+	lockdep_assert_held(&vdev->pm->lock);
+
+retry:
+	ret = vpu_hw_power_up(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to power up HW: %d\n", ret);
+		return ret;
+	}
+
+	ret = vpu_mmu_enable(vdev);
+	if (ret) {
+		vpu_err(vdev, "Failed to resume MMU: %d\n", ret);
+		vpu_hw_power_down(vdev);
+		return ret;
+	}
+
+	ret = vpu_boot(vdev);
+	if (ret) {
+		vpu_mmu_disable(vdev);
+		vpu_hw_power_down(vdev);
+		if (!vpu_fw_is_cold_boot(vdev)) {
+			vpu_warn(vdev, "Failed to resume the FW: %d. Retrying cold boot..\n", ret);
+			vpu_pm_prepare_cold_boot(vdev);
+			goto retry;
+		} else {
+			vpu_err(vdev, "Failed to resume the FW: %d\n", ret);
+		}
+	}
+
+	return ret;
+}
+
+static void vpu_pm_recovery_work(struct work_struct *work)
+{
+	struct vpu_pm_info *pm = container_of(work, struct vpu_pm_info, recovery_work);
+	struct vpu_device *vdev =  pm->vdev;
+	char *evt[2] = {"VPU_PM_EVENT=VPU_RECOVER", NULL};
+	int ret;
+
+	ret = pci_reset_function(to_pci_dev(vdev->drm.dev));
+	if (ret)
+		vpu_err(vdev, "Failed to reset VPU: %d\n", ret);
+
+	atomic_set(&pm->in_recovery, 0);
+	kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
+}
+
+void vpu_pm_schedule_recovery(struct vpu_device *vdev)
+{
+	struct vpu_pm_info *pm = vdev->pm;
+
+	if (vpu_disable_recovery) {
+		vpu_err(vdev, "Recovery not available when disable_recovery param is set\n");
+		return;
+	}
+
+	if (vpu_is_fpga(vdev)) {
+		vpu_err(vdev, "Recovery not available on FPGA\n");
+		return;
+	}
+
+	/* Schedule recovery if it's not in progress */
+	if (atomic_cmpxchg(&pm->in_recovery, 0, 1) == 0) {
+		vpu_hw_irq_disable(vdev);
+		queue_work(system_long_wq, &pm->recovery_work);
+	}
+}
+
+int vpu_pm_suspend_cb(struct device *dev)
+{
+	struct drm_device *drm = dev_get_drvdata(dev);
+	struct vpu_device *vdev = to_vpu_dev(drm);
+	int ret;
+
+	vpu_dbg(PM, "Suspend..\n");
+
+	mutex_lock(&vdev->pm->lock);
+
+	ret = vpu_suspend(vdev);
+	if (ret && vdev->pm->suspend_reschedule_counter) {
+		vpu_dbg(PM, "VPU failed to enter idle, rescheduling suspend, retries left %d\n",
+			vdev->pm->suspend_reschedule_counter);
+		pm_schedule_suspend(dev, vdev->timeout.reschedule_suspend);
+		vdev->pm->suspend_reschedule_counter--;
+		mutex_unlock(&vdev->pm->lock);
+		return -EBUSY;
+	} else if (!vdev->pm->suspend_reschedule_counter) {
+		vpu_warn(vdev, "VPU failed to enter idle, force suspend\n");
+		vpu_pm_prepare_cold_boot(vdev);
+	} else {
+		vpu_pm_prepare_warm_boot(vdev);
+	}
+
+	vdev->pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
+
+	pci_save_state(to_pci_dev(dev));
+	pci_set_power_state(to_pci_dev(dev), PCI_D3hot);
+
+	mutex_unlock(&vdev->pm->lock);
+
+	vpu_dbg(PM, "Suspend done.\n");
+
+	return ret;
+}
+
+int vpu_pm_resume_cb(struct device *dev)
+{
+	struct drm_device *drm = dev_get_drvdata(dev);
+	struct vpu_device *vdev = to_vpu_dev(drm);
+	int ret;
+
+	vpu_dbg(PM, "Resume..\n");
+
+	mutex_lock(&vdev->pm->lock);
+
+	pci_set_power_state(to_pci_dev(dev), PCI_D0);
+	pci_restore_state(to_pci_dev(dev));
+
+	ret = vpu_resume(vdev);
+	if (ret)
+		vpu_err(vdev, "Failed to resume: %d\n", ret);
+
+	mutex_unlock(&vdev->pm->lock);
+
+	vpu_dbg(PM, "Resume done.\n");
+
+	return ret;
+}
+
+int vpu_pm_runtime_suspend_cb(struct device *dev)
+{
+	struct drm_device *drm = dev_get_drvdata(dev);
+	struct vpu_device *vdev = to_vpu_dev(drm);
+	int ret;
+
+	vpu_dbg(PM, "Runtime suspend..\n");
+
+	if (!vpu_hw_is_idle(vdev) && vdev->pm->suspend_reschedule_counter) {
+		vpu_dbg(PM, "VPU failed to enter idle, rescheduling suspend, retries left %d\n",
+			vdev->pm->suspend_reschedule_counter);
+		pm_schedule_suspend(dev, vdev->timeout.reschedule_suspend);
+		vdev->pm->suspend_reschedule_counter--;
+		return -EAGAIN;
+	}
+
+	mutex_lock(&vdev->pm->lock);
+
+	ret = vpu_suspend(vdev);
+	if (ret)
+		vpu_err(vdev, "Failed to suspend VPU: %d\n", ret);
+
+	if (!vdev->pm->suspend_reschedule_counter) {
+		vpu_warn(vdev, "VPU failed to enter idle, force suspended.\n");
+		vpu_pm_prepare_cold_boot(vdev);
+	} else {
+		vpu_pm_prepare_warm_boot(vdev);
+	}
+
+	vdev->pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
+	mutex_unlock(&vdev->pm->lock);
+
+	vpu_dbg(PM, "Runtime suspend done.\n");
+
+	return 0;
+}
+
+int vpu_pm_runtime_resume_cb(struct device *dev)
+{
+	struct drm_device *drm = dev_get_drvdata(dev);
+	struct vpu_device *vdev = to_vpu_dev(drm);
+	int ret;
+
+	vpu_dbg(PM, "Runtime resume..\n");
+
+	mutex_lock(&vdev->pm->lock);
+
+	ret = vpu_resume(vdev);
+	if (ret)
+		vpu_err(vdev, "Failed to set RESUME state: %d\n", ret);
+
+	mutex_unlock(&vdev->pm->lock);
+
+	vpu_dbg(PM, "Runtime resume done.\n");
+
+	return ret;
+}
+
+int vpu_rpm_get(struct vpu_device *vdev)
+{
+	int ret;
+
+	vpu_dbg(PM, "rpm_get count %d\n", atomic_read(&vdev->drm.dev->power.usage_count));
+
+	ret = pm_runtime_get_sync(vdev->drm.dev);
+	if (ret < 0) {
+		vpu_err(vdev, "Failed to resume operation: %d\n", ret);
+		pm_runtime_put_noidle(vdev->drm.dev);
+	} else {
+		vdev->pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
+	}
+
+	return ret;
+}
+
+void vpu_rpm_put(struct vpu_device *vdev)
+{
+	vpu_dbg(PM, "rpm_put count %d\n", atomic_read(&vdev->drm.dev->power.usage_count));
+
+	pm_runtime_mark_last_busy(vdev->drm.dev);
+	pm_runtime_put_autosuspend(vdev->drm.dev);
+}
+
+void vpu_pm_reset_prepare_cb(struct pci_dev *pdev)
+{
+	struct vpu_device *vdev = pci_get_drvdata(pdev);
+
+	vpu_dbg(PM, "Pre-reset..\n");
+
+	mutex_lock(&vdev->pm->lock);
+
+	vpu_shutdown(vdev);
+	vpu_pm_prepare_cold_boot(vdev);
+	vpu_jobs_abort_all(vdev);
+
+	mutex_unlock(&vdev->pm->lock);
+
+	vpu_dbg(PM, "Pre-reset done.\n");
+}
+
+void vpu_pm_reset_done_cb(struct pci_dev *pdev)
+{
+	struct vpu_device *vdev = pci_get_drvdata(pdev);
+	int ret;
+
+	vpu_dbg(PM, "Post-reset..\n");
+
+	mutex_lock(&vdev->pm->lock);
+
+	ret = vpu_resume(vdev);
+	if (ret)
+		vpu_err(vdev, "Failed to set RESUME state: %d\n", ret);
+
+	mutex_unlock(&vdev->pm->lock);
+
+	vpu_dbg(PM, "Post-reset done.\n");
+}
+
+int vpu_pm_init(struct vpu_device *vdev)
+{
+	struct device *dev = vdev->drm.dev;
+	struct vpu_pm_info *pm = vdev->pm;
+
+	pm->vdev = vdev;
+	pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
+
+	atomic_set(&pm->in_recovery, 0);
+	INIT_WORK(&pm->recovery_work, vpu_pm_recovery_work);
+	mutex_init(&pm->lock);
+
+	pm_runtime_use_autosuspend(dev);
+
+	if (vpu_disable_recovery)
+		pm_runtime_set_autosuspend_delay(dev, -1);
+	else if (vpu_is_silicon(vdev))
+		pm_runtime_set_autosuspend_delay(dev, 1000);
+	else
+		pm_runtime_set_autosuspend_delay(dev, 60000);
+
+	pm_runtime_set_active(dev);
+	pm_runtime_allow(dev);
+	pm_runtime_mark_last_busy(dev);
+	pm_runtime_put_autosuspend(dev);
+
+	vpu_dbg(PM, "Initial RPM count %d\n", atomic_read(&dev->power.usage_count));
+
+	return 0;
+}
+
+void vpu_pm_fini(struct vpu_device *vdev)
+{
+	pm_runtime_forbid(vdev->drm.dev);
+	pm_runtime_get_noresume(vdev->drm.dev);
+
+	vpu_dbg(PM, "Release RPM count %d\n", atomic_read(&vdev->drm.dev->power.usage_count));
+}
diff --git a/drivers/gpu/drm/vpu/vpu_pm.h b/drivers/gpu/drm/vpu/vpu_pm.h
new file mode 100644
index 000000000000..c1709225ae5c
--- /dev/null
+++ b/drivers/gpu/drm/vpu/vpu_pm.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2020-2022 Intel Corporation
+ */
+
+#ifndef __VPU_PM_H__
+#define __VPU_PM_H__
+
+#include <linux/types.h>
+
+struct vpu_device;
+
+struct vpu_pm_info {
+	struct vpu_device *vdev;
+	struct mutex lock; /* Protects state transitions */
+	struct work_struct recovery_work;
+	atomic_t in_recovery;
+	bool is_warmboot;
+	u32 suspend_reschedule_counter;
+};
+
+int vpu_pm_init(struct vpu_device *vdev);
+void vpu_pm_fini(struct vpu_device *vdev);
+
+int vpu_pm_suspend_cb(struct device *dev);
+int vpu_pm_resume_cb(struct device *dev);
+int vpu_pm_runtime_suspend_cb(struct device *dev);
+int vpu_pm_runtime_resume_cb(struct device *dev);
+
+void vpu_pm_reset_prepare_cb(struct pci_dev *pdev);
+void vpu_pm_reset_done_cb(struct pci_dev *pdev);
+
+int vpu_rpm_get(struct vpu_device *vdev);
+void vpu_rpm_put(struct vpu_device *vdev);
+
+void vpu_pm_schedule_recovery(struct vpu_device *vdev);
+
+#endif /* __VPU_PM_H__ */
-- 
2.34.1



* Re: [PATCH v1 0/7] New DRM driver for Intel VPU
  2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
                   ` (6 preceding siblings ...)
  2022-07-28 13:17 ` [PATCH v1 7/7] drm/vpu: Add PM support Jacek Lawrynowicz
@ 2022-08-08  2:34 ` Dave Airlie
  2022-08-08 14:49   ` Stanislaw Gruszka
  7 siblings, 1 reply; 11+ messages in thread
From: Dave Airlie @ 2022-08-08  2:34 UTC (permalink / raw)
  To: Jacek Lawrynowicz, Koenig, Christian, Maarten Lankhorst
  Cc: Dave Airlie, andrzej.kacprowski, dri-devel, stanislaw.gruszka

On Thu, 28 Jul 2022 at 23:17, Jacek Lawrynowicz
<jacek.lawrynowicz@linux.intel.com> wrote:
>
> Hi,
>
> This patchset contains a new Linux* Kernel Driver for Intel® VPUs.
>
> VPU stands for Versatile Processing Unit and it is an AI inference accelerator
> integrated with Intel non-server CPUs starting from 14th generation.
> VPU enables efficient execution of Deep Learning applications
> like object detection, classification etc.
>
> Driver is part of gpu/drm subsystem because VPU is similar in operation to
> an integrated GPU. Reusing drm driver init, ioctl handling, gem and prime
> helpers and drm_mm allows to minimize code duplication in the kernel.
>
> The whole driver is licensed under GPL-2.0-only except for two headers imported
> from the firmware that are MIT licensed.
>
> User mode driver stack consists of Level Zero API driver and OpenVINO plugin.
> Both should be open-sourced by the end of Q3.
> The firmware for the VPU will be distributed as a closed source binary.


Thanks for the submission, this looks pretty good and well laid out.

Just a few higher-level things: I think I'd like this named intel-vpu
or ivpu or something, since VPU is a pretty generic namespace to claim.

I think adding some sort of TODO file listing what is missing and what
future work needs to happen would be useful for judging when merging this
might be a good idea.

I'm kinda thinking with a rename we could merge this sooner into a
staging-lite model.

I think I'd like Christian/Maarten to maybe review the fencing/uapi,
to make sure nothing too much is wrong there. The submit/waitbo model
is getting a bit old, and using syncobjs might be useful to make it
more modern. Is this device meant to be used by multiple users at
once? Maybe we'd want scheduler integration for it as well (which I
think I saw mentioned somewhere in passing).
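
To make the syncobj idea concrete, a rough userspace sketch is below; the
drmSyncobj*() helpers are the real libdrm API, but the submit ioctl, its
struct layout and the out_sync field are only assumptions for illustration,
not something the vpu_drm.h uAPI in this series actually exposes:

    #include <stdint.h>
    #include <xf86drm.h>

    /* Sketch: DRM_IOCTL_VPU_SUBMIT, struct drm_vpu_submit and its fields
     * are hypothetical; only the drmSyncobj*() calls are real libdrm API.
     */
    static int submit_and_wait(int fd, uint32_t *bo_handles, uint32_t bo_count)
    {
        uint32_t syncobj;
        int ret;

        ret = drmSyncobjCreate(fd, 0, &syncobj);
        if (ret)
            return ret;

        struct drm_vpu_submit args = {
            .buffers_ptr = (uintptr_t)bo_handles,
            .buffer_count = bo_count,
            .out_sync = syncobj,    /* hypothetical out-fence field */
        };
        ret = drmIoctl(fd, DRM_IOCTL_VPU_SUBMIT, &args);
        if (ret)
            goto out;

        /* Block on the job fence instead of polling a wait-BO. */
        ret = drmSyncobjWait(fd, &syncobj, 1, INT64_MAX,
                             DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL, NULL);
    out:
        drmSyncobjDestroy(fd, syncobj);
        return ret;
    }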

Dave.


* Re: [PATCH v1 0/7] New DRM driver for Intel VPU
  2022-08-08  2:34 ` [PATCH v1 0/7] New DRM driver for Intel VPU Dave Airlie
@ 2022-08-08 14:49   ` Stanislaw Gruszka
  2022-08-08 18:53     ` Sam Ravnborg
  0 siblings, 1 reply; 11+ messages in thread
From: Stanislaw Gruszka @ 2022-08-08 14:49 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Dave Airlie, dri-devel, Jacek Lawrynowicz, andrzej.kacprowski,
	Koenig, Christian

On Mon, Aug 08, 2022 at 12:34:59PM +1000, Dave Airlie wrote:
> On Thu, 28 Jul 2022 at 23:17, Jacek Lawrynowicz
> <jacek.lawrynowicz@linux.intel.com> wrote:
> >
> > Hi,
> >
> > This patchset contains a new Linux* Kernel Driver for Intel® VPUs.
> >
> > VPU stands for Versatile Processing Unit and it is an AI inference accelerator
> > integrated with Intel non-server CPUs starting from 14th generation.
> > VPU enables efficient execution of Deep Learning applications
> > like object detection, classification etc.
> >
> > Driver is part of gpu/drm subsystem because VPU is similar in operation to
> > an integrated GPU. Reusing drm driver init, ioctl handling, gem and prime
> > helpers and drm_mm allows to minimize code duplication in the kernel.
> >
> > The whole driver is licensed under GPL-2.0-only except for two headers imported
> > from the firmware that are MIT licensed.
> >
> > User mode driver stack consists of Level Zero API driver and OpenVINO plugin.
> > Both should be open-sourced by the end of Q3.
> > The firmware for the VPU will be distributed as a closed source binary.
> 
> 
> Thanks for the submission, this looks pretty good and well laid out.
>
> Just a few higher-level things: I think I'd like this named intel-vpu
> or ivpu or something, since VPU is a pretty generic namespace to claim.

Thanks for the comments, we will consider renaming. 

> I think adding some sort of TODO file listing what is missing and what
> future work needs to happen would be useful for judging when merging this
> might be a good idea.
> 
> I'm kinda thinking with a rename we could merge this sooner into a
> staging-lite model.

I'm not sure what we can add to a TODO file. From the driver perspective
I think it's pretty much ready for merging (except the renaming); it's just
the other components, F/W and user-space, that are not yet released.

> I think I'd like Christian/Maarten to maybe review the fencing/uapi,
> to make sure nothing too much is wrong there. The submit/waitbo model
> is getting a bit old, and using syncobjs might be useful to make it
> more modern. Is this device meant to be used by multiple users at
> once? Maybe we'd want scheduler integration for it as well (which I
> think I saw mentioned somewhere in passing).

The current approach with submit/wait_bo is simplistic but sufficient
for the basic use case. In the future we are planning to add support for
HW-based scheduling (we are also looking at a SW scheduler) and will
likely revisit the submit/sync APIs at that time.
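
If we end up on the SW scheduler route, that would presumably mean plugging
into the common drm_gpu_scheduler; a minimal sketch of the callback set
involved is below (the vpu_sched_* names are made up, only struct
drm_sched_backend_ops and its hooks are the real interface):

    #include <drm/gpu_scheduler.h>

    /* Sketch only: vpu_sched_* are hypothetical, the ops and their
     * signatures come from include/drm/gpu_scheduler.h.
     */
    static struct dma_fence *vpu_sched_run_job(struct drm_sched_job *sched_job)
    {
        /* Hand the command buffer to the VPU here and return its HW fence. */
        return NULL;
    }

    static enum drm_gpu_sched_stat vpu_sched_timedout_job(struct drm_sched_job *sched_job)
    {
        /* TDR path, roughly where vpu_pm_schedule_recovery() would hook in. */
        return DRM_GPU_SCHED_STAT_NOMINAL;
    }

    static void vpu_sched_free_job(struct drm_sched_job *sched_job)
    {
    }

    static const struct drm_sched_backend_ops vpu_sched_ops = {
        .run_job      = vpu_sched_run_job,
        .timedout_job = vpu_sched_timedout_job,
        .free_job     = vpu_sched_free_job,
    };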

Regards
Stanislaw


* Re: [PATCH v1 0/7] New DRM driver for Intel VPU
  2022-08-08 14:49   ` Stanislaw Gruszka
@ 2022-08-08 18:53     ` Sam Ravnborg
  0 siblings, 0 replies; 11+ messages in thread
From: Sam Ravnborg @ 2022-08-08 18:53 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Dave Airlie, dri-devel, Jacek Lawrynowicz, andrzej.kacprowski,
	Koenig, Christian

Hi Stanislaw,

> I'm not sure what we can add to a TODO file. From the driver perspective
> I think it's pretty much ready for merging (except the renaming); it's just
> the other components, F/W and user-space, that are not yet released.
> 
> > I think I'd like Christian/Maarten to maybe review the fencing/uapi,
> > to make sure nothing too much is wrong there. The submit/waitbo model
> > is getting a bit old, and using syncobjs might be useful to make it
> > more modern. Is this device meant to be used by multiple users at
> > once? Maybe we'd want scheduler integration for it as well (which I
> > think I saw mentioned somewhere in passing).
> 
> In the future we are planning to add support for
> HW-based scheduling (we are also looking at a SW scheduler) and will
> likely revisit the submit/sync APIs at that time.

This is already two entries in the TODO file.
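
Collecting just the points raised in this thread would already give
something like a drivers/gpu/drm/vpu/TODO along these lines (wording is
only a suggestion):

  - Rename the driver and uAPI namespace to something less generic
    (intel-vpu / ivpu were suggested)
  - Move the submit/wait_bo uAPI towards syncobjs / more modern fencing
  - Evaluate DRM scheduler integration (HW-based scheduling, possibly a
    SW scheduler)
  - Release the remaining pieces: firmware and the user-space stack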

	Sam


Thread overview: 11+ messages
2022-07-28 13:17 [PATCH v1 0/7] New DRM driver for Intel VPU Jacek Lawrynowicz
2022-07-28 13:17 ` [PATCH v1 1/7] drm/vpu: Introduce a new " Jacek Lawrynowicz
2022-07-28 13:17 ` [PATCH v1 2/7] drm/vpu: Add Intel VPU MMU support Jacek Lawrynowicz
2022-07-28 13:17 ` [PATCH v1 3/7] drm/vpu: Add GEM buffer object management Jacek Lawrynowicz
2022-07-28 13:17 ` [PATCH v1 4/7] drm/vpu: Add IPC driver and JSM messages Jacek Lawrynowicz
2022-07-28 13:17 ` [PATCH v1 5/7] drm/vpu: Implement firmware parsing and booting Jacek Lawrynowicz
2022-07-28 13:17 ` [PATCH v1 6/7] drm/vpu: Add command buffer submission logic Jacek Lawrynowicz
2022-07-28 13:17 ` [PATCH v1 7/7] drm/vpu: Add PM support Jacek Lawrynowicz
2022-08-08  2:34 ` [PATCH v1 0/7] New DRM driver for Intel VPU Dave Airlie
2022-08-08 14:49   ` Stanislaw Gruszka
2022-08-08 18:53     ` Sam Ravnborg
